
Create mechanism to programmatically navigate rendering #158

Open
jaredjj3 opened this issue Oct 18, 2023 · 14 comments

@jaredjj3
Collaborator

Requirements

From #118, #122, and myself, the requirements are:

  • The cursor can accept position instructions. Callers should be able to specify time, position (x, y), measure index/number with or without repeat information, next(n), previous(n), and the cursor should update its position accordingly.
  • vexml can generate default position instructions coupled to time.
  • The cursor provides information about the active elements, including provenance information about the MusicXML that is associated with the active elements.
  • The cursor emits events: activeelementchange, positioninstructionreceived, cursorexhausted, etc.
  • The cursor's visibility is toggleable.
  • A rendering can accommodate multiple cursors. Cursors are agnostic of one another, so it is the caller's responsibility to coordinate them.
  • Cursors can be styled.
  • Cursors can be interpolated between voice entries or they can snap to the closest voice entry given the last position instruction.
  • Cursors can navigate through entire voice entries or they can navigate individual elements within a voice. The former is typically more useful for reading music, while the latter is more useful for editing MusicXML.

Hints

I would like something that is derived from the *Rendering object tree that accomplishes all of these things. Right now, the *Rendering object tree exposes the underlying vexflow objects, but I would prefer that callers not have to interact with those directly.

I recommend creating a new library within this project called cursor, which will house all the logic needed to accomplish the requirements. Consider converting the *Rendering object tree to something more useful for the cursor as a first step, which will also have the benefit of decoupling cursor from rendering. That way, if rendering has to change significantly, the cursor library should only need to be updated in one place.
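The conversion step suggested above might look something like this. The snapshot shape is an assumption; the point is that the cursor library only ever sees plain, time-sorted data, never the *Rendering tree itself (assumes a non-empty timeline):

```typescript
// Hypothetical cursor-friendly snapshot derived from the *Rendering tree.
interface VoiceEntrySnapshot {
  timeMs: number;
  x: number;
  y: number;
  sourceXmlId: string; // provenance back to the originating MusicXML element
}

// Sort once so the cursor layer never depends on rendering order.
function toTimeline(entries: VoiceEntrySnapshot[]): VoiceEntrySnapshot[] {
  return [...entries].sort((a, b) => a.timeMs - b.timeMs);
}

// Snap a time to the closest entry (binary search over the sorted timeline).
function snap(timeline: VoiceEntrySnapshot[], timeMs: number): VoiceEntrySnapshot {
  let lo = 0;
  let hi = timeline.length - 1;
  while (lo < hi) {
    const mid = Math.floor((lo + hi) / 2);
    if (timeline[mid].timeMs < timeMs) lo = mid + 1;
    else hi = mid;
  }
  // lo is the first entry at or after timeMs; compare with its predecessor.
  const prev = timeline[Math.max(0, lo - 1)];
  const next = timeline[lo];
  return timeMs - prev.timeMs <= next.timeMs - timeMs ? prev : next;
}
```

If rendering changes significantly, only the code producing `VoiceEntrySnapshot[]` needs to be updated.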

Inspiration

@jaredjj3
Collaborator Author

@infojunkie, I think it'll be O(months) before I really nail down rendering, so I invite you to start looking into this. What do you think is a good strategy for dealing with changes to the *Rendering object tree (ScoreRendering as the root) that would break this work?

@infojunkie
Contributor

infojunkie commented Oct 19, 2023

Thanks for inviting me! This is a big subject, and it can get overwhelming quickly.

From my perspective, the less the sheet renderer interferes with cursor rendering, the better the client app is able to customize the cursor's behaviour. Other renderers attempt to control too many aspects of the cursor rendering which inevitably leads to hacks and wasted time circumventing unwanted behaviour instead of being productive.

From the point of view of a reader application, a minimal interface between the sheet and its client would look like:

  • Easy access to rendered sheet objects (DOM / SVG elements)
  • Click event subscription for all rendered sheet objects
  • Bidirectional mapping between timestamps and timed sheet objects
  • Chronological iterator for timed sheet objects (next, previous)
  • Sheet objects provide references to their source MusicXML element
  • Ability to override MusicXML formatting instructions (e.g. defaults and print elements)

There's likely more, but in general starting small and evolving the interface will probably keep the code manageable.
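The bidirectional mapping and chronological iterator from the list above could be sketched like this (all names illustrative, not a proposed vexml API):

```typescript
// A timed sheet object: some rendered element with a start time.
interface TimedObject {
  id: string;
  startMs: number;
}

class TimedIndex {
  private sorted: TimedObject[];
  private cursor = -1; // index of the current object; -1 = before the start

  constructor(objects: TimedObject[]) {
    this.sorted = [...objects].sort((a, b) => a.startMs - b.startMs);
  }

  // timestamp -> object: latest object whose start is at or before timeMs
  at(timeMs: number): TimedObject | undefined {
    let result: TimedObject | undefined;
    for (const o of this.sorted) {
      if (o.startMs <= timeMs) result = o;
      else break;
    }
    return result;
  }

  // object -> timestamp
  timeOf(id: string): number | undefined {
    return this.sorted.find((o) => o.id === id)?.startMs;
  }

  next(): TimedObject | undefined {
    if (this.cursor + 1 >= this.sorted.length) return undefined;
    return this.sorted[++this.cursor];
  }

  previous(): TimedObject | undefined {
    if (this.cursor <= 0) return undefined;
    return this.sorted[--this.cursor];
  }
}
```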

@jaredjj3
Collaborator Author

the less the sheet renderer interferes with cursor rendering, the better the client app is able to customize the cursor's behaviour

I agree. I think the renderer should provide everything the cursor needs to work without knowing about a cursor implementation specifically.

Click event subscription for all rendered sheet objects

This should be addressed by #159, but you're welcome to start it as part of this issue, too. An edge case that comes to mind is that the cursor-clickable area can extend outside the graphics' bounding boxes. For example, a user may want to click above a measure in "no-man's land" to move the cursor. If you only have handlers on the graphics, such a click may end up doing nothing.
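One way to cover the "no-man's land" case: rather than hit-testing only glyph bounding boxes, partition the sheet's full width into measure columns so a click anywhere in a measure's horizontal span resolves to that measure. The shapes here are illustrative:

```typescript
// Hypothetical measure column: the full-height horizontal span of a measure.
interface MeasureBox {
  index: number;
  x: number; // left edge in sheet coordinates
  width: number;
}

// Resolve a click to a measure even if it lands above or below the staff.
function measureAt(boxes: MeasureBox[], clickX: number): number | undefined {
  for (const box of boxes) {
    if (clickX >= box.x && clickX < box.x + box.width) return box.index;
  }
  return undefined;
}
```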

Ability to override MusicXML formatting instructions

This will be contained in rendering/config.ts. Callers can specify the options they want when they render the score. Right now, it's severely lacking documentation and options. Feel free to propose some, and I can wire them in as I work on the renderer.
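For illustration, proposed options could follow a typed defaults-merge pattern. Neither of these option names is confirmed in rendering/config.ts; they just show the shape:

```typescript
// Hypothetical rendering config; option names are assumptions, not vexml's.
interface RenderConfig {
  respectPrintElements: boolean; // honor MusicXML print/defaults formatting
  defaultStaveWidth: number;     // fallback stave width in pixels
}

const DEFAULT_CONFIG: RenderConfig = {
  respectPrintElements: true,
  defaultStaveWidth: 300,
};

// Callers pass only the options they care about; the rest fall back to defaults.
function resolveConfig(overrides: Partial<RenderConfig> = {}): RenderConfig {
  return { ...DEFAULT_CONFIG, ...overrides };
}
```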


I look forward to what you come up with!

@infojunkie
Contributor

Thanks, sounds good. To set expectations, I am not planning to start integrating vexml any time soon. I will probably start by testing how your module handles my typical scores, which is a prerequisite to any further work.

@alindsay55661

Hi @jaredjj3, I am working on a new project using vexflow 5 with some interactivity and can test the vexml features described in this issue. I am happy to take incomplete / early iterations of this work and will test and provide feedback either way. Really looking forward to updates here!

@jaredjj3
Collaborator Author

Hey @alindsay55661, thanks for the heads up — your help would be greatly appreciated. Would you tell me a little about the project? It could inform how we prioritize the features.

@alindsay55661

@jaredjj3 Yes, no problem. The project allows tutors/teachers to build lessons with interactive elements.

  • One element is exactly what you have on stringsync (media sync'd to vex output) but with stave notation instead of tab.
  • Also the ability to display/highlight incoming midi notes in stepwise fashion (only highlighting when the student hits the next note(s) in the score, must hit all notes in a chord before the cursor progresses) or against the clock (showing hit/missed notes, for example as an overlay / additional voice with different colors a layer above the score).
  • And generally, being able to select a note, chord, bar, or range for playback in harmony exercises.
  • Finally, and perhaps beyond the scope of vexml, direct input/editing rather than string/file-based input. This is less critical than being able to interact with an existing vexml render, since most teachers have ways to export MusicXML from other software. That said, if notes can already be selected (to hear the tone, for example), it would be ideal to easily move them up and down, or to add notes of the same duration at the same point in time [same chord] for harmony exercises. For example, the teacher provides one chord and asks the student to modify it to match a specific harmonic structure; this is much lighter than creating a score from scratch, but still very useful and perhaps a first step toward general score editing.
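The stepwise-highlighting rule in the second bullet ("must hit all notes in a chord before the cursor progresses") reduces to a small state machine. A sketch, with MIDI note numbers and all names being illustrative:

```typescript
// Tracks expected chords (arrays of MIDI note numbers) and advances the
// cursor position only once every note of the current chord has been hit.
class StepwiseTracker {
  private position = 0;
  private hit = new Set<number>();

  constructor(private chords: number[][]) {}

  // Feed incoming MIDI notes; returns the current cursor position.
  onMidiNote(note: number): number {
    if (this.position >= this.chords.length) return this.position;
    if (this.chords[this.position].includes(note)) this.hit.add(note);
    const expected = this.chords[this.position];
    if (expected.every((n) => this.hit.has(n))) {
      this.position++;
      this.hit.clear();
    }
    return this.position;
  }
}
```

Wrong notes are simply ignored here; an "against the clock" mode would instead record them as misses for the overlay voice described above.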

@jaredjj3
Collaborator Author

Thanks for taking the time to write this out — you have some interesting use cases! I'll be mindful when crafting the API to be flexible enough for what you want to use it for.

Finally, perhaps beyond the scope of vexml, is direct input/editing rather than string/file-based input.

Have you checked out https://github.com/AaronDavidNewman/Smoosic by @AaronDavidNewman?

@AaronDavidNewman

I have ;) In case you're interested in using it, either in fact or just as a model/guide...

I am close to being able to formally express the Smoosic grammar as a JSON schema. I have the code documentation in place, but I need to play around with a typescript extension to get it to go pretty.

Basically, for each musical object you have (for example) SmoMeasure, SmoMeasureParams, SmoMeasureSer. SmoMeasure is the class. SmoMeasureParams is the arguments for the constructor. SmoMeasure.defaults is the default SmoMeasureParams. SmoMeasureSer is the serialized form. Each musical object in smo/data has a serialize and deserialize method, with the score being the top-level.
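For readers unfamiliar with the pattern, the class / params / serialized-form triple described above looks roughly like this. This is modeled loosely on the naming, not Smoosic's actual code, and the fields are invented for illustration:

```typescript
// Constructor arguments for the class.
interface SmoMeasureParams {
  timeSignature: string;
  keySignature: string;
}

// Serialized form: params plus a tag identifying the class on deserialization.
interface SmoMeasureSer extends SmoMeasureParams {
  ctor: 'SmoMeasure';
}

class SmoMeasure {
  // The default params, available statically.
  static get defaults(): SmoMeasureParams {
    return { timeSignature: '4/4', keySignature: 'C' };
  }

  constructor(public params: SmoMeasureParams = SmoMeasure.defaults) {}

  serialize(): SmoMeasureSer {
    return { ctor: 'SmoMeasure', ...this.params };
  }

  static deserialize(ser: SmoMeasureSer): SmoMeasure {
    return new SmoMeasure({
      timeSignature: ser.timeSignature,
      keySignature: ser.keySignature,
    });
  }
}
```

The `ctor` tag is what lets a generic deserializer dispatch each JSON node back to the right class when rebuilding the score tree.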

There are also classes in the base objects that handle selections, copy, paste, and undo, and some common theory operations like transpose, in a rendering-agnostic format.

There is a SuiTracker and SuiMapper object in the rendering part that does what you're discussing re: a cursor. But the logic is a bit messy for specification/export right now. Basically, there are real-time considerations (esp. scrolling), and also many musical operations are affected by order. Like, if you change the duration of a note and then the pitch, it's different than doing it in the opposite order, but you also want to avoid rendering the music twice.

@alindsay55661

@jaredjj3 Thank you for your consideration! And thanks for pointing me to Smoosic; it looks like a very handy library. There is a big need for a JS-level format that facilitates manipulation and can be piped back to tools like vexflow for a UI.

@AaronDavidNewman great work on this! I will dive in deeper for sure, but looking at the README it appears you've separated things nicely and made it possible to programmatically get at the various methods (i.e. headless operation). I was trying to determine vexflow 5 support, which it seems you've gone back and forth on... where do things stand now, and what are your plans moving forward in this regard?

The paradigm of SMO generators/mutators -> SMO format -> SMO renderers (vexflow currently) is very powerful. Especially when there is a configurable interaction model (keybindings, etc.) for calling the mutators directly from the UI. This is really well organized and seems I could use it for a lightweight alternative to requiring desktop app -> MusicXML -> web app.

I haven't dug into the code yet but...

There are also classes in the base objects that handle selections, copy, paste, and undo, and some common theory operations like transpose, in a rendering-agnostic format.

This would be my preference / expectation, that I can interact with a SMO format completely headlessly, including things like building chords, adding voices, bars, etc. That affords the most opportunity for creating custom UIs without losing or needing to reimplement any authoring logic. For example, one requirement I have is to render both notation and MIDI piano roll against the same horizontal time grid. This is easier to do in a single renderer than forcing vexflow to conform to an external MIDI piano roll width or vice versa.

However...

There is a SuiTracker and SuiMapper object in the rendering part that does what you're discussing re: a cursor.

It sounds like cursor placement is currently tied to the renderer itself, which may mean completely headless operation isn't possible. My vote would be to move cursor navigation out of the render layer and in with the generation/mutation layer. Since you can already make selections there, a next logical step would be to navigate as well. The render layer would then call cursor methods and optionally listen for cursor events, either individually or in batch, to handle the order-of-operations logic. It would be really nice, for example, to create a "jump to bar" UI that selects the first beat of a provided bar number. If a renderer is attached and listening, it will display these updates; if not, who cares, you can still continue to edit the SMO headlessly. ("Change the Bb in measure 12 beat 3 to a C..." - imagine asking ChatGPT to do that - no renderer necessary 😉)
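The shape of that split might be as simple as this: selection state lives with the score model and emits events, and a renderer is just one optional subscriber. All names here are hypothetical:

```typescript
// A renderer (or anything else) can subscribe to selection changes.
type SelectionListener = (measure: number, beat: number) => void;

// Cursor navigation in the mutation layer, with no rendering dependency.
class HeadlessCursor {
  private listeners: SelectionListener[] = [];
  private measure = 0;
  private beat = 0;

  subscribe(listener: SelectionListener): void {
    this.listeners.push(listener);
  }

  // "Jump to bar": select the first beat of the given measure, headlessly.
  jumpToMeasure(measure: number): void {
    this.measure = measure;
    this.beat = 0;
    for (const l of this.listeners) l(this.measure, this.beat);
  }
}
```

With no subscribers, `jumpToMeasure` still updates the model; attach a renderer later and it simply starts receiving the same events.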

Headless operation also opens up opportunity for procedural or partially procedural creation. It would be very interesting to provide specific musical frameworks and constraints to a program that generates outputs by calling SMO methods. Those outputs could be displayed (and edited) in a renderer but further editing post render would be totally optional. You could just play the audio without even displaying the notation. Another interesting case would be creating non-graphical renderers such as a text renderer that outputs ABC notation. Maybe this isn't a renderer, maybe it's just a format converter, but I am seeing the value having SMO format + SMO generators/mutators as completely independent from rendering in the same way that MIDI and MusicXML are independent from rendering. It would (will?) be really cool to see others create unique authoring tools against a native JS/TS format (SMO) rather than just doing conversions all the time.

Sorry for the brain dump, I went off topic but both of these are really important projects! 😄

@AaronDavidNewman

I am a minor-to-moderate contributor to Vexflow. Once Vexflow 5 is fully backwards-compatible with version 4 functionally, I'll probably switch Smoosic to Vexflow 5 and stay there. In the meantime, I create VF-5-compatible versions for testing backwards compatibility, as time allows.

I think we are using 'Cursor' to mean different things. The SmoSelection class allows logical ('headless') navigation and manipulation, without any dependencies. If you look at some of the sample html files, you can see examples of this. You could use the SMO classes with your own rendering logic, if you wanted. I plan to split SMO into its own repository.

SMO includes both MIDI and MusicXML input and output. These transformations are lossy, but I think that is inevitable given the limitations of these formats. I hoped to address some of the limitations of these other formats with SMO. And, of course, my implementation of MusicXML is not complete.

@alindsay55661

Once Vexflow 5 is fully backwards-compatible with version 4 functionally, I'll probably switch Smoosic to Vexflow 5 and stay there.

Great, thanks for letting me know.

I think we are using 'Cursor' to mean different things. The SmoSelection class allows logical ('headless') navigation and manipulation, without any dependencies. If you look at some of the sample html files, you can see examples of this. You could use the SMO classes with your own rendering logic, if you wanted. I plan to split SMO into its own repository.

Very cool! Really looking forward to jumping into this more deeply. I see so much utility here.

SMO includes both MIDI and musicXML input and output. These transformations are lossy, but I think that is inevitable giving the limitations of these formats. I hoped to address some of the limitations of these other formats with SMO.

The formats themselves don't allow for lossless conversion so this is expected. Exciting to hear that SMO is intended as a superset. Some proprietary software solutions have/do combine formats, for example to merge and retain human played midi timing/velocity/expression/etc. with quantized and properly decorated notation. An open format that captures all this information would be a wonderful authoring baseline. So glad to see your work here for the web!

@rvilarl
Collaborator

rvilarl commented Jan 2, 2024

@jaredjj3 let me know when I should consider starting the port to vexml in https://github.com/rvilarl/pianoplay

@jaredjj3
Collaborator Author

jaredjj3 commented Jan 2, 2024

@rvilarl will do. Keep doing what you're doing: file GitHub issues for bugs using scores that you would render in pianoplay. It will take a very long time (>1 year) to get >80% MusicXML coverage, but chances are we don't need to for most scores. I imagine most scores render mostly fine with <20% MusicXML coverage, so my immediate goal is to cross that threshold before focusing on features like this.
