[MEI-L] Layout module (WAS: clef as milestone element)

Andrew Hankinson, Mr andrew.hankinson at mail.mcgill.ca
Wed Jun 1 18:03:41 CEST 2011


Hi all,

I've changed the subject of this discussion, since we seem to be getting more into the layout module.


It's a terminology thing. I like thinking about musicians in rehearsal to understand what nomenclature to use. I often hear things like "On the second page, first system, second measure..."

According to my English music notation books, a system is the
"container" of staffs (in the layout sense, my "systemstaffs").  Like
Ted Ross, The Art of Music Engraving and Processing, p. 151: "A
'systemic' barline [...] is used for keyboard instruments with
two-stave systems [...]."  Or George Heussenstamm, The Norton Manual
of Music Notation, p. 87: "When two or more staves are connected in
this way they form a system."  In my understanding as a non-native
English speaker, this therefore makes no sense:


<system clef.line="3" clef.shape="C" />


because every staff in the system might require a different clef.

You are, of course, right. This was a moment of mental weakness on my part, and I hadn't really thought through that suggestion.

The "container" is correct, but I think that's really a "post-hoc" analysis of what a system is; that is, if you just have printed music to look at, everything is going to be in a system, because that's the way printed music works. However, for more abstract representations, like most music encoding languages use, there is no need for staves to be directly related to a system. This is usually taken care of by the rendering function of the notation editing software, so that it can dynamically re-arrange the notes into optimal spacings.



Then I'm misunderstanding something.  I thought the layout element
would be for *generating* a rendition of the music notation, not to
describe some facsimile?

No,

This clarifies a lot.

The idea is that in most cases (should the MEI be imported into a notation editor) you would want to ignore this information and allow the notation editor to do the layout itself, but in some cases (Optical Music Recognition, for example) you would want to be able to define and correlate the physical characteristics of the printed image with musical elements -- the locations of systems, notes, rests, etc.


I'm pretty new to MEI and don't know what MEI's envisaged spectrum of
uses is, but for decent digital editions, I don't think it's enough to
just run the music through a notation editor or a formatter.  It of
course depends on the complexity of the music, but in general, for a
certain level of quality it's necessary that some human editor revises
the notation program's decisions.  And if there are passages that
are musically erroneous or incomplete but are to be presented in
that unedited form, then the notation program might produce garbage or
refuse to process the data.

Again, you're right. For decent digital editions you would want human oversight when producing printed music, since purely automatic layout can look like garbage. However, we're not producing printed music. We're using MEI to index collections of printed music using OMR: we scan the page image, retrieve the positions of all the musical elements along with their "semantic" musical meaning, and encode all of that in MEI. We then use this encoding for searching and indexing the content of the book, and tie every element on the page back to its physical location.
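
To give a concrete sketch of what that means (purely illustrative -- the coordinate values are made up, I'm assuming a recognized note can carry a @facs pointer just like the layout elements can, and I'm writing the bounding box as a <zone> with corner coordinates, though the exact element and attribute names may differ):

<!-- each recognized symbol keeps its musical meaning plus a pointer
     back to the region of the page image it was found in -->
<note pname="f" oct="5" dur="8" facs="#zone.n42"/>

<!-- the referenced zone records the pixel bounding box on the scanned page -->
<zone xml:id="zone.n42" ulx="534" uly="210" lrx="561" lry="238"/>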

We never render the notation directly, but given this representation we can do other cool things, like automatic analysis of melody, harmony, or rhythm. That's on top of making large amounts of printed music searchable. If you're just doing editions of a single composer, you have a certain luxury to take your time and get it right; if you're digitizing the printed music collection of the British Library, or even of any decent-sized university library, it's questionable whether one person could even get through all the sources in a lifetime to correct the mistakes of automatic recognition, let alone optimally edit them for layout.

We've had three people correcting the output for one book for over three months now, just fixing the shape-recognition errors, and we're not quite done yet. Granted, this is a research project, so we're not optimizing for speed and throughput, but it's definitely not a task that can be done quickly for a source of any appreciable size.



Another problem I see is that when the MEI data is corrected or
otherwise edited, you have to do the entire process of formatting
and manual cleanup again, or somehow synchronize with the formatted
data.  And I'm not only thinking about "dead" printouts or vector
graphics renditions.  What about interactive ones that might be able
to display additional information for certain elements, highlight and
(un-)hide elements or integrate with other sources and a special
interface? Can this be achieved without storing the "rendering hints"
and the semantic data in a form that makes the relationship between
graphics and actual data clear?

MEI currently does have functionality for this. For most musical elements (notes, rests, clefs, etc.), there are the @ho and @vo attributes, which specify a horizontal and vertical offset, and the @x and @y attributes for encoding an absolute placement. For any notation renderer, then, it is entirely possible to store exact locations for these elements.
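
For example (a sketch only -- the offset and coordinate values are made up, and the pitch/duration attributes are just there to make the elements complete):

<!-- nudge a note from its default position with @ho/@vo -->
<note pname="c" oct="4" dur="4" ho="1.5" vo="-0.5"/>

<!-- pin a rest to an absolute position with @x/@y -->
<rest dur="4" x="212" y="887"/>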

However, as anyone who has had to deal with moving between Sibelius and Finale knows, different notation programs render the same file differently. One application could choose to put a system on a different page, or cram more measures into a line. It all depends on their layout engines, their fonts, and what their designers considered "optimal" layout. So storing objects in reference to absolute points is problematic: coordinates that are correct for one renderer may not be for another.

So, for the types of displays you are talking about, you can either a) design your score and store the information in reference to a single renderer, knowing that it won't be displayed using anything else, b) design your score with "hints" about the positioning of certain elements, knowing that the rest will be left to the renderer to figure out, or c) design it with no layout information at all and leave everything up to the rendering and layout capabilities of whatever software a user chooses to open it in. This is the same regardless of whether it is destined for a "dead" rendering or a "live" interactive one.

All three are currently very do-able in MEI *without* the layout module. You can choose to use the @x, @y, or @ho, @vo attributes, and then <sb /> for encoding explicit system breaks, @bezier, @bulge, @curvedir on slurs and ties, and so on.
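
As a sketch of the "hints" approach (option b above), with made-up values, and using the standard @startid/@endid attributes to attach the slur to its notes:

<measure n="12">
  <staff n="1">
    <layer>
      <note xml:id="n1" pname="g" oct="4" dur="2"/>
      <note xml:id="n2" pname="e" oct="4" dur="2"/>
    </layer>
  </staff>
  <!-- hint the curve direction and shape; everything else is left to the renderer -->
  <slur startid="#n1" endid="#n2" curvedir="above" bulge="2"/>
</measure>
<sb/>  <!-- explicit system break after this measure -->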

The layout module is designed to provide very minimal layout information in reference to the layout of an existing page. Early on in its design, we realized that only a few new elements needed to be added to make it work without duplicating existing functionality in MEI. These were the <page> and <system> elements; the <layout> element was added as a container for these two.

Currently, the <system> element is used as a linker between a system break and a <facs> element. When you encounter an <sb />, it links to a <system> element via the @systemref attribute; <system> then has a @facs attribute that points to a <facs> element defining a bounding box around a zone on a given page image.
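
Very roughly, the chain looks like this (a sketch only -- the bounding-box element and its coordinate attributes follow the usual facsimile conventions, and the exact names may differ in the incubator schema):

<layout>
  <page n="3">
    <system xml:id="sys.3.1" facs="#zone.3.1"/>
  </page>
</layout>

<!-- in the music body, the system break refers back to the layout tree -->
<sb systemref="#sys.3.1"/>

<!-- the referenced zone gives the pixel bounding box on the page image -->
<zone xml:id="zone.3.1" ulx="120" uly="240" lrx="1980" lry="610"/>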

Is this ideal, or complete? Probably not. But I guess that's the point of Incubator projects -- put something out there and discuss and critique it openly.



I thought this was what the layout tree would be about.  So...


Are we talking past each other?

I think maybe we are.

We definitely are.

And that's fine. :)


Thomas
