[MEI-L] MEI Customisation

Laurent Pugin lxpugin at gmail.com
Wed May 25 22:40:17 CEST 2016

Hi James,

Let me take this opportunity to clarify the role of customizations in MEI,
which, like many other things in MEI, are very flexible. One possible use
is to restrict MEI to a subset. This is the approach taken by Tido's
customization, which acts as an application profile. It is also the goal of
MEI Go! (or whatever it ends up being called), where we would like to define
the most essential components we need for facilitating data exchange
between projects.

Now, the good news for you is that customization can also be used for
uncommon notation features, which is what it seems you would like to do.
This can also be used for correcting what you think is a mistake in MEI.
The other good news is that you do not need to convince everybody on the
list in order to do so. You can do it on your own. Eventually, if your
proposal seems to be a good idea, there is no reason for not having it
integrated into MEI itself.


On Wed, May 25, 2016 at 4:51 PM, Andrew Hankinson <
andrew.hankinson at mail.mcgill.ca> wrote:

> I'll try to respond inline below.
> Hi Andrew, Zoltan,
> Zoltan: thanks for your answers to this thread on the W3C CG list. This
> reply to Andrew (below) continues the thread here, so it's also a reply to
> what you said.
> AH: For rhythm and duration we differentiate between "@dur" (written
> duration) and "@dur.ges" (performed duration). Both of these are available
> on note and rest objects.
> Yes, that's one of the problems -- see below. :-)
> JI: I want to make an MEI customisation that uses most of the symbols that
> are used by CWMN, but without assuming tempo. If there is no tempo, then
> neither tuplets nor grace-notes make sense. It should also be possible for
> the description of *more than one* temporal instantiation to be stored
> inside the XML elements that represent the score's graphics.
> AH: I'm not sure I understand this. Tempo is generally expressed by a
> playback mechanism. It can be hinted at in the encoding, but most systems
> have controls for overriding it.
> The @dur attribute describes both the shape of the symbol and its meaning
> (the number of beats it represents in a <measure>). Beats mean tempo, and
> that's a problem if I'm trying to create a customisation that does without
> it.
> I want to separate the meaning of each duration symbol from its visual
> appearance by putting the temporal information in an enclosed element. Let's
> call the enclosed element <time>.  Actually, I want (potentially) to have a
> *list* of <time> elements inside each duration symbol, each position in
> such lists containing the temporal information for a different performance
> of the piece.
> If you'll permit me a frank comment, it sounds like you have a pretty good
> idea of your own encoding format, which is neither MEI nor MNX.
> Depending on how far down the rabbit hole you want to go, it is completely
> possible to separate visual identity from durational identity in MEI. We
> use '<note>' and '<rest>' effectively as `time` elements -- that is,
> elements that record the passing of time. One records a pitch, the other
> records silence. Since @dur.ges accepts absolute durations, it effectively
> acts exactly as you are expecting.
> With the @head.shape attribute you can precisely control the visual
> appearance of the object, separate from its time. I believe in the
> forthcoming release you can even use this to include a SMuFL codepoint.
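The separation described here, with duration carried by @dur.ges and visual identity by the notehead attribute, can be sketched with a small parsing example. The attribute values below are illustrative assumptions and are not checked against an MEI schema:

```python
import xml.etree.ElementTree as ET

# Hypothetical MEI-style note: @dur gives the written symbol, @dur.ges an
# absolute performed duration, and @head.shape an independent visual form.
# All values are invented for illustration, not validated against MEI.
note = ET.fromstring(
    '<note pname="c" oct="4" dur="4" dur.ges="450ms" head.shape="x"/>'
)

print(note.get("dur"))         # written duration: quarter
print(note.get("dur.ges"))     # performed duration, decoupled from @dur
print(note.get("head.shape"))  # visual appearance, decoupled from both
```

A consumer can thus read (or ignore) each of the three dimensions independently.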
> [begin aside]
> The first <time> element would describe the symbol's default duration,
> which in MusicXML and MEI Go can be calculated from the tempo, the ppq,
> the logical value of @dur, etc.
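For concreteness, that calculation might look like the following (a deliberately simplified model: quarter-note tempo only, ignoring dots, tuplets, and tempo changes):

```python
def dur_to_ms(dur: int, tempo_bpm: float) -> int:
    """Default milliseconds for a written duration @dur (1=whole, 2=half,
    4=quarter, ...) at a quarter-note tempo. Dots and tuplets are ignored."""
    quarter_ms = 60_000 / tempo_bpm
    return round(quarter_ms * 4 / dur)

def ticks_to_ms(ticks: int, ppq: int, tempo_bpm: float) -> int:
    """Milliseconds for a tick count at a given ppq (pulses per quarter)."""
    return round(ticks * 60_000 / (tempo_bpm * ppq))

print(dur_to_ms(8, 120))           # eighth note at 120 bpm
print(ticks_to_ms(480, 480, 120))  # one quarter's worth of ticks at 120 bpm
```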
> If used in CWMN, the <time> element would make all that tempo, ppq,
> logical value @dur.ges stuff redundant. Simplification is always a good
> idea! :-) Note that this strategy means that @dur's logical value and
> @dur.ges are both being treated as being in the same dimension (time).
> Pretending that att.duration.musical and att.duration.performed need to be
> treated differently is, I think, a mistake.
> Note also that the Web MIDI API (unlike the 1981 MIDI standard) has no
> notion of tempo: it simply timestamps messages in milliseconds. Including
> tempo in the original MIDI standard was, I think, a mistake, and the Web
> MIDI API's millisecond timestamps effectively rectify it.
> Even CWMN scores should be allowed to contain descriptions of additional
> temporal renderings, apart from the default, metronomic one... This
> strategy makes metronomes redundant, since scores can contain accurate
> temporal renderings of what the composer really meant, not just an implied
> mechanical realisation.
> [end aside]
> If you'll permit me another frank comment, it seems you have very narrow
> needs that (in my experience) do not necessarily apply to a much wider
> audience. This is absolutely fine -- figuring out how to encode narrow
> repertoires is something that we welcome and encourage in the MEI
> community. My own experience is that within MEI there is usually a way to
> express exactly what you want to, but that way is not always obvious
> (unless you're from West Virginia... :).
> att.duration.musical and att.duration.performed are one of the most
> reviewed and core components of MEI, and they are hardly a 'mistake' nor
> are they misguided efforts. They effectively model a vast existing
> repertoire. If we are talking about new notation systems then all bets are
> off, but saying things are 'unimportant' or 'mistakes' simply because they
> don't conform to a narrow vision of notation is an equally problematic
> position.
> If you have an existing absolute rendering (i.e., a recording) you can use
> the <when>/@when/@data timepoint indications to show how a symbolic
> rendering aligns with an absolute record of the performance.
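Such an alignment could look roughly like this; the exact content model of <when> should be checked against the MEI Guidelines, so treat the fragment as an assumption-laden sketch:

```python
import xml.etree.ElementTree as ET

# Sketch: a <when> timepoint linking a moment 1.5 seconds into a recording
# to a symbolic element with xml:id "note-1". Attribute usage is modelled
# loosely on MEI's <when> and is not schema-checked here.
rec = ET.fromstring("""
<recording>
  <when absolute="00:00:01.500" abstype="time" data="#note-1"/>
</recording>""")

w = rec.find("when")
print(w.get("data"), "at", w.get("absolute"))
```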
> The simplest case would be if the <time> element simply had an @ms
> attribute which would be its duration in milliseconds. But this element
> should also be able to contain more complex temporal information. A <note>
> or <chord> symbol can contain ornaments that can be described using MIDI
> information. All MIDI info is purely temporal. So the duration of the
> <time> elements embedded in <note>s and <chord>s should really be
> calculated from the durations in the contained MIDI sequence. A <time>
> element in a <rest> might just have a simple @ms attribute.
> But let's forget about such refinements for the moment and just note that
> the <time> element has (or defines) a duration.
> I want to use <measure>, but here again, I want to separate its graphical
> aspects from its meaning.
> *Question:* Does setting @metcon to false on <measure> do that? If @metcon
> is false, can I then use arbitrary symbols in the various contained
> <layer>s, and ignore the logical values of all the @dur attributes?
> Graphically, a measure is just two vertical lines enclosing <staff>
> elements that enclose <layer> elements that enclose duration symbols
> (<note>s, <chord>s and <rest>s) and other "events" in a left-right sequence.
> The duration symbols' left-right sequence in the measure corresponds to
> their earlier-later sequence in time, so there has to be a mechanism for
> describing the left-right sequence of all the symbols in a <measure>.
> Elements inside of layer are ordered. If they have a duration, they are
> assumed to be processed in the order they are listed.
> [begin aside]
> My current solution for this is to use the default millisecond duration
> of the symbol's <time> element (an integer) as a dimensionless number. The
> algorithm that creates an (SVG) instantiation of the abstract XML info
> first uses this number as a spatial unit (a number of pixels) in a space
> wide enough to position the symbols so that they don't overlap, then
> compresses the width of the system into the actually available space. My
> algorithm is recursive, and results in spacing that corresponds to what I
> think is the best (most legible) way to space music symbols. If the width
> is too small for the symbols to be spaced proportionally to their durations
> (the usual case), then less space is given to the longer durations.
> [end aside]
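A toy reconstruction of the spacing idea in the aside above (my sketch, not the author's actual recursive algorithm): start from widths proportional to duration, then clamp short symbols to a legible minimum, taking the surrendered space from the longer durations so that they are compressed first:

```python
def space_symbols(durations_ms, available_px, min_px=10):
    """Give each symbol a width proportional to its duration within
    available_px; symbols squeezed below min_px are clamped, and the
    deficit is taken proportionally from the longer symbols. One clamping
    pass only; a production version would iterate until stable. Assumes
    available_px >= min_px * len(durations_ms)."""
    total = sum(durations_ms)
    widths = [d * available_px / total for d in durations_ms]
    short = {i for i, w in enumerate(widths) if w < min_px}
    if not short:
        return widths
    deficit = sum(min_px - widths[i] for i in short)
    long_total = sum(w for i, w in enumerate(widths) if i not in short)
    return [min_px if i in short else w - deficit * w / long_total
            for i, w in enumerate(widths)]

print(space_symbols([500, 500], 100))    # plenty of room: proportional
print(space_symbols([900, 50, 50], 100)) # tight: long duration gives up space
```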
> While the logical durations of the @dur attributes may not have to add up
> in a tempoless <measure>, the <measure> still imposes a restriction on the
> durations of the <time> elements. The durations of the <time> elements in
> each <layer> have to add up to the same value in any particular
> performance. They don't, of course, have to add up to the same value across
> different performances. How can I express that in a schema? Maybe I don't
> *need* to express that explicitly, but it's still something that can be
> validated automatically by software.
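Indeed, while a schema cannot easily state this "per-layer totals must agree" constraint, it is trivial to check in software. A minimal sketch, using the hypothetical <time ms="..."> element proposed in this thread (not standard MEI):

```python
import xml.etree.ElementTree as ET

def layer_totals_agree(measure_xml: str) -> bool:
    """True if every <layer> in the measure contains <time> elements whose
    @ms durations sum to the same total. <time ms="..."> is the hypothetical
    element proposed in this thread, not standard MEI."""
    measure = ET.fromstring(measure_xml)
    totals = {
        sum(int(t.get("ms")) for t in layer.iter("time"))
        for layer in measure.iter("layer")
    }
    return len(totals) <= 1

ok = layer_totals_agree("""
<measure>
  <layer><note><time ms="300"/></note><rest><time ms="200"/></rest></layer>
  <layer><note><time ms="500"/></note></layer>
</measure>""")
print(ok)
```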
> AH: I'm really not sure where you're going with the duration symbols and
> fixed meanings, though. Why would you be adding grace notes and tuplets?
> In a tempoless notation, there is no need to litter the score with little
> numbers and brackets (tuplets) or to make some symbols smaller in order to
> make it clear that some notes are outside the counting scheme (grace-notes)
> or to add little dots to the symbols to mean that they are longer than they
> would be otherwise (augmentation dots). I think of all these as
> *annotations* that can be added ad lib to a score, without affecting the
> playback of the contained temporal information.
> Tuplets and grace-notes only make sense in music that has tempo (CWMN).
> Again, I'm not sure where you're going with this.
> Tempo means time. All music takes up time -- that's what makes it music!
> There is no such thing as 'tempoless notation' since that (essentially)
> means notation that is not meant to express time. So I think you need to
> more clearly express this so that I (we?) can understand it.
> *You* can certainly write music that does not use "little numbers and
> brackets" but your use case is really a very small part of larger efforts.
> I can assure you that there are plenty of people who *do* want little
> numbers and brackets, and they want them to be a core part of the notation
> scheme. Thinking of them as annotations is not adequate, since there are
> non-absolute durational implications to having these elements present.
> I'm not so sure about augmentation dots -- which could be used as
> annotations in tempoless music to imply that the note should be performed
> longishly... But a tenuto articulation could do that just as well. It's
> probably something that should be left to the composer to decide. Maybe the
> composer wants to use tenuto to imply something else (emphasis of some
> kind).
> Apropos verification: Maybe there's some way in CWMN to ensure that the
> default <time> durations inside a tuplet are as equal as possible. But
> those durations *must be integers* in order to prevent the endless
> hassle with rounding floating point numbers. None of the durations in the
> default (metronomic) performance should be allowed to be more than one
> millisecond longer than any of the others. Grace notes inside tuplets
> complicate things of course, but the problem should not be insoluble.
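The "as equal as possible, integers only" requirement is a standard fair-division computation; a sketch:

```python
def split_ms(total_ms: int, n: int) -> list[int]:
    """Divide total_ms into n integer millisecond durations that differ by
    at most 1 ms and preserve the exact total. No floating point involved."""
    base, extra = divmod(total_ms, n)
    # The first `extra` parts get one extra millisecond each.
    return [base + 1] * extra + [base] * (n - extra)

print(split_ms(1000, 3))  # e.g. a triplet spanning one second
```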
> JI: Can you imagine such a hierarchy of customisations? I'm also thinking
> of the container hierarchy that could become part of the W3C standard for
> describing *any* polyphonic music notation.
> AH: A customization is not arranged in a hierarchy. The primary reason for
> a customization is to produce a schema that will validate an encoding.
> That's not a problem.
> I was thinking more of another level: I think that all polyphonic music
> notations could share the same container hierarchy. The
> page->system->staff->layer hierarchy is actually independent of the
> graphics. Notations can use the same terms, even if they are read left to
> right, top to bottom or round in circles. The graphic representation is
> really just up to the software that instantiates the score. I just didn't
> want to miss the chance of some more modularity.
> It's probably not very important in polyphonic music, but it might help the
> evolution of software for homophonic music, if the developers always used
> the same names for their simplified container hierarchy. For example:
> page->staff.
> MEI has no particular affinity to rendering or graphical layout. Many of
> us consider "rendering" to be sonic or analytic as well -- i.e., a system
> that 'renders' notation for the purpose of search and analysis.
> In closing, I would encourage you to keep asking questions. However, if I
> were to make a suggestion it would be that the tone and tenor of your
> messages would skew towards trying to learn about what we've done in the
> community before trying to suggest fundamental changes. I think you'll find
> that there are many people who know what they're talking about here, and
> who have decades of experience with all types of music notation efforts.
> Right now it seems that there is an assumption in your messages that
> everyone is doing it wrong, and I don't think that's a productive place to
> start our conversations.
> Cheers,
> -Andrew
> Hope that helps,
> James
> _______________________________________________
> mei-l mailing list
> mei-l at lists.uni-paderborn.de
> https://lists.uni-paderborn.de/mailman/listinfo/mei-l