[MEI-L] MEI Customisation
James Ingram
j.ingram at netcologne.de
Mon May 30 14:51:19 CEST 2016
Hi, Andrew, Laurent, Johannes, all,
Thanks for the replies. Sorry for the delay, but I've been away for the
last couple of days.
First, I owe the list (and Andrew in particular) an apology: I was so
excited at having discovered MEI and its customizability, that I forgot
to introduce myself. I even managed to delete my signature links! No
wonder Andrew was a little taken aback! Andrew: Thanks for our private
exchange, I hope the following helps you, and everyone else here, to
understand my position better.
Also, apologies for my sometimes less than academic language style. I'm
coming from another world, and sometimes find it difficult not to sound
flippant. I've been trying to get this penny to drop in the academic and
software worlds for more than 30 years... Enough said. I forgive you
all! :-))
I've copied the complete replies to my original posting below. I'll
reply to Johannes' and Laurent's postings in that order. My reply to
Andrew's public posting is implicit.
Johannes said:
> JK: I believe that your definition is clearly outside of CMN as
> perceived and utilized by composers such as Bach, Beethoven, Bizet,
> Busoni or Borodin.
That's not quite right.
What I really think is that MEI has inherited the /neo/-classical
interpretation of CWMN used by composers like Stravinsky during the
first half of the 20th century. Stravinsky, and other neo-classical
composers had no choice but to go back to using well defined tempi. They
were trying to avoid the notational chaos left by late Romanticism (in
which performance practice can no longer be described in terms of fixed
tempi).
It seems a bit daring to include Bach in your list of composers who use
CWMN! Did he use metronome marks? Nested tuplets? What about his use of
"dotted eighth" + "sixteenth" adding up to three "eighths"? Baroque
ornaments? Curvy beams... :-)
I think you are over-simplifying by leaving out the composers who don't
fit your model. What about Chopin, Wagner, Bruckner, Debussy? These are
composers for whom living traditions of performance practice are more
important than what's on the paper -- though they do, of course, try to
be logical within their 19th century context.
In the 19th century, it was assumed that there are spatial and temporal
ethers. Tempo is the 19th century's temporal ether. (There are also good
commercial reasons for the rise of the metronome in the 19th century.)
There is, however, no such thing as a temporal ether (tempo is /not/
fundamental), so the notation conventions eventually collapsed (at about
the same time as the concept of a spatial ether).
> JK: Technically speaking, you seem to overload the definition of CWMN
> in order to make it "broad" enough for covering your approach.
That's also not quite right.
A tempo-less customization of MEI should be regarded as a strict,
formal, (even academic) exercise, that has no /necessary/ relation to
CWMN at all.
And I am /not/ just trying to be different. This is /not/ just my
private problem -- though I've been working on it since I was a student.
The notation of tempo-less music is the central problem that was
left for future generations to solve when the /Avant Garde/ gave up on
their notational experiments in 1970. You are all ignoring 20th century
music history! :-)
I'm using CWMN symbols in my projects because they are the most
advanced, most /legible/ set of such symbols available. But, in
principle, I could use any other glyphs or glyph combinations. The
/Avant-Garde/'s assumption that CWMN symbols have to be reserved for
"precisely defined" music (i.e. music having no performance-practice
tradition), and that a completely different notation (or set of symbols)
has to be invented for other kinds of music, comes from neo-classicism,
and is simply wrong. Staves, clefs and chord symbols are also used in
pre-classical music.
When this tempo-less notation has been defined, we can see if there are
any consequences for the full CWMN customization. I suspect, of course,
that a MEI customization that can deal with arbitrary durations, could
be easily adapted for using the restricted set of duration proportions
available in (metronomic) CWMN. Tuplets, grace-notes etc. will have to
be treated differently than they are at present, but I see no
fundamental obstacles to giving them strict, verifiable definitions
related to default, metronomically defined, performances.
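As a sketch of what such a strict, verifiable definition could look like (the function name and the choice of millisecond units are of course arbitrary): a tuplet's default performance could divide the tuplet's total duration into integer parts that differ from each other by at most one millisecond, as discussed further down this thread:

```python
def split_ms(total_ms: int, n: int) -> list[int]:
    """Divide total_ms into n integer parts, as equal as possible.

    No two parts differ by more than 1 ms, and the parts sum
    exactly to total_ms, so there is no floating-point drift.
    """
    base, remainder = divmod(total_ms, n)
    # The first `remainder` parts get one extra millisecond.
    return [base + 1] * remainder + [base] * (n - remainder)

# A triplet occupying one second of the default performance:
print(split_ms(1000, 3))  # [334, 333, 333]
```

Because the parts are integers that always sum exactly to the total, such a definition can be checked mechanically by validation software.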
I also think that the existence of a tempo-less customization would have
consequences for the other kinds of notation defined by MEI. Since the
glyphs or glyph combinations can be freely chosen, neumes are also in scope.
Note that this proposal allows /multiple interpretations/ of the symbols
to be stored, quite independently of the glyphs or glyph combinations in
use. It enables the storing of performance practice traditions.
Currently, this is being done outside scores, using separate,
unsynchronized recordings. Opera singers currently learn their parts by
listening to CDs. The revival of performance practice traditions for
Early Music would never have happened without the use of /recordings/.
I'd like to enable something similar for New Music /that has not yet
been performed/. I want to be able to write new music that /breathes/.
Listen to the score, and imitate... using the symbols as an /aide
memoire/... that's what they have /always/ been for...
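To make the idea concrete: if each duration symbol contained a list of <time> elements, one per stored performance, a note could carry its default rendering and any number of recorded interpretations side by side. (This <time>/@ms markup is my proposal, sketched further in the quoted thread below -- it is /not/ existing MEI.)

```python
import xml.etree.ElementTree as ET

# Hypothetical markup (NOT existing MEI): one <time> child per stored
# performance; index 0 is the default, the rest are recorded traditions.
note_xml = """
<note pname="c" oct="4" dur="4">
  <time ms="500"/>  <!-- default (metronomic) rendering -->
  <time ms="430"/>  <!-- performance A -->
  <time ms="610"/>  <!-- performance B -->
</note>
"""

note = ET.fromstring(note_xml)
print([int(t.get("ms")) for t in note.findall("time")])  # [500, 430, 610]
```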
Laurent said:
> LP: Let me take the opportunity to clarify the role of customizations
> in MEI, which are very flexible as with many other things in MEI. One
> possible use is for restricting MEI to a subset. This is achieved with
> Tido's customization, which acts as an application profile. It is
> also the goal of the MEI Go! (or whatever it will be called) where we
> would like to define the most essential components we need for
> facilitating data exchange between projects.
I still don't really understand what MEI Go! is for, or whether it
really has anything to do with my project.
> Now, the good news for you is that customization can also be used for
> uncommon notation features, which is what it seems you would like to do.
As I said above, I don't think this is just my private problem.
> This can also be used for correcting what you think is a mistake in
> MEI. The other good news is that you do not need to convince everybody
> on the list to do so. You can do it on your own.
Or I only need to convince one person who knows MEI inside-out to do it!
I'm very willing and able to follow what that person would be doing (and
pointing out any problems that I see arising), but I don't think I
should be the person in charge. This might look as if I'm chickening
out, but here are my reasons:
1. I'm currently working on a new piece for the web, in which I want to
try out some new, advanced data structures that are the beginnings of
"harmony and counterpoint" in tempo-less music. Yes, I compose
experimental music... :-)
2. Even if I took the time to learn how to customize MEI, it looks as if
I'm going to run into non-trivial problems setting up the framework. I
program using Visual Studio (latest Community edition) on Windows 10.
Visual Studio has advanced support for using schemas, so there shouldn't
be any problem there.
But https://github.com/music-encoding/music-encoding says that
"Building MEI requires the TEI Stylesheets. You should clone their git
repository, or download them in a packaged zip file",
and https://github.com/TEIC/Stylesheets says that its binaries will "run
on Linux, OS X or other Unix operating systems." I don't know if or how
I can compile the source code...
3. I don't really need the tempo-less MEI customization myself. I think
you need it more than I do.
4. There may well be consequences at a pretty deep level inside the MEI
code. Such consequences need to be dealt with by someone who knows the
code extremely well. I simply don't have that experience.
5. It would be really wonderful if someone else took up this challenge.
I'm pretty fed up with having to do everything myself. :-)
Any ideas?
All the best,
James
--
http://james-ingram-act-two.de
https://github.com/notator
------------------------------------------------------------------------
Am 25.05.2016 um 23:08 schrieb Johannes Kepper:
>
> Hi James,
> let me chime in as well. I believe that the uneasiness that is almost
> tangible in Andrew's (excellent) response comes from your
> understanding of CWMN. Actually, it seems to me that you have a
> conception of CWMN that doesn't follow a more traditional
> understanding, as implemented in MusicXML or MEI. While we all know
> that CMN isn't as strictly defined as we sometimes pretend, and
> despite the very free form of cadenzas and similar material, I believe
> that your definition is clearly outside of CMN as perceived and
> utilized by composers such as Bach, Beethoven, Bizet, Busoni or
> Borodin. However, part of the game has always been to stretch (and
> sometimes break) the rules of notation to achieve novelty. At the same
> time, CWMN is about _notation_, and is per se independent of any
> performance. Every generation had its own take on how to interpret
> these symbols, and this is what makes performance practice such an
> interesting field.
>
> However, you seek to use an encoding scheme for your own compositions.
> For this purpose, you utilize a set of symbols borrowed from CWMN, and
> you use them in an *almost* compatible way. However, while some
> constructs like tuplets may have no special meaning / use in your
> approach, they are definitely a crucial part of CWMN in its
> traditional meaning. Technically speaking, you seem to overload the
> definition of CWMN in order to make it "broad" enough for covering
> your approach. This motivation seems valid, but I guess that I'm not
> the only one who feels that this "stretching" costs too much of CWMN's
> specificity. Instead, I tend to understand this as a notational system
> in its own right, which is (just) based on CWMN. If that's the case,
> it would be perfectly possible to actually model this relation in an
> MEI customization, which builds on the CMN module and adjusts it to
> your specific needs, as Laurent pointed out. That might also give an
> opportunity to document your concept of CWMN+ (or however this thing
> should be called), and how it relates to "regular" CMN.
>
> I don't know if my analysis is correct or not, but maybe it helps to
> get a clearer picture.
>
> All best,
> Johannes
>
>> Am 25.05.2016 um 22:40 schrieb Laurent Pugin <lxpugin at gmail.com>:
>>
>> Hi James,
>>
>> Let me take the opportunity to clarify the role of customizations in
>> MEI, which are very flexible as with many other things in MEI. One
>> possible use is for restricting MEI to a subset. This is achieved
>> with Tido's customization, which acts as an application profile.
>> It is also the goal of the MEI Go! (or whatever it will be called)
>> where we would like to define the most essential components we need
>> for facilitating data exchange between projects.
>>
>> Now, the good news for you is that customization can also be used for
>> uncommon notation features, which is what it seems you would like to
>> do. This can also be used for correcting what you think is a mistake
>> in MEI. The other good news is that you do not need to convince
>> everybody on the list to do so. You can do it on your own.
>> Eventually, if your proposal seems to be a good idea there is no
>> reason for not having it integrated in MEI.
>>
>> Best,
>> Laurent
>>
>> On Wed, May 25, 2016 at 4:51 PM, Andrew Hankinson
>> <andrew.hankinson at mail.mcgill.ca> wrote:
>>
>> AH: I'll try to respond inline below.
>>> JI: Hi Andrew, Zoltan, JI: Zoltan: thanks for your answers to this
>>> thread on the W3C CG list. This reply to Andrew (below) continues
>>> the thread here, so it's also a reply to what you said.
>>>> AH: For rhythm and duration we differentiate between "@dur"
>>>> (written duration) and "@dur.ges" (performed duration). Both of
>>>> these are available on note and rest objects.
>>> JI: Yes, that's one of the problems -- see below. :-)
>>>>> JI: I want to make an MEI customisation that uses most of the
>>>>> symbols that are used by CWMN, but without assuming tempo. If
>>>>> there is no tempo, then neither tuplets nor grace-notes make
>>>>> sense. It should also be possible for the description of more than
>>>>> one temporal instantiation to be stored inside the XML elements
>>>>> that represent the score's graphics.
>>>> AH: I'm not sure I understand this. Tempo is generally expressed by
>>>> a playback mechanism. It can be hinted at in the encoding, but most
>>>> systems have controls for overriding it.
>>> JI: The @dur attribute describes both the shape of the symbol and
>>> its meaning (the number of beats it represents in a <measure>).
>>> Beats mean tempo, and that's a problem if I'm trying to create a
>>> customisation that does without it. I want to separate the meaning
>>> of each duration symbol from its visual appearance by putting the
>>> temporal information in an enclosed element. Let's call the enclosed
>>> element <time>. Actually, I want (potentially) to have a list of
>>> <time> elements inside each duration symbol, each position in such
>>> lists containing the temporal information for a different
>>> performance of the piece.
>> AH: If you'll permit me a frank comment, it sounds like you have a
>> pretty good idea of your own encoding format, which is neither MEI
>> nor MNX. Depending on how far down the rabbit hole you want to go, it
>> is completely possible to separate visual identity from durational
>> identity in MEI. We use '<note>' and '<rest>' effectively as `time`
>> elements -- that is, elements that record the passing of time. One
>> records a pitch, the other records silence. Since @dur.ges accepts
>> absolute durations, it effectively acts exactly as you are expecting.
>> With the @headshape attribute you can precisely control the visual
>> appearance of the object, separate from its time. I believe in the
>> forthcoming release you can even use this to include a SMuFL codepoint.
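For readers following along: the written/performed split Andrew describes uses two separate attributes on the same note, so both values can be read back independently. (The "750ms" value syntax for @dur.ges below is only my assumption for illustration -- consult the MEI guidelines on att.duration.performed for the forms your MEI version actually accepts.)

```python
import xml.etree.ElementTree as ET

# @dur is the notated value (a quarter note); @dur.ges is the performed
# duration. The @dur.ges value syntax here is an assumption, not a
# guaranteed MEI form -- check the guidelines for your MEI release.
note = ET.fromstring('<note pname="g" oct="3" dur="4" dur.ges="750ms"/>')
print(note.get("dur"), note.get("dur.ges"))  # 4 750ms
```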
>>> JI: [begin aside] The first <time> element would describe the
>>> symbol's default duration, which in MusicXML and MEI Go can be
>>> calculated from the tempo, ppq, logical value of @dur, etc. If used
>>> in CWMN, the <time> element would make all that tempo, ppq, logical
>>> value @dur.ges stuff redundant. Simplification is always a good
>>> idea! :-) Note that this strategy means that @dur's logical value
>>> and @dur.ges are both being treated as being in the same dimension
>>> (time). Pretending that att.duration.musical and
>>> att.duration.performed need to be treated differently is, I think, a
>>> mistake. Note also that the latest MIDI standard (the Web MIDI API)
>>> no longer supports tempo. Including tempo in the 1981 MIDI standard
>>> was a mistake, which has now been rectified. The Web MIDI API just
>>> uses milliseconds. Even CWMN scores should be allowed to contain
>>> descriptions of additional temporal renderings, apart from the
>>> default, metronomic one... This strategy makes metronomes redundant,
>>> since scores can contain accurate temporal renderings of what the
>>> composer really meant, not just an implied mechanical realisation.
>>> [end aside]
>> AH: If you'll permit me another frank comment, it seems you have very
>> narrow needs that (in my experience) do not necessarily apply to a
>> much wider audience. This is absolutely fine -- figuring out how to
>> encode narrow repertoires is something that we welcome and encourage
>> in the MEI community. My own experience is that within MEI there is
>> usually a way to express exactly what you want to, but that way is
>> not always obvious (unless you're from West Virginia... :).
>> att.duration.musical and att.duration.performed are one of the most
>> reviewed and core components of MEI, and they are hardly a 'mistake'
>> nor are they misguided efforts. They effectively model a vast
>> existing repertoire. If we are talking about new notation systems
>> then all bets are off, but saying things are 'unimportant' or
>> 'mistakes' simply because they don't conform to a narrow vision of
>> notation is an equally problematic position. If you have an existing
>> absolute rendering (i.e., a recording) you can use the
>> <when>/@when/@data timepoint indications to show how a symbolic
>> rendering aligns with an absolute record of the performance.
>>> JI: The simplest case would be if the <time> element simply had an
>>> @ms attribute which would be its duration in milliseconds. But this
>>> element should also be able to contain more complex temporal
>>> information. A <note> or <chord> symbol can contain ornaments that
>>> can be described using MIDI information. All MIDI info is purely
>>> temporal. So the duration of the <time> elements embedded in <note>s
>>> and <chord>s should really be calculated from the durations in the
>>> contained MIDI sequence. A <time> element in a <rest> might just
>>> have a simple @ms attribute. But lets forget about such refinements
>>> for the moment and just note that the <time> element has a (or
>>> defines) a duration. I want to use <measure>, but here again, I want
>>> to separate its graphical aspects from its meaning. Question: Does
>>> setting measure@metcon to false do that? If @metcon is false, can I
>>> then use arbitrary symbols in the various contained <level>s, and
>>> ignore the logical values of all the @dur attributes? Graphically, a
>>> measure is just two vertical lines enclosing <staff> elements that
>>> enclose <layer> elements that enclose duration symbols (<note>s,
>>> <chord>s and <rest>s) and other "events" in a left-right sequence.
>>> The duration symbols' left-right sequence in the measure corresponds
>>> to their earlier-later sequence in time, so there has to be a
>>> mechanism for describing the left-right sequence of all the symbols
>>> in a <measure>.
>> AH: Elements inside of layer are ordered. If they have a duration,
>> they are assumed to be processed in the order they are listed.
>>> JI: [begin aside] My current solution for this is to use the
>>> default millisecond duration of the symbol's <time> element (an
>>> integer) as a dimensionless number. The algorithm that creates an
>>> (SVG) instantiation of the abstract XML info first uses this number
>>> as a spatial unit (a number of pixels) in a space wide enough to
>>> position the symbols so that they don't overlap, then compresses the
>>> width of the system into the actually available space. My algorithm
>>> is recursive, and results in spacing that corresponds to what I
>>> think is the best, (most legible) way to space music symbols. If the
>>> width is too small for the symbols to be spaced proportionally to
>>> their durations (the usual case), then less space is given to the
>>> longer durations. [end aside] While the logical durations of the
>>> @dur attributes may not have to add up in a tempoless <measure>, the
>>> <measure> still imposes a restriction on the durations of the <time>
>>> elements. The durations of the <time> elements in each <layer> have
>>> to add up to the same value in any particular performance. They
>>> don't, of course, have to add up to the same value across different
>>> performances. How can I express that in a schema? Maybe I don't need
>>> to express that explicitly, but it's still something that can be
>>> validated automatically by software.
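A grammar-based schema language alone can't easily express that co-constraint, but a Schematron rule or a short script can check it. A sketch, using the hypothetical <time>/@ms markup proposed above (again, not existing MEI):

```python
import xml.etree.ElementTree as ET

# Hypothetical tempo-less <measure>: for each performance index, the
# <time> durations in every <layer> must add up to the same total.
measure_xml = """
<measure>
  <layer>
    <note dur="4"><time ms="500"/><time ms="420"/></note>
    <note dur="4"><time ms="500"/><time ms="550"/></note>
  </layer>
  <layer>
    <note dur="2"><time ms="1000"/><time ms="970"/></note>
  </layer>
</measure>
"""

def layer_totals(layer):
    # One total per performance index: sum the i-th <time> of each note.
    per_note = [[int(t.get("ms")) for t in n.findall("time")]
                for n in layer.findall("note")]
    return [sum(col) for col in zip(*per_note)]

measure = ET.fromstring(measure_xml)
totals = [layer_totals(layer) for layer in measure.findall("layer")]
print(totals)  # [[1000, 970], [1000, 970]]
assert all(t == totals[0] for t in totals), "layer durations disagree"
```

Note that the totals are free to differ /across/ performances (1000 ms in the default, 970 ms in the second) -- the layers only have to agree within any one performance.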
>>>> AH: I'm really not sure where you're going with the duration
>>>> symbols and fixed meanings, though. Why would you be adding
>>>> gracenotes and tuplets?
>>> JI: In a tempoless notation, there is no need to litter the score
>>> with little numbers and brackets (tuplets) or to make some symbols
>>> smaller in order to make it clear that some notes are outside the
>>> counting scheme (grace-notes) or to add little dots to the symbols
>>> to mean that they are longer than they would be otherwise
>>> (augmentation dots). I think of all these as annotations that can be
>>> added ad lib to a score, without affecting the playback of the
>>> contained temporal information. Tuplets and grace-notes only make
>>> sense in music that has tempo (CWMN).
>> AH: Again, I'm not sure where you're going with this. Tempo means
>> time. All music takes up time -- that's what makes it music! There is
>> no such thing as 'tempoless notation' since that (essentially) means
>> notation that is not meant to express time. So I think you need to
>> more clearly express this so that I (we?) can understand it. *You*
>> can certainly write music that does not use "little numbers and
>> brackets" but your use case is really a very small part of larger
>> efforts. I can assure you that there are plenty of people who *do*
>> want little numbers and brackets, and they want them to be a core
>> part of the notation scheme. Thinking of them as annotations is not
>> adequate, since there are non-absolute durational implications to
>> having these elements present.
>>> JI: I'm not so sure about augmentation dots -- which could be used
>>> as annotations in tempoless music to imply that the note should be
>>> performed longishly... But a tenuto articulation could do that just
>>> as well. It's probably something that should be left to the composer
>>> to decide. Maybe the composer wants to use tenuto to imply something
>>> else (emphasis of some kind). Apropos verification: Maybe there's
>>> some way in CWMN to ensure that the default <time> durations inside
>>> a tuplet are as equal as possible. But those durations must be
>>> integers in order to prevent the endless hassle with rounding
>>> floating point numbers. None of the durations in the default
>>> (metronomic) performance should be allowed to be more than one
>>> millisecond longer than any of the others. Grace notes inside
>>> tuplets complicate things of course, but the problem should not be
>>> insoluble.
>>>>> JI: Can you imagine such a hierarchy of customisations? I'm also
>>>>> thinking of the container hierarchy that could become part of the
>>>>> W3C standard for describing any polyphonic music notation.
>>>> AH: A customization is not arranged in a hierarchy. The primary
>>>> reason for a customization is to produce a schema that will
>>>> validate an encoding.
>>> JI: That's not a problem. I was thinking more of another level: I
>>> think that all polyphonic music notations could share the same
>>> container hierarchy. The page->system->staff->layer hierarchy is
>>> actually independent of the graphics. Notations can use the same
>>> terms, even if they are read left to right, top to bottom or round
>>> in circles. The graphic representation is really just up to the
>>> software that instantiates the score. I just didn't want to miss the
>>> chance of some more modularity. It's probably not very important in
>>> polyphonic music, but it might help the evolution of software for
>>> homophonic music, if the developers always used the same names for
>>> their simplified container hierarchy. For example: page->staff.
>> AH: MEI has no particular affinity to rendering or graphical layout.
>> Many of us consider "rendering" to be sonic or analytic as well --
>> i.e., a system that 'renders' notation for the purpose of search and
>> analysis. In closing, I would encourage you to keep asking questions.
>> However, if I were to make a suggestion it would be that the tone and
>> tenor of your messages would skew towards trying to learn about what
>> we've done in the community before trying to suggest fundamental
>> changes. I think you'll find that there are many people who know what
>> they're talking about here, and who have decades of experience with
>> all types of music notation efforts. Right now it seems that there is
>> an assumption in your messages that everyone is doing it wrong, and I
>> don't think that's a productive place to start our conversations.
>>
>> Cheers,
>> Andrew
>>> Hope that helps,
>>> James