[MEI-L] Tick-based Timing

James Ingram j.ingram at netcologne.de
Sun Apr 18 13:11:46 CEST 2021


Thanks, Simon and Jo, for your responses,

@Simon: Please be patient, I'll come back to you, but I first need to 
sort out some basics with Jo.

@Jo: Before replying to your posting, I first need to provide some 
context so that you can better understand what I'm saying:

Context 1: Outline of the state of the debate in the MNX Repository
MNX is intended to be a set of next-generation, web-friendly music 
notation encodings, related via common elements in their schemas. The 
first format being developed is MNXcommon1900, which is intended to be 
the successor to MusicXML. It does not have to be backward compatible
with MusicXML, so the co-chair wants to provide documentation comparing 
the different ways in which MNXcommon1900 and MusicXML encode a number 
of simple examples. Unfortunately, they first need to revise the 
MusicXML documentation in order to do that, so work on MNXcommon1900 has 
temporarily stopped. The intention is to start work on it again as soon 
as the MusicXML documentation revision is complete. After 5+ years of 
debate, MNXcommon1900 is actually in a fairly advanced state.
I have two MNX-related GitHub repositories (the best way to really 
understand software is to write it):

  * MNXtoSVG: A (C#) desktop application that converts MNX files to
    SVG. This application is a test-bed for MNX's data structures, and
    successfully converts the first completed MusicXML-MNX comparison
    examples to (graphics only) SVG. When/if MNX includes temporal info,
    it will do that too, using a special namespace in the SVG (see the
    sketch after this list).
  * A fork of the MNX repository: This contains (among other things) the
    beginnings of a draft schema for MNXcommon1900. The intention is to
    plunder that schema for other schemas...
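
For example, the temporal info could be attached to the SVG graphics
roughly as follows. This is only a hypothetical sketch: the namespace
URI and the attribute names are placeholders of my own, not a finished
design.

<svg xmlns="http://www.w3.org/2000/svg"
     xmlns:score="https://example.com/scoreNamespace">
  <!-- each graphical event group carries its (abstract) tick position
       and tick duration in the special namespace -->
  <g class="event" score:tickPosition="0" score:tickDuration="480">
    <!-- ...the event's graphics (noteheads, stems etc.)... -->
  </g>
  <g class="event" score:tickPosition="480" score:tickDuration="480">
    <!-- ... -->
  </g>
</svg>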

I'm looking for things that all event-based music notations have in 
common, so that software libraries can be used efficiently across all 
such notations. That's important if we want to develop standards that 
are consistent all over the web.

Context 2: My background
I'm a relic of the '60s /Avant-Garde/. I left college in the early 1970s, 
and became K. Stockhausen's principal copyist 1974-2000. In the '60s, 
they were still trying to develop new notations, but that project 
collapsed quite suddenly in 1970, when all the leading composers gave it 
up (without solving anything) and reverted to using standard notation. 
In 1982, having learned a lot from my boss, and having had a few years 
practical experience pushing the dots around, I suddenly realised what 
had gone wrong, and wrote an article about it that was eventually 
published in 1985. The article contains a critical analysis of CWMN...
So I'm coming from a rather special niche in the practical world of 
music publishing, not from the academic world. In 1982, I was not aware 
of Maxwell (1981) [1], and I hadn't realised, until researching this 
post, how it relates to things like /metrical time/ in the (1985) MIDI 
1.0 standard (see §5, §5.1.2 in [3]).
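
(For readers unfamiliar with the term: in a Standard MIDI File, 
/metrical time/ means that delta-times are counted in abstract ticks. A 
fixed number of ticks per quarter-note is declared in the file's 
header, and a tick only acquires a real duration when a tempo event 
maps the quarter-note to microseconds. At 480 ticks per quarter-note 
and the default tempo of 500,000 microseconds per quarter-note, for 
example, one tick lasts 500000/480 microseconds, i.e. about 1.04 
milliseconds.)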

***

MEI:
The Background page [2] on the MEI website cites Maxwell (1981) [1] as 
the source of the three principal domains /physical/, /logical/ and 
/graphical/. In contrast, my 1982 insight was that the domains /space/ 
and /time/ are fundamental, and need to be clearly and radically 
distinguished. Maxwell himself says (at the beginning of §2.0 of his 
paper) that his "classification is not the only way that music notation 
could be broken up..."

So I'm thinking that Maxwell's domains are not as fundamental as the 
ones I found, and that mine lead to simpler, more general and more 
powerful results.

From my point of view, Maxwell's /logical/ domain seems particularly 
problematic:
Understandably for the date (1981), and given the other problems he was 
coping with, I think Maxwell has a too respectful attitude to the 
symbols he was dealing with. The then unrivalled supremacy of CWMN1900 
over all other notations leads him to think that he can assign fixed 
relative values to the duration symbols. That could, of course, also be 
explained in terms of his wanting to limit the scope of his project 
but, especially when one looks at legitimate, non-standard (e.g. 
Baroque) uses of the symbols (see §4.1.1 of [3]), his /logical/ domain 
still seems to be on rather shaky ground.
Being able to include notations containing /any/ kind of event-symbol in 
my model (see §4.2 of [3]) is exactly what's needed in order to create a 
consistent set of related schemas for all the world's event-based music 
notations...

So, having said all that, MEI's @dur attribute subclasses look to me 
like ad-hoc postulates that have been added to the paradigm to shore it 
up, without questioning its underlying assumptions. The result is that 
MEI has become over-complicated and unwieldy. That's a common theme in 
ageing paradigms... remember Ptolemy?
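
To make that concrete, here is a single (hypothetical) MEI note 
carrying several parallel duration encodings at once, using attributes 
from Jo's list (quoted below). The values are my own illustration: a 
written quarter-note that is simultaneously 480 MIDI-style ticks and 
0.5 seconds long (consistent at 120 quarter-notes per minute), starting 
on beat 1:

<note pname="c" oct="4" dur="4" dur.ppq="480" dur.real="0.5" tstamp="1"/>

Each attribute answers a different kind of question: how is the 
duration written, how many ticks does it last, how many seconds?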

Okay, maybe I'm being a bit provocative there. But am I justified? :-)

Hope that helps,
all the best,
James

[1] Maxwell (1981): http://dspace.mit.edu/handle/1721.1/15893

[2] https://music-encoding.org/resources/background.html

[3] https://james-ingram-act-two.de/writings/TickBasedTiming/tickBasedTiming.html

-- 

https://james-ingram-act-two.de
https://github.com/notator


On 16.04.2021 at 18:02, Johannes Kepper wrote:
> Dear all,
>
> I’m not much into this discussion, and haven’t really investigated the use cases behind this, so my answer may not be appropriate for the question asked. However, I believe that most of the requirements articulated here are safely covered by MEI. Looking at the attributes available on notes (see https://music-encoding.org/guidelines/v4/elements/note.html#attributes), there are plenty of different approaches available:
>
> @dur – Records the duration of a feature using the relative durational values provided by the data.DURATION datatype.
> @dur.ges – Records performed duration information that differs from the written duration.
> @dur.metrical – Duration as a count of units provided in the time signature denominator.
> @dur.ppq – Duration recorded as pulses-per-quarter note, e.g. MIDI clicks or MusicXML divisions.
> @dur.real – Duration in seconds, e.g. '1.732'.
> @dur.recip – Duration as an optionally dotted Humdrum *recip value.
>
> In addition, there is also
>
> @tstamp – Encodes the onset time in terms of musical time, i.e., beats[.fractional beat part], as expressed in the written time signature.
> @tstamp.ges – Encodes the onset time in terms of musical time, i.e., beats[.fractional beat part], as expressed in the written time signature.
> @tstamp.real – Records the onset time in terms of ISO time.
> @to – Records a timestamp adjustment of a feature's programmatically-determined location in terms of musical time; that is, beats.
> @synch – Points to elements that are synchronous with the current element.
> @when – Indicates the point of occurrence of this feature along a time line. Its value must be the ID of a when element elsewhere in the document.
>
> They’re all for slightly different purposes, and surely many of those attributes are not (well) supported by existing software, but they seem to offer good starting points for finding a model for the questions asked. It is important to keep in mind that music manifests in various forms – sound, notation, concepts (what _is_ a quarter?) – and that MEI tries to treat those "domains" as independently as possible. Of course, they’re all connected, but not being specific (enough) in that regard has done other formats no good…
>
> Hope this helps,
> jo
>
>
>> On 16.04.2021 at 15:35, Simon Wascher <bureau at tradmus.org> wrote:
>>
>> Hi all,
>>
>> On 14.04.2021 at 21:49, James Ingram <j.ingram at netcologne.de> wrote:
>>>>> Last January, I raised an issue about Tick-based Timing in the W3C Music Notation Community Group's MNX Repository [1], but it was closed in February without my being satisfied that it had been sufficiently discussed.
>>>>> I had the feeling that something important was being glossed over, so have been thinking hard about the subject over the past few weeks, and have now uploaded an article about it to my website [2].
>>>>> [...]
>> On 16.04.2021 at 11:06, James Ingram <j.ingram at netcologne.de> wrote:
>>> First: Did you intend your reply just to be private, or did you want to send it to the public list as well? I'm not sure.
>>> If you'd like this all to be public, I could send this to MEI-L as well...
>> I answered James Ingram off-list, but am now moving to MEI-L with my answer, as it seems the intention was to get answers on the list.
>> My full first answer to James Ingram is down at the end of this mail, if someone is interested. (I did not forward James Ingram's reply to me in full, as I did not want to forward someone else's private answer to me to the public.)
>>
>> On 15.04.2021 at 01:03, Simon Wascher <bureau at tradmus.org> wrote:
>>>> I would like to point you at Lauge Dideriksen's approach of notating music with CWMN symbols positioned according to the musical timing. I suppose that is a comparable approach to yours.
>> On 16.04.2021 at 11:06, James Ingram <j.ingram at netcologne.de> wrote:
>>> I took a look at Lauge Dideriksen's website, but can't see enough music examples to know quite what you mean by "positioned according to musical timing". I see that he (sometimes?) uses tuplets. Does he (sometimes?) position the symbols in (flat) space to represent (flat) time?
>>> In either case, that's not quite what I'm saying. I'm talking about the underlying machine-readable encoding of notation, not just the way it looks on the surface.
>> Maybe there is still no material on this online. He is talking about it at European Voices VI in Vienna, 27–30 September 2021.
>> It might be sensible to contact him directly.
>>
>> On 15.04.2021 at 01:03, Simon Wascher <bureau at tradmus.org> wrote:
>>>> Do you consider tick-based notation to be a way to represent phonemic and phonetic notation (interpretation) at the same time?
>> On 16.04.2021 at 11:06, James Ingram <j.ingram at netcologne.de> wrote:
>>> Here again, I'm not quite sure what you mean. Perhaps it would help if I again emphasise the difference between the surface appearance of a notation and its machine-readable encoding.
>> I see, your focus seems to be on machine-readability and the problem of the relation between CWMN and its machine playback.
>> My focus is the problem of the relation between CWMN and real live performance.
>> I am looking for tools to encode real live performances, using the symbols of CWMN but allowing the _display_ of the real durations of the live performance (the difference between the live performance and CWMN) to be included.
>>
>>
>> On 16.04.2021 at 11:06, James Ingram <j.ingram at netcologne.de> wrote:
>>> You ask about emic and etic, and the problem of notating traditional Scandinavian Polska or jodling:
>>> To get us on the same page, here's where I am: Transcriptions of music that is in an aural tradition always reflect what the transcriber thinks is important. Transcriptions often leave out nuances (timing, tonal inflexions etc.) that the original performers and their public would regard as essential.
>>> I think that aural traditions correctly ignore machine time (seconds, milliseconds), but that if we use machines to record them, we ultimately have to use such timings (in the machines). I don't think that matters, providing that the transcribers don't try to impose machine time (in the form of beats per second) too literally on their interpretations of the original performances.
>> Well, to be precise: in transcribing music, there are (at least) three points of view (versions of notation):
>>
>> 1. the musician's "emic" intention
>> 2. the machine's "phonetic" protocol (which can be automatically transformed into a duration-and-pitch notation with a certain level of accuracy, but which cannot know about light and heavy time or barlines, as these are cultural phenomena; the level of accuracy is itself already a cultural decision. But if the transformation is not into CWMN but, for example, into a time/pitch chart of the fundamental frequencies, the limits of readability of CWMN do not apply.)
>> 3. the transcriber's intention, which is usually called "etic" but is in fact "emic" to the transcriber.
>> (emic and etic is not my favourite wording)
>> (I am not worrying about the composer, as in my field music gets composed in sound; the composer is a musician here.)
>>
>> On 16.04.2021 at 11:06, James Ingram <j.ingram at netcologne.de> wrote:
>>> "Stress programs" in abc-notation:
>>> I can't find any references to "stress programs" at the abc site [2],
>> Ah, you are right, that is a kind of de facto standard, which is weakly documented. It is interpreted by abc2midi and BarFly (and maybe other programs).
>> It makes use of the R: header field of abc. Either the stress program is written there directly, or it is in an external file.
>> Here is one of the stress programs I use:
>>
>> * 37
>> Mazurka
>> 3/4
>> 3/4=35
>> 6
>> 120 1.4
>> 100 0.6
>> 110 1.2
>> 100 0.8
>> 115 1.32
>> 100 0.67
>>
>> so that is:
>>
>> "*37"	it starts with a number (which does not do anything).
>> "Mazurka"	is the identifying string used in the R: header field, connecting the abc file and the stress program for the playback program.
>> "3/4"	is the meter. The stress program only applies to abc-notation in this meter, so there may be stress programs with the same name but for different meters.
>> "3/4=35"	is the tempo indication.
>> "6"	is the number of sections the bar is split into in this stress program (a free choice). It should be followed by that number of describing lines.
>> "120 1.4"	describes the first section of the bar. "120" is the volume (between 0-127); "1.4" is the multiplier, the core of the thing, so to say: it says that the duration of the first sixth of the notated bar is to be played 1.4 times as long as it would be played at the given metronome tempo.
>> "100 0.6"	and so on.
>>
>> I attached BarFly's "Stress Programs" file, which also contains the description provided by the author of BarFly, Phil Taylor.
>> (I personally would prefer it if this mechanism were not limited to one single bar, but could be used to describe the durations of a chosen number of bars/beats.)
>>
>>
>> So, thanks for the moment,
>> and feel free to tell me if I should not send these longish and maybe not very clever e-mails to this list.
>>
>> Thanks,
>> Health,
>> Simon
>> Wascher
>>
>> Begin forwarded message:
>>> From: Simon Wascher <bureau at tradmus.org>
>>> Subject: Re: [MEI-L] Tick-based Timing
>>> Date: 15 April 2021 01:03:32 CEST
>>> To: James Ingram <j.ingram at netcologne.de>
>>>
>>> Hello,
>>>
>>> reading your post to the MEI mailing list (I am not active in MEI) I started to read your text
>>>> [2] https://james-ingram-act-two.de/writings/TickBasedTiming/tickBasedTiming.html
>>> and would like to just add my two cents of ideas about
>>>> Ticks carry both temporal and spatial information.
>>>> In particular, synchronous events at the beginning of a bar have the same tick.time, so:
>>>> The (abstract) tick durations of the events in parallel voices in a bar, add up to the same value.
>>>> In other words:
>>>> Bars “add up” in (abstract) ticks.
>>>> The same is true for parallel voices in systems (that are as wide as the page allows) even when there are no barlines, so:
>>>> Systems also “add up” in (abstract) ticks.
>>> * First I would like to point you at Lauge Dideriksen's approach of notating music with CWMN symbols positioned according to the musical timing. I suppose that is a comparable approach to yours.
>>>
>>> * About barlines I would like to add that barlines also represent human perception (the composer's, the musician's, the listener's or the transcriber's), as barlines do not exist in the audio signal.
>>> Barlines do not need to align. It is the music as a whole that keeps a common pace (the musicians stay together, but not necessarily at beats or barlines).
>>> It is even possible to play along with completely different barlines in mind; that really happens, I have experienced it myself.
>>>
>>> * Do you consider tick-based notation to be a way to represent phonemic and phonetic notation (interpretation) at the same time?
>>>
>>> * In many musical styles of traditional music, also in Europe, there are severe differences between emic and etic music perception. Typical and well-known examples are Polska-playing in the traditions of Scandinavia, or the problems of the scientific notation of Jodler/Jodel (Jodler interpretation has a very loose relation to beat). If you are looking for examples of perfect common pace in a music that treats the tension of timing between the ensemble members as a carrier of musical expression, have a look at central Polish traditional instrumental dance music.
>>>
>>> * About notational approaches: are you aware of the "Stress Programs" used with abc-notation to describe microtiming? It is a method where the bar is split up into a freely chosen number of fractions, each described by a multiplier (1 is the standard length of one fraction, so 0.76 is 0.76 times the standard length and 1.43 is 1.43 times the standard length).
>>>
>>> Not sure if this meets your intentions,
>>> Thanks,
>>> Health,
>>>
>>> Simon Wascher (Vienna; musician, transcriber of historical music notation; researcher in folk music)
>>>
>>>
>>>
>>>
>>> On 14.04.2021 at 21:49, James Ingram <j.ingram at netcologne.de> wrote:
>>>
>>>> Last January, I raised an issue about Tick-based Timing in the W3C Music Notation Community Group's MNX Repository [1], but it was closed in February without my being satisfied that it had been sufficiently discussed.
>>>> I had the feeling that something important was being glossed over, so have been thinking hard about the subject over the past few weeks, and have now uploaded an article about it to my website [2].
>>>> My conclusions are that Tick-based Timing
>>>> 	• has to do with the difference between absolute (mechanical, physical) time and performance practice,
>>>> 	• is relevant to the encoding of all the world's event-based music notations, not just CWMN1900,
>>>> 	• needs to be considered for the next generation of music encoding formats.
>>>> I would especially like to get some feedback from those working on non-western notations, so am posting this not only to the W3C MNCG's public mailing list, but also to MEI's.
>>>> All the best,
>>>> James Ingram
>>>> (notator)
>>>> [1] MNX Issue #217: https://github.com/w3c/mnx/issues/217
>>>> [2] https://james-ingram-act-two.de/writings/TickBasedTiming/tickBasedTiming.html
>>>>
>>>> --
>>>> https://james-ingram-act-two.de
>>>> https://github.com/notator
>>>> _______________________________________________
>>>> mei-l mailing list
>>>> mei-l at lists.uni-paderborn.de
>>>> https://lists.uni-paderborn.de/mailman/listinfo/mei-l
>> <Stress_Programs>
>> _______________________________________________
>> mei-l mailing list
>> mei-l at lists.uni-paderborn.de
>> https://lists.uni-paderborn.de/mailman/listinfo/mei-l
> Dr. Johannes Kepper
> Research Associate
>
> Beethovens Werkstatt: Genetische Textkritik und Digitale Musikedition
> Musikwiss. Seminar Detmold / Paderborn | Hornsche Straße 39 | D-32756 Detmold
> kepper at beethovens-werkstatt.de | +49 (0) 5231 / 975669
>
> www.beethovens-werkstatt.de
> Research project funded by the Akademie der Wissenschaften und der Literatur | Mainz
>
>
>
>
> _______________________________________________
> mei-l mailing list
> mei-l at lists.uni-paderborn.de
> https://lists.uni-paderborn.de/mailman/listinfo/mei-l
