[MEI-L] Tick-based Timing

James Ingram j.ingram at netcologne.de
Thu Apr 22 13:09:22 CEST 2021


Hi Jo,

Thanks for your thoughts, and the links.

Yes, I also think that MNXcommon1900 has not yet reached the level of 
maturity necessary for a project like your MNX to MEI Converter. There 
are still too many open issues, in particular the ones I opened last 
January (the one on Actions is particularly interesting).
My own MNXtoSVG project is designed to be changed when MNXcommon1900 
changes. It's a test-bed, not a finished tool, though it could end up 
that way when/if MNXcommon1900 gets finalised.

********

Unfortunately I couldn't download all of the Milton Babbitt paper from 
the link you gave, but I was reading /Perspectives of New Music/ during 
the late 1960s, so I probably read it then, and there are lots of other 
people one might also mention.
Interesting as it is, I don't think we need to discuss the history of 
these ideas further here. Better to stay focussed on the topic as it 
presents itself to us now. :-)

********

I'm quite new to MEI so, finding the learning curve a bit steep when 
beginning at your link to §1.2.4 Profiles [1], I decided to read the 
document (the development version of the MEI Guidelines) from the top.
Here are my thoughts on §1.2 *Basic Concepts of MEI*:

§1.2.1 Musical Domains [2] describes the four musical domains used by 
MEI: /logical/, /gestural/, /visual/ and /analytical/. The text says 
that MEI does not keep these domains hermetically separate. That is, I 
think, rather confusing. Things would be clearer if the domains were 
explained a bit differently.
Here's how I understand them (please correct me if I get anything wrong):

The *logical domain* is the content of a machine-readable XML file. This 
is the MEI encoding.

The *visual domain* is a *spatial* instantiation of data in the XML 
file. It is a score, on paper, screen etc., and is created by a machine 
that can read the XML. The instantiation can be done automatically 
(using default styles), or by a human using the XML-reading machine 
(adding stylistic information ad lib.).

The *gestural domain* is a *temporal* instantiation of data in the XML 
file. This is a live or deferred performance created by a machine that 
can read the XML. A deferred performance is simply a recording that can 
be performed, without the addition of further information, by a machine 
designed for that purpose. The instantiation can be done automatically 
(using default styles), or by a human using the XML-reading machine 
(adding stylistic information ad lib.).
*N.B.*: Since the logical information in the original XML file is 
preserved in the *visual domain* (the score), a temporal instantiation 
(*gestural domain*) of the data in the original XML file can also be 
created by a human interpreting the spatial instantiation (the score), 
adding advanced stylistic information, stored in human memory, ad lib.
Historically, performance-practice information has been ignored by 
computer music algorithms because it's not directly accessible to the 
machines -- but it is nevertheless fundamental to the development of 
musical style. Failing to include it misses the whole point of music 
notation -- which is to be an /aide-mémoire/. Musical culture *is* 
performance practice stored in human memory.
As I point out in §5.1.3 of my article [5], attempting to ignore 
performance practice traditions was a common problem in the 20th 
century. It's /still/ not addressed by MEI, but needs to be:
We are entering an era in which Artificial Intelligence applications can 
be trained to learn (or develop) performance practice traditions. Given 
the information in an MEI file, an AI could be trained to play Mozart, 
Couperin or any other style correctly (i.e. in accordance with a 
particular tradition).

The *analytical domain* is, I think, just a question of advanced 
metadata, so consists of information that can be put in the XML file as 
ancillary data, without affecting the other information there.
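
To make my reading concrete, here's a little sketch (in Python; the field names are invented for illustration and are not MEI markup): the same logical data can be instantiated spatially (visual domain) or temporally (gestural domain), while analytical information is just ancillary data alongside.

# My own illustration (invented field names, not MEI markup): one logical
# event and its instantiations in the other domains.
logical = {"pname": "c", "oct": 4, "ticks": 840}    # logical domain: the encoding itself
analytical = {"harmonic_function": "tonic"}         # analytical domain: ancillary data

def visual_x(ticks_before_event, px_per_tick=0.05):
    # visual domain: a spatial position, e.g. proportional horizontal spacing
    return ticks_before_event * px_per_tick

def gestural_onset_ms(ms_per_preceding_tick):
    # gestural domain: a temporal position, the sum of the preceding ticks' durations
    return sum(ms_per_preceding_tick)

print(visual_x(840), gestural_onset_ms([10.0] * 840))   # 42.0 (pixels) and 8400.0 (ms)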

****

§1.2.2 Events and Controlevents
There seem to be close parallels between the way these are defined, and 
the way global and other "directions" are used in MNX.

****

§1.2.3 Timestamps in MEI [3]. (MNX's usage is currently very similar to 
MEI's.)
For a *Basic Concept*, this section seems to me to be curiously 
CWMN-centric. The document says that:

> timestamps rely solely on the numbers given in the meter signature.
What about music notations that don't use the CWMN duration symbols, or 
use them in non-standard ways? The same section also says:
> At this point, MEI uses real numbers only to express timestamps. In 
> case of (nested or complex) tuplets, this solution is inferior to 
> fractions because of rounding errors. It is envisioned to introduce a 
> fraction-based value for timestamps in a future revision of MEI.
The proposal in my article [5] is that events should be allocated 
tick-durations that are /integers/. If that is done, the future revision 
of MEI would use /integer/ (tick.time) timestamp values for both @tstamp 
and @tstamp2. This would also eliminate rounding errors when aligning 
event symbols.
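
To illustrate the difference, here's a little Python sketch. (The resolution of 6720 ticks per whole note is just an arbitrary choice for the example, not a value defined by MEI or prescribed in my article.)

from fractions import Fraction

# A triplet nested inside a quintuplet divides a whole note into 15 equal parts.
# Accumulating 1/15 as a binary floating-point number drifts:
print(sum([1 / 15] * 15) == 1.0)              # usually False: 1/15 has no exact float representation

# Fractions (as envisioned by the Guidelines) are exact:
print(sum([Fraction(1, 15)] * 15) == 1)       # True

# Integer ticks are exact too, and stay integers through nested tuplets:
TICKS_PER_WHOLE = 6720                        # arbitrary example resolution, divisible by 2, 3, 5 and 7
print(sum([TICKS_PER_WHOLE // 15] * 15) == TICKS_PER_WHOLE)   # True: 15 * 448 == 6720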
If the gestural (=temporal) domain is also needed, absolute time values 
(numbers of seconds) can be found by adding the millisecond durations of 
successive ticks. (Note that tempo is a redundant concept here.)
As I point out in [5], that's no problem in CWMN, and it allows *all* 
the world's event-based music notations to be treated in the same way 
(i.e. it really is a *Basic Concept*).
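
For example (a minimal Python sketch; the per-tick millisecond values are invented):

# If every tick has its own millisecond duration, absolute onset times follow
# by simple accumulation -- no separate tempo value is involved.
tick_ms = [10.0, 10.0, 12.5, 12.5, 9.0, 9.0]   # duration of each successive tick, in milliseconds

def onset_seconds(start_tick):
    # absolute onset time (in seconds) of an event that starts at tick index start_tick
    return sum(tick_ms[:start_tick]) / 1000.0

print(onset_seconds(3))                        # 0.0325: the first three ticks last 32.5 ms in total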

****

§1.2.4 MEI Profiles [1] and §1.2.5 Customizing MEI [4]

I'm speculating a bit here, but:
Obviously, MEI's existing profiles have to continue to be supported, but 
I think that providing a common strategy for coding durations would make 
it easier (more economical) to develop parsers. You say
> ideally, projects that work with more complex MEI profiles internally 
> also offer serializations to MEI Basic for others to use
Perhaps there should be a new version of MEI Basic that could be used 
not only in a similar way, but also for new customisations (e.g. for 
Asian notations). The existing customisations could then migrate 
gracefully if/when the new formats turn out to be better supported.

Hope that helps,
all the best,
James

[1] https://music-encoding.org/guidelines/dev/content/introduction.html#meiprofiles

[2] https://music-encoding.org/guidelines/dev/content/introduction.html#musicalDomains

[3] https://music-encoding.org/guidelines/dev/content/introduction.html#timestamps

[4] https://music-encoding.org/guidelines/dev/content/introduction.html#meicustomization

[5] https://james-ingram-act-two.de/writings/TickBasedTiming/tickBasedTiming.html



On 20.04.2021 at 19:31, Johannes Kepper wrote:
> Hi James,
>
> I’ve been loosely following the MNX efforts as well. About a year ago, I wrote a converter [1] from the then current MNX to MEI Basic (to which I’ll come back in a sec). However, MNX felt pretty unstable at that time, and the documentation didn’t always match the available examples, so I put that aside. As soon as MNX has reached a certain level of maturity, I will go back to it. In this context, I’m talking about what you call MNXcommon1900 only – the other aspects of MNX seem much less stable and fleshed out.
>
> Maxwell 1981 is just one reference for this concept of musical domains, and many other people had similar ideas. I lack the time to look it up right now, but I’m quite confident I have read similar stuff in 19th century literature on music philology – maybe Spitta or Nottebohm. Milton Babbitt [2] needs to be mentioned in any case. To be honest, it’s a quite obvious thing that music can be seen from multiple perspectives. I think it’s also obvious that there are more than those three perspectives mentioned so far: There is a plethora of analytical approaches to music, and Schenker’s and Riemann’s perspectives (to name just two) are quite different and may not be served well by any single approach. In my own project, we’re working on the genesis of musical works, so we’re interested in ink colors, writing orders, revision instructions written on the margins etc. And there’s much more than that: Asking a synesthete, we should probably consider encoding the different colors of music as well (and it’s surprising to see how far current MEI would already take us down that road…). Of course this doesn’t mean that each and everyone needs to use those categories, but in specific contexts, they might be relevant. So, I have no messianic zeal whatsoever to define a closed list of allowed musical domains – life (incl. music encoding) is more complex than that.
>
> There seems to be a misunderstanding of the intention and purpose of MEI. MEI is not a music encoding _format_, but a framework / toolkit to build such formats. One should not use the so-called MEI-all customization, which offers all possibilities of MEI at the same time. Instead, MEI should be cut down to the very repertoires / notation types, perspectives and intentions of a given encoding task to facilitate consistent markup that is spot-on for the (research) question at hand. Of course there is need and room for a common ground within MEI, where people and projects can share their encodings and re-use them for purposes other than their original uses. One such common ground is probably MEI Basic [3], which tries to simplify MEI as much as possible, allowing only one specific way of encoding things. It’s still rather new, and not many projects support it yet, but ideally, projects that work with more complex MEI profiles internally also offer serializations to MEI Basic for others to use – as they know their own data best. At the same time, MEI Basic may serve as an interface to other encoding formats like MusicXML, Humdrum, MIDI, you-name-it. However, this interchange is just one purpose of music encoding, and many other use cases are equally legitimate. MEI is based on the idea that there are many different types of music and manifestations thereof, numerous use-cases and reasons to encode them, and diverging intentions of what to achieve and do with those encodings, but that there are still some commonalities in there which are worth considering, as they help to better understand the phenomenon at hand. MEI is a mental model (which happens to be serialized as an XML format right now, but it could be expressed as JSON or even RDF instead…), but it’s not necessary to cover that model in full for any given encoding task.
>
> To make a long story short: If you feel like you don’t need a specific aspect of MEI, that’s perfectly fine, and nothing forces you to use that. Others may come to other conclusions, and that is equally fine. Admittedly, this flexibility comes at the price of a certain complexity of the model, but MEI’s intention is not to squeeze every use case into a prescribed static model, and rule out everything that doesn’t fit – it’s not a hammer that treats everything as nails. At the same time, MEI offers (among others) a simple (basic) starting point for the CWMN repertoire, but it is easy to build up from there, utilizing the full potential of the framework when and where necessary.
>
> I hope this helps to get a better picture of what MEI is, and how it relates to your own efforts on music encoding.
>
> All best,
> jo
>
>
> [1] Converter MNX to MEI: https://github.com/music-encoding/encoding-tools/blob/master/mnx2mei/mnx2mei.xsl
> [2] Milton Babbitt 1965: The Use of Computers in Musicological Research, https://doi.org/10.1515/9781400841226.202, p. 204f
> [3] MEI Basic: https://music-encoding.org/guidelines/dev/content/introduction.html#meiprofiles
>
>
>> On 18.04.2021 at 13:11, James Ingram <j.ingram at netcologne.de> wrote:
>>
>> Thanks, Simon and Jo, for your responses,
>>
>> @Simon: Please be patient, I'll come back to you, but I first need to sort out some basics with Jo.
>>
>> @Jo: Before replying to your posting, I first need to provide some context so that you can better understand what I'm saying:
>>
>> Context 1: Outline of the state of the debate in the MNX Repository
>> MNX is intended to be a set of next-generation, web-friendly music notation encodings, related via common elements in their schemas. The first format being developed is MNXcommon1900, which is intended to be the successor to MusicXML. It does not have to be backwardly compatible with MusicXML, so the co-chair wants to provide documentation comparing the different ways in which MNXcommon1900 and MusicXML encode a number of simple examples. Unfortunately, they first need to revise the MusicXML documentation in order to do that, so work on MNXcommon1900 has temporarily stopped. The intention is to start work on it again as soon as the MusicXML documentation revision is complete. After 5+ years of debate, MNXcommon1900 is actually in a fairly advanced state.
>> I have two MNX-related GitHub repositories (the best way to really understand software is to write it):
>>
>> 	• MNXtoSVG:  A (C#) desktop application that converts MNX files to SVG. This application is a test-bed for MNX's data structures, and successfully converts the first completed MusicXML-MNX comparison examples to (graphics only) SVG. When/if MNX includes temporal info, it will do that too (using a special namespace in the SVG).
>> 	• A fork of the MNX repository: This contains (among other things) the beginnings of a draft schema for MNXcommon1900. The intention is to plunder that schema for other schemas...
>> I'm looking for things that all event-based music notations have in common, so that software libraries can be used efficiently across all such notations. That's important if we want to develop standards that are consistent all over the web.
>>
>> Context 2: My background
>> I'm a relic of the '60s Avant-Garde. Left college in the early 1970s, and became K. Stockhausen's principal copyist 1974-2000. In the '60s, they were still trying to develop new notations, but that project collapsed quite suddenly in 1970 when all the leading composers gave it up (without solving anything), and reverted to using standard notation. In 1982, having learned a lot from my boss, and having had a few years practical experience pushing the dots around, I suddenly realised what had gone wrong, and wrote an article about it that was eventually published in 1985. The article contains a critical analysis of CWMN...
>> So I'm coming from a rather special niche in the practical world of music publishing, not from the academic world. In 1982, I was not aware of Maxwell (1981)  [1], and I hadn't realised until researching this post, how it relates to things like metrical time in the (1985) MIDI 1.0 standard (see §5, §5.1.2 in [3]).
>>
>> ***
>>
>> MEI:
>> The Background page [2] on the MEI website cites Maxwell (1981) [1] as the source of the three principal domains: physical, logical and graphical. In contrast, my 1982 insight was that the domains space and time are fundamental, and need to be clearly and radically distinguished. Maxwell himself says (at the beginning of §2.0 of his paper) that his "classification is not the only way that music notation could be broken up..."
>>
>> So I'm thinking that Maxwell's domains are not as fundamental as the ones I found, and that mine lead to simpler, more general and more powerful results.
>>
>>  From my point of view, Maxwell's logical domain seems particularly problematic:
>> Understandably for the date (1981), and the other problems he was coping with, I think Maxwell has too respectful an attitude to the symbols he was dealing with. The then unrivalled supremacy of CWMN1900 over all other notations leads him to think that he can assign fixed relative values to the duration symbols. That could, of course, also be explained in terms of him wanting to limit the scope of his project but, especially when one looks at legitimate, non-standard (e.g. Baroque) uses of the symbols (see §4.1.1 of [3]), his logical domain still seems to be on rather shaky ground.
>> Being able to include notations containing any kind of event-symbol in my model (see §4.2) is exactly what's needed in order to create a consistent set of related schemas for all the world's event-based music notations...
>>
>> So, having said all that, MEI's @dur attribute subclasses look to me like ad-hoc postulates that have been added to the paradigm to shore it up, without questioning its underlying assumptions. The result is that MEI has become over-complicated and unwieldy. That's a common theme in ageing paradigms... remember Ptolemy?
>>
>> Okay, maybe I'm being a bit provocative there. But am I justified? :-)
>>
>> Hope that helps,
>> all the best,
>> James
>>
>> [1] Maxwell (1981): http://dspace.mit.edu/handle/1721.1/15893
>>
>> [2] https://music-encoding.org/resources/background.html
>>
>> [3] https://james-ingram-act-two.de/writings/TickBasedTiming/tickBasedTiming.html
>>
>> --
>>
>> https://james-ingram-act-two.de
>> https://github.com/notator
>>
>>
>> On 16.04.2021 at 18:02, Johannes Kepper wrote:
>>> Dear all,
>>>
>>> I’m not much into this discussion, and haven’t really investigated the use cases behind this, so my answer may not be appropriate for the question asked. However, I believe that most of the requirements articulated here are safely covered by MEI. Looking at the attributes available on notes (see https://music-encoding.org/guidelines/v4/elements/note.html#attributes), there are plenty of different approaches available:
>>>
>>> @dur – Records the duration of a feature using the relative durational values provided by the data.DURATION datatype.
>>> @dur.ges – Records performed duration information that differs from the written duration.
>>> @dur.metrical – Duration as a count of units provided in the time signature denominator.
>>> @dur.ppq – Duration recorded as pulses-per-quarter note, e.g. MIDI clicks or MusicXML divisions.
>>> @dur.real – Duration in seconds, e.g. '1.732‘.
>>> @dur.recip – Duration as an optionally dotted Humdrum *recip value.
>>>
>>> In addition, there is also
>>>
>>> @tstamp – Encodes the onset time in terms of musical time, i.e., beats[.fractional beat part], as expressed in the written time signature.
>>> @tstamp.ges – Encodes the onset time in terms of musical time, i.e., beats[.fractional beat part], as expressed in the written time signature.
>>> @tstamp.real – Records the onset time in terms of ISO time.
>>> @to – Records a timestamp adjustment of a feature's programmatically-determined location in terms of musical time; that is, beats.
>>> @synch – Points to elements that are synchronous with the current element.
>>> @when – Indicates the point of occurrence of this feature along a time line. Its value must be the ID of a when element elsewhere in the document.
>>>
>>> They’re all for slightly different purposes, and surely many of those attributes are not (well) supported by existing software, but they seem to offer good starting points to find a model for the questions asked. It is important to keep in mind that music manifests in various forms – sound, notation, concepts (what _is_ a quarter?), and that MEI tries to treat those „domains“ as independently as possible. Of course, they’re all connected, but not being specific (enough) in that regard did no good to other formats…
>>>
>>> Hope this helps,
>>> jo
>>>
>>>
>>>
>>>> On 16.04.2021 at 15:35, Simon Wascher <bureau at tradmus.org> wrote:
>>>>
>>>> Hi alltogether,
>>>>
>>>> On 14.04.2021 at 21:49, James Ingram <j.ingram at netcologne.de> wrote:
>>>>
>>>>>>> Last January, I raised an issue about Tick-based Timing in the W3C Music Notation Community Group's MNX Repository [1], but it was closed in February without my being satisfied that it had been sufficiently discussed.
>>>>>>> I had the feeling that something important was being glossed over, so have been thinking hard about the subject over the past few weeks, and have now uploaded an article about it to my website [2].
>>>>>>> [...]
>>>>>>>
>>>> On 16.04.2021 at 11:06, James Ingram <j.ingram at netcologne.de> wrote:
>>>>
>>>>> First: Did you intend your reply just to be private, or did you want to send it to the public list as well? I'm not sure.
>>>>> If you'd like this all to be public, I could send this to MEI-L as well...
>>>>>
>>>> I answered James Ingram off list, but now move to MEI-L with my answer, as it seems the intention was to get answers on the list.
>>>> My full first answer to James Ingram is down at the end of this mail, if someone is interested. (I did not forward James Ingram's reply to me in full, as I did not want to forward someone else's private answer to me to the public list.)
>>>>
>>>> On 15.04.2021 at 01:03, Simon Wascher <bureau at tradmus.org> wrote:
>>>>
>>>>>> I would like to point you at Lauge Dideriksen's approach to notate music with CWMN symbols but positioned according to the musical timing. I suppose that is a comparable approach to yours.
>>>>>>
>>>> On 16.04.2021 at 11:06, James Ingram <j.ingram at netcologne.de> wrote:
>>>>
>>>>> I took a look at Lauge Dideriksen's website, but can't see enough music examples to know quite what you mean by "positioned according to musical timing". I see that he (sometimes?) uses tuplets. Does he (sometimes?) position the symbols in (flat) space to represent (flat) time?
>>>>> In either case, that's not quite what I'm saying. I'm talking about the underlying machine-readable encoding of notation, not just the way it looks on the surface.
>>>>>
>>>> Maybe there is still no material online about this. He is talking about it at European Voices VI in Vienna, 27–30 September 2021.
>>>> It might be sensible to contact him directly.
>>>>
>>>> On 15.04.2021 at 01:03, Simon Wascher <bureau at tradmus.org> wrote:
>>>>
>>>>>> Do you consider tick-based notation to be a way to represent phonemic and phonetic notation (interpretation) at the same time?
>>>>>>
>>>> On 16.04.2021 at 11:06, James Ingram <j.ingram at netcologne.de> wrote:
>>>>
>>>>> Here again, I'm not quite sure what you mean. Perhaps it would help if I again emphasise the difference between the surface appearance of a notation and its machine-readable encoding.
>>>>>
>>>> I see, your focus seems to be on machine-readability and the problem of the relation between CWMN and its machine playback.
>>>> My focus is the problem of the relation between CWMN and real live performance.
>>>> I am looking for tools to code real live performances, using the symbols of CWMN but making it possible to include the _display_ of the real durations of the live performance (the difference between the live performance and CWMN).
>>>>
>>>>
>>>> On 16.04.2021 at 11:06, James Ingram <j.ingram at netcologne.de> wrote:
>>>>
>>>>> You ask about emic and etic, and the problem of notating traditional Scandinavian Polska or jodling:
>>>>> To get us on the same page, here's where I am: Transcriptions of music that is in an aural tradition always reflect what the transcriber thinks is important. Transcriptions often leave out nuances (timing, tonal inflexions etc.) that the original performers and their public would regard as essential.
>>>>> I think that aural traditions correctly ignore machine time (seconds, milliseconds), but that if we use machines to record them, we ultimately have to use such timings (in the machines). I don't think that matters, providing that the transcribers don't try to impose machine time (in the form of beats per second) too literally on their interpretations of the original performances.
>>>>>
>>>> Well, to be precise: in transcribing music, there are (at least) three points of view (versions of notation):
>>>>
>>>> 1. the musician's "emic" intention
>>>> 2. the machine's "phonetic" protocol (which can be automatically transformed into a duration and pitch notation applying a certain level of accuracy, but which cannot know about light and heavy time and barlines, as these are cultural phenomena). The level of accuracy is indeed already a cultural decision, but: if the transformation is not into CWMN but, for example, into a time/pitch chart of the fundamental frequencies, the limits of readability of CWMN do not apply.
>>>> 3. the transcriber's intention, which is usually called "etic" but is in fact "emic" to the transcriber.
>>>> (emic and etic is not my favorite wording)
>>>> (I am not worrying about the composer, as in my field music gets composed sounding; the composer is a musician here.)
>>>>
>>>> On 16.04.2021 at 11:06, James Ingram <j.ingram at netcologne.de> wrote:
>>>>
>>>>> "Stress programs" in abc-notation:
>>>>> I can't find any references to "stress programs" at the abc site [2],
>>>>>
>>>> Ah, you are right, that is a kind of de facto standard, which is weakly documented. It is interpreted by abc2midi and BarFly (and maybe other programs).
>>>> It makes use of the R:header of abc. Either the stress program is written there directly or in an external file.
>>>> Here is one of the stress programs I use:
>>>>
>>>> * 37
>>>> Mazurka
>>>> 3/4
>>>> 3/4=35
>>>> 6
>>>> 120 1.4
>>>> 100 0.6
>>>> 110 1.2
>>>> 100 0.8
>>>> 115 1.32
>>>> 100 0.67
>>>>
>>>> so that is:
>>>>
>>>> "*37"	it starts with a number (that does not do anything).
>>>> "Mazurka"	is the identifying string used in the R:header field connecting abc-file and stress program for the playback program.
>>>> "3/4"	is the meter. The stress program only applies to abc-notation in this meter. So there may be stress programs with the same name, but for different meters.
>>>> "3/4=35"		is the tempo indication.
>>>> "6"	is the number of sections the bar is split up to in this stress program (a free choisse). So it should be followed by that number of describing lines.
>>>> "120 1.4"	describes the first section of the bar. "120" is the volume (beteen 0-127), "1.4" is the multiplier, the core of the thing, so to say: It says the duration of the first sixt of the notated bar is to be played 1.4 times as long than it would be played considering the metronome tempo given.
>>>> "100 0.6"	and so on.
>>>>
>>>> I attached BarFly's "Stress Programs" file, which also contains the description provided by the author of BarFly, Phil Taylor.
>>>> (I personally would prefer it if this mechanism were not limited to one single bar, but could be used to describe durations over a chosen number of bars/beats.)
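
(Just to check that I've understood the mechanism, here's a rough Python sketch of how I read the description above. It's not the actual abc2midi or BarFly code, and my reading of "3/4=35" as 35 bars per minute may be wrong.)

# A rough sketch of applying the Mazurka stress program above to one 3/4 bar.
# It follows the description in this thread, not the abc2midi/BarFly code.
sections = [            # (volume 0-127, duration multiplier), one per sixth of the bar
    (120, 1.40), (100, 0.60), (110, 1.20),
    (100, 0.80), (115, 1.32), (100, 0.67),
]

bar_seconds = 60.0 / 35                  # "3/4=35" read as 35 bars per minute
nominal = bar_seconds / len(sections)    # un-stressed duration of one sixth of the bar

for i, (volume, multiplier) in enumerate(sections, 1):
    print(f"section {i}: volume {volume}, {nominal * multiplier * 1000:.0f} ms")

# The multipliers sum to 5.99, so the stressed bar keeps (almost) its nominal length.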
>>>>
>>>>
>>>> So, thanks for the moment,
>>>> and feel free to tell me if I should not send such longish and maybe not very clever e-mails to this list.
>>>>
>>>> Thanks,
>>>> Health,
>>>> Simon
>>>> Wascher
>>>>
>>>> Begin forwarded message:
>>>>
>>>>> From: Simon Wascher <bureau at tradmus.org>
>>>>>
>>>>> Subject: Re: [MEI-L] Tick-based Timing
>>>>> Date: 15 April 2021 01:03:32 CEST
>>>>> To: James Ingram <j.ingram at netcologne.de>
>>>>>
>>>>>
>>>>> Hello,
>>>>>
>>>>> reading your post to the MEI mailing list (I am not active in MEI) I started to read your text
>>>>>
>>>>>> [2] https://james-ingram-act-two.de/writings/TickBasedTiming/tickBasedTiming.html
>>>>> and would like to just add my two cents of ideas about
>>>>>
>>>>>> Ticks carry both temporal and spatial information.
>>>>>> In particular, synchronous events at the beginning of a bar have the same tick.time, so:
>>>>>> The (abstract) tick durations of the events in parallel voices in a bar, add up to the same value.
>>>>>> In other words:
>>>>>> Bars “add up” in (abstract) ticks.
>>>>>> The same is true for parallel voices in systems (that are as wide as the page allows) even when there are no barlines, so:
>>>>>> Systems also “add up” in (abstract) ticks.
>>>>>>
>>>>> * First I would like to point you at Lauge Dideriksen's approach to notate music with CWMN symbols but positioned according to the musical timing. I suppose that is a comparable approach to yours.
>>>>>
>>>>> * About barlines I would like to add that barlines also represent human perception (the composer's, the musician's, the listener's or the transcriber's), as barlines do not exist in the audio signal.
>>>>> Barlines do not need to align. It is the music as a whole that keeps a common pace (the musicians stay together, but not necessarily at beats or barlines).
>>>>> It is even possible to play along with completely different barlines in mind; that really happens, I have experienced it myself.
>>>>>
>>>>> * Do you consider tick-based notation to be a way to represent phonemic and phonetic notation (interpretation) at the same time?
>>>>>
>>>>> * In many musical styles of traditional music, also in Europe, there are severe differences between emic and etic music perception. Typical and well known examples are Polska-playing in traditions of Scandinavia or the problems of scientific notation of Jodler/Jodel (Jodler interpretation has a very loose relation to beat). If you are looking for examples of perfect common pace in a music that treats the tension of timing between the ensemble members as a carrier of musical expression, have a look at central Polish traditional instrumental dance music.
>>>>>
>>>>> * About notational approaches: are you aware of the "Stress Programs" used with abc-notation to describe microtiming? It is a method where the bar is split up into a freely chosen number of fractions described by multipliers (1 is the standard length of one fraction, so 0.76 makes a fraction 0.76 times the standard length and 1.43 makes it 1.43 times the standard length).
>>>>>
>>>>> Not sure if this is meeting your intentions,
>>>>> Thanks,
>>>>> Health,
>>>>>
>>>>> Simon Wascher, (Vienna; musician, transcriber of historical music notation; researcher in folk music)
>>>>>
>>>>>
>>>>>
>>>>>
>>>>> On 14.04.2021 at 21:49, James Ingram <j.ingram at netcologne.de> wrote:
>>>>>
>>>>>
>>>>>> Last January, I raised an issue about Tick-based Timing in the W3C Music Notation Community Group's MNX Repository [1], but it was closed in February without my being satisfied that it had been sufficiently discussed.
>>>>>> I had the feeling that something important was being glossed over, so have been thinking hard about the subject over the past few weeks, and have now uploaded an article about it to my website [2].
>>>>>> My conclusions are that Tick-based Timing
>>>>>> 	• has to do with the difference between absolute (mechanical, physical) time and performance practice,
>>>>>> 	• is relevant to the encoding of all the world's event-based music notations, not just CWMN1900.
>>>>>> 	• needs to be considered for the next generation of music encoding formats.
>>>>>> I would especially like to get some feedback from those working on non-western notations, so am posting this not only to the W3C MNCG's public mailing list, but also to MEI's.
>>>>>> All the best,
>>>>>> James Ingram
>>>>>> (notator)
>>>>>> [1] MNX Issue #217:
>>>>>> https://github.com/w3c/mnx/issues/217
>>>>>>
>>>>>> [2]
>>>>>> https://james-ingram-act-two.de/writings/TickBasedTiming/tickBasedTiming.html
>>>>>>
>>>>>>
>>>>>> --
>>>>>>
>>>>>> https://james-ingram-act-two.de
>>>>>> https://github.com/notator
>>>>>>
>>> Dr. Johannes Kepper
>>> Research Associate
>>>
>>> Beethovens Werkstatt: Genetische Textkritik und Digitale Musikedition
>>> Musikwiss. Seminar Detmold / Paderborn | Hornsche Straße 39 | D-32756 Detmold
>>>
>>> kepper at beethovens-werkstatt.de | +49 (0) 5231 / 975669
>>>
>>>
>>> www.beethovens-werkstatt.de
>>>
>>> Research project funded by the Akademie der Wissenschaften und der Literatur | Mainz
>>>
>>>
>>>
>>>
>>>
>>>
> Dr. Johannes Kepper
> Research Associate
>
> Beethovens Werkstatt: Genetische Textkritik und Digitale Musikedition
> Musikwiss. Seminar Detmold / Paderborn | Hornsche Straße 39 | D-32756 Detmold
> kepper at beethovens-werkstatt.de | +49 (0) 5231 / 975669
>
> www.beethovens-werkstatt.de
> Research project funded by the Akademie der Wissenschaften und der Literatur | Mainz
>
>
>
>
-- 

https://james-ingram-act-two.de
https://github.com/notator


