[MEI-L] sounding vs. written pitch, <octave>
Byrd, Donald A.
donbyrd at indiana.edu
Wed Mar 15 02:09:08 CET 2017
It seems clear that addressing scordatura at a higher level than the note requires grouping notes by the string they're to be played on, something like @string="G", as well as saying -- maybe in the staffDef -- how each string transposes.
How about timpani parts like the ones my "Written Vs. Sounding Pitch" article discusses, including Beethoven's Third and Fourth symphonies? In the Eulenburg editions, and surely others, the timpani parts have no key signatures or accidentals. So movements in E-flat have the timpani playing only B-flats and E-flats, but they appear in the score as B's and E's. That's equivalent to transposing down a half-step, of course. But most movements of the Fourth are in B-flat, so the timpani play only B-flats and F's; they appear as B and F, so only the B-flats are transposed! A lot like scordatura, though much simpler. The best way to handle this might be to somehow say in the staffDef that all B's sound as B-flats.
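At the note level, MEI's gestural accidental can already capture this; here is a minimal sketch (the xml:id is invented) of a written B, printed without an accidental, that sounds as B-flat:

<note xml:id="timp-note-1" dur="4" oct="2" pname="b" accid.ges="f"/>

But that repeats the information on every note, which is just what a staffDef-level statement would avoid.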
--Don
On Mar 13, 2017, at 4:36 PM, "Roland, Perry D. (pdr4h)" <pdr4h at eservices.virginia.edu> wrote:
>
> @dis and @dis.place on <octave> capture the amount and direction of the shift. The application of this to the note level may be accomplished in a couple of ways -- at "run time" by looking for the notes affected by the <octave> instruction and applying its parameters, or at "encoding time" by transferring/reiterating the octave shift data on each <note> using @oct.ges.
>
> The same principle applies with regard to transposition: @trans.diat and @trans.semi capture the amount and direction of the transpositional shift and may be specified at the score, staff, and layer levels. The actual sounding pitch of a <note> may be determined at "run time" by modifying the note's pitch (given by @pname, @oct, @accid) or at "encoding time" by storing the results of this calculation in the note's @pname.ges, @oct.ges, and @accid.ges attributes.
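>
> For a concrete sketch of the staff-level case (the values here are invented for illustration): a B-flat clarinet staff carries the transposition, and a written d5 carries pre-computed gestural values:
>
> <staffDef n="2" lines="5" clef.shape="G" clef.line="2" trans.diat="-1" trans.semi="-2"/>
> ...
> <!-- written d5; sounds c5 once the staff transposition is applied -->
> <note dur="4" pname="d" oct="5" pname.ges="c" oct.ges="5"/>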
>
> I agree that scordatura isn't adequately addressed. I think what's required is @scord.diat and @scord.semi attributes that provide the amount and direction of the detuning. These attributes must be provided at the note level. The same process can be applied -- the results of the detuning can be calculated at "run time" or calculated and stored in the note's @pname.ges, @oct.ges, and @accid.ges attributes at "encoding time".
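>
> If such attributes existed, a violin G string detuned a whole step down to F might be encoded speculatively like this (@scord.diat and @scord.semi are only proposed above, not in the current schema):
>
> <!-- hypothetical attributes: written g3, played on the detuned string, sounds f3 -->
> <note dur="4" pname="g" oct="3" scord.diat="-1" scord.semi="-2" pname.ges="f" oct.ges="3"/>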
>
> I think of this process as a formula:
>
> @pname.ges = @pname + (@trans.diat and @trans.semi) + (@scord.diat and @scord.semi)
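>
> Worked through with invented values: a written d5 on a B-flat instrument (@trans.diat="-1", @trans.semi="-2") whose string is additionally detuned a whole step (@scord.diat="-1", @scord.semi="-2", using the proposed attributes) sounds a major third lower, at b-flat 4:
>
> <!-- d5 + (-1 diat., -2 semi.) + (-1 diat., -2 semi.) = b-flat 4 -->
> <note dur="4" pname="d" oct="5" pname.ges="b" accid.ges="f" oct.ges="4"/>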
>
> How to address scordatura at a higher level than the note isn't clear to me yet.
>
> --
> p.
>
>
>
> From: mei-l [mailto:mei-l-bounces at lists.uni-paderborn.de] On Behalf Of Craig Sapp
> Sent: Monday, March 13, 2017 12:50 PM
> To: Music Encoding Initiative
> Subject: Re: [MEI-L] sounding vs. written pitch, <octave>
>
>
> On 13 March 2017 at 04:08, Thomas Weber <tw at notabit.eu> wrote:
> For <octave> transpositions, it seems to be the convention that the written pitches (ignoring the <octave> line) are encoded. I suspect that this is not a conscious and well-founded decision but "just happened" like that because all the exporters/importers (and scorewriters themselves) lazily treat ottava lines as if they were generic lines without pitch-related meaning - similar to hairpins, pedal markings etc.
>
> I think the more useful way of encoding ottava situations would be to encode the actual logical pitch with @pname/@oct - that would be equivalent to the sounding pitch on non-transposing instruments. In any case, the specs should not leave it open what encoding approach to take.
>
> Yes, notes encoded under an ottava mark (<octave>) use the written pitch (SCORE-style encoding). For use in verovio, I also give an optional @oct.ges to allow proper generation of MIDI files for the music:
>
>
> <score>
>    <scoreDef xml:id="scoredef-311854">
>       <staffGrp xml:id="staffgrp-352709">
>          <staffDef xml:id="staffdef-922725" clef.shape="G" clef.line="2" meter.count="4" meter.unit="4" n="1" lines="5" />
>       </staffGrp>
>    </scoreDef>
>    <section xml:id="section-0000001991177130">
>       <measure xml:id="measure-L3" n="1">
>          <staff xml:id="staff-L3F1N1" n="1">
>             <layer xml:id="layer-L3F1N1" n="1">
>                <note xml:id="note-L4F1" dur="4" oct="5" pname="f" accid.ges="n"/>
>                <note xml:id="note-L5F1" dur="4" oct="5" pname="a" accid.ges="n"/>
>                <note xml:id="note-L6F1" dur="4" oct="5" pname="c" accid.ges="n"/>
>                <note xml:id="note-L8F1" dur="4" oct.ges="6" oct="5" pname="e" accid.ges="n"/>
>             </layer>
>          </staff>
>          <octave xml:id="octave-0000001759794455" staff="1" startid="#note-L8F1" endid="#note-L13F1" dis="8" dis.place="above" />
>       </measure>
>       <measure xml:id="measure-L9">
>          <staff xml:id="staff-L9F1N1" n="1">
>             <layer xml:id="layer-L9F1N1" n="1">
>                <note xml:id="note-L10F1" dur="4" oct.ges="6" oct="5" pname="g" accid.ges="n"/>
>                <note xml:id="note-L11F1" dur="4" oct.ges="7" oct="6" pname="c" accid.ges="n"/>
>                <note xml:id="note-L12F1" dur="4" oct.ges="6" oct="5" pname="g" accid.ges="n"/>
>                <note xml:id="note-L13F1" dur="4" oct.ges="6" oct="5" pname="e" accid.ges="n"/>
>             </layer>
>          </staff>
>       </measure>
>       <measure xml:id="measure-L15" right="end">
>          <staff xml:id="staff-L15F1N1" n="1">
>             <layer xml:id="layer-L15F1N1" n="1">
>                <note xml:id="note-L16F1" dur="4" oct="6" pname="c" accid.ges="n"/>
>                <note xml:id="note-L17F1" dur="4" oct="5" pname="a" accid.ges="n"/>
>                <note xml:id="note-L18F1" dur="4" oct="5" pname="f" accid.ges="n"/>
>             </layer>
>          </staff>
>       </measure>
>    </section>
> </score>
>
>
>
> Here is the visual rendering (if images are allowed inline in the message):
>
> [inline image: rendering of the MEI example above]
>
> If I did not add @oct.ges to the notes, then the <octave> mark would have to be processed to calculate @oct.ges on the notes it applies to before the note's pitch could be converted to MIDI. The current MIDI conversion in verovio does not do that, which is why I add them myself.
>
> I haven't dealt with transposing instruments and ottavas yet, but I would expect @oct.ges (and @pname.ges/@accid.ges) to work in the logical written domain rather than the logical sounding domain: the @*.ges for a transposed part for a clarinet would reference the transposed pitch rather than the sounding pitch (if there were a transposing directive encoded in the MEI data for the part). If the true sounding pitch were used in @*.ges, and likewise for <octave> and harmonics, that would get messy when you want to change a part from one transposition to another (such as printing a clarinet part in B-flat when the original data was in A).
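>
> To illustrate that reading with invented values: for a clarinet in A (trans.diat="-2", trans.semi="-3" on the staffDef), a written c5 under an 8va line would get @oct.ges="6" -- still the written/transposed domain -- and the sounding a5 would only emerge after the staff transposition is applied to the gestural pitch:
>
> <!-- gestural pitch c6 (written c5 + ottava); sounding = c6 + (-2 diat., -3 semi.) = a5 -->
> <note dur="4" pname="c" oct="5" oct.ges="6"/>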
>
> Scordatura causes interesting problems. It basically changes the instrument into a partially transposing instrument. To make it more complicated, the same written pitch could be transposed or not transposed (depending, for example, on which string of a violin is playing the written note). For scordatura, there should be a local transposition attribute at the <note> level. To get the sounding pitch of a note, the @*.ges information would give the logical pitch (tied to the written pitch), which would then be unpacked by first applying the scordatura transposition local to the note, and then the global transposition for the part.
>
> Brass instrumental parts would add a minor complication (particularly French horn), since the transposition is not necessarily global for the <score> but rather for <section>s.
>
> Humdrum **kern data takes the exact opposite approach, always encoding the sounding pitch. Then modifiers can be added to the data to work backward to the written pitch. Here is the Humdrum encoding of the same music:
>
> **kern
> *M4/4
> =1-
> 4ff
> 4aa
> 4cc
> *8va
> 4eee
> =2
> 4ggg
> 4cccc
> 4ggg
> 4eee
> *X8va
> =3
> 4ccc
> 4aa
> 4ff
> ==
> *-
>
> Here the *8va token turns on an octave-down transposition for the notes after it, until canceled by the *X8va token.
>
> But mixing the Humdrum model (logical sounding pitch) with the MEI model (logical written pitch) would probably be messy. Creating MIDI files from Humdrum data is relatively easy and creating written scores is relatively hard, while for MEI it is the other way around. Note that LilyPond is closer to Humdrum in this respect. For semantic (transformational) processing of data, the Humdrum method is easier overall in my opinion. I am not aware that MusicXML does any semantic treatment of scordatura.
>
> I came across a scordatura example recently which I posted in the music-encoding issues on Github:
> https://github.com/music-encoding/music-encoding/issues/415
> In this music any note above G3 is scordatura, except when there is a <dir> under the music, such as "IIa", which means to play the notes on the second string (which is not scordatura).
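>
> For what it's worth, the direction itself is easy enough to encode with <dir> (the ids and startid here are invented):
>
> <dir xml:id="dir-iia" staff="1" place="below" startid="#note-n1">IIa</dir>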
>
> Here is some discussion about representing harmonics in MEI which devolved from a different issue in verovio:
> https://github.com/rism-ch/verovio/issues/375#issuecomment-265260544
>
>
> -=+Craig
>
>
---
Donald Byrd
Woodrow Wilson Indiana Teaching Fellow
Adjunct Associate Professor of Informatics
Visiting Scientist, Research Technologies
Indiana University Bloomington