From pdr4h at eservices.virginia.edu Tue Jan 3 15:20:07 2012 From: pdr4h at eservices.virginia.edu (Roland, Perry (pdr4h)) Date: Tue, 3 Jan 2012 14:20:07 +0000 Subject: [MEI-L] <graphic> and external references In-Reply-To: References: Message-ID: Hi, Alastair, In order to allow multiple end-points and to make MEI more conformant with TEI, the target attribute replaced xlink:href. In addition, @targettype was added to att.pointing so that *the targets* could be classified -- as opposed to the global type attribute, which provides a way to classify the element *to which it's attached.* Happy New Year! -- p. __________________________ Perry Roland Music Library University of Virginia P. O. Box 400175 Charlottesville, VA 22904 434-982-2702 (w) pdr4h (at) virginia (dot) edu From: mei-l-bounces at lists.uni-paderborn.de [mei-l-bounces at lists.uni-paderborn.de] on behalf of Alastair Porter [alastair at porter.net.nz] Sent: Saturday, December 31, 2011 4:05 PM To: mei-l at lists.uni-paderborn.de Subject: [MEI-L] <graphic> and external references Hi, I've been working on revalidating some old files against the latest version of the MEI schema (I've generated an RNG file from branches/MEI_dev) and have a question about the attributes of the <graphic> element. The att.pointing attribute list contains definitions for attributes pointing to external sources. @actuate, @role, @show, and @title are defined in the xlink schema; however, @target and @targettype are not. Old versions of the schema (e.g. the tag library at http://music-encoding.org/documentation/tagLibrary/graphic) appear to use xlink:href for external sources instead. Is there a reason that these target attributes are not the xlink ones (e.g. xlink:href)? Thanks, Alastair From zupftom at googlemail.com Wed Jan 4 17:41:05 2012 From: zupftom at googlemail.com (TW) Date: Wed, 4 Jan 2012 17:41:05 +0100 Subject: [MEI-L] @dur on <bTrem> and <fTrem> Message-ID: Does anything speak against having @dur on <bTrem> and <fTrem>, like on <chord>?
If no, I'd add a feature request to the issue tracker. Thomas From andrew.hankinson at mail.mcgill.ca Wed Jan 4 18:38:46 2012 From: andrew.hankinson at mail.mcgill.ca (Andrew Hankinson, Mr) Date: Wed, 4 Jan 2012 17:38:46 +0000 Subject: [MEI-L] @dur on <bTrem> and <fTrem> In-Reply-To: <3647_1325695279_4F04812F_3647_66_1_CAEB1mAq0AhWd5uGbCzc5evrDYK-OeusBazMfdBB7J-BXTQ6XTA@mail.gmail.com> References: <3647_1325695279_4F04812F_3647_66_1_CAEB1mAq0AhWd5uGbCzc5evrDYK-OeusBazMfdBB7J-BXTQ6XTA@mail.gmail.com> Message-ID: Hi Thomas, It sounds OK to me (I'll leave the logistics up to other people to chime in on), but we will need to make sure we tag it for the release *after* 2012 since the schema is now frozen. I've added a new tag called "Milestone-Release2013" (if someone can think of something better, I'll change it). Please use that to tag your feature request. Cheers, -Andrew On 2012-01-04, at 11:41 AM, TW wrote: > Does anything speak against having @dur on <bTrem> and <fTrem>, like > on <chord>? If no, I'd add a feature request to the issue tracker. > > Thomas > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l From zupftom at googlemail.com Wed Jan 4 18:47:03 2012 From: zupftom at googlemail.com (TW) Date: Wed, 4 Jan 2012 18:47:03 +0100 Subject: [MEI-L] @dur on <bTrem> and <fTrem> In-Reply-To: References: <3647_1325695279_4F04812F_3647_66_1_CAEB1mAq0AhWd5uGbCzc5evrDYK-OeusBazMfdBB7J-BXTQ6XTA@mail.gmail.com> Message-ID: I would have tagged it "Priority-Low", which is already described as "might slip to later milestone". 2012/1/4 Andrew Hankinson, Mr : > Hi Thomas, > > It sounds OK to me (I'll leave the logistics up to other people to chime in on), but we will need to make sure we tag it for the release *after* 2012 since the schema is now frozen. > > I've added a new tag called "Milestone-Release2013" (if someone can think of something better, I'll change it). Please use that to tag your feature request.
> > Cheers, > -Andrew > > On 2012-01-04, at 11:41 AM, TW wrote: > >> Does anything speak against having @dur on <bTrem> and <fTrem>, like >> on <chord>? If no, I'd add a feature request to the issue tracker. >> >> Thomas >> >> _______________________________________________ >> mei-l mailing list >> mei-l at lists.uni-paderborn.de >> https://lists.uni-paderborn.de/mailman/listinfo/mei-l > > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l From pdr4h at eservices.virginia.edu Wed Jan 4 20:09:34 2012 From: pdr4h at eservices.virginia.edu (Roland, Perry (pdr4h)) Date: Wed, 4 Jan 2012 19:09:34 +0000 Subject: [MEI-L] @dur on <bTrem> and <fTrem> In-Reply-To: References: Message-ID: Adding @dur on <bTrem> and <fTrem>, or more precisely, making them members of the att.duration.musical and att.augmentdots classes, is ok with me. Doing so would make these elements consistent with the <chord> element; that is, as an encoding shortcut, @dur can be placed on the <bTrem> or <fTrem> parent element instead of on the child <note> or <chord> elements. But, because these attributes can't be disallowed on the child elements (they're needed when <note> and <chord> occur outside of <bTrem> and <fTrem>, of course), it will be valid for @dur and @dots to occur in either or both places. Perhaps schematron rules could be created that disallow @dur and @dots on the child elements when they exist on the parent. -- p. __________________________ Perry Roland Music Library University of Virginia P. O. Box 400175 Charlottesville, VA 22904 434-982-2702 (w) pdr4h (at) virginia (dot) edu ________________________________________ From: mei-l-bounces at lists.uni-paderborn.de [mei-l-bounces at lists.uni-paderborn.de] on behalf of TW [zupftom at googlemail.com] Sent: Wednesday, January 04, 2012 11:41 AM To: Music Encoding Initiative Subject: [MEI-L] @dur on <bTrem> and <fTrem> Does anything speak against having @dur on <bTrem> and <fTrem>, like on <chord>? If no, I'd add a feature request to the issue tracker.
Thomas _______________________________________________ mei-l mailing list mei-l at lists.uni-paderborn.de https://lists.uni-paderborn.de/mailman/listinfo/mei-l From zupftom at googlemail.com Wed Jan 4 20:32:03 2012 From: zupftom at googlemail.com (TW) Date: Wed, 4 Jan 2012 20:32:03 +0100 Subject: [MEI-L] @dur on and In-Reply-To: References: Message-ID: I submitted the issue for the 2013 milestone. 2012/1/4 Roland, Perry (pdr4h) : > Adding @dur on and , or more precisely, making them members of the att.duration.musical and att.augmentdots classes, is ok with me. > > Doing so would make these elements consistent with the element; that is, as an encoding shortcut, @dur can be placed on the or parent element instead of on the child or elements. > > But, because these attributes can't be disallowed on the child elements (they're needed when and occur outside of and of course), it will be valid for @dur and @dots to occur in either or both places. ?Perhaps schematron rules could be created that disallow @dur and @dots on the child elements when they exist on the parent. > > -- > p. > > __________________________ > Perry Roland > Music Library > University of Virginia > P. O. Box 400175 > Charlottesville, VA 22904 > 434-982-2702 (w) > pdr4h (at) virginia (dot) edu > ________________________________________ > From: mei-l-bounces at lists.uni-paderborn.de [mei-l-bounces at lists.uni-paderborn.de] on behalf of TW [zupftom at googlemail.com] > Sent: Wednesday, January 04, 2012 11:41 AM > To: Music Encoding Initiative > Subject: [MEI-L] @dur on and > > Does anything speak against having @dur on and , like > on ? ?If no, I'd add a feature request to the issue tracker. 
> > Thomas > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l From kepper at edirom.de Sat Jan 7 08:23:59 2012 From: kepper at edirom.de (Johannes Kepper) Date: Sat, 7 Jan 2012 08:23:59 +0100 Subject: [MEI-L] @dur on <bTrem> and <fTrem> In-Reply-To: References: Message-ID: <53DE0E79-F27A-4F59-AF88-C2D26D42CCDA@edirom.de> Am 04.01.2012 um 20:09 schrieb Roland, Perry (pdr4h): > Adding @dur on <bTrem> and <fTrem>, or more precisely, making them members of the att.duration.musical and att.augmentdots classes, is ok with me. > > Doing so would make these elements consistent with the <chord> element; that is, as an encoding shortcut, @dur can be placed on the <bTrem> or <fTrem> parent element instead of on the child <note> or <chord> elements. > > But, because these attributes can't be disallowed on the child elements (they're needed when <note> and <chord> occur outside of <bTrem> and <fTrem>, of course), it will be valid for @dur and @dots to occur in either or both places. Perhaps schematron rules could be created that disallow @dur and @dots on the child elements when they exist on the parent. I wonder if we really want to disallow these attributes on the child elements. I have no access to Read etc. currently, but if I remember correctly, at least one of them allows chords to contain notes with differing length. I'm not sure if I would like to follow this argument, but I don't think that MEI should disallow such a situation. I'm perfectly happy to address this in the Guidelines and make sure that this would be a rather uncommon and unexpected situation, but although Schematron certainly can suppress @dur within chords and trems, I wouldn't recommend it. Just my two drachma... Johannes > > -- > p. > > __________________________ > Perry Roland > Music Library > University of Virginia > P. O.
Box 400175 > Charlottesville, VA 22904 > 434-982-2702 (w) > pdr4h (at) virginia (dot) edu > ________________________________________ > From: mei-l-bounces at lists.uni-paderborn.de [mei-l-bounces at lists.uni-paderborn.de] on behalf of TW [zupftom at googlemail.com] > Sent: Wednesday, January 04, 2012 11:41 AM > To: Music Encoding Initiative > Subject: [MEI-L] @dur on and > > Does anything speak against having @dur on and , like > on ? If no, I'd add a feature request to the issue tracker. > > Thomas > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l From pdr4h at eservices.virginia.edu Mon Jan 9 02:59:27 2012 From: pdr4h at eservices.virginia.edu (Roland, Perry (pdr4h)) Date: Mon, 9 Jan 2012 01:59:27 +0000 Subject: [MEI-L] @dur on and In-Reply-To: <53DE0E79-F27A-4F59-AF88-C2D26D42CCDA@edirom.de> References: , <53DE0E79-F27A-4F59-AF88-C2D26D42CCDA@edirom.de> Message-ID: Noted, and anticipated. Hence, the word "perhaps". :) I probably should have said that schematron rules can be written to control this situation *when it is not desirable to allow it.* -- p. __________________________ Perry Roland Music Library University of Virginia P. O. Box 400175 Charlottesville, VA 22904 434-982-2702 (w) pdr4h (at) virginia (dot) edu ________________________________________ From: mei-l-bounces at lists.uni-paderborn.de [mei-l-bounces at lists.uni-paderborn.de] on behalf of Johannes Kepper [kepper at edirom.de] Sent: Saturday, January 07, 2012 2:23 AM To: Music Encoding Initiative Subject: Re: [MEI-L] @dur on and Am 04.01.2012 um 20:09 schrieb Roland, Perry (pdr4h): > Adding @dur on and , or more precisely, making them members of the att.duration.musical and att.augmentdots classes, is ok with me. 
> > Doing so would make these elements consistent with the element; that is, as an encoding shortcut, @dur can be placed on the or parent element instead of on the child or elements. > > But, because these attributes can't be disallowed on the child elements (they're needed when and occur outside of and of course), it will be valid for @dur and @dots to occur in either or both places. Perhaps schematron rules could be created that disallow @dur and @dots on the child elements when they exist on the parent. I wonder if we really want to disallow these attributes on the child elements. I have no access to Read etc. currently, but if I remember correctly, at least one of them allows chords to contain notes with differing length. I'm not sure if I would like to follow this argumentation, but I don't think that MEI should disallow such a situation. I'm perfectly happy to address this in the Guidelines and make sure that this would be a rather uncommon and unexpected situation, but although Schematron certainly can suppress @dur within chords and trems, I wouldn't recommend it. Just my two drachma? Johannes > > -- > p. > > __________________________ > Perry Roland > Music Library > University of Virginia > P. O. Box 400175 > Charlottesville, VA 22904 > 434-982-2702 (w) > pdr4h (at) virginia (dot) edu > ________________________________________ > From: mei-l-bounces at lists.uni-paderborn.de [mei-l-bounces at lists.uni-paderborn.de] on behalf of TW [zupftom at googlemail.com] > Sent: Wednesday, January 04, 2012 11:41 AM > To: Music Encoding Initiative > Subject: [MEI-L] @dur on and > > Does anything speak against having @dur on and , like > on ? If no, I'd add a feature request to the issue tracker. 
> > Thomas > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l _______________________________________________ mei-l mailing list mei-l at lists.uni-paderborn.de https://lists.uni-paderborn.de/mailman/listinfo/mei-l From raffaeleviglianti at gmail.com Thu Jan 19 17:54:47 2012 From: raffaeleviglianti at gmail.com (Raffaele Viglianti) Date: Thu, 19 Jan 2012 16:54:47 +0000 Subject: [MEI-L] Drum notation, or...? Message-ID: Dear list, Please forgive me if this is a silly question with a simple answer, but I've been asked to produce an encoding of this: http://www.warmuseum.ca/cwm/exhibitions/guerre/photo-e.aspx?PageId=4.B.3&photo=3.D.8.c&f=%2Fcwm%2Fexhibitions%2Fguerre%2Fa-soldiers-life-e.aspx What is the text above the staff? I suspect it is another staff for some sort of drum, probably a military drum. However, I've never come across anything like this and I can't really make sense of it, except maybe for a few things like "r" for roll. Does anyone know what that is and how to read it? And also: how would one go about encoding that in MEI? Best wishes, Raffaele -------------- next part -------------- An HTML attachment was scrubbed... URL: From andrew.hankinson at mail.mcgill.ca Thu Jan 19 18:17:54 2012 From: andrew.hankinson at mail.mcgill.ca (Andrew Hankinson, Mr) Date: Thu, 19 Jan 2012 17:17:54 +0000 Subject: [MEI-L] Drum notation, or...? In-Reply-To: <28671_1326992122_4F184AF9_28671_243_1_CAMyHAnPNO1JSrVOaqP+YdFr2TXrTb0vHxeTk=_cHnPY9wUNX5Q@mail.gmail.com> References: <28671_1326992122_4F184AF9_28671_243_1_CAMyHAnPNO1JSrVOaqP+YdFr2TXrTb0vHxeTk=_cHnPY9wUNX5Q@mail.gmail.com> Message-ID: <66EF65EF-ABE2-4BB8-BE6B-FC95CCC470EC@mail.mcgill.ca> Hi Raffaele, That's good ol' solfege notation!
d = do r = re m = mi etc... -Andrew On 2012-01-19, at 11:54 AM, Raffaele Viglianti wrote: Dear list, Please forgive me if this is a silly question with a simple answer, but I've been asked to produce an encoding of this: http://www.warmuseum.ca/cwm/exhibitions/guerre/photo-e.aspx?PageId=4.B.3&photo=3.D.8.c&f=%2Fcwm%2Fexhibitions%2Fguerre%2Fa-soldiers-life-e.aspx What is the text above the staff? I suspect it is another staff for some sort of drum, probably a military drum. However, I've never come across anything like this and I can't really make sense of it, except maybe for a few things like "r" for roll. Does anyone know what that is and how to read it? And also: how would one go about encoding that in MEI? Best wishes, Raffaele _______________________________________________ mei-l mailing list mei-l at lists.uni-paderborn.de https://lists.uni-paderborn.de/mailman/listinfo/mei-l From craigsapp at gmail.com Thu Jan 19 18:26:29 2012 From: craigsapp at gmail.com (Craig Sapp) Date: Thu, 19 Jan 2012 09:26:29 -0800 Subject: [MEI-L] Drum notation, or...? In-Reply-To: References: Message-ID: Hi Raffaele, That is Tonic Sol-Fa. I have it in a list of music representations: http://www.ccarh.org/courses/253/link/index.html#TonicSolFa "Text-based notation system which can be written on a standard typewriter, developed by John Curwen in the middle of the 19th century in England. Similar in functionality to Shape-note notation but without staff lines." en.wikipedia.org/wiki/Tonic_sol-fa www.mcsr.olemiss.edu/~mudws/notes/solfa.html (example) -=+Craig From raffaeleviglianti at gmail.com Thu Jan 19 18:36:01 2012 From: raffaeleviglianti at gmail.com (Raffaele Viglianti) Date: Thu, 19 Jan 2012 17:36:01 +0000 Subject: [MEI-L] Drum notation, or...? In-Reply-To: References: Message-ID: Ah! Thanks Andrew and Craig. I've never come across it before. Ok, it makes sense, but is it just me or is that all a semitone up compared to the notation below?
Any suggestions for encoding this in MEI? I'd think the only way at the moment is by adding directions on top of notes, but it doesn't sound great. Best, Raffaele On Thu, Jan 19, 2012 at 5:26 PM, Craig Sapp wrote: > Hi Raffaele, > > That is Tonic Sol-Fa. > > I have it in a list of music representations: > http://www.ccarh.org/courses/253/link/index.html#TonicSolFa > > "Text-based notation system which can be written on a standard > typewriter, developed by John Curwen in the middle of the 19th century > in England. Similar in functionality to Shape-note notation but > without staff lines." > > en.wikipedia.org/wiki/Tonic_sol-fa > > www.mcsr.olemiss.edu/~mudws/notes/solfa.html (example) > > -=+Craig > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l > -------------- next part -------------- An HTML attachment was scrubbed... URL: From craigsapp at gmail.com Thu Jan 19 18:55:12 2012 From: craigsapp at gmail.com (Craig Sapp) Date: Thu, 19 Jan 2012 09:55:12 -0800 Subject: [MEI-L] Drum notation, or...? In-Reply-To: References: Message-ID: Hi Raffaele, > I've never come across it before. Ok, it makes sense, but is it just me or > is that all a semitone up compared to the notation below? Only because you are thinking (like a southern European) in a fixed-do system. The Tonic Sol-Fa method placed "do" on the tonic note, which in this case is B-flat (or B for Germans :-). Technically this is a parallel equivalent representation of the graphical music notation, so it should not necessarily be represented inside MEI (unless you want to duplicate encoding work), but rather be generated from the musical data in MEI.
-=+Craig From pdr4h at eservices.virginia.edu Thu Jan 19 20:04:51 2012 From: pdr4h at eservices.virginia.edu (Roland, Perry (pdr4h)) Date: Thu, 19 Jan 2012 19:04:51 +0000 Subject: [MEI-L] Drum notation, or...? In-Reply-To: References: , Message-ID: Craig is correct -- the ideal way to get the solfa is to generate it. BUT, is a member of att.solfa, which provides @psolfa for capturing solfa along with (or instead of) pitch name. -- p. __________________________ Perry Roland Music Library University of Virginia P. O. Box 400175 Charlottesville, VA 22904 434-982-2702 (w) pdr4h (at) virginia (dot) edu ________________________________________ From: mei-l-bounces+pdr4h=virginia.edu at lists.uni-paderborn.de [mei-l-bounces+pdr4h=virginia.edu at lists.uni-paderborn.de] on behalf of Craig Sapp [craigsapp at gmail.com] Sent: Thursday, January 19, 2012 12:55 PM To: Music Encoding Initiative Subject: Re: [MEI-L] Drum notation, or...? Hi Raffaele, > I've never come across it before. Ok, it makes sense, but is it just me or > is that all a semitone up compared to the notation below? Only because you are thinking (like a southern European) in a fixed-do system. The Tonic Sol-Fa method placed "do" on the tonic note, which in this case is B-flat (or B for Germans :-). Technically this is a parallel equivalent representation of the graphical music notation, so it should not necessarily be represented inside MEI (unless you want to duplicate encoding work), but rather be generated from the musical data in MEI. The only piece of non-graphical information needed should be the tonic note, then the notation can be derived from the MEI data which also generates the graphical notation. 
-=+Craig _______________________________________________ mei-l mailing list mei-l at lists.uni-paderborn.de https://lists.uni-paderborn.de/mailman/listinfo/mei-l From raffaeleviglianti at gmail.com Thu Jan 19 21:07:20 2012 From: raffaeleviglianti at gmail.com (Raffaele Viglianti) Date: Thu, 19 Jan 2012 20:07:20 +0000 Subject: [MEI-L] Drum notation, or...? In-Reply-To: References: Message-ID: Ok, this all makes sense now. Many thanks! Raffaele On Jan 19, 2012 7:04 PM, "Roland, Perry (pdr4h)" < pdr4h at eservices.virginia.edu> wrote: > Craig is correct -- the ideal way to get the solfa is to generate it. > BUT, is a member of att.solfa, which provides @psolfa for capturing > solfa along with (or instead of) pitch name. > > -- > p. > > > __________________________ > Perry Roland > Music Library > University of Virginia > P. O. Box 400175 > Charlottesville, VA 22904 > 434-982-2702 (w) > pdr4h (at) virginia (dot) edu > ________________________________________ > From: mei-l-bounces+pdr4h=virginia.edu at lists.uni-paderborn.de[mei-l-bounces+pdr4h= > virginia.edu at lists.uni-paderborn.de] on behalf of Craig Sapp [ > craigsapp at gmail.com] > Sent: Thursday, January 19, 2012 12:55 PM > To: Music Encoding Initiative > Subject: Re: [MEI-L] Drum notation, or...? > > Hi Raffaele, > > > I've never come across it before. Ok, it makes sense, but is it just me > or > > is that all a semitone up compared to the notation below? > > Only because you are thinking (like a southern European) in a fixed-do > system. The Tonic Sol-Fa method placed "do" on the tonic note, which > in this case is B-flat (or B for Germans :-). > > Technically this is a parallel equivalent representation of the > graphical music notation, so it should not necessarily be represented > inside MEI (unless you want to duplicate encoding work), but rather be > generated from the musical data in MEI. 
The only piece of > non-graphical information needed should be the tonic note, then the > notation can be derived from the MEI data which also generates the > graphical notation. > > > -=+Craig > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l > -------------- next part -------------- An HTML attachment was scrubbed... URL: From pdr4h at eservices.virginia.edu Thu Feb 9 16:07:46 2012 From: pdr4h at eservices.virginia.edu (Roland, Perry (pdr4h)) Date: Thu, 9 Feb 2012 15:07:46 +0000 Subject: [MEI-L] @trans.diat and @trans.semi Message-ID: Looking at the documentation for these 2 attributes, it isn't exactly clear what values are expected. The documentation for trans.diat says, "records the amount of diatonic pitch shift, e.g. C to C# = 0, C to Db = 1. Transposition requires both trans.diat and trans.semi attributes in order to distinguish the difference, for example, between a transposition from C to C# and one from C to Db." while for trans.semi, it says, "contains the amount of pitch shift in semitones, C to C# = 1, C to Db = 1. Transposition requires both trans.diat and trans.semi attributes in order to distinguish the difference, for example, between a transposition from C to C# and one from C to Db." So, for the clarinet in Bb, should the values be trans.diat="1" and trans.semi="2", or should they read trans.diat="-1" and trans.semi="-2"? It seems to me that the first set of values is somewhat redundant since the written pitches for the clarinet are already recorded as one step "too high" with respect to concert pitch. This redundancy, however, could be used to mean that the written pitches are not errors; that is, that an F# on the clarinet staff is correct in the concert key of C.
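As an aside, the arithmetic behind the first convention can be sketched in a few lines. The helper below is an editor's illustration, not MEI tooling or the musicxml2mei transform: it assumes trans.diat/trans.semi hold the concert-to-written shift (1 and 2 for a B-flat clarinet), so subtracting them from a written pitch recovers the correctly spelled concert pitch.

```python
# Hypothetical sketch (not part of MEI): treat trans.diat / trans.semi as
# the concert->written shift and subtract them from a written pitch to
# recover the spelled sounding pitch.

STEPS = ["c", "d", "e", "f", "g", "a", "b"]
BASE = [0, 2, 4, 5, 7, 9, 11]  # semitones of each diatonic step above C

def written_to_concert(step, alter, octave, trans_diat, trans_semi):
    """Return (step, alter, octave) of the sounding pitch."""
    idx = STEPS.index(step)
    diat = octave * 7 + idx - trans_diat                  # diatonic steps from C0
    semi = octave * 12 + BASE[idx] + alter - trans_semi   # semitones from C0
    new_oct, new_idx = divmod(diat, 7)
    new_alter = semi - (new_oct * 12 + BASE[new_idx])     # accidental needed
    return STEPS[new_idx], new_alter, new_oct

# B-flat clarinet (trans.diat=1, trans.semi=2):
print(written_to_concert("d", 0, 4, 1, 2))  # written D4 sounds C4
print(written_to_concert("f", 1, 4, 1, 2))  # written F#4 sounds E4
print(written_to_concert("c", 0, 4, 1, 2))  # written C4 sounds Bb3 (alter -1)
```

Negating trans_diat and trans_semi would express the opposite convention, where the stored values are the written-to-concert "correction" themselves.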
Another way to approach these attributes (which is, by the way, used in the musicxml2mei XSL transform) is to use them to indicate the amount of "correction" necessary to achieve the concert pitch from the written one. In this case, the second set of values would be more helpful; that is, given a pitch of D4, to get a performable concert pitch, subtract the value of trans.semi. (Or perhaps more technically correct, add -2 semitones.) So, having made the decision once before (when writing the XSL transform), it would be convenient to make the documentation agree with the already-made decision. However, if anyone has a good argument, other than tradition, why this decision (and the documentation) should be reversed, please speak up. -- p. __________________________ Perry Roland Music Library University of Virginia P. O. Box 400175 Charlottesville, VA 22904 434-982-2702 (w) pdr4h (at) virginia (dot) edu From craigsapp at gmail.com Sat Feb 11 03:23:03 2012 From: craigsapp at gmail.com (Craig Sapp) Date: Fri, 10 Feb 2012 18:23:03 -0800 Subject: [MEI-L] @trans.diat and @trans.semi In-Reply-To: References: Message-ID: Hi Perry, The specification seems to come from Humdrum trans tool (which in turn comes from some 2D algorithm of transposition): http://musicog.ohio-state.edu/Humdrum/commands/trans.html And I have a transposing program for Humdrum which implements the base-40 method of transposing: http://extra.humdrum.org/man/transpose Example: Here is the score for a B-flat clarinet in Humdrum format: **kern *ITrd1c2 *Iclars *k[] *C: 4c 4d 4e 4f 4g 4a 4b 4cc *- The pitches are listed in concert pitch (the convention for Humdrum scores), and there is an interpretation "*ITrd1c2" which gives instructions on how to transpose to written pitch: *I = instrument code Tr = transpose d1 = go up one diatonic step c2 = while going up two chromatic steps So C->D, C#->D# and so on. 
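The `*ITrd1c2` token described above can be pulled apart mechanically. This is an editor's sketch; the function name and regex are invented and not part of the Humdrum toolkit, which handles these interpretations itself.

```python
import re

# Hypothetical parser for a Humdrum transposition token such as "*ITrd1c2"
# ("transpose up 1 diatonic step and 2 chromatic steps"); negative values
# such as "*ITrd-1c-2" would indicate downward transposition.
def parse_trans(token):
    m = re.fullmatch(r"\*ITrd(-?\d+)c(-?\d+)", token)
    if m is None:
        raise ValueError(f"not a transposition token: {token!r}")
    return int(m.group(1)), int(m.group(2))  # (diatonic, chromatic)

print(parse_trans("*ITrd1c2"))  # the B-flat clarinet token from the example
```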
> So, for the clarinet in Bb, should the values be trans.diat="1" and trans.semi="2", or should they read trans.diat="-1" and trans.semi="-2"? I would say that using trans.diat="1" and trans.semi="2" would be better (particularly if the pitches in the file are listed in concert pitch). And these numbers then represent the transposition amount/direction which is necessary to transpose from concert pitch to written pitch of the transposing instrument. Since B-flat clarinets "sound down" they need to "transpose up" to get to the written pitch, so the intervals should be positive. In the documentation you should say something like "the transposition values indicate the size and direction of the transposition interval used to transpose concert pitches into written pitches for the transposing instrument." If you want the transposition to mean the size/direction to go from written to concert (which is the same size but reverse direction), then you switch the wording (although I would prefer the concert->written transposition to be encoded in the score). In Humdrum files, the pitches are always expected to be in concert pitch. I use my transpose program mentioned above to transpose written parts into sounding parts when going from OMR encoded data into a final score. In that case I have a "-I" option which stores the reverse of the written->sounding transposition in the final score (I transpose down the B-flat part to concert pitch, but I store the upward direction indicating how to go from concert pitch to written). Are transposing parts always encoded in concert pitch in MEI? If so, then the sounding->written direction for transposition is very much preferred. If the MEI files can have either concert or written pitch, then what to do is not as clear. I would have the transposition value always from concert->written, and then some indication of what state the score/part is in, either "written" or "sounding".
If the score is in Written mode, then the transpose interval would be negated to convert to concert pitch, and if the score is in Concert mode, then the transpose interval would be used as-is to transpose to Written mode. <<given a pitch of D4, to get a performable concert pitch, subtract the value of trans.semi>> You or I are confused about this sentence. Given the pitch of D4 written for a B-flat clarinet, you need to apply both trans.semi and trans.diat to calculate the concert pitch (not just trans.semi; both are needed to get the correct transposed diatonic pitch name). In terms of calculating a base-12 MIDI pitch, you can just use trans.semi, but for correct diatonic spelling you will need both: go down one diatonic step while going down two chromatic steps, which is D4 to C4 (down a major second): D->C: d->c#->c. -=+Craig 2012/2/9 Roland, Perry (pdr4h) > Looking at the documentation for these 2 attributes, it isn't exactly > clear what values are expected. The documentation for trans.diat says, > > "records the amount of diatonic pitch shift, e.g. C to C# = 0, C to Db = > 1. > Transposition requires both trans.diat and trans.semi attributes in > order to distinguish > the difference, for example, between a transposition from C to C# and > one from C to Db." > > while for trans.semi, it says, > > "contains the amount of pitch shift in semitones, C to C# = 1, C to Db = > 1. > Transposition requires both trans.diat and trans.semi attributes in order > to distinguish > the difference, for example, between a transposition from C to C# and one > from C to Db." > > So, for the clarinet in Bb, should the values be trans.diat="1" and > trans.semi="2", or should they read trans.diat="-1" and trans.semi="-2"? > > It seems to me that the first set of values is somewhat redundant since > the written pitches for the clarinet are already recorded as one step "too > high" with respect to concert pitch.
This redundancy, however, could be > used to mean that the written pitches are not errors; that is, that an F# > on the clarinet staff is correct in the concert key of C. > > Another way to approach these attributes (which is, by the way, used in > the musicxml2mei XSL transform) is to use them to indicate the amount of > "correction" necessary to achieve the concert pitch from the written one. > In this case, the second set of values would be more helpful; that is, > given a pitch of D4, to get a performable concert pitch, subtract the value > of trans.semi. (Or perhaps more technically correct, add -2 semitones.) > > So, having made the decision once before (when writing the XSL transform), > it would be convenient to make the documentation agree with the > already-made decision. However, if anyone has a good argument, other than > tradition, why this decision (and the documentation) should be reversed, > please speak up. > > -- > p. > > __________________________ > Perry Roland > Music Library > University of Virginia > P. O. Box 400175 > Charlottesville, VA 22904 > 434-982-2702 (w) > pdr4h (at) virginia (dot) edu > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l > -------------- next part -------------- An HTML attachment was scrubbed... URL: From cuthbert at MIT.EDU Sat Feb 11 20:10:51 2012 From: cuthbert at MIT.EDU (Michael Scott Cuthbert) Date: Sat, 11 Feb 2012 14:10:51 -0500 Subject: [MEI-L] @trans.diat and @trans.semi In-Reply-To: References: Message-ID: <005b01cce8f0$df985800$9ec90800$@mit.edu> The processes Craig and Perry suggest work well for simple cases, but break on more difficult instruments, keys, and transpositions. A Db piccolo (very common in band music) might technically be -1 diatonic, -1 chromatic, but it's quite often 0 diatonic, -1 chromatic when the scores go even a bit towards the sharp side.
Even the B-flat clarinet often will choose to be an A# clarinet when the orchestra is in 5, 6, or 7 sharps. And in pieces for band, learning ensembles, etc., individual notes will be written enharmonically, so that a concert-pitch passage C, D, D#, E#, F# might be written as D, E, F, G, Ab so as to avoid double sharps or augmented intervals. Notating the score in written pitch is the better way to avoid this problem. Best, Myke From: mei-l-bounces at lists.uni-paderborn.de [mailto:mei-l-bounces at lists.uni-paderborn.de] On Behalf Of Craig Sapp Sent: Friday, February 10, 2012 21:23 To: Music Encoding Initiative Subject: Re: [MEI-L] @trans.diat and @trans.semi Hi Perry, The specification seems to come from the Humdrum trans tool (which in turn comes from some 2D algorithm of transposition): http://musicog.ohio-state.edu/Humdrum/commands/trans.html And I have a transposing program for Humdrum which implements the base-40 method of transposing: http://extra.humdrum.org/man/transpose Example: Here is the score for a B-flat clarinet in Humdrum format:

**kern
*ITrd1c2
*Iclars
*k[]
*C:
4c
4d
4e
4f
4g
4a
4b
4cc
*-

The pitches are listed in concert pitch (the convention for Humdrum scores), and there is an interpretation "*ITrd1c2" which gives instructions on how to transpose to written pitch: *I = instrument code, Tr = transpose, d1 = go up one diatonic step, c2 = while going up two chromatic steps. So C->D, C#->D# and so on. > So, for the clarinet in Bb, should the values be trans.diat="1" and trans.semi="2", or should they read trans.diat="-1" and trans.semi="-2" ? I would say that using trans.diat="1" and trans.semi="2" would be better (particularly if the pitches in the file are listed in concert pitch). And these numbers then represent the transposition amount/direction which is necessary to transpose from concert pitch to written pitch of the transposing instrument.
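Craig's d1c2 rule generalizes to any (diatonic, chromatic) pair; a minimal Python sketch of that two-dimensional transposition (the helper names are mine, not from any Humdrum or MEI tool):

```python
STEPS = "CDEFGAB"
SEMIS = [0, 2, 4, 5, 7, 9, 11]  # semitone offsets of the natural letters

def transpose(step, alter, octave, diat, semi):
    """Transpose a spelled pitch (letter index 0-6, accidental, octave)
    by a diatonic step count and a chromatic semitone count at once."""
    new_step = (step + diat) % 7
    new_octave = octave + (step + diat) // 7
    old_midi = 12 * (octave + 1) + SEMIS[step] + alter
    new_natural = 12 * (new_octave + 1) + SEMIS[new_step]
    new_alter = old_midi + semi - new_natural  # the accidental absorbs the rest
    return new_step, new_alter, new_octave

# Bb clarinet, concert -> written is d1c2: up 1 diatonic, up 2 chromatic.
assert transpose(0, 0, 4, 1, 2) == (1, 0, 4)    # C4  -> D4
assert transpose(0, 1, 4, 1, 2) == (1, 1, 4)    # C#4 -> D#4
assert transpose(6, 0, 3, 1, 2) == (0, 1, 4)    # B3  -> C#4 (octave carries)
# Written -> concert simply negates both numbers.
assert transpose(1, 0, 4, -1, -2) == (0, 0, 4)  # D4  -> C4
```

Keeping the diatonic and chromatic counts separate is exactly what makes the C#-versus-Db distinction representable; a single semitone count cannot.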
Since B-flat clarinets "sound down" they need to "transpose up" to get to the written pitch, so the intervals should be positive. In the documentation you should say something like "the transposition values indicate the size and direction of the transposition interval used to transpose concert pitches into written pitches for the transposing instrument." If you want the transposition to mean the size/direction to go from written to concert (which is the same size but reverse direction), then you switch the wording (although I would prefer the concert->written transposition to be encoded in the score). In Humdrum files, the pitches are always expected to be in concert pitch. I use my transpose program mentioned above to transpose written parts into sounding parts when going from OMR encoded data into a final score. In that case I have a "-I" option which stores the reverse of the written->sounding transposition in the final score (I transpose down the B-flat part to concert pitch, but I store the upward direction indicating how to go from concert pitch to written). Are transposing parts always encoded in concert pitch in MEI? If so, then the sounding->written direction for transposition is very much preferred. If the MEI files can have either concert or written pitch, then what to do is not as clear. I would have the transposition value always from concert->written, and then some indication of what state the score/part is in, either "written" or "sounding". If the score is in Written mode, then the transpose interval would be negated to convert to concert pitch, and if the score is in Concert mode, then the transpose interval would be used as-is to transpose to Written mode. <> You or I are confused about this sentence. Given the pitch of D4 written for a B-flat clarinet, you need to apply both trans.semi and trans.diat to calculate the concert pitch (not just trans.semi; both are needed to get the correct transposed diatonic pitch name).
In terms of calculating a base-12 MIDI pitch, you can just use trans.semi, but for correct diatonic spelling you will need both: go down one diatonic step while going down two chromatic steps, which is D4 to C4 (down a major second): D->C: d->c#->c. -=+Craig 2012/2/9 Roland, Perry (pdr4h) Looking at the documentation for these 2 attributes, it isn't exactly clear what values are expected. The documentation for trans.diat says, "records the amount of diatonic pitch shift, e.g. C to C# = 0, C to Db = 1. Transposition requires both trans.diat and trans.semi attributes in order to distinguish the difference, for example, between a transposition from C to C# and one from C to Db." while for trans.semi, it says, "contains the amount of pitch shift in semitones, C to C# = 1, C to Db = 1. Transposition requires both trans.diat and trans.semi attributes in order to distinguish the difference, for example, between a transposition from C to C# and one from C to Db." So, for the clarinet in Bb, should the values be trans.diat="1" and trans.semi="2", or should they read trans.diat="-1" and trans.semi="-2" ? It seems to me that the first set of values is somewhat redundant since the written pitches for the clarinet are already recorded as one step "too high" with respect to concert pitch. This redundancy, however, could be used to mean that the written pitches are not errors; that is, that an F# on the clarinet staff is correct in the concert key of C. Another way to approach these attributes (which is, by the way, used in the musicxml2mei XSL transform) is to use them to indicate the amount of "correction" necessary to achieve the concert pitch from the written one. In this case, the second set of values would be more helpful; that is, given a pitch of D4, to get a performable concert pitch, subtract the value of trans.semi. (Or perhaps more technically correct, add -2 semitones.)
So, having made the decision once before (when writing the XSL transform), it would be convenient to make the documentation agree with the already-made decision. However, if anyone has a good argument, other than tradition, why this decision (and the documentation) should be reversed, please speak up. -- p. __________________________ Perry Roland Music Library University of Virginia P. O. Box 400175 Charlottesville, VA 22904 434-982-2702 (w) pdr4h (at) virginia (dot) edu _______________________________________________ mei-l mailing list mei-l at lists.uni-paderborn.de https://lists.uni-paderborn.de/mailman/listinfo/mei-l From craigsapp at gmail.com Sat Feb 11 20:41:46 2012 From: craigsapp at gmail.com (Craig Sapp) Date: Sat, 11 Feb 2012 11:41:46 -0800 Subject: [MEI-L] @trans.diat and @trans.semi In-Reply-To: <005b01cce8f0$df985800$9ec90800$@mit.edu> References: <005b01cce8f0$df985800$9ec90800$@mit.edu> Message-ID: Hi Myke, On Sat, Feb 11, 2012 at 11:10 AM, Michael Scott Cuthbert wrote: > Even the B-flat clarinet often will choose to be an A# clarinet when the > orchestra is in 5, 6, or 7 sharps. This wouldn't be a problem, since the transposition is related to how you want to see it on the page, not to the name of the instrument. B-flat transposition would be 1 diatonic 2 chromatic (C->B; c->b->b-flat), A-sharp transposition would be 2 diatonic 2 chromatic (C->B->A; c->b->a#). Similarly for D-flat/C# piccolo.
The enharmonic spelling would be stored in the sounding-pitch spellings, rather than changed afterwards once the written notes are calculated. In other words, the written pitches
D E F G Ab
are represented in concert pitch as:
C D Eb F Gb
and not as
C D D# E# F#
-=+Craig From cuthbert at MIT.EDU Sat Feb 11 21:04:31 2012 From: cuthbert at MIT.EDU (Michael Scott Cuthbert) Date: Sat, 11 Feb 2012 15:04:31 -0500 Subject: [MEI-L] @trans.diat and @trans.semi In-Reply-To: References: <005b01cce8f0$df985800$9ec90800$@mit.edu> Message-ID: <007d01cce8f8$5e972940$1bc57bc0$@mit.edu> > > From: Craig Sapp > On Sat, Feb 11, 2012 at 11:10 AM, Michael Scott Cuthbert wrote: > Even the B-flat clarinet often will choose to be an A# clarinet when the orchestra is in 5, 6, or 7 sharps. > This wouldn't be a problem, since the transposition is related to how you want to see it on the page, not to the name of the instrument. B-flat transposition would be 1 diatonic 2 chromatic (C->B; c->b->b-flat), A-sharp transposition would be 2 diatonic 2 chromatic (C->B->A; c->b->a#). Similarly for D-flat/C# piccolo. The issue is that you want to be able to change the transposition for different passages, and I don't think that changing the Instrument tag is the best approach. > And in pieces for band, learning ensembles, etc., individual notes will be written enharmonically, so that a concert-pitch passage C, D, D#, E#, F# might be written as D, E, F, G, Ab so as to avoid double sharps or augmented intervals. Notating the score in written pitch is the better way to avoid this problem. > This is a good point, but can also be handled by a constant transposition system. The enharmonic spelling would be stored in the sounding-pitch spellings, rather than changed afterwards once the written notes are calculated.
In other words, the written pitches
D E F G Ab
are represented in concert pitch as:
C D Eb F Gb
and not as
C D D# E# F#
What you would like to have is the clarinet's notes displaying as "C D D# E# F#" when the score is viewed in concert pitch (so it matches the rest of the ensemble) and "D E F G Ab" when viewed transposed; the alternate representation ("C D Eb F Gb") represents a passage that doesn't appear in either a concert-pitch score or on the performers' parts. It may be too difficult to encode such things but they're frequently done instinctively by skilled copyists. Best, Myke -=+Craig From craigsapp at gmail.com Sat Feb 11 22:33:17 2012 From: craigsapp at gmail.com (Craig Sapp) Date: Sat, 11 Feb 2012 13:33:17 -0800 Subject: [MEI-L] @trans.diat and @trans.semi In-Reply-To: <007d01cce8f8$5e972940$1bc57bc0$@mit.edu> References: <005b01cce8f0$df985800$9ec90800$@mit.edu> <007d01cce8f8$5e972940$1bc57bc0$@mit.edu> Message-ID: Hi Myke, On Sat, Feb 11, 2012 at 12:04 PM, Michael Scott Cuthbert wrote: > The issue is that you want to be able to change the transposition for > different passages, and I don't think that changing the Instrument tag is > the best approach. > Yes, there will be cases where the transposition for a transposing instrument may alternate between enharmonic equivalents within the same movement. A good example is Tchaikovsky's Romeo and Juliet overture-fantasy: http://imslp.org/wiki/Romeo_and_Juliet_%28overture-fantasia%29_%28Tchaikovsky,_Pyotr%29 In this piece the first two keys are A major and A-flat major. The Clarinet in A is notated with C major and B major, rather than C major and C-flat major. I just encoded this work via OMR with SharpEye from the parts.
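The two notations Craig mentions for the A clarinet (B major rather than C-flat major against the concert A-flat section) come down to two different (diatonic, chromatic) pairs over the same three-semitone shift; a rough sketch, with my own helper names (not from any MEI or Humdrum tool):

```python
SEMIS = [0, 2, 4, 5, 7, 9, 11]  # semitone offsets of the naturals C..B

def transpose(step, alter, octave, diat, semi):
    """Transpose a spelled pitch (letter index 0-6, accidental, octave)."""
    new_step = (step + diat) % 7
    new_octave = octave + (step + diat) // 7
    old_midi = 12 * (octave + 1) + SEMIS[step] + alter
    new_alter = old_midi + semi - (12 * (new_octave + 1) + SEMIS[new_step])
    return new_step, new_alter, new_octave

AFLAT4 = (5, -1, 4)  # concert Ab4
# d1c3: Ab4 -> B4, the reading the engraver chose.
assert transpose(*AFLAT4, 1, 3) == (6, 0, 4)
# d2c3: Ab4 -> Cb5, the spelling the engraver avoided.
assert transpose(*AFLAT4, 2, 3) == (0, -1, 5)
```

Both pairs shift the sounding pitch by the same three semitones; only the diatonic count decides which enharmonic spelling appears on the page.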
I prepared MuseData files for Walter in written pitch, and then he transposes to concert pitch and adds a transposition-to-written-pitch code in the MuseData files (similar to Humdrum), since MuseData files for printing are stored in concert pitch with transposition instructions given for generating written pitch spellings. In measure 1 of the A clarinets, there will be an instruction to transpose up a minor third, while in measure 21 there will be an instruction to transpose up an augmented second (enharmonically equivalent to a minor third). Written parts for timpani are complicated--parts are displayed transposed, and there is no key signature notated (invisible key signature :-). > > And in pieces for band, learning ensembles, etc., individual notes will > be written enharmonically, so that a concert-pitch passage C, D, D#, E#, > F# might be written as D, E, F, G, Ab so as to avoid double > sharps or augmented intervals. Notating the score in written pitch is the > better way to avoid this problem. > > This is a good point, but can also be handled by a constant transposition > system. The enharmonic spelling would be stored in the sounding-pitch > spellings, rather than changed afterwards once the written notes are > calculated. In other words the written pitches > D E F G Ab > are represented in concert pitch as: > C D Eb F Gb > and not as > C D D# E# F# > > What you would like to have is the clarinet's notes displaying as "C D D# > E# F#" when the score is viewed in concert pitch (so it matches the rest of > the ensemble) and "D E F G Ab" when viewed transposed; the alternate > representation ("C D Eb F Gb") represents a passage that doesn't appear in > either a concert-pitch score or on the performers' parts. It may be too > difficult to encode such things but they're frequently done instinctively > by skilled copyists. The key phrase is "skilled copyist" which computers cannot be (not to mention "instinctively" :-).
In other words, to use the same data for generating a transposed part and a concert-pitch score, you would probably use variants in MEI. One variant specifies how to display it in the transposed part, and the other variant for the concert-pitch score. Transposition without enharmonic alteration is deterministic, so the written variant could equally be stored in concert pitch or written pitch. If you are going to be utilizing enharmonic equivalents, then neither system is better (if you are going to want to display in both written-pitch and concert-pitch). Humdrum and MuseData encode parts in concert pitch. I just checked, and MusicXML does it the other way. Here is a C-sounding pitch in a B-flat clarinet part which includes a transpose-to-concert-pitch instruction:

<attributes>
  <divisions>1</divisions>
  <key><fifths>2</fifths><mode>major</mode></key>
  <clef><sign>G</sign><line>2</line></clef>
  <transpose><diatonic>-1</diatonic><chromatic>-2</chromatic></transpose>
</attributes>
<note>
  <pitch><step>D</step><octave>4</octave></pitch>
  <duration>4</duration>
  <voice>1</voice>
  <type>whole</type>
</note>
<barline><bar-style>light-heavy</bar-style></barline>

The <transpose> tag is vague, as Perry was pointing out for MEI's equivalent. It should more verbosely be <transpose-to-concert-pitch> in contrast to <transpose-to-written-pitch>. But it seems that the MusicXML convention is to encode the part already transposed, and then indicate the transposition from written -> concert pitch. It would be useful to specify the state of the data (written or concert) and also an explicit direction that the transposition represents (such as written->concert or concert->written). When going in the opposite direction the transposition interval could be reversed without confusion; for example, concert data with a written->concert transposition could generate the written data by negating the written->concert transposition. -=+Craig From pdr4h at eservices.virginia.edu Sat Feb 11 23:26:25 2012 From: pdr4h at eservices.virginia.edu (Roland, Perry (pdr4h)) Date: Sat, 11 Feb 2012 22:26:25 +0000 Subject: [MEI-L] @trans.diat and @trans.semi In-Reply-To: References: <005b01cce8f0$df985800$9ec90800$@mit.edu> <007d01cce8f8$5e972940$1bc57bc0$@mit.edu>, Message-ID: Hi, Craig, Mike, everyone, Thanks for jumping in.
It's a good idea to use written pitch in MEI because 1) it's the only thing naive encoders (human and mechanical) know about the notation and 2) MusicXML is often the source of MEI data. BTW, if I remember correctly, the MusicXML documentation says that pitch should be encoded at sounding pitch, but in practice most implementations use written pitch. However, MEI doesn't have to be limited to encoding written pitch -- there must be a way of stating whether the document captures written pitch (with transposition to concert pitch) or vice versa. So, I propose adding another attribute (trans.method, trans.dir, or similar name) that takes the values "toConcert" or "toWritten" in order to capture the "state of the data" as Craig called it, as well as the target. The assumption is that "toConcert" means that the data is captured "asWritten" and vice versa. Even though I agree with Mike that encoding written pitch is a better starting point, the extra attribute permits MuseData-style, concert pitch encodings too. With the addition of this new attribute, in the case of the Bb clarinet, trans.diat and trans.semi can accommodate the values "1" and "2" or "-1" and "-2", depending on the pitch-encoding style. The presence of any of the @trans* attributes is contextual. Of course, a non-transposing staff won't use them at all. If a diatonic transposition is desired, as Craig pointed out, @trans.diat and @trans.semi must both be given. If a base-12 MIDI pitch is enough, only @trans.semi is necessary. However, if @trans.diat or @trans.semi is present, @trans.dir must be present. The bottom line is that all these attributes must be optional and the context of their use controlled by schematron. -- p. __________________________ Perry Roland Music Library University of Virginia P. O.
Box 400175 Charlottesville, VA 22904 434-982-2702 (w) pdr4h (at) virginia (dot) edu From kepper at edirom.de Sun Feb 12 23:22:18 2012 From: kepper at edirom.de (Johannes Kepper) Date: Sun, 12 Feb 2012 23:22:18 +0100 Subject: [MEI-L] @trans.diat and @trans.semi In-Reply-To: References: <005b01cce8f0$df985800$9ec90800$@mit.edu> <007d01cce8f8$5e972940$1bc57bc0$@mit.edu>, Message-ID: <04AF4EC0-4B6C-4A72-9D29-EC10CAE6B799@edirom.de> Hi all, I would like to add some more points to this discussion. In my opinion, allowing both directions of transposition in MEI is the worst decision we could ever make. This would require absolutely every application dealing with MEI files to be capable of calculating the pitches as it needs them. If we pick one, we can spare at least some tools from this additional effort. If MEI stores concert pitch, this would be more natural to sound-focused apps (-> MIDI exporters) and for music analysis. If it keeps written pitch, it would be much easier for renderers, OMR and hand encoders not sure what they're facing.
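Johannes's cost argument can be made concrete: if both directions are allowed, every consumer must branch on the direction flag (here Perry's proposed @trans.dir; the function names are invented for illustration) before it can trust a single pitch:

```python
def sounding_midi(midi, trans_semi, trans_dir):
    """MIDI exporter's view of a stored MIDI pitch.
    'toConcert': data is written; the interval leads to concert pitch."""
    return midi + trans_semi if trans_dir == "toConcert" else midi

def written_midi(midi, trans_semi, trans_dir):
    """Renderer's view: the opposite branch, repeated in every renderer.
    'toWritten': data is concert; the interval leads to written pitch."""
    return midi + trans_semi if trans_dir == "toWritten" else midi

# Bb clarinet written D4 (MIDI 62), stored two equivalent ways:
assert sounding_midi(62, -2, "toConcert") == 60  # written-pitch encoding
assert sounding_midi(60, 2, "toWritten") == 60   # concert-pitch encoding
assert written_midi(62, -2, "toConcert") == 62
assert written_midi(60, 2, "toWritten") == 62
```

Picking a single stored direction would delete one branch from every one of these consumers, which is exactly the simplification Johannes is arguing for.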
It might be my personal focus to understand MEI as being somewhat document-centric, but there is another argument to choose written pitch: MIR tools are normally operating on simplifications of the original data (n-grams etc.), but not on data as complex as pure MEI. If they're abstracting anyway, adding the step of transposition to their calculations seems less demanding to me than to require a renderer to "misplace" every single note. After all, I could back a decision for concert pitch, but I think it would be a very bad choice to allow both directions at the same time. We only add unnecessary complexity, processing efforts and a potential source for trouble without a need. Best regards, Johannes Am 11.02.2012 um 23:26 schrieb Roland, Perry (pdr4h): > [...]
From craigsapp at gmail.com Mon Feb 13 04:03:18 2012 From: craigsapp at gmail.com (Craig Sapp) Date: Sun, 12 Feb 2012 19:03:18 -0800 Subject: [MEI-L] @trans.diat and @trans.semi In-Reply-To: <04AF4EC0-4B6C-4A72-9D29-EC10CAE6B799@edirom.de> References: <005b01cce8f0$df985800$9ec90800$@mit.edu> <007d01cce8f8$5e972940$1bc57bc0$@mit.edu> <04AF4EC0-4B6C-4A72-9D29-EC10CAE6B799@edirom.de> Message-ID: Hi MEIers, On Sun, Feb 12, 2012 at 2:22 PM, Johannes Kepper wrote: > > I would like to add some more points to this discussion. In my opinion, > allowing both directions of transposition in MEI is the worst decision we > could ever make. This would require absolutely every application dealing > with MEI files to be capable of calculating the pitches as it needs them. It will be less confusing if the transposed parts were stored in the transposed state. And I have looked more closely at a MuseData part and now see that they are stored in the written (transposed) form. Here is the start of the 1st clarinet (in A), which has an F5 in the transposed form (see the score: http://www.musedata.org/cgi-bin/mddata?work=beet/bh/sym/no2&file=distrib/pdf/score-hand/work.pdf)

Breitkopf & H\a3rtel, Leipzig, Series 1 No. 2
Beethoven Symphony No. 2 in D Major, Op. 36
Mvt. 1
Clarinetto 1 in A
0 0
Group memberships: sound
sound: part 5 of 18
$ K:-1 Q:24 T:3/4 C:4 X:-11 D:Adagio molto
F5 3 t d ff
measure 1
F5 36 q. d F
rest 12 e
rest 24 q
measure 2
rest 72
measure 3
rest 72
measure 4
rest 24 q
rest 12 e
G5 9 s. d [[ p.

The "$.. X:-11" information indicates how to get from the written form to the concert-pitch form (-11 means transpose down a minor third). So I think that the original form of the MEI transposition attributes is intended to mean the same thing (likewise MusicXML).
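Craig's transpose program linked earlier uses the base-40 method, and the X:-11 value falls out of that arithmetic directly: a minor third spans 11 base-40 steps. A sketch assuming Hewlett's base-40 numbering (my table, not quoted from the MuseData documentation):

```python
# Naturals in Hewlett's base-40 pitch numbering; sharps/flats are +/-1.
NATURAL40 = {"C": 3, "D": 9, "E": 15, "F": 20, "G": 26, "A": 32, "B": 38}

def base40(letter, alter, octave):
    """Base-40 pitch number for a spelled pitch."""
    return 40 * octave + NATURAL40[letter] + alter

# A minor third is 11 base-40 steps ...
assert base40("E", -1, 4) - base40("C", 0, 4) == 11
# ... so X:-11 takes the A clarinet's written F5 down to concert D5.
assert base40("F", 0, 5) - 11 == base40("D", 0, 5)
# Unlike base-12 MIDI, base-40 keeps enharmonics apart: D# is not Eb.
assert base40("D", 1, 4) != base40("E", -1, 4)
```

Because every interval quality gets its own base-40 size, a single signed number like X:-11 carries both the diatonic and the chromatic information that MEI splits into @trans.diat and @trans.semi.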
B-flat clarinet parts would be encoded in the transposed state, with @trans.diat="-1" and @trans.semi="-2" given to indicate how to get to concert pitch from the written state of the data. So since both MuseData and MusicXML store transposed parts in transposed form, it is a wise idea to do the same in MEI. (This also explains why I have to fix some transpositions in Haydn symphony data converted from MuseData to Humdrum, which does not have the transposition information necessary to get to concert pitch). > If MEI stores concert pitch, this would be more natural for sound-focused > apps (-> MIDI exporters) and for music analysis. If it keeps written pitch, > it would be much easier for renderers, OMR, and hand encoders who are not sure what > they're facing. > So you can guess what Perry will say :-) It might be my personal focus to understand MEI as being somewhat > document-centric, but there is another argument to choose written pitch: > MIR tools are normally operating on simplifications of the original data > (n-grams etc.), but not on data as complex as pure MEI. If they're > abstracting anyway, adding the step of transposition to their calculations > seems less demanding to me than to require a renderer to "misplace" every > single note. > Transposition is fairly trivial and deterministic, so a renderer which has problems with transposition will most likely have lots of other bugs in it... MEI Iron is the ideal place to simplify the data to a standardized form. The user/application could specify to MEI Iron that they want transposed data to end up in the written state or the concert-pitch state; then no matter what the original state of the data, the output from MEI Iron is what is desired. When a part-generating program wants the data in transposed form, it asks MEI Iron for the transposed state of the data; when a MIDI-generating program wants the data in concert pitch, it asks MEI Iron for data in the untransposed state. 
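Craig's reading — the stored pitch is written, and the attributes say how to reach concert pitch — can be sketched in Python. This is a sketch only, with the attribute semantics as described in this thread and function names of my own invention:

```python
STEPS = ['c', 'd', 'e', 'f', 'g', 'a', 'b']          # diatonic letter cycle
PC = {'c': 0, 'd': 2, 'e': 4, 'f': 5, 'g': 7, 'a': 9, 'b': 11}  # natural pitch classes

def to_concert(pname, octave, accid_semi, trans_diat, trans_semi):
    """Apply trans.diat/trans.semi-style corrections to a written pitch,
    returning the sounded (concert) pitch as (pname, octave, accidental)."""
    idx = STEPS.index(pname) + trans_diat
    new_pname = STEPS[idx % 7]
    new_oct = octave + idx // 7          # carry octave when the letter wraps
    written_abs = 12 * octave + PC[pname] + accid_semi
    target_abs = written_abs + trans_semi
    # accidental = distance from the natural form of the target letter
    new_accid = target_abs - (12 * new_oct + PC[new_pname])
    return new_pname, new_oct, new_accid

# Written D4 in a B-flat clarinet part (trans.diat=-1, trans.semi=-2)
print(to_concert('d', 4, 0, -1, -2))  # ('c', 4, 0) -> sounds C4
# Written F5 in an A clarinet part (down a minor third)
print(to_concert('f', 5, 0, -2, -3))  # ('d', 5, 0) -> sounds D5
```

Carrying both attributes is what makes the result spellable: the semitone count alone cannot decide between enharmonic spellings of the target.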
When a renderer needs to display the score in concert-pitch, it could alternately ask MEI Iron for data in the concert-pitch state. -=+Craig -------------- next part -------------- An HTML attachment was scrubbed... URL: From kepper at edirom.de Mon Feb 13 09:45:45 2012 From: kepper at edirom.de (Johannes Kepper) Date: Mon, 13 Feb 2012 09:45:45 +0100 Subject: [MEI-L] @trans.diat and @trans.semi In-Reply-To: References: <005b01cce8f0$df985800$9ec90800$@mit.edu> <007d01cce8f8$5e972940$1bc57bc0$@mit.edu> <04AF4EC0-4B6C-4A72-9D29-EC10CAE6B799@edirom.de> Message-ID: <4D829BDA-852E-4563-9B16-85B726C75DAA@edirom.de> Hi Craig, Am 13.02.2012 um 04:03 schrieb Craig Sapp: > Hi MEIers, > > On Sun, Feb 12, 2012 at 2:22 PM, Johannes Kepper wrote: > >> I would like to add some more points to this discussion. In my opinion, allowing both directions of transposition in MEI is the worst decision we could ever make. This would require absolutely every application dealing with MEI files to be capable of calculating the pitches as it needs them. >> > It will be less confusing if the transposed parts were stored in the transposed state. And I have looked more closely at a MuseData part and now see that they are stored in the written (transposed) form. Here is the start of the 1st clarinet (in A) which has an F5 of the transposed form (see the score: http://www.musedata.org/cgi-bin/mddata?work=beet/bh/sym/no2&file=distrib/pdf/score-hand/work.pdf ) > > Breitkopf & H\a3rtel, Leipzig, Series 1 No. 2 > Beethoven Symphony No. 2 in D Major, Op. 36 > Mvt. 1 > Clarinetto 1 in A > 0 0 > Group memberships: sound > sound: part 5 of 18 > $ K:-1 Q:24 T:3/4 C:4 X:-11 D:Adagio molto > F5 3 t d ff > measure 1 > F5 36 q. d F > rest 12 e > rest 24 q > measure 2 > rest 72 > measure 3 > rest 72 > measure 4 > rest 24 q > rest 12 e > G5 9 s. d [[ p. > > The "$.. X:-11" information indicates how to get from the written form to the concert pitch form (-11 means transpose down a minor third). 
So I think that the original form of the MEI transposition attributes are intended to mean the same thing (likewise MusicXML). B-flat clarinets parts would be encoded in the transposed state, with @trans.diat="-1" and @trans.semit="-2" given to indicate how to get to concert pitch from the written state of the data. > > So since both MuseData and MusicXML store transposed parts in transposed form, it is a wise idea to do the same in MEI. (This also explains why I have to fix some transpositions in Haydn symphony data converted from MuseData to Humdrum which does not have the transposition information necessary to get to concert pitch). > >> If MEI stores concert pitch, this would be more natural to sound-focused apps (-> MIDI exporters) and for music analysis. If it keeps written pitch, it would be much easier for renderers, OMR and hand encoders not sure what they're facing. > > So you can guess what Perry will say :-) > >> It might be my personal focus to understand MEI as being somewhat document-centric, but there is another argument to choose written pitch: MIR tools are normally operating on simplifications of the original data (n-grams etc.), but not on data as complex as pure MEI. If they're abstracting anyway, adding the step of transposition to their calculations seems less demanding to me than to require a renderer to "misplace" every single note. > > Transposition is fairly trivial and deterministic, so a renderer which has problems with transposition will most likely have lots of other bugs in it? I didn't say that it's rocket science to calculate pitch, I said it's an additional step that could be spared for _some_ applications. > > MEI Iron is the ideal place to simplify the data to a standardized form. The user/application could specify to MEI Iron that they want transposed data to end up in the written state or the concert-pitch state, then no matter what the original state of the data, the output from MEI Iron is what is desired. 
When a part-generating program wants the data in transposed form, it asks MEI Iron for the transposed state of the data; when a MIDI-generating program wants the data in concert pitch, it asks MEI Iron for data in the untransposed state. When a renderer needs to display the score in concert-pitch, it could alternately ask MEI Iron for data in the concert-pitch state. Brilliant idea! I hadn't given a thought to the Iron for this purpose, but you're absolutely right, that's the ideal place to resolve it. We will put it on the MEIron Todo. But, I would still argue that deciding for one direction is crucial to avoid any confusion about this. The MEIron would then only provide a one-way translation, as this would only be a first step in processing the data and turning them into something else. It would not be an officially supported state of an MEI file that one would save as an interchangeable file*. Did I hear correctly that you would prefer written pitch as well? So, any other opinions out there? Johannes * Maybe we should offer the other direction in MEIron as well, but I'm afraid that this would require Perry's additional attribute to indicate the current state. And if we add this to the current model, I suppose it's hard to suppress 'misuse' of this feature? Am I wrong? 
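For pitch alone, the normalization step Craig describes and Johannes endorses could be as simple as the following sketch. The function name and the state flags are hypothetical, and real MEI data would of course carry spelled pitches rather than bare MIDI numbers:

```python
def normalize(notes, stored_state, wanted_state, trans_semi):
    """Sketch of the 'MEI Iron' idea: hand back the notes in whichever
    state (written or concert) the caller asks for, regardless of the
    state the file happens to store. trans_semi is the written -> concert
    correction in semitones; notes are MIDI numbers for brevity."""
    if stored_state == wanted_state:
        return list(notes)
    # apply the correction going to concert, invert it going back to written
    shift = trans_semi if wanted_state == 'concert' else -trans_semi
    return [n + shift for n in notes]

# B-flat clarinet: written -> concert is -2 semitones
print(normalize([62, 64], 'written', 'concert', -2))  # [60, 62]
print(normalize([60, 62], 'concert', 'written', -2))  # [62, 64]
```

The point of the sketch is Johannes's: the stored direction must be fixed by convention, or every consumer needs this logic.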
> > -=+Craig > > > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l From zupftom at googlemail.com Mon Feb 13 12:08:55 2012 From: zupftom at googlemail.com (TW) Date: Mon, 13 Feb 2012 12:08:55 +0100 Subject: [MEI-L] @trans.diat and @trans.semi In-Reply-To: <4D829BDA-852E-4563-9B16-85B726C75DAA@edirom.de> References: <005b01cce8f0$df985800$9ec90800$@mit.edu> <007d01cce8f8$5e972940$1bc57bc0$@mit.edu> <04AF4EC0-4B6C-4A72-9D29-EC10CAE6B799@edirom.de> <4D829BDA-852E-4563-9B16-85B726C75DAA@edirom.de> Message-ID: 2012/2/13 Johannes Kepper : > Am 13.02.2012 um 04:03 schrieb Craig Sapp: > > >> >> MEI Iron is the ideal place to simplify the data to a standardized form. The user/application could specify to MEI Iron that they want transposed data to end up in the written state or the concert-pitch state, then no matter what the original state of the data, the output from MEI Iron is what is desired. ?When a part-generating program wants the data in transpose form, it asks MEI Iron for the transposed state of the data, when a MIDI-generating program want the data in concert pitch, it asks MEI Iron for data in the untransposed state. ?When a renderer needs to display the score in concert-pitch, it could alternately ask MEI Iron for data in the concert-pitch state. > > Brilliant idea! I haven't spent a thought on the iron for this purpose, but you're absolutely right, that's the ideal place to resolve it. We will put it on the MEIron Todo? But, I would still argue that deciding for one direction is crucial to avoid any confusion about this. The MEIron would then only provide a one-way translation, as this would only be a first step in processing the data and turning them into something else. It would not be an officially supported state of an MEI file that one would save as interchangeable file*. Did I hear correctly that you would prefer written pitch as well? 
So, any other opinions out there? > From my perspective, written pitch is the way to go. Transposing has a lot of impact on the appearance (stem direction, accidental arrangement, placement of beams/slurs, placement of articulation marks etc.). MEI can provide information about all this, but is this sensible if sounding pitch rather than visual pitch is recorded? I'm not sure whether it would be a good idea to transform all visual stuff to a separate layout tree. Thomas From raffaeleviglianti at gmail.com Mon Feb 13 12:16:11 2012 From: raffaeleviglianti at gmail.com (Raffaele Viglianti) Date: Mon, 13 Feb 2012 11:16:11 +0000 Subject: [MEI-L] @trans.diat and @trans.semi In-Reply-To: References: <005b01cce8f0$df985800$9ec90800$@mit.edu> <007d01cce8f8$5e972940$1bc57bc0$@mit.edu> <04AF4EC0-4B6C-4A72-9D29-EC10CAE6B799@edirom.de> <4D829BDA-852E-4563-9B16-85B726C75DAA@edirom.de> Message-ID: Hello, I'm also a supporter of MEI as being first and foremost a document-oriented format, so the written pitch should be preferred in my opinion. We need to make sure, though, that enough information is provided to compute the transposition for analytical purposes and for exporting to other formats, and it seems that the attributes in question are the place to express this information. If I understand correctly, this does not fully solve what Michael pointed out: "The issue is that you want to be able to change the transposition for different passages, and I don't think that changing the Instrument tag is the best approach." Do we perhaps need a way to specify when and how the transposition rules are supposed to change within a piece? Is re-defining staffDef the best way? Best, Raffaele On Mon, Feb 13, 2012 at 11:08 AM, TW wrote: > 2012/2/13 Johannes Kepper : > > On 13.02.2012 at 04:03, Craig Sapp wrote: > > > > > >> > >> MEI Iron is the ideal place to simplify the data to a standardized > form. 
The user/application could specify to MEI Iron that they want > transposed data to end up in the written state or the concert-pitch state, > then no matter what the original state of the data, the output from MEI > Iron is what is desired. When a part-generating program wants the data in > transpose form, it asks MEI Iron for the transposed state of the data, when > a MIDI-generating program want the data in concert pitch, it asks MEI Iron > for data in the untransposed state. When a renderer needs to display the > score in concert-pitch, it could alternately ask MEI Iron for data in the > concert-pitch state. > > > > Brilliant idea! I haven't spent a thought on the iron for this purpose, > but you're absolutely right, that's the ideal place to resolve it. We will > put it on the MEIron Todo? But, I would still argue that deciding for one > direction is crucial to avoid any confusion about this. The MEIron would > then only provide a one-way translation, as this would only be a first step > in processing the data and turning them into something else. It would not > be an officially supported state of an MEI file that one would save as > interchangeable file*. Did I hear correctly that you would prefer written > pitch as well? So, any other opinions out there? > > > > From my perspective, written pitch is the way to go. Transposing has > a lot of impact on the appearance (stem direction, accidental > arrangement, placement of beams/slurs, placement of articulation marks > etc.). MEI can provide information about all this, but is this > sensible if sounding pitch rather than visual pitch is recorded? I'm > not sure whether it would be a good idea to transform all visual stuff > to a separate layout tree. > > Thomas > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From pdr4h at eservices.virginia.edu Mon Feb 20 23:29:06 2012 From: pdr4h at eservices.virginia.edu (Roland, Perry (pdr4h)) Date: Mon, 20 Feb 2012 22:29:06 +0000 Subject: [MEI-L] @trans.diat and @trans.semi In-Reply-To: References: <005b01cce8f0$df985800$9ec90800$@mit.edu> <007d01cce8f8$5e972940$1bc57bc0$@mit.edu> <04AF4EC0-4B6C-4A72-9D29-EC10CAE6B799@edirom.de> <4D829BDA-852E-4563-9B16-85B726C75DAA@edirom.de> , Message-ID: Hello, all, Sorry I dropped out of the conversation. I was attending the MLA meeting in Dallas. So, it seems my proposal to allow MEI to store either written or sounded pitch fell flat. That's fine, it was just a straw man anyway. We're agreed then that @trans.diat and @trans.semi indicate the amount of "correction" necessary to achieve the concert pitch from the written one. I'll change the documentation to reflect this, something like: trans.diat -- "records the amount of diatonic pitch shift, e.g., C to C♯ = 0, C to D♭ = 1, necessary to achieve the sounded pitch from the written one. Transposition requires both trans.diat and trans.semi attributes in order to distinguish the difference, for example, between a transposition from the key of C to C♯ and one from the key of C to D♭." trans.semi -- "records the amount of pitch shift in semitones, C to C♯ = 1, C to D♭ = 1, to achieve the sounded pitch from the written one. Transposition requires both trans.diat and trans.semi attributes in order to distinguish the difference, for example, between a transposition from C to C♯ and one from C to D♭." A statement that MEI always records the written pitch in pname will also be added to the description of the pname attribute. And an example of transposition will be added to the guidelines. To answer Raffaele's question, I can't say it's the "best way", but currently the way to record changes in transposition, say from B♭ to A clarinet, is by using <staffDef>. The label and instrument definition for the staff are independent of the @trans attributes. 
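Perry's examples can be checked mechanically: both attributes are needed precisely because a one-semitone shift from C has two spellings, and only the diatonic shift disambiguates them. A small sketch (octave wrap at the B/C boundary is ignored for brevity):

```python
NAMES = ['c', 'd', 'e', 'f', 'g', 'a', 'b']
PC = {'c': 0, 'd': 2, 'e': 4, 'f': 5, 'g': 7, 'a': 9, 'b': 11}

def spell(pname, trans_diat, trans_semi):
    """Name the target pitch of a transposition: trans_diat picks the
    letter, trans_semi then fixes the accidental on that letter."""
    target = NAMES[(NAMES.index(pname) + trans_diat) % 7]
    accid = (PC[pname] + trans_semi) - PC[target]
    return target, accid

print(spell('c', 0, 1))    # ('c', 1)  -> C sharp
print(spell('c', 1, 1))    # ('d', -1) -> D flat
print(spell('d', -1, -2))  # ('c', 0)  -> B-flat clarinet: written D sounds C
```

Same semitone count, different letters, different accidentals — exactly the C♯/D♭ distinction in the documentation text above.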
This makes it possible for the transposition to change while the label (say, "Player 1") and the MIDI instrument name (say, "Clarinet") remain unchanged. Or vice versa. :-) -- p. __________________________ Perry Roland Music Library University of Virginia P. O. Box 400175 Charlottesville, VA 22904 434-982-2702 (w) pdr4h (at) virginia (dot) edu From bohl at edirom.de Sat Feb 25 12:20:09 2012 From: bohl at edirom.de (Benjamin Wolff Bohl) Date: Sat, 25 Feb 2012 12:20:09 +0100 Subject: [MEI-L] Report from the MEI Technical Team Meeting 2012-02-21 Message-ID: <4F48C3E9.9000101@edirom.de> On 21 February 2012 the MEI Technical Team held its quarterly meeting. Topics discussed were repository strategies, referencing MEI tools from the http://www.music-encoding.org website, and of course the MEI 2012 release, including the previously unreleased MEI Guidelines. The official release of MEI 2012 and corresponding guidelines is to be expected by August. During the next few days a pre-release of the schema will be made available through the newly established schema customization web service based on ROMA and capable of handling customization files in the ODD format (http://customization.music-encoding.org). This web service will gradually offer more customizations preconfigured for specific use cases. At the same time projects and developers are encouraged to submit their own customizations to the mei-incubator project (http://code.google.com/p/mei-incubator/), which is designated as a platform to share application-specific schema customizations, experimental status modules or modifications to the MEI core development. Good examples of this are the MEI-FRBR customization or the new solesmes module currently under development. Everybody is very welcome to share and test the pre-release of the schema or any of the customizations on the mei-incubator. 
If you need assistance in getting a working schema out of these files, the Technical Team is happy to answer any such questions on this list. At the same time, there are ways to support the Technical Team in its work for the coming release. Several module descriptions are still unattended. If you would like to support the MEI community, fleshing out some chapters for the guidelines is a good way. Your participation may range from providing a couple of paragraphs as a word processor document to a fully TEI-encoded chapter including musical examples in MEI. Currently the following modules are vacant: analysis, corpus, figtable, harmony, linkalign, namesdates, ptrref, tablature, text. If you would like to know what these chapters actually have to cover, or you would like to participate in one of the other chapters, please contact Perry Roland (pdr4h at virginia.edu ) or Johannes Kepper (kepper at edirom.de ). On behalf of the MEI Technical Team, Benjamin W. Bohl -- *********************************************************** Edirom - Projekt "Digitale Musikedition" Musikwissenschaftliches Seminar Detmold/Paderborn Gartenstraße 20 D -- 32756 Detmold Tel. +49 (0) 5231 / 975-669 Fax: +49 (0) 5231 / 975-668 http://www.edirom.de *********************************************************** -------------- next part -------------- An attachment with HTML data was scrubbed... URL: From donbyrd at indiana.edu Sat Feb 25 21:38:03 2012 From: donbyrd at indiana.edu (Byrd, Donald A.) Date: Sat, 25 Feb 2012 15:38:03 -0500 Subject: [MEI-L] @trans.diat and @trans.semi Message-ID: <20120225153803.ckpjfqc5i8kw8c8s@webmail.iu.edu> Hi, everyone. To make a very long story very short, my change of careers is at least temporarily on hold, and I again have time to think about MEI etc.; beyond that, I can't say, though I've talked to Perry a bit about how I might contribute to the MEI effort (by writing a full notation editor?)... 
Anyway, I have a belated comment or two on this issue. Please forgive me if I'm repeating what others said while I wasn't paying attention, or if I'm just off track and this is irrelevant! My article "Written vs. Sounding Pitch", which is (or was!) in Workshop Resources has a lot of examples of the problems here. In general, the relationship between written and sounding pitch is so messy that even using the word "transposition" to describe it makes me nervous. With scordatura, you have to think of different transpositions being in effect at the same time, even within a chord, and you have to interpret the key signature very carefully (Fig. 6 of my article). Even timpani notation of the late 18th and early 19th century, with just two notes, is like that: consider the first movement of the Beethoven 4th, in B-flat major, where the timpani B-flat's and F's are on a staff with no key signature and no accidentals (Fig. 4). I think Thomas is exactly right in that written pitch must be encoded for a lot of information about the appearance of the score to be meaningful. However, there are also many cases where you can't reliably infer the sounding pitch without giving the "transposition" on a note-by-note basis. But in that situation, why think about "transposition" at all? It makes more sense to just encode the sounding as well as the written pitch. --Don On Mon, 13 Feb 2012 12:08:55 +0100, TW wrote: > 2012/2/13 Johannes Kepper : >> Am 13.02.2012 um 04:03 schrieb Craig Sapp: >> >> >>> >>> MEI Iron is the ideal place to simplify the data to a standardized >>> form. The user/application could specify to MEI Iron that they want >>> transposed data to end up in the written state or the concert-pitch >>> state, then no matter what the original state of the data, the >>> output from MEI Iron is what is desired. 
?When a part-generating >>> program wants the data in transpose form, it asks MEI Iron for the >>> transposed state of the data, when a MIDI-generating program want >>> the data in concert pitch, it asks MEI Iron for data in the >>> untransposed state. ?When a renderer needs to display the score in >>> concert-pitch, it could alternately ask MEI Iron for data in the >>> concert-pitch state. >> >> Brilliant idea! I haven't spent a thought on the iron for this >> purpose, but you're absolutely right, that's the ideal place to >> resolve it. We will put it on the MEIron Todo? But, I would still >> argue that deciding for one direction is crucial to avoid any >> confusion about this. The MEIron would then only provide a one-way >> translation, as this would only be a first step in processing the >> data and turning them into something else. It would not be an >> officially supported state of an MEI file that one would save as >> interchangeable file*. Did I hear correctly that you would prefer >> written pitch as well? So, any other opinions out there? >> > > From my perspective, written pitch is the way to go. Transposing has > a lot of impact on the appearance (stem direction, accidental > arrangement, placement of beams/slurs, placement of articulation marks > etc.). MEI can provide information about all this, but is this > sensible if sounding pitch rather than visual pitch is recorded? I'm > not sure whether it would be a good idea to transform all visual stuff > to a separate layout tree. 
> > Thomas > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l > -- Donald Byrd Woodrow Wilson Indiana Teaching Fellow Adjunct Associate Professor of Informatics & Music Indiana University, Bloomington From pdr4h at eservices.virginia.edu Sat Feb 25 23:40:40 2012 From: pdr4h at eservices.virginia.edu (Roland, Perry (pdr4h)) Date: Sat, 25 Feb 2012 22:40:40 +0000 Subject: [MEI-L] @trans.diat and @trans.semi In-Reply-To: <20120225153803.ckpjfqc5i8kw8c8s@webmail.iu.edu> References: <20120225153803.ckpjfqc5i8kw8c8s@webmail.iu.edu> Message-ID: > It makes more sense to just encode the sounding as well as the written pitch. Which MEI accommodates by allowing <note> to have @pname (for written pitch) and @pname.ges (for sounded pitch). -- p. __________________________ Perry Roland Music Library University of Virginia P. O. Box 400175 Charlottesville, VA 22904 434-982-2702 (w) pdr4h (at) virginia (dot) edu From pdr4h at eservices.virginia.edu Thu Mar 1 23:16:31 2012 From: pdr4h at eservices.virginia.edu (Roland, Perry (pdr4h)) Date: Thu, 1 Mar 2012 22:16:31 +0000 Subject: [MEI-L] eventLike in bend, gliss, mordent, etc. Message-ID: Hello all, Please pardon the duplication if you get this message more than once. Elements in the model.eventLike class -- barLine, beam, beatRpt, bend, bTrem, chord, clef, clefGrp, custos, fTrem, gliss, halfmRpt, ineume, keySig, ligature, mensur, mRest, mRpt, mRpt2, mSpace, multiRest, multiRpt, note, pad, proport, rest, space, tuplet, uneume -- were allowed to occur in selected eventLike elements -- bend, gliss, mordent, trill, turn, note -- in earlier versions of MEI that didn't yet have the ability to encode multiple readings. Now that MEI does have <app> for dealing with multiple readings, having this "event within event" structure is redundant and confusing. I think we need to kill off this dinosaur. 
Since the next release of MEI has already been frozen with regard to the addition / deletion of features, I propose to add documentation that deprecates this feature even though it will technically be allowed. Of course, in the next-next release this feature will be disabled. Any objections? Is anyone using this? Going, going, ... Best wishes, -- p. __________________________ Perry Roland Music Library University of Virginia P. O. Box 400175 Charlottesville, VA 22904 434-982-2702 (w) pdr4h (at) virginia (dot) edu From kristina.richts at gmx.de Mon Mar 5 10:26:32 2012 From: kristina.richts at gmx.de (Kristina Richts) Date: Mon, 5 Mar 2012 10:26:32 +0100 Subject: [MEI-L] trills within beams Message-ID: <5DE906DC-281C-4DB4-BC46-B3A8A0800E72@gmx.de> Hi all, while encoding the following passage, I just noticed that there seems to be no way to encode the trill right here, as I don't want to extract this information and place it at the end of the measure. Why isn't it possible to provide notes within a beam with a <trill> element, as could be done with single notes, like this: ? Did I miss anything? Best, Kristina -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Bildschirmfoto 2012-03-05 um 08.07.00.png Type: image/png Size: 20470 bytes Desc: not available URL: From pdr4h at eservices.virginia.edu Mon Mar 5 14:49:42 2012 From: pdr4h at eservices.virginia.edu (Roland, Perry (pdr4h)) Date: Mon, 5 Mar 2012 13:49:42 +0000 Subject: [MEI-L] trills within beams In-Reply-To: <5DE906DC-281C-4DB4-BC46-B3A8A0800E72@gmx.de> References: <5DE906DC-281C-4DB4-BC46-B3A8A0800E72@gmx.de> Message-ID: Hi, Kristina, MEI is not designed to be encoded in one pass -- some things, such as trills, pedal markings, text directives, etc., must be captured after the notes. 
It might be possible to do what you suggest in some cases but it won't work all the time because it potentially leads to overlapping hierarchies. It also means that your proposed element would have to allow every other possible element, leading to opportunities for encoders to do unsupported things. From snowy (yes, snowy!) Charlottesville, -- p. __________________________ Perry Roland Music Library University of Virginia P. O. Box 400175 Charlottesville, VA 22904 434-982-2702 (w) pdr4h (at) virginia (dot) edu ________________________________ From: mei-l-bounces at lists.uni-paderborn.de [mei-l-bounces at lists.uni-paderborn.de] on behalf of Kristina Richts [kristina.richts at gmx.de] Sent: Monday, March 05, 2012 4:26 AM To: Music Encoding Initiative Subject: [MEI-L] trills within beams Hi all, while encoding the following passage, I just noticed that there seems to be no way to encode the trill right here, as I don't want to extract this information and place it at the end of the measure. [cid:2B8787F5-91C3-4F4D-A92F-10B5B83D2F4B at bib.hfm-detmold.de] Why isn't it possible to provide notes within a beam with a <trill> element, as could be done with single notes, like this: ? Did I miss anything? Best, Kristina -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Bildschirmfoto 2012-03-05 um 08.07.00.png Type: image/png Size: 20470 bytes Desc: Bildschirmfoto 2012-03-05 um 08.07.00.png URL: From raffaeleviglianti at gmail.com Mon Mar 5 15:16:13 2012 From: raffaeleviglianti at gmail.com (Raffaele Viglianti) Date: Mon, 5 Mar 2012 14:16:13 +0000 Subject: [MEI-L] trills within beams In-Reply-To: References: <5DE906DC-281C-4DB4-BC46-B3A8A0800E72@gmx.de> Message-ID: Hi Kristina, I agree with Perry. I very much prefer to deal with any element that spans multiple events and measures at the bottom of the measure where they start. 
It avoids all sorts of overlapping hierarchies. I mainly mean and , but also , , , etc. is perhaps my only exception to this rule. I typically use time stamp and/or ids (in this order or preference) to anchor them to the relevant events or position in time. I believe this is really efficient, especially when you're dealing with manuscript notation and editorial intervention, as it leaves room for those elements. Best wishes, Raffaele On Mon, Mar 5, 2012 at 1:49 PM, Roland, Perry (pdr4h) < pdr4h at eservices.virginia.edu> wrote: > Hi, Kristina, > > > > MEI is not designed to be encoded in one pass -- some things, such as > trills, pedal markings, text directives, etc., must be captured after the > notes. > > > > It might be possible to do what you suggest in some cases but it won't > work all the time because it potentially leads to overlapping hierarchies. > It also means that your proposed element would have to allow every > other possible element, leading to opportunities for encoders to do > unsupported things. > > > > From snowy (yes, snowy!) Charlottesville, > > > > -- > > p. > > > > __________________________ > Perry Roland > Music Library > University of Virginia > P. O. Box 400175 > Charlottesville, VA 22904 > 434-982-2702 (w) > pdr4h (at) virginia (dot) edu > ------------------------------ > *From:* mei-l-bounces at lists.uni-paderborn.de [ > mei-l-bounces at lists.uni-paderborn.de] on behalf of Kristina Richts [ > kristina.richts at gmx.de] > *Sent:* Monday, March 05, 2012 4:26 AM > *To:* Music Encoding Initiative > *Subject:* [MEI-L] trills within beams > > Hi all, > > while encoding the following passage, I just mentioned, that there seems > to be no way to encode the trill right here, as I don't want to extract > this information and place it at the end of the measure. > > > > Why isn't it possible to provide notes within a beam with a > element, as could be done with single notes, like this: > "down"/>? > > Did I miss anything? 
> > Best, > Kristina > > > > > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Bildschirmfoto 2012-03-05 um 08.07.00.png Type: image/png Size: 20470 bytes Desc: not available URL: From kepper at edirom.de Mon Mar 5 15:18:57 2012 From: kepper at edirom.de (Johannes Kepper) Date: Mon, 5 Mar 2012 15:18:57 +0100 Subject: [MEI-L] trills within beams In-Reply-To: References: <5DE906DC-281C-4DB4-BC46-B3A8A0800E72@gmx.de> Message-ID: <321241D0-D59E-4C72-B7B5-4D9A9F560EEF@edirom.de> I think the question was mostly targeting at trills that do not span over a number of notes, but instead just read a "tr" on top of a note. One could argue this to be a value for @artic or some such? Johannes Am 05.03.2012 um 15:16 schrieb Raffaele Viglianti: > Hi Kristina, > > I agree with Perry. I very much prefer to deal with any element that spans multiple events and measures at the bottom of the measure where they start. It avoids all sorts of overlapping hierarchies. I mainly mean and , but also , , , etc. is perhaps my only exception to this rule. > > I typically use time stamp and/or ids (in this order or preference) to anchor them to the relevant events or position in time. I believe this is really efficient, especially when you're dealing with manuscript notation and editorial intervention, as it leaves room for those elements. > > Best wishes, > Raffaele > > On Mon, Mar 5, 2012 at 1:49 PM, Roland, Perry (pdr4h) wrote: > Hi, Kristina, > > > MEI is not designed to be encoded in one pass -- some things, such as trills, pedal markings, text directives, etc., must be captured after the notes. 
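Raffaele's control-event pattern and Johannes's single-note "tr" case both fit the same shape: the events stay in the layer, and the trill is encoded after the notes and anchored by id. A hedged sketch — element and attribute spellings follow MEI conventions for <trill> with @startid, and the exact schema version in use may differ:

```xml
<measure n="1">
  <staff n="1">
    <layer n="1">
      <beam>
        <!-- the beamed group stays intact; no trill child is needed here -->
        <note xml:id="n1" pname="c" oct="5" dur="8"/>
        <note xml:id="n2" pname="d" oct="5" dur="8"/>
        <note xml:id="n3" pname="e" oct="5" dur="8"/>
      </beam>
    </layer>
  </staff>
  <!-- control event encoded after the notes, pointing back at the note -->
  <trill startid="#n1" place="above"/>
</measure>
```

This keeps the note/beam hierarchy free of overlapping spans, which is the concern Perry raises above.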
> > > It might be possible to do what you suggest in some cases but it won't work all the time because it potentially leads to overlapping hierarchies. It also means that your proposed element would have to allow every other possible element, leading to opportunities for encoders to do unsupported things. > > > From snowy (yes, snowy!) Charlottesville, > > > -- > > p. > > > > __________________________ > Perry Roland > Music Library > University of Virginia > P. O. Box 400175 > Charlottesville, VA 22904 > 434-982-2702 (w) > pdr4h (at) virginia (dot) edu > From: mei-l-bounces at lists.uni-paderborn.de [mei-l-bounces at lists.uni-paderborn.de] on behalf of Kristina Richts [kristina.richts at gmx.de] > Sent: Monday, March 05, 2012 4:26 AM > To: Music Encoding Initiative > Subject: [MEI-L] trills within beams > > Hi all, > > while encoding the following passage, I just mentioned, that there seems to be no way to encode the trill right here, as I don't want to extract this information and place it at the end of the measure. > > > > Why isn't it possible to provide notes within a beam with a element, as could be done with single notes, like this: > ? > > Did I miss anything? > > Best, > Kristina > > > > > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l > > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l From esfield at stanford.edu Tue Mar 6 00:03:29 2012 From: esfield at stanford.edu (Eleanor Selfridge-Field) Date: Mon, 5 Mar 2012 15:03:29 -0800 (PST) Subject: [MEI-L] trills within beams In-Reply-To: References: <5DE906DC-281C-4DB4-BC46-B3A8A0800E72@gmx.de> Message-ID: <00f601ccfb24$2d7bbc20$88733460$@stanford.edu> Hi, Kristina, Perry, et al. 
From my perspective Kristina is prompting a really important question, and one response ("more than one pass....") seems to be the inevitable place where all encoders end up when confronted with real music. It may be unavoidable, but it is not ideal.

What is unsettling in the responses is that we are putting hierarchy above music and the needs of encoders. When working with MSS, there are dozens of potential distractions, and making a second pass to capture left-over details requires finding the exact spot on the folio, checking to see which features were encoded on the first pass, and, over time, a lot of secondary bookkeeping about what is finished and what has yet to be done. (I know; I did that kind of housework for my Marcello catalogue in the 1980s---3000 bitty music files, each one in need of its own particular notes.) The risks of eventual inaccuracy, incomplete information, and duplication are very real.

Granted we want MEI to work, but if it is optimized for programming efficiency at the cost of usability, we may need to step back and look for other solutions. The low level of generalizability of music features across repertories is widely acknowledged, and we are simply encountering one instance here. For another example from the same category, consider this CPE Bach incipit:

We used it in our "desk-top publishing" IEEE tutorial of 1994. [For all the examples go to http://www.ccarh.org/publications/reprints/ieee/ --Category 2, Type 1]

How would MEI handle it?
Eleanor

Eleanor Selfridge-Field
Consulting Professor, Music
Braun Music Center #129
Stanford University
Stanford, CA 94305-3076, USA
http://www.stanford.edu/~esfield/
http://www.ccarh.org

From: mei-l-bounces+esfield=stanford.edu at lists.uni-paderborn.de [mailto:mei-l-bounces+esfield=stanford.edu at lists.uni-paderborn.de] On Behalf Of Roland, Perry (pdr4h)
Sent: Monday, March 05, 2012 5:50 AM
To: Music Encoding Initiative
Subject: Re: [MEI-L] trills within beams

[...]

-------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed...
Name: image001.png Type: image/png Size: 20470 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image003.jpg Type: image/jpeg Size: 15918 bytes Desc: not available URL: From andrew.hankinson at mail.mcgill.ca Tue Mar 6 01:57:06 2012 From: andrew.hankinson at mail.mcgill.ca (Andrew Hankinson, Mr) Date: Tue, 6 Mar 2012 00:57:06 +0000 Subject: [MEI-L] trills within beams In-Reply-To: <24846_1330988626_4F554651_24846_2_1_00f601ccfb24$2d7bbc20$88733460$@stanford.edu> References: <5DE906DC-281C-4DB4-BC46-B3A8A0800E72@gmx.de> <24846_1330988626_4F554651_24846_2_1_00f601ccfb24$2d7bbc20$88733460$@stanford.edu> Message-ID: <114F68D8-A9A2-422B-9C85-716CAE715DD6@mail.mcgill.ca> Hi Eleanor, The way I understand the problem is a choice between the following options: or: or: In the first example, note becomes a "child" element of trill; the second example inverts that so that trill becomes a child of note. If we were to wish to express this in "pure" XML, it would need to be a choice between either of these, since XML imposes a very hierarchical structure if used naively. Sometimes this hierarchy makes sense (note as a child of chord), but in this case it doesn't really make musical sense for trill to be a parent or a child of note. If we were to want to expand trill so that it covers more than one note, we would have to choose option 1 OR we would have to try and figure out some other way of grouping notes. Perry's concern was that if we allow all things that can be trilled, or that can hold children that can also be trilled, then we pretty much have to allow most things as children of trills. This makes the encoding task much more difficult, since it can be very easy to get into trouble and do nonsensical things. MEI and TEI have a fairly elegant solution to this which is still valid XML but allows us to break out of this rigid hierarchy. 
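(The inline XML examples in this message were scrubbed by the archive. A minimal sketch of the three options being contrasted, assuming MEI-style <note> and <trill> markup -- the specific attribute values here are illustrative, not the ones from the original message:)

```xml
<!-- Option 1: the note as a child of the trill -->
<trill>
  <note pname="c" oct="4" dur="4"/>
</trill>

<!-- Option 2: the trill as a child of the note -->
<note pname="c" oct="4" dur="4">
  <trill/>
</note>

<!-- Option 3 (the MEI/TEI standoff approach): the trill points
     at the note it starts on via @startid -->
<note xml:id="n1" pname="c" oct="4" dur="4"/>
<trill startid="#n1"/>
```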
In the third example, we remove the hierarchy and assign the trill to the note by reference; that is, the element is not hierarchically related to note, but the @startid attribute points to the element where the trill starts. This is much easier to handle, since you can put many other elements between them. For example, you could do this (a highly simplified version of the first measure of the example you attached):

This allows much more flexibility in the encoding, since it means you do not have to decide whether the trill is hierarchically higher or lower than the note; you can simply list all the "spanning" elements at the end of the measure, and then give @startid/@endid or @tstamp references (as Raffaele mentioned).

I can't speak directly for Perry, but I think that's what he meant by "one pass" vs. "two pass". It's not that you can't do all the encoding in the same sitting, it's just that sometimes you'll want to identify and encode elements that don't strictly fall into the hierarchy later in the measure. So you would, in effect, do two passes through the measure: one to encode the notes, and the other to encode the other events.

The complexity of keeping all this straight when encoding is certainly not trivial, but I don't think that's an MEI issue. My own feeling is that it should be the job of the notation encoding software to help you manage all of the bits and pieces.

I think this addresses your concern directly. You don't have to put all elements in an arbitrary hierarchy since things can be referenced after they have "happened" in the score, without needing to decide if it makes musical sense to have it as a hierarchical relationship. This, in my opinion, is more musical than other attempts at encoding music notation in XML since you don't have to make seemingly arbitrary decisions over which musical structure is a child of another.

-Andrew

On 2012-03-05, at 6:03 PM, Eleanor Selfridge-Field wrote:

Hi, Kristina, Perry, et al.
Best,
Kristina

_______________________________________________
mei-l mailing list
mei-l at lists.uni-paderborn.de
https://lists.uni-paderborn.de/mailman/listinfo/mei-l

From kepper at edirom.de Tue Mar 6 13:11:16 2012 From: kepper at edirom.de (Johannes Kepper) Date: Tue, 6 Mar 2012 13:11:16 +0100 Subject: [MEI-L] trills within beams In-Reply-To: <114F68D8-A9A2-422B-9C85-716CAE715DD6@mail.mcgill.ca> References: <5DE906DC-281C-4DB4-BC46-B3A8A0800E72@gmx.de> <24846_1330988626_4F554651_24846_2_1_00f601ccfb24$2d7bbc20$88733460$@stanford.edu> <114F68D8-A9A2-422B-9C85-716CAE715DD6@mail.mcgill.ca> Message-ID:

Hi Andrew,

I absolutely agree with what you say about hierarchy issues etc. I think this discussion is very helpful for identifying what MEI is and what it is not, and, even more beneficial, how people understand it. But still, I think it misses Kristina's initial question. Sometimes, a trill stretches no longer than the note it's written above. In these cases, it is not a spanning element in its own right, but rather a playing instruction for this particular note. The question then is not whether we want to redefine the model of <trill> to allow it within a note or as a container of notes, but instead whether "tr" should be an allowed value of @artic (or some other attribute on <note>). By no means would this argue against the existence of the current standoff <trill>; it would just be a shortcut for describing trills that do not stretch beyond their initial note. Currently, the <trill> would have to duplicate the @tstamp and @dur of this note (@tstamp on notes is normally omitted, I know).

It is fine to decide that we don't want to offer this limited-power shortcut for a certain kind of trill, but we have several such constructs for other things in MEI already. I don't see that anyone asked for remodelling the trill element (but please correct me!!!!).
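(The duplication point can be made concrete. A sketch assuming simplified MEI-like markup; the standoff form is the existing mechanism, while the @artic value "tr" is the hypothetical shortcut proposed in this thread, not part of the schema. The attribute usage follows the wording of the message rather than the schema's exact definitions:)

```xml
<!-- Current standoff form: the trill repeats the note's position -->
<beam>
  <note xml:id="n1" pname="g" oct="4" dur="8" tstamp="1"/>
  <note xml:id="n2" pname="a" oct="4" dur="8"/>
</beam>
<trill startid="#n1" tstamp="1" dur="8"/>

<!-- Proposed shortcut for a trill that does not outlast its note -->
<note xml:id="n1" pname="g" oct="4" dur="8" artic="tr"/>
```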
Best,
Johannes

Am 06.03.2012 um 01:57 schrieb Andrew Hankinson, Mr:

> [...]
> > Best, > Kristina > > > > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l > > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l From andrew.hankinson at mail.mcgill.ca Tue Mar 6 16:05:28 2012 From: andrew.hankinson at mail.mcgill.ca (Andrew Hankinson, Mr) Date: Tue, 6 Mar 2012 15:05:28 +0000 Subject: [MEI-L] trills within beams In-Reply-To: <21712_1331035890_4F55FEF2_21712_135_1_ED80AB9F-6C17-4270-BAE9-E2CE42AE2D24@edirom.de> References: <5DE906DC-281C-4DB4-BC46-B3A8A0800E72@gmx.de> <24846_1330988626_4F554651_24846_2_1_00f601ccfb24$2d7bbc20$88733460$@stanford.edu> <114F68D8-A9A2-422B-9C85-716CAE715DD6@mail.mcgill.ca> <21712_1331035890_4F55FEF2_21712_135_1_ED80AB9F-6C17-4270-BAE9-E2CE42AE2D24@edirom.de> Message-ID: Thanks Johannes! I gave a couple examples to show how trill *could* be done in XML (and is done in some other XML-based music encoding schemes), since I was responding to Eleanor's question. But I don't think that it *should* be done that way, since, like I said, it creates a situation where you're forced into creating these artificial and non-musical hierarchies. I didn't mean to suggest that I felt any changes to should be made, I just wanted to address Eleanor's question directly by showing how MEI does it differently. But to address Kristina's question: I did some asking around our lab yesterday. We came to the agreement that a trill is an ornament, not an articulation. If doesn't have @ornament or something along those lines, I think this is a great argument for it. 
One of the musicologists in our lab found a couple of helpful pages for this discussion:

http://www.music.vt.edu/musicdictionary/appendix/ornaments/ornaments.html

Couperin's Ornaments: http://books.google.ca/books?id=CecBsvk7Oz0C&lpg=PA34&ots=VQ2uwRSt_n&dq=ornaments%20couperin&pg=PA34#v=onepage&q=ornaments%20couperin&f=false

-Andrew

On 2012-03-06, at 7:11 AM, Johannes Kepper wrote:

> [...]
>> Best,
>> Kristina
>>
>> _______________________________________________
>> mei-l mailing list
>> mei-l at lists.uni-paderborn.de
>> https://lists.uni-paderborn.de/mailman/listinfo/mei-l

> _______________________________________________
> mei-l mailing list
> mei-l at lists.uni-paderborn.de
> https://lists.uni-paderborn.de/mailman/listinfo/mei-l

-------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 1054 bytes Desc: not available URL:

From pdr4h at eservices.virginia.edu Tue Mar 6 16:57:55 2012 From: pdr4h at eservices.virginia.edu (Roland, Perry (pdr4h)) Date: Tue, 6 Mar 2012 15:57:55 +0000 Subject: [MEI-L] trills within beams In-Reply-To: References: <5DE906DC-281C-4DB4-BC46-B3A8A0800E72@gmx.de> <24846_1330988626_4F554651_24846_2_1_00f601ccfb24$2d7bbc20$88733460$@stanford.edu> <114F68D8-A9A2-422B-9C85-716CAE715DD6@mail.mcgill.ca> <21712_1331035890_4F55FEF2_21712_135_1_ED80AB9F-6C17-4270-BAE9-E2CE42AE2D24@edirom.de>, Message-ID:

If single-note trills are treated differently, then why not any other single-note / instantaneous "control event", such as arpeg, breath, pedal, reh, dynam, etc.? Once you start down that road, Pandora's box is opened. A proliferation of attributes wouldn't be helpful. If attributes were added, @ornam for example, they would only be useful part of the time; that is, in the case of @ornam/@trill, for a single-note trill and when complete control of the rendering of the trill is to be handled by the rendering engine. There's no opportunity to add visual information to that trill without resorting to attributes about attributes (and that way certainly lies madness!)
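(To illustrate the point about visual information: a standoff element has room for further attributes, while an attribute value does not. A sketch with illustrative names -- @place and @color stand in for whatever visual details an encoder might need, and @ornam remains the hypothetical shortcut under discussion:)

```xml
<!-- Standoff element: visual details can ride along as attributes -->
<note xml:id="n1" pname="g" oct="4" dur="4"/>
<trill startid="#n1" place="above" color="red"/>

<!-- Attribute shortcut: nowhere to attach visual details without
     "attributes about attributes" -->
<note pname="g" oct="4" dur="4" ornam="trill"/>
```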
If an child of were added or allowed to be a child of in order to accommodate visual info., then we're back into the hierarchy problems Andrew so eloquently described yesterday. In addition, if we allow both possibilities (attribute and child), then user confusion is increased and interchange / interoperability diminished.

When these other things are considered, I believe recording trills and such after the notes they're attached to is still the best of the alternatives.

--
p.

__________________________
Perry Roland
Music Library
University of Virginia
P. O. Box 400175
Charlottesville, VA 22904
434-982-2702 (w)
pdr4h (at) virginia (dot) edu

From: mei-l-bounces at lists.uni-paderborn.de [mei-l-bounces at lists.uni-paderborn.de] on behalf of Andrew Hankinson, Mr [andrew.hankinson at mail.mcgill.ca]
Sent: Tuesday, March 06, 2012 10:05 AM
To: Music Encoding Initiative
Subject: Re: [MEI-L] trills within beams

[...]
One of the musicologists in our lab found a couple of helpful pages for this discussion:

http://www.music.vt.edu/musicdictionary/appendix/ornaments/ornaments.html

Couperin's Ornaments:
http://books.google.ca/books?id=CecBsvk7Oz0C&lpg=PA34&ots=VQ2uwRSt_n&dq=ornaments%20couperin&pg=PA34#v=onepage&q=ornaments%20couperin&f=false

-Andrew

On 2012-03-06, at 7:11 AM, Johannes Kepper wrote:

Hi Andrew,

I absolutely agree with what you say about hierarchy issues etc. I think this discussion is very helpful for identifying what MEI is and what it is not, and, even more beneficial, how people understand it. But still, I think it misses Kristina's initial question. Sometimes, a trill stretches no longer than the note it's written above. In these cases, it seems to be no spanning element on its own, but rather a playing instruction for this particular note. The question then is not whether we want to redefine the model of to allow it within a note or as a container of notes, but instead whether "tr" should be an allowed value of @artic (or any other attribute on ). By no means would this argue against the existence of the current standoff ; it would just be a shortcut for describing trills that do not stretch beyond their initial note. Currently, the would have to duplicate the @tstamp and @dur of this note (@tstamp on notes is normally omitted, I know).

It is fine to decide that we don't want to offer this limited-power shortcut for a certain kind of trill, but we have several such constructs for other things in MEI already. I don't see that anyone asked for remodelling the trill element (but please correct me!!!!).

Best,
Johannes

Am 06.03.2012 um 01:57 schrieb Andrew Hankinson, Mr:

Hi Eleanor,

The way I understand the problem is a choice between the following options: or: or:

In the first example, note becomes a "child" element of trill; the second example inverts that so that trill becomes a child of note.
If we wished to express this in "pure" XML, it would need to be a choice between the two, since XML imposes a very hierarchical structure if used naively. Sometimes this hierarchy makes sense (note as a child of chord), but in this case it doesn't really make musical sense for trill to be a parent or a child of note.

If we wanted to expand trill so that it covers more than one note, we would have to choose option 1, OR we would have to try and figure out some other way of grouping notes. Perry's concern was that if we allow all things that can be trilled, or that can hold children that can also be trilled, then we pretty much have to allow most things as children of trills. This makes the encoding task much more difficult, since it can be very easy to get into trouble and do nonsensical things.

MEI and TEI have a fairly elegant solution to this which is still valid XML but allows us to break out of this rigid hierarchy. In the third example, we remove the hierarchy and assign the trill to the note by reference; that is, the element is not hierarchically related to note, but the @startid attribute points to the element where the trill starts. This is much easier to handle, since you can put many other elements between them. For example, you could do this (a highly simplified version of the first measure of the example you attached):

This allows much more flexibility in the encoding, since it means you do not have to decide whether the trill is hierarchically higher or lower than the note; you can simply list all the "spanning" elements at the end of the measure, and then give @startid/@endid or @tstamp references (as Raffaele mentioned).

I can't speak directly for Perry, but I think that's what he meant by "one pass" vs. "two pass". It's not that you can't do all the encoding in the same sitting, it's just that sometimes you'll want to identify and encode elements that don't strictly fall into the hierarchy later in the measure.
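The standoff referencing described here can be sketched as follows. This is a minimal illustration with assumed, MEI-like element and attribute names (note, beam, trill, @startid), not a validated MEI encoding: the trill sits at the end of the measure and points back to its note by id.

```python
import xml.etree.ElementTree as ET

# Events are encoded first; "control events" such as the trill follow at
# the end of the measure and point back by reference instead of nesting.
measure = ET.fromstring(
    '<measure n="1">'
    '<layer>'
    '<beam>'
    '<note xml:id="m1n1" pname="d" oct="5" dur="8"/>'
    '<note xml:id="m1n2" pname="c" oct="5" dur="8"/>'
    '</beam>'
    '</layer>'
    '<trill startid="#m1n1"/>'
    '</measure>'
)

XML_ID = '{http://www.w3.org/XML/1998/namespace}id'

def resolve(root, ref):
    """Follow a '#id' reference to the element it names, or None."""
    target_id = ref.lstrip('#')
    for el in root.iter():
        if el.get(XML_ID) == target_id:
            return el
    return None

trill = measure.find('trill')
trilled_note = resolve(measure, trill.get('startid'))
print(trilled_note.get('pname'))  # the note the trill is attached to
```

Because the link is a reference rather than a parent-child relation, the note can stay inside the beam while the trill is listed with the other spanning elements.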
So you would, in effect, do two passes through the measure: one to encode the notes, and the other to encode the other events.

The complexity of keeping all this straight when encoding is certainly not trivial, but I don't think that's an MEI issue. My own feeling is that it should be the job of the notation encoding software to help you manage all of the bits and pieces.

I think this addresses your concern directly. You don't have to put all elements in an arbitrary hierarchy since things can be referenced after they have "happened" in the score, without needing to decide if it makes musical sense to have it as a hierarchical relationship. This, in my opinion, is more musical than other attempts at encoding music notation in XML since you don't have to make seemingly arbitrary decisions over which musical structure is a child of another.

-Andrew

On 2012-03-05, at 6:03 PM, Eleanor Selfridge-Field wrote:

Hi, Kristina, Perry, et al.

From my perspective Kristina is prompting a really important question, and one response ("more than one pass....") seems to be the inevitable place where all encoders end up when confronted with real music. It may be unavoidable, but it is not ideal.

What is unsettling in the responses is that we are putting hierarchy above music and the needs of encoders. When working with MSS, there are dozens of potential distractions, and making a second pass to capture left-over details requires finding the exact spot on the folio, checking to see which features were encoded on the first pass, and, over time, a lot of secondary bookkeeping about what is finished and what has yet to be done. (I know; I did that kind of housework for my Marcello catalogue in the 1980s -- 3000 bitty music files, each one in need of its own particular notes.) The risks of eventual inaccuracy, incomplete information, and duplication are very real.
Granted we want MEI to work, but if it is optimized for programming efficiency at the cost of usability, we may need to step back and look for other solutions. The low level of generalizability of music features across repertories is widely acknowledged, and we are simply encountering one instance here. For another example from the same category, consider this CPE Bach incipit:

We used it in our "desk-top publishing" IEEE tutorial of 1994. [For all the examples go to http://www.ccarh.org/publications/reprints/ieee/ --Category 2, Type 1]

How would MEI handle it?

Eleanor

Eleanor Selfridge-Field
Consulting Professor, Music
Braun Music Center #129
Stanford University
Stanford, CA 94305-3076, USA
http://www.stanford.edu/~esfield/
http://www.ccarh.org

From: mei-l-bounces+esfield=stanford.edu at lists.uni-paderborn.de [mailto:mei-l-bounces+esfield=stanford.edu at lists.uni-paderborn.de] On Behalf Of Roland, Perry (pdr4h)
Sent: Monday, March 05, 2012 5:50 AM
To: Music Encoding Initiative
Subject: Re: [MEI-L] trills within beams

Hi, Kristina,

MEI is not designed to be encoded in one pass -- some things, such as trills, pedal markings, text directives, etc., must be captured after the notes.

It might be possible to do what you suggest in some cases, but it won't work all the time because it potentially leads to overlapping hierarchies. It also means that your proposed element would have to allow every other possible element, leading to opportunities for encoders to do unsupported things.

From snowy (yes, snowy!) Charlottesville,

-- p.

__________________________
Perry Roland
Music Library
University of Virginia
P. O. Box 400175
Charlottesville, VA 22904
434-982-2702 (w)
pdr4h (at) virginia (dot) edu
________________________________
From: mei-l-bounces at lists.uni-paderborn.de [mei-l-bounces at lists.uni-paderborn.de] on behalf of Kristina Richts [kristina.richts at gmx.de]
Sent: Monday, March 05, 2012 4:26 AM
To: Music Encoding Initiative
Subject: [MEI-L] trills within beams

Hi all,

while encoding the following passage, I just noticed that there seems to be no way to encode the trill right here, as I don't want to extract this information and place it at the end of the measure.

Why isn't it possible to provide notes within a beam with a element, as could be done with single notes, like this: ?

Did I miss anything?

Best,
Kristina

_______________________________________________
mei-l mailing list
mei-l at lists.uni-paderborn.de
https://lists.uni-paderborn.de/mailman/listinfo/mei-l

From kepper at edirom.de Tue Mar 6 17:06:11 2012
From: kepper at edirom.de (Johannes Kepper)
Date: Tue, 6 Mar 2012 17:06:11 +0100
Subject: [MEI-L] trills within beams
In-Reply-To: References: <5DE906DC-281C-4DB4-BC46-B3A8A0800E72@gmx.de> <24846_1330988626_4F554651_24846_2_1_00f601ccfb24$2d7bbc20$88733460$@stanford.edu> <114F68D8-A9A2-422B-9C85-716CAE715DD6@mail.mcgill.ca> <21712_1331035890_4F55FEF2_21712_135_1_ED80AB9F-6C17-4270-BAE9-E2CE42AE2D24@edirom.de>,
Message-ID:

I see the point, but in this case, shouldn't we consider deprecating @tie, @slur and similar constructs? The arguments you provide below apply to them as well?
devil's advocate
Johannes

Am 06.03.2012 um 16:57 schrieb Roland, Perry (pdr4h):

> If single-note trills are treated differently, then why not any other single-note / instantaneous "control event", such as arpeg, breath, pedal, reh, dynam, etc.? Once you start down that road, Pandora's box is opened. A proliferation of attributes wouldn't be helpful.
> [...]

From pdr4h at eservices.virginia.edu Tue Mar 6 17:18:18 2012
From: pdr4h at eservices.virginia.edu (Roland, Perry (pdr4h))
Date: Tue, 6 Mar 2012 16:18:18 +0000
Subject: [MEI-L] trills within beams
In-Reply-To: References: <5DE906DC-281C-4DB4-BC46-B3A8A0800E72@gmx.de> <24846_1330988626_4F554651_24846_2_1_00f601ccfb24$2d7bbc20$88733460$@stanford.edu> <114F68D8-A9A2-422B-9C85-716CAE715DD6@mail.mcgill.ca> <21712_1331035890_4F55FEF2_21712_135_1_ED80AB9F-6C17-4270-BAE9-E2CE42AE2D24@edirom.de>, ,
Message-ID:

@tie, @slur, and such, were put in as conveniences for the hand encoder. The question is: When does a plethora of conveniences become inconvenient? Where the line gets drawn may be arbitrary, but a line needs to be drawn nonetheless.

I can certainly imagine the attributes you cite being deprecated over time, or at least ignored when encoding software (like a GUI editor) is used. I can also imagine them being converted to the element form by a canonizer, like MEIron.

If attributes such as these are conveniences that will probably be ignored or deprecated in the future, why add more like them now? (That's a rhetorical question.)

-- p.

__________________________
Perry Roland
Music Library
University of Virginia
P. O.
Box 400175 Charlottesville, VA 22904 434-982-2702 (w) pdr4h (at) virginia (dot) edu ________________________________________ From: mei-l-bounces at lists.uni-paderborn.de [mei-l-bounces at lists.uni-paderborn.de] on behalf of Johannes Kepper [kepper at edirom.de] Sent: Tuesday, March 06, 2012 11:06 AM To: Music Encoding Initiative Subject: Re: [MEI-L] trills within beams I see the point, but in this case, shouldn't we consider to deprecate @tie, @slur and similar constructs? The arguments you provide below apply to them as well? devil's advocate Johannes Am 06.03.2012 um 16:57 schrieb Roland, Perry (pdr4h): > If single-note trills are treated differently, then why not any other single-note / instantaneous "control event", such as arpeg, breath, pedal, reh, dynam, etc.? Once you start down that road, Pandora's box is opened. A proliferation of attributes wouldn't be helpful. > > If attributes were added, @ornam for example, they would only be useful part of the time; that is, in the case of @ornam/ @trill, for a single-note trill and when complete control of the rendering of the trill is to be handled by the rendering engine. There's no opportunity to add visual information to that trill without resorting to attributes about attributes (and that way certainly lies madness!) > > If an child of were added or allowed to be a child of in order to accommodate visual info., then we're back into the hierarchy problems Andrew so eloquently described yesterday. In addition, if we allow both possibilities (attribute and child), then user confusion is increased and interchange / interoperability diminished. > > When these other things are considered, I believe recording trills and such after the notes they're attached to is still the best of the alternatives. > > -- > p. > > __________________________ > Perry Roland > Music Library > University of Virginia > P. O. 
Box 400175 > Charlottesville, VA 22904 > 434-982-2702 (w) > pdr4h (at) virginia (dot) edu > > > > From: mei-l-bounces at lists.uni-paderborn.de [mei-l-bounces at lists.uni-paderborn.de] on behalf of Andrew Hankinson, Mr [andrew.hankinson at mail.mcgill.ca] > Sent: Tuesday, March 06, 2012 10:05 AM > To: Music Encoding Initiative > Subject: Re: [MEI-L] trills within beams > > > Thanks Johannes! > > > I gave a couple examples to show how trill *could* be done in XML (and is done in some other XML-based music encoding schemes), since I was responding to Eleanor's question. But I don't think that it *should* be done that way, since, like I said, it creates a situation where you're forced into creating these artificial and non-musical hierarchies. I didn't mean to suggest that I felt any changes to should be made, I just wanted to address Eleanor's question directly by showing how MEI does it differently. > > > But to address Kristina's question: I did some asking around our lab yesterday. We came to the agreement that a trill is an ornament, not an articulation. If doesn't have @ornament or something along those lines, I think this is a great argument for it. > > > One of the musicologists in our lab found a couple helpful pages for this discussion: > > > http://www.music.vt.edu/musicdictionary/appendix/ornaments/ornaments.html > > > Couperin's Ornaments: > http://books.google.ca/books?id=CecBsvk7Oz0C&lpg=PA34&ots=VQ2uwRSt_n&dq=ornaments%20couperin&pg=PA34#v=onepage&q=ornaments%20couperin&f=false > > > -Andrew > > > On 2012-03-06, at 7:11 AM, Johannes Kepper wrote: > > > Hi Andrew, > > I absolutely agree with what you say about hierarchy issues etc. I think this discussion is very helpful for identifying what MEI is and what it is not, and, even more beneficial, how people understand it. But still, I think it misses Kristina's initial question. Sometimes, a trill stretches no longer than the note it's written above. 
In these cases, it seems to be no spanning element on its own, but rather a playing instruction for this particular note. The question then is not whether we want to redefine the model of to allow it within a note or as a container of notes, but instead if "tr" should be an allowed value of @artic (or any other attribute on ). By no means this would argue against the existence of the current standoff , it would just be a shortcut for describing trill that do not stretch beyond their initial note. Currently, the would have to duplicate the @tstamp and @dur of this note (@tstamp on notes is normally omitted, I know). > > It is fine to decide that we don't want to offer this limited-power shortcut for a certain kind of trills, but we have several such constructs for other things in MEI already. I don't see that anyone asked for remodelling the trill element (but please correct me!!!!). > > Best, > Johannes > > > > > Am 06.03.2012 um 01:57 schrieb Andrew Hankinson, Mr: > > > Hi Eleanor, > > > > The way I understand the problem is a choice between the following options: > > > > > > > > > > > > or: > > > > > > > > > > > > > > or: > > > > > > > > > > In the first example, note becomes a "child" element of trill; the second example inverts that so that trill becomes a child of note. If we were to wish to express this in "pure" XML, it would need to be a choice between either of these, since XML imposes a very hierarchical structure if used naively. Sometimes this hierarchy makes sense (note as a child of chord), but in this case it doesn't really make musical sense for trill to be a parent or a child of note. > > > > If we were to want to expand trill so that it covers more than one note, we would have to choose option 1 OR we would have to try and figure out some other way of grouping notes. 
Perry's concern was that if we allow all things that can be trilled, or that can hold children that can also be trilled, then we pretty much have to allow most things as children of trills. This makes the encoding task much more difficult, since it can be very easy to get into trouble and do nonsensical things. > > > > MEI and TEI have a fairly elegant solution to this which is still valid XML but allows us to break out of this rigid hierarchy. > > > > In the third example, we remove the hierarchy and assign the trill to the note by reference; that is, the element is not hierarchically related to note, but the @startid attribute points to the element where the trill starts. This is much easier to handle, since you can put many other elements between them. For example, you could do this (a highly simplified version of the first measure of the example you attached): > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > This allows much more flexibility in the encoding, since it means you do not have to decide whether the trill is hierarchically higher or lower than the note; you can simply list all the "spanning" elements at the end of the measure, and then give @startid/@endid or @tstamp references (as Raffaele mentioned). > > > > I can't speak directly for Perry, but I think that's what he meant by "one pass" vs. "two pass". It's not that you can't do all the encoding in the same sitting, it's just that sometimes you'll want to identify and encode elements that don't strictly fall into the hierarchy later in the measure. So you would, in effect, do two passes through the measure: one to encode the notes, and the other to encode the other events. > > > > The complexity of keeping all this straight when encoding is certainly not trivial, but I don't think that's an MEI issue. 
My own feeling is that it should be the job of the notation encoding software to help you manage all of the bits and pieces > > > > I think this addresses your concern directly. You don't have to put all elements in an arbitrary hierarchy since things can be referenced after they have "happened" in the score, without needing to decide if it makes musical sense to have it as a hierarchical relationship. This, in my opinion, is more musical than other attempts at encoding music notation in XML since you don't have to make seemingly arbitrary decisions over which musical structure is a child of another. > > > > -Andrew > > > > On 2012-03-05, at 6:03 PM, Eleanor Selfridge-Field wrote: > > > > Hi, Kristina, Perry, et al. > > > > From my perspective Kristina is prompting a really important question, and one response (?more than one pass....?) seems to be the inevitable place where all encoders end up when confronted with real music. It may be unavoidable, but it is not ideal. > > > > What is unsettling in the responses is that we are putting hierarchy above music and the needs of encoders. > > When working with MSS, there are dozens of potential distractions, and making a second pass to capture left-over details requires finding the exact spot on the folio, checking to see which features were encoded on the first pass, and, over time, a lot of secondary bookkeeping about what is finished and what has yet to be done. (I know; I did that kind of housework for my Marcello catalogue in the 1980s---3000 bitty music files, each one in need of its own particular notes.) The risks of eventual inaccuracy, incomplete information, and duplication are very real. > > > > Granted we want MEI to work, but if it is optimized for programming efficiency at the cost of usability, we may need to step back and look for other solutions. The low level of generalizability of music features across repertories is widely acknowledged, and we are simply encountering one instance here. 
For another example from the same category, consider this CPE Bach incipit: > > > > > > We used it in our ?desk-top publishing IEEE tutorial of 1994. [For all the examples go to http://www.ccarh.org/publications/reprints/ieee/ --Category 2, Type 1] > > > > How would MEI handle it? > > > > Eleanor > > > > > > Eleanor Selfridge-Field > > Consulting Professor, Music > > Braun Music Center #129 > > Stanford University > > Stanford, CA 94305-3076, USA > > http://www.stanford.edu/~esfield/ > > http://www.ccarh.org > > > > > > > > From: mei-l-bounces+esfield=stanford.edu at lists.uni-paderborn.de [mailto:mei-l-bounces+esfield=stanford.edu at lists.uni-paderborn.de] On Behalf Of Roland, Perry (pdr4h) > > Sent: Monday, March 05, 2012 5:50 AM > > To: Music Encoding Initiative > > Subject: Re: [MEI-L] trills within beams > > > > Hi, Kristina, > > > > MEI is not designed to be encoded in one pass -- some things, such as trills, pedal markings, text directives, etc., must be captured after the notes. > > > > It might be possible to do what you suggest in some cases but it won't work all the time because it potentially leads to overlapping hierarchies. It also means that your proposed element would have to allow every other possible element, leading to opportunities for encoders to do unsupported things. > > > > From snowy (yes, snowy!) Charlottesville, > > > > -- > > p. > > > > > > __________________________ > > Perry Roland > > Music Library > > University of Virginia > > P. O. 
Box 400175 > > Charlottesville, VA 22904 > > 434-982-2702 (w) > > pdr4h (at) virginia (dot) edu > > ________________________________ > > From: mei-l-bounces at lists.uni-paderborn.de [mei-l-bounces at lists.uni-paderborn.de] on behalf of Kristina Richts [kristina.richts at gmx.de] > > Sent: Monday, March 05, 2012 4:26 AM > > To: Music Encoding Initiative > > Subject: [MEI-L] trills within beams > > Hi all, > > > > while encoding the following passage, I just noticed that there seems to be no way to encode the trill right here, as I don't want to extract this information and place it at the end of the measure. > > > > > > > > Why isn't it possible to provide notes within a beam with a <trill> element, as could be done with single notes, like this: > > ? > > > > Did I miss anything? > > > > Best, > > Kristina > > > > > > > > > > _______________________________________________ > > mei-l mailing list > > mei-l at lists.uni-paderborn.de > > https://lists.uni-paderborn.de/mailman/listinfo/mei-l > > > > > > _______________________________________________ > > mei-l mailing list > > mei-l at lists.uni-paderborn.de > > https://lists.uni-paderborn.de/mailman/listinfo/mei-l > > > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l _______________________________________________ mei-l mailing list mei-l at lists.uni-paderborn.de https://lists.uni-paderborn.de/mailman/listinfo/mei-l From kepper at edirom.de Tue Mar 6 17:22:28 2012 From: kepper at edirom.de (Johannes Kepper) Date: Tue, 6 Mar 2012 17:22:28 +0100 Subject: [MEI-L] trills within beams In-Reply-To: References: <5DE906DC-281C-4DB4-BC46-B3A8A0800E72@gmx.de> <24846_1330988626_4F554651_24846_2_1_00f601ccfb24$2d7bbc20$88733460$@stanford.edu> 
<114F68D8-A9A2-422B-9C85-716CAE715DD6@mail.mcgill.ca> <21712_1331035890_4F55FEF2_21712_135_1_ED80AB9F-6C17-4270-BAE9-E2CE42AE2D24@edirom.de>, , Message-ID: <07B81BC7-2C56-4C51-BCA3-FC7A274036C4@edirom.de> if this turns out to be a way to get rid of the [i|m|t][1-6] datatype, I'm more than happy :-) (do we need to cover that in the guidelines, or can't we just deprecate it now?) (that's also a rhetorical question) Am 06.03.2012 um 17:18 schrieb Roland, Perry (pdr4h): > > @tie, @slur, and such, were put in as conveniences for the hand encoder. The question is: When does a plethora of conveniences become inconvenient? Where the line gets drawn may be arbitrary, but a line needs to be drawn nonetheless. > > I can certainly imagine the attributes you cite being deprecated over time. Or at least ignored when encoding software (like a GUI editor) is used. I can also imagine them being converted to the element form by a canonizer, like MEIron. > > If attributes such as these are conveniences that will probably be ignored or deprecated in the future, why add more like them now? (That's a rhetorical question.) > > -- > p. > > > __________________________ > Perry Roland > Music Library > University of Virginia > P. O. Box 400175 > Charlottesville, VA 22904 > 434-982-2702 (w) > pdr4h (at) virginia (dot) edu > ________________________________________ > From: mei-l-bounces at lists.uni-paderborn.de [mei-l-bounces at lists.uni-paderborn.de] on behalf of Johannes Kepper [kepper at edirom.de] > Sent: Tuesday, March 06, 2012 11:06 AM > To: Music Encoding Initiative > Subject: Re: [MEI-L] trills within beams > > I see the point, but in this case, shouldn't we consider to deprecate @tie, @slur and similar constructs? The arguments you provide below apply to them as well? 
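[Editor's sketch: the attribute-vs-element trade-off Perry and Johannes weigh above can be shown with a tie. The attribute form uses the [i|m|t][1-6]-style values mentioned (i = initial, t = terminal); the element names and attributes follow common MEI practice but are assumptions here, not text from the thread.]

```xml
<!-- attribute shortcut: the tie is recorded on the notes themselves
     (hand-encoder convenience; values from the [i|m|t][1-6] datatype) -->
<measure n="1">
  <layer>
    <note xml:id="n1" pname="c" oct="4" dur="2" tie="i"/>
    <note xml:id="n2" pname="c" oct="4" dur="2" tie="t"/>
  </layer>
</measure>

<!-- element form: the same tie as a standoff element after the notes,
     the shape a canonizer like MEIron could convert the attributes into -->
<measure n="1">
  <layer>
    <note xml:id="n1" pname="c" oct="4" dur="2"/>
    <note xml:id="n2" pname="c" oct="4" dur="2"/>
  </layer>
  <tie startid="#n1" endid="#n2"/>
</measure>
```

The element form is what Perry's "canonizer" remark refers to: the attribute shortcut and the standoff element carry the same information, so one can be mechanically rewritten as the other.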
> > devil's advocate Johannes > > > Am 06.03.2012 um 16:57 schrieb Roland, Perry (pdr4h): > >> If single-note trills are treated differently, then why not any other single-note / instantaneous "control event", such as arpeg, breath, pedal, reh, dynam, etc.? Once you start down that road, Pandora's box is opened. A proliferation of attributes wouldn't be helpful. >> >> If attributes were added, @ornam for example, they would only be useful part of the time; that is, in the case of @ornam/ @trill, for a single-note trill and when complete control of the rendering of the trill is to be handled by the rendering engine. There's no opportunity to add visual information to that trill without resorting to attributes about attributes (and that way certainly lies madness!) >> >> If an child of were added or allowed to be a child of in order to accommodate visual info., then we're back into the hierarchy problems Andrew so eloquently described yesterday. In addition, if we allow both possibilities (attribute and child), then user confusion is increased and interchange / interoperability diminished. >> >> When these other things are considered, I believe recording trills and such after the notes they're attached to is still the best of the alternatives. >> >> -- >> p. >> >> __________________________ >> Perry Roland >> Music Library >> University of Virginia >> P. O. Box 400175 >> Charlottesville, VA 22904 >> 434-982-2702 (w) >> pdr4h (at) virginia (dot) edu >> >> >> >> From: mei-l-bounces at lists.uni-paderborn.de [mei-l-bounces at lists.uni-paderborn.de] on behalf of Andrew Hankinson, Mr [andrew.hankinson at mail.mcgill.ca] >> Sent: Tuesday, March 06, 2012 10:05 AM >> To: Music Encoding Initiative >> Subject: Re: [MEI-L] trills within beams >> >> >> Thanks Johannes! >> >> >> I gave a couple examples to show how trill *could* be done in XML (and is done in some other XML-based music encoding schemes), since I was responding to Eleanor's question. 
But I don't think that it *should* be done that way, since, like I said, it creates a situation where you're forced into creating these artificial and non-musical hierarchies. I didn't mean to suggest that I felt any changes to <trill> should be made, I just wanted to address Eleanor's question directly by showing how MEI does it differently. >> >> >> But to address Kristina's question: I did some asking around our lab yesterday. We came to the agreement that a trill is an ornament, not an articulation. If <note> doesn't have @ornament or something along those lines, I think this is a great argument for it. >> >> >> One of the musicologists in our lab found a couple of helpful pages for this discussion: >> >> >> http://www.music.vt.edu/musicdictionary/appendix/ornaments/ornaments.html >> >> >> Couperin's Ornaments: >> http://books.google.ca/books?id=CecBsvk7Oz0C&lpg=PA34&ots=VQ2uwRSt_n&dq=ornaments%20couperin&pg=PA34#v=onepage&q=ornaments%20couperin&f=false >> >> >> -Andrew >> >> >> On 2012-03-06, at 7:11 AM, Johannes Kepper wrote: >> >> >> Hi Andrew, >> >> I absolutely agree with what you say about hierarchy issues etc. I think this discussion is very helpful for identifying what MEI is and what it is not, and, even more beneficial, how people understand it. But still, I think it misses Kristina's initial question. Sometimes, a trill stretches no further than the note it's written above. In these cases, it seems not to be a spanning element in its own right, but rather a playing instruction for this particular note. The question then is not whether we want to redefine the model of <trill> to allow it within a note or as a container of notes, but instead whether "tr" should be an allowed value of @artic (or any other attribute on <note>). By no means would this argue against the existence of the current standoff <trill>; it would just be a shortcut for describing trills that do not stretch beyond their initial note. 
Currently, the <trill> would have to duplicate the @tstamp and @dur of this note (@tstamp on notes is normally omitted, I know). >> >> It is fine to decide that we don't want to offer this limited-power shortcut for a certain kind of trill, but we have several such constructs for other things in MEI already. I don't see that anyone asked for remodelling the trill element (but please correct me!!!!). >> >> Best, >> Johannes >> >> >> >> >> Am 06.03.2012 um 01:57 schrieb Andrew Hankinson, Mr: >> >> >> Hi Eleanor, >> >> >> >> The way I understand the problem is a choice between the following options: >> >> >> >> >> >> >> >> >> >> >> >> or: >> >> >> >> >> >> >> >> >> >> >> >> >> >> or: >> >> >> >> >> >> >> >> >> >> In the first example, note becomes a "child" element of trill; the second example inverts that so that trill becomes a child of note. If we wished to express this in "pure" XML, it would need to be a choice between either of these, since XML imposes a very hierarchical structure if used naively. Sometimes this hierarchy makes sense (note as a child of chord), but in this case it doesn't really make musical sense for trill to be a parent or a child of note. >> >> >> >> If we wanted to expand trill so that it covers more than one note, we would have to choose option 1 OR we would have to try and figure out some other way of grouping notes. Perry's concern was that if we allow all things that can be trilled, or that can hold children that can also be trilled, then we pretty much have to allow most things as children of trills. This makes the encoding task much more difficult, since it can be very easy to get into trouble and do nonsensical things. >> >> >> >> MEI and TEI have a fairly elegant solution to this which is still valid XML but allows us to break out of this rigid hierarchy. 
>> >> >> >> In the third example, we remove the hierarchy and assign the trill to the note by reference; that is, the <trill> element is not hierarchically related to the note, but its @startid attribute points to the element where the trill starts. This is much easier to handle, since you can put many other elements between them. For example, you could do this (a highly simplified version of the first measure of the example you attached): >> >> [inline MEI example lost in the archive] >> >> This allows much more flexibility in the encoding, since it means you do not have to decide whether the trill is hierarchically higher or lower than the note; you can simply list all the "spanning" elements at the end of the measure, and then give @startid/@endid or @tstamp references (as Raffaele mentioned). >> >> I can't speak directly for Perry, but I think that's what he meant by "one pass" vs. "two pass". It's not that you can't do all the encoding in the same sitting, it's just that sometimes you'll want to identify and encode elements that don't strictly fall into the hierarchy later in the measure. So you would, in effect, do two passes through the measure: one to encode the notes, and the other to encode the other events. >> >> The complexity of keeping all this straight when encoding is certainly not trivial, but I don't think that's an MEI issue. My own feeling is that it should be the job of the notation encoding software to help you manage all of the bits and pieces. >> >> I think this addresses your concern directly. You don't have to put all elements in an arbitrary hierarchy since things can be referenced after they have "happened" in the score, without needing to decide if it makes musical sense to have it as a hierarchical relationship. 
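[Editor's sketch: the standoff encoding Andrew describes, applied to Kristina's trill-within-a-beam case. Element and attribute names follow common MEI usage (<beam>, <note>, <trill> with @startid) but are assumptions here, not a reconstruction of the stripped example from the original mail.]

```xml
<measure n="1">
  <staff n="1">
    <layer n="1">
      <!-- the beamed group stays a plain hierarchy of notes -->
      <beam>
        <note xml:id="n1" pname="d" oct="5" dur="8"/>
        <note xml:id="n2" pname="e" oct="5" dur="8"/>
      </beam>
    </layer>
  </staff>
  <!-- the trill is neither parent nor child of the note; it is listed
       after the events and points back at its note by reference -->
  <trill startid="#n1" place="above"/>
</measure>
```

Because the trill attaches by @startid rather than by nesting, it makes no difference whether the trilled note sits inside a beam, a tuplet, or any other container: the reference crosses the hierarchy, which is the "second pass" Perry describes.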
This, in my opinion, is more musical than other attempts at encoding music notation in XML since you don't have to make seemingly arbitrary decisions over which musical structure is a child of another. >> >> >> >> -Andrew >> >> >> >> On 2012-03-05, at 6:03 PM, Eleanor Selfridge-Field wrote: >> >> >> >> Hi, Kristina, Perry, et al. >> >> >> >> From my perspective Kristina is prompting a really important question, and one response ("more than one pass....") seems to be the inevitable place where all encoders end up when confronted with real music. It may be unavoidable, but it is not ideal. >> >> >> >> What is unsettling in the responses is that we are putting hierarchy above music and the needs of encoders. >> >> When working with MSS, there are dozens of potential distractions, and making a second pass to capture left-over details requires finding the exact spot on the folio, checking to see which features were encoded on the first pass, and, over time, a lot of secondary bookkeeping about what is finished and what has yet to be done. (I know; I did that kind of housework for my Marcello catalogue in the 1980s---3000 bitty music files, each one in need of its own particular notes.) The risks of eventual inaccuracy, incomplete information, and duplication are very real. >> >> >> >> Granted we want MEI to work, but if it is optimized for programming efficiency at the cost of usability, we may need to step back and look for other solutions. The low level of generalizability of music features across repertories is widely acknowledged, and we are simply encountering one instance here. For another example from the same category, consider this CPE Bach incipit: >> >> >> >> >> >> We used it in our "desk-top publishing" IEEE tutorial of 1994. [For all the examples go to http://www.ccarh.org/publications/reprints/ieee/ --Category 2, Type 1] >> >> >> >> How would MEI handle it? 
>> >> >> >> Eleanor >> >> >> >> >> >> Eleanor Selfridge-Field >> >> Consulting Professor, Music >> >> Braun Music Center #129 >> >> Stanford University >> >> Stanford, CA 94305-3076, USA >> >> http://www.stanford.edu/~esfield/ >> >> http://www.ccarh.org >> >> >> >> >> >> >> >> From: mei-l-bounces+esfield=stanford.edu at lists.uni-paderborn.de [mailto:mei-l-bounces+esfield=stanford.edu at lists.uni-paderborn.de] On Behalf Of Roland, Perry (pdr4h) >> >> Sent: Monday, March 05, 2012 5:50 AM >> >> To: Music Encoding Initiative >> >> Subject: Re: [MEI-L] trills within beams >> >> >> >> Hi, Kristina, >> >> >> >> MEI is not designed to be encoded in one pass -- some things, such as trills, pedal markings, text directives, etc., must be captured after the notes. >> >> >> >> It might be possible to do what you suggest in some cases but it won't work all the time because it potentially leads to overlapping hierarchies. It also means that your proposed element would have to allow every other possible element, leading to opportunities for encoders to do unsupported things. >> >> >> >> From snowy (yes, snowy!) Charlottesville, >> >> >> >> -- >> >> p. >> >> >> >> >> >> __________________________ >> >> Perry Roland >> >> Music Library >> >> University of Virginia >> >> P. O. Box 400175 >> >> Charlottesville, VA 22904 >> >> 434-982-2702 (w) >> >> pdr4h (at) virginia (dot) edu >> >> ________________________________ >> >> From: mei-l-bounces at lists.uni-paderborn.de [mei-l-bounces at lists.uni-paderborn.de] on behalf of Kristina Richts [kristina.richts at gmx.de] >> >> Sent: Monday, March 05, 2012 4:26 AM >> >> To: Music Encoding Initiative >> >> Subject: [MEI-L] trills within beams >> >> Hi all, >> >> >> >> while encoding the following passage, I just mentioned, that there seems to be no way to encode the trill right here, as I don't want to extract this information and place it at the end of the measure. 
>> >> >> >> >> >> >> >> Why isn't it possible to provide notes within a beam with a element, as could be done with single notes, like this: >> >> ? >> >> >> >> Did I miss anything? >> >> >> >> Best, >> >> Kristina >> >> >> >> >> >> >> >> >> >> _______________________________________________ >> >> mei-l mailing list >> >> mei-l at lists.uni-paderborn.de >> >> https://lists.uni-paderborn.de/mailman/listinfo/mei-l >> >> >> >> >> >> _______________________________________________ >> >> mei-l mailing list >> >> mei-l at lists.uni-paderborn.de >> >> https://lists.uni-paderborn.de/mailman/listinfo/mei-l >> >> >> >> _______________________________________________ >> mei-l mailing list >> mei-l at lists.uni-paderborn.de >> https://lists.uni-paderborn.de/mailman/listinfo/mei-l >> _______________________________________________ >> mei-l mailing list >> mei-l at lists.uni-paderborn.de >> https://lists.uni-paderborn.de/mailman/listinfo/mei-l > > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l From raffaeleviglianti at gmail.com Tue Mar 6 17:28:30 2012 From: raffaeleviglianti at gmail.com (Raffaele Viglianti) Date: Tue, 6 Mar 2012 16:28:30 +0000 Subject: [MEI-L] trills within beams In-Reply-To: <07B81BC7-2C56-4C51-BCA3-FC7A274036C4@edirom.de> References: <5DE906DC-281C-4DB4-BC46-B3A8A0800E72@gmx.de> <24846_1330988626_4F554651_24846_2_1_00f601ccfb24$2d7bbc20$88733460$@stanford.edu> <114F68D8-A9A2-422B-9C85-716CAE715DD6@mail.mcgill.ca> <21712_1331035890_4F55FEF2_21712_135_1_ED80AB9F-6C17-4270-BAE9-E2CE42AE2D24@edirom.de> <07B81BC7-2C56-4C51-BCA3-FC7A274036C4@edirom.de> Message-ID: +1 to deprecation Raffaele On Tue, Mar 6, 2012 at 4:22 PM, Johannes Kepper wrote: > if this turns out to be a way to get rid of the 
[i|m|t][1-6] datatype, I'm > more than happy :-) > > (do we need to cover that in the guidelines, or can't we just deprecate it > now?) (that's also a rhetorical question) > > > Am 06.03.2012 um 17:18 schrieb Roland, Perry (pdr4h): > > > > > @tie, @slur, and such, were put in as conveniences for the hand encoder. > The question is: When does a plethora of conveniences become inconvenient? > Where the line gets drawn may be arbitrary, but a line needs to be drawn > nonetheless. > > > > I can certainly imagine the attributes you cite being deprecated over > time. Or at least ignored when encoding software (like a GUI editor) is > used. I can also imagine them being converted to the element form by a > canonizer, like MEIron. > > > > If attributes such as these are conveniences that will probably be > ignored or deprecated in the future, why add more like them now? (That's a > rhetorical question.) > > > > -- > > p. > > > > > > __________________________ > > Perry Roland > > Music Library > > University of Virginia > > P. O. Box 400175 > > Charlottesville, VA 22904 > > 434-982-2702 (w) > > pdr4h (at) virginia (dot) edu > > ________________________________________ > > From: mei-l-bounces at lists.uni-paderborn.de [ > mei-l-bounces at lists.uni-paderborn.de] on behalf of Johannes Kepper [ > kepper at edirom.de] > > Sent: Tuesday, March 06, 2012 11:06 AM > > To: Music Encoding Initiative > > Subject: Re: [MEI-L] trills within beams > > > > I see the point, but in this case, shouldn't we consider to deprecate > @tie, @slur and similar constructs? The arguments you provide below apply > to them as well? > > > > devil's advocate Johannes > > > > > > Am 06.03.2012 um 16:57 schrieb Roland, Perry (pdr4h): > > > >> If single-note trills are treated differently, then why not any other > single-note / instantaneous "control event", such as arpeg, breath, pedal, > reh, dynam, etc.? Once you start down that road, Pandora's box is opened. 
> A proliferation of attributes wouldn't be helpful. > >> > >> If attributes were added, @ornam for example, they would only be useful > part of the time; that is, in the case of @ornam/ @trill, for a single-note > trill and when complete control of the rendering of the trill is to be > handled by the rendering engine. There's no opportunity to add visual > information to that trill without resorting to attributes about attributes > (and that way certainly lies madness!) > >> > >> If an child of were added or allowed to be a > child of in order to accommodate visual info., then we're back into > the hierarchy problems Andrew so eloquently described yesterday. In > addition, if we allow both possibilities (attribute and child), then user > confusion is increased and interchange / interoperability diminished. > >> > >> When these other things are considered, I believe recording trills and > such after the notes they're attached to is still the best of the > alternatives. > >> > >> -- > >> p. > >> > >> __________________________ > >> Perry Roland > >> Music Library > >> University of Virginia > >> P. O. Box 400175 > >> Charlottesville, VA 22904 > >> 434-982-2702 (w) > >> pdr4h (at) virginia (dot) edu > >> > >> > >> > >> From: mei-l-bounces at lists.uni-paderborn.de [ > mei-l-bounces at lists.uni-paderborn.de] on behalf of Andrew Hankinson, Mr [ > andrew.hankinson at mail.mcgill.ca] > >> Sent: Tuesday, March 06, 2012 10:05 AM > >> To: Music Encoding Initiative > >> Subject: Re: [MEI-L] trills within beams > >> > >> > >> Thanks Johannes! > >> > >> > >> I gave a couple examples to show how trill *could* be done in XML (and > is done in some other XML-based music encoding schemes), since I was > responding to Eleanor's question. But I don't think that it *should* be > done that way, since, like I said, it creates a situation where you're > forced into creating these artificial and non-musical hierarchies. 
I didn't > mean to suggest that I felt any changes to should be made, I just > wanted to address Eleanor's question directly by showing how MEI does it > differently. > >> > >> > >> But to address Kristina's question: I did some asking around our lab > yesterday. We came to the agreement that a trill is an ornament, not an > articulation. If doesn't have @ornament or something along those > lines, I think this is a great argument for it. > >> > >> > >> One of the musicologists in our lab found a couple helpful pages for > this discussion: > >> > >> > >> > http://www.music.vt.edu/musicdictionary/appendix/ornaments/ornaments.html > >> > >> > >> Couperin's Ornaments: > >> > http://books.google.ca/books?id=CecBsvk7Oz0C&lpg=PA34&ots=VQ2uwRSt_n&dq=ornaments%20couperin&pg=PA34#v=onepage&q=ornaments%20couperin&f=false > >> > >> > >> -Andrew > >> > >> > >> On 2012-03-06, at 7:11 AM, Johannes Kepper wrote: > >> > >> > >> Hi Andrew, > >> > >> I absolutely agree with what you say about hierarchy issues etc. I > think this discussion is very helpful for identifying what MEI is and what > it is not, and, even more beneficial, how people understand it. But still, > I think it misses Kristina's initial question. Sometimes, a trill stretches > no longer than the note it's written above. In these cases, it seems to be > no spanning element on its own, but rather a playing instruction for this > particular note. The question then is not whether we want to redefine the > model of to allow it within a note or as a container of notes, but > instead if "tr" should be an allowed value of @artic (or any other > attribute on ). By no means this would argue against the existence of > the current standoff , it would just be a shortcut for describing > trill that do not stretch beyond their initial note. Currently, the > would have to duplicate the @tstamp and @dur of this note (@tstamp on notes > is normally omitted, I know). 
> >> > >> It is fine to decide that we don't want to offer this limited-power > shortcut for a certain kind of trills, but we have several such constructs > for other things in MEI already. I don't see that anyone asked for > remodelling the trill element (but please correct me!!!!). > >> > >> Best, > >> Johannes > >> > >> > >> > >> > >> Am 06.03.2012 um 01:57 schrieb Andrew Hankinson, Mr: > >> > >> > >> Hi Eleanor, > >> > >> > >> > >> The way I understand the problem is a choice between the following > options: > >> > >> > >> > >> > >> > >> > >> > >> > >> > >> > >> > >> or: > >> > >> > >> > >> > >> > >> > >> > >> > >> > >> > >> > >> > >> > >> or: > >> > >> > >> > >> > >> > >> > >> > >> > >> > >> In the first example, note becomes a "child" element of trill; the > second example inverts that so that trill becomes a child of note. If we > were to wish to express this in "pure" XML, it would need to be a choice > between either of these, since XML imposes a very hierarchical structure if > used naively. Sometimes this hierarchy makes sense (note as a child of > chord), but in this case it doesn't really make musical sense for trill to > be a parent or a child of note. > >> > >> > >> > >> If we were to want to expand trill so that it covers more than one > note, we would have to choose option 1 OR we would have to try and figure > out some other way of grouping notes. Perry's concern was that if we allow > all things that can be trilled, or that can hold children that can also be > trilled, then we pretty much have to allow most things as children of > trills. This makes the encoding task much more difficult, since it can be > very easy to get into trouble and do nonsensical things. > >> > >> > >> > >> MEI and TEI have a fairly elegant solution to this which is still valid > XML but allows us to break out of this rigid hierarchy. 
> >> > >> > >> > >> In the third example, we remove the hierarchy and assign the trill to > the note by reference; that is, the element is not hierarchically > related to note, but the @startid attribute points to the element where the > trill starts. This is much easier to handle, since you can put many other > elements between them. For example, you could do this (a highly simplified > version of the first measure of the example you attached): > >> > >> > >> > >> > >> > >> > >> > >> > >> > >> > >> > >> > >> > >> > >> > >> > >> > >> > >> > >> > >> > >> > >> > >> > >> > >> > >> > >> > >> > >> > >> > >> > >> > >> > >> > >> > >> > >> > >> > >> > >> > >> > >> > >> > >> > >> > >> > >> > >> > >> > >> > >> > >> > >> > >> > >> > >> > >> > >> > >> > >> > >> > >> > >> > >> > >> > >> > >> This allows much more flexibility in the encoding, since it means you > do not have to decide whether the trill is hierarchically higher or lower > than the note; you can simply list all the "spanning" elements at the end > of the measure, and then give @startid/@endid or @tstamp references (as > Raffaele mentioned). > >> > >> > >> > >> I can't speak directly for Perry, but I think that's what he meant by > "one pass" vs. "two pass". It's not that you can't do all the encoding in > the same sitting, it's just that sometimes you'll want to identify and > encode elements that don't strictly fall into the hierarchy later in the > measure. So you would, in effect, do two passes through the measure: one to > encode the notes, and the other to encode the other events. > >> > >> > >> > >> The complexity of keeping all this straight when encoding is certainly > not trivial, but I don't think that's an MEI issue. My own feeling is that > it should be the job of the notation encoding software to help you manage > all of the bits and pieces > >> > >> > >> > >> I think this addresses your concern directly. 
You don't have to put all > elements in an arbitrary hierarchy since things can be referenced after > they have "happened" in the score, without needing to decide if it makes > musical sense to have it as a hierarchical relationship. This, in my > opinion, is more musical than other attempts at encoding music notation in > XML since you don't have to make seemingly arbitrary decisions over which > musical structure is a child of another. > >> > >> > >> > >> -Andrew > >> > >> > >> > >> On 2012-03-05, at 6:03 PM, Eleanor Selfridge-Field wrote: > >> > >> > >> > >> Hi, Kristina, Perry, et al. > >> > >> > >> > >> From my perspective Kristina is prompting a really important question, > and one response (?more than one pass....?) seems to be the inevitable > place where all encoders end up when confronted with real music. It may be > unavoidable, but it is not ideal. > >> > >> > >> > >> What is unsettling in the responses is that we are putting hierarchy > above music and the needs of encoders. > >> > >> When working with MSS, there are dozens of potential distractions, and > making a second pass to capture left-over details requires finding the > exact spot on the folio, checking to see which features were encoded on the > first pass, and, over time, a lot of secondary bookkeeping about what is > finished and what has yet to be done. (I know; I did that kind of housework > for my Marcello catalogue in the 1980s---3000 bitty music files, each one > in need of its own particular notes.) The risks of eventual inaccuracy, > incomplete information, and duplication are very real. > >> > >> > >> > >> Granted we want MEI to work, but if it is optimized for programming > efficiency at the cost of usability, we may need to step back and look for > other solutions. The low level of generalizability of music features across > repertories is widely acknowledged, and we are simply encountering one > instance here. 
For another example from the same category, consider this > CPE Bach incipit: > >> > >> > >> > >> > >> > >> We used it in our ?desk-top publishing IEEE tutorial of 1994. [For all > the examples go to http://www.ccarh.org/publications/reprints/ieee/--Category 2, Type 1] > >> > >> > >> > >> How would MEI handle it? > >> > >> > >> > >> Eleanor > >> > >> > >> > >> > >> > >> Eleanor Selfridge-Field > >> > >> Consulting Professor, Music > >> > >> Braun Music Center #129 > >> > >> Stanford University > >> > >> Stanford, CA 94305-3076, USA > >> > >> http://www.stanford.edu/~esfield/ > >> > >> http://www.ccarh.org > >> > >> > >> > >> > >> > >> > >> > >> From: mei-l-bounces+esfield=stanford.edu at lists.uni-paderborn.de mei-l-bounces+esfield=stanford.edu at lists.uni-paderborn.de> [mailto: > mei-l-bounces+esfield=stanford.edu at lists.uni-paderborn.de stanford.edu at lists.uni-paderborn.de>] On Behalf Of Roland, Perry (pdr4h) > >> > >> Sent: Monday, March 05, 2012 5:50 AM > >> > >> To: Music Encoding Initiative > >> > >> Subject: Re: [MEI-L] trills within beams > >> > >> > >> > >> Hi, Kristina, > >> > >> > >> > >> MEI is not designed to be encoded in one pass -- some things, such as > trills, pedal markings, text directives, etc., must be captured after the > notes. > >> > >> > >> > >> It might be possible to do what you suggest in some cases but it won't > work all the time because it potentially leads to overlapping hierarchies. > It also means that your proposed element would have to allow every > other possible element, leading to opportunities for encoders to do > unsupported things. > >> > >> > >> > >> From snowy (yes, snowy!) Charlottesville, > >> > >> > >> > >> -- > >> > >> p. > >> > >> > >> > >> > >> > >> __________________________ > >> > >> Perry Roland > >> > >> Music Library > >> > >> University of Virginia > >> > >> P. O. 
Box 400175 > >> > >> Charlottesville, VA 22904 > >> > >> 434-982-2702 (w) > >> > >> pdr4h (at) virginia (dot) edu > >> > >> ________________________________ > >> > >> From: mei-l-bounces at lists.uni-paderborn.de mei-l-bounces at lists.uni-paderborn.de> [ > mei-l-bounces at lists.uni-paderborn.de mei-l-bounces at lists.uni-paderborn.de>] on behalf of Kristina Richts [ > kristina.richts at gmx.de] > >> > >> Sent: Monday, March 05, 2012 4:26 AM > >> > >> To: Music Encoding Initiative > >> > >> Subject: [MEI-L] trills within beams > >> > >> Hi all, > >> > >> > >> > >> while encoding the following passage, I just mentioned, that there > seems to be no way to encode the trill right here, as I don't want to > extract this information and place it at the end of the measure. > >> > >> > >> > >> > >> > >> > >> > >> Why isn't it possible to provide notes within a beam with a > element, as could be done with single notes, like this: > >> > >> stem.dir="down"/>? > >> > >> > >> > >> Did I miss anything? 
> >> > >> > >> > >> Best, > >> > >> Kristina > >> > >> > >> > >> > >> > >> > >> > >> > >> > >> _______________________________________________ > >> > >> mei-l mailing list > >> > >> mei-l at lists.uni-paderborn.de > >> > >> https://lists.uni-paderborn.de/mailman/listinfo/mei-l > >> > >> > >> > >> > >> > >> _______________________________________________ > >> > >> mei-l mailing list > >> > >> mei-l at lists.uni-paderborn.de > >> > >> https://lists.uni-paderborn.de/mailman/listinfo/mei-l > >> > >> > >> > >> _______________________________________________ > >> mei-l mailing list > >> mei-l at lists.uni-paderborn.de > >> https://lists.uni-paderborn.de/mailman/listinfo/mei-l > >> _______________________________________________ > >> mei-l mailing list > >> mei-l at lists.uni-paderborn.de > >> https://lists.uni-paderborn.de/mailman/listinfo/mei-l > > > > > > _______________________________________________ > > mei-l mailing list > > mei-l at lists.uni-paderborn.de > > https://lists.uni-paderborn.de/mailman/listinfo/mei-l > > _______________________________________________ > > mei-l mailing list > > mei-l at lists.uni-paderborn.de > > https://lists.uni-paderborn.de/mailman/listinfo/mei-l > > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l > -------------- next part -------------- An HTML attachment was scrubbed... 
URL:

From zupftom at googlemail.com Tue Mar 6 17:36:04 2012
From: zupftom at googlemail.com (TW)
Date: Tue, 6 Mar 2012 17:36:04 +0100
Subject: [MEI-L] trills within beams
In-Reply-To:
References: <5DE906DC-281C-4DB4-BC46-B3A8A0800E72@gmx.de> <24846_1330988626_4F554651_24846_2_1_00f601ccfb24$2d7bbc20$88733460$@stanford.edu> <114F68D8-A9A2-422B-9C85-716CAE715DD6@mail.mcgill.ca> <21712_1331035890_4F55FEF2_21712_135_1_ED80AB9F-6C17-4270-BAE9-E2CE42AE2D24@edirom.de> <07B81BC7-2C56-4C51-BCA3-FC7A274036C4@edirom.de>
Message-ID:

I'm also in favor of deprecating [i|m|t][1-6].

Thomas

2012/3/6 Raffaele Viglianti :
> +1 to deprecation
>
> Raffaele
>
> On Tue, Mar 6, 2012 at 4:22 PM, Johannes Kepper wrote:
>>
>> if this turns out to be a way to get rid of the [i|m|t][1-6] datatype, I'm more than happy :-)
>>
>> (do we need to cover that in the guidelines, or can't we just deprecate it now?) (that's also a rhetorical question)
>>
>> Am 06.03.2012 um 17:18 schrieb Roland, Perry (pdr4h):
>>
>> > @tie, @slur, and such, were put in as conveniences for the hand encoder. The question is: When does a plethora of conveniences become inconvenient? Where the line gets drawn may be arbitrary, but a line needs to be drawn nonetheless.
>> >
>> > I can certainly imagine the attributes you cite being deprecated over time. Or at least ignored when encoding software (like a GUI editor) is used. I can also imagine them being converted to the element form by a canonizer, like MEIron.
>> >
>> > If attributes such as these are conveniences that will probably be ignored or deprecated in the future, why add more like them now? (That's a rhetorical question.)
>> >
>> > --
>> > p.
>> >
>> > __________________________
>> > Perry Roland
>> > Music Library
>> > University of Virginia
>> > P. O.
>> > Box 400175
>> > Charlottesville, VA 22904
>> > 434-982-2702 (w)
>> > pdr4h (at) virginia (dot) edu
>> > ________________________________________
>> > From: mei-l-bounces at lists.uni-paderborn.de [mei-l-bounces at lists.uni-paderborn.de] on behalf of Johannes Kepper [kepper at edirom.de]
>> > Sent: Tuesday, March 06, 2012 11:06 AM
>> > To: Music Encoding Initiative
>> > Subject: Re: [MEI-L] trills within beams
>> >
>> > I see the point, but in this case, shouldn't we consider to deprecate @tie, @slur and similar constructs? The arguments you provide below apply to them as well?
>> >
>> > devil's advocate Johannes
>> >
>> > Am 06.03.2012 um 16:57 schrieb Roland, Perry (pdr4h):
>> >
>> >> If single-note trills are treated differently, then why not any other single-note / instantaneous "control event", such as arpeg, breath, pedal, reh, dynam, etc.? Once you start down that road, Pandora's box is opened. A proliferation of attributes wouldn't be helpful.
>> >>
>> >> If attributes were added, @ornam for example, they would only be useful part of the time; that is, in the case of @ornam/ @trill, for a single-note trill and when complete control of the rendering of the trill is to be handled by the rendering engine. There's no opportunity to add visual information to that trill without resorting to attributes about attributes (and that way certainly lies madness!)
>> >>
>> >> If an child of were added or allowed to be a child of in order to accommodate visual info., then we're back into the hierarchy problems Andrew so eloquently described yesterday. In addition, if we allow both possibilities (attribute and child), then user confusion is increased and interchange / interoperability diminished.
>> >>
>> >> When these other things are considered, I believe recording trills and such after the notes they're attached to is still the best of the alternatives.
>> >>
>> >> --
>> >> p.
>> >>
>> >> __________________________
>> >> Perry Roland
>> >> Music Library
>> >> University of Virginia
>> >> P. O. Box 400175
>> >> Charlottesville, VA 22904
>> >> 434-982-2702 (w)
>> >> pdr4h (at) virginia (dot) edu
>> >>
>> >> From: mei-l-bounces at lists.uni-paderborn.de [mei-l-bounces at lists.uni-paderborn.de] on behalf of Andrew Hankinson, Mr [andrew.hankinson at mail.mcgill.ca]
>> >> Sent: Tuesday, March 06, 2012 10:05 AM
>> >> To: Music Encoding Initiative
>> >> Subject: Re: [MEI-L] trills within beams
>> >>
>> >> Thanks Johannes!
>> >>
>> >> I gave a couple examples to show how trill *could* be done in XML (and is done in some other XML-based music encoding schemes), since I was responding to Eleanor's question. But I don't think that it *should* be done that way, since, like I said, it creates a situation where you're forced into creating these artificial and non-musical hierarchies. I didn't mean to suggest that I felt any changes to should be made, I just wanted to address Eleanor's question directly by showing how MEI does it differently.
>> >>
>> >> But to address Kristina's question: I did some asking around our lab yesterday. We came to the agreement that a trill is an ornament, not an articulation. If doesn't have @ornament or something along those lines, I think this is a great argument for it.
>> >> One of the musicologists in our lab found a couple helpful pages for this discussion:
>> >>
>> >> http://www.music.vt.edu/musicdictionary/appendix/ornaments/ornaments.html
>> >>
>> >> Couperin's Ornaments:
>> >> http://books.google.ca/books?id=CecBsvk7Oz0C&lpg=PA34&ots=VQ2uwRSt_n&dq=ornaments%20couperin&pg=PA34#v=onepage&q=ornaments%20couperin&f=false
>> >>
>> >> -Andrew
>> >>
>> >> On 2012-03-06, at 7:11 AM, Johannes Kepper wrote:
>> >>
>> >> Hi Andrew,
>> >>
>> >> I absolutely agree with what you say about hierarchy issues etc. I think this discussion is very helpful for identifying what MEI is and what it is not, and, even more beneficial, how people understand it. But still, I think it misses Kristina's initial question. Sometimes, a trill stretches no longer than the note it's written above. In these cases, it seems to be no spanning element on its own, but rather a playing instruction for this particular note. The question then is not whether we want to redefine the model of to allow it within a note or as a container of notes, but instead if "tr" should be an allowed value of @artic (or any other attribute on ). By no means this would argue against the existence of the current standoff , it would just be a shortcut for describing trill that do not stretch beyond their initial note. Currently, the would have to duplicate the @tstamp and @dur of this note (@tstamp on notes is normally omitted, I know).
>> >>
>> >> It is fine to decide that we don't want to offer this limited-power shortcut for a certain kind of trills, but we have several such constructs for other things in MEI already. I don't see that anyone asked for remodelling the trill element (but please correct me!!!!).
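The duplication Johannes describes, and the shortcut he floats, can be sketched side by side. This is an editorial mock-up, not text from the thread: the standoff form follows MEI conventions, but @artic="tr" is exactly the *proposal* under discussion, not part of the 2012 schema, and the note values and attribute spellings (@tstamp/@dur on the trill, mirroring the mail's wording) are illustrative assumptions:

```xml
<!-- Current standoff form: the trill repeats the note's timestamp and duration -->
<measure n="1">
  <staff n="1">
    <layer n="1">
      <note pname="g" oct="4" dur="8" tstamp="1"/>
    </layer>
  </staff>
  <trill staff="1" tstamp="1" dur="8"/>
</measure>

<!-- Hypothetical single-note shortcut (NOT in the schema): -->
<!-- <note pname="g" oct="4" dur="8" artic="tr"/> -->
```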
>> >> Best,
>> >> Johannes
>> >>
>> >> Am 06.03.2012 um 01:57 schrieb Andrew Hankinson, Mr:
>> >>
>> >> Hi Eleanor,
>> >>
>> >> The way I understand the problem is a choice between the following options:
>> >>
>> >> or:
>> >>
>> >> or:
>> >>
>> >> In the first example, note becomes a "child" element of trill; the second example inverts that so that trill becomes a child of note. If we were to wish to express this in "pure" XML, it would need to be a choice between either of these, since XML imposes a very hierarchical structure if used naively. Sometimes this hierarchy makes sense (note as a child of chord), but in this case it doesn't really make musical sense for trill to be a parent or a child of note.
>> >>
>> >> If we were to want to expand trill so that it covers more than one note, we would have to choose option 1 OR we would have to try and figure out some other way of grouping notes. Perry's concern was that if we allow all things that can be trilled, or that can hold children that can also be trilled, then we pretty much have to allow most things as children of trills. This makes the encoding task much more difficult, since it can be very easy to get into trouble and do nonsensical things.
>> >>
>> >> MEI and TEI have a fairly elegant solution to this which is still valid XML but allows us to break out of this rigid hierarchy.
>> >>
>> >> In the third example, we remove the hierarchy and assign the trill to the note by reference; that is, the element is not hierarchically related to note, but the @startid attribute points to the element where the trill starts.
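The XML in Andrew's mail was stripped by the list archiver, so his actual examples cannot be recovered. The three shapes he contrasts can, however, be sketched generically; the element names follow MEI usage, while the pitches and ids below are invented for illustration:

```xml
<!-- Option 1: trill as parent of note -->
<trill><note pname="c" oct="5" dur="4"/></trill>

<!-- Option 2: trill as child of note -->
<note pname="c" oct="5" dur="4"><trill/></note>

<!-- Option 3 (the MEI/TEI standoff pattern): reference by @startid -->
<measure n="1">
  <staff n="1">
    <layer n="1">
      <note xml:id="n1" pname="c" oct="5" dur="4"/>
    </layer>
  </staff>
  <trill startid="#n1"/>
</measure>
```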
>> >> This is much easier to handle, since you can put many other elements between them. For example, you could do this (a highly simplified version of the first measure of the example you attached):
>> >>
>> >> This allows much more flexibility in the encoding, since it means you do not have to decide whether the trill is hierarchically higher or lower than the note; you can simply list all the "spanning" elements at the end of the measure, and then give @startid/@endid or @tstamp references (as Raffaele mentioned).
>> >>
>> >> I can't speak directly for Perry, but I think that's what he meant by "one pass" vs. "two pass". It's not that you can't do all the encoding in the same sitting, it's just that sometimes you'll want to identify and encode elements that don't strictly fall into the hierarchy later in the measure. So you would, in effect, do two passes through the measure: one to encode the notes, and the other to encode the other events.
>> >>
>> >> The complexity of keeping all this straight when encoding is certainly not trivial, but I don't think that's an MEI issue. My own feeling is that it should be the job of the notation encoding software to help you manage all of the bits and pieces
>> >>
>> >> I think this addresses your concern directly. You don't have to put all elements in an arbitrary hierarchy since things can be referenced after they have "happened" in the score, without needing to decide if it makes musical sense to have it as a hierarchical relationship. This, in my opinion, is more musical than other attempts at encoding music notation in XML since you don't have to make seemingly arbitrary decisions over which musical structure is a child of another.
>> >>
>> >> -Andrew
>> >>
>> >> On 2012-03-05, at 6:03 PM, Eleanor Selfridge-Field wrote:
>> >>
>> >> Hi, Kristina, Perry, et al.
>> >>
>> >> From my perspective Kristina is prompting a really important question, and one response ("more than one pass....") seems to be the inevitable place where all encoders end up when confronted with real music. It may be unavoidable, but it is not ideal.
>> >>
>> >> What is unsettling in the responses is that we are putting hierarchy above music and the needs of encoders.
>> >>
>> >> When working with MSS, there are dozens of potential distractions, and making a second pass to capture left-over details requires finding the exact spot on the folio, checking to see which features were encoded on the first pass, and, over time, a lot of secondary bookkeeping about what is finished and what has yet to be done. (I know; I did that kind of housework for my Marcello catalogue in the 1980s---3000 bitty music files, each one in need of its own particular notes.) The risks of eventual inaccuracy, incomplete information, and duplication are very real.
The low level of generalizability of music features across >> >> repertories is widely acknowledged, and we are simply encountering one >> >> instance here. For another example from the same category, consider ?this >> >> CPE Bach incipit: >> >> >> >> >> >> >> >> >> >> >> >> We used it in our ?desk-top publishing IEEE tutorial of 1994. [For all >> >> the examples go to http://www.ccarh.org/publications/reprints/ieee/ >> >> --Category 2, Type 1] >> >> >> >> >> >> >> >> How would MEI handle it? >> >> >> >> >> >> >> >> Eleanor >> >> >> >> >> >> >> >> >> >> >> >> Eleanor Selfridge-Field >> >> >> >> Consulting Professor, Music >> >> >> >> Braun Music Center #129 >> >> >> >> Stanford University >> >> >> >> Stanford, CA 94305-3076, USA >> >> >> >> http://www.stanford.edu/~esfield/ >> >> >> >> http://www.ccarh.org >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> From: >> >> mei-l-bounces+esfield=stanford.edu at lists.uni-paderborn.de >> >> [mailto:mei-l-bounces+esfield=stanford.edu at lists.uni-paderborn.de] >> >> On Behalf Of Roland, Perry (pdr4h) >> >> >> >> Sent: Monday, March 05, 2012 5:50 AM >> >> >> >> To: Music Encoding Initiative >> >> >> >> Subject: Re: [MEI-L] trills within beams >> >> >> >> >> >> >> >> Hi, Kristina, >> >> >> >> >> >> >> >> MEI is not designed to be encoded in one pass -- some things, such as >> >> trills, pedal markings, text directives, etc., must be captured after the >> >> notes. >> >> >> >> >> >> >> >> It might be possible to do what you suggest in some cases but it won't >> >> work all the time because it potentially leads to overlapping hierarchies. >> >> ?It also means that your proposed element would have to allow every >> >> other possible element, leading to opportunities for encoders to do >> >> unsupported things. >> >> >> >> >> >> >> >> From snowy (yes, snowy!) Charlottesville, >> >> >> >> >> >> >> >> -- >> >> >> >> p. 
>> >> >> >> >> >> >> >> >> >> >> >> __________________________ >> >> >> >> Perry Roland >> >> >> >> Music Library >> >> >> >> University of Virginia >> >> >> >> P. O. Box 400175 >> >> >> >> Charlottesville, VA 22904 >> >> >> >> 434-982-2702 (w) >> >> >> >> pdr4h (at) virginia (dot) edu >> >> >> >> ________________________________ >> >> >> >> From: >> >> mei-l-bounces at lists.uni-paderborn.de >> >> [mei-l-bounces at lists.uni-paderborn.de] >> >> on behalf of Kristina Richts >> >> [kristina.richts at gmx.de] >> >> >> >> Sent: Monday, March 05, 2012 4:26 AM >> >> >> >> To: Music Encoding Initiative >> >> >> >> Subject: [MEI-L] trills within beams >> >> >> >> Hi all, >> >> >> >> >> >> >> >> while encoding the following passage, I just mentioned, that there >> >> seems to be no way to encode the trill right here, as I don't want to >> >> extract this information and place it at the end of the measure. >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> Why isn't it possible to provide notes within a beam with a >> >> element, as could be done with single notes, like this: >> >> >> >> > >> stem.dir="down"/>? >> >> >> >> >> >> >> >> Did I miss anything? 
>> >> >> >> >> >> >> >> Best, >> >> >> >> Kristina >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> _______________________________________________ >> >> >> >> mei-l mailing list >> >> >> >> mei-l at lists.uni-paderborn.de >> >> >> >> https://lists.uni-paderborn.de/mailman/listinfo/mei-l >> >> >> >> >> >> >> >> >> >> >> >> _______________________________________________ >> >> >> >> mei-l mailing list >> >> >> >> mei-l at lists.uni-paderborn.de >> >> >> >> https://lists.uni-paderborn.de/mailman/listinfo/mei-l >> >> >> >> >> >> >> >> _______________________________________________ >> >> mei-l mailing list >> >> mei-l at lists.uni-paderborn.de >> >> https://lists.uni-paderborn.de/mailman/listinfo/mei-l >> >> _______________________________________________ >> >> mei-l mailing list >> >> mei-l at lists.uni-paderborn.de >> >> https://lists.uni-paderborn.de/mailman/listinfo/mei-l >> > >> > >> > _______________________________________________ >> > mei-l mailing list >> > mei-l at lists.uni-paderborn.de >> > https://lists.uni-paderborn.de/mailman/listinfo/mei-l >> > _______________________________________________ >> > mei-l mailing list >> > mei-l at lists.uni-paderborn.de >> > https://lists.uni-paderborn.de/mailman/listinfo/mei-l >> >> >> _______________________________________________ >> mei-l mailing list >> mei-l at lists.uni-paderborn.de >> https://lists.uni-paderborn.de/mailman/listinfo/mei-l > > > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l > From pdr4h at eservices.virginia.edu Tue Mar 6 17:48:44 2012 From: pdr4h at eservices.virginia.edu (Roland, Perry (pdr4h)) Date: Tue, 6 Mar 2012 16:48:44 +0000 Subject: [MEI-L] trills within beams In-Reply-To: References: <5DE906DC-281C-4DB4-BC46-B3A8A0800E72@gmx.de> <24846_1330988626_4F554651_24846_2_1_00f601ccfb24$2d7bbc20$88733460$@stanford.edu> <114F68D8-A9A2-422B-9C85-716CAE715DD6@mail.mcgill.ca> 
<21712_1331035890_4F55FEF2_21712_135_1_ED80AB9F-6C17-4270-BAE9-E2CE42AE2D24@edirom.de> <07B81BC7-2C56-4C51-BCA3-FC7A274036C4@edirom.de>
Message-ID:

Slow down, take it easy, remain calm. I said I could imagine a future without them, but I didn't mean in the next 5 minutes. :)

These features are very convenient for hand-encoders. They have been frequently requested in the past and, in fact, this thread started with a request for just such a feature. These features also make it somewhat easier to more-directly capture data from other systems that allow/encourage this kind of thing.

So, I would urge caution at this point. Anyone who doesn't want to use these "conveniences" shouldn't feel compelled to do so. But I don't think that I want to rush to remove them. I think we should be thinking about canonicalization instead.

--
p.

__________________________
Perry Roland
Music Library
University of Virginia
P. O. Box 400175
Charlottesville, VA 22904
434-982-2702 (w)
pdr4h (at) virginia (dot) edu
________________________________________
From: mei-l-bounces at lists.uni-paderborn.de [mei-l-bounces at lists.uni-paderborn.de] on behalf of TW [zupftom at googlemail.com]
Sent: Tuesday, March 06, 2012 11:36 AM
To: Music Encoding Initiative
Subject: Re: [MEI-L] trills within beams

I'm also in favor of deprecating [i|m|t][1-6].

Thomas

2012/3/6 Raffaele Viglianti :
> +1 to deprecation
>
> Raffaele
>
> On Tue, Mar 6, 2012 at 4:22 PM, Johannes Kepper wrote:
>>
>> if this turns out to be a way to get rid of the [i|m|t][1-6] datatype, I'm more than happy :-)
>>
>> (do we need to cover that in the guidelines, or can't we just deprecate it now?) (that's also a rhetorical question)
>>
>> Am 06.03.2012 um 17:18 schrieb Roland, Perry (pdr4h):
>>
>> > @tie, @slur, and such, were put in as conveniences for the hand encoder. The question is: When does a plethora of conveniences become inconvenient?
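The canonicalization Perry raises would rewrite the attribute conveniences into their element equivalents. A sketch of the two forms for a slur, using the [i|m|t][1-6] datatype the thread proposes deprecating (i = initial, m = medial, t = terminal, with a digit distinguishing overlapping spans; that reading, and the note values below, are the editor's gloss, and MEIron's actual output is not shown here):

```xml
<!-- Convenience attribute form -->
<note xml:id="s1" pname="e" oct="4" dur="4" slur="i1"/>
<note xml:id="s2" pname="f" oct="4" dur="4" slur="t1"/>

<!-- Canonical element form a canonizer might emit instead -->
<note xml:id="s1" pname="e" oct="4" dur="4"/>
<note xml:id="s2" pname="f" oct="4" dur="4"/>
<slur startid="#s1" endid="#s2"/>
```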
>> > Where the line gets drawn may be arbitrary, but a line needs to be drawn >> > nonetheless. >> > >> > I can certainly imagine the attributes you cite being deprecated over >> > time. Or at least ignored when encoding software (like a GUI editor) is >> > used. I can also imagine them being converted to the element form by a >> > canonizer, like MEIron. >> > >> > If attributes such as these are conveniences that will probably be >> > ignored or deprecated in the future, why add more like them now? (That's a >> > rhetorical question.) >> > >> > -- >> > p. >> > >> > >> > __________________________ >> > Perry Roland >> > Music Library >> > University of Virginia >> > P. O. Box 400175 >> > Charlottesville, VA 22904 >> > 434-982-2702 (w) >> > pdr4h (at) virginia (dot) edu >> > ________________________________________ >> > From: mei-l-bounces at lists.uni-paderborn.de >> > [mei-l-bounces at lists.uni-paderborn.de] on behalf of Johannes Kepper >> > [kepper at edirom.de] >> > Sent: Tuesday, March 06, 2012 11:06 AM >> > To: Music Encoding Initiative >> > Subject: Re: [MEI-L] trills within beams >> > >> > I see the point, but in this case, shouldn't we consider to deprecate >> > @tie, @slur and similar constructs? The arguments you provide below apply to >> > them as well? >> > >> > devil's advocate Johannes >> > >> > >> > Am 06.03.2012 um 16:57 schrieb Roland, Perry (pdr4h): >> > >> >> If single-note trills are treated differently, then why not any other >> >> single-note / instantaneous "control event", such as arpeg, breath, pedal, >> >> reh, dynam, etc.? Once you start down that road, Pandora's box is opened. >> >> A proliferation of attributes wouldn't be helpful. >> >> >> >> If attributes were added, @ornam for example, they would only be useful >> >> part of the time; that is, in the case of @ornam/ @trill, for a single-note >> >> trill and when complete control of the rendering of the trill is to be >> >> handled by the rendering engine. 
There's no opportunity to add visual >> >> information to that trill without resorting to attributes about attributes >> >> (and that way certainly lies madness!) >> >> >> >> If an child of were added or allowed to be a >> >> child of in order to accommodate visual info., then we're back into >> >> the hierarchy problems Andrew so eloquently described yesterday. In >> >> addition, if we allow both possibilities (attribute and child), then user >> >> confusion is increased and interchange / interoperability diminished. >> >> >> >> When these other things are considered, I believe recording trills and >> >> such after the notes they're attached to is still the best of the >> >> alternatives. >> >> >> >> -- >> >> p. >> >> >> >> __________________________ >> >> Perry Roland >> >> Music Library >> >> University of Virginia >> >> P. O. Box 400175 >> >> Charlottesville, VA 22904 >> >> 434-982-2702 (w) >> >> pdr4h (at) virginia (dot) edu >> >> >> >> >> >> >> >> From: mei-l-bounces at lists.uni-paderborn.de >> >> [mei-l-bounces at lists.uni-paderborn.de] on behalf of Andrew Hankinson, Mr >> >> [andrew.hankinson at mail.mcgill.ca] >> >> Sent: Tuesday, March 06, 2012 10:05 AM >> >> To: Music Encoding Initiative >> >> Subject: Re: [MEI-L] trills within beams >> >> >> >> >> >> Thanks Johannes! >> >> >> >> >> >> I gave a couple examples to show how trill *could* be done in XML (and >> >> is done in some other XML-based music encoding schemes), since I was >> >> responding to Eleanor's question. But I don't think that it *should* be done >> >> that way, since, like I said, it creates a situation where you're forced >> >> into creating these artificial and non-musical hierarchies. I didn't mean to >> >> suggest that I felt any changes to should be made, I just wanted >> >> to address Eleanor's question directly by showing how MEI does it >> >> differently. >> >> >> >> >> >> But to address Kristina's question: I did some asking around our lab >> >> yesterday. 
We came to the agreement that a trill is an ornament, not an >> >> articulation. If doesn't have @ornament or something along those >> >> lines, I think this is a great argument for it. >> >> >> >> >> >> One of the musicologists in our lab found a couple helpful pages for >> >> this discussion: >> >> >> >> >> >> >> >> http://www.music.vt.edu/musicdictionary/appendix/ornaments/ornaments.html >> >> >> >> >> >> Couperin's Ornaments: >> >> >> >> http://books.google.ca/books?id=CecBsvk7Oz0C&lpg=PA34&ots=VQ2uwRSt_n&dq=ornaments%20couperin&pg=PA34#v=onepage&q=ornaments%20couperin&f=false >> >> >> >> >> >> -Andrew >> >> >> >> >> >> On 2012-03-06, at 7:11 AM, Johannes Kepper wrote: >> >> >> >> >> >> Hi Andrew, >> >> >> >> I absolutely agree with what you say about hierarchy issues etc. I >> >> think this discussion is very helpful for identifying what MEI is and what >> >> it is not, and, even more beneficial, how people understand it. But still, I >> >> think it misses Kristina's initial question. Sometimes, a trill stretches no >> >> longer than the note it's written above. In these cases, it seems to be no >> >> spanning element on its own, but rather a playing instruction for this >> >> particular note. The question then is not whether we want to redefine the >> >> model of to allow it within a note or as a container of notes, but >> >> instead if "tr" should be an allowed value of @artic (or any other attribute >> >> on ). By no means this would argue against the existence of the >> >> current standoff , it would just be a shortcut for describing trill >> >> that do not stretch beyond their initial note. Currently, the would >> >> have to duplicate the @tstamp and @dur of this note (@tstamp on notes is >> >> normally omitted, I know). >> >> >> >> It is fine to decide that we don't want to offer this limited-power >> >> shortcut for a certain kind of trills, but we have several such constructs >> >> for other things in MEI already. 
I don't see that anyone asked for >> >> remodelling the trill element (but please correct me!!!!). >> >> >> >> Best, >> >> Johannes >> >> >> >> >> >> >> >> >> >> Am 06.03.2012 um 01:57 schrieb Andrew Hankinson, Mr: >> >> >> >> >> >> Hi Eleanor, >> >> >> >> >> >> >> >> The way I understand the problem is a choice between the following >> >> options: >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> or: >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> or: >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> In the first example, note becomes a "child" element of trill; the >> >> second example inverts that so that trill becomes a child of note. If we >> >> were to wish to express this in "pure" XML, it would need to be a choice >> >> between either of these, since XML imposes a very hierarchical structure if >> >> used naively. Sometimes this hierarchy makes sense (note as a child of >> >> chord), but in this case it doesn't really make musical sense for trill to >> >> be a parent or a child of note. >> >> >> >> >> >> >> >> If we were to want to expand trill so that it covers more than one >> >> note, we would have to choose option 1 OR we would have to try and figure >> >> out some other way of grouping notes. Perry's concern was that if we allow >> >> all things that can be trilled, or that can hold children that can also be >> >> trilled, then we pretty much have to allow most things as children of >> >> trills. This makes the encoding task much more difficult, since it can be >> >> very easy to get into trouble and do nonsensical things. >> >> >> >> >> >> >> >> MEI and TEI have a fairly elegant solution to this which is still valid >> >> XML but allows us to break out of this rigid hierarchy. 
>> >> >> >> >> >> >> >> In the third example, we remove the hierarchy and assign the trill to >> >> the note by reference; that is, the element is not hierarchically >> >> related to note, but the @startid attribute points to the element where the >> >> trill starts. This is much easier to handle, since you can put many other >> >> elements between them. For example, you could do this (a highly simplified >> >> version of the first measure of the example you attached): >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> This allows much more flexibility in the encoding, since it means you >> >> do not have to decide whether the trill is hierarchically higher or lower >> >> than the note; you can simply list all the "spanning" elements at the end of >> >> the measure, and then give @startid/@endid or @tstamp references (as >> >> Raffaele mentioned). >> >> >> >> >> >> >> >> I can't speak directly for Perry, but I think that's what he meant by >> >> "one pass" vs. "two pass". It's not that you can't do all the encoding in >> >> the same sitting, it's just that sometimes you'll want to identify and >> >> encode elements that don't strictly fall into the hierarchy later in the >> >> measure. So you would, in effect, do two passes through the measure: one to >> >> encode the notes, and the other to encode the other events. >> >> >> >> >> >> >> >> The complexity of keeping all this straight when encoding is certainly >> >> not trivial, but I don't think that's an MEI issue. 
My own feeling is that >> >> it should be the job of the notation encoding software to help you manage >> >> all of the bits and pieces >> >> >> >> >> >> >> >> I think this addresses your concern directly. You don't have to put all >> >> elements in an arbitrary hierarchy since things can be referenced after they >> >> have "happened" in the score, without needing to decide if it makes musical >> >> sense to have it as a hierarchical relationship. This, in my opinion, is >> >> more musical than other attempts at encoding music notation in XML since you >> >> don't have to make seemingly arbitrary decisions over which musical >> >> structure is a child of another. >> >> >> >> >> >> >> >> -Andrew >> >> >> >> >> >> >> >> On 2012-03-05, at 6:03 PM, Eleanor Selfridge-Field wrote: >> >> >> >> >> >> >> >> Hi, Kristina, Perry, et al. >> >> >> >> >> >> >> >> From my perspective Kristina is prompting a really important question, >> >> and one response (?more than one pass....?) seems to be the inevitable >> >> place where all encoders end up when confronted with real music. It may be >> >> unavoidable, but it is not ideal. >> >> >> >> >> >> >> >> What is unsettling in the responses is that we are putting hierarchy >> >> above music and the needs of encoders. >> >> >> >> When working with MSS, there are dozens of potential distractions, and >> >> making a second pass to capture left-over details requires finding the exact >> >> spot on the folio, checking to see which features were encoded on the first >> >> pass, and, over time, a lot of secondary bookkeeping about what is finished >> >> and what has yet to be done. (I know; I did that kind of housework for my >> >> Marcello catalogue in the 1980s---3000 bitty music files, each one in need >> >> of its own particular notes.) The risks of eventual inaccuracy, incomplete >> >> information, and duplication are very real. 
>> >> >> >> >> >> >> >> Granted we want MEI to work, but if it is optimized for programming >> >> efficiency at the cost of usability, we may need to step back and look for >> >> other solutions. The low level of generalizability of music features across >> >> repertories is widely acknowledged, and we are simply encountering one >> >> instance here. For another example from the same category, consider this >> >> CPE Bach incipit: >> >> >> >> >> >> >> >> >> >> >> >> We used it in our ?desk-top publishing IEEE tutorial of 1994. [For all >> >> the examples go to http://www.ccarh.org/publications/reprints/ieee/ >> >> --Category 2, Type 1] >> >> >> >> >> >> >> >> How would MEI handle it? >> >> >> >> >> >> >> >> Eleanor >> >> >> >> >> >> >> >> >> >> >> >> Eleanor Selfridge-Field >> >> >> >> Consulting Professor, Music >> >> >> >> Braun Music Center #129 >> >> >> >> Stanford University >> >> >> >> Stanford, CA 94305-3076, USA >> >> >> >> http://www.stanford.edu/~esfield/ >> >> >> >> http://www.ccarh.org >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> From: >> >> mei-l-bounces+esfield=stanford.edu at lists.uni-paderborn.de >> >> [mailto:mei-l-bounces+esfield=stanford.edu at lists.uni-paderborn.de] >> >> On Behalf Of Roland, Perry (pdr4h) >> >> >> >> Sent: Monday, March 05, 2012 5:50 AM >> >> >> >> To: Music Encoding Initiative >> >> >> >> Subject: Re: [MEI-L] trills within beams >> >> >> >> >> >> >> >> Hi, Kristina, >> >> >> >> >> >> >> >> MEI is not designed to be encoded in one pass -- some things, such as >> >> trills, pedal markings, text directives, etc., must be captured after the >> >> notes. >> >> >> >> >> >> >> >> It might be possible to do what you suggest in some cases but it won't >> >> work all the time because it potentially leads to overlapping hierarchies. >> >> It also means that your proposed element would have to allow every >> >> other possible element, leading to opportunities for encoders to do >> >> unsupported things. 
>> >> >> >> >> >> >> >> From snowy (yes, snowy!) Charlottesville, >> >> >> >> >> >> >> >> -- >> >> >> >> p. >> >> >> >> >> >> >> >> >> >> >> >> __________________________ >> >> >> >> Perry Roland >> >> >> >> Music Library >> >> >> >> University of Virginia >> >> >> >> P. O. Box 400175 >> >> >> >> Charlottesville, VA 22904 >> >> >> >> 434-982-2702 (w) >> >> >> >> pdr4h (at) virginia (dot) edu >> >> >> >> ________________________________ >> >> >> >> From: >> >> mei-l-bounces at lists.uni-paderborn.de >> >> [mei-l-bounces at lists.uni-paderborn.de] >> >> on behalf of Kristina Richts >> >> [kristina.richts at gmx.de] >> >> >> >> Sent: Monday, March 05, 2012 4:26 AM >> >> >> >> To: Music Encoding Initiative >> >> >> >> Subject: [MEI-L] trills within beams >> >> >> >> Hi all, >> >> >> >> >> >> >> >> while encoding the following passage, I just noticed that there >> >> seems to be no way to encode the trill right here, as I don't want to >> >> extract this information and place it at the end of the measure. >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> Why isn't it possible to provide notes within a beam with a >> >> element, as could be done with single notes, like this: >> >> >> >> > >> stem.dir="down"/>? >> >> >> >> Did I miss anything?
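The measure-level alternative mentioned above (capturing the trill after the beamed notes rather than inside the beam) might look like the following sketch. This is not from the original mail: pitches, ids, and durations are invented, and the element and attribute names assume the MEI CMN module (a <trill> control event with @startid).

```xml
<!-- Hedged sketch: the trill is recorded at measure level, after the
     notes, and points back into the beam via @startid. All values
     are invented for illustration. -->
<measure n="1">
  <staff n="1">
    <layer n="1">
      <beam>
        <note xml:id="n1" pname="g" oct="4" dur="16"/>
        <note xml:id="n2" pname="a" oct="4" dur="16" stem.dir="down"/>
        <note xml:id="n3" pname="b" oct="4" dur="8"/>
      </beam>
    </layer>
  </staff>
  <!-- the ornament lives outside the beam hierarchy -->
  <trill staff="1" startid="#n2"/>
</measure>
```

The trade-off discussed in the thread is exactly this: the ornament is no longer encoded "right here" next to the note, but it also never forces an overlapping hierarchy.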
>> >> >> >> >> >> >> >> Best, >> >> >> >> Kristina >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> >> _______________________________________________ >> >> >> >> mei-l mailing list >> >> >> >> mei-l at lists.uni-paderborn.de >> >> >> >> https://lists.uni-paderborn.de/mailman/listinfo/mei-l >> >> >> >> >> >> >> >> >> >> >> >> _______________________________________________ >> >> >> >> mei-l mailing list >> >> >> >> mei-l at lists.uni-paderborn.de >> >> >> >> https://lists.uni-paderborn.de/mailman/listinfo/mei-l >> >> >> >> >> >> >> >> _______________________________________________ >> >> mei-l mailing list >> >> mei-l at lists.uni-paderborn.de >> >> https://lists.uni-paderborn.de/mailman/listinfo/mei-l >> >> _______________________________________________ >> >> mei-l mailing list >> >> mei-l at lists.uni-paderborn.de >> >> https://lists.uni-paderborn.de/mailman/listinfo/mei-l >> > >> > >> > _______________________________________________ >> > mei-l mailing list >> > mei-l at lists.uni-paderborn.de >> > https://lists.uni-paderborn.de/mailman/listinfo/mei-l >> > _______________________________________________ >> > mei-l mailing list >> > mei-l at lists.uni-paderborn.de >> > https://lists.uni-paderborn.de/mailman/listinfo/mei-l >> >> >> _______________________________________________ >> mei-l mailing list >> mei-l at lists.uni-paderborn.de >> https://lists.uni-paderborn.de/mailman/listinfo/mei-l > > > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l > _______________________________________________ mei-l mailing list mei-l at lists.uni-paderborn.de https://lists.uni-paderborn.de/mailman/listinfo/mei-l From zupftom at googlemail.com Tue Mar 6 18:46:55 2012 From: zupftom at googlemail.com (TW) Date: Tue, 6 Mar 2012 18:46:55 +0100 Subject: [MEI-L] trills within beams In-Reply-To: References: <5DE906DC-281C-4DB4-BC46-B3A8A0800E72@gmx.de> 
<24846_1330988626_4F554651_24846_2_1_00f601ccfb24$2d7bbc20$88733460$@stanford.edu> <114F68D8-A9A2-422B-9C85-716CAE715DD6@mail.mcgill.ca> <21712_1331035890_4F55FEF2_21712_135_1_ED80AB9F-6C17-4270-BAE9-E2CE42AE2D24@edirom.de> <07B81BC7-2C56-4C51-BCA3-FC7A274036C4@edirom.de> Message-ID: 2012/3/6 Roland, Perry (pdr4h) : > Slow down, take it easy, remain calm. I said I could imagine a future without them, but I didn't mean in the next 5 minutes. :) > > These features are very convenient for hand-encoders. They have been frequently requested in the past and, in fact, this thread started with a request for just such a feature. > > These features also make it somewhat easier to more-directly capture data from other systems that allow/encourage this kind of thing. > > So, I would urge caution at this point. Anyone who doesn't want to use these "conveniences" shouldn't feel compelled to do so. The problem for people like me and Julian is that we don't have a choice because we have to eat what people are throwing at us. If we want to handle MEI as completely as possible, we currently would have to take into account four different ways for specifying beams (@beam.group/@beam.rest, , and @beam). again has two ways of being used, @startid+@endid and @tstamp+@dur, (In theory and mathematically, the specs allow for twelve different combinations, but I believe that the @*.ges and @*.real attributes don't make a lot of sense and mixing IDs and time-based start/end attributes can be neglected). > But I don't think that I want to rush to remove them. I think we should be thinking about canonicalization instead. I agree. An "inputting environment" might want to offer the convenience notation and extend them "automatically". I don't know whether this is a sensible suggestion, but what about moving convenience features to a module of their own so they can easily be enabled/disabled? (Am I opening another can of worms?)
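To make the "four different ways" point concrete, here is a hedged sketch of two of the beam notations under discussion. It is not from the original mail: pitches and ids are invented, and the <beam> container and the <beamSpan> control event with @startid/@endid follow my reading of the MEI CMN module.

```xml
<!-- (a) container element: the notes are children of <beam> -->
<layer>
  <beam>
    <note xml:id="a1" pname="c" oct="5" dur="8"/>
    <note xml:id="a2" pname="d" oct="5" dur="8"/>
  </beam>
</layer>

<!-- (b) control event: the notes stay flat in the layer and the
     beam references them by id -->
<layer>
  <note xml:id="b1" pname="c" oct="5" dur="8"/>
  <note xml:id="b2" pname="d" oct="5" dur="8"/>
</layer>
<beamSpan startid="#b1" endid="#b2"/>
```

A processor that wants to handle MEI as completely as possible has to normalize all such forms to one internal representation, which is where the canonicalization idea comes in.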
Thomas From pdr4h at eservices.virginia.edu Tue Mar 6 19:32:51 2012 From: pdr4h at eservices.virginia.edu (Roland, Perry (pdr4h)) Date: Tue, 6 Mar 2012 18:32:51 +0000 Subject: [MEI-L] trills within beams In-Reply-To: References: <5DE906DC-281C-4DB4-BC46-B3A8A0800E72@gmx.de> <24846_1330988626_4F554651_24846_2_1_00f601ccfb24$2d7bbc20$88733460$@stanford.edu> <114F68D8-A9A2-422B-9C85-716CAE715DD6@mail.mcgill.ca> <21712_1331035890_4F55FEF2_21712_135_1_ED80AB9F-6C17-4270-BAE9-E2CE42AE2D24@edirom.de> <07B81BC7-2C56-4C51-BCA3-FC7A274036C4@edirom.de> , Message-ID: >> So, I would urge caution at this point. Anyone who doesn't want to use these "conveniences" shouldn't feel compelled to do so. >The problem for people like me and Julian is that we don't have a >choice because we have to eat what people are throwing at us. If we >want to handle MEI as completely as possible, ... Given the extensibility of MEI, I don't know if any software can ever handle MEI "completely". Instead, it should declare that it uses a certain subset of MEI features and reject anything that doesn't validate against the schema that defines the chosen profile. One of the advantages of ODD is that it facilitates the creation of these profiles. >> But I don't think that I want to rush to remove them. I think we should be thinking about canonicalization instead. >I agree. An "inputting environment" might want to offer the >convenience notation and extend them "automatically". I don't know >whether this is a sensible suggestion, but what about moving >convenience features to a module of their own so they can easily be >enabled/disabled? (Am I opening another can of worms?) I'm not opposed to exploring this, but again we run into music's contradictions -- what is a "convenience" for some, is "essential" for others. 
If we're serious about pursuing this approach, though, I think we would have to put alternatives (beam attributes and beam elements in your example) into separate modules and allow users to select between them (or turn them both on, as they effectively are now). Of course, this will complicate the already-complex class hierarchy of MEI, making the schema more difficult to work with and harder to explain to the uninitiated. -- p. __________________________ Perry Roland Music Library University of Virginia P. O. Box 400175 Charlottesville, VA 22904 434-982-2702 (w) pdr4h (at) virginia (dot) edu ________________________________________ From: mei-l-bounces+pdr4h=virginia.edu at lists.uni-paderborn.de [mei-l-bounces+pdr4h=virginia.edu at lists.uni-paderborn.de] on behalf of TW [zupftom at googlemail.com] Sent: Tuesday, March 06, 2012 12:46 PM To: Music Encoding Initiative Subject: Re: [MEI-L] trills within beams 2012/3/6 Roland, Perry (pdr4h) : > Slow down, take it easy, remain calm. I said I could imagine a future without them, but I didn't mean in the next 5 minutes. :) > > These features are very convenient for hand-encoders. They have been frequently requested in the past and, in fact, this thread started with a request for just such a feature. > > These features also make it somewhat easier to more-directly capture data from other systems that allow/encourage this kind of thing. > > So, I would urge caution at this point. Anyone who doesn't want to use these "conveniences" shouldn't feel compelled to do so. The problem for people like me and Julian is that we don't have a choice because we have to eat what people are throwing at us. If we want to handle MEI as completely as possible, we currently would have to take into account four different ways for specifying beams (@beam.group/@beam.rest, , and @beam). 
again has two ways of being used, @startid+@endid and @tstamp+@dur, (In theory and mathematically, the specs allow for twelve different combinations, but I believe that the @*.ges and @*.real attributes don't make a lot of sense and mixing IDs and time-based start/end attributes can be neglected). > But I don't think that I want to rush to remove them. I think we should be thinking about canonicalization instead. I agree. An "inputting environment" might want to offer the convenience notation and extend them "automatically". I don't know whether this is a sensible suggestion, but what about moving convenience features to a module of their own so they can easily be enabled/disabled? (Am I opening another can of worms?) Thomas _______________________________________________ mei-l mailing list mei-l at lists.uni-paderborn.de https://lists.uni-paderborn.de/mailman/listinfo/mei-l From laurent at music.mcgill.ca Wed Mar 7 12:29:01 2012 From: laurent at music.mcgill.ca (Laurent Pugin) Date: Wed, 7 Mar 2012 11:29:01 +0000 Subject: [MEI-L] trills within beams In-Reply-To: <22803_1331058782_4F56585E_22803_116_1_BBCC497C40D85642B90E9F94FC30343D0114FEF6@GRANT.eservices.virginia.edu> References: <5DE906DC-281C-4DB4-BC46-B3A8A0800E72@gmx.de> <24846_1330988626_4F554651_24846_2_1_00f601ccfb24$2d7bbc20$88733460$@stanford.edu> <114F68D8-A9A2-422B-9C85-716CAE715DD6@mail.mcgill.ca> <21712_1331035890_4F55FEF2_21712_135_1_ED80AB9F-6C17-4270-BAE9-E2CE42AE2D24@edirom.de> <07B81BC7-2C56-4C51-BCA3-FC7A274036C4@edirom.de> <22803_1331058782_4F56585E_22803_116_1_BBCC497C40D85642B90E9F94FC30343D0114FEF6@GRANT.eservices.virginia.edu> Message-ID: >I agree. An "inputting environment" might want to offer the > >convenience notation and extend them "automatically". I don't know > >whether this is a sensible suggestion, but what about moving > >convenience features to a module of their own so they can easily be > >enabled/disabled? (Am I opening another can of worms?)
> This seems to be over the top for me. I think we should take a decision and I would vote for deprecating them. It does not have to be within 5 minutes, but the earlier the better (10 minutes?). Once we have tools relying on them, we will have votes against deprecation, which does not seem to be the case now - actually, I already use them but would not mind shutting up if it can allow some simplification. Now for the trill, is my understanding correct that we would expect this encoding to be used for encoding the realization of the trill, and that to be used for encoding the note (as written) with its ornament? I guess one would rather (or also) use .ges attributes for encoding its realization, but using the first one with only the written note seems awkward to me because logically, I see a note with a trill and really not a trill with a note. It is significantly different from a chord and from a beam. Do we care about this? Laurent > __________________________ > Perry Roland > Music Library > University of Virginia > P. O. Box 400175 > Charlottesville, VA 22904 > 434-982-2702 (w) > pdr4h (at) virginia (dot) edu > ________________________________________ > From: mei-l-bounces+pdr4h=virginia.edu at lists.uni-paderborn.de[mei-l-bounces+pdr4h= > virginia.edu at lists.uni-paderborn.de] on behalf of TW [ > zupftom at googlemail.com] > Sent: Tuesday, March 06, 2012 12:46 PM > To: Music Encoding Initiative > Subject: Re: [MEI-L] trills within beams > > 2012/3/6 Roland, Perry (pdr4h) : > > Slow down, take it easy, remain calm. I said I could imagine a future > without them, but I didn't mean in the next 5 minutes. :) > > > > These features are very convenient for hand-encoders. They have been > frequently requested in the past and, in fact, this thread started with a > request for just such a feature. > > > > These features also make it somewhat easier to more-directly capture > data from other systems that allow/encourage this kind of thing.
> > > > So, I would urge caution at this point. Anyone who doesn't want to use > these "conveniences" shouldn't feel compelled to do so. > > The problem for people like me and Julian is that we don't have a > choice because we have to eat what people are throwing at us. If we > want to handle MEI as completely as possible, we currently would have > to take into account four different ways for specifying beams > (@beam.group/@beam.rest, , and @beam). > again has two ways of being used, @startid+@endid and @tstamp+@dur, > (In theory and mathematically, the specs allow for twelve different > combinations, but I believe that the @*.ges and @*.real attributes > don't make a lot of sense and mixing IDs and time-based start/end > attributes can be neglected). > > > But I don't think that I want to rush to remove them. I think we should > be thinking about canonicalization instead. > > I agree. An "inputting environment" might want to offer the > convenience notation and extend them "automatically". I don't know > whether this is a sensible suggestion, but what about moving > convenience features to a module of their own so they can easily be > enabled/disabled? (Am I opening another can of worms?) > > Thomas > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l > > -------------- next part -------------- An HTML attachment was scrubbed...
URL: From kepper at edirom.de Wed Mar 7 13:20:31 2012 From: kepper at edirom.de (Johannes Kepper) Date: Wed, 7 Mar 2012 13:20:31 +0100 Subject: [MEI-L] trills within beams In-Reply-To: References: <5DE906DC-281C-4DB4-BC46-B3A8A0800E72@gmx.de> <24846_1330988626_4F554651_24846_2_1_00f601ccfb24$2d7bbc20$88733460$@stanford.edu> <114F68D8-A9A2-422B-9C85-716CAE715DD6@mail.mcgill.ca> <21712_1331035890_4F55FEF2_21712_135_1_ED80AB9F-6C17-4270-BAE9-E2CE42AE2D24@edirom.de> <07B81BC7-2C56-4C51-BCA3-FC7A274036C4@edirom.de> <22803_1331058782_4F56585E_22803_116_1_BBCC497C40D85642B90E9F94FC30343D0114FEF6@GRANT.eservices.virginia.edu> Message-ID: <14554848-F3C7-43A0-A688-7BF7B2A46A4E@edirom.de> Slowly, calm down. I am on your side, I want to deprecate this. But we have agreed that the schema for the upcoming release is already frozen. There is a loose idea to have another release before the current DFG/NEH grant ends, which will be around late summer 2013. If we would deprecate (= technically allow, but discourage to use) it by then and kill it in the release following that, I think we would have a reasonable schedule. The only question is if we want to mention the future deprecation in the current guidelines or not ("@tie is likely to be deprecated in future releases of MEI" etc.). This seems like a doubled deprecation process to me, but would probably be more fair to everyone who's going to start encoding today. The remaining time could also help to have the applications we've been talking about in place. Am 07.03.2012 um 12:29 schrieb Laurent Pugin: > > > >I agree. An "inputting environment" might want to offer the > >convenience notation and extend them "automatically". I don't know > >whether this is a sensible suggestion, but what about moving > >convenience features to a module of their own so they can easily be > >enabled/disabled? (Am I opening another can of worms?) > > This seems to be over the top for me. 
I think we should take a decision and I would vote for deprecating them. It does not have to be within 5 minutes, but the earlier the better (10 minutes?). Once we will have tools relying on them, we will have votes against deprecation, which does not seem to be the case now - actually, I already use them but would not mind shutting up if it can allow some simplification. > > Now for the trill, I my understanding correct that we would expect this encoding > > > > > > > > to be used for encoding the realization of the trill, That's another issue that has been raised already internally (@Laurent, look for Perry's mail from 2011-07-07), but which was never responded. I think we will have to put that in a separate thread. I know that Perry is busy today, so I will try to sum it up to give us a good start in this discussion. Best, Johannes > and that > > > > > to be used for encoding the note (as written) with its ornament? > > I guess one would rather (or also) use .ges attributes for encoding its realization, but using the first one with only the written note seems awkward to me because logically, I see a note with a trill and really not a trill with a note. It is significantly different from a chord and from a beam. Do we care about this? > > Laurent > > > __________________________ > Perry Roland > Music Library > University of Virginia > P. O. Box 400175 > Charlottesville, VA 22904 > 434-982-2702 (w) > pdr4h (at) virginia (dot) edu > ________________________________________ > From: mei-l-bounces+pdr4h=virginia.edu at lists.uni-paderborn.de [mei-l-bounces+pdr4h=virginia.edu at lists.uni-paderborn.de] on behalf of TW [zupftom at googlemail.com] > Sent: Tuesday, March 06, 2012 12:46 PM > To: Music Encoding Initiative > Subject: Re: [MEI-L] trills within beams > > 2012/3/6 Roland, Perry (pdr4h) : > > Slow down, take it easy, remain calm. I said I could imagine a future without them, but I didn't mean in the next 5 minutes. 
:) > > > > These features are very convenient for hand-encoders. They have been frequently requested in the past and, in fact, this thread started with a request for just such a feature. > > > > These features also make it somewhat easier to more-directly capture data from other systems that allow/encourage this kind of thing. > > > > So, I would urge caution at this point. Anyone who doesn't want to use these "conveniences" shouldn't feel compelled to do so. > > The problem for people like me and Julian is that we don't have a > choice because we have to eat what people are throwing at us. If we > want to handle MEI as completely as possible, we currently would have > to take into account four different ways for specifying beams > (@beam.group/@beam.rest, , and @beam). > again has two ways of being used, @startid+ at endid and @tstamp+ at dur, > (In theory and mathematicaly, the specs allow for twelve different > combinations, but I believe that the @*.ges and @*.real attributes > don't make a lot of sense and mixing IDs and time based start/end > attributes can be neglected). > > > But I don't think that I want to rush to remove them. I think we should be thinking about canonicalization instead. > > I agree. An "inputting environment" might want to offer the > convenience notation and extend them "automatically". I don't know > whether this is a sensible suggestion, but what about moving > convenience features to a module of their own so they can easily be > enabled/disabled? (Am I opening another can of worms?) 
> > Thomas > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l From kepper at edirom.de Wed Mar 7 15:25:56 2012 From: kepper at edirom.de (Johannes Kepper) Date: Wed, 7 Mar 2012 15:25:56 +0100 Subject: [MEI-L] Events inside events References: Message-ID: <64179483-CB63-4008-92B4-D8F77A1BB30A@edirom.de> As I mentioned earlier, here is an eMail that Perry wrote last year. We've never discussed his proposal so far, and his initial intention was to bring this to MEI-L. I modified only the beginning and end, there's no comment from me yet (I will "respond" to this in another mail). ------- Currently, a few elements (bend, gliss, mordent, trill, turn, note, ineume, uneume are the most pertinent ones here) permit other events in their content, e.g., The original purpose of this was to allow for interpretative data to be recorded "in-line". This preceded the introduction of the editorial elements, such as app and choice. Now that we have these elements (app and choice), I believe allowing event content in these situations is not only redundant, but confusing. It immediately raises a question of whether to encode the interpretative info directly inside, say, the turn element (as above) or use when marking up a single source, or if the turn and its "resolution" exist in different sources. In some cases; that is, in diastemmatic neume notation, such as Solesmes, this feature is still necessary in order to record the actual, uninterpreted pitch values of the neumes.
It has also already been used to capture interpreted pitch values in non-diastemmatic neume notation, such as for Hildegard's works. Although this last use might be worth to reconsider, we can't disallow it for earlier, unheighted notation if we allow it for diastemmatic neumes. In spite of the fact that removing the feature doesn't completely remove any possibility of its mis-use, I think it should be removed for elements in the CMN repertoire (bend, gliss, mordent, trill, turn, note). This will steer users toward a "proper" encoding using and . This is a significant enough change (much like the camelCasing of element names) to warrant making it now rather than later. I don't want to make any change to the source file now, but I think it should be done for the next release. Comments? -- p. -------------- Johannes again. What I wanted to add is that this discussion may not aim at changing the upcoming 2012 release, as the schema for this has already been fixed by a Council decision. What we can do now is to announce future changes in the 2012 Guidelines we're currently writing. I think a discussion of this thread is desperately needed, but we need to be clear about the schedule for any changes we might come up with. Johannes From pdr4h at eservices.virginia.edu Thu Mar 8 16:00:56 2012 From: pdr4h at eservices.virginia.edu (Roland, Perry (pdr4h)) Date: Thu, 8 Mar 2012 15:00:56 +0000 Subject: [MEI-L] Events inside events In-Reply-To: <64179483-CB63-4008-92B4-D8F77A1BB30A@edirom.de> References: , <64179483-CB63-4008-92B4-D8F77A1BB30A@edirom.de> Message-ID: Thanks, Johannes, For bringing this up again. I sent another message recently to MEI-L and MEI-developers, but perhaps it didn't get through -- see below. -- p. ________________________________________ From: Roland, Perry (pdr4h) Sent: Thursday, March 01, 2012 5:16 PM To: mei-l at lists.uni-paderborn.de; MEI Developers Subject: eventLike in bend, gliss, mordent, etc. 
Hello all, Please pardon the duplication if you get this message more than once. Elements in the model.eventLike class -- barLine, beam, beatRpt, bend, bTrem, chord, clef, clefGrp, custos, fTrem, gliss, halfmRpt, ineume, keySig, ligature, mensur, mRest, mRpt, mRpt2, mSpace, multiRest, multiRpt, note, pad, proport, rest, space, tuplet, uneume were allowed to occur in selected eventLike elements -- bend, gliss, mordent, trill, turn, note in earlier versions of MEI that didn't yet have the ability to encode multiple readings. Now that MEI does have for dealing with multiple readings, having this "event within event" structure is redundant and confusing. I think we need to kill off this dinosaur. Since the next release of MEI has already been frozen with regard to the addition / deletion of features, I propose to add documentation that deprecates this feature even though it will technically be allowed. Of course, in the next-next release this feature will be disabled. Any objections? Is anyone using this? Going, going, ... Best wishes, -- p. __________________________ Perry Roland Music Library University of Virginia P. O. Box 400175 Charlottesville, VA 22904 434-982-2702 (w) pdr4h (at) virginia (dot) edu From kepper at edirom.de Thu Mar 8 16:15:13 2012 From: kepper at edirom.de (Johannes Kepper) Date: Thu, 8 Mar 2012 16:15:13 +0100 Subject: [MEI-L] Events inside events In-Reply-To: References: , <64179483-CB63-4008-92B4-D8F77A1BB30A@edirom.de> Message-ID: <57102765-F590-4829-BC8D-C42D70D23C60@edirom.de> Hi Perry, ah, that was the mail I had in mind originally. As there seem to be no strong opinions on this topic, I wonder if we should start a survey on the use of these features. For the [i|m|t][1-6] attributes, a similar approach could be helpful. My only concern is that after having checked the subscribers of MEI-L recently I noticed that I had a lot of questions about MEI from people not subscribed. There might be a user base that we cannot reach easily. 
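A minimal sketch of the <app>/<rdg> pattern that the message above recommends in place of the "event within event" structure. It is not from the original mail: the source references, ids, pitches, and the written-out realization are invented for illustration.

```xml
<!-- Hedged sketch: alternative readings carried by <app>/<rdg>
     instead of nesting events inside <trill> or <turn>. -->
<layer>
  <app>
    <rdg source="#srcA">
      <!-- the note as written; the trill sign itself would be a
           separate control event pointing here via @startid -->
      <note xml:id="r1" pname="c" oct="5" dur="4"/>
    </rdg>
    <rdg source="#srcB">
      <!-- a written-out realization transmitted by another source -->
      <note pname="c" oct="5" dur="8"/>
      <note pname="d" oct="5" dur="8"/>
    </rdg>
  </app>
</layer>
```

For interpretations within a single source, <choice> with <orig>/<reg> would play the analogous role.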
Suggestions for this, anyone? Does anyone know a good and free survey service? surveymonkey.com seems to be limited to 100 responses without additional fees. I don't expect that we cross that border, but who knows? Best, Johannes Am 08.03.2012 um 16:00 schrieb Roland, Perry (pdr4h): > Thanks, Johannes, > > For bringing this up again. > > I sent another message recently to MEI-L and MEI-developers, but perhaps it didn't get through -- see below. > > -- > p. > > ________________________________________ > From: Roland, Perry (pdr4h) > Sent: Thursday, March 01, 2012 5:16 PM > To: mei-l at lists.uni-paderborn.de; MEI Developers > Subject: eventLike in bend, gliss, mordent, etc. > > Hello all, > > Please pardon the duplication if you get this message more than once. > > Elements in the model.eventLike class -- > > barLine, beam, beatRpt, bend, bTrem, chord, clef, clefGrp, custos, fTrem, gliss, halfmRpt, ineume, keySig, ligature, mensur, mRest, mRpt, mRpt2, mSpace, multiRest, multiRpt, note, pad, proport, rest, space, tuplet, uneume > > were allowed to occur in selected eventLike elements -- > > bend, gliss, mordent, trill, turn, note > > in earlier versions of MEI that didn't yet have the ability to encode multiple readings. Now that MEI does have for dealing with multiple readings, having this "event within event" structure is redundant and confusing. I think we need to kill off this dinosaur. > > Since the next release of MEI has already been frozen with regard to the addition / deletion of features, I propose to add documentation that deprecates this feature even though it will technically be allowed. Of course, in the next-next release this feature will be disabled. > > Any objections? Is anyone using this? Going, going, ... > > Best wishes, > > -- > p. > > __________________________ > Perry Roland > Music Library > University of Virginia > P. O. 
Box 400175 > Charlottesville, VA 22904 > 434-982-2702 (w) > pdr4h (at) virginia (dot) edu > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l From pdr4h at eservices.virginia.edu Thu Mar 8 17:06:48 2012 From: pdr4h at eservices.virginia.edu (Roland, Perry (pdr4h)) Date: Thu, 8 Mar 2012 16:06:48 +0000 Subject: [MEI-L] trills within beams In-Reply-To: <14554848-F3C7-43A0-A688-7BF7B2A46A4E@edirom.de> References: <5DE906DC-281C-4DB4-BC46-B3A8A0800E72@gmx.de> <24846_1330988626_4F554651_24846_2_1_00f601ccfb24$2d7bbc20$88733460$@stanford.edu> <114F68D8-A9A2-422B-9C85-716CAE715DD6@mail.mcgill.ca> <21712_1331035890_4F55FEF2_21712_135_1_ED80AB9F-6C17-4270-BAE9-E2CE42AE2D24@edirom.de> <07B81BC7-2C56-4C51-BCA3-FC7A274036C4@edirom.de> <22803_1331058782_4F56585E_22803_116_1_BBCC497C40D85642B90E9F94FC30343D0114FEF6@GRANT.eservices.virginia.edu> , <14554848-F3C7-43A0-A688-7BF7B2A46A4E@edirom.de> Message-ID: Hi, everyone, You know, I was almost persuaded by the deprecation argument. But, thinking more about Kristina's original question and Eleanor and Andrew's responses, I think that's not the best way to go. My first response to Kristina was written in a moment of weakness and Eleanor was correct to call me on it. :-) As I've said many times, I sympathize with developers who want "the one, true, correct way". But, that can lead one down a rocky road. For example, you know that place where ships are docked? What's the word for that in English? Is the "one, true, correct" word "harbor" (as we spell it in America) or is it "harbour" (as it is spelled/spelt by our British cousins)? Then there are the Old and Middle English spellings. Of course, all are correct in the appropriate context. Writing a word processor that checks spelling would be a lot easier if there were one "right" answer, but ... MEI is designed for multiple contexts. 
What is true and correct for one use or user, may not be for another. For example, MEI supports printing but it is not designed exclusively for that purpose. I think the practical result of this philosophical approach is that each use / user must define the context in which it / he operates. This means making choices. Kristina identified a need for a trill attribute in the context of hand encoding (or one-pass encoding, as much as that's possible in MEI), which was seconded by Eleanor and Andrew. MEI already provides attributes that function similarly, so the question, "Why can't I ...?", was legitimate. Surely, the answer is not to define the question out of existence. When I said before that I could imagine a future in which these attributes didn't exist, I was thinking of the time when sophisticated MEI authoring tools exist. And that time may come someday soon, but it's not here yet. And even if no one authored or edited MEI in oXygen, there would still be the analytical uses to consider -- see below. In spite of my temporary lapse, I still believe the appropriate answer is to accommodate these multiple contexts. Software, of course, then must also be able to deal with multiple contexts, probably by switching between them rather than supporting them all simultaneously. (In the word processor example, one switches contexts between British and American spelling by selecting different dictionaries.) What are the options for software? 1. Silently ignore anything it doesn't understand. 2. Ignore anything it doesn't understand, but alert the user to its "deafness". 3. Refuse to work with anything it doesn't understand. All of these options conform with the notion of "supporting MEI". Anything less, such as deprecating one context or another in MEI itself, isn't acceptable. I think that if we deprecated @tie, @slur, etc. it wouldn't be long before the cry went out to add them back. 
Or worse yet, individuals would start extending the schema *each in their own way* to get them back because they are useful in both simple input systems (that is, using an XML editor) and in the analytical context where one often needs to tightly couple an entity (in this, a note) and its properties for ease of conceptualization and for efficiency (it takes resources -- time and memory -- to navigate down into the document to find a trill that might be associated with a given note). In order to avoid putting them back later or virtually assuring uncontrolled extension, I believe it's better to allow these attributes (and even add more; that is, @ornam or similar, in this case). Doing so steers modifications toward restrictions, which are easier to create, maintain, and enforce. I believe this places MEI in the same philosophical space as TEI. That is, one should not use TEI straight out-of-the-box without making some choices. The mei-all schema (like tei-all) is only the first step in defining any particular use of MEI. So, I want to 1. leave the current crop of attributes in place (no deprecation, but *explanation* of their proper uses), 2. add @ornam with appropriate values, 3. devise methods of converting between attribute- and element-centered markup, 4. create customizations of mei-all that make it easier for users/agents to declare what they're ready to accept and conversely what they will ignore / refuse, for example, a customization that emphasizes attributes and one that emphasizes elements. This plan is not new. With the addition of no. 2, it's what I think we've been working toward for quite some time. We just momentarily stepped off the path. :-) -- p. __________________________ Perry Roland Music Library University of Virginia P. O. 
Box 400175 Charlottesville, VA 22904 434-982-2702 (w) pdr4h (at) virginia (dot) edu From zupftom at googlemail.com Thu Mar 8 18:05:45 2012 From: zupftom at googlemail.com (TW) Date: Thu, 8 Mar 2012 18:05:45 +0100 Subject: [MEI-L] <syl> inside <choice> Message-ID: <CAEB1mApswRW92C1u2WjSMX2_soUHme0q4uCKyVo+eBNHyg99Sw@mail.gmail.com> I just tried something like: <?xml version="1.0" encoding="UTF-8"?> <mei xmlns="http://www.music-encoding.org/ns/mei"> <meiHead> <fileDesc> <titleStmt> <title/> </titleStmt> <pubStmt/> </fileDesc> </meiHead> <music> <body> <mdiv> <score> <section> <measure> <staff> <layer> <note> <choice> <orig> <syl>Ms.</syl> </orig> <reg> <syl>Miss</syl> </reg> </choice> </note> </layer> </staff> </measure> </section> </score> </mdiv> </body> </music> </mei> I let oXygen validate against the RNG that the web service gave me, and it complains about <syl> in <orig> or <reg>. Indeed, the documentation says, <syl> is only allowed inside <lem>, <rdg>, <verse>, <syllable> and <note>. Why is this so? Bug or feature? Thomas From zupftom at googlemail.com Thu Mar 8 18:10:19 2012 From: zupftom at googlemail.com (TW) Date: Thu, 8 Mar 2012 18:10:19 +0100 Subject: [MEI-L] <syl> inside <choice> In-Reply-To: <CAEB1mApswRW92C1u2WjSMX2_soUHme0q4uCKyVo+eBNHyg99Sw@mail.gmail.com> References: <CAEB1mApswRW92C1u2WjSMX2_soUHme0q4uCKyVo+eBNHyg99Sw@mail.gmail.com> Message-ID: <CAEB1mAqhdRVpuPSpOq8XxKUPHd0HJnapytTAvU867jCuz96nug@mail.gmail.com> It seems the following is legal: <?xml version="1.0" encoding="UTF-8"?> <mei xmlns="http://www.music-encoding.org/ns/mei"> <meiHead> <fileDesc> <titleStmt> <title/> </titleStmt> <pubStmt/> </fileDesc> </meiHead> <music> <body> <mdiv> <score> <section> <measure> <staff> <layer> <note> <syl> <orig>Ms.</orig> <reg>Miss</reg> </syl> </note> </layer> </staff> </measure> </section> </score> </mdiv> </body> </music> </mei> But why like this and not with <choice>? Thomas 2012/3/8 TW <zupftom at googlemail.com>: > I just tried something like: > > <?xml version="1.0" encoding="UTF-8"?> > <mei xmlns="http://www.music-encoding.org/ns/mei"> > <meiHead> > <fileDesc> > <titleStmt> > <title/> > </titleStmt> > <pubStmt/> > 
</fileDesc> > </meiHead> > <music> > <body> > <mdiv> > <score> > <section> > <measure> > <staff> > <layer> > <note> > <choice> > <orig> > <syl>Ms.</syl> > </orig> > <reg> > <syl>Miss</syl> > </reg> > </choice> > </note> > </layer> > </staff> > </measure> > </section> > </score> > </mdiv> > </body> > </music> > </mei> > > > I let oXygen validate against the RNG that the web service gave me, > and it complains about <syl> in <orig> or <reg>. Indeed, the > documentation says, <syl> is only allowed inside <lem>, <rdg>, > <verse>, <syllable> and <note>. Why is this so? Bug or feature? > > Thomas From pdr4h at eservices.virginia.edu Thu Mar 8 20:57:40 2012 From: pdr4h at eservices.virginia.edu (Roland, Perry (pdr4h)) Date: Thu, 8 Mar 2012 19:57:40 +0000 Subject: [MEI-L] <syl> inside <choice> In-Reply-To: <CAEB1mAqhdRVpuPSpOq8XxKUPHd0HJnapytTAvU867jCuz96nug@mail.gmail.com> References: <CAEB1mApswRW92C1u2WjSMX2_soUHme0q4uCKyVo+eBNHyg99Sw@mail.gmail.com>, <CAEB1mAqhdRVpuPSpOq8XxKUPHd0HJnapytTAvU867jCuz96nug@mail.gmail.com> Message-ID: <BBCC497C40D85642B90E9F94FC30343D01150759@GRANT.eservices.virginia.edu> Hi, Thomas, I just validated your examples against /trunk/schemata/mei-all.rng, which I believe reflects the proper behavior expressed in the current version of mei-source. Because <choice> is a member of model.editLike, it is allowed in the content of <syl>.
<!-- syl content --> <content> <rng:zeroOrMore> <rng:choice> <rng:text/> <rng:ref name="model.textphraseLike.limited"/> <rng:ref name="model.editLike"/> <rng:ref name="model.transcriptionLike"/> </rng:choice> </rng:zeroOrMore> </content> So, this should validate: <syl> <choice> <orig><!-- English --></orig> <reg><!-- German --></reg> </choice> </syl> The following is (perhaps unfortunately) also permitted <syl> <orig/> <reg/> </syl> because <syl> also allows model.transcriptionLike members, one of which is <orig>, as do many elements that allow mixed content. In this case, no choice between the 2 things is suggested. The assumption is that the members of transcriptionLike, most usefully <add>, <del>, <sic>, <corr>, etc., will be used to mark parts of the text phrase, as in <syl>glare<add>d</add></syl> which indicates that the 'd' was added later. The <orig> and <reg> elements come as part of the transcriptionLike class -- it's too hard to exclude them. The <syl> element is *not allowed* in any of the transcriptionLike elements, such as <orig> or <reg> because <choice> <orig> <syl>Ms.</syl> </orig> <reg> <syl>Miss</syl> </reg> </choice> can be expressed more succinctly as <syl> <choice> <orig>Ms.</orig> <reg>Miss</reg> </choice> </syl> and, unlike some other things under discussion recently, in this case the decision was to allow only one way to do it. (MEI, like most other things in the real world, is not completely self-consistent -- sue me.) Unless the customization service is using an outdated/cached version of mei-source.xml, I can't say why you're getting different behavior. Maybe this is a question for Daniel R. Hope this helps, -- p. __________________________ Perry Roland Music Library University of Virginia P. O.
Box 400175 Charlottesville, VA 22904 434-982-2702 (w) pdr4h (at) virginia (dot) edu ________________________________________ From: mei-l-bounces at lists.uni-paderborn.de [mei-l-bounces at lists.uni-paderborn.de] on behalf of TW [zupftom at googlemail.com] Sent: Thursday, March 08, 2012 12:10 PM To: Music Encoding Initiative Subject: Re: [MEI-L] <syl> inside <choice> It seems the following is legal: <?xml version="1.0" encoding="UTF-8"?> <mei xmlns="http://www.music-encoding.org/ns/mei"> <meiHead> <fileDesc> <titleStmt> <title/> </titleStmt> <pubStmt/> </fileDesc> </meiHead> <music> <body> <mdiv> <score> <section> <measure> <staff> <layer> <note> <syl> <orig>Ms.</orig> <reg>Miss</reg> </syl> </note> </layer> </staff> </measure> </section> </score> </mdiv> </body> </music> </mei> But why like this and not with <choice>? Thomas 2012/3/8 TW <zupftom at googlemail.com>: > I just tried something like: > > <?xml version="1.0" encoding="UTF-8"?> > <mei xmlns="http://www.music-encoding.org/ns/mei"> > <meiHead> > <fileDesc> > <titleStmt> > <title/> > </titleStmt> > <pubStmt/> > </fileDesc> > </meiHead> > <music> > <body> > <mdiv> > <score> > <section> > <measure> > <staff> > <layer> > <note> > <choice> > <orig> > <syl>Ms.</syl> > </orig> > <reg> > <syl>Miss</syl> > </reg> > </choice> > </note> > </layer> > </staff> > </measure> > </section> > </score> > </mdiv> > </body> > </music> > </mei> > > > I let oXygen validate against the RNG that the web service gave me, > and it complains about <syl> in <orig> or <reg>. Indeed, the > documentation says, <syl> is only allowed inside <lem>, <rdg>, > <verse>, <syllable> and <note>. Why is this so? Bug or feature? 
> > Thomas _______________________________________________ mei-l mailing list mei-l at lists.uni-paderborn.de https://lists.uni-paderborn.de/mailman/listinfo/mei-l From zupftom at googlemail.com Thu Mar 8 22:03:52 2012 From: zupftom at googlemail.com (TW) Date: Thu, 8 Mar 2012 22:03:52 +0100 Subject: [MEI-L] <syl> inside <choice> In-Reply-To: <BBCC497C40D85642B90E9F94FC30343D01150759@GRANT.eservices.virginia.edu> References: <CAEB1mApswRW92C1u2WjSMX2_soUHme0q4uCKyVo+eBNHyg99Sw@mail.gmail.com> <CAEB1mAqhdRVpuPSpOq8XxKUPHd0HJnapytTAvU867jCuz96nug@mail.gmail.com> <BBCC497C40D85642B90E9F94FC30343D01150759@GRANT.eservices.virginia.edu> Message-ID: <CAEB1mAobR8BeOoaeeXmDcd396kRVyi-TG6_a2fFYCFhHBLEa7Q@mail.gmail.com> 2012/3/8 Roland, Perry (pdr4h) <pdr4h at eservices.virginia.edu>: > > Because <choice> is a member of model.editLike, it is allowed in the content of <syl>. > > <!-- syl content --> > <content> > <rng:zeroOrMore> > <rng:choice> > <rng:text/> > <rng:ref name="model.textphraseLike.limited"/> > <rng:ref name="model.editLike"/> > <rng:ref name="model.transcriptionLike"/> > </rng:choice> > </rng:zeroOrMore> > </content> > > So, this should validate: > > <syl> > <choice> > <orig><!-- English --></orig> > <reg><!-- German --></reg> > </choice> > </syl> > Ah, that's the way to go! Thanks!
Thomas From pdr4h at eservices.virginia.edu Thu Mar 8 22:25:44 2012 From: pdr4h at eservices.virginia.edu (Roland, Perry (pdr4h)) Date: Thu, 8 Mar 2012 21:25:44 +0000 Subject: [MEI-L] <syl> inside <choice> In-Reply-To: <CAEB1mAobR8BeOoaeeXmDcd396kRVyi-TG6_a2fFYCFhHBLEa7Q@mail.gmail.com> References: <CAEB1mApswRW92C1u2WjSMX2_soUHme0q4uCKyVo+eBNHyg99Sw@mail.gmail.com> <CAEB1mAqhdRVpuPSpOq8XxKUPHd0HJnapytTAvU867jCuz96nug@mail.gmail.com> <BBCC497C40D85642B90E9F94FC30343D01150759@GRANT.eservices.virginia.edu>, <CAEB1mAobR8BeOoaeeXmDcd396kRVyi-TG6_a2fFYCFhHBLEa7Q@mail.gmail.com> Message-ID: <BBCC497C40D85642B90E9F94FC30343D0115078C@GRANT.eservices.virginia.edu> Thomas, Don't let my hastily-chosen example mislead you. I wasn't saying that this <syl> <choice> <orig><!-- English --></orig> <reg><!-- German --></reg> </choice> </syl> would always necessarily be the best way to encode multi-lingual texts. This markup would be useful for those cases where an editor wants to offer a German translation of, say, a song with English text. In the case where the source material contains both English and German in alternate verses, the following would be better <note> <verse n="1" xml:lang="eng"> <syl><!-- English --></syl> </verse> <verse n="2" xml:lang="ger"> <syl><!-- German --></syl> </verse> </note> -- p. __________________________ Perry Roland Music Library University of Virginia P. O. Box 400175 Charlottesville, VA 22904 434-982-2702 (w) pdr4h (at) virginia (dot) edu ________________________________________ From: mei-l-bounces at lists.uni-paderborn.de [mei-l-bounces at lists.uni-paderborn.de] on behalf of TW [zupftom at googlemail.com] Sent: Thursday, March 08, 2012 4:03 PM To: Music Encoding Initiative Subject: Re: [MEI-L] <syl> inside <choice> 2012/3/8 Roland, Perry (pdr4h) <pdr4h at eservices.virginia.edu>: > > Because <choice> is a member of model.editLike, it is allowed in the content of <syl>. 
> > <!-- syl content --> > <content> > <rng:zeroOrMore> > <rng:choice> > <rng:text/> > <rng:ref name="model.textphraseLike.limited"/> > <rng:ref name="model.editLike"/> > <rng:ref name="model.transcriptionLike"/> > </rng:choice> > </rng:zeroOrMore> > </content> > > So, this should validate: > > <syl> > <choice> > <orig><!-- English --></orig> > <reg><!-- German --></reg> > </choice> > </syl> > Ah, that's the way to go! Thanks! Thomas _______________________________________________ mei-l mailing list mei-l at lists.uni-paderborn.de https://lists.uni-paderborn.de/mailman/listinfo/mei-l From bohl at edirom.de Fri Mar 9 08:13:37 2012 From: bohl at edirom.de (Benjamin W. Bohl) Date: Fri, 09 Mar 2012 08:13:37 +0100 Subject: [MEI-L] Antw.: trills within beams Message-ID: <0LnDWH-1SZBuK3ueb-00h5d9@mrelayeu.kundenserver.de> Hi LISTeners, hi Perry! What you wrote about the trill question on second thought is exactly what kept nagging at the back of my mind as I followed the discussion over the last few days. As I always understood it, MEI was/is being designed by scientists for scientists, in order to be able to capture music phenomena that one could not capture in "common" notation software, not least because its development was not bound to that of any particular piece of software. Moreover, there was no great ambition to replace, or take the position of, an interchange format between different software. Considering this, it is VERY important to listen to the needs of hand encoders and to keep in mind that straightforward encoding possibilities are features that capture the attention of future editors and users of MEI; of course that's what sophisticated software might accomplish as well. But to this day there is no such thing for MEI-all or any other subset thereof. Of course the needs of software developers are equally important, because we want them to make that sophisticated thing real.
Perry, I greatly appreciate you "stepping back" and looking at what's happening from a greater distance. Your proposal sounds to me like a sound plan. Best wishes, Benjamin ----- Reply message ----- Von: "Roland, Perry (pdr4h)" <pdr4h at eservices.virginia.edu> An: "Music Encoding Initiative" <mei-l at lists.uni-paderborn.de> Betreff: [MEI-L] trills within beams Datum: Do., Mär. 8, 2012 17:06 Hi, everyone, You know, I was almost persuaded by the deprecation argument. But, thinking more about Kristina's original question and Eleanor and Andrew's responses, I think that's not the best way to go. My first response to Kristina was written in a moment of weakness and Eleanor was correct to call me on it. :-) As I've said many times, I sympathize with developers who want "the one, true, correct way". But, that can lead one down a rocky road. For example, you know that place where ships are docked? What's the word for that in English? Is the "one, true, correct" word "harbor" (as we spell it in America) or is it "harbour" (as it is spelled/spelt by our British cousins)? Then there are the Old and Middle English spellings. Of course, all are correct in the appropriate context. Writing a word processor that checks spelling would be a lot easier if there were one "right" answer, but ... MEI is designed for multiple contexts. What is true and correct for one use or user, may not be for another. For example, MEI supports printing but it is not designed exclusively for that purpose. I think the practical result of this philosophical approach is that each use / user must define the context in which it / he operates. This means making choices. Kristina identified a need for a trill attribute in the context of hand encoding (or one-pass encoding, as much as that's possible in MEI), which was seconded by Eleanor and Andrew. MEI already provides attributes that function similarly, so the question, "Why can't I ...?", was legitimate.
Surely, the answer is not to define the question out of existence. When I said before that I could imagine a future in which these attributes didn't exist, I was thinking of the time when sophisticated MEI authoring tools exist. And that time may come someday soon, but it's not here yet. And even if no one authored or edited MEI in oXygen, there would still be the analytical uses to consider -- see below. In spite of my temporary lapse, I still believe the appropriate answer is to accommodate these multiple contexts. Software, of course, then must also be able to deal with multiple contexts, probably by switching between them rather than supporting them all simultaneously. (In the word processor example, one switches contexts between British and American spelling by selecting different dictionaries.) What are the options for software? 1. Silently ignore anything it doesn't understand. 2. Ignore anything it doesn't understand, but alert the user to its "deafness". 3. Refuse to work with anything it doesn't understand. All of these options conform with the notion of "supporting MEI". Anything less, such as deprecating one context or another in MEI itself, isn't acceptable. I think that if we deprecated @tie, @slur, etc. it wouldn't be long before the cry went out to add them back. Or worse yet, individuals would start extending the schema *each in their own way* to get them back because they are useful in both simple input systems (that is, using an XML editor) and in the analytical context where one often needs to tightly couple an entity (in this, a note) and its properties for ease of conceptualization and for efficiency (it takes resources -- time and memory -- to navigate down into the document to find a trill that might be associated with a given note). In order to avoid putting them back later or virtually assuring uncontrolled extension, I believe it's better to allow these attributes (and even add more; that is, @ornam or similar, in this case). 
Doing so steers modifications toward restrictions, which are easier to create, maintain, and enforce. I believe this places MEI in the same philosophical space as TEI. That is, one should not use TEI straight out-of-the-box without making some choices. The mei-all schema (like tei-all) is only the first step in defining any particular use of MEI. So, I want to 1. leave the current crop of attributes in place (no deprecation, but *explanation* of their proper uses), 2. add @ornam with appropriate values, 3. devise methods of converting between attribute- and element-centered markup, 4. create customizations of mei-all that make it easier for users/agents to declare what they're ready to accept and conversely what they will ignore / refuse, for example, a customization that emphasizes attributes and one that emphasizes elements. This plan is not new. With the addition of no. 2, it's what I think we've been working toward for quite some time. We just momentarily stepped off the path. :-) -- p. __________________________ Perry Roland Music Library University of Virginia P. O. Box 400175 Charlottesville, VA 22904 434-982-2702 (w) pdr4h (at) virginia (dot) edu _______________________________________________ mei-l mailing list mei-l at lists.uni-paderborn.de https://lists.uni-paderborn.de/mailman/listinfo/mei-l -------------- nächster Teil -------------- Ein Dateianhang mit HTML-Daten wurde abgetrennt... 
URL: <http://lists.uni-paderborn.de/pipermail/mei-l/attachments/20120309/e0d1db7a/attachment.html> From laurent at music.mcgill.ca Sun Mar 11 12:02:58 2012 From: laurent at music.mcgill.ca (Laurent Pugin) Date: Sun, 11 Mar 2012 11:02:58 +0000 Subject: [MEI-L] trills within beams In-Reply-To: <11002_1331222920_4F58D987_11002_214_2_BBCC497C40D85642B90E9F94FC30343D01150684@GRANT.eservices.virginia.edu> References: <5DE906DC-281C-4DB4-BC46-B3A8A0800E72@gmx.de> <BBCC497C40D85642B90E9F94FC30343D0114FB72@GRANT.eservices.virginia.edu> <24846_1330988626_4F554651_24846_2_1_00f601ccfb24$2d7bbc20$88733460$@stanford.edu> <114F68D8-A9A2-422B-9C85-716CAE715DD6@mail.mcgill.ca> <21712_1331035890_4F55FEF2_21712_135_1_ED80AB9F-6C17-4270-BAE9-E2CE42AE2D24@edirom.de> <E2C7EFD2-0485-48EB-8453-C0C0667E854C@mail.mcgill.ca> <BBCC497C40D85642B90E9F94FC30343D0114FE33@GRANT.eservices.virginia.edu> <C053F977-7433-4ED5-848E-24D2C968558E@edirom.de> <BBCC497C40D85642B90E9F94FC30343D0114FE8C@GRANT.eservices.virginia.edu> <07B81BC7-2C56-4C51-BCA3-FC7A274036C4@edirom.de> <CAMyHAnMVTbvoS=62jVfGSBE=KTR+EPXRj5KUv8HX7HdFoOjS1g@mail.gmail.com> <CAEB1mAr+-Np9_tJLUXaWeyRyEMOuqxH2iDnVwogXG3JnGayMsw@mail.gmail.com> <BBCC497C40D85642B90E9F94FC30343D0114FEB8@GRANT.eservices.virginia.edu> <CAEB1mAqDky2XxPsHt1E3wQg3fc2i3tfeDb+_ttuOt_6MVPCeMw@mail.gmail.com> <22803_1331058782_4F56585E_22803_116_1_BBCC497C40D85642B90E9F94FC30343D0114FEF6@GRANT.eservices.virginia.edu> <CAJ306HZdyhiEY93c5Lmbx1RBsF9X+69zQrnR8dJzt5hzEBHyAg@mail.gmail.com> <14554848-F3C7-43A0-A688-7BF7B2A46A4E@edirom.de> <11002_1331222920_4F58D987_11002_214_2_BBCC497C40D85642B90E9F94FC30343D01150684@GRANT.eservices.virginia.edu> Message-ID: <CAJ306HawBt1U=j4wV0Z4+3gYS=VYFkK_C1kGB63AJ2V2-xanZA@mail.gmail.com> Sorry for my ignorance, but wouldn't schematron be more appropriate for no 4.? 
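For what such a restriction might look like: a small Schematron layer on top of mei-all could flag the attribute shortcuts wherever a profile prefers the element forms. This is only an illustration -- the rule and its wording are hypothetical, not taken from the MEI repository:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Hypothetical Schematron restriction layered over mei-all: reject the
     attribute-centered shortcuts in favor of the element forms. -->
<schema xmlns="http://purl.oclc.org/dsdl/schematron">
  <ns prefix="mei" uri="http://www.music-encoding.org/ns/mei"/>
  <pattern>
    <rule context="mei:note">
      <report test="@tie">This profile encodes ties with the tie element, not @tie.</report>
      <report test="@slur">This profile encodes slurs with the slur element, not @slur.</report>
    </rule>
  </pattern>
</schema>
```

A profile that prefers attributes could, conversely, report the element forms; either way the restriction sits on top of the permissive mei-all schema rather than replacing it.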
Laurent On Thu, Mar 8, 2012 at 4:06 PM, Roland, Perry (pdr4h) < pdr4h at eservices.virginia.edu> wrote: > Hi, everyone, > > You know, I was almost persuaded by the deprecation argument. But, > thinking more about Kristina's original question and Eleanor and Andrew's > responses, I think that's not the best way to go. My first response to > Kristina was written in a moment of weakness and Eleanor was correct to > call me on it. :-) > > As I've said many times, I sympathize with developers who want "the one, > true, correct way". But, that can lead one down a rocky road. For > example, you know that place where ships are docked? What's the word for > that in English? Is the "one, true, correct" word "harbor" (as we spell it > in America) or is it "harbour" (as it is spelled/spelt by our British > cousins)? Then there are the Old and Middle English spellings. Of course, > all are correct in the appropriate context. Writing a word processor that > checks spelling would be a lot easier if there were one "right" answer, but > ... > > MEI is designed for multiple contexts. What is true and correct for one > use or user, may not be for another. For example, MEI supports printing > but it is not designed exclusively for that purpose. I think the practical > result of this philosophical approach is that each use / user must define > the context in which it / he operates. This means making choices. > > Kristina identified a need for a trill attribute in the context of hand > encoding (or one-pass encoding, as much as that's possible in MEI), which > was seconded by Eleanor and Andrew. MEI already provides attributes that > function similarly, so the question, "Why can't I ...?", was legitimate. > Surely, the answer is not to define the question out of existence. > > When I said before that I could imagine a future in which these attributes > didn't exist, I was thinking of the time when sophisticated MEI authoring > tools exist. 
And that time may come someday soon, but it's not here yet. > And even if no one authored or edited MEI in oXygen, there would still be > the analytical uses to consider -- see below. > > In spite of my temporary lapse, I still believe the appropriate answer is > to accommodate these multiple contexts. Software, of course, then must > also be able to deal with multiple contexts, probably by switching between > them rather than supporting them all simultaneously. (In the word > processor example, one switches contexts between British and American > spelling by selecting different dictionaries.) > > What are the options for software? > > 1. Silently ignore anything it doesn't understand. > 2. Ignore anything it doesn't understand, but alert the user to its > "deafness". > 3. Refuse to work with anything it doesn't understand. > > All of these options conform with the notion of "supporting MEI". > > Anything less, such as deprecating one context or another in MEI itself, > isn't acceptable. I think that if we deprecated @tie, @slur, etc. it > wouldn't be long before the cry went out to add them back. Or worse yet, > individuals would start extending the schema *each in their own way* to get > them back because they are useful in both simple input systems (that is, > using an XML editor) and in the analytical context where one often needs to > tightly couple an entity (in this, a note) and its properties for ease of > conceptualization and for efficiency (it takes resources -- time and memory > -- to navigate down into the document to find a trill that might be > associated with a given note). > > In order to avoid putting them back later or virtually assuring > uncontrolled extension, I believe it's better to allow these attributes > (and even add more; that is, @ornam or similar, in this case). Doing so > steers modifications toward restrictions, which are easier to create, > maintain, and enforce. > > I believe this places MEI in the same philosophical space as TEI. 
That > is, one should not use TEI straight out-of-the-box without making some > choices. The mei-all schema (like tei-all) is only the first step in > defining any particular use of MEI. > > So, I want to > > 1. leave the current crop of attributes in place (no deprecation, but > *explanation* of their proper uses), > 2. add @ornam with appropriate values, > 3. devise methods of converting between attribute- and element-centered > markup, > 4. create customizations of mei-all that make it easier for users/agents > to declare what they're ready to accept and conversely what they will > ignore / refuse, for example, a customization that emphasizes attributes > and one that emphasizes elements. > > This plan is not new. With the addition of no. 2, it's what I think we've > been working toward for quite some time. We just momentarily stepped off > the path. :-) > > -- > p. > > __________________________ > Perry Roland > Music Library > University of Virginia > P. O. Box 400175 > Charlottesville, VA 22904 > 434-982-2702 (w) > pdr4h (at) virginia (dot) edu > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l > > -------------- section suivante -------------- Une pièce jointe HTML a été nettoyée... URL: <http://lists.uni-paderborn.de/pipermail/mei-l/attachments/20120311/502e3065/attachment.html> From laurent at music.mcgill.ca Mon Mar 12 09:21:21 2012 From: laurent at music.mcgill.ca (Laurent Pugin) Date: Mon, 12 Mar 2012 08:21:21 +0000 Subject: [MEI-L] Events inside events In-Reply-To: <766_1331130368_4F577000_766_173_1_64179483-CB63-4008-92B4-D8F77A1BB30A@edirom.de> References: <BBCC497C40D85642B90E9F94FC30343D094FDC@WILSON.eservices.virginia.edu> <766_1331130368_4F577000_766_173_1_64179483-CB63-4008-92B4-D8F77A1BB30A@edirom.de> Message-ID: <CAJ306HbTnZp4y=5MB_E8BRyqT0jdjb=m2HPO=AkxEubX1Sku+Q@mail.gmail.com> Hi Perry, I am confused.
In the following example: <choice> <orig> <turn></turn><!-- capture of the turn symbol --> </orig> <reg> <note/><!-- interpretation of how to perform the turn --> <note/> <note/> <note/> </reg> </choice> Where, in the original reading, is the note to which the turn applies given? Am I overlooking something? Sorry about it. Laurent On Wed, Mar 7, 2012 at 2:25 PM, Johannes Kepper <kepper at edirom.de> wrote: > As I mentioned earlier, here is an eMail that Perry wrote last year. We've > never discussed his proposal so far, and his initial intention was to bring > this to MEI-L. I modified only the beginning and end, there's no comment > from me yet (I will "respond" to this in another mail). > > ------- > > > Currently, a few elements (bend, gliss, mordent, trill, turn, note, > ineume, uneume are the most pertinent ones here) permit other events in > their content, e.g., > > <turn> > <note/> > <note/> > <note/> > </turn> > > The original purpose of this was to allow for interpretative data to be > recorded "in-line". This preceded the introduction of the editorial > elements, such as app and choice. Now that we have these elements (app and > choice), I believe allowing event content in these situations is not only > redundant, but confusing.
It immediately raises a question of whether to > encode the interpretative info directly inside, say, the turn element (as > above) or use > > <choice> > <orig> > <turn></turn><!-- capture of the turn symbol --> > </orig> > <reg> > <note/><!-- interpretation of how to perform the turn --> > <note/> > <note/> > <note/> > </reg> > </choice> > > when marking up a single source, or if the turn and its "resolution" exist > in different sources > > <app> > <rdg source="A"> > <turn/> > </rdg> > <rdg source="B"> > <note/> > <note/> > <note/> > <note/> > </rdg> > </app> > > In some cases, that is, in diastemmatic neume notation, such as Solesmes, > this feature is still necessary in order to record the actual, > uninterpreted pitch values of the neumes. It has also already been used to > capture interpreted pitch values in non-diastemmatic neume notation, such > as for Hildegard's works. Although this last use might be worth > reconsidering, we can't disallow it for earlier, unheighted notation if we > allow it for diastemmatic neumes. > > In spite of the fact that removing the feature doesn't completely remove > any possibility of its mis-use, I think it should be removed for elements > in the CMN repertoire (bend, gliss, mordent, trill, turn, note). This will > steer users toward a "proper" encoding using <app> and <choice>. > > This is a significant enough change (much like the camelCasing of element > names) to warrant making it now rather than later. I don't want to make > any change to the source file now, but I think it should be done for the > next release. > > Comments? > > -- > p. > > > -------------- > > Johannes again. What I wanted to add is that this discussion may not aim > at changing the upcoming 2012 release, as the schema for this has already > been fixed by a Council decision. What we can do now is to announce future > changes in the 2012 Guidelines we're currently writing.
I think a > discussion of this thread is desperately needed, but we need to be clear > about the schedule for any changes we might come up with. > > Johannes > > > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l > > -------------- section suivante -------------- Une pièce jointe HTML a été nettoyée... URL: <http://lists.uni-paderborn.de/pipermail/mei-l/attachments/20120312/d163afd5/attachment.html> From pdr4h at eservices.virginia.edu Mon Mar 12 14:57:52 2012 From: pdr4h at eservices.virginia.edu (Roland, Perry (pdr4h)) Date: Mon, 12 Mar 2012 13:57:52 +0000 Subject: [MEI-L] trills within beams In-Reply-To: <CAJ306HawBt1U=j4wV0Z4+3gYS=VYFkK_C1kGB63AJ2V2-xanZA@mail.gmail.com> References: <5DE906DC-281C-4DB4-BC46-B3A8A0800E72@gmx.de> <BBCC497C40D85642B90E9F94FC30343D0114FB72@GRANT.eservices.virginia.edu> <24846_1330988626_4F554651_24846_2_1_00f601ccfb24$2d7bbc20$88733460$@stanford.edu> <114F68D8-A9A2-422B-9C85-716CAE715DD6@mail.mcgill.ca> <21712_1331035890_4F55FEF2_21712_135_1_ED80AB9F-6C17-4270-BAE9-E2CE42AE2D24@edirom.de> <E2C7EFD2-0485-48EB-8453-C0C0667E854C@mail.mcgill.ca> <BBCC497C40D85642B90E9F94FC30343D0114FE33@GRANT.eservices.virginia.edu> <C053F977-7433-4ED5-848E-24D2C968558E@edirom.de> <BBCC497C40D85642B90E9F94FC30343D0114FE8C@GRANT.eservices.virginia.edu> <07B81BC7-2C56-4C51-BCA3-FC7A274036C4@edirom.de> <CAMyHAnMVTbvoS=62jVfGSBE=KTR+EPXRj5KUv8HX7HdFoOjS1g@mail.gmail.com> <CAEB1mAr+-Np9_tJLUXaWeyRyEMOuqxH2iDnVwogXG3JnGayMsw@mail.gmail.com> <BBCC497C40D85642B90E9F94FC30343D0114FEB8@GRANT.eservices.virginia.edu> <CAEB1mAqDky2XxPsHt1E3wQg3fc2i3tfeDb+_ttuOt_6MVPCeMw@mail.gmail.com> <22803_1331058782_4F56585E_22803_116_1_BBCC497C40D85642B90E9F94FC30343D0114FEF6@GRANT.eservices.virginia.edu> <CAJ306HZdyhiEY93c5Lmbx1RBsF9X+69zQrnR8dJzt5hzEBHyAg@mail.gmail.com> <14554848-F3C7-43A0-A688-7BF7B2A46A4E@edirom.de> 
<11002_1331222920_4F58D987_11002_214_2_BBCC497C40D85642B90E9F94FC30343D01150684@GRANT.eservices.virginia.edu>, <CAJ306HawBt1U=j4wV0Z4+3gYS=VYFkK_C1kGB63AJ2V2-xanZA@mail.gmail.com> Message-ID: <BBCC497C40D85642B90E9F94FC30343D011513A3@GRANT.eservices.virginia.edu> Hi, Laurent, I was assuming that, especially with the development of assistive software, a customized schema would be the first choice. But, schematron that places limits on a general schema is a viable option as well. The choice depends on the user and the uses, of course. -- p. __________________________ Perry Roland Music Library University of Virginia P. O. Box 400175 Charlottesville, VA 22904 434-982-2702 (w) pdr4h (at) virginia (dot) edu ________________________________ From: mei-l-bounces+pdr4h=virginia.edu at lists.uni-paderborn.de [mei-l-bounces+pdr4h=virginia.edu at lists.uni-paderborn.de] on behalf of Laurent Pugin [laurent at music.mcgill.ca] Sent: Sunday, March 11, 2012 7:02 AM To: Music Encoding Initiative Subject: Re: [MEI-L] trills within beams Sorry for my ignorance, but wouldn't schematron be more appropriate for no 4.? Laurent On Thu, Mar 8, 2012 at 4:06 PM, Roland, Perry (pdr4h) <pdr4h at eservices.virginia.edu<mailto:pdr4h at eservices.virginia.edu>> wrote: Hi, everyone, You know, I was almost persuaded by the deprecation argument. But, thinking more about Kristina's original question and Eleanor and Andrew's responses, I think that's not the best way to go. My first response to Kristina was written in a moment of weakness and Eleanor was correct to call me on it. :-) As I've said many times, I sympathize with developers who want "the one, true, correct way". But, that can lead one down a rocky road. For example, you know that place where ships are docked? What's the word for that in English? Is the "one, true, correct" word "harbor" (as we spell it in America) or is it "harbour" (as it is spelled/spelt by our British cousins)? Then there are the Old and Middle English spellings. 
Of course, all are correct in the appropriate context. Writing a word processor that checks spelling would be a lot easier if there were one "right" answer, but ... MEI is designed for multiple contexts. What is true and correct for one use or user, may not be for another. For example, MEI supports printing but it is not designed exclusively for that purpose. I think the practical result of this philosophical approach is that each use / user must define the context in which it / he operates. This means making choices. Kristina identified a need for a trill attribute in the context of hand encoding (or one-pass encoding, as much as that's possible in MEI), which was seconded by Eleanor and Andrew. MEI already provides attributes that function similarly, so the question, "Why can't I ...?", was legitimate. Surely, the answer is not to define the question out of existence. When I said before that I could imagine a future in which these attributes didn't exist, I was thinking of the time when sophisticated MEI authoring tools exist. And that time may come someday soon, but it's not here yet. And even if no one authored or edited MEI in oXygen, there would still be the analytical uses to consider -- see below. In spite of my temporary lapse, I still believe the appropriate answer is to accommodate these multiple contexts. Software, of course, then must also be able to deal with multiple contexts, probably by switching between them rather than supporting them all simultaneously. (In the word processor example, one switches contexts between British and American spelling by selecting different dictionaries.) What are the options for software? 1. Silently ignore anything it doesn't understand. 2. Ignore anything it doesn't understand, but alert the user to its "deafness". 3. Refuse to work with anything it doesn't understand. All of these options conform with the notion of "supporting MEI". 
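[The three policies Perry lists can be sketched in a few lines. This is a hypothetical Python illustration; the attribute whitelist and the @ornam test value are made up for the example, not taken from any actual MEI schema.]

```python
import xml.etree.ElementTree as ET

# Illustrative whitelist -- not drawn from the real MEI schema.
KNOWN_ATTRS = {"pname", "oct", "dur"}

def check_note(elem, policy):
    """Handle attributes the software doesn't understand, per policy."""
    unknown = set(elem.attrib) - KNOWN_ATTRS
    if not unknown or policy == "ignore":   # 1. silently ignore
        return True
    if policy == "warn":                    # 2. alert the user, carry on
        print("ignoring unknown attributes:", sorted(unknown))
        return True
    raise ValueError(                       # 3. refuse to work with it
        "unknown attributes: %s" % sorted(unknown))

note = ET.fromstring('<note pname="c" oct="4" dur="4" ornam="turn"/>')
check_note(note, policy="warn")  # warns about @ornam but keeps going
```

[All three behaviors operate on the same document; only the chosen policy -- the "dictionary" in the word-processor analogy -- differs.]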
Anything less, such as deprecating one context or another in MEI itself, isn't acceptable. I think that if we deprecated @tie, @slur, etc. it wouldn't be long before the cry went out to add them back. Or worse yet, individuals would start extending the schema *each in their own way* to get them back because they are useful in both simple input systems (that is, using an XML editor) and in the analytical context where one often needs to tightly couple an entity (in this, a note) and its properties for ease of conceptualization and for efficiency (it takes resources -- time and memory -- to navigate down into the document to find a trill that might be associated with a given note). In order to avoid putting them back later or virtually assuring uncontrolled extension, I believe it's better to allow these attributes (and even add more; that is, @ornam or similar, in this case). Doing so steers modifications toward restrictions, which are easier to create, maintain, and enforce. I believe this places MEI in the same philosophical space as TEI. That is, one should not use TEI straight out-of-the-box without making some choices. The mei-all schema (like tei-all) is only the first step in defining any particular use of MEI. So, I want to 1. leave the current crop of attributes in place (no deprecation, but *explanation* of their proper uses), 2. add @ornam with appropriate values, 3. devise methods of converting between attribute- and element-centered markup, 4. create customizations of mei-all that make it easier for users/agents to declare what they're ready to accept and conversely what they will ignore / refuse, for example, a customization that emphasizes attributes and one that emphasizes elements. This plan is not new. With the addition of no. 2, it's what I think we've been working toward for quite some time. We just momentarily stepped off the path. :-) -- p. __________________________ Perry Roland Music Library University of Virginia P. O. 
Box 400175 Charlottesville, VA 22904 434-982-2702<tel:434-982-2702> (w) pdr4h (at) virginia (dot) edu _______________________________________________ mei-l mailing list mei-l at lists.uni-paderborn.de<mailto:mei-l at lists.uni-paderborn.de> https://lists.uni-paderborn.de/mailman/listinfo/mei-l -------------- next part -------------- An HTML attachment was scrubbed... URL: <http://lists.uni-paderborn.de/pipermail/mei-l/attachments/20120312/8ec75aa9/attachment.html> From pdr4h at eservices.virginia.edu Mon Mar 12 15:37:13 2012 From: pdr4h at eservices.virginia.edu (Roland, Perry (pdr4h)) Date: Mon, 12 Mar 2012 14:37:13 +0000 Subject: [MEI-L] Events inside events In-Reply-To: <CAJ306HbTnZp4y=5MB_E8BRyqT0jdjb=m2HPO=AkxEubX1Sku+Q@mail.gmail.com> References: <BBCC497C40D85642B90E9F94FC30343D094FDC@WILSON.eservices.virginia.edu> <766_1331130368_4F577000_766_173_1_64179483-CB63-4008-92B4-D8F77A1BB30A@edirom.de>, <CAJ306HbTnZp4y=5MB_E8BRyqT0jdjb=m2HPO=AkxEubX1Sku+Q@mail.gmail.com> Message-ID: <BBCC497C40D85642B90E9F94FC30343D011513B3@GRANT.eservices.virginia.edu> Laurent, My example wasn't very clear, was it? Sorry about that. I was assuming that the encoding of the <note> to which the turn applies was previously encoded in the appropriate measure/staff/layer, like so: <measure n="1"> <staff n="1"> <layer> <note pname="c" oct="4" dur="4"/> <note pname="d"/> <note pname="e"/> <note pname="f"/> </layer> </staff> <!-- control events, such as turns, here --> <choice> <orig> <turn tstamp="2"/><!-- capture of the turn symbol --> </orig> <reg> <note/><!-- interpretation of how to perform the turn --> <note/> <note/> <note/> </reg> </choice> </measure> But, you know, now that I see it written out, I don't like it; that is, for the control events -- bend, gliss, mordent, trill, and turn. 
The following still feels more intuitively correct: <measure n="1"> <staff n="1"> <layer> <note pname="c" oct="4" dur="4"/> <note pname="d"/> <note pname="e"/> <note pname="f"/> </layer> </staff> <turn tstamp="2"> <!-- interpretation of how to perform the turn --> <note/> <note/> <note/> <note/> </turn> </measure> because the turn and its interpretation are more closely linked together. If there were to be more than one interpretation of how the turn should be performed, <choice> (or <app>, maybe <ossia> ?) could be allowed *within <turn>*: <turn tstamp="2"> <!-- interpretations of how to perform the turn --> <choice> <reg> <note/> <note/> <note/> <note/> </reg> <reg> <note/> <note/> <note/> <note/> <note/> <note/> </reg> </choice> </turn> However, I still think the following is an abomination: <note> <note/> <note/> </note> It seems like I'm going in circles, (at least partially) talking myself out of my own suggestion. Everyone, please feel free to jump into the fray at any time. :-) -- p. __________________________ Perry Roland Music Library University of Virginia P. O. Box 400175 Charlottesville, VA 22904 434-982-2702 (w) pdr4h (at) virginia (dot) edu From: mei-l-bounces at lists.uni-paderborn.de [mei-l-bounces at lists.uni-paderborn.de] on behalf of Laurent Pugin [laurent at music.mcgill.ca] Sent: Monday, March 12, 2012 4:21 AM To: Music Encoding Initiative Subject: Re: [MEI-L] Events inside events Hi Perry, I am confused. In the following example: <choice> <orig> <turn></turn><!-- capture of the turn symbol --> </orig> <reg> <note/><!-- interpretation of how to perform the turn --> <note/> <note/> <note/> </reg> </choice> Where is the note to which the turn applies given for the original? Am I overlooking something? Sorry about it. Laurent On Wed, Mar 7, 2012 at 2:25 PM, Johannes Kepper <kepper at edirom.de> wrote: As I mentioned earlier, here is an eMail that Perry wrote last year.
We've never discussed his proposal so far, and his initial intention was to bring this to MEI-L. I modified only the beginning and end, there's no comment from me yet (I will "respond" to this in another mail). ------- Currently, a few elements (bend, gliss, mordent, trill, turn, note, ineume, uneume are the most pertinent ones here) permit other events in their content, e.g., <turn> <note/> <note/> <note/> </turn> The original purpose of this was to allow for interpretative data to be recorded "in-line". This preceded the introduction of the editorial elements, such as app and choice. Now that we have these elements (app and choice), I believe allowing event content in these situations is not only redundant, but confusing. It immediately raises a question of whether to encode the interpretative info directly inside, say, the turn element (as above) or use <choice> <orig> <turn></turn><!-- capture of the turn symbol --> </orig> <reg> <note/><!-- interpretation of how to perform the turn --> <note/> <note/> <note/> </reg> </choice> when marking up a single source, or if the turn and its "resolution" exist in different sources <app> <rdg source="A"> <turn/> </rdg> <rdg source="B"> <note/> <note/> <note/> <note/> </rdg> </app> In some cases; that is, in diastemmatic neume notation, such as Solesmes, this feature is still necessary in order to record the actual, uninterpreted pitch values of the neumes. It has also already been used to capture interpreted pitch values in non-diastemmatic neume notation, such as for Hildegard's works. Although this last use might be worth reconsidering, we can't disallow it for earlier, unheighted notation if we allow it for diastemmatic neumes.
This is a significant enough change (much like the camelCasing of element names) to warrant making it now rather than later. I don't want to make any change to the source file now, but I think it should be done for the next release. Comments? -- p. -------------- Johannes again. What I wanted to add is that this discussion may not aim at changing the upcoming 2012 release, as the schema for this has already been fixed by a Council decision. What we can do now is to announce future changes in the 2012 Guidelines we're currently writing. I think a discussion of this thread is desperately needed, but we need to be clear about the schedule for any changes we might come up with. Johannes _______________________________________________ mei-l mailing list mei-l at lists.uni-paderborn.de https://lists.uni-paderborn.de/mailman/listinfo/mei-l From laurent at music.mcgill.ca Mon Mar 12 21:58:28 2012 From: laurent at music.mcgill.ca (Laurent Pugin) Date: Mon, 12 Mar 2012 20:58:28 +0000 Subject: [MEI-L] Events inside events In-Reply-To: <23357_1331563047_4F5E0A27_23357_361_21_BBCC497C40D85642B90E9F94FC30343D011513B3@GRANT.eservices.virginia.edu> References: <BBCC497C40D85642B90E9F94FC30343D094FDC@WILSON.eservices.virginia.edu> <766_1331130368_4F577000_766_173_1_64179483-CB63-4008-92B4-D8F77A1BB30A@edirom.de> <CAJ306HbTnZp4y=5MB_E8BRyqT0jdjb=m2HPO=AkxEubX1Sku+Q@mail.gmail.com> <23357_1331563047_4F5E0A27_23357_361_21_BBCC497C40D85642B90E9F94FC30343D011513B3@GRANT.eservices.virginia.edu> Message-ID: <CAJ306HYD0s4xSvjXTmQKyA0BjQAq8UX_Q6-HmiAbssEuqL4JLg@mail.gmail.com> Thanks for the clarification. I had forgotten that turn would have a @timestamp. I agree that the second option is more intuitive and I also like it better. If we still want to allow the first one, wouldn't the first <note> within <reg> also have to have a @timestamp? One more reason to prefer the second option, I guess. 
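[For what it's worth, the extra linking work that the @tstamp-based standoff encoding implies can be sketched as two passes over the measure. This is a hypothetical Python sketch assuming 4/4, a single layer, and a quarter note advancing one beat; real MEI timestamp resolution across staves, layers, and meters is considerably more involved.]

```python
import xml.etree.ElementTree as ET

measure = ET.fromstring(
    '<measure n="1">'
    ' <staff n="1"><layer>'
    '  <note pname="c" oct="4" dur="4"/><note pname="d" dur="4"/>'
    '  <note pname="e" dur="4"/><note pname="f" dur="4"/>'
    ' </layer></staff>'
    ' <turn tstamp="2"/>'
    '</measure>')

# Pass 1: walk the events and give every note a beat position.
beat_map = {}
beat = 1.0
for note in measure.iter("note"):
    beat_map[beat] = note
    beat += 4.0 / float(note.get("dur"))  # quarter note = one beat in 4/4

# Pass 2: resolve each control event's @tstamp against the beat map.
for turn in measure.iter("turn"):
    target = beat_map[float(turn.get("tstamp"))]
    print("turn at tstamp %s applies to pname %s"
          % (turn.get("tstamp"), target.get("pname")))
```

[With the inline alternative, pass 2 disappears, which is presumably why the element-content encoding appeals to implementers.]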
Now I don't want to perpetuate the circular discussion, but do we want to always use the @timestamp solution (i.e., that requires double-pass decoding) even for cases where <turn> (or other) applies to one single note? That is, couldn't the following solution be acceptable? <note> <turn> <!-- interpretation of how to perform the turn --> <note/> <note/> <note/> <note/> </turn> </note> I am sure you already answered it before, sorry about it. Or is this what you mean by abomination? Laurent On Mon, Mar 12, 2012 at 2:37 PM, Roland, Perry (pdr4h) < pdr4h at eservices.virginia.edu> wrote: > Laurent, > > My example wasn't very clear, was it? Sorry about that. > > I was assuming that the encoding of the <note> to which the turn applies > was previously encoded in the appropriate measure/staff/layer, like so: > > <measure n="1"> > <staff n="1"> > <layer> > <note pname="c" oct="4" dur="4"/> > <note pname="d"/> > <note pname="e"/> > <note pname="f"/> > </layer> > </staff> > <!-- control events, such as turns, here --> > <choice> > <orig> > <turn tstamp="2"/><!-- capture of the turn symbol --> > </orig> > <reg> > <note/><!-- interpretation of how to perform the turn --> > <note/> > <note/> > <note/> > </reg> > </choice> > </measure> > > But, you know, now that I see it written out, I don't like it; that is, > for the control events -- bend, gliss, mordent, trill, and turn. > > The following still feels more intuitively correct: > > <measure n="1"> > <staff n="1"> > <layer> > <note pname="c" oct="4" dur="4"/> > <note pname="d"/> > <note pname="e"/> > <note pname="f"/> > </layer> > </staff> > <turn tstamp="2"> > <!-- interpretation of how to perform the turn --> > <note/> > <note/> > <note/> > <note/> > </turn> > </measure> > > because the turn and its interpretation are more closely linked together. > > If there were to be more than one interpretation of how the turn should be > performed, <choice> (or <app>, maybe <ossia> ?)
could be allowed *within > <turn>*: > > <turn tstamp="2"> > <!-- interpretations of how to perform the turn --> > <choice> > <reg> > <note/> > <note/> > <note/> > <note/> > </reg> > <reg> > <note/> > <note/> > <note/> > <note/> > <note/> > <note/> > </reg> > </turn> > > However, I still think the following is an abomination: > > <note> > <note/> > <note/> > </note> > > It seems like I'm going in circles, (at least partially) talking myself > out of my own suggestion. Everyone, please feel free to jump into the fray > at any time. :-) > > -- > p. > > __________________________ > Perry Roland > Music Library > University of Virginia > P. O. Box 400175 > Charlottesville, VA 22904 > 434-982-2702 (w) > pdr4h (at) virginia (dot) edu > > > > From: mei-l-bounces at lists.uni-paderborn.de [ > mei-l-bounces at lists.uni-paderborn.de] on behalf of Laurent Pugin [ > laurent at music.mcgill.ca] > Sent: Monday, March 12, 2012 4:21 AM > To: Music Encoding Initiative > Subject: Re: [MEI-L] Events inside events > > > Hi Perry, > > > I am confused. In the following example: > > > <choice> > <orig> > <turn></turn><!-- capture of the turn symbol --> > </orig> > <reg> > <note/><!-- interpretation of how to perform the turn --> > <note/> > <note/> > <note/> > </reg> > </choice> > > > Where is given for the original the note to which the turn applies. Am I > overlooking something? Sorry about it. > > > Laurent > > > On Wed, Mar 7, 2012 at 2:25 PM, Johannes Kepper <kepper at edirom.de> wrote: > > As I mentioned earlier, here is an eMail that Perry wrote last year. We've > never discussed his proposal so far, and his initial intention was to bring > this to MEI-L. I modified only the beginning and end, there's no comment > from me yet (I will "respond" to this in another mail). 
> > ------- > > > Currently, a few elements (bend, gliss, mordent, trill, turn, note, > ineume, uneume are the most pertinent ones here) permit other events in > their content, e.g., > > <turn> > <note/> > <note/> > <note/> > </turn> > > The original purpose of this was to allow for interpretative data to be > record "in-line". This preceded the introduction of the editorial > elements, such as app and choice. Now that we have these elements (app and > choice), I believe allowing event content in these situations is not only > redundant, but confusing. It immediately raises a question of whether to > encode the interpretative info directly inside, say, the turn element (as > above) or use > > <choice> > <orig> > <turn></turn><!-- capture of the turn symbol --> > </orig> > <reg> > <note/><!-- interpretation of how to perform the turn --> > <note/> > <note/> > <note/> > </reg> > </choice> > > when marking up a single source, or if the turn and its "resolution" exist > in different sources > > <app> > <rdg source="A"> > <turn/> > </rdg> > <rdg souce="B"> > <note/> > <note/> > <note/> > <note/> > </rdg> > </app> > > In some cases; that is, in diastemmatic neume notation, such as Solesmes, > this feature is still necessary in order to record the actual, > uninterpreted pitch values of the neumes. It has also already been used to > capture interpreted pitch values in non-diastemmatic neume notation, such > as for Hildegard's works. Although this last use might be worth to > reconsider, we can't disallow it for earlier, unheighted notation if we > allow it for diastemmatic neumes. > > In spite of the fact that removing the feature doesn't completely remove > any possibility of its mis-use, I think it should be removed for elements > in the CMN repertoire (bend, gliss, mordent, trill, turn, note). This will > steer users toward a "proper" encoding using <app> and <choice>. 
> > This is a significant enough change (much like the camelCasing of element > names) to warrant making it now rather than later. I don't want to make > any change to the source file now, but I think it should be done for the > next release. > > Comments? > > -- > p. > > > -------------- > > Johannes again. What I wanted to add is that this discussion may not aim > at changing the upcoming 2012 release, as the schema for this has already > been fixed by a Council decision. What we can do now is to announce future > changes in the 2012 Guidelines we're currently writing. I think a > discussion of this thread is desperately needed, but we need to be clear > about the schedule for any changes we might come up with. > > Johannes > > > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l > > -------------- next part -------------- An HTML attachment was scrubbed...
URL: <http://lists.uni-paderborn.de/pipermail/mei-l/attachments/20120312/561baa18/attachment.html> From pdr4h at eservices.virginia.edu Tue Mar 13 15:02:52 2012 From: pdr4h at eservices.virginia.edu (Roland, Perry (pdr4h)) Date: Tue, 13 Mar 2012 14:02:52 +0000 Subject: [MEI-L] Events inside events In-Reply-To: <CAJ306HYD0s4xSvjXTmQKyA0BjQAq8UX_Q6-HmiAbssEuqL4JLg@mail.gmail.com> References: <BBCC497C40D85642B90E9F94FC30343D094FDC@WILSON.eservices.virginia.edu> <766_1331130368_4F577000_766_173_1_64179483-CB63-4008-92B4-D8F77A1BB30A@edirom.de> <CAJ306HbTnZp4y=5MB_E8BRyqT0jdjb=m2HPO=AkxEubX1Sku+Q@mail.gmail.com> <23357_1331563047_4F5E0A27_23357_361_21_BBCC497C40D85642B90E9F94FC30343D011513B3@GRANT.eservices.virginia.edu>, <CAJ306HYD0s4xSvjXTmQKyA0BjQAq8UX_Q6-HmiAbssEuqL4JLg@mail.gmail.com> Message-ID: <BBCC497C40D85642B90E9F94FC30343D011514E3@GRANT.eservices.virginia.edu> Laurent, Yep, the first "option" (which really isn't an option now) has several problems. Allowing <turn>, <trill>, etc. to occur inside <note> (and other things) *and* following all events, not only means each of these elements could occur in multiple places, but more importantly that they will have slightly different semantics depending on the context of their occurrence. Maybe this isn't so bad for turns because they always apply to single notes, but 2 different ways of encoding trills ("single note" vs. "wavy line" or "instantaneous" vs. "continuing") seems to me to be asking for trouble. Or so I'm always told by my developer friends. :-) My other objection to allowing <turn> inside <note> is that it opens the possibility that eventually someone will suggest we have a <turn type="start"> and a <turn type="end"> (or some such construction). Milestones such as these are what the stand-off markup is intended to avoid.
In addition, since a turn and its friends can be attached to chords, maybe rests, and God-knows-what-else, in order to remain consistent, all these other things would have to allow <turn> and such as well. So, this is one place where I believe consistency is our friend, even if it's a not-so-beautiful, slightly difficult to deal with friend. -- p. __________________________ Perry Roland Music Library University of Virginia P. O. Box 400175 Charlottesville, VA 22904 434-982-2702 (w) pdr4h (at) virginia (dot) edu ________________________________ From: mei-l-bounces at lists.uni-paderborn.de [mei-l-bounces at lists.uni-paderborn.de] on behalf of Laurent Pugin [laurent at music.mcgill.ca] Sent: Monday, March 12, 2012 4:58 PM To: Music Encoding Initiative Subject: Re: [MEI-L] Events inside events Thanks for the clarification. I had forgotten that turn would have a @timestamp. I agree that the second option is more intuitive and I also like it better. If we still want to allow the first one, wouldn't the first <note> within <reg> also have to have a @timestamp? One more reason to prefer the second option, I guess. Now I don't want to perpetuate the circular discussion, but do we want to always use the @timestamp solution (i.e., that requires double pass decoding) even for cases where <turn> (or other) applies to one single note? That is, couldn't the following solution be acceptable? <note> <turn> <!-- interpretation of how to perform the turn --> <note/> <note/> <note/> <note/> </turn> <note> I am sure you already answered it before, sorry about it. Or is this what you mean by abomination? Laurent On Mon, Mar 12, 2012 at 2:37 PM, Roland, Perry (pdr4h) <pdr4h at eservices.virginia.edu<mailto:pdr4h at eservices.virginia.edu>> wrote: Laurent, My example wasn't very clear, was it? Sorry about that. 
I was assuming that the encoding of the <note> to which the turn applies was previously encoded in the appropriate measure/staff/layer, like so: <measure n="1"> <staff n="1"> <layer> <note pname="c" oct="4" dur="4"/> <note pname="d"/> <note pname="e"/> <note pname="f"/> </layer> </staff> <!-- control events, such as turns, here --> <choice> <orig> <turn tstamp="2"/><!-- capture of the turn symbol --> </orig> <reg> <note/><!-- interpretation of how to perform the turn --> <note/> <note/> <note/> </reg> </choice> </measure> But, you know, now that I see it written out, I don't like it; that is, for the control events -- bend, gliss, mordent, trill, and turn. The following still feels more intuitively correct: <measure n="1"> <staff n="1"> <layer> <note pname="c" oct="4" dur="4"/> <note pname="d"/> <note pname="e"/> <note pname="f"/> </layer> </staff> <turn tstamp="2"> <!-- interpretation of how to perform the turn --> <note/> <note/> <note/> <note/> </turn> </measure> because the turn and its interpretation are more closely linked together. If there were to be more than one interpretation of how the turn should be performed, <choice> (or <app>, maybe <ossia> ?) could be allowed *within <turn>*: <turn tstamp="2"> <!-- interpretations of how to perform the turn --> <choice> <reg> <note/> <note/> <note/> <note/> </reg> <reg> <note/> <note/> <note/> <note/> <note/> <note/> </reg> </turn> However, I still think the following is an abomination: <note> <note/> <note/> </note> It seems like I'm going in circles, (at least partially) talking myself out of my own suggestion. Everyone, please feel free to jump into the fray at any time. :-) -- p. __________________________ Perry Roland Music Library University of Virginia P. O. 
Box 400175 Charlottesville, VA 22904 434-982-2702<tel:434-982-2702> (w) pdr4h (at) virginia (dot) edu From: mei-l-bounces at lists.uni-paderborn.de<mailto:mei-l-bounces at lists.uni-paderborn.de> [mei-l-bounces at lists.uni-paderborn.de<mailto:mei-l-bounces at lists.uni-paderborn.de>] on behalf of Laurent Pugin [laurent at music.mcgill.ca<mailto:laurent at music.mcgill.ca>] Sent: Monday, March 12, 2012 4:21 AM To: Music Encoding Initiative Subject: Re: [MEI-L] Events inside events Hi Perry, I am confused. In the following example: <choice> <orig> <turn></turn><!-- capture of the turn symbol --> </orig> <reg> <note/><!-- interpretation of how to perform the turn --> <note/> <note/> <note/> </reg> </choice> Where is given for the original the note to which the turn applies. Am I overlooking something? Sorry about it. Laurent On Wed, Mar 7, 2012 at 2:25 PM, Johannes Kepper <kepper at edirom.de<mailto:kepper at edirom.de>> wrote: As I mentioned earlier, here is an eMail that Perry wrote last year. We've never discussed his proposal so far, and his initial intention was to bring this to MEI-L. I modified only the beginning and end, there's no comment from me yet (I will "respond" to this in another mail). ------- Currently, a few elements (bend, gliss, mordent, trill, turn, note, ineume, uneume are the most pertinent ones here) permit other events in their content, e.g., <turn> <note/> <note/> <note/> </turn> The original purpose of this was to allow for interpretative data to be record "in-line". This preceded the introduction of the editorial elements, such as app and choice. Now that we have these elements (app and choice), I believe allowing event content in these situations is not only redundant, but confusing. 
It immediately raises a question of whether to encode the interpretative info directly inside, say, the turn element (as above) or use <choice> <orig> <turn></turn><!-- capture of the turn symbol --> </orig> <reg> <note/><!-- interpretation of how to perform the turn --> <note/> <note/> <note/> </reg> </choice> when marking up a single source, or if the turn and its "resolution" exist in different sources <app> <rdg source="A"> <turn/> </rdg> <rdg souce="B"> <note/> <note/> <note/> <note/> </rdg> </app> In some cases; that is, in diastemmatic neume notation, such as Solesmes, this feature is still necessary in order to record the actual, uninterpreted pitch values of the neumes. It has also already been used to capture interpreted pitch values in non-diastemmatic neume notation, such as for Hildegard's works. Although this last use might be worth to reconsider, we can't disallow it for earlier, unheighted notation if we allow it for diastemmatic neumes. In spite of the fact that removing the feature doesn't completely remove any possibility of its mis-use, I think it should be removed for elements in the CMN repertoire (bend, gliss, mordent, trill, turn, note). This will steer users toward a "proper" encoding using <app> and <choice>. This is a significant enough change (much like the camelCasing of element names) to warrant making it now rather than later. I don't want to make any change to the source file now, but I think it should be done for the next release. Comments? -- p. -------------- Johannes again. What I wanted to add is that this discussion may not aim at changing the upcoming 2012 release, as the schema for this has already been fixed by a Council decision. What we can do now is to announce future changes in the 2012 Guidelines we're currently writing. I think a discussion of this thread is desperately needed, but we need to be clear about the schedule for any changes we might come up with. 
Johannes _______________________________________________ mei-l mailing list mei-l at lists.uni-paderborn.de<mailto:mei-l at lists.uni-paderborn.de> https://lists.uni-paderborn.de/mailman/listinfo/mei-l _______________________________________________ mei-l mailing list mei-l at lists.uni-paderborn.de<mailto:mei-l at lists.uni-paderborn.de> https://lists.uni-paderborn.de/mailman/listinfo/mei-l -------------- next part -------------- An HTML attachment was scrubbed... URL: <http://lists.uni-paderborn.de/pipermail/mei-l/attachments/20120313/33389d0f/attachment.html> From laurent at music.mcgill.ca Thu Mar 15 12:10:18 2012 From: laurent at music.mcgill.ca (Laurent Pugin) Date: Thu, 15 Mar 2012 12:10:18 +0100 Subject: [MEI-L] Events inside events In-Reply-To: <25211_1331647386_4F5F539A_25211_72_2_BBCC497C40D85642B90E9F94FC30343D011514E3@GRANT.eservices.virginia.edu> References: <BBCC497C40D85642B90E9F94FC30343D094FDC@WILSON.eservices.virginia.edu> <766_1331130368_4F577000_766_173_1_64179483-CB63-4008-92B4-D8F77A1BB30A@edirom.de> <CAJ306HbTnZp4y=5MB_E8BRyqT0jdjb=m2HPO=AkxEubX1Sku+Q@mail.gmail.com> <23357_1331563047_4F5E0A27_23357_361_21_BBCC497C40D85642B90E9F94FC30343D011513B3@GRANT.eservices.virginia.edu> <CAJ306HYD0s4xSvjXTmQKyA0BjQAq8UX_Q6-HmiAbssEuqL4JLg@mail.gmail.com> <25211_1331647386_4F5F539A_25211_72_2_BBCC497C40D85642B90E9F94FC30343D011514E3@GRANT.eservices.virginia.edu> Message-ID: <CAJ306HbtuAJ_X3P3FqV9dRGTXxcUs+8T0Fjc7F-s8pSHq66-jA@mail.gmail.com> On Tue, Mar 13, 2012 at 3:02 PM, Roland, Perry (pdr4h) < pdr4h at eservices.virginia.edu> wrote: > Laurent, > > > > Yep, the first "option" (which really isn't an option now) has several > problems. > > > > Allowing <turn>, <trill>, etc. to occur inside <note> (and other things) > *and* following all events, not only means each of these elements could > occur in multiple places, but more importantly that they will have slightly > different semantics depending on the context of their occurrence. 
> > > > Maybe this isn't so bad for turns because they always apply to single > notes, but 2 different ways of encoding trills ("single note" vs. "wavy > line" or "instaneous" vs. "continuing") seems to me to be asking for > trouble. Or so I'm always told by my developer friends. :-) > I see your point. I know developers who will say that not having to deal with timestamps is easier, so they would be ready in that case to deal with 2 different ways of encoding. But these are lazy developers... > > My other objection to allowing <turn> inside <note> is that it opens the > possibility that eventually someone will suggest we have a <turn > type="start"> and a <turn type="end"> (or some such construction). > Milestones such as these are what the stand-off markup is intended to > avoid. In addition, since a turn and its friends can be attached to > chords, maybe rests, and God-knows-what-else, in order to remain > consistent, all these other things would have to allow <turn> and such as > well. > > > > So, this is one place where I believe consistency is our friend, even if > it's a not-so-beautiful, slightly difficult to deal with friend. > > > > -- > > p. > > > __________________________ > Perry Roland > Music Library > University of Virginia > P. O. Box 400175 > Charlottesville, VA 22904 > 434-982-2702 (w) > pdr4h (at) virginia (dot) edu > ------------------------------ > *From:* mei-l-bounces at lists.uni-paderborn.de [ > mei-l-bounces at lists.uni-paderborn.de] on behalf of Laurent Pugin [ > laurent at music.mcgill.ca] > *Sent:* Monday, March 12, 2012 4:58 PM > > *To:* Music Encoding Initiative > *Subject:* Re: [MEI-L] Events inside events > > Thanks for the clarification. I had forgotten that turn would have a > @timestamp. I agree that the second option is more intuitive and I also > like it better. If we still want to allow the first one, wouldn't the first > <note> within <reg> also have to have a @timestamp? 
One more reason to > prefer the second option, I guess. > > Now I don't want to perpetuate the circular discussion, but do we want > to always use the @timestamp solution (i.e., that requires double pass > decoding) even for cases where <turn> (or other) applies to one single > note? That is, couldn't the following solution be acceptable? > > <note> > <turn> > <!-- interpretation of how to perform the turn --> > <note/> > <note/> > <note/> > <note/> > </turn> > <note> > > I am sure you already answered it before, sorry about it. Or is this > what you mean by abomination? > > Laurent > > On Mon, Mar 12, 2012 at 2:37 PM, Roland, Perry (pdr4h) < > pdr4h at eservices.virginia.edu> wrote: > >> Laurent, >> >> My example wasn't very clear, was it? Sorry about that. >> >> I was assuming that the encoding of the <note> to which the turn applies >> was previously encoded in the appropriate measure/staff/layer, like so: >> >> <measure n="1"> >> <staff n="1"> >> <layer> >> <note pname="c" oct="4" dur="4"/> >> <note pname="d"/> >> <note pname="e"/> >> <note pname="f"/> >> </layer> >> </staff> >> <!-- control events, such as turns, here --> >> <choice> >> <orig> >> <turn tstamp="2"/><!-- capture of the turn symbol --> >> </orig> >> <reg> >> <note/><!-- interpretation of how to perform the turn --> >> <note/> >> <note/> >> <note/> >> </reg> >> </choice> >> </measure> >> >> But, you know, now that I see it written out, I don't like it; that is, >> for the control events -- bend, gliss, mordent, trill, and turn. >> >> The following still feels more intuitively correct: >> >> <measure n="1"> >> <staff n="1"> >> <layer> >> <note pname="c" oct="4" dur="4"/> >> <note pname="d"/> >> <note pname="e"/> >> <note pname="f"/> >> </layer> >> </staff> >> <turn tstamp="2"> >> <!-- interpretation of how to perform the turn --> >> <note/> >> <note/> >> <note/> >> <note/> >> </turn> >> </measure> >> >> because the turn and its interpretation are more closely linked together. 
>> >> If there were to be more than one interpretation of how the turn should >> be performed, <choice> (or <app>, maybe <ossia> ?) could be allowed *within >> <turn>*: >> >> <turn tstamp="2"> >> <!-- interpretations of how to perform the turn --> >> <choice> >> <reg> >> <note/> >> <note/> >> <note/> >> <note/> >> </reg> >> <reg> >> <note/> >> <note/> >> <note/> >> <note/> >> <note/> >> <note/> >> </reg> >> </choice> >> </turn> >> >> However, I still think the following is an abomination: >> >> <note> >> <note/> >> <note/> >> </note> >> >> It seems like I'm going in circles, (at least partially) talking myself >> out of my own suggestion. Everyone, please feel free to jump into the fray >> at any time. :-) >> >> -- >> p. >> >> __________________________ >> Perry Roland >> Music Library >> University of Virginia >> P. O. Box 400175 >> Charlottesville, VA 22904 >> 434-982-2702 (w) >> pdr4h (at) virginia (dot) edu >> >> >> >> From: mei-l-bounces at lists.uni-paderborn.de [ >> mei-l-bounces at lists.uni-paderborn.de] on behalf of Laurent Pugin [ >> laurent at music.mcgill.ca] >> Sent: Monday, March 12, 2012 4:21 AM >> To: Music Encoding Initiative >> Subject: Re: [MEI-L] Events inside events >> >> >> Hi Perry, >> >> >> I am confused. In the following example: >> >> >> <choice> >> <orig> >> <turn></turn><!-- capture of the turn symbol --> >> </orig> >> <reg> >> <note/><!-- interpretation of how to perform the turn --> >> <note/> >> <note/> >> <note/> >> </reg> >> </choice> >> >> >> Where is the note to which the turn applies given for the original? Am I >> overlooking something? Sorry about it. >> >> >> Laurent >> >> >> On Wed, Mar 7, 2012 at 2:25 PM, Johannes Kepper <kepper at edirom.de> wrote: >> >> As I mentioned earlier, here is an eMail that Perry wrote last year. >> We've never discussed his proposal so far, and his initial intention was to >> bring this to MEI-L.
I modified only the beginning and end; there's no >> comment from me yet (I will "respond" to this in another mail). >> >> ------- >> >> >> Currently, a few elements (bend, gliss, mordent, trill, turn, note, >> ineume, and uneume are the most pertinent ones here) permit other events in >> their content, e.g., >> >> <turn> >> <note/> >> <note/> >> <note/> >> </turn> >> >> The original purpose of this was to allow for interpretative data to be >> recorded "in-line". This preceded the introduction of the editorial >> elements, such as app and choice. Now that we have these elements (app and >> choice), I believe allowing event content in these situations is not only >> redundant, but confusing. It immediately raises a question of whether to >> encode the interpretative info directly inside, say, the turn element (as >> above) or use >> >> <choice> >> <orig> >> <turn></turn><!-- capture of the turn symbol --> >> </orig> >> <reg> >> <note/><!-- interpretation of how to perform the turn --> >> <note/> >> <note/> >> <note/> >> </reg> >> </choice> >> >> when marking up a single source, or if the turn and its "resolution" >> exist in different sources >> >> <app> >> <rdg source="A"> >> <turn/> >> </rdg> >> <rdg source="B"> >> <note/> >> <note/> >> <note/> >> <note/> >> </rdg> >> </app> >> >> In some cases, that is, in diastematic neume notation, such as Solesmes, >> this feature is still necessary in order to record the actual, >> uninterpreted pitch values of the neumes. It has also already been used to >> capture interpreted pitch values in non-diastematic neume notation, such >> as for Hildegard's works. Although this last use might be worth >> reconsidering, we can't disallow it for earlier, unheighted notation if we >> allow it for diastematic neumes. >> >> In spite of the fact that removing the feature doesn't completely remove >> any possibility of its misuse, I think it should be removed for elements >> in the CMN repertoire (bend, gliss, mordent, trill, turn, note).
This will >> steer users toward a "proper" encoding using <app> and <choice>. >> >> This is a significant enough change (much like the camelCasing of element >> names) to warrant making it now rather than later. I don't want to make >> any change to the source file now, but I think it should be done for the >> next release. >> >> Comments? >> >> -- >> p. >> >> >> -------------- >> >> Johannes again. What I wanted to add is that this discussion may not aim >> at changing the upcoming 2012 release, as the schema for this has already >> been fixed by a Council decision. What we can do now is to announce future >> changes in the 2012 Guidelines we're currently writing. I think a >> discussion of this thread is desperately needed, but we need to be clear >> about the schedule for any changes we might come up with. >> >> Johannes >> >> >> >> _______________________________________________ >> mei-l mailing list >> mei-l at lists.uni-paderborn.de >> https://lists.uni-paderborn.de/mailman/listinfo/mei-l >> _______________________________________________ >> mei-l mailing list >> mei-l at lists.uni-paderborn.de >> https://lists.uni-paderborn.de/mailman/listinfo/mei-l >> >> > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l > > -------------- next part -------------- An HTML attachment was scrubbed... URL: <http://lists.uni-paderborn.de/pipermail/mei-l/attachments/20120315/0d2fea0d/attachment.html> From zupftom at googlemail.com Fri Mar 16 08:54:15 2012 From: zupftom at googlemail.com (TW) Date: Fri, 16 Mar 2012 08:54:15 +0100 Subject: [MEI-L] symbol/symbolDef Message-ID: <CAEB1mArXkBm17TR8FpAMyD3wSZm4W1qd42zm6RVRw2dugx898Q@mail.gmail.com> I'd like to ask for your opinion on some aspects of the usersymbols module as I'm in charge of its guidelines. I'm not particularly clear how the relationship between <symbolDef> and <symbol> is meant to work.
<symbolDef> mustn't be an empty element, so I might want to put a <symbol> element inside that might have a @facs attribute to point to a graphical example representation of the symbol. But this <symbol> element is required to have a @ref attribute which must be "a reference to a previously-declared user-defined symbol". The dog seems to chase its tail, right? One use case I see for <symbol> is if we have a scribe who uses some unusual symbols that might possibly not be fully understood. Then I might want to give some textual information about what meaning my research has suggested or what use pattern can be recognized. Would I do this with an <annot>, pointing to <symbolDef> by means of @plist? Thanks for your help! Thomas Weber From pdr4h at eservices.virginia.edu Sun Mar 18 20:04:46 2012 From: pdr4h at eservices.virginia.edu (Roland, Perry (pdr4h)) Date: Sun, 18 Mar 2012 19:04:46 +0000 Subject: [MEI-L] symbol/symbolDef In-Reply-To: <CAEB1mArXkBm17TR8FpAMyD3wSZm4W1qd42zm6RVRw2dugx898Q@mail.gmail.com> References: <CAEB1mArXkBm17TR8FpAMyD3wSZm4W1qd42zm6RVRw2dugx898Q@mail.gmail.com> Message-ID: <BBCC497C40D85642B90E9F94FC30343D01151CEB@GRANT.eservices.virginia.edu> Hi, Thomas, The <symbolDef> element is intended to allow the inclusion of arbitrary symbols/signs. Using <symbolDef>, one can say how a symbol should be drawn in terms of its graphic components; that is, text, curves, and lines. After defining the symbol's coordinate space (using @ulx, @uly, @lrx, and @lry), the <anchoredText>, <curve>, and <line> elements (with appropriate x, y, x2, and y2 attributes) can be used to construct the symbol. The <symbol> element can then be used to make reference to this user-defined sign. 
For example, one could define a new sign within <scoreDef> -- <scoreDef> <symbolTable> <symbolDef xml:id="mySign" ulx="0" uly="0" lrx="20" lry="20"> <line x="10" y="0" x2="10" y2="10"/> <line x="10" y="10" x2="20" y2="10"/> <line x="20" y="10" x2="20" y2="20"/> </symbolDef> </symbolTable> </scoreDef> then later in the document data invoke this symbol -- <measure> <staff n="1"> <layer> <note xml:id="n1" .../> ... </layer> </staff> <symbol ref="mySign"/> </measure> <symbolDef> may contain references to other <symbol> elements. If a line of a certain length and style is a common component, it can be defined once and re-used. The symbol can be placed relative to elements in the notation (using some combination of ho, vo, and to attributes) -- <symbol startid="n1" ho="5"/> (This example indicates the symbol is placed at the same vertical position as n1, but offset 5 half-step units to the right of it.) <symbol> and <symbolDef> cannot be used to point to a feature in a facsimile image because they have no @facs attribute. This was done purposefully in order to encourage the use of elements of the notation; that is, <note>, <chord>, <staff>, etc., for this purpose. <annot> can be used to record commentary on symbols just as it can with other elements of notation. It's technically possible to make a <symbolDef> element a target (using @plist), but I think <symbol> is the proper target. In other words, <symbol> is a generic placeholder for an "unknown" notational sign. <symbolDef> is "just" the instructions for drawing it. Does that help? -- p. __________________________ Perry Roland Music Library University of Virginia P. O.
Box 400175 Charlottesville, VA 22904 434-982-2702 (w) pdr4h (at) virginia (dot) edu ________________________________________ From: mei-l-bounces+pdr4h=virginia.edu at lists.uni-paderborn.de [mei-l-bounces+pdr4h=virginia.edu at lists.uni-paderborn.de] on behalf of TW [zupftom at googlemail.com] Sent: Friday, March 16, 2012 3:54 AM To: Music Encoding Initiative Subject: [MEI-L] symbol/symbolDef I'd like to ask for your opinion on some aspects of the usersymbols module as I'm in charge of its guidelines. I'm not particularly clear how the relationship between <symbolDef> and <symbol> is meant to work. <symbolDef> mustn't be an empty element, so I might want to put a <symbol> element inside that might have a @facs attribute to point to a graphical example representation of the symbol. But this <symbol> element is required to have a @ref attribute which must be "a reference to a previously-declared user-defined symbol". The dog seems to chase its tail, right? One use case I see for <symbol> is if we have a scribe who uses some unusual symbols that might possibly not be fully understood. Then I might want to give some textual information about what meaning my research has suggested or what use pattern can be recognized. Would I do this with an <annot>, pointing to <symbolDef> by means of @plist? Thanks for your help! 
Thomas Weber _______________________________________________ mei-l mailing list mei-l at lists.uni-paderborn.de https://lists.uni-paderborn.de/mailman/listinfo/mei-l From zupftom at googlemail.com Mon Mar 19 11:20:17 2012 From: zupftom at googlemail.com (TW) Date: Mon, 19 Mar 2012 11:20:17 +0100 Subject: [MEI-L] symbol/symbolDef In-Reply-To: <BBCC497C40D85642B90E9F94FC30343D01151CEB@GRANT.eservices.virginia.edu> References: <CAEB1mArXkBm17TR8FpAMyD3wSZm4W1qd42zm6RVRw2dugx898Q@mail.gmail.com> <BBCC497C40D85642B90E9F94FC30343D01151CEB@GRANT.eservices.virginia.edu> Message-ID: <CAEB1mAoow3znx-j3bqw9YnD1zyoB9SArBp-wrnJLLhsQNxCO=Q@mail.gmail.com> 2012/3/18 Roland, Perry (pdr4h) <pdr4h at eservices.virginia.edu>: > Hi, Thomas, > > The <symbolDef> element is intended to allow the inclusion of arbitrary symbols/signs. ?Using <symbolDef>, one can say how a symbol should be drawn in terms of its graphic components; that is, text, curves, and lines. After defining the symbol's coordinate space (using @ulx, @uly, @lrx, and @lry), the <anchoredText>, <curve>, and <line> elements (with appropriate x, y, x2, and y2 attributes) can be used to construct the symbol. > > The <symbol> element can then be used to make reference to this user-defined sign. > > For example, one could define a new sign within <scoreDef> -- > > <scoreDef> > ?<symbolTable> > ? ?<symbolDef xml:id="mySign" ulx="0" uly="0" lrx="20" lry="20"> > ? ? ?<line x="10" y="0" x2="10" y2="10"/> > ? ? ?<line x="10" y="10" x2="20" y2="10"/> > ? ? ?<line x="20" y="10" x2="20" y2="20"/> > ? ?</symbolDef> > ?</symbolTable> > </scoreDef> > > then later in the document data invoke this symbol -- > > <measure> > ?<staff n="1"> > ? ?<layer> > ? ? ?<note xml:id="n1" .../> > ? ? ? ?... > ? ?</layer> > ?</staff> > ?<symbol ref="mySign"/> > </measure> > > <symbolDef> may contain references to other <symbol> elements. ?If a line of a certain length and style is a common component, it can be defined once and re-used. 
> > The symbol can be placed relative to elements in the notation (using some combination of ho, vo , and to attributes) -- > > <symbol startid="n1" ho="5"/> > > (This example indicates the symbol is placed at the same vertical position as, but 5 half-step units above, n1.) > > <symbol> and <symbolDef> cannot be used to point to a feature in a facsimile image because they have no @facs attribute. ?This was done purposefully in order to encourage the use of elements of the notation; that is, <note>, <chord>, <staff>, etc., for this purpose. > > <annot> can be used to record commentary on symbols just as it can with other elements of notation. ?It's technically possible make a <symbolDef> element a target (using @plist), but I think <symbol> is the proper target. ?In other words, <symbol> is a generic placeholder for an "unknown" notational sign. ?<symbolDef> is "just" the instructions for drawing it. > I see, so the module is meant for rendering purposes only. Then <symbol> inside <symbolDef> is intended for "composite" symbols, right? <curve> and <line> elements would obviously describe the strokes to create a symbol. I think I can make up some examples for the guidelines. But for defining decent symbols, <curve> and <line> seem pretty crude to me, especially because they can only describe lines and not filled areas. I'll think about this. Thomas From kepper at edirom.de Mon Mar 19 11:25:48 2012 From: kepper at edirom.de (Johannes Kepper) Date: Mon, 19 Mar 2012 11:25:48 +0100 Subject: [MEI-L] symbol/symbolDef In-Reply-To: <CAEB1mAoow3znx-j3bqw9YnD1zyoB9SArBp-wrnJLLhsQNxCO=Q@mail.gmail.com> References: <CAEB1mArXkBm17TR8FpAMyD3wSZm4W1qd42zm6RVRw2dugx898Q@mail.gmail.com> <BBCC497C40D85642B90E9F94FC30343D01151CEB@GRANT.eservices.virginia.edu> <CAEB1mAoow3znx-j3bqw9YnD1zyoB9SArBp-wrnJLLhsQNxCO=Q@mail.gmail.com> Message-ID: <CA57C88A-7D62-4B47-9278-57C5804C3762@edirom.de> maybe we discovered another spot in MEI where including SVG might be helpful?? 
jo Am 19.03.2012 um 11:20 schrieb TW: > 2012/3/18 Roland, Perry (pdr4h) <pdr4h at eservices.virginia.edu>: >> Hi, Thomas, >> >> The <symbolDef> element is intended to allow the inclusion of arbitrary symbols/signs. Using <symbolDef>, one can say how a symbol should be drawn in terms of its graphic components; that is, text, curves, and lines. After defining the symbol's coordinate space (using @ulx, @uly, @lrx, and @lry), the <anchoredText>, <curve>, and <line> elements (with appropriate x, y, x2, and y2 attributes) can be used to construct the symbol. >> >> The <symbol> element can then be used to make reference to this user-defined sign. >> >> For example, one could define a new sign within <scoreDef> -- >> >> <scoreDef> >> <symbolTable> >> <symbolDef xml:id="mySign" ulx="0" uly="0" lrx="20" lry="20"> >> <line x="10" y="0" x2="10" y2="10"/> >> <line x="10" y="10" x2="20" y2="10"/> >> <line x="20" y="10" x2="20" y2="20"/> >> </symbolDef> >> </symbolTable> >> </scoreDef> >> >> then later in the document data invoke this symbol -- >> >> <measure> >> <staff n="1"> >> <layer> >> <note xml:id="n1" .../> >> ... >> </layer> >> </staff> >> <symbol ref="mySign"/> >> </measure> >> >> <symbolDef> may contain references to other <symbol> elements. If a line of a certain length and style is a common component, it can be defined once and re-used. >> >> The symbol can be placed relative to elements in the notation (using some combination of ho, vo , and to attributes) -- >> >> <symbol startid="n1" ho="5"/> >> >> (This example indicates the symbol is placed at the same vertical position as, but 5 half-step units above, n1.) >> >> <symbol> and <symbolDef> cannot be used to point to a feature in a facsimile image because they have no @facs attribute. This was done purposefully in order to encourage the use of elements of the notation; that is, <note>, <chord>, <staff>, etc., for this purpose. 
>> >> <annot> can be used to record commentary on symbols just as it can with other elements of notation. It's technically possible make a <symbolDef> element a target (using @plist), but I think <symbol> is the proper target. In other words, <symbol> is a generic placeholder for an "unknown" notational sign. <symbolDef> is "just" the instructions for drawing it. >> > > I see, so the module is meant for rendering purposes only. Then > <symbol> inside <symbolDef> is intended for "composite" symbols, > right? <curve> and <line> elements would obviously describe the > strokes to create a symbol. I think I can make up some examples for > the guidelines. But for defining decent symbols, <curve> and <line> > seem pretty crude to me, especially because they can only describe > lines and not filled areas. > > I'll think about this. > > Thomas > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l From pdr4h at eservices.virginia.edu Mon Mar 19 13:34:41 2012 From: pdr4h at eservices.virginia.edu (Roland, Perry (pdr4h)) Date: Mon, 19 Mar 2012 12:34:41 +0000 Subject: [MEI-L] symbol/symbolDef In-Reply-To: <CA57C88A-7D62-4B47-9278-57C5804C3762@edirom.de> References: <CAEB1mArXkBm17TR8FpAMyD3wSZm4W1qd42zm6RVRw2dugx898Q@mail.gmail.com> <BBCC497C40D85642B90E9F94FC30343D01151CEB@GRANT.eservices.virginia.edu> <CAEB1mAoow3znx-j3bqw9YnD1zyoB9SArBp-wrnJLLhsQNxCO=Q@mail.gmail.com>, <CA57C88A-7D62-4B47-9278-57C5804C3762@edirom.de> Message-ID: <BBCC497C40D85642B90E9F94FC30343D01151D3A@GRANT.eservices.virginia.edu> Hi, Johannes, > maybe we discovered another spot in MEI where including SVG might be helpful?? Placing SVG inside <symbolDef> is fine with me. But we need to investigate exactly how this might work, particularly with regard to "invoking" the SVG with symbol/@ref. Also, should we *add SVG* or *replace the primitives with SVG*? 
I admit I don't know enough about SVG to make good arguments either way. This is somewhat related to the issue of TEI in MEI. Or *any other XML* in MEI. Honestly, it makes my head spin. Will you create an issue in the tracker please? -- p. __________________________ Perry Roland Music Library University of Virginia P. O. Box 400175 Charlottesville, VA 22904 434-982-2702 (w) pdr4h (at) virginia (dot) edu From kepper at edirom.de Mon Mar 19 13:53:48 2012 From: kepper at edirom.de (Johannes Kepper) Date: Mon, 19 Mar 2012 13:53:48 +0100 Subject: [MEI-L] symbol/symbolDef In-Reply-To: <BBCC497C40D85642B90E9F94FC30343D01151D3A@GRANT.eservices.virginia.edu> References: <CAEB1mArXkBm17TR8FpAMyD3wSZm4W1qd42zm6RVRw2dugx898Q@mail.gmail.com> <BBCC497C40D85642B90E9F94FC30343D01151CEB@GRANT.eservices.virginia.edu> <CAEB1mAoow3znx-j3bqw9YnD1zyoB9SArBp-wrnJLLhsQNxCO=Q@mail.gmail.com>, <CA57C88A-7D62-4B47-9278-57C5804C3762@edirom.de> <BBCC497C40D85642B90E9F94FC30343D01151D3A@GRANT.eservices.virginia.edu> Message-ID: <36F1D685-C084-48D4-9218-B43933F54CD8@edirom.de> On 19.03.2012, at 13:34, Roland, Perry (pdr4h) wrote: > Hi, Johannes, > >> maybe we discovered another spot in MEI where including SVG might be helpful?? > > Placing SVG inside <symbolDef> is fine with me. But we need to investigate exactly how this might work, particularly with regard to "invoking" the SVG with symbol/@ref. Also, should we *add SVG* or *replace the primitives with SVG*? I admit I don't know enough about SVG to make good arguments either way. > > This is somewhat related to the issue of TEI in MEI. Or *any other XML* in MEI. Honestly, it makes my head spin. > > Will you create an issue in the tracker please? Will do. Regarding the replacement of MEI primitives, I'd vote against for consistency's sake. The best way to include SVG seems to make it model.graphicLike, as it happens in TEI+SVG. I've done this mod for Edirom, and it works as expected.
Basically, it allows SVG inside a surface (among other spots). Doing this, we surely don't want to lose mei:zone. If we allow mei:zone and svg:* within surface, why should we disallow lines and such in symbols? jo > > -- > p. > > __________________________ > Perry Roland > Music Library > University of Virginia > P. O. Box 400175 > Charlottesville, VA 22904 > 434-982-2702 (w) > pdr4h (at) virginia (dot) edu > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l From pdr4h at eservices.virginia.edu Mon Mar 19 13:55:46 2012 From: pdr4h at eservices.virginia.edu (Roland, Perry (pdr4h)) Date: Mon, 19 Mar 2012 12:55:46 +0000 Subject: [MEI-L] symbol/symbolDef In-Reply-To: <CAEB1mAoow3znx-j3bqw9YnD1zyoB9SArBp-wrnJLLhsQNxCO=Q@mail.gmail.com> References: <CAEB1mArXkBm17TR8FpAMyD3wSZm4W1qd42zm6RVRw2dugx898Q@mail.gmail.com> <BBCC497C40D85642B90E9F94FC30343D01151CEB@GRANT.eservices.virginia.edu>, <CAEB1mAoow3znx-j3bqw9YnD1zyoB9SArBp-wrnJLLhsQNxCO=Q@mail.gmail.com> Message-ID: <BBCC497C40D85642B90E9F94FC30343D01151D51@GRANT.eservices.virginia.edu> Thomas, > I see, so the module is meant for rendering purposes only. Then > <symbol> inside <symbolDef> is intended for "composite" symbols, > right? <curve> and <line> elements would obviously describe the > strokes to create a symbol. I think I can make up some examples for > the guidelines. But for defining decent symbols, <curve> and <line> > seem pretty crude to me, especially because they can only describe > lines and not filled areas. Yes, the module is meant for rendering. And, yes, <symbol> inside <symbolDef> is for composite symbols. My intent was to provide basic MEI functionality *in the absence of* a special-purpose drawing language. So, I would lean toward augmenting the MEI line, curve, and text graphic primitives (perhaps by adding an element for filled areas/paths) rather than replacing them.
However, it may be more advantageous to replace the content of <symbolDef> with SVG or define <symbolDef> as a placeholder whose content must be declared before it can be used. The latter would permit the use of any drawing language (SVG, PostScript, etc.), but would have a detrimental effect on interoperability. I'd like to hear from the "rendering team" (Laurent, Craig, Thomas) whether SVG is so widely-used and widely-accepted that we should import its elements here exclusively. How important is it to allow other "markup", such as PostScript or other existing schemes? -- p. __________________________ Perry Roland Music Library University of Virginia P. O. Box 400175 Charlottesville, VA 22904 434-982-2702 (w) pdr4h (at) virginia (dot) edu From zupftom at googlemail.com Mon Mar 19 18:14:46 2012 From: zupftom at googlemail.com (TW) Date: Mon, 19 Mar 2012 18:14:46 +0100 Subject: [MEI-L] symbol/symbolDef In-Reply-To: <BBCC497C40D85642B90E9F94FC30343D01151D51@GRANT.eservices.virginia.edu> References: <CAEB1mArXkBm17TR8FpAMyD3wSZm4W1qd42zm6RVRw2dugx898Q@mail.gmail.com> <BBCC497C40D85642B90E9F94FC30343D01151CEB@GRANT.eservices.virginia.edu> <CAEB1mAoow3znx-j3bqw9YnD1zyoB9SArBp-wrnJLLhsQNxCO=Q@mail.gmail.com> <BBCC497C40D85642B90E9F94FC30343D01151D51@GRANT.eservices.virginia.edu> Message-ID: <CAEB1mApzwRBtkt_uoJnpBb1OX4ov7XQxDWLLP-Gr0JEgpgEpHQ@mail.gmail.com> 2012/3/19 Roland, Perry (pdr4h) <pdr4h at eservices.virginia.edu>: > > I'd like to hear from the "rendering team" (Laurent, Craig, Thomas) whether SVG is so widely-used and widely-accepted that we should import its elements here exclusively. How important is it to allow other "markup", such as PostScript or other existing schemes? > I hesitated to dig into this when writing my last mail, but as you're asking for it: In case I'll be losing myself in an excessive monologue on vector graphics, I'll put the blame on you, alright?
As PostScript is a programming language and not markup, I'd vote against putting that into MEI. (If I had to pick my favorite programming language, I might choose PostScript, but I think MEI is the wrong place for it.) I feel that SVG is the only viable choice as it's the only open vector graphics standard I'm aware of. Of course, there's PDF, but I think that's hardly possible to "inline", and SVG is XML and therefore feels pretty comfortable inside XML. The graphics model behind SVG is basically identical to the one that PostScript and PDF use, and I think that pretty much any more recent 2D graphics technology copied from Adobe's model. So in the end, it's only about parsing; the rendering should not be too problematic (within reasonable bounds). When I wrote my last mail, I thought about replacing <line> and <curve> with something along the lines of <svg:path>. Not full SVG, just <svg:path>. To allow composite symbols, maybe also <svg:use>. That should be fully sufficient for music--we don't need all this fancy stuff like masks, patterns, clipping, gradients, animation and the like. SVG's line, polyline, polygon, circle, ellipse and rect elements are convenience elements that don't really give you anything that path can't give you. PostScript is happy without those. I'd also lean towards a stricter subset of the path syntax that can easily be parsed and rendered using any available 2D graphics technology (PostScript/PDF, Windows' GDI+ or whatever they are using nowadays, Apple's Quartz, Cairo, HTML canvas, whatever). I'd maybe restrict paths to the operators M, L, C and Z. All other operators are, again, convenience operators, except maybe for A/a. However A/a is omitted from the SVG Tiny/Mobile profile because it's the most complex of the operators to implement[1]. On the rendering level, A/a are usually broken down to approximating C operations anyway, so in that sense it *is* a convenience operator.
I'd also maybe restrict the filling rule to non-zero-winding, which seems to be the more common one. To facilitate parsing, I'd demand whitespace between each operator and number. SVG allows things like "M.5-4.1.2 3", but this compressed syntax (which is legal) isn't always parsed correctly--namely by librsvg, the SVG library that e.g. Wikipedia uses. Subsetting of course has two sides: On the one hand it greatly facilitates processing, but on the other hand you can't throw in any SVG you might have generated. So I'm not sure what's the best solution, full SVG (or SVG Tiny) or only <path> with restrictions to the path syntax. Is it sufficient if <symbolDef> can specify outlines of symbols, like they are defined in a font? Otherwise, we'd also have to consider stroke widths, dash patterns, line caps and line joins. Not to forget stroke and fill color. I think that's it. I don't think we'd need transparency effects, would we? I'm not sure if we'd need SVG transformations. <symbolDef> seems more like defining glyphs of a font rather than painting fancy graphics. For simple glyph outlines, we wouldn't even have to consider fill colors, stroking, dash patterns, line caps and line joins. Once again, I might be seeing this too much from the implementor's perspective with all this subsetting and stripping convenience functionality... 
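[Editorial illustration of the subset proposed above. The restrictions suggested in this thread (operators limited to M, L, C and Z, with mandatory whitespace between every operator and number) make path parsing almost trivial. The following is a hypothetical sketch only -- `parse_path` and `ARITY` are invented names, not part of MEI, SVG, or librsvg:]

```python
# Sketch of a parser for the restricted path-data subset proposed above:
# absolute operators M, L, C and Z only, whitespace-separated tokens.
# Hypothetical illustration, not an MEI or SVG library API.

ARITY = {"M": 2, "L": 2, "C": 6, "Z": 0}  # numbers each operator consumes

def parse_path(d):
    """Return a list of (operator, [floats]); reject anything outside the subset."""
    tokens = d.split()  # mandatory whitespace makes tokenizing a plain split
    commands, i = [], 0
    while i < len(tokens):
        op = tokens[i]
        if op not in ARITY:
            # compressed forms like "M.5-4.1.2 3" fail here by design
            raise ValueError("operator %r not in restricted subset" % op)
        n = ARITY[op]
        try:
            args = [float(t) for t in tokens[i + 1:i + 1 + n]]
        except ValueError:
            raise ValueError("non-numeric argument after %r" % op)
        if len(args) != n:
            raise ValueError("operator %r expects %d numbers" % (op, n))
        commands.append((op, args))
        i += 1 + n
    return commands

# The bracket-shaped sign from Perry's <symbolDef> example, as one open path:
bracket = parse_path("M 10 0 L 10 10 L 20 10 L 20 20")
```

[Each resulting command tuple maps directly onto the moveto/lineto/curveto primitives of PostScript, Quartz, Cairo, or the HTML canvas, which is the point of the subset: a renderer needs no general SVG parser at all.]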
Thomas [1] http://www.w3.org/TR/SVGMobile/#paths From laurent at music.mcgill.ca Tue Mar 20 07:33:37 2012 From: laurent at music.mcgill.ca (Laurent Pugin) Date: Tue, 20 Mar 2012 07:33:37 +0100 Subject: [MEI-L] symbol/symbolDef In-Reply-To: <29155_1332177366_4F6769D4_29155_127_1_CAEB1mApzwRBtkt_uoJnpBb1OX4ov7XQxDWLLP-Gr0JEgpgEpHQ@mail.gmail.com> References: <CAEB1mArXkBm17TR8FpAMyD3wSZm4W1qd42zm6RVRw2dugx898Q@mail.gmail.com> <BBCC497C40D85642B90E9F94FC30343D01151CEB@GRANT.eservices.virginia.edu> <CAEB1mAoow3znx-j3bqw9YnD1zyoB9SArBp-wrnJLLhsQNxCO=Q@mail.gmail.com> <BBCC497C40D85642B90E9F94FC30343D01151D51@GRANT.eservices.virginia.edu> <29155_1332177366_4F6769D4_29155_127_1_CAEB1mApzwRBtkt_uoJnpBb1OX4ov7XQxDWLLP-Gr0JEgpgEpHQ@mail.gmail.com> Message-ID: <CAJ306HbgHmkuOeKzmU_XJqahGT4wNHuep_FS7pYi+ac3Vw_zpw@mail.gmail.com> I agree with Thomas that SVG is probably the best choice. I also think that it would be a good idea to have <symbolDef> as a placeholder for SVG rather than defining our own shapes. I guess we are not directly talking about rendering MEI here, but more about including the encoding of shapes within a MEI document, so I don't see any reason why we should restrict the use to a subset of SVG. Laurent On Mon, Mar 19, 2012 at 6:14 PM, TW <zupftom at googlemail.com> wrote: > 2012/3/19 Roland, Perry (pdr4h) <pdr4h at eservices.virginia.edu>: >> >> I'd like to hear from the "rendering team" (Laurent, Craig, Thomas) whether SVG is so widely-used and widely-accepted that we should import its elements here exclusively. ?How important is it to allow other "markup", such as PostScript or other existing schemes? >> > > I hesitated to dig into this when writing my last mail, but as you're > asking for it: ?In case I'll be losing myself in an excessive > monologous vector graphics discussion, I'll put the blame on you, > alright? > > As PostScript is a programming language and not markup, I'd vote > against putting that into MEI. 
?(If I had to pick my favorite > programming language, I might choose PostScript, but I think MEI is > the wrong place for it.) ?I feel that SVG is the only viable choice as > it's the only open vector graphics standard I'm aware of. ?Of course, > there's PDF, but I think that's hardly possible to "inline", and SVG > is XML and therefore feels pretty comfortable inside XML. > > The graphics model behind SVG is basically identical to the one that > PostScript and PDF use, and I think that pretty much any more recent > 2D graphics technology copied from Adobe's model. ?So in the end, it's > only about parsing, the rendering should not be too problematic > (within reasonable bounds). > > When I wrote my last mail, I thought about replacing <line> and > <curve> with something along the lines of <svg:path>. ?Not full SVG, > just <svg:path>. ?To allow composite symbols, maybe also <svg:use>. > That should be fully sufficient for music--we don't need all this > fancy stuff like masks, patterns, clipping, gradients, animation and > the like. ?SVG's line, polyline, polygon, circle, ellipse and rect > elements are convience elements that don't really give you anything > that path can't give you. ?PostScript is happy without those. > > I'd also lean towards a stricter subset of the path syntax that can > easily be parsed and rendered using any available 2D graphics > technology (PostScript/PDF, Windows' GDI+ or whatever they are using > nowadays, Apple's Quartz, Cairo, HTML canvas, whatever). ?I'd maybe > restrict paths to the operators M,L, C and Z. ?All other operators > are, again, convenience operators, except maybe for A/a. ?However A/a > is omitted from the SVG Tiny/Mobile profile because it's the most > complex of the operators to implement[1]. ?On the rendering level, A/a > are usually broken down to approximating C operations anyway, so in > that sense it *is* a convenience operator. 
> > I'd also maybe restrict the filling rule to non-zero-winding, which > seems to be the more common one. > > To facilitate parsing, I'd demand whitespace between each operator and > number. ?SVG allows things like "M.5-4.1.2 3", but this compressed > syntax (which is legal) isn't always parsed correctly--namely by > librsvg, the SVG library that e.g. Wikipedia uses. > > Subsetting of course has two sides: ?On the one hand it greatly > facilitates processing, but on the other hand you can't throw in any > SVG you might have generated. ?So I'm not sure what's the best > solution, full SVG (or SVG Tiny) or only <path> with restrictions to > the path syntax. > > Is it sufficient if <symbolDef> can specify outlines of symbols, like > they are defined in a font? ?Otherwise, we'd also have to consider > stroke widths, dash patterns, line caps and line joins. ?Not to forget > stroke and fill color. ?I think that's it. ?I don't think we'd need > transparency effects, would we? ?I'm not sure if we'd need SVG > transformations. ?<symbolDef> seems more like defining glyphs of a > font rather than painting fancy graphics. ?For simple glyph outlines, > we wouldn't even have to consider fill colors, stroking, dash > patterns, line caps and line joins. > > Once again, I might be seeing this too much from the implementor's > perspective with all this subsetting and stripping convenience > functionality... 
>
> Thomas
>
> [1] http://www.w3.org/TR/SVGMobile/#paths
>
> _______________________________________________
> mei-l mailing list
> mei-l at lists.uni-paderborn.de
> https://lists.uni-paderborn.de/mailman/listinfo/mei-l
>

From zupftom at googlemail.com Tue Mar 20 07:50:24 2012
From: zupftom at googlemail.com (TW)
Date: Tue, 20 Mar 2012 07:50:24 +0100
Subject: [MEI-L] symbol/symbolDef
In-Reply-To: <CAJ306HbgHmkuOeKzmU_XJqahGT4wNHuep_FS7pYi+ac3Vw_zpw@mail.gmail.com>
References: <CAEB1mArXkBm17TR8FpAMyD3wSZm4W1qd42zm6RVRw2dugx898Q@mail.gmail.com> <BBCC497C40D85642B90E9F94FC30343D01151CEB@GRANT.eservices.virginia.edu> <CAEB1mAoow3znx-j3bqw9YnD1zyoB9SArBp-wrnJLLhsQNxCO=Q@mail.gmail.com> <BBCC497C40D85642B90E9F94FC30343D01151D51@GRANT.eservices.virginia.edu> <29155_1332177366_4F6769D4_29155_127_1_CAEB1mApzwRBtkt_uoJnpBb1OX4ov7XQxDWLLP-Gr0JEgpgEpHQ@mail.gmail.com> <CAJ306HbgHmkuOeKzmU_XJqahGT4wNHuep_FS7pYi+ac3Vw_zpw@mail.gmail.com>
Message-ID: <CAEB1mArwRgEu3NXNKruJLA42GuEMZ9bxe=OBTxAU2p3ZhRXpeg@mail.gmail.com>

2012/3/20 Laurent Pugin <laurent at music.mcgill.ca>:
> I agree with Thomas that SVG is probably the best choice. I also think
> that it would be a good idea to have <symbolDef> as a placeholder for
> SVG rather than defining our own shapes. I guess we are not directly
> talking about rendering MEI here, but more about including the
> encoding of shapes within a MEI document, so I don't see any reason
> why we should restrict the use to a subset of SVG.

From Perry's post I got the impression that the module's intent indeed
was to provide symbols for rendering. If not, then we'd only be
saying something like "See, this is what this symbol looks like", and
the symbol's dimensions and origin wouldn't be significant. In this
case, I'd think it would be more (or at least similarly) adequate to
link to an example in the facsimile rather than redrawing the symbol.
This was my initial understanding that I expressed in the original
post.
It would basically provide a means of classifying and identifying different kinds of symbols, especially ones that aren't covered by MEI (and maybe shouldn't be because they are only used by a certain scribe/composer/very special notation or only in a single work). Thomas From laurent at music.mcgill.ca Tue Mar 20 09:05:16 2012 From: laurent at music.mcgill.ca (Laurent Pugin) Date: Tue, 20 Mar 2012 09:05:16 +0100 Subject: [MEI-L] symbol/symbolDef In-Reply-To: <25118_1332226231_4F6828B7_25118_393_1_CAEB1mArwRgEu3NXNKruJLA42GuEMZ9bxe=OBTxAU2p3ZhRXpeg@mail.gmail.com> References: <CAEB1mArXkBm17TR8FpAMyD3wSZm4W1qd42zm6RVRw2dugx898Q@mail.gmail.com> <BBCC497C40D85642B90E9F94FC30343D01151CEB@GRANT.eservices.virginia.edu> <CAEB1mAoow3znx-j3bqw9YnD1zyoB9SArBp-wrnJLLhsQNxCO=Q@mail.gmail.com> <BBCC497C40D85642B90E9F94FC30343D01151D51@GRANT.eservices.virginia.edu> <29155_1332177366_4F6769D4_29155_127_1_CAEB1mApzwRBtkt_uoJnpBb1OX4ov7XQxDWLLP-Gr0JEgpgEpHQ@mail.gmail.com> <CAJ306HbgHmkuOeKzmU_XJqahGT4wNHuep_FS7pYi+ac3Vw_zpw@mail.gmail.com> <25118_1332226231_4F6828B7_25118_393_1_CAEB1mArwRgEu3NXNKruJLA42GuEMZ9bxe=OBTxAU2p3ZhRXpeg@mail.gmail.com> Message-ID: <CAJ306HYVEJKoH+5i-x1ziYGf=0HcnNjA-WEQmOgiFJgux4aDcg@mail.gmail.com> On Tue, Mar 20, 2012 at 7:50 AM, TW <zupftom at googlemail.com> wrote: > 2012/3/20 Laurent Pugin <laurent at music.mcgill.ca>: >> I agree with Thomas that SVG is probably the best choice. I also think >> that it would be a good idea to have <symbolDef> as a placeholder for >> SVG rather than defining our own shapes. I guess we are not directly >> talking about rendering MEI here, but more about including the >> encoding of shapes within a MEI document, so I don't see any reason >> why we should restrict the use to a subset of SVG. >> > > >From Perry's post I got the impression that the module's intent indeed > was to provide symbols for rendering. 
If not, then we'd only be
> saying something like "See, this is what this symbol looks like", and
> the symbol's dimensions and origin wouldn't be significant. In this
> case, I'd think it would be more (or at least similarly) adequate to
> link to an example in the facsimile rather than redrawing the symbol.
> This was my initial understanding that I expressed in the original
> post. It would basically provide a means of classifying and
> identifying different kinds of symbols, especially ones that aren't
> covered by MEI (and maybe shouldn't be because they are only used by a
> certain scribe/composer/very special notation or only in a single
> work).

I think we agree. What I meant was not for rendering other MEI
elements (e.g., a <note>). SVG for special symbols would of course be
used for rendering. That is, we would benefit from the fact that SVG
is a self-rendered (or "renderable", sorry for the neologism) encoding.

Laurent

From pdr4h at eservices.virginia.edu Tue Mar 20 17:04:12 2012
From: pdr4h at eservices.virginia.edu (Roland, Perry (pdr4h))
Date: Tue, 20 Mar 2012 16:04:12 +0000
Subject: [MEI-L] symbol/symbolDef
In-Reply-To: <CAJ306HYVEJKoH+5i-x1ziYGf=0HcnNjA-WEQmOgiFJgux4aDcg@mail.gmail.com>
References: <CAEB1mArXkBm17TR8FpAMyD3wSZm4W1qd42zm6RVRw2dugx898Q@mail.gmail.com> <BBCC497C40D85642B90E9F94FC30343D01151CEB@GRANT.eservices.virginia.edu> <CAEB1mAoow3znx-j3bqw9YnD1zyoB9SArBp-wrnJLLhsQNxCO=Q@mail.gmail.com> <BBCC497C40D85642B90E9F94FC30343D01151D51@GRANT.eservices.virginia.edu> <29155_1332177366_4F6769D4_29155_127_1_CAEB1mApzwRBtkt_uoJnpBb1OX4ov7XQxDWLLP-Gr0JEgpgEpHQ@mail.gmail.com> <CAJ306HbgHmkuOeKzmU_XJqahGT4wNHuep_FS7pYi+ac3Vw_zpw@mail.gmail.com> <25118_1332226231_4F6828B7_25118_393_1_CAEB1mArwRgEu3NXNKruJLA42GuEMZ9bxe=OBTxAU2p3ZhRXpeg@mail.gmail.com>, <CAJ306HYVEJKoH+5i-x1ziYGf=0HcnNjA-WEQmOgiFJgux4aDcg@mail.gmail.com>
Message-ID: <BBCC497C40D85642B90E9F94FC30343D01151F2A@GRANT.eservices.virginia.edu>

Just to clarify:

The purpose behind
the symbolDef/symbol pair is to encode *user-defined* shapes/signs/symbols so that they can be rendered. This is distinctly different from classifying/identifying shapes/signs/symbols in a facsimile, which can/should be handled using @facs on MEI elements. Just a reminder: Don't forget about @altsym which can be used to link an MEI element directly to a <symbolDef>. -- p. __________________________ Perry Roland Music Library University of Virginia P. O. Box 400175 Charlottesville, VA 22904 434-982-2702 (w) pdr4h (at) virginia (dot) edu ________________________________________ From: mei-l-bounces at lists.uni-paderborn.de [mei-l-bounces at lists.uni-paderborn.de] on behalf of Laurent Pugin [laurent at music.mcgill.ca] Sent: Tuesday, March 20, 2012 4:05 AM To: Music Encoding Initiative Subject: Re: [MEI-L] symbol/symbolDef On Tue, Mar 20, 2012 at 7:50 AM, TW <zupftom at googlemail.com> wrote: > 2012/3/20 Laurent Pugin <laurent at music.mcgill.ca>: >> I agree with Thomas that SVG is probably the best choice. I also think >> that it would be a good idea to have <symbolDef> as a placeholder for >> SVG rather than defining our own shapes. I guess we are not directly >> talking about rendering MEI here, but more about including the >> encoding of shapes within a MEI document, so I don't see any reason >> why we should restrict the use to a subset of SVG. >> > > >From Perry's post I got the impression that the module's intent indeed > was to provide symbols for rendering. If not, then we'd only be > saying something like "See, this is how this symbol looks like", and > the symbol's dimensions and origin wouldn't be significant. In this > case, I'd think it would be more (or at least similarly) adequate to > link to an example in the facsimile rather than redrawing the symbol. > This was my initial understanding that I expressed in the original > post. 
It would basically provide a means of classifying and > identifying different kinds of symbols, especially ones that aren't > covered by MEI (and maybe shouldn't be because they are only used by a > certain scribe/composer/very special notation or only in a single > work). I think we agree. What I meant was not for rendering other MEI elements (e.g., a <note>). SVG for special symbols would of course be used for rendering. That is, we would take benefit from the fact that SVG is a self-rendered (or "renderable", sorry for the neologism?) encoding. Laurent _______________________________________________ mei-l mailing list mei-l at lists.uni-paderborn.de https://lists.uni-paderborn.de/mailman/listinfo/mei-l From zupftom at googlemail.com Tue Mar 20 19:23:25 2012 From: zupftom at googlemail.com (TW) Date: Tue, 20 Mar 2012 19:23:25 +0100 Subject: [MEI-L] symbol/symbolDef In-Reply-To: <BBCC497C40D85642B90E9F94FC30343D01151F2A@GRANT.eservices.virginia.edu> References: <CAEB1mArXkBm17TR8FpAMyD3wSZm4W1qd42zm6RVRw2dugx898Q@mail.gmail.com> <BBCC497C40D85642B90E9F94FC30343D01151CEB@GRANT.eservices.virginia.edu> <CAEB1mAoow3znx-j3bqw9YnD1zyoB9SArBp-wrnJLLhsQNxCO=Q@mail.gmail.com> <BBCC497C40D85642B90E9F94FC30343D01151D51@GRANT.eservices.virginia.edu> <29155_1332177366_4F6769D4_29155_127_1_CAEB1mApzwRBtkt_uoJnpBb1OX4ov7XQxDWLLP-Gr0JEgpgEpHQ@mail.gmail.com> <CAJ306HbgHmkuOeKzmU_XJqahGT4wNHuep_FS7pYi+ac3Vw_zpw@mail.gmail.com> <25118_1332226231_4F6828B7_25118_393_1_CAEB1mArwRgEu3NXNKruJLA42GuEMZ9bxe=OBTxAU2p3ZhRXpeg@mail.gmail.com> <CAJ306HYVEJKoH+5i-x1ziYGf=0HcnNjA-WEQmOgiFJgux4aDcg@mail.gmail.com> <BBCC497C40D85642B90E9F94FC30343D01151F2A@GRANT.eservices.virginia.edu> Message-ID: <CAEB1mAr2oXMHaL31Gk1k1D3jNZkuEOp=3MHMtWu5NqPeOpv_tg@mail.gmail.com> 2012/3/20 Roland, Perry (pdr4h) <pdr4h at eservices.virginia.edu>: > > The purpose behind the symbolDef/symbol pair is to encode *user-defined* shapes/signs/symbols so that they can be rendered. 
>
> This is distinctly different from classifying/identifying
> shapes/signs/symbols in a facsimile, which can/should be handled
> using @facs on MEI elements.
>
> Just a reminder:
>
> Don't forget about @altsym which can be used to link an MEI element
> directly to a <symbolDef>.
>

When talking with the Corpus monodicum people from Würzburg about
encoding their data in MEI, the problem occurred that occasionally
they find neumes that they cannot interpret (yet). However, those
neumes aren't just sloppily written, they clearly manifest a certain
kind of symbol as it is found repeatedly (for example within the works
of a certain scribe). @facs doesn't express this, while @altsym could
do. Would this be misuse as this would not be targeting rendering? If
not, why force them by means of the schema to draw a vector version of
a symbol? Just the classification and one or more facsimile references
would pretty much say everything that can be said, I think.

Thomas

From pdr4h at eservices.virginia.edu Tue Mar 20 20:13:52 2012
From: pdr4h at eservices.virginia.edu (Roland, Perry (pdr4h))
Date: Tue, 20 Mar 2012 19:13:52 +0000
Subject: [MEI-L] symbol/symbolDef
In-Reply-To: <CAEB1mAr2oXMHaL31Gk1k1D3jNZkuEOp=3MHMtWu5NqPeOpv_tg@mail.gmail.com>
References: <CAEB1mArXkBm17TR8FpAMyD3wSZm4W1qd42zm6RVRw2dugx898Q@mail.gmail.com> <BBCC497C40D85642B90E9F94FC30343D01151CEB@GRANT.eservices.virginia.edu> <CAEB1mAoow3znx-j3bqw9YnD1zyoB9SArBp-wrnJLLhsQNxCO=Q@mail.gmail.com> <BBCC497C40D85642B90E9F94FC30343D01151D51@GRANT.eservices.virginia.edu> <29155_1332177366_4F6769D4_29155_127_1_CAEB1mApzwRBtkt_uoJnpBb1OX4ov7XQxDWLLP-Gr0JEgpgEpHQ@mail.gmail.com> <CAJ306HbgHmkuOeKzmU_XJqahGT4wNHuep_FS7pYi+ac3Vw_zpw@mail.gmail.com> <25118_1332226231_4F6828B7_25118_393_1_CAEB1mArwRgEu3NXNKruJLA42GuEMZ9bxe=OBTxAU2p3ZhRXpeg@mail.gmail.com> <CAJ306HYVEJKoH+5i-x1ziYGf=0HcnNjA-WEQmOgiFJgux4aDcg@mail.gmail.com> <BBCC497C40D85642B90E9F94FC30343D01151F2A@GRANT.eservices.virginia.edu>,
<CAEB1mAr2oXMHaL31Gk1k1D3jNZkuEOp=3MHMtWu5NqPeOpv_tg@mail.gmail.com>
Message-ID: <BBCC497C40D85642B90E9F94FC30343D0115201A@GRANT.eservices.virginia.edu>

> When talking with the Corpus monodicum people from Würzburg about
> encoding their data in MEI, the problem occurred that occasionally
> they find neumes that they cannot interpret (yet). However, those
> neumes aren't just sloppily written, they clearly manifest a certain
> kind of symbol as it is found repeatedly (for example within the works
> of a certain scribe). @facs doesn't express this, while @altsym could
> do.

Please forgive me, but I don't understand what you're trying to say.
@facs doesn't express what? The fact that they can't / don't want to
say what a certain symbol is / means? What does @altsym do in this
case that @facs doesn't?

Whatever "it" is, @facs points to a region of an image and says "there
it is", while @altsym points to a vector graphic and says "this is how
you draw it". Neither of these attributes has anything to do with
interpretation.

Both of these require the encoder to make a decision about what "it"
is by choosing an MEI element. So, for a neume one can say

<neume facs="d1" altsym="us1"/>
<!-- This is a neume, it's there at "d1", and instructions for rendering it are at "us1" -->

Are you wanting <symbol> to function as a generic marker for an
unknown sign? That is, if a symbol's meaning is unknown, then are you
looking for markup like --

<symbol facs="d1 d2 d3 d4 d5"/>

saying, in effect, "I don't know what this thing is, but it occurs 5
times"? This doesn't sound right to me because you already said they
have *neumes* that can't yet be interpreted. But at the very least
they can be called "neumes", right? So what's wrong with calling them
neumes by using the <neume> element?

Am I completely off the track here?

--
p.

__________________________
Perry Roland
Music Library
University of Virginia
P. O.
Box 400175
Charlottesville, VA 22904
434-982-2702 (w)
pdr4h (at) virginia (dot) edu

________________________________________
From: mei-l-bounces at lists.uni-paderborn.de [mei-l-bounces at lists.uni-paderborn.de] on behalf of TW [zupftom at googlemail.com]
Sent: Tuesday, March 20, 2012 2:23 PM
To: Music Encoding Initiative
Subject: Re: [MEI-L] symbol/symbolDef

2012/3/20 Roland, Perry (pdr4h) <pdr4h at eservices.virginia.edu>:
>
> The purpose behind the symbolDef/symbol pair is to encode *user-defined* shapes/signs/symbols so that they can be rendered.
>
> This is distinctly different from classifying/identifying shapes/signs/symbols in a facsimile, which can/should be handled using @facs on MEI elements.
>
> Just a reminder:
>
> Don't forget about @altsym which can be used to link an MEI element directly to a <symbolDef>.
>

When talking with the Corpus monodicum people from Würzburg about
encoding their data in MEI, the problem occurred that occasionally
they find neumes that they cannot interpret (yet). However, those
neumes aren't just sloppily written, they clearly manifest a certain
kind of symbol as it is found repeatedly (for example within the works
of a certain scribe). @facs doesn't express this, while @altsym could
do. Would this be misuse as this would not be targeting rendering? If
not, why force them by means of the schema to draw a vector version of
a symbol? Just the classification and one or more facsimile references
would pretty much say everything that can be said, I think.

Thomas
_______________________________________________
mei-l mailing list
mei-l at lists.uni-paderborn.de
https://lists.uni-paderborn.de/mailman/listinfo/mei-l

From bohl at edirom.de Wed Mar 21 07:53:39 2012
From: bohl at edirom.de (Benjamin W. Bohl)
Date: Wed, 21 Mar 2012 07:53:39 +0100
Subject: [MEI-L] Antw.: symbol/symbolDef
Message-ID: <0M2Wmb-1SSkFM3PpV-00snnH@mrelayeu.kundenserver.de>

Hi all!
I don't think that you're off track, Perry ;-)

Thomas, what you're describing sounds to me like the occasion for a
critical note (annotation); @plist could refer to the occurrences of
the uninterpreted neume or even the user-defined symbol. I don't see
any necessity or use in describing the fact that the neume is unknown
or occurs in other places directly in the music body.

Benjamin

----- Reply message -----
From: "Roland, Perry (pdr4h)" <pdr4h at eservices.virginia.edu>
To: "Music Encoding Initiative" <mei-l at lists.uni-paderborn.de>
Subject: [MEI-L] symbol/symbolDef
Date: Tue., Mar. 20, 2012 20:13

> When talking with the Corpus monodicum people from Würzburg about
> encoding their data in MEI, the problem occurred that occasionally
> they find neumes that they cannot interpret (yet). However, those
> neumes aren't just sloppily written, they clearly manifest a certain
> kind of symbol as it is found repeatedly (for example within the works
> of a certain scribe). @facs doesn't express this, while @altsym could
> do.

Please forgive me, but I don't understand what you're trying to say.
@facs doesn't express what? The fact that they can't / don't want to
say what a certain symbol is / means? What does @altsym do in this
case that @facs doesn't?

Whatever "it" is, @facs points to a region of an image and says "there
it is", while @altsym points to a vector graphic and says "this is how
you draw it". Neither of these attributes has anything to do with
interpretation.

Both of these require the encoder to make a decision about what "it"
is by choosing an MEI element. So, for a neume one can say

<neume facs="d1" altsym="us1"/>
<!-- This is a neume, it's there at "d1", and instructions for rendering it are at "us1" -->

Are you wanting <symbol> to function as a generic marker for an
unknown sign?
That is, if a symbol's meaning is unknown, then are you looking for markup like --

<symbol facs="d1 d2 d3 d4 d5"/>

saying, in effect, "I don't know what this thing is, but it occurs 5
times"? This doesn't sound right to me because you already said they
have *neumes* that can't yet be interpreted. But at the very least
they can be called "neumes", right? So what's wrong with calling them
neumes by using the <neume> element?

Am I completely off the track here?

--
p.

__________________________
Perry Roland
Music Library
University of Virginia
P. O. Box 400175
Charlottesville, VA 22904
434-982-2702 (w)
pdr4h (at) virginia (dot) edu

________________________________________
From: mei-l-bounces at lists.uni-paderborn.de [mei-l-bounces at lists.uni-paderborn.de] on behalf of TW [zupftom at googlemail.com]
Sent: Tuesday, March 20, 2012 2:23 PM
To: Music Encoding Initiative
Subject: Re: [MEI-L] symbol/symbolDef

2012/3/20 Roland, Perry (pdr4h) <pdr4h at eservices.virginia.edu>:
>
> The purpose behind the symbolDef/symbol pair is to encode *user-defined* shapes/signs/symbols so that they can be rendered.
>
> This is distinctly different from classifying/identifying shapes/signs/symbols in a facsimile, which can/should be handled using @facs on MEI elements.
>
> Just a reminder:
>
> Don't forget about @altsym which can be used to link an MEI element directly to a <symbolDef>.
>

When talking with the Corpus monodicum people from Würzburg about
encoding their data in MEI, the problem occurred that occasionally
they find neumes that they cannot interpret (yet). However, those
neumes aren't just sloppily written, they clearly manifest a certain
kind of symbol as it is found repeatedly (for example within the works
of a certain scribe). @facs doesn't express this, while @altsym could
do. Would this be misuse as this would not be targeting rendering? If
not, why force them by means of the schema to draw a vector version of
a symbol?
Just the classification and one or more facsimile references would
pretty much say everything that can be said, I think.

Thomas
_______________________________________________
mei-l mailing list
mei-l at lists.uni-paderborn.de
https://lists.uni-paderborn.de/mailman/listinfo/mei-l
_______________________________________________
mei-l mailing list
mei-l at lists.uni-paderborn.de
https://lists.uni-paderborn.de/mailman/listinfo/mei-l
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.uni-paderborn.de/pipermail/mei-l/attachments/20120321/fe7a2278/attachment.html>

From zupftom at googlemail.com Wed Mar 21 08:54:55 2012
From: zupftom at googlemail.com (TW)
Date: Wed, 21 Mar 2012 08:54:55 +0100
Subject: [MEI-L] symbol/symbolDef
In-Reply-To: <BBCC497C40D85642B90E9F94FC30343D0115201A@GRANT.eservices.virginia.edu>
References: <CAEB1mArXkBm17TR8FpAMyD3wSZm4W1qd42zm6RVRw2dugx898Q@mail.gmail.com> <BBCC497C40D85642B90E9F94FC30343D01151CEB@GRANT.eservices.virginia.edu> <CAEB1mAoow3znx-j3bqw9YnD1zyoB9SArBp-wrnJLLhsQNxCO=Q@mail.gmail.com> <BBCC497C40D85642B90E9F94FC30343D01151D51@GRANT.eservices.virginia.edu> <29155_1332177366_4F6769D4_29155_127_1_CAEB1mApzwRBtkt_uoJnpBb1OX4ov7XQxDWLLP-Gr0JEgpgEpHQ@mail.gmail.com> <CAJ306HbgHmkuOeKzmU_XJqahGT4wNHuep_FS7pYi+ac3Vw_zpw@mail.gmail.com> <25118_1332226231_4F6828B7_25118_393_1_CAEB1mArwRgEu3NXNKruJLA42GuEMZ9bxe=OBTxAU2p3ZhRXpeg@mail.gmail.com> <CAJ306HYVEJKoH+5i-x1ziYGf=0HcnNjA-WEQmOgiFJgux4aDcg@mail.gmail.com> <BBCC497C40D85642B90E9F94FC30343D01151F2A@GRANT.eservices.virginia.edu> <CAEB1mAr2oXMHaL31Gk1k1D3jNZkuEOp=3MHMtWu5NqPeOpv_tg@mail.gmail.com> <BBCC497C40D85642B90E9F94FC30343D0115201A@GRANT.eservices.virginia.edu>
Message-ID: <CAEB1mAqdQ+h-50YN7eZjy0s9sMR4eA5e11k=FDLyo5MCu=RmOw@mail.gmail.com>

2012/3/20 Roland, Perry (pdr4h) <pdr4h at eservices.virginia.edu>:
>> When talking with the Corpus monodicum people from Würzburg about
>> encoding their data in MEI, the problem
occurred that occasionally
>> they find neumes that they cannot interpret (yet). However, those
>> neumes aren't just sloppily written, they clearly manifest a certain
>> kind of symbol as it is found repeatedly (for example within the works
>> of a certain scribe). @facs doesn't express this, while @altsym could
>> do.
>
> Please forgive me, but I don't understand what you're trying to say.
> @facs doesn't express what? The fact that they can't / don't want to
> say what a certain symbol is / means? What does @altsym do in this
> case that @facs doesn't?
>
> Whatever "it" is, @facs points to a region of an image and says "there
> it is", while @altsym points to a vector graphic and says "this is how
> you draw it". Neither of these attributes has anything to do with
> interpretation.
>
> Both of these require the encoder to make a decision about what "it"
> is by choosing an MEI element. So, for a neume one can say
>
> <neume facs="d1" altsym="us1"/>
> <!-- This is a neume, it's there at "d1", and instructions for rendering it are at "us1" -->
>
> Are you wanting <symbol> to function as a generic marker for an
> unknown sign? That is, if a symbol's meaning is unknown, then are you
> looking for markup like --
>
> <symbol facs="d1 d2 d3 d4 d5"/>
>
> saying, in effect, "I don't know what this thing is, but it occurs 5
> times"?
No, I was more thinking about something like this:

<mei xmlns="http://www.music-encoding.org/ns/mei">
  <meiHead>
    <fileDesc>
      <titleStmt>
        <title/>
      </titleStmt>
      <pubStmt/>
      <sourceDesc>
        <source>
          <physDesc>
            <handList>
              <hand xml:id="handB">
                <name>Beta</name>
              </hand>
              <hand xml:id="handC">
                <name>Gamma</name>
              </hand>
            </handList>
          </physDesc>
          <history>
            <creation>
              <geogName xml:id="mon_Alpha">monastery Alpha</geogName>
            </creation>
          </history>
        </source>
      </sourceDesc>
    </fileDesc>
  </meiHead>
  <music>
    <facsimile>
      <surface>
        <graphic target="facsimile00001.jpg"/>
        <zone data="#symbolAB123_description" xml:id="symbolAB123_example"
          ulx="180" uly="66" lrx="220" lry="81"/>
        <zone data="#symbolCD456_description" xml:id="symbolCD456_example"
          ulx="3475" uly="1290" lrx="3510" lry="1302"/>
      </surface>
    </facsimile>
    <body>
      <mdiv>
        <score>
          <scoreDef>
            <symbolTable>
              <symbolDef xml:id="symbolAB123">
                <symbol facs="#symbolAB123_example"/>
              </symbolDef>
              <symbolDef xml:id="symbolCD456">
                <symbol facs="#symbolCD456_description"/>
              </symbolDef>
            </symbolTable>
          </scoreDef>
          <annot startid="#symbolAB123">
            This symbol can be found in sources stemming from
            <ref target="#mon_Alpha">monastery Alpha</ref> and is used by
            hands <ref target="#handB">Beta</ref> and
            <ref target="#handC">Gamma</ref>. It frequently appears after
            a clivis. Its meaning is unknown.
          </annot>
          <annot startid="#symbolCD456">
            <!-- Something interesting about this symbol -->
          </annot>
          <section>
            <staff>
              <layer>
                <syllable>
                  <syl>bla</syl>
                  <uneume name="clivis"/>
                  <uneume altsym="#symbolAB123"/>
                </syllable>
                <syllable>
                  <syl>bla</syl>
                  <uneume name="clivis"/>
                  <uneume altsym="#symbolAB123"/>
                  <uneume altsym="#symbolCD456"/>
                </syllable>
                <!-- ... -->
              </layer>
            </staff>
          </section>
        </score>
      </mdiv>
    </body>
  </music>
</mei>

Does it at least make some sense? It's not valid MEI, as I gave the
<symbol> elements inside symbolDef only a @facs rather than a @ref.
(Unfortunately, there aren't any examples for <annot>, <handList>,
<creation> or <facsimile> on Google Code, so I'm using them as I
understand them. But they're only meant as illustrative background
actors, anyway.)

>
> This doesn't sound right to me because you already said they have
> *neumes* that can't yet be interpreted. But at the very least they
> can be called "neumes", right? So what's wrong with calling them
> neumes by using the <neume> element?
>

Of course <uneume> would have to be used. <uneume> has the @name
attribute that can be used to classify the symbol. If there is a
symbol that doesn't fall into any of the categories that @name offers,
but can still be identified as a certain symbol, I would have used
@altsym to point to the symbol, as shown above.

Of course one could use <annot> or something to say "This is the neume
of special type pink-dog-with-green-tail", but I think @altsym is more
accessible and less cluttered. For example, it's very straightforward
to formulate a search query "Find me all occurrences of
pink-dog-with-green-tail". Or if it turns out that this is just a
strange way of writing a torculus, then @altsym can be replaced with
the proper @name.

But back to the original question: It seems that using <symbol> or
@altsym for classifying unknown symbols (in any context) isn't
something that I should encourage in the guidelines. At least that's
my interim conclusion of the discussion so far.
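The search-query point can be illustrated with a short sketch. The element and attribute names follow the example encoding earlier in this message; the fragment, the query value, and the use of Python's standard-library ElementTree are illustrative assumptions, not part of any MEI tooling.

```python
import xml.etree.ElementTree as ET

MEI_NS = "http://www.music-encoding.org/ns/mei"

# A fragment shaped like the example encoding above (hypothetical data).
snippet = """
<layer xmlns="http://www.music-encoding.org/ns/mei">
  <syllable>
    <syl>bla</syl>
    <uneume name="clivis"/>
    <uneume altsym="#symbolAB123"/>
  </syllable>
  <syllable>
    <syl>bla</syl>
    <uneume name="clivis"/>
    <uneume altsym="#symbolAB123"/>
    <uneume altsym="#symbolCD456"/>
  </syllable>
</layer>
"""

root = ET.fromstring(snippet)

# "Find me all occurrences of pink-dog-with-green-tail": select every
# <uneume> whose @altsym points at that symbol's definition.
matches = root.findall(".//{%s}uneume[@altsym='#symbolAB123']" % MEI_NS)
print(len(matches))  # 2
```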
Thomas From pdr4h at eservices.virginia.edu Wed Mar 21 15:01:30 2012 From: pdr4h at eservices.virginia.edu (Roland, Perry (pdr4h)) Date: Wed, 21 Mar 2012 14:01:30 +0000 Subject: [MEI-L] symbol/symbolDef In-Reply-To: <CAEB1mAqdQ+h-50YN7eZjy0s9sMR4eA5e11k=FDLyo5MCu=RmOw@mail.gmail.com> References: <CAEB1mArXkBm17TR8FpAMyD3wSZm4W1qd42zm6RVRw2dugx898Q@mail.gmail.com> <BBCC497C40D85642B90E9F94FC30343D01151CEB@GRANT.eservices.virginia.edu> <CAEB1mAoow3znx-j3bqw9YnD1zyoB9SArBp-wrnJLLhsQNxCO=Q@mail.gmail.com> <BBCC497C40D85642B90E9F94FC30343D01151D51@GRANT.eservices.virginia.edu> <29155_1332177366_4F6769D4_29155_127_1_CAEB1mApzwRBtkt_uoJnpBb1OX4ov7XQxDWLLP-Gr0JEgpgEpHQ@mail.gmail.com> <CAJ306HbgHmkuOeKzmU_XJqahGT4wNHuep_FS7pYi+ac3Vw_zpw@mail.gmail.com> <25118_1332226231_4F6828B7_25118_393_1_CAEB1mArwRgEu3NXNKruJLA42GuEMZ9bxe=OBTxAU2p3ZhRXpeg@mail.gmail.com> <CAJ306HYVEJKoH+5i-x1ziYGf=0HcnNjA-WEQmOgiFJgux4aDcg@mail.gmail.com> <BBCC497C40D85642B90E9F94FC30343D01151F2A@GRANT.eservices.virginia.edu> <CAEB1mAr2oXMHaL31Gk1k1D3jNZkuEOp=3MHMtWu5NqPeOpv_tg@mail.gmail.com> <BBCC497C40D85642B90E9F94FC30343D0115201A@GRANT.eservices.virginia.edu>, <CAEB1mAqdQ+h-50YN7eZjy0s9sMR4eA5e11k=FDLyo5MCu=RmOw@mail.gmail.com> Message-ID: <BBCC497C40D85642B90E9F94FC30343D01152179@GRANT.eservices.virginia.edu> > But back to the original question: It seems that using <symbol> or > @altsym for classifying unknown symbols (in any context) isn't > something that I should encourage in the guidelines. At least that's > my interim conclusion of the discussion so far. Correct. 
Especially since your example can be simplified, as in the following:

<?xml version="1.0" encoding="UTF-8"?>
<mei xmlns="http://www.music-encoding.org/ns/mei">
  <meiHead>
    <fileDesc>
      <titleStmt>
        <title/>
      </titleStmt>
      <pubStmt/>
      <sourceDesc>
        <source>
          <physDesc>
            <handList>
              <hand xml:id="handB">
                <name>Beta</name>
              </hand>
              <hand xml:id="handC">
                <name>Gamma</name>
              </hand>
            </handList>
          </physDesc>
          <history>
            <creation>
              <geogName xml:id="mon_Alpha">monastery Alpha</geogName>
            </creation>
          </history>
        </source>
      </sourceDesc>
    </fileDesc>
  </meiHead>
  <music>
    <facsimile>
      <surface>
        <graphic target="facsimile00001.jpg"/>
        <zone data="#symbolAB123_description" xml:id="symbolAB123_example"
          ulx="180" uly="66" lrx="220" lry="81"/>
        <zone data="#symbolCD456_description" xml:id="symbolCD456_example"
          ulx="3475" uly="1290" lrx="3510" lry="1302"/>
      </surface>
    </facsimile>
    <body>
      <mdiv>
        <score>
          <scoreDef>
            <!-- scoreDef-y stuff-->
          </scoreDef>
          <annot xml:id="symbolAB123_description" plist="#symbolAB123_example">
            This symbol can be found in sources stemming from
            <ref target="#mon_Alpha">monastery Alpha</ref> and is used by
            hands <ref target="#handB">Beta</ref> and
            <ref target="#handC">Gamma</ref>. It frequently appears after
            a clivis. Its meaning is unknown.
          </annot>
          <annot xml:id="symbolCD456_description" plist="#symbolCD456_example">
            <!-- Something interesting about this symbol -->
          </annot>
          <section>
            <staff>
              <layer>
                <syllable>
                  <syl>bla</syl>
                  <uneume name="clivis"/>
                  <uneume facs="#symbolAB123_example"/>
                </syllable>
                <syllable>
                  <syl>bla</syl>
                  <uneume name="clivis"/>
                  <uneume facs="#symbolAB123_example"/>
                  <uneume facs="#symbolCD456_example"/>
                </syllable>
                <!-- ... -->
              </layer>
            </staff>
          </section>
        </score>
      </mdiv>
    </body>
  </music>
</mei>

Instead of using <symbolDef> as an intermediary, uneume/@facs points
directly to the facsimile/zone. annot/@plist also points to the zone
so that every uneume doesn't have to be enumerated; however, I think
this is somewhat suspect.
It's better to just pick one neume as an exemplar:

...
<annot xml:id="symbolAB123_description" plist="#symbolAB123">
  This symbol can be found in sources stemming from
  <ref target="#mon_Alpha">monastery Alpha</ref> and is used by hands
  <ref target="#handB">Beta</ref> and <ref target="#handC">Gamma</ref>.
  It frequently appears after a clivis. Its meaning is unknown.
</annot>
...
<uneume xml:id="symbolAB123" facs="#symbolAB123_example"/>
...

--
p.

__________________________
Perry Roland
Music Library
University of Virginia
P. O. Box 400175
Charlottesville, VA 22904
434-982-2702 (w)
pdr4h (at) virginia (dot) edu

________________________________________
From: mei-l-bounces at lists.uni-paderborn.de [mei-l-bounces at lists.uni-paderborn.de] on behalf of TW [zupftom at googlemail.com]
Sent: Wednesday, March 21, 2012 3:54 AM
To: Music Encoding Initiative
Subject: Re: [MEI-L] symbol/symbolDef

2012/3/20 Roland, Perry (pdr4h) <pdr4h at eservices.virginia.edu>:
>> When talking with the Corpus monodicum people from Würzburg about
>> encoding their data in MEI, the problem occurred that occasionally
>> they find neumes that they cannot interpret (yet). However, those
>> neumes aren't just sloppily written, they clearly manifest a certain
>> kind of symbol as it is found repeatedly (for example within the works
>> of a certain scribe). @facs doesn't express this, while @altsym could
>> do.
>
> Please forgive me, but I don't understand what you're trying to say.
> @facs doesn't express what? The fact that they can't / don't want to
> say what a certain symbol is / means? What does @altsym do in this
> case that @facs doesn't?
>
> Whatever "it" is, @facs points to a region of an image and says "there
> it is", while @altsym points to a vector graphic and says "this is how
> you draw it". Neither of these attributes has anything to do with
> interpretation.
>
> Both of these require the encoder to make a decision about what "it"
> is by choosing an MEI element.
So, for a neume one can say > > <neume @facs="d1" altsym="us1"/> > <!-- This is a neume, it's there at "d1", and instructions for rendering it are at "us1" --> > > Are you wanting <symbol> to function as a generic marker for an unknown sign? That is, if a symbol's meaning is unknown, then are you looking for markup like -- > > <symbol @facs="d1 d2 d3 d4 d5"/> > > saying, in effect, "I don't know what this thing is, but it occurs 5 times"? No, I was more thinking about something like this: <mei xmlns="http://www.music-encoding.org/ns/mei"> <meiHead> <fileDesc> <titleStmt> <title/> </titleStmt> <pubStmt/> <sourceDesc> <source> <physDesc> <handList> <hand xml:id="handB"> <name>Beta</name> </hand> <hand xml:id="handC"> <name>Gamma</name> </hand> </handList> </physDesc> <history> <creation> <geogName xml:id="mon_Alpha">monastery Alpha</geogName> </creation> </history> </source> </sourceDesc> </fileDesc> </meiHead> <music> <facsimile> <surface> <graphic target="facsimile00001.jpg"/> <zone data="#symbolAB123_description" xml:id="symbolAB123_example" ulx="180" uly="66" lrx="220" lry="81"/> <zone data="#symbolCD456_description" xml:id="symbolCD456_example" ulx="3475" uly="1290" lrx="3510" lry="1302"/> </surface> </facsimile> <body> <mdiv> <score> <scoreDef> <symbolTable> <symbolDef xml:id="symbolAB123"> <symbol facs="#symbolAB123_example"/> </symbolDef> <symbolDef xml:id="symbolCD456"> <symbol facs="#symbolCD456_description"/> </symbolDef> </symbolTable> </scoreDef> <annot startid="#symbolAB123"> This symbol can be found in sources stemming from <ref target="#mon_Alpha">monastery Alpha</ref> and is used by hands <ref target="#handB">Beta</ref> and <ref target="#handC">Gamma</ref>. It frequently appears after a clivis. It's meaning is unknown. 
</annot> <annot startid="#symbolCD456"> <!-- Something interesting about this symbol --> </annot> <section> <staff> <layer> <syllable> <syl>bla</syl> <uneume name="clivis"/> <uneume altsym="#symbolAB123"/> </syllable> <syllable> <syl>bla</syl> <uneume name="clivis"/> <uneume altsym="#symbolAB123"/> <uneume altsym="#symbolCD456"/> </syllable> <!-- ... --> </layer> </staff> </section> </score> </mdiv> </body> </music> </mei> Does it at least make some sense? It's not valid MEI as I gave the <symbol> elements inside symbolDef only a @facs rather than a @ref. (Unfortunately, there aren't any examples for <annot>, <handList>, <creation> or <facsimile> on Google Code, so I'm using them as I understand them. But they're only meant as illustrative background actors, anyway.) > > This doesn't sound right to me because you already said they have *neumes* that can't yet be interpreted. But at the very least they can be called "neumes", right? So what's wrong with calling them neumes by using the <neume> element? > Of course <uneume> would have to be used. <uneume> has the @name attribute that can be used to classify the symbol. If there is a symbol that doesn't fall in any of the categories that @name offers, but still can be identified as a certain symbol, I would have used an @altsym to point to the symbol, like shown above. Of course one could use <annot> or something to say "This is the neume of special type pink-dog-with-green-tail", but I think @altsym is more accessible and less cluttered. For example, it's very straightforward to formulate a search query "Find me all occurrences of pink-dog-with-green-tail". Or if it should be found out that this is just a strange way of writing a torculus, then @altsym can be replaced with the proper @name. But back to the original question: It seems that using <symbol> or @altsym for classifying unknown symbols (in any context) isn't something that I should encourage in the guidelines. 
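[Editorial illustration of TW's point that @altsym makes queries such as "find me all occurrences of pink-dog-with-green-tail" straightforward. This sketch is not part of the thread; it parses a stripped-down fragment of the example encoding above with Python's standard library and counts the uneume elements that point at a given symbolDef.]

```python
import xml.etree.ElementTree as ET

MEI_NS = "http://www.music-encoding.org/ns/mei"

# A stripped-down fragment of TW's example encoding above.
snippet = """
<music xmlns="http://www.music-encoding.org/ns/mei">
  <body><mdiv><score><section><staff><layer>
    <syllable><syl>bla</syl>
      <uneume name="clivis"/>
      <uneume altsym="#symbolAB123"/>
    </syllable>
    <syllable><syl>bla</syl>
      <uneume name="clivis"/>
      <uneume altsym="#symbolAB123"/>
      <uneume altsym="#symbolCD456"/>
    </syllable>
  </layer></staff></section></score></mdiv></body>
</music>
"""

def occurrences(tree, symbol_id):
    """Collect all uneume elements whose @altsym points at the given symbolDef id."""
    return [u for u in tree.iter(f"{{{MEI_NS}}}uneume")
            if u.get("altsym") == f"#{symbol_id}"]

tree = ET.fromstring(snippet)
print(len(occurrences(tree, "symbolAB123")))  # symbolAB123 is referenced twice here
```

[The element and attribute names come from the example; the helper function is hypothetical.]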
At least that's my interim conclusion of the discussion so far. Thomas _______________________________________________ mei-l mailing list mei-l at lists.uni-paderborn.de https://lists.uni-paderborn.de/mailman/listinfo/mei-l From bohl at edirom.de Thu Mar 22 08:05:11 2012 From: bohl at edirom.de (Benjamin W. Bohl) Date: Thu, 22 Mar 2012 08:05:11 +0100 Subject: [MEI-L] Antw.: symbol/symbolDef Message-ID: <0MOVST-1SGOqX2RJV-005str@mrelayeu.kundenserver.de> Ok, this all seems plausible. But warming up the second issue: How would I go about supplying rendition information for the unknown neume? That should happen in the symbolDef, shouldn't it? How do I give a reference size, indicating how big the symbol is in comparison to other symbols or e.g. the line distance? B ----- Reply message ----- From: "Roland, Perry (pdr4h)" <pdr4h at eservices.virginia.edu> To: "Music Encoding Initiative" <mei-l at lists.uni-paderborn.de> Subject: [MEI-L] symbol/symbolDef Date: Wed., Mar. 21, 2012 15:01 > But back to the original question: It seems that using <symbol> or > @altsym for classifying unknown symbols (in any context) isn't > something that I should encourage in the guidelines. At least that's > my interim conclusion of the discussion so far. Correct.
Especially since your example can be simplified, as in the following: <?xml version="1.0" encoding="UTF-8"?> <mei xmlns="http://www.music-encoding.org/ns/mei"> <meiHead> <fileDesc> <titleStmt> <title/> </titleStmt> <pubStmt/> <sourceDesc> <source> <physDesc> <handList> <hand xml:id="handB"> <name>Beta</name> </hand> <hand xml:id="handC"> <name>Gamma</name> </hand> </handList> </physDesc> <history> <creation> <geogName xml:id="mon_Alpha">monastery Alpha</geogName> </creation> </history> </source> </sourceDesc> </fileDesc> </meiHead> <music> <facsimile> <surface> <graphic target="facsimile00001.jpg"/> <zone data="#symbolAB123_description" xml:id="symbolAB123_example" ulx="180" uly="66" lrx="220" lry="81"/> <zone data="#symbolCD456_description" xml:id="symbolCD456_example" ulx="3475" uly="1290" lrx="3510" lry="1302"/> </surface> </facsimile> <body> <mdiv> <score> <scoreDef> <!-- scoreDef-y stuff--> </scoreDef> <annot xml:id="symbolAB123_description" plist="#symbolAB123_example">This symbol can be found in sources stemming from <ref target="#mon_Alpha">monastery Alpha</ref> and is used by hands <ref target="#handB">Beta</ref> and <ref target="#handC">Gamma</ref>. It frequently appears after a clivis. Its meaning is unknown. </annot> <annot xml:id="symbolCD456_description" plist="#symbolCD456_example"> <!-- Something interesting about this symbol --> </annot> <section> <staff> <layer> <syllable> <syl>bla</syl> <uneume name="clivis"/> <uneume facs="#symbolAB123_example"/> </syllable> <syllable> <syl>bla</syl> <uneume name="clivis"/> <uneume facs="#symbolAB123_example"/> <uneume facs="#symbolCD456_example"/> </syllable> <!-- ... --> </layer> </staff> </section> </score> </mdiv> </body> </music> </mei> Instead of using <symbolDef> as an intermediary, uneume/@facs points directly to the facsimile/zone. annot/@plist also points to the zone so that every uneume doesn't have to be enumerated; however, I think this is somewhat suspect.
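[Editorial illustration of the simplified approach just described, where uneume/@facs points directly at a facsimile/zone. This is only a sketch of how an application might dereference @facs to recover the clipped region's coordinates; the ids and numbers come from the example above, and the helper function is hypothetical.]

```python
import xml.etree.ElementTree as ET

MEI_NS = "{http://www.music-encoding.org/ns/mei}"
XML_ID = "{http://www.w3.org/XML/1998/namespace}id"

# Minimal subset of the simplified encoding above.
doc = ET.fromstring("""
<mei xmlns="http://www.music-encoding.org/ns/mei">
  <music>
    <facsimile><surface>
      <graphic target="facsimile00001.jpg"/>
      <zone xml:id="symbolAB123_example" ulx="180" uly="66" lrx="220" lry="81"/>
    </surface></facsimile>
    <body><mdiv><score><section><staff><layer><syllable>
      <uneume name="clivis"/>
      <uneume facs="#symbolAB123_example"/>
    </syllable></layer></staff></section></score></mdiv></body>
  </music>
</mei>
""")

def zone_for(doc, uneume):
    """Dereference @facs (a '#id' pointer) to the zone carrying the coordinates."""
    target = uneume.get("facs", "").lstrip("#")
    return next(z for z in doc.iter(MEI_NS + "zone") if z.get(XML_ID) == target)

u = [e for e in doc.iter(MEI_NS + "uneume") if e.get("facs")][0]
z = zone_for(doc, u)
print(int(z.get("lrx")) - int(z.get("ulx")))  # width of the clipped region in pixels
```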
From pdr4h at eservices.virginia.edu Thu Mar 22 14:54:11 2012 From: pdr4h at eservices.virginia.edu (Roland, Perry (pdr4h)) Date: Thu, 22 Mar 2012 13:54:11 +0000 Subject: [MEI-L] Antw.: symbol/symbolDef In-Reply-To: <0MOVST-1SGOqX2RJV-005str@mrelayeu.kundenserver.de> References: <0MOVST-1SGOqX2RJV-005str@mrelayeu.kundenserver.de> Message-ID: <BBCC497C40D85642B90E9F94FC30343D011522CA@GRANT.eservices.virginia.edu> Hi, Benni, Instructions for "drawing" the symbol go in <symbolDef>. The simple answer to your question is: use @scale on <symbol> to specify the size of a particular occurrence of the symbol. The more complex answer is: rendering is a coordinated (pun intended) dance between/amongst the values of @page.units and @page.scale (found in att.scoreDef.vis), the values of <symbolDef>'s @ulx, @uly, @lrx, and @lry attributes (from att.coordinated), and <symbol>'s @scale attribute (in att.scalable). Does that help? -- p. __________________________ Perry Roland Music Library University of Virginia P. O. Box 400175 Charlottesville, VA 22904 434-982-2702 (w) pdr4h (at) virginia (dot) edu From: mei-l-bounces at lists.uni-paderborn.de [mei-l-bounces at lists.uni-paderborn.de] on behalf of Benjamin W. Bohl [bohl at edirom.de] Sent: Thursday, March 22, 2012 3:05 AM To: mei-l at lists.uni-paderborn.de Subject: [MEI-L] Antw.: symbol/symbolDef Ok, this all seems plausible.
But warming up the second issue: How would I go about supplying rendition information for the unknown neume? That should happen in the symbolDef, shouldn't it? How do I give a reference size, indicating how big the symbol is in comparison to other symbols or e.g. the line distance? B
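[Editorial illustration of Perry's answer above, which describes rendering as a coordination of @page.units and @page.scale, the <symbolDef> coordinates, and <symbol>'s @scale. The following is only a rough numeric sketch, assuming @scale is interpreted as a percentage applied to the bounding box declared on <symbolDef>; the attribute names follow the message, and the numbers are invented for illustration.]

```python
def rendered_width(sym_def, scale_percent=100.0):
    """Width of one symbol occurrence: the symbolDef bounding box (ulx..lrx,
    in page units) scaled by the occurrence's @scale percentage."""
    base = sym_def["lrx"] - sym_def["ulx"]
    return base * scale_percent / 100.0

# Hypothetical symbolDef bounding box, 40 page units wide.
sym_def = {"ulx": 0, "uly": 0, "lrx": 40, "lry": 15}

print(rendered_width(sym_def))      # full size at the default scale
print(rendered_width(sym_def, 50))  # half size, as for <symbol scale="50%"/>
```

[Comparing the result against the staff-line distance in the same page units would give Benjamin's "reference size".]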
From Maja.Hartwig at gmx.de Mon Mar 26 09:43:54 2012 From: Maja.Hartwig at gmx.de (Maja Hartwig) Date: Mon, 26 Mar 2012 09:43:54 +0200 Subject: [MEI-L] del in chord Message-ID: <279EDEDD-C7C5-423E-8F4E-563B015B1F88@gmx.de> Dear List, I wanted to encode a deleted note within a chord: <chord> <note/> <del><note/></del> <note/> <note/> </chord> And I was surprised that this is not allowed, while encoding deleted notes within a beam, for instance, seems to be no problem. Are there any other solutions or ideas? Best, Maja From pdr4h at eservices.virginia.edu Mon Mar 26 14:48:12 2012 From: pdr4h at eservices.virginia.edu (Roland, Perry (pdr4h)) Date: Mon, 26 Mar 2012 12:48:12 +0000 Subject: [MEI-L] del in chord In-Reply-To: <279EDEDD-C7C5-423E-8F4E-563B015B1F88@gmx.de> References: <279EDEDD-C7C5-423E-8F4E-563B015B1F88@gmx.de> Message-ID: <BBCC497C40D85642B90E9F94FC30343D01152ACF@GRANT.eservices.virginia.edu> Hi, Maja, The only solution I see is to allow model.transcriptionLike inside <chord>. I'll add an issue in the tracker. -- p. __________________________ Perry Roland Music Library University of Virginia P. O.
Box 400175 Charlottesville, VA 22904 434-982-2702 (w) pdr4h (at) virginia (dot) edu From atge at kb.dk Thu Mar 29 13:28:08 2012 From: atge at kb.dk (Axel Teich Geertinger) Date: Thu, 29 Mar 2012 11:28:08 +0000 Subject: [MEI-L] physLoc & provenance In-Reply-To: <279EDEDD-C7C5-423E-8F4E-563B015B1F88@gmx.de> References: <279EDEDD-C7C5-423E-8F4E-563B015B1F88@gmx.de> Message-ID: <0B6F63F59F405E4C902DFE2C2329D0D13A4BF3C4@EXCHANGE-02.kb.dk> Hi all, working with source metadata encodings, my colleague Sigge and I are wondering why <physLoc> and <provenance> are children of <physDesc>, not its siblings. From our point of view, the physical description of an item focuses on describing the object itself, i.e. its dimensions, the medium etc. independently of its location or history. Where it is located or who has owned it does not change the physical object in this sense (except that the circumstances may have left some physical traces on the object itself, of course, but that is hardly the point...). It seems like <physDesc>, as it is, is to be understood somewhat like the FRBR "item" level, containing all item-specific data, but then the tag name "physDesc" does not seem very accurate.
The issue becomes even more apparent in our current customization of the schema, which actually introduces the FRBR item level in <source> (as <itemList><item>, where <item> has almost the same content model as <source>). What would be the arguments against moving <physLoc> and <provenance> out of <physDesc>? All the best, Axel From pdr4h at eservices.virginia.edu Thu Mar 29 15:26:49 2012 From: pdr4h at eservices.virginia.edu (Roland, Perry (pdr4h)) Date: Thu, 29 Mar 2012 13:26:49 +0000 Subject: [MEI-L] Who are the Digital Humanities? Message-ID: <BBCC497C40D85642B90E9F94FC30343D01152FB4@GRANT.eservices.virginia.edu> Hello, all, Just wanted to pass along this message which was posted to TEI-L. If you consider yourself a member of the digital humanities community, you might want to respond to this survey. -- p. > Date: Wed, 28 Mar 2012 09:46:42 +0100 > From: Lou Burnard <lou.burnard at RETIRED.OX.AC.UK> > Subject: Who are the Digital Humanities? > > Slightly off topic, but not without interest for readers of this list: > http://t.co/bH2Gri3u -- a broad-based attempt to find out who considers > themselves to be doing "digital humanities" __________________________ Perry Roland Music Library University of Virginia P. O.
Box 400175 Charlottesville, VA 22904 434-982-2702 (w) pdr4h (at) virginia (dot) edu From kepper at edirom.de Thu Mar 29 15:38:25 2012 From: kepper at edirom.de (Johannes Kepper) Date: Thu, 29 Mar 2012 15:38:25 +0200 Subject: [MEI-L] physLoc & provenance In-Reply-To: <0B6F63F59F405E4C902DFE2C2329D0D13A4BF3C4@EXCHANGE-02.kb.dk> References: <279EDEDD-C7C5-423E-8F4E-563B015B1F88@gmx.de> <0B6F63F59F405E4C902DFE2C2329D0D13A4BF3C4@EXCHANGE-02.kb.dk> Message-ID: <343E1AF4-28B2-47FE-9DD4-28EF1C4203D9@edirom.de> Hi Axel, what you describe seems right and convincing, especially in the light of FRBR. I'm not sure if we can change it in the regular MEI model, though. The regular model is very open and has more or less the same content model for works and sources, which is questionable even without FRBR. I wonder if we should really change this model, or instead tighten up our FRBR proposal. At some point, we will have to specify the models of works, expressions, sources (manifestations) and items anyway, so I think this could be another opportunity. If we can come up with a convincing model that works in this context, we can propose it for regular MEI as well (and who knows, maybe at some point 'regular' MEI will be a simplified FRBR-MEI?). Also, we have agreed that the current model won't be changed for the upcoming release anymore, so this couldn't be adopted before summer. At the same time, there is no such restriction for our FRBR-ODD; we may change it as often as necessary without worrying about breaking others' software. So unless there is a consensus on this list that we should adopt your proposal in regular MEI as soon as possible, I would suggest trying it out in our own customization first. As soon as we have something debatable, we may re-present it here. But as always, that's just my first impression.
Best regards, Johannes From atge at kb.dk Thu Mar 29 16:24:26 2012 From: atge at kb.dk (Axel Teich Geertinger) Date: Thu, 29 Mar 2012 14:24:26 +0000 Subject: [MEI-L] physLoc & provenance In-Reply-To: <343E1AF4-28B2-47FE-9DD4-28EF1C4203D9@edirom.de> References: <279EDEDD-C7C5-423E-8F4E-563B015B1F88@gmx.de> <0B6F63F59F405E4C902DFE2C2329D0D13A4BF3C4@EXCHANGE-02.kb.dk> <343E1AF4-28B2-47FE-9DD4-28EF1C4203D9@edirom.de> Message-ID: <0B6F63F59F405E4C902DFE2C2329D0D13A4BF4BC@EXCHANGE-02.kb.dk> Hi Johannes, yes, that was the plan I had in mind too. I was just curious to know whether we had overlooked some good reason for the way it is done in the existing model.
We can consider changing it in our customization and let it be part of some future MEI-FRBR proposal. Best, Axel From raffaeleviglianti at gmail.com Thu Mar 29 16:58:00 2012 From: raffaeleviglianti at gmail.com (Raffaele Viglianti) Date: Thu, 29 Mar 2012 15:58:00 +0100 Subject: [MEI-L] XSLT 2.0 and XPath 2.0 by Michael Kay Message-ID: <CAMyHAnPNKzsw7ESNxm6qvVGgx5spSO+dq80mq5ONvesL+1Y=KQ@mail.gmail.com> Hello all, I've just noticed that MEI is mentioned in "XSLT 2.0 and XPath 2.0 Programmer's Reference" by Michael Kay. Here's the page on Google Books: http://j.mp/HjmAqq Best, Raffaele
URL: <http://lists.uni-paderborn.de/pipermail/mei-l/attachments/20120329/5be93c2b/attachment.html> From kepper at edirom.de Thu Mar 29 17:07:07 2012 From: kepper at edirom.de (Johannes Kepper) Date: Thu, 29 Mar 2012 17:07:07 +0200 Subject: [MEI-L] XSLT 2.0 and XPath 2.0 by Michael Kay In-Reply-To: <CAMyHAnPNKzsw7ESNxm6qvVGgx5spSO+dq80mq5ONvesL+1Y=KQ@mail.gmail.com> References: <CAMyHAnPNKzsw7ESNxm6qvVGgx5spSO+dq80mq5ONvesL+1Y=KQ@mail.gmail.com> Message-ID: <A40E0E55-1CB9-49A1-B4A8-E14124578CBE@edirom.de> Hi Raffaele, good catch. I must have missed something, though: I didn't know we provided a working stylesheet from MEI to MusicXML yet (or is this provided by Michael since he's working for MakeMusic?).
__________________________ Perry Roland Music Library University of Virginia P. O. Box 400175 Charlottesville, VA 22904 434-982-2702 (w) pdr4h (at) virginia (dot) edu From: mei-l-bounces at lists.uni-paderborn.de [mei-l-bounces at lists.uni-paderborn.de] on behalf of Raffaele Viglianti [raffaeleviglianti at gmail.com] Sent: Thursday, March 29, 2012 10:58 AM To: Music Encoding Initiative Subject: [MEI-L] XSLT 2.0 and XPath 2.0 by Michael Kay Hello all, I've just noticed that MEI is mentioned in "XSLT 2.0 and XPath 2.0 Programmer's Reference" by Michael Kay. Here's the page on Google Books: http://j.mp/HjmAqq Best, Raffaele From pdr4h at eservices.virginia.edu Thu Mar 29 17:23:15 2012 From: pdr4h at eservices.virginia.edu (Roland, Perry (pdr4h)) Date: Thu, 29 Mar 2012 15:23:15 +0000 Subject: [MEI-L] XSLT 2.0 and XPath 2.0 by Michael Kay In-Reply-To: <A40E0E55-1CB9-49A1-B4A8-E14124578CBE@edirom.de> References: <CAMyHAnPNKzsw7ESNxm6qvVGgx5spSO+dq80mq5ONvesL+1Y=KQ@mail.gmail.com>, <A40E0E55-1CB9-49A1-B4A8-E14124578CBE@edirom.de> Message-ID: <BBCC497C40D85642B90E9F94FC30343D01153049@GRANT.eservices.virginia.edu> Johannes, We haven't written mei2musicxml.xsl yet, but have talked about it extensively. I'll let your SMDL comment pass without adding comment of my own. ;-) -- p. __________________________ Perry Roland Music Library University of Virginia P. O. Box 400175 Charlottesville, VA 22904 434-982-2702 (w) pdr4h (at) virginia (dot) edu ________________________________________ From: mei-l-bounces at lists.uni-paderborn.de [mei-l-bounces at lists.uni-paderborn.de] on behalf of Johannes Kepper [kepper at edirom.de] Sent: Thursday, March 29, 2012 11:07 AM To: Music Encoding Initiative Subject: Re: [MEI-L] XSLT 2.0 and XPath 2.0 by Michael Kay Hi Raffaele, good catch. I must have missed something, though ? I didn't know we provide a working stylesheet from MEI to MusicXML yet (or is this provided by Michael since he's working for MakeMusic?). 
And I have to admit I haven't ever thought about converting to SMDL ;-) Thanks, Johannes Am 29.03.2012 um 16:58 schrieb Raffaele Viglianti: > Hello all, > > I've just noticed that MEI is mentioned in "XSLT 2.0 and XPath 2.0 Programmer's Reference" by Michael Kay. Here's the page on Google Books: http://j.mp/HjmAqq > > Best, > Raffaele > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l _______________________________________________ mei-l mailing list mei-l at lists.uni-paderborn.de https://lists.uni-paderborn.de/mailman/listinfo/mei-l From kepper at edirom.de Thu Mar 29 17:27:33 2012 From: kepper at edirom.de (Johannes Kepper) Date: Thu, 29 Mar 2012 17:27:33 +0200 Subject: [MEI-L] XSLT 2.0 and XPath 2.0 by Michael Kay In-Reply-To: <BBCC497C40D85642B90E9F94FC30343D01153049@GRANT.eservices.virginia.edu> References: <CAMyHAnPNKzsw7ESNxm6qvVGgx5spSO+dq80mq5ONvesL+1Y=KQ@mail.gmail.com>, <A40E0E55-1CB9-49A1-B4A8-E14124578CBE@edirom.de> <BBCC497C40D85642B90E9F94FC30343D01153049@GRANT.eservices.virginia.edu> Message-ID: <A6244AFD-83CE-498A-9471-82408D1FE9CF@edirom.de> Am 29.03.2012 um 17:23 schrieb Roland, Perry (pdr4h): > Johannes, > > We haven't written mei2musicxml.xsl yet, but have talked about it extensively. That's true. And now, as we have a strategy for flattening MEI by XSLT and / or MEISE, we can actually start working on it by providing a reduced schema of MEI that serves as filter for things that can go into MusicXML. But honestly, that's something the Technical Team should discuss _after_ the upcoming release? 
jo From stadler at edirom.de Mon Apr 9 17:00:15 2012 From: stadler at edirom.de (Peter Stadler) Date: Mon, 9 Apr 2012 17:00:15 +0200 Subject: [MEI-L] Fwd: [MUSIC-IR] ME Summer Workshop, "Introduction to MEI" References: <sympa.1333835383.32436.463@listes.ircam.fr> Message-ID: <2626C1E2-826A-4CE7-A800-4E117DC39749@edirom.de> This might be of interest to this group as well ;-) All the best Peter Anfang der weitergeleiteten E-Mail: > Von: Perry Roland <pdr4h at eservices.virginia.edu> > Datum: 7. April 2012 23:54:18 MESZ > An: music-ir at listes.ircam.fr > Betreff: [MUSIC-IR] ME Summer Workshop, "Introduction to MEI" > > > MEI SUMMER WORKSHOP > > The University of Virginia Library and the University of Paderborn are offering > an opportunity to learn about the Music Encoding Initiative (MEI), an > increasingly important tool for digital humanities music research. Spend three > days learning the fundamentals of using MEI for research, teaching, electronic > publishing, and management of digital music collections. > > "Introduction to MEI," an intensive, three-day, hands-on workshop, will be > offered Wednesday, August 22nd, 2012 through Friday, August 24th, 2012 at the > University of Virginia Library. Experts from the Music Encoding Initiative > Council will teach the workshop, during which participants will learn about MEI > history and design principles, tools for creating, editing, and rendering MEI, > and techniques for customizing the MEI schema. Each day will include lectures, > plenty of hands-on practice, and opportunities to address participant-specific > issues. Attendees are encouraged to bring example material that they would > like to encode. > > No previous experience with MEI or XML is required, but an understanding of > music notation and other markup schemes, such as HTML and TEI, will be helpful. > There are also no fees associated with this workshop, but participants must > bear travel, housing, and food costs. 
> > To apply, visit http://tinyurl.com/bs9e6oe before June 1, 2012. The number of > participants is limited, so apply early! Successful applicants will be > notified of acceptance as soon as possible after June 1. > > For more information on MEI, visit http://www.music-encoding.org. Please > address questions to info at music-encoding.org. > > GETTING THERE > The University of Virginia is located in Charlottesville, VA, 110 miles > southwest of Washington, D.C. and 68 miles west of Richmond. > > The city of Charlottesville is served by five airports: >> Charlottesville (CHO) (http://www.gocho.com/) >> Richmond International (RIC) (http://www.flyrichmond.com/) >> Washington-Dulles (IAD)(http://www.metwashairports.com/dulles/dulles.htm) >> Reagan National (DCA) (http://www.metwashairports.com/reagan/reagan.htm) >> Baltimore Washington International (BWI) (http://www.bwiairport.com) > > The Amtrak station (http://www.amtrak.com) is conveniently located one > half-mile from the University. > > A map showing UVA's libraries and driving directions to Alderman Library are > available at http://www2.lib.virginia.edu/map/. Additional maps of the > University, such as accessibility and University Transit Service maps, are > available at http://www.virginia.edu/Map/. > > __________________________ > Perry Roland > Music Library > University of Virginia > P. O. Box 400175 > Charlottesville, VA 22904 > 434-982-2702 (w) > pdr4h (at) virginia (dot) edu > > From raffaeleviglianti at gmail.com Thu Apr 12 12:48:52 2012 From: raffaeleviglianti at gmail.com (Raffaele Viglianti) Date: Thu, 12 Apr 2012 11:48:52 +0100 Subject: [MEI-L] Digital Humanities @ Oxford Summer School Message-ID: <CAMyHAnPPQzyFRFoWx216_eS=axBUd9HNbL5u5D_erYB2K9E4mw@mail.gmail.com> Dear all, The Digital Humanities at Oxford Summer School is now open for registration. 
http://digital.humanities.ox.ac.uk/dhoxss/ Delegates will be introduced to a range of topics suitable for researchers, project managers, research assistants, and students who are interested in the creation, management, or publication of digital data in the humanities. The course is very XML- and TEI-centric, but I have been invited to provide a training session about music encoding and MEI: http://digital.humanities.ox.ac.uk/dhoxss/programme.html#session4 Please circulate this to anyone who you think might be interested. Many thanks and best regards, Raffaele -------------- next part -------------- An HTML attachment was scrubbed... URL: <http://lists.uni-paderborn.de/pipermail/mei-l/attachments/20120412/5d3fbe69/attachment.html> From maja.hartwig at gmx.de Thu Apr 19 09:33:47 2012 From: maja.hartwig at gmx.de (Maja Hartwig) Date: Thu, 19 Apr 2012 09:33:47 +0200 Subject: [MEI-L] analysis Message-ID: <0FFC1623-C25E-4EA1-9A4C-6839D2F64305@gmx.de> Dear List, writing the guidelines of the analysis module, I am wondering about the use of the @hfunc. It is allowed within a <note> for describing a note as a "keynote" or "root" or anything else. But the @hfunc is not permitted within the <chord>, although the @mfunc e.g. is allowed. In my opinion it would make sense to use the @hfunc also on chords for describing its function in a musical work, such as a "tonic" or something like that. So I think the att.chord.anl should be memberOf att.harmonicfunction. Is it a bug or any other opinions? Best regards, Maja -------------- next part -------------- An HTML attachment was scrubbed...
URL: <http://lists.uni-paderborn.de/pipermail/mei-l/attachments/20120419/62510e46/attachment.html> From pdr4h at eservices.virginia.edu Mon Apr 23 21:11:00 2012 From: pdr4h at eservices.virginia.edu (Roland, Perry (pdr4h)) Date: Mon, 23 Apr 2012 19:11:00 +0000 Subject: [MEI-L] analysis In-Reply-To: <0FFC1623-C25E-4EA1-9A4C-6839D2F64305@gmx.de> References: <0FFC1623-C25E-4EA1-9A4C-6839D2F64305@gmx.de> Message-ID: <BBCC497C40D85642B90E9F94FC30343D011646D1@GRANT.eservices.virginia.edu> Hi, Maja, We could add @hfunc to <chord>, but I've always thought that it would duplicate the function of <harm>, which is to assign harmonic labels. I could be wrong though, if <harm> were defined only to be used for transcription and not analysis. Anyone else have thoughts on this? -- p. __________________________ Perry Roland Music Library University of Virginia P. O. Box 400175 Charlottesville, VA 22904 434-982-2702 (w) pdr4h (at) virginia (dot) edu ________________________________ From: mei-l-bounces at lists.uni-paderborn.de [mei-l-bounces at lists.uni-paderborn.de] on behalf of Maja Hartwig [maja.hartwig at gmx.de] Sent: Thursday, April 19, 2012 3:33 AM To: Music Encoding Initiative Subject: [MEI-L] analysis Dear List, writing the guidelines of the analysis module, I am wondering about the use of the @hfunc. It is allowed within a <note> for describing a note as a "keynote" or "root" or anything else. But the @hfunc is not permitted wtihin the <chord>, although the @mfunc e.g. is allowed. In my opinion it would make sense to use the @hfunc also on chords for describing the function of it in a musical work, such as a "tonic" or something like that. So I think the att.chord.anl should be memberOf att.harmonicfunction. Is it a bug or any other opinions? Best regards, Maja -------------- next part -------------- An HTML attachment was scrubbed... 
URL: <http://lists.uni-paderborn.de/pipermail/mei-l/attachments/20120423/7e4a0c1e/attachment.html> From kepper at edirom.de Mon Apr 23 22:30:37 2012 From: kepper at edirom.de (Johannes Kepper) Date: Mon, 23 Apr 2012 22:30:37 +0200 Subject: [MEI-L] analysis In-Reply-To: <BBCC497C40D85642B90E9F94FC30343D011646D1@GRANT.eservices.virginia.edu> References: <0FFC1623-C25E-4EA1-9A4C-6839D2F64305@gmx.de> <BBCC497C40D85642B90E9F94FC30343D011646D1@GRANT.eservices.virginia.edu> Message-ID: <AE231485-8300-43BF-B9B6-D315D7810100@edirom.de> Hi both, I would argue that for consistency's sake we should add @hfunc to chords. If one does his analysis using this attribute on notes, why should he switch to <harm> on chords? jo Am 23.04.2012 um 21:11 schrieb Roland, Perry (pdr4h): > Hi, Maja, > > We could add @hfunc to <chord>, but I've always thought that it would duplicate the function of <harm>, which is to assign harmonic labels. I could be wrong though, if <harm> were defined only to be used for transcription and not analysis. Anyone else have thoughts on this? > > -- > p. > > > __________________________ > Perry Roland > Music Library > University of Virginia > P. O. Box 400175 > Charlottesville, VA 22904 > 434-982-2702 (w) > pdr4h (at) virginia (dot) edu > From: mei-l-bounces at lists.uni-paderborn.de [mei-l-bounces at lists.uni-paderborn.de] on behalf of Maja Hartwig [maja.hartwig at gmx.de] > Sent: Thursday, April 19, 2012 3:33 AM > To: Music Encoding Initiative > Subject: [MEI-L] analysis > > Dear List, > > writing the guidelines of the analysis module, I am wondering about the use of the @hfunc. > It is allowed within a <note> for describing a note as a "keynote" or "root" or anything else. > But the @hfunc is not permitted wtihin the <chord>, although the @mfunc e.g. is allowed. > In my opinion it would make sense to use the @hfunc also on chords for describing the function of it > in a musical work, such as a "tonic" or something like that. 
> So I think the att.chord.anl should be memberOf att.harmonicfunction. > Is it a bug or any other opinions? > > Best regards, > Maja > > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l _______________________________________________ mei-l mailing list mei-l at lists.uni-paderborn.de https://lists.uni-paderborn.de/mailman/listinfo/mei-l From esfield at stanford.edu Mon Apr 23 22:56:21 2012 From: esfield at stanford.edu (Eleanor Selfridge-Field) Date: Mon, 23 Apr 2012 13:56:21 -0700 (PDT) Subject: [MEI-L] analysis In-Reply-To: <BBCC497C40D85642B90E9F94FC30343D011646D1@GRANT.eservices.virginia.edu> References: <0FFC1623-C25E-4EA1-9A4C-6839D2F64305@gmx.de> <BBCC497C40D85642B90E9F94FC30343D011646D1@GRANT.eservices.virginia.edu> Message-ID: <b6f430a0.00001e00.0000002a@CCARH-ADM-2.su.win.stanford.edu> I had intended to respond to Maja: I'm so accustomed to Humdrum, which allows multiple labels to be assigned, that I find the either/or question difficult. In the Humdrum files themselves, multiple harmonic labels may be used (and have been very useful in comparative studies). A single chord can have one function in the designated key but quite another in a modulatory context. That distinction enables Craig to generate keyscapes reconciled to one key (good for comparison between works) or to see key-specific usage (good for understanding how individual composers operate). There is also chord quality: major/minor/augmented/diminished, plus inversion types, plus extended harmonies (7th, 9th, et al.). How much of all this MEI wants will depend on whether or not it wants to support harmonic analysis, basso continuo realization (let's say in sound; but don't stake too much on this; the printed labels are often full of errors and impossibilities), etc. For now, it could be left fairly simple, provided that nothing interferes with expansion of capabilities later.
Eleanor Eleanor Selfridge-Field Consulting Professor, Music (and, by courtesy, Symbolic Systems) Braun Music Center #129 Stanford University Stanford, CA 94305-3076, USA <http://www.stanford.edu/~esfield/> http://www.stanford.edu/~esfield/ From: mei-l-bounces at lists.uni-paderborn.de [mailto:mei-l-bounces at lists.uni-paderborn.de] On Behalf Of Roland, Perry (pdr4h) Sent: Monday, April 23, 2012 12:11 PM To: Music Encoding Initiative Subject: Re: [MEI-L] analysis Hi, Maja, We could add @hfunc to <chord>, but I've always thought that it would duplicate the function of <harm>, which is to assign harmonic labels. I could be wrong though, if <harm> were defined only to be used for transcription and not analysis. Anyone else have thoughts on this? -- p. __________________________ Perry Roland Music Library University of Virginia P. O. Box 400175 Charlottesville, VA 22904 434-982-2702 (w) pdr4h (at) virginia (dot) edu _____ From: mei-l-bounces at lists.uni-paderborn.de [mei-l-bounces at lists.uni-paderborn.de] on behalf of Maja Hartwig [maja.hartwig at gmx.de] Sent: Thursday, April 19, 2012 3:33 AM To: Music Encoding Initiative Subject: [MEI-L] analysis Dear List, writing the guidelines of the analysis module, I am wondering about the use of the @hfunc. It is allowed within a <note> for describing a note as a "keynote" or "root" or anything else. But the @hfunc is not permitted wtihin the <chord>, although the @mfunc e.g. is allowed. In my opinion it would make sense to use the @hfunc also on chords for describing the function of it in a musical work, such as a "tonic" or something like that. So I think the att.chord.anl should be memberOf att.harmonicfunction. Is it a bug or any other opinions? Best regards, Maja -------------- next part -------------- An HTML attachment was scrubbed... 
URL: <http://lists.uni-paderborn.de/pipermail/mei-l/attachments/20120423/af245648/attachment.html> From pdr4h at eservices.virginia.edu Mon Apr 23 23:08:25 2012 From: pdr4h at eservices.virginia.edu (Roland, Perry (pdr4h)) Date: Mon, 23 Apr 2012 21:08:25 +0000 Subject: [MEI-L] analysis In-Reply-To: <AE231485-8300-43BF-B9B6-D315D7810100@edirom.de> References: <0FFC1623-C25E-4EA1-9A4C-6839D2F64305@gmx.de> <BBCC497C40D85642B90E9F94FC30343D011646D1@GRANT.eservices.virginia.edu>, <AE231485-8300-43BF-B9B6-D315D7810100@edirom.de> Message-ID: <BBCC497C40D85642B90E9F94FC30343D0116471F@GRANT.eservices.virginia.edu> Johannes, Your argument is convincing. But, there's also the possibility of moving @mfunc to an element similar to <harm>. Of course, this would require more extended planning and execution. -- p. __________________________ Perry Roland Music Library University of Virginia P. O. Box 400175 Charlottesville, VA 22904 434-982-2702 (w) pdr4h (at) virginia (dot) edu ________________________________________ From: mei-l-bounces at lists.uni-paderborn.de [mei-l-bounces at lists.uni-paderborn.de] on behalf of Johannes Kepper [kepper at edirom.de] Sent: Monday, April 23, 2012 4:30 PM To: Music Encoding Initiative Subject: Re: [MEI-L] analysis Hi both, I would argue that for consistency's sake we should add @hfunc to chords. If one does his analysis using this attribute on notes, why should he switch to <harm> on chords? jo Am 23.04.2012 um 21:11 schrieb Roland, Perry (pdr4h): > Hi, Maja, > > We could add @hfunc to <chord>, but I've always thought that it would duplicate the function of <harm>, which is to assign harmonic labels. I could be wrong though, if <harm> were defined only to be used for transcription and not analysis. Anyone else have thoughts on this? > > -- > p. > > > __________________________ > Perry Roland > Music Library > University of Virginia > P. O. 
Box 400175 > Charlottesville, VA 22904 > 434-982-2702 (w) > pdr4h (at) virginia (dot) edu > From: mei-l-bounces at lists.uni-paderborn.de [mei-l-bounces at lists.uni-paderborn.de] on behalf of Maja Hartwig [maja.hartwig at gmx.de] > Sent: Thursday, April 19, 2012 3:33 AM > To: Music Encoding Initiative > Subject: [MEI-L] analysis > > Dear List, > > writing the guidelines of the analysis module, I am wondering about the use of the @hfunc. > It is allowed within a <note> for describing a note as a "keynote" or "root" or anything else. > But the @hfunc is not permitted wtihin the <chord>, although the @mfunc e.g. is allowed. > In my opinion it would make sense to use the @hfunc also on chords for describing the function of it > in a musical work, such as a "tonic" or something like that. > So I think the att.chord.anl should be memberOf att.harmonicfunction. > Is it a bug or any other opinions? > > Best regards, > Maja > > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l _______________________________________________ mei-l mailing list mei-l at lists.uni-paderborn.de https://lists.uni-paderborn.de/mailman/listinfo/mei-l From pdr4h at eservices.virginia.edu Mon Apr 23 23:26:06 2012 From: pdr4h at eservices.virginia.edu (Roland, Perry (pdr4h)) Date: Mon, 23 Apr 2012 21:26:06 +0000 Subject: [MEI-L] analysis In-Reply-To: <b6f430a0.00001e00.0000002a@CCARH-ADM-2.su.win.stanford.edu> References: <0FFC1623-C25E-4EA1-9A4C-6839D2F64305@gmx.de> <BBCC497C40D85642B90E9F94FC30343D011646D1@GRANT.eservices.virginia.edu>, <b6f430a0.00001e00.0000002a@CCARH-ADM-2.su.win.stanford.edu> Message-ID: <BBCC497C40D85642B90E9F94FC30343D01164729@GRANT.eservices.virginia.edu> Eleanor, By using multiple <harm> elements, one can assign multiple harmonic labels, even the incorrect, printed ones you refer to. Using @hfunc, however, (should we add it) one could assign only one. 
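The contrast Perry draws could be encoded roughly as follows. This is a sketch with invented values: the attachment of <harm> via @startid is assumed, and @hfunc on <chord> is only the proposal under discussion, not part of the frozen schema:

```xml
<!-- Several harmonic labels for one chord, via separate <harm> elements -->
<chord xml:id="c1" dur="4">
  <note pname="c" oct="4"/>
  <note pname="e" oct="4" accid="f"/>
  <note pname="g" oct="4"/>
</chord>
<harm startid="#c1">Cm</harm>  <!-- chord label -->
<harm startid="#c1">i</harm>   <!-- function read in C minor -->
<harm startid="#c1">vi</harm>  <!-- the same chord read in E-flat major -->

<!-- The attribute route would carry exactly one label per chord -->
<chord xml:id="c2" dur="4" hfunc="tonic">
  <!-- notes as above -->
</chord>
```

The element route scales to as many concurrent readings as an analysis needs; the attribute route keeps the encoding compact but commits the encoder to a single interpretation.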
I think the choice is up to the encoder. There are many reasons why one would choose an attribute over an element or vice versa. For some encoders, a simple file structure where only one label can be assigned and/or values can be easily constrained is important. For others, in order to perform the kind of analysis you mention, multiple labels will be the way to go. Neither path hinders further development. -- p. __________________________ Perry Roland Music Library University of Virginia P. O. Box 400175 Charlottesville, VA 22904 434-982-2702 (w) pdr4h (at) virginia (dot) edu ________________________________ From: mei-l-bounces+pdr4h=virginia.edu at lists.uni-paderborn.de [mei-l-bounces+pdr4h=virginia.edu at lists.uni-paderborn.de] on behalf of Eleanor Selfridge-Field [esfield at stanford.edu] Sent: Monday, April 23, 2012 4:56 PM To: 'Music Encoding Initiative' Subject: Re: [MEI-L] analysis I had intended to respond to Maja: I'm so accustomed to Humdrum, which allows multiple labels to be assigned, that I find the either/or question difficult. In the Humdrum files themselves, multiple harmonic labels may be used (and have been very useful in comparative studies). A single chord can have one function in the designated key but quite another in a modulatory context. That distinction enables Craig to generate keyscapes reconciled to one key (good for comparison between works) or to see key-specific usage (good for understanding how individual composers operate). There is also chord quality: major/minor/augmented/diminished, plus inversion types, plus extended harmonies (7th, 9th, et al.). How much of all this MEI wants will depend on whether or not it wants to support harmonic analysis, basso continuo realization (let's say in sound; but don't stake too much on this; the printed labels are often full of errors and impossibilities), etc. For now, it could be left fairly simple, provided that nothing interferes with expansion of capabilities later.
Eleanor Eleanor Selfridge-Field Consulting Professor, Music (and, by courtesy, Symbolic Systems) Braun Music Center #129 Stanford University Stanford, CA 94305-3076, USA http://www.stanford.edu/~esfield/ From: mei-l-bounces at lists.uni-paderborn.de [mailto:mei-l-bounces at lists.uni-paderborn.de] On Behalf Of Roland, Perry (pdr4h) Sent: Monday, April 23, 2012 12:11 PM To: Music Encoding Initiative Subject: Re: [MEI-L] analysis Hi, Maja, We could add @hfunc to <chord>, but I've always thought that it would duplicate the function of <harm>, which is to assign harmonic labels. I could be wrong though, if <harm> were defined only to be used for transcription and not analysis. Anyone else have thoughts on this? -- p. __________________________ Perry Roland Music Library University of Virginia P. O. Box 400175 Charlottesville, VA 22904 434-982-2702 (w) pdr4h (at) virginia (dot) edu ________________________________ From: mei-l-bounces at lists.uni-paderborn.de<mailto:mei-l-bounces at lists.uni-paderborn.de> [mei-l-bounces at lists.uni-paderborn.de] on behalf of Maja Hartwig [maja.hartwig at gmx.de] Sent: Thursday, April 19, 2012 3:33 AM To: Music Encoding Initiative Subject: [MEI-L] analysis Dear List, writing the guidelines of the analysis module, I am wondering about the use of the @hfunc. It is allowed within a <note> for describing a note as a "keynote" or "root" or anything else. But the @hfunc is not permitted wtihin the <chord>, although the @mfunc e.g. is allowed. In my opinion it would make sense to use the @hfunc also on chords for describing the function of it in a musical work, such as a "tonic" or something like that. So I think the att.chord.anl should be memberOf att.harmonicfunction. Is it a bug or any other opinions? Best regards, Maja -------------- next part -------------- An HTML attachment was scrubbed... 
URL: <http://lists.uni-paderborn.de/pipermail/mei-l/attachments/20120423/af10b094/attachment.html> From pdr4h at eservices.virginia.edu Mon Apr 23 23:41:04 2012 From: pdr4h at eservices.virginia.edu (Roland, Perry (pdr4h)) Date: Mon, 23 Apr 2012 21:41:04 +0000 Subject: [MEI-L] analysis In-Reply-To: <AE231485-8300-43BF-B9B6-D315D7810100@edirom.de> References: <0FFC1623-C25E-4EA1-9A4C-6839D2F64305@gmx.de> <BBCC497C40D85642B90E9F94FC30343D011646D1@GRANT.eservices.virginia.edu>, <AE231485-8300-43BF-B9B6-D315D7810100@edirom.de> Message-ID: <BBCC497C40D85642B90E9F94FC30343D01164742@GRANT.eservices.virginia.edu> Given the current definition of hfunc, one should not "do his analysis" of chords using the hfunc attribute. The att.harmonicfunction class is for attributes describing the harmonic function *of a single pitch* in a chord. It was intended for labels such as "root", "third", "fifth", etc. This is why it's available on note but not on chord.

<classSpec ident="att.harmonicfunction" module="MEI.analysis" type="atts">
  <desc>Attributes describing the harmonic function of a single pitch</desc>
  <attList>
    <attDef ident="hfunc" usage="opt">
      <desc>describes harmonic function in any convenient typology.</desc>
      <datatype>
        <rng:data type="NMTOKEN"/>
      </datatype>
    </attDef>
  </attList>
</classSpec>

Chord labels, like "Cm7", or indications of harmonic functionality, like "ii7", belong in <harm>, unless we expand the definition of att.harmonicfunction and, in all likelihood, its datatype. It seems to me that what we need right now is better documentation in the Guidelines, not more changes to the schema. We can consider this topic again at a later date. -- p. __________________________ Perry Roland Music Library University of Virginia P. O.
Box 400175 Charlottesville, VA 22904 434-982-2702 (w) pdr4h (at) virginia (dot) edu ________________________________________ From: mei-l-bounces at lists.uni-paderborn.de [mei-l-bounces at lists.uni-paderborn.de] on behalf of Johannes Kepper [kepper at edirom.de] Sent: Monday, April 23, 2012 4:30 PM To: Music Encoding Initiative Subject: Re: [MEI-L] analysis Hi both, I would argue that for consistency's sake we should add @hfunc to chords. If one does his analysis using this attribute on notes, why should he switch to <harm> on chords? jo Am 23.04.2012 um 21:11 schrieb Roland, Perry (pdr4h): > Hi, Maja, > > We could add @hfunc to <chord>, but I've always thought that it would duplicate the function of <harm>, which is to assign harmonic labels. I could be wrong though, if <harm> were defined only to be used for transcription and not analysis. Anyone else have thoughts on this? > > -- > p. > > > __________________________ > Perry Roland > Music Library > University of Virginia > P. O. Box 400175 > Charlottesville, VA 22904 > 434-982-2702 (w) > pdr4h (at) virginia (dot) edu > From: mei-l-bounces at lists.uni-paderborn.de [mei-l-bounces at lists.uni-paderborn.de] on behalf of Maja Hartwig [maja.hartwig at gmx.de] > Sent: Thursday, April 19, 2012 3:33 AM > To: Music Encoding Initiative > Subject: [MEI-L] analysis > > Dear List, > > writing the guidelines of the analysis module, I am wondering about the use of the @hfunc. > It is allowed within a <note> for describing a note as a "keynote" or "root" or anything else. > But the @hfunc is not permitted wtihin the <chord>, although the @mfunc e.g. is allowed. > In my opinion it would make sense to use the @hfunc also on chords for describing the function of it > in a musical work, such as a "tonic" or something like that. > So I think the att.chord.anl should be memberOf att.harmonicfunction. > Is it a bug or any other opinions? 
> > Best regards, > Maja > > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l _______________________________________________ mei-l mailing list mei-l at lists.uni-paderborn.de https://lists.uni-paderborn.de/mailman/listinfo/mei-l From maja.hartwig at gmx.de Tue Apr 24 09:32:02 2012 From: maja.hartwig at gmx.de (Maja Hartwig) Date: Tue, 24 Apr 2012 09:32:02 +0200 Subject: [MEI-L] analysis In-Reply-To: <BBCC497C40D85642B90E9F94FC30343D01164742@GRANT.eservices.virginia.edu> References: <0FFC1623-C25E-4EA1-9A4C-6839D2F64305@gmx.de> <BBCC497C40D85642B90E9F94FC30343D011646D1@GRANT.eservices.virginia.edu>, <AE231485-8300-43BF-B9B6-D315D7810100@edirom.de> <BBCC497C40D85642B90E9F94FC30343D01164742@GRANT.eservices.virginia.edu> Message-ID: <4309C00A-3BA5-4BD8-A8FD-1295A54CBED2@gmx.de> Hi all, thanks for the responses. I understand that we don't want to make more changes to the schema. I would also use the @hfunc on a note for labels such as "root" or "keynote". But I think harmonic functions of chords encoded with the @hfunc should also be possible, because it would allow describing the chord's function in a harmonic context, such as "tonic" and so on, without switching to the harm element. In my opinion the <harm> is used to give the chords a name but not a function, or to describe figured bass numbers. I also think that it should be the decision of the encoder which way he uses for his analysis. But isn't it a good thing to have the choice between more than one option? I would describe the labels "third" and "fifth" etc. with the @inth. Or what is @inth intended for? The question came up when I was writing the analysis chapter of the guidelines and I would like to avoid any misunderstandings! Best regards, Maja P.S.: And why again is the @mfunc allowed on a chord?
:-) Am 23.04.2012 um 23:41 schrieb Roland, Perry (pdr4h): > Given the current definition of hfunc one should not "do his analysis" of chords using the hfunc attribute. > > The att.harmonicfunction class is for attributes describing the harmonic function *of a single pitch* in a chord. It was intended for labels such as "root", "third", "fifth", etc. This is why it's available on note but not on chord. > > <classSpec ident="att.harmonicfunction" module="MEI.analysis" type="atts"> > <desc>Attributes describing the harmonic function of a single pitch</desc> > <attList> > <attDef ident="hfunc" usage="opt"> > <desc>describes harmonic function in any convenient typology.</desc> > <datatype> > <rng:data type="NMTOKEN"/> > </datatype> > </attDef> > </attList> > </classSpec> > > Chord labels, like "Cm7", or indications of harmonic functionality, like "ii7", belong in <harm>, unless we expand the definition of att.harmonicfunction and, in all likelihood, its datatype. > > It seems to me that what we need right now is better documentation in the Guidelines, not more changes to the schema. We can consider this topic again at a later date. > > -- > p. > > > > __________________________ > Perry Roland > Music Library > University of Virginia > P. O. Box 400175 > Charlottesville, VA 22904 > 434-982-2702 (w) > pdr4h (at) virginia (dot) edu > ________________________________________ > From: mei-l-bounces at lists.uni-paderborn.de [mei-l-bounces at lists.uni-paderborn.de] on behalf of Johannes Kepper [kepper at edirom.de] > Sent: Monday, April 23, 2012 4:30 PM > To: Music Encoding Initiative > Subject: Re: [MEI-L] analysis > > Hi both, > > I would argue that for consistency's sake we should add @hfunc to chords. If one does his analysis using this attribute on notes, why should he switch to <harm> on chords? 
> > jo > > > Am 23.04.2012 um 21:11 schrieb Roland, Perry (pdr4h): > >> Hi, Maja, >> >> We could add @hfunc to <chord>, but I've always thought that it would duplicate the function of <harm>, which is to assign harmonic labels. I could be wrong though, if <harm> were defined only to be used for transcription and not analysis. Anyone else have thoughts on this? >> >> -- >> p. >> >> >> __________________________ >> Perry Roland >> Music Library >> University of Virginia >> P. O. Box 400175 >> Charlottesville, VA 22904 >> 434-982-2702 (w) >> pdr4h (at) virginia (dot) edu >> From: mei-l-bounces at lists.uni-paderborn.de [mei-l-bounces at lists.uni-paderborn.de] on behalf of Maja Hartwig [maja.hartwig at gmx.de] >> Sent: Thursday, April 19, 2012 3:33 AM >> To: Music Encoding Initiative >> Subject: [MEI-L] analysis >> >> Dear List, >> >> writing the guidelines of the analysis module, I am wondering about the use of the @hfunc. >> It is allowed within a <note> for describing a note as a "keynote" or "root" or anything else. >> But the @hfunc is not permitted wtihin the <chord>, although the @mfunc e.g. is allowed. >> In my opinion it would make sense to use the @hfunc also on chords for describing the function of it >> in a musical work, such as a "tonic" or something like that. >> So I think the att.chord.anl should be memberOf att.harmonicfunction. >> Is it a bug or any other opinions? 
>> >> Best regards, >> Maja >> >> >> _______________________________________________ >> mei-l mailing list >> mei-l at lists.uni-paderborn.de >> https://lists.uni-paderborn.de/mailman/listinfo/mei-l > > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l From pdr4h at eservices.virginia.edu Tue Apr 24 14:31:20 2012 From: pdr4h at eservices.virginia.edu (Roland, Perry (pdr4h)) Date: Tue, 24 Apr 2012 12:31:20 +0000 Subject: [MEI-L] analysis In-Reply-To: <4309C00A-3BA5-4BD8-A8FD-1295A54CBED2@gmx.de> References: <0FFC1623-C25E-4EA1-9A4C-6839D2F64305@gmx.de> <BBCC497C40D85642B90E9F94FC30343D011646D1@GRANT.eservices.virginia.edu>, <AE231485-8300-43BF-B9B6-D315D7810100@edirom.de> <BBCC497C40D85642B90E9F94FC30343D01164742@GRANT.eservices.virginia.edu>, <4309C00A-3BA5-4BD8-A8FD-1295A54CBED2@gmx.de> Message-ID: <BBCC497C40D85642B90E9F94FC30343D01164854@GRANT.eservices.virginia.edu> Maja, @inth was intended to carry numeric values (half-steps above the root), even though it seems that intention didn't get expressed properly in the move from the original RNG to ODD. The datatype of @inth should be a list of numbers, not NMTOKENS. This is an error that should be corrected even now in the frozen 2012 release. Though its use would be somewhat rare, @mfunc is allowed on chords because they can have a melodic function as well as a harmonic one, as in the case of so-called "passing chords". I suppose this could be subsumed into @hfunc, now that I think about it. That is, a chord's harmonic function could be "non-harmonic". 
If @hfunc were allowed on chords, it should have exactly the function (no pun intended) that you describe -- labeling chords with a general label, such as "tonic", "dominant", etc., not with chord labels transcribed from the document, like "Cm7". I am still conflicted, however, whether "functional harmony" labeled with Roman numerals goes in <harm> or @hfunc. The answer to your question about choices depends on who's answering. :-) Choices are often a good thing, but not when they duplicate each other -- "Would you like vanilla ice or vanilla ice cream?" There should be a difference between how @hfunc is used and how <harm> is used. If there's no difference, then we probably don't need both. -- p. __________________________ Perry Roland Music Library University of Virginia P. O. Box 400175 Charlottesville, VA 22904 434-982-2702 (w) pdr4h (at) virginia (dot) edu ________________________________________ From: mei-l-bounces at lists.uni-paderborn.de [mei-l-bounces at lists.uni-paderborn.de] on behalf of Maja Hartwig [maja.hartwig at gmx.de] Sent: Tuesday, April 24, 2012 3:32 AM To: Music Encoding Initiative Subject: Re: [MEI-L] analysis Hi all, thanks for the responses. I understand that we don?t want to make more changes to the schema. I would also use the @hfunc on a note for labels such as "root" or "keynote". But I think harmonic functions of chords encoded with the @hfunc should also be possible, because it would allow to describe the chords function in a harmonic context such as "tonic" and so on, without switching to the harm element. In my opinion the <harm> is used to give the chords a name but not a function, or to describe figured bass numbers. I also think, that it should be the decision of the encoder which way he uses for his analysis. But isn?t it a good thing to have the choice between more than one option? I would describe the labels "third" and "fifth" etc. with the @inth. Or for what is the use of @inth intended? 
The question came up when I was writing the analysis chapter of the guidelines and I would like to avoid any misunderstandings! Best regards, Maja P.S.: And why again is the @mfunc allowed on a chord? :-) Am 23.04.2012 um 23:41 schrieb Roland, Perry (pdr4h): > Given the current definition of hfunc one should not "do his analysis" of chords using the hfunc attribute. > > The att.harmonicfunction class is for attributes describing the harmonic function *of a single pitch* in a chord. It was intended for labels such as "root", "third", "fifth", etc. This is why it's available on note but not on chord. > > <classSpec ident="att.harmonicfunction" module="MEI.analysis" type="atts"> > <desc>Attributes describing the harmonic function of a single pitch</desc> > <attList> > <attDef ident="hfunc" usage="opt"> > <desc>describes harmonic function in any convenient typology.</desc> > <datatype> > <rng:data type="NMTOKEN"/> > </datatype> > </attDef> > </attList> > </classSpec> > > Chord labels, like "Cm7", or indications of harmonic functionality, like "ii7", belong in <harm>, unless we expand the definition of att.harmonicfunction and, in all likelihood, its datatype. > > It seems to me that we need right now is better documentation in the Guidelines, not more changes to the schema. We can consider this topic again at a later date. > > -- > p. > > > > __________________________ > Perry Roland > Music Library > University of Virginia > P. O. Box 400175 > Charlottesville, VA 22904 > 434-982-2702 (w) > pdr4h (at) virginia (dot) edu > ________________________________________ > From: mei-l-bounces at lists.uni-paderborn.de [mei-l-bounces at lists.uni-paderborn.de] on behalf of Johannes Kepper [kepper at edirom.de] > Sent: Monday, April 23, 2012 4:30 PM > To: Music Encoding Initiative > Subject: Re: [MEI-L] analysis > > Hi both, > > I would argue that for consistency's sake we should add @hfunc to chords. 
If one does his analysis using this attribute on notes, why should he switch to <harm> on chords? > > jo > > > Am 23.04.2012 um 21:11 schrieb Roland, Perry (pdr4h): > >> Hi, Maja, >> >> We could add @hfunc to <chord>, but I've always thought that it would duplicate the function of <harm>, which is to assign harmonic labels. I could be wrong though, if <harm> were defined only to be used for transcription and not analysis. Anyone else have thoughts on this? >> >> -- >> p. >> >> >> __________________________ >> Perry Roland >> Music Library >> University of Virginia >> P. O. Box 400175 >> Charlottesville, VA 22904 >> 434-982-2702 (w) >> pdr4h (at) virginia (dot) edu >> From: mei-l-bounces at lists.uni-paderborn.de [mei-l-bounces at lists.uni-paderborn.de] on behalf of Maja Hartwig [maja.hartwig at gmx.de] >> Sent: Thursday, April 19, 2012 3:33 AM >> To: Music Encoding Initiative >> Subject: [MEI-L] analysis >> >> Dear List, >> >> writing the guidelines of the analysis module, I am wondering about the use of the @hfunc. >> It is allowed within a <note> for describing a note as a "keynote" or "root" or anything else. >> But the @hfunc is not permitted wtihin the <chord>, although the @mfunc e.g. is allowed. >> In my opinion it would make sense to use the @hfunc also on chords for describing the function of it >> in a musical work, such as a "tonic" or something like that. >> So I think the att.chord.anl should be memberOf att.harmonicfunction. >> Is it a bug or any other opinions? 
>> >> Best regards, >> Maja >> >> >> _______________________________________________ >> mei-l mailing list >> mei-l at lists.uni-paderborn.de >> https://lists.uni-paderborn.de/mailman/listinfo/mei-l > > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l _______________________________________________ mei-l mailing list mei-l at lists.uni-paderborn.de https://lists.uni-paderborn.de/mailman/listinfo/mei-l From pdr4h at eservices.virginia.edu Tue Apr 24 15:24:15 2012 From: pdr4h at eservices.virginia.edu (Roland, Perry (pdr4h)) Date: Tue, 24 Apr 2012 13:24:15 +0000 Subject: [MEI-L] analysis In-Reply-To: <BBCC497C40D85642B90E9F94FC30343D01164854@GRANT.eservices.virginia.edu> References: <0FFC1623-C25E-4EA1-9A4C-6839D2F64305@gmx.de> <BBCC497C40D85642B90E9F94FC30343D011646D1@GRANT.eservices.virginia.edu>, <AE231485-8300-43BF-B9B6-D315D7810100@edirom.de> <BBCC497C40D85642B90E9F94FC30343D01164742@GRANT.eservices.virginia.edu>, <4309C00A-3BA5-4BD8-A8FD-1295A54CBED2@gmx.de>, <BBCC497C40D85642B90E9F94FC30343D01164854@GRANT.eservices.virginia.edu> Message-ID: <BBCC497C40D85642B90E9F94FC30343D0116489B@GRANT.eservices.virginia.edu> The last paragraph should read: "The answer to your question about choices depends on who's answering. :-) Choices are often a good thing, but not when they duplicate each other -- "Would you like vanilla ice cream or vanilla ice cream?" There should be a difference between how @hfunc is used and how <harm> is used. If there's no difference, then we probably don't need both." I should stop using analogies if I can't get 'em right. :-) -- p. __________________________ Perry Roland Music Library University of Virginia P. O. 
Box 400175 Charlottesville, VA 22904 434-982-2702 (w) pdr4h (at) virginia (dot) edu ________________________________________ From: mei-l-bounces at lists.uni-paderborn.de [mei-l-bounces at lists.uni-paderborn.de] on behalf of Roland, Perry (pdr4h) [pdr4h at eservices.virginia.edu] Sent: Tuesday, April 24, 2012 8:31 AM To: Music Encoding Initiative Subject: Re: [MEI-L] analysis Maja, @inth was intended to carry numeric values (half-steps above the root), even though it seems that intention didn't get expressed properly in the move from the original RNG to ODD. The datatype of @inth should be a list of numbers, not NMTOKENS. This is an error that should be corrected even now in the frozen 2012 release. Though its use would be somewhat rare, @mfunc is allowed on chords because they can have a melodic function as well as a harmonic one, as in the case of so-called "passing chords". I suppose this could be subsumed into @hfunc, now that I think about it. That is, a chord's harmonic function could be "non-harmonic". If @hfunc were allowed on chords, it should have exactly the function (no pun intended) that you describe -- labeling chords with a general label, such as "tonic", "dominant", etc., not with chord labels transcribed from the document, like "Cm7". I am still conflicted, however, whether "functional harmony" labeled with Roman numerals goes in <harm> or @hfunc. The answer to your question about choices depends on who's answering. :-) Choices are often a good thing, but not when they duplicate each other -- "Would you like vanilla ice or vanilla ice cream?" There should be a difference between how @hfunc is used and how <harm> is used. If there's no difference, then we probably don't need both. -- p. __________________________ Perry Roland Music Library University of Virginia P. O. 
Box 400175 Charlottesville, VA 22904 434-982-2702 (w) pdr4h (at) virginia (dot) edu ________________________________________ From: mei-l-bounces at lists.uni-paderborn.de [mei-l-bounces at lists.uni-paderborn.de] on behalf of Maja Hartwig [maja.hartwig at gmx.de] Sent: Tuesday, April 24, 2012 3:32 AM To: Music Encoding Initiative Subject: Re: [MEI-L] analysis Hi all, thanks for the responses. I understand that we don?t want to make more changes to the schema. I would also use the @hfunc on a note for labels such as "root" or "keynote". But I think harmonic functions of chords encoded with the @hfunc should also be possible, because it would allow to describe the chords function in a harmonic context such as "tonic" and so on, without switching to the harm element. In my opinion the <harm> is used to give the chords a name but not a function, or to describe figured bass numbers. I also think, that it should be the decision of the encoder which way he uses for his analysis. But isn?t it a good thing to have the choice between more than one option? I would describe the labels "third" and "fifth" etc. with the @inth. Or for what is the use of @inth intended? The question came up when I was writing the analysis chapter of the guidelines and I would like to avoid any misunderstandings! Best regards, Maja P.S.: And why again is the @mfunc allowed on a chord? :-) Am 23.04.2012 um 23:41 schrieb Roland, Perry (pdr4h): > Given the current definition of hfunc one should not "do his analysis" of chords using the hfunc attribute. > > The att.harmonicfunction class is for attributes describing the harmonic function *of a single pitch* in a chord. It was intended for labels such as "root", "third", "fifth", etc. This is why it's available on note but not on chord. 
> > <classSpec ident="att.harmonicfunction" module="MEI.analysis" type="atts"> > <desc>Attributes describing the harmonic function of a single pitch</desc> > <attList> > <attDef ident="hfunc" usage="opt"> > <desc>describes harmonic function in any convenient typology.</desc> > <datatype> > <rng:data type="NMTOKEN"/> > </datatype> > </attDef> > </attList> > </classSpec> > > Chord labels, like "Cm7", or indications of harmonic functionality, like "ii7", belong in <harm>, unless we expand the definition of att.harmonicfunction and, in all likelihood, its datatype. > > It seems to me that we need right now is better documentation in the Guidelines, not more changes to the schema. We can consider this topic again at a later date. > > -- > p. > > > > __________________________ > Perry Roland > Music Library > University of Virginia > P. O. Box 400175 > Charlottesville, VA 22904 > 434-982-2702 (w) > pdr4h (at) virginia (dot) edu > ________________________________________ > From: mei-l-bounces at lists.uni-paderborn.de [mei-l-bounces at lists.uni-paderborn.de] on behalf of Johannes Kepper [kepper at edirom.de] > Sent: Monday, April 23, 2012 4:30 PM > To: Music Encoding Initiative > Subject: Re: [MEI-L] analysis > > Hi both, > > I would argue that for consistency's sake we should add @hfunc to chords. If one does his analysis using this attribute on notes, why should he switch to <harm> on chords? > > jo > > > Am 23.04.2012 um 21:11 schrieb Roland, Perry (pdr4h): > >> Hi, Maja, >> >> We could add @hfunc to <chord>, but I've always thought that it would duplicate the function of <harm>, which is to assign harmonic labels. I could be wrong though, if <harm> were defined only to be used for transcription and not analysis. Anyone else have thoughts on this? >> >> -- >> p. >> >> >> __________________________ >> Perry Roland >> Music Library >> University of Virginia >> P. O. 
Box 400175 >> Charlottesville, VA 22904 >> 434-982-2702 (w) >> pdr4h (at) virginia (dot) edu >> From: mei-l-bounces at lists.uni-paderborn.de [mei-l-bounces at lists.uni-paderborn.de] on behalf of Maja Hartwig [maja.hartwig at gmx.de] >> Sent: Thursday, April 19, 2012 3:33 AM >> To: Music Encoding Initiative >> Subject: [MEI-L] analysis >> >> Dear List, >> >> writing the guidelines of the analysis module, I am wondering about the use of the @hfunc. >> It is allowed within a <note> for describing a note as a "keynote" or "root" or anything else. >> But the @hfunc is not permitted wtihin the <chord>, although the @mfunc e.g. is allowed. >> In my opinion it would make sense to use the @hfunc also on chords for describing the function of it >> in a musical work, such as a "tonic" or something like that. >> So I think the att.chord.anl should be memberOf att.harmonicfunction. >> Is it a bug or any other opinions? >> >> Best regards, >> Maja >> >> >> _______________________________________________ >> mei-l mailing list >> mei-l at lists.uni-paderborn.de >> https://lists.uni-paderborn.de/mailman/listinfo/mei-l > > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l _______________________________________________ mei-l mailing list mei-l at lists.uni-paderborn.de https://lists.uni-paderborn.de/mailman/listinfo/mei-l _______________________________________________ mei-l mailing list mei-l at lists.uni-paderborn.de https://lists.uni-paderborn.de/mailman/listinfo/mei-l From Maja.Hartwig at gmx.de Tue Apr 24 19:36:46 2012 From: Maja.Hartwig at gmx.de (Maja Hartwig) Date: Tue, 24 Apr 2012 19:36:46 +0200 Subject: [MEI-L] analysis In-Reply-To: <BBCC497C40D85642B90E9F94FC30343D01164854@GRANT.eservices.virginia.edu> 
References: <0FFC1623-C25E-4EA1-9A4C-6839D2F64305@gmx.de> <BBCC497C40D85642B90E9F94FC30343D011646D1@GRANT.eservices.virginia.edu>, <AE231485-8300-43BF-B9B6-D315D7810100@edirom.de> <BBCC497C40D85642B90E9F94FC30343D01164742@GRANT.eservices.virginia.edu>, <4309C00A-3BA5-4BD8-A8FD-1295A54CBED2@gmx.de> <BBCC497C40D85642B90E9F94FC30343D01164854@GRANT.eservices.virginia.edu> Message-ID: <20120424173646.27070@gmx.net> Hi, ok that clarifies the issue with the @inth and the @mfunc. Isn't it difference enough to encode harmonic function on the one hand with an attribute and on the other to have the possibility to encode it with the harm element? We have this opportunity with other elements/attributes, too. I still think the @hfunc would make sense on chords when it would be used for describing functions. Another difference might be that using the @hfunc is only intended for analytical purposes. But I know, the schema is still frozen! Maja P.S.: Yes, I really like vanilla ice cream...:-) -------- Original Message -------- > Date: Tue, 24 Apr 2012 12:31:20 +0000 > From: "Roland, Perry (pdr4h)" <pdr4h at eservices.virginia.edu> > To: Music Encoding Initiative <mei-l at lists.uni-paderborn.de> > Subject: Re: [MEI-L] analysis > Maja, > > @inth was intended to carry numeric values (half-steps above the root), > even though it seems that intention didn't get expressed properly in the move > from the original RNG to ODD. The datatype of @inth should be a list of > numbers, not NMTOKENS. This is an error that should be corrected even now > in the frozen 2012 release. > > Though its use would be somewhat rare, @mfunc is allowed on chords because > they can have a melodic function as well as a harmonic one, as in the case > of so-called "passing chords". I suppose this could be subsumed into > @hfunc, now that I think about it. That is, a chord's harmonic function could > be "non-harmonic". 
> > If @hfunc were allowed on chords, it should have exactly the function (no > pun intended) that you describe -- labeling chords with a general label, > such as "tonic", "dominant", etc., not with chord labels transcribed from the > document, like "Cm7". I am still conflicted, however, whether "functional > harmony" labeled with Roman numerals goes in <harm> or @hfunc. > > The answer to your question about choices depends on who's answering. :-) > Choices are often a good thing, but not when they duplicate each other -- > "Would you like vanilla ice or vanilla ice cream?" There should be a > difference between how @hfunc is used and how <harm> is used. If there's no > difference, then we probably don't need both. > > -- > p. > > > __________________________ > Perry Roland > Music Library > University of Virginia > P. O. Box 400175 > Charlottesville, VA 22904 > 434-982-2702 (w) > pdr4h (at) virginia (dot) edu > ________________________________________ > From: mei-l-bounces at lists.uni-paderborn.de > [mei-l-bounces at lists.uni-paderborn.de] on behalf of Maja Hartwig [maja.hartwig at gmx.de] > Sent: Tuesday, April 24, 2012 3:32 AM > To: Music Encoding Initiative > Subject: Re: [MEI-L] analysis > > Hi all, > thanks for the responses. > I understand that we don?t want to make more changes to the schema. I > would also use the @hfunc on a note for labels such as "root" or "keynote". > But I think harmonic functions of chords encoded with the @hfunc should > also be possible, because it would allow to describe the chords function > in a harmonic context such as "tonic" and so on, without switching to the > harm element. In my opinion the <harm> is used to give the chords a name but > not a function, or to describe figured bass numbers. > I also think, that it should be the decision of the encoder which way he > uses for his analysis. But isn?t it a good thing to have the choice between > more than one option? > > I would describe the labels "third" and "fifth" etc. 
with the @inth. Or > for what is the use of @inth intended? > The question came up when I was writing the analysis chapter of the > guidelines and I would like to avoid any misunderstandings! > Best regards, > Maja > > P.S.: And why again is the @mfunc allowed on a chord? :-) > > > Am 23.04.2012 um 23:41 schrieb Roland, Perry (pdr4h): > > > Given the current definition of hfunc one should not "do his analysis" > of chords using the hfunc attribute. > > > > The att.harmonicfunction class is for attributes describing the harmonic > function *of a single pitch* in a chord. It was intended for labels such > as "root", "third", "fifth", etc. This is why it's available on note but > not on chord. > > > > <classSpec ident="att.harmonicfunction" module="MEI.analysis" > type="atts"> > > <desc>Attributes describing the harmonic function of a single > pitch</desc> > > <attList> > > <attDef ident="hfunc" usage="opt"> > > <desc>describes harmonic function in any convenient > typology.</desc> > > <datatype> > > <rng:data type="NMTOKEN"/> > > </datatype> > > </attDef> > > </attList> > > </classSpec> > > > > Chord labels, like "Cm7", or indications of harmonic functionality, like > "ii7", belong in <harm>, unless we expand the definition of > att.harmonicfunction and, in all likelihood, its datatype. > > > > It seems to me that we need right now is better documentation in the > Guidelines, not more changes to the schema. We can consider this topic again > at a later date. > > > > -- > > p. > > > > > > > > __________________________ > > Perry Roland > > Music Library > > University of Virginia > > P. O. 
Box 400175 > > Charlottesville, VA 22904 > > 434-982-2702 (w) > > pdr4h (at) virginia (dot) edu > > ________________________________________ > > From: mei-l-bounces at lists.uni-paderborn.de > [mei-l-bounces at lists.uni-paderborn.de] on behalf of Johannes Kepper [kepper at edirom.de] > > Sent: Monday, April 23, 2012 4:30 PM > > To: Music Encoding Initiative > > Subject: Re: [MEI-L] analysis > > > > Hi both, > > > > I would argue that for consistency's sake we should add @hfunc to > chords. If one does his analysis using this attribute on notes, why should he > switch to <harm> on chords? > > > > jo > > > > > > Am 23.04.2012 um 21:11 schrieb Roland, Perry (pdr4h): > > > >> Hi, Maja, > >> > >> We could add @hfunc to <chord>, but I've always thought that it would > duplicate the function of <harm>, which is to assign harmonic labels. I > could be wrong though, if <harm> were defined only to be used for > transcription and not analysis. Anyone else have thoughts on this? > >> > >> -- > >> p. > >> > >> > >> __________________________ > >> Perry Roland > >> Music Library > >> University of Virginia > >> P. O. Box 400175 > >> Charlottesville, VA 22904 > >> 434-982-2702 (w) > >> pdr4h (at) virginia (dot) edu > >> From: mei-l-bounces at lists.uni-paderborn.de > [mei-l-bounces at lists.uni-paderborn.de] on behalf of Maja Hartwig [maja.hartwig at gmx.de] > >> Sent: Thursday, April 19, 2012 3:33 AM > >> To: Music Encoding Initiative > >> Subject: [MEI-L] analysis > >> > >> Dear List, > >> > >> writing the guidelines of the analysis module, I am wondering about the > use of the @hfunc. > >> It is allowed within a <note> for describing a note as a "keynote" or > "root" or anything else. > >> But the @hfunc is not permitted wtihin the <chord>, although the @mfunc > e.g. is allowed. > >> In my opinion it would make sense to use the @hfunc also on chords for > describing the function of it > >> in a musical work, such as a "tonic" or something like that. 
> >> So I think the att.chord.anl should be memberOf att.harmonicfunction. > >> Is it a bug or any other opinions? > >> > >> Best regards, > >> Maja > >> > >> > >> _______________________________________________ > >> mei-l mailing list > >> mei-l at lists.uni-paderborn.de > >> https://lists.uni-paderborn.de/mailman/listinfo/mei-l > > > > > > _______________________________________________ > > mei-l mailing list > > mei-l at lists.uni-paderborn.de > > https://lists.uni-paderborn.de/mailman/listinfo/mei-l > > _______________________________________________ > > mei-l mailing list > > mei-l at lists.uni-paderborn.de > > https://lists.uni-paderborn.de/mailman/listinfo/mei-l > > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l -- NEU: FreePhone 3-fach-Flat mit kostenlosem Smartphone! Jetzt informieren: http://mobile.1und1.de/?ac=OM.PW.PW003K20328T7073a From Maja.Hartwig at gmx.de Tue Apr 24 19:36:46 2012 From: Maja.Hartwig at gmx.de (Maja Hartwig) Date: Tue, 24 Apr 2012 19:36:46 +0200 Subject: [MEI-L] analysis In-Reply-To: <BBCC497C40D85642B90E9F94FC30343D01164854@GRANT.eservices.virginia.edu> References: <0FFC1623-C25E-4EA1-9A4C-6839D2F64305@gmx.de> <BBCC497C40D85642B90E9F94FC30343D011646D1@GRANT.eservices.virginia.edu>, <AE231485-8300-43BF-B9B6-D315D7810100@edirom.de> <BBCC497C40D85642B90E9F94FC30343D01164742@GRANT.eservices.virginia.edu>, <4309C00A-3BA5-4BD8-A8FD-1295A54CBED2@gmx.de> <BBCC497C40D85642B90E9F94FC30343D01164854@GRANT.eservices.virginia.edu> Message-ID: <20120424173646.27070@gmx.net> Hi, ok that clarifies the issue with the @inth and the @mfunc. 
Isn?t it difference enough to encode harmonic function on the one hand with an attribute and on the other to have the possibility to encode it with the harm element? We have this opportunity with other elements/attributes, too. I still think the @hfunc would make sense on chords when it would be used for describing functions. Another difference might be, that using the @hfunc is only intended for analytical purposes. But I know, the schema is still frozen! Maja P.S.: Yes, I really like vanilla ice cream...:-) -------- Original-Nachricht -------- > Datum: Tue, 24 Apr 2012 12:31:20 +0000 > Von: "Roland, Perry (pdr4h)" <pdr4h at eservices.virginia.edu> > An: Music Encoding Initiative <mei-l at lists.uni-paderborn.de> > Betreff: Re: [MEI-L] analysis > Maja, > > @inth was intended to carry numeric values (half-steps above the root), > even though it seems that intention didn't get expressed properly in the move > from the original RNG to ODD. The datatype of @inth should be a list of > numbers, not NMTOKENS. This is an error that should be corrected even now > in the frozen 2012 release. > > Though its use would be somewhat rare, @mfunc is allowed on chords because > they can have a melodic function as well as a harmonic one, as in the case > of so-called "passing chords". I suppose this could be subsumed into > @hfunc, now that I think about it. That is, a chord's harmonic function could > be "non-harmonic". > > If @hfunc were allowed on chords, it should have exactly the function (no > pun intended) that you describe -- labeling chords with a general label, > such as "tonic", "dominant", etc., not with chord labels transcribed from the > document, like "Cm7". I am still conflicted, however, whether "functional > harmony" labeled with Roman numerals goes in <harm> or @hfunc. > > The answer to your question about choices depends on who's answering. 
:-) > Choices are often a good thing, but not when they duplicate each other -- > "Would you like vanilla ice or vanilla ice cream?" There should be a > difference between how @hfunc is used and how <harm> is used. If there's no > difference, then we probably don't need both. > > -- > p. > > > __________________________ > Perry Roland > Music Library > University of Virginia > P. O. Box 400175 > Charlottesville, VA 22904 > 434-982-2702 (w) > pdr4h (at) virginia (dot) edu > ________________________________________ > From: mei-l-bounces at lists.uni-paderborn.de > [mei-l-bounces at lists.uni-paderborn.de] on behalf of Maja Hartwig [maja.hartwig at gmx.de] > Sent: Tuesday, April 24, 2012 3:32 AM > To: Music Encoding Initiative > Subject: Re: [MEI-L] analysis > > Hi all, > thanks for the responses. > I understand that we don?t want to make more changes to the schema. I > would also use the @hfunc on a note for labels such as "root" or "keynote". > But I think harmonic functions of chords encoded with the @hfunc should > also be possible, because it would allow to describe the chords function > in a harmonic context such as "tonic" and so on, without switching to the > harm element. In my opinion the <harm> is used to give the chords a name but > not a function, or to describe figured bass numbers. > I also think, that it should be the decision of the encoder which way he > uses for his analysis. But isn?t it a good thing to have the choice between > more than one option? > > I would describe the labels "third" and "fifth" etc. with the @inth. Or > for what is the use of @inth intended? > The question came up when I was writing the analysis chapter of the > guidelines and I would like to avoid any misunderstandings! > Best regards, > Maja > > P.S.: And why again is the @mfunc allowed on a chord? 
:-) > > > Am 23.04.2012 um 23:41 schrieb Roland, Perry (pdr4h): > > > Given the current definition of hfunc one should not "do his analysis" > of chords using the hfunc attribute. > > > > The att.harmonicfunction class is for attributes describing the harmonic > function *of a single pitch* in a chord. It was intended for labels such > as "root", "third", "fifth", etc. This is why it's available on note but > not on chord. > > > > <classSpec ident="att.harmonicfunction" module="MEI.analysis" > type="atts"> > > <desc>Attributes describing the harmonic function of a single > pitch</desc> > > <attList> > > <attDef ident="hfunc" usage="opt"> > > <desc>describes harmonic function in any convenient > typology.</desc> > > <datatype> > > <rng:data type="NMTOKEN"/> > > </datatype> > > </attDef> > > </attList> > > </classSpec> > > > > Chord labels, like "Cm7", or indications of harmonic functionality, like > "ii7", belong in <harm>, unless we expand the definition of > att.harmonicfunction and, in all likelihood, its datatype. > > > > It seems to me that we need right now is better documentation in the > Guidelines, not more changes to the schema. We can consider this topic again > at a later date. > > > > -- > > p. > > > > > > > > __________________________ > > Perry Roland > > Music Library > > University of Virginia > > P. O. Box 400175 > > Charlottesville, VA 22904 > > 434-982-2702 (w) > > pdr4h (at) virginia (dot) edu > > ________________________________________ > > From: mei-l-bounces at lists.uni-paderborn.de > [mei-l-bounces at lists.uni-paderborn.de] on behalf of Johannes Kepper [kepper at edirom.de] > > Sent: Monday, April 23, 2012 4:30 PM > > To: Music Encoding Initiative > > Subject: Re: [MEI-L] analysis > > > > Hi both, > > > > I would argue that for consistency's sake we should add @hfunc to > chords. If one does his analysis using this attribute on notes, why should he > switch to <harm> on chords? 
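To make the distinction under discussion concrete, here is a minimal sketch contrasting the two mechanisms: the pitch-level @hfunc labels Perry describes, and a <harm> control event carrying the chord-level label. This is an editor's illustration only, with freely chosen NMTOKEN values, and has not been validated against the frozen 2012 schema:

```xml
<!-- Sketch: @hfunc labels the function of each pitch within the chord,
     while <harm> carries the chord-level label. Illustrative only. -->
<measure n="1">
  <staff n="1">
    <layer n="1">
      <chord xml:id="c1" dur="4">
        <note pname="c" oct="4" hfunc="root"/>
        <note pname="e" oct="4" hfunc="third"/>
        <note pname="g" oct="4" hfunc="fifth"/>
      </chord>
    </layer>
  </staff>
  <harm tstamp="1" staff="1">C</harm>
</measure>
```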
> > > > jo > > > > > > On 23.04.2012 at 21:11, Roland, Perry (pdr4h) wrote: > > > >> Hi, Maja, > >> > >> We could add @hfunc to <chord>, but I've always thought that it would > duplicate the function of <harm>, which is to assign harmonic labels. I > could be wrong though, if <harm> were defined only to be used for > transcription and not analysis. Anyone else have thoughts on this? > >> > >> -- > >> p. > >> > >> > >> __________________________ > >> Perry Roland > >> Music Library > >> University of Virginia > >> P. O. Box 400175 > >> Charlottesville, VA 22904 > >> 434-982-2702 (w) > >> pdr4h (at) virginia (dot) edu > >> From: mei-l-bounces at lists.uni-paderborn.de > [mei-l-bounces at lists.uni-paderborn.de] on behalf of Maja Hartwig [maja.hartwig at gmx.de] > >> Sent: Thursday, April 19, 2012 3:33 AM > >> To: Music Encoding Initiative > >> Subject: [MEI-L] analysis > >> > >> Dear List, > >> > >> writing the guidelines of the analysis module, I am wondering about the > use of the @hfunc. > >> It is allowed within a <note> for describing a note as a "keynote" or > "root" or anything else. > >> But the @hfunc is not permitted within the <chord>, although the @mfunc > e.g. is allowed. > >> In my opinion it would make sense to use the @hfunc also on chords for > describing its function > >> in a musical work, such as a "tonic" or something like that. > >> So I think the att.chord.anl should be memberOf att.harmonicfunction. > >> Is it a bug, or any other opinions?
> >> > >> Best regards, > >> Maja > >> > >> > >> _______________________________________________ > >> mei-l mailing list > >> mei-l at lists.uni-paderborn.de > >> https://lists.uni-paderborn.de/mailman/listinfo/mei-l > > > > > > _______________________________________________ > > mei-l mailing list > > mei-l at lists.uni-paderborn.de > > https://lists.uni-paderborn.de/mailman/listinfo/mei-l > > _______________________________________________ > > mei-l mailing list > > mei-l at lists.uni-paderborn.de > > https://lists.uni-paderborn.de/mailman/listinfo/mei-l > > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l From zupftom at googlemail.com Sun May 6 22:42:47 2012 From: zupftom at googlemail.com (TW) Date: Sun, 6 May 2012 22:42:47 +0200 Subject: [MEI-L] @dur on <beamSpan> Message-ID: <CAEB1mAoY0LU5+ZvHUGnn+AJ4WjsCmOiSwMpNn4h9x6Adipmreg@mail.gmail.com> I'm wondering why @dur on <beamSpan> is from att.duration.timestamp rather than att.duration.musical or something along the lines of att.tupletSpan.log. AFAICS, using @tstamp and @dur, it's not possible to define start and end points of constellations like a group of three beamed eighths as the "musical" @dur can only describe powers of two. Shouldn't this be possible?
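A small sketch of the constellation Thomas describes may help: with a "musical" @dur restricted to powers of two, no single value covers exactly three eighth notes, whereas ID references sidestep the limitation. The encoding below is the editor's illustration under those assumptions, not a validated example:

```xml
<!-- Three beamed eighths span 1.5 beats (a dotted quarter), which no
     power-of-two @dur value can express. Pointing at the first and last
     note with @startid/@endid avoids the problem. Illustrative only. -->
<layer n="1">
  <note xml:id="b1" pname="g" oct="4" dur="8"/>
  <note xml:id="b2" pname="a" oct="4" dur="8"/>
  <note xml:id="b3" pname="b" oct="4" dur="8"/>
  <rest dur="8"/>
</layer>
<beamSpan staff="1" startid="#b1" endid="#b3"/>
```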
Thomas From Julian.Dabbert at gmail.com Fri May 18 12:05:37 2012 From: Julian.Dabbert at gmail.com (Julian Dabbert) Date: Fri, 18 May 2012 12:05:37 +0200 Subject: [MEI-L] Release of MEISE 1.0 Message-ID: <4FB61EF1.4090307@gmail.com> Dear MEI-List, I am happy to announce the first official release of the MEI Score Editor (MEISE) as a standalone version on SourceForge. Requirements for the application are an installed Java Runtime Environment (JRE) in version 6 and a computer that is running Windows (XP and above), Linux (tested on Ubuntu 10.04 and above) or Mac OS X 10.5 (Intel only) and above. You will find guidance for your first steps in MEISE on SourceForge (http://sourceforge.net/p/meise/wiki/Quickstart%20MEISE/). SourceForge also offers bug tracking and the Subversion repository with the current source code. MEISE is released under the LGPL 3.0. Since I will be leaving the MEI project by the end of May, further development will be assigned to Niko Beer at the Musikwissenschaftliches Seminar in Detmold. If you have additional questions about using MEISE or its development, you may also contact Johannes Kepper (kepper at edirom.de). I would like to thank you all for providing a cooperative and friendly work atmosphere during my work on this project. Best regards, -Julian Dabbert -------------- next part -------------- A non-text attachment was scrubbed... Name: Julian_Dabbert.vcf Type: text/x-vcard Size: 314 bytes Desc: not available URL: <http://lists.uni-paderborn.de/pipermail/mei-l/attachments/20120518/ebcfd29e/attachment.vcf> From raffaeleviglianti at gmail.com Mon May 21 11:58:51 2012 From: raffaeleviglianti at gmail.com (Raffaele Viglianti) Date: Mon, 21 May 2012 10:58:51 +0100 Subject: [MEI-L] Release of MEISE 1.0 In-Reply-To: <4FB61EF1.4090307@gmail.com> References: <4FB61EF1.4090307@gmail.com> Message-ID: <CAMyHAnPC3rKPCP3Uo6M=HHf=vvVum854Pra3ZfZ3dGgJusXPhw@mail.gmail.com> Excellent news! Well done.
Best, Raffaele On Fri, May 18, 2012 at 11:05 AM, Julian Dabbert <Julian.Dabbert at gmail.com> wrote: > Dear MEI-List, > > I am happy to announce the first official release of the MEI Score Editor > (MEISE) as a standalone version on SourceForge. > Requirements for the application are an installed Java Runtime Environment > (JRE) in version 6 and a computer that is running Windows (XP and above), > Linux (tested on Ubuntu 10.04 and above) or Mac 10.5 (Intel only) and > above. You will find guidance for your first steps in the MEISE on > SourceForge (http://sourceforge.net/p/meise/wiki/Quickstart%20MEISE/). SourceForge also offers bug tracking and the Subversion repository > with the current source code. MEISE is released under the LGPL 3.0. > > Since I will be leaving the MEI project by the end of May, the further > development will be assigned to Niko Beer at the Musikwissenschaftliches > Seminar in Detmold. If you have additional questions about using MEISE or > its development, you may also contact Johannes Kepper (kepper at edirom.de). > > I would like to thank you all for providing a cooperative and friendly > work atmosphere during my work on this project. > > Best regards, > -Julian Dabbert > > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l > > -------------- next part -------------- An HTML attachment was scrubbed...
URL: <http://lists.uni-paderborn.de/pipermail/mei-l/attachments/20120521/50cb9885/attachment.html> From bohl at edirom.de Wed May 23 14:23:20 2012 From: bohl at edirom.de (Benjamin Wolff Bohl) Date: Wed, 23 May 2012 14:23:20 +0200 Subject: [MEI-L] [ANN] Edirom-Summer-School 2012 – SAVE THE DATE Message-ID: <07491CA4-9A08-4BFB-BD5A-08FEB6EF1D63@edirom.de> Dear colleagues, it is our great pleasure to announce the third Edirom-Summer-School, to be held at the **University of Paderborn from September 24 to 28, 2012.** As in past years there will be introductory classes on the encoding standards TEI and MEI, related X-Technologies as well as tools. Further information will soon be available at http://www.edirom.de/summerschool2012. Hope to see you in Paderborn! Best wishes, Peter Stadler and Benjamin W. Bohl *********************************************************** Edirom - Projekt "Digitale Musikedition" Musikwissenschaftliches Seminar Detmold/Paderborn Gartenstraße 20 D – 32756 Detmold Tel. +49 (0) 5231 / 975-669 Fax: +49 (0) 5231 / 975-668 http://www.edirom.de ***********************************************************
This was only possible with the help of Benjamin Bohl, Andrew Hankinson, Maja Hartwig, Laurent Pugin, Kristina Richts, Craig Sapp, Raffaele Viglianti, Thomas Weber and Perry Roland. Although we haven't finished it yet, I would like to thank all of them for their hard and continuous work on this. As you all know, collaboratively writing a text with ten authors leads to some differences, so right now Perry and I are reviewing those chapters and trying to edit them as well as possible, while always staying in dialogue with the original authors. It is our plan to finish our review and have a first draft of the complete Guidelines by June 11th. Then, we would like to make that available to the subscribers of this list, which includes the whole MEI Council. We would ask you to have a thorough look at least at the chapters you're most interested in, and give us some feedback on them. Maybe we need to explain some features better, maybe you're missing an example for something, or maybe we did something completely wrong. All of this criticism will be helpful for us to improve the Guidelines. But: We don't have much time left for this release. Basically we offer you two weeks of time to review our draft (deadline: June 24th), and then we have the rest of June to work in as much as possible. This means that we might not be able to react to fundamental criticism this time, but the Guidelines will probably stay a work in progress for the next few years – TEI hasn't finished work on theirs, and they started more than 25 years ago. Our plan is to have everything ready for release by the end of July. This includes the Guidelines, the schema, an updated website, and our sample collection of MEI files. If we want this to come true, we need your help. Please reserve some time to review the Guidelines in the two weeks starting on June 11th. Feel free to criticize everything you regard as improvable, but please forgive us if we can't react to everything you say.
We will keep track of these issues and will consider them for the next revision of the Guidelines. If you have some time to work on the text on your own, we're happy to introduce you to the necessary technical workflows. With best regards from the text smithery, Johannes From richard.lewis at gold.ac.uk Wed Jun 13 11:24:44 2012 From: richard.lewis at gold.ac.uk (Richard Lewis) Date: Wed, 13 Jun 2012 10:24:44 +0100 Subject: [MEI-L] Current work on the Guidelines In-Reply-To: <EBCEC538-515A-495D-BC39-8F02202A2EF0@edirom.de> References: <EBCEC538-515A-495D-BC39-8F02202A2EF0@edirom.de> Message-ID: <87ipevimxv.wl%richard.lewis@gold.ac.uk> Dear MEI-L, At Thu, 31 May 2012 10:31:31 +0200, Johannes Kepper wrote: > If we want this to come true, we need your help. Please reserve some > time to review the Guidelines in the two weeks starting on June > 11th. Apologies if I've just missed something really obvious, but where is the document you'd like us to review? Is it <http://code.google.com/p/music-encoding/source/browse/trunk/source/guidelines/>? Richard -- -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=- Richard Lewis ISMS, Computing Goldsmiths, University of London t: +44 (0)20 7078 5134 j: ironchicken at jabber.earth.li @: lewisrichard s: richardjlewis http://www.richardlewis.me.uk/ -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=- From kepper at edirom.de Wed Jun 13 11:45:59 2012 From: kepper at edirom.de (Johannes Kepper) Date: Wed, 13 Jun 2012 11:45:59 +0200 Subject: [MEI-L] Current work on the Guidelines In-Reply-To: <87ipevimxv.wl%richard.lewis@gold.ac.uk> References: <EBCEC538-515A-495D-BC39-8F02202A2EF0@edirom.de> <87ipevimxv.wl%richard.lewis@gold.ac.uk> Message-ID: <9A8A5C10-8666-4D26-BF4D-3CE9D2928680@edirom.de> Hi Richard, as I noticed while reading your mail, we got the wrong mailing list with an updated notification. Sorry for that! Right now, there is nothing to review for you.
As I've been ill for some time now, we're slightly delayed and will come back to you with a document to review later. It is not absolutely clear yet when this will be, and I'll resist announcing anything right now. Just wait for another mail on this list. In that mail, you will find a compiled document, preferably as a PDF, which contains everything you need. We're also still working on the formatting, which seems to require some changes to the TEI processors, so the extra time here might help us to provide a better formatted document. Thanks for the reminder, jo On 13.06.2012 at 11:24, Richard Lewis wrote: > Dear MEI-L, > > At Thu, 31 May 2012 10:31:31 +0200, > Johannes Kepper wrote: > >> If we want this to come true, we need your help. Please reserve some >> time to review the Guidelines in the two weeks starting on June >> 11th. > > Apologies if I've just missed something really obvious, but where is > the document you'd like us to review? > > Is it <http://code.google.com/p/music-encoding/source/browse/trunk/source/guidelines/>?
> > Richard > -- > -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=- > Richard Lewis > ISMS, Computing > Goldsmiths, University of London > t: +44 (0)20 7078 5134 > j: ironchicken at jabber.earth.li > @: lewisrichard > s: richardjlewis > http://www.richardlewis.me.uk/ > -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=- > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l From richard.lewis at gold.ac.uk Wed Jun 13 12:02:09 2012 From: richard.lewis at gold.ac.uk (Richard Lewis) Date: Wed, 13 Jun 2012 11:02:09 +0100 Subject: [MEI-L] Current work on the Guidelines In-Reply-To: <9A8A5C10-8666-4D26-BF4D-3CE9D2928680@edirom.de> References: <EBCEC538-515A-495D-BC39-8F02202A2EF0@edirom.de> <87ipevimxv.wl%richard.lewis@gold.ac.uk> <9A8A5C10-8666-4D26-BF4D-3CE9D2928680@edirom.de> Message-ID: <87haufil7i.wl%richard.lewis@gold.ac.uk> At Wed, 13 Jun 2012 11:45:59 +0200, Johannes Kepper wrote: > Am 13.06.2012 um 11:24 schrieb Richard Lewis: > > > At Thu, 31 May 2012 10:31:31 +0200, > > Johannes Kepper wrote: > > > > > If we want this to come true, we need your help. Please reserve > > > some time to review the Guidelines in the two weeks starting on > > > June 11th. > > > > Apologies if I've just missed something really obvious, but where is > > the document you'd like us to review? > > Right now, there is nothing to review for you. As I've been ill for > some time now, we're slightly delayed and will come back to you with > a document to review later. Thanks for the update. I look forward to hearing. 
Richard -- -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=- Richard Lewis ISMS, Computing Goldsmiths, University of London t: +44 (0)20 7078 5134 j: ironchicken at jabber.earth.li @: lewisrichard s: richardjlewis http://www.richardlewis.me.uk/ -=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=- From lxpugin at gmail.com Tue Jun 26 12:56:00 2012 From: lxpugin at gmail.com (Laurent Pugin) Date: Tue, 26 Jun 2012 12:56:00 +0200 Subject: [MEI-L] beamspan Message-ID: <CAJ306HZ=AE5=xTFUSb2c-oVx01w9YZS1o5RM76pCBWO=uL=ytg@mail.gmail.com> Hi, I am looking for examples of beams encoded with beamspans. Does anybody know where I can find some? Thanks! Laurent -------------- next part -------------- An HTML attachment was scrubbed... URL: <http://lists.uni-paderborn.de/pipermail/mei-l/attachments/20120626/c81384e6/attachment.html> From kepper at edirom.de Tue Jun 26 13:13:22 2012 From: kepper at edirom.de (Johannes Kepper) Date: Tue, 26 Jun 2012 13:13:22 +0200 Subject: [MEI-L] beamspan In-Reply-To: <CAJ306HZ=AE5=xTFUSb2c-oVx01w9YZS1o5RM76pCBWO=uL=ytg@mail.gmail.com> References: <CAJ306HZ=AE5=xTFUSb2c-oVx01w9YZS1o5RM76pCBWO=uL=ytg@mail.gmail.com> Message-ID: <34F4E221-9813-4906-84A6-5FCA18DE3B73@edirom.de> Hi Laurent, I remember at least one (piece from Webern). Will look for that and send it to you. If not included yet, we will add it to the samples. Johannes Am 26.06.2012 um 12:56 schrieb Laurent Pugin: > Hi, > > I am looking for examples of beams encoded with beamspans. Does anybody know where I can find some? > > Thanks! 
> Laurent > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l From lxpugin at gmail.com Wed Jun 27 07:54:40 2012 From: lxpugin at gmail.com (Laurent Pugin) Date: Wed, 27 Jun 2012 07:54:40 +0200 Subject: [MEI-L] Conversion to MusicXML Message-ID: <CAJ306HbbsobgskrJUjzn95FdjAJLfZo08xnGKdMCBBpos98Esw@mail.gmail.com> Hi, I am looking for a tool (XSL stylesheet or script) for converting a MEI file to MusicXML. Does anybody know about something like this? Thanks! Laurent -------------- next part -------------- An HTML attachment was scrubbed... URL: <http://lists.uni-paderborn.de/pipermail/mei-l/attachments/20120627/76b5cb4f/attachment.html> From kepper at edirom.de Wed Jun 27 08:02:27 2012 From: kepper at edirom.de (Johannes Kepper) Date: Wed, 27 Jun 2012 08:02:27 +0200 Subject: [MEI-L] Conversion to MusicXML In-Reply-To: <CAJ306HbbsobgskrJUjzn95FdjAJLfZo08xnGKdMCBBpos98Esw@mail.gmail.com> References: <CAJ306HbbsobgskrJUjzn95FdjAJLfZo08xnGKdMCBBpos98Esw@mail.gmail.com> Message-ID: <3D82E1D2-32BF-4DA2-9783-A814037FEC6E@edirom.de> Hi Laurent, we haven't addressed this yet, but it's scheduled for the last year of the NEH/DFG project (which is about to start). Problem was ambiguity and variation in MEI files that needs to be resolved. Now that MEISE allows to export a unified version of an MEI file that follows just one source, we have something to start the conversion from. Development will start sometime after our upcoming release, and in the same run we're going to improve the existing MusicXML->MEI (which is completely outdated and still produces pre-2010-05 code). Sorry for that answer ;-) Johannes Am 27.06.2012 um 07:54 schrieb Laurent Pugin: > Hi, > > I am looking for a tool (XSL stylesheet or script) for converting a MEI file to MusicXML. Does anybody know about something like this? > > Thanks! 
> Laurent > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l From kepper at edirom.de Fri Jul 6 18:16:41 2012 From: kepper at edirom.de (Johannes Kepper) Date: Fri, 6 Jul 2012 18:16:41 +0200 Subject: [MEI-L] datatype [imt][1-6] Message-ID: <D6F26033-7EAE-4C8B-9BA5-475EFAA0F10F@edirom.de> Dear MEI-L, this is a somewhat technical question that I still would like to ask all of you, although it's particularly interesting to hear developers' opinions. MEI offers several attributes with a datatype of [imt][1-6], that is one letter out of i, m or t, followed by a digit from 1 to 6. It is used to indicate the beginning ("i"), middle ("m") or end ("t") of a feature which may overlap. The number distinguishes between those overlapping occurrences. For instance a <note beam="i1"> indicates the beginning of a beam. If a second, independent beam would start before the first ends, it would start with a value of "i2". If it would start after the end of the first one, it would reuse the "i1" value. Trying to clarify such details in the Guidelines, Perry and I are wondering if this behaviour is particularly comprehensible, or if we should take this feature away completely. You may encode beams using either the <beam> or <beamSpan> element, the first one being extremely comfortable, the second extremely flexible. Besides beams, the same datatype is available for tuplets and slurs, which also offer other encoding possibilities. Is anyone actually using the functionality of this datatype, or do you have strong opinions for other reasons? It would be great if we could get some feedback in the next couple of days in order to make a decision about this soon.
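The overlap rule described in the mail above can be sketched with @slur, which carries the same datatype. This is the editor's illustration of the stated numbering behaviour, not an excerpt from a real encoding:

```xml
<!-- Slur 1 opens with "i1"; slur 2 opens before slur 1 has ended, so it
     takes the number 2; once slur 1 has closed with "t1", the number 1
     becomes available again for a later slur. Illustrative sketch. -->
<note pname="c" oct="4" dur="4" slur="i1"/>
<note pname="d" oct="4" dur="4" slur="i2"/>
<note pname="e" oct="4" dur="4" slur="t1"/>
<note pname="f" oct="4" dur="4" slur="t2"/>
<note pname="g" oct="4" dur="4" slur="i1"/>
<note pname="a" oct="4" dur="4" slur="t1"/>
```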
Thanks very much, Perry and Johannes From andrew.hankinson at mail.mcgill.ca Fri Jul 6 18:30:32 2012 From: andrew.hankinson at mail.mcgill.ca (Andrew Hankinson, Mr) Date: Fri, 6 Jul 2012 16:30:32 +0000 Subject: [MEI-L] datatype [imt][1-6] In-Reply-To: <29074_1341591417_4FF70F78_29074_14_1_D6F26033-7EAE-4C8B-9BA5-475EFAA0F10F@edirom.de> References: <29074_1341591417_4FF70F78_29074_14_1_D6F26033-7EAE-4C8B-9BA5-475EFAA0F10F@edirom.de> Message-ID: <CC3E1EAB6662EA4A95FC6E498314808B10B523BD@EXMBX2010-5.campus.MCGILL.CA> I didn't know what those were for, but now that I do I think the numerical indication is quite confusing for overlapping elements. It would be clearer and easier to use the @staff and @layer attributes on the spanning elements to indicate which particular staff and layer (or "voice") they apply to, and then the @tstamp, @dur, @startid & @endid for start and end points. I wouldn't do away with the 'i', 'm', and 't' completely, though. That would be particularly useful for things like @wordpos in lyric syllables. You wouldn't need the numbers, though. -Andrew On 2012-07-06, at 12:16 PM, Johannes Kepper wrote: > Dear MEI-L, > > this is a somewhat technical question that I still would like to ask all of you, although it's particularly interesting to hear developer's opinions. > > MEI offers several attributes with a datatype of [imt][1-6], that is one letter out of i, m or t, follwed by a digit from 1 to 6. It is used to indicate the beginning ("i"), middle ("m") or end ("t") of a feature which may overlap. The number distinguishes between those overlapping occurences. For instance a > > <note beam="i1"> > > indicates the beginning of a beam. If a second, independent beam would start before the first ends, it would be start with a value of "i2". If it would start after the end of the first one, it would reuse the "i1" value. 
Trying to clarify such details in the Guidelines, Perry and I are wondering if this behaviour is particularly comprehensible, or if we should take this feature away completely. You may encode beams using either the <beam> or <beamSpan> element, the first one being extremely comfortable, the second extremely flexible. Besides beams, the same datatype is available for tuplets and slurs, which also offer other encoding possibilities. Is anyone actually using the functionality of this datatype, or do you have strong opinions for other reasons? > > It would be great if we could get some feedback in the next couple of days in order to make a decision about this soon. > > Thanks very much, > Perry and Johannes > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 1054 bytes Desc: not available URL: <http://lists.uni-paderborn.de/pipermail/mei-l/attachments/20120706/c2e2a6cc/attachment.bin> From Maja.Hartwig at gmx.de Fri Jul 6 18:48:04 2012 From: Maja.Hartwig at gmx.de (Maja Hartwig) Date: Fri, 06 Jul 2012 18:48:04 +0200 Subject: [MEI-L] datatype [imt][1-6] In-Reply-To: <D6F26033-7EAE-4C8B-9BA5-475EFAA0F10F@edirom.de> References: <D6F26033-7EAE-4C8B-9BA5-475EFAA0F10F@edirom.de> Message-ID: <20120706164804.181210@gmx.net> Dear Johannes and Perry! Actually I used this datatype, but never for encoding beams, rather for slurs, though my understanding was a different one. I used the "i1" for the first slur in a measure, "i2" for the second one also when the first beam ended before. So I didn't use the "i1" twice in one measure, because I think that would be confusing to get the appropriate "i's" and "t's" together.
I was always wondering about the limit of 1-6, because in my way of using that datatype, I couldn't encode more than 6 slurs in a measure. Now I switched over to encode slurs with @tstamp and duration or @startid and @endid. To encode beams, I always use the <beam>/<beamSpan>, and I think I wouldn't miss the i/m/t 1-6! Best, Maja -------- Original Message -------- > Date: Fri, 6 Jul 2012 18:16:41 +0200 > From: Johannes Kepper <kepper at edirom.de> > To: Music Encoding Initiative <mei-l at lists.uni-paderborn.de> > Subject: [MEI-L] datatype [imt][1-6] > Dear MEI-L, > > this is a somewhat technical question that I still would like to ask all > of you, although it's particularly interesting to hear developers' opinions. > > MEI offers several attributes with a datatype of [imt][1-6], that is one > letter out of i, m or t, followed by a digit from 1 to 6. It is used to > indicate the beginning ("i"), middle ("m") or end ("t") of a feature which may > overlap. The number distinguishes between those overlapping occurrences. For > instance a > > <note beam="i1"> > > indicates the beginning of a beam. If a second, independent beam would > start before the first ends, it would start with a value of "i2". If it > would start after the end of the first one, it would reuse the "i1" value.
> > Thanks very much, > Perry and Johannes > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l From raffaeleviglianti at gmail.com Fri Jul 6 22:14:50 2012 From: raffaeleviglianti at gmail.com (Raffaele Viglianti) Date: Fri, 6 Jul 2012 21:14:50 +0100 Subject: [MEI-L] datatype [imt][1-6] In-Reply-To: <20120706164804.181210@gmx.net> References: <D6F26033-7EAE-4C8B-9BA5-475EFAA0F10F@edirom.de> <20120706164804.181210@gmx.net> Message-ID: <CAMyHAnOiob_Le17_dr27UWsouY8q+dHh_8HxDuTCMP4KMnRYqg@mail.gmail.com> Dear all, In general, I would agree with both of Andrew's points: - get rid of the datatype for beam and other "spanners" like slurs and tuplets (and the related attributes); - keep i/m/t for wordpos (can't think of any other places where this could be needed, though as a rule of thumb, if 1-6 is not needed then i/m/t might be worth keeping). Best, Raffaele On Fri, Jul 6, 2012 at 5:48 PM, Maja Hartwig <Maja.Hartwig at gmx.de> wrote: > Dear Johannes and Perry! > > Actually I used this datatype, but never for encoding beams, rather for > slurs, though my understanding was a different one. > I used the "i1" for the first slur in a measure, "i2" for the second one > also when the first beam ended before. > So I didn't use the "i1" twice in one measure, because I think that would > be confusing to get the appropriate "i's" and "t's" together. > I was always wondering about the limit of 1-6, because in my way of using > that datatype, I couldn't encode more than 6 slurs in a measure. > Now I switched over to encode slurs with @tstamp and duration or @startid > and @endid. > To encode beams, I always use the <beam>/<beamSpan>, and I think I > wouldn't miss the i/m/t 1-6!
> Best, > > Maja > > > -------- Original-Nachricht -------- > > Datum: Fri, 6 Jul 2012 18:16:41 +0200 > > Von: Johannes Kepper <kepper at edirom.de> > > An: Music Encoding Initiative <mei-l at lists.uni-paderborn.de> > > Betreff: [MEI-L] datatype [imt][1-6] > > > Dear MEI-L, > > > > this is a somewhat technical question that I still would like to ask all > > of you, although it's particularly interesting to hear developer's > opinions. > > > > MEI offers several attributes with a datatype of [imt][1-6], that is one > > letter out of i, m or t, follwed by a digit from 1 to 6. It is used to > > indicate the beginning ("i"), middle ("m") or end ("t") of a feature > which may > > overlap. The number distinguishes between those overlapping occurences. > For > > instance a > > > > <note beam="i1"> > > > > indicates the beginning of a beam. If a second, independent beam would > > start before the first ends, it would be start with a value of "i2". If > it > > would start after the end of the first one, it would reuse the "i1" > value. > > Trying to clarify such details in the Guidelines, Perry and me are > wondering > > if this behaviour is particularly comprehensible, or if we should take > this > > feature away completely. You may encode beams using either the <beam> or > > <beamSpan> element, the first one being extremely comfortable, the second > > extremely flexible. Besides beams, the same datatype is available for > tuplets > > and slurs, which also offer other encoding possibilities. Is anyone > > actually using the functionality of this datatype, or do you have strong > opinions > > for other reasons? > > > > It would be great if we could get some feedback in the next couple of > days > > in order to make a decision about this soon. 
> > > > Thanks very much, > > Perry and Johannes > > _______________________________________________ > > mei-l mailing list > > mei-l at lists.uni-paderborn.de > > https://lists.uni-paderborn.de/mailman/listinfo/mei-l > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l > -------------- next part -------------- An HTML attachment was scrubbed... URL: <http://lists.uni-paderborn.de/pipermail/mei-l/attachments/20120706/ede55dd7/attachment.html> From zupftom at googlemail.com Fri Jul 6 23:17:36 2012 From: zupftom at googlemail.com (TW) Date: Fri, 6 Jul 2012 23:17:36 +0200 Subject: [MEI-L] datatype [imt][1-6] In-Reply-To: <CAMyHAnOiob_Le17_dr27UWsouY8q+dHh_8HxDuTCMP4KMnRYqg@mail.gmail.com> References: <D6F26033-7EAE-4C8B-9BA5-475EFAA0F10F@edirom.de> <20120706164804.181210@gmx.net> <CAMyHAnOiob_Le17_dr27UWsouY8q+dHh_8HxDuTCMP4KMnRYqg@mail.gmail.com> Message-ID: <CAEB1mAptk9Sxg=-Qgg5pdxCGDeqQQ8NRERm4dLJNwFe0-KGdgg@mail.gmail.com> I agree with this, as well. I never felt comfortable with this kind of beam/slur indication as it scatters information about a single object over several others, and the object itself doesn't even exist as an entity of its own. @beam/@slur attributes would make sense to me if they were plists/idrefs pointing to the beam/beamSpan/slur element they belong to. For lyrics however, i|m|t totally convinces me as it indicates the relationship between different entities. Thomas 2012/7/6 Raffaele Viglianti <raffaeleviglianti at gmail.com>: > Dear all, > > In general, I would agree with both of Andrew's points: > - get rid of the datatype for beam and other "spanners" like slurs and > tuplets (and the related attributes); > - keep i/m/t for wordpos (can't think of any other places where this could > be needed, though as a rule of thumb, if 1-6 is not needed then i/m/t might > be worth keeping. 
> > Best, > Raffaele > > > > On Fri, Jul 6, 2012 at 5:48 PM, Maja Hartwig <Maja.Hartwig at gmx.de> wrote: >> >> Dear Johannes and Perry! >> >> Actually I used this datatype, but never for encoding beams, rather for >> slurs, though my understanding was a different one. >> I used "i1" for the first slur in a measure and "i2" for the second one, >> even when the first slur had already ended. >> So I didn't use "i1" twice in one measure, because I think it would >> be confusing to get the appropriate "i"s and "t"s together. >> I was always wondering about the limit of 1-6, because in my way of using >> that datatype, I couldn't encode more than 6 slurs in a measure. >> Now I have switched over to encoding slurs with @tstamp and duration or @startid >> and @endid. >> To encode beams, I always use <beam>/<beamSpan>, and I think I >> wouldn't miss the i/m/t 1-6! >> Best, >> >> Maja >> >> >> -------- Original-Nachricht -------- >> > Datum: Fri, 6 Jul 2012 18:16:41 +0200 >> > Von: Johannes Kepper <kepper at edirom.de> >> > An: Music Encoding Initiative <mei-l at lists.uni-paderborn.de> >> > Betreff: [MEI-L] datatype [imt][1-6] >> >> > Dear MEI-L, >> > >> > this is a somewhat technical question that I still would like to ask all >> > of you, although it's particularly interesting to hear developers' >> > opinions. >> > >> > MEI offers several attributes with a datatype of [imt][1-6], that is one >> > letter out of i, m or t, followed by a digit from 1 to 6. It is used to >> > indicate the beginning ("i"), middle ("m") or end ("t") of a feature >> > which may >> > overlap. The number distinguishes between those overlapping occurrences. >> > For >> > instance a >> > >> > <note beam="i1"> >> > >> > indicates the beginning of a beam. If a second, independent beam >> > starts before the first ends, it would start with a value of "i2". If >> > it >> > started after the end of the first one, it would reuse the "i1" >> > value. 
>> > Trying to clarify such details in the Guidelines, Perry and I are >> > wondering >> > if this behaviour is particularly comprehensible, or if we should take >> > this >> > feature away completely. You may encode beams using either the <beam> or >> > <beamSpan> element, the first one being extremely comfortable, the >> > second >> > extremely flexible. Besides beams, the same datatype is available for >> > tuplets >> > and slurs, which also offer other encoding possibilities. Is anyone >> > actually using the functionality of this datatype, or do you have strong >> > opinions >> > for other reasons? >> > >> > It would be great if we could get some feedback in the next couple of >> > days >> > in order to make a decision about this soon. >> > >> > Thanks very much, >> > Perry and Johannes >> > _______________________________________________ >> > mei-l mailing list >> > mei-l at lists.uni-paderborn.de >> > https://lists.uni-paderborn.de/mailman/listinfo/mei-l >> >> _______________________________________________ >> mei-l mailing list >> mei-l at lists.uni-paderborn.de >> https://lists.uni-paderborn.de/mailman/listinfo/mei-l > > > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l > From kepper at edirom.de Sat Jul 7 00:23:58 2012 From: kepper at edirom.de (Johannes Kepper) Date: Sat, 7 Jul 2012 00:23:58 +0200 Subject: [MEI-L] datatype [imt][1-6] In-Reply-To: <CAEB1mAptk9Sxg=-Qgg5pdxCGDeqQQ8NRERm4dLJNwFe0-KGdgg@mail.gmail.com> References: <D6F26033-7EAE-4C8B-9BA5-475EFAA0F10F@edirom.de> <20120706164804.181210@gmx.net> <CAMyHAnOiob_Le17_dr27UWsouY8q+dHh_8HxDuTCMP4KMnRYqg@mail.gmail.com> <CAEB1mAptk9Sxg=-Qgg5pdxCGDeqQQ8NRERm4dLJNwFe0-KGdgg@mail.gmail.com> Message-ID: <0B19E3D8-4529-439C-BA8D-CCF33104E542@edirom.de> Dear all, thanks for the answers so far, they are really helpful. 
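To restate the numbering behaviour in question with a concrete fragment (the pitches, durations, and context here are invented purely for illustration), two overlapping slurs encoded the one-pass way would look roughly like this:

```xml
<!-- Slur 1 opens on the first note; slur 2 opens before slur 1 has
     closed, so it takes the suffix "2". After "t1", the suffix "1"
     becomes available for re-use by a later slur. -->
<layer>
  <note pname="c" oct="4" dur="4" slur="i1"/> <!-- slur 1 begins -->
  <note pname="d" oct="4" dur="4" slur="i2"/> <!-- slur 2 begins while slur 1 is still open -->
  <note pname="e" oct="4" dur="4" slur="t1"/> <!-- slur 1 ends -->
  <note pname="f" oct="4" dur="4" slur="t2"/> <!-- slur 2 ends -->
</layer>
```

A third slur starting after the third note could then reuse "i1" — exactly the re-use behaviour that the thread finds easy to misread.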
Just to clarify: We don't want to touch [imt], that's a completely different datatype which is used not only for @wordpos, but also for @tie. As these can't overlap, there is no risk for confusion. We also regard this datatype as undoubtedly handy, so it won't go away, no matter what we will do with [imt][1-6]. Basically, this change would mean that we drop the possibility for a one-pass encoding here and require a second pass. That's certainly a drawback, but it seems like the current solution for single-pass is at least misleading and should therefore go away before causing more trouble. We might want to think about introducing a different one-pass method later, though. Thanks again, and still listening to other opinions, Johannes Am 06.07.2012 um 23:17 schrieb TW: > I agree with this, as well. I never felt comfortable with this kind > of beam/slur indication as it scatters information about a single > object over several others, and the object itself doesn't even exist > as an entity of its own. @beam/@slur attributes would make sense to > me if they were plists/idrefs pointing to the beam/beamSpan/slur > element they belong to. > > For lyrics however, i|m|t totally convinces me as it indicates the > relationship between different entities. > > Thomas > > > 2012/7/6 Raffaele Viglianti <raffaeleviglianti at gmail.com>: >> Dear all, >> >> In general, I would agree with both of Andrew's points: >> - get rid of the datatype for beam and other "spanners" like slurs and >> tuplets (and the related attributes); >> - keep i/m/t for wordpos (can't think of any other places where this could >> be needed, though as a rule of thumb, if 1-6 is not needed then i/m/t might >> be worth keeping. >> >> Best, >> Raffaele >> >> >> >> On Fri, Jul 6, 2012 at 5:48 PM, Maja Hartwig <Maja.Hartwig at gmx.de> wrote: >>> >>> Dear Johannes and Perry! >>> >>> Actually I used this datatype, but never for encoding beams, rather for >>> slurs, though my understanding was a different one. 
>>> I used the "i1" for the first slur in a measure, "i2" for the second one >>> also when the first beam ended before. >>> So I didn?t use the "i1" twice in one measure, because I think that would >>> be confusing to get the appropriate "i?s" and "t?s" together. >>> I was always wondering about the limit of 1-6, because in my way of using >>> that datatype, I couldn?t encode more than 6 slurs in a measure. >>> Now I switched over to encode slurs with @tstamp and duration or @startid >>> and @endid. >>> To encode beams, I always use the <beam>/<beamSpan>, and I think I >>> wouldn?t miss the i/m/t 1-6! >>> Best, >>> >>> Maja >>> >>> >>> -------- Original-Nachricht -------- >>>> Datum: Fri, 6 Jul 2012 18:16:41 +0200 >>>> Von: Johannes Kepper <kepper at edirom.de> >>>> An: Music Encoding Initiative <mei-l at lists.uni-paderborn.de> >>>> Betreff: [MEI-L] datatype [imt][1-6] >>> >>>> Dear MEI-L, >>>> >>>> this is a somewhat technical question that I still would like to ask all >>>> of you, although it's particularly interesting to hear developer's >>>> opinions. >>>> >>>> MEI offers several attributes with a datatype of [imt][1-6], that is one >>>> letter out of i, m or t, follwed by a digit from 1 to 6. It is used to >>>> indicate the beginning ("i"), middle ("m") or end ("t") of a feature >>>> which may >>>> overlap. The number distinguishes between those overlapping occurences. >>>> For >>>> instance a >>>> >>>> <note beam="i1"> >>>> >>>> indicates the beginning of a beam. If a second, independent beam would >>>> start before the first ends, it would be start with a value of "i2". If >>>> it >>>> would start after the end of the first one, it would reuse the "i1" >>>> value. >>>> Trying to clarify such details in the Guidelines, Perry and me are >>>> wondering >>>> if this behaviour is particularly comprehensible, or if we should take >>>> this >>>> feature away completely. 
You may encode beams using either the <beam> or >>>> <beamSpan> element, the first one being extremely comfortable, the >>>> second >>>> extremely flexible. Besides beams, the same datatype is available for >>>> tuplets >>>> and slurs, which also offer other encoding possibilities. Is anyone >>>> actually using the functionality of this datatype, or do you have strong >>>> opinions >>>> for other reasons? >>>> >>>> It would be great if we could get some feedback in the next couple of >>>> days >>>> in order to make a decision about this soon. >>>> >>>> Thanks very much, >>>> Perry and Johannes >>>> _______________________________________________ >>>> mei-l mailing list >>>> mei-l at lists.uni-paderborn.de >>>> https://lists.uni-paderborn.de/mailman/listinfo/mei-l >>> >>> _______________________________________________ >>> mei-l mailing list >>> mei-l at lists.uni-paderborn.de >>> https://lists.uni-paderborn.de/mailman/listinfo/mei-l >> >> >> >> _______________________________________________ >> mei-l mailing list >> mei-l at lists.uni-paderborn.de >> https://lists.uni-paderborn.de/mailman/listinfo/mei-l >> > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l From raffaeleviglianti at gmail.com Sat Jul 7 12:07:02 2012 From: raffaeleviglianti at gmail.com (Raffaele Viglianti) Date: Sat, 7 Jul 2012 11:07:02 +0100 Subject: [MEI-L] datatype [imt][1-6] In-Reply-To: <0B19E3D8-4529-439C-BA8D-CCF33104E542@edirom.de> References: <D6F26033-7EAE-4C8B-9BA5-475EFAA0F10F@edirom.de> <20120706164804.181210@gmx.net> <CAMyHAnOiob_Le17_dr27UWsouY8q+dHh_8HxDuTCMP4KMnRYqg@mail.gmail.com> <CAEB1mAptk9Sxg=-Qgg5pdxCGDeqQQ8NRERm4dLJNwFe0-KGdgg@mail.gmail.com> <0B19E3D8-4529-439C-BA8D-CCF33104E542@edirom.de> Message-ID: <CAMyHAnN6ZQ07YWzbwibkbYRGRjLvJb43Q6pRg4psuh3q-+ixmw@mail.gmail.com> Hello, On Fri, Jul 6, 2012 at 11:23 PM, Johannes Kepper <kepper at edirom.de> wrote: > > > 
Basically, this change would mean that we drop the possibility for a > one-pass encoding here and require a second pass. That's certainly a > drawback A second pass towards what? Do you have a specific transformation in mind? Once the XML tree is parsed, knowing that a note is beamed (and what its i/m/t position is) by its attribute value or by its ancestor::beam + position should be equivalent (beamSpan is more complex, but that's already the case). I'm also slightly concerned about keeping @tie if we get rid of @slur and @beam. Even though @tie is less ambiguous and so has a different datatype, they all just seem to be part of the same category to me. Best, Raffaele > . > > Thanks again, and still listening to other opinions, > Johannes > > > > Am 06.07.2012 um 23:17 schrieb TW: > > > I agree with this, as well. I never felt comfortable with this kind > > of beam/slur indication as it scatters information about a single > > object over several others, and the object itself doesn't even exist > > as an entity of its own. @beam/@slur attributes would make sense to > > me if they were plists/idrefs pointing to the beam/beamSpan/slur > > element they belong to. 
> >>> > >>> Actually I used this datatype, but never for encoding beams, rather for > >>> slurs, though my understanding was a different one. > >>> I used the "i1" for the first slur in a measure, "i2" for the second > one > >>> also when the first beam ended before. > >>> So I didn?t use the "i1" twice in one measure, because I think that > would > >>> be confusing to get the appropriate "i?s" and "t?s" together. > >>> I was always wondering about the limit of 1-6, because in my way of > using > >>> that datatype, I couldn?t encode more than 6 slurs in a measure. > >>> Now I switched over to encode slurs with @tstamp and duration or > @startid > >>> and @endid. > >>> To encode beams, I always use the <beam>/<beamSpan>, and I think I > >>> wouldn?t miss the i/m/t 1-6! > >>> Best, > >>> > >>> Maja > >>> > >>> > >>> -------- Original-Nachricht -------- > >>>> Datum: Fri, 6 Jul 2012 18:16:41 +0200 > >>>> Von: Johannes Kepper <kepper at edirom.de> > >>>> An: Music Encoding Initiative <mei-l at lists.uni-paderborn.de> > >>>> Betreff: [MEI-L] datatype [imt][1-6] > >>> > >>>> Dear MEI-L, > >>>> > >>>> this is a somewhat technical question that I still would like to ask > all > >>>> of you, although it's particularly interesting to hear developer's > >>>> opinions. > >>>> > >>>> MEI offers several attributes with a datatype of [imt][1-6], that is > one > >>>> letter out of i, m or t, follwed by a digit from 1 to 6. It is used to > >>>> indicate the beginning ("i"), middle ("m") or end ("t") of a feature > >>>> which may > >>>> overlap. The number distinguishes between those overlapping > occurences. > >>>> For > >>>> instance a > >>>> > >>>> <note beam="i1"> > >>>> > >>>> indicates the beginning of a beam. If a second, independent beam would > >>>> start before the first ends, it would be start with a value of "i2". > If > >>>> it > >>>> would start after the end of the first one, it would reuse the "i1" > >>>> value. 
> >>>> Trying to clarify such details in the Guidelines, Perry and me are > >>>> wondering > >>>> if this behaviour is particularly comprehensible, or if we should take > >>>> this > >>>> feature away completely. You may encode beams using either the <beam> > or > >>>> <beamSpan> element, the first one being extremely comfortable, the > >>>> second > >>>> extremely flexible. Besides beams, the same datatype is available for > >>>> tuplets > >>>> and slurs, which also offer other encoding possibilities. Is anyone > >>>> actually using the functionality of this datatype, or do you have > strong > >>>> opinions > >>>> for other reasons? > >>>> > >>>> It would be great if we could get some feedback in the next couple of > >>>> days > >>>> in order to make a decision about this soon. > >>>> > >>>> Thanks very much, > >>>> Perry and Johannes > >>>> _______________________________________________ > >>>> mei-l mailing list > >>>> mei-l at lists.uni-paderborn.de > >>>> https://lists.uni-paderborn.de/mailman/listinfo/mei-l > >>> > >>> _______________________________________________ > >>> mei-l mailing list > >>> mei-l at lists.uni-paderborn.de > >>> https://lists.uni-paderborn.de/mailman/listinfo/mei-l > >> > >> > >> > >> _______________________________________________ > >> mei-l mailing list > >> mei-l at lists.uni-paderborn.de > >> https://lists.uni-paderborn.de/mailman/listinfo/mei-l > >> > > > > _______________________________________________ > > mei-l mailing list > > mei-l at lists.uni-paderborn.de > > https://lists.uni-paderborn.de/mailman/listinfo/mei-l > > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: <http://lists.uni-paderborn.de/pipermail/mei-l/attachments/20120707/d88ff22a/attachment.html> From pdr4h at eservices.virginia.edu Sat Jul 7 12:38:42 2012 From: pdr4h at eservices.virginia.edu (Roland, Perry (pdr4h)) Date: Sat, 7 Jul 2012 10:38:42 +0000 Subject: [MEI-L] datatype [imt][1-6] In-Reply-To: <CAMyHAnN6ZQ07YWzbwibkbYRGRjLvJb43Q6pRg4psuh3q-+ixmw@mail.gmail.com> References: <D6F26033-7EAE-4C8B-9BA5-475EFAA0F10F@edirom.de> <20120706164804.181210@gmx.net> <CAMyHAnOiob_Le17_dr27UWsouY8q+dHh_8HxDuTCMP4KMnRYqg@mail.gmail.com> <CAEB1mAptk9Sxg=-Qgg5pdxCGDeqQQ8NRERm4dLJNwFe0-KGdgg@mail.gmail.com> <0B19E3D8-4529-439C-BA8D-CCF33104E542@edirom.de>, <CAMyHAnN6ZQ07YWzbwibkbYRGRjLvJb43Q6pRg4psuh3q-+ixmw@mail.gmail.com> Message-ID: <BBCC497C40D85642B90E9F94FC30343D0118DDB8@GRANT.eservices.virginia.edu> Raffaele, In this context "one pass encoding" means capturing all information available at any given point in the encoding at one time. For example, while you're on a given note, it means capturing info about any ties, slurs, beams, etc. that start on that note. MEI accomplishes this primarily with attributes; that is, @beam, @slur, @tie. "Two pass encoding" means capturing the note information first and the beam, tie, and slur data later, relating the second pass to the first, of course. This method results in note, chord, etc. elements followed later by tie, slur, etc. elements with @tstamp or @startid attributes. The beam element is slightly anomalous in that it encloses notes, but it's really part of the one-pass method. It's difficult to treat other things, like slur, tie, etc. that might overlap each other the same way as beams. And even for beams, when overlapping occurs one must switch to beamSpan. I still believe one-pass encoding has a place in MEI, at least until any hand encoding (meaning editing of the MEI file in a non-graphical environment) is no longer necessary. 
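The one-pass/two-pass distinction Perry describes might be sketched in MEI as follows. The xml:ids are invented for illustration, and the element placement is simplified (in a real file the <slur> element would normally sit inside the <measure>, alongside the staves):

```xml
<!-- One-pass: the slur is captured on the notes themselves,
     at the moment each note is encoded. -->
<layer>
  <note xml:id="n1" pname="g" oct="4" dur="8" slur="i1"/>
  <note xml:id="n2" pname="a" oct="4" dur="8" slur="t1"/>
</layer>

<!-- Two-pass: the notes are encoded first; the slur is added later
     as an element of its own, pointing back at them. -->
<layer>
  <note xml:id="n3" pname="g" oct="4" dur="8"/>
  <note xml:id="n4" pname="a" oct="4" dur="8"/>
</layer>
<slur startid="#n3" endid="#n4"/>
```

Either form carries the same information; the difference is only at what point during encoding the slur is written down.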
However, these attributes should certainly be better documented and perhaps moved to a separate module instead of being eliminated altogether. We could even go so far as to define them as unacceptable in "canonical" MEI as long as we provide conversion from the attribute method to the element method. In addition to their usefulness in hand-coding, these attributes are helpful when transforming data already in a one-pass-oriented form, such as MuseData and to some extent MusicXML, to MEI. I agree that keeping @tie while eliminating the other "one-pass attributes" is inconsistent. This is another reason for not getting rid of @slur, etc. entirely. When hand-coding is no longer necessary, then @slur, etc. including @tie can go away, but I don't think we're completely there yet. Just my two cents, -- p. __________________________ Perry Roland Music Library University of Virginia P. O. Box 400175 Charlottesville, VA 22904 434-982-2702 (w) pdr4h (at) virginia (dot) edu ________________________________ From: mei-l-bounces at lists.uni-paderborn.de [mei-l-bounces at lists.uni-paderborn.de] on behalf of Raffaele Viglianti [raffaeleviglianti at gmail.com] Sent: Saturday, July 07, 2012 6:07 AM To: Music Encoding Initiative Subject: Re: [MEI-L] datatype [imt][1-6] Hello, On Fri, Jul 6, 2012 at 11:23 PM, Johannes Kepper <kepper at edirom.de<mailto:kepper at edirom.de>> wrote: Basically, this change would mean that we drop the possibility for a one-pass encoding here and require a second pass. That's certainly a drawback A second pass towards what? Do you have a specific transformation in mind? Once the XML tree is parsed, knowing that a note is beamed (and whether its i/m/t) by its attribute value or by its ancestor::beam + position should be equivalent (beamspan is more complex, but that's already the case) . I'm also slightly concerned about keeping @tie if we get rid of @slur and @beam. 
Even though @tie is less ambiguous and so has a different datatype, they all just seem to be part of the same category to me. Best, Raffaele . Thanks again, and still listening to other opinions, Johannes Am 06.07.2012 um 23:17 schrieb TW: > I agree with this, as well. I never felt comfortable with this kind > of beam/slur indication as it scatters information about a single > object over several others, and the object itself doesn't even exist > as an entity of its own. @beam/@slur attributes would make sense to > me if they were plists/idrefs pointing to the beam/beamSpan/slur > element they belong to. > > For lyrics however, i|m|t totally convinces me as it indicates the > relationship between different entities. > > Thomas > > > 2012/7/6 Raffaele Viglianti <raffaeleviglianti at gmail.com<mailto:raffaeleviglianti at gmail.com>>: >> Dear all, >> >> In general, I would agree with both of Andrew's points: >> - get rid of the datatype for beam and other "spanners" like slurs and >> tuplets (and the related attributes); >> - keep i/m/t for wordpos (can't think of any other places where this could >> be needed, though as a rule of thumb, if 1-6 is not needed then i/m/t might >> be worth keeping. >> >> Best, >> Raffaele >> >> >> >> On Fri, Jul 6, 2012 at 5:48 PM, Maja Hartwig <Maja.Hartwig at gmx.de<mailto:Maja.Hartwig at gmx.de>> wrote: >>> >>> Dear Johannes and Perry! >>> >>> Actually I used this datatype, but never for encoding beams, rather for >>> slurs, though my understanding was a different one. >>> I used the "i1" for the first slur in a measure, "i2" for the second one >>> also when the first beam ended before. >>> So I didn?t use the "i1" twice in one measure, because I think that would >>> be confusing to get the appropriate "i?s" and "t?s" together. >>> I was always wondering about the limit of 1-6, because in my way of using >>> that datatype, I couldn?t encode more than 6 slurs in a measure. 
>>> Now I switched over to encode slurs with @tstamp and duration or @startid >>> and @endid. >>> To encode beams, I always use the <beam>/<beamSpan>, and I think I >>> wouldn?t miss the i/m/t 1-6! >>> Best, >>> >>> Maja >>> >>> >>> -------- Original-Nachricht -------- >>>> Datum: Fri, 6 Jul 2012 18:16:41 +0200 >>>> Von: Johannes Kepper <kepper at edirom.de<mailto:kepper at edirom.de>> >>>> An: Music Encoding Initiative <mei-l at lists.uni-paderborn.de<mailto:mei-l at lists.uni-paderborn.de>> >>>> Betreff: [MEI-L] datatype [imt][1-6] >>> >>>> Dear MEI-L, >>>> >>>> this is a somewhat technical question that I still would like to ask all >>>> of you, although it's particularly interesting to hear developer's >>>> opinions. >>>> >>>> MEI offers several attributes with a datatype of [imt][1-6], that is one >>>> letter out of i, m or t, follwed by a digit from 1 to 6. It is used to >>>> indicate the beginning ("i"), middle ("m") or end ("t") of a feature >>>> which may >>>> overlap. The number distinguishes between those overlapping occurences. >>>> For >>>> instance a >>>> >>>> <note beam="i1"> >>>> >>>> indicates the beginning of a beam. If a second, independent beam would >>>> start before the first ends, it would be start with a value of "i2". If >>>> it >>>> would start after the end of the first one, it would reuse the "i1" >>>> value. >>>> Trying to clarify such details in the Guidelines, Perry and me are >>>> wondering >>>> if this behaviour is particularly comprehensible, or if we should take >>>> this >>>> feature away completely. You may encode beams using either the <beam> or >>>> <beamSpan> element, the first one being extremely comfortable, the >>>> second >>>> extremely flexible. Besides beams, the same datatype is available for >>>> tuplets >>>> and slurs, which also offer other encoding possibilities. Is anyone >>>> actually using the functionality of this datatype, or do you have strong >>>> opinions >>>> for other reasons? 
>>>> >>>> It would be great if we could get some feedback in the next couple of >>>> days >>>> in order to make a decision about this soon. >>>> >>>> Thanks very much, >>>> Perry and Johannes >>>> _______________________________________________ >>>> mei-l mailing list >>>> mei-l at lists.uni-paderborn.de<mailto:mei-l at lists.uni-paderborn.de> >>>> https://lists.uni-paderborn.de/mailman/listinfo/mei-l >>> >>> _______________________________________________ >>> mei-l mailing list >>> mei-l at lists.uni-paderborn.de<mailto:mei-l at lists.uni-paderborn.de> >>> https://lists.uni-paderborn.de/mailman/listinfo/mei-l >> >> >> >> _______________________________________________ >> mei-l mailing list >> mei-l at lists.uni-paderborn.de<mailto:mei-l at lists.uni-paderborn.de> >> https://lists.uni-paderborn.de/mailman/listinfo/mei-l >> > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de<mailto:mei-l at lists.uni-paderborn.de> > https://lists.uni-paderborn.de/mailman/listinfo/mei-l _______________________________________________ mei-l mailing list mei-l at lists.uni-paderborn.de<mailto:mei-l at lists.uni-paderborn.de> https://lists.uni-paderborn.de/mailman/listinfo/mei-l -------------- next part -------------- An HTML attachment was scrubbed... 
URL: <http://lists.uni-paderborn.de/pipermail/mei-l/attachments/20120707/6d0606dd/attachment.html> From kepper at edirom.de Sun Jul 8 00:39:02 2012 From: kepper at edirom.de (Johannes Kepper) Date: Sun, 8 Jul 2012 00:39:02 +0200 Subject: [MEI-L] datatype [imt][1-6] In-Reply-To: <BBCC497C40D85642B90E9F94FC30343D0118DDB8@GRANT.eservices.virginia.edu> References: <D6F26033-7EAE-4C8B-9BA5-475EFAA0F10F@edirom.de> <20120706164804.181210@gmx.net> <CAMyHAnOiob_Le17_dr27UWsouY8q+dHh_8HxDuTCMP4KMnRYqg@mail.gmail.com> <CAEB1mAptk9Sxg=-Qgg5pdxCGDeqQQ8NRERm4dLJNwFe0-KGdgg@mail.gmail.com> <0B19E3D8-4529-439C-BA8D-CCF33104E542@edirom.de>, <CAMyHAnN6ZQ07YWzbwibkbYRGRjLvJb43Q6pRg4psuh3q-+ixmw@mail.gmail.com> <BBCC497C40D85642B90E9F94FC30343D0118DDB8@GRANT.eservices.virginia.edu> Message-ID: <033AAAD5-891C-4BB2-B8CE-9940828AFD4F@edirom.de> comments inline? Am 07.07.2012 um 12:38 schrieb Roland, Perry (pdr4h): > Raffaele, > > In this context "one pass encoding" means capturing all information available at any given point in the encoding at one time. For example, while you're on a given note, it means capturing info about any ties, slurs, beams, etc. that start on that note. MEI accomplishes this primarily with attributes; that is, @beam, @slur, @tie. > > "Two pass encoding" means capturing the note information first and the beam, tie, and slur data later, relating the second pass to the first, of course. This method results in note, chord, etc. elements followed later by tie, slur, etc. elements with @tstamp or @startid attributes. > > The beam element is slightly anomalous in that it encloses notes, but it's really part of the one-pass method. It's difficult to treat other things, like slur, tie, etc. that might overlap each other the same way as beams. And even for beams, when overlapping occurs one must switch to beamSpan. 
> > I still believe one-pass encoding has a place in MEI, at least until any hand encoding (meaning editing of the MEI file in a non-graphical environment) is no longer necessary. However, these attributes should certainly be better documented and perhaps moved to a separate module instead of being eliminated altogether. We could even go so far as to define them as unacceptable in "canonical" MEI as long as we provide conversion from the attribute method to the element method. In addition to their usefulness in hand-coding, these attributes are helpful when transforming data already in a one-pass-oriented form, such as MuseData and to some extent MusicXML, to MEI. > > I agree that keeping @tie while eliminating the other "one-pass attributes" is inconsistent. This is another reason for not getting rid of @slur, etc. entirely. It would be inconsistent if our aim were to eliminate all one-pass solutions, which is not the case. We're trying to solve an apparent problem arising from a certain datatype. It's just a side-effect that this means we lose a one-pass encoding for these things. If we knew an unambiguous one-pass method, we'd certainly implement that one instead (suggestions welcome?). > > When hand-coding is no longer necessary, then @slur, etc. including @tie can go away, but I don't think we're completely there yet. > Dropping all the mentioned one-pass methods shouldn't be our goal at any time. MEI aims at people who should really care about their data, meaning they should know MEI well enough to be able to check what applications do for them. Not everyone will encode by hand, but that's certainly a possibility we don't want to lose at any time! I'm not arguing for throwing away these attributes without further notice, I just want to get them "out of the way" to prevent future confusion about how to use them, which will eventually result in confusion on how to interpret MEI files using them. 
There is still sufficiently little use of MEI to do such changes right now, but that's certainly a closing time slot? > Just my two cents, > > -- > p. > > and my two euro-cents, jo > > __________________________ > Perry Roland > Music Library > University of Virginia > P. O. Box 400175 > Charlottesville, VA 22904 > 434-982-2702 (w) > pdr4h (at) virginia (dot) edu > From: mei-l-bounces at lists.uni-paderborn.de [mei-l-bounces at lists.uni-paderborn.de] on behalf of Raffaele Viglianti [raffaeleviglianti at gmail.com] > Sent: Saturday, July 07, 2012 6:07 AM > To: Music Encoding Initiative > Subject: Re: [MEI-L] datatype [imt][1-6] > > Hello, > > On Fri, Jul 6, 2012 at 11:23 PM, Johannes Kepper <kepper at edirom.de> wrote: > > Basically, this change would mean that we drop the possibility for a one-pass encoding here and require a second pass. That's certainly a drawback > > A second pass towards what? Do you have a specific transformation in mind? Once the XML tree is parsed, knowing that a note is beamed (and whether its i/m/t) by its attribute value or by its ancestor::beam + position should be equivalent (beamspan is more complex, but that's already the case) . > > I'm also slightly concerned about keeping @tie if we get rid of @slur and @beam. Even though @tie is less ambiguous and so has a different datatype, they all just seem to be part of the same category to me. > > Best, > Raffaele > > . > > Thanks again, and still listening to other opinions, > Johannes > > > > Am 06.07.2012 um 23:17 schrieb TW: > > > I agree with this, as well. I never felt comfortable with this kind > > of beam/slur indication as it scatters information about a single > > object over several others, and the object itself doesn't even exist > > as an entity of its own. @beam/@slur attributes would make sense to > > me if they were plists/idrefs pointing to the beam/beamSpan/slur > > element they belong to. 
> > > > For lyrics however, i|m|t totally convinces me as it indicates the > > relationship between different entities. > > > > Thomas > > > > > > 2012/7/6 Raffaele Viglianti <raffaeleviglianti at gmail.com>: > >> Dear all, > >> > >> In general, I would agree with both of Andrew's points: > >> - get rid of the datatype for beam and other "spanners" like slurs and > >> tuplets (and the related attributes); > >> - keep i/m/t for wordpos (can't think of any other places where this could > >> be needed, though as a rule of thumb, if 1-6 is not needed then i/m/t might > >> be worth keeping. > >> > >> Best, > >> Raffaele > >> > >> > >> > >> On Fri, Jul 6, 2012 at 5:48 PM, Maja Hartwig <Maja.Hartwig at gmx.de> wrote: > >>> > >>> Dear Johannes and Perry! > >>> > >>> Actually I used this datatype, but never for encoding beams, rather for > >>> slurs, though my understanding was a different one. > >>> I used the "i1" for the first slur in a measure, "i2" for the second one > >>> also when the first beam ended before. > >>> So I didn?t use the "i1" twice in one measure, because I think that would > >>> be confusing to get the appropriate "i?s" and "t?s" together. > >>> I was always wondering about the limit of 1-6, because in my way of using > >>> that datatype, I couldn?t encode more than 6 slurs in a measure. > >>> Now I switched over to encode slurs with @tstamp and duration or @startid > >>> and @endid. > >>> To encode beams, I always use the <beam>/<beamSpan>, and I think I > >>> wouldn?t miss the i/m/t 1-6! 
> >>> Best, > >>> > >>> Maja > >>> > >>> > >>> -------- Original-Nachricht -------- > >>>> Datum: Fri, 6 Jul 2012 18:16:41 +0200 > >>>> Von: Johannes Kepper <kepper at edirom.de> > >>>> An: Music Encoding Initiative <mei-l at lists.uni-paderborn.de> > >>>> Betreff: [MEI-L] datatype [imt][1-6] > >>> > >>>> Dear MEI-L, > >>>> > >>>> this is a somewhat technical question that I still would like to ask all > >>>> of you, although it's particularly interesting to hear developer's > >>>> opinions. > >>>> > >>>> MEI offers several attributes with a datatype of [imt][1-6], that is one > >>>> letter out of i, m or t, follwed by a digit from 1 to 6. It is used to > >>>> indicate the beginning ("i"), middle ("m") or end ("t") of a feature > >>>> which may > >>>> overlap. The number distinguishes between those overlapping occurences. > >>>> For > >>>> instance a > >>>> > >>>> <note beam="i1"> > >>>> > >>>> indicates the beginning of a beam. If a second, independent beam would > >>>> start before the first ends, it would be start with a value of "i2". If > >>>> it > >>>> would start after the end of the first one, it would reuse the "i1" > >>>> value. > >>>> Trying to clarify such details in the Guidelines, Perry and me are > >>>> wondering > >>>> if this behaviour is particularly comprehensible, or if we should take > >>>> this > >>>> feature away completely. You may encode beams using either the <beam> or > >>>> <beamSpan> element, the first one being extremely comfortable, the > >>>> second > >>>> extremely flexible. Besides beams, the same datatype is available for > >>>> tuplets > >>>> and slurs, which also offer other encoding possibilities. Is anyone > >>>> actually using the functionality of this datatype, or do you have strong > >>>> opinions > >>>> for other reasons? > >>>> > >>>> It would be great if we could get some feedback in the next couple of > >>>> days > >>>> in order to make a decision about this soon. 
> >>>> > >>>> Thanks very much, > >>>> Perry and Johannes > >>>> _______________________________________________ > >>>> mei-l mailing list > >>>> mei-l at lists.uni-paderborn.de > >>>> https://lists.uni-paderborn.de/mailman/listinfo/mei-l > >>> > >>> _______________________________________________ > >>> mei-l mailing list > >>> mei-l at lists.uni-paderborn.de > >>> https://lists.uni-paderborn.de/mailman/listinfo/mei-l > >> > >> > >> > >> _______________________________________________ > >> mei-l mailing list > >> mei-l at lists.uni-paderborn.de > >> https://lists.uni-paderborn.de/mailman/listinfo/mei-l > >> > > > > _______________________________________________ > > mei-l mailing list > > mei-l at lists.uni-paderborn.de > > https://lists.uni-paderborn.de/mailman/listinfo/mei-l > > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l From zupftom at googlemail.com Sun Jul 15 08:24:58 2012 From: zupftom at googlemail.com (TW) Date: Sun, 15 Jul 2012 08:24:58 +0200 Subject: [MEI-L] More "precise" <app>s Message-ID: <CAEB1mAp2P2Lh4mCNpmwJs86TMpfbUnZ8mtCsrsTzHxAO67JMoA@mail.gmail.com> Dear MEI family, how does one encode variant readings that only affect a certain "strand" of events? 
For example, if we have a full orchestral score where there are different readings of the flute's measures 1 and 2, I see the following options: 1a) One big <app> around measure 1 and 2, containing all staffs twice, like so: <app> <rdg> <measure n="1"> <staff n="1"> <!-- Some music --> </staff> <staff n="2"/> <staff n="3"/> <staff n="4"/> <staff n="5"/> <staff n="6"/> <staff n="7"/> <staff n="8"/> <staff n="9"/> <staff n="10"/> </measure> <measure n="2"> <staff n="1"> <!-- Some more music --> </staff> <staff n="2"/> <staff n="3"/> <staff n="4"/> <staff n="5"/> <staff n="6"/> <staff n="7"/> <staff n="8"/> <staff n="9"/> <staff n="10"/> </measure> </rdg> <rdg> <measure n="1"> <staff n="1"> <!-- Some variant of the music --> </staff> <staff n="2"/> <staff n="3"/> <staff n="4"/> <staff n="5"/> <staff n="6"/> <staff n="7"/> <staff n="8"/> <staff n="9"/> <staff n="10"/> </measure> <measure n="2"> <staff n="1"> <!-- Continued variant of the music --> </staff> <staff n="2"/> <staff n="3"/> <staff n="4"/> <staff n="5"/> <staff n="6"/> <staff n="7"/> <staff n="8"/> <staff n="9"/> <staff n="10"/> </measure> </rdg> </app> This already seems like overkill, although I didn't even fill the staffs with music. And facing such a monster, you'd have to look very closely to spot the actual variant readings, so this is certainly not the way to go. 1b) To avoid filling the staffs twice, one could of course use @copyof the second time. But this might break ID references. 
2) Split this in two <app>s, like so: <measure n="1"> <app> <rdg> <staff n="1"> <!-- Some music --> </staff> </rdg> <rdg> <staff n="1"> <!-- Some variant of the music --> </staff> </rdg> </app> <staff n="2"/> <staff n="3"/> <staff n="4"/> <staff n="5"/> <staff n="6"/> <staff n="7"/> <staff n="8"/> <staff n="9"/> <staff n="10"/> </measure> <measure n="2"> <app> <rdg> <staff n="1"> <!-- Some more music --> </staff> </rdg> <rdg> <staff n="1"> <!-- Continued variant of the music --> </staff> </rdg> </app> <staff n="2"/> <staff n="3"/> <staff n="4"/> <staff n="5"/> <staff n="6"/> <staff n="7"/> <staff n="8"/> <staff n="9"/> <staff n="10"/> </measure> This looks preferable to me, especially because it allows "overlapping" variants, e.g. when there is another phrase with variant readings, but this time for the basses from measure 2 to 3. Unfortunately, I don't see a really good way to indicate that any two or more <app> elements form a logical unit. I think that the att.common.anl family of attributes might be useful for this, especially the @prev and @next attributes because they imply the concept of a "collection". The next best solution might be to use @prev and @next on the contained <lem> and <rdg> elements, but providing them on <app> would make things much clearer and easier. Therefore, shouldn't att.common.anl be allowed on <app>? Thomas From andrew.hankinson at mail.mcgill.ca Sun Jul 15 14:53:04 2012 From: andrew.hankinson at mail.mcgill.ca (Andrew Hankinson, Mr) Date: Sun, 15 Jul 2012 12:53:04 +0000 Subject: [MEI-L] More "precise" <app>s In-Reply-To: <19585_1342333512_50026248_19585_2490_1_CAEB1mAp2P2Lh4mCNpmwJs86TMpfbUnZ8mtCsrsTzHxAO67JMoA@mail.gmail.com> References: <19585_1342333512_50026248_19585_2490_1_CAEB1mAp2P2Lh4mCNpmwJs86TMpfbUnZ8mtCsrsTzHxAO67JMoA@mail.gmail.com> Message-ID: <CC3E1EAB6662EA4A95FC6E498314808B10B6497D@EXMBX2010-5.campus.MCGILL.CA> Hi Thomas, I think you're looking for @source on <rdg>. <meiHead> .... 
<sourceDesc> <source xml:id="sourceA"> <pubStmt/> </source> <source xml:id="sourceB"> <pubStmt/> </source> </sourceDesc> .... </meiHead> ... <measure n="1"> <app> <rdg source="sourceA"> <staff n="1"> <!-- Some music --> </staff> </rdg> <rdg source="sourceB"> <staff n="1"> <!-- Some variant of the music --> </staff> </rdg> </app> <staff n="2"/> <staff n="3"/> <staff n="4"/> <staff n="5"/> <staff n="6"/> <staff n="7"/> <staff n="8"/> <staff n="9"/> <staff n="10"/> </measure> <measure n="2"> <app> <rdg source="sourceA"> <staff n="1"> <!-- Some more music --> </staff> </rdg> <rdg source="sourceB"> <staff n="1"> <!-- Continued variant of the music --> </staff> </rdg> </app> <staff n="2"/> <staff n="3"/> <staff n="4"/> <staff n="5"/> <staff n="6"/> <staff n="7"/> <staff n="8"/> <staff n="9"/> <staff n="10"/> </measure> @source can take a space-delimited list of xml:ids too. -Andrew On 2012-07-15, at 2:24 AM, TW wrote: > Dear MEI family, > > how does one encode variant readings that only affect a certain > "strand" of events? 
For example, if we have a full orchestral score > where there are different readings of the flute's measures 1 and 2, I > see the following options: > 1a) One big <app> around measure 1 and 2, containing all staffs twice, like so: > <app> > <rdg> > <measure n="1"> > <staff n="1"> > <!-- Some music --> > </staff> > <staff n="2"/> > <staff n="3"/> > <staff n="4"/> > <staff n="5"/> > <staff n="6"/> > <staff n="7"/> > <staff n="8"/> > <staff n="9"/> > <staff n="10"/> > </measure> > <measure n="2"> > <staff n="1"> > <!-- Some more music --> > </staff> > <staff n="2"/> > <staff n="3"/> > <staff n="4"/> > <staff n="5"/> > <staff n="6"/> > <staff n="7"/> > <staff n="8"/> > <staff n="9"/> > <staff n="10"/> > </measure> > </rdg> > <rdg> > <measure n="1"> > <staff n="1"> > <!-- Some variant of the music --> > </staff> > <staff n="2"/> > <staff n="3"/> > <staff n="4"/> > <staff n="5"/> > <staff n="6"/> > <staff n="7"/> > <staff n="8"/> > <staff n="9"/> > <staff n="10"/> > </measure> > <measure n="2"> > <staff n="1"> > <!-- Continued variant of the music --> > </staff> > <staff n="2"/> > <staff n="3"/> > <staff n="4"/> > <staff n="5"/> > <staff n="6"/> > <staff n="7"/> > <staff n="8"/> > <staff n="9"/> > <staff n="10"/> > </measure> > </rdg> > </app> > This already seems like overkill, although I didn't even fill the > staffs with music. And facing such a monster, you'd have to look very > closely to spot the actual variant readings, so this is certainly not > the way to go. > > 1b) To avoid filling the staffs twice, one could of course use @copyof > the second time. But this might break ID references. 
> > 2) Split this in two <app>s, like so: > <measure n="1"> > <app> > <rdg> > <staff n="1"> > <!-- Some music --> > </staff> > </rdg> > <rdg> > <staff n="1"> > <!-- Some variant of the music --> > </staff> > </rdg> > </app> > <staff n="2"/> > <staff n="3"/> > <staff n="4"/> > <staff n="5"/> > <staff n="6"/> > <staff n="7"/> > <staff n="8"/> > <staff n="9"/> > <staff n="10"/> > </measure> > <measure n="2"> > <app> > <rdg> > <staff n="1"> > <!-- Some more music --> > </staff> > </rdg> > <rdg> > <staff n="1"> > <!-- Continued variant of the music --> > </staff> > </rdg> > </app> > <staff n="2"/> > <staff n="3"/> > <staff n="4"/> > <staff n="5"/> > <staff n="6"/> > <staff n="7"/> > <staff n="8"/> > <staff n="9"/> > <staff n="10"/> > </measure> > This looks preferable to me, especially because it allows > "overlapping" variants, e.g. when there is another phrase with variant > readings, but this time for the basses from measure 2 to 3. > Unfortunately, I don't see a really good way to indicate that any two > or more <app> elements form a logical unit. I think that the > att.common.anl family of attributes might be useful for this, > especially the @prev and @next attributes because they imply the > concept of a "collection". The next best solution might be to use > @prev and @next on the contained <lem> and <rdg> elements, but > providing them on <app> would make things much clearer and easier. > > Therefore, shouldn't att.common.anl be allowed on <app>? > > Thomas > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l -------------- next part -------------- An HTML attachment was scrubbed... URL: <http://lists.uni-paderborn.de/pipermail/mei-l/attachments/20120715/9c785e5d/attachment.html> -------------- next part -------------- A non-text attachment was scrubbed... 
Name: smime.p7s Type: application/pkcs7-signature Size: 1054 bytes Desc: not available URL: <http://lists.uni-paderborn.de/pipermail/mei-l/attachments/20120715/9c785e5d/attachment.bin> From raffaeleviglianti at gmail.com Sun Jul 15 16:17:41 2012 From: raffaeleviglianti at gmail.com (Raffaele Viglianti) Date: Sun, 15 Jul 2012 15:17:41 +0100 Subject: [MEI-L] More "precise" <app>s In-Reply-To: <CC3E1EAB6662EA4A95FC6E498314808B10B6497D@EXMBX2010-5.campus.MCGILL.CA> References: <19585_1342333512_50026248_19585_2490_1_CAEB1mAp2P2Lh4mCNpmwJs86TMpfbUnZ8mtCsrsTzHxAO67JMoA@mail.gmail.com> <CC3E1EAB6662EA4A95FC6E498314808B10B6497D@EXMBX2010-5.campus.MCGILL.CA> Message-ID: <CAMyHAnOJX9R-G8rWUN8G5pUHMUc1ahUqTXN-QJbd5-k1gqOxvw@mail.gmail.com> Hello, As Andrew said, @source helps, as it expresses which source each reading comes from. Though if I understand Thomas' issue correctly, he's also interested in encoding the fact that the two apps can form one apparatus entry. Imagine for example an apparatus entry that says: Measures 1-2; System: Flutes; Comment: values of rests are incorrect in source A; source B has correct values. This can be split into two apps, as in the example that Andrew wrote. However, using only @source wouldn't reflect the fact that the two apps are part of one editorial statement. A use case for using @prev and @next would be to be able to generate the apparatus entry above from the encoding (I have seen this done in TEI). I don't think it's particularly urgent, but I can definitely see value in having @prev and @next on app. At least it would compensate for the fact that one app can contain several staves in one measure at once, but not multiple measures in one staff. Best, Raffaele On Sun, Jul 15, 2012 at 1:53 PM, Andrew Hankinson, Mr < andrew.hankinson at mail.mcgill.ca> wrote: > Hi Thomas, > > I think you're looking for @source on <rdg>. > > <meiHead> > ....
> <sourceDesc> > <source xml:id="sourceA"> > <pubStmt/> > </source> > <source xml:id="sourceB"> > <pubStmt/> > </source> > </sourceDesc> > .... > </meiHead> > ... > <measure n="1"> > <app> > <rdg source="sourceA"> > > <staff n="1"> > <!-- Some music --> > </staff> > </rdg> > <rdg source="sourceB"> > > <staff n="1"> > <!-- Some variant of the music --> > </staff> > </rdg> > </app> > <staff n="2"/> > <staff n="3"/> > <staff n="4"/> > <staff n="5"/> > <staff n="6"/> > <staff n="7"/> > <staff n="8"/> > <staff n="9"/> > <staff n="10"/> > </measure> > <measure n="2"> > <app> > <rdg source="sourceA"> > > <staff n="1"> > <!-- Some more music --> > </staff> > </rdg> > <rdg source="sourceB"> > > <staff n="1"> > <!-- Continued variant of the music --> > </staff> > </rdg> > </app> > <staff n="2"/> > <staff n="3"/> > <staff n="4"/> > <staff n="5"/> > <staff n="6"/> > <staff n="7"/> > <staff n="8"/> > <staff n="9"/> > <staff n="10"/> > </measure> > > @source can take a space-delimited list of xml:ids too. > > -Andrew > > On 2012-07-15, at 2:24 AM, TW wrote: > > Dear MEI family, > > how does one encode variant readings that only affect a certain > "strand" of events? 
For example, if we have a full orchestral score > where there are different readings of the flute's measures 1 and 2, I > see the following options: > 1a) One big <app> around measure 1 and 2, containing all staffs twice, > like so: > <app> > <rdg> > <measure n="1"> > <staff n="1"> > <!-- Some music --> > </staff> > <staff n="2"/> > <staff n="3"/> > <staff n="4"/> > <staff n="5"/> > <staff n="6"/> > <staff n="7"/> > <staff n="8"/> > <staff n="9"/> > <staff n="10"/> > </measure> > <measure n="2"> > <staff n="1"> > <!-- Some more music --> > </staff> > <staff n="2"/> > <staff n="3"/> > <staff n="4"/> > <staff n="5"/> > <staff n="6"/> > <staff n="7"/> > <staff n="8"/> > <staff n="9"/> > <staff n="10"/> > </measure> > </rdg> > <rdg> > <measure n="1"> > <staff n="1"> > <!-- Some variant of the music --> > </staff> > <staff n="2"/> > <staff n="3"/> > <staff n="4"/> > <staff n="5"/> > <staff n="6"/> > <staff n="7"/> > <staff n="8"/> > <staff n="9"/> > <staff n="10"/> > </measure> > <measure n="2"> > <staff n="1"> > <!-- Continued variant of the music --> > </staff> > <staff n="2"/> > <staff n="3"/> > <staff n="4"/> > <staff n="5"/> > <staff n="6"/> > <staff n="7"/> > <staff n="8"/> > <staff n="9"/> > <staff n="10"/> > </measure> > </rdg> > </app> > This already seems like overkill, although I didn't even fill the > staffs with music. And facing such a monster, you'd have to look very > closely to spot the actual variant readings, so this is certainly not > the way to go. > > 1b) To avoid filling the staffs twice, one could of course use @copyof > the second time. But this might break ID references. 
> > 2) Split this in two <app>s, like so: > <measure n="1"> > <app> > <rdg> > <staff n="1"> > <!-- Some music --> > </staff> > </rdg> > <rdg> > <staff n="1"> > <!-- Some variant of the music --> > </staff> > </rdg> > </app> > <staff n="2"/> > <staff n="3"/> > <staff n="4"/> > <staff n="5"/> > <staff n="6"/> > <staff n="7"/> > <staff n="8"/> > <staff n="9"/> > <staff n="10"/> > </measure> > <measure n="2"> > <app> > <rdg> > <staff n="1"> > <!-- Some more music --> > </staff> > </rdg> > <rdg> > <staff n="1"> > <!-- Continued variant of the music --> > </staff> > </rdg> > </app> > <staff n="2"/> > <staff n="3"/> > <staff n="4"/> > <staff n="5"/> > <staff n="6"/> > <staff n="7"/> > <staff n="8"/> > <staff n="9"/> > <staff n="10"/> > </measure> > This looks preferable to me, especially because it allows > "overlapping" variants, e.g. when there is another phrase with variant > readings, but this time for the basses from measure 2 to 3. > Unfortunately, I don't see a really good way to indicate that any two > or more <app> elements form a logical unit. I think that the > att.common.anl family of attributes might be useful for this, > especially the @prev and @next attributes because they imply the > concept of a "collection". The next best solution might be to use > @prev and @next on the contained <lem> and <rdg> elements, but > providing them on <app> would make things much clearer and easier. > > Therefore, shouldn't att.common.anl be allowed on <app>? > > Thomas > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l > > > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: <http://lists.uni-paderborn.de/pipermail/mei-l/attachments/20120715/570147fc/attachment.html> From zupftom at googlemail.com Sun Jul 15 21:27:27 2012 From: zupftom at googlemail.com (TW) Date: Sun, 15 Jul 2012 21:27:27 +0200 Subject: [MEI-L] More "precise" <app>s In-Reply-To: <CAMyHAnOJX9R-G8rWUN8G5pUHMUc1ahUqTXN-QJbd5-k1gqOxvw@mail.gmail.com> References: <19585_1342333512_50026248_19585_2490_1_CAEB1mAp2P2Lh4mCNpmwJs86TMpfbUnZ8mtCsrsTzHxAO67JMoA@mail.gmail.com> <CC3E1EAB6662EA4A95FC6E498314808B10B6497D@EXMBX2010-5.campus.MCGILL.CA> <CAMyHAnOJX9R-G8rWUN8G5pUHMUc1ahUqTXN-QJbd5-k1gqOxvw@mail.gmail.com> Message-ID: <CAEB1mAqKKV-xNeVtszvR_6yX8P9FCwGxfRmR4Oun_qMyPGzo0w@mail.gmail.com> Hi Raffaele, 2012/7/15 Raffaele Viglianti <raffaeleviglianti at gmail.com>: > Though if I understand Thomas' issue correctly, he's also interested in > encoding the fact that the two apps can form one apparatus entry. > Exactly. > > A use case for using @prev and @next would be to be able to generate the > apparatus entry above from the encoding (I have seen this done in TEI). > Thanks for the hint. I see that TEI's <app> (and also <choice>) indeed have @next and @prev. I think I'll create a feature request issue. And yes, I'm thinking about the automatic generation of apparatus entries. It would be messy to have things chopped up in multiple entries that actually are logically one entry (like one entry for measure 1 and a second one for measure two of the flute, although they form one melodic phrase together). It would be equally messy to get one huge entry that contains a lot of "noise" (like a two-measure <app> that presents the whole orchestra, although only the flute varies).
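The apparatus-entry generation sketched in this exchange could look roughly like the following, if @prev/@next were allowed on <app>. Note that this attribute pair on <app> is precisely the feature being requested in this thread, not something the then-current schema offers; the Python helper and the toy document are illustrative assumptions only:

```python
import xml.etree.ElementTree as ET

# ElementTree's key for the predefined xml: namespace (xml:id).
XML_ID = "{http://www.w3.org/XML/1998/namespace}id"

def collect_entry(root, start_id):
    """Follow @next links from one <app> to gather the whole logical entry."""
    by_id = {el.get(XML_ID): el for el in root.iter() if el.get(XML_ID)}
    entry, current = [], by_id[start_id]
    while current is not None:
        entry.append(current)
        nxt = current.get("next")
        current = by_id[nxt.lstrip("#")] if nxt else None
    return entry

# Two per-measure <app>s chained into one logical apparatus entry.
doc = ET.fromstring(
    '<score>'
    '<measure n="1"><app xml:id="app1" next="#app2"/></measure>'
    '<measure n="2"><app xml:id="app2" prev="#app1"/></measure>'
    '</score>'
)
entry_ids = [a.get(XML_ID) for a in collect_entry(doc, "app1")]
print(entry_ids)  # ['app1', 'app2']
```

An apparatus generator could then render the chained <app>s as one entry ("Measures 1-2, flute") instead of two disconnected ones.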
Wish you a good start into the week ahead Thomas From lxpugin at gmail.com Mon Jul 16 14:58:43 2012 From: lxpugin at gmail.com (Laurent Pugin) Date: Mon, 16 Jul 2012 14:58:43 +0200 Subject: [MEI-L] SVG Message-ID: <CAJ306HYs1bD4Y6TKXwoTUeP=mCy46tstmThdiDTvcdq=dPpRNQ@mail.gmail.com> Hi, I am looking for a way to include a SVG figure in the /workDesc/work/incip element in the header. The <graphic> element does not seem to be usable for this. Am I wrong? Is there a workaround to do it without using a customization? Thanks, Laurent -------------- next part -------------- An HTML attachment was scrubbed... URL: <http://lists.uni-paderborn.de/pipermail/mei-l/attachments/20120716/157b1d02/attachment.html> From pdr4h at eservices.virginia.edu Mon Jul 16 15:21:07 2012 From: pdr4h at eservices.virginia.edu (Roland, Perry (pdr4h)) Date: Mon, 16 Jul 2012 13:21:07 +0000 Subject: [MEI-L] SVG In-Reply-To: <CAJ306HYs1bD4Y6TKXwoTUeP=mCy46tstmThdiDTvcdq=dPpRNQ@mail.gmail.com> References: <CAJ306HYs1bD4Y6TKXwoTUeP=mCy46tstmThdiDTvcdq=dPpRNQ@mail.gmail.com> Message-ID: <BBCC497C40D85642B90E9F94FC30343D0EF61CE4@GRANT.eservices.virginia.edu> Laurent, Currently, there's no way to do it without a customization. But, there's an open issue (#53) regarding SVG and other XML markup in MEI. Any comments you have on the topic would be greatly appreciated. Can you add them to the issue please, so we don't forget? -- p. __________________________ Perry Roland Music Library University of Virginia P. O. Box 400175 Charlottesville, VA 22904 434-982-2702 (w) pdr4h (at) virginia (dot) edu ________________________________ From: mei-l-bounces at lists.uni-paderborn.de [mei-l-bounces at lists.uni-paderborn.de] on behalf of Laurent Pugin [lxpugin at gmail.com] Sent: Monday, July 16, 2012 8:58 AM To: MEI-list Subject: [MEI-L] SVG Hi, I am looking for a way to include a SVG figure in the /workDesc/work/incip element in the header. The <graphic> element does not seem to be usable for this. Am I wrong? 
Is there a workaround to do it without using a customization? Thanks, Laurent -------------- next part -------------- An HTML attachment was scrubbed... URL: <http://lists.uni-paderborn.de/pipermail/mei-l/attachments/20120716/45d29f87/attachment.html> From pdr4h at eservices.virginia.edu Wed Jul 18 22:24:43 2012 From: pdr4h at eservices.virginia.edu (Roland, Perry (pdr4h)) Date: Wed, 18 Jul 2012 20:24:43 +0000 Subject: [MEI-L] MEI workshop at 2012 AMS meeting Message-ID: <BBCC497C40D85642B90E9F94FC30343D0EF63381@GRANT.eservices.virginia.edu> Hello, everyone, Please pardon any duplicate posts and forward liberally. :-) "Introduction to MEI," an intensive, hands-on workshop, will be offered Wednesday, 31 October from 9:00 a.m. to 5:30 p.m. Experts from the Music Encoding Initiative Council will teach the workshop, during which participants will learn about MEI history and design principles, tools for creating, editing, and rendering MEI, and techniques for customizing the MEI schema. The day will include lectures, hands-on practice, and opportunities to address participant-specific issues. There are no fees associated with this workshop and no previous experience with MEI or XML is required; however, an understanding of music notation and other markup schemes, such as HTML and TEI, will be helpful. Participants are encouraged to bring a laptop computer for hands-on exercises. The number of participants is limited to 25. Register early at http://tinyurl.com/2012meiWorkshopAMS. Registration for the AMS meeting is not required. Please address questions to info at music-encoding.org. -- p. __________________________ Perry Roland Music Library University of Virginia P. O. 
Box 400175 Charlottesville, VA 22904 434-982-2702 (w) pdr4h (at) virginia (dot) edu From pdr4h at eservices.virginia.edu Wed Jul 18 22:48:24 2012 From: pdr4h at eservices.virginia.edu (Roland, Perry (pdr4h)) Date: Wed, 18 Jul 2012 20:48:24 +0000 Subject: [MEI-L] MEI workshop at 2012 AMS meeting In-Reply-To: <BBCC497C40D85642B90E9F94FC30343D0EF63381@GRANT.eservices.virginia.edu> References: <BBCC497C40D85642B90E9F94FC30343D0EF63381@GRANT.eservices.virginia.edu> Message-ID: <BBCC497C40D85642B90E9F94FC30343D0EF633BA@GRANT.eservices.virginia.edu> Hello again, Sorry, the URL I sent before points to the administrative page. http://tinyurl.com/2012MEIAMS points to the registration form. Sorry for the confusion, -- p. __________________________ Perry Roland Music Library University of Virginia P. O. Box 400175 Charlottesville, VA 22904 434-982-2702 (w) pdr4h (at) virginia (dot) edu ________________________________________ From: Roland, Perry (pdr4h) Sent: Wednesday, July 18, 2012 4:24 PM To: mei-l at lists.uni-paderborn.de Subject: MEI workshop at 2012 AMS meeting Hello, everyone, Please pardon any duplicate posts and forward liberally. :-) "Introduction to MEI," an intensive, hands-on workshop, will be offered Wednesday, 31 October from 9:00 a.m. to 5:30 p.m. Experts from the Music Encoding Initiative Council will teach the workshop, during which participants will learn about MEI history and design principles, tools for creating, editing, and rendering MEI, and techniques for customizing the MEI schema. The day will include lectures, hands-on practice, and opportunities to address participant-specific issues. There are no fees associated with this workshop and no previous experience with MEI or XML is required; however, an understanding of music notation and other markup schemes, such as HTML and TEI, will be helpful. Participants are encouraged to bring a laptop computer for hands-on exercises. The number of participants is limited to 25. 
Register early at http://tinyurl.com/2012meiWorkshopAMS. Registration for the AMS meeting is not required. Please address questions to info at music-encoding.org. -- p. __________________________ Perry Roland Music Library University of Virginia P. O. Box 400175 Charlottesville, VA 22904 434-982-2702 (w) pdr4h (at) virginia (dot) edu From bohl at edirom.de Mon Jul 23 09:13:46 2012 From: bohl at edirom.de (Benjamin W. Bohl) Date: Mon, 23 Jul 2012 09:13:46 +0200 Subject: [MEI-L] Antw.: More "precise" <app>s Message-ID: <0MVYxn-1TLPE120Ip-00YQpr@mrelayeu.kundenserver.de> Hi evbdy! Discussing the lack of a possibility to aggregate multiple app elements by means of an attribute, which could be very handy indeed, I'd like to put into consideration that @prev and @next imply an order. This might be exactly what one is looking for, especially when thinking of genetic editions, but something like @corresp might be more applicable in other situations. Cheers, Benjamin ----- Reply message ----- From: "TW" <zupftom at googlemail.com> To: "Music Encoding Initiative" <mei-l at lists.uni-paderborn.de> Subject: [MEI-L] More "precise" <app>s Date: Sun., Jul. 15, 2012 21:27 Hi Raffaele, 2012/7/15 Raffaele Viglianti <raffaeleviglianti at gmail.com>: > Though if I understand Thomas' issue correctly, he's also interested in > encoding the fact that the two apps can form one apparatus entry. > Exactly. > > A use case for using @prev and @next would be to be able to generate the > apparatus entry above from the encoding (I have seen this done in TEI). > Thanks for the hint. I see that TEI's <app> (and also <choice>) indeed have @next and @prev. I think I'll create a feature request issue. And yes, I'm thinking about the automatic generation of apparatus entries.
It would be messy to have things chopped up in multiple entries that actually are logically one entry (like one entry for measure 1 and a second one for measure two of the flute, although they form one melodic phrase together). It would be equally messy to get one huge entry that contains a lot of "noise" (like a two-measure <app> that presents the whole orchestra, although only the flute varies). Wish you a good start into the week ahead Thomas _______________________________________________ mei-l mailing list mei-l at lists.uni-paderborn.de https://lists.uni-paderborn.de/mailman/listinfo/mei-l -------------- next part -------------- An HTML attachment was scrubbed... URL: <http://lists.uni-paderborn.de/pipermail/mei-l/attachments/20120723/c9fac796/attachment.html> From zupftom at googlemail.com Mon Jul 23 13:14:37 2012 From: zupftom at googlemail.com (TW) Date: Mon, 23 Jul 2012 13:14:37 +0200 Subject: [MEI-L] Antw.: More "precise" <app>s In-Reply-To: <0MVYxn-1TLPE120Ip-00YQpr@mrelayeu.kundenserver.de> References: <0MVYxn-1TLPE120Ip-00YQpr@mrelayeu.kundenserver.de> Message-ID: <CAEB1mArk1E6Ut=JGgGNFZVHtdeu+ia7mhMK_-tnAxQ1Hdr-KbA@mail.gmail.com> 2012/7/23 Benjamin W. Bohl <bohl at edirom.de>: > Hi evbdy! > Discussing the lack of a possibility to aggregate multiple app elements by > means of an attribute, which could be very handy indeed, I'd like to put > into consideration that @prev and @next imply an order. This might be exactly > what one is looking for, especially when thinking of genetic editions, > but something like @corresp might be more applicable in other situations. > @corresp would be covered by my suggestion to allow att.common.anl on <app> and <choice>. However, @corresp always felt a bit vague to me.
I guess it could legitimately be used to point to an independent <app> element, meaning something like "Look, this is the same *kind* of difference" rather than "This belongs to one virtual entity that has been split because we can't break the hierarchy". @prev and @next are a bit more definite, implying a "user-defined collection". @sameas (also in att.common.anl) "sounds" like the most precise way of saying everything belongs to the same logical <app>, but I see that this meaning would be inconsistent with other uses of @sameas. Other options I can think of: - Having @plist on a "master" <app>, pointing to the subordinate <app>s. This is very clear, but it's rather arbitrary what becomes the master <app>. At first sight, a subordinate <app> can not be distinguished from a standalone <app>. - Some kind of stand-off markup? In any case, I think having att.common.anl would already be a really useful improvement. Thomas From kepper at edirom.de Tue Jul 24 11:30:05 2012 From: kepper at edirom.de (Johannes Kepper) Date: Tue, 24 Jul 2012 11:30:05 +0200 Subject: [MEI-L] Antw.: More "precise" <app>s In-Reply-To: <CAEB1mArk1E6Ut=JGgGNFZVHtdeu+ia7mhMK_-tnAxQ1Hdr-KbA@mail.gmail.com> References: <0MVYxn-1TLPE120Ip-00YQpr@mrelayeu.kundenserver.de> <CAEB1mArk1E6Ut=JGgGNFZVHtdeu+ia7mhMK_-tnAxQ1Hdr-KbA@mail.gmail.com> Message-ID: <86CE134D-87C1-41F2-8DD4-F3F7445E3723@edirom.de> Hi all, as others have argued before, I don't like @next / @prev on <app>. It seems like there is no order of differences between sources. There is only a list of differences, which might be ordered following some criteria, but there is no natural order of variants. Using @next to indicate that a separate <app> deals with the _same_ situation seems absolutely awkward. @sameas would require an inconsistent usage, as Thomas already pointed out. @corresp is a generic attribute that should be left open for analytical purposes. 
If we give it a specific meaning on <app>, we're adding another exception to MEI (the otherwise generic @label has a specific meaning on <staffDef> already…). For me, this situation is another argument for a feature request I've been talking about for quite a while. In my opinion, MEI will need a generic <grp> (group) element to address issues like this. This new element would have a @plist to point to a set of other elements which are to be regarded as one conceptual unit for some reason. A hint at this reason could be provided using @type. More specific variations of this element could be used for specific purposes: a <damageGrp> would point to <damage> elements which may spread across all levels of hierarchy etc. TEI uses *Span elements for this (<addSpan>), but TEI normally relies on a linear encoding of a base text, which is just interrupted by additional, "strippable" markup. A similar solution does not seem possible for MEI. Perry has often argued that <annot> does exactly this: it allows grouping a set of other elements using @plist and providing additional information about this group using text inside the <annot>. This is definitely an option, and works out of the box with the current schema. However, I think this approach is too generic, especially since it encourages the use of <annot> for anything and everything in MEI. Maybe I'm not convinced because <annot> has a very specific meaning for me (which might lead to a feature request for <annotStruct> in the future), but I would like to see a more specific solution for problems like this. Otherwise we would recommend the use of extremely powerful elements. In my eyes, doing so would significantly increase the risk of different encoding styles for the same problem, leading to incompatibility etc. However, I don't see all this for the upcoming release. The problem is definitely more fundamental than just a missing connection between related <app>s, and I would like to avoid rushing into a quick solution too hastily.
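To make the <grp> proposal concrete: the element does not exist in MEI (it is only being suggested in this message), but resolving such a stand-off @plist would be trivial for a consuming application. A hypothetical sketch, with the <grp> element and its attributes invented purely for illustration:

```python
import xml.etree.ElementTree as ET

# ElementTree's key for the predefined xml: namespace (xml:id).
XML_ID = "{http://www.w3.org/XML/1998/namespace}id"

def resolve_plist(root, grouping_el):
    """Resolve a space-separated @plist of URI fragments to elements."""
    by_id = {el.get(XML_ID): el for el in root.iter() if el.get(XML_ID)}
    return [by_id[ref.lstrip("#")]
            for ref in grouping_el.get("plist", "").split()]

# Hypothetical <grp> pointing at two <app>s that form one conceptual unit,
# without disturbing the measure/staff hierarchy around them.
doc = ET.fromstring(
    '<music>'
    '<app xml:id="a1"/>'
    '<app xml:id="a2"/>'
    '<grp xml:id="g1" type="app" plist="#a1 #a2"/>'
    '</music>'
)
members = resolve_plist(doc, doc.find("grp"))
print([m.get(XML_ID) for m in members])  # ['a1', 'a2']
```

Because the grouping lives outside the grouped elements, the same mechanism would work for <damage>, <app>, or anything else that crosses the hierarchy.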
I would suggest not adding anything to <app> right now, but instead relying on the existing <annot> for the moment (yes, I still remember my arguments…). This is nothing we're able to cope with for the upcoming release, but it's definitely something we should discuss for the next release (scheduled for 2013). Until then, I'd prefer to live with the existing compromises instead of adding new ones (which we'd have to clean up later…). Just my two cents, Johannes On 23.07.2012 at 13:14, TW wrote: > 2012/7/23 Benjamin W. Bohl <bohl at edirom.de>: >> Hi evbdy! >> Discussing the lack of a possibility to aggregate multiple app elements by >> means of an attribute, which could be very handy indeed, I'd like to put >> into consideration that @prev and @next imply an order. This might be exactly >> what one is looking for, especially when thinking of genetic editions, >> but something like @corresp might be more applicable in other situations. >> > > @corresp would be covered by my suggestion to allow att.common.anl on > <app> and <choice>. However, @corresp always felt a bit vague to me. > I guess it could legitimately be used to point to an independent <app> > element, meaning something like "Look, this is the same *kind* of > difference" rather than "This belongs to one virtual entity that has > been split because we can't break the hierarchy". @prev and @next are > a bit more definite, implying a "user-defined collection". > > @sameas (also in att.common.anl) "sounds" like the most precise way of > saying everything belongs to the same logical <app>, but I see that > this meaning would be inconsistent with other uses of @sameas. > > Other options I can think of: > - Having @plist on a "master" <app>, pointing to the subordinate > <app>s. This is very clear, but it's rather arbitrary what becomes > the master <app>. At first sight, a subordinate <app> can not be > distinguished from a standalone <app>. > - Some kind of stand-off markup?
> > In any case, I think having att.common.anl would already be a really > useful improvement. > > Thomas > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l From raffaeleviglianti at gmail.com Tue Jul 24 16:35:25 2012 From: raffaeleviglianti at gmail.com (Raffaele Viglianti) Date: Tue, 24 Jul 2012 15:35:25 +0100 Subject: [MEI-L] Antw.: More "precise" <app>s In-Reply-To: <86CE134D-87C1-41F2-8DD4-F3F7445E3723@edirom.de> References: <0MVYxn-1TLPE120Ip-00YQpr@mrelayeu.kundenserver.de> <CAEB1mArk1E6Ut=JGgGNFZVHtdeu+ia7mhMK_-tnAxQ1Hdr-KbA@mail.gmail.com> <86CE134D-87C1-41F2-8DD4-F3F7445E3723@edirom.de> Message-ID: <CAMyHAnPsd=2mLvHzKmMQ=gUZjFEWj=8F5gAw-NQDegHyLy+Eew@mail.gmail.com> Hi Johannes, Thanks for these very insightful comments! I agree with your arguments for a grouping element to help escape hierarchy (given that, one could imagine a stand-off approach to apparatus entries that might be quite interesting to work with). Nonetheless, I still see value in having @next and @prev. On Tue, Jul 24, 2012 at 10:30 AM, Johannes Kepper <kepper at edirom.de> wrote: > I don't like @next / @prev on <app>. It seems like there is no order of > differences between sources. Is the order of sources defined at app level? The @source attribute on rdg should allow you to list your rdgs in whatever order; information in the header dictates the relevance of each source over another. > Using @next to indicate that a separate <app> deals with the _same_ > situation seems absolutely awkward. > Although I can see why using @corresp or @sameas would be wrong, I don't see why using @next is awkward. @prev and @next don't indicate a sequence, but indicate an aggregation, which is what Thomas needed in his example. > For me, this situation is another argument to a feature request I've been > talking about quite a while. 
In my opinion, MEI will need a generic <grp> > (group) element to address issues like this. [...] A <damageGrp> would > point to <damage> elements which may spread across all levels of hierarchy > etc. TEI uses *Span elements for this (<addSpan>), but TEI normally relies > on a linear encoding of a base text, which is just interrupted by > additional, "strippable" markup. A similar solution seems not possible for > MEI.
>
This is true, and as I've said I like this idea. There still is value to embedded markup, though, particularly for apparatus entries. Still, I can see how it wouldn't work with damage, add, del, etc. in advanced manuscript transcription.

Best,
Raffaele

> Am 23.07.2012 um 13:14 schrieb TW:
> [...]
>
> _______________________________________________
> mei-l mailing list
> mei-l at lists.uni-paderborn.de
> https://lists.uni-paderborn.de/mailman/listinfo/mei-l

-------------- next part -------------- An HTML attachment was scrubbed... URL: <http://lists.uni-paderborn.de/pipermail/mei-l/attachments/20120724/1b750574/attachment.html> From zupftom at googlemail.com Tue Jul 24 22:35:29 2012 From: zupftom at googlemail.com (TW) Date: Tue, 24 Jul 2012 22:35:29 +0200 Subject: [MEI-L] Antw.: More "precise" <app>s In-Reply-To: <86CE134D-87C1-41F2-8DD4-F3F7445E3723@edirom.de> References: <0MVYxn-1TLPE120Ip-00YQpr@mrelayeu.kundenserver.de> <CAEB1mArk1E6Ut=JGgGNFZVHtdeu+ia7mhMK_-tnAxQ1Hdr-KbA@mail.gmail.com> <86CE134D-87C1-41F2-8DD4-F3F7445E3723@edirom.de> Message-ID: <CAEB1mAoheyMg2VxM=QLxeZP-wppsGHnuqnaTwisL_+ijnr=q5g@mail.gmail.com> Hi Johannes, thanks for your elaboration, this is all very well reasoned. Using <annot> with a specific @type and @plist indeed seems like a good intermediary solution that can later be migrated to a proper solution. If such a proper solution were able to group <app>s in a way that makes clear they are to be understood as a single logical <app> containing a continuous strand of events (like @prev and @next would), then we could close issue 88 as WontFix and supersede it with a new issue.
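Concretely, the interim solution described here — a typed <annot> tying together <app> elements split by the measure hierarchy — might look like the following sketch (all xml:ids, the source ids, and the @type value are invented for illustration; they are not from the thread):

```xml
<measure n="1">
  <staff n="1">
    <layer>
      <!-- first half of a variant that crosses the barline -->
      <app xml:id="app1">
        <rdg source="#srcA"><note pname="c" oct="4" dur="2"/></rdg>
        <rdg source="#srcB"><note pname="d" oct="4" dur="2"/></rdg>
      </app>
    </layer>
  </staff>
</measure>
<measure n="2">
  <staff n="1">
    <layer>
      <!-- second half of the same variant -->
      <app xml:id="app2">
        <rdg source="#srcA"><note pname="e" oct="4" dur="1"/></rdg>
        <rdg source="#srcB"><note pname="f" oct="4" dur="1"/></rdg>
      </app>
    </layer>
  </staff>
</measure>
<!-- the annot marks the two app elements as one logical unit -->
<annot type="app" plist="#app1 #app2">
  One logical variant spanning the barline between measures 1 and 2.
</annot>
```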
If nobody objects against this, I'll formulate such a new issue.

Thomas

2012/7/24 Johannes Kepper <kepper at edirom.de>:
> Hi all,
>
> as others have argued before, I don't like @next / @prev on <app>. It seems like there is no order of differences between sources. There is only a list of differences, which might be ordered following some criteria, but there is no natural order of variants. Using @next to indicate that a separate <app> deals with the _same_ situation seems absolutely awkward. @sameas would require an inconsistent usage, as Thomas already pointed out. @corresp is a generic attribute that should be left open for analytical purposes.
> [...]

From bohl at edirom.de Thu Jul 26 09:00:13 2012 From: bohl at edirom.de (Benjamin Wolff Bohl) Date: Thu, 26 Jul 2012 09:00:13 +0200 Subject: [MEI-L] Antw.: More "precise" <app>s In-Reply-To: <CAEB1mAoheyMg2VxM=QLxeZP-wppsGHnuqnaTwisL_+ijnr=q5g@mail.gmail.com> References: <0MVYxn-1TLPE120Ip-00YQpr@mrelayeu.kundenserver.de> <CAEB1mArk1E6Ut=JGgGNFZVHtdeu+ia7mhMK_-tnAxQ1Hdr-KbA@mail.gmail.com> <86CE134D-87C1-41F2-8DD4-F3F7445E3723@edirom.de> <CAEB1mAoheyMg2VxM=QLxeZP-wppsGHnuqnaTwisL_+ijnr=q5g@mail.gmail.com> Message-ID: <5010EAFD.4040700@edirom.de> Hi Johannes, thanks for elaborating.
I am with you for most of it, although I don't see why @corresp should not be used. It's supposed to be used to point to elements corresponding to the current one in a "generic fashion" (whatever that may be). Although it comes from the "analytical" domain, I did not understand att.common.anl to be exclusively for music analysis (correct me if I'm wrong); I think analysis starts with examination of the subject matter.

If I understood Thomas' original intention right (@Thomas, please correct me), the idea was that at an early stage of encoding one might just like to mark the correlation of two <app>s, then use a script to generate an apparatus that would already have the correct @plist on its <annot>s. And in this case (there being no other variants in this precise spot), the result should not be two <annot>s but a single one having measures one and two from both sources in its @plist. Say the phenomenon one would like to point out is a displacement of a line of notes in one staff (more or less the first note being wrong, the rest just consecutive faults): it is good to indicate the connection straight away when encoding, like Thomas did in his second version.

As for a solution: although it is not possible to mark the correspondence directly on the <app> (I think this is no problem), the attributes we were talking about (@prev, @next, @corresp, and many more), together with @source (@Andrew: thanks for pointing to it in the first place), can be used on <lem> or <rdg>. This is even more accurate than putting it on the <app>, as it is the readings from one source that correspond, not the apparatus as a whole, which might contain even more <rdg>s unconnected to that very phenomenon. In the end you should be able to do what you intended in the first place, @Thomas.

Anyway, I'd be very curious how the script might figure out the kind of correspondence and the details of the <annot> to generate. In the end, the best option might be to put an <annot> (Go @Perry, go!) straight away into (one of?)
the <rdg>s, with its @plist pointing to the corresponding <rdg>s. Your script could then check for pre-existing <annot>s when deciding what to do with <app>s/<rdg>s containing one or not. I think solutions for encoding the problem in general may already be at hand (@Johannes: with slight room for improvement); when it comes to intelligently machine-processable connections, the problem seems to take on another scope, in terms of quantifying and qualifying the intellectual aspects.

Have fun!
Benni

Benjamin Wolff Bohl *********************************************************** Edirom - Projekt "Digitale Musikedition" Musikwissenschaftliches Seminar Detmold/Paderborn Gartenstraße 20 D - 32756 Detmold Tel. +49 (0) 5231 / 975-669 Fax: +49 (0) 5231 / 975-668 http://www.edirom.de ***********************************************************

Am 24.07.2012 22:35, schrieb TW:
> Hi Johannes,
>
> thanks for your elaboration, this is all very well reasoned. Using
> <annot> with a specific @type and @plist indeed seems like a good
> intermediary solution that can later be migrated to a proper solution.
>
> If such a proper solution would be able to group <app>s in a way that
> makes clear they are to be understood as a single logical <app>
> containing a continuous strand of events (like @prev and @next would),
> then we could close issue 88 as WontFix and supersede it by a new
> issue. If nobody objects against this, I'll formulate such a new
> issue.
>
> Thomas
> [...]
>
> _______________________________________________
> mei-l mailing list
> mei-l at lists.uni-paderborn.de
> https://lists.uni-paderborn.de/mailman/listinfo/mei-l

From zupftom at googlemail.com Sun Jul 29 07:13:45 2012 From: zupftom at googlemail.com (TW) Date: Sun, 29 Jul 2012 07:13:45 +0200 Subject: [MEI-L] Antw.: More "precise" <app>s In-Reply-To: <5010EAFD.4040700@edirom.de> References: <0MVYxn-1TLPE120Ip-00YQpr@mrelayeu.kundenserver.de> <CAEB1mArk1E6Ut=JGgGNFZVHtdeu+ia7mhMK_-tnAxQ1Hdr-KbA@mail.gmail.com> <86CE134D-87C1-41F2-8DD4-F3F7445E3723@edirom.de> <CAEB1mAoheyMg2VxM=QLxeZP-wppsGHnuqnaTwisL_+ijnr=q5g@mail.gmail.com> <5010EAFD.4040700@edirom.de> Message-ID: <CAEB1mAqAnNzbMHX5-AFzai20idjkMcUGZsVu5Gz7ejbEqnZ3vw@mail.gmail.com> Hi Benjamin and all, sorry for the late reply, I'm sort of on vacation. 2012/7/26 Benjamin Wolff Bohl <bohl at edirom.de>: > Hi Johannes, > thanks for elaborating. I am with you for most of it, although I don't see > why @corresp should not be used. It's supposed to be used to point to > elements corresponding to the current one in a "generic fashion" (whatever > that may be).
Although it comes from the "analytical" domain, I did not understand > att.common.anl to be exclusively for music analysis (correct me if I'm > wrong). I think analysis starts with examination of the subject matter. > If I understood Thomas' original intention right (@Thomas please correct > me), the idea was that on an early stage of encoding one might just like to > mark correlation of two <app>s, then use a script to generate an apparatus, > that would already have the correct @plist on its <annot>s. No, that's not exactly my original idea. I was thinking about how to create a more classical synoptic visualization of the apparatus that compares two or more segments. In such a visualization I wouldn't want to split a melodic line measure by measure, hence the suggestion to indicate the relationship in a more specific way. Indeed, an <annot> (maybe with @type="app") seems like a pretty good way of dealing with this, also for cases where only one note differs and you want to put that note into a minimal musically sensible context when creating the synoptic comparison: then you could indicate which elements outside the <app> should be included in the visualization. If we one day get something like an <appGroup>, however, it would be nicer and cleaner to split this into a @plist pointing to all the <app> elements and maybe a @context attribute that indicates the smallest sensible musical context. Thomas From kepper at edirom.de Wed Aug 1 16:07:03 2012 From: kepper at edirom.de (Johannes Kepper) Date: Wed, 1 Aug 2012 16:07:03 +0200 Subject: [MEI-L] Fwd: pre-release announcement to the Council References: <BBCC497C40D85642B90E9F94FC30343D0EF74B82@reagan.eservices.virginia.edu> Message-ID: <D35D78B1-B3ED-4750-BD55-5F85051783B7@edirom.de> Dear colleagues, just to let you know: a new version of MEI is ready and will be released soon.
Because it is expressed in the "One Document Does-it-all" (ODD) format, this release provides the ability to generate various schemas (RNG and W3C) and accompanying documentation and best practice guidelines. ODD also allows for the creation of customized schemas and documentation for those with special requirements. In fact, we have already implemented an MEI customization service as part of this release. In addition, ODD brings us technically closer to TEI, and together with the TEI Music Special Interest Group we have already enriched TEI with capabilities to include MEI markup. Of course, along the way we've improved the markup capabilities and overall documentation of "out-of-the-box" MEI. The Technical Team has worked hard to bring this about. During this time, we faced an increasing interest in MEI from different communities, which led to attendance at several conferences and other dissemination efforts. If you check http://music-encoding.org/community/events, you'll see some of the other activities we've been involved in. Even though the release took more time than we expected, we hope that you will agree that this is a good thing for MEI. So, while this release is a few months past our original deadline, we believe that satisfying the professional interest in MEI that has sprung up over the last year is an enviable position in which to find ourselves. Writing the new Guidelines, we had some very intense discussions, and we believe that, while there is of course always room for improvements, MEI is ready to be released in its current state. In order to avoid further delay and allow more time for review and comment, we decided to release this version without prior review from the Council. We encourage everyone to have a close look at what we offer with this release and provide comments and suggestions that will help us improve MEI. Due to the improvements we incorporated, this version will break backwards compatibility with the earlier MEI 2010-05 release. 
This step had to be taken at some point, and it seemed better to make this decision sooner rather than later. All effort has been made to minimize the effect of these changes, and we are happy to assist with the relatively easy transition. For the Technical Team, Perry Roland and Johannes Kepper From bohl at edirom.de Thu Aug 2 14:55:21 2012 From: bohl at edirom.de (Benjamin Wolff Bohl) Date: Thu, 02 Aug 2012 14:55:21 +0200 Subject: [MEI-L] [ANN] Edirom-Summer-School 2012 Registration open Message-ID: <501A78B9.5020404@edirom.de> Dear colleagues, as already announced, the third Edirom-Summer-School on digital tools and technologies in the humanities is to take place **September 24 to 28, 2012, at the Heinz Nixdorf Institute, University of Paderborn, Germany**. Registration is open at: http://www.edirom.de/summerschool2012 With a total of 13 independent classes it is our biggest summer school to date - not least because of your great interest over the past years. Starting with basic introductions to TEI (Text Encoding Initiative), MEI (Music Encoding Initiative) and the Edirom-Tools, we continue with courses on Edirom-Customization, an introduction to eXist-db, XPath and regular expressions, as well as XSLT. We are very excited to welcome external tutors for the first time this year. Daniel Kurzawe and Tibor Kálmán (both GWDG, Göttingen, Germany) from the DARIAH-DE project will hold an "Introduction to sustainable handling of research data", Axel Teich Geertinger (DCM, Copenhagen) teaches "MerMEId - Metadata Editor and Repository for MEI Data", and Raffaele Viglianti (King's College, London) will teach "Encoding Text and Music" and "An introduction to ODD". This also has an impact on the teaching language, and we are happy to announce three **classes in English:** - Encoding Text and Music (Sept 27) - Manuscript Encoding and Digital Editions based on MEI (Sept 27 to 28) - An introduction to ODD (Sept 28) Attendance is open to everyone and will be charged a mere €
5.00 contribution towards expenses for each half day of attendance. The registration deadline is August 31, 2012. In case of low registration numbers we reserve the right to merge or cancel classes. The Edirom-Summer-School team is looking forward to your attendance in Paderborn. For further information visit http://www.edirom.de/summerschool2012. With best wishes from your organisation team, Peter Stadler and Benjamin W. Bohl ===GERMAN========================================== Dear colleagues, as already announced, the third Paderborn Edirom-Summer-School on working with digital tools and technologies in the humanities will take place **from September 24 to 28, 2012** at the Heinz Nixdorf Institute, University of Paderborn. Registration is now open at: http://www.edirom.de/summerschool2012 With a total of 13 different classes, this is our biggest summer school to date - a success we owe not least to your great interest in recent years. We are especially pleased to welcome our first external tutors this year. Daniel Kurzawe and Tibor Kálmán (both GWDG, Göttingen) from the DARIAH-DE project will extend our programme with a course on "Fundamentals of the sustainable handling of research data". In the course "MerMEId - Working with MEI metadata", Axel Teich Geertinger (DCM, Copenhagen) explains not only the MEI format but also the "Metadata Editor and Repository for MEI Data" developed at the DCM. We are also pleased to welcome Raffaele Viglianti (King's College, London), who, drawing on his extensive experience with the encoding standards TEI and MEI, will offer the courses "Encoding Text and Music" and "An introduction to ODD". This also internationalizes the course languages, so three classes will be taught in English this year. The full list of courses offered: - Introduction to encoding music notation with MEI (Sept 24-25) - Introduction to encoding texts with TEI (Sept 24-25) - Edirom-Tools - Building an Edirom Online (Sept 24) - Edirom-Customization - Project-specific adaptation of Edirom Online (Sept 25) - MerMEId - Working with MEI metadata (Sept 26) - Introduction to the native XML database eXist (Sept 26) - Edirom User Forum (Sept 26) - Encoding Text and Music [English] (Sept 27) - Manuscript Encoding and Digital Editions based on MEI [English] (Sept 27-28) - XPath and regular expressions - Advanced searching in XML documents (Sept 27) - Fundamentals of the sustainable handling of research data (Sept 27) - XSL(T) for beginners (Sept 28) - An introduction to ODD [English] (Sept 28) A contribution towards expenses of only € 5.00 per half day is charged for attending the courses. The registration deadline is August 31, 2012. In case of low registration numbers we reserve the right to merge or cancel individual courses. We would like to point out explicitly that the workshops "Introduction to MEI", "Introduction to TEI", "Edirom-Tools", and "Fundamentals of the sustainable handling of research data" can be attended without prior knowledge in these areas; the other courses, however, require XML skills (which can, if necessary, be acquired in the introductory courses). The Edirom-Summer-School team looks forward to welcoming you in Paderborn! Further information and registration at: http://www.edirom.de/summerschool2012 With best wishes from your organisation team, Peter Stadler and Benjamin W. Bohl -- *********************************************************** Edirom - Projekt "Digitale Musikedition" Musikwissenschaftliches Seminar Detmold/Paderborn Gartenstraße 20 D - 32756 Detmold Tel.
+49 (0) 5231 / 975-665 Fax: +49 (0) 5231 / 975-668 http://www.edirom.de *********************************************************** -------------- next part -------------- An HTML attachment was scrubbed... URL: <http://lists.uni-paderborn.de/pipermail/mei-l/attachments/20120802/608559bc/attachment.html> From kepper at edirom.de Fri Aug 24 22:34:53 2012 From: kepper at edirom.de (Johannes Kepper) Date: Fri, 24 Aug 2012 16:34:53 -0400 Subject: [MEI-L] Release of MEI 2012 v2.0.0 Message-ID: <BB077ABC-0448-4C4E-B80D-FEBFB4D4AE86@edirom.de> Dear MEIers, finally, we have made our way through the Guidelines, which means that we're happy to announce the release of MEI 2012. This release is available from http://music-encoding.googlecode.com/files/MEI2012_v2.0.0.zip. Before adding more details about this, we would like to thank everyone involved in this release in one way or another for their continuous support. Without a whole lot of dedication from all of you, this wouldn't have been possible. LABELLING SCHEME MEI 2012 v2.0.0 is the complete name of this release. While "MEI 2012" is its label, "v2.0.0" is its technical identification. We will use this numbering scheme for all subsequent releases of MEI. Let me explain the numbers a little. The first digit indicates the technical foundation being used. We have built this release from scratch using a different technology (something I'm coming back to later), so this is the second iteration of MEI, following the original 2010-05 release. The second digit indicates changes to the schema, the third digit changes to the documentation. That means the number v2.0.1 would be used for revised documentation of the current release, while v2.1.3 would indicate a change to the schema itself whose documentation has already had three revisions. TECHNICAL BACKGROUND - ODD Starting with MEI 2012 v2.0.0, MEI is described using TEI's ODD, which stands for One Document Does-it-all.
This is a meta-schema language that allows one to generate schemas and documentation from the same source. It also facilitates customizing MEI in a self-documenting way. A preliminary customization service, for those of you already familiar with ODD, is available at http://custom.music-encoding.org.

GUIDELINES
Compared to MEI 2010-05, we have completely rewritten the documentation for MEI. Previously we had a tag library, which was helpful, but which required a certain familiarity with the schema. Now we have a nearly 700-page document that still includes a tag library, but also contains around 250 pages explaining how to use MEI to encode different features of many kinds of music notation. It follows the module structure of MEI and significantly increases the accessibility of the schema. Make sure to have a look at this PDF, which is contained in the release zip mentioned above but is also separately available from http://music-encoding.googlecode.com/files/MEI_Guidelines_2012_v2.0.0.pdf.

TODO
Even though we have finished this release, there are still things to improve. First, we plan to reshape our website at music-encoding.org to match the new release: it will include the Guidelines, a revised tag-library section, and a collection of MEI sample encodings. We will address these items successively, starting in the next couple of weeks. In parallel, we would like to gather feedback on the release, specifically on the Guidelines. If you have the opportunity to work a little with the Guidelines in the next few weeks or months, please do. We hope to improve the Guidelines in a number of small steps until the end of the current DFG/NEH-funded grant, which runs until next summer. If you have any further questions, feel free to ask us personally or here on MEI-L.
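The release-numbering scheme described above is mechanical enough to sketch in code. The following Python snippet is purely illustrative and not an MEI tool; the function name `parse_mei_version` is hypothetical:

```python
# Hypothetical helper (not part of MEI): split a release tag such as
# "v2.1.3" into the three components described in the announcement:
# first digit = technical foundation, second = schema revision,
# third = documentation revision.

def parse_mei_version(tag: str) -> dict:
    foundation, schema, docs = (int(part) for part in tag.lstrip("v").split("."))
    return {"foundation": foundation, "schema": schema, "documentation": docs}

# "v2.0.1" would be a documentation-only revision of MEI 2012;
# "v2.1.3" would be a schema change followed by three documentation revisions.
print(parse_mei_version("v2.0.1"))
print(parse_mei_version("v2.1.3"))
```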
With best regards,

Perry and Johannes

From veit at weber-gesamtausgabe.de Fri Aug 24 22:49:19 2012
From: veit at weber-gesamtausgabe.de (Joachim Veit)
Date: Fri, 24 Aug 2012 22:49:19 +0200
Subject: [MEI-L] Release of MEI 2012 v2.0.0
In-Reply-To: <BB077ABC-0448-4C4E-B80D-FEBFB4D4AE86@edirom.de>
References: <BB077ABC-0448-4C4E-B80D-FEBFB4D4AE86@edirom.de>
Message-ID: <5037E8CF.7050100@weber-gesamtausgabe.de>

Dear Perry and Johannes, congratulations! This was an enormous task, and all MEIers will raise a glass of beer or wine this evening, singing the praises of the two editors and also of everyone in the still-small MEI world whose hard work contributed to this success! With this step the MEI world will certainly grow even more rapidly. Please take a rest and raise your glasses, and only later remember that the work on version 3.0 begins tomorrow... Best greetings, and many, many thanks to you and all! Joachim

On 24.08.12 22:34, Johannes Kepper wrote:
> [...]

-------------- next part -------------- A non-text attachment was scrubbed... Name: veit.vcf Type: text/x-vcard Size: 364 bytes Desc: not available URL: <http://lists.uni-paderborn.de/pipermail/mei-l/attachments/20120824/e9a21579/attachment.vcf>

From pdr4h at eservices.virginia.edu Mon Aug 27 17:16:11 2012
From: pdr4h at eservices.virginia.edu (Roland, Perry (pdr4h))
Date: Mon, 27 Aug 2012 15:16:11 +0000
Subject: [MEI-L] AMS "Introduction to MEI" Workshop
Message-ID: <BBCC497C40D85642B90E9F94FC30343D0EF882F3@GRANT.eservices.virginia.edu>

Dear MEIers,

This is a gentle reminder about the pre-conference workshop held in conjunction with the American Musicological Society and Society for Music Theory joint meeting in New Orleans. Please forgive any cross-posting, but we want to get the word out to a broad range of potential participants. This announcement has already been (or will soon be) posted to TEI-L and the DH lists. However, please distribute it to other lists, such as national and local discussion lists concerned with music and digital humanities, and to other interested individuals.

Thanks for getting the word out,

-- p.
__________________________ Perry Roland Music Library University of Virginia P. O. Box 400175 Charlottesville, VA 22904 434-982-2702 (w) pdr4h (at) virginia (dot) edu

AMS "Introduction to MEI" Workshop

The University of Virginia Library, the University of Paderborn, and the Music Encoding Initiative Council are pleased to offer an opportunity to learn about the Music Encoding Initiative (MEI), an increasingly important tool for digital humanities music research, in conjunction with the joint meeting of the American Musicological Society, the Society for Ethnomusicology and the Society for Music Theory in New Orleans, scheduled for 1-4 November.

"Introduction to MEI," an intensive, hands-on workshop, will be offered Wednesday, 31 October from 9:00 a.m. to 5:30 p.m. Experts from the Music Encoding Initiative Council will teach the workshop, during which participants will learn about MEI history and design principles, tools for creating, editing, and rendering MEI, and techniques for customizing the MEI schema. The day will include lectures, hands-on practice, and opportunities to address participant-specific issues.

There are no fees associated with this workshop and no previous experience with MEI or XML is required; however, an understanding of music notation and other markup schemes, such as HTML and TEI, will be helpful. Participants are encouraged to bring laptop computers for hands-on exercises. The number of participants is limited to 25.

To register, visit the AMS special events page at http://www.ams-net.org/neworleans/special_events.php. Please address questions to info at music-encoding.org.

Workshop Schedule

Session 1 (9:00-10:00): What is music encoding?
This session introduces the basic need for and techniques of music encoding using XML.
- What is markup? What is its function? Why is it important?
- Basic concepts of XML: elements, attributes, document structure, and schemas
- What is the role of standards such as MEI? Why do we need markup languages?
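The XML basics listed for Session 1 (elements, attributes, nesting) can be shown in a few lines. A sketch using Python's standard library; the MEI-flavoured fragment below is illustrative only and not drawn from the workshop materials:

```python
# Illustrative only: element/attribute basics with a small MEI-flavoured
# XML fragment, parsed with Python's standard-library ElementTree.
import xml.etree.ElementTree as ET

fragment = """
<measure n="1">
  <note pname="c" oct="4" dur="4"/>
  <note pname="e" oct="4" dur="4"/>
</measure>
"""

measure = ET.fromstring(fragment)
print(measure.tag, measure.get("n"))           # element name and an attribute
for note in measure.findall("note"):
    print(note.get("pname"), note.get("dur"))  # attributes of child elements
```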
Session 2 (10:15-12:00): What is MEI?
The following issues will be addressed during this session:
- MEI's situation within the landscape of digital humanities scholarship: What are its intellectual affiliations and commitments?
- How does MEI support the creation of digital musical texts? What is its role in defining how music documents should be represented?
- How is MEI currently used, and how is it evolving?
- What are the alternatives to MEI? What are the advantages and risks of using a detailed encoding system like MEI?

Session 3 (1:30-3:00): Basics of Encoding with MEI
This session will introduce basic MEI elements and describe their use, with detailed musical examples.

Session 4 (3:15-4:30): MEI Application Tutorials
This session introduces MEI-specific encoding tools, such as MerMEId, MEISE, and the Edirom Editor. Participants will learn how these tools can be used to design workflows for entering, editing, and rendering MEI.

Session 5 (4:45-5:30): Wrap-up Discussion
Participants will reflect on MEI markup and tools and how they can be employed in the participants' current and future projects. In addition, opportunities for participation in the MEI community will be covered.

From andrew.hankinson at gmail.com Mon Aug 27 17:26:41 2012
From: andrew.hankinson at gmail.com (Andrew Hankinson)
Date: Mon, 27 Aug 2012 11:26:41 -0400
Subject: [MEI-L] AMS "Introduction to MEI" Workshop
In-Reply-To: <25353_1346080589_503B8F4C_25353_392_1_BBCC497C40D85642B90E9F94FC30343D0EF882F3@GRANT.eservices.virginia.edu>
References: <25353_1346080589_503B8F4C_25353_392_1_BBCC497C40D85642B90E9F94FC30343D0EF882F3@GRANT.eservices.virginia.edu>
Message-ID: <BD2498EC-784F-4248-96B7-4051C8DF23FA@gmail.com>

I just sent it to the Canadian Music Libraries list. How did the Summer Workshop go?
-Andrew

On 2012-08-27, at 11:16 AM, "Roland, Perry (pdr4h)" <pdr4h at eservices.virginia.edu> wrote:
> [...]

-------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 1034 bytes Desc: not available URL: <http://lists.uni-paderborn.de/pipermail/mei-l/attachments/20120827/0327a574/attachment.bin>

From pdr4h at eservices.virginia.edu Mon Aug 27 17:28:56 2012
From: pdr4h at eservices.virginia.edu (Roland, Perry (pdr4h))
Date: Mon, 27 Aug 2012 15:28:56 +0000
Subject: [MEI-L] AMS "Introduction to MEI" Workshop
In-Reply-To: <BD2498EC-784F-4248-96B7-4051C8DF23FA@gmail.com>
References: <25353_1346080589_503B8F4C_25353_392_1_BBCC497C40D85642B90E9F94FC30343D0EF882F3@GRANT.eservices.virginia.edu>, <BD2498EC-784F-4248-96B7-4051C8DF23FA@gmail.com>
Message-ID: <BBCC497C40D85642B90E9F94FC30343D0EF88334@GRANT.eservices.virginia.edu>

I think it went very well, judging by the comments we got.

-- p.

__________________________ Perry Roland Music Library University of Virginia P. O. Box 400175 Charlottesville, VA 22904 434-982-2702 (w) pdr4h (at) virginia (dot) edu

________________________________________ From: mei-l-bounces+pdr4h=virginia.edu at lists.uni-paderborn.de [mei-l-bounces+pdr4h=virginia.edu at lists.uni-paderborn.de] on behalf of Andrew Hankinson [andrew.hankinson at gmail.com] Sent: Monday, August 27, 2012 11:26 AM To: Music Encoding Initiative Subject: Re: [MEI-L] AMS "Introduction to MEI" Workshop

I just sent it to the Canadian Music Libraries list. How did the Summer Workshop go? [...]

From mcbrider at email.unc.edu Mon Aug 27 19:35:05 2012
From: mcbrider at email.unc.edu (McBride, Renee)
Date: Mon, 27 Aug 2012 17:35:05 +0000
Subject: [MEI-L] AMS "Introduction to MEI" Workshop
In-Reply-To: <BBCC497C40D85642B90E9F94FC30343D0EF88334@GRANT.eservices.virginia.edu>
References: <25353_1346080589_503B8F4C_25353_392_1_BBCC497C40D85642B90E9F94FC30343D0EF882F3@GRANT.eservices.virginia.edu>, <BD2498EC-784F-4248-96B7-4051C8DF23FA@gmail.com> <BBCC497C40D85642B90E9F94FC30343D0EF88334@GRANT.eservices.virginia.edu>
Message-ID: <F825D8D96FE25F46A486AE605C00A09849F0E96E@ITS-MSXMBS4M.ad.unc.edu>

I certainly enjoyed and learned from the summer workshop, though unfortunately I had to miss the end of it. Thanks, Perry, Johannes and Maya! Also, I have shared this announcement with MLA-L.

Renee

~~~~~~~~~~~~~~~~~~~~~~~~~ Renee McBride Head, Special Formats & Metadata Section Resource Description & Management Dept. Davis Library, CB 3914 UNC-Chapel Hill Chapel Hill, NC 27514-8890 mcbrider at email.unc.edu (919) 962-9709 (phone) (919) 962-4450 (fax)

-----Original Message----- From: mei-l-bounces at lists.uni-paderborn.de [mailto:mei-l-bounces at lists.uni-paderborn.de] On Behalf Of Roland, Perry (pdr4h) Sent: Monday, August 27, 2012 11:29 AM To: Music Encoding Initiative Subject: Re: [MEI-L] AMS "Introduction to MEI" Workshop [...]

From pdr4h at eservices.virginia.edu Mon Aug 27 19:47:27 2012
From: pdr4h at eservices.virginia.edu (Roland, Perry (pdr4h))
Date: Mon, 27 Aug 2012 17:47:27 +0000
Subject: [MEI-L] AMS "Introduction to MEI" Workshop
In-Reply-To: <F825D8D96FE25F46A486AE605C00A09849F0E96E@ITS-MSXMBS4M.ad.unc.edu>
References: <25353_1346080589_503B8F4C_25353_392_1_BBCC497C40D85642B90E9F94FC30343D0EF882F3@GRANT.eservices.virginia.edu>, <BD2498EC-784F-4248-96B7-4051C8DF23FA@gmail.com> <BBCC497C40D85642B90E9F94FC30343D0EF88334@GRANT.eservices.virginia.edu>, <F825D8D96FE25F46A486AE605C00A09849F0E96E@ITS-MSXMBS4M.ad.unc.edu>
Message-ID: <BBCC497C40D85642B90E9F94FC30343D0EF88396@GRANT.eservices.virginia.edu>

Hi, Renee,

Thanks for the kind words and for letting our librarian friends know about New Orleans.

-- p.

__________________________ Perry Roland Music Library University of Virginia P. O.
Box 400175 Charlottesville, VA 22904 434-982-2702 (w) pdr4h (at) virginia (dot) edu From atge at kb.dk Mon Aug 27 21:04:27 2012 From: atge at kb.dk (Axel Teich Geertinger) Date: Mon, 27 Aug 2012 19:04:27 +0000 Subject: [MEI-L] Release of MEI 2012 v2.0.0 In-Reply-To: <BB077ABC-0448-4C4E-B80D-FEBFB4D4AE86@edirom.de> References: <BB077ABC-0448-4C4E-B80D-FEBFB4D4AE86@edirom.de> Message-ID: <0B6F63F59F405E4C902DFE2C2329D0D1514BE048@EXCHANGE-02.kb.dk> Hi Johannes, Perry, the tech team & all other contributors This is great news! Thanks a lot to all of you for your enormous effort to make this new release. I am sure it will further help MEI to gain the attention it really deserves. Best wishes from Copenhagen, Axel -----Oprindelig meddelelse----- Fra: mei-l-bounces at lists.uni-paderborn.de [mailto:mei-l-bounces at lists.uni-paderborn.de] P? vegne af Johannes Kepper Sendt: 24. august 2012 22:35 Til: Music Encoding Initiative Emne: [MEI-L] Release of MEI 2012 v2.0.0 Dear MEIers, finally, we have made our way through the Guidelines, which means that we're happy to announce the release of MEI 2012. This release is available from http://music-encoding.googlecode.com/files/MEI2012_v2.0.0.zip. Before adding more details about this, we would like to thank everyone involved in this release in one way or the other for the continuous support. Without a whole lot of dedication by all of you, this wouldn't have been possible. LABELLING SCHEME MEI 2012 v2.0.0 is the complete name for this release. While "MEI 2012" is the label for it, the "v2.0.0" is the technical identification for it. We will use this numbering scheme for all subsequent releases of MEI. Let me explain the numbers a little bit. The first digit indicates the technical foundation to be used. We have built this release from scratch using a different technology (something I'm coming back to later), so this is the second iteration of MEI, following the original 2010-05 release. 
The second digit indicates changes in the schema, the third digit changes to the documentation. That means the number v2.0.1 would be used for a revised documentation for the current release, the number v2.1.3 would indicate a change in the schema itself, that already had three revisions to the documentation. TECHNICAL BACKGROUND - ODD Starting with MEI 2012 v2.0.0, MEI is described using TEI's ODD, which stands for One Document Does-it-all. This is a meta schema language that allows one to generate schemas and documentation from the same source. It also facilitates customizing MEI in a self-documenting way. We already have a preliminary customization service for those of you familiar with ODD at http://custom.music-encoding.org. GUIDELINES Compared to MEI 2010-05, we completely rewrote the documentation for MEI. We had a tag library for that, which was helpful, but which required a certain amount of familiarity with the schema to work with it. Now we have a nearly 700 page document which still includes a tag library, but also around 250 pages explaining how to use MEI to encode different features of music notation of various kinds. It follows the module structure of MEI and significantly increases accessibility of the schema. Make sure to have a look at this PDF, which is contained in the release zip mentioned above, but is also separately available from http://music-encoding.googlecode.com/files/MEI_Guidelines_2012_v2.0.0.pdf. TODO Even though we have finished this release, there are still things to improve. First, we plan to reshape our website at music-encoding.org to match this new release. We plan to include the Guidelines, a revised tag library section, and a collection of MEI sample encodings. We will address these issues successively, starting in the next couple of weeks. What should happen simultaneously is that we gather feedback on the release, specifically on the Guidelines. 
If you have the opportunity to work a little with the Guidelines in the next few weeks or months, please do. We hope to improve the Guidelines in a number of small steps until the end of the current DFG/NEH-funded grant, which will be summer next year.

If you have any further questions, feel free to ask us personally or here on MEI-L.

With best regards, Perry and Johannes

_______________________________________________ mei-l mailing list mei-l at lists.uni-paderborn.de https://lists.uni-paderborn.de/mailman/listinfo/mei-l

From esfield at stanford.edu Fri Aug 31 03:05:41 2012 From: esfield at stanford.edu (Eleanor Selfridge-Field) Date: Thu, 30 Aug 2012 18:05:41 -0700 (PDT) Subject: [MEI-L] Release of MEI 2012 v2.0.0 In-Reply-To: <5037E8CF.7050100@weber-gesamtausgabe.de> References: <BB077ABC-0448-4C4E-B80D-FEBFB4D4AE86@edirom.de> <5037E8CF.7050100@weber-gesamtausgabe.de> Message-ID: <94c460c0.00001db0.0000003c@CCARH-ADM-2.su.win.stanford.edu>

Hi, Joachim,

I know you sent a note recently, but I cannot find it. I travel this weekend, and I was in an auto accident a few days ago, so life is slightly crazy.

Please write again. I'm sure we can find a time to meet on Thursday the 6th, but I'm not yet sure when it will be.

I will be at the Hotel Central from 3 to 8 Sept.

Alles gut' Eleanor

-----Original Message----- From: mei-l-bounces at lists.uni-paderborn.de [mailto:mei-l-bounces at lists.uni-paderborn.de] On Behalf Of Joachim Veit Sent: Friday, August 24, 2012 1:49 PM To: mei-l at lists.uni-paderborn.de Subject: Re: [MEI-L] Release of MEI 2012 v2.0.0

Dear Perry and Johannes,

congratulations!!!!! This was an enormous task, and all MEI'ers will raise their glass of beer or wine this evening, singing the praises of the two editors but also hymns of thanks to all those in the still small MEI-world who have contributed their hard work to this success!!! With this step the MEI-world will certainly keep growing still more rapidly!
Please take a rest and raise your glasses, and only later think about the work for version 3.0 beginning tomorrow...

Best greetings, and many, many - and still many more - thanks to you and all!

Joachim

From veit at weber-gesamtausgabe.de Fri Aug 31 10:22:42 2012 From: veit at weber-gesamtausgabe.de (Joachim Veit) Date: Fri, 31 Aug 2012 10:22:42 +0200 Subject: [MEI-L] Release of MEI 2012 v2.0.0 In-Reply-To: <94c460c0.00001db0.0000003c@CCARH-ADM-2.su.win.stanford.edu> References: <BB077ABC-0448-4C4E-B80D-FEBFB4D4AE86@edirom.de> <5037E8CF.7050100@weber-gesamtausgabe.de> <94c460c0.00001db0.0000003c@CCARH-ADM-2.su.win.stanford.edu> Message-ID: <50407452.4040606@weber-gesamtausgabe.de>

Dear Eleanor,

thanks for your mail - I hope you came through the auto accident without any harm!! Yes, we can meet on Thursday - the Edirom presentation has now been fixed for 13:30 to 14:30, so perhaps it would be best to meet around noon or earlier. May I propose we meet at either 11:00 or 12:00? Would you prefer to meet directly at the conference, or should I come to your hotel (in that case, I think 11:00 would be the better time)?

Best greetings, Joachim

P.S. Would you mind if Johannes took part in our meeting?

-------------- next part -------------- A non-text attachment was scrubbed... Name: veit.vcf Type: text/x-vcard Size: 364 bytes Desc: not available URL: <http://lists.uni-paderborn.de/pipermail/mei-l/attachments/20120831/946b5feb/attachment.vcf>

From esfield at stanford.edu Sat Sep 1 01:49:33 2012 From: esfield at stanford.edu (Eleanor Selfridge-Field) Date: Fri, 31 Aug 2012 16:49:33 -0700 (PDT) Subject: [MEI-L] Release of MEI 2012 v2.0.0 In-Reply-To: <50407452.4040606@weber-gesamtausgabe.de> Message-ID: <1049666688.17828189.1346456973216.JavaMail.root@zm07.stanford.edu>

Dear Joachim,

If you do not hear otherwise, I think that 11-12 at the hotel would be good. We could probably have lunch on the way to the conference site. I wanted to talk to you about administrative matters affecting CCARH and to get your sense of the general climate around various issues that can affect critical editions.
In the US there is unusual turbulence in the notation software market, and I am wondering what is happening in Germany, particularly among the big publishers--Bärenreiter and Schott. (MEI is not my agenda, but I am hoping to come to your session. I have two other meetings to schedule on Thursday, so nothing is completely certain. Also I've just been notified of the Lufthansa strike....)

Best regards, Eleanor

_______________________________________________ mei-l mailing list mei-l at lists.uni-paderborn.de https://lists.uni-paderborn.de/mailman/listinfo/mei-l

From bohl at edirom.de Mon Sep 3 12:32:36 2012 From: bohl at edirom.de (Benjamin Wolff Bohl) Date: Mon, 3 Sep 2012 12:32:36 +0200 Subject: [MEI-L] [ANN][ESS2012] registration deadline extension until end of this week for Edirom-Summer-School 2012 Message-ID: <200C1B20-C9A3-4F18-A980-CB6A51E0F938@edirom.de>

Dear colleagues,

to date there are still places available in the courses of the Edirom-Summer-School 2012. Hence we have decided to extend the registration deadline to the end of this week.

*The final deadline is now Sunday, September 9, 0:00 CEST.*

If you have been undecided so far, take your chance. For further information and registration visit: http://www.edirom.de/summerschool2012.

The Edirom-Summer-School aims to teach basic technologies and concepts for the "Digital Humanities". The focus lies on the XML-based encoding guidelines TEI and MEI as well as generic tools for the analysis, manipulation and presentation of XML data. Alongside the hands-on courses there will be opportunities for personal exchange and participants' own presentations, making the Edirom-Summer-School a general forum for users of different experience levels.
Overview of English courses:
- Encoding Text and Music (Sept 27)
- Manuscript Encoding and Digital Editions based on MEI (Sept 27 to 28)
- An introduction to ODD (Sept 28)

With best wishes from your organizing committee, Peter Stadler and Benjamin W. Bohl

-- *********************************************************** Edirom - Projekt "Digitale Musikedition" Musikwissenschaftliches Seminar Detmold/Paderborn Gartenstraße 20 D - 32756 Detmold Tel. +49 (0) 5231 / 975-665 Fax: +49 (0) 5231 / 975-668 http://www.edirom.de ***********************************************************

=== GERMAN ===============================

Dear colleagues,

there are still some places available in the courses of the Edirom-Summer-School 2012. We have therefore decided to extend the registration deadline by one week:

*The final registration deadline is now Sunday, 9 September 2012, 0:00 CET.*

If you have been undecided until now, take the opportunity. Further information and the registration form can be found at: http://www.edirom.de/summerschool2012

The Edirom-Summer-School aims to teach basic technologies and concepts for the "Digital Humanities". The focus lies on the XML-based encoding guidelines TEI and MEI as well as generic tools for the analysis, manipulation and presentation of XML data. Alongside the individual courses, all of which are held hands-on, there will also be room for personal exchange and participants' own presentations, so that the Edirom-Summer-School can also become a general forum for users of different experience levels.

Course overview:
- Introduction to encoding music texts with MEI (Sept 24-25)
- Introduction to encoding texts with TEI (Sept 24-25)
- Edirom tools: building an Edirom-Online (Sept 24)
- Edirom customization: project-specific adaptations of the Edirom-Online (Sept 25)
- MerMEId: working with MEI metadata (Sept 26)
- Introduction to the native XML database eXist (Sept 26)
- Edirom User Forum (Sept 26)
- Encoding Text and Music [English] (Sept 27)
- Manuscript Encoding and Digital Editions based on MEI [English] (Sept 27-28)
- XPath and regular expressions: advanced searching in XML documents (Sept 27)
- Foundations of the sustainable handling of research data (Sept 27)
- XSL(T) for beginners (Sept 28)
- An introduction to ODD [English] (Sept 28)

With best greetings from your organizing committee, Peter Stadler and Benjamin W. Bohl

From rfreedma at haverford.edu Thu Oct 11 14:43:34 2012 From: rfreedma at haverford.edu (Richard Freedman) Date: Thu, 11 Oct 2012 08:43:34 -0400 Subject: [MEI-L] News on Du Chemin Project, MEI, and VexFlow Message-ID: <CA+zvZGfUZhgs3YiZe9owebhAXr+BBh9jnu6Fm+QJonK+QJ4pMw@mail.gmail.com>

Friends,

Just a short note to bring you up to date on the Du Chemin Project, and in particular some excellent progress we have made with MEI. You can read the details in the links below. I am printing the documents, too, and can easily send along copies of the report to you. Just let me know if you would like a copy.

Meanwhile, the most important developments (for the MEI group, in any case): thanks to Raffaele Viglianti and Andrew Hankinson, we have a nice system that allows us to move smoothly from our original Sibelius and Finale transcriptions to MEI and then to on-screen rendering of musical examples. The aim, as you might recall, is to build a large analytic "thesaurus" of musical types and procedures from our repertory, then coordinate these with proposed reconstructions of missing voice parts in still other pieces.

Follow the links below to see various explanations and demonstrations. The attached set of images shows you just a small bit of what we can do--in this case searching for a particular type of cadence, then displaying the result.
Meanwhile, best wishes for a productive year,

Richard

Links to various reports and demonstrations:

A report on Year One of the Lost Voices Project
http://duchemin.haverford.edu/editorsforum/lost-voices-2012/

The latest version of the Thesaurus (now 3.2), available in segments or in one file.
http://duchemin.haverford.edu/editorsforum/thesaurus-3-2/

A Dossier documenting our methods, standards, and results so far.
http://duchemin.haverford.edu/editorsforum/lost-voices-2012/

Materials for the Ecole III at Tours, with pieces to analyze and reconstruct (also some reconstructions for discussion).
http://duchemin.haverford.edu/editorsforum/ecole-thematique-iii-2012/

Demonstration of the new interface and the VexFlow system. (Not complete, so don't be frustrated!)
http://duchemin.haverford.edu/editorsforum/search-the-thesaurus-3-2/

-- Richard Freedman John C. Whitehead Professor of Music Haverford College Haverford, PA 19041 610-896-1007 610-896-4902 (fax) http://www.haverford.edu/faculty/rfreedma

-------------- next part -------------- An HTML attachment was scrubbed... URL: <http://lists.uni-paderborn.de/pipermail/mei-l/attachments/20121011/7193409c/attachment.html> -------------- next part -------------- A non-text attachment was scrubbed...
Name: DuChemin_Search_Demo copy.pdf Type: application/pdf Size: 718764 bytes Desc: not available URL: <http://lists.uni-paderborn.de/pipermail/mei-l/attachments/20121011/7193409c/attachment.pdf>

From raffaeleviglianti at gmail.com Thu Oct 11 15:52:34 2012 From: raffaeleviglianti at gmail.com (Raffaele Viglianti) Date: Thu, 11 Oct 2012 14:52:34 +0100 Subject: [MEI-L] News on Du Chemin Project, MEI, and VexFlow In-Reply-To: <CA+zvZGfUZhgs3YiZe9owebhAXr+BBh9jnu6Fm+QJonK+QJ4pMw@mail.gmail.com> References: <CA+zvZGfUZhgs3YiZe9owebhAXr+BBh9jnu6Fm+QJonK+QJ4pMw@mail.gmail.com> Message-ID: <CAMyHAnMy1s1-m9A3oCUyTKr3bKg-JTt7ESpfMK1QOooBNX2XxA@mail.gmail.com>

Dear all,

I would just like to add that the interfacing between MEI and VexFlow is achieved through a JavaScript library originally written by Richard Lewis at Goldsmiths, University of London (https://github.com/ironchicken/MEItoVexFlow), which has been extended for the project.

Best wishes, Raffaele

-------------- next part -------------- An HTML attachment was scrubbed... URL: <http://lists.uni-paderborn.de/pipermail/mei-l/attachments/20121011/4e5ef2fd/attachment.html>

From pdr4h at eservices.virginia.edu Mon Oct 15 20:50:14 2012 From: pdr4h at eservices.virginia.edu (Roland, Perry (pdr4h)) Date: Mon, 15 Oct 2012 18:50:14 +0000 Subject: [MEI-L] revision of <bibl> Message-ID: <BBCC497C40D85642B90E9F94FC30343D0EFA9489@GRANT.eservices.virginia.edu>

Hello,

Those of you subscribed to the developers list may have already seen a version of this message.
My apologies for cross-posting, but I think this is important enough to warrant soliciting a wide range of opinions, since some of the changes proposed will break backwards compatibility.

In order to better support bibliographic applications, such as MerMEId, the content model of <bibl> needs to be revised/expanded. The example below contains several new elements -- creator, editor, contributor, biblScope, genre, imprint, pubPlace, and publisher. These might also be useful at other points in addition to within <bibl>, say in the header, but that is not under consideration at the moment. A new <recipient> element may also be added for correspondence. <distributor> will also be allowed in <imprint>.

In addition, some existing elements, such as <creation>, should also be permitted within <bibl>. This will allow the capture of non-bibliographic details of creation, such as the location where an item was created.

Also, some existing elements, such as <physLoc> and <relatedItem>, should be allowed after re-definition; that is, <physLoc> will no longer function as the call number/shelf location. It will function as the wrapper for <repository> and <identifier>. In this context, <identifier> will hold the shelf number. With the FRBR changes, <relatedItem> will be replaced by <relation>, allowing <relatedItem> to be used as illustrated here.

The new element <biblList> will be a member of model.listLike, allowing it to be used wherever <list> is currently allowed. Other places where it might occur include <work>, <source> (which will be called <item> after the FRBR implementation), <event>, <eventList>, etc.

<biblList>
  <bibl>
    <!-- genre is preferred over @type on bibl so that authorizing info about the material designation can be captured.
    -->
    <genre authority="marcgt" authURI="http://www.loc.gov/standards/valuelist/marcgt.html">article</genre>
    <creator>Daniel Grimley</creator>
    <title level="a">Modernism and Closure: Nielsen's Fifth Symphony</title>
    <title level="j">The Musical Quarterly</title>
    <biblScope type="vol">86</biblScope>
    <biblScope type="issue">1</biblScope>
    <biblScope type="pages">149-173</biblScope>
    <imprint>
      <pubPlace>London</pubPlace>
      <date>2002</date>
    </imprint>
  </bibl>
  <bibl>
    <creator>Carl Nielsen</creator>
    <recipient>William Behrend</recipient>
    <title>Carl Nielsen to William Behrend</title>
    <date>1904-04-13</date>
    <genre>letter</genre>
    <creation>
      <geogName>Copenhagen</geogName>
    </creation>
    <physLoc>
      <repository>DK-Kk</repository>
      <identifier>NKS 5155 4°</identifier>
    </physLoc>
    <relatedItem>
      <bibl>
        <title>CNB</title>
        <biblScope>II/333</biblScope>
      </bibl>
    </relatedItem>
  </bibl>
</biblList>

Comments are greatly appreciated, especially if this will break anything you're currently doing.

-- p.

__________________________ Perry Roland Music Library University of Virginia P. O. Box 400175 Charlottesville, VA 22904 434-982-2702 (w) pdr4h (at) virginia (dot) edu

From stadler at edirom.de Sat Oct 20 22:59:01 2012 From: stadler at edirom.de (Peter Stadler) Date: Sat, 20 Oct 2012 22:59:01 +0200 Subject: [MEI-L] revision of <bibl> In-Reply-To: References: Message-ID:

Dear Perry,

thank you for disseminating this issue. I have been discussing mei:bibl with Axel over the last few weeks, so I feel free to repeat my comments publicly:

First, TEI has the nice separation between tei:bibl and tei:biblStruct, where the former is a "loosely-structured bibliographic citation" [1] and the latter a "structured bibliographic citation, in which only bibliographic sub-elements appear and in a specified order" [2]. If your use case is *creating* bibliographic citations, then the plain bibl can be used to store your citations already in the desired style (i.e. with punctuation etc.), which makes e.g. a later HTML output much easier. So, my proposal would be to add a mei:biblStruct, for at least three reasons:
1. MerMEId is using bibliographic citations in a highly structured way (cf. your examples);
2. it would not break backwards compatibility;
3. a clear separation of structured and loosely-structured data makes processing much easier, since the processor knows what to expect.
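[Editorial aside: the bibl-vs-biblStruct distinction can be illustrated with a few lines of code. The fragment below is hypothetical: mei:biblStruct did not exist at this point in the discussion, and the element names simply follow the TEI models cited in [1] and [2].]

```python
# A loosely structured citation carries its own punctuation and is
# rendered as-is; a structured record leaves formatting decisions to
# the processor, which can pick out individual fields.
import xml.etree.ElementTree as ET

loose = ET.fromstring(
    '<bibl>Daniel Grimley, "Modernism and Closure: Nielsen\'s Fifth '
    'Symphony", The Musical Quarterly 86/1 (2002), 149-173.</bibl>')

structured = ET.fromstring(
    '<biblStruct>'
    '<creator>Daniel Grimley</creator>'
    '<title level="a">Modernism and Closure: Nielsen\'s Fifth Symphony</title>'
    '<title level="j">The Musical Quarterly</title>'
    '<biblScope type="vol">86</biblScope>'
    '</biblStruct>')

# The loose citation is already a finished string...
print(loose.text)

# ...whereas the structured one lets the processor choose fields and style.
creator = structured.findtext("creator")
article = structured.findtext("title[@level='a']")
print(f'{creator}: "{article}"')
```

This is the practical force of reason 3 above: a processor handed a biblStruct-style record knows exactly which fields to expect, while a bibl-style record can only be passed through verbatim.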
(Of course, the mei:biblStruct would need a clear structure with mandatory elements in a mandatory order.) Second, I don't understand the need for reinventing the wheel. There are a lot of schemata out there for bibliographic data which could be used, since that makes developing much easier, as well as interchange. Here I'm not saying to use a different namespace (one could argue so, though) but to adopt the appropriate schemata from e.g. BibtexML[3], Zotero[4] or TEI[2]. For example, I'd like to see the possibility to assign keywords/tags -- on the other hand, I'm reluctant to introduce those special elements recipient and creation (the latter could go into mei:annot). All the best Peter

PS: By the way, mei:bibl is currently described as "Provides a citation for a published work.", which should be amended to cover unpublished material as well.

[1] http://www.tei-c.org/release/doc/tei-p5-doc/en/html/ref-bibl.html
[2] http://www.tei-c.org/release/doc/tei-p5-doc/en/html/ref-biblStruct.html
[3] http://bibtexml.sourceforge.net
[4] http://www.zotero.org/support/dev/data_model

On 15.10.2012 at 20:50, "Roland, Perry (pdr4h)" wrote:

From pdr4h at eservices.virginia.edu Mon Oct 22 21:00:30 2012 From: pdr4h at eservices.virginia.edu (Roland, Perry (pdr4h)) Date: Mon, 22 Oct 2012 19:00:30 +0000 Subject: [MEI-L] revision of <bibl> In-Reply-To: References: , Message-ID:

Hi, Peter, Thanks for your comments. I appreciate your well-thought-out arguments. I've arrived at the same conclusion as you; that is, that MEI needs a method for capturing structured as well as unstructured bibliographic citations.
We already have <bibl> for the unstructured citations, but need to work on a mechanism for more structured ones. I also agree wholeheartedly that wheel-reinvention is undesirable. But maintaining backward compatibility at the expense of sensible markup is not a reasonable path either. I'm not quite yet convinced of the need to adopt a model as highly structured as TEI's or the other schemas you pointed to. Looking at Axel's examples and trying to envision future requirements, I don't see a need *at this time* for the level of detail that <biblStruct> provides. So, I've been working toward an element I call <biblStrict>. If we determine that an even more structured way of handling bibliographic citations/descriptions than <biblStrict> is necessary, we can look at emulating/adopting <biblStruct>, BibtexML, or Zotero. Whereas <bibl> allows all inline text elements, <biblStrict> will permit only <title> and the members of a new class, model.biblPart, which has the members: biblScope, contributor, creation, creator, distributor, edition, editor, funder, genre, imprint, physLoc, pubPlace, publisher, recipient, relatedItem, series, and sponsor. This is very similar to the TEI model of <bibl> with inline text elements removed. There are, however, some useful changes in the MEI model of <biblStrict>, compared with TEI, such as the substitution of <creator> and <contributor> for <author>, which I think you'll agree is not completely satisfying for music. These elements, as well as the other "responsibility-like" elements -- editor, funder, sponsor, and recipient -- are simply syntactic sugar for the <name> element with a role attribute of "editor", "funder", etc. Therefore, I see no harm in adding them to the current MEI model of <bibl> and <titleStmt> -- it makes the MEI model more TEI-like, reduces the markup needed to accurately identify the parts of the citation, and makes the markup more consistent and more processable by removing a number of competing markup possibilities.
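[To spell out the "syntactic sugar" equivalence described here, the sketch below pairs a sugared responsibility element with its generic <name>/@role form. This is an illustrative rendering based on the description in this thread, not markup taken from the proposal itself; the names are borrowed from the letter example elsewhere in the thread.]

<!-- sugared form: the element name itself identifies the responsibility -->
<creator>Carl Nielsen</creator>
<recipient>William Behrend</recipient>

<!-- equivalent generic form: <name> with a role attribute -->
<name role="creator">Carl Nielsen</name>
<name role="recipient">William Behrend</name>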
The <creation>, <genre>, <physLoc>, and <recipient> elements permit the citation of correspondence and other manuscript material without taking on the burden of TEI's <msDesc>. As all of these elements will be optional, they will not break existing MEI instances. The break in backwards compatibility is limited to the re-definition of <physLoc> and its placement in the class hierarchy. This is necessary to correct a defect in MEI, not to create "castles in the sky". While it's not as strict as TEI's <biblStruct> element, which requires all its elements in a predetermined order, <biblStrict> accommodates what I would call the "semi-structured" nature of MerMEId's bibliographic citations. And the changes to <physLoc> improve MEI's handling of bibliographic metadata not only in bibliographic citations within text, but in the header as well. Best wishes, -- p. __________________________ Perry Roland Music Library University of Virginia P. O. Box 400175 Charlottesville, VA 22904 434-982-2702 (w) pdr4h (at) virginia (dot) edu

________________________________________ From: mei-l-bounces at lists.uni-paderborn.de [mei-l-bounces at lists.uni-paderborn.de] on behalf of Peter Stadler [stadler at edirom.de] Sent: Saturday, October 20, 2012 4:59 PM To: Music Encoding Initiative Subject: Re: [MEI-L] revision of <bibl>

_______________________________________________ mei-l mailing list mei-l at lists.uni-paderborn.de https://lists.uni-paderborn.de/mailman/listinfo/mei-l

From atge at kb.dk Tue Oct 23 13:08:09 2012 From: atge at kb.dk (Axel Teich Geertinger) Date: Tue, 23 Oct 2012 11:08:09 +0000 Subject: [MEI-L] revision of <bibl> In-Reply-To: References: , Message-ID: <0B6F63F59F405E4C902DFE2C2329D0D1514DCD56@EXCHANGE-02.kb.dk>

Hi Perry & all

From my rather pragmatic point of view, the <biblStrict> element you suggest seems to accommodate very well the information we want to capture in MerMEId. From a more purist point of view, however, one could say that it falls between two stools, neither being as freely structured as <bibl>, nor as strict as <biblStruct>. On the other hand, I also doubt there will be a great need within bibliographic references in MEI for such detailed descriptions as provided by, for instance, <biblStruct>, but I may be wrong. I think the <biblStrict> solution is completely satisfying and sufficiently flexible to reference different types of material without lots of unused elements. I have one question about your original example: You suggest <relatedItem> to hold references to editions or transcriptions of (in this case:) letters. "Host" seems to me to be a somewhat vague description of that relation. Is that a standard and sufficiently unambiguous label for an edition or transcription of a manuscript? All the best, Axel

-----Oprindelig meddelelse----- Fra: mei-l-bounces at lists.uni-paderborn.de [mailto:mei-l-bounces at lists.uni-paderborn.de] På vegne af Roland, Perry (pdr4h) Sendt: 22.
oktober 2012 21:01 Til: Music Encoding Initiative Emne: Re: [MEI-L] revision of <bibl>

_______________________________________________ mei-l mailing list mei-l at lists.uni-paderborn.de https://lists.uni-paderborn.de/mailman/listinfo/mei-l

From pdr4h at eservices.virginia.edu Tue Oct 23 15:04:08 2012 From: pdr4h at eservices.virginia.edu (Roland, Perry (pdr4h)) Date: Tue, 23 Oct 2012 13:04:08 +0000 Subject: [MEI-L] revision of <bibl> In-Reply-To: <0B6F63F59F405E4C902DFE2C2329D0D1514DCD56@EXCHANGE-02.kb.dk> References: , , <0B6F63F59F405E4C902DFE2C2329D0D1514DCD56@EXCHANGE-02.kb.dk> Message-ID:

Hi, Axel, I think you've summed it up very well -- <biblStrict> provides a middle ground between <bibl> and <biblStruct> -- and I'm glad you agree it's sufficient for the kind of
bibliographic description you're doing, which is to say that it will most likely fit others' needs too, since you're doing fairly advanced things. As for your question about @rel on , the values for @rel are taken from those used by @type on in MODS (see http://www.loc.gov/standards/mods/v3/mods-userguide-elements.html#relateditem). The value "series" has been left out because we're providing and as substitutes for . Also, the value "references" has been provided, which was added in version 3.4 (see http://www.loc.gov/standards/mods/mods-changes-3-4.html), but isn't reflected in the MODS User Guide. While I agree that "host" is probably not the best terminology, I think it's a reasonable label for the relationship between a manuscript letter and its printed counterpart since the letter is contained within the printed edition. Perhaps a better choice would've been "isConstituentOf", which makes a nice pairing with its opposite "constituent". At any rate, in order to maximize interoperability, we should stick with the MODS values. Best wishes, -- p. __________________________ Perry Roland Music Library University of Virginia P. O. Box 400175 Charlottesville, VA 22904 434-982-2702 (w) pdr4h (at) virginia (dot) edu ________________________________________ From: mei-l-bounces+pdr4h=virginia.edu at lists.uni-paderborn.de [mei-l-bounces+pdr4h=virginia.edu at lists.uni-paderborn.de] on behalf of Axel Teich Geertinger [atge at kb.dk] Sent: Tuesday, October 23, 2012 7:08 AM To: Music Encoding Initiative Subject: Re: [MEI-L] revision of Hi Perry & all >From my rather pragmatic point of view, the element you suggest seems to accommodate very well the information we want to capture in MerMEId. From a more purist point of view, however, one could say that it falls between two stools, neither being as freely structured as , nor as strict as . 
On the other hand, I also doubt there will be a great need within bibliographic references in MEI for such detailed descriptions as provided by, for instance, , but I may be wrong. I think the solution is completely satisfying and sufficiently flexible to reference different types of material without lots of unused elements. I have one question to your original example below: You suggest to hold references to editions or transcriptions of (in this case:) letters. "Host" seems to me to be a somewhat vague description of that relation. Is that a standard and sufficiently unambiguous label for an edition or transcription of a manuscript? All the best, Axel -----Oprindelig meddelelse----- Fra: mei-l-bounces at lists.uni-paderborn.de [mailto:mei-l-bounces at lists.uni-paderborn.de] P? vegne af Roland, Perry (pdr4h) Sendt: 22. oktober 2012 21:01 Til: Music Encoding Initiative Emne: Re: [MEI-L] revision of Hi, Peter, Thanks for your comments. I appreciate your well-thought-out arguments. I've arrived at the same conclusion as you; that is, that MEI needs a method for capturing structured as well as unstructured bibliographic citations. We already have for the unstructured citations, but need to work on a mechanism for more structured ones. I also agree whole-heartedly that wheel-reinvention is undesirable. But, maintaining backward compatibility at the expense of sensible markup is not a reasonable path either. I'm not quite yet convinced of the need to adopt a model as highly-structured as TEI's or the other schemas you pointed to. Looking at Axel's examples and trying to envision future requirements, I don't see a need *at this time* for the level of detail that provides. So, I've been working toward an element I call . If we determine that an even more-structured way of handling bibliographic citations/descriptions than is necessary, we can look at emulating/adopting , bibtexml, or Zotero. 
Whereas, allows all inline text elements, will permit only and the members of a new class, model.biblPart, which has the members: biblScope, contributor, creation, creator, distributor, edition, editor, funder, genre, imprint, physLoc, pubPlace, publisher, recipient, relatedItem, series, and sponsor. This is very similar to the TEI model of <bibl> with inline text elements removed. There are, however, some useful changes in the MEI model of <biblStrict>, compared with TEI, such as the substitution of <creator> and <contributor> for <author>, which I think you'll agree is not completely satisfying for music. These elements, as well as the other "responsibility-like" elements -- editor, funder, sponsor, and recipient -- are simply syntactic sugar for the <name> element with a role attribute of "editor", "funder", etc. Therefore, I see no harm in adding them to the current MEI model of <bibl> and <titleSmt> -- it makes the MEI model more TEI-like, reduces the markup needed to accurately identify the parts of the citation, and makes the markup more consistent and more processable by removing a number of competing markup possibilities. The <creation>, <genre>, <physLoc>, and <recipient> elements permit the citation of correspondence and other manuscript material without taking on the burden of TEI's <msDesc>. As all of these elements will be optional, they will not break existing MEI instances. The break in backwards compatibility is limited to the re-definition of <physLoc> and its placement in the class hierarchy. This is necessary to correct a defect in MEI, not to create "castles in the sky". While it's not as strict as TEI's <biblStruct> element, which requires all its elements in a predetermined order, <biblStrict> accommodates what I would call the "semi-structured" nature of MerMEId's bibliographic citations. And the changes to <physLoc> improve MEI's handling of bibliographic metadata not only in bibliographic citations within text, but in the header as well. 
Best wishes, -- p. __________________________ Perry Roland Music Library University of Virginia P. O. Box 400175 Charlottesville, VA 22904 434-982-2702 (w) pdr4h (at) virginia (dot) edu ________________________________________ From: mei-l-bounces at lists.uni-paderborn.de [mei-l-bounces at lists.uni-paderborn.de] on behalf of Peter Stadler [stadler at edirom.de] Sent: Saturday, October 20, 2012 4:59 PM To: Music Encoding Initiative Subject: Re: [MEI-L] revision of <bibl> Dear Perry, thank you for disseminating this issue. I have been discussing mei:bibl with Axel in the last weeks so I feel free to repeat my comments publicly: First, TEI has the nice separation between tei:bibl and tei: biblStruct where the former is a "loosely-structured bibliographic citation" [1] and the latter a "structured bibliographic citation, in which only bibliographic sub-elements appear and in a specified order. "[2] If your use case is *creating* bibliographical citations, than the plain bibl can be used to store your citations already in the desired style (i.e. with punctuation etc.) which makes e.g. a later html output much easier. So, my proposal would be to add a mei:biblStruct for at least three reasons: 1. MerMEId is using bibliographic citations in a highly structured way (cf. your examples) 2. It would not break backwards compatibility 3. a clear separation of structured and loosely-structured data which makes processing much easier since the processor knows what to expect. (Of course, the mei:biblStruct would need a clear structure with mandatory elements in a mandatory order) Second, I don't understand the need for reinventing the wheel. There are a lot of schemata out there for bibliographic data which could be used, since it makes developing much easier as well as interchange. Here I'm not saying to use a different namespace (one could argue so, though) but to adopt the appropriate schemata from e.g. BibtexML[3], Zotero[4] or TEI[2]. 
For example I'd like to see the possibility to assign keywords/tags -- on the other hand I'm reluctant to introduce those special elements recipient and creation (the latter could go into mei:annot). All the best Peter PS: By the way, mei:bibl is currently described as "Provides a citation for a published work." which should be amended to cover unpublished material as well. [1] http://www.tei-c.org/release/doc/tei-p5-doc/en/html/ref-bibl.html [2] http://www.tei-c.org/release/doc/tei-p5-doc/en/html/ref-biblStruct.html [3] http://bibtexml.sourceforge.net [4] http://www.zotero.org/support/dev/data_model Am 15.10.2012 um 20:50 schrieb "Roland, Perry (pdr4h)" <pdr4h at eservices.virginia.edu>: > Hello, > > Those of you subscribed to the developers list may have already seen a version of this message. My apologies for cross-posting, but I think this is important enough to warrant soliciting a wide range of opinions, since some of the changes proposed will break backwards compatibility. > > In order to better support bibliographic applications, such as MerMEId, the content model of <bibl> needs to be revised/expanded. The example below contains several new elements -- creator, editor, contributor, biblScope, genre, imprint, pubPlace, and publisher. These might also be useful at other points in addition to within <bibl>, say in the header, but that is not under consideration at the moment. A new <recipient> element may also be added for correspondence. <distributor> will also be allowed in <imprint>. > > In addition, some existing elements, such as <creation>, should also be permitted within <bibl>. This will allow the capture of non-bibliographic details of creation, such as the location where an item was created. > > Also, some existing elements, such as <physLoc> and <relatedItem>, should be allowed after re-definition; that is, <physLoc> will no longer function as the call number/shelf location. It will function as the wrapper for <repository> and <identifier>. 
In this context, <identifier> will hold the shelf number. With the FRBR changes, <relatedItem> will be replaced by <relation>, allowing <relatedItem> to be used as illustrated here. > > The new element <biblList> will be a member of model.listLike, allowing it to be used wherever <list> is currently allowed. Other places where it might occur include <work>, <source> (will be called <item> after FRBR implementation, <event>, <eventList>, etc. > > <biblList> > <bibl> > <!-- genre is preferred over @type on bibl so that authorizing info about the material designation can be captured. --> > <genre authority="marcgt" > authURI="http://www.loc.gov/standards/valuelist/marcgt.html">article</genre> > <creator>Daniel Grimley</creator> > <title level="a">Modernism and Closure: Nielsen's Fifth Symphony > The Musical Quarterly > > 86 > 1 > 149-173 > > London > 2002 > > > > Carl Nielsen > William Behrend > > > Carl Nielsen > William Behrend > > 1904-04-13 > letter > > Copenhagen > > > DK-Kk > NKS 5155 4? > > > > CNB > II/333 > > > > > > Comments are greatly appreciated, especially if this will break anything you're currently doing. > > -- > p. > > __________________________ > Perry Roland > Music Library > University of Virginia > P. O. 
Box 400175 > Charlottesville, VA 22904 > 434-982-2702 (w) > pdr4h (at) virginia (dot) edu > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l _______________________________________________ mei-l mailing list mei-l at lists.uni-paderborn.de https://lists.uni-paderborn.de/mailman/listinfo/mei-l _______________________________________________ mei-l mailing list mei-l at lists.uni-paderborn.de https://lists.uni-paderborn.de/mailman/listinfo/mei-l _______________________________________________ mei-l mailing list mei-l at lists.uni-paderborn.de https://lists.uni-paderborn.de/mailman/listinfo/mei-l From bohl at edirom.de Wed Oct 24 09:26:43 2012 From: bohl at edirom.de (Benjamin Wolff Bohl) Date: Wed, 24 Oct 2012 09:26:43 +0200 Subject: [MEI-L] revision of <bibl> In-Reply-To: References: , , <0B6F63F59F405E4C902DFE2C2329D0D1514DCD56@EXCHANGE-02.kb.dk> Message-ID: <50879833.8000800@edirom.de> Hi all, great thing this is being discussed on MEI-L. [1] A more strict version of mei:bibl is a good idea to support encoding structured data, and I suppose Axel is the one best able to estimate the degree of strictness currently needed. [2] As for the new elements, I agree with Peter that we should remodel existing schemata and only "reinvent the wheel" if we see they are lacking some specific feature or are problematic in any other way. [another two cents] Consequently, a first thing already problematic in the concept is mei:series and mei:seriesStmt vs. mei:relatedItem[@rel='series'], as this implements two different ways. I know this is a developing thing and for that reason things become implemented step by step; nonetheless I would opt for allowing mei:relatedItem[@rel='series'] in order to allow either an encoding with a small amount of relations in the first case or a relation-based encoding in the latter. It's a similar case with mei:respStmt/mei:persName[@role] vs.
mei:creator, mei:editor etc. I know allowing multiple ways of encoding the same thing makes files more ambiguous and harder to process. A new mei:biblStrict, on the other hand, could allow for only one way of encoding these phenomena. Best wishes, Benjamin Benjamin Wolff Bohl *********************************************************** Edirom - Projekt "Digitale Musikedition" Musikwissenschaftliches Seminar Detmold/Paderborn Gartenstraße 20 D-32756 Detmold Tel. +49 (0) 5231 / 975-669 Fax: +49 (0) 5231 / 975-668 http://www.edirom.de *********************************************************** On 23.10.2012 15:04, Roland, Perry (pdr4h) wrote: > Hi, Axel, > > I think you've summed it up very well -- <biblStrict> provides a middle ground between <bibl> and <biblStruct> -- and I'm glad you agree it's sufficient for the kind of bibliographic description you're doing, which is to say that it will most likely fit others' needs too, since you're doing fairly advanced things. > > As for your question about @rel on <relatedItem>, the values for @rel are taken from those used by @type on <relatedItem> in MODS (see http://www.loc.gov/standards/mods/v3/mods-userguide-elements.html#relateditem). The value "series" has been left out because we're providing <series> and <seriesStmt> as substitutes for <relatedItem rel="series">. Also, the value "references" has been provided, which was added in version 3.4 (see http://www.loc.gov/standards/mods/mods-changes-3-4.html), but isn't reflected in the MODS User Guide. > > While I agree that "host" is probably not the best terminology, I think it's a reasonable label for the relationship between a manuscript letter and its printed counterpart since the letter is contained within the printed edition. Perhaps a better choice would've been "isConstituentOf", which makes a nice pairing with its opposite "constituent". At any rate, in order to maximize interoperability, we should stick with the MODS values. > > Best wishes, > > -- > p. > > > __________________________ > Perry Roland > Music Library > University of Virginia > P. O.
Box 400175 > Charlottesville, VA 22904 > 434-982-2702 (w) > pdr4h (at) virginia (dot) edu > ________________________________________ > From: mei-l-bounces+pdr4h=virginia.edu at lists.uni-paderborn.de [mei-l-bounces+pdr4h=virginia.edu at lists.uni-paderborn.de] on behalf of Axel Teich Geertinger [atge at kb.dk] > Sent: Tuesday, October 23, 2012 7:08 AM > To: Music Encoding Initiative > Subject: Re: [MEI-L] revision of <bibl> > > Hi Perry & all > > From my rather pragmatic point of view, the element you suggest seems to accommodate very well the information we want to capture in MerMEId. From a more purist point of view, however, one could say that it falls between two stools, neither being as freely structured as <bibl>, nor as strict as <biblStruct>. On the other hand, I also doubt there will be a great need within bibliographic references in MEI for such detailed descriptions as provided by, for instance, <tei:biblStruct>, but I may be wrong. I think the <biblStrict> solution is completely satisfying and sufficiently flexible to reference different types of material without lots of unused elements. > > I have one question about your original example below: You suggest <relatedItem rel="host"> to hold references to editions or transcriptions of (in this case:) letters. "Host" seems to me to be a somewhat vague description of that relation. Is that a standard and sufficiently unambiguous label for an edition or transcription of a manuscript? > > All the best, > Axel > > -----Original message----- > From: mei-l-bounces at lists.uni-paderborn.de [mailto:mei-l-bounces at lists.uni-paderborn.de] on behalf of Roland, Perry (pdr4h) > Sent: 22 October 2012 21:01 > To: Music Encoding Initiative > Subject: Re: [MEI-L] revision of <bibl> > > Hi, Peter, > > Thanks for your comments. I appreciate your well-thought-out arguments. > > I've arrived at the same conclusion as you; that is, that MEI needs a method for capturing structured as well as unstructured bibliographic citations.
We already have <bibl> for the unstructured citations, but need to work on a mechanism for more structured ones. > > I also agree whole-heartedly that wheel-reinvention is undesirable. But, maintaining backward compatibility at the expense of sensible markup is not a reasonable path either. > > I'm not quite yet convinced of the need to adopt a model as highly-structured as TEI's <biblStruct> or the other schemas you pointed to. Looking at Axel's examples and trying to envision future requirements, I don't see a need *at this time* for the level of detail that <biblStruct> provides. So, I've been working toward an element I call <biblStrict>. If we determine that an even more-structured way of handling bibliographic citations/descriptions than <biblStrict> is necessary, we can look at emulating/adopting <biblStruct>, bibtexml, or Zotero. > > Whereas <bibl> allows all inline text elements, <biblStrict> will permit only <title> and the members of a new class, model.biblPart, which has the members: biblScope, contributor, creation, creator, distributor, edition, editor, funder, genre, imprint, physLoc, pubPlace, publisher, recipient, relatedItem, series, and sponsor. This is very similar to the TEI model of <bibl> with inline text elements removed. > > There are, however, some useful changes in the MEI model of <biblStrict>, compared with TEI, such as the substitution of <creator> and <contributor> for <author>, which I think you'll agree is not completely satisfying for music. These elements, as well as the other "responsibility-like" elements -- editor, funder, sponsor, and recipient -- are simply syntactic sugar for the <name> element with a role attribute of "editor", "funder", etc. Therefore, I see no harm in adding them to the current MEI model of <bibl> and <titleStmt> -- it makes the MEI model more TEI-like, reduces the markup needed to accurately identify the parts of the citation, and makes the markup more consistent and more processable by removing a number of competing markup possibilities.
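[Editor's sketch] Expressed as a schema fragment, the content model described above -- <title> plus the members of model.biblPart -- might look like this in RELAX NG (XML syntax). This is a sketch of the proposal as worded in the message, not actual MEI schema source:

```xml
<define name="mei_biblStrict">
  <element name="biblStrict" ns="http://www.music-encoding.org/ns/mei">
    <zeroOrMore>
      <choice>
        <ref name="mei_title"/>
        <ref name="model.biblPart"/>
      </choice>
    </zeroOrMore>
  </element>
</define>

<!-- the proposed class, with exactly the members listed in the message -->
<define name="model.biblPart">
  <choice>
    <ref name="mei_biblScope"/>
    <ref name="mei_contributor"/>
    <ref name="mei_creation"/>
    <ref name="mei_creator"/>
    <ref name="mei_distributor"/>
    <ref name="mei_edition"/>
    <ref name="mei_editor"/>
    <ref name="mei_funder"/>
    <ref name="mei_genre"/>
    <ref name="mei_imprint"/>
    <ref name="mei_physLoc"/>
    <ref name="mei_pubPlace"/>
    <ref name="mei_publisher"/>
    <ref name="mei_recipient"/>
    <ref name="mei_relatedItem"/>
    <ref name="mei_series"/>
    <ref name="mei_sponsor"/>
  </choice>
</define>
```

Note that, unlike tei:biblStruct, nothing here imposes an order or cardinality on the parts -- which is exactly the "semi-structured" middle ground being debated.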
The <creation>, <genre>, <physLoc>, and <recipient> elements permit the citation of correspondence and other manuscript material without taking on the burden of TEI's <msDesc>. > > As all of these elements will be optional, they will not break existing MEI instances. The break in backwards compatibility is limited to the re-definition of <physLoc> and its placement in the class hierarchy. This is necessary to correct a defect in MEI, not to create "castles in the sky". > > While it's not as strict as TEI's <biblStruct> element, which requires all its elements in a predetermined order, <biblStrict> accommodates what I would call the "semi-structured" nature of MerMEId's bibliographic citations. And the changes to <physLoc> improve MEI's handling of bibliographic metadata not only in bibliographic citations within text, but in the header as well. > > Best wishes, > > -- > p. > > __________________________ > Perry Roland > Music Library > University of Virginia > P. O. Box 400175 > Charlottesville, VA 22904 > 434-982-2702 (w) > pdr4h (at) virginia (dot) edu > ________________________________________ > From: mei-l-bounces at lists.uni-paderborn.de [mei-l-bounces at lists.uni-paderborn.de] on behalf of Peter Stadler [stadler at edirom.de] > Sent: Saturday, October 20, 2012 4:59 PM > To: Music Encoding Initiative > Subject: Re: [MEI-L] revision of <bibl> > > Dear Perry, > > thank you for disseminating this issue. > I have been discussing mei:bibl with Axel in the last weeks so I feel free to repeat my comments publicly: > > First, TEI has the nice separation between tei:bibl and tei: biblStruct where the former is a "loosely-structured bibliographic citation" [1] and the latter a "structured bibliographic citation, in which only bibliographic sub-elements appear and in a specified order. "[2] If your use case is *creating* bibliographical citations, than the plain bibl can be used to store your citations already in the desired style (i.e. with punctuation etc.) 
which makes e.g. a later html output much easier. > So, my proposal would be to add a mei:biblStruct for at least three reasons: > 1. MerMEId is using bibliographic citations in a highly structured way (cf. your examples) 2. It would not break backwards compatibility 3. a clear separation of structured and loosely-structured data which makes processing much easier since the processor knows what to expect. (Of course, the mei:biblStruct would need a clear structure with mandatory elements in a mandatory order) > > Second, I don't understand the need for reinventing the wheel. There are a lot of schemata out there for bibliographic data which could be used, since it makes developing much easier as well as interchange. Here I'm not saying to use a different namespace (one could argue so, though) but to adopt the appropriate schemata from e.g. BibtexML[3], Zotero[4] or TEI[2]. For example I'd like to see the possibility to assign keywords/tags -- on the other hand I'm reluctant to introduce those special elements recipient and creation (the latter could go into mei:annot). > > All the best > Peter > > PS: By the way, mei:bibl is currently described as "Provides a citation for a published work." which should be amended to cover unpublished material as well. > > [1] http://www.tei-c.org/release/doc/tei-p5-doc/en/html/ref-bibl.html > [2] http://www.tei-c.org/release/doc/tei-p5-doc/en/html/ref-biblStruct.html > [3] http://bibtexml.sourceforge.net > [4] http://www.zotero.org/support/dev/data_model > > Am 15.10.2012 um 20:50 schrieb "Roland, Perry (pdr4h)" <pdr4h at eservices.virginia.edu>: > >> Hello, >> >> Those of you subscribed to the developers list may have already seen a version of this message. My apologies for cross-posting, but I think this is important enough to warrant soliciting a wide range of opinions, since some of the changes proposed will break backwards compatibility. 
>> >> In order to better support bibliographic applications, such as MerMEId, the content model of <bibl> needs to be revised/expanded. The example below contains several new elements -- creator, editor, contributor, biblScope, genre, imprint, pubPlace, and publisher. These might also be useful at other points in addition to within <bibl>, say in the header, but that is not under consideration at the moment. A new <recipient> element may also be added for correspondence. <distributor> will also be allowed in <imprint>. >> >> In addition, some existing elements, such as <creation>, should also be permitted within <bibl>. This will allow the capture of non-bibliographic details of creation, such as the location where an item was created. >> >> Also, some existing elements, such as <physLoc> and <relatedItem>, should be allowed after re-definition; that is, <physLoc> will no longer function as the call number/shelf location. It will function as the wrapper for <repository> and <identifier>. In this context, <identifier> will hold the shelf number. With the FRBR changes, <relatedItem> will be replaced by <relation>, allowing <relatedItem> to be used as illustrated here. >> >> The new element <biblList> will be a member of model.listLike, allowing it to be used wherever <list> is currently allowed. Other places where it might occur include <work>, <source> (will be called <item> after FRBR implementation, <event>, <eventList>, etc. >> >> <biblList> >> <bibl> >> <!-- genre is preferred over @type on bibl so that authorizing info about the material designation can be captured. 
--> >> <genre authority="marcgt" >> authURI="http://www.loc.gov/standards/valuelist/marcgt.html">article</genre> >> <creator>Daniel Grimley</creator> >> <title level="a">Modernism and Closure: Nielsen's Fifth Symphony >> The Musical Quarterly >> >> 86 >> 1 >> 149-173 >> >> London >> 2002 >> >> >> >> Carl Nielsen >> William Behrend >> >> >> Carl Nielsen >> William Behrend >> >> 1904-04-13 >> letter >> >> Copenhagen >> >> >> DK-Kk >> NKS 5155 4? >> >> >> >> CNB >> II/333 >> >> >> >> >> >> Comments are greatly appreciated, especially if this will break anything you're currently doing. >> >> -- >> p. >> >> __________________________ >> Perry Roland >> Music Library >> University of Virginia >> P. O. Box 400175 >> Charlottesville, VA 22904 >> 434-982-2702 (w) >> pdr4h (at) virginia (dot) edu >> _______________________________________________ >> mei-l mailing list >> mei-l at lists.uni-paderborn.de >> https://lists.uni-paderborn.de/mailman/listinfo/mei-l > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l From stadler at edirom.de Thu Oct 25 11:16:14 2012 From: stadler at edirom.de (Peter Stadler) Date: Thu, 25 Oct 2012 11:16:14 +0200 Subject: [MEI-L] Howto transform 2010-05 to MEI2012_v2.0.0 Message-ID: <6035CA09-D211-4751-9528-8465EAA5A877@edirom.de> Dear all, propably at least a few of you did some sort of transformation of mei files from schema version 2010-05 to MEI2012_v2.0.0. 
Is there an official stylesheet or is anyone willing to share his/her code? Any pointers appreciated! All the best Peter -- Peter Stadler Carl-Maria-von-Weber-Gesamtausgabe Arbeitsstelle Detmold Gartenstr. 20 D-32756 Detmold Tel. +49 5231 975-665 Fax: +49 5231 975-668 stadler at weber-gesamtausgabe.de www.weber-gesamtausgabe.de From kepper at edirom.de Thu Oct 25 11:55:42 2012 From: kepper at edirom.de (Johannes Kepper) Date: Thu, 25 Oct 2012 11:55:42 +0200 Subject: [MEI-L] Howto transform 2010-05 to MEI2012_v2.0.0 In-Reply-To: <6035CA09-D211-4751-9528-8465EAA5A877@edirom.de> References: <6035CA09-D211-4751-9528-8465EAA5A877@edirom.de> Message-ID: Hi Peter, we have some preliminary internal converters here. They will be revised and published on the music-encoding.org website when stable enough. Right now, they more or less cover the subset of MEI used in the sample collection, but they also do some other cleanup, which might not be appropriate in all cases. In order to see what's possible, it would be great to know what your encodings contain. Could you send them to Maja, Kristina or me? It's good you're asking on MEI-L – of course we're willing to help anyone else with similar problems as well… Best, Johannes On 25.10.2012 at 11:16, Peter Stadler wrote: > Dear all, > > probably at least a few of you did some sort of transformation of mei files from schema version 2010-05 to MEI2012_v2.0.0. > Is there an official stylesheet or is anyone willing to share his/her code? > > Any pointers appreciated! > > All the best > Peter > > > > -- > Peter Stadler > Carl-Maria-von-Weber-Gesamtausgabe > Arbeitsstelle Detmold > Gartenstr. 20 > D-32756 Detmold > Tel.
+49 5231 975-665 > Fax: +49 5231 975-668 > stadler at weber-gesamtausgabe.de > www.weber-gesamtausgabe.de > > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l From andrew.hankinson at mail.mcgill.ca Thu Oct 25 16:11:12 2012 From: andrew.hankinson at mail.mcgill.ca (Andrew Hankinson) Date: Thu, 25 Oct 2012 10:11:12 -0400 Subject: [MEI-L] Howto transform 2010-05 to MEI2012_v2.0.0 In-Reply-To: <24384_1351158935_50890C97_24384_263_1_BB18C1AC-1614-4A44-A185-C5060EC5792C@edirom.de> References: <6035CA09-D211-4751-9528-8465EAA5A877@edirom.de> <24384_1351158935_50890C97_24384_263_1_BB18C1AC-1614-4A44-A185-C5060EC5792C@edirom.de> Message-ID: We would also be interested in seeing these, since we have a large number of non-2012 files that we want to migrate. I did not know about your internal stylesheets, so we've just started writing our own. Could they be posted "as-is" somewhere (in the SVN?) with appropriate warnings on them that they're not quite ready for prime-time. It would be nice if we don't duplicate effort in writing converters. -Andrew On 2012-10-25, at 5:55 AM, Johannes Kepper wrote: > Hi Peter, > > we have some preliminary internal converters here. They will be revised and published on the music-encoding.org website when stable enough. Right now, they more or less cover the subset of MEI used in the sample collection, but they also do some other cleanup, which might not be appropriate in all cases. In order to see what's possible, it would be great to know what your encodings contain. Could you send them to Maja, Kristina or me? > > It's good you're asking on MEI-L ? of course we're willing to help anyone else with similar problems as well? 
> > Best, > Johannes > > > > On 25.10.2012 at 11:16, Peter Stadler wrote: > >> Dear all, >> >> probably at least a few of you did some sort of transformation of mei files from schema version 2010-05 to MEI2012_v2.0.0. >> Is there an official stylesheet or is anyone willing to share his/her code? >> >> Any pointers appreciated! >> >> All the best >> Peter >> >> >> >> -- >> Peter Stadler >> Carl-Maria-von-Weber-Gesamtausgabe >> Arbeitsstelle Detmold >> Gartenstr. 20 >> D-32756 Detmold >> Tel. +49 5231 975-665 >> Fax: +49 5231 975-668 >> stadler at weber-gesamtausgabe.de >> www.weber-gesamtausgabe.de >> >> >> _______________________________________________ >> mei-l mailing list >> mei-l at lists.uni-paderborn.de >> https://lists.uni-paderborn.de/mailman/listinfo/mei-l > > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l From kepper at edirom.de Thu Oct 25 16:23:36 2012 From: kepper at edirom.de (Johannes Kepper) Date: Thu, 25 Oct 2012 16:23:36 +0200 Subject: [MEI-L] Howto transform 2010-05 to MEI2012_v2.0.0 In-Reply-To: References: <6035CA09-D211-4751-9528-8465EAA5A877@edirom.de> <24384_1351158935_50890C97_24384_263_1_BB18C1AC-1614-4A44-A185-C5060EC5792C@edirom.de> Message-ID: <7C2BBD92-A44C-4616-B458-1CA43278FD49@edirom.de> Of course we're willing to share them. But first, I would suggest we put them on the agenda for the upcoming tech team meeting and see how we proceed with them. I'm somewhat reluctant to let them enter the wild without any further review. And adding comments on how they might fail with other files probably requires just the same amount of work as fixing these errors… For those not following the developer's list: The meeting is scheduled for the first half of November, so this shouldn't take too long…
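[Editor's sketch] A converter of the kind being discussed here is, at its core, an identity transform plus a rename table. A minimal illustration in XSLT 1.0 follows; the assumption that 2010-05 files are un-namespaced, and the titlestmt/titleStmt rename, are illustrative rather than the official mapping:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">

  <!-- copy attributes, text, comments and PIs unchanged -->
  <xsl:template match="@*|text()|comment()|processing-instruction()">
    <xsl:copy/>
  </xsl:template>

  <!-- default: keep the element name, move it into the 2012 namespace -->
  <xsl:template match="*">
    <xsl:element name="{local-name()}"
                 namespace="http://www.music-encoding.org/ns/mei">
      <xsl:apply-templates select="@*|node()"/>
    </xsl:element>
  </xsl:template>

  <!-- renamed elements get their own templates (example rename) -->
  <xsl:template match="titlestmt">
    <xsl:element name="titleStmt"
                 namespace="http://www.music-encoding.org/ns/mei">
      <xsl:apply-templates select="@*|node()"/>
    </xsl:element>
  </xsl:template>
</xsl:stylesheet>
```

The bulk of the work in a real converter is the rename table and the structural moves (elements whose parent changed between releases), which this sketch deliberately leaves out.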
Johannes Am 25.10.2012 um 16:11 schrieb Andrew Hankinson : > We would also be interested in seeing these, since we have a large number of non-2012 files that we want to migrate. I did not know about your internal stylesheets, so we've just started writing our own. > > Could they be posted "as-is" somewhere (in the SVN?) with appropriate warnings on them that they're not quite ready for prime-time. It would be nice if we don't duplicate effort in writing converters. > > -Andrew > > On 2012-10-25, at 5:55 AM, Johannes Kepper > wrote: > >> Hi Peter, >> >> we have some preliminary internal converters here. They will be revised and published on the music-encoding.org website when stable enough. Right now, they more or less cover the subset of MEI used in the sample collection, but they also do some other cleanup, which might not be appropriate in all cases. In order to see what's possible, it would be great to know what your encodings contain. Could you send them to Maja, Kristina or me? >> >> It's good you're asking on MEI-L ? of course we're willing to help anyone else with similar problems as well? >> >> Best, >> Johannes >> >> >> >> Am 25.10.2012 um 11:16 schrieb Peter Stadler : >> >>> Dear all, >>> >>> propably at least a few of you did some sort of transformation of mei files from schema version 2010-05 to MEI2012_v2.0.0. >>> Is there an official stylesheet or is anyone willing to share his/her code? >>> >>> Any pointers appreciated! >>> >>> All the best >>> Peter >>> >>> >>> >>> -- >>> Peter Stadler >>> Carl-Maria-von-Weber-Gesamtausgabe >>> Arbeitsstelle Detmold >>> Gartenstr. 20 >>> D-32756 Detmold >>> Tel. 
+49 5231 975-665 >>> Fax: +49 5231 975-668 >>> stadler at weber-gesamtausgabe.de >>> www.weber-gesamtausgabe.de >>> >>> >>> _______________________________________________ >>> mei-l mailing list >>> mei-l at lists.uni-paderborn.de >>> https://lists.uni-paderborn.de/mailman/listinfo/mei-l >> >> >> _______________________________________________ >> mei-l mailing list >> mei-l at lists.uni-paderborn.de >> https://lists.uni-paderborn.de/mailman/listinfo/mei-l > > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l From stadler at edirom.de Fri Oct 26 14:04:36 2012 From: stadler at edirom.de (Peter Stadler) Date: Fri, 26 Oct 2012 14:04:36 +0200 Subject: [MEI-L] revision of <bibl> In-Reply-To: References: , Message-ID: <6DA8FDCC-209B-48DE-AAB7-1B173D976AE1@edirom.de> On 22.10.2012 at 21:00, "Roland, Perry (pdr4h)" wrote: > I'm not quite yet convinced of the need to adopt a model as highly-structured as TEI's <biblStruct> or the other schemas you pointed to. Looking at Axel's examples and trying to envision future requirements, I don't see a need *at this time* for the level of detail that <biblStruct> provides. So, I've been working toward an element I call <biblStrict>. If we determine that an even more-structured way of handling bibliographic citations/descriptions than <biblStrict> is necessary, we can look at emulating/adopting <biblStruct>, bibtexml, or Zotero. > > Whereas <bibl> allows all inline text elements, <biblStrict> will permit only <title> and the members of a new class, model.biblPart, which has the members: biblScope, contributor, creation, creator, distributor, edition, editor, funder, genre, imprint, physLoc, pubPlace, publisher, recipient, relatedItem, series, and sponsor. This is very similar to the TEI model of <bibl> with inline text elements removed. Sorry, but I am not convinced by your arguments. <biblStrict> is even worse than <bibl> since it does remove inline text without imposing any structure.
I like <tei:biblStruct> simply because it separates information concerning different bibliographic level (series, monogr, analytic) thus creating a hierarchy and structure. That's not overly complex or too detailed. But it's better than throwing everything into one container. Bibtex (and Zotero?) has that sort of flat hierarchy as well (while you could probably infer hierarchy from cross-references) and wasn't meant as an example in that case. It was meant as an example for additional fields which are lacking in TEI, e.g. tags/keywords or abstracts. Best Peter -- Peter Stadler Carl-Maria-von-Weber-Gesamtausgabe Arbeitsstelle Detmold Gartenstr. 20 D-32756 Detmold Tel. +49 5231 975-665 Fax: +49 5231 975-668 stadler at weber-gesamtausgabe.de www.weber-gesamtausgabe.de From pdr4h at eservices.virginia.edu Fri Oct 26 16:47:03 2012 From: pdr4h at eservices.virginia.edu (Roland, Perry (pdr4h)) Date: Fri, 26 Oct 2012 14:47:03 +0000 Subject: [MEI-L] revision of <bibl> In-Reply-To: <6DA8FDCC-209B-48DE-AAB7-1B173D976AE1@edirom.de> References: <BBCC497C40D85642B90E9F94FC30343D0EFA9489@GRANT.eservices.virginia.edu>, <E47ECA3D-4DB1-46BA-9854-6D11D7015B3F@edirom.de> <BBCC497C40D85642B90E9F94FC30343D0EFA9DFA@GRANT.eservices.virginia.edu>, <6DA8FDCC-209B-48DE-AAB7-1B173D976AE1@edirom.de> Message-ID: <BBCC497C40D85642B90E9F94FC30343D0EFB7AF2@GRANT.eservices.virginia.edu> Hi, Peter, The statement "<biblStrict> is even worse than <bibl>" sounds like hyperbole to me. :-) Of course, you're correct in pointing out that <biblStrict> doesn't impose the order and number requirements of <tei:biblStruct>, but it does impose structure -- it removes plain text and the members of model.textphraseLike from the model of <bibl>, thus requiring that the components of the citation be marked using only the members of model.biblPart (and a few other elements). I don't disagree with you that more structure than that may be required by some users. 
But I don't believe that <biblStrict> will be particularly difficult to use/code/process because it provides fewer constraints than <tei:biblStruct>. And, if I'm wrong, then <tei:biblStruct> or an MEI equivalent can be added later to accommodate the need for tighter constraints and the capture of additional information. -- p. __________________________ Perry Roland Music Library University of Virginia P. O. Box 400175 Charlottesville, VA 22904 434-982-2702 (w) pdr4h (at) virginia (dot) edu ________________________________________ From: mei-l-bounces+pdr4h=virginia.edu at lists.uni-paderborn.de [mei-l-bounces+pdr4h=virginia.edu at lists.uni-paderborn.de] on behalf of Peter Stadler [stadler at edirom.de] Sent: Friday, October 26, 2012 8:04 AM To: Music Encoding Initiative Subject: Re: [MEI-L] revision of <bibl> Am 22.10.2012 um 21:00 schrieb "Roland, Perry (pdr4h)" <pdr4h at eservices.virginia.edu>: > I'm not quite yet convinced of the need to adopt a model as highly-structured as TEI's <biblStruct> or the other schemas you pointed to. Looking at Axel's examples and trying to envision future requirements, I don't see a need *at this time* for the level of detail that <biblStruct> provides. So, I've been working toward an element I call <biblStrict>. If we determine that an even more-structured way of handling bibliographic citations/descriptions than <biblStrict> is necessary, we can look at emulating/adopting <biblStruct>, bibtexml, or Zotero. > > Whereas, <bibl> allows all inline text elements, <biblStrict> will permit only <title> and the members of a new class, model.biblPart, which has the members: biblScope, contributor, creation, creator, distributor, edition, editor, funder, genre, imprint, physLoc, pubPlace, publisher, recipient, relatedItem, series, and sponsor. This is very similar to the TEI model of <bibl> with inline text elements removed. Sorry, but I am not convinced by your arguments. 
<biblStrict> is even worse than <bibl> since it does remove inline text without imposing any structure. I like <tei:biblStruct> simply because it separates information concerning different bibliographic levels (series, monogr, analytic), thus creating a hierarchy and structure. That's not overly complex or too detailed. But it's better than throwing everything into one container. Bibtex (and Zotero?) has that sort of flat hierarchy as well (while you could probably infer hierarchy from cross-references) and wasn't meant as an example in that case. It was meant as an example for additional fields which are lacking in TEI, e.g. tags/keywords or abstracts. Best Peter -- Peter Stadler Carl-Maria-von-Weber-Gesamtausgabe Arbeitsstelle Detmold Gartenstr. 20 D-32756 Detmold Tel. +49 5231 975-665 Fax: +49 5231 975-668 stadler at weber-gesamtausgabe.de www.weber-gesamtausgabe.de _______________________________________________ mei-l mailing list mei-l at lists.uni-paderborn.de https://lists.uni-paderborn.de/mailman/listinfo/mei-l From stadler at edirom.de Fri Oct 26 18:02:20 2012 From: stadler at edirom.de (Peter Stadler) Date: Fri, 26 Oct 2012 18:02:20 +0200 Subject: [MEI-L] Howto transform 2010-05 to MEI2012_v2.0.0 In-Reply-To: <7C2BBD92-A44C-4616-B458-1CA43278FD49@edirom.de> References: <6035CA09-D211-4751-9528-8465EAA5A877@edirom.de> <24384_1351158935_50890C97_24384_263_1_BB18C1AC-1614-4A44-A185-C5060EC5792C@edirom.de> <B32B959B-272B-484F-9067-81A7D9F2406C@mail.mcgill.ca> <7C2BBD92-A44C-4616-B458-1CA43278FD49@edirom.de> Message-ID: <F661D675-FE50-4A4D-91BF-CCA7AD6FC3EB@edirom.de> I couldn't wait ;-) Here's what I came up with: https://github.com/peterstadler/MEI-2010to2012 But especially profiledesc is giving me headaches -- where has it gone? All the best Peter On 25.10.2012 at 16:23, Johannes Kepper <kepper at edirom.de> wrote: > Of course we're willing to share them.
But first, I would suggest we put them on the agenda for the upcoming tech team meeting and see how we proceed with them. I'm somewhat reluctant to let them enter the wild without any further review. And adding comments on how they might fail with other files probably requires just the same amount of work than fixing these errors? > > For those not following the developer's list: The meeting is scheduled for the first half of November, so this shouldn't take too long? > > Johannes > > > Am 25.10.2012 um 16:11 schrieb Andrew Hankinson <andrew.hankinson at mail.mcgill.ca>: > >> We would also be interested in seeing these, since we have a large number of non-2012 files that we want to migrate. I did not know about your internal stylesheets, so we've just started writing our own. >> >> Could they be posted "as-is" somewhere (in the SVN?) with appropriate warnings on them that they're not quite ready for prime-time. It would be nice if we don't duplicate effort in writing converters. >> >> -Andrew >> >> On 2012-10-25, at 5:55 AM, Johannes Kepper <kepper at edirom.de> >> wrote: >> >>> Hi Peter, >>> >>> we have some preliminary internal converters here. They will be revised and published on the music-encoding.org website when stable enough. Right now, they more or less cover the subset of MEI used in the sample collection, but they also do some other cleanup, which might not be appropriate in all cases. In order to see what's possible, it would be great to know what your encodings contain. Could you send them to Maja, Kristina or me? >>> >>> It's good you're asking on MEI-L ? of course we're willing to help anyone else with similar problems as well? >>> >>> Best, >>> Johannes >>> >>> >>> >>> Am 25.10.2012 um 11:16 schrieb Peter Stadler <stadler at edirom.de>: >>> >>>> Dear all, >>>> >>>> propably at least a few of you did some sort of transformation of mei files from schema version 2010-05 to MEI2012_v2.0.0. 
>>>> Is there an official stylesheet or is anyone willing to share his/her code? >>>> >>>> Any pointers appreciated! >>>> >>>> All the best >>>> Peter -- Peter Stadler Carl-Maria-von-Weber-Gesamtausgabe Arbeitsstelle Detmold Gartenstr. 20 D-32756 Detmold Tel. +49 5231 975-665 Fax: +49 5231 975-668 stadler at weber-gesamtausgabe.de www.weber-gesamtausgabe.de From maja.hartwig at gmx.de Fri Oct 26 18:18:46 2012 From: maja.hartwig at gmx.de (Maja Hartwig) Date: Fri, 26 Oct 2012 18:18:46 +0200 Subject: [MEI-L] Howto transform 2010-05 to MEI2012_v2.0.0 In-Reply-To: <F661D675-FE50-4A4D-91BF-CCA7AD6FC3EB@edirom.de> References: <6035CA09-D211-4751-9528-8465EAA5A877@edirom.de> <24384_1351158935_50890C97_24384_263_1_BB18C1AC-1614-4A44-A185-C5060EC5792C@edirom.de> <B32B959B-272B-484F-9067-81A7D9F2406C@mail.mcgill.ca> <7C2BBD92-A44C-4616-B458-1CA43278FD49@edirom.de> <F661D675-FE50-4A4D-91BF-CCA7AD6FC3EB@edirom.de> Message-ID: <D890465C-0E0E-4D20-9611-B9243D56C45E@gmx.de> Hi Peter, ProfileDesc has changed into workDesc! Did you use some templates of my Stylesheet? ;-) Maja Am 26.10.2012 um 18:02 schrieb Peter Stadler <stadler at edirom.de>: > I couldn't wait ;-) > Here's what I came up with: https://github.com/peterstadler/MEI-2010to2012 > > But especially profiledesc is giving me headaches -- where is it gone? > > All the best > Peter > > Am 25.10.2012 um 16:23 schrieb Johannes Kepper <kepper at edirom.de>: > >> Of course we're willing to share them. But first, I would suggest we put them on the agenda for the upcoming tech team meeting and see how we proceed with them. I'm somewhat reluctant to let them enter the wild without any further review. And adding comments on how they might fail with other files probably requires just the same amount of work than fixing these errors? >> >> For those not following the developer's list: The meeting is scheduled for the first half of November, so this shouldn't take too long? 
>> >> Johannes >> >> >> Am 25.10.2012 um 16:11 schrieb Andrew Hankinson <andrew.hankinson at mail.mcgill.ca>: >> >>> We would also be interested in seeing these, since we have a large number of non-2012 files that we want to migrate. I did not know about your internal stylesheets, so we've just started writing our own. >>> >>> Could they be posted "as-is" somewhere (in the SVN?) with appropriate warnings on them that they're not quite ready for prime-time. It would be nice if we don't duplicate effort in writing converters. >>> >>> -Andrew >>> >>> On 2012-10-25, at 5:55 AM, Johannes Kepper <kepper at edirom.de> >>> wrote: >>> >>>> Hi Peter, >>>> >>>> we have some preliminary internal converters here. They will be revised and published on the music-encoding.org website when stable enough. Right now, they more or less cover the subset of MEI used in the sample collection, but they also do some other cleanup, which might not be appropriate in all cases. In order to see what's possible, it would be great to know what your encodings contain. Could you send them to Maja, Kristina or me? >>>> >>>> It's good you're asking on MEI-L ? of course we're willing to help anyone else with similar problems as well? >>>> >>>> Best, >>>> Johannes >>>> >>>> >>>> >>>> Am 25.10.2012 um 11:16 schrieb Peter Stadler <stadler at edirom.de>: >>>> >>>>> Dear all, >>>>> >>>>> propably at least a few of you did some sort of transformation of mei files from schema version 2010-05 to MEI2012_v2.0.0. >>>>> Is there an official stylesheet or is anyone willing to share his/her code? >>>>> >>>>> Any pointers appreciated! >>>>> >>>>> All the best >>>>> Peter > > > > -- > Peter Stadler > Carl-Maria-von-Weber-Gesamtausgabe > Arbeitsstelle Detmold > Gartenstr. 20 > D-32756 Detmold > Tel. 
+49 5231 975-665 > Fax: +49 5231 975-668 > stadler at weber-gesamtausgabe.de > www.weber-gesamtausgabe.de > > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l From stadler at edirom.de Sun Oct 28 20:43:03 2012 From: stadler at edirom.de (Peter Stadler) Date: Sun, 28 Oct 2012 20:43:03 +0100 Subject: [MEI-L] Howto transform 2010-05 to MEI2012_v2.0.0 In-Reply-To: <D890465C-0E0E-4D20-9611-B9243D56C45E@gmx.de> References: <6035CA09-D211-4751-9528-8465EAA5A877@edirom.de> <24384_1351158935_50890C97_24384_263_1_BB18C1AC-1614-4A44-A185-C5060EC5792C@edirom.de> <B32B959B-272B-484F-9067-81A7D9F2406C@mail.mcgill.ca> <7C2BBD92-A44C-4616-B458-1CA43278FD49@edirom.de> <F661D675-FE50-4A4D-91BF-CCA7AD6FC3EB@edirom.de> <D890465C-0E0E-4D20-9611-B9243D56C45E@gmx.de> Message-ID: <BC3B138D-1CCB-4AD5-A526-3605F5A71C90@edirom.de> Hi Maja, thanks again for the stylesheet you sent off-list. It indeed confirmed my assumption that most of the (name) changes from 2010-05 to 2012 were "camelcasation". So what I did was to write an XQuery that simply tried to match all 2010-05 element (and attribute) names with the 2012 names by lower case. That's been the easy part. The tough part begins with those elements that changed semantics:

* accessdesc
* altmeiid
* blockquote
* clefchange
* exhibithist
* extptr
* extref
* fingerprint
* keychange
* keywords
* pgfoot1
* pghead1
* profiledesc
* treatmenthist
* treatmentsched
* @complete
* @entityref
* @href
* @label.full
* @mediacontent
* @medialength

If anyone can contribute a mapping for one of those, I'd be happy for any pull request or notification on- or off-list. All the best Peter Am 26.10.2012 um 18:18 schrieb Maja Hartwig <maja.hartwig at gmx.de>: > Hi Peter, > ProfileDesc has changed into workDesc! > Did you use some templates of my Stylesheet?
;-) > > Maja > > Am 26.10.2012 um 18:02 schrieb Peter Stadler <stadler at edirom.de>: > >> I couldn't wait ;-) >> Here's what I came up with: https://github.com/peterstadler/MEI-2010to2012 >> >> But especially profiledesc is giving me headaches -- where is it gone? >> >> All the best >> Peter >> >> Am 25.10.2012 um 16:23 schrieb Johannes Kepper <kepper at edirom.de>: >> >>> Of course we're willing to share them. But first, I would suggest we put them on the agenda for the upcoming tech team meeting and see how we proceed with them. I'm somewhat reluctant to let them enter the wild without any further review. And adding comments on how they might fail with other files probably requires just the same amount of work than fixing these errors? >>> >>> For those not following the developer's list: The meeting is scheduled for the first half of November, so this shouldn't take too long? >>> >>> Johannes >>> >>> >>> Am 25.10.2012 um 16:11 schrieb Andrew Hankinson <andrew.hankinson at mail.mcgill.ca>: >>> >>>> We would also be interested in seeing these, since we have a large number of non-2012 files that we want to migrate. I did not know about your internal stylesheets, so we've just started writing our own. >>>> >>>> Could they be posted "as-is" somewhere (in the SVN?) with appropriate warnings on them that they're not quite ready for prime-time. It would be nice if we don't duplicate effort in writing converters. >>>> >>>> -Andrew >>>> >>>> On 2012-10-25, at 5:55 AM, Johannes Kepper <kepper at edirom.de> >>>> wrote: >>>> >>>>> Hi Peter, >>>>> >>>>> we have some preliminary internal converters here. They will be revised and published on the music-encoding.org website when stable enough. Right now, they more or less cover the subset of MEI used in the sample collection, but they also do some other cleanup, which might not be appropriate in all cases. In order to see what's possible, it would be great to know what your encodings contain. 
Could you send them to Maja, Kristina or me? >>>>> >>>>> It's good you're asking on MEI-L ? of course we're willing to help anyone else with similar problems as well? >>>>> >>>>> Best, >>>>> Johannes >>>>> >>>>> >>>>> >>>>> Am 25.10.2012 um 11:16 schrieb Peter Stadler <stadler at edirom.de>: >>>>> >>>>>> Dear all, >>>>>> >>>>>> propably at least a few of you did some sort of transformation of mei files from schema version 2010-05 to MEI2012_v2.0.0. >>>>>> Is there an official stylesheet or is anyone willing to share his/her code? >>>>>> >>>>>> Any pointers appreciated! >>>>>> >>>>>> All the best >>>>>> Peter -- Peter Stadler Carl-Maria-von-Weber-Gesamtausgabe Arbeitsstelle Detmold Gartenstr. 20 D-32756 Detmold Tel. +49 5231 975-665 Fax: +49 5231 975-668 stadler at weber-gesamtausgabe.de www.weber-gesamtausgabe.de From andrew.hankinson at mail.mcgill.ca Sun Oct 28 21:23:31 2012 From: andrew.hankinson at mail.mcgill.ca (Andrew Hankinson) Date: Sun, 28 Oct 2012 16:23:31 -0400 Subject: [MEI-L] Howto transform 2010-05 to MEI2012_v2.0.0 In-Reply-To: <22559_1351453398_508D8AD5_22559_201_1_BC3B138D-1CCB-4AD5-A526-3605F5A71C90@edirom.de> References: <6035CA09-D211-4751-9528-8465EAA5A877@edirom.de> <24384_1351158935_50890C97_24384_263_1_BB18C1AC-1614-4A44-A185-C5060EC5792C@edirom.de> <B32B959B-272B-484F-9067-81A7D9F2406C@mail.mcgill.ca> <7C2BBD92-A44C-4616-B458-1CA43278FD49@edirom.de> <F661D675-FE50-4A4D-91BF-CCA7AD6FC3EB@edirom.de> <D890465C-0E0E-4D20-9611-B9243D56C45E@gmx.de> <22559_1351453398_508D8AD5_22559_201_1_BC3B138D-1CCB-4AD5-A526-3605F5A71C90@edirom.de> Message-ID: <654E24CC-53A2-4C07-9669-06E6CD051EE7@mail.mcgill.ca> Hi Peter, I'll answer inline, since we've dealt with this a bit before. Someone can correct me if I'm wrong, but this should give you a rough start. 
> * accessdesc

This becomes <accessRestrict />

> * altmeiid

This becomes <meiId />

> * blockquote

This just becomes <quote />

> * clefchange

This can just be converted directly to <clef />

> * exhibithist

For some strange reason this was converted to <exhibHist />. (Not sure why two characters got lopped off here -- maybe we were running out of our limit of 256KB of RAM? ;)

> * extptr
> * extref

These can be directly converted to <ptr /> and <ref />, since we now use anyURI.

> * fingerprint

This was removed. Not sure if it was ever used by anyone.

> * keychange

I *think*, like clefChange, that you can just directly convert this to <key />

> * keywords

I could find a reference that in 2011 Perry changed this to *something* but I can't tell what.

http://code.google.com/p/music-encoding/source/search?q=keywords&origq=keywords&btnG=Search+Trunk (in-browser search for "keywords")

<change who="#PR" when="2011-01-13">Renamed <keywords> 'termList' and added @classcode</change>

> * pgfoot1
> * pghead1

I think these have just been renamed pgFoot and pgHead. The pgFoot2 and pgHead2 ones still exist too.

> * profiledesc

This becomes <workDesc> with an embedded <work> element, in which all the children of <profiledesc /> are copied;

> * treatmenthist

This becomes <treatHist />

> * treatmentsched

This becomes <treatSched />

> * @complete

I found this changelog message that may help you here:

<change who="#PR" when="2011-06-30">Reworked how metrical conformance is handled: removed @complete, added att.meterconformance and att.meterconformance.bar attribute classes in order to provide different levels of granularity at the measure, staff, and layer levels</change>

> * @entityref
> * @href

Not sure about these two, but I think their fates are closely linked...

> * @label.full

This just becomes @label

> * @mediacontent

This becomes @avref

> * @medialength

I think you can just replace this with @end on <avFile />.
> If anyone can contribute a mapping for one of those, I'd be happy for any pull-request or notification on- or off-list. Hope this helps! PS: I'll put my Subversion admin hat on here and thank everyone who put in descriptive commit messages. It makes a request like this so much easier. > > All the best > Peter > > Am 26.10.2012 um 18:18 schrieb Maja Hartwig <maja.hartwig at gmx.de>: > >> Hi Peter, >> ProfileDesc has changed into workDesc! >> Did you use some templates of my Stylesheet? ;-) >> >> Maja >> >> Am 26.10.2012 um 18:02 schrieb Peter Stadler <stadler at edirom.de>: >> >>> I couldn't wait ;-) >>> Here's what I came up with: https://github.com/peterstadler/MEI-2010to2012 >>> >>> But especially profiledesc is giving me headaches -- where is it gone? >>> >>> All the best >>> Peter >>> >>> Am 25.10.2012 um 16:23 schrieb Johannes Kepper <kepper at edirom.de>: >>> >>>> Of course we're willing to share them. But first, I would suggest we put them on the agenda for the upcoming tech team meeting and see how we proceed with them. I'm somewhat reluctant to let them enter the wild without any further review. And adding comments on how they might fail with other files probably requires just the same amount of work than fixing these errors? >>>> >>>> For those not following the developer's list: The meeting is scheduled for the first half of November, so this shouldn't take too long? >>>> >>>> Johannes >>>> >>>> >>>> Am 25.10.2012 um 16:11 schrieb Andrew Hankinson <andrew.hankinson at mail.mcgill.ca>: >>>> >>>>> We would also be interested in seeing these, since we have a large number of non-2012 files that we want to migrate. I did not know about your internal stylesheets, so we've just started writing our own. >>>>> >>>>> Could they be posted "as-is" somewhere (in the SVN?) with appropriate warnings on them that they're not quite ready for prime-time. It would be nice if we don't duplicate effort in writing converters. 
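[Editorial note: the renaming rules collected in this thread can be sketched as a simple lookup. This is a hypothetical illustration, not the actual MEI-2010to2012 stylesheet: the function names and the miniature 2012 name inventory are made up, and the special-case table carries only the straightforward renamings mentioned above. Elements that changed semantics (e.g. profiledesc becoming a workDesc/work structure) need real restructuring that a name map cannot express. The fallback mirrors Peter's lowercase-matching XQuery idea.]

```python
# Hypothetical sketch of the 2010-05 -> 2012 renaming logic discussed above.
# The special-case table collects mappings named in this thread; the fallback
# matches old all-lowercase names against camel-cased 2012 names by
# lowercasing the latter ("camelcasation").

# Non-mechanical renamings taken from the answers in this thread.
SPECIAL_CASES = {
    "accessdesc": "accessRestrict",
    "altmeiid": "altId",
    "blockquote": "quote",
    "clefchange": "clef",
    "exhibithist": "exhibHist",
    "extptr": "ptr",
    "extref": "ref",
    "keywords": "termList",
    "pgfoot1": "pgFoot",
    "pghead1": "pgHead",
    "treatmenthist": "treatHist",
    "treatmentsched": "treatSched",
}

def build_case_map(names_2012):
    """Index the 2012 schema's element names by their lowercased form."""
    return {name.lower(): name for name in names_2012}

def rename(old_name, case_map):
    """Return the 2012 name for a 2010-05 element name, or None if unknown."""
    if old_name in SPECIAL_CASES:
        return SPECIAL_CASES[old_name]
    return case_map.get(old_name.lower())

# Example with a tiny, made-up extract of the 2012 name inventory:
case_map = build_case_map(["staffDef", "scoreDef", "measure", "keySig"])
print(rename("staffdef", case_map))    # staffDef
print(rename("blockquote", case_map))  # quote
```

In a real converter the name inventory would be extracted from the 2012 RNG/ODD rather than hard-coded, and unmapped names (the `None` case) would be flagged for manual review.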
>>>>> >>>>> -Andrew >>>>> >>>>> On 2012-10-25, at 5:55 AM, Johannes Kepper <kepper at edirom.de> >>>>> wrote: >>>>> >>>>>> Hi Peter, >>>>>> >>>>>> we have some preliminary internal converters here. They will be revised and published on the music-encoding.org website when stable enough. Right now, they more or less cover the subset of MEI used in the sample collection, but they also do some other cleanup, which might not be appropriate in all cases. In order to see what's possible, it would be great to know what your encodings contain. Could you send them to Maja, Kristina or me? >>>>>> >>>>>> It's good you're asking on MEI-L ? of course we're willing to help anyone else with similar problems as well? >>>>>> >>>>>> Best, >>>>>> Johannes >>>>>> >>>>>> >>>>>> >>>>>> Am 25.10.2012 um 11:16 schrieb Peter Stadler <stadler at edirom.de>: >>>>>> >>>>>>> Dear all, >>>>>>> >>>>>>> propably at least a few of you did some sort of transformation of mei files from schema version 2010-05 to MEI2012_v2.0.0. >>>>>>> Is there an official stylesheet or is anyone willing to share his/her code? >>>>>>> >>>>>>> Any pointers appreciated! >>>>>>> >>>>>>> All the best >>>>>>> Peter > > > > > -- > Peter Stadler > Carl-Maria-von-Weber-Gesamtausgabe > Arbeitsstelle Detmold > Gartenstr. 20 > D-32756 Detmold > Tel. 
+49 5231 975-665 > Fax: +49 5231 975-668 > stadler at weber-gesamtausgabe.de > www.weber-gesamtausgabe.de > > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l From maja.hartwig at gmx.de Mon Oct 29 09:04:05 2012 From: maja.hartwig at gmx.de (Maja Hartwig) Date: Mon, 29 Oct 2012 09:04:05 +0100 Subject: [MEI-L] Howto transform 2010-05 to MEI2012_v2.0.0 In-Reply-To: <654E24CC-53A2-4C07-9669-06E6CD051EE7@mail.mcgill.ca> References: <6035CA09-D211-4751-9528-8465EAA5A877@edirom.de> <24384_1351158935_50890C97_24384_263_1_BB18C1AC-1614-4A44-A185-C5060EC5792C@edirom.de> <B32B959B-272B-484F-9067-81A7D9F2406C@mail.mcgill.ca> <7C2BBD92-A44C-4616-B458-1CA43278FD49@edirom.de> <F661D675-FE50-4A4D-91BF-CCA7AD6FC3EB@edirom.de> <D890465C-0E0E-4D20-9611-B9243D56C45E@gmx.de> <22559_1351453398_508D8AD5_22559_201_1_BC3B138D-1CCB-4AD5-A526-3605F5A71C90@edirom.de> <654E24CC-53A2-4C07-9669-06E6CD051EE7@mail.mcgill.ca> Message-ID: <DDDB457F-62BA-4D75-9F51-47D2563B94C5@gmx.de> Hi, the <altmeiid/> becomes <altId>, but I'm not sure where to put all the content of <altmeiid/>; <keychange/> never existed, I think, and would be encoded by <keySig/> (or am I wrong?); <keywords/> was a subelement of <classification/> and becomes <termList> with <term/> now. Best, Maja Am 28.10.2012 um 21:23 schrieb Andrew Hankinson: > Hi Peter, > > I'll answer inline, since we've dealt with this a bit before. Someone can correct me if I'm wrong, but this should give you a rough start. > >> * accessdesc > > This becomes <accessRestrict /> > >> * altmeiid > > This becomes <meiId /> > >> * blockquote > > This just becomes <quote /> > >> * clefchange > > This can just be converted directly to <clef /> > >> * exhibithist > > For some strange reason this was converted to <exhibHist />. (Not sure why two characters got lopped off here -- maybe we were running out of our limit of 256KB of RAM?
;) > >> * extptr >> * extref > > These can be directly converted to <ptr /> and <ref />, since we now use anyURI. > >> * fingerprint > > This was removed. Not sure if it was ever used by anyone. > >> * keychange > > I *think*, like clefChange, that you can just directly convert this to <key /> > >> * keywords > > I could find a reference that on 2011 Perry changed this to *something* but I can't tell what. > > http://code.google.com/p/music-encoding/source/search?q=keywords&origq=keywords&btnG=Search+Trunk (in-browser search for "keywords") > > <change who="#PR" when="2011-01-13">Renamed <keywords> 'termList' and added @classcode</change> > >> * pgfoot1 >> * pghead1 > > I think these have just been renamed pgFoot and pgHead. The pgFoot2 and pgHead2 ones still exist too. > >> * profiledesc > > This becomes <workDesc> with an embedded <work> element, in which all the children of <profiledesc /> are copied; > >> * treatmenthist > > This becomes <treatHist /> > >> * treatmentsched > > This becomes <treatSched /> > >> * @complete > > I found this changelog message that may help you here: > > <change who="#PR" when="2011-06-30">Reworked how metrical conformance is handled: removed @complete, added att.meterconformance and att.meterconformance.bar attribute classes in order to provide different levels of granularity at the measure, staff, and layer levels</change> > >> * @entityref >> * @href > > Not sure about these two, but I think their fates are closely linked... > >> * @label.full > > This just becomes @label > >> * @mediacontent > > This becomes @avref > >> * @medialength > > I think you can just replace this with @end on <avFile />. > >> If anyone can contribute a mapping for one of those, I'd be happy for any pull-request or notification on- or off-list. > > Hope this helps! > > PS: I'll put my Subversion admin hat on here and thank everyone who put in descriptive commit messages. It makes a request like this so much easier. 
> >> >> All the best >> Peter >> >> Am 26.10.2012 um 18:18 schrieb Maja Hartwig <maja.hartwig at gmx.de>: >> >>> Hi Peter, >>> ProfileDesc has changed into workDesc! >>> Did you use some templates of my Stylesheet? ;-) >>> >>> Maja >>> >>> Am 26.10.2012 um 18:02 schrieb Peter Stadler <stadler at edirom.de>: >>> >>>> I couldn't wait ;-) >>>> Here's what I came up with: https://github.com/peterstadler/MEI-2010to2012 >>>> >>>> But especially profiledesc is giving me headaches -- where is it gone? >>>> >>>> All the best >>>> Peter >>>> >>>> Am 25.10.2012 um 16:23 schrieb Johannes Kepper <kepper at edirom.de>: >>>> >>>>> Of course we're willing to share them. But first, I would suggest we put them on the agenda for the upcoming tech team meeting and see how we proceed with them. I'm somewhat reluctant to let them enter the wild without any further review. And adding comments on how they might fail with other files probably requires just the same amount of work than fixing these errors? >>>>> >>>>> For those not following the developer's list: The meeting is scheduled for the first half of November, so this shouldn't take too long? >>>>> >>>>> Johannes >>>>> >>>>> >>>>> Am 25.10.2012 um 16:11 schrieb Andrew Hankinson <andrew.hankinson at mail.mcgill.ca>: >>>>> >>>>>> We would also be interested in seeing these, since we have a large number of non-2012 files that we want to migrate. I did not know about your internal stylesheets, so we've just started writing our own. >>>>>> >>>>>> Could they be posted "as-is" somewhere (in the SVN?) with appropriate warnings on them that they're not quite ready for prime-time. It would be nice if we don't duplicate effort in writing converters. >>>>>> >>>>>> -Andrew >>>>>> >>>>>> On 2012-10-25, at 5:55 AM, Johannes Kepper <kepper at edirom.de> >>>>>> wrote: >>>>>> >>>>>>> Hi Peter, >>>>>>> >>>>>>> we have some preliminary internal converters here. They will be revised and published on the music-encoding.org website when stable enough. 
Right now, they more or less cover the subset of MEI used in the sample collection, but they also do some other cleanup, which might not be appropriate in all cases. In order to see what's possible, it would be great to know what your encodings contain. Could you send them to Maja, Kristina or me? >>>>>>> >>>>>>> It's good you're asking on MEI-L ? of course we're willing to help anyone else with similar problems as well? >>>>>>> >>>>>>> Best, >>>>>>> Johannes >>>>>>> >>>>>>> >>>>>>> >>>>>>> Am 25.10.2012 um 11:16 schrieb Peter Stadler <stadler at edirom.de>: >>>>>>> >>>>>>>> Dear all, >>>>>>>> >>>>>>>> propably at least a few of you did some sort of transformation of mei files from schema version 2010-05 to MEI2012_v2.0.0. >>>>>>>> Is there an official stylesheet or is anyone willing to share his/her code? >>>>>>>> >>>>>>>> Any pointers appreciated! >>>>>>>> >>>>>>>> All the best >>>>>>>> Peter >> >> >> >> >> -- >> Peter Stadler >> Carl-Maria-von-Weber-Gesamtausgabe >> Arbeitsstelle Detmold >> Gartenstr. 20 >> D-32756 Detmold >> Tel. +49 5231 975-665 >> Fax: +49 5231 975-668 >> stadler at weber-gesamtausgabe.de >> www.weber-gesamtausgabe.de >> >> >> _______________________________________________ >> mei-l mailing list >> mei-l at lists.uni-paderborn.de >> https://lists.uni-paderborn.de/mailman/listinfo/mei-l > > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l From roewenstrunk at edirom.de Mon Oct 29 17:22:24 2012 From: roewenstrunk at edirom.de (=?iso-8859-1?Q?Daniel_R=F6wenstrunk?=) Date: Mon, 29 Oct 2012 17:22:24 +0100 Subject: [MEI-L] Connection/relation of multiple parts Message-ID: <BC2ADE35-D31D-4E8A-89FB-57921AFC60DE@edirom.de> Hi all, I'm looking for a good approach to describe the relationship between <part> elements in different movements. 
In my actual encoding of the piece (see a very simplified example below) the only possibility to guess the relationship between the Violin 1 in the first ("mdiv1_part1") and second ("mdiv2_part1") movement is by comparing their label attributes. And that is not really a thing I want to rely on when I programmatically read MEI files. Is there a better and/or more precise way of encoding the relationship? By the way, why is <mdiv> surrounding <score> or <parts>? Wouldn't it be much more intuitive, at least if you use <mdiv> as containers for movements together with parts, to switch this the other way round? Cheers and thanks, Daniel Encoding example:

<body>
  <mdiv xml:id="mdiv1" label="Allegro">
    <parts>
      <part xml:id="mdiv1_part1" label="Violin 1">…</part>
      <part xml:id="mdiv1_part2" label="Violin 2">…</part>
    </parts>
  </mdiv>
  <mdiv xml:id="mdiv2" label="Allegretto">
    <parts>
      <part xml:id="mdiv2_part1" label="Violin 1">…</part>
      <part xml:id="mdiv2_part2" label="Violin 2">…</part>
    </parts>
  </mdiv>
</body>

-- Dipl. Wirt. Inf. Daniel Röwenstrunk Project manager BMBF-Project "Freischütz Digital" Musikwiss. Seminar Detmold/Paderborn Gartenstr. 20 D-32756 Detmold Tel.: +49 5231 975665 Mail: roewenstrunk at edirom.de URL: http://www.freischuetz-digital.de -------------- next part -------------- An HTML attachment was scrubbed...
URL: <http://lists.uni-paderborn.de/pipermail/mei-l/attachments/20121029/9b60d7ea/attachment.html> From kepper at edirom.de Mon Oct 29 19:00:21 2012 From: kepper at edirom.de (Johannes Kepper) Date: Mon, 29 Oct 2012 19:00:21 +0100 Subject: [MEI-L] Connection/relation of multiple parts In-Reply-To: <BC2ADE35-D31D-4E8A-89FB-57921AFC60DE@edirom.de> References: <BC2ADE35-D31D-4E8A-89FB-57921AFC60DE@edirom.de> Message-ID: <EE57BAD2-410C-4FDB-B220-7B3CFEB8759D@edirom.de> Hi Daniel, without access to the schema / guidelines, I'd suggest to describe the performing forces in the header (using <perfMedium>), and then use the @decls attribute to point there from each part / staffDef (or @data in the other direction). Probably not the most convenient way, but right now there is no @sameas or @corresp on staffDef… Just my first idea, maybe others will come up with other / better suggestions. Best, Jo Am 29.10.2012 um 17:22 schrieb Daniel Röwenstrunk <roewenstrunk at edirom.de>: > Hi all, > > I'm looking for a good approach to describe the relationship between > <part> elements in different movements. In my actual encoding of the > piece (see a very simplified example below) the only possibility to > guess the relationship between the Violin 1 in the first > ("mdiv1_part1") and second ("mdiv2_part1") movement is by comparing > their label attributes. And that is not really a thing I want to > rely on when I programmatically read MEI files. > > Is there a better and/or more precise way of encoding the > relationship? > > > By the way, why is <mdiv> surrounding <score> or <parts>? Wouldn't > it be much more intuitive, at least if you use <mdiv> as containers > for movements together with parts, to switch this the other way round? > > Cheers and thanks, > Daniel > > > Encoding example: > > <body> > <mdiv xml:id="mdiv1" label="Allegro"> > <parts> > <part xml:id="mdiv1_part1" label="Violin 1"> > … > </part> > <part xml:id="mdiv1_part2" label="Violin 2"> > …
> </part> > </parts> > </mdiv> > <mdiv xml:id="mdiv2" label="Allegretto"> > <parts> > <part xml:id="mdiv2_part1" label="Violin 1"> > … > </part> > <part xml:id="mdiv2_part2" label="Violin 2"> > … > </part> > </parts> > </mdiv> > </body> > > > > -- > Dipl. Wirt. Inf. Daniel Röwenstrunk > Project manager > BMBF-Project "Freischütz Digital" > > Musikwiss. Seminar Detmold/Paderborn > Gartenstr. 20 > D-32756 Detmold > > Tel.: +49 5231 975665 > Mail: roewenstrunk at edirom.de > URL: http://www.freischuetz-digital.de > > > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l -------------- next part -------------- An HTML attachment was scrubbed... URL: <http://lists.uni-paderborn.de/pipermail/mei-l/attachments/20121029/aa10b79f/attachment.html> From raffaeleviglianti at gmail.com Mon Oct 29 22:12:20 2012 From: raffaeleviglianti at gmail.com (Raffaele Viglianti) Date: Mon, 29 Oct 2012 21:12:20 +0000 Subject: [MEI-L] Connection/relation of multiple parts In-Reply-To: <EE57BAD2-410C-4FDB-B220-7B3CFEB8759D@edirom.de> References: <BC2ADE35-D31D-4E8A-89FB-57921AFC60DE@edirom.de> <EE57BAD2-410C-4FDB-B220-7B3CFEB8759D@edirom.de> Message-ID: <CAMyHAnNE7X3MF-BZ+aX7hxqf1EmdPeWndcz+vY-7YBc97sgRCg@mail.gmail.com> Hello, I agree with Johannes, the place for encoding this is in the header, using @decls to point back to the text. I struggle to find the right place to do that though. To me perfMedium steps into a different domain, so I'm not sure about it. This is a very interesting problem and I'm curious to read other suggestions. Best, Raffaele On Monday, October 29, 2012, Johannes Kepper wrote: > Hi Daniel, > > without access to the schema / guidelines, I'd suggest to describe the > performing forces in the header (using <perfMedium>), and then use the > @decls attribute to point there from each part / staffDef (or @data in the > other direction).
Probably not the most convenient way, but right now there > is no @sameas or @corresp on staffDef? > > Just my first idea, maybe others will come up with other / better > suggestions. > > Best, Jo > > Am 29.10.2012 um 17:22 schrieb Daniel R?wenstrunk <roewenstrunk at edirom.de<javascript:_e({}, 'cvml', 'roewenstrunk at edirom.de');> > >: > > Hi all, > > I'm looking for a good approach to describe the relationship between > <part> elements in different movements. In my actual encoding of the piece > (see a very simplified example below) the only possibility to guess the > relationship between the Violin 1 in the first ("mdiv1_part1") and second > ("mdiv2_part1") movement is by comparing their label attributes. And that > is not really a thing I want to rely on when I programmatically read MEI > files. > > Is there a better and/or more precise way of encoding the relationship? > > > By the way, why is <mdiv> surrounding <scrore> or <parts>? Wouldn't it be > much more intuitive, at least if you use <mdiv> as containers for movements > together with parts, to switch this the other way round? > > Cheers and thanks, > Daniel > > > Encoding example: > > <body> > <mdiv xml:id="mdiv1" label="Allegro"> > <parts> > <part xml:id="mdiv1_part1" label="Violin 1"> > ? > </part> > <part xml:id="mdiv1_part2" label="Violin 2"> > ? > </part> > </parts> > </mdiv> > <mdiv xml:id="mdiv2" label="Allegretto"> > <parts> > <part xml:id="mdiv2_part1" label="Violin 1"> > ? > </part> > <part xml:id="mdiv2_part2" label="Violin 2"> > ? > </part> > </parts> > </mdiv> > </body> > > > > -- > Dipl. Wirt. Inf. Daniel R?wenstrunk > Project manager > BMBF-Project "Freisch?tz Digital" > > Musikwiss. Seminar Detmold/Paderborn > Gartenstr. 
20 > D-32756 Detmold > > Tel.: +49 5231 975665 > Mail: roewenstrunk at edirom.de > URL: <http://www.freischuetz-digital.de>http://www.freischuetz-digital.de > > > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l > > -------------- next part -------------- An HTML attachment was scrubbed... URL: <http://lists.uni-paderborn.de/pipermail/mei-l/attachments/20121029/d93f0e42/attachment.html> From Maja.Hartwig at gmx.de Tue Oct 30 09:07:44 2012 From: Maja.Hartwig at gmx.de (Maja Hartwig) Date: Tue, 30 Oct 2012 09:07:44 +0100 Subject: [MEI-L] Howto transform 2010-05 to MEI2012_v2.0.0 In-Reply-To: <654E24CC-53A2-4C07-9669-06E6CD051EE7@mail.mcgill.ca> References: <6035CA09-D211-4751-9528-8465EAA5A877@edirom.de> <24384_1351158935_50890C97_24384_263_1_BB18C1AC-1614-4A44-A185-C5060EC5792C@edirom.de> <B32B959B-272B-484F-9067-81A7D9F2406C@mail.mcgill.ca> <7C2BBD92-A44C-4616-B458-1CA43278FD49@edirom.de> <F661D675-FE50-4A4D-91BF-CCA7AD6FC3EB@edirom.de> <D890465C-0E0E-4D20-9611-B9243D56C45E@gmx.de> <22559_1351453398_508D8AD5_22559_201_1_BC3B138D-1CCB-4AD5-A526-3605F5A71C90@edirom.de> <654E24CC-53A2-4C07-9669-06E6CD051EE7@mail.mcgill.ca> Message-ID: <20121030080744.72540@gmx.net> Hi Andrew, thanks for your hints! The stylesheet at https://github.com/peterstadler/MEI-2010to2012 now transforms from schema version 2010 to 2012. It needs more testing, and there are still some issues with those elements which change semantics, like workDesc and altId. Probably it has to be customized for some MEI files. Please tell us if there are any other observations!
Best, Maja -------- Original-Nachricht -------- > Datum: Sun, 28 Oct 2012 16:23:31 -0400 > Von: Andrew Hankinson <andrew.hankinson at mail.mcgill.ca> > An: Music Encoding Initiative <mei-l at lists.uni-paderborn.de> > Betreff: Re: [MEI-L] Howto transform 2010-05 to MEI2012_v2.0.0 > Hi Peter, > > I'll answer inline, since we've dealt with this a bit before. Someone can > correct me if I'm wrong, but this should give you a rough start. > > > * accessdesc > > This becomes <accessRestrict /> > > > * altmeiid > > This becomes <meiId /> > > > * blockquote > > This just becomes <quote /> > > > * clefchange > > This can just be converted directly to <clef /> > > > * exhibithist > > For some strange reason this was converted to <exhibHist />. (Not sure why > two characters got lopped off here -- maybe we were running out of our > limit of 256KB of RAM? ;) > > > * extptr > > * extref > > These can be directly converted to <ptr /> and <ref />, since we now use > anyURI. > > > * fingerprint > > This was removed. Not sure if it was ever used by anyone. > > > * keychange > > I *think*, like clefChange, that you can just directly convert this to > <key /> > > > * keywords > > I could find a reference that on 2011 Perry changed this to *something* > but I can't tell what. > > http://code.google.com/p/music-encoding/source/search?q=keywords&origq=keywords&btnG=Search+Trunk > (in-browser search for "keywords") > > <change who="#PR" when="2011-01-13">Renamed <keywords> 'termList' > and added @classcode</change> > > > * pgfoot1 > > * pghead1 > > I think these have just been renamed pgFoot and pgHead. The pgFoot2 and > pgHead2 ones still exist too. 
> > > * profiledesc > > This becomes <workDesc> with an embedded <work> element, in which all the > children of <profiledesc /> are copied. > > > * treatmenthist > > This becomes <treatHist /> > > > * treatmentsched > > This becomes <treatSched /> > > > * @complete > > I found this changelog message that may help you here: > > <change who="#PR" when="2011-06-30">Reworked how metrical conformance is > handled: removed @complete, added att.meterconformance and > att.meterconformance.bar attribute classes in order to provide different levels of > granularity at the measure, staff, and layer levels</change> > > > * @entityref > > * @href > > Not sure about these two, but I think their fates are closely linked... > > > * @label.full > > This just becomes @label > > > * @mediacontent > > This becomes @avref > > > * @medialength > > I think you can just replace this with @end on <avFile />. > > > If anyone can contribute a mapping for one of those, I'd be happy for > any pull request or notification on- or off-list. > > Hope this helps! > > PS: I'll put my Subversion admin hat on here and thank everyone who put in > descriptive commit messages. It makes a request like this so much easier. > > > > > All the best > > Peter > > > > On 26.10.2012 at 18:18, Maja Hartwig <maja.hartwig at gmx.de> wrote: > > > >> Hi Peter, > >> ProfileDesc has changed into workDesc! > >> Did you use some templates of my stylesheet? ;-) > >> > >> Maja > >> > >> On 26.10.2012 at 18:02, Peter Stadler <stadler at edirom.de> wrote: > >> > >>> I couldn't wait ;-) > >>> Here's what I came up with: > https://github.com/peterstadler/MEI-2010to2012 > >>> > >>> But especially profiledesc is giving me headaches -- where has it gone? > >>> > >>> All the best > >>> Peter > >>> > >>> On 25.10.2012 at 16:23, Johannes Kepper <kepper at edirom.de> wrote: > >>> > >>>> Of course we're willing to share them.
But first, I would suggest we > put them on the agenda for the upcoming tech team meeting and see how we > proceed with them. I'm somewhat reluctant to let them enter the wild without > any further review. And adding comments on how they might fail with other > files probably requires about the same amount of work as fixing these > errors. > >>>> > >>>> For those not following the developers' list: the meeting is > scheduled for the first half of November, so this shouldn't take too long. > >>>> > >>>> Johannes > >>>> > >>>> > >>>> On 25.10.2012 at 16:11, Andrew Hankinson > <andrew.hankinson at mail.mcgill.ca> wrote: > >>>> > >>>>> We would also be interested in seeing these, since we have a large > number of non-2012 files that we want to migrate. I did not know about your > internal stylesheets, so we've just started writing our own. > >>>>> > >>>>> Could they be posted "as-is" somewhere (in the SVN?) with > appropriate warnings on them that they're not quite ready for prime time. It would be > nice if we don't duplicate effort in writing converters. > >>>>> > >>>>> -Andrew > >>>>> > >>>>> On 2012-10-25, at 5:55 AM, Johannes Kepper <kepper at edirom.de> > >>>>> wrote: > >>>>> > >>>>>> Hi Peter, > >>>>>> > >>>>>> we have some preliminary internal converters here. They will be > revised and published on the music-encoding.org website when stable enough. > Right now, they more or less cover the subset of MEI used in the sample > collection, but they also do some other cleanup, which might not be appropriate > in all cases. In order to see what's possible, it would be great to know > what your encodings contain. Could you send them to Maja, Kristina or me? > >>>>>> > >>>>>> It's good you're asking on MEI-L -- of course we're willing to > help anyone else with similar problems as well.
> >>>>>> > >>>>>> Best, > >>>>>> Johannes > >>>>>> > >>>>>> > >>>>>> > >>>>>> On 25.10.2012 at 11:16, Peter Stadler <stadler at edirom.de> wrote: > >>>>>> > >>>>>>> Dear all, > >>>>>>> > >>>>>>> probably at least a few of you did some sort of transformation of > MEI files from schema version 2010-05 to MEI2012_v2.0.0. > >>>>>>> Is there an official stylesheet, or is anyone willing to share > his/her code? > >>>>>>> > >>>>>>> Any pointers appreciated! > >>>>>>> > >>>>>>> All the best > >>>>>>> Peter > > > > > > > > > > -- > > Peter Stadler > > Carl-Maria-von-Weber-Gesamtausgabe > > Arbeitsstelle Detmold > > Gartenstr. 20 > > D-32756 Detmold > > Tel. +49 5231 975-665 > > Fax: +49 5231 975-668 > > stadler at weber-gesamtausgabe.de > > www.weber-gesamtausgabe.de > > > > > > _______________________________________________ > > mei-l mailing list > > mei-l at lists.uni-paderborn.de > > https://lists.uni-paderborn.de/mailman/listinfo/mei-l > > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l From roewenstrunk at edirom.de Tue Oct 30 09:59:49 2012 From: roewenstrunk at edirom.de (Daniel Röwenstrunk) Date: Tue, 30 Oct 2012 09:59:49 +0100 Subject: [MEI-L] Connection/relation of multiple parts In-Reply-To: <CAMyHAnNE7X3MF-BZ+aX7hxqf1EmdPeWndcz+vY-7YBc97sgRCg@mail.gmail.com> References: <BC2ADE35-D31D-4E8A-89FB-57921AFC60DE@edirom.de> <EE57BAD2-410C-4FDB-B220-7B3CFEB8759D@edirom.de> <CAMyHAnNE7X3MF-BZ+aX7hxqf1EmdPeWndcz+vY-7YBc97sgRCg@mail.gmail.com> Message-ID: <0FECF574-6E8A-4BA2-A073-2F0E93681744@edirom.de> Hi, from my point of view the header should be some kind of *meta* data to the data itself. In my understanding this means that the data (inside body) should be self-consistent, even if the header is missing. I thought that is why we have staffDef and scoreDef, for example, inside the data part of MEI.
I agree with Raffaele that <perfMedium> doesn't seem to be the right place. At least if you think from a document-centric perspective, and I would argue that having parts in an MEI encoding is most of the time because of a document-centric approach (and I'm pretty sure some of you find examples against this argumentation ;-) ). The idea of using @sameas (which is possible on <part>) doesn't convince me, either. The two Violin 1 <part> elements are not the same; the second is the continuation of the first, right? So maybe @next and @prev are a better choice here. Comments on that? Cheers, Daniel On 29.10.2012 at 22:12, Raffaele Viglianti <raffaeleviglianti at gmail.com> wrote: > Hello, > > I agree with Johannes, the place for encoding this is in the header, using @decls to point back to the text. I struggle to find the right place where to do that though. To me perfMedium steps into a different domain, so I'm not sure about it. > > This is a very interesting problem and I'm curious to read other suggestions. > > Best, > Raffaele > > On Monday, October 29, 2012, Johannes Kepper wrote: > Hi Daniel, > > without access to the schema / guidelines, I'd suggest to describe the performing forces in the header (using <perfMedium>), and then use the @decls attribute to point there from each part / staffDef (or @data in the other direction). Probably not the most convenient way, but right now there is no @sameas or @corresp on staffDef … > > Just my first idea, maybe others will come up with other / better suggestions. > > Best, Jo > > On 29.10.2012 at 17:22, Daniel Röwenstrunk <roewenstrunk at edirom.de> wrote: > >> Hi all, >> >> I'm looking for a good approach to describe the relationship between <part> elements in different movements. 
In my actual encoding of the piece (see a very simplified example below) the only possibility to guess the relationship between the Violin 1 in the first ("mdiv1_part1") and second ("mdiv2_part1") movement is by comparing their label attributes. And that is not really a thing I want to rely on when I programmatically read MEI files. >> >> Is there a better and/or more precise way of encoding the relationship? >> >> >> By the way, why is <mdiv> surrounding <score> or <parts>? Wouldn't it be much more intuitive, at least if you use <mdiv> as containers for movements together with parts, to switch this the other way round? >> >> Cheers and thanks, >> Daniel >> >> >> Encoding example: >> >> <body> >> <mdiv xml:id="mdiv1" label="Allegro"> >> <parts> >> <part xml:id="mdiv1_part1" label="Violin 1"> >> … >> </part> >> <part xml:id="mdiv1_part2" label="Violin 2"> >> … >> </part> >> </parts> >> </mdiv> >> <mdiv xml:id="mdiv2" label="Allegretto"> >> <parts> >> <part xml:id="mdiv2_part1" label="Violin 1"> >> … >> </part> >> <part xml:id="mdiv2_part2" label="Violin 2"> >> … >> </part> >> </parts> >> </mdiv> >> </body> >> >> >> >> -- >> Dipl. Wirt. Inf. Daniel Röwenstrunk >> Project manager >> BMBF-Project "Freischütz Digital" >> >> Musikwiss. Seminar Detmold/Paderborn >> Gartenstr. 20 >> D-32756 Detmold >> >> Tel.: +49 5231 975665 >> Mail: roewenstrunk at edirom.de >> URL: http://www.freischuetz-digital.de >> >> >> >> _______________________________________________ >> mei-l mailing list >> mei-l at lists.uni-paderborn.de >> https://lists.uni-paderborn.de/mailman/listinfo/mei-l > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l -------------- next part -------------- An HTML attachment was scrubbed...
URL: <http://lists.uni-paderborn.de/pipermail/mei-l/attachments/20121030/75e9daec/attachment.html> From raffaeleviglianti at gmail.com Tue Oct 30 12:58:33 2012 From: raffaeleviglianti at gmail.com (Raffaele Viglianti) Date: Tue, 30 Oct 2012 11:58:33 +0000 Subject: [MEI-L] Connection/relation of multiple parts In-Reply-To: <0FECF574-6E8A-4BA2-A073-2F0E93681744@edirom.de> References: <BC2ADE35-D31D-4E8A-89FB-57921AFC60DE@edirom.de> <EE57BAD2-410C-4FDB-B220-7B3CFEB8759D@edirom.de> <CAMyHAnNE7X3MF-BZ+aX7hxqf1EmdPeWndcz+vY-7YBc97sgRCg@mail.gmail.com> <0FECF574-6E8A-4BA2-A073-2F0E93681744@edirom.de> Message-ID: <CAMyHAnMgrM18aSfWxNh1ForQxyVJTRiwLf24wjGNA1YxWzUECQ@mail.gmail.com> Well, although I agree in principle, there can be substantial information in the header upon which the body depends. Think of source descriptions and their essential role in making sense of @source attributes across the document, or rendition elements in TEI for @rend, etc. In this context, I don't see a problem with encoding the relations between parts in the header, similarly to how one would encode the relations between sources. Nonetheless, @prev and @next may be a quick and perfectly valid solution to this problem. Best, Raffaele On Tue, Oct 30, 2012 at 8:59 AM, Daniel Röwenstrunk <roewenstrunk at edirom.de> wrote: > Hi, > > from my point of view the header should be some kind of *meta* data to the > data itself. In my understanding this means that the data (inside body) > should be self-consistent, even if the header is missing. I thought that is > why we have staffDef and scoreDef, for example, inside the data part of MEI. > > I agree with Raffaele that <perfMedium> doesn't seem to be the right > place. At least if you think from a document-centric perspective, and I > would argue that having parts in an MEI encoding is most of the time > because of a document-centric approach (and I'm pretty sure some of you > find examples against this argumentation ;-) ). 
> > The idea of using @sameas (which is possible on <part>) doesn't convince > me, either. The two Violin 1 <part> elements are not the same; the second > is the continuation of the first, right? So maybe @next and @prev are > a better choice here. Comments on that? > > Cheers, Daniel > > > > On 29.10.2012 at 22:12, Raffaele Viglianti < > raffaeleviglianti at gmail.com> wrote: > > Hello, > > I agree with Johannes, the place for encoding this is in the header, > using @decls to point back to the text. I struggle to find the right place > where to do that though. To me perfMedium steps into a different domain, so > I'm not sure about it. > > This is a very interesting problem and I'm curious to read other > suggestions. > > Best, > Raffaele > > On Monday, October 29, 2012, Johannes Kepper wrote: > >> Hi Daniel, >> >> without access to the schema / guidelines, I'd suggest to describe the >> performing forces in the header (using <perfMedium>), and then use the >> @decls attribute to point there from each part / staffDef (or @data in the >> other direction). Probably not the most convenient way, but right now there >> is no @sameas or @corresp on staffDef … >> >> Just my first idea, maybe others will come up with other / better >> suggestions. >> >> Best, Jo >> >> On 29.10.2012 at 17:22, Daniel Röwenstrunk <roewenstrunk at edirom.de >> > wrote: >> >> Hi all, >> >> I'm looking for a good approach to describe the relationship between >> <part> elements in different movements. In my actual encoding of the piece >> (see a very simplified example below) the only possibility to guess the >> relationship between the Violin 1 in the first ("mdiv1_part1") and second >> ("mdiv2_part1") movement is by comparing their label attributes. And that >> is not really a thing I want to rely on when I programmatically read MEI >> files. >> >> Is there a better and/or more precise way of encoding the relationship? >> >> >> By the way, why is <mdiv> surrounding <score> or <parts>? 
Wouldn't it be >> much more intuitive, at least if you use <mdiv> as containers for movements >> together with parts, to switch this the other way round? >> >> Cheers and thanks, >> Daniel >> >> >> Encoding example: >> >> <body> >> <mdiv xml:id="mdiv1" label="Allegro"> >> <parts> >> <part xml:id="mdiv1_part1" label="Violin 1"> >> … >> </part> >> <part xml:id="mdiv1_part2" label="Violin 2"> >> … >> </part> >> </parts> >> </mdiv> >> <mdiv xml:id="mdiv2" label="Allegretto"> >> <parts> >> <part xml:id="mdiv2_part1" label="Violin 1"> >> … >> </part> >> <part xml:id="mdiv2_part2" label="Violin 2"> >> … >> </part> >> </parts> >> </mdiv> >> </body> >> >> >> >> -- >> Dipl. Wirt. Inf. Daniel Röwenstrunk >> Project manager >> BMBF-Project "Freischütz Digital" >> >> Musikwiss. Seminar Detmold/Paderborn >> Gartenstr. 20 >> D-32756 Detmold >> >> Tel.: +49 5231 975665 >> Mail: roewenstrunk at edirom.de >> URL: <http://www.freischuetz-digital.de/> >> http://www.freischuetz-digital.de >> >> >> >> _______________________________________________ >> mei-l mailing list >> mei-l at lists.uni-paderborn.de >> https://lists.uni-paderborn.de/mailman/listinfo/mei-l >> >> _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l > > > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l > > -------------- next part -------------- An HTML attachment was scrubbed...
URL: <http://lists.uni-paderborn.de/pipermail/mei-l/attachments/20121030/40943841/attachment.html> From stadler at edirom.de Tue Oct 30 14:19:56 2012 From: stadler at edirom.de (Peter Stadler) Date: Tue, 30 Oct 2012 14:19:56 +0100 Subject: [MEI-L] Schema version 2012 changed content model for <creation> Message-ID: <96083505-B7BF-4384-AFAE-F09511B098BE@edirom.de> Dear all, the content model of <creation> has changed with the new schema version 2012, allowing only text, <date> and <geogName>. In our 2010-05 files I find examples such as: <creation> <p>Zum Namenstag von <persname dbkey="A000537">König Friedrich August I. von Sachsen</persname> (Friedrichstag) am <date reg="1818-03-05">5. März 1818</date>.</p> <p>Vollendet am <date reg="1818-02-23">23. Februar 1818</date>.</p> </creation> This means I'd have to throw away the information about paragraphs as well as about persnames if I were to transform this element to the new schema. While the description for <creation> says it should be in "narrative form", I can't imagine why the content model should be that strict?! As a content model I'd probably favor just p+ Comments? All the best Peter -- Peter Stadler Carl-Maria-von-Weber-Gesamtausgabe Arbeitsstelle Detmold Gartenstr. 20 D-32756 Detmold Tel. +49 5231 975-665 Fax: +49 5231 975-668 stadler at weber-gesamtausgabe.de www.weber-gesamtausgabe.de From atge at kb.dk Tue Oct 30 14:27:06 2012 From: atge at kb.dk (Axel Teich Geertinger) Date: Tue, 30 Oct 2012 13:27:06 +0000 Subject: [MEI-L] Schema version 2012 changed content model for <creation> In-Reply-To: <96083505-B7BF-4384-AFAE-F09511B098BE@edirom.de> References: <96083505-B7BF-4384-AFAE-F09511B098BE@edirom.de> Message-ID: <0B6F63F59F405E4C902DFE2C2329D0D1514DF436@EXCHANGE-02.kb.dk> Hi Peter, <creation> is now wrapped in the new element <history>, which does allow <p>, so what you can do is move the paragraphs up one level, i.e. out of <creation> and into a more general description of the historical context.
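The migration described here (lifting the <p> children out of <creation> into the surrounding <history>) can be sketched mechanically. A minimal sketch in Python with xml.etree, namespaces and the remaining attributes omitted for brevity; an illustration of the idea, not part of any official stylesheet:

```python
import xml.etree.ElementTree as ET

def lift_paragraphs(creation_xml):
    """Wrap <creation> in <history> and move its <p> children up one
    level, so they become part of the general historical description."""
    creation = ET.fromstring(creation_xml)
    history = ET.Element("history")
    history.append(creation)
    for p in list(creation.findall("p")):
        creation.remove(p)   # <p> is no longer allowed inside <creation>
        history.append(p)    # ...but <history> does allow it
    return ET.tostring(history, encoding="unicode")

src = ('<creation><p>Completed on '
       '<date reg="1818-02-23">23 February 1818</date>.</p></creation>')
print(lift_paragraphs(src))
```

The inline <date> (and, with a richer map, <persname>) survives the move, which is exactly the information Peter was worried about losing.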
Best wishes, Axel -----Original message----- From: mei-l-bounces at lists.uni-paderborn.de [mailto:mei-l-bounces at lists.uni-paderborn.de] On behalf of Peter Stadler Sent: 30 October 2012 14:20 To: Music Encoding Initiative Subject: [MEI-L] Schema version 2012 changed content model for <creation> Dear all, the content model of <creation> has changed with the new schema version 2012, allowing only text, <date> and <geogName>. In our 2010-05 files I find examples such as: <creation> <p>Zum Namenstag von <persname dbkey="A000537">König Friedrich August I. von Sachsen</persname> (Friedrichstag) am <date reg="1818-03-05">5. März 1818</date>.</p> <p>Vollendet am <date reg="1818-02-23">23. Februar 1818</date>.</p> </creation> This means I'd have to throw away the information about paragraphs as well as about persnames if I were to transform this element to the new schema. While the description for <creation> says it should be in "narrative form", I can't imagine why the content model should be that strict?! As a content model I'd probably favor just p+ Comments? All the best Peter -- Peter Stadler Carl-Maria-von-Weber-Gesamtausgabe Arbeitsstelle Detmold Gartenstr. 20 D-32756 Detmold Tel. +49 5231 975-665 Fax: +49 5231 975-668 stadler at weber-gesamtausgabe.de www.weber-gesamtausgabe.de _______________________________________________ mei-l mailing list mei-l at lists.uni-paderborn.de https://lists.uni-paderborn.de/mailman/listinfo/mei-l From stadler at edirom.de Tue Oct 30 14:42:14 2012 From: stadler at edirom.de (Peter Stadler) Date: Tue, 30 Oct 2012 14:42:14 +0100 Subject: [MEI-L] Schema version 2012 changed content model for <creation> In-Reply-To: <0B6F63F59F405E4C902DFE2C2329D0D1514DF436@EXCHANGE-02.kb.dk> References: <96083505-B7BF-4384-AFAE-F09511B098BE@edirom.de> <0B6F63F59F405E4C902DFE2C2329D0D1514DF436@EXCHANGE-02.kb.dk> Message-ID: <CE3CC40B-D46F-4A5F-A9F9-9DBEB4C719A5@edirom.de> Well, yes, technically possible … 
And probably the road I'll take -- thanks for the hint. Nevertheless, my feeling is that the 2012 <creation> would fit better as a structural container for simply *the* date (and place) of creation rather than a "narrative form" -- which goes one level up. Many thanks again Peter On 30.10.2012 at 14:27, Axel Teich Geertinger <atge at kb.dk> wrote: > Hi Peter, > > <creation> is now wrapped in the new element <history>, which does allow <p>, so what you can do is move the paragraphs up one level, i.e. out of <creation> and into a more general description of the historical context. > > Best wishes, > Axel > > > > -----Original message----- > From: mei-l-bounces at lists.uni-paderborn.de [mailto:mei-l-bounces at lists.uni-paderborn.de] On behalf of Peter Stadler > Sent: 30 October 2012 14:20 > To: Music Encoding Initiative > Subject: [MEI-L] Schema version 2012 changed content model for <creation> > > Dear all, > > the content model of <creation> has changed with the new schema version 2012, allowing only text, <date> and <geogName>. > In our 2010-05 files I find examples such as: > <creation> > <p>Zum Namenstag von <persname dbkey="A000537">König Friedrich August I. von Sachsen</persname> (Friedrichstag) am <date reg="1818-03-05">5. März 1818</date>.</p> > <p>Vollendet am <date reg="1818-02-23">23. Februar 1818</date>.</p> </creation> This means I'd have to throw away the information about paragraphs as well as about persnames if I were to transform this element to the new schema. While the description for <creation> says it should be in "narrative form", I can't imagine why the content model should be that strict?! > As a content model I'd probably favor just p+ > > Comments? > All the best > Peter > -- Peter Stadler Carl-Maria-von-Weber-Gesamtausgabe Arbeitsstelle Detmold Gartenstr. 20 D-32756 Detmold Tel.
+49 5231 975-665 Fax: +49 5231 975-668 stadler at weber-gesamtausgabe.de www.weber-gesamtausgabe.de From bohl at edirom.de Tue Oct 30 15:48:06 2012 From: bohl at edirom.de (Benjamin Wolff Bohl) Date: Tue, 30 Oct 2012 15:48:06 +0100 Subject: [MEI-L] Connection/relation of multiple parts In-Reply-To: <CAMyHAnMgrM18aSfWxNh1ForQxyVJTRiwLf24wjGNA1YxWzUECQ@mail.gmail.com> References: <BC2ADE35-D31D-4E8A-89FB-57921AFC60DE@edirom.de> <EE57BAD2-410C-4FDB-B220-7B3CFEB8759D@edirom.de> <CAMyHAnNE7X3MF-BZ+aX7hxqf1EmdPeWndcz+vY-7YBc97sgRCg@mail.gmail.com> <0FECF574-6E8A-4BA2-A073-2F0E93681744@edirom.de> <CAMyHAnMgrM18aSfWxNh1ForQxyVJTRiwLf24wjGNA1YxWzUECQ@mail.gmail.com> Message-ID: <772A1C2B-AD3C-4D1B-BF06-294846AF79ED@edirom.de> Hi everybody, On 30.10.2012 at 12:58, Raffaele Viglianti wrote: > Well, although I agree in principle, there can be substantial information in the header upon which the body depends. Think of source descriptions and their essential role in making sense of @source attributes across the document, This is true, although even without the resolution of the IDREFs in @source a proper logical tree following a certain source can be generated, as the corresponding source element only supplies meta information on the source, not on the actual music content in body. > or rendition elements in TEI for @rend, etc. In this context, I don't see a problem with encoding the relations between parts in the header similarly to how one would encode the relations between sources. Rendition elements, likewise, do not add information concerning the logical structure of the body; rather, they carry visualization/rendering info. The case with the mei:mdivs would be comparable to adding a sort of TOC in the header, which would then be used to bring "unordered" mei:part elements into the order of the intended output, for example the complete violin part of a work. Applied to the TEI example, that would imply a sort of stand-off TOC to order your chapters (div elements) resp. 
sub-chapters. Leading back to Daniel's question: >>> By the way, why is <mdiv> surrounding <score> or <parts>? Wouldn't it be much more intuitive, at least if you use <mdiv> as containers for movements together with parts, to switch this the other way round? This brings up an interesting question: with a score-element encoding you could "line up" all measures in document order and would get a perfectly ordered continuous score resembling the source. With a parts encoding, by contrast, a similar mechanism stacking all parts would again lead to a score, not to part extracts resembling a collection of performance parts. But isn't the parts element intended for encoding such parts in the source? If so, I would support Daniel's idea of switching the hierarchy in <parts> to part/mdiv/measure. > I agree with Raffaele that <perfMedium> doesn't seem to be the right place. At least if you think from a document-centric perspective, and I would argue that having parts in an MEI encoding is most of the time because of a document-centric approach (and I'm pretty sure some of you find examples against this argumentation ;-) ). Using the already proposed @decls to point to the header, I don't understand why you would not use the children of perfMedium? The nested instrumentation/instrVoice elements could then provide a standardized label for the part and its staves. > > Nonetheless, @prev and @next may be a quick and perfectly valid solution to this problem. Seems plausible, quick and valid, although the presence of these attributes should then be best-practice MEI in case you have more than one mdiv element. When processing the MEI, you would nevertheless need to check every (following::) part element for its ID until you find the matching one, which could be nested deep inside other mdivs.
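The lookup cost Benjamin describes can be seen in a sketch: resolving a @next chain means searching all <part> elements in the document for the referenced xml:id. A minimal Python illustration (the @next usage is the proposal under discussion, not an established MEI idiom, and this is not an MEI library):

```python
import xml.etree.ElementTree as ET

XML_ID = "{http://www.w3.org/XML/1998/namespace}id"

def follow_part(tree, start_id):
    """Collect a part and its continuations by following @next links.
    Every <part> in the document is indexed first, reflecting the
    full-document search Benjamin points out."""
    parts = {p.get(XML_ID): p for p in tree.iter("part")}
    chain, cur = [], parts.get(start_id)
    while cur is not None:
        cid = cur.get(XML_ID)
        if cid in chain:          # guard against accidental @next cycles
            break
        chain.append(cid)
        nxt = cur.get("next")
        cur = parts.get(nxt.lstrip("#")) if nxt else None
    return chain

doc = ET.fromstring(
    '<body>'
    '<mdiv><parts><part xml:id="mdiv1_part1" next="#mdiv2_part1"/></parts></mdiv>'
    '<mdiv><parts><part xml:id="mdiv2_part1"/></parts></mdiv>'
    '</body>'
)
print(follow_part(doc, "mdiv1_part1"))  # the Violin 1 chain across movements
```

Building the id-to-element index once, as above, avoids re-scanning the whole tree for every link in the chain.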
> > Best, > Raffaele > > On Tue, Oct 30, 2012 at 8:59 AM, Daniel Röwenstrunk <roewenstrunk at edirom.de> wrote: > Hi, > > from my point of view the header should be some kind of *meta* data to the data itself. In my understanding this means that the data (inside body) should be self-consistent, even if the header is missing. I thought that is why we have staffDef and scoreDef, for example, inside the data part of MEI. > > I agree with Raffaele that <perfMedium> doesn't seem to be the right place. At least if you think from a document-centric perspective, and I would argue that having parts in an MEI encoding is most of the time because of a document-centric approach (and I'm pretty sure some of you find examples against this argumentation ;-) ). > > The idea of using @sameas (which is possible on <part>) doesn't convince me, either. The two Violin 1 <part> elements are not the same; the second is the continuation of the first, right? So maybe @next and @prev are a better choice here. Comments on that? > > Cheers, Daniel > > > > On 29.10.2012 at 22:12, Raffaele Viglianti <raffaeleviglianti at gmail.com> wrote: > >> Hello, >> >> I agree with Johannes, the place for encoding this is in the header, using @decls to point back to the text. I struggle to find the right place where to do that though. To me perfMedium steps into a different domain, so I'm not sure about it. >> >> This is a very interesting problem and I'm curious to read other suggestions. >> >> Best, >> Raffaele >> >> On Monday, October 29, 2012, Johannes Kepper wrote: >> Hi Daniel, >> >> without access to the schema / guidelines, I'd suggest to describe the performing forces in the header (using <perfMedium>), and then use the @decls attribute to point there from each part / staffDef (or @data in the other direction). Probably not the most convenient way, but right now there is no @sameas or @corresp on staffDef … >> >> Just my first idea, maybe others will come up with other / better suggestions. 
>> >> Best, Jo >> >> On 29.10.2012 at 17:22, Daniel Röwenstrunk <roewenstrunk at edirom.de> wrote: >> >>> Hi all, >>> >>> I'm looking for a good approach to describe the relationship between <part> elements in different movements. In my actual encoding of the piece (see a very simplified example below) the only possibility to guess the relationship between the Violin 1 in the first ("mdiv1_part1") and second ("mdiv2_part1") movement is by comparing their label attributes. And that is not really a thing I want to rely on when I programmatically read MEI files. >>> >>> Is there a better and/or more precise way of encoding the relationship? >>> >>> >>> By the way, why is <mdiv> surrounding <score> or <parts>? Wouldn't it be much more intuitive, at least if you use <mdiv> as containers for movements together with parts, to switch this the other way round? >>> >>> Cheers and thanks, >>> Daniel >>> >>> >>> Encoding example: >>> >>> <body> >>> <mdiv xml:id="mdiv1" label="Allegro"> >>> <parts> >>> <part xml:id="mdiv1_part1" label="Violin 1"> >>> … >>> </part> >>> <part xml:id="mdiv1_part2" label="Violin 2"> >>> … >>> </part> >>> </parts> >>> </mdiv> >>> <mdiv xml:id="mdiv2" label="Allegretto"> >>> <parts> >>> <part xml:id="mdiv2_part1" label="Violin 1"> >>> … >>> </part> >>> <part xml:id="mdiv2_part2" label="Violin 2"> >>> … >>> </part> >>> </parts> >>> </mdiv> >>> </body> >>> >>> >>> >>> -- >>> Dipl. Wirt. Inf. Daniel Röwenstrunk >>> Project manager >>> BMBF-Project "Freischütz Digital" >>> >>> Musikwiss. Seminar Detmold/Paderborn >>> Gartenstr.
20 >>> D-32756 Detmold >>> >>> Tel.: +49 5231 975665 >>> Mail: roewenstrunk at edirom.de >>> URL: http://www.freischuetz-digital.de >>> >>> >>> >>> _______________________________________________ >>> mei-l mailing list >>> mei-l at lists.uni-paderborn.de >>> https://lists.uni-paderborn.de/mailman/listinfo/mei-l >> _______________________________________________ >> mei-l mailing list >> mei-l at lists.uni-paderborn.de >> https://lists.uni-paderborn.de/mailman/listinfo/mei-l > > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l > > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l -------------- next part -------------- An HTML attachment was scrubbed... URL: <http://lists.uni-paderborn.de/pipermail/mei-l/attachments/20121030/cb76f420/attachment.html> From kepper at edirom.de Sat Nov 3 00:19:55 2012 From: kepper at edirom.de (Johannes Kepper) Date: Fri, 2 Nov 2012 18:19:55 -0500 Subject: [MEI-L] page sizes Message-ID: <D2A91356-8368-4ECC-82BA-FA832A515EA3@edirom.de> Dear all, during discussions at the AMS meeting in New Orleans, we came up with a couple of issues we'd like to discuss. The current definition of page sizes expressed in real-world units, and its relationship to musical units, is less than ideal. In particular, the @page.scale attribute on <scoreDef> causes some confusion. Right now, it allows the relationship to be expressed in the format "1:1.5", but "50%" is also allowed. The directions for scaling are not absolutely clear from the current documentation. Rather than explaining the current situation, I would like to introduce our proposal for a better definition: scoreDef/@page.units (pre-existing, but maybe better called @page.unit) will define the real-world unit used to describe pages. It allows 'cm', 'in', 'mm'.
scoreDef/@page.rightmar and other margins take decimal numbers and will use the unit defined in @page.units to describe all kinds of margins on the page. This explicitly includes the distance between two staves, independent of their size (see below). scoreDef/@interline.size will replace the current @page.scale. It will hold the decimal number of real-world units (using @page.units) matching one interline distance. This interline distance (which is already used by MEI) is a musical unit which describes half the distance between two staff lines, or, using a different description, the distance a note head moves when stepping up a second. For instance, an @interline.size of "1.2" and a @page.unit of "mm" would say that the distance between two staff lines is 2.4mm, or the full height of the staff is 9.6mm. This assumes, of course, that the distance is measured from the middle of the individual lines and disregards the thickness of the lines themselves. The guidelines don't say anything about this yet, but at least to me, it seems easier to leave that out of the calculation. If we really want to say something about the thickness of the line, it can be done later on using a different attribute (the thickness might well vary almost everywhere for manually drawn staff lines). It is open for discussion to what degree this interline distance can be made the default in certain contexts. For instance, the description of @width says it needs the value of @unit, specified on the same element, in order to be processed correctly. This means that every measure specifying its width also needs to specify the corresponding unit. While interline distances might be a reasonable default here, they are certainly not for page/@width. This is clearly a separate issue, but it is closely related, and should be given additional consideration. For cue-sized staves, the existing staffDef/@scale can be used to specify the size as a percentage of the default.
The data type for this should also allow the value "cue" for cases where you don't want to be extremely precise about the exact size of the staff. Then again, a new scoreDef/@cuesize could be used to specify a default for cue-sized staves. Considering the question of scope (does this affect the size of cue notes in addition to cue staves?), one could make an argument that it should be @cuesize.staff and @cuesize.note instead. It also should be made clear in the guidelines that staffDef/@scale does not affect margins at all, as they should always be measured in real-world units (see above). It would be great to get some feedback on all this, especially from people with engraving experience … Best regards, Laurent, Perry and Johannes From andrew.hankinson at mail.mcgill.ca Sat Nov 3 01:33:18 2012 From: andrew.hankinson at mail.mcgill.ca (Andrew Hankinson) Date: Fri, 2 Nov 2012 20:33:18 -0400 Subject: [MEI-L] page sizes In-Reply-To: <16763_1351898411_5094552A_16763_59_1_D2A91356-8368-4ECC-82BA-FA832A515EA3@edirom.de> References: <16763_1351898411_5094552A_16763_59_1_D2A91356-8368-4ECC-82BA-FA832A515EA3@edirom.de> Message-ID: <E323666E-53AB-4F0B-808D-4269EE3D90D7@mail.mcgill.ca> The concept of "physical" unit doesn't really translate well to editions that are meant for digital consumption only. If I have a page meant for a tablet or digital music stand display, what does the "inch" unit mean? Does it mean render it as a physical inch on the screen, regardless of how many pixels it takes to represent it? Or does it mean render it using a fixed number of pixels per inch, regardless of how large or small it makes it from one display to another? E-ink displays challenge this concept, since they don't really have pixels, and high-resolution displays also challenge it, since the number of pixels it takes to represent a single physical unit can be completely different.
So we'll probably need some sort of proportional unit so that we can say that the page margin is a percentage of the rendered display rather than a fixed unit of physical measurement. I would suggest looking at the CSS3 documentation for how they deal with proportional and physical sizes, since they support both. Apparently their baseline unit is "ch", which is the width of the "0" when rendered in a given font. That seems to be a good analogue to the inter-staff space measurement. Also relevant to this discussion, in case it was forgotten: http://code.google.com/p/music-encoding/issues/detail?id=73 -Andrew On 2012-11-02, at 7:19 PM, Johannes Kepper <kepper at edirom.de> wrote: > Dear all, > > during discussions at the AMS meeting in New Orleans, we came up with a couple of issues we'd like to discuss. > > The current definition of page sizes expressed in real-world units and the relationship to musical units is less than ideal. Especially, the @page.scale attribute on <scoreDef> causes some confusion. Right now, it allows expression of the relationship in the format "1:1.5", but "50%" is also allowed. The directions for scaling are not absolutely clear from the current documentation. > > Rather than explaining the current situation, I would like to introduce our proposal for a better definition: > > scoreDef/@page.units (pre-existing, but maybe better called @page.unit) will define the real-world unit used to describe pages. It allows 'cm', 'in', 'mm'. > > scoreDef/@page.rightmar and other margins take decimal numbers and will use the unit defined in @page.units to describe all kinds of margins on the page. This explicitly includes the distance between two staves, independent of their size (see below). > > scoreDef/@interline.size will replace the current @page.scale. It will hold the decimal number of real-world units (using @page.units) matching one interline distance. 
This interline distance (which is already used by MEI) is a musical unit which describes half the distance between two staff lines, or using a different description, the distance a note head moves when stepping up a second. For instance, an @interline.size of "1.2" and a @page.unit of "mm" would say that the distance between two staff lines is 2.4mm, or the full height of the staff is 9.6mm. This assumes, of course, that the distance is measured from the middle of the individual lines and disregards the thickness of the lines themselves. The guidelines don't say anything about this yet, but at least to me, it seems easier to leave that out of the calculation. If we really want to say something about the thickness of the line, it can be done later on using a different attribute (eventually, the thickness might change almost everywhere for manually drawn staff lines). > > It is open for discussion to what degree this interline distance can be made the default in certain contexts. For instance, the description of @width says it needs the value of @unit, specified on the same element, in order to be processed correctly. This means that every measure specifying its width also needs to specify the corresponding unit. While interline distances might be a reasonable default here, it is certainly not for page/@width. This is clearly a separate issue, but it is closely related, and should be given additional consideration. > > For cue-sized staves, the existing staffDef/@scale can be used to specify the size as a percentage of the default. The data type for this should also allow the value "cue" for cases where you don't want to be extremely precise about the exact size of the staff. Then again, a new scoreDef/@cuesize could be used to specify a default for cue-sized staves. Considering the question of scope (does this affect the size of cue notes in addition to cue staves?), one could make an argument that it should be @cuesize.staff and @cuesize.note instead. 
> > It also should be made clear in the guidelines that the staffDef/@scale does not affect margins at all, as they should always be measured in real-world units (see above). > > It would be great to get some feedback on all this, especially from people with engraving experience? > > Best regards, > Laurent, Perry and Johannes > > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l From craigsapp at gmail.com Sat Nov 3 10:03:38 2012 From: craigsapp at gmail.com (Craig Sapp) Date: Sat, 3 Nov 2012 04:03:38 -0500 Subject: [MEI-L] page sizes In-Reply-To: <E323666E-53AB-4F0B-808D-4269EE3D90D7@mail.mcgill.ca> References: <16763_1351898411_5094552A_16763_59_1_D2A91356-8368-4ECC-82BA-FA832A515EA3@edirom.de> <E323666E-53AB-4F0B-808D-4269EE3D90D7@mail.mcgill.ca> Message-ID: <CAPcjuFe4R2bFEy721328sA9vyJ26O0dxH6-cJCYbSqepBO4eoA@mail.gmail.com> Hi Everyone, Here is a preliminary draft of notes I have been taking during a detailed study of how SCORE places staff lines on the page. This analysis is related to the thread topic (and hopefully the attachment passes unscathed through the list system). Score positioning data can be used to address an exact physical location on the page. In SCORE there is a complex interaction between physical and non-physical units which are designed to maximize flexibility in editing music notation (as opposed to describing physical layout of a pre-existing graphical image of notation). When working with music inside of the SCORE editor, physical units are not consciously dealt with. The physical units are defined at the last minute when printing to EPS files. But basically it is just a scaling factor which converts the SCORE data into physical units. I don't go into the placement objects other than staves. For other objects, they all must first be attached to a particular staff. 
The staff number forms part of the vertical placement of all objects on the page. To place a note somewhere on the page, you must first place it on a staff, and then you add an additional vertical offset relative to the staff. So to calculate where a note is positioned vertically on a page, you need to know several things, sorted from local to global positioning information:

(1) vertical offset of the note on the staff in terms of diatonic steps from the staff origin (3 = bottom line of standard staff)
(2) vertical offset of the staff from its default position
(3) scaling of the staff (staff size)
(4) bottom margin of the page (offset from default staff positions)
(5) scaling of the page (excluding margins)

One important concept of SCORE placement is that the physical page size is actually irrelevant when printing SCORE data to an EPS file. In other words, there is no concept of a top or right margin, and everything is ultimately referenced to the bottom left corner as the origin of the coordinate system for the page.

I'll look at Johannes' email in more detail and maybe comment later, but the attached study involves how to convert SCORE's internal positioning information onto a physical page. My intention is to go directly from SCORE to a TIFF image (or directly to SCREEN from Andrew's point of view), and get a pixel-identical image as if it were first converted to Postscript in SCORE then to TIFF via GhostScript. I am the primary target audience for the description at the moment, but if someone actually reads it and needs clarification, that would be great to add to the documentation.

-=+Craig

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.uni-paderborn.de/pipermail/mei-l/attachments/20121103/1787bbaa/attachment.html>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: SCOREPhysicalStaffPositions-20121023.pdf Type: application/pdf Size: 1138037 bytes Desc: not available URL: <http://lists.uni-paderborn.de/pipermail/mei-l/attachments/20121103/1787bbaa/attachment.pdf> From lxpugin at gmail.com Sun Nov 4 23:22:53 2012 From: lxpugin at gmail.com (Laurent Pugin) Date: Sun, 4 Nov 2012 16:22:53 -0600 Subject: [MEI-L] Search Message-ID: <CAJ306HYMNn=iPrkD93RzYZWBuE2kh5wT+o16Q6n-TF+t6s6tCw@mail.gmail.com> Hi, Is there a way to search the mailing list archive? The only resource I found are files to download. https://lists.uni-paderborn.de/pipermail/mei-l/ Thanks, Laurent -------------- next part -------------- An HTML attachment was scrubbed... URL: <http://lists.uni-paderborn.de/pipermail/mei-l/attachments/20121104/349425a4/attachment.html> From kepper at edirom.de Mon Nov 5 00:49:45 2012 From: kepper at edirom.de (Johannes Kepper) Date: Sun, 4 Nov 2012 17:49:45 -0600 Subject: [MEI-L] Search In-Reply-To: <CAJ306HYMNn=iPrkD93RzYZWBuE2kh5wT+o16Q6n-TF+t6s6tCw@mail.gmail.com> References: <CAJ306HYMNn=iPrkD93RzYZWBuE2kh5wT+o16Q6n-TF+t6s6tCw@mail.gmail.com> Message-ID: <2308D774-B8C9-4E43-B411-7DDD6CB4DBE5@edirom.de> Hi Laurent, I'm afraid that Paderborn's list server doesn't offer more than that. I know that TEI-L is mirrored on some more accessible listservs. Basically all mails are still served by the original list, but would also be archived somewhere else. I don't know the software they use right now, though. Maybe that's something we want to offer for MEI-L as well. Opinions? Johannes Am 04.11.2012 um 16:22 schrieb Laurent Pugin <lxpugin at gmail.com>: > Hi, > > Is there a way to search the mailing list archive? The only resource I found are files to download. 
> https://lists.uni-paderborn.de/pipermail/mei-l/ > > Thanks, > Laurent > > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l From craigsapp at gmail.com Mon Nov 5 01:38:28 2012 From: craigsapp at gmail.com (Craig Sapp) Date: Sun, 4 Nov 2012 16:38:28 -0800 Subject: [MEI-L] Search In-Reply-To: <2308D774-B8C9-4E43-B411-7DDD6CB4DBE5@edirom.de> References: <CAJ306HYMNn=iPrkD93RzYZWBuE2kh5wT+o16Q6n-TF+t6s6tCw@mail.gmail.com> <2308D774-B8C9-4E43-B411-7DDD6CB4DBE5@edirom.de> Message-ID: <CAPcjuFe2ZucwpWHxVQCuM25TANfKCF07vRaKhXie-G9RbiPNfQ@mail.gmail.com> Hi Johannes, On Sun, Nov 4, 2012 at 3:49 PM, Johannes Kepper <kepper at edirom.de> wrote: > > I'm afraid that Paderborn's list server doesn't offer more than that. I know that TEI-L is mirrored on some more accessible listservs. Basically all mails are still served by the original list, but would also be archived somewhere else. I don't know the software they use right now, though. Maybe that's something we want to offer for MEI-L as well. Opinions? My opinion is that lists should be hosted by Google Groups. In particular this allows for a web interface to the posting which is searchable. Here is the one I set up for Humdrum a few years ago which has all postings since the first one in July 2009: https://groups.google.com/forum/?fromgroups#!forum/starstarhug Google Groups allows many configurations such as public/private, allows for joining by anyone or by invitation, allows posts to be moderated or open. For **HUG (Humdrum Users Group), I allow anyone to join. New members' posts are moderated, and when their first post is non-spam, I promote them to a full unmoderated member. 
I created an mei-l group: https://groups.google.com/forum/?fromgroups#!forum/mei-l which I can setup further if that is of interest (or this list could perhaps subscribe to the Paderborn one which would in effect allow for archiving and searchability of the current list). The group can be posted to online from that webpage, or via email from: starstarhug at googlegroups.com -=+Craig From zupftom at googlemail.com Mon Nov 5 07:40:35 2012 From: zupftom at googlemail.com (TW) Date: Mon, 5 Nov 2012 07:40:35 +0100 Subject: [MEI-L] Search In-Reply-To: <CAJ306HYMNn=iPrkD93RzYZWBuE2kh5wT+o16Q6n-TF+t6s6tCw@mail.gmail.com> References: <CAJ306HYMNn=iPrkD93RzYZWBuE2kh5wT+o16Q6n-TF+t6s6tCw@mail.gmail.com> Message-ID: <CAEB1mAoBmVmb6bajtw53Tj6ib_CN2JD0TV7wBFmzVArqSuvu8A@mail.gmail.com> Hi Laurent, I'm using Google to search the list, like so: site:lists.uni-paderborn.de/pipermail/mei-l "layout module" Thomas 2012/11/4 Laurent Pugin <lxpugin at gmail.com>: > Is there a way to search the mailing list archive? The only resource I found > are files to download. > https://lists.uni-paderborn.de/pipermail/mei-l/ > > Thanks, > Laurent > > > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l > From laurent at music.mcgill.ca Mon Nov 5 14:52:28 2012 From: laurent at music.mcgill.ca (Laurent Pugin) Date: Mon, 5 Nov 2012 08:52:28 -0500 Subject: [MEI-L] Search In-Reply-To: <12212_1352097651_50975F73_12212_14_1_CAEB1mAoBmVmb6bajtw53Tj6ib_CN2JD0TV7wBFmzVArqSuvu8A@mail.gmail.com> References: <CAJ306HYMNn=iPrkD93RzYZWBuE2kh5wT+o16Q6n-TF+t6s6tCw@mail.gmail.com> <12212_1352097651_50975F73_12212_14_1_CAEB1mAoBmVmb6bajtw53Tj6ib_CN2JD0TV7wBFmzVArqSuvu8A@mail.gmail.com> Message-ID: <CAJ306Ha-QfKOOpAFuUANPSZL82fYbywg+Lf+v3t0ANwQ1b1nzg@mail.gmail.com> Hi Thomas, Thanks for this, it does do the trick. Nonetheless, having it searchable in a more easy way would be good, I think. 
Isn't this a feature we can switch on with the system we use? Laurent On Mon, Nov 5, 2012 at 1:40 AM, TW <zupftom at googlemail.com> wrote: > Hi Laurent, > > I'm using Google to search the list, like so: > > site:lists.uni-paderborn.de/pipermail/mei-l "layout module" > > Thomas > > > 2012/11/4 Laurent Pugin <lxpugin at gmail.com>: > > Is there a way to search the mailing list archive? The only resource I > found > > are files to download. > > https://lists.uni-paderborn.de/pipermail/mei-l/ > > > > Thanks, > > Laurent > > > > > > > > _______________________________________________ > > mei-l mailing list > > mei-l at lists.uni-paderborn.de > > https://lists.uni-paderborn.de/mailman/listinfo/mei-l > > > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l > > -------------- next part -------------- An HTML attachment was scrubbed... URL: <http://lists.uni-paderborn.de/pipermail/mei-l/attachments/20121105/67e46aa5/attachment.html> From rfreedma at haverford.edu Mon Nov 5 14:57:14 2012 From: rfreedma at haverford.edu (Richard Freedman) Date: Mon, 5 Nov 2012 08:57:14 -0500 Subject: [MEI-L] Search In-Reply-To: <2308D774-B8C9-4E43-B411-7DDD6CB4DBE5@edirom.de> References: <CAJ306HYMNn=iPrkD93RzYZWBuE2kh5wT+o16Q6n-TF+t6s6tCw@mail.gmail.com> <2308D774-B8C9-4E43-B411-7DDD6CB4DBE5@edirom.de> Message-ID: <CA+zvZGehSAUT5+-gCaZ-p8TZRcJBJnW_brQwud+4HaH82Ga+bQ@mail.gmail.com> Dear Johannes, I am sorry that I was obliged to leave the Digital Critical editions session early on Sunday. Is there some chance I could have a brief version of your presentation? How did the rest of the discussion go?
here is the small web site we made for the session organized with Marenzio, Josquin, and ELVIS: https://sites.google.com/a/haverford.edu/ams_digitalearlymusic/ All the best, Richard On Sun, Nov 4, 2012 at 6:49 PM, Johannes Kepper <kepper at edirom.de> wrote: > Hi Laurent, > > I'm afraid that Paderborn's list server doesn't offer more than that. I > know that TEI-L is mirrored on some more accessible listservs. Basically > all mails are still served by the original list, but would also be archived > somewhere else. I don't know the software they use right now, though. Maybe > that's something we want to offer for MEI-L as well. Opinions? > > Johannes > > > > Am 04.11.2012 um 16:22 schrieb Laurent Pugin <lxpugin at gmail.com>: > > > Hi, > > > > Is there a way to search the mailing list archive? The only resource I > found are files to download. > > https://lists.uni-paderborn.de/pipermail/mei-l/ > > > > Thanks, > > Laurent > > > > > > _______________________________________________ > > mei-l mailing list > > mei-l at lists.uni-paderborn.de > > https://lists.uni-paderborn.de/mailman/listinfo/mei-l > > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l > -- Richard Freedman John C. Whitehead Professor of Music Haverford College Haverford, PA 19041 610-896-1007 610-896-4902 (fax) http://www.haverford.edu/faculty/rfreedma -------------- next part -------------- An HTML attachment was scrubbed... 
URL: <http://lists.uni-paderborn.de/pipermail/mei-l/attachments/20121105/6409bdd3/attachment.html> From rfreedma at haverford.edu Mon Nov 5 14:59:45 2012 From: rfreedma at haverford.edu (Richard Freedman) Date: Mon, 5 Nov 2012 08:59:45 -0500 Subject: [MEI-L] Search In-Reply-To: <2308D774-B8C9-4E43-B411-7DDD6CB4DBE5@edirom.de> References: <CAJ306HYMNn=iPrkD93RzYZWBuE2kh5wT+o16Q6n-TF+t6s6tCw@mail.gmail.com> <2308D774-B8C9-4E43-B411-7DDD6CB4DBE5@edirom.de> Message-ID: <CA+zvZGc5zzzmJVqkOj7cByMPL2oNajL9dGpnRwB-DaYhnng+Pw@mail.gmail.com> Apologies to members of the list for that last message. Nothing private there! I will look more carefully before sending messages in the future! Richard On Sun, Nov 4, 2012 at 6:49 PM, Johannes Kepper <kepper at edirom.de> wrote: > Hi Laurent, > > I'm afraid that Paderborn's list server doesn't offer more than that. I > know that TEI-L is mirrored on some more accessible listservs. Basically > all mails are still served by the original list, but would also be archived > somewhere else. I don't know the software they use right now, though. Maybe > that's something we want to offer for MEI-L as well. Opinions? > > Johannes > > > > Am 04.11.2012 um 16:22 schrieb Laurent Pugin <lxpugin at gmail.com>: > > > Hi, > > > > Is there a way to search the mailing list archive? The only resource I > found are files to download. > > https://lists.uni-paderborn.de/pipermail/mei-l/ > > > > Thanks, > > Laurent > > > > > > _______________________________________________ > > mei-l mailing list > > mei-l at lists.uni-paderborn.de > > https://lists.uni-paderborn.de/mailman/listinfo/mei-l > > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l > -- Richard Freedman John C. 
Whitehead Professor of Music Haverford College Haverford, PA 19041 610-896-1007 610-896-4902 (fax) http://www.haverford.edu/faculty/rfreedma -------------- next part -------------- An HTML attachment was scrubbed... URL: <http://lists.uni-paderborn.de/pipermail/mei-l/attachments/20121105/4e6d9ac7/attachment.html> From esfield at stanford.edu Tue Nov 6 01:51:03 2012 From: esfield at stanford.edu (Eleanor Selfridge-Field) Date: Mon, 5 Nov 2012 16:51:03 -0800 (PST) Subject: [MEI-L] Search In-Reply-To: <CAPcjuFe2ZucwpWHxVQCuM25TANfKCF07vRaKhXie-G9RbiPNfQ@mail.gmail.com> References: <CAJ306HYMNn=iPrkD93RzYZWBuE2kh5wT+o16Q6n-TF+t6s6tCw@mail.gmail.com> <2308D774-B8C9-4E43-B411-7DDD6CB4DBE5@edirom.de> <CAPcjuFe2ZucwpWHxVQCuM25TANfKCF07vRaKhXie-G9RbiPNfQ@mail.gmail.com> Message-ID: <c4a07524.00000c2c.000000f9@CCARH-ADM-2.su.win.stanford.edu> Why starstarhug for mei? -----Original Message----- From: mei-l-bounces at lists.uni-paderborn.de [mailto:mei-l-bounces at lists.uni-paderborn.de] On Behalf Of Craig Sapp Sent: Sunday, November 04, 2012 4:38 PM To: Music Encoding Initiative Subject: Re: [MEI-L] Search Hi Johannes, On Sun, Nov 4, 2012 at 3:49 PM, Johannes Kepper <kepper at edirom.de> wrote: > > I'm afraid that Paderborn's list server doesn't offer more than that. I know that TEI-L is mirrored on some more accessible listservs. Basically all mails are still served by the original list, but would also be archived somewhere else. I don't know the software they use right now, though. Maybe that's something we want to offer for MEI-L as well. Opinions? My opinion is that lists should be hosted by Google Groups. In particular this allows for a web interface to the posting which is searchable. 
Here is the one I set up for Humdrum a few years ago which has all postings since the first one in July 2009: https://groups.google.com/forum/?fromgroups#!forum/starstarhug Google Groups allows many configurations such as public/private, allows for joining by anyone or by invitation, allows posts to be moderated or open. For **HUG (Humdrum Users Group), I allow anyone to join. New members' posts are moderated, and when their first post is non-spam, I promote them to a full unmoderated member. I created an mei-l group: https://groups.google.com/forum/?fromgroups#!forum/mei-l which I can setup further if that is of interest (or this list could perhaps subscribe to the Paderborn one which would in effect allow for archiving and searchability of the current list). The group can be posted to online from that webpage, or via email from: starstarhug at googlegroups.com -=+Craig _______________________________________________ mei-l mailing list mei-l at lists.uni-paderborn.de https://lists.uni-paderborn.de/mailman/listinfo/mei-l From kepper at edirom.de Tue Nov 6 10:10:35 2012 From: kepper at edirom.de (Johannes Kepper) Date: Tue, 6 Nov 2012 10:10:35 +0100 Subject: [MEI-L] Search In-Reply-To: <c4a07524.00000c2c.000000f9@CCARH-ADM-2.su.win.stanford.edu> References: <CAJ306HYMNn=iPrkD93RzYZWBuE2kh5wT+o16Q6n-TF+t6s6tCw@mail.gmail.com> <2308D774-B8C9-4E43-B411-7DDD6CB4DBE5@edirom.de> <CAPcjuFe2ZucwpWHxVQCuM25TANfKCF07vRaKhXie-G9RbiPNfQ@mail.gmail.com> <c4a07524.00000c2c.000000f9@CCARH-ADM-2.su.win.stanford.edu> Message-ID: <5899B202-09AC-4774-BDD0-52F956BDFBBD@edirom.de> From what I see, Google Groups still seems quite traditional. I looked up what TEI used as mirror, and particularly http://markmail.org/search/?q=list%3Aedu.brown.listserv.tei-l seems to be a great tool. 
But my understanding is that the current mailman instance in Paderborn stays the official mailing list, and all other sites are just mirrors of that (which means that all mails are send through the current mei-l). If we agree on that, there is no striking argument to provide only one such additional archive? Johannes Am 06.11.2012 um 01:51 schrieb Eleanor Selfridge-Field: > Why starstarhug for mei? > > > > -----Original Message----- > From: mei-l-bounces at lists.uni-paderborn.de > [mailto:mei-l-bounces at lists.uni-paderborn.de] On Behalf Of Craig Sapp > Sent: Sunday, November 04, 2012 4:38 PM > To: Music Encoding Initiative > Subject: Re: [MEI-L] Search > > Hi Johannes, > > On Sun, Nov 4, 2012 at 3:49 PM, Johannes Kepper <kepper at edirom.de> wrote: >> >> I'm afraid that Paderborn's list server doesn't offer more than that. I > know that TEI-L is mirrored on some more accessible listservs. Basically > all mails are still served by the original list, but would also be > archived somewhere else. I don't know the software they use right now, > though. Maybe that's something we want to offer for MEI-L as well. > Opinions? > > My opinion is that lists should be hosted by Google Groups. In particular > this allows for a web interface to the posting which is searchable. Here > is the one I set up for Humdrum a few years ago which has all postings > since the first one in July 2009: > > https://groups.google.com/forum/?fromgroups#!forum/starstarhug > > Google Groups allows many configurations such as public/private, allows > for joining by anyone or by invitation, allows posts to be moderated or > open. For **HUG (Humdrum Users Group), I allow anyone to join. New > members' posts are moderated, and when their first post is non-spam, I > promote them to a full unmoderated member. 
> > I created an mei-l group: > https://groups.google.com/forum/?fromgroups#!forum/mei-l > which I can setup further if that is of interest (or this list could > perhaps subscribe to the Paderborn one which would in effect allow for > archiving and searchability of the current list). The group can be posted > to online from that webpage, or via email from: > starstarhug at googlegroups.com > > > -=+Craig > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l From bohl at edirom.de Tue Nov 6 10:21:51 2012 From: bohl at edirom.de (Benjamin Wolff Bohl) Date: Tue, 6 Nov 2012 10:21:51 +0100 Subject: [MEI-L] Search In-Reply-To: <5899B202-09AC-4774-BDD0-52F956BDFBBD@edirom.de> References: <CAJ306HYMNn=iPrkD93RzYZWBuE2kh5wT+o16Q6n-TF+t6s6tCw@mail.gmail.com> <2308D774-B8C9-4E43-B411-7DDD6CB4DBE5@edirom.de> <CAPcjuFe2ZucwpWHxVQCuM25TANfKCF07vRaKhXie-G9RbiPNfQ@mail.gmail.com> <c4a07524.00000c2c.000000f9@CCARH-ADM-2.su.win.stanford.edu> <5899B202-09AC-4774-BDD0-52F956BDFBBD@edirom.de> Message-ID: <CBE4F35F-CA14-4167-9C6A-861B999C5F82@edirom.de> Hi Johannes, you being the admin of the list should have the possibility to configure mailman to maintain an archive, resp. to set privileges for archive access, being either subscribers only or public. Cheers, Benjamin Am 06.11.2012 um 10:10 schrieb Johannes Kepper: > From what I see, Google Groups still seems quite traditional. I looked up what TEI used as mirror, and particularly > > http://markmail.org/search/?q=list%3Aedu.brown.listserv.tei-l > > seems to be a great tool. 
But my understanding is that the current mailman instance in Paderborn stays the official mailing list, and all other sites are just mirrors of that (which means that all mails are send through the current mei-l). If we agree on that, there is no striking argument to provide only one such additional archive? > > Johannes > > > Am 06.11.2012 um 01:51 schrieb Eleanor Selfridge-Field: > >> Why starstarhug for mei? >> >> >> >> -----Original Message----- >> From: mei-l-bounces at lists.uni-paderborn.de >> [mailto:mei-l-bounces at lists.uni-paderborn.de] On Behalf Of Craig Sapp >> Sent: Sunday, November 04, 2012 4:38 PM >> To: Music Encoding Initiative >> Subject: Re: [MEI-L] Search >> >> Hi Johannes, >> >> On Sun, Nov 4, 2012 at 3:49 PM, Johannes Kepper <kepper at edirom.de> wrote: >>> >>> I'm afraid that Paderborn's list server doesn't offer more than that. I >> know that TEI-L is mirrored on some more accessible listservs. Basically >> all mails are still served by the original list, but would also be >> archived somewhere else. I don't know the software they use right now, >> though. Maybe that's something we want to offer for MEI-L as well. >> Opinions? >> >> My opinion is that lists should be hosted by Google Groups. In particular >> this allows for a web interface to the posting which is searchable. Here >> is the one I set up for Humdrum a few years ago which has all postings >> since the first one in July 2009: >> >> https://groups.google.com/forum/?fromgroups#!forum/starstarhug >> >> Google Groups allows many configurations such as public/private, allows >> for joining by anyone or by invitation, allows posts to be moderated or >> open. For **HUG (Humdrum Users Group), I allow anyone to join. New >> members' posts are moderated, and when their first post is non-spam, I >> promote them to a full unmoderated member. 
>> >> I created an mei-l group: >> https://groups.google.com/forum/?fromgroups#!forum/mei-l >> which I can setup further if that is of interest (or this list could >> perhaps subscribe to the Paderborn one which would in effect allow for >> archiving and searchability of the current list). The group can be posted >> to online from that webpage, or via email from: >> starstarhug at googlegroups.com >> >> >> -=+Craig >> >> _______________________________________________ >> mei-l mailing list >> mei-l at lists.uni-paderborn.de >> https://lists.uni-paderborn.de/mailman/listinfo/mei-l >> >> _______________________________________________ >> mei-l mailing list >> mei-l at lists.uni-paderborn.de >> https://lists.uni-paderborn.de/mailman/listinfo/mei-l > > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l From bohl at edirom.de Tue Nov 6 10:31:46 2012 From: bohl at edirom.de (Benjamin Wolff Bohl) Date: Tue, 6 Nov 2012 10:31:46 +0100 Subject: [MEI-L] Search In-Reply-To: <CBE4F35F-CA14-4167-9C6A-861B999C5F82@edirom.de> References: <CAJ306HYMNn=iPrkD93RzYZWBuE2kh5wT+o16Q6n-TF+t6s6tCw@mail.gmail.com> <2308D774-B8C9-4E43-B411-7DDD6CB4DBE5@edirom.de> <CAPcjuFe2ZucwpWHxVQCuM25TANfKCF07vRaKhXie-G9RbiPNfQ@mail.gmail.com> <c4a07524.00000c2c.000000f9@CCARH-ADM-2.su.win.stanford.edu> <5899B202-09AC-4774-BDD0-52F956BDFBBD@edirom.de> <CBE4F35F-CA14-4167-9C6A-861B999C5F82@edirom.de> Message-ID: <64223CFA-BB4D-4957-BA94-D185867348F6@edirom.de> Hi all, please ignore the last post, I missed that this specific archive misses the search option? sorry Benjamin Am 06.11.2012 um 10:21 schrieb Benjamin Wolff Bohl: > Hi Johannes, > you being the admin of the list should have the possibility to configure mailman to maintain an archive, resp. to set privileges for archive access, being either subscribers only or public. 
> > Cheers, > Benjamin > > Am 06.11.2012 um 10:10 schrieb Johannes Kepper: > >> From what I see, Google Groups still seems quite traditional. I looked up what TEI used as mirror, and particularly >> >> http://markmail.org/search/?q=list%3Aedu.brown.listserv.tei-l >> >> seems to be a great tool. But my understanding is that the current mailman instance in Paderborn stays the official mailing list, and all other sites are just mirrors of that (which means that all mails are send through the current mei-l). If we agree on that, there is no striking argument to provide only one such additional archive? >> >> Johannes >> >> >> Am 06.11.2012 um 01:51 schrieb Eleanor Selfridge-Field: >> >>> Why starstarhug for mei? >>> >>> >>> >>> -----Original Message----- >>> From: mei-l-bounces at lists.uni-paderborn.de >>> [mailto:mei-l-bounces at lists.uni-paderborn.de] On Behalf Of Craig Sapp >>> Sent: Sunday, November 04, 2012 4:38 PM >>> To: Music Encoding Initiative >>> Subject: Re: [MEI-L] Search >>> >>> Hi Johannes, >>> >>> On Sun, Nov 4, 2012 at 3:49 PM, Johannes Kepper <kepper at edirom.de> wrote: >>>> >>>> I'm afraid that Paderborn's list server doesn't offer more than that. I >>> know that TEI-L is mirrored on some more accessible listservs. Basically >>> all mails are still served by the original list, but would also be >>> archived somewhere else. I don't know the software they use right now, >>> though. Maybe that's something we want to offer for MEI-L as well. >>> Opinions? >>> >>> My opinion is that lists should be hosted by Google Groups. In particular >>> this allows for a web interface to the posting which is searchable. 
Here >>> is the one I set up for Humdrum a few years ago which has all postings >>> since the first one in July 2009: >>> >>> https://groups.google.com/forum/?fromgroups#!forum/starstarhug >>> >>> Google Groups allows many configurations such as public/private, allows >>> for joining by anyone or by invitation, allows posts to be moderated or >>> open. For **HUG (Humdrum Users Group), I allow anyone to join. New >>> members' posts are moderated, and when their first post is non-spam, I >>> promote them to a full unmoderated member. >>> >>> I created an mei-l group: >>> https://groups.google.com/forum/?fromgroups#!forum/mei-l >>> which I can setup further if that is of interest (or this list could >>> perhaps subscribe to the Paderborn one which would in effect allow for >>> archiving and searchability of the current list). The group can be posted >>> to online from that webpage, or via email from: >>> starstarhug at googlegroups.com >>> >>> >>> -=+Craig >>> >>> _______________________________________________ >>> mei-l mailing list >>> mei-l at lists.uni-paderborn.de >>> https://lists.uni-paderborn.de/mailman/listinfo/mei-l >>> >>> _______________________________________________ >>> mei-l mailing list >>> mei-l at lists.uni-paderborn.de >>> https://lists.uni-paderborn.de/mailman/listinfo/mei-l >> >> >> _______________________________________________ >> mei-l mailing list >> mei-l at lists.uni-paderborn.de >> https://lists.uni-paderborn.de/mailman/listinfo/mei-l > > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l From kristina.richts at gmx.de Tue Nov 6 11:15:34 2012 From: kristina.richts at gmx.de (Kristina Richts) Date: Tue, 6 Nov 2012 11:15:34 +0100 Subject: [MEI-L] <parts> within <incip> Message-ID: <F3FF5104-EE24-44F1-8D9E-16F2831588E4@gmx.de> Hi all, is there a specific reason why the <parts> element is not provided within the <incip> element? 
And wouldn't it be an advantage to allow more than only one <incip> within <work>? Best, Kristina From andrew.hankinson at mail.mcgill.ca Tue Nov 6 12:01:02 2012 From: andrew.hankinson at mail.mcgill.ca (Andrew Hankinson) Date: Tue, 6 Nov 2012 12:01:02 +0100 Subject: [MEI-L] Search In-Reply-To: <4091_1352194317_5098D90C_4091_8_4_64223CFA-BB4D-4957-BA94-D185867348F6@edirom.de> References: <CAJ306HYMNn=iPrkD93RzYZWBuE2kh5wT+o16Q6n-TF+t6s6tCw@mail.gmail.com> <2308D774-B8C9-4E43-B411-7DDD6CB4DBE5@edirom.de> <CAPcjuFe2ZucwpWHxVQCuM25TANfKCF07vRaKhXie-G9RbiPNfQ@mail.gmail.com> <c4a07524.00000c2c.000000f9@CCARH-ADM-2.su.win.stanford.edu> <5899B202-09AC-4774-BDD0-52F956BDFBBD@edirom.de> <CBE4F35F-CA14-4167-9C6A-861B999C5F82@edirom.de> <4091_1352194317_5098D90C_4091_8_4_64223CFA-BB4D-4957-BA94-D185867348F6@edirom.de> Message-ID: <B45B0431-88C3-479D-83B4-F53BBAD18A55@mail.mcgill.ca> Markmail, OSDir, Nabble, etc. wrap mailing list content in ads to make money on search engine traffic. Personally I find these mailing list aggregators are pretty frustrating, since they add a whole bunch of hits to a Google search, often of the exact same message. But maybe this is a case where something is better than nothing. Markmail has a content policy that we should read and see if there are any problems with it. http://markmail.org/docs/content-policy.xqy -Andrew On 2012-11-06, at 10:31 AM, Benjamin Wolff Bohl <bohl at edirom.de> wrote: > Hi all, > please ignore the last post, I missed that this specific archive misses the search option? sorry > > Benjamin > > Am 06.11.2012 um 10:21 schrieb Benjamin Wolff Bohl: > >> Hi Johannes, >> you being the admin of the list should have the possibility to configure mailman to maintain an archive, resp. to set privileges for archive access, being either subscribers only or public. >> >> Cheers, >> Benjamin >> >> Am 06.11.2012 um 10:10 schrieb Johannes Kepper: >> >>> From what I see, Google Groups still seems quite traditional. 
I looked up what TEI used as mirror, and particularly >>> >>> http://markmail.org/search/?q=list%3Aedu.brown.listserv.tei-l >>> >>> seems to be a great tool. But my understanding is that the current mailman instance in Paderborn stays the official mailing list, and all other sites are just mirrors of that (which means that all mails are send through the current mei-l). If we agree on that, there is no striking argument to provide only one such additional archive? >>> >>> Johannes >>> >>> >>> Am 06.11.2012 um 01:51 schrieb Eleanor Selfridge-Field: >>> >>>> Why starstarhug for mei? >>>> >>>> >>>> >>>> -----Original Message----- >>>> From: mei-l-bounces at lists.uni-paderborn.de >>>> [mailto:mei-l-bounces at lists.uni-paderborn.de] On Behalf Of Craig Sapp >>>> Sent: Sunday, November 04, 2012 4:38 PM >>>> To: Music Encoding Initiative >>>> Subject: Re: [MEI-L] Search >>>> >>>> Hi Johannes, >>>> >>>> On Sun, Nov 4, 2012 at 3:49 PM, Johannes Kepper <kepper at edirom.de> wrote: >>>>> >>>>> I'm afraid that Paderborn's list server doesn't offer more than that. I >>>> know that TEI-L is mirrored on some more accessible listservs. Basically >>>> all mails are still served by the original list, but would also be >>>> archived somewhere else. I don't know the software they use right now, >>>> though. Maybe that's something we want to offer for MEI-L as well. >>>> Opinions? >>>> >>>> My opinion is that lists should be hosted by Google Groups. In particular >>>> this allows for a web interface to the posting which is searchable. Here >>>> is the one I set up for Humdrum a few years ago which has all postings >>>> since the first one in July 2009: >>>> >>>> https://groups.google.com/forum/?fromgroups#!forum/starstarhug >>>> >>>> Google Groups allows many configurations such as public/private, allows >>>> for joining by anyone or by invitation, allows posts to be moderated or >>>> open. For **HUG (Humdrum Users Group), I allow anyone to join. 
New >>>> members' posts are moderated, and when their first post is non-spam, I >>>> promote them to a full unmoderated member. >>>> >>>> I created an mei-l group: >>>> https://groups.google.com/forum/?fromgroups#!forum/mei-l >>>> which I can setup further if that is of interest (or this list could >>>> perhaps subscribe to the Paderborn one which would in effect allow for >>>> archiving and searchability of the current list). The group can be posted >>>> to online from that webpage, or via email from: >>>> starstarhug at googlegroups.com >>>> >>>> >>>> -=+Craig >>>> >>>> _______________________________________________ >>>> mei-l mailing list >>>> mei-l at lists.uni-paderborn.de >>>> https://lists.uni-paderborn.de/mailman/listinfo/mei-l >>>> >>>> _______________________________________________ >>>> mei-l mailing list >>>> mei-l at lists.uni-paderborn.de >>>> https://lists.uni-paderborn.de/mailman/listinfo/mei-l >>> >>> >>> _______________________________________________ >>> mei-l mailing list >>> mei-l at lists.uni-paderborn.de >>> https://lists.uni-paderborn.de/mailman/listinfo/mei-l >> >> >> _______________________________________________ >> mei-l mailing list >> mei-l at lists.uni-paderborn.de >> https://lists.uni-paderborn.de/mailman/listinfo/mei-l > > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l From kepper at edirom.de Tue Nov 6 12:15:26 2012 From: kepper at edirom.de (Johannes Kepper) Date: Tue, 6 Nov 2012 12:15:26 +0100 Subject: [MEI-L] Search In-Reply-To: <B45B0431-88C3-479D-83B4-F53BBAD18A55@mail.mcgill.ca> References: <CAJ306HYMNn=iPrkD93RzYZWBuE2kh5wT+o16Q6n-TF+t6s6tCw@mail.gmail.com> <2308D774-B8C9-4E43-B411-7DDD6CB4DBE5@edirom.de> <CAPcjuFe2ZucwpWHxVQCuM25TANfKCF07vRaKhXie-G9RbiPNfQ@mail.gmail.com> <c4a07524.00000c2c.000000f9@CCARH-ADM-2.su.win.stanford.edu> <5899B202-09AC-4774-BDD0-52F956BDFBBD@edirom.de> 
<CBE4F35F-CA14-4167-9C6A-861B999C5F82@edirom.de> <4091_1352194317_5098D90C_4091_8_4_64223CFA-BB4D-4957-BA94-D185867348F6@edirom.de> <B45B0431-88C3-479D-83B4-F53BBAD18A55@mail.mcgill.ca> Message-ID: <F057D736-0E56-43D8-BFCB-508634B8828C@edirom.de> Hi Andrew, I've never looked at Markmail etc. without using an adblocker, so I wasn't aware of this problem. But in my understanding, such mirrors would be additional ways of accessing the archive, and as with TEI, they wouldn't be "official". So I don't see this as a big problem. If someone wants to use their UI, he has to live with their spam. Benjamin contacted Paderborn's IT services today regarding the installation of additional plugins to mailman, which could provide a search interface to the existing mailer (thanks for that, Benni). Normally, they're quite supportive, so we should give them a couple of days. If possible, I would like to keep the official mailer in their hands, as we don't have to care about technical issues, changing policies or business models etc. We will keep you posted regarding the possibilities in Paderborn. jo Am 06.11.2012 um 12:01 schrieb Andrew Hankinson: > Markmail, OSDir, Nabble, etc. wrap mailing list content in ads to make money on search engine traffic. Personally I find these mailing list aggregators are pretty frustrating, since they add a whole bunch of hits to a Google search, often of the exact same message. But maybe this is a case where something is better than nothing. > > Markmail has a content policy that we should read and see if there are any problems with it. > > http://markmail.org/docs/content-policy.xqy > > -Andrew > > On 2012-11-06, at 10:31 AM, Benjamin Wolff Bohl <bohl at edirom.de> wrote: > >> Hi all, >> please ignore the last post, I missed that this specific archive misses the search option...
sorry >> >> Benjamin >> >> Am 06.11.2012 um 10:21 schrieb Benjamin Wolff Bohl: >> >>> Hi Johannes, >>> you being the admin of the list should have the possibility to configure mailman to maintain an archive, resp. to set privileges for archive access, being either subscribers only or public. >>> >>> Cheers, >>> Benjamin >>> >>> Am 06.11.2012 um 10:10 schrieb Johannes Kepper: >>> >>>> From what I see, Google Groups still seems quite traditional. I looked up what TEI used as mirror, and particularly >>>> >>>> http://markmail.org/search/?q=list%3Aedu.brown.listserv.tei-l >>>> >>>> seems to be a great tool. But my understanding is that the current mailman instance in Paderborn stays the official mailing list, and all other sites are just mirrors of that (which means that all mails are send through the current mei-l). If we agree on that, there is no striking argument to provide only one such additional archive? >>>> >>>> Johannes >>>> >>>> >>>> Am 06.11.2012 um 01:51 schrieb Eleanor Selfridge-Field: >>>> >>>>> Why starstarhug for mei? >>>>> >>>>> >>>>> >>>>> -----Original Message----- >>>>> From: mei-l-bounces at lists.uni-paderborn.de >>>>> [mailto:mei-l-bounces at lists.uni-paderborn.de] On Behalf Of Craig Sapp >>>>> Sent: Sunday, November 04, 2012 4:38 PM >>>>> To: Music Encoding Initiative >>>>> Subject: Re: [MEI-L] Search >>>>> >>>>> Hi Johannes, >>>>> >>>>> On Sun, Nov 4, 2012 at 3:49 PM, Johannes Kepper <kepper at edirom.de> wrote: >>>>>> >>>>>> I'm afraid that Paderborn's list server doesn't offer more than that. I >>>>> know that TEI-L is mirrored on some more accessible listservs. Basically >>>>> all mails are still served by the original list, but would also be >>>>> archived somewhere else. I don't know the software they use right now, >>>>> though. Maybe that's something we want to offer for MEI-L as well. >>>>> Opinions? >>>>> >>>>> My opinion is that lists should be hosted by Google Groups. 
In particular >>>>> this allows for a web interface to the posting which is searchable. Here >>>>> is the one I set up for Humdrum a few years ago which has all postings >>>>> since the first one in July 2009: >>>>> >>>>> https://groups.google.com/forum/?fromgroups#!forum/starstarhug >>>>> >>>>> Google Groups allows many configurations such as public/private, allows >>>>> for joining by anyone or by invitation, allows posts to be moderated or >>>>> open. For **HUG (Humdrum Users Group), I allow anyone to join. New >>>>> members' posts are moderated, and when their first post is non-spam, I >>>>> promote them to a full unmoderated member. >>>>> >>>>> I created an mei-l group: >>>>> https://groups.google.com/forum/?fromgroups#!forum/mei-l >>>>> which I can setup further if that is of interest (or this list could >>>>> perhaps subscribe to the Paderborn one which would in effect allow for >>>>> archiving and searchability of the current list). The group can be posted >>>>> to online from that webpage, or via email from: >>>>> starstarhug at googlegroups.com >>>>> >>>>> >>>>> -=+Craig >>>>> >>>>> _______________________________________________ >>>>> mei-l mailing list >>>>> mei-l at lists.uni-paderborn.de >>>>> https://lists.uni-paderborn.de/mailman/listinfo/mei-l >>>>> >>>>> _______________________________________________ >>>>> mei-l mailing list >>>>> mei-l at lists.uni-paderborn.de >>>>> https://lists.uni-paderborn.de/mailman/listinfo/mei-l >>>> >>>> >>>> _______________________________________________ >>>> mei-l mailing list >>>> mei-l at lists.uni-paderborn.de >>>> https://lists.uni-paderborn.de/mailman/listinfo/mei-l >>> >>> >>> _______________________________________________ >>> mei-l mailing list >>> mei-l at lists.uni-paderborn.de >>> https://lists.uni-paderborn.de/mailman/listinfo/mei-l >> >> >> _______________________________________________ >> mei-l mailing list >> mei-l at lists.uni-paderborn.de >> https://lists.uni-paderborn.de/mailman/listinfo/mei-l > > 
> _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l From atge at kb.dk Tue Nov 6 13:30:09 2012 From: atge at kb.dk (Axel Teich Geertinger) Date: Tue, 6 Nov 2012 12:30:09 +0000 Subject: [MEI-L] <parts> within <incip> In-Reply-To: <F3FF5104-EE24-44F1-8D9E-16F2831588E4@gmx.de> References: <F3FF5104-EE24-44F1-8D9E-16F2831588E4@gmx.de> Message-ID: <0B6F63F59F405E4C902DFE2C2329D0D1514EA55E@EXCHANGE-01.kb.dk> Hi Kristina This may be a little premature, but if the breaking down of <work> into any number of (FRBR) <expression> pieces/components is going to be implemented in MEI at some point in about the same way we use it in MerMEId, you will be able to provide an incipit for each one of them. I actually think that is a better solution than using multiple <incip> at work level, because the work as a whole only has one beginning ;-) This may not answer your question, though. If you are thinking about different representations of the same incipit, say one in full score and one in reduction, or one incipit for each part, I think it would be fair to keep them together in the same <incip>. I don't know about <parts> within <incip>. Best wishes, Axel -----Oprindelig meddelelse----- Fra: mei-l-bounces at lists.uni-paderborn.de [mailto:mei-l-bounces at lists.uni-paderborn.de] På vegne af Kristina Richts Sendt: 6. november 2012 11:16 Til: Music Encoding Initiative Emne: [MEI-L] <parts> within <incip> Hi all, is there a specific reason why the <parts> element is not provided within the <incip> element? And wouldn't it be an advantage to allow more than only one <incip> within <work>? Best, Kristina _______________________________________________ mei-l mailing list mei-l at lists.uni-paderborn.de https://lists.uni-paderborn.de/mailman/listinfo/mei-l From donbyrd at indiana.edu Wed Nov 7 05:54:29 2012 From: donbyrd at indiana.edu (Byrd, Donald A.)
Date: Tue, 6 Nov 2012 23:54:29 -0500 Subject: [MEI-L] page sizes In-Reply-To: <E323666E-53AB-4F0B-808D-4269EE3D90D7@mail.mcgill.ca> References: <16763_1351898411_5094552A_16763_59_1_D2A91356-8368-4ECC-82BA-FA832A515EA3@edirom.de> <E323666E-53AB-4F0B-808D-4269EE3D90D7@mail.mcgill.ca> Message-ID: <20121106235429.4mv0f8114w04gk4o@webmail.iu.edu> I've been thinking about this issue. There's something about it I don't understand, but I can't figure out what it is; so I'll go ahead as best I can. Laurent, Perry and Johannes' ideas generally make a lot of sense, but I think Andrew's point about the problem with physical units for editions meant for digital consumption is a good one. It seems to me we should think in terms of three categories of units: 1. absolute units, well-defined: inch, cm, mm, point, pica, etc. 2. absolute units, ill-defined/variable: pixel 3. relative units: distance between staff lines; percent of display width My CWMN editor Nightingale is a real-life example (though admittedly it's not nearly as flexible as what MEI needs). To oversimplify slightly, internally, Nightingale keeps coordinates of musical symbols relative to their system in terms of either of two units: one absolute (DDIST) and one relative (STDIST). * DDIST = 1/16th point = 1/1152nd inch (assuming a point is exactly 1/72nd inch, which is not exactly the traditional value) * STDIST = 1/8 the distance between staff lines Nightingale assumes it's displaying music on a conventional page; it gets the page size from the operating system and keeps margins in... well, some absolute units; I can't find the code at the moment. Finally (and I suspect MEI already handles this), I'd like to point out that two sizes of staves -- "normal" and "cue-size" -- aren't always enough; there are published performing editions that use three staff sizes. (In fact, I wouldn't be surprised if editions with _four_ sizes exist, though I don't know of any.) I'm not sure if this is helpful, but I hope so. 
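Don's two internal units can be related numerically; here is a minimal sketch (the constants 1/16 point and 1/8 staff-line gap come from his description above; the function names are illustrative, not Nightingale's actual API):

```python
# Nightingale-style units, per the description above (illustrative only).
# DDIST: 1/16 of a point; at 72 points per inch that is 1/1152 inch.
# STDIST: 1/8 of the distance between adjacent staff lines.

POINTS_PER_INCH = 72               # DTP/PostScript convention
DDIST_PER_POINT = 16               # 1 point = 16 DDIST
DDIST_PER_INCH = DDIST_PER_POINT * POINTS_PER_INCH   # 1152

def inches_to_ddist(inches):
    """Absolute distance in inches -> DDIST units."""
    return inches * DDIST_PER_INCH

def staff_gaps_to_stdist(gaps):
    """Distance as a multiple of one staff-line gap -> STDIST units."""
    return gaps * 8

print(inches_to_ddist(1))          # -> 1152
print(staff_gaps_to_stdist(0.5))   # half a gap -> 4.0 STDIST
```

Note how DDIST is absolute (category 1 above) while STDIST scales with the staff (category 3), which is exactly the distinction the three-category list draws.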
--Don On Fri, 2 Nov 2012 20:33:18 -0400, Andrew Hankinson <andrew.hankinson at mail.mcgill.ca> wrote: > The concept of "physical" unit doesn't really translate well to > editions that are meant for digital consumption only. If I have a > page meant for a tablet or digital music stand display, what does the > "inch" unit mean? Does it mean render it as a physical inch on the > screen, regardless of how many pixels it takes to represent it? Or > does it mean render it using a fixed number of pixels-per-inch, > regardless of how large or small it makes it from one display to > another. E-ink displays challenge this concept, since they don't > really have pixels, and high-resolution displays also challenge it > since the number of pixels it takes to represent a single physical > unit can be completely different. So we'll probably need some sort of > proportional unit so that we can say that the page margin is a > percentage of the rendered display rather than a fixed unit of > physical measurement. > > I would suggest looking at the CSS3 documentation for how they deal > with proportional and physical sizes, since they support both. > Apparently their baseline unit is "ch", which is the width of the "0" > when rendered in a given font. That seems to be a good analogue to > the inter-staff space measurement. > > Also relevant to this discussion, in case it was forgotten: > > http://code.google.com/p/music-encoding/issues/detail?id=73 > > -Andrew > > On 2012-11-02, at 7:19 PM, Johannes Kepper <kepper at edirom.de> > wrote: > >> Dear all, >> >> during discussions at the AMS meeting in New Orleans, we came up >> with a couple of issues we'd like to discuss. >> >> The current definition of page sizes expressed in real-world units >> and the relationship to musical units is less than ideal. >> Especially, the @page.scale attribute on <scoreDef> causes some >> confusion. Right now, it allows expression of the relationship in >> the format "1:1.5", but "50%" is also allowed. 
The directions for >> scaling are not absolutely clear from the current documentation. >> >> Rather than explaining the current situation, I would like to >> introduce our proposal for a better definition: >> >> scoreDef/@page.units (pre-existing, but maybe better called >> @page.unit) will define the real-world unit used to describe pages. >> It allows 'cm', 'in', 'mm'. >> >> scoreDef/@page.rightmar and other margins take decimal numbers and >> will use the unit defined in @page.units to describe all kinds of >> margins on the page. This explicitly includes the distance between >> two staves, independent of their size (see below). >> >> scoreDef/@interline.size will replace the current @page.scale. It >> will hold the decimal number of real-world units (using @page.units) >> matching one interline distance. This interline distance (which is >> already used by MEI) is a musical unit which describes half the >> distance between two staff lines, or using a different description, >> the distance a note head moves when stepping up a second. For >> instance, an @interline.size of "1.2" and a @page.unit of "mm" would >> say that the distance between two staff lines is 2.4mm, or the full >> height of the staff is 9.6mm. This assumes, of course, that the >> distance is measured from the middle of the individual lines and >> disregards the thickness of the lines themselves. The guidelines >> don't say anything about this yet, but at least to me, it seems >> easier to leave that out of the calculation. If we really want to >> say something about the thickness of the line, it can be done later >> on using a different attribute (eventually, the thickness might >> change almost everywhere for manually drawn staff lines). >> >> It is open for discussion to what degree this interline distance can >> be made the default in certain contexts. 
For instance, the >> description of @width says it needs the value of @unit, specified on >> the same element, in order to be processed correctly. This means >> that every measure specifying its width also needs to specify the >> corresponding unit. While interline distances might be a reasonable >> default here, it is certainly not for page/@width. This is clearly a >> separate issue, but it is closely related, and should be given >> additional consideration. >> >> For cue-sized staves, the existing staffDef/@scale can be used to >> specify the size as a percentage of the default. The data type for >> this should also allow the value "cue" for cases where you don't >> want to be extremely precise about the exact size of the staff. Then >> again, a new scoreDef/@cuesize could be used to specify a default >> for cue-sized staves. Considering the question of scope (does this >> affect the size of cue notes in addition to cue staves?), one could >> make an argument that it should be @cuesize.staff and @cuesize.note >> instead. >> >> It also should be made clear in the guidelines that the >> staffDef/@scale does not affect margins at all, as they should >> always be measured in real-world units (see above). >> >> It would be great to get some feedback on all this, especially from >> people with engraving experience? 
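The arithmetic behind the proposed interline unit can be sketched briefly (the attribute names @interline.size and @page.units are from this proposal, not the released schema; the function is purely illustrative):

```python
# Proposed @interline.size semantics: real-world units (here mm) per
# interline distance, where one interline is half the distance between
# two adjacent staff lines.

def staff_metrics(interline_size):
    """Return (staff-line gap, full 5-line staff height) in page units."""
    line_gap = 2 * interline_size   # two interlines per staff-line gap
    staff_height = 4 * line_gap     # a 5-line staff spans 4 gaps
    return line_gap, staff_height

# @interline.size="1.2" with @page.units="mm", as in the example above:
gap, height = staff_metrics(1.2)
print(gap, height)                  # 2.4 mm between lines, 9.6 mm staff height
```

This reproduces the figures in the proposal: an interline of 1.2 mm gives 2.4 mm between staff lines and a 9.6 mm staff, measuring from the middle of each line and ignoring line thickness.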
>> >> Best regards, >> Laurent, Perry and Johannes >> >> >> _______________________________________________ >> mei-l mailing list >> mei-l at lists.uni-paderborn.de >> https://lists.uni-paderborn.de/mailman/listinfo/mei-l > > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l > -- Donald Byrd Woodrow Wilson Indiana Teaching Fellow Adjunct Associate Professor of Informatics & Music Indiana University, Bloomington From craigsapp at gmail.com Wed Nov 7 06:49:55 2012 From: craigsapp at gmail.com (Craig Sapp) Date: Tue, 6 Nov 2012 21:49:55 -0800 Subject: [MEI-L] page sizes In-Reply-To: <20121106235429.4mv0f8114w04gk4o@webmail.iu.edu> References: <16763_1351898411_5094552A_16763_59_1_D2A91356-8368-4ECC-82BA-FA832A515EA3@edirom.de> <E323666E-53AB-4F0B-808D-4269EE3D90D7@mail.mcgill.ca> <20121106235429.4mv0f8114w04gk4o@webmail.iu.edu> Message-ID: <CAPcjuFfuRvJJu3PAoiSSoFTs9uRpAaee3uw6nXtaa3h0VKm6TA@mail.gmail.com> Hi Don, On Tue, Nov 6, 2012 at 8:54 PM, Byrd, Donald A. <donbyrd at indiana.edu> wrote: > > Finally (and I suspect MEI already handles this), I'd like to point out > that two sizes of staves -- "normal" and "cue-size" -- aren't always > enough; there are published performing editions that use three staff sizes. > (In fact, I wouldn't be surprised if editions with _four_ sizes exist, > though I don't know of any.) I have seen at least three sizes in a score before, and this would theoretically allow for four sizes: In a piano/instrumental score, the piano part typically has the instrumental part displayed above it in a slightly smaller size. And I have seen ossia parts for the instrumental staff which in turn would be smaller than the instrumental staff size. So if the piano part also had an ossia, then there would be four staff sizes, unless the ossia for the piano is the same size as the instrumental part (which it probably should). 
> 1/72nd inch, which is not exactly the traditional value) You must be older than me, as I didn't know that :-) http://en.wikipedia.org/wiki/Point_(typography) <<In the late 1980s to the 1990s, the traditional point was supplanted by the desktop publishing point (also called the PostScript<http://en.wikipedia.org/wiki/PostScript> point)>> But then it seems that you need to be careful of your definition of the inch which has also changed: <<The desktop publishing <http://en.wikipedia.org/wiki/Desktop_Publishing> point (DTP point) is defined as 1/72 of the Anglo-Saxon compromise inch of 1959 (25.4 mm) which makes it *0.0138 inch* or *0.3527 mm*.>> But the pre-PostScript point size did not seem to be standardized: <<By the end of the 19th Century, it had settled to around 0.35 to 0.38 mm, depending on one's geographical location.>> And here is the one you must be referring to which I vaguely remember seeing before: <<In 1886, the Fifteenth Meeting of the Type Founders Association of the United States approved the so-called *Johnson pica* be adopted as the official standard. This makes the traditional American printer's foot measure 11.952 inches (303.6 mm), or 303.5808 mm exactly, giving a point size of approximately 1/72.27 of an inch, or *0.3515 mm*. This is the size of the point in the TeX <http://en.wikipedia.org/wiki/TeX> computer typesetting system by Donald Knuth<http://en.wikipedia.org/wiki/Donald_Knuth>, which predates PostScript slightly. Thus the latter unit is sometimes called the *TeX point*.>> -=+Craig From donbyrd at indiana.edu Wed Nov 7 16:25:07 2012 From: donbyrd at indiana.edu (Byrd, Donald A.) Date: Wed, 7 Nov 2012 10:25:07 -0500 Subject: [MEI-L] page sizes; multiple staff sizes; def.
of point In-Reply-To: <CAPcjuFfuRvJJu3PAoiSSoFTs9uRpAaee3uw6nXtaa3h0VKm6TA@mail.gmail.com> References: <16763_1351898411_5094552A_16763_59_1_D2A91356-8368-4ECC-82BA-FA832A515EA3@edirom.de> <E323666E-53AB-4F0B-808D-4269EE3D90D7@mail.mcgill.ca> <20121106235429.4mv0f8114w04gk4o@webmail.iu.edu> <CAPcjuFfuRvJJu3PAoiSSoFTs9uRpAaee3uw6nXtaa3h0VKm6TA@mail.gmail.com> Message-ID: <20121107102507.88v3r2aaas8c0kgw@webmail.iu.edu> On Tue, 6 Nov 2012 21:49:55 -0800, Craig Sapp <craigsapp at gmail.com> wrote: > Hi Don, > > On Tue, Nov 6, 2012 at 8:54 PM, Byrd, Donald A. <donbyrd at indiana.edu> wrote: > >> >> Finally (and I suspect MEI already handles this), I'd like to point out >> that two sizes of staves -- "normal" and "cue-size" -- aren't always >> enough; there are published performing editions that use three staff sizes. >> (In fact, I wouldn't be surprised if editions with _four_ sizes exist, >> though I don't know of any.) > > > I have seen at least three sizes in a score before, and this would > theoretically allow for four sizes: > > In a piano/instrumental score, the piano part typically has the > instrumental part displayed above it in a slightly smaller size. And I > have seen ossia parts for the instrumental staff which in turn would be > smaller than the instrumental staff size. So if the piano part also had an > ossia, then there would be four staff sizes, unless the ossia for the piano > is the same size as the instrumental part (which it probably should). Right. My list of CMN extremes http://www.informatics.indiana.edu/donbyrd/CMNExtremes.htm lists the J. C. Bach Concerto for Harpsichord or Piano and Strings in E-flat, Op. 7 no. 5 (Dobereiner ed., 1927), where the 3rd size appears briefly, for an ossia. I'm sure I've seen other instances but I can't recall any; if you have other(s) handy, I'd love to hear about 'em (though I'm not sure others on this list would). 
>> 1/72nd inch, which is not exactly the traditional value) > > You must be older than me, as I didn't know that :-) It _is_ possible I'm older than you :-) . > http://en.wikipedia.org/wiki/Point_(typography) > > <<In the late 1980s to the 1990s, the traditional point was supplanted by > the desktop publishing point (also called the > PostScript<http://en.wikipedia.org/wiki/PostScript> > point)>> > > But then it seems that you need to be careful of your definition of the > inch which has also changed: > > <<The desktop publishing > <http://en.wikipedia.org/wiki/Desktop_Publishing> point > (DTP point) is defined as 1/72 of the Anglo-Saxon compromise inch of 1959 > (25.4 mm) which makes it *0.0138 inch* or *0.3527 mm*.>> > > But the pre-PostScript point size did not seem to be standardized: > > <<By the end of the 19th Century, it had settled to around 0.35 to 0.38 mm, > depending on one's geographical location.>> > > And here is the one you must be referring to which I vaguely remember > seeing before: > > <<In 1886, the Fifteenth Meeting of the Type Founders Association of the > United States approved the so-called *Johnson pica* be adopted as the > official standard. This makes the traditional American printer's foot > measure 11.952 inches (303.6 mm), or 303.5808 mm exactly, giving a point > size of approximately 1/72.27 of an inch, or *0.3515 mm*. > > This is the size of the point in the TeX > <http://en.wikipedia.org/wiki/TeX> computer > typesetting system by Donald > Knuth<http://en.wikipedia.org/wiki/Donald_Knuth>, > which predates PostScript slightly. Thus the latter unit is sometimes > called the *TeX point*.>> Ah yes, that's the definition I was thinking of. Thanks for the further information. I didn't know it was sometimes called the TeX point, but you might be younger than me :-) .
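The two point definitions quoted from Wikipedia differ only in the assumed number of points per inch; a quick check of the figures (using the 1959 inch of exactly 25.4 mm):

```python
# DTP/PostScript point vs. traditional (Johnson pica / TeX) point.
MM_PER_INCH = 25.4                    # Anglo-Saxon compromise inch of 1959

dtp_point_mm = MM_PER_INCH / 72       # 72 points per inch
tex_point_mm = MM_PER_INCH / 72.27    # ~72.27 points per inch (Johnson pica)

print(round(dtp_point_mm, 4))         # 0.3528 (quoted, truncated, as 0.3527)
print(round(tex_point_mm, 4))         # 0.3515, matching the quoted value
```

So the two candidate "points" differ by only about 0.7%, which is why the distinction rarely matters in practice but does matter for a format that fixes absolute units.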
--DAB > > > -=+Craig > -- Donald Byrd Woodrow Wilson Indiana Teaching Fellow Adjunct Associate Professor of Informatics & Music Indiana University, Bloomington From laurent at music.mcgill.ca Wed Nov 7 17:27:51 2012 From: laurent at music.mcgill.ca (Laurent Pugin) Date: Wed, 7 Nov 2012 11:27:51 -0500 Subject: [MEI-L] page sizes; multiple staff sizes; def. of point In-Reply-To: <13149_1352301931_509A7D6B_13149_136_1_20121107102507.88v3r2aaas8c0kgw@webmail.iu.edu> References: <16763_1351898411_5094552A_16763_59_1_D2A91356-8368-4ECC-82BA-FA832A515EA3@edirom.de> <E323666E-53AB-4F0B-808D-4269EE3D90D7@mail.mcgill.ca> <20121106235429.4mv0f8114w04gk4o@webmail.iu.edu> <CAPcjuFfuRvJJu3PAoiSSoFTs9uRpAaee3uw6nXtaa3h0VKm6TA@mail.gmail.com> <13149_1352301931_509A7D6B_13149_136_1_20121107102507.88v3r2aaas8c0kgw@webmail.iu.edu> Message-ID: <CAJ306HYE3kxQ+pWR4Lp_2r43Oqiw98w0xmhOFq47BRcvWg+c1w@mail.gmail.com> On Wed, Nov 7, 2012 at 10:25 AM, Byrd, Donald A. <donbyrd at indiana.edu>wrote: > On Tue, 6 Nov 2012 21:49:55 -0800, Craig Sapp <craigsapp at gmail.com> wrote: > >> Hi Don, >> >> On Tue, Nov 6, 2012 at 8:54 PM, Byrd, Donald A. <donbyrd at indiana.edu> >> wrote: >> >> >>> Finally (and I suspect MEI already handles this), I'd like to point out >>> that two sizes of staves -- "normal" and "cue-size" -- aren't always >>> enough; there are published performing editions that use three staff >>> sizes. >>> (In fact, I wouldn't be surprised if editions with _four_ sizes exist, >>> though I don't know of any.) >>> >> >> >> I have seen at least three sizes in a score before, and this would >> theoretically allow for four sizes: >> >> In a piano/instrumental score, the piano part typically has the >> instrumental part displayed above it in a slightly smaller size. And I >> have seen ossia parts for the instrumental staff which in turn would be >> smaller than the instrumental staff size. 
So if the piano part also had >> an >> ossia, then there would be four staff sizes, unless the ossia for the >> piano >> is the same size as the instrumental part (which it probably should). >> > > Right. My list of CMN extremes > > http://www.informatics.indiana.edu/donbyrd/CMNExtremes.htm > > lists the J. C. Bach Concerto for Harpsichord or Piano and Strings in > E-flat, Op. 7 no. 5 (Dobereiner ed., 1927), where the 3rd size appears > briefly, for an ossia. I'm sure I've seen other instances but I can't > recall any; if you have other(s) handy, I'd love to hear about 'em (though > I'm not sure others on this list would). > The number of possible sizes would be unlimited since it can be defined for every staff individually (in staffDef element) if necessary. Our proposal was nonetheless to have a cue-size for the most common cases where we have only one size of cue-size staves and to have it defined at a higher level (in scoreDef element). Laurent
From stadler at edirom.de Wed Nov 7 18:59:41 2012 From: stadler at edirom.de (Peter Stadler) Date: Wed, 7 Nov 2012 11:59:41 -0600 Subject: [MEI-L] Search In-Reply-To: <F057D736-0E56-43D8-BFCB-508634B8828C@edirom.de> References: <CAJ306HYMNn=iPrkD93RzYZWBuE2kh5wT+o16Q6n-TF+t6s6tCw@mail.gmail.com> <2308D774-B8C9-4E43-B411-7DDD6CB4DBE5@edirom.de> <CAPcjuFe2ZucwpWHxVQCuM25TANfKCF07vRaKhXie-G9RbiPNfQ@mail.gmail.com> <c4a07524.00000c2c.000000f9@CCARH-ADM-2.su.win.stanford.edu> <5899B202-09AC-4774-BDD0-52F956BDFBBD@edirom.de> <CBE4F35F-CA14-4167-9C6A-861B999C5F82@edirom.de> <4091_1352194317_5098D90C_4091_8_4_64223CFA-BB4D-4957-BA94-D185867348F6@edirom.de> <B45B0431-88C3-479D-83B4-F53BBAD18A55@mail.mcgill.ca> <F057D736-0E56-43D8-BFCB-508634B8828C@edirom.de> Message-ID: <870AE1BF-9F47-4BFF-98BA-E8A066305C70@edirom.de> Just for the record: I already volunteered to bring MEI-L to Markmail and/or Nabble (cf. https://lists.uni-paderborn.de/pipermail/mei-l/2011/000373.html) but my efforts got stuck somehow - sorry for that. Actually, I do agree that a native search interface for mailman would be the best thing -- if this doesn't work out I'd be happy to revive the nabble/markmail integration. (All I'd need though is the complete mail archive from the list owner.) Best Peter Am 06.11.2012 um 05:15 schrieb Johannes Kepper <kepper at edirom.de>: > Hi Andrew, > > I've never looked at Markmail etc. without using an adblocker, so I wasn't aware of this problem. But in my understanding, such mirrors would be additional ways of accessing the archive, and as with TEI, they wouldn't be "official". So I don't see this as a big problem. If someone wants to use their UI, he has to live with their spam.
> > Benjamin contacted Paderborn's IT services today regarding the installation of additional plugins to mailman, which could provide a search interface to the existing mailer (thanks for that, Benni). Normally, they're quite supportive, so we should give them a couple of days. If possible, I would like to keep the official mailer in their hands, as we don't have to care about technical issues, changing policies or business models etc. > > We will keep you posted regarding the possibilities in Paderborn. > > jo > > On 06.11.2012, at 12:01, Andrew Hankinson wrote: > >> Markmail, OSDir, Nabble, etc. wrap mailing list content in ads to make money on search engine traffic. Personally I find these mailing list aggregators are pretty frustrating, since they add a whole bunch of hits to a Google search, often of the exact same message. But maybe this is a case where something is better than nothing. >> >> Markmail has a content policy that we should read and see if there are any problems with it. >> >> http://markmail.org/docs/content-policy.xqy >> >> -Andrew >> >> On 2012-11-06, at 10:31 AM, Benjamin Wolff Bohl <bohl at edirom.de> wrote: >> >>> Hi all, >>> please ignore the last post, I missed that this specific archive misses the search option -- sorry >>> >>> Benjamin >>> >>> On 06.11.2012, at 10:21, Benjamin Wolff Bohl wrote: >>> >>>> Hi Johannes, >>>> you being the admin of the list should have the possibility to configure mailman to maintain an archive, resp. to set privileges for archive access, being either subscribers only or public. >>>> >>>> Cheers, >>>> Benjamin >>>> >>>> On 06.11.2012, at 10:10, Johannes Kepper wrote: >>>> >>>>> From what I see, Google Groups still seems quite traditional. I looked up what TEI used as mirror, and particularly >>>>> >>>>> http://markmail.org/search/?q=list%3Aedu.brown.listserv.tei-l >>>>> >>>>> seems to be a great tool. 
But my understanding is that the current mailman instance in Paderborn stays the official mailing list, and all other sites are just mirrors of that (which means that all mails are send through the current mei-l). If we agree on that, there is no striking argument to provide only one such additional archive? >>>>> >>>>> Johannes >>>>> >>>>> >>>>> Am 06.11.2012 um 01:51 schrieb Eleanor Selfridge-Field: >>>>> >>>>>> Why starstarhug for mei? >>>>>> >>>>>> >>>>>> >>>>>> -----Original Message----- >>>>>> From: mei-l-bounces at lists.uni-paderborn.de >>>>>> [mailto:mei-l-bounces at lists.uni-paderborn.de] On Behalf Of Craig Sapp >>>>>> Sent: Sunday, November 04, 2012 4:38 PM >>>>>> To: Music Encoding Initiative >>>>>> Subject: Re: [MEI-L] Search >>>>>> >>>>>> Hi Johannes, >>>>>> >>>>>> On Sun, Nov 4, 2012 at 3:49 PM, Johannes Kepper <kepper at edirom.de> wrote: >>>>>>> >>>>>>> I'm afraid that Paderborn's list server doesn't offer more than that. I >>>>>> know that TEI-L is mirrored on some more accessible listservs. Basically >>>>>> all mails are still served by the original list, but would also be >>>>>> archived somewhere else. I don't know the software they use right now, >>>>>> though. Maybe that's something we want to offer for MEI-L as well. >>>>>> Opinions? >>>>>> >>>>>> My opinion is that lists should be hosted by Google Groups. In particular >>>>>> this allows for a web interface to the posting which is searchable. Here >>>>>> is the one I set up for Humdrum a few years ago which has all postings >>>>>> since the first one in July 2009: >>>>>> >>>>>> https://groups.google.com/forum/?fromgroups#!forum/starstarhug >>>>>> >>>>>> Google Groups allows many configurations such as public/private, allows >>>>>> for joining by anyone or by invitation, allows posts to be moderated or >>>>>> open. For **HUG (Humdrum Users Group), I allow anyone to join. 
New >>>>>> members' posts are moderated, and when their first post is non-spam, I >>>>>> promote them to a full unmoderated member. >>>>>> >>>>>> I created an mei-l group: >>>>>> https://groups.google.com/forum/?fromgroups#!forum/mei-l >>>>>> which I can setup further if that is of interest (or this list could >>>>>> perhaps subscribe to the Paderborn one which would in effect allow for >>>>>> archiving and searchability of the current list). The group can be posted >>>>>> to online from that webpage, or via email from: >>>>>> starstarhug at googlegroups.com >>>>>> >>>>>> >>>>>> -=+Craig >>>>>> -- Peter Stadler Carl-Maria-von-Weber-Gesamtausgabe Arbeitsstelle Detmold Gartenstr. 20 D-32756 Detmold Tel. +49 5231 975-665 Fax: +49 5231 975-668 stadler at weber-gesamtausgabe.de www.weber-gesamtausgabe.de From andrew.hankinson at mail.mcgill.ca Thu Nov 8 11:43:11 2012 From: andrew.hankinson at mail.mcgill.ca (Andrew Hankinson) Date: Thu, 8 Nov 2012 11:43:11 +0100 Subject: [MEI-L] page sizes; multiple staff sizes; def. of point In-Reply-To: <7596_1352305706_509A8C29_7596_17_1_CAJ306HYE3kxQ+pWR4Lp_2r43Oqiw98w0xmhOFq47BRcvWg+c1w@mail.gmail.com> References: <16763_1351898411_5094552A_16763_59_1_D2A91356-8368-4ECC-82BA-FA832A515EA3@edirom.de> <E323666E-53AB-4F0B-808D-4269EE3D90D7@mail.mcgill.ca> <20121106235429.4mv0f8114w04gk4o@webmail.iu.edu> <CAPcjuFfuRvJJu3PAoiSSoFTs9uRpAaee3uw6nXtaa3h0VKm6TA@mail.gmail.com> <13149_1352301931_509A7D6B_13149_136_1_20121107102507.88v3r2aaas8c0kgw@webmail.iu.edu> <7596_1352305706_509A8C29_7596_17_1_CAJ306HYE3kxQ+pWR4Lp_2r43Oqiw98w0xmhOFq47BRcvWg+c1w@mail.gmail.com> Message-ID: <CAFD2E84-D88E-4E9D-8FBA-186CCDF5E1C8@mail.mcgill.ca> Perhaps I'm muddying the waters, but should we start looking at ways of further separating the musical structure from the actual appearance? More specifically, using CSS to control the appearance of elements, rather than interweaving the visual and semantic structure. 
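The separation Andrew proposes could work like a CSS cascade: one semantic encoding, plus per-medium style sheets merged over a shared default at render time. The sketch below is purely illustrative -- the property names, media types, and merge rule are invented for this example and are not part of MEI or any ratified proposal.

```python
# Toy illustration of per-medium style sheets layered over a single
# semantic encoding. All property names and values are invented.

BASE_STYLE = {"staff.size": "100%", "cue.size": "60%"}

MEDIA_STYLES = {
    "print":  {"page.margin": "15mm", "staff.size": "7mm"},   # physical units
    "tablet": {"page.margin": "24px", "staff.size": "32px"},  # display units
}

def resolve_style(medium):
    """Merge the medium-specific sheet over the base, cascade-style.
    Unknown media simply fall back to the base style."""
    style = dict(BASE_STYLE)
    style.update(MEDIA_STYLES.get(medium, {}))
    return style

print(resolve_style("print")["staff.size"])    # 7mm: print overrides the base
print(resolve_style("tablet")["page.margin"])  # 24px
```

The point of the sketch is that the encoding itself never carries "15mm" or "24px"; a renderer picks the sheet that matches its medium, which is exactly the print-versus-display split Andrew describes.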
For instance, page margins, staff sizes, cue sizes -- all of this could be specified in a different style sheet for different media: print, tablet, mobile phones, web browsers, etc. A print style sheet could specify in points or inches; a display stylesheet could specify in pixels, ems, or proportions. Different media will have different presentation needs, and if we're to make sure that MEI can operate in both the physical and digital worlds simultaneously, this question will become more important, not less. I'm not *completely* convinced of this since it does complicate lots of things, but I think it's worthy of at least a bit of discussion. -Andrew On 2012-11-07, at 5:27 PM, Laurent Pugin <laurent at music.mcgill.ca> wrote: > > > On Wed, Nov 7, 2012 at 10:25 AM, Byrd, Donald A. <donbyrd at indiana.edu> wrote: > On Tue, 6 Nov 2012 21:49:55 -0800, Craig Sapp <craigsapp at gmail.com> wrote: > Hi Don, > > On Tue, Nov 6, 2012 at 8:54 PM, Byrd, Donald A. <donbyrd at indiana.edu> wrote: > > > Finally (and I suspect MEI already handles this), I'd like to point out > that two sizes of staves -- "normal" and "cue-size" -- aren't always > enough; there are published performing editions that use three staff sizes. > (In fact, I wouldn't be surprised if editions with _four_ sizes exist, > though I don't know of any.) > > > I have seen at least three sizes in a score before, and this would > theoretically allow for four sizes: > > In a piano/instrumental score, the piano part typically has the > instrumental part displayed above it in a slightly smaller size. And I > have seen ossia parts for the instrumental staff which in turn would be > smaller than the instrumental staff size. So if the piano part also had an > ossia, then there would be four staff sizes, unless the ossia for the piano > is the same size as the instrumental part (which it probably should). > > Right. My list of CMN extremes > > http://www.informatics.indiana.edu/donbyrd/CMNExtremes.htm > > lists the J. C. 
Bach Concerto for Harpsichord or Piano and Strings in E-flat, Op. 7 no. 5 (Dobereiner ed., 1927), where the 3rd size appears briefly, for an ossia. I'm sure I've seen other instances but I can't recall any; if you have other(s) handy, I'd love to hear about 'em (though I'm not sure others on this list would). > > The number of possible sizes would be unlimited since it can be defined for every staff individually (in staffDef element) if necessary. Our proposal was nonetheless to have a cue-size for the most common cases where we have only one size of cue-size staves and to have it defined at a higher level (in scoreDef element). > > Laurent > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l -------------- next part -------------- An HTML attachment was scrubbed... URL: <http://lists.uni-paderborn.de/pipermail/mei-l/attachments/20121108/ac2706ad/attachment.html> From raffaeleviglianti at gmail.com Thu Nov 8 11:53:18 2012 From: raffaeleviglianti at gmail.com (Raffaele Viglianti) Date: Thu, 8 Nov 2012 10:53:18 +0000 Subject: [MEI-L] page sizes; multiple staff sizes; def. of point In-Reply-To: <CAFD2E84-D88E-4E9D-8FBA-186CCDF5E1C8@mail.mcgill.ca> References: <16763_1351898411_5094552A_16763_59_1_D2A91356-8368-4ECC-82BA-FA832A515EA3@edirom.de> <E323666E-53AB-4F0B-808D-4269EE3D90D7@mail.mcgill.ca> <20121106235429.4mv0f8114w04gk4o@webmail.iu.edu> <CAPcjuFfuRvJJu3PAoiSSoFTs9uRpAaee3uw6nXtaa3h0VKm6TA@mail.gmail.com> <13149_1352301931_509A7D6B_13149_136_1_20121107102507.88v3r2aaas8c0kgw@webmail.iu.edu> <7596_1352305706_509A8C29_7596_17_1_CAJ306HYE3kxQ+pWR4Lp_2r43Oqiw98w0xmhOFq47BRcvWg+c1w@mail.gmail.com> <CAFD2E84-D88E-4E9D-8FBA-186CCDF5E1C8@mail.mcgill.ca> Message-ID: <CAMyHAnO2xwxhW+PsteXdQugtjDx61EmCbnoUf3XfQXVr+sC6zg@mail.gmail.com> Dear all, I side entirely with Andrew on this one. 
I think real-world units become useful as part of the encoding only if they are describing a physical source (after all we want to use MEI for document encoding). Information about rendering and printing should live somewhere else. Best, Raffaele On Thu, Nov 8, 2012 at 10:43 AM, Andrew Hankinson < andrew.hankinson at mail.mcgill.ca> wrote: > Perhaps I'm muddying the waters, but should we start looking at ways of > further separating the musical structure from the actual appearance? More > specifically, using CSS to control the appearance of elements, rather than > interweaving the visual and semantic structure. > > For instance, page margins, staff sizes, cue sizes -- all of this could be > specified in a different style sheet for different media: print, tablet, > mobile phones, web browsers, etc. A print style sheet could specify in > points or inches; a display stylesheet could specify in pixels, ems, or > proportions. Different media will have different presentation needs, and if > we're to make sure that MEI can operate in both the physical and digital > worlds simultaneously, this question will become more important, not less. > > I'm not *completely* convinced of this since it does complicate lots of > things, but I think it's worthy of at least a bit of discussion. > > -Andrew > > On 2012-11-07, at 5:27 PM, Laurent Pugin <laurent at music.mcgill.ca> wrote: > > > > On Wed, Nov 7, 2012 at 10:25 AM, Byrd, Donald A. <donbyrd at indiana.edu>wrote: > >> On Tue, 6 Nov 2012 21:49:55 -0800, Craig Sapp <craigsapp at gmail.com> >> wrote: >> >>> Hi Don, >>> >>> On Tue, Nov 6, 2012 at 8:54 PM, Byrd, Donald A. <donbyrd at indiana.edu> >>> wrote: >>> >>> >>>> Finally (and I suspect MEI already handles this), I'd like to point out >>>> that two sizes of staves -- "normal" and "cue-size" -- aren't always >>>> enough; there are published performing editions that use three staff >>>> sizes. 
>>>> (In fact, I wouldn't be surprised if editions with _four_ sizes exist, >>>> though I don't know of any.) >>>> >>> >>> >>> I have seen at least three sizes in a score before, and this would >>> theoretically allow for four sizes: >>> >>> In a piano/instrumental score, the piano part typically has the >>> instrumental part displayed above it in a slightly smaller size. And I >>> have seen ossia parts for the instrumental staff which in turn would be >>> smaller than the instrumental staff size. So if the piano part also had >>> an >>> ossia, then there would be four staff sizes, unless the ossia for the >>> piano >>> is the same size as the instrumental part (which it probably should). >>> >> >> Right. My list of CMN extremes >> >> http://www.informatics.**indiana.edu/donbyrd/**CMNExtremes.htm<http://www.informatics.indiana.edu/donbyrd/CMNExtremes.htm> >> >> lists the J. C. Bach Concerto for Harpsichord or Piano and Strings in >> E-flat, Op. 7 no. 5 (Dobereiner ed., 1927), where the 3rd size appears >> briefly, for an ossia. I'm sure I've seen other instances but I can't >> recall any; if you have other(s) handy, I'd love to hear about 'em (though >> I'm not sure others on this list would). >> > > The number of possible sizes would be unlimited since it can be defined > for every staff individually (in staffDef element) if necessary. Our > proposal was nonetheless to have a cue-size for the most common cases where > we have only one size of cue-size staves and to have it defined at a higher > level (in scoreDef element). > > Laurent > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l > > > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: <http://lists.uni-paderborn.de/pipermail/mei-l/attachments/20121108/63e93292/attachment.html> From kepper at edirom.de Thu Nov 8 11:59:55 2012 From: kepper at edirom.de (Johannes Kepper) Date: Thu, 8 Nov 2012 11:59:55 +0100 Subject: [MEI-L] page sizes; multiple staff sizes; def. of point In-Reply-To: <CAFD2E84-D88E-4E9D-8FBA-186CCDF5E1C8@mail.mcgill.ca> References: <16763_1351898411_5094552A_16763_59_1_D2A91356-8368-4ECC-82BA-FA832A515EA3@edirom.de> <E323666E-53AB-4F0B-808D-4269EE3D90D7@mail.mcgill.ca> <20121106235429.4mv0f8114w04gk4o@webmail.iu.edu> <CAPcjuFfuRvJJu3PAoiSSoFTs9uRpAaee3uw6nXtaa3h0VKm6TA@mail.gmail.com> <13149_1352301931_509A7D6B_13149_136_1_20121107102507.88v3r2aaas8c0kgw@webmail.iu.edu> <7596_1352305706_509A8C29_7596_17_1_CAJ306HYE3kxQ+pWR4Lp_2r43Oqiw98w0xmhOFq47BRcvWg+c1w@mail.gmail.com> <CAFD2E84-D88E-4E9D-8FBA-186CCDF5E1C8@mail.mcgill.ca> Message-ID: <18729826-22DB-40C1-990B-FBF90F79881F@edirom.de> Hi Andrew, I appreciate your input, and I agree that it's worth discussing this. The problem I see is where to draw the line. Is this strictly restricted to margins, or does it affect the music as well? Can it make a rendering more dense, to fit into a specific device, or can it move system and page breaks as necessary? Can it be used to switch stem directions, or change clefs for convenience? In New Orleans, Laurent, Perry and I had a brief discussion on the prospective layout tree, and on ways to mimic a whole lot of its functionality with existing MEI. We came to the conclusion that we might want to rethink the layout tree proposal, to better match the still missing portions. This would probably still go in the direction of a page-based approach to MEI, and maybe this fits well with what you suggested. I'd suggest that we merge the two discussions. The only thing I wonder is if it is really of interest to the whole of MEI-L right now, or if we should come up with a proposal to MEI-L as soon as we have made up our minds first? 
Could some of the (most welcome!) lurkers on the list give some feedback on that? Would you like to follow a probably very intense discussion with lots of technical details, erroneous paths and changing minds, or would you prefer to get a summary in a couple of weeks? I would really like to get more traffic on MEI-L and discuss these things in the broadest public we have, but it might be somewhat distracting... In case we get no feedback from you out there, we will discuss this on this list, but eventually also during the virtual Technical Team meeting, which is scheduled for next Wednesday. If there's anybody interested in participating in that, please contact Perry or me beforehand to make sure you get the link... Best, Johannes On 08.11.2012, at 11:43, Andrew Hankinson <andrew.hankinson at mail.mcgill.ca> wrote: > Perhaps I'm muddying the waters, but should we start looking at ways of further separating the musical structure from the actual appearance? More specifically, using CSS to control the appearance of elements, rather than interweaving the visual and semantic structure. > > For instance, page margins, staff sizes, cue sizes -- all of this could be specified in a different style sheet for different media: print, tablet, mobile phones, web browsers, etc. A print style sheet could specify in points or inches; a display stylesheet could specify in pixels, ems, or proportions. Different media will have different presentation needs, and if we're to make sure that MEI can operate in both the physical and digital worlds simultaneously, this question will become more important, not less. > > I'm not *completely* convinced of this since it does complicate lots of things, but I think it's worthy of at least a bit of discussion. > > -Andrew > > On 2012-11-07, at 5:27 PM, Laurent Pugin <laurent at music.mcgill.ca> wrote: > >> >> >> On Wed, Nov 7, 2012 at 10:25 AM, Byrd, Donald A. 
<donbyrd at indiana.edu> wrote: >> On Tue, 6 Nov 2012 21:49:55 -0800, Craig Sapp <craigsapp at gmail.com> wrote: >> Hi Don, >> >> On Tue, Nov 6, 2012 at 8:54 PM, Byrd, Donald A. <donbyrd at indiana.edu> wrote: >> >> >> Finally (and I suspect MEI already handles this), I'd like to point out >> that two sizes of staves -- "normal" and "cue-size" -- aren't always >> enough; there are published performing editions that use three staff sizes. >> (In fact, I wouldn't be surprised if editions with _four_ sizes exist, >> though I don't know of any.) >> >> >> I have seen at least three sizes in a score before, and this would >> theoretically allow for four sizes: >> >> In a piano/instrumental score, the piano part typically has the >> instrumental part displayed above it in a slightly smaller size. And I >> have seen ossia parts for the instrumental staff which in turn would be >> smaller than the instrumental staff size. So if the piano part also had an >> ossia, then there would be four staff sizes, unless the ossia for the piano >> is the same size as the instrumental part (which it probably should). >> >> Right. My list of CMN extremes >> >> http://www.informatics.indiana.edu/donbyrd/CMNExtremes.htm >> >> lists the J. C. Bach Concerto for Harpsichord or Piano and Strings in E-flat, Op. 7 no. 5 (Dobereiner ed., 1927), where the 3rd size appears briefly, for an ossia. I'm sure I've seen other instances but I can't recall any; if you have other(s) handy, I'd love to hear about 'em (though I'm not sure others on this list would). >> >> The number of possible sizes would be unlimited since it can be defined for every staff individually (in staffDef element) if necessary. Our proposal was nonetheless to have a cue-size for the most common cases where we have only one size of cue-size staves and to have it defined at a higher level (in scoreDef element). 
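As a rough sketch of the scheme Laurent describes (a document-wide cue size declared once in scoreDef, with an individual staff overriding its size in its own staffDef), one could imagine a fragment like the one built below. The @cue.size and @scale attribute names are assumptions made for illustration, not necessarily the attributes an actual proposal would ratify.

```python
import xml.etree.ElementTree as ET

# Skeletal scoreDef: one document-wide cue-staff size, plus a per-staff
# override on staff 3 (e.g. an ossia staff at a third size).
# Attribute names "cue.size" and "scale" are illustrative assumptions.
score_def = ET.Element("scoreDef", {"cue.size": "60%"})
staff_grp = ET.SubElement(score_def, "staffGrp")
ET.SubElement(staff_grp, "staffDef", {"n": "1", "lines": "5"})                  # normal size
ET.SubElement(staff_grp, "staffDef", {"n": "2", "lines": "5"})                  # normal size
ET.SubElement(staff_grp, "staffDef", {"n": "3", "lines": "5", "scale": "75%"})  # ossia override

print(ET.tostring(score_def, encoding="unicode"))
```

A renderer would fall back from a staff-level @scale to the scoreDef-level default, which keeps the common single-cue-size case down to one declaration while still allowing the unlimited per-staff sizes Laurent mentions.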
>> >> Laurent >> _______________________________________________ >> mei-l mailing list >> mei-l at lists.uni-paderborn.de >> https://lists.uni-paderborn.de/mailman/listinfo/mei-l > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l From bohl at edirom.de Thu Nov 8 12:05:48 2012 From: bohl at edirom.de (Benjamin W. Bohl) Date: Thu, 08 Nov 2012 12:05:48 +0100 Subject: [MEI-L] Re: Search Message-ID: <0LuFoB-1TNrXH43b5-011GIA@mrelayeu.kundenserver.de> Hi all, As we learned from the UPB today, they won't make any changes to the current mailman installation as they are preparing a new version. Looking at the features of the latest mailman release doesn't give much hope either (even the mailman-dev list uses an external archiver for their searchable archive...), although it provides a hook for external archivers. I'll investigate a little though, so as to know whether the UPB provides a solution with their new mailman installation. Cheers, Benjamin ----- Reply message ----- From: "Peter Stadler" <stadler at edirom.de> To: "Music Encoding Initiative" <mei-l at lists.uni-paderborn.de> Subject: [MEI-L] Search Date: Wed., Nov. 7, 2012 18:59 Just for the record: I already volunteered to bring MEI-L to Markmail and/or Nabble (cf. https://lists.uni-paderborn.de/pipermail/mei-l/2011/000373.html) but my efforts got stuck somehow -- sorry for that. Actually, I do agree that a native search interface for mailman would be the best thing -- if this doesn't work out I'd be happy to revive the nabble/markmail integration. (All I'd need though is the complete mail archive from the list owner.) Best Peter On 06.11.2012, at 05:15, Johannes Kepper <kepper at edirom.de> wrote: > Hi Andrew, > > I've never looked at Markmail etc. without using an adblocker, so I wasn't aware of this problem. 
But in my understanding, such mirrors would be additional ways of accessing the archive, and as with TEI, they wouldn't be "official". So I don't see this as a big problem. If someone wants to use their UI, he has to live with their spam. > > Benjamin contacted Paderborn's IT services today regarding the installation of additional plugins to mailman, which could provide a search interface to the existing mailer (thanks for that, Benni). Normally, they're quite supportive, so we should give them a couple of days. If possible, I would like to keep the official mailer in their hands, as we don't have to care about technical issues, changing policies or business models etc. > > We will keep you posted regarding the possibilities in Paderborn. > > jo > > On 06.11.2012, at 12:01, Andrew Hankinson wrote: > >> Markmail, OSDir, Nabble, etc. wrap mailing list content in ads to make money on search engine traffic. Personally I find these mailing list aggregators are pretty frustrating, since they add a whole bunch of hits to a Google search, often of the exact same message. But maybe this is a case where something is better than nothing. >> >> Markmail has a content policy that we should read and see if there are any problems with it. >> >> http://markmail.org/docs/content-policy.xqy >> >> -Andrew >> >> On 2012-11-06, at 10:31 AM, Benjamin Wolff Bohl <bohl at edirom.de> wrote: >> >>> Hi all, >>> please ignore the last post, I missed that this specific archive misses the search option -- sorry >>> >>> Benjamin >>> >>> On 06.11.2012, at 10:21, Benjamin Wolff Bohl wrote: >>> >>>> Hi Johannes, >>>> you being the admin of the list should have the possibility to configure mailman to maintain an archive, resp. to set privileges for archive access, being either subscribers only or public. >>>> >>>> Cheers, >>>> Benjamin >>>> >>>> On 06.11.2012, at 10:10, Johannes Kepper wrote: >>>> >>>>> From what I see, Google Groups still seems quite traditional. 
I looked up what TEI used as mirror, and particularly >>>>> >>>>> http://markmail.org/search/?q=list%3Aedu.brown.listserv.tei-l >>>>> >>>>> seems to be a great tool. But my understanding is that the current mailman instance in Paderborn stays the official mailing list, and all other sites are just mirrors of that (which means that all mails are send through the current mei-l). If we agree on that, there is no striking argument to provide only one such additional archive? >>>>> >>>>> Johannes >>>>> >>>>> >>>>> Am 06.11.2012 um 01:51 schrieb Eleanor Selfridge-Field: >>>>> >>>>>> Why starstarhug for mei? >>>>>> >>>>>> >>>>>> >>>>>> -----Original Message----- >>>>>> From: mei-l-bounces at lists.uni-paderborn.de >>>>>> [mailto:mei-l-bounces at lists.uni-paderborn.de] On Behalf Of Craig Sapp >>>>>> Sent: Sunday, November 04, 2012 4:38 PM >>>>>> To: Music Encoding Initiative >>>>>> Subject: Re: [MEI-L] Search >>>>>> >>>>>> Hi Johannes, >>>>>> >>>>>> On Sun, Nov 4, 2012 at 3:49 PM, Johannes Kepper <kepper at edirom.de> wrote: >>>>>>> >>>>>>> I'm afraid that Paderborn's list server doesn't offer more than that. I >>>>>> know that TEI-L is mirrored on some more accessible listservs. Basically >>>>>> all mails are still served by the original list, but would also be >>>>>> archived somewhere else. I don't know the software they use right now, >>>>>> though. Maybe that's something we want to offer for MEI-L as well. >>>>>> Opinions? >>>>>> >>>>>> My opinion is that lists should be hosted by Google Groups. In particular >>>>>> this allows for a web interface to the posting which is searchable. Here >>>>>> is the one I set up for Humdrum a few years ago which has all postings >>>>>> since the first one in July 2009: >>>>>> >>>>>> https://groups.google.com/forum/?fromgroups#!forum/starstarhug >>>>>> >>>>>> Google Groups allows many configurations such as public/private, allows >>>>>> for joining by anyone or by invitation, allows posts to be moderated or >>>>>> open. 
For **HUG (Humdrum Users Group), I allow anyone to join. New >>>>>> members' posts are moderated, and when their first post is non-spam, I >>>>>> promote them to a full unmoderated member. >>>>>> >>>>>> I created an mei-l group: >>>>>> https://groups.google.com/forum/?fromgroups#!forum/mei-l >>>>>> which I can setup further if that is of interest (or this list could >>>>>> perhaps subscribe to the Paderborn one which would in effect allow for >>>>>> archiving and searchability of the current list). The group can be posted >>>>>> to online from that webpage, or via email from: >>>>>> starstarhug at googlegroups.com >>>>>> >>>>>> >>>>>> -=+Craig >>>>>> -- Peter Stadler Carl-Maria-von-Weber-Gesamtausgabe Arbeitsstelle Detmold Gartenstr. 20 D-32756 Detmold Tel. +49 5231 975-665 Fax: +49 5231 975-668 stadler at weber-gesamtausgabe.de www.weber-gesamtausgabe.de _______________________________________________ mei-l mailing list mei-l at lists.uni-paderborn.de https://lists.uni-paderborn.de/mailman/listinfo/mei-l -------------- next part -------------- An HTML attachment was scrubbed... URL: <http://lists.uni-paderborn.de/pipermail/mei-l/attachments/20121108/d279264c/attachment.html> From Tore.Simonsen at nmh.no Thu Nov 8 12:42:53 2012 From: Tore.Simonsen at nmh.no (Tore Simonsen) Date: Thu, 8 Nov 2012 11:42:53 +0000 Subject: [MEI-L] page sizes; multiple staff sizes; def. 
of point In-Reply-To: <18729826-22DB-40C1-990B-FBF90F79881F@edirom.de> References: <16763_1351898411_5094552A_16763_59_1_D2A91356-8368-4ECC-82BA-FA832A515EA3@edirom.de> <E323666E-53AB-4F0B-808D-4269EE3D90D7@mail.mcgill.ca> <20121106235429.4mv0f8114w04gk4o@webmail.iu.edu> <CAPcjuFfuRvJJu3PAoiSSoFTs9uRpAaee3uw6nXtaa3h0VKm6TA@mail.gmail.com> <13149_1352301931_509A7D6B_13149_136_1_20121107102507.88v3r2aaas8c0kgw@webmail.iu.edu> <7596_1352305706_509A8C29_7596_17_1_CAJ306HYE3kxQ+pWR4Lp_2r43Oqiw98w0xmhOFq47BRcvWg+c1w@mail.gmail.com> <CAFD2E84-D88E-4E9D-8FBA-186CCDF5E1C8@mail.mcgill.ca> <18729826-22DB-40C1-990B-FBF90F79881F@edirom.de> Message-ID: <5E7B972ED99C3D4BB21AC334802ADF087CFC41A5@Pluto.nmh.int> Dear Johannes and list, I for one would like to be able to follow the technical discussion as close as possible - this is how I learn more about MEI. all the best, Tore (lurker) ------------------------------------- Tore Simonsen, Ph.D Associate Professor Norwegian Academy of Music -----Original Message----- From: mei-l-bounces+tore.simonsen=nmh.no at lists.uni-paderborn.de [mailto:mei-l-bounces+tore.simonsen=nmh.no at lists.uni-paderborn.de] On Behalf Of Johannes Kepper Sent: 8. november 2012 12:00 To: Music Encoding Initiative Subject: Re: [MEI-L] page sizes; multiple staff sizes; def. of point Hi Andrew, I appreciate your input, and I agree that it's worth to discuss this. The problem I see is where to draw the line. Is this strictly restricted to margins, or does it affect the music as well? Can it make a rendering more dense, to fit into a specific device, or can it move system and page breaks as necessary? Can it be used to switch stem directions, or change clefs for convenience? In New Orleans, Laurent, Perry and me had a brief discussion on the prospective layout tree, and that there are ways to mimic a whole lot of its functionality with existing MEI. 
We came to the conclusion that we might want to rethink the layout tree proposal, to better match the still missing portions. This would probably still go in the direction of a page-based approach to MEI, and maybe this fits well with what you suggested. I'd suggest that we merge the two discussions. The only thing I wonder is if it is really of interest to whole MEI-L right now, or if we should come up with a proposal to MEI-L as soon as we have made up our minds first? Could some of the (most welcome!) lurkers on the list give some feedback on that? Would you like to follow a probably very intense discussion with lots of technical details, erroneous paths and changing minds, or would you prefer to get a summary in a couple of weeks? I would really like to get more traffic on MEI-L and discuss these things in the broadest public we have, but it might be somewhat distracting... In case we get no feedback from you out there, we will discuss this on this list, but eventually also during the virtual Technical Team meeting, which is scheduled for next Wednesday. If there's anybody interested in participating with that, please contact Perry or me beforehand to make sure you get the link... Best, Johannes Am 08.11.2012 um 11:43 schrieb Andrew Hankinson <andrew.hankinson at mail.mcgill.ca>: > Perhaps I'm muddying the waters, but should we start looking at ways of further separating the musical structure from the actual appearance? More specifically, using CSS to control the appearance of elements, rather than interweaving the visual and semantic structure. > > For instance, page margins, staff sizes, cue sizes -- all of this could be specified in a different style sheet for different media: print, tablet, mobile phones, web browsers, etc. A print style sheet could specify in points or inches; a display stylesheet could specify in pixels, ems, or proportions. 
Different media will have different presentation needs, and if we're to make sure that MEI can operate in both the physical and digital worlds simultaneously, this question will become more important, not less. > > I'm not *completely* convinced of this since it does complicate lots of things, but I think it's worthy of at least a bit of discussion. > > -Andrew > > On 2012-11-07, at 5:27 PM, Laurent Pugin <laurent at music.mcgill.ca> wrote: > >> >> >> On Wed, Nov 7, 2012 at 10:25 AM, Byrd, Donald A. <donbyrd at indiana.edu> wrote: >> On Tue, 6 Nov 2012 21:49:55 -0800, Craig Sapp <craigsapp at gmail.com> wrote: >> Hi Don, >> >> On Tue, Nov 6, 2012 at 8:54 PM, Byrd, Donald A. <donbyrd at indiana.edu> wrote: >> >> >> Finally (and I suspect MEI already handles this), I'd like to point >> out that two sizes of staves -- "normal" and "cue-size" -- aren't >> always enough; there are published performing editions that use three staff sizes. >> (In fact, I wouldn't be surprised if editions with _four_ sizes >> exist, though I don't know of any.) >> >> >> I have seen at least three sizes in a score before, and this would >> theoretically allow for four sizes: >> >> In a piano/instrumental score, the piano part typically has the >> instrumental part displayed above it in a slightly smaller size. And >> I have seen ossia parts for the instrumental staff which in turn >> would be smaller than the instrumental staff size. So if the piano >> part also had an ossia, then there would be four staff sizes, unless >> the ossia for the piano is the same size as the instrumental part (which it probably should). >> >> Right. My list of CMN extremes >> >> http://www.informatics.indiana.edu/donbyrd/CMNExtremes.htm >> >> lists the J. C. Bach Concerto for Harpsichord or Piano and Strings in E-flat, Op. 7 no. 5 (Dobereiner ed., 1927), where the 3rd size appears briefly, for an ossia. 
I'm sure I've seen other instances but I can't recall any; if you have other(s) handy, I'd love to hear about 'em (though I'm not sure others on this list would). >> >> The number of possible sizes would be unlimited since it can be defined for every staff individually (in staffDef element) if necessary. Our proposal was nonetheless to have a cue-size for the most common cases where we have only one size of cue-size staves and to have it defined at a higher level (in scoreDef element). >> >> Laurent >> _______________________________________________ >> mei-l mailing list >> mei-l at lists.uni-paderborn.de >> https://lists.uni-paderborn.de/mailman/listinfo/mei-l > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l _______________________________________________ mei-l mailing list mei-l at lists.uni-paderborn.de https://lists.uni-paderborn.de/mailman/listinfo/mei-l From k.annamaria at web.de Thu Nov 8 12:57:07 2012 From: k.annamaria at web.de (Anna Maria Komprecht) Date: Thu, 8 Nov 2012 12:57:07 +0100 Subject: [MEI-L] page sizes; multiple staff sizes; def. 
of point In-Reply-To: <5E7B972ED99C3D4BB21AC334802ADF087CFC41A5@Pluto.nmh.int> References: <16763_1351898411_5094552A_16763_59_1_D2A91356-8368-4ECC-82BA-FA832A515EA3@edirom.de> <E323666E-53AB-4F0B-808D-4269EE3D90D7@mail.mcgill.ca> <20121106235429.4mv0f8114w04gk4o@webmail.iu.edu> <CAPcjuFfuRvJJu3PAoiSSoFTs9uRpAaee3uw6nXtaa3h0VKm6TA@mail.gmail.com> <13149_1352301931_509A7D6B_13149_136_1_20121107102507.88v3r2aaas8c0kgw@webmail.iu.edu> <7596_1352305706_509A8C29_7596_17_1_CAJ306HYE3kxQ+pWR4Lp_2r43Oqiw98w0xmhOFq47BRcvWg+c1w@mail.gmail.com> <CAFD2E84-D88E-4E9D-8FBA-186CCDF5E1C8@mail.mcgill.ca> <18729826-22DB-40C1-990B-FBF90F79881F@edirom.de> <5E7B972ED99C3D4BB21AC334802ADF087CFC41A5@Pluto.nmh.int> Message-ID: <13DC9B27-7E07-4336-98A8-DD3078EE3A28@web.de> Dear all, I am really interested in this discussion as well and I'd really appreciate to read and go through your varying point of views and follow the track of your debate. best, anna On 8 Nov 2012, at 12:42, Tore Simonsen <Tore.Simonsen at nmh.no> wrote: > Dear Johannes and list, > > I for one would like to be able to follow the technical discussion as close as possible - this is how I learn more about MEI. > > all the best, > > Tore (lurker) > > ------------------------------------- > Tore Simonsen, Ph.D > Associate Professor > Norwegian Academy of Music > > > > > -----Original Message----- > From: mei-l-bounces+tore.simonsen=nmh.no at lists.uni-paderborn.de [mailto:mei-l-bounces+tore.simonsen=nmh.no at lists.uni-paderborn.de] On Behalf Of Johannes Kepper > Sent: 8. november 2012 12:00 > To: Music Encoding Initiative > Subject: Re: [MEI-L] page sizes; multiple staff sizes; def. of point > > Hi Andrew, > > I appreciate your input, and I agree that it's worth to discuss this. The problem I see is where to draw the line. Is this strictly restricted to margins, or does it affect the music as well? Can it make a rendering more dense, to fit into a specific device, or can it move system and page breaks as necessary? 
Can it be used to switch stem directions, or change clefs for convenience? > > In New Orleans, Laurent, Perry and me had a brief discussion on the prospective layout tree, and that there are ways to mimic a whole lot of its functionality with existing MEI. We came to the conclusion that we might want to rethink the layout tree proposal, to better match the still missing portions. This would probably still go in the direction of a page-based approach to MEI, and maybe this fits well with what you suggested. > > I'd suggest that we merge the two discussions. The only thing I wonder is if it is really of interest to whole MEI-L right now, or if we should come up with a proposal to MEI-L as soon as we have made up our minds first? Could some of the (most welcome!) lurkers on the list give some feedback on that? Would you like to follow a probably very intense discussion with lots of technical details, erroneous paths and changing minds, or would you prefer to get a summary in a couple of weeks? I would really like to get more traffic on MEI-L and discuss these things in the broadest public we have, but it might be somewhat distracting... In case we get no feedback from you out there, we will discuss this on this list, but eventually also during the virtual Technical Team meeting, which is scheduled for next Wednesday. If there's anybody interested in participating with that, please contact Perry or me beforehand to make sure you get the link... > > Best, > Johannes > > > > Am 08.11.2012 um 11:43 schrieb Andrew Hankinson <andrew.hankinson at mail.mcgill.ca>: > >> Perhaps I'm muddying the waters, but should we start looking at ways of further separating the musical structure from the actual appearance? More specifically, using CSS to control the appearance of elements, rather than interweaving the visual and semantic structure. 
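[Editorial note: to make the stylesheet idea quoted above concrete, here is a purely hypothetical sketch. No CSS binding for MEI exists, so every selector and property name below is an illustrative assumption, not part of any schema or implementation.]

```css
/* Hypothetical only: MEI has no CSS binding; the selectors and
   properties below are invented to illustrate per-medium stylesheets. */
staffDef        { scale: 100%; }   /* normal staff size */
staffDef.ossia  { scale: 60%; }    /* a third, smaller size */
note.cue        { scale: 75%; }    /* cue-sized notes */

@media print {
  page { width: 210mm; height: 297mm; margin: 20mm; }  /* real-world units */
}

@media screen {
  page { width: 100vw; margin: 2em; }  /* proportional units for displays */
}
```

Nothing here would validate or render today; the point is only that medium-specific sizing could live outside the encoded musical text, as Andrew suggests.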
>> >> For instance, page margins, staff sizes, cue sizes -- all of this could be specified in a different style sheet for different media: print, tablet, mobile phones, web browsers, etc. A print style sheet could specify in points or inches; a display stylesheet could specify in pixels, ems, or proportions. Different media will have different presentation needs, and if we're to make sure that MEI can operate in both the physical and digital worlds simultaneously, this question will become more important, not less. >> >> I'm not *completely* convinced of this since it does complicate lots of things, but I think it's worthy of at least a bit of discussion. >> >> -Andrew >> >> On 2012-11-07, at 5:27 PM, Laurent Pugin <laurent at music.mcgill.ca> wrote: >> >>> >>> >>> On Wed, Nov 7, 2012 at 10:25 AM, Byrd, Donald A. <donbyrd at indiana.edu> wrote: >>> On Tue, 6 Nov 2012 21:49:55 -0800, Craig Sapp <craigsapp at gmail.com> wrote: >>> Hi Don, >>> >>> On Tue, Nov 6, 2012 at 8:54 PM, Byrd, Donald A. <donbyrd at indiana.edu> wrote: >>> >>> >>> Finally (and I suspect MEI already handles this), I'd like to point >>> out that two sizes of staves -- "normal" and "cue-size" -- aren't >>> always enough; there are published performing editions that use three staff sizes. >>> (In fact, I wouldn't be surprised if editions with _four_ sizes >>> exist, though I don't know of any.) >>> >>> >>> I have seen at least three sizes in a score before, and this would >>> theoretically allow for four sizes: >>> >>> In a piano/instrumental score, the piano part typically has the >>> instrumental part displayed above it in a slightly smaller size. And >>> I have seen ossia parts for the instrumental staff which in turn >>> would be smaller than the instrumental staff size. So if the piano >>> part also had an ossia, then there would be four staff sizes, unless >>> the ossia for the piano is the same size as the instrumental part (which it probably should). >>> >>> Right. 
My list of CMN extremes >>> >>> http://www.informatics.indiana.edu/donbyrd/CMNExtremes.htm >>> >>> lists the J. C. Bach Concerto for Harpsichord or Piano and Strings in E-flat, Op. 7 no. 5 (Dobereiner ed., 1927), where the 3rd size appears briefly, for an ossia. I'm sure I've seen other instances but I can't recall any; if you have other(s) handy, I'd love to hear about 'em (though I'm not sure others on this list would). >>> >>> The number of possible sizes would be unlimited since it can be defined for every staff individually (in staffDef element) if necessary. Our proposal was nonetheless to have a cue-size for the most common cases where we have only one size of cue-size staves and to have it defined at a higher level (in scoreDef element). >>> >>> Laurent >>> _______________________________________________ >>> mei-l mailing list >>> mei-l at lists.uni-paderborn.de >>> https://lists.uni-paderborn.de/mailman/listinfo/mei-l >> >> _______________________________________________ >> mei-l mailing list >> mei-l at lists.uni-paderborn.de >> https://lists.uni-paderborn.de/mailman/listinfo/mei-l > > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l From bohl at edirom.de Thu Nov 8 13:15:43 2012 From: bohl at edirom.de (Benjamin Wolff Bohl) Date: Thu, 08 Nov 2012 13:15:43 +0100 Subject: [MEI-L] Antw.: Search In-Reply-To: <0LuFoB-1TNrXH43b5-011GIA@mrelayeu.kundenserver.de> References: <0LuFoB-1TNrXH43b5-011GIA@mrelayeu.kundenserver.de> Message-ID: <509BA26F.9080408@edirom.de> Hi me again, UPB is currently testing mailman 3. 
Concerning search and archiver, the beta release website says: "Mailman 3 exposes a REST administrative interface to the web, communicates with archivers via decoupled interfaces, and leaves summary, search, and retrieval of archived messages to a separate application (a simple implementation is provided)" Although UPB can't provide us with a switch date, we might still wait and hope, before we set up anything of our own ;-) Meanwhile use Thomas Weber's suggestion for searching the archive. cheers, Benjamin Benjamin Wolff Bohl *********************************************************** Edirom - Projekt "Digitale Musikedition" Musikwissenschaftliches Seminar Detmold/Paderborn Gartenstraße 20 D -- 32756 Detmold Tel. +49 (0) 5231 / 975-669 Fax: +49 (0) 5231 / 975-668 http://www.edirom.de *********************************************************** Am 08.11.2012 12:05, schrieb Benjamin W. Bohl: > Hi all, > As we learned from the UPB today, they won't make any changes to the > current mailman installation as they are preparing a new version. > Looking at the features of the latest mailman release doesn't make > much hope either (even the mailman-dev list uses an external archiver > for their searchable archive...), although it provides a hook for > external archivers. > I'll investigate a little though, as to know whether the UPB provides > a solution with their new mailman installation. > > Cheers, > Benjamin > > ----- Reply message ----- > Von: "Peter Stadler" <stadler at edirom.de> > An: "Music Encoding Initiative" <mei-l at lists.uni-paderborn.de> > Betreff: [MEI-L] Search > Datum: Mi., Nov. 7, 2012 18:59 > > > Just for the record: > > I already volunteered to bring MEI-L to Markmail and/or Nabble (cf. > https://lists.uni-paderborn.de/pipermail/mei-l/2011/000373.html) but > my efforts got stuck somehow ... sorry for that.
> > Actually, I do agree that a native search interface for mailman would > be the best thing -- if this doesn't work out I'd be happy to revive > the nabble/markmail integration. (All I'd need though is the complete > mail archive from the list owner.) > > Best Peter > > > Am 06.11.2012 um 05:15 schrieb Johannes Kepper <kepper at edirom.de>: > > Hi Andrew, > > > > I've never looked at Markmail etc. without using an adblocker, so I > wasn't aware of this problem. But in my understanding, such mirrors > would be additional ways of accessing the archive, and as with TEI, > they wouldn't be "official". So I don't see this as a big problem. If > someone wants to use their UI, he has to live with their spam. > > > > Benjamin contacted Paderborn's IT services today regarding the > installation of additional plugins to mailman, which could provide a > search interface to the existing mailer (thanks for that, Benni). > Normally, they're quite supportive, so we should give them a couple of > days. If possible, I would like to keep the official mailer in their > hands, as we don't have to care about technical issues, changing > policies or business models etc. > > > > We will keep you posted regarding the possibilities in Paderborn. > > > > jo > > > > Am 06.11.2012 um 12:01 schrieb Andrew Hankinson: > > > >> Markmail, OSDir, Nabble, etc. wrap mailing list content in ads to > make money on search engine traffic. Personally I find these mailing > list aggregators are pretty frustrating, since they add a whole bunch > of hits to a Google search, often of the exact same message. But maybe > this is a case where something is better than nothing. > >> > >> Markmail has a content policy that we should read and see if there > are any problems with it.
> >> > >> http://markmail.org/docs/content-policy.xqy > >> > >> -Andrew > >> > >> On 2012-11-06, at 10:31 AM, Benjamin Wolff Bohl <bohl at edirom.de> wrote: > >> > >>> Hi all, > >>> please ignore the last post, I missed that this specific archive > misses the search option... sorry > >>> > >>> Benjamin > >>> > >>> Am 06.11.2012 um 10:21 schrieb Benjamin Wolff Bohl: > >>> > >>>> Hi Johannes, > >>>> you being the admin of the list should have the possibility to > configure mailman to maintain an archive, resp. to set privileges for > archive access, being either subscribers only or public. > >>>> > >>>> Cheers, > >>>> Benjamin > >>>> > >>>> Am 06.11.2012 um 10:10 schrieb Johannes Kepper: > >>>> > >>>>> From what I see, Google Groups still seems quite traditional. I > looked up what TEI used as mirror, and particularly > >>>>> > >>>>> http://markmail.org/search/?q=list%3Aedu.brown.listserv.tei-l > >>>>> > >>>>> seems to be a great tool. But my understanding is that the > current mailman instance in Paderborn stays the official mailing list, > and all other sites are just mirrors of that (which means that all > mails are send through the current mei-l). If we agree on that, there > is no striking argument to provide only one such additional archive... > >>>>> > >>>>> Johannes > >>>>> > >>>>> > >>>>> Am 06.11.2012 um 01:51 schrieb Eleanor Selfridge-Field: > >>>>> > >>>>>> Why starstarhug for mei? > >>>>>> > >>>>>> > >>>>>> > >>>>>> -----Original Message----- > >>>>>> From: mei-l-bounces at lists.uni-paderborn.de > >>>>>> [mailto:mei-l-bounces at lists.uni-paderborn.de] On Behalf Of > Craig Sapp > >>>>>> Sent: Sunday, November 04, 2012 4:38 PM > >>>>>> To: Music Encoding Initiative > >>>>>> Subject: Re: [MEI-L] Search > >>>>>> > >>>>>> Hi Johannes, > >>>>>> > >>>>>> On Sun, Nov 4, 2012 at 3:49 PM, Johannes Kepper > <kepper at edirom.de> wrote: > >>>>>>> > >>>>>>> I'm afraid that Paderborn's list server doesn't offer more > than that. 
I > >>>>>> know that TEI-L is mirrored on some more accessible listservs. > Basically > >>>>>> all mails are still served by the original list, but would also be > >>>>>> archived somewhere else. I don't know the software they use > right now, > >>>>>> though. Maybe that's something we want to offer for MEI-L as well. > >>>>>> Opinions? > >>>>>> > >>>>>> My opinion is that lists should be hosted by Google Groups. In > particular > >>>>>> this allows for a web interface to the posting which is > searchable. Here > >>>>>> is the one I set up for Humdrum a few years ago which has all > postings > >>>>>> since the first one in July 2009: > >>>>>> > >>>>>> https://groups.google.com/forum/?fromgroups#!forum/starstarhug > >>>>>> > >>>>>> Google Groups allows many configurations such as > public/private, allows > >>>>>> for joining by anyone or by invitation, allows posts to be > moderated or > >>>>>> open. For **HUG (Humdrum Users Group), I allow anyone to join. > New > >>>>>> members' posts are moderated, and when their first post is > non-spam, I > >>>>>> promote them to a full unmoderated member. > >>>>>> > >>>>>> I created an mei-l group: > >>>>>> https://groups.google.com/forum/?fromgroups#!forum/mei-l > >>>>>> which I can setup further if that is of interest (or this list > could > >>>>>> perhaps subscribe to the Paderborn one which would in effect > allow for > >>>>>> archiving and searchability of the current list). The group > can be posted > >>>>>> to online from that webpage, or via email from: > >>>>>> starstarhug at googlegroups.com > >>>>>> > >>>>>> > >>>>>> -=+Craig > >>>>>> > > > -- > Peter Stadler > Carl-Maria-von-Weber-Gesamtausgabe > Arbeitsstelle Detmold > Gartenstr. 20 > D-32756 Detmold > Tel. 
+49 5231 975-665 > Fax: +49 5231 975-668 > stadler at weber-gesamtausgabe.de > www.weber-gesamtausgabe.de > > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l > > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l -------------- next part -------------- An HTML attachment was scrubbed... URL: <http://lists.uni-paderborn.de/pipermail/mei-l/attachments/20121108/92255bff/attachment.html> From laurent at music.mcgill.ca Thu Nov 8 14:12:04 2012 From: laurent at music.mcgill.ca (Laurent Pugin) Date: Thu, 8 Nov 2012 08:12:04 -0500 Subject: [MEI-L] page sizes; multiple staff sizes; def. of point In-Reply-To: <20604_1352372026_509B8F39_20604_71_1_CAMyHAnO2xwxhW+PsteXdQugtjDx61EmCbnoUf3XfQXVr+sC6zg@mail.gmail.com> References: <16763_1351898411_5094552A_16763_59_1_D2A91356-8368-4ECC-82BA-FA832A515EA3@edirom.de> <E323666E-53AB-4F0B-808D-4269EE3D90D7@mail.mcgill.ca> <20121106235429.4mv0f8114w04gk4o@webmail.iu.edu> <CAPcjuFfuRvJJu3PAoiSSoFTs9uRpAaee3uw6nXtaa3h0VKm6TA@mail.gmail.com> <13149_1352301931_509A7D6B_13149_136_1_20121107102507.88v3r2aaas8c0kgw@webmail.iu.edu> <7596_1352305706_509A8C29_7596_17_1_CAJ306HYE3kxQ+pWR4Lp_2r43Oqiw98w0xmhOFq47BRcvWg+c1w@mail.gmail.com> <CAFD2E84-D88E-4E9D-8FBA-186CCDF5E1C8@mail.mcgill.ca> <20604_1352372026_509B8F39_20604_71_1_CAMyHAnO2xwxhW+PsteXdQugtjDx61EmCbnoUf3XfQXVr+sC6zg@mail.gmail.com> Message-ID: <CAJ306HaAum0i-nG2q0Un5zVuYU=8DEXYvQCE6x-i3JRWe8OeyQ@mail.gmail.com> It seems to me that the proposal is fine in that regard. It is mostly a clarification for attributes that we already have, and basically for: - specifying what type of real-world units we use (no mention of pixels here) - what is the size of a half-interline That is basically it. I am not sure I see where the problem is.
Laurent On Thu, Nov 8, 2012 at 5:53 AM, Raffaele Viglianti < raffaeleviglianti at gmail.com> wrote: > Dear all, > > I side entirely with Andrew on this one. I think real-world units become > useful as part of the encoding only if they are describing a physical > source (after all we want to use MEI for document encoding). Information > about rendering and printing should live somewhere else. > > Best, > Raffaele > > > On Thu, Nov 8, 2012 at 10:43 AM, Andrew Hankinson < > andrew.hankinson at mail.mcgill.ca> wrote: > >> Perhaps I'm muddying the waters, but should we start looking at ways of >> further separating the musical structure from the actual appearance? More >> specifically, using CSS to control the appearance of elements, rather than >> interweaving the visual and semantic structure. >> >> For instance, page margins, staff sizes, cue sizes -- all of this could >> be specified in a different style sheet for different media: print, tablet, >> mobile phones, web browsers, etc. A print style sheet could specify in >> points or inches; a display stylesheet could specify in pixels, ems, or >> proportions. Different media will have different presentation needs, and if >> we're to make sure that MEI can operate in both the physical and digital >> worlds simultaneously, this question will become more important, not less. >> >> I'm not *completely* convinced of this since it does complicate lots of >> things, but I think it's worthy of at least a bit of discussion. >> >> -Andrew >> >> On 2012-11-07, at 5:27 PM, Laurent Pugin <laurent at music.mcgill.ca> wrote: >> >> >> >> On Wed, Nov 7, 2012 at 10:25 AM, Byrd, Donald A. <donbyrd at indiana.edu>wrote: >> >>> On Tue, 6 Nov 2012 21:49:55 -0800, Craig Sapp <craigsapp at gmail.com> >>> wrote: >>> >>>> Hi Don, >>>> >>>> On Tue, Nov 6, 2012 at 8:54 PM, Byrd, Donald A. 
<donbyrd at indiana.edu> >>>> wrote: >>>> >>>> >>>>> Finally (and I suspect MEI already handles this), I'd like to point out >>>>> that two sizes of staves -- "normal" and "cue-size" -- aren't always >>>>> enough; there are published performing editions that use three staff >>>>> sizes. >>>>> (In fact, I wouldn't be surprised if editions with _four_ sizes exist, >>>>> though I don't know of any.) >>>>> >>>> >>>> >>>> I have seen at least three sizes in a score before, and this would >>>> theoretically allow for four sizes: >>>> >>>> In a piano/instrumental score, the piano part typically has the >>>> instrumental part displayed above it in a slightly smaller size. And I >>>> have seen ossia parts for the instrumental staff which in turn would be >>>> smaller than the instrumental staff size. So if the piano part also >>>> had an >>>> ossia, then there would be four staff sizes, unless the ossia for the >>>> piano >>>> is the same size as the instrumental part (which it probably should). >>>> >>> >>> Right. My list of CMN extremes >>> >>> http://www.informatics.indiana.edu/donbyrd/CMNExtremes.htm >>> >>> lists the J. C. Bach Concerto for Harpsichord or Piano and Strings in >>> E-flat, Op. 7 no. 5 (Dobereiner ed., 1927), where the 3rd size appears >>> briefly, for an ossia. I'm sure I've seen other instances but I can't >>> recall any; if you have other(s) handy, I'd love to hear about 'em (though >>> I'm not sure others on this list would). >>> >> >> The number of possible sizes would be unlimited since it can be defined >> for every staff individually (in staffDef element) if necessary. Our >> proposal was nonetheless to have a cue-size for the most common cases where >> we have only one size of cue-size staves and to have it defined at a higher >> level (in scoreDef element).
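[Editorial note: for comparison, the current-MEI route that Laurent describes might look roughly like this. This is a hedged sketch; attribute names such as @page.units and @scale follow the discussion and should be checked against the MEI schema in use before being relied on.]

```xml
<!-- Sketch only: verify these attributes against the MEI schema in use. -->
<scoreDef page.height="297" page.width="210" page.units="mm"
          vu.height="1.5mm">                 <!-- half-interline = 1.5 mm -->
  <staffGrp>
    <staffDef n="1" lines="5"/>              <!-- default (100%) size -->
    <staffDef n="2" lines="5" scale="75%"/>  <!-- individually reduced staff -->
  </staffGrp>
</scoreDef>
```

The idea is that a global cue size lives on scoreDef, while any number of further sizes can be set per staff on staffDef, which is why the number of possible sizes is unlimited.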
>> >> Laurent >> _______________________________________________ >> mei-l mailing list >> mei-l at lists.uni-paderborn.de >> https://lists.uni-paderborn.de/mailman/listinfo/mei-l >> >> >> _______________________________________________ >> mei-l mailing list >> mei-l at lists.uni-paderborn.de >> https://lists.uni-paderborn.de/mailman/listinfo/mei-l >> >> > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l > > -------------- next part -------------- An HTML attachment was scrubbed... URL: <http://lists.uni-paderborn.de/pipermail/mei-l/attachments/20121108/7f8aa197/attachment.html> From kepper at edirom.de Thu Nov 8 14:45:11 2012 From: kepper at edirom.de (Johannes Kepper) Date: Thu, 8 Nov 2012 14:45:11 +0100 Subject: [MEI-L] Layout discussion Message-ID: <E2FAC7E5-29A2-473C-969F-DF97895E9C37@edirom.de> Don't blame it on me -- some of you requested to have the following discussion completely in public, and I think we should follow that suggestion. I expect this thread to be somewhat lengthy. What we should try to resolve is how MEI deals with layout-specific information, its relationship to the semantic parts of MEI, separating out layout info in a similar way to what CSS does for HTML, relationship of various units (and types thereof), page-based approaches to MEI etc. I will try to summarize the model I briefly introduced in New Orleans last week. I would love to see Laurent replying to that, ideally with a thorough introduction of our layout tree proposal and its specific qualities. This should introduce lurkers to our current state, and from there, we can refer back to our proposal from last week and see where the discussion leads us... The basic situation is that we want to preserve differences between multiple sources. The most obvious way for doing this is to use the <app> / <rdg> elements, which provide a very intuitive way for doing so.
But, as soon as we also want to preserve detailed information about layouts, we have to add many more attributes to each note etc., so it becomes more likely that differences between the sources will result in additional <app>s and <rdg>s. Eventually, this will lead to a separate <app> / <rdg> for almost every note. While this is still possible, it might be regarded as somewhat impractical. The alternative for this is to use one file, let's call it common.xml, to store all the commonalities between the sources, but also the most important differences. Basically, this file contains only @pname, @oct and @dur for every note. It will split up into <app>s and <rdg>s where the sources differ in this regard, but it will not consider stem directions, exact positioning etc. Every single source is also represented by a separate file, let's call it sourceXY.xml. These files do not contain <app>s and <rdg>s at all, they just reflect the musical text as given in the corresponding source. They contain elements for all notes etc., but they omit the basic attributes as specified in the common.xml. Instead, they use a reference to the corresponding elements in this file. Here's an example: common.xml: <note xml:id="noteX" pname="c" oct="4" dur="1"/> sourceXY.xml: <note stem.dir="up" sameas="common.xml#noteX"/> It is easily possible to point to a note within an <app>/<rdg> in case the basic parameters already differ. With this strategy, layout information can be separated out quite completely. <sb/> and <pb/> are stored only within the sourceXY.xml files (though they could be provided in the common.xml within <app>s and <rdg>s as well). If one wants to extract a source file completely, he just has to resolve all pointers. This is cumbersome when done manually, but with an xml database and an index on xml:ids, it shouldn't be too hard. Also, it is still possible to extract information about differing stem directions etc., but it requires more processing than the <app>/<rdg> approach.
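[Editorial note: resolving the @sameas pointers described above is mechanical. The following is a minimal sketch, assuming the two tiny documents from the example; the merge rule (source attributes win, common.xml fills the gaps) and the helper name are assumptions, not part of any MEI specification.]

```python
# Hypothetical sketch of resolving @sameas pointers: basic attributes
# defined in common.xml are merged into the source file's elements.
import xml.etree.ElementTree as ET

XML_ID = "{http://www.w3.org/XML/1998/namespace}id"

common = ET.fromstring(
    '<music><note xml:id="noteX" pname="c" oct="4" dur="1"/></music>'
)
source = ET.fromstring(
    '<music><note stem.dir="up" sameas="common.xml#noteX"/></music>'
)

# Index every element in common.xml by its xml:id (the "index on xml:ids").
by_id = {el.get(XML_ID): el for el in common.iter() if el.get(XML_ID)}

def resolve(tree):
    """Copy attributes from the referenced common.xml element onto each
    element carrying a @sameas pointer; local attributes take precedence."""
    for el in tree.iter():
        ref = el.get("sameas")
        if ref and "#" in ref:
            target = by_id.get(ref.split("#", 1)[1])
            if target is not None:
                for name, value in target.attrib.items():
                    if name != XML_ID and name not in el.attrib:
                        el.set(name, value)
    return tree

note = resolve(source).find("note")
print(note.get("pname"), note.get("oct"), note.get("dur"), note.get("stem.dir"))
# c 4 1 up
```

On a real corpus one would of course query an XML database indexed on xml:id instead of holding everything in memory, as suggested above.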
Basically, this is a compromise somewhere in the middle between source separation and integration, which tries to address the most common cases in the most convenient way, while accepting some additional hurdles for corner cases. Of course it is open for discussion which attributes should go in the common.xml, and which attributes should be separated out to the individual source files. The benefit of this approach is that it completely relies on existing MEI -- it does not require any further additions to the standard and works "out of the box". But, it does not address all requests catered for with a distinct layout tree, which I hope Laurent will introduce. jo From donbyrd at indiana.edu Fri Nov 9 20:22:20 2012 From: donbyrd at indiana.edu (Byrd, Donald A.) Date: Fri, 9 Nov 2012 14:22:20 -0500 Subject: [MEI-L] Layout discussion In-Reply-To: <E2FAC7E5-29A2-473C-969F-DF97895E9C37@edirom.de> References: <E2FAC7E5-29A2-473C-969F-DF97895E9C37@edirom.de> Message-ID: <20121109142220.bugzt4mtgkso8ss0@webmail.iu.edu> I'm just emerging from a long period of lurking -- with a lot of ignoring :-| . I (for one) would love to see "a thorough introduction of our layout tree proposal and its specific qualities." --Don On Thu, 8 Nov 2012 14:45:11 +0100, Johannes Kepper <kepper at edirom.de> wrote: > Don't blame it on me -- some of you requested to have the following > discussion completely in public, and I think we should follow that > suggestion. I expect this thread to be somewhat lengthy. What we > should try to resolve is how MEI deals with layout-specific > information, its relationship to the semantic parts of MEI, > separating out layout info in a similar way to what CSS does for HTML, > relationship of various units (and types thereof), page-based > approaches to MEI etc. > > I will try to summarize the model I briefly introduced in New Orleans > last week.
I would love to see Laurent replying to that, ideally with > a thorough introduction of our layout tree proposal and its specific > qualities. This should introduce lurkers to our current state, and > from there, we can refer back to our proposal from last week and see > where the discussion leads us... > > The basic situation is that we want to preserve differences between > multiple sources. The most obvious way for doing this is to use the > <app> / <rdg> elements, which provide a very intuitive way for doing > so. But, as soon as we also want to preserve detailed information > about layouts, we have to add many more attributes to each note etc., > so it becomes more likely that differences between the sources will > result in additional <app>s and <rdg>s. Eventually, this will lead to > a separate <app> / <rdg> for almost every note. While this is still > possible, it might be regarded as somewhat impractical. The > alternative for this is to use one file, let's call it common.xml, to > store all the commonalities between the sources, but also the most > important differences. Basically, this file contains only @pname, > @oct and @dur for every note. It will split up into <app>s and <rdg>s > where the sources differ in this regard, but it will not consider > stem directions, exact positioning etc. > Every single source is also represented by a separate file, let's > call it sourceXY.xml. These files do not contain <app>s and <rdg>s at > all, they just reflect the musical text as given in the corresponding > source. They contain elements for all notes etc., but they omit the > basic attributes as specified in the common.xml. Instead, they use a > reference to the corresponding elements in this file. Here's an > example: > > common.xml: > <note xml:id="noteX" pname="c" oct="4" dur="1"/> > > sourceXY.xml: > <note stem.dir="up" sameas="common.xml#noteX"/> > > It is easily possible to point to a note within an <app>/<rdg> in case > the basic parameters already differ.
> > With this strategy, layout information can be separated out quite > completely. <sb/> and <pb/> are stored only within the sourceXY.xml > files (though they could be provided in the common.xml within <app>s > and <rdg>s as well). If one wants to extract a source file > completely, he just has to resolve all pointers. This is cumbersome > when done manually, but with an xml database and an index on > xml:ids, it shouldn't be too hard. Also, it is still possible to > extract information about differing stem directions etc., but it > requires more processing than the <app>/<rdg> approach. Basically, > this is a compromise somewhere in the middle between source > separation and integration, which tries to address the most common > cases in the most convenient way, while accepting some additional > hurdles for corner cases. Of course it is open for discussion which > attributes should go in the common.xml, and which attributes should > be separated out to the individual source files. > > The benefit of this approach is that it completely relies on existing > MEI -- it does not require any further additions to the standard and > works "out of the box". > > But, it does not address all requests catered for with a distinct > layout tree, which I hope Laurent will introduce.
> jo > > > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l > -- Donald Byrd Woodrow Wilson Indiana Teaching Fellow Adjunct Associate Professor of Informatics & Music Indiana University, Bloomington From kepper at edirom.de Fri Nov 9 21:03:47 2012 From: kepper at edirom.de (Johannes Kepper) Date: Fri, 9 Nov 2012 21:03:47 +0100 Subject: [MEI-L] Layout discussion In-Reply-To: <20121109142220.bugzt4mtgkso8ss0@webmail.iu.edu> References: <E2FAC7E5-29A2-473C-969F-DF97895E9C37@edirom.de> <20121109142220.bugzt4mtgkso8ss0@webmail.iu.edu> Message-ID: <8E987354-F81B-4CDD-A6FD-090A6BF6DF22@edirom.de> Just a comment from the sideline: I haven't warned Laurent about this request, so we all should be patient with him. If he can't provide a summary early next week, I'll try to cook something up, but Laurent is definitely more familiar with the details of this proposal. Johannes Am 09.11.2012 um 20:22 schrieb "Byrd, Donald A." <donbyrd at indiana.edu>: > I'm just emerging from a long period of lurking -- with a lot of ignoring :-| . I (for one) would love to see "a thorough introduction of our layout tree proposal and its specific qualities." > > --Don > > > On Thu, 8 Nov 2012 14:45:11 +0100, Johannes Kepper <kepper at edirom.de> wrote: > >> Don't blame it on me -- some of you requested to have the following >> discussion completely in public, and I think we should follow that >> suggestion. I expect this thread to be somewhat lengthy. What we >> should try to resolve is how MEI deals with layout-specific >> information, its relationship to the semantic parts of MEI, >> separating out layout info in a similar way to what CSS does for HTML, >> relationship of various units (and types thereof), page-based >> approaches to MEI etc. >> >> I will try to summarize the model I briefly introduced in New Orleans >> last week.
I would love to see Laurent replying to that, ideally with >> a thorough introduction of our layout tree proposal and its specific >> qualities. This should introduce lurkers to our current state, and >> from there, we can refer back to our proposal from last week and see >> where the discussion leads us ... >> >> The basic situation is that we want to preserve differences between >> multiple sources. The most obvious way for doing this is to use the >> <app> / <rdg> elements, which provide a very intuitive way for doing >> so. But, as soon as we also want to preserve detailed information >> about layouts, we have to add many more attributes to each note etc., >> so it becomes more likely that differences between the sources will >> result in additional <app>s and <rdg>s. Eventually, this will lead to >> a separate <app> / <rdg> for almost every note. While this is still >> possible, it might be regarded as somewhat impractical. The >> alternative for this is to use one file, let's call it common.xml, to >> store all the commonalities between the sources, but also the most >> important differences. Basically, this file contains only @pname, >> @oct and @dur for every note. It will split up into <app>s and <rdg>s >> where the sources differ in this regard, but it will not consider >> stem directions, exact positioning etc. >> Every single source is also represented by a separate file, let's >> call it sourceXY.xml. These files do not contain <app>s and <rdg>s at >> all, they just reflect the musical text as given in the respective >> source. They contain elements for all notes etc., but they omit the >> basic attributes as specified in the common.xml. Instead, they use a >> reference to the corresponding elements in this file.
Here's an >> example: >> >> common.xml: >> <note xml:id="noteX" pname="c" oct="4" dur="1"/> >> >> sourceXY.xml: >> <note stem.dir="up" sameas="common.xml#noteX"/> >> >> It is easily possible to point to a note within a <app>/<rdg> in case >> the basic parameters already differ. >> >> With this strategy, layout information can be separated out quite >> completely. <sb/> and <pb/> are stored only within the sourceXY.xml >> files (though they could be provided in the common.xml within <app>s >> and <rdg>s as well). If one wants to extract a source file >> completely, he just has to resolve all pointers. This is cumbersome >> when done manually, but with an xml database and an index on >> xml:ids, it shouldn't be too hard. Also, it is still possible to >> extract information about differing stem directions etc., but it >> requires more processing than the <app>/<rdg> approach. Basically, >> this is a compromise somewhere in the middle between source >> separation and integration, which tries to address the most common >> cases in the most convenient way, while accepting some additional >> hurdles for corner cases. Of course it is open for discussion which >> attributes should go in the common.xml, and which attributes should >> be separated out to the individual source files. >> >> The benefit of this approach is that it completely relies on existing >> MEI -- it does not require any further additions to the standard and >> works "out of the box". >> >> But, it does not address all requests catered for with a distinct >> layout tree, which I hope Laurent will introduce.
>> jo >> >> >> _______________________________________________ >> mei-l mailing list >> mei-l at lists.uni-paderborn.de >> https://lists.uni-paderborn.de/mailman/listinfo/mei-l >> > > > > -- > Donald Byrd > Woodrow Wilson Indiana Teaching Fellow > Adjunct Associate Professor of Informatics & Music > Indiana University, Bloomington > From esfield at stanford.edu Sat Nov 10 04:35:46 2012 From: esfield at stanford.edu (Eleanor Selfridge-Field) Date: Fri, 9 Nov 2012 19:35:46 -0800 (PST) Subject: [MEI-L] Layout discussion In-Reply-To: <20121109142220.bugzt4mtgkso8ss0@webmail.iu.edu> References: <E2FAC7E5-29A2-473C-969F-DF97895E9C37@edirom.de> <20121109142220.bugzt4mtgkso8ss0@webmail.iu.edu> Message-ID: <6c100a48.000011c4.00000043@CCARH-ADM-2.su.win.stanford.edu> I'm lurking too and pass on one practical issue: web browsers. I've been noticing recently that Chrome has trouble resolving the printing parameters of attachments received on A4 paper (it may be the reverse elsewhere). Presumably some intermediate software will always stand between MEI and any screen display. For me screen layout is the next question after page layout. Instinct tells me that separating semantic content from all layout and rendering questions is the better path. The SCORE approach may be instructive: the program uses virtual units to establish the desired aspect ratio. It also has a virtual relationship to the physical page. Note though that in the music printing industry page sizes may not have the same aspect ratio as those used with desktop printers. (At the AMS I had a discussion about this with Douglas Woodfill-Harris from Bärenreiter.)
Eleanor Eleanor Selfridge-Field Consulting Professor, Music (and, by courtesy, Symbolic Systems) Braun Music Center #129 Stanford University Stanford, CA 94305-3076, USA http://www.stanford.edu/~esfield/ -----Original Message----- From: mei-l-bounces+esfield=stanford.edu at lists.uni-paderborn.de [mailto:mei-l-bounces+esfield=stanford.edu at lists.uni-paderborn.de] On Behalf Of Byrd, Donald A. Sent: Friday, November 09, 2012 11:22 AM To: Music Encoding Initiative; Johannes Kepper Subject: Re: [MEI-L] Layout discussion I'm just emerging from a long period of lurking -- with a lot of ignoring :-| . I (for one) would love to see "a thorough introduction of our layout tree proposal and its specific qualities." --Don On Thu, 8 Nov 2012 14:45:11 +0100, Johannes Kepper <kepper at edirom.de> wrote: > Don't blame it on me -- some of you requested to have the following > discussion completely in public, and I think we should follow that > suggestion. I expect this thread to be somewhat lengthy. What we > should try to resolve is how MEI deals with layout-specific > information, its relationship to the semantic parts of MEI, separating > out layout info in a similar way as CSS does for HTML, relationship > of various units (and types thereof), page-based approaches to MEI > etc. > > I will try to summarize the model I briefly introduced in New Orleans > last week.
But, as soon as we also want to preserve detailed information > about layouts, we have to add many more attributes to each note etc., > so it becomes more likely that differences between the sources will > result in additional <app>s and <rdg>s. Eventually, this will lead to > a separate <app> / <rdg> for almost every note. While this is still > possible, it might be regarded as somewhat impractical. The > alternative for this is to use one file, let's call it common.xml, to > store all the commonalities between the sources, but also the most > important differences. Basically, this file contains only @pname, @oct > and @dur for every note. It will split up into <app>s and <rdg>s where > the sources differ in this regard, but it will not consider stem > directions, exact positioning etc. > Every single source is also represented by a separate file, let's call > it sourceXY.xml. These files do not contain <app>s and <rdg>s at all, > they just reflect the musical text as given in the respective source. > They contain elements for all notes etc., but they omit the basic > attributes as specified in the common.xml. Instead, they use a > reference to the corresponding elements in this file. Here's an > example: > > common.xml: > <note xml:id="noteX" pname="c" oct="4" dur="1"/> > > sourceXY.xml: > <note stem.dir="up" sameas="common.xml#noteX"/> > > It is easily possible to point to a note within a <app>/<rdg> in case > the basic parameters already differ. > > With this strategy, layout information can be separated out quite > completely. <sb/> and <pb/> are stored only within the sourceXY.xml > files (though they could be provided in the common.xml within <app>s > and <rdg>s as well). If one wants to extract a source file completely, > he just has to resolve all pointers. This is cumbersome when done > manually, but with an xml database and an index on xml:ids, it > shouldn't be too hard.
Also, it is still possible to extract > information about differing stem directions etc., but it requires more > processing than the <app>/<rdg> approach. Basically, this is a > compromise somewhere in the middle between source separation and > integration, which tries to address the most common cases in the most > convenient way, while accepting some additional hurdles for corner > cases. Of course it is open for discussion which attributes should go > in the common.xml, and which attributes should be separated out to the > individual source files. > > The benefit of this approach is that it completely relies on existing > MEI -- it does not require any further additions to the standard and > works "out of the box". > > But, it does not address all requests catered for with a distinct > layout tree, which I hope Laurent will introduce. > jo > > > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l > -- Donald Byrd Woodrow Wilson Indiana Teaching Fellow Adjunct Associate Professor of Informatics & Music Indiana University, Bloomington _______________________________________________ mei-l mailing list mei-l at lists.uni-paderborn.de https://lists.uni-paderborn.de/mailman/listinfo/mei-l From pdr4h at eservices.virginia.edu Mon Nov 12 19:56:58 2012 From: pdr4h at eservices.virginia.edu (Roland, Perry (pdr4h)) Date: Mon, 12 Nov 2012 18:56:58 +0000 Subject: [MEI-L] The Music Encoding Conference 2013 Message-ID: <BBCC497C40D85642B90E9F94FC30343D0EFC310E@GRANT.eservices.virginia.edu> =============================================================== CALL FOR ABSTRACTS The Music Encoding Conference 2013: Concepts, Methods, Editions 22-24 May, 2013 =============================================================== You are cordially invited to participate in the Music Encoding Conference 2013 --
Concepts, Methods, Editions, to be held 22-24 May, 2013, at the Mainz Academy for Literature and Sciences in Mainz, Germany. Music encoding is now a prominent feature of various areas in musicology and music librarianship. The encoding of symbolic music data provides a foundation for a wide range of scholarship, and over the last several years, has garnered a great deal of attention in the digital humanities. This conference intends to provide an overview of the current state of data modeling, generation, and use, and aims to introduce new perspectives on topics in the fields of traditional and computational musicology, music librarianship, and scholarly editing, as well as in the broader area of digital humanities. As the conference has a dual focus on music encoding and scholarly editing in the context of the digital humanities, the Program Committee is also happy to announce keynote lectures by Frans Wiering (Universiteit Utrecht) and Daniel Pitti (University of Virginia), both distinguished scholars in their respective fields of musicology and markup technologies in the digital humanities. Proposals for papers, posters, panel discussions, and pre-conference workshops are encouraged. Prospective topics for submissions include:

* theoretical and practical aspects of music, music notation models, and scholarly editing
* rendering of symbolic music data in audio and graphical forms
* relationships between symbolic music data, encoded text, and facsimile images
* capture, interchange, and re-purposing of music data and metadata
* ontologies, authority files, and linked data in music encoding
* additional topics relevant to music encoding and music editing

For paper and poster proposals, abstracts of no more than 1000 words, with no more than five relevant bibliographic references, are requested. Panel sessions may be one and a half or three hours in length.
Abstracts for panel sessions, describing the topic and nature of the session and including short biographies of the participants, should be no longer than 2000 words. Proposals for pre-conference workshops, to be held on May 21st, must include a description of space and technical requirements. Author guidelines and authoritative stylesheets for each submission type will be made available on the conference webpage at http://music-encoding.org/conference/2013 in early December. All accepted papers, posters, and panel sessions will be included in the conference proceedings, tentatively scheduled to be published by the end of 2013.

Important dates:
31 December 2012: Deadline for abstract submissions
31 January 2013: Notification of acceptance/rejection of submissions
21-24 May 2013: Conference
31 July 2013: Deadline for submission of full papers for conference proceedings
December 2013: Publication of conference proceedings

Additional details will be announced on the conference webpage (http://music-encoding.org/conference/2013). If you have any questions, please contact conference2013 at music-encoding.org.
------
Program Committee:
Ichiro Fujinaga, McGill University, Montreal
Niels Krabbe, Det Kongelige Bibliotek, København
Elena Pierazzo, King's College, London
Eleanor Selfridge-Field, CCARH, Stanford
Joachim Veit, Universität Paderborn, Detmold

(Local) Organizers:
Johannes Kepper, Universität Paderborn
Daniel Röwenstrunk, Universität Paderborn
Perry Roland, University of Virginia

From atge at kb.dk Tue Nov 13 11:33:32 2012 From: atge at kb.dk (Axel Teich Geertinger) Date: Tue, 13 Nov 2012 10:33:32 +0000 Subject: [MEI-L] FRBR in MEI In-Reply-To: <E2FAC7E5-29A2-473C-969F-DF97895E9C37@edirom.de> References: <E2FAC7E5-29A2-473C-969F-DF97895E9C37@edirom.de> Message-ID: <0B6F63F59F405E4C902DFE2C2329D0D1514EF524@EXCHANGE-01.kb.dk> Dear list, as some of you know, we have been experimenting for some time with implementing FRBR group 1 entities (work, expression, manifestation, item; see http://www.ifla.org/publications/functional-requirements-for-bibliographic-records) in <meiHead> to be able to clearly distinguish these four levels of description. With <source> translating to FRBR manifestation, we have two of them already (work and manifestation), so we have added the two others along with some container and linking elements. In my opinion, the results so far are very promising, and I think this is the time to discuss whether and how to go ahead with it. As with the <bibl> discussion, I need to know which way to head in order to get MerMEId ready for release. So, I hope we can get to an agreement about what is going to be in the next MEI schema, at least in general terms. This will probably get a bit lengthy, but I hope some of you will have a look at it. For the time being, we are using a customization (thanks to Johannes), extending the MEI 2012 schema as follows: 1) In <work>, we have introduced the element <expressionList>, a container for <expression> elements, which have the same content model as <work>.
<expression> has a child <componentGrp>, an ordered list allowing <expression> child elements; it is intended to represent a sequence of sections such as movements. This can be nested; it enables us to describe the structure of, say, an opera, divided into acts, acts into scenes, and scenes into subsections of scenes (recitative, aria etc.), each with their own incipit etc. Actually, <work> also allows <componentGrp> as a child, though we are not using it; I am not sure whether it may be useful or not. 2) Likewise, in <source>, we have an <itemList> containing <item> elements, i.e. descriptions of individual copies/exemplars of a source. As with <work> and <expression>, <source> and <item> share the same content model (though we do not use all elements at both levels - more on that later). Also <source> and <item> have an optional <componentGrp> to describe their constituents. 3) All four FRBR entity elements have a <relationList> child, containing <relation> elements. These establish the relations between entities not immediately deducible from the XML tree, such as expression-to-manifestation (these are the main links between the <workDesc> and <sourceDesc> sub-trees), manifestation-to-manifestation (for instance, identifying one source as a reprint or copy of another one; this also allows the encoding of a stemma), or external relations such as work-to-work. These new elements replace <relatedItem> now found in <work> and <source>. Apart from an agreement on the overall structure, there are a number of issues to address. I will try to list them here as well as I can, though I am sure there are more. 1) The naming of elements. For now, we have defined a generic <componentGrp> element available at all four levels, meaning that the schema allows for such rubbish as putting works into items, since <componentGrp> allows <work>, <expression>, <source>, and <item>. We may choose to rename them to <expressionComponents> etc.
or something like that in order to control their different contents, or leaving it to the individual encoder (or Schematron) to avoid such nonsense. Speaking of element names, the good old name <sourceDesc> does not sound quite right to me. Especially if we introduce <expressionList> and <itemList>, I think <sourceList> would be more appropriate. The actual description of sources is what I would expect to find *inside* <source>. But I know that it may be too late to change... 2) The content models of <source> and <item>, respectively. Obviously, some (well, most) elements will be needed at both levels, but to minimize confusion, I would suggest a few restrictions. The most obvious one would be to move <physLoc> out of <physDesc> and allow it in <item> only. <watermark> may or may not be banned from <source> (would it make sense to describe the watermark of a print edition, or of individual copies only? I am not sure). 3) Variations FRBR (http://www.dlib.indiana.edu/projects/vfrbr/). Would it be desirable to aim at offering fully VFRBR compliant encoding? It seems we are pretty close already, though not all VFRBR attributes are matched precisely by MEI elements. I must admit I have no opinion on whether it is important. Perhaps the only element really missing for VFRBR compliance is <extent> in <expression>. It should be no problem introducing it, I guess. And now that we're at it, certain projects have requested to be able to specify the duration (<extent>) of a work. Usually, this could and should be placed in <expression> rather than <work>, but what about the situation where the composer actually prescribes a specific duration for her work - an instruction that the actual expressions may or may not follow? 4) There is a problem possibly emerging from the notation-centric nature of MEI, or perhaps it is really a FRBR problem; namely the handling of performances and recordings. FRBR treats them both as expressions, i.e.
as "siblings" to what I (and MerMEId) would regard as different versions of the work. We encode performances using <eventList> elements within expression/history, i.e. as (grand-)children of <expression>, which really makes sense to me. A performance must be of a certain version (form, instrumentation) of the work, so I strongly believe we should keep it this way. It's just not how FRBR sees it. On the other hand, as far as I can see there is nothing (except the practical and conceptual difficulties) that prevents users from encoding a performance or a recording as an expression, so FRBR compliance is probably possible also in this respect. I just wouldn't recommend it, and I actually suspect FRBR has a problem there rather than MEI. I haven't looked into the details of recordings metadata yet, but I guess we'll have to address that too at some point. Without having given it much thought, I see two options here and now (apart from <expression>): <bibl> and <source>, depending on the recording's relation to the encoding. We may want to add a number of elements to <source> to accommodate recording information. 5) Finally, an issue related to the FRBR discussion, though not directly a consequence of it: MEI 2012 allows multiple <work> elements within <workDesc>. I can't think of any situation, however, in which it may be desirable to describe more than one work in a single file. On the contrary, it could easily cause a lot of confusion, so I would actually suggest allowing only one <work> element; in other words: either skip <workDesc> and have 1 optional <work> in <meiHead>, or keep <workDesc>, and change its content model to be the one used by <work> now.
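[Editorial aside: a compact sketch of what the customization described in points 1)-3) could look like in an encoding. Only the element names come from this mail; the xml:ids and the <relation> attributes (@rel, @target) are hypothetical placeholders.]

```xml
<work xml:id="work.opera">
  <expressionList>
    <expression xml:id="expr.fullOpera">
      <componentGrp>
        <expression xml:id="expr.act1">
          <componentGrp>
            <!-- nesting continues: scenes, then recitative/aria subsections -->
            <expression xml:id="expr.act1.scene1.aria"/>
          </componentGrp>
        </expression>
      </componentGrp>
      <relationList>
        <!-- hypothetical attributes: link this expression to its manifestation -->
        <relation rel="isEmbodiedIn" target="#source.firstPrint"/>
      </relationList>
    </expression>
  </expressionList>
</work>
```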
Any comments greatly appreciated :-) All the best, Axel From kepper at edirom.de Tue Nov 13 12:32:47 2012 From: kepper at edirom.de (Johannes Kepper) Date: Tue, 13 Nov 2012 12:32:47 +0100 Subject: [MEI-L] FRBR in MEI In-Reply-To: <0B6F63F59F405E4C902DFE2C2329D0D1514EF524@EXCHANGE-01.kb.dk> References: <E2FAC7E5-29A2-473C-969F-DF97895E9C37@edirom.de> <0B6F63F59F405E4C902DFE2C2329D0D1514EF524@EXCHANGE-01.kb.dk> Message-ID: <45F7170D-0C66-4C59-9285-E328FDC45556@edirom.de> Dear Axel, thanks for putting that together. See my comments inline. On 13.11.2012, at 11:33, Axel Teich Geertinger <atge at kb.dk> wrote: > Dear list, > > as some of you know, we have been experimenting for some time with implementing FRBR group 1 entities (work, expression, manifestation, item; see http://www.ifla.org/publications/functional-requirements-for-bibliographic-records) in <meiHead> to be able to clearly distinguish these four levels of description. With <source> translating to FRBR manifestation, we have two of them already (work and manifestation), so we have added the two others along with some container and linking elements. In my opinion, the results so far are very promising, and I think this is the time to discuss whether and how to go ahead with it. As with the <bibl> discussion, I need to know which way to head in order to get MerMEId ready for release. So, I hope we can get to an agreement about what is going to be in the next MEI schema, at least in general terms. This will probably get a bit lengthy, but I hope some of you will have a look at it. > > For the time being, we are using a customization (thanks to Johannes), extending the MEI 2012 schema as follows: > > 1) In <work>, we have introduced the element <expressionList>, a container for <expression> elements, which have the same content model as <work>. <expression> has a child <componentGrp>, an ordered list allowing <expression> child elements; it is intended to represent a sequence of sections such as movements.
This can be nested; it enables us to describe the structure of, say, an opera, divided into acts, acts into scenes, and scenes into subsections of scenes (recitative, aria etc.), each with their own incipit etc. > Actually, <work> also allows <componentGrp> as a child, though we are not using it; I am not sure whether it may be useful or not. > > 2) Likewise, in <source>, we have an <itemList> containing <item> elements, i.e. descriptions of individual copies/exemplars of a source. As with <work> and <expression>, <source> and <item> share the same content model (though we do not use all elements at both levels - more on that later). Also <source> and <item> have an optional <componentGrp> to describe their constituents. > > 3) All four FRBR entity elements have a <relationList> child, containing <relation> elements. These establish the relations between entities not immediately deducible from the XML tree, such as expression-to-manifestation (these are the main links between the <workDesc> and <sourceDesc> sub-trees), manifestation-to-manifestation (for instance, identifying one source as a reprint or copy of another one; this also allows the encoding of a stemma), or external relations such as work-to-work. > These new elements replace <relatedItem> now found in <work> and <source>. > > Apart from an agreement on the overall structure, there are a number of issues to address. I will try to list them here as well as I can, though I am sure there are more. > > 1) The naming of elements. For now, we have defined a generic <componentGrp> element available at all four levels, meaning that the schema allows for such rubbish as putting works into items, since <componentGrp> allows <work>, <expression>, <source>, and <item>. We may choose to rename them to <expressionComponents> etc. or something like that in order to control their different contents, or leaving it to the individual encoder (or Schematron) to avoid such nonsense.
While Schematron rules are not as well supported as the RelaxNG schema itself, they are part of the schema, and thus an official part of the MEI specification. If a given application is not capable of enforcing the rules expressed in Schematron by validating against them, this doesn't mean that they are obsolete. In this case, Schematron allows a less complex definition of the intended schema, and that's the reason why we chose it. They are by no means optional, just as many other rules for MEI that are also expressed in Schematron aren't optional. > Speaking of element names, the good old name <sourceDesc> does not sound quite right to me. Especially if we introduce <expressionList> and <itemList>, I think <sourceList> would be more appropriate. The actual description of sources is what I would expect to find *inside* <source>. But I know that it may be too late to change... As this whole FRBR thing introduces a model which is quite distinct from TEI, it seems not unreasonable to reflect that by choosing a different name for the sources' container. This would have some consequences, though, and for me, it is connected with the question of whether sourceDesc (or whatever we decide to call it) is a child of fileDesc or not. Currently, sourceDesc is a child of fileDesc, while workDesc is a sibling. I see the reason for putting sourceDesc in there in the first place (these are the sources used to create the file, i.e. the MEI instance), but I wonder if this is still true. What about a catalogue of works and sources, which may not even have any music in it? Wouldn't it be better to add a pointer from somewhere in fileDesc to the extracted sourceDesc, indicating which source was used for the transcription? Or is it safe to rely on @source references down in the music subtree?
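[Editorial aside: the kind of contextual rule discussed here -- constraining what a <componentGrp> may contain depending on where it sits -- could be sketched in Schematron along these lines. The element names come from the thread; the rule itself is a hypothetical illustration, not an actual MEI rule.]

```xml
<schema xmlns="http://purl.oclc.org/dsdl/schematron">
  <ns prefix="mei" uri="http://www.music-encoding.org/ns/mei"/>
  <pattern>
    <!-- hypothetical rule: a componentGrp inside an item must not
         contain work or expression children -->
    <rule context="mei:item/mei:componentGrp">
      <assert test="not(mei:work | mei:expression)">
        A componentGrp within an item may only contain item components.
      </assert>
    </rule>
  </pattern>
</schema>
```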
The most obvious one would be to move <physLoc> out of <physDesc> and allow it in <item> only. <watermark> may or may not be banned from <source> (would it make sense to describe the watermark of a print edition, or of individual copies only? I am not sure). The question here is whether we want to allow people to use MEI without FRBR, i.e. without making the distinction between manifestations (the print run) and items (the individual copies). Although I think that FRBR is a very clever model, I wonder if we can or should enforce it. I could imagine putting it in a separate module, which would add (besides the elements) a couple of Schematron rules (see their importance above) that keep the usage of source in line with the FRBR model. If this module is turned off, people would be free to use MEI like before. But this would create inconsistencies, so we clearly have to make a decision here about balancing flexibility and standardization. > > 3) Variations FRBR (http://www.dlib.indiana.edu/projects/vfrbr/). Would it be desirable to aim at offering fully VFRBR compliant encoding? It seems we are pretty close already, though not all VFRBR attributes are matched precisely by MEI elements. I must admit I have no opinion on whether it is important. > Perhaps the only element really missing for VFRBR compliance is <extent> in <expression>. It should be no problem introducing it, I guess. > And now that we're at it, certain projects have requested to be able to specify the duration (<extent>) of a work. Usually, this could and should be placed in <expression> rather than <work>, but what about the situation where the composer actually prescribes a specific duration for her work - an instruction that the actual expressions may or may not follow? Like you, I have no particular opinion on this. I would just argue that if Variations uses a subset of FRBR (and I don't know if they do), we should still keep the full, non-flavored FRBR.
If all they did was add things, I'm fine with including them. We should avoid enforcing models which are useful for a certain project, but might not be for others. > > 4) There is a problem possibly emerging from the notation-centric nature of MEI, or perhaps it is really a FRBR problem; namely the handling of performances and recordings. FRBR treats them both as expressions, i.e. as "siblings" to what I (and MerMEId) would regard as different versions of the work. We encode performances using <eventList> elements within expression/history, i.e. as (grand-)children of <expression>, which really makes sense to me. A performance must be of a certain version (form, instrumentation) of the work, so I strongly believe we should keep it this way. It's just not how FRBR sees it. On the other hand, as far as I can see there is nothing (except the practical and conceptual difficulties) that prevents users from encoding a performance or a recording as an expression, so FRBR compliance is probably possible also in this respect. I just wouldn't recommend it, and I actually suspect FRBR has a problem there rather than MEI. I haven't looked this up, but are you sure that performances and recordings are on the same level? I would see performances as expressions, while recordings are manifestations. Of course a performance follows a certain version of a work, like the piano version (=expression). But, the musician moves that to a different domain (graphical to audio), and he may or may not play the repeats, and he may or may not follow the dynamic indications of the score. There certainly is a strong relationship between both expressions, but they are distinct to me. I see your reasons for putting everything into an eventList, and thus subsuming it under one expression, but that might not always be the most appropriate model. Sometimes, it might be better to use separate expressions for the piano version and its performances and connect them with one or more relations.
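[Editorial aside: that alternative modeling -- separate expressions for the notated version and its performance, connected by relations -- might be sketched as below. The elements <expression>, <relationList>, and <relation> are named in the thread; the xml:ids, @label values, and the relator name are hypothetical.]

```xml
<expressionList>
  <expression xml:id="expr.pianoVersion" label="piano version"/>
  <expression xml:id="expr.performance1965" label="performance, 1965">
    <relationList>
      <!-- hypothetical attributes and relator name -->
      <relation rel="isPerformanceOf" target="#expr.pianoVersion"/>
    </relationList>
  </expression>
</expressionList>
```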
> I haven't looked into the details of recordings metadata yet, but I guess we'll have to address that too at some point. Without having given it much thought, I see two options here and now (apart from <expression>): <bibl> and <source>, depending on the recording's relation to the encoding. We may want to add a number of elements to <source> to accommodate recording information. > > 5) Finally, an issue related to the FRBR discussion, though not directly a consequence of it: MEI 2012 allows multiple <work> elements within <workDesc>. I can't think of any situation, however, in which it may be desirable to describe more than one work in a single file. On the contrary, it could easily cause a lot of confusion, so I would actually suggest allowing only one <work> element; in other words: either skip <workDesc> and have 1 optional <work> in <meiHead>, or keep <workDesc>, and change its content model to be the one used by <work> now. Again, I think that this perspective is biased by your application, where it makes perfect sense. Consider you're working on Wagner's Ring. You might want to say something about all these works in just one file. All I want to say is that this is a modeling question, which is clearly project-specific. It seems perfectly reasonable to restrict MerMEId to MEI instances with only one work, but I wouldn't restrict MEI to one work per file. This may result in preprocessing files before operating on them with MerMEId, but we have similar situations for many other aspects of MEI, so this isn't bad per se.
Best, Johannes > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l From atge at kb.dk Tue Nov 13 13:21:48 2012 From: atge at kb.dk (Axel Teich Geertinger) Date: Tue, 13 Nov 2012 12:21:48 +0000 Subject: [MEI-L] FRBR in MEI In-Reply-To: <45F7170D-0C66-4C59-9285-E328FDC45556@edirom.de> References: <E2FAC7E5-29A2-473C-969F-DF97895E9C37@edirom.de> <0B6F63F59F405E4C902DFE2C2329D0D1514EF524@EXCHANGE-01.kb.dk> <45F7170D-0C66-4C59-9285-E328FDC45556@edirom.de> Message-ID: <0B6F63F59F405E4C902DFE2C2329D0D1514EF677@EXCHANGE-01.kb.dk> Hi Johannes, Thanks for your comments. Good and relevant as always. I think I better leave it to the more technically skilled people to answer most of it, but I have just a few comments. > > > > 4) There is a problem possibly emerging from the notation-centric nature of > MEI, or perhaps it is really a FRBR problem; namely the handling of performances > and recordings. FRBR treats them both as expressions, i.e. as "siblings" to what I > (and MerMEId) would regard as different versions of the work. We encode > performances using <eventList> elements within expression/history, i.e. as (grand- > )children of <expression>, which really makes sense to me. A performance must > be of a certain version (form, instrumentation) of the work, so I strongly believe we > should keep it this way. It's just not how FRBR sees it. On the other hand, as far as > I can see there is nothing (except the practical an conceptual difficulties) that > prevents users from encoding e performance or a recording as an expression, so > FRBR compliance is probably possible also in this respect. I just wouldn't > recommend it, and I actually suspect FRBR having a problem there rather than > MEI. > > I haven't looked this up, but are you sure that performances and recordings are on > the same level? 
> I would see performances as expressions, while recordings are manifestations. Of course a performance follows a certain version of a work, like the piano version (=expression). But the musician moves that to a different domain (graphical to audio), and he may or may not play the repeats, and he may or may not follow the dynamic indications of the score. There certainly is a strong relationship between both expressions, but they are distinct to me. I see your reasons for putting everything into an eventList, and thus subsuming it under one expression, but that might not always be the most appropriate model. Sometimes it might be better to use separate expressions for the piano version and its performances and connect them with one or more relations.

Sorry, my mistake. Now that I look it up I see you are right: performances are expressions, recordings are not. As I said, I haven't really been looking into the recordings question yet. Here's an example from the FRBR report:

w1 J. S. Bach's Six suites for unaccompanied cello
  e1 performances by Janos Starker recorded partly in 1963 and completed in 1965
    m1 recordings released on 33 1/3 rpm sound discs in 1966 by Mercury
    m2 recordings re-released on compact disc in 1991 by Mercury
  e2 performances by Yo-Yo Ma recorded in 1983
    m1 recordings released on 33 1/3 rpm sound discs in 1983 by CBS Records
    m2 recordings re-released on compact disc in 1992 by CBS Records

So, recordings are no problem, I guess. But that still leaves us with two very different ways of encoding performance data. FYI, we have recently moved performance <eventList>s from <work> to <expression>, so we do subsume them under a particular expression already.

> 5) Finally, an issue related to the FRBR discussion, though not directly a consequence of it: MEI 2012 allows multiple <work> elements within <workDesc>. I can't think of any situation, however, in which it may be desirable to describe more than one work in a single file.
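Mapped onto the proposed customization, the report's example would put the performances under <expression> and the released recordings under <source> (= manifestation), bridged by a relation. A sketch only — the xml:id values and the @rel value are invented, and whether <source> carries a <titleStmt> exactly as shown depends on the current content model:

```xml
<workDesc>
  <work>
    <title>Six suites for unaccompanied cello</title>
    <expressionList>
      <expression xml:id="e1">
        <title>Performances by Janos Starker, 1963-1965</title>
      </expression>
      <expression xml:id="e2">
        <title>Performances by Yo-Yo Ma, 1983</title>
      </expression>
    </expressionList>
  </work>
</workDesc>
<sourceDesc>
  <source xml:id="m1">
    <titleStmt><title>Recordings released on LP by Mercury, 1966</title></titleStmt>
    <relationList>
      <!-- hypothetical @rel value: manifestation-to-expression bridge -->
      <relation rel="isEmbodimentOf" target="#e1"/>
    </relationList>
  </source>
</sourceDesc>
```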
> On the contrary, it could easily cause a lot of confusion, so I would actually suggest allowing only one <work> element; in other words: either skip <workDesc> and have 1 optional <work> in <meiHead>, or keep <workDesc> and change its content model to the one used by <work> now.
>
> Again, I think that this perspective is biased by your application, where it makes perfect sense. Consider you're working on Wagner's Ring. You might want to say something about all these works in just one file. All I want to say is that this is a modeling question, which is clearly project-specific. It seems perfectly reasonable to restrict MerMEId to MEI instances with only one work, but I wouldn't restrict MEI to one work per file. This may result in preprocessing files before operating on them with MerMEId, but we have similar situations for many other aspects of MEI, so this isn't bad per se.

In the Ring case, we are talking about the individual dramas as components of a larger work. This would probably be one of the situations where <componentGrp> would come in handy as a child of <work> (which the customization allows already). I would be reluctant, however, to include them as four <work> elements directly under <workDesc>. To clarify what that would mean, it would be necessary to specify work-to-work relations. Furthermore, there wouldn't be any place to put metadata concerning *all* four works, since we would be at top level already.

Best,
Axel

-------------- next part --------------
An HTML attachment was scrubbed...
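The <componentGrp> solution Axel mentions might be sketched like this: one <work> for the cycle, whose metadata applies to all four dramas, with the dramas nested as component works. This is an illustrative sketch under the current customization, not a prescribed encoding.

```xml
<workDesc>
  <work>
    <title>Der Ring des Nibelungen</title>
    <!-- metadata placed at this level concerns all four dramas -->
    <componentGrp>
      <work><title>Das Rheingold</title></work>
      <work><title>Die Walküre</title></work>
      <work><title>Siegfried</title></work>
      <work><title>Götterdämmerung</title></work>
    </componentGrp>
  </work>
</workDesc>
```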
URL: <http://lists.uni-paderborn.de/pipermail/mei-l/attachments/20121113/5dc20bdb/attachment.html>

From stadler at edirom.de Wed Nov 14 09:56:20 2012
From: stadler at edirom.de (Peter Stadler)
Date: Wed, 14 Nov 2012 09:56:20 +0100
Subject: [MEI-L] FRBR in MEI
In-Reply-To: <0B6F63F59F405E4C902DFE2C2329D0D1514EF524@EXCHANGE-01.kb.dk>
References: <E2FAC7E5-29A2-473C-969F-DF97895E9C37@edirom.de> <0B6F63F59F405E4C902DFE2C2329D0D1514EF524@EXCHANGE-01.kb.dk>
Message-ID: <D4A79991-B98D-4ED5-9A6C-0BC428E50F54@edirom.de>

Dear Axel et al.,

being not very familiar with the topic and the corresponding discussion within MEI, I have just some minor notes and a further enquiry: I guess your email relates to issue 55 (http://code.google.com/p/music-encoding/issues/detail?id=55 -- opened in March), where I find some ODD files. Is the latest the customization you are talking about? Second, I see some references to articles in the ODD and the ticket. If anyone has some pointers or could mail them to me off-list, that'd be highly appreciated. Many thanks in advance!

Now for the note: Kevin Hawkins presented a paper in 2008 about "FRBR Group 1 Entities and the TEI Guidelines" [1]. Skimming through the text I find some very interesting points, mainly that the "idea of a hierarchy for FRBR is problematic" [2], and secondly that he inspects a whole range of elements within the text. That is to say, the application of the FRBR ontology is not necessarily restricted to the header.

Best
Peter

[1] http://www.ultraslavonic.info/preprints/20081102.pdf
[2] ibidem, p. 3

On 13.11.2012 at 11:33, Axel Teich Geertinger <atge at kb.dk> wrote:

> Dear list,
>
> as some of you know, we have been experimenting for some time with implementing FRBR group 1 entities (work, expression, manifestation, item; see http://www.ifla.org/publications/functional-requirements-for-bibliographic-records) in <meiHead> to be able to clearly distinguish these four levels of description.
With <source> translating to FRBR manifestation, we have two of them already (work and manifestation), so we have added the two others along with some container and linking elements. In my opinion, the results so far are very promising, and I think this is the time to discuss whether and how to go ahead with it. As with the <bibl> discussion, I need to know which way to head in order to get MerMEId ready for release. So, I hope we can get to an agreement about what is going to be in the next MEI schema, at least in general terms. This will probably get a bit lengthy, but I hope some of you will have a look at it. > > For the time being, we are using a customization (thanks to Johannes), extending the MEI 2012 schema as follows: > > 1) In <work>, we have introduced the element <expressionList>, a container for <expression> elements, which have the same content model as <work>. <expression> has a child <componentGrp>, an ordered list allowing <expression> child elements; it is intended to represent a sequence of sections such as movements. This can be nested; it enables us to describe the structure of, say, an opera, divided into acts, acts into scenes, and scenes into subsections of scenes (recitative, aria etc.), each with their own incipit etc. > Actually, <work> also allows <componentGrp> as a child, though we are not using it; I am not sure whether it may be useful or not. > > 2) Likewise, in <source>, we have an <itemList> containing <item> elements, i.e. descriptions of individual copies/exemplars of a source. As with <work> and <expression>, <source> and <item> share the same content model (though we do not use all elements at both levels - more on that later). Also <source> and <item> have an optional <componentGrp> to describe their constituents. > > 3) All four FRBR entity elements have a <relationList> child, containing <relation> elements. 
These establish the relations between entities not immediately deducible from the XML tree, such as expression-to-manifestation (these are the main links between the <workDesc> and <sourceDesc> sub-trees), manifestation-to-manifestation (for instance, identifying one source as a reprint or copy of another one; this also allows the encoding of a stemma), or external relations such as work-to-work.
> These new elements replace <relatedItem> now found in <work> and <source>.
>
> Apart from an agreement on the overall structure, there are a number of issues to address. I will try to list them here as well as I can, though I am sure there are more.
>
> 1) The naming of elements. For now, we have defined a generic <componentGrp> element available at all four levels, meaning that the schema allows for such rubbish as putting works into items, since <componentGrp> allows <work>, <expression>, <source>, and <item>. We may choose to rename them to <expressionComponents> etc., or something like that, in order to control their different contents, or leave it to the individual encoder (or schematron) to avoid such nonsense.
> Speaking of element names, the good old name <sourceDesc> does not sound quite right to me. Especially if we introduce <expressionList> and <itemList>, I think <sourceList> would be more appropriate. The actual description of sources is what I would expect to find *inside* <source>. But I know that may be too late to change...
>
> 2) The content models of <source> and <item>, respectively. Obviously, some (well, most) elements will be needed at both levels, but to minimize confusion, I would suggest a few restrictions. The most obvious one would be to move <physLoc> out of <physDesc> and allow it in <item> only. <watermark> may or may not be banned from <source> (would it make sense to describe the watermark of a print edition, or of individual copies only? I am not sure).
>
> 3) Variations FRBR (http://www.dlib.indiana.edu/projects/vfrbr/).
Would it be desirable to aim at offering fully VFRBR compliant encoding? It seems we are pretty close already, though not all VFRBR attributes are matched precisely by MEI elements. I must admit I have no opinion on whether it is important. > Perhaps the only element really missing from VFRBR compliancy is <extent> in <expression>. It should be no problem introducing it, I guess. > And now we're at it, certain projects have requested to be able to specify the duration (<extent>) of a work. Usually, this could and should be placed in <expression> rather than <work>, but what about the situation where the composer actually prescribes a specific duration for her work - an instruction that the actual expressions may or may not follow? > > 4) There is a problem possibly emerging from the notation-centric nature of MEI, or perhaps it is really a FRBR problem; namely the handling of performances and recordings. FRBR treats them both as expressions, i.e. as "siblings" to what I (and MerMEId) would regard as different versions of the work. We encode performances using <eventList> elements within expression/history, i.e. as (grand-)children of <expression>, which really makes sense to me. A performance must be of a certain version (form, instrumentation) of the work, so I strongly believe we should keep it this way. It's just not how FRBR sees it. On the other hand, as far as I can see there is nothing (except the practical an conceptual difficulties) that prevents users from encoding e performance or a recording as an expression, so FRBR compliance is probably possible also in this respect. I just wouldn't recommend it, and I actually suspect FRBR having a problem there rather than MEI. > I haven't looked into the details of recordings metadata yet, but I guess we'll have to address that too at some point. 
Without having given it much thought, I see two options here and now (apart from <expression>): <bibl> and <source>, depending on the recording's relation to the encoding. We may want to add a number of elements to <source> to accommodate recording information. > > 5) Finally, an issue related to the FRBR discussion, though not directly a consequence of it: MEI 2012 allows multiple <work> elements within <workDesc>. I can't think of any situation, however, in which it may be desirable to describe more than one work in a single file. On the contrary, it could easily cause a lot of confusion, so I would actually suggest allowing only one <work> element; in other words: either skip <workDesc> and have 1 optional <work> in <meiHead>, or keep <workDesc>, and change its content model to be the one used by <work> now. > > Any comments greatly appreciated :-) > > All the best, > Axel > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l -- Peter Stadler Carl-Maria-von-Weber-Gesamtausgabe Arbeitsstelle Detmold Gartenstr. 20 D-32756 Detmold Tel. +49 5231 975-665 Fax: +49 5231 975-668 stadler at weber-gesamtausgabe.de www.weber-gesamtausgabe.de From atge at kb.dk Wed Nov 14 11:10:49 2012 From: atge at kb.dk (Axel Teich Geertinger) Date: Wed, 14 Nov 2012 10:10:49 +0000 Subject: [MEI-L] FRBR in MEI In-Reply-To: <D4A79991-B98D-4ED5-9A6C-0BC428E50F54@edirom.de> References: <E2FAC7E5-29A2-473C-969F-DF97895E9C37@edirom.de> <0B6F63F59F405E4C902DFE2C2329D0D1514EF524@EXCHANGE-01.kb.dk> <D4A79991-B98D-4ED5-9A6C-0BC428E50F54@edirom.de> Message-ID: <0B6F63F59F405E4C902DFE2C2329D0D1514EFAFF@EXCHANGE-01.kb.dk> Hi Peter It is not exactly the customization we are using, which also includes the use of TEI:listBibl, but as to the FRBR stuff the one you can see from September seems to be up to date, as far as I can see. 
In the proposal, a hierarchy is imposed only on the relationship between work and expression, and between source and item. That's safe, since work and expression have a 1:n relationship, like source and item. There is no hierarchy implied between those two sub-trees in the encoding, i.e. between expression and manifestation. They are connected only by relations. Any other relations (to other levels or the same) can be defined too.

Best,
Axel

> -----Original message-----
> From: mei-l-bounces+atge=kb.dk at lists.uni-paderborn.de [mailto:mei-l-bounces+atge=kb.dk at lists.uni-paderborn.de] On behalf of Peter Stadler
> Sent: 14 November 2012 09:56
> To: Music Encoding Initiative
> Subject: Re: [MEI-L] FRBR in MEI
>
> Dear Axel et al.,
>
> being not very familiar with the topic and the corresponding discussion within MEI, I have just some minor notes and a further enquiry: I guess your email relates to issue 55 (http://code.google.com/p/music-encoding/issues/detail?id=55 -- opened in March), where I find some ODD files. Is the latest the customization you are talking about? Second, I see some references to articles in the ODD and the ticket. If anyone has some pointers or could mail them to me off-list, that'd be highly appreciated. Many thanks in advance!
>
> Now for the note: Kevin Hawkins presented a paper in 2008 about "FRBR Group 1 Entities and the TEI Guidelines" [1]. Skimming through the text I find some very interesting points, mainly that the "idea of a hierarchy for FRBR is problematic" [2], and secondly that he inspects a whole range of elements within the text. That is to say, the application of the FRBR ontology is not necessarily restricted to the header.
>
> Best
> Peter
>
> [1] http://www.ultraslavonic.info/preprints/20081102.pdf
> [2] ibidem, p.
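Axel's point can be sketched as two parallel 1:n trees bridged only by <relation> elements. In the fragment below all ids and @rel values are invented; the second relation illustrates the manifestation-to-manifestation (reprint/stemma) case from the proposal, and #expr-1 stands for an <expression> in the parallel <workDesc> sub-tree:

```xml
<sourceDesc>
  <source xml:id="src-A">
    <itemList>
      <!-- one physical copy/exemplar of src-A -->
      <item xml:id="item-A1"/>
    </itemList>
    <relationList>
      <!-- bridge to an <expression> in the workDesc sub-tree -->
      <relation rel="isEmbodimentOf" target="#expr-1"/>
    </relationList>
  </source>
  <source xml:id="src-B">
    <relationList>
      <!-- manifestation-to-manifestation: src-B is a reprint of src-A -->
      <relation rel="isReproductionOf" target="#src-A"/>
    </relationList>
  </source>
</sourceDesc>
```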
3 > > Am 13.11.2012 um 11:33 schrieb Axel Teich Geertinger <atge at kb.dk>: > > > Dear list, > > > > as some of you know, we have been experimenting for some time with > implementing FRBR group 1 entities (work, expression, manifestation, item; see > http://www.ifla.org/publications/functional-requirements-for-bibliographic-records) > in <meiHead> to be able to clearly distinguish these four levels of description. > With <source> translating to FRBR manifestation, we have two of them already > (work and manifestation), so we have added the two others along with some > container and linking elements. In my opinion, the results so far are very promising, > and I think this is the time to discuss whether and how to go ahead with it. As with > the <bibl> discussion, I need to know which way to head in order to get MerMEId > ready for release. So, I hope we can get to an agreement about what is going to > be in the next MEI schema, at least in general terms. This will probably get a bit > lengthy, but I hope some of you will have a look at it. > > > > For the time being, we are using a customization (thanks to Johannes), extending > the MEI 2012 schema as follows: > > > > 1) In <work>, we have introduced the element <expressionList>, a container > for <expression> elements, which have the same content model as <work>. > <expression> has a child <componentGrp>, an ordered list allowing <expression> > child elements; it is intended to represent a sequence of sections such as > movements. This can be nested; it enables us to describe the structure of, say, an > opera, divided into acts, acts into scenes, and scenes into subsections of scenes > (recitative, aria etc.), each with their own incipit etc. > > Actually, <work> also allows <componentGrp> as a child, though we are not > using it; I am not sure whether it may be useful or not. > > > > 2) Likewise, in <source>, we have an <itemList> containing <item> elements, > i.e. 
descriptions of individual copies/exemplars of a source. As with <work> and > <expression>, <source> and <item> share the same content model (though we do > not use all elements at both levels - more on that later). Also <source> and <item> > have an optional <componentGrp> to describe their constituents. > > > > 3) All four FRBR entity elements have a <relationList> child, containing > <relation> elements. These establish the relations between entities not immediately > deductable from the XML tree, such as expression-to-manifestation (these are the > main links between the <workDesc> and <sourceDesc> sub-trees), manifestation- > to-manifestation (for instance, identifying one source as a reprint or copy of > another one; this also allows the encoding of a stemma), or external relations such > as work-to-work. > > These new elements replace <relatedItem> now found in <work> and <source>. > > > > Apart from an agreement on the overall structure, there are a number of issues to > address. I will try to list them here as good as I can, though I am sure there are > more. > > > > 1) The naming of elements. For now, we have defined a generic > <componentGrp> element available at all four levels, meaning that the schema > allows for such rubbish as putting works into items, since <componentGrp> allows > <work>, <expression>, <source>, and <item>. We may choose renaming them into > <expressionComponents> etc. or something like that in order to control their > different contents, or leaving it to the individual encoder (or schematron) to avoid > such nonsense. > > Speaking of element names, the good old name <sourceDesc> does not sound > quite right to me. Especially if we introduce <expressionList> and <itemList>, I > think <sourceList> would be more appropriate. The actual description of sources is > what I would expect to find *inside* <source>. But I know that may too late to > change... > > > > 2) The content models of <source> and <item>, respectively. 
Obviously, > some (well, most) elements will be needed at both levels, but to minimize > confusion, I would suggest a few restrictions. The most obvious one would be to > move <physLoc> out of <physDesc> and allowing it in <item> only. <watermark> > may or may not be banned from <source> (would it make sense to describe the > watermark of a print edition, or of individual copies only? I am not sure). > > > > 3) Variations FRBR (http://www.dlib.indiana.edu/projects/vfrbr/). Would it be > desirable to aim at offering fully VFRBR compliant encoding? It seems we are > pretty close already, though not all VFRBR attributes are matched precisely by MEI > elements. I must admit I have no opinion on whether it is important. > > Perhaps the only element really missing from VFRBR compliancy is <extent> in > <expression>. It should be no problem introducing it, I guess. > > And now we're at it, certain projects have requested to be able to specify the > duration (<extent>) of a work. Usually, this could and should be placed in > <expression> rather than <work>, but what about the situation where the composer > actually prescribes a specific duration for her work - an instruction that the actual > expressions may or may not follow? > > > > 4) There is a problem possibly emerging from the notation-centric nature of > MEI, or perhaps it is really a FRBR problem; namely the handling of performances > and recordings. FRBR treats them both as expressions, i.e. as "siblings" to what I > (and MerMEId) would regard as different versions of the work. We encode > performances using <eventList> elements within expression/history, i.e. as (grand- > )children of <expression>, which really makes sense to me. A performance must > be of a certain version (form, instrumentation) of the work, so I strongly believe we > should keep it this way. It's just not how FRBR sees it. 
On the other hand, as far as > I can see there is nothing (except the practical an conceptual difficulties) that > prevents users from encoding e performance or a recording as an expression, so > FRBR compliance is probably possible also in this respect. I just wouldn't > recommend it, and I actually suspect FRBR having a problem there rather than > MEI. > > I haven't looked into the details of recordings metadata yet, but I guess we'll > have to address that too at some point. Without having given it much thought, I > see two options here and now (apart from <expression>): <bibl> and <source>, > depending on the recording's relation to the encoding. We may want to add a > number of elements to <source> to accommodate recording information. > > > > 5) Finally, an issue related to the FRBR discussion, though not directly a > consequence of it: MEI 2012 allows multiple <work> elements within <workDesc>. I > can't think of any situation, however, in which it may be desirable to describe more > than one work in a single file. On the contrary, it could easily cause a lot of > confusion, so I would actually suggest allowing only one <work> element; in other > words: either skip <workDesc> and have 1 optional <work> in <meiHead>, or keep > <workDesc>, and change its content model to be the one used by <work> now. > > > > Any comments greatly appreciated :-) > > > > All the best, > > Axel > > > > _______________________________________________ > > mei-l mailing list > > mei-l at lists.uni-paderborn.de > > https://lists.uni-paderborn.de/mailman/listinfo/mei-l > > > > -- > Peter Stadler > Carl-Maria-von-Weber-Gesamtausgabe > Arbeitsstelle Detmold > Gartenstr. 20 > D-32756 Detmold > Tel. 
+49 5231 975-665 > Fax: +49 5231 975-668 > stadler at weber-gesamtausgabe.de > www.weber-gesamtausgabe.de > > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l From pdr4h at eservices.virginia.edu Wed Nov 14 16:50:23 2012 From: pdr4h at eservices.virginia.edu (Roland, Perry (pdr4h)) Date: Wed, 14 Nov 2012 15:50:23 +0000 Subject: [MEI-L] FRBR in MEI In-Reply-To: <D4A79991-B98D-4ED5-9A6C-0BC428E50F54@edirom.de> References: <E2FAC7E5-29A2-473C-969F-DF97895E9C37@edirom.de> <0B6F63F59F405E4C902DFE2C2329D0D1514EF524@EXCHANGE-01.kb.dk>, <D4A79991-B98D-4ED5-9A6C-0BC428E50F54@edirom.de> Message-ID: <BBCC497C40D85642B90E9F94FC30343D0EFC38C1@GRANT.eservices.virginia.edu> Hi, Peter, The Riley paper is at http://www.lib.unc.edu/users/jlriley/presentations/ismir2008/riley.pdf. The Variations report covering FRBR Group1 entities can be found at http://www.dlib.indiana.edu/projects/variations3/docs/v3FRBRreport.pdf. The address for the Variations report covering Group2 and 3 entities and FRAD is http://www.dlib.indiana.edu/projects/variations3/docs/v3FRBRreportPhase2.pdf. Hawkins raises some interesting points, but they are out of scope for our current endeavor, which is focused on FRBR-izing the header only. -- p. __________________________ Perry Roland Music Library University of Virginia P. O. 
Box 400175 Charlottesville, VA 22904 434-982-2702 (w) pdr4h (at) virginia (dot) edu ________________________________________ From: mei-l-bounces+pdr4h=virginia.edu at lists.uni-paderborn.de [mei-l-bounces+pdr4h=virginia.edu at lists.uni-paderborn.de] on behalf of Peter Stadler [stadler at edirom.de] Sent: Wednesday, November 14, 2012 3:56 AM To: Music Encoding Initiative Subject: Re: [MEI-L] FRBR in MEI Dear Axel et al., beeing not very familiar with the topic and the correspondent discussion within MEI, I have just some minor notes and further enquiry: I guess your email relates to issue 55 (http://code.google.com/p/music-encoding/issues/detail?id=55 -- opened in March) where I find some ODD files. Is the latest the customization you are talking about? Second, I see some references to articles int the ODD and the ticket. If anyone has some pointers or could mail them to me off-list that'd be highly appreciated. Many thanks in advance! Now for the note: Kevin Hawkins presented a paper in 2008 about "FRBR Group 1 Entities and the TEI Guidelines" [1]. Skimming through the text I find some very interesting points, mainly that the "idea of a hierarchy for FRBR is problematic" [2] and secondly he is inspecting a whole range of elements within the text. That is to say, the application of the FRBR ontology is not necessarily restricted to the header. Best Peter [1] http://www.ultraslavonic.info/preprints/20081102.pdf [2] ibidem, p. 3 Am 13.11.2012 um 11:33 schrieb Axel Teich Geertinger <atge at kb.dk>: > Dear list, > > as some of you know, we have been experimenting for some time with implementing FRBR group 1 entities (work, expression, manifestation, item; see http://www.ifla.org/publications/functional-requirements-for-bibliographic-records) in <meiHead> to be able to clearly distinguish these four levels of description. 
With <source> translating to FRBR manifestation, we have two of them already (work and manifestation), so we have added the two others along with some container and linking elements. In my opinion, the results so far are very promising, and I think this is the time to discuss whether and how to go ahead with it. As with the <bibl> discussion, I need to know which way to head in order to get MerMEId ready for release. So, I hope we can get to an agreement about what is going to be in the next MEI schema, at least in general terms. This will probably get a bit lengthy, but I hope some of you will have a look at it. > > For the time being, we are using a customization (thanks to Johannes), extending the MEI 2012 schema as follows: > > 1) In <work>, we have introduced the element <expressionList>, a container for <expression> elements, which have the same content model as <work>. <expression> has a child <componentGrp>, an ordered list allowing <expression> child elements; it is intended to represent a sequence of sections such as movements. This can be nested; it enables us to describe the structure of, say, an opera, divided into acts, acts into scenes, and scenes into subsections of scenes (recitative, aria etc.), each with their own incipit etc. > Actually, <work> also allows <componentGrp> as a child, though we are not using it; I am not sure whether it may be useful or not. > > 2) Likewise, in <source>, we have an <itemList> containing <item> elements, i.e. descriptions of individual copies/exemplars of a source. As with <work> and <expression>, <source> and <item> share the same content model (though we do not use all elements at both levels - more on that later). Also <source> and <item> have an optional <componentGrp> to describe their constituents. > > 3) All four FRBR entity elements have a <relationList> child, containing <relation> elements. 
These establish the relations between entities not immediately deductable from the XML tree, such as expression-to-manifestation (these are the main links between the <workDesc> and <sourceDesc> sub-trees), manifestation-to-manifestation (for instance, identifying one source as a reprint or copy of another one; this also allows the encoding of a stemma), or external relations such as work-to-work. > These new elements replace <relatedItem> now found in <work> and <source>. > > Apart from an agreement on the overall structure, there are a number of issues to address. I will try to list them here as good as I can, though I am sure there are more. > > 1) The naming of elements. For now, we have defined a generic <componentGrp> element available at all four levels, meaning that the schema allows for such rubbish as putting works into items, since <componentGrp> allows <work>, <expression>, <source>, and <item>. We may choose renaming them into <expressionComponents> etc. or something like that in order to control their different contents, or leaving it to the individual encoder (or schematron) to avoid such nonsense. > Speaking of element names, the good old name <sourceDesc> does not sound quite right to me. Especially if we introduce <expressionList> and <itemList>, I think <sourceList> would be more appropriate. The actual description of sources is what I would expect to find *inside* <source>. But I know that may too late to change... > > 2) The content models of <source> and <item>, respectively. Obviously, some (well, most) elements will be needed at both levels, but to minimize confusion, I would suggest a few restrictions. The most obvious one would be to move <physLoc> out of <physDesc> and allowing it in <item> only. <watermark> may or may not be banned from <source> (would it make sense to describe the watermark of a print edition, or of individual copies only? I am not sure). > > 3) Variations FRBR (http://www.dlib.indiana.edu/projects/vfrbr/). 
Would it be desirable to aim at offering fully VFRBR compliant encoding? It seems we are pretty close already, though not all VFRBR attributes are matched precisely by MEI elements. I must admit I have no opinion on whether it is important. > Perhaps the only element really missing from VFRBR compliancy is <extent> in <expression>. It should be no problem introducing it, I guess. > And now we're at it, certain projects have requested to be able to specify the duration (<extent>) of a work. Usually, this could and should be placed in <expression> rather than <work>, but what about the situation where the composer actually prescribes a specific duration for her work - an instruction that the actual expressions may or may not follow? > > 4) There is a problem possibly emerging from the notation-centric nature of MEI, or perhaps it is really a FRBR problem; namely the handling of performances and recordings. FRBR treats them both as expressions, i.e. as "siblings" to what I (and MerMEId) would regard as different versions of the work. We encode performances using <eventList> elements within expression/history, i.e. as (grand-)children of <expression>, which really makes sense to me. A performance must be of a certain version (form, instrumentation) of the work, so I strongly believe we should keep it this way. It's just not how FRBR sees it. On the other hand, as far as I can see there is nothing (except the practical an conceptual difficulties) that prevents users from encoding e performance or a recording as an expression, so FRBR compliance is probably possible also in this respect. I just wouldn't recommend it, and I actually suspect FRBR having a problem there rather than MEI. > I haven't looked into the details of recordings metadata yet, but I guess we'll have to address that too at some point. 
Without having given it much thought, I see two options here and now (apart from <expression>): <bibl> and <source>, depending on the recording's relation to the encoding. We may want to add a number of elements to <source> to accommodate recording information. > > 5) Finally, an issue related to the FRBR discussion, though not directly a consequence of it: MEI 2012 allows multiple <work> elements within <workDesc>. I can't think of any situation, however, in which it may be desirable to describe more than one work in a single file. On the contrary, it could easily cause a lot of confusion, so I would actually suggest allowing only one <work> element; in other words: either skip <workDesc> and have 1 optional <work> in <meiHead>, or keep <workDesc>, and change its content model to be the one used by <work> now. > > Any comments greatly appreciated :-) > > All the best, > Axel > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l -- Peter Stadler Carl-Maria-von-Weber-Gesamtausgabe Arbeitsstelle Detmold Gartenstr. 20 D-32756 Detmold Tel. 
+49 5231 975-665 Fax: +49 5231 975-668 stadler at weber-gesamtausgabe.de www.weber-gesamtausgabe.de _______________________________________________ mei-l mailing list mei-l at lists.uni-paderborn.de https://lists.uni-paderborn.de/mailman/listinfo/mei-l From laurent at music.mcgill.ca Wed Nov 14 17:50:29 2012 From: laurent at music.mcgill.ca (Laurent Pugin) Date: Wed, 14 Nov 2012 17:50:29 +0100 Subject: [MEI-L] Layout discussion In-Reply-To: <14025_1352518566_509DCBA5_14025_31_1_6c100a48.000011c4.00000043@CCARH-ADM-2.su.win.stanford.edu> References: <E2FAC7E5-29A2-473C-969F-DF97895E9C37@edirom.de> <20121109142220.bugzt4mtgkso8ss0@webmail.iu.edu> <14025_1352518566_509DCBA5_14025_31_1_6c100a48.000011c4.00000043@CCARH-ADM-2.su.win.stanford.edu> Message-ID: <CAJ306HYV+Vy4NcNtAdgtsOKmBFx2HMZqSsNhLh0Y3XQaqQZ4ug@mail.gmail.com> Hi, First of all, so as not to overload the list, I would recommend that people interested in the discussion have a look at our ISMIR paper http://ismir2012.ismir.net/event/papers/505-ismir-2012.pdf The module as described in the paper works well for OMR, where we need to be able to store exact positions for all the elements on the page. We also tested it for comparing the content of several sources, where we end up with one single sub-tree with the musical content, with <app> and <rdg> for differences. We can call it the logical tree (e.g., note pitches and note durations). The positioning information is stored in sub-trees, one for each source. The link is accomplished by xml:id references in the layout sub-tree to elements in the logical sub-tree. In other words, the logical sub-tree is autonomous, which is not the case for the layout sub-trees. A while ago, we had a discussion about introducing a page-based representation in MEI: https://lists.uni-paderborn.de/pipermail/mei-l/2011/000280.html (very long thread!) At that point, this option did not appear to be necessary. 
However, as Johannes explained, we came to the conclusion in New Orleans that it might be a good idea to re-evaluate the relevance of such a page-based representation in the light of what we learned by designing the layout module, since they are quite similar in what we would like to achieve. We could say that the main difference is that with the layout module approach, the layout information is subordinated to a traditional content representation of the music (stored in another sub-tree), whereas with a page-based representation approach, everything is (or can be) represented in one single tree. As Johannes explained, we could also decide to have page-based representation files with only layout information, referring to other MEI files, and in that case they would act exactly like the layout module sub-trees. Best, Laurent On Sat, Nov 10, 2012 at 4:35 AM, Eleanor Selfridge-Field < esfield at stanford.edu> wrote: > I'm lurking too and pass on one practical issue: web browsers. I've been > noticing recently that > Chrome has trouble resolving the printing parameters of attachments > received on A4 paper (it may be the reverse elsewhere). Presumably some > intermediate software will always stand between MEI and any screen > display. > > For me screen layout is the next question after page layout. Instinct > tells me that separating semantic content from all layout and rendering > questions is the better path. The SCORE approach may be instructive: the > program uses virtual units to establish the desired aspect ratio. It also > has a virtual relationship to the physical page. Note though that in the > music printing industry page sizes may not have the same aspect ratio as > those used with desktop printers. (At the AMS I had a discussion about > this with Douglas Woodfill-Harris from Bärenreiter.) 
> > Eleanor > > > Eleanor Selfridge-Field > Consulting Professor, Music (and, by courtesy, Symbolic Systems) > Braun Music Center #129 > Stanford University > Stanford, CA 94305-3076, USA > http://www.stanford.edu/~esfield/ > > > > > > > > > -----Original Message----- > From: mei-l-bounces+esfield=stanford.edu at lists.uni-paderborn.de > [mailto:mei-l-bounces+esfield=stanford.edu at lists.uni-paderborn.de] On > Behalf Of Byrd, Donald A. > Sent: Friday, November 09, 2012 11:22 AM > To: Music Encoding Initiative; Johannes Kepper > Subject: Re: [MEI-L] Layout discussion > > I'm just emerging from a long period of lurking -- with a lot of ignoring > :-| . I (for one) would love to see "a thorough introduction of our layout > tree proposal and its specific qualities." > > --Don > > > On Thu, 8 Nov 2012 14:45:11 +0100, Johannes Kepper <kepper at edirom.de> > wrote: > > > Don't blame it on me -- some of you requested to have the following > > discussion completely in public, and I think we should follow that > > suggestion. I expect this thread to be somewhat lengthy. What we > > should try to resolve is how MEI deals with layout-specific > > information, its relationship to the semantic parts of MEI, separating > > out layout info in a similar way to how CSS does for HTML, relationship > > of various units (and types thereof), page-based approaches to MEI > > etc. > > > > I will try to summarize the model I briefly introduced in New Orleans > > last week. I would love to see Laurent replying to that, ideally with > > a thorough introduction of our layout tree proposal and its specific > > qualities. This should introduce lurkers to our current state, and > > from there, we can refer back to our proposal from last week and see > > where the discussion leads us... > > > > The basic situation is that we want to preserve differences between > > multiple sources. 
The most obvious way for doing this is to use the > > <app> / <rdg> elements, which provide a very intuitive way for doing > > so. But, as soon as we also want to preserve detailed information > > about layouts, we have to add many more attributes to each note etc., > > so it becomes more likely that differences between the sources will > > result in additional <app>s and <rdg>s. Eventually, this will lead to > > a separate <app> / <rdg> for almost every note. While this is still > > possible, it might be regarded as somewhat impractical. The > > alternative for this is to use one file, let's call it common.xml, to > > store all the commonalities between the sources, but also the most > > important differences. Basically, this file contains only @pname, @oct > > and @dur for every note. It will split up into <app>s and <rdg>s where > > the sources differ in this regard, but it will not consider stem > > directions, exact positioning etc. > > Every single source is also represented by a separate file, let's call > > it sourceXY.xml. These files do not contain <app>s and <rdg>s at all, > > they just reflect the musical text as given in the corresponding source. > > They contain elements for all notes etc., but they omit the basic > > attributes as specified in the common.xml. Instead, they use a > > reference to the corresponding elements in this file. Here's an > > example: > > > > common.xml: > > <note xml:id="noteX" pname="c" oct="4" dur="1"/> > > > > sourceXY.xml: > > <note stem.dir="up" sameas="common.xml#noteX"/> > > > > It is easily possible to point to a note within an <app>/<rdg> in case > > the basic parameters already differ. 
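As an editorial illustration, the pointer resolution this scheme requires can be sketched with Python's standard library. The inline strings below stand in for common.xml and sourceXY.xml from the example above; the attribute-merging rule (source attributes win, referenced basics are filled in) is an assumption for the sketch, not part of the proposal:

```python
import xml.etree.ElementTree as ET

# Stand-ins for common.xml and sourceXY.xml from the example above.
common = ET.fromstring('<score><note xml:id="noteX" pname="c" oct="4" dur="1"/></score>')
source = ET.fromstring('<score><note stem.dir="up" sameas="common.xml#noteX"/></score>')

XML_ID = "{http://www.w3.org/XML/1998/namespace}id"

# Index the common file once so each @sameas lookup is cheap.
by_id = {el.get(XML_ID): el for el in common.iter() if el.get(XML_ID) is not None}

# "Resolving all pointers": copy the basic attributes (@pname, @oct,
# @dur, ...) of the referenced common element onto the source element,
# keeping any attribute the source already sets itself.
for el in source.iter():
    ref = el.get("sameas")
    if ref and "#" in ref:
        target = by_id.get(ref.split("#", 1)[1])
        if target is not None:
            for name, value in target.attrib.items():
                if name != XML_ID:
                    el.attrib.setdefault(name, value)

note = source.find("note")
print(note.get("pname"), note.get("oct"), note.get("dur"), note.get("stem.dir"))
# -> c 4 1 up
```

With an XML database and an index on xml:ids, as suggested below, the same join scales from this toy pair of strings to full source files.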
This is cumbersome when done > > manually, but with an XML database and an index on xml:ids, it > > shouldn't be too hard. Also, it is still possible to extract > > information about differing stem directions etc., but it requires more > > processing than the <app>/<rdg> approach. Basically, this is a > > compromise somewhere in the middle between source separation and > > integration, which tries to address the most common cases in the most > > convenient way, while accepting some additional hurdles for corner > > cases. Of course it is open for discussion which attributes should go > > in the common.xml, and which attributes should be separated out to the > > individual source files. > > > > The benefit of this approach is that it completely relies on existing > > MEI -- it does not require any further additions to the standard and > > works "out of the box". > > > > But, it does not address all requests catered for with a distinct > > layout tree, which I hope Laurent will introduce. > > jo > > > > > > > > _______________________________________________ > > mei-l mailing list > > mei-l at lists.uni-paderborn.de > > https://lists.uni-paderborn.de/mailman/listinfo/mei-l > > > > > > -- > Donald Byrd > Woodrow Wilson Indiana Teaching Fellow > Adjunct Associate Professor of Informatics & Music Indiana University, > Bloomington > > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: <http://lists.uni-paderborn.de/pipermail/mei-l/attachments/20121114/eb31110f/attachment.html> From bohl at edirom.de Wed Nov 14 19:59:25 2012 From: bohl at edirom.de (Benjamin Wolff Bohl) Date: Wed, 14 Nov 2012 19:59:25 +0100 Subject: [MEI-L] FRBR in MEI In-Reply-To: <0B6F63F59F405E4C902DFE2C2329D0D1514EF677@EXCHANGE-01.kb.dk> References: <E2FAC7E5-29A2-473C-969F-DF97895E9C37@edirom.de> <0B6F63F59F405E4C902DFE2C2329D0D1514EF524@EXCHANGE-01.kb.dk> <45F7170D-0C66-4C59-9285-E328FDC45556@edirom.de> <0B6F63F59F405E4C902DFE2C2329D0D1514EF677@EXCHANGE-01.kb.dk> Message-ID: <50A3EA0D.3010107@edirom.de> Hi Axel, thanks for this huge insight into the FRBR customization. Having considered some recording metadata in the Freischütz project, I'll try to add my thoughts on this topic. Sorry for joining this discussion late; I had prepared this mail on the train this morning, then forgot to send it from work... See my comments inline. On 13.11.2012 13:21, Axel Teich Geertinger wrote: > > Hi Johannes, > > Thanks for your comments. Good and relevant as always. I think I > better leave it to the more technically skilled people to answer most > of it, but I have just a few comments. > > > > > > > > 4) There is a problem possibly emerging from the > notation-centric nature of > > > MEI, or perhaps it is really a FRBR problem; namely the handling of > performances > > > and recordings. FRBR treats them both as expressions, i.e. as > "siblings" to what I > > > (and MerMEId) would regard as different versions of the work. We encode > > > performances using <eventList> elements within expression/history, > i.e. as (grand- > > > )children of <expression>, which really makes sense to me. A > performance must > > > be of a certain version (form, instrumentation) of the work, so I > strongly believe we > > > should keep it this way. It's just not how FRBR sees it. 
On the > other hand, as far as > > > I can see there is nothing (except the practical and conceptual > difficulties) that > > > prevents users from encoding a performance or a recording as an > expression, so > > > FRBR compliance is probably possible also in this respect. I just > wouldn't > > > recommend it, and I actually suspect FRBR has a problem there > rather than > > > MEI. > > > > > > I haven't looked this up, but are you sure that performances and > recordings are on > > > the same level? I would see performances as expressions, while > recordings are > > > manifestations. Of course a performance follows a certain version of > a work, like > > > the piano version (=expression). But, the musician moves that to a > different > > > domain (graphical to audio), and he may or may not play the repeats, > and he may > > > or may not follow the dynamic indications of the score. There > certainly is a strong > > > relationship between both expressions, but they are distinct to me. > I see your > > > reasons for putting everything into an eventList, and thus subsuming > it under one > > > expression, but that might not always be the most appropriate model. > Sometimes, > > > it might be better to use separate expressions for the piano version > and its > > > performances and connect them with one or more relations. > > > > > Sorry, my mistake. Now that I look it up I see you are right: > performances are expressions, recordings are not. As I said, I haven't > really been looking into the recordings question yet. Here's an > example from the FRBR report: > > w1 J. S. 
Bach's Six suites for unaccompanied cello > > e1 performances by Janos Starker recorded partly in 1963 and completed > in 1965 > > m1 recordings released on 33 1/3 rpm sound discs in 1966 by Mercury > > m2 recordings re-released on compact disc in 1991 by Mercury > > e2 performances by Yo-Yo Ma recorded in 1983 > > m1 recordings released on 33 1/3 rpm sound discs in 1983 by CBS Records > > m2 recordings re-released on compact disc in 1992 by CBS Records > > So, recordings are no problem, I guess. But that still leaves us with > two very different ways of encoding performance data. FYI, we have > recently moved performance <eventList>s from <work> to <expression>, > so we do subsume them under a particular expression already. > First, I don't think that a recording and a performance are really two different things, but correct me if I'm missing something. The way to both of them is the same; only the recording might result in further manifestations. Let's say you have a copy of a specific recording of a work. Interpreting your record as an expression of the work is fine. Interpreting the recording session as an expression of the work can be rather problematic. The record you own is a manifestation of the recording (expression?), which in turn is the trans-medialization of specific performance material, having been worked with and modified by conductor and musicians in order to resemble the performance (manifestation), which again is based on a certain printed edition of the work (expression), possibly taking into account differences from other sources. Would one say that this makes the record inferior, nested deep inside the work-expression-manifestation tree of the written sources, or rather a sibling expression-manifestation tree of the same work, with strong relations between the two? I think moving the performance list to <expression> was the very right thing to do. 
Some further complications might arise from the following two thoughts: (a) The record may moreover be (and this is quite popular in recent years, especially with 'classical' music) the re-release of an older record (i.e. another manifestation of the same recording), but modified in order to fit the new medium, remastered and digitized and potentially even remixed (vinyl has certain physical implications for the nature of the sound, whilst CDs or digital audio have different ones). (b) The record doesn't come alone; it has a booklet, which could be referenced from <extent>? This booklet will incorporate texts by different persons, and again, if re-released, might incorporate the old booklet and add additional material. See you later, Benjamin > > > > 5) Finally, an issue related to the FRBR discussion, though not > directly a > > > consequence of it: MEI 2012 allows multiple <work> elements within > <workDesc>. I > > > can't think of any situation, however, in which it may be desirable > to describe more > > > than one work in a single file. On the contrary, it could easily > cause a lot of > > > confusion, so I would actually suggest allowing only one <work> > element; in other > > > words: either skip <workDesc> and have 1 optional <work> in > <meiHead>, or keep > > > <workDesc>, and change its content model to be the one used by > <work> now. > > > > > > Again, I think that this perspective is biased by your > application, where it makes > > > perfect sense. Consider you're working on Wagner's Ring. You might > want to say > > > something about all these works in just one file. All I want to say > is that this is a > > > modeling question, which is clearly project-specific. It seems > perfectly reasonable > > > to restrict merMEId to MEI instances with only one work, but I > wouldn't restrict MEI > > > to one work per file. 
This may result in preprocessing files before > operating on > them with merMEId, but we have similar situations for many other > aspects of MEI, > so this isn't bad per se. > > In the Ring case, we are talking about the individual dramas as > components of a larger work. This would probably be one of the > situations where <componentGrp> would come in handy as a child of > <work> (which the customization allows already). I would be reluctant, > however, to include them as four <work> elements directly under > <workDesc>. To clarify what that would mean, it would be necessary to > specify work-to-work relations. Furthermore, there wouldn't be any > place to put metadata concerning **all** four works, since we would be > at top level already. > > Best, > > Axel > > > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l -------------- next part -------------- An HTML attachment was scrubbed... URL: <http://lists.uni-paderborn.de/pipermail/mei-l/attachments/20121114/2d5a3452/attachment.html> From andrew.hankinson at mail.mcgill.ca Wed Nov 14 20:09:45 2012 From: andrew.hankinson at mail.mcgill.ca (Andrew Hankinson) Date: Wed, 14 Nov 2012 20:09:45 +0100 Subject: [MEI-L] Layout discussion In-Reply-To: <18941_1352911868_50A3CBFB_18941_82_5_CAJ306HYV+Vy4NcNtAdgtsOKmBFx2HMZqSsNhLh0Y3XQaqQZ4ug@mail.gmail.com> References: <E2FAC7E5-29A2-473C-969F-DF97895E9C37@edirom.de> <20121109142220.bugzt4mtgkso8ss0@webmail.iu.edu> <14025_1352518566_509DCBA5_14025_31_1_6c100a48.000011c4.00000043@CCARH-ADM-2.su.win.stanford.edu> <18941_1352911868_50A3CBFB_18941_82_5_CAJ306HYV+Vy4NcNtAdgtsOKmBFx2HMZqSsNhLh0Y3XQaqQZ4ug@mail.gmail.com> Message-ID: <1E4F9C9A-B281-4A1C-9031-2DC0C5AF4E61@mail.mcgill.ca> Hi, Thanks to Johannes and Laurent for sending along their explanations. 
I'm not able to make it to the meeting today (sorry!), so I'll put forward my 2 cents on this issue. In Johannes' approach, the primary benefit is that all information is maintained within a logical tree with no new mechanisms needed. Indeed, it is potentially more expressive, since different sources can include as much or as little information as needed to express the differences between it and the 'common' representation. As such, this proposal could operate both in the logical and in the graphical domain. This also makes hand encoding much easier, since an encoder can essentially do a one-pass encoding of each source and then use software to automate the generation of the 'common.xml' file. The drawbacks to this approach are that the logical and graphical differences are intermingled, simply by virtue of using the same mechanism to express them. This makes automated processing harder, since an app/rdg structure could represent a simple layout difference (e.g., a piece printed on octavo vs. quarto paper) or a correction in two sources (e.g., a wrong note in one edition that was corrected in a subsequent source). This also means that all source files must be distributed with the common file, and incurs extra processing and storage overhead for making sure that the links present in the common file can be resolved. Laurent's proposal seems much more targeted specifically at resolving layout differences and separating the graphical domain from the logical domain (content vs. presentation). The benefits are that it makes a clean separation between differences in musical content (represented by app/rdg) and graphical (represented by laidOutElement). This makes it very easy for parsers to ignore the layout tree completely if they are not interested in the graphical differences. 
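As an editorial illustration of this "parsers can ignore the layout tree" property: a content-only consumer reads the logical tree alone, while a layout-aware one joins in the per-source layout sub-tree via references. The element names <logical>, <layout>, <pos> and the @ref attribute below are hypothetical placeholders, not the actual vocabulary of the proposed module:

```python
import xml.etree.ElementTree as ET

# A toy document: one autonomous logical tree plus a per-source layout
# sub-tree that references it by id (placeholder vocabulary).
doc = ET.fromstring("""
<mei>
  <logical>
    <note xml:id="n1" pname="c" oct="4" dur="1"/>
    <note xml:id="n2" pname="e" oct="4" dur="2"/>
  </logical>
  <layout source="sourceA">
    <pos ref="n1" x="120" y="310"/>
    <pos ref="n2" x="164" y="298"/>
  </layout>
</mei>
""")

XML_ID = "{http://www.w3.org/XML/1998/namespace}id"

# A content-only consumer reads the logical tree and never descends into <layout>.
pitches = [n.get("pname") for n in doc.find("logical")]

# A layout-aware consumer joins the two trees through the references.
notes = {n.get(XML_ID): n for n in doc.find("logical")}
positions = {p.get("ref"): (int(p.get("x")), int(p.get("y"))) for p in doc.find("layout")}

print(pitches)          # -> ['c', 'e']
print(positions["n1"])  # -> (120, 310)
```

Dropping or swapping the <layout> sub-tree changes nothing in the first pass, which is the clean content/presentation separation being argued for here.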
The main drawback of Laurent's approach is that it adds a new set of elements to the MEI spec, and that it is slightly more difficult for hand encoding, since an encoder must do one pass to encode the logical content, and then subsequent passes to encode the different layouts in each source. To me, the layout tree argument makes more sense. With app/rdg and different source files you're interleaving logical and graphical differences; with the layout approach you're making a clean break, albeit at the expense of some expressiveness. There seems to be a lot of duplication of effort required in the source/common file structure, and I think it's a potential vector of confusion since it will be difficult to define 'the most important differences'. With the layout structure it's quite well defined that it only encodes positioning differences between two sources. So, for what it's worth you can mark me down as supporting the addition of the new layout tree since I think it creates a cleaner separation between the logical and presentational aspects of multiple sources. Thanks, -Andrew On 2012-11-14, at 5:50 PM, Laurent Pugin <laurent at music.mcgill.ca> wrote: > Hi, > > First of all, not to overload the list, I would recommend people interested in the discussion to have a look at our ismir paper > http://ismir2012.ismir.net/event/papers/505-ismir-2012.pdf > > The module as describe in the paper works well for OMR where we need to be able to store exact positions for all the elements on the page. We also tested it for comparing the content of several sources, where we end up with one single sub-tree with the musical content, with <app> and <rdg> for differences. We can call it the logical tree, (e.g., note pitches and note durations). The positioning information is stored in sub-trees, one for each sources. The link is accomplished by xml:id in the layout sub-tree referencing elements in the logical sub-tree. 
-------------- next part -------------- An HTML attachment was scrubbed... URL: <http://lists.uni-paderborn.de/pipermail/mei-l/attachments/20121114/f2b02243/attachment.html> From atge at kb.dk Wed Nov 14 20:36:26 2012 From: atge at kb.dk (Axel Teich Geertinger) Date: Wed, 14 Nov 2012 19:36:26 +0000 Subject: [MEI-L] FRBR in MEI In-Reply-To: <50A3EA0D.3010107@edirom.de> References: <E2FAC7E5-29A2-473C-969F-DF97895E9C37@edirom.de> <0B6F63F59F405E4C902DFE2C2329D0D1514EF524@EXCHANGE-01.kb.dk> <45F7170D-0C66-4C59-9285-E328FDC45556@edirom.de> <0B6F63F59F405E4C902DFE2C2329D0D1514EF677@EXCHANGE-01.kb.dk> <50A3EA0D.3010107@edirom.de> Message-ID: <0B6F63F59F405E4C902DFE2C2329D0D1514F0523@EXCHANGE-01.kb.dk> Hi Benni, some quick comments here and there... From: mei-l-bounces at lists.uni-paderborn.de [mailto:mei-l-bounces at lists.uni-paderborn.de] On behalf of Benjamin Wolff Bohl Sent: 14 November 2012 19:59 To: Music Encoding Initiative Subject: Re: [MEI-L] FRBR in MEI Hi Axel, thanks for this huge insight into the FRBR customization. Having considered some recording metadata in the Freischütz project, I'll try to add my thoughts on this topic. 
Sorry for joining this discussion late - I had prepared this mail this morning on the train, then forgot to send it from work... See my comments inline. On 13.11.2012 13:21, Axel Teich Geertinger wrote: Hi Johannes, Thanks for your comments. Good and relevant as always. I think I better leave it to the more technically skilled people to answer most of it, but I have just a few comments. > > > > 4) There is a problem possibly emerging from the notation-centric nature of > MEI, or perhaps it is really a FRBR problem; namely the handling of performances > and recordings. FRBR treats them both as expressions, i.e. as "siblings" to what I > (and MerMEId) would regard as different versions of the work. We encode > performances using <eventList> elements within expression/history, i.e. as > (grand-)children of <expression>, which really makes sense to me. A performance must > be of a certain version (form, instrumentation) of the work, so I strongly believe we > should keep it this way. It's just not how FRBR sees it. On the other hand, as far as > I can see there is nothing (except the practical and conceptual difficulties) that > prevents users from encoding a performance or a recording as an expression, so > FRBR compliance is probably possible also in this respect. I just wouldn't > recommend it, and I actually suspect FRBR having a problem there rather than > MEI. > > I haven't looked this up, but are you sure that performances and recordings are on > the same level? I would see performances as expressions, while recordings are > manifestations. Of course a performance follows a certain version of a work, like > the piano version (=expression). But, the musician moves that to a different > domain (graphical to audio), and he may or may not play the repeats, and he may > or may not follow the dynamic indications of the score. There certainly is a strong > relationship between both expressions, but they are distinct to me.
I see your > reasons for putting everything into an eventList, and thus subsuming it under one > expression, but that might not always be the most appropriate model. Sometimes, > it might be better to use separate expressions for the piano version and its > performances and connect them with one or more relations. > Sorry, my mistake. Now that I look it up I see you are right: performances are expressions, recordings are not. As I said, I haven't really been looking into the recordings question yet. Here's an example from the FRBR report:

w1 J. S. Bach's Six suites for unaccompanied cello
  e1 performances by Janos Starker recorded partly in 1963 and completed in 1965
    m1 recordings released on 33 1/3 rpm sound discs in 1966 by Mercury
    m2 recordings re-released on compact disc in 1991 by Mercury
  e2 performances by Yo-Yo Ma recorded in 1983
    m1 recordings released on 33 1/3 rpm sound discs in 1983 by CBS Records
    m2 recordings re-released on compact disc in 1992 by CBS Records

So, recordings are no problem, I guess. But that still leaves us with two very different ways of encoding performance data. FYI, we have recently moved performance <eventList>s from <work> to <expression>, so we do subsume them under a particular expression already. First, I don't think that a recording and a performance are really two different things, but correct me if I'm missing something. The way to both of them is the same, only the recording might result in further manifestations. That is exactly the problem I'm having with FRBR's view on performances. I think of performance and recording as quite parallel to printed and manuscript sources: a recording is a sort of "printed performance", i.e. one that may be reproduced in multiple copies and re-releases.
A performance, like a manuscript, is a unique "event", so just like a manuscript can only have one location (1 item), the performance manifestation also has just one item (it happens at a certain place at a certain time and is not repeatable until we invent time travel). And I tend to think that the performance must represent a specific expression of the work, but it may be more complex than that. However, to me all this indicates that performances should be treated as manifestations. But FRBR sees it differently. And since I treat them as events, I may have simply evaded the problem... Let's say you have a copy of a specific recording of a work. Interpreting your record as an expression of the work is fine. Interpreting the recording session as an expression of the work can be rather problematic. Well, MY record would be an item, i.e. a specific copy of the manifestation (the release). Right? The record you own is a manifestation of the recording (expression/?) which on the other hand will be the trans-medialization of specific performance material, having been worked with and modified by conductor and musicians in order to resemble the performance (manifestation), which again is based on a certain printed edition of the work (expression), possibly taking into account differences from other sources. Again, the one I actually own is an *item* of the manifestation (just to make sure we agree on that...) Would one say that this makes the record inferior and nested deep inside the work-expression-manifestation of the written sources, or rather a sibling expression-manifestation tree of the same work with strong relations to each other? I would say that the recording manifestation and the written manifestation used for the recording would have strong manifestation-to-manifestation relations, but that they would not *necessarily* have the same parent expression.
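[The model being discussed here - performance events subsumed under an expression, a recording as a sibling expression with its own manifestations - might look roughly like this in the FRBR customization. The element names <expression>, <eventList> and <event> follow the thread; the exact nesting and the <manifestation> element shown are assumptions for illustration, not the released schema:

```xml
<work>
  <title>Six suites for unaccompanied cello</title>
  <expression xml:id="e1">
    <title>Performances by Janos Starker, 1963-1965</title>
    <history>
      <eventList>
        <!-- the performances themselves, encoded as events of this expression -->
        <event>Recording sessions, 1963 and 1965</event>
      </eventList>
    </history>
    <!-- manifestations of this expression: the recording releases -->
    <manifestation>33 1/3 rpm sound discs, Mercury, 1966</manifestation>
    <manifestation>Compact disc re-release, Mercury, 1991</manifestation>
  </expression>
</work>
```

A second expression (e.g. the Yo-Yo Ma performances) would sit alongside e1, with relations linking each recording expression to the written expression it was performed from.]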
The musicians could have changed something, made cuts or other things not present in the performance material they played from. So we could also have sibling expressions here. I think you did exactly the right thing in moving the performance list to <expression>. Some further complications might arise from the following two thoughts: (a) The record may moreover be (and this is quite popular in recent years, especially with 'classical' music) the re-release of an older record (i.e. another manifestation of the same recording) but modified in order to fit the new medium, remastered and digitized and potentially even remixed (vinyl records have certain physical implications on the nature of the sound, whilst CDs or digital audio have different ones). No problem, as I see it. Like in the FRBR example above, that would be a new manifestation of the recording expression. (b) The record doesn't come alone, it has a booklet, which could be referenced from <extent>? This booklet will incorporate texts by different persons and again if re-released might incorporate the old booklet and add additional material. Good question. This would be a bundle of relations pointing in all directions, perhaps. I have no good answer to that right away... /axel See you later, Benjamin > > 5) Finally, an issue related to the FRBR discussion, though not directly a > consequence of it: MEI 2012 allows multiple <work> elements within <workDesc>. I > can't think of any situation, however, in which it may be desirable to describe more > than one work in a single file. On the contrary, it could easily cause a lot of > confusion, so I would actually suggest allowing only one <work> element; in other > words: either skip <workDesc> and have 1 optional <work> in <meiHead>, or keep > <workDesc>, and change its content model to be the one used by <work> now. > > Again, I think that this perspective is biased from your application, where it makes > perfect sense. Consider you're working on Wagner's Ring.
You might want to say > something about all these works in just one file. All I want to say is that this is a > modeling question, which is clearly project-specific. It seems perfectly reasonable > to restrict merMEId to MEI instances with only one work, but I wouldn't restrict MEI > to one work per file. This may result in preprocessing files before operating on > them with merMEId, but we have similar situations for many other aspects for MEI, > so this isn't bad per se. In the Ring case, we are talking about the individual dramas as components of a larger work. This would probably be one of the situations where <componentGrp> would come in handy as a child of <work> (which the customization allows already). I would be reluctant, however, to include them as four <work> elements directly under <workDesc>. To clarify what that would mean, it would be necessary to specify work-to-work relations. Furthermore, there wouldn't be any place to put metadata concerning *all* four works, since we would be at top level already. Best, Axel _______________________________________________ mei-l mailing list mei-l at lists.uni-paderborn.de<mailto:mei-l at lists.uni-paderborn.de> https://lists.uni-paderborn.de/mailman/listinfo/mei-l -------------- next part -------------- An HTML attachment was scrubbed... URL: <http://lists.uni-paderborn.de/pipermail/mei-l/attachments/20121114/186afe36/attachment.html> From bohl at edirom.de Thu Nov 15 09:41:59 2012 From: bohl at edirom.de (Benjamin Wolff Bohl) Date: Thu, 15 Nov 2012 09:41:59 +0100 Subject: [MEI-L] Report from Technical Team Meeting Message-ID: <50A4AAD7.2070800@edirom.de> Dear MEI-LIST:eners, yesterday on 14 November 2012 the MEI Technical Team held its (quarterly) meeting.
REPORTS To begin, Craig Sapp (Stanford) reported on his productive efforts extracting logical data from the SCORE format, which will allow a direct conversion to MEI (see also MEI-L Archive: https://lists.uni-paderborn.de/pipermail/mei-l/2012/000671.html). Moreover, Axel T. Geertinger (Copenhagen) announced a release of MerMEId and an open test installation for the end of this year or early next year. RELEASES A big issue under discussion was release strategies, especially in the light of recent discussions on the MEI-FRBR customization being implemented in MerMEId and the layout tree being implemented in Aruspix - both of which should become part of official MEI. We decided to first prepare a maintenance release (MEI v2.0.1) by the end of the year, and second to have another release incorporating the FRBR customization and maybe the layout-tree customization before the "Music Encoding Conference 2013" (see below). In this context the version numbering system of MEI was rediscussed, as to what kind of change (to schema or guidelines) would increment which digit in the version number. The final conclusion was: first digit for major changes (e.g. anything that introduces new models, new structure, or a new version of ODD), second digit for middling changes (more significant, probably breaking), and third digit for minor changes (mostly not breaking) - without restricting this to either specifications or guidelines. THE MUSIC ENCODING CONFERENCE 2013<http://www.music-encoding.org/conference> The Music Encoding Conference 2013 -- Concepts, Methods, Editions, to be held 22-24 May, 2013, at the Mainz Academy for Literature and Sciences in Mainz, Germany.
For further details visit: http://www.music-encoding.org/conference For the **CALL FOR ABSTRACTS** also see: https://lists.uni-paderborn.de/pipermail/mei-l/2012/000704.html

Important dates:
31 December 2012: Deadline for abstract submissions
31 January 2013: Notification of acceptance/rejection of submissions
21-24 May 2013: Conference
31 July 2013: Deadline for submission of full papers for conference proceedings
December 2013: Publication of conference proceedings

DISCUSSION STRATEGIES Although we try to discuss MEI issues as openly as possible on MEI-L, we have a separate mei-developer mailing list, mainly tracking the issues of the Google Code repository (https://code.google.com/p/music-encoding/issues/list). Sometimes discussion gets started there, which we apologize for. For the future we agreed on moving such discussions to MEI-L as soon as possible. FUTURE TEAM MEETINGS The MEI Technical Team had originally agreed on holding quarterly meetings. As preparation of the last release (including the first version of the guidelines) got hold of us, time went by and the meetings were almost forgotten. Future meetings will be quarterly again, with the next thus being around mid-February 2013. If anyone else wants to participate, feel free to contact us. If you have questions about this report and the past meeting, we are happy to answer them. In case the Council has no objections to our proposals, we will proceed as described above. With best wishes on behalf of the MEI Technical Team, Benjamin W. Bohl -------------- next part -------------- An HTML attachment was scrubbed...
URL: <http://lists.uni-paderborn.de/pipermail/mei-l/attachments/20121115/ee1b94cd/attachment.html> From laurent at music.mcgill.ca Fri Nov 16 11:04:09 2012 From: laurent at music.mcgill.ca (Laurent Pugin) Date: Fri, 16 Nov 2012 11:04:09 +0100 Subject: [MEI-L] Tag library Message-ID: <CAJ306Hbv6F5MZ7oPxFGHsKyN-Qn56xc_zrEK1g=WSimfa0ea6Q@mail.gmail.com> Hi, I cannot find the tag library on the new website - which looks great, BTW. Is it planned to have it again? I found it extremely useful. Best, Laurent -------------- next part -------------- An HTML attachment was scrubbed... URL: <http://lists.uni-paderborn.de/pipermail/mei-l/attachments/20121116/751fd02d/attachment.html> From raffaeleviglianti at gmail.com Fri Nov 16 11:20:11 2012 From: raffaeleviglianti at gmail.com (Raffaele Viglianti) Date: Fri, 16 Nov 2012 10:20:11 +0000 Subject: [MEI-L] Tag library In-Reply-To: <CAJ306Hbv6F5MZ7oPxFGHsKyN-Qn56xc_zrEK1g=WSimfa0ea6Q@mail.gmail.com> References: <CAJ306Hbv6F5MZ7oPxFGHsKyN-Qn56xc_zrEK1g=WSimfa0ea6Q@mail.gmail.com> Message-ID: <CAMyHAnNpBJqB1ckUX_nKmquLcKQ2MExiKi4n633j=nWq=UTu7A@mail.gmail.com> Hi Laurent, I found it very helpful too! +1 to getting it back. Raffaele On Fri, Nov 16, 2012 at 10:04 AM, Laurent Pugin <laurent at music.mcgill.ca> wrote: > Hi, > > I cannot find the tag library on the new website - which looks great, > BTW. Is it planned to have it again? I found it extremely useful. > > Best, > Laurent > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l > > -------------- next part -------------- An HTML attachment was scrubbed...
URL: <http://lists.uni-paderborn.de/pipermail/mei-l/attachments/20121116/ae68960d/attachment.html> From kepper at edirom.de Fri Nov 16 12:34:18 2012 From: kepper at edirom.de (Johannes Kepper) Date: Fri, 16 Nov 2012 12:34:18 +0100 Subject: [MEI-L] Tag library In-Reply-To: <CAMyHAnNpBJqB1ckUX_nKmquLcKQ2MExiKi4n633j=nWq=UTu7A@mail.gmail.com> References: <CAJ306Hbv6F5MZ7oPxFGHsKyN-Qn56xc_zrEK1g=WSimfa0ea6Q@mail.gmail.com> <CAMyHAnNpBJqB1ckUX_nKmquLcKQ2MExiKi4n633j=nWq=UTu7A@mail.gmail.com> Message-ID: <41FAF024-7559-46FF-83F7-1488A1FAEBC2@edirom.de> Dear all, the tag library was one of the two subpages that caused problems with the new layout (anyone missing the tutorial?). I have almost solved that, and will bring it back online this afternoon. I will put it in a separate archive, though, as this is the tag library for the 2010-05 release, not the current one. For the 2012 release, I would like to wait for the 2.0.1 release, as we have some issues in the 2.0.0 release which make it a little bit harder to put that online. Would it be sufficient to have just the PDF for that release, and an online version only for the subsequent releases? It won't take long to put the documentation online as soon as we have the release - I expect no more than a day. Best, jo On 16.11.2012 at 11:20, Raffaele Viglianti <raffaeleviglianti at gmail.com> wrote: > Hi Laurent, > > I found it very helpful too! +1 to getting it back. > > Raffaele > > > On Fri, Nov 16, 2012 at 10:04 AM, Laurent Pugin <laurent at music.mcgill.ca> wrote: > Hi, > > I cannot find the tag library on the new website - which looks great, BTW. Is it planned to have it again? I found it extremely useful.
> > Best, > Laurent > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l > > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l From laurent at music.mcgill.ca Fri Nov 16 13:49:14 2012 From: laurent at music.mcgill.ca (Laurent Pugin) Date: Fri, 16 Nov 2012 13:49:14 +0100 Subject: [MEI-L] Tag library In-Reply-To: <9313_1353065666_50A624C2_9313_57_1_41FAF024-7559-46FF-83F7-1488A1FAEBC2@edirom.de> References: <CAJ306Hbv6F5MZ7oPxFGHsKyN-Qn56xc_zrEK1g=WSimfa0ea6Q@mail.gmail.com> <CAMyHAnNpBJqB1ckUX_nKmquLcKQ2MExiKi4n633j=nWq=UTu7A@mail.gmail.com> <9313_1353065666_50A624C2_9313_57_1_41FAF024-7559-46FF-83F7-1488A1FAEBC2@edirom.de> Message-ID: <CAJ306HZeQ4KX1VX9sAW10fNMT59-AhMvjw0QWaJyE9qnbQbnew@mail.gmail.com> Thanks! Sounds perfectly sufficient to me to have 2.0.0 as PDF only. Laurent On Fri, Nov 16, 2012 at 12:34 PM, Johannes Kepper <kepper at edirom.de> wrote: > Dear all, > > the tag library was one of the two subpages that caused problems with the > new layout (anyone missing the tutorial?). I have solved that almost, and > will bring it back online this afternoon. I will put it in a separate > archive, though, as this is the tag library for the 2010-05 release, not > the current one. For the 2012 release, I would like to wait for the 2.0.1 > release, as we have some issues in the 2.0.0 release which make it a little > bit harder to put that online. Would it be sufficient to have just the PDF > for that release, and an online version only for the subsequent releases? > It won't take long to put the documentation online as soon as we have the > release, I expect no more than a day? > > Best, > jo > > > > Am 16.11.2012 um 11:20 schrieb Raffaele Viglianti < > raffaeleviglianti at gmail.com>: > > > Hi Laurent, > > > > I found it very helpful too! 
+1 to getting it back. > > > > Raffaele > > > > > > On Fri, Nov 16, 2012 at 10:04 AM, Laurent Pugin <laurent at music.mcgill.ca> > wrote: > > Hi, > > > > I cannot find the tag library on the new website - which looks great, > BTW. Is it planned to have it again? I found it extremely useful. > > > > Best, > > Laurent > > > > _______________________________________________ > > mei-l mailing list > > mei-l at lists.uni-paderborn.de > > https://lists.uni-paderborn.de/mailman/listinfo/mei-l > > > > > > _______________________________________________ > > mei-l mailing list > > mei-l at lists.uni-paderborn.de > > https://lists.uni-paderborn.de/mailman/listinfo/mei-l > > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l > > -------------- next part -------------- An HTML attachment was scrubbed... URL: <http://lists.uni-paderborn.de/pipermail/mei-l/attachments/20121116/48c16181/attachment.html> From kepper at edirom.de Fri Nov 16 15:59:33 2012 From: kepper at edirom.de (Johannes Kepper) Date: Fri, 16 Nov 2012 15:59:33 +0100 Subject: [MEI-L] Tag library In-Reply-To: <CAJ306HZeQ4KX1VX9sAW10fNMT59-AhMvjw0QWaJyE9qnbQbnew@mail.gmail.com> References: <CAJ306Hbv6F5MZ7oPxFGHsKyN-Qn56xc_zrEK1g=WSimfa0ea6Q@mail.gmail.com> <CAMyHAnNpBJqB1ckUX_nKmquLcKQ2MExiKi4n633j=nWq=UTu7A@mail.gmail.com> <9313_1353065666_50A624C2_9313_57_1_41FAF024-7559-46FF-83F7-1488A1FAEBC2@edirom.de> <CAJ306HZeQ4KX1VX9sAW10fNMT59-AhMvjw0QWaJyE9qnbQbnew@mail.gmail.com> Message-ID: <144C3EF0-EF2A-4CAB-B503-3C4C9297DFEC@edirom.de> Dear all, I have just finished the Archive, which contains the old 2010-05 tag library. It is available from http://music-encoding.org/archive/tagLibrary. Please let me know if you encounter any problems with this. The new tag library will be available under /documentation as soon as we have the 2.0.1 release of MEI 2012. Thanks for your patience.
Best regards, Johannes Am 16.11.2012 um 13:49 schrieb Laurent Pugin <laurent at music.mcgill.ca>: > Thanks! Sounds perfectly sufficient to me to have 2.0.0 as PDF only. > > Laurent > > On Fri, Nov 16, 2012 at 12:34 PM, Johannes Kepper <kepper at edirom.de> wrote: > Dear all, > > the tag library was one of the two subpages that caused problems with the new layout (anyone missing the tutorial?). I have solved that almost, and will bring it back online this afternoon. I will put it in a separate archive, though, as this is the tag library for the 2010-05 release, not the current one. For the 2012 release, I would like to wait for the 2.0.1 release, as we have some issues in the 2.0.0 release which make it a little bit harder to put that online. Would it be sufficient to have just the PDF for that release, and an online version only for the subsequent releases? It won't take long to put the documentation online as soon as we have the release, I expect no more than a day? > > Best, > jo > > > > Am 16.11.2012 um 11:20 schrieb Raffaele Viglianti <raffaeleviglianti at gmail.com>: > > > Hi Laurent, > > > > I found it very helpful too! +1 to getting it back. > > > > Raffaele > > > > > > On Fri, Nov 16, 2012 at 10:04 AM, Laurent Pugin <laurent at music.mcgill.ca> wrote: > > Hi, > > > > I cannot find the tag library on the new website - which looks great, BTW. Is it planned to have it again? I found it extremely useful. 
> > > > Best, > > Laurent > > > > _______________________________________________ > > mei-l mailing list > > mei-l at lists.uni-paderborn.de > > https://lists.uni-paderborn.de/mailman/listinfo/mei-l > > > > > > _______________________________________________ > > mei-l mailing list > > mei-l at lists.uni-paderborn.de > > https://lists.uni-paderborn.de/mailman/listinfo/mei-l > > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l > > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l From pdr4h at eservices.virginia.edu Fri Nov 16 16:41:04 2012 From: pdr4h at eservices.virginia.edu (Roland, Perry (pdr4h)) Date: Fri, 16 Nov 2012 15:41:04 +0000 Subject: [MEI-L] Tag library In-Reply-To: <41FAF024-7559-46FF-83F7-1488A1FAEBC2@edirom.de> References: <CAJ306Hbv6F5MZ7oPxFGHsKyN-Qn56xc_zrEK1g=WSimfa0ea6Q@mail.gmail.com> <CAMyHAnNpBJqB1ckUX_nKmquLcKQ2MExiKi4n633j=nWq=UTu7A@mail.gmail.com>, <41FAF024-7559-46FF-83F7-1488A1FAEBC2@edirom.de> Message-ID: <BBCC497C40D85642B90E9F94FC30343D0EFC3C17@GRANT.eservices.virginia.edu> Johannes, all, At this point, the original tutorial is somewhat stale. I think it should also go in the archive along with the 2010-05 tag library. In time, it could be replaced by the more recent instructional materials that Kristina and Maja have been working on, correct? -- p. __________________________ Perry Roland Music Library University of Virginia P. O. 
Box 400175 Charlottesville, VA 22904 434-982-2702 (w) pdr4h (at) virginia (dot) edu ________________________________________ From: mei-l-bounces at lists.uni-paderborn.de [mei-l-bounces at lists.uni-paderborn.de] on behalf of Johannes Kepper [kepper at edirom.de] Sent: Friday, November 16, 2012 6:34 AM To: Music Encoding Initiative Subject: Re: [MEI-L] Tag library Dear all, the tag library was one of the two subpages that caused problems with the new layout (anyone missing the tutorial?). I have solved that almost, and will bring it back online this afternoon. I will put it in a separate archive, though, as this is the tag library for the 2010-05 release, not the current one. For the 2012 release, I would like to wait for the 2.0.1 release, as we have some issues in the 2.0.0 release which make it a little bit harder to put that online. Would it be sufficient to have just the PDF for that release, and an online version only for the subsequent releases? It won't take long to put the documentation online as soon as we have the release, I expect no more than a day? Best, jo Am 16.11.2012 um 11:20 schrieb Raffaele Viglianti <raffaeleviglianti at gmail.com>: > Hi Laurent, > > I found it very helpful too! +1 to getting it back. > > Raffaele > > > On Fri, Nov 16, 2012 at 10:04 AM, Laurent Pugin <laurent at music.mcgill.ca> wrote: > Hi, > > I cannot find the tag library on the new website - which looks great, BTW. Is it planned to have it again? I found it extremely useful. 
> > Best, > Laurent > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l > > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l _______________________________________________ mei-l mailing list mei-l at lists.uni-paderborn.de https://lists.uni-paderborn.de/mailman/listinfo/mei-l From veit at weber-gesamtausgabe.de Fri Nov 16 17:02:02 2012 From: veit at weber-gesamtausgabe.de (Joachim Veit) Date: Fri, 16 Nov 2012 17:02:02 +0100 Subject: [MEI-L] Tag library In-Reply-To: <BBCC497C40D85642B90E9F94FC30343D0EFC3C17@GRANT.eservices.virginia.edu> References: <CAJ306Hbv6F5MZ7oPxFGHsKyN-Qn56xc_zrEK1g=WSimfa0ea6Q@mail.gmail.com> <CAMyHAnNpBJqB1ckUX_nKmquLcKQ2MExiKi4n633j=nWq=UTu7A@mail.gmail.com>, <41FAF024-7559-46FF-83F7-1488A1FAEBC2@edirom.de> <BBCC497C40D85642B90E9F94FC30343D0EFC3C17@GRANT.eservices.virginia.edu> Message-ID: <50A6637A.3000802@weber-gesamtausgabe.de> Hi Perry, this seems to be a very good idea - we shouldn't loose the original tutorial completely (maybe with the new release it may even be combined in a refreshed version with the very fine instructional materials from Kristina and Maja - which we all are very eager to see soon on the new website!...) It's really very helpful not to miss the old tag library until the new one has been established (and even later: an archive has always some advantages! - e.g. for easily reconstructing decisions etc.), because this is much more convenient to use than the complete PDF-file (even if I am full of admiration for that file!!) Best greetings, Joachim Am 16.11.12 16:41, schrieb Roland, Perry (pdr4h): > Johannes, all, > > At this point, the original tutorial is somewhat stale. I think it should also go in the archive along with the 2010-05 tag library. 
In time, it could be replaced by the more recent instructional materials that Kristina and Maja have been working on, correct? > > -- > p. > > > __________________________ > Perry Roland > Music Library > University of Virginia > P. O. Box 400175 > Charlottesville, VA 22904 > 434-982-2702 (w) > pdr4h (at) virginia (dot) edu > ________________________________________ > From: mei-l-bounces at lists.uni-paderborn.de [mei-l-bounces at lists.uni-paderborn.de] on behalf of Johannes Kepper [kepper at edirom.de] > Sent: Friday, November 16, 2012 6:34 AM > To: Music Encoding Initiative > Subject: Re: [MEI-L] Tag library > > Dear all, > > the tag library was one of the two subpages that caused problems with the new layout (anyone missing the tutorial?). I have solved that almost, and will bring it back online this afternoon. I will put it in a separate archive, though, as this is the tag library for the 2010-05 release, not the current one. For the 2012 release, I would like to wait for the 2.0.1 release, as we have some issues in the 2.0.0 release which make it a little bit harder to put that online. Would it be sufficient to have just the PDF for that release, and an online version only for the subsequent releases? It won't take long to put the documentation online as soon as we have the release, I expect no more than a day? > > Best, > jo > > > > Am 16.11.2012 um 11:20 schrieb Raffaele Viglianti <raffaeleviglianti at gmail.com>: > >> Hi Laurent, >> >> I found it very helpful too! +1 to getting it back. >> >> Raffaele >> >> >> On Fri, Nov 16, 2012 at 10:04 AM, Laurent Pugin <laurent at music.mcgill.ca> wrote: >> Hi, >> >> I cannot find the tag library on the new website - which looks great, BTW. Is it planned to have it again? I found it extremely useful. 
>> >> Best, >> Laurent >> >> _______________________________________________ >> mei-l mailing list >> mei-l at lists.uni-paderborn.de >> https://lists.uni-paderborn.de/mailman/listinfo/mei-l >> >> >> _______________________________________________ >> mei-l mailing list >> mei-l at lists.uni-paderborn.de >> https://lists.uni-paderborn.de/mailman/listinfo/mei-l > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l -------------- next part -------------- An attachment with binary data was scrubbed... Filename: veit.vcf Type: text/x-vcard Size: 364 bytes Description: not available URL: <http://lists.uni-paderborn.de/pipermail/mei-l/attachments/20121116/97684011/attachment.vcf> From bohl at edirom.de Fri Nov 16 17:34:26 2012 From: bohl at edirom.de (Benjamin Wolff Bohl) Date: Fri, 16 Nov 2012 17:34:26 +0100 Subject: [MEI-L] FRBR in MEI In-Reply-To: <0B6F63F59F405E4C902DFE2C2329D0D1514F0523@EXCHANGE-01.kb.dk> References: <E2FAC7E5-29A2-473C-969F-DF97895E9C37@edirom.de> <0B6F63F59F405E4C902DFE2C2329D0D1514EF524@EXCHANGE-01.kb.dk> <45F7170D-0C66-4C59-9285-E328FDC45556@edirom.de> <0B6F63F59F405E4C902DFE2C2329D0D1514EF677@EXCHANGE-01.kb.dk> <50A3EA0D.3010107@edirom.de> <0B6F63F59F405E4C902DFE2C2329D0D1514F0523@EXCHANGE-01.kb.dk> Message-ID: <50A66B12.3060105@edirom.de> Hi Axel et al., first, thanks for correcting my repeated mistake concerning the items, which certainly did not increase clarity. My head has continued working on FRBR's problematic handling of recordings, and I tried to graphically sort out my thoughts (as you can see in the attached image or [in case it doesn't go through] under http://homepages.uni-paderborn.de/bwbohl/MEI/Bohl_FRBR.jpg).
The graphic is sort of a table with work, expression, manifestation and item being the column labels. The contents first show a work with an edition and an autograph source, then a performance ("Interpretation by Kepper/Roland") and a recording (in red), and last some records (I just realize I missed putting an expression before the records, sorry for that). Blue lines show hierarchical dependencies whereas green lines indicate a "based upon" relationship. The idea behind it is that an interpretation of a work by a certain conductor could be viewed as an expression, with him conducting a certain orchestra being a manifestation and the actual performance on a certain date at a certain location being the item (physical by means of the sound waves ;-). A recording again is another expression of the work, although depending on a certain interpretation-performance. I'm not a FRBR expert so I don't know how this all conforms with the FRBR paper(s), but I would be happy to sort things out with you! /benjamin On 14.11.2012 20:36, Axel Teich Geertinger wrote: > > Hi Benni > > some quick comments here and there... > > *From:* mei-l-bounces at lists.uni-paderborn.de > [mailto:mei-l-bounces at lists.uni-paderborn.de] *On behalf of* Benjamin > Wolff Bohl > *Sent:* 14 November 2012 19:59 > *To:* Music Encoding Initiative > *Subject:* Re: [MEI-L] FRBR in MEI > > Hi Axel, > thanks for this huge insight into the FRBR-customization. Having > considered some recording metadata in the Freischütz project I'll try > to add my thoughts on this topic. > Sorry for joining this discussion late - I had prepared this mail > this morning on the train, then forgot to send it from work... > See my comments inline > > On 13.11.2012 13:21, Axel Teich Geertinger wrote: > > Hi Johannes, > > Thanks for your comments. Good and relevant as always. I think I > better leave it to the more technically skilled people to answer > most of it, but I have just a few comments.
> > > > > > > > 4) There is a problem possibly emerging from the > notation-centric nature of > > > MEI, or perhaps it is really a FRBR problem; namely the handling > of performances > > > and recordings. FRBR treats them both as expressions, i.e. as > "siblings" to what I > > > (and MerMEId) would regard as different versions of the work. We > encode > > > performances using <eventList> elements within > expression/history, i.e. as (grand- > > > )children of <expression>, which really makes sense to me. A > performance must > > > be of a certain version (form, instrumentation) of the work, so > I strongly believe we > > > should keep it this way. It's just not how FRBR sees it. On the > other hand, as far as > > > I can see there is nothing (except the practical and conceptual > difficulties) that > > > prevents users from encoding a performance or a recording as an > expression, so > > > FRBR compliance is probably possible also in this respect. I > just wouldn't > > > recommend it, and I actually suspect FRBR having a problem there > rather than > > > MEI. > > > > > > I haven't looked this up, but are you sure that performances and > recordings are on > > > the same level? I would see performances as expressions, while > recordings are > > > manifestations. Of course a performance follows a certain > version of a work, like > > > the piano version (=expression). But, the musician moves that to > a different > > > domain (graphical to audio), and he may or may not play the > repeats, and he may > > > or may not follow the dynamic indications of the score. There > certainly is a strong > > > relationship between both expressions, but they are distinct to > me. I see your > > > reasons for putting everything into an eventList, and thus > subsuming it under one > > > expression, but that might not always be the most appropriate > model. 
Sometimes, > > > it might be better to use separate expressions for the piano > version and its > > > performances and connect them with one or more relations. > > > > > Sorry, my mistake. Now that I look it up I see you are right: > performances are expressions, recordings are not. As I said, I > haven't really been looking into the recordings question yet. > Here's an example from the FRBR report: > > w1 J. S. Bach's Six suites for unaccompanied cello > > e1 performances by Janos Starker recorded partly in 1963 and > completed in 1965 > > m1 recordings released on 33 1/3 rpm sound discs in 1966 by Mercury > > m2 recordings re-released on compact disc in 1991 by Mercury > > e2 performances by Yo-Yo Ma recorded in 1983 > > m1 recordings released on 33 1/3 rpm sound discs in 1983 by CBS > Records > > m2 recordings re-released on compact disc in 1992 by CBS Records > > So, recordings are no problem, I guess. But that still leaves us > with two very different ways of encoding performance data. FYI, we > have recently moved performance <eventList>s from <work> to > <expression>, so we do subsume them under a particular expression > already. > > First, I don't think that a recording and a performance are really > two different things, but correct me if I'm missing something. The way > to both of them is the same, only the recording might result in > further manifestations. > > That is exactly the problem I'm having with FRBR's view on > performances. I think of performance and recording as quite parallel > to printed and manuscript sources: a recording is a sort of "printed > performance", i.e. one that may be reproduced in multiple copies and > re-releases. A performance, like a manuscript, is a unique "event", so > just like a manuscript can only have one location (1 item), the > performance manifestation also has just one item (it happens at a > certain place at a certain time and is not repeatable until we invent > time travel). 
And I tend to think that the performance must represent > a specific expression of the work, but it may be more complex than > that. However, to me all this indicates that performances should be > treated as manifestations. But FRBR sees it differently. And since I > treat them as events, I may have simply evaded the problem... > > > Let's say you have a copy of a specific recording of a work. > Interpreting your record as an expression of the work is fine. > Interpreting the recording session as an expression of the work can be > rather problematic. > > Well, MY record would be an item, i.e. a specific copy of the > manifestation (the release). Right? > > > The record you own is a manifestation of the recording (expression/?) > which on the other hand will be the trans-medialization of specific > performance material, having been worked with and modified by > conductor and musicians in order to resemble the performance > (manifestation) which again is based on a certain printed edition of > the work (expression), possibly taking into account differences from > other sources. > > Again, the one I actually own is an *item* of the manifestation (just > to make sure we agree on that...) > > Would one say that this makes the record inferior and nested deep > inside the work-expression-manifestation of the written sources or > rather a sibling expression-manifestation tree of the same work with > strong relations to each other? > > I would say that the recording manifestation and the written > manifestation used for the recording would have strong > manifestation-to-manifestation relations, but that they would not > **necessarily** have the same parent expression. The musicians could > have changed something, made cuts or other things not present in the > performance material they played from. So we could also have sibling > expressions here. > > > I think it's the very right thing you did in moving the performance > list to <expression>. 
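The structure Axel describes in this exchange — performance <eventList>s subsumed under the <expression> they realize — might look roughly as follows in MEI. This is only an illustrative sketch of the structure being discussed, not schema-validated encoding: <work>, <expression>, <eventList>, and <event> are named in the thread itself, but <expressionList>, the <history> wrapper's exact content model, and the contents of <event> are assumptions here.

```xml
<work>
  <titleStmt>
    <title>Six suites for unaccompanied cello</title>
  </titleStmt>
  <expressionList>
    <expression>
      <titleStmt>
        <title>Version for violoncello solo</title>
      </titleStmt>
      <history>
        <!-- performances recorded as events under the expression they realize -->
        <eventList>
          <event>
            <date>1963-1965</date>
            <p>Performances by Janos Starker</p>
          </event>
        </eventList>
      </history>
    </expression>
  </expressionList>
</work>
```

The point of this layout, as argued in the thread, is that a performance is always a performance of some particular version of the work, so it hangs off that version rather than standing beside it.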
> > Some further complications might arise from the following two thoughts: > (a) The record may moreover be (and this is quite popular in recent > years, especially with 'classical' music) the re-release of an older > record (i.e. another manifestation of the same recording) but modified > in order to fit the new medium, remastered and digitized and > potentially even remixed (vinyls have certain physical implications on > the nature of the sound, whilst CDs or digital audio have different ones). > > No problem, as I see it. Like in the FRBR example above, that would be > a new manifestation of the recording expression. > > > (b) The record doesn't come alone, it has a booklet, which could be > referenced from <extent>? This booklet will incorporate texts by > different persons, and again, if re-released, might incorporate the old > booklet and add additional material. > > Good question. This would be a bundle of relations pointing in all > directions, perhaps. I have no good answer to that right away... > > /axel > > > > See you later, > Benjamin > > > > 5) Finally, an issue related to the FRBR discussion, though not > directly a > > > consequence of it: MEI 2012 allows multiple <work> elements within > <workDesc>. I > > > can't think of any situation, however, in which it may be desirable > to describe more > > > than one work in a single file. On the contrary, it could easily > cause a lot of > > > confusion, so I would actually suggest allowing only one <work> > element; in other > > > words: either skip <workDesc> and have 1 optional <work> in > <meiHead>, or keep > > > <workDesc>, and change its content model to be the one used by > <work> now. > > > > > > Again, I think that this perspective is biased from your > application, where it makes > > > perfect sense. Consider you're working on Wagner's Ring. You might > want to say > > > something about all these works in just one file. 
All I want to say > is that this is a > modeling question, which is clearly project-specific. It seems > perfectly reasonable > to restrict merMEId to MEI instances with only one work, but I > wouldn't restrict MEI > to one work per file. This may result in preprocessing files before > operating on > them with merMEId, but we have similar situations for many other > aspects for MEI, > so this isn't bad per se. > > In the Ring case, we are talking about the individual dramas as > components of a larger work. This would probably be one of the > situations where <componentGrp> would come in handy as a child of > <work> (which the customization allows already). I would be reluctant, > however, to include them as four <work> elements directly under > <workDesc>. To clarify what that would mean, it would be necessary to > specify work-to-work relations. Furthermore, there wouldn't be any > place to put metadata concerning **all** four works, since we would be > at top level already. > > Best, > > Axel > > > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de <mailto:mei-l at lists.uni-paderborn.de> > https://lists.uni-paderborn.de/mailman/listinfo/mei-l > > > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l -------------- next part -------------- An HTML attachment was scrubbed... URL: <http://lists.uni-paderborn.de/pipermail/mei-l/attachments/20121116/857e7425/attachment.html> -------------- next part -------------- A non-text attachment was scrubbed... 
Name: Bohl_FRBR.jpg Type: image/jpeg Size: 262190 bytes Desc: not available URL: <http://lists.uni-paderborn.de/pipermail/mei-l/attachments/20121116/857e7425/attachment.jpg> From kepper at edirom.de Fri Nov 16 18:10:59 2012 From: kepper at edirom.de (Johannes Kepper) Date: Fri, 16 Nov 2012 18:10:59 +0100 Subject: [MEI-L] Tag library In-Reply-To: <50A6637A.3000802@weber-gesamtausgabe.de> References: <CAJ306Hbv6F5MZ7oPxFGHsKyN-Qn56xc_zrEK1g=WSimfa0ea6Q@mail.gmail.com> <CAMyHAnNpBJqB1ckUX_nKmquLcKQ2MExiKi4n633j=nWq=UTu7A@mail.gmail.com>, <41FAF024-7559-46FF-83F7-1488A1FAEBC2@edirom.de> <BBCC497C40D85642B90E9F94FC30343D0EFC3C17@GRANT.eservices.virginia.edu> <50A6637A.3000802@weber-gesamtausgabe.de> Message-ID: <0776A422-7C08-4BB7-98B5-81C5D7833BED@edirom.de> Dear all, we certainly won't drop the tutorial completely. The problem there is that it needs to be rewritten to some degree, whereas the rest (except the tag library) just needed a new CSS. I prioritized the tag library, and actually, my priority for the tutorial is not very high. It will go in the archive, but it has to wait for its slot in the todo list. Kristina's tutorials will go on the website under /support, and actually, I think they are more important than the old one. The problem there is that they've been built with a rather exotic tool, and while they are available in HTML, I still have to figure out how to include all the Javascript they rely on. Just give me some time to adapt them to the website architecture. Of course, all materials from older releases etc. will go in the archive. What else will happen with the website? We will incorporate the service at custom.music-encoding.org into the regular site, probably somewhere under /tools (not available yet). This depends on several things, including better documentation of that web service, and I really can't predict when it will happen. It might even be next year. 
Besides that, we will introduce a section with workshop materials, slides, posters etc. under /downloads. We will probably also have a corresponding section in the /archive. So if you have any material about MEI you would like to share, just contact me, and I'll put it online as soon as we have those sites. Best regards, Johannes Am 16.11.2012 um 17:02 schrieb Joachim Veit: > Hi Perry, > this seems to be a very good idea - we shouldn't lose the original tutorial completely (maybe with the new release it may even be combined in a refreshed version with the very fine instructional materials from Kristina and Maja - which we all are very eager to see soon on the new website!...) > It's really very helpful not to miss the old tag library until the new one has been established (and even later: an archive always has some advantages! - e.g. for easily reconstructing decisions etc.), because this is much more convenient to use than the complete PDF file (even if I am full of admiration for that file!!) > Best greetings, > Joachim > > > > Am 16.11.12 16:41, schrieb Roland, Perry (pdr4h): >> Johannes, all, >> >> At this point, the original tutorial is somewhat stale. I think it should also go in the archive along with the 2010-05 tag library. In time, it could be replaced by the more recent instructional materials that Kristina and Maja have been working on, correct? >> >> -- >> p. >> >> >> __________________________ >> Perry Roland >> Music Library >> University of Virginia >> P. O. 
Box 400175 >> Charlottesville, VA 22904 >> 434-982-2702 (w) >> pdr4h (at) virginia (dot) edu >> ________________________________________ >> From: mei-l-bounces at lists.uni-paderborn.de [mei-l-bounces at lists.uni-paderborn.de] on behalf of Johannes Kepper [kepper at edirom.de] >> Sent: Friday, November 16, 2012 6:34 AM >> To: Music Encoding Initiative >> Subject: Re: [MEI-L] Tag library >> >> Dear all, >> >> the tag library was one of the two subpages that caused problems with the new layout (anyone missing the tutorial?). I have almost solved that, and will bring it back online this afternoon. I will put it in a separate archive, though, as this is the tag library for the 2010-05 release, not the current one. For the 2012 release, I would like to wait for the 2.0.1 release, as we have some issues in the 2.0.0 release which make it a little bit harder to put that online. Would it be sufficient to have just the PDF for that release, and an online version only for the subsequent releases? It won't take long to put the documentation online as soon as we have the release; I expect no more than a day... >> >> Best, >> jo >> >> >> >> Am 16.11.2012 um 11:20 schrieb Raffaele Viglianti <raffaeleviglianti at gmail.com>: >> >>> Hi Laurent, >>> >>> I found it very helpful too! +1 to getting it back. >>> >>> Raffaele >>> >>> >>> On Fri, Nov 16, 2012 at 10:04 AM, Laurent Pugin <laurent at music.mcgill.ca> wrote: >>> Hi, >>> >>> I cannot find the tag library on the new website - which looks great, BTW. Is it planned to have it again? I found it extremely useful. 
>>> >>> Best, >>> Laurent >>> >>> _______________________________________________ >>> mei-l mailing list >>> mei-l at lists.uni-paderborn.de >>> https://lists.uni-paderborn.de/mailman/listinfo/mei-l >>> >>> >>> _______________________________________________ >>> mei-l mailing list >>> mei-l at lists.uni-paderborn.de >>> https://lists.uni-paderborn.de/mailman/listinfo/mei-l >> >> _______________________________________________ >> mei-l mailing list >> mei-l at lists.uni-paderborn.de >> https://lists.uni-paderborn.de/mailman/listinfo/mei-l >> _______________________________________________ >> mei-l mailing list >> mei-l at lists.uni-paderborn.de >> https://lists.uni-paderborn.de/mailman/listinfo/mei-l > > <veit.vcf>_______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l From kepper at edirom.de Fri Nov 16 18:46:57 2012 From: kepper at edirom.de (Johannes Kepper) Date: Fri, 16 Nov 2012 18:46:57 +0100 Subject: [MEI-L] FRBR in MEI In-Reply-To: <50A66B12.3060105@edirom.de> References: <E2FAC7E5-29A2-473C-969F-DF97895E9C37@edirom.de> <0B6F63F59F405E4C902DFE2C2329D0D1514EF524@EXCHANGE-01.kb.dk> <45F7170D-0C66-4C59-9285-E328FDC45556@edirom.de> <0B6F63F59F405E4C902DFE2C2329D0D1514EF677@EXCHANGE-01.kb.dk> <50A3EA0D.3010107@edirom.de> <0B6F63F59F405E4C902DFE2C2329D0D1514F0523@EXCHANGE-01.kb.dk> <50A66B12.3060105@edirom.de> Message-ID: <4DF0F169-AEB5-4F46-8220-828B90F928FF@edirom.de> Hi Benni, your interpretation of FRBR is just wrong. I agree that the handling of performances is not the most intuitive concept, but it seems quite consistent to me. You might want to look at the official specification document (http://www.ifla.org/files/assets/cataloguing/frbr/frbr_2008.pdf, official translations available from http://www.ifla.org/publications/translations-of-frbr). A work is a totally abstract idea of something. An expression is a form of this work. 
It is catered for a specific instrumentation, and is set up for a specific purpose, but it's still nothing physical. A manifestation is the method of preserving an expression, of converting it to a physical thing. An item is the result of the manifestation; it is the physical thing. Let's consider a work called "Schreifütz". This is a very abstract thing, which has a relation to a different work called "Freischütz", being a parody of it. The "Schreifütz" exists in a version for nose flute and harpsichord, and a version for full orchestra. Those are expressions. If there is a print of the orchestra version, the whole print run would be the manifestation (in MEI called source), and the individual copies would be items. This helps to distinguish between the features common to all copies and those of individual copies (pencil markings etc.). If there is a manuscript of the nose flute version, the information about it would be spread between the manifestation (source) and the item. This is probably the most annoying compromise in FRBR, but it allows the model to be consistent across different media types. It's just not intuitive... A manifestation follows its expression exactly; by definition there is no or nearly no difference. So if you have two more measures in a source, this source establishes a new expression in FRBR. This might not reflect traditional editorial concepts, but matches very well with genetic approaches. This required conformance between sources / manifestations of one expression is not restricted to the music, but explicitly includes the instrumentation: If you have another manuscript of the nose flute version, where the harpsichord is replaced by a piano, it would be a separate expression already. FRBR allows some leeway here, but officially the slightest change results in a new expression. Following these arguments, a performance is clearly an expression. Different musicians will result in a different expression. 
I'm not sure how to model repeated performances (Broadway shows?), but let's put that aside for now. You're right, you can't hold a performance in your hand. If you want to preserve a performance, you record it, that is, you manifest it on CD, tape, whatever. The recording will always reflect the version as given during the performance, and it will result in a number of items. Of course the recording / manifestation has certain technical effects on the content of these items, but the same is true for prints: an engraved copy will show different slurs from a typeset copy, which in turn differs from Craig's SCORE file. Those are artifacts of the technical process of manifesting an expression into items. Now, if you and I perform the version for nose flute and harpsichord, what's the relationship between the nose flute / harpsichord expression and our performance? The latter is based upon the former, but they are clearly different. Maybe the original is less defined than ours (we already prescribe the performers being us), but that's a sibling relationship. We depend on this other one, but we create a new one, just like the composer correcting a preprint copy creates a new expression from it. If someone tries to implement FRBR completely, it would result in a whole bunch of expressions, manifestations, and items. I guess not even librarians would do that to the extreme. That's why I think that Axel's compromise of putting performances in an eventList inside the expression is perfectly reasonable, especially since someone could use XSLT to extract them into separate expressions if needed. I hope my short explanation of FRBR was clear enough. If you have further questions, I'm happy to give it another try. 
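Johannes's Schreifütz example can be laid out in the outline notation of the FRBR report quoted earlier in the thread. This is only a sketch of his hypothetical example; the grouping of levels follows his prose above, and the "based upon" annotation marks the sibling relationship he describes between the notated version and the performance:

```text
w1  "Schreifütz" (a parody of the work "Freischütz")
  e1  version for nose flute and harpsichord
    m1  manuscript (in MEI, a source)
      i1  the single physical manuscript
  e2  version for full orchestra
    m1  printed edition (the whole print run; in MEI, a source)
      i1, i2, ...  individual copies, possibly with pencil markings
  e3  performance of the nose flute / harpsichord version  [based upon e1]
    m1  recording of that performance
      i1, i2, ...  CDs, tapes, etc.
```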
Although I'm pretty sure that Axel, Kristina, and others can explain it better than me :-) Johannes Am 16.11.2012 um 17:34 schrieb Benjamin Wolff Bohl: > Hi Axel et al., > first, thanks for correcting my repeated mistake concerning the items, which certainly did not increase clarity. > My mind has kept working on the problems of FRBR's handling of recordings, and I have tried to sort out my thoughts graphically (as you can see in the attached image or [in case it doesn't go through] under http://homepages.uni-paderborn.de/bwbohl/MEI/Bohl_FRBR.jpg). > The graphic is sort of a table, with work, expression, manifestation and item as the column labels. The contents first show a work with an edition and an autograph source, then a performance ("Interpretation by Kepper/Roland") and a recording (in red), and last some records (I just realized I forgot to put an expression before the records, sorry for that). Blue lines show hierarchical dependencies, whereas green lines indicate a "based upon" relationship. > The idea behind it is that an interpretation of a work by a certain conductor could be viewed as an expression, with him conducting a certain orchestra being a manifestation, and the actual performance on a certain date at a certain location being the item (physical by means of the sound waves ;-) > A recording, again, is another expression of the work, although it depends on a certain interpretation/performance > > I'm not a FRBR expert, so I don't know how all this conforms with the FRBR paper(s), but I would be happy to sort things out with you! > > /benjamin > > Am 14.11.2012 20:36, schrieb Axel Teich Geertinger: >> Hi Benni >> >> some quick comments here and there... >> >> Fra: mei-l-bounces at lists.uni-paderborn.de [mailto:mei-l-bounces at lists.uni-paderborn.de] På vegne af Benjamin Wolff Bohl >> Sendt: 14. november 2012 19:59 >> Til: Music Encoding Initiative >> Emne: Re: [MEI-L] FRBR in MEI >> >> Hi Axel, >> thanks for this huge insight into the FRBR customization. 
Having considered some recording metadata in the Freischütz project, I'll try to add my thoughts on this topic. >> Sorry for joining this discussion late; I had prepared this mail this morning on the train, then forgot to send it from work... >> See my comments inline >> >> Am 13.11.2012 13:21, schrieb Axel Teich Geertinger: >> Hi Johannes, >> >> Thanks for your comments. Good and relevant as always. I think I better leave it to the more technically skilled people to answer most of it, but I have just a few comments. >> >> > > >> > > 4) There is a problem possibly emerging from the notation-centric nature of >> > MEI, or perhaps it is really a FRBR problem; namely the handling of performances >> > and recordings. FRBR treats them both as expressions, i.e. as "siblings" to what I >> > (and MerMEId) would regard as different versions of the work. We encode >> > performances using <eventList> elements within expression/history, i.e. as (grand- >> > )children of <expression>, which really makes sense to me. A performance must >> > be of a certain version (form, instrumentation) of the work, so I strongly believe we >> > should keep it this way. It's just not how FRBR sees it. On the other hand, as far as >> > I can see there is nothing (except the practical and conceptual difficulties) that >> > prevents users from encoding a performance or a recording as an expression, so >> > FRBR compliance is probably possible also in this respect. I just wouldn't >> > recommend it, and I actually suspect FRBR having a problem there rather than >> > MEI. >> > >> > I haven't looked this up, but are you sure that performances and recordings are on >> > the same level? I would see performances as expressions, while recordings are >> > manifestations. Of course a performance follows a certain version of a work, like >> > the piano version (=expression). 
But, the musician moves that to a different >> > domain (graphical to audio), and he may or may not play the repeats, and he may >> > or may not follow the dynamic indications of the score. There certainly is a strong >> > relationship between both expressions, but they are distinct to me. I see your >> > reasons for putting everything into an eventList, and thus subsuming it under one >> > expression, but that might not always be the most appropriate model. Sometimes, >> > it might be better to use separate expressions for the piano version and its >> > performances and connect them with one or more relations. >> > >> >> Sorry, my mistake. Now that I look it up I see you are right: performances are expressions, recordings are not. As I said, I haven't really been looking into the recordings question yet. Here's an example from the FRBR report: >> >> w1 J. S. Bach's Six suites for unaccompanied cello >> e1 performances by Janos Starker recorded partly in 1963 and completed in 1965 >> m1 recordings released on 33 1/3 rpm sound discs in 1966 by Mercury >> m2 recordings re-released on compact disc in 1991 by Mercury >> e2 performances by Yo-Yo Ma recorded in 1983 >> m1 recordings released on 33 1/3 rpm sound discs in 1983 by CBS Records >> m2 recordings re-released on compact disc in 1992 by CBS Records >> >> So, recordings are no problem, I guess. But that still leaves us with two very different ways of encoding performance data. FYI, we have recently moved performance <eventList>s from <work> to <expression>, so we do subsume them under a particular expression already. >> First, I don't think that a recording and a performance are really two different things, but correct me if I'm missing something. The way to both of them is the same, only the recording might result in further manifestations. >> >> That is exactly the problem I'm having with FRBR's view on performances. 
I think of performance and recording as quite parallel to printed and manuscript sources: a recording is a sort of "printed performance", i.e. one that may be reproduced in multiple copies and re-releases. A performance, like a manuscript, is a unique "event", so just like a manuscript can only have one location (1 item), the performance manifestation also has just one item (it happens at a certain place at a certain time and is not repeatable until we invent time travel). And I tend to think that the performance must represent a specific expression of the work, but it may be more complex than that. However, to me all this indicates that performances should be treated as manifestations. But FRBR sees it differently. And since I treat them as events, I may have simply evaded the problem... >> >> Let's say you have a copy of a specific recording of a work. Interpreting your record as an expression of the work is fine. Interpreting the recording session as an expression of the work can be rather problematic. >> >> Well, MY record would be an item, i.e. a specific copy of the manifestation (the release). Right? >> >> The record you own is a manifestation of the recording (expression/?) which on the other hand will be the trans-medialization of specific performance material, having been worked with and modified by conductor and musicians in order to resemble the performance (manifestation) which again is based on a certain printed edition of the work (expression), possibly taking into account differences from other sources. >> >> Again, the one I actually own is an *item* of the manifestation (just to make sure we agree on that...) >> >> Would one say that this makes the record inferior and nested deep inside the work-expression-manifestation of the written sources or rather a sibling expression-manifestation tree of the same work with strong relations to each other? 
>> >> I would say that the recording manifestation and the written manifestation used for the recording would have strong manifestation-to-manifestation relations, but that they would not *necessarily* have the same parent expression. The musicians could have changed something, made cuts or other things not present in the performance material they played from. So we could also have sibling expressions here. >> >> I think it's the very right thing you did in moving the performance list to <expression>. >> >> Some further complications might arise from the following two thoughts: >> (a) The record may moreover be (and this is quite popular in recent years, especially with 'classical' music) the re-release of an older record (i.e. another manifestation of the same recording) but modified in order to fit the new medium, remastered and digitized and potentially even remixed (vinyls have certain physical implications on the nature of the sound, whilst CDs or digital audio have different ones). >> >> No problem, as I see it. Like in the FRBR example above, that would be a new manifestation of the recording expression. >> >> (b) The record doesn't come alone, it has a booklet, which could be referenced from <extent>? This booklet will incorporate texts by different persons, and again, if re-released, might incorporate the old booklet and add additional material. >> >> Good question. This would be a bundle of relations pointing in all directions, perhaps. I have no good answer to that right away... >> >> /axel >> >> >> >> See you later, >> Benjamin >> >> >> > > 5) Finally, an issue related to the FRBR discussion, though not directly a >> > consequence of it: MEI 2012 allows multiple <work> elements within <workDesc>. I >> > can't think of any situation, however, in which it may be desirable to describe more >> > than one work in a single file. 
On the contrary, it could easily cause a lot of >> > confusion, so I would actually suggest allowing only one <work> element; in other >> > words: either skip <workDesc> and have 1 optional <work> in <meiHead>, or keep >> > <workDesc>, and change its content model to be the one used by <work> now. >> > >> > Again, I think that this perspective is biased from your application, where it makes >> > perfect sense. Consider you're working on Wagner's Ring. You might want to say >> > something about all these works in just one file. All I want to say is that this is a >> > modeling question, which is clearly project-specific. It seems perfectly reasonable >> > to restrict merMEId to MEI instances with only one work, but I wouldn't restrict MEI >> > to one work per file. This may result in preprocessing files before operating on >> > them with merMEId, but we have similar situations for many other aspects for MEI, >> > so this isn't bad per se. >> >> In the Ring case, we are talking about the individual dramas as components of a larger work. This would probably be one of the situations where <componentGrp> would come in handy as a child of <work> (which the customization allows already). I would be reluctant, however, to include them as four <work> elements directly under <workDesc>. To clarify what that would mean, it would be necessary to specify work-to-work relations. Furthermore, there wouldn't be any place to put metadata concerning *all* four works, since we would be at top level already. 
>> >> Best, >> Axel >> >> >> _______________________________________________ >> mei-l mailing list >> mei-l at lists.uni-paderborn.de >> https://lists.uni-paderborn.de/mailman/listinfo/mei-l >> >> >> >> _______________________________________________ >> mei-l mailing list >> >> mei-l at lists.uni-paderborn.de >> https://lists.uni-paderborn.de/mailman/listinfo/mei-l > > <Bohl_FRBR.jpg>_______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l From andrew.hankinson at mail.mcgill.ca Fri Nov 16 19:04:38 2012 From: andrew.hankinson at mail.mcgill.ca (Andrew Hankinson) Date: Fri, 16 Nov 2012 13:04:38 -0500 Subject: [MEI-L] FRBR in MEI In-Reply-To: <14810_1353088028_50A67C1B_14810_70_1_4DF0F169-AEB5-4F46-8220-828B90F928FF@edirom.de> References: <E2FAC7E5-29A2-473C-969F-DF97895E9C37@edirom.de> <0B6F63F59F405E4C902DFE2C2329D0D1514EF524@EXCHANGE-01.kb.dk> <45F7170D-0C66-4C59-9285-E328FDC45556@edirom.de> <0B6F63F59F405E4C902DFE2C2329D0D1514EF677@EXCHANGE-01.kb.dk> <50A3EA0D.3010107@edirom.de> <0B6F63F59F405E4C902DFE2C2329D0D1514F0523@EXCHANGE-01.kb.dk> <50A66B12.3060105@edirom.de> <14810_1353088028_50A67C1B_14810_70_1_4DF0F169-AEB5-4F46-8220-828B90F928FF@edirom.de> Message-ID: <8DD7CA10-D542-48B7-BF5F-1BC177A7CD43@mail.mcgill.ca> I did a presentation on FRBR a while back and the slides may help understand how FRBR might work with music. http://transientstudent.net/sites/default/files/hankinson07_functional.pdf Some of the slides are not immediately obvious without the narrative that went along with it, but you can skip those and just look at the diagrams. -Andrew On 2012-11-16, at 12:46 PM, Johannes Kepper wrote: > Hi Benni, > > your interpretation of FRBR is just wrong. I agree that the handling of performances is not the most intuitive concept, but it seems quite consistent to me. 
You might want to look at the official specification document (http://www.ifla.org/files/assets/cataloguing/frbr/frbr_2008.pdf, official translations available from http://www.ifla.org/publications/translations-of-frbr). > > A work is a totally abstract idea of something. > > An expression is a form of this work. It is catered for a specific instrumentation, and is set up for a specific purpose, but it is still not a physical thing. > > A manifestation is the method of preserving an expression, of converting it to a physical thing. > > An item is the result of the manifestation, it is the physical thing. > > Let's consider a work called "Schreifütz". This is a very abstract thing, which has a relation to a different work called "Freischütz", and it's a parody of that. The "Schreifütz" exists in a version for nose flute and harpsichord, and a version for full orchestra. Those are expressions. > > If there is a print of the orchestra version, the whole print run would be the manifestation (in MEI called source), and the individual copies would be items. This helps to distinguish between the features common to all copies and individual copies (pencil markings etc.). > If there is a manuscript of the nose flute version, the information about it would be spread between the manifestation (source) and the item. This is probably the most annoying compromise in FRBR, but it allows one to be consistent across different media types. It's just not intuitive... > > A manifestation follows its expression exactly, by definition there is no or nearly no difference. So if you have two more measures in a source, this source establishes a new expression in FRBR. This might not reflect traditional editorial concepts, but matches very well with genetic approaches.
This required conformance between sources / manifestations of one expression is not restricted to the music, but explicitly includes the instrumentation: If you have another manuscript of the nose flute version, where the harpsichord is replaced by a piano, it would be a separate expression already. FRBR allows some leeway here, but officially the slightest change results in a new expression. > > Following these arguments, a performance is clearly an expression. Different musicians will result in a different expression. I'm not sure how to model repeated performances (Broadway shows?), but let's put that aside for now. > > You're right, you can't hold a performance in your hand. If you want to preserve a performance, you record it, that is, you manifest it on CD, tape, whatever. The recording will always reflect the version as given during the performance, and it will result in a number of items. Of course the recording / manifestation has certain technical implications for the content of these items, but the same is true for prints: An engraved copy will show different slurs than a typeset copy, which in turn differs from Craig's SCORE file. Those are artifacts of the technical process of manifesting an expression into items. > > Now, if you and I perform the version for nose flute and harpsichord, what's the relationship between the nose flute / harpsichord expression and our performance? The latter is based upon the former, but they are clearly different. Maybe the original is less defined than ours (we already prescribe the performers being us), but that's a sibling relationship. We depend on this other one, but we create a new one, just like the composer correcting a preprint copy creates a new expression from it. > > If someone tries to implement FRBR completely, it would result in a whole bunch of expressions, manifestations, and items. I guess not even librarians would do that to the extreme.
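[Editorial sketch: the Schreifütz chain described above could be rendered in MEI-flavored XML roughly as follows. <work>, <expression>, <source>, and <item> are the FRBR-customization terms used in this thread; the relation element, its @rel values, and the exact nesting are assumptions for illustration only.]

```xml
<!-- Hedged sketch of the Schreifütz example as a FRBR chain.
     Attributes and relation values are assumptions. -->
<work xml:id="schreifuetz">
  <title>Schreifütz</title>
  <!-- work-to-work relation: a parody of another work -->
  <relation rel="isParodyOf" target="#freischuetz"/>
  <expression xml:id="orch">
    <title>Version for full orchestra</title>
  </expression>
  <expression xml:id="noseflute">
    <title>Version for nose flute and harpsichord</title>
  </expression>
</work>

<!-- manifestation = the whole print run of the orchestral version -->
<source xml:id="print1">
  <title>Printed edition of the orchestral version</title>
  <relation rel="isEmbodimentOf" target="#orch"/>
  <!-- item = one individual copy, e.g. the one with pencil markings -->
  <item xml:id="copyA"/>
</source>
```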
That's why I think that Axel's compromise of putting performances in an eventList inside the expression is perfectly reasonable, especially since someone could use XSLT to extract them into separate expressions if needed. > > I hope my short explanation of FRBR was clear enough. If you have further questions, I'm happy to give it another try. Although I'm pretty sure that Axel, Kristina, and others can explain it better than me :-) > > Johannes > > > > Am 16.11.2012 um 17:34 schrieb Benjamin Wolff Bohl: > >> Hi Axel et al., >> first, thanks for correcting my repeated mistake concerning the items, which certainly did not increase clarity. >> My head has continued working on the problematic way FRBR deals with recordings and I tried to graphically sort out my thoughts (as you can see in the attached image or [in case it doesn't go through] under http://homepages.uni-paderborn.de/bwbohl/MEI/Bohl_FRBR.jpg). >> The graphic is sort of a table with work, expression, manifestation and item being the column labels. The contents first show a work with an edition and an autograph source, then a performance ("Interpretation by Kepper/Roland") and a recording (in red), and last some records (I just realize I missed putting an expression before the records, sorry for that). Blue lines show hierarchical dependencies whereas green lines indicate a "based upon" relationship. >> The idea behind it is that an interpretation of a work by a certain conductor could be viewed as an expression, with him conducting a certain orchestra being a manifestation, and the actual performance on a certain date at a certain location being the item (physical by means of the sound waves;-) >> A recording again is another expression of the work although depending on a certain interpretation-performance >> >> I'm not a FRBR expert so I don't know how this all conforms with the FRBR paper(s) but I would be happy to sort things out with you!
>> >> /benjamin >> >> Am 14.11.2012 20:36, schrieb Axel Teich Geertinger: >>> Hi Benni >>> >>> some quick comments here and there... >>> >>> Fra: mei-l-bounces at lists.uni-paderborn.de [mailto:mei-l-bounces at lists.uni-paderborn.de] På vegne af Benjamin Wolff Bohl >>> Sendt: 14. november 2012 19:59 >>> Til: Music Encoding Initiative >>> Emne: Re: [MEI-L] FRBR in MEI >>> >>> Hi Axel, >>> thanks for this huge insight into the FRBR customization. Having considered some recording metadata in the Freischütz project I'll try to add my thoughts on this topic. >>> Sorry for adding late to this discussion, I had prepared this mail this morning in the train, then forgot to send it from work... >>> See my comments inline >>> >>> Am 13.11.2012 13:21, schrieb Axel Teich Geertinger: >>> Hi Johannes, >>> >>> Thanks for your comments. Good and relevant as always. I think I better leave it to the more technically skilled people to answer most of it, but I have just a few comments. >>> >>>>> >>>>> 4) There is a problem possibly emerging from the notation-centric nature of >>>> MEI, or perhaps it is really a FRBR problem; namely the handling of performances >>>> and recordings. FRBR treats them both as expressions, i.e. as "siblings" to what I >>>> (and MerMEId) would regard as different versions of the work. We encode >>>> performances using <eventList> elements within expression/history, i.e. as (grand- >>>> )children of <expression>, which really makes sense to me. A performance must >>>> be of a certain version (form, instrumentation) of the work, so I strongly believe we >>>> should keep it this way. It's just not how FRBR sees it. On the other hand, as far as >>>> I can see there is nothing (except the practical and conceptual difficulties) that >>>> prevents users from encoding a performance or a recording as an expression, so >>>> FRBR compliance is probably possible also in this respect.
I just wouldn't >>>> recommend it, and I actually suspect FRBR having a problem there rather than >>>> MEI. >>>> >>>> I haven't looked this up, but are you sure that performances and recordings are on >>>> the same level? I would see performances as expressions, while recordings are >>>> manifestations. Of course a performance follows a certain version of a work, like >>>> the piano version (=expression). But, the musician moves that to a different >>>> domain (graphical to audio), and he may or may not play the repeats, and he may >>>> or may not follow the dynamic indications of the score. There certainly is a strong >>>> relationship between both expressions, but they are distinct to me. I see your >>>> reasons for putting everything into an eventList, and thus subsuming it under one >>>> expression, but that might not always be the most appropriate model. Sometimes, >>>> it might be better to use separate expressions for the piano version and its >>>> performances and connect them with one or more relations. >>>> >>> >>> Sorry, my mistake. Now that I look it up I see you are right: performances are expressions, recordings are not. As I said, I haven't really been looking into the recordings question yet. Here's an example from the FRBR report:
>>>
>>> w1 J. S. Bach's Six suites for unaccompanied cello
>>>   e1 performances by Janos Starker recorded partly in 1963 and completed in 1965
>>>     m1 recordings released on 33 1/3 rpm sound discs in 1966 by Mercury
>>>     m2 recordings re-released on compact disc in 1991 by Mercury
>>>   e2 performances by Yo-Yo Ma recorded in 1983
>>>     m1 recordings released on 33 1/3 rpm sound discs in 1983 by CBS Records
>>>     m2 recordings re-released on compact disc in 1992 by CBS Records
>>>
>>> So, recordings are no problem, I guess. But that still leaves us with two very different ways of encoding performance data.
FYI, we have recently moved performance <eventList>s from <work> to <expression>, so we do subsume them under a particular expression already. >>> First, I don't think that a recording and a performance are really two different things, but correct me if I'm missing something. The way to both of them is the same, only the recording might result in further manifestations. >>> >>> That is exactly the problem I'm having with FRBR's view on performances. I think of performance and recording as quite parallel to printed and manuscript sources: a recording is a sort of "printed performance", i.e. one that may be reproduced in multiple copies and re-releases. A performance, like a manuscript, is a unique "event", so just like a manuscript can only have one location (1 item), the performance manifestation also has just one item (it happens at a certain place at a certain time and is not repeatable until we invent time travel). And I tend to think that the performance must represent a specific expression of the work, but it may be more complex than that. However, to me all this indicates that performances should be treated as manifestations. But FRBR sees it differently. And since I treat them as events, I may have simply evaded the problem... >>> >>> Let's say you have a copy of a specific recording of a work. Interpreting your record as an expression of the work is fine. Interpreting the recording session as an expression of the work can be rather problematic. >>> >>> Well, MY record would be an item, i.e. a specific copy of the manifestation (the release). Right? >>> >>> The record you own is a manifestation of the recording (expression/?) which on the other hand will be the trans-medialization of specific performance material, having been worked with and modified by conductor and musicians in order to resemble the performance (manifestation) which again is based on a certain printed edition of the work (expression) possibly taking into account differences from other sources.
>>> >>> Again, the one I actually own is an *item* of the manifestation (just to make sure we agree on that...) >>> >>> Would one say that this makes the record inferior and nested deep inside the work-expression-manifestation of the written sources, or rather a sibling expression-manifestation tree of the same work with strong relations to each other? >>> >>> I would say that the recording manifestation and the written manifestation used for the recording would have strong manifestation-to-manifestation relations, but that they would not *necessarily* have the same parent expression. The musicians could have changed something, made cuts or other things not present in the performance material they played from. So we could also have sibling expressions here. >>> >>> I think it's the very right thing you did in moving the performance list to <expression>. >>> >>> Some further complications might arise from the following two thoughts: >>> (a) The record may moreover be (and this is quite popular in recent years, especially with 'classical' music) the re-release of an older record (i.e. another manifestation of the same recording) but modified in order to fit the new medium, remastered and digitized and potentially even remixed (Vinyls have certain physical implications on the nature of the sound, whilst CDs or digital audio have different ones). >>> >>> No problem, as I see it. Like in the FRBR example above, that would be a new manifestation of the recording expression. >>> >>> (b) The record doesn't come alone, it has a booklet, which could be referenced from <extent>? This booklet will incorporate texts by different persons and again if re-released might incorporate the old booklet and add additional material. >>> >>> Good question. This would be a bundle of relations pointing in all directions, perhaps. I have no good answer to that right away...
>>> >>> /axel >>> >>> >>> >>> See you later, >>> Benjamin >>> >>> >>>>> 5) Finally, an issue related to the FRBR discussion, though not directly a >>>> consequence of it: MEI 2012 allows multiple <work> elements within <workDesc>. I >>>> can't think of any situation, however, in which it may be desirable to describe more >>>> than one work in a single file. On the contrary, it could easily cause a lot of >>>> confusion, so I would actually suggest allowing only one <work> element; in other >>>> words: either skip <workDesc> and have 1 optional <work> in <meiHead>, or keep >>>> <workDesc>, and change its content model to be the one used by <work> now. >>>> >>>> Again, I think that this perspective is biased from your application, where it makes >>>> perfect sense. Consider you're working on Wagner's Ring. You might want to say >>>> something about all these works in just one file. All I want to say is that this is a >>>> modeling question, which is clearly project-specific. It seems perfectly reasonable >>>> to restrict merMEId to MEI instances with only one work, but I wouldn't restrict MEI >>>> to one work per file. This may result in preprocessing files before operating on >>>> them with merMEId, but we have similar situations for many other aspects for MEI, >>>> so this isn't bad per se. >>> >>> In the Ring case, we are talking about the individual dramas as components of a larger work. This would probably be one of the situations where <componentGrp> would come in handy as a child of <work> (which the customization allows already). I would be reluctant, however, to include them as four <work> elements directly under <workDesc>. To clarify what that would mean, it would be necessary to specify work-to-work relations. Furthermore, there wouldn't be any place to put metadata concerning *all* four works, since we would be at top level already.
>>> >>> Best, >>> Axel >>> >>> >>> _______________________________________________ >>> mei-l mailing list >>> mei-l at lists.uni-paderborn.de >>> https://lists.uni-paderborn.de/mailman/listinfo/mei-l >>> >>> >>> >>> _______________________________________________ >>> mei-l mailing list >>> >>> mei-l at lists.uni-paderborn.de >>> https://lists.uni-paderborn.de/mailman/listinfo/mei-l >> >> <Bohl_FRBR.jpg>_______________________________________________ >> mei-l mailing list >> mei-l at lists.uni-paderborn.de >> https://lists.uni-paderborn.de/mailman/listinfo/mei-l > > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l From craigsapp at gmail.com Fri Nov 16 20:45:39 2012 From: craigsapp at gmail.com (Craig Sapp) Date: Fri, 16 Nov 2012 11:45:39 -0800 Subject: [MEI-L] FRBR in MEI In-Reply-To: <0B6F63F59F405E4C902DFE2C2329D0D1514EF524@EXCHANGE-01.kb.dk> References: <E2FAC7E5-29A2-473C-969F-DF97895E9C37@edirom.de> <0B6F63F59F405E4C902DFE2C2329D0D1514EF524@EXCHANGE-01.kb.dk> Message-ID: <CAPcjuFfO0kBAr550Y0Vb5fPcpDCb9gG4Yr4Nr87c+GC8bg-v7A@mail.gmail.com> Hi All, On Tue, Nov 13, 2012 at 2:33 AM, Axel Teich Geertinger <atge at kb.dk> wrote: > We encode performances using <eventList> elements within > expression/history, i.e. as (grand-)children of <expression>, which really > makes sense to me. This year I extracted note/event level timings for all commercially available recordings of Webern's (not Weber's) Op. 27 piano variations. "Performance" scores of the piece can be viewed online: http://mazurka.org.uk/webern/notation The notation engine is SCORE, with the output converted into SVG images (one per system) (Thanks to Thomas Weber for his SCORE EPS to SVG converter: https://github.com/th-we/seps2svg). The horizontal axis in the notation represents time, and the grayscale of the noteheads represents loudness (light=soft, dark=loud). 
Timings/dynamics are for all notes occurring simultaneously in the score (i.e., "chords", and which I usually call "events"), not individual notes. For your amusement, here is the performance data: http://mazurka.org.uk/webern/dynamics/mvmt1/webern-op27-1-Aitken1961.dyn used to generate this score (mvmt 1): http://mazurka.org.uk/webern/notation/Aitken1961 and here is the SCORE data used to generate the score (first system of first movement): http://mazurka.org.uk/webern/notation/Aitken1961/webern-op27-1-Aitken1961-sys01.pmx (first in the data are lots of little lines for the tick marks above and below the system, then the lists of notes which I am coloring in SVG rather than SCORE) How would this sort of data be encoded in MEI along with the printed score (luckily all performers are using the same edition of the music, and I ignore wrong notes)? Could multiple <eventList>s be stored with the score for different performances? And how might all this relate to FRBR? Another wonder: how would the "performance" scores be represented in MEI (or just leave the "manifestation" to the renderer?). In other words, these "scores" have pitch information (but no accidentals to preserve clarity), no score rhythms but with performance rhythm indicated by spatial layout on the system. Thanks to an American, the works of Webern will go into the public domain at the end of 2015, so something interesting might be done with this data and the printed score in a few years without the need for permissions. -=+Craig -------------- next part -------------- An HTML attachment was scrubbed... 
URL: <http://lists.uni-paderborn.de/pipermail/mei-l/attachments/20121116/e62a3d0c/attachment.html> From pdr4h at eservices.virginia.edu Fri Nov 16 22:25:07 2012 From: pdr4h at eservices.virginia.edu (Roland, Perry (pdr4h)) Date: Fri, 16 Nov 2012 21:25:07 +0000 Subject: [MEI-L] FRBR in MEI In-Reply-To: <8DD7CA10-D542-48B7-BF5F-1BC177A7CD43@mail.mcgill.ca> References: <E2FAC7E5-29A2-473C-969F-DF97895E9C37@edirom.de> <0B6F63F59F405E4C902DFE2C2329D0D1514EF524@EXCHANGE-01.kb.dk> <45F7170D-0C66-4C59-9285-E328FDC45556@edirom.de> <0B6F63F59F405E4C902DFE2C2329D0D1514EF677@EXCHANGE-01.kb.dk> <50A3EA0D.3010107@edirom.de> <0B6F63F59F405E4C902DFE2C2329D0D1514F0523@EXCHANGE-01.kb.dk> <50A66B12.3060105@edirom.de> <14810_1353088028_50A67C1B_14810_70_1_4DF0F169-AEB5-4F46-8220-828B90F928FF@edirom.de>, <8DD7CA10-D542-48B7-BF5F-1BC177A7CD43@mail.mcgill.ca> Message-ID: <BBCC497C40D85642B90E9F94FC30343D0EFC3C4E@GRANT.eservices.virginia.edu> Random comments on the discussion so far. Sorry if this gets long. When contemplating performances and recordings, it seems to me that people often have trouble reaching agreement on the term "sound recording". Andrew's slides label the *expression* as "the sound recording", but others might label the *manifestation* as "the sound recording". You might say the expression is the "act of making a recording" and the manifestation is the "recording that results". To disentangle the different uses of the term "recording", it helps me to remember that an expression is not a physical entity, but a manifestation is. Therefore, I prefer to think of the expression as "the performance" (the non-physical thing being recorded) and the manifestation as "the recording" (the physical thing). This fits with the way libraries have traditionally cataloged recordings, i.e., CDs, LPs, cassettes, wax cylinders, ... In any case, the FRBR document, which Axel cites, says a *performance is an expression* and a *recording is a manifestation*. 
The usual "waterfall" kind of diagram is explained by saying the term "work" applies to conceptual content; "expression" applies to the languages/media/versions in which the work occurs; "manifestation" applies to the formats in which each expression is available; and "item" applies to individual copies of a single format. (Here "media" means "medium of expression", say written language as opposed to film, and "format" means physical format, as in printed book as opposed to audio CD.) Taking another tack, though, often it is easier for me to think of FRBR "from the bottom up", rather than start from the work and proceed "down" the waterfall diagram. Using the recording example, the item is the exemplar I hold in my hand, the manifestation is all of the copies of that exemplar (or better yet, all the information shared by all those copies), the expression is the version of the work that is represented by the manifestation (e.g., Jo's nose flute + harpsichord version and the orchestral version are different expressions), and the work is an intellectual creation/idea (e.g., Bohl's op. 1, the one that goes da, da, da, daaaaaa, reeep! reeep! reeep!). Using this "bottom up" thinking helps avoid mental contortions regarding what the work is -- the work is simply the thing at the end of this mental process. From there on, there are work-to-work relationships, so we don't have to think about whether "Romeo and Juliet", "Westside Story", and every other story about star-crossed lovers are expressions of an ur-work with its own manifestations and so on, which lead us to a different "waterfall" conclusion each time we discover a new work or expression. Instead of creating separate expression-level markup for each performance, Axel treats some expressions (performances) as events related to another expression of a work (the orchestral version vs. the nose flute version). This is fine. 
As Johannes already pointed out, separate <expression> elements for the performances can be generated from the <eventList> markup, if necessary. Conversely, there's nothing wrong with creating separate <expression> elements for each performance and relating them to other appropriate expressions and/or relating them directly to the work. If necessary, given accurate place and date information, the <eventList> kind of markup could be created from the separate <expression> elements. So, six of one ... Johannes said "If there is a manuscript of the nose flute version, the information about it would be spread between the manifestation (source) and the item." Well, maybe. But, I think in this case it would be fine to describe the manifestation and the item in a single place (within <source> in MEI) because there's only one manifestation and one (and only one) item associated with that manifestation. This is the traditional way manuscripts have been described, pre-FRBR. Practically speaking, the manifestation and the item are the same thing. But, as soon as you want to say something special about a particular *part* (as in "chunk", not performer part) of the manifestation, you have to split these up again, for example, when one section of a manuscript is located in Prague and another is in Manitoba. This is not the case with printed material where there is *always* more than one item created from a manifestation, but it is still traditional to describe the manifestation and item as though they are the same thing. For example, it is common to follow the manifestation's author, title, place of publication, etc. with information about the location where one can obtain an exemplar of the manifestation, say, UVa Library M 296.C57 1987. Johannes also said "So if you have two more measures in a source, this source establishes a new expression in FRBR." Again, maybe.
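[Editorial sketch: the "six of one" equivalence can be made concrete. <expression>, <eventList>, and <relation> are element names used in this thread; the event content, the date, and the @rel value below are hypothetical.]

```xml
<!-- (a) A performance recorded as an event under the expression's
     history (the MerMEId practice described in this thread): -->
<expression xml:id="orchVersion">
  <title>Orchestral version</title>
  <history>
    <eventList>
      <event><date>1961</date> Performance; place and performers in prose</event>
    </eventList>
  </history>
</expression>

<!-- (b) The same performance as a sibling expression, related to the
     notated expression (@rel value is an assumption): -->
<expression xml:id="perf1961">
  <title>Performance of the orchestral version, 1961</title>
  <relation rel="isPerformanceOf" target="#orchVersion"/>
</expression>
```

Given accurate place and date information, markup of form (a) can be generated from form (b) and vice versa, which is the point being made above.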
The FRBR report (1997, amended and corrected through 2009) says "Variations within substantially the same expression (e.g., slight variations that can be noticed between two states of the same edition in the case of hand press production) would normally be ignored or, in specialized catalogues, be reflected as a note within the bibliographic record for the manifestation. However, for some applications of the model (e.g., early texts of rare manuscripts), each variation may be viewed as a different expression." The issue is in the determination of whether 2 things are "substantially the same expression". As with many things, this depends on the person making the determination; there is no single correct answer. We intend that MEI will provide the tools for accurate description using either approach. Just my 2 cents, -- p. __________________________ Perry Roland Music Library University of Virginia P. O. Box 400175 Charlottesville, VA 22904 434-982-2702 (w) pdr4h (at) virginia (dot) edu From atge at kb.dk Mon Nov 19 15:11:23 2012 From: atge at kb.dk (Axel Teich Geertinger) Date: Mon, 19 Nov 2012 14:11:23 +0000 Subject: [MEI-L] FRBR in MEI In-Reply-To: <CAPcjuFfO0kBAr550Y0Vb5fPcpDCb9gG4Yr4Nr87c+GC8bg-v7A@mail.gmail.com> References: <E2FAC7E5-29A2-473C-969F-DF97895E9C37@edirom.de> <0B6F63F59F405E4C902DFE2C2329D0D1514EF524@EXCHANGE-01.kb.dk> <CAPcjuFfO0kBAr550Y0Vb5fPcpDCb9gG4Yr4Nr87c+GC8bg-v7A@mail.gmail.com> Message-ID: <0B6F63F59F405E4C902DFE2C2329D0D1514F1322@EXCHANGE-01.kb.dk> Hi Craig That's an interesting case. First, an attempt at modeling it the "strict" FRBR way, using expressions for performances (including the studio or live performances resulting in recordings). In that case, <eventList>s will not be used to describe performances.
Assuming that the "translation" expression-to-expression relationship, which FRBR uses to describe a transcription of music, also is the one to use the other way (from score to performance), I get something like the following in <workDesc>:

  work: Variations Op. 27 (composer: Anton Webern)
    expression: Variations Op. 27 (score)
    expression: 1961 Recording (Webster Aitken)
    expression: Craig's performance score of the 1961 recording
    expression: Some live performance (Webster Aitken)

And in <sourceDesc>:

  source: UE Score (Universal Edition, 1937)
  source: Aitken Recording DEL 25407 (Delos, 1978)
  source: Craig's online performance score of the 1961 recording

That's certainly a possible approach. I see a few problems, though. Most important, you will get a lot of expressions and an even larger number of <relation>s to explain their interrelations. This may be perfectly valid, but quite complex to process, as I see it. Also, allowing only a very small amount of variation within an expression (such as Johannes suggests, and indeed the FRBR report too) tends to produce 1:1 relationships between expression and manifestation, which is not very useful. For instance, manuscripts will obviously differ from a printed edition more than, say, "two states of the same edition" (FRBR), so following this logic strictly, we would need to define a separate expression for a manuscript draft, another one for a fair copy, yet another for the printed first edition etc. My problem is that this would leave no room for defining different versions of the work - or: to do so, I would have to define expressions (all as siblings!), some of which describe a completely abstract entity (like "the version for nose flute & harpsichord"), and some of which describe not-so-abstract entities like the expression that is embodied only by one manifestation or one specific performance. I suspect FRBR would regard these versions as actually being separate *works*, but I think it should really be the project editor's decision where to draw that line.
Traditionally, in most thematic catalogues different versions of a work are listed as variant representations of the same work. I find the distinction between the four FRBR group 1 entities really useful only if we maintain the distinction between completely abstract entities (work and expression) and non-abstract ones (manifestation [=source] and item). This may not be exactly how FRBR intends it, but otherwise I'd say there is an entity missing between work and expression, grouping the more or less physical realizations of the work into clusters of closely related expressions (for instance, the written representations, the recordings, and the performances of the version for nose flute). Of course, we could introduce an additional "version" element. But that would leave us with five levels of description with even less clear distinction between them. I think the more productive approach for us would be to decide that we limit expressions (defined by FRBR as "the specific intellectual or artistic form that a work takes each time it is "realized" ") to describing versions, accommodating the different mediations, which expressions may otherwise represent, as children of these expressions, like we do when using <eventList> for performances. Also bearing in mind that MEI is *not* a general music cataloguing language, but primarily relates to music notation, I find it completely acceptable that MEI metadata are notation-centric, treating other media such as recordings and performances more as peripheral phenomena. We may consider, though, if in that case we should avoid the <expression> element name and call it <version> instead. Interpreting expression as version, I would suggest something more like the following. In the Webern Op. 27 case, there is only one expression, serving as the container for all. It could be used to describe the sequence of movements, which is shared by all these representations, too (in a <componentGrp> element; not included here).
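[Editorial sketch: the original XML of this example did not survive the list archive. A hedged reconstruction of the kind of encoding being proposed, with one expression as the version container and the performance as an event: the titles, names, and imprint data are those given in the mail, while the wrapper elements, nesting, and attributes are assumptions.]

```xml
<!-- Hedged sketch; structure assumed, content from the mail. -->
<workDesc>
  <work>
    <title>Variations Op. 27</title>
    <composer>Anton Webern</composer>
    <!-- one expression = the version, container for all representations -->
    <expression xml:id="op27">
      <title>Variations Op. 27</title>
      <history>
        <eventList>
          <event>Some live performance, Webster Aitken</event>
        </eventList>
      </history>
    </expression>
  </work>
</workDesc>
<sourceDesc>
  <source>
    <title>UE Score</title>
    <publisher>Universal Edition</publisher> <date>1937</date>
  </source>
  <source>
    <title>Aitken Recording DEL 25407</title>
    <publisher>Delos</publisher> <date>1978</date>
  </source>
  <source>
    <title>Craig's performance score of the 1961 recording</title>
  </source>
</sourceDesc>
```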
You will notice the encoding is more concise, actually without loss of information. The FRBR relations, however, are now used at different levels than by original FRBR. In <workDesc>:

  work: Variations Op. 27 (composer: Anton Webern)
    expression: Variations Op. 27
      event: Some live performance (Webster Aitken)

In <sourceDesc>:

  source: UE Score (Universal Edition, 1937)
  source: Aitken Recording DEL 25407 (Delos, 1978)
  source: Craig's performance score of the 1961 recording

Following this path even further, recordings could be treated in about the same way as performances, as <event>s for instance - but this of course depends on the importance of the recording for the encoding of the music (if any) in the MEI document in question:

  work: Variations Op. 27 (composer: Anton Webern)
    expression: Variations Op. 27
      event: Some live performance (Webster Aitken)
      event (sound): Aitken Recording DEL 25407 (Webster Aitken, Delos, 1978)

It doesn't have to be either/or, of course. As Perry has pointed out, it may be done either way, depending on the individual projects. If the use of FRBR is optional anyway (for instance, by putting <expression> and the like in an optional module), I guess it would be acceptable doing it this way too. Opinions? Cheers, Axel Fra: mei-l-bounces at lists.uni-paderborn.de [mailto:mei-l-bounces at lists.uni-paderborn.de] På vegne af Craig Sapp Sendt: 16. november 2012 20:46 Til: Music Encoding Initiative Emne: Re: [MEI-L] FRBR in MEI Hi All, On Tue, Nov 13, 2012 at 2:33 AM, Axel Teich Geertinger > wrote: We encode performances using <eventList> elements within expression/history, i.e. as (grand-)children of <expression>, which really makes sense to me. This year I extracted note/event level timings for all commercially available recordings of Webern's (not Weber's) Op. 27 piano variations. "Performance" scores of the piece can be viewed online: http://mazurka.org.uk/webern/notation The notation engine is SCORE, with the output converted into SVG images (one per system) (Thanks to Thomas Weber for his SCORE EPS to SVG converter: https://github.com/th-we/seps2svg).
The horizontal axis in the notation represents time, and the grayscale of the noteheads represents loudness (light=soft, dark=loud). Timings/dynamics are for all notes occurring simultaneously in the score (i.e., "chords", which I usually call "events"), not individual notes. For your amusement, here is the performance data: http://mazurka.org.uk/webern/dynamics/mvmt1/webern-op27-1-Aitken1961.dyn used to generate this score (mvmt 1): http://mazurka.org.uk/webern/notation/Aitken1961 and here is the SCORE data used to generate the score (first system of first movement): http://mazurka.org.uk/webern/notation/Aitken1961/webern-op27-1-Aitken1961-sys01.pmx (first in the data are lots of little lines for the tick marks above and below the system, then the lists of notes, which I am coloring in SVG rather than SCORE). How would this sort of data be encoded in MEI along with the printed score (luckily all performers are using the same edition of the music, and I ignore wrong notes)? Could multiple s be stored with the score for different performances? And how might all this relate to FRBR? Another question: how would the "performance" scores be represented in MEI (or should the "manifestation" just be left to the renderer)? In other words, these "scores" have pitch information (but no accidentals, to preserve clarity) and no score rhythms, with the performed rhythm instead indicated by the spatial layout of the system. Thanks to an American, the works of Webern will go into the public domain at the end of 2015, so something interesting might be done with this data and the printed score in a few years without the need for permissions. -=+Craig -------------- next part -------------- An HTML attachment was scrubbed...
URL: From pdr4h at eservices.virginia.edu Mon Nov 19 16:26:14 2012 From: pdr4h at eservices.virginia.edu (Roland, Perry (pdr4h)) Date: Mon, 19 Nov 2012 15:26:14 +0000 Subject: [MEI-L] FRBR in MEI In-Reply-To: <0B6F63F59F405E4C902DFE2C2329D0D1514F1322@EXCHANGE-01.kb.dk> References: <0B6F63F59F405E4C902DFE2C2329D0D1514EF524@EXCHANGE-01.kb.dk> , <0B6F63F59F405E4C902DFE2C2329D0D1514F1322@EXCHANGE-01.kb.dk> Message-ID:

Hi, Craig, I want to clear up any confusion about MEI's element. I don't think is what you're thinking of as an "event list"; that is, a place to put timing information. is for recording *historical events using prose*. , on the other hand, is for capturing processable time data. I believe that's where the kind of thing you're thinking of should go. Multiple elements are allowed within and/or within . "[p]rovides a set of ordered points in time to which musical elements can be linked in order to create a temporal alignment of those elements." The timing data you provide in http://mazurka.org.uk/webern/dynamics/mvmt1/webern-op27-1-Aitken1961.dyn can easily be mapped to -- while the dynamics can be recorded using in the notation part of the tree. Unfortunately, at present there's no place to record your values on ("73.7632", etc.), only integer values in the MIDI value range, i.e., 0-127. This can be remedied in the short term by customizing the @val attribute. Assuming customization of @val, a dynamic can be associated with a timepoint using @when -- At present, there's no way to link a directly with a FRBR entity, such as (manifestation). An indirect link is possible by associating a timeline (via its @avref attribute) with a digital recording, then associating that digital file (represented by ) with a particular source description (via its @decls attribute).
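[The inline examples following each "--" in this message were likewise lost in the archive. A hedged sketch of the mechanism as described: the timing values are invented, the fractional "73.7632" dynamic value and the @val customization are from the surrounding text, and the exact element and attribute names should be checked against the schema rather than taken from this reconstruction:]

```xml
<!-- Hypothetical sketch: a timeline of <when> points tied to a
     recording via @avref, and a dynamic tied to a timepoint via
     @when. The fractional @val assumes the customization mentioned
     above; stock MEI allows only integer values 0-127. -->
<timeline xml:id="tl1" avref="#aitken1961-recording">
  <when xml:id="t1" absolute="00:00:01.5"/>
  <when xml:id="t2" absolute="00:00:02.8"/>
</timeline>
<!-- ... and in the notation part of the tree: -->
<dynam when="#t1" val="73.7632">p</dynam>
```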
There's no requirement that @decls refer to any particular FRBR entity type (work, expression, manifestation, or item), making it theoretically possible to connect a digital audio file to any FRBR entity; however, practically speaking, a digital audio file is probably best associated with a manifestation; that is, an MEI element. It would be appropriate to describe the Delos DEL 25407 (1978) recording using as it is the base material from which your encoding of the dynamics and timing information is taken. "Performance" scores of the type you're describing can be encoded in MEI by leaving out the duration information at the event level and associating the event with timing information provided within -- There's no defined way yet, however, to record both a "performance" score and a traditional one in the same MEI file. Hope this helps, -- p. __________________________ Perry Roland Music Library University of Virginia P. O. Box 400175 Charlottesville, VA 22904 434-982-2702 (w) pdr4h (at) virginia (dot) edu -------------- next part -------------- An HTML attachment was scrubbed... URL:

From atge at kb.dk Mon Nov 19 21:17:04 2012 From: atge at kb.dk (Axel Teich Geertinger) Date: Mon, 19 Nov 2012 20:17:04 +0000 Subject: [MEI-L] FRBR in MEI In-Reply-To: References: <0B6F63F59F405E4C902DFE2C2329D0D1514EF524@EXCHANGE-01.kb.dk> , <0B6F63F59F405E4C902DFE2C2329D0D1514F1322@EXCHANGE-01.kb.dk>, Message-ID: <0B6F63F59F405E4C902DFE2C2329D0D1514F17D2@EXCHANGE-01.kb.dk>

Hi Perry, Craig, Yes, I was of course speaking of the header, where the FRBR-related elements are about to be implemented. In this context my remark about how to "encode performances" was intended to mean "encode information about the historical events where the work (or rather: the expression) was performed"...
I would still, however, be curious to hear any opinions on the interpretation of the expression level as versions (or, as an alternative, using instead of ), because that is how MerMEId treats them right now. The FRBR-extended schema itself does not enforce that interpretation, but MerMEId does, unless we change the concept. /axel

From pdr4h at eservices.virginia.edu Mon Nov 19 23:49:13 2012 From: pdr4h at eservices.virginia.edu (Roland, Perry (pdr4h)) Date: Mon, 19 Nov 2012 22:49:13 +0000 Subject: [MEI-L] FRBR in MEI In-Reply-To: <0B6F63F59F405E4C902DFE2C2329D0D1514F17D2@EXCHANGE-01.kb.dk> References: <0B6F63F59F405E4C902DFE2C2329D0D1514EF524@EXCHANGE-01.kb.dk> , <0B6F63F59F405E4C902DFE2C2329D0D1514F1322@EXCHANGE-01.kb.dk>, , <0B6F63F59F405E4C902DFE2C2329D0D1514F17D2@EXCHANGE-01.kb.dk> Message-ID:

Hi, Axel, I don't think the substitution of for helps very much. We're still left with the question of what constitutes a new/different version, and the answer to that question will vary. I suspect it will depend on whether the project is addressing printed music only or printed and manuscript versions, among other things.
The question of what signals a different version between two manuscript versions is a very thorny one indeed. Taken to the extreme, I suppose every single intervention potentially creates a huge hierarchical tree of versions. But this is the province of philology and probably should be addressed somewhere other than in the FRBR entities, say using . Therefore, I would advise a somewhat loose, not-too-granular approach in the use of FRBR in MEI (and everywhere else for that matter). Said another way, it's not very important that something fit precisely into the "correct" or element, but that one is consistent, uses only the number of entities needed to achieve the level of descriptive granularity required, and that relationships between the entities are explicitly marked. -- p. __________________________ Perry Roland Music Library University of Virginia P. O. Box 400175 Charlottesville, VA 22904 434-982-2702 (w) pdr4h (at) virginia (dot) edu

From kepper at edirom.de Tue Nov 20 08:34:22 2012 From: kepper at edirom.de (Johannes Kepper) Date: Tue, 20 Nov 2012 08:34:22 +0100 Subject: [MEI-L] FRBR in MEI In-Reply-To: References: <0B6F63F59F405E4C902DFE2C2329D0D1514EF524@EXCHANGE-01.kb.dk> , <0B6F63F59F405E4C902DFE2C2329D0D1514F1322@EXCHANGE-01.kb.dk>, , <0B6F63F59F405E4C902DFE2C2329D0D1514F17D2@EXCHANGE-01.kb.dk> Message-ID: <183D8579-668C-4A89-82E5-C7F290D53F3E@edirom.de>

Hi Axel, I totally agree with Perry. These are project-specific decisions about how to use FRBR and MEI in general. We don't want to enforce a specific model here, but leave the implementation open to a certain degree. The current proposal allows this, and also has the benefit that shortcuts like yours (performances in an eventList) remain comprehensible enough to be compatible with more specific projects (using XSLT). I think we shouldn't water down the concept of FRBR just because we can't come up with better examples right away. Others may, and then they couldn't use it in such a way. FRBR itself gives the user some freedom in deciding what constitutes a new version, and whether that is modeled as expression or manifestation. We should preserve that in our implementation.
Axel, you're certainly aware of this, but your model (performances in an expression's eventList) requires that every performance is based on exactly one textual version / expression of the work. As soon as more than one set of source materials has been used, this assumption is incorrect. Also, dealing with opera might be cumbersome. If you think of a performance for which you don't have or don't know the materials used, it's hard to assign it to one of your known expressions. Especially since operas have been customized quite heavily, each performance may justify a textual expression, even if the source materials haven't been preserved. I don't have answers to these questions right away, but it might be worthwhile to think about them, or at least be aware of them, when implementing MerMEId. For me, it would probably be sufficient to mention that in the documentation and leave the software unchanged. These issues don't contradict what I said before. They are project-specific, and they depend on a specific editorial concept, which others may or may not follow. Axel's solutions might be appropriate for some, and will certainly be inappropriate for others. But paired with thorough documentation, the data are still interchangeable across different projects. Best, Johannes

From atge at kb.dk Tue Nov 20 09:38:37 2012 From: atge at kb.dk (Axel Teich Geertinger) Date: Tue, 20 Nov 2012 08:38:37 +0000 Subject: [MEI-L] FRBR in MEI In-Reply-To: <183D8579-668C-4A89-82E5-C7F290D53F3E@edirom.de> References: <0B6F63F59F405E4C902DFE2C2329D0D1514EF524@EXCHANGE-01.kb.dk> , <0B6F63F59F405E4C902DFE2C2329D0D1514F1322@EXCHANGE-01.kb.dk>, , <0B6F63F59F405E4C902DFE2C2329D0D1514F17D2@EXCHANGE-01.kb.dk> <183D8579-668C-4A89-82E5-C7F290D53F3E@edirom.de> Message-ID: <0B6F63F59F405E4C902DFE2C2329D0D1514F29AC@EXCHANGE-01.kb.dk>

Hi Johannes, thanks for your comments (thanks to Perry too!). I just wanted to make sure we agree that this use of expressions is acceptable. I am very well aware of the performances being expression-specific when listed within . Probably some projects will not want to distinguish different versions of an opera, while others will. Again, the project-specific decision about what to group as one version, i.e. the amount of variation allowed within an expression, decides where to list which performances. I could easily leave it even more open, offering performance lists at both work and expression levels. This way, the editor may decide to list them under the version they represent, or in a general list regardless of version.
That would also be the place to list performances where the version performed is not known. Originally, I had performances listed only at work level. May I remind you that it was your suggestion (though not on this list) to move them to ? ;-) All the best, Axel

From kepper at edirom.de Tue Nov 20 09:50:12 2012 From: kepper at edirom.de (Johannes Kepper) Date: Tue, 20 Nov 2012 09:50:12 +0100 Subject: [MEI-L] FRBR in MEI In-Reply-To: <0B6F63F59F405E4C902DFE2C2329D0D1514F29AC@EXCHANGE-01.kb.dk> References: <0B6F63F59F405E4C902DFE2C2329D0D1514EF524@EXCHANGE-01.kb.dk> , <0B6F63F59F405E4C902DFE2C2329D0D1514F1322@EXCHANGE-01.kb.dk>, , <0B6F63F59F405E4C902DFE2C2329D0D1514F17D2@EXCHANGE-01.kb.dk> <183D8579-668C-4A89-82E5-C7F290D53F3E@edirom.de> <0B6F63F59F405E4C902DFE2C2329D0D1514F29AC@EXCHANGE-01.kb.dk> Message-ID: <0D7E7532-4438-4114-A1DA-04AE35C8037E@edirom.de>

Am 20.11.2012 um 09:38 schrieb Axel Teich Geertinger :
May I remind you that it was your suggestion (though not on this list) to move them to ? ;-) To say it with Adenauer: Was kümmert mich mein Geschwätz von gestern? ("What do I care about my chatter from yesterday?") To be honest, I still prefer the association with a specific expression over the work in general _whenever it is possible_. If someone can identify the expression, he should identify it. But I also see that it might not be possible in every case. Maybe allowing it in both spots is the best solution, but then I would strongly recommend using the more specific spot in the expressions. Obviously, not even merMEId will be able to keep people from doing weird stuff, and just like MEI itself, it has to provide clear documentation and hope for the best... :-) jo > > All the best, > Axel > > > -----Oprindelig meddelelse----- > Fra: mei-l-bounces at lists.uni-paderborn.de [mailto:mei-l-bounces at lists.uni-paderborn.de] På vegne af Johannes Kepper > Sendt: 20. november 2012 08:34 > Til: Music Encoding Initiative > Emne: Re: [MEI-L] FRBR in MEI > > Hi Axel, > > I totally agree with Perry. These are project-specific decisions about how to use FRBR and MEI in general. We don't want to enforce a specific model here, but leave the implementation open to a certain degree. The current proposal allows us to do so, and also has the benefit that shortcuts like yours (performances in an eventList) are still comprehensible enough to be compatible with more specific projects (using XSLT). I think we shouldn't water down the concept of FRBR just because we can't come up with better examples right away. Others may come up with them later, and then they couldn't use it in such a way. FRBR itself gives the user some freedom in deciding what constitutes a new version, and whether that is modeled as expression or manifestation. We should preserve that in our implementation. > > > Axel, you're certainly aware of this, but your model (performances in expression's eventList) requires that every performance is based on exactly one textual version / expression of the work.
As soon as more than one set of source materials has been used, this assumption is incorrect. Also, dealing with opera might be cumbersome. If you think of a performance for which you don't have or don't know the materials used, it's hard to assign that to one of your known expressions. Especially since operas have been customized quite heavily, each performance may justify a textual expression, even if the source materials haven't been preserved. I don't have answers to these questions right away, but it might be worthwhile to think about them or at least be aware of them when implementing merMEId. For me, it would probably be sufficient to mention that in the documentation, and leave the software unchanged. > > These issues don't contradict what I said before. They are project-specific, and they depend on a specific editorial concept, which others may or may not follow. Axel's solutions might be appropriate for some, and will certainly be inappropriate for others. But paired with thorough documentation, the data are still interchangeable across different projects. > > Best, > Johannes > > > > Am 19.11.2012 um 23:49 schrieb Roland, Perry (pdr4h): > >> Hi, Axel, >> >> I don't think the substitution of for helps very much. We're still left with the question of what constitutes a new/different version and the answer to that question will vary. I suspect it will depend on whether the project is addressing printed music only or printed and manuscript versions, among other things. >> >> The question of what signals a different version between two manuscript versions is a very thorny one indeed. Taken to the extreme, I suppose every single intervention potentially creates a huge hierarchical tree of versions. But this is the province of philology and probably should be addressed somewhere other than in the FRBR entities, say using . Therefore, I would advise a somewhat loose, not-too-granular approach in the use of FRBR in MEI (and everywhere else for that matter).
>> >> Said another way, it's not very important that something fit precisely into the "correct" or element, but that one is consistent, uses only the number of entities that achieve the level of descriptive granularity required, and that relationships between the entities are explicitly marked. >> >> -- >> p. >> >> __________________________ >> Perry Roland >> Music Library >> University of Virginia >> P. O. Box 400175 >> Charlottesville, VA 22904 >> 434-982-2702 (w) >> pdr4h (at) virginia (dot) edu >> From: mei-l-bounces at lists.uni-paderborn.de >> [mei-l-bounces at lists.uni-paderborn.de] on behalf of Axel Teich >> Geertinger [atge at kb.dk] >> Sent: Monday, November 19, 2012 3:17 PM >> To: Music Encoding Initiative >> Subject: Re: [MEI-L] FRBR in MEI >> >> Hi Perry, Craig, >> >> Yes, I was of course speaking of the header, where the FRBR-related elements are about to be implemented. In this context my remark about how to "encode performances" was intended to mean "encode information about the historical events where the work (or rather: the expression) was performed"... >> >> I would still, however, be curious to hear any opinions on the interpretation of the expression level as versions (or, as an alternative, using instead of ), because that is how MerMEId treats them right now. The FRBR-extended schema itself does not enforce that interpretation, but MerMEId does, unless we change the concept. >> >> /axel >> >> >> Fra: mei-l-bounces at lists.uni-paderborn.de >> [mei-l-bounces at lists.uni-paderborn.de] på vegne af Roland, Perry >> (pdr4h) [pdr4h at eservices.virginia.edu] >> Sendt: 19. november 2012 16:26 >> Til: Music Encoding Initiative >> Emne: Re: [MEI-L] FRBR in MEI >> >> Hi, Craig, >> >> I want to clear up any confusion about MEI's element. I don't think is what you're thinking of as an "event list"; that is, a place to put timing information. is for recording *historical events using prose*. , on the other hand, is for capturing processable time data.
I believe that's where the kind of thing you're thinking of should go. >> >> Multiple elements are allowed within and/or within >> . "[p]rovides a set of ordered points in time to which musical elements can be linked in order to create a temporal alignment of those elements." >> >> The timing data you provide in >> http://mazurka.org.uk/webern/dynamics/mvmt1/webern-op27-1-Aitken1961.d >> yn can easily be mapped to -- >> >> >> >> >> >> >> while the dynamics can be recorded using in the notation part of the tree. Unfortunately, at present there's no place to record your values on ("73.7632", etc.), only integer values in the MIDI value range, i.e., 0-127. This can be remedied in the short term by customizing the @val attribute. >> >> Assuming customization of @val, a dynamic can be associated with a >> timepoint using @when -- >> >> >> >> >> >> At present, there's no way to link a directly with a FRBR entity, such as (manifestation). An indirect link is possible by associating a timeline (via its @avref attribute) with a digital recording, then associating that digital file (represented by ) with a particular source description (via its @decls attribute). There's no requirement that @decls refer to any particular FRBR entity type (work, expression, manifestation, or item), making it theoretically possible to connect a digital audio file to any FRBR entity; however, practically speaking, a digital audio file is probably best associated with a manifestation; that is, an MEI element. It would be appropriate to describe the Delos DEL 25407 (1978) recording using as it is the base material from which your encoding of the dynamics and timing information is taken.
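[Editorial sketch of the mechanism described above, since the original inline examples were stripped from the archive. @avref, @when, and @val are taken from the prose; the <when> child element, @absolute, and all xml:id values are assumptions about the exact markup, not verified against the MEI schema:]

```xml
<!-- Sketch only: a timeline associated with a digital recording via
     @avref, containing ordered time points. -->
<timeline xml:id="tl1" avref="#recording1">
  <when xml:id="t1" absolute="00:00:01.586"/>
  <when xml:id="t2" absolute="00:00:02.953"/>
</timeline>

<!-- A dynamic associated with a timepoint via @when. With @val
     customized as described above, a non-integer value such as
     "73.7632" could be recorded instead of the MIDI range 0-127. -->
<dynam when="#t1" val="74">p</dynam>
```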
>> >> "Performance" scores of the type you're describing can be encoded in >> MEI by leaving out the duration information at the event level and >> associating the event with timing information provided within >> -- >> >> >> >> >> >> >> There's no defined way yet, however, to record both a "performance" score and a traditional one in the same MEI file. >> >> Hope this helps, >> >> -- >> p. >> >> __________________________ >> Perry Roland >> Music Library >> University of Virginia >> P. O. Box 400175 >> Charlottesville, VA 22904 >> 434-982-2702 (w) >> pdr4h (at) virginia (dot) edu >> _______________________________________________ >> mei-l mailing list >> mei-l at lists.uni-paderborn.de >> https://lists.uni-paderborn.de/mailman/listinfo/mei-l > > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l From atge at kb.dk Tue Nov 20 09:57:14 2012 From: atge at kb.dk (Axel Teich Geertinger) Date: Tue, 20 Nov 2012 08:57:14 +0000 Subject: [MEI-L] Sibelius and MEI Message-ID: <0B6F63F59F405E4C902DFE2C2329D0D1514F2A4B@EXCHANGE-01.kb.dk> Dear all, I guess most of you have seen the information below from Derek Williams about the future of the Sibelius development team (if so, my apologies for re-posting it here). The following statement made me think of MEI: "It is highly tempting to throw our support unquestioningly behind Steinberg's bold enterprise, but personally, I can only recommend this if at least the score file format is made open source, even if the application itself remains proprietary. The easiest way to achieve this for now would simply be to extend the already powerful Music XML file format so as to append all the feature assets of the new application as and when they are added."
Could this be an opportunity to promote MEI, and perhaps make MEI the primary file format for this new notation software instead of extending MusicXML? Are there any direct contacts to the Sibelius development team already? Are any of them on this list, for instance? Would it be worth contacting them? Best, Axel Fra: Derek Williams [mailto:mail at change.org] Sendt: 15. november 2012 02:05 Til: Axel Teich Geertinger Emne: Update about "Avid Technology: Sell Sibelius!" This message is from Derek Williams who started the petition "Avid Technology: Sell Sibelius!," which you signed on Change.org. ________________________________ Dear Save Sibelius petitioner, This petition, together with your individual comments, was printed out and handed over to the new Head of Sibelius, Bobby Lombardi at the specially convened BASCA meeting on 3rd October 2012. However, with the announcement last Friday 9th November that the Sibelius development team has miraculously survived in one piece and is now safely in the employ of Steinberg in London, the campaign to persuade Avid Technology to divest itself of Sibelius has perforce come to an end. With no development team left, it is highly unlikely anyone will have interest in buying just the source code of Sibelius. As of now, aside from the feature-slim version 8 which was already prepared by the sacked development team for release next year, Sibelius is therefore effectively defunct. This petition is therefore now closed. This news will of course be received with mixed feelings. On the one hand we are delighted that Daniel Spreadbury and the last 10 of the original Sibelius development team have been retained intact to develop a new, standalone music notation application, which will, without question, become the new world leader, soon to displace both Sibelius and Finale.
On the other, we are left clinging to the carcass of Sibelius, which until July 2012 had been the world's leading music scoring application, and in which we have all substantially invested financially, artistically and in learning. Once again, the consumer has lost heavily to high-flying corporate avarice, where malpractice goes unpunished, while ineptitude and abject failure are incomprehensibly and staggeringly rewarded. This is completely the wrong environment for ubiquitous resources like music scoring and recording software. I believe our concern should now be directed to preventing this from happening yet again. Even though Steinberg is a reputable company with a first class track record in innovation and development, it too was taken over by Yamaha. Yamaha itself of course also has impeccable credentials as innovator of MIDI and manufacturer of products of high quality, but what will happen if a company like Avid monsters them too? It is highly tempting to throw our support unquestioningly behind Steinberg's bold enterprise, but personally, I can only recommend this if at least the score file format is made open source, even if the application itself remains proprietary. The easiest way to achieve this for now would simply be to extend the already powerful Music XML file format so as to append all the feature assets of the new application as and when they are added. If this is not going to be the case, then I consider that the long term interests of music composers and arrangers will be better served by a separate initiative to create a new open source application broadly modelled on the feature set of Sibelius, but entirely independent of any corporation. Speaking for myself, I am not willing to waste any more time and money, continually buying and learning different applications of duplicate functionality, just to do something I could already do on Sibelius, and could already do forty years ago with pen and paper.
Paradoxically, I can still open and read my forty-year-old paper scores, yet I can't open scores written on a music application in the 1990s. Scoring applications are of value to me only if I get to keep my work, and I don't have to keep starting over every time a company like Avid loses interest in its customers or goes bust. I can only hope that Steinberg will ensure longevity by adopting an open format for score files, so that I can comfortably and safely participate in their new application with our friends from Sibelius. On behalf of the Save Sibelius team, I would like to place on record our deep appreciation for your support in signing this petition. Even though Avid won its fight against its customers in the short term, the Avid board at least know that we didn't go quietly. If Steinberg's initiative proves successful however, then in the long run, it will be a win for us all, because the Sibelius development team are now free to create cutting-edge notation software for the 21st Century. Derek Williams www.sibeliususers.org www.savesibelius.com ________________________________ View the petition | View and reply to this message online Unsubscribe from updates about this petition -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.jpg Type: image/jpeg Size: 332 bytes Desc: image001.jpg URL: From kepper at edirom.de Tue Nov 20 10:15:08 2012 From: kepper at edirom.de (Johannes Kepper) Date: Tue, 20 Nov 2012 10:15:08 +0100 Subject: [MEI-L] Sibelius and MEI In-Reply-To: <0B6F63F59F405E4C902DFE2C2329D0D1514F2A4B@EXCHANGE-01.kb.dk> References: <0B6F63F59F405E4C902DFE2C2329D0D1514F2A4B@EXCHANGE-01.kb.dk> Message-ID: <31B294F2-8924-495B-9700-C3F61D427B62@edirom.de> Dear Axel, thanks for pointing that out -- I wasn't aware of this last step yet.
It's clearly an opportunity, and while I doubt that we have the developers on this list (I haven't checked recently, though), there are certainly ways to contact them. I'll see what I can do about this, but if others do have these contacts, please keep us posted here on MEI-L. Thanks again, Johannes Am 20.11.2012 um 09:57 schrieb Axel Teich Geertinger : > Dear all, > > I guess most of you have seen the information below from Derek Williams about the future of the Sibelius development team (if so, my apologies for re-posting it here). > The following statement made me think of MEI: > "It is highly tempting to throw our support unquestioningly behind Steinberg's bold enterprise, but personally, I can only recommend this if at least the score file format is made open source, even if the application itself remains proprietary. The easiest way to achieve this for now would simply be to extend the already powerful Music XML file format so as to append all the feature assets of the new application as and when they are added." > > Could this be an opportunity to promote MEI, and perhaps make MEI the primary file format for this new notation software instead of extending MusicXML? > Are there any direct contacts to the Sibelius development team already? Are any of them on this list, for instance? Would it be worth contacting them? > > Best, > Axel > > > Fra: Derek Williams [mailto:mail at change.org] > Sendt: 15. november 2012 02:05 > Til: Axel Teich Geertinger > Emne: Update about "Avid Technology: Sell Sibelius!" > > This message is from Derek Williams who started the petition "Avid Technology: Sell Sibelius!," which you signed on Change.org. > > Dear Save Sibelius petitioner, > > This petition, together with your individual comments, was printed out and handed over to the new Head of Sibelius, Bobby Lombardi at the specially convened BASCA meeting on 3rd October 2012.
> > However, with the announcement last Friday 9th November that the Sibelius development team has miraculously survived in one piece and is now safely in the employ of Steinberg in London, the campaign to persuade Avid Technology to divest itself of Sibelius has perforce come to an end. With no development team left, it is highly unlikely anyone will have interest in buying just the source code of Sibelius. As of now, aside from the feature-slim version 8 which was already prepared by the sacked development team for release next year, Sibelius is therefore effectively defunct. This petition is therefore now closed. > > This news will of course be received with mixed feelings. On the one hand we are delighted that Daniel Spreadbury and the last 10 of the original Sibelius development team have been retained intact to develop a new, standalone music notation application, which will, without question, become the new world leader, soon to displace both Sibelius and Finale. On the other, we are left clinging to the carcass of Sibelius, which until July 2012 had been the world's leading music scoring application, and in which we have all substantially invested financially, artistically and in learning. > > Once again, the consumer has lost heavily to high-flying corporate avarice, where malpractice goes unpunished, while ineptitude and abject failure are incomprehensibly and staggeringly rewarded. This is completely the wrong environment for ubiquitous resources like music scoring and recording software. > > I believe our concern should now be directed to preventing this from happening yet again. Even though Steinberg is a reputable company with a first class track record in innovation and development, it too was taken over by Yamaha. Yamaha itself of course also has impeccable credentials as innovator of MIDI and manufacturer of products of high quality, but what will happen if a company like Avid monsters them too?
> > It is highly tempting to throw our support unquestioningly behind Steinberg's bold enterprise, but personally, I can only recommend this if at least the score file format is made open source, even if the application itself remains proprietary. The easiest way to achieve this for now would simply be to extend the already powerful Music XML file format so as to append all the feature assets of the new application as and when they are added. > > If this is not going to be the case, then I consider that the long term interests of music composers and arrangers will be better served by a separate initiative to create a new open source application broadly modelled on the feature set of Sibelius, but entirely independent of any corporation. > > Speaking for myself, I am not willing to waste any more time and money, continually buying and learning different applications of duplicate functionality, just to do something I could already do on Sibelius, and could already do forty years ago with pen and paper. Paradoxically, I can still open and read my forty-year-old paper scores, yet I can't open scores written on a music application in the 1990s. Scoring applications are of value to me only if I get to keep my work, and I don't have to keep starting over every time a company like Avid loses interest in its customers or goes bust. > > I can only hope that Steinberg will ensure longevity by adopting an open format for score files, so that I can comfortably and safely participate in their new application with our friends from Sibelius. > > On behalf of the Save Sibelius team, I would like to place on record our deep appreciation for your support in signing this petition. Even though Avid won its fight against its customers in the short term, the Avid board at least know that we didn't go quietly.
If Steinberg's initiative proves successful however, then in the long run, it will be a win for us all, because the Sibelius development team are now free to create cutting-edge notation software for the 21st Century. > > Derek Williams > > www.sibeliususers.org > www.savesibelius.com > > View the petition | View and reply to this message online > > Unsubscribe from updates about this petition > > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l From bohl at edirom.de Tue Nov 20 11:54:50 2012 From: bohl at edirom.de (Benjamin Wolff Bohl) Date: Tue, 20 Nov 2012 11:54:50 +0100 Subject: [MEI-L] FRBR in MEI In-Reply-To: <4DF0F169-AEB5-4F46-8220-828B90F928FF@edirom.de> References: <0B6F63F59F405E4C902DFE2C2329D0D1514EF524@EXCHANGE-01.kb.dk> <45F7170D-0C66-4C59-9285-E328FDC45556@edirom.de> <0B6F63F59F405E4C902DFE2C2329D0D1514EF677@EXCHANGE-01.kb.dk> <50A3EA0D.3010107@edirom.de> <0B6F63F59F405E4C902DFE2C2329D0D1514F0523@EXCHANGE-01.kb.dk> <50A66B12.3060105@edirom.de> <4DF0F169-AEB5-4F46-8220-828B90F928FF@edirom.de> Message-ID: <50AB617A.4020405@edirom.de> Just a few words in my defence; after that I might just shut up on this. Am 16.11.2012 18:46, schrieb Johannes Kepper: > Hi Benni, > > your interpretation of FRBR is just wrong. I agree that the handling of performances is not the most intuitive concept, but it seems quite consistent to me. You might want to look at the official specification document (http://www.ifla.org/files/assets/cataloguing/frbr/frbr_2008.pdf, official translations available from http://www.ifla.org/publications/translations-of-frbr). Maybe it's not FRBR anymore but anyway, see below: > > A work is a totally abstract idea of something. > An expression is a form of this work. It is catered for a specific instrumentation, and is set up for a specific purpose, but it's still no good.
It might be, but: "expression (the intellectual or artistic realization of a work)" - http://www.ifla.org/files/assets/cataloguing/frbr/frbr_2008.pdf , p. 13. So, for example, a conductor's concept of how to perform a piece with an orchestra, being somewhat like an edition of the work. > A manifestation is the method of preserving an expression, of converting it to a physical thing. "As an entity, manifestation represents all the physical objects that bear the same characteristics, in respect to both intellectual content and physical form." as above, p. 21. All the performances the conductor held with that specific orchestra (at the same location?). Well, it might be that every single performance would need to be a separate manifestation as: no two days the same. But I still would like to group them as CONDUCTOR's performances of WORK with ORCHESTRA as a manifestation, because each single performance would be an item then. Of course, looking at the examples in the above FRBR document, I assume that this is something not really catered for, as it doesn't seem to be an archivable object before time travel becomes reality; nevertheless one might want to. > An item is the result of the manifestation; it is the physical thing. The one performance I attended. Again, this is not archivable, but nevertheless, looking at the examples in the FRBR document I cannot see any example for an item being a musical performance; they might have forgotten about it. And of course they state that: "any change in form (e.g., from alpha-numeric notation to spoken word) results in a new expression. Similarly, changes in the intellectual conventions or instruments that are employed to express a work (e.g., translation from one language to another) result in the production of a new expression.
" So all the performances by the same artists with the very same instruments should make up an expression; then, conclusively, a single performance would make up a manifestation having only a single item, as it cannot be repeated 1:1. I would not consider the differences between the individual performances big enough to justify separate manifestations, as physically they are the same, bringing me back to the thought: Why shouldn't a set of performances constitute a manifestation if the single performances (items) all are performed by the same conductor/orchestra etc. (artist)? If they can, what is the superordinate expression? The intellectual edition-like concept by the conductor? > Let's consider a work called "Schreifütz". This is a very abstract thing, which has a relation to a different work called "Freischütz", and it's a parody of that. The "Schreifütz" exists in a version for nose flute and harpsichord, and a version for full orchestra. Those are expressions. > > If there is a print of the orchestra version, the whole print run would be the manifestation (in MEI called source), and the individual copies would be items. This helps to distinguish between the features common to all copies and individual copies (pencil markings etc.). > If there is a manuscript of the nose flute version, the information about it would be spread between the manifestation (source) and the item. This is probably the most annoying compromise in FRBR, but it allows one to be consistent across different media types. It's just not intuitive... > > A manifestation follows its expression exactly; by definition there is no or nearly no difference. So if you have two more measures in a source, this source establishes a new expression in FRBR. This might not reflect traditional editorial concepts, but matches very well with genetic approaches.
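[Editorial sketch of the "Schreifütz" example as FRBR entities in MEI-like markup. The entity names (work, expression, source as manifestation, item) follow the prose above; the wrapper elements, the @decls link from the source to its expression, and all ids are illustrative assumptions, not verified MEI:]

```xml
<!-- Sketch only: one abstract work with two expressions (versions). -->
<work xml:id="schreifuetz">
  <title>Schreifütz</title>
  <expressionList>
    <expression xml:id="e1">
      <title>Version for nose flute and harpsichord</title>
    </expression>
    <expression xml:id="e2">
      <title>Version for full orchestra</title>
    </expression>
  </expressionList>
</work>

<!-- The whole print run of the orchestra version is the manifestation
     (an MEI source); each physical copy is an item, carrying the
     copy-specific features (pencil markings etc.). -->
<source xml:id="print1" decls="#e2">
  <itemList>
    <item xml:id="copy1"/>
    <item xml:id="copy2"/>
  </itemList>
</source>
```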
This required conformance between sources / manifestations of one expression is not restricted to the music, but explicitly includes the instrumentation: If you have another manuscript of the nose flute version, where the harpsichord is replaced by a piano, it would be a separate expression already. FRBR allows some leeway here, but officially the slightest change results in a new expression. > > Following these arguments, a performance is clearly an expression. Different musicians will result in a different expression. I'm not sure how to model repeated performances (Broadway shows?), but let's put that aside for now. And that was the thing I've been thinking of. I don't know either, but maybe we could find one together. > You're right, you can't hold a performance in your hand. If you want to preserve a performance, you record it; that is, you manifest it on CD, tape, whatever. The recording will always reflect the version as given during the performance, and it will result in a number of items. Of course the recording / manifestation has certain technical inflictions on the content of these items, but the same is true for prints: An engraved copy will show different slurs than a typesetter copy than Craig's SCORE file. Those are artifacts of the technical process of manifesting an expression into items. Ergo the engraved and the typesetter copy will make up different manifestations of the same expression. As will, in the case of the recording, the two different forms of capturing the acoustic signal - maybe better call it "the act of making a recording" as Perry called it. > Now, if you and I perform the version for nose flute and harpsichord, what's the relationship between the nose flute / harpsichord expression and our performance? The latter is based upon the former, but they are clearly different. Maybe the original is less defined than ours (we already prescribe the performers being us), but that's a sibling relationship.
We depend on this other one, but we create a new one, just like the composer correcting a preprint copy creates a new expression from it. First, our intellectual inflictions make up another expression. Consequently, our practicing it and our performing it make up two manifestations, with our performance at Detmold musicology being the one and only item of the latter, as we decided never to perform it again ;-) I never said any different, not even in my graphic. Where do I get that striking red error? /benjamin > If someone tries to implement FRBR completely, it would result in a whole bunch of expressions, manifestations, and items. I guess not even librarians would do that to the extreme. That's why I think that Axel's compromise of putting performances in an eventList inside the expression is perfectly reasonable, especially since someone could use XSLT to extract them into separate expressions if needed. > > I hope my short explanation of FRBR was clear enough. If you have further questions, I'm happy to give it another try. Although I'm pretty sure that Axel, Kristina, and others can explain it better than me :-) > > Johannes > > > > Am 16.11.2012 um 17:34 schrieb Benjamin Wolff Bohl: > >> Hi Axel et al., >> first, thanks for correcting my repeated mistake concerning the items, which certainly did not increase clarity. >> My head has continued working on the problematic handling of recordings in FRBR, and I tried to graphically sort out my thoughts (as you can see in the attached image or [in case it doesn't go through] under http://homepages.uni-paderborn.de/bwbohl/MEI/Bohl_FRBR.jpg). >> The graphic is sort of a table with work, expression, manifestation and item being the column labels. The contents first show a work with an edition and an autograph source, then a performance ("Interpretation by Kepper/Roland") and a recording (in red), and last some records (I just realize I missed putting an expression before the records, sorry for that).
Blue lines show hierarchical dependencies whereas green lines indicate a "based upon" relationship. >> The idea behind it is that an interpretation of a work by a certain conductor could be viewed as an expression, with him conducting a certain orchestra being a manifestation, and the actual performance on a certain date at a certain location being the item (physical by means of the sound waves ;-) >> A recording, again, is another expression of the work, although depending on a certain interpretation-performance. >> >> I'm not a FRBR expert, so I don't know how this all conforms with the FRBR paper(s), but I would be happy to sort things out with you! >> >> /benjamin >> >> Am 14.11.2012 20:36, schrieb Axel Teich Geertinger: >>> Hi Benni >>> >>> some quick comments here and there... >>> >>> Fra: mei-l-bounces at lists.uni-paderborn.de [mailto:mei-l-bounces at lists.uni-paderborn.de] På vegne af Benjamin Wolff Bohl >>> Sendt: 14. november 2012 19:59 >>> Til: Music Encoding Initiative >>> Emne: Re: [MEI-L] FRBR in MEI >>> >>> Hi Axel, >>> thanks for this huge insight into the FRBR customization. Having considered some recording metadata in the Freischütz project, I'll try to add my thoughts on this topic. >>> Sorry for adding to this discussion late; I had prepared this mail this morning in the train, then forgot to send it from work... >>> See my comments inline >>> >>> Am 13.11.2012 13:21, schrieb Axel Teich Geertinger: >>> Hi Johannes, >>> >>> Thanks for your comments. Good and relevant as always. I think I better leave it to the more technically skilled people to answer most of it, but I have just a few comments. >>> >>>>> 4) There is a problem possibly emerging from the notation-centric nature of >>>> MEI, or perhaps it is really a FRBR problem; namely the handling of performances >>>> and recordings. FRBR treats them both as expressions, i.e. as "siblings" to what I >>>> (and MerMEId) would regard as different versions of the work.
We encode >>>> performances using elements within expression/history, i.e. as (grand-)children of , which really makes sense to me. A performance must >>>> be of a certain version (form, instrumentation) of the work, so I strongly believe we >>>> should keep it this way. It's just not how FRBR sees it. On the other hand, as far as >>>> I can see there is nothing (except the practical and conceptual difficulties) that >>>> prevents users from encoding a performance or a recording as an expression, so >>>> FRBR compliance is probably possible also in this respect. I just wouldn't >>>> recommend it, and I actually suspect FRBR has a problem there rather than >>>> MEI. >>>> >>>> I haven't looked this up, but are you sure that performances and recordings are on >>>> the same level? I would see performances as expressions, while recordings are >>>> manifestations. Of course a performance follows a certain version of a work, like >>>> the piano version (=expression). But, the musician moves that to a different >>>> domain (graphical to audio), and he may or may not play the repeats, and he may >>>> or may not follow the dynamic indications of the score. There certainly is a strong >>>> relationship between both expressions, but they are distinct to me. I see your >>>> reasons for putting everything into an eventList, and thus subsuming it under one >>>> expression, but that might not always be the most appropriate model. Sometimes, >>>> it might be better to use separate expressions for the piano version and its >>>> performances and connect them with one or more relations. >>>> >>> >>> Sorry, my mistake. Now that I look it up I see you are right: performances are expressions, recordings are not. As I said, I haven't really been looking into the recordings question yet. Here's an example from the FRBR report:
>>>
>>> w1 J. S. Bach's Six suites for unaccompanied cello
>>>   e1 performances by Janos Starker recorded partly in 1963 and completed in 1965
>>>     m1 recordings released on 33 1/3 rpm sound discs in 1966 by Mercury
>>>     m2 recordings re-released on compact disc in 1991 by Mercury
>>>   e2 performances by Yo-Yo Ma recorded in 1983
>>>     m1 recordings released on 33 1/3 rpm sound discs in 1983 by CBS Records
>>>     m2 recordings re-released on compact disc in 1992 by CBS Records
>>>
>>> So, recordings are no problem, I guess. But that still leaves us with two very different ways of encoding performance data. FYI, we have recently moved performance s from to , so we do subsume them under a particular expression already. >>> First, I don't think that a recording and a performance are really two different things, but correct me if I'm missing something. The way to both of them is the same; only the recording might result in further manifestations. >>> >>> That is exactly the problem I'm having with FRBR's view on performances. I think of performance and recording as quite parallel to printed and manuscript sources: a recording is a sort of "printed performance", i.e. one that may be reproduced in multiple copies and re-releases. A performance, like a manuscript, is a unique "event", so just like a manuscript can only have one location (1 item), the performance manifestation also has just one item (it happens at a certain place at a certain time and is not repeatable until we invent time travel). And I tend to think that the performance must represent a specific expression of the work, but it may be more complex than that. However, to me all this indicates that performances should be treated as manifestations. But FRBR sees it differently. And since I treat them as events, I may have simply evaded the problem... >>> >>> Let's say you have a copy of a specific recording of a work. Interpreting your record as an expression of the work is fine.
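The eventList compromise described here, with performances subsumed under a particular expression, might look something like the following sketch. The element names expression, history, eventList, and event are the ones discussed in this thread; the surrounding structure, the expressionList container, and the children of event are assumptions not checked against a specific MEI release, and the Freischütz premiere is used only as a familiar example:

```xml
<work>
  <title>Der Freischütz</title>
  <expressionList>
    <expression>
      <title>Full orchestral version</title>
      <history>
        <eventList>
          <!-- The performance is recorded as an event of this version
               of the work, not as a separate FRBR expression. -->
          <event>
            <date>1821-06-18</date>
            <desc>First performance, Schauspielhaus Berlin</desc>
          </event>
        </eventList>
      </history>
    </expression>
  </expressionList>
</work>
```

If a particular performance later needs to stand as an expression in its own right, such events can still be lifted out mechanically, as noted earlier in the thread.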
Interpreting the recording session as an expression of the work can be rather problematic. >>> >>> Well, MY record would be an item, i.e. a specific copy of the manifestation (the release). Right? >>> >>> The record you own is a manifestation of the recording (expression/?) which on the other hand will be the trans-medialization of specific performance material, having been worked with and modified by conductor and musicians in order to resemble the performance (manifestation) which again is based on a certain printed edition of the work (expression) possibly taking into account differences from other sources. >>> >>> Again, the one I actually own is an *item* of the manifestation (just to make sure we agree on that...) >>> >>> Would one say that this makes the record inferior and nested deep inside the work-expression-manifestation of the written sources or rather a sibling expression-manifestation tree of the same work with strong relations to each other? >>> >>> I would say that the recording manifestation and the written manifestation used for the recording would have strong manifestation-to-manifestation relations, but that they would not *necessarily* have the same parent expression. The musicians could have changed something, made cuts or other things not present in the performance material they played from. So we could also have sibling expressions here. >>> >>> I think it's the very right thing you did in moving the performance list to . >>> >>> Some further complications might arise from the following two thoughts: >>> (a) The record may moreover be (and this is quite popular in recent years, especially with 'classical' music) the re-release of an older record (i.e. another manifestation of the same recording) but modified in order to fit the new medium, remastered and digitized and potentially even remixed (Vinyls have certain physical implications on the nature of the sound, whilst CDs or digital audio have different ones). >>> >>> No problem, as I see it.
Like in the FRBR example above, that would be a new manifestation of the recording expression. >>> >>> (b) The record doesn't come alone; it has a booklet, which could be referenced from ? This booklet will incorporate texts by different persons, and again if re-released might incorporate the old booklet and add additional material. >>> >>> Good question. This would be a bundle of relations pointing in all directions, perhaps. I have no good answer to that right away... >>> >>> /axel >>> >>> >>> >>> See you later, >>> Benjamin >>> >>> >>>>> 5) Finally, an issue related to the FRBR discussion, though not directly a >>>> consequence of it: MEI 2012 allows multiple elements within . I >>>> can't think of any situation, however, in which it may be desirable to describe more >>>> than one work in a single file. On the contrary, it could easily cause a lot of >>>> confusion, so I would actually suggest allowing only one element; in other >>>> words: either skip and have 1 optional in , or keep >>>> , and change its content model to be the one used by now. >>>> >>>> Again, I think that this perspective is biased by your application, where it makes >>>> perfect sense. Consider you're working on Wagner's Ring. You might want to say >>>> something about all these works in just one file. All I want to say is that this is a >>>> modeling question, which is clearly project-specific. It seems perfectly reasonable >>>> to restrict merMEId to MEI instances with only one work, but I wouldn't restrict MEI >>>> to one work per file. This may result in preprocessing files before operating on >>>> them with merMEId, but we have similar situations for many other aspects of MEI, >>>> so this isn't bad per se. >>> >>> In the Ring case, we are talking about the individual dramas as components of a larger work. This would probably be one of the situations where would come in handy as a child of (which the customization allows already).
I would be reluctant, however, to include them as four elements directly under . To clarify what that would mean, it would be necessary to specify work-to-work relations. Furthermore, there wouldn't be any place to put metadata concerning *all* four works, since we would be at top level already. >>> >>> Best, >>> Axel >>> >>> >>> _______________________________________________ >>> mei-l mailing list >>> mei-l at lists.uni-paderborn.de >>> https://lists.uni-paderborn.de/mailman/listinfo/mei-l >>> >>> >>> >>> _______________________________________________ >>> mei-l mailing list >>> >>> mei-l at lists.uni-paderborn.de >>> https://lists.uni-paderborn.de/mailman/listinfo/mei-l >> _______________________________________________ >> mei-l mailing list >> mei-l at lists.uni-paderborn.de >> https://lists.uni-paderborn.de/mailman/listinfo/mei-l > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l From bohl at edirom.de Tue Nov 20 11:56:49 2012 From: bohl at edirom.de (Benjamin Wolff Bohl) Date: Tue, 20 Nov 2012 11:56:49 +0100 Subject: [MEI-L] FRBR in MEI In-Reply-To: References: <0B6F63F59F405E4C902DFE2C2329D0D1514EF524@EXCHANGE-01.kb.dk> <45F7170D-0C66-4C59-9285-E328FDC45556@edirom.de> <0B6F63F59F405E4C902DFE2C2329D0D1514EF677@EXCHANGE-01.kb.dk> <50A3EA0D.3010107@edirom.de> <0B6F63F59F405E4C902DFE2C2329D0D1514F0523@EXCHANGE-01.kb.dk> <50A66B12.3060105@edirom.de> <14810_1353088028_50A67C1B_14810_70_1_4DF0F169-AEB5-4F46-8220-828B90F928FF@edirom.de>, <8DD7CA10-D542-48B7-BF5F-1BC177A7CD43@mail.mcgill.ca> Message-ID: <50AB61F1.8090600@edirom.de> Hi Perry, thanks for some clarifying approaches; further statements inline Am 16.11.2012 22:25, schrieb Roland, Perry (pdr4h): > Random comments on the discussion so far. Sorry if this gets long.
> > When contemplating performances and recordings, it seems to me that people often have trouble reaching agreement on the term "sound recording". Andrew's slides label the *expression* as "the sound recording", but others might label the *manifestation* as "the sound recording". You might say the expression is the "act of making a recording" and the manifestation is the "recording that results". > > To disentangle the different uses of the term "recording", it helps me to remember that an expression is not a physical entity, but a manifestation is. Therefore, I prefer to think of the expression as "the performance" (the non-physical thing being recorded) and the manifestation as "the recording" (the physical thing). This fits with the way libraries have traditionally cataloged recordings, i.e., CDs, LPs, cassettes, wax cylinders, ... I completely agree on that, being the reason why I used both the terms recording and record, with record being on the manifestation/item level and recording being rather on the expression-manifestation level. Why so? Recording has to be subordinate to work after all, and a recording is not just a simple physical manifestation but a multistep process involving conceptual and creative work done by producers and engineers. So talking about a recording as only being a manifestation becomes problematic, as it is an intellectual process resulting in a physical manifestation. That's the way I was looking at it (owed to my audio engineering past) and of course it can be seen differently. > In any case, the FRBR document, which Axel cites, says a *performance is an expression* and a *recording is a manifestation*. This is perfectly plausible when disregarding the intellectual endeavour entangled with the "act of making a recording", as mentioned before.
> The usual "waterfall" kind of diagram is explained by saying the term "work" applies to conceptual content; "expression" applies to the languages/media/versions in which the work occurs; "manifestation" applies to the formats in which each expression is available; and "item" applies to individual copies of a single format. (Here "media" means "medium of expression", say written language as opposed to film, and "format" means physical format, as in printed book as opposed to audio CD.) > > Taking another tack, though, often it is easier for me to think of FRBR "from the bottom up", rather than start from the work and proceed "down" the waterfall diagram. Using the recording example, the item is the exemplar I hold in my hand, the manifestation is all of the copies of that exemplar (or better yet, all the information shared by all those copies), the expression is the version of the work that is represented by the manifestation (e.g., Jo's nose flute + harpsichord version and the orchestral version are different expressions), and the work is an intellectual creation/idea (e.g., Bohl's op. 1, the one that goes da, da, da, daaaaaa, reeep! reeep! reeep!). > > Using this "bottom up" thinking helps avoid mental contortions regarding what the work is -- the work is simply the thing at the end of this mental process. From there on, there are work-to-work relationships, so we don't have to think about whether "Romeo and Juliet", "West Side Story", and every other story about star-crossed lovers are expressions of an ur-work with its own manifestations and so on, which would lead us to a different "waterfall" conclusion each time we discover a new work or expression. The idea of approaching the FRBR model "from the bottom" is great. And to be honest, it was something I did when drawing my model, especially concerning the record and recording portion of it.
I started out from work on the top right and from the individual record bottom right and tried to fill in as many steps as possible, always wondering whether it was physical or conceptual. Actually I had the recording in between expression and manifestation in the first place, as I had the audio tape or digital audio in between manifestation and item. The parallel processes from a work to an item (regardless of whichever form this may have) are owed to perspective and goal. When talking about graphical sources I completely agree with the idea of a certain instrumentation version or the like being an expression, a print run being a manifestation, an individual copy of which would be an item. > Instead of creating separate expression-level markup for each performance, Axel treats some expressions (performances) as events related to another expression of a work (the orchestral version vs. the nose flute version). This is fine. As Johannes already pointed out, separate elements for the performances can be generated from the markup, if necessary. Conversely, there's nothing wrong with creating separate elements for each performance and relating them to other appropriate expressions and/or relating them directly to the work. If necessary, given accurate place and date information, the kind of markup could be created from the separate elements. So, six of one ... I can agree here, too. I only wondered: if the sound wave resulting from the performance was the physical item (specific performers on a specific date), then consequently a series of performances by conductor and orchestra would make up the manifestation; the expression then would be the concept that the conductor developed studying his "source material" and making up the way he wanted the composition to be realized, ergo his "personal version" of the piece, somewhat of a personal edition.
The performance material of course being an item of a certain print run (manifestation) of a certain edition (expression), having strong relationships to all of the above. > Johannes said "If there is a manuscript of the nose flute version, the information about it would be spread between the manifestation (source) and the item." Well, maybe. But, I think in this case it would be fine to describe the manifestation and the item in a single place (within in MEI) because there's only one manifestation and one (and only one) item associated with that manifestation. This is the traditional way manuscripts have been described, pre-FRBR. Practically speaking, the manifestation and the item are the same thing. But, as soon as you want to say something special about a particular *part* (as in "chunk", not performer part) of the manifestation, you have to split these up again, for example, when one section of a manuscript is located in Prague and another is in Manitoba. This was the idea behind me marking/stretching the autograph from expression to item. /benjamin > This is not the case with printed material where there is *always* more than one item created from a manifestation, but it is still traditional to describe the manifestation and item as though they are the same thing. For example, it is common to follow the manifestation's author, title, place of publication, etc. with information about the location where one can obtain an exemplar of the manifestation, say, UVa Library M 296.C57 1987. > > Johannes also said "So if you have two more measures in a source, this source establishes a new expression in FRBR." Again, maybe.
The FRBR report (1997, amended and corrected through 2009) says > > "Variations within substantially the same expression (e.g., slight variations that can be noticed between two states of the same edition in the case of hand press production) would normally be ignored or, in specialized catalogues, be reflected as a note within the bibliographic record for the manifestation. However, for some applications of the model (e.g., early texts of rare manuscripts), each variation may be viewed as a different expression." > > The issue is in the determination of whether 2 things are "substantially the same expression". As with many things, this depends on the person making the determination; there is no single correct answer. We intend that MEI will provide the tools for accurate description using either approach. > > Just my 2 cents, > > -- > p. > > __________________________ > Perry Roland > Music Library > University of Virginia > P. O. Box 400175 > Charlottesville, VA 22904 > 434-982-2702 (w) > pdr4h (at) virginia (dot) edu > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l From atge at kb.dk Tue Nov 20 12:33:41 2012 From: atge at kb.dk (Axel Teich Geertinger) Date: Tue, 20 Nov 2012 11:33:41 +0000 Subject: [MEI-L] FRBR in MEI In-Reply-To: <50AB61F1.8090600@edirom.de> References: <0B6F63F59F405E4C902DFE2C2329D0D1514EF524@EXCHANGE-01.kb.dk> <45F7170D-0C66-4C59-9285-E328FDC45556@edirom.de> <0B6F63F59F405E4C902DFE2C2329D0D1514EF677@EXCHANGE-01.kb.dk> <50A3EA0D.3010107@edirom.de> <0B6F63F59F405E4C902DFE2C2329D0D1514F0523@EXCHANGE-01.kb.dk> <50A66B12.3060105@edirom.de> <14810_1353088028_50A67C1B_14810_70_1_4DF0F169-AEB5-4F46-8220-828B90F928FF@edirom.de>, <8DD7CA10-D542-48B7-BF5F-1BC177A7CD43@mail.mcgill.ca> <50AB61F1.8090600@edirom.de> Message-ID: <0B6F63F59F405E4C902DFE2C2329D0D1514F2CAC@EXCHANGE-01.kb.dk> Hi Benni Perhaps we should remember that FRBR is intended
for _bibliographic records_, not for descriptions of a work's reception history. Thus, the premise for using FRBR is that in the end we want to describe bibliographic items. Since a performance itself isn't a bibliographic item, perhaps it does not have to fit in? Only if it results in such an item (via a manifestation), i.e. a recording, does it become truly relevant to use FRBR. The performance in that case is not the primary thing we want to describe; it is just the context that resulted in the recording manifestation. Just another 2 cents, Axel -----Oprindelig meddelelse----- Fra: mei-l-bounces at lists.uni-paderborn.de [mailto:mei-l-bounces at lists.uni-paderborn.de] På vegne af Benjamin Wolff Bohl Sendt: 20. november 2012 11:57 Til: Music Encoding Initiative Emne: Re: [MEI-L] FRBR in MEI [...] _______________________________________________ mei-l mailing list mei-l at lists.uni-paderborn.de https://lists.uni-paderborn.de/mailman/listinfo/mei-l From kepper at edirom.de Tue Nov 20 13:29:34 2012 From: kepper at edirom.de (Johannes Kepper) Date: Tue, 20 Nov 2012 13:29:34 +0100 Subject: [MEI-L] FRBR in MEI In-Reply-To: <0B6F63F59F405E4C902DFE2C2329D0D1514F2CAC@EXCHANGE-01.kb.dk> References: <0B6F63F59F405E4C902DFE2C2329D0D1514EF524@EXCHANGE-01.kb.dk> <45F7170D-0C66-4C59-9285-E328FDC45556@edirom.de> <0B6F63F59F405E4C902DFE2C2329D0D1514EF677@EXCHANGE-01.kb.dk> <50A3EA0D.3010107@edirom.de> <0B6F63F59F405E4C902DFE2C2329D0D1514F0523@EXCHANGE-01.kb.dk> <50A66B12.3060105@edirom.de> <14810_1353088028_50A67C1B_14810_70_1_4DF0F169-AEB5-4F46-8220-828B90F928FF@edirom.de>, <8DD7CA10-D542-48B7-BF5F-1BC177A7CD43@mail.mcgill.ca> <50AB61F1.8090600@edirom.de> <0B6F63F59F405E4C902DFE2C2329D0D1514F2CAC@EXCHANGE-01.kb.dk> Message-ID: <34BE6CF5-15FA-4FDB-AA3A-AFA6395503B4@edirom.de> Hi Benni, I hope I got one of your last mails wrong (in this regard), but just in case I didn't: by no means did I want to keep you from commenting on this (or other) thread(s), as your comments are extremely valuable and helpful, even if I sometimes disagree. If I offended you somehow, that wasn't my intention, and I want to apologize for it. That being said, I may continue to disagree ;-) Actually, I don't think we're that far away. The one thing you seem to get wrong, though, is the process from expression to manifestation, which is by no means a trivial, merely technological step without artistic contribution.
When you consider the efforts necessary to engrave a piece of music, or the work on the preparation of the WeGA scores we see every day, you will agree that even in the graphical domain, this step is indeed highly artistic and involves a whole bunch of people with different expertise. I agree that the workflows for making recordings are different, but both things seem to be comparable from this perspective, don't you think? Besides that, I totally agree that FRBR is not extremely prescriptive regarding how to model certain situations, but after thinking about it for some time, I (now) think that this is actually a benefit, as it doesn't enforce a specific setup, but allows projects to implement it as they see fit. So in the end, I'm not against your approach in general, I'm just against enforcing your approach. The current implementation of FRBR in MEI tries to keep this openness of FRBR, which I regard as a good thing. In the end, all of us could be wrong ;-) Best, Johannes Am 20.11.2012 um 12:33 schrieb Axel Teich Geertinger: > Hi Benni > > Perhaps we should remember that FRBR is intended for _bibliographic records_, not for descriptions of a work's reception history. Thus, the premise for using FRBR is that in the end we want to describe bibliographic items. Since a performance itself isn't a bibliographic item, perhaps it does not have to fit in? Only if it results in such an item (via manifestation), i.e. a recording, it becomes truly relevant to use FRBR. The performance in that case is not the primary thing we want to describe, it is just the context that resulted in the recording manifestation. > > Just another 2 cents, > Axel > > -----Oprindelig meddelelse----- > Fra: mei-l-bounces at lists.uni-paderborn.de [mailto:mei-l-bounces at lists.uni-paderborn.de] P? vegne af Benjamin Wolff Bohl > Sendt: 20. 
november 2012 11:57 > Til: Music Encoding Initiative > Emne: Re: [MEI-L] FRBR in MEI > > Hi Perry, > thanks for some clarifying approaches > further statements inline > > Am 16.11.2012 22:25, schrieb Roland, Perry (pdr4h): >> Random comments on the discussion so far. Sorry if this gets long. >> >> When contemplating performances and recordings, it seems to me that people often have trouble reaching agreement on the term "sound recording". Andrew's slides label the *expression* as "the sound recording", but others might label the *manifestation* as "the sound recording". You might say the expression is the "act of making a recording" and the manifestation is the "recording that results". >> >> To disentangle the different uses of the term "recording", it helps me to remember that an expression is not a physical entity, but a manifestation is. Therefore, I prefer to think of the expression as "the performance" (the non-physical thing being recorded) and the manifestation as "the recording" (the physical thing). This fits with the way libraries have traditionally cataloged recordings, i.e., CDs, LPs, cassettes, wax cylinders, ... > I completely agree on that, being the reason why I used both the terms recording and record with record being on the manifestation/item-level and recording being rather on the expression-manifestation-level. Why so? Recording has to be subordinate to work after all and a recording is not just a simple physical manifestation but a multistep process involving conceptual and creative work done by producers and engineers. > So talking about a recording as only being a manifestation becomes problematic as it is a intellectual process resulting in a physical manifestation. That's the way I was looking on it (owed to my audio engineering past) and of course it can be seen differently. >> In any case, the FRBR document, which Axel cites, says a *performance is an expression* and a *recording is a manifestation*. 
> This is perfectly plausible when disregarding the intellectual endeavour entangled with the "act of making a recording", as mentioned before. >> The usual "waterfall" kind of diagram is explained by saying the term >> "work" applies to conceptual content; "expression" applies to the >> languages/media/versions in which the work occurs; "manifestation" >> applies to the formats in which each expression is available; and >> "item" applies to individual copies of a single format. (Here "media" >> means "medium of expression", say written language as opposed to film, >> and "format" means physical format, as in printed book as opposed to >> audio CD.) >> >> Taking another tack, though, often it is easier for me to think of FRBR "from the bottom up", rather than start from the work and proceed "down" the waterfall diagram. Using the recording example, the item is the exemplar I hold in my hand, the manifestation is all of the copies of that exemplar (or better yet, all the information shared by all those copies), the expression is the version of the work that is represented by the manifestation (e.g., Jo's nose flute + harpsichord version and the orchestral version are different expressions), and the work is an intellectual creation/idea (e.g., Bohl's op. 1, the one that goes da, da, da, daaaaaa, reeep! reeep! reeep!). >> >> Using this "bottom up" thinking helps avoid mental contortions regarding what the work is -- the work is simply the thing at the end of this mental process. From there on, there are work-to-work relationships, so we don't have to think about whether "Romeo and Juliet", "West Side Story", and every other story about star-crossed lovers are expressions of an ur-work with its own manifestations and so on, which would lead us to a different "waterfall" conclusion each time we discover a new work or expression. > The idea of approaching the FRBR model "from the bottom" is great.
And to be honest, that was something I did when drawing my model, especially concerning the record and recording portion of it. I started out from work on the top right and from the individual record bottom right and tried to fill in as many steps as possible, always wondering whether it be physical or conceptual. Actually I had the recording in between expression and manifestation in the first place, as I had the audio tape or digital audio in between manifestation and item. > The parallel processes from a work to an item (regardless of whichever form this may have) are owed to perspective and goal. When talking about graphical sources I completely agree with the idea of a certain instrumentation version or the like being an expression, a print run being a manifestation, an individual copy of which would be an item. >> Instead of creating separate expression-level markup for each performance, Axel treats some expressions (performances) as events related to another expression of a work (the orchestral version vs. the nose flute version). This is fine. As Johannes already pointed out, separate elements for the performances can be generated from the markup, if necessary. Conversely, there's nothing wrong with creating separate elements for each performance and relating them to other appropriate expressions and/or relating them directly to the work. If necessary, given accurate place and date information, the kind of markup could be created from the separate elements. So, six of one ... > I can agree here, too. I only wondered if the sound wave resulting from the performance was the physical item (specific performers on a specific date), then consequently a series of performances by conductor and orchestra would make up for the manifestation, the expression then would be the concept that the conductor developed studying his "source material" and making up the way he wanted the composition to be realized ergo his "personal version" of the piece, somewhat of a personal edition.
> The performance material of course being an item of a certain print run > (manifestation) of a certain edition (expression), having strong relationships to all of the above. >> Johannes said "If there is a manuscript of the nose flute version, the information about it would be spread between the manifestation (source) and the item." Well, maybe. But, I think in this case it would be fine to describe the manifestation and the item in a single place (within MEI) because there's only one manifestation and one (and only one) item associated with that manifestation. This is the traditional way manuscripts have been described, pre-FRBR. Practically speaking, the manifestation and the item are the same thing. But, as soon as you want to say something special about a particular *part* (as in "chunk", not performer part) of the manifestation, you have to split these up again, for example, when one section of a manuscript is located in Prague and another is in Manitoba. > This was the idea behind me marking/stretching the autograph from expression to item. > > /benjamin >> This is not the case with printed material where there is *always* more than one item created from a manifestation, but it is still traditional to describe the manifestation and item as though they are the same thing. For example, it is common to follow the manifestation's author, title, place of publication, etc. with information about the location where one can obtain an exemplar of the manifestation, say, UVa Library M 296.C57 1987. >> >> Johannes also said "So if you have two more measures in a source, this >> source establishes a new expression in FRBR." Again, maybe.
The FRBR >> report (1997, amended and corrected through 2009) says >> >> "Variations within substantially the same expression (e.g., slight variations that can be noticed between two states of the same edition in the case of hand press production) would normally be ignored or, in specialized catalogues, be reflected as a note within the bibliographic record for the manifestation. However, for some applications of the model (e.g., early texts of rare manuscripts), each variation may be viewed as a different expression." >> >> The issue is in the determination of whether 2 things are "substantially the same expression". As with many things, this depends on the person making the determination; there is no single correct answer. We intend that MEI will provide the tools for accurate description using either approach. >> >> Just my 2 cents, >> >> -- >> p. >> >> __________________________ >> Perry Roland >> Music Library >> University of Virginia >> P. O. Box 400175 >> Charlottesville, VA 22904 >> 434-982-2702 (w) >> pdr4h (at) virginia (dot) edu >> _______________________________________________ >> mei-l mailing list >> mei-l at lists.uni-paderborn.de >> https://lists.uni-paderborn.de/mailman/listinfo/mei-l > > > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l From craigsapp at gmail.com Tue Nov 20 14:44:32 2012 From: craigsapp at gmail.com (Craig Sapp) Date: Tue, 20 Nov 2012 05:44:32 -0800 Subject: [MEI-L] Sibelius and MEI In-Reply-To: <31B294F2-8924-495B-9700-C3F61D427B62@edirom.de> References: <0B6F63F59F405E4C902DFE2C2329D0D1514F2A4B@EXCHANGE-01.kb.dk> <31B294F2-8924-495B-9700-C3F61D427B62@edirom.de> Message-ID: Hi Axel, I still have to read through the Webern messages.
Another idea is that MEIers pass around this job search for Sibelius to sympathetic software developers... -=+Craig =================================== Subject: sibelius developer gig Date: Mon, 19 Nov 2012 16:01:46 +0000 From: Bruce Bennett Here's a job description. If you know anyone who would be interested, have them send me their resume. Thanks! Principal Software Engineer, Req #6485BR, Daly City, CA Avid is looking for a qualified applicant who will work on the world's most popular music notation software as our Principal Software Developer. Do you know the perfect candidate for this position or do you have the skills to apply? Requirements for this role include having strong C++ object oriented programming with more than 5 years of experience; practice reading music notation; experience with MIDI programming and real-time system design; familiarity with VST Instruments; DAW user experience; and the ability to work independently and lead a geographically distributed team. Best, Bruce Bruce Bennett Senior Technical Writer | R&D-Technical Publications Avid 2001 Junipero Serra Blvd Daly City, CA 94014 United States bruce.bennett at avid.com (650) 731-6492 DC office (650) 557-9003 home office (504) 220-1157 cell We're Avid.
Learn more at www.avid.com From pdr4h at eservices.virginia.edu Tue Nov 20 15:24:19 2012 From: pdr4h at eservices.virginia.edu (Roland, Perry (pdr4h)) Date: Tue, 20 Nov 2012 14:24:19 +0000 Subject: [MEI-L] FRBR in MEI In-Reply-To: <0B6F63F59F405E4C902DFE2C2329D0D1514F2CAC@EXCHANGE-01.kb.dk> References: <0B6F63F59F405E4C902DFE2C2329D0D1514EF524@EXCHANGE-01.kb.dk> <45F7170D-0C66-4C59-9285-E328FDC45556@edirom.de> <0B6F63F59F405E4C902DFE2C2329D0D1514EF677@EXCHANGE-01.kb.dk> <50A3EA0D.3010107@edirom.de> <0B6F63F59F405E4C902DFE2C2329D0D1514F0523@EXCHANGE-01.kb.dk> <50A66B12.3060105@edirom.de> <14810_1353088028_50A67C1B_14810_70_1_4DF0F169-AEB5-4F46-8220-828B90F928FF@edirom.de>, <8DD7CA10-D542-48B7-BF5F-1BC177A7CD43@mail.mcgill.ca> <50AB61F1.8090600@edirom.de>, <0B6F63F59F405E4C902DFE2C2329D0D1514F2CAC@EXCHANGE-01.kb.dk> Message-ID: Axel, I think you've stated succinctly what we've been dancing around for a while now -- a performance without bibliographical manifestation is outside the scope of FRBR. So, while a 1776 performance of an opera could be thought of as an expression (and marked as such in MEI), because no recording was made (and therefore there were no manifestations or items stemming from that), it seems the use of FRBR doesn't gain anything. Better to describe these performances in history/eventList. -- p. __________________________ Perry Roland Music Library University of Virginia P. O. Box 400175 Charlottesville, VA 22904 434-982-2702 (w) pdr4h (at) virginia (dot) edu ________________________________________ From: mei-l-bounces at lists.uni-paderborn.de [mei-l-bounces at lists.uni-paderborn.de] on behalf of Axel Teich Geertinger [atge at kb.dk] Sent: Tuesday, November 20, 2012 6:33 AM To: Music Encoding Initiative Subject: Re: [MEI-L] FRBR in MEI Hi Benni Perhaps we should remember that FRBR is intended for _bibliographic records_, not for descriptions of a work's reception history.
Thus, the premise for using FRBR is that in the end we want to describe bibliographic items. Since a performance itself isn't a bibliographic item, perhaps it does not have to fit in? Only if it results in such an item (via manifestation), i.e. a recording, does it become truly relevant to use FRBR. The performance in that case is not the primary thing we want to describe, it is just the context that resulted in the recording manifestation. Just another 2 cents, Axel -----Original message----- From: mei-l-bounces at lists.uni-paderborn.de [mailto:mei-l-bounces at lists.uni-paderborn.de] On behalf of Benjamin Wolff Bohl Sent: 20 November 2012 11:57 To: Music Encoding Initiative Subject: Re: [MEI-L] FRBR in MEI Hi Perry, thanks for some clarifying approaches further statements inline On 16.11.2012 at 22:25, Roland, Perry (pdr4h) wrote: > Random comments on the discussion so far. Sorry if this gets long. > > When contemplating performances and recordings, it seems to me that people often have trouble reaching agreement on the term "sound recording". Andrew's slides label the *expression* as "the sound recording", but others might label the *manifestation* as "the sound recording". You might say the expression is the "act of making a recording" and the manifestation is the "recording that results". > > To disentangle the different uses of the term "recording", it helps me to remember that an expression is not a physical entity, but a manifestation is. Therefore, I prefer to think of the expression as "the performance" (the non-physical thing being recorded) and the manifestation as "the recording" (the physical thing). This fits with the way libraries have traditionally cataloged recordings, i.e., CDs, LPs, cassettes, wax cylinders, ... I completely agree on that, being the reason why I used both the terms recording and record with record being on the manifestation/item-level and recording being rather on the expression-manifestation-level. Why so?
Recording has to be subordinate to work after all and a recording is not just a simple physical manifestation but a multistep process involving conceptual and creative work done by producers and engineers. So talking about a recording as only being a manifestation becomes problematic as it is an intellectual process resulting in a physical manifestation. That's the way I was looking at it (owed to my audio engineering past) and of course it can be seen differently. > In any case, the FRBR document, which Axel cites, says a *performance is an expression* and a *recording is a manifestation*. This is perfectly plausible when disregarding the intellectual endeavour entangled with the "act of making a recording", as mentioned before. > The usual "waterfall" kind of diagram is explained by saying the term > "work" applies to conceptual content; "expression" applies to the > languages/media/versions in which the work occurs; "manifestation" > applies to the formats in which each expression is available; and > "item" applies to individual copies of a single format. (Here "media" > means "medium of expression", say written language as opposed to film, > and "format" means physical format, as in printed book as opposed to > audio CD.) > > Taking another tack, though, often it is easier for me to think of FRBR "from the bottom up", rather than start from the work and proceed "down" the waterfall diagram. Using the recording example, the item is the exemplar I hold in my hand, the manifestation is all of the copies of that exemplar (or better yet, all the information shared by all those copies), the expression is the version of the work that is represented by the manifestation (e.g., Jo's nose flute + harpsichord version and the orchestral version are different expressions), and the work is an intellectual creation/idea (e.g., Bohl's op. 1, the one that goes da, da, da, daaaaaa, reeep! reeep! reeep!).
> > Using this "bottom up" thinking helps avoid mental contortions regarding what the work is -- the work is simply the thing at the end of this mental process. From there on, there are work-to-work relationships, so we don't have to think about whether "Romeo and Juliet", "West Side Story", and every other story about star-crossed lovers are expressions of an ur-work with its own manifestations and so on, which would lead us to a different "waterfall" conclusion each time we discover a new work or expression. The idea of approaching the FRBR model "from the bottom" is great. And to be honest, that was something I did when drawing my model, especially concerning the record and recording portion of it. I started out from work on the top right and from the individual record bottom right and tried to fill in as many steps as possible, always wondering whether it be physical or conceptual. Actually I had the recording in between expression and manifestation in the first place, as I had the audio tape or digital audio in between manifestation and item. The parallel processes from a work to an item (regardless of whichever form this may have) are owed to perspective and goal. When talking about graphical sources I completely agree with the idea of a certain instrumentation version or the like being an expression, a print run being a manifestation, an individual copy of which would be an item. > Instead of creating separate expression-level markup for each performance, Axel treats some expressions (performances) as events related to another expression of a work (the orchestral version vs. the nose flute version). This is fine. As Johannes already pointed out, separate elements for the performances can be generated from the markup, if necessary. Conversely, there's nothing wrong with creating separate elements for each performance and relating them to other appropriate expressions and/or relating them directly to the work.
If necessary, given accurate place and date information, the kind of markup could be created from the separate elements. So, six of one ... I can agree here, too. I only wondered if the sound wave resulting from the performance was the physical item (specific performers on a specific date), then consequently a series of performances by conductor and orchestra would make up for the manifestation, the expression then would be the concept that the conductor developed studying his "source material" and making up the way he wanted the composition to be realized ergo his "personal version" of the piece, somewhat of a personal edition. The performance material of course being an item of a certain print run (manifestation) of a certain edition (expression), having strong relationships to all of the above. > Johannes said "If there is a manuscript of the nose flute version, the information about it would be spread between the manifestation (source) and the item." Well, maybe. But, I think in this case it would be fine to describe the manifestation and the item in a single place (within MEI) because there's only one manifestation and one (and only one) item associated with that manifestation. This is the traditional way manuscripts have been described, pre-FRBR. Practically speaking, the manifestation and the item are the same thing. But, as soon as you want to say something special about a particular *part* (as in "chunk", not performer part) of the manifestation, you have to split these up again, for example, when one section of a manuscript is located in Prague and another is in Manitoba. This was the idea behind me marking/stretching the autograph from expression to item. /benjamin > This is not the case with printed material where there is *always* more than one item created from a manifestation, but it is still traditional to describe the manifestation and item as though they are the same thing.
For example, it is common to follow the manifestation's author, title, place of publication, etc. with information about the location where one can obtain an exemplar of the manifestation, say, UVa Library M 296.C57 1987. > > Johannes also said "So if you have two more measures in a source, this > source establishes a new expression in FRBR." Again, maybe. The FRBR > report (1997, amended and corrected through 2009) says > > "Variations within substantially the same expression (e.g., slight variations that can be noticed between two states of the same edition in the case of hand press production) would normally be ignored or, in specialized catalogues, be reflected as a note within the bibliographic record for the manifestation. However, for some applications of the model (e.g., early texts of rare manuscripts), each variation may be viewed as a different expression." > > The issue is in the determination of whether 2 things are "substantially the same expression". As with many things, this depends on the person making the determination; there is no single correct answer. We intend that MEI will provide the tools for accurate description using either approach. > > Just my 2 cents, > > -- > p. > > __________________________ > Perry Roland > Music Library > University of Virginia > P. O. Box 400175 > Charlottesville, VA 22904 > 434-982-2702 (w) > pdr4h (at) virginia (dot) edu > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l _______________________________________________ mei-l mailing list mei-l at lists.uni-paderborn.de https://lists.uni-paderborn.de/mailman/listinfo/mei-l _______________________________________________ mei-l mailing list mei-l at lists.uni-paderborn.de https://lists.uni-paderborn.de/mailman/listinfo/mei-l From donbyrd at indiana.edu Tue Nov 20 17:10:48 2012 From: donbyrd at indiana.edu (Byrd, Donald A.)
Date: Tue, 20 Nov 2012 11:10:48 -0500 Subject: [MEI-L] FRBR in MEI: performances vs. recordings In-Reply-To: <8DD7CA10-D542-48B7-BF5F-1BC177A7CD43@mail.mcgill.ca> References: <0B6F63F59F405E4C902DFE2C2329D0D1514EF524@EXCHANGE-01.kb.dk> <45F7170D-0C66-4C59-9285-E328FDC45556@edirom.de> <0B6F63F59F405E4C902DFE2C2329D0D1514EF677@EXCHANGE-01.kb.dk> <50A3EA0D.3010107@edirom.de> <0B6F63F59F405E4C902DFE2C2329D0D1514F0523@EXCHANGE-01.kb.dk> <50A66B12.3060105@edirom.de> <14810_1353088028_50A67C1B_14810_70_1_4DF0F169-AEB5-4F46-8220-828B90F928FF@edirom.de> <8DD7CA10-D542-48B7-BF5F-1BC177A7CD43@mail.mcgill.ca> Message-ID: <20121120111048.dp9ncyc2o0og4gkg@webmail.iu.edu> One more thing about the relationship between performances and recordings that might clarify why FRBR considers the former an expression and the latter a manifestation. The difference between the two can go way beyond merely re-releasing a recording in digital form that was originally on vinyl. That's especially true outside of the classical music world. For example, I believe the Grateful Dead were famous for not discouraging people from making their own recordings of their concerts, and I think there are some of their performances for which dozens, maybe hundreds, of recordings exist -- all done from different locations with different equipment, and probably many of them containing just parts of the concert! And some early jazz sessions were recorded with two microphones in different positions, which were originally thought of just as two slightly different mono recordings, but which made after-the-fact stereo possible. --Don >>>> ---- SNIP ---- >>>> From: mei-l-bounces at lists.uni-paderborn.de >>>> [mailto:mei-l-bounces at lists.uni-paderborn.de] On behalf of Benjamin >>>> Wolff Bohl >>>> Sent: 14 November 2012 19:59 >>>> To: Music Encoding Initiative >>>> Subject: Re: [MEI-L] FRBR in MEI >>>> >>>> Hi Axel, >>>> thanks for this huge insight into the FRBR-customization.
Having >>>> considered some recording metadata in the Freischütz project I'll >>>> try to add my thoughts on this topic. >>>> Sorry for adding late to this discussion, I had prepared this mail >>>> this morning in the train, then forgot to send it from work... >>>> See my comments inline >>>> >>>> On 13.11.2012 at 13:21, Axel Teich Geertinger wrote: >>>> Hi Johannes, >>>> >>>> Thanks for your comments. Good and relevant as always. I think I >>>> >>>> ---- SNIP ---- >>>> >>>> Let's say you have a copy of a specific recording of a work. >>>> Interpreting your record as an expression of the work is fine. >>>> Interpreting the recording session as an expression of the work can be >>>> rather problematic. >>>> >>>> Well, MY record would be an item, i.e. a specific copy of the >>>> manifestation (the release). Right? >>>> >>>> The record you own is a manifestation of the recording >>>> (expression/?) which on the other hand will be the >>>> trans-medialization of specific performance material, having been >>>> worked with and modified by conductor and musicians in order to >>>> resemble the performance (manifestation) which again is based on a >>>> certain printed edition of the work (expression) possibly taking >>>> into account differences from other sources. >>>> >>>> Again, the one I actually own is an *item* of the manifestation >>>> (just to make sure we agree on that...) >>>> >>>> Would one say that this makes the record inferior and nested deep >>>> inside the work-expression-manifestation of the written sources or >>>> rather a sibling expression-manifestation tree of the same work >>>> with strong relations to each other? >>>> >>>> I would say that the recording manifestation and the written >>>> manifestation used for the recording would have strong >>>> manifestation-to-manifestation relations, but that they would not >>>> *necessarily* have the same parent expression.
The musicians could >>>> have changed something, made cuts or other things not present in >>>> the performance material they played from. So we could also have >>>> sibling expressions here. >>>> >>>> I think it's the very right thing you did in moving the >>>> performance list to . >>>> >>>> Some further complications might arise from the following two thoughts: >>>> (a) The record you own may moreover be (and this is quite popular in >>>> recent years, especially with 'classical' music) the re-release of >>>> an older record (i.e. another manifestation of the same recording) >>>> but modified in order to fit the new medium, remastered and >>>> digitized and potentially even remixed (Vinyls have certain >>>> physical implications on the nature of the sound, whilst CDs or >>>> digital audio have different ones). >>>> >>>> No problem, as I see it. Like in the FRBR example above, that >>>> would be a new manifestation of the recording expression. >>>> >>>> (b) The record doesn't come alone, it has a booklet, which could be >>>> referenced from ? This booklet will incorporate texts by >>>> different persons and again if re-released might incorporate the >>>> old booklet and add additional material. >>>> >>>> Good question. This would be a bundle of relations pointing in all >>>> directions, perhaps. I have no good answer to that right away... >>>> >>>> /axel >>>> >>>> >>>> >>>> See you later, >>>> Benjamin >>>> >>>> >>>>>> 5) Finally, an issue related to the FRBR discussion, though >>>>>> not directly a >>>>> consequence of it: MEI 2012 allows multiple elements >>>>> within . I >>>>> can't think of any situation, however, in which it may be >>>>> desirable to describe more >>>>> than one work in a single file.
On the contrary, it could easily >>>>> cause a lot of >>>>> confusion, so I would actually suggest allowing only one >>>>> element; in other >>>>> words: either skip and have 1 optional in >>>>> , or keep >>>>> , and change its content model to be the one used by >>>>> now. >>>>> >>>>> Again, I think that this perspective is biased from your >>>>> application, where it makes >>>>> perfect sense. Consider you're working on Wagner's Ring. You >>>>> might want to say >>>>> something about all these works in just one file. All I want to >>>>> say is that this is a >>>>> modeling question, which is clearly project-specific. It seems >>>>> perfectly reasonable >>>>> to restrict merMEId to MEI instances with only one work, but I >>>>> wouldn't restrict MEI >>>>> to one work per file. This may result in preprocessing files >>>>> before operating on >>>>> them with merMEId, but we have similar situations for many other >>>>> aspects for MEI, >>>>> so this isn't bad per se. >>>> >>>> In the Ring case, we are talking about the individual dramas as >>>> components of a larger work. This would probably be one of the >>>> situations where would come in handy as a child of >>>> (which the customization allows already). I would be >>>> reluctant, however, to include them as four elements >>>> directly under . To clarify what that would mean, it >>>> would be necessary to specify work-to-work relations. Furthermore, >>>> there wouldn't be any place to put metadata concerning *all* four >>>> works, since we would be at top level already.
>>>> >>>> Best, >>>> Axel >>>> >>>> >>>> _______________________________________________ >>>> mei-l mailing list >>>> mei-l at lists.uni-paderborn.de >>>> https://lists.uni-paderborn.de/mailman/listinfo/mei-l >>>> >>>> >>>> >>>> _______________________________________________ >>>> mei-l mailing list >>>> >>>> mei-l at lists.uni-paderborn.de >>>> https://lists.uni-paderborn.de/mailman/listinfo/mei-l >>> >>> _______________________________________________ >>> mei-l mailing list >>> mei-l at lists.uni-paderborn.de >>> https://lists.uni-paderborn.de/mailman/listinfo/mei-l >> >> >> _______________________________________________ >> mei-l mailing list >> mei-l at lists.uni-paderborn.de >> https://lists.uni-paderborn.de/mailman/listinfo/mei-l > > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l > -- Donald Byrd Woodrow Wilson Indiana Teaching Fellow Adjunct Associate Professor of Informatics & Music Indiana University, Bloomington From andrew.hankinson at mail.mcgill.ca Tue Nov 20 17:34:21 2012 From: andrew.hankinson at mail.mcgill.ca (Andrew Hankinson) Date: Tue, 20 Nov 2012 11:34:21 -0500 Subject: [MEI-L] Sibelius and MEI In-Reply-To: <3409_1353419109_50AB8965_3409_238_1_CAPcjuFcX5sjxep6fVvG-PKdXjt_92cC_7C2_kc3aAcrPG_H0Hg@mail.gmail.com> References: <0B6F63F59F405E4C902DFE2C2329D0D1514F2A4B@EXCHANGE-01.kb.dk> <31B294F2-8924-495B-9700-C3F61D427B62@edirom.de> <3409_1353419109_50AB8965_3409_238_1_CAPcjuFcX5sjxep6fVvG-PKdXjt_92cC_7C2_kc3aAcrPG_H0Hg@mail.gmail.com> Message-ID: <6107EC41-A643-47F1-87A7-9B72B05FB249@mail.mcgill.ca> This is very strange, especially since we received this e-mail to our grad students' list just yesterday. ============ Here's a job description. If you know anyone who would be interested, have them send me their resume. Thanks! 
Principal Software Engineer, Req #6485BR, Daly City, CA Avid is looking for a qualified applicant who will work on the world's most popular music notation software as our Principal Software Developer. Do you know the perfect candidate for this position or do you have the skills to apply? Requirements for this role include having strong C++ object oriented programming with more than 5 years of experience; practice reading music notation; experience with MIDI programming and real-time system design; familiarity with VST Instruments; DAW user experience; and the ability to work independently and lead a geographically distributed team. Best, Bruce Bruce Bennett Senior Technical Writer | R&D-Technical Publications Avid 2001 Junipero Serra Blvd Daly City, CA 94014 United States bruce.bennett at avid.com (650) 731-6492 DC office (650) 557-9003 home office (504) 220-1157 cell We're Avid. Learn more at www.avid.com =============== On 2012-11-20, at 8:44 AM, Craig Sapp wrote: > Hi Axel, > > I still have to read through the Webern messages. Another idea is > that MEIers pass around this job search for Sibelius to sympathetic > software developers... > > -=+Craig > > =================================== > > Subject: sibelius developer gig > Date: Mon, 19 Nov 2012 16:01:46 +0000 > From: Bruce Bennett > > Here's a job description. If you know anyone who would be interested, > have them send me their resume. Thanks! > > Principal Software Engineer, > Req #6485BR, Daly City, CA > > Avid is looking for a qualified applicant who will work on the world's > most popular music notation software as our Principal Software > Developer. Do you know the perfect candidate for this position or do > you have the skills to apply?
Requirements for this role include > having strong C++ object oriented programming with more than 5 years > of experience; practice reading music notation; experience with MIDI > programming and real-time system design; familiarity with VST > Instruments; DAW user experience; and the ability to work > independently and lead a geographically distributed team. > > Best, > > Bruce > > Bruce Bennett > Senior Technical Writer | R&D-Technical Publications > > Avid > 2001 Junipero Serra Blvd > Daly City, CA 94014 > United States > bruce.bennett at avid.com > > (650) 731-6492 DC office > (650) 557-9003 home office > (504) 220-1157 cell > > We're Avid. Learn more at www.avid.com > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l From andrew.hankinson at mail.mcgill.ca Tue Nov 20 18:12:23 2012 From: andrew.hankinson at mail.mcgill.ca (Andrew Hankinson) Date: Tue, 20 Nov 2012 12:12:23 -0500 Subject: [MEI-L] Sibelius and MEI In-Reply-To: <9731_1353429273_50ABB118_9731_147_2_6107EC41-A643-47F1-87A7-9B72B05FB249@mail.mcgill.ca> References: <0B6F63F59F405E4C902DFE2C2329D0D1514F2A4B@EXCHANGE-01.kb.dk> <31B294F2-8924-495B-9700-C3F61D427B62@edirom.de> <3409_1353419109_50AB8965_3409_238_1_CAPcjuFcX5sjxep6fVvG-PKdXjt_92cC_7C2_kc3aAcrPG_H0Hg@mail.gmail.com> <9731_1353429273_50ABB118_9731_147_2_6107EC41-A643-47F1-87A7-9B72B05FB249@mail.mcgill.ca> Message-ID: <0370E54C-0216-4FDD-B595-5F9BE29664B2@mail.mcgill.ca> Hi all, TL;DR version: I've written a Sibelius plugin that will read and write MEI. If anyone's interested in working with me on this, let me know. In June and July I was a visiting researcher at UVa, working with Perry on software support for MEI. While I was there I came across an open-source MusicXML Sibelius plug-in which, although defunct, inspired me to write a plug-in to read and write MEI from Sibelius.
Given the current thread subject, I thought it would be a good time to make it known more widely. The source code is here: https://github.com/DDMAL/sibmei The structure is: -- libmei.plg <- a ManuScript version of libmei. Also includes an XML importer/exporter. -- meigui.plg <- GUI code for dialogs, etc. -- meisib.plg <- Code for reading MEI into Sibelius -- sibmei.plg <- Code for writing MEI from Sibelius. -- sibmei_test.plg <- Unit tests for sibmei It is highly, highly unfinished code. The writing of MEI is mostly complete, but there are a few missing features (like lyrics) that I have not had time to add. The reading of MEI is almost non-existent. In a way the hard part is done. Right now it will successfully read an MEI (XML) file into a Sibelius data structure, but any actual layout of the musical content is waiting to be finished. Note, too, that this will probably only work on Sibelius 7. You'll also notice that the GUI has quite a few place-holders for features that I didn't have time to implement. I envisioned this as a way of producing digital editions based on certain editorial practices in Sibelius. For example, the inclusion of variant readings, or supplied symbols, etc. None of this works yet, but I thought it would be a pretty valuable tool. As I've got quite a bit on my plate right now, I don't really have time to work on this. If, however, it's valuable to someone in the community please feel free to hack away at it. All I ask is that you contribute your code back.
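[Editorial aside: to give a rough feel for what "reading an MEI (XML) file into a data structure" involves before you dive into the ManuScript source, here is a hypothetical sketch in Python. This is not code from the plug-in; the MEI snippet and the helper name `read_notes` are invented purely for illustration.]

```python
# Illustrative only: a Python analogue of the "parse MEI into a flat
# data structure" step. The real plug-in does the equivalent in
# ManuScript against Sibelius's object model.
import xml.etree.ElementTree as ET

MEI_NS = "http://www.music-encoding.org/ns/mei"

# A minimal, invented MEI fragment: one measure, one layer, two notes.
mei_snippet = """
<mei xmlns="http://www.music-encoding.org/ns/mei">
  <music><body><mdiv><score><section>
    <measure n="1">
      <staff n="1"><layer n="1">
        <note pname="c" oct="4" dur="4"/>
        <note pname="e" oct="4" dur="4"/>
      </layer></staff>
    </measure>
  </section></score></mdiv></body></music>
</mei>
"""

def read_notes(xml_text):
    """Collect (pitch name, octave, duration) triples from an MEI string."""
    root = ET.fromstring(xml_text)
    notes = []
    # Element tags are namespace-qualified, so iterate with the MEI namespace.
    for note in root.iter(f"{{{MEI_NS}}}note"):
        notes.append((note.get("pname"), int(note.get("oct")), note.get("dur")))
    return notes

print(read_notes(mei_snippet))
# prints [('c', 4, '4'), ('e', 4, '4')]
```

The sketch only covers the parsing half; laying the resulting events out as actual Sibelius notation is the part Andrew describes as "waiting to be finished".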
Cheers, -Andrew From pdr4h at eservices.virginia.edu Tue Nov 20 22:45:55 2012 From: pdr4h at eservices.virginia.edu (Roland, Perry (pdr4h)) Date: Tue, 20 Nov 2012 21:45:55 +0000 Subject: [MEI-L] bibliographic customization (FRBR) Message-ID: Hello, all, Attached to this message you'll find a .zip file containing 3 items: - mei-Bibl ODD customization file - mei-Bibl customized RNG file - tei_odds schema for validating the ODD file Even though up to this point we've been thinking and talking about this as a "FRBR module", I've called this customization "mei-Bibl" because it is more than just the addition of FRBR support -- it also affects other bibliographic parts of MEI, such as the bibliographic references, the physical description of sources, and bibliographic description of the MEI file itself. However, if, as expected, this customization is rolled into "out-of-the-box" MEI, a FRBR module will be created. When the MEI.frbr module is *not* turned on, works and sources can be described just as they were before; that is, with bibliographic description mostly taking place within . Turning on the FRBR module, however, makes it possible to separate work-, expression-, manifestation/source-, and item-level description into discrete chunks and to relate those chunks of metadata to each other. The mei-Bibl ODD file can also be found in the MEI Incubator at http://code.google.com/p/mei-incubator/. Suggestions for improvement are welcome. Best wishes, -- p. __________________________ Perry Roland Music Library University of Virginia P. O. Box 400175 Charlottesville, VA 22904 434-982-2702 (w) pdr4h (at) virginia (dot) edu -------------- next part -------------- A non-text attachment was scrubbed... 
Name: mei-Bibl_customization.zip Type: application/x-zip-compressed Size: 136926 bytes Desc: mei-Bibl_customization.zip URL: From bohl at edirom.de Wed Nov 21 09:24:22 2012 From: bohl at edirom.de (Benjamin Wolff Bohl) Date: Wed, 21 Nov 2012 09:24:22 +0100 Subject: [MEI-L] FRBR in MEI In-Reply-To: <34BE6CF5-15FA-4FDB-AA3A-AFA6395503B4@edirom.de> References: <0B6F63F59F405E4C902DFE2C2329D0D1514EF524@EXCHANGE-01.kb.dk> <45F7170D-0C66-4C59-9285-E328FDC45556@edirom.de> <0B6F63F59F405E4C902DFE2C2329D0D1514EF677@EXCHANGE-01.kb.dk> <50A3EA0D.3010107@edirom.de> <0B6F63F59F405E4C902DFE2C2329D0D1514F0523@EXCHANGE-01.kb.dk> <50A66B12.3060105@edirom.de> <14810_1353088028_50A67C1B_14810_70_1_4DF0F169-AEB5-4F46-8220-828B90F928FF@edirom.de>, <8DD7CA10-D542-48B7-BF5F-1BC177A7CD43@mail.mcgill.ca> <50AB61F1.8090600@edirom.de> <0B6F63F59F405E4C902DFE2C2329D0D1514F2CAC@EXCHANGE-01.kb.dk> <34BE6CF5-15FA-4FDB-AA3A-AFA6395503B4@edirom.de> Message-ID: <50AC8FB6.5020005@edirom.de> Hi there, first, thanks to Axel for sorting out that FRBR is for bibliographic items, and that performances of which we have nothing more than the knowledge that they happened are thus out of scope. I've somehow been thinking too much towards something like a music ontology. Moreover, the idea of having a hierarchy in FRBR may have misled me, proving Peter's earlier-mentioned concerns about this true. Nevertheless, we still have the recordings to deal with! So Johannes, let's continue to disagree ;-) > Hi Benni, > > I hope I got one of your last mails wrong (in this regard), but just in case I didn't: By no means did I want to keep you from commenting on this (or other) thread(s), as your comments are extremely valuable and helpful -- even if I sometimes disagree. If I offended you somehow, that wasn't my intention, and I want to apologize for it. > > That being said, I may continue to disagree ;-) Actually, I don't think we're that far away.
The one thing you seem to get wrong, though, is the process from expression to manifestation, which is by no means a trivial, merely technological step without artistic contribution. When you consider the efforts necessary to engrave a piece of music, or the work on the preparation of the WeGA scores we see every day, you will agree that even in the graphical domain, this step is indeed highly artistic and involves a whole bunch of people with different expertise. I agree that the workflows for making recordings are different, but both things seem to be comparable from this perspective, don't you think? Nor do I think that we are too far away from each other now. And I never wanted to say that the transition from expression to manifestation was a merely technical one, but maybe I should have explained a little more what my initial graphic was all about with the recordings, as by no means would it involve a mere technical step. From the way the recording engineer sets up his microphones and what he does at his audio desk, through quite a number of steps involving editing (cutting, rather technical but nevertheless with artistic implications), mixing (very artistic) and mastering (as artistic as technical), all of which result in archive material, quite a lot of intellectual/artistic work is involved in a record(ing). I'll have a try on this: WORK - examination -> edition (e1) ------------- engraving -> print run (m1) -printing -> print copy (i1) If you have the above, and try to get a parallel idea on the way to the copy of a record on your shelf (i2): (1) What will be the expression? (2) What will be the manifestation? (3) Is one such e-m-i stream sufficient? WORK - examination -> artist's interpretation (e2) - -> ? -> record copy (i2) or WORK - recording -> ?
- mastering -> label's press run -pressing (m2) -> record copy (i2) Maybe let's try to fill this with one of Don's "Grateful" examples: The song "Truckin" was released Nov 1 1970 on the album "American Beauty" and as a single. The album was recorded in AUG- SEP 1970, although it might be that the single version was recorded in SEP, or maybe this specific song was recorded in SEP. Truckin (w1) --> 1970-08 to 1970-09 Session Tapes (e2) --> 1970-11-1 Warner bros. release of "American Beauty" album (m2) --> record copy (i2) Truckin (w1) --> 1970-08 to 1970-09 Session Tapes (e3) --> 1970-11-1 Warner bros. release of "Truckin" single (m3) --> record copy (i3) But the single version and album version differ quite a lot, album length 5:09 and single 3:13, so we should specify a little more. Truckin (w1) --> 1970-08 to 1970-09 Session Tape album-version (e2) --> 1970-11-1 Warner bros. release of "American Beauty" album (m2) --> record copy (i2) Truckin (w1) --> 1970-08 to 1970-09 Session Tape single-version (e3) --> 1970-11-1 Warner bros. release of "Truckin" single (m3) --> record copy (i3) A little complication: the single version was not recorded but taken from the album version and edited down from 5 to 3 minutes; nevertheless it is an expression of its own, but it points us towards some items that might reside on an archive's shelf, namely: - session tapes : the tapes from the recording session (potentially multi-track) - edit tapes : the tapes where all the nice parts from the session tapes were cut together to make up the material for the work (potentially multi-track) - mix tapes : a stereo mix version including lots of additional features like for example delay effects etc. resembling the final version - master tapes : an acoustically slightly reshaped version of the mix tape version in order to fit the technical limitations of a certain target medium like vinyl and some intellectual work to smoothen the mix (e.g.
making all songs on a record sound similar) If they are relevant for my MEI file, they should go into , but where should these go in FRBR? Maybe they should all be separate expressions with strong relations to each other? So actually the only one in the same direct "FRBR hierarchy" would be the master tape? Truckin (w1) --> 1970-09-XX "Truckin" Master Tape (e3) --> 1970-11-1 Warner bros. release of "Truckin" single (m3) --> record copy (i3) If I don't know about all the tapes I might just put: Truckin (w1) --> 1970-08 to 1970-09 recordings --> 1970-11-1 Warner bros. release of "Truckin" single (m3) --> record copy (i3) or: Truckin (w1) --> 1970 Version (e3) --> 1970-11-1 Warner bros. release of "Truckin" single (m3) --> record copy (i3) What do you think, is there still a problem? Is there anything interesting for you in the above? > Besides that, I totally agree that FRBR is not extremely prescriptive regarding how to model certain situations, but after thinking about it for some time, I (now) think that this is actually a benefit, as it doesn't enforce a specific setup, but allows projects to implement it as they see fit. So in the end, I'm not against your approach in general, I'm just against enforcing your approach. The current implementation of FRBR in MEI tries to keep this openness of FRBR, which I regard as a good thing. In the end, all of us could be wrong ;-) I never wanted to enforce anything, only to point out possibilities to be considered when implementing FRBR, or to test the current implementation against. And I think you're absolutely right that such openness could be a benefit, as we will certainly miss some possible complicated situations. benjamin > > Best, > Johannes > > Am 20.11.2012 um 12:33 schrieb Axel Teich Geertinger: > >> Hi Benni >> >> Perhaps we should remember that FRBR is intended for _bibliographic records_, not for descriptions of a work's reception history.
Thus, the premise for using FRBR is that in the end we want to describe bibliographic items. Since a performance itself isn't a bibliographic item, perhaps it does not have to fit in? Only if it results in such an item (via manifestation), i.e. a recording, does it become truly relevant to use FRBR. The performance in that case is not the primary thing we want to describe, it is just the context that resulted in the recording manifestation. >> >> Just another 2 cents, >> Axel >> >> -----Oprindelig meddelelse----- >> Fra: mei-l-bounces at lists.uni-paderborn.de [mailto:mei-l-bounces at lists.uni-paderborn.de] På vegne af Benjamin Wolff Bohl >> Sendt: 20. november 2012 11:57 >> Til: Music Encoding Initiative >> Emne: Re: [MEI-L] FRBR in MEI >> >> Hi Perry, >> thanks for some clarifying approaches >> further statements inline >> >> Am 16.11.2012 22:25, schrieb Roland, Perry (pdr4h): >>> Random comments on the discussion so far. Sorry if this gets long. >>> >>> When contemplating performances and recordings, it seems to me that people often have trouble reaching agreement on the term "sound recording". Andrew's slides label the *expression* as "the sound recording", but others might label the *manifestation* as "the sound recording". You might say the expression is the "act of making a recording" and the manifestation is the "recording that results". >>> >>> To disentangle the different uses of the term "recording", it helps me to remember that an expression is not a physical entity, but a manifestation is. Therefore, I prefer to think of the expression as "the performance" (the non-physical thing being recorded) and the manifestation as "the recording" (the physical thing). This fits with the way libraries have traditionally cataloged recordings, i.e., CDs, LPs, cassettes, wax cylinders, ...
>> I completely agree on that, being the reason why I used both the terms recording and record, with record being on the manifestation/item-level and recording being rather on the expression-manifestation-level. Why so? Recording has to be subordinate to work after all, and a recording is not just a simple physical manifestation but a multistep process involving conceptual and creative work done by producers and engineers. >> So talking about a recording as only being a manifestation becomes problematic, as it is an intellectual process resulting in a physical manifestation. That's the way I was looking at it (owed to my audio engineering past) and of course it can be seen differently. >>> In any case, the FRBR document, which Axel cites, says a *performance is an expression* and a *recording is a manifestation*. >> This is perfectly plausible when disregarding the intellectual endeavour entangled with the "act of making a recording", as mentioned before. >>> The usual "waterfall" kind of diagram is explained by saying the term >>> "work" applies to conceptual content; "expression" applies to the >>> languages/media/versions in which the work occurs; "manifestation" >>> applies to the formats in which each expression is available; and >>> "item" applies to individual copies of a single format. (Here "media" >>> means "medium of expression", say written language as opposed to film, >>> and "format" means physical format, as in printed book as opposed to >>> audio CD.) >>> >>> Taking another tack, though, often it is easier for me to think of FRBR "from the bottom up", rather than start from the work and proceed "down" the waterfall diagram.
Using the recording example, the item is the exemplar I hold in my hand, the manifestation is all of the copies of that exemplar (or better yet, all the information shared by all those copies), the expression is the version of the work that is represented by the manifestation (e.g., Jo's nose flute + harpsichord version and the orchestral version are different expressions), and the work is an intellectual creation/idea (e.g., Bohl's op. 1, the one that goes da, da, da, daaaaaa, reeep! reeep! reeep!). >>> >>> Using this "bottom up" thinking helps avoid mental contortions regarding what the work is -- the work is simply the thing at the end of this mental process. From there on, there are work-to-work relationships, so we don't have to think about whether "Romeo and Juliet", "West Side Story", and every other story about star-crossed lovers are expressions of an ur-work with its own manifestations and so on, which lead us to a different "waterfall" conclusion each time we discover a new work or expression. >> The idea of approaching the FRBR model "from the bottom" is great. And to be honest, it was something I did when drawing my model, especially concerning the record and recording portion of it. I started out from work on the top right and from the individual record bottom right and tried to fill in as many steps as possible, always wondering whether it be physical or conceptual. Actually I had the recording in between expression and manifestation in the first place, as I had the audio tape or digital audio in between manifestation and item. >> The parallel processes from a work to an item (regardless of whichever form this may have) are owed to perspective and goal. When talking about graphical sources I completely agree with the idea of a certain instrumentation version or the like being an expression, a print run being a manifestation, an individual copy of which would be an item.
>>> Instead of creating separate expression-level markup for each performance, Axel treats some expressions (performances) as events related to another expression of a work (the orchestral version vs. the nose flute version). This is fine. As Johannes already pointed out, separate elements for the performances can be generated from the markup, if necessary. Conversely, there's nothing wrong with creating separate elements for each performance and relating them to other appropriate expressions and/or relating them directly to the work. If necessary, given accurate place and date information, the kind of markup could be created from the separate elements. So, six of one ... >> I can agree here, too. I only wondered: if the sound wave resulting from the performance were the physical item (specific performers on a specific date), then consequently a series of performances by conductor and orchestra would make up the manifestation; the expression would then be the concept that the conductor developed by studying his "source material" and making up the way he wanted the composition to be realized, ergo his "personal version" of the piece, somewhat of a personal edition. >> The performance material would of course be an item of a certain print run >> (manifestation) of a certain edition (expression), having strong relationships to all of the above. >>> Johannes said "If there is a manuscript of the nose flute version, the information about it would be spread between the manifestation (source) and the item." Well, maybe. But, I think in this case it would be fine to describe the manifestation and the item in a single place (within in MEI) because there's only one manifestation and one (and only one) item associated with that manifestation. This is the traditional way manuscripts have been described, pre-FRBR. Practically speaking, the manifestation and the item are the same thing.
But, as soon as you want to say something special about a particular *part* (as in "chunk", not performer part) of the manifestation, you have to split these up again, for example, when one section of a manuscript is located in Prague and another is in Manitoba. >> This was the idea behind me marking/stretching the autograph from expression to item. >> >> /benjamin >>> This is not the case with printed material, where there is *always* more than one item created from a manifestation, but it is still traditional to describe the manifestation and item as though they are the same thing. For example, it is common to follow the manifestation's author, title, place of publication, etc. with information about the location where one can obtain an exemplar of the manifestation, say, UVa Library M 296.C57 1987. >>> >>> Johannes also said "So if you have two more measures in a source, this >>> source establishes a new expression in FRBR." Again, maybe. The FRBR >>> report (1997, amended and corrected through 2009) says >>> >>> "Variations within substantially the same expression (e.g., slight variations that can be noticed between two states of the same edition in the case of hand press production) would normally be ignored or, in specialized catalogues, be reflected as a note within the bibliographic record for the manifestation. However, for some applications of the model (e.g., early texts of rare manuscripts), each variation may be viewed as a different expression." >>> >>> The issue is in the determination of whether 2 things are "substantially the same expression". As with many things, this depends on the person making the determination; there is no single correct answer. We intend that MEI will provide the tools for accurate description using either approach. >>> >>> Just my 2 cents, >>> >>> -- >>> p. >>> >>> __________________________ >>> Perry Roland >>> Music Library >>> University of Virginia >>> P. O.
Box 400175 >>> Charlottesville, VA 22904 >>> 434-982-2702 (w) >>> pdr4h (at) virginia (dot) edu >>> _______________________________________________ >>> mei-l mailing list >>> mei-l at lists.uni-paderborn.de >>> https://lists.uni-paderborn.de/mailman/listinfo/mei-l >> >> >> _______________________________________________ >> mei-l mailing list >> mei-l at lists.uni-paderborn.de >> https://lists.uni-paderborn.de/mailman/listinfo/mei-l >> >> _______________________________________________ >> mei-l mailing list >> mei-l at lists.uni-paderborn.de >> https://lists.uni-paderborn.de/mailman/listinfo/mei-l > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l From t.crawford at gold.ac.uk Wed Nov 21 10:11:55 2012 From: t.crawford at gold.ac.uk (Tim Crawford) Date: Wed, 21 Nov 2012 09:11:55 +0000 Subject: [MEI-L] FRBR in MEI In-Reply-To: <50AC8FB6.5020005@edirom.de> References: <0B6F63F59F405E4C902DFE2C2329D0D1514EF524@EXCHANGE-01.kb.dk> <45F7170D-0C66-4C59-9285-E328FDC45556@edirom.de> <0B6F63F59F405E4C902DFE2C2329D0D1514EF677@EXCHANGE-01.kb.dk> <50A3EA0D.3010107@edirom.de> <0B6F63F59F405E4C902DFE2C2329D0D1514F0523@EXCHANGE-01.kb.dk> <50A66B12.3060105@edirom.de> <14810_1353088028_50A67C1B_14810_70_1_4DF0F169-AEB5-4F46-8220-828B90F928FF@edirom.de>, <8DD7CA10-D542-48B7-BF5F-1BC177A7CD43@mail.mcgill.ca> <50AB61F1.8090600@edirom.de> <0B6F63F59F405E4C902DFE2C2329D0D1514F2CAC@EXCHANGE-01.kb.dk> <34BE6CF5-15FA-4FDB-AA3A-AFA6395503B4@edirom.de> <50AC8FB6.5020005@edirom.de> Message-ID: Dear All, While I have absolutely no wish to get further involved in this fascinating discussion, it strikes me that yes, indeed, much of what Benjamin is talking about may be handled better by a music ontology. And such a thing - the Music Ontology - has now reached an advanced state of development by researchers at the BBC and elsewhere over a number of years. 
For details and specification, see: http://musicontology.com/ I'm not sure this alone (or even the entire apparatus of the Semantic Web) will solve all the problem cases you might encounter or devise, but at least it might relieve some of the pressure caused by a desire to encode *everything* about a musical work within MEI ... Keep up the great work! Tim Crawford, London On 21 Nov 2012, at 08:24, Benjamin Wolff Bohl wrote: > Hi there, > first thanks to Axel for sorting out that FRBR is for bibliographic > items and thus performances, that we have nothing more of than the > knowledge it happened are out of scope. > I've somehow been thinking too much towards somthing like a music > ontology. > Moreover the idea of having a hierarchy in FRBR might have mislead > me, prooving Peter's earlier mentioned concerns regarding this true. > > Nevertheless we still got the recordings to deal with! > So Johannes, let's continue to disagree ;-) > >> Hi Benni, >> >> I hope I got one of your last mails wrong (in this regard), but >> just in case I didn't: By no means I wanted to keep you from >> commenting on this (or other) thread(s), as your comments are >> extremely valuable and helpful ? even if I sometimes disagree. If I >> offended you somehow, that wasn't my intention, and I want to >> apologize for it. >> >> That being said, I may continue to disagree ;-) Actually, I don't >> think we're that far away. The one thing you seem to get wrong >> though is the process from expression to manifestation, which is in >> no case trivial and a mere technological step without artistic >> contribution. When you consider the efforts necessary to engrave a >> piece of music, or the work on the preparation of the WeGA scores >> we see every day, you will agree that even in the graphical domain, >> this step is indeed highly artistic and involves a whole bunch of >> people with different expertise. 
I agree that the workflows for >> making recordings are different, but both things seem to be >> comparable from this perspective, don't you think? > I neither think that we are too far away from each other now. And I > never wanted to say that the transiton from expression to > manifestation was a mere technical, but maybe I should have > explained a little more what my initial graphic was all about with > the recordings, as by no means it would involve a mere technical > step. Beginning from the way the recording engineer set up his > microphones and what he did on his audio desk, across quite a couple > of steps involving editing (cutting, rather technical but > nevertheless with artistic implications), mixing (very artistic) and > mastering(as artistic as technical), that all would result in > archive material quite a lot of intellectual/artistic work is > involved in a record(ing). > > I'll have a try on this: > WORK - examination -> edition (e1) ------------- engraving -> print > run (m1) -printing -> print copy (i1) > > If you have the above, and try to get a parallel idea on the way to > the copy of a record on your shelf (i2): > (1) What will be expression? > (2) What will be manifestation? > (3) Is one stream of e-m-i this sufficient? > > WORK - examination -> artist's inpterpretation (e2) - - > > ? -> record copy (i2) > > or > > WORK - recording -> ? - mastering -> label's > press run -pressing (m2) -> record copy (i2) > > Maybe let's try to fill this with one of Don's "Greatful" examples: > The Song "Truckin" has been released Nov 1 1970 on the album > "American Beauty" and as a single. The album was recorded in AUG- > SEP 1970, although it might be the single version was recorded in > SEP or maybe this specific song was recorded in SEP. > > Truckin (w1) --> 1970-08 to 1970-09 Session Tapes (e2) --> 1970-11-1 > Warner bros. 
release of "American Beauty" album (m2) --> record copy > (i2) > Truckin (w1) --> 1970-08 to 1970-09 Session Tapes (e3) --> 1970-11-1 > Warner bros. release of "Truckin" single (m3) --> record copy (i3) > > But the single version and album version differ quite a lot, album > length 5:09 and single 3:13 so we should specify a little more. > > Truckin (w1) --> 1970-08 to 1970-09 Session Tape album-verison (e2) > --> 1970-11-1 Warner bros. release of "American Beauty" album (m2) -- > > record copy (i2) > Truckin (w1) --> 1970-08 to 1970-09 Session Tape single-version (e3) > --> 1970-11-1 Warner bros. release of "Truckin" single (m3) --> > record copy (i3) > > A little complication: the single version was not recorded but taken > from the album version and edited down from 5 to 3 minutes > nevertheless it is a own expression, but it hints us twords some > items that might reside in an archives shelf, namely: > - session tapes : the tapes from the recording session (potentially > multi-track) > - edit tapes : the tapes where all the nice parts from the session > tapes were cut together to make up the material for the work > (potentially multi-track) > - mix tapes : a stereo mix version including lots of additional > features like for example delay effects etc. resembling the final > version > - master tapes : an acoustically slightly reshaped version of the > mix tape version in order to fit the technical limitations of a > certain target medium like vinyl and some intellectual work to > smoothen the mix (e.g. making all songs on a record sound similar) > > If they are relevant for my MEI file, they should go into , > but where should these go in FRBR? > Maybe they should all be separate expressions with strong relations > to each other? > So actually the only one in the direct same "FRBR hierarchy" would > be the master tape? > > Truckin (w1) --> 1970-09-XX "Truckin" Master Tape (e3) --> 1970-11-1 > Warner bros. 
release of "Truckin" single (m3) --> record copy (i3) > > If I don't know about all the tapes I might just put? > Truckin (w1) --> 1970-08 to 1970-09 recordings --> 1970-11-1 Warner > bros. release of "Truckin" single (m3) --> record copy (i3) > > or? > > Truckin (w1) --> 1970 Version (e3)--> 1970-11-1 Warner bros. release > of "Truckin" single (m3) --> record copy (i3) > > What do you think, is there still a problem? > Is there anything interesting for you in the above? > >> Besides that, I totally agree that FRBR is not extremely >> prescriptive regarding how to model certain situations, but after >> thinking about it for some time, I (now) think that this is >> actually a benefit, as it doesn't enforce a specific setup, but >> allows projects to implement it as they see fit. So in the end, I'm >> not against your approach in general, I'm just against enforcing >> your approach. The current implementation of FRBR in MEI tries to >> keep this openness of FRBR, which I regard as a good thing. In the >> end, all of us could be wrong ;-) > I never wanted to enforce anything only to show up possibilities to > be considered when implementing FRBR or test the current > implementation against. And I think you're absolutely right that an > openness could be a benefit as we certainly will miss possible > complicated situations. > > benjamin >> >> Best, >> Johannes >> >> Am 20.11.2012 um 12:33 schrieb Axel Teich Geertinger: >> >>> Hi Benni >>> >>> Perhaps we should remember that FRBR is intended for >>> _bibliographic records_, not for descriptions of a work's >>> reception history. Thus, the premise for using FRBR is that in the >>> end we want to describe bibliographic items. Since a performance >>> itself isn't a bibliographic item, perhaps it does not have to fit >>> in? Only if it results in such an item (via manifestation), i.e. a >>> recording, it becomes truly relevant to use FRBR. 
The performance >>> in that case is not the primary thing we want to describe, it is >>> just the context that resulted in the recording manifestation. >>> >>> Just another 2 cents, >>> Axel >>> >>> -----Oprindelig meddelelse----- >>> Fra: mei-l-bounces at lists.uni-paderborn.de [mailto:mei-l-bounces at lists.uni-paderborn.de >>> ] P? vegne af Benjamin Wolff Bohl >>> Sendt: 20. november 2012 11:57 >>> Til: Music Encoding Initiative >>> Emne: Re: [MEI-L] FRBR in MEI >>> >>> Hi Perry, >>> thanks for some clarifying approaches >>> further statements inline >>> >>> Am 16.11.2012 22:25, schrieb Roland, Perry (pdr4h): >>>> Random comments on the discussion so far. Sorry if this gets long. >>>> >>>> When contemplating performances and recordings, it seems to me >>>> that people often have trouble reaching agreement on the term >>>> "sound recording". Andrew's slides label the *expression* as >>>> "the sound recording", but others might label the *manifestation* >>>> as "the sound recording". You might say the expression is the >>>> "act of making a recording" and the manifestation is the >>>> "recording that results". >>>> >>>> To disentangle the different uses of the term "recording", it >>>> helps me to remember that an expression is not a physical entity, >>>> but a manifestation is. Therefore, I prefer to think of the >>>> expression as "the performance" (the non-physical thing being >>>> recorded) and the manifestation as "the recording" (the physical >>>> thing). This fits with the way libraries have traditionally >>>> cataloged recordings, i.e., CDs, LPs, cassettes, wax cylinders, ... >>> I completely agree on that, being the reason why I used both the >>> terms recording and record with record being on the manifestation/ >>> item-level and recording being rather on the expression- >>> manifestation-level. Why so? 
Recording has to be subordinate to >>> work after all and a recording is not just a simple physical >>> manifestation but a multistep process involving conceptual and >>> creative work done by producers and engineers. >>> So talking about a recording as only being a manifestation becomes >>> problematic as it is a intellectual process resulting in a >>> physical manifestation. That's the way I was looking on it (owed >>> to my audio engineering past) and of course it can be seen >>> differently. >>>> In any case, the FRBR document, which Axel cites, says a >>>> *performance is an expression* and a *recording is a >>>> manifestation*. >>> This is perfectly plausible when disregarding the intellectual >>> endeavour entangled with the "act of making a recording", as >>> mentioned before. >>>> The usual "waterfall" kind of diagram is explained by saying the >>>> term >>>> "work" applies to conceptual content; "expression" applies to the >>>> languages/media/versions in which the work occurs; "manifestation" >>>> applies to the formats in which each expression is available; and >>>> "item" applies to individual copies of a single format. (Here >>>> "media" >>>> means "medium of expression", say written language as opposed to >>>> film, >>>> and "format" means physical format, as in printed book as opposed >>>> to >>>> audio CD.) >>>> >>>> Taking another tack, though, often it is easier for me to think >>>> of FRBR "from the bottom up", rather than start from the work and >>>> proceed "down" the waterfall diagram. 
Using the recording >>>> example, the item is the exemplar I hold in my hand, the >>>> manifestation is all of the copies of that exemplar (or better >>>> yet, all the information shared by all those copies), the >>>> expression is the version of the work that is represented by the >>>> manifestation (e.g., Jo's nose flute + harpsichord version and >>>> the orchestral version are different expressions), and the work >>>> is an intellectual creation/idea (e.g., Bohl's op. 1, the one >>>> that goes da, da, da, daaaaaa, reeep! reeep! reeep!). >>>> >>>> Using this "bottom up" thinking helps avoid mental contortions >>>> regarding what the work is -- the work is simply the thing at the >>>> end of this mental process. From there on, there are work-to-work >>>> relationships, so we don't have to think about whether >>>> "Romeo and Juliet", "Westside Story", and every other story about >>>> star-crossed lovers are expressions of an ur-work with its own >>>> manifestations and so on, which lead us to a different >>>> "waterfall" conclusion each time we discover a new work or >>>> expression. >>> The idea of approaching the FRBR model "from the bottom" is great. >>> And to be honest, that was something I did when drawing my model, >>> especially concerning the record and recording portion of it. I >>> started out from work on the top right and from the individual >>> record bottom right and tried to fill in as many steps as >>> possible, always wondering whether it be physical or conceptual. >>> Actually I had the recording in between expression and >>> manifestation in the first place, as I had the audio tape or >>> digital audio in between manifestation and item. >>> The parallel processes from a work to an item (regardless of >>> whichever form this may have) are owed to perspective and goal.
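[Editor's note] Perry's "bottom up" reading of the WEMI chain can be sketched in a few lines of code. This is a hypothetical illustration; the class and attribute names are made up for the sketch and are not FRBR or MEI vocabulary:

```python
from dataclasses import dataclass

# Hypothetical sketch of the FRBR Group 1 (WEMI) chain; names are
# illustrative only, not FRBR or MEI vocabulary.
@dataclass
class Work:
    title: str

@dataclass
class Expression:
    label: str
    realization_of: Work

@dataclass
class Manifestation:
    label: str
    embodiment_of: Expression

@dataclass
class Item:
    label: str
    exemplar_of: Manifestation

# "Bottom up": start from the exemplar in hand and ask what it
# exemplifies, what that embodies, and what that in turn realizes.
work = Work("Bohl's op. 1")
nose_flute = Expression("nose flute + harpsichord version", work)
release = Manifestation("audio CD release", nose_flute)
my_copy = Item("the copy I hold in my hand", release)

# The work is simply the thing at the end of this mental process.
assert my_copy.exemplar_of.embodiment_of.realization_of is work
```

The point of the sketch is the direction of the links: each level points upward, so the work is never constructed first, it is simply reached last.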
>>> When talking about graphical sources I completely agree with the >>> idea of a certain instrumentation version or the like being an >>> expression, a print run being a manifestation, an individual copy >>> of which would be an item. >>>> Instead of creating separate expression-level markup for each >>>> performance, Axel treats some expressions (performances) as >>>> events related to another expression of a work (the orchestral >>>> version vs. the nose flute version). This is fine. As Johannes >>>> already pointed out, separate elements for the >>>> performances can be generated from the markup, if >>>> necessary. Conversely, there's nothing wrong with creating >>>> separate elements for each performance and relating >>>> them to other appropriate expressions and/or relating them >>>> directly to the work. If necessary, given accurate place and >>>> date information, the kind of markup could be created >>>> from the separate elements. So, six of one ... >>> I can agree here, too. I only wondered if the sound wave resulting >>> from the performance was the physical item (specific performers on >>> a specific date), then consequently a series of performances by >>> conductor and orchestra would make up for the manifestation, the >>> expression then would be the concept that the conductor developed >>> studying his "source material" and making up the way he wanted the >>> composition to be realized, ergo his "personal version" of the >>> piece, somewhat of a personal edition. >>> The performance material of course being an item of a certain >>> print run >>> (manifestation) of a certain edition (expression), having strong >>> relationships to all of the above. >>>> Johannes said "If there is a manuscript of the nose flute >>>> version, the information about it would be spread between the >>>> manifestation (source) and the item." Well, maybe.
But, I think >>>> in this case it would be fine to describe the manifestation and >>>> the item in a single place (within in MEI) because >>>> there's only one manifestation and one (and only one) item >>>> associated with that manifestation. This is the traditional way >>>> manuscripts have been described, pre-FRBR. Practically speaking, >>>> the manifestation and the item are the same thing. But, as soon >>>> as you want to say something special about a particular *part* >>>> (as in "chunk", not performer part) of the manifestation, you >>>> have to split these up again, for example, when one section of a >>>> manuscript is located in Prague and another is in Manitoba. >>> This was the idea behind me marking/stretching the autograph from >>> expression to item. >>> >>> /benjamin >>>> This is not the case with printed material where there is >>>> *always* more than one item created from a manifestation, but it >>>> is still traditional to describe the manifestation and item as >>>> though they are the same thing. For example, it is common to >>>> follow the manifestation's author, title, place of publication, >>>> etc. with information about the location where one can obtain an >>>> exemplar of the manifestation, say, UVa Library M 296.C57 1987. >>>> >>>> Johannes also said "So if you have two more measures in a source, >>>> this >>>> source establishes a new expression in FRBR." Again, maybe. The >>>> FRBR >>>> report (1997, amended and corrected through 2009) says >>>> >>>> "Variations within substantially the same expression (e.g., >>>> slight variations that can be noticed between two states of the >>>> same edition in the case of hand press production) would normally >>>> be ignored or, in specialized catalogues, be reflected as a note >>>> within the bibliographic record for the manifestation. However, >>>> for some applications of the model (e.g., early texts of rare >>>> manuscripts), each variation may be viewed as a different >>>> expression."
>>>> >>>> The issue is in the determination of whether 2 things are >>>> "substantially the same expression". As with many things, this >>>> depends on the person making the determination, there is no >>>> single correct answer. We intend that MEI will provide the tools >>>> for accurate description using either approach. >>>> >>>> Just my 2 cents, >>>> >>>> -- >>>> p. >>>> >>>> __________________________ >>>> Perry Roland >>>> Music Library >>>> University of Virginia >>>> P. O. Box 400175 >>>> Charlottesville, VA 22904 >>>> 434-982-2702 (w) >>>> pdr4h (at) virginia (dot) edu >>>> _______________________________________________ >>>> mei-l mailing list >>>> mei-l at lists.uni-paderborn.de >>>> https://lists.uni-paderborn.de/mailman/listinfo/mei-l >>> >>> >>> _______________________________________________ >>> mei-l mailing list >>> mei-l at lists.uni-paderborn.de >>> https://lists.uni-paderborn.de/mailman/listinfo/mei-l >>> >>> _______________________________________________ >>> mei-l mailing list >>> mei-l at lists.uni-paderborn.de >>> https://lists.uni-paderborn.de/mailman/listinfo/mei-l >> >> _______________________________________________ >> mei-l mailing list >> mei-l at lists.uni-paderborn.de >> https://lists.uni-paderborn.de/mailman/listinfo/mei-l > > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l > From kepper at edirom.de Wed Nov 21 10:20:17 2012 From: kepper at edirom.de (Johannes Kepper) Date: Wed, 21 Nov 2012 10:20:17 +0100 Subject: [MEI-L] FRBR in MEI In-Reply-To: References: <0B6F63F59F405E4C902DFE2C2329D0D1514EF524@EXCHANGE-01.kb.dk> <45F7170D-0C66-4C59-9285-E328FDC45556@edirom.de> <0B6F63F59F405E4C902DFE2C2329D0D1514EF677@EXCHANGE-01.kb.dk> <50A3EA0D.3010107@edirom.de> <0B6F63F59F405E4C902DFE2C2329D0D1514F0523@EXCHANGE-01.kb.dk> <50A66B12.3060105@edirom.de> 
<14810_1353088028_50A67C1B_14810_70_1_4DF0F169-AEB5-4F46-8220-828B90F928FF@edirom.de>, <8DD7CA10-D542-48B7-BF5F-1BC177A7CD43@mail.mcgill.ca> <50AB61F1.8090600@edirom.de> <0B6F63F59F405E4C902DFE2C2329D0D1514F2CAC@EXCHANGE-01.kb.dk> <34BE6CF5-15FA-4FDB-AA3A-AFA6395503B4@edirom.de> <50AC8FB6.5020005@edirom.de> Message-ID: Thanks Tim for that hint, which I completely forgot over the last few years. Another approach, which will probably reside somewhere in the middle between FRBR and the Music Ontology, is CIDOC-CRM. I have to admit that I'm not familiar with the details there, but I suspect that they are capable of dealing with situations like this somehow. I guess a good strategy for projects that care about such issues is to use MEI only to capture the music and most relevant parts in the header, while pointing to external formats like the ones above to encode the nitty-gritty details of creation history. While I'm normally in support of adding functionality to MEI, this seems to go too far down the road. We can't possibly mimic all these other formats within MEI, as we don't mimic SVG, TEI, and others. Best, Johannes Am 21.11.2012 um 10:11 schrieb Tim Crawford: > Dear All, > > While I have absolutely no wish to get further involved in this fascinating discussion, it strikes me that yes, indeed, much of what Benjamin is talking about may be handled better by a music ontology. > > And such a thing - the Music Ontology - has now reached an advanced state of development by researchers at the BBC and elsewhere over a number of years. For details and specification, see: > > http://musicontology.com/ > > I'm not sure this alone (or even the entire apparatus of the Semantic Web) will solve all the problem cases you might encounter or devise, but at least it might relieve some of the pressure caused by a desire to encode *everything* about a musical work within MEI ... > > Keep up the great work!
> > Tim Crawford, London > > On 21 Nov 2012, at 08:24, Benjamin Wolff Bohl wrote: > >> Hi there, >> first thanks to Axel for sorting out that FRBR is for bibliographic >> items, and that performances of which we have nothing more than the >> knowledge that they happened are thus out of scope. >> I've somehow been thinking too much towards something like a music >> ontology. >> Moreover the idea of having a hierarchy in FRBR might have misled >> me, proving Peter's earlier mentioned concerns regarding this true. >> >> Nevertheless we still got the recordings to deal with! >> So Johannes, let's continue to disagree ;-) >> >>> Hi Benni, >>> >>> I hope I got one of your last mails wrong (in this regard), but just in case I didn't: By no means did I want to keep you from commenting on this (or other) thread(s), as your comments are extremely valuable and helpful – even if I sometimes disagree. If I offended you somehow, that wasn't my intention, and I want to apologize for it. >>> >>> That being said, I may continue to disagree ;-) Actually, I don't think we're that far away. The one thing you seem to get wrong though is the process from expression to manifestation, which is in no case a trivial, merely technological step without artistic contribution. When you consider the efforts necessary to engrave a piece of music, or the work on the preparation of the WeGA scores we see every day, you will agree that even in the graphical domain, this step is indeed highly artistic and involves a whole bunch of people with different expertise. I agree that the workflows for making recordings are different, but both things seem to be comparable from this perspective, don't you think? >> I neither think that we are too far away from each other now. And I never wanted to say that the transition from expression to manifestation was merely technical, but maybe I should have explained a little more what my initial graphic was all about with the recordings, as by no means it would involve a mere technical step.
Beginning from the way the recording engineer set up his microphones and what he did on his audio desk, across quite a couple of steps involving editing (cutting, rather technical but nevertheless with artistic implications), mixing (very artistic) and mastering (as artistic as technical) that would all result in archive material, quite a lot of intellectual/artistic work is involved in a record(ing). >> >> I'll have a try on this: >> WORK - examination -> edition (e1) ------------- engraving -> print run (m1) -printing -> print copy (i1) >> >> If you have the above, and try to get a parallel idea on the way to the copy of a record on your shelf (i2): >> (1) What will be expression? >> (2) What will be manifestation? >> (3) Is one stream of e-m-i sufficient? >> >> WORK - examination -> artist's interpretation (e2) - -> ? -> record copy (i2) >> >> or >> >> WORK - recording -> ? - mastering -> label's press run -pressing (m2) -> record copy (i2) >> >> Maybe let's try to fill this with one of Don's "Grateful" examples: >> The Song "Truckin" was released Nov 1 1970 on the album "American Beauty" and as a single. The album was recorded in AUG-SEP 1970, although it might be that the single version was recorded in SEP or maybe this specific song was recorded in SEP. >> >> Truckin (w1) --> 1970-08 to 1970-09 Session Tapes (e2) --> 1970-11-1 Warner bros. release of "American Beauty" album (m2) --> record copy (i2) >> Truckin (w1) --> 1970-08 to 1970-09 Session Tapes (e3) --> 1970-11-1 Warner bros. release of "Truckin" single (m3) --> record copy (i3) >> >> But the single version and album version differ quite a lot, album length 5:09 and single 3:13, so we should specify a little more. >> >> Truckin (w1) --> 1970-08 to 1970-09 Session Tape album-version (e2) --> 1970-11-1 Warner bros. release of "American Beauty" album (m2) --> record copy (i2) >> Truckin (w1) --> 1970-08 to 1970-09 Session Tape single-version (e3) --> 1970-11-1 Warner bros.
release of "Truckin" single (m3) --> record copy (i3) >> >> A little complication: the single version was not recorded but taken from the album version and edited down from 5 to 3 minutes; nevertheless it is its own expression, but it hints us towards some items that might reside on an archive's shelf, namely: >> - session tapes : the tapes from the recording session (potentially multi-track) >> - edit tapes : the tapes where all the nice parts from the session tapes were cut together to make up the material for the work (potentially multi-track) >> - mix tapes : a stereo mix version including lots of additional features like for example delay effects etc. resembling the final version >> - master tapes : an acoustically slightly reshaped version of the mix tape version in order to fit the technical limitations of a certain target medium like vinyl and some intellectual work to smoothen the mix (e.g. making all songs on a record sound similar) >> >> If they are relevant for my MEI file, they should go into , but where should these go in FRBR?
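[Editor's note] Benjamin's "Truckin" chains can be written out as plain data to see where the lateral relation sits. This is a hypothetical sketch: the w/e/m/i identifiers follow his labels, but the dictionary layout and the relation name "derivedFrom" are made up for illustration, not FRBR or MEI vocabulary:

```python
# Hypothetical sketch of the "Truckin" WEMI chains; the w1/e2/m2/i2
# ids follow the labels in the mail, everything else is illustrative.
expressions = {
    "e2": {"of": "w1", "label": "1970-08/09 session tape, album version (5:09)"},
    "e3": {"of": "w1", "label": "1970-09 single version, edited down (3:13)"},
}
manifestations = {
    "m2": {"of": "e2", "label": "1970-11-01 Warner Bros. 'American Beauty' album"},
    "m3": {"of": "e3", "label": "1970-11-01 Warner Bros. 'Truckin' single"},
}
items = {
    "i2": {"of": "m2", "label": "record copy of the album"},
    "i3": {"of": "m3", "label": "record copy of the single"},
}

# The single was not separately recorded but edited down from the album
# version, so that fact lives in a lateral relation, not in the chain.
relations = [("e3", "derivedFrom", "e2")]

def work_of(item_id):
    # Walk item -> manifestation -> expression -> work.
    manifestation = items[item_id]["of"]
    expression = manifestations[manifestation]["of"]
    return expressions[expression]["of"]

# Both record copies resolve to the same work, while the derivation
# relation keeps the album/single difference queryable.
assert work_of("i2") == work_of("i3") == "w1"
```

Modeled this way, the session, edit, mix, and master tapes could likewise become further expression or item entries linked by lateral relations rather than extra levels in the chain.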
>> >>> Besides that, I totally agree that FRBR is not extremely prescriptive regarding how to model certain situations, but after thinking about it for some time, I (now) think that this is actually a benefit, as it doesn't enforce a specific setup, but allows projects to implement it as they see fit. So in the end, I'm not against your approach in general, I'm just against enforcing your approach. The current implementation of FRBR in MEI tries to keep this openness of FRBR, which I regard as a good thing. In the end, all of us could be wrong ;-) >> I never wanted to enforce anything, only to show up possibilities to be considered when implementing FRBR or test the current implementation against. And I think you're absolutely right that an openness could be a benefit as we certainly will miss possible complicated situations. >> >> benjamin >>> >>> Best, >>> Johannes >>> >>> Am 20.11.2012 um 12:33 schrieb Axel Teich Geertinger: >>> >>>> Hi Benni >>>> >>>> Perhaps we should remember that FRBR is intended for _bibliographic records_, not for descriptions of a work's reception history. Thus, the premise for using FRBR is that in the end we want to describe bibliographic items. Since a performance itself isn't a bibliographic item, perhaps it does not have to fit in? Only if it results in such an item (via manifestation), i.e. a recording, does it become truly relevant to use FRBR. The performance in that case is not the primary thing we want to describe, it is just the context that resulted in the recording manifestation. >>>> >>>> Just another 2 cents, >>>> Axel >>>> >>>> -----Oprindelig meddelelse----- >>>> Fra: mei-l-bounces at lists.uni-paderborn.de [mailto:mei-l-bounces at lists.uni-paderborn.de] På vegne af Benjamin Wolff Bohl >>>> Sendt: 20.
november 2012 11:57 >>>> Til: Music Encoding Initiative >>>> Emne: Re: [MEI-L] FRBR in MEI >>>> >>>> Hi Perry, >>>> thanks for some clarifying approaches >>>> further statements inline >>>> >>>> Am 16.11.2012 22:25, schrieb Roland, Perry (pdr4h): >>>>> Random comments on the discussion so far. Sorry if this gets long. >>>>> >>>>> When contemplating performances and recordings, it seems to me that people often have trouble reaching agreement on the term "sound recording". Andrew's slides label the *expression* as "the sound recording", but others might label the *manifestation* as "the sound recording". You might say the expression is the "act of making a recording" and the manifestation is the "recording that results". >>>>> >>>>> To disentangle the different uses of the term "recording", it helps me to remember that an expression is not a physical entity, but a manifestation is. Therefore, I prefer to think of the expression as "the performance" (the non-physical thing being recorded) and the manifestation as "the recording" (the physical thing). This fits with the way libraries have traditionally cataloged recordings, i.e., CDs, LPs, cassettes, wax cylinders, ... >>>> I completely agree on that, being the reason why I used both the terms recording and record with record being on the manifestation/item-level and recording being rather on the expression-manifestation-level. Why so? Recording has to be subordinate to work after all and a recording is not just a simple physical manifestation but a multistep process involving conceptual and creative work done by producers and engineers. >>>> So talking about a recording as only being a manifestation becomes problematic as it is an intellectual process resulting in a physical manifestation. That's the way I was looking at it (owed to my audio engineering past) and of course it can be seen differently.
>>>>> In any case, the FRBR document, which Axel cites, says a *performance is an expression* and a *recording is a manifestation*. >>>> This is perfectly plausible when disregarding the intellectual endeavour entangled with the "act of making a recording", as mentioned before. >>>>> The usual "waterfall" kind of diagram is explained by saying the term >>>>> "work" applies to conceptual content; "expression" applies to the >>>>> languages/media/versions in which the work occurs; "manifestation" >>>>> applies to the formats in which each expression is available; and >>>>> "item" applies to individual copies of a single format. (Here "media" >>>>> means "medium of expression", say written language as opposed to film, >>>>> and "format" means physical format, as in printed book as opposed to >>>>> audio CD.) >>>>> >>>>> Taking another tack, though, often it is easier for me to think of FRBR "from the bottom up", rather than start from the work and proceed "down" the waterfall diagram. Using the recording example, the item is the exemplar I hold in my hand, the manifestation is all of the copies of that exemplar (or better yet, all the information shared by all those copies), the expression is the version of the work that is represented by the manifestation (e.g., Jo's nose flute + harpsichord version and the orchestral version are different expressions), and the work is an intellectual creation/idea (e.g., Bohl's op. 1, the one that goes da, da, da, daaaaaa, reeep! reeep! reeep!). >>>>> >>>>> Using this "bottom up" thinking helps avoid mental contortions regarding what the work is -- the work is simply the thing at the end of this mental process. 
From there on, there are work-to-work relationships, so we don't have to think about whether "Romeo and Juliet", "Westside Story", and every other story about star-crossed lovers are expressions of an ur-work with its own manifestations and so on, which lead us to a different "waterfall" conclusion each time we discover a new work or expression. >>>> The idea of approaching the FRBR model "from the bottom" is great. And to be honest, that was something I did when drawing my model, especially concerning the record and recording portion of it. I started out from work on the top right and from the individual record bottom right and tried to fill in as many steps as possible, always wondering whether it be physical or conceptual. Actually I had the recording in between expression and manifestation in the first place, as I had the audio tape or digital audio in between manifestation and item. >>>> The parallel processes from a work to an item (regardless of whichever form this may have) are owed to perspective and goal. When talking about graphical sources I completely agree with the idea of a certain instrumentation version or the like being an expression, a print run being a manifestation, an individual copy of which would be an item. >>>>> Instead of creating separate expression-level markup for each performance, Axel treats some expressions (performances) as events related to another expression of a work (the orchestral version vs. the nose flute version). This is fine. As Johannes already pointed out, separate elements for the performances can be generated from the markup, if necessary. Conversely, there's nothing wrong with creating separate elements for each performance and relating them to other appropriate expressions and/or relating them directly to the work. If necessary, given accurate place and date information, the kind of markup could be created from the separate elements. So, six of one ... >>>> I can agree here, too.
I only wondered if the sound wave resulting from the performance was the physical item (specific performers on a specific date), then consequently a series of performances by conductor and orchestra would make up for the manifestation, the expression then would be the concept that the conductor developed studying his "source material" and making up the way he wanted the composition to be realized, ergo his "personal version" of the piece, somewhat of a personal edition. >>>> The performance material of course being an item of a certain print run >>>> (manifestation) of a certain edition (expression), having strong relationships to all of the above. >>>>> Johannes said "If there is a manuscript of the nose flute version, the information about it would be spread between the manifestation (source) and the item." Well, maybe. But, I think in this case it would be fine to describe the manifestation and the item in a single place (within in MEI) because there's only one manifestation and one (and only one) item associated with that manifestation. This is the traditional way manuscripts have been described, pre-FRBR. Practically speaking, the manifestation and the item are the same thing. But, as soon as you want to say something special about a particular *part* (as in "chunk", not performer part) of the manifestation, you have to split these up again, for example, when one section of a manuscript is located in Prague and another is in Manitoba. >>>> This was the idea behind me marking/stretching the autograph from expression to item. >>>> >>>> /benjamin >>>>> This is not the case with printed material where there is *always* more than one item created from a manifestation, but it is still traditional to describe the manifestation and item as though they are the same thing. For example, it is common to follow the manifestation's author, title, place of publication, etc.
with information about the location where one can obtain an exemplar of the manifestation, say, UVa Library M 296.C57 1987. >>>>> >>>>> Johannes also said "So if you have two more measures in a source, this >>>>> source establishes a new expression in FRBR." Again, maybe. The FRBR >>>>> report (1997, amended and corrected through 2009) says >>>>> >>>>> "Variations within substantially the same expression (e.g., slight variations that can be noticed between two states of the same edition in the case of hand press production) would normally be ignored or, in specialized catalogues, be reflected as a note within the bibliographic record for the manifestation. However, for some applications of the model (e.g., early texts of rare manuscripts), each variation may be viewed as a different expression." >>>>> >>>>> The issue is in the determination of whether 2 things are "substantially the same expression". As with many things, this depends on the person making the determination, there is no single correct answer. We intend that MEI will provide the tools for accurate description using either approach. >>>>> >>>>> Just my 2 cents, >>>>> >>>>> -- >>>>> p. >>>>> >>>>> __________________________ >>>>> Perry Roland >>>>> Music Library >>>>> University of Virginia >>>>> P. O.
Box 400175 >>>>> Charlottesville, VA 22904 >>>>> 434-982-2702 (w) >>>>> pdr4h (at) virginia (dot) edu >>>>> _______________________________________________ >>>>> mei-l mailing list >>>>> mei-l at lists.uni-paderborn.de >>>>> https://lists.uni-paderborn.de/mailman/listinfo/mei-l >>>> >>>> >>>> _______________________________________________ >>>> mei-l mailing list >>>> mei-l at lists.uni-paderborn.de >>>> https://lists.uni-paderborn.de/mailman/listinfo/mei-l >>>> >>>> _______________________________________________ >>>> mei-l mailing list >>>> mei-l at lists.uni-paderborn.de >>>> https://lists.uni-paderborn.de/mailman/listinfo/mei-l >>> >>> _______________________________________________ >>> mei-l mailing list >>> mei-l at lists.uni-paderborn.de >>> https://lists.uni-paderborn.de/mailman/listinfo/mei-l >> >> >> _______________________________________________ >> mei-l mailing list >> mei-l at lists.uni-paderborn.de >> https://lists.uni-paderborn.de/mailman/listinfo/mei-l >> > > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l From dave at titanmusic.com Wed Nov 21 22:41:01 2012 From: dave at titanmusic.com (David Meredith) Date: Wed, 21 Nov 2012 22:41:01 +0100 Subject: [MEI-L] FRBR in MEI In-Reply-To: Message-ID: "musicbrainz_guid" "amazon_asin" "myspace" So is the idea that a new property has to be added each time someone builds a new electronic catalogue? Doesn't seem particularly scalable. Or are there just going to be some arbitrarily privileged catalogues that have associated properties? - Dave Meredith, Aalborg On 21/11/2012 10:11, "Tim Crawford" wrote: >Dear All, > >While I have absolutely no wish to get further involved in this >fascinating discussion, it strikes me that yes, indeed, much of what >Benjamin is talking about may be handled better by a music ontology.
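[Editor's note] One common answer to Dave's scalability objection is to model external identifiers as generic (scheme, value) pairs instead of one fixed property per catalogue. The sketch below is hypothetical; the field names and all values are made up for illustration and are not Music Ontology or MEI vocabulary:

```python
# Hypothetical sketch: generic (scheme, value) identifier pairs, so a
# new catalogue is new data rather than a new property. All names and
# values here are made up for illustration.
identifiers = [
    {"scheme": "musicbrainz", "value": "00000000-dummy-guid"},
    {"scheme": "asin", "value": "DUMMYASIN0"},
]

def add_identifier(ids, scheme, value):
    # Supporting a new catalogue becomes a data change, not a schema change.
    ids.append({"scheme": scheme, "value": value})

add_identifier(identifiers, "myspace", "some-profile-name")

assert any(i["scheme"] == "myspace" for i in identifiers)
```

The trade-off is that a generic pair carries no per-catalogue semantics, which is roughly the gap that dedicated ontology properties are meant to fill.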
> >And such a thing - the Music Ontology - has now reached an advanced >state of development by researchers at the BBC and elsewhere over a >number of years. For details and specification, see: > > http://musicontology.com/ > >I'm not sure this alone (or even the entire apparatus of the Semantic >Web) will solve all the problem cases you might encounter or devise, >but at least it might relieve some of the pressure caused by a desire >to encode *everything* about a musical work within MEI ... > >Keep up the great work! > >Tim Crawford, London > >On 21 Nov 2012, at 08:24, Benjamin Wolff Bohl wrote: > >> Hi there, >> first thanks to Axel for sorting out that FRBR is for bibliographic >> items, and that performances of which we have nothing more than the >> knowledge that they happened are thus out of scope. >> I've somehow been thinking too much towards something like a music >> ontology. >> Moreover the idea of having a hierarchy in FRBR might have misled >> me, proving Peter's earlier mentioned concerns regarding this true. >> >> Nevertheless we still got the recordings to deal with! >> So Johannes, let's continue to disagree ;-) >> >>> Hi Benni, >>> >>> I hope I got one of your last mails wrong (in this regard), but >>> just in case I didn't: By no means did I want to keep you from >>> commenting on this (or other) thread(s), as your comments are >>> extremely valuable and helpful – even if I sometimes disagree. If I >>> offended you somehow, that wasn't my intention, and I want to >>> apologize for it. >>> >>> That being said, I may continue to disagree ;-) Actually, I don't >>> think we're that far away. The one thing you seem to get wrong >>> though is the process from expression to manifestation, which is in >>> no case a trivial, merely technological step without artistic >>> contribution.
When you consider the efforts necessary to engrave a >>> piece of music, or the work on the preparation of the WeGA scores >>> we see every day, you will agree that even in the graphical domain, >>> this step is indeed highly artistic and involves a whole bunch of >>> people with different expertise. I agree that the workflows for >>> making recordings are different, but both things seem to be >>> comparable from this perspective, don't you think? >> I neither think that we are too far away from each other now. And I >> never wanted to say that the transition from expression to >> manifestation was merely technical, but maybe I should have >> explained a little more what my initial graphic was all about with >> the recordings, as by no means it would involve a mere technical >> step. Beginning from the way the recording engineer set up his >> microphones and what he did on his audio desk, across quite a couple >> of steps involving editing (cutting, rather technical but >> nevertheless with artistic implications), mixing (very artistic) and >> mastering (as artistic as technical) that would all result in >> archive material, quite a lot of intellectual/artistic work is >> involved in a record(ing). >> >> I'll have a try on this: >> WORK - examination -> edition (e1) ------------- engraving -> print >> run (m1) -printing -> print copy (i1) >> >> If you have the above, and try to get a parallel idea on the way to >> the copy of a record on your shelf (i2): >> (1) What will be expression? >> (2) What will be manifestation? >> (3) Is one stream of e-m-i sufficient? >> >> WORK - examination -> artist's interpretation (e2) - - >> > ? -> record copy (i2) >> >> or >> >> WORK - recording -> ? - mastering -> label's >> press run -pressing (m2) -> record copy (i2) >> >> Maybe let's try to fill this with one of Don's "Grateful" examples: >> The Song "Truckin" was released Nov 1 1970 on the album >> "American Beauty" and as a single.
The album was recorded in AUG-SEP 1970, >> although it might be that the single version was recorded in >> SEP or maybe this specific song was recorded in SEP. >> >> Truckin (w1) --> 1970-08 to 1970-09 Session Tapes (e2) --> 1970-11-1 >> Warner bros. release of "American Beauty" album (m2) --> record copy >> (i2) >> Truckin (w1) --> 1970-08 to 1970-09 Session Tapes (e3) --> 1970-11-1 >> Warner bros. release of "Truckin" single (m3) --> record copy (i3) >> >> But the single version and album version differ quite a lot, album >> length 5:09 and single 3:13, so we should specify a little more. >> >> Truckin (w1) --> 1970-08 to 1970-09 Session Tape album-version (e2) >> --> 1970-11-1 Warner bros. release of "American Beauty" album (m2) -- >> > record copy (i2) >> Truckin (w1) --> 1970-08 to 1970-09 Session Tape single-version (e3) >> --> 1970-11-1 Warner bros. release of "Truckin" single (m3) --> >> record copy (i3) >> >> A little complication: the single version was not recorded but taken >> from the album version and edited down from 5 to 3 minutes; >> nevertheless it is its own expression, but it hints us towards some >> items that might reside on an archive's shelf, namely: >> - session tapes : the tapes from the recording session (potentially >> multi-track) >> - edit tapes : the tapes where all the nice parts from the session >> tapes were cut together to make up the material for the work >> (potentially multi-track) >> - mix tapes : a stereo mix version including lots of additional >> features like for example delay effects etc. resembling the final >> version >> - master tapes : an acoustically slightly reshaped version of the >> mix tape version in order to fit the technical limitations of a >> certain target medium like vinyl and some intellectual work to >> smoothen the mix (e.g. making all songs on a record sound similar) >> >> If they are relevant for my MEI file, they should go into , >> but where should these go in FRBR?
>> Maybe they should all be separate expressions with strong relations >> to each other? >> So actually the only one in the same direct "FRBR hierarchy" would >> be the master tape? >> >> Truckin (w1) --> 1970-09-XX "Truckin" Master Tape (e3) --> 1970-11-1 >> Warner bros. release of "Truckin" single (m3) --> record copy (i3) >> >> If I don't know about all the tapes I might just put: >> Truckin (w1) --> 1970-08 to 1970-09 recordings --> 1970-11-1 Warner >> bros. release of "Truckin" single (m3) --> record copy (i3) >> >> or: >> >> Truckin (w1) --> 1970 Version (e3) --> 1970-11-1 Warner bros. release >> of "Truckin" single (m3) --> record copy (i3) >> >> What do you think, is there still a problem? >> Is there anything interesting for you in the above? >> >>> Besides that, I totally agree that FRBR is not extremely >>> prescriptive regarding how to model certain situations, but after >>> thinking about it for some time, I (now) think that this is >>> actually a benefit, as it doesn't enforce a specific setup, but >>> allows projects to implement it as they see fit. So in the end, I'm >>> not against your approach in general, I'm just against enforcing >>> your approach. The current implementation of FRBR in MEI tries to >>> keep this openness of FRBR, which I regard as a good thing. In the >>> end, all of us could be wrong ;-) >> I never wanted to enforce anything, only to show possibilities to >> be considered when implementing FRBR or to test the current >> implementation against. And I think you're absolutely right that this >> openness could be a benefit, as we will certainly miss possible >> complicated situations. >> >> benjamin >>> >>> Best, >>> Johannes >>> >>> On 20 Nov 2012, at 12:33, Axel Teich Geertinger wrote: >>> >>>> Hi Benni >>>> >>>> Perhaps we should remember that FRBR is intended for >>>> _bibliographic records_, not for descriptions of a work's >>>> reception history.
Thus, the premise for using FRBR is that in the >>>> end we want to describe bibliographic items. Since a performance >>>> itself isn't a bibliographic item, perhaps it does not have to fit >>>> in? Only if it results in such an item (via a manifestation), i.e. a >>>> recording, does it become truly relevant to use FRBR. The performance >>>> in that case is not the primary thing we want to describe, it is >>>> just the context that resulted in the recording manifestation. >>>> >>>> Just another 2 cents, >>>> Axel >>>> >>>> -----Original message----- >>>> From: mei-l-bounces at lists.uni-paderborn.de >>>>[mailto:mei-l-bounces at lists.uni-paderborn.de >>>> ] On behalf of Benjamin Wolff Bohl >>>> Sent: 20 November 2012 11:57 >>>> To: Music Encoding Initiative >>>> Subject: Re: [MEI-L] FRBR in MEI >>>> >>>> Hi Perry, >>>> thanks for some clarifying approaches; >>>> further statements inline >>>> >>>> On 16.11.2012 22:25, Roland, Perry (pdr4h) wrote: >>>>> Random comments on the discussion so far. Sorry if this gets long. >>>>> >>>>> When contemplating performances and recordings, it seems to me >>>>> that people often have trouble reaching agreement on the term >>>>> "sound recording". Andrew's slides label the *expression* as >>>>> "the sound recording", but others might label the *manifestation* >>>>> as "the sound recording". You might say the expression is the >>>>> "act of making a recording" and the manifestation is the >>>>> "recording that results". >>>>> >>>>> To disentangle the different uses of the term "recording", it >>>>> helps me to remember that an expression is not a physical entity, >>>>> but a manifestation is. Therefore, I prefer to think of the >>>>> expression as "the performance" (the non-physical thing being >>>>> recorded) and the manifestation as "the recording" (the physical >>>>> thing). This fits with the way libraries have traditionally >>>>> cataloged recordings, i.e., CDs, LPs, cassettes, wax cylinders, ...
>>>> I completely agree on that, this being the reason why I used both the >>>> terms recording and record, with record being on the manifestation/ >>>> item-level and recording being rather on the expression- >>>> manifestation-level. Why so? Recording has to be subordinate to >>>> work after all, and a recording is not just a simple physical >>>> manifestation but a multistep process involving conceptual and >>>> creative work done by producers and engineers. >>>> So talking about a recording as only being a manifestation becomes >>>> problematic, as it is an intellectual process resulting in a >>>> physical manifestation. That's the way I was looking at it (owed >>>> to my audio engineering past) and of course it can be seen >>>> differently. >>>>> In any case, the FRBR document, which Axel cites, says a >>>>> *performance is an expression* and a *recording is a >>>>> manifestation*. >>>> This is perfectly plausible when disregarding the intellectual >>>> endeavour entangled with the "act of making a recording", as >>>> mentioned before. >>>>> The usual "waterfall" kind of diagram is explained by saying the >>>>> term >>>>> "work" applies to conceptual content; "expression" applies to the >>>>> languages/media/versions in which the work occurs; "manifestation" >>>>> applies to the formats in which each expression is available; and >>>>> "item" applies to individual copies of a single format. (Here >>>>> "media" >>>>> means "medium of expression", say written language as opposed to >>>>> film, >>>>> and "format" means physical format, as in printed book as opposed >>>>> to >>>>> audio CD.) >>>>> >>>>> Taking another tack, though, often it is easier for me to think >>>>> of FRBR "from the bottom up", rather than start from the work and >>>>> proceed "down" the waterfall diagram.
Using the recording >>>>> example, the item is the exemplar I hold in my hand, the >>>>> manifestation is all of the copies of that exemplar (or better >>>>> yet, all the information shared by all those copies), the >>>>> expression is the version of the work that is represented by the >>>>> manifestation (e.g., Jo's nose flute + harpsichord version and >>>>> the orchestral version are different expressions), and the work >>>>> is an intellectual creation/idea (e.g., Bohl's op. 1, the one >>>>> that goes da, da, da, daaaaaa, reeep! reeep! reeep!). >>>>> >>>>> Using this "bottom up" thinking helps avoid mental contortions >>>>> regarding what the work is -- the work is simply the thing at the >>>>> end of this mental process. From there on, there are work-to- >>>>> work relationships, so we don't have to think about whether >>>>> "Romeo and Juliet", "Westside Story", and every other story about >>>>> star-crossed lovers are expressions of an ur-work with its own >>>>> manifestations and so on, which would lead us to a different >>>>> "waterfall" conclusion each time we discover a new work or >>>>> expression. >>>> The idea of approaching the FRBR model "from the bottom" is great. >>>> And to be honest, it was something I did when drawing my model, >>>> especially concerning the record and recording portion of it. I >>>> started out from the work on the top right and from the individual >>>> record bottom right and tried to fill in as many steps as >>>> possible, always wondering whether it be physical or conceptual. >>>> Actually I had the recording in between expression and >>>> manifestation in the first place, as I had the audio tape or >>>> digital audio in between manifestation and item. >>>> The parallel processes from a work to an item (regardless of >>>> whichever form this may have) are owed to perspective and goal.
>>>> When talking about graphical sources I completely agree with the >>>> idea of a certain instrumentation version or the like being an >>>> expression, a print run being a manifestation, an individual copy >>>> of which would be an item. >>>>> Instead of creating separate expression-level markup for each >>>>> performance, Axel treats some expressions (performances) as >>>>> events related to another expression of a work (the orchestral >>>>> version vs. the nose flute version). This is fine. As Johannes >>>>> already pointed out, separate elements for the >>>>> performances can be generated from the markup, if >>>>> necessary. Conversely, there's nothing wrong with creating >>>>> separate elements for each performance and relating >>>>> them to other appropriate expressions and/or relating them >>>>> directly to the work. If necessary, given accurate place and >>>>> date information, the kind of markup could be created >>>>> from the separate elements. So, six of one ... >>>> I can agree here, too. I only wondered: if the sound wave resulting >>>> from the performance was the physical item (specific performers on >>>> a specific date), then consequently a series of performances by >>>> conductor and orchestra would make up the manifestation; the >>>> expression then would be the concept that the conductor developed >>>> studying his "source material" and making up the way he wanted the >>>> composition to be realized, ergo his "personal version" of the >>>> piece, somewhat of a personal edition. >>>> The performance material would of course be an item of a certain >>>> print run >>>> (manifestation) of a certain edition (expression), having strong >>>> relationships to all of the above. >>>>> Johannes said "If there is a manuscript of the nose flute >>>>> version, the information about it would be spread between the >>>>> manifestation (source) and the item." Well, maybe.
But, I think >>>>> in this case it would be fine to describe the manifestation and >>>>> the item in a single place (within MEI) because >>>>> there's only one manifestation and one (and only one) item >>>>> associated with that manifestation. This is the traditional way >>>>> manuscripts have been described, pre-FRBR. Practically speaking, >>>>> the manifestation and the item are the same thing. But, as soon >>>>> as you want to say something special about a particular *part* >>>>> (as in "chunk", not performer part) of the manifestation, you >>>>> have to split these up again, for example, when one section of a >>>>> manuscript is located in Prague and another is in Manitoba. >>>> This was the idea behind me marking/stretching the autograph from >>>> expression to item. >>>> >>>> /benjamin >>>>> This is not the case with printed material where there is >>>>> *always* more than one item created from a manifestation, but it >>>>> is still traditional to describe the manifestation and item as >>>>> though they are the same thing. For example, it is common to >>>>> follow the manifestation's author, title, place of publication, >>>>> etc. with information about the location where one can obtain an >>>>> exemplar of the manifestation, say, UVa Library M 296.C57 1987. >>>>> >>>>> Johannes also said "So if you have two more measures in a source, >>>>> this >>>>> source establishes a new expression in FRBR." Again, maybe. The >>>>> FRBR >>>>> report (1997, amended and corrected through 2009) says >>>>> >>>>> "Variations within substantially the same expression (e.g., >>>>> slight variations that can be noticed between two states of the >>>>> same edition in the case of hand press production) would normally >>>>> be ignored or, in specialized catalogues, be reflected as a note >>>>> within the bibliographic record for the manifestation.
However, >>>>> for some applications of the model (e.g., early texts of rare >>>>> manuscripts), each variation may be viewed as a different >>>>> expression." >>>>> >>>>> The issue is in the determination of whether 2 things are >>>>> "substantially the same expression". As with many things, this >>>>> depends on the person making the determination, there is no >>>>> single correct answer. We intend that MEI will provide the tools >>>>> for accurate description using either approach. >>>>> >>>>> Just my 2 cents, >>>>> >>>>> -- >>>>> p. >>>>> >>>>> __________________________ >>>>> Perry Roland >>>>> Music Library >>>>> University of Virginia >>>>> P. O. Box 400175 >>>>> Charlottesville, VA 22904 >>>>> 434-982-2702 (w) >>>>> pdr4h (at) virginia (dot) edu >>>>> _______________________________________________ >>>>> mei-l mailing list >>>>> mei-l at lists.uni-paderborn.de >>>>> https://lists.uni-paderborn.de/mailman/listinfo/mei-l >>>> >>>> >>>> _______________________________________________ >>>> mei-l mailing list >>>> mei-l at lists.uni-paderborn.de >>>> https://lists.uni-paderborn.de/mailman/listinfo/mei-l >>>> >>>> _______________________________________________ >>>> mei-l mailing list >>>> mei-l at lists.uni-paderborn.de >>>> https://lists.uni-paderborn.de/mailman/listinfo/mei-l >>> >>> _______________________________________________ >>> mei-l mailing list >>> mei-l at lists.uni-paderborn.de >>> https://lists.uni-paderborn.de/mailman/listinfo/mei-l >> >> >> _______________________________________________ >> mei-l mailing list >> mei-l at lists.uni-paderborn.de >> https://lists.uni-paderborn.de/mailman/listinfo/mei-l >> > > >_______________________________________________ >mei-l mailing list >mei-l at lists.uni-paderborn.de >https://lists.uni-paderborn.de/mailman/listinfo/mei-l From bohl at edirom.de Thu Nov 22 19:11:08 2012 From: bohl at edirom.de (Benjamin Wolff Bohl) Date: Thu, 22 Nov 2012 19:11:08 +0100 Subject: [MEI-L] Ontologies (was FRBR in MEI) 
In-Reply-To: References: Message-ID: <50AE6ABC.1050606@edirom.de> Hi Dave, thanks for contributing to this discussion. > "musicbrainz_guid" > "amazon_asin" > "myspace" the musicbrainz_guid and amazon_asin are both identifiers that can be entered into MEI, depending on what it means for your project, e.g. as an identifier, as @dbkey or the like. A similar solution can be found in an ontology. > > So is the idea that a new property has to be added each time someone > builds a new electronic catalogue? Doesn't seem particularly scalable. Depending on your project you might want to add these identifiers to your MEI file or not. So I guess it is scalable to your needs, or am I getting this wrong? > Or > are there just going to be some arbitrarily privileged catalogues that > have associated properties? Can you explain this question a little more? Which catalogs do you mean, for example, and where would their properties be associated? Cheers, Benjamin > > - Dave Meredith, Aalborg > > > > On 21/11/2012 10:11, "Tim Crawford" wrote: > >> Dear All, >> >> While I have absolutely no wish to get further involved in this >> fascinating discussion, it strikes me that yes, indeed, much of what >> Benjamin is talking about may be handled better by a music ontology. >> >> And such a thing - the Music Ontology - has now reached an advanced >> state of development by researchers at the BBC and elsewhere over a >> number of years. For details and specification, see: >> >> http://musicontology.com/ >> >> I'm not sure this alone (or even the entire apparatus of the Semantic >> Web) will solve all the problem cases you might encounter or devise, >> but at least it might relieve some of the pressure caused by a desire >> to encode *everything* about a musical work within MEI ... >> >> Keep up the great work!
>> >> Tim Crawford, London >> >> On 21 Nov 2012, at 08:24, Benjamin Wolff Bohl wrote: >> >>> Hi there, >>> first thanks to Axel for sorting out that FRBR is for bibliographic >>> items, and thus performances, of which we have nothing more than the >>> knowledge that they happened, are out of scope. >>> I've somehow been thinking too much towards something like a music >>> ontology. >>> Moreover the idea of having a hierarchy in FRBR might have misled >>> me, proving Peter's earlier-mentioned concerns regarding this true. >>> >>> Nevertheless we've still got the recordings to deal with! >>> So Johannes, let's continue to disagree ;-) >>> >>>> Hi Benni, >>>> >>>> I hope I got one of your last mails wrong (in this regard), but >>>> just in case I didn't: By no means did I want to keep you from >>>> commenting on this (or other) thread(s), as your comments are >>>> extremely valuable and helpful - even if I sometimes disagree. If I >>>> offended you somehow, that wasn't my intention, and I want to >>>> apologize for it. >>>> >>>> That being said, I may continue to disagree ;-) Actually, I don't >>>> think we're that far away. The one thing you seem to get wrong >>>> though is the process from expression to manifestation, which is in >>>> no case trivial, nor a mere technological step without artistic >>>> contribution. When you consider the efforts necessary to engrave a >>>> piece of music, or the work on the preparation of the WeGA scores >>>> we see every day, you will agree that even in the graphical domain, >>>> this step is indeed highly artistic and involves a whole bunch of >>>> people with different expertise. I agree that the workflows for >>>> making recordings are different, but both things seem to be >>>> comparable from this perspective, don't you think? >>> I don't think either that we are too far away from each other now.
And I >>> never wanted to say that the transition from expression to >>> manifestation was merely technical, but maybe I should have >>> explained a little more what my initial graphic was all about with >>> the recordings, as by no means would it involve a mere technical >>> step. Beginning from the way the recording engineer set up his >>> microphones and what he did on his audio desk, through quite a number >>> of steps involving editing (cutting, rather technical but >>> nevertheless with artistic implications), mixing (very artistic) and >>> mastering (as artistic as technical), all of which would result in >>> archive material. Quite a lot of intellectual/artistic work is >>> involved in a record(ing). >>> >>> I'll have a try on this: >>> WORK - examination -> edition (e1) ------------- engraving -> print >>> run (m1) -printing -> print copy (i1) >>> >>> If you have the above, and try to get a parallel idea of the way to >>> the copy of a record on your shelf (i2): >>> (1) What will be the expression? >>> (2) What will be the manifestation? >>> (3) Is one such e-m-i stream sufficient? >>> >>> WORK - examination -> artist's interpretation (e2) - - >>>> ? -> record copy (i2) >>> or >>> >>> WORK - recording -> ? - mastering -> label's >>> press run -pressing (m2) -> record copy (i2) >>> >>> Maybe let's try to fill this in with one of Don's "Grateful" examples: >>> The song "Truckin" was released Nov 1 1970 on the album >>> "American Beauty" and as a single. The album was recorded in AUG- >>> SEP 1970, although it might be that the single version was recorded in >>> SEP, or maybe this specific song was recorded in SEP. >>> >>> Truckin (w1) --> 1970-08 to 1970-09 Session Tapes (e2) --> 1970-11-1 >>> Warner bros. release of "American Beauty" album (m2) --> record copy >>> (i2) >>> Truckin (w1) --> 1970-08 to 1970-09 Session Tapes (e3) --> 1970-11-1 >>> Warner bros.
release of "Truckin" single (m3) --> record copy (i3) >>> >>> But the single version and album version differ quite a lot, album >>> length 5:09 and single 3:13, so we should specify a little more. >>> >>> Truckin (w1) --> 1970-08 to 1970-09 Session Tape album-version (e2) >>> --> 1970-11-1 Warner bros. release of "American Beauty" album (m2) -- >>>> record copy (i2) >>> Truckin (w1) --> 1970-08 to 1970-09 Session Tape single-version (e3) >>> --> 1970-11-1 Warner bros. release of "Truckin" single (m3) --> >>> record copy (i3) >>> >>> A little complication: the single version was not recorded but taken >>> from the album version and edited down from 5 to 3 minutes; >>> nevertheless it is an expression of its own, but it hints us towards some >>> items that might reside on an archive's shelf, namely: >>> - session tapes : the tapes from the recording session (potentially >>> multi-track) >>> - edit tapes : the tapes where all the nice parts from the session >>> tapes were cut together to make up the material for the work >>> (potentially multi-track) >>> - mix tapes : a stereo mix version including lots of additional >>> features like for example delay effects etc. resembling the final >>> version >>> - master tapes : an acoustically slightly reshaped version of the >>> mix tape version in order to fit the technical limitations of a >>> certain target medium like vinyl, plus some intellectual work to >>> smooth out the mix (e.g. making all songs on a record sound similar) >>> >>> If they are relevant for my MEI file, they should go into , >>> but where should these go in FRBR? >>> Maybe they should all be separate expressions with strong relations >>> to each other? >>> So actually the only one in the same direct "FRBR hierarchy" would >>> be the master tape? >>> >>> Truckin (w1) --> 1970-09-XX "Truckin" Master Tape (e3) --> 1970-11-1 >>> Warner bros. release of "Truckin" single (m3) --> record copy (i3) >>> >>> If I don't know about all the tapes I might just put:
>>> Truckin (w1) --> 1970-08 to 1970-09 recordings --> 1970-11-1 Warner >>> bros. release of "Truckin" single (m3) --> record copy (i3) >>> >>> or: >>> >>> Truckin (w1) --> 1970 Version (e3) --> 1970-11-1 Warner bros. release >>> of "Truckin" single (m3) --> record copy (i3) >>> >>> What do you think, is there still a problem? >>> Is there anything interesting for you in the above? >>> >>>> Besides that, I totally agree that FRBR is not extremely >>>> prescriptive regarding how to model certain situations, but after >>>> thinking about it for some time, I (now) think that this is >>>> actually a benefit, as it doesn't enforce a specific setup, but >>>> allows projects to implement it as they see fit. So in the end, I'm >>>> not against your approach in general, I'm just against enforcing >>>> your approach. The current implementation of FRBR in MEI tries to >>>> keep this openness of FRBR, which I regard as a good thing. In the >>>> end, all of us could be wrong ;-) >>> I never wanted to enforce anything, only to show possibilities to >>> be considered when implementing FRBR or to test the current >>> implementation against. And I think you're absolutely right that this >>> openness could be a benefit, as we will certainly miss possible >>> complicated situations. >>> >>> benjamin >>>> Best, >>>> Johannes >>>> >>>> On 20 Nov 2012, at 12:33, Axel Teich Geertinger wrote: >>>> >>>>> Hi Benni >>>>> >>>>> Perhaps we should remember that FRBR is intended for >>>>> _bibliographic records_, not for descriptions of a work's >>>>> reception history. Thus, the premise for using FRBR is that in the >>>>> end we want to describe bibliographic items. Since a performance >>>>> itself isn't a bibliographic item, perhaps it does not have to fit >>>>> in? Only if it results in such an item (via a manifestation), i.e. a >>>>> recording, does it become truly relevant to use FRBR.
The performance >>>>> in that case is not the primary thing we want to describe, it is >>>>> just the context that resulted in the recording manifestation. >>>>> >>>>> Just another 2 cents, >>>>> Axel >>>>> >>>>> -----Original message----- >>>>> From: mei-l-bounces at lists.uni-paderborn.de >>>>> [mailto:mei-l-bounces at lists.uni-paderborn.de >>>>> ] On behalf of Benjamin Wolff Bohl >>>>> Sent: 20 November 2012 11:57 >>>>> To: Music Encoding Initiative >>>>> Subject: Re: [MEI-L] FRBR in MEI >>>>> >>>>> Hi Perry, >>>>> thanks for some clarifying approaches; >>>>> further statements inline >>>>> >>>>> On 16.11.2012 22:25, Roland, Perry (pdr4h) wrote: >>>>>> Random comments on the discussion so far. Sorry if this gets long. >>>>>> >>>>>> When contemplating performances and recordings, it seems to me >>>>>> that people often have trouble reaching agreement on the term >>>>>> "sound recording". Andrew's slides label the *expression* as >>>>>> "the sound recording", but others might label the *manifestation* >>>>>> as "the sound recording". You might say the expression is the >>>>>> "act of making a recording" and the manifestation is the >>>>>> "recording that results". >>>>>> >>>>>> To disentangle the different uses of the term "recording", it >>>>>> helps me to remember that an expression is not a physical entity, >>>>>> but a manifestation is. Therefore, I prefer to think of the >>>>>> expression as "the performance" (the non-physical thing being >>>>>> recorded) and the manifestation as "the recording" (the physical >>>>>> thing). This fits with the way libraries have traditionally >>>>>> cataloged recordings, i.e., CDs, LPs, cassettes, wax cylinders, ... >>>>> I completely agree on that, this being the reason why I used both the >>>>> terms recording and record, with record being on the manifestation/ >>>>> item-level and recording being rather on the expression- >>>>> manifestation-level. Why so?
Recording has to be subordinate to >>>>> work after all, and a recording is not just a simple physical >>>>> manifestation but a multistep process involving conceptual and >>>>> creative work done by producers and engineers. >>>>> So talking about a recording as only being a manifestation becomes >>>>> problematic, as it is an intellectual process resulting in a >>>>> physical manifestation. That's the way I was looking at it (owed >>>>> to my audio engineering past) and of course it can be seen >>>>> differently. >>>>>> In any case, the FRBR document, which Axel cites, says a >>>>>> *performance is an expression* and a *recording is a >>>>>> manifestation*. >>>>> This is perfectly plausible when disregarding the intellectual >>>>> endeavour entangled with the "act of making a recording", as >>>>> mentioned before. >>>>>> The usual "waterfall" kind of diagram is explained by saying the >>>>>> term >>>>>> "work" applies to conceptual content; "expression" applies to the >>>>>> languages/media/versions in which the work occurs; "manifestation" >>>>>> applies to the formats in which each expression is available; and >>>>>> "item" applies to individual copies of a single format. (Here >>>>>> "media" >>>>>> means "medium of expression", say written language as opposed to >>>>>> film, >>>>>> and "format" means physical format, as in printed book as opposed >>>>>> to >>>>>> audio CD.) >>>>>> >>>>>> Taking another tack, though, often it is easier for me to think >>>>>> of FRBR "from the bottom up", rather than start from the work and >>>>>> proceed "down" the waterfall diagram.
Using the recording >>>>>> example, the item is the exemplar I hold in my hand, the >>>>>> manifestation is all of the copies of that exemplar (or better >>>>>> yet, all the information shared by all those copies), the >>>>>> expression is the version of the work that is represented by the >>>>>> manifestation (e.g., Jo's nose flute + harpsichord version and >>>>>> the orchestral version are different expressions), and the work >>>>>> is an intellectual creation/idea (e.g., Bohl's op. 1, the one >>>>>> that goes da, da, da, daaaaaa, reeep! reeep! reeep!). >>>>>> >>>>>> Using this "bottom up" thinking helps avoid mental contortions >>>>>> regarding what the work is -- the work is simply the thing at the >>>>>> end of this mental process. From there on, there are work-to- >>>>>> work relationships, so we don't have to think about whether >>>>>> "Romeo and Juliet", "Westside Story", and every other story about >>>>>> star-crossed lovers are expressions of an ur-work with its own >>>>>> manifestations and so on, which would lead us to a different >>>>>> "waterfall" conclusion each time we discover a new work or >>>>>> expression. >>>>> The idea of approaching the FRBR model "from the bottom" is great. >>>>> And to be honest, it was something I did when drawing my model, >>>>> especially concerning the record and recording portion of it. I >>>>> started out from the work on the top right and from the individual >>>>> record bottom right and tried to fill in as many steps as >>>>> possible, always wondering whether it be physical or conceptual. >>>>> Actually I had the recording in between expression and >>>>> manifestation in the first place, as I had the audio tape or >>>>> digital audio in between manifestation and item. >>>>> The parallel processes from a work to an item (regardless of >>>>> whichever form this may have) are owed to perspective and goal.
>>>>> When talking about graphical sources I completely agree with the >>>>> idea of a certain instrumentation version or the like being an >>>>> expression, a print run being a manifestation, an individual copy >>>>> of which would be an item. >>>>>> Instead of creating separate expression-level markup for each >>>>>> performance, Axel treats some expressions (performances) as >>>>>> events related to another expression of a work (the orchestral >>>>>> version vs. the nose flute version). This is fine. As Johannes >>>>>> already pointed out, separate elements for the >>>>>> performances can be generated from the markup, if >>>>>> necessary. Conversely, there's nothing wrong with creating >>>>>> separate elements for each performance and relating >>>>>> them to other appropriate expressions and/or relating them >>>>>> directly to the work. If necessary, given accurate place and >>>>>> date information, the kind of markup could be created >>>>>> from the separate elements. So, six of one ... >>>>> I can agree here, too. I only wondered: if the sound wave resulting >>>>> from the performance was the physical item (specific performers on >>>>> a specific date), then consequently a series of performances by >>>>> conductor and orchestra would make up the manifestation; the >>>>> expression then would be the concept that the conductor developed >>>>> studying his "source material" and making up the way he wanted the >>>>> composition to be realized, ergo his "personal version" of the >>>>> piece, somewhat of a personal edition. >>>>> The performance material would of course be an item of a certain >>>>> print run >>>>> (manifestation) of a certain edition (expression), having strong >>>>> relationships to all of the above. >>>>>> Johannes said "If there is a manuscript of the nose flute >>>>>> version, the information about it would be spread between the >>>>>> manifestation (source) and the item." Well, maybe.
But, I think >>>>>> in this case it would be fine to describe the manifestation and >>>>>> the item in a single place (within MEI) because >>>>>> there's only one manifestation and one (and only one) item >>>>>> associated with that manifestation. This is the traditional way >>>>>> manuscripts have been described, pre-FRBR. Practically speaking, >>>>>> the manifestation and the item are the same thing. But, as soon >>>>>> as you want to say something special about a particular *part* >>>>>> (as in "chunk", not performer part) of the manifestation, you >>>>>> have to split these up again, for example, when one section of a >>>>>> manuscript is located in Prague and another is in Manitoba. >>>>> This was the idea behind me marking/stretching the autograph from >>>>> expression to item. >>>>> >>>>> /benjamin >>>>>> This is not the case with printed material where there is >>>>>> *always* more than one item created from a manifestation, but it >>>>>> is still traditional to describe the manifestation and item as >>>>>> though they are the same thing. For example, it is common to >>>>>> follow the manifestation's author, title, place of publication, >>>>>> etc. with information about the location where one can obtain an >>>>>> exemplar of the manifestation, say, UVa Library M 296.C57 1987. >>>>>> >>>>>> Johannes also said "So if you have two more measures in a source, >>>>>> this >>>>>> source establishes a new expression in FRBR." Again, maybe. The >>>>>> FRBR >>>>>> report (1997, amended and corrected through 2009) says >>>>>> >>>>>> "Variations within substantially the same expression (e.g., >>>>>> slight variations that can be noticed between two states of the >>>>>> same edition in the case of hand press production) would normally >>>>>> be ignored or, in specialized catalogues, be reflected as a note >>>>>> within the bibliographic record for the manifestation.
However, >>>>>> for some applications of the model (e.g., early texts of rare >>>>>> manuscripts), each variation may be viewed as a different >>>>>> expression." >>>>>> >>>>>> The issue is in the determination of whether 2 things are >>>>>> "substantially the same expression". As with many things, this >>>>>> depends on the person making the determination, there is no >>>>>> single correct answer. We intend that MEI will provide the tools >>>>>> for accurate description using either approach. >>>>>> >>>>>> Just my 2 cents, >>>>>> >>>>>> -- >>>>>> p. >>>>>> >>>>>> __________________________ >>>>>> Perry Roland >>>>>> Music Library >>>>>> University of Virginia >>>>>> P. O. Box 400175 >>>>>> Charlottesville, VA 22904 >>>>>> 434-982-2702 (w) >>>>>> pdr4h (at) virginia (dot) edu >>>>>> _______________________________________________ >>>>>> mei-l mailing list >>>>>> mei-l at lists.uni-paderborn.de >>>>>> https://lists.uni-paderborn.de/mailman/listinfo/mei-l >>>>> >>>>> _______________________________________________ >>>>> mei-l mailing list >>>>> mei-l at lists.uni-paderborn.de >>>>> https://lists.uni-paderborn.de/mailman/listinfo/mei-l >>>>> >>>>> _______________________________________________ >>>>> mei-l mailing list >>>>> mei-l at lists.uni-paderborn.de >>>>> https://lists.uni-paderborn.de/mailman/listinfo/mei-l >>>> _______________________________________________ >>>> mei-l mailing list >>>> mei-l at lists.uni-paderborn.de >>>> https://lists.uni-paderborn.de/mailman/listinfo/mei-l >>> >>> _______________________________________________ >>> mei-l mailing list >>> mei-l at lists.uni-paderborn.de >>> https://lists.uni-paderborn.de/mailman/listinfo/mei-l >>> >> >> _______________________________________________ >> mei-l mailing list >> mei-l at lists.uni-paderborn.de >> https://lists.uni-paderborn.de/mailman/listinfo/mei-l > > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > 
https://lists.uni-paderborn.de/mailman/listinfo/mei-l From dave at create.aau.dk Thu Nov 22 21:37:47 2012 From: dave at create.aau.dk (David Meredith) Date: Thu, 22 Nov 2012 20:37:47 +0000 Subject: [MEI-L] Ontologies (was FRBR in MEI) In-Reply-To: <50AE6ABC.1050606@edirom.de> Message-ID: I guess my point was that having a separate dedicated, "hard-coded" property for each website that refers to the object seems a bit inflexible. It would make more sense to me if one could associate with the object a list of (site, id) pairs, where the site could be a URL and the id would be the relevant id of the resource on that site. But I haven't studied this ontology in depth, so I may be missing the point... Dave On 22/11/2012 19:11, "Benjamin Wolff Bohl" wrote: >Hi Dave, >thanks for contributing to this discussion. > >> "musicbrainz_guid" >> "amazon_asin" >> "myspace" >the musicbrainz_guid and amazon_asin are both identifiers that can be >entered into MEI depending on what they mean for your project, e.g. as >an identifier such as @dbkey or the like. >A similar solution can be found in an ontology. >> >> So is the idea that a new property has to be added each time someone >> builds a new electronic catalogue? Doesn't seem particularly scalable. >Depending on your project you might want to add these identifiers to >your MEI file or not. So I guess it is scalable to your needs, or am I >getting this wrong? >> Or >> are there just going to be some arbitrarily privileged catalogues that >> have associated properties? >Can you explain this question a little more? Which catalogs do you >mean, for example, and where would their properties be associated? > >Cheers, Benjamin > >> >> - Dave Meredith, Aalborg >> >> >> >> On 21/11/2012 10:11, "Tim Crawford" wrote: >> >>> Dear All, >>> >>> While I have absolutely no wish to get further involved in this >>> fascinating discussion, it strikes me that yes, indeed, much of what >>> Benjamin is talking about may be handled better by a music ontology.
>>> >>> And such a thing - the Music Ontology - has now reached an advanced >>> state of development by researchers at the BBC and elsewhere over a >>> number of years. For details and specification, see: >>> >>> http://musicontology.com/ >>> >>> I'm not sure this alone (or even the entire apparatus of the Semantic >>> Web) will solve all the problem cases you might encounter or devise, >>> but at least it might relieve some of the pressure caused by a desire >>> to encode *everything* about a musical work within MEI ... >>> >>> Keep up the great work! >>> >>> Tim Crawford, London >>> >>> On 21 Nov 2012, at 08:24, Benjamin Wolff Bohl wrote: >>> >>>> Hi there, >>>> first thanks to Axel for sorting out that FRBR is for bibliographic >>>> items, and that performances of which we have nothing more than the >>>> knowledge that they happened are thus out of scope. >>>> I've somehow been thinking too much towards something like a music >>>> ontology. >>>> Moreover the idea of having a hierarchy in FRBR might have misled >>>> me, proving Peter's earlier-mentioned concerns about this true. >>>> >>>> Nevertheless we still got the recordings to deal with! >>>> So Johannes, let's continue to disagree ;-) >>>> >>>>> Hi Benni, >>>>> >>>>> I hope I got one of your last mails wrong (in this regard), but >>>>> just in case I didn't: By no means did I want to keep you from >>>>> commenting on this (or other) thread(s), as your comments are >>>>> extremely valuable and helpful - even if I sometimes disagree. If I >>>>> offended you somehow, that wasn't my intention, and I want to >>>>> apologize for it. >>>>> >>>>> That being said, I may continue to disagree ;-) Actually, I don't >>>>> think we're that far away. The one thing you seem to get wrong >>>>> though is the process from expression to manifestation, which is by >>>>> no means a trivial, merely technological step without artistic >>>>> contribution.
When you consider the efforts necessary to engrave a >>>>> piece of music, or the work on the preparation of the WeGA scores >>>>> we see every day, you will agree that even in the graphical domain, >>>>> this step is indeed highly artistic and involves a whole bunch of >>>>> people with different expertise. I agree that the workflows for >>>>> making recordings are different, but both things seem to be >>>>> comparable from this perspective, don't you think? >>>> I don't think either that we are too far away from each other now. And I >>>> never wanted to say that the transition from expression to >>>> manifestation was merely technical, but maybe I should have >>>> explained a little more what my initial graphic was all about with >>>> the recordings, as by no means would it involve a merely technical >>>> step. Beginning from the way the recording engineer set up his >>>> microphones and what he did on his audio desk, through quite a few >>>> steps involving editing (cutting, rather technical but >>>> nevertheless with artistic implications), mixing (very artistic) and >>>> mastering (as artistic as technical), all of which would result in >>>> archive material - quite a lot of intellectual/artistic work is >>>> involved in a record(ing). >>>> >>>> I'll have a try on this: >>>> WORK - examination -> edition (e1) ------------- engraving -> print >>>> run (m1) -printing -> print copy (i1) >>>> >>>> If you have the above, and try to get a parallel idea on the way to >>>> the copy of a record on your shelf (i2): >>>> (1) What will be the expression? >>>> (2) What will be the manifestation? >>>> (3) Is one e-m-i stream sufficient? >>>> >>>> WORK - examination -> artist's interpretation (e2) - - >>>>> ? -> record copy (i2) >>>> or >>>> >>>> WORK - recording -> ?
- mastering -> label's >>>> press run -pressing (m2) -> record copy (i2) >>>> >>>> Maybe let's try to fill this with one of Don's "Greatful" examples: >>>> The song "Truckin" was released Nov 1, 1970 on the album >>>> "American Beauty" and as a single. The album was recorded in AUG- >>>> SEP 1970, although it might be that the single version was recorded in >>>> SEP, or maybe this specific song was recorded in SEP. >>>> >>>> Truckin (w1) --> 1970-08 to 1970-09 Session Tapes (e2) --> 1970-11-1 >>>> Warner bros. release of "American Beauty" album (m2) --> record copy >>>> (i2) >>>> Truckin (w1) --> 1970-08 to 1970-09 Session Tapes (e3) --> 1970-11-1 >>>> Warner bros. release of "Truckin" single (m3) --> record copy (i3) >>>> >>>> But the single version and album version differ quite a lot, album >>>> length 5:09 and single 3:13, so we should specify a little more. >>>> >>>> Truckin (w1) --> 1970-08 to 1970-09 Session Tape album-version (e2) >>>> --> 1970-11-1 Warner bros. release of "American Beauty" album (m2) -- >>>>> record copy (i2) >>>> Truckin (w1) --> 1970-08 to 1970-09 Session Tape single-version (e3) >>>> --> 1970-11-1 Warner bros. release of "Truckin" single (m3) --> >>>> record copy (i3) >>>> >>>> A little complication: the single version was not recorded but taken >>>> from the album version and edited down from 5 to 3 minutes; >>>> nevertheless it is its own expression, but it hints us towards some >>>> items that might reside on an archive's shelf, namely: >>>> - session tapes : the tapes from the recording session (potentially >>>> multi-track) >>>> - edit tapes : the tapes where all the nice parts from the session >>>> tapes were cut together to make up the material for the work >>>> (potentially multi-track) >>>> - mix tapes : a stereo mix version including lots of additional >>>> features like for example delay effects etc.
resembling the final >>>> version >>>> - master tapes : an acoustically slightly reshaped version of the >>>> mix tape version in order to fit the technical limitations of a >>>> certain target medium like vinyl and some intellectual work to >>>> smoothen the mix (e.g. making all songs on a record sound similar) >>>> >>>> If they are relevant for my MEI file, they should go into , >>>> but where should these go in FRBR? >>>> Maybe they should all be separate expressions with strong relations >>>> to each other? >>>> So actually the only one in the direct same "FRBR hierarchy" would >>>> be the master tape? >>>> >>>> Truckin (w1) --> 1970-09-XX "Truckin" Master Tape (e3) --> 1970-11-1 >>>> Warner bros. release of "Truckin" single (m3) --> record copy (i3) >>>> >>>> If I don't know about all the tapes I might just put: >>>> Truckin (w1) --> 1970-08 to 1970-09 recordings --> 1970-11-1 Warner >>>> bros. release of "Truckin" single (m3) --> record copy (i3) >>>> >>>> or: >>>> >>>> Truckin (w1) --> 1970 Version (e3) --> 1970-11-1 Warner bros. release >>>> of "Truckin" single (m3) --> record copy (i3) >>>> >>>> What do you think, is there still a problem? >>>> Is there anything interesting for you in the above? >>>>> Besides that, I totally agree that FRBR is not extremely >>>>> prescriptive regarding how to model certain situations, but after >>>>> thinking about it for some time, I (now) think that this is >>>>> actually a benefit, as it doesn't enforce a specific setup, but >>>>> allows projects to implement it as they see fit. So in the end, I'm >>>>> not against your approach in general, I'm just against enforcing >>>>> your approach. The current implementation of FRBR in MEI tries to >>>>> keep this openness of FRBR, which I regard as a good thing. In the >>>>> end, all of us could be wrong ;-) >>>> I never wanted to enforce anything, only to point out possibilities to >>>> be considered when implementing FRBR or to test the current >>>> implementation against.
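The work-expression-manifestation-item chains in the exchange above are, at bottom, a small linked data model. Here is a minimal sketch of the single-version chain (the class and field names are invented for illustration; they are not MEI or FRBR vocabulary):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Entity:
    """One node in a FRBR work-expression-manifestation-item chain."""
    level: str                           # "work" | "expression" | "manifestation" | "item"
    label: str
    realizes: Optional["Entity"] = None  # link to the entity one level up

# The single-version chain from the example above:
work = Entity("work", "Truckin (w1)")
expr = Entity("expression", '1970-09 "Truckin" master tape (e3)', work)
manif = Entity("manifestation", '1970-11-01 Warner Bros. "Truckin" single (m3)', expr)
item = Entity("item", "record copy (i3)", manif)

def chain(e: Entity) -> list:
    """Walk from an item back up to its work."""
    labels = []
    while e is not None:
        labels.append(f"{e.level}: {e.label}")
        e = e.realizes
    return labels
```

Under this sketch, the session, edit, and mix tapes would be further Entity nodes related to the master-tape expression rather than links in the direct chain, which mirrors the "separate expressions with strong relations" option discussed above.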
And I think you're absolutely right that an >>>> openness could be a benefit as we certainly will miss possible >>>> complicated situations. >>>> >>>> benjamin >>>>> Best, >>>>> Johannes >>>>> >>>>> On 20.11.2012 at 12:33, Axel Teich Geertinger wrote: >>>>> >>>>>> Hi Benni >>>>>> >>>>>> Perhaps we should remember that FRBR is intended for >>>>>> _bibliographic records_, not for descriptions of a work's >>>>>> reception history. Thus, the premise for using FRBR is that in the >>>>>> end we want to describe bibliographic items. Since a performance >>>>>> itself isn't a bibliographic item, perhaps it does not have to fit >>>>>> in? Only if it results in such an item (via a manifestation), i.e. a >>>>>> recording, does it become truly relevant to use FRBR. The performance >>>>>> in that case is not the primary thing we want to describe, it is >>>>>> just the context that resulted in the recording manifestation. >>>>>> >>>>>> Just another 2 cents, >>>>>> Axel >>>>>> >>>>>> -----Original Message----- >>>>>> From: mei-l-bounces at lists.uni-paderborn.de >>>>>> [mailto:mei-l-bounces at lists.uni-paderborn.de >>>>>> ] On behalf of Benjamin Wolff Bohl >>>>>> Sent: 20 November 2012 11:57 >>>>>> To: Music Encoding Initiative >>>>>> Subject: Re: [MEI-L] FRBR in MEI >>>>>> >>>>>> Hi Perry, >>>>>> thanks for some clarifying approaches; >>>>>> further statements inline >>>>>> >>>>>> On 16.11.2012 22:25, Roland, Perry (pdr4h) wrote: >>>>>>> Random comments on the discussion so far. Sorry if this gets long. >>>>>>> >>>>>>> When contemplating performances and recordings, it seems to me >>>>>>> that people often have trouble reaching agreement on the term >>>>>>> "sound recording". Andrew's slides label the *expression* as >>>>>>> "the sound recording", but others might label the *manifestation* >>>>>>> as "the sound recording". You might say the expression is the >>>>>>> "act of making a recording" and the manifestation is the >>>>>>> "recording that results".
>>>>>>> >>>>>>> To disentangle the different uses of the term "recording", it >>>>>>> helps me to remember that an expression is not a physical entity, >>>>>>> but a manifestation is. Therefore, I prefer to think of the >>>>>>> expression as "the performance" (the non-physical thing being >>>>>>> recorded) and the manifestation as "the recording" (the physical >>>>>>> thing). This fits with the way libraries have traditionally >>>>>>> cataloged recordings, i.e., CDs, LPs, cassettes, wax cylinders, ... >>>>>> I completely agree on that, being the reason why I used both the >>>>>> terms recording and record, with record being on the manifestation/ >>>>>> item-level and recording being rather on the expression- >>>>>> manifestation-level. Why so? Recording has to be subordinate to >>>>>> work, after all, and a recording is not just a simple physical >>>>>> manifestation but a multistep process involving conceptual and >>>>>> creative work done by producers and engineers. >>>>>> So talking about a recording as only being a manifestation becomes >>>>>> problematic as it is an intellectual process resulting in a >>>>>> physical manifestation. That's the way I was looking at it (owed >>>>>> to my audio engineering past) and of course it can be seen >>>>>> differently. >>>>>>> In any case, the FRBR document, which Axel cites, says a >>>>>>> *performance is an expression* and a *recording is a >>>>>>> manifestation*. >>>>>> This is perfectly plausible when disregarding the intellectual >>>>>> endeavour entangled with the "act of making a recording", as >>>>>> mentioned before. >>>>>>> The usual "waterfall" kind of diagram is explained by saying the >>>>>>> term >>>>>>> "work" applies to conceptual content; "expression" applies to the >>>>>>> languages/media/versions in which the work occurs; "manifestation" >>>>>>> applies to the formats in which each expression is available; and >>>>>>> "item" applies to individual copies of a single format.
(Here >>>>>>> "media" >>>>>>> means "medium of expression", say written language as opposed to >>>>>>> film, >>>>>>> and "format" means physical format, as in printed book as opposed >>>>>>> to >>>>>>> audio CD.) >>>>>>> >>>>>>> Taking another tack, though, often it is easier for me to think >>>>>>> of FRBR "from the bottom up", rather than start from the work and >>>>>>> proceed "down" the waterfall diagram. Using the recording >>>>>>> example, the item is the exemplar I hold in my hand, the >>>>>>> manifestation is all of the copies of that exemplar (or better >>>>>>> yet, all the information shared by all those copies), the >>>>>>> expression is the version of the work that is represented by the >>>>>>> manifestation (e.g., Jo's nose flute + harpsichord version and >>>>>>> the orchestral version are different expressions), and the work >>>>>>> is an intellectual creation/idea (e.g., Bohl's op. 1, the one >>>>>>> that goes da, da, da, daaaaaa, reeep! reeep! reeep!). >>>>>>> >>>>>>> Using this "bottom up" thinking helps avoid mental contortions >>>>>>> regarding what the work is -- the work is simply the thing at the >>>>>>> end of this mental process. From there on, there are work-to- >>>>>>> work relationships, so we don't have to think about whether >>>>>>> "Romeo and Juliet", "Westside Story", and every other story about >>>>>>> star-crossed lovers are expressions of an ur-work with its own >>>>>>> manifestations and so on, which would lead us to a different >>>>>>> "waterfall" conclusion each time we discover a new work or >>>>>>> expression. >>>>>> The idea of approaching the FRBR model "from the bottom" is great. >>>>>> And to be honest, it was something I did when drawing my model, >>>>>> especially concerning the record and recording portion of it. I >>>>>> started out from work on the top right and from the individual >>>>>> record bottom right and tried to fill in as many steps as >>>>>> possible, always wondering whether it was physical or conceptual.
>>>>>> Actually I had the recording in between expression and >>>>>> manifestation in the first place, as I had the audio tape or >>>>>> digital audio in between manifestation and item. >>>>>> The parallel processes from a work to an item (regardless of >>>>>> whichever form this may have) are owed to perspective and goal. >>>>>> When talking about graphical sources I completely agree with the >>>>>> idea of a certain instrumentation version or the like being an >>>>>> expression, a print run being a manifestation, an individual copy >>>>>> of which would be an item. >>>>>>> Instead of creating separate expression-level markup for each >>>>>>> performance, Axel treats some expressions (performances) as >>>>>>> events related to another expression of a work (the orchestral >>>>>>> version vs. the nose flute version). This is fine. As Johannes >>>>>>> already pointed out, separate elements for the >>>>>>> performances can be generated from the markup, if >>>>>>> necessary. Conversely, there's nothing wrong with creating >>>>>>> separate elements for each performance and relating >>>>>>> them to other appropriate expressions and/or relating them >>>>>>> directly to the work. If necessary, given accurate place and >>>>>>> date information, the kind of markup could be created >>>>>>> from the separate elements. So, six of one ... >>>>>> I can agree here, too. I only wondered if the sound wave resulting >>>>>> from the performance was the physical item (specific performers on >>>>>> a specific date), then consequently a series of performances by >>>>>> conductor and orchestra would make up for the manifestation, the >>>>>> expression then would be the concept that the conductor developed >>>>>> studying his "source material" and making up the way he wanted the >>>>>> composition to be realized, ergo his "personal version" of the >>>>>> piece, somewhat of a personal edition.
>>>>>> The performance material of course being an item of a certain >>>>>> print run >>>>>> (manifestation) of a certain edition (expression), having strong >>>>>> relationships to all of the above. >>>>>>> Johannes said "If there is a manuscript of the nose flute >>>>>>> version, the information about it would be spread between the >>>>>>> manifestation (source) and the item." Well, maybe. But, I think >>>>>>> in this case it would be fine to describe the manifestation and >>>>>>> the item in a single place (within MEI) because >>>>>>> there's only one manifestation and one (and only one) item >>>>>>> associated with that manifestation. This is the traditional way >>>>>>> manuscripts have been described, pre-FRBR. Practically speaking, >>>>>>> the manifestation and the item are the same thing. But, as soon >>>>>>> as you want to say something special about a particular *part* >>>>>>> (as in "chunk", not performer part) of the manifestation, you >>>>>>> have to split these up again, for example, when one section of a >>>>>>> manuscript is located in Prague and another is in Manitoba. >>>>>> This was the idea behind me marking/stretching the autograph from >>>>>> expression to item. >>>>>> >>>>>> /benjamin >>>>>>> This is not the case with printed material where there is >>>>>>> *always* more than one item created from a manifestation, but it >>>>>>> is still traditional to describe the manifestation and item as >>>>>>> though they are the same thing. For example, it is common to >>>>>>> follow the manifestation's author, title, place of publication, >>>>>>> etc. with information about the location where one can obtain an >>>>>>> exemplar of the manifestation, say, UVa Library M 296.C57 1987. >>>>>>> >>>>>>> Johannes also said "So if you have two more measures in a source, >>>>>>> this >>>>>>> source establishes a new expression in FRBR." Again, maybe.
The >>>>>>> FRBR >>>>>>> report (1997, amended and corrected through 2009) says >>>>>>> >>>>>>> "Variations within substantially the same expression (e.g., >>>>>>> slight variations that can be noticed between two states of the >>>>>>> same edition in the case of hand press production) would normally >>>>>>> be ignored or, in specialized catalogues, be reflected as a note >>>>>>> within the bibliographic record for the manifestation. However, >>>>>>> for some applications of the model (e.g., early texts of rare >>>>>>> manuscripts), each variation may be viewed as a different >>>>>>> expression." >>>>>>> >>>>>>> The issue is in the determination of whether 2 things are >>>>>>> "substantially the same expression". As with many things, this >>>>>>> depends on the person making the determination, there is no >>>>>>> single correct answer. We intend that MEI will provide the tools >>>>>>> for accurate description using either approach. >>>>>>> >>>>>>> Just my 2 cents, >>>>>>> >>>>>>> -- >>>>>>> p. >>>>>>> >>>>>>> __________________________ >>>>>>> Perry Roland >>>>>>> Music Library >>>>>>> University of Virginia >>>>>>> P. O. 
Box 400175 >>>>>>> Charlottesville, VA 22904 >>>>>>> 434-982-2702 (w) >>>>>>> pdr4h (at) virginia (dot) edu >>>>>>> _______________________________________________ >>>>>>> mei-l mailing list >>>>>>> mei-l at lists.uni-paderborn.de >>>>>>> https://lists.uni-paderborn.de/mailman/listinfo/mei-l >_______________________________________________ >mei-l mailing list >mei-l at lists.uni-paderborn.de >https://lists.uni-paderborn.de/mailman/listinfo/mei-l From craigsapp at gmail.com Fri Nov 30 04:40:09 2012 From: craigsapp at gmail.com (Craig Sapp) Date: Thu, 29 Nov 2012 19:40:09 -0800 Subject: [MEI-L] page sizes In-Reply-To: References: <16763_1351898411_5094552A_16763_59_1_D2A91356-8368-4ECC-82BA-FA832A515EA3@edirom.de> <20121106235429.4mv0f8114w04gk4o@webmail.iu.edu> Message-ID: Hi Everyone, In case you were eagerly awaiting it, attached is the final version of my analysis of staff placement in SCORE.
SCORE data itself has no concept of physical units (with some minor caveats), so it would be a good model to observe. The physical units are defined at the last minute when you are ready to print, and are not defined while editing the music in the SCORE editor, which matches the idea which you are headed towards. See page 50 of the attached PDF for example default spacings in SCORE which is a good basic roadmap to how default spacing units are defined in SCORE. > This interline distance (which is already used by MEI) is a musical unit which describes > half the distance between two staff lines, I complain about how you are defining "interline". Interline is Latin for "between lines", not "halfway between lines". This will cause continual confusion, such as losing your spacecraft: http://mars.jpl.nasa.gov/msp98/news/mco990930.html How about "@semiline", "@hemiline", or "@demiline" instead? Or maybe "@halbline" :-) http://www.dailywritingtips.com/semi-demi-and-hemi * The nominal physical length of scoreDef/@interline.size in SCORE is 3.15 points (0.04375 inches, 1.11125 mm). This is when you print out the music using the default staff scaling and print size. Vertical values are always represented by this step size, and the data files themselves do not indicate that the final physical rendering is at 3.15 points (which is why I had to measure it off of the example on page 50). * So in SCORE, the distance between two staff lines is 2.0 "steps". And this means that the height of a staff is 8.0 steps. Every staff has an independent scaling factor which only affects the vertical dimension (there is no staff-level scaling for the horizontal dimension). So if a staff has a 50% scaling, all of its steps would be 1/2 of the size of the nominal height. * The default "successive staff spacing" is 18.0 steps, so there are 18.0 - 8.0 = 10.0 steps from the top of one staff to the bottom of the next. This default spacing is the framework over which individual staves may be scaled. 
This framework is also how SCORE avoids using physical measurements to place individual staves vertically on the page. Staves can be placed anywhere vertically, but their placements are in relation to their default positions. For example, a staff could be placed 15 steps above the top of the staff below by adding an extra offset of 5 steps to its default position (in SCOREese, set P4=5 for the top staff). * The staves each have their own scaling factor (called the staff's P5 in SCOREese). If P5 is 0.5, then the local staff's step size is now 1/2 of the default step size. This scaling factor only affects the height of the staves, not the length of the staves. Objects placed on a staff will have the staff's P5 scaling applied to their horizontal and vertical dimensions (they will shrink by 50% if the staff is scaled by 50%); their vertical placement will be scaled as well, but not their horizontal placement, which is independent of the staff scaling. Note that changing the scaling of a staff does not affect its default position on the page, but if there is a vertical offset from the default position, that offset will be scaled. * The horizontal distances in SCORE are described on a different scale than the vertical distances. They are described as the fractional position along the left/right sides of the default staff length. The nominal length of a staff is 7.5 inches. This length is divided up into 200 units, so the left side of a staff is at 0.0, and the right side is at 200.0. A length of 540 points (7.5 inches) divided by 200 is 2.7 points, so "200" was probably used to give an approximately equivalent unit to vertical steps. It would have been more elegant to set the default horizontal and vertical units to be the same by using a different vertical scaling... In any case the horizontal units are 6/7 of the vertical step units, so if you set a staff's scaling to 6/7 (85.71428...%), the horizontal units will match the vertical steps locally for that staff.
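The unit arithmetic described above can be summarized in a few lines (a sketch of my own using the constants given in the prose; the function names are not SCORE terminology):

```python
# Constants from the description above (nominal, print-time values):
VSTEP_PT = 3.15        # one vertical "step" = half the staff-line spacing, in points
HUNIT_PT = 540 / 200   # horizontal unit: 7.5 in (540 pt) staff length / 200 units

def vsteps_to_points(steps, staff_scale=1.0):
    """Vertical distance in points; staff_scale is the per-staff P5 factor."""
    return steps * VSTEP_PT * staff_scale

def hunits_to_points(units):
    """Horizontal placement is independent of the per-staff vertical scaling."""
    return units * HUNIT_PT

# Sanity checks against the prose:
assert abs(vsteps_to_points(8) - 25.2) < 1e-9      # staff height: 8 steps
assert abs(hunits_to_points(200) - 540.0) < 1e-9   # full default staff length
assert abs(HUNIT_PT / VSTEP_PT - 6 / 7) < 1e-9     # the 6/7 ratio noted above
```

With a staff scaled to 6/7 (P5 ~ 0.857), vsteps_to_points(1, 6/7) equals hunits_to_points(1), which is the local equivalence of horizontal and vertical units pointed out above.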
Another way of thinking about the relationship between the horizontal and vertical units in SCORE is that the default staff length is 171.428... steps. So all vertical and horizontal units can be related and a final scaling can be given to match the specified @interline physical distance between steps. * Horizontal units cannot be scaled within the SCORE editor, and can only be scaled at print time, such as to match the staff lengths to the distance between page margins. For final placement on a physical page, there are three important values: (1) The distance from the left side of the page to the left side of the staff (the "left margin", although the system brackets will fall into this margin, so not exactly the same as a text margin). The default left margin is 0.5 inches (plus a fixed extra 0.025 in). (2) The distance from the bottom of the page to the bottom line of the first staff. This is also not exactly a "margin" in the text sense, since notes/slurs/dynamics can fall within this bottom margin. The default bottom margin is 0.75 inches (plus a fixed extra 0.0625 inches). (3) The page scaling. This is the method to control the horizontal scale, which is not possible within the SCORE editor (other than trivial zooming). But the page scaling will also affect the vertical scale at the same time. The origin for the page scaling is the point defined by (1) and (2) above. In other words, the page scaling does not affect the page margins, but rather only affects the scaling of the music (you have to scale the music so that it falls at the correct top and right margin positions on your page). Notice that only two margins are defined when printing from SCORE. This perhaps gets to the point that Andrew was trying to make: > The concept of "physical" unit doesn't really translate well to editions that are meant for digital consumption only. > If I have a page meant for a tablet or digital music stand display, what does the "inch" unit mean?
Does it mean > render it as a physical inch on the screen, regardless of how many pixels it takes to represent it? Or does it > mean render it using a fixed number of pixels-per-inch, regardless of how large or small it makes it from one > display to another? E-ink displays challenge this concept, since they don't really have pixels, and high-resolution > displays also challenge it since the number of pixels it takes to represent a single physical unit can be completely > different. So we'll probably need some sort of proportional unit so that we can say that the page margin is a > percentage of the rendered display rather than a fixed unit of physical measurement. SCORE uses a single origin when printing the music on a page (left and bottom margins). And it is up to you to scale the music to correctly fit within the top and right margins of your desired paper size (or screen size, e-reader size, etc.). This could be done by specifying the top and right margins instead of scaling, but page-level scaling and top/right margins cannot be controlled independently in SCORE. -=+Craig -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: SCORE-Staff-Positions-20121128.pdf Type: application/pdf Size: 6046735 bytes Desc: not available URL: From kepper at edirom.de Fri Nov 30 08:47:34 2012 From: kepper at edirom.de (Johannes Kepper) Date: Fri, 30 Nov 2012 08:47:34 +0100 Subject: [MEI-L] attachment sizes Message-ID: <162B701B-83FF-4ED7-B4FA-802DC0093925@edirom.de> Dear list, I'm not sure if this has ever been said in public - the list accepts attachments only up to 5 MB without asking. If more than that, the list admin gets an error message and has to decide whether to keep or drop that mail (not only the attachment). I think it's still useful to restrict attachment sizes, so please do keep that limitation in mind when posting to MEI-L.
Best, Johannes From laurent at music.mcgill.ca Thu Dec 6 18:29:23 2012 From: laurent at music.mcgill.ca (Laurent Pugin) Date: Thu, 6 Dec 2012 18:29:23 +0100 Subject: [MEI-L] page sizes In-Reply-To: References: <16763_1351898411_5094552A_16763_59_1_D2A91356-8368-4ECC-82BA-FA832A515EA3@edirom.de> <20121106235429.4mv0f8114w04gk4o@webmail.iu.edu> Message-ID: Hi Craig, Thanks for the very detailed report. I particularly like the randomly placed staves of figure 7 ;-) There is certainly some interesting information about spacing, especially vertically. This is closely related to the discussion we had about units, and I agree that we can probably find a better name than interline. I am not sure about the use of the horizontal placement system outside Score, though. For the general organization of the music, it is actually very similar to what we have in Wolfgang (the music notation software created by Etienne Darbellay, on which Aruspix is partially based). Maybe not surprisingly, since they are from the same generation. I don't really see how this can become a CSS for music. Could you tell us more about it? As I understand it, it would be fairly similar to what we would like to achieve with the layout module in the sense that separating content and presentation is what CSS does. A difference with the layout module is that we have an additional level in the hierarchy, namely the systems. In Score (as in Wolfgang), we have staves directly within a page, because this is enough for representing them. This is a very economic way of representing a page of music, and it maybe has to do with the memory limitations they had when they started these software applications. I am not sure this would be optimal for MEI, and this is why I changed the internal representation in Aruspix.
As I said, the layout module was proposed for two reasons: 1) because a page-based representation did not seem to be an interesting option at that time, and 2) because it does offer a separation between content and presentation (i.e., we can have several presentations for the same content). This second argument seems to be appealing to several of us. I must confess, however, that implementing it is quite a challenge. I am a little bit concerned that, because of this, it will become a very powerful solution that will not be used beyond simple cases because of the complexity involved. It works well with Renaissance music since the general score organization is simple, but we should be able to go beyond this proof of concept. As we already discussed, maybe re-considering a page-based representation is a way to go. This does not mean that the layout module cannot exist in parallel. But I can see a fairly direct path from the Score representation as described to a MEI page-based representation, and this can very well be useful to you, Craig, as it would be for OMR people and maybe others who would like to have a very detailed source encoding. What do you think? Best, Laurent On Fri, Nov 30, 2012 at 4:40 AM, Craig Sapp wrote: > Hi Everyone, > > In case you were eagerly awaiting it, attached is the final version of my > analysis of staff placement in SCORE. > > > SCORE data itself has no concept of physical units (with some minor > caveats), so it would be a good model to observe. The physical units are > defined at the last minute when you are ready to print, and are not defined > while editing the music in the SCORE editor, which matches the idea which > you are headed towards. > > See page 50 of the attached PDF for example default spacings in SCORE > which is a good basic roadmap to how default spacing units are defined in > SCORE. 
> > > > > This interline distance (which is already used by MEI) is a musical > unit which describes > > half the distance between two staff lines, > > I complain about how you are defining "interline". Interline is Latin for > "between lines", not "halfway between lines". This will cause continual > confusion, such as losing your spacecraft: > http://mars.jpl.nasa.gov/msp98/news/mco990930.html > How about "@semiline", "@hemiline", or "@demiline" instead? Or maybe > "@halbline" :-) > http://www.dailywritingtips.com/semi-demi-and-hemi > > > * The nominal physical length of scoreDef/@interline.size in SCORE is 3.15 > points (0.04375 inches, 1.11125 mm). This is when you print out the music > using the default staff scaling and print size. Vertical values are always > represented by this step size, and the data files themselves do not > indicate that the final physical rendering is at 3.15 points (which is why > I had to measure it off of the example on page 50). > > * So in SCORE, the distance between two staff lines is 2.0 "steps". And > this means that the height of a staff is 8.0 steps. Every staff has an > independent scaling factor which only affects the vertical dimension (there > is no staff-level scaling for the horizontal dimension). So if a staff > has a 50% scaling, all of its steps would be 1/2 of the size of the nominal > height. > > * The default "successive staff spacing" is 18.0 steps, so there are 18.0 > - 8.0 = 10.0 steps from the top of one staff to the bottom of the next. > This default spacing is the framework over which individual staves may be > scaled. This framework is also how SCORE avoids using physical > measurements to place individual staves vertically on the page. Staves can > be placed anywhere vertically, but their placements are in relation to > their default positions. 
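As a quick sanity check on the vertical units just described, the nominal values can be put into a few lines of Python. This is only an illustrative sketch: the constants are the figures quoted above, and the names are mine, not part of SCORE.

```python
# Nominal vertical units in SCORE, per the description above.
# These names are illustrative; SCORE itself exposes no such API.

STEP_POINTS = 3.15           # nominal printed size of one vertical "step"
STAFF_SPACING_STEPS = 18.0   # default bottom-line-to-bottom-line staff spacing
STAFF_HEIGHT_STEPS = 8.0     # a five-line staff spans 8 steps (2 per line gap)

def local_step_points(p5_scale=1.0):
    """Printed size of one step on a staff with vertical scaling factor P5."""
    return STEP_POINTS * p5_scale

# Default gap between the top of one staff and the bottom of the next:
gap_steps = STAFF_SPACING_STEPS - STAFF_HEIGHT_STEPS   # 10.0 steps

# A staff scaled to 50% has steps half the nominal printed size:
half_step = local_step_points(0.5)                     # 1.575 points
```

The 10-step gap and the halved step size fall straight out of the quoted defaults.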
For example, a staff could be placed 15 steps > above the top of the staff below by adding an extra offset of 5 steps to > its default position (In SCOREese, set P4=5 for the top staff). > > * The staves each have their own scaling factor (called the staff's P5 in > SCOREese). If P5 is 0.5, then the local staff's step size is now 1/2 of > the default step size. This scaling factor only affects the height of the > staves, not the length of the staves. Objects placed on a staff will have > the staff's P5 scaling applied to their horizontal and vertical dimensions > (they will shrink by 50% if the staff is scaled by 50%); their vertical > placement will be scaled as well, but not their horizontal placement, which > is independent of the staff scaling. Note that changing the scaling of a > staff does not affect its default position on the page, but if there is a > vertical offset from the default position, that offset will be scaled. > > * The horizontal distances in SCORE are described on a different scale > than the vertical distances. They are described as the fractional position > along the left/right sides of the default staff length. The nominal length > of a staff is 7.5 inches. This length is divided up into 200 units, so the > left side of staves is at 0.0, and the right side is at 200.0. A length > of 540 points (7.5 inches) divided by 200 is 2.7 points, so "200" was > probably used to give an approximately equivalent unit to vertical steps. > It would have been more elegant to set the default horizontal and vertical > units to be the same by using a different vertical scaling... In any case > the horizontal units are 6/7 of the vertical step units, so if you set a > staff's scaling to 6/7 (85.71428...%), the horizontal units will match the > vertical steps locally for that staff. Another way of thinking about the > relationship between the horizontal and vertical units in SCORE is that the > default length of staves is 171.428... steps long. 
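The horizontal/vertical relationship in the last paragraph can be verified with a little arithmetic. This Python sketch uses only the nominal values given above (540 points per staff, 200 horizontal units, 3.15-point steps); the variable names are illustrative, not SCORE's.

```python
# Relating SCORE's horizontal units to vertical steps, per the text above.
# Nominal values from the email; the names are illustrative only.

POINTS_PER_INCH = 72.0
STEP_POINTS = 3.15             # nominal vertical step size in points
STAFF_LENGTH_INCHES = 7.5      # nominal staff length
H_UNITS_PER_STAFF = 200.0      # horizontal units spanning one staff (0..200)

# One horizontal unit: 540 points / 200 = 2.7 points.
h_unit_points = STAFF_LENGTH_INCHES * POINTS_PER_INCH / H_UNITS_PER_STAFF

# The horizontal unit is 6/7 of a vertical step (2.7 / 3.15)...
h_to_v_ratio = h_unit_points / STEP_POINTS

# ...so the default staff length works out to 171.428... steps.
staff_length_steps = H_UNITS_PER_STAFF * h_to_v_ratio
```

Scaling a staff to 6/7 (as suggested above) makes its local step equal to one horizontal unit, which is exactly what the ratio shows.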
So all vertical and > horizontal units can be related and a final scaling can be given to match > the specified @interline physical distance between steps. > > * Horizontal units cannot be scaled within the SCORE editor, and can only > be scaled at print time, such as to match the staff lengths to the distance > between page margins. > > For final placement on a physical page, there are three important values: > > (1) The distance from the left side of the page to the left side of the > staff (the "left margin", although the system brackets will fall into this > margin, so not exactly the same as a text margin). The default left margin > is 0.5 inches (plus a fixed extra 0.025 in). > > (2) The distance from the bottom of the page to the bottom line of the > first staff. This is also not exactly a "margin" in the text sense, since > notes/slurs/dynamics can fall within this bottom margin. The default > bottom margin is 0.75 inches (plus a fixed extra 0.0625 inches). > > (3) The page scaling. This is the method to control the horizontal scale, > which is not possible within the SCORE editor (other than trivial > zooming). But the page scaling will also affect the vertical scale at the > same time. The origin for the page scaling is the point defined by (1) and > (2) above. In other words, the page scaling does not affect the page > margins, but rather only affects the scaling of the music (you have to > scale the music so that it falls at the correct top and right margin > positions on your page). > > Notice that only two margins are defined when printing from SCORE. This > perhaps gets to the point that Andrew was trying to make: > > > > The concept of "physical" unit doesn't really translate well to > editions that are meant for digital consumption only. > > If I have a page meant for a tablet or digital music stand display, what > does the "inch" unit mean? 
Does it mean > > render it as a physical inch on the screen, regardless of how many > pixels it takes to represent it? Or does it > > mean render it using a fixed number of pixels-per-inch, regardless of > how large or small it makes it from one > > display to another. E-ink displays challenge this concept, since they > don't really have pixels, and high-resolution > > displays also challenge it since the number of pixels it takes to > represent a single physical unit can be completely > > different. So we'll probably need some sort of proportional unit so that > we can say that the page margin is a > > percentage of the rendered display rather than a fixed unit of physical > measurement. > > SCORE uses a single origin when printing the music on a page (left and > bottom margins). And it is up to you to scale the music to correctly fit > within the top and right margins of your desired paper size (or screen > size, e-reader size, etc.). This could be done by specifying the top and > right margins instead of scaling, but page-level scaling and top/right > margins cannot be controlled independently in SCORE. > > > -=+Craig > > > > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From kepper at edirom.de Thu Dec 6 19:55:29 2012 From: kepper at edirom.de (Johannes Kepper) Date: Thu, 6 Dec 2012 19:55:29 +0100 Subject: [MEI-L] page sizes In-Reply-To: References: <16763_1351898411_5094552A_16763_59_1_D2A91356-8368-4ECC-82BA-FA832A515EA3@edirom.de> <20121106235429.4mv0f8114w04gk4o@webmail.iu.edu> Message-ID: Hi all, there are so many different approaches, models, proposals etc. that I have completely lost track. 
It would be great to hear Craig's response regarding CSS for music, but the general intention of most of these things seems to be the wish for a clearer separation of content and rendition, in order to allow multiple renditions from the same source content. While I appreciate the discussion so far, I wonder if we couldn't break it down into more digestible issues. The question of units and their relationship seems to be such an issue. The possibilities for expressing pages in MEI seem to be another. Taking a divide-and-conquer approach, the whole discussion would surely be easier to handle if we divide it up. At the same time, we need to make sure that we keep all the bits and pieces together. I would suggest establishing something like an MEI Layout SIG (special interest group), which would coordinate this discussion. This group should also consider applying for a session and maybe an additional workshop at the conference next year in May, as this seems to be an ideal time frame to prepare and discuss these issues. Hopefully, a large part of the community will be available in Mainz, so we could make decisions there. Who would volunteer to participate in such a group, and what modus operandi would you suggest? Johannes On 06.12.2012, at 18:29, Laurent Pugin wrote: > [...] 
From pdr4h at eservices.virginia.edu Thu Dec 6 19:57:16 2012 From: pdr4h at eservices.virginia.edu (Roland, Perry (pdr4h)) Date: Thu, 6 Dec 2012 18:57:16 +0000 Subject: [MEI-L] physLoc In-Reply-To: <0B6F63F59F405E4C902DFE2C2329D0D1514F8F28@EXCHANGE-01.kb.dk> References: <0B6F63F59F405E4C902DFE2C2329D0D1514F8A45@EXCHANGE-01.kb.dk> , <0B6F63F59F405E4C902DFE2C2329D0D1514F8F28@EXCHANGE-01.kb.dk> Message-ID: Hi, everyone, I'm moving the discussion Axel and I have been having off-line to MEI-L as I think it may be of interest to others. Regarding the bibliographic customization posted earlier, Axel said: > I see that is moved out of , but is not. Shouldn't and go along? [...] > I see as something like the history of , and as such I would expect their close relationship to be reflected by the schema. It's a great improvement that is moved out of . But being a sibling of , while is still a child of does not make much sense to me. How about making a child of instead? That wouldn't further increase the number of immediate children of , and it would keep and together in a way that would make sense to me: ... ... ... ... I agree that can be a history of physical location(s), but it doesn't make sense as a child of when multiple elements are allowed, as is the case of or when an has component parts. In the following case ... ... does the provenance information pertain to only the copy in the first physical location or to both? If it pertains to both, then the element shouldn't be a child of the first , but should exist outside it. 
Assuming is permitted only within , if I want to say that both copies have always been kept together, then information will have to be repeated. For example, Always together Always together The same is true for ... ... The phrase "exist outside it" above, however, doesn't mean that it has to be a sibling of -- it can reside inside , which is a sibling of . Taking the first case above, can be moved outside either of the elements so that it can apply to both locations. ... ... As part of , can be associated with either or . This permits a description of provenance independent of the number of locations. If, however, one item's history is somehow different from its fellows, then this can be accommodated in the text as well. Making siblings of and (and probably by extension , , and as well) instead of children of will break backward compatibility, as will allowing them only within , and not . If they're allowed in both places (that is, in and as siblings of ), then that will increase the complexity of the schema and therefore decrease the likelihood it will be used properly. So, I'm inclined to leave well enough alone on this one. Best wishes, -- p. __________________________ Perry Roland Music Library University of Virginia P. O. Box 400175 Charlottesville, VA 22904 434-982-2702 (w) pdr4h (at) virginia (dot) edu -------------- next part -------------- An HTML attachment was scrubbed... URL: From slu at kb.dk Fri Dec 7 08:59:45 2012 From: slu at kb.dk (Sigfrid Lundberg) Date: Fri, 7 Dec 2012 07:59:45 +0000 Subject: [MEI-L] physLoc In-Reply-To: References: <0B6F63F59F405E4C902DFE2C2329D0D1514F8A45@EXCHANGE-01.kb.dk> , <0B6F63F59F405E4C902DFE2C2329D0D1514F8F28@EXCHANGE-01.kb.dk>, Message-ID: <0C090608704AF04E898055296C932B1251564FF8@EXCHANGE-01.kb.dk> Hi I do, naturally, agree with Axel. After all, we share an office. But when I read this I felt that a physLoc is a property or result of an acquisition, which is an event in the history of an object. 
I.e., somewhat like this Might be that this is a chicken-and-egg discussion Sigfrid ________________________________________ From: Roland, Perry (pdr4h) [pdr4h at eservices.virginia.edu] Sent: 6 December 2012 19:57 To: mei-l at lists.uni-paderborn.de Subject: RE: physLoc [...] From atge at kb.dk Fri Dec 7 09:46:54 2012 From: atge at kb.dk (Axel Teich Geertinger) Date: Fri, 7 Dec 2012 08:46:54 +0000 Subject: [MEI-L] physLoc In-Reply-To: <0C090608704AF04E898055296C932B1251564FF8@EXCHANGE-01.kb.dk> References: <0B6F63F59F405E4C902DFE2C2329D0D1514F8A45@EXCHANGE-01.kb.dk> , <0B6F63F59F405E4C902DFE2C2329D0D1514F8F28@EXCHANGE-01.kb.dk>, <0C090608704AF04E898055296C932B1251564FF8@EXCHANGE-01.kb.dk> Message-ID: <0B6F63F59F405E4C902DFE2C2329D0D1514F91D3@EXCHANGE-01.kb.dk> Hi all I think this discussion illustrates a problem of the non-FRBR approach to encoding source information: without the distinction between features common to all copies (items) and the individual items, we are in trouble when is used to describe more than one copy (item). This is true not only for physLoc, but for physDesc as well, because there may be some shared features (number of pages, for instance) and individual ones (binding, for instance). How would that information be organized without the FRBR level? Perhaps should not even be allowed in at all when using the bibl customization (and thus standard MEI, later?). 
Restricting it to would make it clear that - and , whether within or outside - refers to this particular item or item component. The schema actually allows only one per item (or item component). I think that keeps us clear of all the ambiguities we have in now. Actually, having multiple in is really a sort of alternative implementation of a FRBR , but a very limited one, isn't it? I believe the FRBR approach is the more consistent and more useful one. Backwards compatibility... well, compatibility with MEI 2010 was severely broken already with v. 2.0.0, and I doubt anyone has adopted the 2012 encoding yet. I don't think that should be the reason for not changing anything. Best, Axel > -----Original Message----- > From: mei-l-bounces at lists.uni-paderborn.de [mailto:mei-l-bounces at lists.uni-paderborn.de] On behalf of Sigfrid Lundberg > Sent: 7 December 2012 09:00 > To: Roland, Perry (pdr4h); mei-l at lists.uni-paderborn.de > Subject: Re: [MEI-L] physLoc > > [...] From andrew.hankinson at mail.mcgill.ca Fri Dec 7 15:08:52 2012 From: andrew.hankinson at mail.mcgill.ca (Andrew Hankinson) Date: Fri, 7 Dec 2012 09:08:52 -0500 Subject: [MEI-L] physLoc In-Reply-To: <18585_1354870032_50C1AD0F_18585_522_1_0B6F63F59F405E4C902DFE2C2329D0D1514F91D3@EXCHANGE-01.kb.dk> References: <0B6F63F59F405E4C902DFE2C2329D0D1514F8A45@EXCHANGE-01.kb.dk> , <0B6F63F59F405E4C902DFE2C2329D0D1514F8F28@EXCHANGE-01.kb.dk>, <0C090608704AF04E898055296C932B1251564FF8@EXCHANGE-01.kb.dk> <18585_1354870032_50C1AD0F_18585_522_1_0B6F63F59F405E4C902DFE2C2329D0D1514F91D3@EXCHANGE-01.kb.dk> Message-ID: > Backwards compatibility... well, compatibility with MEI 2010 was severely broken already with v. 2.0.0, and I doubt anyone has adopted the 2012 encoding yet. I don't think that should be the reason for not changing anything. I have no "horse in the race" for the source encoding problem, but I would like to jump in on this point. If we're breaking compatibility, then this is a 3.0.0 issue. -Andrew -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From atge at kb.dk Fri Dec 7 15:25:15 2012 From: atge at kb.dk (Axel Teich Geertinger) Date: Fri, 7 Dec 2012 14:25:15 +0000 Subject: [MEI-L] physLoc In-Reply-To: References: <0B6F63F59F405E4C902DFE2C2329D0D1514F8A45@EXCHANGE-01.kb.dk> , <0B6F63F59F405E4C902DFE2C2329D0D1514F8F28@EXCHANGE-01.kb.dk>, <0C090608704AF04E898055296C932B1251564FF8@EXCHANGE-01.kb.dk> <18585_1354870032_50C1AD0F_18585_522_1_0B6F63F59F405E4C902DFE2C2329D0D1514F91D3@EXCHANGE-01.kb.dk> Message-ID: <0B6F63F59F405E4C902DFE2C2329D0D1514F958F@EXCHANGE-01.kb.dk> True. But as far as I can see there are changes in the bibl customization breaking compatibility already, such as replacing with in . That may be a mistake, though. /axel Fra: mei-l-bounces at lists.uni-paderborn.de [mailto:mei-l-bounces at lists.uni-paderborn.de] P? vegne af Andrew Hankinson Sendt: 7. december 2012 15:09 Til: Music Encoding Initiative Emne: Re: [MEI-L] physLoc Backwards compatibility... well, compatibility to MEI 2010 was severely broken already with v. 2.0.0, and I doubt anyone has adapted the 2012 encoding yet. I don't think that should be the reason for not changing anything. I have no "horse in the race" for the source encoding problem, but I would like to jump in on this point. If we're breaking compatibility, then this is a 3.0.0 issue. -Andrew -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From kepper at edirom.de Fri Dec 7 16:39:24 2012 From: kepper at edirom.de (Johannes Kepper) Date: Fri, 7 Dec 2012 16:39:24 +0100 Subject: [MEI-L] physLoc In-Reply-To: <0B6F63F59F405E4C902DFE2C2329D0D1514F958F@EXCHANGE-01.kb.dk> References: <0B6F63F59F405E4C902DFE2C2329D0D1514F8A45@EXCHANGE-01.kb.dk> , <0B6F63F59F405E4C902DFE2C2329D0D1514F8F28@EXCHANGE-01.kb.dk>, <0C090608704AF04E898055296C932B1251564FF8@EXCHANGE-01.kb.dk> <18585_1354870032_50C1AD0F_18585_522_1_0B6F63F59F405E4C902DFE2C2329D0D1514F91D3@EXCHANGE-01.kb.dk> <0B6F63F59F405E4C902DFE2C2329D0D1514F958F@EXCHANGE-01.kb.dk> Message-ID: <5B4C4462-DFA5-42CB-BA6F-4C8FA9FC51D8@edirom.de> If I remember correctly, a second-digit change is allowed to break compatibility. Benjamin's report from our last meeting (mail from September 15th) reads: > The final conclusion was first digit (major changes: e.g. anything that introduces new models / new structures / new version of ODD), second digit (middling changes: more significant, probably breaking [compatibility]) and third digit (minor changes: mostly not breaking [compatibility]) and not restricting this to either specifications or guidelines. One could argue that the whole release, which will include not only the bibl customization but also the FRBR model, justifies a new first-digit release number. But we had that discussion already, and we decided to call it a 2.1.0 during the tech team meeting. Given the amount of time we already invested, do we really want to re-open that can of worms? I'm not against it; I just want to be sure it's necessary. Best, Johannes Am 07.12.2012 um 15:25 schrieb Axel Teich Geertinger : > True. But as far as I can see there are changes in the bibl customization breaking compatibility already, such as replacing with in . That may be a mistake, though. > > /axel > > > > Fra: mei-l-bounces at lists.uni-paderborn.de [mailto:mei-l-bounces at lists.uni-paderborn.de] På vegne af Andrew Hankinson > Sendt: 7. 
december 2012 15:09 > Til: Music Encoding Initiative > Emne: Re: [MEI-L] physLoc > > > > Backwards compatibility... well, compatibility to MEI 2010 was severely broken already with v. 2.0.0, and I doubt anyone has adapted the 2012 encoding yet. I don't think that should be the reason for not changing anything. > > I have no "horse in the race" for the source encoding problem, but I would like to jump in on this point. If we're breaking compatibility, then this is a 3.0.0 issue. > > -Andrew > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l From pdr4h at eservices.virginia.edu Fri Dec 7 17:01:02 2012 From: pdr4h at eservices.virginia.edu (Roland, Perry (pdr4h)) Date: Fri, 7 Dec 2012 16:01:02 +0000 Subject: [MEI-L] physLoc In-Reply-To: <0B6F63F59F405E4C902DFE2C2329D0D1514F91D3@EXCHANGE-01.kb.dk> References: <0B6F63F59F405E4C902DFE2C2329D0D1514F8A45@EXCHANGE-01.kb.dk> , <0B6F63F59F405E4C902DFE2C2329D0D1514F8F28@EXCHANGE-01.kb.dk>, <0C090608704AF04E898055296C932B1251564FF8@EXCHANGE-01.kb.dk>, <0B6F63F59F405E4C902DFE2C2329D0D1514F91D3@EXCHANGE-01.kb.dk> Message-ID: Greetings, I recognize Axel's point regarding the advantages of FRBR organization and, in the cases where it is superior, I expect that it will be used by MEI coders. After all, that's why we're working toward allowing FRBR in MEI anyway. But, to the best of our ability, we also have to accommodate those who choose not to use FRBR, in pursuit of what I'll call (for lack of a better term) "traditional bibliographic description". Accommodation of this already-existing approach is what I meant by "backward compatibility". In traditional bibliographic description (which rightly or wrongly mixes up description of work, expression, manifestation, and item) it is often necessary to provide locational information for source material, so eliminating <physLoc> from <source> isn't prudent. 
I agree with Axel that allowing multiple <physLoc> elements in <source> *is* a limited alternative to full-blown FRBR. And what's wrong with that? For some uses, that is, those that must incorporate traditional bibliographic descriptions or those that do not attempt to cram a lot of complexity into <source>, it will be just the right thing. For complex uses, that is, those that require detailed descriptions of (sometimes multiple) expressions, manifestations, and items, the FRBR approach will prove to be worth the effort. We must trust users of MEI to make intelligent decisions regarding which approach best fits their needs. -- p. __________________________ Perry Roland Music Library University of Virginia P. O. Box 400175 Charlottesville, VA 22904 434-982-2702 (w) pdr4h (at) virginia (dot) edu ________________________________________ From: mei-l-bounces at lists.uni-paderborn.de [mei-l-bounces at lists.uni-paderborn.de] on behalf of Axel Teich Geertinger [atge at kb.dk] Sent: Friday, December 07, 2012 3:46 AM To: Music Encoding Initiative Subject: Re: [MEI-L] physLoc Hi all I think this discussion illustrates a problem of the non-FRBR approach to encoding source information: Without the distinction between features common to all copies (items) and the individual items, we are in trouble when <source> is used to describe more than one copy (item). This is true not only for physLoc, but for physDesc as well, because there may be some shared features (number of pages, for instance) and individual ones (binding, for instance). How would that information be organized without the FRBR level? Perhaps <physLoc> should not even be allowed in <source> at all when using the bibl customization (and thus standard MEI, later?). Restricting it to <item> would make it clear that <physLoc> - and <provenance>, whether within or outside <physLoc> - refers to this particular item or item component. The schema actually allows only one <physLoc> per item (or item component). I think that keeps us clear of all the ambiguities we have in <source> now. 
Actually, having multiple in is really a sort of alternative implementation a FRBR , but a very limited one, isn't it? I believe the FRBR approach is the more consistent and more useful one. Backwards compatibility... well, compatibility to MEI 2010 was severely broken already with v. 2.0.0, and I doubt anyone has adapted the 2012 encoding yet. I don't think that should be the reason for not changing anything. Best, Axel > -----Oprindelig meddelelse----- > Fra: mei-l-bounces at lists.uni-paderborn.de [mailto:mei-l-bounces at lists.uni- > paderborn.de] P? vegne af Sigfrid Lundberg > Sendt: 7. december 2012 09:00 > Til: Roland, Perry (pdr4h); mei-l at lists.uni-paderborn.de > Emne: Re: [MEI-L] physLoc > > Hi > > I do, naturally agree with Axel. After all we share office. But when I read this I felt > that a physLoc is a property or result of an acquisition, which is an event in the > history of an object. I.e., somewhat like this > > > > > > > > > > > > > Might be that this is a chicken and egg discussion > > Sigfrid > ________________________________________ > Fra: Roland, Perry (pdr4h) [pdr4h at eservices.virginia.edu] > Sendt: 6. december 2012 19:57 > Til: mei-l at lists.uni-paderborn.de > Emne: RE: physLoc > > Hi, everyone, > > I'm moving the discussion Axel and I have been having off-line to MEI-L as I think > it may be of interest to others. > > Regarding the bibliographic customization posted earlier, Axel said: > > > I see that is moved out of , but is not. > Shouldn't and go along? > > [...] > > > I see as something like the history of , and as such I > would expect their close relationship to be reflected by the schema. It's a great > improvement that is moved out of . But being > a sibling of , while is still a child of does > not make much sense to me. How about making a child of > instead? That wouldn't further increase the number of immediate > children of , and it would keep and together in a > way that would make sense to me: > > > ... > > ... 
> > > ... > > > ... > > > I agree that can be a history of physical location(s), but it doesn't > make sense as a child of when multiple elements are > allowed, as is the case of or when an has component parts. > > In the following case > > > > > > > ... > > ... > > > does the provenance information pertain to only the copy in the first physical > location or to both? If it pertains to both, then the elemement > shouldn't be a child of the first , but should exist outside it. Assuming > is permitted only within , if I want to say that both copies > have always been kept together, then information will have to be repeated. For > example, > > > > > Always together > > > Always together > > > > The same is true for > > > > > > ... > > > > > ... > > > > > > The phrase "exist outside it" above, however, doesn't mean that it has to be a > sibling of -- it can reside inside , which is a sibling of > . Taking the first case above, can be moved outside > either of the elements so that it can apply to both locations. > > > > ... > > > > > > ... > > > As part of , can be associated with either or > . This permits a description of provenance independent of the number of > locations. If, however, one item's history is somehow different from its fellows, > then this can be accommodated in the text as well. > > Making siblings of and (and probably by > extension , , and as well) instead of children > of will break backward compatibility as will allowing them only within > , and not . If they're allowed in both places (that is, in > and as siblings of , then that will increase the complexity > of the schema and therefore decrease the likelihood it will be used properly. So, > I'm inclined to leave well enough alone on this one. > > Best wishes, > > -- > p. > > > __________________________ > Perry Roland > Music Library > University of Virginia > P. O. 
Box 400175 > Charlottesville, VA 22904 > 434-982-2702 (w) > pdr4h (at) virginia (dot) edu > > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l _______________________________________________ mei-l mailing list mei-l at lists.uni-paderborn.de https://lists.uni-paderborn.de/mailman/listinfo/mei-l From pdr4h at eservices.virginia.edu Fri Dec 7 17:02:24 2012 From: pdr4h at eservices.virginia.edu (Roland, Perry (pdr4h)) Date: Fri, 7 Dec 2012 16:02:24 +0000 Subject: [MEI-L] physLoc In-Reply-To: <5B4C4462-DFA5-42CB-BA6F-4C8FA9FC51D8@edirom.de> References: <0B6F63F59F405E4C902DFE2C2329D0D1514F8A45@EXCHANGE-01.kb.dk> , <0B6F63F59F405E4C902DFE2C2329D0D1514F8F28@EXCHANGE-01.kb.dk>, <0C090608704AF04E898055296C932B1251564FF8@EXCHANGE-01.kb.dk> <18585_1354870032_50C1AD0F_18585_522_1_0B6F63F59F405E4C902DFE2C2329D0D1514F91D3@EXCHANGE-01.kb.dk> <0B6F63F59F405E4C902DFE2C2329D0D1514F958F@EXCHANGE-01.kb.dk>, <5B4C4462-DFA5-42CB-BA6F-4C8FA9FC51D8@edirom.de> Message-ID: The die has been cast -- The releases scheduled for January and March 2013 will break compatibility with MEI 2012, the first (2.0.1) in order to correct errors and omissions in MEI 2012 and the second (2.1.0) to introduce new features. Let's move on. -- p. __________________________ Perry Roland Music Library University of Virginia P. O. Box 400175 Charlottesville, VA 22904 434-982-2702 (w) pdr4h (at) virginia (dot) edu ________________________________________ From: mei-l-bounces at lists.uni-paderborn.de [mei-l-bounces at lists.uni-paderborn.de] on behalf of Johannes Kepper [kepper at edirom.de] Sent: Friday, December 07, 2012 10:39 AM To: Music Encoding Initiative Subject: Re: [MEI-L] physLoc If I remember correctly, a second-digit change is allowed to break compatibility. Benjamin's report from our last meeting (mail from September 15th) reads: > The final conclusion was first digit (major changes: e.g. 
anything that introduces new models / new structures / new version of ODD), second digit (middling changes: more significant, probably breaking [compatibility]) and third digit (minor changes: mostly not breaking [compatibility]) and not restricting this to either specifications or guidelines. One could argue that the whole release, which will include not only the bibl customization, but also the FRB model, justifies a new first-digit release number. But we had that discussion already, and we decided to call it a 2.1.0 during the tech team meeting. Given the amount of time we already invested, do we really want to re-open that can of worms? I'm not against it, I just want to be sure it's necessary? Best, Johannes Am 07.12.2012 um 15:25 schrieb Axel Teich Geertinger : > True. But as far as I can see there are changes in the bibl customization breaking compatibility already, such as replacing with in . That may be a mistake, though. > > /axel > > > > Fra: mei-l-bounces at lists.uni-paderborn.de [mailto:mei-l-bounces at lists.uni-paderborn.de] P? vegne af Andrew Hankinson > Sendt: 7. december 2012 15:09 > Til: Music Encoding Initiative > Emne: Re: [MEI-L] physLoc > > > > Backwards compatibility... well, compatibility to MEI 2010 was severely broken already with v. 2.0.0, and I doubt anyone has adapted the 2012 encoding yet. I don't think that should be the reason for not changing anything. > > I have no "horse in the race" for the source encoding problem, but I would like to jump in on this point. If we're breaking compatibility, then this is a 3.0.0 issue. 
> > -Andrew > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l _______________________________________________ mei-l mailing list mei-l at lists.uni-paderborn.de https://lists.uni-paderborn.de/mailman/listinfo/mei-l From andrew.hankinson at mail.mcgill.ca Fri Dec 7 18:10:41 2012 From: andrew.hankinson at mail.mcgill.ca (Andrew Hankinson) Date: Fri, 7 Dec 2012 12:10:41 -0500 Subject: [MEI-L] physLoc In-Reply-To: <28216_1354896163_50C2131C_28216_60_31_BBCC497C40D85642B90E9F94FC30343D0EFC5714@GRANT.eservices.virginia.edu> References: <0B6F63F59F405E4C902DFE2C2329D0D1514F8A45@EXCHANGE-01.kb.dk> , <0B6F63F59F405E4C902DFE2C2329D0D1514F8F28@EXCHANGE-01.kb.dk>, <0C090608704AF04E898055296C932B1251564FF8@EXCHANGE-01.kb.dk> <18585_1354870032_50C1AD0F_18585_522_1_0B6F63F59F405E4C902DFE2C2329D0D1514F91D3@EXCHANGE-01.kb.dk> <0B6F63F59F405E4C902DFE2C2329D0D1514F958F@EXCHANGE-01.kb.dk>, <5B4C4462-DFA5-42CB-BA6F-4C8FA9FC51D8@edirom.de> <28216_1354896163_50C2131C_28216_60_31_BBCC497C40D85642B90E9F94FC30343D0EFC5714@GRANT.eservices.virginia.edu> Message-ID: <53B58A7E-6EAC-42EA-9219-F8A28B429B0A@mail.mcgill.ca> I admit that I'm at a disadvantage, since I couldn't make it to the last developers call, but I feel I must push the need for *some* backwards compatibility fairly strongly. Whether that's a first-or-second digit release is somewhat irrelevant, since breaking compatibility is a pain, no matter what we call it. I don't think breaking compatibility whenever we need to form "a more perfect spec" is a sustainable way forward. This could mean something like ensuring we have a stylesheet in place when a compatibility-breaking version is released to transform older versions to newer versions, but I feel very strongly that if we're to gain some traction we cannot always present a moving target to the folks that are implementing MEI. 
Here's a modest proposal: -- New compatibility-breaking changes are placed in an ODD customization somewhere (perhaps the incubator). -- Those who are seeking to have those customizations rolled into "core" must also supply an XSLT to transform documents in the current version to their proposed version. In my opinion, MusicXML goes too far in maintaining backwards-compatibility at the expense of more sane or robust methods of representation, but I'd not like to see us adopt an opposite, but just as absolute, practice. -Andrew On 2012-12-07, at 11:02 AM, "Roland, Perry (pdr4h)" wrote: > The die has been cast -- The releases scheduled for January and March 2013 will break compatibility with MEI 2012, the first (2.0.1) in order to correct errors and omissions in MEI 2012 and the second (2.1.0) to introduce new features. Let's move on. > > -- > p. > > > __________________________ > Perry Roland > Music Library > University of Virginia > P. O. Box 400175 > Charlottesville, VA 22904 > 434-982-2702 (w) > pdr4h (at) virginia (dot) edu > ________________________________________ > From: mei-l-bounces at lists.uni-paderborn.de [mei-l-bounces at lists.uni-paderborn.de] on behalf of Johannes Kepper [kepper at edirom.de] > Sent: Friday, December 07, 2012 10:39 AM > To: Music Encoding Initiative > Subject: Re: [MEI-L] physLoc > > If I remember correctly, a second-digit change is allowed to break compatibility. Benjamin's report from our last meeting (mail from September 15th) reads: > >> The final conclusion was first digit (major changes: e.g. anything that introduces new models / new structures / new version of ODD), second digit (middling changes: more significant, probably breaking [compatibility]) and third digit (minor changes: mostly not breaking [compatibility]) and not restricting this to either specifications or guidelines. 
> > One could argue that the whole release, which will include not only the bibl customization, but also the FRB model, justifies a new first-digit release number. But we had that discussion already, and we decided to call it a 2.1.0 during the tech team meeting. Given the amount of time we already invested, do we really want to re-open that can of worms? I'm not against it, I just want to be sure it's necessary? > > Best, > Johannes > > > > Am 07.12.2012 um 15:25 schrieb Axel Teich Geertinger : > >> True. But as far as I can see there are changes in the bibl customization breaking compatibility already, such as replacing with in . That may be a mistake, though. >> >> /axel >> >> >> >> Fra: mei-l-bounces at lists.uni-paderborn.de [mailto:mei-l-bounces at lists.uni-paderborn.de] P? vegne af Andrew Hankinson >> Sendt: 7. december 2012 15:09 >> Til: Music Encoding Initiative >> Emne: Re: [MEI-L] physLoc >> >> >> >> Backwards compatibility... well, compatibility to MEI 2010 was severely broken already with v. 2.0.0, and I doubt anyone has adapted the 2012 encoding yet. I don't think that should be the reason for not changing anything. >> >> I have no "horse in the race" for the source encoding problem, but I would like to jump in on this point. If we're breaking compatibility, then this is a 3.0.0 issue. >> >> -Andrew >> _______________________________________________ >> mei-l mailing list >> mei-l at lists.uni-paderborn.de >> https://lists.uni-paderborn.de/mailman/listinfo/mei-l > > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l From donbyrd at indiana.edu Fri Dec 7 18:39:44 2012 From: donbyrd at indiana.edu (Byrd, Donald A.) 
Date: Fri, 7 Dec 2012 12:39:44 -0500 Subject: [MEI-L] page sizes; break discussion down? Layout SIG? In-Reply-To: References: <16763_1351898411_5094552A_16763_59_1_D2A91356-8368-4ECC-82BA-FA832A515EA3@edirom.de> <20121106235429.4mv0f8114w04gk4o@webmail.iu.edu> Message-ID: <20121207123944.uov3yruce8csskw8@webmail.iu.edu> I've been trying for the last few weeks to figure out how to contribute to the discussion. One thing I've been doing is writing something about Nightingale's use of coordinate systems, including units, but not yet covering page layout, which (as you say) is an issue that can be separated out. Anyway, I think your ideas are super, Johannes! -- both separating discussion into "more digestible" chunks and establishing a Layout SIG. My future is still very unclear, but I'd like to participate in it, and at the moment, it looks like I'd have time. --Don On Thu, 6 Dec 2012 19:55:29 +0100, Johannes Kepper wrote: > Hi all, > > there are so many different approaches, models, proposals etc. that I > completely lost my overview. It would be great to hear Craig's > response regarding CSS for music, but the general intention of most > of these things seems to be the wish for a clearer separation of > content and rendition in order to allow multiple renditions from the > same source content. > > While I appreciate the discussion so far, I wonder if we couldn't > break it down to more digestible issues. The question of units and > their relationship seems to be such an issue. The possibilities of > expressing pages in MEI seems to be another. Adopting a common > algorithm, it seems the whole discussion would be easier to conquer > when we divide it? At the same time, we need to make sure that we > keep all the bits and pieces together. I would suggest to establish > something like an MEI Layout SIG (special interest group), which > coordinates the discussion of this. 
This group should also consider > to apply for a session and maybe an additional workshop at the > conference next year in May, as this seems to be an ideal time frame > to prepare and discuss these issues. Hopefully, there will be a large > part of the community available in Mainz, so we could take decisions > there. Who would volunteer to participate in such a group, and what > modus operandi would you suggest? > > Johannes > > > Am 06.12.2012 um 18:29 schrieb Laurent Pugin: > >> Hi Craig, >> >> Thanks for the very detailed report. I particularly like the randomly >> placed staves of figure 7 ;-) >> >> There is certainly some interesting information about spacing, >> especially vertically. This is closely related to the discussion we >> had about units, and I agree that we can probably find a better name >> than interline. I am not sure about the use of the horizontal placement >> system outside Score, though. >> >> For the general organization of the music, it is actually very >> similar to what we have in Wolfgang (the music notation software >> created by Etienne Darbellay, on which Aruspix is partially based). >> Maybe not surprisingly, since they are from the same generation. >> I don't really see how this can become a CSS for music. Could you >> tell us more about it? >> >> As I understand it, it would be fairly similar to what we would like >> to achieve with the layout module in the sense that separating >> content and presentation is what CSS does. A difference with the >> layout module is that we have an additional level in the hierarchy, >> namely the systems. In Score (as in Wolfgang), we have staves >> directly within a page, because this is enough for representing >> them. This is a very economical way of representing a page of music >> and it maybe has to do with the memory limitations they had when >> they started these software applications. 
I am not sure this would >> be optimal for MEI, and this is why I changed the internal >> representation in Aruspix. >> >> As I said, the layout module was proposed for two reasons: 1) >> because a page-based representation did not seem to be an >> interesting option at that time, and 2) because it does offer a >> separation between content and presentation (i.e., we can have >> several presentations for the same content). This second argument >> seems to be appealing to several of us. I must confess, however, >> that implementing it is quite a challenge. I am a little bit >> concerned that because of this, it will become a very powerful >> solution that will not be used beyond simple cases because of the >> complexity involved. It works well with Renaissance music since the >> general score organization is simple, but we should be able to go beyond >> this proof of concept. As we already discussed, maybe re-considering >> a page-based representation is a way to go. This does not mean that >> the layout module cannot exist in parallel. But I can see a fairly >> direct path from the Score representation as described to a MEI >> page-based representation, and this can very well be useful to you, >> Craig, as it would be for OMR people and maybe others who would like >> to have a very detailed source encoding. What do you think? >> >> Best, >> Laurent >> >> On Fri, Nov 30, 2012 at 4:40 AM, Craig Sapp wrote: >> Hi Everyone, >> >> In case you were eagerly awaiting it, attached is the final version >> of my analysis of staff placement in SCORE. >> >> >> SCORE data itself has no concept of physical units (with some minor caveats), so it would be a good model to observe. The physical >> units are defined at the last minute when you are ready to print, >> and are not defined while editing the music in the SCORE editor, >> which matches the idea you are headed towards. 
>> >> See page 50 of the attached PDF for example default spacings in >> SCORE which is a good basic roadmap to how default spacing units are >> defined in SCORE. >> >> >> >> > This interline distance (which is already used by MEI) is a >> musical unit which describes >> > half the distance between two staff lines, >> >> I complain about how you are defining "interline". Interline is >> Latin for "between lines", not "halfway between lines". This will >> cause continual confusion, such as losing your spacecraft: >> http://mars.jpl.nasa.gov/msp98/news/mco990930.html >> How about "@semiline", "@hemiline", or "@demiline" instead? Or >> maybe "@halbline" :-) >> http://www.dailywritingtips.com/semi-demi-and-hemi >> >> >> * The nominal physical length of scoreDef/@interline.size in SCORE >> is 3.15 points (0.04375 inches, 1.11125 mm). This is when you print >> out the music using the default staff scaling and print size. >> Vertical values are always represented by this step size, and the >> data files themselves do not indicate that the final physical >> rendering is at 3.15 points (which is why I had to measure it off of >> the example on page 50). >> >> * So in SCORE, the distance between two staff lines is 2.0 "steps". >> And this means that the height of a staff is 8.0 steps. Every staff >> has an independent scaling factor which only affects the vertical >> dimension (there is no staff-level scaling for the horizontal >> dimension). So if a staff has a 50% scaling, all of its steps >> would be 1/2 of the size of the nominal height. >> >> * The default "successive staff spacing" is 18.0 steps, so there are >> 18.0 - 8.0 = 10.0 steps from the top of one staff to the bottom of >> the next. This default spacing is the framework over which >> individual staves may be scaled. This framework is also how SCORE >> avoids using physical measurements to place individual staves >> vertically on the page. 
Staves can be placed anywhere vertically, >> but their placements are in relation to their default positions. >> For example, a staff could be placed 15 steps above the top of the >> staff below by adding an extra offset of 5 steps to its default >> position (In SCOREese, set P4=5 for the top staff). >> >> * The staves each have their own scaling factor (called the staff's >> P5 in SCOREese). If P5 is 0.5, then the local staff's step size is >> now 1/2 of the default step size. This scaling factor only affects >> the height of the staves, not the length of the staves. Object >> placed on a staff will have the staff's P5 scaling applied to their >> horizontal and vertical dimensions (they will shrink by 50% if the >> staff is scaled by 50%), their vertical placement will be scaled as >> well, but not their horizontal placement which is independent of the >> staff scaling. Note that changing the scaling of a staff does not >> affect its default position on the page, but if there is a vertical >> offset from the default position, that offset will be scaled. >> >> * The horizontal distances in SCORE are described on a different >> scale than the vertical distances. They are described as the >> fractional position along the left/right sides of the default staff >> length. The nominal length of a staff is 7.5 inches. This length >> is divided up into 200 units, so the left side of staves are at 0.0, >> and the right side is at 200.0. A length of 540 points (7.5 inches) >> divided by 200 is 2.7 points, so "200" was probably used to give an >> approximately equivalent unit to vertical steps. It would have been >> more elegant to set the default horizontal and vertical units to be >> the same by using a different vertical scaling... In any case the >> horizontal units are 6/7 of the vertical step units, so if you set a >> staff's scaling to 6/7 (85.71428...%), the horizontal units will >> match the vertical steps locally for that staff. 
Another way of >> thinking about the relationship between the horizontal and vertical >> units in SCORE is that the default length of staves is 171.428... >> steps. So all vertical and horizontal units can be related and >> a final scaling can be given to match the specified @interline >> physical distance between steps. >> >> * horizontal units cannot be scaled within the SCORE editor, and can >> only be scaled at print time, such as to match the staff lengths to >> the distance between page margins. >> >> For final placement on a physical page, there are three important values: >> >> (1) The distance from the left side of the page to the left side of >> the staff (the "left margin", although the system brackets will fall >> into this margin, so not exactly the same as a text margin). The >> default left margin is 0.5 inches (plus a fixed extra 0.025 in). >> >> (2) The distance from the bottom of the page to the bottom line of >> the first staff. This is also not exactly a "margin" in the text >> sense, since notes/slurs/dynamics can fall within this bottom >> margin. The default bottom margin is 0.75 inches (plus a fixed >> extra 0.0625 inches). >> >> (3) The page scaling. This is the method to control the horizontal >> scale which is not possible within the SCORE editor (other than >> trivial zooming). But the page scaling will also affect the >> vertical scale at the same time. The origin for the page scaling is >> the point defined by (1) and (2) above. In other words, the page >> scaling does not affect the page margins, but rather only affects >> the scaling of the music (you have to scale the music so that it >> falls at the correct top and right margin positions on your page). 
>> > If I have a page meant for a tablet or digital music stand display, what does the "inch" unit mean? Does it mean render it as a physical inch on the screen, regardless of how many pixels it takes to represent it? Or does it mean render it using a fixed number of pixels-per-inch, regardless of how large or small it makes it from one display to another. E-ink displays challenge this concept, since they don't really have pixels, and high-resolution displays also challenge it since the number of pixels it takes to represent a single physical unit can be completely different. So we'll probably need some sort of proportional unit so that we can say that the page margin is a percentage of the rendered display rather than a fixed unit of physical measurement.
>>
>> SCORE uses a single origin when printing the music on a page (left and bottom margins). And it is up to you to scale the music to correctly fit within the top and right margins of your desired paper size (or screen size, e-reader size, etc.). This could be done by specifying the top and right margins instead of scaling, but page-level scaling and top/right margins cannot be controlled independently in SCORE.
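The unit relationships in the report above can be restated numerically. The following is a sketch, not SCORE code: the constant and function names are invented, and it only encodes the values quoted in the report (3.15-point steps, 200 horizontal units across a 540-point nominal staff, and the default 0.525 in / 0.8125 in page origin):

```python
# Sketch (not SCORE code) of the unit relationships described above.
# All names are invented; the constants are the values quoted in the report.

STEP_PT = 3.15          # one vertical "step" prints at 3.15 points by default
HUNIT_PT = 540.0 / 200  # one horizontal unit = 2.7 points (540-pt staff / 200)

def vertical_pt(steps, staff_scale=1.0):
    """Vertical distance in points; the staff's P5 scaling applies."""
    return steps * STEP_PT * staff_scale

def horizontal_pt(hunits):
    """Horizontal distance in points; unaffected by staff scaling."""
    return hunits * HUNIT_PT

# One horizontal unit is 6/7 of a vertical step:
assert abs(HUNIT_PT / STEP_PT - 6 / 7) < 1e-9

# A staff scaled to 6/7 has local steps equal in size to horizontal units:
assert abs(vertical_pt(1, staff_scale=6 / 7) - horizontal_pt(1)) < 1e-9

# Default page origin: left 0.5 + 0.025 in, bottom 0.75 + 0.0625 in.
LEFT_MARGIN_IN = 0.5 + 0.025
BOTTOM_MARGIN_IN = 0.75 + 0.0625

def place_on_page(hunits, steps, page_scale=1.0):
    """Position on the page, in points from the lower-left corner.
    Page scaling leaves the origin fixed and scales only the music."""
    x = LEFT_MARGIN_IN * 72 + horizontal_pt(hunits) * page_scale
    y = BOTTOM_MARGIN_IN * 72 + vertical_pt(steps) * page_scale
    return x, y
```

With these definitions, horizontal_pt(200) recovers the 540-point nominal staff length, and the last function mirrors the report's point that page scaling changes the size of the music without moving the left/bottom origin.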
>> -=+Craig

> _______________________________________________
> mei-l mailing list
> mei-l at lists.uni-paderborn.de
> https://lists.uni-paderborn.de/mailman/listinfo/mei-l

--
Donald Byrd
Woodrow Wilson Indiana Teaching Fellow
Adjunct Associate Professor of Informatics
Indiana University, Bloomington

From pdr4h at eservices.virginia.edu Fri Dec 7 19:18:37 2012
From: pdr4h at eservices.virginia.edu (Roland, Perry (pdr4h))
Date: Fri, 7 Dec 2012 18:18:37 +0000
Subject: [MEI-L] physLoc
In-Reply-To: <53B58A7E-6EAC-42EA-9219-F8A28B429B0A@mail.mcgill.ca>
References: <0B6F63F59F405E4C902DFE2C2329D0D1514F8A45@EXCHANGE-01.kb.dk> <0B6F63F59F405E4C902DFE2C2329D0D1514F8F28@EXCHANGE-01.kb.dk> <0C090608704AF04E898055296C932B1251564FF8@EXCHANGE-01.kb.dk> <18585_1354870032_50C1AD0F_18585_522_1_0B6F63F59F405E4C902DFE2C2329D0D1514F91D3@EXCHANGE-01.kb.dk> <0B6F63F59F405E4C902DFE2C2329D0D1514F958F@EXCHANGE-01.kb.dk> <5B4C4462-DFA5-42CB-BA6F-4C8FA9FC51D8@edirom.de> <28216_1354896163_50C2131C_28216_60_31_BBCC497C40D85642B90E9F94FC30343D0EFC5714@GRANT.eservices.virginia.edu> <53B58A7E-6EAC-42EA-9219-F8A28B429B0A@mail.mcgill.ca>
Message-ID:

Hi, Andrew,

I agree that MEI can't be a moving target forever, but that doesn't mean it shouldn't ever change. As you say, a hard-line position at either extreme (between always changing and never changing) isn't useful. "Compromise" is often the operative word.

I also agree with your statement that *some* backward compatibility is necessary. And I am attempting to maintain it wherever possible.

Your "modest proposal" was essentially what was agreed upon at the last technical group meeting. That's why the mei-Bibl customization was placed in the incubator and posted to MEI-L (leading to this discussion). That is also why I'm currently preparing an "MEI 2013" customization that contains other changes. And I hope we can discuss that on MEI-L as well.
The tech group did not include the requirement of an accompanying stylesheet for the conversion of existing documents to the proposed modification(s), but did agree that such a thing would be necessary with the adoption of the modification(s) into "core" MEI. It would be wonderful to be proactive about this, but the reality is that the stylesheet for moving from one version to the next will probably always lag behind the newest release, because the changes will be moving targets themselves. It would be difficult to write a stylesheet for conversion to a new version until that new version is settled. But the release of a new version doesn't have to be held up while a conversion stylesheet is authored.

Personally, rather than two fairly closely spaced releases next year, I would have preferred to work towards just one. In my opinion, two releases that both break existing documents give the impression that we're breaking things willy-nilly. Shooting for a single major change in March/April better matches our grant schedule and provides us more time to prepare for the change, say by providing conversion stylesheets. :-)

--
p.

__________________________
Perry Roland
Music Library
University of Virginia
P. O. Box 400175
Charlottesville, VA 22904
434-982-2702 (w)
pdr4h (at) virginia (dot) edu
________________________________________
From: mei-l-bounces at lists.uni-paderborn.de [mei-l-bounces at lists.uni-paderborn.de] on behalf of Andrew Hankinson [andrew.hankinson at mail.mcgill.ca]
Sent: Friday, December 07, 2012 12:10 PM
To: Music Encoding Initiative
Subject: Re: [MEI-L] physLoc

I admit that I'm at a disadvantage, since I couldn't make it to the last developers call, but I feel I must push the need for *some* backwards compatibility fairly strongly. Whether that's a first-or-second-digit release is somewhat irrelevant, since breaking compatibility is a pain, no matter what we call it.
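The conversion stylesheet discussed above would in practice be an XSLT; purely as an illustration of the shape of such a version-to-version transform, here is a minimal Python sketch. The element names and the old-to-new mapping are invented for illustration and are not actual MEI changes:

```python
# Illustrative only: a version-migration transform that renames elements
# according to an old->new mapping, in the spirit of the conversion
# stylesheets discussed here (which would really be XSLT). The mapping
# and element names below are invented, not real MEI 2012->2013 changes.
import xml.etree.ElementTree as ET

RENAMES = {"oldElement": "newElement"}  # hypothetical mapping

def migrate(elem):
    """Rename elem and all of its descendants in place per RENAMES."""
    if elem.tag in RENAMES:
        elem.tag = RENAMES[elem.tag]
    for child in elem:
        migrate(child)
    return elem

doc = ET.fromstring("<mei><oldElement n='1'/></mei>")
migrate(doc)
assert doc.find("newElement") is not None
```

A real conversion would also have to move attributes and restructure content models, which is why the tech group tied such stylesheets to a settled release rather than to proposals still in flux.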
I don't think breaking compatibility whenever we need to form "a more perfect spec" is a sustainable way forward. This could mean something like ensuring we have a stylesheet in place when a compatibility-breaking version is released to transform older versions to newer versions, but I feel very strongly that if we're to gain some traction we cannot always present a moving target to the folks that are implementing MEI.

Here's a modest proposal:

-- New compatibility-breaking changes are placed in an ODD customization somewhere (perhaps the incubator).
-- Those who are seeking to have those customizations rolled into "core" must also supply an XSLT to transform documents in the current version to their proposed version.

In my opinion, MusicXML goes too far in maintaining backwards-compatibility at the expense of more sane or robust methods of representation, but I'd not like to see us adopt an opposite, but just as absolute, practice.

-Andrew

On 2012-12-07, at 11:02 AM, "Roland, Perry (pdr4h)" wrote:

> The die has been cast -- The releases scheduled for January and March 2013 will break compatibility with MEI 2012, the first (2.0.1) in order to correct errors and omissions in MEI 2012 and the second (2.1.0) to introduce new features. Let's move on.
>
> --
> p.
>
> __________________________
> Perry Roland
> Music Library
> University of Virginia
> P. O. Box 400175
> Charlottesville, VA 22904
> 434-982-2702 (w)
> pdr4h (at) virginia (dot) edu
> ________________________________________
> From: mei-l-bounces at lists.uni-paderborn.de [mei-l-bounces at lists.uni-paderborn.de] on behalf of Johannes Kepper [kepper at edirom.de]
> Sent: Friday, December 07, 2012 10:39 AM
> To: Music Encoding Initiative
> Subject: Re: [MEI-L] physLoc
>
> If I remember correctly, a second-digit change is allowed to break compatibility. Benjamin's report from our last meeting (mail from September 15th) reads:
>
> >> The final conclusion was first digit (major changes: e.g.
anything that introduces new models / new structures / new version of ODD), second digit (middling changes: more significant, probably breaking [compatibility]) and third digit (minor changes: mostly not breaking [compatibility]) and not restricting this to either specifications or guidelines.
>
> One could argue that the whole release, which will include not only the bibl customization but also the FRBR model, justifies a new first-digit release number. But we had that discussion already, and we decided to call it a 2.1.0 during the tech team meeting. Given the amount of time we already invested, do we really want to re-open that can of worms? I'm not against it, I just want to be sure it's necessary.
>
> Best,
> Johannes
>
> On 07.12.2012, at 15:25, Axel Teich Geertinger wrote:
>
>> True. But as far as I can see there are changes in the bibl customization breaking compatibility already, such as replacing with in . That may be a mistake, though.
>>
>> /axel
>>
>> From: mei-l-bounces at lists.uni-paderborn.de [mailto:mei-l-bounces at lists.uni-paderborn.de] on behalf of Andrew Hankinson
>> Sent: 7 December 2012 15:09
>> To: Music Encoding Initiative
>> Subject: Re: [MEI-L] physLoc
>>
>> Backwards compatibility... well, compatibility to MEI 2010 was severely broken already with v. 2.0.0, and I doubt anyone has adapted the 2012 encoding yet. I don't think that should be the reason for not changing anything.
>>
>> I have no "horse in the race" for the source encoding problem, but I would like to jump in on this point. If we're breaking compatibility, then this is a 3.0.0 issue.
>> -Andrew
>>
>> _______________________________________________
>> mei-l mailing list
>> mei-l at lists.uni-paderborn.de
>> https://lists.uni-paderborn.de/mailman/listinfo/mei-l
>
> _______________________________________________
> mei-l mailing list
> mei-l at lists.uni-paderborn.de
> https://lists.uni-paderborn.de/mailman/listinfo/mei-l

_______________________________________________
mei-l mailing list
mei-l at lists.uni-paderborn.de
https://lists.uni-paderborn.de/mailman/listinfo/mei-l

From atge at kb.dk Sat Dec 8 11:59:23 2012
From: atge at kb.dk (Axel Teich Geertinger)
Date: Sat, 8 Dec 2012 10:59:23 +0000
Subject: [MEI-L] physLoc
In-Reply-To:
References: <0B6F63F59F405E4C902DFE2C2329D0D1514F8A45@EXCHANGE-01.kb.dk> <0B6F63F59F405E4C902DFE2C2329D0D1514F8F28@EXCHANGE-01.kb.dk>
Message-ID: <0B6F63F59F405E4C902DFE2C2329D0D1514F98F6@EXCHANGE-01.kb.dk>

Hi Perry,

to get back to the physLoc question: you wrote

> does the provenance information pertain to only the copy in the first physical location or to both? If it pertains to both, then the element shouldn't be a child of the first , but should exist outside it.

Assuming is permitted only within , if I want to say that both copies have always been kept together, then information will have to be repeated. For example, Always together Always together

I see that this would require information to be repeated. However, I think usually the situation would rather be that items in different physical locations (typically copies kept in different archives) would have different provenance. How would you describe that? You would have something like: This relates to one location This relates to the other location some archive some other archive

With and separated, you would have to use IDREFs to clarify which provenance is related to which location (or item), right?
Is that better than repeating information or using something like @sameas in those (rare, I would say) cases where several items in _different_ locations (or with different shelf marks) share the same provenance?

Have a nice weekend,
Axel
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From pdr4h at eservices.virginia.edu Sat Dec 8 15:25:38 2012
From: pdr4h at eservices.virginia.edu (Roland, Perry (pdr4h))
Date: Sat, 8 Dec 2012 14:25:38 +0000
Subject: [MEI-L] physLoc
In-Reply-To: <0B6F63F59F405E4C902DFE2C2329D0D1514F98F6@EXCHANGE-01.kb.dk>
References: <0B6F63F59F405E4C902DFE2C2329D0D1514F8A45@EXCHANGE-01.kb.dk> <0B6F63F59F405E4C902DFE2C2329D0D1514F8F28@EXCHANGE-01.kb.dk> <0B6F63F59F405E4C902DFE2C2329D0D1514F98F6@EXCHANGE-01.kb.dk>
Message-ID:

Axel,

To describe the provenance of two copies in different locations, I would leave your example exactly as it is. Or I might even compress the two elements into one -- This relates to one location. This relates to the other location. some archive some other archive

The text in explains its relationship to the items in the different locations. @sameas is incorrect, but I would not attempt to connect the element to the elements. That's a level of detail that is not required using this "text-based" approach. Going to the FRBR approach is a better option for that level of detail.

In order to make the distinction between the two approaches more clear, it might be worth considering allowing only one in . The example then would be This relates to one location. This relates to the other location. some archive. some other archive

--
p.

__________________________
Perry Roland
Music Library
University of Virginia
P. O.
Box 400175
Charlottesville, VA 22904
434-982-2702 (w)
pdr4h (at) virginia (dot) edu
________________________________
From: mei-l-bounces at lists.uni-paderborn.de [mei-l-bounces at lists.uni-paderborn.de] on behalf of Axel Teich Geertinger [atge at kb.dk]
Sent: Saturday, December 08, 2012 5:59 AM
To: Music Encoding Initiative
Subject: Re: [MEI-L] physLoc

[...]

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From laurent at music.mcgill.ca Mon Dec 10 11:50:48 2012
From: laurent at music.mcgill.ca (Laurent Pugin)
Date: Mon, 10 Dec 2012 11:50:48 +0100
Subject: [MEI-L] page sizes
In-Reply-To: <25145_1354820224_50C0EA80_25145_95_1_B7E73178-1B13-4EAD-90A3-EE7305469DF9@edirom.de>
References: <16763_1351898411_5094552A_16763_59_1_D2A91356-8368-4ECC-82BA-FA832A515EA3@edirom.de> <20121106235429.4mv0f8114w04gk4o@webmail.iu.edu> <25145_1354820224_50C0EA80_25145_95_1_B7E73178-1B13-4EAD-90A3-EE7305469DF9@edirom.de>
Message-ID:

Hi Johannes,

I am of course interested in participating in a SIG on layout questions. Who else?

Laurent

On Thu, Dec 6, 2012 at 7:55 PM, Johannes Kepper wrote:
> Hi all,
>
> there are so many different approaches, models, proposals etc. that I completely lost my overview. It would be great to hear Craig's response regarding CSS for music, but the general intention of most of these things seems to be the wish for a clearer separation of content and rendition in order to allow multiple renditions from the same source content.
>
> While I appreciate the discussion so far, I wonder if we couldn't break it down to more digestible issues. The question of units and their relationship seems to be such an issue. The possibilities of expressing pages in MEI seem to be another. Adopting a common algorithm, it seems the whole discussion would be easier to conquer when we divide it. At the same time, we need to make sure that we keep all the bits and pieces together. I would suggest establishing something like an MEI Layout SIG (special interest group), which coordinates the discussion of this. This group should also consider applying for a session and maybe an additional workshop at the conference next year in May, as this seems to be an ideal time frame to prepare and discuss these issues. Hopefully, there will be a large part of the community available in Mainz, so we could take decisions there.
> Who would volunteer to participate in such a group, and what modus operandi would you suggest?
>
> Johannes
>
> On 06.12.2012, at 18:29, Laurent Pugin wrote:
>
> > Hi Craig,
> >
> > Thanks for the very detailed report. I particularly like the randomly placed staves of figure 7 ;-)
> >
> > There is certainly some interesting information about spacing, especially vertically. This is closely related to the discussion we had about units, and I agree that we can probably find a better name than interline. I am not sure about the use of the horizontal placement system outside Score, though.
> >
> > For the general organization of the music, it is actually very similar to what we have in Wolfgang (the music notation software created by Etienne Darbellay, on which Aruspix is partially based). Maybe not surprisingly, since they are from the same generation. I don't really see how this can become a CSS for music. Could you tell us more about it?
> >
> > As I understand it, it would be fairly similar to what we would like to achieve with the layout module, in the sense that separating content and presentation is what CSS does. A difference with the layout module is that we have an additional level in the hierarchy, namely the systems. In Score (as in Wolfgang), we have staves directly within a page, because this is enough for representing them. This is a very economic way of representing a page of music, and it maybe has to do with the memory limitations they had when they started these software applications. I am not sure this would be optimal for MEI, and this is why I changed the internal representation in Aruspix.
> >
> > As I said, the layout module was proposed for two reasons: 1) because a page-based representation did not seem to be an interesting option at that time, and 2) because it does offer a separation between content and presentation (i.e., we can have several presentations for the same content).
This second argument seems to be appealing to several of us. I must confess, however, that implementing it is quite a challenge. I am a little concerned that, because of this, it will become a very powerful solution that will not be used beyond simple cases because of the complexity involved. It works well with Renaissance music, since the general score organization is simple; we should be able to go beyond this proof of concept. As we already discussed, maybe re-considering a page-based representation is a way to go. This does not mean that the layout module cannot exist in parallel. But I can see a fairly direct path from the Score representation as described to an MEI page-based representation, and this can very well be useful to you, Craig, as it would be for OMR people and maybe others who would like to have a very detailed source encoding. What do you think?
> >
> > Best,
> > Laurent
> >
> > On Fri, Nov 30, 2012 at 4:40 AM, Craig Sapp wrote:
> > Hi Everyone,
> >
> > In case you were eagerly awaiting it, attached is the final version of my analysis of staff placement in SCORE.
> >
> > SCORE data itself has no concept of physical units (with some minor caveats), so it would be a good model to observe. The physical units are defined at the last minute when you are ready to print, and are not defined while editing the music in the SCORE editor, which matches the idea which you are headed towards.
> >
> > See page 50 of the attached PDF for example default spacings in SCORE, which is a good basic roadmap to how default spacing units are defined in SCORE.
> >
> > > This interline distance (which is already used by MEI) is a musical unit which describes
> > > half the distance between two staff lines,
> >
> > I complain about how you are defining "interline". Interline is Latin for "between lines", not "halfway between lines".
This will cause continual confusion, such as losing your spacecraft:
> > http://mars.jpl.nasa.gov/msp98/news/mco990930.html
> > How about "@semiline", "@hemiline", or "@demiline" instead? Or maybe "@halbline" :-)
> > http://www.dailywritingtips.com/semi-demi-and-hemi
> >
> > * The nominal physical length of scoreDef/@interline.size in SCORE is 3.15 points (0.04375 inches, 1.11125 mm). This is when you print out the music using the default staff scaling and print size. Vertical values are always represented by this step size, and the data files themselves do not indicate that the final physical rendering is at 3.15 points (which is why I had to measure it off of the example on page 50).
> >
> > * So in SCORE, the distance between two staff lines is 2.0 "steps". And this means that the height of a staff is 8.0 steps. Every staff has an independent scaling factor which only affects the vertical dimension (there is no staff-level scaling for the horizontal dimension). So if a staff has a 50% scaling, all of its steps would be 1/2 of the size of the nominal height.
> >
> > * The default "successive staff spacing" is 18.0 steps, so there are 18.0 - 8.0 = 10.0 steps from the top of one staff to the bottom of the next. This default spacing is the framework over which individual staves may be scaled. This framework is also how SCORE avoids using physical measurements to place individual staves vertically on the page.
> >
> > [...]
> >
> > -=+Craig
>
> _______________________________________________
> mei-l mailing list
> mei-l at lists.uni-paderborn.de
> https://lists.uni-paderborn.de/mailman/listinfo/mei-l

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From zupftom at googlemail.com Mon Dec 10 12:28:10 2012
From: zupftom at googlemail.com (TW)
Date: Mon, 10 Dec 2012 12:28:10 +0100
Subject: [MEI-L] page sizes
In-Reply-To:
References: <16763_1351898411_5094552A_16763_59_1_D2A91356-8368-4ECC-82BA-FA832A515EA3@edirom.de> <20121106235429.4mv0f8114w04gk4o@webmail.iu.edu> <25145_1354820224_50C0EA80_25145_95_1_B7E73178-1B13-4EAD-90A3-EE7305469DF9@edirom.de>
Message-ID:

Me, too!
Thomas 2012/12/10 Laurent Pugin : > Hi Johannes, > > I am of course interested in participating to a SIG on layout questions. Who > else? > > Laurent > > On Thu, Dec 6, 2012 at 7:55 PM, Johannes Kepper wrote: >> >> Hi all, >> >> there are so many different approaches, models, proposals etc. that I >> completely lost my overview. It would be great to hear Craig's response >> regarding CSS for music, but the general intention of most of these things >> seems to be the wish for a clearer separation of content and rendition in >> order to allow multiple renditions from the same source content. >> >> While I appreciate the discussion so far, I wonder if we couldn't break it >> down to more digestible issues. The question of units and their relationship >> seems to be such an issue. The possibilities of expressing pages in MEI >> seems to be another. Adopting a common algorithm, it seems the whole >> discussion would be easier to conquer when we divide it? At the same time, >> we need to make sure that we keep all the bits and pieces together. I would >> suggest to establish something like an MEI Layout SIG (special interest >> group), which coordinates the discussion of this. This group should also >> consider to apply for a session and maybe an additional workshop at the >> conference next year in May, as this seems to be an ideal time frame to >> prepare and discuss these issues. Hopefully, there will be a large part of >> the community available in Mainz, so we could take decisions there. Who >> would volunteer to participate in such a group, and what modus operandi >> would you suggest? >> >> Johannes >> >> >> Am 06.12.2012 um 18:29 schrieb Laurent Pugin: >> >> > Hi Craig, >> > >> > Thank for the very detailed report. I particularly like the randomly >> > placed staves of figure 7 ;-) >> > >> > There is certainly some interesting information about spacement, >> > especially vertically. 
This is closely related to the discussion we had >> > about units, and I agree that we can probably find a better name than >> > interline. I am not sure about the use of horizontal placement system >> > outside Score, though. >> > >> > For the general organization of the music, it is actually very similar >> > to what we have in Wolfgang (the music notation software create by Etienne >> > Darbellay, on which Aruspix is partially based on). Maybe not surprisingly >> > since they are from the same generation. I don't really see how this can >> > become a CSS for music. Could you tell us more about it? >> > >> > As I understand it, it would be fairly similar to what we would like to >> > achieve with the layout module in the sense that separating content and >> > presentation is what CSS does. A difference with the layout module is that >> > we have an additional level in the hierarchy, namely the systems. In Score >> > (as in Wolfgang), we have staves directly within a page, because this is >> > enough for representing them. This is a very economic way of representing a >> > page of music and it maybe has to do with the memory limitations they had >> > when they started these software applications. I am not sure this would be >> > optimal for MEI, and this is why I change the internal representation in >> > Aruspix. >> > >> > As I said, the layout module was proposed for two reasons, 1) because a >> > page-based representation did not seemed to be an interesting option at that >> > time, and 2) because it does offer a separation between content and >> > presentation (i.e., we can have several presentations for the same content). >> > This second argument seems to be appealing to several of us. I must confess, >> > however, that implementing it is quite of a challenge. I am a little be >> > concerned that because of this, it will become a very powerful solution that >> > will not be used beyond simple cases because of the complexity involved. 
It >> > works well with Renaissance music since the general score organization is >> > simple, we should be able to go beyond this proof of concept. As we already >> > discussed, maybe re-considering a page-based representation is a way to go. >> > This does not mean that the layout module cannot exists in parallel. But I >> > can see a fairly direct path from the Score representation as described to a >> > MEI page-based representation, and this can very well be useful to you >> > Craig, as it would be for OMR people and maybe others who would like to have >> > a very detail source encoding. What do you think? >> > >> > Best, >> > Laurent >> > >> > On Fri, Nov 30, 2012 at 4:40 AM, Craig Sapp wrote: >> > Hi Everyone, >> > >> > In case you were eagerly awaiting it, attached is the final version of >> > my analysis of staff placement in SCORE. >> > >> > >> > SCORE data itself has no concept of physical units (with some minor >> > caveats), so it would be a good model to observe. The physical units are >> > defined at the last minute when you are ready to print, and are not defined >> > while editing the music in the SCORE editor, which matches the idea which >> > you are headed towards. >> > >> > See page 50 of the attached PDF for example default spacings in SCORE >> > which is a good basic roadmap to how default spacing units are defined in >> > SCORE. >> > >> > >> > >> > > This interline distance (which is already used by MEI) is a musical >> > > unit which describes >> > > half the distance between two staff lines, >> > >> > I complain about how you are defining "interline". Interline is Latin >> > for "between lines", not "halfway between lines". This will cause continual >> > confusion, such as losing your spacecraft: >> > http://mars.jpl.nasa.gov/msp98/news/mco990930.html >> > How about "@semiline", "@hemiline", or "@demiline" instead? 
Or maybe >> > "@halbline" :-) >> > http://www.dailywritingtips.com/semi-demi-and-hemi >> > >> > >> > * The nominal physical length of scoreDef/@interline.size in SCORE is >> > 3.15 points (0.04375 inches, 1.11125 mm). This is when you print out the >> > music using the default staff scaling and print size. Vertical values are >> > always represented by this step size, and the data files themselves do not >> > indicate that the final physical rendering is at 3.15 points (which is why I >> > had to measure it off of the example on page 50). >> > >> > * So in SCORE, the distance between two staff lines is 2.0 "steps". And >> > this means that the height of a staff is 8.0 steps. Every staff has an >> > independent scaling factor which only affects the vertical dimension (there >> > is no staff-level scaling for the horizontal dimension). So if a staff has >> > a 50% scaling, all of its steps would be 1/2 of the size of the nominal >> > height. >> > >> > * The default "successive staff spacing" is 18.0 steps, so there are >> > 18.0 - 8.0 = 10.0 steps from the top of one staff to the bottom of the next. >> > This default spacing is the framework over which individual staves may be >> > scaled. This framework is also how SCORE avoids using physical measurements >> > to place individual staves vertically on the page. Staves can be placed >> > anywhere vertically, but their placements are in relation to their default >> > positions. For example, a staff could be placed 15 steps above the top of >> > the staff below by adding an extra offset of 5 steps to its default position >> > (In SCOREese, set P4=5 for the top staff). >> > >> > * The staves each have their own scaling factor (called the staff's P5 >> > in SCOREese). If P5 is 0.5, then the local staff's step size is now 1/2 of >> > the default step size. This scaling factor only affects the height of the >> > staves, not the length of the staves. 
Objects placed on a staff will have >> > the staff's P5 scaling applied to their horizontal and vertical dimensions >> > (they will shrink by 50% if the staff is scaled by 50%); their vertical >> > placement will be scaled as well, but not their horizontal placement, which >> > is independent of the staff scaling. Note that changing the scaling of a >> > staff does not affect its default position on the page, but if there is a >> > vertical offset from the default position, that offset will be scaled. >> > >> > * The horizontal distances in SCORE are described on a different scale >> > than the vertical distances. They are described as the fractional position >> > along the left/right sides of the default staff length. The nominal length >> > of a staff is 7.5 inches. This length is divided up into 200 units, so the >> > left sides of staves are at 0.0, and the right sides are at 200.0. A length of >> > 540 points (7.5 inches) divided by 200 is 2.7 points, so "200" was probably >> > used to give an approximately equivalent unit to vertical steps. It would >> > have been more elegant to set the default horizontal and vertical units to >> > be the same by using a different vertical scaling... In any case the >> > horizontal units are 6/7 of the vertical step units, so if you set a staff's >> > scaling to 6/7 (85.71428...%), the horizontal units will match the vertical >> > steps locally for that staff. Another way of thinking about the >> > relationship between the horizontal and vertical units in SCORE is that the >> > default length of staves is 171.428... steps long. So all vertical and >> > horizontal units can be related, and a final scaling can be given to match >> > the specified @interline physical distance between steps. >> > >> > * Horizontal units cannot be scaled within the SCORE editor, and can >> > only be scaled at print time, such as to match the staff lengths to the >> > distance between page margins.
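The unit arithmetic Craig describes can be sketched in Python. The constants (3.15-point step, 540-point staff length divided into 200 horizontal units) come from his figures above; the function and variable names are illustrative only, not SCORE's own terminology:

```python
# Sketch of SCORE's unit arithmetic as described above.
# Constants come from Craig's analysis; names are mine, not SCORE's.

DEFAULT_STEP_PT = 3.15       # nominal vertical step size in points
H_UNIT_PT = 540.0 / 200.0    # horizontal unit: 7.5 in = 540 pt over 200 units

def v_steps_to_points(steps, p5=1.0):
    """Vertical distance in points; p5 is the staff's vertical scaling factor."""
    return steps * DEFAULT_STEP_PT * p5

def h_units_to_points(units):
    """Horizontal distance in points (unaffected by staff scaling)."""
    return units * H_UNIT_PT

# A staff is 8 steps tall: 8 * 3.15 = 25.2 pt at default scale.
staff_height = v_steps_to_points(8)

# At P5 = 6/7, one local vertical step equals one horizontal unit (2.7 pt).
local_step = v_steps_to_points(1, p5=6/7)

# Default staff length expressed in vertical steps: 540 / 3.15 = 171.428... steps.
staff_length_in_steps = h_units_to_points(200) / DEFAULT_STEP_PT
```

This reproduces the 6/7 relationship between horizontal and vertical units and the 171.428...-step staff length mentioned in the text.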
>> > >> > For final placement on a physical page, there are three important >> > values: >> > >> > (1) The distance from the left side of the page to the left side of the >> > staff (the "left margin", although the system brackets will fall into this >> > margin, so not exactly the same as a text margin). The default left margin >> > is 0.5 inches (plus a fixed extra 0.025 in). >> > >> > (2) The distance from the bottom of the page to the bottom line of the >> > first staff. This is also not exactly a "margin" in the text sense, since >> > notes/slurs/dynamics can fall within this bottom margin. The default bottom >> > margin is 0.75 inches (plus a fixed extra 0.0625 inches). >> > >> > (3) The page scaling. This is the method to control the horizontal >> > scale which is not possible within the SCORE editor (other than trivial >> > zooming). But the page scaling will also affect the vertical scale at the >> > same time. The origin for the page scaling is the point defined by (1) and >> > (2) above. In other words the page scaling does not affect the page >> > margins, but rather only affects the scaling of the music (you have to scale >> > the music so that it falls at the correct top and right margin positions on >> > your page. >> > >> > Notice that only two margins are defined when printing from SCORE. This >> > perhaps gets to the point that Andrew was trying to make: >> > >> > >> > > The concept of "physical" unit doesn't really translate well to >> > > editions that are meant for digital consumption only. >> > > If I have a page meant for a tablet or digital music stand display, >> > > what does the "inch" unit mean? Does it mean >> > > render it as a physical inch on the screen, regardless of how many >> > > pixels it takes to represent it? Or does it >> > > mean render it using a fixed number of pixels-per-inch, regardless of >> > > how large or small it makes it from one >> > > display to another. 
E-ink displays challenge this concept, since they >> > > don't really have pixels, and high-resolution >> > > displays also challenge it since the number of pixels it takes to >> > > represent a single physical unit can be completely >> > > different. So we'll probably need some sort of proportional unit so >> > > that we can say that the page margin is a >> > > percentage of the rendered display rather than a fixed unit of >> > > physical measurement. >> > >> > SCORE uses a single origin when printing the music on a page (left and >> > bottom margins). And it is up to you to scale the music to correctly fit >> > within the top and right margins of your desired paper size (or screen size, >> > e-reader size, etc.). This could be done by specifying the top and right >> > margins instead of scaling, but page-level scaling and top/right margins >> > cannot be controlled independently in SCORE. >> > >> > >> > -=+Craig >> > >> > >> > >> > >> > _______________________________________________ >> > mei-l mailing list >> > mei-l at lists.uni-paderborn.de >> > https://lists.uni-paderborn.de/mailman/listinfo/mei-l >> > >> > >> > _______________________________________________ >> > mei-l mailing list >> > mei-l at lists.uni-paderborn.de >> > https://lists.uni-paderborn.de/mailman/listinfo/mei-l >> >> >> _______________________________________________ >> mei-l mailing list >> mei-l at lists.uni-paderborn.de >> https://lists.uni-paderborn.de/mailman/listinfo/mei-l >> > > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l > From raffaeleviglianti at gmail.com Mon Dec 10 12:31:34 2012 From: raffaeleviglianti at gmail.com (Raffaele Viglianti) Date: Mon, 10 Dec 2012 11:31:34 +0000 Subject: [MEI-L] page sizes In-Reply-To: References: <16763_1351898411_5094552A_16763_59_1_D2A91356-8368-4ECC-82BA-FA832A515EA3@edirom.de> <20121106235429.4mv0f8114w04gk4o@webmail.iu.edu> 
<25145_1354820224_50C0EA80_25145_95_1_B7E73178-1B13-4EAD-90A3-EE7305469DF9@edirom.de> Message-ID: I might not have too much to contribute, but I'd like to follow the SIG's work closely. I am particularly interested in which role the layout module could play for handling parts. Raffaele On Mon, Dec 10, 2012 at 11:28 AM, TW wrote: > Me, too! > > Thomas > > > 2012/12/10 Laurent Pugin : > > Hi Johannes, > > > > I am of course interested in participating to a SIG on layout questions. > Who > > else? > > > > Laurent > > > > On Thu, Dec 6, 2012 at 7:55 PM, Johannes Kepper > wrote: > >> > >> Hi all, > >> > >> there are so many different approaches, models, proposals etc. that I > >> completely lost my overview. It would be great to hear Craig's response > >> regarding CSS for music, but the general intention of most of these > things > >> seems to be the wish for a clearer separation of content and rendition > in > >> order to allow multiple renditions from the same source content. > >> > >> While I appreciate the discussion so far, I wonder if we couldn't break > it > >> down to more digestible issues. The question of units and their > relationship > >> seems to be such an issue. The possibilities of expressing pages in MEI > >> seems to be another. Adopting a common algorithm, it seems the whole > >> discussion would be easier to conquer when we divide it? At the same > time, > >> we need to make sure that we keep all the bits and pieces together. I > would > >> suggest to establish something like an MEI Layout SIG (special interest > >> group), which coordinates the discussion of this. This group should also > >> consider to apply for a session and maybe an additional workshop at the > >> conference next year in May, as this seems to be an ideal time frame to > >> prepare and discuss these issues. Hopefully, there will be a large part > of > >> the community available in Mainz, so we could take decisions there. 
Who > >> would volunteer to participate in such a group, and what modus operandi > >> would you suggest? > >> > >> Johannes > >> > >> > >> Am 06.12.2012 um 18:29 schrieb Laurent Pugin: > >> > >> > Hi Craig, > >> > > >> > Thank for the very detailed report. I particularly like the randomly > >> > placed staves of figure 7 ;-) > >> > > >> > There is certainly some interesting information about spacement, > >> > especially vertically. This is closely related to the discussion we > had > >> > about units, and I agree that we can probably find a better name than > >> > interline. I am not sure about the use of horizontal placement system > >> > outside Score, though. > >> > > >> > For the general organization of the music, it is actually very similar > >> > to what we have in Wolfgang (the music notation software create by > Etienne > >> > Darbellay, on which Aruspix is partially based on). Maybe not > surprisingly > >> > since they are from the same generation. I don't really see how this > can > >> > become a CSS for music. Could you tell us more about it? > >> > > >> > As I understand it, it would be fairly similar to what we would like > to > >> > achieve with the layout module in the sense that separating content > and > >> > presentation is what CSS does. A difference with the layout module is > that > >> > we have an additional level in the hierarchy, namely the systems. In > Score > >> > (as in Wolfgang), we have staves directly within a page, because this > is > >> > enough for representing them. This is a very economic way of > representing a > >> > page of music and it maybe has to do with the memory limitations they > had > >> > when they started these software applications. I am not sure this > would be > >> > optimal for MEI, and this is why I change the internal representation > in > >> > Aruspix. 
> >> > > >> > As I said, the layout module was proposed for two reasons, 1) because > a > >> > page-based representation did not seemed to be an interesting option > at that > >> > time, and 2) because it does offer a separation between content and > >> > presentation (i.e., we can have several presentations for the same > content). > >> > This second argument seems to be appealing to several of us. I must > confess, > >> > however, that implementing it is quite of a challenge. I am a little > be > >> > concerned that because of this, it will become a very powerful > solution that > >> > will not be used beyond simple cases because of the complexity > involved. It > >> > works well with Renaissance music since the general score > organization is > >> > simple, we should be able to go beyond this proof of concept. As we > already > >> > discussed, maybe re-considering a page-based representation is a way > to go. > >> > This does not mean that the layout module cannot exists in parallel. > But I > >> > can see a fairly direct path from the Score representation as > described to a > >> > MEI page-based representation, and this can very well be useful to you > >> > Craig, as it would be for OMR people and maybe others who would like > to have > >> > a very detail source encoding. What do you think? > >> > > >> > Best, > >> > Laurent > >> > > >> > On Fri, Nov 30, 2012 at 4:40 AM, Craig Sapp > wrote: > >> > Hi Everyone, > >> > > >> > In case you were eagerly awaiting it, attached is the final version of > >> > my analysis of staff placement in SCORE. > >> > > >> > > >> > SCORE data itself has no concept of physical units (with some minor > >> > caveats), so it would be a good model to observe. The physical units > are > >> > defined at the last minute when you are ready to print, and are not > defined > >> > while editing the music in the SCORE editor, which matches the idea > which > >> > you are headed towards. 
> >> > > >> > See page 50 of the attached PDF for example default spacings in SCORE > >> > which is a good basic roadmap to how default spacing units are > defined in > >> > SCORE. > >> > > >> > > >> > > >> > > This interline distance (which is already used by MEI) is a musical > >> > > unit which describes > >> > > half the distance between two staff lines, > >> > > >> > I complain about how you are defining "interline". Interline is Latin > >> > for "between lines", not "halfway between lines". This will cause > continual > >> > confusion, such as losing your spacecraft: > >> > http://mars.jpl.nasa.gov/msp98/news/mco990930.html > >> > How about "@semiline", "@hemiline", or "@demiline" instead? Or maybe > >> > "@halbline" :-) > >> > http://www.dailywritingtips.com/semi-demi-and-hemi > >> > > >> > > >> > * The nominal physical length of scoreDef/@interline.size in SCORE is > >> > 3.15 points (0.04375 inches, 1.11125 mm). This is when you print out > the > >> > music using the default staff scaling and print size. Vertical > values are > >> > always represented by this step size, and the data files themselves > do not > >> > indicate that the final physical rendering is at 3.15 points (which > is why I > >> > had to measure it off of the example on page 50). > >> > > >> > * So in SCORE, the distance between two staff lines is 2.0 "steps". > And > >> > this means that the height of a staff is 8.0 steps. Every staff has > an > >> > independent scaling factor which only affects the vertical dimension > (there > >> > is no staff-level scaling for the horizontal dimension). So if a > staff has > >> > a 50% scaling, all of its steps would be 1/2 of the size of the > nominal > >> > height. > >> > > >> > * The default "successive staff spacing" is 18.0 steps, so there are > >> > 18.0 - 8.0 = 10.0 steps from the top of one staff to the bottom of > the next. > >> > This default spacing is the framework over which individual staves > may be > >> > scaled. 
This framework is also how SCORE avoids using physical > measurements > >> > to place individual staves vertically on the page. Staves can be > placed > >> > anywhere vertically, but their placements are in relation to their > default > >> > positions. For example, a staff could be placed 15 steps above the > top of > >> > the staff below by adding an extra offset of 5 steps to its default > position > >> > (In SCOREese, set P4=5 for the top staff). > >> > > >> > * The staves each have their own scaling factor (called the staff's P5 > >> > in SCOREese). If P5 is 0.5, then the local staff's step size is now > 1/2 of > >> > the default step size. This scaling factor only affects the height > of the > >> > staves, not the length of the staves. Object placed on a staff will > have > >> > the staff's P5 scaling applied to their horizontal and vertical > dimensions > >> > (they will shrink by 50% if the staff is scaled by 50%), their > vertical > >> > placement will be scaled as well, but not their horizontal placement > which > >> > is independent of the staff scaling. Note that changing the scaling > of a > >> > staff does not affect its default position on the page, but if there > is a > >> > vertical offset from the default position, that offset will be scaled. > >> > > >> > * The horizontal distances in SCORE are described on a different scale > >> > than the vertical distances. They are described as the fractional > position > >> > along the left/right sides of the default staff length. The nominal > length > >> > of a staff is 7.5 inches. This length is divided up into 200 units, > so the > >> > left side of staves are at 0.0, and the right side is at 200.0. A > length of > >> > 540 points (7.5 inches) divided by 200 is 2.7 points, so "200" was > probably > >> > used to give an approximately equivalent unit to vertical steps. 
It > would > >> > have been more elegant to set the default horizontal and vertical > units to > >> > be the same by using a different vertical scaling... In any case the > >> > horizontal units are 6/7 of the vertical step units, so if you set a > staff's > >> > scaling to 6/7 (85.71428...%), the horizontal units will match the > vertical > >> > steps locally for that staff. Another way of thinking about the > >> > relationship between the horizontal and vertical units in SCORE is > that the > >> > default length of staves is 171.428... steps long. So all vertical > and > >> > horizontal units can be related and a final scaling can be given to > match to > >> > the specified @interline physical distance between steps. > >> > > >> > * horizontal units cannot be scaled within the SCORE editor, and can > >> > only be scaled at print time, such as to match the staff lengths to > the > >> > distance between page margins. > >> > > >> > For final placement on a physical page, there are three important > >> > values: > >> > > >> > (1) The distance from the left side of the page to the left side of > the > >> > staff (the "left margin", although the system brackets will fall into > this > >> > margin, so not exactly the same as a text margin). The default left > margin > >> > is 0.5 inches (plus a fixed extra 0.025 in). > >> > > >> > (2) The distance from the bottom of the page to the bottom line of the > >> > first staff. This is also not exactly a "margin" in the text sense, > since > >> > notes/slurs/dynamics can fall within this bottom margin. The default > bottom > >> > margin is 0.75 inches (plus a fixed extra 0.0625 inches). > >> > > >> > (3) The page scaling. This is the method to control the horizontal > >> > scale which is not possible within the SCORE editor (other than > trivial > >> > zooming). But the page scaling will also affect the vertical scale > at the > >> > same time. The origin for the page scaling is the point defined by > (1) and > >> > (2) above. 
In other words the page scaling does not affect the page > >> > margins, but rather only affects the scaling of the music (you have > to scale > >> > the music so that it falls at the correct top and right margin > positions on > >> > your page. > >> > > >> > Notice that only two margins are defined when printing from SCORE. > This > >> > perhaps gets to the point that Andrew was trying to make: > >> > > >> > > >> > > The concept of "physical" unit doesn't really translate well to > >> > > editions that are meant for digital consumption only. > >> > > If I have a page meant for a tablet or digital music stand display, > >> > > what does the "inch" unit mean? Does it mean > >> > > render it as a physical inch on the screen, regardless of how many > >> > > pixels it takes to represent it? Or does it > >> > > mean render it using a fixed number of pixels-per-inch, regardless > of > >> > > how large or small it makes it from one > >> > > display to another. E-ink displays challenge this concept, since > they > >> > > don't really have pixels, and high-resolution > >> > > displays also challenge it since the number of pixels it takes to > >> > > represent a single physical unit can be completely > >> > > different. So we'll probably need some sort of proportional unit so > >> > > that we can say that the page margin is a > >> > > percentage of the rendered display rather than a fixed unit of > >> > > physical measurement. > >> > > >> > SCORE uses a single origin when printing the music on a page (left and > >> > bottom margins). And it is up to you to scale the music to correctly > fit > >> > within the top and right margins of your desired paper size (or > screen size, > >> > e-reader size, etc.). This could be done by specifying the top and > right > >> > margins instead of scaling, but page-level scaling and top/right > margins > >> > cannot be controlled independently in SCORE. 
> >> > > >> > > >> > -=+Craig > >> > > >> > > >> > > >> > > >> > _______________________________________________ > >> > mei-l mailing list > >> > mei-l at lists.uni-paderborn.de > >> > https://lists.uni-paderborn.de/mailman/listinfo/mei-l > >> > > >> > > >> > _______________________________________________ > >> > mei-l mailing list > >> > mei-l at lists.uni-paderborn.de > >> > https://lists.uni-paderborn.de/mailman/listinfo/mei-l > >> > >> > >> _______________________________________________ > >> mei-l mailing list > >> mei-l at lists.uni-paderborn.de > >> https://lists.uni-paderborn.de/mailman/listinfo/mei-l > >> > > > > > > _______________________________________________ > > mei-l mailing list > > mei-l at lists.uni-paderborn.de > > https://lists.uni-paderborn.de/mailman/listinfo/mei-l > > > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l > -------------- next part -------------- An HTML attachment was scrubbed... URL: From siegert at udk-berlin.de Mon Dec 10 18:45:23 2012 From: siegert at udk-berlin.de (Christine Siegert) Date: Mon, 10 Dec 2012 18:45:23 +0100 Subject: [MEI-L] New MEI project References: <16763_1351898411_5094552A_16763_59_1_D2A91356-8368-4ECC-82BA-FA832A515EA3@edirom.de><20121106235429.4mv0f8114w04gk4o@webmail.iu.edu><25145_1354820224_50C0EA80_25145_95_1_B7E73178-1B13-4EAD-90A3-EE7305469DF9@edirom.de> Message-ID: <4B0539965FD0435EBAAC6E274959D448@Laptop> Dear all, It's a great pleasure for me to inform you that the Einstein Foundation Berlin has agreed funding the research project "Giuseppe Sarti - A Cosmopolitan Composer in Pre-Revolutionary Europe" including an MEI based edition of his Italian operas "Fra i due litiganti il terzo gode" and "Giulio Sabino". 
The project starting in early 2013 will be a cooperation with Dörte Schmidt (University of the Arts Berlin) and Bella Brover Lubovsky (Hebrew University of Jerusalem) who will write a monograph and provide a critical edition of Sarti's Russian opera "Oleg". We will - of course - work closely together with our Detmold colleagues. With best wishes, Christine Prof. Dr. Christine Siegert Universität der Künste Berlin Musikwissenschaft, Fakultät Musik Fasanenstr. 1B, D-10623 Berlin Postanschrift: Postfach 12 05 44, D-10595 Berlin Tel.: +49 (0)30-3185-2318 -------------- next part -------------- An HTML attachment was scrubbed... URL: From pdr4h at eservices.virginia.edu Mon Dec 10 19:20:03 2012 From: pdr4h at eservices.virginia.edu (Roland, Perry (pdr4h)) Date: Mon, 10 Dec 2012 18:20:03 +0000 Subject: [MEI-L] New MEI project In-Reply-To: <4B0539965FD0435EBAAC6E274959D448@Laptop> References: <16763_1351898411_5094552A_16763_59_1_D2A91356-8368-4ECC-82BA-FA832A515EA3@edirom.de><20121106235429.4mv0f8114w04gk4o@webmail.iu.edu><25145_1354820224_50C0EA80_25145_95_1_B7E73178-1B13-4EAD-90A3-EE7305469DF9@edirom.de> , <4B0539965FD0435EBAAC6E274959D448@Laptop> Message-ID: Dear Christine, Herzlichen Glückwunsch! I'm certain I speak for everyone on MEI-L when I say that we're eager to hear about your successes as the project moves forward. And please feel free to post any questions that arise to the list as well. -- p. __________________________ Perry Roland Music Library University of Virginia P. O. Box 400175 Charlottesville, VA 22904 434-982-2702 (w) pdr4h (at) virginia (dot) edu ________________________________ From: mei-l-bounces at lists.uni-paderborn.de [mei-l-bounces at lists.uni-paderborn.de] on behalf of Christine Siegert [siegert at udk-berlin.de] Sent: Monday, December 10, 2012 12:45 PM To: Music Encoding Initiative Cc: ??? ?????
???????; Schmidt Dörte Subject: [MEI-L] New MEI project Dear all, It's a great pleasure for me to inform you that the Einstein Foundation Berlin has agreed funding the research project "Giuseppe Sarti - A Cosmopolitan Composer in Pre-Revolutionary Europe" including an MEI based edition of his Italian operas "Fra i due litiganti il terzo gode" and "Giulio Sabino". The project starting in early 2013 will be a cooperation with Dörte Schmidt (University of the Arts Berlin) and Bella Brover Lubovsky (Hebrew University of Jerusalem) who will write a monograph and provide a critical edition of Sarti's Russian opera "Oleg". We will - of course - work closely together with our Detmold colleagues. With best wishes, Christine Prof. Dr. Christine Siegert Universität der Künste Berlin Musikwissenschaft, Fakultät Musik Fasanenstr. 1B, D-10623 Berlin Postanschrift: Postfach 12 05 44, D-10595 Berlin Tel.: +49 (0)30-3185-2318 -------------- next part -------------- An HTML attachment was scrubbed... URL: From donbyrd at indiana.edu Wed Dec 12 20:18:02 2012 From: donbyrd at indiana.edu (Byrd, Donald A.) Date: Wed, 12 Dec 2012 14:18:02 -0500 Subject: [MEI-L] page sizes; books on music notation/engraving Message-ID: <20121212141802.httuldg5rsw0cogs@webmail.iu.edu> I've finally finished documenting how units and coordinate systems work in Nightingale. It's far less detailed than Craig's opus on SCORE, and it's not at all clear to me it'll be helpful to this discussion! But anyway, here it is (attached). Also, if anyone is interested in books on conventional Western music notation (CWMN), the latest and quite possibly greatest I know of is Behind Bars, by Elaine Gould, published by Faber Music. It's certainly the most detailed book on CWMN I've ever seen; it even includes specific rules about exactly how much space to allow for lots of things, something I've seen in only one book before (Ted Ross' old The Art of Music Engraving and Processing).
--Don -- Donald Byrd Woodrow Wilson Indiana Teaching Fellow Adjunct Associate Professor of Informatics Indiana University, Bloomington -------------- next part -------------- A non-text attachment was scrubbed... Name: CoordinateSystems-TN11.doc Type: application/msword Size: 53248 bytes Desc: not available URL: From pdr4h at eservices.virginia.edu Fri Dec 14 17:38:46 2012 From: pdr4h at eservices.virginia.edu (Roland, Perry (pdr4h)) Date: Fri, 14 Dec 2012 16:38:46 +0000 Subject: [MEI-L] Music Encoding Conference 2013, 2nd call Message-ID: Dear colleagues, This is a friendly reminder about the Music Encoding Conference. The deadline for submissions is Dec. 31 so please contribute soon. Also, please circulate this notice widely and forgive any cross-postings. For the conference organizers, -- p. _________________________ Perry Roland Music Library University of Virginia P. O. Box 400175 Charlottesville, VA 22904 434-982-2702 (w) pdr4h (at) virginia (dot) edu ================================================== SECOND CALL FOR ABSTRACTS The Music Encoding Conference 2013: Concepts, Methods, Editions 22-24 May, 2013 ================================================== You are cordially invited to participate in the Music Encoding Conference 2013 - Concepts, Methods, Editions, to be held 22-24 May, 2013, at the Mainz Academy for Literature and Sciences in Mainz, Germany. Music encoding is now a prominent feature of various areas in musicology and music librarianship. The encoding of symbolic music data provides a foundation for a wide range of scholarship, and over the last several years, has garnered a great deal of attention in the digital humanities. This conference intends to provide an overview of the current state of data modeling, generation, and use, and aims to introduce new perspectives on topics in the fields of traditional and computational musicology, music librarianship, and scholarly editing, as well as in the broader area of digital humanities. 
With its dual focus on music encoding and editing in the context of the digital humanities, the Program Committee is happy to announce keynote lectures by Frans Wiering (Universiteit Utrecht), and Daniel Pitti (University of Virginia), both distinguished scholars in their respective fields of musicology and markup technologies in the digital humanities. Proposals for papers, posters, panel discussions, and pre-conference workshops are encouraged. Prospective topics for submissions include: * theoretical and practical aspects of music, music notation models, and scholarly editing * rendering of symbolic music data in audio and graphical forms * relationships between symbolic music data, encoded text, and facsimile images * capture, interchange, and re-purposing of music data and metadata * ontologies, authority files, and linked data in music encoding * additional topics relevant to music encoding and music editing Paper and poster proposals must contain no more than 1000 words and a references section with no more than five relevant bibliographic references. A length requirement for final papers has not yet been determined; however, poster presentations will be limited to 2 pages in the proceedings. Panel sessions may be one and a half or three hours in length. Proposals for panel sessions, describing the topic and nature of the session and including short biographies of the participants, should be no longer than 2000 words. Proposals for pre-conference workshops, to be held on May 21st, must be no longer than 2000 words and must include a detailed syllabus and schedule and a description of space and technical requirements. Detailed submission instructions, including author guidelines and authoritative stylesheets for each submission type, are available on the conference webpage at https://music-encoding.org/conference/submission. 
All accepted papers, posters, and reports of panel sessions and workshops will be included in the conference proceedings, tentatively scheduled to be published by the end of 2013. Important dates: 31 December 2012: Deadline for abstract submissions 31 January 2013: Notification of acceptance/rejection of submissions 21-24 May 2013: Conference 31 July 2013: Deadline for submission of full papers, posters, etc. for conference proceedings December 2013: Publication of conference proceedings Additional details will be announced on the conference webpage (http://music-encoding.org/conference/2013). If you have any questions, please contact conference2013 at music-encoding.org. ------ Program Committee: Ichiro Fujinaga, McGill University, Montreal Niels Krabbe, Det Kongelige Bibliotek, København Elena Pierazzo, King's College, London Eleanor Selfridge-Field, CCARH, Stanford Joachim Veit, Universität Paderborn, Detmold (Local) Organizers: Johannes Kepper, Universität Paderborn Daniel Röwenstrunk, Universität Paderborn Perry Roland, University of Virginia From pdr4h at eservices.virginia.edu Sat Dec 15 17:27:22 2012 From: pdr4h at eservices.virginia.edu (Roland, Perry (pdr4h)) Date: Sat, 15 Dec 2012 16:27:22 +0000 Subject: [MEI-L] Music Encoding Conference 2013, 2nd call Message-ID: Dear colleagues, This is a friendly reminder about the Music Encoding Conference. The deadline for submissions is Dec. 31 so please contribute soon. Also, please circulate this notice widely and forgive any cross-postings. For the conference organizers, -- p. _________________________ Perry Roland Music Library University of Virginia P. O. Box 400175 Charlottesville, VA 22904 434-982-2702 (w) pdr4h (at) virginia (dot) edu
for conference proceedings December 2013: Publication of conference proceedings Additional details will be announced on the conference webpage (http://music-encoding.org/conference/2013). If you have any questions, please contact conference2013 at music-encoding.org. ------ Program Committee: Ichiro Fujinaga, McGill University, Montreal Niels Krabbe, Det Kongelige Bibliotek, København, Elena Pierazzo, King's College, London Eleanor Selfridge-Field, CCARH, Stanford Joachim Veit, Universität Paderborn, Detmold (Local) Organizers: Johannes Kepper, Universität Paderborn Daniel Röwenstrunk, Universität Paderborn Perry Roland, University of Virginia From andrew.hankinson at mail.mcgill.ca Sat Dec 22 05:15:35 2012 From: andrew.hankinson at mail.mcgill.ca (Andrew Hankinson) Date: Sat, 22 Dec 2012 00:15:35 -0400 Subject: [MEI-L] Music encoding conference Message-ID: Hi, The instructions for the poster proposals are a little unclear on what is actually expected for the December deadline. It seems as though the proposal must essentially be formatted as the final paper, so are we expected to submit the final paper itself? Or will it be two different works: A proposal ("our poster will explain x, y, z"), and then if accepted, an actual paper of essentially the same length? Thanks, -Andrew From kepper at edirom.de Sat Dec 22 08:32:10 2012 From: kepper at edirom.de (Johannes Kepper) Date: Sat, 22 Dec 2012 08:32:10 +0100 Subject: [MEI-L] Music encoding conference In-Reply-To: References: Message-ID: <0A35C69B-FBBC-4338-A08F-75FEC017D936@edirom.de> Hi Andrew, Thanks for pointing that out. For the deadline, we expect an abstract of the paper, that is, an explanation of its planned content and why it is important for the conference. You don't have to upload the final poster (just as you don't have to upload final papers yet). Please note also that the given word counts are maximums. 
You don't have to write 1000 words when the intention is already clear after just 400. We just wanted to be nice to our reviewers ;-) Hope this helps, Johannes On 22.12.2012, at 05:15, Andrew Hankinson wrote: > Hi, > > The instructions for the poster proposals are a little unclear on what is actually expected for the December deadline. > > It seems as though the proposal must essentially be formatted as the final paper, so are we expected to submit the final paper itself? Or will it be two different works: A proposal ("our poster will explain x, y, z"), and then if accepted, an actual paper of essentially the same length? > > Thanks, > -Andrew > > > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l From atge at kb.dk Sat Dec 22 14:28:29 2012 From: atge at kb.dk (Axel Teich Geertinger) Date: Sat, 22 Dec 2012 13:28:29 +0000 Subject: [MEI-L] Music encoding conference In-Reply-To: <0A35C69B-FBBC-4338-A08F-75FEC017D936@edirom.de> References: , <0A35C69B-FBBC-4338-A08F-75FEC017D936@edirom.de> Message-ID: <0B6F63F59F405E4C902DFE2C2329D0D168609622@EXCHANGE-01.kb.dk> Hi all, sorry for being slow, but I am a little confused now. I asked Perry last week about what to submit for a poster. What confused me then was the Word template. Until then I had also thought an abstract was enough, but the template did not seem to be made for an abstract submission. Perry answered: Hello, Axel, You have options -- - a one-page image of the poster and a page of text explaining it or - a two-page paper (with or without the actual poster image). In any case, you can think of this as a "proposal for a poster" rather than an actual poster. In other words, once your submission is accepted, you can change the content. Does that help? Cheers, -- p. 
So now we've prepared the text for a poster, but I gave up the idea of also submitting a paper, because I understood I would have to submit a (more or less) completed paper by December 31st, which I couldn't. The question now is whether I should try to cook up some abstract in a hurry or not... Anyway, if I do: What is the Word template for? The abstract is to be entered directly in the submission form, so perhaps the template is not to be used at all until the final submission (after the conference, perhaps)? Wishing you all a merry Christmas, Axel ________________________________________ From: mei-l-bounces at lists.uni-paderborn.de [mei-l-bounces at lists.uni-paderborn.de] on behalf of Johannes Kepper [kepper at edirom.de] Sent: 22 December 2012 08:32 To: Music Encoding Initiative Subject: Re: [MEI-L] Music encoding conference Hi Andrew, Thanks for pointing that out. For the deadline, we expect an abstract of the paper, that is, an explanation of its planned content and why it is important for the conference. You don't have to upload the final poster (just as you don't have to upload final papers yet). Please note also that the given word counts are maximums. You don't have to write 1000 words when the intention is already clear after just 400. We just wanted to be nice to our reviewers ;-) Hope this helps, Johannes On 22.12.2012, at 05:15, Andrew Hankinson wrote: > Hi, > > The instructions for the poster proposals are a little unclear on what is actually expected for the December deadline. > > It seems as though the proposal must essentially be formatted as the final paper, so are we expected to submit the final paper itself? Or will it be two different works: A proposal ("our poster will explain x, y, z"), and then if accepted, an actual paper of essentially the same length? 
> > Thanks, > -Andrew > > > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l _______________________________________________ mei-l mailing list mei-l at lists.uni-paderborn.de https://lists.uni-paderborn.de/mailman/listinfo/mei-l From kepper at edirom.de Sat Dec 22 14:57:23 2012 From: kepper at edirom.de (Johannes Kepper) Date: Sat, 22 Dec 2012 14:57:23 +0100 Subject: [MEI-L] Music encoding conference In-Reply-To: <0B6F63F59F405E4C902DFE2C2329D0D168609622@EXCHANGE-01.kb.dk> References: , <0A35C69B-FBBC-4338-A08F-75FEC017D936@edirom.de> <0B6F63F59F405E4C902DFE2C2329D0D168609622@EXCHANGE-01.kb.dk> Message-ID: Hi Axel, the issue is that ConfTool itself asks for an abstract by default, with no way to turn that off. In the case of this conference, this ConfTool abstract is supposed to be an abstract of the abstract we ask you to provide. Don't worry too much about it (the ConfTool one). What we need for now is an abstract describing the poster or paper, which may be no longer than 1000 words. The template, which we ask you to use for this, is also intended for the final paper, which you will have to provide a couple of weeks _after_ the conference for inclusion in the proceedings. If you can't make the submission work with the templates, just upload something else, together with a short note about it. We can work these details out later. The main problem is that the word abstract is used for two different things. We got the ConfTool license only a couple of days before we announced everything, and our wording for the templates etc. had already been settled by then. Maybe we should have revised our terminology. ConfTool, though highly configurable, does not seem to be changeable in this regard. Sorry for the confusion! 
And Axel, if you think you could draft a paper by the end of the year[1], I'm sure that would make a perfect contribution, so please don't hesitate! Best, Johannes [1]: There is a chance that the deadline will be extended a little bit. The final decision on this will be taken directly after Christmas. But there is not much room to move, so don't expect much more time. On 22.12.2012, at 14:28, Axel Teich Geertinger wrote: > Hi all, > > sorry for being slow, but I am a little confused now. I asked Perry last week about what to submit for a poster. What confused me then was the Word template. Until then I had also thought an abstract was enough, but the template did not seem to be made for an abstract submission. Perry answered: > > Hello, Axel, > You have options -- > - a one-page image of the poster and a page of text explaining it or > - a two-page paper (with or without the actual poster image). > In any case, you can think of this as a "proposal for a poster" rather than an actual poster. In other words, once your submission is accepted, you can change the content. > Does that help? > Cheers, > -- > p. > > So now we've prepared the text for a poster, but I gave up the idea of also submitting a paper, because I understood I would have to submit a (more or less) completed paper by December 31st, which I couldn't. The question now is whether I should try to cook up some abstract in a hurry or not... > Anyway, if I do: What is the Word template for? The abstract is to be entered directly in the submission form, so perhaps the template is not to be used at all until the final submission (after the conference, perhaps)? > > Wishing you all a merry Christmas, > Axel > > > ________________________________________ > From: mei-l-bounces at lists.uni-paderborn.de [mei-l-bounces at lists.uni-paderborn.de] on behalf of Johannes Kepper [kepper at edirom.de] > Sent: 22. 
December 2012 08:32 > To: Music Encoding Initiative > Subject: Re: [MEI-L] Music encoding conference > > Hi Andrew, > > Thanks for pointing that out. For the deadline, we expect an abstract of the paper, that is, an explanation of its planned content and why it is important for the conference. You don't have to upload the final poster (just as you don't have to upload final papers yet). Please note also that the given word counts are maximums. You don't have to write 1000 words when the intention is already clear after just 400. We just wanted to be nice to our reviewers ;-) > > Hope this helps, > Johannes > > > > > On 22.12.2012, at 05:15, Andrew Hankinson wrote: > >> Hi, >> >> The instructions for the poster proposals are a little unclear on what is actually expected for the December deadline. >> >> It seems as though the proposal must essentially be formatted as the final paper, so are we expected to submit the final paper itself? Or will it be two different works: A proposal ("our poster will explain x, y, z"), and then if accepted, an actual paper of essentially the same length? 
>> >> Thanks, >> -Andrew >> >> >> >> _______________________________________________ >> mei-l mailing list >> mei-l at lists.uni-paderborn.de >> https://lists.uni-paderborn.de/mailman/listinfo/mei-l > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l From kepper at edirom.de Fri Dec 28 14:56:22 2012 From: kepper at edirom.de (Johannes Kepper) Date: Fri, 28 Dec 2012 14:56:22 +0100 Subject: [MEI-L] Music Encoding Conference 2013: Deadline extension Message-ID: <53046B99-1988-4789-B702-8DF1CCB57B00@edirom.de> Dear colleagues, I write to announce that we have slightly extended the deadline for the Music Encoding Conference 2013. Please submit your abstract by January 6. Also, please circulate this notice widely and forgive any cross-postings. If you have any enquiries, please get in touch with me or email conference2013 at music-encoding.org. For the conference organizers, Johannes Kepper ------------------------ Dr. Johannes Kepper Wiss. Mitarbeiter BMBF-Project "Freischütz Digital" Musikwiss. Seminar Detmold/Paderborn Gartenstr. 20 32756 Detmold Tel. +49 5231 975665 Mail: kepper at edirom.de ================================================== SECOND CALL FOR ABSTRACTS The Music Encoding Conference 2013: Concepts, Methods, Editions 22-24 May, 2013 ================================================== You are cordially invited to participate in the Music Encoding Conference 2013 - Concepts, Methods, Editions, to be held 22-24 May, 2013, at the Mainz Academy for Literature and Sciences in Mainz, Germany. Music encoding is now a prominent feature of various areas in musicology and music librarianship. 
The encoding of symbolic music data provides a foundation for a wide range of scholarship, and over the last several years, has garnered a great deal of attention in the digital humanities. This conference intends to provide an overview of the current state of data modeling, generation, and use, and aims to introduce new perspectives on topics in the fields of traditional and computational musicology, music librarianship, and scholarly editing, as well as in the broader area of digital humanities. With its dual focus on music encoding and editing in the context of the digital humanities, the Program Committee is happy to announce keynote lectures by Frans Wiering (Universiteit Utrecht), and Daniel Pitti (University of Virginia), both distinguished scholars in their respective fields of musicology and markup technologies in the digital humanities. Proposals for papers, posters, panel discussions, and pre-conference workshops are encouraged. Prospective topics for submissions include: * theoretical and practical aspects of music, music notation models, and scholarly editing * rendering of symbolic music data in audio and graphical forms * relationships between symbolic music data, encoded text, and facsimile images * capture, interchange, and re-purposing of music data and metadata * ontologies, authority files, and linked data in music encoding * additional topics relevant to music encoding and music editing Paper and poster proposals must contain no more than 1000 words and a references section with no more than five relevant bibliographic references. A length requirement for final papers has not yet been determined; however, poster presentations will be limited to 2 pages in the proceedings. Panel sessions may be one and a half or three hours in length. Proposals for panel sessions, describing the topic and nature of the session and including short biographies of the participants, should be no longer than 2000 words. 
Proposals for pre-conference workshops, to be held on May 21st, must be no longer than 2000 words and must include a detailed syllabus and schedule and a description of space and technical requirements. Detailed submission instructions, including author guidelines and authoritative stylesheets for each submission type, are available on the conference webpage at https://music-encoding.org/conference/submission. All accepted papers, posters, and reports of panel sessions and workshops will be included in the conference proceedings, tentatively scheduled to be published by the end of 2013. Important dates: 31 December 2012: Deadline for abstract submissions 31 January 2013: Notification of acceptance/rejection of submissions 21-24 May 2013: Conference 31 July 2013: Deadline for submission of full papers, posters, etc. for conference proceedings December 2013: Publication of conference proceedings Additional details will be announced on the conference webpage (http://music-encoding.org/conference/2013). If you have any questions, please contact conference2013 at music-encoding.org. ------ Program Committee: Ichiro Fujinaga, McGill University, Montreal Niels Krabbe, Det Kongelige Bibliotek, København, Elena Pierazzo, King's College, London Eleanor Selfridge-Field, CCARH, Stanford Joachim Veit, Universität Paderborn, Detmold (Local) Organizers: Johannes Kepper, Universität Paderborn Daniel Röwenstrunk, Universität Paderborn Perry Roland, University of Virginia