From francesca.giannetti at gmail.com Wed Jan 9 17:26:57 2019 From: francesca.giannetti at gmail.com (Francesca Giannetti) Date: Wed, 9 Jan 2019 11:26:57 -0500 Subject: [MEI-L] Request: Participate in a survey on musical genre Message-ID: Dear MEI colleagues, Since this is my first time posting to this listserv, I thought I would briefly introduce myself. I'm Francesca Giannetti at Rutgers University—New Brunswick. I'm a digital humanities librarian with a degree and background in music performance. My work sits at the intersection of music, digital humanities, and librarianship. At present, I'm part of a team of scholars and librarians working to develop a digital research environment for music called Music Scholarship Online (). It is in this capacity that I'm studying the ways that music researchers use musical genre as an access point in online information systems. I would very much appreciate it if you could help us by completing the following survey. Here are the details: You are invited to participate in a research study, entitled “Musical Genre and Digital Collections: Some Information Seeking Approaches.” The study is being conducted by Francesca Giannetti of Rutgers University–New Brunswick, Alexander Library, 169 College Avenue, New Brunswick, NJ, 848-932-6097, francesca.giannetti at rutgers.edu. The aim of this study is to investigate how users of online music information systems categorize music. A secondary aim is to examine the ways in which genre tags may potentially mediate the discovery of scholarly digital projects in music and their associated datasets and software tools. Your participation in the study will contribute to a better understanding of how different music communities interpret and use musical genre as an access point in online information systems. You must be at least 18 years old to participate. Completing the survey will take approximately 15 minutes of your time. 
The purpose of this study is to develop a holistic model of musical genre for a specific information resource–Music Scholarship Online (MuSO)–but the results will be generalizable and extensible to other music information seeking contexts. Your participation in the study will contribute to a better understanding of how different music communities interpret and use musical genre as an access point in online information systems. I hope that you will be willing to help by participating in this research study. If you agree to participate, the survey is available at this URL: https://rutgers.ca1.qualtrics.com/jfe/form/SV_262oW4NW02khjoN Thank you very much for your consideration of this request! All best, Francesca Giannetti Digital Humanities Librarian Alexander Library Rutgers University–New Brunswick -------------- next part -------------- An HTML attachment was scrubbed... URL: From andrew.hankinson at bodleian.ox.ac.uk Wed Jan 16 12:47:35 2019 From: andrew.hankinson at bodleian.ox.ac.uk (Andrew Hankinson) Date: Wed, 16 Jan 2019 11:47:35 +0000 Subject: [MEI-L] MEI Board Meeting, 15 January 2019 Message-ID: <7FE971C7-2B63-4EF0-8462-CE8EFC0B1E32@bodleian.ox.ac.uk> Dear MEI Community, Last night, 15 January, the MEI Board met for the first time in 2019. We welcomed Elsa De Luca as a new member to the board, and welcomed back Ichiro Fujinaga and Benjamin W. Bohl to serve another term. We are delighted to have such a diversity of backgrounds and experiences on our board, from music encoding pioneers, to technical experts, to world-class leaders in music scholarship of all eras. At the meeting we arrived at consensus for the following roles over the next year: Administrative Chair: Andrew Hankinson Technical Co-chairs: Johannes Kepper, Benjamin W. Bohl We will be posting the minutes of the meeting to the website in the next few days. In the coming year, we will be focusing our efforts on making MEI more accessible to wider audiences. 
This will include updating and improving our documentation, more tutorials and training materials, and updated information on our website. Finally, I would like to take this opportunity to thank Perry for his long-term service to the Board. For over 20 years Perry has worked tirelessly on MEI. Thank you, again, Perry -- we are truly humbled by your persistence and your accomplishments, and I hope we can continue to carry on your vision faithfully. I look forward to our next year together, -Andrew From pdr4h at virginia.edu Wed Jan 16 17:41:40 2019 From: pdr4h at virginia.edu (Roland, Perry D (pdr4h)) Date: Wed, 16 Jan 2019 16:41:40 +0000 Subject: [MEI-L] MEI Board Meeting, 15 January 2019 In-Reply-To: <7FE971C7-2B63-4EF0-8462-CE8EFC0B1E32@bodleian.ox.ac.uk> References: <7FE971C7-2B63-4EF0-8462-CE8EFC0B1E32@bodleian.ox.ac.uk> Message-ID: Dear Andrew, Thank you for the kind words and congratulations on being chosen as administrative chair. Thanks to everyone on the Board and in the community at large for all you've done to advance MEI. MEI has been a great joy to me -- I've had the pleasure to work with intelligent and caring colleagues and friends from many places in the world. I'm proud to have played a role in the establishment and development of MEI. I was always only its originator -- it was everyone else it attracted who really made it shine. Best wishes, -- p. -----Original Message----- From: mei-l On Behalf Of Andrew Hankinson Sent: Wednesday, January 16, 2019 6:48 AM To: Music Encoding Initiative Subject: [MEI-L] MEI Board Meeting, 15 January 2019 [...] _______________________________________________ mei-l mailing list mei-l at lists.uni-paderborn.de https://lists.uni-paderborn.de/mailman/listinfo/mei-l From ichiro.fujinaga at mcgill.ca Thu Jan 17 09:42:02 2019 From: ichiro.fujinaga at mcgill.ca (Ichiro Fujinaga, Prof.) Date: Thu, 17 Jan 2019 08:42:02 +0000 Subject: [MEI-L] Postdoc position available at McGill University Message-ID: <0B8C6F40-9765-4272-A393-D78EA6C34F15@mcgill.ca> The Single Interface for Music Score Searching and Analysis (SIMSSA) project at McGill University is hiring a new Postdoctoral Researcher in Music Information Retrieval. SIMSSA is a seven-year research partnership grant funded by the Social Sciences and Humanities Research Council of Canada, headed by Ichiro Fujinaga, Principal Investigator, and Julie Cumming, Co-investigator. The goal of this project is to make digital images of musical notation searchable and analyzable. Please see https://simssa.ca/opportunities for more details on how to apply. Adjudication will begin Feb. 4, 2019.
Ichiro Fujinaga From raffaeleviglianti at gmail.com Fri Jan 25 19:44:10 2019 From: raffaeleviglianti at gmail.com (Raffaele Viglianti) Date: Fri, 25 Jan 2019 13:44:10 -0500 Subject: [MEI-L] Music Encoding Course at DHSI (University of Victoria, BC) Message-ID: Dear all, Dr. Tim Duguid and I will be teaching a week-long course on Music Encoding at the Digital Humanities Summer Institute this June (it will be our second run). MEI will be, of course, front and center. Full scholarships are available for students. Please circulate widely! Announcement below. With many thanks and best wishes, Raff Viglianti Harness the power of technology for your music research! Would you like to harness your computer to conduct corpus-wide musical analyses? Are you interested in digital music editing and publishing? Full-tuition awards available! After a successful offering in 2018, the Digital Humanities Summer Institute (DHSI) is again offering its course entitled “Music Encoding Fundamentals and their Applications” in June 2019. This exciting course offers an introduction to the theory and practice of encoding electronic musical scores. It is designed for students, early career researchers and senior academics who are interested in a music-encoding project, or for those who would like to better understand the philosophy, theory, and practicalities of encoding notated music. Moreover, it will consider ways of incorporating sound and text files with encoded music notation. Participants should have a basic knowledge of how to read music, but no prior experience with coding is assumed. The course will run on June 10-14, 2019, on the beautiful campus of the University of Victoria. For more information on the full-tuition awards, see scholarships at: http://www.dhsi.org/scholarships.php . For more information on this course and DHSI, see Course 32: http://www.dhsi.org/index.php . See you in Victoria! 
-- Raffaele Viglianti, PhD Research Programmer Maryland Institute for Technology in the Humanities University of Maryland -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefan.muennich at unibas.ch Tue Jan 29 12:56:19 2019 From: stefan.muennich at unibas.ch (Stefan Münnich) Date: Tue, 29 Jan 2019 11:56:19 +0000 Subject: [MEI-L] Wikimedia Commons request for comment on musical notation files In-Reply-To: <5F2E5A49-CB3D-4720-ACFC-1BD6C969CA02@mail.mcgill.ca> References: <04614F1A-78F2-4F3B-ADB9-74952CC1A985@icloud.com>, <5F2E5A49-CB3D-4720-ACFC-1BD6C969CA02@mail.mcgill.ca> Message-ID: <59c1db3c00a54c1598b0ee9c5f9bee26@unibas.ch> Dear MEI-list members, just for the record and the reference: The very interesting discussion about music notation formats on Wikimedia Commons has now been archived under https://commons.wikimedia.org/wiki/Commons:Village_pump/Proposals/Archive/2018/11#RfC:_Musical_notation_files Thanks to all participants! -Stefan ________________________________ From: mei-l on behalf of Andrew Hankinson Sent: Monday, 26 November 2018 11:28 To: Music Encoding Initiative Subject: [MEI-L] Fwd: Wikimedia Commons request for comment on musical notation files FYI, it would be good to get some members of the MEI community involved in this discussion, as there are a few things that need clarifying on what is there now. -Andrew Begin forwarded message: From: jc86035 > Subject: Wikimedia Commons request for comment on musical notation files Date: 26 November 2018 at 10:17:44 GMT To: public-music-notation at w3.org, mei-l at lists.uni-paderborn.de, lilypond-user at gnu.org, lilypond-devel at gnu.org Resent-From: public-music-notation at w3.org Hi all, I'm a Wikipedia and Wikimedia Commons editor (User:Jc86035).
Earlier in November I opened a request for comment on Wikimedia Commons, proposing that several musical notation file formats (originally MuseScore, LilyPond and MusicXML) become uploadable on Commons, with the intention of eventually allowing audio and scores of some or all of the file types to be shown in pages like Wikipedia articles. (The MediaWiki software already has Extension:Score, based on LilyPond and Fluidsynth, but there are various benefits to allowing music notation to be stored as files. Currently notation is shown in Wikipedia articles as images or PDFs, or used directly through the Score extension.) Your feedback on which file formats Commons should support would be much appreciated; several developers have already provided input. Currently, the discussion is also evaluating MNX and MEI (of which the former doesn't exist yet; it's not clear to us how these two formats would interface and if supporting both would be redundant). If you've never edited a Wikimedia site before, anyone can create an account and participate in the discussion. (If discussion continues on the mailing lists I will link to new posts, although it would be preferable to have discussion all in one place.) Best jc86035 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From andrew.hankinson at bodleian.ox.ac.uk Wed Jan 30 09:05:17 2019 From: andrew.hankinson at bodleian.ox.ac.uk (Andrew Hankinson) Date: Wed, 30 Jan 2019 08:05:17 +0000 Subject: [MEI-L] Wikimedia Commons request for comment on musical notation files In-Reply-To: <59c1db3c00a54c1598b0ee9c5f9bee26@unibas.ch> References: <04614F1A-78F2-4F3B-ADB9-74952CC1A985@icloud.com> <5F2E5A49-CB3D-4720-ACFC-1BD6C969CA02@mail.mcgill.ca> <59c1db3c00a54c1598b0ee9c5f9bee26@unibas.ch> Message-ID: <24D33E47-8B1E-45E4-8A79-338016ABCD4E@bodleian.ox.ac.uk> It seems the conversation has moved here: https://phabricator.wikimedia.org/T208494 -Andrew > On 29 Jan 2019, at 12:56, Stefan Münnich wrote: > > Dear MEI-list members, [...] _______________________________________________ mei-l mailing list mei-l at lists.uni-paderborn.de https://lists.uni-paderborn.de/mailman/listinfo/mei-l From goebl at mdw.ac.at Tue Feb 5 17:10:34 2019 From: goebl at mdw.ac.at (Werner Goebl) Date: Tue, 5 Feb 2019 17:10:34 +0100 Subject: [MEI-L] Slurs/ties across repetitions and/or endings Message-ID: Dear list, How would you encode a slur or tie that spans across a repetition sign and an ending block or across two ending blocks? Please see attached an excerpt from Beethoven Op. 57, 2nd movement. 1) There is a tie in the bass (bars 1--2) across a repeat start. The same tie is drawn in bar 9 with a repetition bar line in the first ending (prima volta) that leads back to bar 2.
You could encode the ties as note attributes with multiple tie="i" or tie="t". The first example in bar 1--2/9--2 would be like this: ... ... ... ... ... ... ... ... 2) Another version of this problem is the tie from bar 7--8 into ending 1 and into ending 2. (A second such example occurs in the 3rd staff group.) ... ... ... ... ... ... ... ... In both approaches, Verovio only renders the tie in the first ending, but not in the second. Is there a better way to encode such overlapping slurs/ties, in a way that Verovio actually renders? Would a modified tie element help that allows for multiple endids in such cases? Or is there another correct way of encoding this? All the best, Werner & David -- Dr. Werner Goebl Associate Professor Department of Music Acoustics – Wiener Klangstil University of Music and Performing Arts Vienna Anton-von-Webern-Platz 1 1030 Vienna, Austria Tel. +43 1 71155 4311 Fax. +43 1 71155 4399 http://iwk.mdw.ac.at/goebl -------------- next part -------------- A non-text attachment was scrubbed... Name: Beethoven_Op57_2_excerpt.png Type: image/png Size: 766595 bytes Desc: not available URL: From andrew.hankinson at bodleian.ox.ac.uk Tue Feb 5 17:16:27 2019 From: andrew.hankinson at bodleian.ox.ac.uk (Andrew Hankinson) Date: Tue, 5 Feb 2019 16:16:27 +0000 Subject: [MEI-L] Slurs/ties across repetitions and/or endings In-Reply-To: References: Message-ID: <5ED27EC0-F025-41B1-AA9F-28710E21F133@bodleian.ox.ac.uk> Hi Werner, I would use the @startid and @endid attributes on the (or ) elements: This would mean that you would need to assign xml:ids to the note elements: -Andrew > On 5 Feb 2019, at 17:10, Werner Goebl wrote: > > Dear list, > > How would you encode a slur or tie that spans across a repetition sign and an ending block or across two ending blocks? > > Please see attached an excerpt from Beethoven Op. 57, 2nd movement. > > 1) There is a tie in the bass (bars 1--2) across a repeat start. 
The same tie is drawn in bar 9 with a repetition bar line in the first ending (prima volta) that leads back to bar 2. > > You could encode the ties as note attributes with multiple tie="i" or tie="t". > > The first example in bar 1--2/9--2 would be like this: > > > ... > > ... > > > ... > > ... > > ... > > ... > > ... > > ... > > > > 2) Another version of this problem is the tie from bar 7--8 into ending 1 and into ending 2. (A second such example occurs in the 3rd staff group.) > > > ... > > ... > > > > ... > > ... > > ... > > > > ... > > ... > > ... > > > In both approaches, Verovio only renders the tie in the first ending, but not in the second. > > Is there a better way to encode such overlapping slurs/ties, in a way that Verovio actually renders? Would a modified tie element help that allows for multiple endids in such cases? Or is there another correct way of encoding this? > > All the best, > Werner & David > > > -- > Dr. Werner Goebl > Associate Professor > Department of Music Acoustics – Wiener Klangstil > University of Music and Performing Arts Vienna > Anton-von-Webern-Platz 1 > 1030 Vienna, Austria > Tel. +43 1 71155 4311 > Fax. +43 1 71155 4399 > http://iwk.mdw.ac.at/goebl > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l From thomas.weber at notengrafik.com Tue Feb 5 17:31:08 2019 From: thomas.weber at notengrafik.com (Thomas Weber) Date: Tue, 5 Feb 2019 17:31:08 +0100 Subject: [MEI-L] Slurs/ties across repetitions and/or endings In-Reply-To: <5ED27EC0-F025-41B1-AA9F-28710E21F133@bodleian.ox.ac.uk> References: <5ED27EC0-F025-41B1-AA9F-28710E21F133@bodleian.ox.ac.uk> Message-ID: But how would you encode that the tie before the first ending block has two endings – one in block one and one in block two? 
Am 05.02.19 um 17:16 schrieb Andrew Hankinson: > Hi Werner, > > I would use the @startid and @endid attributes on the (or ) elements: > > > > This would mean that you would need to assign xml:ids to the note elements: > > > > > > -Andrew > >> On 5 Feb 2019, at 17:10, Werner Goebl wrote: >> >> Dear list, >> >> How would you encode a slur or tie that spans across a repetition sign and an ending block or across two ending blocks? >> >> Please see attached an excerpt from Beethoven Op. 57, 2nd movement. >> >> 1) There is a tie in the bass (bars 1--2) across a repeat start. The same tie is drawn in bar 9 with a repetition bar line in the first ending (prima volta) that leads back to bar 2. >> >> You could encode the ties as note attributes with multiple tie="i" or tie="t". >> >> The first example in bar 1--2/9--2 would be like this: >> >> >> ... >> >> ... >> >> >> ... >> >> ... >> >> ... >> >> ... >> >> ... >> >> ... >> >> >> >> 2) Another version of this problem is the tie from bar 7--8 into ending 1 and into ending 2. (A second such example occurs in the 3rd staff group.) >> >> >> ... >> >> ... >> >> >> >> ... >> >> ... >> >> ... >> >> >> >> ... >> >> ... >> >> ... >> >> >> In both approaches, Verovio only renders the tie in the first ending, but not in the second. >> >> Is there a better way to encode such overlapping slurs/ties, in a way that Verovio actually renders? Would a modified tie element help that allows for multiple endids in such cases? Or is there another correct way of encoding this? >> >> All the best, >> Werner & David >> >> >> -- >> Dr. Werner Goebl >> Associate Professor >> Department of Music Acoustics – Wiener Klangstil >> University of Music and Performing Arts Vienna >> Anton-von-Webern-Platz 1 >> 1030 Vienna, Austria >> Tel. +43 1 71155 4311 >> Fax. 
+43 1 71155 4399 >> http://iwk.mdw.ac.at/goebl >> _______________________________________________ >> mei-l mailing list >> mei-l at lists.uni-paderborn.de >> https://lists.uni-paderborn.de/mailman/listinfo/mei-l > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l -- Notengrafik Berlin GmbH HRB 150007 UstID: DE 289234097 Geschäftsführer: Thomas Weber und Werner J. Wolff fon: +49 30 25359505 Friedrichstraße 23a 10969 Berlin notengrafik.com From goebl at mdw.ac.at Tue Feb 5 17:31:55 2019 From: goebl at mdw.ac.at (Werner Goebl) Date: Tue, 5 Feb 2019 17:31:55 +0100 Subject: [MEI-L] Slurs/ties across repetitions and/or endings In-Reply-To: <5ED27EC0-F025-41B1-AA9F-28710E21F133@bodleian.ox.ac.uk> References: <5ED27EC0-F025-41B1-AA9F-28710E21F133@bodleian.ox.ac.uk> Message-ID: Hi Andrew, thanks for your message. Sure, but then you only have one start and one end id, but for my example 1, I need two start ids or for example 2 two end ids. To clarify my problem, please see the attached MEI file with two slur elements for each of the two problems (renders strangely). Thanks, Werner On 05.02.19 17:16, Andrew Hankinson wrote: > Hi Werner, > > I would use the @startid and @endid attributes on the (or ) elements: > > > > This would mean that you would need to assign xml:ids to the note elements: > > > > > > -Andrew > >> On 5 Feb 2019, at 17:10, Werner Goebl wrote: >> >> Dear list, >> >> How would you encode a slur or tie that spans across a repetition sign and an ending block or across two ending blocks? >> >> Please see attached an excerpt from Beethoven Op. 57, 2nd movement. >> >> 1) There is a tie in the bass (bars 1--2) across a repeat start. The same tie is drawn in bar 9 with a repetition bar line in the first ending (prima volta) that leads back to bar 2. >> >> You could encode the ties as note attributes with multiple tie="i" or tie="t". 
>> >> The first example in bar 1--2/9--2 would be like this: >> >> >> ... >> >> ... >> >> >> ... >> >> ... >> >> ... >> >> ... >> >> ... >> >> ... >> >> >> >> 2) Another version of this problem is the tie from bar 7--8 into ending 1 and into ending 2. (A second such example occurs in the 3rd staff group.) >> >> >> ... >> >> ... >> >> >> >> ... >> >> ... >> >> ... >> >> >> >> ... >> >> ... >> >> ... >> >> >> In both approaches, Verovio only renders the tie in the first ending, but not in the second. >> >> Is there a better way to encode such overlapping slurs/ties, in a way that Verovio actually renders? Would a modified tie element help that allows for multiple endids in such cases? Or is there another correct way of encoding this? >> >> All the best, >> Werner & David >> >> >> -- >> Dr. Werner Goebl >> Associate Professor >> Department of Music Acoustics – Wiener Klangstil >> University of Music and Performing Arts Vienna >> Anton-von-Webern-Platz 1 >> 1030 Vienna, Austria >> Tel. +43 1 71155 4311 >> Fax. +43 1 71155 4399 >> http://iwk.mdw.ac.at/goebl >> _______________________________________________ >> mei-l mailing list >> mei-l at lists.uni-paderborn.de >> https://lists.uni-paderborn.de/mailman/listinfo/mei-l > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l > -- Dr. Werner Goebl Associate Professor Department of Music Acoustics – Wiener Klangstil University of Music and Performing Arts Vienna Anton-von-Webern-Platz 1 1030 Vienna, Austria Tel. +43 1 71155 4311 Fax. +43 1 71155 4399 http://iwk.mdw.ac.at/goebl -------------- next part -------------- A non-text attachment was scrubbed... 
Name: Beethoven_Op57_2_excerpt.mei Type: text/xml Size: 25734 bytes Desc: not available URL: From andrew.hankinson at bodleian.ox.ac.uk Wed Feb 6 08:43:13 2019 From: andrew.hankinson at bodleian.ox.ac.uk (Andrew Hankinson) Date: Wed, 6 Feb 2019 07:43:13 +0000 Subject: [MEI-L] Slurs/ties across repetitions and/or endings In-Reply-To: References: <5ED27EC0-F025-41B1-AA9F-28710E21F133@bodleian.ox.ac.uk> Message-ID: You could also have two ties with the same startid and a different endid. I'm not sure how Verovio would render it, but that would seem to me to be the most 'semantic' markup. -Andrew > On 5 Feb 2019, at 17:31, Werner Goebl wrote: > > Hi Andrew, > > thanks for your message. Sure, but then you only have one start and one end id, but for my example 1, I need two start ids or for example 2 two end ids. > > To clarify my problem, please see the attached MEI file with two slur elements for each of the two problems (renders strangely). > > Thanks, > Werner > > On 05.02.19 17:16, Andrew Hankinson wrote: >> Hi Werner, >> I would use the @startid and @endid attributes on the (or ) elements: >> >> This would mean that you would need to assign xml:ids to the note elements: >> >> >> -Andrew >>> On 5 Feb 2019, at 17:10, Werner Goebl wrote: >>> >>> Dear list, >>> >>> How would you encode a slur or tie that spans across a repetition sign and an ending block or across two ending blocks? >>> >>> Please see attached an excerpt from Beethoven Op. 57, 2nd movement. >>> >>> 1) There is a tie in the bass (bars 1--2) across a repeat start. The same tie is drawn in bar 9 with a repetition bar line in the first ending (prima volta) that leads back to bar 2. >>> >>> You could encode the ties as note attributes with multiple tie="i" or tie="t". >>> >>> The first example in bar 1--2/9--2 would be like this: >>> >>> >>> ... >>> >>> ... >>> >>> >>> ... >>> >>> ... >>> >>> ... >>> >>> ... >>> >>> ... >>> >>> ... 
>>> >>> >>> >>> 2) Another version of this problem is the tie from bar 7--8 into ending 1 and into ending 2. (A second such example occurs in the 3rd staff group.) >>> >>> >>> ... >>> >>> ... >>> >>> >>> >>> ... >>> >>> ... >>> >>> ... >>> >>> >>> >>> ... >>> >>> ... >>> >>> ... >>> >>> >>> In both approaches, Verovio only renders the tie in the first ending, but not in the second. >>> >>> Is there a better way to encode such overlapping slurs/ties, in a way that Verovio actually renders? Would a modified tie element help that allows for multiple endids in such cases? Or is there another correct way of encoding this? >>> >>> All the best, >>> Werner & David >>> >>> >>> -- >>> Dr. Werner Goebl >>> Associate Professor >>> Department of Music Acoustics – Wiener Klangstil >>> University of Music and Performing Arts Vienna >>> Anton-von-Webern-Platz 1 >>> 1030 Vienna, Austria >>> Tel. +43 1 71155 4311 >>> Fax. +43 1 71155 4399 >>> http://iwk.mdw.ac.at/goebl >>> _______________________________________________ >>> mei-l mailing list >>> mei-l at lists.uni-paderborn.de >>> https://lists.uni-paderborn.de/mailman/listinfo/mei-l >> _______________________________________________ >> mei-l mailing list >> mei-l at lists.uni-paderborn.de >> https://lists.uni-paderborn.de/mailman/listinfo/mei-l > > -- > Dr. Werner Goebl > Associate Professor > Department of Music Acoustics – Wiener Klangstil > University of Music and Performing Arts Vienna > Anton-von-Webern-Platz 1 > 1030 Vienna, Austria > Tel. +43 1 71155 4311 > Fax. 
+43 1 71155 4399 > http://iwk.mdw.ac.at/goebl > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l From craigsapp at gmail.com Wed Feb 6 10:06:54 2019 From: craigsapp at gmail.com (Craig Sapp) Date: Wed, 6 Feb 2019 04:06:54 -0500 Subject: [MEI-L] Slurs/ties across repetitions and/or endings In-Reply-To: References: <5ED27EC0-F025-41B1-AA9F-28710E21F133@bodleian.ox.ac.uk> Message-ID: The notation in measures 18 and 19 in the third system (given as a PDF attachment in case inline images are removed): [image: Screen Shot 2019-02-06 at 3.39.41 AM.png] Is shorthand for: [image: Screen Shot 2019-02-06 at 3.39.46 AM.png] MEI's primary focus is on the visual aspect of the notation. So the slur/tie in the second repeat should have a @startid attaching to the barline at the beginning of the second repeat (or a @tstamp of 0, also representing the barline at the start of the measure). When the semantics of the situation are in conflict with the visual grammar (as in this case, where the slur/tie starts on a barline), then there should be a parallel gestural parameter to clarify the performance meaning (i.e., "semantics") of the element. So in these cases, the slur/tie starting on the barline in the second repeat should contain an additional gestural attribute called @startid.ges. This is a parallel situation to note@accid & note@accid.ges, note@dur & note@dur.ges, as well as note@dots & note@dots.ges. @startid.ges does not yet exist: https://music-encoding.org/guidelines/v4/elements/slur.html https://music-encoding.org/guidelines/v4/elements/tie.html So it will first need to be added to the MEI schema before it can be used. For notation rendering, @startid would be used, but for performance rendering (i.e., converting to a MIDI file), @startid.ges would be used.
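[Editorial note: Craig's proposal above could be sketched roughly as follows. This is a hypothetical illustration, not valid MEI: @startid.ges is, as he says, not yet part of the schema, and all xml:id values and note parameters are invented.]

```xml
<!-- Second ending: visually the tie hangs from the barline, so
     @tstamp="0" (the barline position) marks the visual start.
     The *proposed* @startid.ges names the note at the end of bar 18
     that the tie gesturally continues from; it does not validate
     against MEI 4.0. All IDs here are invented for illustration. -->
<measure n="19">
  <staff n="2">
    <layer n="1">
      <note xml:id="n-m19-bass" pname="c" oct="2" dur="2"/>
    </layer>
  </staff>
  <tie tstamp="0" endid="#n-m19-bass" startid.ges="#n-m18-bass"/>
</measure>
```

A renderer would draw the tie from the barline using @tstamp, while a MIDI exporter would sustain the note identified by @startid.ges.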
This sort of encoding would also apply to the other situations on the page that you mention (where @endid.ges will also be required). -=+Craig On Wed, 6 Feb 2019 at 02:43, Andrew Hankinson < andrew.hankinson at bodleian.ox.ac.uk> wrote: > You could also have two ties with the same startid and a different endid. > I'm not sure how Verovio would render it, but that would seem to me to be > the most 'semantic' markup. > > -Andrew > > > On 5 Feb 2019, at 17:31, Werner Goebl wrote: > > > > Hi Andrew, > > > > thanks for your message. Sure, but then you only have one start and one > end id, but for my example 1, I need two start ids or for example 2 two end > ids. > > > > To clarify my problem, please see the attached MEI file with two slur > elements for each of the two problems (renders strangely). > > > > Thanks, > > Werner > > > > On 05.02.19 17:16, Andrew Hankinson wrote: > >> Hi Werner, > >> I would use the @startid and @endid attributes on the (or ) > elements: > >> > >> This would mean that you would need to assign xml:ids to the note > elements: > >> > >> > >> -Andrew > >>> On 5 Feb 2019, at 17:10, Werner Goebl wrote: > >>> > >>> Dear list, > >>> > >>> How would you encode a slur or tie that spans across a repetition sign > and an ending block or across two ending blocks? > >>> > >>> Please see attached an excerpt from Beethoven Op. 57, 2nd movement. > >>> > >>> 1) There is a tie in the bass (bars 1--2) across a repeat start. The > same tie is drawn in bar 9 with a repetition bar line in the first ending > (prima volta) that leads back to bar 2. > >>> > >>> You could encode the ties as note attributes with multiple tie="i" or > tie="t". > >>> > >>> The first example in bar 1--2/9--2 would be like this: > >>> > >>> > >>> ... > >>> > >>> ... > >>> > >>> > >>> ... > >>> > >>> ... > >>> > >>> ... > >>> > >>> ... > >>> > >>> ... > >>> > >>> ... > >>> > >>> > >>> > >>> 2) Another version of this problem is the tie from bar 7--8 into > ending 1 and into ending 2. 
(A second such example occurs in the 3rd staff > group.) > >>> > >>> > >>> ... > >>> > >>> ... > >>> > >>> > >>> > >>> ... > >>> > >>> ... > >>> > >>> ... > >>> > >>> > >>> > >>> ... > >>> > >>> ... > >>> > >>> ... > >>> > >>> > >>> In both approaches, Verovio only renders the tie in the first ending, > but not in the second. > >>> > >>> Is there a better way to encode such overlapping slurs/ties, in a way > that Verovio actually renders? Would a modified tie element help that > allows for multiple endids in such cases? Or is there another correct way > of encoding this? > >>> > >>> All the best, > >>> Werner & David > >>> > >>> > >>> -- > >>> Dr. Werner Goebl > >>> Associate Professor > >>> Department of Music Acoustics – Wiener Klangstil > >>> University of Music and Performing Arts Vienna > >>> Anton-von-Webern-Platz 1 > >>> 1030 Vienna, Austria > >>> Tel. +43 1 71155 4311 > >>> Fax. +43 1 71155 4399 > >>> http://iwk.mdw.ac.at/goebl > >>> > _______________________________________________ > >>> mei-l mailing list > >>> mei-l at lists.uni-paderborn.de > >>> https://lists.uni-paderborn.de/mailman/listinfo/mei-l > >> _______________________________________________ > >> mei-l mailing list > >> mei-l at lists.uni-paderborn.de > >> https://lists.uni-paderborn.de/mailman/listinfo/mei-l > > > > -- > > Dr. Werner Goebl > > Associate Professor > > Department of Music Acoustics – Wiener Klangstil > > University of Music and Performing Arts Vienna > > Anton-von-Webern-Platz 1 > > 1030 Vienna, Austria > > Tel. +43 1 71155 4311 > > Fax. 
+43 1 71155 4399 > > http://iwk.mdw.ac.at/goebl > > > _______________________________________________ > > mei-l mailing list > > mei-l at lists.uni-paderborn.de > > https://lists.uni-paderborn.de/mailman/listinfo/mei-l > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Screen Shot 2019-02-06 at 3.39.46 AM.png Type: image/png Size: 71191 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Screen Shot 2019-02-06 at 3.39.41 AM.png Type: image/png Size: 61235 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: beetrep.pdf Type: application/pdf Size: 16806 bytes Desc: not available URL: From sapov at mozarteum.at Wed Feb 6 10:12:03 2019 From: sapov at mozarteum.at (Oleksii Sapov Internationale Stiftung Mozarteum) Date: Wed, 06 Feb 2019 10:12:03 +0100 Subject: [MEI-L] Slurs/ties across repetitions and/or endings In-Reply-To: References: <5ED27EC0-F025-41B1-AA9F-28710E21F133@bodleian.ox.ac.uk> Message-ID: <20190206101203.EGroupware.XoegwnW1FTqdarkllpONvK8@_> Hi David, Werner, I didn't try it with endings, but with choice/app it is indeed possible to have 2 slurs. For instance: slur_1/@startid="note_regular" @endid="note_lem" slur_2/@startid="note_regular" @endid="note_rdg" Verovio renders only one of the slurs then. ----------------ursprüngliche Nachricht----------------- Von: Andrew Hankinson [andrew.hankinson at bodleian.ox.ac.uk ] An: Music Encoding Initiative [mei-l at lists.uni-paderborn.de ] Datum: Wed, 6 Feb 2019 07:43:13 +0000 ------------------------------------------------- > You could also have two ties with the same startid and a different > endid. 
I'm not sure how Verovio would render it, but that would seem > to me to be the most 'semantic' markup. > > -Andrew > >> On 5 Feb 2019, at 17:31, Werner Goebl wrote: >> >> Hi Andrew, >> >> thanks for your message. Sure, but then you only have one start and >> one end id, but for my example 1, I need two start ids or for >> example 2 two end ids. >> >> To clarify my problem, please see the attached MEI file with two >> slur elements for each of the two problems (renders strangely). >> >> Thanks, >> Werner >> >> On 05.02.19 17:16, Andrew Hankinson wrote: >>> Hi Werner, >>> I would use the @startid and @endid attributes on the (or ) elements: >>> >>> This would mean that you would need to assign xml:ids to the note elements: >>> >>> >>> -Andrew >>>> On 5 Feb 2019, at 17:10, Werner Goebl wrote: >>>> >>>> Dear list, >>>> >>>> How would you encode a slur or tie that spans across a repetition >>>> sign and an ending block or across two ending blocks? >>>> >>>> Please see attached an excerpt from Beethoven Op. 57, 2nd movement. >>>> >>>> 1) There is a tie in the bass (bars 1--2) across a repeat start. >>>> The same tie is drawn in bar 9 with a repetition bar line in the >>>> first ending (prima volta) that leads back to bar 2. >>>> >>>> You could encode the ties as note attributes with multiple >>>> tie="i" or tie="t". >>>> >>>> The first example in bar 1--2/9--2 would be like this: >>>> >>>> >>>> ... >>>> >>>> ... >>>> >>>> >>>> ... >>>> >>>> ... >>>> >>>> ... >>>> >>>> ... >>>> >>>> ... >>>> >>>> ... >>>> >>>> >>>> >>>> 2) Another version of this problem is the tie from bar 7--8 into >>>> ending 1 and into ending 2. (A second such example occurs in the >>>> 3rd staff group.) >>>> >>>> >>>> ... >>>> >>>> ... >>>> >>>> >>>> >>>> ... >>>> >>>> ... >>>> >>>> ... >>>> >>>> >>>> >>>> ... >>>> >>>> ... >>>> >>>> ... >>>> >>>> >>>> In both approaches, Verovio only renders the tie in the first >>>> ending, but not in the second. 
>>>> >>>> Is there a better way to encode such overlapping slurs/ties, in a >>>> way that Verovio actually renders? Would a modified tie element >>>> help that allows for multiple endids in such cases? Or is there >>>> another correct way of encoding this? >>>> >>>> All the best, >>>> Werner & David >>>> >>>> >>>> -- >>>> Dr. Werner Goebl >>>> Associate Professor >>>> Department of Music Acoustics – Wiener Klangstil >>>> University of Music and Performing Arts Vienna >>>> Anton-von-Webern-Platz 1 >>>> 1030 Vienna, Austria >>>> Tel. +43 1 71155 4311 >>>> Fax. +43 1 71155 4399 >>>> http://iwk.mdw.ac.at/goebl >>>> _______________________________________________ >>>> mei-l mailing list >>>> mei-l at lists.uni-paderborn.de >>>> https://lists.uni-paderborn.de/mailman/listinfo/mei-l >>> _______________________________________________ >>> mei-l mailing list >>> mei-l at lists.uni-paderborn.de >>> https://lists.uni-paderborn.de/mailman/listinfo/mei-l >> >> -- >> Dr. Werner Goebl >> Associate Professor >> Department of Music Acoustics – Wiener Klangstil >> University of Music and Performing Arts Vienna >> Anton-von-Webern-Platz 1 >> 1030 Vienna, Austria >> Tel. +43 1 71155 4311 >> Fax. +43 1 71155 4399 >> http://iwk.mdw.ac.at/goebl >> _______________________________________________ >> mei-l mailing list >> mei-l at lists.uni-paderborn.de >> https://lists.uni-paderborn.de/mailman/listinfo/mei-l > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l > ---------------------------------- Oleksii Sapov, BA MA Mozart-Institut/ Digitale Mozart-Edition Internationale Stiftung Mozarteum Schwarzstr. 
26 5020 Salzburg, Austria T +43 (0) 662 889 40 964 E mailto:sapov at mozarteum.at [ http://www.mozarteum.at/ -> www.mozarteum.at ] [ http://www.mozarteum.at/content/newsletter -> Newsletter Stiftung Mozarteum ] [ http://www.facebook.com/StiftungMozarteum -> Facebook Stiftung Mozarteum ] ZVR: 438729131, UID: ATU33977907 -------------- next part -------------- An HTML attachment was scrubbed... URL: From david.lewis at oerc.ox.ac.uk Wed Feb 6 10:44:12 2019 From: david.lewis at oerc.ox.ac.uk (David Lewis) Date: Wed, 6 Feb 2019 09:44:12 +0000 Subject: [MEI-L] Slurs/ties across repetitions and/or endings In-Reply-To: <20190206101203.EGroupware.XoegwnW1FTqdarkllpONvK8@_> References: <5ED27EC0-F025-41B1-AA9F-28710E21F133@bodleian.ox.ac.uk> <20190206101203.EGroupware.XoegwnW1FTqdarkllpONvK8@_> Message-ID: <0EE5CDC8-C99B-4783-A623-CB4557C6DC77@oerc.ox.ac.uk> Hi, I'm not going to suggest a third option, but rather to register a preference. By default, I think that Andrew's solution is better than Craig's (sorry Craig). If we believe that the slur should always prioritise the visual dimension, then any slur over a system break should be treated this way, which seems like a recipe for an encoding that is too susceptible to minor engraving changes. I do agree with Craig's observation that we could do with the facility to distinguish visual and semantic start and end ids in general, though. D > On 6 Feb 2019, at 09:12, Oleksii Sapov Internationale Stiftung Mozarteum wrote: > > Hi David, Werner, > > I didn't try it with endings, but with choice/app it is indeed possible to have 2 slurs. > For instance: > slur_1/@startid="note_regular" @endid="note_lem" > slur_2/@startid="note_regular" @endid="note_rdg" > > Verovio renders only one of the slurs then.
> > > ----------------ursprüngliche Nachricht----------------- > Von: Andrew Hankinson [andrew.hankinson at bodleian.ox.ac.uk] > An: Music Encoding Initiative [mei-l at lists.uni-paderborn.de] > Datum: Wed, 6 Feb 2019 07:43:13 +0000 > ------------------------------------------------- > > > > You could also have two ties with the same startid and a different endid. I'm not > > sure how Verovio would render it, but that would seem to me to be the most > > 'semantic' markup. > > > > -Andrew > > > >> On 5 Feb 2019, at 17:31, Werner Goebl wrote: > >> > >> Hi Andrew, > >> > >> thanks for your message. Sure, but then you only have one start and one end id, but for my example 1, I need two start ids or for example 2 two end ids. > >> > >> To clarify my problem, please see the attached MEI file with two slur elements for each of the two problems (renders strangely). > >> > >> Thanks, > >> Werner > >> > >> On 05.02.19 17:16, Andrew Hankinson wrote: > >>> Hi Werner, > >>> I would use the @startid and @endid attributes on the (or ) elements: > >>> > >>> This would mean that you would need to assign xml:ids to the note elements: > >>> > >>> > >>> -Andrew > >>>> On 5 Feb 2019, at 17:10, Werner Goebl wrote: > >>>> > >>>> Dear list, > >>>> > >>>> How would you encode a slur or tie that spans across a repetition sign and an ending block or across two ending blocks? > >>>> > >>>> Please see attached an excerpt from Beethoven Op. 57, 2nd movement. > >>>> > >>>> 1) There is a tie in the bass (bars 1--2) across a repeat start. The same tie is drawn in bar 9 with a repetition bar line in the first ending (prima volta) that leads back to bar 2. > >>>> > >>>> You could encode the ties as note attributes with multiple tie="i" or tie="t". > >>>> > >>>> The first example in bar 1--2/9--2 would be like this: > >>>> > >>>> > >>>> ... > >>>> > >>>> ... > >>>> > >>>> > >>>> ... > >>>> > >>>> ... > >>>> > >>>> ... > >>>> > >>>> ... > >>>> > >>>> ... > >>>> > >>>> ... 
> >>>> > >>>> > >>>> > >>>> 2) Another version of this problem is the tie from bar 7--8 into ending 1 and into ending 2. (A second such example occurs in the 3rd staff group.) > >>>> > >>>> > >>>> ... > >>>> > >>>> ... > >>>> > >>>> > >>>> > >>>> ... > >>>> > >>>> ... > >>>> > >>>> ... > >>>> > >>>> > >>>> > >>>> ... > >>>> > >>>> ... > >>>> > >>>> ... > >>>> > >>>> > >>>> In both approaches, Verovio only renders the tie in the first ending, but not in the second. > >>>> > >>>> Is there a better way to encode such overlapping slurs/ties, in a way that Verovio actually renders? Would a modified tie element help that allows for multiple endids in such cases? Or is there another correct way of encoding this? > >>>> > >>>> All the best, > >>>> Werner & David > >>>> > >>>> > >>>> -- > >>>> Dr. Werner Goebl > >>>> Associate Professor > >>>> Department of Music Acoustics – Wiener Klangstil > >>>> University of Music and Performing Arts Vienna > >>>> Anton-von-Webern-Platz 1 > >>>> 1030 Vienna, Austria > >>>> Tel. +43 1 71155 4311 > >>>> Fax. +43 1 71155 4399 > >>>> http://iwk.mdw.ac.at/goebl > >>>> _______________________________________________ > >>>> mei-l mailing list > >>>> mei-l at lists.uni-paderborn.de > >>>> https://lists.uni-paderborn.de/mailman/listinfo/mei-l > >>> _______________________________________________ > >>> mei-l mailing list > >>> mei-l at lists.uni-paderborn.de > >>> https://lists.uni-paderborn.de/mailman/listinfo/mei-l > >> > >> -- > >> Dr. Werner Goebl > >> Associate Professor > >> Department of Music Acoustics – Wiener Klangstil > >> University of Music and Performing Arts Vienna > >> Anton-von-Webern-Platz 1 > >> 1030 Vienna, Austria > >> Tel. +43 1 71155 4311 > >> Fax. 
+43 1 71155 4399 > >> http://iwk.mdw.ac.at/goebl > >> _______________________________________________ > >> mei-l mailing list > >> mei-l at lists.uni-paderborn.de > >> https://lists.uni-paderborn.de/mailman/listinfo/mei-l > > > > _______________________________________________ > > mei-l mailing list > > mei-l at lists.uni-paderborn.de > > https://lists.uni-paderborn.de/mailman/listinfo/mei-l > > > > ---------------------------------- > Oleksii Sapov, BA MA > Mozart-Institut/ Digitale Mozart-Edition > > Internationale Stiftung Mozarteum > Schwarzstr. 26 > 5020 Salzburg, Austria > T +43 (0) 662 889 40 964 > E sapov at mozarteum.at > www.mozarteum.at > > Newsletter Stiftung Mozarteum > Facebook Stiftung Mozarteum > ZVR: 438729131, UID: ATU33977907 > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l From craigsapp at gmail.com Wed Feb 6 11:07:09 2019 From: craigsapp at gmail.com (Craig Sapp) Date: Wed, 6 Feb 2019 05:07:09 -0500 Subject: [MEI-L] Slurs/ties across repetitions and/or endings In-Reply-To: <0EE5CDC8-C99B-4783-A623-CB4557C6DC77@oerc.ox.ac.uk> References: <5ED27EC0-F025-41B1-AA9F-28710E21F133@bodleian.ox.ac.uk> <20190206101203.EGroupware.XoegwnW1FTqdarkllpONvK8@_> <0EE5CDC8-C99B-4783-A623-CB4557C6DC77@oerc.ox.ac.uk> Message-ID: > If we believe that the slur should always prioritise the visual > dimension, then any slur over a system break should be treated this way, > which seems like a recipe for an encoding that is too susceptible minor > engraving changes. > Touché, but I was not thinking that literally. I was thinking that there is a visual tie/slur in a broader sense (and I let Verovio or another rendering software handle the lower-level visual breaking of that single slur across systems). For note durations and pitches, there is already a dual system for visual/gestural attributes.
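This dual system can be sketched as follows (the attribute values here are illustrative, not taken from the original message, and @startid.ges in the last line is the proposed, not-yet-existing attribute):

```xml
<!-- Looks like a half note, but plays as a dotted quarter: -->
<note pname="c" oct="4" dur="2" dur.ges="4" dots.ges="1"/>

<!-- No accidental is drawn, yet the note sounds flat
     (e.g. implied by the key signature): -->
<note pname="b" oct="4" accid.ges="f"/>

<!-- By analogy, a hanging tie: drawn from the barline, but
     gesturally starting from a note before the repeat: -->
<tie tstamp="0" startid.ges="#noteBeforeRepeat" endid="#noteInSecondEnding"/>
```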
For example, here is a note that looks like a half note but would be rendered in MIDI as a dotted quarter note: Here is a more common case where a note looks like it does not have an accidental alteration visually, but it does (due to the key signature, or an alteration earlier in the measure): Here is a more extreme case for a B-natural note in 18th-century music: By analogy, a hanging tie could look like this: _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l > -------------- next part -------------- An HTML attachment was scrubbed... URL: From goebl at mdw.ac.at Wed Feb 6 14:27:25 2019 From: goebl at mdw.ac.at (Werner Goebl) Date: Wed, 6 Feb 2019 14:27:25 +0100 Subject: [MEI-L] Slurs/ties across repetitions and/or endings In-Reply-To: References: <5ED27EC0-F025-41B1-AA9F-28710E21F133@bodleian.ox.ac.uk> <20190206101203.EGroupware.XoegwnW1FTqdarkllpONvK8@_> <0EE5CDC8-C99B-4783-A623-CB4557C6DC77@oerc.ox.ac.uk> Message-ID: Thank you all for the great discussion and especially @Craig for clarifying the problem. On 06.02.19 11:18, Laurent Pugin wrote: > > By analogy, a hanging tie could look like this: > From kelnreiter at mozarteum.at Fri Mar 1 08:52:26 2019 From: kelnreiter at mozarteum.at (Franz Kelnreiter) Date: Fri, 1 Mar 2019 08:52:26 +0100 Subject: [MEI-L] MEC 2019 Vienna: Registration open Message-ID: Dear MEI-L, We are delighted to announce that the registration for the Music Encoding Conference 2019, held from 29 May to 1 June at the University of Vienna, Austria, is now open! You are invited to register for this event via conftool for MEC 2019. You can also find information about the program, venues and accommodations at the conference website.
Conference fees are the following: 1-31 March: €110 (€80 for students); conference dinner €10. 1 April-20 May: €130 (€100 for students); conference dinner €15. Please note that early bird registration will end on 1 April. We aim to organize this event in accordance with the guidelines of the Austrian Ecolabel for Green Meetings in order to contribute to resource conservation, climate protection, regional added value and awareness raising. To help us in this endeavor, we kindly ask you to use environmentally compatible, public means of transportation to travel to the event location and consider booking an eco-labelled hotel (Austrian Eco-label, European Ecolabel, EMAS or others) from our list of recommendations. Reducing and separating waste like plastic, paper, glass, etc. is very important to us, so please use the separate collection systems provided at your hotel and at the event venues and choose food and drinks in recyclable or reusable packaging. Our catering Pool 7 meets the high standards of the Austrian Ecolabel and will serve specialities made from regional products at the lunch breaks. Before planning your trip to Vienna and for more information on Green Meetings, please visit our guidelines at the conference website. We will be able to award bursaries to students to partially cover the cost of registration, travel and accommodation. More details on deadlines, eligibility, and how to apply can be found at the conference website. We look forward to seeing you in Vienna! Paul Gulewycz, ÖAW Wien Robert Klugseder, ÖAW Wien Norbert Dubowy, ISM Salzburg Franz Kelnreiter, ISM Salzburg On behalf of the Organizing Committee -------------- next part -------------- An HTML attachment was scrubbed...
URL: From josh at yokermusic.scot Sat Mar 9 15:26:42 2019 From: josh at yokermusic.scot (Joshua Stutter) Date: Sat, 9 Mar 2019 14:26:42 +0000 Subject: [MEI-L] Encoding Notre Dame Polyphonic Neumes Message-ID: <048ea37a-3e7c-c1b9-5cc0-f9f4a3743f19@yokermusic.scot> Dear all, Fairly new MEI user, trying to encode some 13th-century Notre Dame notation into MEI for a class. MEI has good support for many neumes, but I'm attempting to get them to work in a polyphonic context, align correctly and with good semantics. Here is a small example which I'm attempting to encode: Small Notre Dame example. Most of the neumes can be notated and typed with the exception of the complex neume FGBGA which does not have a name. This is fine as this music does not stick to the usual neume types. My first issue arises when trying to show the first tenor note D is aligned with the porrectus GFG. How would I go about achieving this without using semantically-incorrect spacers or invisible rests or durations? What I really wish for is the possibility to encode sections of polyphony in groups that are aligned together, i.e. the first three neumes in the organal voice in one group, then the porrectus in a new group with the tenor virga. The second issue is the vertical lines. They are not barlines, nor always rests. They are divisiones with a complex and context-sensitive function. Sometimes they function as rests, sometimes they are alignment marks, sometimes syllable marks. The first attached file 'benedicamus-domino.mei' encodes this example naively. Nothing is aligned and I use where divisiones are. The second attached file 'benedicamus-domino-wish.mei' is how I wish to encode this file, using a made-up element that can contain anything a
contains. I have also replaced the with another made-up element . Needless to say, I'm not concerned with the output in Verovio, as very little neumatic notation is supported anyway, but instead encoding the alignment and elements correctly. Is something like this possible in MEI already or will I have to dabble in ODD? If I must, are there any links to a good workflow and documentation for using ODD with MEI? Thanks in advance for responding to this quite complex question, Joshua Stutter. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: notre-dame-example.jpg Type: image/jpeg Size: 74075 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: benedicamus-domino.mei Type: text/xml Size: 3980 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: benedicamus-domino-wish.mei Type: text/xml Size: 4941 bytes Desc: not available URL: From f.wiering at UU.NL Thu Mar 14 15:17:32 2019 From: f.wiering at UU.NL (Frans Wiering) Date: Thu, 14 Mar 2019 15:17:32 +0100 Subject: [MEI-L] Invitation DH2019 Message-ID: <08849937-752d-24a5-0246-fc75835a8bc5@UU.NL> Dear Madam, Sir, The University of Utrecht is happy and honoured to welcome you to the DH2019 conference in Utrecht, the Netherlands! The DH2019 conference will take place from July 9 - 12 in TivoliVredenburg. The pre-conference workshops will be on Monday July 8 and Tuesday July 9. With over 900 submitted abstracts, we are hoping this will be the biggest DH conference so far! Please note that the early bird fee ends on March 31st! Also, as hotel rooms are limited in Utrecht, it is advisable to arrange your accommodation as soon as possible. On the conference website you can find information about the programme, registration, the venue, accommodation and travel information.
Please do not hesitate to contact us for any questions you may have. In the meantime we look forward to welcoming you in Utrecht! Kind regards, -- --------------------------------------------------------------------- dr. Frans Wiering Opleidingsdirecteur Informatiekunde Associate Professor Interaction Technology Digital Humanities Research Fellow --------------------------------------------------------------------- Utrecht University Department of Information and Computing Sciences (ICS) Buys Ballot Building, office 482 Princetonplein 5 3584 CC Utrecht Netherlands mail:F.Wiering at uu.nl tel: +31-30-2536335 www:http://www.uu.nl/staff/FWiering/0 --------------------------------------------------------------------- -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: knoacclhjhhhnebo.png Type: image/png Size: 60616 bytes Desc: knoacclhjhhhnebo.png URL: From kepper at edirom.de Fri Mar 15 10:31:38 2019 From: kepper at edirom.de (Johannes Kepper) Date: Fri, 15 Mar 2019 10:31:38 +0100 Subject: [MEI-L] Documentation for MEI v4 Message-ID: <1BE37328-FFE0-434B-8EF0-F2271B5D93F7@edirom.de> Dear all, Benni and I are currently preparing a work plan for revamping the Guidelines section of the MEI Documentation (https://music-encoding.org/guidelines/v4/content/). The problem is that this Documentation still reflects the state of MEI v3, which in some parts differs significantly from the current model. The differences can be traced here: https://music-encoding.org/archive/comparison-4.0.html, but that doesn't provide a detailed explanation of the differences. For that purpose, we currently only have the release notes, available from https://github.com/music-encoding/music-encoding/releases/tag/v4.0.0. It is understandable that people are getting confused when reading that documentation, and we would very much like to resolve that situation sooner rather than later.
However, it's clear that we can't do this alone, or in a week's time. As a first step, we will introduce a warning at the top of each chapter explaining the situation, and redirecting people to work with the Specs (under Elements etc.) instead. Next, we would like to gather and coordinate a group of people that is willing to help with updating the Guidelines. For this purpose, we plan to have regular open meetings on Slack / Skype / …, on every odd week's Friday (pun intended), where people can jointly work on the documentation, ask questions and so on. These will be hop-on / hop-off meetings – we're happy about everyone who is able to join us, and we don't expect formal commitments. Of course, work on the documentation can be done at other times as well, but we'd like to make ourselves available for discussion and so on. Ideally, something like this (maybe with a lower frequency) will be helpful for the continued development of the schema, but this has much lower priority right now. The first ODD Friday will be 29 March, then 12 April, and every other week from then. We will be available from 2pm German time (Germany switches to summer time on March 31, so I leave it to you to identify what this means for you…). We encourage everyone to join us, even without technical knowledge about MEI – this is about writing comprehensible documentation, where it's always good to have a broad range of expertise. If you're uncertain, please contact Benni and / or me directly, and we will make sure that everything will work for you. Thanks very much, and all best, Benni and Johannes -------------- next part -------------- A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: Message signed with OpenPGP URL: From josh at yokermusic.scot Sat Mar 16 13:46:11 2019 From: josh at yokermusic.scot (Joshua Stutter) Date: Sat, 16 Mar 2019 12:46:11 +0000 Subject: [MEI-L] Encoding Notre Dame Polyphonic Neumes In-Reply-To: <048ea37a-3e7c-c1b9-5cc0-f9f4a3743f19@yokermusic.scot> References: <048ea37a-3e7c-c1b9-5cc0-f9f4a3743f19@yokermusic.scot> Message-ID: <3f61b10a-ad5a-189d-2644-d261f319569a@yokermusic.scot> Dear all, A week later followup. I've resigned myself to have to customise MEI-neumes.xml to support this polyphonic neume notation and am trying to work my way through the sparse documentation. Unfortunately, I've fallen at the first hurdle. Attempting to generate the schema without any of my own customisation, I opened up TEI Roma and uploaded the MEI-neumes.xml customisation direct from Github (because the "customeization" service appears to have been broken for over a fortnight now without any sign that it is being fixed). Roma appears to parse the customization without issue, but fails when pulling in the SVG elements: > terminated by the matching end-tag ""." exclass="class > java.io.IOException" >java.io.IOException: to RNG then Trang to make > RNC failed: net.sf.saxon.s9api.SaxonApiException: > org.xml.sax.SAXParseException; systemId: > http://www.tei-c.org/release/xml/tei/custom/schema/relaxng/svg11.rng; > lineNumber: 6; columnNumber: 3; The element type "hr" must be > terminated by the matching end-tag "". >     at > pl.psnc.dl.ege.tei.TEIConverter.convertDocument(TEIConverter.java:306) >     at pl.psnc.dl.ege.tei.TEIConverter.convert(TEIConverter.java:154) >     at > pl.psnc.dl.ege.component.NamedConverter.convert(NamedConverter.java:44) >     at pl.psnc.dl.ege.ConversionPerformer.run(ConversionPerformer.java:45) >     at java.lang.Thread.run(Thread.java:748) > Looking through svg11.rng, there is no mention of
. Am I using Roma correctly here or is this an upstream issue at TEI? The same issue occurs on MEI-CMN.xml Joshua. On 09/03/2019 14:26, Joshua Stutter wrote: > > Dear all, > > Fairly new MEI user, trying to encode some 13th-century Notre Dame > notation into MEI for a class. MEI has good support for many neumes, > but I'm attempting to get them to work in a polyphonic context, align > correctly and with good semantics. Here is a small example which I'm > attempting to encode: > > Small Notre Dame example. > > Most of the neumes can be notated and typed with the exception of the > complex neume FGBGA which does not have a name. This is fine as this > music does not stick to the usual neume types. > > My first issue arises when trying to show the first tenor note D is > aligned with the porrectus GFG. How would I go about achieving this > without using semantically-incorrect spacers or invisible rests or > durations? What I really wish for is the possibility to encode > sections of polyphony in groups that are aligned together, i.e. the > first three neumes in the organal voice in one group, then the > porrectus in a new group with the tenor virga. > > The second issue is the vertical lines. They are not barlines, nor > always rests. They are divisiones with a complex and context-sensitive > function. Sometimes they function as rests, sometimes they are > alignment marks, sometimes syllable marks. > > The first attached file 'benedicamus-domino.mei' encodes this example > naively. Nothing is aligned and I use where divisiones are. > > The second attached file 'benedicamus-domino-wish.mei' is how I wish > to encode this file, using a made-up element that can > contain anything a
contains. I have also replaced the > with another made-up element . > > Needless to say, I'm not concerned with the output in Verovio, as very > little neumatic notation is supported anyway, but instead encoding the > alignment and elements correctly. Is something like this possible in > MEI already or will I have to dabble in ODD? If I must, are there any > links to a good workflow and documentation for using ODD with MEI? > > Thanks in advance for responding to this quite complex question, > > Joshua Stutter. > > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: notre-dame-example.jpg Type: image/jpeg Size: 74075 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From kdesmond at brandeis.edu Sun Mar 17 11:02:14 2019 From: kdesmond at brandeis.edu (Karen Desmond) Date: Sun, 17 Mar 2019 06:02:14 -0400 Subject: [MEI-L] Encoding Notre Dame Polyphonic Neumes In-Reply-To: <048ea37a-3e7c-c1b9-5cc0-f9f4a3743f19@yokermusic.scot> References: <048ea37a-3e7c-c1b9-5cc0-f9f4a3743f19@yokermusic.scot> Message-ID: Hi Joshua, A quick comment: first, I'm really glad someone is looking at modal notation. My first instinct though would be to not use the neumes module, as many of the things you are trying to do may have support within the mensural module - where you have elements like ligatures, and note values like longs and breves, though I know of course that this is not mensural and eventually would need at least its own notationtype attribute (and module?).
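The mensural-module approach suggested here might be sketched like this (a hypothetical illustration — pitches, durations and attribute values are invented, and modal notation would still need its own @notationtype):

```xml
<!-- A three-note ligature read in mode 1 as long-breve-long. -->
<layer n="1">
  <ligature form="recta">
    <note dur="longa" pname="g" oct="3"/>
    <note dur="brevis" pname="f" oct="3"/>
    <note dur="longa" pname="g" oct="3"/>
  </ligature>
</layer>
```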
I don't think using the neume names is appropriate for this repertory as the theorists didn't use these. For modal notation probably the most important thing you want to encode is how many notes are within a ligature and, for the specific case of the conjunctura, that the type of ligature is a conjunctura (possibly using the form element of conjunctura). You're right that properly encoding the divisio is important - whether it truly functions as a rest, or a divisio syllabarum, etc. The alignment is a more complex issue. Ideally you would probably want to number the perfections and then you would simply tag your tenor notes as occurring within a certain perfection. However in the duplum the ligatures could begin in one perfection and end in another - i.e. if in a discant section in mode 1 you had a 3-note ligature, the notes would be long breve long but the first long is in the first perfection and the third is in the second perfection, unless of course you had perfection be a sub-element of ligature in the tag hierarchy. Best Karen On Sat, Mar 9, 2019 at 9:27 AM Joshua Stutter wrote: > Dear all, > > Fairly new MEI user, trying to encode some 13th-century Notre Dame > notation into MEI for a class. MEI has good support for many neumes, but > I'm attempting to get them to work in a polyphonic context, align correctly > and with good semantics. Here is a small example which I'm attempting to > encode: > > [image: Small Notre Dame example.] > > Most of the neumes can be notated and typed with the exception of the > complex neume FGBGA which does not have a name. This is fine as this music > does not stick to the usual neume types. > > My first issue arises when trying to show the first tenor note D is > aligned with the porrectus GFG. How would I go about achieving this without > using semantically-incorrect spacers or invisible rests or durations? What > I really wish for is the possibility to encode > sections of polyphony in groups that are aligned together, i.e.
the first three neumes in the > organal voice in one group, then the porrectus in a new group with the > tenor virga. > > The second issue is the vertical lines. They are not barlines, nor always > rests. They are divisiones with a complex and context-sensitive function. > Sometimes they function as rests, sometimes they are alignment marks, > sometimes syllable marks. > > The first attached file 'benedicamus-domino.mei' encodes this example > naively. Nothing is aligned and I use where divisiones are. > > The second attached file 'benedicamus-domino-wish.mei' is how I wish to > encode this file, using a made-up element that can contain > anything a
contains. I have also replaced the with > another made-up element . > > Needless to say, I'm not concerned with the output in verovio, as very > little neumatic notation is supported anyway, but instead encoding the > alignment and elements correctly. Is something like this possible in MEI > already or will I have to dabble in ODD? If I must, are there any links to > a good workflow and documentation for using ODD with MEI? > > Thanks in advance for responding to this quite complex question, > > Joshua Stutter. > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l > -- --- Karen Desmond Visiting Fellow, Clare Hall, and Visiting Scholar, Faculty of Music, University of Cambridge (Lent/Easter 2019) Assistant Professor of Music, Brandeis University -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: notre-dame-example.jpg Type: image/jpeg Size: 74075 bytes Desc: not available URL: From berndt at hfm-detmold.de Mon Mar 18 14:51:51 2019 From: berndt at hfm-detmold.de (Axel Berndt) Date: Mon, 18 Mar 2019 14:51:51 +0100 Subject: [MEI-L] Audio Mostly Call for Contributions Message-ID: <31648581-52fe-84a7-92b8-14b50afaa237@hfm-detmold.de> Dear all, as a member of the Audio Mostly steering committee I would like to make you aware of this interdisciplinary conference that started in 2006. We would love to open up towards the MEI community as there is already a considerable overlap. In past years Audio Mostly (AM) has been a perfect place for bringing together different disciplines from academia and industry, theory and practice. We would love to have contributions from the MEI community this year. Here comes the official CfP. 
*Audio Mostly 2019: A Journey in Sound*
18th to 20th September 2019, University of Nottingham, Nottingham, UK
www.audiomostly.com

*AUDIO MOSTLY 2019*
Audio Mostly is an audio-focused interdisciplinary conference on design, interacting with sound and technology, which embraces applied theory and practice-based research. It is an annual conference which brings together thinkers and doers from academia and industry who share an interest in sonic interaction and the use of audio for interface design. This remit covers product design, auditory displays, computer games and virtual environments, new digital musical instruments, educational applications and workplace tools, /as well as the topics listed below/. It further includes fields such as the psychology of sound and music, cultural studies, systems engineering, and everything in between in which sonic Human-Computer Interaction plays a role. Audio Mostly 2019 will be an inclusive event for all, bringing together a whole range of people and communities. It will be a lively and sociable mix of oral and poster paper presentations, demos, and workshops. We welcome submissions from industry, academia and interested parties in each of these categories. As in previous years, the Audio Mostly 2019 proceedings will be published by the Association for Computing Machinery (ACM) (/to be confirmed/) and made available through their digital library. Regular papers, posters and demos/installations will be double-blind peer reviewed. It is envisaged that there will be a special issue of a journal relating to the conference, as with previous years.

*CONFERENCE THEME*
The special theme for the conference this year is A Journey in Sound, and we particularly welcome papers relating to this theme. We have different experiences of sound and music throughout our lives; there are sounds that remind us of different places and people.
We also have different playlists and songs that take us back and remind us of certain times and events. Throughout our lives we are interacting with sounds and music, we are on a journey in sound. This year the theme of the conference is open to interpretation, but people might think about the following, in relation to the theme: * /Sonic aspects of digital stories, documentaries and archives/ * /The soundtrack to our lives. Archiving and sharing sound / * /The emotional potential of a sound, how might this be used to support interaction/ * /The different uses of music across different settings/ * /The re-use of recollections and memories by composers & sound designers/ * /The development of musical tools that can let us express our experiences over time / * /Socio-technical uses of AI create highly personalised soundtracks that respond to one’s context/ * /Adaptive music use in journeys, time and the creative use of data/  Audio Mostly 2019 encourages the submission of regular papers (oral/poster presentation) addressing such questions and others related to the conference theme and the topics presented below. *LIST OF TOPICS *The Audio Mostly conference series is interested in sound /Interaction Design & Human-Computer Interaction (HCI) /in general. The conference provides a space to reflect on the role of sound/music in our lives and how to understand, develop and design systems which relate to sound and music – we are particularly interested in this from a broad HCI perspective. We encourage original regular papers (oral/poster presentation) addressing the conference theme or other topics from the list provided below. We welcome multidisciplinary approaches involving fields such as music informatics, information and communication technologies, sound design, music performance, visualisation, composition, perception/cognition and aesthetics. 
• Accessibility
• Aesthetics
• Affective computing applied to sound/music
• AI, HCI and Music
• Acoustics and Psychoacoustics
• Auditory display and sonification
• Augmented and virtual reality with or for sound and music
• Computational musicology
• Critical approaches to interaction, design and sound
• Digital augmentation (e.g. musical instruments, stage, studio, audiences, performers, objects)
• Digital music libraries
• Ethnographic studies
• Game audio and music
• Gestural interaction with sound or music
• Immersive and spatial audio
• Interactive sonic arts and artworks
• Intelligent music tutoring systems
• Interfaces for audio engineering and post-production
• Interfaces or synthesis models for sound design
• Live performing arts
• Music information retrieval & interaction
• Musical Human-Computer Interaction
• New methods for the evaluation of user experiences of sound and music
• Participatory and co-design methodologies with or for audio
• Philosophical or sociological reflections on Audio Mostly related topics
• Psychology, cognition, perception
• Semantic web music technologies
• Spatial audio, interaction design and ambisonics
• Sonic interaction design
• Sound and image interaction: from production to perception
• Sound and soundscape studies

*SUBMISSION INSTRUCTIONS*
Regular paper, poster, demo and workshop contributions must be submitted via the EasyChair Audio Mostly 2019 submission portal. All Audio Mostly 2019 papers should be submitted using the 2017 ACM Master Article Template specified below for your contribution. Authors should use the ACM Computing Classification System (CCS) to provide the proper indexing information in their papers (see instructions on the 2017 ACM Master Article Template page). All papers must be submitted in PDF format.
*Call for Papers & Posters*
*IMPORTANT DATES (Papers & Posters)*
Deadline for Submissions: 24th May 2019
Notification of Acceptance: 14th June 2019
Camera-ready submissions: 9th August 2019
Early Registration Deadline: 10th August 2019
Conference: 18 to 20 September 2019

*Call for Workshops*
*IMPORTANT DATES (Workshops)*
Deadline for Submissions: 24th May 2019
Notification of Acceptance: 14th June 2019
Workshops: 17th September 2019

*Call for Demos*
*IMPORTANT DATES (Demos & Installations)*
Deadline for Submissions: 1st July 2019
Notification of Acceptance: 15th July 2019
Submission Deadline: 22nd July 2019

*Submission Site*
https://easychair.org/conferences/?conf=am2019

*LOCATION*
This year, the conference is hosted by the Mixed Reality Lab (in the School of Computer Science) and the Department of Music at the University of Nottingham. The conference will be located on University Park, the University of Nottingham’s largest campus at 300 acres. Part of the University since 1929, the campus is widely regarded as one of the largest and most attractive in the country. Set in extensive greenery and around a lake, University Park is the focus of life for students, staff and visitors, and is conveniently located only two miles from the city centre. The campus is well connected: the nearest airport is East Midlands Airport, and the local train stations are Nottingham and Beeston. For more information on the location, transport links and general information see: Getting here - Maps and Directions
---
Best wishes, Axel
--
Dr.-Ing. Axel Berndt
Phone: +49 (0) 5231 / 975 874
Web: http://www.cemfi.de/people/axel-berndt
Center of Music and Film Informatics
Ostwestfalen-Lippe University of Applied Sciences
Detmold University of Music
Hornsche Strasse 44, 32756 Detmold, Germany
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
From D.Lewis at gold.ac.uk Tue Mar 19 14:56:05 2019
From: D.Lewis at gold.ac.uk (David Lewis)
Date: Tue, 19 Mar 2019 13:56:05 +0000
Subject: [MEI-L] Encoding Notre Dame Polyphonic Neumes
In-Reply-To: References: <048ea37a-3e7c-c1b9-5cc0-f9f4a3743f19@yokermusic.scot>
Message-ID:
Just to say that I agree with Karen. I know it’s transitional in a sense, but I think it’d be immensely helpful to have someone work on modal notation as a part of the *mensural* notation model. I think it raises some issues that we should probably be looking into anyway. Is @synch of any use for alignment? It’s not something I’ve used before. David > On 17 Mar 2019, at 10:02, Karen Desmond wrote: > > Hi Joshua, > > [earlier message trimmed]
_______________________________________________ mei-l mailing list mei-l at lists.uni-paderborn.de https://lists.uni-paderborn.de/mailman/listinfo/mei-l
From kepper at edirom.de Tue Mar 19 15:27:17 2019
From: kepper at edirom.de (Johannes Kepper)
Date: Tue, 19 Mar 2019 15:27:17 +0100
Subject: [MEI-L] Encoding Notre Dame Polyphonic Neumes
In-Reply-To: References: <048ea37a-3e7c-c1b9-5cc0-f9f4a3743f19@yokermusic.scot>
Message-ID: <7AFCA5E2-794C-4B84-A063-74E27FED85F6@edirom.de>
I'm not qualified to say much about the original question – I just wanted to comment on David: According to the Guidelines: "The @synch attribute points to an element that is synchronous with; that is, begins at the same moment in time, as the current element. It is useful when the encoding order differs from the order in which entities occur in time." (https://music-encoding.org/guidelines/v4/content/analysis.html#analysisDescribingRelationships) I haven't followed too closely, but from what I got, I wouldn't have thought about @synch so far… jo > Am 19.03.2019 um 14:56 schrieb David Lewis : > > Just to say that I agree with Karen.
I know it’s transitional in a sense, but I think it’d be immensely helpful to have someone work on modal notation as a part of the *mensural* notation model. I think it raises some issues that we should probably be looking into anyway. > > Is @synch of any use for alignment? It’s not something I’ve used before. > > David > > [earlier messages trimmed]
_______________________________________________ mei-l mailing list mei-l at lists.uni-paderborn.de https://lists.uni-paderborn.de/mailman/listinfo/mei-l
-------------- next part --------------
A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: Message signed with OpenPGP URL:
From RKlugseder at gmx.de Thu Mar 21 17:04:04 2019
From: RKlugseder at gmx.de (RKlugseder at gmx.de)
Date: Thu, 21 Mar 2019 17:04:04 +0100
Subject: [MEI-L] MEC Vienna 2019
Message-ID:
An HTML attachment was scrubbed...
URL: From kelnreiter at mozarteum.at Fri Mar 22 11:58:15 2019 From: kelnreiter at mozarteum.at (Franz Kelnreiter) Date: Fri, 22 Mar 2019 11:58:15 +0100 Subject: [MEI-L] MEC Vienna 2019 In-Reply-To: References: Message-ID: ...just to correct one important typo concerning the application for bursaries : The correct email address for this is *kelnreiter at mozarteum.at (!!)** * Best, --franz ** Am 21.03.2019 um 17:04 schrieb RKlugseder at gmx.de: > > Dear all, > > We would like to provide you with some additional information and news > about the *MEC 2019 in Vienna*. > > 1) *Early bird* *registration* is still possible until the end of > March (https://www.conftool.net/music-encoding2019/ ). > > 2) Some *workshops* of the pre-conference day are almost *fully > booked*. You should hurry! Once the maximum number of participants of > 20 each has been reached, registration for the WS via Conftool is no > longer possible. If you want to be added to the waiting list, please > send us an e-mail. > > 3) The conference will award *travel grants to students* with > appropriate expertise. The bursaries will partially cover the costs of > registration, travel and accommodation. Please send your application > with a short description of your expertise to kelnreiter at mozarteum.com > until 31 March. > > 4) Changes to the title or the description of posters, panels or > lectures are no longer possible in Conftool. In exceptional cases, you > can notify us of changes by e-mail. > > 5) In the afternoon of the un-conference day (Saturday 1 June), our > cooperation partner READ Project > will present the > *Transkribus app* in a workshop. Transkribus > is a comprehensive platform for > the automated recognition, transcription and searching of historical > documents. The main objective of Transkribus is to support users like > humanities scholars, archivists, volunteers and computer scientists, > who are engaged in the transcription of printed or handwritten > documents. 
In a research project, we are working on the extension of > Transkribus for /Optical Music Recognition/. Please register via > Conftool (already registered participants should send us an e-mail). > > 6) If there is enough interest, we will organize a guided tour through > the /State Hall/ (/Prunksaal/) and the music collections of the > *Austrian National Library* for the afternoon of the pre-conference > day (Wednesday 29 May, 16 to 18 o'clock). The National Library > preserves important autographs of composers working in Vienna, but > also choir books and manuscripts with Gregorian chant. If you are > interested, please register via Conftool (already registered > participants should send us an e-mail). > > Please use this e-mail address for communication with the organisation > team in Vienna: mec at oeaw.ac.at . > > We look forward to welcoming you in Vienna. > > For the organizing committee > > Robert Klugseder, chair > > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l -------------- next part -------------- An HTML attachment was scrubbed... URL: From zayne at zayne.co.za Fri Mar 22 14:31:04 2019 From: zayne at zayne.co.za (Zayne Upton) Date: Fri, 22 Mar 2019 15:31:04 +0200 Subject: [MEI-L] Tools for encoding in MEI Message-ID: I'm currently part of a research group that is compiling digital critical editions of African composers. Some of the team members are using Sibelius to notate the music in staff notation and I'm then using SibMEI to convert to MEI. The issue though is that the conversion is not completely accurate and I need to further edit the XML. What I'm struggling with is finding the best tools to do so. There is a plugin for Oxygen that I'm trying to get to work, but to no avail. Can anyone offer any advice here? Thanks Zayne Upton -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From rfreedma at haverford.edu Fri Mar 22 14:52:21 2019 From: rfreedma at haverford.edu (Richard Freedman) Date: Fri, 22 Mar 2019 09:52:21 -0400 Subject: [MEI-L] Tools for encoding in MEI In-Reply-To: References: Message-ID: Zayne, As part of The Lost Voices Project , we developed a set of routines for doing this sort of post-processing of MEI files using various Python scripts. Read more here and here . You could adapt these for your own use (the scripts are modular, and you could use/ignore any that are not relevant). Lost Voices uses VexFlow as the rendering engine. Verovio (which was not available at the time) is much better in many respects, and we are now using it in the Citations Project . Richard On Fri, Mar 22, 2019 at 9:32 AM Zayne Upton wrote: > I'm currently part of a research group that is compiling digital critical > editions of African composers. Some of the team members are using Sibelius > to notate the music in staff notation and I'm then using SibMEI to convert > to MEI. The issue though is that the conversion is not completely accurate > and I need to further edit the XML. What I'm struggling with is finding the > best tools to do so. There is a plugin for Oxygen that I'm trying to get to > work, but to no avail. > > Can anyone offer any advice here? > > Thanks > > Zayne Upton > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l > -- Richard Freedman Professor of Music John C. Whitehead '43 Professor of Humanities Associate Provost for Curricular Development Haverford College Haverford, PA 19041 610-896-1007 610-896-4902 (fax) http://www.haverford.edu/users/rfreedma Schedule meeting time: https://goo.gl/3KN2hr -------------- next part -------------- An HTML attachment was scrubbed... 
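[Editorial note: the kind of post-processing workflow Richard describes (SibMEI export, then small modular Python scripts cleaning the MEI) can be sketched as below. This is a hypothetical illustration, not code from the Lost Voices Project; the specific artifact it removes (empty dynam elements) and all names are assumptions, and only the Python standard library is used.]

```python
import xml.etree.ElementTree as ET

MEI_NS = "http://www.music-encoding.org/ns/mei"
ET.register_namespace("", MEI_NS)  # serialize MEI as the default namespace


def strip_empty_dynam(mei_xml: str) -> str:
    """Drop <dynam> elements with no text and no children -- a stand-in for
    the converter artifacts that would otherwise need hand-editing in Oxygen."""
    root = ET.fromstring(mei_xml)
    tag = f"{{{MEI_NS}}}dynam"
    for parent in root.iter():
        for child in list(parent):  # copy, since we mutate while iterating
            if child.tag == tag and not (child.text or "").strip() and len(child) == 0:
                parent.remove(child)
    return ET.tostring(root, encoding="unicode")


# Tiny made-up MEI fragment with one converter artifact:
example = (
    f'<mei xmlns="{MEI_NS}"><music><body>'
    '<dynam/><note dur="4" pname="c" oct="4"/>'
    "</body></music></mei>"
)
print(strip_empty_dynam(example))
```

Each such fix can live in its own function and be chained in sequence, which matches the modular design Richard mentions: run only the cleanups relevant to your own conversion problems.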
URL: From josh at yokermusic.scot Fri Mar 22 15:13:09 2019 From: josh at yokermusic.scot (Joshua Stutter) Date: Fri, 22 Mar 2019 14:13:09 +0000 Subject: [MEI-L] Encoding Notre Dame Polyphonic Neumes In-Reply-To: References: <048ea37a-3e7c-c1b9-5cc0-f9f4a3743f19@yokermusic.scot> Message-ID: Karen, > I don’t think using the neume names is appropriate for this repertory > as the theorists didn’t use these. You're right here, I've already dispensed with them. > The alignment is a more complex issue. Ideally you would probably want > to number the perfections and then you would simply tag your tenor > notes as occurring within a certain perfection. I've already /kind of/ solved the alignment issue by using
as an alignment group. I'm against tagging in a particular perfection as that is implying that the music proceeds in a constant modal rhythm and has length, which may not be exactly correct. All I'm attempting to do is to align the notes together that occur at the same time and leave the rhythm up to a further editor or a performer's interpretation, especially with this two-part music which may be completely without formal rhythm. However, I'm still no closer to even beginning to customise the ODD to fit my needs: there's some sort of SVG error which I do not understand. Either going through Roma or cloning the MEI github and attempting to build directly gives me this error (see my message of the 16th of this month), and the MEI customization page is still broken. Joshua. On 17/03/2019 10:02, Karen Desmond wrote: > Hi Joshua, > > [earlier message trimmed]
-------------- next part --------------
An HTML attachment was scrubbed... URL:
-------------- next part --------------
A non-text attachment was scrubbed... Name: notre-dame-example.jpg Type: image/jpeg Size: 74075 bytes Desc: not available URL:
-------------- next part --------------
A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL:
From josh at yokermusic.scot Fri Mar 22 15:22:35 2019
From: josh at yokermusic.scot (Joshua Stutter)
Date: Fri, 22 Mar 2019 14:22:35 +0000
Subject: [MEI-L] Encoding Notre Dame Polyphonic Neumes
In-Reply-To: References: <048ea37a-3e7c-c1b9-5cc0-f9f4a3743f19@yokermusic.scot>
Message-ID: <2249c961-8f2b-67f5-51a1-d3b8d0d6dd20@yokermusic.scot>
David,
> I know it’s transitional in a sense
I think you're absolutely correct here: no-one really knows how to begin to interpret this music, and it's always changing.
What we thought in the mid-20th century was that it was completely rhythmic, but now we don't really know. I think it's bound up in the fact that there are two distinct styles in this music: the "Perotinian" style which is very clearly in modal rhythm throughout with a consistent, repeating tenor usually in mode V, and other voices (3- and 4-part) that must move in rhythm in order to stay synchronised. Then there's the "Leoninian" style which... could be anything. If it's any help, the small example I'm attempting is very much of the latter form, so to force it into a mensural context would be misleading. My assignment is due in the middle of next month so I think if I can get the customization to work (still not working, see my other post) and simply add a element, then I might encode in a mensural context, leave out the durations and use @synch to align. Joshua. On 19/03/2019 13:56, David Lewis wrote: > Just to say that I agree with Karen. I know it’s transitional in a sense, but I think it’d be immensely helpful to have someone work on modal notation as a part of the *mensural* notation model. I think it raises some issues that we should probably be looking into anyway. > > Is @synch of any use for alignment? It’s not something I’ve used before. > > David > >> On 17 Mar 2019, at 10:02, Karen Desmond wrote: >> >> Hi Joshua, >> >> Quick comment. First, I’m really glad someone is looking at modal notation. My first instinct though would be to not use the neumes module, as many of the things you are trying to do may have support within the mensural module - where you have elements like ligatures, and note values like longs and breves, though I know of course that this is not mensural and eventually would need at least its own notationtype attribute (and module?). I don’t think using the neume names is appropriate for this repertory as the theorists didn’t use these.
For modal notation probably the most important thing you want to encode is how many notes are within a ligature and, for the specific case of the conjunctura, that the type of ligature is a conjunctura (possibly using the form element of conjunctura). You’re right that properly encoding the divisio is important - whether it truly functions as a rest, or a divisio syllabarum, etc. >> >> The alignment is a more complex issue. Ideally you would probably want to number the perfections and then you would simply tag your tenor notes as occurring within a certain perfection. However in the duplum the ligatures could begin in one perfection and end in another - i.e. if in a discant section in mode 1 you had a 3-note ligature, the notes would be long breve long but the first long is in the first perfection and the third is in the second perfection, unless of course you had perfection be a sub-element of ligature in the tag hierarchy. >> >> Best >> >> Karen >> >> On Sat, Mar 9, 2019 at 9:27 AM Joshua Stutter wrote: >> Dear all, >> >> Fairly new MEI user, trying to encode some 13th-century Notre Dame notation into MEI for a class. MEI has good support for many neumes, but I'm attempting to get them to work in a polyphonic context, align correctly and with good semantics. Here is a small example which I'm attempting to encode: >> >> >> >> Most of the neumes can be notated and typed with the exception of the complex neume FGBGA which does not have a name. This is fine as this music does not stick to the usual neume types. >> My first issue arises when trying to show the first tenor note D is aligned with the porrectus GFG. How would I go about achieving this without using semantically-incorrect spacers or invisible rests or durations? What I really wish for is the possibility to encode sections of polyphony in groups that are aligned together, i.e. the first three neumes in the organal voice in one group, then the porrectus in a new group with the tenor virga.
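[Editorial note: a sketch of the @synch approach discussed in this thread. MEI's generic linking attribute @synch (from att.linking) points to elements that are synchronous with the current one, which could express the tenor/duplum alignment without spacers or invisible rests. This is illustrative only: all xml:id values are invented, the nesting follows the MEI v4 neumes module, and availability of @synch on these elements should be checked against the schema in use.]

```xml
<!-- Sketch only: aligning the first tenor note D with the duplum
     porrectus GFG via @synch. All xml:id values are hypothetical. -->
<staff n="1"> <!-- duplum -->
  <layer>
    <syllable>
      <neume xml:id="duplum.porrectus1" type="porrectus">
        <nc pname="g" oct="4"/>
        <nc pname="f" oct="4"/>
        <nc pname="g" oct="4"/>
      </neume>
    </syllable>
  </layer>
</staff>
<staff n="2"> <!-- tenor -->
  <layer>
    <syllable>
      <neume>
        <!-- assuming @synch (att.linking) is available here -->
        <nc xml:id="tenor.d1" pname="d" oct="3" synch="#duplum.porrectus1"/>
      </neume>
    </syllable>
  </layer>
</staff>
```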
>> >> The second issue is the vertical lines. They are not barlines, nor always rests. They are divisiones with a complex and context-sensitive function. Sometimes they function as rests, sometimes they are alignment marks, sometimes syllable marks. >> >> The first attached file 'benedicamus-domino.mei' encodes this example naively. Nothing is aligned and I use where divisiones are. >> >> The second attached file 'benedicamus-domino-wish.mei' is how I wish to encode this file, using a made-up element that can contain anything a
contains. I have also replaced the with another made-up element . >> >> Needless to say, I'm not concerned with the output in verovio, as very little neumatic notation is supported anyway, but instead encoding the alignment and elements correctly. Is something like this possible in MEI already or will I have to dabble in ODD? If I must, are there any links to a good workflow and documentation for using ODD with MEI? >> >> Thanks in advance for responding to this quite complex question, >> >> Joshua Stutter. >> _______________________________________________ >> mei-l mailing list >> mei-l at lists.uni-paderborn.de >> https://lists.uni-paderborn.de/mailman/listinfo/mei-l >> -- >> --- >> Karen Desmond >> Visiting Fellow, Clare Hall, and Visiting Scholar, Faculty of Music, University of Cambridge (Lent/Easter 2019) >> Assistant Professor of Music, Brandeis University >> _______________________________________________ >> mei-l mailing list >> mei-l at lists.uni-paderborn.de >> https://lists.uni-paderborn.de/mailman/listinfo/mei-l > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 833 bytes Desc: OpenPGP digital signature URL: From thomas.weber at notengrafik.com Fri Mar 22 16:41:18 2019 From: thomas.weber at notengrafik.com (Thomas Weber) Date: Fri, 22 Mar 2019 15:41:18 +0000 Subject: [MEI-L] Tools for encoding in MEI In-Reply-To: References: Message-ID: If there are any inaccuracies in the SibMei export, it's best to file an issue at the GitHub repo: https://github.com/music-encoding/sibmei/issues/new Or reply to me off-list and we can discuss (by mail or Skype) what could be done to make SibMei more usable for you.
Best Thomas On 22.03.19 at 14:31, Zayne Upton wrote: I'm currently part of a research group that is compiling digital critical editions of African composers. Some of the team members are using Sibelius to notate the music in staff notation and I'm then using SibMEI to convert to MEI. The issue though is that the conversion is not completely accurate and I need to further edit the XML. What I'm struggling with is finding the best tools to do so. There is a plugin for Oxygen that I'm trying to get to work, but to no avail. Can anyone offer any advice here? Thanks Zayne Upton _______________________________________________ mei-l mailing list mei-l at lists.uni-paderborn.de https://lists.uni-paderborn.de/mailman/listinfo/mei-l -------------- next part -------------- An HTML attachment was scrubbed... URL: From D.Lewis at gold.ac.uk Fri Mar 22 16:50:03 2019 From: D.Lewis at gold.ac.uk (David Lewis) Date: Fri, 22 Mar 2019 15:50:03 +0000 Subject: [MEI-L] Tools for encoding in MEI In-Reply-To: References: Message-ID: Hi Zayne, This sounds like a really interesting project, and it’s really great that you’re working with MEI for it. I’ve started using Atom with the Verovio plugin for hand-editing MEI. It’s a bit slow and clunky, but it’s not bad. I’d also note that sometimes – especially for more complex scores – there’s a weirdly cumbersome process of exporting as MusicXML and then either converting that (for instance on the Verovio website) or (deep breath) loading the MusicXML into MuseScore, exporting as MusicXML AGAIN, then converting the results. MuseScore seems to regularise the MusicXML a little, which can help especially for slur positions and other graphical elements. Best, David > On 22 Mar 2019, at 13:31, Zayne Upton wrote: > > I'm currently part of a research group that is compiling digital critical editions of African composers. Some of the team members are using Sibelius to notate the music in staff notation and I'm then using SibMEI to convert to MEI.
The issue though is that the conversion is not completely accurate and I need to further edit the XML. What I'm struggling with is finding the best tools to do so. There is a plugin for Oxygen that I'm trying to get to work, but to no avail. > > Can anyone offer any advice here? > > Thanks > > Zayne Upton > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l From zayne at zayne.co.za Fri Mar 22 20:58:53 2019 From: zayne at zayne.co.za (Zayne Upton) Date: Fri, 22 Mar 2019 21:58:53 +0200 Subject: [MEI-L] Tools for encoding in MEI In-Reply-To: References: Message-ID: Thanks Thomas. I’ll do a bit more troubleshooting first and then see if I need to log an issue. I’m not very experienced with Sibelius so I may just need to score in a way that makes it easier to export. Cheers Zayne > On 22 Mar 2019, at 17:41, Thomas Weber wrote: > > If there are any inaccuracies in the SibMei export, it's best to file an issue at the GitHub repo: > https://github.com/music-encoding/sibmei/issues/new > Or reply to me off-list and we can discuss (by mail or Skype) what could be done to make SibMei more usable for you. > > Best > Thomas > > > On 22.03.19 at 14:31, Zayne Upton wrote: >> I'm currently part of a research group that is compiling digital critical editions of African composers. Some of the team members are using Sibelius to notate the music in staff notation and I'm then using SibMEI to convert to MEI. The issue though is that the conversion is not completely accurate and I need to further edit the XML. What I'm struggling with is finding the best tools to do so. There is a plugin for Oxygen that I'm trying to get to work, but to no avail. >> >> Can anyone offer any advice here?
>> >> Thanks >> >> Zayne Upton >> >> >> _______________________________________________ >> mei-l mailing list >> mei-l at lists.uni-paderborn.de >> https://lists.uni-paderborn.de/mailman/listinfo/mei-l > -------------- next part -------------- An HTML attachment was scrubbed... URL: From zayne at zayne.co.za Fri Mar 22 21:25:02 2019 From: zayne at zayne.co.za (Zayne Upton) Date: Fri, 22 Mar 2019 22:25:02 +0200 Subject: [MEI-L] Tools for encoding in MEI In-Reply-To: References: Message-ID: Thanks Richard. I briefly had a look a while back so I’ll give this another look. Cheers Zayne > On 22 Mar 2019, at 15:52, Richard Freedman wrote: > > Zayne, > > As part of The Lost Voices Project , we developed a set of routines for doing this sort of post-processing of MEI files using various Python scripts. > > Read more here and here . > > You could adapt these for your own use (the scripts are modular, and you could use/ignore any that are not relevant). Lost Voices uses VexFlow as the rendering engine. Verovio (which was not available at the time) is much better in many respects, and we are now using it in the Citations Project . > > Richard > > On Fri, Mar 22, 2019 at 9:32 AM Zayne Upton > wrote: > I'm currently part of a research group that is compiling digital critical editions of African composers. Some of the team members are using Sibelius to notate the music in staff notation and I'm then using SibMEI to convert to MEI. The issue though is that the conversion is not completely accurate and I need to further edit the XML. What I'm struggling with is finding the best tools to do so. There is a plugin for Oxygen that I'm trying to get to work, but to no avail. > > Can anyone offer any advice here? > > Thanks > > Zayne Upton > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l > > > -- > Richard Freedman > Professor of Music > John C. 
Whitehead '43 Professor of Humanities > Associate Provost for Curricular Development > Haverford College > Haverford, PA 19041 > > 610-896-1007 > 610-896-4902 (fax) > > http://www.haverford.edu/users/rfreedma > > Schedule meeting time: https://goo.gl/3KN2hr > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l -------------- next part -------------- An HTML attachment was scrubbed... URL: From zayne at zayne.co.za Fri Mar 22 21:27:22 2019 From: zayne at zayne.co.za (Zayne Upton) Date: Fri, 22 Mar 2019 22:27:22 +0200 Subject: [MEI-L] Tools for encoding in MEI In-Reply-To: References: Message-ID: Thanks David. I’ve tried Atom now but I'm struggling a bit with it – it doesn’t seem to like the exported MEI file from Sibelius. I’ll try the MuseScore trick. Cheers Zayne > On 22 Mar 2019, at 17:50, David Lewis wrote: > > Hi Zayne, > > This sounds like a really interesting project, and it’s really great that you’re working with MEI for it. > > I’ve started using Atom with the Verovio plugin for hand-editing MEI. It’s a bit slow and clunky, but it’s not bad. > > I’d also note that sometimes – especially for more complex scores – there’s a weirdly cumbersome process of exporting as MusicXML and then either converting that (for instance on the Verovio website) or (deep breath) loading the MusicXML into MuseScore, exporting as MusicXML AGAIN, then converting the results. MuseScore seems to regularise the MusicXML a little, which can help especially for slur positions and other graphical elements. > > Best, > > David > >> On 22 Mar 2019, at 13:31, Zayne Upton wrote: >> >> I'm currently part of a research group that is compiling digital critical editions of African composers. Some of the team members are using Sibelius to notate the music in staff notation and I'm then using SibMEI to convert to MEI.
The issue though is that the conversion is not completely accurate and I need to further edit the XML. What I'm struggling with is finding the best tools to do so. There is a plugin for Oxygen that I'm trying to get to work, but to no avail. >> >> Can anyone offer any advice here? >> >> Thanks >> >> Zayne Upton >> _______________________________________________ >> mei-l mailing list >> mei-l at lists.uni-paderborn.de >> https://lists.uni-paderborn.de/mailman/listinfo/mei-l > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l From rfreedma at haverford.edu Fri Mar 22 21:31:20 2019 From: rfreedma at haverford.edu (Richard Freedman) Date: Fri, 22 Mar 2019 16:31:20 -0400 Subject: [MEI-L] Tools for encoding in MEI In-Reply-To: References: Message-ID: In our experience the most useful approach is to create a 'minimal example' that contains all of the editorial features you want to use. This becomes the 'test' file for your post-processes, and also becomes the 'model' file for your editors. Our routines are designed around the needs of Renaissance music. Yours might need to be different (but can make use of the same concepts, such as color highlights or alternative staves, or special articulations, etc, that are in turn transformed during the processing stage to the final MEI encoding you prefer). Richard On Fri, Mar 22, 2019 at 4:26 PM Zayne Upton wrote: > Thanks Richard. I briefly had a look a while back so I’ll give this > another look. > > Cheers > > Zayne > > On 22 Mar 2019, at 15:52, Richard Freedman wrote: > > Zayne, > > As part of The Lost Voices Project > , we developed a set of > routines for doing this sort of post-processing of MEI files using various > Python scripts. > > Read more here > > and here > > . > > You could adapt these for your own use (the scripts are modular, and you > could use/ignore any that are not relevant). 
Lost Voices uses VexFlow as > the rendering engine. Verovio (which was not available at the time) is > much better in many respects, and we are now using it in the Citations > Project . > > Richard > > On Fri, Mar 22, 2019 at 9:32 AM Zayne Upton wrote: > >> I'm currently part of a research group that is compiling digital critical >> editions of African composers. Some of the team members are using Sibelius >> to notate the music in staff notation and I'm then using SibMEI to convert >> to MEI. The issue though is that the conversion is not completely accurate >> and I need to further edit the XML. What I'm struggling with is finding the >> best tools to do so. There is a plugin for Oxygen that I'm trying to get to >> work, but to no avail. >> >> Can anyone offer any advice here? >> >> Thanks >> >> Zayne Upton >> _______________________________________________ >> mei-l mailing list >> mei-l at lists.uni-paderborn.de >> https://lists.uni-paderborn.de/mailman/listinfo/mei-l >> > > > -- > Richard Freedman > Professor of Music > John C. Whitehead '43 Professor of Humanities > Associate Provost for Curricular Development > Haverford College > Haverford, PA 19041 > > 610-896-1007 > 610-896-4902 (fax) > > http://www.haverford.edu/users/rfreedma > > Schedule meeting time: https://goo.gl/3KN2hr > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l > > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l > -- Richard Freedman Professor of Music John C. Whitehead '43 Professor of Humanities Associate Provost for Curricular Development Haverford College Haverford, PA 19041 610-896-1007 610-896-4902 (fax) http://www.haverford.edu/users/rfreedma Schedule meeting time: https://goo.gl/3KN2hr -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From zayne at zayne.co.za Sat Mar 23 06:53:24 2019 From: zayne at zayne.co.za (Zayne Upton) Date: Sat, 23 Mar 2019 07:53:24 +0200 Subject: [MEI-L] Tools for encoding in MEI In-Reply-To: References: Message-ID: <77BC80D8-A572-4672-9140-4550FFAADF03@zayne.co.za> That’s a sensible approach. Up until this point the focus of the project has been building a website to house the various composers and works. I will follow your approach now for the next phase, which is making the works interactive. Zayne > On 22 Mar 2019, at 22:31, Richard Freedman wrote: > > In our experience the most useful approach is to create a 'minimal example' that contains all of the editorial features you want to use. This becomes the 'test' file for your post-processes, and also becomes the 'model' file for your editors. Our routines are designed around the needs of Renaissance music. Yours might need to be different (but can make use of the same concepts, such as color highlights or alternative staves, or special articulations, etc, that are in turn transformed during the processing stage to the final MEI encoding you prefer). > > Richard > > On Fri, Mar 22, 2019 at 4:26 PM Zayne Upton > wrote: > Thanks Richard. I briefly had a look a while back so I’ll give this another look. > > Cheers > > Zayne > >> On 22 Mar 2019, at 15:52, Richard Freedman > wrote: >> >> Zayne, >> >> As part of The Lost Voices Project , we developed a set of routines for doing this sort of post-processing of MEI files using various Python scripts. >> >> Read more here and here . >> >> You could adapt these for your own use (the scripts are modular, and you could use/ignore any that are not relevant). Lost Voices uses VexFlow as the rendering engine. Verovio (which was not available at the time) is much better in many respects, and we are now using it in the Citations Project . 
>> >> Richard >> >> On Fri, Mar 22, 2019 at 9:32 AM Zayne Upton > wrote: >> I'm currently part of a research group that is compiling digital critical editions of African composers. Some of the team members are using Sibelius to notate the music in staff notation and I'm then using SibMEI to convert to MEI. The issue though is that the conversion is not completely accurate and I need to further edit the XML. What I'm struggling with is finding the best tools to do so. There is a plugin for Oxygen that I'm trying to get to work, but to no avail. >> >> Can anyone offer any advice here? >> >> Thanks >> >> Zayne Upton >> _______________________________________________ >> mei-l mailing list >> mei-l at lists.uni-paderborn.de >> https://lists.uni-paderborn.de/mailman/listinfo/mei-l >> >> >> -- >> Richard Freedman >> Professor of Music >> John C. Whitehead '43 Professor of Humanities >> Associate Provost for Curricular Development >> Haverford College >> Haverford, PA 19041 >> >> 610-896-1007 >> 610-896-4902 (fax) >> >> http://www.haverford.edu/users/rfreedma >> >> Schedule meeting time: https://goo.gl/3KN2hr >> >> _______________________________________________ >> mei-l mailing list >> mei-l at lists.uni-paderborn.de >> https://lists.uni-paderborn.de/mailman/listinfo/mei-l > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l > > > -- > Richard Freedman > Professor of Music > John C. 
Whitehead '43 Professor of Humanities > Associate Provost for Curricular Development > Haverford College > Haverford, PA 19041 > > 610-896-1007 > 610-896-4902 (fax) > > http://www.haverford.edu/users/rfreedma > > Schedule meeting time: https://goo.gl/3KN2hr > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l -------------- next part -------------- An HTML attachment was scrubbed... URL: From kepper at edirom.de Wed Mar 27 16:43:31 2019 From: kepper at edirom.de (Johannes Kepper) Date: Wed, 27 Mar 2019 16:43:31 +0100 Subject: [MEI-L] Documentation for MEI v4 In-Reply-To: <1BE37328-FFE0-434B-8EF0-F2271B5D93F7@edirom.de> References: <1BE37328-FFE0-434B-8EF0-F2271B5D93F7@edirom.de> Message-ID: <278DC2DD-8F1F-449B-A308-251F102C1297@edirom.de> Dear all, In preparation for our "ODD Friday" Documentation Sprint this coming Friday, we've prepared a new chapter structure for the MEI Guidelines. In the past, every module in the specs had its own chapter in the Guidelines. Some changes to the code for v4 have made this impractical, and indeed there are good arguments to not organise documentation in the same way as the code. We hope that this structure is more accessible than the original one: https://docs.google.com/document/d/1yKIPkjBwfMwbOMVawQxL6uy4EFjdda8WKpp8klFMsyQ/edit?usp=sharing We would like to gather feedback on that structure, either here on MEI-L or as comments (or proposed changes) in the document. The chapter names are mostly preliminary, and also chapter organisation may change to some degree while working on them. So we're asking for feedback about the general direction, although input about the specifics will be helpful for the actual implementation. That said, we hope that on Friday many of you will be available. That's when we want to assign volunteers to those chapters.
As you can see from the document, most of the content is already available, and just needs to be checked for compatibility with the current schema. If there is sufficient contribution from the Community, updating the Guidelines will be an easy task. Friday itself isn't necessarily a formal conference call. Instead, Benni and I will stand ready to answer questions. We recommend that people register on Slack for easy communication. I will be available on Skype as well (my user name there is j.kepper), and of course we're answering emails. However, to get things started, we will offer a conference call on Skype at 2.30pm Central European Time (1:30pm GMT, 9:30am EDT, 6:30am PDT). Please reach out early so that new contacts etc. can be accepted ahead of time. After the meeting, we will take the further work on this to GitHub, and assist people as needed. All best, Benni and Johannes > Dear all, > > Benni and I are currently preparing a work plan for revamping the Guidelines section of the MEI Documentation (https://music-encoding.org/guidelines/v4/content/). The problem is that this Documentation still reflects the state of MEI v3, which in some parts differs significantly from the current model. The differences can be traced here: https://music-encoding.org/archive/comparison-4.0.html, but that doesn't provide a detailed explanation of the differences. For that purpose, we currently only have the release notes, available from https://github.com/music-encoding/music-encoding/releases/tag/v4.0.0. > > It is understandable that people are getting confused when reading that documentation, and we would very much like to resolve that situation sooner rather than later. However, it's clear that we can't do this alone, or in a week's time. As a first step, we will introduce a warning at the top of each chapter explaining the situation, and redirecting people to work with the Specs (under Elements etc.) instead.
Next, we would like to gather and coordinate a group of people willing to help with updating the Guidelines. For this purpose, we plan to have regular open meetings on Slack / Skype / …, on every odd week's Friday (pun intended), where people can jointly work on the documentation, ask questions and so on. These will be hop-on / hop-off meetings – we're happy about everyone who is able to join us, and we don't expect formal commitments. Of course, work on the documentation can be done at other times as well, but we'd like to make ourselves available for discussion and so on. Ideally, something like this (maybe with a lower frequency) will be helpful for the continued development of the schema, but this has much lower priority right now. > > The first ODD Friday will be 29 March, then 12 April, and every other week from then. We will be available from 2pm German time (Germany switches to summer time on March 31, so I leave it to you to identify what this means for you…). We encourage everyone to join us, even without technical knowledge about MEI – this is about writing comprehensible documentation, where it's always good to have a broad range of expertise. If you're uncertain, please contact Benni and / or me directly, and we will make sure that everything will work for you. > > Thanks very much, and all best, > Benni and Johannes > -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: Message signed with OpenPGP URL: From zayne at zayne.co.za Sun Apr 7 09:44:01 2019 From: zayne at zayne.co.za (Zayne Upton) Date: Sun, 7 Apr 2019 09:44:01 +0200 Subject: [MEI-L] MEI to tonic-solfa representation for African choral music Message-ID: Has anyone come across a tool or XSLT that will output a tonic-solfa representation of an MEI file? I'm working on a project of African choral music, much of which is written in tonic-solfa.
Our team is capturing this music into MEI, but I'd like a way to represent it on our website both in staff and tonic-solfa simply using the MEI file. Any help would be appreciated. -- __________________________ Zayne Upton | +27 83 324 5435 -------------- next part -------------- An HTML attachment was scrubbed... URL: From kepper at edirom.de Thu Apr 11 17:06:47 2019 From: kepper at edirom.de (Johannes Kepper) Date: Thu, 11 Apr 2019 17:06:47 +0200 Subject: [MEI-L] MEI ODD Friday Message-ID: Dear all, tomorrow is our next "ODD Friday". Last time, we discussed a new structure for the Guidelines, which has been implemented already. Some people are already investigating where fixes to the Guidelines themselves are necessary (special thanks to Lara Grabitz, who's doing an internship in Detmold this week!), but we'll need to keep this running, and we still need a lot of input and help to get it done. Tomorrow, we'd like to try Zoom.us as the provider for our conference. Their free plan is limited to 40 minutes, but we'll just set up another meeting afterwards if necessary. We will be available by mail and on Slack simultaneously, so please reach out to us if that's better. The meeting will start at 2:30pm German time, so this'll be 1:30pm UK, 8:30am East Coast. Many thanks to everyone for all the support we are getting on this! The details on how to enter the call can be found below.
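[Editorial note on Zayne's tonic-solfa question above: a minimal, hypothetical XSLT 1.0 sketch of the pitch-to-syllable step. It maps MEI @pname to movable-do sol-fa abbreviations (d r m f s l t) relative to a tonic passed in as a parameter, and deliberately ignores accidentals, octave marks, and rhythm, all of which a real tonic sol-fa rendering would need. The parameter name and approach are illustrative, not an existing tool.]

```xml
<!-- Sketch only: movable-do syllables from MEI @pname, tonic as parameter. -->
<xsl:stylesheet version="1.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform"
    xmlns:mei="http://www.music-encoding.org/ns/mei">
  <xsl:output method="text"/>
  <xsl:param name="tonic" select="'c'"/>
  <xsl:variable name="steps" select="'cdefgab'"/>
  <xsl:template match="/">
    <xsl:for-each select="//mei:note">
      <!-- scale degree = distance of @pname from the tonic, mod 7 -->
      <xsl:variable name="degree"
          select="(string-length(substring-before($steps, @pname))
                  - string-length(substring-before($steps, $tonic)) + 7) mod 7"/>
      <xsl:value-of select="substring('d r m f s l t ', $degree * 2 + 1, 2)"/>
    </xsl:for-each>
  </xsl:template>
</xsl:stylesheet>
```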
All best, and see you tomorrow, jo ----------------------- Topic: Zoom meeting hosted by Johannes Kepper Time: Apr 12, 2019 2:30 PM Amsterdam, Berlin, Rome, Stockholm, Vienna Join Zoom meeting: https://zoom.us/j/564627885 Join by phone: +49 69 8088 3899 Germany +1 646 558 8656 United States +45 89 88 37 88 Denmark +47 7349 4877 Norway +41 22 518 89 78 Switzerland +44 203 966 3809 United Kingdom +33 1 8288 0188 France +43 72 011 5988 Austria Meeting ID: 564 627 885 Join by phone from other countries: https://zoom.us/u/abBcgO7QJu -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: Message signed with OpenPGP URL: From ichiro.fujinaga at mcgill.ca Thu Apr 18 18:45:26 2019 From: ichiro.fujinaga at mcgill.ca (Ichiro Fujinaga, Prof.) Date: Thu, 18 Apr 2019 16:45:26 +0000 Subject: [MEI-L] Postdoc position available at McGill University In-Reply-To: <0B8C6F40-9765-4272-A393-D78EA6C34F15@mcgill.ca> References: <0B8C6F40-9765-4272-A393-D78EA6C34F15@mcgill.ca> Message-ID: <98D9A897-A9DE-4E7F-97F0-81D1FFAFCF9C@mcgill.ca> The Single Interface for Music Score Searching and Analysis (SIMSSA) project at McGill University is hiring a new Postdoctoral Researcher in Music Information Retrieval to begin July 1 or as soon as possible. SIMSSA is a seven-year research partnership grant funded by the Social Sciences and Humanities Research Council of Canada, headed by Ichiro Fujinaga, Principal Investigator and Julie Cumming, Co-investigator. The goal of this project is to make digital images of musical notation searchable and analyzable. Please see https://simssa.ca/opportunities for more details on how to apply.
Ichiro From drizo at dlsi.ua.es Fri May 3 12:50:54 2019 From: drizo at dlsi.ua.es (David Rizo Valero) Date: Fri, 3 May 2019 12:50:54 +0200 Subject: [MEI-L] Call for Papers | DLfM2019 - Digital Libraries for Musicology | The Hague, The Netherlands | 9th November 2019 Message-ID: <6DC6D2BC-C6DE-4EE1-B6A2-BE96C0936AC9@dlsi.ua.es> [with apologies for cross posting] 6th International Conference on Digital Libraries for Musicology (DLfM 2019) 9th November 2019 National Library of The Netherlands A satellite event of ISMIR 2019. https://dlfm.web.ox.ac.uk/ CALL FOR PAPERS Many Digital Libraries have long offered facilities to provide multimedia content, including music. However there is now an ever more urgent need to specifically support the distinct multiple forms of music, the links between them, and the surrounding scholarly context, as required by the transformed and extended methods being applied to musicology and the wider Digital Humanities. The Digital Libraries for Musicology (DLfM) conference presents a venue specifically for those working on, and with, Digital Library systems and content in the domain of music and musicology. This includes Music Digital Library systems, their application and use in musicology, technologies for enhanced access and organisation of musics in Digital Libraries, bibliographic and metadata for music, intersections with music Linked Data, and the challenges of working with the multiple representations of music across large-scale digital collections such as the Internet Archive and HathiTrust. This, the Sixth Digital Libraries for Musicology conference, follows previous workshops in London, Knoxville, New York, Shanghai, and Paris. 
In 2019, DLfM is again proud to be a satellite event of the annual International Society for Music Information Retrieval (ISMIR) conference which is being held in Delft, and in particular encourages reports on the use of MIR methods and technologies within Music Digital Library systems when applied to the pursuit of musicological research. SCOPE AND OBJECTIVES DLfM focuses on the implications of music for Digital Libraries and Digital Libraries research when pushing the boundaries of contemporary musicology, including the application of techniques as reported in more technologically-oriented fora such as ISMIR and ICMC. This will be the sixth edition of DLfM following very successful and well-received previous workshops (in 2014, 2015, 2016, 2017, and 2018), giving an opportunity for the community to present and discuss recent developments that address the challenges of effectively combining technology with musicology through Digital Library systems and their application. The conference objectives are: to act as a forum for reporting, presenting, and evaluating this work and disseminating new approaches to advance the discipline; to create a venue for critically and constructively evaluating and verifying the operation of Music Digital Libraries and the applications and findings that flow from them; to consider the suitability of existing Music Digital Libraries, particularly in light of the transformative methods and applications emerging from musicology, large collections of both audio and music-related data, ‘big data’ methods, and MIR; to explore how digital libraries and digital musicology can combine to offer richer online access to online music collections; to set the agenda for work in the field to address these new challenges and opportunities.
TOPICS

Topics of interest include, but are not limited to:

Building and managing digital music collections
- Optical Music Recognition
- Information literacies for Music Digital Libraries
- Data quality assessment

Access, interfaces and ergonomics
- Interfaces and access mechanisms for Music Digital Libraries
- Identification/location of music (in all forms) in generic Digital Libraries
- Techniques for locating and accessing music in Very Large Digital Libraries (e.g. HathiTrust, Internet Archive) and musical corpus-building at scale
- Mechanisms for combining multi-form music content within and between Digital Libraries and other digital resources
- User information needs and behaviour for Music Digital Libraries

Musicological Knowledge
- Music data representations, including manuscripts/scores and audio
- Applied MIR techniques in Music Digital Libraries and musicological investigations using them
- Extraction of musical concepts from symbolic notation and audio data
- Metadata and metadata schemas for music
- Application of Linked Data and Semantic Web techniques to Music Digital Libraries, and for their access and organisation
- Ontologies and categorisation of musics and music artefacts

Improving data for musicology
- Digital Libraries which enrich public access to music, music-cultural, and music-ephemera material online
- Digital Libraries in support of musicology and other scholarly study; novel requirements and methodologies therein
- Digital Libraries for combination of resources in support of musicology (e.g. combining audio, scores, bibliographic, geographic, ethnomusicology, performance, etc.)

SUBMISSIONS

We invite full papers (up to 8 pages excluding references) or short and position papers (up to 4 pages excluding references). In addition to the general submission requirements below, we will require that camera-ready copy be received before 21st September 2019, and that at least one author per accepted paper is registered for DLfM by that date.
All papers will be peer reviewed by 2-3 members of the programme committee. Please submit an abstract to EasyChair by 21st June 2019, and produce your paper using the ACM template and submit it to DLfM on EasyChair by 28th June 2019. All submitted papers must: be written in English; contain author names, affiliations and e-mail addresses; be in PDF format (please ensure that the PDF can be viewed on any platform), and formatted for A4 size. Page limits for submitted papers apply to all text, but exclude the bibliography (i.e. references can be included on pages over the specified limits). It is the authors’ responsibility to ensure that their submissions adhere strictly to the required format. Submissions that do not comply with the above requirements may be rejected without review. Please note that at least one author from each accepted paper must attend the conference to present their work.

Submissions: https://easychair.org/conferences/?conf=dlfm2019
Contact email: dlfm2019 at easychair.org
ACM template (both Word and LaTeX): https://www.acm.org/publications/taps/word-template-workflow

Questions regarding the ACM manuscript templates MUST be directed to the ACM TeX support team at Aptara directly at acmtexsupport at aptaracorp.com.

IMPORTANT DATES

Abstract submission deadline: 21st June 2019 (23:59 UTC-11)
Paper submission deadline: 28th June 2019 (23:59 UTC-11)
Notification of acceptance: 17th August 2019
Camera ready submission deadline: 21st September 2019
Conference: 9th November 2019

DLfM proceedings will be included in the ACM Digital Library through the ICPS series.

CONFERENCE ORGANIZATION

Programme Chair
David Rizo, Universidad de Alicante.
Instituto Superior de Enseñanzas Artísticas de la Comunidad Valenciana

General Chair
Kevin Page, University of Oxford

Publicity and Proceedings Chair
Jorge Calvo-Zaragoza, Universidad de Alicante

Programme Committee (in progress)
Alessandro Adamou, Knowledge Media Institute, The Open University
Islah Ali-Maclachlan, Birmingham City University
Richard Chesser, British Library
Tim Crawford, Goldsmiths College
María Teresa Delgado-Sánchez, Biblioteca Nacional de España
Jürgen Diet, Bavarian State Library
Tim Duguid, University of Glasgow
Yun Fan, Répertoire International de Littérature Musicale
Ichiro Fujinaga, McGill University
Francesca Giannetti, Rutgers University
José Manuel Iñesta, Universidad de Alicante
Audrey Laplante, EBSI, Université de Montréal
David Lewis, University of Oxford
Cynthia Liem, Delft University of Technology
Joshua Neumann, University of Florida
Alastair Porter, Universitat Pompeu Fabra
Laurent Pugin, RISM Switzerland
Amelie Roper, British Library
Sertan Şentürk
Marnix Vanberchum, Utrecht University
Rafael Caro Repetto, Universitat Pompeu Fabra
Kjell Lemström, University of Helsinki

-------------- next part -------------- An HTML attachment was scrubbed... URL: From Gabriel at music.mcgill.ca Fri May 3 21:33:25 2019 From: Gabriel at music.mcgill.ca (Gabriel Vigliensoni) Date: Fri, 3 May 2019 15:33:25 -0400 Subject: [MEI-L] mei-Neumes validation Message-ID: Dear MEI-L, We are trying to validate files with square-note notation against the mei-Neumes customization. The validator is parsing all musical elements but is throwing errors for clef and custos (error: element ... not allowed here). In contrast, if we use mei-all, the validation passes successfully. We looked at the mei-Neumes customization file and realized that model.eventLike is being modified (https://github.com/music-encoding/music-encoding/blob/develop/customizations/mei-Neumes.xml#L113-L120), with the result that clef and custos elements have to be within syllable instead of layer.
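[Editorial note: a minimal sketch of the contrast Gabriel describes. This is an illustrative reconstruction, not from the original message; element names follow the MEI neumes module, and all attribute values and the sample syllable are assumed.]

```xml
<!-- Accepted by mei-all: clef and custos directly inside the layer -->
<layer>
  <clef shape="C" line="4"/>
  <syllable>
    <syl>lux</syl>
    <neume><nc/></neume>
  </syllable>
  <custos/>
</layer>

<!-- What the mei-Neumes customization, as described above, appears to
     require instead: clef and custos nested inside a syllable -->
<layer>
  <syllable>
    <clef shape="C" line="4"/>
    <syl>lux</syl>
    <neume><nc/></neume>
    <custos/>
  </syllable>
</layer>
```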
Is this an unintended error or are there any special motives for this decision? Thank you, Gabriel PS: Attached to this email there is a basic MEI example with neume notation that cannot be validated. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: example-neume.mei Type: application/octet-stream Size: 1150 bytes Desc: not available URL: From reinierdevalk at gmail.com Tue May 21 16:36:08 2019 From: reinierdevalk at gmail.com (Reinier de Valk) Date: Tue, 21 May 2019 15:36:08 +0100 Subject: [MEI-L] Automatic beaming Message-ID: Dear all, I am working with automatically generated MEI files that contain no beaming information. This page [1] in the MEI guidelines seems to suggest that beaming can be done automatically by using beam.group. So I tried adding beam.group='4,4,4,4' to either the scoreDef or the staffDefs of my example, as follows: ... but there is no difference when I render the file - none of the notes are beamed. I am not an experienced user, and I am just playing around figuring things out - so it is entirely possible that I am misunderstanding the usage of beam.group. Any tips would be greatly appreciated! Best wishes, Reinier [1] https://music-encoding.org/guidelines/v3/attribute-classes/att.beaming.log.html -------------- next part -------------- An HTML attachment was scrubbed... URL: From andrew.hankinson at bodleian.ox.ac.uk Tue May 21 17:05:53 2019 From: andrew.hankinson at bodleian.ox.ac.uk (Andrew Hankinson) Date: Tue, 21 May 2019 15:05:53 +0000 Subject: [MEI-L] Automatic beaming In-Reply-To: References: Message-ID: <766FF48E-8C99-495F-A0DE-27C2051CD6B8@bodleian.ox.ac.uk> Hi Reinier, I suspect that whatever you are using to render the files (Verovio?) just doesn't support the @beam.group attribute (yet).
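[Editorial note: the example scrubbed from Reinier's message would have looked roughly like the following. This is a reconstruction under assumptions: only @beam.group='4,4,4,4' on scoreDef/staffDef is attested in the thread; all other attribute values are invented for illustration.]

```xml
<scoreDef meter.count="4" meter.unit="4" beam.group="4,4,4,4">
  <staffGrp>
    <!-- @beam.group can equally be attempted here, on the staffDef -->
    <staffDef n="1" lines="5" clef.shape="G" clef.line="2" beam.group="4,4,4,4"/>
  </staffGrp>
</scoreDef>
```

Whether the attribute has any visible effect then depends entirely on renderer support.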
-Andrew > On 21 May 2019, at 15:36, Reinier de Valk wrote: > > Dear all, > > I am working with automatically generated MEI files that contain no beaming information. This page [1] in the MEI guidelines seems to suggest that beaming can be done automatically by using beam.group. So I tried adding beam.group='4,4,4,4' to either the scoreDef or the staffDefs of my example, as follows: > > > > ... but there is no difference when I render the file - none of the notes are beamed. > > I am not an experienced user, and I am just playing around figuring things out - so it is very well possible that I am misunderstanding the usage of beam.group. Any tips would be greatly appreciated! > > Best wishes, > Reinier > > [1] https://music-encoding.org/guidelines/v3/attribute-classes/att.beaming.log.html > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l From reinierdevalk at gmail.com Tue May 21 17:19:25 2019 From: reinierdevalk at gmail.com (Reinier de Valk) Date: Tue, 21 May 2019 16:19:25 +0100 Subject: [MEI-L] Automatic beaming In-Reply-To: <766FF48E-8C99-495F-A0DE-27C2051CD6B8@bodleian.ox.ac.uk> References: <766FF48E-8C99-495F-A0DE-27C2051CD6B8@bodleian.ox.ac.uk> Message-ID: Hi Andrew, That might very well be it! I am using the online Verovio MEI viewer. What would you suggest I use instead? Thanks, Reinier On Tue, 21 May 2019 at 16:06, Andrew Hankinson < andrew.hankinson at bodleian.ox.ac.uk> wrote: > Hi Reinier, > > I suspect that whatever you are using to render the files (Verovio?) just > doesn't support the @beam.group attribute (yet). > > -Andrew > > > On 21 May 2019, at 15:36, Reinier de Valk > wrote: > > > > Dear all, > > > > I am working with automatically generated MEI files that contain no > beaming information. This page [1] in the MEI guidelines seems to suggest > that beaming can be done automatically by using beam.group.
So I tried > adding beam.group='4,4,4,4' to either the scoreDef or the staffDefs of my > example, as follows: > > > > beam.group='4,4,4,4'> > > > > ... but there is no difference when I render the file - none of the > notes are beamed. > > > > I am not an experienced user, and I am just playing around figuring > things out - so it is very well possible that I am misunderstanding the > usage of beam.group. Any tips would be greatly appreciated! > > > > Best wishes, > > Reinier > > > > [1] > https://music-encoding.org/guidelines/v3/attribute-classes/att.beaming.log.html > > _______________________________________________ > > mei-l mailing list > > mei-l at lists.uni-paderborn.de > > https://lists.uni-paderborn.de/mailman/listinfo/mei-l > > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l > -------------- next part -------------- An HTML attachment was scrubbed... URL: From juliette.regimbal at mail.mcgill.ca Wed May 22 17:30:43 2019 From: juliette.regimbal at mail.mcgill.ca (Juliette Regimbal) Date: Wed, 22 May 2019 15:30:43 +0000 Subject: [MEI-L] Representing syllables that span multiple staves (neume notation) Message-ID: Hello everyone, In neume notation, it is possible that a single syllable can have neumes that continue past the end of a staff and onto the following one. An example is shown in the attached image. In it, the last syllable of the first staff ("ret") has neumes on both the end of the first staff and the beginning of the second staff. This kind of situation can also occur across pages if the last staff of a page has a syllable that continues onto the next staff, which would be the first staff of the following page. Since MEI is an XML-based encoding, any element can only have one direct parent element. 
In this case, a <syllable> must be the child of exactly one <layer> in one <staff> and any neumes that are part of the syllable must be represented by <neume> elements in a single <syllable> element. There is no way to encode a single <syllable> in multiple <staff> elements, even if the syllable it is meant to describe does span multiple staves. Looking through the MEI documentation there does not appear to be an accepted way to encode this, even though this should be possible. For example, this could be done by creating two separate <syllable> elements in each <staff>. Each <syllable> could reference the ID of the other in an attribute saying that the syllable is continued by another element or is a continuation of another element for the first and second elements respectively. Juliette Regimbal -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Screenshot_2019-05-22 Neon3(2).png Type: image/png Size: 738930 bytes Desc: Screenshot_2019-05-22 Neon3(2).png URL: From thomas.weber at notengrafik.com Wed May 22 23:57:28 2019 From: thomas.weber at notengrafik.com (Thomas Weber) Date: Wed, 22 May 2019 21:57:28 +0000 Subject: [MEI-L] Representing syllables that span multiple staves (neume notation) In-Reply-To: References: Message-ID: <6a327eda-75d3-1dff-b35d-14754373a358@notengrafik.com> On 22.05.19 at 17:30, Juliette Regimbal wrote: Since MEI is an XML-based encoding, any element can only have one direct parent element. In this case, a <syllable> must be the child of exactly one <layer> in one <staff> and any neumes that are part of the syllable must be represented by <neume> elements in a single <syllable> element. There is no way to encode a single <syllable> in multiple <staff> elements, even if the syllable it is meant to describe does span multiple staves. I think that is a misconception of the <staff> element. At least in my understanding, <staff> describes a logical staff, not layout. In mensural and neumes notation where you don't have <measure>, I'd therefore only use one <staff> element per part in each section.
<sb> then should be used to describe layout. Looking through the MEI documentation there does not appear to be an accepted way to encode this, even though this should be possible. It is: <sb> is allowed inside <syllable>. Thomas -- Notengrafik Berlin GmbH HRB 150007 UstID: DE 289234097 Geschäftsführer: Thomas Weber und Werner J. Wolff fon: +49 30 25359505 Friedrichstraße 23a 10969 Berlin notengrafik.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From andrew.hankinson at bodleian.ox.ac.uk Thu May 23 08:23:52 2019 From: andrew.hankinson at bodleian.ox.ac.uk (Andrew Hankinson) Date: Thu, 23 May 2019 06:23:52 +0000 Subject: [MEI-L] Representing syllables that span multiple staves (neume notation) In-Reply-To: References: Message-ID: Hi Juliette, In MEI there is a distinction between a 'staff' and a 'system'. A staff could be defined as the 'line' of performed music, while systems are simply the mechanism to fit a portion of a staff within a given width (a page, or a screen). A 'system begin' is encoded with the `<sb>` element, and this is a milestone element that can generally go anywhere in your encoding. In the picture you sent I would say there is only one staff if we ignore the 'Dominus' bit up to the red line, and assume that the chant ends where your image ends, thus starting 'Et' and ending '-ne'. There are, however, two systems. Although not wanting to say 'never' when notated music is concerned, it's highly unlikely that a syllable would cross two 'staff' elements, since this would mean a single syllable would be present across, say, a movement or section. In Neume encoding specifically, the `syllable` element is different than the `syl` element in "standard" MEI. Syllable is a logical chunking of the content such that multiple neumes can be grouped on the same sung bit of text, while `syl` provides the specific text being sung. So for your encoding I would say (roughly; neume and syl encoding abbreviated):
<staff>
  <layer>
    <syllable><syl>Et</syl><neume/></syllable>
    <syllable><syl>au-</syl><neume/></syllable>
    <syllable><syl>fe-</syl><neume/></syllable>
    <syllable><syl>ret</syl><neume/></syllable>
    <syllable><syl>a</syl><neume/></syllable>
    ...
  </layer>
</staff>
To indicate the "system begin" on the new line you might then begin the next staff (just off the page in your image):
<syllable>
  <syl>ret</syl>
  <neume/>
  <sb/>
  <neume/>
</syllable>
...etc.

Does that help? -Andrew > On 22 May 2019, at 16:30, Juliette Regimbal wrote: > > Hello everyone, > > In neume notation, it is possible that a single syllable can have neumes that continue past the end of a staff and onto the following one. An example is shown in the attached image. In it, the last syllable of the first staff ("ret") has neumes on both the end of the first staff and the beginning of the second staff. This kind of situation can also occur across pages if the last staff of a page has a syllable that continues onto the next staff, which would be the first staff of the following page. > > Since MEI is an XML-based encoding, any element can only have one direct parent element. In this case, a <syllable> must be the child of exactly one <layer> in one <staff> and any neumes that are part of the syllable must be represented by <neume> elements in a single <syllable> element. There is no way to encode a single <syllable> in multiple <staff> elements, even if the syllable it is meant to describe does span multiple staves. > > Looking through the MEI documentation there does not appear to be an accepted way to encode this, even though this should be possible. For example, this could be done by creating two separate <syllable> elements in each <staff>. Each <syllable> could reference the ID of the other in an attribute saying that the syllable is continued by another element or is a continuation of another element for the first and second elements respectively.
> > Juliette Regimbal > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l From juliette.regimbal at mail.mcgill.ca Thu May 23 15:08:46 2019 From: juliette.regimbal at mail.mcgill.ca (Juliette Regimbal) Date: Thu, 23 May 2019 13:08:46 +0000 Subject: [MEI-L] Representing syllables that span multiple staves (neume notation) In-Reply-To: References: , Message-ID: Hi Andrew, This does help, but for me it raises a separate question regarding how to encode where the systems would be using the facsimile module. When using <sb> to encode the different systems, would the zone describing the bounding box of the system be referenced with the facs attribute on the system begin element? Thank you, Juliette ________________________________ From: mei-l on behalf of Andrew Hankinson Sent: May 23, 2019 2:23 AM To: Music Encoding Initiative Subject: Re: [MEI-L] Representing syllables that span multiple staves (neume notation) Hi Juliette, In MEI there is a distinction between a 'staff' and a 'system'. A staff could be defined as the 'line' of performed music, while systems are simply the mechanism to fit a portion of a staff within a given width (a page, or a screen). A 'system begin' is encoded with the `<sb>` element, and this is a milestone element that can generally go anywhere in your encoding. In the picture you sent I would say there is only one staff if we ignore the 'Dominus' bit up to the red line, and assume that the chant ends where your image ends, thus starting 'Et' and ending '-ne'. There are, however, two systems. Although not wanting to say 'never' when notated music is concerned, it's highly unlikely that a syllable would cross two 'staff' elements, since this would mean a single syllable would be present across, say, a movement or section. In Neume encoding specifically, the `syllable` element is different than the `syl` element in "standard" MEI.
Syllable is a logical chunking of the content such that multiple neumes can be grouped on the same sung bit of text, while `syl` provides the specific text being sung. So for your encoding I would say (roughly; neume and syl encoding abbreviated):
<staff>
  <layer>
    <syllable><syl>Et</syl><neume/></syllable>
    <syllable><syl>au-</syl><neume/></syllable>
    <syllable><syl>fe-</syl><neume/></syllable>
    <syllable><syl>ret</syl><neume/></syllable>
    <syllable><syl>a</syl><neume/></syllable>
    ...
  </layer>
</staff>
To indicate the "system begin" on the new line you might then begin the next staff (just off the page in your image):
<syllable>
  <syl>ret</syl>
  <neume/>
  <sb/>
  <neume/>
</syllable>
...etc.

Does that help? -Andrew > On 22 May 2019, at 16:30, Juliette Regimbal wrote: > > Hello everyone, > > In neume notation, it is possible that a single syllable can have neumes that continue past the end of a staff and onto the following one. An example is shown in the attached image. In it, the last syllable of the first staff ("ret") has neumes on both the end of the first staff and the beginning of the second staff. This kind of situation can also occur across pages if the last staff of a page has a syllable that continues onto the next staff, which would be the first staff of the following page. > > Since MEI is an XML-based encoding, any element can only have one direct parent element. In this case, a <syllable> must be the child of exactly one <layer> in one <staff> and any neumes that are part of the syllable must be represented by <neume> elements in a single <syllable> element. There is no way to encode a single <syllable> in multiple <staff> elements, even if the syllable it is meant to describe does span multiple staves. > > Looking through the MEI documentation there does not appear to be an accepted way to encode this, even though this should be possible. For example, this could be done by creating two separate <syllable> elements in each <staff>. Each <syllable> could reference the ID of the other in an attribute saying that the syllable is continued by another element or is a continuation of another element for the first and second elements respectively. > > Juliette Regimbal > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l _______________________________________________ mei-l mailing list mei-l at lists.uni-paderborn.de https://lists.uni-paderborn.de/mailman/listinfo/mei-l -------------- next part -------------- An HTML attachment was scrubbed...
URL: From kepper at edirom.de Thu May 23 21:36:37 2019 From: kepper at edirom.de (Johannes Kepper) Date: Thu, 23 May 2019 21:36:37 +0200 Subject: [MEI-L] Final ODD Friday before MEC Message-ID: Dear all, tomorrow is the last ODD Friday before the Music Encoding Conference in Vienna next week. Benni and I will be around over the day on Slack or by E-Mail, in case someone needs assistance on their work on the Guidelines ;-) In the (German) afternoon at 3pm, I will host a Zoom.us meeting for last-minute coordination. Just step in if you like. All best, Benni and Johannes ----------- ODD Friday Time: May 24, 2019 3:00 PM Amsterdam, Berlin, Rome, Stockholm, Vienna Meeting-ID: https://zoom.us/j/130949664 -------------- next part -------------- A non-text attachment was scrubbed... Name: iCal-20190523-212115.ics Type: text/calendar Size: 1799 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: Message signed with OpenPGP URL: From kepper at edirom.de Thu May 23 21:49:07 2019 From: kepper at edirom.de (Johannes Kepper) Date: Thu, 23 May 2019 21:49:07 +0200 Subject: [MEI-L] Final ODD Friday before MEC In-Reply-To: References: Message-ID: <8E7188AC-F458-4C15-B27B-473FFB70498C@edirom.de> I just noticed that the mail was sent with PGP turned on. As this may cause problems for some, here's the text without encryption – at least I hope ;-) All best, jo > On 23.05.2019 at 21:36, Johannes Kepper wrote: > > Dear all, > > tomorrow is the last ODD Friday before the Music Encoding Conference in Vienna next week. Benni and I will be around over the day on Slack or by E-Mail, in case someone needs assistance on their work on the Guidelines ;-) In the (German) afternoon at 3pm, I will host a Zoom.us meeting for last-minute coordination. Just step in if you like.
> > All best, > Benni and Johannes > > > > ----------- > > > > ODD Friday > Time: May 24, 2019 3:00 PM Amsterdam, Berlin, Rome, Stockholm, Vienna > > Meeting-ID: https://zoom.us/j/130949664 > > From RKlugseder at gmx.de Fri May 24 11:27:01 2019 From: RKlugseder at gmx.de (RKlugseder at gmx.de) Date: Fri, 24 May 2019 11:27:01 +0200 Subject: [MEI-L] MEC Vienna 2019 Message-ID: An HTML attachment was scrubbed... URL: From esfield at stanford.edu Fri May 24 23:02:20 2019 From: esfield at stanford.edu (Eleanor Selfridge-Field) Date: Fri, 24 May 2019 21:02:20 +0000 Subject: [MEI-L] MEC Vienna 2019 In-Reply-To: References: Message-ID: Dear Robert, The transport pass is a really innovative idea (for MEI). Thanks for that. I will be in Vienna from Sunday evening for a week. Is there any way to get hold of the pass before Wednesday? I’ll be staying near the conference venue (Hotel Baron am Schottentur). Many thanks for all your hard work on behalf of the other arrangements. Eleanor Eleanor Selfridge-Field Braun Music Center #129 541 Lasuen Mall Stanford University Stanford, CA 94305-3076 https://profiles.stanford.edu/eleanor-selfridge-field From: mei-l On Behalf Of RKlugseder at gmx.de Sent: Friday, May 24, 2019 2:27 AM To: mei-l at lists.uni-paderborn.de Subject: [MEI-L] MEC Vienna 2019 Dear colleagues, We would like to share the latest information with you shortly before the start of the MEC Vienna 2019. We will be available on Wednesday morning (pre-conference day) from 8 am for your registration in Hörsaal 2 (lecture room 2) of the Institute of Musicology. Please register before attending the workshops. For our foreign guests, we will provide a weekly ticket for Vienna's public transport system at the registration. The participants of the workshops of the pre-conference day will be contacted, if necessary, by those responsible for the workshops. We do not offer catering on the pre-conference day.
You can find supermarkets and many restaurants near the conference venue. The participants of the guided tour through the Music Collection of the Austrian National Library meet at 4 pm at Michaelerplatz, directly at the entrance to the Hofburg. https://goo.gl/maps/FBQo16j4LZij2m3bA On Saturday there will be an opportunity for meetings after the Community Meeting (CM). Separate rooms are available for the MEI SIGs. All other groups can meet in Hörsaal 1 (lecture room 1). We will organize this after the CM. Tickets for the conference dinner for partners can be purchased at the conference registration (30 EUR, cash only). Please note the travel guide, the map of the conference venue and the "Green meeting" information on the conference website. https://music-encoding.org/conference/2019/ In particular, we would like to invite you to the festive opening of the conference and the first keynote speech in the historic Festsaal of the Academy of Sciences. The auditorium of the University of Vienna (now the Festsaal) was one of the most important concert halls in Vienna around 1800. Beethoven conducted the world premiere of his 7th Symphony and his symphonic battle painting "Wellington's Victory" here on December 8 and 12, 1813. In addition, the 'mechanical trumpeter' of the inventor Johann Nepomuk Mälzel gave a spectacular performance. The event begins at 7 pm. After the keynote there will be a reception in the Aula of the Academy. https://goo.gl/maps/9pxnFwAiR9gK58Dq5 We are very pleased to welcome you to Vienna. For the Organizing Committee Robert Klugseder, chair -------------- next part -------------- An HTML attachment was scrubbed...
URL: From g.fazekas at qmul.ac.uk Wed Jun 5 13:44:57 2019 From: g.fazekas at qmul.ac.uk (George Fazekas) Date: Wed, 5 Jun 2019 11:44:57 +0000 Subject: [MEI-L] PhD studentship at Queen Mary University of London (AI and Music CDT) Message-ID: <0E857D45-2703-45C1-A33C-A1DC49EE545D@qmul.ac.uk> (with apologies for cross-postings) A fully-funded PhD studentship is available to carry out research in the area of Optical Music Recognition using Deep Learning in collaboration with Steinberg Media Technologies GmbH. The position is available within the UKRI Centre for Doctoral Training in Artificial Intelligence and Music (AIM) at Queen Mary University of London. https://www.aim.qmul.ac.uk/ The studentship covers fees and a stipend for four years, starting September 2019. The position is open to UK and international students. Application deadline: 21 June 2019 (http://www.aim.qmul.ac.uk/apply) Why apply to the AIM Programme? * 4-year fully-funded PhD studentships available * Access to cutting-edge facilities and expertise in artificial intelligence (AI) and music/audio technology * Comprehensive technical training at the intersection of AI and music through a personalised programme * Partnerships with over 20 companies and cultural institutions in the music, audio and creative sectors More information on the AIM Programme can be found at: https://www.aim.qmul.ac.uk/ PhD Topic: Optical Music Recognition using Deep Learning in collaboration with Steinberg Media Technologies GmbH. The proposed PhD focuses on developing novel techniques for optical music recognition (OMR) using Deep Neural Networks (DNN). The research will be carried out in collaboration with Steinberg Media Technologies, opening the opportunity to work with and test the research outcomes in leading music notation software such as Dorico (http://www.dorico.com).
Musicians, composers, arrangers, orchestrators and other users of music notation have long had a dream that they could simply take a photo or use a scan of sheet music and bring it into a music notation application to be able to make changes, rearrange, transpose, or simply listen to it being played by the computer. The PhD aims to investigate and demonstrate a novel approach to converting images of sheet music into a semantic representation such as MusicXML and/or MEI. The research will be carried out in the context of designing a music recognition engine capable of ingesting, optically correcting, processing and recognising multiple pages of handwritten or printed music from images captured by mobile phone, or low-resolution copyright-free scans from the International Music Score Library Project (IMSLP). The main objective is outputting semantic mark-up identifying as many notational elements and text as possible, along with the relationship to their position in the original image. Prior solutions have used algorithmic approaches and have involved layers of algorithmic rules applied to traditional feature detection techniques such as edge detection. An opportunity exists to develop and evaluate new approaches based on DNN and other machine learning techniques. State-of-the-art Optical Music Recognition (OMR) is already able to recognise clean sheet music with very high accuracy, but fixing the remaining errors may take just as long as, if not longer than, transcribing the music into notation software by hand. A new method that can improve recognition rates will allow users who are not so adept at inputting notes into a music notation application to get better results more quickly. Another challenge to tackle is the variability in quality of input (particularly from images captured from smartphones) and how best to preprocess the images to improve the quality of recognition for subsequent stages of the pipeline.
The application of cutting-edge techniques in data science, including machine learning, particularly convolutional neural networks (CNN), may yield better results than traditional methods. To this end, research will start from testing VGG-like architectures (https://arxiv.org/abs/1409.1556) and residual networks (e.g. ResNet, https://arxiv.org/pdf/1512.03385.pdf) for the recognition of handwritten and/or low-resolution printed sheet music. The same techniques may also prove useful in earlier stages of the pipeline such as document detection and feature detection. It would be desirable to recognise close to all individual objects in the score. One of the first objectives will be to establish the methodology for determining the differences between the reference data and the recognised data. Furthermore, data augmentation can be supported by existing Steinberg software. The ideal candidate would have previous experience of training machine learning models and would be familiar with Western music notation. Being well versed in image acquisition, processing techniques, and computer vision would be a significant advantage. Programme structure Our Centre for Doctoral Training (CDT) offers a four-year training programme where students will carry out a research project in the intersection of AI and music, supported by taught specialist modules, industrial placements, and skills training. Find out more about the programme structure at: http://www.aim.qmul.ac.uk/about/ Who can apply? We are on the lookout for the best and brightest students interested in the intersection of music/audio technology and AI. Successful applicants will have the following profile: * Hold or be completing a Master's degree at distinction or first-class level, or equivalent, in Computer Science, Electronic Engineering, Music/Audio Technology, Physics, Mathematics, or Psychology.
* Programming skills are strongly desirable; however, we do not consider this to be an essential criterion if candidates have complementary strengths.
* Formal music training is desirable, but not a prerequisite.
* This position is open to UK and international students.

Funding

Funding will cover the cost of tuition fees and will provide an annual tax-free stipend of £17,009. The CDT will also provide funding for conference travel, equipment, and for attending other CDT-related events.

Apply Now

Information on applications and PhD topics can be found at: http://www.aim.qmul.ac.uk/apply Application deadline: 21 June 2019 For further information on eligibility, funding and the application process, please visit our website. Please email any questions to aim-enquiries at qmul.ac.uk — Dr. George Fazekas, Senior Lecturer (Assoc. Prof.) in Digital Media Programme Coordinator, Sound and Music Computing (SMC) Centre for Digital Music (C4DM) School of Electronic Engineering and Computer Science Queen Mary University of London, UK FHEA, M. IEEE, ACM, AES email: g.fazekas at qmul.ac.uk web: c4dm.eecs.qmul.ac.uk | semanticaudio.net | audiocommons.org | bit.ly/smc-qmul | aim.qmul.ac.uk -------------- next part -------------- An HTML attachment was scrubbed... URL: From drizo at dlsi.ua.es Wed Jun 12 18:34:14 2019 From: drizo at dlsi.ua.es (David Rizo Valero) Date: Wed, 12 Jun 2019 18:34:14 +0200 Subject: [MEI-L] 2nd Call for Papers | DLfM2019 - Digital Libraries for Musicology | The Hague, The Netherlands | 9th November 2019 Message-ID: <46D18962-8F75-4E89-BFB7-92E46282A2F5@dlsi.ua.es> [with apologies for cross posting] * Abstracts deadline in 10 days * 6th International Conference on Digital Libraries for Musicology (DLfM 2019) 9th November 2019 National Library of The Netherlands A satellite event of ISMIR 2019. https://dlfm.web.ox.ac.uk/ CALL FOR PAPERS Many Digital Libraries have long offered facilities to provide multimedia content, including music.
However there is now an ever more urgent need to specifically support the distinct multiple forms of music, the links between them, and the surrounding scholarly context, as required by the transformed and extended methods being applied to musicology and the wider Digital Humanities. The Digital Libraries for Musicology (DLfM) conference presents a venue specifically for those working on, and with, Digital Library systems and content in the domain of music and musicology. This includes Music Digital Library systems, their application and use in musicology, technologies for enhanced access and organisation of musics in Digital Libraries, bibliographic and metadata for music, intersections with music Linked Data, and the challenges of working with the multiple representations of music across large-scale digital collections such as the Internet Archive and HathiTrust. This, the Sixth Digital Libraries for Musicology conference, follows previous workshops in London, Knoxville, New York, Shanghai, and Paris. In 2019, DLfM is again proud to be a satellite event of the annual International Society for Music Information Retrieval (ISMIR) conference which is being held in Delft, and in particular encourages reports on the use of MIR methods and technologies within Music Digital Library systems when applied to the pursuit of musicological research. SCOPE AND OBJECTIVES DLfM will focus on the implications of music for Digital Libraries and Digital Libraries research when pushing the boundaries of contemporary musicology, including the application of techniques as reported in more technologically-oriented fora such as ISMIR and ICMC. 
This will be the sixth edition of DLfM following very successful and well received previous workshops (in 2014, 2015, 2016, 2017, and 2018), giving an opportunity for the community to present and discuss recent developments that address the challenges of effectively combining technology with musicology through Digital Library systems and their application. The conference objectives are: to act as a forum for reporting, presenting, and evaluating this work and disseminating new approaches to advance the discipline; to create a venue for critically and constructively evaluating and verifying the operation of Music Digital Libraries and the applications and findings that flow from them; to consider the suitability of existing Music Digital Libraries, particularly in light of the transformative methods and applications emerging from musicology, large collections of both audio and music related data, ‘big data’ methods, and MIR; to explore how digital libraries and digital musicology can combine to offer richer online access to online music collections; to set the agenda for work in the field to address these new challenges and opportunities.

TOPICS

Topics of interest include, but are not limited to:

Building and managing digital music collections
- Optical Music Recognition
- Information literacies for Music Digital Libraries
- Data quality assessment

Access, interfaces and ergonomics
- Interfaces and access mechanisms for Music Digital Libraries
- Identification/location of music (in all forms) in generic Digital Libraries
- Techniques for locating and accessing music in Very Large Digital Libraries (e.g. HathiTrust, Internet Archive) and musical corpus-building at scale
- Mechanisms for combining multi-form music content within and between Digital Libraries and other digital resources
- User information needs and behaviour for Music Digital Libraries

Musicological Knowledge
- Music data representations, including manuscripts/scores and audio
- Applied MIR techniques in Music Digital Libraries and musicological investigations using them
- Extraction of musical concepts from symbolic notation and audio data
- Metadata and metadata schemas for music
- Application of Linked Data and Semantic Web techniques to Music Digital Libraries, and for their access and organisation
- Ontologies and categorisation of musics and music artefacts

Improving data for musicology
- Digital Libraries which enrich public access to music, music-cultural, and music-ephemera material online
- Digital Libraries in support of musicology and other scholarly study; novel requirements and methodologies therein
- Digital Libraries for combination of resources in support of musicology (e.g. combining audio, scores, bibliographic, geographic, ethnomusicology, performance, etc.)

SUBMISSIONS

Proceedings track

We invite full papers (up to 8 pages excluding references) or short and position papers (up to 4 pages excluding references). In addition to the general submission requirements below, we will require that camera-ready copy be received before 21st September 2019, and that at least one author per accepted paper is registered for DLfM by that date. All papers will be peer reviewed by 2-3 members of the programme committee. Please submit an abstract to EasyChair by 21st June 2019, and produce your paper using the ACM template and submit it to DLfM on EasyChair by 28th June 2019. All submitted papers must: be written in English; contain author names, affiliations and e-mail addresses; be in PDF format (please ensure that the PDF can be viewed on any platform), and formatted for A4 size.
Page limits for submitted papers apply to all text, but exclude the bibliography (i.e. references can be included on pages over the specified limits). It is the authors’ responsibility to ensure that their submissions adhere strictly to the required format. Submissions that do not comply with the above requirements may be rejected without review. Please note that at least one author of each accepted paper must attend the conference to present their work. Submissions: https://easychair.org/conferences/?conf=dlfm2019 Contact email: dlfm2019 at easychair.org ACM template (both Word and LaTeX): https://www.acm.org/publications/taps/word-template-workflow Questions regarding the ACM manuscript templates MUST be directed to the ACM TeX support team at Aptara directly at acmtexsupport at aptaracorp.com. TROMPA Project Challenge Diverse public domain collections exposing materials of scholarly musicological interest are published on the Web. How will scholars benefit from the interlinking of such repositories? What research questions will be supported by unified access to collections of digitised score images, score encodings, textual and audio-visual materials, and other multimodal data sources? What kinds of holistic interpretive and analytical insights can scholars contribute to enrich such interconnected repositories, and how can they be supported in doing so? The TROMPA Project Challenge solicits short position papers addressing these questions as submissions of up to 2 pages to DLfM. TROMPA Project Challenge papers will be peer reviewed, and accepted papers will be presented at the conference either as part of a panel or as a poster. Challenge papers will not be included in the main DLfM proceedings, but will be compiled into a supplement hosted on the conference website.
While we encourage authors to engage with DLfM through the TROMPA Project Challenge track, those who wish their papers to appear in the main proceedings may prefer to submit a more detailed description of their work to the Proceedings Track as a short or long paper (see above). TROMPA (trompamusic.eu) is an EU-funded project (2018-2021) dedicated to massively enriching and democratising the heritage of classical music, and involving content owners, scholars, performers, choral singers and music enthusiasts of every kind. The project employs and improves state-of-the-art technology, engaging thousands of music-loving citizens to work with the technology, give feedback on algorithmic results, and annotate the data according to their personal expertise. IMPORTANT DATES Abstract submission deadline: 21st June 2019 (23:59 UTC-11) Paper submission deadline: 28th June 2019 (23:59 UTC-11) Notification of acceptance: 17th August 2019 Camera-ready submission deadline: 21st September 2019 Conference: 9th November 2019 DLfM proceedings will be included in the ACM Digital Library through the ICPS series. CONFERENCE ORGANIZATION Programme Chair David Rizo, Universidad de Alicante. Instituto Superior de Enseñanzas Artísticas de la Comunidad Valenciana General Chair Kevin Page, University of Oxford Local Chair Lotte Wilms, KB National Library of the Netherlands Publicity and proceedings Chair Jorge Calvo-Zaragoza, Universidad de Alicante Programme Committee (in progress) Alessandro Adamou, Knowledge Media Institute, The Open University Islah Ali-Maclachlan, Birmingham City University Rafael Caro Repetto, Universitat Pompeu Fabra Richard Chesser, British Library Tim Crawford, Goldsmiths College María Teresa Delgado-Sánchez, Biblioteca Nacional de España Jürgen Diet, Bavarian State Library J.
Stephen Downie, University of Illinois Tim Duguid, University of Glasgow Yun Fan, Répertoire International de Littérature Musicale Ben Fields, Goldsmiths University of London Ichiro Fujinaga, McGill University Axel-Teich Geertinger, Royal Danish Library Francesca Giannetti, Rutgers University Xiao Hu Hu, University of Hong Kong Charles Inskip, University College London José Manuel Iñesta, Universidad de Alicante Frauke Jürgensen, University of Aberdeen Audrey Laplante, EBSI, Université de Montréal Kjell Lemström, University of Helsinki David Lewis, University of Oxford Cynthia Liem, Delft University of Technology Alan Marsden, Lancaster University Joshua Neumann, University of Florida Alastair Porter, Universitat Pompeu Fabra Laurent Pugin, RISM Switzerland Andreas Rauber, Vienna University of Technology Amelie Roper, British Library Sertan Şentürk, Kobalt Music Group Marnix Vanberchum, Utrecht University Raffaele Viglianti, University of Maryland Tillman Weyde, City University Frans Wiering, Utrecht University -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 4442 bytes Desc: not available URL: From thomas.weber at notengrafik.com Mon Jun 17 13:24:22 2019 From: thomas.weber at notengrafik.com (Thomas Weber) Date: Mon, 17 Jun 2019 11:24:22 +0000 Subject: [MEI-L] Sibmei 2.2.0 released Message-ID: <4f03ebb9-e6bb-34c2-10eb-32f482007e0a@notengrafik.com> Hello all, a new release of the MEI export plugin for Sibelius – Sibmei – is now available: https://github.com/music-encoding/sibmei/releases/tag/v2.2.0 The features Andrew Hankinson and I added are figured bass, sections, @tstamp.ges output and arpeggios. Figured bass was kindly sponsored by the ÖAW – many thanks to Robert Klugseder. And special thanks to Anna Plaksin and Martha Thomae for fixing bugs. For more details, see the linked release notes.
Please don't hesitate to get in touch if you have any questions concerning Sibmei! Thomas -- Notengrafik Berlin GmbH HRB 150007 UstID: DE 289234097 Geschäftsführer: Thomas Weber und Werner J. Wolff fon: +49 30 25359505 Friedrichstraße 23a 10969 Berlin notengrafik.com From drizo at dlsi.ua.es Wed Jun 19 21:49:29 2019 From: drizo at dlsi.ua.es (David Rizo Valero) Date: Wed, 19 Jun 2019 21:49:29 +0200 Subject: [MEI-L] Deadline extended: DLfM2019 - Digital Libraries for Musicology | The Hague, The Netherlands | 9th November 2019 Message-ID: [with apologies for cross posting] 6th International Conference on Digital Libraries for Musicology (DLfM 2019) 9th November 2019 National Library of The Netherlands A satellite event of ISMIR 2019. *** We have extended the deadline for submission as follows: 29 June: Abstract submission deadline 6 July: Full paper submission deadline 18 July: TROMPA Project Challenge position papers Please note that full papers can only be submitted if an abstract has been uploaded by 29 June. *** https://dlfm.web.ox.ac.uk/ FINAL CALL FOR PAPERS Deadline: 29 June 2019 (abstracts); 6 July 2019 (full papers), 18 July 2019 (TROMPA Project position papers) Many Digital Libraries have long offered facilities to provide multimedia content, including music. However there is now an ever more urgent need to specifically support the distinct multiple forms of music, the links between them, and the surrounding scholarly context, as required by the transformed and extended methods being applied to musicology and the wider Digital Humanities. The Digital Libraries for Musicology (DLfM) conference presents a venue specifically for those working on, and with, Digital Library systems and content in the domain of music and musicology. 
This includes Music Digital Library systems, their application and use in musicology, technologies for enhanced access and organisation of musics in Digital Libraries, bibliographic and metadata for music, intersections with music Linked Data, and the challenges of working with the multiple representations of music across large-scale digital collections such as the Internet Archive and HathiTrust. This, the Sixth Digital Libraries for Musicology conference, follows previous workshops in London, Knoxville, New York, Shanghai, and Paris. In 2019, DLfM is again proud to be a satellite event of the annual International Society for Music Information Retrieval (ISMIR) conference which is being held in Delft, and in particular encourages reports on the use of MIR methods and technologies within Music Digital Library systems when applied to the pursuit of musicological research. SCOPE AND OBJECTIVES DLfM will focus on the implications of music for Digital Libraries and Digital Libraries research when pushing the boundaries of contemporary musicology, including the application of techniques as reported in more technologically-oriented fora such as ISMIR and ICMC. This will be the sixth edition of DLfM following very successful and well received previous workshops (in 2014, 2015, 2016, 2017, and 2018), giving an opportunity for the community to present and discuss recent developments that address the challenges of effectively combining technology with musicology through Digital Library systems and their application. 
The conference objectives are: to act as a forum for reporting, presenting, and evaluating this work and disseminating new approaches to advance the discipline; to create a venue for critically and constructively evaluating and verifying the operation of Music Digital Libraries and the applications and findings that flow from them; to consider the suitability of existing Music Digital Libraries, particularly in light of the transformative methods and applications emerging from musicology, large collections of both audio and music related data, ‘big data’ methods, and MIR; to explore how digital libraries and digital musicology can combine to offer richer online access to online music collections; to set the agenda for work in the field to address these new challenges and opportunities.

TOPICS

Topics of interest include, but are not limited to:

Building and managing digital music collections
- Optical Music Recognition
- Information literacies for Music Digital Libraries
- Data quality assessment

Access, interfaces and ergonomics
- Interfaces and access mechanisms for Music Digital Libraries
- Identification/location of music (in all forms) in generic Digital Libraries
- Techniques for locating and accessing music in Very Large Digital Libraries (e.g. HathiTrust, Internet Archive) and musical corpus-building at scale
- Mechanisms for combining multi-form music content within and between Digital Libraries and other digital resources
- User information needs and behaviour for Music Digital Libraries

Musicological Knowledge
- Music data representations, including manuscripts/scores and audio
- Applied MIR techniques in Music Digital Libraries and musicological investigations using them
- Extraction of musical concepts from symbolic notation and audio data
- Metadata and metadata schemas for music
- Application of Linked Data and Semantic Web techniques to Music Digital Libraries, and for their access and organisation
- Ontologies and categorisation of musics and music artefacts

Improving data for musicology
- Digital Libraries which enrich public access to music, music-cultural, and music-ephemera material online
- Digital Libraries in support of musicology and other scholarly study; novel requirements and methodologies therein
- Digital Libraries for combination of resources in support of musicology (e.g. combining audio, scores, bibliographic, geographic, ethnomusicology, performance, etc.)

SUBMISSIONS

Proceedings track

We invite full papers (up to 8 pages excluding references) or short and position papers (up to 4 pages excluding references). In addition to the general submission requirements below, we will require that camera-ready copy be received before 21st September 2019, and that at least one author per accepted paper is registered for DLfM by that date. All papers will be peer reviewed by 2-3 members of the programme committee. Please submit an abstract to EasyChair by 29 June 2019, and produce your paper using the ACM template and submit it to DLfM on EasyChair by 6 July 2019. All submitted papers must: be written in English; contain author names, affiliations and e-mail addresses; be in PDF format (please ensure that the PDF can be viewed on any platform), and formatted for A4 size.
Page limits for submitted papers apply to all text, but exclude the bibliography (i.e. references can be included on pages over the specified limits). It is the authors’ responsibility to ensure that their submissions adhere strictly to the required format. Submissions that do not comply with the above requirements may be rejected without review. Please note that at least one author of each accepted paper must attend the conference to present their work. Submissions: https://easychair.org/conferences/?conf=dlfm2019 Contact email: dlfm2019 at easychair.org ACM template (both Word and LaTeX): https://www.acm.org/publications/taps/word-template-workflow Questions regarding the ACM manuscript templates MUST be directed to the ACM TeX support team at Aptara directly at acmtexsupport at aptaracorp.com. TROMPA Project Challenge Diverse public domain collections exposing materials of scholarly musicological interest are published on the Web. How will scholars benefit from the interlinking of such repositories? What research questions will be supported by unified access to collections of digitised score images, score encodings, textual and audio-visual materials, and other multimodal data sources? What kinds of holistic interpretive and analytical insights can scholars contribute to enrich such interconnected repositories, and how can they be supported in doing so? The TROMPA Project Challenge solicits short position papers addressing these questions as submissions of up to 2 pages to DLfM. TROMPA Project Challenge papers will be peer reviewed, and accepted papers will be presented at the conference either as part of a panel or as a poster. Challenge papers will not be included in the main DLfM proceedings, but will be compiled into a supplement hosted on the conference website.
While we encourage authors to engage with DLfM through the TROMPA Project Challenge track, those who wish their papers to appear in the main proceedings may prefer to submit a more detailed description of their work to the Proceedings Track as a short or long paper (see above). TROMPA (trompamusic.eu) is an EU-funded project (2018-2021) dedicated to massively enriching and democratising the heritage of classical music, and involving content owners, scholars, performers, choral singers and music enthusiasts of every kind. The project employs and improves state-of-the-art technology, engaging thousands of music-loving citizens to work with the technology, give feedback on algorithmic results, and annotate the data according to their personal expertise. IMPORTANT DATES Abstract submission deadline: 29 June 2019 (23:59 UTC-11) Paper submission deadline: 6 July 2019 (23:59 UTC-11) TROMPA Project Challenge position papers: 18 July 2019 Notification of acceptance: 17th August 2019 Camera-ready submission deadline: 21st September 2019 Conference: 9th November 2019 DLfM proceedings will be included in the ACM Digital Library through the ICPS series. CONFERENCE ORGANIZATION Programme Chair David Rizo, Universidad de Alicante. Instituto Superior de Enseñanzas Artísticas de la Comunidad Valenciana General Chair Kevin Page, University of Oxford Local Chair Lotte Wilms, KB National Library of the Netherlands Publicity and proceedings Chair Jorge Calvo-Zaragoza, Universidad de Alicante Programme Committee (in progress) Alessandro Adamou, Knowledge Media Institute, The Open University Islah Ali-Maclachlan, Birmingham City University Rafael Caro Repetto, Universitat Pompeu Fabra Richard Chesser, British Library Tim Crawford, Goldsmiths College María Teresa Delgado-Sánchez, Biblioteca Nacional de España Jürgen Diet, Bavarian State Library J.
Stephen Downie, University of Illinois Tim Duguid, University of Glasgow Yun Fan, Répertoire International de Littérature Musicale Ben Fields, Goldsmiths University of London Ichiro Fujinaga, McGill University Axel-Teich Geertinger, Royal Danish Library Francesca Giannetti, Rutgers University Xiao Hu Hu, University of Hong Kong Charles Inskip, University College London José Manuel Iñesta, Universidad de Alicante Frauke Jürgensen, University of Aberdeen Audrey Laplante, EBSI, Université de Montréal Kjell Lemström, University of Helsinki David Lewis, University of Oxford Cynthia Liem, Delft University of Technology Alan Marsden, Lancaster University Joshua Neumann, University of Florida Alastair Porter, Universitat Pompeu Fabra Laurent Pugin, RISM Switzerland Andreas Rauber, Vienna University of Technology Amelie Roper, British Library Sertan Şentürk, Kobalt Music Group Marnix Vanberchum, Utrecht University Raffaele Viglianti, University of Maryland Tillman Weyde, City University Frans Wiering, Utrecht University -------------- next part -------------- An HTML attachment was scrubbed... URL: From mkirsch at musik.uni-kiel.de Thu Jun 20 15:55:41 2019 From: mkirsch at musik.uni-kiel.de (Matthias Kirsch) Date: Thu, 20 Jun 2019 15:55:41 +0200 Subject: [MEI-L] slurs with XML:id Message-ID: <9ad4d38521af611fd1530012a65a33a2@musik.uni-kiel.de> Dear all, Is there any basic information out there in the web concerning the use of xml:id for slurs? Something like a tutorial? I'm using verovio for rendering and I don't know anything about this yet. Thanks for help, Best wishes, Matthias -------------- next part -------------- An HTML attachment was scrubbed... 
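The "stand-off" slur markup discussed in the replies that follow attaches a slur to notes by pointing @startid/@endid at the notes' xml:id values. As a minimal illustration (not code from this thread; the ids "n1"/"n2" and the pitches are invented), such markup could be generated with Python's xml.etree:

```python
# Illustrative sketch (invented example, not from the thread): building
# MEI-style stand-off slur markup, where the slur references note
# xml:id values via @startid/@endid rather than wrapping the notes.
import xml.etree.ElementTree as ET

XML_NS = "http://www.w3.org/XML/1998/namespace"  # namespace of xml:id

measure = ET.Element("measure")
layer = ET.SubElement(measure, "layer")
for note_id, pname in [("n1", "c"), ("n2", "e")]:
    # Each note gets a unique xml:id so other elements can point at it.
    ET.SubElement(layer, "note",
                  {f"{{{XML_NS}}}id": note_id, "pname": pname,
                   "oct": "4", "dur": "4"})

# The slur sits alongside the notes and is linked purely by reference:
ET.SubElement(measure, "slur", startid="#n1", endid="#n2")

encoded = ET.tostring(measure, encoding="unicode")
print(encoded)
```

Renderers that resolve @startid/@endid (Verovio among them) only need the referenced xml:id values to exist and be unique, with a leading "#" on the reference.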
URL: From andrew.hankinson at bodleian.ox.ac.uk Thu Jun 20 16:15:50 2019 From: andrew.hankinson at bodleian.ox.ac.uk (Andrew Hankinson) Date: Thu, 20 Jun 2019 14:15:50 +0000 Subject: [MEI-L] slurs with XML:id In-Reply-To: <9ad4d38521af611fd1530012a65a33a2@musik.uni-kiel.de> References: <9ad4d38521af611fd1530012a65a33a2@musik.uni-kiel.de> Message-ID: <0EB4209D-68DB-4029-9E7F-F996D54D2D0F@bodleian.ox.ac.uk> Hi Matthias, You can find our tutorials at: https://music-encoding.org/resources/tutorials.html What would you like to know about using xml:id for slurs? Are you interested in knowing how to use them using "stand-off" markup? Something like: <slur startid="#a" endid="#b"/> -Andrew > On 20 Jun 2019, at 14:55, Matthias Kirsch wrote: > > Dear all, > > Is there any basic information out there in the web concerning the use of xml:id for slurs? Something like a tutorial? I'm using verovio for rendering and I don't know anything about this yet. > > Thanks for help, > > Best wishes, Matthias > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l From mkirsch at musik.uni-kiel.de Thu Jun 20 19:43:57 2019 From: mkirsch at musik.uni-kiel.de (Matthias Kirsch) Date: Thu, 20 Jun 2019 19:43:57 +0200 Subject: [MEI-L] slurs with XML:id In-Reply-To: <0EB4209D-68DB-4029-9E7F-F996D54D2D0F@bodleian.ox.ac.uk> References: <9ad4d38521af611fd1530012a65a33a2@musik.uni-kiel.de> <0EB4209D-68DB-4029-9E7F-F996D54D2D0F@bodleian.ox.ac.uk> Message-ID: <268d980e10d8ec084f7450dee819a4a4@musik.uni-kiel.de> Hi Andrew, I'm not really sure, but it might be my special Problem: I would like to encode a number of slurs in a dozen or so pieces of a Tutor-Collection for the Oboe (The sprightly companion, London 1695; most of the work is already done). The 'tstamp-solution' doesn't work when rendering these pieces with Verovio, so I need an alternative solution. How could I create the correct values for "a" and "b"?
Is there any other possibility for rendering my Markups? Many thanks, Matthias Am 2019-06-20 16:15, schrieb Andrew Hankinson: > Hi Matthias, > > You can find our tutorials at: https://music-encoding.org/resources/tutorials.html > > What would you like to know about using xml:id for slurs? Are you interested in knowing how to use them using "stand-off" markup? Something like: > > > > > > > -Andrew > >> On 20 Jun 2019, at 14:55, Matthias Kirsch wrote: >> >> Dear all, >> >> Is there any basic information out there in the web concerning the use of xml:id for slurs? Something like a tutorial? I'm using verovio for rendering and I don't know anything about this yet. >> >> Thanks for help, >> >> Best wishes, Matthias >> >> _______________________________________________ >> mei-l mailing list >> mei-l at lists.uni-paderborn.de >> https://lists.uni-paderborn.de/mailman/listinfo/mei-l > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l -------------- next part -------------- An HTML attachment was scrubbed... URL: From kepper at edirom.de Thu Jun 20 20:39:07 2019 From: kepper at edirom.de (Johannes Kepper) Date: Thu, 20 Jun 2019 20:39:07 +0200 Subject: [MEI-L] slurs with XML:id In-Reply-To: <268d980e10d8ec084f7450dee819a4a4@musik.uni-kiel.de> References: <9ad4d38521af611fd1530012a65a33a2@musik.uni-kiel.de> <0EB4209D-68DB-4029-9E7F-F996D54D2D0F@bodleian.ox.ac.uk> <268d980e10d8ec084f7450dee819a4a4@musik.uni-kiel.de> Message-ID: Hi Matthias, it would be best if you could share an encoding, ideally with a scan of what you're trying to capture. Information about your workflow would be helpful as well. If for some reason you may not share that on this list, I'm happy to answer private messages or have a Skype call about this. All best, Johannes Dr. 
Johannes Kepper Wissenschaftlicher Mitarbeiter Beethovens Werkstatt: Genetische Textkritik und Digitale Musikedition Musikwiss. Seminar Detmold / Paderborn | Hornsche Straße 39 | D-32756 Detmold kepper at beethovens-werkstatt.de | +49 (0) 5231 / 975669 www.beethovens-werkstatt.de Forschungsprojekt gefördert durch die Akademie der Wissenschaften und der Literatur | Mainz > Am 20.06.2019 um 19:43 schrieb Matthias Kirsch : > > Hi Andrew, > > I'm not really sure, but it might be my special Problem: I would like to encode a number of slurs in a dozen or so pieces of a Tutor-Collection for the Oboe (The sprightly companion, London 1695; most of the work is already done). The 'tstamp-solution' doesn't work when rendering these pieces with Verovio, so I need an alternative solution. How could I create the correct values for "a" and "b"? Is there any other possibility for rendering my Markups? > > Many thanks, Matthias > > > > > Am 2019-06-20 16:15, schrieb Andrew Hankinson: > >> Hi Matthias, >> >> You can find our tutorials at: https://music-encoding.org/resources/tutorials.html >> >> What would you like to know about using xml:id for slurs? Are you interested in knowing how to use them using "stand-off" markup? Something like: >> >> >> >> >> >> >> -Andrew >> >>> On 20 Jun 2019, at 14:55, Matthias Kirsch wrote: >>> >>> Dear all, >>> >>> Is there any basic information out there in the web concerning the use of xml:id for slurs? Something like a tutorial? I'm using verovio for rendering and I don't know anything about this yet.
>>> >>> Thanks for help, >>> >>> Best wishes, Matthias >>> >>> _______________________________________________ >>> mei-l mailing list >>> mei-l at lists.uni-paderborn.de >>> https://lists.uni-paderborn.de/mailman/listinfo/mei-l >> >> >> _______________________________________________ >> mei-l mailing list >> mei-l at lists.uni-paderborn.de >> https://lists.uni-paderborn.de/mailman/listinfo/mei-l > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l From drizo at dlsi.ua.es Mon Jun 24 19:17:53 2019 From: drizo at dlsi.ua.es (David Rizo Valero) Date: Mon, 24 Jun 2019 19:17:53 +0200 Subject: [MEI-L] Call for participation to the 2nd International Workshop on Music Reading Systems (WoRMS) Message-ID: <7E9FCB98-41BE-4A04-A1EE-8B0AE2867C0C@dlsi.ua.es> Dear colleagues, It is our pleasure to announce the 2nd International Workshop on Music Reading Systems (WoRMS). It will take place on Saturday, the 2nd of November 2019, at the Delft University of Technology, as a satellite event to ISMIR 2019. WoRMS is a new workshop that tries to connect researchers who develop music reading systems — especially from the field of optical music recognition, but also related topics such as score following, score searching, or information retrieval from written music — with researchers and practitioners that could benefit from such systems, like librarians or musicologists. WoRMS will be organized as a half-day workshop and provides a good opportunity to share ideas, discuss current developments and shape the future of music reading systems. We would like to engage diverse points of view by explicitly inviting contributors without a technical background to participate as well.
We strive to make the workshop as interactive as possible, with participants getting the opportunity not just to present their work, but to discuss current research in depth and foster relationships within the community. Therefore, promising ideas, work-in-progress submissions and recently submitted or published works are equally welcome. The topics of interest for the workshop include, but are not limited to: Music reading systems Optical music recognition Datasets and performance evaluation Image processing on music scores Writer identification Authoring, editing, storing and presentation systems for music scores Multi-modal systems Novel input-methods for music to produce written music Web-based Music Information Retrieval services Applications and projects Use-cases related to written music Important dates: Submission Deadline Sep 13, 2019 Notification Due Sep 27, 2019 Workshop Nov 2, 2019 Please check the website https://sites.google.com/view/worms2019 for further information. Feel free to forward this e-mail to anyone who might be interested. Best regards, Jorge Calvo-Zaragoza Alexander Pacha Heinz Roggenkemper -------------- next part -------------- An HTML attachment was scrubbed... URL: From thomas.weber at notengrafik.com Wed Jun 26 09:58:00 2019 From: thomas.weber at notengrafik.com (Thomas Weber) Date: Wed, 26 Jun 2019 07:58:00 +0000 Subject: [MEI-L] Sibmei 2.2.1 bugfix release In-Reply-To: <4f03ebb9-e6bb-34c2-10eb-32f482007e0a@notengrafik.com> References: <4f03ebb9-e6bb-34c2-10eb-32f482007e0a@notengrafik.com> Message-ID: We had to fix a bug in Sibmei v2.2.0 that made the export crash for lines and slurs across barlines. If you already installed Sibmei 2.2.0, please update to 2.2.1: https://github.com/music-encoding/sibmei/releases/tag/v2.2.1 Sorry we didn't spot that before the release! 
Thomas On 17.06.19 at 12:54, Thomas Weber wrote: Hello all, a new release of the MEI export plugin for Sibelius – Sibmei – is now available: https://github.com/music-encoding/sibmei/releases/tag/v2.2.0 The features Andrew Hankinson and I added are figured bass, sections, @tstamp.ges output and arpeggios. Figured bass was kindly sponsored by the ÖAW – many thanks to Robert Klugseder. And special thanks to Anna Plaksin and Martha Thomae for fixing bugs. For more details, see the linked release notes. Please don't hesitate to get in touch if you have any questions concerning Sibmei! Thomas -- Notengrafik Berlin GmbH HRB 150007 UstID: DE 289234097 Managing directors: Thomas Weber and Werner J. Wolff fon: +49 30 25359505 Friedrichstraße 23a 10969 Berlin notengrafik.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From beate.kutschke at gmx.de Mon Jul 1 13:22:52 2019 From: beate.kutschke at gmx.de (Beate Kutschke) Date: Mon, 1 Jul 2019 13:22:52 +0200 Subject: [MEI-L] CfP In-Reply-To: References: Message-ID: <762952dc-038f-505c-954a-61e1d92c1c78@gmx.de> Dear MEI-L, Apologies for cross-posting. Please feel free to forward this CfP to interested parties: Call for Papers and Poster Presentations Paris Lodron University Salzburg, 3 to 4 April 2020 Deadline for submission of abstracts: 15 September 2019 Life-World and Musical Form – Concepts, Models, and Analogies “It is by no means certain what form in music is, and any attempt to formulate rules would provoke nothing but derision”. Despite Dahlhaus’ habitually pessimistic insight, music scholars and musicians have developed manifold concepts of form that were usually applied to more than one musical work. In doing so, they were influenced by life-world [lebensweltlich] concepts, models and analogies: in the musical rhetorical tradition, Mattheson understood musical form as the sequence of sentences (principal and subordinate clauses). 
Marx established an architectural model encoding the individual modules with letters. Around the turn of the 20th century, Schenker and Kurth implicitly drew on evolutionary theory and theories from the field of thermodynamics for their models of musical form. In the late 20th century, after the scholarly community had come to terms with the hyper-individuality of contemporary and especially avant-garde music, Caplin initiated a new trend in musical-form analysis, which shifted the priority from the composition’s wholeness to its elements. While his approach was functional and taxonomic, Hepokoski and Darcy applied the established dichotomy between ‘general/normative’ and ‘particular/deviant/innovative’ to musical form. Most recently, Greenberg, Diergarten and Neuwirth described the form of the classical era as an effect of the type case or toy block principle, according to which the composers combined modules more or less freely. In sum, the history of music theory points to the constitutive role that life-world experiences, visualizations and metaphors have played in the development of diverse concepts of musical form. This workshop aims to better understand musical form in light of current theories and models by focusing on two aspects: 1. It will reconstruct the diverse life-world models, tropes and theories that have stimulated music theorists and musicians in the past twenty years. 2. It will bring together scholars who have recently developed new approaches to musical form and would like to discuss the models, tropes and theories that inspired them. We invite papers and poster presentations of approximately 20 minutes, especially by young scholars and/or from the ‘digital field’. Please send abstracts of 250 words in English to beateruth.kutschke at sbg.ac.at. 
For updates: http://historiography-of-musical-form-through-mir.sbg.ac.at/cfp/ From reinierdevalk at gmail.com Mon Jul 1 18:57:47 2019 From: reinierdevalk at gmail.com (Reinier de Valk) Date: Mon, 1 Jul 2019 17:57:47 +0100 Subject: [MEI-L] Forcing bars on a single system Message-ID: Hi all, There is probably a simple answer to this, but I could not find it in the documentation. Is there a way to force a certain number of bars onto a single system? I am rendering my MEI using the online Verovio viewer, and the individual bars are spaced very widely. Thanks, Reinier -------------- next part -------------- An HTML attachment was scrubbed... URL: From klaus.rettinghaus at gmail.com Mon Jul 1 19:05:04 2019 From: klaus.rettinghaus at gmail.com (Klaus Rettinghaus) Date: Mon, 1 Jul 2019 19:05:04 +0200 Subject: [MEI-L] Forcing bars on a single system In-Reply-To: References: Message-ID: Hi Reinier, this is not an MEI question, but a Verovio question. You probably want to check the options there: https://www.verovio.org/command-line.xhtml Klaus > On 01.07.2019 at 18:57, Reinier de Valk wrote: > > Hi all, > > There is probably a simple answer to this, but I could not find it in the documentation. > > Is there a way to force a certain number of bars onto a single system? I am rendering my MEI using the online Verovio viewer, and the individual bars are spaced very widely. > > Thanks, > Reinier > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l -------------- next part -------------- An HTML attachment was scrubbed... URL: From reinierdevalk at gmail.com Mon Jul 1 19:25:01 2019 From: reinierdevalk at gmail.com (Reinier de Valk) Date: Mon, 1 Jul 2019 18:25:01 +0100 Subject: [MEI-L] Forcing bars on a single system In-Reply-To: References: Message-ID: Hi Klaus, Thanks for your reply. 
I am using the online Verovio viewer, and when I click the Options button I see that certain page layout options (such as Breaks and Page width) are made unavailable. The command line tool does not work on the computer I have with me. It seems strange that this cannot be specified in the MEI itself - after all, you can specify system breaks. Best wishes, Reinier On Mon 1 Jul 2019 at 18:10, Klaus Rettinghaus < klaus.rettinghaus at gmail.com> wrote: > Hi Reinier, > > this is not an MEI question, but a Verovio question. > You probably want to check the options there: > https://www.verovio.org/command-line.xhtml > > Klaus > > On 01.07.2019 at 18:57, Reinier de Valk wrote: > > Hi all, > > There is probably a simple answer to this, but I could not find it in the > documentation. > > Is there a way to force a certain number of bars onto a single system? I > am rendering my MEI using the online Verovio viewer, and the individual > bars are spaced very widely. > > Thanks, > Reinier > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l > -------------- next part -------------- An HTML attachment was scrubbed... URL: From craigsapp at gmail.com Mon Jul 1 21:52:16 2019 From: craigsapp at gmail.com (Craig Sapp) Date: Mon, 1 Jul 2019 21:52:16 +0200 Subject: [MEI-L] Forcing bars on a single system In-Reply-To: References: Message-ID: Hi Reinier, MEI encodes system breaks with <sb/> elements between measures. So if you want four measures on every system you would place an <sb/> after every four measures. In verovio you would need to set the "--breaks encoded" option on the command line, or {"breaks": "encoded"} in the JavaScript verovio toolkit (and perhaps the verovio.org MEI viewer does not allow for this setting). 
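This recipe can also be sketched programmatically. Here is a minimal illustration (not from the original thread; the helper name and the demonstration data are invented) that prepends a <pb/> to a flat, single-section MEI fragment and inserts an <sb/> after every N measures, using only Python's standard library:

```python
import xml.etree.ElementTree as ET

MEI_NS = "http://www.music-encoding.org/ns/mei"

def insert_breaks(section, measures_per_system=4):
    """Prepend a <pb/> and insert an <sb/> after every
    `measures_per_system` measures of a flat <section>."""
    children = list(section)
    for child in children:  # detach everything so we can rebuild in order
        section.remove(child)
    section.append(ET.Element(f"{{{MEI_NS}}}pb"))  # initial page break
    count = 0
    for child in children:
        section.append(child)
        if child.tag == f"{{{MEI_NS}}}measure":
            count += 1
            if count % measures_per_system == 0:
                section.append(ET.Element(f"{{{MEI_NS}}}sb"))
    return section

# Demonstration on an invented six-measure section:
xml = (
    f'<section xmlns="{MEI_NS}">'
    + "".join(f'<measure n="{i}"/>' for i in range(1, 7))
    + "</section>"
)
section = ET.fromstring(xml)
insert_breaks(section, measures_per_system=2)
tags = [child.tag.split("}")[1] for child in section]
print(tags)
```

Rendering the resulting file with the encoded-breaks option would then honor these break points instead of letting verovio choose them.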
If you do not explicitly tell verovio to pay attention to the system breaks, then it will ignore them, since the default parameter is "auto", which lets verovio decide where to break systems. A complication, related to how TEI does things, is that in order for the <sb/> elements to be recognized in verovio, you must first have a <pb/> (page break) at the start of a
<section> element before the first <sb/>. If you do not, then verovio complains about this (cryptically) and then ignores the <sb/> elements. If you want the barlines evenly spaced regardless of the rhythmic content of the measure, then set the --spacing-non-linear parameter (see the second example below for this). Here is an example, where I make the first system have one measure, then the second system has two measures, then the third system has three measures and then the fourth system has four measures: [image: Screen Shot 2019-07-01 at 9.25.01 PM.png] <mei xmlns="http://www.music-encoding.org/ns/mei" meiversion="4.0.0"> <meiHead> <fileDesc> <titleStmt> <title /> </titleStmt> <pubStmt /> </fileDesc> <encodingDesc> <appInfo> <application isodate="2019-07-01T21:06:28" version="2.2.0-dev-7ac2fe9"> <name>Verovio</name> <p>Transcoded from Humdrum</p> </application> </appInfo> </encodingDesc> <workList> <work> <title /> </work> </workList> </meiHead> <music> <body> <mdiv> <score> <scoreDef> <staffGrp> <staffDef clef.shape="G" clef.line="2" meter.count="4" meter.unit="4" n="1" lines="5"/> </staffGrp> </scoreDef> <section> <pb/> <measure n="1"> <staff n="1"> <layer n="1"> <note dur="1" oct="4" pname="c" /> </layer> </staff> </measure> <sb/> <measure n="2"> <staff n="1"> <layer n="1"> <note dur="1" oct="4" pname="d" /> </layer> </staff> </measure> <measure n="3"> <staff n="1"> <layer n="1"> <note dur="2" oct="4" pname="e" /> <note dur="2" oct="4" pname="e" /> </layer> </staff> </measure> <sb/> <measure n="4"> <staff n="1"> <layer n="1"> <note dur="1" oct="4" pname="f" /> </layer> </staff> </measure> <measure n="5"> <staff n="1"> <layer n="1"> <note dur="1" oct="4" pname="g" /> </layer> </staff> </measure> <measure n="6"> <staff n="1"> <layer n="1"> <note dur="2" oct="4" pname="a" /> <note dur="2" oct="4" pname="a" /> </layer> </staff> </measure> <sb/> <measure n="7"> <staff n="1"> <layer n="1"> <note dur="1" oct="4" pname="b" /> </layer> </staff> </measure> <measure n="8"> <staff n="1"> <layer n="1"> <note dur="1" oct="4" pname="c" /> </layer> </staff> </measure> <measure n="9"> <staff n="1"> <layer 
n="1"> <note dur="2" oct="4" pname="d" /> <note dur="2" oct="4" pname="d" /> </layer> </staff> </measure> <measure n="10"> <staff n="1"> <layer n="1"> <note dur="1" oct="4" pname="e" /> </layer> </staff> </measure> </section> </score> </mdiv> </body> </music> </mei> It seems like you asked this question last year :-). There is another way to force both a fixed number of measures per line and an equal width for each measure: set the verovio option --spacing-non-linear to 1. This removes the compression of widths between different rhythmic levels; in other words, with a factor of 1, the width of two half notes equals the width of one whole note, the width of two quarter notes equals the width of one half note, and so on. Here is a second example using the command line options: verovio test.krn --spacing-non-linear 1 --spacing-linear 0.04 The spacing-linear option is set to 0.04 to force four measures per line (otherwise, you can change the page-width parameter instead of, or in addition to, the spacing-linear factor to force a particular number of measures per line). [image: Screen Shot 2019-07-01 at 9.46.48 PM.png] If the image gets through the mail system, notice that measure 5 is the same width as measure 9 and as measure 10. There is a little disturbance in the exactly equal widths on the first system due to the time signature being present and squeezing the music a bit to make room for it. 
Test MEI data: <?xml version="1.0" encoding="UTF-8"?> <?xml-model href="http://music-encoding.org/schema/4.0.0/mei-all.rng" type="application/xml" schematypens="http://relaxng.org/ns/structure/1.0"?> <?xml-model href="http://music-encoding.org/schema/4.0.0/mei-all.rng" type="application/xml" schematypens="http://purl.oclc.org/dsdl/schematron"?> <mei xmlns="http://www.music-encoding.org/ns/mei" meiversion="4.0.0"> <meiHead> <fileDesc> <titleStmt> <title /> </titleStmt> <pubStmt /> </fileDesc> <encodingDesc> <appInfo> <application isodate="2019-07-01T21:48:46" version="2.2.0-dev-7ac2fe9"> <name>Verovio</name> <p>Transcoded from Humdrum</p> </application> </appInfo> </encodingDesc> <workList> <work> <title /> </work> </workList> </meiHead> <music> <body> <mdiv xml:id="mdiv-0000001601484476"> <score xml:id="score-0000001737040281"> <scoreDef xml:id="scoredef-0000001751737657" midi.bpm="400"> <staffGrp xml:id="staffgrp-0000001453242670"> <staffDef xml:id="staffdef-0000001027192877" clef.shape="G" clef.line="2" meter.count="4" meter.unit="4" n="1" lines="5"> <label xml:id="label-0000000791880593" /> </staffDef> </staffGrp> </scoreDef> <section xml:id="section-L1F1"> <measure xml:id="measure-L3" n="1"> <staff xml:id="staff-L3F1N1" n="1"> <layer xml:id="layer-L3F1N1" n="1"> <note xml:id="note-L4F1" dur="1" oct="4" pname="c" accid.ges="n" /> </layer> </staff> </measure> <measure xml:id="measure-L5" n="2"> <staff xml:id="staff-L5F1N1" n="1"> <layer xml:id="layer-L5F1N1" n="1"> <note xml:id="note-L6F1" dur="1" oct="4" pname="d" accid.ges="n" /> </layer> </staff> </measure> <measure xml:id="measure-L7" n="3"> <staff xml:id="staff-L7F1N1" n="1"> <layer xml:id="layer-L7F1N1" n="1"> <note xml:id="note-L8F1" dur="2" oct="4" pname="e" accid.ges="n" /> <note xml:id="note-L9F1" dur="2" oct="4" pname="e" accid.ges="n" /> </layer> </staff> </measure> <measure xml:id="measure-L10" n="4"> <staff xml:id="staff-L10F1N1" n="1"> <layer xml:id="layer-L10F1N1" n="1"> <note xml:id="note-L11F1" 
dur="1" oct="4" pname="f" accid.ges="n" /> </layer> </staff> </measure> <measure xml:id="measure-L12" n="5"> <staff xml:id="staff-L12F1N1" n="1"> <layer xml:id="layer-L12F1N1" n="1"> <beam xml:id="beam-L13F1-L16F1"> <note xml:id="note-L13F1" dur="8" oct="4" pname="g" accid.ges="n" /> <note xml:id="note-L14F1" dur="8" oct="4" pname="g" accid.ges="n" /> <note xml:id="note-L15F1" dur="8" oct="4" pname="g" accid.ges="n" /> <note xml:id="note-L16F1" dur="8" oct="4" pname="g" accid.ges="n" /> </beam> <beam xml:id="beam-L17F1-L20F1"> <note xml:id="note-L17F1" dur="8" oct="4" pname="g" accid.ges="n" /> <note xml:id="note-L18F1" dur="8" oct="4" pname="g" accid.ges="n" /> <note xml:id="note-L19F1" dur="8" oct="4" pname="g" accid.ges="n" /> <note xml:id="note-L20F1" dur="8" oct="4" pname="g" accid.ges="n" /> </beam> </layer> </staff> </measure> <measure xml:id="measure-L21" n="6"> <staff xml:id="staff-L21F1N1" n="1"> <layer xml:id="layer-L21F1N1" n="1"> <note xml:id="note-L22F1" dur="2" oct="4" pname="a" accid.ges="n" /> <note xml:id="note-L23F1" dur="2" oct="4" pname="a" accid.ges="n" /> </layer> </staff> </measure> <measure xml:id="measure-L24" n="7"> <staff xml:id="staff-L24F1N1" n="1"> <layer xml:id="layer-L24F1N1" n="1"> <note xml:id="note-L25F1" dur="1" oct="4" pname="b" accid.ges="n" /> </layer> </staff> </measure> <measure xml:id="measure-L26" n="8"> <staff xml:id="staff-L26F1N1" n="1"> <layer xml:id="layer-L26F1N1" n="1"> <note xml:id="note-L27F1" dur="1" oct="4" pname="c" accid.ges="n" /> </layer> </staff> </measure> <measure xml:id="measure-L28" n="9"> <staff xml:id="staff-L28F1N1" n="1"> <layer xml:id="layer-L28F1N1" n="1"> <note xml:id="note-L29F1" dur="2" oct="4" pname="d" accid.ges="n" /> <note xml:id="note-L30F1" dur="2" oct="4" pname="d" accid.ges="n" /> </layer> </staff> </measure> <measure xml:id="measure-L31" n="10"> <staff xml:id="staff-L31F1N1" n="1"> <layer xml:id="layer-L31F1N1" n="1"> <note xml:id="note-L32F1" dur="1" oct="4" pname="e" accid.ges="n" 
/> </layer> </staff> </measure> <measure xml:id="measure-L33" n="11"> <staff xml:id="staff-L33F1N1" n="1"> <layer xml:id="layer-L33F1N1" n="1"> <note xml:id="note-L34F1" dur="1" oct="4" pname="f" accid.ges="n" /> </layer> </staff> </measure> <measure xml:id="measure-L35" right="end" n="12"> <staff xml:id="staff-L35F1N1" n="1"> <layer xml:id="layer-L35F1N1" n="1"> <note xml:id="note-L36F1" dur="2" oct="4" pname="g" accid.ges="n" /> <note xml:id="note-L37F1" dur="2" oct="4" pname="g" accid.ges="n" /> </layer> </staff> </measure> </section> </score> </mdiv> </body> </music> </mei> -=+Craig On Mon, 1 Jul 2019 at 19:25, Reinier de Valk <reinierdevalk at gmail.com> wrote: > Hi Klaus, > > Thanks for your reply. I am using the online Verovio viewer, and when I > click the Options button I see that certain page layout options (such as > Breaks and Page width) are made unavailable. The command line tool does not > work on the computer I have with me. > > It seems strange that this cannot be specified in the MEI itself - after > all, you can specify system breaks. > > Best wishes, > Reinier > > Op ma 1 jul. 2019 om 18:10 schreef Klaus Rettinghaus < > klaus.rettinghaus at gmail.com>: > >> Hi Reinier, >> >> this is not a MEI question, but a Verovio question. >> You probably want to check the options there: >> https://www.verovio.org/command-line.xhtml >> >> Klaus >> >> Am 01.07.2019 um 18:57 schrieb Reinier de Valk <reinierdevalk at gmail.com>: >> >> Hi all, >> >> There is probably a simple answer tothis, but I could not find it in the >> documentation. >> >> Is there a way to force a certain amount of bars onto a single system? I >> am rendering my MEI using the online Verovio viewer, and the individual >> bars are spaced very widely. 
>> >> Thanks, >> Reinier >> _______________________________________________ >> mei-l mailing list >> mei-l at lists.uni-paderborn.de >> https://lists.uni-paderborn.de/mailman/listinfo/mei-l >> -------------- next part -------------- An HTML attachment was scrubbed... URL: <http://lists.uni-paderborn.de/pipermail/mei-l/attachments/20190701/488c6bde/attachment.html> -------------- next part -------------- A non-text attachment was scrubbed... Name: Screen Shot 2019-07-01 at 9.25.01 PM.png Type: image/png Size: 32261 bytes Desc: not available URL: <http://lists.uni-paderborn.de/pipermail/mei-l/attachments/20190701/488c6bde/attachment.png> -------------- next part -------------- A non-text attachment was scrubbed... Name: Screen Shot 2019-07-01 at 9.46.48 PM.png Type: image/png Size: 42460 bytes Desc: not available URL: <http://lists.uni-paderborn.de/pipermail/mei-l/attachments/20190701/488c6bde/attachment-0001.png> From mcundiff at loc.gov Wed Jul 24 18:24:59 2019 From: mcundiff at loc.gov (Cundiff, Morgan) Date: Wed, 24 Jul 2019 16:24:59 +0000 Subject: [MEI-L] Music Encoding Conference 2020 details Message-ID: <b0d9e3509f574a299506b282e98f87be@LCXEX04.LCDS.LOC.GOV> Hello all: What are the 2020 conference details - dates, location, host institution, contact info, etc.? I need to put in a travel request now and could not find anything on the website. I know it's in Boston... 
Thanks, Morgan From andrew.hankinson at bodleian.ox.ac.uk Thu Jul 25 15:52:38 2019 From: andrew.hankinson at bodleian.ox.ac.uk (Andrew Hankinson) Date: Thu, 25 Jul 2019 13:52:38 +0000 Subject: [MEI-L] Music Encoding Conference 2020 details In-Reply-To: <b0d9e3509f574a299506b282e98f87be@LCXEX04.LCDS.LOC.GOV> References: <b0d9e3509f574a299506b282e98f87be@LCXEX04.LCDS.LOC.GOV> Message-ID: <A4EA2F3D-0079-4556-BBE2-0E16C8133104@bodleian.ox.ac.uk> Hi Morgan, The dates are May 26-29, 2020. There are still a few details we are working out about hosting institution and contact info, but this will come in due course. Cheers, -Andrew > On 24 Jul 2019, at 17:24, Cundiff, Morgan <mcundiff at loc.gov> wrote: > > Hello all: > > What are 2020 conference details - dates, location, host institution, contact info, etc.? > > I need to put in travel request now and could not find anything on the website. > > I know it's in Boston... > > Thanks, > Morgan > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l From mcundiff at loc.gov Thu Jul 25 15:56:03 2019 From: mcundiff at loc.gov (Cundiff, Morgan) Date: Thu, 25 Jul 2019 13:56:03 +0000 Subject: [MEI-L] Music Encoding Conference 2020 details In-Reply-To: <A4EA2F3D-0079-4556-BBE2-0E16C8133104@bodleian.ox.ac.uk> References: <b0d9e3509f574a299506b282e98f87be@LCXEX04.LCDS.LOC.GOV> <A4EA2F3D-0079-4556-BBE2-0E16C8133104@bodleian.ox.ac.uk> Message-ID: <1f6daf1a030040eeb12446e144b5f00a@LCXEX04.LCDS.LOC.GOV> Hey Andrew, Thanks, that will get me started. Morgan -----Original Message----- From: mei-l <mei-l-bounces at lists.uni-paderborn.de> On Behalf Of Andrew Hankinson Sent: Thursday, July 25, 2019 9:53 AM To: Music Encoding Initiative <mei-l at lists.uni-paderborn.de> Subject: Re: [MEI-L] Music Encoding Conference 2020 details Hi Morgan, The dates are May 26-29, 2020. 
There are still a few details we are working out about the hosting institution and contact info, but these will come in due course. Cheers, -Andrew > On 24 Jul 2019, at 17:24, Cundiff, Morgan <mcundiff at loc.gov> wrote: > > Hello all: > > What are the 2020 conference details - dates, location, host institution, contact info, etc.? > > I need to put in a travel request now and could not find anything on the website. > > I know it's in Boston... > > Thanks, > Morgan > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l _______________________________________________ mei-l mailing list mei-l at lists.uni-paderborn.de https://lists.uni-paderborn.de/mailman/listinfo/mei-l From oberts at campus.uni-paderborn.de Mon Aug 5 11:10:57 2019 From: oberts at campus.uni-paderborn.de (Salome Obert) Date: Mon, 5 Aug 2019 11:10:57 +0200 Subject: [MEI-L] Cadenzas in MEI Message-ID: <CAK4dzNM_oJnPGDqw39cdyfyU-Mm-5xtTu3dj=p+j2fLbcSw99A@mail.gmail.com> Dear community, My name is Salome and I am working for Beethovens Werkstatt. In the project we are currently encoding Beethoven’s Septett op. 20 (and its arrangement as Trio op. 38). In the sixth movement, *Andante con moto alla Marcia*, there is a cadenza written within a single measure. This cadenza measure thus does not have a meter. Our question is: how can we deal with this free meter? From our point of view, the attribute @metcon on the measure element (which indicates whether a measure's content conforms to the prevailing meter) is not semantically correct here, because the cadenza does not belong to a metrical system at all. We would prefer a new boolean attribute on the measure element that would clearly identify a cadenza and its free meter: <measure n="1" cadenza="true">…</measure> What do you think about this proposal, and about dealing with cadenzas in MEI in general? 
We would be very happy to discuss this question with the MEI community, so please share your thoughts. Best greetings, Salome Obert -------------- next part -------------- An HTML attachment was scrubbed... URL: <http://lists.uni-paderborn.de/pipermail/mei-l/attachments/20190805/948beca6/attachment.html> From kepper at edirom.de Thu Aug 8 14:40:47 2019 From: kepper at edirom.de (Johannes Kepper) Date: Thu, 8 Aug 2019 14:40:47 +0200 Subject: [MEI-L] MEC proceedings 2015-2017 published Message-ID: <41CB4C72-D520-4027-9F22-AAD78FA0F5B6@edirom.de> Dear all, We are pleased to announce that the Music Encoding Conference Proceedings for the years 2015 (Florence), 2016 (Montreal) and 2017 (Tours) are now published. The proceedings hold a total of nineteen contributions from Serafina Beck, Axel Berndt, Benjamin W. Bohl, David Burn, Xuanli Chen, Jim DeLaHunt, Giuliano Di Bacco, Norbert Dubowy, Ichiro Fujinaga, Andrew Hankinson, Jane Harrison, Andrew Horwitz, Yu-Hui Huang, Johannes Kepper, Farhan Khalid, Reiner Krämer, Debra Lewis Lacoste, David Lewis, Jacob Olley, Kevin Page, Anna Plaksin, Perry Roland, Nico Schüler, Agnes Seipelt, Rebecca A. Shaw, John A. Stinson, Jason Stoessel, Barbara Swanson, Richard Sänger, Radu Timofte, Luc Van Gool, Raffaele Viglianti, and Jan-Peter Voigt, plus a foreword by Giuliano Di Bacco. They are available from https://doi.org/10.15463/music-1 We would like to thank the authors for their wonderful and very diverse contributions to the field of music encoding. We also thank the Bavarian State Library in Munich for their assistance and for taking care of the long-term preservation of the publication. Finally, we thank everyone else involved in the preparation of this volume. 
The editors Giuliano Di Bacco, Perry Roland and Johannes Kepper From lxpugin at gmail.com Fri Aug 9 07:25:05 2019 From: lxpugin at gmail.com (Laurent Pugin) Date: Fri, 9 Aug 2019 07:25:05 +0200 Subject: [MEI-L] Music Encoding workshop, October 24th-27th, 2019; Nashville, TN Message-ID: <CAJ306Ha=-3sj0-Njb24dezMS-vAs3Tbf_w4=vM_Y3209Om12dA@mail.gmail.com> Dear all, We are pleased to announce that Vanderbilt University will hold a Music Encoding workshop & hackathon in Nashville, TN on 24-27 October 2019. The workshop will introduce MEI to newcomers and offer workshops for advanced users. Register (free) by October 17. The workshop will be led by members of the Music Encoding Initiative Board and Technical Team. Those new to MEI will learn how to use the Music Encoding Initiative for research, teaching, electronic publishing, and management of digital collections. Major topics will include: basic XML; MEI history and design principles; tools for creating and editing MEI data and metadata; MEI-based workflows; using Verovio (http://www.verovio.org). Each day will include lectures, hands-on encoding practice, and opportunities to address participant specific issues. Attendees are encouraged to bring example material they would like to encode. No previous experience with MEI or XML is required for this track, but an understanding of music notation and other markup schemes, such as TEI and HTML, will be helpful. For software developers and advanced users interested in creating software and documentation for authoring, editing, converting, querying, and rendering encoded music data, the workshop will offer opportunities to share ideas and tools with members of the Technical Team and other members of the community. Please address questions to info at music-encoding.org. We look forward to seeing you there! Best wishes, Laurent on behalf of the workshop organizers -------------- next part -------------- An HTML attachment was scrubbed... 
URL: <http://lists.uni-paderborn.de/pipermail/mei-l/attachments/20190809/caab070e/attachment.html> From klaus.rettinghaus at gmail.com Mon Aug 12 10:26:13 2019 From: klaus.rettinghaus at gmail.com (Klaus Rettinghaus) Date: Mon, 12 Aug 2019 10:26:13 +0200 Subject: [MEI-L] Cadenzas in MEI In-Reply-To: <CAK4dzNM_oJnPGDqw39cdyfyU-Mm-5xtTu3dj=p+j2fLbcSw99A@mail.gmail.com> References: <CAK4dzNM_oJnPGDqw39cdyfyU-Mm-5xtTu3dj=p+j2fLbcSw99A@mail.gmail.com> Message-ID: <CAB481HHV=n825Nn+LXcbVP9u2fnHe2iJ7GbzTp82_nY--Pt4QA@mail.gmail.com> Hi Salome, this approach seems a bit too specific to me. There are many different cases where a free meter may appear within measured music. So perhaps it would be good to have an extra attribute value to express the absence of a countable meter, something like meter.count="none". Then you could mark the cadenza as a separate section, or use type="cadenza" on the measure element. Cheers Klaus From goebl at mdw.ac.at Mon Aug 19 16:44:46 2019 From: goebl at mdw.ac.at (Werner Goebl) Date: Mon, 19 Aug 2019 16:44:46 +0200 Subject: [MEI-L] Different <expansion> versions within <choice>? Message-ID: <f8d955ab-974b-873f-9a12-3a6c0244ffcf@mdw.ac.at> Dear list, while discussing how to implement a "render expansion" functionality in Verovio (one that would render repeated sections, endings, lem, and rdg as specified in the expansion@plist attribute), we found that there is some need for discussion here. As there is often more than one valid expansion of a given piece (played with or without the repetitions, or repeating only certain marked-up repetitions, depending on the performer's choice), we may have multiple <expansion> elements in a given MEI encoding. Craig is using the @type attribute to specify expansions with and without repetitions (see example here: https://github.com/craigsapp/beethoven-piano-sonatas/blob/master/kern/sonata23-2.krn or the conversion to MEI: https://gist.github.com/wergo/143518bb133486bcd3d80a93371e07da). 
However, @type is a very generic attribute (according to Laurent) that should be reserved for use-case specific functions. As a solution, we thought of putting <expansion> within <choice>, so that it is clearly coded that only one of the many expansion elements is to be considered at a time (if a "--render-expansion" argument is given to Verovio or to another tool). <choice> <expansion xml:id="expansion-full" plist="#A #A1 #A #A2 #B #B1 #B #B2 #C #C1 #C #C2 #A #B"/> <expansion xml:id="expansion-minimal" plist="#A #A2 #B #B2 #C #A #B"/> <expansion xml:id="expansion-arrau1956" plist="#A #A1 #A #A2 #B #B2 #C #C2 #A #A1 #A #A2 #B"/> </choice> Alternatively, we discussed wrapping each expansion in <app> and <rdg> to encode the mutually exclusive nature of the expansion element, but finally thought that <choice> would be the simpler choice. :-) To account for even more complex expansion situations, we thought of allowing the expansion@plist to refer to other (sub-)expansions alongside the currently allowed section, ending, rdg, and lem. Please see (an excerpt of) Beethoven's Op. 35 (Eroica Variations), which has repeated sections at different section levels (the Introduzione is an umbrella for several thematic sub-units, before the Theme and the variations come). The expansions are coded where they occur (in full, minimal, or specific to a performer) and are referred to by expansion elements higher up in the hierarchy: https://gist.github.com/wergo/90308c74046f28acd98b73a8fae7bd6e (Please see below the MEI encoding for a screenshot of the MEI structure.) Each occurrence of more than one expansion element should be surrounded by a <choice> element. For this, we would need to modify the MEI schema to allow <expansion> to be contained by <choice>. Any other ideas? We are eager to hear your opinion on this. 
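As a sketch of what a rendering or analysis tool would have to do with such a plist, here is a small, invented illustration (not Verovio's actual implementation; the function name, IDs, and data are hypothetical) of resolving an expansion into a flat performance sequence, with a plist entry allowed to reference another expansion as proposed above:

```python
def resolve_expansion(expansion_id, expansions):
    """Flatten an expansion's plist into a performance sequence of
    section IDs.  A plist entry naming another expansion is resolved
    recursively (the sub-expansion case)."""
    sequence = []
    for ref in expansions[expansion_id]:
        target = ref.lstrip("#")
        if target in expansions:   # reference to a sub-expansion
            sequence.extend(resolve_expansion(target, expansions))
        else:                      # reference to a plain section/ending
            sequence.append(target)
    return sequence

# Invented example: the "full" expansion reuses a sub-expansion
# for the repeated opening, as in the nested Op. 35 scenario.
expansions = {
    "expansion-opening": ["#A", "#A1", "#A", "#A2"],
    "expansion-full": ["#expansion-opening", "#B", "#B1", "#B", "#B2"],
    "expansion-minimal": ["#A", "#A2", "#B", "#B2"],
}
print(resolve_expansion("expansion-full", expansions))
# → ['A', 'A1', 'A', 'A2', 'B', 'B1', 'B', 'B2']
```

A `<choice>` (or `<app>`/`<rdg>`) wrapper would then simply select which top-level expansion ID is passed to such a resolver.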
All the best, Werner (Goebl), David (Weigl), Laurent (Pugin) From berndt at hfm-detmold.de Mon Aug 19 17:09:24 2019 From: berndt at hfm-detmold.de (Axel Berndt) Date: Mon, 19 Aug 2019 17:09:24 +0200 Subject: [MEI-L] Different <expansion> versions within <choice>? In-Reply-To: <f8d955ab-974b-873f-9a12-3a6c0244ffcf@mdw.ac.at> References: <f8d955ab-974b-873f-9a12-3a6c0244ffcf@mdw.ac.at> Message-ID: <9e1e9c67-93a2-2218-c63f-9de98bca453d@hfm-detmold.de> Dear Werner, David and Laurent, why not wrap each expansion in a <rdg>? The different readings can then be placed in an <app> element. Best, Axel -- Dr.-Ing. Axel Berndt Phone: +49 (0) 5231 / 975 874 Web: http://www.cemfi.de/people/axel-berndt Center of Music and Film Informatics Ostwestfalen-Lippe University of Applied Sciences and Arts Detmold University of Music Hornsche Strasse 44, 32756 Detmold, Germany From craigsapp at gmail.com Mon Aug 19 18:24:02 2019 From: craigsapp at gmail.com (Craig Sapp) Date: Mon, 19 Aug 2019 12:24:02 -0400 Subject: [MEI-L] Different <expansion> versions within <choice>? In-Reply-To: <f8d955ab-974b-873f-9a12-3a6c0244ffcf@mdw.ac.at> References: <f8d955ab-974b-873f-9a12-3a6c0244ffcf@mdw.ac.at> Message-ID: <CAPcjuFeWfBmvogh8QxHc+tqkxhgnS70Kx7PQLVLL3SEjANcCtQ@mail.gmail.com> One thought about the expansion list, such as: <choice> <expansion xml:id="expansion-full" plist="#A #A1 #A #A2 #B #B1 #B #B2 #C #C1 #C #C2 #A #A2 #B #B2"/> <expansion xml:id="expansion-minimal" plist="#A #A2 #B #B2 #C #A #A2 #B #B2"/> <expansion xml:id="expansion-arrau1956" plist="#A #A1 #A #A2 #B #B2 #C #C2 #A #A1 #A #A2 #B"/> </choice> It would be useful to have a standardized method of identifying what you are IDing as "expansion-full" and "expansion-minimal". In other words, the performance sequence when taking repeats as instructed in the score and when not taking repeats. 
Special labeling of these two expansion cases would be useful for analytic purposes such as determining the duration of Beethoven sonatas when taking written repeats or with no repeats. And these special cases should not be encoded only in the structure of the expansion IDs. That is the main purpose of the expansion@type in the Humdrum conversions, where the default expansion indicates the performance sequence taking repeats as written, and "norep" is the minimal performance sequence (with "norep" meaning "no repeats"). The norep expansion is particularly useful for doing computational analysis of the score, since the repeated material is usually not needed. -=+Craig From goebl at mdw.ac.at Tue Aug 20 10:29:25 2019 From: goebl at mdw.ac.at (Werner Goebl) Date: Tue, 20 Aug 2019 10:29:25 +0200 Subject: [MEI-L] Different <expansion> versions within <choice>? In-Reply-To: <CAPcjuFeWfBmvogh8QxHc+tqkxhgnS70Kx7PQLVLL3SEjANcCtQ@mail.gmail.com> References: <f8d955ab-974b-873f-9a12-3a6c0244ffcf@mdw.ac.at> <CAPcjuFeWfBmvogh8QxHc+tqkxhgnS70Kx7PQLVLL3SEjANcCtQ@mail.gmail.com> Message-ID: <ccc9f6ca-3625-f823-3d5c-41ec40a032b6@mdw.ac.at> I agree with Craig that the expansions (full repetitions, minimal repetitions) should be coded as such in MEI. There should be another type as well: "typical" repetitions, reflecting the repeats that are customarily omitted after the da capo of a Minuet (A A B B C C D D A B). These three expansion types are not really editorial readings (as suggested by Axel), and they are not really choices either. expansion@type seems reasonable for this, but apparently there are some arguments against using this attribute for that purpose. 
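To make the labeling concrete: if @type were allowed on <expansion>, the earlier snippet could carry Humdrum-style labels directly. This is only a sketch of the proposal under discussion; the attribute usage and the value names ("default", "norep", and a performer label) are illustrative, not part of the current MEI schema.

```xml
<choice>
  <expansion xml:id="expansion-full" type="default" plist="#A #A1 #A #A2 #B #B1 #B #B2 #C #C1 #C #C2 #A #A2 #B #B2"/>
  <expansion xml:id="expansion-minimal" type="norep" plist="#A #A2 #B #B2 #C #A #A2 #B #B2"/>
  <expansion xml:id="expansion-arrau1956" type="performance" plist="#A #A1 #A #A2 #B #B2 #C #C2 #A #A1 #A #A2 #B"/>
</choice>
```

A tool could then select an expansion by its label rather than by parsing the xml:id strings.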
Best, Werner From craigsapp at gmail.com Tue Aug 20 13:58:57 2019 From: craigsapp at gmail.com (Craig Sapp) Date: Tue, 20 Aug 2019 07:58:57 -0400 Subject: [MEI-L] Different <expansion> versions within <choice>? 
In-Reply-To: <CAPcjuFeWfBmvogh8QxHc+tqkxhgnS70Kx7PQLVLL3SEjANcCtQ@mail.gmail.com> References: <f8d955ab-974b-873f-9a12-3a6c0244ffcf@mdw.ac.at> <CAPcjuFeWfBmvogh8QxHc+tqkxhgnS70Kx7PQLVLL3SEjANcCtQ@mail.gmail.com> Message-ID: <CAPcjuFepT=SoFNGxT78SHQMn8qpVbvHn+He2wjzTT4PVzp8DAw@mail.gmail.com> > There should be another type: "typical" repetitions that reflect omitted > repetitions at a da-capo Minuet (A A B B C C D D A B). That is OK, but I would categorize this case as "full expansion", or maybe called the "default expansion" to not imply that the full expansion requires repeating of the A and B after the da-capo. On Mon, 19 Aug 2019 at 12:24, Craig Sapp <craigsapp at gmail.com> wrote: > One thought about the expansion list, such as: > > <choice> > <expansion xml:id="expansion-full" plist="#A #A1 #A #A2 #B #B1 > #B #B2 #C #C1 #C #C2 #A #A2 #B #B2"> > <expansion xml:id="expansion-minimal" plist="#A #A2 #B #B2 #C #A > #A2 > #B #B2"> > <expansion xml:id="expansion-arrau1956" plist="#A #A1 #A #A2 #B > #B2 #C #C2 #A #A1 #A #A2 #B"> > </choice> > > It would be useful to have a standardized method of identifying what you > are IDing as "expansion-full" and "expansion-minimal". In other words, the > performance sequence when taking repeats as instructed in the score and not > taking repeats. Special labeling of these two expansion cases would be > useful for analytic purposes such as determining the duration of Beethoven > sonatas when taking written repeats or with no repeats. And these special > cases should not be encoded only in the structure of the expansion IDs. > That is the main purpose of the expansion at type in the Humdrum > conversions, where the default expansion indicates the performance sequence > taking repeats as written, and "norep" is the minimal performance sequence > (with "norep" meaning "no repeats"). The norep expansion is particularly > useful for doing computational analysis of the score, since the repeated > material is usually not needed. 
> > -=+Craig > > -------------- next part -------------- An HTML attachment was scrubbed... URL: <http://lists.uni-paderborn.de/pipermail/mei-l/attachments/20190820/878951c4/attachment.html> From weigl at mdw.ac.at Wed Aug 21 14:56:55 2019 From: weigl at mdw.ac.at (David M. Weigl) Date: Wed, 21 Aug 2019 14:56:55 +0200 Subject: [MEI-L] Edirom Summer School tutorial on Semantic Web / Linked Open Data Message-ID: <409ba24a1b104665559476010b7f2b16d27be6f3.camel@mdw.ac.at> Dear all, A note to let you know about our upcoming tutorial on Semantic Web / Linked Open Data in a music encoding context at the Edirom Summer School, September 2nd - 3rd, in Paderborn. This is a reprise of the tutorial we presented at MEC 2019. If you missed it earlier this summer, here's another chance to learn about interlinking music encodings within a wider web of data! Please register at https://ess.uni-paderborn.de/2019/registrierung.html - registrations close August 30th! Note that the language of the workshop is officially German, but we are happy to be linguistically flexible as required. :-) Thanks and kind regards, David M. Weigl & Stefan Münnich -- David M. Weigl, PhD Department of Music Acoustics - Wiener Klangstil University of Music and Performing Arts Vienna, Austria Data Officer, EU H2020 TROMPA Project Towards Richer Online Music Public-domain Archives From lucinda.johnston at ualberta.ca Wed Aug 21 17:40:23 2019 From: lucinda.johnston at ualberta.ca (Lucinda Johnston) Date: Wed, 21 Aug 2019 09:40:23 -0600 Subject: [MEI-L] Edirom Summer School tutorial on Semantic Web / Linked Open Data In-Reply-To: <409ba24a1b104665559476010b7f2b16d27be6f3.camel@mdw.ac.at> References: <409ba24a1b104665559476010b7f2b16d27be6f3.camel@mdw.ac.at> Message-ID: <CAF6XPH_2Ehi0gVxjYq538ox3aQ0SDEYM1mWe1e1NTimadxqU9Q@mail.gmail.com> Good morning, This tutorial sounds very interesting, but I don't speak German! 
:( When you say you "are happy to be as linguistically flexible as required", what exactly do you mean by that? Will there be simultaneous translation or English subtitles? Just wondering how the "flexibility" works. :) Sincerely, Lucinda On Wed, 21 Aug 2019 at 06:58, David M. Weigl <weigl at mdw.ac.at> wrote: > Dear all, > > A note to let you know about our upcoming tutorial on Semantic Web / > Linked Open Data in a music encoding context at the Edirom Summer > School, September 2nd - 3rd, in Paderborn. This is a reprise of the > tutorial we presented at MEC 2019. If you missed it earlier this > summer, here's another chance to learn about interlinking music > encodings within a wider web of data! > > Please register at https://ess.uni-paderborn.de/2019/registrierung.html > - registrations close August 30th! > > Note that the language of the workshop is officially German, but we > are happy to be linguistically flexible as required. :-) > > Thanks and kind regards, > > David M. Weigl & Stefan Münnich > > -- > David M. 
Weigl, PhD > Department of Music Acoustics - Wiener Klangstil > University of Music and Performing Arts Vienna, Austria > Data Officer, EU H2020 TROMPA Project > Towards Richer Online Music Public-domain Archives > > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l > -- *Lucinda Johnston, MLIS* *she/her/hers* Public Services Librarian, Subject Librarian: Music and Drama Rutherford Humanities & Social Sciences Library <https://www.library.ualberta.ca/locations/rutherford> 1-01 Rutherford Library South, University of Alberta Edmonton, Alberta T6G 2J8 780-492-3015 lucinda.johnston at ualberta.ca *The University of Alberta acknowledges that we are located on Treaty 6 territory, and respects the histories, languages, and cultures of First Nations, M*é *tis, Inuit, and all First Peoples of Canada, whose presence continues to enrich our vibrant community.**Amiskwaciwâskahikan / ᐊᒥᐢᑲᐧᒋᕀᐋᐧᐢᑲᐦᐃᑲᐣ / Edmonton* -------------- next part -------------- An HTML attachment was scrubbed... URL: <http://lists.uni-paderborn.de/pipermail/mei-l/attachments/20190821/92c7265d/attachment.html> From kepper at edirom.de Wed Aug 21 22:24:00 2019 From: kepper at edirom.de (Johannes Kepper) Date: Wed, 21 Aug 2019 22:24:00 +0200 Subject: [MEI-L] Edirom Summer School tutorial on Semantic Web / Linked Open Data In-Reply-To: <CAF6XPH_2Ehi0gVxjYq538ox3aQ0SDEYM1mWe1e1NTimadxqU9Q@mail.gmail.com> References: <409ba24a1b104665559476010b7f2b16d27be6f3.camel@mdw.ac.at> <CAF6XPH_2Ehi0gVxjYq538ox3aQ0SDEYM1mWe1e1NTimadxqU9Q@mail.gmail.com> Message-ID: <6054BBD9-678E-4DCB-B844-146D0B0A344C@edirom.de> Dear Lucinda, although I'm not one of the teachers of that very tutorial, I think I can answer the question. In essence, they will pick the language that works best for the audience, and will answer questions in both english and german. 
The group will be small enough to ensure that everyone gets her or his questions answered. While we'd be more than happy if you could attend the Edirom Summer School in Paderborn, I would like to point out the MEI Workshop in Nashville, October 24-27. We will start with an introductory day there, and will have three open "Hackathon-style" days after that. I know that some people interested in Linked Open Data and MEI will be there, so this could be another opportunity for you… Looking forward to meeting you in either place, jo > Am 21.08.2019 um 17:40 schrieb Lucinda Johnston <lucinda.johnston at ualberta.ca>: > > Good morning, > > This tutorial sounds very interesting, but I don't speak German! :( When you say you "are happy to be as linguistically flexible as required", what exactly do you mean by that? Will there be simultaneous translation or English subtitles? Just wondering how the "flexibility" works. :) > > Sincerely, > Lucinda > > On Wed, 21 Aug 2019 at 06:58, David M. Weigl <weigl at mdw.ac.at> wrote: > Dear all, > > A note to let you know about our upcoming tutorial on Semantic Web / > Linked Open Data in a music encoding context at the Edirom Summer > School, September 2nd - 3rd, in Paderborn. This is a reprise of the > tutorial we presented at MEC 2019. If you missed it earlier this > summer, here's another chance to learn about interlinking music > encodings within a wider web of data! > > Please register at https://ess.uni-paderborn.de/2019/registrierung.html > - registrations close August 30th! > > Note that the language of the workshop is officially German, but we > are happy to be linguistically flexible as required. :-) > > Thanks and kind regards, > > David M. Weigl & Stefan Münnich > > -- > David M. 
Weigl, PhD > Department of Music Acoustics - Wiener Klangstil > University of Music and Performing Arts Vienna, Austria > Data Officer, EU H2020 TROMPA Project > Towards Richer Online Music Public-domain Archives > > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l > > > -- > Lucinda Johnston, MLIS > she/her/hers > Public Services Librarian, Subject Librarian: Music and Drama > Rutherford Humanities & Social Sciences Library > 1-01 Rutherford Library South, University of Alberta > Edmonton, Alberta T6G 2J8 > 780-492-3015 lucinda.johnston at ualberta.ca > > The University of Alberta acknowledges that we are located on Treaty 6 territory, and respects the histories, languages, and cultures of First Nations, Métis, Inuit, and all First Peoples of Canada, whose presence continues to enrich our vibrant community. > Amiskwaciwâskahikan / ᐊᒥᐢᑲᐧᒋᕀᐋᐧᐢᑲᐦᐃᑲᐣ / Edmonton > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l Dr. Johannes Kepper Wissenschaftlicher Mitarbeiter Beethovens Werkstatt: Genetische Textkritik und Digitale Musikedition Musikwiss. Seminar Detmold / Paderborn | Hornsche Straße 39 | D-32756 Detmold kepper at beethovens-werkstatt.de | -49 (0) 5231 / 975669 www.beethovens-werkstatt.de Forschungsprojekt gefördert durch die Akademie der Wissenschaften und der Literatur | Mainz -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: Message signed with OpenPGP URL: <http://lists.uni-paderborn.de/pipermail/mei-l/attachments/20190821/5e11f49a/attachment.sig> From weigl at mdw.ac.at Thu Aug 22 15:53:17 2019 From: weigl at mdw.ac.at (David M. 
Weigl) Date: Thu, 22 Aug 2019 15:53:17 +0200 Subject: [MEI-L] Edirom Summer School tutorial on Semantic Web / Linked Open Data In-Reply-To: <6054BBD9-678E-4DCB-B844-146D0B0A344C@edirom.de> References: <409ba24a1b104665559476010b7f2b16d27be6f3.camel@mdw.ac.at> <CAF6XPH_2Ehi0gVxjYq538ox3aQ0SDEYM1mWe1e1NTimadxqU9Q@mail.gmail.com> <6054BBD9-678E-4DCB-B844-146D0B0A344C@edirom.de> Message-ID: <1f87f96315f39a91c0baa5495b7398320412d04d.camel@mdw.ac.at> Dear Lucinda, To clarify in addition to Johannes' response (thanks!) -- the presenters at the Edirom Summer School session are both bilingual, and we presented this tutorial in English last time round. The materials (slides and exercises) are also in English. We'll present in one or both languages depending on audience requirements; regardless, we will ensure that everyone can follow along. Would be great to see you there :-) That said, I'm sure the Nashville workshops will also be excellent! Best regards, David On Wed, 2019-08-21 at 22:24 +0200, Johannes Kepper wrote: > Dear Lucinda, > > although I'm not one of the teachers of that very tutorial, I think I > can answer the question. In essence, they will pick the language that > works best for the audience, and will answer questions in both > english and german. The group will be small enough to ensure that > everyone gets her or his questions answered. > > While we'd be more than happy if you could attend the Edirom Summer > School in Paderborn, I would like to point out to the MEI Workshop in > Nashville, October 24-27. We will start with an introductory day > there, and will have three open "Hackathen-Style" days after that. 
I > know that some people interested in Linked Open Data and MEI will be > there, so this could be another opportunity for you… > > Looking forward to meet you in either place, > jo > > > > Am 21.08.2019 um 17:40 schrieb Lucinda Johnston < > > lucinda.johnston at ualberta.ca>: > > > > Good morning, > > > > This tutorial sounds very interesting, but I don't speak German! > > :( When you say you "are happy to be as linguistically flexible as > > required", what exactly do you mean by that? Will there be > > simultaneous translation or English subtitles? Just wondering how > > the "flexibility" works. :) > > > > Sincerely, > > Lucinda > > > > On Wed, 21 Aug 2019 at 06:58, David M. Weigl <weigl at mdw.ac.at> > > wrote: > > Dear all, > > > > A note to let you know about our upcoming tutorial on Semantic Web > > / > > Linked Open Data in a music encoding context at the Edirom Summer > > School, September 2nd - 3rd, in Paderborn. This is a reprise of the > > tutorial we presented at MEC 2019. If you missed it earlier this > > summer, here's another chance to learn about interlinking music > > encodings within a wider web of data! > > > > Please register at > > https://ess.uni-paderborn.de/2019/registrierung.html > > - registrations close August 30th! > > > > Note that the language of the workshop is officially German, but we > > are happy to be linguistically flexible as required. :-) > > > > Thanks and kind regards, > > > > David M. Weigl & Stefan Münnich > > > > -- > > David M. 
Weigl, PhD > > Department of Music Acoustics - Wiener Klangstil > > University of Music and Performing Arts Vienna, Austria > > Data Officer, EU H2020 TROMPA Project > > Towards Richer Online Music Public-domain Archives > > > > > > _______________________________________________ > > mei-l mailing list > > mei-l at lists.uni-paderborn.de > > https://lists.uni-paderborn.de/mailman/listinfo/mei-l > > > > > > -- > > Lucinda Johnston, MLIS > > she/her/hers > > Public Services Librarian, Subject Librarian: Music and Drama > > Rutherford Humanities & Social Sciences Library > > 1-01 Rutherford Library South, University of Alberta > > Edmonton, Alberta T6G 2J8 > > 780-492-3015 lucinda.johnston at ualberta.ca > > > > The University of Alberta acknowledges that we are located on > > Treaty 6 territory, and respects the histories, languages, and > > cultures of First Nations, Métis, Inuit, and all First Peoples of > > Canada, whose presence continues to enrich our vibrant community. > > Amiskwaciwâskahikan / ᐊᒥᐢᑲᐧᒋᕀᐋᐧᐢᑲᐦᐃᑲᐣ / Edmonton > > _______________________________________________ > > mei-l mailing list > > mei-l at lists.uni-paderborn.de > > https://lists.uni-paderborn.de/mailman/listinfo/mei-l > > > Dr. Johannes Kepper > Wissenschaftlicher Mitarbeiter > > Beethovens Werkstatt: Genetische Textkritik und Digitale Musikedition > Musikwiss. 
Seminar Detmold / Paderborn | Hornsche Straße 39 | D-32756 > Detmold > kepper at beethovens-werkstatt.de | -49 (0) 5231 / 975669 > > www.beethovens-werkstatt.de > Forschungsprojekt gefördert durch die Akademie der Wissenschaften und > der Literatur | Mainz > > > > > > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l From lucinda.johnston at ualberta.ca Thu Aug 22 18:19:03 2019 From: lucinda.johnston at ualberta.ca (Lucinda Johnston) Date: Thu, 22 Aug 2019 10:19:03 -0600 Subject: [MEI-L] Edirom Summer School tutorial on Semantic Web / Linked Open Data In-Reply-To: <1f87f96315f39a91c0baa5495b7398320412d04d.camel@mdw.ac.at> References: <409ba24a1b104665559476010b7f2b16d27be6f3.camel@mdw.ac.at> <CAF6XPH_2Ehi0gVxjYq538ox3aQ0SDEYM1mWe1e1NTimadxqU9Q@mail.gmail.com> <6054BBD9-678E-4DCB-B844-146D0B0A344C@edirom.de> <1f87f96315f39a91c0baa5495b7398320412d04d.camel@mdw.ac.at> Message-ID: <CAF6XPH_UebzanqgKDBGJBOCfvGCL9s=SynfApbK4Bj_C3tz9gg@mail.gmail.com> Hello David and Johannes, Thank you both so much for your responses. It has been very helpful. I actually have to apologise because for some very strange reason, I thought this was a webinar that was being offered. I should have read the initial email through more carefully, but I was so excited and interested that I just jumped in with a question about the linguistic logistics! However, unfortunately, it will not be possible for me to go to Paderborn in the next two weeks. I will investigate the workshops in Nashville and see what happens. Again thank you for your response, I do appreciate the time you spent getting back to me. Perhaps we will meet at some point in the future. Sincerely, Lucinda On Thu, 22 Aug 2019 at 07:54, David M. Weigl <weigl at mdw.ac.at> wrote: > Dear Lucinda, > > To clarify in addition to Johannes' response (thanks!) 
-- > the presenters at the Edirom Summer School session are both bilingual, > and we presented this tutorial in English last time round. The > materials (slides and exercises) are also in English. We'll present in > one or both languages depending on audience requirements; regardless, > we will ensure that everyone can follow along. > > Would be great to see you there :-) > > That said, I'm sure the Nashville workshops will also be excellent! > > Best regards, > > David > > > > On Wed, 2019-08-21 at 22:24 +0200, Johannes Kepper wrote: > > Dear Lucinda, > > > > although I'm not one of the teachers of that very tutorial, I think I > > can answer the question. In essence, they will pick the language that > > works best for the audience, and will answer questions in both > > english and german. The group will be small enough to ensure that > > everyone gets her or his questions answered. > > > > While we'd be more than happy if you could attend the Edirom Summer > > School in Paderborn, I would like to point out to the MEI Workshop in > > Nashville, October 24-27. We will start with an introductory day > > there, and will have three open "Hackathen-Style" days after that. I > > know that some people interested in Linked Open Data and MEI will be > > there, so this could be another opportunity for you… > > > > Looking forward to meet you in either place, > > jo > > > > > > > Am 21.08.2019 um 17:40 schrieb Lucinda Johnston < > > > lucinda.johnston at ualberta.ca>: > > > > > > Good morning, > > > > > > This tutorial sounds very interesting, but I don't speak German! > > > :( When you say you "are happy to be as linguistically flexible as > > > required", what exactly do you mean by that? Will there be > > > simultaneous translation or English subtitles? Just wondering how > > > the "flexibility" works. :) > > > > > > Sincerely, > > > Lucinda > > > > > > On Wed, 21 Aug 2019 at 06:58, David M. 
Weigl <weigl at mdw.ac.at> > > > wrote: > > > Dear all, > > > > > > A note to let you know about our upcoming tutorial on Semantic Web > > > / > > > Linked Open Data in a music encoding context at the Edirom Summer > > > School, September 2nd - 3rd, in Paderborn. This is a reprise of the > > > tutorial we presented at MEC 2019. If you missed it earlier this > > > summer, here's another chance to learn about interlinking music > > > encodings within a wider web of data! > > > > > > Please register at > > > https://ess.uni-paderborn.de/2019/registrierung.html > > > - registrations close August 30th! > > > > > > Note that the language of the workshop is officially German, but we > > > are happy to be linguistically flexible as required. :-) > > > > > > Thanks and kind regards, > > > > > > David M. Weigl & Stefan Münnich > > > > > > -- > > > David M. Weigl, PhD > > > Department of Music Acoustics - Wiener Klangstil > > > University of Music and Performing Arts Vienna, Austria > > > Data Officer, EU H2020 TROMPA Project > > > Towards Richer Online Music Public-domain Archives > > > > > > > > > _______________________________________________ > > > mei-l mailing list > > > mei-l at lists.uni-paderborn.de > > > https://lists.uni-paderborn.de/mailman/listinfo/mei-l > > > > > > > > > -- > > > Lucinda Johnston, MLIS > > > she/her/hers > > > Public Services Librarian, Subject Librarian: Music and Drama > > > Rutherford Humanities & Social Sciences Library > > > 1-01 Rutherford Library South, University of Alberta > > > Edmonton, Alberta T6G 2J8 > > > 780-492-3015 lucinda.johnston at ualberta.ca > > > > > > The University of Alberta acknowledges that we are located on > > > Treaty 6 territory, and respects the histories, languages, and > > > cultures of First Nations, Métis, Inuit, and all First Peoples of > > > Canada, whose presence continues to enrich our vibrant community. 
> > > Amiskwaciwâskahikan / ᐊᒥᐢᑲᐧᒋᕀᐋᐧᐢᑲᐦᐃᑲᐣ / Edmonton > > > _______________________________________________ > > > mei-l mailing list > > > mei-l at lists.uni-paderborn.de > > > https://lists.uni-paderborn.de/mailman/listinfo/mei-l > > > > > > Dr. Johannes Kepper > > Wissenschaftlicher Mitarbeiter > > > > Beethovens Werkstatt: Genetische Textkritik und Digitale Musikedition > > Musikwiss. Seminar Detmold / Paderborn | Hornsche Straße 39 | D-32756 > > Detmold > > kepper at beethovens-werkstatt.de | -49 (0) 5231 / 975669 > > > > www.beethovens-werkstatt.de > > Forschungsprojekt gefördert durch die Akademie der Wissenschaften und > > der Literatur | Mainz > > > > > > > > > > > > > > _______________________________________________ > > mei-l mailing list > > mei-l at lists.uni-paderborn.de > > https://lists.uni-paderborn.de/mailman/listinfo/mei-l > > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l > -- *Lucinda Johnston, MLIS* *she/her/hers* Public Services Librarian, Subject Librarian: Music and Drama Rutherford Humanities & Social Sciences Library <https://www.library.ualberta.ca/locations/rutherford> 1-01 Rutherford Library South, University of Alberta Edmonton, Alberta T6G 2J8 780-492-3015 lucinda.johnston at ualberta.ca *The University of Alberta acknowledges that we are located on Treaty 6 territory, and respects the histories, languages, and cultures of First Nations, M*é *tis, Inuit, and all First Peoples of Canada, whose presence continues to enrich our vibrant community.**Amiskwaciwâskahikan / ᐊᒥᐢᑲᐧᒋᕀᐋᐧᐢᑲᐦᐃᑲᐣ / Edmonton* -------------- next part -------------- An HTML attachment was scrubbed... 
URL: <http://lists.uni-paderborn.de/pipermail/mei-l/attachments/20190822/9e5ece02/attachment.html> From T.Crawford at gold.ac.uk Sat Aug 24 18:31:39 2019 From: T.Crawford at gold.ac.uk (Tim Crawford) Date: Sat, 24 Aug 2019 16:31:39 +0000 Subject: [MEI-L] Beethoven Septet In-Reply-To: <CAK4dzNM_oJnPGDqw39cdyfyU-Mm-5xtTu3dj=p+j2fLbcSw99A@mail.gmail.com> References: <CAK4dzNM_oJnPGDqw39cdyfyU-Mm-5xtTu3dj=p+j2fLbcSw99A@mail.gmail.com> Message-ID: <AA943754-3010-4731-887E-4902755645AF@gold.ac.uk> Dear Salome Obert, I wonder if you are interested in two very peripheral sources of the Septet, Op. 20, which are in my possession. (I like this piece!) The first is a manuscript score of Op. 20 that I bought from a London dealer in the 1980s, probably made from the published parts many years before the score itself was published. It was formerly owned by Gottfried Weber (who was incidentally a close friend of his namesake, Carl Maria von Weber), and carries his signature, but the music seems to be in the hand of a different scribe. G. Weber was a friend of the Darmstadt violinist Louis Schloesser, and the Beethoven score also bears a printed bookplate of the latter’s son, ‘C[arl] W[ilhelm] A[dolph] Schloesser’, who came to England and was an early professor of piano at the newly-founded Royal Academy of Music. I’ve attached a photo of the title-page with G. Weber’s signature, and the first page of the score. Also, the two pages (95/96) with the cadenza (which ends with the word ‘Volti’, confirming that the score was compiled from the instrumental parts, not copied from a score). The other one is rather bizarre. It’s the Tema con Variazioni from the Septet, arranged for violin and organ by ‘T. Sanderson’, about whom I have been able to discover nothing at all! I bought this when I was a student in Brighton in about 1967; it might have a local connection, but I have not been able to find this name anywhere. Let me know if you’d like more photos. 
The ‘Weber’ score would take a lot of work, as it’s 115 pages of music. Best wishes (and Hi, Johannes!) Tim Crawford On 5 Aug 2019, at 10:10, Salome Obert <oberts at campus.uni-paderborn.de> wrote: Dear community, My name is Salome and I am working for Beethovens Werkstatt. In the project we are currently encoding Beethoven’s Septett op. 20 (respectively Trio op. 38). -------------- next part -------------- A non-text attachment was scrubbed... Name: IMG_0780.jpeg Type: image/jpeg Size: 391360 bytes Desc: IMG_0780.jpeg URL: <http://lists.uni-paderborn.de/pipermail/mei-l/attachments/20190824/36a76b08/attachment.jpeg> -------------- next part -------------- A non-text attachment was scrubbed... Name: IMG_0781.jpeg Type: image/jpeg Size: 444590 bytes Desc: IMG_0781.jpeg URL: <http://lists.uni-paderborn.de/pipermail/mei-l/attachments/20190824/36a76b08/attachment-0001.jpeg> -------------- next part -------------- A non-text attachment was scrubbed... Name: IMG_0782.jpeg Type: image/jpeg Size: 370332 bytes Desc: IMG_0782.jpeg URL: <http://lists.uni-paderborn.de/pipermail/mei-l/attachments/20190824/36a76b08/attachment-0002.jpeg> -------------- next part -------------- A non-text attachment was scrubbed... Name: IMG_0783.jpeg Type: image/jpeg Size: 313127 bytes Desc: IMG_0783.jpeg URL: <http://lists.uni-paderborn.de/pipermail/mei-l/attachments/20190824/36a76b08/attachment-0003.jpeg> -------------- next part -------------- A non-text attachment was scrubbed... 
Name: IMG_0784.jpeg Type: image/jpeg Size: 324005 bytes Desc: IMG_0784.jpeg URL: <http://lists.uni-paderborn.de/pipermail/mei-l/attachments/20190824/36a76b08/attachment-0004.jpeg> From D.Lewis at gold.ac.uk Mon Aug 26 15:50:31 2019 From: D.Lewis at gold.ac.uk (David Lewis) Date: Mon, 26 Aug 2019 13:50:31 +0000 Subject: [MEI-L] Different <expansion> versions within <choice>? In-Reply-To: <ccc9f6ca-3625-f823-3d5c-41ec40a032b6@mdw.ac.at> References: <f8d955ab-974b-873f-9a12-3a6c0244ffcf@mdw.ac.at> <CAPcjuFeWfBmvogh8QxHc+tqkxhgnS70Kx7PQLVLL3SEjANcCtQ@mail.gmail.com> <ccc9f6ca-3625-f823-3d5c-41ec40a032b6@mdw.ac.at> Message-ID: <FF7A36E8-5EF3-422B-9B22-3717F5B24845@gold.ac.uk> Just to note (in support of this proposal) that the TEI use of expan and ex is pretty close to this, including expan as a child of choice. My vote would be to use @type initially as discussed until the categories settle well enough to be considered as a special new attribute with a fully-controlled vocabulary. D > On 20 Aug 2019, at 09:29, Werner Goebl <goebl at mdw.ac.at> wrote: > > I agree with Craig that the expansions (full repetitions, minimal repetitions) should be coded as such in MEI. There should be another type: "typical" repetitions that reflect omitted repetitions at a da-capo Minuet (A A B B C C D D A B). > > These three expansion types are not really editorial readings (as suggested by Axel) and they are not really choices either. expansion at type seems to be reasonable for this, but apparently using this attribute for that purpose has some contra arguments. 
> > Best, > Werner > > On 19.08.19 18:24, Craig Sapp wrote: >> One thought about the expansion list, such as: >> <choice> >> <expansion xml:id="expansion-full" plist="#A #A1 #A #A2 #B #B1 >> #B #B2 #C #C1 #C #C2 #A #A2 #B #B2"> >> <expansion xml:id="expansion-minimal" plist="#A #A2 #B #B2 #C #A #A2 >> #B #B2"> >> <expansion xml:id="expansion-arrau1956" plist="#A #A1 #A #A2 #B >> #B2 #C #C2 #A #A1 #A #A2 #B"> >> </choice> >> It would be useful to have a standardized method of identifying what you are IDing as "expansion-full" and "expansion-minimal". In other words, the performance sequence when taking repeats as instructed in the score and not taking repeats. Special labeling of these two expansion cases would be useful for analytic purposes such as determining the duration of Beethoven sonatas when taking written repeats or with no repeats. And these special cases should not be encoded only in the structure of the expansion IDs. That is the main purpose of the expansion at type in the Humdrum conversions, where the default expansion indicates the performance sequence taking repeats as written, and "norep" is the minimal performance sequence (with "norep" meaning "no repeats"). The norep expansion is particularly useful for doing computational analysis of the score, since the repeated material is usually not needed. 
>> -=+Craig >> _______________________________________________ >> mei-l mailing list >> mei-l at lists.uni-paderborn.de >> https://lists.uni-paderborn.de/mailman/listinfo/mei-l > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l From drizo at dlsi.ua.es Mon Sep 2 13:25:55 2019 From: drizo at dlsi.ua.es (David Rizo Valero) Date: Mon, 2 Sep 2019 13:25:55 +0200 Subject: [MEI-L] Second call for participation to the 2nd International Workshop on Reading Music Systems (WoRMS) Message-ID: <1D252134-3478-44C6-8F17-74550A280324@dlsi.ua.es> Dear colleagues, This is a reminder of the 2nd International Workshop on Reading Music Systems (WoRMS). It will take place on Saturday, the 2nd of November 2019, from 2pm to 6pm, at the Delft University of Technology, as a satellite event of ISMIR 2019. WoRMS is a new workshop that aims to connect researchers who develop music reading systems — especially from the field of optical music recognition, but also related topics such as score following, score searching, or information retrieval from written music — with researchers and practitioners who could benefit from such systems, such as librarians or musicologists. WoRMS will be organized as a half-day workshop and provides a good opportunity to share ideas, discuss current developments and shape the future of reading music systems. We would like diverse points of view to be represented, and explicitly invite contributors without a technical background to participate as well. We strive to make the workshop as interactive as possible, with participants getting the opportunity not just to present their work, but also to discuss current research and foster relationships within the community. Therefore, promising ideas, work-in-progress submissions and recently submitted or published works are equally welcome. 
The topics of interest for the workshop include, but are not limited to:
- Music reading systems
- Optical music recognition
- Datasets and performance evaluation
- Image processing on music scores
- Writer identification
- Authoring, editing, storing and presentation systems for music scores
- Multi-modal systems
- Novel input-methods for music to produce written music
- Web-based Music Information Retrieval services
- Applications and projects
- Use-cases related to written music

Important dates:
- Submission deadline: Sep 13, 2019
- Notification due: Sep 27, 2019
- Workshop: Nov 2, 2019 (Saturday), from 2pm to 6pm

Check the website https://sites.google.com/view/worms2019 for further information. Please forward this e-mail to anyone who might be interested. Best regards, Jorge Calvo-Zaragoza Alexander Pacha Heinz Roggenkemper -------------- next part -------------- An HTML attachment was scrubbed... URL: <http://lists.uni-paderborn.de/pipermail/mei-l/attachments/20190902/f402ab79/attachment.html> From drizo at dlsi.ua.es Fri Sep 6 18:05:04 2019 From: drizo at dlsi.ua.es (David Rizo Valero) Date: Fri, 6 Sep 2019 18:05:04 +0200 Subject: [MEI-L] DLfM2019 Registration Open + TROMPA Challenge Position Papers Final CfP - Digital Libraries for Musicology | The Hague, The Netherlands | 9th November 2019 Message-ID: <32BEC361-A59A-45E4-BCD6-A93D11499A6B@dlsi.ua.es> [with apologies for cross posting] 6th International Conference on Digital Libraries for Musicology (DLfM 2019) 9th November 2019 National Library of The Netherlands A satellite event of ISMIR 2019. *** The deadline for TROMPA Project Challenge position paper submissions has been extended as follows: 6th October: TROMPA Project Challenge position papers We are pleased to report that REGISTRATION for DLfM 2019 in The Hague is now open at https://dlfm.web.ox.ac.uk/. *** Many Digital Libraries have long offered facilities to provide multimedia content, including music. 
However, there is now an ever more urgent need to specifically support the distinct multiple forms of music, the links between them, and the surrounding scholarly context, as required by the transformed and extended methods being applied to musicology and the wider Digital Humanities. The Digital Libraries for Musicology (DLfM) conference presents a venue specifically for those working on, and with, Digital Library systems and content in the domain of music and musicology. This includes Music Digital Library systems, their application and use in musicology, technologies for enhanced access and organisation of musics in Digital Libraries, bibliographic data and metadata for music, intersections with music Linked Data, and the challenges of working with the multiple representations of music across large-scale digital collections such as the Internet Archive and HathiTrust. This, the Sixth Digital Libraries for Musicology conference, follows previous workshops in London, Knoxville, New York, Shanghai, and Paris. In 2019, DLfM is again proud to be a satellite event of the annual International Society for Music Information Retrieval (ISMIR) conference, which is being held in Delft, and in particular encourages reports on the use of MIR methods and technologies within Music Digital Library systems when applied to the pursuit of musicological research. TROMPA Project Challenge Diverse public domain collections exposing materials of scholarly musicological interest are published on the Web. How will scholars benefit from the interlinking of such repositories? What research questions will be supported by unified access to collections of digitised score images, score encodings, textual and audio-visual materials, and other multimodal data sources? What kinds of holistic interpretive and analytical insights can scholars contribute to enrich such interconnected repositories, and how can they be supported in doing so? 
The TROMPA Project Challenge solicits short position papers addressing these questions as submissions of up to 2 pages to DLfM. TROMPA Project Challenge papers will be peer reviewed, and accepted papers will be presented at the conference either as part of a panel or as a poster. Challenge papers will not be included in the main DLfM proceedings, but will be compiled into a supplement hosted on the conference website. Please note that at least one author of each accepted paper must attend the conference to present their work. Submissions: https://easychair.org/conferences/?conf=dlfm2019 Contact email: dlfm2019 at easychair.org ACM template (both Word and LaTeX): https://www.acm.org/publications/taps/word-template-workflow TROMPA (https://trompamusic.eu/) is an EU-funded project (2018-2021) dedicated to massively enriching and democratising the heritage of classical music, and involving content owners, scholars, performers, choral singers and music enthusiasts of every kind. The project employs and improves state-of-the-art technology, engaging thousands of music-loving citizens to work with the technology, give feedback on algorithmic results, and annotate the data according to their personal expertise.

IMPORTANT DATES
- Abstract submission deadline: 21st June 2019 (23:59 UTC-11)
- Paper submission deadline: 28th June 2019 (23:59 UTC-11)
- Notification of acceptance: 17th August 2019
- General track camera-ready submission and author registration deadline: 21st September 2019
- TROMPA Project Challenge submission deadline: 6th October 2019
- TROMPA Project Challenge notification of acceptance: 21st October 2019
- General registration deadline: 28th October 2019
- Conference: 9th November 2019

-------------- next part -------------- An HTML attachment was scrubbed... 
URL: <http://lists.uni-paderborn.de/pipermail/mei-l/attachments/20190906/7021a183/attachment.html> From jean.bresson at ircam.fr Sat Sep 7 09:00:40 2019 From: jean.bresson at ircam.fr (Jean Bresson) Date: Sat, 7 Sep 2019 09:00:40 +0200 Subject: [MEI-L] Fwd: [MUSIC-NOTATION] TENOR 2020 - Call for Participation (please share) References: <A0346E36-04F8-4E77-AD87-A032DBE67C74@hfmt-hamburg.de> Message-ID: <AFCCA222-3FCA-43A6-BD1B-8A3BDF8287F3@ircam.fr> The Hamburg University of Music and Drama (HfMT) will host the 6th International Conference on Technologies for Music Notation and Representation (TENOR) in Hamburg (Germany) from Tuesday, May 12 to Thursday, May 14, 2020. The focus of the conference will be KiSS (Kinetics in Sound and Space) — the notation and representation of movement and gestures in sound and space. The last day of the conference will overlap with the Klingt gut! Symposium <https://www.tonlabor-haw.de/klingtgut-call-2020>—an interdisciplinary event at the intersection of art and technology hosted by the Hamburg University of Applied Sciences (HAW). Attendees will be able to get passes at a reduced rate should they wish to participate in both conferences. Keynote speakers will include Alexander Schubert, Thor Magnusson and Jessie Marino. A total of 6 concerts will accompany the conference, showcasing technology recently acquired by the HfMT and featuring a special concert with the school’s motion tracking system. Workshops will be held at the Elbphilharmonie, Hamburg’s newly inaugurated landmark and cultural beacon. 
———— Important Dates
- Submission deadline: December 31, 2019
- Start of review process: January 6, 2020
- Notifications: February 20, 2020
- Camera-ready submissions: April 20, 2020
- Early registration: February 1 - April 1, 2020
- Conference program ready: May 6, 2020

———— Call for Papers We are now soliciting submissions for oral presentations as well as workshops and round-table discussions, encouraging submissions examining core areas of Technologies for Music Notation and Representation, in particular:
- Real-time composition, improvisation and comprovisation
- History and aesthetics of notation
- Notation of microtonal and/or electronic music
- Performer perspectives on technologies around notation
- Notation for and in virtual environments
- New interfaces for music notation
- Digital games as notation
- Notation as an emergent property of the performance system
- Critical, aesthetic and sociological examinations of the interactions between new notation technologies and performance
- Notation/representation technologies for time-based arts beyond music, including notations for space, gesture, movement
- Non-visual notation systems (aural, tactile, olfactory, etc.)
- Principles of mnemonic notation, exploring the relationship between memory and representation

We also welcome submissions on the following topics:
- Notation and representation of sonified data
- Computational musicology and mathematical music theory with a focus on music representation
- Computer environments for music notation
- Notation in interactive performance systems
- Notation and robotics
- Music information retrieval
- Notation and music representation in education
- Notation and neurocognition

Papers should be between 4 and 10 pages, written in English and not previously published. All submissions should be anonymous and will be peer-reviewed. Accepted papers are expected to be delivered as oral presentations: a 15-minute talk followed by 5 minutes of Q&A. 
Papers must be submitted by December 31, 2019 via the TENOR conference system at https://easychair.org/conferences/?conf=tenor2020 <https://easychair.org/conferences/?conf=tenor2020> and must comply with the given templates. The conference system will open on November 1, 2019. Upon acceptance, camera-ready versions should be re-submitted within 30 days. All accepted papers will be included in the conference proceedings which will be available on the conference website (http://tenor2020.hfmt-hamburg.de/ <http://tenor2020.hfmt-hamburg.de/>) as an electronic publication. Please note that per paper at least one of the authors needs to register in order for the paper to be presented and included in the proceedings. Inquiries on submissions and presentations including workshops should be directed to the paper chair Rama Gottfried: paper-chair.tenor2020 at hfmt-hamburg.de <mailto:paper-chair.tenor2020 at hfmt-hamburg.de> ———— Call for Sonic Works We also invite submissions of music and sonic art works for presentation at the festival. Hamburg has a long and ongoing tradition of improvised music. In the spirit of this tradition we encourage submissions of works exploring emerging trends such as comprovisation or conduction. We also encourage submissions for pieces using the HfMT’s drawsocket technology for animated scores, recently employed in a large-scale project in the St. Pauli Elbe Tunnel (more information at https://github.com/HfMT-ZM4/drawsocket <https://github.com/HfMT-ZM4/drawsocket>, http://computermusicnotation.com <http://computermusicnotation.com/> and https://www.youtube.com/watch?v=BXlaSBo0KXs <https://www.youtube.com/watch?v=BXlaSBo0KXs>). Finally, submission of pieces using immersive and interactive technologies is strongly recommended. 
Featured categories: Real-time composition/notation and animated scores Improvisation, comprovisation and conduction AV & performance art Works for interactive movement / dance / object theater Fixed scores / media Works in all categories are free to use video. Available ensembles and musicians: TENOR2020 Orchestra (utilizing the drawsocket system, instrumentation TBA) SPIIC Ensemble, the ensemble of the Studio for Polystylistic Improvisation and Interdisciplinary Crossover directed by Vlatko Kučan ( vocals, dàm bao, tp, tb, violin, sax, sax, guitar, piano, db, perc, clar / sax) Blueprint Ensemble: sax, egtr, db, piano, perc, electronics Chamber choir (4, 4, 4, 4) Works for other resources will be considered, but artists may be responsible for providing their own performers. Where possible, modest financial help may be offered to facilitate this. Pieces can be old or new, but they must be presented: In a PDF with screenshots and a detailed explanation of their artistic and technological paradigms, their approach to musical praxis and – very important – their technological implementation, including personnel needs and qualifications. This PDF should also include a max. 250 word artist’s bio. This PDF should also include a link to a max. 5 min YouTube/Vimeo video that demonstrates the score in action. Works must be submitted by December 31, 2019 via the TENOR conference system at https://easychair.org/conferences/?conf=tenor2020 <https://easychair.org/conferences/?conf=tenor2020>. The conference system will open on November 1, 2019. Please note that per piece at least one of the artists needs to register and attend the conference in order for the piece to be presented. 
Inquiries on submissions and concerts should be directed to the music chair Jacob Sello: music-chair.tenor2020 at hfmt-hamburg.de <mailto:music-chair.tenor2020 at hfmt-hamburg.de> General inquiries should be directed to the conference chair Georg Hajdu: chair.tenor2020 at hfmt-hamburg.de <mailto:chair.tenor2020 at hfmt-hamburg.de> -------------- next part -------------- An HTML attachment was scrubbed... URL: <http://lists.uni-paderborn.de/pipermail/mei-l/attachments/20190907/e282b0f9/attachment.html> From list+mei-l at jdlh.com Tue Sep 10 11:03:37 2019 From: list+mei-l at jdlh.com (Jim DeLaHunt) Date: Tue, 10 Sep 2019 11:03:37 +0200 Subject: [MEI-L] Wien? (and Venezia, Napoli, Roma, Firenze, Bologna?) Message-ID: <955ce992-cb51-22a8-bab6-0f47c73096b4@jdlh.com> Greetings, MEI colleagues: I am on an extended trip from my home base in Canada to Austria, followed by Italy, until the end of November. If there are colleagues in these cities working on MEI-related projects, or digital musicology more generally, I would find it interesting to buy you a cup of coffee and learn more about your work. It is gratifying to put faces to the email addresses on this list. I also expect I would find the range of activities of this list's subscribers to be fascinating. My own interest is in transcribing legacy printed scores in mainstream notation to digital form, and giving those scores away for free, through the Keyboard Philharmonic project. I am in Wien/Vienna now, through 18. September. I am in Venezia/Venice in late September-early October, in Napoli/Naples and Roma/Rome in mid-October, in Firenze/Florence in early November, and in Bologna in late November. If anyone would be interested in meeting face to face, please reply to me by email to <jdlh at jdlh.com>. You could also phone or text me on my mobile ‭+43 660 7481305‬ (but that number may change in October). 
Best regards, —Jim DeLaHunt, Vancouver, Canada -- --Jim DeLaHunt, jdlh at jdlh.com http://blog.jdlh.com/ (http://jdlh.com/) multilingual websites consultant 355-1027 Davie St, Vancouver BC V6E 4L2, Canada Austria mobile +43 660 7481305 From sylvaineleblondmartin at gmail.com Tue Sep 10 19:01:56 2019 From: sylvaineleblondmartin at gmail.com (Sylvaine Leblond Martin) Date: Tue, 10 Sep 2019 19:01:56 +0200 Subject: [MEI-L] Wien? (and Venezia, Napoli, Roma, Firenze, Bologna?) In-Reply-To: <955ce992-cb51-22a8-bab6-0f47c73096b4@jdlh.com> References: <955ce992-cb51-22a8-bab6-0f47c73096b4@jdlh.com> Message-ID: <CADp7aT1L6MVnn7qo3E1dnXjugRB+iGeG8pk-LDnQHM6iUaTVLw@mail.gmail.com> Dear Jim DeLaHunt, Too bad you are not passing through Paris. We would have introduced you to our research group on the application of the MEI and TEI standards to the oral music works of the Maghreb and Mashriq. Our group is called GenÆnorma "Group for the digital encoding of auralities and new oralities of current musics", a project supported by the UNESCO Chair ITEN "Innovation, Transmission and Digital Publishing". We are based at the Maison des Sciences de l’Homme Paris Nord. We could always stay in touch! What do you think? Looking forward to hearing from you, we wish you a pleasant trip! Sylvaine Leblond Martin On Tue, 10 Sep 2019 at 18:31, Jim DeLaHunt <list+mei-l at jdlh.com> wrote: > Greetings, MEI colleagues: > > I am on an extended trip from my home base in Canada to Austria, > followed by Italy, until the end of November. If there are colleagues in > these cities working on MEI-related projects, or digital musicology more > generally, I would find it interesting to buy you a cup of coffee and > learn more about your work. > > It is gratifying to put faces to the email addresses on this list. I > also expect I would find the range of activities of this list's > subscribers to be fascinating. My own interest is in transcribing legacy > printed scores in mainstream notation to digital form, and giving those > scores away for free, through the Keyboard Philharmonic project. > > I am in Wien/Vienna now, through 18. September. I am in Venezia/Venice > in late September-early October, in Napoli/Naples and Roma/Rome in > mid-October, in Firenze/Florence in early November, and in Bologna in > late November. > > If anyone would be interested in meeting face to face, please reply to > me by email to <jdlh at jdlh.com>. You could also phone or text me on my > mobile +43 660 7481305 (but that number may change in October). > > Best regards, > —Jim DeLaHunt, Vancouver, Canada > > -- > --Jim DeLaHunt, jdlh at jdlh.com http://blog.jdlh.com/ ( > http://jdlh.com/) > multilingual websites consultant > > 355-1027 Davie St, Vancouver BC V6E 4L2, Canada > Austria mobile +43 660 7481305 > > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l > -- Sylvaine Leblond Martin Composer, PhD in Information and Communication Sciences, Member of the UNESCO Chair ITEN, Head of the GEN Æ NORMA project « Groupe d'Encodages Numériques des Auralités Et Nouvelles ORalités des Musiques Actuelles » ("Group for the digital encoding of auralities and new oralities of current musics"), Associate researcher at the CRTM (Centre de recherche sur les traditions musicales) of the Antonine University (Lebanon), Postdoctoral researcher at the MSH Paris Nord. -------------- next part -------------- An HTML attachment was scrubbed... URL: <http://lists.uni-paderborn.de/pipermail/mei-l/attachments/20190910/b4fc6729/attachment.html> From gobindamode at gmx.de Tue Sep 10 19:10:28 2019 From: gobindamode at gmx.de (Kai) Date: Tue, 10 Sep 2019 19:10:28 +0200 Subject: [MEI-L] Wien? (and Venezia, Napoli, Roma, Firenze, Bologna?) 
In-Reply-To: <CADp7aT1L6MVnn7qo3E1dnXjugRB+iGeG8pk-LDnQHM6iUaTVLw@mail.gmail.com> Message-ID: <79cda529-6b73-482d-997d-c5a981ed18ab@email.android.com> An HTML attachment was scrubbed... URL: <http://lists.uni-paderborn.de/pipermail/mei-l/attachments/20190910/8d83a837/attachment.html> From jdownie at illinois.edu Tue Sep 10 19:19:39 2019 From: jdownie at illinois.edu (Downie, J Stephen) Date: Tue, 10 Sep 2019 17:19:39 +0000 Subject: [MEI-L] Wien? (and Venezia, Napoli, Roma, Firenze, Bologna?) In-Reply-To: <955ce992-cb51-22a8-bab6-0f47c73096b4@jdlh.com> References: <955ce992-cb51-22a8-bab6-0f47c73096b4@jdlh.com> Message-ID: <BN6PR11MB15216E4473E9E3C8B74AD860CFB60@BN6PR11MB1521.namprd11.prod.outlook.com> Hi Jim Any chance you could pop up to Delft for ISMIR and DLfM in November? Lots of folks working at the intersection of MEI and MIR. J. Stephen Downie From: Jim DeLaHunt Sent: Tuesday, September 10, 11:04 AM Subject: [MEI-L] Wien? (and Venezia, Napoli, Roma, Firenze, Bologna?) To: Music Encoding Initiative Greetings, MEI colleagues: I am on an extended trip from my home base in Canada to Austria, followed by Italy, until the end of November. If there are colleagues in these cities working on MEI-related projects, or digital musicology more generally, I would find it interesting to buy you a cup of coffee and learn more about your work. It is gratifying to put faces to the email addresses on this list. I also expect I would find the range of activities of this list's subscribers to be fascinating. My own interest is in transcribing legacy printed scores in mainstream notation to digital form, and giving those scores away for free, through the Keyboard Philharmonic project. I am in Wien/Vienna now, through 18. September. I am in Venezia/Venice in late September-early October, in Napoli/Naples and Roma/Rome in mid-October, in Firenze/Florence in early November, and in Bologna in late November. 
If anyone would be interested in meeting face to face, please reply to me by email to <jdlh at jdlh.com>. You could also phone or text me on my mobile +43 660 7481305 (but that number may change in October). Best regards, —Jim DeLaHunt, Vancouver, Canada -- --Jim DeLaHunt, jdlh at jdlh.com http://blog.jdlh.com/ (http://jdlh.com/) multilingual websites consultant 355-1027 Davie St, Vancouver BC V6E 4L2, Canada Austria mobile +43 660 7481305 _______________________________________________ mei-l mailing list mei-l at lists.uni-paderborn.de https://lists.uni-paderborn.de/mailman/listinfo/mei-l -------------- next part -------------- An HTML attachment was scrubbed... URL: <http://lists.uni-paderborn.de/pipermail/mei-l/attachments/20190910/b3a6b6a6/attachment.html> From ul at openlilylib.org Tue Sep 17 09:51:08 2019 From: ul at openlilylib.org (ul at openlilylib.org) Date: Tue, 17 Sep 2019 07:51:08 +0000 Subject: [MEI-L] CfP “Music Engraving in the 21st Century - Developments and Perspectives” Message-ID: <27dc3b3677509d2990a9f0eecd64b4e5@openlilylib.org> Dear MEI community, I'm happy to forward the Call for Papers for the conference “Music Engraving in the 21st Century - Developments and Perspectives”, which will take place January 17-19, 2020 at the University Mozarteum Salzburg. The conference is a hybrid event targeting both users (e.g. teachers, scholars, publishers) and developers (or vendors) of notation software (of all kinds). The keynote speaker will be Elaine Gould. We're looking forward to interesting proposals (and of course also to attendees who do not wish to present anything). You can find the details at https://www.uni-mozarteum.at/en/kunst/music-engraving-conference.php and https://www.uni-mozarteum.at/de/kunst/notensatz-konferenz.php (German). 
Best regards, Urs (Liska) From luca.ludovico at unimi.it Wed Sep 25 00:19:15 2019 From: luca.ludovico at unimi.it (luca.ludovico at unimi.it) Date: Wed, 25 Sep 2019 00:19:15 +0200 Subject: [MEI-L] CFP - Special session on Computer Supported Music Education @ CSEDU 2020 Message-ID: <01de01d57326$1a01e160$4e05a420$@unimi.it> [Apologies for cross-postings] [Please distribute] 12th International Conference on Computer Supported Education (CSEDU 2020) Special session on Computer Supported Music Education (CSME 2020) The International Conference on Computer Supported Education is a yearly meeting place for presenting and discussing new educational tools and environments, best practices and case studies on innovative technology-based learning strategies, and institutional policies on computer supported education, including open and distance education. In this framework, the special session on Computer Supported Music Education aims to investigate the impact of computer-based approaches on music education. We welcome contributions on the development and use of hardware devices, software, and, more generally, advanced technologies to support learning/teaching actions in music creation, performance, and analysis. All accepted papers will be published in the conference proceedings, under an ISBN reference, in print and in digital form, and will be given a DOI (Digital Object Identifier). The conference proceedings will be submitted for indexing by Thomson Reuters Conference Proceedings Citation Index (CPCI/ISI), DBLP, EI (Elsevier Engineering Village Index), Scopus, Semantic Scholar and Google Scholar.

Important dates
- Paper submission: March 3, 2020
- Authors notification: March 17, 2020
- Camera-ready and registration: March 25, 2020

For further information: Special session web page: http://www.csedu.org/CSME.aspx Call for papers: http://www.csedu.org/CallForPapers.aspx Organizer and chair Luca A. 
Ludovico Laboratory of Music Informatics (LIM), Department of Computer Science, University of Milan luca.ludovico at unimi.it <mailto:luca.ludovico at unimi.it> -------------- next part -------------- An HTML attachment was scrubbed... URL: <http://lists.uni-paderborn.de/pipermail/mei-l/attachments/20190925/38139f06/attachment.html> -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.jpg Type: image/jpeg Size: 15760 bytes Desc: not available URL: <http://lists.uni-paderborn.de/pipermail/mei-l/attachments/20190925/38139f06/attachment.jpg> From kepper at edirom.de Sun Oct 6 18:34:24 2019 From: kepper at edirom.de (Johannes Kepper) Date: Sun, 6 Oct 2019 18:34:24 +0200 Subject: [MEI-L] Workshop at Vanderbilt, Oct 24-27 Message-ID: <C0ED31EA-19F6-4946-B74F-FA458DFF1849@edirom.de> Hi all, this is a friendly reminder that registration for the _free_ workshop at Vanderbilt is still possible. We will have a ~1 day introduction to MEI, followed by two days of an MEI Hackathon. There's certainly room for discussions about and work on MEI, music encoding in general, and digital music research based on it. Every level of experience is welcome, and we will make sure that complete beginners in the field will be able to follow and contribute. A more official press release is available from https://news.vanderbilt.edu/2019/10/02/vanderbilt-to-host-international-music-encoding-workshop-and-hackathon/. This is one of the few possibilities to get MEI training in America, so come over and join us in wonderful Nashville. Of course we're happy to answer any additional questions. With best regards, jo Dr. 
Johannes Kepper Technical Co-Chair of the Music Encoding Initiative University of Paderborn | Germany kepper at edirom.de From kepper at edirom.de Thu Oct 31 20:00:07 2019 From: kepper at edirom.de (Johannes Kepper) Date: Thu, 31 Oct 2019 15:00:07 -0400 Subject: [MEI-L] Call for Hosting MEC2020 and MEC2021 Message-ID: <2033DB77-D9F7-4748-BD07-C44FDFE82922@edirom.de> PLEASE CIRCULATE WIDELY Greetings, The MEI Board invites proposals for the organization of the 9th and 10th editions of the annual Music Encoding Conference, to be held in 2020 and 2021. As many of you are aware, among its activities MEI oversees the organization of an annual conference, the Music Encoding Conference (MEC), to provide a meeting place for scholars interested in discussing the modeling, generation and uses of music encoding. While the conference has an emphasis on the development and uses of MEI, other contributions related to general approaches to music encoding are always welcome, as an opportunity for exchange between scholars from various research communities, including technologists, librarians, historians, and theorists. In order to assist prospective organizers, the MEI Board has published ‹Hosting Guidelines for the Music Encoding Conference› at <http://music-encoding.org/conference/hosting-guidelines.html>. Historically, the conference has been organized by institutions involved in MEI, such as MEI member institutions or those hosting MEI-based projects, but proposals from any interested group or institution will be happily received, and ideas other than those expressed in the official document are welcome. While MEC venues have alternated between Europe and North America in the past, there is no such requirement, so applications from anywhere are invited. The deadline for sending proposals is 20 December 2019. The Board will notify bidders of its decision in late January, and we will jointly inform the MEI community through MEI-L thereafter. 
Bidding institutions should indicate clearly for which year they're applying. Successful bidders should be prepared to make a short presentation at the upcoming MEC in Boston, 26-29 May 2020. The MEI Board is happy to discuss proposals at an early stage already. Please direct all proposals and inquiries to <info at music-encoding.org>. On behalf of the MEI Board, best wishes, jo From Margrethe.Bue at nb.no Thu Oct 31 20:07:29 2019 From: Margrethe.Bue at nb.no (Margrethe Bue) Date: Thu, 31 Oct 2019 19:07:29 +0000 Subject: [MEI-L] Call for Hosting MEC2020 and MEC2021 In-Reply-To: <2033DB77-D9F7-4748-BD07-C44FDFE82922@edirom.de> References: <2033DB77-D9F7-4748-BD07-C44FDFE82922@edirom.de> Message-ID: <30A579AE80589863.a5e65883-8c60-43e8-98c6-9067cc35f6c6@mail.outlook.com> Or 2020 and 2022? On Thu, Oct 31, 2019 at 8:02 PM +0100, "Johannes Kepper" <kepper at edirom.de<mailto:kepper at edirom.de>> wrote: PLEASE CIRCULATE WIDELY Greetings, The MEI Board invites proposals for the organization of the 9th and 10th editions of the annual Music Encoding Conference, to be held in 2020 and 2021. As many of you are aware, among its activities MEI oversees the organization of an annual conference, the Music Encoding Conference (MEC), to provide a meeting place for scholars interested in discussing the modeling, generation and uses of music encoding. While the conference has an emphasis on the development and uses of MEI, other contributions related to general approaches to music encoding are always welcome, as an opportunity for exchange between scholars from various research communities, including technologists, librarians, historians, and theorists. In order to assist prospective organizers, the MEI Board has published ‹Hosting Guidelines for the Music Encoding Conference› at . 
Historically, the conference has been organized by institutions involved in MEI, such as MEI member institutions or those hosting MEI-based projects, but proposals from any interested group or institution will be happily received, and ideas other than those expressed in the official document are welcome. While MEC venues have alternated between Europe and North America in the past, there is no such requirement, so applications from anywhere are invited. The deadline for sending proposals is 20 December 2019. The Board will notify bidders of its decision in late January, and we will jointly inform the MEI community through MEI-L thereafter. Bidding institutions should indicate clearly for which year they're applying. Successful bidders should be prepared to make a short presentation at the upcoming MEC in Boston, 26-29 May 2020. The MEI Board is happy to discuss proposals at an early stage already. Please direct all proposals and inquiries to . On behalf of the MEI Board, best wishes, jo _______________________________________________ mei-l mailing list mei-l at lists.uni-paderborn.de https://lists.uni-paderborn.de/mailman/listinfo/mei-l -------------- next part -------------- An HTML attachment was scrubbed... 
URL: <http://lists.uni-paderborn.de/pipermail/mei-l/attachments/20191031/87f449ee/attachment.html>
From kepper at edirom.de Thu Oct 31 20:07:57 2019 From: kepper at edirom.de (Johannes Kepper) Date: Thu, 31 Oct 2019 15:07:57 -0400 Subject: [MEI-L] Call for Hosting MEC2021 and MEC2022 In-Reply-To: <2033DB77-D9F7-4748-BD07-C44FDFE82922@edirom.de> References: <2033DB77-D9F7-4748-BD07-C44FDFE82922@edirom.de> Message-ID: <459CFC32-3108-44C6-8E47-A897D985180D@edirom.de> Yes, I should double-check what I write – of course we're asking for 2021 and 2022 – sorry for the confusion :-) jo
From l.wintermans at xs4all.nl Wed Nov 6 13:05:38 2019 From: l.wintermans at xs4all.nl (Lian Wintermans) Date: Wed, 6 Nov 2019 13:05:38 +0100 Subject: [MEI-L] Digital Musicology for harpsichordists Message-ID: <AEB214E0-8A94-44E6-BE20-5ED10D961D99@xs4all.nl> Dear all, I am writing an article on digital musicology for the journal of the Dutch Harpsichord Society SCGN and I would like to pay special attention to research and projects that are of interest to harpsichordists. I am aware of “The temperament police” and “Analysis, performance, and tension perception of an unmeasured prelude for harpsichord”, but I would like to know if there is more, even if it is still in progress and no results are available yet. I would be happy to hear from you, or meet you at DLfM next Saturday!
Best regards, Lian Wintermans Heron Information Management LLP Email: lian at heronim.eu Mobile: +31 (0)6 4425 6581 From Anna.Kijas at tufts.edu Thu Nov 7 19:48:56 2019 From: Anna.Kijas at tufts.edu (Kijas, Anna E) Date: Thu, 7 Nov 2019 18:48:56 +0000 Subject: [MEI-L] Announcement: Music Encoding Conference 2020 Message-ID: <C35227AD-697E-49B7-8F7B-10AEF4921D55@tufts.edu> Dear Colleagues, I know that you have been eagerly awaiting details about the Music Encoding Conference 2020! On behalf of the Organizing Committee, I’d like to announce that the conference will be held at Tufts University<https://www.tufts.edu/> on 26 – 29 May 2020 in Medford, Massachusetts (USA). The landing page for this conference is now live at https://music-encoding.org/conference/2020/. This conference is hosted by Tisch Library<https://tischlibrary.tufts.edu/> and Lilly Music Library<https://tischlibrary.tufts.edu/use-library/music-library> of Tufts University. It is co-sponsored with the Digital Scholarship Group<https://dsg.neu.edu/> at Northeastern University Library<https://library.northeastern.edu/>. Additional details will be added to the conference pages over the next few months. Registration is not yet open, but please watch the conference space; I will also send an announcement when it is live. In the meantime, if you have any questions, please contact me at anna.kijas at tufts.edu. We look forward to welcoming you to Tufts University in May 2020! Best, Anna Chair, MEC Organizing Committee Anna Kijas Head, Lilly Music Library Granoff Music Center Tufts University 20 Talbot Avenue, Medford, MA 02155 Pronouns: she, her, hers Book an appointment<https://tufts.libcal.com/appointments/kijas/lilly> | (617) 627-2846 -------------- next part -------------- An HTML attachment was scrubbed... URL: <http://lists.uni-paderborn.de/pipermail/mei-l/attachments/20191107/1bf4b404/attachment.html> From b.w.bohl at gmail.com Wed Nov 13 16:09:22 2019 From: b.w.bohl at gmail.com (Benjamin W. 
Bohl) Date: Wed, 13 Nov 2019 16:09:22 +0100 Subject: [MEI-L] MEI Board elections 2019: Call for candidates Message-ID: <1652B1B9-7555-401A-B6EC-4B0787AC1DC6@gmail.com> **Too long to read?** visit: https://forms.gle/XwJR95FYn1xYcrDK7 <https://forms.gle/XwJR95FYn1xYcrDK7> Dear MEI Community, on 31 December 2019 the terms of three MEI Board members will come to an end. The entire Board wishes to thank Andrew Hankinson, Johannes Kepper and Eleanor Selfridge-Field for their service and dedication to the MEI community. In order to fill these soon-to-be-vacant positions, elections must be held. The election process will take place in accordance with the Music Encoding Initiative By-Laws.[1] To nominate a candidate, please use this form: https://forms.gle/XwJR95FYn1xYcrDK7 <https://forms.gle/XwJR95FYn1xYcrDK7> The timeline of the elections will be as follows: Nomination phase (13 November – 11 December, 2019) - Nominations can be submitted via the nomination form between 13 November and 11 December 2019.[2] - Any person who is a subscriber of MEI-L today has the right to nominate candidates. - Nominees must be members of the MEI-L mailing list but may register until 11 December 2019. - Individuals who have previously served on the Board are eligible for nomination and re-appointment. - Self-nominations are welcome. - Individuals will be informed of their nomination when it is received and asked to confirm their willingness to serve on the Board. - Acceptance of a nomination requires submission of a short CV and a personal statement of interest in MEI (a maximum of 200 words each) to elections at music-encoding.org by 12 December, 2019. Candidates who have been nominated but who have not confirmed their willingness will not be included on the ballot. Election phase (13 December – 18 December, 2019) - The voting period will be open from 13 December – 18 December, 2019.
- The election will take place using OpaVote and the Ranked Choice Voting method (https://www.opavote.com/methods/ranked-choice-voting <https://www.opavote.com/methods/ranked-choice-voting>). - You will be informed about the election and your individual voting tokens in a separate e-mail. Post-election phase - Election results will be announced after the elections have closed. - The term of the elected candidates starts on 1 January 2020. - The first meeting of the new MEI Board will be held on Wednesday, 15 January 2020, 8:00 pm in Germany (i.e. 7:00 pm in the UK, 11:00 am on the US west coast, or 2:00 pm on the US east coast). The selection of Board members is an opportunity for each of you to have a voice in determining the future of MEI. Thank you for your support, Peter Stadler and Benjamin W. Bohl MEI election administrators 2019 by appointment of the MEI Board [1] The By-laws of the Music Encoding Initiative are available online at: http://music-encoding.org/community/mei-by-laws.html <http://music-encoding.org/community/mei-by-laws.html> [2] All deadlines are referenced to 11:59 pm (UTC)
From kepper at edirom.de Mon Nov 18 15:04:26 2019 From: kepper at edirom.de (Johannes Kepper) Date: Mon, 18 Nov 2019 15:04:26 +0100 Subject: [MEI-L] MEI ODD Fridays / Development Schedule for MEI Message-ID: <5418C639-C9E8-4C7A-BAB8-6E23100C19E6@edirom.de> Dear all, while the ODD Fridays for MEI have gone rather unnoticed in the last couple of months, there was significant progress on the Guidelines. While we still need to focus on editing the Guidelines, the Technical Chairs suggest following a slightly different procedure for the continued development of MEI. From here on, we would like to hold conference calls every two months with all interested parties.
At these calls, we will discuss open pull requests with additions to and revisions of both the MEI Schema and Guidelines. There are two requirements for accepting changes: There have to be matching proposals for both the Schema and the Guidelines (i.e., changes without proper documentation won't get accepted, but may collect feedback), and there needs to be wide acceptance for these changes at the meetings and in the corresponding PRs. If a proposal doesn't meet these two criteria, it will be postponed by two months, and should be revised for the next developer meeting. We aim to take notes of these meetings and publish them on MEI's GitHub wiki. We also plan to send out timely reminders in advance of these meetings. The intention is to increase both comprehensibility and transparency of the MEI development workflow, and make it more accessible and inviting to newcomers. We will reflect this plan on the MEI website. The schedule for these developer workshops is the _last Friday of every odd month_, at 1pm UTC. The next three iterations will thus be: November 29, 1pm UTC January 31, 1pm UTC March 27, 1pm UTC The following meeting will then be held during / after the Community Meeting at MEC in Boston (i.e. on May 29). There, we will evaluate this workflow. We're looking forward to seeing everyone who's interested on Friday next week. All best, Benni and jo
From efreja at wanadoo.fr Mon Nov 18 22:35:25 2019 From: efreja at wanadoo.fr (Etienne Fréjaville) Date: Mon, 18 Nov 2019 22:35:25 +0100 Subject: [MEI-L] Jazz Chords in MEI ? Message-ID: <D9F8CD2D.11E74%efreja@wanadoo.fr> Dear community, Do you know whether MEI plans to introduce Jazz Chords as they exist in MusicXML? (the harmony element and its children: root, kind, bass, degree).
See https://usermanuals.musicxml.com/MusicXML/Content/CT-MusicXML-harmony.htm The only information I found about a harmony element is: https://music-encoding.org/guidelines/v4/content/analysisharm.html#harmony but it doesn't cover Jazz Chords. Thanks!
From T.Crawford at gold.ac.uk Tue Nov 19 12:56:17 2019 From: T.Crawford at gold.ac.uk (Tim Crawford) Date: Tue, 19 Nov 2019 11:56:17 +0000 Subject: [MEI-L] Extra technical meeting dedicated to TabMEI - 18-19 Dec 2019 Message-ID: <243B5A7D-5860-4B36-A8DC-F51885401F7F@gold.ac.uk> Dear all, We shall be holding a 2-day technical MEI meeting here at Goldsmiths in London, Wednesday-Thursday 18-19 December 2019, to attempt to get the work that has been done on a revised tablature module for MEI (working title TabMEI) into a releasable state in the near future. We shall be focussing on the MEI specification for lute and (modern) guitar tablature, in order to greatly expand the range of MEI’s coverage of both historical and modern popular music. While we want to include tablatures for keyboard and other instruments in future releases, they do not figure greatly in the current TabMEI effort. However, we would welcome the participation of anyone interested in working on them, or even their suggestions on this mailing list. The meeting is booked to take place in Room 140 of the (main) Richard Hoggart Building at Goldsmiths (RHB 140).
This link should help you find the room easily: https://www.gold.ac.uk/find-us/rhb-room-finder/?room=140 Directions for getting to Goldsmiths are at: https://www.gold.ac.uk/find-us/ The meeting dates have been deliberately timed to follow on from the DMRN workshop at Queen Mary, University of London, which is an annual event, and takes place this year on Tuesday, 17 December 2019: https://www.qmul.ac.uk/dmrn/dmrn-14/ The DMRN programme is not yet finalised, but should be fixed some time next week. I would be grateful if everyone wishing to come to the meeting could confirm the days they want to attend by email to me. Although this is not absolutely essential it helps greatly with planning the supply of refreshments, etc. See you on 18 December! Tim Prof. Tim Crawford Professorial Research Fellow in Computational Musicology Department of Computing Goldsmiths College London SE14 6NW U.K. t.crawford at gold.ac.uk<mailto:t.crawford at gold.ac.uk> -------------- next part -------------- An HTML attachment was scrubbed... URL: <http://lists.uni-paderborn.de/pipermail/mei-l/attachments/20191119/29be2ddb/attachment.html> From thomas.weber at notengrafik.com Tue Nov 19 13:05:25 2019 From: thomas.weber at notengrafik.com (Thomas Weber) Date: Tue, 19 Nov 2019 12:05:25 +0000 Subject: [MEI-L] Sibmei 2.3.0 released Message-ID: <894896b7-c1bc-933d-7469-3519f772410f@notengrafik.com> Sibmei 2.3.0 can now be downloaded from Github<https://github.com/music-encoding/sibmei/releases/tag/v2.3.0>. It adds a few features, fixes a few bugs and is the last release supporting MEI 3. The next release is planned to output MEI 4.0. Thanks to Anna Plaksin for doing most of the work, to the Max Weber Stiftung for sponsoring and to Klaus Rettinghaus for joining the team. -- Notengrafik Berlin GmbH HRB 150007 UstID: DE 289234097 Geschäftsführer: Thomas Weber und Werner J. 
Wolff fon: +49 30 25359505 Friedrichstraße 23a 10969 Berlin notengrafik.com
From klaus.rettinghaus at gmail.com Thu Nov 28 12:14:20 2019 From: klaus.rettinghaus at gmail.com (Klaus Rettinghaus) Date: Thu, 28 Nov 2019 12:14:20 +0100 Subject: [MEI-L] Jazz Chords in MEI ? In-Reply-To: <D9F8CD2D.11E74%efreja@wanadoo.fr> References: <D9F8CD2D.11E74%efreja@wanadoo.fr> Message-ID: <CAB481HEoXVb1GB8jxAsLEiw-9o-eyFhnxGLjveg+doXa-08YPQ@mail.gmail.com> Dear Etienne, I don't know of any plans so far. If you have a cunning plan for how this could be implemented in MEI, please go ahead. At least you should open a new issue on https://github.com/music-encoding/music-encoding explaining why this is needed in MEI. Cheers, Klaus
From pdr4h at virginia.edu Thu Nov 28 18:57:18 2019 From: pdr4h at virginia.edu (Roland, Perry D (pdr4h)) Date: Thu, 28 Nov 2019 17:57:18 +0000 Subject: [MEI-L] Jazz Chords in MEI ?
In-Reply-To: <CAB481HEoXVb1GB8jxAsLEiw-9o-eyFhnxGLjveg+doXa-08YPQ@mail.gmail.com> References: <D9F8CD2D.11E74%efreja@wanadoo.fr> <CAB481HEoXVb1GB8jxAsLEiw-9o-eyFhnxGLjveg+doXa-08YPQ@mail.gmail.com> Message-ID: <BN6PR13MB08995039FC3638463BA4F3769F470@BN6PR13MB0899.namprd13.prod.outlook.com> Hi Etienne, Actually, chords of all kinds are already supported in MEI. MEI takes a different, and I believe, better approach than MusicXML in that it separates the labeling of a chord from its sounded rendition. What this means is that you can use whatever label you like, "A7#9" for example, in the <harm> element. Formatting information can be captured here if, for instance, you want the "7#9" portion of the label to be superscripted. The sound that corresponds to the label is defined within the <chordDef> element. By defining the sound apart from label, variations in the label, "A7#9" vs. "A7(#9)", can be captured independently from the sounding information. Also, the sounding info doesn't have to be repeated each time the chord is used -- it can be referred to when needed. Of course, this also makes it possible to have multiple voicings for the same label; that is, the label "Cmaj7" can be linked to different sounding renditions. <chordDef> elements are collected within a <chordTable>. A chord table can be defined for each score; that is, inside <scoreDef>. Alternatively, it may be encoded in an external file and included within a <scoreDef> using xInclude. The linkage between <harm> and <chordDef> is achieved using the chordref attribute. For example -- <harm chordref="#myFvChord">A7(♭13)</harm> points to -- <chordDef xml:id="myFvChord"> <chordMember pname="a" oct="3"/> <chordMember pname="g" oct="4"/> <chordMember pname="c" accid.ges="s" oct="5"/> <chordMember pname="f" oct="5"/> </chordDef> for sounding info. 
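Assembled into a single (hypothetical) sketch, following the description above — the <chordDef> collected in a <chordTable> inside <scoreDef>, and a <harm> elsewhere in the score pointing back at it via @chordref (the xml:id value is illustrative):

```xml
<!-- Sketch only: label and voicing are kept separate. -->
<scoreDef>
  <chordTable>
    <chordDef xml:id="myFvChord">
      <chordMember pname="a" oct="3"/>
      <chordMember pname="g" oct="4"/>
      <chordMember pname="c" accid.ges="s" oct="5"/>
      <chordMember pname="f" oct="5"/>
    </chordDef>
  </chordTable>
</scoreDef>
<!-- later, within the score: -->
<harm chordref="#myFvChord">A7(♭13)</harm>
```

Because the label lives in <harm> and the voicing in <chordDef>, several differently formatted labels can share one voicing, or one label can point at alternative voicings.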
Fretboard diagrams can be created using the (optional) @tab.* attributes -- <chordDef xml:id="myFvChord"> <chordMember pname="a" oct="3" tab.string="6" tab.fret="5" /> <chordMember pname="g" oct="4" tab.string="4" tab.fret="5"/> <chordMember pname="c" accid.ges="s" oct="5" tab.string="3" tab.fret="6"/> <chordMember pname="f" oct="5" tab.string="2" tab.fret="6"/> </chordDef> The documentation doesn't mention "jazz chords" directly, but https://music-encoding.org/guidelines/v4/content/analysisharm.html#harmonyDetails covers the use of <harm>, <chordTable>, and <chordDef> generally. It also goes into more detail on how to use the @inth attribute to define chords using intervals instead of explicit pitch names and how to encode fingering info for a fretboard diagram. Hope this helps, -- p. -----Original Message----- From: mei-l <mei-l-bounces+pdr4h=virginia.edu at lists.uni-paderborn.de> On Behalf Of Klaus Rettinghaus Sent: Thursday, November 28, 2019 6:14 AM To: Music Encoding Initiative <mei-l at lists.uni-paderborn.de> Subject: Re: [MEI-L] Jazz Chords in MEI ? Dear Etienne, I don't know of any plans so far. If you have a cunning plan how this could be implemented in MEI please go ahead. At least you should open a new issue on https://github.com/music-encoding/music-encoding explaining why this is needed in MEI. Cheers, Klaus Am Mo., 18. Nov. 2019 um 22:35 Uhr schrieb Etienne Fréjaville <efreja at wanadoo.fr>: > > Dear community, > > Do you know if MEI has the plan to introduce Jazz Chords as they exist in MusicXML ? (element harmony and sons : root, kind, bass, degree). > See > https://usermanuals.musicxml.com/MusicXML/Content/CT-MusicXML-harmony. > htm > > The only information I found about a harmony element is : > > https://music-encoding.org/guidelines/v4/content/analysisharm.html#har > mony > > but it doesn't cover Jazz Chords. > > Thanks! 
> _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l
From kepper at edirom.de Fri Nov 29 11:52:45 2019 From: kepper at edirom.de (Johannes Kepper) Date: Fri, 29 Nov 2019 11:52:45 +0100 Subject: [MEI-L] MEI ODD Fridays / Development Schedule for MEI In-Reply-To: <5418C639-C9E8-4C7A-BAB8-6E23100C19E6@edirom.de> References: <5418C639-C9E8-4C7A-BAB8-6E23100C19E6@edirom.de> Message-ID: <6B9D1587-E69D-4EAA-90BC-F7192397BBCC@edirom.de> Dear all, today, in little more than 2 hours, our next MEI ODD Friday meeting will take place. We'll be using Zoom.us, at the invitation of RISM Switzerland (thanks Laurent!). Here's the link: https://us04web.zoom.us/j/235534918 This meeting is public to everyone who's interested in the (technical) development of MEI, and it's the place where we will make decisions about the Schema and Guidelines in the future. Since this is the first iteration in this revised form, we may want to discuss the workflow itself as well. You don't need to be an expert in MEI to participate, but the discussions may very well get technical. That said, we're happy to welcome newcomers and long-time experts alike – the more the merrier :-) See you soon, jo
From sylvaineleblondmartin at gmail.com Fri Nov 29 12:44:46 2019 From: sylvaineleblondmartin at gmail.com (Sylvaine Leblond Martin) Date: Fri, 29 Nov 2019 12:44:46 +0100 Subject: [MEI-L] MEI ODD Fridays / Development Schedule for MEI In-Reply-To: <6B9D1587-E69D-4EAA-90BC-F7192397BBCC@edirom.de> References: <5418C639-C9E8-4C7A-BAB8-6E23100C19E6@edirom.de> <6B9D1587-E69D-4EAA-90BC-F7192397BBCC@edirom.de> Message-ID: <CADp7aT1v1_0vwMjQ5umX7Yp5_YNt911boc+2PhgzVvNdYgOQSg@mail.gmail.com> Hi Johannes, Holding this meeting remotely so that we can attend is an excellent initiative. Our research group, GenÆnorma (MEI and TEI digital encoding of oral-tradition music), will be able to attend, and for those who cannot, we found that it is possible to record the meeting. See you soon! Sylvaine Leblond Martin, Henri Hudrisier, Mokhtar Ben Henda, Jean-Michel Borde, Daniel Mancero Baquerizo, Weiping Estelle Wang, Xia Zhang and Vincent Boucheau -- Sylvaine Leblond Martin, composer, PhD in information and communication sciences, member of the UNESCO ITEN Chair, lead of the GEN Æ NORMA project "Groupe d'Encodages Numériques des Auralités Et Nouvelles ORalités des Musiques Actuelles", post-doctoral researcher at the MSH Paris Nord, associate researcher at the CRTM "Centre de Recherche sur les Traditions Musicales" of the Université Antonine (Lebanon). 20 avenue George Sand, 93210 La Plaine Saint-Denis. Office: 226, office tel.: 01 55 93 93 56, cell: 06.95.92.76.79, home tel.: 09 51 97 67 59. "Music is what one listens to with the intention of listening to music." *Luciano Berio* - "Language is not rooted in a people. A language belongs to no one; one loves it through its works, not because it is the fortress of a nation." *Hannah Arendt*
From kepper at edirom.de Fri Nov 29 16:16:35 2019 From: kepper at edirom.de (Johannes Kepper) Date: Fri, 29 Nov 2019 16:16:35 +0100 Subject: [MEI-L] MEI ODD Fridays / Development Schedule for MEI In-Reply-To: <6B9D1587-E69D-4EAA-90BC-F7192397BBCC@edirom.de> References: <5418C639-C9E8-4C7A-BAB8-6E23100C19E6@edirom.de> <6B9D1587-E69D-4EAA-90BC-F7192397BBCC@edirom.de> Message-ID: <82D33268-0D80-4215-9B2B-BF84D239942F@edirom.de> Dear all, we've met briefly on Zoom – thanks to everyone who had the time to come.
I've taken some notes of the meeting, which are available from https://github.com/music-encoding/music-encoding/wiki/2019-11-29-ODD-Friday. I'd like to ask the other participants to add things I've missed there directly. All best, jo > Am 29.11.2019 um 11:52 schrieb Johannes Kepper <kepper at edirom.de>: > > Dear all, > > today, in little more than 2 hours, our next MEI ODD Friday meeting will take place. We'll be using Zoom.us, on invitation of RISM Switzerland (thanks Laurent!). Here's the link: > > https://us04web.zoom.us/j/235534918 > > This meeting is public to everyone who's interested in the (technical) development of MEI, and it's the place where we will take decisions about the Schema and Guidelines in the future. Since this is the first iteration in this revised form, we may want to discuss the workflow itself as well. You don't need to be an expert in MEI to participate, but the discussions may very well get technical. That said, we're happy to welcome newcomers and long-time experts alike – the more the merrier :-) > > See you soon, > jo > > >> Am 18.11.2019 um 15:04 schrieb Johannes Kepper <kepper at edirom.de>: >> >> Dear all, >> >> while the ODD Fridays for MEI have gone rather unnoticed in the last couple of months, there was significant progress on the Guidelines. While we still need to focus on editing the Guidelines, the Technical Chairs suggest to follow a slightly different procedure for the continued development of MEI. >> >> From here on, we would like to hold bi-monthly conference calls with all interested parties. At these calls, we will discuss open pull requests with additions to and revisions of both the MEI Schema and Guidelines. 
There are two requirements for accepting changes: There have to be matching proposals for both the Schema and the Guidelines (i.e., changes without proper documentation won't get accepted, but may collect feedback), and there needs to be wide acceptance for these changes at the meetings and in the corresponding PRs. If a proposal doesn't match these two criteria, it will be postponed by two months, and should be revised for the next developer meeting. >> >> We aim to take notes of these meetings and publish them on MEI's GitHub wiki. We also plan to send out timely reminders in advance of these meetings. The intention is to increase both comprehensibility and transparency of the MEI development workflow, and make it more accessible and inviting to newcomers. We will reflect this plan to the MEI website. >> >> The schedule for these developer workshops is the _last Friday of every odd month_, at 1pm UTC. The next three iterations will thus be: >> >> November 29, 1pm UTC >> January 31, 1pm UTC >> March 27, 1pm UTC >> >> The following meeting will then be held during / after the Community Meeting at MEC in Boston (i.e. on May 29). There, we will evaluate this workflow. >> >> We're looking forward to see everyone who's interested on Friday next week. 
>> >> All best,
>> >> Benni and jo
>> _______________________________________________
>> mei-l mailing list
>> mei-l at lists.uni-paderborn.de
>> https://lists.uni-paderborn.de/mailman/listinfo/mei-l

From sylvaineleblondmartin at gmail.com Fri Nov 29 16:20:27 2019
From: sylvaineleblondmartin at gmail.com (Sylvaine Leblond Martin)
Date: Fri, 29 Nov 2019 16:20:27 +0100
Subject: [MEI-L] MEI ODD Fridays / Development Schedule for MEI
In-Reply-To: <82D33268-0D80-4215-9B2B-BF84D239942F@edirom.de>
References: <5418C639-C9E8-4C7A-BAB8-6E23100C19E6@edirom.de> <6B9D1587-E69D-4EAA-90BC-F7192397BBCC@edirom.de> <82D33268-0D80-4215-9B2B-BF84D239942F@edirom.de>
Message-ID: <CADp7aT1a_zC=ceWgmzRB3ShLkMtCYKmkqagEv5MKHuuPy3VOUw@mail.gmail.com>

Thank you very much!
Sylvaine

Le ven. 29 nov. 2019 à 16:16, Johannes Kepper <kepper at edirom.de> a écrit :
> [Johannes's message of 29 November, together with the thread quoted within it, reproduced in full above]

--
Sylvaine Leblond Martin
Composer; PhD in Information and Communication Sciences; member of the UNESCO ITEN Chair; lead of the GEN Æ NORMA project ("Groupe d'Encodages Numériques des Auralités Et Nouvelles ORalités des Musiques Actuelles"); research associate at the CRTM ("Centre de recherche sur les traditions musicales") of Antonine University (Lebanon); postdoctoral researcher at MSH Paris Nord.
20 Avenue George Sand, Bureau 27, 93210 La Plaine Saint-Denis, FRANCE
Tél: +(33)1-55-93-93-56
sylvaine.leblond-martin at mshparisnord.fr
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.uni-paderborn.de/pipermail/mei-l/attachments/20191129/6c9213d4/attachment.html>

From efreja at wanadoo.fr Mon Dec 2 00:00:25 2019
From: efreja at wanadoo.fr (Etienne Fréjaville)
Date: Mon, 02 Dec 2019 00:00:25 +0100
Subject: [MEI-L] Jazz Chords in MEI ?
In-Reply-To: <BN6PR13MB08995039FC3638463BA4F3769F470@BN6PR13MB0899.namprd13.prod.outlook.com>
Message-ID: <DA0A033B.11F54%efreja@wanadoo.fr>

Hi Perry,

Thanks very much for your answer. That's much clearer and a good start. I therefore have some questions about the ability of MEI to represent some simple situations in a jazz grid.

As I understand it, the best I can do to encode the following measure (the 6th) extracted from Coltrane's "Crescent" is:

<score>
  <scoreDef>
    <staffGrp>
      <staffDef n="1" lines="5" clef.shape="G" clef.line="2" />
    </staffGrp>
    <chordDef xml:id="DminDom5f">
      <chordMember pname="d" oct="4"/>
      <chordMember pname="f" oct="4"/>
      <chordMember pname="a" accid.ges="f" oct="4"/>
      <chordMember pname="c" oct="4"/>
    </chordDef>
    <chordDef xml:id="GDom5s">
      <chordMember pname="g" oct="4"/>
      <chordMember pname="b" oct="4"/>
      <chordMember pname="d" accid.ges="s" oct="4"/>
      <chordMember pname="f" oct="4"/>
    </chordDef>
  </scoreDef>
  <section>
    <measure n="1">
      <staff n="1">
        <layer n="1">
          <note pname="a" dur="2" oct="5" />
          <note pname="g" dur="4" oct="5" />
          <note pname="g" dur="4" oct="5" />
        </layer>
      </staff>
      <harm chordref="#DminDom5f" staff="1" tstamp="1">D-7b5/G</harm>
      <harm chordref="#GDom5s" staff="1" tstamp="3">G7#5</harm>
    </measure>
  </section>
</score>

What I can see is:

- I don't know how to encode the root of the chord. Is it always supposed to be the lowest note of the chordDef?
- I don't know how to encode the bass of the chord: in my first chord it's G (the 11th of the chord), but it appears only in the <harm> value, which is not part of the semantics.
- I don't see how, from the <harm> value, we can tell how the label must be drawn (for example, in G7#5, "7#5" is superscripted; in D-7b5/G, "-7b5" is superscripted and "/G" is placed more or less under "D-7b5").

Finally, spelling out the separate notes in each chordDef is somewhat useless in jazz, as there are some 33 different possible chords for each root note (D-7b5 is D's half-diminished chord).
But I can quite understand that this feature may have its use in different situations, so it's not the main point. Hoping that my questions are clear… Thanks Le 28/11/2019 18:57, « Roland, Perry D (pdr4h) » <pdr4h at virginia.edu> a écrit : > >Hi Etienne, > >Actually, chords of all kinds are already supported in MEI. MEI takes a >different, and I believe, better approach than MusicXML in that it >separates the labeling of a chord from its sounded rendition. > >What this means is that you can use whatever label you like, "A7#9" for >example, in the <harm> element. Formatting information can be captured >here if, for instance, you want the "7#9" portion of the label to be >superscripted. > >The sound that corresponds to the label is defined within the <chordDef> >element. By defining the sound apart from label, variations in the >label, "A7#9" vs. "A7(#9)", can be captured independently from the >sounding information. Also, the sounding info doesn't have to be >repeated each time the chord is used -- it can be referred to when >needed. Of course, this also makes it possible to have multiple voicings >for the same label; that is, the label "Cmaj7" can be linked to different >sounding renditions. > ><chordDef> elements are collected within a <chordTable>. A chord table >can be defined for each score; that is, inside <scoreDef>. >Alternatively, it may be encoded in an external file and included within >a <scoreDef> using xInclude. > >The linkage between <harm> and <chordDef> is achieved using the chordref >attribute. For example -- > ><harm chordref="#myFvChord">A7(♭13)</harm> > >points to -- > ><chordDef xml:id="myFvChord"> > <chordMember pname="a" oct="3"/> > <chordMember pname="g" oct="4"/> > <chordMember pname="c" accid.ges="s" oct="5"/> > <chordMember pname="f" oct="5"/> ></chordDef> > >for sounding info. 
Fretboard diagrams can be created using the >(optional) @tab.* attributes -- > ><chordDef xml:id="myFvChord"> > <chordMember pname="a" oct="3" tab.string="6" tab.fret="5" /> > <chordMember pname="g" oct="4" tab.string="4" tab.fret="5"/> > <chordMember pname="c" accid.ges="s" oct="5" tab.string="3" >tab.fret="6"/> > <chordMember pname="f" oct="5" tab.string="2" tab.fret="6"/> ></chordDef> > >The documentation doesn't mention "jazz chords" directly, but >https://music-encoding.org/guidelines/v4/content/analysisharm.html#harmony >Details covers the use of <harm>, <chordTable>, and <chordDef> generally. > It also goes into more detail on how to use the @inth attribute to >define chords using intervals instead of explicit pitch names and how to >encode fingering info for a fretboard diagram. > >Hope this helps, > >-- >p. > > > >-----Original Message----- >From: mei-l <mei-l-bounces+pdr4h=virginia.edu at lists.uni-paderborn.de> On >Behalf Of Klaus Rettinghaus >Sent: Thursday, November 28, 2019 6:14 AM >To: Music Encoding Initiative <mei-l at lists.uni-paderborn.de> >Subject: Re: [MEI-L] Jazz Chords in MEI ? > >Dear Etienne, > >I don't know of any plans so far. If you have a cunning plan how this >could be implemented in MEI please go ahead. >At least you should open a new issue on >https://github.com/music-encoding/music-encoding explaining why this is >needed in MEI. > >Cheers, >Klaus > >Am Mo., 18. Nov. 2019 um 22:35 Uhr schrieb Etienne Fréjaville ><efreja at wanadoo.fr>: >> >> Dear community, >> >> Do you know if MEI has the plan to introduce Jazz Chords as they exist >>in MusicXML ? (element harmony and sons : root, kind, bass, degree). >> See >> https://usermanuals.musicxml.com/MusicXML/Content/CT-MusicXML-harmony. >> htm >> >> The only information I found about a harmony element is : >> >> https://music-encoding.org/guidelines/v4/content/analysisharm.html#har >> mony >> >> but it doesn't cover Jazz Chords. >> >> Thanks! 
>> _______________________________________________ >> mei-l mailing list >> mei-l at lists.uni-paderborn.de >> https://lists.uni-paderborn.de/mailman/listinfo/mei-l > >_______________________________________________ >mei-l mailing list >mei-l at lists.uni-paderborn.de >https://lists.uni-paderborn.de/mailman/listinfo/mei-l >_______________________________________________ >mei-l mailing list >mei-l at lists.uni-paderborn.de >https://lists.uni-paderborn.de/mailman/listinfo/mei-l From pdr4h at virginia.edu Mon Dec 2 20:55:47 2019 From: pdr4h at virginia.edu (Roland, Perry D (pdr4h)) Date: Mon, 2 Dec 2019 19:55:47 +0000 Subject: [MEI-L] Jazz Chords in MEI ? In-Reply-To: <DA0A033B.11F54%efreja@wanadoo.fr> References: <BN6PR13MB08995039FC3638463BA4F3769F470@BN6PR13MB0899.namprd13.prod.outlook.com> <DA0A033B.11F54%efreja@wanadoo.fr> Message-ID: <BN6PR13MB0899FFDD3EEEFE520405DF9D9F430@BN6PR13MB0899.namprd13.prod.outlook.com> Hello Etienne, Your example should look like this -- <score xmlns="http://www.music-encoding.org/ns/mei" xmlns:svg="http://www.w3.org/2000/svg"> <scoreDef> <chordTable> <chordDef xml:id="DminDom5f"> <chordMember pname="g" oct="3"/> <chordMember pname="d" oct="4"/> <chordMember pname="f" oct="4"/> <chordMember pname="a" accid.ges="f" oct="4"/> <chordMember pname="c" oct="4"/> </chordDef> <chordDef xml:id="GDom5s"> <chordMember pname="g" oct="4"/> <chordMember pname="b" oct="4"/> <chordMember pname="d" accid.ges="s" oct="4"/> <chordMember pname="f" oct="4"/> </chordDef> </chordTable> <symbolTable> <symbolDef xml:id="mySymbol"> <svg:svg> <!-- SVG for the chord label here --> </svg:svg> </symbolDef> </symbolTable> <staffGrp> <staffDef n="1" lines="5" clef.shape="G" clef.line="2"/> </staffGrp> </scoreDef> <section> <measure n="1"> <staff n="1"> <layer n="1"> <note pname="a" dur="2" oct="5"/> <note pname="g" dur="4" oct="5"/> <note pname="g" dur="4" oct="5"/> </layer> </staff> <harm chordref="#DminDom5f" staff="1" tstamp="1" rendgrid="gridtext">D<rend 
rend="sup">-7b5</rend>/G</harm>
      <!-- Alternative markup for the preceding harmonic label -->
      <!-- <harm chordref="#DminDom5f" staff="1" tstamp="1" rendgrid="gridtext">
        <symbol altsym="#mySymbol"/>
      </harm> -->
      <harm chordref="#GDom5s" staff="1" tstamp="3" rendgrid="gridtext">G<rend rend="sup">7#5</rend></harm>
    </measure>
  </section>
</score>

1. <chordDef> elements must be children of <chordTable>.

2. The bass note of the first chord should be included in the spelling-out of the chord tones. Since the goal of the <chordDef> is to capture information related to the chord tablature grid (and not harmonic analysis), there's no need to mark the harmonic function of each "note"; that is, marking the root or any of the other chord tones isn't necessary. If you feel otherwise, you can advocate for making <chordMember> a member of the att.harmonicFunction attribute class. This would add the @deg attribute, which is where one can record scale degree info.

3. The superscripted portion of a label can be marked using the <rend> element. However, using markup to get the "G" to be under the "D7b5" portion of the label is straying too far from semantic markup. In any case, I don't believe Verovio (the MEI renderer) supports the <stack> element yet. So, if you require a precise rendition of the label that's unachievable with <rend>, you can define a symbol (using <symbolTable> and <symbolDef> elements) and refer to it inside the <harm> element. See the commented-out <harm> element for an example.

4. I don't quite understand your last comment, but I'll address what I think you're getting at. Please let me know if I miss the mark. Yes, there are innumerable ways in which a given chord (say "D7b5") can be performed/voiced on the guitar. If one doesn't care which is to be played, then a simple label can be used; that is, "D7b5". In this case, the performer can choose how/where to play the chord and no chord definition is necessary.
However, if a particular voicing is desired, then the way to do that is to explicitly encode the chord in a <chordDef>. But this only has to be done once. Anywhere in the piece where the chord is expected to be played, the <chordDef> can be referred to without re-encoding the note-spelling of the chord. The label ("D7b5") and/or the chord tab grid can be rendered. This not only reduces the amount of encoding required to represent chord tab grids, but it also decouples the label from the grid, meaning that one can use the traditional label "Cm7♭5", or "Cm7" with a superscript "o", or any other chord labeling scheme, all referring to the same grid. You can also refer to a chord definition outside the current MEI file, which means you can put the definition of a chord in a particular inversion/voicing in an external file and refer to it from many MEI documents. So, yes, there may be 33 different possibilities for "D7b5", but they only have to be captured once.

Hope this helps,

--
p.

From efreja at wanadoo.fr Mon Dec 2 23:11:12 2019
From: efreja at wanadoo.fr (Etienne Fréjaville)
Date: Mon, 02 Dec 2019 23:11:12 +0100
Subject: [MEI-L] Jazz Chords in MEI ?
In-Reply-To: <BN6PR13MB0899FFDD3EEEFE520405DF9D9F430@BN6PR13MB0899.namprd13.prod.outlook.com>
Message-ID: <DA0B4A3B.11F64%efreja@wanadoo.fr>

Hello Perry,

Thank you very much for the answer. I won't come back to displaying the chord labels; that's clear, even if using SVG is probably not the ideal solution.

Concerning the harmony: first of all, I hadn't caught that <chordDef> is meant to capture information related to the chord tablature grid (and not harmonic analysis). chordDef can be used to encode voicings (in jazz, voicings are seldom encoded), and I note that this can be achieved through the chordMember element: if I wanted, for example, to encode a C-7 Monk voicing, I'd probably have c/b-flat or c/b-flat/d (an added 9th). That's perfect!
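The <harm>-to-<chordDef> linkage Perry describes can be sketched in a few lines of code. This is a minimal, hypothetical illustration (not part of any MEI tooling): it parses a fragment trimmed from Perry's example (with the label simplified to ASCII) and resolves @chordref with Python's standard library.

```python
# Sketch: following @chordref from a <harm> label to its <chordDef>,
# as described in Perry's reply.  The MEI fragment is trimmed from his
# example; the lookup code itself is illustrative only.
import xml.etree.ElementTree as ET

MEI = "http://www.music-encoding.org/ns/mei"
XML_ID = "{http://www.w3.org/XML/1998/namespace}id"
NS = {"mei": MEI}

doc = ET.fromstring("""
<score xmlns="http://www.music-encoding.org/ns/mei">
  <scoreDef>
    <chordTable>
      <chordDef xml:id="myFvChord">
        <chordMember pname="a" oct="3"/>
        <chordMember pname="g" oct="4"/>
        <chordMember pname="c" accid.ges="s" oct="5"/>
        <chordMember pname="f" oct="5"/>
      </chordDef>
    </chordTable>
  </scoreDef>
  <section>
    <measure n="1">
      <harm chordref="#myFvChord" tstamp="1">A7(b13)</harm>
    </measure>
  </section>
</score>
""")

# Index every chordDef by its xml:id once, so each label resolves in O(1).
chord_defs = {cd.get(XML_ID): cd for cd in doc.iter(f"{{{MEI}}}chordDef")}

def resolve(harm):
    """Return the (pname, oct) pairs of the chordDef a <harm> points to."""
    target = chord_defs[harm.get("chordref").lstrip("#")]
    return [(m.get("pname"), m.get("oct"))
            for m in target.findall("mei:chordMember", NS)]

harm = doc.find(".//mei:harm", NS)
tones = resolve(harm)  # the voicing behind the label "A7(b13)"
```

Because the label text and the chordDef are decoupled, several <harm> elements with different labels can resolve to the same entry in `chord_defs`, which is exactly the reuse Perry points out.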
But aside from voicings, it's harmonic analysis that I'd like to see encoded, for piano for example. What I don't clearly get is that with jazz chord symbols I don't need to designate the exact note that is altered. For instance, in D-7b5 I know that it's "a" that is the diminished 5th. So it would be strange to mark, either on a <note> or on a <chordMember> (by making <chordMember> a member of the att.harmonicFunction attribute class*): pname="a" deg="5-". The simplest way would be to do as in MusicXML and say that the kind** of the chord is a half-diminished chord of D (the root note).

Besides, if I want to extract from the score all the chords/notes that belong to a half-diminished chord, I'll have to search for all chordDefs that have a deg="3-", a deg="5-", and a deg="7", and then search all notes/chords at the timestamp specified by the matching chordDef(s). Probably much more complex than searching for all chords that have a <kind> element whose value is "half-diminished".

Again, I hope I was clear. Thanks.

Etienne

* I don't know how.
** There are about 33 different kinds of jazz chords for a given note.
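Etienne's query-cost worry can be made concrete: without a MusicXML-style <kind> label, chord quality has to be derived from the spelled pitches of each chordDef. A rough sketch in plain Python (the pitch arithmetic is standard music theory; the function names and the list-of-tuples input format are mine, not MEI):

```python
# Sketch: deciding "is this chord half-diminished?" from spelled
# pitches alone, as Etienne's hypothetical query would have to do.
PC = {"c": 0, "d": 2, "e": 4, "f": 5, "g": 7, "a": 9, "b": 11}
ACC = {"s": 1, "f": -1, None: 0}  # sharp / flat / no accidental

def semitone(pname, accid, octave):
    """Absolute semitone number of one chord tone."""
    return 12 * int(octave) + PC[pname] + ACC[accid]

def interval_content(members):
    """Pitch classes above the lowest tone, reduced to one octave."""
    notes = sorted(semitone(*m) for m in members)
    return sorted({(n - notes[0]) % 12 for n in notes})

HALF_DIMINISHED = [0, 3, 6, 10]  # root, minor 3rd, dim 5th, minor 7th

# D-7b5 (d f a-flat c), here voiced with the 7th above the root:
members = [("d", None, 4), ("f", None, 4), ("a", "f", 4), ("c", None, 5)]
is_half_dim = interval_content(members) == HALF_DIMINISHED
```

Note that the sketch assumes the root is the lowest tone, which is exactly the ambiguity Etienne raises about encoding the root: voice the c below the d and the intervals come out differently.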
Le 02/12/2019 20:55, « Roland, Perry D (pdr4h) » <pdr4h at virginia.edu> a écrit :
> [Perry's reply of 2 December 20:55, quoted in full above]

_______________________________________________
mei-l mailing list
mei-l at lists.uni-paderborn.de
https://lists.uni-paderborn.de/mailman/listinfo/mei-l

From rfreedma at haverford.edu Mon Dec 2 23:38:33 2019
From: rfreedma at haverford.edu (Richard Freedman)
Date: Mon, 2 Dec 2019 23:38:33 +0100
Subject: [MEI-L] Call for Papers: Music Encoding Conference, Tufts University, May 26-29, 2020
Message-ID: <CA+zvZGfkoEwW6gOAKS1JkREfa7NsU6PCQc+nRHQpCQXOzZzZPQ@mail.gmail.com>

On behalf of the Organizing and Program Committees of the Music Encoding Conference, I am pleased to announce the call for proposals for papers, workshops, poster sessions, and other activities to be offered during the annual meeting, hosted by Tufts University from May 26-29, 2020. All the relevant information is posted below, and in the attached PDF, which you should feel free to circulate widely.

With best wishes for a productive remainder to the term,

Richard Freedman

Call for Papers: Music Encoding Conference, 26-29 May, 2020 (Boston, MA)

The Music Encoding Conference is the annual meeting of the Music Encoding Initiative (MEI) community and all who are interested in the digital representation of music. We are pleased to announce our call for papers, posters, panels, and workshops. The deadline for submission is 22 December 2019.
(Note that this is a firm deadline--there will be no extensions.)

Music encoding is a critical component for fields and areas of study including computational or digital musicology, digital editions, symbolic music information retrieval, and digital libraries. This event brings together specialists from various music research communities, including technologists, librarians, music scholars, and students, and provides an opportunity to learn from and engage with each other. The MEC will take place 26-29 May 2020 at Tufts University in Medford, MA (in metropolitan Boston), hosted by Tisch Library and Lilly Music Library. It is co-sponsored with the Digital Scholarship Group at Northeastern University Library. For detailed information about the venue and local arrangements, see the MEC website: https://music-encoding.org/conference/2020/.

Background

The study of music encoding and its applications has emerged as a critical area of interest among scholars, librarians, publishers, and the wider music industry. The Music Encoding Conference has become the foremost international forum where researchers and practitioners from across these varied fields can meet and explore new developments in music encoding and its use. The Conference celebrates a multidisciplinary program, combining the latest advances from established music encodings, novel technical proposals and encoding extensions, and the presentation or evaluation of new practical applications of music encoding (e.g. in academic study, libraries, editions, commercial products). Pre-conference workshops provide an opportunity to quickly engage with best practice in the community. Newcomers are encouraged to submit to the main program with articulations of the potential for music encoding in their work, highlighting strengths and weaknesses of existing approaches within this context.
Following the formal program, an unconference session fosters collaboration in the community through the meeting of Interest Groups and self-selected discussions on hot topics that emerge during the conference. Interest Groups can also choose to meet May 24, 25, or 26 in various spaces generously provided by the host library. (Please be in touch with the conference organizers with requests to reserve these spaces.)

The program welcomes contributions from all those working on, or with, any music encoding. In addition, the Conference serves as a focus event for the Music Encoding Initiative community, with its annual community meeting scheduled the day following the main program. We particularly seek to broaden the scope of musical repertories considered, and to provide a welcoming, inclusive community for all who are interested in this work.

Participants are encouraged to attend all four days of the MEC:

- May 26: pre-conference workshops, keynote speaker, and opening reception
- May 27: main conference (papers, posters, sessions)
- May 28: main conference (papers, posters, sessions, and closing keynote)
- May 29: community day (open community meeting in the morning, hackathon and interest groups)

Topics

The conference welcomes contributions from all those who are developing or applying music encodings in their work and research. Topics include, but are not limited to:

- data structures for music encoding
- music encoding standardisation
- music encoding interoperability / universality
- methodologies for encoding, music editing, description and analysis
- computational analysis of encoded music
- rendering of symbolic music data in audio and graphical forms
- conceptual encoding of relationships between multimodal music forms (e.g.
symbolic music data, encoded text, facsimile images, audio)
- capture, interchange, and re-purposing of musical data and metadata
- ontologies, authority files, and linked data in music encoding and description
- (symbolic) music information retrieval using music encoding
- evaluation of music encodings
- best practice in approaches to music encoding and the use or application of music encodings in:
  - music theory and analysis
  - digital musicology and, more broadly, digital humanities
  - music digital libraries
  - digital editions
  - bibliographies and bibliographic studies
  - catalogues
  - collection management
  - composition
  - performance
  - teaching and learning
  - search and browsing
  - multimedia music presentation, exploration, and exhibition

Submissions

Authors are invited to upload their anonymized submission for review to our ConfTool website: https://www.conftool.net/music-encoding2020/. The final (and definitive) deadline for all submissions is 22 December 2019. ConfTool accepts abstracts as PDF files only. The submission to ConfTool must include:

- name(s) of author(s)
- title
- abstract (see below for maximum lengths)
- current or most recent institutional affiliation of author(s) and e-mail address
- proposal type: paper, poster, panel session, or workshop

All identifying information must be provided in the corresponding fields of ConfTool only, while the submitted PDF must anonymize the author's details. Paper and poster proposals must include an abstract of no more than 1000 words. Relevant bibliographic references may be included above this limit. Panel discussion proposal abstracts must be no longer than 2000 words, and describe the topic and nature of the discussion, along with short biographies of the participants. Panel discussions are not expected to be a set of papers which could otherwise be submitted as individual papers.
Proposals for half- or full-day pre-conference workshops, to be held on May 26th, should include the duration of the proposed workshop, as well as its logistical and technical requirements. The program committee will communicate the results of its deliberations on or about January 31, 2020. In case of questions, feel free to contact: conference2020 at music-encoding.org. Program Committee - Vincent Besson, Centre d'études supérieures de la Renaissance, Université de Tours - Margrethe Bue, National Library of Norway - Joy Calico, Vanderbilt University - Elsa De Luca, NOVA University of Lisbon - Richard Freedman (Committee Chair), Haverford College - Stefan Münnich, University of Basel - Anna Plaksin, Max Weber Stiftung, Bonn - David Weigl, University of Music and Performing Arts Vienna Local organizing Committee - Anna Kijas (Committee Chair), Head, Lilly Music Library, Tufts University - Julie-Ann Bryson, Library Coordinator, Lilly Music Library, Tufts University - Sarah Connell, Assistant Director, Women Writers Project, Northeastern University - Julia Flanders, Digital Scholarship Group Director, Northeastern University - Jessica Fulkerson, Lecturer in Music, Tufts University -- Richard Freedman Professor of Music John C. Whitehead '43 Professor of Humanities Associate Provost for Curricular Development Haverford College Haverford, PA 19041 610-896-1007 610-896-4902 (fax) http://www.haverford.edu/users/rfreedma Schedule meeting time: https://goo.gl/3KN2hr -------------- next part -------------- An HTML attachment was scrubbed... URL: <http://lists.uni-paderborn.de/pipermail/mei-l/attachments/20191202/abeb1a4f/attachment.html> -------------- next part -------------- A non-text attachment was scrubbed... 
Name: MEC CfP 2020.pdf Type: application/pdf Size: 49848 bytes Desc: not available URL: <http://lists.uni-paderborn.de/pipermail/mei-l/attachments/20191202/abeb1a4f/attachment.pdf> From ul at openlilylib.org Tue Dec 3 14:10:13 2019 From: ul at openlilylib.org (Urs Liska) Date: Tue, 3 Dec 2019 14:10:13 +0100 Subject: [MEI-L] Notensatz im 21. Jahrhundert / Music Engraving in the 21st Century Message-ID: <e84433e3-4c74-d264-db48-b17c6ff50f35@openlilylib.org> [ Teilen erwünscht / sharing encouraged! see below for an English version of this invitation ] Sehr geehrte Damen und Herren! Hiermit möchten wir Sie herzlich zu unserer Konferenz   Notensatz im 21. Jahrhundert   Entwicklungen und Perspektiven   https://www.uni-mozarteum.at/de/kunst/notensatz-konferenz.php einladen, die von Freitag, 17. Jänner, bis Sonntag, 19. Jänner 2020 in der Universität Mozarteum in Salzburg, Mirabellplatz 1, stattfinden wird. Als Keynote Speaker konnten wir Elaine Gould gewinnen; sie ist Autorin des Standardwerks über Musiktypographie »Behind Bars« (auf Deutsch erschienen als »Hals über Kopf«) und wird über die heutige Relevanz von hochqualitativem Notensatz sprechen. Der Eintritt zur Konferenz ist frei.  Trotzdem bitten wir um eine formlose Anmeldung per E-Mail an »notensatz-konferenz at moz.ac.at«. Falls Sie aktiv an einem der Workshops im MediaLab teilnehmen wollen, teilen Sie uns das bitte mit – die Plätze sind limitiert und werden in der Reihenfolge der Anmeldung per E-Mail vergeben. Die Konferenzsprachen sind Deutsch und Englisch.                               *   *   * Computer sind überall heutzutage.  Auch in der Musikwelt werden Computer vielseitig eingesetzt – für den Notensatz wurden sie sogar unverzichtbar.  Das alte Handwerk des Notenstechers ist ausgestorben, und wir verlassen uns stattdessen auf Programme, um über Jahrhunderte angesammeltes Wissen und Ästhetik in den Notensatz einzubringen. 
Die sich dadurch ergebenden Gelegenheiten und Herausforderungen im Großen wie im Kleinen müssen gut bedacht sein, um musikalische Computeranwendungen fit für die digitale Zukunft zu machen. Die breit gestreuten Vortragsthemen versuchen, die angesprochenen Aspekte im Lauf von zwei Tagen von möglichst vielen Seiten zu beleuchten.  Wir freuen uns auch, Workshops und Demonstrationen von Notensatz-Software anbieten zu können – im Besonderen wird die Firma Steinberg Media Technologies, welche die Konferenz großzügig mit Software-Lizenzen für das MediaLab der Universität unterstützt, ihr Notensatz-Flaggschiff »Dorico 3« präsentieren. Der dritte Tag ist unser Unconference Day; derzeit geplant sind Meetings für LilyPond, Frescobaldi und die Facebook-Gruppe »Music Engraving Tips«. Anbei finden Sie das detaillierte Programm (Änderungen vorbehalten).   Veranstalter: Abteilung für Komposition und Musiktheorie der Universität Mozarteum                 In Zusammenarbeit mit der Gesellschaft für Musiktheorie (GMTH)   Programmkomitee: Werner Lemberg (Universität Mozarteum Salzburg)                    Lukas-Fabian Moser (Universität Mozarteum Salzburg)                    Urs Liska (Hochschule für Musik Freiburg) ====================================================================== Ladies and Gentlemen, we would like to invite you to our conference   Music Engraving in the 21st Century   Developments and Perspectives https://www.uni-mozarteum.at/en/kunst/music-engraving-conference.php taking place at the Mozarteum Music University in Salzburg, Mirabellplatz 1, Austria, from January 17th (Fr) to January 19th (Su), 2020. The keynote speaker will be Elaine Gould, author of “the” authoritative book on music engraving, “Behind Bars,” discussing the value of high-quality music engraving. There are no entrance fees to the conference.  However, we ask for an informal application via e-mail to <notensatz-konferenz at moz.ac.at>. 
If you want to actively participate in one of the workshops held at the MediaLab, please tell us – the number of slots is limited, and interested people will be ranked according to the arrival time of e-mail answers. Conference languages will be German and English.                               *   *   * Computers are everywhere today.  Even in the music world, computers have become a tool used for many purposes; especially for music engraving, they are now indispensable.  The old craft of music engravers has become extinct, and we now rely on programs to apply the knowledge and aesthetics accumulated over centuries.  This brings opportunities and challenges that have to be considered as we head towards the future of musical applications in the digital age. The various talks will try to highlight these aspects from different perspectives within two days of the conference.  We are also happy to be able to offer workshops and presentations of music notation software.  In particular, we look forward to the presentation of the new music notation flagship “Dorico 3” by Steinberg Media Technologies, the company which generously supports the conference with software licenses for the university's MediaLab. The third day is reserved for developer and user group meetings; the current plan is to hold events for LilyPond, Frescobaldi, and the “Music Engraving Tips” Facebook group. Attached you can find the detailed program (subject to modifications).   Host: Department for Composition and Music Theory, Music University Mozarteum          In cooperation with the Society for Music Theory (GMTH).   Program Committee: Werner Lemberg (University Mozarteum Salzburg)                      Lukas-Fabian Moser (University Mozarteum Salzburg)                      Urs Liska (University of Music Freiburg)   <notensatz-konferenz at moz.ac.at> -------------- next part -------------- An HTML attachment was scrubbed... 
URL: <http://lists.uni-paderborn.de/pipermail/mei-l/attachments/20191203/6bc0f354/attachment.html> -------------- next part -------------- A non-text attachment was scrubbed... Name: notensatz-konferenz_programm_de.pdf Type: application/pdf Size: 107179 bytes Desc: not available URL: <http://lists.uni-paderborn.de/pipermail/mei-l/attachments/20191203/6bc0f354/attachment.pdf> From pdr4h at virginia.edu Tue Dec 3 17:13:58 2019 From: pdr4h at virginia.edu (Roland, Perry D (pdr4h)) Date: Tue, 3 Dec 2019 16:13:58 +0000 Subject: [MEI-L] Jazz Chords in MEI ? In-Reply-To: <DA0B4A3B.11F64%efreja@wanadoo.fr> References: <BN6PR13MB0899FFDD3EEEFE520405DF9D9F430@BN6PR13MB0899.namprd13.prod.outlook.com> <DA0B4A3B.11F64%efreja@wanadoo.fr> Message-ID: <BN6PR13MB0899FBFB66C8F6565FBD50529F420@BN6PR13MB0899.namprd13.prod.outlook.com> Hi Etienne, Of course using SVG for chord labels is not ideal, but the only other possibility is to create a renderer smart enough to handle all the various ways in which labels can be presented. This is not so difficult for your "D7b5/G" example; that is, the behavior expected when processing the <stack> element could be tweaked, but ultimately there's a limit on how intelligent a renderer can be. Turning to harmonic analysis based on harmonic labels, the obvious approach is to use the content of the <harm> element itself. That is, the string "D-7b5" already conveys the information you're looking to encode -- a chord containing a diminished 5th with a root of D -- so there's no need to capture it elsewhere. If you're interested in finding diminished chords with a root other than D, then a search for <harm> containing "-7b5" or a regular expression that matches the root, such as "[A-G][#-]?7b5", would work. Another possibility is to use @inth on <harm> to encode the intervallic content of the chord. 
For example, <harm inth="m3 d5 m7"> captures the intervals of the chord (so really "m3d5m7" is a substitute for "half-diminished"), but it doesn't capture the root. That may be a useful feature to add, probably via a @root attribute, especially for those cases where the label doesn't indicate the root of the chord. If that's done, then it's not unreasonable to add another attribute to hold a chord quality value, like "half-diminished", etc. However, one could use the @type attribute to hold this data or use the @class attribute to point to a value in a formal taxonomy. To move this discussion toward actualization, I suggest you file an issue in the MEI github repo to add @root to <harm>. Hope this helps, -- p. From efreja at wanadoo.fr Wed Dec 4 23:22:04 2019 From: efreja at wanadoo.fr (Etienne =?UTF-8?B?RnLDqWphdmlsbGU=?=) Date: Wed, 04 Dec 2019 23:22:04 +0100 Subject: [MEI-L] Jazz Chords in MEI ? In-Reply-To: <BN6PR13MB0899FBFB66C8F6565FBD50529F420@BN6PR13MB0899.namprd13.prod.outlook.com> Message-ID: <DA0DE5DF.11FA9%efreja@wanadoo.fr> Hello Perry, Thanks again for the answer. In any case, I think harmonic analysis based on harmonic labels is not a good idea. The Jazz harmonic labels aren’t in any case standardized. The Major 7th can be Maj7 or △ , min can be min or - , the diminished degree can be dim or ° all this can be superscripted or not, that’s the problem… And it’s even worse if there is a necessity to use SVG for bass chords (I don’t know how the <stack> element works). However the @inth on <harm> to encode the intervallic content of the chord seems to be the best place and I guess it should be able to encode all possible jazz chords (to check with https://en.wikipedia.org/wiki/Chord_names_and_symbols_(popular_music)) Agreed also that a @root attribute is missing on the <harm> element. Keeping the bass note as a chordMember of the scoreDef is acceptable as it’s an inversion of the chord, thus a voicing, that doesn’t change the harmonic structure. 
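The normalization problem raised here — the same chord quality written as "Maj7", "△", "min", "-", or "°" — can be sketched in a few lines of Python. The variant table and canonical spellings below are illustrative assumptions, not any standard mapping; a real system would have to be tailored to its own labeling conventions, as discussed in this thread:

```python
import re

# Hypothetical table mapping common notational variants of jazz chord
# qualities to one canonical spelling (illustrative only).
VARIANTS = {
    "\u25b3": "Maj7",   # triangle sign for major seventh
    "min": "m",
    "-": "m",           # caution: "-" also appears as an accidental in some styles
    "\u00b0": "dim",    # degree sign for diminished
}

def normalize(label: str) -> str:
    """Rewrite a chord label into a canonical form (sketch, not a standard)."""
    # Split off the root (A-G plus optional accidental) first, so that a "-"
    # used as a minor sign is not confused with an accidental on the root.
    m = re.match(r"([A-G][#b]?)(.*)", label)
    if not m:
        return label
    root, quality = m.groups()
    for variant, canonical in VARIANTS.items():
        quality = quality.replace(variant, canonical)
    return root + quality

# Once labels are normalized, a single pattern in the spirit of the
# "[A-G][#-]?7b5" regex suggested above is enough for searching:
half_dim = re.compile(r"[A-G][#b]?m7b5")

labels = ["D-7b5", "Dmin7b5", "F#m7b5"]
print([normalize(l) for l in labels])                      # ['Dm7b5', 'Dm7b5', 'F#m7b5']
print([bool(half_dim.fullmatch(normalize(l))) for l in labels])  # [True, True, True]
```

The point of the sketch is that the regex itself stays simple once a (system-specific) normalization step absorbs the notational variation.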
The difficulty is to identify that bass note inside the scoreDef. Converting from/to MusicXML a fragment like this : (G6/D chord) <harmony> <root> <root-step>G</root-step> </root> <kind text="6">major-sixth</kind> <bass> <bass-step>D</bass-step> </bass> </harmony> wouldn’t be too easy. Therefore I think a @bass attribute on <harm> could encode this in a more efficient manner. Thanks Le 03/12/2019 17:13, « Roland, Perry D (pdr4h) » <pdr4h at virginia.edu> a écrit : > Hi Etienne, > > Of course using SVG for chord labels is not ideal, but the only other > possibility is to create a renderer smart enough to handle all the various > ways in which labels can be presented. This is not so difficult for your > "D7b5/G" example; that is, the behavior expected when processing the <stack> > element could be tweaked, but ultimately there's a limit on how intelligent a > renderer can be. > > Turning to harmonic analysis based on harmonic labels, the obvious approach is > to use the content of the <harm> element itself. That is, the string "D-7b5" > already conveys the information you're looking to encode -- a chord containing > a diminished 5th with a root of D -- so there's no need to capture it > elsewhere. If you're interested in finding diminished chords with a root > other than D, then a search for <harm> containing "-7b5" or a regular > expression that matches the root, such as "[A-G][#-]?7b5", would work. > > Another possibility is to use @inth on <harm> to encode the intervallic > content of the chord. For example, <harm inth="m3 d5 m7"> captures the > intervals of the chord (so really "m3d5m7" is a substitute for > "half-diminished"), but it doesn't capture the root. That may be a useful > feature to add, probably via a @root attribute, especially for those cases > where the label doesn't indicate the root of the chord. If that's done, then > it's not unreasonable to add another attribute to hold a chord quality value, > like "half-diminished", etc. 
However, one could use the @type attribute to > hold this data or use the @class attribute to point to a value in a formal > taxonomy. > > To move this discussion toward actualization, I suggest you file an issue in > the MEI github repo to add @root to <harm>. > > Hope this helps, > > -- > p. > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l > -------------- next part -------------- An HTML attachment was scrubbed... URL: <http://lists.uni-paderborn.de/pipermail/mei-l/attachments/20191204/7aa473cc/attachment.html> From pdr4h at virginia.edu Thu Dec 5 16:21:44 2019 From: pdr4h at virginia.edu (Roland, Perry D (pdr4h)) Date: Thu, 5 Dec 2019 15:21:44 +0000 Subject: [MEI-L] Jazz Chords in MEI ? In-Reply-To: <DA0DE5DF.11FA9%efreja@wanadoo.fr> References: <BN6PR13MB0899FBFB66C8F6565FBD50529F420@BN6PR13MB0899.namprd13.prod.outlook.com> <DA0DE5DF.11FA9%efreja@wanadoo.fr> Message-ID: <MWHPR13MB0910055F801E0BA86B4D22939F5C0@MWHPR13MB0910.namprd13.prod.outlook.com> Etienne, Analysis based on the labels would not be universally applicable. It would have to be tailored to the labeling system in use. In other words, there’s little to no hope of creating a regex that matches all the various ways chord labels are written. But, I believe that it’s rare for labeling to switch between styles. So, as long as “Maj7” is used consistently, and not mixed with △, then there’s no problem. Still, I agree that this is not ideal. Let me reiterate, SVG doesn’t have to be used for chords over a given bass note, e.g., “Dm7/G”. It’s your requirement that the chord be displayed “directly over” the bass note, similar to the way mathematical fractions are written, as opposed to being written horizontally as it is above, that currently demands the use of SVG. Adding @bass at the same time as @root sounds reasonable. -- p. 
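For the MusicXML round-trip mentioned above, the mapping from a MusicXML <harmony> to an MEI <harm> carrying @root and @bass could look roughly as follows. This is a sketch only: @root and @bass are the attributes *proposed* in this thread, not part of MEI at the time of writing, and the reassembled label "G6/D" follows the thread's example rather than any prescribed rendering.

```python
import xml.etree.ElementTree as ET

# Etienne's G6/D example, verbatim from the thread.
musicxml = """
<harmony>
  <root><root-step>G</root-step></root>
  <kind text="6">major-sixth</kind>
  <bass><bass-step>D</bass-step></bass>
</harmony>
"""

def harmony_to_harm(xml_fragment: str) -> str:
    """Map a MusicXML <harmony> to a hypothetical <harm root="..." bass="...">."""
    h = ET.fromstring(xml_fragment)
    root = h.findtext("root/root-step")
    kind = h.find("kind").get("text", "")      # the display text, e.g. "6"
    bass = h.findtext("bass/bass-step")        # None when there is no inversion
    harm = ET.Element("harm", {"root": root})
    label = root + kind
    if bass:
        harm.set("bass", bass)                 # proposed attribute, not yet in MEI
        label += "/" + bass
    harm.text = label                          # the human-readable chord label
    return ET.tostring(harm, encoding="unicode")

print(harmony_to_harm(musicxml))   # <harm root="G" bass="D">G6/D</harm>
```

With the bass note carried in its own attribute, the conversion no longer has to dig the inversion out of a scoreDef chordMember.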
From: mei-l <mei-l-bounces at lists.uni-paderborn.de> On Behalf Of Etienne Fréjaville Sent: Wednesday, December 4, 2019 5:22 PM To: Music Encoding Initiative <mei-l at lists.uni-paderborn.de> Subject: Re: [MEI-L] Jazz Chords in MEI ? Hello Perry, Thanks again for the answer. In any case, I think harmonic analysis based on harmonic labels is not a good idea. The Jazz harmonic labels aren’t in any case standardized. The Major 7th can be Maj7 or △ , min can be min or - , the diminished degree can be dim or ° all this can be superscripted or not, that’s the problem… And it’s even worse if there is a necessity to use SVG for bass chords (I don’t know how the <stack> element works). However the @inth on <harm> to encode the intervallic content of the chord seems to be the best place and I guess it should be able to encode all possible jazz chords (to check with https://en.wikipedia.org/wiki/Chord_names_and_symbols_(popular_music)) Agreed also that a @root attribute is missing on the <harm> element. Keeping the bass note as a chordMember of the scoreDef is acceptable as it’s an inversion of the chord, thus a voicing, that doesn’t change the harmonic structure. The difficulty is to identify that bass note inside the scoreDef. Converting from/to MusicXML a fragment like this : (G6/D chord) <harmony> <root> <root-step>G</root-step> </root> <kind text="6">major-sixth</kind> <bass> <bass-step>D</bass-step> </bass> </harmony> wouldn’t be too easy. Therefore I think a @bass attribute on <harm> could encode this in a more efficient manner. Thanks Le 03/12/2019 17:13, « Roland, Perry D (pdr4h) » <pdr4h at virginia.edu<mailto:pdr4h at virginia.edu>> a écrit : Hi Etienne, Of course using SVG for chord labels is not ideal, but the only other possibility is to create a renderer smart enough to handle all the various ways in which labels can be presented. 
This is not so difficult for your "D7b5/G" example; that is, the behavior expected when processing the <stack> element could be tweaked, but ultimately there's a limit on how intelligent a renderer can be. Turning to harmonic analysis based on harmonic labels, the obvious approach is to use the content of the <harm> element itself. That is, the string "D-7b5" already conveys the information you're looking to encode -- a chord containing a diminished 5th with a root of D -- so there's no need to capture it elsewhere. If you're interested in finding diminished chords with a root other than D, then a search for <harm> containing "-7b5" or a regular expression that matches the root, such as "[A-G][#-]?7b5", would work. Another possibility is to use @inth on <harm> to encode the intervallic content of the chord. For example, <harm inth="m3 d5 m7"> captures the intervals of the chord (so really "m3d5m7" is a substitute for "half-diminished"), but it doesn't capture the root. That may be a useful feature to add, probably via a @root attribute, especially for those cases where the label doesn't indicate the root of the chord. If that's done, then it's not unreasonable to add another attribute to hold a chord quality value, like "half-diminished", etc. However, one could use the @type attribute to hold this data or use the @class attribute to point to a value in a formal taxonomy. To move this discussion toward actualization, I suggest you file an issue in the MEI github repo to add @root to <harm>. Hope this helps, -- p. _______________________________________________ mei-l mailing list mei-l at lists.uni-paderborn.de<mailto:mei-l at lists.uni-paderborn.de> https://lists.uni-paderborn.de/mailman/listinfo/mei-l -------------- next part -------------- An HTML attachment was scrubbed... 
URL: <http://lists.uni-paderborn.de/pipermail/mei-l/attachments/20191205/617df631/attachment.html> From efreja at wanadoo.fr Thu Dec 5 20:44:22 2019 From: efreja at wanadoo.fr (Etienne =?UTF-8?B?RnLDqWphdmlsbGU=?=) Date: Thu, 05 Dec 2019 20:44:22 +0100 Subject: [MEI-L] Jazz Chords in MEI ? In-Reply-To: <MWHPR13MB0910055F801E0BA86B4D22939F5C0@MWHPR13MB0910.namprd13.prod.outlook.com> Message-ID: <DA0F1C45.11FFC%efreja@wanadoo.fr> Thank you Perry. I’m going to file an issue in the MEI github repo on this subject. Etienne. De : "Roland, Perry D (pdr4h)" <pdr4h at virginia.edu> Répondre à : Music Encoding Initiative <mei-l at lists.uni-paderborn.de> Date : jeudi 5 décembre 2019 16:21 À : Music Encoding Initiative <mei-l at lists.uni-paderborn.de> Objet : Re: [MEI-L] Jazz Chords in MEI ? Etienne, Analysis based on the labels would not be universally applicable. It would have to be tailored to the labeling system in use. In other words, there’s little to no hope of creating a regex that matches all the various ways chords labels are written. But, I believe that it’s rare for labeling to switch between styles. So, as long as “Maj7” is used consistently, and not mixed with △, then there’s no problem. Still, I agree that this is not ideal. Let me reiterate, SVG doesn’t have to be used for chords over a given bass note, e.g., “Dm7/G”. It’s your requirement that the chord be displayed “directly over” the bass note, similar to the way mathematical fractions are written, as opposed to being written horizontally as it is above, that currently demands the use of SVG. Adding @bass at the same time as @root sounds reasonable. -- p. From: mei-l <mei-l-bounces at lists.uni-paderborn.de> On Behalf Of Etienne Fréjaville Sent: Wednesday, December 4, 2019 5:22 PM To: Music Encoding Initiative <mei-l at lists.uni-paderborn.de> Subject: Re: [MEI-L] Jazz Chords in MEI ? Hello Perry, Thanks again for the answer. 
In any case, I think harmonic analysis based on harmonic labels is not a good idea. The Jazz harmonic labels aren’t in any case standardized. The Major 7th can be Maj7 or △ , min can be min or - , the diminished degree can be dim or ° all this can be superscripted or not, that’s the problem… And it’s even worse if there is a necessity to use SVG for bass chords (I don’t know how the <stack> element works). However the @inth on <harm> to encode the intervallic content of the chord seems to be the best place and I guess it should be able to encode all possible jazz chords (to check with https://en.wikipedia.org/wiki/Chord_names_and_symbols_(popular_music <https://en.wikipedia.org/wiki/Chord_names_and_symbols_(popular_music> )) Agreed also that a @root attribute is missing on the <harm> element. Keeping the bass note as a chordMember of the scoreDef is acceptable as it’s an inversion of the chord, thus a voicing, that doesn’t change the harmonic structure. The difficulty is to identify that bass note inside the scoreDef. Converting from/to MusicXML a fragment like this : (G6/D chord) <harmony> <root> <root-step>G</root-step> </root> <kind text="6">major-sixth</kind> <bass> <bass-step>D</bass-step> </bass> </harmony> wouldn’t be too easy. Therefore I think a @bass attribute on <harm> could encode this in a more efficient manner. Thanks Le 03/12/2019 17:13, « Roland, Perry D (pdr4h) » <pdr4h at virginia.edu <mailto:pdr4h at virginia.edu> > a écrit : > > Hi Etienne, > > > > Of course using SVG for chord labels is not ideal, but the only other > possibility is to create a renderer smart enough to handle all the various > ways in which labels can be presented. This is not so difficult for your > "D7b5/G" example; that is, the behavior expected when processing the <stack> > element could be tweaked, but ultimately there's a limit on how intelligent a > renderer can be. 
> > > > Turning to harmonic analysis based on harmonic labels, the obvious approach is > to use the content of the <harm> element itself. That is, the string "D-7b5" > already conveys the information you're looking to encode -- a chord containing > a diminished 5th with a root of D -- so there's no need to capture it > elsewhere. If you're interested in finding diminished chords with a root > other than D, then a search for <harm> containing "-7b5" or a regular > expression that matches the root, such as "[A-G][#-]?7b5", would work. > > > > Another possibility is to use @inth on <harm> to encode the intervallic > content of the chord. For example, <harm inth="m3 d5 m7"> captures the > intervals of the chord (so really "m3d5m7" is a substitute for > "half-diminished"), but it doesn't capture the root. That may be a useful > feature to add, probably via a @root attribute, especially for those cases > where the label doesn't indicate the root of the chord. If that's done, then > it's not unreasonable to add another attribute to hold a chord quality value, > like "half-diminished", etc. However, one could use the @type attribute to > hold this data or use the @class attribute to point to a value in a formal > taxonomy. > > > > To move this discussion toward actualization, I suggest you file an issue in > the MEI github repo to add @root to <harm>. > > > > Hope this helps, > > > > -- > > p. > > _______________________________________________ > > mei-l mailing list > > mei-l at lists.uni-paderborn.de <mailto:mei-l at lists.uni-paderborn.de> > > https://lists.uni-paderborn.de/mailman/listinfo/mei-l > <https://lists.uni-paderborn.de/mailman/listinfo/mei-l> > > _______________________________________________ mei-l mailing list mei-l at lists.uni-paderborn.de https://lists.uni-paderborn.de/mailman/listinfo/mei-l -------------- next part -------------- An HTML attachment was scrubbed... 
URL: <http://lists.uni-paderborn.de/pipermail/mei-l/attachments/20191205/e74be94d/attachment.html> From b.w.bohl at gmail.com Mon Dec 9 18:22:26 2019 From: b.w.bohl at gmail.com (Benjamin W. Bohl) Date: Mon, 9 Dec 2019 18:22:26 +0100 Subject: [MEI-L] MEI Board elections 2019: Call for candidates In-Reply-To: <1652B1B9-7555-401A-B6EC-4B0787AC1DC6@gmail.com> References: <1652B1B9-7555-401A-B6EC-4B0787AC1DC6@gmail.com> Message-ID: <9F309611-2D30-4FE1-BF67-058C9D90BC92@gmail.com> Dear all, just a gentle reminder that the nomination phase for this year’s elections ends in two days on Wednesday 11 December, 2019. For more information, see below. Thank you for your nominations, Peter Stadler and Benjamin W. Bohl MEI election administrators 2019 by appointment of the MEI Board > On 13. Nov 2019, at 16:09, Benjamin W. Bohl <b.w.bohl at gmail.com> wrote: > > **Too long to read?** visit: https://forms.gle/XwJR95FYn1xYcrDK7 > > Dear MEI Community, > > on 31 December 2019 the terms of three MEI Board members will come to an end. The entire Board wishes to thank Andrew Hankinson, Johannes Kepper and Eleanor Selfridge-Field for their service and dedication to the MEI community. > > In order to fill these soon-to-be-vacant positions, elections must be held. The election process will take place in accordance with the Music Encoding Initiative By-Laws.[1] > > To nominate a candidate, please do so via this form: https://forms.gle/XwJR95FYn1xYcrDK7 > > The timeline of the elections will be as follows: > > Nomination phase (13 November – 11 December, 2019) > - Nominations can be sent by filling in the nomination form between 13 November – 11 December, 2019.[2] > - Any person who today is a subscriber of MEI-L has the right to nominate candidates. > - Nominees have to be members of the MEI-L mailing list but may register until 11 December 2019. > - Individuals who have previously served on the Board are eligible for nomination and re-appointment. > - Self nominations are welcome. 
> - Individuals will be informed of their nomination when received and asked to confirm their willingness to serve on the Board. > - Acceptance of a nomination requires submission of a short CV and a personal statement of interest in MEI (a maximum of 200 words each) to elections at music-encoding.org by 12 December, 2019. Candidates who have been nominated but who have not confirmed their willingness will not be included on the ballot. > > Election phase (13 December – 18 December, 2019) > - The voting period will be open from 13 December – 18 December, 2019. > - The election will take place using OpaVote and the Ranked Choice Voting method (https://www.opavote.com/methods/ranked-choice-voting). > - You will be informed about the election and your individual voting tokens in a separate e-mail. > > Post election phase > - Election results will be announced after the elections have closed. > - The term of the elected candidates starts on 1 January 2020. > - The first meeting of the new MEI Board will be held on Wednesday, 15 January 2020, 8:00 pm in Germany (e.g. 7:00 pm in the UK, or 11:00 am USA west coast, or 2:00 pm USA east coast) > > The selection of Board members is an opportunity for each of you to have a voice in determining the future of MEI. > > Thank you for your support, > Peter Stadler and Benjamin W. 
Bohl > MEI election administrators 2019 > by appointment of the MEI Board > > [1] The By-laws of the Music Encoding Initiative are available online at: http://music-encoding.org/community/mei-by-laws.html > [2] All deadlines are referenced to 11:59 pm (UTC) > From ul at openlilylib.org Wed Dec 11 13:06:02 2019 From: ul at openlilylib.org (Urs Liska) Date: Wed, 11 Dec 2019 12:06:02 +0000 Subject: [MEI-L] MEI scores database Message-ID: <0076e53503f94317ff6b8127942545ec@openlilylib.org> Dear MEI list, for a university project that will be able to display MEI scores we were asking ourselves if there already is a collection/database/link-list of freely available MEI encodings somewhere. Did anyone start a project collecting such resources and making it available either simply as a list or even in some form that can directly be referred to? Thanks Urs From thompease at gmail.com Wed Dec 11 13:50:56 2019 From: thompease at gmail.com (Thom) Date: Wed, 11 Dec 2019 07:50:56 -0500 Subject: [MEI-L] MEI scores database In-Reply-To: <0076e53503f94317ff6b8127942545ec@openlilylib.org> References: <0076e53503f94317ff6b8127942545ec@openlilylib.org> Message-ID: <CAN_nrX5sLT_9n9bt20Dga-RGtAh3TAQPJbwPtjbyys_TNEUz4w@mail.gmail.com> Urs, I would suggest asking on MLA-L and/or IAML-L as well. Thom Pease Library of Congress On Wed, Dec 11, 2019 at 7:06 AM Urs Liska <ul at openlilylib.org> wrote: > Dear MEI list, > > for a university project that will be able to display MEI scores we were > asking ourselves if there already is a collection/database/link-list of > freely available MEI encodings somewhere. > > Did anyone start a project collecting such resources and making it > available either simply as a list or even in some form that can directly be > referred to? 
> > Thanks > Urs > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l > -------------- next part -------------- An HTML attachment was scrubbed... URL: <http://lists.uni-paderborn.de/pipermail/mei-l/attachments/20191211/2ad9821e/attachment.html> From T.Crawford at gold.ac.uk Wed Dec 11 16:24:46 2019 From: T.Crawford at gold.ac.uk (Tim Crawford) Date: Wed, 11 Dec 2019 15:24:46 +0000 Subject: [MEI-L] Extra technical meeting dedicated to TabMEI - 18-19 Dec 2019 In-Reply-To: <243B5A7D-5860-4B36-A8DC-F51885401F7F@gold.ac.uk> References: <243B5A7D-5860-4B36-A8DC-F51885401F7F@gold.ac.uk> Message-ID: <D3E563F4-9614-41D4-8674-72051AB71FE3@gold.ac.uk> Dear all, Could I ask everyone who intends to come to this meeting to get in touch with me off-list? (Reinier, I know about you.) Many thanks! Tim On 19 Nov 2019, at 11:56, Tim Crawford <T.Crawford at gold.ac.uk<mailto:T.Crawford at gold.ac.uk>> wrote: Dear all, We shall be holding a 2-day technical MEI meeting here at Goldsmiths in London, Wednesday-Thursday 18-19 December 2019, to attempt to get the work that has been done on a revised tablature module for MEI (working title TabMEI) into a releasable state in the near future. We shall be focussing on the MEI specification for lute and (modern) guitar tablature, in order to greatly expand the range of MEI’s coverage of both historical and modern popular music. While we want to include tablatures for keyboard and other instruments in future releases, they do not figure greatly in the current TabMEI effort. However, we would welcome participation of anyone interested in working on them, or even reading their suggestions in this mailing list. The meeting is booked to take place in Room 140 of the (main) Richard Hoggart Building at Goldsmiths (RHB 140). 
This link should help you find the room easily: https://www.gold.ac.uk/find-us/rhb-room-finder/?room=140 Directions for getting to Goldsmiths are at: https://www.gold.ac.uk/find-us/ The meeting dates have been deliberately timed to follow on from the DMRN workshop at Queen Mary, University of London, which is an annual event, and takes place this year on Tuesday, 17 December 2019: https://www.qmul.ac.uk/dmrn/dmrn-14/ The DMRN programme is not yet finalised, but should be fixed some time next week. I would be grateful if everyone wishing to come to the meeting could confirm the days they want to attend by email to me. Although this is not absolutely essential it helps greatly with planning the supply of refreshments, etc. See you on 18 December! Tim Prof. Tim Crawford Professorial Research Fellow in Computational Musicology Department of Computing Goldsmiths College London SE14 6NW U.K. t.crawford at gold.ac.uk<mailto:t.crawford at gold.ac.uk> _______________________________________________ mei-l mailing list mei-l at lists.uni-paderborn.de<mailto:mei-l at lists.uni-paderborn.de> https://lists.uni-paderborn.de/mailman/listinfo/mei-l -------------- next part -------------- An HTML attachment was scrubbed... URL: <http://lists.uni-paderborn.de/pipermail/mei-l/attachments/20191211/a4c6cb26/attachment.html> From T.Crawford at gold.ac.uk Wed Dec 11 16:27:55 2019 From: T.Crawford at gold.ac.uk (Tim Crawford) Date: Wed, 11 Dec 2019 15:27:55 +0000 Subject: [MEI-L] Extra technical meeting dedicated to TabMEI - 18-19 Dec 2019 In-Reply-To: <D3E563F4-9614-41D4-8674-72051AB71FE3@gold.ac.uk> References: <243B5A7D-5860-4B36-A8DC-F51885401F7F@gold.ac.uk> <D3E563F4-9614-41D4-8674-72051AB71FE3@gold.ac.uk> Message-ID: <AAE6181E-22DC-4835-8531-71E276B55A97@gold.ac.uk> Excuse the repeated posting, but the URL for DMRN 14 has changed subtly. 
It should be: https://www.qmul.ac.uk/dmrn/dmrn14/ Tim On 11 Dec 2019, at 15:24, Tim Crawford <T.Crawford at gold.ac.uk<mailto:T.Crawford at gold.ac.uk>> wrote: Dear all, Could I ask everyone who intends to come to this meeting to get in touch with me off-list? (Reinier, I know about you.) Many thanks! Tim On 19 Nov 2019, at 11:56, Tim Crawford <T.Crawford at gold.ac.uk<mailto:T.Crawford at gold.ac.uk>> wrote: Dear all, We shall be holding a 2-day technical MEI meeting here at Goldsmiths in London, Wednesday-Thursday 18-19 December 2019, to attempt to get the work that has been done on a revised tablature module for MEI (working title TabMEI) into a releasable state in the near future. We shall be focussing on the MEI specification for lute and (modern) guitar tablature, in order to greatly expand the range of MEI’s coverage of both historical and modern popular music. While we want to include tablatures for keyboard and other instruments in future releases, they do not figure greatly in the current TabMEI effort. However, we would welcome participation of anyone interested in working on them, or even reading their suggestions in this mailing list. The meeting is booked to take place in Room 140 of the (main) Richard Hoggart Building at Goldsmiths (RHB 140). This link should help you find the room easily: https://www.gold.ac.uk/find-us/rhb-room-finder/?room=140 Directions for getting to Goldsmiths are at: https://www.gold.ac.uk/find-us/ The meeting dates have been deliberately timed to follow on from the DMRN workshop at Queen Mary, University of London, which is an annual event, and takes place this year on Tuesday, 17 December 2019: https://www.qmul.ac.uk/dmrn/dmrn-14/ The DMRN programme is not yet finalised, but should be fixed some time next week. I would be grateful if everyone wishing to come to the meeting could confirm the days they want to attend by email to me. 
Although this is not absolutely essential it helps greatly with planning the supply of refreshments, etc. See you on 18 December! Tim Prof. Tim Crawford Professorial Research Fellow in Computational Musicology Department of Computing Goldsmiths College London SE14 6NW U.K. t.crawford at gold.ac.uk<mailto:t.crawford at gold.ac.uk> _______________________________________________ mei-l mailing list mei-l at lists.uni-paderborn.de<mailto:mei-l at lists.uni-paderborn.de> https://lists.uni-paderborn.de/mailman/listinfo/mei-l _______________________________________________ mei-l mailing list mei-l at lists.uni-paderborn.de<mailto:mei-l at lists.uni-paderborn.de> https://lists.uni-paderborn.de/mailman/listinfo/mei-l -------------- next part -------------- An HTML attachment was scrubbed... URL: <http://lists.uni-paderborn.de/pipermail/mei-l/attachments/20191211/06ab8bcc/attachment.html> From markus.neuwirth at epfl.ch Wed Dec 4 15:20:25 2019 From: markus.neuwirth at epfl.ch (Neuwirth Markus Franz Josef) Date: Wed, 4 Dec 2019 14:20:25 +0000 Subject: [MEI-L] CfP Empirical Musicology Review: Special Issue on Open Science in Musicology Message-ID: <6eaecd5279304533bf7b11299ff99b07@epfl.ch> ** With apologies for cross-posting ** Please forward to interested colleagues Dear colleagues, We would like to draw your attention to two calls for the journal Empirical Musicology Review. Empirical Musicology Review: Special Issue on Open Science in Musicology Empirical musicology relies crucially on the creation, analysis, publication, and distribution of datasets. Despite the progress made over the past decades in this vibrant field, numerous issues regarding the sharing of data, the reproducibility of research findings, and the general role of transparency remain challenging. 
In many disciplines, these issues are addressed under the umbrella of the Open Science movement and the adherence to FAIR principles for scientific data management (findable, accessible, interoperable, reusable; https://www.go-fair.org/fair-principles<https://www.go-fair.org/fair-principles/>). To advance the state of the art in data-based music research, Empirical Musicology Review is devoting a special issue to a wide discussion of questions related to Open Science and Open Data, and is introducing a new section on data reports that will remain a permanent part of the journal in all subsequent issues. CfP: Research Articles and Think Pieces We invite papers that address general aspects of Open Science / Open Data, discuss challenges in the application of the FAIR principles to music research, or reflect upon methodological and meta questions. Papers may also describe the generation of particular datasets and explore their characteristics in the context of the overall topic of this special issue. We envisage contributions from a wide variety of domains, such as music theory, music psychology, music information retrieval, historical musicology, etc. The data must be accessible in an open repository or database. Papers should be 3000-6000 words in length. CfP: Data Reports Starting with this special issue, EMR is introducing a new section on Data Reports. In order to promote Open Science and to facilitate reproducibility, empirical studies of music are increasingly relying on openly available corpora and datasets. Since the scientific value of creating, cleaning, curating, enabling access, and maintaining data is of the utmost importance, EMR invites researchers to share their datasets and to apply the FAIR principles. Data Reports may describe a variety of datasets such as musical metadata, annotations of musical corpora in symbolic or audio formats, automatically extracted musical features, data from psychological experiments, etc. 
Data Reports should not exceed 2000 words. Please register on http://emusicology.org/ and submit your contribution by 31 March 2020. If you have any further questions, please get in touch with the guest editors: Fabian C. Moss (fabian.moss at epfl.ch<mailto:fabian.moss at epfl.ch>) and Markus Neuwirth (markus.neuwirth at epfl.ch<mailto:markus.neuwirth at epfl.ch>) Dr. Markus Neuwirth Digital and Cognitive Musicology Lab (DCML) École polytechnique fédérale de Lausanne (EPFL) https://dcml.epfl.ch/lab/neuwirth/ http://epfl.academia.edu/MarkusNeuwirth Editor-in-chief of the journal Music Theory and Analysis<https://lup.be/collections/series-music-theory-and-analysis> Series co-editor of the GMTH Proceedings<https://www.gmth.de/proceedings.aspx> Current project (Co-PI): From Bach to the Beatles<https://dcml.epfl.ch/projects/from-bach-to-the-beatles/> (2018-20, funded by the Volkswagen Foundation) -------------- next part -------------- An HTML attachment was scrubbed... URL: <http://lists.uni-paderborn.de/pipermail/mei-l/attachments/20191204/fdb01984/attachment.html> From JALBREC6 at kent.edu Tue Dec 3 22:54:22 2019 From: JALBREC6 at kent.edu (Albrecht, Josh) Date: Tue, 3 Dec 2019 21:54:22 +0000 Subject: [MEI-L] Reminder: CFP for Future Directions of Music Cognition conference due 12/15 Message-ID: <SN6PR08MB5136C6D13C03A5000B2D0945F3420@SN6PR08MB5136.namprd08.prod.outlook.com> Dear Colleagues, We are pleased to announce that we will be publishing proceedings articles for the "Future Directions of Music Cognition" conference (May 10-14, 2020; Columbus, OH). As a reminder, abstract submissions close at 11:59 PM on December 15, 2019. Details about abstract submissions can be found on our website http://org.osu.edu/mascats/call-for-papers/<http://org.osu.edu/mascats/call-for-papers>. Each day of the "Future Directions" conference will be devoted to one music & science topic: corpus studies, emotion, rhythm & meter, timbre, and pedagogy. 
During each day, there will be a keynote presentation, a methodology workshop, presentations & posters, and group projects. The five keynote speakers are:
Emotion—Jonna Vuoskoski
Corpus Studies—Daniel Shanahan
Pedagogy—Leigh VanHandel
Rhythm & Meter—Justin London
Timbre—Stephen McAdams
All the best, Lindsay Warrenburg, Conference Co-chair Lindsey Reymore, Conference Co-chair Daniel Shanahan, Conference Co-chair Joshua Albrecht, Program Committee Chair -------------- next part -------------- An HTML attachment was scrubbed... URL: <http://lists.uni-paderborn.de/pipermail/mei-l/attachments/20191203/0b938e57/attachment.html> From raffaeleviglianti at gmail.com Fri Dec 13 15:36:58 2019 From: raffaeleviglianti at gmail.com (Raffaele Viglianti) Date: Fri, 13 Dec 2019 09:36:58 -0500 Subject: [MEI-L] Part-time opportunity for Digital Musicologist on Tinctoris Project Message-ID: <CAMyHAnNPBoQLe7O2ftv8Sopbmr4kRoKtTKh+=t5g-iY964pZ4w@mail.gmail.com> Dear List, I'm posting this on behalf of Jeffrey Dean via Barbara H. Haggh-Huglo. Please circulate! Note that remote work is allowed. -- Raff. Job for Digital Musicologist on Tinctoris Project http://earlymusictheory.org/Tinctoris/# Here is the link to the job advertisement: https://jobs.bcu.ac.uk/Vacancy.aspx?ref=122019-553 If you know of anyone who might be suitable, do encourage them to apply. I would emphasize that it will not be necessary for the person we hire to relocate to Birmingham -- the duties can easily be carried out remotely -- and indeed it is much simpler if someone who does not already have the right to work in the UK remains abroad. We are looking for someone with a solid background in coding and familiarity with MEI; acquaintance with pre-1600 music is desirable (better still, pre-1500) but not essential, likewise machine learning. 
The appointment is at the level of Research Fellow (equivalent to Lecturer in the UK, Assistant Professor in North America) at 0.6 FTE (3 days a week) until the end of the project on 30 September 2021. The deadline for application is 17 January 2020. -------------- next part -------------- An HTML attachment was scrubbed... URL: <http://lists.uni-paderborn.de/pipermail/mei-l/attachments/20191213/43c08422/attachment.html> From reinierdevalk at gmail.com Tue Dec 17 22:27:05 2019 From: reinierdevalk at gmail.com (Reinier de Valk) Date: Tue, 17 Dec 2019 22:27:05 +0100 Subject: [MEI-L] Extra technical meeting dedicated to TabMEI - 18-19 Dec 2019 In-Reply-To: <D3E563F4-9614-41D4-8674-72051AB71FE3@gold.ac.uk> References: <243B5A7D-5860-4B36-A8DC-F51885401F7F@gold.ac.uk> <D3E563F4-9614-41D4-8674-72051AB71FE3@gold.ac.uk> Message-ID: <CAC7npKUYQE3rPaL_eP=01oUTZ1M1-pgDiyjjVAwXvC4EF+eYFA@mail.gmail.com> Hi Tim, Did I miss the update on times for tomorrow and Thursday? (Very well possible...!) See you tomorrow, Reinier On Wed, 11 Dec 2019, 15:24 Tim Crawford, <T.Crawford at gold.ac.uk> wrote: > Dear all, > > Could I ask everyone who intends to come to this meeting to get in touch > with me off-list? > (Reinier, I know about you.) > > Many thanks! > > Tim > > On 19 Nov 2019, at 11:56, Tim Crawford <T.Crawford at gold.ac.uk> wrote: > > Dear all, > > We shall be holding a 2-day technical MEI meeting here at Goldsmiths in > London, Wednesday-Thursday 18-19 December 2019, to attempt to get the work > that has been done on a revised tablature module for MEI (working title > TabMEI) into a releasable state in the near future. We shall be focussing > on the MEI specification for lute and (modern) guitar tablature, in order > to greatly expand the range of MEI’s coverage of both historical and modern > popular music. > > While we want to include tablatures for keyboard and other instruments in > future releases, they do not figure greatly in the current TabMEI effort. 
> However, we would welcome participation of anyone interested in working on > them, or even reading their suggestions in this mailing list. > > The meeting is booked to take place in Room 140 of the (main) Richard > Hoggart Building at Goldsmiths (RHB 140). This link should help you find > the room easily: > https://www.gold.ac.uk/find-us/rhb-room-finder/?room=140 > Directions for getting to Goldsmiths are at: > https://www.gold.ac.uk/find-us/ > > The meeting dates have been deliberately timed to follow on from the DMRN > workshop at Queen Mary, University of London, which is an annual event, and > takes place this year on Tuesday, 17 December 2019: > https://www.qmul.ac.uk/dmrn/dmrn-14/ > The DMRN programme is not yet finalised, but should be fixed some time > next week. > > I would be grateful if everyone wishing to come to the meeting could > confirm the days they want to attend by email to me. Although this is not > absolutely essential it helps greatly with planning the supply of > refreshments, etc. > > See you on 18 December! > > Tim > > Prof. Tim Crawford > Professorial Research Fellow in Computational Musicology > Department of Computing > Goldsmiths College > London SE14 6NW > U.K. > > t.crawford at gold.ac.uk > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l > > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l > -------------- next part -------------- An HTML attachment was scrubbed... URL: <http://lists.uni-paderborn.de/pipermail/mei-l/attachments/20191217/738f5fa5/attachment.html> From b.w.bohl at gmail.com Thu Dec 19 14:31:49 2019 From: b.w.bohl at gmail.com (Benjamin W. 
Bohl) Date: Thu, 19 Dec 2019 14:31:49 +0100 Subject: [MEI-L] MEI scores database In-Reply-To: <0076e53503f94317ff6b8127942545ec@openlilylib.org> References: <0076e53503f94317ff6b8127942545ec@openlilylib.org> Message-ID: <EF772A16-2A18-4BA2-A12D-8294C2D28C58@gmail.com> Hi Urs, unfortunately I can’t supply you with such a list but I deem it of great interest to the whole MEI community. How about calling for links to resources and maintaining a corresponding list on the MEI website? All the best, Benjamin > On 11. Dec 2019, at 13:06, Urs Liska <ul at openlilylib.org> wrote: > > Dear MEI list, > > for a university project that will be able to display MEI scores we were asking ourselves if there already is a collection/database/link-list of freely available MEI encodings somewhere. > > Did anyone start a project collecting such resources and making it available either simply as a list or even in some form that can directly be referred to? > > Thanks > Urs > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l From ul at openlilylib.org Thu Dec 19 14:33:58 2019 From: ul at openlilylib.org (Urs Liska) Date: Thu, 19 Dec 2019 14:33:58 +0100 Subject: [MEI-L] MEI scores database In-Reply-To: <EF772A16-2A18-4BA2-A12D-8294C2D28C58@gmail.com> References: <0076e53503f94317ff6b8127942545ec@openlilylib.org> <EF772A16-2A18-4BA2-A12D-8294C2D28C58@gmail.com> Message-ID: <C12267A1-3687-46E1-A08B-2A8636D9FFE1@openlilylib.org> Hi Benjamin, On 19 December 2019 at 14:31:49 CET, "Benjamin W. Bohl" <b.w.bohl at gmail.com> wrote: >Hi Urs, > >unfortunately I can’t supply you with such a list but I deem it of >great interest to the whole MEI community. How about calling for links >to resources and maintaining a corresponding list on the MEI >website? That sounds like something that would be pretty appropriate there. Urs > >All the best, >Benjamin > >> On 11. 
Dec 2019, at 13:06, Urs Liska <ul at openlilylib.org> wrote: >> >> Dear MEI list, >> >> for a university project that will be able to display MEI scores we >were asking ourselves if there already is a >collection/database/link-list of freely available MEI encodings >somewhere. >> >> Did anyone start a project collecting such resources and making it >available either simply as a list or even in some form that can >directly be referred to? >> >> Thanks >> Urs >> >> _______________________________________________ >> mei-l mailing list >> mei-l at lists.uni-paderborn.de >> https://lists.uni-paderborn.de/mailman/listinfo/mei-l > > >_______________________________________________ >mei-l mailing list >mei-l at lists.uni-paderborn.de >https://lists.uni-paderborn.de/mailman/listinfo/mei-l -- This message was sent from my Android device with K-9 Mail. From stadler at edirom.de Thu Dec 19 14:59:50 2019 From: stadler at edirom.de (Peter Stadler) Date: Thu, 19 Dec 2019 14:59:50 +0100 Subject: [MEI-L] Update on the current MEI board elections In-Reply-To: <1652B1B9-7555-401A-B6EC-4B0787AC1DC6@gmail.com> References: <1652B1B9-7555-401A-B6EC-4B0787AC1DC6@gmail.com> Message-ID: <D84F6185-C163-4F7B-BAC0-D3214ADEB44C@edirom.de> Dear MEI community, just a brief update on the current elections since we are a little bit delayed … The nomination phase is already closed and at present we are still collecting candidate statements. We decided to extend this phase because the original schedule was extremely tight. So, once we have received the candidate statements (the candidates know the deadline ;) we will be starting the election phase, which will run until January 10, 2020. We are sorry for the delay and the confusion it may have caused, but still considered it to be the better option in the interest of the community and the candidates. With seasonal greetings Benni & Peter MEI election administrators > On 13.11.2019 at 16:09, Benjamin W. 
Bohl <b.w.bohl at gmail.com>: > > **Too long to read?** visit: https://forms.gle/XwJR95FYn1xYcrDK7 > > Dear MEI Community, > > On 31 December 2019 the terms of three MEI Board members will come to an end. The entire Board wishes to thank Andrew Hankinson, Johannes Kepper and Eleanor Selfridge-Field for their service and dedication to the MEI community. > > In order to fill these soon-to-be-vacant positions, elections must be held. The election process will take place in accordance with the Music Encoding Initiative By-Laws.[1] > > To nominate a candidate, please use this form: https://forms.gle/XwJR95FYn1xYcrDK7 > > The timeline of the elections will be as follows: > > Nomination phase (13 November – 11 December, 2019) > - Nominations can be sent by filling in the nomination form between 13 November – 11 December, 2019.[2] > - Any person who today is a subscriber of MEI-L has the right to nominate candidates. > - Nominees have to be members of the MEI-L mailing list but may register until 11 December 2019. > - Individuals who have previously served on the Board are eligible for nomination and re-appointment. > - Self-nominations are welcome. > - Individuals will be informed of their nomination when received and asked to confirm their willingness to serve on the Board. > - Acceptance of a nomination requires submission of a short CV and a personal statement of interest in MEI (a maximum of 200 words each) to elections at music-encoding.org by 12 December, 2019. Candidates who have been nominated but who have not confirmed their willingness will not be included on the ballot. > > Election phase (13 December – 18 December, 2019) > - The voting period will be open from 13 December – 18 December, 2019. > - The election will take place using OpaVote and the Ranked Choice Voting method (https://www.opavote.com/methods/ranked-choice-voting). > - You will be informed about the election and your individual voting tokens in a separate e-mail. 
> > Post-election phase > - Election results will be announced after the elections have closed. > - The term of the elected candidates starts on 1 January 2020. > - The first meeting of the new MEI Board will be held on Wednesday, 15 January 2020, 8:00 pm in Germany (i.e. 7:00 pm in the UK, or 11:00 am USA west coast, or 2:00 pm USA east coast) > > The selection of Board members is an opportunity for each of you to have a voice in determining the future of MEI. > > Thank you for your support, > Peter Stadler and Benjamin W. Bohl > MEI election administrators 2019 > by appointment of the MEI Board > > [1] The By-laws of the Music Encoding Initiative are available online at: http://music-encoding.org/community/mei-by-laws.html > [2] All deadlines are referenced to 11:59 pm (UTC) > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l
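[Archive editor's note] The Ranked Choice Voting method linked in the election announcement above is, for a single seat, the instant-runoff procedure: count first choices, and if no candidate holds a strict majority, eliminate the candidate with the fewest votes and transfer those ballots to each voter's next surviving choice. A minimal sketch with hypothetical ballot data (the tie-break rule here is a simplification; OpaVote's actual counting rules, especially for ties and multi-seat elections, differ):

```python
from collections import Counter

def ranked_choice_winner(ballots):
    """Single-seat instant-runoff count.

    Each ballot is a list of candidate names in order of preference.
    Repeatedly eliminate the candidate with the fewest first-choice
    votes until one candidate holds a strict majority of the ballots
    still expressing a preference.
    """
    ballots = [list(b) for b in ballots if b]
    while True:
        # Tally current first choices on ballots that still rank someone.
        counts = Counter(b[0] for b in ballots if b)
        total = sum(counts.values())
        leader, votes = counts.most_common(1)[0]
        if votes * 2 > total or len(counts) == 1:
            return leader
        # Simplistic tie-break: among fewest-vote candidates, eliminate
        # the alphabetically first one.
        loser = min(counts, key=lambda c: (counts[c], c))
        ballots = [[c for c in b if c != loser] for b in ballots]

# Hypothetical example: five voters ranking candidates A, B, C.
# First choices: A=2, B=2, C=1 (no majority), so C is eliminated
# and C's ballot transfers to B, giving B a 3-of-5 majority.
print(ranked_choice_winner(
    [["A", "B"], ["A", "C"], ["B", "C"], ["B", "C"], ["C", "B"]]
))  # B
```

The key property, compared with a single plurality count, is that a ballot is never wasted on an eliminated candidate: it keeps counting for the voter's next surviving preference.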