From Anna.Kijas at tufts.edu Tue Jan 5 22:54:44 2021 From: Anna.Kijas at tufts.edu (Kijas, Anna E) Date: Tue, 5 Jan 2021 21:54:44 +0000 Subject: [MEI-L] Upcoming MEI Pedagogy Interest Group Meeting & Opportunity Reminder! Message-ID: Happy New Year! Please join us on Friday, January 15, 2021 at 11 AM (EST) for an MEI Pedagogy Interest Group Meeting. You can join the meeting on January 15, 2021 with this Zoom link: https://tufts.zoom.us/j/9420917662?pwd=VnhEWm5aWUd2K0xWRUVEdFFpNW5rdz09. If prompted for a passcode, enter 420721. We also look forward to hearing from anyone who is interested in working with us on one of the three opportunities detailed below. Best, Anna Kijas and Joy Calico, Administrative Co-Chairs Please note: Lilly Music Library hours and additional details can be viewed at https://tischlibrary.tufts.edu/use-library/lilly-music-library. Updates about library services can be found at https://tischlibrary.tufts.edu/about-us/news/2020-03-16-9900. All instruction, meetings, and consultations will be conducted over Zoom. Anna E. Kijas Head, Lilly Music Library Granoff Music Center Tufts University 20 Talbot Avenue, Medford, MA 02155 Pronouns: she, her, hers Book an appointment | (617) 627-2846 From: "Kijas, Anna E" Date: Wednesday, December 16, 2020 at 10:50 AM To: Music Encoding Initiative , "mei-pedagogy-ig at lists.uni-paderborn.de" Subject: Opportunities to Engage with the MEI Pedagogy Interest Group! Deadline for expressions of interest in opportunities below: 15 January 2021 Dear Colleagues, The MEI Pedagogy Interest Group met on December 15 (view meeting notes) and identified three opportunities for you to engage with this group over the next few months! Please read on if you are interested in lending a hand in one of these ways: 1. Help draft a Call for Proposals along with criteria or a rubric for an openly available, peer-reviewed collection of pedagogical examples, lessons, or tutorials that demonstrate a variety of music encoding use-cases. 
Additional roles/tasks will be required as we move along with this project! 2. Send us links (or content) to your existing tutorials, workshop materials, videos, etc. that we can add to a “Community-Created Resource” section on the MEI website which will feature instructional content that is not peer-reviewed by the MEI community. This will be on a rolling deadline once we receive initial content. 3. Help draft a conference proposal for a session focused on music encoding and pedagogy for the American Musicological Society 2021. Perhaps you are working on a project or have been teaching music encoding and would like to present on your work or approaches? (N.B. If you presented in any format at AMS 2020, you must skip a year.) Deadline for letting us know how you’d like to participate in these opportunities is January 15, 2021. If you are interested, please send an email to Anna Kijas (anna.kijas at tufts.edu) and Joy Calico (joy.calico at vanderbilt.edu). We plan to hold regular monthly meetings on the third Friday of each month at 11 AM (EST). Meetings will be announced on the MEI and IG lists, and on Slack. The upcoming dates include: * January 15, 2021 * February 19, 2021 * March 19, 2021 * April 16, 2021 * May (TBA during the MEC conference) Best, Anna Kijas and Joy Calico, Administrative Co-Chairs Please note: Lilly Music Library hours and additional details can be viewed at https://tischlibrary.tufts.edu/use-library/lilly-music-library. Updates about library services can be found at https://tischlibrary.tufts.edu/about-us/news/2020-03-16-9900. All instruction, meetings, and consultations will be conducted over Zoom. Anna E. Kijas Head, Lilly Music Library Granoff Music Center Tufts University 20 Talbot Avenue, Medford, MA 02155 Pronouns: she, her, hers Book an appointment | (617) 627-2846 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From lindsay.a.warrenburg at gmail.com Sat Jan 9 15:16:37 2021 From: lindsay.a.warrenburg at gmail.com (Lindsay Warrenburg) Date: Sat, 9 Jan 2021 09:16:37 -0500 Subject: [MEI-L] Announcing Future Directions of Music Cognition Speaker Series Message-ID: Dear colleagues, We’re pleased to announce an online speaker series as part of our virtual conference Future Directions of Music Cognition. As the name “Future Directions” implies, the aim of this series is to look forward and think about where various subfields of music cognition are headed in academia and outside the academy. The series is aimed at scholars of all ages, including undergraduate and graduate students of music, as well as professors who will mentor scholars in a post-Covid world. Titles and Zoom links are forthcoming and will be posted online at http://org.osu.edu/mascats/virtual-speaker-series/. All events will be on Mondays at 4pm EST unless otherwise noted below. - February 22: David Huron, Professor Emeritus, Ohio State University School of Music & Center for Cognitive and Brain Sciences - March 1 at 5pm: Psyche Loui, Assistant Professor of Creativity and Creative Practice, Northeastern University Department of Music - March 8: Joint presentation - Dominique Vuvan, Assistant Professor of Psychology, Skidmore College - SMPC Anti-Racism committee - March 15: Roman Holowinsky, Managing Director & Co-Founder at The Erdős Institute and Associate Professor of Mathematics, Ohio State University - March 22: Justin London, Andrew W. 
Mellon Professor of Music, Cognitive Science, and the Humanities, Carleton College - March 29: Alt-ac panel from speakers with Music PhDs - Suhnne Ahn, Director of the Peabody at Homewood Program - Nell Cloutier, Director of Measurement and Learning at Habitat for Humanity - Dana DeVlieger, Law Student, Northwestern University Pritzker School of Law - Lindsay Warrenburg, Data Scientist at Sonde Health - April 5: Daniel Shanahan, Associate Professor of Music Theory and Cognition, Ohio State University School of Music - April 12: Joe Plazak, Sibelius Principal Software Engineer and Product Owner - April 19: Reyna Gordon, Assistant Professor, Departments of Otolaryngology & Psychology, Vanderbilt University - April 26: Zachary Wallmark, Assistant Professor of Musicology and Affiliated Faculty of the Center for Translational Neuroscience, University of Oregon - May 3: Stephen McAdams, Canada Research Chair in Music Perception and Cognition; Professor of Music, Schulich School of Music, McGill University; Director, ACTOR Project (Analysis, Creation, and Teaching of ORchestration) - May 10: Leigh VanHandel, Associate Professor of Music Theory, University of British Columbia - May 17: Aniruddh Patel, Professor of Psychology, Tufts University - May 24: Elizabeth Hellmuth Margulis, Professor of Music, Princeton University - May 31: Jonna Vuoskoski, Associate Professor, Departments of Musicology and the Centre for Interdisciplinary Studies in Rhythm, Time and Motion (IMV), University of Oslo Sincerely, Lindsey Reymore & Lindsay Warrenburg Co-chairs of Future Directions of Music Cognition — *Lindsay Warrenburg, PhD* Music Theory, Cognition, and Perception lindsaywarrenburg.com she/her/hers -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From tim.eipert at uni-wuerzburg.de Thu Jan 14 23:23:38 2021 From: tim.eipert at uni-wuerzburg.de (Tim Eipert) Date: Thu, 14 Jan 2021 23:23:38 +0100 Subject: [MEI-L] =?utf-8?q?Professorial_position_in_W=C3=BCrzburg_=22Digi?= =?utf-8?q?tale_Musikphilologie=22?= Message-ID: <20210114232338.Horde.sOrw3JoKJnTIcbxlhFND_7o@webmail.uni-wuerzburg.de> (Dear Community, I would like to share with you this message. Best wishes, Tim) Dear Colleagues, Please, find attached the advertisement for a professor position that might interest you or colleagues of yours in your area. Unfortunately, the deadline ends already on 25 January. Therefore, it is all the more important to us that as many potential applicants as possible learn about this as soon a possible. With kind regards Andreas Haug -- Tim Eipert Corpus monodicum: Die einstimmige Musik des lateinischen Mittelalters. Julius-Maximilians-Universität Institut für Musikforschung Domerschulstraße 13 D-97070 Würzburg Tel: +49 (0) 15751795814 (mobil) Github: https://github.com/timeipert -------------- next part -------------- A non-text attachment was scrubbed... Name: Ausschreibung JunProf Digitale Musikphilologie (1).pdf Type: application/pdf Size: 105586 bytes Desc: not available URL: From Anna.Kijas at tufts.edu Fri Jan 22 20:40:33 2021 From: Anna.Kijas at tufts.edu (Kijas, Anna E) Date: Fri, 22 Jan 2021 19:40:33 +0000 Subject: [MEI-L] CfP for 2021 Association for Computers and the Humanities Conference Message-ID: Dear Colleagues, I’d like to share a Call for Proposals from the Association for Computers and the Humanities (ACH) for ACH 2021, which will be held virtually on July 22-23, 2021. 
In partnership with the University of Houston’s US Latino Digital Humanities (USLDH) program and University Libraries, Texas Southern University, Rice University, Texas A&M College Station, Texas A&M Prairie View, UH Clear Lake, UH Downtown, and Houston Community College, ACH 2021 will provide a forum for conversations on an expansive definition of digital humanities in a broad array of subject areas, methods, and communities of practice. * CFP details: https://ach.org/blog/2020/12/29/call-for-proposals-association-for-computers-and-the-humanities-2021/ * Deadline for the CFP: February 1, 2021 * Submission site: https://www.conftool.org/ach2021/ Please note the details for “Suggested Proposal Types and Duration” that are in the Call for Proposals on the ACH website. ACH 2021 submissions will undergo fully anonymous peer review. Please remove all identifying information from your proposal submission, including author name and affiliation. ACH recognizes that this work is inherently and inextricably sociopolitical, and thus especially welcomes proposals that emphasize social justice in the context of anti-racist work, Black studies, Latinx studies, Indigenous studies, cultural and critical ethnic studies, intersectional feminism, postcolonial and decolonial studies, and queer interventions in digital studies. 
Areas of engagement include but are not limited to: * Social justice * Digital surveillance * Environmental humanities & climate justice * Computational and digital approaches to humanistic research and pedagogy * Digital pedagogy, research, and activism during COVID-19 * Digital media, art, literature, history, music, film, and games * Digital librarianship * Digital humanities tools and infrastructures * Humanistic research on digital objects and cultures * Knowledge infrastructures * Physical computing * Resource creation, curation, and engagement * Use of digital technologies to write, publish, and review scholarship As an organization committed to cross-disciplinary engagement, ACH welcomes interdisciplinary proposals. We also are especially interested in receiving proposals from participants with a range of expertise and from a variety of roles, including alt-ac positions, employment outside of higher education, and graduate students. We further invite proposals from participants who are newcomers to digital humanities. For questions and concerns about the CFP, conference program, submissions, Code of Conduct, or accessibility, please contact the program committee co-chairs: Lorena Gauthereau and Tanya Clement (ach2021 [at] ach [dot] org ). In gratitude and partnership, The ACH 2021 Program Committee Best, Anna Kijas Council Representative, ACH Chair, ACH Affiliation & Liaisons Committee Please note: Lilly Music Library hours and additional details can be viewed at https://tischlibrary.tufts.edu/use-library/lilly-music-library. Updates about library services can be found at https://tischlibrary.tufts.edu/about-us/news/2020-03-16-9900. All instruction, meetings, and consultations will be conducted over Zoom. Anna E. Kijas Head, Lilly Music Library Granoff Music Center Tufts University 20 Talbot Avenue, Medford, MA 02155 Pronouns: she, her, hers Book an appointment | (617) 627-2846 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From kepper at edirom.de Thu Jan 28 17:35:58 2021 From: kepper at edirom.de (Johannes Kepper) Date: Thu, 28 Jan 2021 17:35:58 +0100 Subject: [MEI-L] ODD Friday tomorrow Message-ID: Dear all, tomorrow at 2pm Austrian time is our next ODD Friday. Everyone’s invited to join us for technical discussions around MEI, or just to listen in ;-) See you tomorrow, jo https://us02web.zoom.us/j/83097885923?pwd=NTZvTXh1S2E1MkdNdi9tV3FKWVpMQT09 Meeting-ID: 830 9788 5923 Passcode: MEI From thomaemartha at gmail.com Fri Jan 29 01:08:16 2021 From: thomaemartha at gmail.com (Martha Thomae) Date: Fri, 29 Jan 2021 00:08:16 +0000 Subject: [MEI-L] ODD Friday tomorrow In-Reply-To: References: Message-ID: Hello, Thank you for sharing the meeting link, Johannes! I was wondering, would it still be possible to include two items in tomorrow's agenda? I think it might be useful to look into how one can visualize changes made to the guidelines (i.e., how to get the HTML) with the new setup of the repository, and also to look into building the schema with the instructions given in the README of the music-encoding repo (https://github.com/music-encoding/music-encoding#building-mei-schemas). I haven't been able to obtain the schema using these instructions (I usually use Oxygen to get it). Thank you! See you tomorrow everyone, Martha On 2021-01-28, 11:36 AM, "mei-l on behalf of Johannes Kepper" wrote: Dear all, tomorrow at 2pm Austrian time is our next ODD Friday. Everyone’s invited to join us for technical discussions around MEI, or just to listen in ;-) See you tomorrow, jo https://us02web.zoom.us/j/83097885923?pwd=NTZvTXh1S2E1MkdNdi9tV3FKWVpMQT09 Meeting-ID: 830 9788 5923 Passcode: MEI _______________________________________________ mei-l mailing list mei-l at lists.uni-paderborn.de https://lists.uni-paderborn.de/mailman/listinfo/mei-l -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From esperanza.rodriguezgarcia at univ-tours.fr Fri Jan 29 13:21:00 2021 From: esperanza.rodriguezgarcia at univ-tours.fr (Esperanza Rodriguez-Garcia) Date: Fri, 29 Jan 2021 13:21:00 +0100 (CET) Subject: [MEI-L] ODD Friday tomorrow In-Reply-To: References: Message-ID: <1224490677.6364756.1611922860819.JavaMail.zimbra@univ-tours.fr> Dear all, I have to excuse myself for today, as I have another meeting at the same time. See you next time Esperanza ----- Original message ----- From: "Johannes Kepper" To: "Music Encoding Initiative" Sent: Thursday, 28 January 2021 17:35:58 Subject: [MEI-L] ODD Friday tomorrow Dear all, tomorrow at 2pm Austrian time is our next ODD Friday. Everyone’s invited to join us for technical discussions around MEI, or just to listen in ;-) See you tomorrow, jo https://us02web.zoom.us/j/83097885923?pwd=NTZvTXh1S2E1MkdNdi9tV3FKWVpMQT09 Meeting-ID: 830 9788 5923 Passcode: MEI _______________________________________________ mei-l mailing list mei-l at lists.uni-paderborn.de https://lists.uni-paderborn.de/mailman/listinfo/mei-l From irmlind.capelle at uni-paderborn.de Tue Feb 2 11:54:49 2021 From: irmlind.capelle at uni-paderborn.de (Irmlind Capelle) Date: Tue, 2 Feb 2021 11:54:49 +0100 Subject: [MEI-L] Cheat sheets Message-ID: Dear all, I’m not quite sure if this is the correct place for my question, but I will try. On the website there is a „resources“ button, and within it a „tutorials“ button. Clicking it opens „tutorials and related material“. It would be good if the button were also extended with „related material“; this would then be a good place for one or more MEI cheat sheets. We had two in the Metadata IG session at the MEC 2020, and it would be good to have them in an official place so they can be linked in correspondence and literature. A cheat sheet is a very good document for comparing different XML structures, and I would be really happy to have them not only on my private desk. Best regards Irmlind Dr. 
Irmlind Capelle Wissenschaftliche Mitarbeiterin DFG-Viewer für musikalische Quellen bis 1/2021: DFG-Projekt „Entwicklung eines MEI- und TEI-basierten Modells konzeptueller Tiefenerschließung von Musikalienbeständen am Beispiel des Detmolder Hoftheaters im 19. Jahrhundert (1825-1875)“ www.hoftheater-detmold.de Forum Wissenschaft | Bibliothek | Musik Hornsche Straße 39 32756 Detmold Tel.: +49 5231 975-665 Mail: irmlind.capelle at upb.de -------------- next part -------------- An HTML attachment was scrubbed... URL: From Anna.Kijas at tufts.edu Tue Feb 2 13:42:12 2021 From: Anna.Kijas at tufts.edu (Kijas, Anna E) Date: Tue, 2 Feb 2021 12:42:12 +0000 Subject: [MEI-L] Cheat sheets In-Reply-To: References: Message-ID: Dear Irmlind, That is a great question! At our last Digital Pedagogy Interest Group meeting in January the question also came up about sharing cheat sheets (in addition to other resources) from the MEI community in a central place. The IG is going to suggest the creation of a subpage or section on the MEI website that is called MEI Community-Created Resources. This would be similar to the bibliography page, but focused on identifying resources that may not be typical publications, but are useful for teaching MEI or demonstrating concepts, in formats such as lesson plans, tutorials, cheat sheets, etc. Does that sound like a good option? The IG will be drafting some language for this Community-Created section and will share with the MEI-L to get feedback and additional thoughts. Best, Anna Please note: Lilly Music Library hours and additional details can be viewed at https://tischlibrary.tufts.edu/use-library/lilly-music-library. Updates about library services can be found at https://tischlibrary.tufts.edu/about-us/news/2020-03-16-9900. All instruction, meetings, and consultations will be conducted over Zoom. Anna E. 
Kijas Head, Lilly Music Library Granoff Music Center Tufts University 20 Talbot Avenue, Medford, MA 02155 Pronouns: she, her, hers Book an appointment | (617) 627-2846 From: mei-l on behalf of Irmlind Capelle Reply-To: Music Encoding Initiative Date: Tuesday, February 2, 2021 at 5:56 AM To: Music Encoding Initiative Subject: [MEI-L] Cheat sheets Dear all, I’m not quite sure if this is the correct place for my question, but I will try. On the website there is a „resources“ button, and within it a „tutorials“ button. Clicking it opens „tutorials and related material“. It would be good if the button were also extended with „related material“; this would then be a good place for one or more MEI cheat sheets. We had two in the Metadata IG session at the MEC 2020, and it would be good to have them in an official place so they can be linked in correspondence and literature. A cheat sheet is a very good document for comparing different XML structures, and I would be really happy to have them not only on my private desk. Best regards Irmlind Dr. Irmlind Capelle Wissenschaftliche Mitarbeiterin DFG-Viewer für musikalische Quellen bis 1/2021: DFG-Projekt „Entwicklung eines MEI- und TEI-basierten Modells konzeptueller Tiefenerschließung von Musikalienbeständen am Beispiel des Detmolder Hoftheaters im 19. Jahrhundert (1825-1875)“ www.hoftheater-detmold.de Forum Wissenschaft | Bibliothek | Musik Hornsche Straße 39 32756 Detmold Tel.: +49 5231 975-665 Mail: irmlind.capelle at upb.de -------------- next part -------------- An HTML attachment was scrubbed... URL: From jveit at mail.uni-paderborn.de Tue Feb 2 13:58:55 2021 From: jveit at mail.uni-paderborn.de (Joachim Veit) Date: Tue, 2 Feb 2021 13:58:55 +0100 Subject: [MEI-L] Cheat sheets In-Reply-To: References: Message-ID: <9ab2e2fb-394a-881a-1a5d-63d4244cae3c@mail.uni-paderborn.de> Dear Anna, thank you very much - that's a really fine and helpful idea!! 
This would also result in fixed URLs for these teaching materials. I just had the experience that an MEI cheat sheet which I cited in an article (now published five years after the conference...) is no longer on the website, and thus I produced a broken link in the publication... And I would heartily welcome a collection of materials which help us to teach MEI better - because MEI seems to be very useful .... Best greetings, Joachim On 02.02.21 at 13:42, Kijas, Anna E wrote: > Dear Irmlind, > > That is a great question! At our last Digital Pedagogy Interest Group meeting in January the question also came up about sharing cheat sheets (in addition to other resources) from the MEI community in a central place. The IG is going to suggest the creation of a subpage or section on the MEI website that is called MEI Community-Created Resources. This would be similar to the bibliography page, but focused on identifying resources that may not be typical publications, but are useful for teaching MEI or demonstrating concepts, in formats such as lesson plans, tutorials, cheat sheets, etc. Does that sound like a good option? > > The IG will be drafting some language for this Community-Created section and will share with the MEI-L to get feedback and additional thoughts. > > Best, > Anna > > Please note: Lilly Music Library hours and additional details can be viewed at https://tischlibrary.tufts.edu/use-library/lilly-music-library. Updates about library services can be found at https://tischlibrary.tufts.edu/about-us/news/2020-03-16-9900. All instruction, meetings, and consultations will be conducted over Zoom. > > Anna E. 
Kijas > Head, Lilly Music Library > Granoff Music Center > Tufts University > 20 Talbot Avenue, Medford, MA 02155 > Pronouns: she, her, hers > Book an appointment | (617) 627-2846 > > From: mei-l on behalf of Irmlind Capelle > Reply-To: Music Encoding Initiative > Date: Tuesday, February 2, 2021 at 5:56 AM > To: Music Encoding Initiative > Subject: [MEI-L] Cheat sheets > > Dear all, > > I’m not quite sure if this is the correct place for my question, but I will try. > > On the website there is a „resources“ button, and within it a „tutorials“ button. Clicking it opens „tutorials and related material“. It would be good if the button were also extended with „related material“; this would then be a good place for one or more MEI cheat sheets. We had two in the Metadata IG session at the MEC 2020, and it would be good to have them in an official place so they can be linked in correspondence and literature. > > A cheat sheet is a very good document for comparing different XML structures, and I would be really happy to have them not only on my private desk. > > Best regards > Irmlind > > > > Dr. Irmlind Capelle > > Wissenschaftliche Mitarbeiterin > DFG-Viewer für musikalische Quellen > > bis 1/2021: DFG-Projekt „Entwicklung eines MEI- und TEI-basierten Modells konzeptueller > Tiefenerschließung von Musikalienbeständen am Beispiel des Detmolder Hoftheaters > im 19. Jahrhundert (1825-1875)“ www.hoftheater-detmold.de > Forum Wissenschaft | Bibliothek | Musik > Hornsche Straße 39 > 32756 Detmold > > Tel.: +49 5231 975-665 > Mail: irmlind.capelle at upb.de > > > > > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l > -------------- next part -------------- A non-text attachment was scrubbed... 
Name: jveit.vcf Type: text/x-vcard Size: 512 bytes Desc: not available URL: From irmlind.capelle at uni-paderborn.de Wed Feb 10 11:05:04 2021 From: irmlind.capelle at uni-paderborn.de (Irmlind Capelle) Date: Wed, 10 Feb 2021 11:05:04 +0100 Subject: [MEI-L] Cheat sheets In-Reply-To: <9ab2e2fb-394a-881a-1a5d-63d4244cae3c@mail.uni-paderborn.de> References: <9ab2e2fb-394a-881a-1a5d-63d4244cae3c@mail.uni-paderborn.de> Message-ID: <1983E0A1-BE0C-4906-8361-96B8AACBBA8D@uni-paderborn.de> Dear Anna, please excuse me for not answering immediately. It really is a good idea to have a place for Community-Created Resources on the MEI website. I am not quite sure how to handle the difference between MEI Resources and MEI Community-Created Resources, but I trust the discussion between the Board and the Digital Pedagogy IG. However, for me cheat sheets are fundamental MEI materials and should have a truly official character and place. We will stay in contact. Best regards Irmlind > On 02.02.2021 at 13:58, Joachim Veit wrote: > > Dear Anna, > > thank you very much - that's a really fine and helpful idea!! This would also result in fixed URLs for these teaching materials. I just had the experience that an MEI cheat sheet which I cited in an article (now published five years after the conference...) is no longer on the website, and thus I produced a broken link in the publication... > And I would heartily welcome a collection of materials which help us to teach MEI better - because MEI seems to be very useful .... > > Best greetings, > Joachim > > > > On 02.02.21 at 13:42, Kijas, Anna E wrote: >> Dear Irmlind, >> That is a great question! At our last Digital Pedagogy Interest Group meeting in January the question also came up about sharing cheat sheets (in addition to other resources) from the MEI community in a central place. The IG is going to suggest the creation of a subpage or section on the MEI website that is called MEI Community-Created Resources. 
This would be similar to the bibliography page, but focused on identifying resources that may not be typical publications, but are useful for teaching MEI or demonstrating concepts, in formats such as lesson plans, tutorials, cheat sheets, etc. Does that sound like a good option? >> The IG will be drafting some language for this Community-Created section and will share with the MEI-L to get feedback and additional thoughts. >> Best, >> Anna >> Please note: Lilly Music Library hours and additional details can be viewed at https://tischlibrary.tufts.edu/use-library/lilly-music-library. Updates about library services can be found at https://tischlibrary.tufts.edu/about-us/news/2020-03-16-9900. All instruction, meetings, and consultations will be conducted over Zoom. >> Anna E. Kijas >> Head, Lilly Music Library >> Granoff Music Center >> Tufts University >> 20 Talbot Avenue, Medford, MA 02155 >> Pronouns: she, her, hers >> Book an appointment | (617) 627-2846 >> From: mei-l on behalf of Irmlind Capelle >> Reply-To: Music Encoding Initiative >> Date: Tuesday, February 2, 2021 at 5:56 AM >> To: Music Encoding Initiative >> Subject: [MEI-L] Cheat sheets >> Dear all, >> I’m not quite sure if this is the correct place for my question, but I will try. >> On the website there is a „resources“ button, and within it a „tutorials“ button. Clicking it opens „tutorials and related material“. It would be good if the button were also extended with „related material“; this would then be a good place for one or more MEI cheat sheets. We had two in the Metadata IG session at the MEC 2020, and it would be good to have them in an official place so they can be linked in correspondence and literature. >> A cheat sheet is a very good document for comparing different XML structures, and I would be really happy to have them not only on my private desk. >> Best regards >> Irmlind >> Dr. 
Irmlind Capelle >> Wissenschaftliche Mitarbeiterin >> DFG-Viewer für musikalische Quellen >> bis 1/2021: DFG-Projekt „Entwicklung eines MEI- und TEI-basierten Modells konzeptueller >> Tiefenerschließung von Musikalienbeständen am Beispiel des Detmolder Hoftheaters >> im 19. Jahrhundert (1825-1875)“ www.hoftheater-detmold.de >> Forum Wissenschaft | Bibliothek | Musik >> Hornsche Straße 39 >> 32756 Detmold >> Tel.: +49 5231 975-665 >> Mail: irmlind.capelle at upb.de >> _______________________________________________ >> mei-l mailing list >> mei-l at lists.uni-paderborn.de >> https://lists.uni-paderborn.de/mailman/listinfo/mei-l > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l From lindsay.a.warrenburg at gmail.com Wed Feb 10 14:59:53 2021 From: lindsay.a.warrenburg at gmail.com (Lindsay Warrenburg) Date: Wed, 10 Feb 2021 08:59:53 -0500 Subject: [MEI-L] Announcing Presentation Titles for the Future Directions of Music Cognition Speaker Series Message-ID: Dear all, We are pleased to announce the titles for the Future Directions of Music Cognition Speaker Series. In order to register for one or more talks, please sign up for the email listserv on the website below. The listserv will be the primary method of communication for information about the speaker series, including Zoom links and announcements. *Register for the speaker series here: * http://org.osu.edu/mascats/virtual-speaker-series/ Please reach out if you have any questions. We can't wait to see you on February 22 at 4PM EST for our first presentation! 
- February 22, 4pm EST: - *On the future of music research* - *David Huron*, Professor Emeritus, Ohio State University School of Music & Center for Cognitive and Brain Sciences - March 1,* 5pm EST*: - *Use-inspired music cognition: Designing cognitively informed musical interventions for the brain* - *Psyche Loui*, Assistant Professor of Creativity and Creative Practice, Northeastern University Department of Music - March 8, 4pm EST: - *Joint presentation* - *Dominique Vuvan*, Assistant Professor of Psychology, Skidmore College - *SMPC Anti-Racism committee* - March 15, 4pm EST: - *Career preparedness through alumni engagement* - *Roman Holowinsky*, Managing Director & Co-Founder at The Erdős Institute and Associate Professor of Mathematics, Ohio State University - March 22, 4pm EST: - *Music theory as junk science, and how and why we need to fix it* - *Justin London**,* Andrew W. Mellon Professor of Music, Cognitive Science, and the Humanities, Carleton College - March 29, 4pm EST: - *Alt-ac panel from speakers with Music PhDs* - *Suhnne Ahn*, Director of the Peabody at Homewood Program - *Nell Cloutier*, Director of Measurement and Learning at Habitat for Humanity - *Dana DeVlieger*, Law Student, Northwestern University Pritzker School of Law - *Lindsay Warrenburg*, Data Scientist at Sonde Health - April 5, 4pm EST: - *What the history of computational musicology can tell us about the future of corpus studies* - *Daniel Shanahan*, Associate Professor of Music Theory and Cognition, Ohio State University School of Music - April 12, 4pm EST: - *The future of music cognition through the lens of music notation* - *Joe Plazak*, Sibelius Principal Software Engineer and Product Owner - April 19, 4pm EST: - *New frontiers in the genetic basis of musicality* - *Reyna Gordon*, Assistant Professor, Departments of Otolaryngology & Psychology, Vanderbilt University - April 26, 4pm EST: - *Empathic listening: Music and the social mind* - *Zachary Wallmark*, Assistant Professor 
of Musicology and Affiliated Faculty of the Center for Translational Neuroscience, University of Oregon - May 3, 4pm EST: - *Analyzing the perceptual effects of orchestration practice through the lens of auditory grouping principles* - *Stephen McAdams*, Canada Research Chair in Music Perception and Cognition; Professor of Music, Schulich School of Music, McGill University; Director, ACTOR Project (Analysis, Creation, and Teaching of ORchestration) - May 10, 4pm EST: - *Melody and rhythm: Effects on tempo determination* - *Leigh VanHandel*, Associate Professor of Music Theory, University of British Columbia - May 17, 4pm EST: - *Musicality and gene-culture coevolution* - *Aniruddh Patel*, Professor of Psychology, Tufts University - May 24, 4pm EST: - *Music cognition between the sciences and the humanities* - *Elizabeth Hellmuth Margulis*, Professor of Music, Princeton University - May 31, 4pm EST: - *From compassion to being moved: Social emotions evoked by music* - *Jonna Vuoskoski*, Associate Professor, Departments of Musicology and the Centre for Interdisciplinary Studies in Rhythm, Time and Motion (IMV), University of Oslo All the best, Lindsay Warrenburg Lindsey Reymore Daniel Shanahan — *Lindsay Warrenburg, PhD* Music Theory, Cognition, and Perception lindsaywarrenburg.com she/her/hers -------------- next part -------------- An HTML attachment was scrubbed... URL: From Anna.Kijas at tufts.edu Thu Feb 11 20:19:32 2021 From: Anna.Kijas at tufts.edu (Kijas, Anna E) Date: Thu, 11 Feb 2021 19:19:32 +0000 Subject: [MEI-L] Cheat sheets In-Reply-To: <1983E0A1-BE0C-4906-8361-96B8AACBBA8D@uni-paderborn.de> References: <9ab2e2fb-394a-881a-1a5d-63d4244cae3c@mail.uni-paderborn.de> <1983E0A1-BE0C-4906-8361-96B8AACBBA8D@uni-paderborn.de> Message-ID: <24282BE8-B4E2-4725-94CA-3D17177EBCC3@tufts.edu> Hello Irmlind, Thank you for following up and providing your perspective on the importance of cheat-sheets. 
I wonder if the MEI GitHub might be a good place for cheat-sheets that have been reviewed for accuracy or in a sense “approved” by the MEI as an organization? Best, Anna Please note: Lilly Music Library hours and additional details can be viewed at https://tischlibrary.tufts.edu/use-library/lilly-music-library. Updates about library services can be found at https://tischlibrary.tufts.edu/about-us/news/2020-03-16-9900. All instruction, meetings, and consultations will be conducted over Zoom. Anna E. Kijas Head, Lilly Music Library Granoff Music Center Tufts University 20 Talbot Avenue, Medford, MA 02155 Pronouns: she, her, hers Book an appointment | (617) 627-2846 On 2/10/21, 5:06 AM, "mei-l on behalf of Irmlind Capelle" wrote: Dear Anna, please excuse me for not answering immediately. It really is a good idea to have a place for Community-Created Resources on the MEI website.
At our last Digital Pedagogy Interest Group meeting in January the question also came up about sharing cheat sheets (in addition to other resources) from the MEI community in a central place. The IG is going to suggest the creation of a subpage or section on the MEI website that is called MEI Community-Created Resources. This would be similar to the bibliography page, but focused on identifying resources that may not be typical publications, but are useful for teaching MEI or demonstrating concepts, in formats such as lesson plans, tutorials, cheat sheets, etc. Does that sound like a good option? >> The IG will be drafting some language for this Community-Created section and will share with the MEI-L to get feedback and additional thoughts. >> Best, >> Anna >> Please note: Lilly Music Library hours and additional details can be viewed at https://tischlibrary.tufts.edu/use-library/lilly-music-library. Updates about library services can be found at https://tischlibrary.tufts.edu/about-us/news/2020-03-16-9900. All instruction, meetings, and consultations will be conducted over Zoom. >> Anna E. Kijas >> Head, Lilly Music Library >> Granoff Music Center >> Tufts University >> 20 Talbot Avenue, Medford, MA 02155 >> Pronouns: she, her, hers >> Book an appointment | (617) 627-2846 >> From: mei-l on behalf of Irmlind Capelle >> Reply-To: Music Encoding Initiative >> Date: Tuesday, February 2, 2021 at 5:56 AM >> To: Music Encoding Initiative >> Subject: [MEI-L] Cheat sheets >> Dear all, >> I’m not quite sure if this is the correct place to put my question but I try it. >> On the Website there is the button „resources“ and in it the button „tutorials“. If you click this it is opened „tutorials and related material“. It would be fine if the button would also be extended with „related material“. Then this would be a good place to place there one or more cheat sheets for mei. 
We had two in the Metadata IG session at MEC 2020, and it would be good to have them at an official place so you can link to them in correspondence and literature. >> A cheat sheet is a very good document for comparing different XML structures, and I would be really happy to have it not only on my private desk. >> Best regards >> Irmlind >> Dr. Irmlind Capelle >> Wissenschaftliche Mitarbeiterin >> DFG-Viewer für musikalische Quellen >> bis 1/2021: DFG-Projekt „Entwicklung eines MEI- und TEI-basierten Modells konzeptueller >> Tiefenerschließung von Musikalienbeständen am Beispiel des Detmolder Hoftheaters >> im 19. Jahrhundert (1825-1875)“ www.hoftheater-detmold.de >> Forum Wissenschaft | Bibliothek | Musik >> Hornsche Straße 39 >> 32756 Detmold >> Tel.: +49 5231 975-665 >> Mail: irmlind.capelle at upb.de >> _______________________________________________ >> mei-l mailing list >> mei-l at lists.uni-paderborn.de >> https://lists.uni-paderborn.de/mailman/listinfo/mei-l > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l _______________________________________________ mei-l mailing list mei-l at lists.uni-paderborn.de https://lists.uni-paderborn.de/mailman/listinfo/mei-l -------------- next part -------------- An HTML attachment was scrubbed... URL: From Anna.Kijas at tufts.edu Fri Feb 12 16:04:47 2021 From: Anna.Kijas at tufts.edu (Kijas, Anna E) Date: Fri, 12 Feb 2021 15:04:47 +0000 Subject: [MEI-L] February 19 - Next Meeting of the MEI Digital Pedagogy Interest Group Message-ID: <8164825D-75E3-464A-A06B-D73EAE443EBD@tufts.edu> Hello everyone, We’d like to invite you to the next MEI Digital Pedagogy IG on Friday, February 19, 2021 at 11 AM (EST). Zoom details can be found below and will be posted on the Slack channel. You can view the agenda and notes from our meetings online.
If you have agenda items, please send them to me (anna.kijas at tufts.edu) or Joy Calico (joy.calico at vanderbilt.edu). All best, Anna Kijas and Joy Calico, Administrative co-chairs Zoom details: Meeting URL: https://tufts.zoom.us/j/9420917662?pwd=VnhEWm5aWUd2K0xWRUVEdFFpNW5rdz09 Meeting ID: 942 091 7662 Passcode: 420721 Please note: Lilly Music Library hours and additional details can be viewed at https://tischlibrary.tufts.edu/use-library/lilly-music-library. Updates about library services can be found at https://tischlibrary.tufts.edu/about-us/news/2020-03-16-9900. All instruction, meetings, and consultations will be conducted over Zoom. Anna E. Kijas Head, Lilly Music Library Granoff Music Center Tufts University 20 Talbot Avenue, Medford, MA 02155 Pronouns: she, her, hers Book an appointment | (617) 627-2846 -------------- next part -------------- An HTML attachment was scrubbed... URL: From D.Lewis at gold.ac.uk Thu Feb 11 15:00:55 2021 From: D.Lewis at gold.ac.uk (David Lewis) Date: Thu, 11 Feb 2021 14:00:55 +0000 Subject: [MEI-L] Call for Papers: Digital Libraries for Musicology 2021, July 28-30 In-Reply-To: References: Message-ID: <5B597257-0F45-4836-A198-2CD7193ED3D2@gold.ac.uk> [with apologies for cross-posting] ________________________________ CFP: 8th International Conference on Digital Libraries for Musicology (In Association with IAML 2021), July 28-30, 2021 The Digital Libraries for Musicology (DLfM) conference presents a venue for those engaging with Digital Library systems and content in the domain of music and musicology. It provides a forum for musicians, musicologists, librarians, and technologists to share findings and expertise. 
CALL FOR PAPERS The 8th DLfM conference (https://dlfm.web.ox.ac.uk), to be held online, welcomes contributions related to any aspect of digital libraries and musicology, including topics related to musical archiving and retrieval, cataloguing and classification, musical databases, special collections, music encodings and representations, computational musicology, or music information retrieval (MIR). This year’s conference will be held in association with the online IAML Congress (https://www.iaml.info/congresses/2021-online), and will feature a joint paper session as well as a joint poster session. In bringing these two conferences together we aim to encourage new collaborations and foster larger group discussions surrounding prominent issues in the digital humanities. This year’s theme of “bridging the gap” is designed to bring together scholars working across the niche subfields of digital libraries and humanities, computational musicology, and MIR with the aim of broadening the understanding of the needs, obstacles, and optimal outcomes within each of these subfields in the DLfM community, and how outcomes in one area can best be applied to (or served by) another. The conference strongly encourages papers and posters that address this year’s theme; however, we welcome papers addressing any of the traditional topics that fall within the scope of DLfM. Specific examples of topics traditionally covered at DLfM can be found at https://dlfm.web.ox.ac.uk. We are planning for our proceedings to be published in ACM ICPS as an Open Access publication. In light of the challenges surrounding conference planning and travel during a pandemic, this year’s DLfM conference will be entirely virtual. The conference organizers are striving to make an engaging and interactive conference while we patiently and eagerly anticipate a return to in-person conferences in 2022.
IMPORTANT DATES (AoE)
Abstract submission deadline: March 29, 2021
Paper submission deadline: April 5, 2021
Notification of Acceptance: May 3, 2021
Camera-ready submission deadline: June 13, 2021
Conference: July 28-30

SUBMISSIONS Proceedings Track Submissions We invite full papers (up to 8 pages excluding references) or short papers (up to 4 pages excluding references). An abstract of the proposed paper must be submitted to DLfM via EasyChair by March 29, 2021. Full papers must be submitted to DLfM via EasyChair, following the ACM template, by April 5, 2021. Authors will need to follow the formatting instructions carefully. It is the authors’ responsibility to ensure that their submissions adhere strictly to the required format. Submissions that do not comply with the above requirements may be rejected without review. For paper templates, please refer to the DLfM 2021 website (https://dlfm.web.ox.ac.uk). Page limits for submitted papers apply to all text, but exclude the bibliography (i.e. references can be included on pages over the specified limits). Authors of accepted paper submissions are expected to submit corrected, camera-ready copies, which must be received by June 13, 2021. Papers for each track will be peer reviewed by 2-3 members of the programme committee. For accepted paper submissions, at least one author must register for the conference (as a presenter) by June 13th. All submitted papers must:
* be written in English;
* contain author names, affiliations, and e-mail addresses;
* be formatted according to the appropriate ACM template;
* be in PDF format, and formatted for A4 size.

An additional call for (unpublished) poster presentations will follow. For more detailed submission procedures and information, please visit https://dlfm.web.ox.ac.uk.
Contact email: dlfm2021 at easychair.org CONFERENCE ORGANISATION Programme Chair Claire Arthur, Center for Music Technology, Georgia Tech General Chair David Lewis, Goldsmiths University of London Proceedings and Publicity Chair Néstor Nápoles López, McGill University -------------- next part -------------- An HTML attachment was scrubbed... URL: From stefan.muennich at unibas.ch Fri Feb 19 11:45:04 2021 From: stefan.muennich at unibas.ch (Stefan Münnich) Date: Fri, 19 Feb 2021 10:45:04 +0000 Subject: [MEI-L] MEC2021: Reminder CfP Message-ID: <7b5ab6d1ece84d0a85720d866a861e27@unibas.ch> Dear MEI-L, a gentle reminder that submissions for the Music Encoding Conference 2021 are welcome until March 8 (with an option to update your submissions until March 15). Please use our ConfTool website (www.conftool.net/music-encoding2021) to provide metadata of contributors including name(s) of author(s), affiliation(s) and email address(es), type and title of the submission, and a short one-paragraph abstract. When uploading your anonymized and full-paper submissions for review, please remove all identifying information from the text and PDF before the upload. And please be aware that ConfTool only accepts PDF submissions. If you have already submitted some time ago, we recommend that you check one last time that all the required information and the correct version of your submission are in place. Detailed information about the submission process can be found on the Music Encoding Website: https://music-encoding.org/conference/2021/call/#submissions The members of the program committee look forward to your contributions and send you their very best regards. On behalf of the program committee, Stefan Münnich PS: Credits to David Rizo and his local organizing team and students for the beautiful conference logo and branding.
#mec2021 is now on Twitter via https://twitter.com/MusicEncoding21 [cid:cc3669d4-df3b-49d3-82b3-35ce1e4a8fba] -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Firma_Stefan.png Type: image/png Size: 10190 bytes Desc: Firma_Stefan.png URL: From drizo at dlsi.ua.es Fri Feb 19 12:05:40 2021 From: drizo at dlsi.ua.es (David Rizo Valero) Date: Fri, 19 Feb 2021 12:05:40 +0100 Subject: [MEI-L] MEC2021: Reminder CfP In-Reply-To: <7b5ab6d1ece84d0a85720d866a861e27@unibas.ch> References: <7b5ab6d1ece84d0a85720d866a861e27@unibas.ch> Message-ID: <15A8DA7C-A198-4489-A91C-9A901158C27B@dlsi.ua.es> Thanks Stefan!! What do you think about adding the names of the designers to a section in the webpage? We are not paying anything to them ☺️ > El 19 feb 2021, a las 11:45, Stefan Münnich escribió: > > Dear MEI-L, > > a gentle reminder that submissions for the Music Encoding Conference 2021 are welcome until March 8 (with an option to update your submissions until March 15). > > Please use our ConfTool website (www.conftool.net/music-encoding2021 ) to provide metadata of contributors including name(s) of author(s), affiliation(s) and email address(es), type and title of the submission, and a short one-paragraph abstract. When uploading your anonymized and full-paper submissions for review, please remove all identifying information from the text and PDF before the upload. And please be aware that ConfTool does only accept PDF submissions. If you have already submitted some time ago, we recommend that you check one last time whether all the required information and the correct version of your submission is in place. 
> > Detailed information about the submission process can be found on the Music Encoding Website: https://music-encoding.org/conference/2021/call/#submissions > > The members of the program committee look forward to your contributions and send you their very best regards. > > On behalf of the program committee, > Stefan Münnich > > PS: Credits to David Rizo and his local organizing team and students for the beautiful conference logo and branding. #mec2021 is now on Twitter via https://twitter.com/MusicEncoding21 > > > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l -------------- next part -------------- An HTML attachment was scrubbed... URL: From drizo at dlsi.ua.es Fri Feb 19 12:11:30 2021 From: drizo at dlsi.ua.es (David Rizo Valero) Date: Fri, 19 Feb 2021 12:11:30 +0100 Subject: [MEI-L] MEC2021: Reminder CfP In-Reply-To: <15A8DA7C-A198-4489-A91C-9A901158C27B@dlsi.ua.es> References: <7b5ab6d1ece84d0a85720d866a861e27@unibas.ch> <15A8DA7C-A198-4489-A91C-9A901158C27B@dlsi.ua.es> Message-ID: <83943EC4-1FD8-4990-869C-9B10CE5986A6@dlsi.ua.es> Dear MEI-L, I’m sorry, I sent that mail to the wrong address. To explain the message properly: our students created the graphical design for the event as a class assignment, and I would at least like to credit them explicitly. Best regards, David > On 19 Feb 2021, at 12:05, David Rizo Valero wrote: > > Thanks Stefan!! > > What do you think about adding the names of the designers to a section on the webpage? We are not paying anything to them ☺️ > >> On 19 Feb 2021, at 11:45, Stefan Münnich wrote: >> >> Dear MEI-L, >> >> a gentle reminder that submissions for the Music Encoding Conference 2021 are welcome until March 8 (with an option to update your submissions until March 15).
>> >> Please use our ConfTool website (www.conftool.net/music-encoding2021 ) to provide metadata of contributors including name(s) of author(s), affiliation(s) and email address(es), type and title of the submission, and a short one-paragraph abstract. When uploading your anonymized and full-paper submissions for review, please remove all identifying information from the text and PDF before the upload. And please be aware that ConfTool does only accept PDF submissions. If you have already submitted some time ago, we recommend that you check one last time whether all the required information and the correct version of your submission is in place. >> >> Detailed information about the submission process can be found on the Music Encoding Website: https://music-encoding.org/conference/2021/call/#submissions >> >> The members of the program committee look forward to your contributions and send you their very best regards. >> >> On behalf of the program committee, >> Stefan Münnich >> >> PS: Credits to David Rizo and his local organizing team and students for the beautiful conference logo and branding. #mec2021 is now on Twitter via https://twitter.com/MusicEncoding21 >> >> >> >> _______________________________________________ >> mei-l mailing list >> mei-l at lists.uni-paderborn.de >> https://lists.uni-paderborn.de/mailman/listinfo/mei-l > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From stefan.muennich at unibas.ch Fri Feb 19 12:58:05 2021 From: stefan.muennich at unibas.ch (=?utf-8?B?U3RlZmFuIE3DvG5uaWNo?=) Date: Fri, 19 Feb 2021 11:58:05 +0000 Subject: [MEI-L] MEC2021: Reminder CfP In-Reply-To: <83943EC4-1FD8-4990-869C-9B10CE5986A6@dlsi.ua.es> References: <7b5ab6d1ece84d0a85720d866a861e27@unibas.ch> <15A8DA7C-A198-4489-A91C-9A901158C27B@dlsi.ua.es>, <83943EC4-1FD8-4990-869C-9B10CE5986A6@dlsi.ua.es> Message-ID: <3a570bcfbe02431ead7de42f73690e91@unibas.ch> Hi David, No worries. Please let your design students know that they have done a fantastic job and give them our very warmest thanks! I am quite sure that the members of MEI-L here will agree that it is more than a great idea to have them visible on the conference website. I could also imagine a mini-series on Twitter à la "Meet the designers" to introduce them and their work in more detail... Thanks, Stefan ________________________________ Von: mei-l im Auftrag von David Rizo Valero Gesendet: Freitag, 19. Februar 2021 12:11:30 An: Music Encoding Initiative Betreff: Re: [MEI-L] MEC2021: Reminder CfP Dear MEI-L I’m sorry, I’ve sent the mail to the wrong mail. Just to explain correctly the message, our students have created the graphical design of the event as a class assignment, and at least I wanna credit them explicitly. Best regards, David El 19 feb 2021, a las 12:05, David Rizo Valero > escribió: Thanks Stefan!! What do you think about adding the names of the designers to a section in the webpage? We are not paying anything to them ☺️ El 19 feb 2021, a las 11:45, Stefan Münnich > escribió: Dear MEI-L, a gentle reminder that submissions for the Music Encoding Conference 2021 are welcome until March 8 (with an option to update your submissions until March 15). 
Please use our ConfTool website (www.conftool.net/music-encoding2021) to provide metadata of contributors including name(s) of author(s), affiliation(s) and email address(es), type and title of the submission, and a short one-paragraph abstract. When uploading your anonymized and full-paper submissions for review, please remove all identifying information from the text and PDF before the upload. And please be aware that ConfTool does only accept PDF submissions. If you have already submitted some time ago, we recommend that you check one last time whether all the required information and the correct version of your submission is in place. Detailed information about the submission process can be found on the Music Encoding Website: https://music-encoding.org/conference/2021/call/#submissions The members of the program committee look forward to your contributions and send you their very best regards. On behalf of the program committee, Stefan Münnich PS: Credits to David Rizo and his local organizing team and students for the beautiful conference logo and branding. #mec2021 is now on Twitter via https://twitter.com/MusicEncoding21 _______________________________________________ mei-l mailing list mei-l at lists.uni-paderborn.de https://lists.uni-paderborn.de/mailman/listinfo/mei-l _______________________________________________ mei-l mailing list mei-l at lists.uni-paderborn.de https://lists.uni-paderborn.de/mailman/listinfo/mei-l -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From irmlind.capelle at uni-paderborn.de Fri Feb 19 16:24:48 2021 From: irmlind.capelle at uni-paderborn.de (Irmlind Capelle) Date: Fri, 19 Feb 2021 16:24:48 +0100 Subject: [MEI-L] Cheat sheets In-Reply-To: <24282BE8-B4E2-4725-94CA-3D17177EBCC3@tufts.edu> References: <9ab2e2fb-394a-881a-1a5d-63d4244cae3c@mail.uni-paderborn.de> <1983E0A1-BE0C-4906-8361-96B8AACBBA8D@uni-paderborn.de> <24282BE8-B4E2-4725-94CA-3D17177EBCC3@tufts.edu> Message-ID: <7F6614EC-25C9-4A9B-8DD6-3060816EF71C@uni-paderborn.de> Hello Anna, I am not quite sure if the MEI GitHub is the right place. I would put them in the introduction to the Guidelines, or somewhere like that. (The cheat sheets I mentioned were made by a board member …) Best regards Irmlind > On 11.02.2021 at 20:19, Kijas, Anna E wrote: > > Hello Irmlind, > > Thank you for following up and providing your perspective on the importance of cheat-sheets. I wonder if the MEI GitHub might be a good place for cheat-sheets that have been reviewed for accuracy or in a sense “approved” by the MEI as an organization? > > Best, > Anna > > Please note: Lilly Music Library hours and additional details can be viewed at https://tischlibrary.tufts.edu/use-library/lilly-music-library . Updates about library services can be found at https://tischlibrary.tufts.edu/about-us/news/2020-03-16-9900 . All instruction, meetings, and consultations will be conducted over Zoom. > > Anna E. Kijas > Head, Lilly Music Library > Granoff Music Center > Tufts University > 20 Talbot Avenue, Medford, MA 02155 > Pronouns: she, her, hers > Book an appointment > | (617) 627-2846 > > On 2/10/21, 5:06 AM, "mei-l on behalf of Irmlind Capelle" on behalf of irmlind.capelle at uni-paderborn.de > wrote: > > Dear Anna, > > please excuse me that I didn’t answer immediately: Really it is a good idea to have a place for Community Created Resources on the MEI website.
> I am not quite sure how to handle the difference between MEI Resources and MEI Community-Created Resources but I trust in your discussion between the Board and the Digital Pedagogy IG. However, for me cheat sheets are fundamental materials on MEI and should have really official character and place. > > We will stay in contact. > Best regards > Irmlind > > > Am 02.02.2021 um 13:58 schrieb Joachim Veit : > > > > Dear Anna, > > > > thank you very much - that's a really fine and helpfull idea!! This would also result in fixed URLs for these teaching materials. I just made the experience that a MEI cheat sheet which I cited in an article (now published five years after the conference...) is no longer on the website and thus I produced a broken link in the publication... > > And I would heartly welcome a collection of materials which help us to better teach MEI - because MEI seems to be very useful .... > > > > Best greetings, > > Joachim > > > > > > > > Am 02.02.21 um 13:42 schrieb Kijas, Anna E: > >> Dear Irmlind, > >> That is a great question! At our last Digital Pedagogy Interest Group meeting in January the question also came up about sharing cheat sheets (in addition to other resources) from the MEI community in a central place. The IG is going to suggest the creation of a subpage or section on the MEI website that is called MEI Community-Created Resources. This would be similar to the bibliography page, but focused on identifying resources that may not be typical publications, but are useful for teaching MEI or demonstrating concepts, in formats such as lesson plans, tutorials, cheat sheets, etc. Does that sound like a good option? > >> The IG will be drafting some language for this Community-Created section and will share with the MEI-L to get feedback and additional thoughts. > >> Best, > >> Anna > >> Please note: Lilly Music Library hours and additional details can be viewed at https://tischlibrary.tufts.edu/use-library/lilly-music-library. 
Updates about library services can be found at https://tischlibrary.tufts.edu/about-us/news/2020-03-16-9900. All instruction, meetings, and consultations will be conducted over Zoom. > >> Anna E. Kijas > >> Head, Lilly Music Library > >> Granoff Music Center > >> Tufts University > >> 20 Talbot Avenue, Medford, MA 02155 > >> Pronouns: she, her, hers > >> Book an appointment | (617) 627-2846 > >> From: mei-l on behalf of Irmlind Capelle > >> Reply-To: Music Encoding Initiative > >> Date: Tuesday, February 2, 2021 at 5:56 AM > >> To: Music Encoding Initiative > >> Subject: [MEI-L] Cheat sheets > >> Dear all, > >> I’m not quite sure if this is the correct place to put my question but I try it. > >> On the Website there is the button „resources“ and in it the button „tutorials“. If you click this it is opened „tutorials and related material“. It would be fine if the button would also be extended with „related material“. Then this would be a good place to place there one or more cheat sheets for mei. We had two in the metadata ig sesson at the MEC 2020 and it would be fine to have it at an official place so you can link it in correspondence and literature. > >> A cheat sheet is a very good document to compare different XML structures and I would be really happy to have it not only on my private desc. > >> Best regards > >> Irmlind > >> Dr. Irmlind Capelle > >> Wissenschaftliche Mitarbeiterin > >> DFG-Viewer für musikalische Quellen > >> bis 1/2021: DFG-Projekt „Entwicklung eines MEI- und TEI-basierten Modells konzeptueller > >> Tiefenerschließung von Musikalienbeständen am Beispiel des Detmolder Hoftheaters > >> im 19. 
Jahrhundert (1825-1875)“ www.hoftheater-detmold.de > >> Forum Wissenschaft | Bibliothek | Musik > >> Hornsche Straße 39 > >> 32756 Detmold > >> Tel.: +49 5231 975-665 > >> Mail: irmlind.capelle at upb.de > >> _______________________________________________ > >> mei-l mailing list > >> mei-l at lists.uni-paderborn.de > >> https://lists.uni-paderborn.de/mailman/listinfo/mei-l > > > > _______________________________________________ > > mei-l mailing list > > mei-l at lists.uni-paderborn.de > > https://lists.uni-paderborn.de/mailman/listinfo/mei-l > > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l -------------- next part -------------- An HTML attachment was scrubbed... URL: From b.w.bohl at gmail.com Thu Feb 25 14:15:52 2021 From: b.w.bohl at gmail.com (Benjamin W. 
Bohl) Date: Thu, 25 Feb 2021 14:15:52 +0100 Subject: [MEI-L] Gentle reminder: ODD meeting NOW Message-ID: <6D512903-EF34-4658-B118-0B784619D6DC@gmail.com> Dear all, just a gentle reminder that our regular ODD Meeting is taking place RIGHT NOW at: https://us02web.zoom.us/j/83097885923?pwd=NTZvTXh1S2E1MkdNdi9tV3FKWVpMQT09 Meeting-ID: 830 9788 5923 Passcode: MEI Sorry for posting this late, I thought Slack was sufficient… my bad, promise to send email earlier next time ;-) See you in a minute, Benni From martin.albrecht-hohmaier at web.de Thu Feb 25 14:22:57 2021 From: martin.albrecht-hohmaier at web.de (Martin Albrecht-Hohmaier) Date: Thu, 25 Feb 2021 14:22:57 +0100 Subject: [MEI-L] Gentle reminder: ODD meeting NOW In-Reply-To: <6D512903-EF34-4658-B118-0B784619D6DC@gmail.com> References: <6D512903-EF34-4658-B118-0B784619D6DC@gmail.com> Message-ID: <940A3C88-2D04-48E9-80E6-D7B25D3CC358@web.de> Hi Benni, I have already written to Johannes on Slack; I am at the Metadata IG meeting. It would be very helpful for me if future meetings did not overlap. Thanks and best wishes, Martin (sent from mobile) > On 25.02.2021 at 14:16, Benjamin W. Bohl wrote: > > Dear all, > > just a gentle reminder that our regular ODD Meeting is taking place RIGHT NOW at: > > https://us02web.zoom.us/j/83097885923?pwd=NTZvTXh1S2E1MkdNdi9tV3FKWVpMQT09 > Meeting-ID: 830 9788 5923 > Passcode: MEI > > Sorry for posting this late, I thought Slack was sufficient… my bad, promise to send email earlier next time ;-) > > See you in a minute, > Benni > > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l From b.w.bohl at gmail.com Thu Feb 25 14:30:54 2021 From: b.w.bohl at gmail.com (Benjamin W.
Bohl) Date: Thu, 25 Feb 2021 14:30:54 +0100 Subject: [MEI-L] Gentle reminder: ODD meeting NOW In-Reply-To: <940A3C88-2D04-48E9-80E6-D7B25D3CC358@web.de> References: <6D512903-EF34-4658-B118-0B784619D6DC@gmail.com> <940A3C88-2D04-48E9-80E6-D7B25D3CC358@web.de> Message-ID: <1EDD6415-E9D4-4281-B7A6-E8E85E647F3F@gmail.com> This is a meet too thing then ;-) > On 25. Feb 2021, at 14:22, Martin Albrecht-Hohmaier wrote: > > Hi Benni, > I have already written to Johannes on Slack; I am at the Metadata IG meeting. > It would be very helpful for me if future meetings did not overlap. > Thanks and best wishes, > Martin > > (sent from mobile) > >> On 25.02.2021 at 14:16, Benjamin W. Bohl wrote: >> >> Dear all, >> >> just a gentle reminder that our regular ODD Meeting is taking place RIGHT NOW at: >> >> https://us02web.zoom.us/j/83097885923?pwd=NTZvTXh1S2E1MkdNdi9tV3FKWVpMQT09 >> Meeting-ID: 830 9788 5923 >> Passcode: MEI >> >> Sorry for posting this late, I thought Slack was sufficient… my bad, promise to send email earlier next time ;-) >> >> See you in a minute, >> Benni >> >> >> _______________________________________________ >> mei-l mailing list >> mei-l at lists.uni-paderborn.de >> https://lists.uni-paderborn.de/mailman/listinfo/mei-l > > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l From lindsay.a.warrenburg at gmail.com Fri Feb 26 17:58:14 2021 From: lindsay.a.warrenburg at gmail.com (Lindsay Warrenburg) Date: Fri, 26 Feb 2021 11:58:14 -0500 Subject: [MEI-L] 2-Day Virtual Conference: Future Directions of Music Cognition Message-ID: Dear all, The conference portion of Future Directions of Music Cognition is coming up! On *Saturday & Sunday, March 6-7*, we will have 16 presentation sessions and 2 poster sessions. In addition, we will have 3 meet and greet/networking events at various times to accommodate different time zones.
*Registration is free for everyone!* If you have registered for the speaker series, that registration will also serve as registration for the conference. New registrations for the conference and/or speaker series can be completed at the link below. *Registration:* https://forms.gle/t6BJ81NczKwg7Ejr7 *Program:* http://org.osu.edu/mascats/march-6-7-schedule/ *Social/Networking:* http://org.osu.edu/mascats/social/ *Student Award Information:* http://org.osu.edu/mascats/student-awards/ *Proceedings articles **will be released on March 6 at this link*: http://org.osu.edu/mascats/proceedings/ All the best, Lindsay Warrenburg Lindsey Reymore Daniel Shanahan Joshua Albrecht — *Lindsay Warrenburg, PhD* Music Theory, Cognition, and Perception lindsaywarrenburg.com she/her/hers -------------- next part -------------- An HTML attachment was scrubbed... URL: From napulen at gmail.com Mon Mar 8 19:13:32 2021 From: napulen at gmail.com (=?UTF-8?B?TsOpc3RvciBOw6Fwb2xlcw==?=) Date: Mon, 8 Mar 2021 13:13:32 -0500 Subject: [MEI-L] 2nd CfP, DLfM 2021 Message-ID: Dear all, A reminder about the International Conference on Digital Libraries for Musicology (DLfM 2021 ), which is still accepting submissions: - Initial abstracts until *March 29* - [4 or 8]-page paper submissions until *April 5* This year is in association with IAML2021 ( https://www.iaml.info/congresses/2021-online) and the theme is "bridging the gap" between Digital Music Libraries, Music Information Retrieval, and Computational Musicology. Any relevant topic is welcome. Check the full details on the website: https://dlfm.web.ox.ac.uk/ Kind regards, Néstor -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From Anna.Kijas at tufts.edu Fri Mar 12 14:51:34 2021 From: Anna.Kijas at tufts.edu (Kijas, Anna E) Date: Fri, 12 Mar 2021 13:51:34 +0000 Subject: [MEI-L] March 19: Next Meeting of the Digital Pedagogy Interest Group Message-ID: <4CA2E909-08CC-4B2C-9D57-7C1CB1844F33@tufts.edu> Hello everyone, We’d like to invite you to the next MEI Digital Pedagogy IG on Friday, March 19, 2021 at 11 AM (EST). Zoom details can be found below and will be posted on the Slack channel. You can view the agenda and notes from our meetings online. If you have agenda items to propose, please send them to me (anna.kijas at tufts.edu) or Joy Calico (joy.calico at vanderbilt.edu). All best, Anna Kijas and Joy Calico, Administrative co-chairs Meeting URL: https://tufts.zoom.us/j/9420917662?pwd=VnhEWm5aWUd2K0xWRUVEdFFpNW5rdz09 Meeting ID: 942 091 7662 Passcode: 420721 Please note: Lilly Music Library hours and additional details can be viewed at https://tischlibrary.tufts.edu/use-library/lilly-music-library. Updates about library services can be found at https://tischlibrary.tufts.edu/about-us/news/2020-03-16-9900. All instruction, meetings, and consultations will be conducted over Zoom. Anna E. Kijas Head, Lilly Music Library Granoff Music Center Tufts University 20 Talbot Avenue, Medford, MA 02155 Pronouns: she, her, hers Book an appointment | (617) 627-2846 -------------- next part -------------- An HTML attachment was scrubbed... URL: From claire.arthur81 at gmail.com Mon Mar 22 21:40:31 2021 From: claire.arthur81 at gmail.com (Claire Arthur) Date: Mon, 22 Mar 2021 16:40:31 -0400 Subject: [MEI-L] Final CfP: Digital Libraries for Musicology (DLfM) 2021, July 28-30th Message-ID: CFP: 8th International Conference on Digital Libraries for Musicology (In Association with IAML 2021), July 28-30, 2021 The Digital Libraries for Musicology (DLfM) conference presents a venue for those engaging with Digital Library systems and content in the domain of music and musicology. 
It provides a forum for musicians, musicologists, librarians, and technologists to share findings and expertise. CALL FOR PAPERS The 8th DLfM conference (https://dlfm.web.ox.ac.uk ), to be held online, welcomes contributions related to any aspect of digital libraries and musicology, including topics related to musical archiving and retrieval, cataloguing and classification, musical databases, special collections, music encodings and representations, computational musicology, or music information retrieval (MIR). This year’s conference will be held in association with the online IAML Congress (https://www.iaml.info/congresses/2021-online ), and will feature a joint paper session as well as a joint poster session. In bringing these two conferences together we aim to encourage new collaborations and foster larger group discussions surrounding prominent issues in the digital humanities. This year’s theme of “bridging the gap” is designed to bring together scholars working across the niche subfields of digital libraries and humanities, computational musicology, and MIR with the aim of broadening the understanding of the needs, obstacles, and optimal outcomes within each of these subfields in the DLfM community, and how outcomes in one area can best be applied to (or served by) another. The conference strongly encourages papers and posters that address this year’s theme; however, we welcome papers addressing any of the traditional topics that fall under the scope of DLfM. Specific examples of topics traditionally covered at DLfM can be found at https://dlfm.web.ox.ac.uk . We are pleased to announce our proceedings will again be published in ACM ICPS this year. In light of the challenges surrounding conference planning and travel during a pandemic, this year’s DLfM conference will be entirely virtual. The conference organizers are striving to make the conference engaging and interactive while we patiently and eagerly anticipate a return to in-person conferences in 2022.
IMPORTANT DATES (AoE ) Abstract submission deadline: March 29, 2021 Paper submission deadline: April 5, 2021 Notification of Acceptance: May 3, 2021 Camera-ready submission deadline: June 13, 2021 Conference: July 28-30 SUBMISSIONS Proceedings Track Submissions We invite full papers (up to 8 pages excluding references) or short papers (up to 4 pages excluding references). An abstract of the proposed paper must be submitted to DLfM via EasyChair by March 29, 2021. The full papers are expected to be submitted to DLfM on EasyChair and following the ACM template by April 5, 2021. Authors will need to follow carefully the instructions for formatting. It is the authors’ responsibility to ensure that their submissions adhere strictly to the required format. Submissions that do not comply with the above requirements may be rejected without review. For paper templates, please refer to the DLfM 2021 website( https://dlfm.web.ox.ac.uk ). Page limits for submitted papers apply to all text, but exclude the bibliography (i.e. references can be included on pages over the specified limits). Authors of accepted paper submissions are expected to submit corrected, camera-ready copies, which must be received by June 13, 2021. Papers for each track will be peer reviewed by 2-3 members of the programme committee. For accepted paper submissions, at least one author must register for the conference (as a presenter) by June 13th. All submitted papers must: - be written in English; - contain author names, affiliations, and e-mail addresses; - be formatted according to the appropriate ACM template - be in PDF format, and formatted for A4 size An additional call for (unpublished) poster presentations will follow. For more detailed submission procedures and information, please visit https://dlfm.web.ox.ac.uk . 
Submission link: https://easychair.org/conferences/?conf=dlfm2021 Contact email: dlfm2021 at easychair.org CONFERENCE ORGANISATION Programme Chair Claire Arthur, Center for Music Technology, Georgia Tech General Chair David Lewis, Goldsmiths University of London Proceedings and Publicity Chair Néstor Nápoles López, McGill University ------------------------------ Claire Arthur Assistant Professor, School of Music College of Design Georgia Institute of Technology claire.arthur[at]gatech.edu -------------- next part -------------- An HTML attachment was scrubbed... URL: From kepper at edirom.de Thu Mar 25 21:46:55 2021 From: kepper at edirom.de (Johannes Kepper) Date: Thu, 25 Mar 2021 21:46:55 +0100 Subject: [MEI-L] ODD Meeting tomorrow Message-ID: Dear all, this is just a brief reminder for our next ODD Meeting tomorrow, March 26, at 2pm CET (we're still on winter time here, so only 5h to the east coast). We'll meet at https://us02web.zoom.us/j/83097885923?pwd=NTZvTXh1S2E1MkdNdi9tV3FKWVpMQT09 Meeting-ID: 830 9788 5923 Passcode: MEI Topics for tomorrow can be found at https://github.com/orgs/music-encoding/projects/2#column-13107157, and you're invited to add things there. Of course we can also add other topics during the meeting. We're welcoming everyone, so please join if you want to get involved in the technical details of MEI. Looking forward to seeing you tomorrow, jo --- Just a brief reminder that this is a regular meeting, held once per month. In every odd month, it's on the last Friday; in every even month, it's on the last Thursday (to give people an opportunity to join if they're available on one of those days). Meeting time is always at 2pm European time.
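Kepper's odd/even scheduling rule can be sketched in a few lines of Python; this is my own illustration (the function name is invented), not an existing MEI tool:

```python
from calendar import monthrange
from datetime import date, timedelta

def odd_meeting_date(year: int, month: int) -> date:
    """Last Friday of odd months, last Thursday of even months."""
    target = 4 if month % 2 == 1 else 3  # Mon=0 ... Fri=4, Thu=3
    last = date(year, month, monthrange(year, month)[1])  # last day of month
    return last - timedelta(days=(last.weekday() - target) % 7)

print(odd_meeting_date(2021, 3))   # 2021-03-26 (a Friday)
print(odd_meeting_date(2021, 12))  # 2021-12-30 (a Thursday)
```

Running it for March through December 2021 reproduces the date list that follows.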
Here's a list of all the dates in 2021: 2021-03-26 ODD Friday 2021-04-29 ODD Thursday 2021-05-28 ODD Friday 2021-06-24 ODD Thursday 2021-07-30 ODD Friday (This may be integrated into MEC2021 in Alicante) 2021-08-26 ODD Thursday 2021-09-24 ODD Friday 2021-10-28 ODD Thursday 2021-11-26 ODD Friday 2021-12-30 ODD Thursday From nikolaos.beer at uni-paderborn.de Wed Apr 7 12:27:32 2021 From: nikolaos.beer at uni-paderborn.de (Nikolaos Beer) Date: Wed, 7 Apr 2021 12:27:32 +0200 Subject: [MEI-L] Job offer: Research Assistant at Reger-Werkausgabe Message-ID: <415CCFD6-AAA1-4C7B-B4CE-C52F3AA9E6EE@uni-paderborn.de> Dear list members, the hybrid edition project "Reger-Werkausgabe" (RWA) at the Max-Reger-Institut (MRI) in Karlsruhe/Germany is seeking a research assistant (50% part-time, terms and pay scale "TV-L 13", fixed term until December 31, 2025) to join its team of editors. Application deadline is April 15, 2021. Please see details at: https://www.max-reger-institut.de/media/rwa_2021_en.pdf . RWA is funded by the "Akademie der Wissenschaften und der Literatur, Mainz" (https://www.adwmainz.de ). For more information about RWA and MRI please see the institute's website at https://www.max-reger-institut.de/en/ . Best regards Niko Beer ___________________________________ Nikolaos Beer M.A. Wissenschaftlicher Mitarbeiter Verbundstelle Musikedition Reger-Werkausgabe Universität Paderborn Musikwissenschaftliches Seminar Detmold/Paderborn Hornsche Straße 39 D-32756 Detmold Dienstadresse: Max-Reger-Institut/Elsa-Reger-Stiftung Pfinztalstraße 7 76227 Karlsruhe Fon: +49 - (0)721 - 854 501 @: nikolaos.beer at uni-paderborn.de -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From janek.spaderna at pluto.uni-freiburg.de Thu Apr 8 18:22:48 2021 From: janek.spaderna at pluto.uni-freiburg.de (Janek Spaderna) Date: Thu, 8 Apr 2021 18:22:48 +0200 Subject: [MEI-L] First efforts to bring modal notation to MEI Message-ID: <9F7B8FB8-048E-42A5-BBC9-430BC73AFF88@pluto.uni-freiburg.de> Hello everyone, as a project for university I am looking into the challenges of bringing support for modal notation to MEI. I am quite new to both modal notation and MEI, so please don’t hesitate to let me know in case I mix things up, miss something or end up writing straight up wrong stuff. Instead of directly thinking in terms of a (potential) concrete encoding I would like to discuss the elements and concepts of modal notation first. Below you can find the thoughts I have had so far in this regard. In my opinion thinking this through beforehand can help because we can get a feeling for the level on which an encoding should work: purely visual, or including/requiring some sort of analysis (and which sort?). The only prior discussion about this I could find is a thread initiated by Joshua Stutter about two years ago [1]. (Did I miss something?) It touches on some of the concepts which exist in modal notation but have disappeared in mensural music, but is mostly concerned with finding a concrete encoding for some piece of music. # Visual elements of modal notation The distinctive visual elements are only very few: lines, clefs, notes, tractus and lyrics. Regarding notes there are some special cases: a) notes can be grouped into ligatures b) a note can have a plica attached c) notes can be followed by currentes, making it a coniunctura d) the last note in a ligature can also be coniunctura Cases a) and b) also occur in mensural notation; about c)/d) I am not sure, at least I do not think I have seen a way to encode it in MEI? Also for c) there is a difference in musical meaning whether a note is followed by up to three lozenges or by more than three.
The former corresponds to a ternaria whereas only the latter is truly a coniunctura. A tractus serves multiple purposes: 1. It groups notes into ordines. 2. It indicates syllable changes. 3. It indicates the alignment of different voices in organum passages. # Concepts of modal notation ## Tempora, perfectiones, ordines The rhythmic feeling is based on perfectiones, which consist of three tempora each. If not changed by context, a brevis has a length of one tempus, a longa of two. Having seen only a few transcriptions, I still got the feeling that it is quite common to number the perfectiones. Notes are grouped by tractus into ordines. The duration of an ordo is not fixed and can encompass one or more perfectiones. The tractus which ends an ordo is usually transcribed as a rest with a context-dependent duration. ## Discantus/organum purum On one side there is the discantus, on the other the organum purum. In between the two lives the copula. In discantus passages each voice follows a mode which can be recognized by a specific pattern of ligatures. The mode then tells which notes in the ligatures are longae and which are breves. Additional notes---be it from overlong ligatures not fitting the patterns or coniuncturae---live outside the longa/breve classification. This concept does not apply to organum purum. As I understand it, the most important bit here is finding a way to encode the visual alignment of the voices. Karen Desmond writes in one of her responses to the aforementioned thread on the mailing list [2] > Ideally you would probably want to number the perfections and then you would > simply tag your tenor notes as occurring within a certain perfection. Whilst she notes other problems with this idea, I am wondering whether this would even be feasible in organum purum passages, as I thought we do not know which notes are longae and which are breves.
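As an aside, the tempus arithmetic described above (brevis = 1 tempus, longa = 2, perfectio = 3 tempora, before any contextual alteration) can be sketched as follows; this is purely my own illustration with invented names, not a proposed MEI encoding:

```python
# Durations in tempora, before contextual alteration.
BREVIS, LONGA = 1, 2
TEMPORA_PER_PERFECTIO = 3

def perfectiones(durations):
    """How many perfectiones a run of notes spans, given durations in tempora."""
    return sum(durations) / TEMPORA_PER_PERFECTIO

# The first rhythmic mode alternates longa-brevis, so two longa-brevis
# pairs fill exactly two perfectiones:
print(perfectiones([LONGA, BREVIS, LONGA, BREVIS]))  # 2.0
```

Contextual alteration, and the notes outside the longa/brevis classification mentioned above, would of course need a richer model.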
As I read it, Joshua shares my sentiment [3] > I'm against tagging in a particular perfection as that is implying that the > music proceeds in a constant modal rhythm and has length, which may not be > exactly correct. The copula is used to connect discantus passages with organum purum passages. During these connecting sections the duplum operates as in discantus sections whereas the tenor holds notes as in organum purum passages. Overall it can be said that a way to encode alignment is important in organum purum and copula passages. In discantus passages however this is not necessary as the modal rhythm used in all voices carries enough information; moreover the visual alignment usually does not even correspond to the musical alignment. ------------------------------- What are your thoughts so far? I am looking forward to your feedback! Best Janek [1]: https://lists.uni-paderborn.de/pipermail/mei-l/2019/002268.html Joshua Stutter’s initial message [2]: https://lists.uni-paderborn.de/pipermail/mei-l/2019/002272.html Karen Desmond in response to Joshua [3]: https://lists.uni-paderborn.de/pipermail/mei-l/2019/002280.html Joshua in response to Karen From Anna.Kijas at tufts.edu Mon Apr 12 21:04:10 2021 From: Anna.Kijas at tufts.edu (Kijas, Anna E) Date: Mon, 12 Apr 2021 19:04:10 +0000 Subject: [MEI-L] MEI Pedagogy Interest Group Meeting this Friday, 4/16 Message-ID: We’d like to invite you to the next MEI Digital Pedagogy IG this Friday, April 16, 2021 at 11 AM (EST). Zoom details can be found below and will be posted on the Slack channel. You can view the agenda and notes from our meetings online. If you have agenda items to propose, please send them to me (anna.kijas at tufts.edu) or Joy Calico (joy.calico at vanderbilt.edu). 
All best, Anna Kijas and Joy Calico, Administrative co-chairs Meeting URL: https://tufts.zoom.us/j/9420917662?pwd=VnhEWm5aWUd2K0xWRUVEdFFpNW5rdz09 Meeting ID: 942 091 7662 Passcode: 420721 Please note: Lilly Music Library hours and additional details can be viewed at https://tischlibrary.tufts.edu/use-library/lilly-music-library. Updates about library services can be found at https://tischlibrary.tufts.edu/about-us/news/2020-03-16-9900. All instruction, meetings, and consultations will be conducted over Zoom. Anna E. Kijas Head, Lilly Music Library Granoff Music Center Tufts University 20 Talbot Avenue, Medford, MA 02155 Pronouns: she, her, hers Book an appointment | (617) 627-2846 -------------- next part -------------- An HTML attachment was scrubbed... URL: From b.w.bohl at gmail.com Tue Apr 13 16:30:52 2021 From: b.w.bohl at gmail.com (Benjamin W. Bohl) Date: Tue, 13 Apr 2021 16:30:52 +0200 Subject: [MEI-L] Rename the MEI master-branch Message-ID: Dear MEI Community, following a suggestion by the Software Freedom Conservancy, GitHub renamed their master branch to main in order to avoid potentially offensive vocabulary or allusions to slavery. MEI would like to follow this lead and rename the master branch of https://github.com/music-encoding/music-encoding and other repositories where applicable. Following the discussion on GitHub (https://github.com/music-encoding/music-encoding/issues/776), the Technical Team set up this poll to take in the community's votes on a closed list of potential new names for our current master branch, used to disseminate tagged versions (e.g. MEI 3.0.0, MEI 4.0.0, MEI 4.0.1). Please cast your vote until 2021-04-28 using the form available at: https://abstimmung.dfn.de/tNOBDWgWAFtVz6lr On behalf of the MEI Board and Technical Team, Benjamin W. Bohl MEI Technical Co-chair -------------- next part -------------- An HTML attachment was scrubbed...
URL: From Anna.Kijas at tufts.edu Tue Apr 13 16:45:21 2021 From: Anna.Kijas at tufts.edu (Kijas, Anna E) Date: Tue, 13 Apr 2021 14:45:21 +0000 Subject: [MEI-L] Rename the MEI master-branch In-Reply-To: References: Message-ID: Thank you, Benjamin for this! Here is some additional context for folks who may not be following these conversations, https://www.nytimes.com/2021/04/13/technology/racist-computer-engineering-terms-ietf.html. Also I’d like to share a guide created by several of my colleagues at the Association for Computers and the Humanities - https://ach.org/toward-anti-racist-technical-terminology/ - which addresses racist technical terminology. We also have an open bibliography on Zotero for Inclusive Technology - https://www.zotero.org/groups/2554430/ach_inclusive_technology. Best, Anna Please note: Lilly Music Library hours and additional details can be viewed at https://tischlibrary.tufts.edu/use-library/lilly-music-library. Updates about library services can be found at https://tischlibrary.tufts.edu/about-us/news/2020-03-16-9900. All instruction, meetings, and consultations will be conducted over Zoom. Anna E. Kijas Head, Lilly Music Library Granoff Music Center Tufts University 20 Talbot Avenue, Medford, MA 02155 Pronouns: she, her, hers Book an appointment | (617) 627-2846 From: mei-l on behalf of "Benjamin W. Bohl" Reply-To: Music Encoding Initiative Date: Tuesday, April 13, 2021 at 10:32 AM To: MEI-L Subject: [MEI-L] Rename the MEI master-branch Dear MEI Community, following a suggestion by the Software Freedom Conservancy GitHub renamed their master-branch to main in order to avoid potentially offensive vocabulary or allusions to slavery. MEI would like to follow this lead and rename the master-branch of https://github.com/music-encoding/music-encoding and other repositories where applicable. 
Following the discussion on GitHub (https://github.com/music-encoding/music-encoding/issues/776) the Technical Team set up this poll to take in the community's votes on a closed list of potential new names for our current master-branch, used to disseminate tagged versions (e.g. MEI 3.0.0, MEI 4.0.0 MEI 4.0.1). Please cast your vote until 2021-04-28 using the form available at: https://abstimmung.dfn.de/tNOBDWgWAFtVz6lr On behalf of the MEI Board and Technical Team, Benjamin W. Bohl MEI Technical Co-chair -------------- next part -------------- An HTML attachment was scrubbed... URL: From b.w.bohl at gmail.com Tue Apr 13 16:46:53 2021 From: b.w.bohl at gmail.com (Benjamin W. Bohl) Date: Tue, 13 Apr 2021 16:46:53 +0200 Subject: [MEI-L] Rename the MEI master-branch In-Reply-To: References: Message-ID: <3CBD7E35-E5C9-4A00-9FA2-9107B0E0BCD8@gmail.com> Dear Anna, Thanks for this valuable addition ;-) /Benni > On 13. Apr 2021, at 16:45, Kijas, Anna E wrote: > > Thank you, Benjamin for this! Here is some additional context for folks who may not be following these conversations,https://www.nytimes.com/2021/04/13/technology/racist-computer-engineering-terms-ietf.html. Also I’d like to share a guide created by several of my colleagues at the Association for Computers and the Humanities - https://ach.org/toward-anti-racist-technical-terminology/ - which addresses racist technical terminology. We also have an open bibliography on Zotero for Inclusive Technology -https://www.zotero.org/groups/2554430/ach_inclusive_technology. > > Best, > Anna > > Please note: Lilly Music Library hours and additional details can be viewed athttps://tischlibrary.tufts.edu/use-library/lilly-music-library. Updates about library services can be found at https://tischlibrary.tufts.edu/about-us/news/2020-03-16-9900. All instruction, meetings, and consultations will be conducted over Zoom. > > Anna E. 
Kijas > Head, Lilly Music Library > Granoff Music Center > Tufts University > 20 Talbot Avenue, Medford, MA 02155 > Pronouns: she, her, hers > Book an appointment | (617) 627-2846 > > From: mei-l on behalf of "Benjamin W. Bohl" > Reply-To: Music Encoding Initiative > Date: Tuesday, April 13, 2021 at 10:32 AM > To: MEI-L > Subject: [MEI-L] Rename the MEI master-branch > > Dear MEI Community, > > following a suggestion by the Software Freedom Conservancy GitHub renamed their master-branch to main in order to avoid potentially offensive vocabulary or allusions to slavery. > > MEI would like to follow this lead and rename the master-branch of https://github.com/music-encoding/music-encoding and other repositories where applicable. Following the discussion on GitHub (https://github.com/music-encoding/music-encoding/issues/776) the Technical Team set up this poll to take in the community's votes on a closed list of potential new names for our current master-branch, used to disseminate tagged versions (e.g. MEI 3.0.0, MEI 4.0.0 MEI 4.0.1). > > Please cast your vote until 2021-04-28 using the form available at: > https://abstimmung.dfn.de/tNOBDWgWAFtVz6lr > > On behalf of the MEI Board and Technical Team, > Benjamin W. Bohl > MEI Technical Co-chair > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l From j.ingram at netcologne.de Wed Apr 14 21:49:00 2021 From: j.ingram at netcologne.de (James Ingram) Date: Wed, 14 Apr 2021 21:49:00 +0200 Subject: [MEI-L] Tick-based Timing Message-ID: Last January, I raised an issue about Tick-based Timing in the W3C Music Notation Community Group's MNX Repository [1], but it was closed in February without my being satisfied that it had been sufficiently discussed. 
I had the feeling that something important was being glossed over, so have been thinking hard about the subject over the past few weeks, and have now uploaded an article about it to my website [2]. My conclusions are that Tick-based Timing * has to do with the difference between absolute (mechanical, physical) time and performance practice, * is relevant to the encoding of *all* the world's event-based music notations, not just CWMN1900. * needs to be considered for the next generation of music encoding formats I would especially like to get some feedback from those working on non-western notations, so am posting this not only to the W3C MNCG's public mailing list, but also to MEI's. All the best, James Ingram (notator) [1] MNX Issue #217: https://github.com/w3c/mnx/issues/217 [2] https://james-ingram-act-two.de/writings/TickBasedTiming/tickBasedTiming.html -- https://james-ingram-act-two.de https://github.com/notator -------------- next part -------------- An HTML attachment was scrubbed... URL: From bureau at tradmus.org Fri Apr 16 15:35:38 2021 From: bureau at tradmus.org (Simon Wascher) Date: Fri, 16 Apr 2021 15:35:38 +0200 Subject: [MEI-L] Fwd: Tick-based Timing References: <1c13f78c-fc4e-9a0c-77f5-fa863d231de5@netcologne.de> Message-ID: Hi all, On 14.04.2021 at 21:49, James Ingram wrote: >>> Last January, I raised an issue about Tick-based Timing in the W3C Music Notation Community Group's MNX Repository [1], but it was closed in February without my being satisfied that it had been sufficiently discussed. >>> I had the feeling that something important was being glossed over, so have been thinking hard about the subject over the past few weeks, and have now uploaded an article about it to my website [2]. >>> [...] On 16.04.2021 at 11:06, James Ingram wrote: > First: Did you intend your reply just to be private, or did you want to send it to the public list as well? I'm not sure. > If you'd like this all to be public, I could send this to MEI-L as well... I answered James Ingram off-list, but now move to MEI-L with my answer, as it seems the intention was to get answers on the list. My full first answer to James Ingram is down at the end of this mail, if someone is interested. (I did not forward James Ingram's reply to me in full, as I did not want to forward someone else's private answer to the public.) On 15.04.2021 at 01:03, Simon Wascher wrote: >> I would like to point you at Lauge Dideriksen's approach of notating music with CWMN symbols but positioned according to the musical timing. I suppose that is a comparable approach to yours.
On 16.04.2021 at 11:06, James Ingram wrote: > I took a look at Lauge Dideriksen's website, but can't see enough music examples to know quite what you mean by "positioned according to musical timing". I see that he (sometimes?) uses tuplets. Does he (sometimes?) position the symbols in (flat) space to represent (flat) time? > In either case, that's not quite what I'm saying. I'm talking about the underlying machine-readable encoding of notation, not just the way it looks on the surface. Maybe there is still no material about this online. He is talking about this at European Voices VI in Vienna 27–30 September 2021. It might be sensible to contact him directly. On 15.04.2021 at 01:03, Simon Wascher wrote: >> Do you consider tick-based notation to be a way to represent phonemic and phonetic notation (interpretation) at the same time? On 16.04.2021 at 11:06, James Ingram wrote: > Here again, I'm not quite sure what you mean. Perhaps it would help if I again emphasise the difference between the surface appearance of a notation and its machine-readable encoding. I see, your focus seems to be on machine-readability and the problem of the relation between CWMN and its machine playback. My focus is the problem of the relation between CWMN and real live performance. I am looking for tools to code real live performances, using the symbols of CWMN but allowing the _display_ of the real live durations of the real live performance (the difference between real live performance and CWMN).
> I think that aural traditions correctly ignore machine time (seconds, milliseconds), but that if we use machines to record them, we ultimately have to use such timings (in the machines). I don't think that matters, providing that the transcribers don't try to impose machine time (in the form of beats per second) too literally on their interpretations of the original performances. Well, to be precise: in transcribing music, there is (at least) three points of view (versions of notation): 1. the musician's "emic" intention 2. the machines "phonetic" protocol (which can be automatically transformed to a duration and pitch notation applying a certain level of accuracy, but which cannot know about light and heavy time and barlines, as these are cultural phenomenons. The level of accuracy is indeed already a cultural decission, but: If the transformation is not into CWMN but for example into a time/pitch chart of the fundamental frequencies the limits of readability of CWMN do not apply. 3. the transcribers intention, which usually is called "etic" but is in fact "emic" to the transcriber. (emic and etic is not my favorite wording) (I am not worring about the composer, as in my field music gets composed sounding, the composer is a musician here.) Am 16.04.2021 um 11:06 schrieb James Ingram : > "Stress programs" in abc-notation: > I can't find any references to "stress programs" at the abc site [2], Ah, you are right, that is a kind of de facto standard, which is weakly documented. It is interpreted by abc2midi and BarFly (and maybe other programs). It makes use of the R:header of abc. Either the stress program is written there directly or in an external file. Here is one of the stress programs I use: * 37 Mazurka 3/4 3/4=35 6 120 1.4 100 0.6 110 1.2 100 0.8 115 1.32 100 0.67 so that is: "*37" it starts with a number (that does not do anything). 
"Mazurka" is the identifying string used in the R: header field, connecting the abc file and the stress program for the playback program. "3/4" is the meter. The stress program only applies to abc notation in this meter, so there may be stress programs with the same name but for different meters. "3/4=35" is the tempo indication. "6" is the number of sections the bar is split into in this stress program (a free choice). So it should be followed by that number of describing lines. "120 1.4" describes the first section of the bar. "120" is the volume (between 0-127); "1.4" is the multiplier, the core of the thing, so to say: it says the duration of the first sixth of the notated bar is to be played 1.4 times as long as it would be played at the given metronome tempo. "100 0.6" and so on. I attached BarFly's "Stress Programs" file, which also contains the description provided by the author of BarFly, Phil Taylor. (I personally would prefer it if this mechanism were not limited to one single bar, but could be used to describe durations over a chosen number of bars/beats.) So, thanks for the moment, and feel free to tell me if I shall not send these longish and maybe not very clever e-mails to this list. Thanks, Health, Simon Wascher Begin forwarded message: > From: Simon Wascher > Subject: Re: [MEI-L] Tick-based Timing > Date: 15 April 2021 01:03:32 CEST > To: James Ingram > > Hello, > > reading your post to the MEI mailing list (I am not active in MEI) I started to read your text >> [2] https://james-ingram-act-two.de/writings/TickBasedTiming/tickBasedTiming.html > > > and would like to just add my two cents of ideas about >> Ticks carry both temporal and spatial information. >> In particular, synchronous events at the beginning of a bar have the same tick.time, so: >> The (abstract) tick durations of the events in parallel voices in a bar, add up to the same value. >> In other words: >> Bars “add up” in (abstract) ticks. 
>> The same is true for parallel voices in systems (that are as wide as the page allows) even when there are no barlines, so: >> Systems also “add up” in (abstract) ticks. > > * First I would like to point you at Lauge Dideriksen's approach of notating music with CWMN symbols but positioned according to the musical timing. I suppose that is a comparable approach to yours. > > * About barlines I would like to add that barlines also represent human perception (the composer's, the musician's, the listener's or the transcriber's), as barlines do not exist in the audio signal. > Barlines do not need to align. It is the music as a whole that keeps a common pace (the musicians stay together, but not necessarily at beats or barlines). > It is even possible to play along with completely different barlines in mind; that really happens, I experienced it myself. > > * Do you consider tick-based notation to be a way to represent phonemic and phonetic notation (interpretation) at the same time? > > * In many musical styles of traditional music, also in Europe, there are severe differences between emic and etic music perception. Typical and well-known examples are Polska playing in traditions of Scandinavia or the problems of scientific notation of Jodler/Jodel (Jodler interpretation has a very loose relation to beat). If you are looking for examples of perfect common pace in a music that treats the tension of timing between the ensemble members as a carrier of musical expression, have a look at central Polish traditional instrumental dance music. > > * About notational approaches: are you aware of the "Stress Programs" used with abc notation to describe microtiming? It is a method where the bar is split up into a freely chosen number of fractions described by multipliers (1 is the standard length of one fraction, so 0.76 makes a fraction 0.76 times as long and 1.43 makes it 1.43 times as long as standard). 
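The stress-program mechanism described in the messages above can be sketched in a few lines. This is a minimal illustration only, not abc2midi's or BarFly's actual implementation; the function name `apply_stress` and the six-second bar length are invented for the example.

```python
# Minimal sketch of applying an abc "stress program" to one bar.
# The bar is divided into n equal sections; each section's nominal
# duration is scaled by its multiplier, and each carries its own volume.

def apply_stress(bar_seconds, sections):
    """sections: list of (volume, multiplier) pairs, one per bar fraction."""
    nominal = bar_seconds / len(sections)  # duration of one unstressed section
    return [(volume, nominal * mult) for volume, mult in sections]

# The Mazurka program quoted above: a 3/4 bar split into 6 sections.
mazurka = [(120, 1.4), (100, 0.6), (110, 1.2), (100, 0.8), (115, 1.32), (100, 0.67)]
timed = apply_stress(6.0, mazurka)  # assume the whole bar nominally lasts 6 s
```

Note that the six multipliers sum to 5.99, very close to the number of sections, so the overall bar length (and thus the notated tempo) is almost preserved while the timing inside the bar is bent.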
> > Not sure if this is meeting your intentions, > Thanks, > Health, > > Simon Wascher, (Vienna; musician, transcriber of historical music notation; researcher in folk music) > > > > > Am 14.04.2021 um 21:49 schrieb James Ingram : > >> Last January, I raised an issue about Tick-based Timing in the W3C Music Notation Community Group's MNX Repository [1], but it was closed in February without my being satisfied that it had been sufficiently discussed. >> I had the feeling that something important was being glossed over, so have been thinking hard about the subject over the past few weeks, and have now uploaded an article about it to my website [2]. >> My conclusions are that Tick-based Timing >> • has to do with the difference between absolute (mechanical, physical) time and performance practice, >> • is relevant to the encoding of all the world's event-based music notations, not just CWMN1900. >> • needs to be considered for the next generation of music encoding formats >> I would especially like to get some feedback from those working on non-western notations, so am posting this not only to the W3C MNCG's public mailing list, but also to MEI's. >> All the best, >> James Ingram >> (notator) >> [1] MNX Issue #217: https://github.com/w3c/mnx/issues/217 >> [2] https://james-ingram-act-two.de/writings/TickBasedTiming/tickBasedTiming.html >> >> -- >> https://james-ingram-act-two.de >> https://github.com/notator >> _______________________________________________ >> mei-l mailing list >> mei-l at lists.uni-paderborn.de >> https://lists.uni-paderborn.de/mailman/listinfo/mei-l > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Stress_Programs Type: application/octet-stream Size: 13675 bytes Desc: not available URL: -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From kepper at edirom.de Fri Apr 16 18:02:45 2021 From: kepper at edirom.de (Johannes Kepper) Date: Fri, 16 Apr 2021 18:02:45 +0200 Subject: [MEI-L] Tick-based Timing In-Reply-To: References: <1c13f78c-fc4e-9a0c-77f5-fa863d231de5@netcologne.de> Message-ID: <7A76A059-A268-42A0-9553-A4E23D796F73@edirom.de> Dear all, I’m not much into this discussion, and haven’t really looked into the use cases behind this, so my answer may not be appropriate for the question asked. However, I believe that most of the requirements articulated here are safely covered by MEI. Looking at the attributes available on notes (see https://music-encoding.org/guidelines/v4/elements/note.html#attributes), there are plenty of different approaches available: @dur – Records the duration of a feature using the relative durational values provided by the data.DURATION datatype. @dur.ges – Records performed duration information that differs from the written duration. @dur.metrical – Duration as a count of units provided in the time signature denominator. @dur.ppq – Duration recorded as pulses-per-quarter note, e.g. MIDI clicks or MusicXML divisions. @dur.real – Duration in seconds, e.g. '1.732'. @dur.recip – Duration as an optionally dotted Humdrum *recip value. In addition, there are also @tstamp – Encodes the onset time in terms of musical time, i.e., beats[.fractional beat part], as expressed in the written time signature. @tstamp.ges – Encodes the onset time in terms of musical time, i.e., beats[.fractional beat part], as expressed in the written time signature. @tstamp.real – Records the onset time in terms of ISO time. @to – Records a timestamp adjustment of a feature's programmatically-determined location in terms of musical time; that is, beats. @synch – Points to elements that are synchronous with the current element. @when – Indicates the point of occurrence of this feature along a time line. Its value must be the ID of a when element elsewhere in the document. 
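As a purely illustrative sketch of how these attributes can be combined (all values are invented for this example; the attribute semantics are those quoted above), a note whose performed timing differs from its written value might look like this:

```xml
<!-- Hypothetical example: a written quarter note (dur="4") that is
     performed slightly longer (dur.real, in seconds), located at
     beat 1 in musical time (tstamp) but sounding at 1.08 seconds
     of clock time (tstamp.real). -->
<note pname="c" oct="4" dur="4" dur.real="0.612"
      tstamp="1" tstamp.real="00:00:01.08"/>
```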
They’re all for slightly different purposes, and surely many of those attributes are not (well) supported by existing software, but they seem to offer good starting points to find a model for the questions asked. It is important to keep in mind that music manifests in various forms – sound, notation, concepts (what _is_ a quarter?), and that MEI tries to treat those "domains" as independently as possible. Of course, they’re all connected, but not being specific (enough) in that regard did no good to other formats… Hope this helps, jo > On 16.04.2021 at 15:35, Simon Wascher wrote: > > Hi everyone, > > On 14.04.2021 at 21:49, James Ingram wrote: >>>> Last January, I raised an issue about Tick-based Timing in the W3C Music Notation Community Group's MNX Repository [1], but it was closed in February without my being satisfied that it had been sufficiently discussed. >>>> I had the feeling that something important was being glossed over, so have been thinking hard about the subject over the past few weeks, and have now uploaded an article about it to my website [2]. >>>> [...] > > On 16.04.2021 at 11:06, James Ingram wrote: >> First: Did you intend your reply just to be private, or did you want to send it to the public list as well? I'm not sure. >> If you'd like this all to be public, I could send this to MEI-L as well... > > I answered James Ingram off-list, but now move to MEI-L with my answer, as it seems the intention was to get answers on the list. > My full first answer to James Ingram is down at the end of this mail, if someone is interested. (I did not forward James Ingram's reply to me in full, as I did not want to forward someone else's private answer to me to the public list.) > > On 15.04.2021 at 01:03, Simon Wascher wrote: >>> I would like to point you at Lauge Dideriksen's approach of notating music with CWMN symbols but positioned according to the musical timing. I suppose that is a comparable approach to yours. 
> On 16.04.2021 at 11:06, James Ingram wrote: >> I took a look at Lauge Dideriksen's website, but can't see enough music examples to know quite what you mean by "positioned according to musical timing". I see that he (sometimes?) uses tuplets. Does he (sometimes?) position the symbols in (flat) space to represent (flat) time? >> In either case, that's not quite what I'm saying. I'm talking about the underlying machine-readable encoding of notation, not just the way it looks on the surface. > > Maybe there is no material about this online yet. He is talking about it at European Voices VI in Vienna, 27–30 September 2021. > It might be sensible to contact him directly. > > On 15.04.2021 at 01:03, Simon Wascher wrote: >>> Do you consider tick-based notation to be a way to represent phonemic and phonetic notation (interpretation) at the same time? > On 16.04.2021 at 11:06, James Ingram wrote: >> Here again, I'm not quite sure what you mean. Perhaps it would help if I again emphasise the difference between the surface appearance of a notation and its machine-readable encoding. > > I see: your focus seems to be on machine-readability and the problem of the relation between CWMN and its machine playback. > My focus is the problem of the relation between CWMN and real live performance. > I am looking for tools to encode real live performances, using the symbols of CWMN but allowing the _display_ of the real live durations of the performance (the difference between the real live performance and CWMN). > > > On 16.04.2021 at 11:06, James Ingram wrote: >> You ask about emic and etic, and the problem of notating traditional Scandinavian Polska or jodling: >> To get us on the same page, here's where I am: Transcriptions of music that is in an aural tradition always reflect what the transcriber thinks is important. Transcriptions often leave out nuances (timing, tonal inflexions etc.) that the original performers and their public would regard as essential. 
>> I think that aural traditions correctly ignore machine time (seconds, milliseconds), but that if we use machines to record them, we ultimately have to use such timings (in the machines). I don't think that matters, providing that the transcribers don't try to impose machine time (in the form of beats per second) too literally on their interpretations of the original performances. > > Well, to be precise: in transcribing music, there are (at least) three points of view (versions of notation): > > 1. the musician's "emic" intention > 2. the machine's "phonetic" protocol (which can be automatically transformed to a duration and pitch notation applying a certain level of accuracy, but which cannot know about light and heavy time or barlines, as these are cultural phenomena). The level of accuracy is indeed already a cultural decision, but: if the transformation is not into CWMN but, for example, into a time/pitch chart of the fundamental frequencies, the limits of readability of CWMN do not apply. > 3. the transcriber's intention, which usually is called "etic" but is in fact "emic" to the transcriber. > (emic and etic is not my favorite wording) > (I am not worrying about the composer, as in my field music gets composed sounding; the composer is a musician here.) > > On 16.04.2021 at 11:06, James Ingram wrote: >> "Stress programs" in abc-notation: >> I can't find any references to "stress programs" at the abc site [2], > > Ah, you are right: that is a kind of de facto standard, which is weakly documented. > It is interpreted by abc2midi and BarFly (and maybe other programs). > It makes use of the R: header of abc. Either the stress program is written there directly or in an external file. > Here is one of the stress programs I use: > > * 37 > Mazurka > 3/4 > 3/4=35 > 6 > 120 1.4 > 100 0.6 > 110 1.2 > 100 0.8 > 115 1.32 > 100 0.67 > > so that is: > > "*37": it starts with a number (which does not do anything). 
> "Mazurka" is the identifying string used in the R: header field, connecting the abc file and the stress program for the playback program. > "3/4" is the meter. The stress program only applies to abc notation in this meter, so there may be stress programs with the same name but for different meters. > "3/4=35" is the tempo indication. > "6" is the number of sections the bar is split into in this stress program (a free choice). So it should be followed by that number of describing lines. > "120 1.4" describes the first section of the bar. "120" is the volume (between 0-127); "1.4" is the multiplier, the core of the thing, so to say: it says the duration of the first sixth of the notated bar is to be played 1.4 times as long as it would be played at the given metronome tempo. > "100 0.6" and so on. > > I attached BarFly's "Stress Programs" file, which also contains the description provided by the author of BarFly, Phil Taylor. > (I personally would prefer it if this mechanism were not limited to one single bar, but could be used to describe durations over a chosen number of bars/beats.) > > > So, thanks for the moment, > and feel free to tell me if I shall not send these longish and maybe not very clever e-mails to this list. > > Thanks, > Health, > Simon > Wascher > > Begin forwarded message: >> From: Simon Wascher >> Subject: Re: [MEI-L] Tick-based Timing >> Date: 15 April 2021 01:03:32 CEST >> To: James Ingram >> >> Hello, >> >> reading your post to the MEI mailing list (I am not active in MEI) I started to read your text >>> [2] https://james-ingram-act-two.de/writings/TickBasedTiming/tickBasedTiming.html >> >> >> and would like to just add my two cents of ideas about >>> Ticks carry both temporal and spatial information. >>> In particular, synchronous events at the beginning of a bar have the same tick.time, so: >>> The (abstract) tick durations of the events in parallel voices in a bar, add up to the same value. 
>>> In other words: >>> Bars “add up” in (abstract) ticks. >>> The same is true for parallel voices in systems (that are as wide as the page allows) even when there are no barlines, so: >>> Systems also “add up” in (abstract) ticks. >> >> * First I would like to point you at Lauge Dideriksen's approach of notating music with CWMN symbols but positioned according to the musical timing. I suppose that is a comparable approach to yours. >> >> * About barlines I would like to add that barlines also represent human perception (the composer's, the musician's, the listener's or the transcriber's), as barlines do not exist in the audio signal. >> Barlines do not need to align. It is the music as a whole that keeps a common pace (the musicians stay together, but not necessarily at beats or barlines). >> It is even possible to play along with completely different barlines in mind; that really happens, I experienced it myself. >> >> * Do you consider tick-based notation to be a way to represent phonemic and phonetic notation (interpretation) at the same time? >> >> * In many musical styles of traditional music, also in Europe, there are severe differences between emic and etic music perception. Typical and well-known examples are Polska playing in traditions of Scandinavia or the problems of scientific notation of Jodler/Jodel (Jodler interpretation has a very loose relation to beat). If you are looking for examples of perfect common pace in a music that treats the tension of timing between the ensemble members as a carrier of musical expression, have a look at central Polish traditional instrumental dance music. >> >> * About notational approaches: are you aware of the "Stress Programs" used with abc notation to describe microtiming? It is a method where the bar is split up into a freely chosen number of fractions described by multipliers (1 is the standard length of one fraction, so 0.76 makes a fraction 0.76 times as long and 1.43 makes it 1.43 times as long as standard). 
>> >> Not sure if this is meeting your intentions, >> Thanks, >> Health, >> >> Simon Wascher, (Vienna; musician, transcriber of historical music notation; researcher in folk music) >> >> >> >> >> Am 14.04.2021 um 21:49 schrieb James Ingram : >> >>> Last January, I raised an issue about Tick-based Timing in the W3C Music Notation Community Group's MNX Repository [1], but it was closed in February without my being satisfied that it had been sufficiently discussed. >>> I had the feeling that something important was being glossed over, so have been thinking hard about the subject over the past few weeks, and have now uploaded an article about it to my website [2]. >>> My conclusions are that Tick-based Timing >>> • has to do with the difference between absolute (mechanical, physical) time and performance practice, >>> • is relevant to the encoding of all the world's event-based music notations, not just CWMN1900. >>> • needs to be considered for the next generation of music encoding formats >>> I would especially like to get some feedback from those working on non-western notations, so am posting this not only to the W3C MNCG's public mailing list, but also to MEI's. >>> All the best, >>> James Ingram >>> (notator) >>> [1] MNX Issue #217: https://github.com/w3c/mnx/issues/217 >>> [2] https://james-ingram-act-two.de/writings/TickBasedTiming/tickBasedTiming.html >>> >>> -- >>> https://james-ingram-act-two.de >>> https://github.com/notator >>> _______________________________________________ >>> mei-l mailing list >>> mei-l at lists.uni-paderborn.de >>> https://lists.uni-paderborn.de/mailman/listinfo/mei-l >> > > > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l Dr. Johannes Kepper Wissenschaftlicher Mitarbeiter Beethovens Werkstatt: Genetische Textkritik und Digitale Musikedition Musikwiss. 
Seminar Detmold / Paderborn | Hornsche Straße 39 | D-32756 Detmold kepper at beethovens-werkstatt.de | +49 (0) 5231 / 975669 www.beethovens-werkstatt.de Forschungsprojekt gefördert durch die Akademie der Wissenschaften und der Literatur | Mainz -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: Message signed with OpenPGP URL: From esfield at stanford.edu Fri Apr 16 22:34:16 2021 From: esfield at stanford.edu (Eleanor Selfridge-Field) Date: Fri, 16 Apr 2021 20:34:16 +0000 Subject: [MEI-L] First efforts to bring modal notation to MEI In-Reply-To: <6E75491B-688B-401D-9CCD-C99833D60D6D@pluto.uni-freiburg.de> References: <6E75491B-688B-401D-9CCD-C99833D60D6D@pluto.uni-freiburg.de> Message-ID: If you look broadly at other music-encoding systems, you will find periodic coverage in our yearbook Computing in Musicology (1985-2008), which is indexed in RILM and now included in the extended RILM subscription. Brief title information can be found at http://www.ccarh.org/publications/books/cm/. Later issues are available from the MIT Press. Various issues in early notational styles are presented in Vols. 6, 8, 10, and 12. Digital Resources for Musicology (drm.ccarh.org) contains information on open-access projects. Its companion ADAM (Archive of Digital Applications in Music) features projects that originated in the mainframe era. See especially the work of Norbert Böker-Heil: https://adam.ccarh.org/. Eleanor Eleanor Selfridge-Field Stanford/CCARH/Packard Humanities Inst. 
Braun Music Center #129 Stanford University Stanford, CA 94305-3076, USA esfield at stanford.edu Profile: https://profiles.stanford.edu/eleanor-selfridge-field ________________________________ From: mei-l on behalf of Janek Spaderna Sent: Friday, March 19, 2021 12:21 PM To: mei-l at lists.uni-paderborn.de Subject: [MEI-L] First efforts to bring modal notation to MEI Hello everyone, as a project for university I am looking into the challenges of bringing support for modal notation to MEI. I am quite new to both modal notation and MEI, so please don’t hesitate to let me know in case I mix things up, miss something or end up writing straight-up wrong stuff. Instead of directly thinking in terms of a (potential) concrete encoding I would like to discuss the elements and concepts of modal notation first. Below you can find the thoughts I have had so far in this regard. In my opinion thinking this through beforehand can help because we can get a feeling for the level at which an encoding should work (rather purely visual, or should it include/require analysis of some (which?) sort). The only prior discussion about this I could find is a thread initiated by Joshua Stutter about two years ago [1]. (Did I miss something?) It touches on some of the concepts which exist in modal notation but have disappeared in mensural music, but is mostly concerned with finding a concrete encoding for some piece of music. # Visual elements of modal notation The distinctive visual elements are only very few: lines, clefs, notes, tractus and lyrics. Regarding notes there are some special cases: a) notes can be grouped into ligatures b) a note can have a plica attached c) notes can be followed by currentes, making it a coniunctura d) the last note in a ligature can also be a coniunctura Cases a) and b) also occur in mensural notation; about c)/d) I am not sure, at least I think I have not seen a way to encode them in MEI? 
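As a rough, purely illustrative sketch of where cases a) and b) might land in existing MEI: the mensural module provides a <ligature> element and mensural duration values. The fragment below is invented for illustration, not a proposed encoding; in particular, the modal context (which notes are longae and which breves) is exactly what is under discussion here, and plica support was still being worked out at the time.

```xml
<!-- Illustrative only: a three-note ligature using MEI's mensural
     module. Pitches and durations are invented for the example. -->
<ligature form="obliqua">
  <note pname="d" oct="3" dur="brevis"/>
  <note pname="c" oct="3" dur="brevis"/>
  <note pname="d" oct="3" dur="longa"/>
</ligature>
```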
Also for c) there is a difference in musical meaning whether there is a note followed by up to three lozenges or if there are more than three. The former corresponds to a ternaria, whereas only the latter is truly a coniunctura. Tractus serve multiple purposes: 1. They group notes into ordines. 2. They indicate syllable changes. 3. They indicate alignment of different voices in organum passages. # Concepts of modal notation ## Tempora, perfectiones, ordines The rhythmic feeling is based on perfectiones, which consist of three tempora. If not changed by context a brevis has a length of one tempus, a longa of two. Having seen only a few transcriptions, I still got the feeling that it is quite common to number the perfectiones. Notes are grouped by tractus into ordines. The duration of an ordo is not fixed and can encompass one or more perfectiones. The tractus which ends an ordo is usually transcribed as a rest with a context-dependent duration. ## Discantus/organum purum On one side there is the discantus, on the other the organum purum. In between the two lives the copula. In discantus passages each voice follows a mode which can be recognized by a specific pattern of ligatures. The mode then tells which notes in the ligatures are longae and which are breves. Additional notes---be it from overlong ligatures not fitting the patterns or coniuncturae---live outside the longa/breve classification. This concept does not apply to organum purum. As I understand it the most important bit here is finding a way to encode the visual alignment of the voices. Karen Desmond writes in one of her responses to the aforementioned thread on the mailing list [2] > Ideally you would probably want to number the perfections and then you would > simply tag your tenor notes as occurring within a certain perfection. 
Whilst she notes other problems with this idea I am wondering if this would even be feasible in organum purum passages as I thought we do not know which notes are longae and which are breves. As I read it, Joshua shares my sentiment [3] > I'm against tagging in a particular perfection as that is implying that the > music proceeds in a constant modal rhythm and has length, which may not be > exactly correct. The copula is used to connect discantus passages with organum purum passages. During these connecting sections the duplum operates as in discantus sections whereas the tenor holds notes as in organum purum passages. Overall it can be said that a way to encode alignment is important in organum purum and copula passages. In discantus passages however this is not necessary as the modal rhythm used in all voices carries enough information; moreover the visual alignment usually does not even correspond to the musical alignment. ------------------------------- What are your thoughts so far? I am looking forward to your feedback! Best Janek [1]: https://lists.uni-paderborn.de/pipermail/mei-l/2019/002268.html Joshua Stutter’s initial message [2]: https://lists.uni-paderborn.de/pipermail/mei-l/2019/002272.html Karen Desmond in response to Joshua [3]: https://lists.uni-paderborn.de/pipermail/mei-l/2019/002280.html Joshua in response to Karen _______________________________________________ mei-l mailing list mei-l at lists.uni-paderborn.de https://lists.uni-paderborn.de/mailman/listinfo/mei-l -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From josh at yokermusic.scot Fri Apr 16 22:42:55 2021 From: josh at yokermusic.scot (Joshua Stutter) Date: Fri, 16 Apr 2021 21:42:55 +0100 Subject: [MEI-L] First efforts to bring modal notation to MEI In-Reply-To: References: <6E75491B-688B-401D-9CCD-C99833D60D6D@pluto.uni-freiburg.de> Message-ID: Although I believe my previous response made it to Janek directly, I think it didn't reach the list as it came from my other e-mail address. It is reproduced below. > Janek, > > Great, yet another person interested in this tricky transitional > notation! It gets a little lonely sometimes. > >> a thread initiated by Joshua Stutter about two years ago > (for it was he) > > My initial interest in this was during my Master's level study where > I was attempting to encode some modal notation into MEI. The issue at > that time on this list was whether modal notation should be encoded > as "neumes" or "mensural". I was initially on the side of neumes as > it seems most readily available for the same elements, but it seems > that the neumes module cannot support polyphony. IIRC, modal notation > would fit best into mensural. > > My view at the time was that it is neither neumes nor mensural, and > is something "special" and inherently transitional. My answer then > was to try my hand at writing an ODD (in PXSL because I don't like > the verbosity of XML - don't judge me!) which can be found here: > this ODD likely > doesn't work but I had no idea what I was doing... > > There was discussion about adding "plica" to ligatures which I > believe has made good progress: > there was > lots of discussion around October of last year in the MEI Slack > server #ig-mensural > > For my continuing work I have largely moved away from MEI until > someone (perhaps yourself!) can add support. 
My own project at the > time was attempting to encode modal notation as succinctly as > possible (due to time constraints) which eventually became > with a parser > and discussion can be found in my > Master's thesis: > > How I would approach this nowadays if I had more time would be to > pragmatically attempt to fit and stuff modal rhythm into the mensural > module as best as possible, but bearing in mind what I know now that > any "rules" that the mid-twentieth century writers came up with are > broken time and time again in actual sources. > > Good luck, and feel free to reach out to me for more info as someone > who would love to use more MEI in my projects for want of support! > > Joshua. On Fri, 16 Apr, 2021 at 20:34, Eleanor Selfridge-Field wrote: > > If you look broadly at other music-encoding systems, you will find > periodic coverage in our yearbook Computing in Musicology > (1985-2008), which is indexed in RILM and now included in the > extended RILM subscription. Brief title information can be found at > . Later issues are > available from the MIT Press. Various issues in early notational > styles are presented in Vols. 6, 8, 10, and 12. > > Digital Resources for Musicology (drm.ccarh.org) contains information > on open-access projects. Its companion ADAM (Archive of Digital > Applications in Music) features projects that originated in the > mainframe era. See especially the work of Norbert Böker-Heil: > . > > Eleanor > > > Eleanor Selfridge-Field > Stanford/CCARH/Packard Humanities Inst. 
> Braun Music Center #129 > Stanford University > Stanford, CA 94305-3076, USA > esfield at stanford.edu > Profile: https://profiles.stanford.edu/eleanor-selfridge-field > > *From:* mei-l on behalf of > Janek Spaderna > *Sent:* Friday, March 19, 2021 12:21 PM > *To:* mei-l at lists.uni-paderborn.de > *Subject:* [MEI-L] First efforts to bring modal notation to MEI > > Hello everyone, > > as a project for university I am looking into the challenges of > bringing > support for modal notation to MEI. I am quite new to both modal > notation and > MEI, so please don’t hesitate to let me know in case I mix things > up, miss > something or end up writing straight-up wrong stuff. > > Instead of directly thinking in terms of a (potential) concrete > encoding I > would like to discuss the elements and concepts of modal notation > first. > Below you can find the thoughts I have had so far in this regard. > > In my opinion thinking this through beforehand can help because we > can get a > feeling for the level at which an encoding should work (rather purely > visual, or should > it include/require analysis of some (which?) sort). > > The only prior discussion about this I could find is a thread > initiated by > Joshua Stutter about two years ago [1]. (Did I miss something?) It > touches on > some of the concepts which exist in modal notation but have > disappeared in > mensural music, but is mostly concerned with finding a concrete > encoding for > some piece of music. > > > # Visual elements of modal notation > > The distinctive visual elements are only very few: lines, clefs, > notes, tractus > and lyrics. 
> > Regarding notes there are some special cases: > > a) notes can be grouped into ligatures > b) a note can have a plica attached > c) notes can be followed by currentes, making it a coniunctura > d) the last note in a ligature can also be a coniunctura > > Cases a) and b) also occur in mensural notation; about c)/d) I am > not sure, at > least I think I have not seen a way to encode them in MEI? > > Also for c) there is a difference in musical meaning whether there is > a note > followed by up to three lozenges or if there are more than three. > The former > corresponds to a ternaria, whereas only the latter is truly a > coniunctura. > > Tractus serve multiple purposes: > > 1. They group notes into ordines. > 2. They indicate syllable changes. > 3. They indicate alignment of different voices in organum passages. > > > # Concepts of modal notation > > ## Tempora, perfectiones, ordines > > The rhythmic feeling is based on perfectiones, which consist of three > tempora. > If not changed by context a brevis has a length of one tempus, a > longa of two. > Having seen only a few transcriptions, I still got the feeling that it > is quite > common to number the perfectiones. > > Notes are grouped by tractus into ordines. The duration of an ordo > is not fixed > and can encompass one or more perfectiones. The tractus which ends > an ordo is > usually transcribed as a rest with a context-dependent duration. > > > ## Discantus/organum purum > > On one side there is the discantus, on the other the organum purum. > In between > the two lives the copula. > > In discantus passages each voice follows a mode which can be > recognized by a > specific pattern of ligatures. The mode then tells which notes in > the ligatures > are longae and which are breves. Additional notes---be it from > overlong > ligatures not fitting the patterns or coniuncturae---live outside the > longa/breve classification. > > This concept does not apply to organum purum. 
As I understand it, the > most > important bit here is finding a way to encode the visual alignment > of the > voices. Karen Desmond writes in one of her responses to the > aforementioned > thread on the mailing list [2] > > > Ideally you would probably want to number the perfections and then > you would > > simply tag your tenor notes as occurring within a certain > perfection. > > Whilst she notes other problems with this idea, I am wondering whether > this would > even be feasible in organum purum passages, as I thought we do not > know which > notes are longae and which are breves. As I read it, Joshua shares my > sentiment [3] > > > I'm against tagging in a particular perfection as that is implying > that the > > music proceeds in a constant modal rhythm and has length, which > may not be > > exactly correct. > > The copula is used to connect discantus passages with organum purum > passages. > During these connecting sections the duplum operates as in discantus > sections, > whereas the tenor holds notes as in organum purum passages. > > Overall it can be said that a way to encode alignment is important > in organum > purum and copula passages. In discantus passages, however, this is not > necessary, > as the modal rhythm used in all voices carries enough information; > moreover, the > visual alignment usually does not even correspond to the musical > alignment. > > ------------------------------- > > > What are your thoughts so far? I am looking forward to your feedback! > > Best > Janek > > > [1]: > > Joshua Stutter’s initial message > [2]: > > Karen Desmond in response to Joshua > [3]: > > Joshua in response to Karen > > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From j.ingram at netcologne.de Sun Apr 18 13:11:46 2021 From: j.ingram at netcologne.de (James Ingram) Date: Sun, 18 Apr 2021 13:11:46 +0200 Subject: [MEI-L] Tick-based Timing In-Reply-To: <7A76A059-A268-42A0-9553-A4E23D796F73@edirom.de> References: <1c13f78c-fc4e-9a0c-77f5-fa863d231de5@netcologne.de> <7A76A059-A268-42A0-9553-A4E23D796F73@edirom.de> Message-ID: <533df9de-bf97-8413-88bd-540c95ac99cf@netcologne.de> Thanks, Simon and Jo, for your responses, @Simon: Please be patient, I'll come back to you, but I first need to sort out some basics with Jo. @Jo: Before replying to your posting, I first need to provide some context so that you can better understand what I'm saying: Context 1: Outline of the state of the debate in the MNX Repository MNX is intended to be a set of next-generation, web-friendly music notation encodings, related via common elements in their schemas. The first format being developed is MNXcommon1900, which is intended to be the successor to MusicXML. It does not have to be backwardly compatible with MusicXML, so the co-chair wants to provide documentation comparing the different ways in which MNXcommon1900 and MusicXML encode a number of simple examples. Unfortunately, they first need to revise the MusicXML documentation in order to do that, so work on MNXcommon1900 has temporarily stopped. The intention is to start work on it again as soon as the MusicXML documentation revision is complete. After 5+ years of debate, MNXcommon1900 is actually in a fairly advanced state. I have two MNX-related GitHub repositories (the best way to really understand software is to write it): * MNXtoSVG:  A (C#) desktop application that converts MNX files to SVG. This application is a test-bed for MNX's data structures, and successfully converts the first completed MusicXML-MNX comparison examples to (graphics only) SVG. When/if MNX includes temporal info, it will do that too (using a special namespace in the SVG). 
* A fork of the MNX repository: This contains (among other things) the beginnings of a draft schema for MNXcommon1900. The intention is to plunder that schema for other schemas... I'm looking for things that all event-based music notations have in common, so that software libraries can be used efficiently across all such notations. That's important if we want to develop standards that are consistent all over the web. Context 2: My background I'm a relic of the '60s Avant-Garde. Left college in the early 1970s, and became K. Stockhausen's principal copyist 1974-2000. In the '60s, they were still trying to develop new notations, but that project collapsed quite suddenly in 1970 when all the leading composers gave it up (without solving anything), and reverted to using standard notation. In 1982, having learned a lot from my boss, and having had a few years' practical experience pushing the dots around, I suddenly realised what had gone wrong, and wrote an article about it that was eventually published in 1985. The article contains a critical analysis of CWMN... So I'm coming from a rather special niche in the practical world of music publishing, not from the academic world. In 1982, I was not aware of Maxwell (1981) [1], and I hadn't realised, until researching this post, how it relates to things like metrical time in the (1985) MIDI 1.0 standard (see §5, §5.1.2 in [3]). *** MEI: The Background page [2] on the MEI website cites Maxwell (1981) [1] as the source of the three principal domains physical, logical and graphical. In contrast, my 1982 insight was that the domains space and time are fundamental, and need to be clearly and radically distinguished. Maxwell himself says (at the beginning of §2.0 of his paper) that his "classification is not the only way that music notation could be broken up..." So I'm thinking that Maxwell's domains are not as fundamental as the ones I found, and that mine lead to simpler, more general and more powerful results. 
From my point of view, Maxwell's logical domain seems particularly problematic: Understandably for the date (1981), and the other problems he was coping with, I think Maxwell has too respectful an attitude to the symbols he was dealing with. The then unrivalled supremacy of CWMN1900 over all other notations led him to think that he could assign fixed relative values to the duration symbols. That could, of course, also be explained in terms of him wanting to limit the scope of his project but, especially when one looks at legitimate, non-standard (e.g. Baroque) uses of the symbols (see §4.1.1 of [3]), his logical domain still seems to be on rather shaky ground. Being able to include notations containing any kind of event-symbol in my model (see §4.2) is exactly what's needed in order to create a consistent set of related schemas for all the world's event-based music notations... So, having said all that, MEI's @dur attribute subclasses look to me like ad-hoc postulates that have been added to the paradigm to shore it up, without questioning its underlying assumptions. The result is that MEI has become over-complicated and unwieldy. That's a common theme in ageing paradigms... remember Ptolemy? Okay, maybe I'm being a bit provocative there. But am I justified? :-) Hope that helps, all the best, James [1] Maxwell (1981): http://dspace.mit.edu/handle/1721.1/15893 [2] https://music-encoding.org/resources/background.html [3] https://james-ingram-act-two.de/writings/TickBasedTiming/tickBasedTiming.html -- https://james-ingram-act-two.de https://github.com/notator On 16.04.2021 at 18:02, Johannes Kepper wrote: > Dear all, > > I’m not much into this discussion, and haven’t really looked into the use cases behind this, so my answer may not be appropriate for the question asked. However, I believe that most of the requirements articulated here are safely covered by MEI. 
Looking at the attributes available on notes (seehttps://music-encoding.org/guidelines/v4/elements/note.html#attributes), there are plenty of different approaches available: > > @dur – Records the duration of a feature using the relative durational values provided by the data.DURATION datatype. > @dur.ges – Records performed duration information that differs from the written duration. > @dur.metrical – Duration as a count of units provided in the time signature denominator. > @dur.ppq – Duration recorded as pulses-per-quarter note, e.g. MIDI clicks or MusicXML divisions. > @dur.real – Duration in seconds, e.g. '1.732‘. > @dur.recip – Duration as an optionally dotted Humdrum *recip value. > > In addition, there is also > > @tstamp – Encodes the onset time in terms of musical time, i.e., beats[.fractional beat part], as expressed in the written time signature. > @tstamp.ges – Encodes the onset time in terms of musical time, i.e., beats[.fractional beat part], as expressed in the written time signature. > @tstamp.real – Records the onset time in terms of ISO time. > @to – Records a timestamp adjustment of a feature's programmatically-determined location in terms of musical time; that is, beats. > @synch – Points to elements that are synchronous with the current element. > @when – Indicates the point of occurrence of this feature along a time line. Its value must be the ID of a when element elsewhere in the document. > > They’re all for slightly different purposes, and surely many of those attributes are not (well) supported by existing software, but they seem to offer good starting points to find a model for the questions asked. It is important to keep in mind that music manifests in various forms – sound, notation, concepts (what _is_ a quarter?), and that MEI tries to treat those „domains“ as independently as possible. 
Of course, they’re all connected, but not being specific (enough) in that regard did no good to other formats… > > Hope this helps, > jo > > >> On 16.04.2021 at 15:35, Simon Wascher wrote: >> >> Hi all together, >> >> On 14.04.2021 at 21:49, James Ingram wrote: >>>>> Last January, I raised an issue about Tick-based Timing in the W3C Music Notation Community Group's MNX Repository [1], but it was closed in February without my being satisfied that it had been sufficiently discussed. >>>>> I had the feeling that something important was being glossed over, so have been thinking hard about the subject over the past few weeks, and have now uploaded an article about it to my website [2]. >>>>> [...] >> On 16.04.2021 at 11:06, James Ingram wrote: >>> First: Did you intend your reply just to be private, or did you want to send it to the public list as well? I'm not sure. >>> If you'd like this all to be public, I could send this to MEI-L as well... >> I answered James Ingram off list, but now move to MEI-L with my answer, as it seems it was the intention to get answers on the list. >> My full first answer to James Ingram is down at the end of this mail, if someone is interested. (I did not forward James Ingram's reply to me in full, as I did not want to forward someone else's private answer to me to the public.) >> >> On 15.04.2021 at 01:03, Simon Wascher wrote: >>>> I would like to point you at Lauge Dideriksen's approach to notating music with CWMN symbols but positioned according to the musical timing. I suppose that is a comparable approach to yours. >> On 16.04.2021 at 11:06, James Ingram wrote: >>> I took a look at Lauge Dideriksen's website, but can't see enough music examples to know quite what you mean by "positioned according to musical timing". I see that he (sometimes?) uses tuplets. Does he (sometimes?) position the symbols in (flat) space to represent (flat) time? >>> In either case, that's not quite what I'm saying. 
I'm talking about the underlying machine-readable encoding of notation, not just the way it looks on the surface. >> maybe there is still no material about this online. He is talking about this at European Voices VI in Vienna 27–30 September 2021. >> It might be sensible to contact him directly. >> >> On 15.04.2021 at 01:03, Simon Wascher wrote: >>>> Do you consider tick-based notation to be a way to represent phonemic and phonetic notation (interpretation) at the same time? >> On 16.04.2021 at 11:06, James Ingram wrote: >>> Here again, I'm not quite sure what you mean. Perhaps it would help if I again emphasise the difference between the surface appearance of a notation and its machine-readable encoding. >> I see, your focus seems to be on machine-readability and the problem of the relation between CWMN and its machine playback. >> My focus is the problem of the relation between CWMN and real live performance. >> I am looking for tools to code real live performances, using the symbols of CWMN but allowing me to include the _display_ of the real live durations of the real live performance (the difference between real live performance and CWMN). >> >> >> On 16.04.2021 at 11:06, James Ingram wrote: >>> You ask about emic and etic, and the problem of notating traditional Scandinavian Polska or jodling: >>> To get us on the same page, here's where I am: Transcriptions of music that is in an aural tradition always reflect what the transcriber thinks is important. Transcriptions often leave out nuances (timing, tonal inflexions etc.) that the original performers and their public would regard as essential. >>> I think that aural traditions correctly ignore machine time (seconds, milliseconds), but that if we use machines to record them, we ultimately have to use such timings (in the machines). 
I don't think that matters, providing that the transcribers don't try to impose machine time (in the form of beats per second) too literally on their interpretations of the original performances. >> Well, to be precise: in transcribing music, there are (at least) three points of view (versions of notation): >> >> 1. the musician's "emic" intention >> 2. the machine's "phonetic" protocol (which can be automatically transformed into a duration and pitch notation applying a certain level of accuracy, but which cannot know about light and heavy time or barlines, as these are cultural phenomena. The level of accuracy is indeed already a cultural decision, but: if the transformation is not into CWMN but, for example, into a time/pitch chart of the fundamental frequencies, the limits of readability of CWMN do not apply.) >> 3. the transcriber's intention, which usually is called "etic" but is in fact "emic" to the transcriber. >> ("emic" and "etic" is not my favorite wording.) >> (I am not worrying about the composer, as in my field music gets composed in sound; the composer is a musician here.) >> >> On 16.04.2021 at 11:06, James Ingram wrote: >>> "Stress programs" in abc-notation: >>> I can't find any references to "stress programs" at the abc site [2], >> Ah, you are right, that is a kind of de facto standard, which is weakly documented. It is interpreted by abc2midi and BarFly (and maybe other programs). >> It makes use of the R:header of abc. Either the stress program is written there directly or in an external file. >> Here is one of the stress programs I use: >> >> * 37 >> Mazurka >> 3/4 >> 3/4=35 >> 6 >> 120 1.4 >> 100 0.6 >> 110 1.2 >> 100 0.8 >> 115 1.32 >> 100 0.67 >> >> so that is: >> >> "*37" it starts with a number (that does not do anything). >> "Mazurka" is the identifying string used in the R:header field connecting abc-file and stress program for the playback program. >> "3/4" is the meter. The stress program only applies to abc-notation in this meter. 
So there may be stress programs with the same name, but for different meters. >> "3/4=35" is the tempo indication. >> "6" is the number of sections the bar is split into in this stress program (a free choice). So it should be followed by that number of describing lines. >> "120 1.4" describes the first section of the bar. "120" is the volume (between 0 and 127), "1.4" is the multiplier, the core of the thing, so to say: it says the duration of the first sixth of the notated bar is to be played 1.4 times as long as it would be at the given metronome tempo. >> "100 0.6" and so on. >> >> I attached BarFly's "Stress Programs" file, which also contains the description provided by the author of "BarFly", Phil Taylor. >> (I personally would prefer it if this mechanism were not limited to one single bar, but could be used to describe durations of a chosen number of bars/beats.) >> >> >> So, thanks for the moment, >> and feel free to tell me I shall not send these longish and maybe not very clever e-mails to this list. >> >> Thanks, >> Health, >> Simon >> Wascher >> >> Begin forwarded message: >>> From: Simon Wascher >>> Subject: Re: [MEI-L] Tick-based Timing >>> Date: 15 April 2021 01:03:32 CEST >>> To: James Ingram >>> >>> Hello, >>> >>> reading your post to the MEI mailing list (I am not active in MEI) I started to read your text >>>> [2]https://james-ingram-act-two.de/writings/TickBasedTiming/tickBasedTiming.html >>> and would like to just add my two cents of ideas about >>>> Ticks carry both temporal and spatial information. >>>> In particular, synchronous events at the beginning of a bar have the same tick.time, so: >>>> The (abstract) tick durations of the events in parallel voices in a bar add up to the same value. >>>> In other words: >>>> Bars “add up” in (abstract) ticks. 
>>>> The same is true for parallel voices in systems (that are as wide as the page allows) even when there are no barlines, so: >>>> Systems also “add up” in (abstract) ticks. >>> * First I would like to point you at Lauge Dideriksen's approach to notating music with CWMN symbols but positioned according to the musical timing. I suppose that is a comparable approach to yours. >>> >>> * About barlines I would like to add that barlines also represent human perception (the composer's, the musician's, the listener's or the transcriber's), as barlines do not exist in the audio signal. >>> Barlines do not need to align. It is the music as a whole that keeps a common pace (the musicians stay together, but not necessarily at beats or barlines). >>> It is even possible to play along with completely different barlines in mind; that really happens, I experienced it myself. >>> >>> * Do you consider tick-based notation to be a way to represent phonemic and phonetic notation (interpretation) at the same time? >>> >>> * In many musical styles of traditional music, also in Europe, there are severe differences between emic and etic music perception. Typical and well-known examples are Polska playing in traditions of Scandinavia or the problems of scientific notation of Jodler/Jodel (Jodler interpretation has a very loose relation to beat). If you are looking for examples of a perfect common pace in music that treats the tension of timing between the ensemble members as a carrier of musical expression, have a look at central Polish traditional instrumental dance music. >>> >>> * About notational approaches: are you aware of the "Stress Programs" used with abc-notation to describe microtiming? It is a method where the bar is split up into a freely chosen number of fractions described by multipliers (1 is the standard length of one fraction, so 0.76 means 0.76 times the standard length and 1.43 means 1.43 times the standard length). 
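The stress-program mechanism Simon describes can be parsed mechanically. The sketch below handles only the Mazurka example quoted above; the function names are made up for illustration, and any behaviour beyond that one example is an assumption, not the actual implementation of BarFly or abc2midi.

```python
# Sketch of a parser for the BarFly / abc2midi "stress program" format,
# based solely on the description and Mazurka example quoted above.

SAMPLE = """\
*37
Mazurka
3/4
3/4=35
6
120 1.4
100 0.6
110 1.2
100 0.8
115 1.32
100 0.67
"""

def parse_stress_program(text):
    lines = [ln.strip() for ln in text.splitlines() if ln.strip()]
    prog = {
        "number": lines[0].lstrip("* "),  # "does not do anything", per the description
        "name": lines[1],      # matched against the abc R: header
        "meter": lines[2],     # the program applies only to this meter
        "tempo": lines[3],
        "sections": int(lines[4]),
    }
    # Each remaining line is "volume multiplier" for one bar section.
    prog["fractions"] = [
        (int(vol), float(mult))
        for vol, mult in (ln.split() for ln in lines[5:5 + prog["sections"]])
    ]
    return prog

def section_durations(prog, bar_seconds):
    # The bar is split into N nominally equal sections; each section's
    # duration is scaled by its multiplier.
    nominal = bar_seconds / prog["sections"]
    return [nominal * mult for _, mult in prog["fractions"]]

prog = parse_stress_program(SAMPLE)
durs = section_durations(prog, bar_seconds=6.0)  # a hypothetical 6-second bar
```

With a hypothetical 6-second bar each sixth is nominally 1 second, so the first section (multiplier 1.4) lasts 1.4 seconds and the second (0.6) lasts 0.6 seconds, which is exactly the push-and-pull microtiming the mechanism is meant to capture.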
>>> >>> Not sure if this meets your intentions, >>> Thanks, >>> Health, >>> >>> Simon Wascher (Vienna; musician, transcriber of historical music notation; researcher in folk music) >>> >>> >>> >>> >>> On 14.04.2021 at 21:49, James Ingram wrote: >>> >>>> Last January, I raised an issue about Tick-based Timing in the W3C Music Notation Community Group's MNX Repository [1], but it was closed in February without my being satisfied that it had been sufficiently discussed. >>>> I had the feeling that something important was being glossed over, so have been thinking hard about the subject over the past few weeks, and have now uploaded an article about it to my website [2]. >>>> My conclusions are that Tick-based Timing >>>> • has to do with the difference between absolute (mechanical, physical) time and performance practice, >>>> • is relevant to the encoding of all the world's event-based music notations, not just CWMN1900, >>>> • needs to be considered for the next generation of music encoding formats. >>>> I would especially like to get some feedback from those working on non-western notations, so am posting this not only to the W3C MNCG's public mailing list, but also to MEI's. >>>> All the best, >>>> James Ingram >>>> (notator) >>>> [1] MNX Issue #217:https://github.com/w3c/mnx/issues/217 >>>> [2]https://james-ingram-act-two.de/writings/TickBasedTiming/tickBasedTiming.html >>>> >>>> -- >>>> https://james-ingram-act-two.de >>>> https://github.com/notator >>>> _______________________________________________ >>>> mei-l mailing list >>>> mei-l at lists.uni-paderborn.de >>>> https://lists.uni-paderborn.de/mailman/listinfo/mei-l >> >> _______________________________________________ >> mei-l mailing list >> mei-l at lists.uni-paderborn.de >> https://lists.uni-paderborn.de/mailman/listinfo/mei-l > Dr. Johannes Kepper > Research Associate > > Beethovens Werkstatt: Genetische Textkritik und Digitale Musikedition > Musikwiss. 
Seminar Detmold / Paderborn | Hornsche Straße 39 | D-32756 Detmold > kepper at beethovens-werkstatt.de | +49 (0) 5231 / 975669 > > www.beethovens-werkstatt.de > Research project funded by the Akademie der Wissenschaften und der Literatur | Mainz > > > > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l -------------- next part -------------- An HTML attachment was scrubbed... URL: From kepper at edirom.de Tue Apr 20 19:31:54 2021 From: kepper at edirom.de (Johannes Kepper) Date: Tue, 20 Apr 2021 19:31:54 +0200 Subject: [MEI-L] Tick-based Timing In-Reply-To: <533df9de-bf97-8413-88bd-540c95ac99cf@netcologne.de> References: <1c13f78c-fc4e-9a0c-77f5-fa863d231de5@netcologne.de> <7A76A059-A268-42A0-9553-A4E23D796F73@edirom.de> <533df9de-bf97-8413-88bd-540c95ac99cf@netcologne.de> Message-ID: Hi James, I’ve been loosely following the MNX efforts as well. About a year ago, I wrote a converter [1] from the then current MNX to MEI Basic (to which I’ll come back in a sec). However, MNX felt pretty unstable at that time, and the documentation didn’t always match the available examples, so I put that aside. As soon as MNX has reached a certain level of maturity, I will go back to it. In this context, I’m talking about what you call MNXcommon1900 only – the other aspects of MNX seem much less stable and fleshed out. Maxwell 1981 is just one reference for this concept of musical domains, and many other people had similar ideas. I lack the time to look it up right now, but I’m quite confident I have read similar stuff in 19th century literature on music philology – maybe Spitta or Nottebohm. Milton Babbitt [2] needs to be mentioned in any case. To be honest, it’s quite an obvious thing that music can be seen from multiple perspectives. 
I think it’s also obvious that there are more than those three perspectives mentioned so far: There is a plethora of analytical approaches to music, and Schenker's and Riemann’s perspectives (to name just two) are quite different and may not be served well by any single approach. In my own project, we’re working on the genesis of musical works, so we’re interested in ink colors, writing orders, revision instructions written in the margins, etc. And there’s much more than that: Asking a synesthete, we should probably consider encoding the different colors of music as well (and it’s surprising to see how far current MEI would already take us down that road…). Of course this doesn’t mean that everyone needs to use those categories, but in specific contexts, they might be relevant. So, I have no messianic zeal whatsoever to define a closed list of allowed musical domains – life (incl. music encoding) is more complex than that. There seems to be a misunderstanding of the intention and purpose of MEI. MEI is not a music encoding _format_, but a framework / toolkit to build such formats. One should not use the so-called MEI-all customization, which offers all possibilities of MEI at the same time. Instead, MEI should be cut down to the very repertoires / notation types, perspectives and intentions of a given encoding task, to facilitate consistent markup that is spot-on for the (research) question at hand. Of course there is need and room for a common ground within MEI, where people and projects can share their encodings and re-use them for purposes other than their original uses. One such common ground is probably MEI Basic [3], which tries to simplify MEI as much as possible, allowing only one specific way of encoding things. It’s still rather new, and not many projects support it yet, but ideally, projects that work with more complex MEI profiles internally also offer serializations to MEI Basic for others to use – as they know their own data best. 
At the same time, MEI Basic may serve as an interface to other encoding formats like MusicXML, Humdrum, MIDI, you name it. However, this interchange is just one purpose of music encoding, and many other use cases are equally legitimate. MEI is based on the idea that there are many different types of music and manifestations thereof, numerous use cases and reasons to encode them, and diverging intentions about what to achieve and do with those encodings, but that there are still some commonalities in there which are worth considering, as they help to better understand the phenomenon at hand. MEI is a mental model (which happens to be serialized as an XML format right now, but it could be expressed as JSON or even RDF instead…), but it’s not necessary to cover that model in full for any given encoding task. To make a long story short: If you feel like you don’t need a specific aspect of MEI, that’s perfectly fine, and nothing forces you to use it. Others may come to other conclusions, and that is equally fine. Admittedly, this flexibility comes at the price of a certain complexity of the model, but MEI’s intention is not to squeeze every use case into a prescribed static model, and rule out everything that doesn’t fit – it’s not a hammer that treats everything as nails. At the same time, MEI offers (among other things) a simple (basic) starting point for the CWMN repertoire, but it is easy to build up from there, utilizing the full potential of the framework when and where necessary. I hope this helps to get a better picture of what MEI is, and how it relates to your own efforts on music encoding. All best, jo [1] Converter MNX to MEI: https://github.com/music-encoding/encoding-tools/blob/master/mnx2mei/mnx2mei.xsl [2] Milton Babbitt 1965: The Use of Computers in Musicological Research, https://doi.org/10.1515/9781400841226.202, p. 
204f [3] MEI Basic: https://music-encoding.org/guidelines/dev/content/introduction.html#meiprofiles > Am 18.04.2021 um 13:11 schrieb James Ingram : > > Thanks, Simon and Jo, for your responses, > > @Simon: Please be patient, I'll come back to you, but I first need to sort out some basics with Jo. > > @Jo: Before replying to your posting, I first need to provide some context so that you can better understand what I'm saying: > > Context 1: Outline of the state of the debate in the MNX Repository > MNX is intended to be a set of next-generation, web-friendly music notation encodings, related via common elements in their schemas. The first format being developed is MNXcommon1900, which is intended to be the successor to MusicXML. It does not have to be backwardly compatible with MusicXML, so the co-chair wants to provide documentation comparing the different ways in which MNXcommon1900 and MusicXML encode a number of simple examples. Unfortunately, they first need to revise the MusicXML documentation in order to do that, so work on MNXcommon1900 has temporarily stopped. The intention is to start work on it again as soon as the MusicXML documentation revision is complete. After 5+ years of debate, MNXcommon1900 is actually in a fairly advanced state. > I have two MNX-related GitHub repositories (the best way to really understand software is to write it): > > • MNXtoSVG: A (C#) desktop application that converts MNX files to SVG. This application is a test-bed for MNX's data structures, and successfully converts the first completed MusicXML-MNX comparison examples to (graphics only) SVG. When/if MNX includes temporal info, it will do that too (using a special namespace in the SVG). > • A fork of the MNX repository: This contains (among other things) the beginnings of a draft schema for MNXcommon1900. The intention is to plunder that schema for other schemas... 
> I'm looking for things that all event-based music notations have in common, so that software libraries can be used efficiently across all such notations. That's important if we want to develop standards that are consistent all over the web. > > Context 2: My background > I'm a relic of the '60s Avant-Garde. Left college in the early 1970s, and became K. Stockhausen's principal copyist 1974-2000. In the '60s, they were still trying to develop new notations, but that project collapsed quite suddenly in 1970 when all the leading composers gave it up (without solving anything), and reverted to using standard notation. In 1982, having learned a lot from my boss, and having had a few years practical experience pushing the dots around, I suddenly realised what had gone wrong, and wrote an article about it that was eventually published in 1985. The article contains a critical analysis of CWMN... > So I'm coming from a rather special niche in the practical world of music publishing, not from the academic world. In 1982, I was not aware of Maxwell (1981) [1], and I hadn't realised until researching this post, how it relates to things like metrical time in the (1985) MIDI 1.0 standard (see §5, §5.1.2 in [3]). > > *** > > MEI: > The Background page [2] on the MEI website cites Maxwell (1981) [1] as the source of the three principle domains physical, logical and graphical. In contrast, my 1982 insight was that the domains space and time are fundamental, and need to be clearly and radically distinguished. Maxwell himself says (at the beginning of §2.0 of his paper) that his "classification is not the only way that music notation could be broken up..." > > So I'm thinking that Maxwell's domains are not as fundamental as the ones I found, and that mine lead to simpler, more general and more powerful results. 
> > From my point of view, Maxwell's logical domain seems particularly problematic: > Understandably for the date (1981), and the other problems he was coping with, I think Maxwell has a too respectful attitude to the symbols he was dealing with. The then unrivalled supremacy of CWMN1900 over all other notations leads him to think that he can assign fixed relative values to the duration symbols. That could, of course, also be explained in terms of him wanting to limit the scope of his project but, especially when one looks at legitimate, non-standard (e.g.Baroque) uses of the symbols (see §4.1.1 of [3]), his logical domain still seems to be on rather shaky ground. > Being able to include notations containing any kind of event-symbol in my model (see §4.2) is exactly what's needed in order to create a consistent set of related schemas for all the world's event-based music notations... > > So, having said all that, MEI's @dur attribute subclasses look to me like ad-hoc postulates that have been added to the paradigm to shore it up, without questioning its underlying assumptions. The result is that MEI has become over-complicated and unwieldy. That's a common theme in ageing paradigms... remember Ptolemy? > > Okay, maybe I'm being a bit provocative there. But am I justified? :-) > > Hope that helps, > all the best, > James > > [1] Maxwell (1981): http://dspace.mit.edu/handle/1721.1/15893 > > [2] https://music-encoding.org/resources/background.html > > [3] https://james-ingram-act-two.de/writings/TickBasedTiming/tickBasedTiming.html > > -- > > https://james-ingram-act-two.de > https://github.com/notator > > > Am 16.04.2021 um 18:02 schrieb Johannes Kepper: >> Dear all, >> >> I’m not much into this discussion, and haven’t really investigated into the use cases behind this, so my answer may not be appropriate for the question asked. However, I believe that most of the requirements articulated here are safely covered by MEI. 
Looking at the attributes available on notes (see >> https://music-encoding.org/guidelines/v4/elements/note.html#attributes >> ), there are plenty of different approaches available: >> >> @dur – Records the duration of a feature using the relative durational values provided by the data.DURATION datatype. >> @dur.ges – Records performed duration information that differs from the written duration. >> @dur.metrical – Duration as a count of units provided in the time signature denominator. >> @dur.ppq – Duration recorded as pulses-per-quarter note, e.g. MIDI clicks or MusicXML divisions. >> @dur.real – Duration in seconds, e.g. '1.732‘. >> @dur.recip – Duration as an optionally dotted Humdrum *recip value. >> >> In addition, there is also >> >> @tstamp – Encodes the onset time in terms of musical time, i.e., beats[.fractional beat part], as expressed in the written time signature. >> @tstamp.ges – Encodes the onset time in terms of musical time, i.e., beats[.fractional beat part], as expressed in the written time signature. >> @tstamp.real – Records the onset time in terms of ISO time. >> @to – Records a timestamp adjustment of a feature's programmatically-determined location in terms of musical time; that is, beats. >> @synch – Points to elements that are synchronous with the current element. >> @when – Indicates the point of occurrence of this feature along a time line. Its value must be the ID of a when element elsewhere in the document. >> >> They’re all for slightly different purposes, and surely many of those attributes are not (well) supported by existing software, but they seem to offer good starting points to find a model for the questions asked. It is important to keep in mind that music manifests in various forms – sound, notation, concepts (what _is_ a quarter?), and that MEI tries to treat those „domains“ as independently as possible. 
Of course, they’re all connected, but not being specific (enough) in that regard did no good to other formats… >> >> Hope this helps, >> jo >> >> >> >>> Am 16.04.2021 um 15:35 schrieb Simon Wascher >>> : >>> >>> Hi all, >>> >>> Am 14.04.2021 um 21:49 schrieb James Ingram >>> >>> : >>> >>>>>> Last January, I raised an issue about Tick-based Timing in the W3C Music Notation Community Group's MNX Repository [1], but it was closed in February without my being satisfied that it had been sufficiently discussed. >>>>>> I had the feeling that something important was being glossed over, so have been thinking hard about the subject over the past few weeks, and have now uploaded an article about it to my website [2]. >>>>>> [...] >>>>>> >>> Am 16.04.2021 um 11:06 schrieb James Ingram >>> : >>> >>>> First: Did you intend your reply just to be private, or did you want to send it to the public list as well? I'm not sure. >>>> If you'd like this all to be public, I could send this to MEI-L as well... >>>> >>> I answered James Ingram off list, but now move to MEI-L with my answer, as it seems it was the intention to get answers on the list. >>> My full first answer to James Ingram is down at the end of this mail, if someone is interested (I did not forward James Ingram's reply to me in full, as I did not want to forward someone else's private answer to me to the public). >>> >>> Am 15.04.2021 um 01:03 schrieb Simon Wascher >>> >>> : >>> >>>>> I would like to point you at Lauge Dideriksen's approach to notate music with CWMN symbols but positioned according to the musical timing. I suppose that is a comparable approach to yours. >>>>> >>> Am 16.04.2021 um 11:06 schrieb James Ingram >>> : >>> >>>> I took a look at Lauge Dideriksen's website, but can't see enough music examples to know quite what you mean by "positioned according to musical timing". I see that he (sometimes?) uses tuplets. Does he (sometimes?) position the symbols in (flat) space to represent (flat) time?
>>>> In either case, that's not quite what I'm saying. I'm talking about the underlying machine-readable encoding of notation, not just the way it looks on the surface. >>>> >>> maybe there is still no material on this online. He is talking about this at European Voices VI in Vienna 27–30 September 2021. >>> It might be sensible to contact him directly. >>> >>> Am 15.04.2021 um 01:03 schrieb Simon Wascher >>> >>> : >>> >>>>> Do you consider tick based notation to be a way to represent phonemic and phonetic notation (interpretation) at the same time? >>>>> >>> Am 16.04.2021 um 11:06 schrieb James Ingram >>> : >>> >>>> Here again, I'm not quite sure what you mean. Perhaps it would help if I again emphasise the difference between the surface appearance of a notation and its machine-readable encoding. >>>> >>> I see, your focus seems to be on machine-readability and the problem of the relation between CWMN and its machine-playback. >>> My focus is the problem of the relation between CWMN and real live performance. >>> I am looking for tools to code real live performances, using the symbols of CWMN but allowing the _display_ of the real live durations of the real live performance to be included (the difference between real live performance and CWMN). >>> >>> >>> Am 16.04.2021 um 11:06 schrieb James Ingram >>> >>> : >>> >>>> You ask about emic and etic, and the problem of notating traditional Scandinavian Polska or jodling: >>>> To get us on the same page, here's where I am: Transcriptions of music that is in an aural tradition always reflect what the transcriber thinks is important. Transcriptions often leave out nuances (timing, tonal inflexions etc.) that the original performers and their public would regard as essential. >>>> I think that aural traditions correctly ignore machine time (seconds, milliseconds), but that if we use machines to record them, we ultimately have to use such timings (in the machines).
I don't think that matters, provided that the transcribers don't try to impose machine time (in the form of beats per second) too literally on their interpretations of the original performances. >>>> >>> Well, to be precise: in transcribing music, there are (at least) three points of view (versions of notation): >>> >>> 1. the musician's "emic" intention >>> 2. the machine's "phonetic" protocol (which can be automatically transformed to a duration and pitch notation applying a certain level of accuracy, but which cannot know about light and heavy time and barlines, as these are cultural phenomena.) The level of accuracy is indeed already a cultural decision, but: if the transformation is not into CWMN but, for example, into a time/pitch chart of the fundamental frequencies, the limits of readability of CWMN do not apply. >>> 3. the transcriber's intention, which usually is called "etic" but is in fact "emic" to the transcriber. >>> (emic and etic is not my favorite wording) >>> (I am not worrying about the composer, as in my field music gets composed in sound; the composer is a musician here.) >>> >>> Am 16.04.2021 um 11:06 schrieb James Ingram >>> >>> : >>> >>>> "Stress programs" in abc-notation: >>>> I can't find any references to "stress programs" at the abc site [2], >>>> >>> Ah, you are right, that is a kind of de facto standard, which is weakly documented. It is interpreted by abc2midi and BarFly (and maybe other programs). >>> It makes use of the R:header of abc. Either the stress program is written there directly or in an external file. >>> Here is one of the stress programs I use: >>> >>> * 37 >>> Mazurka >>> 3/4 >>> 3/4=35 >>> 6 >>> 120 1.4 >>> 100 0.6 >>> 110 1.2 >>> 100 0.8 >>> 115 1.32 >>> 100 0.67 >>> >>> so that is: >>> >>> "*37": it starts with a number (that does not do anything). >>> "Mazurka" is the identifying string used in the R:header field, connecting abc-file and stress program for the playback program. >>> "3/4" is the meter.
The stress program only applies to abc-notation in this meter. So there may be stress programs with the same name, but for different meters. >>> "3/4=35" is the tempo indication. >>> "6" is the number of sections the bar is split up into in this stress program (a free choice). So it should be followed by that number of describing lines. >>> "120 1.4" describes the first section of the bar. "120" is the volume (between 0 and 127), "1.4" is the multiplier, the core of the thing, so to say: it says the duration of the first sixth of the notated bar is to be played 1.4 times as long as it would be played considering the given metronome tempo. >>> "100 0.6" and so on. >>> >>> I attached BarFly's "Stress Programs" file, which also contains the description provided by the author of BarFly, Phil Taylor. >>> (I personally would prefer if this mechanism were not limited to one single bar, but could be used to describe durations of a chosen number of bars/beats.) >>> >>> >>> So, thanks for the moment, >>> and feel free to tell me if I should not send these longish and maybe not very clever e-mails to this list. >>> >>> Thanks, >>> Health, >>> Simon >>> Wascher >>> >>> Anfang der weitergeleiteten Nachricht: >>> >>>> Von: Simon Wascher >>>> >>>> Betreff: Aw: [MEI-L] Tick-based Timing >>>> Datum: 15. April 2021 01:03:32 MESZ >>>> An: James Ingram >>>> >>>> >>>> >>>> Hello, >>>> >>>> reading your post to the MEI mailing list (I am not active in MEI) I started to read your text >>>> >>>>> [2] https://james-ingram-act-two.de/writings/TickBasedTiming/tickBasedTiming.html >>>> and would like to just add my two cents of ideas about >>>> >>>>> Ticks carry both temporal and spatial information. >>>>> In particular, synchronous events at the beginning of a bar have the same tick.time, so: >>>>> The (abstract) tick durations of the events in parallel voices in a bar, add up to the same value. >>>>> In other words: >>>>> Bars “add up” in (abstract) ticks.
>>>>> The same is true for parallel voices in systems (that are as wide as the page allows) even when there are no barlines, so: >>>>> Systems also “add up” in (abstract) ticks. >>>>> >>>> * First I would like to point you at Lauge Dideriksen's approach to notate music with CWMN symbols but positioned according to the musical timing. I suppose that is a comparable approach to yours. >>>> >>>> * About barlines I would like to add that barlines also represent human perception (the composer's, the musician's, the listener's or the transcriber's), as barlines do not exist in the audio-signal. >>>> Barlines do not need to align. It is the music as a whole that keeps a common pace (the musicians stay together, but not necessarily at beats or barlines). >>>> It is even possible to play along with completely different barlines in mind; that really happens, I experienced it myself. >>>> >>>> * Do you consider tick based notation to be a way to represent phonemic and phonetic notation (interpretation) at the same time? >>>> >>>> * In many musical styles of traditional music, also in Europe, there are severe differences between emic and etic music perception. Typical and well known examples are Polska playing in traditions of Scandinavia or the problems of scientific notation of Jodler/Jodel (Jodler interpretation has a very loose relation to beat). If you are looking for examples of perfect common pace in a music that treats the tension of timing between the ensemble members as a carrier of musical expression, have a look at central Polish traditional instrumental dance music. >>>> >>>> * About notational approaches: are you aware of the "Stress Programs" used with abc-notation to describe microtiming? It is a method where the bar is split up into a freely chosen number of fractions described by multipliers (1 is the standard length of one fraction, so 0.76 is 0.76 times the standard length and 1.43 is 1.43 times the standard length)?
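[Editorial note: the multiplier mechanism described in the stress-program exchange above can be sketched in a few lines of Python. This is an illustration only, not BarFly's or abc2midi's actual code, and reading the "3/4=35" header of the Mazurka example as 35 bars per minute is an assumption.]

```python
# Sketch of a stress program (illustrative; not BarFly/abc2midi code).
# Each (volume, multiplier) pair describes one of the six bar sections;
# the multiplier rescales that section's equal share of the notated bar.

MAZURKA = [(120, 1.4), (100, 0.6), (110, 1.2), (100, 0.8), (115, 1.32), (100, 0.67)]

def section_durations(sections, bar_seconds):
    """Return (volume, seconds) for each section of one bar."""
    nominal = bar_seconds / len(sections)   # equal share of the notated bar
    return [(vol, mult * nominal) for vol, mult in sections]

bar_seconds = 60.0 / 35.0                   # one 3/4 bar, assuming "3/4=35" = 35 bars/min
durations = section_durations(MAZURKA, bar_seconds)
total = sum(d for _, d in durations)
```

With these multipliers the first sixth of the bar lasts 0.4 s instead of the nominal ~0.29 s, while the bar as a whole keeps almost exactly its notated length (the multipliers sum to 5.99, very nearly 6).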
>>>> >>>> Not sure if this is meeting your intentions, >>>> Thanks, >>>> Health, >>>> >>>> Simon Wascher, (Vienna; musician, transcriber of historical music notation; researcher in folk music) >>>> >>>> >>>> >>>> >>>> Am 14.04.2021 um 21:49 schrieb James Ingram >>>> >>>> : >>>> >>>> >>>>> Last January, I raised an issue about Tick-based Timing in the W3C Music Notation Community Group's MNX Repository [1], but it was closed in February without my being satisfied that it had been sufficiently discussed. >>>>> I had the feeling that something important was being glossed over, so have been thinking hard about the subject over the past few weeks, and have now uploaded an article about it to my website [2]. >>>>> My conclusions are that Tick-based Timing >>>>> • has to do with the difference between absolute (mechanical, physical) time and performance practice, >>>>> • is relevant to the encoding of all the world's event-based music notations, not just CWMN1900. >>>>> • needs to be considered for the next generation of music encoding formats >>>>> I would especially like to get some feedback from those working on non-western notations, so am posting this not only to the W3C MNCG's public mailing list, but also to MEI's. >>>>> All the best, >>>>> James Ingram >>>>> (notator) >>>>> [1] MNX Issue #217: >>>>> https://github.com/w3c/mnx/issues/217 >>>>> >>>>> [2] >>>>> https://james-ingram-act-two.de/writings/TickBasedTiming/tickBasedTiming.html >>>>> >>>>> >>>>> -- >>>>> >>>>> https://james-ingram-act-two.de >>>>> https://github.com/notator >>>>> >>>>> _______________________________________________ >>>>> mei-l mailing list >>>>> >>>>> mei-l at lists.uni-paderborn.de >>>>> https://lists.uni-paderborn.de/mailman/listinfo/mei-l >>> >>> _______________________________________________ >>> mei-l mailing list >>> >>> mei-l at lists.uni-paderborn.de >>> https://lists.uni-paderborn.de/mailman/listinfo/mei-l >> Dr. 
Johannes Kepper >> Wissenschaftlicher Mitarbeiter >> >> Beethovens Werkstatt: Genetische Textkritik und Digitale Musikedition >> Musikwiss. Seminar Detmold / Paderborn | Hornsche Straße 39 | D-32756 Detmold >> >> kepper at beethovens-werkstatt.de >> | +49 (0) 5231 / 975669 >> >> >> www.beethovens-werkstatt.de >> >> Forschungsprojekt gefördert durch die Akademie der Wissenschaften und der Literatur | Mainz >> >> >> >> >> >> >> _______________________________________________ >> mei-l mailing list >> >> mei-l at lists.uni-paderborn.de >> https://lists.uni-paderborn.de/mailman/listinfo/mei-l > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l Dr. Johannes Kepper Wissenschaftlicher Mitarbeiter Beethovens Werkstatt: Genetische Textkritik und Digitale Musikedition Musikwiss. Seminar Detmold / Paderborn | Hornsche Straße 39 | D-32756 Detmold kepper at beethovens-werkstatt.de | +49 (0) 5231 / 975669 www.beethovens-werkstatt.de Forschungsprojekt gefördert durch die Akademie der Wissenschaften und der Literatur | Mainz From j.ingram at netcologne.de Thu Apr 22 13:09:22 2021 From: j.ingram at netcologne.de (James Ingram) Date: Thu, 22 Apr 2021 13:09:22 +0200 Subject: [MEI-L] Tick-based Timing In-Reply-To: References: <1c13f78c-fc4e-9a0c-77f5-fa863d231de5@netcologne.de> <7A76A059-A268-42A0-9553-A4E23D796F73@edirom.de> <533df9de-bf97-8413-88bd-540c95ac99cf@netcologne.de> Message-ID: <6b0f1c85-c727-2005-682f-9e218679f8b2@netcologne.de> Hi Jo, Thanks for your thoughts, and the links. Yes, I also think that MNXcommon1900 has not yet reached the level of maturity necessary for a project like your MNX to MEI Converter. There are still too many open issues.
In particular, the ones I opened last January (Actions is particularly interesting). My own MNXtoSVG project is designed to be changed when MNXcommon1900 changes. It's a test-bed, not a finished tool, though it could end up that way when/if MNXcommon1900 gets finalised. ******** Unfortunately I couldn't download all of the Milton Babbitt paper from the link you gave, but I was reading /Perspectives of New Music/ during the late 1960s, so I probably read it then, and there are lots of other people one might also mention. Interesting as it is, I don't think we need to discuss the history of these ideas further here. Better to stay focussed on the topic as it presents itself to us now. :-) ******** I'm quite new to MEI so, finding the learning curve a bit steep beginning at your link to §1.2.4 Profiles [1], I decided to read the document (the development version of the MEI Guidelines) from the top. Here are my thoughts on §1.2 *Basic Concepts of MEI*: §1.2.1 Musical Domains [2] describes the four musical domains used by MEI: /logical/, /gestural/, /visual/ and /analytical/. The text says that MEI does not keep these domains hermetically separate. That is, I think, rather confusing. Things would be clearer if the domains were explained a bit differently. Here's how I understand them (please correct me if I get anything wrong): The *logical domain* is the content of a machine-readable XML file. This is the MEI encoding. The *visual domain* is a *spatial* instantiation of data in the XML file. It is a score, on paper, screen etc., and is created by a machine that can read the XML. The instantiation can be done automatically (using default styles), or by a human using the XML-reading machine (adding stylistic information ad lib.). The *gestural domain* is a *temporal* instantiation of data in the XML file. This is a live or deferred performance created by a machine that can read the XML.
A deferred performance is simply a recording that can be performed, without the addition of further information, by a machine designed for that purpose. The instantiation can be done automatically (using default styles), or by a human using the XML-reading machine (adding stylistic information ad lib.). *N.B.*: Since the logical information in the original XML file is preserved in the *visual domain* (score), a temporal instantiation (*gestural domain*) of the data in the original XML file can also be created by a human interpreting the spatial instantiation (the score), adding advanced stylistic information, stored in human memory, ad lib. Historically, performance-practice information has been ignored by computer music algorithms because it's not directly accessible to the machines -- but it is nevertheless fundamental to the development of musical style. Failing to include it misses the whole point of music notation -- which is to be an /aide-memoire/. Musical culture *is* performance practice stored in human memory. As I point out in §5.1.3 of my article [5], attempting to ignore performance practice traditions was a common problem in the 20th century. It's /still/ not addressed by MEI, but needs to be: We are entering an era in which Artificial Intelligence applications can be trained to learn (or develop) performance practice traditions. Given the information in an MEI file, an AI could be trained to play Mozart, Couperin or any other style correctly (i.e. in accordance with a particular tradition). The *analytical domain* is, I think, just a question of advanced metadata, so consists of information that can be put in the XML file as ancillary data, without affecting the other information there. **** §1.2.2 Events and Controlevents There seem to be close parallels between the way these are defined, and the way global and other "directions" are used in MNX. **** §1.2.3 Timestamps in MEI [3]. (MNX's usage is currently very similar to MEI's.)
For a *Basic Concept*, this section seems to me to be curiously CWMN-centric. The document says that: > timestamps rely solely on the numbers given in the meter signature. What about music notations that don't use the CWMN duration symbols, or use them in non-standard ways? The same section also says: > At this point, MEI uses real numbers only to express timestamps. In > case of (nested or complex) tuplets, this solution is inferior to > fractions because of rounding errors. It is envisioned to introduce a > fraction-based value for timestamps in a future revision of MEI. The proposal in my article [5] is that events should be allocated tick-durations that are /integers/. If that is done, the future revision of MEI would use /integer/ (tick.time) timestamp values for both @tstamp and @tstamp2. This would also eliminate rounding errors when aligning event symbols. If the gestural (=temporal) domain is also needed, absolute time values (numbers of seconds) can be found by adding the millisecond durations of successive ticks. (Note that tempo is a redundant concept here.) As I point out in [5], that's no problem in CWMN, and it allows *all* the world's event-based music notations to be treated in the same way (i.e. it really is a *Basic Concept*). **** §1.2.4 MEI Profiles [1] and §1.2.5 Customizing MEI [4] I'm speculating a bit here, but: Obviously, MEI's existing profiles have to continue to be supported, but I think that providing a common strategy for coding durations would make it easier (more economical) to develop parsers. You say > ideally, projects that work with more complex MEI profiles internally > also offer serializations to MEI Basic for others to use Perhaps there should be a new version of MEI Basic, that could be used not only in a similar way, but also for new customisations (e.g. for Asian notations). The existing customisations could then migrate gracefully if/when the new formats turn out to be better supported.
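[Editorial note: the integer-tick idea argued for above can be sketched as follows. The function and names are invented for illustration; this is not MEI or MNX syntax.]

```python
# Illustrative sketch (invented names, not MEI/MNX): integer tick durations
# per event, plus a list of per-tick millisecond durations, yield absolute
# onset times by summation. No tempo value is needed, and the tick (logical)
# domain stays free of rounding errors because all values are integers.

from itertools import accumulate

def onsets_ms(tick_durations, ms_per_tick):
    """tick_durations: integer ticks for each successive event in one voice.
    ms_per_tick: milliseconds taken by each successive tick.
    Returns the absolute onset (in ms) of each event."""
    cum = [0, *accumulate(ms_per_tick)]   # cum[t] = ms elapsed after t ticks
    onsets, tick = [], 0
    for d in tick_durations:
        onsets.append(cum[tick])
        tick += d
    return onsets

# Four events of 480 ticks each, played at a strict 1 ms per tick:
print(onsets_ms([480] * 4, [1] * 1920))   # [0, 480, 960, 1440]
```

Varying the per-tick millisecond values (rubato, stress programs, etc.) changes the gestural result without touching the integer tick values that align the symbols.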
Hope that helps, all the best, James [1] https://music-encoding.org/guidelines/dev/content/introduction.html#meiprofiles [2] https://music-encoding.org/guidelines/dev/content/introduction.html#musicalDomains [3] https://music-encoding.org/guidelines/dev/content/introduction.html#timestamps [4] https://music-encoding.org/guidelines/dev/content/introduction.html#meicustomization [5] https://james-ingram-act-two.de/writings/TickBasedTiming/tickBasedTiming.html Am 20.04.2021 um 19:31 schrieb Johannes Kepper: > Hi James, > > I’ve been loosely following the MNX efforts as well. About a year ago, I wrote a converter [1] from the then current MNX to MEI Basic (to which I’ll come back in a sec). However, MNX felt pretty unstable at that time, and the documentation didn’t always match the available examples, so I put that aside. As soon as MNX has reached a certain level of maturity, I will go back to it. In this context, I’m talking about what you call MNXcommon1900 only – the other aspects of MNX seem much less stable and fleshed out. > > Maxwell 1981 is just one reference for this concept of musical domains, and many other people had similar ideas. I lack the time to look it up right now, but I’m quite confident I have read similar stuff in 19th century literature on music philology – maybe Spitta or Nottebohm. Milton Babbitt [2] needs to be mentioned in any case. To be honest, it’s a quite obvious thing that music can be seen from multiple perspectives. I think it’s also obvious that there are more than those three perspectives mentioned so far: There is a plethora of analytical approaches to music, and Schenker's and Riemann’s perspectives (to name just two) are quite different and may not be served well by any single approach. In my own project, we’re working on the genesis of musical works, so we’re interested in ink colors, writing orders, revision instructions written on the margins etc.
And there’s much more than that: Asking a synesthete, we should probably consider encoding the different colors of music as well (and it’s surprising to see how far current MEI would already take us down that road…). Of course this doesn’t mean that each and everyone needs to use those categories, but in specific contexts, they might be relevant. So, I have no messianic zeal whatsoever to define a closed list of allowed musical domains – life (incl. music encoding) is more complex than that. > > There seems to be a misunderstanding of the intention and purpose of MEI. MEI is not a music encoding _format_, but a framework / toolkit to build such formats. One should not use the so-called MEI-all customization, which offers all possibilities of MEI at the same time. Instead, MEI should be cut down to the very repertoires / notation types, perspectives and intentions of a given encoding task to facilitate consistent markup that is spot-on for the (research) question at hand. Of course there is need and room for a common ground within MEI, where people and projects can share their encodings and re-use them for purposes other than their original uses. One such common ground is probably MEI Basic [3], which tries to simplify MEI as much as possible, allowing only one specific way of encoding things. It’s still rather new, and not many projects support it yet, but ideally, projects that work with more complex MEI profiles internally also offer serializations to MEI Basic for others to use – as they know their own data best. At the same time, MEI Basic may serve as interface to other encoding formats like MusicXML, Humdrum, MIDI, you-name-it. However, this interchange is just one purpose of music encoding, and many other use cases are equally legitimate.
MEI is based on the idea that there are many different types of music and manifestations thereof, numerous use-cases and reasons to encode them, and diverging intentions about what to achieve and do with those encodings, but that there are still some commonalities in there which are worth considering, as they help to better understand the phenomenon at hand. MEI is a mental model (which happens to be serialized as an XML format right now, but it could be expressed as JSON or even RDF instead…), but it’s not necessary to cover that model in full for any given encoding task. > > To make a long story short: If you feel like you don’t need a specific aspect of MEI, that’s perfectly fine, and nothing forces you to use that. Others may come to other conclusions, and that is equally fine. Admittedly, this flexibility comes at the price of a certain complexity of the model, but MEI’s intention is not to squeeze every use case into a prescribed static model, and rule out everything that doesn’t fit – it’s not a hammer that treats everything as nails. At the same time, MEI offers (among others) a simple (basic) starting point for the CWMN repertoire, but it is easy to build up from there, utilizing the full potential of the framework when and where necessary. > > I hope this helps to get a better picture of what MEI is, and how it relates to your own efforts on music encoding. > > All best, > jo > > > [1] Converter MNX to MEI: https://github.com/music-encoding/encoding-tools/blob/master/mnx2mei/mnx2mei.xsl > [2] Milton Babbitt 1965: The Use of Computers in Musicological Research, https://doi.org/10.1515/9781400841226.202, p. 204f > [3] MEI Basic: https://music-encoding.org/guidelines/dev/content/introduction.html#meiprofiles > > >> Am 18.04.2021 um 13:11 schrieb James Ingram: >> >> Thanks, Simon and Jo, for your responses, >> >> @Simon: Please be patient, I'll come back to you, but I first need to sort out some basics with Jo.
>> >> @Jo: Before replying to your posting, I first need to provide some context so that you can better understand what I'm saying: >> >> Context 1: Outline of the state of the debate in the MNX Repository >> MNX is intended to be a set of next-generation, web-friendly music notation encodings, related via common elements in their schemas. The first format being developed is MNXcommon1900, which is intended to be the successor to MusicXML. It does not have to be backward compatible with MusicXML, so the co-chair wants to provide documentation comparing the different ways in which MNXcommon1900 and MusicXML encode a number of simple examples. Unfortunately, they first need to revise the MusicXML documentation in order to do that, so work on MNXcommon1900 has temporarily stopped. The intention is to start work on it again as soon as the MusicXML documentation revision is complete. After 5+ years of debate, MNXcommon1900 is actually in a fairly advanced state. >> I have two MNX-related GitHub repositories (the best way to really understand software is to write it): >> >> • MNXtoSVG: A (C#) desktop application that converts MNX files to SVG. This application is a test-bed for MNX's data structures, and successfully converts the first completed MusicXML-MNX comparison examples to (graphics only) SVG. When/if MNX includes temporal info, it will do that too (using a special namespace in the SVG). >> • A fork of the MNX repository: This contains (among other things) the beginnings of a draft schema for MNXcommon1900. The intention is to plunder that schema for other schemas... >> I'm looking for things that all event-based music notations have in common, so that software libraries can be used efficiently across all such notations. That's important if we want to develop standards that are consistent all over the web. >> >> Context 2: My background >> I'm a relic of the '60s Avant-Garde. Left college in the early 1970s, and became K.
Stockhausen's principal copyist 1974-2000. In the '60s, they were still trying to develop new notations, but that project collapsed quite suddenly in 1970 when all the leading composers gave it up (without solving anything), and reverted to using standard notation. In 1982, having learned a lot from my boss, and having had a few years' practical experience pushing the dots around, I suddenly realised what had gone wrong, and wrote an article about it that was eventually published in 1985. The article contains a critical analysis of CWMN... >> So I'm coming from a rather special niche in the practical world of music publishing, not from the academic world. In 1982, I was not aware of Maxwell (1981) [1], and I hadn't realised, until researching this post, how it relates to things like metrical time in the (1985) MIDI 1.0 standard (see §5, §5.1.2 in [3]). >> >> *** >> >> MEI: >> The Background page [2] on the MEI website cites Maxwell (1981) [1] as the source of the three principal domains: physical, logical and graphical. In contrast, my 1982 insight was that the domains space and time are fundamental, and need to be clearly and radically distinguished. Maxwell himself says (at the beginning of §2.0 of his paper) that his "classification is not the only way that music notation could be broken up..." >> >> So I'm thinking that Maxwell's domains are not as fundamental as the ones I found, and that mine lead to simpler, more general and more powerful results. >> >> From my point of view, Maxwell's logical domain seems particularly problematic: >> Understandably for the date (1981), and the other problems he was coping with, I think Maxwell has too respectful an attitude to the symbols he was dealing with. The then unrivalled supremacy of CWMN1900 over all other notations led him to think that he could assign fixed relative values to the duration symbols.
That could, of course, also be explained in terms of him wanting to limit the scope of his project but, especially when one looks at legitimate, non-standard (e.g. Baroque) uses of the symbols (see §4.1.1 of [3]), his logical domain still seems to be on rather shaky ground. >> Being able to include notations containing any kind of event-symbol in my model (see §4.2) is exactly what's needed in order to create a consistent set of related schemas for all the world's event-based music notations... >> >> So, having said all that, MEI's @dur attribute subclasses look to me like ad-hoc postulates that have been added to the paradigm to shore it up, without questioning its underlying assumptions. The result is that MEI has become over-complicated and unwieldy. That's a common theme in ageing paradigms... remember Ptolemy? >> >> Okay, maybe I'm being a bit provocative there. But am I justified? :-) >> >> Hope that helps, >> all the best, >> James >> >> [1] Maxwell (1981): http://dspace.mit.edu/handle/1721.1/15893 >> >> [2] https://music-encoding.org/resources/background.html >> >> [3] https://james-ingram-act-two.de/writings/TickBasedTiming/tickBasedTiming.html >> >> -- >> >> https://james-ingram-act-two.de >> https://github.com/notator >> >> >> Am 16.04.2021 um 18:02 schrieb Johannes Kepper: >>> Dear all, >>> >>> I’m not much into this discussion, and haven’t really investigated the use cases behind this, so my answer may not be appropriate for the question asked. However, I believe that most of the requirements articulated here are safely covered by MEI. Looking at the attributes available on notes (see >>> https://music-encoding.org/guidelines/v4/elements/note.html#attributes >>> ), there are plenty of different approaches available: >>> >>> @dur – Records the duration of a feature using the relative durational values provided by the data.DURATION datatype. >>> @dur.ges – Records performed duration information that differs from the written duration.
>>> @dur.metrical – Duration as a count of units provided in the time signature denominator. >>> @dur.ppq – Duration recorded as pulses-per-quarter note, e.g. MIDI clicks or MusicXML divisions. >>> @dur.real – Duration in seconds, e.g. '1.732'. >>> @dur.recip – Duration as an optionally dotted Humdrum *recip value. >>> >>> In addition, there is also >>> >>> @tstamp – Encodes the onset time in terms of musical time, i.e., beats[.fractional beat part], as expressed in the written time signature. >>> @tstamp.ges – Encodes the onset time in terms of musical time, i.e., beats[.fractional beat part], as expressed in the written time signature. >>> @tstamp.real – Records the onset time in terms of ISO time. >>> @to – Records a timestamp adjustment of a feature's programmatically-determined location in terms of musical time; that is, beats. >>> @synch – Points to elements that are synchronous with the current element. >>> @when – Indicates the point of occurrence of this feature along a time line. Its value must be the ID of a when element elsewhere in the document. >>> >>> They’re all for slightly different purposes, and surely many of those attributes are not (well) supported by existing software, but they seem to offer good starting points to find a model for the questions asked. It is important to keep in mind that music manifests in various forms – sound, notation, concepts (what _is_ a quarter?), and that MEI tries to treat those „domains“ as independently as possible.
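[Editor's note: the attribute list above mixes notation-relative values (@dur, @dur.metrical, @dur.recip), tick-based values (@dur.ppq), and absolute time (@dur.real). As a small illustration of how a tick-based duration only becomes absolute once a tempo is fixed, here is a minimal Python sketch; the function name and interface are illustrative only, not part of any MEI tooling:

```python
def ppq_to_seconds(dur_ppq: int, ppq: int, bpm: float) -> float:
    """Convert a tick-based duration (cf. MEI @dur.ppq) to seconds.

    dur_ppq -- duration of the event in pulses
    ppq     -- pulses per quarter note (the encoding's resolution)
    bpm     -- tempo in quarter notes per minute
    """
    seconds_per_quarter = 60.0 / bpm        # one quarter note in seconds
    return (dur_ppq / ppq) * seconds_per_quarter

# A quarter note (480 pulses at a resolution of 480 PPQ) at 120 BPM:
print(ppq_to_seconds(480, 480, 120.0))  # 0.5
```

The same @dur.ppq value maps to a different @dur.real whenever the tempo changes, which is one way of reading the point that the domains are connected but are best encoded independently.]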
Of course, they’re all connected, but not being specific (enough) in that regard did no good to other formats… >>> Hope this helps, >>> jo >>> >>> >>> >>>> On 16.04.2021 at 15:35, Simon Wascher >>>> wrote: >>>> >>>> Hi all, >>>> >>>> On 14.04.2021 at 21:49, James Ingram >>>> >>>> wrote: >>>> >>>>>>> Last January, I raised an issue about Tick-based Timing in the W3C Music Notation Community Group's MNX Repository [1], but it was closed in February without my being satisfied that it had been sufficiently discussed. >>>>>>> I had the feeling that something important was being glossed over, so have been thinking hard about the subject over the past few weeks, and have now uploaded an article about it to my website [2]. >>>>>>> [...] >>>>>>> >>>> On 16.04.2021 at 11:06, James Ingram >>>> wrote: >>>> >>>>> First: Did you intend your reply just to be private, or did you want to send it to the public list as well? I'm not sure. >>>>> If you'd like this all to be public, I could send this to MEI-L as well... >>>>> >>>> I answered James Ingram off-list, but am now moving to MEI-L with my answer, as it seems the intention was to get answers on the list. >>>> My full first answer to James Ingram is down at the end of this mail, if anyone is interested (I did not forward James Ingram's reply to me in full, as I did not want to forward someone else's private answer to me in public). >>>> >>>> On 15.04.2021 at 01:03, Simon Wascher >>>> >>>> wrote: >>>> >>>>>> I would like to point you at Lauge Dideriksen's approach of notating music with CWMN symbols but positioned according to the musical timing. I suppose that is a comparable approach to yours. >>>>>> >>>> On 16.04.2021 at 11:06, James Ingram >>>> wrote: >>>> >>>>> I took a look at Lauge Dideriksen's website, but can't see enough music examples to know quite what you mean by "positioned according to musical timing". I see that he (sometimes?) uses tuplets. Does he (sometimes?)
position the symbols in (flat) space to represent (flat) time? >>>>> In either case, that's not quite what I'm saying. I'm talking about the underlying machine-readable encoding of notation, not just the way it looks on the surface. >>>>> >>>> maybe there is still no material about this online. He is talking about this at European Voices VI in Vienna, 27–30 September 2021. >>>> It might be sensible to contact him directly. >>>> >>>> On 15.04.2021 at 01:03, Simon Wascher >>>> >>>> wrote: >>>> >>>>>> Do you consider tick-based notation to be a way to represent phonemic and phonetic notation (interpretation) at the same time? >>>>>> >>>> On 16.04.2021 at 11:06, James Ingram >>>> wrote: >>>> >>>>> Here again, I'm not quite sure what you mean. Perhaps it would help if I again emphasise the difference between the surface appearance of a notation and its machine-readable encoding. >>>>> >>>> I see; your focus seems to be on machine-readability and the problem of the relation between CWMN and its machine playback. >>>> My focus is the problem of the relation between CWMN and real live performance. >>>> I am looking for tools to encode real live performances, using the symbols of CWMN but allowing the _display_ of the real durations of the live performance to be included (the difference between the real live performance and CWMN). >>>> >>>> On 16.04.2021 at 11:06, James Ingram >>>> >>>> wrote: >>>> >>>>> You ask about emic and etic, and the problem of notating traditional Scandinavian Polska or jodling: >>>>> To get us on the same page, here's where I am: Transcriptions of music that is in an aural tradition always reflect what the transcriber thinks is important. Transcriptions often leave out nuances (timing, tonal inflexions etc.) that the original performers and their public would regard as essential.
>>>>> I think that aural traditions correctly ignore machine time (seconds, milliseconds), but that if we use machines to record them, we ultimately have to use such timings (in the machines). I don't think that matters, providing that the transcribers don't try to impose machine time (in the form of beats per second) too literally on their interpretations of the original performances. >>>>> >>>> Well, to be precise: in transcribing music, there are (at least) three points of view (versions of notation): >>>> >>>> 1. the musician's "emic" intention >>>> 2. the machine's "phonetic" protocol (which can be automatically transformed into a duration and pitch notation applying a certain level of accuracy, but which cannot know about light and heavy time or barlines, as these are cultural phenomena. The level of accuracy is indeed already a cultural decision, but if the transformation is not into CWMN but, for example, into a time/pitch chart of the fundamental frequencies, the limits of readability of CWMN do not apply.) >>>> 3. the transcriber's intention, which usually is called "etic" but is in fact "emic" to the transcriber. >>>> (emic and etic is not my favorite wording) >>>> (I am not worrying about the composer, as in my field music is composed in sound; the composer is a musician here.) >>>> >>>> On 16.04.2021 at 11:06, James Ingram >>>> >>>> wrote: >>>> >>>>> "Stress programs" in abc-notation: >>>>> I can't find any references to "stress programs" at the abc site [2], >>>>> >>>> Ah, you are right, that is a kind of de facto standard, which is weakly documented. It is interpreted by abc2midi and BarFly (and maybe other programs). >>>> It makes use of the R: header of abc. Either the stress program is written there directly or in an external file.
>>>> Here is one of the stress programs I use: >>>> >>>> * 37 >>>> Mazurka >>>> 3/4 >>>> 3/4=35 >>>> 6 >>>> 120 1.4 >>>> 100 0.6 >>>> 110 1.2 >>>> 100 0.8 >>>> 115 1.32 >>>> 100 0.67 >>>> >>>> so that is: >>>> >>>> "* 37": it starts with a number (which does not do anything). >>>> "Mazurka" is the identifying string used in the R: header field, connecting abc-file and stress program for the playback program. >>>> "3/4" is the meter. The stress program only applies to abc-notation in this meter. So there may be stress programs with the same name, but for different meters. >>>> "3/4=35" is the tempo indication. >>>> "6" is the number of sections the bar is split up into in this stress program (a free choice). So it should be followed by that number of describing lines. >>>> "120 1.4" describes the first section of the bar. "120" is the volume (between 0 and 127); "1.4" is the multiplier, the core of the thing, so to speak: it says the duration of the first sixth of the notated bar is to be played 1.4 times as long as it would be played at the given metronome tempo. >>>> "100 0.6" and so on. >>>> >>>> I attached BarFly's "Stress Programs" file, which also contains the description provided by the author of BarFly, Phil Taylor. >>>> (I personally would prefer it if this mechanism were not limited to one single bar, but could be used to describe durations over a chosen number of bars/beats.) >>>> >>>> >>>> So, thanks for the moment, >>>> and feel free to tell me I shall not send these longish and maybe not very clever e-mails to this list. >>>> >>>> Thanks, >>>> Health, >>>> Simon >>>> Wascher >>>> >>>> Begin forwarded message: >>>> >>>>> From: Simon Wascher >>>>> >>>>> Subject: Re: [MEI-L] Tick-based Timing >>>>> Date: 15.
April 2021 01:03:32 MESZ >>>>> To: James Ingram >>>>> >>>>> >>>>> >>>>> Hello, >>>>> >>>>> reading your post to the MEI mailing list (I am not active in MEI) I started to read your text >>>>> >>>>>> [2] https://james-ingram-act-two.de/writings/TickBasedTiming/tickBasedTiming.html >>>>> and would like to just add my two cents of ideas about >>>>> >>>>>> Ticks carry both temporal and spatial information. >>>>>> In particular, synchronous events at the beginning of a bar have the same tick.time, so: >>>>>> The (abstract) tick durations of the events in parallel voices in a bar add up to the same value. >>>>>> In other words: >>>>>> Bars “add up” in (abstract) ticks. >>>>>> The same is true for parallel voices in systems (that are as wide as the page allows) even when there are no barlines, so: >>>>>> Systems also “add up” in (abstract) ticks. >>>>>> >>>>> * First I would like to point you at Lauge Dideriksen's approach of notating music with CWMN symbols but positioned according to the musical timing. I suppose that is a comparable approach to yours. >>>>> >>>>> * About barlines I would like to add that barlines also represent human perception (the composer's, the musician's, the listener's or the transcriber's), as barlines do not exist in the audio signal. >>>>> Barlines do not need to align. It is the music as a whole that keeps a common pace (the musicians stay together, but not necessarily at beats or barlines). >>>>> It is even possible to play along with completely different barlines in mind; that really happens, I experienced it myself. >>>>> >>>>> * Do you consider tick-based notation to be a way to represent phonemic and phonetic notation (interpretation) at the same time? >>>>> >>>>> * In many musical styles of traditional music, also in Europe, there are severe differences between emic and etic music perception.
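[Editor's note: the stress-program format quoted and explained earlier in this thread (a name matched against the abc R: header, a meter, a tempo, a section count, then one volume/multiplier pair per bar section) is compact enough to sketch in code. The following is a minimal, hypothetical Python parser and applier reconstructed from that description, not the actual code of abc2midi or BarFly:

```python
def parse_stress_program(text):
    """Parse a BarFly-style stress program into a small dict.

    Expected layout (as in the Mazurka example quoted in this thread):
      '* 37'     -- leading number, ignored
      'Mazurka'  -- name matched against the abc R: header
      '3/4'      -- meter the program applies to
      '3/4=35'   -- tempo indication
      '6'        -- number of bar sections
      '120 1.4'  -- volume (0-127) and duration multiplier, one per section
    """
    lines = [ln.strip() for ln in text.strip().splitlines() if ln.strip()]
    name, meter, tempo = lines[1], lines[2], lines[3]
    n_sections = int(lines[4])
    sections = []
    for ln in lines[5:5 + n_sections]:
        vol, mult = ln.split()
        sections.append((int(vol), float(mult)))
    return {"name": name, "meter": meter, "tempo": tempo, "sections": sections}

def apply_stress(bar_seconds, sections):
    """Stretch or compress each equal fraction of a bar by its multiplier."""
    nominal = bar_seconds / len(sections)
    return [nominal * mult for _, mult in sections]

program = parse_stress_program("""
* 37
Mazurka
3/4
3/4=35
6
120 1.4
100 0.6
110 1.2
100 0.8
115 1.32
100 0.67
""")
# Each sixth of a 2.4-second bar, scaled by its multiplier:
print(apply_stress(2.4, program["sections"]))
```

Note that the six multipliers in the example sum to 5.99, very close to 6, so the overall bar duration is almost unchanged; only the internal timing shifts.]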
Typical and well-known examples are Polska playing in traditions of Scandinavia, or the problems of scientific notation of Jodler/Jodel (Jodler interpretation has a very loose relation to the beat). If you are looking for examples of perfect common pace in a music that treats the tension of timing between the ensemble members as a carrier of musical expression, have a look at central Polish traditional instrumental dance music. >>>>> >>>>> * About notational approaches: are you aware of the "Stress Programs" used with abc-notation to describe microtiming? It is a method where the bar is split up into a freely chosen number of fractions described by multipliers (1 is the standard length of one fraction, so 0.76 is 0.76 times the standard length and 1.43 is 1.43 times the standard length)? >>>>> >>>>> Not sure if this is meeting your intentions, >>>>> Thanks, >>>>> Health, >>>>> >>>>> Simon Wascher (Vienna; musician, transcriber of historical music notation; researcher in folk music) >>>>> >>>>> >>>>> >>>>> >>>>> On 14.04.2021 at 21:49, James Ingram >>>>> >>>>> wrote: >>>>> >>>>> >>>>>> Last January, I raised an issue about Tick-based Timing in the W3C Music Notation Community Group's MNX Repository [1], but it was closed in February without my being satisfied that it had been sufficiently discussed. >>>>>> I had the feeling that something important was being glossed over, so have been thinking hard about the subject over the past few weeks, and have now uploaded an article about it to my website [2]. >>>>>> My conclusions are that Tick-based Timing >>>>>> • has to do with the difference between absolute (mechanical, physical) time and performance practice, >>>>>> • is relevant to the encoding of all the world's event-based music notations, not just CWMN1900.
>>>>>> • needs to be considered for the next generation of music encoding formats >>>>>> I would especially like to get some feedback from those working on non-western notations, so am posting this not only to the W3C MNCG's public mailing list, but also to MEI's. >>>>>> All the best, >>>>>> James Ingram >>>>>> (notator) >>>>>> [1] MNX Issue #217: >>>>>> https://github.com/w3c/mnx/issues/217 >>>>>> >>>>>> [2] >>>>>> https://james-ingram-act-two.de/writings/TickBasedTiming/tickBasedTiming.html >>>>>> >>>>>> >>>>>> -- >>>>>> >>>>>> https://james-ingram-act-two.de >>>>>> https://github.com/notator >>>>>> >>>>>> _______________________________________________ >>>>>> mei-l mailing list >>>>>> >>>>>> mei-l at lists.uni-paderborn.de >>>>>> https://lists.uni-paderborn.de/mailman/listinfo/mei-l >>>> >>>> _______________________________________________ >>>> mei-l mailing list >>>> >>>> mei-l at lists.uni-paderborn.de >>>> https://lists.uni-paderborn.de/mailman/listinfo/mei-l >>> Dr. Johannes Kepper >>> Wissenschaftlicher Mitarbeiter >>> >>> Beethovens Werkstatt: Genetische Textkritik und Digitale Musikedition >>> Musikwiss. Seminar Detmold / Paderborn | Hornsche Straße 39 | D-32756 Detmold >>> >>> kepper at beethovens-werkstatt.de >>> | +49 (0) 5231 / 975669 >>> >>> >>> www.beethovens-werkstatt.de >>> >>> Forschungsprojekt gefördert durch die Akademie der Wissenschaften und der Literatur | Mainz >>> >>> >>> >>> >>> >>> >>> _______________________________________________ >>> mei-l mailing list >>> >>> mei-l at lists.uni-paderborn.de >>> https://lists.uni-paderborn.de/mailman/listinfo/mei-l >> _______________________________________________ >> mei-l mailing list >> mei-l at lists.uni-paderborn.de >> https://lists.uni-paderborn.de/mailman/listinfo/mei-l > Dr. Johannes Kepper > Wissenschaftlicher Mitarbeiter > > Beethovens Werkstatt: Genetische Textkritik und Digitale Musikedition > Musikwiss.
Seminar Detmold / Paderborn | Hornsche Straße 39 | D-32756 Detmold > kepper at beethovens-werkstatt.de | +49 (0) 5231 / 975669 > > www.beethovens-werkstatt.de > Forschungsprojekt gefördert durch die Akademie der Wissenschaften und der Literatur | Mainz > > > > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l -- https://james-ingram-act-two.de https://github.com/notator -------------- next part -------------- An HTML attachment was scrubbed... URL: From elsadeluca at fcsh.unl.pt Mon Apr 26 08:15:25 2021 From: elsadeluca at fcsh.unl.pt (Elsa De Luca) Date: Mon, 26 Apr 2021 08:15:25 +0200 Subject: [MEI-L] Call for Interest - MEC 2022 Message-ID: PLEASE CIRCULATE WIDELY Dear MEI-L, As many of you are aware, among its activities MEI oversees the organization of an annual conference, the *Music Encoding Conference* (*MEC*), to provide a meeting place for scholars interested in discussing the modeling, generation and uses of music encoding. While the conference has an emphasis on the development and uses of MEI, other contributions related to general approaches to music encoding are always welcome, as an opportunity for exchange between scholars from various research communities, including technologists, librarians, historians, and theorists. The MEI Board invites expressions of interest for the organization of the *10th edition* of the annual *Music Encoding Conference*, to be held in *2022*. In order to address the uncertainty related to the current global health crisis, we have opted for a simplified application procedure. At this stage, *it is not necessary to submit a full application*; instead, an informal expression of interest will be enough to set the ball rolling (maximum word limit: 600 words).
We are aware that it may seem difficult to think ahead while we are in the midst of a pandemic, but we are optimistic that the worst is already behind us and, in any case, the option of a hybrid conference (both online and in person) remains valid for 2022. Historically, the conference has been organized by institutions involved in MEI, such as MEI member institutions or those hosting MEI-based projects, but expressions of interest from any interested group or institution will be happily received. While MEC venues have alternated between Europe and North America in the past, there is no such requirement, so proposals from anywhere are invited. The *deadline* to make up your mind and get in touch with us is *26 May 2021*. Please direct all proposals and inquiries to info at music-encoding.org. Looking forward to hearing from you! Best wishes, Elsa De Luca (on behalf of the MEI Board) Elsa De Luca ------------------------ Early Music Researcher CESEM - FCSH, NOVA University of Lisbon https://sites.google.com/fcsh.unl.pt/elsadeluca/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From elsadeluca at fcsh.unl.pt Mon Apr 26 08:33:11 2021 From: elsadeluca at fcsh.unl.pt (Elsa De Luca) Date: Mon, 26 Apr 2021 08:33:11 +0200 Subject: [MEI-L] Sustainability study of MEI Message-ID: Dear MEI-L, On behalf of the MEI Board, I would like to announce that Katrina Simone Fenlon and Jessica Grimmer at the University of Maryland are carrying out a study on the sustainability of MEI. This study is part of the *Communities Sustaining Digital Collections* project, which is investigating how digital humanities projects are sustained by their communities, with support from the Andrew W. Mellon Foundation and the Institute of Museum and Library Services.
This research is intended to benefit as many digital collections, and the communities that create and maintain them, as possible; this case constitutes one part of an overarching study of sustainability for different kinds of digital collections. As part of this MEI research, they will observe forums such as the MEI website, Slack channel, and GitHub to gain an understanding of how the organization functions, as well as how plans for the sustainability of the digital objects, and for non-digital aspects of the organization such as leadership roles, are discussed. They plan to dissociate identifying information used in project disseminations as much as possible. MEI Board members have approved the observation of publicly available MEI forums as sources of evidence for how the MEI community contributes to the sustainability of MEI, and we hope that you will share our enthusiasm for this initiative. If you wish to know more about the project, please don’t hesitate to contact Katrina Fenlon (kfenlon at umd.edu). All the best, Elsa De Luca -------------------------------- MEI Administrative Chair Early Music Researcher CESEM - FCSH, NOVA University of Lisbon https://sites.google.com/fcsh.unl.pt/elsadeluca/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From kepper at edirom.de Wed Apr 28 09:28:40 2021 From: kepper at edirom.de (Johannes Kepper) Date: Wed, 28 Apr 2021 09:28:40 +0200 Subject: [MEI-L] ODD Thursday tomorrow Message-ID: Dear all, this is an even month, so we meet for the monthly ODD meeting on the last Thursday, which is tomorrow, at 2pm European time / 8am EST. The agenda for tomorrow is at https://github.com/orgs/music-encoding/projects/2#column-13573953, everyone is invited to both join the meeting and contribute to the agenda.
The meeting credentials are as follows: https://us02web.zoom.us/j/83097885923?pwd=NTZvTXh1S2E1MkdNdi9tV3FKWVpMQT09 Meeting-ID: 830 9788 5923 Kenncode: MEI Looking forward to see you tomorrow, jo PS: Would it help to push future meetings further into the day? We recognize that it’s quite early for some… -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: Message signed with OpenPGP URL: From krichts at mail.uni-paderborn.de Fri Apr 30 17:03:11 2021 From: krichts at mail.uni-paderborn.de (Kristina Richts) Date: Fri, 30 Apr 2021 17:03:11 +0200 Subject: [MEI-L] Survey on authority records for musical works in digital projects (NFDI4Culture) Message-ID: <39ae507e-62c7-9eed-05c6-470c15011bb5@mail.uni-paderborn.de> Dear colleagues, I would like to draw your attention to a survey, which deals with the use of authority records for musical works in digital projects (see forwarded mail below).
It would be great if as many of you as possible would participate in the survey so that my colleague Desiree Mayer can get a comprehensive overview. Many thanks in advance and best regards Kristina Dear colleagues, in Germany various consortia for research data infrastructure recently started their work, as you may have heard. Amongst them is NFDI4Culture, which is responsible for the subjects of architecture, art history, musicology, film, theater and media studies. As part of NFDI4Culture, I now ask you to support us in our work by filling out the following survey: https://werknormdaten.limesurvey.net/856948?lang=en With this survey, we want to determine whether and how authority records for musical works are used in digital musicological projects. At the same time, we want to use it to identify needs and gather user stories. The results of this survey serve as a working basis for Task Area 2, which takes care of standards, data quality and curation within NFDI4Culture. Furthermore, the results will be presented as part of a MiniCon at the GNDCon2.0. The survey contains 13 questions and will remain open until May 10th, 2021. It takes approximately 15 minutes to fill it out, and you may save your answers and resume later with a button on the upper right side. If you don’t want to submit your answers, please use the button „exit and clear survey“, which is also on the upper right side. If you have any questions, please feel free to contact me anytime. We look forward to your active participation in the survey! Best regards Desiree Mayer Dr. Desiree Mayer NFDI4Culture Sächsische Landesbibliothek - Staats- und Universitätsbibliothek Dresden (SLUB) Postadresse: 01054 Dresden Besucheradresse: Zellescher Weg 18, 01069 Dresden Tel.: +49 (0)351 4677751 E-Mail: desiree.mayer at slub-dresden.de -- Dr.
Kristina Richts-Matthaei NFDI4Culture – Culture Coordination Office Universität Paderborn Musikwissenschaftliches Seminar Detmold/Paderborn Hornsche Straße 39 D-32756 Detmold Tel.: +49 5231 975 665 -------------- next part -------------- An HTML attachment was scrubbed... URL: From andrew.hankinson at rism.digital Wed May 5 13:56:52 2021 From: andrew.hankinson at rism.digital (Andrew Hankinson) Date: Wed, 5 May 2021 13:56:52 +0200 Subject: [MEI-L] Rename the MEI master-branch In-Reply-To: <3CBD7E35-E5C9-4A00-9FA2-9107B0E0BCD8@gmail.com> References: <3CBD7E35-E5C9-4A00-9FA2-9107B0E0BCD8@gmail.com> Message-ID: Hello everyone, To follow up on this discussion, the community has decided on 'stable' as the appropriate replacement. This is just a 'heads-up' that I will be making the change today, so please update any of your tools, workflows, and other dependencies to reflect this change. Cheers, -Andrew > On 13 Apr 2021, at 16:46, Benjamin W. Bohl wrote: > > Dear Anna, > > Thanks for this valuable addition ;-) > > /Benni > >> On 13. Apr 2021, at 16:45, Kijas, Anna E wrote: >> >> Thank you, Benjamin for this! Here is some additional context for folks who may not be following these conversations, https://www.nytimes.com/2021/04/13/technology/racist-computer-engineering-terms-ietf.html. Also I’d like to share a guide created by several of my colleagues at the Association for Computers and the Humanities - https://ach.org/toward-anti-racist-technical-terminology/ - which addresses racist technical terminology. We also have an open bibliography on Zotero for Inclusive Technology - https://www.zotero.org/groups/2554430/ach_inclusive_technology. >> >> Best, >> Anna >> >> Please note: Lilly Music Library hours and additional details can be viewed at https://tischlibrary.tufts.edu/use-library/lilly-music-library. Updates about library services can be found at https://tischlibrary.tufts.edu/about-us/news/2020-03-16-9900.
All instruction, meetings, and consultations will be conducted over Zoom. >> >> Anna E. Kijas >> Head, Lilly Music Library >> Granoff Music Center >> Tufts University >> 20 Talbot Avenue, Medford, MA 02155 >> Pronouns: she, her, hers >> Book an appointment | (617) 627-2846 >> >> From: mei-l on behalf of "Benjamin W. Bohl" >> Reply-To: Music Encoding Initiative >> Date: Tuesday, April 13, 2021 at 10:32 AM >> To: MEI-L >> Subject: [MEI-L] Rename the MEI master-branch >> >> Dear MEI Community, >> >> following a suggestion by the Software Freedom Conservancy, GitHub renamed their master branch to main in order to avoid potentially offensive vocabulary or allusions to slavery. >> >> MEI would like to follow this lead and rename the master branch of https://github.com/music-encoding/music-encoding and other repositories where applicable. Following the discussion on GitHub (https://github.com/music-encoding/music-encoding/issues/776), the Technical Team set up this poll to take in the community's votes on a closed list of potential new names for our current master branch, used to disseminate tagged versions (e.g. MEI 3.0.0, MEI 4.0.0, MEI 4.0.1). >> >> Please cast your vote until 2021-04-28 using the form available at: >> https://abstimmung.dfn.de/tNOBDWgWAFtVz6lr >> >> On behalf of the MEI Board and Technical Team, >> Benjamin W.
Bohl >> MEI Technical Co-chair >> >> _______________________________________________ >> mei-l mailing list >> mei-l at lists.uni-paderborn.de >> https://lists.uni-paderborn.de/mailman/listinfo/mei-l > > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l From stadler at edirom.de Fri May 7 12:46:21 2021 From: stadler at edirom.de (Peter Stadler) Date: Fri, 7 May 2021 12:46:21 +0200 Subject: [MEI-L] Next MerMEId community call on Monday, May 10 at 1:30PM UTC Message-ID: <1743BB16-F426-47D9-AC47-F2B8D0C5FBFE@edirom.de> Dear all, this is just a brief reminder and invitation to our upcoming MerMEId community meeting on next Monday (May 10) at 1:30PM (UTC https://time.is/UTC). We will discuss pull requests and issues collected at https://github.com/Edirom/MerMEId but we welcome any newcomers to share their MerMEId story, raise questions, or just enjoy the meeting. Stay safe and have a great weekend Peter -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: Message signed with OpenPGP URL: From stadler at edirom.de Fri May 7 12:48:14 2021 From: stadler at edirom.de (Peter Stadler) Date: Fri, 7 May 2021 12:48:14 +0200 Subject: [MEI-L] Next MerMEId community call on Monday, May 10 at 1:30PM UTC In-Reply-To: <1743BB16-F426-47D9-AC47-F2B8D0C5FBFE@edirom.de> References: <1743BB16-F426-47D9-AC47-F2B8D0C5FBFE@edirom.de> Message-ID: <1291B0DB-E54D-4A0B-BC8D-41AAD7890470@edirom.de> Well, I should add that we’ll meet via Zoom: https://uni-paderborn-de.zoom.us/j/93200725368?pwd=RG93T1JTemRQY2dadGIrOTRVeTlGQT09 Meeting-ID: 932 0072 5368 Code: 841840 Cheers Peter > On 07.05.2021 at 12:46, Peter Stadler wrote: > > Signed PGP part > Dear all, > > this is just a brief reminder and invitation to our upcoming MerMEId community meeting on next Monday (May 10) at 1:30PM (UTC https://time.is/UTC). > We will discuss pull requests and issues collected at https://github.com/Edirom/MerMEId but we welcome any newcomers to share their MerMEId story, raise questions, or just enjoy the meeting. > > Stay safe > and have a great weekend > Peter > > -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: Message signed with OpenPGP URL: From lxpugin at gmail.com Tue May 11 11:39:26 2021 From: lxpugin at gmail.com (Laurent Pugin) Date: Tue, 11 May 2021 11:39:26 +0200 Subject: [MEI-L] Verovio reference book Message-ID: Dear all, The latest release of Verovio (3.4) comes with a new reference book that gathers all the documentation for the project. This book is a collaborative work that we will continuously improve and adjust when new features are added to Verovio or when additional documentation appears to be desirable. Some of the sections of the book are still in preparation.
The book contains: - an introduction to Verovio and the history of the project, as well as an overview of how it can be used, - some tutorials (still partially in preparation) on how to use it, starting with the very basics and ending at advanced topics in notation, - some advanced topics with in-depth explanations of Verovio specifics, - a toolkit reference with input and output formats as well as documentation for all methods and options, - some instructions on how to install it or how to build it from source, - some guidelines for contributing to the project. This online book is also available as a PDF for users who prefer to have it in this format. We welcome feedback and contributions. These can be made directly from the book through the “Edit page” button that leads to the online GitHub editor where the code of the page can be edited. The source code of the book and instructions for more advanced contributions are available here. All the best, Laurent -------------- next part -------------- An HTML attachment was scrubbed... URL: From kepper at edirom.de Thu May 27 17:01:14 2021 From: kepper at edirom.de (Johannes Kepper) Date: Thu, 27 May 2021 17:01:14 +0200 Subject: [MEI-L] ODD Friday tomorrow Message-ID: <48AD610C-2151-4329-B02A-8876F31D69D1@edirom.de> Dear all, Tomorrow is another ODD Friday for MEI. If you’d like to peek into discussions about the technical development of MEI, you’re more than welcome to join the meeting at 2pm European time (8am EDT) at https://us02web.zoom.us/j/83097885923?pwd=NTZvTXh1S2E1MkdNdi9tV3FKWVpMQT09 Meeting-ID: 830 9788 5923 Kenncode: MEI There is a preliminary agenda available at https://github.com/orgs/music-encoding/projects/2#column-14084321 – feel free to add to that list if you’d like to see a topic discussed. All best, jo -------------- next part -------------- A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: Message signed with OpenPGP URL: From drizo at dlsi.ua.es Fri May 28 11:21:07 2021 From: drizo at dlsi.ua.es (David Rizo Valero) Date: Fri, 28 May 2021 11:21:07 +0200 Subject: [MEI-L] MEC'21 Message-ID: Dear MEI-L, We are pleased to announce that REGISTRATION for MEC 2021 in Alicante / Online is now open at https://music-encoding.org/conference/2021/register. Recall that at least one author of each accepted submission must register for the conference. You can find information about the workshops you can enroll in at https://music-encoding.org/conference/2021/program/. In the following days the complete program will be published. We are proud to announce our keynote speakers: • Pip Willcox, Head of Research, The National Archives (UK) • Álvaro Torrente, Professor of Music History, Universidad Complutense de Madrid, and Director, Instituto Complutense de Ciencias Musicales, Spain (Didone Project) - https://didone.eu Although the conference mode remains hybrid, most presenters will join only online, so the entire scientific program has been planned for this mode. Nonetheless, some attendees have confirmed their physical attendance, and we are happy to welcome and host anyone who can travel here to Alicante. For current travel procedures visit https://music-encoding.org/conference/2021/travel/. For information about COVID-19 protection measures in Alicante, please check https://music-encoding.org/conference/2021/covid-19/. Looking forward to seeing you either online or physically. For the program and organizing committees, Stefan Münnich David Rizo -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed...
Name: Firma_David-1.png Type: image/png Size: 8433 bytes Desc: not available URL: From Anna.Kijas at tufts.edu Tue Jun 1 18:59:23 2021 From: Anna.Kijas at tufts.edu (Kijas, Anna E) Date: Tue, 1 Jun 2021 16:59:23 +0000 Subject: [MEI-L] Call for Content: Pedagogy & Praxis Resources Message-ID: <4EE5E983-3B8E-4D6D-BFA0-88D5189C8439@tufts.edu> Hello All, I’m happy to announce that the Pedagogy Interest Group has launched Community-Created Pedagogy & Praxis Resources, an effort that aims to encourage practitioners to share and submit already-created resources that they use for teaching and projects focused on music encoding. All of the resources will be uploaded to Humanities Commons, where they will be preserved and can be easily shared or cited with a unique DOI. Please visit this new page where you’ll find pertinent details and consider submitting your resources! If you have questions or need assistance, please reach out to the Pedagogy Interest Group Administrative Co-Chairs, Anna Kijas and Joy H. Calico. Best, Anna Anna E. Kijas Head, Lilly Music Library Granoff Music Center Tufts University 20 Talbot Avenue, Medford, MA 02155 Pronouns: she, her, hers Book an appointment | (617) 627-2846 Please note: Lilly Music Library hours and additional details can be viewed at https://tischlibrary.tufts.edu/use-library/lilly-music-library. Updates about library services can be found at https://tischlibrary.tufts.edu/about-us/news/2020-03-16-9900. All instruction, meetings, and consultations will be conducted over Zoom. -------------- next part -------------- An HTML attachment was scrubbed... URL: From stadler at edirom.de Mon Jun 7 07:56:22 2021 From: stadler at edirom.de (Peter Stadler) Date: Mon, 7 Jun 2021 07:56:22 +0200 Subject: [MEI-L] Today's MerMEId community meeting Message-ID: <188604C8-6D8F-461A-86DA-C88BD2DFBF96@edirom.de> Dear all, this is a late yet friendly reminder for today’s MerMEId community meeting at 3:30PM (CEST), ~10 hours from now. 
We will meet via our established Zoom channel: https://uni-paderborn-de.zoom.us/j/93200725368?pwd=RG93T1JTemRQY2dadGIrOTRVeTlGQT09 Meeting-ID: 932 0072 5368 Code: 841840 Other important MerMEId URLs GitHub repo: https://github.com/Edirom/MerMEId/ (see Wiki for community meeting minutes) Sandbox: https://mermeid.edirom.de/index.html Slack: #mermeid at https://music-encoding.slack.com Cheers Peter -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: Message signed with OpenPGP URL: From melottejoseph2 at gmail.com Mon Jun 7 08:06:51 2021 From: melottejoseph2 at gmail.com (Joseph Melotte) Date: Mon, 7 Jun 2021 08:06:51 +0200 Subject: [MEI-L] About what ? Re: Today's MerMEId community meeting Message-ID: Dear, Could you remind me what this MerMEId community is about and what it does? Best, Melotte Joseph On Monday, 7 June 2021, Peter Stadler wrote: > Dear all, > > this is a late yet friendly reminder for today’s MerMEId community meeting > at 3:30PM (CEST) ~10 hours from now. > > We will meet via our established Zoom channel: > https://uni-paderborn-de.zoom.us/j/93200725368?pwd= > RG93T1JTemRQY2dadGIrOTRVeTlGQT09 > Meeting-ID: 932 0072 5368 > Code: 841840 > > Other important MerMEId URLs > GitHub repo: https://github.com/Edirom/MerMEId/ (see Wiki for community > meeting minutes) > Sandbox: https://mermeid.edirom.de/index.html > Slack: #mermeid at https://music-encoding.slack.com > > > Cheers > Peter > > > > -- kind regards, sincerely, Melotte Joseph -------------- next part -------------- An HTML attachment was scrubbed... URL: From stadler at edirom.de Mon Jun 7 09:27:53 2021 From: stadler at edirom.de (Peter Stadler) Date: Mon, 7 Jun 2021 09:27:53 +0200 Subject: [MEI-L] About what ? 
Re: Today's MerMEId community meeting In-Reply-To: References: Message-ID: Hi Melotte Joseph, apologies for sending this out to MEI-L, as it may be of interest only to a sub-community of the greater MEI community. MerMEId is a “Metadata Editor and Repository for MEI Data” originally developed by the Royal Danish Library in Copenhagen and now maintained by MEI enthusiasts. During the meeting we will discuss pull requests and issues collected at https://github.com/Edirom/MerMEId but we welcome any newcomers to share their MerMEId story, raise questions, or just enjoy the meeting. Cheers Peter > On 07.06.2021 at 08:06, Joseph Melotte wrote: > > Dear, > > Could you remind me what this MerMEId community is about and what it does? > > Best, > > Melotte Joseph > > > On Monday, 7 June 2021, Peter Stadler wrote: > Dear all, > > this is a late yet friendly reminder for today’s MerMEId community meeting at 3:30PM (CEST) ~10 hours from now. > > We will meet via our established Zoom channel: > https://uni-paderborn-de.zoom.us/j/93200725368?pwd=RG93T1JTemRQY2dadGIrOTRVeTlGQT09 > Meeting-ID: 932 0072 5368 > Code: 841840 > > Other important MerMEId URLs > GitHub repo: https://github.com/Edirom/MerMEId/ (see Wiki for community meeting minutes) > Sandbox: https://mermeid.edirom.de/index.html > Slack: #mermeid at https://music-encoding.slack.com > > Cheers > Peter > > > > > > -- > kind regards, > sincerely, > > Melotte Joseph > > > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: Message signed with OpenPGP URL: From drizo at dlsi.ua.es Tue Jun 15 18:16:19 2021 From: drizo at dlsi.ua.es (David Rizo Valero) Date: Tue, 15 Jun 2021 18:16:19 +0200 Subject: [MEI-L] [MEC’21] Conference program uploaded Message-ID: <76184C81-2D04-42E4-B851-26DF417DCCF1@dlsi.ua.es> Dear MEI-L, The conference program has just been uploaded to https://music-encoding.org/conference/2021/program/. As mentioned in previous messages, although the conference mode remains hybrid, most presenters will join only online, so the entire scientific program has been planned for this mode using a mixture of teleconferencing tools (Zoom), social virtual spaces (wonder.me), and at least one Slack Townhall channel for asynchronous communication (wonder.me works only in the Google Chrome and Microsoft Edge browsers). Nonetheless, some attendees have confirmed their physical attendance, and we are happy to welcome and host anyone who can travel here to Alicante. Accordingly, EARLY BIRD REGISTRATION has been EXTENDED to June 21st (https://music-encoding.org/conference/2021/register/). For current travel procedures visit https://music-encoding.org/conference/2021/travel/. For information about COVID-19 protection measures in Alicante, please check https://music-encoding.org/conference/2021/covid-19/. Looking forward to seeing you either online or in person. For the program and organizing committees, Stefan Münnich David Rizo -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: Firma_David-1.png Type: image/png Size: 8433 bytes Desc: not available URL: From claire.arthur81 at gmail.com Wed Jun 16 21:00:20 2021 From: claire.arthur81 at gmail.com (Claire Arthur) Date: Wed, 16 Jun 2021 15:00:20 -0400 Subject: [MEI-L] Digital Libraries for Musicology (DLfM) 2021 -- Registration Now Open Message-ID: DLfM 2021 -- Registration Now Open Registration is now open for the 8th International Conference on Digital Libraries for Musicology. The conference will be held from July 28th to July 30th in association with the International Association for Music Libraries (IAML), and will be entirely online. Thanks to generous support from Goldsmiths, University of London, the Georgia Institute of Technology, and the Royal Musical Association, we are pleased to announce free registration for all participants and attendees. Advance registration is mandatory in order to receive links to conference events. Please visit the link below to register: https://www.eventbrite.co.uk/e/8th-international-conference-on-digital-libraries-for-musicology-dlfm-21-tickets-156092070585 We are pleased to announce that our preliminary conference schedule is now available: https://dlfm.web.ox.ac.uk/2021-programme For more details, please visit our conference website at https://dlfm.web.ox.ac.uk ------------------------------ Claire Arthur Assistant Professor, School of Music College of Design Georgia Institute of Technology claire.arthur[at]gatech.edu -------------- next part -------------- An HTML attachment was scrubbed... URL: From T.Crawford at gold.ac.uk Fri Jun 18 12:53:09 2021 From: T.Crawford at gold.ac.uk (Tim Crawford) Date: Fri, 18 Jun 2021 10:53:09 +0000 Subject: [MEI-L] TROMPA project has MEI at its core Message-ID: <79E4B5CD-C805-4C79-A50B-BFCA8AA267FD@gold.ac.uk> Dear MEI community, This is a blatant plug for TROMPA, but also may be a case of ‘preaching to the choir’ as we quaint English say. 
You may not all be aware of this, but the 3-year TROMPA EU project (ended 30 April 2021) - which had its final review this Wednesday, 16 June - depended in many crucial aspects on the adoption of MEI at an early stage (after a very brief format-tussle). This was largely due to our successful recruitment of David Weigl, one of the developers of MELD (Music Encoding and Linked Data), which also supports some crucial components of TROMPA. https://trompamusic.eu TROMPA got a resoundingly positive review assessment, I’m glad to say, and we’re intending to find time to write this up soon as some kind of ’Success Story’, which will embed heavy stress on the importance of MEI, and *not just as something for musicologists*. Meanwhile, however, there will be various TROMPA-related papers and posters at both DLfM and MEC, and some of the MEI-related technology developed or enhanced within TROMPA is already being used within other projects. All best wishes for MEI and the TROMPA tools! (Now I must get back to thinking about tablature again!) And finally, Thank You Perry for making this possible!! Tim Prof. Tim Crawford Department of Computing Goldsmiths College London SE14 6NW U.K. TROMPA UK Principal Investigator [cid:5ACEF57E-55F6-4351-96B6-51CDF503E2F1] https://trompamusic.eu -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: top-bar-logo_0_0.png Type: image/png Size: 4467 bytes Desc: top-bar-logo_0_0.png URL: From pdr4h at virginia.edu Sat Jun 19 00:24:12 2021 From: pdr4h at virginia.edu (Roland, Perry D (pdr4h)) Date: Fri, 18 Jun 2021 22:24:12 +0000 Subject: [MEI-L] TROMPA project has MEI at its core In-Reply-To: <79E4B5CD-C805-4C79-A50B-BFCA8AA267FD@gold.ac.uk> References: <79E4B5CD-C805-4C79-A50B-BFCA8AA267FD@gold.ac.uk> Message-ID: Hi Tim, Congratulations to you and the entire TROMPA team for successful completion of the grant. 
I look forward to hearing more about the project process and deliverables in the presentations at MEC and in your “how-we-done-it-good” project wrap-up. You’re very gracious to include me in your acknowledgments. I’m pleased that MEI has provided the foundation for so many projects. While the original idea for it was mine, I didn’t create MEI alone. Many, many people – including my friends at Goldsmiths! – had a hand in making MEI the success it is becoming. I thank everyone who has contributed to MEI and look forward to working together with you – and our MEI future colleagues – to make it better. I won’t be in Alicante, but I’ll see everyone virtually. Let’s hope the recent Covid unpleasantness will be over and we can meet in person next year. Best wishes, -- p. _________________________ Perry Roland Metadata Operations Librarian University of Virginia Library 2450 Old Ivy Rd. Charlottesville, VA 22903 434-982-2702 (w) pdr4h (at) virginia (dot) edu From: mei-l On Behalf Of Tim Crawford Sent: Friday, June 18, 2021 6:53 AM To: Music Encoding Initiative Subject: [MEI-L] TROMPA project has MEI at its core Dear MEI community, This is a blatant plug for TROMPA, but also may be a case of ‘preaching to the choir’ as we quaint English say. You may not all be aware of this, but the 3-year TROMPA EU project (ended 30 April 2021) - which had its final review this Wednesday, 16 June - depended in many crucial aspects on the adoption of MEI at an early stage (after a very brief format-tussle). This was largely due to our successful recruitment of David Weigl, one of the developers of MELD (Music Encoding and Linked Data), which also supports some crucial components of TROMPA. https://trompamusic.eu TROMPA got a resoundingly positive review assessment, I’m glad to say, and we’re intending to find time to write this up soon as some kind of ’Success Story’, which will embed heavy stress on the importance of MEI, and *not just as something for musicologists*. 
Meanwhile, however, there will be various TROMPA-related papers and posters at both DLfM and MEC, and some of the MEI-related technology developed or enhanced within TROMPA is already being used within other projects. All best wishes for MEI and the TROMPA tools! (Now I must get back to thinking about tablature again!) And finally, Thank You Perry for making this possible!! Tim Prof. Tim Crawford Department of Computing Goldsmiths College London SE14 6NW U.K. TROMPA UK Principal Investigator [cid:image001.png at 01D76468.5CA7B4B0] https://trompamusic.eu -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 4467 bytes Desc: image001.png URL: From stefan.muennich at unibas.ch Thu Jun 24 10:21:42 2021 From: stefan.muennich at unibas.ch (Stefan Münnich) Date: Thu, 24 Jun 2021 08:21:42 +0000 Subject: [MEI-L] Proposed changes to MEI By-Laws Message-ID: <75ed1db002024a6fa79b7f1abf01b720@unibas.ch> Dear MEI community members, The MEI Board has agreed to propose some changes to the MEI By-laws. Focal changes are the explicit accommodation of any type of music notation, not only Western, in section 2 ("Purpose"), the clarification of Personal and Institutional membership in section 3 ("Membership"), and the explicit integration of appointed offices of the Board and a fine-tuning of the election procedure. Other changes concern minor corrections to wording. 
Options to view the proposed changes include: 1) a compiled overview of the changes in a Google Doc (https://docs.google.com/document/d/1Kg46Nw-D8Pe7dj341-NH-9UaFy5Ksw7tlJ7x_syqvug/edit?usp=sharing) 2) a "diff" in the dedicated Pull Request on GitHub (https://github.com/music-encoding/music-encoding.github.io/pull/234/files) According to the By-Laws, any proposed amendments must be approved by the MEI community, following this procedure: "Changes to these by-laws become proposed amendments with a simple majority of the complete Board membership. Proposed amendments will be published on the MEI mailing list (MEI-L) and the MEI website. After a minimum 10-day comment period, the Board will discuss comments on the proposed amendments received from the MEI membership and incorporate changes agreed to by a simple majority of the Board. The agreed upon version of the amended by-laws will be presented to the MEI membership for voting over a minimum 7-day period. The amended version of the by-laws must pass with a two-thirds (2/3) majority of votes in order to become effective." (https://music-encoding.org/community/mei-by-laws.html#10-amendments) The Board invites you to provide any additional comments you have to the Google Doc or the Pull Request above. Any comments before July 5th (23:59:59, UTC-11) will be considered for final discussion within the Board. The agreed-upon version of the amended By-laws will be put to a vote over a minimum 7-day period before MEC2021. The voting results will be presented during the Community Meeting at MEC2021. Thank you, Stefan -------------- next part -------------- An HTML attachment was scrubbed... URL: From fabianmoss at gmail.com Mon Jun 28 12:48:30 2021 From: fabianmoss at gmail.com (Fabian Moss) Date: Mon, 28 Jun 2021 12:48:30 +0200 Subject: [MEI-L] Job Announcement Message-ID: [Apologies for cross-posting] Dear colleagues, please share the following job offer widely. 
For the project “Enabling interactive music visualization for a wider community” at École Polytechnique Fédérale de Lausanne (EPFL, Switzerland), funded by the UNIL-EPFL dhCenter, we are looking for a Front-end Web Developer. The envisaged web app will provide an interface to upload music files for interactive visualizations and explorations and the functionality to analyze collections of notes using the discrete Fourier transform. Remote working is possible and the candidate will be supervised by EPFL researchers Dr. Fabian Moss and Dr. Daniel Harasim. The candidate should be confident with modern web technologies, in particular JavaScript frameworks such as Vue/React/Angular, and visualization libraries such as Vega or D3. Specific musical knowledge is not required but would be an asset. Experience with collaborative projects using Git/GitHub is required. Please submit a brief CV (at most 2 pages) that highlights your prior work experience and provides links to earlier (music) visualization projects no later than July 15, 2021. Depending on the number of applicants we will conduct brief online interviews in the week from July 19 to July 23. The decision for a candidate will be taken by the end of July. The start date is negotiable and the development of the app is expected to be finished by the end of October. 
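[Editor's note: the posting's "analyze collections of notes using the discrete Fourier transform" usually refers, in this line of research, to taking the DFT of a 12-bin pitch-class vector, whose coefficient magnitudes summarize properties of a chord or scale. The sketch below is my own illustration of that idea, not project code; the function names are invented.]

```python
import cmath

def pc_vector(pitch_classes):
    """12-bin pitch-class histogram (0 = C, 1 = C#, ..., 11 = B)."""
    v = [0] * 12
    for pc in pitch_classes:
        v[pc % 12] += 1
    return v

def dft_magnitudes(vector):
    """Magnitudes of DFT coefficients k = 0..n/2 of a pitch-class vector."""
    n = len(vector)
    return [abs(sum(x * cmath.exp(-2j * cmath.pi * k * j / n)
                    for j, x in enumerate(vector)))
            for k in range(n // 2 + 1)]

# C major triad (C, E, G): coefficient 0 is simply the note count.
triad = dft_magnitudes(pc_vector([0, 4, 7]))
# A full chromatic cluster is "flat": every non-zero coefficient vanishes.
cluster = dft_magnitudes([1] * 12)
```

Only coefficients 0 through 6 are kept because the input vector is real, so the remaining coefficients mirror these magnitudes.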
In Brief:

What you will work on:
- interactive music-visualization project
- visualizing music in MIDI format using the discrete Fourier transform

Required skills:
- demonstrable experience with front-end development projects
- profound knowledge of at least one JS framework such as Vue / React / Angular
- familiarity with Git & GitHub
- proficiency with a JS visualization library such as Vega / D3

Nice to have:
- musical background
- basic understanding of the discrete Fourier transform
- hands-on experience with Python for data science
- experience with MIDI in the browser

Start and duration:
- negotiable
- earliest start date: August 1, 2021
- latest end date: October 31, 2021

Payment:
- 2880 CHF for the development of the whole application

We are looking forward to receiving your application by email. Don’t hesitate to contact us for any further information. Fabian Moss (fabian.moss at epfl.ch) Daniel Harasim (daniel.harasim at epfl.ch) LINK: https://www.epfl.ch/labs/dcml/open-positions/other-positions/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From andrew.cashner at rochester.edu Thu Jul 8 15:01:06 2021 From: andrew.cashner at rochester.edu (Cashner, Andrew) Date: Thu, 8 Jul 2021 13:01:06 +0000 Subject: [MEI-L] showing change of mensuration in new section Message-ID: Dear colleagues, I cannot get changes of mensuration to show at the beginning of new sections. I am encoding 17th-century music in mensural style, putting <staff> and <layer> directly within
<section> elements, and not using <measure> at all. I’m rendering with Verovio. If I put the mensuration change inside a <scoreDef> within the <section>
, no new mensuration shows at all. If I put <mensur> and <proport> elements directly into the <layer>, then it does show, but the proportion number is placed incorrectly (after or overlapping the first note). Example 1 below shows the scoreDef approach, and example 2 shows the layer approach. The first mensuration signature in this example should be "C3" and the second should be "cutC 3". I will be grateful for any help you can provide, and my apologies if there is something I missed in the documentation. Best, Andrew Example 1: <?xml version="1.0" encoding="UTF-8"?> <mei xmlns=https://www.music-encoding.org/ns/mei meiversion="4.0.1"> <meiHead> <fileDesc> <titleStmt> <title /> </titleStmt> <pubStmt /> </fileDesc> <encodingDesc /> </meiHead> <music> <body> <mdiv> <score> <scoreDef> <staffGrp xml:id="chorus" n="1" bar.thru="false" symbol="bracket"> <staffDef xml:id="soprano" n="1" lines="5" clef.line="2" clef.shape="G" mensur.sign="C" mensur.tempus="2" proport.num="3" key.sig="0" /> <staffDef xml:id="bass" n="4" lines="5" clef.line="4" clef.shape="F" xml:id="midi.P4" mensur.sign="C" mensur.tempus="2" proport.num="3" key.sig="0" /> </staffGrp> </scoreDef> <section> <staff xml:id="soprano" n="1"> <layer n="1"> <note pname="a" oct="4" dur="2" /> <note pname="a" oct="4" dur="2" /> <note pname="b" oct="4" dur="2" accid="f" /> <note pname="a" oct="4" dur="1" /> <note pname="a" oct="4" dur="2" /> <barLine form="dbl"/> </layer> </staff> <staff xml:id="bass" n="4"> <layer n="1"> <note pname="d" oct="3" dur="2" /> <note pname="d" oct="3" dur="2" /> <note pname="g" oct="3" dur="2" /> <note pname="a" oct="3" dur="1" /> <note pname="d" oct="3" dur="2" /> <barLine form="dbl"/> </layer> </staff> </section> <section> <scoreDef> <staffGrp xml:id="chorus" n="1"> <staffDef xml:id="soprano" n="1" mensur.sign="C" slash="1" tempus="2" proport.num="3" /> <staffDef xml:id="alto" n="2" mensur.sign="C" slash="1" tempus="2" proport.num="3" /> <staffDef xml:id="tenor" n="3" mensur.sign="C" slash="1" tempus="2" proport.num="3" /> <staffDef xml:id="bass" n="4" mensur.sign="C" slash="1" tempus="2" proport.num="3" /> </staffGrp> </scoreDef> <staff xml:id="soprano" n="1"> 
<layer n="1"> <note pname="a" oct="4" dur="1" dots="1" /> <note pname="a" oct="4" dur="2" /> <note pname="g" oct="4" dur="1" /> <note pname="a" oct="4" dur="breve" /> <note pname="a" oct="4" dur="1" /> <barLine form="end"/> </layer> </staff> <staff xml:id="bass" n="4"> <layer n="1"> <note pname="d" oct="3" dur="1" dots="1" /> <note pname="c" oct="3" dur="2" accid="s" /> <note pname="b" oct="2" dur="1" accid="f" /> <note pname="a" oct="2" dur="breve" /> <note pname="d" oct="3" dur="1" /> <barLine form="end"/> </layer> </staff> </section> </score> </mdiv> </body> </music> </mei> Example 2: <?xml version="1.0" encoding="UTF-8"?> <mei xmlns=https://www.music-encoding.org/ns/mei meiversion="4.0.1"> <meiHead> <fileDesc> <titleStmt> <title /> </titleStmt> <pubStmt /> </fileDesc> <encodingDesc /> </meiHead> <music> <body> <mdiv> <score> <scoreDef> <staffGrp xml:id="chorus" n="1" bar.thru="false" symbol="bracket"> <staffDef xml:id="soprano" n="1" lines="5" clef.line="2" clef.shape="G" mensur.sign="C" mensur.tempus="2" proport.num="3" key.sig="0" /> <staffDef xml:id="bass" n="4" lines="5" clef.line="4" clef.shape="F" xml:id="midi.P4" mensur.sign="C" mensur.tempus="2" proport.num="3" key.sig="0" /> </staffGrp> </scoreDef> <section> <staff xml:id="soprano" n="1"> <layer n="1"> <note pname="a" oct="4" dur="2" /> <note pname="a" oct="4" dur="2" /> <note pname="b" oct="4" dur="2" accid="f" /> <note pname="a" oct="4" dur="1" /> <note pname="a" oct="4" dur="2" /> <barLine form="dbl"/> </layer> </staff> <staff xml:id="bass" n="4"> <layer n="1"> <note pname="d" oct="3" dur="2" /> <note pname="d" oct="3" dur="2" /> <note pname="g" oct="3" dur="2" /> <note pname="a" oct="3" dur="1" /> <note pname="d" oct="3" dur="2" /> <barLine form="dbl"/> </layer> </staff> </section> <section> <staff xml:id="soprano" n="1"> <layer n="1"> <mensur sign="C" slash="1" tempus="2"/> <proport num="3"/> <note pname="a" oct="4" dur="1" dots="1" /> <note pname="a" oct="4" dur="2" /> <note pname="g" oct="4" 
dur="1" /> <note pname="a" oct="4" dur="breve" /> <note pname="a" oct="4" dur="1" /> <barLine form="end"/> </layer> </staff> <staff xml:id="bass" n="4"> <layer n="1"> <mensur sign="C" slash="1" tempus="2"/> <proport num="3"/> <note pname="d" oct="3" dur="1" dots="1" /> <note pname="c" oct="3" dur="2" accid="s" /> <note pname="b" oct="2" dur="1" accid="f" /> <note pname="a" oct="2" dur="breve" /> <note pname="d" oct="3" dur="1" /> <barLine form="end"/> </layer> </staff> </section> </score> </mdiv> </body> </music> </mei> *** Andrew A. Cashner, PhD Assistant professor of music, University of Rochester -------------- next part -------------- An HTML attachment was scrubbed... URL: <http://lists.uni-paderborn.de/pipermail/mei-l/attachments/20210708/3737e2a6/attachment.htm> From klaus.rettinghaus at gmail.com Thu Jul 8 15:29:06 2021 From: klaus.rettinghaus at gmail.com (Klaus Rettinghaus) Date: Thu, 8 Jul 2021 15:29:06 +0200 Subject: [MEI-L] showing change of mensuration in new section In-Reply-To: <CH2PR07MB66773FEE3CFFA8209D3A1CCB86199@CH2PR07MB6677.namprd07.prod.outlook.com> References: <CH2PR07MB66773FEE3CFFA8209D3A1CCB86199@CH2PR07MB6677.namprd07.prod.outlook.com> Message-ID: <0543F91F-8D8A-4456-B07D-16B8E5942A1A@gmail.com> Dear Andrew, maybe this question is better to address directly to the Verovio developers than the entire community? Best, Klaus > Am 08.07.2021 um 15:01 schrieb Cashner, Andrew <andrew.cashner at rochester.edu>: > > Dear colleagues, > > I cannot get changes of mensuration to show at the beginning of new sections. I am encoding 17th-century music in mensural style, putting <staff> and <layer> directly within <section> elements, and not using <measure> at all. I’m rendering with Verovio. > > If I put the mensuration change inside a <scoreDef> within the <section>, no new mensuration shows at all. 
If I put <mensur> and <proport> elements directly into the <layer>, then it does show, but the proportion number is placed incorrectly (after or overlapping the first note). Example 1 below shows the scoreDef approach, and example 2 shows the layer approach. The first mensuration signature in this example should be “C3” and the second should be “cutC 3”. > > I will be grateful for any help you can provide, and my apologies if there is something I missed in the documentation. > > Best, > Andrew > > Example 1: > <?xml version="1.0" encoding="UTF-8"?> > <mei xmlns=https://www.music-encoding.org/ns/mei <https://www.music-encoding.org/ns/mei> meiversion="4.0.1"> > <meiHead> > <fileDesc> > <titleStmt> > <title /> > </titleStmt> > <pubStmt /> > </fileDesc> > <encodingDesc /> > </meiHead> > <music> > <body> > <mdiv> > <score> > <scoreDef> > <staffGrp xml:id="chorus" n="1" bar.thru="false" symbol="bracket"> > <staffDef xml:id="soprano" n="1" > lines="5" clef.line="2" clef.shape="G" > mensur.sign="C" mensur.tempus="2" proport.num="3" key.sig="0" /> > <staffDef xml:id="bass" n="4" > lines="5" clef.line="4" clef.shape="F" xml:id="midi.P4" > mensur.sign="C" mensur.tempus="2" proport.num="3" key.sig="0" /> > </staffGrp> > </scoreDef> > <section> > <staff xml:id="soprano" n="1"> > <layer n="1"> > <note pname="a" oct="4" dur="2" /> > <note pname="a" oct="4" dur="2" /> > <note pname="b" oct="4" dur="2" accid="f" /> > <note pname="a" oct="4" dur="1" /> > <note pname="a" oct="4" dur="2" /> > <barLine form="dbl"/> > </layer> > </staff> > <staff xml:id="bass" n="4"> > <layer n="1"> > <note pname="d" oct="3" dur="2" /> > <note pname="d" oct="3" dur="2" /> > <note pname="g" oct="3" dur="2" /> > <note pname="a" oct="3" dur="1" /> > <note pname="d" oct="3" dur="2" /> > <barLine form="dbl"/> > </layer> > </staff> > </section> > <section> > <scoreDef> > <staffGrp xml:id="chorus" n="1"> > <staffDef xml:id="soprano" n="1" > mensur.sign="C" slash="1" tempus="2" proport.num="3" /> > <staffDef 
xml:id="alto" n="2" > mensur.sign="C" slash="1" tempus="2" proport.num="3" /> > <staffDef xml:id="tenor" n="3" > mensur.sign="C" slash="1" tempus="2" proport.num="3" /> > <staffDef xml:id="bass" n="4" > mensur.sign="C" slash="1" tempus="2" proport.num="3" /> > </staffGrp> > </scoreDef> > <staff xml:id="soprano" n="1"> > <layer n="1"> > <note pname="a" oct="4" dur="1" dots="1" /> > <note pname="a" oct="4" dur="2" /> > <note pname="g" oct="4" dur="1" /> > <note pname="a" oct="4" dur="breve" /> > <note pname="a" oct="4" dur="1" /> > <barLine form="end"/> > </layer> > </staff> > <staff xml:id="bass" n="4"> > <layer n="1"> > <note pname="d" oct="3" dur="1" dots="1" /> > <note pname="c" oct="3" dur="2" accid="s" /> > <note pname="b" oct="2" dur="1" accid="f" /> > <note pname="a" oct="2" dur="breve" /> > <note pname="d" oct="3" dur="1" /> > <barLine form="end"/> > </layer> > </staff> > </section> > </score> > </mdiv> > </body> > </music> > </mei> > > Example 2: > <?xml version="1.0" encoding="UTF-8"?> > <mei xmlns=https://www.music-encoding.org/ns/mei <https://www.music-encoding.org/ns/mei> meiversion="4.0.1"> > <meiHead> > <fileDesc> > <titleStmt> > <title /> > </titleStmt> > <pubStmt /> > </fileDesc> > <encodingDesc /> > </meiHead> > <music> > <body> > <mdiv> > <score> > <scoreDef> > <staffGrp xml:id="chorus" n="1" bar.thru="false" symbol="bracket"> > <staffDef xml:id="soprano" n="1" > lines="5" clef.line="2" clef.shape="G" > mensur.sign="C" mensur.tempus="2" proport.num="3" key.sig="0" /> > <staffDef xml:id="bass" n="4" > lines="5" clef.line="4" clef.shape="F" xml:id="midi.P4" > mensur.sign="C" mensur.tempus="2" proport.num="3" key.sig="0" /> > </staffGrp> > </scoreDef> > <section> > <staff xml:id="soprano" n="1"> > <layer n="1"> > <note pname="a" oct="4" dur="2" /> > <note pname="a" oct="4" dur="2" /> > <note pname="b" oct="4" dur="2" accid="f" /> > <note pname="a" oct="4" dur="1" /> > <note pname="a" oct="4" dur="2" /> > <barLine form="dbl"/> > </layer> > </staff> > 
<staff xml:id="bass" n="4"> > <layer n="1"> > <note pname="d" oct="3" dur="2" /> > <note pname="d" oct="3" dur="2" /> > <note pname="g" oct="3" dur="2" /> > <note pname="a" oct="3" dur="1" /> > <note pname="d" oct="3" dur="2" /> > <barLine form="dbl"/> > </layer> > </staff> > </section> > <section> > <staff xml:id="soprano" n="1"> > <layer n="1"> > <mensur sign="C" slash="1" tempus="2"/> > <proport num="3"/> > <note pname="a" oct="4" dur="1" dots="1" /> > <note pname="a" oct="4" dur="2" /> > <note pname="g" oct="4" dur="1" /> > <note pname="a" oct="4" dur="breve" /> > <note pname="a" oct="4" dur="1" /> > <barLine form="end"/> > </layer> > </staff> > <staff xml:id="bass" n="4"> > <layer n="1"> > <mensur sign="C" slash="1" tempus="2"/> > <proport num="3"/> > <note pname="d" oct="3" dur="1" dots="1" /> > <note pname="c" oct="3" dur="2" accid="s" /> > <note pname="b" oct="2" dur="1" accid="f" /> > <note pname="a" oct="2" dur="breve" /> > <note pname="d" oct="3" dur="1" /> > <barLine form="end"/> > </layer> > </staff> > </section> > </score> > </mdiv> > </body> > </music> > </mei> > > *** > Andrew A. Cashner, PhD > Assistant professor of music, University of Rochester > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de <mailto:mei-l at lists.uni-paderborn.de> > https://lists.uni-paderborn.de/mailman/listinfo/mei-l <https://lists.uni-paderborn.de/mailman/listinfo/mei-l> -------------- next part -------------- An HTML attachment was scrubbed... 
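[Editor's note: for readers following this thread without an MEI toolchain at hand, the section-level <scoreDef> pattern under discussion can at least be sanity-checked for well-formedness and attribute placement with standard XML tooling; whether Verovio renders the mensuration change is a separate question for its developers. A minimal sketch using only Python's standard library — the fragment is abridged from Example 1, and the namespace URI and mensur.* attribute spellings are my reading of MEI 4, to be checked against the Guidelines:]

```python
import xml.etree.ElementTree as ET

# Abridged, single-staff version of the section-level <scoreDef> from
# Example 1: a mid-score mensuration change encoded as a new <scoreDef>
# at the start of a <section>, using mensur.* and proport.* attributes.
fragment = """
<section xmlns="http://www.music-encoding.org/ns/mei">
  <scoreDef>
    <staffGrp n="1">
      <staffDef n="1" mensur.sign="C" mensur.slash="1"
                mensur.tempus="2" proport.num="3"/>
    </staffGrp>
  </scoreDef>
  <staff n="1">
    <layer n="1">
      <note pname="a" oct="4" dur="1" dots="1"/>
    </layer>
  </staff>
</section>
"""

section = ET.fromstring(fragment)  # raises ParseError if malformed
MEI = "{http://www.music-encoding.org/ns/mei}"
staff_def = section.find(f"{MEI}scoreDef/{MEI}staffGrp/{MEI}staffDef")
```

A check like this catches slips such as unprefixed `slash`/`tempus` attributes only if validated against the MEI schema (e.g. with an RNG validator), which plain well-formedness parsing cannot do.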
URL: <http://lists.uni-paderborn.de/pipermail/mei-l/attachments/20210708/801adb55/attachment.htm> From andrew.cashner at rochester.edu Thu Jul 8 15:59:30 2021 From: andrew.cashner at rochester.edu (Cashner, Andrew) Date: Thu, 8 Jul 2021 13:59:30 +0000 Subject: [MEI-L] [EXT] Re: showing change of mensuration in new section In-Reply-To: <0543F91F-8D8A-4456-B07D-16B8E5942A1A@gmail.com> References: <CH2PR07MB66773FEE3CFFA8209D3A1CCB86199@CH2PR07MB6677.namprd07.prod.outlook.com>, <0543F91F-8D8A-4456-B07D-16B8E5942A1A@gmail.com> Message-ID: <CH2PR07MB66779B1BE0735858A70ADD9E86199@CH2PR07MB6677.namprd07.prod.outlook.com> From an MEI-only perspective, then, is the first example correctly encoded? Best, Andrew ________________________________ From: mei-l <mei-l-bounces at lists.uni-paderborn.de> on behalf of Klaus Rettinghaus <klaus.rettinghaus at gmail.com> Sent: Thursday, July 8, 2021 9:29:06 AM To: Music Encoding Initiative <mei-l at lists.uni-paderborn.de> Subject: [EXT] Re: [MEI-L] showing change of mensuration in new section Dear Andrew, maybe this question is better to address directly to the Verovio developers than the entire community? Best, Klaus Am 08.07.2021 um 15:01 schrieb Cashner, Andrew <andrew.cashner at rochester.edu<mailto:andrew.cashner at rochester.edu>>: Dear colleagues, I cannot get changes of mensuration to show at the beginning of new sections. I am encoding 17th-century music in mensural style, putting <staff> and <layer> directly within <section> elements, and not using <measure> at all. I’m rendering with Verovio. If I put the mensuration change inside a <scoreDef> within the <section>, no new mensuration shows at all. If I put <mensur> and <proport> elements directly into the <layer>, then it does show, but the proportion number is placed incorrectly (after or overlapping the first note). Example 1 below shows the scoreDef approach, and example 2 shows the layer approach. 
The first mensuration signature in this example should be “C3” and the second should be “cutC 3”. I will be grateful for any help you can provide, and my apologies if there is something I missed in the documentation. Best, Andrew Example 1: <?xml version="1.0" encoding="UTF-8"?> <mei xmlns=https://www.music-encoding.org/ns/mei<https://urldefense.proofpoint.com/v2/url?u=https-3A__www.music-2Dencoding.org_ns_mei&d=DwMFaQ&c=kbmfwr1Yojg42sGEpaQh5ofMHBeTl9EI2eaqQZhHbOU&r=PlIlpVsx8a9gvc2u7VBKWNJQNqwxKYbgOaGEDg3C1Fs&m=Zvns5_hm7SMMfQiZgTBFa4uQPh-Z5Np4aTo7nv-s-zc&s=shhxE-TH2pwYq66tnCIpb6qy1d_HZmODFQCW7Vtmyvg&e=> meiversion="4.0.1"> <meiHead> <fileDesc> <titleStmt> <title /> </titleStmt> <pubStmt /> </fileDesc> <encodingDesc /> </meiHead> <music> <body> <mdiv> <score> <scoreDef> <staffGrp xml:id="chorus" n="1" bar.thru="false" symbol="bracket"> <staffDef xml:id="soprano" n="1" lines="5" clef.line="2" clef.shape="G" mensur.sign="C" mensur.tempus="2" proport.num="3" key.sig="0" /> <staffDef xml:id="bass" n="4" lines="5" clef.line="4" clef.shape="F" xml:id="midi.P4" mensur.sign="C" mensur.tempus="2" proport.num="3" key.sig="0" /> </staffGrp> </scoreDef> <section> <staff xml:id="soprano" n="1"> <layer n="1"> <note pname="a" oct="4" dur="2" /> <note pname="a" oct="4" dur="2" /> <note pname="b" oct="4" dur="2" accid="f" /> <note pname="a" oct="4" dur="1" /> <note pname="a" oct="4" dur="2" /> <barLine form="dbl"/> </layer> </staff> <staff xml:id="bass" n="4"> <layer n="1"> <note pname="d" oct="3" dur="2" /> <note pname="d" oct="3" dur="2" /> <note pname="g" oct="3" dur="2" /> <note pname="a" oct="3" dur="1" /> <note pname="d" oct="3" dur="2" /> <barLine form="dbl"/> </layer> </staff> </section> <section> <scoreDef> <staffGrp xml:id="chorus" n="1"> <staffDef xml:id="soprano" n="1" mensur.sign="C" slash="1" tempus="2" proport.num="3" /> <staffDef xml:id="alto" n="2" mensur.sign="C" slash="1" tempus="2" proport.num="3" /> <staffDef xml:id="tenor" n="3" mensur.sign="C" slash="1" 
tempus="2" proport.num="3" /> <staffDef xml:id="bass" n="4" mensur.sign="C" slash="1" tempus="2" proport.num="3" /> </staffGrp> </scoreDef> <staff xml:id="soprano" n="1"> <layer n="1"> <note pname="a" oct="4" dur="1" dots="1" /> <note pname="a" oct="4" dur="2" /> <note pname="g" oct="4" dur="1" /> <note pname="a" oct="4" dur="breve" /> <note pname="a" oct="4" dur="1" /> <barLine form="end"/> </layer> </staff> <staff xml:id="bass" n="4"> <layer n="1"> <note pname="d" oct="3" dur="1" dots="1" /> <note pname="c" oct="3" dur="2" accid="s" /> <note pname="b" oct="2" dur="1" accid="f" /> <note pname="a" oct="2" dur="breve" /> <note pname="d" oct="3" dur="1" /> <barLine form="end"/> </layer> </staff> </section> </score> </mdiv> </body> </music> </mei> Example 2: <?xml version="1.0" encoding="UTF-8"?> <mei xmlns=https://www.music-encoding.org/ns/mei<https://urldefense.proofpoint.com/v2/url?u=https-3A__www.music-2Dencoding.org_ns_mei&d=DwMFaQ&c=kbmfwr1Yojg42sGEpaQh5ofMHBeTl9EI2eaqQZhHbOU&r=PlIlpVsx8a9gvc2u7VBKWNJQNqwxKYbgOaGEDg3C1Fs&m=Zvns5_hm7SMMfQiZgTBFa4uQPh-Z5Np4aTo7nv-s-zc&s=shhxE-TH2pwYq66tnCIpb6qy1d_HZmODFQCW7Vtmyvg&e=> meiversion="4.0.1"> <meiHead> <fileDesc> <titleStmt> <title /> </titleStmt> <pubStmt /> </fileDesc> <encodingDesc /> </meiHead> <music> <body> <mdiv> <score> <scoreDef> <staffGrp xml:id="chorus" n="1" bar.thru="false" symbol="bracket"> <staffDef xml:id="soprano" n="1" lines="5" clef.line="2" clef.shape="G" mensur.sign="C" mensur.tempus="2" proport.num="3" key.sig="0" /> <staffDef xml:id="bass" n="4" lines="5" clef.line="4" clef.shape="F" xml:id="midi.P4" mensur.sign="C" mensur.tempus="2" proport.num="3" key.sig="0" /> </staffGrp> </scoreDef> <section> <staff xml:id="soprano" n="1"> <layer n="1"> <note pname="a" oct="4" dur="2" /> <note pname="a" oct="4" dur="2" /> <note pname="b" oct="4" dur="2" accid="f" /> <note pname="a" oct="4" dur="1" /> <note pname="a" oct="4" dur="2" /> <barLine form="dbl"/> </layer> </staff> <staff xml:id="bass" n="4"> <layer 
n="1"> <note pname="d" oct="3" dur="2" /> <note pname="d" oct="3" dur="2" /> <note pname="g" oct="3" dur="2" /> <note pname="a" oct="3" dur="1" /> <note pname="d" oct="3" dur="2" /> <barLine form="dbl"/> </layer> </staff> </section> <section> <staff xml:id="soprano" n="1"> <layer n="1"> <mensur sign="C" slash="1" tempus="2"/> <proport num="3"/> <note pname="a" oct="4" dur="1" dots="1" /> <note pname="a" oct="4" dur="2" /> <note pname="g" oct="4" dur="1" /> <note pname="a" oct="4" dur="breve" /> <note pname="a" oct="4" dur="1" /> <barLine form="end"/> </layer> </staff> <staff xml:id="bass" n="4"> <layer n="1"> <mensur sign="C" slash="1" tempus="2"/> <proport num="3"/> <note pname="d" oct="3" dur="1" dots="1" /> <note pname="c" oct="3" dur="2" accid="s" /> <note pname="b" oct="2" dur="1" accid="f" /> <note pname="a" oct="2" dur="breve" /> <note pname="d" oct="3" dur="1" /> <barLine form="end"/> </layer> </staff> </section> </score> </mdiv> </body> </music> </mei> *** Andrew A. Cashner, PhD Assistant professor of music, University of Rochester _______________________________________________ mei-l mailing list mei-l at lists.uni-paderborn.de<mailto:mei-l at lists.uni-paderborn.de> https://lists.uni-paderborn.de/mailman/listinfo/mei-l<https://urldefense.proofpoint.com/v2/url?u=https-3A__lists.uni-2Dpaderborn.de_mailman_listinfo_mei-2Dl&d=DwMFaQ&c=kbmfwr1Yojg42sGEpaQh5ofMHBeTl9EI2eaqQZhHbOU&r=PlIlpVsx8a9gvc2u7VBKWNJQNqwxKYbgOaGEDg3C1Fs&m=Zvns5_hm7SMMfQiZgTBFa4uQPh-Z5Np4aTo7nv-s-zc&s=jib2I8ctV972TbSWSbr8ldx6aD26k5Y2-cv6mZmGo9E&e=> -------------- next part -------------- An HTML attachment was scrubbed... 
URL: <http://lists.uni-paderborn.de/pipermail/mei-l/attachments/20210708/d327e659/attachment.htm>

From andrew.hankinson at rism.digital Fri Jul 9 09:24:29 2021
From: andrew.hankinson at rism.digital (Andrew Hankinson)
Date: Fri, 9 Jul 2021 09:24:29 +0200
Subject: [MEI-L] [EXT] Re: showing change of mensuration in new section
In-Reply-To: <CH2PR07MB66779B1BE0735858A70ADD9E86199@CH2PR07MB6677.namprd07.prod.outlook.com>
References: <CH2PR07MB66773FEE3CFFA8209D3A1CCB86199@CH2PR07MB6677.namprd07.prod.outlook.com> <0543F91F-8D8A-4456-B07D-16B8E5942A1A@gmail.com> <CH2PR07MB66779B1BE0735858A70ADD9E86199@CH2PR07MB6677.namprd07.prod.outlook.com>
Message-ID: <D869BA9A-CE46-46D4-A4D4-70BC2276FC14@rism.digital>

If you remove the 'proport' element and use the 'mensur' element to encode the change in your second example, then Verovio seems to render it as expected.

<section>
  <staff n="1">
    <layer n="1">
      <mensur num="3" sign="C" slash="1" />
      <note pname="a" oct="4" dur="1" dots="1" />
      <note pname="a" oct="4" dur="2" />
      <note pname="g" oct="4" dur="1" />
      <note pname="a" oct="4" dur="breve" />
      <note pname="a" oct="4" dur="1" />
      <barLine form="end"/>
    </layer>
  </staff>
  <staff n="4">
    <layer n="1">
      <mensur sign="C" slash="1" num="3" />
      <note pname="d" oct="3" dur="1" dots="1" />
      <note pname="c" oct="3" dur="2" accid="s" />
      <note pname="b" oct="2" dur="1" accid="f" />
      <note pname="a" oct="2" dur="breve" />
      <note pname="d" oct="3" dur="1" />
      <barLine form="end"/>
    </layer>
  </staff>
</section>

The mensural example on the Verovio page has some handy patterns to follow: https://www.verovio.org/test-suite.xhtml?cat=mensural

There were also a few XML errors in your examples (mostly around duplicate xml:id values). An XML editor can help you spot these problems, since they're probably going to cause other rendering issues if they aren't already.

Cheers!
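A quick illustration of that last suggestion: duplicate xml:id values can also be caught with a few lines of Python. This is a minimal sketch using only the standard library, not part of the original thread; the trimmed MEI-like fragment it scans is hypothetical, reusing xml:id="soprano" across sections the way the examples above reuse their ids.

```python
import xml.etree.ElementTree as ET
from collections import Counter

# Expanded name that ElementTree uses for the xml:id attribute.
XML_ID = "{http://www.w3.org/XML/1998/namespace}id"

def duplicate_xml_ids(xml_text):
    """Return the xml:id values that occur on more than one element."""
    root = ET.fromstring(xml_text)
    counts = Counter(
        el.attrib[XML_ID] for el in root.iter() if XML_ID in el.attrib
    )
    return sorted(value for value, n in counts.items() if n > 1)

# Hypothetical, trimmed MEI-like fragment that reuses xml:id="soprano"
# across two <section> elements.
mei = """<mei xmlns="https://www.music-encoding.org/ns/mei">
  <section><staff xml:id="soprano" n="1"/></section>
  <section><staff xml:id="soprano" n="1"/></section>
</mei>"""

print(duplicate_xml_ids(mei))  # -> ['soprano']
```

For a file on disk, the same function can be fed `ET.parse(path).getroot()` instead of a string; note that this only finds ids repeated on distinct elements, while a doubled xml:id attribute on one element (as in one of the staffDefs above) is a well-formedness error that the parser itself rejects.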
-Andrew

> On 8 Jul 2021, at 15:59, Cashner, Andrew <andrew.cashner at rochester.edu> wrote:
> [...]

-------------- next part -------------- An HTML attachment was scrubbed... URL: <http://lists.uni-paderborn.de/pipermail/mei-l/attachments/20210709/1cb070f0/attachment.htm> -------------- next part -------------- A non-text attachment was scrubbed... 
Name: PastedGraphic-1.png Type: image/png Size: 32939 bytes Desc: not available
URL: <http://lists.uni-paderborn.de/pipermail/mei-l/attachments/20210709/1cb070f0/attachment.png>

From bureau at tradmus.org Fri Jul 9 10:57:12 2021
From: bureau at tradmus.org (Simon Wascher)
Date: Fri, 9 Jul 2021 10:57:12 +0200
Subject: [MEI-L] [EXT] Re: showing change of mensuration in new section
In-Reply-To: <D869BA9A-CE46-46D4-A4D4-70BC2276FC14@rism.digital>
References: <CH2PR07MB66773FEE3CFFA8209D3A1CCB86199@CH2PR07MB6677.namprd07.prod.outlook.com> <0543F91F-8D8A-4456-B07D-16B8E5942A1A@gmail.com> <CH2PR07MB66779B1BE0735858A70ADD9E86199@CH2PR07MB6677.namprd07.prod.outlook.com> <D869BA9A-CE46-46D4-A4D4-70BC2276FC14@rism.digital>
Message-ID: <C2A9AFDC-C334-4510-BE38-1C0D9222AAC3@tradmus.org>

Hi,

just for my understanding:

On 09.07.2021 at 09:24, Andrew Hankinson <andrew.hankinson at rism.digital> wrote:
> If you remove the 'proport' element and use the 'mensur' element

isn't one of the ideas of MEI to encode the semantics of a notation, and wouldn't the 'proport' and 'mensur' elements encode different semantic cases? (To be able to differentiate the search between 'proport' OR 'mensur' cases in the resulting xhtml texts?)

Thanks,
Simon

From andrew.hankinson at rism.digital Fri Jul 9 11:05:16 2021
From: andrew.hankinson at rism.digital (Andrew Hankinson)
Date: Fri, 9 Jul 2021 11:05:16 +0200
Subject: [MEI-L] [EXT] Re: showing change of mensuration in new section
In-Reply-To: <C2A9AFDC-C334-4510-BE38-1C0D9222AAC3@tradmus.org>
References: <CH2PR07MB66773FEE3CFFA8209D3A1CCB86199@CH2PR07MB6677.namprd07.prod.outlook.com> <0543F91F-8D8A-4456-B07D-16B8E5942A1A@gmail.com> <CH2PR07MB66779B1BE0735858A70ADD9E86199@CH2PR07MB6677.namprd07.prod.outlook.com> <D869BA9A-CE46-46D4-A4D4-70BC2276FC14@rism.digital> <C2A9AFDC-C334-4510-BE38-1C0D9222AAC3@tradmus.org>
Message-ID: <D5892179-07BF-4939-8251-8F4482CD32FB@rism.digital>

Yes, indeed!
But there's also a practical difference between what should be encoded to be semantically correct, and what Verovio will render. The complexity of the notation ensures that there are always corners of the spec, and of the software, where certain combinations of elements haven't been tested and may not behave as expected. So it's probably a bug or missing feature in Verovio, and it would be great if someone could help with fixing it. In the meantime, however, if you want the correct visual effect, there is a practical workaround.

-Andrew

> On 9 Jul 2021, at 10:57, Simon Wascher <bureau at tradmus.org> wrote:
>
> Hi,
>
> just for my understanding:
>
> On 09.07.2021 at 09:24, Andrew Hankinson <andrew.hankinson at rism.digital> wrote:
>> If you remove the 'proport' element and use the 'mensur' element
>
> isn't one of the ideas of MEI to encode the semantics of a notation, and wouldn't the 'proport' and 'mensur' elements encode different semantic cases?
> (To be able to differentiate the search between 'proport' OR 'mensur' cases in the resulting xhtml texts?)
>
> Thanks,
> Simon
>
> _______________________________________________
> mei-l mailing list
> mei-l at lists.uni-paderborn.de
> https://lists.uni-paderborn.de/mailman/listinfo/mei-l

From stefan.muennich at unibas.ch Fri Jul 16 18:28:56 2021
From: stefan.muennich at unibas.ch (Stefan Münnich)
Date: Fri, 16 Jul 2021 16:28:56 +0000
Subject: [MEI-L] Proposed changes to MEI By-Laws
In-Reply-To: <75ed1db002024a6fa79b7f1abf01b720@unibas.ch>
References: <75ed1db002024a6fa79b7f1abf01b720@unibas.ch>
Message-ID: <db04360ce1a2435db37b8b5338bacdb7@unibas.ch>

Dear MEI community members,

The "request for community comments" period regarding the proposed changes to the MEI By-laws will be extended until Thursday, July 22nd, 14:00 CEST, i.e. the beginning of the Board meeting at MEC2021. Any comments before that date will be considered for final discussion within the Board at the MEC Board meeting.
The agreed upon version of the amended By-laws will be put to vote over a minimum 7-day period. The voting phase will be announced and started at the Community Meeting at MEC2021. For options to review the proposed changes, see the information below.

Thank you,
Stefan

________________________________
From: mei-l <mei-l-bounces at lists.uni-paderborn.de> on behalf of Stefan Münnich <stefan.muennich at unibas.ch>
Sent: Thursday, 24 June 2021 10:21
To: Music Encoding Initiative
Subject: [MEI-L] Proposed changes to MEI By-Laws

Dear MEI community members,

The MEI Board has agreed to propose some changes to the MEI By-laws<https://music-encoding.org/community/mei-by-laws.html>. Focal changes are the explicit accommodation of any type of music notation, not only Western, in section 2 ("Purpose"), the clarification of Personal and Institutional membership in section 3 ("Membership"), and the explicit integration of appointed offices of the Board and a fine-tuning of the election procedure. Other changes concern minor corrections to wording.

Options to view the proposed changes include:
1) a compiled overview of the changes in a GoogleDoc (<https://docs.google.com/document/d/1cHVVA8gkRI-gFDLo4YbtHgslP9K37IbOxX1ctSd_HeA/edit?usp=sharing>https://docs.google.com/document/d/1Kg46Nw-D8Pe7dj341-NH-9UaFy5Ksw7tlJ7x_syqvug/edit?usp=sharing)
2) a "diff" in the dedicated Pull Request on Github (https://github.com/music-encoding/music-encoding.github.io/pull/234/files)

According to the By-Laws, any proposed amendments must be approved by the MEI community, following this procedure:

"Changes to these by-laws become proposed amendments with a simple majority of the complete Board membership. Proposed amendments will be published on the MEI mailing list (MEI-L) and the MEI website. After a minimum 10-day comment period, the Board will discuss comments on the proposed amendments received from the MEI membership and incorporate changes agreed to by a simple majority of the Board. 
The agreed upon version of the amended by-laws will be presented to the MEI membership for voting over a minimum 7-day period. The amended version of the by-laws must pass with a two-thirds (2/3) majority of votes in order to become effective." (https://music-encoding.org/community/mei-by-laws.html#10-amendments) The Board invites you to provide any additional comments you have to the GoogleDoc or the Pull Request above. Any comments before July 5th (23:59:59, UTC-11)<https://www.timeanddate.com/worldclock/converter.html?iso=20210706T105900&p1=3925&p2=137&p3=213&p4=43&p5=136&p6=1976&p7=56&p8=110&p9=33&p10=248&p11=240> will be considered for final discussion within the Board. The agreed upon version of the amended By-laws will be put to vote over a minimum 7-day period before MEC2021. The voting results will be presented during the Community Meeting at MEC2021. Thank you, Stefan -------------- next part -------------- An HTML attachment was scrubbed... URL: <http://lists.uni-paderborn.de/pipermail/mei-l/attachments/20210716/0531713b/attachment.htm> From Anna.Kijas at tufts.edu Mon Jul 19 15:36:10 2021 From: Anna.Kijas at tufts.edu (Kijas, Anna E) Date: Mon, 19 Jul 2021 13:36:10 +0000 Subject: [MEI-L] Interested in Pedagogical Approaches to Music Encoding? Message-ID: <39EA5A7A-C11B-455E-8C4D-B53BF8DD4B07@tufts.edu> Hello All, If you are interested in pedagogical approaches to music encoding, use cases for teaching, and more, please consider joining the MEI Digital Pedagogy Interest Group! You can register for the group’s mailing list at http://lists.uni-paderborn.de/mailman/listinfo/mei-pedagogy-ig. Additional details about this and other IGs can be found at https://music-encoding.org/community/interest-groups.html. During the MEC 2021 the Digital Pedagogy IG will meet on Thursday, July 22, 2021 at 11:40 AM – 12:40 PM (EDT) / 17:40 -18:40 (CEST). Please join us! 
Join via Zoom Meeting URL: https://tufts.zoom.us/j/9420917662 Meeting ID: 942 091 7662 The meeting agenda is being created and can be found in our running meeting notes document<https://docs.google.com/document/d/1G0F7qbZ-6vkUfKNE91yQ97zYYfqUqHvKtfdnNU5Qzw4/edit#heading=h.hukvyxfpidxx>. We look forward to seeing you all at the MEC next week and at the IG meeting! Best, Anna Kijas and Joy Calico, Administrative co-chairs Anna E. Kijas Head, Lilly Music Library Granoff Music Center Tufts University 20 Talbot Avenue, Medford, MA 02155 Pronouns: she, her, hers Book an appointment<https://tufts.libcal.com/appointments/kijas/lilly> | (617) 627-2846 Please note: Lilly Music Library hours and additional details can be viewed at https://tischlibrary.tufts.edu/use-library/lilly-music-library. Updates about library services can be found at https://tischlibrary.tufts.edu/about-us/news/2020-03-16-9900. All instruction, meetings, and consultations will be conducted over Zoom. -------------- next part -------------- An HTML attachment was scrubbed... URL: <http://lists.uni-paderborn.de/pipermail/mei-l/attachments/20210719/2a5f2bce/attachment.htm> From luca.ludovico at unimi.it Mon Jul 19 17:52:47 2021 From: luca.ludovico at unimi.it (luca.ludovico at unimi.it) Date: Mon, 19 Jul 2021 17:52:47 +0200 Subject: [MEI-L] CFP - Special session on Computer Supported Music Education @ CSEDU 2022 Message-ID: <02a301d77cb6$1eeb7b00$5cc27100$@unimi.it> [Apologies for cross-postings] [Please distribute] 14th International Conference on Computer Supported Education (CSEDU 2022) Special session on Computer Supported Music Education (CSME 2022) - 3rd edition The International Conference on Computer Supported Education is an annual meeting place for presenting and discussing new educational tools and environments, best practices and case studies on innovative technology-based learning strategies, and institutional policies on computer supported education, including open and distance education. 
In this framework, the special session on Computer Supported Music Education aims to investigate the impact of computer-based approaches on music education. We welcome contributions on the design, development and use of advanced technologies to support learning and teaching actions in music creation, performance, and analysis. Accepted papers, presented at the conference by one or more of the authors, will be published in the Proceedings of CSEDU under an ISBN and indexed by major systems, e.g. Thomson Reuters Conference Proceedings Citation Index (CPCI/ISI), DBLP, EI (Elsevier Engineering Village Index), Scopus, Semantic Scholar and Google Scholar.

Important dates
Paper Submission: February 24, 2022
Authors Notification: March 10, 2022
Camera Ready and Registration: March 18, 2022

For further information
Special session web page: http://www.csedu.org/CSME.aspx
Conference web page: http://www.csedu.org/Home.aspx

Organizer and chair
Luca A. Ludovico
Laboratory of Music Informatics (LIM), Department of Computer Science, University of Milan
luca.ludovico at unimi.it

-------------- next part -------------- An HTML attachment was scrubbed... URL: <http://lists.uni-paderborn.de/pipermail/mei-l/attachments/20210719/3ad34a5a/attachment.htm> -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.jpg Type: image/jpeg Size: 8985 bytes Desc: not available URL: <http://lists.uni-paderborn.de/pipermail/mei-l/attachments/20210719/3ad34a5a/attachment.jpg>

From joy.calico at Vanderbilt.Edu Tue Jul 20 01:18:08 2021
From: joy.calico at Vanderbilt.Edu (Calico, Joy Haslam)
Date: Mon, 19 Jul 2021 23:18:08 +0000
Subject: [MEI-L] Interested in Pedagogical Approaches to Music Encoding? 
In-Reply-To: <39EA5A7A-C11B-455E-8C4D-B53BF8DD4B07@tufts.edu>
References: <39EA5A7A-C11B-455E-8C4D-B53BF8DD4B07@tufts.edu>
Message-ID: <BN7PR08MB549252EFD424C04CA28B5DB2E0E19@BN7PR08MB5492.namprd08.prod.outlook.com>

Hi Anna, these are not sentences so much as bullet points, but it will be quick either way, which is what he wants for the community meeting! Please add, subtract, or otherwise edit as you see fit:

In our first year the Digital Pedagogy Interest Group has
* met on Zoom nine times
* set goals (which you can read on the MEI IG page)
* established a new section on the MEI website called "community-created pedagogy and praxis" with a link to open access resources housed on Humanities Commons that includes tutorials, cheat sheets, assignments, etc. (thanks to Martha, Maristella, and Anna for these!)
* presented papers at the International Association of Music Libraries and AMS's Teaching Music History Conference
* begun planning an MEI OA pedagogy-focused publication
* been asked to contribute a roundtable to the Journal of Musicological Research
* had a member, Maristella Feustle, co-lead the "intro to MEI workshop" this week (the first time such a workshop has been led by a non-board member)

From: mei-l <mei-l-bounces+joy.calico=vanderbilt.edu at lists.uni-paderborn.de> On Behalf Of Kijas, Anna E
Sent: Monday, July 19, 2021 08:36
To: Music Encoding Initiative <mei-l at lists.uni-paderborn.de>
Subject: [MEI-L] Interested in Pedagogical Approaches to Music Encoding?

Hello All,

If you are interested in pedagogical approaches to music encoding, use cases for teaching, and more, please consider joining the MEI Digital Pedagogy Interest Group! 
You can register for the group's mailing list at http://lists.uni-paderborn.de/mailman/listinfo/mei-pedagogy-ig. Additional details about this and other IGs can be found at https://music-encoding.org/community/interest-groups.html.

During the MEC 2021 the Digital Pedagogy IG will meet on Thursday, July 22, 2021 at 11:40 AM - 12:40 PM (EDT) / 17:40-18:40 (CEST). Please join us! 
Join via Zoom Meeting URL: https://tufts.zoom.us/j/9420917662
Meeting ID: 942 091 7662

The meeting agenda is being created and can be found in our running meeting notes document <https://docs.google.com/document/d/1G0F7qbZ-6vkUfKNE91yQ97zYYfqUqHvKtfdnNU5Qzw4/edit#heading=h.hukvyxfpidxx>.

We look forward to seeing you all at the MEC next week and at the IG meeting!

Best,
Anna Kijas and Joy Calico, Administrative co-chairs

Anna E. 
Kijas
Head, Lilly Music Library
Granoff Music Center
Tufts University
20 Talbot Avenue, Medford, MA 02155
Pronouns: she, her, hers
Book an appointment <https://tufts.libcal.com/appointments/kijas/lilly> | (617) 627-2846

Please note: Lilly Music Library hours and additional details can be viewed at https://tischlibrary.tufts.edu/use-library/lilly-music-library. Updates about library services can be found at https://tischlibrary.tufts.edu/about-us/news/2020-03-16-9900. All instruction, meetings, and consultations will be conducted over Zoom.

-------------- next part -------------- An HTML attachment was scrubbed... 
From weigl at mdw.ac.at Thu Jul 22 14:06:08 2021 From: weigl at mdw.ac.at (David Weigl) Date: Thu, 22 Jul 2021 14:06:08 +0200 Subject: [MEI-L] Funded PhD opportunity - digital musicology & performance science - FWF Signature Sound Vienna Message-ID: <60F95F30020000830002E917@tgwia7.mdw.ac.at> [With apologies for cross-posting. Please distribute widely! And thanks for an amazing MEC 2021!!] Opportunity for a 3-year funded PhD position at mdw - University of Music and Performing Arts Vienna, Austria: Quantifying the signature sound of the Vienna Philharmonic's New Year's Concerts The successful candidate will have a strong background in musicology, performance science, performance studies, or related fields. You will work in close collaboration with experts in historical musicology, performance science, music encoding, and music informatics at the University of Music and Performing Arts Vienna. Together we will undertake wide-ranging investigations to find quantitative traces of the influence of the orchestra, conductor, and other contextual factors on historical performance recordings pertaining to the Vienna Philharmonic's New Year's Concert series, applying close- and distant-listening approaches using score-informed audio feature extraction technologies. The project will also emphasise digital means of disseminating findings to scholarly and lay audiences. A formal call for applications to this position with further details is available on the project website, alongside a high-level project description: https://iwk.mdw.ac.at/signature-sound-vienna/ ** Application deadline: August 15th, 2021 ** Apply via the mdw application portal: https://www.mdw.ac.at/bewerbungsportal/mittelbau/88 (you will need to register). The project is funded by the Austrian Science Fund (FWF) (P 34664-G). Please do not hesitate to contact us for further information: David M.
Weigl weigl at mdw.ac.at Werner Goebl goebl at mdw.ac.at Kind regards, David -- David M. Weigl, PhD Department of Music Acoustics - Wiener Klangstil University of Music and Performing Arts Vienna, Austria P.I., FWF Signature Sound Vienna. Same procedure as every year? https://iwk.mdw.ac.at/signature-sound-vienna/ From stefan.muennich at unibas.ch Thu Jul 22 14:36:37 2021 From: stefan.muennich at unibas.ch (Stefan Münnich) Date: Thu, 22 Jul 2021 12:36:37 +0000 Subject: [MEI-L] MEI Community meeting Message-ID: <cdbb30d8cec94f3c84cef0640cf62582@unibas.ch> Dear MEI community members, Just a little reminder that we will have the public MEI community meeting at 3 pm CEST as part of the Un-Conference Day of MEC2021. The meeting is open to all, so everyone is cordially invited. (No need to be registered for the conference.) We will use this Zoom link: https://us02web.zoom.us/j/81462952025?pwd=TVViQnZLVjY3TVRQa3RlNUh3OWFvQT09 Meeting-ID: 814 6295 2025 Passcode: 319634 Looking forward to seeing you at the meeting, Stefan From stefan.muennich at unibas.ch Thu Jul 22 21:31:03 2021 From: stefan.muennich at unibas.ch (Stefan Münnich) Date: Thu, 22 Jul 2021 19:31:03 +0000 Subject: [MEI-L] Proposed changes to MEI By-laws Message-ID: <31db64249cc046a3a07eb0a2809942f2@unibas.ch> Dear MEI community members, as announced in the community meeting today, the Board has agreed on an amended version of the MEI By-laws which integrates the community comments received over the last days and weeks. Thank you for all your input! The proposed changes aim to clarify some points and to codify existing practice in others.
Focal changes are the explicit accommodation of any type of music notation, not only Western, in section 2 ("Purpose"), the clarification of Personal and Institutional membership in section 3 ("Membership"), clarification of the format of IG reports in section 8 ("Interest groups"), and the explicit integration of appointed offices of the Board and a fine-tuning of the election procedure. Other changes concern minor corrections to wording. Options to view the proposed changes include: 1) a compiled overview of the changes in a Google Doc (https://docs.google.com/document/d/1Kg46Nw-D8Pe7dj341-NH-9UaFy5Ksw7tlJ7x_syqvug/edit?usp=sharing) 2) a "diff" in the Pull Request on GitHub (https://github.com/music-encoding/music-encoding.github.io/pull/234/files) According to the last chapter of the By-laws, "the agreed upon version of the amended by-laws will be presented to the MEI membership for voting over a minimum 7-day period. The amended version of the by-laws must pass with a two-thirds (2/3) majority of votes in order to become effective." (see https://music-encoding.org/community/mei-by-laws.html#10-amendments) We invite you to leave your "agree" or "not agree" via a Mentimeter poll: https://www.menti.com/235xvose1h The vote will be open until Thursday, 29 July, 21.30 CEST. Thank you, Stefan From andrew.hankinson at rism.digital Wed Jul 28 09:23:22 2021 From: andrew.hankinson at rism.digital (Andrew Hankinson) Date: Wed, 28 Jul 2021 09:23:22 +0200 Subject: [MEI-L] MEC Online / Alicante Message-ID: <6477DDD7-52DA-4706-BDE8-3DEACDFAE1F8@rism.digital> Hi everyone, This year's MEC was one of the smoothest and highest-quality events we have had as a community.
I wanted to pass along my thanks to the local organizers in Alicante, who made those of us who were able to attend in person feel very welcome, and to the whole team that made the online conference such a smooth and seamless experience. Thanks particularly to David and Jorge, and to Maria, Francisco, and Antonio in the "control room", and everyone else on-site. Seeing how much time and effort they put into making the experience smooth 'off-camera' was spectacular. Thanks also to Stefan, who made the presenters and chairs feel very comfortable with the technologies involved, and to the whole programme committee, whose thoughtfulness and expertise brought together really excellent and diverse presentations. Amazing work by everyone, and it is appreciated. All the best, -Andrew From stefan.muennich at unibas.ch Sat Jul 31 08:53:36 2021 From: stefan.muennich at unibas.ch (Stefan Münnich) Date: Sat, 31 Jul 2021 06:53:36 +0000 Subject: [MEI-L] Proposed changes to MEI By-laws In-Reply-To: <31db64249cc046a3a07eb0a2809942f2@unibas.ch> References: <31db64249cc046a3a07eb0a2809942f2@unibas.ch> Message-ID: <545f46ef42154469b28396901a7cddba@unibas.ch> Dear MEI community members, the vote on the amended MEI By-laws is closed. All 24 participants voted yes (100%): https://www.mentimeter.com/s/e507c96af04042f92fd66b735835b014/fe6eac49111f So the proposed changes are ready to be included in the By-laws. Thank you to all participants! Stefan
From david_day at byu.edu Thu Aug 5 01:18:17 2021 From: david_day at byu.edu (David Day) Date: Wed, 4 Aug 2021 23:18:17 +0000 Subject: [MEI-L] Mélodies en vogue au XVIIIe siècle In-Reply-To: <545f46ef42154469b28396901a7cddba@unibas.ch> References: <31db64249cc046a3a07eb0a2809942f2@unibas.ch> <545f46ef42154469b28396901a7cddba@unibas.ch> Message-ID: <92437A66-1413-497C-B555-45A306D25647@byu.edu> Dear MEI community, I recently discovered this amazing 2020 publication: Mélodies en vogue au XVIIIe siècle : le répertoire des timbres de Patrice Coirault. Révisé, organisé et complété par Georges Delarue et Marlène Belly. Published by the BnF. Does anyone know if the roughly 2,800 melodies/songs indexed in this book were ever made available in MEI or any other encoding system? Does anyone know the compilers? Could they be approached to make the music available in MEI? David Day From weigl at mdw.ac.at Tue Aug 17 16:10:13 2021 From: weigl at mdw.ac.at (David M. Weigl) Date: Tue, 17 Aug 2021 16:10:13 +0200 Subject: [MEI-L] Funded PhD opportunity - digital musicology & performance science - FWF Signature Sound Vienna - DEADLINE EXTENDED Message-ID: <bef7671be3c1e952ccb2748490dbca51d5e94ad1.camel@mdw.ac.at> [With apologies for cross-posting. Please distribute widely!] ** Application deadline extended to August 31st, 2021! ** Opportunity for a 3-year funded PhD position at mdw - University of Music and Performing Arts Vienna, Austria: Quantifying the signature sound of the Vienna Philharmonic's New Year's Concerts The successful candidate will have a strong background in musicology, performance science, performance studies, or related fields. You will work in close collaboration with experts in historical musicology, performance science, music encoding, and music informatics at the University of Music and Performing Arts Vienna.
Together we will undertake wide-ranging investigations to find quantitative traces of the influence of the orchestra, conductor, and other contextual factors on historical performance recordings pertaining to the Vienna Philharmonic's New Year's Concert series, applying close- and distant-listening approaches using score-informed audio feature extraction technologies. The project will also emphasise digital means of disseminating findings to scholarly and lay audiences. A formal call for applications to this position with further details is available on the project website, alongside a high-level project description: https://iwk.mdw.ac.at/signature-sound-vienna/ Apply via the mdw application portal: https://www.mdw.ac.at/bewerbungsportal/mittelbau/88 (you will need to register). The project is funded by the Austrian Science Fund (FWF) (P 34664-G). Please do not hesitate to contact us for further information: David M. Weigl weigl at mdw.ac.at Werner Goebl goebl at mdw.ac.at Kind regards, David -- David M. Weigl, PhD Department of Music Acoustics - Wiener Klangstil University of Music and Performing Arts Vienna, Austria P.I., FWF Signature Sound Vienna. "Same procedure as every year?" https://iwk.mdw.ac.at/signature-sound-vienna/ From stefan.muennich at unibas.ch Wed Aug 25 17:28:36 2021 From: stefan.muennich at unibas.ch (Stefan Münnich) Date: Wed, 25 Aug 2021 15:28:36 +0000 Subject: [MEI-L] MEC2021: Survey about submission process Message-ID: <388de59c5bc14d6fa4770faa242f8c61@unibas.ch> Dear MEI-L, We prepared a short survey about the submission process of this year's MEC. We would kindly ask for your feedback until September 4th via this Google Form: https://forms.gle/1dizhQh63VXXMtHB9 (it should not take more than 5 minutes). Any feedback (including from people who did not participate in or submit to MEC2021) is welcome and highly appreciated.
The results are expected not only to provide insights about this year, but also to support the workflows and processes of future MECs. Thank you very much in advance for your support, Stefan From kepper at edirom.de Wed Aug 25 23:14:54 2021 From: kepper at edirom.de (Johannes Kepper) Date: Wed, 25 Aug 2021 23:14:54 +0200 Subject: [MEI-L] ODD Thursday tomorrow Message-ID: <5DBD3AF4-FC45-4E16-9149-0FA66FF66433@edirom.de> Dear all, this is a short reminder that we’re going to have another ODD meeting tomorrow (last Thursday of an even month). The group decided to have it at 2pm UTC, 4pm CEST, 10am EDT, so two hours later than these meetings used to be. We'll meet at: https://us02web.zoom.us/j/83097885923?pwd=NTZvTXh1S2E1MkdNdi9tV3FKWVpMQT09 Meeting-ID: 830 9788 5923 Passcode: MEI See you tomorrow! jo From ichiro.fujinaga at mcgill.ca Thu Aug 26 16:52:35 2021 From: ichiro.fujinaga at mcgill.ca (Ichiro Fujinaga, Prof.) Date: Thu, 26 Aug 2021 14:52:35 +0000 Subject: [MEI-L] Workshop on Byzantine Music Notation In-Reply-To: <71D32352-B14D-49EC-A002-CA26BF861BDE@mcgill.ca> References: <86352125-3B1B-473D-B7C5-FA1504E0C2C5@mcgill.ca> <71D32352-B14D-49EC-A002-CA26BF861BDE@mcgill.ca> Message-ID: <CB017C99-793E-4DCC-965E-9F8BBF06F399@mcgill.ca> Dear all, We will hold an online Workshop on Byzantine Music Notation on Thursday 2 September 2021. The time is: 8 am (PDT), 11 am (EDT), 4 pm (BST), 5 pm (CEST), and 6 pm (EEST). The meeting is scheduled for 90 minutes. Zoom link: https://mcgill.zoom.us/j/87586218134?pwd=ZlgySjdVS3NEUk9MSG82STZtejRrdz09 Detailed invitation is below. We will have an introductory presentation on Byzantine music notation by Maria Alexandru and Nikolaos Siklafidis.
We will follow that with a discussion on how we should proceed, including the possible formation of a Subcommittee on MEI encoding of Byzantine music notation. Anyone interested is welcome to attend the workshop. Best, Ichiro ============================================== Ichiro Fujinaga is inviting you to a scheduled Zoom meeting. Topic: Workshop on Byzantine Music Notation Time: Sep 2, 2021 11:00 Montreal Join Zoom Meeting https://mcgill.zoom.us/j/87586218134?pwd=ZlgySjdVS3NEUk9MSG82STZtejRrdz09 Meeting ID: 875 8621 8134 Passcode: 428956 One tap mobile +16475580588,,87586218134# Canada +17789072071,,87586218134# Canada Dial by your location +1 647 558 0588 Canada +1 778 907 2071 Canada +1 204 272 7920 Canada +1 438 809 7799 Canada +1 587 328 1099 Canada +1 647 374 4685 Canada +1 646 558 8656 US (New York) +1 669 900 6833 US (San Jose) +1 253 215 8782 US (Tacoma) +1 301 715 8592 US (Washington DC) +1 312 626 6799 US (Chicago) +1 346 248 7799 US (Houston) Meeting ID: 875 8621 8134 Find your local number: https://mcgill.zoom.us/u/kkoH5AwnD Join by SIP 87586218134 at zoomcrc.com Join by H.323 162.255.37.11 (US West) 162.255.36.11 (US East) 115.114.131.7 (India Mumbai) 115.114.115.7 (India Hyderabad) 213.19.144.110 (Amsterdam Netherlands) 213.244.140.110 (Germany) 103.122.166.55 (Australia Sydney) 103.122.167.55 (Australia Melbourne) 149.137.40.110 (Singapore) 64.211.144.160 (Brazil) 149.137.68.253 (Mexico) 69.174.57.160 (Canada Toronto) 65.39.152.160 (Canada Vancouver) 207.226.132.110 (Japan Tokyo) 149.137.24.110 (Japan Osaka) Meeting ID: 875 8621 8134 Passcode: 428956
From francesca.giannetti at gmail.com Fri Aug 27 14:00:00 2021 From: francesca.giannetti at gmail.com (Francesca Giannetti) Date: Fri, 27 Aug 2021 08:00:00 -0400 Subject: [MEI-L] Call for collaboration: A Directory of Digital Scholarship in Music, Oct 12 Message-ID: <CAPg3Xyip-bB9U9r+17DjmqO2pF-hTsXc4i3fbtNZOe=MmPJGww@mail.gmail.com> Dear MEI-L, The Digital Humanities Interest Group of the Music Library Association invites you to help curate an online bibliography of specialized digital resources and born-digital scholarship in music. This project will engage the community in compiling entries in an open, shared online data sheet. First begun during the Music Library Association’s Annual Meeting in Portland in 2018, we are seeking to expand the bibliography with contributions from the past three years in the fields of (digital) music archives and librarianship, musicology and ethnomusicology, and music theory. The dataset will be used to produce an online directory that provides browse, preview, and search functionality, allowing users to enter the comprehensive bibliography through a variety of pathways. How it works This will be an online, asynchronous effort, held on Tuesday, October 12, 2021, throughout the day. The following Wednesday, October 13, may be reserved for wrap-up activities. You can of course enter project(s) on the data sheet (https://docs.google.com/spreadsheets/d/1UyCED16mYxo3XE4RuushxE7DWyqR_CNFecn0k79ldA4/edit?usp=sharing) at any time before or after those dates, though entering before is better if you want to be included in this year’s online directory. You can get help and ask questions of our project moderators on the Music Library Association Slack, Twitter, or via email.
Reserve at https://forms.gle/EbfeFo1C4Tez7aSE9 to receive a Slack invitation and additional event details! Call published at https://rutgersdh.github.io/musicdh/cfc/. Please share with students and colleagues! All the best, Francesca Giannetti (she/her) Digital Humanities Librarian Rutgers University–New Brunswick From b.w.bohl at gmail.com Thu Sep 23 12:26:18 2021 From: b.w.bohl at gmail.com (Benjamin W. Bohl) Date: Thu, 23 Sep 2021 12:26:18 +0200 Subject: [MEI-L] ODD Friday 2pm tomorrow Message-ID: <B202FABC-E9DC-4119-835A-3B7D7DCE540D@gmail.com> Dear MEI-L:isteners, due to some conflicts in my personal calendar and Johannes Kepper being unavailable for this week’s ODD meeting, tomorrow’s ODD Friday will take place from 2pm to 3:30pm CEST. We meet at: https://us02web.zoom.us/j/83097885923?pwd=NTZvTXh1S2E1MkdNdi9tV3FKWVpMQT09 The agenda (to which you are invited to add your topics) is available at: https://github.com/orgs/music-encoding/projects/2#column-15696520 Everybody is invited to join ;-) Happy to see you tomorrow, Benjamin From elsadeluca at fcsh.unl.pt Wed Sep 29 11:50:05 2021 From: elsadeluca at fcsh.unl.pt (Elsa De Luca) Date: Wed, 29 Sep 2021 10:50:05 +0100 Subject: [MEI-L] Call for Interest - MEC 2023 Message-ID: <CAO3NZGgZDcHLs5pMQ+4CWVdxbLBjXkeOZiAjUomigOCS0-jfkg@mail.gmail.com> PLEASE CIRCULATE WIDELY Dear MEI-L, As many of you are aware, among its activities MEI oversees the organization of an annual conference, the Music Encoding Conference (MEC), to provide a meeting place for scholars interested in discussing the modeling, generation and uses of music encoding.
While the conference has an emphasis on the development and uses of MEI, other contributions related to general approaches to music encoding are always welcome, as an opportunity for exchange between scholars from various research communities, including technologists, librarians, historians, and theorists. The MEI Board invites expressions of interest for the organization of the 11th edition of the annual Music Encoding Conference, to be held in 2023. In order to address the uncertainty related to the current global health crisis, we have opted for a simplified application procedure. At this stage, it is not necessary to submit a full application; instead, an informal expression of interest will be enough to set the ball rolling (maximum word limit: 600 words). We are aware that it may seem difficult to think ahead while we are in the midst of a pandemic, but we are optimistic that the worst is already behind us and, in any case, the option of a hybrid conference (both online and in person) remains valid for 2023. Historically, the conference has been organized by institutions involved in MEI, such as MEI member institutions or those hosting MEI-based projects, but expressions of interest from any interested group or institution will be happily received. While MEC venues have alternated between Europe and North America in the past, there is no such requirement, so proposals from anywhere are invited. The deadline to make up your mind and get in touch with us is 28 October 2021. Please direct all proposals and inquiries to info at music-encoding.org. Looking forward to hearing from you! Best wishes, Elsa De Luca (on behalf of the MEI Board) Early Music Researcher CESEM-FCSH, NOVA University of Lisbon (https://cesem.fcsh.unl.pt/en/pessoa/elsa-de-luca/) https://sites.google.com/fcsh.unl.pt/elsadeluca/
From weigl at mdw.ac.at Thu Oct 7 15:43:02 2021 From: weigl at mdw.ac.at (David Weigl) Date: Thu, 07 Oct 2021 15:43:02 +0200 Subject: [MEI-L] Funded PhD / post-doc opportunity in Digital Musicology - FWF Signature Sound Vienna References: <615EF87D02000083000304CE@tgwia7.mdw.ac.at> Message-ID: <615EF96602000083000304E5@tgwia7.mdw.ac.at> [With apologies for cross-posting. Please distribute widely!] The Department of Music Acoustics – Wiener Klangstil (IWK) at the mdw – University of Music and Performing Arts Vienna is offering the position of a Research Associate (doctoral student or post-doctoral researcher). The successful candidate will work on the project entitled “Quantifying the Signature Sound of the Vienna Philharmonic’s New Year’s Concerts” (Signature Sound Vienna), funded by the Austrian Science Fund (FWF) and led by Dr. David M. Weigl in collaboration with Prof. Werner Goebl (IWK), and Profs. Markus Grassl and Fritz Trümpi (IMI). This project will generate musicological hypotheses pertaining to the renowned New Year’s Concert series and test them using empirical methodologies, finding traces of the influence of the orchestra, conductor, and other contextual factors on historical performance recordings, by means of score-informed audio feature extraction technologies. The project will involve close interdisciplinary collaboration between historical musicology, performance studies, and music informatics, and will also feature a strong emphasis on digital means of disseminating findings to scholarly and lay audiences. We are seeking a candidate with musicological and historical competences, and an openness to collaborate within an interdisciplinary digital musicology context.
A formal call for applications to this position with further details is available on the project website, alongside a high-level project description: https://iwk.mdw.ac.at/signature-sound-vienna/ Apply via the mdw application portal: https://www.mdw.ac.at/bewerbungsportal/mittelbau/91 (you will need to register). The project is funded by the Austrian Science Fund (FWF) (P 34664-G). Please do not hesitate to contact us for further information: David M. Weigl weigl at mdw.ac.at Werner Goebl goebl at mdw.ac.at Kind regards, David -- David M. Weigl, PhD Department of Music Acoustics - Wiener Klangstil University of Music and Performing Arts Vienna, Austria P.I., FWF Signature Sound Vienna. "Same procedure as every year?" https://iwk.mdw.ac.at/signature-sound-vienna/ From francesca.giannetti at gmail.com Fri Oct 8 20:39:47 2021 From: francesca.giannetti at gmail.com (Francesca Giannetti) Date: Fri, 8 Oct 2021 14:39:47 -0400 Subject: [MEI-L] Call for collaboration: A Directory of Digital Scholarship in Music, Oct 12 In-Reply-To: <CAPg3Xyip-bB9U9r+17DjmqO2pF-hTsXc4i3fbtNZOe=MmPJGww@mail.gmail.com> References: <CAPg3Xyip-bB9U9r+17DjmqO2pF-hTsXc4i3fbtNZOe=MmPJGww@mail.gmail.com> Message-ID: <CAPg3XygKRHx_wz4AQJ5+8zj1z7cxTp5s1Y3XAZkeumSMjLvBnw@mail.gmail.com> Hello MEI-L, I'd like to remind you about the upcoming asynchronous crowdsourcing event next Tuesday, October 12, to expand the Directory of Digital Scholarship in Music (https://rutgersdh.github.io/musicdh/). Right now we have 76 projects in the directory, but I know there's a lot more out there. We'll be convening on the #digital_humanities channel of the Music Library Association's Slack. There’s no special expertise required to contribute, just a willingness to help out. If you reserve (at https://forms.gle/EbfeFo1C4Tez7aSE9), I'll send an invitation to the MLA Slack channel and a Zoom link for video chatting. Please consider joining us or helping to spread the word. 
And this perhaps goes without saying, but considering that the presentation of the directory is still in flux, any comments or suggestions you might have about the structure of the site or the genre categories are most welcome. With kind regards, Francesca
From weigl at mdw.ac.at Mon Oct 11 17:17:52 2021 From: weigl at mdw.ac.at (David Weigl) Date: Mon, 11 Oct 2021 17:17:52 +0200 Subject: [MEI-L] CfP: Music Encoding Conference 2022, Dalhousie University (Hybrid), 19-22 May 2022 Message-ID: <616455A002000083000306C5@tgwia7.mdw.ac.at> [With apologies for cross-posting. Please distribute widely!] We are pleased to announce our call for papers, posters, panels, and workshops for the Music Encoding Conference 2022. The Music Encoding Conference is the annual meeting of the Music Encoding Initiative (MEI) community and all who are interested in the digital representation of music. This cross-disciplinary venue is open to and brings together members from various encoding, analysis, and music research communities, including musicologists, theorists, librarians, technologists, music scholars, teachers, and students, and provides an opportunity for learning from and engaging with each other. The MEC 2022 will take place Thursday 19th – Sunday 22nd May, 2022, at Dalhousie University, Nova Scotia, Canada. While we sincerely hope to welcome as many attendees in person as possible, this year’s Conference will again run in hybrid mode, allowing remote attendance where travel plans are affected by the ongoing pandemic. Please note that submission types and guidelines have been adapted this year in response to community feedback. Background ---------------- Music encoding is a critical component for fields and areas of study including computational or digital musicology, digital editions, symbolic music information retrieval, digital libraries, digital pedagogy, or the wider music industry.
The Music Encoding Conference has emerged as the foremost international forum where researchers and practitioners from across these varied fields can meet and explore new developments in music encoding and its use. The Conference celebrates a multidisciplinary program, combining the latest advances from established music encodings, novel technical proposals and encoding extensions, and the presentation or evaluation of new practical applications of music encoding (e.g. in academic study, libraries, editions, pedagogy). Pre-conference workshops provide an opportunity to quickly engage with best practice in the community. Newcomers are encouraged to submit to the main program with articulations of the potential for music encoding in their work, highlighting strengths and weaknesses of existing approaches within this context. Following the formal program, an unconference session fosters collaboration in the community through the meeting of Interest Groups, and self-selected discussions on hot topics that emerge during the conference. For these meetings, there are various spaces generously provided by the hosting institution on May 22nd. Please be in touch with conference organizers if you need to reserve these spaces. For meetings on other days during or immediately after the conference, availability can be checked upon request. The program welcomes contributions from all those working on, or with, any music encoding. In addition, the Conference serves as a focus event for the Music Encoding Initiative community, with its annual community meeting scheduled the day following the main program. We in particular seek to broaden the scope of musical repertories considered, and to provide a welcoming, inclusive community for all who are interested in this work. Topics --------- The conference welcomes contributions from all those who are developing or applying music encodings in their work and research. 
Topics include, but are not limited to: * data structures for music encoding * music encoding standardisation * music encoding interoperability / universality * methodologies for encoding, music editing, description and analysis * computational analysis of encoded music * rendering of symbolic music data in audio and graphical forms * conceptual encoding of relationships between multimodal music forms (e.g. symbolic music data, encoded text, facsimile images, audio) * capture, interchange, and re-purposing of musical data and metadata * ontologies, authority files, and linked data in music encoding and description * (symbolic) music information retrieval * best practice in approaches to music encoding and the use or application of music encodings in: * music theory and analysis * digital musicology and, more broadly, digital humanities * digital editions * music digital libraries * bibliographies and bibliographic studies * catalogues and collection management * composition * performance * teaching and learning * search and browsing * multimedia music presentation, exploration, and exhibition * machine learning approaches Submissions ----------------- In response to feedback received from the community on last year’s submission process, this year’s MEC will be accepting submissions in the following forms for presentation in the main conference programme (page counts include figures and tables, but exclude references): * Paper submissions of between 4 and 10 pages, * Poster submissions of up to 4 pages. MEC ’22 also welcomes submissions of proposals for panel sessions and workshops. Submissions to each category will be reviewed according to specific expectations outlined in “Submission Guidelines” below. Finally, we will welcome submissions of late-breaking reports of up to 2 pages during a later submission period closer to the conference dates (see “Important Dates” below). Authors of paper submissions will be invited to present their work in a plenary setting if accepted.
Authors of poster submissions will be given the opportunity to briefly present in a plenary setting (“lightning talk”) in addition to a poster session if accepted. Authors of late-breaking reports will be invited to present during a dedicated poster session outside of the main conference programme. All submissions to the main conference programme (papers, posters, and panel sessions) will undergo blind review by multiple members of the program committee before acceptance. Late-breaking reports will be lightly reviewed for relevance to the conference (see “Topics” above) and accepted in limited numbers based on the order in which submissions are received. Authors of workshop submissions will be contacted by the PC to coordinate workshop planning in consultation with the local organizers and contributors. Please note the deadlines for the submission process outlined under “Important Dates” below. Submission Guidelines ------------------------------ All submissions should be formatted in A4 size with 2.5cm margins, font size 12, single space, justified, in a sans-serif typeface (e.g. Calibri) according to this template: https://tinyurl.com/mec2022-submission-template. Please take care to remove all identifying information from the submitted PDF before the upload - submissions should be anonymised for blind review. Submission types (page counts include figures and tables, but exclude references): * Paper submissions (4–10 pages) are expected to present overviews or detail specific aspects of ongoing or completed projects, present detailed case-studies or elaborated perspectives on best practices in the field, or provide other reports on topics relevant to the conference (see “Topics” above). The length requirement for submissions is intentionally broad this year, to allow authors flexibility in their reporting. Note that reporting is expected to be complete and self-contained in its argumentation. 
* Poster submissions (up to 4 pages) are expected to report on early-stage work, or to present experimental ideas for community feedback. The following types should be submitted as abstracts: * Panel discussions (3–5 pages). Submissions should describe the topic and nature of the discussion, along with the main theses and objectives of the proposed contributions; panel discussions are not expected to be a set of papers which could otherwise be submitted as individual papers. * Half- or full-day pre-conference workshops (up to 3 pages). Proposals should include possible conveners, a description of the workshop’s objective and proposed duration, as well as its logistical and technical requirements. * Late-breaking reports (up to 2 pages). The PC will coordinate the duration of proposed panels and workshops in consultation with the local organizers and contributors. Important Dates ---------------------- 10 December: Initial registration via our ConfTool website: www.conftool.net/music-encoding2022 (available from late October 2021) with metadata of contributors including name(s) of author(s), affiliation(s) and email address(es), type and title of the submission, and a short one-paragraph abstract. 17 December: Upload of anonymized submissions (see submission guidelines above) for review to ConfTool. Please be aware that ConfTool only accepts PDF submissions. Please remove all identifying information from the submitted PDF before the upload. 11 February: Notification of acceptance and invitation to authors of accepted submissions to contribute to the MEC proceedings. A formatted template pre-configured with your metadata will be provided on or about the day after notification. 13 March: Presenter registration deadline (papers, posters, workshops, panels). At least one author per accepted submission must register and confirm in-person or online participation. 3 April: Upload of accepted submissions in conference-ready version using the provided template.
This version will be made available to registered conference attendees prior to the conference. 3 April–19 April: Submissions of late-breaking reports. A limited number of submissions will be accepted in order received. Further details to be announced. 19–22 May: Conference. 5 June: Final upload of camera-ready papers for publication in the proceedings. Camera-ready versions are welcome to incorporate light modifications in response to feedback obtained during the conference. The MEC proceedings will be published under an open access license and with an individual DOI number for all papers. We especially encourage students and other first-time attendees to make a submission to the Music Encoding Conference. We have applied for funding to provide a number of $800 (CAD) travel bursaries to support national and international travel for student presenters, and are seeking further ways to support their attendance. Further details will be announced on the conference web page in due course. Additional information ----------------------------- While we look forward to welcoming as many of you as possible in person at Dalhousie University, we are preparing for MEC ‘22 within the context of ongoing uncertainty due to the Covid-19 pandemic. To allow the community to best adapt to this situation, we are organising this year’s conference with the following commitments in mind: * The conference will allow remote participation as in the previous years (MEC ‘20 and ‘21). Decisions on the precise implementation of this year’s hybrid format will be announced in due course and communicated widely (conference web page, mailing list, MEI Slack, Twitter) in the months leading up to the event. * We commit to the announced dates for MEC ‘22 (19th-22nd May). There will be no rescheduling of the conference to fit projected changes in the pandemic situation this year. Additional details regarding registration, accommodation, etc.
will be announced on the conference web page (https://music-encoding.org/conference/2022/). In case of questions, feel free to contact: conference2022 at music-encoding.org. Programme Committee ------------------------------- Daniel Bangert, Digital Repository of Ireland, Royal Irish Academy Benjamin Bohl, Department of Musicology, Goethe-Universität Frankfurt Susanne Cox, Beethoven-Haus Bonn Timothy Duguid, School of Humanities, University of Glasgow Norbert Dubowy, Digital Mozart Edition, Salzburg Mozarteum Foundation Maristella Feustle, University of North Texas Libraries Music Library Estelle Joubert, Dalhousie University Anna Kijas, Lilly Music Library, Tufts University David Lewis, University of Oxford | Goldsmiths University of London Sageev Oore, Dalhousie University | Vector Institute for Artificial Intelligence Anna Plaksin, Johannes Gutenberg-Universität Mainz | Birmingham City University Juliette Regimbal, McGill University Kristina Richts-Matthaei, Paderborn University David M. Weigl (Committee Chair), University of Music and Performing Arts Vienna Local Organizing Committee -------------------------------------- Jennifer Bain (Committee Chair), Dalhousie University Estelle Joubert, Dalhousie University Sageev Oore, Dalhousie University | Vector Institute for Artificial Intelligence Morgan Paul, Dalhousie University From Anna.Kijas at tufts.edu Mon Oct 18 16:34:01 2021 From: Anna.Kijas at tufts.edu (Kijas, Anna E) Date: Mon, 18 Oct 2021 14:34:01 +0000 Subject: [MEI-L] CfP for MEI Pedagogy Resource Message-ID: <2E229582-A41C-4EF7-8FF1-0530DDD3275B@tufts.edu> Dear MEI community, The MEI Digital Pedagogy Interest Group<https://music-encoding.org/community/interest-groups.html> invites proposals for an online, open-access, peer-reviewed resource showcasing pedagogical use cases of MEI and music encoding more generally.
We seek “music encoding” initiatives across the full spectrum of instruction in archives, library work, and music studies to demonstrate how we teach music encoding, how it can be used to help us answer specific research questions, and how to make it more accessible to students and instructors. We are interested in contributions that include, but are not limited to, lesson plans, instructional materials, short essays, tutorials, and examples drawn from existing projects. “The Music Encoding Initiative<https://music-encoding.org/> (MEI) is a community-driven, open-source effort to define a system for encoding musical documents in a machine-readable structure. MEI brings together specialists from various music research communities, including technologists, librarians, historians, and theorists in a common effort to define best practices for representing a broad range of musical documents and structures.” You can also view this CfP on Humanities Commons: https://hcommons.org/groups/music-encoding-initiative/forum/topic/cfp-mei-pedagogy-resource/. Topics and Themes We are interested in contributions that explore a variety of pedagogical topics and themes including, but not limited to, the following: * How can instructors incorporate music encoding in the classroom to explore specific types of music research questions? 
For example, questions may be related to aspects of western and non-western musics, performance, or use of analytical or computational methods; * Project use cases that highlight specific applications of music encoding, including challenges or failures, as well as success stories; * Approaches to teaching music encoding in asynchronous and online learning environments; * Incorporating musical examples by women and underrepresented composers into music encoding pedagogy; * Foundational knowledge and skills needed before undertaking music encoding work; * Methods for teaching and promoting music encoding amongst peers outside of the classroom setting How to Participate Interested contributors should submit this form<https://forms.gle/6FEeBMoVa6JsXnDcA> with a short proposal (up to 500 words) and/or to volunteer for a particular role. You can volunteer without contributing a proposal. Timeframe * Submit your proposal or express interest in a volunteer role by January 10, 2022 * Interested contributors and volunteers will be contacted by February 1, 2022 * Submit your manuscript (up to 7,000–8,000 words) by June 10, 2022 * Manuscripts will be peer-reviewed during Summer 2022 * Publication planned in late Fall 2022 Please reach out to Anna E. Kijas (anna.kijas at tufts.edu) and Joy Calico (joy.calico at vanderbilt.edu), co-administrators of the Pedagogy Interest Group, with any questions. Share widely! Best, Anna Anna E. Kijas Head, Lilly Music Library Granoff Music Center Tufts University 20 Talbot Avenue, Medford, MA 02155 Pronouns: she, her, hers Book an appointment<https://tufts.libcal.com/appointments/kijas/lilly> | (617) 627-2846 -------------- next part -------------- An HTML attachment was scrubbed... URL: <http://lists.uni-paderborn.de/pipermail/mei-l/attachments/20211018/1312103a/attachment.htm> From kepper at edirom.de Tue Oct 26 19:28:19 2021 From: kepper at edirom.de (Johannes Kepper) Date: Tue, 26 Oct 2021 19:28:19 +0200 Subject: [MEI-L] ODD Thursday this week Message-ID: <8A0A25F6-BC2A-45A1-91BF-62FF0D0EF1F8@edirom.de> Dear all, coming Thursday, we have another iteration of our ODD Thursday / Friday meetings to go through more technical aspects of MEI. We’ll meet on 2021-10-28 at 4pm CEST / 3pm BST / 2pm UTC+0 / 10am EDT over at https://us02web.zoom.us/j/83097885923?pwd=NTZvTXh1S2E1MkdNdi9tV3FKWVpMQT09 As usual, there is a preliminary agenda available at https://github.com/orgs/music-encoding/projects/2#column-16111354 – feel free to add to that list, or just show up during the meeting. Looking forward to seeing you then, jo -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: Message signed with OpenPGP URL: <http://lists.uni-paderborn.de/pipermail/mei-l/attachments/20211026/0d172599/attachment.sig> From b.w.bohl at gmail.com Fri Oct 29 11:52:49 2021 From: b.w.bohl at gmail.com (Benjamin W. Bohl) Date: Fri, 29 Oct 2021 11:52:49 +0200 Subject: [MEI-L] ODD Friday Nov 26 Message-ID: <F64412A1-C683-4C6A-ABB7-15A8C326B946@gmail.com> Dear MEI-L, our next ODD meeting will be on Nov. 26 at 04:00pm UTC+1 (03:00pm UTC+0; 10:00 am EST; 07:00 am PST). Please feel free to add to the agenda at: https://github.com/orgs/music-encoding/projects/2#column-16625544 If you’re having trouble doing so, let me know :wink: N.B.
This will be the last ODD meeting in 2021. All the best wishes, Benni From stefan.muennich at unibas.ch Fri Oct 29 17:06:42 2021 From: stefan.muennich at unibas.ch (Stefan Münnich) Date: Fri, 29 Oct 2021 15:06:42 +0000 Subject: [MEI-L] Rename the MEI master-branch In-Reply-To: <A9639783-E3C5-412C-9A40-C89A090B6C1A@rism.digital> References: <D964DD52-971E-436A-958F-1D20D2CF5557@gmail.com> <D288D410-520B-48BE-83F8-DE21D1E2DBF8@tufts.edu> <3CBD7E35-E5C9-4A00-9FA2-9107B0E0BCD8@gmail.com>, <A9639783-E3C5-412C-9A40-C89A090B6C1A@rism.digital> Message-ID: <22dcfe57392c4d41a96c444d437ce624@unibas.ch> Dear MEI community, As another follow-up, today we continued the discussed transition and renamed the "master" branch for all main MEI GitHub repositories to "main". This includes - https://github.com/music-encoding/music-encoding.github.io - https://github.com/music-encoding/sample-encodings - https://github.com/music-encoding/encoding-tools - https://github.com/music-encoding/guidelines - https://github.com/music-encoding/schema While this will not affect you if you have not forked any of the aforementioned repositories, fork owners may want to adjust their respective local branches to keep in sync. A nice and easy-to-follow description of the necessary steps can be found here: https://stevenmortimer.com/5-steps-to-change-github-default-branch-from-master-to-main/ (There are 5 simple steps that can be completed in under 1 minute). Many thanks to everyone involved in this important step towards a more inclusive terminology (see Anna's mail for background)! And special thanks to you, Andrew, for the technical implementation. Cheers, Stefan ________________________________ From: mei-l <mei-l-bounces at lists.uni-paderborn.de> on behalf of Andrew Hankinson <andrew.hankinson at rism.digital> Sent: Wednesday, 5 May 2021 13:56:52 To: Music Encoding Initiative Subject: Re: [MEI-L] Rename the MEI master-branch Hello everyone, To follow up on this discussion, the community has decided on 'stable' as the appropriate replacement. This is just a 'heads-up' that I will be making the change today, so please update any of your tools, workflows, and other dependencies to reflect this change. Cheers, -Andrew > On 13 Apr 2021, at 16:46, Benjamin W. Bohl <b.w.bohl at gmail.com> wrote: > > Dear Anna, > > Thanks for this valuable addition ;-) > > /Benni > >> On 13. Apr 2021, at 16:45, Kijas, Anna E <Anna.Kijas at tufts.edu> wrote: >> >> Thank you, Benjamin for this! Here is some additional context for folks who may not be following these conversations, https://www.nytimes.com/2021/04/13/technology/racist-computer-engineering-terms-ietf.html. Also I’d like to share a guide created by several of my colleagues at the Association for Computers and the Humanities - https://ach.org/toward-anti-racist-technical-terminology/ - which addresses racist technical terminology. We also have an open bibliography on Zotero for Inclusive Technology - https://www.zotero.org/groups/2554430/ach_inclusive_technology. >> >> Best, >> Anna >> >> Please note: Lilly Music Library hours and additional details can be viewed at https://tischlibrary.tufts.edu/use-library/lilly-music-library. Updates about library services can be found at https://tischlibrary.tufts.edu/about-us/news/2020-03-16-9900. All instruction, meetings, and consultations will be conducted over Zoom. >> >> Anna E. Kijas >> Head, Lilly Music Library >> Granoff Music Center >> Tufts University >> 20 Talbot Avenue, Medford, MA 02155 >> Pronouns: she, her, hers >> Book an appointment | (617) 627-2846 >> >> From: mei-l <mei-l-bounces at lists.uni-paderborn.de> on behalf of "Benjamin W.
Bohl" <b.w.bohl at gmail.com> >> Reply-To: Music Encoding Initiative <mei-l at lists.uni-paderborn.de> >> Date: Tuesday, April 13, 2021 at 10:32 AM >> To: MEI-L <mei-l at lists.uni-paderborn.de> >> Subject: [MEI-L] Rename the MEI master-branch >> >> Dear MEI Community, >> >> following a suggestion by the Software Freedom Conservancy GitHub renamed their master-branch to main in order to avoid potentially offensive vocabulary or allusions to slavery. >> >> MEI would like to follow this lead and rename the master-branch of https://github.com/music-encoding/music-encoding and other repositories where applicable. Following the discussion on GitHub (https://github.com/music-encoding/music-encoding/issues/776) the Technical Team set up this poll to take in the community's votes on a closed list of potential new names for our current master-branch, used to disseminate tagged versions (e.g. MEI 3.0.0, MEI 4.0.0 MEI 4.0.1). >> >> Please cast your vote until 2021-04-28 using the form available at: >> https://abstimmung.dfn.de/tNOBDWgWAFtVz6lr >> >> On behalf of the MEI Board and Technical Team, >> Benjamin W. Bohl >> MEI Technical Co-chair >> >> _______________________________________________ >> mei-l mailing list >> mei-l at lists.uni-paderborn.de >> https://lists.uni-paderborn.de/mailman/listinfo/mei-l > > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l _______________________________________________ mei-l mailing list mei-l at lists.uni-paderborn.de https://lists.uni-paderborn.de/mailman/listinfo/mei-l -------------- next part -------------- An HTML attachment was scrubbed... 
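[Editorial note appended to this thread] The fork-adjustment steps referenced in the rename announcement above (the linked five-step guide) can be sketched as follows. This is an illustrative sketch, not MEI tooling: the scratch "upstream" repository, the demo identity, and the helper function are all invented stand-ins, and a git of version 2.28 or later is assumed to be on the PATH. Only the last four commands are what a fork owner would run in an existing clone.

```python
# Sketch: re-syncing a local clone after an upstream default-branch
# rename from "master" to "main" (hypothetical scratch repos below).
import os
import subprocess
import tempfile

def git(*args, cwd):
    """Run a git command in the given directory, raising on failure."""
    subprocess.run(["git", *args], cwd=cwd, check=True, capture_output=True)

root = tempfile.mkdtemp()
upstream = os.path.join(root, "upstream")
clone = os.path.join(root, "clone")

# Scratch setup: an upstream repo whose default branch is still "master",
# plus a clone standing in for a contributor's fork.
git("init", "-b", "master", upstream, cwd=root)
git("-c", "user.name=demo", "-c", "user.email=demo@example.org",
    "commit", "--allow-empty", "-m", "initial commit", cwd=upstream)
git("clone", upstream, clone, cwd=root)
git("branch", "-m", "master", "main", cwd=upstream)    # upstream renames

# The post-rename steps run inside the local clone:
git("branch", "-m", "master", "main", cwd=clone)       # rename local branch
git("fetch", "origin", cwd=clone)                      # learn of origin/main
git("branch", "-u", "origin/main", "main", cwd=clone)  # re-point tracking
git("remote", "set-head", "origin", "-a", cwd=clone)   # update origin/HEAD
```

After these steps the clone's checked-out branch is `main` and tracks `origin/main`, matching the renamed upstream.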
URL: <http://lists.uni-paderborn.de/pipermail/mei-l/attachments/20211029/9ec64031/attachment.htm> From nikolaos.beer at uni-paderborn.de Tue Nov 2 16:28:07 2021 From: nikolaos.beer at uni-paderborn.de (Nikolaos Beer) Date: Tue, 2 Nov 2021 16:28:07 +0100 Subject: [MEI-L] Start of RWA online Message-ID: <318DC4CF-ED33-4014-A804-F0581ED87D27@uni-paderborn.de> Dear list members, We are pleased to announce our new web service RWA online (www.reger-werkausgabe.de) and the parallel publication of the two latest edition volumes Songs II (vol. II/2) and Works for mixed voice unaccompanied choir II (vol. II/9) of the edition project Reger-Werkausgabe (RWA). RWA online is the new publication and research platform for the digital components of the hybrid edition RWA. It replaces the DVDs previously enclosed with each printed volume and brings them together in a new look, technically modernised and freely accessible for the first time. RWA online is being developed by RWA/Max-Reger-Institute (MRI) with the support of Virtueller Forschungsverbund Edirom (ViFE, www.edirom.de) and operated by MRI. Carus-Verlag Stuttgart significantly supports RWA online by kindly authorising the online publication of the scholarly texts and the complete Critical Report together with bar-based excerpts of the edited text. As usual with Edirom-Online, and where legally possible, all edition-relevant manuscripts and first editions are accessible in full-page digital views. A full page view of the edition itself is provided within the respective printed volume. Later this year, the likewise extensive digital RWA Encyclopedia will be added. This will be followed by the digital components of the volumes already published in Module II (Songs and Choral Works) and, in the course of 2022, those of Module I (Organ Works), which has already been completed.
RWA online is directly linked to the research data repositories of the Max-Reger-Portal (www.maxreger.info), which is also operated by the MRI, and is thus always based on the latest research data. We invite you to RWA online at www.reger-werkausgabe.de. Information on the two printed volumes just published is available for II/2 here: https://www.reger-werkausgabe.de/rwa_news_2021101502.html - and for II/9 here: https://www.reger-werkausgabe.de/rwa_news_2021101503.html Information on the Reger-Werkausgabe at Carus-Verlag: https://www.carus-verlag.com/produkte/gesamt-und-werkausgaben/reger-werkausgabe/ With kind regards and on behalf of Max-Reger-Institute/Reger-Werkausgabe Nikolaos Beer ___________________________________ Nikolaos Beer M.A. Wissenschaftlicher Mitarbeiter Verbundstelle Musikedition Reger-Werkausgabe Universität Paderborn Musikwissenschaftliches Seminar Detmold/Paderborn Hornsche Straße 39 D-32756 Detmold Dienstadresse: Max-Reger-Institut/Elsa-Reger-Stiftung Pfinztalstraße 7 76227 Karlsruhe Fon: +49 - (0)721 - 854 501 @: nikolaos.beer at uni-paderborn.de -------------- next part -------------- An HTML attachment was scrubbed... URL: <http://lists.uni-paderborn.de/pipermail/mei-l/attachments/20211102/6972b119/attachment.htm> From weigl at mdw.ac.at Mon Nov 15 17:02:12 2021 From: weigl at mdw.ac.at (David M. Weigl) Date: Mon, 15 Nov 2021 17:02:12 +0100 Subject: [MEI-L] 2nd CfP: Music Encoding Conference 2022, Dalhousie University (Hybrid), 19-22 May 2022 Message-ID: <6da0f6586fe866972948a0d734bb0c476d882289.camel@mdw.ac.at> [With apologies for cross-posting. Please distribute widely!] ** ABSTRACT DEADLINE: December 10th, PAPER DEADLINE: December 17th, 2021 ** This is the second call for papers, posters, panels, and workshops for the Music Encoding Conference 2022. The Music Encoding Conference is the annual meeting of the Music Encoding Initiative (MEI) community and all who are interested in the digital representation of music. 
This cross-disciplinary venue is open to and brings together members from various encoding, analysis, and music research communities, including musicologists, theorists, librarians, technologists, music scholars, teachers, and students, and provides an opportunity for learning from and engaging with one another. The MEC 2022 will take place Thursday 19th – Sunday 22nd May, 2022, at Dalhousie University, Nova Scotia, Canada. While we sincerely hope to welcome as many attendees in person as possible, this year’s Conference will again run in hybrid mode, allowing remote attendance where travel plans are affected by the ongoing pandemic. Please note that submission types and guidelines have been adapted this year in response to community feedback. Background ---------------- Music encoding is a critical component for fields and areas of study including computational or digital musicology, digital editions, symbolic music information retrieval, digital libraries, digital pedagogy, or the wider music industry. The Music Encoding Conference has emerged as the foremost international forum where researchers and practitioners from across these varied fields can meet and explore new developments in music encoding and its use. The Conference celebrates a multidisciplinary program, combining the latest advances from established music encodings, novel technical proposals and encoding extensions, and the presentation or evaluation of new practical applications of music encoding (e.g. in academic study, libraries, editions, pedagogy). Pre-conference workshops provide an opportunity to quickly engage with best practice in the community. Newcomers are encouraged to submit to the main program with articulations of the potential for music encoding in their work, highlighting strengths and weaknesses of existing approaches within this context.
Following the formal program, an unconference session fosters collaboration in the community through the meeting of Interest Groups, and self-selected discussions on hot topics that emerge during the conference. For these meetings, there are various spaces generously provided by the hosting institution on May 22nd. Please be in touch with conference organizers if you need to reserve these spaces. For meetings on other days during or immediately after the conference, availability can be checked upon request. The program welcomes contributions from all those working on, or with, any music encoding. In addition, the Conference serves as a focus event for the Music Encoding Initiative community, with its annual community meeting scheduled the day following the main program. We in particular seek to broaden the scope of musical repertories considered, and to provide a welcoming, inclusive community for all who are interested in this work. Topics --------- The conference welcomes contributions from all those who are developing or applying music encodings in their work and research. Topics include, but are not limited to:     * data structures for music encoding     * music encoding standardisation     * music encoding interoperability / universality     * methodologies for encoding, music editing, description and analysis     * computational analysis of encoded music     * rendering of symbolic music data in audio and graphical forms     * conceptual encoding of relationships between multimodal music forms (e.g. 
symbolic music data, encoded text, facsimile images, audio)     * capture, interchange, and re-purposing of musical data and metadata     * ontologies, authority files, and linked data in music encoding and description     * (symbolic) music information retrieval using music encoding     * evaluation of music encodings     * best practice in approaches to music encoding and the use or application of music encodings in:     * music theory and analysis     * digital musicology and, more broadly, digital humanities     * digital editions     * music digital libraries     * bibliographies and bibliographic studies     * catalogues and collection management     * composition     * performance     * teaching and learning     * search and browsing     * multimedia music presentation, exploration, and exhibition     * machine learning approaches Submissions ----------------- In response to feedback received from the community on last year’s submission process, this year’s MEC will be accepting submissions in the following forms for presentation in the main conference programme (page counts include figures and tables, but exclude references): * Paper submissions of between 4 and 10 pages, * Poster submissions of up to 4 pages. MEC ‘22 also welcomes submissions of proposals for panel sessions and workshops. Submissions to each category will be reviewed according to specific expectations outlined in “Submission Guidelines” below. Finally, we will welcome submissions of late-breaking reports of up to 2 pages during a later submission period closer to the conference dates (see “Important Dates” below). Authors of paper submissions will be invited to present their work in a plenary setting if accepted. Authors of poster submissions will be given the opportunity to briefly present in a plenary setting (“lightning talk”) in addition to a poster session if accepted. 
Authors of late-breaking reports will be invited to present during a dedicated poster session outside of the main conference programme. All submissions to the main conference programme (papers, posters, and panel sessions) will undergo blind review by multiple members of the program committee before acceptance. Late-breaking reports will be lightly reviewed for relevance to the conference (see “Topics” above) and accepted in limited numbers based on the order in which submissions are received. Authors of workshop submissions will be contacted by the PC to coordinate workshop planning in consultation with the local organizers and contributors. Please note the deadlines for the submission process outlined under “Important Dates” below. Submission Guidelines ------------------------------ All submissions should be formatted in A4 size with 2.5cm margins, font size 12, single space, justified, in a sans-serif typeface (e.g. Calibri), using APA-style citations and references, according to this template: https://tinyurl.com/mec2022-submission-template. Please take care to remove all identifying information from the submitted PDF before the upload - submissions should be anonymised for blind review. Submission types (page counts include figures and tables, but exclude references): * Paper submissions (4–10 pages) are expected to present overviews or detail specific aspects of ongoing or completed projects, present detailed case-studies or elaborated perspectives on best practices in the field, or provide other reports on topics relevant to the conference (see “Topics” above). The length requirement for submissions is intentionally broad this year, to allow authors flexibility in their reporting. Note that reporting is expected to be complete and self-contained in its argumentation. * Poster submissions (up to 4 pages) are expected to report on early-stage work, or to present experimental ideas for community feedback. 
The following types should be submitted as abstracts: * Panel discussions (3–5 pages). Submissions should describe the topic and nature of the discussion, along with the main theses and objectives of the proposed contributions; panel discussions are not expected to be a set of papers which could otherwise be submitted as individual papers. * Half- or full-day pre-conference workshops (up to 3 pages). Proposals should include possible conveners, a description of the workshop’s objective and proposed duration, as well as its logistical and technical requirements. * Late-breaking reports (up to 2 pages). The PC will coordinate the duration of proposed panels and workshops in consultation with the local organizers and contributors. Important Dates (Timezone: AoE / Anywhere on Earth) --------------------------------------------------- 10 December: Initial registration via our ConfTool website: www.conftool.net/music-encoding2022 with metadata of contributors including name(s) of author(s), affiliation(s) and email address(es), type and title of the submission, and a short one-paragraph abstract. 17 December: Upload of anonymized submissions (see submission guidelines above) for review to ConfTool. Please be aware that ConfTool only accepts PDF submissions. Please remove all identifying information from the submitted PDF before the upload. 11 February: Notification of acceptance and invitation to authors of accepted submissions to contribute to the MEC proceedings. A formatted template pre-configured with your metadata will be provided on or about the day after notification. 13 March: Presenter registration deadline (papers, posters, workshops, panels). At least one author per accepted submission must register and confirm in-person or online participation. 3 April: Upload of accepted submissions in conference-ready version using the provided template. This version will be made available to registered conference attendees prior to the conference.
3 April–19 April: Submissions of late-breaking reports. A limited number of submissions will be accepted in order received. Further details to be announced. 19–22 May: Conference. 5 June: Final upload of camera-ready papers for publication in the proceedings. Camera-ready versions are welcome to incorporate light modifications in response to feedback obtained during the conference. The MEC proceedings will be published under an open access license and with an individual DOI number for all papers. We especially encourage students and other first-time attendees to make a submission to the Music Encoding Conference. We have applied for funding to provide a number of $800 (CAD) travel bursaries to support national and international travel for student presenters, and are seeking further ways to support their attendance. Further details will be announced on the conference web page in due course. Additional information ----------------------------- While we look forward to welcoming as many of you as possible in person at Dalhousie University, we are preparing for MEC ‘22 within the context of ongoing uncertainty due to the Covid-19 pandemic. To allow the community to best adapt to this situation, we are organising this year’s conference with the following commitments in mind: * The conference will allow remote participation as in the previous years (MEC ‘20 and ‘21). Decisions on the precise implementation of this year’s hybrid format will be announced in due course and communicated widely (conference web page, mailing list, MEI Slack, Twitter) in the months leading up to the event. * We commit to the announced dates for MEC ‘22 (19th-22nd May). There will be no rescheduling of the conference to fit projected changes in the pandemic situation this year. Additional details regarding registration, accommodation, etc. will be announced on the conference web page (https://music-encoding.org/conference/2022/).
In case of questions, feel free to contact: conference2022 at music-encoding.org. Programme Committee ------------------------------- Daniel Bangert, Digital Repository of Ireland, Royal Irish Academy Benjamin Bohl, Department of Musicology, Goethe-Universität Frankfurt Susanne Cox, Beethoven-Haus Bonn Timothy Duguid, School of Humanities, University of Glasgow Norbert Dubowy, Digital Mozart Edition, Salzburg Mozarteum Foundation Maristella Feustle,  University of North Texas Libraries Music Library Estelle Joubert, Dalhousie University Anna Kijas, Lilly Music Library, Tufts University David Lewis, University of Oxford | Goldsmiths University of London Sageev Oore, Dalhousie University | Vector Institute for Artificial Intelligence Anna Plaksin, Johannes Gutenberg-Universität Mainz | Birmingham City University Juliette Regimbal, McGill University Kristina Richts-Matthaei, Paderborn University David M. Weigl (Committee Chair), University of Music and Performing Arts Vienna Local Organizing Committee -------------------------------------- Jennifer Bain (Committee Chair), Dalhousie University Estelle Joubert, Dalhousie University Sageev Oore, Dalhousie University | Vector Institute for Artificial Intelligence Morgan Paul, Dalhousie University From stadler at edirom.de Fri Nov 19 18:01:44 2021 From: stadler at edirom.de (Peter Stadler) Date: Fri, 19 Nov 2021 18:01:44 +0100 Subject: [MEI-L] [2021 MEI elections] Nominations for the MEI Board Message-ID: <526CFE0A-04D1-4DD8-BBAE-40EB82C2118F@edirom.de> **Too long to read?** visit: https://forms.gle/XCdSyeQ7BN6cbad27 Dear MEI Community, on 31 December 2021 the terms of three MEI Board members will come to an end. The entire Board wishes to thank Benjamin W. Bohl, Elsa De Luca, and Ichiro Fujinaga for their service and dedication to the MEI community. In order to fill these soon-to-be-vacant positions, elections must be held. 
The election process will take place in accordance with the Music Encoding Initiative By-Laws.[1] To nominate candidates, please do so via this form: https://forms.gle/XCdSyeQ7BN6cbad27 The timeline of the elections will be as follows: Nomination phase (20 November – 4 December, 2021 [2]) - Nominations can be sent by filling in the nomination form. - Any person who today is a subscriber of MEI-L has the right to nominate candidates. - Individuals who have previously served on the Board are eligible for nomination and re-appointment. - Self nominations are welcome. - Individuals will be informed of their nomination when received and asked to confirm their willingness to serve on the Board. - Acceptance of a nomination requires submission of a short CV and a personal statement of interest in MEI (a maximum of 200 words each) to elections at music-encoding.org by November, 17 2021. - Candidates who have been nominated but who have not confirmed their willingness will not be included on the ballot. - Candidates have to be members of the MEI-L mailing list but may register until 16 November 2021. Election phase (18 December – 31 December, 2021) - The election will take place using OpaVote and the Ranked Choice Voting method (https://www.opavote.com/methods/ranked-choice-voting). - You will be informed about the election and your individual voting tokens in a separate email. Post election phase - Election results will be announced after the elections have closed. - The term of the elected candidates starts on 1 January 2022. The election of Board members is an opportunity for each of you to have a voice in determining the future of MEI. 
Thank you for your support, Peter Stadler and Laurent Pugin MEI election administrators 2021 by appointment of the MEI Board [1] The By-laws of the Music Encoding Initiative are available online at: http://music-encoding.org/community/mei-by-laws.html [2] All deadlines are referenced to 11:59 pm (UTC) -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: Message signed with OpenPGP URL: <http://lists.uni-paderborn.de/pipermail/mei-l/attachments/20211119/6af89dad/attachment.sig> From stadler at edirom.de Fri Nov 19 23:16:00 2021 From: stadler at edirom.de (Peter Stadler) Date: Fri, 19 Nov 2021 23:16:00 +0100 Subject: [MEI-L] [2021 MEI elections] Nominations for the MEI Board In-Reply-To: <2ae6af214e5e4034ac4d91e1b980893f@unibas.ch> References: <2ae6af214e5e4034ac4d91e1b980893f@unibas.ch> Message-ID: <11C46357-0EB6-4FF1-9B40-FE62E2367CFA@edirom.de> Hah, good catch! Of course you’re right! Cheers Peter > Am 19.11.2021 um 22:46 schrieb Stefan Münnich <stefan.muennich at unibas.ch>: > >  > Dear Peter, > > > > There might probably be an error with the dates? Should it read 17 December and 16 December for statement and registration respectively? > > > > Best, Stefan > > Von: mei-l <mei-l-bounces at lists.uni-paderborn.de> im Auftrag von Peter Stadler <stadler at edirom.de> > Gesendet: Freitag, 19. November 2021 18:01:44 > An: Music Encoding Initiative > Betreff: [MEI-L] [2021 MEI elections] Nominations for the MEI Board > > **Too long to read?** visit: > https://forms.gle/XCdSyeQ7BN6cbad27 > > Dear MEI Community, > > on 31 December 2021 the terms of three MEI Board members will come to an end. The entire Board wishes to thank Benjamin W. Bohl, Elsa De Luca, and Ichiro Fujinaga for their service and dedication to the MEI community. > > In order to fill these soon-to-be-vacant positions, elections must be held. 
The election process will take place in accordance with the Music Encoding Initiative By-Laws.[1] > > To nominate candidates, please do so via this form: > https://forms.gle/XCdSyeQ7BN6cbad27 > > The timeline of the elections will be as follows: > > Nomination phase (20 November – 4 December, 2021 [2]) > - Nominations can be sent by filling in the nomination form. > - Any person who today is a subscriber of MEI-L has the right to nominate candidates. > - Individuals who have previously served on the Board are eligible for nomination and re-appointment. > - Self nominations are welcome. > - Individuals will be informed of their nomination when received and asked to confirm their willingness to serve on the Board. > - Acceptance of a nomination requires submission of a short CV and a personal statement of interest in MEI (a maximum of 200 words each) to elections at music-encoding.org by November, 17 2021. > - Candidates who have been nominated but who have not confirmed their willingness will not be included on the ballot. > - Candidates have to be members of the MEI-L mailing list but may register until 16 November 2021. > > Election phase (18 December – 31 December, 2021) > - The election will take place using OpaVote and the Ranked Choice Voting method (https://www.opavote.com/methods/ranked-choice-voting). > - You will be informed about the election and your individual voting tokens in a separate email. > > Post election phase > - Election results will be announced after the elections have closed. > - The term of the elected candidates starts on 1 January 2022. > > > The election of Board members is an opportunity for each of you to have a voice in determining the future of MEI. 
> > Thank you for your support, > Peter Stadler and Laurent Pugin > MEI election administrators 2021 > by appointment of the MEI Board > > [1] The By-laws of the Music Encoding Initiative are available online at: http://music-encoding.org/community/mei-by-laws.html > [2] All deadlines are referenced to 11:59 pm (UTC) > > -------------- next part -------------- An HTML attachment was scrubbed... URL: <http://lists.uni-paderborn.de/pipermail/mei-l/attachments/20211119/cd20b916/attachment.htm> From kepper at edirom.de Thu Nov 25 20:23:07 2021 From: kepper at edirom.de (Johannes Kepper) Date: Thu, 25 Nov 2021 20:23:07 +0100 Subject: [MEI-L] ODD Friday Message-ID: <D126F098-9768-4E29-9FD9-8B45AC2E1D67@edirom.de> Dear all, this is a gentle reminder about tomorrow's ODD Friday. The agenda is available at https://github.com/orgs/music-encoding/projects/2#column-16625544, please come join us and add to that list :-) https://us02web.zoom.us/j/83097885923?pwd=NTZvTXh1S2E1MkdNdi9tV3FKWVpMQT09 Meeting-ID: 830 9788 5923 Kenncode: MEI The meeting will start at 4pm CET, 3pm GMT, 10am EST. See you then… jo From stefan.muennich at unibas.ch Mon Nov 29 16:09:01 2021 From: stefan.muennich at unibas.ch (=?Windows-1252?Q?Stefan_M=FCnnich?=) Date: Mon, 29 Nov 2021 15:09:01 +0000 Subject: [MEI-L] WG: New Perspectives in Fifteenth- and Sixteenth-Century Music Notations In-Reply-To: <CAG0F6a9FrJPKAdgsJsBLkh-iAOFxshhYMH8G1=xVH8xK1W-U_A@mail.gmail.com> References: <CAG0F6a9FrJPKAdgsJsBLkh-iAOFxshhYMH8G1=xVH8xK1W-U_A@mail.gmail.com> Message-ID: <b930d02da58d4ae9aa74500cd26f9252@unibas.ch> (With apologies for cross-posting) Dear MEI-L members, The following CfP may be of interest to some of you. (Disclaimer: I am just forwarding the announcement. For any questions, please contact the organizers indicated in the CfP.) All best, Stefan ________________________________ Von: Notation Study Group <notation.studygroup at gmail.com> Gesendet: Montag, 29. 
November 2021 15:53 An: Notation Study Group Betreff: CFP: New Perspectives in Fifteenth- and Sixteenth-Century Music Notations CFP: New Perspectives in Fifteenth- and Sixteenth-Century Music Notations CFP deadline: 15 February 2022 Conference dates: 4-7 May 2022 https://alamirefoundation.org/en/research/conferences/notations Alamire Foundation / KU Leuven Department of Musicology Leuven, Belgium This conference aims to bring together scholars working on music notations from the fifteenth and sixteenth centuries from a wide range of perspectives and seeks to advance scholarship in a number of different areas relating to notation. Keynote addresses will be delivered by Margaret Bent (All Souls College Oxford) and Emily Zazulia (UC Berkeley). The conference concert will feature Cappella Pratensis singing Jacob Obrecht’s Missa Maria zart. See the conference website<https://alamirefoundation.org/en/research/conferences/notations> for more information. We are interested in papers on topics including but not limited to: · the history and theory of notation; · the notational practice of composers, scribes, and music printers; · interactions between different types of notation (monophonic/polyphonic, chant/mensural); · the notation of canons; · optical music recognition (OMR) and music encoding; · notation and performance practice; · notation and editing techniques; · notation and rhetoric; and · notation and society. Please send titles and abstracts for consideration to Paul Kolb at paul.kolb -at- kuleuven.be<http://kuleuven.be/> by 15 February 2022. The conference language is English. For those unable to attend in person, it will be possible to present virtually. 
Convener: Paul Kolb (KU Leuven) Scientific Committee: David Burn (KU Leuven), Marie-Alexis Colin (Université libre de Bruxelles), Barbara Haggh-Huglo (University of Maryland), Paul Kolb (KU Leuven), Katelijne Schiltz (Universität Regensburg), Thomas Schmidt (University of Manchester) Organizing Committee: David Burn (KU Leuven), Bart Demuyt (KU Leuven/Alamire Foundation), Ann Kelders (KBR), Paul Kolb (KU Leuven), Ryan O’Sullivan (KU Leuven), Miriam Wendling (KU Leuven) -------------- next part -------------- An HTML attachment was scrubbed... URL: <http://lists.uni-paderborn.de/pipermail/mei-l/attachments/20211129/81e853b7/attachment.htm> From weigl at mdw.ac.at Fri Dec 3 14:38:52 2021 From: weigl at mdw.ac.at (David Morrison Weigl) Date: Fri, 03 Dec 2021 14:38:52 +0100 Subject: [MEI-L] Final CfP: Music Encoding Conference 2022, Dalhousie University (Hybrid), 19-22 May 2022 Message-ID: <dc4c6f754598d70bdfe65c7ecdd7afa86403c6b0.camel@mdw.ac.at> [With apologies for cross-posting. Please distribute widely!] NEWS: * Paper deadline EXTENDED to December 23rd, 2021.  * Abstract deadline (with title and author list) REMAINS December 10th but updates are possible until December 23rd! * Student travel bursaries available-see 'Additional Information' below *********************************************************************** This is the final call for papers, posters, panels, and workshops for the Music Encoding Conference 2022. The Music Encoding Conference is the annual meeting of the Music Encoding Initiative (MEI) community and all who are interested in the digital representation of music. This cross-disciplinary venue is open to and brings together members from various encoding, analysis, and music research communities, including musicologists, theorists, librarians, technologists, music scholars, teachers, and students, and provides an opportunity for learning and engaging with and from each other. 
The MEC 2022 will take place Thursday 19th – Sunday 22nd May, 2022, at Dalhousie University, Nova Scotia, Canada. While we sincerely hope to welcome as many attendees in person as possible, this year’s Conference will again run in hybrid mode, allowing remote attendance where travel plans are affected by the ongoing pandemic. Please note that submission types and guidelines have been adapted this year in response to community feedback. Background ---------------- Music encoding is a critical component for fields and areas of study including computational or digital musicology, digital editions, symbolic music information retrieval, digital libraries, digital pedagogy, or the wider music industry. The Music Encoding Conference has emerged as the foremost international forum where researchers and practitioners from across these varied fields can meet and explore new developments in music encoding and its use. The Conference celebrates a multidisciplinary program, combining the latest advances from established music encodings, novel technical proposals and encoding extensions, and the presentation or evaluation of new practical applications of music encoding (e.g. in academic study, libraries, editions, pedagogy). Pre-conference workshops provide an opportunity to quickly engage with best practice in the community. Newcomers are encouraged to submit to the main program with articulations of the potential for music encoding in their work, highlighting strengths and weaknesses of existing approaches within this context. Following the formal program, an unconference session fosters collaboration in the community through the meeting of Interest Groups, and self-selected discussions on hot topics that emerge during the conference. For these meetings, there are various spaces generously provided by the hosting institution on May 22nd. Please be in touch with conference organizers if you need to reserve these spaces. 
For meetings on other days during or immediately after the conference, availability can be checked upon request. The program welcomes contributions from all those working on, or with, any music encoding. In addition, the Conference serves as a focus event for the Music Encoding Initiative community, with its annual community meeting scheduled the day following the main program. We in particular seek to broaden the scope of musical repertories considered, and to provide a welcoming, inclusive community for all who are interested in this work. Topics --------- The conference welcomes contributions from all those who are developing or applying music encodings in their work and research. Topics include, but are not limited to:     * data structures for music encoding     * music encoding standardisation     * music encoding interoperability / universality     * methodologies for encoding, music editing, description and analysis     * computational analysis of encoded music     * rendering of symbolic music data in audio and graphical forms     * conceptual encoding of relationships between multimodal music forms (e.g. 
symbolic music data, encoded text, facsimile images, audio)     * capture, interchange, and re-purposing of musical data and metadata     * ontologies, authority files, and linked data in music encoding and description     * (symbolic) music information retrieval using music encoding     * evaluation of music encodings     * best practice in approaches to music encoding and the use or application of music encodings in:     * music theory and analysis     * digital musicology and, more broadly, digital humanities     * digital editions     * music digital libraries     * bibliographies and bibliographic studies     * catalogues and collection management     * composition     * performance     * teaching and learning     * search and browsing     * multimedia music presentation, exploration, and exhibition     * machine learning approaches Submissions ----------------- In response to feedback received from the community on last year’s submission process, this year’s MEC will be accepting submissions in the following forms for presentation in the main conference programme (page counts include figures and tables, but exclude references): * Paper submissions of between 4 and 10 pages, * Poster submissions of up to 4 pages. MEC ‘22 also welcomes submissions of proposals for panel sessions and workshops. Submissions to each category will be reviewed according to specific expectations outlined in “Submission Guidelines” below. Finally, we will welcome submissions of late-breaking reports of up to 2 pages during a later submission period closer to the conference dates (see “Important Dates” below). Authors of paper submissions will be invited to present their work in a plenary setting if accepted. Authors of poster submissions will be given the opportunity to briefly present in a plenary setting (“lightning talk”) in addition to a poster session if accepted. 
Authors of late-breaking reports will be invited to present during a dedicated poster session outside of the main conference programme. All submissions to the main conference programme (papers, posters, and panel sessions) will undergo blind review by multiple members of the program committee before acceptance. Late-breaking reports will be lightly reviewed for relevance to the conference (see “Topics” above) and accepted in limited numbers based on the order in which submissions are received. Authors of workshop submissions will be contacted by the PC to coordinate workshop planning in consultation with the local organizers and contributors. Please note the deadlines for the submission process outlined under “Important Dates” below. Submission Guidelines ------------------------------ All submissions should be formatted in A4 size with 2.5cm margins, font size 12, single-spaced, justified, in a sans-serif typeface (e.g. Calibri), using APA-style citations and references, according to this template: https://tinyurl.com/mec2022-submission-template. Please take care to remove all identifying information from the submitted PDF before the upload; submissions should be anonymised for blind review. Submission types (page counts include figures and tables, but exclude references): * Paper submissions (4–10 pages) are expected to present overviews or detail specific aspects of ongoing or completed projects, present detailed case-studies or elaborated perspectives on best practices in the field, or provide other reports on topics relevant to the conference (see “Topics” above). The length requirement for submissions is intentionally broad this year, to allow authors flexibility in their reporting. Note that reporting is expected to be complete and self-contained in its argumentation. * Poster submissions (up to 4 pages) are expected to report on early-stage work, or to present experimental ideas for community feedback. 
The following types of abstract submissions are also welcome: * Panel discussions (3–5 pages). Submissions should describe the topic and nature of the discussion, along with the main theses and objectives of the proposed contributions; panel discussions are not expected to be a set of papers which could otherwise be submitted as individual papers. * Half- or full-day pre-conference workshops (up to 3 pages). Proposals should include possible conveners, a description of the workshop’s objective and proposed duration, as well as its logistical and technical requirements. * Late-breaking reports (up to 2 pages). The PC will coordinate the duration of proposed panels and workshops in consultation with the local organizers and contributors. Important Dates (Timezone: AoE / Anywhere on Earth) --------------------------------------------------- 10 December: Initial registration via our ConfTool website: www.conftool.net/music-encoding2022 with metadata of contributors including name(s) of author(s), affiliation(s) and email address(es), type and title of the submission, and a short one-paragraph abstract. 23 December (extended from 17 Dec.): Upload of anonymized submissions (see submission guidelines above) for review to ConfTool. Please be aware that ConfTool only accepts PDF submissions. Please remove all identifying information from the submitted PDF before the upload. 11 February: Notification of acceptance and invitation to authors of accepted submissions to contribute to the MEC proceedings. A formatted template pre-configured with your metadata will be provided on or about the day after notification. 13 March: Presenter registration deadline (papers, posters, workshops, panels). At least one author per accepted submission must register and confirm in-person or online participation. 3 April: Upload of accepted submissions in conference-ready version using the provided template. 
This version will be made available to registered conference attendees prior to the conference. 3 April–19 April: Submissions of late-breaking reports. A limited number of submissions will be accepted in the order received. Further details to be announced. 19–22 May: Conference. 5 June: Final upload of camera-ready papers for publication in the proceedings. Camera-ready versions are welcome to incorporate light modifications in response to feedback obtained during the conference. The MEC proceedings will be published under an open access license and with an individual DOI for each paper. Additional information ----------------------------- While we look forward to welcoming as many of you as possible in person at Dalhousie University, we are preparing for MEC ’22 within the context of ongoing uncertainty due to the Covid-19 pandemic. To allow the community to best adapt to this situation, we are organising this year’s conference with the following commitments in mind: * The conference will allow remote participation as in the previous years (MEC ’20 and ’21). Decisions on the precise implementation of this year’s hybrid format will be announced in due course and communicated widely (conference web page, mailing list, MEI Slack, Twitter) in the months leading up to the event. * We commit to the announced dates for MEC ’22 (19th–22nd May). There will be no rescheduling of the conference to fit projected changes in the pandemic situation this year. Additional details regarding registration, accommodation, etc. will be announced on the conference web page (https://music-encoding.org/conference/2022/). We especially encourage students and other first-time attendees to make a submission to the Music Encoding Conference. We can now confirm the availability of ten student bursaries of €500 / $800 CAD to support national and international travel for student presenters, and are seeking further ways to support their attendance. 
In case of questions, feel free to contact: conference2022 at music-encoding.org. Programme Committee ------------------------------- Daniel Bangert, Digital Repository of Ireland, Royal Irish Academy Benjamin Bohl, Department of Musicology, Goethe-Universität Frankfurt Susanne Cox, Beethoven-Haus Bonn Timothy Duguid, School of Humanities, University of Glasgow Norbert Dubowy, Digital Mozart Edition, Salzburg Mozarteum Foundation Maristella Feustle, University of North Texas Libraries Music Library Estelle Joubert, Dalhousie University Anna Kijas, Lilly Music Library, Tufts University David Lewis, University of Oxford | Goldsmiths University of London Sageev Oore, Dalhousie University | Vector Institute for Artificial Intelligence Anna Plaksin, Johannes Gutenberg-Universität Mainz | Birmingham City University Juliette Regimbal, McGill University Kristina Richts-Matthaei, Paderborn University David M. Weigl (Committee Chair), University of Music and Performing Arts Vienna Local Organizing Committee -------------------------------------- Jennifer Bain (Committee Chair), Dalhousie University Estelle Joubert, Dalhousie University Sageev Oore, Dalhousie University | Vector Institute for Artificial Intelligence Morgan Paul, Dalhousie University From stadler at edirom.de Sat Dec 18 17:45:15 2021 From: stadler at edirom.de (Peter Stadler) Date: Sat, 18 Dec 2021 17:45:15 +0100 Subject: [MEI-L] 2021 MEI Board elections started Message-ID: <03658A72-7879-460C-8293-CB821DF9F753@edirom.de> Dear MEI Community, the 2021 MEI Board elections for the term 2022–2024 have started a few moments ago. All of you should have received individual voting tokens by email from OpaVote (the system we’re using for the election) with noreply at opavote.com as sender. 
If by any chance you did not receive such an email, feel free to contact us at elections at music-encoding.org You can find the candidate statements online at https://music-encoding.org/community/mei-board/elections/2021/candidates Use this chance to get involved in the future of MEI ;-) For the record: The election will take place using Scottish STV with a Ranked Choice Voting method (https://www.opavote.com/methods/single-transferable-vote#scottish-stv) With all the best wishes, Peter Stadler and Laurent Pugin MEI election administrators 2021 by appointment of the MEI Board -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: Message signed with OpenPGP URL: <http://lists.uni-paderborn.de/pipermail/mei-l/attachments/20211218/c0637b29/attachment.sig>
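[Editor's note] The election emails above link to OpaVote's ranked-choice methods without describing the counting procedure itself. As a purely illustrative aside, the sketch below shows single-winner instant-runoff counting, the simplest member of the family that includes the Scottish STV method named in the final announcement; a full STV count additionally handles multi-seat quotas and surplus transfers, which this sketch omits. All candidate names and ballots here are hypothetical.

```python
from collections import Counter

def instant_runoff(ballots):
    """Repeatedly eliminate the candidate with the fewest first-choice
    votes, transferring those ballots to each voter's next surviving
    choice, until one candidate holds a majority of remaining ballots."""
    ballots = [list(b) for b in ballots]
    while True:
        # Count current first choices, ignoring exhausted ballots.
        tally = Counter(b[0] for b in ballots if b)
        total = sum(tally.values())
        leader, votes = tally.most_common(1)[0]
        if votes * 2 > total or len(tally) == 1:
            return leader
        # Eliminate the weakest candidate and strike them from every ballot.
        loser = min(tally, key=tally.get)
        ballots = [[c for c in b if c != loser] for b in ballots]

# Hypothetical ballots: each list ranks candidates from most to least preferred.
ballots = [["A", "B"], ["A", "C"], ["B", "C"], ["B", "A"], ["C", "B"]]
print(instant_runoff(ballots))  # C is eliminated first; B then has a majority
```

Scottish STV, as used for the Board election, generalizes this eliminate-and-transfer loop to fill several seats at once rather than a single winner.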