From kdesmond at brandeis.edu Thu Jan 18 02:51:41 2018
From: kdesmond at brandeis.edu (Karen Desmond)
Date: Wed, 17 Jan 2018 20:51:41 -0500
Subject: [MEI-L] where to encode information about URL relationships
Message-ID:

Hi,

I'd like some advice about the best place to store website links that contain further information about the compositions encoded in my MEI files. Specifically, I want to include with the MEI file the following:

- A link to the composition on the DIAMM website
- A link to the URL of the IIIF manifest pointing to the manuscript source that the MEI file's transcription is based on
- A link to the manuscript source on the DIAMM website
- A link to any other available online source for the manuscript images (if the manuscript is not yet available through IIIF or DIAMM)

At first I thought about putting this information in and using an of and the for the web addresses, but I'm not sure that meiHead is the best place for this set of links? In addition, I also want to record the folio numbers of the manuscript, which again I thought probably ought to go in sourceDesc (pasted below is an example of sourceDesc from one of our files), but I couldn't see where this information would go.

Scanned from Polyphonic Music of the Fourteenth Century, vol. 1 (PMFC)
[Primary manuscript source for this encoding]
<identifier>Bibliothèque nationale de France, fr. 146</identifier>
Scan checked and corrected against this manuscript
[Other concordant sources] Br, Koblenz, Robertsbridge

Many thanks and all the best,
Karen

Karen Desmond
Assistant Professor of Music & Chair, Graduate Program in Musicology, Brandeis University
Visiting Assistant Professor, Department of Music, Harvard University (Spring 2018)
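One possible shape for such links, sketched in MEI: everything below is an illustrative assumption, not an authoritative answer. The element placement, the use of <locus> for folio numbers, and the URLs are placeholders that would need to be checked against the MEI version and customization in use.

```xml
<!-- Sketch only: element names, placement, and URLs are illustrative assumptions -->
<sourceDesc>
  <source>
    <identifier>Bibliothèque nationale de France, fr. 146</identifier>
    <!-- hypothetical: folio range of the transcribed composition -->
    <locus>fol. 1r–2v</locus>
    <notesStmt>
      <annot>Manuscript source on DIAMM:
        <ptr target="https://www.diamm.ac.uk/sources/..."/></annot>
      <annot>IIIF manifest of the digitized manuscript:
        <ptr target="https://example.org/iiif/manifest.json"/></annot>
    </notesStmt>
  </source>
</sourceDesc>
```

The general idea is to keep one <source> per manuscript and attach each external link as a <ptr> inside an <annot>, rather than scattering URLs through the header.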
From esfield at stanford.edu Thu Jan 18 03:26:51 2018
From: esfield at stanford.edu (Eleanor Selfridge-Field)
Date: Thu, 18 Jan 2018 02:26:51 +0000
Subject: [MEI-L] where to encode information about URL relationships
In-Reply-To:
References:
Message-ID:

Hi, Karen,

Thanks for bringing up this important point. I'm concerned about the non-standardization of citations such as the one you mention. This kind of metadata will be very important to third-party users in the future, and we need to provide a path to identify sources, technical specs, and indications of ownership of the underlying content, where pertinent. The last is pertinent to DIAMM, so this is a useful context in which to discuss the broader issues.

Eleanor

From: mei-l [mailto:mei-l-bounces at lists.uni-paderborn.de] On Behalf Of Karen Desmond
Sent: Wednesday, January 17, 2018 5:52 PM
To: Music Encoding Initiative
Subject: [MEI-L] where to encode information about URL relationships

[quoted message trimmed]
From DanielAlles at stud.uni-frankfurt.de Tue Jan 23 15:41:01 2018
From: DanielAlles at stud.uni-frankfurt.de (Daniel Alles)
Date: Tue, 23 Jan 2018 15:41:01 +0100
Subject: [MEI-L] Research Associate (100%): Richard Wagner Schriften (Univ. Wuerzburg)
In-Reply-To: <1458d9c9-c0af-f984-cdfb-8be90d57141c@uni-wuerzburg.de>
References: <17d7e634-7edd-6822-04fc-5ca158ae174c@uni-wuerzburg.de> <1458d9c9-c0af-f984-cdfb-8be90d57141c@uni-wuerzburg.de>
Message-ID: <20180123154101.Horde.CswaXXTEbUyFDElNx38h0Ig@webmail.server.uni-frankfurt.de>

Dear Torsten,

I was delighted to read your announcement on the MEI mailing list and have sent an application to Professor Konrad. Unfortunately, I have so far received no confirmation that it even arrived. Could you perhaps tell me informally whether my application has been received? And perhaps also how my chances stand...? I will receive my Magister grade on Monday, after my final examination, and would submit it immediately afterwards.

Best regards,
Daniel Alles

Quoting Torsten Roeder:

> Dear MEI List,
>
> please note the invitation for applications in the attached file. We are
> looking for a full-time research associate for the project "Richard
> Wagner Schriften" (print/digital scholarly edition). TEI/XML is used
> throughout, partially also MEI, so this might be of interest to the
> list members.
>
> Please excuse any cross-postings.
>
> Best regards
> Torsten Roeder
>
> --
> Torsten Roeder
> Research Associate
> Richard Wagner Schriften
>
> Julius-Maximilians-Universität Würzburg
> Institut für Musikforschung
> Domerschulstraße 13
> D-97070 Würzburg
>
> Tel +49 (0)931 31-85167
> Mail torsten.roeder at uni-wuerzburg.de
> WWW http://www.musikwissenschaft.uni-wuerzburg.de/rws

From DanielAlles at stud.uni-frankfurt.de Tue Jan 23 15:54:03 2018
From: DanielAlles at stud.uni-frankfurt.de (Daniel Alles)
Date: Tue, 23 Jan 2018 15:54:03 +0100
Subject: [MEI-L] Research Associate (100%): Richard Wagner Schriften (Univ. Wuerzburg)
In-Reply-To: <20180123154101.Horde.CswaXXTEbUyFDElNx38h0Ig@webmail.server.uni-frankfurt.de>
References: <17d7e634-7edd-6822-04fc-5ca158ae174c@uni-wuerzburg.de> <1458d9c9-c0af-f984-cdfb-8be90d57141c@uni-wuerzburg.de> <20180123154101.Horde.CswaXXTEbUyFDElNx38h0Ig@webmail.server.uni-frankfurt.de>
Message-ID: <20180123155403.Horde.TWKE9kMcDOnQ_g199xhYoRJ@webmail.server.uni-frankfurt.de>

Dear all,

sorry for that spam mail, it was meant for Torsten alone.

Daniel

Quoting Daniel Alles:

> [quoted message trimmed]
> _______________________________________________
> mei-l mailing list
> mei-l at lists.uni-paderborn.de
> https://lists.uni-paderborn.de/mailman/listinfo/mei-l

From f.wiering at UU.NL Fri Feb 2 15:04:13 2018
From: f.wiering at UU.NL (Frans Wiering)
Date: Fri, 2 Feb 2018 15:04:13 +0100
Subject: [MEI-L] Last opportunity for submitting: Approaches to research on music and dance in the internet era, Beijing 11-14 July 2018 (deadline 9 Feb)
Message-ID: <56c6e541-7e10-255c-5b69-39df7417b578@UU.NL>

APPROACHES TO RESEARCH ON MUSIC AND DANCE IN THE INTERNET ERA
INTERNATIONAL FORUM OF FIVE MUSIC RESEARCH SOCIETIES
ICTM - SEM - IMS - IAML - IASPM
Central Conservatory of Music, Beijing, 11-14 July 2018

Call for Papers for IMS sessions

We have extended the deadline for submission to 9 February 2018. Please submit your abstract by mail to f.wiering at uu.nl. A quick word of clarification: submissions need not be about music AND dance but may address only one of the two. Relevant work in computational musicology and MIR is highly appreciated!

/Forum Abstract/

The internet age has brought forward a series of new approaches to the research of music and dance, and new patterns of scholarship have been developed, both within different disciplines and across the globe. Scholars from five global research associations (ICTM, SEM, IMS, IAML, IASPM) will meet to present and discuss the new methodologies now emerging, looking both for commonalities and distinctive new departures.
Inter-, multi-, trans- and cross-disciplinary approaches will be welcome, including those that reach out beyond the specifically academic domain toward new social and economic usages. We ask how these new possibilities provide a means to generate respect for and engagement within traditional, historical or popular forms of music and dance, as well as allowing our imagination to reach knowledgeably across conventional geographical, social and historical boundaries. The Forum will interleave discipline-specific and interdisciplinary sessions.

/Submission Guidelines/

Each partner society and the host institution will gather a set of contributions, which will be organized into a programme by a joint programme committee that includes representatives from each society as well as from the host institution. The following guidelines are specific to those seeking an invitation as part of the IMS block of participants. (Note that if you are a member of more than one of the scholarly associations listed above, you may be considered by each for inclusion in the programme as part of their block, but no individual can finally deliver more than one paper at the Forum.)

Proposals are now welcome for individual papers (20 minutes' duration, followed by 10 minutes for questions and discussion) and panels (60/90 minutes, followed by up to 30 minutes of discussion). Proposals should be submitted to the IMS representative on the joint programme committee, Frans Wiering, at f.wiering at uu.nl, by Friday 9 February 2018. A maximum of 20 presenters will be selected for participation in the IMS block.

Each proposal should contain:
* (each) speaker's name, title, affiliation (where applicable), and contact email;
* an abstract summarising the paper or panel. Abstracts for individual papers should be 200-250 words in length; those for panels should be 750-1,000 words;
* the proposal can be submitted as an email attachment in doc, docx, or rtf format;
* use "ICTM Beijing Forum proposal" as the subject line for your email;
* prospective participants should preferably be members of IMS, an IMS study group or, in the case of multi-disciplinary proposals, of one of the other scholarly associations listed.

/Local Arrangements/

The forum, coordinated by Svanibor Pettan and Zhang Boyu, will be hosted by the Central Conservatory of Music, Beijing. Partially subsidized hotel accommodation is available for those selected, and all meals will be provided on a complimentary basis. The registration fee for those selected will be paid by the Central Conservatory of Music. The Forum also includes a programme of concerts and associated events. Attendees will need to be able to pay their own travel costs.

--
---------------------------------------------------------------------
dr. Frans Wiering
Director of Education, Information Science
Associate Professor Interaction Technology
Digital Humanities Research Fellow
---------------------------------------------------------------------
Utrecht University
Department of Information and Computing Sciences (ICS)
Buys Ballot Building, office 482
Princetonplein 5, De Uithof
PO Box 80.089
NL-3508 TB Utrecht
mail: F.Wiering at uu.nl
tel: +31-30-2536335
www: http://www.uu.nl/staff/FWiering/0
---------------------------------------------------------------------

From raffaeleviglianti at gmail.com Wed Feb 28 19:44:09 2018
From: raffaeleviglianti at gmail.com (Raffaele Viglianti)
Date: Wed, 28 Feb 2018 18:44:09 +0000
Subject: [MEI-L] MEC 2018 registration now open
Message-ID:

Dear MEI List,

Registration for the Music Encoding Conference 2018, 22-25 May 2018 at the University of Maryland, is now open!
Head to the conference website at http://music-encoding.org/conference/2018/ to register and to access information about the program, travel and accommodation. Please note that early bird registration ends on April 22nd.

We look forward to seeing you at the conference!

Kind regards,
Raffaele Viglianti, on behalf of the MEC2018 organizers

From aseipelt at mail.uni-paderborn.de Tue Mar 6 14:12:57 2018
From: aseipelt at mail.uni-paderborn.de (Agnes Seipelt)
Date: Tue, 6 Mar 2018 14:12:57 +0100
Subject: [MEI-L] Encoding of lyrics under a rest
Message-ID: <8DD9C321-353B-4758-A014-9485FA661B8B@mail.uni-paderborn.de>

Dear MEI community,

I hope that someone can help me with an encoding problem. In the manuscript there are lyrics under a rest (that is obviously a mistake, because the text is also crossed out). You can see it in the screenshot.

I use MEI-CMN (and also tried MEI-all); rests cannot contain lyrics or verse, so the only solution would be to integrate the lyrics in a del element within a rest. In this case the lyrics are deleted, so this would be a solution. But I think it is "tag abuse", because the del element is necessary to validate the document.

Another idea is to encode the lyrics as a control event within the measure: … Unfortunately, there is no attribute like @tstamp or @startid to point to the element the lyrics are related to. There is only @synch, which "points to elements that are synchronous with the current element". But I think this would also be "tag abuse".

Does somebody have an idea how to solve this problem? Thanks in advance!

Agnes

-----
Agnes Seipelt M.A.
Research Associate
Musikwissenschaftliches Seminar Detmold/Paderborn
FORUM Wissenschaft | Bibliothek | Musik
Hornsche Str. 39
32756 Detmold
Tel. +49 5231 975677
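Agnes's inline examples were scrubbed from the archive; the two approaches she describes might look roughly like the following sketch. This is schematic MEI only: the xml:ids, durations and the syllable "la" are placeholders I have invented, and neither variant is guaranteed to validate against MEI-CMN.

```xml
<!-- Approach 1 (schematic): the deleted syllable wrapped in <del> inside the rest -->
<layer>
  <rest xml:id="r1" dur="4">
    <del>
      <verse n="1">
        <syl>la</syl>
      </verse>
    </del>
  </rest>
</layer>

<!-- Approach 2 (schematic): the lyrics as a control event in the measure;
     note the lack of @tstamp/@startid to anchor the text to the rest -->
<measure n="1">
  <staff n="1">
    <layer>
      <rest xml:id="r2" dur="4"/>
    </layer>
  </staff>
  <lyrics staff="1">
    <verse n="1">
      <syl>la</syl>
    </verse>
  </lyrics>
</measure>
```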
-------------- next part --------------
A non-text attachment was scrubbed...
Name: PastedGraphic-1.png
Type: image/png
Size: 40694 bytes

From berndt at hfm-detmold.de Tue Mar 6 14:51:20 2018
From: berndt at hfm-detmold.de (Axel Berndt)
Date: Tue, 6 Mar 2018 14:51:20 +0100
Subject: [MEI-L] Encoding of lyrics under a rest
In-Reply-To: <8DD9C321-353B-4758-A014-9485FA661B8B@mail.uni-paderborn.de>
References: <8DD9C321-353B-4758-A014-9485FA661B8B@mail.uni-paderborn.de>
Message-ID: <288d8172-bfcc-1b86-0736-cf7179d779ae@hfm-detmold.de>

Hi Agnes,

here comes my suggestion:

Best,
Axel

--
Dr.-Ing. Axel Berndt
Phone: +49 (0) 5231 / 975 874
Center of Music and Film Informatics
Ostwestfalen-Lippe University of Applied Sciences
Detmold University of Music
Hornsche Strasse 44, 32756 Detmold, Germany

From annplaksin at gmx.net Tue Mar 6 15:34:03 2018
From: annplaksin at gmx.net (Anna Plaksin)
Date: Tue, 6 Mar 2018 15:34:03 +0100
Subject: [MEI-L] Encoding of lyrics under a rest
In-Reply-To: <8DD9C321-353B-4758-A014-9485FA661B8B@mail.uni-paderborn.de>
References: <8DD9C321-353B-4758-A014-9485FA661B8B@mail.uni-paderborn.de>
Message-ID: <0LbuIK-1eRXpO3nfA-00jFdp@mail.gmx.com>

Hi Agnes,

why do you think using <del> is tag abuse? You added a screenshot of a source showing this exact case to us. So, if you want to encode this particular case, I don't think it is tag abuse to use <del>, because it describes the case exactly as it is. Indeed, using <del> without the notion of describing the deleted text below that rest would be tag abuse.

In another case this discussion could be made about the use of <sic>: is it appropriate to use <sic> in a case where something so severely erroneous needs to be encoded that it could not be valid without it?

I think such decisions depend on the particular case. A transcription of a manuscript has other requirements than a critical or a performance edition.
It would be nice if you could give a little more context on what you actually want to achieve in that particular case. That would help a lot.

Thanks in advance.

Regards,
Anna

From: Agnes Seipelt
Sent: Tuesday, 6 March 2018 14:12
To: Music Encoding Initiative
Subject: [MEI-L] Encoding of lyrics under a rest

[quoted message trimmed]
From aseipelt at mail.uni-paderborn.de Tue Mar 6 17:04:44 2018
From: aseipelt at mail.uni-paderborn.de (Agnes Seipelt)
Date: Tue, 6 Mar 2018 17:04:44 +0100
Subject: [MEI-L] Encoding of lyrics under a rest
In-Reply-To: <0LbuIK-1eRXpO3nfA-00jFdp@mail.gmx.com>
References: <8DD9C321-353B-4758-A014-9485FA661B8B@mail.uni-paderborn.de> <0LbuIK-1eRXpO3nfA-00jFdp@mail.gmx.com>
Message-ID:

Dear Axel and Anna,

thanks for your answers!

@Axel: I had this in mind, too. The problem is that in this case the lyrics would go with an invisible note (e.g. a space), and the rest itself would again have no lyrics.

@Anna: I don't mean that the del itself is tag abuse, but the fact that this construction is only valid because of the del. If the text were not deleted, the construction with rest and lyrics would not be valid. I hope you understand what I mean ;)

For the context: I'd like to encode all the written text (music and text) of the manuscript, like a nearly diplomatic edition.

Best,
Agnes

Agnes Seipelt M.A.
Research Associate
Musikwissenschaftliches Seminar Detmold/Paderborn
FORUM Wissenschaft | Bibliothek | Musik
Hornsche Str. 39
32756 Detmold
Tel. +49 5231 975677

> On 06.03.2018, at 15:34, Anna Plaksin wrote:
>
> [quoted message trimmed]
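The workaround Agnes alludes to here (attaching the deleted syllable to an invisible event rather than to the rest) could be sketched as follows. This is an assumption about what was proposed, not the scrubbed original, and whether <space> admits <verse> at all depends on the MEI version or customization in use.

```xml
<!-- Schematic: the sounding rest in one layer; a parallel layer holds an
     invisible <space> carrying the deleted syllable (placeholder "la") -->
<staff n="1">
  <layer n="1">
    <rest dur="4"/>
  </layer>
  <layer n="2">
    <space dur="4">
      <del>
        <verse n="1">
          <syl>la</syl>
        </verse>
      </del>
    </space>
  </layer>
</staff>
```

As Agnes notes, the drawback of this shape is that the text ends up attached to an invisible event, while the visible rest still carries no lyrics.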
From annplaksin at gmx.net Tue Mar 6 18:04:55 2018
From: annplaksin at gmx.net (Anna Plaksin)
Date: Tue, 6 Mar 2018 18:04:55 +0100
Subject: [MEI-L] Encoding of lyrics under a rest
In-Reply-To:
References: <8DD9C321-353B-4758-A014-9485FA661B8B@mail.uni-paderborn.de> <0LbuIK-1eRXpO3nfA-00jFdp@mail.gmx.com>
Message-ID: <0MQ2zr-1eobBN2nI3-005JIp@mail.gmx.com>

Hi Agnes,

thanks for the context. :)

In the case of a 'nearly diplomatic edition' I don't see the problem with using <del>. It describes the manuscript, and it is valid. Indeed, it is only valid because of the <del>... because it is in fact a notation error. In your case it was corrected by a hand; if not, it would be marked as an error with <sic>. In fact, I assume you want to preserve this apparent error in your encoding. So I think the proper way is to take the toolbox MEI gives you for editorial markup and use it.

When dealing with music manuscripts, we need to deal with errors, and they are not always subtle enough to pass every logical validation. The purpose of accurate transcription of a manuscript can contradict the need of a representation system for music notation to be logically correct. You actually have to force your notation software quite a bit if you want to create an example image of a notation error... in such situations, I embrace the possibilities of editorial markup in MEI.

I would not see it as tag abuse to use elements in their intended way (marking errors or other source-related phenomena). It would be abuse if you used another element with a different purpose only to get the stuff somehow encoded.

My advice would still be: if you want to document this corrected error, use <del>. It's not a bug, it's a feature.

If anyone holds other views, I'd appreciate learning more.

Regards,
Anna

From: Agnes Seipelt
Sent: Tuesday, 6 March 2018 17:04
To: Music Encoding Initiative
Subject: Re: [MEI-L] Encoding of lyrics under a rest

[quoted message trimmed]
From david.weigl at oerc.ox.ac.uk Fri Mar 9 18:34:57 2018
From: david.weigl at oerc.ox.ac.uk (David M. Weigl)
Date: Fri, 09 Mar 2018 17:34:57 +0000
Subject: [MEI-L] [CfP] Intl. Workshop on Semantic Applications for Audio and Music - SAAM 2018
Message-ID: <1520616897.10272.126.camel@oerc.ox.ac.uk>

With apologies for cross-postings.
Please forward to interested colleagues and mailing lists.

CALL FOR PAPERS
=====================================================================
International Workshop on Semantic Applications for Audio and Music
SAAM 2018
An ISWC 2018 workshop, Oct 9, 2018, Monterey, California, USA
http://saam.semanticaudio.ac.uk/
=====================================================================

The SAAM organising committee would like to invite researchers, engineers, developers and all those interested in semantic applications for audio and music to submit their work (long/short/challenge papers) to SAAM 2018, held in conjunction with the International Semantic Web Conference (ISWC 2018). SAAM is a venue for dissemination and discussion, identifying intersections in the challenges and solutions which cut across musical areas. In finding common approaches and coordination, SAAM will set the research agenda for advancing the development of semantic applications for audio and music.

Submission deadline: 18 May 2018 (23:59 UTC-11) (see IMPORTANT DATES below)
Workshop web site:
Submissions via:
Contact: saam2018 at easychair.org

The workshop proceedings will be made available in the ACM Digital Library.

BACKGROUND AND OBJECTIVES
-------------------------
Music provides a fascinating and challenging field for the application of Semantic Web technologies. Music is culture. Yet as knowledge, music takes fundamentally different forms: as digital audio waveforms recording a performance (e.g. MP3); symbolic notation prescribing a work (scores, Music Encoding Initiative); instructions for synthesising or manipulating sounds (MIDI, Digital Audio Workstations); catalogues of performance or thematic aggregations (playlists, setlists); psychological responses to listening; and as an experienced and interpretable art form. How can these heterogeneous structures be linked to each other? To what end? How do we study these materials?
Can computational and knowledge management analyses yield insight within and across musics? Semantic Web technologies have been applied to these challenges -- across industry, memory institutions and academia -- but with results reported to conferences representing the communities of different disciplines of musical study. The workshop will bring together established members of the Music Informatics and ISWC communities with users, practitioners and researchers beyond its normal boundaries. SAAM will encourage a multidisciplinary audience, providing attendees with the opportunity of learning about the needs and experiences of these users. Conversely, music specialists will be availed of the latest developments in the Semantic Web, and how they can be applied to their work. SAAM also invites the wider community to discover “what makes music interesting!” TOPICS OF INTEREST ------------------ Topics of interest for the workshop include, but are not limited to: Consuming and exploiting music and media data on the Semantic Web * Music recommender systems using Semantic Web data * Visualisations of music and time-based media using Semantic Web data * Semantic Web-based automation in content management, distribution, archiving and curation * Emerging interchange standards using Semantic Web technologies (e.g. 
IIIF AV) * Music and media content resolution * Semantic Web in musicology * Sonification and composition techniques in the context of the Semantic Web Producing and publishing music and media-related data on the Semantic Web * Annotations, ground truth collections and crowd-sourcing for music and media collections * Uniquely identifying music resources on the Web * Automatic interlinking of music- and media-related datasets * Learning ontologies and structured music data from Web mining * Publishing the results of content-based analysis on the Semantic Web * Semantic Web technologies in the recording studio * Capturing annotations at source in composition and performance Managing music and media-related data * Management of music libraries, archives and digital collections * Managing music analysis services and workflows * Semantic Web services for music and media processing, rights, policies, payment * Preserving Semantic Web data through remixing and re-use * End-to-end semantic flows throughout the music creation and interpretation lifecycle Modelling music and media-related data * Music and media metadata, from production to personal applications * Ontologies and knowledge representation for the music and time-based media domains * Representations for time-based navigation e.g. musical and narrative structures SUBMISSIONS ----------- SAAM invites short, long, and challenge paper submissions. Papers will be peer reviewed by 2-3 members of the programme committee following a single-blind review process. Please produce your paper using the ACM template and submit it to SAAM on EasyChair by 18th May 2018 (see IMPORTANT DATES). SUBMISSION FORMATS ------------------ *Full papers* (maximum 8 pages plus references) should report on substantially complete and mature work, or efforts that have reached an important milestone.
*Short papers* (maximum 4 pages plus references) may highlight demonstrators or preliminary results to bring them to the community’s attention, or present emerging technologies and approaches as position papers. For both full and short papers, we encourage submissions which report the practical application of semantic technologies to the audio and music domain, and for which demonstrators can be shown during presentation of the paper at the workshop. We also encourage sharing of demonstrators amongst participants during the workshop coffee break. Accepted full and short papers will be included directly in the workshop proceedings to be published in the ACM ICPS, and presented at the workshop. *Music and Audio Applications Challenge papers* (maximum 1 page plus references), henceforth ‘Challenge papers’, encourage the attendance and engagement of users (or potential users) of Semantic Web technologies through music and audio applications. Challenge papers should take the form of an extended abstract or short position paper reporting or motivating a specific problem, use case, or application. Challenge papers need not report a completed implementation or evaluation, and may be illustrative or speculative in proposing an application of semantic technology from the perspective of a clearly articulated user need. Accepted Challenge papers will be incorporated within a single consolidated article, edited by the chairs, and included in the workshop proceedings. Challenge papers will be presented as short pitches, followed by collective discussion, within a dedicated session at the workshop. Summary of submission lengths (details above): * Long papers: up to 8 pages (excluding references), * Short papers: up to 4 pages (excluding references), * Challenge papers: 1 page extended abstracts (excluding references). 
All submitted papers must: * be written in English; * contain author names, affiliations and e-mail addresses; * be formatted according to the ACM SIG Proceedings template, using 9pt Type 1 font; * be in PDF format (please ensure that the PDF can be viewed on any platform) and formatted for A4 size. It is the authors’ responsibility to ensure that their submissions adhere strictly to the required format. Submissions that do not comply with the above requirements may be rejected without review. Please note that at least one author from each accepted paper must attend the workshop to present their work, and must be registered by 29th June 2018 (see IMPORTANT DATES). ACM template: Submissions: Contact email: saam2018 at easychair.org The workshop proceedings will be published in the ACM ICPS and will be made available in the ACM Digital Library. Please use the ‘ACM SigConf’ version of the ‘2017 ACM Master Article Template’: for MS Word (Mac and Windows versions are available), please use the ACM_SigConf template; for LaTeX (version 1.50), please use the ACM_SigConf template from the master and see sample-sigconf.tex IMPORTANT DATES --------------- Paper submission deadline: 18th May 2018 (23:59 UTC-11) Notification of acceptance: 27th June 2018 Registration deadline for one author per accepted paper: 29th June 2018 Camera ready submission deadline: 24th July 2018 Workshop: 9th October 2018 WORKSHOP ORGANISATION --------------------- Programme chairs Sean Bechhofer, School of Computer Science, University of Manchester George Fazekas, Centre for Digital Music (C4DM), Queen Mary University of London (QMUL) Kevin Page, Oxford e-Research Centre, Dept. Engineering Science, University of Oxford Organising Committee members Miguel Ceriani (Website Chair) David Weigl (Publicity and Proceedings Chair) tbc.
Programme Committee Alessandro Adamou, Insight Centre Miguel Ceriani, Queen Mary University of London Mathieu d'Aquin, Insight Centre David De Roure, University of Oxford Alan Dix, University of Birmingham Stephen Downie, University of Illinois Frederic Font, Universitat Pompeu Fabra Nick Gibbins, University of Southampton Andrew Hankinson, Bodleian Libraries Kevin Kishimoto, Stanford University Libraries Graham Klyne, University of Oxford David Lewis, University of Oxford Pasquale Lisena, EURECOM Albert Meroño Peñuela, VU Amsterdam Terhi Nurmikko-Fuller, Australian National University Mark Sandler, Queen Mary University of London Stefan Schlobach, VU Amsterdam Xavier Serra, Universitat Pompeu Fabra Florian Thalmann, Queen Mary University of London Raphaël Troncy, EURECOM Ruben Verborgh, IDLab David Weigl, University of Oxford Tillman Weyde, City University of London Thomas Wilmering, Queen Mary University of London From kepper at edirom.de Mon Mar 12 20:58:05 2018 From: kepper at edirom.de (Johannes Kepper) Date: Mon, 12 Mar 2018 20:58:05 +0100 Subject: [MEI-L] Call for Hosting MEC 2020 Message-ID: <82A00656-6551-460E-97BC-7C351F4AC976@edirom.de> PLEASE CIRCULATE WIDELY Greetings, The MEI Board invites proposals for the organization of the 8th edition of the annual Music Encoding Conference, to be held in 2020. As many of you are aware, among its activities MEI oversees the organization of an annual conference, the Music Encoding Conference (MEC), to provide a meeting place for scholars interested in discussing the modeling, generation and uses of music encoding. While the conference has an emphasis on the development and uses of MEI, other contributions related to general approaches to music encoding are always welcome, as an opportunity for exchange between scholars from various research communities, including technologists, librarians, historians, and theorists.
In order to assist prospective organizers, the MEI Board has published ‹Hosting Guidelines for the Music Encoding Conference› at . Historically, the conference has been organized by institutions involved in MEI, such as MEI member institutions or those hosting MEI-based projects, but proposals from any interested group or institution will be happily received, and ideas other than those expressed in the official document are welcome. While MEC venues have alternated between Europe and North America in the past, there is no such requirement, so applications from anywhere are invited. The deadline for sending proposals is 1 August 2018. The Board will notify bidders of its decision shortly after that, and we will jointly inform the MEI community through MEI-L thereafter. This year's deadline is later than in previous years to allow potential bidders to discuss their proposals with this year's organisers and the MEI Board in Maryland. Please direct all proposals and inquiries to . On behalf of the MEI Board, best wishes, Johannes Kepper -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: Message signed with OpenPGP URL: From raffaeleviglianti at gmail.com Tue Apr 17 20:02:24 2018 From: raffaeleviglianti at gmail.com (Raffaele Viglianti) Date: Tue, 17 Apr 2018 14:02:24 -0400 Subject: [MEI-L] Reminder: MEC2018 Early Bird registration closes on April 22 Message-ID: Dear MEI-L Just a reminder that the Early Bird registration period to the Music Encoding Conference at the University of Maryland closes on *April 22nd*. Register today! http://music-encoding.org/conference/2018/ We look forward to welcoming you at the conference 22-25 May 2018. Kind regards, Raff Viglianti on behalf of the MEC2018 organizers -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From david_day at byu.edu Tue Apr 17 20:15:57 2018 From: david_day at byu.edu (David Day) Date: Tue, 17 Apr 2018 18:15:57 +0000 Subject: [MEI-L] Reminder: MEC2018 Early Bird registration closes on April 22 In-Reply-To: References: Message-ID: The registration site/process is not working. The captcha feature is not working. Can someone let us know when it is fixed? On Apr 17, 2018, at 12:02 PM, Raffaele Viglianti > wrote: Dear MEI-L Just a reminder that the Early Bird registration period to the Music Encoding Conference at the University of Maryland closes on April 22nd. Register today! http://music-encoding.org/conference/2018/ We look forward to welcoming you at the conference 22-25 May 2018. Kind regards, Raff Viglianti on behalf of the MEC2018 organizers _______________________________________________ mei-l mailing list mei-l at lists.uni-paderborn.de https://lists.uni-paderborn.de/mailman/listinfo/mei-l -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Screen Shot 2018-04-17 at 12.14.16 PM.png Type: image/png Size: 212260 bytes Desc: Screen Shot 2018-04-17 at 12.14.16 PM.png URL:
> > > > > > > > On Apr 17, 2018, at 12:02 PM, Raffaele Viglianti < > raffaeleviglianti at gmail.com> wrote: > > Dear MEI-L > > Just a reminder that the Early Bird registration period to the Music > Encoding Conference at the University of Maryland closes on *April 22nd*. > > > Register today! http://music-encoding.org/conference/2018/ > > > We look forward to welcoming you at the conference 22-25 May 2018. > > Kind regards, > Raff Viglianti on behalf of the MEC2018 organizers > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l > > > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Screen Shot 2018-04-17 at 12.14.16 PM.png Type: image/png Size: 212260 bytes Desc: not available URL: From david.lewis at oerc.ox.ac.uk Wed Apr 18 16:37:30 2018 From: david.lewis at oerc.ox.ac.uk (David Lewis) Date: Wed, 18 Apr 2018 15:37:30 +0100 Subject: [MEI-L] Digital Humanities at Oxford Summer School: Digital Musicology Workshop Message-ID: <074560C5-E774-4193-8CA4-967A71190A98@oerc.ox.ac.uk> (apologies for cross-postings) SUMMER SCHOOL WORKSHOP: INVITATION TO REGISTER Digital Musicology: Applied computational and informatics methods for enhancing musicology Dates: 2-6 July Registration now open: closes 18 June A wealth of music and music-related information is now available digitally, offering tantalizing possibilities for digital musicologies. These resources include large collections of audio and scores, bibliographic and biographic data, and performance ephemera -- not to mention the ‘hidden’ existence of these in other digital content. 
With such large and wide-ranging opportunities come new challenges in methods, principally in adapting technological solutions to assist musicologists in identifying, studying, and disseminating scholarly insights from amongst this ‘data deluge’. This workshop provides an introduction to computational and informatics methods that can be, and have been, successfully applied to musicology. Many of these techniques have their foundations in computer science, library and information science, mathematics and most recently Music Information Retrieval (MIR); sessions are delivered by expert practitioners from these fields and presented in the context of their collaborations with musicologists, and by musicologists relating their experiences of these multidisciplinary investigations. The workshop comprises a series of lectures and hands-on sessions, supplemented with reports from musicology research exemplars. Theoretical lectures are paired with practical sessions in which attendees are guided through their own exploration of the topics and tools covered. Laptops will be loaned to attendees with the appropriate specialised software installed and preconfigured. Participants also attend afternoon lectures and masterclasses with participants from other workshops; these sessions cover a broad range of digital humanities topics. There will also be optional evening events (some at additional cost), including a guided tour of Oxford, an evening drinks and poster session at the Weston Library and the TORCH lecture. Participants are invited to submit posters for the welcome reception at the Weston Library by Wednesday 18th May. Please note that numbers for this workshop are limited, and we cannot guarantee that places will still be available towards the end of the registration period.
Summer School site: Contact: From david_day at byu.edu Thu Apr 19 19:32:15 2018 From: david_day at byu.edu (David Day) Date: Thu, 19 Apr 2018 17:32:15 +0000 Subject: [MEI-L] Reminder: MEC2018 Early Bird registration closes on April 22 In-Reply-To: References: Message-ID: <9BB4D1F3-38EA-45BC-A196-F86CCBF65EA0@byu.edu> I have been trying for several days, but I still cannot succeed in the final confirmation. Still getting an error that captcha is not working. Now there is no captcha box, but I still get the same error message. I have tried different browsers. Are others also experiencing this problem? On Apr 17, 2018, at 12:55 PM, Raffaele Viglianti > wrote: Hi David, Thanks for reporting this, we're investigating it. In the mean time, I believe you'll be able to get past the captcha as long as you enter any text. All best, Raff On Tue, Apr 17, 2018 at 2:15 PM, David Day > wrote: The registration site/process is not working. The captcha feature is not working. Can someone let us know when it is fixed? On Apr 17, 2018, at 12:02 PM, Raffaele Viglianti > wrote: Dear MEI-L Just a reminder that the Early Bird registration period to the Music Encoding Conference at the University of Maryland closes on April 22nd. Register today! http://music-encoding.org/conference/2018/ We look forward to welcoming you at the conference 22-25 May 2018. Kind regards, Raff Viglianti on behalf of the MEC2018 organizers _______________________________________________ mei-l mailing list mei-l at lists.uni-paderborn.de https://lists.uni-paderborn.de/mailman/listinfo/mei-l _______________________________________________ mei-l mailing list mei-l at lists.uni-paderborn.de https://lists.uni-paderborn.de/mailman/listinfo/mei-l _______________________________________________ mei-l mailing list mei-l at lists.uni-paderborn.de https://lists.uni-paderborn.de/mailman/listinfo/mei-l -------------- next part -------------- An HTML attachment was scrubbed...
URL: From raffaeleviglianti at gmail.com Thu Apr 19 19:37:29 2018 From: raffaeleviglianti at gmail.com (Raffaele Viglianti) Date: Thu, 19 Apr 2018 13:37:29 -0400 Subject: [MEI-L] Reminder: MEC2018 Early Bird registration closes on April 22 In-Reply-To: <9BB4D1F3-38EA-45BC-A196-F86CCBF65EA0@byu.edu> References: <9BB4D1F3-38EA-45BC-A196-F86CCBF65EA0@byu.edu> Message-ID: Dear David and all, I've just spoken again with the service providers and they are in the process of fixing this issue -- I believe it will be resolved shortly. We sincerely apologize for this inconvenience, I am sure it has been frustrating. *We have decided to* *extend the Early Bird registration period to April 30th*. The conference rate for the hotel block is also available until April 30th. I will send a follow up email as soon as I have confirmation from our provider that the issue is resolved. Apologies again and kind regards, Raff Viglianti on behalf of the Organizing Committee On Thu, Apr 19, 2018 at 1:32 PM, David Day wrote: > I have been trying for several days, but I still cannot succeed in the > final confirmation. Still getting an error that captcha is not working. Now > there is now captcha box, but I still get the same error message. I have > tried different browsers. Are others also experiencing this problem? > > > On Apr 17, 2018, at 12:55 PM, Raffaele Viglianti < > raffaeleviglianti at gmail.com> wrote: > > Hi David, > > Thanks for reporting this, we're investigating it. In the mean time, I > believe you'll be able to get past the captcha as long as you enter any > text. > > All best, > Raff > > On Tue, Apr 17, 2018 at 2:15 PM, David Day wrote: > >> The registration site/process is not working. The captcha feature is not >> working. Can someone let us know when it is fixed? 
>> >> >> >> >> >> >> >> >> On Apr 17, 2018, at 12:02 PM, Raffaele Viglianti < >> raffaeleviglianti at gmail.com> wrote: >> >> Dear MEI-L >> >> Just a reminder that the Early Bird registration period to the Music >> Encoding Conference at the University of Maryland closes on *April 22nd*. >> >> >> Register today! http://music-encoding.org/conference/2018/ >> >> >> We look forward to welcoming you at the conference 22-25 May 2018. >> >> Kind regards, >> Raff Viglianti on behalf of the MEC2018 organizers >> _______________________________________________ >> mei-l mailing list >> mei-l at lists.uni-paderborn.de >> https://lists.uni-paderborn.de/mailman/listinfo/mei-l >> >> >> >> _______________________________________________ >> mei-l mailing list >> mei-l at lists.uni-paderborn.de >> https://lists.uni-paderborn.de/mailman/listinfo/mei-l >> >> > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l > > > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From david.weigl at oerc.ox.ac.uk Mon Apr 23 23:58:16 2018 From: david.weigl at oerc.ox.ac.uk (David M. Weigl) Date: Mon, 23 Apr 2018 17:58:16 -0400 Subject: [MEI-L] [CfP] Intl. Workshop on Semantic Applications for Audio and Music - SAAM 2018 Message-ID: <1524520696.20176.17.camel@oerc.ox.ac.uk> With apologies for cross-postings. Please forward to interested colleagues and mailing lists. 
2nd CALL FOR PAPERS ===================================================================== International Workshop on Semantic Applications for Audio and Music SAAM 2018 An ISWC 2018 workshop, Oct 9, 2018, Monterey, California, USA http://saam.semanticaudio.ac.uk/ ===================================================================== The SAAM organising committee would like to invite researchers, engineers, developers and all those interested in semantic applications for audio and music to submit their work (long/short/challenge papers) to SAAM 2018, held in conjunction with the International Semantic Web Conference (ISWC 2018). SAAM is a venue for dissemination and discussion, identifying intersections in the challenges and solutions which cut across musical areas. In finding common approaches and coordination, SAAM will set the research agenda for advancing the development of semantic applications for audio and music. Submission deadline: 18 May, 2018 (23:59 UTC-11) (see IMPORTANT DATES below) Workshop web site: Submissions via: Contact: saam2018 at easychair.org The workshop proceedings will be made available in the ACM Digital Library. BACKGROUND AND OBJECTIVES ------------------------- Music provides a fascinating and challenging field for the application of Semantic Web technologies. Music is culture. Yet as knowledge, music takes fundamentally different forms: as digital audio waveforms recording a performance (e.g. MP3); symbolic notation prescribing a work (scores, Music Encoding Initiative); instructions for synthesising or manipulating sounds (MIDI, Digital Audio Workstations); catalogues of performance or thematic aggregations (playlists, setlists); psychological responses to listening; and as an experienced and interpretable art form. How can these heterogeneous structures be linked to each other? To what end? How do we study these materials? Can computational and knowledge management analyses yield insight within and across musics? 
Semantic Web technologies have been applied to these challenges -- across industry, memory institutions and academia -- but with results reported to conferences representing the communities of different disciplines of musical study. The workshop will bring together established members of the Music Informatics and ISWC communities with users, practitioners and researchers beyond its normal boundaries. SAAM will encourage a multidisciplinary audience, providing attendees with the opportunity of learning about the needs and experiences of these users. Conversely, music specialists will be apprised of the latest developments in the Semantic Web, and how they can be applied to their work. SAAM also invites the wider community to discover “what makes music interesting!” TOPICS OF INTEREST ------------------ Topics of interest for the workshop include, but are not limited to: Consuming and exploiting music and media data on the Semantic Web * Music recommender systems using Semantic Web data * Visualisations of music and time-based media using Semantic Web data * Semantic Web-based automation in content management, distribution, archiving and curation * Emerging interchange standards using Semantic Web technologies (e.g.
IIIF AV) * Music and media content resolution * Semantic Web in musicology * Sonification and composition techniques in the context of the Semantic Web Producing and publishing music and media-related data on the Semantic Web * Annotations, ground truth collections and crowd-sourcing for music and media collections * Uniquely identifying music resources on the Web * Automatic interlinking of music- and media-related datasets * Learning ontologies and structured music data from Web mining * Publishing the results of content-based analysis on the Semantic Web * Semantic Web technologies in the recording studio * Capturing annotations at source in composition and performance Managing music and media-related data * Management of music libraries, archives and digital collections * Managing music analysis services and workflows * Semantic Web services for music and media processing, rights, policies, payment * Preserving Semantic Web data through remixing and re-use * End-to-end semantic flows throughout the music creation and interpretation lifecycle Modelling music and media-related data * Music and media metadata, from production to personal applications * Ontologies and knowledge representation for the music and time-based media domains * Representations for time-based navigation e.g. musical and narrative structures SUBMISSIONS ----------- SAAM invites short, long, and challenge paper submissions. Papers will be peer reviewed by 2-3 members of the programme committee following a single-blind review process. Please produce your paper using the ACM template and submit it to SAAM on EasyChair by 18th May 2018 (see IMPORTANT DATES). SUBMISSION FORMATS ------------------ *Full papers* (maximum 8 pages plus references) should report on substantially complete and mature work, or efforts that have reached an important milestone.
*Short papers* (maximum 4 pages plus references) may highlight demonstrators or preliminary results to bring them to the community’s attention, or present emerging technologies and approaches as position papers. For both full and short papers, we encourage submissions which report the practical application of semantic technologies to the audio and music domain, and for which demonstrators can be shown during presentation of the paper at the workshop. We also encourage sharing of demonstrators amongst participants during the workshop coffee break. Accepted full and short papers will be included directly in the workshop proceedings to be published in the ACM ICPS, and presented at the workshop. *Music and Audio Applications Challenge papers* (maximum 1 page plus references), henceforth ‘Challenge papers’, encourage the attendance and engagement of users (or potential users) of Semantic Web technologies through music and audio applications. Challenge papers should take the form of an extended abstract or short position paper reporting or motivating a specific problem, use case, or application. Challenge papers need not report a completed implementation or evaluation, and may be illustrative or speculative in proposing an application of semantic technology from the perspective of a clearly articulated user need. Accepted Challenge papers will be incorporated within a single consolidated article, edited by the chairs, and included in the workshop proceedings. Challenge papers will be presented as short pitches, followed by collective discussion, within a dedicated session at the workshop. Summary of submission lengths (details above): * Long papers: up to 8 pages (excluding references), * Short papers: up to 4 pages (excluding references), * Challenge papers: 1 page extended abstracts (excluding references). 
All submitted papers must: * be written in English; * contain author names, affiliations and e-mail addresses; * be formatted according to the ACM SIG Proceedings template, using 9pt Type 1 font; * be in PDF format (please ensure that the PDF can be viewed on any platform) and formatted for A4 size. It is the authors’ responsibility to ensure that their submissions adhere strictly to the required format. Submissions that do not comply with the above requirements may be rejected without review. Please note that at least one author from each accepted paper must attend the workshop to present their work, and must be registered by 29th June 2018 (see IMPORTANT DATES). ACM template: Submissions: Contact email: saam2018 at easychair.org The workshop proceedings will be published in the ACM ICPS and will be made available in the ACM Digital Library. Please use the ‘ACM SigConf’ version of the ‘2017 ACM Master Article Template’: for MS Word (Mac and Windows versions are available), please use the ACM_SigConf template; for LaTeX (version 1.50), please use the ACM_SigConf template from the master and see sample-sigconf.tex IMPORTANT DATES --------------- Paper submission deadline: 18th May 2018 (23:59 UTC-11) Notification of acceptance: 27th June 2018 Registration deadline for one author per accepted paper: 29th June 2018 Camera ready submission deadline: 24th July 2018 Workshop: 9th October 2018 WORKSHOP ORGANISATION --------------------- Programme chairs Sean Bechhofer, School of Computer Science, University of Manchester George Fazekas, Centre for Digital Music (C4DM), Queen Mary University of London (QMUL) Kevin Page, Oxford e-Research Centre, Dept. Engineering Science, University of Oxford Organising Committee members Miguel Ceriani (Website Chair) David Weigl (Publicity and Proceedings Chair) tbc.
Programme Committee Alessandro Adamou, Insight Centre Miguel Ceriani, Queen Mary University of London Mathieu d'Aquin, Insight Centre David De Roure, University of Oxford Alan Dix, University of Birmingham Stephen Downie, University of Illinois Frederic Font, Universitat Pompeu Fabra Nick Gibbins, University of Southampton Andrew Hankinson, Bodleian Libraries Kevin Kishimoto, Stanford University Libraries Graham Klyne, University of Oxford David Lewis, University of Oxford Pasquale Lisena, EURECOM Albert Meroño Peñuela, VU Amsterdam Terhi Nurmikko-Fuller, Australian National University Mark Sandler, Queen Mary University of London Stefan Schlobach, VU Amsterdam Xavier Serra, Universitat Pompeu Fabra Florian Thalmann, Queen Mary University of London Raphaël Troncy, EURECOM Ruben Verborgh, IDLab David Weigl, University of Oxford Tillman Weyde, City University of London Thomas Wilmering, Queen Mary University of London From raffaeleviglianti at gmail.com Tue Apr 24 22:14:55 2018 From: raffaeleviglianti at gmail.com (Raffaele Viglianti) Date: Tue, 24 Apr 2018 16:14:55 -0400 Subject: [MEI-L] MEC2018 *extended* Early Bird registration closes on April 30 Message-ID: Dear all, Due to technical issues, we have extended the Early Bird registration period to *30 April 2018*. The conference rate for the hotel is also available until that date. Register now at http://music-encoding.org/conference/2018/ We apologize for the issues and we hope to see you at UMD in May! Kind regards, Raff Viglianti on behalf of the Organizing Committee -------------- next part -------------- An HTML attachment was scrubbed...
URL: From raffaeleviglianti at gmail.com Tue May 1 16:18:22 2018 From: raffaeleviglianti at gmail.com (Raffaele Viglianti) Date: Tue, 1 May 2018 10:18:22 -0400 Subject: [MEI-L] Interest Groups at MEC2018 Message-ID: Dear MEI-L The last day of the conference (Friday 25th) is dedicated to MEI announcements from the board, Interest Groups, and "unconference" activities planned on the day. If there are Interest Groups that would like to reserve time and a room for Friday the 25th, please contact me directly to make arrangements. Best wishes, Raff Viglianti on behalf of the Organizing Committee -------------- next part -------------- An HTML attachment was scrubbed... URL: From esfield at stanford.edu Wed May 2 01:40:26 2018 From: esfield at stanford.edu (Eleanor Selfridge-Field) Date: Tue, 1 May 2018 23:40:26 +0000 Subject: [MEI-L] Interest Groups at MEC2018 In-Reply-To: References: Message-ID: Hi Raff, On current reckoning, I will not be joining you. I registered, and I have a flight, but I sustained a concussion on April 21 in New York. I will remain in rehab for another week, then fly home and pick up the threads of my life. I will not be able to travel for a while. I regret this, but I cannot change it. If you were able to refund the registration (and banquet), I would appreciate it. Auguri, Eleanor Sent from my iPad > On May 1, 2018, at 10:19 AM, Raffaele Viglianti wrote: > > Dear MEI-L > > The last day of the conference (Friday 25th) is dedicated to MEI announcements from the board, Interest Groups, and "unconference" activities planned on the day. > > If there are Interest Groups that would like to reserve time and a room for Friday the 25th, please contact me directly to make arrangements.
> > Best wishes, > Raff Viglianti on behalf of the Organizing Committee > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l From D.Lewis at gold.ac.uk Fri May 4 12:13:08 2018 From: D.Lewis at gold.ac.uk (David Lewis) Date: Fri, 4 May 2018 10:13:08 +0000 Subject: [MEI-L] Digital Humanities at Oxford Summer School: Digital Musicology Workshop Message-ID: (apologies for cross-postings) SUMMER SCHOOL WORKSHOP: INVITATION TO REGISTER Digital Musicology: Applied computational and informatics methods for enhancing musicology Dates: 2-6 July Registration now open: closes 18 June A wealth of music and music-related information is now available digitally, offering tantalizing possibilities for digital musicologies. These resources include large collections of audio and scores, bibliographic and biographic data, and performance ephemera -- not to mention the ‘hidden’ existence of these in other digital content. With such large and wide ranging opportunities come new challenges in methods, principally in adapting technological solutions to assist musicologists in identifying, studying, and disseminating scholarly insights from amongst this ‘data deluge’. This workshop provides an introduction to computational and informatics methods that can be, and have been, successfully applied to musicology. Many of these techniques have their foundations in computer science, library and information science, mathematics and most recently Music Information Retrieval (MIR); sessions are delivered by expert practitioners from these fields and presented in the context of their collaborations with musicologists, and by musicologists relating their experiences of these multidisciplinary investigations. The workshop comprises a series of lectures and hands-on sessions, supplemented with reports from musicology research exemplars. 
Theoretical lectures are paired with practical sessions in which attendees are guided through their own exploration of the topics and tools covered. Laptops will be loaned to attendees with the appropriate specialised software installed and preconfigured. Participants also attend afternoon lectures and masterclasses with participants from other workshops; these sessions cover a broad range of digital humanities topics. There will also be optional evening events (some at additional cost), including a guided tour of Oxford, an evening drinks and poster session at the Weston Library and the TORCH lecture. Participants are invited to submit posters for the welcome reception at the Weston Library by Wednesday 18th May. Please note that numbers for this workshop are limited, and we cannot guarantee that places will still be available towards the end of the registration period. Summer School site: Contact: From david.weigl at oerc.ox.ac.uk Mon May 14 17:51:04 2018 From: david.weigl at oerc.ox.ac.uk (David M. Weigl) Date: Mon, 14 May 2018 17:51:04 +0200 Subject: [MEI-L] [CfP] Intl. Workshop on Semantic Applications for Audio and Music - SAAM 2018 - final call, note extended deadline Message-ID: <1526313064.25407.8.camel@oerc.ox.ac.uk> With apologies for cross-postings. Please forward to interested colleagues and mailing lists. Final CALL FOR PAPERS -- n.b. extended deadline! 
===================================================================== International Workshop on Semantic Applications for Audio and Music SAAM 2018 An ISWC 2018 workshop, Oct 9, 2018, Monterey, California, USA http://saam.semanticaudio.ac.uk/ ===================================================================== The SAAM organising committee would like to invite researchers, engineers, developers and all those interested in semantic applications for audio and music to submit their work (long/short/challenge papers) to SAAM 2018, held in conjunction with the International Semantic Web Conference (ISWC 2018). SAAM is a venue for dissemination and discussion, identifying intersections in the challenges and solutions which cut across musical areas. In finding common approaches and coordination, SAAM will set the research agenda for advancing the development of semantic applications for audio and music. Submission deadline EXTENDED to: Thursday, 24 May, 2018 (23:59 UTC-11) (see IMPORTANT DATES below) Workshop web site: Submissions via: Contact: saam2018 at easychair.org The workshop proceedings will be made available in the ACM Digital Library. BACKGROUND AND OBJECTIVES ------------------------- Music provides a fascinating and challenging field for the application of Semantic Web technologies. Music is culture. Yet as knowledge, music takes fundamentally different forms: as digital audio waveforms recording a performance (e.g. MP3); symbolic notation prescribing a work (scores, Music Encoding Initiative); instructions for synthesising or manipulating sounds (MIDI, Digital Audio Workstations); catalogues of performance or thematic aggregations (playlists, setlists); psychological responses to listening; and as an experienced and interpretable art form. How can these heterogeneous structures be linked to each other? To what end? How do we study these materials? Can computational and knowledge management analyses yield insight within and across musics? 
Semantic Web technologies have been applied to these challenges -- across industry, memory institutions and academia -- but with results reported to conferences representing the communities of different disciplines of musical study. The workshop will bring together established members of the Music Informatics and ISWC communities with users, practitioners and researchers beyond its normal boundaries. SAAM will encourage a multidisciplinary audience, providing attendees with the opportunity of learning about the needs and experiences of these users. Conversely, music specialists will be availed of the latest developments in the Semantic Web, and how they can be applied to their work. SAAM also invites the wider community to discover “what makes music interesting!” TOPICS OF INTEREST ------------------ Topics of interest for the workshop include, but are not limited to: Consuming and exploiting music and media data on the Semantic Web * Music recommender systems using Semantic Web data * Visualisations of music and time-based media using Semantic Web data * Semantic Web-based automation in content management, distribution, archiving and curation * Emerging interchange standards using Semantic Web technologies (e.g. 
IIIF AV) * Music and media content resolution * Semantic Web in musicology * Sonification and composition techniques in the context of the Semantic Web Producing and publishing music and media-related data on the Semantic Web * Annotations, ground truth collections and crowd-sourcing for music and media collections * Uniquely identifying music resources on the Web * Automatic interlinking of music- and media-related datasets * Learning ontologies and structured music data from Web mining * Publishing the results of content-based analysis on the Semantic Web * Semantic Web technologies in the recording studio * Capturing annotations at source in composition and performance Managing music and media-related data * Management of music libraries, archives and digital collections * Managing music analysis services and workflows * Semantic Web services for music and media processing, rights, policies, payment * Preserving Semantic Web data through remixing and re-use * End-to-end semantic flows throughout the music creation and interpretation lifecycle Modelling music and media-related data * Music and media metadata, from production to personal applications * Ontologies and knowledge representation for the music and time-based media domains * Representations for time-based navigation e.g. musical and narrative structures SUBMISSIONS ----------- SAAM invites short, long, and challenge paper submissions. Papers will be peer reviewed by 2-3 members of the programme committee following a single-blind review process. Please produce your paper using the ACM template and submit it to SAAM on EasyChair by 18th May 2018 (see IMPORTANT DATES). SUBMISSION FORMATS ------------------ *Full papers* (maximum 8 pages plus references) should report on substantially complete and mature work, or efforts that have reached an important milestone. 
*Short papers* (maximum 4 pages plus references) may highlight demonstrators or preliminary results to bring them to the community’s attention, or present emerging technologies and approaches as position papers. For both full and short papers, we encourage submissions which report the practical application of semantic technologies to the audio and music domain, and for which demonstrators can be shown during presentation of the paper at the workshop. We also encourage sharing of demonstrators amongst participants during the workshop coffee break. Accepted full and short papers will be included directly in the workshop proceedings to be published in the ACM ICPS, and presented at the workshop. *Music and Audio Applications Challenge papers* (maximum 1 page plus references), henceforth ‘Challenge papers’, encourage the attendance and engagement of users (or potential users) of Semantic Web technologies through music and audio applications. Challenge papers should take the form of an extended abstract or short position paper reporting or motivating a specific problem, use case, or application. Challenge papers need not report a completed implementation or evaluation, and may be illustrative or speculative in proposing an application of semantic technology from the perspective of a clearly articulated user need. Accepted Challenge papers will be incorporated within a single consolidated article, edited by the chairs, and included in the workshop proceedings. Challenge papers will be presented as short pitches, followed by collective discussion, within a dedicated session at the workshop. Summary of submission lengths (details above): * Long papers: up to 8 pages (excluding references), * Short papers: up to 4 pages (excluding references), * Challenge papers: 1 page extended abstracts (excluding references). 
All submitted papers must: * be written in English; * contain author names, affiliations and e-mail addresses; * be formatted according to the ACM SIG Proceedings template, using 9pt Type 1 font; * be in PDF format (please ensure that the PDF can be viewed on any platform) and formatted for A4 size. It is the authors’ responsibility to ensure that their submissions adhere strictly to the required format. Submissions that do not comply with the above requirements may be rejected without review. Please note that at least one author from each accepted paper must attend the workshop to present their work, and must be registered by 29th June 2018 (see IMPORTANT DATES). ACM template: Submissions: Contact email: saam2018 at easychair.org The workshop proceedings will be published in the ACM ICPS and will be made available in the ACM Digital Library. Please use the ‘ACM SigConf’ version of the ‘2017 ACM Master Article Template’. For MS Word, Mac and Windows versions are available; for LaTeX (version 1.50), please use the ACM_SigConf template from the master, and see sample-sigconf.tex IMPORTANT DATES --------------- Paper submission deadline EXTENDED to: Thursday, 24 May, 2018 (23:59 UTC-11) Notification of acceptance: 27th June 2018 Registration deadline for one author per accepted paper: 29th June 2018 Camera ready submission deadline: 24th July 2018 Workshop: 9th October 2018 WORKSHOP ORGANISATION --------------------- Programme chairs Sean Bechhofer, School of Computer Science, University of Manchester George Fazekas, Centre for Digital Music (C4DM), Queen Mary University of London (QMUL) Kevin Page, Oxford e-Research Centre, Dept. 
Engineering Science, University of Oxford Organising Committee members Miguel Ceriani (Website Chair) David Weigl (Publicity and Proceedings Chair) Programme Committee Alessandro Adamou, Insight Centre Miguel Ceriani, Queen Mary University of London Mathieu d'Aquin, Insight Centre David De Roure, University of Oxford Alan Dix, University of Birmingham Stephen Downie, University of Illinois Frederic Font, Universitat Pompeu Fabra Nick Gibbins, University of Southampton Andrew Hankinson, Bodleian Libraries Kevin Kishimoto, Stanford University Libraries Graham Klyne, University of Oxford David Lewis, University of Oxford Pasquale Lisena, EURECOM Albert Meroño Peñuela, VU Amsterdam Terhi Nurmikko-Fuller, Australian National University Mark Sandler, Queen Mary University of London Stefan Schlobach, VU Amsterdam Xavier Serra, Universitat Pompeu Fabra Florian Thalmann, Queen Mary University of London Raphaël Troncy, EURECOM Ruben Verbough, ID Lab David Weigl, University of Oxford Tillman Weyde, City University of London Thomas Wilmering, Queen Mary University of London From david.weigl at oerc.ox.ac.uk Tue May 15 14:10:06 2018 From: david.weigl at oerc.ox.ac.uk (David M. Weigl) Date: Tue, 15 May 2018 14:10:06 +0200 Subject: [MEI-L] [CfP] CORRECTION: Intl. Workshop on Semantic Applications for Audio and Music - SAAM 2018 - abstract due: May 18, paper submission extended to: May 24 Message-ID: <1526386206.25407.42.camel@oerc.ox.ac.uk> Clarification on paper submission deadlines: Abstract submission due: Friday, May 18th, 2018 Full paper submission extended to: Thursday, May 24th, 2018 REQUIRES prior abstract submission by Fri, May 18th! Both deadlines at 23:59 UTC-11 -- With apologies for cross-postings. Please forward to interested colleagues and mailing lists. Final CALL FOR PAPERS -- n.b. extended deadline! 
===================================================================== International Workshop on Semantic Applications for Audio and Music SAAM 2018 An ISWC 2018 workshop, Oct 9, 2018, Monterey, California, USA http://saam.semanticaudio.ac.uk/ ===================================================================== The SAAM organising committee would like to invite researchers, engineers, developers and all those interested in semantic applications for audio and music to submit their work (long/short/challenge papers) to SAAM 2018, held in conjunction with the International Semantic Web Conference (ISWC 2018). SAAM is a venue for dissemination and discussion, identifying intersections in the challenges and solutions which cut across musical areas. In finding common approaches and coordination, SAAM will set the research agenda for advancing the development of semantic applications for audio and music. Submission deadline EXTENDED to: Thursday, 24 May, 2018 (23:59 UTC-11) (see IMPORTANT DATES below) Workshop web site: Submissions via: Contact: saam2018 at easychair.org The workshop proceedings will be made available in the ACM Digital Library. BACKGROUND AND OBJECTIVES ------------------------- Music provides a fascinating and challenging field for the application of Semantic Web technologies. Music is culture. Yet as knowledge, music takes fundamentally different forms: as digital audio waveforms recording a performance (e.g. MP3); symbolic notation prescribing a work (scores, Music Encoding Initiative); instructions for synthesising or manipulating sounds (MIDI, Digital Audio Workstations); catalogues of performance or thematic aggregations (playlists, setlists); psychological responses to listening; and as an experienced and interpretable art form. How can these heterogeneous structures be linked to each other? To what end? How do we study these materials? Can computational and knowledge management analyses yield insight within and across musics? 
Semantic Web technologies have been applied to these challenges -- across industry, memory institutions and academia -- but with results reported to conferences representing the communities of different disciplines of musical study. The workshop will bring together established members of the Music Informatics and ISWC communities with users, practitioners and researchers beyond its normal boundaries. SAAM will encourage a multidisciplinary audience, providing attendees with the opportunity of learning about the needs and experiences of these users. Conversely, music specialists will be availed of the latest developments in the Semantic Web, and how they can be applied to their work. SAAM also invites the wider community to discover “what makes music interesting!” TOPICS OF INTEREST ------------------ Topics of interest for the workshop include, but are not limited to: Consuming and exploiting music and media data on the Semantic Web * Music recommender systems using Semantic Web data * Visualisations of music and time-based media using Semantic Web data * Semantic Web-based automation in content management, distribution, archiving and curation * Emerging interchange standards using Semantic Web technologies (e.g. 
IIIF AV) * Music and media content resolution * Semantic Web in musicology * Sonification and composition techniques in the context of the Semantic Web Producing and publishing music and media-related data on the Semantic Web * Annotations, ground truth collections and crowd-sourcing for music and media collections * Uniquely identifying music resources on the Web * Automatic interlinking of music- and media-related datasets * Learning ontologies and structured music data from Web mining * Publishing the results of content-based analysis on the Semantic Web * Semantic Web technologies in the recording studio * Capturing annotations at source in composition and performance Managing music and media-related data * Management of music libraries, archives and digital collections * Managing music analysis services and workflows * Semantic Web services for music and media processing, rights, policies, payment * Preserving Semantic Web data through remixing and re-use * End-to-end semantic flows throughout the music creation and interpretation lifecycle Modelling music and media-related data * Music and media metadata, from production to personal applications * Ontologies and knowledge representation for the music and time-based media domains * Representations for time-based navigation e.g. musical and narrative structures SUBMISSIONS ----------- SAAM invites short, long, and challenge paper submissions. Papers will be peer reviewed by 2-3 members of the programme committee following a single-blind review process. Please produce your paper using the ACM template. Abstracts must be submitted to SAAM on EasyChair by 18th May 2018. Full paper submissions due 24th May 2018. (see IMPORTANT DATES). SUBMISSION FORMATS ------------------ *Full papers* (maximum 8 pages plus references) should report on substantially complete and mature work, or efforts that have reached an important milestone. 
*Short papers* (maximum 4 pages plus references) may highlight demonstrators or preliminary results to bring them to the community’s attention, or present emerging technologies and approaches as position papers. For both full and short papers, we encourage submissions which report the practical application of semantic technologies to the audio and music domain, and for which demonstrators can be shown during presentation of the paper at the workshop. We also encourage sharing of demonstrators amongst participants during the workshop coffee break. Accepted full and short papers will be included directly in the workshop proceedings to be published in the ACM ICPS, and presented at the workshop. *Music and Audio Applications Challenge papers* (maximum 1 page plus references), henceforth ‘Challenge papers’, encourage the attendance and engagement of users (or potential users) of Semantic Web technologies through music and audio applications. Challenge papers should take the form of an extended abstract or short position paper reporting or motivating a specific problem, use case, or application. Challenge papers need not report a completed implementation or evaluation, and may be illustrative or speculative in proposing an application of semantic technology from the perspective of a clearly articulated user need. Accepted Challenge papers will be incorporated within a single consolidated article, edited by the chairs, and included in the workshop proceedings. Challenge papers will be presented as short pitches, followed by collective discussion, within a dedicated session at the workshop. Summary of submission lengths (details above): * Long papers: up to 8 pages (excluding references), * Short papers: up to 4 pages (excluding references), * Challenge papers: 1 page extended abstracts (excluding references). 
All submitted papers must: * be written in English; * contain author names, affiliations and e-mail addresses; * be formatted according to the ACM SIG Proceedings template, using 9pt Type 1 font; * be in PDF format (please ensure that the PDF can be viewed on any platform) and formatted for A4 size. It is the authors’ responsibility to ensure that their submissions adhere strictly to the required format. Submissions that do not comply with the above requirements may be rejected without review. Please note that at least one author from each accepted paper must attend the workshop to present their work, and must be registered by 29th June 2018 (see IMPORTANT DATES). ACM template: Submissions: Contact email: saam2018 at easychair.org The workshop proceedings will be published in the ACM ICPS and will be made available in the ACM Digital Library. Please use the ‘ACM SigConf’ version of the ‘2017 ACM Master Article Template’. For MS Word, Mac and Windows versions are available; for LaTeX (version 1.50), please use the ACM_SigConf template from the master, and see sample-sigconf.tex IMPORTANT DATES --------------- Full paper submission deadline: Thursday, 24 May, 2018 (prior abstract submission REQUIRED by Fri May 18th!) Notification of acceptance: 27th June 2018 Registration deadline for one author per accepted paper: 29th June 2018 Camera ready submission deadline: 24th July 2018 Workshop: 9th October 2018 All deadlines at (23:59 UTC-11) WORKSHOP ORGANISATION --------------------- Programme chairs Sean Bechhofer, School of Computer Science, University of Manchester George Fazekas, Centre for Digital Music (C4DM), Queen Mary University of London (QMUL) Kevin Page, Oxford e-Research Centre, Dept. 
Engineering Science, University of Oxford Organising Committee members Miguel Ceriani (Website Chair) David Weigl (Publicity and Proceedings Chair) Programme Committee Alessandro Adamou, Insight Centre Miguel Ceriani, Queen Mary University of London Mathieu d'Aquin, Insight Centre David De Roure, University of Oxford Alan Dix, University of Birmingham Stephen Downie, University of Illinois Frederic Font, Universitat Pompeu Fabra Nick Gibbins, University of Southampton Andrew Hankinson, Bodleian Libraries Kevin Kishimoto, Stanford University Libraries Graham Klyne, University of Oxford David Lewis, University of Oxford Pasquale Lisena, EURECOM Albert Meroño Peñuela, VU Amsterdam Terhi Nurmikko-Fuller, Australian National University Mark Sandler, Queen Mary University of London Stefan Schlobach, VU Amsterdam Xavier Serra, Universitat Pompeu Fabra Florian Thalmann, Queen Mary University of London Raphaël Troncy, EURECOM Ruben Verbough, ID Lab David Weigl, University of Oxford Tillman Weyde, City University of London Thomas Wilmering, Queen Mary University of London From paul at nines.org Mon May 28 15:51:54 2018 From: paul at nines.org (Newsletters) Date: Mon, 28 May 2018 09:51:54 -0400 Subject: [MEI-L] Small, representative set of MEI files? Message-ID: <8ecb70d3-e96a-e3ec-a4a2-ea5dd9ea67b5@nines.org> I was looking for a small set of MEI files, preferably written by different people using different software, to use as test data for a script that I'm writing to parse MEI. Would anyone who has written a parser for MEI have a collection of MEI files they'd be willing to share? Has anyone written an MEI that has something unusual in it that they'd be willing to share? Is there already a public repository of these files somewhere? I couldn't find one. Thanks, Paul Rosen From kepper at upb.de Mon May 28 16:49:14 2018 From: kepper at upb.de (Johannes Kepper) Date: Mon, 28 May 2018 16:49:14 +0200 Subject: [MEI-L] Small, representative set of MEI files? 
In-Reply-To: <8ecb70d3-e96a-e3ec-a4a2-ea5dd9ea67b5@nines.org> References: <8ecb70d3-e96a-e3ec-a4a2-ea5dd9ea67b5@nines.org> Message-ID: <071DF885-4296-444A-9863-66F2B8967FBC@upb.de> Hi Paul, I think https://github.com/DDMAL/mei-test-set might be a good starting point for you. There are certainly more (and more recent) files out there, but here, everything is nicely structured and seems to serve the very purpose you‘re looking for. It was good to see you last week in College Park, and I’m glad you’re already taking action ;-) All best, jo > Am 28.05.2018 um 15:51 schrieb Newsletters : > > I was looking for a small set of MEI files, preferably written by different people using different software, to use as test data for a script that I'm writing to parse MEI. > > Would anyone who has written a parser for MEI have a collection of MEI files they'd be willing to share? > > Has anyone written an MEI that has something unusual in it that they'd be willing to share? > > Is there already a public repository of these files somewhere? I couldn't find one. > > Thanks, > > Paul Rosen > > > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l -------------- next part -------------- An HTML attachment was scrubbed... URL: From paul at nines.org Mon May 28 17:36:28 2018 From: paul at nines.org (Newsletters) Date: Mon, 28 May 2018 11:36:28 -0400 Subject: [MEI-L] Small, representative set of MEI files? In-Reply-To: <071DF885-4296-444A-9863-66F2B8967FBC@upb.de> References: <8ecb70d3-e96a-e3ec-a4a2-ea5dd9ea67b5@nines.org> <071DF885-4296-444A-9863-66F2B8967FBC@upb.de> Message-ID: <76188ac1-e0b2-3584-412d-f3b8d6f41015@nines.org> Hi Jo, Yes, that's exactly what I was looking for! Thanks. I was inspired by the conference so I want to make some progress before that wears off. 
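The kind of parser test Paul describes needs namespace-aware XML handling, since MEI elements live in the MEI namespace. A minimal sketch in Python using only the standard library (the namespace URI is MEI's official one; the inline fragment below is illustrative and not drawn from the DDMAL test set or any other shared collection):

```python
# Sketch: extracting <note> elements from an MEI document with the
# Python standard library. All MEI elements are namespace-qualified,
# so searches must use the Clark-notation tag "{uri}localname".
import xml.etree.ElementTree as ET

MEI_NS = "http://www.music-encoding.org/ns/mei"

# Illustrative fragment only; real MEI files have an <mei> root
# containing <meiHead> metadata alongside <music>.
mei_fragment = """\
<music xmlns="http://www.music-encoding.org/ns/mei">
  <body><mdiv><score>
    <layer n="1">
      <note pname="c" oct="4" dur="4"/>
      <note pname="d" oct="4" dur="4"/>
    </layer>
  </score></mdiv></body>
</music>"""

root = ET.fromstring(mei_fragment)
# iter() walks all descendants, so nesting depth does not matter.
pitches = [n.get("pname") for n in root.iter(f"{{{MEI_NS}}}note")]
print(pitches)  # -> ['c', 'd']
```

For files on disk, `ET.parse(path).getroot()` replaces `ET.fromstring`, and the same `iter()` call works unchanged regardless of how deeply the notes are nested.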
On 5/28/18 10:49 AM, Johannes Kepper wrote: > Hi Paul, > > I think https://github.com/DDMAL/mei-test-set might be a good starting > point for you. There are certainly more (and more recent) files out > there, but here, everything is nicely structured and seems to serve > the very purpose you‘re looking for. It was good to see you last week > in College Park, and I’m glad you’re already taking action ;-) > > All best, > jo > > Am 28.05.2018 um 15:51 schrieb Newsletters >: > >> I was looking for a small set of MEI files, preferably written by >> different people using different software, to use as test data for a >> script that I'm writing to parse MEI. >> >> Would anyone who has written a parser for MEI have a collection of >> MEI files they'd be willing to share? >> >> Has anyone written an MEI that has something unusual in it that >> they'd be willing to share? >> >> Is there already a public repository of these files somewhere? I >> couldn't find one. >> >> Thanks, >> >> Paul Rosen >> >> From raffaeleviglianti at gmail.com Tue May 29 18:28:18 2018 From: raffaeleviglianti at gmail.com (Raffaele Viglianti) Date: Tue, 29 May 2018 12:28:18 -0400 Subject: [MEI-L] MEC 2018 - Thank you! Message-ID: Dear MEI-L Many thanks to those of you who attended the Music Encoding Conference 2018 at the University of Maryland last week! Your contributions and attendance are what made it all possible. It was four busy days in College Park. We enjoyed having you all here and I hope some of you had a chance to explore downtown Washington DC. It was good to see that the proximity to DC meant that we had some attendees from the capital’s cultural institutions such as the National Public Radio and the Library of Congress. The theme “Encoding and Performance” was well represented throughout the conference. 
We are particularly grateful to John Rink for his keynote lecture-recital “(Not) Beyond the Score: Decoding Musical Performance,” which highlighted the challenges of encoding/decoding music notation through the lens of performance research and practice. We are also particularly grateful to Anna Kijas who, in her keynote speech “What does the data tell us?: Representation, Canon, and Music Encoding,” highlighted critical topics that are too often neglected in our community. Her talk made the fundamental point that our acts of building digital representations of notated music can (and currently do) reinforce traditional canons of music history that overlook contributions by women and people of color. In establishing a “digital canon” we have an unprecedented opportunity to change this. We closed MEC with a productive “unconference” day and we are happy to already see some activity on the mailing list as a result! Many thanks have been given throughout the conference days; however, we are truly grateful to the University of Maryland School of Arts and Humanities and the MEI Board for having sponsored bursaries for students to attend the conference in a place that is currently geographically distant from the core constituencies of the MEI community. We are also thankful to Tido for sponsoring the Wednesday reception and particularly to soprano Tory Wood and Tido’s founder and director Brad Cohen for a wonderful live performance. Finally, we look forward to the 2019 conference in Vienna and we hope to see you all there! Best wishes. Raff Viglianti on behalf of the MEC2018 conference organizers. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From kepper at edirom.de Wed May 30 11:32:21 2018 From: kepper at edirom.de (Johannes Kepper) Date: Wed, 30 May 2018 11:32:21 +0200 Subject: [MEI-L] Report from the Community Meeting at MEC2018 Message-ID: Dear Members of the MEI, for those of you who weren't able to join us, here's a report from last week's Community Meeting. First of all, we had a wonderful Music Encoding Conference over in College Park, with an exciting program and good organisation – thanks again to Karen, Raffaele, Stephen and their respective teams. MEC2019 will be hosted by a team led by Robert Klugseder (Austrian Academy of Sciences) and Franz Kelnreiter (Mozarteum Salzburg), and will be held from May 29 to June 1 in wonderful Vienna. We're excited that Kevin Page (Oxford) will serve as Program Chair. We encouraged institutions to respond to our Call for Hosting MEC2020, which is open until August 1. While we've been able to alternate between Europe and North America so far, that's not a strict requirement. However, we'd be happy to strengthen our North American Community by having another conference there in 2020. Interested parties are invited to reach out to Raff Viglianti or the MEI Board to discuss possible proposals upfront. As in previous years, MEI has three Institutional Members: ZenMEM (2500€ / year), the ÖAW (500€ / year) and TiDo (500$ / year). With their support, we've been able to sponsor student bursaries at Music Encoding Conferences. For version 4.0 of the MEI Schema, we're waiting for some last-minute changes from one of our Interest Groups. The current plan is to have these ready by July 1, and to release the new version shortly thereafter. As Perry may not use his official working time for MEI anymore, the MEI Board would like to initiate annual developer meetings in fall. These meetings will have a specific topic, and are open to anyone who wants to actively contribute work to that topic. 
Sending delegates to these workshops will be accepted as an in-kind contribution, which means that such institutions will be treated as Institutional Members of MEI (for that year) and will get their logo on the MEI website. Our Technical Chairs will announce a date and topic for such a workshop in the next few weeks; interested people are requested to respond to that. If your institution is not able to support you with travel money, please contact the Technical Chairs anyway. While there is no formal procedure for this ready yet, the Board is willing to step in and support such cases with MEI money. The proceedings for MEC2015 and 2016 will be bundled together and will go out for final author corrections very soon. Proceedings for MEC2017 will be combined with this year's submissions. There will be a separate announcement to this year's authors in the coming week, but basically the same rules as for the last years apply: Please submit either a simple Word file, or try to follow the guidelines given at https://github.com/music-encoding/mec-proceedings. Deadline for submissions will be August 31. We hope to get both volumes – MEC2015+16 and MEC2017+18 – out by the end of the year. During the Community meeting, we intensively discussed ways to improve communication on MEI-L and elsewhere. This also involved discussions about (paid) memberships. It seems like MEI has changed over the last couple of years, and we may have to respond to these changes accordingly. A very simple suggestion is to make sure that all the tools developed are listed at http://music-encoding.org/resources/tools.html, while all projects using MEI are listed at http://music-encoding.org/community/projects-users.html. Both pages can be modified through Github (please submit pull requests). If you need assistance, our Technical Team will happily support you. However, it is your responsibility to speak up and get your activities listed there. 
Regarding the major issues of improving communication and reorganising MEI, we plan to set up a dedicated working group in the coming weeks. There will be a separate mail about that, but if you already know that you'd like to participate in this effort, please contact me off-list. There will be no separate minutes from the Board meeting at MEC2018, as we mostly prepared the topics discussed at the Community Meeting. If there are questions, additions, suggestions etc., please comment. With best regards, jo -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: Message signed with OpenPGP URL: From efreja at wanadoo.fr Sun Jun 24 00:31:43 2018 From: efreja at wanadoo.fr (Etienne Fréjaville) Date: Sun, 24 Jun 2018 00:31:43 +0200 Subject: [MEI-L] A WordPress plugin for displaying very easily a MEI (or MusicXML or PAE) score in a WordPress page Message-ID: Dear MEI community, I'm happy to inform you that I have published a WordPress plugin that makes it very easy to display a MEI (or MusicXML or PAE) score in a WordPress page. Do you own a WordPress site and frequently publish music fragments, incipits or small music sheets? This plugin is made for you, and it's free! It allows you to write your MEI, MusicXML or PAE code directly between two shortcode tags, [pn_msv] and [/pn_msv], in the text of a WordPress page or post, and have it rendered directly in your page. Is the code too big to be placed directly in your page? Upload a file to your site and have it rendered the same way. The plugin is powered by Verovio for the score rendering, and generates for you the cumbersome JavaScript code necessary to invoke the Verovio JavaScript toolkit with the appropriate parameters. Moreover, it greatly simplifies displaying your score at the appropriate size, without your having to specify any score dimensions. 
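To give a concrete picture, a post body using the shortcode could look like the sketch below. The Plaine & Easie incipit is an invented example, and the shortcode supports options not shown here; see the plugin page for the actual option list.

```
[pn_msv]
@clef:G-2
@keysig:bB
@timesig:4/4
@data:'4C8DE4F/2G
[/pn_msv]
```

When the page is rendered, everything between the tags is handed to Verovio and replaced by the engraved score.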
Although not all Verovio options are accessible through the shortcode (that is not the goal; usage should be kept as simple as possible), some options are supported by the plugin and are fully described at http://www.partitionnumerique.com/music-sheet-viewer-wordpress-plugin/ The plugin is freely available in the WordPress plugins directory at https://wordpress.org/plugins/music-sheet-viewer/ or can be installed directly from your WordPress site by going to the plugins administration page and entering "Music Sheet Viewer" in the plugin search field. Finally, all donations are welcome to support my work. All the best. Etienne Fréjaville http://www.partitionnumerique.com L'avenir de la partition musicale -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 6CFB6F91-585F-40BD-B2DE-FBD93BE2F38E.png Type: image/png Size: 1066 bytes Desc: not available URL: From craigsapp at gmail.com Sun Jun 24 03:45:09 2018 From: craigsapp at gmail.com (Craig Sapp) Date: Sat, 23 Jun 2018 18:45:09 -0700 Subject: [MEI-L] Digital edition of Mozart piano sonatas Message-ID: Hello Everyone, I have put a digital edition of Mozart's piano sonatas in the Humdrum format on GitHub: https://github.com/craigsapp/mozart-piano-sonatas You can view the graphical notation generated from the data in the Verovio Humdrum Viewer by clicking on the links for each movement in the description of the edition: https://github.com/craigsapp/mozart-piano-sonatas#online-viewing Typing alt-m while viewing the music notation will display the MEI conversion of the data in the text editor to the left. -=+Craig -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From andrew.hankinson at mail.mcgill.ca Sun Jun 24 19:38:11 2018 From: andrew.hankinson at mail.mcgill.ca (Andrew Hankinson) Date: Sun, 24 Jun 2018 17:38:11 +0000 Subject: [MEI-L] A WordPress plugin for displaying very easily a MEI (or MusicXML or PAE) score in a WordPress page In-Reply-To: References: Message-ID: <410227C9-120C-4EEC-A3D3-1E72FC515B69@mail.mcgill.ca> That's great, Etienne! Thank you very much for making this available to the community. -Andrew > On 23 Jun 2018, at 23:31, Etienne Fréjaville wrote: > [...] > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l From zayne at zayne.co.za Mon Jun 25 08:00:52 2018 From: zayne at zayne.co.za (Zayne Upton) Date: Mon, 25 Jun 2018 08:00:52 +0200 Subject: [MEI-L] A WordPress plugin for displaying very easily a MEI (or MusicXML or PAE) score in a WordPress page In-Reply-To: References: Message-ID: This is great! This might be just the thing I need. Thanks for the contribution. On Sun, 24 Jun 2018, 00:32 Etienne Fréjaville, wrote: > [...] > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 6CFB6F91-585F-40BD-B2DE-FBD93BE2F38E.png Type: image/png Size: 1066 bytes Desc: not available URL: From kijas at bc.edu Mon Jun 25 15:37:53 2018 From: kijas at bc.edu (Anna Kijas) Date: Mon, 25 Jun 2018 09:37:53 -0400 Subject: [MEI-L] A WordPress plugin for displaying very easily a MEI (or MusicXML or PAE) score in a WordPress page In-Reply-To: References: Message-ID: Thanks so much for sharing your plugin, Etienne! I look forward to testing it out. 
Anna Anna E. Kijas, MA, MLS Senior Digital Scholarship Librarian Boston College Libraries 140 Commonwealth Ave. Chestnut Hill, MA 02467 Tel: 617-552-4253 Schedule an appointment | https://ds.bc.edu/ | @anna_kijas | She/Hers On Sat, Jun 23, 2018 at 6:31 PM, Etienne Fréjaville wrote: > [...] > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 6CFB6F91-585F-40BD-B2DE-FBD93BE2F38E.png Type: image/png Size: 1066 bytes Desc: not available URL: From zayne at zayne.co.za Tue Jun 26 20:09:15 2018 From: zayne at zayne.co.za (Zayne Upton) Date: Tue, 26 Jun 2018 20:09:15 +0200 Subject: [MEI-L] A WordPress plugin for displaying very easily a MEI (or MusicXML or PAE) score in a WordPress page In-Reply-To: References: Message-ID: Hi Etienne, I've installed the plugin and got some errors. I've added them to a WordPress.org support forum post (https://wordpress.org/support/topic/installation-problem-141/). It may have something to do with the PHP version. Cheers, Zayne On 24 June 2018 at 00:31, Etienne Fréjaville wrote: > [...] > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l > -- __________________________ Zayne Upton | +27 83 324 5435 -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 6CFB6F91-585F-40BD-B2DE-FBD93BE2F38E.png Type: image/png Size: 1066 bytes Desc: not available URL: From charbelelachkar at hotmail.com Tue Jul 3 12:37:18 2018 From: charbelelachkar at hotmail.com (Charbel El Achkar) Date: Tue, 3 Jul 2018 10:37:18 +0000 Subject: [MEI-L] Half flat (Arabic quarter-tone flat) Message-ID: To whom it may concern, I would like to ask whether the half flat (Arabic quarter-tone flat) is now available in MEI, so that it can be encoded as an MEI accidental, and whether Verovio supports rendering this feature. Please provide an example (.svg). If yes, can we also encode this half flat at the beginning of the musical piece (the armature, i.e. the key signature) in order to apply it to a specified string in the musical measure? 
Thank you for your cooperation, Charbel El Achkar -------------- next part -------------- An HTML attachment was scrubbed... URL: From craigsapp at gmail.com Tue Jul 3 12:59:23 2018 From: craigsapp at gmail.com (Craig Sapp) Date: Tue, 3 Jul 2018 12:59:23 +0200 Subject: [MEI-L] Half flat (Arabic quarter-tone flat) In-Reply-To: References: Message-ID: Hello Charbel, Yes, half-flats are possible in MEI and verovio. Here is the MEI list of accidentals: http://music-encoding.org/guidelines/v3/data-types/data.accidental.explicit.html ("1qf" is the name of the "one quarter-tone flat" in MEI data) > if yes can we also encode this half-flat at the beginning of the musical piece (armature) in order to apply it to a specified string in the musical measure? Do you mean including half-flats in the key signature, or another graphical system for the instrument tuning? The first should be possible in MEI, although I do not know whether it is implemented in verovio; the second is unlikely to be implemented in verovio, and is probably not possible in MEI. Here is an example with half-flats. 
The rendering in verovio: The MEI encoding: <?xml version="1.0" encoding="UTF-8"?> <?xml-model href="http://music-encoding.org/schema/4.0.0/mei-all.rng" type="application/xml" schematypens="http://relaxng.org/ns/structure/1.0"?> <?xml-model href="http://music-encoding.org/schema/4.0.0/mei-all.rng" type="application/xml" schematypens="http://purl.oclc.org/dsdl/schematron"?> <mei xmlns="http://www.music-encoding.org/ns/mei" meiversion="4.0.0"> <meiHead> <fileDesc> <titleStmt> <title /> </titleStmt> <pubStmt /> </fileDesc> <encodingDesc> <appInfo> <application isodate="2018-07-03T12:47:10" version="2.0.0-dev-f99f5ee"> <name>Verovio</name> <p>Transcoded from Humdrum</p> </application> </appInfo> </encodingDesc> <workDesc> <work> <titleStmt> <title /> </titleStmt> </work> </workDesc> </meiHead> <music> <body> <mdiv xml:id="mdiv-0000001806357079"> <score xml:id="score-0000001745921925"> <scoreDef xml:id="scoredef-0000000200022524"> <staffGrp xml:id="staffgrp-0000001765537353"> <staffDef xml:id="staffdef-0000001889892317" clef.shape="G" clef.line="2" n="1" lines="5"> <label xml:id="label-0000000078207176" /> </staffDef> </staffGrp> </scoreDef> <section xml:id="section-L1F1"> <measure xml:id="measure-L1" right="dbl" n="0" type="m-0"> <staff xml:id="staff-0000001901581718" n="1"> <layer xml:id="layer-L1F1N1" n="1"> <note xml:id="note-L3F1" type="qon-0 qoff-1 pname-d acc-n oct-4 b40c-8 b12c-2 " dur="4" oct="4" pname="d" accid.ges="n" /> <note xml:id="note-L4F1" type="qon-1 qoff-2 pname-e acc-f oct-4 b40c-13 b12c-3 " dur="4" oct="4" pname="e" accid="1qf" /> <note xml:id="note-L5F1" type="qon-2 qoff-3 pname-f acc-n oct-4 b40c-19 b12c-5 " dur="4" oct="4" pname="f" accid.ges="n" /> <note xml:id="note-L6F1" type="qon-3 qoff-4 pname-g acc-n oct-4 b40c-25 b12c-7 " dur="4" oct="4" pname="g" accid.ges="n" /> <note xml:id="note-L7F1" type="qon-4 qoff-5 pname-a acc-n oct-4 b40c-31 b12c-9 " dur="4" oct="4" pname="a" accid.ges="n" /> <note xml:id="note-L8F1" type="qon-5 qoff-6 pname-b acc-f oct-4 b40c-36 b12c-10 " dur="4" oct="4" pname="b" accid="f" /> <note xml:id="note-L9F1" type="qon-6 qoff-7 pname-c acc-n oct-5 b40c-2 b12c-0 " dur="4" oct="5" pname="c" accid.ges="n" /> <note xml:id="note-L10F1" type="qon-7 qoff-8 pname-d acc-n oct-5 b40c-8 b12c-2 " dur="4" oct="5" pname="d" accid.ges="n" /> </layer> </staff> <dir xml:id="dir-L3F1" place="above" staff="1" tstamp="1.000000"> <rend xml:id="rend-0000000740703624" 
fontstyle="normal">Bayati</rend> </dir> </measure> <measure xml:id="measure-L11" right="dbl" type="m--1"> <staff xml:id="staff-L11F1N1" n="1"> <layer xml:id="layer-L11F1N1" n="1"> <note xml:id="note-L13F1" type="qon-8 qoff-9 pname-c acc-n oct-4 b40c-2 b12c-0 " dur="4" oct="4" pname="c" accid.ges="n" /> <note xml:id="note-L14F1" type="qon-9 qoff-10 pname-d acc-n oct-4 b40c-8 b12c-2 " dur="4" oct="4" pname="d" accid.ges="n" /> <note xml:id="note-L15F1" type="qon-10 qoff-11 pname-e acc-f oct-4 b40c-13 b12c-3 " dur="4" oct="4" pname="e" accid="1qf" /> <note xml:id="note-L16F1" type="qon-11 qoff-12 pname-f acc-n oct-4 b40c-19 b12c-5 " dur="4" oct="4" pname="f" accid.ges="n" /> <note xml:id="note-L17F1" type="qon-12 qoff-13 pname-g acc-n oct-4 b40c-25 b12c-7 " dur="4" oct="4" pname="g" accid.ges="n" /> <note xml:id="note-L18F1" type="qon-13 qoff-14 pname-a acc-n oct-4 b40c-31 b12c-9 " dur="4" oct="4" pname="a" accid.ges="n" /> <note xml:id="note-L19F1" type="qon-14 qoff-15 pname-b acc-f oct-4 b40c-36 b12c-10 " dur="4" oct="4" pname="b" accid="1qf" /> <note xml:id="note-L20F1" type="qon-15 qoff-16 pname-c acc-n oct-5 b40c-2 b12c-0 " dur="4" oct="5" pname="c" accid.ges="n" /> </layer> </staff> <dir xml:id="dir-L13F1" place="above" staff="1" tstamp="1.000000"> <rend xml:id="rend-0000002144627832" fontstyle="normal">Rast</rend> </dir> </measure> <measure xml:id="measure-L21" type="m--1"> <staff xml:id="staff-L21F1N1" n="1"> <layer xml:id="layer-L21F1N1" n="1"> <note xml:id="note-L23F1" type="qon-16 qoff-17 pname-d acc-n oct-4 b40c-8 b12c-2 " dur="4" oct="4" pname="d" accid.ges="n" /> <note xml:id="note-L24F1" type="qon-17 qoff-18 pname-e acc-f oct-4 b40c-13 b12c-3 " dur="4" oct="4" pname="e" accid="1qf" /> <note xml:id="note-L25F1" type="qon-18 qoff-19 pname-f acc-n oct-4 b40c-19 b12c-5 " dur="4" oct="4" pname="f" accid.ges="n" /> <note xml:id="note-L26F1" type="qon-19 qoff-20 pname-g acc-f oct-4 b40c-24 b12c-6 " dur="4" oct="4" pname="g" accid="f" /> <note 
xml:id="note-L27F1" type="qon-20 qoff-21 pname-a acc-n oct-4 b40c-31 b12c-9 " dur="4" oct="4" pname="a" accid.ges="n" /> <note xml:id="note-L28F1" type="qon-21 qoff-22 pname-b acc-f oct-4 b40c-36 b12c-10 " dur="4" oct="4" pname="b" accid="f" /> <note xml:id="note-L29F1" type="qon-22 qoff-23 pname-c acc-n oct-5 b40c-2 b12c-0 " dur="4" oct="5" pname="c" accid.ges="n" /> <note xml:id="note-L30F1" type="qon-23 qoff-24 pname-d acc-n oct-5 b40c-8 b12c-2 " dur="4" oct="5" pname="d" accid.ges="n" /> </layer> </staff> <dir xml:id="dir-L23F1" place="above" staff="1" tstamp="1.000000"> <rend xml:id="rend-0000000817275699" fontstyle="normal">Sabba</rend> </dir> </measure> <measure xml:id="measure-L31" right="end" type="m--1"> <staff xml:id="staff-L31F1N1" n="1"> <layer xml:id="layer-L31F1N1" n="1"> <note xml:id="note-L33F1" type="qon-24 qoff-25 pname-e acc-f oct-4 b40c-13 b12c-3 " dur="4" oct="4" pname="e" accid="1qf" /> <note xml:id="note-L34F1" type="qon-25 qoff-26 pname-f acc-n oct-4 b40c-19 b12c-5 " dur="4" oct="4" pname="f" accid.ges="n" /> <note xml:id="note-L35F1" type="qon-26 qoff-27 pname-g acc-n oct-4 b40c-25 b12c-7 " dur="4" oct="4" pname="g" accid.ges="n" /> <note xml:id="note-L36F1" type="qon-27 qoff-28 pname-a acc-n oct-4 b40c-31 b12c-9 " dur="4" oct="4" pname="a" accid.ges="n" /> <note xml:id="note-L37F1" type="qon-28 qoff-29 pname-b acc-f oct-4 b40c-36 b12c-10 " dur="4" oct="4" pname="b" accid="1qf" /> <note xml:id="note-L38F1" type="qon-29 qoff-30 pname-c acc-n oct-5 b40c-2 b12c-0 " dur="4" oct="5" pname="c" accid.ges="n" /> <note xml:id="note-L39F1" type="qon-30 qoff-31 pname-d acc-n oct-5 b40c-8 b12c-2 " dur="4" oct="5" pname="d" accid.ges="n" /> <note xml:id="note-L40F1" type="qon-31 qoff-32 pname-e acc-f oct-5 b40c-13 b12c-3 " dur="4" oct="5" pname="e" accid="1qf" /> </layer> </staff> <dir xml:id="dir-L33F1" place="above" staff="1" tstamp="1.000000"> <rend xml:id="rend-0000000800832592" fontstyle="normal">Siga</rend> </dir> </measure> </section> </score> 
</mdiv> </body> </music> </mei> -=+Craig On 3 July 2018 at 12:37, Charbel El Achkar <charbelelachkar at hotmail.com> wrote: > To whom it may concern, > > > I would like to ask if the Half flat (Arabic quarter-tone flat) is > available now in MEI in order to encode it as MEI accidental and if > verovio supports the rendering of this feature? > > please provide an example (.svg) . if yes can we also encode this > half-flat at the beginning of the musical piece (armature) in order to > apply it to a specified string in the musical measure? > > > Thank you for your cooperation, > > Charbel El Achkar > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l > > -------------- next part -------------- An HTML attachment was scrubbed... URL: <http://lists.uni-paderborn.de/pipermail/mei-l/attachments/20180703/d033d6be/attachment.html> -------------- next part -------------- A non-text attachment was scrubbed... Name: Screen Shot 2018-07-03 at 12.49.40 PM.png Type: image/png Size: 87360 bytes Desc: not available URL: <http://lists.uni-paderborn.de/pipermail/mei-l/attachments/20180703/d033d6be/attachment.png> From charbelelachkar at hotmail.com Tue Jul 3 13:30:39 2018 From: charbelelachkar at hotmail.com (Charbel El Achkar) Date: Tue, 3 Jul 2018 11:30:39 +0000 Subject: [MEI-L] Half flat (Arabic quarter-tone flat) In-Reply-To: <CAPcjuFf6Zfnv0X=S-=cxEwfKBHQzP-O60bXnFEb1ebCzMhHk-A@mail.gmail.com> References: <VI1PR0602MB342313BC92134BEE9D392417A7420@VI1PR0602MB3423.eurprd06.prod.outlook.com>, <CAPcjuFf6Zfnv0X=S-=cxEwfKBHQzP-O60bXnFEb1ebCzMhHk-A@mail.gmail.com> Message-ID: <VI1PR0602MB342312F06DF4BE3B688953E2A7420@VI1PR0602MB3423.eurprd06.prod.outlook.com> Hello Craig, First of all, thank you for replying. In my question concerning the half-flat, I meant the Arabic half-flat (Arabic quarter-tone flat) that has the same symbol as the one shown in the picture above. 
The one that you sent to me is the Turkish half-flat that has a different symbol than this one. Does this (Arabic half-flat) exist now in MEI accidental and can we include it in the key signature? Can verovio also support the rendering of the half-flat described above? Thank you in advance, Charbel El Achkar ________________________________ From: Craig Sapp <craigsapp at gmail.com> Sent: Tuesday, July 3, 2018 1:59 PM To: Music Encoding Initiative Cc: charbelelachkar at hotmail.com Subject: Re: [MEI-L] Half flat (Arabic quarter-tone flat) [...] -------------- next part -------------- An HTML attachment was scrubbed... URL: <http://lists.uni-paderborn.de/pipermail/mei-l/attachments/20180703/8746d0da/attachment.html> -------------- next part -------------- A non-text attachment was scrubbed... Name: Screen Shot 2018-07-03 at 12.49.40 PM.png Type: image/png Size: 87360 bytes Desc: Screen Shot 2018-07-03 at 12.49.40 PM.png URL: <http://lists.uni-paderborn.de/pipermail/mei-l/attachments/20180703/8746d0da/attachment.png> -------------- next part -------------- A non-text attachment was scrubbed... 
Name: Arabic_music_notation_half_flat.png Type: image/png Size: 87385 bytes Desc: Arabic_music_notation_half_flat.png URL: <http://lists.uni-paderborn.de/pipermail/mei-l/attachments/20180703/8746d0da/attachment-0001.png> From craigsapp at gmail.com Tue Jul 3 15:05:45 2018 From: craigsapp at gmail.com (Craig Sapp) Date: Tue, 3 Jul 2018 15:05:45 +0200 Subject: [MEI-L] Half flat (Arabic quarter-tone flat) In-Reply-To: <VI1PR0602MB342312F06DF4BE3B688953E2A7420@VI1PR0602MB3423.eurprd06.prod.outlook.com> References: <VI1PR0602MB342313BC92134BEE9D392417A7420@VI1PR0602MB3423.eurprd06.prod.outlook.com> <CAPcjuFf6Zfnv0X=S-=cxEwfKBHQzP-O60bXnFEb1ebCzMhHk-A@mail.gmail.com> <VI1PR0602MB342312F06DF4BE3B688953E2A7420@VI1PR0602MB3423.eurprd06.prod.outlook.com> Message-ID: <CAPcjuFcqTjQ6PQTTMUT5G-D5U=yDMJ0-UdEJh_jixin_owAnUw@mail.gmail.com> Hello Charbel, It is now possible in MEI to encode the flat with a slash through it to represent a quarter-flat symbol. The SMuFL font includes such an accidental: http://www.smufl.org/version/latest/glyph/accidentalBakiyeFlat from this group of accidentals: http://www.smufl.org/version/latest/range/arelEzgiUzdilekAeuAccidentals A more specific description of an accidental can be added to a note in MEI by attaching an accid element to the note element: http://music-encoding.org/guidelines/v3/elements/accid.html So encoding such a flat currently in MEI would be: <note dur="4" oct="5" pname="e"> <accid accid="1qf" glyphnum="#xE442" /> </note> The attribute "glyphnum" specifies the SMuFL character number, which is E442 for the flat symbol with a slash through it. I have tried this encoding in verovio, and it is not yet implemented. You could add a request for implementing it in verovio here: https://github.com/rism-ch/verovio/issues (requires a free account on github to post the request).
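For the key-signature ("armature") part of the question, the signature could in principle be spelled out explicitly with keyAccid children inside a keySig. The following is only an untested sketch: whether keyAccid accepts @glyphnum here, and whether any renderer (including verovio) would actually draw the slashed flat in the signature, are assumptions on my part.

<staffDef n="1" lines="5" clef.shape="G" clef.line="2">
   <keySig>
      <!-- hypothetical: a quarter-flat on B carried by the signature -->
      <keyAccid pname="b" accid="1qf" glyphnum="#xE442" />
   </keySig>
</staffDef>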
Here is the previous example I encoded using the E442 glyphnum for: <?xml version="1.0" encoding="UTF-8"?> <?xml-model href="http://music-encoding.org/schema/4.0.0/mei-all.rng" type="application/xml" schematypens="http://relaxng.org/ns/structure/1.0"?> <?xml-model href="http://music-encoding.org/schema/4.0.0/mei-all.rng" type="application/xml" schematypens="http://purl.oclc.org/dsdl/schematron"?> <mei xmlns="http://www.music-encoding.org/ns/mei" meiversion="4.0.0"> <meiHead> <fileDesc> <titleStmt> <title /> </titleStmt> <pubStmt /> </fileDesc> <encodingDesc> <appInfo> <application isodate="2018-07-03T14:48:46" version="2.0.0-dev-f99f5ee"> <name>Verovio</name> <p>Transcoded from Humdrum</p> </application> </appInfo> </encodingDesc> <workDesc> <work> <titleStmt> <title /> </titleStmt> </work> </workDesc> </meiHead> <music> <body> <mdiv xml:id="mdiv-0000000921666019"> <score xml:id="score-0000002027726695"> <scoreDef xml:id="scoredef-0000001348949585"> <staffGrp xml:id="staffgrp-0000001221224857"> <staffDef xml:id="staffdef-0000000660985943" clef.shape="G" clef.line="2" n="1" lines="5"> <label xml:id="label-0000000200297176" /> </staffDef> </staffGrp> </scoreDef> <section xml:id="section-L1F1"> <measure xml:id="measure-L1" right="dbl" n="0"> <staff xml:id="staff-0000001652783252" n="1"> <layer xml:id="layer-L1F1N1" n="1"> <note xml:id="note-L3F1" dur="4" oct="4" pname="d" accid.ges="n" /> <note xml:id="note-L4F1" dur="4" oct="4" pname="e"> <accid xml:id="accid-L4F1" accid="1qf" glyphnum="#xE442" /> </note> <note xml:id="note-L5F1" dur="4" oct="4" pname="f" accid.ges="n" /> <note xml:id="note-L6F1" dur="4" oct="4" pname="g" accid.ges="n" /> <note xml:id="note-L7F1" dur="4" oct="4" pname="a" accid.ges="n" /> <note xml:id="note-L8F1" dur="4" oct="4" pname="b" accid="f" /> <note xml:id="note-L9F1" dur="4" oct="5" pname="c" accid.ges="n" /> <note xml:id="note-L10F1" dur="4" oct="5" pname="d" accid.ges="n" /> </layer> </staff> <dir xml:id="dir-L3F1" place="above" staff="1" 
tstamp="1.000000"> <rend xml:id="rend-0000001462587673" fontstyle="normal">Bayati</rend> </dir> </measure> <measure xml:id="measure-L11" right="dbl"> <staff xml:id="staff-L11F1N1" n="1"> <layer xml:id="layer-L11F1N1" n="1"> <note xml:id="note-L13F1" dur="4" oct="4" pname="c" accid.ges="n" /> <note xml:id="note-L14F1" dur="4" oct="4" pname="d" accid.ges="n" /> <note xml:id="note-L15F1" dur="4" oct="4" pname="e"> <accid xml:id="accid-L15F1" accid="1qf" glyphnum="#xE442" /> </note> <note xml:id="note-L16F1" dur="4" oct="4" pname="f" accid.ges="n" /> <note xml:id="note-L17F1" dur="4" oct="4" pname="g" accid.ges="n" /> <note xml:id="note-L18F1" dur="4" oct="4" pname="a" accid.ges="n" /> <note xml:id="note-L19F1" dur="4" oct="4" pname="b"> <accid xml:id="accid-L19F1" accid="1qf" glyphnum="#xE442" /> </note> <note xml:id="note-L20F1" dur="4" oct="5" pname="c" accid.ges="n" /> </layer> </staff> <dir xml:id="dir-L13F1" place="above" staff="1" tstamp="1.000000"> <rend xml:id="rend-0000001649109636" fontstyle="normal">Rast</rend> </dir> </measure> <measure xml:id="measure-L21"> <staff xml:id="staff-L21F1N1" n="1"> <layer xml:id="layer-L21F1N1" n="1"> <note xml:id="note-L23F1" dur="4" oct="4" pname="d" accid.ges="n" /> <note xml:id="note-L24F1" dur="4" oct="4" pname="e"> <accid xml:id="accid-L24F1" accid="1qf" glyphnum="#xE442" /> </note> <note xml:id="note-L25F1" dur="4" oct="4" pname="f" accid.ges="n" /> <note xml:id="note-L26F1" dur="4" oct="4" pname="g" accid="f" /> <note xml:id="note-L27F1" dur="4" oct="4" pname="a" accid.ges="n" /> <note xml:id="note-L28F1" dur="4" oct="4" pname="b" accid="f" /> <note xml:id="note-L29F1" dur="4" oct="5" pname="c" accid.ges="n" /> <note xml:id="note-L30F1" dur="4" oct="5" pname="d" accid.ges="n" /> </layer> </staff> <dir xml:id="dir-L23F1" place="above" staff="1" tstamp="1.000000"> <rend xml:id="rend-0000000149562672" fontstyle="normal">Sabba</rend> </dir> </measure> <measure xml:id="measure-L31" right="end"> <staff xml:id="staff-L31F1N1" 
n="1"> <layer xml:id="layer-L31F1N1" n="1"> <note xml:id="note-L33F1" dur="4" oct="4" pname="e"> <accid xml:id="accid-L33F1" accid="1qf" glyphnum="#xE442" /> </note> <note xml:id="note-L34F1" dur="4" oct="4" pname="f" accid.ges="n" /> <note xml:id="note-L35F1" dur="4" oct="4" pname="g" accid.ges="n" /> <note xml:id="note-L36F1" dur="4" oct="4" pname="a" accid.ges="n" /> <note xml:id="note-L37F1" dur="4" oct="4" pname="b"> <accid xml:id="accid-L37F1" accid="1qf" glyphnum="#xE442" /> </note> <note xml:id="note-L38F1" dur="4" oct="5" pname="c" accid.ges="n" /> <note xml:id="note-L39F1" dur="4" oct="5" pname="d" accid.ges="n" /> <note xml:id="note-L40F1" dur="4" oct="5" pname="e"> <accid xml:id="accid-L40F1" accid="1qf" glyphnum="#xE442" /> </note> </layer> </staff> <dir xml:id="dir-L33F1" place="above" staff="1" tstamp="1.000000"> <rend xml:id="rend-0000001172085304" fontstyle="normal">Siga</rend> </dir> </measure> </section> </score> </mdiv> </body> </music> </mei> -=+Craig On 3 July 2018 at 13:30, Charbel El Achkar <charbelelachkar at hotmail.com> wrote: > Hello Craig, > > > First of all, thank you for replying. > > In my question concerning the half-flat, I meant the Arabic half-flat > (Arabic quarter-tone flat) that has the same symbol as the one shown in the > picture above. > > The one that you sent to me is the Turkish half-flat that has a different > symbol than this one. Does this (Arabic half-flat) exist now in MEI > accidental and can we include it in the key signature? Can verovio also > support the rendering of the half-flat described above? > > > Thank you in advance, > > Charbel El Achkar > > > ------------------------------ > *From:* Craig Sapp <craigsapp at gmail.com> > *Sent:* Tuesday, July 3, 2018 1:59 PM > *To:* Music Encoding Initiative > *Cc:* charbelelachkar at hotmail.com > *Subject:* Re: [MEI-L] Half flat (Arabic quarter-tone flat) > > Hello Charbel, > > Yes, half-flats are possible in MEI and verovio. 
Here is the MEI list of > accidentals: > > http://music-encoding.org/guidelines/v3/data-types/data.accidental.explicit.html > > ("1qf" is the name of the "one quarter-tone flat" in MEI data) > > > if yes can we also encode this half-flat at the beginning of the > musical piece (armature) in order to apply it to a specified string in the > musical measure? > > Do you mean include half-flats in the key signature, or another graphical > system for the instrument tuning? The first should be possible in MEI, > although I do not know if it is implemented in verovio; the second is > unlikely to be implemented in verovio, and probably not in MEI. > > > Here is an example with half-flats. The rendering in verovio: > > > > The MEI encoding: > > <?xml version="1.0" encoding="UTF-8"?> > <?xml-model href="http://music-encoding.org/schema/4.0.0/mei-all.rng" > type="application/xml" schematypens="http://relaxng.org/ns/structure/1.0 > "?> > <?xml-model href="http://music-encoding.org/schema/4.0.0/mei-all.rng" > type="application/xml" schematypens="http://purl.oclc.org/dsdl/schematron > "?> > <mei xmlns="http://www.music-encoding.org/ns/mei" meiversion="4.0.0"> > <meiHead> > <fileDesc> > <titleStmt> > <title /> > </titleStmt> > <pubStmt /> > </fileDesc> > <encodingDesc> > <appInfo> > <application isodate="2018-07-03T12:47:10" > version="2.0.0-dev-f99f5ee"> > <name>Verovio</name> > <p>Transcoded from Humdrum</p> > </application> > </appInfo> > </encodingDesc> > <workDesc> > <work> > <titleStmt> > <title /> > </titleStmt> > </work> > </workDesc> > </meiHead> > <music> > <body> > <mdiv xml:id="mdiv-0000001806357079"> > <score xml:id="score-0000001745921925"> > <scoreDef xml:id="scoredef-0000000200022524"> > <staffGrp xml:id="staffgrp-0000001765537353"> > <staffDef xml:id="staffdef-0000001889892317" > 
clef.shape="G" clef.line="2" n="1" lines="5"> > <label xml:id="label-0000000078207176" /> > </staffDef> > </staffGrp> > </scoreDef> > <section xml:id="section-L1F1"> > <measure xml:id="measure-L1" right="dbl" n="0" > type="m-0"> > <staff xml:id="staff-0000001901581718" n="1"> > <layer xml:id="layer-L1F1N1" n="1"> > <note xml:id="note-L3F1" type="qon-0 > qoff-1 pname-d acc-n oct-4 b40c-8 b12c-2 " dur="4" oct="4" pname="d" > accid.ges="n" /> > <note xml:id="note-L4F1" type="qon-1 > qoff-2 pname-e acc-f oct-4 b40c-13 b12c-3 " dur="4" oct="4" pname="e" > accid="1qf" /> > <note xml:id="note-L5F1" type="qon-2 > qoff-3 pname-f acc-n oct-4 b40c-19 b12c-5 " dur="4" oct="4" pname="f" > accid.ges="n" /> > <note xml:id="note-L6F1" type="qon-3 > qoff-4 pname-g acc-n oct-4 b40c-25 b12c-7 " dur="4" oct="4" pname="g" > accid.ges="n" /> > <note xml:id="note-L7F1" type="qon-4 > qoff-5 pname-a acc-n oct-4 b40c-31 b12c-9 " dur="4" oct="4" pname="a" > accid.ges="n" /> > <note xml:id="note-L8F1" type="qon-5 > qoff-6 pname-b acc-f oct-4 b40c-36 b12c-10 " dur="4" oct="4" pname="b" > accid="f" /> > <note xml:id="note-L9F1" type="qon-6 > qoff-7 pname-c acc-n oct-5 b40c-2 b12c-0 " dur="4" oct="5" pname="c" > accid.ges="n" /> > <note xml:id="note-L10F1" type="qon-7 > qoff-8 pname-d acc-n oct-5 b40c-8 b12c-2 " dur="4" oct="5" pname="d" > accid.ges="n" /> > </layer> > </staff> > <dir xml:id="dir-L3F1" place="above" staff="1" > tstamp="1.000000"> > <rend xml:id="rend-0000000740703624" > fontstyle="normal">Bayati</rend> > </dir> > </measure> > <measure xml:id="measure-L11" right="dbl" > type="m--1"> > <staff xml:id="staff-L11F1N1" n="1"> > <layer xml:id="layer-L11F1N1" n="1"> > <note xml:id="note-L13F1" type="qon-8 > qoff-9 pname-c acc-n oct-4 b40c-2 b12c-0 " dur="4" oct="4" pname="c" > accid.ges="n" /> > <note xml:id="note-L14F1" type="qon-9 > qoff-10 pname-d acc-n oct-4 b40c-8 b12c-2 " dur="4" oct="4" pname="d" > accid.ges="n" /> > <note xml:id="note-L15F1" type="qon-10 > qoff-11 pname-e acc-f 
oct-4 b40c-13 b12c-3 " dur="4" oct="4" pname="e" > accid="1qf" /> > <note xml:id="note-L16F1" type="qon-11 > qoff-12 pname-f acc-n oct-4 b40c-19 b12c-5 " dur="4" oct="4" pname="f" > accid.ges="n" /> > <note xml:id="note-L17F1" type="qon-12 > qoff-13 pname-g acc-n oct-4 b40c-25 b12c-7 " dur="4" oct="4" pname="g" > accid.ges="n" /> > <note xml:id="note-L18F1" type="qon-13 > qoff-14 pname-a acc-n oct-4 b40c-31 b12c-9 " dur="4" oct="4" pname="a" > accid.ges="n" /> > <note xml:id="note-L19F1" type="qon-14 > qoff-15 pname-b acc-f oct-4 b40c-36 b12c-10 " dur="4" oct="4" pname="b" > accid="1qf" /> > <note xml:id="note-L20F1" type="qon-15 > qoff-16 pname-c acc-n oct-5 b40c-2 b12c-0 " dur="4" oct="5" pname="c" > accid.ges="n" /> > </layer> > </staff> > <dir xml:id="dir-L13F1" place="above" > staff="1" tstamp="1.000000"> > <rend xml:id="rend-0000002144627832" > fontstyle="normal">Rast</rend> > </dir> > </measure> > <measure xml:id="measure-L21" type="m--1"> > <staff xml:id="staff-L21F1N1" n="1"> > <layer xml:id="layer-L21F1N1" n="1"> > <note xml:id="note-L23F1" type="qon-16 > qoff-17 pname-d acc-n oct-4 b40c-8 b12c-2 " dur="4" oct="4" pname="d" > accid.ges="n" /> > <note xml:id="note-L24F1" type="qon-17 > qoff-18 pname-e acc-f oct-4 b40c-13 b12c-3 " dur="4" oct="4" pname="e" > accid="1qf" /> > <note xml:id="note-L25F1" type="qon-18 > qoff-19 pname-f acc-n oct-4 b40c-19 b12c-5 " dur="4" oct="4" pname="f" > accid.ges="n" /> > <note xml:id="note-L26F1" type="qon-19 > qoff-20 pname-g acc-f oct-4 b40c-24 b12c-6 " dur="4" oct="4" pname="g" > accid="f" /> > <note xml:id="note-L27F1" type="qon-20 > qoff-21 pname-a acc-n oct-4 b40c-31 b12c-9 " dur="4" oct="4" pname="a" > accid.ges="n" /> > <note xml:id="note-L28F1" type="qon-21 > qoff-22 pname-b acc-f oct-4 b40c-36 b12c-10 " dur="4" oct="4" pname="b" > accid="f" /> > <note xml:id="note-L29F1" type="qon-22 > qoff-23 pname-c acc-n oct-5 b40c-2 b12c-0 " dur="4" oct="5" pname="c" > accid.ges="n" /> > <note xml:id="note-L30F1" type="qon-23 
> qoff-24 pname-d acc-n oct-5 b40c-8 b12c-2 " dur="4" oct="5" pname="d" > accid.ges="n" /> > </layer> > </staff> > <dir xml:id="dir-L23F1" place="above" > staff="1" tstamp="1.000000"> > <rend xml:id="rend-0000000817275699" > fontstyle="normal">Sabba</rend> > </dir> > </measure> > <measure xml:id="measure-L31" right="end" > type="m--1"> > <staff xml:id="staff-L31F1N1" n="1"> > <layer xml:id="layer-L31F1N1" n="1"> > <note xml:id="note-L33F1" type="qon-24 > qoff-25 pname-e acc-f oct-4 b40c-13 b12c-3 " dur="4" oct="4" pname="e" > accid="1qf" /> > <note xml:id="note-L34F1" type="qon-25 > qoff-26 pname-f acc-n oct-4 b40c-19 b12c-5 " dur="4" oct="4" pname="f" > accid.ges="n" /> > <note xml:id="note-L35F1" type="qon-26 > qoff-27 pname-g acc-n oct-4 b40c-25 b12c-7 " dur="4" oct="4" pname="g" > accid.ges="n" /> > <note xml:id="note-L36F1" type="qon-27 > qoff-28 pname-a acc-n oct-4 b40c-31 b12c-9 " dur="4" oct="4" pname="a" > accid.ges="n" /> > <note xml:id="note-L37F1" type="qon-28 > qoff-29 pname-b acc-f oct-4 b40c-36 b12c-10 " dur="4" oct="4" pname="b" > accid="1qf" /> > <note xml:id="note-L38F1" type="qon-29 > qoff-30 pname-c acc-n oct-5 b40c-2 b12c-0 " dur="4" oct="5" pname="c" > accid.ges="n" /> > <note xml:id="note-L39F1" type="qon-30 > qoff-31 pname-d acc-n oct-5 b40c-8 b12c-2 " dur="4" oct="5" pname="d" > accid.ges="n" /> > <note xml:id="note-L40F1" type="qon-31 > qoff-32 pname-e acc-f oct-5 b40c-13 b12c-3 " dur="4" oct="5" pname="e" > accid="1qf" /> > </layer> > </staff> > <dir xml:id="dir-L33F1" place="above" > staff="1" tstamp="1.000000"> > <rend xml:id="rend-0000000800832592" > fontstyle="normal">Siga</rend> > </dir> > </measure> > </section> > </score> > </mdiv> > </body> > </music> > </mei> > > > -=+Craig > > > > > > On 3 July 2018 at 12:37, Charbel El Achkar <charbelelachkar at hotmail.com> > wrote: > > To whom it may concern, > > > I would like to ask if the Half flat (Arabic quarter-tone flat) is > available now in MEI in order to encode it as MEI accidental 
and if > verovio supports the rendering of this feature? > > please provide an example (.svg) . if yes can we also encode this > half-flat at the beginning of the musical piece (armature) in order to > apply it to a specified string in the musical measure? > > > Thank you for your cooperation, > > Charbel El Achkar > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: <http://lists.uni-paderborn.de/pipermail/mei-l/attachments/20180703/73a7e576/attachment.html> -------------- next part -------------- A non-text attachment was scrubbed... Name: Screen Shot 2018-07-03 at 12.49.40 PM.png Type: image/png Size: 87360 bytes Desc: not available URL: <http://lists.uni-paderborn.de/pipermail/mei-l/attachments/20180703/73a7e576/attachment.png> -------------- next part -------------- A non-text attachment was scrubbed... Name: Screen Shot 2018-07-03 at 3.05.07 PM.png Type: image/png Size: 4385 bytes Desc: not available URL: <http://lists.uni-paderborn.de/pipermail/mei-l/attachments/20180703/73a7e576/attachment-0001.png> From andrew.hankinson at mail.mcgill.ca Fri Jul 6 10:22:15 2018 From: andrew.hankinson at mail.mcgill.ca (Andrew Hankinson) Date: Fri, 6 Jul 2018 08:22:15 +0000 Subject: [MEI-L] MEI Autumn Working Meeting in Oxford Message-ID: <92748448-E341-45BC-93F3-90FBDD48F944@mail.mcgill.ca> Hello all, The MEI Technical Group is pleased to announce that we will be having an MEI Working Meeting, October 30?November 2, at the University of Oxford, co-hosted by the Oxford e-Research Centre and the Bodleian Libraries. As mentioned at this year's Music Encoding Conference, the goal of this meeting is to bring people together to build and develop tools and resources that support the larger MEI community. To this end, it will be less formal than our 'traditional' conference. 
It will be structured in such a way as to promote co-operative and collaborative development of the tools that support our community -- a "Hack Week" for MEI, if you will. Some possible topics include: - Development of MEI Tutorial Material - Improvements to the MEI Guidelines and the MEI Website - A better automated schema testing system - Improvements to, and bug-fixes for, Verovio - Improvements to the process and tools for updating MEI 3 to (the forthcoming) MEI 4 - Improvements to MEI editors (e.g., the oXygen or Atom plugins) - Improvements to the Sibelius MEI plugin This is by no means an exhaustive list, and suggestions from the community and participants are welcome! While you *do not* have to be a programmer to participate (documentation improvements and training material are very high on our list of priorities!), you will need to provide your own computer. There will be no registration cost to attend this meeting, but registration will be required so we can plan for space. More details will be coming shortly on how to register. Also, if you are sure you are coming, it would be wise to secure accommodation sooner rather than later, as Oxford can fill up quite quickly. We look forward to seeing you in Oxford! Please let me know if you have any questions. Andrew Hankinson Kevin Page David Lewis Local organizers From Paul.Gulewycz at oeaw.ac.at Tue Jul 10 18:30:58 2018 From: Paul.Gulewycz at oeaw.ac.at (Gulewycz, Paul) Date: Tue, 10 Jul 2018 16:30:58 +0000 Subject: [MEI-L] Invisible staves and content Message-ID: <a72840755a83459783c581460b0b8ce4@oeaw.ac.at> Dear MEI community, many of you might currently be on vacation or enjoying the sunny weather, but we have to ask you for a short interruption, because we need your opinion and advice on an issue concerning the encoding of invisible content and staves. In our project on the digital edition of the study book of Anton Bruckner, we came across many pages in which multiple staves need to be made invisible. 
In particular, there is one exercise across five pages, where the instrumentation changes every two lines, sometimes even after each line. That's why the header in this file contains 32 different staffDefs. We were trying to hide unnecessary staves by inserting @visible="false" into the staff elements in the measures in question, but, of course, that only makes the content invisible; the staves and barlines themselves remain visible. We talked to Laurent about this issue and he would be happy to implement a function in Verovio to change the layout in such a way, except not via making content invisible on the staff-level, but by using scoreDef. Now, the questions are: should @visible be made usable for changing the layout? If not, then scoreDef would probably be most suitable, which leads to the question: how should this information be included in scoreDef? In our special case, we would like to use the @label-information of the staffDef in not only the first system, but also each time the instrumentation changes. So we could cheat, delete every invisible staff and use <dir> to label the staves when new instruments occur, but this is not pretty at all and we would not be happy with this solution. Thank you very much in advance! Best regards, Agnes Seipelt, Peter Provaznik and Paul Gulewycz Paul Gulewycz, BA Österreichische Akademie der Wissenschaften Institut für kunst- und musikhistorische Forschungen AG Digital Musicology Dr. Ignaz Seipel-Platz 2 A-1010 Wien +43/650/646 94 32 -------------- next part -------------- An HTML attachment was scrubbed... 
Name: A-WnMus.Hs.44706-231.jpg Type: image/jpeg Size: 1061551 bytes Desc: A-WnMus.Hs.44706-231.jpg URL: <http://lists.uni-paderborn.de/pipermail/mei-l/attachments/20180710/35cd77ec/attachment.jpg> -------------- next part -------------- A non-text attachment was scrubbed... Name: 105.png Type: image/png Size: 32020 bytes Desc: 105.png URL: <http://lists.uni-paderborn.de/pipermail/mei-l/attachments/20180710/35cd77ec/attachment.png> From andrew.hankinson at mail.mcgill.ca Tue Jul 10 19:26:41 2018 From: andrew.hankinson at mail.mcgill.ca (Andrew Hankinson) Date: Tue, 10 Jul 2018 17:26:41 +0000 Subject: [MEI-L] Invisible staves and content In-Reply-To: <a72840755a83459783c581460b0b8ce4@oeaw.ac.at> References: <a72840755a83459783c581460b0b8ce4@oeaw.ac.at> Message-ID: <F9C6FAAA-F1F0-418B-A730-FE5051D46090@mail.mcgill.ca> It looks like it really is a measure-by-measure showing and hiding of particular staves, so I might agree with your first instincts of adding @visible to individual staves. I would be interested in hearing more about why this is not a good option, though. What will you expect the rendered score to look like? That may inform what the best option is for encoding it. On 10 Jul 2018, at 17:32, Gulewycz, Paul <Paul.Gulewycz at oeaw.ac.at> wrote: [...] -------------- next part -------------- An HTML attachment was scrubbed... 
URL: <http://lists.uni-paderborn.de/pipermail/mei-l/attachments/20180710/dc3b808f/attachment.html> From esfield at stanford.edu Wed Jul 11 02:25:25 2018 From: esfield at stanford.edu (Eleanor Selfridge-Field) Date: Wed, 11 Jul 2018 00:25:25 +0000 Subject: [MEI-L] Invisible staves and content In-Reply-To: <F9C6FAAA-F1F0-418B-A730-FE5051D46090@mail.mcgill.ca> References: <a72840755a83459783c581460b0b8ce4@oeaw.ac.at> <F9C6FAAA-F1F0-418B-A730-FE5051D46090@mail.mcgill.ca> Message-ID: <BYAPR02MB4200C5310FB0B1A4F69DA3A3C35A0@BYAPR02MB4200.namprd02.prod.outlook.com> It may be that this question does not have a simple answer, because invisible staves come up in radically different contexts. In our own work with MuseData, there are three situations where they may play a role: 1. Ambiguous scoring (mainly early 18th century): a single written part modified with written cues to signal switches between a single timbre (violin), a different single timbre (oboe), or the two doubling one part. 2. Dialogue set in recitative (18th-19th centuries): Role A and Role B (C, D) in rapid exchange on a single staff, but logically occupying more than one staff (each left empty when not in use). 3. Facsimile of the sort Paul describes: the composer is sketching an idea and tracing the dominant part. In Vivaldi it is common for the editor to supply the viola part, because often it is indicated only as 'colla parte' on a blank staff. If a replica is desired, that may suggest one solution. If the numbers or identities of singers or instruments are changing frequently, that may constitute a situation worth recognizing as sui generis. To generate scores we use a Boolean switch: A and B, or A or B (a total of three states). The implementation works well. 
Eleanor Eleanor Selfridge-Field Braun Music Center #129 541 Lasuen Mall Stanford University Stanford, CA 94305-3076 https://profiles.stanford.edu/eleanor-selfridge-field From: mei-l <mei-l-bounces at lists.uni-paderborn.de> On Behalf Of Andrew Hankinson Sent: Tuesday, July 10, 2018 10:27 AM To: Music Encoding Initiative <mei-l at lists.uni-paderborn.de> Subject: Re: [MEI-L] Invisible staves and content [...] -------------- next part -------------- An HTML attachment was scrubbed... URL: <http://lists.uni-paderborn.de/pipermail/mei-l/attachments/20180711/bb4dc730/attachment.html> From Paul.Gulewycz at oeaw.ac.at Wed Jul 11 15:27:09 2018 From: Paul.Gulewycz at oeaw.ac.at (Gulewycz, Paul) Date: Wed, 11 Jul 2018 13:27:09 +0000 Subject: [MEI-L] Invisible staves and content In-Reply-To: <F9C6FAAA-F1F0-418B-A730-FE5051D46090@mail.mcgill.ca> References: <a72840755a83459783c581460b0b8ce4@oeaw.ac.at> <F9C6FAAA-F1F0-418B-A730-FE5051D46090@mail.mcgill.ca> Message-ID: <2f27ec29a1b5411babd2439d74f8f9f5@oeaw.ac.at> I can't really decide if I find one solution better than the other. I think putting extra layout information into <scoreDef> and/or <staffDef>, e.g. after a <sb>, is a very elegant way to choose which staves should be displayed, as it keeps the encoding slim and easy to work with. 
In our problem especially, it is definitely smarter to choose what is visible than what is invisible. In situations like the ones that Eleanor described, it could work out as well that way, I believe. But I might be mistaken entirely, because maybe I?m missing something. In the case of the study book, we would like to keep this exercise as one single unit and not separate the different parts into many smaller units. Which means, that we would still have 32 staffDefs in the <scoreDef> at the beginning of <score>. Every time there is change in instrumentation, the names of the instruments should be displayed in the beginning to correctly label the staves as soon as they occur. Currently, instrument labels only shown in the first system of the first page. When two staves are visible, the other 30 staves without content have to be hidden in our encoding, as the rendered score should look just like the page in the facsimile. Von: mei-l [mailto:mei-l-bounces at lists.uni-paderborn.de] Im Auftrag von Andrew Hankinson Gesendet: Dienstag, 10. Juli 2018 19:27 An: Music Encoding Initiative <mei-l at lists.uni-paderborn.de> Betreff: Re: [MEI-L] Invisible staves and content It looks like it really is a measure-by-measure showing and hiding of particular staves, so I might agree with your first instincts of adding @visible to individual staves. I would be interested in hearing more about why this is not a good option, though. What will you expect the rendered score to look like? That may inform what the best option is for encoding it. On 10 Jul 2018, at 17:32, Gulewycz, Paul <Paul.Gulewycz at oeaw.ac.at<mailto:Paul.Gulewycz at oeaw.ac.at>> wrote: Dear MEI community, many of you might currently be on vacation or enjoying the sunny weather, but we have to ask you for a short interruption, because we need your opinion and advice on an issue concerning the encoding of invisible content and staves. 
In our project on the digital edition of the study book of Anton Bruckner, we came across many pages in which multiple staves need to be made invisible. In particular, there is one exercise across five pages where the instrumentation changes every two lines, sometimes even after each line. That's why the header in this file contains 32 different staffDefs. We were trying to hide unnecessary staves by inserting @visible="false" into the staff elements in the measures in question, but, of course, that only makes the content invisible; staves and barlines remain visible.

We talked to Laurent about this issue and he would be happy to implement a function in Verovio to change the layout in such a way, though not by making content invisible at the staff level, but by using scoreDef.

Now, the questions are: should @visible be made usable for changing the layout? If not, then scoreDef would probably be most suitable, which leads to the question: how should this information be included in scoreDef? In our special case, we would like to use the @label information of the staffDef not only in the first system, but also each time the instrumentation changes. So we could cheat, delete every invisible staff and use <dir> to label the staves when new instruments occur, but this is not pretty at all and we would not be happy with this solution.

Thank you very much in advance!

Best regards,
Agnes Seipelt, Peter Provaznik and Paul Gulewycz

Paul Gulewycz, BA
Österreichische Akademie der Wissenschaften
Institut für kunst- und musikhistorische Forschungen
AG Digital Musicology
Dr. Ignaz Seipel-Platz 2
A-1010 Wien
+43/650/646 94 32

-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.uni-paderborn.de/pipermail/mei-l/attachments/20180711/42cc5908/attachment.html>

From andrew.hankinson at mail.mcgill.ca Wed Jul 11 15:50:34 2018
From: andrew.hankinson at mail.mcgill.ca (Andrew Hankinson)
Date: Wed, 11 Jul 2018 13:50:34 +0000
Subject: [MEI-L] Invisible staves and content
In-Reply-To: <2f27ec29a1b5411babd2439d74f8f9f5@oeaw.ac.at>
References: <a72840755a83459783c581460b0b8ce4@oeaw.ac.at> <F9C6FAAA-F1F0-418B-A730-FE5051D46090@mail.mcgill.ca> <2f27ec29a1b5411babd2439d74f8f9f5@oeaw.ac.at>
Message-ID: <D4648FB1-1BD3-45B2-BE41-F654882C6788@mail.mcgill.ca>

I think there needs to be a clear separation here between what you're "encoding" (that is, the change of instrumentation) and what you're "displaying" (that is, how it is being rendered). I would favour an encoding-first approach, recognising that display in a digital context will be very different from how it is physically represented on the page.

In a dynamic rendering environment (like Verovio), it would be very hard to maintain the appearance of the original study book while still doing all the things that Verovio does really well, like fitting as much music on the screen as it can depending on screen size and zoom level. Faithfully reproducing the layout and dynamically rendering the notation are two very different tasks.

You said "Every time there is change in instrumentation, the names of the instruments should be displayed in the beginning to correctly label the staves as soon as they occur."

This will be problematic with Verovio. "In the beginning" may be halfway across the screen, depending on screen size and zoom level. Similarly, "as soon as they occur" may necessitate rendering a big blank gap. Or do you want Verovio to automatically condense the staves into a single system, so that the viola starts on the same system as the oboe, for example? Doing these things well, with an infinite number of possible screen sizes and zoom levels, will be very difficult.
Is there provision for your application to display the original facsimile image? Then you would not have to worry about reproducing the layout in Verovio -- you can show your users what the original looked like, while still allowing for things like dynamic playback and interaction through Verovio.

If you really want to try to preserve the look of the original, I would recommend bringing in the <facsimile>/<surface>/<zone> functions and aligning the musical content with the image using pixel-based co-ordinates. But that would be much more difficult to do manually.

-Andrew

> On 11 Jul 2018, at 14:27, Gulewycz, Paul <Paul.Gulewycz at oeaw.ac.at> wrote:
> [...]
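[A minimal sketch of the <facsimile>/<surface>/<zone> alignment mentioned above. The zone co-ordinates and measure content are invented; the image name is borrowed from the attachment referenced in this thread:]

```xml
<!-- Sketch only; co-ordinates and content are illustrative. -->
<music>
  <facsimile>
    <surface n="1">
      <graphic target="A-WnMus.Hs.44706-231.jpg"/>
      <!-- pixel region on the page image corresponding to measure 1 -->
      <zone xml:id="zone.m1" ulx="210" uly="340" lrx="980" lry="520"/>
    </surface>
  </facsimile>
  <body>
    <mdiv>
      <score>
        <!-- ... scoreDef ... -->
        <section>
          <!-- @facs links the encoded measure to the image region above -->
          <measure n="1" facs="#zone.m1">
            <!-- musical content aligned to that region -->
          </measure>
        </section>
      </score>
    </mdiv>
  </body>
</music>
```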
From Paul.Gulewycz at oeaw.ac.at Wed Jul 11 17:03:05 2018
From: Paul.Gulewycz at oeaw.ac.at (Gulewycz, Paul)
Date: Wed, 11 Jul 2018 15:03:05 +0000
Subject: [MEI-L] Invisible staves and content
In-Reply-To: <D4648FB1-1BD3-45B2-BE41-F654882C6788@mail.mcgill.ca>
References: <a72840755a83459783c581460b0b8ce4@oeaw.ac.at> <F9C6FAAA-F1F0-418B-A730-FE5051D46090@mail.mcgill.ca> <2f27ec29a1b5411babd2439d74f8f9f5@oeaw.ac.at> <D4648FB1-1BD3-45B2-BE41-F654882C6788@mail.mcgill.ca>
Message-ID: <1c40cca893c849e285192e36c76434c3@oeaw.ac.at>

We adapted Verovio in such a way that zooming does not change the layout but enlarges or minimizes the score. So the layout will be displayed correctly. And yes, we also want to include the original manuscript next to the edition; that's why we want to keep the layout as close to the facsimile as possible, so the orientation will be enhanced for the users.
It will be possible to select a single page or a composition or an exercise, which will produce the facsimile on the left side and the edition on the right side of the screen.

I agree with your encoding-first approach; unfortunately, all my ideas circle around changing the encoding for the sake of rendering it in the right way, e.g. cross-staff notation via @staff, which is why they are relatively useless. Our problem is very specific, but, needless to say, it would be great to find a solution which others might also find useful for their encoding and displaying of data. I find condensing two staves quite interesting, but again, this would be handy for our adapted Verovio version and probably nothing much else, because of the zoom function, and it also raises the question of where to put the information for doing so.

-----Original Message-----
From: mei-l [mailto:mei-l-bounces at lists.uni-paderborn.de] On Behalf Of Andrew Hankinson
Sent: Wednesday, 11 July 2018 15:51
To: Music Encoding Initiative <mei-l at lists.uni-paderborn.de>
Subject: Re: [MEI-L] Invisible staves and content

[...]
From lxpugin at gmail.com Fri Jul 13 17:57:28 2018
From: lxpugin at gmail.com (Laurent Pugin)
Date: Fri, 13 Jul 2018 17:57:28 +0200
Subject: [MEI-L] Invisible staves and content
In-Reply-To: <1c40cca893c849e285192e36c76434c3@oeaw.ac.at>
References: <a72840755a83459783c581460b0b8ce4@oeaw.ac.at> <F9C6FAAA-F1F0-418B-A730-FE5051D46090@mail.mcgill.ca> <2f27ec29a1b5411babd2439d74f8f9f5@oeaw.ac.at> <D4648FB1-1BD3-45B2-BE41-F654882C6788@mail.mcgill.ca> <1c40cca893c849e285192e36c76434c3@oeaw.ac.at>
Message-ID: <CAJ306HbAgWhQ6CxXycDVOku-MhKx6v=waPoaAECCX_sHi245Sg@mail.gmail.com>

One way to handle this is to use a <scoreDef> to indicate a change in a system's content. That is, if a system displays only a subset of the staves, then you need a <scoreDef> with the desired <staffGrp> and <staffDef>. One question is whether we expect a <scoreDef> to have 1) only the <staffGrp>/<staffDef> that are visible, or 2) always all of them, with @visible="false" on the ones that need to be hidden. Both are valid approaches, I think.
So with:

    <scoreDef meter.sym="common" key.sig="4s">
      <staffGrp barthru="false">
        <staffGrp barthru="true" symbol="bracket">
          <staffDef n="1" label="Flute" key.sig="4s"/>
          <staffDef n="2" label="Oboe" key.sig="4s"/>
          <staffDef n="3" label="Clarinet in Bb" key.sig="6s" trans.semi="-2" trans.diat="-1"/>
        </staffGrp>
        <staffGrp barthru="true" symbol="bracket">
          <staffDef n="4" label="Violin I" key.sig="4s"/>
          <staffDef n="5" label="Violin II" key.sig="4s"/>
          <staffDef n="6" label="Viola" key.sig="4s"/>
          <staffDef n="7" label="Cello" key.sig="4s"/>
        </staffGrp>
      </staffGrp>
    </scoreDef>

we can then have, when a system has only strings, with 1):

    <scoreDef>
      <staffGrp>
        <staffGrp>
          <staffDef n="4"/>
          <staffDef n="5"/>
          <staffDef n="6"/>
          <staffDef n="7"/>
        </staffGrp>
      </staffGrp>
    </scoreDef>

or with 2):

    <scoreDef>
      <staffGrp>
        <staffGrp visible="false">
          <staffDef n="1" visible="false"/>
          <staffDef n="2" visible="false"/>
          <staffDef n="3" visible="false"/>
        </staffGrp>
        <staffGrp>
          <staffDef n="4"/>
          <staffDef n="5"/>
          <staffDef n="6"/>
          <staffDef n="7"/>
        </staffGrp>
      </staffGrp>
    </scoreDef>

In both cases we can also expect an empty <scoreDef> with no <staffGrp>/<staffDef> children to be used, but with a slightly different meaning, typically for indicating a time signature or a key signature change. Changing from C to C/ would be just:

    <scoreDef meter.sym="cut"/>

Now this becomes a bit tricky with transposing instruments when changing the key signature, because a key signature change cannot be specified globally. With solution 1) you cannot do:

    <scoreDef key.sig="2s">
      <staffGrp>
        <staffGrp>
          <staffDef n="3" key.sig="4s"/>
        </staffGrp>
      </staffGrp>
    </scoreDef>

because this means that only the clarinet staff has to be shown. So I assume that here you would need either to use @visible too, or to have two consecutive <scoreDef>s: one for the key signature change, and one for then specifying the staves to be shown.
So something like:

    <scoreDef key.sig="2s">
      <staffGrp>
        <staffGrp>
          <staffDef n="3" key.sig="4s"/>
        </staffGrp>
      </staffGrp>
    </scoreDef>
    <scoreDef>
      <staffGrp>
        <staffGrp>
          <staffDef n="4"/>
          <staffDef n="5"/>
          <staffDef n="6"/>
          <staffDef n="7"/>
        </staffGrp>
      </staffGrp>
    </scoreDef>

Because of this, maybe solution 2) would actually be more appropriate. Other opinions / ideas?

Of course this is all for cases when the desired output is to render encoded page and system breaks, and not for dynamic rendering. With dynamic rendering, the content of the <scoreDef> for the part of the score rendered in a system has to be calculated by the application. One option is to dynamically hide staves that have only an mRest in the corresponding scope. Similarly, a staffGrp would be hidden when all of its staves are empty.

Laurent

On Wed, Jul 11, 2018 at 5:03 PM, Gulewycz, Paul <Paul.Gulewycz at oeaw.ac.at> wrote:
> [...]
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.uni-paderborn.de/pipermail/mei-l/attachments/20180713/f1ff0ef8/attachment.html>

From krichts at mail.uni-paderborn.de Mon Jul 16 13:59:49 2018
From: krichts at mail.uni-paderborn.de (Kristina Richts)
Date: Mon, 16 Jul 2018 13:59:49 +0200
Subject: [MEI-L] Registration for Edirom Summer School now open
Message-ID: <8853B33E-FDC8-4567-B179-5FD35A0803AB@mail.uni-paderborn.de>

Dear colleagues,

we are happy to inform you that registration is now open for this year's Edirom Summer School <https://ess.upb.de/> (ESS). The ESS will take place from September 17th to 21st, 2018 at Paderborn University. The course descriptions can be found on the ESS website: https://ess.upb.de/2018/programm.html

We expressly draw attention to the spotlights and the poster session, which give participants the opportunity to present their projects, ideas and questions. Upon explicit request, we are additionally offering 30-minute spotlights this year. Applications should be sent to us by August 15th. Please let us know by September 3rd if you would like to present a poster.

We would also like to draw your attention to the fact that this year's ESS has a special focus on MEI metadata, consisting of an expert discussion about the Detmold Court Theatre Project <http://hoftheater-detmold.de/> on Wednesday, a one-and-a-half-day metadata discussion on source descriptions with MEI on Thursday and Friday, and a concluding wrap-up of the main results on Friday afternoon. In agreement with the Academy of Sciences and Literature at Mainz, this meeting will serve as the annual meeting of the working group on digital music editions.

We wish you a nice summer and look forward to seeing you in Paderborn in September.

Best wishes on behalf of all members of the Virtual Research Group Edirom
Kristina

--
Kristina Richts M.A., MA LIS
Wissenschaftliche Mitarbeiterin
DFG-Projekt "Detmolder Hoftheater (1825-1875)"
Musikwissenschaftliches Seminar Detmold/Paderborn
Forum Wissenschaft | Bibliothek | Musik
Hornsche Straße 39, Raum 2.12
D-32756 Detmold
Tel.: +49 5231 975 665
E-Mail: kristina.richts at uni-paderborn.de
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.uni-paderborn.de/pipermail/mei-l/attachments/20180716/fb838fc2/attachment.html>

From kepper at edirom.de Mon Aug 6 15:47:37 2018
From: kepper at edirom.de (Johannes Kepper)
Date: Mon, 6 Aug 2018 15:47:37 +0200
Subject: [MEI-L] Job Offer in Detmold
Message-ID: <EB82C730-8912-4DE6-B6FD-106AC2562C2D@edirom.de>

Dear all,

Although it's very short notice, I'm happy to share the attached job offer for the Beethovens Werkstatt project in Detmold. It's a 75% position for someone skilled in MEI, and is initially advertised for three years, but can be extended until 2029. The application deadline is in about two weeks, on August 21. If you have any questions, please contact either Joachim Veit (veit at weber-gesamtausgabe.de) or me off-list.

All best,
Johannes

-------------- next part --------------
A non-text attachment was scrubbed...
Name: Kennziffer3461.pdf
Type: application/pdf
Size: 349567 bytes
Desc: not available
URL: <http://lists.uni-paderborn.de/pipermail/mei-l/attachments/20180806/d5cd6bf4/attachment.pdf>
-------------- next part --------------
Dr. Johannes Kepper
Wissenschaftlicher Mitarbeiter
Beethovens Werkstatt: Genetische Textkritik und Digitale Musikedition
Musikwiss. Seminar Detmold / Paderborn | Hornsche Straße 39 | D-32756 Detmold
kepper at beethovens-werkstatt.de | +49 (0) 5231 / 975669
www.beethovens-werkstatt.de
Forschungsprojekt gefördert durch die Akademie der Wissenschaften und der Literatur | Mainz
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: Message signed with OpenPGP URL: <http://lists.uni-paderborn.de/pipermail/mei-l/attachments/20180806/d5cd6bf4/attachment.sig> From klaus.rettinghaus at gmail.com Thu Aug 23 12:01:20 2018 From: klaus.rettinghaus at gmail.com (Klaus Rettinghaus) Date: Thu, 23 Aug 2018 12:01:20 +0200 Subject: [MEI-L] MEI Autumn Working Meeting in Oxford In-Reply-To: <92748448-E341-45BC-93F3-90FBDD48F944@mail.mcgill.ca> References: <92748448-E341-45BC-93F3-90FBDD48F944@mail.mcgill.ca> Message-ID: <1535018480.2947.0@smtp.gmail.com> Dear Andrew, for better planning and flight booking it would be important for me to know how much time is scheduled on the last day. Best Klaus On Fri, 6 Jul 2018 at 10:22 AM, Andrew Hankinson <andrew.hankinson at mail.mcgill.ca> wrote: > Hello all, > > The MEI Technical Group is pleased to announce that we will be having > an MEI Working Meeting, October 30–November 2, at the University of > Oxford, co-hosted by the Oxford e-Research Centre and the Bodleian > Libraries. > > As mentioned at this year's Music Encoding Conference, the goal of > this meeting is to bring people together to build and develop tools > and resources that support the larger MEI community. To this end, it > will be less formal than our 'traditional' conference. It will be > structured in such a way to promote co-operative and collaborative > development of the tools that support our community -- a "Hack Week" > for MEI, if you will. > > Some possible topics include: > > - Development of MEI Tutorial Material > - Improvements to the MEI Guidelines and the MEI Website > - A better automated schema testing system > - Improvements to, and bug-fixes for, Verovio > - Improvements to the updating process and tools for MEI 3 to (the > forthcoming) MEI 4.
> - Improvements to MEI editors (e.g., the oXygen or Atom plugins) > - Improvements to the Sibelius MEI plugin > > This is by no means an exhaustive list, and suggestions from the > community and participants are welcome! > > While you *do not* have to be a programmer to participate > (Documentation improvements and training material is very high on our > list of priorities!) you will need to provide your own computer. > > There will be no registration cost to attend this meeting, but > registration will be required so we can plan for space. More details > will be coming shortly on how to register. Also, if you are sure you > are coming, it would be wise to secure accommodation sooner rather > than later, as Oxford can fill up quite quickly. > > We look forward to seeing you in Oxford! Please let me know if you > have any questions. > > Andrew Hankinson > Kevin Page > David Lewis > > Local organizers > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l -------------- next part -------------- An HTML attachment was scrubbed... URL: <http://lists.uni-paderborn.de/pipermail/mei-l/attachments/20180823/8fd04b94/attachment.html> From andrew.hankinson at mail.mcgill.ca Thu Aug 23 13:33:44 2018 From: andrew.hankinson at mail.mcgill.ca (Andrew Hankinson) Date: Thu, 23 Aug 2018 11:33:44 +0000 Subject: [MEI-L] MEI Autumn Working Meeting in Oxford In-Reply-To: <1535018480.2947.0@smtp.gmail.com> References: <92748448-E341-45BC-93F3-90FBDD48F944@mail.mcgill.ca> <1535018480.2947.0@smtp.gmail.com> Message-ID: <FFC7F1BA-D1B7-4F73-A9D7-95DD93CED805@mail.mcgill.ca> Hi Klaus, There is no formal program, and no particular schedule. I believe we have the rooms for the day, so we do not have to end at a particular time. You should feel free to make your travel schedule as you see fit. 
-Andrew > On 23 Aug 2018, at 12:01, Klaus Rettinghaus <klaus.rettinghaus at gmail.com> wrote: > > Dear Andrew, > > for better planning and flight booking it would be important for me to know how much time is scheduled on the last day. > > Best > Klaus > > On Fri, 6 Jul 2018 at 10:22 AM, Andrew Hankinson <andrew.hankinson at mail.mcgill.ca> wrote: >> Hello all, >> >> The MEI Technical Group is pleased to announce that we will be having an MEI Working Meeting, October 30–November 2, at the University of Oxford, co-hosted by the Oxford e-Research Centre and the Bodleian Libraries. >> >> As mentioned at this year's Music Encoding Conference, the goal of this meeting is to bring people together to build and develop tools and resources that support the larger MEI community. To this end, it will be less formal than our 'traditional' conference. It will be structured in such a way to promote co-operative and collaborative development of the tools that support our community -- a "Hack Week" for MEI, if you will. >> >> Some possible topics include: >> >> - Development of MEI Tutorial Material >> - Improvements to the MEI Guidelines and the MEI Website >> - A better automated schema testing system >> - Improvements to, and bug-fixes for, Verovio >> - Improvements to the updating process and tools for MEI 3 to (the forthcoming) MEI 4. >> - Improvements to MEI editors (e.g., the oXygen or Atom plugins) >> - Improvements to the Sibelius MEI plugin >> >> This is by no means an exhaustive list, and suggestions from the community and participants are welcome! >> >> While you *do not* have to be a programmer to participate (Documentation improvements and training material is very high on our list of priorities!) you will need to provide your own computer. >> >> There will be no registration cost to attend this meeting, but registration will be required so we can plan for space. More details will be coming shortly on how to register.
Also, if you are sure you are coming, it would be wise to secure accommodation sooner rather than later, as Oxford can fill up quite quickly. >> >> We look forward to seeing you in Oxford! Please let me know if you have any questions. >> >> Andrew Hankinson >> Kevin Page >> David Lewis >> >> Local organizers >> _______________________________________________ >> mei-l mailing list >> mei-l at lists.uni-paderborn.de >> https://lists.uni-paderborn.de/mailman/listinfo/mei-l -------------- next part -------------- An HTML attachment was scrubbed... URL: <http://lists.uni-paderborn.de/pipermail/mei-l/attachments/20180823/8fd04b94/attachment.html> From andrew.hankinson at mail.mcgill.ca Thu Aug 23 13:33:44 2018 From: andrew.hankinson at mail.mcgill.ca (Andrew Hankinson) Date: Thu, 23 Aug 2018 11:33:44 +0000 Subject: [MEI-L] MEI Autumn Working Meeting in Oxford In-Reply-To: <1535018480.2947.0@smtp.gmail.com> References: <92748448-E341-45BC-93F3-90FBDD48F944@mail.mcgill.ca> <1535018480.2947.0@smtp.gmail.com> Message-ID: <FFC7F1BA-D1B7-4F73-A9D7-95DD93CED805@mail.mcgill.ca> Hi Klaus, There is no formal program, and no particular schedule. I believe we have the rooms for the day, so we do not have to end at a particular time. You should feel free to make your travel schedule as you see fit.
Then enjoy a performance by the Villiers String Quartet, revealing some of the practical implications of the project, plus an opportunity to experiment with the technology and discuss the project with the team. This event is free but booking is required. Event: Digital Delius: Unlocking Digitised Music Manuscripts Where: Foyle Visitor and Learning Centre The British Library 96 Euston Road London NW1 2DB Booking: https://www.bl.uk/events/digital-delius-unlocking-digitised-music-manuscripts Enquiries: +44 (0)1937 546546 / boxoffice at bl.uk Programme: 1.30pm Welcome. Introduction to 'Discovering Music' from British Library and Oxford project members. 2pm Panel: 'Online Curation for Digital Musicology: Delius' 'Curating Delius digitally', Daniel Grimley, Faculty of Music, University of Oxford 'The Delius Digital Catalogue', Joanna Bullivant, Faculty of Music, University of Oxford 'Enhancing Delius manuscripts for web users', David Lewis, University of Oxford e-Research Centre 'From manuscripts to performance: a digital workshop', Kevin Page, University of Oxford e-Research Centre From andrew.hankinson at mail.mcgill.ca Tue Aug 28 12:09:04 2018 From: andrew.hankinson at mail.mcgill.ca (Andrew Hankinson) Date: Tue, 28 Aug 2018 10:09:04 +0000 Subject: [MEI-L] Registration Open for MEI Autumn Working Meeting Message-ID: <964DA6E8-22A6-4BAF-939B-EDF9A80E7D66@mail.mcgill.ca> Hello everyone, Registration is now open for the MEI Autumn Working Meeting, October 30–November 2, 2018 at the Oxford e-Research Centre, University of Oxford. Registration is free, but space is limited so you must register so we can gauge the number of participants. PLEASE only register if you are coming. Registration form: https://goo.gl/forms/RuWU63PnMg9wRXXm2 We are also gauging interest in 'virtual participation'.
If you cannot make it in person but would like to try and arrange some time to participate via video conference, please sign up, ensuring you check the 'virtual participation' checkbox on the registration form. The Music Encoding Initiative Board has made available a limited amount of funds to help reduce the costs of travel for some participants. Please indicate on the form if you would like to be considered for this. Logistics: If you are flying in from Heathrow, there is a bus from Heathrow Central Bus Station or Terminal 5 that will bring you to the centre of town. https://airline.oxfordbus.co.uk/heathrow/ If you are coming in via London, you can take trains to Oxford from either Paddington or Marylebone Stations. You can find timetables and tickets here: http://www.nationalrail.co.uk You can get cheaper tickets if you purchase them in advance. The longer you wait, the more expensive they get. Accommodation is very limited in the town centre, and can be quite expensive. You should book it as far in advance as you can. Some (cheaper) college rooms may be available on the University Rooms website: https://www.universityrooms.com Venue & Schedule: The Oxford e-Research Centre can be found at: 7 Keble Rd, Oxford OX1 3QG, UK https://goo.gl/maps/eVWhDYrLkJK2 The schedule for the days will be fairly relaxed. We will aim to start at 9:30 on Tuesday October 30, and probably go until 17:00 each day. Snacks and hot/cold drinks will be provided, but meals are on your own. There are good places to eat close to the e-Research Centre, either for a coffee and a sandwich, or for more discerning palates, and the centre of town is only a few blocks away. We are looking forward to seeing you in Oxford! Please get in touch with any questions or concerns.
Andrew Hankinson Kevin Page David Lewis From kepper at edirom.de Sun Sep 23 20:02:25 2018 From: kepper at edirom.de (Johannes Kepper) Date: Sun, 23 Sep 2018 20:02:25 +0200 Subject: [MEI-L] Clarification of FRBR levels / source Message-ID: <2823E4EB-8833-4D1C-B3C0-4E94CCA88C27@edirom.de> Dear MEI-L, On various occasions, members of the MEI Community have expressed confusion that <workDesc> is a first-level child of <meiHead>, while <sourceDesc> is nested into <fileDesc>. This has historical reasons: Before the introduction of FRBR into MEI, <source> was used to describe both the material used in the preparation of the file, and sources (prints, manuscripts, …) for which there is an editorial interest of some kind. Sometimes, these two fell together, and sometimes they didn't. When implementing the FRBR model, these two use cases were not differentiated clearly. As such a change to the MEI model would clearly break backwards compatibility, and at the same time affect a great number of MEI instances, it seems best not to implement it in the upcoming MEI 4.0.0, but instead in a later revision of MEI. At the same time, MEI 4.0.0 could include a deprecation warning, giving users and developers sufficient room for discussing a potential solution and testing its consequences on their use of MEI. This same message has been filed as an issue on the MEI GitHub account over at https://github.com/music-encoding/music-encoding/issues/546. Perry and I have already started work on a potential solution. You're invited to join our discussion either here on MEI-L, or directly at GitHub. If you have a GitHub account, you can hit the "Watch" button on that page and receive email notifications whenever someone contributes to the GitHub issue. All best, jo -------------- next part -------------- A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: Message signed with OpenPGP URL: <http://lists.uni-paderborn.de/pipermail/mei-l/attachments/20180923/de8b3a74/attachment.sig> From kepper at beethovens-werkstatt.de Thu Sep 27 20:07:58 2018 From: kepper at beethovens-werkstatt.de (Johannes Kepper) Date: Thu, 27 Sep 2018 20:07:58 +0200 Subject: [MEI-L] Invisible staves and content In-Reply-To: <CAJ306HbAgWhQ6CxXycDVOku-MhKx6v=waPoaAECCX_sHi245Sg@mail.gmail.com> References: <a72840755a83459783c581460b0b8ce4@oeaw.ac.at> <F9C6FAAA-F1F0-418B-A730-FE5051D46090@mail.mcgill.ca> <2f27ec29a1b5411babd2439d74f8f9f5@oeaw.ac.at> <D4648FB1-1BD3-45B2-BE41-F654882C6788@mail.mcgill.ca> <1c40cca893c849e285192e36c76434c3@oeaw.ac.at> <CAJ306HbAgWhQ6CxXycDVOku-MhKx6v=waPoaAECCX_sHi245Sg@mail.gmail.com> Message-ID: <CE956821-C981-4A44-B6E9-017EB64084E9@beethovens-werkstatt.de> Sorry for coming late to the party ;-) after considering several options, I think that Laurent's second suggestion is the only solution that seems to have the right balance between encoding of a source and enabling a dynamically generated layout. I think all the problems that Laurent mentioned later on make it relatively clear that the first suggestion is leaving out too much. Another approach might have been to have a single scoreDef at the beginning of the file, which contains all staves. Each measure would then contain only staff elements for parts which are actually visible in that measure. However, this contradicts some discussions we had over the last few years, and it also complicates the encoding of multiple sources, which may not always "flatten" the score in the same way. Especially this requirement seems to forbid the use of an attribute on <staff> (or elsewhere in the measure): It's much easier to have multiple <scoreDef>s nested in an <app> / <rdg>, each capturing the specific situation for one (or more) source(s). 
So I would suggest allowing @visible on <staffDef> and <staffGrp>, and requiring the presence of all <staffDef>s in a <scoreDef>. The only way around this would be a statement in the Guidelines saying that the absence of staffDef/@visible indicates that this staff is visible. If a <staffDef> is omitted, then this rule would still apply. That way, one could selectively modify the key (including transposing instruments) _and_ specify invisible staves at the same time. However, as that often has consequences on <staffGrp>s, it might still not be sufficient to encode only the changing staves. Therefore, I believe this "default behaviour" approach won't be as clear to interpret as it needs to be, and so we should not implement it. To make a long story short: I'm all in for Laurent's second proposal. All best, jo > On 13 Jul 2018, at 17:57, Laurent Pugin <lxpugin at gmail.com> wrote: > > One way to handle this is to use a <scoreDef> for indicating a change in system content. That is, if a system displays only a subset of the staves, then you need to have a <scoreDef> with the desired <staffGrp> and <staffDef>. One question is whether we expect a <scoreDef> to have 1) only the <staffGrp>/<staffDef> that are visible, or 2) always all of them, with @visible="false" on the ones that need to be hidden. Both are a valid approach, I think.
> > So with: > > <scoreDef meter.sym="common" key.sig="4s"> > <staffGrp barthru="false"> > <staffGrp barthru="true" symbol="bracket"> > <staffDef n="1" label="Flute" key.sig="4s" /> > <staffDef n="2" label="Oboe" key.sig="4s" /> > <staffDef n="3" label="Clarinet in Bb" key.sig="6s" trans.semi="-2" trans.diat="-1" /> > </staffGrp> > <staffGrp barthru="true" symbol="bracket"> > <staffDef n="4" label="Violin I" key.sig="4s" /> > <staffDef n="5" label="Violin II" key.sig="4s" /> > <staffDef n="6" label="Viola" key.sig="4s" /> > <staffDef n="7" label="Cello" key.sig="4s" /> > </staffGrp> > </staffGrp> > </scoreDef> > > When a system has only strings, we can then have with 1): > > <scoreDef> > <staffGrp> > <staffGrp> > <staffDef n="4" /> > <staffDef n="5" /> > <staffDef n="6" /> > <staffDef n="7" /> > </staffGrp> > </staffGrp> > </scoreDef> > > Or with 2): > > <scoreDef> > <staffGrp> > <staffGrp visible="false"> > <staffDef n="1" visible="false" /> > <staffDef n="2" visible="false" /> > <staffDef n="3" visible="false" /> > </staffGrp> > <staffGrp> > <staffDef n="4" /> > <staffDef n="5" /> > <staffDef n="6" /> > <staffDef n="7" /> > </staffGrp> > </staffGrp> > </scoreDef> > > In both cases we can also expect an empty <scoreDef> with no <staffGrp>/<staffDef> children to be used, but with a slightly different meaning, typically for indicating a time signature or a key signature change. Changing from C to C/ would be just > > <scoreDef meter.sym="cut"/> > > Now this becomes a bit tricky with transposing instruments when changing the key signature, because a key signature change cannot be specified globally. With solution 1), you cannot do: > > <scoreDef key.sig="2s"> > <staffGrp> > <staffGrp> > <staffDef n="3" key.sig="4s" /> > </staffGrp> > </staffGrp> > </scoreDef> > > because this means that only the clarinet staff has to be shown.
So I assume that here you would need to either use @visible too, or to have two consecutive <scoreDef>s, one for the key signature change, and one for then specifying the staves to be shown. So something like: > > <scoreDef key.sig="2s"> > <staffGrp> > <staffGrp> > <staffDef n="3" key.sig="4s" /> > </staffGrp> > </staffGrp> > </scoreDef> > <scoreDef> > <staffGrp> > <staffGrp> > <staffDef n="4" /> > <staffDef n="5" /> > <staffDef n="6" /> > <staffDef n="7" /> > </staffGrp> > </staffGrp> > </scoreDef> > > Because of this, maybe solution 2) would actually be more appropriate. Other opinions / ideas? > > Of course this is all for cases when the desired output is to render encoded page and system breaks, and not for dynamic rendering. With dynamic rendering, the content of the <scoreDef> for the part of the score rendered in a system has to be calculated by the application. One option is to dynamically hide staves that have only mRest in the corresponding scope. Similarly, a staffGrp will be hidden when all staves are empty. > > Laurent > > > > On Wed, Jul 11, 2018 at 5:03 PM, Gulewycz, Paul <Paul.Gulewycz at oeaw.ac.at> wrote: > We adapted Verovio in such a way that zooming does not change the layout but enlarges or minimizes the score. So the layout will be displayed correctly. And yes, we also want to include the original manuscript next to the edition; that's why we want to keep the layout as close to the facsimile as possible, so the orientation will be enhanced for the users. It will be possible to select a single page or a composition or an exercise, which will produce the facsimile on the left side and the edition on the right side of the screen. > > I agree with your encoding-first approach; unfortunately, all my ideas circle around changing the encoding for the sake of rendering it in the right way, e.g. cross-staff notation via @staff, which is why they are relatively useless.
Our problem is very specific, but, needless to say, it would be great to find a solution which others might also find useful for their encoding and displaying of data. I find condensing two staves quite interesting, but again, this would be handy for an/our adapted Verovio version and probably nothing much else, because of the zoom function, and it also raises the question of where to put the information for doing so. > -----Original Message----- > From: mei-l [mailto:mei-l-bounces at lists.uni-paderborn.de] On Behalf Of Andrew Hankinson > Sent: Wednesday, 11 July 2018 15:51 > To: Music Encoding Initiative <mei-l at lists.uni-paderborn.de> > Subject: Re: [MEI-L] Invisible staves and content > > I think there needs to be a clear separation here between what you're "encoding" (that is, the change of instrumentation) and what you're "displaying" (that is, how it is being rendered). I would favour an encoding-first approach, recognising that display in a digital context will be very different than how it is physically represented on the page. > > In a dynamic rendering environment (like Verovio), it would be very hard to maintain the appearance of the original study book while still doing all the things that Verovio does really well, like fitting as much music on the screen as it can depending on screen size and zoom level. Faithfully reproducing the layout, and dynamically rendering the notation, are two very different tasks. > > You said "Every time there is a change in instrumentation, the names of the instruments should be displayed in the beginning to correctly label the staves as soon as they occur." > > This will be problematic with Verovio. "in the beginning" may be halfway across the screen, depending on screen size and zoom level. Similarly "as soon as they occur" may necessitate rendering a big blank gap. Or do you want Verovio to automatically condense the staves into a single system so that the Viola starts on the same system as the oboe?
(for example). Doing these things well, with an infinite number of possible screen sizes and zoom levels, will be very difficult. > > Is there provision for your application to display the original facsimile image? Then you would not have to worry about reproducing the layout in Verovio -- you can show your users what the original looked like, while still allowing for things like dynamic playback and interaction through Verovio. > > If you really want to try and preserve the look of the original, I would recommend bringing in the <surface>/<facsimile>/<zone> functions and aligning the musical content with the image using pixel-based co-ordinates. But that would be much more difficult to do manually. > > -Andrew > > > On 11 Jul 2018, at 14:27, Gulewycz, Paul <Paul.Gulewycz at oeaw.ac.at> wrote: > > I can't really decide if I find one solution better than the other. I think putting extra layout information into <scoreDef> and/or <staffDef> e.g. after a <sb> is a very elegant way to choose which staves should be displayed, as it keeps the encoding slim and easy to work with. In our problem especially, it is definitely smarter to choose what is visible than what is invisible. In situations like the ones that Eleanor described, it could work out as well that way, I believe. > > But I might be mistaken entirely, because maybe I'm missing something. > > > > In the case of the study book, we would like to keep this exercise as one single unit and not separate the different parts into many smaller units. Which means that we would still have 32 staffDefs in the <scoreDef> at the beginning of <score>. Every time there is a change in instrumentation, the names of the instruments should be displayed in the beginning to correctly label the staves as soon as they occur. Currently, instrument labels are only shown in the first system of the first page.
When two staves are visible, the other 30 staves without content have to be hidden in our encoding, as the rendered score should look just like the page in the facsimile. > > > > From: mei-l [mailto:mei-l-bounces at lists.uni-paderborn.de] On Behalf Of > > Andrew Hankinson > > Sent: Tuesday, 10 July 2018 19:27 > > To: Music Encoding Initiative <mei-l at lists.uni-paderborn.de> > > Subject: Re: [MEI-L] Invisible staves and content > > > > It looks like it really is a measure-by-measure showing and hiding of particular staves, so I might agree with your first instincts of adding @visible to individual staves. I would be interested in hearing more about why this is not a good option, though. > > > > What will you expect the rendered score to look like? That may inform what the best option is for encoding it. > > > > On 10 Jul 2018, at 17:32, Gulewycz, Paul <Paul.Gulewycz at oeaw.ac.at> wrote: > > > > Dear MEI community, > > > > many of you might currently be on vacation or enjoying the sunny weather, but we have to ask you for a short interruption, because we need your opinion and advice on an issue concerning the encoding of invisible content and staves. > > > > In our project on the digital edition of the study book of Anton Bruckner, we came across many pages in which multiple staves need to be made invisible. In particular, there is one exercise across five pages, where the instrumentation changes every two lines, sometimes even after each line. That's why the header in this file contains 32 different staffDefs. We were trying to hide unnecessary staves by inserting @visible="false" into the staff elements in the measures in question, but, of course, that only makes the content invisible; staves and barlines remain unhidden. > > > > We talked to Laurent about this issue and he would be happy to implement a function in Verovio to change the layout in such a way, except not via making content invisible on the staff-level, but by using scoreDef.
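The bookkeeping behind option 2) above (keeping every <staffDef> in the <scoreDef> and marking the hidden ones, plus any <staffGrp> whose staves are all hidden, with visible="false") can be applied mechanically. The following sketch is an editorial illustration only, not project code or a Verovio feature: it uses Python's standard-library xml.etree.ElementTree on a stripped-down <scoreDef> (no labels or key signatures) shaped like Laurent's seven-staff example.

```python
import xml.etree.ElementTree as ET

MEI_NS = "http://www.music-encoding.org/ns/mei"

# A simplified scoreDef: winds (staves 1-3) and strings (staves 4-7).
SCOREDEF = """<scoreDef xmlns="http://www.music-encoding.org/ns/mei">
  <staffGrp>
    <staffGrp>
      <staffDef n="1"/><staffDef n="2"/><staffDef n="3"/>
    </staffGrp>
    <staffGrp>
      <staffDef n="4"/><staffDef n="5"/><staffDef n="6"/><staffDef n="7"/>
    </staffGrp>
  </staffGrp>
</scoreDef>"""


def hide_staves(scoredef, hidden):
    """Set visible="false" on the given staff numbers, and on any
    staffGrp whose staffDef children are all hidden (option 2 above)."""
    for sd in scoredef.iter(f"{{{MEI_NS}}}staffDef"):
        if sd.get("n") in hidden:
            sd.set("visible", "false")
    # A group is hidden only when every direct staffDef child is hidden;
    # groups that contain only sub-groups (like the outer one) are skipped.
    for grp in scoredef.iter(f"{{{MEI_NS}}}staffGrp"):
        defs = grp.findall(f"{{{MEI_NS}}}staffDef")
        if defs and all(d.get("visible") == "false" for d in defs):
            grp.set("visible", "false")


scoredef = ET.fromstring(SCOREDEF)
hide_staves(scoredef, {"1", "2", "3"})  # a strings-only system: hide the winds
```

Serialising the result gives a <scoreDef> structurally like Laurent's option-2 example: staves 1-3 and their enclosing inner <staffGrp> carry visible="false", while staves 4-7 and the outer group stay untouched.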
> > > > Now, the questions are: should @visible be made usable for changing the layout? If not, then scoreDef would probably be most suitable, which leads to the question: how should this information be included in scoreDef? > > In our special case, we would like to use the @label-information of the staffDef in not only the first system, but also each time the instrumentation changes. So we could cheat, delete every invisible staff and use <dir> to label the staves, when new instruments occur, but this is not pretty at all and we would not be happy with this solution. > > > > Thank you very much in advance! > > > > Best regards, > > Agnes Seipelt, Peter Provaznik and Paul Gulewycz > > > > > > Paul Gulewycz, BA > > Österreichische Akademie der Wissenschaften Institut für kunst- und > > musikhistorische Forschungen AG Digital Musicology Dr. Ignaz > > Seipel-Platz 2 > > A-1010 Wien > > +43/650/646 94 32 > > > > <A-WnMus.Hs.44706-231.jpg> > > <105.png> > > _______________________________________________ > > mei-l mailing list > > mei-l at lists.uni-paderborn.de > > https://lists.uni-paderborn.de/mailman/listinfo/mei-l > > _______________________________________________ > > mei-l mailing list > > mei-l at lists.uni-paderborn.de > > https://lists.uni-paderborn.de/mailman/listinfo/mei-l > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l > > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l From f.wiering at UU.NL Fri Oct 5 09:53:20 2018 From: f.wiering at UU.NL (Frans Wiering) Date: Fri, 5 Oct 2018 09:53:20 +0200 Subject: [MEI-L] Call for Papers | DH2019 - ADHO Digital Humanities Conference |
Utrecht, The Netherlands | 9-12 July 2019 Message-ID: <caa0f7df-7edb-9da6-6934-86274eec2285@UU.NL> [Apologies for cross-posting. Please feel free to forward this CfP to interested parties] **DH2019 Call for Papers** ADHO Digital Humanities Conference - DH2019 The Alliance of Digital Humanities Organizations (ADHO) invites submission of proposals for its annual conference, to be hosted by Utrecht University (The Netherlands), 9-12 July 2019. Preconference workshops are scheduled for 8-9 July 2019. Conference website: http://dh2019.adho.org/ **Conference Theme** The theme of the 2019 conference is 'Complexities'. This theme has a multifaceted connection with Digital Humanities scholarship. 'Complexities' intends to inspire people to focus on Digital Humanities (DH) as the humanist way of building complex models of complex realities, analysing them with computational methods and communicating the results to a broader public. The theme also invites people to think of the theoretical, social, and cultural complexity and diversity in which DH scholarship is immersed, and asks our community to interact consciously and critically in myriad ways, through the conference and the networks, institutions and enterprises interested in DH research. Finally, it means involving the next generation, teaching DH to students, the people who will need to deal with the complexities of the future. Proposals related to these themes are particularly welcome, but DH2019 will accept submissions on any other aspect or field of Digital Humanities. For more details, see the full Call for Papers on the conference website.
**Formats that can be proposed** - Posters (abstract maximum 750 words) - Short papers (abstract maximum 1000 words) - Long papers (abstract maximum 1500 words) - Multiple-paper panels (500-word abstracts + 500-word overview) - Pre-conference workshops and tutorials (proposal maximum 1500 words) **Important Dates** Submission deadline for papers: 11:59pm GMT, 27 November 2018 Submission deadline for workshops and tutorials: 11:59pm GMT, 10 January 2019 Date for notifications: 3 May 2019 Workshops: 8-9 July 2019 Conference: 9-12 July 2019 **Check the conference website** For more detailed information on the submission procedure, selection criteria, programme committee, venue TivoliVredenburg <https://www.tivolivredenburg.nl/english/plan-your-visit/>, registration, bursaries and local organizers, check the DH2019 webpage: http://dh2019.adho.org/. -------------- next part -------------- An HTML attachment was scrubbed... URL: <http://lists.uni-paderborn.de/pipermail/mei-l/attachments/20181005/3405a1cd/attachment.html> From andrew.hankinson at mail.mcgill.ca Tue Oct 16 22:14:54 2018 From: andrew.hankinson at mail.mcgill.ca (Andrew Hankinson) Date: Tue, 16 Oct 2018 20:14:54 +0000 Subject: [MEI-L] MEI Board elections 2018: Call for candidates Message-ID: <C4067C67-147C-4EF2-ABE2-CA2D8B6B1ECA@mail.mcgill.ca> Dear MEI Community, On 31 December 2018 the terms of three MEI Board members will come to an end. The entire Board wishes to thank Benjamin W. Bohl, Ichiro Fujinaga, and Perry Roland for their service and dedication to the MEI community. In order to fill these soon-to-be-vacant positions, elections must be held.
The election process will take place in accordance with the Music Encoding Initiative By-Laws.[1] To nominate a candidate, please do so via this form: https://goo.gl/forms/TRGPzYLnpqEQtk2g2 The timeline of the elections will be as follows: Nomination phase (17 October - 17 November, 2018) - Nominations can be sent by filling in the nomination form between 17 October - 17 November, 2018.[2] - Any person who subscribes to MEI-L has the right to nominate candidates. - Nominees have to be members of the MEI-L mailing list but may register until 17 November 2018. - Individuals who have previously served on the Board are eligible for nomination and re-appointment. - Self nominations are welcome. - Individuals will be informed of their nomination when received and asked to confirm their willingness to serve on the Board. - Acceptance of a nomination requires submission of a short CV and a personal statement of interest in MEI (a maximum of 200 words each) to elections at music-encoding.org by 20 November, 2018. Candidates who have been nominated but who have not confirmed their willingness will not be included on the ballot. Election phase (21 November - 7 December 2018) - The voting period will be open from 21 November - 7 December, 2018. - The election will take place using OpaVote and the Ranked Choice Voting method (https://www.opavote.com/methods/ranked-choice-voting). - You will be informed about the election and your individual voting tokens in a separate e-mail. Post election phase - Election results will be announced after the elections have closed. - The term of the elected candidates starts on 1 January 2019. - The first meeting of the new MEI Board will be held on Tuesday, 15 January 2019. The selection of Board members is an opportunity for each of you to have a voice in determining the future of MEI.
Thank you for your support, Peter Stadler and Andrew Hankinson MEI election administrators 2018 by appointment of the MEI Board [1] The By-laws of the Music Encoding Initiative are available online at: http://music-encoding.org/community/mei-by-laws.html [2] All deadlines are referenced to 11:59 pm (UTC) From andrew.hankinson at mail.mcgill.ca Sat Oct 20 10:40:27 2018 From: andrew.hankinson at mail.mcgill.ca (Andrew Hankinson) Date: Sat, 20 Oct 2018 08:40:27 +0000 Subject: [MEI-L] MEC Montreal Final Report Message-ID: <786B921F-FF3C-4ACB-AE06-431EF0B9BDE3@mail.mcgill.ca> Dear Music Encoding Community, As part of our reporting process for the Music Encoding conference in Montreal (2016), we are looking to gauge the outcomes of the conference. I am hoping members of the MEI community can help us by reporting back to me any outcomes that came out of your participation in this conference. So, if you have a moment, could you please let me know if you had: - Any papers published in a journal where you initially presented your work at MEC Montreal - Any research collaborations you created or maintained when you attended the conference - Any local media attention your work generated - Any additional presentations you gave (other than the one you gave at the conference) If you were a student attendee we would be particularly interested in your experiences. You can send your responses to me privately off-list. Many thanks, -Andrew From T.Crawford at gold.ac.uk Tue Oct 23 13:45:11 2018 From: T.Crawford at gold.ac.uk (Tim Crawford) Date: Tue, 23 Oct 2018 11:45:11 +0000 Subject: [MEI-L] MEC Montreal Final Report In-Reply-To: <786B921F-FF3C-4ACB-AE06-431EF0B9BDE3@mail.mcgill.ca> References: <786B921F-FF3C-4ACB-AE06-431EF0B9BDE3@mail.mcgill.ca> Message-ID: <0D26EA20-65E0-4A52-A28C-044F180F6244@gold.ac.uk> How about this: Among the students at the Digital Musicology Workshop at the Oxford Digital Humanities Summer School in July 2015 was Dr Jessica Schwartz, a lecturer from UCLA.
Dr Schwartz pointed out to us that although the guitar tabs she uses in her teaching of a course on the history of Punk exist in profusion on the internet, they are in a great and unorganised variety of formats; there is thus a need for a standard encoding. This encoding issue led to our development, within the Transforming Musicology project (Goldsmiths, AHRC 2013-2017), of an MEI extension to include a larger range of tablature formats (the one initially provided within MEI was quite inadequate). A paper on ways in which this could be used was presented at MEC 2016 by Tim Crawford with Dr Schwartz. Jessica expressed interest in working on a joint project on the use of guitar tabs in a pedagogical context. This led to a Follow-On Funding for Impact and Engagement grant ("Learn to Play: Computational Assessment of Playability for Users' Practice", AHRC 2017-18). The work is now continuing within the international TROMPA project (EU, 2018-2021), and will contribute to the uptake of MEI within that project and elsewhere. The potential of tablature support for MEI is in fact huge, both for research on historical repertories and, due to the existence of vast quantities of tablature available for download, for commercial exploitation. Alongside this, development of Verovio to enable tablature display is also a desideratum. See you on Tuesday! Tim > On 20 Oct 2018, at 09:40, Andrew Hankinson <andrew.hankinson at mail.mcgill.ca> wrote: > > Dear Music Encoding Community, > > As part of our reporting process for the Music Encoding conference in Montreal (2016), we are looking to gauge the outcomes of the conference. I am hoping members of the MEI community can help us by reporting back to me any outcomes that came out of your participation in this conference.
> > So, if you have a moment, could you please let me know if you had: > > - Any papers published in a journal where you initially presented your work at MEC Montreal > - Any research collaborations you created or maintained when you attended the conference > - Any local media attention your work generated > - Any additional presentations you gave (other than the one you gave at the conference) > > If you were a student attendee we would be particularly interested in your experiences. > > You can send your responses to me privately off-list. > > Many thanks, > -Andrew > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 3864 bytes Desc: not available URL: <http://lists.uni-paderborn.de/pipermail/mei-l/attachments/20181023/f3b39ff0/attachment.bin> From kevin.page at oerc.ox.ac.uk Tue Oct 23 17:23:29 2018 From: kevin.page at oerc.ox.ac.uk (Kevin Page) Date: Tue, 23 Oct 2018 15:23:29 +0000 Subject: [MEI-L] CfP: Music Encoding Conference 2019, 29th May - 1st June, Vienna Message-ID: <ffa7ab2e-aaa2-7195-ea08-edb709d49e20@oerc.ox.ac.uk> MUSIC ENCODING CONFERENCE 2019 Wednesday 29 May - Saturday 1 June 2019, University of Vienna, Austria CALL FOR PROPOSALS http://music-encoding.org/conference/2019/ The Music Encoding Conference 2019 calls for paper, poster, panel, and workshop proposals to be submitted by 7 December 2018. The seventh Music Encoding Conference will take place in 2019, continuing this key annual event for dissemination and discussion for those working with, and on, music encoding. 
The 2019 conference will be held in the beautiful and culturally rich city of Vienna, Austria, jointly organised by the Austrian Academy of Sciences and the Mozart Institute of the Mozarteum Foundation Salzburg, on behalf of the Music Encoding Initiative community. The conference will be hosted at the University of Vienna over four days, with pre-conference workshops on Wednesday 29 May, the formal programme on Thursday 30 and Friday 31 May, and an ‘unconference’ on Saturday 1 June. BACKGROUND When using and manipulating digital music information, the properties and behaviours of its encoding are of fundamental importance - be that for musicological study, music theory, production of digital editions, composition, performance, teaching and learning, cataloguing, symbolic music information retrieval and recommendation, or more general electronic presentation of musical material and associated narratives. The study of music encoding and its applications is therefore a critical foundation for the use of music information by scholars, librarians, publishers, and the wider music industry. The Music Encoding Conference has emerged as the foremost international forum where researchers and practitioners from across these diverse fields can meet and explore new developments in music encoding and its use. The Conference celebrates a multidisciplinary programme, combining the latest advances from established music encodings, novel technical proposals and encoding extensions, and the presentation or evaluation of new practical applications of music encoding (e.g. in academic study, libraries, editions, commercial products). Pre-conference workshops provide an opportunity to quickly engage with best practice in the community. Newcomers are encouraged to submit to the main programme with articulations of the potential for music encoding in their work, highlighting strengths and weaknesses of existing approaches within this context. 
Following the formal programme, on Saturday 1 June, an unconference session fosters collaboration in the community through the meeting of Interest Groups, and self-selected discussions on hot topics that emerge during the conference. The programme welcomes contributions from all those working on, or with, any music encoding. In addition, the Conference serves as a focus event for the Music Encoding Initiative community, with its annual community meeting scheduled the day following the main programme, on Saturday 1 June. TOPICS The conference welcomes contributions from all those who are developing or applying music encodings in their work and research. Topics include, but are not limited to: * data structures for music encoding * music encoding standardisation * music encoding interoperability / universality * methodologies for encoding, music editing, description and analysis * computational analysis of encoded music * rendering of symbolic music data in audio and graphical forms * conceptual encoding of relationships between multimodal music forms (e.g. symbolic music data, encoded text, facsimile images, audio) * capture, interchange, and re-purposing of musical data and metadata * ontologies, authority files, and linked data in music encoding and description * (symbolic) music information retrieval using music encoding * evaluation of music encodings * best practice in approaches to music encoding and the use or application of music encodings in: * music theory and analysis * digital musicology and, more broadly, digital humanities * music digital libraries * digital editions * bibliographies and bibliographic studies * catalogues * collection management * composition * performance * teaching and learning * search and browsing * multimedia music presentation, exploration, and exhibition SUBMISSIONS The Music Encoding Conference 2019 calls for paper, poster, panel, and workshop proposals. 
All submissions will be reviewed by 2-3 members of the programme committee before acceptance. Authors are invited to upload their anonymized submission for review to our Conftool website: https://www.conftool.net/music-encoding2019 The deadline for all submissions is 7 December 2018 (see IMPORTANT DATES below). Conftool accepts abstracts as PDF files only. The submission to Conftool must include: * name(s) of author(s) * title * abstract (see below for maximum lengths) * current or most recent institutional affiliation of author(s) and e-mail address * proposal type: paper, poster, panel session, or workshop * all identifying information must be provided in the corresponding fields of Conftool only, while the submitted PDF must anonymize the author’s details. Paper and poster proposals must include an abstract of no more than 1000 words. Relevant bibliographic references may be included above this limit (i.e. will not be counted within the 1000 word limit). Please also include a short statement regarding your current interests related to music encoding. Panel discussion proposal abstracts must be no longer than 2000 words, and describe the topic and nature of the discussion, along with short biographies of the participants. Panel discussions are not expected to be a set of papers which could otherwise be submitted as individual papers. Proposals for half- or full-day pre-conference workshops, to be held on May 29th, should include the workshop’s proposed duration, as well as its logistical and technical requirements. Additional details regarding registration, accommodation, etc. will be announced on the conference web page: http://music-encoding.org/conference/2019/ IMPORTANT DATES All deadlines are midnight, Vienna (UTC+1). 
Friday 7 December 2018: Deadline for submissions Friday 25 January 2019: Notifications of acceptance Wednesday 29 May 2019: Pre-conference workshops Thursday 30 and Friday 31 May 2019: Papers, panels, and posters programme Saturday 1 June 2019: Unconference If you have any questions, please e-mail conference2019 at music-encoding.org. CONFERENCE ORGANISATION Programme Committee (in progress) Tim Crawford, Goldsmiths, University of London Anna E. Kijas, Boston College Kevin R. Page, chair, University of Oxford Klaus Rettinghaus, Saxon Academy of Sciences and Humanities in Leipzig Raffaele Viglianti, University of Maryland Local organising committee Robert Klugseder, Institute for History of Art and Musicology, Austrian Academy of Sciences Franz Kelnreiter, Mozart Institute, Mozarteum Foundation Salzburg From roewenstrunk at uni-paderborn.de Tue Oct 30 17:55:21 2018 From: roewenstrunk at uni-paderborn.de (=?utf-8?Q?Daniel_R=C3=B6wenstrunk?=) Date: Tue, 30 Oct 2018 17:55:21 +0100 Subject: [MEI-L] Workshop Research Data in Musicology (German) Message-ID: <AB3CD043-DD25-4AFF-9777-BFEDF3964FF7@uni-paderborn.de> Dear all, we are going to have a workshop on infrastructure for research data in musicology in Paderborn, December 13 and 14. In Germany, there will soon be a call for developing research data infrastructure. The workshop is meant to provide information about the forthcoming call and to consolidate efforts towards applying for a research infrastructure grant for Cultural Heritage. Since the call is limited to German institutions, the workshop will be held in German. The official invitation (originally in German) is below.
Greetings, Daniel ******************** Dear colleagues, In the context of the discussion on a National Research Data Infrastructure (NFDI) initiated by the Rat für Informationsinfrastrukturen (RfII), we (Gesellschaft für Musikforschung, Landesinitiative NFDI der Digitalen Hochschule NRW, Virtueller Forschungsverbund Edirom, and Zentrum Musik – Edition – Medien) cordially invite you to a "Workshop on Research Data in Musicology / Audio-Visual Cultural Heritage" on 13 December (starting at 1:00 pm) and 14 December (ending at 1:00 pm) at the Heinz-Nixdorf-MuseumsForum in Paderborn. Research data and its management are becoming ever more important in the humanities in general and in musicology in particular. In its papers of recent years, the RfII has made clear that handling research data requires a discipline-oriented, interdisciplinary, decentralized research infrastructure. To this end, a joint call by the federal and state governments has been announced, expected at the beginning of next year. In the run-up to this call, three large-scale interdisciplinary workshops on reaching a common understanding within the humanities have already taken place on the initiative of DARIAH, CLARIN, DHd, and the academies of sciences, at which musicology was also represented. Around this initiative, an alliance of musicology, art history, and archaeology has formed to develop a joint consortium, or a discipline-specific node, tailored to the particular medial, material, and legal conditions of these disciplines' objects of study. In our workshop, we would like to clarify together with you what needs musicology has of such an infrastructure with regard to its research data and methods.
We want to review existing services, assess their potential for consolidation, and clarify the question of tasks and responsibilities in the context of a research data infrastructure. The legal aspects of a consortium and the framework conditions of the expected call will also be discussed. It is already clear that an infrastructure for musicology alone would be too small, so we are also inviting other disciplines that are candidates for a joint consortium and/or with which we are already in close consultation. The aim of the workshop is, on the one hand, to determine the needs of musicology and, on the other, to sketch a picture of a possible infrastructure and coordinate it with all potential participants in a cross-disciplinary consortium. We expressly draw your attention to a number of publications by the Rat für Informationsinfrastrukturen and position papers from various professional associations, conveniently compiled on a website for three NFDI workshops held in Berlin in recent months: http://forschungsinfrastrukturen.de/doku.php/positionspapiere <http://forschungsinfrastrukturen.de/doku.php/positionspapiere>. We would be delighted by lively participation from research, memory institutions, and competence and computing centres in this workshop, which is important for shaping our discipline's involvement in the NFDI plans, and, to ease our planning, we ask you to register at https://nfdi.edirom.de <https://nfdi.edirom.de/>. The workshop programme will follow shortly and will be announced via the website and the mailing list, which you can join. If you have any questions, please e-mail info at zenmem.de <mailto:info at zenmem.de>. With kind regards, Prof. Dr.
Dörte Schmidt (Gesellschaft für Musikforschung) Dr. Ania López (Landesinitiative NFDI der Digitalen Hochschule NRW) Prof. Dr. Joachim Veit (Virtueller Forschungsverbund Edirom, Universität Paderborn) Daniel Röwenstrunk (Zentrum Musik – Edition – Medien) ******************** -- Dipl. Wirt. Inf. Daniel Röwenstrunk Geschäftsführung Zentrum Musik – Edition – Medien Universität Paderborn Musikwiss. Seminar Detmold/Paderborn Hornsche Str. 39 32756 Detmold Tel.: +49 5231 975662 Mail: roewenstrunk at uni-paderborn.de <mailto:roewenstrunk at edirom.de> Web: http://www.zenmem.de <http://www.zenmem.de/> -------------- next part -------------- An HTML attachment was scrubbed... URL: <http://lists.uni-paderborn.de/pipermail/mei-l/attachments/20181030/aaf77afc/attachment.html> From andrew.hankinson at mail.mcgill.ca Thu Nov 1 16:43:50 2018 From: andrew.hankinson at mail.mcgill.ca (Andrew Hankinson) Date: Thu, 1 Nov 2018 15:43:50 +0000 Subject: [MEI-L] MEI 4.0 Schema Released Message-ID: <AE178411-5EBA-41BD-8C01-38C0EC59B626@mail.mcgill.ca> ** Please circulate widely and forgive any cross-posting. ** The Board of the Music Encoding Initiative (MEI) is pleased to announce the availability of the MEI 4.0 Schema. The MEI is an international collaborative effort to capture the semantics of music notation documents as machine-readable data. For more information about MEI, please visit https://music-encoding.org. This release comes after two years of extensive community consultation and development, and we thank all of the contributors in the community who have provided valuable input, feedback, critique, and development. These conversations have resulted in the most collaborative music notation schema in history. 
Although there have been many music notation formats, few have achieved broad consultation and feedback, and even fewer have been co-developed in consultation with such a broad spectrum of stakeholders, from industry partners to librarians, publishers to researchers, and computer scientists to digital humanists. It is our sincere hope that we are able to grow and maintain these collaborations into the future. Although we are releasing the schema today, there is still work to be done. Along with this release, we are seeking help from the community in updating the guidelines, documentation, tools, and tutorials. The release of the schema represents the beginning of a transition period, where we will be asking contributors to help us write and expand our documentation, and make MEI more broadly accessible to newcomers and specialists. Over the next few months the technical team will be organizing documentation development "sprints" where we hope members of the community will help contribute content updated to the new specification. The dates for these sprints will be announced on the MEI-L, and members of the technical team will be available on the MEI Slack channel for consultation and discussion during the sprints. It is our goal to present MEI 4.0, both schema and documentation, to the community at the 2019 Music Encoding Conference in Vienna. 
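As a small practical illustration of what updating an encoding to the new specification involves, MEI documents declare the version they target in a meiversion attribute on the root element. The sketch below is a minimal Python example under stated assumptions: the sample document is hypothetical, and real migration checks should validate against the released schema rather than inspect a single attribute:

```python
import xml.etree.ElementTree as ET

# The MEI XML namespace used on the root element of MEI documents.
MEI_NS = "http://www.music-encoding.org/ns/mei"

# Hypothetical minimal document; real files carry full headers and music.
sample = """
<mei xmlns="http://www.music-encoding.org/ns/mei" meiversion="4.0.0">
  <meiHead/>
  <music/>
</mei>
"""

def declared_mei_version(xml_text):
    """Return the meiversion declared on the root <mei> element."""
    root = ET.fromstring(xml_text)
    if root.tag != f"{{{MEI_NS}}}mei":
        raise ValueError("not an MEI document")
    return root.get("meiversion")

print(declared_mei_version(sample))  # 4.0.0
```

A check like this is only a first triage step before running a file through proper schema validation.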
For an overview of the changes in MEI 4.0, please see the Release Notes available on GitHub: https://github.com/music-encoding/music-encoding/releases/tag/v4.0.0 A detailed overview of the changes in the schema is available here: https://music-encoding.org/archive/comparison-4.0.html For the Board, Andrew Hankinson & Perry Roland Technical Co-Chairs The Music Encoding Initiative From luca.ludovico at unimi.it Fri Nov 2 13:46:27 2018 From: luca.ludovico at unimi.it (Luca Andrea Ludovico) Date: Fri, 02 Nov 2018 13:46:27 +0100 Subject: [MEI-L] Invitation to submit to MMRP19 workshop Message-ID: <002001d472aa$12377c60$36a67520$@unimi.it> Dear MEI community members, I am Luca A. Ludovico, from the Laboratory of Music Informatics, Department of Computer Science, University of Milan. I am one of the authors of the IEEE 1599 standard, an XML-based format for the multi-layer representation of music information. I was also in Mainz in 2013 at the MEC conference, where I presented a paper titled "The Music Encoding Initiative and the IEEE 1599 Standard - Towards an Integrated Description of Music Contents" focusing on possible intersections of the two formats. I'm sending this e-mail to announce that we are organizing a one-day workshop in Milan, followed by the kickoff meeting to revise the IEEE 1599 standard. We would be very pleased to receive contributions from the MEI community focusing on the multi-layer characteristics of the standard, and to have someone from MEI in the new working group of IEEE 1599. Please feel free to extend our invitation to other people potentially interested. Workshop proceedings will be published by IEEE CPS, made available on IEEE Xplore and indexed by all major systems. Below is the call for papers and participation that we are sending to sound and music computing mailing lists. I hope to see you in Milan. Best regards. Luca A.
Ludovico --- First International Workshop on Multilayer Music Representation and Processing (MMRP19) Department of Computer Science, University of Milan, 24-25 January 2019 http://mmrp19.di.unimi.it/ The MMRP Workshop is organized by the Laboratory of Music Informatics (LIM), Dept. of Computer Science, University of Milan. We are pleased to confirm that the workshop Proceedings will be published by the IEEE Conference Publishing Services (IEEE CPS), will be made available on IEEE Xplore, and will be included on major indexing systems. Updated information about paper submission and registration is now available on the MMRP19 website. The workshop is held in conjunction with the kick-off of the IEEE Working Group (WG) for XML Musical Application. Ten years after the release of the IEEE 1599 Standard for music representation, the WG will work at updating and extending the standard to provide a multilayered meta-representation of music information, achieving integration among general (metadata), structural, notational, computer-driven performance, and audio layers. We are soliciting original submissions for oral and poster/demo presentations examining all facets of the workshop theme. We welcome in particular scientific contributions on novel approaches to bridge the gap between different layers of music representation and generate multilayer music contents, as well as related application domains. 
These include (but are not limited to): - Computational Musicology - Intangible Cultural Heritage - Machine Learning and Understanding of Music - Multilayer representation models - Music Libraries and Archives - Music Signal Processing - Music Training and Education - Optical Music Recognition - Representations of Music - Score-informed Transcription - Structural Segmentation of Music - Symbolic Music Processing - Synchronization of Score, MIDI, and Audio - XML for Music Applications Important dates - Abstract submission: November 25, 2018 - Full-paper submission: December 02, 2018 - Notification to authors: December 23, 2018 - Camera-ready submission: January 11, 2019 -------------- next part -------------- An HTML attachment was scrubbed... URL: <http://lists.uni-paderborn.de/pipermail/mei-l/attachments/20181102/1c4a683c/attachment.html> From andrew.hankinson at mail.mcgill.ca Mon Nov 5 01:08:45 2018 From: andrew.hankinson at mail.mcgill.ca (Andrew Hankinson) Date: Mon, 5 Nov 2018 00:08:45 +0000 Subject: [MEI-L] Reminder: MEI Board Election Nominations are Open Message-ID: <8513F497-C572-474B-AC6B-57818DEA3E16@mail.mcgill.ca> A reminder that nominations for the MEI Board are Open until November 17. Please see the message below, and use this Google Form to submit your nominations: https://goo.gl/forms/TRGPzYLnpqEQtk2g2 --- Dear MEI Community, On 31 December 2018 the terms of three MEI Board members will come to an end. The entire Board wishes to thank Benjamin W. Bohl, Ichiro Fujinaga, and Perry Roland for their service and dedication to the MEI community. In order to fill these soon-to-be-vacant positions, elections must be held. 
The election process will take place in accordance with the Music Encoding Initiative By-Laws.[1] To nominate a candidate, please do so via this form: https://goo.gl/forms/TRGPzYLnpqEQtk2g2 The timeline of the elections will be as follows: Nomination phase (17 October - 17 November, 2018) - Nominations can be sent by filling in the nomination form between 17 October - 17 November, 2018.[2] - Any person who subscribes to MEI-L has the right to nominate candidates. - Nominees have to be members of the MEI-L mailing list but may register until 17 November 2018. - Individuals who have previously served on the Board are eligible for nomination and re-appointment. - Self-nominations are welcome. - Individuals will be informed of their nomination when received and asked to confirm their willingness to serve on the Board. - Acceptance of a nomination requires submission of a short CV and a personal statement of interest in MEI (a maximum of 200 words each) to elections at music-encoding.org by 20 November, 2018. Candidates who have been nominated but who have not confirmed their willingness will not be included on the ballot. Election phase (21 November - 7 December 2018) - The voting period will be open from 21 November - 7 December, 2018. - The election will take place using OpaVote and the Ranked Choice Voting method (https://www.opavote.com/methods/ranked-choice-voting). - You will be informed about the election and your individual voting tokens in a separate e-mail. Post-election phase - Election results will be announced after the elections have closed. - The term of the elected candidates starts on 1 January 2019. - The first meeting of the new MEI Board will be held on Tuesday, 15 January 2019. The selection of Board members is an opportunity for each of you to have a voice in determining the future of MEI.
Thank you for your support, Peter Stadler and Andrew Hankinson MEI election administrators 2018 by appointment of the MEI Board [1] The By-laws of the Music Encoding Initiative are available online at: http://music-encoding.org/community/mei-by-laws.html [2] All deadlines are referenced to 11:59 pm (UTC) From stadler at edirom.de Fri Nov 16 14:09:58 2018 From: stadler at edirom.de (Peter Stadler) Date: Fri, 16 Nov 2018 14:09:58 +0100 Subject: [MEI-L] Final Reminder: MEI Board Election Nominations are Open In-Reply-To: <8513F497-C572-474B-AC6B-57818DEA3E16@mail.mcgill.ca> References: <8513F497-C572-474B-AC6B-57818DEA3E16@mail.mcgill.ca> Message-ID: <55928594-7545-4122-B0B7-4A2DC9A7DED3@edirom.de> A gentle final reminder that nominations for the MEI Board are open until tomorrow, November 17. Please see the message below, and use this Google Form to submit your nominations: https://goo.gl/forms/TRGPzYLnpqEQtk2g2 --- Dear MEI Community, On 31 December 2018 the terms of three MEI Board members will come to an end. The entire Board wishes to thank Benjamin W. Bohl, Ichiro Fujinaga, and Perry Roland for their service and dedication to the MEI community. In order to fill these soon-to-be-vacant positions, elections must be held. The election process will take place in accordance with the Music Encoding Initiative By-Laws.[1] To nominate a candidate, please do so via this form: https://goo.gl/forms/TRGPzYLnpqEQtk2g2 The timeline of the elections will be as follows: Nomination phase (17 October - 17 November, 2018) - Nominations can be sent by filling in the nomination form between 17 October - 17 November, 2018.[2] - Any person who subscribes to MEI-L has the right to nominate candidates. - Nominees have to be members of the MEI-L mailing list but may register until 17 November 2018. - Individuals who have previously served on the Board are eligible for nomination and re-appointment. - Self-nominations are welcome.
- Individuals will be informed of their nomination when received and asked to confirm their willingness to serve on the Board. - Acceptance of a nomination requires submission of a short CV and a personal statement of interest in MEI (a maximum of 200 words each) to elections at music-encoding.org by 20 November, 2018. Candidates who have been nominated but who have not confirmed their willingness will not be included on the ballot. Election phase (21 November - 7 December 2018) - The voting period will be open from 21 November - 7 December, 2018. - The election will take place using OpaVote and the Ranked Choice Voting method (https://www.opavote.com/methods/ranked-choice-voting). - You will be informed about the election and your individual voting tokens in a separate e-mail. Post-election phase - Election results will be announced after the elections have closed. - The term of the elected candidates starts on 1 January 2019. - The first meeting of the new MEI Board will be held on Tuesday, 15 January 2019. The selection of Board members is an opportunity for each of you to have a voice in determining the future of MEI. Thank you for your support, Peter Stadler and Andrew Hankinson MEI election administrators 2018 by appointment of the MEI Board [1] The By-laws of the Music Encoding Initiative are available online at: http://music-encoding.org/community/mei-by-laws.html [2] All deadlines are referenced to 11:59 pm (UTC) -------------- next part -------------- A non-text attachment was scrubbed...
Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: Message signed with OpenPGP URL: <http://lists.uni-paderborn.de/pipermail/mei-l/attachments/20181116/f53c9d62/attachment.sig> From f.wiering at UU.NL Tue Nov 20 11:07:46 2018 From: f.wiering at UU.NL (Frans Wiering) Date: Tue, 20 Nov 2018 11:07:46 +0100 Subject: [MEI-L] Reminder: deadline 27 November | Call for Papers | DH2019 - ADHO Digital Humanities Conference | Utrecht, The Netherlands | 9-12 July 2019 Message-ID: <810b2837-8a71-6957-84b4-8b61b0e7c370@UU.NL> [Apologies for cross-posting. Please feel free to forward this CfP to interested parties] **DH2019** **Call for Papers** ADHO Digital Humanities Conference - DH2019 The Alliance of Digital Humanities Organizations (ADHO) invites submission of proposals for its annual conference, to be hosted by Utrecht University (The Netherlands), 9-12 July 2019. Preconference workshops are scheduled for 8-9 July 2019. Conference website: http://dh2019.adho.org/ **Conference Theme** The theme of the 2019 conference is ‘Complexities’. This theme has a multifaceted connection with Digital Humanities scholarship. Complexities intends to inspire people to focus on Digital Humanities (DH) as the humanist way of building complex models of complex realities, analysing them with computational methods and communicating the results to a broader public. The theme also invites people to think of the theoretical, social, and cultural complexity and diversity in which DH scholarship is immersed and asks our community to interact consciously and critically in myriad ways, through the conference and the networks, institutions and the enterprises interested in DH research. Finally, it means involving the next generation, teaching DH to students—the people who will need to deal with the complexities of the future.
Proposals related to these themes are particularly welcome, but DH2019 will accept submissions on any other aspect or field of Digital Humanities. For more details, see the full Call for Papers on the conference website. **Formats that can be proposed** -Posters (abstract maximum 750 words) -Short papers (abstract maximum 1000 words) -Long papers (abstract maximum 1500 words) -Multiple-paper panels (500-word abstracts + 500-word overview) -Pre-conference workshops and tutorials (proposal maximum 1500 words) **Important Dates** Submission deadline for papers: 11:59pm GMT 27 November 2018 Submission deadline for workshops and tutorials: 11:59pm GMT, 10 January 2019 Date for notifications: 3 May 2019 Workshops: 8-9 July 2019 Conference: 9-12 July 2019 **Check the conference website** For more detailed information on the submission procedure, selection criteria, programme committee, venue TivoliVredenburg <https://www.tivolivredenburg.nl/english/plan-your-visit/>, registration, bursaries and local organizers, check the DH2019 webpage: http://dh2019.adho.org/. -------------- next part -------------- An HTML attachment was scrubbed... URL: <http://lists.uni-paderborn.de/pipermail/mei-l/attachments/20181120/150101f2/attachment.html> From jc86035 at icloud.com Mon Nov 26 11:17:44 2018 From: jc86035 at icloud.com (jc86035) Date: Mon, 26 Nov 2018 18:17:44 +0800 Subject: [MEI-L] Wikimedia Commons request for comment on musical notation files Message-ID: <04614F1A-78F2-4F3B-ADB9-74952CC1A985@icloud.com> Hi all, I'm a Wikipedia and Wikimedia Commons editor (User:Jc86035 <https://commons.wikimedia.org/wiki/User:Jc86035>).
Earlier in November I opened a request for comment <https://commons.wikimedia.org/wiki/Commons:Village_pump/Proposals#RfC:_Musical_notation_files> on Wikimedia Commons, proposing that several musical notation file formats (originally MuseScore, LilyPond and MusicXML) become uploadable on Commons, with the intention of eventually allowing audio and scores of some or all of the file types to be shown in pages like Wikipedia articles. (The MediaWiki software already has Extension:Score <https://www.mediawiki.org/wiki/Extension:Score>, based on LilyPond and FluidSynth, but there are various benefits to allowing music notation to be stored as files. Currently notation is shown in Wikipedia articles as images or PDFs, or used directly through the Score extension.) Your feedback on which file formats Commons should support would be much appreciated; several developers have already provided input. Currently, the discussion is also evaluating MNX and MEI (of which the former doesn't exist yet; it's not clear to us how these two formats would interface and whether supporting both would be redundant). If you've never edited a Wikimedia site before, anyone can create an account and participate in the discussion. (If discussion continues on the mailing lists I will link to new posts, although it would be preferable to have discussion all in one place.) Best jc86035 -------------- next part -------------- An HTML attachment was scrubbed...
URL: <http://lists.uni-paderborn.de/pipermail/mei-l/attachments/20181126/3daa729f/attachment.html>

From andrew.hankinson at mail.mcgill.ca Mon Nov 26 11:28:26 2018 From: andrew.hankinson at mail.mcgill.ca (Andrew Hankinson) Date: Mon, 26 Nov 2018 10:28:26 +0000 Subject: [MEI-L] Fwd: Wikimedia Commons request for comment on musical notation files References: <04614F1A-78F2-4F3B-ADB9-74952CC1A985@icloud.com> Message-ID: <5F2E5A49-CB3D-4720-ACFC-1BD6C969CA02@mail.mcgill.ca>

FYI, it would be good to get some members of the MEI community involved in this discussion, as there are a few things that need clarifying on what is there now. -Andrew

Begin forwarded message: From: jc86035 <jc86035 at icloud.com<mailto:jc86035 at icloud.com>> Subject: Wikimedia Commons request for comment on musical notation files Date: 26 November 2018 at 10:17:44 GMT To: public-music-notation at w3.org<mailto:public-music-notation at w3.org>, mei-l at lists.uni-paderborn.de<mailto:mei-l at lists.uni-paderborn.de>, lilypond-user at gnu.org<mailto:lilypond-user at gnu.org>, lilypond-devel at gnu.org<mailto:lilypond-devel at gnu.org> Resent-From: public-music-notation at w3.org<mailto:public-music-notation at w3.org> [...]

-------------- next part -------------- An HTML attachment was scrubbed... URL: <http://lists.uni-paderborn.de/pipermail/mei-l/attachments/20181126/bbb47cb7/attachment.html>

From andrew.hankinson at mail.mcgill.ca Wed Nov 28 00:13:36 2018 From: andrew.hankinson at mail.mcgill.ca (Andrew Hankinson) Date: Tue, 27 Nov 2018 23:13:36 +0000 Subject: [MEI-L] Community Projects Page Message-ID: <5023EE96-FBEF-430A-901C-F376C55CFDC8@mail.mcgill.ca>

Dear MEI Community, I've noticed that there are quite a few 'missing faces' from our community project page. https://music-encoding.org/community/projects-users.html Could I send out an appeal that, if your project is using MEI, could you please get in touch (either to me, or to the 'contact' link on that page) and give us a few details so we can fill it out?
** Thanks, -Andrew (** Or, if you're so inclined, you can add it yourself with a pull request: https://github.com/music-encoding/music-encoding.github.io/tree/master/_projects)

From stadler at edirom.de Wed Dec 5 08:11:00 2018 From: stadler at edirom.de (Peter Stadler) Date: Wed, 5 Dec 2018 08:11:00 +0100 Subject: [MEI-L] 2018 MEI Board elections open Message-ID: <4CA90E63-A64D-4CB0-B7CE-53F8F05F4F23@edirom.de>

Dear all, this is to inform you that the election period has just started and you should have received a voting notification (with link) from the OpaVote system. If not, please check your spam folder first and then get in touch with us. Best regards, Andrew Hankinson & Peter Stadler

-------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: Message signed with OpenPGP URL: <http://lists.uni-paderborn.de/pipermail/mei-l/attachments/20181205/73d1bfc0/attachment.sig>

From dubowy at mozarteum.at Wed Dec 12 23:32:38 2018 From: dubowy at mozarteum.at (Norbert Dubowy Internationale Stiftung Mozarteum) Date: Wed, 12 Dec 2018 23:32:38 +0100 Subject: [MEI-L] Digital Interactive Mozart Edition, Public Launch Message-ID: <20181212233238.EGroupware.hdo410jgg22FPK9pWNi37U6@_>

Dear MEI community, The Digital Mozart Edition, a collaborative project of the Stiftung Mozarteum, Salzburg, and the Packard Humanities Institute, Los Altos (CA), is happy to announce the public launch of its Digital Interactive Mozart Edition. On the occasion of the web publication, a press conference will be held on Friday, 14 December 2018, 10:30 a.m. (CET), at the Mozart Residence, Makartplatz 8, A-5020 Salzburg, Austria. The press conference will be transmitted via live stream on www.facebook.com/StiftungMozarteum/.
The Digital Interactive Mozart Edition will be accessible from 14 December 2018 through our website at dme.mozarteum.at. Its use is free of charge for private, scientific and educational purposes. Kind regards, Norbert Dubowy

Dr. Norbert Dubowy Mozart-Institut/Digitale Mozart-Edition Cheflektor/Managing Editor Internationale Stiftung Mozarteum Schwarzstr. 26 5020 Salzburg, Austria T +43 (0) 662 889 40 66 F +43 (0) 662 889 40 68 E dubowy at mozarteum.at www.mozarteum.at Newsletter Stiftung Mozarteum: http://www.mozarteum.at/content/newsletter Facebook Stiftung Mozarteum: http://www.facebook.com/StiftungMozarteum ZVR: 438729131, UID: ATU33977907

-------------- next part -------------- An HTML attachment was scrubbed... URL: <http://lists.uni-paderborn.de/pipermail/mei-l/attachments/20181212/46b5476b/attachment.html>

From andrew.hankinson at gmail.com Thu Dec 13 01:47:53 2018 From: andrew.hankinson at gmail.com (Andrew Hankinson) Date: Thu, 13 Dec 2018 00:47:53 +0000 Subject: [MEI-L] Digital Interactive Mozart Edition, Public Launch In-Reply-To: <20181212233238.EGroupware.hdo410jgg22FPK9pWNi37U6@_> References: <20181212233238.EGroupware.hdo410jgg22FPK9pWNi37U6@_> Message-ID: <4B2ABDB6-BFDC-456B-B5C6-B95BBE5BAEF5@gmail.com>

That is fantastic, Norbert! Congratulations to your team, and I look forward to seeing it tomorrow. -Andrew

> On 12 Dec 2018, at 22:32, Norbert Dubowy Internationale Stiftung Mozarteum <dubowy at mozarteum.at> wrote:
> Dear MEI community, The Digital Mozart Edition, a collaborative project of the Stiftung Mozarteum, Salzburg, and the Packard Humanities Institute, Los Altos (CA), is happy to announce the public launch of its Digital Interactive Mozart Edition. On the occasion of the web publication, a press conference will be held on Friday, 14 December 2018, 10:30 a.m. (CET), at the Mozart Residence, Makartplatz 8, A-5020 Salzburg, Austria.
> The press conference will be transmitted via live stream on www.facebook.com/StiftungMozarteum/. [...]

_______________________________________________ mei-l mailing list mei-l at lists.uni-paderborn.de https://lists.uni-paderborn.de/mailman/listinfo/mei-l

From pdr4h at virginia.edu Thu Dec 13 02:29:57 2018 From: pdr4h at virginia.edu (Roland, Perry D (pdr4h)) Date: Thu, 13 Dec 2018 01:29:57 +0000 Subject: [MEI-L] Digital Interactive Mozart Edition, Public Launch In-Reply-To: <4B2ABDB6-BFDC-456B-B5C6-B95BBE5BAEF5@gmail.com> References: <20181212233238.EGroupware.hdo410jgg22FPK9pWNi37U6@_> <4B2ABDB6-BFDC-456B-B5C6-B95BBE5BAEF5@gmail.com> Message-ID: <DM3PR13MB0554F63718D60021D70732CC9FA00@DM3PR13MB0554.namprd13.prod.outlook.com>

Good news, indeed! Congratulations, Norbert. -- p.

-----Original Message----- From: mei-l <mei-l-bounces at lists.uni-paderborn.de> On Behalf Of Andrew Hankinson Sent: Wednesday, December 12, 2018 7:48 PM To: Music Encoding Initiative <mei-l at lists.uni-paderborn.de> Subject: Re: [MEI-L] Digital Interactive Mozart Edition, Public Launch [...]

From pdr4h at virginia.edu Thu Dec 13 02:33:03 2018 From: pdr4h at virginia.edu (Roland, Perry D (pdr4h)) Date: Thu, 13 Dec 2018 01:33:03 +0000 Subject: [MEI-L] Digital Interactive Mozart Edition, Public Launch In-Reply-To: <DM3PR13MB0554F63718D60021D70732CC9FA00@DM3PR13MB0554.namprd13.prod.outlook.com> References: <20181212233238.EGroupware.hdo410jgg22FPK9pWNi37U6@_> <4B2ABDB6-BFDC-456B-B5C6-B95BBE5BAEF5@gmail.com> <DM3PR13MB0554F63718D60021D70732CC9FA00@DM3PR13MB0554.namprd13.prod.outlook.com> Message-ID: <DM3PR13MB055482F98E4C9483C8AFCF659FA00@DM3PR13MB0554.namprd13.prod.outlook.com>

And congratulations to everyone who worked on the project. -- p.

-----Original Message----- From: mei-l <mei-l-bounces at lists.uni-paderborn.de> On Behalf Of Roland, Perry D (pdr4h) Sent: Wednesday, December 12, 2018 8:30 PM To: Music Encoding Initiative <mei-l at lists.uni-paderborn.de> Subject: Re: [MEI-L] Digital Interactive Mozart Edition, Public Launch [...]

_______________________________________________ mei-l mailing list mei-l at lists.uni-paderborn.de https://lists.uni-paderborn.de/mailman/listinfo/mei-l

From kepper at edirom.de Sat Dec 15 18:16:54 2018 From: kepper at edirom.de (Johannes Kepper) Date: Sat, 15 Dec 2018 18:16:54 +0100 Subject: [MEI-L] MEC2020 in Boston Message-ID: <9D377D65-6D23-46CE-92B0-02B9FF3FB468@edirom.de>

Dear Community, the MEI Board is happy to announce that MEC2020 will be held from 26 May to 29 May 2020 at Boston College, in conjunction with Northeastern University. Anna Kijas from Boston College Libraries will serve as OC Chair, and will team up with Nina Bogdanovsky (Boston College), Julia Flanders and Sarah Connell (both Northeastern University). We're looking forward to another wonderful venue for our Music Encoding Conference, organised by a fantastic team. Please help them by spreading the word! However, if you feel like attending another MEC before 2020, there is a great opportunity at MEC2019 in Vienna (May 29 – June 1 2019).
The (extended) CfP ends tomorrow (Dec 16) – if you hurry up, you could still make it ;-) For the MEI Board, Johannes

From andrew.hankinson at gmail.com Thu Dec 20 15:26:46 2018 From: andrew.hankinson at gmail.com (Andrew Hankinson) Date: Thu, 20 Dec 2018 14:26:46 +0000 Subject: [MEI-L] Results of the MEI Board Elections Message-ID: <57A79FB6-C188-4219-8132-37C3C8F2E1E2@gmail.com>

Dear MEI Community, It is our pleasure to announce the results of the MEI Board elections (for the term 2019–2021). Elected by the MEI community are: * Benjamin W. Bohl * Elsa De Luca * Ichiro Fujinaga Congratulations to our new board members, and many thanks to *all* our excellent candidates! Following this e-mail you will receive a link to the full results of the election from OpaVote. Best regards and happy holidays, Andrew & Peter

From kepper at edirom.de Fri Dec 21 00:28:44 2018 From: kepper at edirom.de (Johannes Kepper) Date: Fri, 21 Dec 2018 00:28:44 +0100 Subject: [MEI-L] Results of the MEI Board Elections In-Reply-To: <1DA29A3D-F045-4E0A-9028-4EDE1B250F63@gmail.com> References: <1DA29A3D-F045-4E0A-9028-4EDE1B250F63@gmail.com> Message-ID: <E95F488E-25E4-4D39-993D-9FB853790A5A@edirom.de>

Dear MEI Community, let me take this opportunity for some acknowledgements.
First, I'd like to thank all the candidates who stood for election. Having such a great slate of bright people comes at the price of excellent candidates not being elected – special thanks to Karen Desmond and Klaus Rettinghaus, knowing that both will contribute to MEI as much as possible anyway. Next, I'd like to congratulate Elsa De Luca, Benjamin Bohl and Ichiro Fujinaga on being (re-)elected to the Board. The MEI community is changing noticeably, and we're all happy to have your voices on the Board in this situation. Then, I'd like to thank Perry Roland for having served on the Board since its establishment four years ago. I'm glad that Perry is only stepping down from the Board and will continue his excellent work on the schema as time allows. Finally, I'd like to thank you, the MEI community, for being so serious about the elections. I think it's fantastic to have close to one hundred people casting ballots and thus showing their interest in MEI. And of course, I'd like to thank Andrew and Peter for guiding us through this election. Let me wish a few relaxing days to all of you, and a good start to 2019. And let me invite you now to a thrilling Music Encoding Conference in Vienna in late May. The new Board will meet virtually on Tuesday, January 15, at 7pm GMT. Further instructions will follow closer to the date. All best, jo

> On 20 Dec 2018, at 15:27, Andrew Hankinson <andrew.hankinson at gmail.com> wrote:
> [...]
> Following this e-mail you will receive a link to the full results of the election from OpaVote.
> > Best regards and happy holidays, > > Andrew & Peter > _______________________________________________ > mei-l mailing list > mei-l at lists.uni-paderborn.de > https://lists.uni-paderborn.de/mailman/listinfo/mei-l -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 488 bytes Desc: Message signed with OpenPGP URL: <http://lists.uni-paderborn.de/pipermail/mei-l/attachments/20181221/92a0b0b2/attachment.sig>