[MEI-L] PhD studentship at Queen Mary University of London (AI and Music CDT)
George Fazekas
g.fazekas at qmul.ac.uk
Wed Jun 5 13:44:57 CEST 2019
(with apologies for cross-postings)
A fully-funded PhD studentship is available to carry out research in the area of Optical Music Recognition using Deep Learning in collaboration with Steinberg Media Technologies GmbH.
The position is available within the UKRI Centre for Doctoral Training in Artificial Intelligence and Music (AIM) at Queen Mary University of London.
https://www.aim.qmul.ac.uk/
The studentship covers fees and a stipend for four years, starting in September 2019.
The position is open to UK and international students.
Application deadline: 21 June 2019 (http://www.aim.qmul.ac.uk/apply)
Why apply to the AIM Programme?
* 4-year fully-funded PhD studentships available
* Access to cutting-edge facilities and expertise in artificial intelligence (AI) and music/audio technology
* Comprehensive technical training at the intersection of AI and music through a personalised programme
* Partnerships with over 20 companies and cultural institutions in the music, audio and creative sectors
More information on the AIM Programme can be found at: https://www.aim.qmul.ac.uk/
PhD Topic: Optical Music Recognition using Deep Learning in collaboration with Steinberg Media Technologies GmbH.
The proposed PhD focuses on developing novel techniques for optical music recognition (OMR) using deep neural networks (DNNs). The research will be carried out in collaboration with Steinberg Media Technologies, creating the opportunity to work with and test the research outcomes in leading music notation software such as Dorico (http://www.dorico.com). Musicians, composers, arrangers, orchestrators and other users of music notation have long dreamed of simply taking a photo or scan of sheet music and bringing it into a music notation application, where they can make changes, rearrange, transpose, or simply listen to it played back by the computer.
The PhD aims to investigate and demonstrate a novel approach to converting images of sheet music into a semantic representation such as MusicXML and/or MEI. The research will be carried out in the context of designing a music recognition engine capable of ingesting, optically correcting, processing and recognising multiple pages of handwritten or printed music, whether from images captured by mobile phone or from low-resolution copyright-free scans from the International Music Score Library Project (IMSLP). The main objective is to output semantic mark-up identifying as many notational elements and as much text as possible, along with the relationship of each element to its position in the original image.
Prior solutions have been algorithmic, applying layers of rules on top of traditional feature detection techniques such as edge detection. An opportunity exists to develop and evaluate new approaches based on DNNs and other machine learning techniques. State-of-the-art OMR can already recognise clean sheet music with very high accuracy, but fixing the remaining errors may take as long as, if not longer than, transcribing the music into notation software by hand. A new method that improves recognition rates would allow users who are less adept at inputting notes into a music notation application to get better results more quickly. Another challenge is the variability in input quality (particularly for images captured with smartphones) and how best to preprocess the images to improve recognition in subsequent stages of the pipeline.
The application of cutting-edge techniques in data science and machine learning, particularly convolutional neural networks (CNNs), may yield better results than traditional methods. To this end, research will start by testing VGG-like architectures (https://arxiv.org/abs/1409.1556) and residual networks (e.g. ResNet, https://arxiv.org/pdf/1512.03385.pdf) for the recognition of handwritten and/or low-resolution printed sheet music; a sketch of such an architecture is given below. The same techniques may also prove useful in earlier stages of the pipeline, such as document detection and feature detection. It would be desirable to recognise close to all individual objects in the score. One of the first objectives will be to establish a methodology for measuring the differences between the reference data and the recognised data. Furthermore, data augmentation can be supported by existing Steinberg software.
The ideal candidate will have previous experience of training machine learning models and will be familiar with Western music notation. Being well versed in image acquisition, image processing techniques and computer vision would be a significant advantage.
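For a sense of the kind of model the VGG-like direction implies, the sketch below shows a minimal PyTorch classifier for isolated music-symbol patches. The patch size (64x64 greyscale), layer widths and number of classes are illustrative assumptions, not specifications of the project.

import torch
import torch.nn as nn

class SymbolNet(nn.Module):
    """A small VGG-style stack: 3x3 convolutions with max pooling."""
    def __init__(self, n_classes=32):  # hypothetical symbol alphabet size
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                       # 64x64 -> 32x32
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                       # 32x32 -> 16x16
        )
        self.classifier = nn.Linear(64 * 16 * 16, n_classes)

    def forward(self, x):                          # x: (N, 1, 64, 64)
        return self.classifier(self.features(x).flatten(1))

model = SymbolNet()
logits = model(torch.randn(8, 1, 64, 64))          # scores for 8 glyph patches

A real system would of course need more than glyph classification (document detection, staff handling, and assembling symbols into semantics), but this is the level at which VGG- and ResNet-style backbones would enter the pipeline.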
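On the evaluation question raised above (how to measure the differences between the reference data and the recognised data), one simple baseline, assuming both are serialised as flat symbol sequences, is a symbol error rate based on edit distance. The token names below are invented for the example.

def symbol_error_rate(reference, recognised):
    """Levenshtein distance over symbol sequences, normalised by reference length."""
    m, n = len(reference), len(recognised)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if reference[i - 1] == recognised[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[m][n] / max(m, 1)

ref = ["clef.G", "note.C4.quarter", "note.D4.quarter", "barline"]
hyp = ["clef.G", "note.C4.quarter", "note.E4.quarter"]
print(symbol_error_rate(ref, hyp))  # 0.5: one substitution, one deletion

A flat sequence metric deliberately ignores the positional and structural relationships the project also cares about; establishing a methodology that accounts for those is precisely part of the research.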
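Similarly, for data augmentation targeting camera-captured input, a synthetic degradation pipeline along the following lines could complement training data rendered by notation software. The specific transforms and parameters are guesses at what phone capture introduces (skew, perspective, blur), not a description of the Steinberg tooling mentioned above.

from torchvision import transforms

degrade = transforms.Compose([
    transforms.RandomRotation(degrees=2),                       # slight page skew
    transforms.RandomPerspective(distortion_scale=0.1, p=0.5),  # camera angle
    transforms.GaussianBlur(kernel_size=3),                     # focus / low resolution
    transforms.ToTensor(),
])
# degraded = degrade(pil_page_image)  # applied to a rendered score page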
Programme structure
Our Centre for Doctoral Training (CDT) offers a four-year training programme in which students carry out a research project at the intersection of AI and music, supported by taught specialist modules, industrial placements, and skills training. Find out more about the programme structure at: http://www.aim.qmul.ac.uk/about/
Who can apply?
We are on the lookout for the best and brightest students interested in the intersection of music/audio technology and AI. Successful applicants will have the following profile:
* Hold, or be completing, a Master's degree at distinction or first-class level, or equivalent, in Computer Science, Electronic Engineering, Music/Audio Technology, Physics, Mathematics, or Psychology.
* Programming skills are strongly desirable; however, we do not consider them an essential criterion if candidates have complementary strengths.
* Formal music training is desirable, but not a prerequisite.
* This position is open to UK and international students.
Funding
Funding will cover the cost of tuition fees and will provide an annual tax-free stipend of £17,009. The CDT will also provide funding for conference travel, equipment, and attendance at other CDT-related events.
Apply Now
Information on applications and PhD topics can be found at: http://www.aim.qmul.ac.uk/apply
Application deadline: 21 June 2019
For further information on eligibility, funding and the application process, please visit our website. Please email any questions to aim-enquiries at qmul.ac.uk
—
Dr. George Fazekas,
Senior Lecturer (Assoc. Prof.) in Digital Media
Programme Coordinator, Sound and Music Computing (SMC): http://bit.ly/smc-qmul
Centre for Digital Music (C4DM)
School of Electronic Engineering and Computer Science
Queen Mary University of London, UK
FHEA, M. IEEE, ACM, AES
email: g.fazekas at qmul.ac.uk
web: c4dm.eecs.qmul.ac.uk | semanticaudio.net | audiocommons.org | bit.ly/smc-qmul | aim.qmul.ac.uk