DEGEM News – FWD – [ak-discourse] WG: PhD Studentships for 2025 at Kingston University London

From: Lepa, Steffen via ak discourse
Date: Tue, 28 Jan 2025
Subject: [ak-discourse] WG: PhD Studentships for 2025 at Kingston University London

PhD Studentships for 2025 at Kingston University London

The Department of Performing Arts at Kingston University London is delighted to invite expressions of interest from strong Music researchers to join our PhD Programme from October 2025.

Several fully-funded and part-funded studentships are available on a competitive basis to undertake doctoral study (including practice-based) in a range of areas.

We encourage applications from Global Majority students: in addition to ringfencing a number of studentships, we offer a programme of support and mentoring during the application process.

At least one of the fully-funded PhD studentships will be awarded to a Kingston alumnus or alumna who has completed an undergraduate and/or master's-level degree course at Kingston University or its predecessor institutions.

Deadline for applications: 5 March 2025

Research in Music at Kingston University is both theoretical and practice-based and often the two are integrated. Engaging with a range of methodologies and theoretical perspectives, music staff have particular strength in:

  • Music technology
  • Musicology
  • Music education
  • Music composition

Music technology research includes studio-based creative practices, immersive audio environments, interactive performances, and music AI. Musicology topics range from historical and philosophical aspects to popular musicology and music in other media (moving image, dance). Music education research covers a broad range of teaching and learning settings and related music in health and wellbeing contexts. Music composition encompasses instrumental works, interactive pieces and multimedia collaborations.

Specific areas of expertise include:

  • studio-based creative practice-research
  • pop musicology and ludomusicology
  • sound spatialisation techniques and systems
  • site-specific installation art
  • music memory and perception
  • electroacoustic music analysis
  • preservation issues of electronic music
  • electronic music performance practices
  • music and generative AI

Further information on the studentships and guidelines for application can be found here:

https://www.kingston.ac.uk/research/research-degrees/funding/phd-studentships-2025/

For information on the research environment of Kingston School of Art, please follow this link: https://www.kingston.ac.uk/faculties/kingston-school-of-art/research-and-innovation/research-degrees/

Questions about the PhD programme can be directed to Dr Daniela Perazzo, Postgraduate Research Coordinator: https://www.kingston.ac.uk/staff/profile/dr-daniela-perazzo-179/. Prospective applicants are encouraged to discuss their research proposal in advance of submitting an application.

DEGEM News – FWD – [ak-discourse] Urban Sound Symposium 2025, Zürich – Register now!

From: Moshona, Cleopatra Christina via ak discourse
Date: Tue, 28 Jan 2025
Subject: [ak-discourse] Urban Sound Symposium 2025, Zürich – Register now!

Dear colleagues and interested readers,

On behalf of the organising committee of the Urban Sound Symposium (USS), I would like to draw your attention to USS 2025, which takes place this year from 28 to 30 April in Zürich (Dübendorf) and is organised by Empa, the Swiss Federal Laboratories for Materials Science and Technology.

The Urban Sound Symposium addresses the challenges posed by noise in urban environments. Its main themes are the planning and design of urban environments, noise control, the analysis of urban soundscapes, and questions of noise policy. The symposium is aimed at an interdisciplinary audience of acousticians, architects and urban planners.

This year we again offer an exciting programme with a line-up of distinguished speakers from industry, research and public administration.

https://urban-sound-symposium.org/program/

Abstracts for posters can still be submitted until 31 January 2025. Please note that the early-bird deadline has been extended to 28 February 2025.

https://urban-sound-symposium.org/registration/

We would be delighted to welcome you to Zürich to discuss the future of urban sound spaces together.

Best regards,

Cleopatra Moshona, OC Member

__

Cleopatra Christina Moshona, M.A., M.A.

Research Associate

 

Technische Universität Berlin

Faculty V – Mechanical Engineering and Transport Systems

Institute of Fluid Dynamics and Technical Acoustics

Engineering Acoustics – Psychoacoustics Group

Room: HFT-TA 438
Telephone: +49 (0)30 314-70437

https://www.tu.berlin/akustik

DEGEM News – BERLIN – Triple Helix – Friedl/Sharma: Piano und 393

From: Gerriet K. Sharma
Date: Mon, 27 Jan 2025
Subject: Triple Helix – Friedl/Sharma: Piano und 393


Dear friends and colleagues,

Reinhold Friedl (https://www.reinhold-friedl.de/about) and I have spent the past year working on a piece for piano and live spatialization.

I am delighted to announce the world premiere on

February 13, 2025,

at 7:00 PM

in the Kurt-Sachs Hall

at the State Institute for Music Research (Philharmonie)

in Berlin.
https://www.simpk.de/ueber-uns/veranstaltungen/veranstaltungs-detailseite/veranstaltung/2025/02/13/triple-helix.html

About the piece:

Triple Helix

Gerriet Krishna Sharma & Reinhold Friedl 2024/25

For grand piano and two multidirectional 393-beamforming loudspeakers.

Triple Helix explores the dynamic relationships and interactions between instrument and space, traditional concepts of music and augmented reality (AR) experiences, and synthesizes various notions of virtuality.

Reinhold Friedl is renowned for his virtuosic piano performances, particularly his innovative extended techniques on the grand piano, which produce unique and highly variable soundscapes.

In this performance, the piano’s sounds will be processed and spatialized in real time.

Gerriet K. Sharma employs two prototype 393 3D audio beamforming loudspeakers, capable of creating sculptural sound formations.

These speakers make it possible to juxtapose the piano’s sounds with their spatio-sonic derivatives, generating complex spatial and sonic interferences.

In real time, the piano’s sounds are both spectralized and spatialized using the 393-beamforming loudspeakers, while microphones capture and feedback the inner resonances of the piano.

This intricate interplay gives rise to a “triple helix”: three distinct sonic spaces coexist, each distinguishable yet interconnected in space and time.

Together, they form a unified, augmented spectromorphology—live and site-specific.

Gerriet Krishna Sharma
www.gksh.net
www.spaes.org
www.ikoweave.com
instagram: gksh_lab
facebook: GerrietKSharma
phone: +491752449300

DEGEM News – FWD – [ak-discourse] Job: Research Scientist – Generative AI Technologies at Spotify

From: Lepa, Steffen via ak discourse
Date: Fri, 17 Jan 2025
Subject: [ak-discourse] Job: Research Scientist – Generative AI Technologies at Spotify

Research Scientist – Generative AI Technologies

We are seeking a Research Scientist to help shape the future of listening on Spotify. Our team pioneers and advances state-of-the-art generative technologies that enable recommendation and personalization features, customized playback features, and much more. We build products for end-users, for artists, for labels and publishers, for advertisers – projects that cut across all of Spotify. We enhance current offerings and invent entirely new listening experiences that center and celebrate human artists and creatives.

What You’ll Do

  • Conduct cutting-edge research in audio generation (diffusion, flow matching, or autoregressive models), as well as related domains like ML-based audio processing, audio information retrieval, machine learning, and signal processing.
  • Run large-scale experiments with access to Spotify’s extensive infrastructure and an audience of more than 600 million monthly active users.
  • Create practical applications that harness generative technologies, pushing the boundaries of what’s possible in listening experiences.
  • Collaborate as part of an autonomous, cross-functional team—working closely with scientists, engineers, product managers, designers, user researchers, and analysts—to craft innovative solutions to complex challenges.
  • Have a direct impact on Spotify’s products, tools, and services, working on projects that influence the entire organization.
  • Engage with the broader research community by publishing your findings, delivering talks, and attending top conferences.

Who You Are

  • You have a Ph.D. degree in Computer Science, Mathematics, Engineering, or a related field. Previous industry experience is helpful.
  • You have experience in one or more of the following fields: generative modeling, machine learning, music information retrieval, speech processing, audio processing, signal processing, probabilistic modeling, computer vision, or related areas.
  • You have publications in communities such as ICASSP, ISMIR, Interspeech, ICLR, AAAI, IJCAI, NeurIPS, ICML, CVPR, ECCV, ICCV, or related.
  • You have strong coding skills in the following languages/libraries: Python, NumPy, PyTorch / TensorFlow.
  • You are a creative problem solver who is passionate about building outstanding products that add real value to millions of people.
  • You are enthusiastic about learning more about turning research ideas into products operating at scale.
  • You can explain complex topics in simple terms, and you enjoy building strong relationships with colleagues and stakeholders.

Where You’ll Be

  • We offer you the flexibility to work where you work best! For this role, you can be based anywhere in the Europe region where we have a work location.
  • This team operates within the Central Europe time zone for collaboration.

Sebastian Ewert

Director of Research (Music Mission / MIQ Research PA)

Berlin, Germany | @sewert

DEGEM News – FWD – [ak-discourse] PhD Studentships at the Centre for Digital Music (Queen Mary University of London) – Autumn 2025 start

From: Lepa, Steffen
Date: Tue, 14 Jan 2025
Subject: [ak-discourse] PhD Studentships at the Centre for Digital Music (Queen Mary University of London) – Autumn 2025 start

https://www.c4dm.eecs.qmul.ac.uk/news/2024-11-12.PhD-call-2025/

The Centre for Digital Music at Queen Mary University of London is inviting applications for PhD study for Autumn 2025 start across various funding schemes. Below are suggested PhD topics offered by academics; interested applicants can apply for a PhD under one of those topics, or can propose their own topic. In all cases, prospective applicants are strongly encouraged to contact academics at C4DM to informally discuss prospective research topics.

Opportunities include internally and externally funded positions for PhD projects to start in Autumn 2025. It is also possible to apply as a self-funded student or with funding from another source. Applicants can apply for a 3-year PhD degree in Computer Science or Electronic Engineering, or for a 4-year PhD in AI and Music. Studentship opportunities include S&E Doctoral Research Studentships (including S&E Studentships for Underrepresented Groups), CSC PhD Studentships, the International PhD Funding Scheme, and industry-funded studentships.

Each funding scheme has a dedicated application process and requirements. S&E Doctoral Research Studentships and CSC applications close on 29 January 2025 at 5pm UK time. Detailed information and application links can be found on the respective funding scheme pages, linked from the C4DM page above.

AI Models of Music Understanding

Supervisor: Simon Dixon

Eligible funding schemes: S&E Studentships for Underrepresented Groups, CSC PhD Studentships, International PhD Funding Scheme

Music information retrieval (MIR) applies computing and engineering technologies to musical data to satisfy users' information needs. This topic involves the application of artificial intelligence technologies to the processing of music, either in audio or symbolic (score, MIDI) form. The application could be e.g. for software to enhance the listening experience, for music education, for musical practice or for the scientific study of music. Examples of topics of particular interest are automatic transcription of multi-instrumental music, providing feedback to music learners, incorporation of musical knowledge into data-driven deep learning approaches, and tracing the transmission of musical styles, ideas or influences across time or locations.

It is intentional that this topic description is very general; applicants are expected to choose their own specific project within this broad area of research, according to their interests and experience. The research proposal should define the scope of the project, the relationship to the state of the art, the data and methods the applicant plans to use, and the expected outputs and means of evaluation.

AI-Powered Audio Loop Generation for Assistive Music Production (in collaboration with Steinberg Media Technologies GmbH)

Supervisor: George Fazekas

Eligible funding schemes: Industry funded PhD topic in collaboration with Steinberg Media Technologies GmbH (applicants from all nationalities are eligible)

This research explores the use of controllable deep learning models for generating high-quality audio loops tailored to musicians' needs. By focusing on audio tokenisation and representation learning techniques, the project aims to create reusable loops, such as drum patterns, basslines and synth textures, that seamlessly integrate into music production workflows. Unlike tools that generate full compositions, this approach prioritises modular, user-customisable components, enabling artists to adapt loops for specific creative goals. The work also emphasises real-time usability, with plans to integrate the model into digital audio workstations (DAWs). By advancing tokenisation methods and intuitive controls, the research seeks to enhance AI’s role in modern music production. There is scope to explore different tokenisation techniques and different modelling approaches including transformers, diffusion and consistency models, as well as retrieval-augmented generation. Key challenges include ensuring high audio quality across diverse loop types, balancing customisable controls with user-friendly simplicity, and optimising the model for low-latency, efficient performance in real-time DAW environments. The research should also include elements concerning the evaluation of audio and musical qualities of the generated output and the usability/controllability of the model.
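To make the audio-tokenisation idea concrete, here is a minimal sketch of the core step of vector quantisation: each frame-level feature vector is mapped to the index of its nearest codebook entry, yielding a discrete token sequence. The codebook, frame features and dimensions below are hypothetical stand-ins, not part of the project or of any Steinberg product.

```python
import numpy as np

rng = np.random.default_rng(0)
codebook = rng.standard_normal((16, 8))  # 16 learned codes, 8-dim (hypothetical)
frames = rng.standard_normal((100, 8))   # stand-in per-frame audio features

def tokenize(frames, codebook):
    """Assign each frame the index of its nearest codebook vector."""
    dists = np.linalg.norm(frames[:, None, :] - codebook[None, :, :], axis=-1)
    return dists.argmin(axis=1)

tokens = tokenize(frames, codebook)  # discrete sequence a generative model can predict
```

A generative model (transformer, diffusion, consistency model, etc.) would then be trained over such token sequences, and a decoder would map generated tokens back to audio.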

Audio-visual sensing for machine intelligence

Supervisor: Lin Wang

Eligible funding schemes: S&E Studentships for Underrepresented Groups, CSC PhD Studentships, International PhD Funding Scheme

The project aims to develop novel audio-visual signal processing and machine learning algorithms that help improve machine intelligence and autonomy in an unknown environment, and to understand human behaviours interacting with robots. The project will investigate the application of AI algorithms for audio-visual scene analysis in real-life environments. One example is to employ multimodal sensors e.g. microphones and cameras, for analysing various sources and events present in the acoustic environment. Tasks to be considered include audio-visual source separation, localization/tracking, audio-visual event detection/recognition, audio-visual scene understanding.

Automated machine learning for music understanding

Supervisor: Emmanouil Benetos

Eligible funding schemes: S&E Studentships for Underrepresented Groups, CSC PhD Studentships, International PhD Funding Scheme

The field of music information retrieval (MIR) has been growing for more than 20 years, with recent advances in deep learning having revolutionised the way machines can make sense of music data. At the same time, research in the field is still constrained by laborious tasks involving data preparation, feature extraction, model selection, architecture optimisation, hyperparameter optimisation, and transfer learning, to name but a few. Some of the model and experimental design choices made by MIR researchers also reflect their own biases.

Inspired by recent developments in machine learning and automation, this PhD project will investigate and develop automated machine learning methods which can be applied at any stage in the MIR pipeline so as to build music understanding models ready for deployment across a wide range of tasks. This project will also compare the automated decisions made at every step of the MIR pipeline with the manual model design choices made by researchers. The successful candidate will investigate, propose and develop novel deep learning methods for automating music understanding, resulting in models that can accelerate MIR research and contribute to the democratisation of AI.

Dynamical Systems Analysis and Hebbian Learning for Advanced Time-Series Processing

Supervisor: Iran R. Roman

Eligible funding schemes: S&E Studentships for Underrepresented Groups, International PhD Funding Scheme

This research aims to advance neural networks for time-series processing by applying dynamical systems theory and Hebbian learning, with a focus on emulating biological mechanisms that recognize and retain temporal patterns. We intend to develop efficient, adaptable architectures that minimize data dependency, utilizing low-dimensional circuits derived from dynamic analyses of large-scale neural activities. By converting complex neural states into simpler mathematical forms, we enhance both the efficiency and adaptability of processing time-series data.

The PhD project will develop state-of-the-art neural network models for applications such as musical rhythm, speech processing, and time-series forecasting. Using dynamical systems theory, we will dissect these models to understand the underlying dynamics that facilitate synchronization and pattern generation, identifying essential lower-dimensional circuits. Comparative analysis with biological data from humans and primates will be used to inform the design of biologically inspired models.

Additionally, the PhD student will implement Hebbian learning to create networks capable of few-shot and continual learning, thereby reducing the dependency on extensive datasets. This strategy will lead to robust, data-efficient models that offer deeper insights into both artificial and biological time-series processing mechanisms.
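As a minimal illustration of the Hebbian idea, the sketch below applies Oja's rule (a classic stabilised Hebbian update) to a toy two-dimensional data stream: the weight vector converges online to the dominant direction of the input, with no backpropagation and no large dataset. This is a textbook rule under made-up data, not the project's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(1)
# Toy input stream whose dominant direction lies along [1, 1]/sqrt(2).
principal = np.array([1.0, 1.0]) / np.sqrt(2)
X = np.outer(rng.standard_normal(2000), principal) + 0.1 * rng.standard_normal((2000, 2))

w = rng.standard_normal(2)  # synaptic weights
eta = 0.01                  # learning rate
for x in X:
    y = w @ x                      # postsynaptic activity
    w += eta * y * (x - y * w)     # Oja's rule: Hebbian term y*x with decay y^2*w
```

After the loop, `w` is approximately a unit vector aligned (up to sign) with the data's principal component — a temporal pattern retained from the stream itself, which is the flavour of data-efficient learning the description above aims at.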

Exploiting Domain-Knowledge in Music Representation Learning

Supervisor: George Fazekas

Eligible funding schemes: S&E Studentships for Underrepresented Groups, CSC PhD Studentships, International PhD Funding Scheme

The field of music representation learning aims to transform complex musical data into latent representations that are useful for tasks such as music classification, mood detection, music recommendation or generation. Despite recent advances in deep learning, many models rely purely on data-driven approaches and overlook domain-specific musical structures such as rhythm, melody and harmony.

This PhD project will investigate the integration of domain knowledge into music representation learning to enhance model interpretability and performance. By embedding music-theoretical knowledge, structural hierarchies or genre-specific knowledge, the research should improve learning efficiency and provide richer representations that are more explainable and interpretable. The research may explore various techniques, including incorporating symbolic representations, developing new methodologies for better utilisation of inductive biases, or leveraging musical ontologies to bridge the gap between data-driven models and the structured knowledge inherent in music theory.

There is flexibility in the approach taken, but the candidate should identify and outline a specific method within music analysis, production or generation. Special attention should be devoted to ethical AI: it is expected that the proposed approach will not only improve music representation but also reduce data biases or improve attribution of authorship to respect copyright.

Generative sound-based music

Supervisor: Anna Xambó

Eligible funding schemes: S&E Studentships for Underrepresented Groups, CSC PhD Studentships, International PhD Funding Scheme

This PhD research explores the potential of generative techniques in sound-based music, where sound itself—rather than traditional musical notes—serves as the core building block of composition. By utilising generative learning procedures, the study will develop systems capable of creating novel soundscapes and site-specific sound art experiences. It is particularly relevant for students with expertise in computing and music, as it combines advanced algorithmic design with artistic sound manipulation. Through the integration of neural networks and sound synthesis methods, this research will examine how machines can generate, transform, and structure sounds into cohesive musical works from a human-centred perspective. This approach contributes to various fields, including acoustic ecology, sound design, interactive music systems, and human-computer interaction.

Interpretable AI for Sound Event Detection and Classification

Supervisor: Lin Wang

Eligible funding schemes: S&E Studentships for Underrepresented Groups, CSC PhD Studentships, International PhD Funding Scheme

Deep-learning models have revolutionized state-of-the-art technologies for environmental sound recognition motivated by their applications in healthcare, smart homes, or urban planning. However, most of the systems used for these applications are based on black boxes and, therefore, cannot be inspected, so the rationale behind their decisions is obscure. Despite recent advances, there is still a lack of research in interpretable machine learning in the audio domain. Applicants are invited to develop ideas to reduce this gap by proposing interpretable deep-learning models for automatic sound event detection and classification in real-life environments.

Machine learning models for musical timbre

Supervisor: Charalampos Saitis

Eligible funding schemes: S&E Studentships for Underrepresented Groups, CSC PhD Studentships, International PhD Funding Scheme

Music information retrieval tasks related to timbre (e.g., instrument identification, playing technique detection) have historically been under-researched, partly due to a lack of available (and annotated) data, including a lack of community consensus around instrument and technique taxonomies. In the context of music similarity, which extends to the topic of timbre similarity, metric learning methods are commonly used to learn distances from human judgements. There is extensive work on using metric learning with hand-crafted features, but such representations can be limiting. Conversely, deep metric learning methods attempt to learn distances directly from data, promising a viable alternative. Despite some limited adoption of deep metric learning for specific music similarity tasks, related efforts to learn timbre similarity, or to automatically construct taxonomical structures for timbre, are currently lacking. This project will investigate, propose, and develop machine learning models, including curating a new sizable dataset, that can learn discriminative representations of timbre through supervised, semi-supervised, and self-supervised learning paradigms of similarity and categorisation. Such models will enable a wide range of applications for computational music understanding (e.g., foundation models for music) and generation/creativity (e.g., neural audio synthesis). Candidates should have experience in at least one of the following: music informatics, machine listening, metric learning.
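As a pointer to the metric-learning machinery mentioned above, here is a minimal NumPy sketch of the triplet margin loss commonly used in deep metric learning. The "embeddings" are hand-made vectors standing in for a network's output, not features from any real timbre model.

```python
import numpy as np

def triplet_margin_loss(anchor, positive, negative, margin=1.0):
    """Hinge loss: push the negative at least `margin` farther than the positive."""
    d_pos = np.linalg.norm(anchor - positive)
    d_neg = np.linalg.norm(anchor - negative)
    return max(0.0, d_pos - d_neg + margin)

# Hypothetical embeddings: two violin notes (similar timbre) and a trumpet note.
violin_a = np.array([0.9, 0.1, 0.0])
violin_b = np.array([0.8, 0.2, 0.1])
trumpet = np.array([0.0, 0.9, 0.8])

easy = triplet_margin_loss(violin_a, violin_b, trumpet)   # negative already far: loss 0
hard = triplet_margin_loss(violin_a, trumpet, violin_b)   # labels swapped: positive loss
```

Training minimises this loss over many triplets derived from human similarity judgements, shaping an embedding space where distance tracks perceived timbre similarity.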

Scalable Acoustic Imaging Using Sparse Microphone Arrays for Embedded Devices

Supervisor: Iran R. Roman

Eligible funding schemes: S&E Studentships for Underrepresented Groups, International PhD Funding Scheme

This project will fundamentally transform existing acoustic imaging technologies by developing scalable and adaptable machine learning algorithms aimed at delivering precise spatial sound representations, while requiring minimal hardware. The project focuses on harnessing the potential of embedded microphone arrays, using as few as two or four channels, to create efficient algorithms integrated directly into chips or mobile devices. These algorithms will be designed to accurately localize sound sources and decode semantic information from the sounds, such as identifying the type of sound-producing entity.

The project will entail the development of efficient machine learning models that effectively process both simulated and real sound recordings to not only pinpoint the exact location of sound sources but also extract rich semantic content. This capability will enable compact and versatile acoustic cameras. These cameras will be integrated into various devices, such as smartphones, AR glasses, and security doorbells, enhancing functionalities such as video object tracking by localizing sounds outside the visual field, improving automatic speech recognition systems by providing spatial audio cues to differentiate speakers, and augmenting reality applications by synchronizing virtual sound sources with physical environments.

Students will engage in rigorous algorithmic design, leveraging both theoretical and practical aspects of acoustic signal processing, machine learning, and spatial audio techniques. Comparative analyses with existing technologies will help in fine-tuning the algorithms to achieve high accuracy and efficiency. This research aims to pave the way for next-generation multimodal technologies that enhance the sensory capabilities of everyday devices through advanced sound processing.
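A toy version of the two-channel localisation problem described above: estimate the time-difference of arrival (TDOA) between two microphones via cross-correlation and convert it to an incidence angle. The signal, sample rate and microphone spacing are made-up values, and real systems (including the learned models this project proposes) must additionally handle noise, reverberation and multiple sources.

```python
import numpy as np

fs = 16000        # sample rate (Hz)
c = 343.0         # speed of sound (m/s)
spacing = 0.1     # distance between the two microphones (m)

rng = np.random.default_rng(0)
src = rng.standard_normal(4000)  # broadband source signal

# Simulate a far-field source: mic 2 hears the wavefront 3 samples later.
true_delay = 3
mic1 = src
mic2 = np.concatenate([np.zeros(true_delay), src[:-true_delay]])

# Classic estimator: the peak of the cross-correlation gives the TDOA.
corr = np.correlate(mic2, mic1, mode="full")
lag = int(corr.argmax()) - (len(mic1) - 1)

tau = lag / fs                                            # delay in seconds
angle = np.degrees(np.arcsin(np.clip(tau * c / spacing, -1.0, 1.0)))
```

With only two microphones the recoverable angle is front–back ambiguous; the models described above would additionally attach semantic labels (what kind of source it is) to the localised sound.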

Sound-based DIY approaches to creative AI

Supervisor: Anna Xambó

Eligible funding schemes: S&E Studentships for Underrepresented Groups, CSC PhD Studentships, International PhD Funding Scheme

I am also open to discussing other projects related to creative AI and DIY approaches that aim to benefit underprivileged communities through the use of sound-based music and acoustic ecology systems.

Understanding Neural Audio Models with Artificial Intelligence and Linear Algebra

Supervisor: Mark Sandler

Eligible funding schemes: S&E Studentships for Underrepresented Groups, CSC PhD Studentships, International PhD Funding Scheme

Since ~2016 most research in Digital Music and Digital Audio has adopted Deep Learning techniques. These have brought performance improvements in applications like Music Source Separation, Automatic Music Transcription and so on. This is good, but on the downside, the models get larger, they consume increasingly large amounts of power for training and inference, require more data and become less understandable and explainable. These issues underpin the research in this PhD.

A fundamental building block in DL is Matrix (or Linear) Algebra. Through training, each layer’s weight matrix is progressively modified to reduce the training error. By examining these matrices during training, DL models can be compactly engineered to learn faster and more efficiently.

Research will start by exploring the learning dynamics of established Music Source Separation models. Using this knowledge, we can intelligently prune the models, using Low Rank approximations of weight matrices. We will explore what happens when Low Rank is imposed as a training constraint. Is the model better trained? Is it easier and cheaper to train? Next, the work shifts either to other Neural Audio applications, or to applying Mechanistic Interpretability, which reveals the hidden, innermost structures that emerge in trained Neural Networks.
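To make the low-rank idea concrete, the sketch below compresses a weight matrix with a truncated SVD, the standard construction behind low-rank pruning. The matrix here is random, standing in for a trained layer's weights.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 64))  # stand-in for a trained layer's weight matrix

def low_rank(W, r):
    """Best rank-r approximation of W (Eckart-Young theorem) via truncated SVD."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    return (U[:, :r] * s[:r]) @ Vt[:r, :]

W8 = low_rank(W, 8)
# Stored as factors, rank 8 needs 64*8 + 8*64 = 1024 numbers instead of 64*64 = 4096.
err8 = np.linalg.norm(W - W8) / np.linalg.norm(W)
```

For a random matrix this truncation error is large; the premise sketched above is that trained weight matrices develop low effective rank, so the same truncation loses little — and imposing low rank as a training constraint may make that structure explicit from the start.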

Using machine learning to enhance simulation of sound phenomena

Supervisor: Josh Reiss

Eligible funding schemes: S&E Studentships for Underrepresented Groups, CSC PhD Studentships, International PhD Funding Scheme

Physical and signal-based models of sound generating phenomena are widely used in noise and vibration modelling, sound effects, and digital musical instruments. This project will explore machine learning from sample libraries for improving the models and their design process.

Not only can optimisation approaches be used to select parameter values such that the output of the model matches samples; the accuracy of such an approach will also give us insight into the limitations of a model. It also provides the opportunity to explore the overall performance of different modelling approaches, and to find out whether a model can be generalised to cover a large number of sounds with a relatively small number of exposed parameters.

Existing models will be used, with parameter optimisation based on gradient descent. Performance will be compared against recent neural synthesis approaches that often provide high quality synthesis but lack intuitive controls or a physical basis. It will also seek to measure the extent to which entire sample libraries could be replaced by a small number of models with parameters set to match the samples in the library.
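A toy version of the parameter-matching loop described above, assuming a deliberately simple "physical model" (a damped sinusoid with one exposed parameter) and a plain finite-difference gradient. A real system would use differentiable implementations of the actual sound models and recorded samples from a library rather than a synthetic target.

```python
import numpy as np

t = np.linspace(0.0, 1.0, 1000)

def model(decay, freq=5.0):
    """Toy sound model: exponentially damped sinusoid with one exposed parameter."""
    return np.exp(-decay * t) * np.sin(2 * np.pi * freq * t)

target = model(decay=6.0)  # stands in for a sample the model should match

def loss(decay):
    return np.mean((model(decay) - target) ** 2)

# Gradient descent on the model parameter using a central finite difference.
d, lr, eps = 4.0, 100.0, 1e-4
for _ in range(300):
    grad = (loss(d + eps) - loss(d - eps)) / (2 * eps)
    d -= lr * grad
```

Here the fit recovers the target's decay; how closely such fits match an entire sample library, and where they fail, is exactly the diagnostic insight into model limitations that the paragraph above points at.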

The project can be tailored to the skills of the researcher, and has the potential for high impact.

 

DEGEM News – FWD – [ak-discourse] WG: Anfrage für die AK-Liste

From: Weinzierl, Stefan, Prof. Dr. via ak discourse
Date: Sat, 11 Jan 2025
Subject: [ak-discourse] WG: Anfrage für die AK-Liste

Dear friends of audio communication,

We would like to invite you to a survey conducted in cooperation with Hochschule Düsseldorf (Ton und Bild degree programme).

Hearing health is an important topic for sound engineers, and yet it is often underestimated. As professionals who work with high sound levels every day, we face particular challenges.

This survey is part of one of the first scientific studies to focus specifically on the hearing health of this professional group. Your participation helps raise awareness of the demands of the profession and yields important insights.

The survey is short and covers questions about your professional background, your hearing ability, how you handle sound exposure and hearing protection, and your stress levels.

➡ Take the survey here: https://ww2.unipark.de/uc/hoergesundheithsd/?a=

Your participation is anonymous. After completing the survey, you will have the opportunity to enter a prize draw for one of 10 Amazon vouchers worth 20 euros each.

With thanks,
Lukas Komnenovic
(student in the Ton und Bild programme at Hochschule Düsseldorf)

Project lead:
Prof. Dr.-Ing. Jochen Steffens
Hochschule Düsseldorf

Projektleiter:
Prof. Dr.-Ing. Jochen Steffens
Hochschule Düsseldorf

***
Prof. Dr. Stefan Weinzierl
TU Berlin
Audio Communication Group
Sekr. EN-8
Einsteinufer 17c
10587 Berlin

https://www.tu.berlin/ak

tel. +49 30 314 22236

DEGEM News – NEWS – The German National Library now collects digital music

From: Johannes S. Sistermanns via DEGEM
Date: Sun, 05 Jan 2025
Subject: Re: The German National Library now collects digital music

 

Can’t Stop The Music

The German National Library (Deutsche Nationalbibliothek) now collects digital music

https://blog.dnb.de/cant-stop-the-music/

Information on depositing digital music

The DNB websites offer extensive information on depositing digital publications. Specifically for digital music, detailed documentation is available on depositing music audio files in the DDEX ERN 4.3 metadata format, among other resources. Test data can be downloaded in versions 4.3 and 4.2.

A practical guide to delivering the data is the handbook „Onboarding Ablieferung digitale Musik-Publikationen“ (onboarding guide for depositing digital music publications).

Documentation and general information on the DDEX standards can be found in the DDEX Knowledge Base.

Good luck to all

and

best regards

 

Johannes

 

DEGEM News – FWD – [ak-discourse] FW: TISMIR Special Collection: “Digital Musicology” – Call for Papers

From: Lepa, Steffen via ak discourse
Date: Wed, 8 Jan 2025
Subject: [ak-discourse] FW: TISMIR Special Collection: “Digital Musicology” – Call for Papers

TISMIR Special Collection on “Digital Musicology”

Call for Papers

Deadline for submission: May 31st, 2025

https://transactions.ismir.net/announcements#call-for-papers—special-collection

 

Scope of the collection

This special collection serves as a platform for an interdisciplinary dialogue between music technology and musicology, promoting scholarly discussion of the application and usability of digital technologies in music research, and capturing contemporary trends and emerging directions in digital musicology scholarship. It is inspired both by the recent “Digital Technologies Applied to Music Research Conference: Methodologies, Projects and Challenges” (Lisbon, June 2024) and by reflections celebrating a decade of contributions from the international Digital Libraries for Musicology conference (DLfM), which held its first event in London in September 2014.

We welcome discussions on pressing issues in the digital humanities, such as cultural heritage preservation, FAIR principles and interconnected repertories, digital sustainability, and increasing awareness and access to digital music in non-academic contexts.

We also provide a venue for reflecting upon, re-evaluating, and revisiting research previously presented at DLfM which has since been substantially extended or adapted, or for surveying and summarising technologies and methodologies that have emerged as instrumental or prevalent in the digital musicology research community.

By bringing together scholars from digital libraries, humanities, computational musicology, and MIR, this collection aims to foster a broader mutual understanding of the needs, challenges, and desired outcomes within each of these areas. It seeks to help scholars evaluate methodologies and research questions, ultimately contributing to the development of new, more dynamic, inclusive and integrated research that benefits from diverse contributions. From a musicologist’s perspective, it will explore how digital technologies are transforming research practices and examine the extent of interdisciplinary collaboration between historical musicologists and music technology scholars in advancing our understanding and use of music.

 

Guest Editors

  • Elsa De Luca (lead). Researcher at CESEM-IN2PAST, NOVA University Lisbon
  • Ichiro Fujinaga. Professor at McGill University
  • David Lewis. Lecturer at Goldsmiths, University of London
  • Kevin Page. Senior Researcher and Associate Faculty at the University of Oxford e-Research Centre
  • Martha Thomae. Post-doctoral researcher at CESEM-IN2PAST, NOVA University Lisbon

 

Topics and submission guidelines at https://account.transactions.ismir.net/index.php/up-j-tismir/libraryFiles/downloadPublic/4.

 

If you are considering submitting to this special collection, it would greatly help our planning if you let us know by replying to elsadeluca@fcsh.unl.pt.

Kind regards,

 

Martha E. Thomae (on behalf of the GE)

Martha E. Thomae (PhD Music Technology, McGill University)

Postdoctoral Research Fellow at NOVA University Lisbon

DEGEM News – FWD – [ak-discourse] Invitation: 24–25 January 2025: Dancecult 25 Conference (TU Berlin)

From: Lepa, Steffen via ak discourse
Date: Tue, 7 Jan 2025
Subject: [ak-discourse] Invitation: 24–25 January 2025: Dancecult 25 Conference (TU Berlin)

Dear students, colleagues, and alumni,

From 24 to 25 January, the “Dancecult 25 Conference” will take place in the main building of TU Berlin – an international conference on “Preserving and Archiving Electronic Music and Dance Cultures”. In around 50 talks, social and cultural researchers from every continent will present their work on this topic.

Admission is free; the programme can be found here: https://dancecult-research.net/

The hosts are the Audio Communication Group (FG Audiokommunikation) and the Dancecult Research Network.

We look forward to your visit!
Best regards,

Steffen Lepa

Pronouns: he/his or it/its

***

Dr. Steffen Lepa

Postdoc Researcher & Lecturer

Fachgebiet Audiokommunikation (Sekr. EN-8)

Technische Universität Berlin

Einsteinufer 17c
10587 Berlin
Germany

Raum H 2001 E

FON: +49 (0)30 – 314 – 29313
FAX: +49 (0)30 – 314 – 12329313
MAIL: steffen.lepa@tu-berlin.de
WWW: https://www.tu.berlin/ak/ueber-uns/team/dr-steffen-lepa

SKYPE: steffenlepa

Handy/Messenger: +491794562244

***

DEGEM News – FWD – [ak-discourse] Announcement: Tapeheads – Working Group for Electromagnetic Recording and Playback

From: Lepa, Steffen via ak discourse
Date: Tue, 7 Jan 2025
Subject: [ak-discourse] Announcement: Tapeheads – Working Group for Electromagnetic Recording and Playback

Dear all,

We are delighted to present the world with yet another working group: the “Tapehead Society”, the working group for electromagnetic recording and playback.

At present the working group consists of a mailing list: https://www.listserv.dfn.de/sympa/info/tapeheadsociety.

The Tapehead Society is intended to offer a space for exchange about electromagnetic audio media and about recording and playback equipment. Largely independent of institutions, associations, and other interest groups, it is open to discussion of all questions in the areas of tape and audio technology. We see ourselves partly as a self-help group and partly as an informal advocacy group for matters of electromagnetic recording and playback.

The working group is aimed at all researchers, archivists, collectors, repairers, musicians, and equipment users who have come into contact with tapes and tape machines in their lives and in the most varied contexts, and who may already have engaged with them in depth or wish to do so in the future.

The Tapeheads were initiated by Richard Limbert (Lippmann+Rau-Archiv Eisenach) and Knut Holtsträter (ZPKM Freiburg).

If the topic interests you, please follow the Listserv link and read through the description. If you are interested, sign up via Listserv or simply send an email to knut.holtstraeter@zpkm.uni-freiburg.de; you will then receive a friendly reply and be added to the list.

Warm regards,

Knut (Holtsträter) and Richard (Limbert)

DEGEM News – FWD – Marc Behrens News: Listen in to Out of Sight

From: Marc Behrens
Date: Tue, 7 Jan 2025
Subject: Marc Behrens News: Listen in to Out of Sight

Marc Behrens News: Listen in to Out of Sight
marcbehrens.com/news_current.html
Curator’s guided tour ––– Invitation to the premiere
11.01.2025, 18:00 / 19:00 CET

Dialogmuseum Frankfurt, Germany: Last chance for a curator’s guided tour of my listening piece Curiosity Gap and the works by Matter of Facts, Untere Reklamationsbehörde, and Lasse-Marc Riek for the event series “Out of Sight – neue Klangkunst im Dialogmuseum”. A co-production by Hannes Seidl & Briefkastenfirma and Dialogmuseum Frankfurt. The pieces will in future be available in the museum’s local media library. ––– Dialogmuseum, An der Hauptwache, B-Ebene, Passage 10 (-> Roßmarkt), 60313 Frankfurt am Main, Germany.
-> Info
-> Tickets

***

DEGEM News – FWD – [ak-discourse] Acoustics Seminar on 09.01.25 at 14:15

From: Radmann, Vincent via ak discourse
Date: Mon, 6 Jan 2025
Subject: [ak-discourse] Acoustics Seminar on 09.01.25 at 14:15

Dear acoustics enthusiasts,

Next Thursday (09.01.25), the first acoustics seminar of the new year will take place. It begins at 14:15 with a talk on the following topic:

“Effects of salient individual acoustic events on the overall impression, using the example of road traffic noise” by Jakob-Dominik Ullmann (bachelor’s thesis)

You are warmly invited to join the talk via the following link, to ask questions, and to discuss.

Zoom: https://tu-berlin.zoom.us/j/66079473364?pwd=ZU52TEVjSWQxeUFKZC9OSURVelFJQT09
Meeting ID: 660 7947 3364
Passcode: 043624

Best wishes and a happy new year,
Vincent Radmann

Vincent Radmann
Technische Universität Berlin

Fachgebiet Technische Akustik

Sekr. TA7

Einsteinufer 25

10587 Berlin

Tel.: 030 314-78736

Email: vincent.radmann@tu-berlin.de

DEGEM News – FWD – Call for Submissions! ICMC Boston 2025

From: Akkermann, Miriam
Date: Sun, 05 Jan 2025
Subject: FW: Call for Submissions! ICMC Boston 2025


Call for Submissions!
https://icmc2025.sites.northeastern.edu/

Dear Colleagues,

It is my pleasure to share with you the Call for Submissions for the 50th Anniversary of the International Computer Music Conference (ICMC 2025) to be held in Boston from June 8-14, 2025.

In particular, we’d like to highlight our Innovation Showcase, which combines poster presentations and live demonstrations. This may be of interest to those with research in progress who would like to directly engage with conference attendees. Each submission will be evaluated based on its creative and intellectual merit and potential impact on the field of music technology and computer music. Selected submissions will be included in the conference proceedings.

We are also pleased to announce that PARMA Recordings will release a Best of the Listening Rooms digital album selected from works accepted and presented in the listening rooms at ICMC BOSTON 2025, on the Ravello Records label.

SUBMISSIONS DEADLINES:
January 15, 2025: Papers; Innovation Showcase
February 1, 2025: Music; Installation; Soundwalks
March 15, 2025: David Wessel Festschrift

All submissions should be made through the ICMC Boston 2025 Conference Management Toolkit (CMT).

For any questions, please do not hesitate to contact the conference Chair, Anthony Paul De Ritis, at a.deritis@northeastern.edu.