MORPHOACOUSTICS
  • Home
  • Research
  • Publications
  • People
  • SYMARE Database
  • Demos
  • SUPPORT
  • About
  • Contact
  • Blog
Research

Morphoacoustics is a research area that explores the inter-relationship between physical structure, acoustical properties, and perception. We are currently investigating three different phenomena:
  1. Human outer ears and spatial hearing
  2. Violins and their acoustic radiance patterns
  3. Human vocal tract and speech

For each phenomenon above, the three attributes of physical structure, acoustical properties, and perception are integrally related in a complex way. Therefore, a key component of morphoacoustics research is the ability to understand and predict how changes in physical structure result in changes to acoustical properties, which in turn result in changes to perception.

The research approach we are taking first requires that we can mathematically model deformations in each of the three attributes. For example, consider the phenomenon of spatial hearing, in which our outer ears enable us to perceive the location of sounds in space. Outer ears vary in shape across listeners, so how do we model these shape variations? Each outer ear has a unique directivity pattern at a given frequency, so how do we model variations in these acoustic directivity patterns? The directivity patterns of the outer ears influence the perception of both sound localization and sound timbre, so how do we model variations in the perception of localization and timbre?

In addition to suitable mathematical models of deformations in the three attributes, we also require numerical simulation tools that enable us to predict the consequences of deformations. For example, even if we can deform an ear shape mathematically, how can we determine what changes occur in the acoustic directivity patterns? Similarly, if we can model changes in acoustic directivity patterns, how can we determine what changes occur in perception? To this end, we must develop accurate simulation tools.

We have found that our research approach requires new tools and new data. Please read on if you have an interest!


INDIVIDUALIZATION FOR 3D AUDIO

Synthesizing high-fidelity 3D sound over headphones hinges on the individualized relationship between human outer ears and their acoustic properties. The torso, head, and particularly the outer ears all influence external sounds on their way to the eardrum, and they do so in a directionally dependent manner. It is this acoustic filtering, mathematically described by head-related impulse response (HRIR) filters, that provides the auditory cues which enable human spatial hearing. The physical characteristics of each outer ear are as unique as a fingerprint, and each listener requires his/her own HRIR filters to obtain high-fidelity 3D audio via earphones. Acoustically measuring a listener's HRIR filters is an expensive and time-consuming laboratory process: it typically requires the listener to sit still, with microphones in his/her ears, in an anechoic (echo-free) room while a loudspeaker mounted on a robotic arm is moved from location to location and sounds are recorded in the listener's ears. It has long been a dream, and it is the aim of this project, to efficiently derive high-fidelity HRIR filters from limited morphological data.

The challenge for this project is to characterize the nonlinear relationship between two extremely complex data sets: the intricate morphology of human outer ears and the variation in their acoustic properties across space and frequency. Our international research group is working to solve this challenge by developing sophisticated tools for modelling complex data. We are applying a mathematical shape analysis method referred to as Large Deformation Diffeomorphic Metric Mapping (LDDMM) to the study of ear shapes. The LDDMM method enables us to construct a powerful parametric model of ear shape and to measure the similarity between different ear shapes in a suitable Riemannian space (refer to Fig. 1). We also propose to use the LDDMM approach to construct a global model of HRIR filters. We have developed the SYMARE database, which contains detailed morphological data and HRIR data (refer to Fig. 2). This database makes possible the construction of morphable models of ears and HRIR filters. We use fast-multipole Boundary Element Method (FM-BEM) acoustic simulations of surface meshes of the torso, head, and ears to provide accurate numerical HRIR data. HRIR filters are generally studied in the frequency domain, where they are referred to as head-related transfer functions (HRTFs). Comparisons between acoustically-measured and numerically-simulated HRTFs are shown in Fig. 3.
 
Figure 1: The LDDMM flow of diffeomorphisms (Ear A to Ear B) for several time steps is shown.
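The flow of diffeomorphisms in Fig. 1 can be given a rough intuition in code. The sketch below is a toy illustration only, not LDDMM itself: real LDDMM optimizes a time-varying velocity field under a smoothness norm, whereas here a fixed, hand-chosen velocity field advects a small set of 2-D points with Euler time steps, producing intermediate point sets analogous to the time steps shown in the figure.

```python
import numpy as np

def flow(points, velocity, steps=10, dt=0.1):
    # Advect points along a velocity field with explicit Euler steps.
    # Each intermediate point set is one "time step" of the flow.
    trajectory = [points.copy()]
    for _ in range(steps):
        points = points + dt * velocity(points)
        trajectory.append(points.copy())
    return trajectory

def rotational_field(p):
    # A smooth, divergence-free velocity field (hypothetical example).
    return np.stack([-p[:, 1], p[:, 0]], axis=1)

shape_a = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0]])  # stand-in "shape"
steps = flow(shape_a, rotational_field)
print(len(steps))  # the source shape plus 10 flow steps
```

In actual LDDMM the velocity field itself is the unknown, and the length of the optimal path through the space of diffeomorphisms provides the Riemannian distance between two shapes mentioned above.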
 
Figure 2: Population of high-resolution head and torso surface meshes in the SYMARE database.
 
Figure 3: Comparisons between acoustically-measured and FM-BEM-simulated HRTFs are shown. The magnitude of the HRTF is shown as a function of space for two frequencies.
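The HRIR/HRTF relationship described above is simply a Fourier-transform pair, which the following sketch makes concrete. The HRIR here is synthetic (a delayed, decaying impulse standing in for a measured filter), and the sampling rate and filter length are assumed values for illustration only.

```python
import numpy as np

fs = 48000          # sampling rate in Hz (assumed)
n = 512             # HRIR filter length in samples (assumed)

# Synthetic stand-in for a measured HRIR: a delayed, decaying impulse.
hrir = np.zeros(n)
hrir[40] = 1.0                          # propagation delay of 40 samples
hrir[41:60] = 0.5 ** np.arange(1, 20)   # simple exponential decay

# The HRTF is the frequency-domain representation of the HRIR,
# i.e. its discrete Fourier transform (real-input FFT here).
hrtf = np.fft.rfft(hrir)
freqs = np.fft.rfftfreq(n, d=1.0 / fs)
magnitude_db = 20.0 * np.log10(np.abs(hrtf) + 1e-12)
```

Comparisons such as those in Fig. 3 are made on the magnitude (and sometimes phase) of such spectra across many source directions.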

VIOLINS AND ACOUSTIC RADIANCE PATTERNS

Cremona, Italy, harbours world cultural heritage in violin making and inspires our interest in the violin's acoustic properties. These acoustic properties result from the violin-making process and bear a complex relationship to it. For now, our focus is on the vibrational modes of the violin, the violin's radiance patterns, and how these vary with frequency. Vibrometric measurements provide data regarding the vibrational modes of the violin (see Fig. 4), and admittance measurements at the bridge provide further characterization of the violin's acoustic properties.
Figure 4. The measured vibrational modes of a free soundboard (modes 1, 2, 5, and 10).
The issue we study is how the vibrational modes of the violin relate to its acoustic radiance patterns. The acoustic radiance patterns are measured during actual musical play using what we refer to as a plenacoustic camera together with ray-space analyses. The plenacoustic camera consists of multiple planar arrays of microphones, as shown in the background of Fig. 5. Gyroscopic orientation tracking is also placed on the violin to monitor the movements of the violinist, so that these movements can be removed as a confounding variable during the plenacoustic measurements. Some of the measured acoustic radiance patterns are shown in Fig. 6.
Figure 5. A violinist playing in the recording room with the planar plenacoustic microphone arrays in the background.
Figure 6. Acoustic radiance patterns of a violin.
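As a rough indication of how a planar microphone array can be steered toward a source, the sketch below implements plain delay-and-sum beamforming. This is a deliberately minimal stand-in: the plenacoustic camera and ray-space analysis used in this project are considerably more sophisticated, and the array geometry and signals below are entirely hypothetical.

```python
import numpy as np

C = 343.0  # speed of sound in air (m/s)

def delay_and_sum(signals, mic_positions, direction, fs):
    # Align each microphone signal for a plane wave arriving from
    # `direction` (a unit vector), then average the aligned signals.
    delays = mic_positions @ direction / C      # relative delays (s)
    delays -= delays.min()
    out = np.zeros(signals.shape[1])
    for sig, d in zip(signals, delays):
        shift = int(round(d * fs))
        out[: signals.shape[1] - shift] += sig[shift:]
    return out / len(signals)

# Two microphones 10 cm apart on the x-axis; a wave arriving broadside
# (along y) reaches both at the same time, so no alignment is needed.
mics = np.array([[0.0, 0.0, 0.0], [0.1, 0.0, 0.0]])
tone = np.sin(2 * np.pi * 1000 * np.arange(256) / 48000)
signals = np.stack([tone, tone])
steered = delay_and_sum(signals, mics, np.array([0.0, 1.0, 0.0]), 48000)
```

Scanning such a beam over directions yields a coarse spatial map of where acoustic energy leaves the instrument, which is the kind of information the radiance patterns in Fig. 6 convey at much higher fidelity.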

VOCAL TRACT AND SPEECH

We are also exploring the application of Large Deformation Diffeomorphic Metric Mapping to the study of vocal tract morphology, as shown in Fig. 7. We morph vocal tract 3D mesh models; a morph from Vocal Tract 1 to Vocal Tract 2 is shown in the figure. Each vocal tract shape corresponds to a vowel sound, which can be simulated using acoustic numerical simulation based on the mesh model. The resulting modification to the vowel sound is shown below the vocal tract mesh models.
Figure 7. LDDMM is applied to morph a female vocal tract to a male vocal tract, as shown on top. Acoustic numerical simulation is applied to the vocal tract mesh to simulate the vowel sound corresponding to the morphed mesh.
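A first-order feel for how vocal tract geometry shapes vowel acoustics, without any mesh or BEM machinery, comes from the classical uniform-tube model: a tube closed at the glottis and open at the lips resonates at odd quarter-wavelength frequencies. The tract length below is an assumed typical adult value, not a measurement from our data.

```python
C = 343.0   # speed of sound in air (m/s)
L = 0.175   # vocal tract length (m); ~17.5 cm, a typical adult value

# Resonances of a uniform tube closed at one end and open at the other:
# F_n = (2n - 1) * C / (4 * L). These approximate the formants of the
# neutral vowel; real vowels reshape the tube and shift the formants.
formants = [(2 * n - 1) * C / (4 * L) for n in (1, 2, 3)]
print(formants)  # roughly 490, 1470, 2450 Hz
```

Lengthening the tube, as in the female-to-male morph above, lowers all of these resonances proportionally, which is one reason the simulated vowel changes character after the morph.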
Real-time magnetic resonance imaging (MRI) provides an opportunity to better understand vocal tract dynamics. In this regard, we are trialling radial MRI at 76 fps to explore whether it yields suitable dynamic images of the vocal tract. An example MRI video (without audio at this stage) is shown below.
Phone: +61 2 9351 7208
Email: [email protected]
Copyright ©  Morphoacoustics.org 2019