Each day, the technical program will include a plenary lecture by an eminent acoustician on a topic of broad scientific interest. Click on a title to bring up the abstract.
''From acoustic simulation to virtual auditory displays''
Simulation and auralization techniques are used in engineering, architecture, sound design, and in applications in hearing research. The components of this technique are acoustic simulations, signal-processing tools, and the data interfaces between them, for which well-established solutions exist. The main bottlenecks are the lack of 3D characterization data for sound sources and material parameters, and the interfaces to spatial audio technology; these problems remain subjects of research. Whether the virtual environment is considered sufficiently accurate depends on many perceptual factors, as well as on the pre-conditioning of the user and the user's degree of immersion in the virtual environment. In this presentation, the processing steps for the creation of Virtual Acoustic Environments are briefly presented, and the achievable degree of realism is discussed with examples from room acoustics, archeological acoustics, transportation noise, and hearing research.
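The central signal-processing step in auralization is convolving a dry (anechoic) source signal with a simulated room impulse response. The following is a minimal illustrative sketch of that step only; the impulse response and signal values are toy stand-ins, not output of any of the simulation tools discussed in the talk.

```python
# Toy sketch of the auralization convolution step. In practice the room
# impulse response (RIR) comes from an acoustic simulation; here it is a
# hand-written stand-in with a direct sound and two reflections.

def convolve(signal, rir):
    """Discrete convolution: each RIR tap adds a scaled, delayed copy."""
    out = [0.0] * (len(signal) + len(rir) - 1)
    for n, x in enumerate(signal):
        for k, h in enumerate(rir):
            out[n + k] += x * h
    return out

dry = [1.0, 0.0, 0.0, 0.0, 0.0, 0.0]   # unit impulse standing in for a source
rir = [1.0, 0.0, 0.5, 0.0, 0.25]       # direct sound + two reflections (toy)

auralized = convolve(dry, rir)
# For an impulse input, the output reproduces the RIR itself:
# direct sound followed by the two attenuated echoes.
```

Because the input here is a unit impulse, the output simply reproduces the impulse response; with a real anechoic recording, every sample of the source is delayed and scaled in the same way, which is what gives the auralized signal its simulated room character.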
''On the perspective of underwater acoustic tomography for probing ocean currents in shallow-water environments''
Oceanographic processes in coastal regions, including wind-driven flows, tidal currents, river outflows, internal waves, eddies, and western boundary currents, are highly variable in time and space. Conventional oceanographic measurements (e.g., the acoustic Doppler current profiler) cannot provide a synoptic image of these dynamic processes, especially at short time and space scales. Ocean Acoustic Tomography (OAT), which uses time-of-flight measurements along acoustic paths crossing the water from different angles, is an effective method for mapping the spatial distribution of current and temperature fields. This talk will focus on OAT applications for probing the current field in shallow-water environments and will present recent experimental results, including 1) the application of the middle-range (~50 km) OAT technique to study the spatial and temporal variations of the sub-branch of the Kuroshio off the east coast of Taiwan, 2) the exploitation of the communication signals of distributed networked underwater sensors for ocean current mapping, and 3) the integration of moving vehicles to enhance OAT results.
''Understanding music perception from the perspective of oscillation and resonance''
Over the last decade my lab has investigated psychoacoustic properties of pitch, timbre, and rhythm as perceived by the ear (auditory) as well as the skin (vibrotactile). Mechanoreceptors in the skin are structurally similar to those in the ear and exhibit frequency tuning that enables coarse pitch perception. Although the skin is equipped with only a few broadly tuned frequency channels and lacks a “place code”, this appears to be enough to enable discrimination between complex vibrotactile waveforms that have been matched for fundamental frequency and subjective magnitude (i.e., vibrotactile timbre perception). The skin is also quite capable of giving rise to the perception of rhythm; however, this capacity proves limited for complex rhythms.
Neuro-electric measures allow us to examine resonance to different levels of oscillatory structure. Auditory neurons in the brainstem are capable of phase locking with tone frequencies in music. The fidelity of this type of neural resonance is better in individuals with music training and worse in individuals with hearing impairment. Neurons in auditory and motor cortices have been found to phase lock to the dominant beat frequency in music (i.e., the pulse). This form of neural resonance continues even after the music has stopped, and much like the brainstem response to tone frequencies, its fidelity tends to be better in individuals with music training.
The picture that emerges from this body of work is that perception of music is underpinned by neural resonance to different levels of oscillatory structure present in auditory and vibrotactile waveforms. Long-term active engagement with music supports the fidelity of neural resonance.
''How the brain makes sense of complex auditory scenes''
Everyday listening involves a complex interplay between the ear, which transduces sound energy into neural responses, and the brain, which makes sense of these inputs. Historically, research on the ear tended to ignore the fact that what we can perceive in sound depends on the task the brain is engaged in, while research on cortical processing of sound ignored the complexity and sophistication of how the ear works. In this talk, I will explore how everyday perceptual abilities depend jointly on how the ear encodes information (and individual differences in the fidelity with which it does so) and on how attention and other state-dependent variables change the information we perceive.
''Hearing Protectors: State of the Art and Emerging Technologies of Comfort and Uncertainty in Measurements''
In many industrial and military situations it is not practical or economical to reduce ambient noise to levels that present neither a hazard to hearing nor annoyance. In these situations, personal hearing protection devices are capable of reducing the noise by up to around 35 dB. Although the use of a hearing protector is recommended as a temporary solution until action is taken to control the noise, in practice it ends up as a permanent solution in most cases. Therefore, hearing protectors must be both efficient in terms of noise attenuation and comfortable to wear. Comfort in this case relates to the willingness of the user to wear the hearing protector consistently and correctly at all times. The purpose of this paper is to review the state of the art, the need to develop methods to quantify comfort and noise leakage, and approaches to quantifying the uncertainty in evaluating hearing protector noise attenuation.
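The link between attenuation uncertainty and real-world protection can be sketched with a simple calculation: when measured attenuation varies across wearers, the protection one can assume in practice is commonly taken as the mean attenuation minus a multiple of its standard deviation (the "assumed protection value" approach used in hearing-protector rating standards such as ISO 4869-2). All numbers below are illustrative assumptions, not measured data from the paper.

```python
# Illustrative sketch: how measurement uncertainty lowers the attenuation
# that can be assumed in practice. Values are hypothetical examples.
mean_attenuation_db = 35.0   # mean measured attenuation (upper end cited above)
std_dev_db = 5.0             # inter-subject standard deviation (assumed)
k = 1.0                      # coverage factor (protects ~84 % of wearers)

# Assumed protection value: mean attenuation reduced by k standard deviations
assumed_protection_db = mean_attenuation_db - k * std_dev_db

ambient_db = 100.0           # workplace noise level (assumed)
protected_db = ambient_db - assumed_protection_db  # effective exposure level
```

The point of the sketch is that larger measurement uncertainty directly erodes the protection that can be credited to a device, which is why quantifying that uncertainty, alongside comfort and leakage, matters for the methods reviewed here.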