binauralFIR

by Ircam-RnD

Binaural module for the Web Audio API


BinauralFIR node

Processing audio node which spatializes an incoming audio stream in three-dimensional space for binaural audio.

The binauralFIR node provides binaural listening in a few simple steps. The novelty of this library is that it lets you use your own HRTF dataset. The node can be used like a regular AudioNode inside the Web Audio API: connect native nodes to the binauralFIR node by calling their connect method with binauralFIR.input:

nativeNode.connect(binauralFIR.input);
binauralFIR.connect(audioContext.destination);
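
For instance, a minimal sketch of a complete chain could look like the following; it assumes that audioContext and binauralFIR already exist and share the same context, and that the page contains an audio element with id "player" (a hypothetical id, not part of the library):

// Route an <audio> element through the binaural node.
var audioElement = document.querySelector('#player');
var source = audioContext.createMediaElementSource(audioElement);

source.connect(binauralFIR.input);             // any native AudioNode can feed the node's input
binauralFIR.connect(audioContext.destination); // the node's output behaves like a regular AudioNode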

An example HRTF dataset provided by IRCAM is included in the /example/snd/complete_hrtfs.js file.

Example

A working demo for this module can be found here and in the examples folder.

HRTF dataset format

Since this library allows you to use your own HRTF dataset, your dataset must follow this format:

Data | Description
azimuth | Azimuth in degrees: from 0 to -180 for a source on your left, and from 0 to 180 for a source on your right
distance | Distance in meters
elevation | Elevation in degrees: from 0 to 90 for a source above your head, 0 for a source in front of your head, and from 0 to -90 for a source below your head
buffer | AudioBuffer representing the decoded audio data. An audio file can be decoded by using the buffer-loader library

This data must be provided inside an Array of Objects, like this example:

[
  {
    'azimuth': 0,
    'distance': 1,
    'elevation': 0,
    'buffer': AudioBuffer
  },
  {
    'azimuth': 5,
    'distance': 1,
    'elevation': 0,
    'buffer': AudioBuffer
  },
  {
    'azimuth': 5,
    'distance': 1,
    'elevation': 5,
    'buffer': AudioBuffer
  }
]
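
Entries of this shape can be built by decoding impulse-response files. The sketch below uses fetch and the standard audioContext.decodeAudioData call instead of the buffer-loader library mentioned above; the file name and position values are hypothetical placeholders:

var hrtfDataset = [];

// Decode one HRTF impulse response and wrap it in the expected object shape.
function loadHRTFEntry(audioContext, url, azimuth, elevation, distance) {
  return fetch(url)
    .then(function (response) { return response.arrayBuffer(); })
    .then(function (data) { return audioContext.decodeAudioData(data); })
    .then(function (buffer) {
      return {
        'azimuth': azimuth,     // degrees: negative to the left, positive to the right
        'elevation': elevation, // degrees: positive above the head, negative below
        'distance': distance,   // meters
        'buffer': buffer        // decoded AudioBuffer
      };
    });
}

// 'ir_azim0_elev0.wav' is a placeholder file name for illustration only.
loadHRTFEntry(audioContext, 'ir_azim0_elev0.wav', 0, 0, 1).then(function (entry) {
  hrtfDataset.push(entry);
});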

API

The binauralFIR object exposes the following API:

Method | Description
binauralFIR.connect() | Connects the binauralFIRNode to the Web Audio graph
binauralFIR.disconnect() | Disconnects the binauralFIRNode from the Web Audio graph
binauralFIR.HRTFDataset | Sets the HRTF dataset to be used with the virtual source
binauralFIR.setPosition(azimuth, elevation, distance) | Sets the position of the virtual source
binauralFIR.getPosition() | Gets the current position of the virtual source
binauralFIR.setCrossfadeDuration(duration) | Sets the crossfade duration in milliseconds
binauralFIR.getCrossfadeDuration() | Gets the crossfade duration in milliseconds
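
Putting it together, a typical session might look like the sketch below. The BinauralFIR constructor options shown here are an assumption, not confirmed API; hrtfDataset is an array in the format described above, and source is any native AudioNode created from the same audioContext:

var audioContext = new AudioContext();
var binauralFIR = new BinauralFIR({ audioContext: audioContext }); // assumed constructor signature

binauralFIR.HRTFDataset = hrtfDataset; // set the HRTF dataset before positioning the source

source.connect(binauralFIR.input);
binauralFIR.connect(audioContext.destination);

// Place the virtual source 30 degrees to the right, at ear level, 1 meter away
binauralFIR.setPosition(30, 0, 1);

// Crossfade between HRTF filters over 20 ms when the position changes (value is illustrative)
binauralFIR.setCrossfadeDuration(20);

console.log(binauralFIR.getPosition());          // current position of the virtual source
console.log(binauralFIR.getCrossfadeDuration()); // current crossfade duration in milliseconds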

License

This module is released under the BSD-3-Clause license.

Acknowledgments

This code has been developed jointly by the Acoustic and Cognitive Spaces and the Analysis of Musical Practices research teams at IRCAM. It is also part of the WAVE project (http://wave.ircam.fr), funded by ANR (the French National Research Agency), ContInt program, 2012-2015.
