AudioContext
The AudioContext interface represents an audio-processing graph built from audio modules linked together, each represented by an AudioNode.
An audio context controls both the creation of the nodes it contains and the execution of the audio processing, or decoding. You need to create an AudioContext before you do anything else, as everything happens inside a context. It's recommended to create one AudioContext and reuse it instead of initializing a new one each time, and it's OK to use a single AudioContext for several different audio sources and pipelines concurrently.
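For example, a single AudioContext can drive several sources at once. The sketch below is an illustration only; the two <audio> elements and their IDs ("music" and "effects") are assumptions, not part of the API:

```js
// One shared AudioContext for two independent sources.
const audioCtx = new AudioContext();

// The element IDs below are placeholders for <audio> elements on your page.
const music = audioCtx.createMediaElementSource(
  document.getElementById("music"),
);
const effects = audioCtx.createMediaElementSource(
  document.getElementById("effects"),
);

// Both pipelines share the same context and the same destination.
music.connect(audioCtx.destination);
effects.connect(audioCtx.destination);
```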
Constructor
AudioContext()
Creates and returns a new AudioContext object.
Instance properties
Also inherits properties from its parent interface, BaseAudioContext.
AudioContext.baseLatency Read only
Returns the number of seconds of processing latency incurred by the AudioContext passing the audio from the AudioDestinationNode to the audio subsystem (see the example after this list).
AudioContext.outputLatency Read only
Returns an estimation of the output latency of the current audio context.
AudioContext.sinkId Read only Experimental
Returns the sink ID of the current output audio device.
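A minimal sketch reading both latency values; note that outputLatency is only an estimate and the reported numbers vary between platforms:

```js
const audioCtx = new AudioContext();

// Both values are reported in seconds.
console.log(`Base latency: ${audioCtx.baseLatency} s`);
console.log(`Output latency: ${audioCtx.outputLatency} s`);
```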
Instance methods
Also inherits methods from its parent interface, BaseAudioContext.
AudioContext.close()
Closes the audio context, releasing any system audio resources that it uses.
AudioContext.createMediaElementSource()
Creates a MediaElementAudioSourceNode associated with an HTMLMediaElement. This can be used to play and manipulate audio from <video> or <audio> elements.
AudioContext.createMediaStreamSource()
Creates a MediaStreamAudioSourceNode associated with a MediaStream representing an audio stream which may come from the local computer microphone or other sources.
AudioContext.createMediaStreamDestination()
Creates a MediaStreamAudioDestinationNode associated with a MediaStream representing an audio stream which may be stored in a local file or sent to another computer.
AudioContext.createMediaStreamTrackSource()
Creates a MediaStreamTrackAudioSourceNode associated with a MediaStream representing a media stream track.
AudioContext.getOutputTimestamp()
Returns a new AudioTimestamp object containing two audio timestamp values relating to the current audio context.
AudioContext.resume()
Resumes the progression of time in an audio context that has previously been suspended/paused (see the sketch after this list).
AudioContext.setSinkId() Experimental
Sets the output audio device for the AudioContext.
AudioContext.suspend()
Suspends the progression of time in the audio context, temporarily halting audio hardware access and reducing CPU/battery usage in the process.
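The sketch below combines createMediaElementSource() with suspend() and resume(); the <audio> element and the "#toggle" button selector are assumptions used only for illustration:

```js
const audioCtx = new AudioContext();

// Route an existing <audio> element through the context.
const audioElement = document.querySelector("audio");
const track = audioCtx.createMediaElementSource(audioElement);
track.connect(audioCtx.destination);

// Toggle processing from a (hypothetical) button with the id "toggle".
document.querySelector("#toggle").addEventListener("click", async () => {
  if (audioCtx.state === "running") {
    await audioCtx.suspend(); // halts audio hardware access
  } else if (audioCtx.state === "suspended") {
    await audioCtx.resume(); // restarts the progression of time
  }
});
```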
Events
sinkchange Experimental
Fired when the output audio device (and therefore, the AudioContext.sinkId) has changed.
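A brief sketch of listening for this event; keep in mind that sinkchange and sinkId are experimental and may be unavailable in some browsers:

```js
const audioCtx = new AudioContext();

audioCtx.addEventListener("sinkchange", () => {
  // sinkId is an empty string when the default output device is in use.
  console.log(`Audio output changed; sinkId is now "${audioCtx.sinkId}"`);
});
```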
Examples
Basic audio context declaration:
```js
const audioCtx = new AudioContext();
const oscillatorNode = audioCtx.createOscillator();
const gainNode = audioCtx.createGain();
const finish = audioCtx.destination;
// etc.
```
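One possible continuation, assuming the nodes declared above: connect the graph, set a gain, and play the oscillator for two seconds.

```js
// Wire the graph: oscillator -> gain -> destination.
oscillatorNode.connect(gainNode);
gainNode.connect(finish);

gainNode.gain.value = 0.5; // halve the output level
oscillatorNode.start();
oscillatorNode.stop(audioCtx.currentTime + 2);
```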
Specifications
| Specification |
|---|
| Web Audio API # AudioContext |
Browser compatibility