WebVR Experiment #4: Simple Music Visualizer

6/23: 3-Hour Sprint

July 11, 2018

Code & Design

Experiment 4 Goal:

Dynamic Album Preview w/ A Music Visualizer

Experiment 4 Shipped:

Three-Bar Visualizer



  • Loads A Preset Environment — using the public A-Frame Environment component with a pre-loaded preset for a quick visual improvement
  • Plays, Analyzes, & Renders Music Wave Data Through 3 Columns — using the Web Audio API to create an audio context, connect it to our audio destination, load & play an audio track, initialize an audio analyser node, & finally map the track's wave data to A-Frame entities.

***This piece in particular focuses extensively on the native Web Audio API as opposed to A-Frame & mixed-reality technologies. If you're a music head like myself, then carry onward, because it's about to get really interesting as we peek under the hood of the audio layer of modern browsers.


In case you skipped the italicized paragraph above, I'll reiterate here: the core focus of this article is not WebVR technology, but rather how audio works in modern browsers. As stated in the opening, the goal was to extend the previous artist discography by somehow inserting a music visualizer into the current scene — aka building a WebVR music visualizer.

As linked above, the A-Frame docs have an amazing audio visualizer example that inspired this idea; even better, the audio-visualizer source code was written by none other than WebVR virtuoso Kevin Ngo. Kevin maintains A-Frame & has published multiple popular components that I've used in previous projects.

~~~Perfect, only a matter of time before we’re visualizing all the musics…

Nope. My faith was shaken.

After a long 75+ minutes of copy/pasting examples, git-cloning the audio-visualizer component branch, & trying everything in between, I had failed to implement anything that looked remotely like the mesmerizing example above. I think open source gets taken for granted way too often, so instead of drawing my pitchfork on a GitHub issue, I decided to try leaning less on Mr. Ngo. Back at the drawing board, I could at least try writing the logic myself in plain-ol' vanilla JavaScript & placing it within a new component.

Hitting Google, I searched for "JavaScript music visualizer tutorials," which led to multiple full-length articles. Admittedly, all of these articles seemed impossible to understand at first, but persistence is key. The one critical piece of information I parsed & retained was that almost every single article leaned heavily on the native, modern-browser audio API. With that tidbit of information, I fired up a blank index.html page, imported A-Frame & A-Frame Environment, & registered a new A-Frame component named audiodata in our <head> tag to hold all of the music visualizer logic.
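The registration step above follows A-Frame's standard component pattern. As a rough sketch (the handler bodies here are placeholders, not the article's actual implementation):

```javascript
// Hypothetical shape of the `audiodata` component. A-Frame calls `init`
// once when the component attaches, and `tick` on every rendered frame.
const audiodataComponent = {
  init: function () {
    // create the audio context & nodes here (shown later in the article)
  },
  tick: function () {
    // read the analyser data & resize the columns here, every frame
  }
};

// Register it when A-Frame is present (i.e., in the browser):
if (typeof AFRAME !== 'undefined') {
  AFRAME.registerComponent('audiodata', audiodataComponent);
}
```

Once registered, the component can be attached to any entity in the scene as an HTML attribute, e.g. `<a-entity audiodata></a-entity>`.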

Web Audio API

Drawing graphics with script in HTML requires the <canvas></canvas> element; working with any sound in HTML requires the equivalent of a sound canvas — this is called the audio context. A new instance of an audio context does nothing by itself, but it marks the beginning of Web Audio API usage, as it will contain the rest of our logic. The following diagram illustrates this accordingly:

[Diagram: an audio context containing source and destination nodes]

var audioContext = new (window.AudioContext || window.webkitAudioContext)();

My next goal here was to simply play an mp3 track. A quick glance at the diagram above told me that at the very minimum I needed to initialize two additional nodes (source & destination), connect them to our audio context, & finally connect them to each other.

The source node intuitively suggested that this is where I'd connect the mp3 file URL. The API docs relayed that the audio destination is the technical term for the software & hardware that will output the sound in the audio context; in this case, that means the user's speakers. A relatively simple process, yet still quite far from a full-fledged visualizer. In order to visualize, I first had to add a third node that would provide the data needed — enter the GainNode.
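The wiring described so far — a source feeding a gain node, feeding the destination — can be sketched as a small helper. `buildAudioGraph` is a hypothetical name I'm using for illustration; `ctx` is an AudioContext and `mediaEl` an `<audio>` element:

```javascript
// Sketch of the minimal playback graph: source -> gain -> speakers.
function buildAudioGraph(ctx, mediaEl) {
  const source = ctx.createMediaElementSource(mediaEl); // the mp3 track
  const gain = ctx.createGain();                        // the third node
  source.connect(gain);          // source feeds the gain node
  gain.connect(ctx.destination); // gain feeds the user's speakers
  return { source, gain };
}

// In the browser, it would be used roughly like this:
// const ctx = new (window.AudioContext || window.webkitAudioContext)();
// buildAudioGraph(ctx, document.querySelector('audio'));
```

Note that the destination is not created by us: it already exists on the context as `ctx.destination`; we only connect into it.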

[Diagram: the audio graph with a GainNode added between source and destination]

Taken directly from the documentation, the GainNode "interface represents a change in volume; it is an audio-processing node that causes a given gain to be applied to the input data before its propagation to the output." In less technical terms, this is the node that applies a change in volume to the signal coming in from our source — & that change in volume is exactly the data I wanted to visualize. (The actual per-frame readout of that data comes from an analyser node, which shows up in the code below.)
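What "a given gain applied to the input data" means in plain arithmetic: every sample is multiplied by the gain value before moving on. This is a sketch of the idea, not the Web Audio API's internal implementation:

```javascript
// A gain of 1.0 leaves the signal untouched; 0.5 halves the volume;
// 0 silences it. Each output sample is input sample times gain.
function applyGain(samples, gain) {
  return samples.map(s => s * gain);
}

// applyGain([0.5, -0.25, 1.0], 0.5) -> [0.25, -0.125, 0.5]
```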

Satisfied with visualizing changes in volume, I proceeded to implement the logic displayed above by initializing a GainNode object within our audio context & then connecting it to the other audio nodes. From there, it's only a matter of creating a media element within the A-Frame scene, looping through the data returned every frame that music is playing, & attaching said data to resizable columns. The code snippet below holds the crux of this sprint's result, console-logging the data array returned:

let source = 'some mp3 file'; // path or URL to the track
var audioContext = new (window.AudioContext || window.webkitAudioContext)();
let masterGain = audioContext.createGain();
masterGain.connect(audioContext.destination);
let song = new Audio(source);
let songSource = audioContext.createMediaElementSource(song);
songSource.connect(masterGain);
const analyser = audioContext.createAnalyser();
masterGain.connect(analyser);
function updateWaveform() {
  var dataArray = new Float32Array(analyser.frequencyBinCount);
  analyser.getFloatFrequencyData(dataArray); // fill with this frame's wave data
  console.log(dataArray);                    // the crux: log the data array
  requestAnimationFrame(updateWaveform);
}
updateWaveform();
song.play();
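Once the analyser's data array is filled each frame, mapping it onto the three columns can be sketched as a pure function: split the bins into three bands, average each band, & scale the average to a bar height. `barHeights` is a hypothetical helper name, & it assumes byte frequency data in the 0–255 range (as returned by `getByteFrequencyData`) rather than the float variant above:

```javascript
// Map an array of frequency bins (0-255 each) onto `barCount` bar heights.
function barHeights(bins, barCount = 3, maxHeight = 5) {
  const bandSize = Math.floor(bins.length / barCount);
  const heights = [];
  for (let i = 0; i < barCount; i++) {
    let sum = 0;
    for (let j = i * bandSize; j < (i + 1) * bandSize; j++) sum += bins[j];
    // average the band, then scale from 0..255 down to 0..maxHeight
    heights.push((sum / bandSize / 255) * maxHeight);
  }
  return heights;
}
```

Each frame, the three returned values can then be written to the columns' scale or height attributes in the A-Frame scene.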

And there we go! The tiny bit of logic for a native web music visualizer powering a WebVR frontend. Frankly, I was surprised at how painless it was to implement this natively versus deciphering Mr. Ngo's audio-visualizer component — the modern web browser has come a long way in audio support. I hope this trend doesn't stop with WebVR.

[Image: the resulting three-bar visualizer in the A-Frame environment]

I obviously came nowhere close to the target goal; however, due to the change of strategy midway through, I can't say that I'm exactly disappointed. Visit the visualizer here: https://musicworldtest3.neocities.org/

Moving on to WebVR Experiment #5, where I'll continue this music series & finally integrate the music visualizer with our album discography!

New Public Components & Technologies Used: