Music to Watch URLs By
Developed in 1998 on the MediaLab Arts degree course at the University of Plymouth, the webPlayer is a generative music creator, taking pages from the web and turning them into music, providing a soundtrack to enhance your browsing experience.
Some sample output from the webPlayer, as MP3s:
- webplayer instructions page, recorded @ 5am, 17/05/00
- webplayer instructions page, recorded @ 4pm, 17/05/00
- webplayer instructions page, recorded @ 11pm, 17/05/00
The boundaries between sound and music are becoming increasingly blurred as tastes change and artists produce ever more diverse audio experiences using electronic boxes in preference to classic instruments – Gescom, Aphex Twin, Speedy J and Autechre are just a few artists whose output would have been considered unlistenable only a few decades ago, but whose popularity is on the increase.
For the purposes of this project, I defined ‘music’ as a more traditional form of audio experience: of chords and rhythms, melodies and harmonies as appropriate. In the generative sector at least, the musical output is often only appreciated as a result of the process it has been through, and is made more listenable by a knowledge of that process, just as a Jackson Pollock paint-splattered canvas is deemed more artistic than a small child’s poster-paint flicked outburst, because we know that Pollock was using the motion of the trees to guide him. When creating for the immaterial canvas of the electronic age, process is more important than ever before.
To achieve my goal, I had to define the process of converting raw text to music. The most straightforward way to do this is via numbers – every ASCII character has a value ranging from 0 to 127, the same range as the note numbers used by most MIDI-based tools. Early versions of the webPlayer utilised this method, which produced results exactly like the ones I didn’t want – non-musically based, effectively randomly generated dirge. Unfortunately, the structures that the English language imposes on sentence composition are very different to those used in the phrasing of musical composition, and one does not transpose directly onto the other.
Fortunately, I stumbled upon the serialist composers and their work – people who used strict mathematical formulae to structure their compositions, effectively generating music out of numbers. By implementing a process similar to that of the likes of Schoenberg, Messiaen et al, the output became structured, rhythmic and more musical in nature. The formulae that the webPlayer utilises are quite complex, but fundamental to the process of its musical generation, so I have included a complete breakdown of them below.
During the project’s duration, Macromedia’s Director 8 came on the market, and an undocumented feature was whispered about, adding the functionality to shift the pitch of embedded sounds in real time. As a result, I dropped the ‘Beatnik’ Xtra I had been using like a hot potato, solving all my latency and compatibility problems in one go. Download size is always a primary concern of mine when building Shockwave pieces, so I chose to use just one pure note instead of a range of instrument samples, keeping the size of the movie down to just 52k. I think this pure waveform also reflects the generative nature of the webPlayer – there is no reason to use an instrument in its place, and no method of choosing which instrument it should be. To use a pure waveform is the only logical conclusion.
One element in the success of the piece is the speed at which it plays. Originally, and for the first 10 of its 13 versions, the playback was much faster than it is in the final webPlayer – but the ‘tunes’ coming out of the structures were faltering due to the rate-shifting of the notes, which produced slightly longer notes at lower frequencies and the subsequent loss of all structure and ‘musicality’. By slowing the playback of the notes and replacing the original short, harsh sample with a longer, softer one, the webPlayer assumes a completely different stance – much more ambient and ‘background’ in nature. It was at this point that I came up with the sub-heading of ‘Music to Watch URLs By…’, as the output of the webPlayer had changed dramatically from ‘up front’ to purely ‘atmospheric’ in nature.
Every piece of music should have a score of some form, and the webPlayer’s is essentially the raw HTML data from whatever URL it is pointed at, plus the date and time that the page is processed. There is no random functionality in the webPlayer so, given these elements, any piece of music generated could be reproduced at a later date. Every element of the generative process is cyclic in nature, so theoretically the same piece could be generated without knowledge of these elements… but it is extremely unlikely! Ultimately, the webPlayer’s output is determinable but not predictable – it is unlikely that you will be able to see a web page and ‘name that tune’ – there are simply too many factors involved.
The visual output of the webPlayer changed during its production, from a graphic realisation of the notes being played (two small blobs racing across the screen, leaving trails) to the ‘spectrum analyser’ in the final version. While the ‘blobs’ output was aesthetically quite nice, I soon realised that essentially there should be no ‘concrete’ output from the webPlayer which could be captured and referred to at a later date. The beauty of the piece is its ephemerality – it is truly music of the moment – and the dancing green bars reflect this perfectly. They have no meaning in themselves or if captured statically, but together they form an accurate representation of the sound.
The original webPlayer appeared over the web site in a small pop-up window. The intention was that you could feed it any URLs you visited as you saw them – simply copying and pasting the address from your browsing window to the webPlayer.
The webPlayer was developed in Macromedia Director and exported as a Shockwave .dcr file – a now-defunct technology which is unplayable online.
All output from the webPlayer is generative in nature – as little as possible is predefined. There are three sources – the HTML of the URL pasted into the webPlayer, the current time, and the current date. All three affect the output of the webPlayer to some extent, but the most influential is the HTML content itself, as the basis for note sequencing.
The first stage in the process is to strip out all the tagged content, keeping only the text, as much of the markup is common to many web pages and would result in a repetitive output. The remaining content is split into sentences, then processed through an ASCII-based filter to produce a ‘base sequence’ of 8 numbers between 1 and 8 (every number used exactly once).
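The article does not spell out the filter itself, so the following is a minimal sketch of one plausible approach: split the sentence into eight chunks, sum the ASCII codes of each, and rank the sums to obtain a permutation of 1–8. The chunk-and-rank scheme is my assumption, not the original algorithm (Python here for illustration; the original was Lingo).

```python
def base_sequence(sentence):
    """Reduce a sentence to a permutation of 1-8 via its ASCII codes.
    The chunk-and-rank scheme is an assumption, not the original filter."""
    chunk = max(1, len(sentence) // 8)
    # sum the ASCII codes of eight successive chunks of the sentence
    sums = [sum(ord(c) for c in sentence[i * chunk:(i + 1) * chunk])
            for i in range(8)]
    # rank the eight sums (ties broken by position) so each number appears once
    order = sorted(range(8), key=lambda i: (sums[i], i))
    seq = [0] * 8
    for rank, i in enumerate(order):
        seq[i] = rank + 1
    return seq
```

Whatever the real filter was, the essential property is the same: any sentence deterministically yields each of the numbers 1 to 8 exactly once.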
Note Sequences and Transformations
The webPlayer utilises a well-known series of transformations adapted from the processes used by Schoenberg, which he adapted from ‘transformations used with themes and motifs of Machaut 600 years earlier, by Bach 200 years earlier, and by Beethoven 100 years earlier.’ (Holtzman 1996: 57)
The original serialist transformations, formed with 12 notes, were based on the use of a clock face. The webPlayer uses similar transformations, but adapted for an 8-hour clock (if there were such a thing), because its sequences comprise only 8 notes.
To create the reverse sequence, the webPlayer begins with the base sequence, and simply reverses it.
base:    [7, 3, 4, 1, 2, 5, 6, 8]
reverse: [8, 6, 5, 2, 1, 4, 3, 7]
The calculation of the inverse sequence is slightly more complicated. The webPlayer first calculates the shift in step between each note in the base sequence:
base:  [7, 3, 4, 1, 2, 5, 6, 8]
shift: [-4, +1, -3, +1, +3, +1, +2]
…and then reverses that shift, by swapping all the positive and negative changes:
shift: [+4, -1, +3, -1, -3, -1, -2]
It is in applying this shift to create the inverse sequence that the ‘clock face’ metaphor creeps in.
The original shift from 7 down to 3 was calculated as ‘-4’. But now the webPlayer has to add 4 to 7. As you can see in the diagram, by wrapping the calculation around an 8-hour clock face, 7+4=3 once again. Applying this principle to all the shifts in the base sequence, the webPlayer generates the following inverse sequence:
inverse: [7, 3, 2, 5, 4, 1, 8, 6]
The inverseReverse sequence applies the same rules, but using the shifts from the reverse sequence as a basis for transformation:
reverse:        [8, 6, 5, 2, 1, 4, 3, 7]
shift:          [-2, -1, -3, -1, +3, -1, +4]
inverted shift: [+2, +1, +3, +1, -3, +1, -4]
invRev:         [8, 2, 3, 6, 7, 4, 5, 1]
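The four transformations and the 8-hour clock-face wrap can be sketched in a few lines (Python for illustration; the original was Lingo):

```python
def shifts(seq):
    """Step between successive notes of a sequence."""
    return [b - a for a, b in zip(seq, seq[1:])]

def wrap(n):
    """Wrap a note number onto the 1-8 'clock face'."""
    return (n - 1) % 8 + 1

def apply_shifts(start, steps):
    """Rebuild a sequence from a start note and a list of shifts."""
    seq = [start]
    for s in steps:
        seq.append(wrap(seq[-1] + s))
    return seq

def inverse(seq):
    """Swap the polarity of every shift, wrapping around the clock face."""
    return apply_shifts(seq[0], [-s for s in shifts(seq)])

base = [7, 3, 4, 1, 2, 5, 6, 8]
reverse = base[::-1]          # [8, 6, 5, 2, 1, 4, 3, 7]
inv = inverse(base)           # [7, 3, 2, 5, 4, 1, 8, 6]
inv_rev = inverse(reverse)    # [8, 2, 3, 6, 7, 4, 5, 1]
```

Running this reproduces exactly the four sequences tabulated above, including the 7 + 4 = 3 wrap in the worked example.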
These four transformations are created for every base sequence, and applied according to the structure arising from the HTML table data – every row tag (‘tr’) reverses the current sequence, and every cell tag (‘td’) inverts it. Thus the structure of the web page affects the structure of the music it generates.
In an effort to make the note sequences more ‘tuneful’, scale structures are applied to each sequence before they are streamed to the sound channels – a process which makes the output inherently more ‘musical’, as the sets of notes are classically recognised as sounding good together, and have been used in musical composition for centuries. The scale chosen is based on the time of day that the music is generated, cycling through the day (midnight – 6am: major, 6am – noon: natural minor, etc…)
There are four scales programmed into the webPlayer, referring to the pitch shift from the base note:
gScaleData = [[0, 2, 4, 5, 7, 9, 11, 12],   -- major
              [0, 2, 3, 5, 7, 8, 10, 12],   -- natural minor
              [0, 2, 3, 5, 7, 8, 11, 12],   -- harmonic minor
              [0, 2, 3, 5, 7, 9, 11, 12]]   -- melodic minor
This base note is adjusted during the day, shifting a semitone every hour through a twelve-hour cycle – from 6am to 6pm, and from 6pm to 6am.
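Putting the scale table and the hourly drift together, the pitch of a note can be sketched as below. Two details are my readings of the text rather than confirmed behaviour: that the four scales cycle in six-hour blocks (only the first two blocks are stated), and that the base note drifts one semitone per hour from 6am/6pm.

```python
# the four scales from the article, as semitone offsets from the base note
gScaleData = [
    [0, 2, 4, 5, 7, 9, 11, 12],   # major
    [0, 2, 3, 5, 7, 8, 10, 12],   # natural minor
    [0, 2, 3, 5, 7, 8, 11, 12],   # harmonic minor
    [0, 2, 3, 5, 7, 9, 11, 12],   # melodic minor
]

def pitch_for(note, hour):
    """Map a sequence number (1-8) to a semitone offset from the sample's pitch.
    Assumes six-hour scale blocks and an hourly semitone drift from 6am/6pm --
    both hedged readings of the text, not confirmed behaviour."""
    scale = gScaleData[hour // 6]     # midnight-6am: major, 6am-noon: nat minor, ...
    base_shift = (hour - 6) % 12      # hourly semitone drift of the base note
    return base_shift + scale[note - 1]
```

So the same base sequence lands on different pitches depending on when it is played – the time of day recolours the whole piece.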
Messiaen’s ‘Quatuor pour la fin du temps’ was one of the first compositions to apply the rigid structures of serial sequencing to the duration of each note as well as its pitch. First performed in a prisoner-of-war camp in Görlitz, Germany, with a violinist, clarinettist and cellist repeatedly playing a series of 29 chords against a series of 17 durations, it was a landmark piece in many ways.
The webPlayer calculates a sequence of durations based on the current time, but this feature had to be withdrawn: the limitations of Director 8’s pitch-shifting meant the engine was unable to apply the sequences.
It seems that Messiaen was also breaking new ground with his 1949 work, ‘Mode de valeurs et d’intensités’, by applying structures to the dynamics of the piece. The webPlayer emulates this process, calculating a series of volume levels based on the minute value of the current time, repeating the sequence in five-minute chunks. Thus, the dynamics of the output change every single minute.
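The derivation of the volume series is not detailed, so this is a purely hypothetical sketch of the shape of the idea: five levels derived from the current five-minute chunk, one level per minute. The mixing constants are invented for illustration and are not the webPlayer’s.

```python
def volume_series(minute):
    """Hypothetical sketch of the volume sequencing: five levels per
    five-minute chunk, one per minute. Constants invented for illustration."""
    chunk = minute // 5                                 # which chunk of the hour
    return [(chunk * 37 + i * 53) % 128 for i in range(5)]
```

The point is the structure, not the numbers: within a chunk the dynamics step through a fixed series, and every five minutes the whole series changes.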
Taking this process one stage further with the computer’s capabilities, the webPlayer also calculates a series for the ‘stereo positioning’ of each note. Calculating a fraction based on the current date, it uses the result to influence a series of numbers from 80 down towards 0, then reverses that sequence and appends it to the original. The polarity of each number in the combined sequence is alternated, and these values are applied to the stereo positioning of each note, where -80 is positioned far to the left, and 80 is positioned far to the right:
example sequence: [-80, 40, -26, 20, -16, 13, -13, 16, -20, 26, -40, 80]
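The example sequence above happens to fit a simple construction – 80 divided by 1, 2, 3… and rounded down – so the following sketch reproduces it exactly. The role of the date-derived fraction is an assumption; presumably it varies the width or number of steps.

```python
def stereo_series(width=80, steps=6):
    """Reproduce the example panning series. Assumes the inward run is
    width // k for k = 1..steps (the published example fits this); how the
    date-derived fraction feeds in is an assumption."""
    inward = [width // k for k in range(1, steps + 1)]   # 80, 40, 26, 20, 16, 13
    mirrored = inward + inward[::-1]                     # head back out again
    # alternate polarity, starting hard left (negative)
    return [v if i % 2 else -v for i, v in enumerate(mirrored)]
```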
By reversing the sequence, the effect achieved is one of constantly centralising and dispersing focus – effectively moving the location of the instruments as they are being played, and altering the way in which they harmonise together.
Once all the sequences have been calculated, it is a simple process to stream them: dispersing them between two sound channels, alternating from one to the other, until all the notes from all the sequences are exhausted. Upon completion, both sound channels are activated simultaneously, and playback begins…
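As a rough sketch of this dispersal step (the round-robin policy is my reading of ‘alternating from one to the other’):

```python
def disperse(sequences):
    """Alternate notes between two sound channels until every
    sequence is exhausted -- a sketch of the streaming step."""
    channels = ([], [])
    i = 0
    for seq in sequences:
        for note in seq:
            channels[i % 2].append(note)   # even notes left, odd notes right
            i += 1
    return channels
```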
During playback, a third channel is also active, but its notes are calculated in real-time, based on the output of sound channel one. By playing a note one full octave below the pitch of the next note available in the first sound channel, the webPlayer’s output has more depth, is more flowing dynamically, and offers greater opportunity for harmonisation between the notes being played. Effectively, the sound is ‘warmer’ to listen to.
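Since Director’s rate-shifting realises pitch changes by altering playback speed, the octave-below note in channel three amounts to halving the playback rate. As a general sketch of that relationship (standard equal-temperament maths, not code from the piece):

```python
def rate_for(semitones):
    """Playback-rate factor for a pitch shift of the given number of
    semitones: each semitone multiplies the rate by 2^(1/12)."""
    return 2.0 ** (semitones / 12.0)
```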
Despite the cyclic nature of the webPlayer’s compositional methods, the music generated can be enchanting to listen to. The number of processes imposed on the raw HTML input results in a cascading output that belies its repetitive formulation.
The delicacy of the sound sample used – the soft attack, slight modulation and gentle fade – adds to the ‘inoffensive’ nature of the music produced. There are no sharp edges, no sudden surprises, in fact there is very little to draw attention to the music itself – a perfect quality for background, atmospheric music. Due to the constantly changing texture of the webPlayer’s output, it is quite easy to let it run for an hour or so without growing tired – however long you have listened already, you never know what’s going to come out next.
This is the first generative music piece I have created, and I am eager to pursue the processes further – applying different structures and transformations, adjusting the parameters, changing the sources of input – the scope for development is vast.
The Great Generative Debate continues to rage with regard to the ownership and authorship of generative music output. While I have utilised the transformation processes of classic serialist composers, I have taken full responsibility for the way in which they are applied, and to what. But as author of the program, can I really consider myself the composer of any output it generates, or does that honour fall to the person who chooses the URL the webPlayer points at, and the time that they do it? And if the output was recorded and sold, who owns the copyright? In fact, is there any copyright to be had?
While I have emulated the transformation processes used by many serialist composers in the generation of this music, I feel that I have added a step to their process of generative composition – taking it one stage further. In his book, ‘Digital Mantras’, Steven Holtzman writes:
“Once Boulez had defined the rules, the ‘mechanism’ could run its own course. The process of composition was effectively automated. Define the series that control each musical element. Define how these series and controls interact. And then, let the process follow its course.” (p91-92)
Effectively, Boulez and the other serialist composers were into generative music, but they retained their ‘composer’ title by creating the formulae and dictating the initial tonal sequences for their pieces. By allowing a randomly-selected HTML source to dictate these tonal sequences, I am removing my influence from this part of the equation altogether, resulting in an output which is truly generative in nature.