Winter Leaves

Winter Leaves is an investigation into the relationships between harmony and timbre.

The basic technique is to build sounds whose spectra have a harmonic value. One of the ways to achieve this is to create sounds whose spectral components have chordal relationships.
In fact, Winter Leaves is based on 3 sounds, each made up of 5 sinusoids whose frequencies are separated by a fixed musical interval. All sounds are built on the same root (47.5 Hz, about G). The other 4 components are related by a ratio of 2 (octave) in the first sound, of 2.24 (roughly a major 9th) in the second, and of 2.8856 (a non-tempered interval corresponding to an augmented 11th) in the third.
The 3 basic sounds, therefore, have the following sinusoidal components:

  • Ratio 2 -> 47.5, 95, 190, 380, 760 Hz (stacked octaves)
  • Ratio 2.24 -> 47.5, 106.4, 238.3, 533.9, 1195.9 Hz (stacked 9ths)
  • Ratio 2.8856 -> 47.5, 137.1, 395.5, 1141.3, 3293.4 Hz (stacked augmented 11ths)

Of course, spectra like these have both a timbral and a harmonic value: the first sound is a perfect consonance, the second a tempered dissonance, and the third a non-tempered dissonance.
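As an illustration, the three spectra can be computed and mixed directly from the root and the ratio. This is a minimal sketch of the idea, not the composer's actual synthesis code; function names are mine.

```python
import math

ROOT = 47.5  # root frequency in Hz (about G), as given in the text

def spectrum(ratio, n_partials=5, root=ROOT):
    """Frequencies of n_partials sinusoids separated by a fixed ratio."""
    return [root * ratio ** i for i in range(n_partials)]

def synthesize(ratio, duration=1.0, sr=44100):
    """Mix the five equal-amplitude sinusoids of one spectrum into a mono signal."""
    freqs = spectrum(ratio)
    n = int(duration * sr)
    return [sum(math.sin(2 * math.pi * f * t / sr) for f in freqs) / len(freqs)
            for t in range(n)]

octaves = spectrum(2)        # 47.5, 95, 190, 380, 760
ninths  = spectrum(2.24)     # 47.5, 106.4, 238.3, ...
elevens = spectrum(2.8856)   # 47.5, 137.1, 395.5, ...
```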


Wires

Sonogram of Wires.

The basic idea of Wires is to work with large sound masses that evolve and change over time like a complex flux. Each mass is composed of a great number of single sounds (from 10 to 300, on average 50 to 100) that can seldom be perceived as “notes”.
The way the mass “sounds” thus depends on parameters such as frequency range, density, durations, attack times and the spectra of the component sounds, in a sort of granular micro-polyphony.
I composed Wires by defining the evolution of the masses at a high level by means of tendency masks, using the first version of my computer-assisted composition system, ALGEN, to generate the low-level instructions controlling the single sounds, which were synthesized with a simple FM algorithm.
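A tendency mask bounds a parameter between two time-varying curves and scatters random events inside that band. The sketch below is hypothetical — ALGEN's actual mechanics are not documented here — but it illustrates the principle of mass generation by mask:

```python
import random

random.seed(1)  # reproducible example

def interp(t, points):
    """Piecewise-linear interpolation of (time, value) breakpoints."""
    for (t0, v0), (t1, v1) in zip(points, points[1:]):
        if t0 <= t <= t1:
            return v0 + (v1 - v0) * (t - t0) / (t1 - t0)
    return points[-1][1]

def mass(n_sounds, total_dur, lo_mask, hi_mask):
    """Scatter n_sounds events; each frequency falls between the two
    mask curves evaluated at the event's onset time."""
    events = []
    for _ in range(n_sounds):
        onset = random.uniform(0, total_dur)
        lo, hi = interp(onset, lo_mask), interp(onset, hi_mask)
        events.append((onset, random.uniform(lo, hi)))
    return sorted(events)

# a mass of 50 sounds whose frequency band narrows from [100, 2000] Hz
# to [400, 800] Hz over 30 seconds
lo = [(0, 100), (30, 400)]
hi = [(0, 2000), (30, 800)]
cloud = mass(50, 30, lo, hi)
```

Each `(onset, frequency)` pair would then be handed to a synthesis instrument; converging masks produce the perceptual "focusing" of a mass onto a narrow band.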
Wires was realized at the “Centro di Sonologia Computazionale” (CSC – University of Padova) using the MUSIC360 software for digital sound synthesis.

V’ger

Click here to listen.
Note: This is a compressed stereo reduction of the original quad version – MP3 256 kbps – duration 7’30”


This piece was composed for the International Year of Astronomy (2009) and for the “Music and Astronomy” conference organized by the Bonporti Conservatory of Trento and Riva del Garda.

In the first Star Trek movie, V’ger (pronounced “Vyger”) is the name of a fictional intelligent probe, Voyager 6, launched from Earth and recovered while wandering in outer space. The real Voyager 1 and 2 probes were launched in the late ’70s and are destined to get lost in interstellar space, after flying by the outer planets, from which they sent us images, but also sounds.


Nabaz’mob

In this post, the story of Nabaztag is taken from Wikipedia, with some notes of my own.

The word Nabaztag (“նապաստակ”, which means “rabbit” in Armenian) is the name of the Wi-Fi rabbit conceived by Rafi Haladjian and Olivier Mével and produced in 2005 by the French company Violet.

On sale from June 2005, the object had reached 35,000 units in France alone by the end of October 2006. At the end of 2006 a more advanced model, the Nabaztag:tag, was introduced: it supports mp3 streaming over the internet, has a microphone for voice commands and an RFID reader for personalized command tags. This model also has PULL technology, meaning it can query the server on its own initiative. As of September 2007, there were more than 180,000 Nabaztags around the world.

On October 20, 2009, Violet, struggling due to poor management, was bought by the well-known software publisher Mindscape, which put on the market an even more advanced model called Karotz, with a webcam and greater memory capacity. Soon, however, the latter too entered a crisis. On July 29, 2011 Mindscape announced the shutdown of Nabaztag’s management servers, creating 180,000 orphans in one go, but it made public the code for managing the multimedia “bunnies”, making it possible for user communities to create new servers. In practice, the various user communities favored alternative solutions, based on the open-source OpenJabNab, Nabizdead and OpenNag projects, which were simpler to implement than the original Violet/Mindscape server (called “burrow”, referring to wild rabbit burrows) but lack support for the older first-generation Nabaztag units. The user communities born immediately after the closure of the “official” server support only the Nabaztag:tag.

Later, Mindscape was acquired by Aldebaran Robotics, a company specializing in toy and hobbyist robots, which sold off Karotz’s stock without developing the product, despite the fact that it had built-in, clearly visible hooks for accessories and extensions. Finally, with a shocking announcement from its CEO, it scheduled the shutdown of the Karotz servers for February 18, 2015, marking the end of a project whose existence now remains entrusted to amateur servers.

Since the creation of Nabaztag, Antoine Schmitt has been its behavioral designer and Jean-Jacques Birgé its sound designer. Together they also composed the opera Nabaz’mob for 100 communicating rabbits, which won the Ars Electronica Award of Distinction in Digital Musics 2009 and an excerpt of which can be seen in this video.

The video on this page is a shorter excerpt, but the audio is better.

Arras

Barry Truax – Arras (1980) – for four computer-synthesized soundtracks

Author’s notes:

Arras refers metaphorically to the heavy wall hanging or tapestry originally produced in the French town of the same name. The threads running through the material form both a background and, when coloured, a foreground pattern as well, even when they are the same thread. In the piece there is a constant ambiguity between whether a component sound is heard as part of the background texture, or whether it is heard as a foreground event because, in fact, the frequencies are the same. The listener can easily be drawn into the internal complexity of the constantly shifting pattern, but at the same time can sense the overall flow of the entire structure.

Arras is a continuously evolving texture based on a fusion of harmonic and inharmonic spectra. The large-scale structure of the piece follows a pattern of harmonic change going from closely spaced odd harmonics through to widely spaced harmonics towards the end. Each harmonic timbre is paired with an inharmonic one with which it shares specific harmonics, and with which it overlaps after each twenty-five second interval. This harmonic/inharmonic structure forms a background against which specific events are poised: shorter events, specific high pitches which otherwise would be imbedded in the overall timbre, and some percussive events. However, all frequencies found in the foreground events are the same as those in the background texture; hence the constant ambiguity between them.

Arras received an honourable mention in the computer music category of the 1980 International Competition of Electroacoustic Music sponsored by the G.M.E.B. in Bourges, France.

Arras is available on the Cambridge Street Records CD Pacific Rim, and the RCI Anthology of Canadian Electroacoustic Music.

More technical notes here.
Listen to Arras excerpt.

Androgyny

Barry Truax – Androgyny (1978) – a spatial environment with four computer-synthesized soundtracks

Author’s notes:

Androgyny explores the theme of its title in the abstract world of pure sound. The piece, however, is not programmatic; instead, the dramatic form of the piece has been derived from the nature of the sound material itself. In this case, the sound construction is based on ideas about an acoustic polarity, namely “harmonic” and “inharmonic,” or alternatively, “consonance” and “dissonance.” These concepts are not opposed, but instead, are related in ways that show that a continuum exists between them, such as in the middle of the piece when harmonic timbres slowly “pull apart” and become increasingly dissonant at the peak intensity of the work. At that point a deep harmonic 60 Hz drone enters, similar to the opening section, but now reinforced an octave lower, and leads the piece through to a peaceful conclusion. High above the drone are heard inharmonic bell-like timbres which are tuned to the same fundamental pitch as the harmonic drone, a technique used throughout the work with deeper bells.

The work is designed to sound different spatially when heard on headphones. Through the use of small binaural time delays, instead of intensity differences, the sounds are localized outside the head when heard through headphones. Various spatial movements can also be detected, such as the circular movement of the drones in the last section of the piece.

Although not intended to be programmatic, the work still has environmental images associated with it, namely those suggested by the I Ching hexagram number 62, Preponderance of the Small, with a changing line to number 31, Wooing. The reading describes a mountain, a masculine image, hollowed out at the top to enclose a lake, a feminine image. The two exist as a unity. Thunder is heard close by, clouds race past without giving rain, and a bird soars high but returns to earth.

Androgyny is available on the Melbourne album Androgyne and the Cambridge Street Records CD, SFU 40.

Production Note:

The work was realized with the composer’s POD6 and POD7 programs for computer sound synthesis and composition at Simon Fraser University. All the component sounds are examples of frequency modulation (FM) synthesis, generated in binaural stereo, with time differences between channels. However, considerable analog mixing in the Sonic Research Studio at Simon Fraser University produced the resulting complex work.

Listen to Androgyny

David Wessel

To remember David Wessel, who passed away on October 13 at the age of 73, we post a few testimonies.

First, his 1977 piece, Anthony.

In reality, Wessel was always more a researcher than a composer, and indeed his musical output is rare. Anthony is a typical piece built from cross-fading bands of sound, and it was one of the first pieces realized with the real-time synthesis machines built by Peppino Di Giugno at IRCAM.

The machine used here is the 4A of 1975, one of the first models and the first to go beyond the prototype stage. It was a processor capable of generating in real time up to 256 digital wavetable oscillators (i.e. with a stored waveform, and hence a harmonic content defined by the user), each with its own amplitude envelope. Even though the oscillators could not be connected to one another, it was a remarkable step forward for those years because, with the analog systems of the time, it was already difficult to reach 10 oscillators (for more details see the 4A and Giuseppe Di Giugno entries on Alex Di Nunzio’s blog).
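The wavetable principle the 4A implemented in hardware can be sketched in software: a stored single-cycle waveform is read by a phase accumulator, giving an oscillator with a user-defined harmonic content. This is an illustrative sketch of the technique, not the 4A's actual design.

```python
import math

def make_table(harmonics, size=512):
    """Build one wavetable period from (harmonic number, amplitude) pairs."""
    return [sum(a * math.sin(2 * math.pi * h * i / size) for h, a in harmonics)
            for i in range(size)]

def oscillator(table, freq, duration, sr=44100):
    """Read the table with a phase accumulator, truncating phase to an index."""
    size = len(table)
    phase, step = 0.0, freq * size / sr
    out = []
    for _ in range(int(duration * sr)):
        out.append(table[int(phase) % size])
        phase += step
    return out

# a user-defined spectrum: fundamental plus two weaker odd harmonics
table = make_table([(1, 1.0), (3, 0.3), (5, 0.1)])
tone = oscillator(table, 220.0, 0.05)
```

Because the spectrum is frozen into the table, the timbre cannot evolve over time, which is exactly the limitation discussed below.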

While from a quantitative point of view the 4A was a great leap forward, the sound quality was limited by the fact that synthesis methods with a time-varying spectrum (e.g. with filters or frequency modulation) could not be realized, except by resorting to additive synthesis. In Anthony, Wessel works around this limit by avoiding any melodic characterization of the sound material, relying instead on large clusters in slow harmonic mutation.

Warning: the piece starts very quietly. Moreover, with small computer speakers you will hear only half of it.

A second video contribution concerns David Wessel’s activity as a researcher mainly interested in human–machine interaction, so much so that in 1985 he founded at IRCAM a research department dedicated to the development of interactive music software.

Here the SLAB is shown, a control device made up of a series of touch pads sensitive to position and pressure. Each pad transmits to the computer the x-y position of the finger pressing it and the pressure exerted. The data stream is Ethernet, not MIDI, so the measurements are accurate and the response is fast (we will carry this story of going beyond MIDI with us for life; to quote Philip Dick, the grasshopper lies heavy). More technical data on the SLAB here. For the impatient, the performance in the video starts at 2:40.

Max Mathews & John Chowning

In this long conversation with Curtis Roads, composer and musicologist, Max Mathews & John Chowning retrace various stages in the history of computer music.

Starting in the 1950s, Max Mathews was the first to create a series of programs that allowed a computer to produce and control sound: the MUSIC I, II, III … series, up to MUSIC V, widely used, among others, by Jean-Claude Risset to synthesize some famous pieces of the ’70s, but above all to do research and reveal the possibilities offered by digital audio.

John Chowning, percussionist, composer and researcher, is instead known as the inventor of frequency modulation (FM) sound synthesis, a technique developed in the ’70s and later licensed to Yamaha, which applied it in a long series of commercial synthesizers including the DX7 (1983), still the best-selling synth in history.
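The core of Chowning's technique fits in one formula: a sine carrier whose phase is modulated by another sine, y(t) = sin(2π·fc·t + I·sin(2π·fm·t)), where the index I controls spectral richness. A minimal sketch (parameter values chosen for illustration):

```python
import math

def fm_tone(fc, fm, index, duration, sr=44100):
    """Simple Chowning-style FM: a sine carrier at fc whose phase is
    modulated by a sine at fm; the index scales sideband strength."""
    n = int(duration * sr)
    return [math.sin(2 * math.pi * fc * t / sr
                     + index * math.sin(2 * math.pi * fm * t / sr))
            for t in range(n)]

# harmonic spectrum: fc:fm ratio of 1:1; index 5 yields many audible sidebands
tone = fm_tone(440.0, 440.0, 5.0, 0.1)
```

With index 0 the formula reduces to a plain sine; raising the index adds sidebands at fc ± k·fm, which is why two oscillators can produce spectra that additive synthesis would need dozens for.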

The video also includes a performance by pianist Chryssie Nanou of J. C. Risset’s Duet for One Pianist, in which the pianist plays a MIDI-equipped Yamaha grand piano and duets with a computer that “listens” to the performance and in turn sends MIDI commands to the same instrument, whose keys begin to move by themselves in a four-hands duet with a ghost.

Riverrun

Barry Truax – Riverrun (1986), acousmatic synthesized music in 4 channels.

Truax recently produced a revised version of the piece, spatialized over 8 channels. Here we can listen only to the stereo mix available on CD from the author’s website.

Riverrun is realized almost entirely with granular synthesis, a technique in which the sound, even if it seems continuous, is made up of small sonic grains whose duration can range from a hundredth to about a tenth of a second. When the grains are long (> 50 ms) they are easier to distinguish, so the granular nature of the sound is perceived. With durations between 10 and 30 ms (1 to 3 hundredths of a second) and partially overlapping grains, on the other hand, the granularity is not perceptible and continuous sounds are obtained.

The overall sonority depends both on the sound of the individual grains and on their distribution in frequency. Manipulating these two parameters, in addition to duration and density, yields a vast, evolving sound spectrum.
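The overlap-add principle described above can be sketched as follows. This is a schematic illustration with assumed parameter values, not Truax's real-time implementation:

```python
import math
import random

random.seed(7)  # reproducible example

def grain(freq, dur_ms, sr=44100):
    """One sine grain shaped by a Hann envelope (smooth attack and decay)."""
    n = int(sr * dur_ms / 1000)
    return [math.sin(2 * math.pi * freq * i / sr)
            * 0.5 * (1 - math.cos(2 * math.pi * i / (n - 1)))
            for i in range(n)]

def granular(total_ms, density_hz, freq_range, dur_range_ms, sr=44100):
    """Overlap-add grains at random onsets; with 10-30 ms grains the
    result is heard as a continuous texture rather than separate events."""
    out = [0.0] * int(sr * total_ms / 1000)
    n_grains = int(density_hz * total_ms / 1000)
    for _ in range(n_grains):
        g = grain(random.uniform(*freq_range), random.uniform(*dur_range_ms), sr)
        start = random.randint(0, len(out) - len(g))
        for i, s in enumerate(g):
            out[start + i] += s
    return out

texture = granular(200, 100, (300, 900), (10, 30))  # 0.2 s, 100 grains/s
```

Raising the density thickens the texture, while the frequency range and grain spectrum shape its color, which are exactly the parameters the text describes.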

The idea of granular synthesis goes back to Xenakis and was implemented on a computer by Curtis Roads in 1978. Here are some technical notes by Barry Truax.

Here are the author’s notes on Riverrun.

RIP Max Mathews

A terrible day, April 21. Max Mathews has passed away at the age of 84. Not a composer, strictly speaking, but practically the inventor of computer music, having conceived and written, in 1957, the first software that allowed a computer to emit sounds.

In his own words:

Computer performance of music was born in 1957 when an IBM 704 in NYC played a 17 second composition on the Music I program which I wrote. The timbres and notes were not inspiring, but the technical breakthrough is still reverberating. Music I led me to Music II through V. A host of others wrote Music 10, Music 360, Music 15, Csound and Cmix. Many exciting pieces are now performed digitally. The IBM 704 and its siblings were strictly studio machines – they were far too slow to synthesize music in real-time. Chowning’s FM algorithms and the advent of fast, inexpensive, digital chips made real-time possible, and equally important, made it affordable.

Starting with the Groove program in 1970, my interests have focused on live performance and what a computer can do to aid a performer. I made a controller, the radio-baton, plus a program, the conductor program, to provide new ways for interpreting and performing traditional scores. In addition to contemporary composers, these proved attractive to soloists as a way of playing orchestral accompaniments. Singers often prefer to play their own accompaniments. Recently I have added improvisational options which make it easy to write compositional algorithms. These can involve precomposed sequences, random functions, and live performance gestures. The algorithms are written in the C language. We have taught a course in this area to Stanford undergraduates for two years. To our happy surprise, the students liked learning and using C. Primarily I believe it gives them a feeling of complete power to command the computer to do anything it is capable of doing.

Here he is in a 2007 video, listening to Daisy Bell (A Bicycle Built for Two), performed and sung in 1961 by an IBM 7094 computer thanks to software by John Kelly, Carol Lochbaum and Mathews himself.