Winter Leaves

Winter Leaves is an investigation into the relationships between harmony and timbre.

The basic technique is to build sounds whose spectra have a harmonic value. One of the ways to achieve this is to create sounds whose spectral components have chordal relationships.
In fact, Winter Leaves is based on 3 sounds, each made up of 5 sinusoids whose frequencies are separated by a fixed musical interval. All three sounds are built on the same root (47.5 Hz, about G). The other 4 components are obtained with a ratio of 2 (8ve) in the first sound, 2.24 (major 9th) in the second, and 2.8856 (a non-tempered interval corresponding to an augmented 11th) in the third.
The 3 basic sounds, therefore, have the following sinusoidal components:

  • Ratio 2 -> 47.5, 95, 190, 380, 760 (stacked octaves)
  • Ratio 2.24 -> 47.5, 106.4, 238.3, 533.9, 1195.9 (stacked major 9ths)
  • Ratio 2.8856 -> 47.5, 137.1, 395.5, 1141.3, 3293.4 (stacked augmented 11ths).

Of course, spectra like these have both a tonal and a harmonic value: in the first spectrum the sound is a perfect consonance, in the second a tempered dissonance, and in the third a non-tempered sonority.
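For readers who want to verify the numbers, a few lines of Python are enough to regenerate the three spectra from the root and the chosen ratio (the 47.5 Hz root and the three ratios come from the text above; the rest is plain arithmetic):

```python
# Reproduce the spectra of the three basic sounds of Winter Leaves:
# five sinusoids per sound, each component a fixed ratio above the previous one.

ROOT = 47.5          # common root, about G (from the text)
RATIOS = {
    "stacked octaves (ratio 2)": 2.0,
    "stacked major 9ths (ratio 2.24)": 2.24,
    "stacked augmented 11ths (ratio 2.8856)": 2.8856,
}

for name, ratio in RATIOS.items():
    partials = [round(ROOT * ratio**k, 1) for k in range(5)]
    print(f"{name}: {partials}")

# Expected output (rounded to 0.1 Hz):
# [47.5, 95.0, 190.0, 380.0, 760.0]
# [47.5, 106.4, 238.3, 533.9, 1195.9]
# [47.5, 137.1, 395.5, 1141.3, 3293.4]
```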


Wires

wires sonogram

The basic idea of Wires is to work with large sound masses that evolve and change over time like a complex flux. Each mass is composed of a great number of single sounds (from 10 to 300, typically 50 to 100) that can seldom be perceived as “notes”.
The way the mass “sounds” therefore depends on parameters such as frequency range, density, durations, attack times and the spectra of the component sounds, in a sort of granular micro-polyphony.
I composed Wires by defining the evolution of the masses at a high level by means of tendency masks, using the first version of my computer-assisted composition system, called ALGEN, to generate the low-level instructions controlling the single sounds, which were synthesized by a simple FM algorithm.
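To give a concrete idea of how a tendency mask drives such a mass (this is only an illustrative sketch, not the actual ALGEN code, and all the boundary values are invented), one can interpolate a lower and an upper limit over time and draw each event parameter at random between them:

```python
import random

def tendency_mask(t, lo_start, lo_end, hi_start, hi_end):
    """Linear tendency mask: at normalized time t (0..1) return a value
    drawn uniformly between the interpolated lower and upper boundaries."""
    lo = lo_start + t * (lo_end - lo_start)
    hi = hi_start + t * (hi_end - hi_start)
    return random.uniform(lo, hi)

# Hypothetical mass of 100 sounds over 60 seconds: the frequency band
# narrows from [100, 2000] Hz to [400, 600] Hz while durations shorten.
events = []
for i in range(100):
    t = i / 99                           # normalized position within the mass
    onset = t * 60.0
    freq = tendency_mask(t, 100, 400, 2000, 600)
    dur = tendency_mask(t, 2.0, 0.2, 6.0, 0.8)
    events.append((round(onset, 2), round(freq, 1), round(dur, 2)))

for e in events[:5]:
    print(e)
```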
Wires was realized at the “Centro di Sonologia Computazionale” (CSC – University of Padova) using the MUSIC360 software for digital sound synthesis.

V’ger

Click here to listen.
Note: this is a compressed stereo reduction of the original quad version – MP3 256 kbps – duration 7’30”


This piece was composed for the International Year of Astronomy (2009) and for the “Music and Astronomy” conference organized by the Bonporti Conservatory of Trento and Riva del Garda.

In the first Star Trek movie, V’ger (pronounced “Vyger”) is the name taken by a fictional intelligent probe, Voyager 6, launched from Earth and recovered by an alien machine civilization while wandering in outer space. The real Voyager 1 and 2 probes were launched in the late 1970s and destined to be lost in interstellar space after flying by the outer planets, from which they sent us images, but also sounds.


Melody Generator

Not much remains of this Melody Generator. What follows is the article by its author, Dirk-Jan Povel, published in 2010. There is no longer any trace of the software on the internet.

melody generator


Melody Generator is a tool for the construction of (tonal) melodies. Melody Generator runs on Macintosh and Windows platforms and can be downloaded freely.

A melody is conceived as consisting of a number of Parts. At present Melody Generator comprises three models of Melody generation: Attraction, Chord-based, Scale-based.

Each Part is generated in a number of phases: Construction, Editing, Re-arranging, and Transforming:

Construction is performed in the Melody Construction pane, shown above. Each aspect of a Part can be generated repeatedly and the results inspected. A melody can also be based upon a ‘Form’. By pushing the ‘Done’ button the construction of a Part is terminated.

Editing: notes can be added by right-clicking on an empty slot; notes can be removed or modified by right-clicking on a note.

A melody can be elaborated and transposed. Elaboration may again be removed.

Arrange: After one or more Parts have been generated, Parts can be removed, moved and duplicated.

Transform: after a Part has been finished, you can apply one or more transformations. These are most useful for making variations of a Part.

Each step in the construction of a melody is displayed in the Melody pane and can be played with the parameter settings in the Play parameters pane.

Melodies can be stored temporarily in the Melody Store pane. Melodies can be saved to disk in MIDI format and mg2 format.

Software by Dirk-Jan Povel.

David Wessel

To remember David Wessel, who passed away on 13 October at the age of 73, here are a few testimonies.

First of all, his 1977 piece, Anthony.

In reality Wessel was always more a researcher than a composer, and indeed his musical output is rare. Anthony is a typical piece built from cross-fading layers of sound, and it was one of the first works realized with the real-time synthesis machines built by Peppino Di Giugno at IRCAM.

The machine used here is the 4A of 1975, one of the first models and the first to go beyond the prototype stage. It was a processor capable of generating in real time up to 256 digital wavetable oscillators (i.e. with a stored waveform, and therefore with user-defined harmonic content), each with its own amplitude envelope. Even though the oscillators could not be connected to one another, it was a remarkable step forward for those years: with the analog systems of the time it was already difficult to reach 10 oscillators (for more details, see the 4A and Giuseppe Di Giugno entries on Alex Di Nunzio's blog).
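To give an idea of what “wavetable oscillator with user-defined harmonic content and amplitude envelope” means in practice, here is a minimal Python sketch of one such voice; it is a generic illustration, not an emulation of the 4A, and the waveform, frequency and envelope are arbitrary choices:

```python
import numpy as np

SR = 44100
TABLE_SIZE = 4096

def make_wavetable(harmonic_amps):
    """Build one period of a waveform by summing harmonics with the given amplitudes
    (the 'user-defined harmonic content' of a stored-waveform oscillator)."""
    phase = np.linspace(0, 2 * np.pi, TABLE_SIZE, endpoint=False)
    table = sum(a * np.sin((k + 1) * phase) for k, a in enumerate(harmonic_amps))
    return table / np.max(np.abs(table))

def wavetable_osc(table, freq, dur, envelope):
    """Read the table at the given frequency and shape the output with an amplitude envelope."""
    n = int(SR * dur)
    idx = (np.arange(n) * freq * TABLE_SIZE / SR) % TABLE_SIZE
    signal = table[idx.astype(int)]
    return signal * envelope(np.linspace(0, 1, n))

# Example voice: first four harmonics, simple attack/decay envelope.
table = make_wavetable([1.0, 0.5, 0.25, 0.125])
env = lambda t: np.minimum(t / 0.1, (1 - t) / 0.9).clip(0, 1)   # 10% attack, long decay
voice = wavetable_osc(table, 220.0, 2.0, env)
```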

If from a quantitative point of view the 4A was a great step forward, the sound quality was limited by the fact that synthesis methods with a time-varying spectrum (e.g. with filters or frequency modulation) could not be implemented, except by resorting to additive synthesis. In Anthony, Wessel works around this limit by avoiding any melodic characterization of the sound material, relying instead on large clusters in slow harmonic mutation.

Warning: the piece starts very quietly. Moreover, with small computer speakers you will only hear half of it.

A second video contribution concerns David Wessel's activity as a researcher mainly interested in human-machine interaction, so much so that in 1985 he founded at IRCAM a research department dedicated to the development of interactive music software.

Here the SLAB is shown, a control device made up of a series of touch pads sensitive to position and pressure. Each pad sends the computer information about the xy position of the finger pressing it and the pressure exerted. The data flow is over Ethernet, not MIDI, so the measurements are accurate and the response is fast (this business of going beyond MIDI is something we will carry with us for life; to quote Philip Dick, the grasshopper lies heavy). More technical details on the SLAB here. For the impatient: in the video, the performance starts at 2:40.

Orchidée

This interesting system, called Orchidée, developed at IRCAM by Grégoire Carpentier and Damien Tardieu under the supervision of the composer Yan Maresz, is able to provide one or more orchestrations of a given sound.

In practice, this means that a composer can arrive with a sound and have the system analyze it; the system then provides various combinations of orchestral sounds that approximate the given sonority.

An example, taken from those provided by IRCAM, says more than many words. Here you can listen:

This is Computer Aided Orchestration, and it is a further example of how the computer can now assist the composer in many phases of the work.

The system is based on a large database of orchestral sounds that have been analyzed and catalogued according to a set of descriptors, both perceptual (e.g. brightness, roughness, presence, color, …) and notational (instrument, pitch, dynamics, etc.).

A genetic algorithm then identifies several solutions, each optimized with respect to one or more descriptors. This means that there is no single ideal solution but several, each of which comes very close to one aspect of the sound under examination while turning out weaker in other respects. For example, one might obtain a set that approximates the color of the sound very well, but not its evolution. It is then up to the composer to choose the one that seems most functional to his own compositional context.
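A toy sketch can make the idea concrete. The Python fragment below is entirely invented (the descriptor names, the random database, the fitness and the simple hill-climbing loop are placeholders for Orchidée's real database and multi-objective genetic algorithm); it only shows why the search returns several solutions, each strong on a different descriptor:

```python
import random

# Hypothetical database: each orchestral sample described by a few perceptual descriptors.
DESCRIPTORS = ["brightness", "roughness", "presence"]
database = [
    {"name": f"sample_{i}", "desc": {d: random.random() for d in DESCRIPTORS}}
    for i in range(200)
]
target = {"brightness": 0.8, "roughness": 0.2, "presence": 0.6}  # sound to orchestrate

def combine(samples):
    """Very crude model of a mixture: average the descriptors of the chosen samples."""
    return {d: sum(s["desc"][d] for s in samples) / len(samples) for d in DESCRIPTORS}

def error(samples, descriptor):
    """Distance from the target on a single descriptor."""
    return abs(combine(samples)[descriptor] - target[descriptor])

def mutate(samples):
    """Replace one randomly chosen sample with another one from the database."""
    child = list(samples)
    child[random.randrange(len(child))] = random.choice(database)
    return child

# One independent search per descriptor: the result is not a single 'best'
# orchestration but a family of solutions, each strong on a different aspect.
solutions = {}
for d in DESCRIPTORS:
    best = random.sample(database, 4)          # a candidate 'orchestration' of 4 samples
    for _ in range(500):
        child = mutate(best)
        if error(child, d) < error(best, d):
            best = child
    solutions[d] = ([s["name"] for s in best], round(error(best, d), 4))

for d, (names, err) in solutions.items():
    print(d, names, "error:", err)
```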

The complete description of the Orchidée system can be found here. Various other examples can be heard here.

Bach Panther

A quick lesson on fugue and counterpoint. Among other things, it is an unusual way of filming the piano.

Fugue n°XXIV, extract from the 60 “Préludes & Fugues dans les Trente Tonalités” by Stéphane Delplace.
Based on Henry Mancini’s Pink Panther theme.
Performed by Stéphane Delplace.
Filmed and directed by Stéphan Aubé.
More: http://www.stephanedelplace.com

Piano Etudes

by Jason Freeman is another example of an open work on the web, in which the user creates a piece by following a path made of fragments. Go here.

Note that:

  • after choosing an etude, clicking “settings” lets you view the notes or the piano roll
  • clicking “sharing” lets you save your creation
  • you can put your own name as the author, next to Jason Freeman’s, by clicking “Anonymous”

The author’s notes:

Inspired by the tradition of open-form musical scores, I composed each of these four piano etudes as a collection of short musical fragments with links to connect them. In performance, the pianist must use those links to jump from fragment to fragment, creating her own unique version of the composition.

The pianist, though, should not have all the fun. So I also developed this web site, where you can create your own version of each etude, download it as an audio file or a printable score, and share it with others. In concert, pianists may make up their own version of each etude, or they may select a version created by a web visitor.

I wrote Piano Etudes for Jenny Lin; our collaboration was supported, in part, with a Special Award from the Yvar Mikhashoff Pianist/Composer Commissioning Project. Special thanks to Turbulence for hosting this web site and including it in their spotlight series and to the American Composers Forum’s Encore Program for supporting several live performances of this work. I developed the web site in collaboration with Akito Van Troyer.
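The open form described above is, in essence, a walk through a graph of fragments connected by links. A tiny illustrative sketch (the fragments and links below are invented, not Freeman's actual material):

```python
import random

# Hypothetical fragment graph: each fragment lists the fragments it may link to.
links = {
    "A": ["B", "C"],
    "B": ["C", "D"],
    "C": ["A", "D"],
    "D": [],          # a fragment with no outgoing links ends the piece
}

def realize(start="A", max_len=12):
    """Build one version of the etude by following links from fragment to fragment."""
    path = [start]
    while links[path[-1]] and len(path) < max_len:
        path.append(random.choice(links[path[-1]]))
    return path

print(" -> ".join(realize()))
```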

Lo Spazio tra le Pietre

Sonogram

I composed Lo Spazio tra le Pietre (The Space between the Stones) for the “Music and Architecture” event, organized by the Bonporti Conservatory of Trento and Riva del Garda on 18/10/2008.

From my point of view, the relationships between music and architecture do not end with the important question of designing spaces for music; they have deeper aspects that certainly affect composition and extend to its reception.
If, on the one hand, a building exists statically in space, it cannot be appreciated in its entirety without a span of time. Its global form is never evident all at once, not even from above. It takes shape in the memory of those who have approached it from many sides and watched its outline get lost while the construction details, and then the materials, gradually become more evident. Similarly, a piece of music exists statically in some form of notation and unfolds over time. Listening in time gradually reveals the internal structure, the constituent elements and the details.
Thus it is possible to think of a piece of music as a static object and of a building as a dynamic structure. But it is from the compositional point of view that the analogies become closer.

When I work with fully synthetic sounds that do not derive from any real sound, as in this piece, I generally follow a top-down approach. At first I imagine a shape, often in spatial terms, and then I build the materials and the methods with which to realize it.
Thus, as in architecture, composing a piece means for me building the basic materials from their minimal components and modulating the temporal and spatial void that separates them, trying to assemble them into a coherent environment.

The parameters I manipulate, as in instrumental music, are temporal and spatial, but, unlike instrumental music, they extend to the microscopic level. Thus the manipulation of time does not stop at the lengths of the notes, but goes down to the attack and decay times of the individual components of each sound (the harmonic or inharmonic partials). Similarly, on the spatial level my intervention is not limited to the interval, which determines the character of the harmonic relations, but goes as far as the distance between the partials that form a single sound, determining, to a certain extent, its timbre.

The point, however, is that in my work methods are much more important than materials. Indeed, even the materials themselves ultimately derive from the methods. My problem, in fact, is not to write a sequence of sounds and develop it, but to generate a surface, a “texture”, with a precise perceptual value.
In fact, this is texture music. Even when you think you are hearing a single sound, you are actually hearing at least 4 or 5 of them. And I am not talking about partials, but about complex sounds, each of which has from a minimum of 4 partials up to a maximum of about 30. For example, the initial G#, which emerges from nothing and is then surrounded by other notes (F, Bb, F#, A), is made up of 8 sounds with very small differences in pitch, changing every 0.875 seconds. This creates the perception of a single sound, but with a certain kind of internal movement.

Texture music: a micro-polyphony that moves far below the grid of 12-note equal temperament. Here I work with an octave divided into 1000 equal parts, and at certain points of the piece thousands of complex sounds coexist simultaneously to generate a single “crash”.
It follows that it is not possible to write such a “score” by hand. I personally designed and programmed a composition software called AlGen (Algorithmic Generation), through which I drive the masses while the computer generates the detail (the single sounds).
AlGen already existed in 1984 and had been used to compose Wires, to which this piece owes a lot; but, while at that time it was just a block of routines linked to the Music360 synthesis program by Barry Vercoe (the direct ancestor of today’s CSound), for this occasion it was completely rewritten and is now a piece of software in its own right. In its current version it incorporates various probabilistic distributions, serial methods, and linear and non-linear algorithms to better control the generated surfaces and, above all, their evolution.
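A small numerical example may clarify the pitch grid and the kind of layer described above. With the octave divided into 1000 equal parts, one step corresponds to a ratio of 2^(1/1000), i.e. 1.2 cents. The sketch below is only illustrative: the reference pitch and the step offsets are invented, not taken from the actual score:

```python
# Octave divided into 1000 equal parts: one step = 2**(1/1000), i.e. 1.2 cents.
GSHARP = 51.91   # G#1 ≈ 51.9 Hz in 12-TET, used only as an illustrative anchor

def freq(ref, steps):
    """Frequency 'steps' thousandths of an octave above (or below) the reference."""
    return ref * 2 ** (steps / 1000)

# Eight sounds with very small pitch differences around the same note,
# of the kind that fuse into one perceived sound with internal movement.
offsets = [-7, -5, -3, -1, 1, 3, 5, 7]          # invented offsets, a few cents apart
cluster = [round(freq(GSHARP, s), 3) for s in offsets]
print(cluster)

# In the piece such a layer is renewed every 0.875 seconds; the onset times would be:
onsets = [round(i * 0.875, 3) for i in range(8)]
print(onsets)
```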

Nonetheless, AlGen does not incorporate any form of “intelligence”. It does not make decisions based on harmony, context and so on: it is a blind executor of orders. It generates values from a probabilistic set or computes functions and generates notes, but fortunately it does not think or decide. What it deals with are pure numbers, and it does not even know whether it is calculating durations, densities or frequencies.
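To make the “blind executor” idea concrete, here is a minimal sketch of the kind of operation such a program performs: drawing plain numbers from simple distributions and writing them out as CSound score lines. It is an illustration of the principle only, not AlGen itself; the instrument number, the p-field layout and all the ranges are invented:

```python
import random

def generate_score(n_events, total_dur, fmin, fmax):
    """Draw onset, duration, amplitude and frequency values from simple distributions
    and format them as CSound i-statements; the program never 'knows' what the
    numbers mean."""
    lines = []
    for _ in range(n_events):
        onset = random.uniform(0, total_dur)
        dur = random.expovariate(1 / 1.5)                 # mean duration 1.5 s
        freq = fmin * (fmax / fmin) ** random.random()    # log-uniform in [fmin, fmax]
        amp = random.uniform(0.05, 0.3)
        lines.append(f"i1 {onset:.3f} {dur:.3f} {amp:.3f} {freq:.2f}")
    return sorted(lines, key=lambda s: float(s.split()[1]))  # sort by onset time

print("\n".join(generate_score(10, 30.0, 100.0, 800.0)))
```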

Lo Spazio tra le Pietre was composed in my studio in September – October 2008 and synthesized in 4 channels by CSound. Synthesis algorithm: simple FM.
The CSound score was generated by AlGen, the assisted-composition software written by the author.

The whole piece is conceived as a spatial structure. Its skyline is evident in the sonogram at the top of the page, and its structure, as an alternation of shapes, full and empty, is clearly visible in the enlargement of a fragment of about one minute (below; as usual, you can click on the images to enlarge them).

Mauro Graziani – Lo Spazio tra le Pietre (The Space between the Stones) (2008), stereo reduction from original quad

Fragment sonogram

ST/10-1,080262

Here is another stochastic work by Iannis Xenakis. The title is encoded: ST stands for stochos, 10-1 means that this is the first composition in the series for 10 instruments, and 080262 is the date.

The point of interest of this piece is that it is one of the first to have been composed by software. At the time, in fact, Xenakis's work had attracted the attention of IBM, which made some programmers available to formalize his compositional process.

As in other pieces from these years, statistics and the theory of distributions are at the basis of the entire composition (see Musica e Matematica 03).
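As a very rough illustration of what composing with distributions can mean (a generic sketch, not a reconstruction of Xenakis's ST program), one can draw the time between successive notes from an exponential distribution and the other parameters from other simple laws:

```python
import random

# Generic stochastic note generator: inter-onset times from an exponential
# distribution (mean density of 2 notes per second), pitches and durations
# from uniform ranges. Illustrative only.
random.seed(80262)

t = 0.0
notes = []
while t < 10.0:                       # 10 seconds of material
    t += random.expovariate(2.0)      # exponential inter-onset times
    pitch = random.randint(36, 84)    # MIDI note numbers
    dur = random.uniform(0.1, 1.5)
    notes.append((round(t, 3), pitch, round(dur, 2)))

for n in notes[:8]:
    print(n)
```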

Iannis Xenakis – ST/10-1,080262
Paris Instrumental Ensemble for Contemporary Music, Cond. Konstantin Simonovitch