The last century has been particularly rich in movements that consisted of
breaking the rules of previous music, from “serialists” like Schönberg and
Stockhausen to “aleatorists” like Cage. The realm of composition principles
today is so disputed and complex that it would not be practical to try to
define a set of rules that fits them all. Perhaps a better strategy is a
generic modeling tool that can accommodate specific rules from a corpus of
examples. This is the approach that, as modelers of musical intelligence, we
wish to take. More specifically, our goal is to build a machine that defines
its own creative rules by listening to and learning from musical examples.
Humans naturally acquire knowledge and comprehend music from listening.
They automatically hear collections of auditory objects and recognize patterns.
With experience they can predict, classify, and make immediate judgments
about genre, style, beat, composer, performer, etc. In fact, every composer
was once musically ignorant and inept, and learned certain skills essentially
through listening and training. The act of composing music is an act of
bringing personal experiences, or “influences,” together. In the case of a
computer program, that personal experience is obviously nonexistent. However,
it is reasonable to believe that musical experience is the most essential
kind, and it is already accessible to machines in digital form.
There is a fairly high degree of abstraction between the digital representation
of an audio file (WAV, AIFF, MP3, AAC, etc.) in the computer and its mental
representation in the human brain. Our task is to make that connection by
modeling the way humans perceive, learn, and finally represent music. The
latter, a form of memory, is assumed to be the most critical ingredient in
their ability to compose new music. Now, if the machine is able to perceive
music much like humans do, learn from the experience, and combine that
knowledge to create new compositions, who is the composer: 1) the programmer
who conceives the machine; 2) the user who provides the machine with examples;
or 3) the machine itself, which makes music influenced by these examples?
Such ambiguity is also found on the synthesis front. While composition (the
creative act) and performance (the executive act) are traditionally
distinguishable notions, except in improvised music where both occur
simultaneously, with new technologies the distinction can disappear and the
two notions merge. With machines generating sounds, the composition, which is
typically represented in a symbolic form (a score), can be executed instantly
to become a performance. It is common in electronic music for a computer
program to synthesize music live while the musician interacts with the
parameters of the synthesizer: turning knobs, selecting rhythmic patterns,
note sequences, sounds, filters, etc. When the sounds are “stolen” (sampled)
from existing music, the authorship question is also supplemented with an
ownership issue. Undoubtedly, the more technical sophistication is brought to
computer music tools, the more the musical artifact becomes disconnected from
its creative source.