Category Archives: musth625


(musTh625) moving from DAW-based musique concrete to virtual samplers

As you add virtual samplers (Kontakt) to the set of techniques available to you as a computer musician, it can help to understand the function of a virtual sampler – why you would use one, and what interesting things you can make it do. As an introduction, I’m organizing the post into a set of questions.

what can a virtual sampler do?

A virtual sampler allows you to

  • easily transpose recorded samples through the use of a MIDI keyboard
  • set start and stop points within an audio file
  • create looping sections within an audio file that continue as long as you hold down a key (and also in other ways)
  • add synthesis-based processing to your samples, through the use of envelopes and effects

All of the above capabilities are non-destructive processes, generated in realtime by the virtual sampler.

how can MIDI be of help?

Since MIDI is a control language that describes performance actions, it can provide note control for starting and stopping notes in time, transposition information through keyboard tracking, and performance controls that can vary a sound while it is playing through the use of Continuous Controllers (CC). Because MIDI was designed to separate performance controls from sound generation, it is up to you to program instruments that respond to MIDI data in interesting and musical ways.
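To make that separation of control and sound concrete, here is a sketch of the raw three-byte messages MIDI actually sends. The byte values come from the MIDI 1.0 specification; the helper function names are just illustrative, not part of any particular library.

```python
def note_on(channel, pitch, velocity):
    """Status byte 0x90 + channel, then pitch and velocity (each 0-127)."""
    return bytes([0x90 | (channel & 0x0F), pitch & 0x7F, velocity & 0x7F])

def note_off(channel, pitch):
    """Status byte 0x80 + channel; release velocity is conventionally 64."""
    return bytes([0x80 | (channel & 0x0F), pitch & 0x7F, 64])

def control_change(channel, controller, value):
    """Status byte 0xB0 + channel, e.g. controller 1 = mod wheel."""
    return bytes([0xB0 | (channel & 0x0F), controller & 0x7F, value & 0x7F])

# Middle C (note 60) on channel 1, then push the mod wheel all the way up.
msg = note_on(0, 60, 100)
cc = control_change(0, 1, 127)
```

Note that nothing in these bytes says anything about sound: a sampler like Kontakt receives them and decides for itself what they mean, which is why the same CC message can control filter cutoff in one instrument and LFO depth in another.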

why would a sampler also include synthesis?

We traditionally think of synthesizers as generating sound from scratch, and samplers as using prerecorded audio as the source of their sound generation. In reality, digital synthesizers and digital samplers are fairly similar. A digital synthesizer must use a sampled waveform to generate sound, and these waveforms are really just very short digital samples (often one cycle in length).
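The “one cycle” idea can be sketched in a few lines of Python – a minimal, hypothetical wavetable oscillator, not how any particular synthesizer is actually implemented:

```python
import math

# One cycle of a sine wave stored as a short digital sample.
TABLE_SIZE = 512
table = [math.sin(2 * math.pi * i / TABLE_SIZE) for i in range(TABLE_SIZE)]

def oscillator(freq, sample_rate=44100, num_samples=8):
    """Loop through the single-cycle table at a rate set by freq.

    The phase increment determines pitch: reading the same stored
    waveform faster produces a higher note.
    """
    phase = 0.0
    inc = TABLE_SIZE * freq / sample_rate
    out = []
    for _ in range(num_samples):
        out.append(table[int(phase) % TABLE_SIZE])
        phase += inc
    return out
```

A sampler does the same kind of table reading, just with a much longer recording in place of the single cycle – which is why the two designs share so much machinery.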

Synthesis techniques and controls offer a wide variety of audio processing and transformational techniques that can be used in realtime. Without diving deep into synthesis, it can be useful to understand some basic synthesis elements that we will use in our sampler instruments. Kontakt provides envelopes, low frequency oscillators (LFOs), filters, and other effects that can be applied to sampled audio.

I will go over introductory synthesis controls in my next post.


(musTh625) project 2: musique concrete and virtual instruments

Due Tuesday, June 3, at beginning of class.


Compose a short work (75 – 90 seconds, 1’15 – 1’30) utilizing musique concrète techniques. You will process audio as before in Audacity and Audition, and assemble in Digital Performer, with the addition of using a virtual sampler (Kontakt) to further process audio samples. You are encouraged to build upon your first project.

Form again does not have to be much of a concern. Gestural or continual variation should dominate your project, whereby any material that is presented can be subject to immediate variation. Your use of Kontakt should not be as an organ/keyboard, but as an additional way of expanding your developmental possibilities. Your Kontakt instruments must make use of interesting modulation techniques (envelopes and LFOs) and have external MIDI CC realtime controls to allow for interesting development of the sound over time.


  • Duration: 1’15 – 1’30 (10)
  • The inclusion of at least three virtual instruments in Kontakt. (10)
  • The project must rely on gesture as a primary component of the work. (10)

Additional factors you will be graded on:

  • Creativity: are your edited sounds interesting, and used in interesting ways in the project? Do your Kontakt instruments make use of interesting modulation and realtime controls? (40)
  • Quality of edits and finished audio: you should not have audible clicks at beginnings or ends of edited audio, and your audio should not distort (it should not go over the maximum amplitude). (20)
  • Organization of files and following turn-in procedure: is there a finished, mixed audio file? Can I open your project file and play back all the tracks contained? Did you include your original source files and your processed files? (10)

(musTh625) using reverb, bouncing to disk

You can use reverb as an effect in Digital Performer to add a sense of depth or distance to the apparent sound field. I’ve posted about how to use ProVerb, and provided some links to help understand reverb in general, here:

To finish your project, you will need to mix down, or bounce to disk your project in DP to a single stereo audio file. Instructions are here:


(musTh625) project 1: musique concrete

Due Tuesday, May 27, at beginning of class.


Compose a short work (45 – 60 seconds) utilizing musique concrète techniques. You will process your sounds in Audacity and/or Audition, and assemble/compose them in Digital Performer. You should use three to five original sound sources.

You will edit and process your sounds using the basic techniques of musique concrète, namely

  • cut/copy/paste
  • gain/amplitude change (overall, or as an envelope)
  • changing playback speed (with or without changing pitch)
  • reversing audio (change direction)
  • equalization

Form does not have to be much of a concern. The ethos of concrete music is experimentation with sound. Anything that sounds interesting from moment to moment will work as a composition in this short format. It can be best to think in terms of gestural or continual variation, whereby any material that is presented can be subject to immediate variation.

Your project cannot consist entirely of looped material. You should focus on musical gestures, not simply repeated patterns.


  • Duration: 45 – 60 seconds (10)
  • The project must rely on gesture as a primary component of the work. (10)

Additional factors you will be graded on:

  • Creativity: are your edited sounds interesting, and used in interesting ways in the project? (40)
  • Quality of edits and finished audio: you should not have audible clicks at beginnings or ends of edited audio, and your audio should not distort (it should not go over the maximum amplitude). (20)
  • Organization of files and following turn-in procedure: is there a finished, mixed audio file? Can I open your project file and play back all the tracks contained? Did you include your original source files and your processed files? (10)
  • Use of the required minimum number of sound sources (10)

100 points total

Work Procedure/Organization

  • Create a project folder. Name it with your last name and “-1.” For example, my project folder would be kothman-1.
  • Within the project folder, create a folder that contains your original source sound files (name it source sounds, or source files). This folder will contain the specific segments of audio you used to start your editing. If you are using snippets from larger works or sound files, your original sounds folder will contain the edited snippets as well.
  • Create a folder for your processed sounds, within the project folder.
  • Launch Digital Performer and save your Digital Performer project within your overall project folder.
  • Make sure you bounce to disk, to make a final stereo mix of your entire project.
  • Compress your entire project folder and turn in that compressed folder to me in class (via some type of removable media or cloud storage).

(musTh625) musique concrete, starting your piece 2

Part 1 described how to collect sounds, organize your files, and listen to your soundfiles for smaller gestures. This part will outline some initial processes you can apply to your soundfiles to get interesting results. For now we will stick to Audition and Audacity for processing.

using audition

Audition uses an integrated window that contains a listing of open files, a level meter, and an editing space, among other functions. You can drag an audio file into the file list or editing space to open it in Audition. The editing space has two parts: on the top is an overview of the entire file, and the larger space contains whatever amount of the file you are viewing. You can use the top pane to zoom in on parts of the file.

As with any graphic editor, you can position the playback cursor (or playback head, if you prefer) and hit the space bar to start playing, or you can select a portion of the audio and hit space to just hear that portion. If you find a segment that is interesting on its own, you should copy it into a new file. You can copy, create a new file, and then paste the audio into it (as with most programs). Audition also has a special “Copy to new” command that reduces the task to one command. If the new file you are creating is just a segment of original, unprocessed audio, you should save it with some unique name into your source files folder (for original audio).

With any audio segment taken from a longer file, you should check for clicks/pops at the beginning and end of the file. These clicks happen when you have a beginning or ending amplitude that is not zero, causing the speakers to rapidly move in a discontinuous way. You should listen to these and fix them before processing the file further. Audition has fade handles at beginnings and ends of files. Before you make use of them, make sure you have zoomed in enough so that you can know how long (in time) you are making the fade. Alternately, you can select a portion of the audio and choose Favorites | Fade in (or Fade out).
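The fix is just a short amplitude ramp at each end of the file. As a sketch of what a fade handle actually does to the samples (plain Python, with hypothetical function names):

```python
def apply_fades(samples, fade_len):
    """Ramp amplitude from 0 at the start and back to 0 at the end.

    Starting and ending at zero amplitude is what prevents the
    click: the speaker no longer has to jump discontinuously.
    """
    out = list(samples)
    n = min(fade_len, len(out) // 2)
    for i in range(n):
        gain = i / n
        out[i] *= gain           # fade in
        out[-1 - i] *= gain      # fade out (mirrored)
    return out

faded = apply_fades([1.0] * 10, 4)
# first and last samples are now 0.0, so playback starts and ends silent
```

The `fade_len` argument is why zooming in matters: a fade of a few milliseconds removes the click without being audible as a fade, while a long one audibly reshapes the sound.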

With any original soundfile you should remove extra silence at the beginning of the file before you begin processing it. As you stretch audio (slow it down), you will be stretching the silence as well if it is not removed.

effect editing with audition

Besides cut, copy, and paste, most of the other processing commands you will use are found in the Effects menu, organized by type of process. We’ll cover some basics. Keep in mind that effects are destructive, meaning that your file will change. You should use the Save As… command to save any processed audio into a new file. Save your processed files into your processed sounds folder.


Reverse

Reversing audio is a simple, one-step process. Remember to save your new audio as a new file.

Time and Pitch | Stretch and Pitch

Stretch and Pitch allows you to change the length of a file and its pitch. The two can be “locked,” which acts like changing the speed on a tape recorder. Making the duration longer results in the pitch becoming lower, and vice versa. If the two values are not locked, you can change duration independent of pitch. You can adjust by dragging percentage and pitch sliders, or you can double click on the displayed value to the right of the sliders and type in a specific number. You can also specify an exact duration.

In the dialog box you can preview a portion of the sound before applying the process by clicking on the play icon. Listen to the preview to see if you like the process.
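The locked behavior follows a simple rule: the pitch change in semitones is determined entirely by the duration ratio. A small Python check of the tape-speed relationship (the function name is just illustrative):

```python
import math

def locked_pitch_change(duration_ratio):
    """Semitone shift when duration and pitch are locked together.

    duration_ratio is new length / original length:
    2.0 = twice as long (tape at half speed) = one octave down.
    """
    return -12 * math.log2(duration_ratio)

locked_pitch_change(2.0)   # -12.0 semitones (an octave down)
locked_pitch_change(0.5)   # +12.0 semitones (an octave up)
```

Unlocking the two values breaks exactly this relationship: the program then time-stretches and pitch-shifts independently instead of simply resampling.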

Amplitude and Compression | Gain Envelope

Using the gain envelope you can create and drag breakpoints to create a new/different amplitude envelope for the file. Use it to apply attacks to steady audio, or other dynamic effects. As you drag breakpoints you will see the amplitude change in dB and percentage. If the percentage is 100% there is no change; above 100% you will be amplifying the audio, and below 100% you will be reducing the amplitude. Be careful with values over 100%, as you can end up with amplitude values that are out of range. If the playback meters turn red, undo and try again with smaller amplitude values.
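The dB and percentage readouts are two views of the same gain value. If you want to sanity-check what you are seeing, the conversion is standard amplitude-decibel math (the function names here are hypothetical):

```python
import math

def percent_to_db(percent):
    """Convert a gain percentage to decibels (100% = 0 dB, no change)."""
    return 20 * math.log10(percent / 100.0)

def db_to_percent(db):
    """Convert decibels back to a gain percentage."""
    return 100.0 * 10 ** (db / 20.0)

percent_to_db(100)   # 0.0 dB, no change
percent_to_db(200)   # about +6 dB, double the amplitude
percent_to_db(50)    # about -6 dB, half the amplitude
```

This is also why “just a little” over 100% is riskier than it looks: gain is multiplicative, so a 200% breakpoint doubles every sample value under it.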

Filter and EQ 

You have a number of choices to change the amplitudes of different frequency regions in a sound. The FFT filter lets you draw arbitrary function shapes. The EQs use sliders to apply amplitude changes to regions around the specified frequencies. The parametric EQ lets you create a function shape, but it is a combination of filters, and can be a little tricky to use at first. As the filters interact you get shapes that you may not be expecting.

using audacity

Being open-source/free software makes Audacity very popular for audio editing. It is a very good editor for most functions. Like Audition, you can drag audio files into the edit space. Each file creates a new track, as Audacity is a multi-track editor. Since we will use a Digital Audio Workstation for multi-track editing, let’s confine ourselves to the stereo/mono editing features of Audacity.

Audacity uses its own project format to save files. Anything you want to save as a separate audio file needs to be exported. I generally select the portion of the audio I want to export, and choose Export Selection from the File menu. If you are creating audio segments from a file, you should copy them into a new track, or after the original audio in the track you’re working on. Zoom in, apply audio fades (Fade In and Fade Out from the Effects menu), select the portion, and export to an audio file.

One note about selecting stereo audio in Audacity: if you click and drag in the middle of the two channels, you actually resize the vertical size of the tracks. Make sure you click and drag within the left or right channel.

editing audio in audacity

As in Audition, you will find your audio processing routines in the Effects menu, except for amplitude envelopes (discussed separately).

Change Pitch, Change Speed, Change Tempo

Audacity separates these processes, which in Audition were in one command that made use of locking (or not locking) the parameters.

Change Pitch lets you specify a pitch change without changing the duration of the audio. The way Audacity thinks of percentages is not like other programs, so it can be easiest to specify a change in semitones (with fractional semitones possible).

Change Speed changes both duration and pitch, locked together. Positive percentage changes will play the file faster, and negative percentages will play the file slower. You should experiment a bit to understand how much change you will get. Note the original time of the file, choose a percentage, and then compare it to the resulting duration.

Change Tempo is like the Change Pitch command, only it changes duration without changing pitch. Besides using percentages like Change Speed, you can specify the ending duration in seconds.
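If Audacity's speed percentages feel opaque, a little arithmetic translates them into the resulting duration and pitch change. A sketch, assuming percent works as described above (positive = faster; the helper name is hypothetical):

```python
import math

def change_speed(duration, percent):
    """Predict the result of a Change Speed-style effect.

    Returns (new_duration, semitone_shift). Positive percent plays
    faster: shorter duration, higher pitch.
    """
    ratio = 1 + percent / 100.0
    return duration / ratio, 12 * math.log2(ratio)

change_speed(10.0, 100)   # (5.0, 12.0): half as long, an octave up
change_speed(10.0, -50)   # (20.0, -12.0): twice as long, an octave down
```

Note the asymmetry this exposes: +100% and -50% are the same size of change in opposite directions, which is part of why the percentages feel unlike other programs.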

All editing commands in Audacity have a Preview button, like the preview play icon in Audition.


Equalization

The equalization effect in Audacity allows you to either draw curves or adjust sliders.

creating an amplitude envelope in audacity

Audacity changes amplitude over time (envelope) through the use of a tool and manipulations in the edit space. You usually begin in Audacity with the selection tool (the vertical slash, like a text insertion icon). Next to it is the envelope tool.


Note that when the envelope tool is selected there are purple lines above and below the graphic waveform display in each track. Mouse over them and your cursor becomes the envelope tool icon. Click to insert a break point. Drag break points to create envelope segments. You can hear the effects of your changes by playing the audio file.

When you like the envelope that you have created, you need to render the change. (Up until this point your editing has been non-destructive.) Choose Tracks | Mix and Render to apply the audio envelope. Save your processed audio as a new file.


(musTh625) musique concrete, starting your piece 1

Starting your first musique concrete project usually requires a new way of thinking about music and working with sound in general. To help you organize your work in the early stages, I’m going to summarize some key points, tools, and procedures.

sound sources

The starting assumption is that you want to create a piece of musique concrete that involves development through manipulation and transformation of sound sources. For a short concrete piece (1 – 2 minutes), three to five sound sources should be adequate. If you have too many sound sources, you will tend to not develop/transform them. If you don’t have enough source material, being creative could be a challenge.

Freesound is an excellent source for recorded sound. You will need to make an account in order to download soundfiles. Look for uncompressed files, usually in .wav or .aiff format. FLAC files are losslessly compressed, but you might have problems editing them, as not all editors and players support the format.

Look for some sounds that do something, i.e., have activity and/or change in sound content. You can think of doing something as being like a musical motive or gesture. If a sound has some interesting activity in it, you will be able to find interesting transformations. In class, I often use a bird call and a glass breaking as examples of sounds that have activity.

Also look for sounds that have some type of static, yet rich, sound content. Creating a counterpoint between active sounds and static or slowly moving sounds can be an effective compositional strategy. Look for sounds that stay relatively constant for 10 – 30 seconds (or more), but that have a rich frequency spectrum (contain a lot of frequencies). You will still want to develop and make changes to your slow-moving sounds through filtering, amplitude envelopes, time stretching, and pitch transposition. A sound that doesn’t have much frequency content will not be that interesting to use as part of a longer texture. Think about using machine sounds, air conditioners, trains (without horns or whistles), and car motors.

organizing your sounds

Once you find and download some interesting sounds, you want to start thinking about good file management techniques. Collect the sounds you want to use into a single folder. I will usually copy the sounds from other places on my computer, or else make a backup copy of all the sounds if I don’t have other copies. In any case, you want to have backup files of all your original sounds, and make sure that those backup files are somewhere accessible.

You should make a parent folder for the entire project as soon as you begin working on a new project. This folder will contain all your soundfiles and data files from all the applications you might work with. As you work, you will necessarily create many new soundfiles. I try to keep my original source files separate from any processed files I create. Make a new folder for processed sounds. Depending on the size of your project, you might want to make subfolders for processed sounds from each original soundfile. For longer pieces (say around 6′ and longer), I make subfolders. For short concrete works, one folder for all your processed files is probably enough, provided you use understandable names for processed files. 

As you process your audio, save files with names that you can quickly recognize as to sound source and process. Names like cool.aiff and boom.aiff aren’t specific enough, which means you will have to listen to the file to identify it. If you can incorporate the source name and the process into the filename, you will be better able to quickly choose the appropriate file from a listing of names. glassbreak4x24d indicates to me that the sound source is a glass breaking, it has been stretched to 4 times its original length, and transposed 24 half steps down. Getting the name right takes only a few extra seconds when saving a new file, but will save you much more time when assembling/composing your piece.
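If you want to keep the naming consistent, the convention can even be automated. A hypothetical helper that builds names in the glassbreak4x24d style (the “d” for down is from the example above; the “u” for up is an added assumption):

```python
def processed_name(source, stretch=None, semitones=None):
    """Build a filename stem like 'glassbreak4x24d'.

    stretch is a length multiplier (4 = four times the original length);
    semitones is the transposition (negative = down, positive = up).
    """
    name = source
    if stretch is not None:
        name += f"{stretch:g}x"       # e.g. '4x' for 4 times longer
    if semitones is not None:
        name += f"{abs(semitones)}{'d' if semitones < 0 else 'u'}"
    return name

processed_name("glassbreak", stretch=4, semitones=-24)  # 'glassbreak4x24d'
```

Whether by hand or by script, the point is the same: the filename alone should tell you the source and the process.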


listening to your sounds

Throughout the whole process, you need to do a lot of listening to your sounds. You need to get familiar with your material in order to make decisions about how to use it musically. As you listen, select portions of the audio to listen to separately. You can isolate parts of audio that make interesting gestures, then copy and paste them into new files to save and process separately. As you do this listening, you will be making compositional decisions. It could help to write down notes about which sounds are interesting, and any ideas you have about how to combine sounds (horizontally or vertically).


(musTh625) audacity – free audio editing software

Audacity is an audio editor that works on both Mac and Windows. You can download it from Sourceforge.

Audacity page at sourceforge

Just remember, you have to check for updates on your own.


(pianoPed) software notation overview

Software notation programs like Finale and Sibelius can provide you with the ability to produce professional looking music scores and examples. That ability comes with a learning curve, however. Keep in mind, notation by hand is hard, too. There are a lot of rules to learn, and a lot of formatting issues to contend with. Knowing some elements of notation, and having a method of working, can help you produce quality notation in a straightforward way.

notation books/guides

While notation software usually removes the need to know most notation rules, software can make mistakes. And you are still responsible for placing some notation elements in their proper places. Having a notation book or quick guide accessible is extremely helpful for those times when questions arise.

Gardner Read’s Music Notation: A Manual of Modern Practice is a comprehensive source for notation rules.

Kurt Stone’s Music Notation in the Twentieth Century: A Practical Guidebook focuses more on problems and solutions for notating modern music.

Alfred Music publishes a pocket guide that is inexpensive and very useful: Essential Dictionary of Music Notation: The Most Practical and Concise Source for Music Notation. (available as a small paperback, or in Kindle format)

elements of notation, in no particular order, and not necessarily exhaustive

  • Notes
  • Beams
  • Stems
  • Articulations
  • Dynamics
  • Tempo Markings (Metronome Markings, Accel, Ritards)
  • Page Dimension (size)
  • Page Orientation (portrait or landscape)
  • Page Margins
  • Staff Spacing
  • System Spacing
  • Page Reduction (Finale)
  • Brackets/Braces
  • Clefs
  • Time Signatures
  • Key Signatures
  • Accidentals
  • Slurs/Phrase Marks
  • Layers (multiple voices on one staff)
  • Text (titles, composer, notes)

basic method of working

I find that it is best to define the large scale elements (page layout, staff layout, spacing, title and composer text), then enter notes, then move out from the notes with articulations, dynamics, expressions, etc.


(musTh625) first reading assignment

For Monday (3/19), read:


(musTh625) making a listening guide for electro-acoustic music

Rather than make you memorize pieces/composers/dates in a short summer session, I want you to engage a few selected works in more detail. To do so, I want you to create listening guides for each assigned listening piece.

The format of the guide will be as follows:

  • composer name
  • year composed
  • duration
  • overview
  • detailed examination of a section of the work

Ideally, you should load the audio file of the piece into a graphic audio editor, such as Audacity. Viewing the graphic waveform while you listen can at least help you visualize gestures and sections by following the amplitude differences of the waveform display. You can also select portions to listen to in isolation. If you don’t have access to a graphic editor at the time, use your media player’s timer.

For the overview, note the timings of important events and sectional divisions. Identify some sonic/musical aspects that help to delineate sections.

When providing more detail for a specific section, write down a brief description of events with timings. There is no right or wrong description, but try to be as musically descriptive as possible without adding external programmatic descriptions.

Adding a musical program to a work changes the way it was intended to be heard, and can interfere with understanding the work. If there is definite rhythm or pitch, you may want to include this information. You could also indicate if the work uses natural or synthesized sounds as its main source material, or some combination (if you can tell).