Author Archives: Keith Kothman

(sonicarts) saving retro synth programs and finding them later

Saving your work on a computer is good.* Backing up your saved work on a computer is also good.**

*Saving your programs/patches*** on the Retro Synth allows you to save work in progress (if you like something even a little, save it), to make variations of programs you like, and to generally develop a library of programs that you can use in a project.

**Since the lab computers will erase user data once you log out of a computer, you HAVE to back up your work if you want to have it the next time you return to work.

***A program or patch is the unique settings of each dial, modulation routing, pitch bend amount, etc., that produces a unique sound.

The Retro Synth in Logic allows you to save programs you make, and if you store them in the preferred location it will show you a list of your saved programs in the same drop down menu that shows stock/factory programs. Click on the Retro Synth main menu to access file management commands (save, load, etc.).

[screenshot: Retro Synth main menu]

You can scroll down to the save command.

[screenshot: Retro Synth menu scrolled to the save command]

 

Save the program in the default location, which is your home folder > Music > Audio Music Apps > Plug-In Settings > Retro Synth. Any user-saved programs will show up above the factory programs.

All is well, at least until you log out. When you log out, any user-created files are deleted. Here’s where the backing up part comes into play.

If you’re used to putting your files on the desktop, you might have trouble finding the folder with your Retro Synth programs. You can keep track of the file path above, or you can use the Spotlight search feature to find the files. Search for any of the filenames you saved, click on the file in the resulting listing, then look at the file path at the bottom of the Finder window.

[screenshot: Spotlight search]

 

[screenshot: file path shown at the bottom of the Finder window]

 

When you return to the lab to continue working, you need to copy your Retro Synth programs back into this same folder.

(sonicarts) using the retro synth in logic

We will be using the Retro Synth in Logic Pro X as our analog synthesis instrument. The basic process is to

  • Create a software instrument track
  • Load the Retro Synth as your instrument
  • Use the Analog tab in the Retro Synth
  • Play the synth using any attached MIDI keyboard

creating the software instrument track

When you open a new project and are prompted to create a new track, choose the “software instrument” option. Be sure to deselect (uncheck) the “Open Library” option (circled in red in the picture below). If Logic opens the “library,” you will be limited to stock instrument sounds.

[screenshot: new track dialog with the software instrument option chosen and “Open Library” deselected]

 

load the retro synth

Once you have the software track created, go to the channel strip in the main window, or switch to the mixer view. The software instrument track will have insert locations for MIDI FX, Instrument, and Audio FX. Click on the Instrument insert and choose Retro Synth > stereo from the popup menu.

[screenshot: Instrument insert popup menu]

 

the retro synth

The Retro Synth has four options: Analog, Sync, Table, and FM. Stay with the Analog option.

[screenshot: Retro Synth]

playing the synth

Select the software instrument track with the Retro Synth loaded in the main window or mixer window. That track will now be listening to MIDI input. You can also use the computer keyboard or an onscreen MIDI keyboard. You can select either option in the Window menu (Show Keyboard or Show Musical Typing).

(sonicarts) simple analog synthesis

In conjunction with learning MIDI, we will be learning some simple analog synthesis to produce sounds for the final project. I’m not going to go too far into the history of analog synthesis, other than to mention that what we are talking about today is actually virtual analog synthesis. Virtual means that we are recreating the techniques and electronic hardware via digital software packages. If it happens in our computer, with software, there is nothing really analog about it. It is a digital system imitating an analog system.

digital audio processing

Up to now, we’ve been starting with sampled digital audio recordings that we have manipulated in various ways to create transformed sounds. The transformations have themselves taken on a compositional aspect, as we lengthen things, shorten things, transpose things, etc., to produce music dependent in some way on the original sound sources.

synthesis in general

Synthesis starts with elemental sound components – waveforms – combining and processing them to create musical sound that is not dependent on any prerecorded sound. The two main synthesis techniques we will be using can be compared to painting and sculpture. Additive Synthesis starts with the most basic elements and creates sounds by combining these elements. It is like painting, in that every color must be added to a blank page, primary colors combined with other primary colors to produce new colors, drawn onto the page to make an interesting picture. Subtractive Synthesis starts with a rich block of sound, like sculpture starts with a large piece of material, and pares away (carves out) parts to find an interesting subset of sound.

Our basic elements of synthesis are:

oscillators – envelope generators – filters – modulators

oscillators

Oscillators create fluctuating amplitudes over time, generally in basic geometric shapes or randomly. These basic shapes are referred to as elemental waveforms. The name of each waveform comes from its amplitude shape over time, and each waveform has its own unique spectrum**.

**For our purposes, we will refer to frequency content that is integer related to the fundamental, as well as the fundamental, as partials. You know about the overtone series. The overtone series starts with a fundamental, with the first overtone occurring next (the octave above the fundamental). When you use partials to describe the overtone series, you start counting with the fundamental as the first partial. The first overtone is the second partial, and so on. The difference is slight but important in understanding the amplitude relationships in elemental waveforms.

Sine waves are the simplest waveforms, containing only the first partial (fundamental frequency). Sine waves are usually the basis for additive synthesis, because if you have enough sine waves with control of frequency and amplitude for each, then you can theoretically recreate any sound. The issue with additive synthesis is the usually massive amount of control needed to produce interesting sounds.

Noise exists at the other end of the continuum from sine waves. White noise is a random fluctuation that produces all frequencies at equal amplitude. Since there are more frequencies as you progress upwards through the pitch range, white noise sounds unbalanced towards the high pitch range.

Sawtooth waves contain all partials with relative amplitudes of 1/pn (pn = partial number). The first partial is 1/1, the second 1/2, the third 1/3, etc.

Square waves contain all the odd partials with relative amplitudes of 1/pn. 1/1, 1/3, 1/5, etc.

Triangle waves contain all the odd partials with relative amplitudes of 1/pn-squared. 1/1, 1/9, 1/25, etc.
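These amplitude relationships are easy to check in code. Here is a small Python sketch (the function names are mine, invented for illustration) that returns the relative partial amplitudes for each waveform and sums sine partials additively:

```python
import math

def partial_amplitudes(waveform, n_partials):
    """Relative amplitude of each partial (1-indexed) for elemental waveforms."""
    amps = []
    for p in range(1, n_partials + 1):
        if waveform == "sawtooth":
            amps.append(1 / p)                          # all partials, 1/pn
        elif waveform == "square":
            amps.append(1 / p if p % 2 == 1 else 0)     # odd partials only, 1/pn
        elif waveform == "triangle":
            amps.append(1 / p**2 if p % 2 == 1 else 0)  # odd partials, 1/pn-squared
    return amps

def additive_sample(waveform, freq, t, n_partials=16):
    """One output sample: sum sine partials at integer multiples of freq."""
    return sum(a * math.sin(2 * math.pi * p * freq * t)
               for p, a in enumerate(partial_amplitudes(waveform, n_partials), start=1))
```

Asking for the first five triangle-wave amplitudes gives exactly the 1/1, 1/9, 1/25 series described above, with zeros for the even partials.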

We will generally use rich waveforms (sawtooth, square, triangle, and noise) as our sound sources, and filter them to produce interesting results.

filters

Filters are named according to the frequencies they let pass unchanged. The most common types are low pass, high pass, band pass, and band reject, spelled in a variety of ways (lopass, hipass, etc.)

Low pass filters have a cutoff frequency, allowing frequencies below the cutoff to pass unchanged, and frequencies above the cutoff to be progressively reduced in amplitude. Low pass filters are our most common tool for creating sounds that respond like natural sounds, as increased amplitude usually involves adding more frequencies to a sound. We can control the cutoff frequency to increase when the amplitude of the sound increases.

High pass filters perform the opposite function of low pass filters. They are most useful for percussive sounds, especially sounds like cymbals, that have a lot of high frequency content without low frequencies.

Band pass and band reject filters have a center frequency. Frequencies around the center either pass through (band pass) or are rejected (band reject, or notch) progressively as you move out in both directions from the center frequency. Band reject filters are mostly used for noise reduction.

Analog filters generally have a slope designation that indicates how much amplitude reduction is applied as frequencies move away from the cutoff. The gentlest slope is 6 dB per octave, meaning that one octave away from the cutoff is 6 dB less in amplitude, two octaves 12 dB, etc. You will find 12 dB and 24 dB slopes in most analog synths. Band pass and reject filters add both sides, meaning the minimum slope is 12 dB. While it may seem logical to use the steepest slope available to you, doing so can create unwanted artifacts. Filters are actually delay lines. To increase the slope, you increase the size of the delay line. Increasing the delay line causes phase problems (timing issues), which are especially noticeable with transient sound elements.
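The slope arithmetic is simple enough to sketch. This idealized helper (my own, for illustration; real filter responses are not perfectly linear in dB per octave, especially near the cutoff) computes the amplitude reduction for a frequency above a low pass cutoff:

```python
import math

def attenuation_db(freq, cutoff, slope_db_per_octave=12):
    """Idealized low pass rolloff: dB of reduction for a frequency above cutoff."""
    if freq <= cutoff:
        return 0.0                             # passband: unchanged
    octaves = math.log2(freq / cutoff)         # distance above cutoff, in octaves
    return octaves * slope_db_per_octave
```

So with a 6 dB slope and a 1000 Hz cutoff, 2000 Hz (one octave up) is reduced 6 dB, and with a 12 dB slope, 4000 Hz (two octaves up) is reduced 24 dB.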

Filters also often have resonance controls. Resonance comes from the delay line being fed back into the original signal, and subsequently delayed/filtered again. This feedback produces an amplitude increase around the cutoff/center frequency.

envelope generators

An envelope is simply a function that changes over time. Typically we think of amplitude (loudness) envelopes, but envelopes can be applied to pitch, filters, and any other parameter. The most common envelope is called an ADSR envelope, named after its four segments:

  • Attack time
  • initial Decay time
  • Sustain level
  • Release Time

They often look like this in generic picture form:

[diagram: generic ADSR envelope]
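In code form, a minimal linear ADSR sketch might look like this (the segment times and levels are arbitrary illustrative defaults, not values from any particular synth):

```python
def adsr(t, note_off, attack=0.05, decay=0.1, sustain=0.7, release=0.2):
    """Envelope value (0 to 1) at time t seconds; note_off is the key-release time."""
    if t < attack:                          # Attack: ramp 0 -> 1
        return t / attack
    if t < attack + decay:                  # initial Decay: ramp 1 -> sustain level
        return 1 - (1 - sustain) * (t - attack) / decay
    if t < note_off:                        # Sustain: hold level while key is down
        return sustain
    if t < note_off + release:              # Release: ramp sustain -> 0
        return sustain * (1 - (t - note_off) / release)
    return 0.0
```

Applied to amplitude this shapes loudness, but the same function could just as easily drive a filter cutoff or pitch.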

modulators

The most typical modulator is an LFO, or Low Frequency Oscillator. LFOs operate in the range of frequencies below the audible range (less than 20 Hz), although most go up to 50 – 100 Hz maximum to produce some interesting modulation effects. LFOs can provide pitch vibrato by modulating the frequency of an oscillator, amplitude tremolo by modulating the amplitude of a sound, or other effects like spectrum vibrato by changing the cutoff frequency of a filter. The elemental shapes can provide a variety of effects. Triangle and sine waves are usually employed to produce smooth fluctuations. Sawtooth waves have a sharp change at the start or end of a cycle, and a smooth continuous signal between. Square waves are like on/off switches, while noise can produce a random fluctuation that has many interesting uses.
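As a sketch of the pitch-vibrato case, a sine LFO simply pushes the oscillator frequency above and below its base value (the rate and depth here are made-up illustrative numbers, and the function name is my own):

```python
import math

def vibrato_freq(base_freq, t, lfo_rate=5.0, depth_hz=3.0):
    """Oscillator frequency at time t with a sine LFO applied (pitch vibrato).

    lfo_rate is the LFO frequency in Hz (well below the audio range);
    depth_hz is the maximum deviation above/below the base frequency.
    """
    return base_freq + depth_hz * math.sin(2 * math.pi * lfo_rate * t)
```

A 440 Hz tone with this 5 Hz LFO sweeps between 437 and 443 Hz five times per second; substituting a square wave for the sine would instead snap between the two extremes like a trill switch.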

(sonicarts) midi with a limited focus

Since we are only working with MIDI for a short time in this class, we will keep our focus on a very limited set of MIDI commands:

  • Noteon/Noteoff
  • Continuous Controller
  • Pitch Bend

Noteon/Noteoff messages include data for the number of the MIDI note pressed, and the velocity with which it was pressed. Keep in mind that although software may display a MIDI note number as a pitch, the computer stores the information as a number. Most of us think of how hard we press a key (or hit a drum), but MIDI measures how fast we press the key. Speed is cheaper to detect than force.
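For example, software displaying a stored note number as a pitch name might do something like this hypothetical helper (note that octave-numbering conventions vary by manufacturer; some call MIDI note 60 “C3” instead of “C4”):

```python
def note_number_to_name(n):
    """Map a MIDI note number (0-127) to a display name, with middle C (60) = C4."""
    names = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
    return f"{names[n % 12]}{n // 12 - 1}"
```

Either way, the number is what actually travels over the wire and gets stored.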

Pitch Bend sends a command that corresponds to the amount of deviation, or movement, of the pitch bend wheel/joystick. The actual pitch deviation is determined by applying that data to the pitch bend maximum range, which is usually set for each synthesizer program (or patch). In other words, the pitch bend control sends a percentage of change, which is then applied to the maximum possible value on the receiving end.
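Numerically, the wheel sends a 14-bit value centered at 8192 (0 – 16383), and the receiver scales it by the patch’s bend range, roughly like this sketch (function name mine):

```python
def pitch_bend_semitones(bend_value, bend_range=2.0):
    """Scale a 14-bit pitch bend value (0-16383, center 8192) to semitones.

    bend_range is the maximum deviation (in semitones) set in the
    receiving synthesizer's program/patch.
    """
    return (bend_value - 8192) / 8192 * bend_range
```

With the common 2-semitone range, the wheel’s full downward throw (0) gives -2 semitones, center (8192) gives no change, and full upward throw gives just under +2.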

Continuous Controller (CC) messages are intended to be used to send performance control information while a note is being held. There are a maximum possible 128 continuous controllers, but in practice only a small subset are used. The most common:

  • 1: Mod Wheel
  • 2: Breath Control
  • 7: Volume
  • 10: Pan
  • 64: Sustain (a switch, 0 – 63 = off; 64 – 127 = on)

MIDI devices don’t have to respond to all of these controllers, nor do they have to send these CC numbers to represent the specified data types listed. The five listed values, however, are commonly used for the listed functions. Software may mask the CC number and simply show you the associated name.

(sonicarts) midi intro

I have a previous post that outlines MIDI information. Jeffrey Hass has a full chapter as part of his online Introduction to Computer Music. (pages 1 – 5, 11 and 12 are the most applicable to what we are doing)

For our purposes (a short intro), it is most useful to understand the following concepts:

  • devices
  • ports
  • channels
  • programs/patches/instruments
  • commands
    • noteon/noteoff
    • continuous controller
    • pitch bend

midi devices

A MIDI device is anything that can send and receive MIDI commands. From a practical standpoint, a device must include at least one MIDI port. Many current devices, like USB keyboard controllers, have multiple MIDI ports for sending and receiving data.

midi ports

A MIDI port can be either a physical, 5-pin MIDI connection, or a logical (computer defined) data connection made through a USB or other connector. Each port can communicate data on 16 MIDI channels.

midi channels

A MIDI channel is a logical data path for communicating information. All 16 channels travel on the same physical cable, but use channel status messages to sort data. A device that is listening to MIDI channel 1 will only respond to data that is sent with that MIDI status address, ignoring messages with other channel addresses. Using MIDI channels allows for routing commands to specific instruments.

midi programs/patches/instruments

A MIDI program/patch/instrument is a definition of how to play specific sounds on a MIDI device. One program could play a piano sound; another program could play a saxophone sound. The three terms can be used interchangeably, but they can also have specific meanings for a particular device or piece of software. For our purposes, Kontakt uses the term instrument to define each sampler device in its rack.

midi commands

Most MIDI commands have accompanying data ranges from 0 – 127 (7 bits allows for 128 possible values).

Noteon and noteoff commands are the most commonly used MIDI messages. A noteon command consists of the MIDI channel, noteon command, note number, and key velocity (how fast the key was pressed, corresponding to force). The data that you will focus on consists of the note number and key velocity. A middle C struck with full force would give a note number of 60 and a key velocity of 127. Lower key velocities would result from slower key strikes. A MIDI instrument has to be programmed to adjust amplitude in response to key velocities. Almost all synthesizers/samplers bypass the noteoff command in favor of a noteon command with a key velocity of 0. For example, a note number of 60 with a velocity of 0 would turn off a sounding middle C.
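A minimal sketch of how a noteon message and the velocity-0 noteoff convention fit together in bytes (the helper names are mine, not from any MIDI library; 0x9n is the standard noteon status byte for channel n+1):

```python
def note_on(channel, note, velocity):
    """Build a 3-byte MIDI noteon message; channel is 1-16, note/velocity 0-127."""
    status = 0x90 | (channel - 1)        # high nibble 0x9 = noteon command
    return bytes([status, note & 0x7F, velocity & 0x7F])

def is_note_off(msg):
    """A true noteoff (status 0x8n), or the common noteon-with-velocity-0 form."""
    command = msg[0] & 0xF0
    return command == 0x80 or (command == 0x90 and msg[2] == 0)
```

So `note_on(1, 60, 0)` is recognized as ending a sounding middle C, exactly as described above.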

Continuous Controllers (CC) send data values that can change any number of instrument parameters. A CC command consists of the MIDI channel, CC command, CC number, and CC value, with our focus on the CC number and value. The Mod Wheel is a commonly used CC, assigned the controller number of 1 almost universally. You have to program an instrument to respond to specific CC numbers. CCs are used to change parameters while a note is sounding.

Pitch Bend can be thought of as a special type of CC, although it has its own MIDI command. It allows for pitch changes to a note while sounding, up or down. You must program an instrument to respond to pitch bend, and by how much (how much is usually specified in semitones, or half steps).

(sonicArts) multitrack concrete music, 2014

this assignment document is also available on the course site on Blackboard

due 11/4, at the beginning of class

overview

Building on your knowledge of sample processing learned in your two-track collage, and DAW experience from your negative space project, create a two-minute work of concrete music using classic musique concrète techniques and granular synthesis. You may use sounds from your 2-track collage, your soundwalk, your negative space project, sounds from online libraries, sample CDs in Bracken (Education Resources Desk), and/or other recorded sounds of your own choosing.

Use an audio editor (Audition or Audacity) and Cecilia to process your samples, and use Logic Pro to assemble your piece. You may use any plugins in Audition or Audacity that you have used before (such as EQ, normalization, amplitude gain change, brick wall limiting), plus delay and reverb plugins in Logic Pro. You may also use limiter and EQ plugins in Logic.

Like your 2-track collage, work in a gestural style, where development and variation of source material is of primary importance. Use Logic Pro like a tape deck, using “clock” time as your ruler, not metrical time.

For pieces of this length, form still doesn’t really need to be much of a concern. Almost anything that works moment to moment will work for two minutes. You are expected to take advantage of the multi-track capabilities of Logic Pro to create layers of activity. It is also expected that each layer of activity will be comprised of multiple tracks. Slow moving background material, derived from the same source material used for quicker gestures at any given point, can help to define a section as much as fast-moving foreground gestures.

Your project should use a 44.1 kHz sampling rate, and 24-bit resolution.

requirements

  • Length must be at least two minutes. Try not to go much longer than two minutes. Bloat will not help your grade.
  • Use at least three different sound source types, i.e., three different sounds of dishes breaking is one sound type (breaking dishes). You would still need two other types of sounds. Feel free to use more sound types, but remember that development of your sounds through processing is a significant goal of the project, and a significant portion of your grade.
  • Build gestures and layers from multiple sound clips in multiple tracks. Although there is no minimum required number of tracks, textures in the range of six to ten tracks will be likely.
  • Keep your files organized! Keep copies of original and Audition/Cecilia-processed files in subfolders of your Logic Pro folder for backup purposes. Any audio file you use in Logic will be imported to the audio files folder of your project, as long as you set your project settings correctly.
  • The finished project should be primarily gesture-based. (Some repeated patterns are ok, especially repeated gestures with variations, but don’t use Logic loops set to repeat 50 times.)
  • Title your work.

grading criteria

  1. Creativity in manipulating your sound material, particularly with regard to creating gestures and motives, and using multiple tracks to create interesting gestures and textures. (40 pts)
  2. Diversity of sound material, and required number of sound sources. (5)
  3. Use of Logic Pro to automate mix parameters (Pan, Level, and plugin parameters). (20)
  4. Meeting the required length (2 minutes; definitely not shorter; probably not much longer). (10)
  5. Overall sound quality (15), which includes:
    - proper amplitude levels (no clipping, maximize peak signal to noise ratio)
    - quality of edits (no pops or clicks!!!!)
    - no distortion or over-amplitude samples
  6. Organization of files (5).
  7. Following the turn-in procedure (5).

100 points total

turn-in procedure

  • Name your Logic Pro project folder with your name and “MultitrackConcrete.” (kothmanMultitrackConcrete)
  • Your project folder should also contain any Audition/Audacity/Cecilia (A/A/C) files you created (including original source material). Using sub-folders is highly recommended. These subfolders for source and A/A/C-processed files can be at the level below your project folder.
  • You can turn in your project folder to me via flash drive, external hard drive, iLocker, or Dropbox. There is a good chance your project will be too large for Blackboard, and downloading from Bb takes too much time. If you use iLocker or Dropbox, compress your project folder, upload it, and email me the link to download it.
  • Your project folder must contain a mixed (bounced) stereo master, at the same directory level as your Logic Pro data file.

 

(sonicArts) cecilia basics and an intro to filter warper

cecilia basics

Cecilia 4 basics gives you an overall rundown of Cecilia preferences and operation. It covers the basics, and is a useful place to return to when you get stymied by the program.

filter warper

If you want to do simple time and pitch changes on an audio file, use the FilterWarper module.

Launch Cecilia4. The editor window opens. Close the editor window, and under the File menu, choose Modules > Time > FilterWarper.

The FilterWarper is designed to produce the smoothest time and pitch shifting of any granular operator in Cecilia. Therefore, you don’t usually want to change any setting beyond the output duration, transposition, window type, and index. Generally, I like to use a Hamming window to produce fewer granular artifacts, like clicking or other noise.

Remember that the Index controls the playback location in the soundfile. If you want to play through the soundfile from start to finish in the forward direction, leave the index running from lower left corner to upper right corner. If you want to play the soundfile backwards, set the Index function to run from the upper left corner to the lower right corner. You can also set the index to play a portion of the soundfile by starting and/or ending somewhere other than a corner.

If you want to experiment with other parameter settings, start with the number of overlaps. If you bring this value down to 1, you will hear amplitude modulation – a tremolo effect. More overlaps create an amplitude output that more smoothly tracks the original amplitude envelope of the source file. More overlaps also thicken the sound and increase the output amplitude overall. You can adjust the output gain in the output section. Try different values and listen to the result.

Depending on the source input, changing the window size (in samples) may create audible results. Window deviation is a maximum random amount to add or subtract from the specified window size. If you are hearing regular amplitude modulation that you wish to randomize, apply a larger random deviation.

(sonicArts) granular synthesis tools, cecilia and paulstretch

paulstretch

Outside readers will look at the title of this post and tell me, correctly, that Paulstretch isn’t a granular synthesis program. True, but we will still use it as part of our added toolbox. Paulstretch is an effect available to use in Audacity. It has only two controls: one for specifying the duration factor (5 means “5 times the original duration”), and one for specifying time resolution. Smaller time values give you better rhythmic resolution (time) but worse frequency resolution. You can experiment with this setting. If you have a very active sound (changes a lot, quickly), then try a smaller time resolution to capture that activity. For sounds that don’t change that quickly, you can use a long time resolution and better frequency resolution. You should experiment by trying medium to longer time resolutions on active sounds, and vice versa, to hear how this parameter affects the output sound.

Paulstretch uses a processing algorithm that is optimized for extreme time stretching of a sound, 8x and above, and for use on pitched input.

cecilia

You can download Cecilia4 for Mac and Windows from Google Code. There is a version 5 of the software, but it does not include the Filter Warper, which is central to a lot of granular processing. Cecilia is written in Python, and looks almost identical on Mac and Windows. It can be a little clunky in some areas.

To hear audio in Cecilia, you should go to Cecilia > Preferences… and click on the speaker icon. Choose “PortAudio” for your audio driver, and whatever interface device you want to use for listening (builtin, MOTU, AudioBox, etc.). I know that I always tell you to use Core Audio on a Mac, but Cecilia is the exception.

(sonicArts) granular synthesis intro

intro

Granular synthesis provides us with a method of changing pitch and time independently of each other. It works on the principle that any sound can be thought of as discrete particles, or segments of time. These discrete particles are referred to as grains. The process is analogous to color printing or viewing images on a TV or computer monitor. Images are made up of discrete pixels of limited colors, combined to create continuous color fluctuation in what appears to be a single image. For sound, grain durations typically fall between 1 ms and 100 ms.

grain parameters

  • Playback speed
  • Index location (location in source soundfile used to read audio)
  • (maximum) amplitude
  • Grain envelope (shape of the amplitude envelope)
  • Duration
  • Panning

Grain parameters do not vary within a single grain. To create a sound that changes over time, you change the initial parameters of subsequent grains.
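To illustrate how parameters get fixed grain by grain, here is a toy scheduler for one grain stream (the function and parameter names are invented for illustration; the deviation parameter behaves like Cecilia’s window deviation control):

```python
import random

def make_grain_schedule(total_dur, grain_dur=0.05, dur_deviation=0.0, seed=None):
    """Return (start_time, duration) pairs for one stream of back-to-back grains.

    dur_deviation is a maximum random amount added to or subtracted from
    grain_dur, randomizing each grain's length.
    """
    rng = random.Random(seed)
    grains, t = [], 0.0
    while t < total_dur:
        d = grain_dur + rng.uniform(-dur_deviation, dur_deviation)
        grains.append((t, d))     # this grain's parameters are now fixed
        t += d                    # next grain starts when this one ends
    return grains
```

Each tuple is frozen at creation time; change over time comes only from handing different values to later grains, exactly as described above.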

parameters of grain combinations

Parameters of grain combinations affect the use of two or more grains together or in sequence.

  • Frequency of grains (how many grains per second)
  • Density of grains (the number of grains happening at one time, can be thought of as the vertical analog to the horizontal frequency of grains)
  • Number of grain streams (can be related to density), sometimes referred to as the number of overlaps.

windows (grain envelopes)

  • A window is a short-time amplitude envelope. The total length of the window matches the grain duration.
  • The window shape can be chosen to emphasize legato amplitude connections between grains, amplitude discontinuity between grains, or anywhere in between.
  • Although Cecilia defaults to a sine window for the Filter Warper, better results will usually be had with the Hamming window (more like a bell curve).

overlaps and streams

  • A stream is the individual series of grains occurring one after another.
  • Multiple streams involve overlapping envelopes.
  • Overlapping envelopes generally produce a smoother amplitude output.

high-level (macro) organization

The number of parameters per grain to control, and the number of grains to generate per second, require some type of macro control.

  • Pitch-Synchronous organization analyzes the sound file ahead of time to set parameters so that a specified pitch will result. The parameter settings of individual grain parameters are linked. Kontakt tone machine uses pitch-synchronous organization. At this level, we do not have any tools to produce pitch-synchronous organization.
  • Asynchronous organization means that all grain parameters are specified independently of each other. Control functions are usually specified to change parameters over time. Asynchronous means that any changes to one grain have no effect on any other grain.
  • Quasi-synchronous organization indicates that some, but not all, parameters are linked. It is the most common organization offered in the programs we use (Cecilia). Most often, grain duration determines the frequency of grains, as grains are created in succession. (This organization leads to a type of AM synthesis, or tremolo.)

(sonicArts) finishing your negative space project

some final issues before you finish your negative space project…

using the “advanced” logic

The lab computers have default program preferences. Make sure you refer to my previous post on preferences and project settings.

Most importantly, make sure that you’re running Logic with the Show Advanced Tools feature turned on. The default state doesn’t have this feature turned on. You will know right away when you go to open a project and the tracks window has “wood sidebars” on each side. Don’t even bother finishing the initial project creation. Cancel, and go to Preferences > Advanced Tools, and click to select Show Advanced Tools. Additional options will appear in the preference window. Select all of the additional options.

You can then create a project and work according to what I have been demonstrating in class and in videos.

have enough layers, and have mix automation

You will be graded on whether you put together combined source material from different tracks. Make sure that you have created textures that make use of more than one audio file at a time. You don’t have to always have combinations, but a significant portion of your work should have them. Also, you must have volume and pan automation on each track.

bouncing your mix

The final step for any multitrack project is to “bounce” the mix down to a single stereo audio file. To do so, you need to do some prep work, then the actual bounce.

First, play your project with the mixer pane or window visible. You want to be able to see if any audio track or output track is going over the max amplitude. Just above the volume/gain slider in each track are two windows that display dB values. The one on the left is the setting for the fader. The one on the right is the actual amplitude being sent out the track after any effects/processors and volume gain changes have been applied. No individual audio track should go above 0 dB, nor should the output track. The background colors will change if that happens (yellow for audio, red for output track). You can also look to see if any of the dB values have gone into the positive number range, which is over the max amplitude allowed. Mixing is addition, and signals that don’t distort on their own may add together to create distortion. Adjust volume automation accordingly.
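The “mixing is addition” point is easy to see numerically. A quick sketch (not a Logic feature, just the standard dB conversions):

```python
import math

def db_to_amp(db):
    """Convert a dB level to linear amplitude (0 dB = 1.0, the digital maximum)."""
    return 10 ** (db / 20)

def amp_to_db(amp):
    """Convert linear amplitude back to dB."""
    return 20 * math.log10(amp)

# Two tracks each peaking at -3 dB can sum to a peak above 0 dB (clipping)
# when their peaks line up in phase.
combined_db = amp_to_db(db_to_amp(-3) + db_to_amp(-3))
```

Two in-phase -3 dB peaks sum to roughly +3 dB, which is why individually clean tracks can still distort the output bus.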

Second, change your counter display to show both beats and time. You click on the icon in the left of the display to change display formats. You need to see beats (musical time) so that you can see how long your project is in musical time. The bounce dialog box uses musical time only to specify the section of the project to be bounced. Move the playback head to the end of your project and read the musical time display.

Third, start the bounce process itself. Choose File > Bounce > Project or Section… to bring up the bounce dialog box. Choose PCM as the “destination” for bouncing. Below that is the duration setting of the bounce. Start at all 1′s. Enter your end time, but add a few extra measures so that you don’t cut any audio off. Choose to bounce “offline” so that the computer will compute your bounce as fast as it can. Next, I choose WAV as my file format, a 44.1 kHz sampling rate, and 16-bit resolution. Choose one of the Dither options, as you are converting down from 24 bits to 16. Then click on Bounce, name the file accordingly, and save it into the same folder level as the Logic project file.

Fourth, open up your bounced file in Audition. You want to listen to it to make sure it contains the right audio and has no errors. And you want to delete excess blank audio at the end of the file. I usually leave 3 – 4 seconds of silence at the end of my project. Save the file again.

Fifth, and finally, right-click your project folder, or select it and go to the gear menu of the Finder window, and choose Compress. That will create a single zip file of your entire project. Upload that and email me the link.