(max) lecture videos posted for groove~, buffer~, and waveform~

YouTube videos have been posted to make up for Thursday’s class. They’re still processing, and not yet ready for viewing; check back to watch them once they’ve finished processing. One covers groove~ and buffer~. The other adds the waveform~ object. I know a lot of you didn’t look at the last videos I posted. Remember that these are class assignments.


(max) matrix~, suboptimal reverb, and an accidental forbidden planet patcher

example patchers

echo and reverb

Max does not come with a reverb processor, and most people get around this by using plugins or routing through a DAW. But we can make something of low quality that gives some approximation by using delay lines. We can try to simulate one by using allpass filters and delay lines, but allpass filters won’t work in our matrix~ (coming later), so let’s just make a simple one with delay lines.

Echo is easy. The <delayEcho.maxpat> shows a simple echo, using tapin~/tapout~ with feedback. Adjust the amount of feedback to control how long the echo lasts.

The <delayReverbSuboptimal.maxpat> just uses three tapin~/tapout~ delay line pairs with feedback. Use higher amounts of feedback (around .9), and delay times that are long enough to avoid producing a resonant pitch and that are not multiples of each other. I used 90 ms, 100 ms, and 130 ms for delay values. It’s not a great reverb, but it does simulate the general crappiness of old spring reverbs.
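Max patchers are graphical, so there is no code to quote, but the arithmetic behind a tapin~/tapout~ pair with feedback is easy to sketch offline. Here is a rough Python approximation of the three-delay-line idea; the function names and the offline sample loop are mine, not part of the patcher.

```python
import numpy as np

def comb_delay(x, delay_ms, feedback, sr=44100):
    """One tapin~/tapout~ pair with feedback, as an offline comb filter."""
    d = int(sr * delay_ms / 1000)            # delay time in samples
    y = np.zeros(len(x))
    for n in range(len(x)):
        delayed = y[n - d] if n >= d else 0.0
        y[n] = x[n] + feedback * delayed     # input plus fed-back output
    return y

def suboptimal_reverb(x, sr=44100):
    """Three parallel delay lines (90/100/130 ms) with .9 feedback."""
    wet = sum(comb_delay(x, t, 0.9, sr) for t in (90.0, 100.0, 130.0))
    return wet / 3.0
```

Feeding an impulse through `comb_delay` shows why non-multiple delay times matter: each line produces its own decaying echo train, and when the trains don’t line up you get a denser, less pitched wash.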

The [click~] object is used in both demo patchers to create very short (technically one-sample) sounds. The multislider object lets you adjust the frequencies present in the click.

matrix~ and matrixctrl

The [matrix~] object lets you connect multiple audio inputs and outputs, and switch between them like a patchbay. The <ForbiddenPlanet-matrix.maxpat> demonstrates an example using three sound sources (pluck, Theremin, and sequencer). These sound sources are all from previous patchers. There are also three effects (echo, reverb, and degrade). The matrix~ object (down towards the bottom of the patcher) specifies 6 inputs, 5 outputs, and a default connection gain of 1.

IMPORTANT: Some Max objects don’t allow for feedback circuits. If you connect any object in a way that feedback is possible (like inputs and outputs to a matrix~), even if the connections are not active, Max will shut down audio and you will hear no sound. I can’t find a definitive list of these objects, so you have to use trial and error. In the course of setting up this patcher, I discovered that [degrade~] is one of those objects. So [degrade~] connects its outputs directly to the ezdac~.

The easiest way to control a [matrix~] is through a [matrixctrl] object. Towards the top right is the [matrixctrl], with rows of dots. Clicking on the dots turns connections on and off. Columns represent inputs, and rows represent outputs. Counting of columns and rows starts with 0, so the input columns are (0) pluck, (1) Theremin, (2) sequence, (3) echo, (4) reverb, and (5) degrade (which is not connected). The outputs are (0) L (to ezdac~), (1) R, (2) echo, (3) reverb, and (4) degrade.

Start by clicking the dots in column zero (pluck), rows zero and one (the L and R outputs). Play your MIDI keyboard. You should hear the pluck output of the Karplus-Strong algorithm. Next, click column zero, row two. The pluck is now going both to the output and to the echo processor. To hear the output of the echo processor you must click on rows zero and one in column three.
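Conceptually, matrix~ is just a gain matrix multiplied against the vector of inputs, one sample at a time. A sketch of the routing described above, with numpy standing in for the signal chain (the layout mirrors the matrixctrl: columns are inputs, rows are outputs):

```python
import numpy as np

# Rows are outputs (0 L, 1 R, 2 echo, 3 reverb, 4 degrade);
# columns are inputs (0 pluck, 1 Theremin, 2 seq, 3 echo, 4 reverb, 5 degrade).
routing = np.zeros((5, 6))
routing[0, 0] = 1.0   # pluck -> L       (column 0, row 0)
routing[1, 0] = 1.0   # pluck -> R       (column 0, row 1)
routing[2, 0] = 1.0   # pluck -> echo    (column 0, row 2)
routing[0, 3] = 1.0   # echo return -> L (column 3, row 0)
routing[1, 3] = 1.0   # echo return -> R (column 3, row 1)

inputs = np.array([0.5, 0.0, 0.0, 0.2, 0.0, 0.0])  # one sample per input
outputs = routing @ inputs                          # one sample per output
```

Clicking a dot in the matrixctrl is equivalent to setting one entry of `routing` to the connection gain.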

Experiment. You can send something to echo, then reverb, then degrade. I named the patcher after the movie Forbidden Planet, which was the first movie to have an all-electronic soundtrack. Being from the 1950s, it has lots of stereotypical sci-fi sounds. Try using the Theremin with a band-limited noise source with reverb and make fast, swooping gestures along with short gestures. Then go listen to the main title from the movie.


(max) mouse theremin, lcd, band-limited noise

example patchers and video playlist

brief overview

Since I take you through explanations in the videos, I’m just providing an outline here.

Part 1 shows you how to map mouse tracking, via [mousestate], to frequency and amplitude of an adjustable oscillator.

Part 2 covers drawing with an [lcd] object, in a more limited way than the tutorial patcher. The focus is on:

  • understanding space within the [lcd],
  • outputting x/y coordinates when the mouse button is pressed,
  • outputting x/y coordinates when the mouse button is up,
  • clearing drawings from the [lcd],
  • specifying pen color and pen size,
  • specifying shapes, and
  • recording and manipulating sprites.

Part 3 shows the finished patcher, using an [lcd] object to draw and transmit information to an adjustable oscillator with a selectable waveform. Some highlights:

  • initializing drawing parameters for the [lcd],
  • using [split] objects to limit the x/y coordinates sent to scale to those in the [lcd], and
  • creating band-limited noise as one of the waveform options.

(max) karplus-strong plucked string algorithm

The Karplus-Strong synthesis algorithm is an early example of physical modeling synthesis. Physical models generally provide parameter controls that relate to real world changes. The Karplus-Strong algorithm uses noise fed into a feedback delay line, with lowpass filtering of the feedback, to produce plucked string textures. You can read about it online at Music and Computers: A Theoretical and Historical Approach, Chapter 4 (Section 4.9). The site is an online textbook by Phil Burk, Larry Polansky, Douglas Repetto, Mary Roberts, and Dan Rockmore.




For the pluck, the patcher uses either [noise~] or [rand~]. [rand~] provides random noise up to the frequency specified in the argument (or message sent). Using [rand~] produces an excitation that is less chaotic, and hence a less bright timbre in the pluck and resulting decay. The frequency message should be below the Nyquist frequency (1/2 the sampling rate) to produce any noticeable change from using [noise~]. The pluck/excitation source is shaped by an [adsr~] envelope, with 1 ms attack, no initial decay, sustain amplitude of 1., and a 1 ms release. The [adsr~] object is sent the maxsustain 10 message at load, which means that it only needs to be triggered on. The object will move to its release section after 10 ms of sustain time without a zero-amplitude input.

The short burst of noise is sent directly out and also to a [tapin~]/[tapout~] pair. (Actually, two pairs, to represent two strings. More on that below.) To produce the pitch that corresponds to the MIDI note struck, you must convert MIDI to frequency, and then divide 1000 by that frequency to get the period of the frequency in milliseconds. Setting the delay time to the period causes the delay line to resonate at that frequency and its harmonics.
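The delay-time calculation is easy to check numerically. A sketch, assuming standard A440 tuning for the MIDI-to-frequency step (the same math as Max’s mtof object):

```python
def mtof(midi_note):
    """MIDI note number to frequency in Hz, assuming A440 tuning."""
    return 440.0 * 2 ** ((midi_note - 69) / 12)

def delay_ms_for_note(midi_note):
    """Period of the note's frequency in milliseconds: 1000 / f."""
    return 1000.0 / mtof(midi_note)
```

For MIDI note 69 (A440), the delay time is 1000 / 440, or about 2.27 ms, so usable delay times get very short very quickly as pitch rises.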

The feedback amount in the delay line affects the decay time of the string, as does the lowpass filter. Experiment with these settings. Note that the feedback amount cannot be set above .999, to prevent a runaway feedback loop of ever-increasing amplitude.

The second delay line has its time set by a multiplication factor to simulate a second string, such as you might find in a harpsichord or 12-string guitar. You can set the string to be detuned very slightly, or to be vibrating at some ratio to the fundamental.



Note that if the frequency goes too high, or the delay time is too short, the output pitch from the delay line will hit a ceiling, or maximum point. This maximum is determined by the signal vector size, set in the Options > Audio Status window. The signal vector size divided by the sampling rate gives the shortest possible tapout~ delay time in seconds. You can set the signal vector to smaller values to obtain shorter delay times. The only drawback is increased processor demand; you will notice your CPU percentage increase as you lower the signal vector size.
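The minimum-delay arithmetic is simple enough to check by hand. A sketch (function names are mine):

```python
def min_delay_ms(signal_vector, sample_rate=44100):
    """Shortest possible tapout~ delay: vector size / sample rate, in ms."""
    return 1000.0 * signal_vector / sample_rate

def max_resonant_freq(signal_vector, sample_rate=44100):
    """Highest pitch the feedback delay line can resonate, in Hz."""
    return sample_rate / signal_vector
```

At a 44.1 kHz sampling rate, a signal vector of 64 samples gives a minimum delay of about 1.45 ms, which caps the resonant pitch around 689 Hz; halving the vector size doubles the ceiling.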

further modifications

The example shows you how to implement two strings. Neither string offers relative gain control, which you could add. You could also add some type of gain control to the portion of the noise that is sent directly to the output, to adjust the level of the pluck relative to the decay.

Change from noise~ to rand~, then send different frequency values to rand~ and listen to the changing sound. Try adjusting the lowpass cutoff frequency and the feedback amount to gain further control of the instrument.


(max) fm synthesis, two implementations

demo patchers:

fm synthesis

Plenty of articles and resources exist describing frequency modulation (fm) synthesis. If you want to investigate online, I recommend Jeff Haas’s Intro to Computer Music, Vol 1, chapter 4. I’m going to focus on implementing fm synthesis in Max in a way that produces predictable, and therefore controllable, sidebands. The examples are modeled after the implementation of fm synthesis in the Yamaha DX series of synthesizers, which paired each oscillator with its own envelope generator.

important points

To understand the implementation of fm synthesis, you must keep in mind these important points:

  • The output of the modulator is added to the frequency input of the carrier. (frequency of note + modulator oscillator out)
  • The amplitude of the modulator does not range from 0 to 1. You multiply the envelope of the modulator (range: 0 – 1) by the number of sidebands you want (range: 0 – someNumber) by the frequency of the modulator. For example, if you want to produce 5 sidebands with a modulating oscillator frequency of 440 Hz, you multiply the envelope*5*modulatorFrequency to get the modulator amplitude.
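That amplitude scaling is just a product of three terms. A minimal sketch of the math (names are mine, not from the patchers):

```python
def modulator_amplitude(envelope, index, mod_freq):
    """Modulator amplitude = envelope (0-1) * modulation index * modulator Hz."""
    return envelope * index * mod_freq

# 5 sidebands wanted, 440 Hz modulator, envelope fully open:
amp = modulator_amplitude(1.0, 5, 440.0)   # envelope * 5 * 440
```

This is why the modulator’s output must never go to the audio output directly: with the envelope open, its “amplitude” here is 2200, a frequency deviation in Hz, not a 0 – 1 gain.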


<FMsynth1.maxpat> demonstrates a simple, two oscillator frequency modulation algorithm, designed to be used inside a [poly~] object. It makes extensive use of [p] subpatchers, named according to function. Pitch modulation from pitch bend, LFO, or envelope occurs like before. The [p modulatorOp] subpatcher implements the amplitude scaling for a modulating oscillator. The [p carrierOp] subpatcher accepts that modulator output and adds it (at audio rate) to its frequency input.

Within the standalone, mono version you can specify the ratio (multiple of the input frequency) of the modulator frequency and carrier frequency. Envelope controls for each are in subpatchers [p xxxxxEnvelopeControls], where you can specify [adsr~] settings. Receive objects also allow for these settings to be sent as a list from the parent polyphonic patcher. There is also an input object for setting the modulation index, which specifies the maximum number of sidebands possible.

FMPolySynth1 and [pak]

<FMPolySynth1.maxpat> demonstrates controls for <FMsynth1.maxpat>. Following the send objects should make its functions clear. All changeable parameters are connected to a [preset] object to store settings. The only new object is [pak], which is like [pack] except that input to any inlet causes the entire list to be output. If you change the release time, for example, the entire list is sent. Sending the envelope settings as lists makes for cleaner programming, as you don’t need a send/receive object pair for every envelope parameter of every envelope.


<FMsynth2~.maxpat> extends the functionality of <FMsynth1~.maxpat> by creating 8 operators that can each be either carriers or modulators. Two operators ([p operator2] and [p operator1]) are connected in the same algorithm as the default settings of <FMsynth1~.maxpat>. All of the operator subpatchers have 2 outlets. If you open any of the operator subpatchers you will see that the left outlet has its amplitude scaled only by its envelope generator, in the range of 0 – 1. This left outlet is for carrier signal output. The right outlet has its amplitude scaled by the number of sidebands and the oscillator frequency, which means that the right outlet is used for modulating the frequency of other operators. DON’T CONNECT THE RIGHT OUTLET TO AUDIO OUTPUT!

Additional receive objects have been added to the operator subpatchers to reduce the number of needed inputs.

No parent polyphonic patcher has been provided, as you can implement it given your experience with the first patcher in this post. You will have to create the appropriate send objects in your parent patcher for each parameter you want to control.

You can arrange the operators in any fm configuration. For example:

  • 2 > 1 and 4 > 3, for a pair of modulator/carriers.
  • 4 > 3 > 2 > 1, for very bright and possibly noisy timbres. You can even chain all eight operators in a single series.
  • 3 > 1 and 2 > 1, for an instrument that has a different timbre at the attack from the one at the sustain.

You can even use each operator as a carrier only, with no frequency modulation. Such a configuration yields an additive synthesis instrument.

other possibilities

With the previously demonstrated LFO capabilities of the amplitude modulation patcher, you can modulate the output of any modulator to create a fluctuating number of sidebands. You can also use multiple modulators/carriers and slightly detune them over longer periods of time for interesting results.

You can (and should) add filters to the instrument patchers (FMsynth1~ and FMsynth2~) to further control timbre over time.


(max) applying pitch and amplitude modulation, in general

other pitch or amplitude parameters can be modulated the same as note pitch and note amplitude

I’ve spent a fair amount of time on synthesis, and recently on modulation of pitch and amplitude. You need to keep in mind that the techniques I’ve shown for pitch modulation can be used to modulate any pitch control. For example, the modulation of a cutoff frequency for a filter is handled the same way as modulating a MIDI note. In each case, you have an

  • offset (the MIDI note for pitch, or the MIDI note plus starting offset of the filter for cutoff frequency),
  • depth (the amount of pitch modulation or center frequency modulation),
  • and the modulation source.

Likewise, you can amplitude modulate any signal, including the signal gain of a filter or the depth of a pitch modulation source. You simply need to include a depth offset to make up for any lack of amplitude modulation depth in the modulation chain.

Just remember that smooth modulation takes place at the audio rate, so you will need to convert messages to audio rate signals in your modulation routines. And don’t forget that lots of things can generate messages in Max. You’ve dealt with random number generators, counters, sequencers, etc. Take some time to experiment.


(max) amplitude modulation, drawing your lfo waveform

amplitude modulation, ring modulation

Revisiting our synthesis terminology, both amplitude modulation (AM) and ring modulation (RM) synthesis involve multiplying the amplitude of two signals together. The important difference is that RM involves two bipolar signals (oscillators moving between +/-1), and AM requires that one of the signals be unipolar (oscillating between 0. and 1.). With RM, any adjustment to either signal affects the resulting signal amplitude. If your modulating signal is 0. amplitude, the result of the modulation is 0. amplitude (anything times 0. = 0.). Creating an effective lfo modulation effect requires the ability to control the depth of the modulator signal independent of the output signal amplitude. To achieve that, you must offset the amplitude of the modulating signal by subtracting its amplitude from 1.

lfo amplitude modulation

also uses <WaveformSynthWorking2~.maxpat> (used for previous synths)

<BasicWaveformPitchSynthAM.maxpat> implements simple AM synthesis at LFO rates in the [p ampMod] subpatcher. AM synthesis can be implemented in the parent patcher (without any changes to the synth subpatcher) because it addresses the total of all notes being played, unlike pitch modulation. Pitch modulation had to be implemented in the synth subpatcher, since it affects each note playing in an individual way based on the conversion of MIDI note to frequency.

[p ampMod] has four inlets: audio signal to be modulated, lfo rate, lfo depth, and lfo waveform select. lfo rate and depth expect an input range of 0 – 127 (from a MIDI CC), which the subpatcher then scales to a frequency range of 0 – 50 and a depth of 0 – 1. You can easily adjust the input and the scaling to whatever you like. My assumption is that this is easiest to control from an external MIDI controller, although I didn’t hook up MIDI CCs to do that.

Adjusting the amplitude offset of the modulating signal represents the major work being done in the [p ampMod] subpatcher. When the subpatcher gets a message to adjust lfo depth (amplitude), it takes the scaled amplitude and subtracts it from 1. to get the offset amplitude to add back to the modulating signal. The lfo depth/amplitude is multiplied by the output of the lfo oscillator, and then the offset is added to the result of that operation.

Consider the following examples. If the lfo depth is 0 (zero), 1 – 0 = 1 for the offset. The result of the depth multiplied with the lfo signal will be 0, but with the offset of 1 added to the modulating signal before it is multiplied with the input signal, the result is the input signal unchanged (input signal * 1 = input signal). If the lfo depth is .5, the lfo output will have a maximum amplitude of .5, and the offset of .5 will be added to the modulating signal before multiplication. The output max amplitude of the AM operation is always the same as the input amplitude: whatever amplitude is removed through the lfo depth is restored by the offset.
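The depth/offset arithmetic from these examples can be written as one line. A sketch, assuming a unipolar lfo in the range 0 – 1 as in the patcher (the function name is mine):

```python
def am_modulator(lfo, depth):
    """Scale a unipolar lfo (0-1) by depth, then add the 1-depth offset."""
    offset = 1.0 - depth
    return depth * lfo + offset   # multiply this by the input signal

# depth 0:  modulator sits at 1, so the input passes unchanged.
# depth .5: modulator swings between .5 and 1; peak gain is still 1.
```

The modulator always peaks at exactly 1, which is the point of the offset: depth controls how far the gain dips, never how loud the peaks get.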

drawable waveforms

I’ve added one more feature to the [p ampMod] subpatcher. Inside that patcher is the [p ampLFOscillators] subpatcher. It has the same basic features as the previous pitch modulation subpatcher, in that it is basically a waveform selector. I’ve added a breakpoint function editor [function] paired with [line~], which can act as an oscillator. When [line~] finishes outputting the ramp signal described by all pairs of breakpoints and times, it sends a bang out its right outlet. That bang can be sent back to retrigger the function editor to output its ramp coordinates again, creating a repeating waveform that you specified in the editor.

To make the process responsive to changes in the lfo frequency rate, you must do a little conversion from audio to message rate, and then convert frequency to period (T) and multiply by 1000 to convert to milliseconds. The process with objects:

  • the incoming lfo frequency at audio rate comes into a [number~] object.
  • the [number~] object is set to permit signal monitoring mode only. It will read the audio signal and send it as a float out the right outlet at the interval specified in the inspector (last property in the list). The output is the current frequency expressed as a float.
  • the frequency is sent to a reverse divide object [!/ 1.], with 1. divided by the frequency equaling the corresponding period (T), the length of time in seconds it takes to complete one cycle at the specified frequency.
  • the Period is multiplied by 1000 to get the Period in milliseconds.
  • the Period in milliseconds is combined with a setdomain prefix [prepend setdomain] and sent to the function editor. The function editor is now adjusted so that its length (domain) is the same as one cycle at the specified lfo frequency.
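The conversion chain above reduces to two arithmetic steps. A sketch of the same math (the function name is mine):

```python
def lfo_period_ms(freq_hz):
    """[!/ 1.] then [* 1000.]: one lfo cycle's length in milliseconds."""
    period_s = 1.0 / freq_hz      # reverse divide: 1. / frequency = T
    return period_s * 1000.0      # seconds -> milliseconds
```

So a 2 Hz lfo needs a function-editor domain of 500 ms, and slowing the lfo to 0.5 Hz stretches the drawn shape out to 2000 ms.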

You can draw any arbitrary shape in the function editor, but remember that you must have breakpoint values at the very beginning and very end so that the length of the function described matches the domain of the editor. I’ve saved two shapes, a 1/10th saw and a 1/2 saw. You can do more, and you can add inputs to select the different presets.

true sawtooth and triangle waveforms

We’ve been using anti-aliased sawtooth, triangle and square waves for audio-rate synthesis, and we’ve been limited to sawtooth waves that go up with saw~. For LFO purposes we don’t have to worry about aliasing, so we can use truer waveforms, which Max provides. I’m holding off on [train~] for pulse/square waves, but I’m using [triangle~] and [phasor~]. [triangle~] can actually provide sawtooth-like waves because you can adjust where the peak falls within the period. Using 0.5 creates a true triangle waveform. [phasor~] outputs a rising ramp from 0 up to 1, and is so named because it is often used to drive oscillators or audio buffers by specifying the phase location. In fact, [triangle~] expects a phase signal to drive it, and [phasor~] operating at the lfo frequency provides it. For sawtooth waves, the output of the first [phasor~] in the subpatcher is subtracted from 1, reversing the signal to go down from 1 to 0. The second [phasor~] provides an up signal. The [umenu] in the parent patcher includes the options for saw down and saw up.
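These LFO shapes are all simple functions of a rising ramp. A rough offline Python sketch of the idea (an approximation in my own names; the real [triangle~] is a signal object whose arguments differ in detail):

```python
import numpy as np

def phasor(freq, dur_s, sr=1000):
    """Rising ramp from 0 to 1 at freq Hz, like [phasor~]."""
    n = np.arange(int(dur_s * sr))
    return (freq * n / sr) % 1.0

def saw_down(freq, dur_s, sr=1000):
    return 1.0 - phasor(freq, dur_s, sr)   # subtract from 1 to reverse

def triangle(freq, dur_s, sr=1000, peak=0.5):
    """Triangle driven by a phasor; peak sets where the maximum falls."""
    p = phasor(freq, dur_s, sr)
    return np.where(p < peak, p / peak, (1.0 - p) / (1.0 - peak))
```

With `peak=0.5` you get a true triangle; pushing `peak` toward 0 or 1 skews it toward a down or up sawtooth, which is the sawtooth-like behavior described above.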

why does it sound like the AM lfo doubles in speed?

When the lfo depth goes above .5 you will start to notice an apparent doubling of the tremolo rate (the lfo rate). This doubling happens as the offset is reduced and the modulating signal oscillates above and below 0 amplitude. Each positive AND negative deviation produces an audible “bump” that is heard as distinct. There is a way to fix this issue, but I’ll leave it to you (and I can go over it in class).


(max) filter intro


The patcher is monophonic only, and uses only [lores~]. It lets you set the cutoff frequency by adding the note input and an interval above in half steps.

Apologies for the basic post, but I’m catching up. More info will come.



(max) synth with pitch modulation

adding pitch modulation, part 1 (and selectable waveforms)

<WaveformSynthWorking1~.maxpat> (for poly~)

I’ve adjusted the SimpleSquareKK2~ patcher in a number of ways. There are a lot of encapsulations now, to make the patcher clearer by reducing clutter, with subpatchers named after the functions they perform. Encapsulations:

  1. note input and velocity handling
  2. envelope and amplitude scaling
  3. portamento control
  4. pitch conversion from MIDI to frequency
  5. oscillator waveform selection (new section)
  6. gain control

The selectable waveform option is the most significant addition. Multiple oscillators feed into a [selector~] object. A [umenu] controls what inlet is passed to the outlet. The waveform selection is made from the parent patcher.

adding pitch modulation, part 2


Having simplified the subpatcher, we can add pitch modulation controls, which include LFO, pitch bend, and pitch envelope control. As an overview:

  • express your pitch modulation in half steps, since frequency varies by register.
  • set up modulation to be +/- 1 half step, then multiply output by actual pitch modulation depth you want (including fractions of half steps)
  • add pitch modulation to MIDI note input at audio rate. MIDI note input is converted to an audio rate signal before adding pitch modulation
  • convert the combined MIDI note input and pitch modulation to frequency at audio rate

All of this is well-commented in the example patchers. Make sure you examine each of the subpatchers.

pitchMod (inside of WaveformSynthWorking2~) contains a subpatcher for LFO modulation, pitch bend, and pitch envelope. Rate, depth, and bend amount are sent from the parent patcher. Pitch envelope duration and pitch envelope depth are also sent from the parent patcher. The controls in the parent patcher are inside [p pitchModControls]. The pitch envelope also needs to be triggered by noteon events, so it gets noteon velocity from note input to trigger the envelope. I have not included sustain control in the pitch envelope, but you could modify it to make it work.

The pitch envelope subpatcher also has added x and y zoom controls via rsliders (range sliders). You can zoom in on a small range by selecting a small area in the rslider (corresponding to the area you want to zoom in on). There is a reset button to quickly zoom out all the way.

The pitch modulation controls from the parent patcher could use physical control (MIDI CC). You can easily add that, and scale appropriately.

using [number~] instead of line~

In many of the subpatchers of <WaveformSynthWorking2~.maxpat> I’m using a [number~] object to convert a control rate message to an audio rate signal. If you set [number~] to “permit signal output mode” only, then [number~] will convert the message to an audio signal. Further down the inspector you can also set a ramp time in ms, which is like the time argument in a message pair to [line~].


(max) encapsulation

As you learn more objects and more complicated ways of interacting with MIDI and synthesis with Max, your patchers will likewise become increasingly complex. While use of presentation mode can make your final interface much easier to use and understand, you will need to make extensive use of encapsulation if you want to keep any sense of clarity as you develop your patchers. Encapsulation takes two main forms: the [patcher] object, and patchers saved as files that can be used as objects in other patchers. Liberal use of well-named encapsulated patchers and objects will go a long way towards making your patchers easier for you and others to understand. I’m going to focus on the [p] object only.

encapsulation and the [patcher] object

I talked about encapsulation in a previous post. Max 7 adds an incredibly easy way to take an existing partial network of objects and encapsulate them with a single command. Select the group of objects you want to encapsulate, then use the Encapsulate command: Edit > Encapsulate (<shift>+<cmd>+<E>). The objects will be moved into a [p] (patcher) object, with all connections and relative placements intact. Any connection to an object outside the selection will automatically generate an inlet or outlet as needed, and connections to those inlets/outlets will already be made. All you have to do is name the [p] object.




Be sure to name the [p] object (something useful and explanatory of its function – gainControl would be good in this case), and also to add an explanation of the function of the outlet (or inlet) in the Comment attribute field in its Inspector window. The comment is what shows in the popup when you hold your mouse over an inlet or outlet of an object.