Category Archives: max


(sonicArts) online storage

I’ve been pushing iLocker in class, an online storage solution offered to all of you from Ball State. (I won’t call it free, given what you pay in technology and student services fees, not to mention tuition.)


If you don’t have a good FTP program, or don’t know how to set one up on a computer that isn’t your own, it is ugly to use. UGLY.

So I would recommend Dropbox, or Box, or some other free online storage service. Make sure you put the file in your public folder, and copy the link to give to me.



(max) sample and hold goodness, and some more filter fun

It’s getting late in the semester, and I’m feeling a little punchy.

sample and hold

Sample and hold was/is a common unit generator on advanced analog synthesizers. When triggered, a sample and hold unit will read (sample) and output the value of an arbitrary signal input. If the input is a noise source, sample and hold will provide you with a discrete random value. If the input source is a repeating wave (typically at an LFO rate), sample and hold can give you a (semi) repeating set of values, like an analog sequencer.
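The logic is easy to sketch outside of Max. Here is a short Python sketch of the behavior described above (the function name and the re-arming logic are mine; this illustrates what sah~ does, not its actual implementation):

```python
import random

def sample_and_hold(source, trigger, threshold=0.01):
    """Sample `source` each time `trigger` rises above `threshold`;
    hold that value until the trigger falls and rises again (like sah~)."""
    held = 0.0
    armed = True            # ready to sample on the next rising crossing
    out = []
    for src, trig in zip(source, trigger):
        if trig > threshold and armed:
            held = src      # sample
            armed = False   # ignore the trigger until it falls again
        elif trig <= threshold:
            armed = True    # re-arm once the trigger drops back down
        out.append(held)    # hold
    return out

# Noise sampled by a low-rate trigger gives discrete random values:
noise = [random.uniform(-8000, 8000) for _ in range(6)]
gate = [0.0, 1.0, 0.0, 1.0, 0.0, 1.0]
print(sample_and_hold(noise, gate))
```

Feed it noise and you get the random "Bleep Bloop" values; feed it a repeating wave and you get the semi-repeating sequencer patterns.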

Download SampleAndHoldSimple.maxpat.

The demo patch illustrates the two functions described above. In “Pure Bleep Bloop,” a noise source (multiplied by 8000) is connected to the sah~ input. The sah~ object has an argument of 0.01, which is the trigger value. When a control signal in the right inlet rises above 0.01, it triggers the sampling and output of a value by sah~. To trigger another value, the control signal must fall below 0.01 and then rise above it again. The sah~ value will be between +/- 8000, or whatever value you type into the random frequency range float object. A phasor~ with a frequency of 8 Hz provides the trigger. The sampled output goes through an abs~ object, converting any negative number to positive. That number is then fed into the frequency input of a saw~ object, giving you the classic Bleep Bloop sound of random electronic music.

The right hand side of the patch (“Odd Sequencer”) uses a sine tone (cycle~) as the source to be sampled, giving you a type of sequencer. If you type a phasor~ frequency, the corresponding period becomes the frequency of the cycle~. The result sounds like a stepped sine tone. If you adjust the cycle~ frequency to something that isn’t the period length of the trigger, you start to get a repetitive pattern of frequencies. If the frequency of cycle~ divides evenly into the trigger/phasor~ frequency, a single repeated pattern will be heard. If the cycle~ frequency doesn’t divide evenly into the trigger frequency, a series of related patterns emerges. Each will be similar (the up/down motion of the wave), but the unequal division leaves a remainder that throws the sampling off a bit for each pattern, until the two values sync up again after enough repetitions.

applying sample and hold to filter control

Download filterSaH.maxpat. Sample and hold functions are being used to control gain and center frequency to a reson~ object (a resonant bandpass filter).  One set of controls links the slope change time between value changes to the period length of the trigger frequency. The other set of controls allows for independent control of the slope time, which can be set to short time values and create “popping” effects. The radio buttons control which set of control inputs pass to reson~.

A note: the radiogroup object is finicky when it comes to changing properties. To change the number of buttons displayed, open the inspector, change the number of elements, then click on the items enabled value list so that it updates.

biquad~, filtergraph~, and taking the sharpest edge off the degrade~ object

Download WavePlayOffsetDistFiltered.maxpat. It contains a nice use of biquad~ and filtergraph~ as a means of filtering out some of the really high frequencies that result from bit degradation. The main body of the patch has been covered before (phase controlled looping playback, with frequency shifting and bit degradation).

biquad~ uses filter coefficients to control a flexible filter equation that can produce all types of filtering, with resonant feedback. Since coefficients are not musically intuitive, we need an object that will provide them for us. filtergraph~ provides a graphic interface for setting the filter coefficients. The easiest way to use filtergraph~ and biquad~ is to open up the help patch for filtergraph~ and copy it into your patch (minus the audio input section). It gives you an attrui object to set the filter type, and float objects that can display the changes in the graph, or send changes to the graph.

A word of warning for all feedback filters: be sure to include a “clear” message in your patch. Although it isn’t ridiculously easy to “blow up” a filter, it can happen. Only a clear message will get it working again. And for those of you who might think that “blowing up” a filter could lead to interesting, glitchy sonic results: it doesn’t. When a filter blows up, it simply stops outputting signal.



(max) spectral processing in max

Starting with version 5, Max introduced the pfft~ object, which greatly simplifies spectral (FFT-based) processing. I’ve uploaded a folder of patches that illustrate some basic spectral patches and processes.

The pfft~ object is like the poly~ object: it uses a spectral subpatch to perform the FFT/iFFT and processing. The pfft~ object lets you set the FFT size and number of overlaps, creating the appropriate number of instances with sample delays. These example patchers are very similar to the MSP tutorial on pfft~.

get out what you put in

The first patch to take a look at is spectralStuff2.maxpat. It is set up to provide looping playback from two buffers. The groove~ objects and controls are inside the p subpatches groovePlayer1 and groovePlayer2. Only one groove~/buffer~ runs through a spectral process, fftPassThru~. fftPassThru~ is the simplest of spectral patches: it merely has an fftin~ and an fftout~. Input audio should come out exactly the same.

The pfft~ object specifies a frame size of 1024, with 4 overlaps. Inside fftPassThru~, you specify fftin~ inputs (with numbers corresponding to the inlet). Audio coming into an fftin~ is converted from the time domain to the frequency domain. That conversion creates a stream of real and imaginary values for each frequency bin. Because Max processes everything at audio rate, these bin values stream through the subpatch as signals. An fftout~ performs the inverse FFT, converting the real and imaginary numbers back to an audio signal. You can also specify non-FFT inputs and outputs to send other types of data into and out of a pfft~ subpatch.
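If you want to convince yourself that a bare analysis/resynthesis round trip really is transparent, here is a small Python sketch: a naive DFT standing in for what fftin~/fftout~ do far more efficiently (and without windowing or overlap, which pfft~ handles for you):

```python
import cmath

def dft(x):
    """Forward transform: time-domain samples to complex (real/imaginary) bins."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    """Inverse transform: complex bins back to time-domain samples."""
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)).real / N
            for n in range(N)]

signal = [0.0, 0.7, 1.0, 0.7, 0.0, -0.7, -1.0, -0.7]   # one rough sine cycle
bins = dft(signal)          # what comes out of fftin~, bin by bin
roundtrip = idft(bins)      # what fftout~ reconstructs
print(all(abs(a - b) < 1e-9 for a, b in zip(signal, roundtrip)))  # True
```

Do nothing between the two transforms and you get out exactly what you put in, which is all fftPassThru~ demonstrates.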

simple convolution

spectralStuff3.maxpat shows a simple convolution process. Since convolution multiplies amplitudes of frequency bins together, you must have two audio signals playing to hear any output.

The convolve~ patch is still quite simple. Two channels of audio come in and get converted to frequency domain signals. The real and imaginary numbers represent x and y, a cartesian plot of incoming sample values. To perform convolution, you must convert cartesian coordinates to a polar representation as amplitude and phase angle with the cartopol~ objects. You then multiply those signals, amplitude multiplied by amplitude and phase angle multiplied by phase angle. The results of the multiplications are converted back to x,y values (real and imaginary) with poltocar~ (polar to cartesian), and then sent out the fftout~.
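In Python terms, the per-bin math looks like this (a sketch: cartopol~/poltocar~ are reimplemented with cmath, and the multiply-the-phases step follows the patch as described; note that textbook convolution, i.e. complex multiplication, would add the phases instead):

```python
import cmath

def cartopol(re, im):
    """Like cartopol~: Cartesian (real, imaginary) to polar (amplitude, phase)."""
    z = complex(re, im)
    return abs(z), cmath.phase(z)

def poltocar(amp, phase):
    """Like poltocar~: polar back to Cartesian."""
    z = cmath.rect(amp, phase)
    return z.real, z.imag

def convolve_bin(bin_a, bin_b):
    """Per-bin processing as in the patch: amplitude times amplitude,
    phase angle times phase angle."""
    a_amp, a_ph = cartopol(*bin_a)
    b_amp, b_ph = cartopol(*bin_b)
    return poltocar(a_amp * b_amp, a_ph * b_ph)

# A bin only sounds if BOTH inputs have energy there:
print(convolve_bin((0.5, 0.0), (0.0, 0.0)))  # (0.0, 0.0): a silent bin
```

The multiplication of amplitudes is why you must have both audio signals playing: any bin with zero amplitude in either input produces zero output.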

spectral noise gating

spectralStuffNoiseGating.maxpat demonstrates simple noise gating. Only frequency bins with an amplitude above the gate value will be resynthesized. noisegate~.maxpat has an example of both fftin~ and in~ inlets in the same subpatch.

frequency crossover

spectralStuffFrequencyCrossover.maxpat illustrates routing of output based on frequency content of input signal. The third outlet of an fftin~ object is the FFT bin index. You can multiply this bin index by the fundamental frequency of the FFT to get the frequency value for the bin. Using dspstate~ to get the sampling rate and fftinfo~ to get the size of the FFT, you can find the fundamental frequency of the FFT. Note that dspstate~ outputs information whenever audio processing is turned on, but fftinfo~ only outputs information when the subpatch is running inside a pfft~ object.
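The bin-to-frequency arithmetic is simple enough to check by hand. A Python sketch, assuming a 44.1 kHz sampling rate and the 1024-point FFT used earlier (the function name is mine):

```python
def bin_frequency(bin_index, sample_rate=44100, fft_size=1024):
    """Frequency of an FFT bin: the bin index times the FFT's fundamental
    frequency (sample rate / FFT size), the values dspstate~ and fftinfo~ supply."""
    return bin_index * sample_rate / fft_size

print(bin_frequency(1))             # ~43.07 Hz: the FFT's fundamental frequency
print(round(bin_frequency(23), 1))  # ~990.5 Hz; a 1 kHz crossover splits near here
```

Comparing each bin's frequency against a cutoff value is all the crossover patch needs to decide which outlet a bin feeds.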

The result of the frequency comparison is a 1 or 0, which controls the gate objects that route the audio signals. You have to add 1 to the comparison result, as 0 will close the gate entirely; 1 or 2 will route input to the corresponding outlet. Note that you don’t have to perform cartopol and poltocar conversions inside the pfft~ subpatch, as you are getting frequency information from the bin index and simply using that information to determine the output of each bin.

I use this process in my laptop performance pieces (Bent Metal, Thrown Glass, and Forced Air – In/Ex) in an expanded form. I route the signal to 8 outputs based on frequency content. You can easily modify the crossover patch to do this by sending the frequencies higher than the first cutoff to a second frequency comparison, and so on. Once you have routed output based on frequency you can apply time-based processes to the resulting audio bands, such as delay, amplitude modulation, panning, etc.

pitch shifting with gizmo~

I’ve used freqshift~ before, but since it uses ring modulation to shift pitch it introduces a fair amount of distortion to the frequency spectrum. gizmo~ is an FFT-based pitch shifter that shifts pitch according to a transposition ratio (similar to controlling the playback speed of groove~). The parent patch is FFTPitchShift.maxpat. The subpatch (pitchshiftgizmo~.maxpat) implementation is simple. The only wrinkle is the addition of a route object that looks at data types rather than the first item in a list to route data to separate outlets. Route is being used in the subpatch to filter out non-number messages from reaching the gizmo~ object (just as a programming safety valve).

The parent patch includes a section that uses MIDI note input to determine the transposition ratio. You first set a base key with the keyboard slider (kslider). Middle C, 60, is the default base key. You can then play notes on a MIDI keyboard to determine the transposition ratio. The interval between the played note and the base key is determined, then sent to an expr object. Expr evaluates math expressions in C-like syntax. You have to declare variable data types and which inlet they arrive in: $f1 declares that floating point numbers will come in the first inlet. Using the interval size in a 2-to-the-(x/12) power equation gives a transposition factor. You can use this same equation for playback ratios with groove~. (Before, I had converted the base note to frequency, the incoming MIDI note to frequency, and created a ratio from those two numbers. The result is the same.)
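The expr math can be sketched in Python. mtof is reimplemented here from the standard MIDI tuning formula to show that the older frequency-ratio method really does give the same factor:

```python
def transposition_ratio(played_note, base_key=60):
    """The expr-style transposition factor: 2^(interval/12)."""
    interval = played_note - base_key
    return 2 ** (interval / 12)

def mtof(note):
    """MIDI note number to frequency in Hz (standard MIDI tuning, A440)."""
    return 440.0 * 2 ** ((note - 69) / 12)

print(transposition_ratio(72))  # 2.0 (an octave above the base key)
print(transposition_ratio(48))  # 0.5 (an octave below)

# The older frequency-ratio method gives the same factor:
print(abs(transposition_ratio(67) - mtof(67) / mtof(60)) < 1e-12)  # True
```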

max max-lecturenotes

(max) resonators and more poly~

High feedback, short-time delay lines can be used to resonate specified frequencies in a source signal. It’s the principle behind the resonators module in Cecilia. Creating a usable Max patch that performs this function takes a few steps, but can produce some very interesting sonic results.

To follow the discussion, you need to download the parent patcher, ResonatorBank.maxpat, and the poly~ subpatcher resonatorDelay~.maxpat.

specifying delay times as a function of pitch

For usability purposes, it is easier to specify the pitch you want to resonate, and convert that pitch to a corresponding delay time. In other words, you want to find the period of the frequency. The math to do that is simple:

T = 1/f

In Max, time is specified in milliseconds by default, so you would modify the equation to:

T = 1000/f

The subpatcher (p) midiToDelayTime performs this function, expecting pitch input in the form of a MIDI note number. The parent patch is set up to input MIDI notes from a MIDI keyboard or use a coll object to store sets of pitches (counted and routed to the different voices of the resonator bank). The converted delay times are sent to individual voices of the poly~ resonatorDelay~ by use of the target message.
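A Python sketch of what midiToDelayTime computes (mtof is reimplemented from the standard MIDI tuning formula; the function names are mine):

```python
def mtof(midi_note):
    """MIDI note number to frequency in Hz (standard MIDI tuning, A440)."""
    return 440.0 * 2 ** ((midi_note - 69) / 12)

def midi_to_delay_ms(midi_note):
    """Period of the pitch in milliseconds: T = 1000 / f."""
    return 1000.0 / mtof(midi_note)

print(midi_to_delay_ms(69))  # ~2.27 ms for A440 (MIDI note 69)
print(midi_to_delay_ms(57))  # ~4.55 ms: an octave lower doubles the delay time
```

That millisecond value is what gets sent to the delay line so the feedback reinforces the chosen pitch.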

using poly~ to create a bank of resonators, and more

By using poly~ we can create a bank of resonator processors. The resonatorDelay~ subpatcher is just a delay line (tapin~/tapout~) with feedback and adjustable output gain. (You need to adjust output gain down significantly as you increase delay line feedback.) But the use of poly~ allows for the adjustment of one additional, extremely needed parameter: signal vector size. The signal vector size determines the number of samples in a vector that are operated on in one function, and the size divided by the sample rate determines your minimum possible delay time. Smaller vectors give shorter possible delay times, but add a heavy hit to your cpu load. You can specify a different signal vector size for a poly~ object by use of the @vs attribute. The smaller vector applies only to the poly~ object, and won’t affect your cpu load as much as if you set the small size for the entire patcher. In the example patcher, I’ve set the signal vector size to 8 samples.

For comparison, Max usually operates at a 64-sample vector size (viewable in the Audio Status window). That size gives a minimum delay time of

64/44,100 = (approx) 1.45 ms, or about F5

Moving the signal vector size to 8 samples gives a minimum delay time of

8/44,100 = (approx) 0.18 ms, or about E8

Using the small vector size only in the poly~ saves you a few percentage points of CPU load, which can help in a complicated, processor-intensive patcher.
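The arithmetic behind those two delay-time figures, as a Python sketch (the function names are mine, not Max objects):

```python
def min_delay_ms(vector_size, sample_rate=44100):
    """Shortest tapin~/tapout~ delay: one signal vector, in milliseconds."""
    return 1000.0 * vector_size / sample_rate

def highest_resonant_freq(vector_size, sample_rate=44100):
    """Highest frequency the resonator can tune to: 1 / (minimum period)."""
    return sample_rate / vector_size

print(round(min_delay_ms(64), 2))       # 1.45 ms at the default 64-sample vector
print(round(min_delay_ms(8), 2))        # 0.18 ms with @vs 8 on the poly~
print(round(highest_resonant_freq(8)))  # 5512 Hz, in the neighborhood of E8
```

In other words, shrinking the vector from 64 to 8 samples raises the highest resonant pitch from around F5 to around E8.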

final note

You will notice that audio input and delay times to the poly~ are both sent to the first inlet. This combination works because the resonatorDelay~ subpatch defines both an in~ 1 (audio input to first inlet) and in 1 (data input to first inlet). Max understands to route the inputs by data type in the subpatch. Combining data inputs in a single inlet can help you reduce the number of inlets you need for a subpatch. Many audio objects make use of this data handling ability.


(max) end of semester patcher dump

It’s going to be impossible to describe the backlog of patchers that I need to post. You should be able to follow them from in-class presentations and documentation in the patchers.

  • RecordNGroove2.maxpat: introduces the dropfile object, allowing you to drag and drop audio files from your desktop Finder window into a Max patcher; and shows how to use MIDI note input, along with a MIDI base key, to set playback ratios for groove~.
  • WaveformGroove.maxpat: uses the waveform~ object to allow for graphical interaction with the contents of an audio buffer, allowing for the selection of loop points, zooming in and out, and even redrawing parts of a wave.
  • WaveformGrooveGlitch.maxpat: adds a section to WaveformGroove that determines the length of the selected loop, divides it into 10 equal sections, and then randomly selects a loop portion to go to in groove~. It scrambles the audio.
    • WaveformGlitch.maxpat is a previous version of this patcher. I didn’t show it in class, but it adds an envelope with adjustable duration. It allows you to play very short segments in random order. Follow the comments.
  • fiddleBasic.maxpat: the fiddle~ object uses fft analysis to track pitch. You need to download fiddle~ and install the object and its help file in the appropriate places. (Applications/Max6/Cycling ’74/msp-externals for fiddle~, and Applications/Max6/Cycling ’74/msp-help for the fiddle~ help patcher)
  • fiddleLiveInput.maxpat: adds a live input component to the fiddleBasic patch. With live input you should see more variation in detected pitch and amplitude, and more new notes being generated.
  • WavePlay.maxpat: uses wave~ instead of groove~ to play looping audio from a buffer~. Wave~ is controlled via phase location. The patcher also uses a waveform~ object with a slider overlay to show playback position.
  • WavePlayOffset.maxpat: adds controls to set the loop amount to something less than the entire file, and to have the loop start at different places in the file (with wraparound). It can create changeable stutter effects. The first modulo object (%~) sets the loop amount, keeping the same playback-ratio speed. The second modulo object allows for the wraparound if the loop start point (offset) and amount would take you past the end of the buffer.
  • WavePlayOffsetDist.maxpat: adds a frequency shifter (freqshift~) that uses ring modulation to shift pitch (positive band only by default), and the degrade~ object to reduce the bit resolution of the signal. The frequency shifter distorts the ratio of partials, creating some nice robot effects. The degrade~ object can also reduce the sampling rate, but bit reduction produces an attractive sort of quantizing noise commonly used in a lot of rock and techno. You can put the bit resolution back to 16 to hear only the effects of the freqshift~.
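The two modulo operations in WavePlayOffset can be sketched numerically in Python (a rough model of the signal math, with the playback phase normalized 0 to 1; the function name is mine):

```python
def loop_phase(raw_phase, loop_amount, offset):
    """Map a 0-1 phasor~ ramp to a shortened loop that starts at `offset`
    and wraps past the end of the buffer, as the two %~ objects do."""
    scaled = raw_phase % loop_amount   # first %~: shrink the loop, keep the speed
    return (offset + scaled) % 1.0     # second %~: wrap around the buffer end

# A loop covering 20% of the buffer, starting 90% of the way in:
for p in (0.0, 0.05, 0.1, 0.15):
    print(round(loop_phase(p, 0.2, 0.9), 2))  # 0.9, 0.95, 0.0, 0.05
```

The second modulo is what lets the loop run off the end of the buffer and continue from the beginning.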

(max) groove~ and buffer~: playing audio from RAM

We’re transitioning from synthesis to audio playback from buffers, which allows us to manipulate and process audio in live performance. The basic object for all of these functions is buffer~.

Example patcher: GrooveSimple.maxpat


buffer~ holds an audio file in RAM. The basic parameters are the buffer name, length in ms, and number of channels. The buffer name is essential so that other objects can access the contents of the buffer for playback, processing, or other info. You have to define a length, but you don’t need to over-stress about the size. I usually specify large buffers because RAM is generally abundant.

You generally input audio into a buffer~ in one of three ways: read, replace, or record. Read and replace bring up the standard Mac file dialog window and let you choose a file (unless you specify a filename along with the message). Read keeps the buffer length and number of channels as specified in the arguments. Replace resizes the buffer to fit the new file, and also changes the number of channels to match. I tend to favor using replace.


groove~ allows for ratio-based playback of audio from a buffer~, with looping and continuously variable speed control. Some things to keep in mind:

  • playback ratio has to be sent as a signal to groove~. The usual method is a float sent to sig~. Unity playback is 1. You can also reverse audio with negative ratios.
  • if no loop points are sent to groove~, the entire buffer will be looped.
  • a float sent to the left inlet sets the time of the playback head. You have to set the playback head before initial playback, as well as after reaching the end of the buffer if looping is not enabled.

(max) project 2: synthesis

MUST 450 (342—Computer Music II)
Project 2 | Audio/Synthesis Performance Patch
DUE: 3/24/14

ASSIGNMENT: to create an audio performance patch that utilizes several different methods of data manipulation and synthesis controlled by a MIDI keyboard/interface. You will perform the resulting composition in class during a performance session.


  1. This project will build upon the synthesis techniques and MIDI controls that we have discussed in class. Your objective should be to create a performance patch that features real-time control of 8 different parameters. You may consider building synths that will work in conjunction with the data storage objects (tables, histograms, coll, umenu, etc.) and control of the basic MIDI parameters that we discussed earlier in class. You should make every effort to be creative with this project. Think about ways that Max can extend synthesis controls beyond standard programs. Draw upon your knowledge of experimental art music forms and unique beat-based compositions.
  2. You should complete a composition to perform that is ONE MINUTE in length. Some elements to consider include:
    1. What synths will you use? How will you control them in real time? Will you use data storage objects to facilitate your performance? Will you control all of the elements yourself?
    2. You should vary pitch ranges, durations, and velocities both within sections and within gestures.
    3. Will you use a control interface? How can mapping your parameters to knobs and sliders increase the expressivity of your performance? How many parameters can you control at a time?
    4. Can you automate some of these processes and focus on macro-controls?
    5. Write a brief report (two pages, typed, double spaced, 12-point font, 1 inch margins) describing the sounds you created, any use of real-time control of your sounds, and significant DSP processes that you used. In your report, also briefly summarize the central organizing idea of the composition, and describe the formal structure. Make any other comments you feel are relevant.


  1. At least 8 controllable parameters. These may include filter components, pitch ranges, delay times, durations, etc. You should choose a VARIETY of controllable parameters.
  2. Your project should be at least 1 minute in duration.
  3. Your project should make use of layering as a means of creating a complex sonic texture.
  4. You must use at least 4 separate synths (choose from additive, subtractive, chiptune, or FM synthesis).
  5. Your project must use at least two separate filters (high pass, low pass, reson, biquad, delay lines, or anything else you find! Be creative! Investigate!).


  1. Your Max patch should be labeled “yourLastName_yourFirstName_AUDIO”.
  2. Your studio report should be labeled “yourLastName_AUDIO_14”.
  3. All of these materials should be placed in a folder labeled “yourLastName_AUDIO_14”.
  4. The folder should be zipped and labeled “yourLastName_FINALAUDIO_14”.


  1. Max patch.
  2. Any accompanying audio files/video files.
  3. Your studio report, in PDF format.
  4. Submit (via a file sharing site, e.g. iLocker, yousendit, Dropbox) by the due date. Please DO NOT require a login in order to enable download.


  1. Creativity of patcher and performance (40 pts).
  2. Neatness and documentation via comments (and/or send/receive as appropriate) (20 pts).
  3. Meeting the 1-minute length of performance and having the required 5 parameters under patcher control/variation (10 pts).
  4. Following turn-in procedure (5 pts).

75 points total


(max) mapping MIDI to audio synthesis

I’ve covered some basic audio synthesis in a previous post. Now it’s time to map MIDI note and velocity information to the synthesis process.

Note and Velocity Mapping

Download EnvelopeGraphicSustainNoteOnOff.maxpat.

The bulk of the patcher is the same as the graphic envelope series. The only addition is a MIDI note input section in the upper left corner, and corresponding changes in triggering the function envelopes.

The notein object sends key velocity to a select object, looking for 0 velocities (note-off velocities). These 0 velocities are used to trigger the release stage of the envelope. MIDI note and velocity feed stripnote, with note numbers being converted from MIDI to frequency by the mtof object. The on velocity triggers the function envelope.

The preset object has been changed very slightly, as you no longer need to store the oscillator frequency in the preset.

Velocity Sensitivity

Download EnvelopeGraphicSustainVelAmp.maxpat.

This patcher adds velocity sensitivity to the NoteOnOff mapping patcher. It takes the note-on velocity, divides it by 127, and uses the result as a scaling factor for the amplitude envelope. The scaling factor is applied to the output of the envelope line~ object.

The weakness in this patcher is that the mapping of velocity to amplitude is linear, while we perceive loudness relative to amplitude exponentially.

Better Velocity Sensitivity

Download EnvelopeGraphicSustainVeldbAmp.maxpat.

To achieve better velocity to amplitude mapping it is more appropriate to scale velocity to dB. This patcher does that, and then converts dB to amplitude (dbtoa) before applying it to the envelope line~ output.

You can experiment with the low end of the output scaling to fine-tune key velocity sensitivity. After some experimentation, I’ve settled on -48 dB, which keeps every note at least audible while preserving good sensitivity at high velocities.
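A Python sketch of the velocity-to-dB-to-amplitude chain (the linear velocity scaling and the -48 dB floor are modeled from the description above; dbtoa is the standard 10^(dB/20) conversion, and the function name is mine):

```python
def velocity_to_amp(velocity, floor_db=-48.0):
    """Map MIDI velocity (1-127) linearly into a dB range from floor_db
    up to 0 dB, then convert dB to linear amplitude (dbtoa: 10^(dB/20))."""
    db = floor_db * (1.0 - velocity / 127.0)
    return 10 ** (db / 20.0)

print(velocity_to_amp(127))           # 1.0 (full velocity hits 0 dB)
print(round(velocity_to_amp(64), 3))  # ~0.064 (about -24 dB)
print(round(velocity_to_amp(1), 4))   # ~0.0042: quiet, but still audible
```

Compare that with the linear version, where velocity 1 would give an amplitude of 1/127 ≈ 0.008 but velocity 64 would already be at 0.5; the dB mapping spreads the useful dynamic range across the whole keyboard.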



(max) basic audio, oscillators and envelopes

Now that we’ve had a little fun with our Theremin, let’s turn to some basic synthesis with oscillators and envelope generators.

The Very Basic Setup

Download the BasicAudio.maxpat. This patcher provides a very basic adjustable synthesizer, with a saw~ object feeding into an audio multiply (*~). Changes to frequency and amplitude are made with float objects. Amplitude changes can be made with either the float feeding a message or a pack object. Both function the same way.

Adding Envelope Control

Next, we turn to the BasicSynthEnvelope.maxpat. Messages are sent to line~ in the <target time> format, meaning what value to change to over how long. Moving left to right, the first pair of messages show individual envelope line segments creating an attack and release. (Remember, to hear anything you must enter a frequency.)

The next message over, triggered by the space bar, includes an entire ADSR envelope description. The first 0. followed by a comma tells line~ to go to that value immediately, then target/time pairs follow:

  • 0.9 10 (reach amplitude of 0.9 over 10 ms) Attack
  • 0.25 100 (go to amp 0.25 over 100 ms) initial Decay
  • 0.25 1000 (stay on amp 0.25 over 1000 ms) Sustain level and time
  • 0. 250 (go to amp 0. over 250 ms) Release
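To see how those target/time pairs translate into an amplitude curve, here is a Python sketch that renders line~-style pairs into samples (one sample per millisecond, purely for illustration; the function name is mine):

```python
def render_envelope(start, pairs, sr=1000):
    """Render line~-style (target, time_ms) pairs into amplitude samples.
    At sr=1000, one sample corresponds to one millisecond."""
    out = [float(start)]
    current = float(start)
    for target, time_ms in pairs:
        steps = max(1, int(time_ms * sr / 1000))
        step = (target - current) / steps   # linear ramp toward the target
        for _ in range(steps):
            current += step
            out.append(current)
    return out

# The ADSR message from the patch: attack, decay, sustain, release.
adsr = [(0.9, 10), (0.25, 100), (0.25, 1000), (0.0, 250)]
env = render_envelope(0.0, adsr)
print(len(env), round(max(env), 2))  # 1361 samples, peaking at 0.9
```

The total length (1 + 10 + 100 + 1000 + 250 samples) matches the sum of the segment times, which is why the whole gesture lasts 1360 ms.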

As an organizing tip, I specify amps (target values) in floating point numbers, and time in integers. It helps me read through the list.

The final message on the right, triggered by the 1 key, is just a slow attack and release envelope.

Using the Function Editor and Presets

Download the BasicSynthEnvelopeGraphic.maxpat and explore the preset and function objects.

The preset object stores and recalls data in user interface objects like number boxes (integer and float), the function editor, toggle states, sliders, and dials. You store a preset by sending the message store # (where # is the preset number). Recalling a preset just involves sending a number. Dark grey circles have a preset stored at that location. The blue circle is the current preset. The patcher loads preset 1 as current.

Preset objects can store all or some of the UI objects in a patcher. I’m using the left outlet of preset to connect to inputs of objects that I want to store in a preset location.

The function editor is a graphic breakpoint editor. You add points by clicking in the window. You can drag points, add as many as you like, and delete points (shift-click). As you move a point around you see its current X and Y values. When you send a bang to function, it will output to its second outlet its contents as target/time pairs for sending to line~. The spacebar is set to send a bang to function. You can see the output of function in the message below it.

Choose preset 2 by typing a 2 on the keyboard. The key object outputs ASCII codes, so subtracting 48 from the key input aligns the number keys with their values (the “2” key is ASCII 50, and 50 minus 48 gives 2).

Hit the spacebar again to compare settings. Frequency, function, and the number object all change. The number object that feeds into prepend setdomain changes the overall length of the function. All points scale to the new length. (X is the domain of the function editor; Y is the range.)

Sustain Points

The final addition is adding sustain points in the function editor. Download BasicSynthEnvelopeGraphicSustain.maxpat. It has two changes: the function editor colors have changed to better show the sustain points, and a toggle has been added to the space bar feeding either a bang or a next message.

Cmd-click any point in function and you make that point a sustain point. You can have as many sustain points as you want in the function editor. Sending a bang to function initiates the function, stopping at the sustain point. A next message tells function to continue through the points.


(max) theremin fun, part 2

Continuing with the description of patchers from part 1.


ThereminInProgressLCD starts to convert ThereminInProgress from mousestate control to LCD drawing control. No data is being fed to saw~, so audio is not in use for this patcher. The main focus is the LCD and sprite control within it.

On the right of the LCD are drawing commands, some of which are new. Moving from the top down, a loadmess is used to turn on idle for the LCD, which will send mouse location out the second outlet when the mouse is moved over the LCD without any buttons pressed.

Next, you find that X/Y position is being received from a send object (s idleMouse). The X/Y data feeds into the prepend object, outputting <drawsprite circle x y>. A sprite is a graphic image that draws on top of an object. In this case, it is a thick circle that follows the mouse when the LCD is in idle mode. When you press the mouse button down, the sprite stops moving and a line is drawn that follows the mouse position.

Below the prepend object you have a loadbang. It triggers a message to enablesprites (allowing sprites to appear in the LCD), and then it records a sprite for later use. All of the sprite commands and messages are found in the LCD help patcher. The message starts recording a sprite, changes the pen size, then gives a command to paint a frameoval with startX, startY, endX, endY, and R G B values. (The -12 values mean 12 pixels to the left of and above the current mouse location, extending to 12 pixels to the right of and below it.)


ThereminInProgressLCD2 is a fully functional Theremin that outputs mouse X/Y position when the mouse button is down, scales it to pitch and amplitude, and uses it for oscillator playback.

I initially ran into some issues with drawing and with only creating sound when the mouse button is down. In the previous patcher, the sprite would remain visible even when the mouse button was down. It would be better to not have the idle sprite display while the mouse was drawing. To accomplish this I used the third outlet of LCD to track whether the mouse button was pressed. Pressing the mouse button (the message 1) caused a hidesprite circle message to be sent to the LCD. Releasing the mouse button (message 0) was used to trigger a clear message to the LCD. It seemed aesthetically pleasing to have the drawn line remain on screen for a very short time, so I routed the bang through a delay object.

To only have audio playing when the mouse was drawing in the LCD, I again used the mouse button state from the third outlet. After the amplitude value from mouse Y was applied to the oscillator signal, another audio multiply was added. At this point in the audio chain the signal is being multiplied by the mouse state, either 1 for down or 0 for up.

One final note is that the two meter~ objects have different settings for the number of light segments.


ThereminLCDCleaner is functionally the same as LCD2. The drawing functions have been incorporated into a patcher object, as has the oscillator selection section. A loadmess has been added to select the sawtooth wave upon loading, and a live.gain~ has been added to allow for final output volume control (separate from mouse tracking) and metering.

I will revisit the ThereminLCD when we start to talk about audio effects.