Category Archives: computerMusic1

composition1 computerMusic1 computerMusic2 computerMusic3 max musth625 sonicArts

(sonicArts) online storage

I’ve been pushing iLocker in class, an online storage solution that Ball State offers to all of you. (I won’t call it free, given what you pay in technology and student services fees, not to mention tuition.)

but…

If you don’t have a good FTP program, or don’t otherwise know how to set it up on a computer that isn’t your own, it is ugly to use. UGLY.

So I would recommend Dropbox, or Box, or some other free online storage service. Make sure you put the file in your public folder, and copy the link to give to me.


computerMusic1 computerMusic2 musth625 sonicArts

digital performer intro, part 2

part one of the digital performer intro is here

Importing Audio and the Soundbites Window

You can drag and drop audio from a Finder window into the Soundbites pane in Digital Performer. Audio imported into your project gets converted to the project audio format and copied into the Audio Files folder within your project folder. Digital Performer turns soundfiles into soundbites and lists them in the pane. The first column is the move handle (MVE), the second displays the soundbite name (the filename is not changed), and the third column lists the soundbite duration. You can distinguish between mono and stereo files by the soundbite move handle: one tilde (~) indicates mono, and two tildes indicate stereo (shown in the graphic below).

[Screenshot: the Soundbites window]


computerMusic1 computerMusic2 musth625 sonicArts

digital performer intro

Digital Performer and DAWs

Unlike stereo audio editors (Audacity, Peak, Audition), Digital Performer is an example of a Digital Audio Workstation (DAW). Digital Performer relies (mostly) on non-destructive processing and mixing. The program allows multiple sounds to be used at once by reading from multiple sound files, applying gain changes as indicated by your mix settings, and applying processing through plugins. When you have completed a mix (or at any stage along the way), you “bounce” your project to stereo, which mixes the individual tracks and applies all processing.
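To make the non-destructive idea concrete, here is a minimal sketch (in Python with NumPy, not anything Digital Performer actually runs) of what a bounce conceptually does: the source files are only read, and the gain and pan settings are applied on the way to a new stereo mix. The function name and mix values are made up for illustration.

```python
# Conceptual sketch of a "bounce": read the untouched source tracks, apply
# the mix settings, and sum everything to a new stereo mix.
import numpy as np

def bounce_to_stereo(tracks, gains_db, pans):
    """tracks: list of mono float arrays at the same sample rate;
    gains_db/pans: per-track mix settings (pan 0.0 = hard left, 1.0 = hard right)."""
    length = max(len(t) for t in tracks)
    mix = np.zeros((length, 2))
    for track, gain_db, pan in zip(tracks, gains_db, pans):
        gain = 10 ** (gain_db / 20.0)                  # dB -> linear gain
        padded = np.pad(track, (0, length - len(track)))
        mix[:, 0] += padded * gain * (1.0 - pan)       # left channel
        mix[:, 1] += padded * gain * pan               # right channel
    return mix  # the original track arrays (your audio files) are never altered

# Example: three sine-wave "tracks" mixed at different levels and pan positions
sr = 44100
t = np.arange(sr) / sr
tracks = [np.sin(2 * np.pi * f * t) for f in (220, 330, 440)]
stereo = bounce_to_stereo(tracks, gains_db=[-6, -12, -9], pans=[0.2, 0.5, 0.8])
```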

Since a DAW project has a more complex organization of files than a stereo audio editor, and a more complex set of preferences and setup, it is important to understand setup and file organization for a DAW project.

First Step – Starting a New Project

Launch Digital Performer. The default setting prompts you to open an existing project or create a new one. Create a new project and name it according to the assignment instructions (or whatever you want to call it). Creating a Digital Performer project creates a new folder, with a data file and an Audio Files folder inside. Other folders will get created as needed. For simplicity, you should plan on creating additional folders to store your original and edited audio source files.

Studio Setup and Program Preferences

Once you have a project open, you should check your program preferences. There are a lot of sub-menus in the preferences window. Choose General | Audio Files. Here you set your file format for new projects, and the file format for the current project. My default choices are shown below. I recommend Broadcast WAVE, Interleaved (check box), and 16 Bit Integer sample format. The other important setting is in the Audio File Locations section. You should choose to “Always copy imported audio to project audio folder.” The other choice for processed files will be made for you.


computerMusic1 lectureNotes_cm1

(compMus1) Freespace problems on the server

Problem:

You get an error message that says that there is not enough free space on the server when you try to copy your project files to my drop box on the musictech server.

Solution:

You need to clear out space on YOUR server folder.

Server allocation limits (disk quotas) are tied to the user, not the folder. While I have unlimited storage space on the server, you have a 2 GB limit. If you’re at or near the limit and you try to copy something into my drop box, you may end up over your disk quota.

Since the server hard drives are erased over the summer, you need to back up your files to personal storage anyway.

computerMusic1 lectureNotes_cm1

(compMus1) Quiz update – Listening List

The listening/identification portion of the quiz is still on for Thursday. We’ll do the listening portion in class, with the remainder of the quiz completed at home and turned in on Monday.

Everything you need for the listening quiz is in this post.

computerMusic1 lectureNotes_cm1

(compMus1) No class on 4/28; Quiz Update

I’m sick and won’t be able to make it in for classes tomorrow (Tuesday, 4/28).

The quiz on Thursday will be a take-home quiz. I’ll give it to you Thursday, and it will be due Monday.

Keep an eye on the blog for more updates.

computerMusic1 lectureNotes_cm1

(compMus1) The Vocoder in Reason

Vocoders use a type of cross synthesis. Cross synthesis involves using the parameters of one sound to synthesize or process similar parameters in another sound. In the case of a vocoder, the amplitudes of frequency bands of a modulation audio signal are used to set the gain levels for bandpass filters that are applied to a carrier audio input signal. Frequency bands of the modulator signal which have little or no amplitude result in cutting or eliminating those frequency bands in the carrier signal. Frequency bands of the modulator signal that have strong amplitudes lead to boosts in gain for those frequencies if present in the carrier signal.
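If the filter-bank description is hard to picture, here is a minimal sketch of the idea in Python with NumPy/SciPy (illustrative only, and not the algorithm any particular vocoder uses). Both signals are split into the same set of bandpass bands, the modulator’s per-band envelope is measured, and each carrier band is scaled by that envelope before the bands are summed back together. The band count, band edges, and envelope cutoff are made-up values for the example.

```python
# Minimal channel-vocoder sketch: modulator band envelopes scale carrier bands.
import numpy as np
from scipy.signal import butter, sosfilt

def vocode(carrier, modulator, sr, num_bands=16, fmin=80.0, fmax=8000.0):
    edges = np.geomspace(fmin, fmax, num_bands + 1)        # log-spaced band edges
    env_sos = butter(2, 30.0, btype='lowpass', fs=sr, output='sos')
    out = np.zeros_like(carrier)
    for lo, hi in zip(edges[:-1], edges[1:]):
        band_sos = butter(4, [lo, hi], btype='bandpass', fs=sr, output='sos')
        car_band = sosfilt(band_sos, carrier)
        mod_band = sosfilt(band_sos, modulator)
        envelope = sosfilt(env_sos, np.abs(mod_band))      # crude envelope follower
        out += car_band * envelope                         # quiet mod band -> cut carrier band
    return out / np.max(np.abs(out))                       # normalize

# Example: a sawtooth carrier shaped by a pulsing noise modulator
sr = 44100
t = np.arange(sr) / sr
carrier = 2 * ((110 * t) % 1.0) - 1.0                      # sawtooth at 110 Hz
modulator = np.random.randn(sr) * np.abs(np.sin(2 * np.pi * 3 * t))
robot = vocode(carrier, modulator, sr)
```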

Vocoding in Reason is accomplished with the BV512 Vocoder. Since vocoders require two audio devices to work, you will need to create two different synthesizers and/or samplers to connect to the unit. Generally speaking, your carrier signal is harmonic, frequency rich, and rather consistent in spectral content over time. A sawtooth or pulse wave (with a short duty cycle) is a classic example of a good carrier signal. Your modulation signal should have a changing spectrum, which will provide dynamic (changing) filter parameters over time. Using speech as the modulator with a pulse or saw wave as the carrier creates the classic talking robot sound.

Modulating signals must be monophonic for the BV512. You can accomplish this with either a line mixer or the Spider Audio Merger and Splitter. If you use the merger, plug both outputs from your mod device into the left inputs of the merger (not left and right, because that doesn’t result in a merged signal). Follow the flow chart on the back of the device. If you use a line mixer, you can control individual levels, and can even use multiple synths combined with level control as a single mod signal.

Since you will most likely need to trigger the carrier and the modulator for every note, you should use a Combinator device. If you already have your carrier, mod, merger, and vocoder devices in the rack, you can select all of them (shift-click on each) and choose “combine” from the Edit menu (or right-click and choose from the contextual menu). If you have nothing yet, start with the Combinator and create your devices inside it. All the devices will respond to the same MIDI gate if you assign a MIDI channel to Combinator – MIDI in the Advanced Control section of the interface. The output of the vocoder should connect to the input of the Combinator, and the output of the Combinator should go to the hardware interface. Save the patches for your individual audio devices within those devices, but also save a patch for the Combinator. This will save your vocoder settings and all of your internal routing within the Combinator.

The vocoder controls are relatively simple. You choose the number of frequency bands (filters) you want to use. 512 bands was not common on original vocoders; this setting is more of a spectral cross synthesis, and gives you the most detailed mapping of one sound onto another. 32 bands is the more common setting. At 16, 8, and 4 bands you lose some of the individuality of the modulator, but this doesn’t mean it’s wrong to use these settings. In fact, changing between low and high band values can create some interesting variations with the same signals. In the filter band display you will see how the filters are being set according to the modulation signal. Below those constantly changing LEDs are sliders that let you adjust the individual frequency bands. Adjustments can help with distortion when modulator and carrier have a lot of energy in the same frequency bands. The bottom right of the unit has a wet/dry control. Wet means the output signal is the result of the processing. Dry in this unit’s case means you hear the modulator signal only.

The envelope settings can be very useful for creating musical variation. The envelope controls how long the frequency band gain settings hold their values. Short attack and decay times mean that the filter settings will fairly closely track the input modulator signal characteristics. Using longer decay times means that the filter settings will hold longer, skipping over some of the input mod signal. The filter settings don’t update until the envelope has finished its cycle. A longer decay setting creates slower, sliding amplitude changes. Again, this effect allows for variation using the same mod and carrier signals.
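As a rough illustration of what the attack and decay controls do to the band levels, here is a small Python sketch of a one-pole envelope follower with separate attack and decay coefficients. This is an assumption about the general technique, not Reason’s actual implementation: a short attack and decay hugs the modulator closely, while a long decay holds and slides past fast changes.

```python
# Illustrative attack/decay envelope follower, not the BV512's actual code.
import numpy as np

def follow(signal, sr, attack_ms=5.0, decay_ms=200.0):
    attack = np.exp(-1.0 / (sr * attack_ms / 1000.0))   # smoothing used when the level rises
    decay = np.exp(-1.0 / (sr * decay_ms / 1000.0))     # smoothing used when the level falls
    env = np.zeros(len(signal))
    level = 0.0
    for i, x in enumerate(np.abs(signal)):
        coeff = attack if x > level else decay
        level = coeff * level + (1.0 - coeff) * x       # one-pole smoother
        env[i] = level
    return env

# A longer decay_ms makes the envelope fall slowly, so the filter bands
# "slide" down instead of tracking every dip in the modulator.
```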

Your carrier and/or modulator can be sequenced using the Matrix or the step sequencer in the Thor synth. You can use a Redrum for either source as well. I’ll try to set up some examples for next class.

Assignments_cm1 computerMusic1

(compMus1) Final Project Document in iLocker

The handout describing the final project is in my iLocker account.

computerMusic1 lectureNotes_cm1

(compMus1) Controller Numbers for the Matrix Pattern Sequencer

You can use MIDI continuous controllers to start and stop the Matrix Pattern Sequencer, and to change patterns.

CC #92 will enable the pattern sequencer. A value of 1 play-enables the sequencer (the sequencer will run/play); a value of 0 disables playback (stops the sequencer).

CC #3 selects the pattern for playback, and can be sent while the sequencer is running to change patterns. Bank A, 1 – 8 corresponds to values 1 – 8. Bank B, 1 – 8 corresponds to values 9 – 16. Banks C and D continue this pattern.
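In DP you would normally draw these as continuous controller data in a MIDI track, but if you just want to test the messages from a script, something like the Python mido library can send them. The port name below is a placeholder for whatever MIDI output feeds your Reason rack.

```python
# Sketch: sending the Matrix control changes from Python with mido.
import time
import mido

out = mido.open_output('IAC Driver Bus 1')   # placeholder: use your own MIDI port name

out.send(mido.Message('control_change', control=3, value=3))    # CC 3: select pattern A3
out.send(mido.Message('control_change', control=92, value=1))   # CC 92: play-enable (run)

time.sleep(4.0)                                                  # let the pattern play

out.send(mido.Message('control_change', control=3, value=9))    # CC 3: switch to pattern B1
out.send(mido.Message('control_change', control=92, value=0))   # CC 92: stop the sequencer
```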

I’ll follow up with a post about programming consoles in DP, or you can look it up in the documentation.

computerMusic1 lectureNotes_cm1

(compMus1) Blackboard and Social Networking Info

For all of my blogging, I’ve been a little lax in communicating about the other online connection points for the music technology program. With the end of the semester approaching, it seems like time for a rundown.

Blackboard

There is a music technology program group on Blackboard. You should be automatically enrolled as a music tech major, but if not, contact Michael Pounds to get added. Many alumni are also members of this group, and you should check out the discussions from time to time.

Facebook

Pounds, Allison, and I have Facebook accounts, which all have a mix of social and professional contacts. Most of the GAs are on as well. There is also a BSU Music Technology group on Facebook. The group and individual composers both publicize their events through the site, so connecting here can help you stay abreast of current and upcoming events.

LinkedIn

LinkedIn has more of a reputation as a professional networking site, and most of the contacts here are alumni and faculty, although a few students join. Pounds and I are on, and there is a BSU Music Technology group here as well.

MET-L Listserv

The ever-present program listserv, which all of you should subscribe to for announcements and information. Instructions for signing up are in this post.