MIDI Tools #2: Generators

As of April 2025, this tutorial requires Live 12.2 Beta or the standalone version of Max 9 (not included with Ableton Live Suite) to use the new v8 JavaScript object (see here for standalone Max 9 setup).

This article continues on from "MIDI Tools #1: Transformations" where we built a transformation type of MIDI tool with Max for Live, the v8 object, and JavaScript code. This time, we'll build a generator type of MIDI tool.

To focus on what can be done with generator MIDI tools, we won't design a note-generating algorithm from scratch. Instead, we'll port the note-generating algorithm that we already built in the "Generating MIDI Clips" article to a MIDI tool. Then we'll add interactive controls and additional features. We'll also learn how to organize more complex UIs in Max patches using Presentation Mode.

Creating a MIDI Generator Tool

Similar to what we did when creating a MIDI transformation tool, we select a clip in Live and go to the Generate panel of clip view. Click on the list of generator tools and select "Max MIDI Generator" in the "User:" section:

Note this column of clip view can be expanded to the right to make a two-column layout. In that case, the list of tabs shows an icon instead of the word "Generate". If you get lost, Live's Info View will say "MIDI Generative Tools" when you hover over the correct tab:

Once you've navigated to the generator tools and selected "Max MIDI Generator", click the edit button next to the "Auto" button. This opens the Max patch editor:

Save the Max patch in the default location (the "Max Generators" folder under "MIDI Tools" in your Live User Library) to create a new generator tool.

As usual, I like to clean up and delete the "Build your Generator here" and "Device vertical limit" comments so we can start from a clean slate.

Patch Structure

We will be using the same structure as our transformation tool from the previous article.

It's the same idea: a live.miditool.in object outputs data about the clip using Max's dictionary data structure. We can add a v8.codebox (or v8) object to receive and process the dictionary data in JavaScript. Then, the notes we want in the clip are sent from the JavaScript code to live.miditool.out using another dictionary.

live.miditool.in outputs two different dictionaries from its two outlets, and they both have useful data in them. We can combine them with dict.join like we did when combining the two dictionaries in the previous article.

Also the same as before: add a button in the patch to trigger live.miditool.in for testing our JavaScript from inside the patch (so we don't always need to switch back to Live's window and click "Generate"). Connecting UI controls to this button (which we'll do later) is also what lets the generator re-run automatically when the "Auto" option is enabled in Live's MIDI Tools interface.

Here's the initial patch:

The initial JavaScript code is stuff we've seen in the previous article:


function msg_dictionary({ notes, clip, scale, grid, seed }) {
  outlet_dictionary(0, { notes });
}

This code passes through whatever notes are currently selected in the clip without adding, changing, or removing any of them. So: it does nothing.

In the interest of setting this up as a useful template for starting any new MIDI Tool project, I have also destructured the notes, clip, scale, grid, and seed data from the combined dictionaries. We won't actually use all that data in this article, but that's how you'd access it if you want to use it. You can remove whichever of those fields you don't need.

v8 MIDI Tool Template

Here's the initial patch we walked through above (see previous screenshot). This is a good starting point for any MIDI Tool that uses JavaScript, both generators and transformations.

It's good to know how to build this patch, but to save time, you can paste the following compressed patcher code into an empty Max MIDI Generator or MIDI Transformation patch to get a new project started quickly.


----------begin_max5_patcher----------
573.3ocqVssiZCCD84juBK+TqTZTbBKhcaU+C5WPYExjLPMJXGY6PYKh+8ZO
NAX2xkrc4ARjGOdNyY7Ylvt3H5b0VvPIOQ9IIJZWbTDZxaHpacDcMeaYM2ft
Qm2ZsJIMIrUC2V9Kgb4LMTZCggkklkPXi8OyG0+j7b2IDUXXTyW8EV1wvn4q
AKnmAR97Zv6RV2dx10BYMXQ3YGMpZsu0Zvj8kFHjJz4b4RJ4Y+t6ii8ORFHM
kvucoXe9YgsH6nUhRa5JkX.EfLl+0CrzGbUf7yWAlPOGIyeejzmSBkjqe4+i
paljVpp.uS2jTivKzQEcqxNOsJnW6tagRZMh+foOF1qwVuyRm1.i829gRpLM
7Rn56zKUNnGxH+YW3b9T0jmoXrVzJwxFYsY4rikvOsiHUVvjPJqEMIDSIuFR
HK0hJ2B.pH6+LY2T8T4Ta.5SOrqnzcdmaecp06299LcgnFNPEoRB8aX3afpY
tBmqZOiasZgqIKzUFc3N6BMI3d3E9cRgWK1.oqEUBqRUmN.g9XTRLIzoeAcd
Nc3MyEW7hMo628rg90z0g4M4aAqv+5Q10HL6cP3rSXCFKWRIe6PYLc71eMEM
pVcYeApahB4XFUAFqPx8xyS7oH3yYKhCEmhAfC6NfS9.vYxcBG1.vg8Aww+M
uaRn7+gPAcAuoYCnMcNiX3D4qTZ+xGSvkBYXINvipgMhd+GiV3Zmr15zzs5v
jnsiGQCG0MZTKaEnRM1yNGjXCjepEN1M7O.b8Yw6i+qfFwur
-----------end_max5_patcher-----------

Reminder from the previous article: if you need to debug and look at any of the dictionary data, use dict.codebox. Two useful places to insert these are after the dict.join (to view the input to the generator code) and after v8.codebox (to view the output of the generator code):

Now we have the foundation for a JavaScript MIDI generator tool and we're ready to start generating some notes.

Generating Notes

To generate notes, we build up an array of notes in JSON format. Using dict.codebox as shown above, we can look at the dictionary data to see the note JSON format. If the clip contains notes and some are selected, when you click the button (when the patch is locked) it will show something like the following in the dict.codebox. This is the note JSON format:


{
  "note_id" : 1,
  "pitch" : 60,
  "start_time" : 0.0,
  "duration" : 1.0,
  "velocity" : 100.0,
  "mute" : 0,
  "probability" : 1.0,
  "velocity_deviation" : 0.0,
  "release_velocity" : 64.0
}

Most of the note properties are optional. A few are required. If we were modifying an existing note, we would need to maintain the note_id. When we are generating new notes, we do not provide a note_id.
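As a quick sanity check outside Max, the required/optional split can be captured in a small helper (a sketch for illustration only; makeNote is a hypothetical name, not part of the MIDI Tools API):

```javascript
// Hypothetical helper: build a note object in the shape live.miditool.out
// expects. pitch, start_time, and duration are required; everything else
// (velocity, mute, probability, etc.) is optional and passed via options.
function makeNote(pitch, start_time, duration, options = {}) {
  if (pitch === undefined || start_time === undefined || duration === undefined) {
    throw new Error("pitch, start_time, and duration are required");
  }
  return { pitch, start_time, duration, ...options };
}
```

With a helper like this, forgetting a required property fails immediately in your own code instead of producing a "malformed dictionary" error in Live's status bar.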

Let's see what happens if we only set pitch:


function msg_dictionary({ clip, scale, grid }) {
  const notes = [{
    pitch: 60,
  }];
  outlet_dictionary(0, { notes });
}

Since we are building up a new list of notes, we don't want to unpack existing notes in the msg_dictionary({...}) function's parameters. We also won't be using seed in this article, so I have removed that too.

Now if we try triggering this code (by clicking the button in the locked patch or the "Generate" button in Live), nothing will happen and we will see an error in Live's Status Bar:

The Note dictionary received from Max was malformed. The key 'start_time' is missing.

This makes sense. We told Live the pitch of the note we wanted to create, but did not tell it where in the clip's timeline it should appear, so it doesn't know what to do. So let's give it a start_time:


function msg_dictionary({ clip, scale, grid }) {
  const notes = [{
    pitch: 60,
    start_time: 0,
  }];
  outlet_dictionary(0, { notes });
}

and now it complains:

The key 'duration' is missing.

This also makes sense. It doesn't know when the note should end. So, add that too:


function msg_dictionary({ clip, scale, grid }) {
  const notes = [{
    pitch: 60,
    start_time: 0,
    duration: 1,
  }];
  outlet_dictionary(0, { notes });
}

Now, a note appears!

So pitch, start_time, and duration are required to generate a note. The other properties are optional: velocity, mute, probability, velocity_deviation, and release_velocity. When the optional properties are not provided, Live fills in defaults matching the values in the example dictionary above (velocity 100, not muted, probability 1.0, no velocity deviation, release velocity 64).

Using Grid Interval and Clip Length

Let's generate multiple notes. We need to decide how long each note will be and how many we're going to generate. Let's use data from the clip to decide.

First let's determine the time range we are working with. Using dict.codebox in the Max patch, we can see the clip data contains something like this:

{
  "time_selection_start" : 0.0,
  "time_selection_end" : 4.0,
  "insert_marker_time" : 0.0
}

Actually, it depends on whether you have any notes selected in clip view. The above is the data available when no notes are selected. If any notes are selected, it looks like this instead:

{
  "time_selection_start" : 0.0,
  "time_selection_end" : 1.0,
  "first_note_start" : 0.0,
  "last_note_end" : 1.0,
  "lowest_pitch" : 60,
  "highest_pitch" : 60
}

As far as I can tell, insert_marker_time is only available when no notes are selected. That's ok, we are interested in time_selection_start and time_selection_end. Note that with no notes selected, this will be the length of the clip, otherwise it's the time range of the selected notes in the clip.

Sometimes you want the generator to operate on the current selection, which seems to be the main use case Ableton had in mind when they designed things to behave this way. But some generators may work best when no notes are selected, because otherwise time_selection_end will be the time of the end of the last selected note, not the end of the clip. If you ever find yourself wondering "why didn't it generate more notes?" or "why did it start/stop in the middle of the clip?", this may be why. While building a generator, if it doesn't seem to be working correctly, try deleting all the notes in the clip to make sure the note selection state isn't causing problems.

It appears we won't know where the start and end of the clip are when any notes are selected. Using what we know about the Live API, we could query the selected Clip object for this information. Then, we could ignore the current selection state of clip view and process the entire clip, if desired.

We can find the actual start and end of the currently selected clip with new LiveAPI("live_set view detail_clip").get("start_time") and .get("end_time"). You can do this inside msg_dictionary(). In this context, the Live API path "live_set view detail_clip" should always return a Clip object, because a clip is always selected when we are interacting with a MIDI tool.

To keep things simple, I won't be using this technique in this article, but it is a useful technique to know.

Now we have a start and an end time to work with. How long should we make each note's duration? The grid data we receive looks like this:

{
  "interval" : 0.25,
  "enabled" : 1
}

interval is the size of the grid relative to quarter notes, so 0.25 is a 1/16 grid. enabled is 1 when "Snap to grid" is enabled, and 0 when it is not.

For this tutorial, we'll use the interval and ignore whether the grid is enabled or not.
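To make the interval values concrete, they map to familiar note values like this (a sketch with a hypothetical gridLabel helper, assuming intervals that divide evenly into quarter notes):

```javascript
// Sketch: convert a grid interval (in quarter notes) into a note-value
// label, e.g. 0.25 -> "1/16", 0.5 -> "1/8", 1 -> "1/4".
function gridLabel(interval) {
  return "1/" + Math.round(4 / interval);
}
```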

We now have enough information to fill up the selected time range with notes the size of the current grid:


function msg_dictionary({ clip, scale, grid }) {
  const notes = [];
  let pitch = 60;
  let duration = grid.interval;

  let start_time = clip.time_selection_start;
  while (start_time < clip.time_selection_end) {
    notes.push({
      pitch,
      start_time,
      duration,
    });
    start_time += grid.interval;
  }

  outlet_dictionary(0, { notes });
}

Let's walk through the code. We are still hard-coding pitch to 60. We'll do something more interesting with that soon. The note durations are set to the grid size. We start at the selection start time, insert a note, and increment the start time by the grid size. The start time increment is the same as the note duration, so we generate back-to-back notes. We loop and keep doing that until we hit the end of the time selection.
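Because this loop only works with plain numbers and objects, you can factor it into a pure function and test it outside Max entirely (a sketch; fillWithNotes is a hypothetical name):

```javascript
// Pure sketch of the fill loop: back-to-back notes, one grid interval each,
// from the selection start up to (but not including) the selection end.
function fillWithNotes(pitch, selectionStart, selectionEnd, interval) {
  const notes = [];
  for (let start_time = selectionStart; start_time < selectionEnd; start_time += interval) {
    notes.push({ pitch, start_time, duration: interval });
  }
  return notes;
}
```

For example, a 4-beat selection on a 1/16 grid (interval 0.25) yields 16 back-to-back notes.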

Now you can change the grid size and it will affect how many notes are generated:

Neat.

UI Controls with Presentation Mode

UI for Pitch

Let's get control over the note pitch. The process for setting up a UI is the same as in the previous article. First, we declare an attribute in JavaScript:


var startingPitch = 36;  
declareattribute("startingPitch",
  { type: "long", min: 0, max: 127, default: 36 });

function msg_dictionary({ clip, scale, grid }) {

At this point, it might have made more sense to call this attribute pitch, but I'm looking ahead to the final version of the generator. I know I want to eventually call this attribute startingPitch, so let's name it this way now so we don't have to rename it later.

In the Max patch, add a live.numbox. Then, like we did in the previous article, open the Object Action Menu by hovering over the left edge of the live.numbox and clicking the green circle that appears. In the Object Action Menu, select "Connect ▸" and "startingPitch".

Next, connect the live.numbox to the button with a patch cord, so that "Auto" mode will work in the MIDI tool (again, this is what we did in the previous article). Now everything is wired up.

Let's make an improvement. Open the inspector for the live.numbox, and under the "Basic" settings, change the "Unit Style" to "MIDI Note":

This makes the numbox display the MIDI Note value that corresponds to what we see in the piano roll in clip view, so for example, it will display "C1" instead of "36".
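For reference, Live treats MIDI note 60 as C3, so the display mapping can be sketched like this (midiNoteName is a hypothetical helper for illustration, not something the patch needs):

```javascript
// Sketch of Live's "MIDI Note" display convention: 60 -> "C3", 36 -> "C1",
// and the lowest MIDI pitch 0 -> "C-2".
const NOTE_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"];
function midiNoteName(pitch) {
  return NOTE_NAMES[pitch % 12] + (Math.floor(pitch / 12) - 2);
}
```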

We still need to update the JavaScript code to actually use the startingPitch attribute:


var startingPitch = 36;  
declareattribute("startingPitch",
  { type: "long", min: 0, max: 127, default: 36 });

function msg_dictionary({ clip, scale, grid }) {
  const notes = [];
  let pitch = startingPitch;
  let duration = grid.interval;

  let start_time = clip.time_selection_start;
  while (start_time < clip.time_selection_end) {
    notes.push({
      pitch,
      start_time,
      duration,
    });
    start_time += grid.interval;
  }

  outlet_dictionary(0, { notes });
}

Now, if you change the value of the live.numbox connected to the startingPitch attribute, it changes the pitches of the notes being generated.

UI For Duration

We're going to do almost exactly the same thing again to get control over the note duration:

  1. Declare an attribute for duration in the code:

var durationMultiplier = 1;
declareattribute("durationMultiplier",
  { type: "float", min: 0.01, max: 1, default: 1 });
  2. Add another live.numbox to the patch.

  3. Use the Object Action Menu to connect the live.numbox to the durationMultiplier attribute.

  4. Connect the live.numbox to the button with a patch cord.

This time, we will not change the "Unit Style" in the live.numbox inspector, because this is not a MIDI pitch. Note that because we used the attribute option type: "float", the live.numbox will display floating point numbers like "1.00" instead of integers, and we can input fractional values.

Then, update the JavaScript code to use this value:


var startingPitch = 36;
declareattribute("startingPitch",
  { type: "long", min: 0, max: 127, default: 36 });

var durationMultiplier = 1;
declareattribute("durationMultiplier",
  { type: "float", min: 0.01, max: 1, default: 1 });

function msg_dictionary({ clip, scale, grid }) {
  const notes = [];
  const pitch = startingPitch;
  const duration = durationMultiplier * grid.interval;

  let start_time = clip.time_selection_start;
  while (start_time < clip.time_selection_end) {
    notes.push({
      pitch,
      start_time,
      duration,
    });
    start_time += grid.interval;
  }

  outlet_dictionary(0, { notes });
}

Now you should be able to control pitch and duration with the UI, and we can influence the generator by changing the grid size:

Excellent. Now let's make the UI look good.

Presentation Mode

In the previous article we only had a couple UI controls and we cleaned things up by hiding the patch cords. In this article, we are going to be adding a lot more UI controls, so let's use a better solution: Max's Presentation Mode.

Presentation mode allows you to design a separate layout for your patch to be the user interface. You can hide any objects you don't want to see, and position and size everything differently from the main patch, where you might want to put things in different places for patching convenience.

There is a button to toggle between Presentation Mode and normal "patching" mode. The button is in the bottom toolbar of the Max patch editor and looks like a board on an easel:

Click that to toggle between presentation and patching modes. If you have hidden the toolbar, this feature is also available in the Max application menu under View → Presentation.

When you enable Presentation Mode, everything disappears! That's because only objects that have been explicitly added to Presentation Mode are visible in it. Disable Presentation Mode by clicking the presentation mode button to toggle back to the normal patching mode and all the objects in the patch re-appear.

In patching mode, select the two UI controls (the two live.numbox objects) and right click (or if you're on a MacBook, two-finger tap or control+click) to open a context menu where you can select "Add to Presentation". Alternately, select the objects and choose Object → "Add to Presentation" from the Max application menu. You will also find "Remove from Presentation" commands in the menu to remove objects from Presentation Mode after they've been added.

Note that when an object is added to Presentation Mode and is not selected, it has a pink halo around it, so if you look closely in normal patching mode, you can tell which objects have been added to Presentation Mode.

Now when we enable Presentation Mode, the two objects we added (and only those two objects) are visible:

This UI is too minimalistic and is not self-explanatory. In the previous tutorial, we used a live.slider object for the UI, which has a built-in label. live.numbox does not have a built-in label, so we have to add our own labels. Let's do that now so the functions of the UI are clear.

To save us some time, when we are in Presentation Mode, any object we add will automatically be added to Presentation Mode. So, make sure you are in Presentation Mode and add a live.comment object (if you forget, you can delete the object, switch to Presentation Mode, and try again, or explicitly add the object to Presentation Mode like we did with the live.numbox objects).

Enter the text "starting pitch" into the live.comment. Make another live.comment with the text "duration". Then adjust the layout so it is clear which comment goes with which live.numbox:

Note that you can move these objects around wherever you want in Presentation Mode and it will not affect their position in normal patching mode. Also note when you add a new object to the patch while in Presentation Mode, the position of the object in normal patching mode is the position where the object was initially created. Then, if you move it in Presentation Mode, it does not affect the position in normal patching mode. Therefore you will likely need to move the live.comment objects around in normal patching mode to put them in a reasonable position.

Open in Presentation

Once you are happy with the layout in Presentation Mode and normal patching mode, you can save and... nothing really seems to happen in the MIDI Tool UI in Live. Maybe the comments appeared, depending on where you put them in normal patching mode. Since the use of Presentation Mode is optional, all Max for Live devices display their normal patching mode in Live by default, but we can change this behavior.

Open the inspector in the right side panel of the Max patch editor. Make sure nothing is selected in the patch. If something is selected, click into an empty area of the patch. Then, a diamond shape containing a "P" will appear at the top of the inspector:

This allows you to view the inspector for the entire patch. Click it. Then select the "Basic" settings, and enable "Open in Presentation":

Note when you enable "Open in Presentation" a box appears in the patch background in Presentation Mode that indicates the visible area in Live's UI. The visible area box also disappears from normal patching mode. This is a clear indicator that "Open in Presentation" is enabled, and is invaluable for laying out the patch properly.

Save the patch again, and now our Presentation Mode UI is displayed in Live:

Nice. This is a good foundation for adding a lot more UI controls. But first, let's make our note generator algorithm more interesting.

The Rhythm of the Primes Algorithm

In the "Generating MIDI Clips" tutorial, we implemented an algorithm based on an idea "the rhythm of the primes" where prime numbers dictate the rhythm. Let's port that code into our generator tool and use the MIDI Tools UI to control various aspects of its behavior.

Looking back at the final version of that project: we can omit the entire class ClipSlot {...} definition and the clip.call("add_new_notes", ...) Live API call, because the live.miditool.out object handles that for us in a MIDI tool. The rest of the "rhythm of the primes" code looks like this:


const primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53];
const basePitch = 36;
const duration = 1;
const clipLength = 256;

const clipSlot = new ClipSlot(0, 0);
const clip = clipSlot.makeClip(clipLength);
const notes = [];

let pitch = basePitch;
for (const prime of primes) {
  let noteCount = 0;
  for (let start=0; start < 2*clipLength; start++) {
    if (start % prime == 0 && (pitch == basePitch || start > 0)) {
      const velocity = 127 - (100 * (noteCount % prime))/(prime - 1);
      notes.push({ pitch, velocity, start_time: start/2, duration });
      noteCount++;
    }
  }
  pitch++;
}

Let's go through all the constants one by one and see how they need to change for our generator tool:

  1. const primes can stay the same. We'll want more prime numbers to work with later, but it's fine for now.

  2. const basePitch will be set to the startingPitch attribute that's connected to the UI control in our generator tool.

  3. const duration will be set to the durationMultiplier * grid.interval value that we're already using for duration in our generator.

  4. const clipLength will be calculated from the clip data. clip.time_selection_end - clip.time_selection_start is the length in beats of the current selection of notes in clip view. We'll use this value as the clip length for our generator.

  5. const clipSlot is for calling the Live API. We don't need it because we're in a MIDI tool. We'll delete it.

  6. const clip is also for calling the Live API. We'll delete it.

  7. const notes = []; will stay the same. We're still building up a list of notes from scratch.

After the constants are defined, the main algorithm stays mostly the same. We will have to make a few adjustments, but let's start by copying the code into our generator tool and testing it.

When we bring "the rhythm of the primes" code into our generator's v8.codebox object, note that the primes constant never changes, so we can define it at the top level, outside the msg_dictionary() function. Everything else goes into msg_dictionary() and replaces the while loop we had in there.

Here's the first attempt of copying "the rhythm of the primes" code into our generator:


const primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53];

var startingPitch = 36;
declareattribute("startingPitch",
  { type: "long", min: 0, max: 127, default: 36 });

var durationMultiplier = 1;
declareattribute("durationMultiplier",
  { type: "float", min: 0.01, max: 1, default: 1 });

function msg_dictionary({ clip, scale, grid }) {
  const notes = [];

  const basePitch = startingPitch;

  const { time_selection_start, time_selection_end } = clip;
  const clipLength = time_selection_end - time_selection_start;
  const duration = durationMultiplier * grid.interval;

  let pitch = basePitch;
  for (const prime of primes) {
    let noteCount = 0;
    for (let start=0; start < 2*clipLength; start++) {
      if (start % prime == 0 && (pitch == basePitch || start > 0)) {
        const velocity = 127 - (100 * (noteCount % prime))/(prime - 1);
        notes.push({ pitch, velocity, start_time: start/2, duration });
        noteCount++;
      }
    }
    pitch++;
  }

  outlet_dictionary(0, { notes });
}

In order to test this properly, we should make the clip longer. Assuming the clip is looped, set the loop length to 8.0.0 (8 bars). Then try the generator:

Not bad. It works!

Fixing Bugs with Time

If we play around with it, we can start to notice some issues with how time is handled, such as: if we increase the grid interval to 1 bar, it generates the same number of notes and only changes their durations, which don't fit properly:

Let's go through how time is handled and make some adjustments.

First, there's the main loop for (let start=0; start < 2*clipLength; start++). The start < 2*clipLength is unusual. What is the 2* doing there? Back when this code was designed in "Generating MIDI Clips", we sped things up to make it more interesting. We don't need that speed-up here, because we want to control the note rate via the grid interval. So let's change the for loop condition to start < clipLength. We also need to undo a corresponding change when we generate the note: start_time: start/2 should change to start_time: start.

This still doesn't address the issue from the previous screenshot where large grid intervals cause the notes to run into each other. Instead, larger grid intervals should result in fewer notes than smaller grid intervals. The problem is we need to scale start_time by the grid interval. We can do this the same way we already scale duration, by multiplying by the interval: start_time: start * grid.interval.

This helps, but now a grid interval of 1 bar generates too many notes, and a grid interval of 1/16 (shown here) generates too few (it's supposed to fill the clip):

To fix that we have to adjust the loop condition again and scale it by the grid interval too. To make it work properly, this time we have to divide: start < clipLength/grid.interval (to be honest, I multiplied at first, but testing and experimentation made it clear that was backwards, and dividing is the correct thing to do here).

That mostly fixes things, but if we're being really thorough, I see one more bug: if we drag the clip's loop bar so the loop starts on the 2nd bar, our generator keeps generating the first note at the start of the 1st bar, which we'll never hear. This wasn't a problem in the original code because it always generated a new clip that started on the 1st bar. To fix this in MIDI Tools, we offset each note's start time: start_time: start * grid.interval + time_selection_start.

In summary, these adjustments to the loop condition and the notes' start_time fix the bugs with how time is handled in our generator:


function msg_dictionary({ clip, scale, grid }) {
  const notes = [];

  const basePitch = startingPitch;

  const { time_selection_start, time_selection_end } = clip;
  const clipLength = time_selection_end - time_selection_start;
  const duration = durationMultiplier * grid.interval;

  let pitch = basePitch;
  for (const prime of primes) {
    let noteCount = 0;
    for (let start=0; start < clipLength/grid.interval; start++) {
      if (start % prime == 0 && (pitch == basePitch || start > 0)) {
        const velocity = 127 - (100 * (noteCount % prime))/(prime - 1);
        const start_time = start * grid.interval + time_selection_start;
        notes.push({ pitch, velocity, start_time, duration });
        noteCount++;
      }
    }
    pitch++;
  }

  outlet_dictionary(0, { notes });
}
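To double-check the time handling, the fixed algorithm can be extracted into a pure function and run outside Max (a sketch; primeRhythm and its parameter names are hypothetical, but the loop structure matches msg_dictionary above):

```javascript
// Pure sketch of the fixed generator: the same loops as msg_dictionary,
// with the clip and grid data passed in as plain numbers.
function primeRhythm(primes, basePitch, selStart, selEnd, interval, durationMultiplier = 1) {
  const notes = [];
  const clipLength = selEnd - selStart;
  const duration = durationMultiplier * interval;
  let pitch = basePitch;
  for (const prime of primes) {
    let noteCount = 0;
    for (let start = 0; start < clipLength / interval; start++) {
      if (start % prime == 0 && (pitch == basePitch || start > 0)) {
        const velocity = 127 - (100 * (noteCount % prime)) / (prime - 1);
        notes.push({ pitch, velocity, start_time: start * interval + selStart, duration });
        noteCount++;
      }
    }
    pitch++;
  }
  return notes;
}
```

For example, a 4-beat selection on a 1/16 grid has 16 steps, so the prime 2 produces a note on every other step, and shifting the selection start shifts every note by the same offset.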

Additional UI Controls

Now that our note-generating algorithm is working well, we can add more UI controls to make the MIDI tool more flexible and powerful.

Pitch Count (and More Primes)

First I want to get control over the number of different pitches that are generated. Currently our const primes defines a list of 16 prime numbers. I potentially want to generate a lot more pitches than that: up to 128 for all 128 pitches supported by MIDI. Actually, we're going to add another feature that will require even more prime numbers momentarily, so let's create a list of a lot of primes.

Here's a JavaScript array with all the primes less than 1000. That's the first 168 prime numbers. If I were you, I would copy and paste this:

const primes = [
  2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59, 61, 67, 71, 73, 79, 83, 89, 97, 101, 103, 107, 109,
  113, 127, 131, 137, 139, 149, 151, 157, 163, 167, 173, 179, 181, 191, 193, 197, 199, 211, 223, 227, 229, 233, 239,
  241, 251, 257, 263, 269, 271, 277, 281, 283, 293, 307, 311, 313, 317, 331, 337, 347, 349, 353, 359, 367, 373, 379,
  383, 389, 397, 401, 409, 419, 421, 431, 433, 439, 443, 449, 457, 461, 463, 467, 479, 487, 491, 499, 503, 509, 521,
  523, 541, 547, 557, 563, 569, 571, 577, 587, 593, 599, 601, 607, 613, 617, 619, 631, 641, 643, 647, 653, 659, 661,
  673, 677, 683, 691, 701, 709, 719, 727, 733, 739, 743, 751, 757, 761, 769, 773, 787, 797, 809, 811, 821, 823, 827,
  829, 839, 853, 857, 859, 863, 877, 881, 883, 887, 907, 911, 919, 929, 937, 941, 947, 953, 967, 971, 977, 983, 991,
  997,
];
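If you'd rather compute this list than paste it, a sieve of Eratosthenes does the job in a few lines (a sketch; the hard-coded array above works just as well):

```javascript
// Compute all primes below a limit with a sieve of Eratosthenes.
// primesBelow(1000) yields the same 168 primes as the hard-coded array.
function primesBelow(limit) {
  const isComposite = new Array(limit).fill(false);
  const primes = [];
  for (let n = 2; n < limit; n++) {
    if (!isComposite[n]) {
      primes.push(n);
      // Mark every multiple of n starting at n*n as composite.
      for (let m = n * n; m < limit; m += n) isComposite[m] = true;
    }
  }
  return primes;
}
```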

Now let's add a "pitch count" UI control. It's the same process as before:

  1. Declare an attribute for pitchCount in the code as an integer that can go from 1 to 128:

    
    var pitchCount = 16;
    declareattribute("pitchCount", 
      { type: "long", min: 1, max: 128, default: 16 });
    
  2. Add another live.numbox to the patch.

    a. Add the live.numbox to Presentation Mode (or be in Presentation Mode when you create it). We'll do this with all our UI controls from now on.

    b. Also add a live.comment that says "pitch count" to label the input in Presentation Mode.

  3. Use the Object Action Menu to connect the live.numbox to the pitchCount attribute. NOTE: This only works in normal patching mode, not Presentation Mode. If you're not seeing it, make sure you aren't in Presentation Mode.

  4. Connect the live.numbox to the button with a patch cord.

I am trying to make the layout of the live.comment objects look nice in Presentation Mode and it's a bit of a struggle. I found this helps: select all the live.comment objects, open the inspector, and under "Layout" settings, change the "Text Justification" to "right". Then, when you drag the live.comment objects around, Max has a "smart alignment" snapping feature that helps you align objects' edges or centers with the objects around them. It's pretty intuitive, and I find it's easier to line up the right edge of all the right-aligned comments.

Anyway, here's where I'm at after adding the UI objects and laying things out in Presentation Mode:

And as usual, we aren't done yet because we didn't actually use the pitchCount attribute in the generator algorithm. To do that, we adjust the outer for loop to iterate pitchCount times:


const primes = [
  2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59, 61, 67, 71, 73, 79, 83, 89, 97, 101, 103, 107, 109,
  113, 127, 131, 137, 139, 149, 151, 157, 163, 167, 173, 179, 181, 191, 193, 197, 199, 211, 223, 227, 229, 233, 239,
  241, 251, 257, 263, 269, 271, 277, 281, 283, 293, 307, 311, 313, 317, 331, 337, 347, 349, 353, 359, 367, 373, 379,
  383, 389, 397, 401, 409, 419, 421, 431, 433, 439, 443, 449, 457, 461, 463, 467, 479, 487, 491, 499, 503, 509, 521,
  523, 541, 547, 557, 563, 569, 571, 577, 587, 593, 599, 601, 607, 613, 617, 619, 631, 641, 643, 647, 653, 659, 661,
  673, 677, 683, 691, 701, 709, 719, 727, 733, 739, 743, 751, 757, 761, 769, 773, 787, 797, 809, 811, 821, 823, 827,
  829, 839, 853, 857, 859, 863, 877, 881, 883, 887, 907, 911, 919, 929, 937, 941, 947, 953, 967, 971, 977, 983, 991,
  997,
];

var startingPitch = 36;
declareattribute("startingPitch",
  { type: "long", min: 0, max: 127, default: 36 });

var pitchCount = 16;
declareattribute("pitchCount",
  { type: "long", min: 1, max: 128, default: 16 });

var durationMultiplier = 1;
declareattribute("durationMultiplier",
  { type: "float", min: 0.01, max: 1, default: 1 });

function msg_dictionary({ clip, scale, grid }) {
  const notes = [];

  const basePitch = startingPitch;

  const { time_selection_start, time_selection_end } = clip;
  const clipLength = time_selection_end - time_selection_start;
  const duration = durationMultiplier * grid.interval;

  let pitch = basePitch;
  for(let i = 0; i < pitchCount; i++) {
    const prime = primes[i];

    let noteCount = 0;
    for (let start=0; start < clipLength / grid.interval; start++) {
      if (start % prime == 0 && (pitch == basePitch || start > 0)) {
        const velocity = 127 - (100*(noteCount % prime))/(prime - 1);
        const start_time = start * grid.interval + time_selection_start;
        notes.push({ pitch, velocity, start_time, duration });        
        noteCount++;
      }
    }
    pitch++;
  }

  outlet_dictionary(0, { notes });
}

Now we can set starting pitch to C-2 and pitch count to 128. Try making the clip at least 64 bars long. Boom! Lots of notes:

Up / Down Direction

Next let's add the ability to invert the pattern so higher pitches use lower prime numbers and lower pitches use higher prime numbers. In other words, we'll flip it upside down.

Again we'll declare an attribute in code, add a UI object to Presentation Mode (with a label), and connect the UI object to the attribute. This time a few things are different because we want a two state toggle-switch-like interface for the "up" and "down" direction. We'll declare the attribute like this:


var direction = 0;
declareattribute("direction", 
  { style: "enum", enumvals: ["up", "down"], default: 0 });

Using the "enum" style with enumvals let's us specify the different values as strings for the UI. They will be set on the direction variable in JavaScript as the enum value indexes (0 for "up" and 1 for "down").

We could use a live.menu object here to get a dropdown UI for selecting "up" or "down", but when there are only two or three options, I often like to use a live.tab object instead. Add a live.tab and connect it to the direction attribute (remember you need to be in normal patching mode, not Presentation Mode, to connect to the attribute).

As usual, connect the live.tab to the button with a patch cord so it will trigger "Auto" behavior. Then, add it to Presentation Mode. In Presentation Mode, add a "direction" label using a live.comment object (which I right aligned again to help with the layout).

Also as usual, we need to update the generator algorithm to do something with the new direction attribute. Instead of always incrementing pitch++ at the end of the main loop, we'll either add 1 or -1 depending on the value of direction. When direction == 0 the direction is "up" and we want to add 1. When direction == 1 the direction is "down" and we want to add -1:


// ... const primes and other attribute declarations ...

var direction = 0;
declareattribute("direction", 
  { style: "enum", enumvals: ["up", "down"], default: 0 });

function msg_dictionary({ clip, scale, grid }) {
  const notes = [];

  const basePitch = startingPitch;
  const pitchIncrement = direction ? -1 : 1;

  const { time_selection_start, time_selection_end } = clip;
  const clipLength = time_selection_end - time_selection_start;
  const duration = durationMultiplier * grid.interval;

  let pitch = basePitch;
  for(let i = 0; i < pitchCount; i++) {
    const prime = primes[i];

    let noteCount = 0;
    for (let start=0; start < clipLength / grid.interval; start++) {
      if (start % prime == 0 && (pitch == basePitch || start > 0)) {
        const velocity = 127 - (100*(noteCount % prime))/(prime - 1);
        const start_time = start * grid.interval + time_selection_start;
        notes.push({ pitch, velocity, start_time, duration });        
        noteCount++;
      }
    }
    pitch += pitchIncrement;
  }

  outlet_dictionary(0, { notes });
}

Now we can flip the pattern of notes upside down:

Starting Prime

Maybe we don't always want to have the starting pitch use the prime number 2. Maybe we want to start with 3, or 5, or any other prime. Let's add a UI control for that.

Let's declare the attribute and update the algorithm logic at the same time. All we need to do is offset the index into the primes[] array to shift where we start. We'll add the attribute startingPrimeIndex for this.


var startingPrimeIndex = 0;
declareattribute("startingPrimeIndex", 
  { style: "enum", enumvals: primes.slice(0, 36), default: 0 });

function msg_dictionary({ clip, scale, grid }) {
  const notes = [];
  
  const basePitch = startingPitch;
  const pitchIncrement = direction ? -1 : 1;

  const { time_selection_start, time_selection_end } = clip;
  const clipLength = time_selection_end - time_selection_start;
  const duration = durationMultiplier * grid.interval;

  let pitch = basePitch;
  for(let i = 0; i < pitchCount; i++) {
    const prime = primes[startingPrimeIndex + i];

    let noteCount = 0;
    for (let start=0; start < clipLength / grid.interval; start++) {
      // ... same as before ...

What's going on with enumvals this time? It's a neat trick: We want the UI to display the actual prime number as the label, but we want the startingPrimeIndex to be the index into the primes[] array for that prime number. This label / index-into-list-of-labels relationship is the way enum attributes work. Instead of setting a list of strings like ["up", "down"], we can set enumvals: primes.slice(0, 36), which is the first 36 prime numbers in primes[] to be used as the labels in the UI. The value that gets set on startingPrimeIndex will be the index into primes[] since we sliced the array starting at the beginning (at index 0).
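
Here's the trick in isolation, as a standalone sketch (the primes list is truncated for brevity):

```javascript
// Truncated primes list, enough to show the label/index relationship
const primes = [2, 3, 5, 7, 11, 13, 17, 19];

// The labels shown in the UI are the prime numbers themselves...
const enumvals = primes.slice(0, 8);

// ...but the value set on the attribute is the chosen label's index.
// Picking "7" in the UI is equivalent to:
const startingPrimeIndex = enumvals.indexOf(7); // 3

// Because we sliced starting at index 0, the index maps straight back into primes[]
const startingPrime = primes[startingPrimeIndex]; // 7
```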

Those are all of the JavaScript code changes. Now we can add a UI object. Since we are displaying numbers, let's use a live.numbox again. It works with enums (including string enumvals, so you can make it display things other than numbers).

You know the drill by now: Add a live.numbox and (when not in Presentation Mode), connect it to the startingPrimeIndex attribute in its Object Action Menu. Connect it with a patch cord to the button that all the other UI controls are patched to. Add the live.numbox to Presentation Mode along with a label like "starting prime".

Here's the patch now in normal patching mode (I moved some other UI controls because it made sense to put "starting prime" next to "starting pitch"):

Voilà! Now we can make sparser patterns:

Legato

Let's add one more UI control before moving on.

The duration UI control's unit is the grid interval. So, if you have a 1/16 note grid, a starting prime of 2, and a duration of 1.0, the starting pitch will play a 1/16 note, rest for a 1/16 note, play another 1/16 note, rest again, and so on. The pitch for prime number 3 goes note, rest, rest, note, rest, rest, etc., where each note and rest is also a 1/16 note. If you change the grid interval to 1/4, the duration of all these notes and rests becomes a 1/4 note.
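
This placement pattern can be sketched in plain JavaScript, using the algorithm's `start % prime == 0` rule (for simplicity, this sketch ignores the special case where non-base pitches skip step 0):

```javascript
// Which steps of a grid get a note for a given prime?
function stepsForPrime(prime, stepCount) {
  const steps = [];
  for (let start = 0; start < stepCount; start++) {
    if (start % prime === 0) steps.push(start);
  }
  return steps;
}

// One bar of 1/16 notes is 16 grid steps:
stepsForPrime(2, 16); // [0, 2, 4, 6, 8, 10, 12, 14] — note, rest, note, rest...
stepsForPrime(3, 16); // [0, 3, 6, 9, 12, 15] — note, rest, rest, note...
```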

Currently, the duration control can make those notes shorter, but it can't make them longer, so we'll always have rests due to the nature of the algorithm. What if we don't want rests? We could set the max duration to something higher than 1.0, but what number would we pick? If we generate the full range of 128 pitches and use the max startingPrimeIndex of 35, we can reach the 163rd prime number, which is 967. If we wanted to fill in all the rests between the notes at that pitch, we'd need to allow a duration that high. But most of the time you'd probably be more interested in setting values like 2 and 3 and 5. The minimum duration is 0.01, and that's a minimum I want to keep. If we imagine dragging a numbox or slider to set any number between 0.01 and 967, the usability of this approach does not seem good.

A somewhat common feature in music production (and music theory) is "legato", which means a note plays up until the start of the next note, with no rests in between. Our goal is to have the option to have no rests. Let's try adding a legato mode to our generator so we can generate sustained sounds.

As explained at the start of this section, the prime numbers control how many rests are generated in between the notes. If we want the notes to be legato when the durationMultiplier (the duration UI control) is set to the default of 1.0, then we can multiply the note durations by the current prime value. Let's do that when a new legato attribute, which can be either 0 (disabled) or 1 (enabled), is enabled:


var legato = 0;
declareattribute("legato", 
  { type: "long", min: 0, max: 1, default: 0 });

function msg_dictionary({ clip, scale, grid }) {
  const notes = [];
  
  const basePitch = startingPitch;
  const pitchIncrement = direction ? -1 : 1;

  const { time_selection_start, time_selection_end } = clip;
  const clipLength = time_selection_end - time_selection_start;
  const baseDuration = durationMultiplier * grid.interval;

  let pitch = basePitch;
  for(let i = 0; i < pitchCount; i++) {
    const prime = primes[startingPrimeIndex + i];

    let noteCount = 0;
    for (let start=0; start < clipLength / grid.interval; start++) {
      if (start % prime == 0 && (pitch == basePitch || start > 0)) {
        const velocity = 127 - (100 * (noteCount % prime))/(prime - 1);
        const start_time = start * grid.interval + time_selection_start;
        let duration = baseDuration;
        if (legato) {
          duration *= prime;
        }
        notes.push({ pitch, velocity, start_time, duration });
        noteCount++;
      }
    }
    pitch += pitchIncrement;
  }

  outlet_dictionary(0, { notes });
}

Because we have to do some conditional manipulation of the duration now, I have introduced baseDuration and a new duration variable in the inner loop. I also pulled out the start_time calculation (which is the same as before) to make the code structure more consistent: now we calculate all the note properties and then call notes.push({...}).

For the UI control, this is simpler than our other attributes. legato is either on or off, and the simple live.toggle toggle button object will work well here. So add a live.toggle, connect it to the legato attribute in the usual way, add it to Presentation Mode, and give it a label "legato".

At this point we're done adding UI controls, so you might want to make things nice and neat. Here's a tip. In Presentation Mode: Position the top-most UI control (mine is the starting pitch numbox) where you want it, and also position the bottom-most (mine is the legato toggle) where you want it. Select all the UI controls (the right column of objects not including the labels). Then in the Max application menu, choose Arrange → Distribute → Vertically. This creates equal space between all the objects and looks nice and consistent. Then you can do the same with the labels on the left column.

This works, but it generates notes that go past the end of the clip. For example, here's the generator with default settings except legato is enabled in a 4-bar clip:

The highest note goes well into the 7th bar. It's annoying because almost half the piano roll's horizontal space is taken up by notes outside of the actual clip length. Let's fix it.

After calculating the legato duration, we can check if the end time of the note (calculated from start_time + duration) goes past the time_selection_end. If so, we set the duration so that it ends exactly at time_selection_end:


        let duration = baseDuration;
        if (legato) {
          duration *= prime;
          if (start_time + duration > time_selection_end) {
            duration = time_selection_end - start_time;
          }
        }
        notes.push({ pitch, velocity, start_time, duration });

Problem solved:

Our generator UI is done!

Using the Current Scale

Before we wrap up, there is one last topic to explore: How to respect the current scale. We've had our scale data ready to use as a parameter to our msg_dictionary() function this entire time, so let's finally use it.

Let's take a look at the scale data using a dict.codebox object. In Ableton Live, scales are associated with clips, and you can have clips with different scales in the same Live Set (even playing at the same time).

Live's global scale has no impact on MIDI tools. Only the clip's scale is relevant.

So the clip we are processing might have a scale applied. Technically, a clip always has a scale: when the clip's scale looks disabled in the UI, its scale is the chromatic scale, which has every note in it.

When I say every note, I mean every note in the 12-tone equal temperament (12-TET) tuning system (Ableton Live's default tuning system), which follows the "Western" music tradition and divides the octave into twelve equal intervals, collectively called the chromatic scale.

Live supports other tuning systems, but at the time of writing, MIDI Tools do not directly support them. You can access TuningSystem data via the Live API, but that's out of scope for these tutorials. I'll assume you're using the default tuning system.

Here's what dict.codebox shows when a clip's scale is disabled:

{
  "scale_mode" : 0,
  "root_note" : 0,
  "scale_intervals" : [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11 ],
  "scale_name" : "Chromatic"
}

And with a clip's scale set to C Major:

{
  "scale_mode" : 1,
  "root_note" : 0,
  "scale_intervals" : [ 0, 2, 4, 5, 7, 9, 11 ],
  "scale_name" : "Major"
}

And with a clip's scale set to D Major:

{
  "scale_mode" : 1,
  "root_note" : 2,
  "scale_intervals" : [ 0, 2, 4, 5, 7, 9, 11 ],
  "scale_name" : "Major"
}

scale_mode indicates whether a scale is enabled for the clip. Since we always have scale data, even when the clip's scale is disabled (in which case it's the chromatic scale), we can always apply the scale in our algorithm and avoid conditional logic. This means we can ignore scale_mode in our generator logic.

The scale_intervals are relative to the root_note. When we change from C Major to D Major, only the root_note changes. This presents a slight problem because we are operating on MIDI pitches, which are absolute values. We need to convert this relative interval data to something that will work with MIDI pitches. If you have scale data such as the above examples, and you are trying to decide if MIDI pitch 59 is in the current scale, how do you know?

A good way to handle this is to convert the scale's root note and intervals list to a list of pitch classes. Then we can calculate the pitch class of any MIDI pitch with pitch % 12. If the MIDI pitch's pitch class is in the scale's list of pitch classes, then the MIDI pitch is in the scale and we can generate notes for that pitch.

In code, the conversion from scale data to pitch class works like this:


const { scale_intervals, root_note } = scale;
const pitchClasses = scale_intervals.map(i => (root_note + i) % 12);

And then this will be true if the given pitch is in the scale:


pitchClasses.includes(pitch % 12)
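
Putting those two pieces together, here's a standalone sketch that answers the earlier question about MIDI pitch 59, using the D Major scale data shown above:

```javascript
// D Major scale data, in the same shape as the dictionary from live.miditool.in
const scale = { root_note: 2, scale_intervals: [0, 2, 4, 5, 7, 9, 11] };

// Convert the root note and relative intervals into absolute pitch classes
const pitchClasses = scale.scale_intervals.map(i => (scale.root_note + i) % 12);
// pitchClasses is [2, 4, 6, 7, 9, 11, 1] — D, E, F#, G, A, B, C#

// MIDI pitch 59 is a B, and its pitch class is 59 % 12 === 11
const inScale = pitchClasses.includes(59 % 12);
// inScale is true, because B is in D Major
```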

Now we have to wrap the main note-generating part of our algorithm (inside the outermost pitch loop) with a check to see if the current pitch is in the scale. If so, we get the next prime number and generate the notes for that pitch, otherwise we keep going and look for the next pitch in the current scale.

To make this work properly with the pitchCount attribute, we need to keep an additional counter of how many pitches we actually generated notes for. I'm using a new pitchesGenerated variable to keep track of it. In the loop iterations where we actually generate notes, we increment pitchesGenerated at the end of the iteration. We also adjust our lookup of the next prime number to primes[startingPrimeIndex + pitchesGenerated], so we advance to the next prime only when we generate notes.

Finally, because we're changing how many times we might loop, I'm tightening up the logic to ensure we'll stop looping if we generate invalid MIDI pitches (outside the range 0-127 inclusive):


function msg_dictionary({ clip, scale, grid }) {
  const notes = [];

  const basePitch = startingPitch;
  const pitchIncrement = direction ? -1 : 1;

  const { time_selection_start, time_selection_end } = clip;
  const clipLength = time_selection_end - time_selection_start;
  const baseDuration = durationMultiplier * grid.interval;

  const { scale_intervals, root_note } = scale;
  const pitchClasses = scale_intervals.map(i => (root_note + i) % 12);

  let pitch = basePitch;
  let pitchesGenerated = 0;

  while(pitchesGenerated < pitchCount && pitch >= 0 && pitch <= 127) {
    if (pitchClasses.includes(pitch % 12)) {
      const prime = primes[startingPrimeIndex + pitchesGenerated];      

      let noteCount = 0;
      for (let start=0; start < clipLength / grid.interval; start++) {
        if (start % prime == 0 && (pitch == basePitch || start > 0)) {
          const velocity = 127 - (100*(noteCount % prime))/(prime - 1);
          const start_time = start*grid.interval + time_selection_start;
          let duration = baseDuration;
          if (legato) {
            duration *= prime;
            if (start_time + duration > time_selection_end) {
              duration = time_selection_end - start_time;
            }
          }
          notes.push({ pitch, velocity, start_time, duration });
          noteCount++;
        }
      }
      pitchesGenerated++;
    }
    pitch += pitchIncrement;
  } 

  outlet_dictionary(0, { notes });
}

And now if a scale is enabled on the clip, only the notes of the scale will be generated. If you toggle the scale on and off in the main clip properties immediately after generating notes, it will update the notes to fit the scale:

Scales are ignored when the instrument on the track is a Drum Rack. If the scale support is not working, make sure you aren't on a track with a Drum Rack.

There's a bug, though. If the starting pitch is not in the scale, the first note isn't generated. Try it: set the starting pitch to C#1 and use a C Major scale. The first note is a D, and it does not start on the first beat. That's because of this logic:


if (start % prime == 0 && (pitch == basePitch || start > 0)) {

Specifically, the part pitch == basePitch is a problem, because the first pitch that actually generates notes is not basePitch when the base pitch isn't in the scale. We can fix this by relying on our new pitchesGenerated counter. Change this condition to:


if (start % prime == 0 && (pitchesGenerated == 0 || start > 0)) {

pitchesGenerated == 0 will be true the first time we generate any notes, regardless of the basePitch and scale settings.

Now the scale support should work correctly in all cases.

Final Version

Here's the final version of the code. We accomplished quite a bit in less than 80 lines of code, and half of that code is the list of primes and attribute declarations:


const primes = [
  2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59, 61, 67, 71, 73, 79, 83, 89, 97, 101, 103, 107, 109,
  113, 127, 131, 137, 139, 149, 151, 157, 163, 167, 173, 179, 181, 191, 193, 197, 199, 211, 223, 227, 229, 233, 239,
  241, 251, 257, 263, 269, 271, 277, 281, 283, 293, 307, 311, 313, 317, 331, 337, 347, 349, 353, 359, 367, 373, 379,
  383, 389, 397, 401, 409, 419, 421, 431, 433, 439, 443, 449, 457, 461, 463, 467, 479, 487, 491, 499, 503, 509, 521,
  523, 541, 547, 557, 563, 569, 571, 577, 587, 593, 599, 601, 607, 613, 617, 619, 631, 641, 643, 647, 653, 659, 661,
  673, 677, 683, 691, 701, 709, 719, 727, 733, 739, 743, 751, 757, 761, 769, 773, 787, 797, 809, 811, 821, 823, 827,
  829, 839, 853, 857, 859, 863, 877, 881, 883, 887, 907, 911, 919, 929, 937, 941, 947, 953, 967, 971, 977, 983, 991,
  997,
];

var startingPitch = 36;
declareattribute("startingPitch",
  {type: "long", min: 0, max: 127, default: 36});

var pitchCount = 16;
declareattribute("pitchCount", 
  {type: "long", min: 1, max: 128, default: 16});

var durationMultiplier = 1;
declareattribute("durationMultiplier",
  {type: "float", min: 0.01, max: 1, default: 1});

var direction = 0;
declareattribute("direction", 
  {style: "enum", enumvals: ["up", "down"], default: 0});

var startingPrimeIndex = 0;
declareattribute("startingPrimeIndex", 
  {style: "enum", enumvals: primes.slice(0, 36), default: 0});
  
var legato = 0;
declareattribute("legato", 
  {type: "long", min: 0, max: 1, default: 0});

function msg_dictionary({ clip, scale, grid }) {
  const notes = [];

  const basePitch = startingPitch;
  const pitchIncrement = direction ? -1 : 1;

  const { time_selection_start, time_selection_end } = clip;
  const clipLength = time_selection_end - time_selection_start;
  const baseDuration = durationMultiplier * grid.interval;

  const { scale_intervals, root_note } = scale;
  const pitchClasses = scale_intervals.map(i => (root_note + i) % 12);

  let pitch = basePitch;
  let pitchesGenerated = 0;

  while(pitchesGenerated < pitchCount && pitch >= 0 && pitch <= 127) {
    if (pitchClasses.includes(pitch % 12)) {    
      const prime = primes[startingPrimeIndex + pitchesGenerated];      

      let noteCount = 0;
      for (let start=0; start < clipLength / grid.interval; start++) {
        if (start % prime == 0 && (pitchesGenerated == 0 || start > 0)) {
          const velocity = 127 - (100*(noteCount % prime))/(prime - 1);
          const start_time = start*grid.interval + time_selection_start;
          let duration = baseDuration;
          if (legato) {
            duration *= prime;
            if (start_time + duration > time_selection_end) {
              duration = time_selection_end - start_time;
            }
          }
          notes.push({ pitch, velocity, start_time, duration });
          noteCount++;
        }
      }
      pitchesGenerated++;
    }
    pitch += pitchIncrement;
  }

  outlet_dictionary(0, { notes });
}

Here is the final Max patch if you had any trouble getting things to work, or if you jumped ahead to try it.

IMPORTANT NOTE: When pasting this into an empty MIDI tool patch, the patch needs to be set to open in Presentation Mode in the patch inspector. Refer to the "Open in Presentation" section earlier in this article if you need a reminder on how to do that. If you don't do this, the UI will not display properly and the MIDI tool will be mostly unusable.


----------begin_max5_patcher----------
2465.3oc0Zs1aaarD8y1+JHHPKjiUT46G1wonnWfKJvs.86wABzRqbX.Eo.I
kiSc8+86blkOknnnkkcbMRVQt6ryYlyt6vYWxGN8D0aRtWjopbgxmTN4jGN8
jS3pPEmTb+IpKCteVTPFKlZT3chIyRVtTDmqNVJPt39bYihaCxSJqdUP9ruD
Fe6zTwrbIFV5NSzFqXLwdrhsNtT2ahlxmK6QpHizaPdXRbidYvcRW9is1VcC
3+00Y4gKBmw8Ecynnwv4rokbyWeuoVooEudYXbjHm8I8NPuY8jvIqyKkVCU9
3omhhwOERKO41aiDkVvcAowAKEbyohHBz6D+m0oRv6g.sXVyfYPcalJr2OC5
5KYP2czsFjjgaM7ojElKRmJhCtIRrAmzAAhNLcVRbbAvp24M829sKtXi4ECh
mKqUVU92WIjthpZkUmEbmX9zf77zvaVmKpuJqXrnXv.rczZQxhxpKqusOtdY
ABIKVnNl9ItBpVRFFGlGFDIEVqOQ1l3ZIUTR7sUSBZwQsDaIMWhUAMr0QqIy
o+w5nqly9RRZ9..ojf6xNWSdSV92kNhlrcdEP4OGxxgcECIKOHMmlzqrJMbo
nmkB9tXZrIJb0FZnDtOFdS7zZ9mdmpXngUzM+QFVgTCjpivJkL4eAh7OhmKt
uG1TWmI.ClOsrFZfEOYu7mXtIg1gNZxYFG+nL83uCZHvXmQbnfAKhRBxe4C8
X.rLQgMJbQgtNWx0pKqwGkFbMF70lrLlbqV70VbqVbM1RMxR5vs5v06xW6xs
5xs5wW6wW6KwRSBulDeshJ8kVlrRCYklEFZwcRQrj+Xq+ZFLcmSE5Lvpo8yM
t5vvKKWrhmeX5zSDXi8EA1+ENB77Pr9t+7PL7zOjD473n0tSrZEsv4YkUmt2
Ozr5BtoqXuChCskICqwIkooUby.i45ZOweSVzwXKcPt4znvXQ1zRKnIIzjFe
Ax6aKVng0rigkgMbY1Wn5Wwv0qWAvlm7sW0TE2jV6NawmaLsdQoJTlwa4HYq
Boo7JyRVW2VWo9Hy86IlIoNKus4D8My7ww5viko+1LURlH+88widtGdJj1FS
bdZoP5e7iW0ga91NywVwWzcNVAX1hG5NBC+37tZOLdP6W0Zuwf52Pdyrm046
+faLc4bkNrS9RSehaqEGlOqLlb+QFjY+66Gi48Elw+P22uEH+mwF8cdaFbdP
y9bpO1vgGZt3XC0rlXr4zu9iMa+BjKYgW9mqixCWEEJR+WYL5iVNf6hNdJGc
XQfZZEwdxVzb+YKNH6oj101WDa8iTD6Ab5X6IbiowgmTiyN5TiUJVufmCVKO
6eQKRLcN5mCTCp3vyFYuZavIj387mdSLcd+w74ItNxbxsJK67vX05aRnVeSB
G36r4l.btCe9.byXw2HSriynZV9julDteBP2Rtz0eqiFoACz8QHY7zbRXSzp
qfzueXt5cdTxQyE6HdEu3d+dqKeDa1tE6N0ra+s+2Xwhj37rv+VNUVahVuz.
DtxH+velDmjsJXlX9GU2+KvC8cAIbyoYyJVNRWDmkKeKPYJWo7oqiUnQDEj5
KkEG4pzND0MgKS+2GgnwKhfDfp2jpyB6fDYp3Bhf9O0lC1dNcuKdiOTctTcd
zudzu9tbl1n.ZUiu0eLfUmww.0XxnxW4iIWnvF0Yi5bfb..cW1xPqdnUet.0
wv3C6ENfAa0PyFFrOfaMknZ.OvvlKf.P4FNPJX9FtnNnbCO1yoBSXzlPulvh
MA0Xx7ArXSKt.TD3CSPHlvXMgwZ5JQ0DZyDDhILUKdi3DOP7IJLXVkK.2BRv
hYYnWKXmVfjsfwZAkaARvxCWARvBttsbCL3JRe.UavC1vgs4wKnHanCa3v1v
gsgCaCEY6yCmX7jOuE30Nvgcz4qPCl7AwvEnAnTG30N7z.GIpNvwcfdcfW63
yuHPtvGSRPAFabgu5Be0EZyEiItvDcgu5BSzkmNAqyEzlGTfGFI7L3BLIiTE
P0yfm0gBXQdPQdvr7fC6AywCiqd7LSnRe3g9Pa9vl7gB7wPpO7Pe3b9PU9fv
8AY4Cs3CE36K8Ueef9mu753qio3JJsdXFs9xzgZZtfBGkJpdT9nqa+PuqUYc
8.VMegx0p3gsTcJTPDrBVghmcgbgxbwh.JUP7tNd7rJPq2OOgn9NPrVndgSu
BNuFvo2DtsSJEv1MpaK6lny4+T6sHo4BKnI9Mgu7DTIT01ApkhTBFmh.PCmx
L.C+RIPkQgNuVc8JT007INes5mafqVCb29sQsaCXaYGfkHCJOIKJblXD1Lty
Y6vTjevA6FdY6CaZUGPrXcrjfWlc6z5m+N5AkYQgqFqjMKHRLV41zv4JOdlx
C.E4yUhSxkOVQthnr5aBxDkqHZMy+xZY34m+Q7rTANHCRv5w4eU4853Yisz4
CJ4DcMMSDIkZJq3waVqHlrQRavxafFt8+IhuMGlTGc48cp8Ka6RkeYQvX2dM
w6XBZRXLk3GM.ugwyb3zxFyFqjljjOE7GatbyaxN+NRpg42M58jkAqFEpb0G
UFUqlyUBOS4mnExmU.MkufTOjBpFQtrUKhr+qHVPthXdwrKz729RXjXzVB7g
lAd94etP2ej5W8ce3JD3pXNhhR3BkQMcEhclEsdtHSVqzbqjtx4wBCxdjKP9
TGKDOeKyubBH9CtGnjxPjZWV1xhjTkQnYVmWoco7Bx0ZLA4W1XfTJy4m2vNk
dlru+To8VPDaSbbK+y+Tf0GUzNqkpJc66DQIyBy+tBygzTxQ5ZZuaTsmTfzY
m8KijP9dE8ytbaEw3LMWRh7MuqkGQz2NmsW9GHo40y2aN8ukbfGjge1vmTp6
96JFJurc6UTnzTOutCeriknaodkl1WmKoqU9FH+XyaacCGPaxp0Yegh9wCji
qFWF2PgiqA+w1CAUCWmedi5q.o5hMmkTIdgDxEHme0FAJYgdTNWWtmflQron
7OTDTlMqGK29vBZEc09KhShq9jvjm+.saFh454vG5bKsG91t2w9Q4CaZY37v
7jjnICYaoZ7gC4161RMTG9du26qD+nt+619Kg4dcXW4YP3Y2mCq+Db3lmjNq
KU9KKn8GyLaNn91tXVx5zYkDTwKSr98yqRg4ov1UmU1mpOlDHTmr3PAxa..Y
dDvw90xgrds.xbH.cDvg+X1dU7HiA.j2QBG8Afi9yDG8WMlS23UCIsAfjwV.
ICHErZ0chzrBgYLnnqeMIk+3WFy2FFKukO+K0TwcgkxyejfpAoT7zbJX55T4
y.u2Q9NCTwAWmFuNr38kPdGAIG4FOujOEN4IESA3O8wS++nT2e+G
-----------end_max5_patcher-----------

Next Steps

That wraps up the tutorials on MIDI Tools, and most of the tutorials on JavaScript (V8) in Ableton Live.

I want to revisit the logging code one more time because there's a bunch of things we can do to make it better, so it works well in any JavaScript project. It's not the most exciting thing, but when you need to debug code, being able to log effectively is invaluable. I haven't been mentioning logging in the MIDI Tools tutorials because we were able to use dict.codebox to look at data, but when I was designing these tutorials, I needed to use the logging code to debug when things weren't working.

Maybe one day, Max will have a robust console logger built in, but until then, we can achieve it ourselves in "Max Console #2: Logging Improvements".

👉 Leave feedback on this article 👈