The Live API

This is a follow-up to the JavaScript in Live tutorial "The Max Console". In this tutorial we'll learn how to access the different Live objects available to the Live API, how to examine their properties and child objects, and how to make changes to the objects.

In the next tutorial, we'll use this knowledge to generate MIDI clips.

Setup

Start with a new Max MIDI Effect device containing a v8 or v8.codebox object, as explained in "Getting Started".

First let's paste in our log() function that we built in the previous tutorial. This will help us explore the Live API. The rest of this tutorial assumes log() is defined in your script. I won't show this code again, so pretend it's at the top of all the code examples that follow.


const toString = (any) => {
  const s = String(any);
  return s.includes("[object Object]") ? JSON.stringify(any) : s;
}
const log = (...any) => post(...any.map(toString), "\n");
const error = (...any) => globalThis.error(...any.map(toString), "\n");
log("------------------------------------------------\n", new Date());

Ready? Let's create a LiveAPI object and take a look at some of its properties:


const liveObject = new LiveAPI();

log("path:", liveObject.path);
log("id:", liveObject.id);
log("children:", liveObject.children);
log(liveObject.info);

The Max window shows:

path:
id:  0
children:  this_device,control_surfaces,live_app,live_set
No object

Hmmm... The path is empty and we're apparently looking at "No object". But it has some children, which seems promising (more on that soon).

The reason we have no object is because we haven't connected the API to an actual Live object yet. We can do that by giving a path to the LiveAPI constructor:


const liveObject = new LiveAPI("live_set master_track");

log("path:", liveObject.path);
log("id:", liveObject.id);
log("children:", liveObject.children);

I've omitted liveObject.info this time. We'll come back to it soon. Now we should see something like this (your id may be different):

path:  live_set master_track
id:  2
children:  canonical_parent,clip_slots,devices,group_track,mixer_device,view
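As an aside, the id itself can be used as a path. Here's a hedged sketch (the function name is illustrative, and it's wrapped in a function so it only runs when you call it inside Live's v8 object, where LiveAPI and our log() helper exist):

```javascript
// Assumption: LiveAPI also accepts "id N" as a path once you know
// an object's id (worth verifying against the LiveAPI reference).
function lookUpById(id) {
  const obj = new LiveAPI(`id ${id}`);
  log("path:", obj.path);
  return obj;
}

// lookUpById(2); // use the id that was logged for your master track
```

This can be handy when you've stored an id earlier and want to get back to the same object without rebuilding its path.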

Live Paths

In order to connect to an object in Live we need to give the LiveAPI a path to the object. How do we determine the path?

Examine the Live Object Model diagram. We form paths by starting from a root object and following the arrows on the diagram from object to object. What's a root object? Remember the children of the liveObject when we had "No object"? Those are the root objects: live_app, live_set, control_surfaces, and this_device.

As we follow the arrows around the Live Object Model diagram, we build a space separated string. That's the path. When we did "live_set master_track", this corresponds to starting at the live_set root object (at the top of the diagram), and following the master_track arrow down to a Track object (represented by the box that says "Track"):

Live Object Model Diagram excerpt

Let's try a more complex path. Follow along with the arrows in the diagram.


const path = "live_set master_track mixer_device volume";
const liveObject = new LiveAPI(path);

log("path:", liveObject.path);
log("id:", liveObject.id);
log("children:", liveObject.children);
The Max window shows:

path:  live_set master_track mixer_device volume
id:  3
children:  canonical_parent

This time we've followed arrows in the diagram as far as we can, but there's still a child called "canonical_parent". As you might guess from the name, this is a parent object from which you can reach this object. So the term "child" is misleading here. Think of children as paths we can follow to reach other objects.

Live Objects

Now we can access Live objects via paths. We can learn a lot about the different Live objects by looking at their info property.


const liveObject = new LiveAPI("live_set");
log(liveObject.info);
The Max window shows:

id 1 
type Song 
description This class represents a Live set. 
children cue_points CuePoint 
children return_tracks Track 
children scenes Scene 
children tracks Track 
children visible_tracks Track 
child groove_pool GroovePool 
child master_track Track 
child view View 
property appointed_device NoneType 
property arrangement_overdub bool 
property back_to_arranger bool 
property can_capture_midi bool 
property can_jump_to_next_cue bool 
property can_jump_to_prev_cue bool 
property can_redo bool 
property can_undo bool 
property clip_trigger_quantization int 
property count_in_duration int 
property current_song_time float 
property exclusive_arm bool 
property exclusive_solo bool 
property file_path str 
property groove_amount float 
property is_ableton_link_enabled bool 
property is_ableton_link_start_stop_sync_enabled bool 
property is_counting_in bool 
property is_playing bool 
property last_event_time float 
property loop bool 
property loop_length float 
property loop_start float 
property metronome bool 
property midi_recording_quantization int 
property name str 
property nudge_down bool 
property nudge_up bool 
property overdub bool 
property punch_in bool 
property punch_out bool 
property re_enable_automation_enabled bool 
property record_mode bool 
property root_note int 
property scale_intervals IntVector 
property scale_mode bool 
property scale_name str 
property select_on_launch bool 
property session_automation_record bool 
property session_record bool 
property session_record_status int 
property signature_denominator int 
property signature_numerator int 
property song_length float 
property start_time float 
property swing_amount float 
property tempo float 
property tempo_follower_enabled bool 
property tuning_system NoneType 
function capture_and_insert_scene 
function capture_midi 
function continue_playing 
function create_audio_track 
function create_midi_track 
function create_return_track 
function create_scene 
function delete_return_track 
function delete_scene 
function delete_track 
function duplicate_scene 
function duplicate_track 
function find_device_position 
function force_link_beat_time 
function get_beats_loop_length 
function get_beats_loop_start 
function get_current_beats_song_time 
function get_current_smpte_song_time 
function is_cue_point_selected 
function jump_by 
function jump_to_next_cue 
function jump_to_prev_cue 
function move_device 
function play_selection 
function re_enable_automation 
function redo 
function scrub_by 
function set_or_delete_cue 
function start_playing 
function stop_all_clips 
function stop_playing 
function tap_tempo 
function trigger_session_record 
function undo 
done 

First there's some general information: id, type, and description. Note the type of this object is a Song. If you click the "Song" box back in the Live Object Model diagram, it will jump to the reference for the Song object, which provides more detailed information.

After the general info, there's a long list of the things in the object. They fall into three categories: children, properties, and functions. Let's take a closer look at each.

Live Object Children

As we've seen, Live objects' children correspond to Live paths to other objects.

We can determine the children of a Live object by consulting the Live Object Model reference, or by looking at a LiveAPI object's children and info properties with log(liveObject.children) and log(liveObject.info).

We form a Live path by going from one child to the next in the hierarchy of Live objects. Sometimes a child is actually a parent in the case of "canonical_parent", so we can move up and down the object hierarchy. For example, the paths "live_set master_track canonical_parent" and "live_set" will both give you the Song object.
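We can check that equivalence with a quick sketch. The function name is illustrative, and this only runs inside Live's v8 object, where LiveAPI and our log() helper exist:

```javascript
// Both paths should resolve to the same Song object, so their ids match.
function compareCanonicalPaths() {
  const viaParent = new LiveAPI("live_set master_track canonical_parent");
  const direct = new LiveAPI("live_set");
  log("same object?", viaParent.id === direct.id); // should be true
}
```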

Children come in two forms: single child and children list. You'll see both forms in the info property, such as when we logged liveObject.info for the "live_set" path in the previous section:

children cue_points CuePoint 
children return_tracks Track 
children scenes Scene 
children tracks Track 
children visible_tracks Track 
child groove_pool GroovePool 
child master_track Track 
child view View

Children lists have additional implications for Live paths. We need to tell Live which child in the list we want to access. This is done by providing an index into the list, counting from 0 as is typical in programming languages. Let's take a look at some examples:


// the first track:
new LiveAPI("live_set tracks 0");

// the second track:
new LiveAPI("live_set tracks 1");

// the second clip slot in the third track:
new LiveAPI("live_set tracks 2 clip_slots 1")

// the first control surface:
new LiveAPI("control_surfaces 0");

Back in the Live Object Model reference, take a closer look at the different types of arrows. Arrows come in single and list types. List type arrows are children lists where we need to provide an index to access a particular object. Also note the "canonical paths" are solid lines. These indicate which arrows to follow backwards when going up to a "canonical_parent".

Live Object Model Diagram Path Types
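Children lists pair naturally with the LiveAPI getcount() method, which returns the number of objects in a children list. As a hedged sketch (the function name is illustrative; this only runs inside Live's v8 object), we can loop over every track in the set:

```javascript
// Log the name of every track in the Live set.
function logTrackNames() {
  const liveSet = new LiveAPI("live_set");
  const trackCount = liveSet.getcount("tracks"); // size of the children list
  for (let i = 0; i < trackCount; i++) {
    const track = new LiveAPI(`live_set tracks ${i}`);
    log(`track ${i}:`, track.get("name"));
  }
}
```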

Live Object Properties

Live object properties store the state of each Live object. They allow us to look at the current state of Live and change that state.

We can examine the object's properties with the liveObject.get() method:


const liveObject = new LiveAPI("live_set");
log("tempo:", liveObject.get("tempo") );

And we can change the object's properties with the liveObject.set() method:


const liveObject = new LiveAPI("live_set");
liveObject.set("tempo", 80);

Note the tempo has changed in Live after you run this script. Also note this change is performed as an undoable operation. I believe you can always undo changes made by liveObject.set().
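Since undo appears in the Song object's function list (we saw it in the info output earlier), we can sketch that out. Treat the exact undo behavior as an assumption to verify yourself; this only runs inside Live's v8 object:

```javascript
// Change the tempo, then immediately undo the change.
function setTempoThenUndo() {
  const song = new LiveAPI("live_set");
  const before = Number(song.get("tempo"));
  song.set("tempo", 80);  // an undoable edit
  song.call("undo");      // should restore the previous tempo
  log("tempo restored?", Number(song.get("tempo")) === before);
}
```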

All properties can be get(), but not all properties can be set(). The Live Object Model reference shows whether each property is "read-only" or not. If it says "read-only", you can't call set() for that property.

For example, the Song's name property is read-only, so we cannot set the name of the Live Set via the Live API.
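Here's a sketch of that failed attempt (the function name is illustrative; this only runs inside Live's v8 object). Expect Live to reject the call and report an error in the Max Console rather than renaming the set; the exact message may vary:

```javascript
// Trying to set a read-only property does not rename the Live set.
function tryToRenameSet() {
  const song = new LiveAPI("live_set");
  song.set("name", "My Renamed Set"); // fails: "name" is read-only on Song
}
```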

Object Property Observers

Some properties also have an indicator for "observe". To observe a property, you can register a callback function when you construct a LiveAPI object. That function receives two-element lists containing a property name and its new value every time that property changes:


const onChange = ([property, value]) => {
  log(`onChange: ${property} = ${value}`);
}
const liveObject = new LiveAPI(onChange, "live_set");
liveObject.property = "tempo"; // trigger onChange() for tempo changes

liveObject.set("tempo", 110);
liveObject.set("tempo", 130);
The Max window shows:

onChange: id = 1
onChange: tempo = 120
onChange: tempo = 110
onChange: tempo = 130

Note the object's id is also reported through this callback, so check which property has changed and code your logic as needed.

onChange() will be called whenever you change the tempo in Live. I defined it as a constant assigned to an arrow function, rather than using the function keyword, so that it can't be called from the Max patch (as we learned when creating our custom log() function). You can also define it inline if you prefer:


const liveObject = new LiveAPI(
  ([property, value]) => log(`onChange: ${property} = ${value}`),
  "live_set"
);
liveObject.property = "tempo";
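Because the id notification arrives through the same callback, a common pattern is to ignore it and react only to the property you registered. A hedged sketch (the callback name is illustrative; LiveAPI and log() exist only inside Live's v8 object):

```javascript
// Ignore the initial "id" notification and react only to tempo changes.
const onTempoChange = ([property, value]) => {
  if (property === "id") return; // sent once when the observer attaches
  log(`tempo is now ${value}`);
};

// Inside Live's v8 object:
// const liveObject = new LiveAPI(onTempoChange, "live_set");
// liveObject.property = "tempo";
```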

Live Object Functions

Besides setting properties, we can call functions on Live objects in order to make changes to Live and trigger various features. For this we use liveObject.call(). Here's a simple example. Make sure the transport is stopped and try the following script:


const liveObject = new LiveAPI("live_set");
liveObject.call("start_playing");

Depending on the nature of the function, its result may or may not be undoable. This should reflect typical Live behavior. In this case, clicking play on the transport is not undoable, so neither is the Song's start_playing function.

Many things that can be clicked and manipulated with the mouse in Live's GUI can also be triggered with functions in the LiveAPI.

This is one of the simplest examples of calling a Live object function, because it has no parameters. In future articles we'll see how to call functions that take parameters. As a quick preview, you can do things like this:


const liveObject = new LiveAPI("live_set");
liveObject.call("create_midi_track", 0);

Here, the create_midi_track function takes an integer for the index at which to insert a new track (or -1 to insert at the end). By passing in the parameter 0, we insert a new track at the beginning of the list of tracks.
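delete_track appears in the same function list and takes the same kind of index, so we can sketch a round trip (the function name is illustrative; this only runs inside Live's v8 object):

```javascript
// Create a MIDI track at the front of the track list, then remove it again.
function createAndDeleteTrack() {
  const song = new LiveAPI("live_set");
  song.call("create_midi_track", 0); // insert at index 0
  song.call("delete_track", 0);      // delete the track at index 0
}
```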

this_device

You may have noticed the root object this_device does not appear on the Live Object Model diagram. It's a special path for the Max for Live Device object that contains our JavaScript code. The canonical_parent is particularly useful here, because we can start from our Max for Live device and go up to the containing track. From there we can interact with Live objects relative to the current Max for Live device.


const liveObject = new LiveAPI("this_device");
log("current Max for Live device path:", liveObject.path);

const parent = new LiveAPI("this_device canonical_parent");
log("current Max for Live device's parent:", parent.path);
The Max window shows:

current Max for Live device path:  live_set tracks 1 devices 0
current Max for Live device's parent:  live_set tracks 1

In this case, the Max for Live device was the first device on the second track.
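Putting this to use, here's a sketch that reads the name of whichever track contains the device (the function name is illustrative; this only runs inside Live's v8 object, and relies on Track having a name property, which we can confirm in the Live Object Model reference):

```javascript
// Log the name of the track this Max for Live device sits on.
function logContainingTrackName() {
  const track = new LiveAPI("this_device canonical_parent");
  log("this device is on track:", track.get("name"));
}
```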

Safely Constructing a LiveAPI Object

We have been doing what I call an "exploratory coding session". It's useful because we get to try lots of features and learn how Max for Live works. But you need to be aware: some of what we are doing is "wrong" and you shouldn't do it when you build a real Max for Live device.

There are constraints on when you can safely construct a new LiveAPI() object. The documentation for the JS LiveAPI object explains:

Technical note: you cannot use the LiveAPI object in JavaScript global code. Use the live.thisdevice object to determine when your Max Device has completely loaded (the object sends a bang from its left outlet when the Device is fully initialized, including the Live API).

We've been using the LiveAPI object in JavaScript global code this entire time, and it has worked. During an exploratory coding session we might briefly run into this problem, but once we've opened the Max patch editor, the device is fully initialized, and our code keeps re-running inside that already-initialized device. Because edits re-run the script after initialization, everything appears to work. But that's only the device maker's experience: a device built this way will be broken when you share it with other people.

When JavaScript code runs immediately during device initialization, which happens for any top-level code in a v8 object when running inside Live (rather than the Max patch editor), calls to new LiveAPI() will fail.

As an experiment, let's make a simple device to randomize the tempo. We can see if it's working or not because it should always change the tempo. Use this code:


const randomTempo = 80 + 60 * Math.random();
const liveObject = new LiveAPI("live_set");
liveObject.set("tempo", randomTempo);

Save the script a few times from the code editor window (or click the hammer icon to force it to re-run a few times in v8.codebox). It should work and keep changing the tempo because we're in "exploratory coding session" mode.

Now save the Max patch and close the patch editor. Remove the device from your track and then re-add it. It doesn't work. The tempo doesn't change. Open Live's copy of the Max Console (the "Max Window") from the device title bar right-click menu (see "Finding the Max Console" #3 if you're not sure where to find it) and you'll see the problem we've been talking about:

Live API is not initialized, use live.thisdevice to determine when
initialization is complete

This message gives a hint towards the solution:

  1. Add an object to the Max patch to trigger your JavaScript after the device has completely initialized. Two Max objects are commonly used:
    • live.thisdevice automatically triggers when the device is initialized
    • live.button provides a UI to trigger manually
  2. Connect the live.thisdevice or live.button object to the v8 or v8.codebox object to trigger it with a bang message
  3. Wrap your JavaScript Live API logic in a bang() function

Here's a fixed version of the tempo randomizer using both automatic and manual triggering (which is a pretty reasonable thing to do, depending on the script). I used this opportunity to start sketching out a UI for the device by moving everything except the comically large live.button (the big black circle) below the "device line" in the patch. Nothing below that line can be seen in Live, as you can see in the bottom screenshot.



function bang() {
  const randomTempo = 80 + 60 * Math.random();
  const liveObject = new LiveAPI("live_set");
  liveObject.set("tempo", randomTempo);
}

Now, if you save the device and close the Max patch editor, adding the device to a Live set will randomize the tempo and you can click the button to randomize it as much as you want. The errors are gone from the Max Window.

Live API Documentation

I can't cover everything about the Live API in these articles. When you need more information, Max's documentation is a great resource.

Next Steps

The "Generating MIDI Clips" tutorial covers how to algorithmically generate notes into a MIDI clip using the Live API.

Table of Contents:
  1. JavaScript (V8) in Ableton Live Overview
  2. Getting Started
  3. Real Time MIDI Processing
  4. The Max Console
  5. The Live API
  6. Generating MIDI Clips
  7. Max Console Part 2: Better Logging for Objects