
UX and music software, Part 2: The rise of multitrack recording

Everything within this series is a combination of research and opinion. I strive to be as accurate as possible; however, I may skip a number of interesting details in this large history. Please feel free to email me with any opinions or ideas.


Part 1 of this series mentioned how Pro Tools originated as a project out of UC Berkeley and became a consumer product around 1991. Let’s acknowledge why digital multitrack recording is important. First of all, it helped resolve the obvious limitations of tape. The conversation during the 1990s was, “Sure, tape sounds great, but now you can have theoretically endless dubbing options with digital recordings. Record ten takes, twenty, or even one hundred!” This was a sales and marketing sentiment, and it was discussed in music production circles: the novelty of “endless” takes. Select the best recording. You’re no longer constrained by the hassle tape presents.

Multitrack recording became the sales position of music software and its creative angle. Since Pro Tools tried to solve perceived problems of tape recording, its solutions defined the experience of the software. Its solutions were a response to mainstream music production concepts. This is why the multitrack paradigm became as significant as it is. It not only existed as a common production concept in recording studios before the digital era (record one performer, then another, then mix it all together) but continued as a paradigm during the digital era.

In other words: new technology initially set out to solve old problems instead of looking for new ones. Pro Tools popularized the trope of solving increasingly older recording problems (No one talks about the limits of tape anymore).

Multitrack recording defined creative direction

The popularity and significance of Pro Tools defined the marketplace. Multitrack recording was a thing, and so its marketing defined expectations in the minds of consumers. How else would a consumer/musician discuss computer music in the 90s without discussing the software’s ability to mix multiple tracks and record multiple tracks simultaneously? There were few other defining aspects in the early 90s. Many other factors defined the fidelity of the final recorded material (my first computer I/O box only supported 20-bit audio).

So as personal computing increased dramatically in relevance, very much due to Microsoft, the computer music industry had to compete on that OS. It did so with Emagic Logic, Cubase, and Twelve Tone Systems’ Cakewalk. In fact, Emagic initially offered only a feature-rich MIDI sequencer on Windows, months before they provided any audio features (this link may not work as of 6/20/16 but is currently being migrated to SOS’s new site).

The presence of multiple companies defining the landscape of computer music around multitracking acted as further education for new consumers. This was also the easiest way to design and market these products. A musician new to computer-aided music knew of only a few options. It defined how consumers could experience computer music recording.

In Part 1 of this series I discussed trackers. They did not bubble up to the top alongside Pro Tools because their paradigm was not familiar enough to demand attention from companies. Imagine if it had. Imagine how differently music would have been handled. Let that sink in. This is one way in which software defined creative directions. Software has played a large role in defining the styles of music available over the past few decades. If Pro Tools, for some radical reason, had included a tracker feature, the history of modern music would be different (more on trackers later).

However, it wasn’t just the motivations of business that popularized multitrack recording. It was the focus of many musicians. It’s increasingly difficult to recall musical society without electronic music. However, even into the 90s, many musicians opted for solo musicianship with acoustic or electric instruments, or chose to be in bands. If this was the most common way to perform music, then it makes sense that software companies focused on fulfilling their needs. Many musicians are just regular people adopting the musical strategies of their peers and those they admire.

Why does that matter? Many musicians opted to be traditional solo acts or to play within traditional band structures during the early 90s and certainly before. Multitrack software supported this. However, as the decades passed, the manner of solo musicianship changed. Did the software lag behind? Few, if any, electronic musicians dub their lead synth on top of their bass track on top of their drum track (a multitrack recording strategy). Since few solo musicians do this, why is this style of software still used by contemporary solo acts?

What about MIDI?

The world of MIDI had a separate evolution outside of digital audio recording. MIDI, the protocol for communication between different music machines, was standardized in 1983 and was strongly focused on communication between synthesizers. It had some years to define its role on the computer. By the late 80s we had MIDI sequencers (possibly first introduced on the Atari), and they introduced very different user interface concepts compared to later multitrack concepts.

Side note: I just noticed I keep saying Emagic Logic. Some may be wondering if that’s the same as Apple’s Logic. It is. Apple acquired Emagic, the German company behind Logic, in the early 2000s.

Two young technologies converge

As mentioned, it is my opinion that Pro Tools popularized computer-aided music during the early 90s. So why didn’t the MIDI sequencer do the same? It was a less common paradigm. Fewer musicians approached music from the perspective of sequencing MIDI notes. Fewer knew it existed. A traditional guitarist wasn’t handling MIDI. Since there was money to be made, companies broadened their objectives: MIDI sequencing was combined with multitrack recording.

So two things were occurring by the early 1990s: companies discovered the increasing relevance of multitrack recording on the computer, and companies that previously focused on MIDI sequencing saw an opportunity to converge both approaches. All the while, alternatives like trackers and MaxMSP (initially a visual programming language for MIDI) existed quietly in the background. This means we had two user interface concepts, handling two different approaches to music production, slowly integrating into one another.

More about the history of Pro Tools – http://www.musicradar.com/tuition/tech/a-brief-history-of-pro-tools-452963

The next part in this series will focus on the MIDI sequencer.

Let’s kill the Step Sequencer

Or a “Re-imagination of the Step Sequencer Paradigm”

As I continue my research into algorithmic music processes, I’m working hard at constructing modular building blocks in MaxMSP. Through this recent work I’ve created modular components dealing with probability, simple synths, envelopes, etc. So my mindset has been “Modularity!”, not just because it’s a good idea but because I’m still ineffective at planning and structuring more elaborate projects. This is how I decided to approach a traditional step sequencer. The first component I needed to develop was the “step” component. This is where I realized the limiting assumptions of “sequencing” data.

[Image: the step module in MaxMSP]

My earliest introductions to music sequencing were with Cakewalk Pro Audio and the hardware sequencer on a Korg N364 (still one of the best). I was eventually introduced to step sequencing through the Korg EA-1 and Fruity Loops. The assumed structure of both approaches goes like this: place data within a grid. Sometimes you can control the grid. Done. There are of course alternatives, such as Euclidean sequencers and likely other options out there. Considering how practically any form of data can be translated into any other form of data, music sequencing is theoretically limitless. You can turn a series of tweets into musical information. You could claim to control those tweets and thus “sequence” your music based on that set of rules. On a pragmatic level, however, we deal with common sequencing paradigms.

As I worked my way through analyzing my needs for a step sequencer, I separated the concept into a few smaller parts: a step, a time control, and a timeline. Traditional sequencing ideas can neatly fall into these categories. Let’s hold it up against the Euclidean approach for a moment:

  • A Euclidean sequencer uses circles, a radius, and the circumference in order to represent timing and additional data.
  • A line representing the radius moves within the circle at an arbitrary rate. The rate is the time control, and the circle could be called the timeline.
  • Nodes might be placed within the circle at varying distances from the center; the nodes represent sound/music/data. These could be called steps.
  • When the radius line crosses a node, the node is triggered. This type of functionality basically abandons a common grid system (a rough sketch of the idea follows below).
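
To make that geometry concrete, here is a minimal Python sketch of the idea, purely my own illustration; the node layout, rate, and tick loop are assumptions, not any particular product:

```python
import math

# Nodes sit at angles on a circle; a "radius" line sweeps at a fixed
# rate, and a node fires whenever the line crosses its angle.
nodes = [
    {"angle": 0.0,             "radius": 1.0, "sound": "kick"},
    {"angle": math.pi / 2,     "radius": 0.5, "sound": "snare"},
    {"angle": math.pi,         "radius": 0.8, "sound": "hihat"},
    {"angle": 3 * math.pi / 2, "radius": 0.3, "sound": "hihat"},
]

rate = math.pi / 2   # radians per tick: the "time control"
prev = 0.0
for tick in range(1, 9):      # two full revolutions
    curr = rate * tick
    for node in nodes:
        a = node["angle"]
        while a < curr:       # unwrap the angle across revolutions
            if a >= prev:
                # the node's radius could map to velocity, pitch, etc.
                print(f"tick {tick}: {node['sound']} (r={node['radius']})")
            a += 2 * math.pi
    prev = curr
```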

A Euclidean sequencer is a pretty exciting approach, but I stumbled across another funny way to structure sequenced information: a method that can mimic both a step sequencer and a Euclidean one, but with more options.

Separating the “step” from the timeline and time control source

Since my original intention was simply to build a traditional step sequencer in parts, I first started on the component I’m calling the “step”. I decided to combine a few features: the playlist Max object, which plays samples; a custom envelope I call “ADXR”; signal input; and one additional, curious feature: bang output. This is where I realized the step sequencing concept can be tweaked.

[Image: 4 steps]

The bang output (for non-Max users, a bang is a trigger message) leaves the step object after a duration of time. In other words, when a step with a duration of one whole note is triggered, the bang output occurs after a whole note transpires. That output can then trigger another step. Repeat. Let me describe it more carefully (a sketch of the flow follows the list):

  • Step object receives a trigger.
  • Step object plays a sample (a kick drum, for example).
  • Step object counts down the duration that was input into the object (a quarter note’s length, for example).
  • When the duration ends, a trigger output occurs.
  • The output can trigger anything else, including another step (a snare, for example).
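
Here is a minimal Python simulation of that flow; the Step class, run scheduler, and max_time cutoff are my own stand-ins for illustration, not parts of the actual Max patch:

```python
import heapq

class Step:
    """A self-contained step: when triggered it fires its sample,
    counts down its duration, then bangs whatever is chained to it."""

    def __init__(self, name, duration, targets=None):
        self.name = name              # the sample it stands in for
        self.duration = duration      # seconds until the outgoing bang
        self.targets = targets or []  # steps triggered by that bang

def run(start_steps, max_time=4.0):
    # A tiny event scheduler: (time, tiebreaker, step) tuples in a min-heap.
    queue = [(0.0, i, s) for i, s in enumerate(start_steps)]
    heapq.heapify(queue)
    counter = len(queue)
    while queue:
        t, _, step = heapq.heappop(queue)
        if t > max_time:
            break  # guard against runaway feedback loops (see below)
        print(f"{t:.3f}s  {step.name}")
        for target in step.targets:
            heapq.heappush(queue, (t + step.duration, counter, target))
            counter += 1

# kick -> snare -> hihat -> back to kick: a simple feedback loop
kick = Step("kick", 0.5)
snare = Step("snare", 0.25)
hihat = Step("hihat", 0.25)
kick.targets, snare.targets, hihat.targets = [snare], [hihat], [kick]

run([kick])
```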


My audio examples are still in their “Autechre” phase

I’ve designed the steps to receive notation-based times and milliseconds, so a sequence can work without a grid system (by “grid” I’m referring to structured times). A kick drum plays. We wait 200ms. The snare plays. We wait one quarter note. Another snare plays. We wait 100ms. A hihat plays…
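
Mixing raw milliseconds with notation-based times just means converting note values to milliseconds at the current tempo. A quick sketch of the arithmetic, using a hypothetical helper rather than any Max object:

```python
def note_to_ms(bpm, note_fraction):
    # note_fraction is relative to a whole note:
    # 1.0 = whole, 0.25 = quarter, 0.0625 = 16th.
    # One quarter note lasts 60000 / bpm milliseconds.
    return note_fraction * 4 * 60000.0 / bpm

# the timeline from the paragraph above, at an assumed 120 BPM:
waits = [200, note_to_ms(120, 0.25), 100]  # ms, quarter note, ms
print(waits)  # [200, 500.0, 100]
```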

Here’s where it gets interesting. This modular setup allows for feedback loops. A sequence of steps can trigger itself in a cycle. A structured rhythm can be set up to follow a strict time, OR a step can occur based on millisecond durations, which aligns with some of the qualities of a Euclidean sequencer (some qualities).

If you wanted to fully mimic a Euclidean approach, you would want to create multiple feedback loops with at least two steps each (Max does not allow one object to loop back into itself without a few caveats). With multiple feedback loops like this you could trigger them all and copy the Euclidean approach. However, this is just the start. Before I list other possible scenarios I should admit that this isn’t necessarily innovative. It’s not groundbreaking. Many people using MaxMSP, and similar technologies, are engaged in abstract, complicated feedback loops and flows. That’s not new. I think what is interesting about this concept is that it bridges traditional electronic music user experience concepts with more intricate flows. I’ve taken a “step” paradigm and designed it to fit into a variety of scenarios.
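
In terms of the earlier Step sketch, that multi-loop idea might look like the following (again my own illustration, reusing the Step and run definitions from above): two independent two-step loops started together, each cycling at its own period so they drift against each other like nodes rotating at different rates.

```python
# Loop 1: kick -> snare -> kick..., total cycle 0.75 s
kick, snare = Step("kick", 0.5), Step("snare", 0.25)
kick.targets, snare.targets = [snare], [kick]

# Loop 2: two alternating hihats, total cycle 0.5 s
hat1, hat2 = Step("hihat", 0.25), Step("hihat", 0.25)
hat1.targets, hat2.targets = [hat2], [hat1]

# Start both loops at once; their differing cycle lengths drift apart.
run([kick, hat1], max_time=3.0)
```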

I can’t say the wheel has been reinvented here, but I think a new type of wheel is being introduced for those who need it. This modular step approach can get rather complicated.

The Modular Step Object Flowchart

[Image: the modular step sequencer flowchart]

This is a work in progress and outlines some of the key factors of my step object/patch. There are some details of the patch that I am ignoring (they are not critical to the overall purpose), but this flowchart outlines a few of the key features. This flowchart, however, does not offer an example of an elaborate sequence. I have another flowchart to explain that.

Two nearly identical loops with (likely) very different results

[Image: step sequencer concept, Loop A and Loop B]

Loop A represents a fairly simple loop. A kick is followed by a snare, then hihats. Then it almost repeats. If you look at the second-to-last hihat in Loop A, that hihat triggers two things: two hihats. Those two hihats then go on to follow their own sequences. One of the hihats loops back to a kick. The other eventually stops after a few steps. Additionally, you can see the durations designated between triggers. The only surprising duration in Loop A is a 16th note in the middle. You can almost imagine how this might sound. Think for a moment how that rhythm may go.

Now let’s look at Loop B. At first glance it’s pretty similar, but look at the durations. We have 8ths, 16ths, and 200ms. That alone really complicates the rhythm and moves it outside typical time structures. Look at the second-to-last hihat as well. What’s different? It loops back into the main loop! How do you think this rhythm sounds? I have no idea. In fact, during development of all this I constantly found myself in reboot-worthy feedback loops. That is one problem I still need to address. If you can avoid them, though, you will find yourself creating very different types of rhythmic structures.
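
In terms of the earlier Step sketch, Loop B’s branching hihat is just a step with two targets, one of which re-enters the main loop. The durations below are placeholders at an assumed 120 BPM, not the values from the diagram:

```python
# Reusing Step/run from the earlier sketch. One step can fan out to
# several targets; one branch dies out, the other re-enters the loop.
kick  = Step("kick",  0.25)   # 8th note at 120 BPM
snare = Step("snare", 0.2)    # 200 ms: off the grid
hat_a = Step("hihat", 0.125)  # 16th note
hat_b = Step("hihat", 0.125)

kick.targets  = [snare]
snare.targets = [hat_a]
hat_a.targets = [hat_b, kick]  # fans out AND loops back
hat_b.targets = []             # this branch simply ends

run([kick], max_time=2.0)
```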

Early stages of development

I skipped additional details regarding this development. For example, I’m building a signal gate into each step object, so you can sequence samples and sequence gates upon a live signal. Although the audio I provide may not shed too much light on the potential, I will continue going down this rabbit hole to see what musical qualities can be achieved this way. I think it looks promising.

[Video posted by @estevancarlos: #modular #feedback #drummachine]

Algorithmic Music Study in Max/MSP #1

The audio in my algorithmic music studies is, right now, irrelevant. I am focused on ways to trigger and vary parameter changes. So I decided to create an animated GIF of Max/MSP instead.

The circles represent a random selection between two options, each of which changes a different parameter. The left-hand graph represents delays in milliseconds between the triggering of four other parameters. The graph in the center is a collection of 16 key:value pairs that are changing other parameters. The graph on the far right is the waveform, which is clearly thrashing.
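
As a rough textual analogue of what the patch is doing (the parameter names and ranges here are hypothetical, chosen only for illustration), the core move is a coin flip routed to one of two parameter changes, repeated after a random millisecond delay:

```python
import random

# Hypothetical parameters standing in for whatever the patch controls.
params = {"cutoff": 0.5, "delay_time": 0.25}

def tick():
    # The "circles": randomly pick one of two options, each of which
    # changes a different parameter.
    name = random.choice(["cutoff", "delay_time"])
    params[name] = random.random()
    # The left-hand graph: a random millisecond delay before the
    # next trigger.
    wait_ms = random.randint(50, 500)
    return name, params[name], wait_ms

for _ in range(4):
    print(tick())
```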

[Animated GIF: the Max/MSP patch in action]