Archives

UX and Music Software, Part 2: The rise of multitrack recording

Everything within this series is a combination of research and opinion. I strive to be as accurate as possible; however, I may skip a number of interesting details within this large history. Please feel free to email me with any opinions or ideas.


Part 1 of this series mentioned how Pro Tools originated as a project out of UC Berkeley and became a consumer product around 1991. Let’s acknowledge why digital multitrack recording is important. First of all, it helped resolve the obvious limitations of tape. The conversation during the 1990s was, “Sure, tape sounds great, but now you can have theoretically endless dubbing options with digital recordings. Record ten takes, twenty, or even one hundred!” This was a sales and marketing sentiment, and it was discussed in music production circles: the novelty of “endless” takes, the ability to select the best recording, the freedom from the hassle tape presents.

Multitrack recording became the sales position of music software as well as its creative angle. Since Pro Tools tried to solve perceived problems of tape recording, its solutions defined the experience of the software. Its solutions were in response to mainstream music production concepts. This is why the multitrack paradigm became as significant as it is. It not only existed as a common production concept in recording studios before the digital era—record one performer, then another, then mix it together—but it continued as a paradigm during the digital era.

In other words: new technology initially set out to solve old problems instead of looking for new ones. Pro Tools popularized the trope of solving increasingly older recording problems (no one talks about the limits of tape anymore).

Multitrack Recording Defined Creative Direction

The popularity and significance of Pro Tools defined the marketplace. Multitrack recording was a thing, and so its marketing defined expectations in the minds of consumers. How else would a consumer/musician discuss computer music in the 90s without discussing the software’s ability to mix multiple tracks and record multiple tracks simultaneously? There were few other defining aspects in the early 90s. Many other factors defined the fidelity of the final recorded material—my first computer I/O box only supported 20-bit audio.

So as personal computing increased dramatically in relevance, very much due to Microsoft, the computer music industry had to compete on that OS. It did so through Emagic Logic, Cubase, and Twelve Tone Systems’ Cakewalk. In fact, Emagic initially only offered a feature-rich MIDI sequencer on Windows, months before it provided any audio features (this link may not work as of 6/20/16 but is currently being migrated to SOS’s new site).

The presence of multiple companies defining the landscape of computer music around multitracking acted as further education for new consumers. This was also the easiest way to design and market these products. A musician who was new to computer-aided music only knew of a few options. It defined how consumers could experience computer music recording.

In Part 1 of this series I discussed trackers. They did not bubble up to the top alongside Pro Tools because their paradigm was not familiar enough to demand attention from companies. Imagine if it had. Imagine the way music would have been handled differently. Let that sink in. This is one way in which software defined creative directions. Software has played a large role in defining the styles of music available over the past few decades. If Pro Tools, for some radical reason, had included a tracker feature, the history of modern music would be different (more on trackers later).

However it wasn’t just the motivations of business that popularized multitrack recording. It was also the focus of many musicians. It’s increasingly difficult to recall musical society without electronic music, yet even into the 90s many musicians opted for solo musicianship with acoustic or electric instruments or chose to be in bands. If this was the most common way to perform music, then it makes sense that software companies focused on fulfilling those needs. Many musicians are just regular people adopting the musical strategy of their peers and those they admire.

Why does that matter? Many musicians opted to be traditional solo acts or to work within traditional band structures during the early 90s and certainly before. Multitrack software supported this. However, as the decades have passed, the manner of solo musicianship has changed. Did the software lag behind? Few, if any, electronic musicians dub their lead synth on top of their bass track on top of their drum track—a multitrack recording strategy. Since few solo musicians do this, why is this style of software still used by contemporary solo acts?

What about MIDI?

The world of MIDI had a separate evolution outside of digital audio recording. MIDI, the protocol for communicating between different music machines, began its standardization in 1983 and was strongly focused on communication between synthesizers. It had some years to define its role on the computer. By the late 80s we had MIDI sequencers—possibly first introduced on the Atari system—and they introduced very different user interface concepts compared to the later multitrack concepts.

Side note: I just noticed I keep saying Emagic Logic. Some may be wondering if that’s the same as Apple’s Logic. It is. Apple purchased Emagic, the German company behind Logic, in the early 2000s.

Two young technologies converge

As mentioned, it is my opinion that Pro Tools popularized computer-aided music during the early 90s, but why didn’t the MIDI sequencer do the same? It was a less common paradigm. Fewer musicians approached music from the perspective of sequencing MIDI notes. Fewer knew it existed. A traditional guitarist wasn’t handling MIDI. Since there was money to be made, companies broadened their objectives: MIDI sequencing was combined with multitrack recording.

So two things were occurring by the early 1990s: companies discovered the increasing relevance of multitrack recording on the computer, and companies that previously focused on MIDI sequencing saw an opportunity to converge both approaches. All the while, alternatives like trackers and MaxMSP (initially a visual programming language for MIDI) existed quietly in the background. This means we had two user interface concepts, handling two different approaches to music production, slowly integrating into one another.

More about the history of Pro Tools – http://www.musicradar.com/tuition/tech/a-brief-history-of-pro-tools-452963

The next part in this series will focus on the MIDI sequencer.

Immigration Hackathon Project

I recently took the opportunity to finally attend a hackathon event held by the city of Los Angeles and officially supported by the mayor. It was an amazing experience that reminded me of my passion for social and civil matters. The theme of this particular hackathon was immigration, and a very interesting assortment of people showed up for the event: activists, non-profit members, and technologists.

I joined a group who decided to create an “all in one” web application that addresses the many needs of new immigrants and/or uninformed immigrants in the Los Angeles area. Our perspective is that many services and resources exist across the city but are either hard to find or difficult to vet. Our discussion was very rewarding, and it reminded me of my time as a graduate researcher working on a project for people displaced by Hurricane Katrina. One of the key things I tried to stress with the group is that this web application needs to be optimized for those who want to HELP others. We fully understand that not every immigrant in L.A. who is seeking services will have a smartphone with a data plan, let alone speak English.

I’ll discuss the project in more detail as time goes on, but we’re all excited and hope to submit this to a final city-wide competition in a few months. Here is a very early process chart showing how a user would flow through portions of this web application.

AskAng-Process-822015

Prison Industrial Complex: Current Development Part 1

As I’ve worked through my currently titled project, “Prison Industrial Complex: A Game”, it’s become larger, more detailed, and more interesting. To summarize, this game explains details about the private prison system from the perspective of a dystopian machine. The tone is cynical, and it engages the player by asking them questions.

Still in the planning and structuring phase, I find myself engaged in many methods of planning and diagramming. I’d like to take a moment to cover this process.

Initial Diagrams

After my sketches on graph paper but before the wireframes, I create outlines that organize a variety of notes relating to game components, UI elements, research notes, and programming notes. On a side note, the Mac application “Tree 2” is supremely useful for this purpose.

Prison Industrial Complex

The game/narrative will layer information while simultaneously testing the participant. For example, prison data will be populated in the background of the game, creating a graph. Also on the drawing board is the implementation of a live Twitter feed relating to the company Corrections Corporation of America. That’s a possibility, a maybe; if the Twitter content is interesting, it will work. What I know will definitely be included is live stock market data relating to the company. My hope is that the game design increasingly feels chaotic and distracting, something that could parallel the tone of loud sounds and lights.

Wireframes

At this stage in the process, not all the details of the game are in place. I still need to dive deeper into the subject matter, which will likely allow me to generate more details. So the wireframes currently represent broad types of interaction. Certain types of screens will exist, and I am currently focused on designing those.

Idle Screen Voice

The wireframe above represents the “Idle Screen”. This screen introduces the narrator of the game, who is represented by vertical lines. The vertical lines are a visualization of the voice/audio, so as the narrator speaks, the lines fluctuate in height.
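As a rough illustration of that behavior (and not the project’s actual code), here is a minimal Processing sketch of audio-reactive vertical lines. It assumes the processing.sound library that ships with Processing, and the narration file name is a placeholder.

```java
import processing.sound.*;

SoundFile narration;
Amplitude amp;
int numLines = 32;

void setup() {
  size(800, 400);
  narration = new SoundFile(this, "narration.wav"); // hypothetical narration audio file
  narration.loop();
  amp = new Amplitude(this);
  amp.input(narration);
}

void draw() {
  background(255);
  float level = amp.analyze();                 // rough loudness of the voice, 0.0 to 1.0
  float spacing = width / float(numLines + 1);
  stroke(0);
  for (int i = 0; i < numLines; i++) {
    // vary each line slightly so the group flutters instead of moving as one block
    float jitter = noise(i * 0.3, frameCount * 0.02);
    float h = level * height * 0.8 * jitter;
    float x = spacing * (i + 1);
    line(x, height / 2 - h / 2, x, height / 2 + h / 2);
  }
}
```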

Highlight

The following wireframe represents a “Question” screen that offers three answer options to the player. This project is being designed with the Leap Motion in mind, so a player will move their hand horizontally across the Leap Motion in order to select an option. Once their hand is idle over an option for a length of time, a countdown timer begins, counting down from 3 to 0. Once that countdown completes, the selection is official. This approach is meant to substitute for a click.
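Here is a minimal Processing sketch of the dwell-to-select idea, again only an illustration rather than the project’s code. mouseX stands in for the horizontal hand position the Leap Motion would supply, and the option labels are placeholders.

```java
String[] options = { "Answer A", "Answer B", "Answer C" };
int hoveredOption = -1;     // option the hand is currently over
int hoverStartMs = 0;       // when the current hover began
int selectedOption = -1;    // the locked-in answer, -1 until chosen
int DWELL_MS = 3000;        // countdown from 3 to 0 seconds

void setup() {
  size(900, 300);
  textAlign(CENTER, CENTER);
  textSize(24);
}

void draw() {
  background(255);
  int colWidth = width / options.length;
  // mouseX stands in for the horizontal hand position from the Leap Motion
  int current = constrain(mouseX / colWidth, 0, options.length - 1);

  if (current != hoveredOption) {   // the hand moved to a new option: restart the countdown
    hoveredOption = current;
    hoverStartMs = millis();
  }

  int elapsed = millis() - hoverStartMs;
  if (selectedOption < 0 && elapsed >= DWELL_MS) {
    selectedOption = hoveredOption; // countdown finished: the selection is official
  }

  for (int i = 0; i < options.length; i++) {
    fill(i == selectedOption ? 200 : 240);
    rect(i * colWidth, 0, colWidth, height);
    fill(0);
    text(options[i], i * colWidth + colWidth / 2, height / 2);
  }
  if (selectedOption < 0) {
    int remaining = ceil((DWELL_MS - elapsed) / 1000.0f);
    text(str(remaining), hoveredOption * colWidth + colWidth / 2, height - 40);
  }
}
```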

Designs

I’m often inspired by Russian constructivism, minimalism, and the grunge typography of David Carson. I don’t intend to abandon all typographic rules, and I understand there’s a contradiction in the influences listed; however, I hope to either strike a balance or create a contrast. We will see.

prison_new_layout2_white

prison_new_layout_idle

prison_new_layout2

Application Diagrams

As a part of this process, I’m introducing myself to UML diagrams, which has been fascinating. Relating the UML diagrams to the designs and wireframes really changes my perspective on those parts of development.

uml

Sound Design & Music

During this fresh development I’m scrapping my previous approach, where I placed most of the logic in MaxMSP. Max is a flow-based programming language but became obviously inefficient when dealing with the logic of this game, at least with its built-in objects. I might consider writing my own Max objects in the future. I’m told it’s pretty easy to do so in Java (as opposed to C++).
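For reference, a custom Max object written in Java (an “mxj” external) usually takes roughly the following shape. This is only a minimal sketch: it assumes Max’s bundled Java API (max.jar on the classpath), and the class name and its trivial bang-counting behavior are hypothetical examples, not part of this project.

```java
import com.cycling74.max.DataTypes;
import com.cycling74.max.MaxObject;

public class BangCounter extends MaxObject {
    private int count = 0;

    public BangCounter() {
        declareInlets(new int[]{ DataTypes.ALL });   // one inlet that accepts anything
        declareOutlets(new int[]{ DataTypes.INT });  // one outlet that sends integers
    }

    // Called when the inlet receives a bang: increment and send the count out.
    protected void bang() {
        count++;
        outlet(0, count);
    }

    // Called when the inlet receives an int: reset the counter to that value.
    protected void inlet(int value) {
        count = value;
    }
}
```

Compiled against max.jar and placed where Max can find the class, it would be loaded in a patch as [mxj BangCounter].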

For now, the logic resides in Processing while the sound, music, some math, and video processing effects will reside within Max. One part of this project that I’m very excited to eventually present is the music. The music will be partially algorithmic but, more importantly, will shift and change as the user progresses through the game, as is generally expected. My intention is to create tension through the music that hopefully influences the experience of the gameplay. I will arrive at that stage eventually.

Screen Shot 2015-07-04 at 1.25.51 PM

This completes the first summary of my development process.

Let’s kill the Step Sequencer

Or a “Re-imagination of the Step Sequencer Paradigm”

As I continue my research into algorithmic music processes, I’m working hard at constructing modular building blocks in MaxMSP. Through this recent work I’ve created modular components dealing with probability, simple synths, envelopes, etc. So my mindset has been “Modularity!”, not just because it’s a good idea but because I’m still ineffective at planning and structuring more elaborate projects. This is how I decided to approach a traditional step sequencer. The first component I needed to develop was the “step” component. This is where I realized the limiting assumptions of “sequencing” data.

step-module-maxmsp

My earliest introductions to music sequencing were with Cakewalk Pro Audio and the hardware sequencer on a Korg N364 (still one of the best). I was eventually introduced to step sequencing through the Korg EA-1 and Fruity Loops. The assumed structure of both approaches goes like this: place data within a grid; sometimes you can control the grid; done. There are of course alternatives, such as Euclidean sequencers and likely other options out there. Considering how practically any form of data can be translated into any other form of data, music sequencing is theoretically limitless. You could turn a series of tweets into musical information. You could claim to control those tweets and thus “sequence” your music based on that set of rules. On a pragmatic level, however, we deal with common sequencing paradigms.

As I worked my way through analyzing my needs for a step sequencer, I separated the concept into a few smaller parts: a step, a time control, and a timeline. Traditional sequencing ideas can fall neatly into these categories. Let’s hold it up against the Euclidean approach for a moment (a small code sketch follows the list):

  • A Euclidean sequencer uses circles, a radius, and its circumference in order to understand timing and additional data.
  • A line representing the radius moves within the circle at an arbitrary rate. The rate is the time control, and the circle could be called the timeline.
  • Nodes might be placed within the circle at varying distances from the center; the nodes represent sound/music/data. These could be called steps.
  • When the radius crosses a node, the node is triggered. This type of functionality basically abandons a common grid system.
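As a rough sketch of that circular behavior (a simplified plain-Java simulation, not a Max patch), the following example sweeps a radius line around a circle and prints whichever node it crosses. The node angles, sounds, and the two-second rotation rate are illustrative assumptions.

```java
public class RadialSequencerDemo {
    public static void main(String[] args) {
        double[] nodeAngles = { 45.0, 120.0, 200.0, 330.0 };   // where each node sits on the circle
        String[] nodeSounds = { "kick", "snare", "hihat", "clap" };

        double rotationMs = 2000.0;   // one full sweep every two seconds: the "time control"
        double stepMs = 10.0;         // simulation resolution
        double prevAngle = 0.0;

        // Simulate two full rotations of the radius line around the "timeline".
        for (double t = stepMs; t <= 2 * rotationMs; t += stepMs) {
            double angle = (t / rotationMs * 360.0) % 360.0;
            for (int i = 0; i < nodeAngles.length; i++) {
                // A node fires when the sweeping radius crosses its angle during this step.
                boolean crossed = prevAngle < angle
                        ? (nodeAngles[i] > prevAngle && nodeAngles[i] <= angle)
                        : (nodeAngles[i] > prevAngle || nodeAngles[i] <= angle); // wrap-around
                if (crossed) {
                    System.out.printf("%5.0f ms: %s%n", t, nodeSounds[i]);
                }
            }
            prevAngle = angle;
        }
    }
}
```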

A Euclidean sequencer is a pretty exciting approach, but I stumbled across another funny way to structure sequenced information: a method that could mimic both a step sequencer and a Euclidean one, with more options.

Separating the “step” from the timeline and time control source

Since my original intention was simply to build a traditional step sequencer in parts, I first started on the component I’m calling the “step”. I decided to combine a few features such as the playlist Max object (which plays samples), a custom envelope I call “ADXR”, signal input, and one additional, curious feature: a bang output. This is where I realized the step sequencing concept could be tweaked.

4-steps

The bang output (for non-Max users, a bang is a trigger message) leaves the step object after a duration of time. In other words, when a step has a duration of one whole note and is triggered, the bang output occurs after a whole note transpires. That output can then trigger another step. Repeat. Let me describe it more carefully (a code sketch of the chain follows the list):

  • The step object receives a trigger
  • The step object plays a sample (a kick drum, for example)
  • The step object counts down the duration that was input into the object (for example, a quarter note’s length)
  • When the duration ends, a trigger output occurs
  • The output can trigger anything else, including another step (for example, a snare)
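To make the chain concrete, here is a minimal plain-Java sketch of the idea rather than the Max patch itself: each step “plays” a sample (printed to the console), waits its duration, then triggers whatever is wired to its output. It also wires the steps into a feedback loop and mixes a notation-based duration with a raw millisecond one, which anticipates the scenarios described below. The 120 BPM tempo and sample names are placeholders.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.PriorityQueue;

public class StepChainDemo {
    static final double BPM = 120.0;                               // assumed tempo
    static long quarterNoteMs() { return Math.round(60000.0 / BPM); }

    // One "step": play a sample, wait a duration, then trigger the steps wired to the output.
    static class Step {
        final String sample;                      // e.g. "kick", "snare"
        final long durationMs;                    // wait before the outgoing trigger fires
        final List<Step> outputs = new ArrayList<>();
        Step(String sample, long durationMs) { this.sample = sample; this.durationMs = durationMs; }
    }

    static class Event {
        final long timeMs; final Step step;
        Event(long timeMs, Step step) { this.timeMs = timeMs; this.step = step; }
    }

    public static void main(String[] args) {
        Step kick  = new Step("kick",  quarterNoteMs());      // notation-based duration
        Step snare = new Step("snare", 200);                  // raw milliseconds, off the grid
        Step hat   = new Step("hihat", quarterNoteMs() / 2);

        // Wire a feedback loop: kick -> snare -> hihat -> kick -> ...
        kick.outputs.add(snare);
        snare.outputs.add(hat);
        hat.outputs.add(kick);

        // Event-driven simulation of the first two seconds of the chain.
        PriorityQueue<Event> queue = new PriorityQueue<>(Comparator.comparingLong((Event e) -> e.timeMs));
        queue.add(new Event(0, kick));                        // an initial trigger starts the chain
        while (!queue.isEmpty()) {
            Event event = queue.poll();
            if (event.timeMs > 2000) break;
            System.out.println(event.timeMs + " ms: " + event.step.sample);
            for (Step next : event.step.outputs) {
                queue.add(new Event(event.timeMs + event.step.durationMs, next));
            }
        }
    }
}
```

Rewiring the outputs lists is the textual equivalent of repatching the steps in Max.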


My audio examples are still in their “Autechre” phase

I’ve designed the steps to receive notation-based times and milliseconds, so the system can work without a grid (by grid I’m referring to structured times). A kick drum plays. We wait 200ms. The snare plays. We wait one quarter note. Another snare plays. We wait 100ms. A hihat plays…

Here’s where it gets interesting. This modular setup allows for feedback loops: a sequence of steps can trigger itself in a cycle. A structured rhythm can be set up to follow a strict time, OR a step can occur based on millisecond durations, which aligns with some of the qualities of a Euclidean sequencer.

If you wanted to fully mimic a Euclidean approach, you would want to create multiple feedback loops with at least two steps each (Max does not allow one object to loop back into itself without a few caveats). With multiple feedback loops like this you could trigger them all and copy the Euclidean approach. However, this is just the start. Before I list other possible scenarios I should admit that this isn’t necessarily innovative. It’s not groundbreaking. Many people using MaxMSP–and similar technologies–are engaged in abstract, complicated feedback loops and flows. That’s not new. I think what is interesting about this concept is that it bridges traditional electronic music user experience concepts with more intricate flows. I’ve taken a “step” paradigm and designed it to fit into a variety of scenarios.

I can’t say the wheel has been reinvented here, but I think a new type of wheel is being introduced for those who need it. This modular step approach can get rather complicated.

The Modular Step Object Flowchart

ModularStepSeq-flowchart

This is a work in progress and outlines some of the key factors of my step object/patch. There are details of the patch that I am ignoring because they are not critical to the overall purpose, but the flowchart captures the essential features. It does not, however, offer an example of an elaborate sequence. I have another flowchart to explain that.

Two nearly identical loops with (likely) very different results

step-seq-concept-loop

Loop A roughly represents a simple loop. A kick is followed by a snare, then hihats. Then it almost repeats. If you look at the second-to-last hihat in Loop A, that hihat triggers two things: two more hihats. Those two hihats then go on to follow their own sequences. One of the hihats loops back to a kick. The other eventually stops after a few steps. Additionally, you can see the durations designated between triggers. The only surprising duration in Loop A is a 16th note in the middle. You can almost imagine how this might sound. Think for a moment how that rhythm may go.

Now let’s look at Loop B. At first glance it’s pretty similar, but look at the durations. We have 8ths, 16ths, and 200ms. That alone really complicates the rhythm and moves it outside typical time structures. Look at the second-to-last hihat as well. What’s different? It loops back into the main loop! How do you think this rhythm sounds? I have no idea. In fact, during development of all this I constantly found myself in reboot-worthy feedback loops. That is one problem I still need to address. If you can avoid that, though, you will find yourself creating very different types of rhythmic structures.

Early stages of development

I skipped additional details regarding this development. For example, I’m building a signal gate into each step object, so you can sequence samples and sequence gates upon a live signal. Although the audio I provide may not shed too much light on the potential, I will continue going down this rabbit hole to see what musical qualities can be achieved this way. I think it looks promising.

A video demo (#modular #feedback #drummachine) was posted by @estevancarlos.

“Boundaries” Leap Motion / MaxMSP Project

A downloadable PDF is available containing an overview of the project

Two notes: this is a work in progress, and I’m not sure how effectively I can pull off this UI within Max. It’s more or less a pie-in-the-sky user interface that I hope can be managed. That aside, I took a step back from just programming in Max to take a moment and actually plan my development.

Overview:

“Boundaries” is a Max for Live patch that displays the functional region of the Leap Motion and allows for the quantization and MIDI mapping of this region. You can view one of my early sketches here.

boundaries

Background:

This project utilizes the Leap Motion. The Leap Motion “tracks both hands and all 10 fingers” using stereo cameras and infrared LEDs.

Goals:

  • To scale incoming data by defining quadrants within the Leap’s detectable area, which we’ll call “space”.
  • “Boundaries” aims to quantize the space based on user input of columns, rows, and depth.
  • The user can then define which hand parameter(s) (palm, finger, etc.) will act as a trigger when entering a specific quadrant in space.
  • It will offer color-coded feedback to communicate when your hand is leaving the detectable range of the Leap Motion.
boundaries_chart

Quadrants:

I wanted to be able to designate and control the region that the Leap Motion detects. One of the interesting parts of the device is the amount of complicated data it can receive and send. This can become unwieldy, so the idea of quantizing the space seemed useful. I created three parameters that control rows, columns, and depth.
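As a rough sketch of the quantization math on its own (plain Java rather than Max), the following example maps a palm position to a column/row/depth index. The millimeter ranges and the 4×3×2 grid are illustrative assumptions, not values from the patch.

```java
public class QuadrantDemo {
    // Map a value within [min, max] to one of `divisions` equal bins.
    static int quantize(double value, double min, double max, int divisions) {
        double normalized = (value - min) / (max - min);          // 0.0 - 1.0 within the space
        normalized = Math.max(0.0, Math.min(1.0, normalized));    // clamp if the hand drifts out
        return Math.min((int) (normalized * divisions), divisions - 1);
    }

    public static void main(String[] args) {
        int columns = 4, rows = 3, depths = 2;                    // user-defined grid

        // A hypothetical palm position in millimeters (x, y, z) reported by the Leap Motion.
        double palmX = 62.0, palmY = 180.0, palmZ = -35.0;

        int column = quantize(palmX, -200.0, 200.0, columns);     // left/right across the device
        int row    = quantize(palmY,   50.0, 400.0, rows);        // height above the device
        int depth  = quantize(palmZ, -150.0, 150.0, depths);      // toward/away from the user

        System.out.println("quadrant: column " + column + ", row " + row + ", depth " + depth);
    }
}
```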

quadrants

Triggers:

The Leap Motion can recognize many details relating to the hand and turn them into incoming data. This patch allows the user to select the incoming data and then apply it to a selected quadrant. The result is that when the Leap Motion data is true within a quadrant, it triggers an assigned mapping once and will not trigger again until the gesture is repeated.
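Here is a minimal plain-Java sketch of that “trigger once” behavior, interpreting it as edge detection: the mapping fires on the frame the hand enters the assigned quadrant and not again until it leaves and re-enters. The quadrant indices in the example are arbitrary.

```java
public class QuadrantTrigger {
    private final int targetColumn, targetRow, targetDepth;
    private boolean inside = false;   // were we in the quadrant on the previous frame?

    public QuadrantTrigger(int column, int row, int depth) {
        this.targetColumn = column; this.targetRow = row; this.targetDepth = depth;
    }

    /** Returns true only on the frame the hand enters the quadrant. */
    public boolean update(int column, int row, int depth) {
        boolean nowInside = column == targetColumn && row == targetRow && depth == targetDepth;
        boolean fired = nowInside && !inside;   // rising edge: outside -> inside
        inside = nowInside;
        return fired;
    }

    public static void main(String[] args) {
        QuadrantTrigger trigger = new QuadrantTrigger(2, 1, 0);
        int[][] frames = { {0, 0, 0}, {2, 1, 0}, {2, 1, 0}, {1, 1, 0}, {2, 1, 0} };
        for (int[] f : frames) {
            System.out.println(trigger.update(f[0], f[1], f[2]) ? "trigger!" : "-");
        }
    }
}
```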

triggers

Visualization

One of the problems with using the Leap Motion is that not many programs offer any feedback system regarding your hands within its detectable range. What this means is that it’s too easy to move your hand(s) out of the range, thus dramatically altering the data you’re attempting to send.

Some kind of feedback system can easily be implemented by at least offering visual feedback based on the position of the palms and their x, y, z data. This patch will contain a visualization that shows the quadrants in addition to displaying feedback when you leave the region or approach its outer range, which is designated as a red area.

Screen Shot 2015-01-05 at 12.06.09 AM
Current development in Max/MSP

leap_diagram

3d_range

And so…

I still have to see how much of this UI can be developed in Max. I have my suspicions that nuanced aspects will be impossible or impractical to develop, but we will see. Regardless, developing a feedback system for the Leap Motion using Jitter will be a very useful project in and of itself.