What’s going on with skeuomorphic design for music software?

What will nostalgic, skeuomorphic design for music software be in the year 2050? 3D renderings of iPads within virtual reality?

I sometimes doubt my own knowledge and music ability when I’m faced with complex user interfaces in music software. I think to myself, “I should know this,” followed by, “I think I should know.” I find this especially true when confronting skeuomorphism.

I don’t own any vintage equipment. I rarely interact with classic devices. This isn’t motivated by a lack of interest but is instead a combination of pragmatism and my pocketbook. I do understand that classic equipment plays a significant role in music production and audio engineering for many reasons, and this is often expressed through software user interfaces: a sense of value, physical metaphors, and a stylized tone. Skeuomorphism seems to stress all of those qualities, but I question whether we are being short-sighted.

I am attempting to write a series of articles on the subject of user experience and UI design within DAWs but I wanted to cut to the chase of my thesis briefly in this post: Physical representations in UIs need to be more forward thinking to compensate for lost metaphors and the profound influence UIs have on user experience. In other words: Should we be more beholden to authentic hardware or to newer user experiences that can enhance creativity? How do companies strike a balance?


There are many hobbyist and amateur music producers who may never have access to the vintage analog equipment represented in some software. I find that this issue needs to be confronted for the long term. The affordances expressed through physical hardware may not always translate to a GUI, especially when such physical interaction is increasingly abstract.

In the example below, the item outlined in orange is a style of physical button that I have not seen in probably 10+ years, and I am 34. I have my doubts as to whether a 20-year-old producer has seen this type of knob/button either.

Skeuomorphic Design in Music Software

If we were to analyze the clues of this physical interface item we could discuss the curved front that seems like a perfect placement for a finger or thumb. When I last recall using the physical counterpart, I remember that despite its curved front it could not be pressed in on either side. It was not a toggle. In fact it could sort of wiggle from left to right but always returned to center. During my first time using this type of interface I realized how ingenious it was that the curvature was perfect for gentle motions with just one finger.

Fast forward to today and we have to ask ourselves, “Are the affordances of the original physical button evident as a GUI item?” I haven’t performed any tests, but when I asked one person, my girlfriend, she deduced that it must be a toggle: you press on one side or the other. She reached this conclusion due to its curved shape.

Let’s be honest. It looks CLOSE to a light switch but it is not. This is where skeuomorphism gets messy.

How does that GUI item actually work? You click on it once, anywhere. You don’t drag side to side as you would with the original physical item. It actually is a toggle, but not in the way previously described. It offers an On/Off state, but it offers no visual feedback regarding a left or right side being on or off, probably because the original physical item has no such states.

What will physical representation of music equipment be a few decades from now?

How do we guide producers and musicians along the way? Let’s not stumble towards skeuomorphic representation in the future. Let’s plan for it. Vintage equipment of the past will create more obscure UI metaphors of the future.

Many musicians interact with production in many more ways than before. I myself am already planning on experimenting with music production in virtual reality. It’s for those reasons that the critical use of the right metaphors has to change with time. We already have many musicians comfortable with modern forms of human-computer interaction such as touch, swipes, pinching, and dragging. This need not remain exclusive to the realm of simple smartphone apps.

I am not suggesting we create new software that arbitrarily uses new forms of interaction. I am suggesting future producers will be most familiar with interaction experiences of 2016 rather than 1960. So software user interface design needs to work with that in mind. Plan for that, as I suggest. But how?

I believe some visual metaphors need to be phased out or better implemented. Within the example the buttons outlined in orange should be clear toggle buttons. This software does offer toggle buttons though. The item highlighted in yellow is an example. Do you see it? That’s a toggle switch in the down position. My test subject did not recognize its visual cues. Let’s unpack why that may be.

In the software the top portion of the toggle switch is a few shades of beige, brown, and eggshell. The rectangular portion overlapping the circle represents the top of a switch. One problem we have here is that height or depth is not visually clear. It is not evident that there is a portion of the interface positioned above another portion in any significant way. It’s also not clear there is a cylindrical component connecting things. Basically you can’t see a switch at all.

In my opinion the fastest way to resolve this is to emphasize the down state by visually communicating that the cylindrical portion of the switch is present and pointing downwards. In my example below you can now see more of the switch. This improves the visual communication.

[Image: revised switch mockup (made this in ten minutes)]

But let’s return to my original thesis that skeuomorphism requires a different assortment of questions. Using our switch example, if we visually communicate a clearer toggle switch, we’re confronted with a visual element that takes up more vertical space. This has the potential to pose problems.

  • What if we needed words to explain both states of a toggle switch? Should we move the words higher above and lower beneath the switch so the visualization doesn’t overlap? It overlaps in the original software.
  • What if that takes up too much space once the text is placed differently?

In the images below we see the risk of having the visual switch overlap the text. It inhibits some understanding of the interface. Within the “on” and “off” examples we now have extra space between the words and switch. In design, proximity is often used to suggest relationship. We are losing a sense of proximity in the “on/off” examples.

[Images: switch overlapping its text label; switch in the "on" state; switch in the "off" state]

There’s a way to resolve this: don’t use the visual metaphor of this type of switch.

However, if we return to my initial point, how do we avoid this in nostalgic, skeuomorphic design? If we’re copying a vintage item that has this switch, do we need to be authentic or more communicative? There is value in mimicking a beautiful item, but maybe too many user experiences suffer as a result.

I don’t have the data on this, but I aim to argue, eventually, that the interface has a profound influence on creative output. This simple switch can make the difference between a musical parameter being used well or even at all. Imagine that the proper usage of a musical parameter can be heavily influenced by whether the user even understands the interface. The impact this could have on creative output should give any musician pause.

UX and the music software Part 2: The rise of multitrack recording

Everything within this series is a combination of research and opinion. I strive to be as accurate as possible; however, I may skip a number of interesting details regarding this large history. Please feel free to email me with any opinions or ideas.


Part 1 of this series mentioned how Pro Tools originated as a project out of UC Berkeley and became a consumer product around 1991. Let’s acknowledge why digital multitrack recording is important. First of all, it helped resolve the obvious limitations of tape. The conversation during the 1990s was, “Sure, tape sounds great, but now you can have theoretically endless dubbing options with digital recordings. Record ten takes, twenty, or even one hundred!” This was a sales and marketing sentiment, and it was discussed in music production circles: the novelty of “endless” takes, selecting the best recording, no longer being constrained by the hassle tape presents.

Multitrack recording became the sales position of music software and the creative angle. Since Pro Tools tried to solve perceived problems of tape recording, its solutions defined the experience of the software. Its solutions were in response to mainstream music production concepts. This is why the multitrack paradigm became as significant as it is. It not only existed as a common production concept in recording studios before the digital era (record one performer, then another, then mix it together), but it continued as a paradigm during the digital era.

In other words: new technology initially set out to solve old problems instead of looking for new ones. Pro Tools popularized the trope of solving increasingly older recording problems (No one talks about the limits of tape anymore).

Multitrack recording defined creative direction

The popularity and significance of Pro Tools defined the marketplace. Multitrack recording was a thing, and so its marketing defined expectations in the minds of consumers. How else would a consumer/musician discuss computer music in the 90s without discussing the software’s ability to mix multiple tracks and record multiple tracks simultaneously? There were few other defining aspects in the early 90s. Many other factors defined the fidelity of the final recorded material (my first computer I/O box only supported 20-bit audio).

So as personal computing increased dramatically in relevance, very much due to Microsoft, the computer music industry had to compete on that OS. It did so with Emagic Logic, Cubase, and Twelve Tone Systems’ Cakewalk. In fact, Emagic initially only offered a feature-rich MIDI sequencer on Windows, months before they provided any audio features (this link may not work as of 6/20/16 but is currently being migrated to SOS’s new site).

The presence of multiple companies defining the landscape of computer music around multitracking acted as further education for new consumers. This was also the easiest way to design and market these products. A musician new to computer-aided music knew of only a few options. It defined how consumers could experience computer music recording.

In Part 1 of this series I discussed trackers. They did not bubble up to the top alongside Pro Tools because their paradigm was not familiar enough to demand attention from companies. Imagine if it had. Imagine the way music would have been handled differently. Let that sink in. This is one way in which software defined creative directions. Software has played a large role in defining the styles of music available over the past few decades. If Pro Tools, for some radical reason, had included a tracker feature, the history of modern music would be different (more on trackers later).


However, it wasn’t just the motivations of business that popularized multitrack recording. It was the focus of many musicians. It’s increasingly difficult to recall musical society without electronic music. However, even into the 90s, many musicians opted for solo musicianship with acoustic or electric instruments or chose to be in bands. If this was the most common way to perform music, then it makes sense that software companies focused on fulfilling their needs. Many musicians are just regular people adopting the musical strategy of their peers and those they admire.

Why does that matter? Many musicians opted to be traditional solo acts or to work within traditional band structures during the early 90s and certainly before. Multitrack software supported this. However, as the decades passed, the manner of solo musicianship changed. Did the software lag behind? Few, if any, electronic musicians dub their lead synth on top of their bass track on top of their drum track (a multitrack recording strategy). Since few solo musicians do this, why is this style of software still used by contemporary solo acts?

What about MIDI?

The world of MIDI had a separate evolution outside of digital audio recording. MIDI, the protocol for communicating between different music machines, began its standardization in 1983 and was strongly focused on communication between synthesizers. It had some years to define its role on the computer. By the late 80s we had MIDI sequencers (possibly first introduced on the Atari system), and they introduced very different user interface concepts compared to the later multitrack concepts.

Side note: I just noticed I keep saying Emagic Logic. Some may be wondering if that’s the same as Apple’s Logic. It is. Apple purchased Logic from the German company Emagic in the early 2000s.

Two young technologies converge

As mentioned, it is my opinion that Pro Tools popularized computer-aided music during the early 90s, but why didn’t the MIDI sequencer do the same? It was a less common paradigm. Fewer musicians approached music from the perspective of sequencing MIDI notes. Fewer knew it existed. A traditional guitarist wasn’t handling MIDI. Since there was money to be made, companies broadened their objectives: MIDI sequencing was combined with multitrack recording.

So two things were occurring by the early 1990s: companies discovered the increasing relevance of multitrack recording on the computer, and companies who had previously focused on MIDI sequencing saw an opportunity to converge both approaches. All the while, alternatives like trackers and MaxMSP (initially a visual programming language for MIDI) existed quietly in the background. This means we had two user interface concepts, handling two different approaches to music production, slowly integrating into one another.

More about the history of Pro Tools – http://www.musicradar.com/tuition/tech/a-brief-history-of-pro-tools-452963

The next part in this series will focus on the MIDI sequencer.

UX and the music DAW Part 1: An Introduction to the 80s and 90s

The UX of your DAW software makes a critical impact on creativity.

I was around the age of 9 when my uncle, a computer engineer, insisted that we take a trip to a computer software store to purchase something new. I was familiar with the computer to the extent that my uncle introduced all concepts and material to me, from an ancient IBM-compatible 8086 to Sierra Online’s King’s Quest. On this particular day we were apparently upgrading from Windows 3.1 to Windows 95. As a young person I did not actually follow the news or developments on this subject. I was just a willing participant.

Windows 95

From my vantage point I took all technological advances for granted. However, my uncle would offer context by emphasizing significant developments. Windows 95 was such a thing. From there the idea of personal computing grew, presumably exponentially, or at least that was the impression given to me at the time. There’s little reason to doubt that.

I was being bred as a computer nerd and noticed people around my community slowly acquiring computers for the first time. Microsoft pulled it off. For some reason people wanted computers. I put it that way because not everyone understood what to do with it.


Alongside this advancement, audio recording software followed behind. Not at the same rate or with the same success. Not at all. It followed nonetheless. I cut my teeth on 1990s personal-computing audio recording software. I saw the rawness of the industry and it’s extremely important to discuss. More important than what a younger producer may assume.

The evolution of the DAW is about two things: the evolution of computer hardware and the evolution of consumer needs. The nexus between the two is where we find music production software shifting as each new Intel processor was introduced and changing awkwardly alongside consumer mindsets.


I don’t intend to cover the audio software of the 1980s, primarily because I never used that software during that period. However, let’s have an overview.

An admittedly unusual overview of music DAW software during the 1980s

  • The Macintosh, as many know, was one of the first personal computers and certainly the first that has clearly defined our idea of personal computing today
  • Early software projects originating from UC Berkeley were introduced in the late 80s and would eventually evolve into Pro Tools
  • The first version of Pro Tools, released in 1991, worked on Apple’s System 6/7
  • Recording four tracks was the maximum offered by Pro Tools 1 at its release
  • The two top Billboard singles of 1987 were “Walk Like An Egyptian” by The Bangles and “Alone” by Heart
  • The Music Tracker scene advanced as shareware versions were created to support the Windows system

[Image: Schism Tracker screenshot]

But wait, what’s a “Music Tracker” you may ask? As I mentioned, I’m skipping some details of the 80s. An undercurrent that should be acknowledged is that the paradigm of multitrack recording, spearheaded by Pro Tools, was not the only school of thought at the time. A different concept of computer music production existed as early as 1987 and was called a Music Tracker. It originated from a commercial software project called “Ultimate Soundtracker… written by Karsten Obarski… by EAS Computer Technik for the Commodore Amiga.” Think of it as a sort of step sequencer for small audio samples.

It’s worth noting this area of music software development because it represents the alternative influences that would one day influence mainstream music production. Nonetheless, Music Tracker software was likely fringe and underground. You know what else was underground during this period? Most electronic music.


Another significant, but increasingly insignificant, software development project that occurred during this active period of 1987-91: Cakewalk 1.0. It was initially released on DOS then Windows 3.1 in 1991. I was first introduced to it on Windows 98.

This is where we will begin in the next part of this series. As mentioned, I cut my teeth on Cakewalk and in my mind it represents the complicated dynamics of technological innovation, evolving musical tastes, and software limitations that have a role in all music production software but are very pronounced in Cakewalk’s history. Simply put, multitrack recording concepts took a few punches during the 90s as many factors changed consumer demands. Cakewalk in my view represents that tense, confusing period.

I should explain my thesis with more clarity even though I am attempting a less formal approach with this series of articles. I’d like to move into different areas of history and ideas as the series evolves. However, the larger thesis I aim to argue is this: all music software functions as an instrument. It functions as a musical instrument due to qualities of user experience. The evolution of user experience tropes and measures within music production software has not always been deliberate or informed. We as musicians can benefit from this or find ourselves hindered by it. Being unaware of these details, however, prevents us from deliberately working within or outside the structure of the instrument. This series will present a history of user experience and user interfaces in music software as well as a theory of why it’s important to production.

Work in Progress: Branded, performative visuals and a technical framework built in Max MSP and Processing

I’m working with a band, Tickle Torture, on a VEVE project, providing visuals for his event. To summarize: I’m slicing, cropping, and OpenGL-processing their videos while presenting animation elements. This is the progress after three days, half of which was me sitting around wondering what I was going to do.

[Image: work-in-progress visuals]

Why this is important

I’ve needed to start building a technical framework I can work from moving forward. This project has been a wonderful excuse/opportunity to get that going. I decided some months ago that I need to further integrate Processing into my visual performance setup. There are a few reasons why. The paradigm of handling vector graphics and animation is more intuitive in Processing. Actually, that’s the only reason why. It’s a mess in Max as far as I’m concerned.


So Max/MSP is becoming my video playback and video processing station that sends an OpenGL feed to Processing. If Processing is better for vector graphics, Max is better for shaders. The paradigm of signal flow and “pipes” is a more sensible approach when dealing with shaders, half the time anyway. It’s a decent balance.

So this is just a sneak peek at a work in progress. More to come.


Currently Offering MaxMSP / Max for Live training

For those who follow my development and career, I have engaged in a broad practice with technology: front-end development, UI/UX design, sound design, and something along the lines of simple software development with MaxMSP. That’s a fuzzy area, which is what makes it interesting. I am now offering MaxMSP and Max for Live training to any students interested.

I will provide more details soon, as they develop. To celebrate this announcement, I have a new section of my website where I will detail MaxMSP development and projects.

Experimental Solutions in Ableton Live: Download a sample of the new eBook


I have been sitting on the development of the book for some time and decided to use an interesting publishing platform to present it. I will be adding content and finishing the book while simultaneously promoting it on LeanPub.com. The platform encourages a lean philosophy and regular update system for ebook publishing. It’s pretty sensible with regard to technical books, which mine is.

Download a Sample Chapter

Experimental Solutions in Ableton Live will focus on introducing music theory and history within the context of the DAW software Ableton Live. I was inspired to address this topic first and foremost because I am a musician, but also as a counterpoint to the EDM-obsessed training that currently exists online. Not everyone wants to learn how to make Dubstep. Some young producers should probably understand a bit of theory and musical history in order to frame their work and goals.

The book will introduce topics of algorithmic music, generative music, and aleatoric music. I present interesting techniques available in Ableton Live and connect the dots to music theory.

Let’s kill the Step Sequencer

Or a “Re-imagination of the Step Sequencer Paradigm”

As I continue my research into algorithmic music processes, I’m working hard at constructing modular building blocks in MaxMSP. Through this recent work I’ve created modular components dealing with probability, simple synths, envelopes, etc. So my mindset has been “Modularity!”, not just because it’s a good idea but because I’m still ineffective at planning and structuring more elaborate projects. This is how I decided to approach a traditional step sequencer. The first component I needed to develop was the “step” component. This is where I realized the limiting assumptions of “sequencing” data.

[Image: the step module in MaxMSP]


My earliest introductions to music sequencing were with Cakewalk Pro Audio and the hardware sequencer on a Korg N364 (still one of the best). I was eventually introduced to step sequencing through the Korg EA-1 and Fruity Loops. The assumed structure of both approaches goes like this: place data within a grid. Sometimes you can control the grid. Done. There are of course alternatives, such as Euclidean sequencers and likely other options out there. Considering how any form of data can be translated into any other form of data (practically), music sequencing is theoretically limitless. You can turn a series of tweets into musical information. You could claim to control those tweets and thus “sequence” your music based on that set of rules. On a pragmatic level, however, we deal with common sequencing paradigms.

As I worked my way through analyzing my needs for a step sequencer, I separated the concept into a few smaller parts: a step, a time control, and a timeline. Traditional sequencing ideas can neatly fall into these categories. Let’s hold it up against the Euclidean approach for a moment (a rough code sketch follows the list):

  • A Euclidean sequencer uses a circle, a radius, and its circumference in order to understand timing and additional data.
  • A line representing the radius moves within the circle at an arbitrary rate. The rate is the time control, and the circle could be called the timeline.
  • Nodes might be placed within the circle with varying distances from the center; the nodes represent sound/music/data. These could be called steps.
  • When the radius line crosses a node, the node is triggered. This type of functionality basically abandons a common grid system.
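To make that comparison a little more concrete, here is a rough sketch of the circular triggering idea in Python, outside of Max and purely to illustrate the logic. The class name, the node format, and the fixed sweep rate are placeholder choices of mine, not anyone’s actual implementation.

```python
# A toy model of the circular sequencer described above: nodes sit at angles
# around a circle, a radius line sweeps at an arbitrary rate (the time control),
# and a node fires whenever the sweep passes its angle.

class RadialSequencer:
    def __init__(self, nodes, degrees_per_tick=6.0):
        # nodes: list of (angle_in_degrees, callback) pairs -- placeholder format
        self.nodes = nodes
        self.rate = degrees_per_tick   # the "time control"
        self.angle = 0.0               # current position of the radius line

    def tick(self):
        start, end = self.angle, self.angle + self.rate
        for node_angle, fire in self.nodes:
            # unwrap the node angle so crossings past 360 degrees still register
            a = node_angle if node_angle >= start else node_angle + 360.0
            if start < a <= end:
                fire()                 # the node is triggered
        self.angle = end % 360.0       # the circle acts as the timeline

# Example: a "kick" node at 0 degrees and a "snare" node at 180 degrees.
seq = RadialSequencer([(0.0, lambda: print("kick")),
                       (180.0, lambda: print("snare"))])
for _ in range(120):                   # roughly two full sweeps of the circle
    seq.tick()
```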

A Euclidean sequencer is a pretty exciting approach, but I stumbled across another funny way to structure sequenced information: a method that could mimic both a step sequencer and a Euclidean one, but with more options.

Separating the “step” from the timeline and time control source

Since my original intention was simply to build a traditional step sequencer in parts, I first started on the component I’m calling the “step”. I decided to combine a few features, such as the playlist~ Max object which plays samples, a custom envelope I call “ADXR”, signal input, and one additional, curious feature: bang output. This is where I realized the step sequencing concept can be tweaked.

[Image: four step objects]

The bang output (for non-Max users, a bang is a trigger message) leaves the step object after a duration of time. In other words, when a step has a duration of one whole note and is triggered, the bang output occurs after a whole note transpires. That output can then trigger another step. Repeat. Let me describe it more carefully (a small code sketch follows the list):

  • Step object receives a trigger
  • Step object plays a sample (a kick drum for example)
  • Step object counts down the defined duration that is input into the object (for example a quarter note’s length)
  • When the duration ends, a trigger output occurs.
  • The output can trigger anything else including another step (for example a snare)
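Outside of Max, the same idea can be sketched in a few lines of Python. This is only a toy model of the flow described above: the class name, the fixed 120 BPM quarter note, and the blocking sleep standing in for Max’s scheduler are all my own assumptions, not the actual patch.

```python
import time

QUARTER_NOTE_MS = 500  # assumes 120 BPM; the real patch would take timing from Max

class Step:
    """One modular 'step': receives a trigger, plays a sample,
    waits a duration, then sends its own trigger onward."""
    def __init__(self, sample_name, duration_ms):
        self.sample_name = sample_name
        self.duration_ms = duration_ms
        self.next_steps = []                 # whatever this step triggers when it finishes

    def connect(self, other_step):
        self.next_steps.append(other_step)

    def trigger(self):
        print(f"play {self.sample_name}")    # stand-in for sample playback
        time.sleep(self.duration_ms / 1000)  # stand-in for the duration countdown
        for step in self.next_steps:         # the outgoing "bang"
            step.trigger()

# A minimal chain: kick, wait a quarter note, then snare.
kick = Step("kick", QUARTER_NOTE_MS)
snare = Step("snare", QUARTER_NOTE_MS)
kick.connect(snare)
kick.trigger()
```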


My audio examples are still in their “autechre” phase

I’ve designed the steps to receive notation-based times and milliseconds, so it can work without a grid system (by grid, I’m referring to structured times). A kick drum plays. We wait 200ms. The snare plays. We wait one quarter note. Another snare plays. We wait 100ms. A hihat plays…

Here’s where it gets interesting. This modular setup allows for a feedback loop: a sequence of steps can trigger itself in a cycle. A structured rhythm can be set up to follow a strict time, OR a step can occur based on millisecond durations, which aligns with some of the qualities of a Euclidean sequencer (some qualities).
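Continuing the toy Step sketch from above, the mixed durations and the feedback idea might be wired up like this. The infinite-recursion caveat is specific to this blocking sketch, not a description of how the Max patch behaves, though it does echo the runaway loops I mention later in this post.

```python
# Mixed notation-based and millisecond durations, as described above.
kick  = Step("kick", 200)                 # wait 200 ms
snare = Step("snare", QUARTER_NOTE_MS)    # wait a quarter note
hihat = Step("hihat", 100)                # wait 100 ms

kick.connect(snare)
snare.connect(hihat)
kick.trigger()   # kick ... 200 ms ... snare ... quarter note ... hihat

# Closing the loop turns the chain into a self-triggering cycle:
#   hihat.connect(kick); kick.trigger()
# In this blocking sketch that would recurse forever, so it is left commented out.
# In Max, the equivalent wiring is what produces the feedback loops
# (and, without a guard, the reboot-worthy ones described later in this post).
```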

If you wanted to fully mimic a Euclidean approach, you would want to create multiple feedback loops with at least two steps each (Max does not allow one object to loop back into itself without a few caveats). With multiple feedback loops like this you could trigger them all and copy the Euclidean approach. However, this is just the start. Before I list other possible scenarios I should admit that this isn’t necessarily innovative. It’s not groundbreaking. Many people using MaxMSP (and similar technologies) are engaged in abstract, complicated feedback loops and flows. That’s not new. I think what is interesting about this concept is that it bridges traditional electronic music user experience concepts with more intricate flows. I’ve taken a “step” paradigm and designed it to fit into a variety of scenarios.


I can’t say the wheel has been reinvented here but I think a new type of wheel is being introduced for those who need it. This modular step approach can get rather complicated.

The Modular Step Object Flowchart

[Image: modular step object flowchart]

This is a work in progress and outlines some of the key factors of my step object/patch. There are some details in the patch that I am ignoring (they are not critical to the overall purpose), but this flowchart outlines a few of those features. This flowchart, however, does not offer an example of an elaborate sequence. I have another flowchart to explain that.

Two nearly identical loops with (likely) very different results

[Image: Loop A and Loop B]

Loop A roughly represents a simple loop. A kick is followed by a snare, then hihats. Then it almost repeats. If you look at the second-to-last hihat in Loop A, that hihat triggers two things: two more hihats. Those two hihats then go on to follow their own sequences. One of the hihats loops back to a kick. The other eventually stops after a few steps. Additionally, you can see the durations designated between triggers. The only surprising duration in Loop A is a 16th note in the middle. You can almost imagine how this might sound. Think for a moment how that rhythm may go.

Now let’s look at Loop B. At first glance it’s pretty similar, but look at the durations. We have 8ths, 16ths, and 200ms. That alone really complicates the rhythm and moves it outside typical time structures. Look at the second-to-last hihat as well. What’s different? It loops back into the main loop! How do you think this rhythm sounds? I have no idea. In fact, during development of all this I constantly found myself in reboot-worthy feedback loops. That is one problem I still need to address. If you can avoid that, though, you will find yourself creating very different types of rhythmic structures.

Early stages of development

I skipped additional details regarding this development. For example, I’m building a signal gate into each step object, so you can sequence samples and sequence gates upon a live signal. Although the audio I provide may not shed too much light on the potential, I will continue going down this rabbit hole to see what musical qualities can be achieved this way. I think it looks promising.


Six string gestures with Max MSP

Six String Leap Motion MaxMSP Instrument

My interest in a six string instrument was spurred by the startup company Artiphon, who had the idea of creating a sensible, tactile device that allows for new but familiar string instrument gestures. I don’t play guitar, but I do own a classical guitar and am just now learning to play it. Coming from the world of piano, I find that I’m relieved by the expressiveness allowed with a string instrument like the guitar. The performer defines the frequencies through their gesture. You cannot do this on the piano. However, I figure I could do this with Max MSP.

 

[Image: the Artiphon Instrument 1]


It makes me think of an interesting music production dilemma I try to work around: how to create outside the confines of quantization. Using Ableton Live as actively as I do, it’s easy to take quantizing for granted. It ends up removing so much nuance from production and replaces it with overbearing precision. The same could be said of the piano.

With the piano you’re forced to address predefined frequencies and set intervals (you can’t retune on the fly). I am personally ready to move past quantization and into the world of microtonality.

Virtual String Instrument

So, inspired by what looks like a sensible and well-designed virtual string instrument, the Instrument 1, I decided to hack together a prototype of a concept relating to six strings. Since I don’t own the actual instrument, I decided to use my Leap Motion instead. The Leap allows for a range of data that could mimic the idea of a range of frequencies on a string. Using the Leap also allows me to further my research of the device (I have other Leap projects I want to develop).

 

I am using my tool of choice, Max MSP, and have created six lines that respond to six intervals along an axis controlled by the palm via the Leap Motion. In other words, the Max MSP patch allows you to select X, Y, or Z as an axis. You can then calibrate the maximum and minimum values of that axis. From there, six intervals are defined that trigger six different frequencies that are part of standard guitar tuning. Where I take it from here is the exciting part.

A Max patch containing Leap Motion Input
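As a rough, Python-flavored illustration of that mapping (not the actual Max patch), the calibration and the six-way split might look something like this. The axis range is an arbitrary placeholder, while the open-string frequencies are standard tuning from low E to high E.

```python
# Standard-tuning open-string frequencies, low E (E2) to high E (E4), in Hz.
STRING_HZ = [82.41, 110.00, 146.83, 196.00, 246.94, 329.63]

def string_for_position(pos, axis_min, axis_max):
    """Map a palm position along the chosen axis to one of six 'strings'."""
    t = (pos - axis_min) / (axis_max - axis_min)  # normalize to 0..1
    t = min(max(t, 0.0), 1.0)                     # clamp to the calibrated range
    index = min(int(t * 6), 5)                    # six equal intervals
    return index, STRING_HZ[index]

# Example: with the axis calibrated to -200..200, a palm reading of -40
# lands 40% of the way across the range, i.e. the third string (D3).
print(string_for_position(-40.0, -200.0, 200.0))   # -> (2, 146.83)
```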


Gestural control within a 3D space with the Leap Motion

The Leap Motion can recognize fingers, but it is very imprecise. Because of this, I will only use the palm as an input for the sake of stability. I may look further into finger data at a later date. Since I am using the palm, which represents one point in a 3D space, strumming is the primary gesture I am experimenting with in this prototype.

With a real guitar, the speed at which a person strums can influence the timbral quality of the instrument. The force with which they push down on the strings influences the strength of the vibrations, or loudness. Speed and strength of strumming often go together since a fast gesture is often intended to have a level of force. This type of gestural dynamic leaves room for interesting design decisions.

The very simple synthesis component


For example:

  • One axis of data (Z axis) could represent the force or pressure put upon the strings. Even though the physics of a real guitar would suggest this translates into higher peaks and valleys within a vibration, in a virtual/MIDI instrument this information could define FM modulation upon the corresponding synthesis or could represent a delay that is applied to the strings.
  • Strumming a real guitar quickly would create a complex set of vibrations/frequencies. We could emulate that directly through software, OR we could measure the details of the strumming gesture and translate the data differently. There is a duration of time between each strike of the string, likely measured in milliseconds, so the speed of the strumming dictates the timing between each string hit. That time could be used to control a reverb on the instrument: the faster the strumming, the more or less reverb. A duration of 1ms between two string hits could scale to an increase of 5% wetness on a reverb (see the sketch after this list).
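Here is a hedged sketch of both ideas in the same Python style as the earlier examples: palm depth on the Z axis scaled to an FM modulation amount, and the gap between string hits scaled to reverb wetness. The 1 ms to 5% figure comes from the bullet above; the ranges, the clamping, and the direction of the scaling are my own assumptions.

```python
def fm_amount_from_force(z_pos, z_min=-100.0, z_max=100.0, max_index=8.0):
    """Map palm depth on the Z axis ('force' on the strings) to an FM modulation amount."""
    t = (z_pos - z_min) / (z_max - z_min)   # normalize to 0..1
    t = min(max(t, 0.0), 1.0)               # clamp to the calibrated range
    return t * max_index

def reverb_wet_from_gap(gap_ms, percent_per_ms=5.0, max_wet=100.0):
    """Map the time between two string hits to reverb wetness: a 1 ms gap adds 5%."""
    return min(gap_ms * percent_per_ms, max_wet)

print(fm_amount_from_force(50.0))    # -> 6.0  (palm pushed fairly deep)
print(reverb_wet_from_gap(1.0))      # -> 5.0
print(reverb_wet_from_gap(30.0))     # -> 100.0 (capped)
```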

I have only completed the portion of this experiment where data between Max MSP and the Leap Motion are properly set up. Additionally a very simple visual is established. The next part is to create interesting synthesis that corresponds to interesting data I can gather from the gestures. More to come.