Selected Projects from the first half of 2017

Warehouse party in the Los Angeles artist’s district. Video collage and additional OpenGL programming.


OpenGL + Twitter feed proof of concept

Content from specific Twitter feeds is presented in an OpenGL visualization.


Vaporwave themed video collage


Pulsar VJ material


Illustration concepts in WebVR


Max/MSP Studies


Illustrations + Video VJ material for Tickle Torture

tickle_ink from estevancarlos on Vimeo.

Graphic Design

What’s going on with skeuomorphic design for music software?

What will nostalgic, skeuomorphic design for music software be in the year 2050? 3D renderings of iPads within virtual reality?

I sometimes doubt my own knowledge and musical ability when I’m faced with complex user interfaces in music software. I think to myself, “I should know this,” followed by, “I think I should know.” I find this especially true when confronting skeuomorphism.

I don’t own any vintage equipment. I rarely interact with classic devices. This isn’t motivated by a lack of interest but by a combination of pragmatism and my pocketbook. I do understand that classic equipment plays a significant role in music production and audio engineering for many reasons, and that role is often expressed through software user interfaces: a sense of value, physical metaphors, and a stylized tone. Skeuomorphism stresses all of those qualities, but I question whether we are being short-sighted.

I am attempting to write a series of articles on the subject of user experience and UI design within DAWs, but I want to cut to the chase of my thesis briefly in this post: physical representations in UIs need to be more forward-thinking to compensate for lost metaphors and for the profound influence UIs have on user experience. In other words: should we be more beholden to authentic hardware or to newer user experiences that can enhance creativity? How do companies strike a balance?


There are many hobbyist and amateur music producers who may never have access to the vintage analog equipment represented in some software. This issue needs to be confronted for the long term: the affordances expressed through physical hardware may not always translate to a GUI, especially as physical interaction itself becomes increasingly abstract.

In the example below, the item outlined in orange is a style of physical button that I have not seen in probably 10+ years, and I am 34. I have my doubts as to whether a 20-year-old producer has seen this type of knob/button either.

Skeuomorphic Design in Music Software

If we were to analyze the clues of this physical interface item, we could discuss the curved front that seems like a perfect placement for a finger or thumb. When I last used the physical counterpart, I remember that despite its curved front it could not be pressed in on either side. It was not a toggle. In fact it could sort of wiggle from left to right but always returned to center. The first time I used this type of interface I realized how ingenious it was: the curvature was perfect for gentle motions with just one finger.

Fast forward to today and we have to ask ourselves, “Are the affordances of the original physical button evident as a GUI item?” I haven’t performed any formal tests, but upon asking one person, my girlfriend, she deduced that it must be a toggle: you press on one side or the other. She reached this conclusion because of its curved shape.

Let’s be honest. It looks CLOSE to a light switch but it is not. This is where skeuomorphism gets messy.

How does that GUI item actually work? You click on it once, anywhere. You don’t drag side to side as you do with the original physical item. It actually is a toggle, but not in the way previously described: it offers an on/off state, but no visual feedback about a left or right side being on or off, probably because the original physical item has no such states.
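
The mismatch can be modeled as two tiny state machines (a Python sketch; the class names are my own illustration, not from any real GUI toolkit):

```python
class AssumedToggle:
    """What my test subject expected: a rocker with two pressable sides."""
    def __init__(self):
        self.side = "left"

    def press(self, side):
        # Pressing a side selects that side; the state is visible as left/right.
        self.side = side


class ActualToggle:
    """How the GUI item really behaves: one click anywhere flips on/off."""
    def __init__(self):
        self.on = False

    def click(self):
        # No left/right feedback at all, just an invisible state flip.
        self.on = not self.on
```

The affordance problem is exactly the gap between these two models: the visuals suggest `AssumedToggle`, but the code behind the control is `ActualToggle`.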

What will physical representation of music equipment be a few decades from now?

How do we guide producers and musicians along the way? Let’s not stumble towards skeuomorphic representation in the future. Let’s plan for it. Vintage equipment of the past will create more obscure UI metaphors of the future.

Many musicians interact with production in more ways than before. I myself am already planning to experiment with music production in virtual reality. It’s for those reasons that the critical use of the right metaphors has to change with time. We already have many musicians comfortable with modern forms of human-computer interaction such as taps, swipes, pinches, and drags. This need not remain exclusive to the realm of simple smartphone apps.

I am not suggesting we create new software that arbitrarily uses new forms of interaction. I am suggesting that future producers will be most familiar with the interaction experiences of 2016 rather than 1960, so software user interface design needs to work with that in mind. Plan for it, as I suggest. But how?

I believe some visual metaphors need to be phased out or better implemented. Within the example, the buttons outlined in orange should be clear toggle buttons. This software does offer toggle buttons, though. The item highlighted in yellow is an example. Do you see it? That’s a toggle switch in the down position. My test subject did not recognize its visual cues. Let’s unpack why that may be.

In the software, the top portion of the toggle switch is a few shades of beige, brown, and eggshell. The rectangular portion overlapping the circle represents the top of a switch. One problem here is that height or depth is not visually clear: it is not evident that one portion of the interface sits above another in any significant way, and it’s not clear there is a cylindrical component connecting them. Basically, you can’t see a switch at all.

In my opinion the fastest way to resolve this is to emphasize the down state by visually communicating that the cylindrical portion of the switch is present and pointing downwards. In my example below you can now see more of the switch. This improves the visual communication.

Made this in ten minutes

But let’s return to my original thesis: skeuomorphism requires a different assortment of questions. Using our switch example, if we visually communicate a clearer toggle switch, we’re confronted with a visual element that takes up more vertical space. This has the potential to pose problems.

What if we needed words to explain both states of a toggle switch? Should we move the words higher above and lower beneath the switch so the visualization doesn’t overlap? It overlaps in the original software. And what if the text, placed differently, took up too much space?

In the images below we see the risk of having the visual switch overlap the text. It inhibits some understanding of the interface. Within the “on” and “off” examples we now have extra space between the words and switch. In design, proximity is often used to suggest relationship. We are losing a sense of proximity in the “on/off” examples.

[Images: switch4, switch_on, switch_off]

There’s a way to resolve this: don’t use the visual metaphor of this type of switch.

However, if we return to my initial point: how do you avoid this in nostalgic, skeuomorphic design? If you’re copying a vintage item that has this switch, do we need to be authentic or more communicative? There is value in mimicking a beautiful item, but maybe too many user experiences suffer as a result.

I don’t have the data on this, but I aim to argue, eventually, that the interface has a profound influence on creative output. This simple switch can make the difference between a musical parameter being used well or used at all. Imagine that the proper usage of a musical parameter can be heavily influenced by whether the user even understands the interface. The impact that could have on creative output should give any musician pause.

UX and the music software Part 2: The rise of multitrack recording

Everything within this series is a combination of research and opinion. I strive to be as accurate as possible however I may skip a number of interesting details regarding this large history. Please feel free to email me with any opinions or ideas.

Part 1 of this series mentioned how Pro Tools originated as a project out of UC Berkeley and became a consumer product around 1991. Let’s acknowledge why digital multitrack recording is important. First of all, it helped resolve the obvious limitations of tape. The conversation during the 1990s was, “Sure, tape sounds great, but now you can have theoretically endless dubbing options with digital recordings. Record ten takes, twenty, or even one hundred!” This was a sales and marketing sentiment, and it was discussed in music production circles: the novelty of “endless” takes. Select the best recording. You’re no longer constrained by the hassle tape presents.

Multitrack recording became both the sales position of music software and its creative angle. Since Pro Tools tried to solve the perceived problems of tape recording, its solutions defined the experience of the software, and those solutions were responses to mainstream music production concepts. This is why the multitrack paradigm became as significant as it is. It not only existed as a common production concept in recording studios before the digital era (record one performer, then another, then mix it together) but continued as a paradigm during the digital era.

In other words: new technology initially set out to solve old problems instead of looking for new ones. Pro Tools popularized the trope of solving increasingly older recording problems (No one talks about the limits of tape anymore).

Multitrack Recording Defined Creative Direction

The popularity and significance of Pro Tools defined the marketplace. Multitrack recording was a thing, and so its marketing defined expectations in the minds of consumers. How else would a consumer/musician discuss computer music in the 90s without discussing the software’s ability to mix multiple tracks and record multiple tracks simultaneously? There were few other defining aspects in the early 90s. Many other factors defined the fidelity of the final recorded material (my first computer I/O box only supported 20-bit audio).

So as personal computing increased dramatically in relevance, very much due to Microsoft, the computer music industry had to compete on that OS. It did so with Emagic Logic, Cubase, and Twelve Tone Systems’ Cakewalk. In fact, Emagic initially offered only a feature-rich MIDI sequencer on Windows, months before it provided any audio features (this link may not work as of 6/20/16 but is currently being migrated to SOS’s new site).

The presence of multiple companies defining the landscape of computer music around multitracking acted as a further education for new consumers. It was also the easiest way to design and market these products. A musician new to computer-aided music only knew of a few options, and that defined how consumers could experience computer music recording.

In Part 1 of this series I discussed trackers. They did not bubble up to the top alongside Pro Tools because their paradigm was not familiar enough to demand attention from companies. Imagine if it had. Imagine the way music would have been handled differently. Let that sink in. This is one way in which software defined creative directions. Software has played a large role in defining the styles of music available over the past few decades. If Pro Tools, for some radical reason, had included a tracker feature, the history of modern music would be different (more on trackers later).

However, it wasn’t just the motivations of business that popularized multitrack recording. It was the focus of many musicians. It’s increasingly difficult to recall musical society without electronic music. However, even into the 90s, many musicians opted for solo musicianship with acoustic or electric instruments or chose to be in bands. If this was the most common way to perform music, then it makes sense that software companies focused on fulfilling those needs. Many musicians are just regular people adopting the musical strategy of their peers and those they admire.

Why does that matter? Many musicians opted to be traditional solo acts or to work within traditional band structures during the early 90s and certainly before, and multitrack software supported this. However, as the decades passed, the manner of solo musicianship changed. Did the software lag behind? Few, if any, electronic musicians dub their lead synth on top of their bass track on top of their drum track, a multitrack recording strategy. Since few solo musicians do this, why is this style of software still used by contemporary solo acts?

What about MIDI?

The world of MIDI had a separate evolution outside of digital audio recording. MIDI, the protocol for communication between different music machines, began its standardization in 1983 and was strongly focused on communication between synthesizers. It had some years to define its role on the computer. By the late 80s we had MIDI sequencers (possibly first introduced on the Atari system), and they introduced very different user interface concepts compared to later multitrack concepts.

Side note: I just noticed I keep saying Emagic Logic. Some may be wondering if that’s the same as Apple’s Logic. It is. Apple purchased Logic from the German company Emagic in the early 2000s.

Two young technologies converge

As mentioned, it is my opinion that Pro Tools popularized computer-aided music during the early 90s. But why didn’t the MIDI sequencer do the same in the 90s? It was a less common paradigm. Fewer musicians approached music from the perspective of sequencing MIDI notes; fewer knew it existed. A traditional guitarist wasn’t handling MIDI. Since there was money to be made, companies broadened their objectives, and MIDI sequencing was combined with multitrack recording.

So two things were occurring by the early 1990s: companies discovered the increasing relevance of multitrack recording on the computer, and companies who previously focused on MIDI sequencing saw an opportunity to converge both approaches. All the while, alternatives like trackers and Max/MSP (initially a visual programming language for MIDI) existed quietly in the background. This means we had two user interface concepts, handling two different approaches to music production, slowly integrating into one another.


The next part in this series will focus on the MIDI sequencer.

UX and the music DAW Part 1: An Introduction to the 80s and 90s

The UX of your DAW software makes a critical impact on creativity.

I was around the age of 9 when my uncle, a computer engineer, insisted that we take a trip to a computer software store to purchase something new. I was familiar with the computer to the extent that my uncle introduced all concepts and material to me, from an ancient IBM-compatible 8086 to Sierra On-Line’s King’s Quest. On this particular day we were apparently upgrading from Windows 3.1 to Windows 95. As a young person I did not actually follow the news or developments of this subject. I was just a willing participant.

Windows 95

From my vantage point I took all technological advances for granted. However, my uncle would offer context by emphasizing significant developments, and Windows 95 was such a thing. From there, the relevance of personal computing increased, presumably exponentially, or at least that’s the impression given to me at the time. There’s little reason to doubt it.

I was being bred as a computer nerd, and I noticed people around my community slowly acquiring computers for the first time. Microsoft pulled it off. For some reason people wanted computers. I put it that way because not everyone understood what to do with them.

Alongside this advancement, audio recording software followed behind, though not at the same rate or with the same success. Not at all. It followed nonetheless. I cut my teeth on 1990s personal-computing audio recording software. I saw the rawness of the industry, and it’s extremely important to discuss, more important than a younger producer may assume.

The evolution of the DAW is about two things: the evolution of computer hardware and the evolution of consumer needs. The nexus between them is where we find music production software, shifting as each new Intel processor was introduced and changing awkwardly with consumer mindsets.


I don’t intend to cover the audio software of the 1980s in depth, primarily because I never used that software during that period. However, let’s have an overview.
An admittedly unusual overview of music DAW software during the 1980s

  • The Macintosh, as many know, was one of the first personal computers and certainly the first that clearly defined our idea of personal computing today
  • Early software projects originating from UC Berkeley were introduced in the late 80s and would eventually evolve into Pro Tools
  • The first version of Pro Tools, released in 1991, ran on Apple’s System 6/7
  • Recording four tracks was the maximum offered by Pro Tools 1 at its release
  • The two top Billboard singles of 1987 were “Walk Like An Egyptian” by The Bangles and “Alone” by Heart
  • The Music Tracker scene advanced as shareware versions were created to support Windows


But wait, what’s a “Music Tracker,” you may ask? As I mentioned, I’m skipping some details of the 80s. An undercurrent that should be acknowledged is that the paradigm of multitrack recording, spearheaded by Pro Tools, was not the only school of thought at the time. A different concept of computer music production existed as early as 1987, called a Music Tracker. It originated from a commercial software project called “Ultimate Soundtracker… written by Karsten Obarski… by EAS Computer Technik for the Commodore Amiga.” Think of it as a sort of step sequencer for small audio samples.
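
To make the paradigm concrete: a tracker pattern is essentially a grid, where rows are steps in time, columns are channels, and each cell triggers a sample at a pitch. A rough sketch of that data model (Python; the sample and note names are purely illustrative):

```python
# A tracker pattern as a grid: each row is a step in time,
# each column a channel, each cell a (sample, note) trigger or empty.
pattern = [
    # ch0 (drums)       ch1 (bass)        ch2 (lead)
    [("kick", "C-2"),   ("bass", "C-1"),  None],
    [None,              None,             ("saw", "E-2")],
    [("snare", "C-2"),  ("bass", "G-1"),  None],
    [None,              None,             ("saw", "G-2")],
]

def events_at(step):
    """Return (channel, sample, note) triggers for one step, looping the pattern."""
    row = pattern[step % len(pattern)]
    return [(ch, cell[0], cell[1]) for ch, cell in enumerate(row) if cell]
```

Stepping through `events_at` for successive steps is the whole playback model, which is why trackers felt so alien next to the tape-inspired multitrack timeline.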

It’s worth noting this area of music software development because it represents the alternative influences that would one day shape mainstream music production. Nonetheless, Music Tracker software was likely fringe and underground. You know what else was underground during this period? Most electronic music.

Another significant, but increasingly insignificant, software development project occurred during this active period of 1987-91: Cakewalk 1.0. It was initially released on DOS, then on Windows 3.1 in 1991. I was first introduced to it on Windows 98.

This is where we will begin in the next part of this series. As mentioned, I cut my teeth on Cakewalk, and in my mind it represents the complicated dynamics of technological innovation, evolving musical tastes, and software limitations that play a role in all music production software but are very pronounced in Cakewalk’s history. Simply put, multitrack recording concepts took a few punches during the 90s as many factors changed consumer demands. Cakewalk, in my view, represents that tense, confusing period.

I should explain my thesis with more clarity, even though I am attempting a less formal approach with this series of articles. I’d like to move into different areas of history and ideas as the series evolves. However, the larger thesis I aim to argue is this: all music software functions as an instrument. It functions as a musical instrument due to the qualities of its user experience. The evolution of user experience tropes and measures within music production software has not always been deliberate or informed. We as musicians can benefit from this or find ourselves hindered by it, and being unaware of these details doesn’t allow us to work deliberately within or outside the structure of the instrument. This series will present a history of user experience and user interfaces in music software, as well as a theory of why it matters to production.

Work in Progress: Branded, performative, visuals and technical framework built in Max MSP and Processing

I’m working with a band, Tickle Torture, on a VEVE project, providing visuals for his event. To summarize: I’m slicing, cropping, and OpenGL-processing their videos while presenting animation elements. This is the progress after three days, half of which was me sitting around wondering what I was going to do.


Why this is important

I’ve needed to start building a technical framework I can work from moving forward, and this project has been a wonderful excuse/opportunity to get that going. I decided some months ago that I need to further integrate Processing into my visual performance setup. There are a few reasons why. The paradigm of handling vector graphics and animation is more intuitive in Processing. Actually, that’s the only reason why. It’s a mess in Max as far as I’m concerned.


So Max/MSP is becoming my video playback and video processing station that sends an OpenGL feed to Processing. If Processing is better for vector graphics, Max is better for shaders. The paradigm of signal flow and “pipes” is a more sensible approach when dealing with shaders (half the time, anyway). It’s a decent balance.

So this is just a sneak peek at a work in progress. More to come.



Video Composition Software with Max, Processing, and Syphon

Over the past few months I have heavily dedicated my focus to VEVE and to creating live visual software within Max/MSP/Jitter. I’d like to describe some of the process broadly. Much of this has been focused on experimentation, slowly bleeding into more controlled design scenarios. In other words, I haven’t always known enough to design a final product according to my tastes or aesthetic. However, as things progress, I am better able to design what I want or need.


Accidentally came across an aesthetic that isn't quite my own. #processing #maxmsp #reas


The Technology

As usual, I am focused on Max/MSP/Jitter. Jitter provides the means for video manipulation and OpenGL programming. I’m going to focus on discussing video within Jitter. Recently I found myself playing with masking in Jitter by combining and manipulating the matrices of two video feeds. From this process I realized I could bring in an external “video” feed from elsewhere. So, using a third-party tool called Syphon, I brought an external OpenGL texture into Max from Processing. There is a Syphon object in Max that can access this OpenGL texture, and there is a Syphon library in Processing that can send the OpenGL data.

#alvanoto #maxmsp #jitter #lava


Early masking test

Additionally, for the actual performance, MIDI was utilized so that a MIDI controller could control the software. MIDI data is sent to Max and then, using the OpenSoundControl protocol, that MIDI data is converted to OSC and sent to Processing. This of course requires an additional library in Processing in order to understand OSC data.
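
As a sketch of what that bridge does, here is a minimal Python illustration: the address names like `/scene` and the CC mapping are my own assumptions, but the encoder follows the OSC 1.0 binary layout for a single message (padded address string, padded type-tag string, big-endian arguments):

```python
import struct

def osc_pad(data: bytes) -> bytes:
    """Null-terminate and pad to a 4-byte boundary, per OSC 1.0 string rules."""
    data += b"\x00"
    return data + b"\x00" * (-len(data) % 4)

def osc_message(address: str, *args) -> bytes:
    """Encode one OSC message with int32/float32 arguments (big-endian)."""
    tags, payload = ",", b""
    for a in args:
        if isinstance(a, int):
            tags += "i"
            payload += struct.pack(">i", a)
        else:
            tags += "f"
            payload += struct.pack(">f", a)
    return osc_pad(address.encode()) + osc_pad(tags.encode()) + payload

def midi_cc_to_osc(cc: int, value: int) -> bytes:
    """Map a MIDI CC (value 0-127) onto a normalized OSC float message."""
    address = {1: "/mask/size", 2: "/scene"}.get(cc, "/unmapped")
    return osc_message(address, value / 127.0)
```

In practice Max does the mapping step and a library on the Processing side does the decoding; the point is just that the bridge is a translation from 7-bit MIDI values into addressed, typed OSC packets.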

Technologies used:

  1. Max/MSP/Jitter
  2. Syphon for creating and sending an OpenGL texture
  3. Processing
  4. MIDI
  5. OpenSoundControl

The Development

I am still actively studying Jitter (the collection of video objects and processes within Max). Video can be managed as a matrix or as OpenGL within Jitter, and there are reasons one might use either. OpenGL provides lower-level processes that occur on the GPU, which creates efficiency and dramatic speed improvements. The matrix in Jitter is, at its core, an array of data arranged in columns and rows. You are essentially creating, importing, and manipulating pixel data within these matrices, which often consist of multiple “planes.” Planes represent the types of data included: reds, blues, greens, alpha, to put it simply. This can be CPU intensive, and I unfortunately deal with this intimately. I’m just not pulling off great frame rates. It’s frustrating.
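
To make the matrix idea concrete, here is a toy version in Python. This is only an illustration of the shape of the data, not Jitter’s actual API:

```python
def make_matrix(rows, cols, planes=4, fill=0):
    """A toy Jitter-style char matrix: rows x cols cells,
    each cell holding one value per plane (alpha, red, green, blue)."""
    return [[[fill] * planes for _ in range(cols)] for _ in range(rows)]

frame = make_matrix(4, 4)
frame[0][0] = [255, 200, 30, 30]  # one reddish, fully opaque pixel (ARGB)

# Every per-pixel operation touches rows * cols * planes values on the CPU,
# which is why large matrices get expensive compared to GPU-side textures.
cost = len(frame) * len(frame[0]) * len(frame[0][0])
```

Scaling that cost to a 1280x720 frame at four planes per pixel shows why matrix processing on the CPU struggles to keep up with real-time frame rates.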

I initially created two video players in Jitter using these matrices. One represented a mask and the other just a video. Using a process that I wish I could explain in more detail, the functionality of a Jitter object allowed for a masking technique when both matrices were merged or “packed” together.
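
The masking operation itself reduces to something like the following plain-Python sketch. This is the idea, not the actual Jitter object’s implementation:

```python
def apply_mask(video, mask):
    """Keep a video pixel where the mask pixel is white (255);
    black it out where the mask is black (0)."""
    out = []
    for vid_row, mask_row in zip(video, mask):
        out.append([
            pixel if m == 255 else (0, 0, 0)
            for pixel, m in zip(vid_row, mask_row)
        ])
    return out

# A 2x2 toy frame of RGB pixels and a matching black/white mask.
video = [[(10, 20, 30), (40, 50, 60)],
         [(70, 80, 90), (11, 12, 13)]]
mask = [[255, 0],
        [0, 255]]
masked = apply_mask(video, mask)  # top-left and bottom-right survive
```

Because the mask is just another matrix of pixel values, anything that can generate black-and-white frames, a looped video, an image, or a live Processing sketch, can drive the mask.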

The results were straightforward. It was at this point I realized it might be interesting to dynamically create masks instead of using a looped video or image. I find it much easier to develop and animate 2D graphics in Processing, so I opted for that approach.

#underworld #processing #maxmsp


Introducing Syphon and Processing

The masks are represented as black or white color data within a matrix (specifically, RGB at 0 or at maximum). The object I can’t fully explain takes the black or white pixel data in order to mask the additional matrix input. So these are the colors we use in Processing.

As I mentioned, it’s much easier for me to create and animate 2D graphics in Processing. I couldn’t even begin to explain how it’s managed in Jitter. So I was able to rapidly develop multiple functions in Processing that I am calling “scenes.” These scenes animate and create a series of different black shapes or grids within a canvas/sketch. Using the Syphon library, this canvas is sent down an OpenGL pipeline as a texture.

So where I previously utilized a matrix playing video or loading an image within Jitter, I now connect to Syphon in order to pipe in the texture data. This texture data is then converted to a matrix.

#maxmsp #autechre #processing



This unfortunately introduces some inefficiencies, and I still need to find a solution. Keeping everything within OpenGL could be the ideal scenario (if I had a decent video card), so converting it to a matrix taxes my computer even more.

Additionally, I could not find a way to send different textures out of one Processing sketch. Why did I want to do this? I wanted multiple masks for multiple videos. So I ended up having to create two different Processing sketches, each sending out a texture with Syphon. This was probably more taxing on my 2GB video card, and converting the textures to matrices was additionally taxing on my CPU.

This was unfortunately just a series of inefficiencies that require some rethinking. I am not sure how intensive Syphon is when sending out a texture; I’m not even sure yet how to monitor that. However, creating two Processing sketches may be a problem with a solution. Additionally, using OpenGL within Jitter can really improve speed, especially when handling higher-resolution video and textures. Thus keeping it in OpenGL may be the real solution; I just need to figure out a non-matrix masking technique. When I attempt this project again, that will be my objective.

Cross-software Communication

I was facing a fast deadline and wasn’t able to resolve some OSC issues. OpenSoundControl is a protocol that allows for communication over a network and between applications. With a MIDI instrument communicating with Max, I mapped that MIDI data into corresponding OSC data. Within Processing, with an OSC library installed, the sketches listened for OSC messages and translated that data according to my needs (triggering scenes, changing shape data). The issue I confronted is that I was not able to communicate with both sketches at the same time. I don’t know why. It may be a simple issue, but I’ll need to revisit it. I’m looking at a tool like Osculator in order to better manage this type of task.
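
On the listening side, the translation step boils down to a dispatch table mapping OSC addresses to handlers. A Python sketch of that pattern (the addresses and handler names are my own illustrations, not from the project):

```python
# Shared sketch state that incoming OSC messages are allowed to change.
state = {"scene": 0, "size": 0.5}

def set_scene(value):
    state["scene"] = int(value)

def set_size(value):
    state["size"] = float(value)

# Dispatch table: one OSC address per controllable parameter.
handlers = {
    "/scene": set_scene,
    "/mask/size": set_size,
}

def on_osc(address, value):
    """Route one OSC message to its handler; ignore unknown addresses."""
    handler = handlers.get(address)
    if handler:
        handler(value)

on_osc("/scene", 2)        # switch to scene 2
on_osc("/mask/size", 0.8)  # resize the mask
on_osc("/unknown", 1)      # silently ignored
```

One advantage of the table approach is that adding a new MIDI-controllable parameter only means adding one entry, which is handy when the mapping in Max is still in flux.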

The results

The results were positive and interesting. Despite the performance issues, I am very happy with it. Considering that I am somewhat new to many of these details, dealing with efficiency is the next logical step, and it’s better I deal with it now than later.

I am very excited about the possibilities of developing algorithmic and generative masks with Processing that can be manipulated by MIDI/OSC. Now I will acknowledge that once I dive further into OpenGL, I may find it better to create masks there–within Jitter–instead of with Processing. However being able to quickly program in Processing is a major plus.


A beginning for VEVE

Rapid development of interactive tools and software is an exciting process. For a recent client I developed (and am still tweaking) a live video manipulation tool focused on glitchy, dark tones. This client is integrating their Lemur software into the setup and will control a series of parameters on the custom software VEVE is providing. A more thorough description of the project will come soon.


Currently offering interactive VJing for events

I’d like to introduce VEVE, a new company offering live and interactive visualizations. We are still developing the right portfolio and presentation but I’d like to offer a sneak peek at things in the pipeline. I am also seeking potential clients and collaborators who are interested in what they see. Contact me.

Currently Offering MaxMSP / Max for Live training

For those who follow my development and career, I have engaged in a broad practice with technology: front-end development, UI/UX design, sound design, and something along the lines of simple software development with MaxMSP. That’s a fuzzy area, which is what makes it interesting. I am now offering MaxMSP and Max for Live training to any students interested.

I will provide more details soon as they develop. To celebrate this announcement, I have a new section of my website where I will detail MaxMSP development and projects.

#Immigration Hackathon Project

I recently took the opportunity to finally attend a hackathon event held by the city of Los Angeles and officially supported by the mayor. It was an amazing experience that reminded me of my passion for social and civil matters. The theme of this particular hackathon was immigration, and a very interesting assortment of people showed up for the event: activists, non-profit members, and technologists.

I joined a group who decided to create an “all in one” web application addressing the many needs of new immigrants and/or uninformed immigrants in the Los Angeles area. Our perspective is that many services and resources exist across the city but are either hard to find or difficult to distinguish from the bad ones. Our discussion was very rewarding, and it reminded me of my time as a graduate researcher working on a project for people displaced after Hurricane Katrina. One of the key things I tried to stress with the group is that this web application needs to be optimized for those who want to HELP others. We fully understand that not every immigrant in L.A. who is seeking services will have a smartphone with a data plan, let alone speak English.

I’ll discuss the project in more detail as time goes on but we’re all excited and hope to submit this to a final city-wide competition in a few months. Here is a very early process chart showing how a user would flow through portions of this web application.