Archives

Immigration Hackathon Project

I recently took the opportunity to finally attend a hackathon held by the city of Los Angeles and officially supported by the mayor. It was an amazing experience that reminded me of my passion for social and civic matters. The theme of this particular hackathon was immigration, and a very interesting assortment of people showed up for the event: activists, non-profit members, and technologists.

I joined a group that decided to create an “all in one” web application addressing the many needs of new or uninformed immigrants in the Los Angeles area. Our perspective is that many services and resources exist across the city but are either hard to find or hard to vet for quality. Our discussion was very rewarding, and it reminded me of my time as a graduate researcher working on a project for people displaced by Hurricane Katrina. One of the key things I tried to stress with the group is that this web application needs to be optimized for those who want to HELP others. We fully understand that not every immigrant in L.A. who is seeking services will have a smartphone with a data plan, let alone speak English.

I’ll discuss the project in more detail as time goes on but we’re all excited and hope to submit this to a final city-wide competition in a few months. Here is a very early process chart showing how a user would flow through portions of this web application.

AskAng-Process-822015

Voice Visualization for Narrative/Game

One component of my “Prison Industrial Complex: A Game” project concerns a visualization of the narrator. I am designing an aesthetic influenced by minimalism and Bauhaus, so the visual graph representing the narrator’s audio is a series of vertical lines extending downward.

Voice Graph

I am using an object in Max, [fffb~], which is simply a bandpass filter with an argument that defines the number of filter banks. From there I take the amplitude of twelve banks and compensate for the DC offset with an if statement that multiplies the signal by -1.0 whenever it falls below 0. As a result the signal is always positive, which allows me to use the data to drive my Processing visualization.
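To make this concrete, here is a minimal Processing sketch of the receiving end. This is a sketch only: it assumes the twelve rectified amplitudes arrive in a float array called banks, and the Max-to-Processing transport is not shown.

    // Twelve rectified band amplitudes drawn as vertical lines extending
    // downward. abs() mirrors the if statement described above: negative
    // values are flipped positive.
    float[] banks = new float[12]; // assumed to be filled from Max each frame

    void setup() {
      size(600, 300);
      strokeWeight(4);
    }

    void draw() {
      background(255);
      for (int i = 0; i < banks.length; i++) {
        float amp = abs(banks[i]); // always positive, as in the Max patch
        float x = map(i, 0, banks.length - 1, 50, width - 50);
        line(x, 0, x, amp * height); // the line extends down from the top
      }
    }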

Filter Banks

Experimental Solutions in Ableton Live: Download a sample of the new eBook

Experimental Solutions in Ableton Live

I have been sitting on the development of the book for some time and decided to use an interesting publishing platform to present it. I will be adding content and finishing the book while simultaneously promoting it on LeanPub.com. The platform encourages a lean philosophy and a regular update cycle for ebook publishing, which is sensible for a technical book like mine.

Download a Sample Chapter

Experimental Solutions in Ableton Live will focus on introducing music theory and history within the context of the DAW software Ableton Live. I was inspired to address this topic first and foremost because I am a musician, but also as a counterpoint to the EDM-obsessed training that currently exists online. Not everyone wants to learn how to make dubstep. Some young producers should probably understand a bit of theory and musical history in order to frame their work and goals.

The book will introduce topics of algorithmic music, generative music, and aleatoric music. I present interesting techniques available in Ableton Live and connect the dots to music theory.

Prison Industrial Complex: Current Development Part 1

As I’ve worked through my currently titled project, “Prison Industrial Complex: A Game”, it’s become larger, more detailed, and more interesting. To summarize: this game explains details about the private prison system from the perspective of a dystopian machine. The tone is cynical, and it engages the player by asking questions.

Still at the planning and structuring phase, I’m finding myself engaged in so many methods of planning and diagramming. I’d like to take a moment to cover this process.

Initial Diagrams

After my sketches on graph paper but before the wireframes, I create outlines that organize a variety of notes relating to game components, UI elements, research, and programming. On a side note, the Mac application “Tree 2” is supremely useful for this purpose.

Prison Industrial Complex

The game/narrative will include layers of information while simultaneously testing the participant. For example, prison data will be populated in the background of the game, creating a graph. Also on the drawing board is the implementation of a live Twitter feed relating to the company Corrections Corp of America. That’s a possibility. A maybe. If the Twitter content is interesting, it will work. What I know will definitely be included is live stock market data relating to the company. My hope is that the game design increasingly feels chaotic and distracting, something that could parallel the tone of loud sounds and lights.

Wireframes

At this stage in the process, not all the details of the game itself are in place. I still need to dive deeper into the subject matter, which will likely allow me to generate more details. So the wireframes currently represent a broad type of interaction. Certain types of screens will exist, and I am currently focused on designing those parts.

Idle Screen Voice

The wireframe above represents the “Idle Screen”. This screen introduces the narrator of the game, who is represented by vertical lines. The vertical lines are a visualization of the voice/audio, so as the narrator speaks, the lines fluctuate in height.

Highlight

The following wireframe represents a “Question” screen that offers three answer options to the player. This project is being designed with the Leap Motion in mind, so a player will move their hand horizontally across the Leap Motion in order to select an option. Once their hand idles over an option for a length of time, a countdown timer begins, counting down from 3 to 0. Once that countdown completes, the selection is official. This approach is meant to compensate for the lack of a click function.
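As a rough illustration of the dwell-selection logic (not the project code itself), here is a minimal Processing sketch, with mouseX standing in for the Leap Motion’s horizontal palm position:

    // Hovering over one of three zones starts a three-second countdown;
    // moving to a different zone resets it. mouseX is a stand-in for the
    // Leap Motion palm data, which isn't shown here.
    int hovered = -1;   // which of the three options the hand is over
    int hoverStart = 0; // millis() when the current hover began
    final int DWELL_MS = 3000;

    void setup() {
      size(600, 200);
      fill(0);
    }

    void draw() {
      background(255);
      int zone = constrain(mouseX / (width / 3), 0, 2);
      if (zone != hovered) { // the hand moved: restart the countdown
        hovered = zone;
        hoverStart = millis();
      }
      int remaining = DWELL_MS - (millis() - hoverStart);
      if (remaining <= 0) {
        println("Option " + hovered + " is now official");
        hoverStart = millis(); // re-arm for the demo
      }
      text("Option " + hovered + " in " + ceil(remaining / 1000.0), 20, 100);
    }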

Designs

I’m often inspired by Russian constructivism and minimalism, as well as the grunge typography of David Carson. I don’t intend to abandon all typographic rules, and I understand there’s a contradiction in the influences listed; however, I hope to either strike a balance or create a contrast. We will see.

prison_new_layout2_white

prison_new_layout_idle

prison_new_layout2

Application Diagrams

As part of this process, I’m introducing myself to UML diagrams. It’s a fascinating addition to the workflow. Relating the UML diagrams to the designs and wireframes really changes my perspective on those parts of development.

uml

Sound Design & Music

During this fresh round of development I’m scrapping my previous approach, in which most of the logic lived in MaxMSP. Max is a flow-based programming language but became obviously inefficient when dealing with the logic of this game. At least it’s inefficient with its built-in objects. I might consider writing my own Max objects in the future; I’m told it’s pretty easy to do so in Java (as opposed to C++).
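For reference, a Java external is a class that extends MaxObject from the com.cycling74.max package that ships with Max. A minimal, hypothetical example, a toy bang counter rather than anything from this project, looks roughly like this:

    // A toy mxj external: counts incoming bangs and sends the total out.
    import com.cycling74.max.DataTypes;
    import com.cycling74.max.MaxObject;

    public class BangCounter extends MaxObject {
        private int count = 0;

        public BangCounter() {
            declareInlets(new int[] { DataTypes.ALL });   // one inlet, any message
            declareOutlets(new int[] { DataTypes.INT });  // one int outlet
        }

        public void bang() {
            count++;
            outlet(0, count); // send the running count out the outlet
        }
    }

Compiled and placed on Max’s Java classpath, a class like this can then be instantiated as an [mxj BangCounter] object.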

For now, however, the logic is in Processing while the sound, music, some math, and video-processing effects reside within Max. One part of this project that I’m very excited to eventually present is the music. The music will be partially algorithmic but, more importantly, will shift and change as the user progresses through the game, as is generally expected. My intention is to create a tension through the music that hopefully influences the experience of the gameplay. I will arrive at that stage eventually.

Screen Shot 2015-07-04 at 1.25.51 PM

This completes the first summary of my development process.

New technologies are allowing for new narratives

Screen Shot 2015-06-04 at 11.34.33 PM

It seems like every couple of weeks I become sidetracked and attempt a new project exploring new ideas. I definitely don’t view this as a problem, since I’m currently focused on doing as much new research and exploration as possible. In that spirit I am attempting my first narrative through technology. Well, sort of my first attempt.

Musically speaking, I certainly believe my work has at times functioned as narrative; being an electronic musician, I think that could count as narrative through electronic synthesis/sampling. However, in this new project I’m touching on more holistic storytelling and more traditional concepts. Without revealing too much, I’m using my common resources for prototyping: MaxMSP, Processing, the Leap Motion, and graphic design. Eventually I’d like to focus more on OpenFrameworks and other resources, but that will be addressed at a later date.

A video posted by @estevancarlos

I’ve been wanting to confront subjects that are important to me. One of these is the matter of the prison-industrial complex: a profit-driven system that sets up large portions of Americans for failure. If you do not come from the right community with the right school and you find yourself involved with the wrong type of drugs, you may end up ostracized from mainstream society and involved in a career of prison servitude. Millions of Americans are.

With this subject in mind I am creating two primary things: a game and a game master/narrator. The “game” (in heavy quotes) primarily acts as a story explaining the private prison system in the United States. The “game master” will be an operating system representing the prison system. So the interaction is that of a user interacting with a cold, bureaucratic computer.

In total it’s becoming a wonderfully thorough project involving many technical details. I’m keeping the narration in JSON files that will be parsed with JavaScript within MaxMSP. These JSON files will also include triggers that can communicate with other parts of the software project. The user interface or “view” will exist within Processing. A video signal from Processing is sent to Max using Syphon, and MaxMSP then provides additional video processing. Additionally, I’m utilizing the Leap Motion as the controller. It will provide a simple interface for selecting and navigating the game/narrative.

"scene1" : {
        "section1" : [

            {
                "id" : 0,
                "content" : "Welcome to Prison Industrial Complex: A Game",
                "flags" : [0,0,0]
            },

            {
                "id" : 1,
                "content" : "Wave your hand over the interface to begin.",
                "flags" : [0,0,0]
            },

            {
                "id" : 2,
                "content" : "No no no no.",
                "flags" : [0,0,0]
            },

            {
                "id" : 3,
                "content" : "Did you know the United States has the least amount of encarcerated people in the world? No? That's because it's not true.",
                "flags" : [0,0,0]
            },

            {
                "id" : 4,
                "content" : "Become a part of the private prison industry. Imagine the earning potential. And only the earning potential. Don't imagine the other details.",
                "flags" : [0,0,0]
            },

            {
                "id" : 5,
                "content" : "Come. Play. You have nothing to lose.",
                "flags" : [0,0,0]
            }

        ]
    },
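The parsing itself is planned for JavaScript inside Max, but purely to illustrate the shape of this data, here is a hypothetical Processing sketch that walks the same structure. It assumes the excerpt above is wrapped in an enclosing object and saved as narration.json in the sketch’s data folder; the filename and the wrapper are my assumptions, not part of the project.

    // Walks scene1/section1 and prints each narration line. The flags
    // array presumably holds the per-line trigger data mentioned above.
    void setup() {
      JSONObject root = loadJSONObject("narration.json");
      JSONArray section = root.getJSONObject("scene1").getJSONArray("section1");
      for (int i = 0; i < section.size(); i++) {
        JSONObject line = section.getJSONObject(i);
        int id = line.getInt("id");
        String content = line.getString("content");
        JSONArray flags = line.getJSONArray("flags");
        println(id + ": " + content + " (" + flags.size() + " flags)");
      }
    }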

I’m aiming for a dystopian tone, so I’ve decided to employ glitch aesthetics. Specifically, I’m researching the kind of glitch style I want. It’s so much fun that I’ll have to prevent myself from going overboard. Since this is a work in progress I want to keep a few cards close to my chest; however, I’ve been itching to present some concept work and progress.

No FX

With Glitch FX

More to come.

Let’s kill the Step Sequencer

Or a “Re-imagination of the Step Sequencer Paradigm”

As I continue my research into algorithmic music processes, I’m working hard at constructing modular building blocks in MaxMSP. Through this recent work I’ve created modular components dealing with probability, simple synths, envelopes, etc. So my mindset has been “Modularity!”, not just because it’s a good idea but because I’m still ineffective at planning and structuring more elaborate projects. This is how I decided to approach a traditional step sequencer. The first component I needed to develop was the “step” component. This is where I realized the limiting assumptions of “sequencing” data.

step-module-maxmsp

My earliest introductions to music sequencing were with Cakewalk Pro Audio and the hardware sequencer on a Korg N364 (still one of the best). I was eventually introduced to step sequencing through the Korg EA-1 and Fruity Loops. The assumed structure of both approaches goes like this: place data within a grid. Sometimes you can control the grid. Done. There are of course alternatives, such as Euclidean sequencers and likely other options out there. Considering that practically any form of data can be translated into any other form of data, music sequencing is theoretically limitless. You could turn a series of tweets into musical information. You could claim to control those tweets and thus “sequence” your music based on that set of rules. On a pragmatic level, however, we deal with common sequencing paradigms.

As I worked my way through analyzing my needs for a step sequencer, I separated the concept into a few smaller parts: a step, a time control, and a timeline. Traditional sequencing ideas fall neatly into these categories. Let’s hold it up against the Euclidean approach for a moment:

  • A Euclidean sequencer uses circles, a radius, and the circumference in order to understand timing and additional data.
  • A line representing the radius moves within the circle at an arbitrary rate. The rate is the time control, and the circle could be called the timeline.
  • Nodes might be placed within the circle at varying distances from the center; the nodes represent sound/music/data. These could be called steps.
  • When the radius crosses a node, the node is triggered. This type of functionality essentially abandons a common grid system.

A Euclidean sequencer is a pretty exciting approach, but I stumbled across another funny way to structure sequenced information: a method that can mimic both a step sequencer and a Euclidean one, with more options.

Separating the “step” from the timeline and time control source

Since my original intention was simply to build a traditional step sequencer in parts, I first started on the component I’m calling the “step”. I decided to combine a few features, such as the playlist Max object (which plays samples), a custom envelope I call “ADXR”, signal input, and one additional, curious feature: bang output. This is where I realized the step sequencing concept can be tweaked.

4-steps

The bang output (for non-Max users, a bang is a trigger message) leaves the step object after a duration of time. In other words, when a step with a duration of one whole note is triggered, the bang output occurs after a whole note transpires. That output can then trigger another step. Repeat. Let me describe it more carefully (a code sketch of the chain follows the list):

  • Step object receives a trigger
  • Step object plays a sample (a kick drum, for example)
  • Step object counts down the duration defined at its input (a quarter note’s length, for example)
  • When the duration ends, a trigger output occurs.
  • That output can trigger anything else, including another step (a snare, for example)
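Here is a minimal Processing sketch of that chaining idea, a model of the concept rather than the Max patch itself. Note how notated durations and raw milliseconds mix freely, and how the last step can loop back into the first:

    // Each Step waits its own duration after being triggered, then
    // triggers whatever step its "bang output" is patched into.
    class Step {
      String label;      // e.g. "kick", "snare"
      int durationMs;    // how long to wait before passing the bang on
      Step next;         // the step this one's output is connected to
      int firedAt = -1;  // millis() when last triggered, -1 if idle

      Step(String label, int durationMs) {
        this.label = label;
        this.durationMs = durationMs;
      }

      void trigger() {
        println(millis() + "ms  " + label); // stand-in for playing a sample
        firedAt = millis();
      }

      void update() { // pass the bang on once the duration has elapsed
        if (firedAt >= 0 && millis() - firedAt >= durationMs) {
          firedAt = -1;
          if (next != null) next.trigger();
        }
      }
    }

    Step kick, snare, hat;

    void setup() {
      int quarter = 500;              // one quarter note at 120 BPM
      kick  = new Step("kick", quarter);
      snare = new Step("snare", 200); // raw milliseconds, off the grid
      hat   = new Step("hat", quarter / 2);
      kick.next = snare;              // kick -> snare -> hat -> kick,
      snare.next = hat;               // i.e. a feedback loop
      hat.next = kick;
      kick.trigger();
    }

    void draw() {
      kick.update();
      snare.update();
      hat.update();
    }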


My audio examples are still in their “Autechre” phase

I’ve designed the steps to receive notation-based times and milliseconds, so the system can work without a grid (by “grid” I’m referring to structured times). A kick drum plays. We wait 200ms. The snare plays. We wait one quarter note. Another snare plays. We wait 100ms. A hi-hat plays…

Here’s where it gets interesting: this modular setup allows for a feedback loop. A sequence of steps can trigger itself in a cycle. A structured rhythm can be set up to follow a strict time, OR a step can occur based on millisecond durations, which aligns with some of the qualities of a Euclidean sequencer (some qualities).

If you wanted to fully mimic a Euclidean approach, you would create multiple feedback loops with at least two steps each (Max does not allow an object to loop back into itself without a few caveats). With multiple feedback loops like this you could trigger them all and copy the Euclidean approach. However, this is just the start. Before I list other possible scenarios, I should admit that this isn’t necessarily innovative. It’s not groundbreaking. Many people using MaxMSP, and similar technologies, are engaged in abstract, complicated feedback loops and flows. That’s not new. I think what is interesting about this concept is that it bridges traditional electronic music user-experience concepts with more intricate flows. I’ve taken a “step” paradigm and designed it to fit into a variety of scenarios.

I can’t say the wheel has been reinvented here but I think a new type of wheel is being introduced for those who need it. This modular step approach can get rather complicated.

The Modular Step Object Flowchart

ModularStepSeq-flowchart

This flowchart is a work in progress and outlines some of the key factors of my step object/patch. There are some details of the patch that I am ignoring (they are not critical to the overall purpose), though the flowchart does include a few of them. It does not, however, offer an example of an elaborate sequence. I have another flowchart to explain that.

Two nearly identical loops with (likely) very different results

step-seq-concept-loop

Loop A roughly represents a simple loop. A kick is followed by a snare, then hi-hats. Then it almost repeats. If you look at the second-to-last hi-hat in Loop A, that hi-hat triggers two things: two more hi-hats. Those two hi-hats then go on to follow their own sequences. One of them loops back to a kick; the other eventually stops after a few steps. Additionally, you can see the durations designated between triggers. The only surprising duration in Loop A is a 16th note in the middle. You can almost imagine how this might sound. Think for a moment how that rhythm may go.

Now let’s look at Loop B. At first glance it’s pretty similar, but look at the durations: we have 8ths, 16ths, and 200ms. That alone really complicates the rhythm and moves it outside typical time structures. Look at the second-to-last hi-hat as well. What’s different? It loops back into the main loop! How do you think this rhythm sounds? I have no idea. In fact, during development I constantly found myself in reboot-worthy feedback loops. That is one problem I still need to address. If you can avoid that, though, you will find yourself creating very different types of rhythmic structures.

Early stages of development

I’ve skipped additional details regarding this development. For example, I’m building a signal gate into each step object, so you can sequence samples and also sequence gates upon a live signal. Although the audio I provide may not shed much light on the potential, I will continue going down this rabbit hole to see what musical qualities can be achieved this way. I think it looks promising.

#modular #feedback #drummachine

A video posted by @estevancarlos

Six string gestures with Max MSP

Six String Leap Motion MaxMSP Instrument

My interest in a six-string instrument was spurred by the startup company Artiphon, which had the idea of creating a sensible, tactile device that allows for new but familiar string-instrument gestures. I don’t play guitar, but I do own a classical guitar and am just now learning to play it. Coming from the world of piano, I find that I’m relieved by the expressiveness allowed by a string instrument like the guitar. The performer defines the frequencies through their gesture. You cannot do this on the piano. However, I figured I could do it with Max MSP.

 

Artiphon_INSTRUMENT_1_Press_0.0

It makes me think of an interesting music-production dilemma I try to work around: how to create outside the confines of quantization. Using Ableton Live as actively as I do, it’s easy to take quantizing for granted. It removes so much nuance from production and replaces it with overbearing precision. The same could be said of the piano.

With the piano you’re forced to address predefined frequencies and set intervals (you can’t retune on the fly). I am personally ready to move past quantization and into the world of microtonality.

Virtual String Instrument

So, inspired by what looks like a sensible and well-designed virtual string instrument, the Instrument 1, I decided to hack together a prototype of a concept relating to six strings. Since I don’t own the actual instrument, I decided to use my Leap Motion instead. The Leap allows for a range of data that can mimic the idea of a range of frequencies on a string. Using the Leap also allows me to further my research with the device (I have other Leap projects I want to develop).

 

I am using my tool of choice, Max MSP, and have created six lines that respond to six intervals within an axis controlled by the palm via the Leap Motion. In other words, the Max MSP patch allows you to select X, Y, or Z as an axis. You can then calibrate the maximum and minimum values of that axis. From there, six intervals are defined that trigger six different frequencies from standard guitar tuning. Where I take it from here is the exciting part.
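As a rough sketch of that mapping, with mouseY standing in for the calibrated Leap Motion axis and the open-string frequencies of standard tuning (E2 A2 D3 G3 B3 E4) hard-coded, the logic looks something like this:

    // The chosen axis is normalized against its calibrated range, then
    // quantized into one of six zones, each assigned a string frequency.
    float[] stringHz = { 82.41, 110.00, 146.83, 196.00, 246.94, 329.63 };
    float axisMin = 0; // calibrated minimum of the chosen axis
    float axisMax;     // calibrated maximum of the chosen axis

    void setup() {
      size(400, 600);
      axisMax = height; // mouseY runs the full sketch height in this demo
    }

    void draw() {
      background(255);
      float norm = constrain(map(mouseY, axisMin, axisMax, 0, 1), 0, 0.999);
      int zone = int(norm * 6); // 0..5
      for (int i = 0; i < 6; i++) {
        stroke(i == zone ? 0 : 200); // highlight the active "string"
        line(50, (i + 1) * height / 7.0, width - 50, (i + 1) * height / 7.0);
      }
      fill(0);
      text(stringHz[zone] + " Hz", 20, 30); // what a synth would receive
    }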

A Max patch containing Leap Motion Input

Gestural control within a 3D space with the Leap Motion

The Leap Motion can recognize fingers, but it is very imprecise. Because of this, I will only use the palm as an input, for the sake of stability. I may look further into finger data at a later date. Since the palm represents a single point in a 3D space, strumming is the primary gesture I am experimenting with in this prototype.

With a real guitar, the speed at which a person strums can influence the timbral quality of the instrument. The force with which they push down on the strings influences the strength of the vibrations, or loudness. Speed and strength of strumming often go together, since a fast gesture is often intended to have a level of force. This type of gestural dynamic leaves room for interesting design decisions.

The very simple synthesis component

For example:

  • One axis of data (the Z axis) could represent the force or pressure put upon the strings. Even though the physics of a real guitar would suggest this translates into higher peaks and valleys within a vibration, in a virtual/MIDI instrument this information could define FM modulation upon the corresponding synthesis, or could control a delay applied to the strings.
  • Strumming a real guitar quickly creates a complex set of vibrations/frequencies. We could emulate that directly through software, OR we could measure the details of the strumming gesture and translate the data differently. A duration of time exists between each strike of a string, likely measured in milliseconds, so the speed of the strumming dictates the timing between string hits. That time could be used to control a reverb on the instrument: the faster the strumming, the more (or less) reverb. For example, a 1ms duration between two string hits could scale to a 5% increase in wetness on a reverb. A sketch of this scaling follows the list.
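Here is a toy Processing sketch of that second mapping. The 100ms threshold and the 5%-per-millisecond scaling are illustrative assumptions taken from the example above, not a finished design:

    // Each mouse press stands in for the palm crossing a string zone.
    // Shorter gaps between hits (faster strums) push the reverb wetter.
    float wetness = 0.0; // reverb wet amount, 0.0 to 1.0
    int lastHitMs = -1;  // millis() timestamp of the previous string hit

    void setup() {
      size(200, 200);
      fill(0);
    }

    void draw() {
      background(255);
      text("wet: " + wetness, 20, 100);
    }

    void mousePressed() {
      int now = millis();
      if (lastHitMs >= 0) {
        int gap = now - lastHitMs; // inter-hit duration in ms
        // Every millisecond under a 100ms threshold adds 5% wetness.
        wetness = constrain((100 - gap) * 0.05, 0, 1);
      }
      lastHitMs = now;
    }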

I have only completed the portion of this experiment where the data exchange between Max MSP and the Leap Motion is properly set up. Additionally, a very simple visual is established. The next part is to create interesting synthesis that corresponds to the interesting data I can gather from the gestures. More to come.