Video Composition Software with Max, Processing, and Syphon

Over the past few months I have dedicated much of my focus to VEVE and to creating live visual software within Max/MSP/Jitter. I’d like to describe some of that process broadly. Much of it has been experimentation, slowly bleeding into more controlled design scenarios. In other words, I haven’t always known enough to design a final product according to my taste or aesthetic. As things progress, however, I am better able to design what I want or need.

[Instagram photo by estevan (@estevancarlos)]

[Instagram video by estevan (@estevancarlos): “Accidentally came across an aesthetic that isn’t quite my own.” #processing #maxmsp #reas]

The Technology

As usual, I am focused on Max/MSP/Jitter. Jitter provides the means for video manipulation and OpenGL programming; here I’m going to focus on video within Jitter. Recently I found myself playing with masking in Jitter by combining and manipulating the matrices of two video feeds. From this process I realized I could just as easily bring in a “video” feed from elsewhere. So, using a third-party tool called Syphon, I brought an external OpenGL texture into Max from Processing: a Syphon library in Processing publishes the OpenGL texture, and a Syphon object in Max receives it.
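
To make that concrete, here is a minimal sketch of the Processing side, using the Syphon library’s server class; the window contents are published as an OpenGL texture that the Syphon client external for Jitter (jit.gl.syphonclient) can pick up in Max. The server name and the drawing are arbitrary placeholders of mine.

    import codeanticode.syphon.*;

    SyphonServer server;

    void setup() {
      // Syphon needs an OpenGL renderer (P2D or P3D).
      size(640, 480, P3D);
      // The name is arbitrary; Max uses it to find this server.
      server = new SyphonServer(this, "Processing Syphon");
    }

    void draw() {
      background(0);
      fill(255);
      ellipse(width/2, height/2, frameCount % width, frameCount % width);
      // Publish the current frame as an OpenGL texture.
      server.sendScreen();
    }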

[Instagram video by estevan (@estevancarlos): #alvanoto #maxmsp #jitter #lava]

Early masking test

Additionally, for the actual performance, I used a MIDI controller to drive the software. MIDI data is sent to Max, converted to OpenSoundControl (OSC) messages, and forwarded to Processing. This of course requires an additional library in Processing in order to understand the OSC data.
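
For the Processing end of that conversion, the receiving sketch might look like the following, using the oscP5 library. The /scene address and its integer argument are my own placeholders; on the Max side, a [udpsend] object pointed at the sketch’s port would carry the converted MIDI data.

    import oscP5.*;
    import netP5.*;

    OscP5 osc;
    int scene = 0;

    void setup() {
      size(640, 480);
      // Listen for OSC on UDP port 12000 (an arbitrary choice;
      // Max's [udpsend 127.0.0.1 12000] would match it).
      osc = new OscP5(this, 12000);
    }

    void draw() {
      background(0);
    }

    // oscP5 calls this whenever an OSC message arrives.
    void oscEvent(OscMessage msg) {
      // "/scene" and its integer payload are hypothetical names.
      if (msg.checkAddrPattern("/scene") && msg.checkTypetag("i")) {
        scene = msg.get(0).intValue();
      }
    }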

Technologies used:

  1. Max/MSP/Jitter
  2. Syphon for creating and sending an OpenGL texture
  3. Processing
  4. MIDI
  5. OpenSoundControl

The Development

I am still actively studying Jitter (the collection of video objects and processes within Max). Video can be handled either as a matrix or as OpenGL within Jitter, and there are reasons to use each. OpenGL provides lower-level processes that run on the GPU, which creates efficiency and dramatic speed improvements. The Jitter matrix is, at its core, an array of data arranged in columns and rows. You are essentially creating, importing, and manipulating pixel data within these matrices, which typically carry several “planes” of data. Planes represent the types of data included: red, green, blue, alpha, to put it simply. This can be CPU intensive, and I unfortunately deal with that intimately: I’m just not pulling off great frame rates. It’s frustrating.
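
A rough analogy in Processing terms, since I can’t paste a Jitter patch here: a char ARGB matrix is like an image’s pixel array, where each pixel packs four planes into one value. Touching every pixel of every frame on the CPU, as below, is exactly the kind of work that drags my frame rates down. The file name is just a stand-in.

    PImage frame;

    void setup() {
      size(320, 240);
      frame = loadImage("frame.png");  // stand-in for a video frame
    }

    void draw() {
      frame.loadPixels();
      for (int i = 0; i < frame.pixels.length; i++) {
        int c = frame.pixels[i];
        // Each pixel packs four "planes" into one int:
        int a = (c >> 24) & 0xFF;  // alpha plane (unused below)
        int r = (c >> 16) & 0xFF;  // red plane
        int g = (c >> 8)  & 0xFF;  // green plane
        int b =  c        & 0xFF;  // blue plane
        frame.pixels[i] = color(r, g, b);  // imagine real per-pixel work here
      }
      frame.updatePixels();
      image(frame, 0, 0);
    }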

I initially created two video players in Jitter using these matrices: one represented a mask and the other just a video. Using a process that I wish I could explain in more detail, a Jitter object allowed for a masking technique when both matrices were merged or “packed” together.
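
I can’t reproduce the Jitter patch in text, but here is a sketch of the same idea done naively on the CPU in Processing, under the assumption that the mask is pure black and white: where the mask is white the video pixel passes through, and where it is black the output goes dark. The image files are hypothetical, and both are assumed to be the same size.

    PImage video, mask, result;

    void setup() {
      size(320, 240);
      video = loadImage("video-frame.png");  // hypothetical video frame
      mask  = loadImage("mask.png");         // hypothetical black/white mask
      result = createImage(video.width, video.height, RGB);
      video.loadPixels();
      mask.loadPixels();
      result.loadPixels();
      for (int i = 0; i < video.pixels.length; i++) {
        // White mask pixel: keep the video pixel. Black: output black.
        result.pixels[i] = brightness(mask.pixels[i]) > 127
                         ? video.pixels[i]
                         : color(0);
      }
      result.updatePixels();
    }

    void draw() {
      image(result, 0, 0);
    }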

The results were straightforward. It was at this point I realized it might be interesting to dynamically generate masks instead of using a looped video or image. I find it much easier to develop and animate 2D graphics in Processing, so I opted for that approach.

[Instagram video by estevan (@estevancarlos): #underworld #processing #maxmsp]

Introducing Syphon and Processing

The masks are represented as black or white color data within a matrix (specifically, RGB at 0 or at maximum). The object I can’t fully explain takes that black or white pixel data and uses it to mask the other matrix input. So those are the colors we use in Processing.

As I mentioned, it’s much easier for me to create and animate 2D graphics in Processing; I couldn’t even begin to explain how that’s managed in Jitter. So I was able to rapidly develop multiple functions in Processing that I call “scenes”. Each scene animates a series of different black shapes or grids within a canvas/sketch. Using the Syphon library, this canvas is sent down an OpenGL pipeline as a texture.
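
The “scene” idea is simple enough to sketch: each scene is a function that draws black shapes on a white canvas, an index picks the active one, and Syphon ships the frame out. The scene names and shapes below are illustrative, not my actual performance scenes.

    import codeanticode.syphon.*;

    SyphonServer server;
    int scene = 0;  // set by OSC in the real setup; fixed here

    void setup() {
      size(640, 480, P3D);  // Syphon needs an OpenGL renderer
      server = new SyphonServer(this, "Masks");  // arbitrary server name
    }

    void draw() {
      background(255);  // white background
      fill(0);          // black shapes form the mask
      noStroke();
      if (scene == 0) sceneGrid();
      else            sceneCircles();
      server.sendScreen();  // publish the canvas as an OpenGL texture
    }

    void sceneGrid() {
      for (int x = 0; x < width; x += 80)
        for (int y = 0; y < height; y += 80)
          rect(x + 10, y + 10, 60, 60);
    }

    void sceneCircles() {
      for (int x = 0; x < width; x += 80)
        ellipse(x + 40, height/2 + sin(frameCount * 0.05 + x) * 120, 50, 50);
    }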

So where previously I used a matrix to play video or load an image within Jitter, I now connect to Syphon in order to pipe in the texture data. That texture data is then converted to a matrix.

[Instagram video by estevan (@estevancarlos): #maxmsp #autechre #processing]

Inefficiencies

This unfortunately introduces some inefficiencies, and I still need to find a solution. Keeping everything within OpenGL would be the ideal scenario, if I had a decent video card; converting the texture to a matrix taxes my computer even more.

Additionally, I could not find a way to send multiple textures out of a single Processing sketch. Why did I want to do this? I wanted multiple masks for multiple videos. So I ended up creating two separate Processing sketches, each sending out its own texture with Syphon. This was probably more taxing on my 2 GB video card, and converting each texture to a matrix was additionally taxing on my CPU.
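
The workaround itself is just a naming detail: each sketch creates its own uniquely named server, and each Syphon client in Max subscribes to one of them by that name (via the client’s server-name attribute, as I understand it). The names below are arbitrary.

    import codeanticode.syphon.*;

    SyphonServer server;

    void setup() {
      size(640, 480, P3D);
      // In the first sketch:
      server = new SyphonServer(this, "MaskA");
      // The second sketch is identical except for the name, e.g. "MaskB".
    }

    void draw() {
      background(255);
      fill(0);
      ellipse(width/2, height/2, 200, 200);
      server.sendScreen();
    }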

This was unfortunately just a series of inefficiencies that require some rethinking. I am not sure how intensive Syphon is when sending out a texture; I’m not even sure yet how to monitor that. However, needing two Processing sketches may be a problem with a solution. Additionally, staying in OpenGL within Jitter can really improve speed, especially when handling higher-resolution video and textures. Keeping everything in OpenGL may be the real solution… I just need to figure out a non-matrix masking technique. When I attempt this project again, that will be my objective.

Cross-software Communication

I was facing a fast deadline and wasn’t able to resolve some OSC issues. OpenSoundControl is a protocol that allows for communication over a network and between applications. With a MIDI instrument communicating with Max, I mapped that MIDI data to corresponding OSC messages. Within Processing, with an OSC library installed, each sketch listened for the OSC messages and translated that data according to my needs (triggering scenes, changing shape data). The issue I confronted is that I was not able to communicate with both sketches at the same time. I don’t know why; it may be a simple issue, but I’ll need to revisit it. I’m also looking at a tool like Osculator to better manage this kind of task.
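
One hedged guess at the cause, for when I revisit it: two processes generally cannot bind the same UDP port, so if both sketches construct their OSC listener on the same port, only one will receive anything. Giving each sketch its own port, and sending from Max to both ports, should sidestep that. The port numbers here are arbitrary.

    import oscP5.*;
    import netP5.*;

    OscP5 osc;

    void setup() {
      size(320, 240);
      // Sketch A listens on 12000; the second sketch would use 12001.
      // Max then sends each message to both ports, e.g. through a pair
      // of [udpsend 127.0.0.1 12000] and [udpsend 127.0.0.1 12001] objects.
      osc = new OscP5(this, 12000);
    }

    void draw() {
      background(0);
    }

    void oscEvent(OscMessage msg) {
      println(msg.addrPattern());  // confirm which messages arrive
    }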

The Results

The results were positive and interesting. Despite the performance issues, I am very happy with the outcome. Considering that I am somewhat new to many of these details, dealing with efficiency is the next logical step, and it’s better that I deal with it now rather than later.

I am very excited about the possibilities of developing algorithmic and generative masks in Processing that can be manipulated over MIDI/OSC. I will acknowledge that once I dive further into OpenGL, I may find it better to create masks there, within Jitter, instead of in Processing. However, being able to program quickly in Processing is a major plus.

[Instagram video by estevan (@estevancarlos)]
