The limits of Processing, and how I transcend them with OPENRNDR
If you’ve never heard of “creative coding”, you probably won’t find this story interesting. On the other hand, maybe it’s worth learning about this slightly esoteric phenomenon of contemporary culture. It obviously feeds on the possibilities that technology creates, but at the same time it transgresses the capitalist framework of performing only those activities which are aligned with the supply stream of continuously propelled consumer demands. It is code written not to satisfy any needs except aesthetic and conceptual ones. Recently I have been meditating on the phenomenon of “conceptual needs”, and I would like to write more about them soon.
I’ve been using Processing from the Processing Foundation in my projects for a while. It’s a great tool, especially if you are starting your journey into creative coding. Processing abstracts away the complexity of the underlying technology, helping us focus on the effect we want to achieve rather than on technicalities. But Processing has its own limits. I realized that there are things I cannot express in it, and as Wittgenstein said: “The boundary of my language represents the boundary of my world”. And I want my world to have floating point textures. :)
The picture above illustrates the problem. I wanted to treat every single pixel on the screen as if it were a particle. To do that, I tried to encode each particle’s current position in a texture which could then be processed directly on the GPU, orders of magnitude faster. But in Processing it is not possible to associate frame buffer objects with floating point textures. I saw this “pixelation” glitch on my control image and couldn’t explain it for a while, until I realized that the 8-bit integers encoding the texture are not precise enough to accurately describe the coordinates of my floating point particles.
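To see where the glitch comes from, here is a minimal sketch in plain Kotlin (the helper names `encode8bit` and `decode8bit` are my own illustration, not OPENRNDR or Processing API) of what happens when a normalized particle coordinate is round-tripped through an 8-bit texture channel:

```kotlin
// Hypothetical helpers, not library API: store a normalized coordinate
// in a single 8-bit channel and read it back.
fun encode8bit(x: Double): Int = (x.coerceIn(0.0, 1.0) * 255.0).toInt()
fun decode8bit(v: Int): Double = v / 255.0

fun main() {
    val x = 0.123456                      // normalized particle position
    val roundTripped = decode8bit(encode8bit(x))
    // 8-bit storage snaps the position to one of only 256 values, so the
    // round-trip error can reach about 1/255 ≈ 0.0039 of the screen size,
    // which is what makes per-pixel particles visibly "pixelate".
    println(x - roundTripped)
}
```

With a floating point texture the position survives the round trip essentially unchanged, which is exactly why I wanted `FLOAT32`-backed buffers in the first place.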
Another example with Kinect depth camera data might illustrate the problem even better:
The first picture comes from Processing, using the Open Kinect for Processing project. In this setup only a very narrow range of depth values is used, which highlights facial features. But it is clearly visible that the transitions between depth levels are not smooth.
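The narrow-range trick is just a remapping of the depth window onto the visible brightness range. A minimal sketch, where `highlight` is a hypothetical helper of my own and not part of Open Kinect for Processing:

```kotlin
// Hypothetical helper: map raw depth readings so that only a chosen
// window [minDepth, maxDepth] spans the visible 0.0..1.0 brightness range.
fun highlight(depth: Int, minDepth: Int, maxDepth: Int): Double =
    ((depth - minDepth).toDouble() / (maxDepth - minDepth)).coerceIn(0.0, 1.0)

fun main() {
    // Everything nearer than 500 clamps to black, everything farther
    // than 700 clamps to white, and the face in between receives the
    // full gray-scale resolution.
    println(highlight(600, 500, 700))
}
```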
The second picture shows the same algorithm, this time running in OPENRNDR. I had to provide my own code for transferring depth data to the GPU, but that code is minimal and actually very clean compared to how it is implemented in Processing. What is more, in my new solution the raw depth data is transferred directly to the GPU as a ByteBuffer and processed quickly there, instead of being pre-processed on the CPU, where it is converted from 11-bit to 8-bit resolution in order to compose a Processing PImage which is then uploaded as a texture. This picture proves that a resolution of 2048 depth levels makes a huge difference compared to 256 depth levels, especially if the data is supposed to be converted into a 3D representation (point cloud, etc.).
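The cost of that CPU-side downconversion is easy to demonstrate in plain Kotlin, independent of any Kinect library: two raw 11-bit depth readings that differ by several levels collapse onto the same value after an 8-bit conversion (`to8bit` is my own illustration of the conversion, not the actual library code).

```kotlin
// Sketch of the precision argument: crush an 11-bit depth value
// (0..2047) down to 8 bits (0..255), as the Processing pipeline does.
fun to8bit(depth11: Int): Int = depth11 * 255 / 2047

fun main() {
    val a = 1000  // raw 11-bit depth values, range 0..2047
    val b = 1003
    // Both readings land on the same 8-bit level, so the difference
    // between them is lost before the data ever reaches the GPU.
    println(to8bit(a) == to8bit(b))
}
```

Keeping the raw 16-bit-per-sample ByteBuffer and converting on the GPU avoids this loss entirely, because all 2048 levels survive the upload.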
There are still plenty of use cases where Processing, shaders and textures work great together. Recently I even contributed some code to the Shadertoy2Processing project started by Raphaël de Courville, where I show more advanced use cases involving multiple fragment shaders, multiple buffers, and a feedback loop on top of them. The Processing code for these techniques came out surprisingly clean and readable, without the dependency on the low-level OpenGL API which I was afraid would be necessary. I also started a new project on GitHub called processing-shaders where I collect some other use cases for Kinect.
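Such a feedback loop is usually built from two buffers that swap roles each frame. A minimal sketch of the pattern in Kotlin, where `PingPong` is my own illustration rather than an API from either project:

```kotlin
// Generic ping-pong pattern: two buffers alternate between the roles
// of "previous frame" (read) and "render target" (write), swapping
// after every frame so each pass can sample its own prior output.
class PingPong<T>(var read: T, var write: T) {
    fun swap() {
        val tmp = read
        read = write
        write = tmp
    }
}

fun main() {
    val buffers = PingPong("bufferA", "bufferB")
    repeat(3) {
        // draw into buffers.write while sampling buffers.read...
        buffers.swap()
    }
    println(buffers.read)  // after an odd number of swaps the roles flip
}
```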
I will still use Processing when producing code for others, to let them easily maintain it. But my personal projects from now on will be based on OPENRNDR.
Take a look at this simple code reading data from the Kinect depth camera. It is surprisingly compact: no library like Open Kinect for Processing was needed, just the basic freenect library, and everything is processed on the GPU for maximal performance. (Update: I contributed official Kinect support to OPENRNDR, which makes it even simpler.)