ShadowSmoke (2009)

Manipulate luminous smoke in this interactive projected installation

Solo work. I wrote the software and created all the camera rigging.

ShadowSmoke/Besmoke is a piece created in early 2009. It is a Navier-Stokes fluid dynamics simulator that visualizes the movement of dense fluid with vivid colors. Its first incarnation was known as Besmoke; with the addition of computer vision to detect human motion disturbing the scene, it’s now known as ShadowSmoke.

At its core it is a grid-based Navier-Stokes fluid simulation that approximates the fluid dynamics in a stable and computationally inexpensive way. It’s based on Jos Stam’s “Real-Time Fluid Dynamics for Games” paper. Each grid cell has a density magnitude and a velocity vector, and the algorithm evolves those quantities at each time step. Blue represents areas of higher density and red represents areas of lower density. Black regions represent “obstacles” that admit neither density nor velocity; the obstacle map is loaded from a PNG file.
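
For flavor, here is what the core of a Stam-style density step looks like: diffuse, then advect along the velocity field. This is a condensed sketch in the spirit of the paper’s reference code, with my own variable names, no obstacle handling, and the pressure projection step omitted; it is not ShadowSmoke’s actual source.

```cpp
#include <vector>
#include <algorithm>

const int N = 128;                                        // interior cells per side
inline int IX(int i, int j) { return i + (N + 2) * j; }   // index into the padded (N+2)^2 grid

// Gauss-Seidel relaxation spreads density into neighboring cells.
void diffuse(std::vector<float>& x, const std::vector<float>& x0,
             float diff, float dt) {
    float a = dt * diff * N * N;
    for (int k = 0; k < 20; ++k)                          // 20 iterations is Stam's default
        for (int i = 1; i <= N; ++i)
            for (int j = 1; j <= N; ++j)
                x[IX(i,j)] = (x0[IX(i,j)] + a * (x[IX(i-1,j)] + x[IX(i+1,j)]
                             + x[IX(i,j-1)] + x[IX(i,j+1)])) / (1 + 4 * a);
}

// Semi-Lagrangian advection: trace each cell center backwards through
// the velocity field (u, v) and interpolate the density found there.
void advect(std::vector<float>& d, const std::vector<float>& d0,
            const std::vector<float>& u, const std::vector<float>& v, float dt) {
    float dt0 = dt * N;
    for (int i = 1; i <= N; ++i)
        for (int j = 1; j <= N; ++j) {
            float x = std::clamp(i - dt0 * u[IX(i,j)], 0.5f, N + 0.5f);
            float y = std::clamp(j - dt0 * v[IX(i,j)], 0.5f, N + 0.5f);
            int i0 = (int)x, i1 = i0 + 1;
            int j0 = (int)y, j1 = j0 + 1;
            float s1 = x - i0, s0 = 1 - s1, t1 = y - j0, t0 = 1 - t1;
            d[IX(i,j)] = s0 * (t0 * d0[IX(i0,j0)] + t1 * d0[IX(i0,j1)])
                       + s1 * (t0 * d0[IX(i1,j0)] + t1 * d0[IX(i1,j1)]);
        }
}
```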

The most basic way to interact with the simulation is with the mouse: using the left and right buttons, you can set regions of high density and modify the velocity vector field.
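
A hypothetical input hook for that might look like the following (reusing N and IX from the sketch above). The function name, button mapping, and scale factors are my guesses, not the real bindings.

```cpp
// Left drag deposits density under the cursor; right drag pushes the
// velocity field along the direction of mouse motion (dx, dy).
void onMouseDrag(int cellX, int cellY, float dx, float dy, bool leftButton,
                 std::vector<float>& dens,
                 std::vector<float>& u, std::vector<float>& v) {
    if (leftButton) {
        dens[IX(cellX, cellY)] += 100.0f;   // inject dense fluid
    } else {
        u[IX(cellX, cellY)] += 5.0f * dx;   // nudge the velocity vector field
        v[IX(cellX, cellY)] += 5.0f * dy;
    }
}
```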

Besmoke also listens for sound input to introduce new sources of dense fluid. “Thumps” (low-frequency sounds, like a song’s thumping bass) cause dense fluid to be injected into the very middle of the screen. Loud sounds in any frequency band cause the emitter to eject dense fluid. The emitter is a point that moves clockwise around the perimeter of the screen at a fixed speed. As you can see from the video, loud low sounds trigger both behaviors.
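
One plausible way to implement that emitter is to advance a single phase value at a fixed rate and unfold it onto the four screen edges in clockwise order. This is a guess at the mechanism described above; the rate constant and helper are illustrative.

```cpp
#include <cmath>

static int edgeCell(float f) { return 1 + (int)(f * (N - 1)); } // map [0,1] to cells 1..N

struct Emitter {
    float phase = 0.0f;                     // position along the perimeter, in [0, 1)

    // Advance clockwise and report the border cell to emit from.
    void step(float dt, int& x, int& y) {
        phase = std::fmod(phase + 0.05f * dt, 1.0f);
        float s = phase * 4.0f;             // four edges, one unit of s each
        if      (s < 1.0f) { x = edgeCell(s);        y = 1; }   // top edge, left to right
        else if (s < 2.0f) { x = N; y = edgeCell(s - 1.0f); }   // right edge, top to bottom
        else if (s < 3.0f) { x = edgeCell(3.0f - s); y = N; }   // bottom edge, right to left
        else               { x = 1; y = edgeCell(4.0f - s); }   // left edge, bottom to top
    }
};
```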

Besmoke is also accelerometer-aware. In the video, you can see I’m using my iPhone to “change the gravity” in the simulation. I can hook any accelerometer up to this system, of course. The iPhone was a convenient source of data.
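
“Changing the gravity” can be as simple as applying the latest accelerometer reading as a uniform body force across the velocity field each frame. This sketch is an assumption about the mechanism, not the actual code; the OSC plumbing that delivers (ax, ay) is shown further below.

```cpp
// Add the accelerometer vector as a uniform body force over the grid.
void applyGravity(std::vector<float>& u, std::vector<float>& v,
                  float ax, float ay, float dt) {
    for (int i = 1; i <= N; ++i)
        for (int j = 1; j <= N; ++j) {
            u[IX(i,j)] += ax * dt;          // tilt left/right pushes fluid sideways
            v[IX(i,j)] += ay * dt;          // tilt toward/away pushes fluid vertically
        }
}
```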

I’ve connected the OCZ NIA as well, allowing me to set the overall agitation level of the fluid based on the alpha wave, beta wave, or EMG output of that device. In this video, you can see how my “agitation level” affects the fluid characteristics.
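
One possible reading of “agitation level” is that the normalized NIA signal (alpha, beta, or EMG, scaled to [0, 1]) sets the amplitude of small random kicks added to the velocity field each frame. That interpretation is entirely my assumption.

```cpp
#include <random>

// Stir the velocity field with noise proportional to the NIA reading.
void agitate(std::vector<float>& u, std::vector<float>& v, float level) {
    static std::mt19937 rng{42};
    std::uniform_real_distribution<float> kick(-1.0f, 1.0f);
    for (int i = 1; i <= N; ++i)
        for (int j = 1; j <= N; ++j) {
            u[IX(i,j)] += level * 0.1f * kick(rng);
            v[IX(i,j)] += level * 0.1f * kick(rng);
        }
}
```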

I can also use multitouch input as a substitute for the mouse, using multiple fingers to “agitate” the fluid simulation by altering the velocity vector field.

Here are some technical details:

- The simulation is based on the work of Jos Stam; his paper is a great read. I’m simulating a 128×128 cell grid.
- The graphics are rendered using OpenGL.
- I’m using OpenCV internally to represent the density and velocity vector fields. I’m not actually doing any computer vision here, but OpenCV is good at manipulating large matrices.
- I’m using the ChucK audio programming language for sound analysis. There are two shreds running, one for each of the sound behaviors described above.
- I’m running OSCemote on the iPhone to capture accelerometer and multitouch input. OSCemote is awesome, and I use it to configure and control most of my new projects.
- The components communicate using Open Sound Control. The multitouch events are represented using TUIO, so you should be able to run this on any touchlib/reactable device.
- I’m using liblo in C++ to receive OSC events; a minimal receiver is sketched after this list.
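
All of those OSC streams land in the C++ process through liblo. Here is a minimal sketch of such a receiver. The port number and the accelerometer address are assumptions on my part (though /tuio/2Dcur is the standard TUIO 2D cursor profile), so treat it as illustrative rather than ShadowSmoke’s actual code.

```cpp
#include <lo/lo.h>
#include <cstdio>
#include <cstring>

// Called for every /tuio/2Dcur message; "set" messages carry cursor state.
int tuio_handler(const char* path, const char* types, lo_arg** argv,
                 int argc, lo_message msg, void* user) {
    if (argc >= 4 && std::strcmp(&argv[0]->s, "set") == 0) {
        float x = argv[2]->f, y = argv[3]->f;   // normalized cursor position
        std::printf("cursor at %.3f, %.3f\n", x, y);
    }
    return 0;
}

// Hypothetical accelerometer address; three floats for the gravity vector.
int accel_handler(const char* path, const char* types, lo_arg** argv,
                  int argc, lo_message msg, void* user) {
    std::printf("gravity = (%f, %f, %f)\n", argv[0]->f, argv[1]->f, argv[2]->f);
    return 0;
}

int main() {
    lo_server_thread st = lo_server_thread_new("3333", nullptr);
    lo_server_thread_add_method(st, "/tuio/2Dcur", nullptr, tuio_handler, nullptr);
    lo_server_thread_add_method(st, "/accelerometer", "fff", accel_handler, nullptr);
    lo_server_thread_start(st);
    std::getchar();                             // serve until a key is pressed
    lo_server_thread_free(st);
    return 0;
}
```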