More Multi-Scale Things

December 20th, 2019

I’ve been continuing my experiments with generative imagery based on Jonathan McCabe’s multi-scale work. The deeper I get into the guts of how these pieces function, and the further I stray from the original algorithm, the weirder things have gotten.

And, I’ve again been collaborating with Anaccordion to provide hypnotic music for the pieces.

Here are a few favorites:

More Slit-scans

September 30th, 2019

Here are some more of my experiments with slit-scan image processing…

And a little bit of info on the process:

My earlier slit-scan methodology, seen in the Driving Things post, used a relatively simple GPU-driven setup in TouchDesigner:

These new videos use a more complex CPU-driven setup in Houdini:

While slower to process, the new method has the advantage of being able to process much larger frame counts without worrying about maxing out the graphics card.

All of these Broadway videos started with some slow motion footage like this:

And then the process swaps either the width or height pixel axis of the image with the time axis. In these cases it’s always the width getting swapped with time, so the more frames that were originally captured, the wider the final video is:
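The axis swap itself is simple to describe in code. This is not my Houdini setup, just a minimal NumPy sketch of the same idea, assuming the footage is already loaded as a (time, height, width) array:

```python
import numpy as np

def slit_scan_swap_width_time(frames):
    """Swap a video's width axis with its time axis.

    frames: array of shape (time, height, width), single channel for simplicity.
    Returns shape (width, height, time): the output has one frame per column
    of the input, and each output frame is as wide as the input had frames.
    """
    return np.transpose(frames, (2, 1, 0))

# Tiny demo: 8 frames of 4x6 video become 6 frames of 4x8 video.
video = np.random.rand(8, 4, 6)
scanned = slit_scan_swap_width_time(video)
print(scanned.shape)  # (6, 4, 8)
```

This is why the frame count matters so much here: capture more frames and the output frames get wider, which is exactly the behavior described above.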

Multi-Scale Things

January 31st, 2019

For a while I’ve been looking into ways to digitally create the equivalent of direct filmmaking techniques in the tradition of Stan Brakhage and others. While not necessarily trying to exactly match the look of the scratched or painted film in those works, I am trying to match their amount of visual complexity.

Check out this film by Josh Lewis for example:

There is complex fine detail that is unique to each frame, and there is also a larger compositional structure that’s cohesive from frame to frame.


So a couple years ago I decided to try implementing Jonathan McCabe’s multi-scale Turing pattern algorithm. Jonathan does a lot of amazing algorithmic work like this:

In particular, his multi-scale Turing patterns had the sort of visual complexity I was looking for. He has a brief pdf explaining how they work:

jonathanmccabe.com/Cyclic_Symmetric_Multi-Scale_Turing_Patterns.pdf

I used the visual effects software SideFX Houdini to implement a version of McCabe’s algorithm and was able to get something with a fairly similar look:

The above is an assembly of distinct one-frame simulations, each around 40 steps, created with initial conditions and sim parameters that change over time. In some segments the input seed noise is animated; in other segments the seed noise stays the same while the inhibitor and activator radii are animated by noise functions. There are also some tiny pill-shaped features that I think are artifacts of the underlying grid resolution.
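For readers who want the gist of the algorithm without the pdf: each scale has an activator average (small radius) and an inhibitor average (larger radius), and at every cell the scale with the least variation between the two nudges the value up or down. Here’s a rough NumPy sketch of that core loop — not my Houdini version, and the `blur` helper is a crude stand-in for the proper large-radius averages McCabe uses:

```python
import numpy as np

def blur(grid, radius):
    """Crude approximate blur: repeated 4-neighbour averaging.
    A stand-in for the true circular-radius averages in McCabe's pdf."""
    out = grid
    for _ in range(radius):
        out = (out + np.roll(out, 1, 0) + np.roll(out, -1, 0)
                   + np.roll(out, 1, 1) + np.roll(out, -1, 1)) / 5.0
    return out

def turing_step(grid, scales, rate=0.05):
    """One step of a multi-scale Turing pattern.
    scales: list of (activator_radius, inhibitor_radius) pairs."""
    best_var = np.full(grid.shape, np.inf)
    delta = np.zeros(grid.shape)
    for act_r, inh_r in scales:
        act = blur(grid, act_r)
        inh = blur(grid, inh_r)
        var = np.abs(act - inh)
        # At each cell, the scale with the smallest variation wins.
        mask = var < best_var
        best_var = np.where(mask, var, best_var)
        delta = np.where(mask, np.where(act > inh, rate, -rate), delta)
    grid = grid + delta
    # Renormalize to [-1, 1] so the pattern never saturates.
    return 2.0 * (grid - grid.min()) / (grid.max() - grid.min() + 1e-9) - 1.0

# Seed noise, then roughly 40 steps as in the frames described above.
rng = np.random.default_rng(0)
grid = rng.uniform(-1, 1, (64, 64))
for _ in range(40):
    grid = turing_step(grid, scales=[(1, 2), (2, 4), (4, 8)])
```

Animating the `(activator, inhibitor)` radius pairs over time is what produces the shifting segments described above; animating the seed noise instead changes the fine detail while the radii hold the overall structure.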

After that first series of tests I decided to re-implement the algorithm from scratch to give me more flexibility in controlling the look. While I had originally coded the algorithm in OpenCL in Houdini, the new version is a node-based combination of Houdini SOPs and VEX code. The node-based implementation makes it very easy to make experimental changes to the algorithm. Here’s one of my first experiments with the new implementation:

Here’s another; this one used a wavy noise pattern as its initial condition:

More to come!

Rye Thing

June 10th, 2018

More fun with macro photography…

Danish rye bread as seen through the Nanoha 5x ultra macro lens:

Slice Thing

June 1st, 2018

I’ve got a new rather short short called Slice Thing that will be showing June 30 at Rooftop Films as part of their New York Nonfiction program.  And the screening will be in Greenwood Cemetery!

It’s an abstracted vision of New York City pizza in the form of frenetic pulsing macro photography, made using the Nanoha 5x micro 4/3 lens again.  Aka 2018: A Cheese Odyssey.