RandyCamp Clipcycle:
A Portable Visual System
The Shift
This started as a simple question: can I make the RandyCamp visuals feel alive without manually VJing every cut? The first answer was a clip cycler. That was useful, but too thin. Once the system started blending clips, layering stills, randomizing transitions, and pulling from dedicated RandyCamp media folders, it stopped feeling like a sketch.
The project is now a build. It has a runtime, a media pipeline, a local launcher, a preview path, and a clear next form: an installation-grade visual system that can run during RandyCamp with a physical or simple digital interface.
What Exists Now
The runtime is Clipcycle Clarence, a Python application built with PySide6 and python-mpv. It uses a single fullscreen window by default. Playback comes from libmpv, not OpenCV, VLC bindings, or a web player.
The visual engine uses two mpv players inside one OpenGL compositor. One deck is active. The other can become a ghost layer, so the image is not just one clip after another. The hidden deck can affect the visible deck through blend modes, shader overlays, and transitions before the system resolves into the next clip.
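The ghost-layer idea is easiest to see as per-channel math. The sketch below is a simplified illustration of two blend steps, not the actual shader code: a linear crossfade between the active deck and the ghost deck, and a clamped additive blend. The function names and the 0.0–1.0 channel range are assumptions for this example.

```python
def crossfade(active: float, ghost: float, t: float) -> float:
    """Mix one color channel of the active deck with the ghost deck.
    t = 0.0 shows only the active deck; t = 1.0 shows only the ghost."""
    return active * (1.0 - t) + ghost * t

def blend_add(active: float, ghost: float) -> float:
    """Additive blend, clamped to the displayable [0, 1] range."""
    return min(1.0, active + ghost)
```

In the real compositor this happens per pixel on the GPU, but the resolve from "two decks visible" back to "one clip playing" is just `t` animating from some mid value back to 0.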
The RandyCamp config points at /home/james/Clips/randycamp-visuals. The current local folder has 81 playable media files: 61 videos and 20 images. There are desktop launchers for fullscreen use and for a 1280 by 720 preview window.
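The media pipeline starts with a folder scan that splits files into videos and images. A minimal stdlib sketch of that step might look like the following; the extension sets and the `scan_media` helper are assumptions for illustration, not the real scanner:

```python
from pathlib import Path

# Illustrative extension sets; the real pipeline may accept more formats.
VIDEO_EXTS = {".mp4", ".mkv", ".mov", ".webm", ".avi"}
IMAGE_EXTS = {".png", ".jpg", ".jpeg", ".gif", ".webp"}

def scan_media(folder: str) -> dict:
    """Sort a media folder's files into playable videos and images."""
    videos, images = [], []
    for path in sorted(Path(folder).iterdir()):
        ext = path.suffix.lower()
        if ext in VIDEO_EXTS:
            videos.append(path)
        elif ext in IMAGE_EXTS:
            images.append(path)
    return {"videos": videos, "images": images}
```

Run against the RandyCamp folder, a scan like this is what produces the 61-video / 20-image split reported above.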
The Visual Language
The system is intentionally unstable in a controlled way. It can randomize segment length, playback speed, reverse playback, orientation, transition style, blend mode, shader overlay, and still-image overlays. Stills are not treated as dead slides. They can become moving transparent texture layers with their own opacity, drift, tile count, rotation, and fade timing.
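The controlled randomness described above amounts to rolling a bundle of parameters per segment. Here is a minimal sketch of that idea; every range, weight, and option list below is an invented example, not the engine's actual tuning:

```python
import random
from dataclasses import dataclass

# Illustrative option lists, not the engine's real inventory.
TRANSITIONS = ["cut", "crossfade", "wipe", "glitch"]
BLEND_MODES = ["normal", "add", "multiply", "screen"]

@dataclass
class Segment:
    length_s: float    # how long this clip segment plays
    speed: float       # playback rate multiplier
    reverse: bool      # play the segment backwards
    transition: str    # how the system leaves this segment
    blend: str         # blend mode for the ghost layer

def roll_segment(rng: random.Random) -> Segment:
    """Draw one randomized segment description."""
    return Segment(
        length_s=rng.uniform(3.0, 20.0),
        speed=rng.choice([0.5, 1.0, 1.0, 2.0]),  # biased toward normal speed
        reverse=rng.random() < 0.2,              # occasional reversal
        transition=rng.choice(TRANSITIONS),
        blend=rng.choice(BLEND_MODES),
    )
```

Passing in a `random.Random` instance rather than using the global RNG keeps runs reproducible when debugging, which matters once the system is misbehaving at an event.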
The generated RandyCamp material includes ASCII monkeys, psychedelic loops, transparent overlays, and a separate Art Institute of Chicago prototype that cycles 102 videos against public-domain artwork. That ArtIC version is not the final installation. It is a proof that the visuals can hold together as a live collage system instead of a playlist.
The Interface Problem
The current controls are still too developer-shaped: keyboard commands, config files, launchers, and timers. That is fine for testing. It is not the right interface for RandyCamp.
The next design problem is deciding what the operator should actually control. Not everything needs a knob. The useful controls are probably mode, intensity, category, freeze/advance, transition behavior, and maybe a panic/reset action. The interface should shape the performance without forcing someone to manage the whole engine manually.
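One way to keep that surface small is to define the operator actions as a closed set and treat keyboard, MIDI, GPIO, or a web UI as interchangeable front ends that map into it. The action names and key bindings below are hypothetical, offered only to show the shape:

```python
from enum import Enum, auto

class Action(Enum):
    MODE_NEXT = auto()
    INTENSITY_UP = auto()
    INTENSITY_DOWN = auto()
    CATEGORY_NEXT = auto()
    FREEZE_TOGGLE = auto()
    ADVANCE = auto()
    PANIC_RESET = auto()

# Hypothetical keyboard layer; a MIDI or GPIO layer would be
# another dict mapping its raw events into the same Action set.
KEYMAP = {
    "m": Action.MODE_NEXT,
    "+": Action.INTENSITY_UP,
    "-": Action.INTENSITY_DOWN,
    "c": Action.CATEGORY_NEXT,
    "f": Action.FREEZE_TOGGLE,
    "space": Action.ADVANCE,
    "escape": Action.PANIC_RESET,
}

def dispatch(key: str):
    """Translate a raw input event into an operator action, or None."""
    return KEYMAP.get(key)
```

The engine only ever sees `Action` values, so swapping the physical interface later does not touch playback code.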
This is where the project becomes UX work, not just visual code. The interface has to survive noise, darkness, distraction, and people who should not need to understand the internals to make the thing feel alive.
Hardware Direction
The likely target is a Raspberry Pi installation build. That is not done yet. The current verified runtime is on my Ubuntu workstation. The Pi version will need its own performance audit, hardware decode check, launch-on-boot path, controller mapping, and failure recovery.
That hardware constraint is useful. It forces the project away from desktop demo thinking and toward a box that can be turned on, left alone, and still make sense. If it cannot run reliably at the event, the visuals are not finished.
What Comes Next
1. Control surface
Define the small set of controls that matter during a live installation and map them to keyboard, MIDI, GPIO, web UI, or another simple interface.
2. Raspberry Pi test
Prove whether libmpv, PySide6, OpenGL compositing, and the current RandyCamp media can run acceptably on the target hardware.
3. Installation mode
Add boot behavior, safe defaults, crash recovery, and a way to restart or reconfigure without turning the installation into a laptop session.
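The crash-recovery piece of installation mode can be as simple as a supervisor loop with capped exponential backoff. This is a sketch under assumptions: the `clipcycle.py` entry point and its flag are placeholders, and a real deployment might use systemd instead of a Python wrapper.

```python
import subprocess
import time

# Hypothetical launch command; the real entry point may differ.
LAUNCH_CMD = ["python3", "clipcycle.py", "--fullscreen"]

def restart_delay(attempt: int, base: float = 1.0, cap: float = 60.0) -> float:
    """Exponential backoff between crash restarts, capped so the
    installation never stays dark for more than a minute."""
    return min(cap, base * (2.0 ** attempt))

def supervise():
    """Relaunch the runtime whenever it exits abnormally."""
    attempt = 0
    while True:
        result = subprocess.run(LAUNCH_CMD)
        if result.returncode == 0:
            break  # clean shutdown requested, stop supervising
        time.sleep(restart_delay(attempt))
        attempt += 1
```

The backoff matters: restarting instantly in a tight loop can hide a broken config behind a flickering screen, while a capped delay gives the box a chance to recover without ever looking permanently dead.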