The completed build.
Printed with matte black PLA & wood PLA.
While browsing Reddit, I discovered the r/wigglegram community, where enthusiasts share and discuss equipment for creating animated GIFs with wiggle stereoscopy. Multiple images are captured simultaneously from slightly offset positions, then combined into a GIF that produces a 3D effect as the frames play in a loop. The technique works with digital or analog cameras, but the community primarily focuses on vintage 3D models like the Nimslo and Nishika cameras from the 80s and 90s. After weeks of designing, developing, experimenting, testing software limits, and selecting the right hardware, I’ve built my own version: a camera that captures multiple images simultaneously using a Raspberry Pi 4 and three USB webcam modules.
I had four old 0.3 MP USB webcams, along with Raspberry Pi 2, 3, and 4 models, so I started experimenting with the Pi 2 to measure its image-capture performance. One common issue with digital image capture is the delay when using multiple cameras. I also found that Windows could only use one webcam at a time, while Android could use two webcams simultaneously but not a third; even a popular multi-webcam Android app that supposedly supported multiple webcams had this limitation.
Even with an Android setup, the results were limited to devices like the Nintendo 3DS or 2DS, which already capture two 0.3 MP images at once, making them capable of creating wigglegrams. To move beyond these limitations, I decided to capture images from each camera in parallel processes. While delays were still present, a Raspberry Pi 400 reduced them significantly compared to the Pi 2.
Here’s a summary of those results:
Raspberry Pi 2: Unstable capture with a ~16,500 ms delay using 3 cameras.
Raspberry Pi 3: Unstable capture with a ~1,800 ms delay using 3 cameras.
Raspberry Pi 400 (4GB): Stable capture with a ~1,200 ms delay using 4 cameras.
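The parallel-capture idea can be sketched roughly like this, assuming OpenCV (`cv2`) for the webcams and one process per camera. The device indices, the timing harness, and all function names are my own illustration, not the actual script:

```python
import time
from multiprocessing import Process, Queue

def _worker(grab, cam_id, out_queue):
    """Child process: record when the grab starts, then grab one frame."""
    fired_at = time.monotonic()
    out_queue.put((cam_id, fired_at, grab(cam_id)))

def grab_webcam(index):
    """Grab a single frame from /dev/video<index> via OpenCV."""
    import cv2  # imported in the child so each process owns its own handle
    cap = cv2.VideoCapture(index)
    ok, frame = cap.read()
    cap.release()
    return frame if ok else None

def capture_all(grab, cam_ids):
    """Launch one process per camera so the exposures start near-simultaneously."""
    queue = Queue()
    procs = [Process(target=_worker, args=(grab, cid, queue)) for cid in cam_ids]
    for p in procs:
        p.start()
    results = [queue.get() for _ in cam_ids]
    for p in procs:
        p.join()
    results.sort(key=lambda r: r[0])  # order left-to-right by device id
    skew_ms = (max(r[1] for r in results) - min(r[1] for r in results)) * 1000
    return results, skew_ms
```

With the cameras plugged in, `capture_all(grab_webcam, (0, 2, 4))` would return the `(id, timestamp, frame)` tuples plus the worst-case start skew in milliseconds, a number comparable to the delays listed above.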
The 1.2-second delay was still significant, so going forward I decided to continue with three cameras, as this would still be an upgrade over a Nintendo 3DS.
One issue I encountered was image cropping. These USB cameras had manual focus (a rotating plastic cap in front of the lens), which gave a small range of sharp focus that varied from one device to another. Upgrading to a new set of webcams could address this inconsistency and improve the resolution. However, I also had to consider that higher resolutions might introduce longer delays due to data-transfer limits. The Raspberry Pi 4 has two USB 3.0 ports and two USB 2.0 ports, meaning one camera would operate at slower data-transfer speeds.
I tested using a Raspberry Pi 400, which is a Pi 4 integrated into a keyboard. While it worked well for testing, its lack of portability made it impractical for this project. I had to choose between the Raspberry Pi 4 Model B and the Pi 5. Ultimately, I settled on the Pi 4 because, from what I gathered online, the Pi 5’s portable battery requirements were more challenging to manage. (And the fact that I found brand-new Pi 4 units for under half price :D)
Created using the first revision of the setup.
After learning to avoid manual-focus lenses like those from Revision A, I had to choose between auto-focus and fixed-focus USB webcams. Since the goal was to capture similar images separated by small distances, I decided that fixed-focus lenses would be the better option, in case autofocus made differing focus adjustments between the lenses.
With the basic functionality in place, I worked on refining the software to improve performance and image quality. Once the software was optimised, I designed the button functionality and LED purposes for the setup and designed an enclosure to house everything, including a small power bank, using Sketchup. Although some aspects of the enclosure design were later adjusted, it successfully made the setup portable. Taking more pictures with the completed setup out in the real world exposed additional issues, which I addressed in Revision C.
To enhance usability, I added a basic Python Flask web server to preview and download images after capture, once the camera was connected to my home network. While I considered including a screen or tethering the setup to my phone display, this feature remains absent for now.
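A minimal sketch of such a preview server, assuming Flask and a flat directory of stitched JPEGs; the path and naming scheme are hypothetical:

```python
from pathlib import Path
from flask import Flask, send_from_directory

IMAGE_DIR = Path("/home/pi/captures")  # hypothetical location of saved sets
app = Flask(__name__)

@app.route("/")
def index():
    # One clickable preview per capture set, newest first
    sets = sorted(IMAGE_DIR.glob("*.jpg"), reverse=True) if IMAGE_DIR.exists() else []
    links = "".join(
        f'<p><a href="/images/{p.name}"><img src="/images/{p.name}" width="320"></a></p>'
        for p in sets
    )
    return f"<h1>Captured sets</h1>{links or '<p>No images yet.</p>'}"

@app.route("/images/<path:name>")
def images(name):
    # Serving from a fixed directory also makes bulk download via wget easy
    return send_from_directory(IMAGE_DIR, name)
```

Running `app.run(host="0.0.0.0", port=8000)` makes the page reachable from any device on the home network.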
Taking the camera to a museum revealed several issues. First, most images were unusable due to poor and inconsistent lighting—one lens would capture an image that was too bright, while another was too dark. Outside the museum, I attempted to photograph a subject in front of a tree, but all three captured images turned out completely white. Fortunately, this issue was resolved in the final revision.
Second, I was taking pictures blindly without a way to preview them, which often led to poor framing. I couldn’t check how the images turned out until I transferred them to a computer via scp or the Flask web server.
Third, the spacing between the lenses was too close, reducing the effectiveness of the 3D effect. Spacing the lenses further apart would significantly improve the results (similar to how our eyes are spaced apart).
Lastly, while the setup somewhat resembles a camera shape, it is still too bulky for my liking, leaving room for further refinement in another revision.
Upgraded to 3x fixed focus 2MP USB webcams (https://s.click.aliexpress.com/e/_m01QQuJ - variation: 90 degree fixed focus)
Added support for a USB power bank (Momax 1-Power Mini 5000mAh)
Designed & iterated through 3D printed housing with various case features
4x LEDs, 2x Push switches
Raspberry Pi 4 4GB
Python Flask web server to view image sets and download sets in bulk
LEDs:
flash (white)
status (blue)
success (green)
failure (red)
Buttons:
Shutter button
Secondary button:
Push to toggle flash
Hold to turn pi off
Turn the Pi on when it's off
Inactivity shutdown
Turn flash on/off (White LED at front of camera, no resistor)
Auckland Museum 1
Auckland Museum 2
Testing
Addressing the issues from Revision B...
The first issue, unpredictable lighting, was caused by the camera modules retaining exposure settings from the previous environment. For instance, moving from a dark museum to a bright outdoor area resulted in overexposed images, such as completely white frames. I found that each camera takes around 5 frames to fully adjust, and it retains its previous levels, no matter how long ago the last image was captured, until it is rebooted. Initially, I experimented with dumping multiple frames to correct this, which solved the problem but introduced additional delays. To address this, I implemented a "camera warm-up" feature activated when the shutter button was pressed. However, this process was slower than expected, taking 10+ seconds, and wasn't always necessary.
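The frame-dumping idea can be sketched as follows, assuming an OpenCV-style camera object (anything with `.grab()` and `.read()`, e.g. `cv2.VideoCapture`). The count of five discarded frames comes from the testing described above; the function itself is my illustration:

```python
WARMUP_FRAMES = 5  # observed frames needed for auto-exposure to settle

def warmed_up_capture(camera, warmup_frames=WARMUP_FRAMES):
    """Discard a few frames so auto-exposure settles, then keep one."""
    for _ in range(warmup_frames):
        camera.grab()  # grab-and-drop is cheaper than a full read()
    ok, frame = camera.read()  # this frame has (mostly) settled exposure
    return frame if ok else None
```

The trade-off is exactly the one described: each discarded grab adds latency, which is why a separate warm-up step ended up being preferable to dumping frames on every shutter press.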
My testing indicated that the "flash" function added little value, so I removed it. To simplify, I repurposed the existing "flash on/off" button as a dedicated "warm-up" button. This button also doubles as the Raspberry Pi's on/off switch: holding it for three seconds shuts the Pi down, and because it's connected to GPIO pin #3, it can also power the Pi back on. This revision streamlined the setup and reduced the overcomplication of LEDs and buttons.
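A sketch of this single-button behaviour, assuming the gpiozero library: a tap starts a warm-up, a three-second hold shuts the Pi down, and (because BCM pin 3 is the wake pin) the same switch powers it back on when halted. `start_warmup` is a hypothetical hook into the capture script:

```python
from subprocess import run

HOLD_SECONDS = 3  # hold this long to shut down; a tap starts a warm-up

def dispatch(event, start_warmup=lambda: None):
    """Map a button event to its action; returns the action name."""
    if event == "held":
        run(["sudo", "shutdown", "-h", "now"])
        return "shutdown"
    start_warmup()
    return "warmup"

def main():
    from signal import pause
    from gpiozero import Button  # assumed; not needed for the logic above

    button = Button(3, hold_time=HOLD_SECONDS)  # BCM 3 doubles as the wake pin
    held = {"flag": False}

    def on_held():
        held["flag"] = True
        dispatch("held")

    def on_released():
        # Only treat it as a tap if the hold callback never fired
        if not held["flag"]:
            dispatch("tap")
        held["flag"] = False

    button.when_held = on_held
    button.when_released = on_released
    pause()
```

Acting on release rather than press is the usual gpiozero pattern for distinguishing a tap from a hold, since `when_pressed` would otherwise fire at the start of every hold as well.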
Adding a screen to the setup wasn't feasible due to the CPU load spikes caused by capturing images, which could hang the script. Instead, I integrated Bluetooth functionality to connect with my phone. After an image is captured, the frames are stitched together using ImageMagick, and a prompt appears on my phone to receive the file via Bluetooth. This provided a simple preview to verify whether the images were usable and whether any needed retaking.
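A sketch of that post-capture pipeline: stitch the three frames side by side with ImageMagick, then offer the result to a paired phone over Bluetooth. The phone address, file layout, and the choice of `bluetooth-sendto` are my assumptions, not the exact commands used (on ImageMagick 7 the binary is `magick` rather than `convert`):

```python
import subprocess

PHONE_MAC = "AA:BB:CC:DD:EE:FF"  # hypothetical paired phone

def stitch_command(frame_paths, out_path):
    """ImageMagick: append the frames left-to-right into one preview image."""
    return ["convert", *frame_paths, "+append", out_path]

def send_command(path, device=PHONE_MAC):
    """Offer the stitched preview over Bluetooth (the phone prompts to accept)."""
    return ["bluetooth-sendto", f"--device={device}", path]

def preview_on_phone(frame_paths, out_path="/tmp/preview.jpg"):
    subprocess.run(stitch_command(frame_paths, out_path), check=True)
    subprocess.run(send_command(out_path), check=True)
```

Separating command construction from execution keeps the stitching step easy to test and easy to swap for a different viewer later.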
Bluetooth temporarily interrupts the camera's operation, and it wasn't always needed. To address this, I adjusted the secondary button's functionality so that holding it for three seconds toggles Bluetooth on and off, which is simpler than disabling Bluetooth on my phone or losing the functionality altogether.
The third and fourth issues, insufficient spacing between the lenses and the bulkiness, still needed to be addressed. The second revision had reused the 3D-printed frame from the first version as a base, but I needed to increase the spacing between the cameras (from 5mm to 30mm) while keeping the setup as compact as possible. This gave me the opportunity to rethink the layout without being constrained by the initial lens arrangement. I reduced the enclosure from three printed pieces to two; although the assembly process presented new challenges, they were resolved after a few printed revisions.
Final Build - Front
Final Build - Back 1
Final Build - Back 2
In an attempt to make the setup more portable, I explored splitting the camera into two parts: one enclosure housing the Raspberry Pi 4 and the battery, and a second enclosure containing a 4-port USB 3.0 hub connected via a USB 3.0 cable. This hub would connect three cameras and a Raspberry Pi Pico. The Pico was programmed to receive serial data from the Pi 4 and function as a USB HID device, enabling two-way communication.
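For the two-way link, the Pi 4 and the Pico need an agreed framing for commands over serial. Here is a minimal sketch using newline-delimited ASCII frames with an XOR checksum; the frame format and command names are hypothetical, not the protocol actually used:

```python
def encode(command):
    """Frame a command as ASCII: '<cmd>*<xor-checksum-hex>\\n'."""
    checksum = 0
    for byte in command.encode("ascii"):
        checksum ^= byte
    return f"{command}*{checksum:02X}\n".encode("ascii")

def decode(line):
    """Return the command if the frame and checksum are valid, else None."""
    try:
        body, checksum = line.decode("ascii").rstrip("\n").rsplit("*", 1)
        expected = 0
        for byte in body.encode("ascii"):
            expected ^= byte
        return body if int(checksum, 16) == expected else None
    except (ValueError, UnicodeDecodeError):
        return None
```

With something like this, the Pi 4 could write `encode("SHUTTER")` to the serial port and the Pico could safely ignore any corrupted or partial lines it reads back.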
The concept allowed the Pi 4 and battery to fit into a pocket or bag while keeping the cameras separate. Despite efforts to minimise bulk, such as creatively splicing USB wiring, soldering directly onto the USB pins 😉, and removing the hub's plastic enclosure, the result was still too bulky. Ultimately, I returned to the single-enclosure design, which balanced portability and functionality more effectively. Despite the bulkiness of the split approach, it was interesting to investigate how serial communication over USB works in such a setup.
Finally, here are a few wigglegrams captured using the last revision (Revision C).
Waikowhai
"Track closed due to slips"
"BEST KEPT STREET AWARD 1977-78"
The way the images are edited and the choice of focal point significantly affect the final outcome. The frames can be animated using tools like Photoshop, After Effects, GIMP, or Wigglegrams.com.
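As a rough illustration of the final assembly step, here is how a three-frame set could be turned into a bouncing wigglegram GIF with Pillow; the filenames, frame timing, and bounce ordering are my assumptions, not what the tools above do internally:

```python
from PIL import Image

def make_wigglegram(frame_paths, out_path, duration_ms=90):
    """Save frames as a forward-then-backward loop (1-2-3-2 for three frames)."""
    frames = [Image.open(p) for p in frame_paths]
    bounce = frames + frames[-2:0:-1]  # the back pass skips both end frames
    bounce[0].save(
        out_path,
        save_all=True,
        append_images=bounce[1:],
        duration=duration_ms,  # milliseconds per frame
        loop=0,                # loop forever
    )
```

The bounce ordering avoids the jarring jump from the last frame back to the first, which is why the 3D effect reads as a smooth wiggle rather than a stutter.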