The Rhizome

Yes, at some point one has to admit that they have a thing for something when they keep doing said thing over and over. In my case, it is the fact that I keep ending up making local multiplayer games. I genuinely love them to bits <3 and can’t seem to stop making them! So when the time came for me to submit a proposal for a project at Srishti, I did just that 🙂 a local multiplayer game.

The Rhizome was one of the installations that formed part of the Entanglement: A Dance between Art & Science, shown at the Serendipity Arts Festival in Goa 2016 as well as ITBiz in Bengaluru 2016. The Entanglement was curated by the Center for Experimental Media and Arts (CEMA), which is part of the Srishti Institute of Art, Design and Technology, in collaboration with Science Gallery International.

Rhizome was a local multiplayer game with some specific goals in mind – it had to be easy to play, it had to work as a public installation and, as a personal challenge, it had to present a seamless interface to the player.

Of Low Energy Beacons and Triangulation

The first iteration of the project focused on using low energy Bluetooth beacons as a way of tracking users in a space. The initial idea was to place 4 sensors around a space and track a beacon that the user carried, possibly in the form of a wrist band or something else given to them when they entered the space. Since my goal was to keep things as modular as possible, this iteration also aimed to do all the tracking and rendering on a Raspberry Pi. I was familiar with SFML (Simple and Fast Multimedia Library – a C++ library that handles windowing, graphics, audio and input), and there was a port of SFML which ran on Raspbian, so this was my first choice. A lot of time was spent getting SFML up and running well on the Pi, but the major setback came when we analyzed the data we got from the beacons – it was just not good enough. The beacons behaved sporadically and the readings varied drastically; accuracy went out the window. After weeks of trial and error, the plan to triangulate the beacon position was ditched. My fallback option was to do image processing – which I really didn’t want to get into at that point.
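For the curious, this is roughly the math the beacon approach relied on – a minimal sketch, not code from the project, with the path-loss constants picked purely for illustration. The distance estimate comes from the standard log-distance path-loss model, and the position from subtracting the circle equations of three sensors:

```cpp
// Illustrative sketch of RSSI-based trilateration (constants are made up).
#include <cmath>
#include <cstdio>

// Log-distance path-loss model: convert an RSSI reading (dBm) into an
// approximate distance in metres. measuredPower is the expected RSSI at 1 m,
// n is the environment's path-loss exponent (~2 in free space).
double rssiToDistance(double rssi, double measuredPower = -59.0, double n = 2.0)
{
    return std::pow(10.0, (measuredPower - rssi) / (10.0 * n));
}

struct Point { double x, y; };

// Trilaterate a 2D position from three fixed sensors and their distance
// estimates by subtracting the circle equations and solving the 2x2 system.
Point trilaterate(Point p1, double d1, Point p2, double d2, Point p3, double d3)
{
    double a1 = 2 * (p2.x - p1.x), b1 = 2 * (p2.y - p1.y);
    double c1 = d1*d1 - d2*d2 - p1.x*p1.x + p2.x*p2.x - p1.y*p1.y + p2.y*p2.y;
    double a2 = 2 * (p3.x - p1.x), b2 = 2 * (p3.y - p1.y);
    double c2 = d1*d1 - d3*d3 - p1.x*p1.x + p3.x*p3.x - p1.y*p1.y + p3.y*p3.y;
    double det = a1 * b2 - a2 * b1;
    return { (c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det };
}

int main()
{
    // Three sensors at the corners of a 6 m x 6 m room, plus noisy RSSI readings.
    Point pos = trilaterate({0, 0}, rssiToDistance(-72),
                            {6, 0}, rssiToDistance(-68),
                            {0, 6}, rssiToDistance(-75));
    std::printf("estimated position: %.2f, %.2f\n", pos.x, pos.y);
}
```

The catch is the exponential in that first function: a few dB of jitter in the RSSI (and we saw a lot more than a few) throws the distance estimate way off, and the triangulated position jumps around with it.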

The one where we dangle expensive cameras from the ceiling.

My approach was simple enough on paper: capture video, track users, get their locations, send the data to the game, and have the game use that data for rendering and gameplay logic. As simple as that sounds, getting each of those components to work without a hitch was the biggest challenge, and I was working against the clock. To make things manageable, the project was split largely into three parts – dealing with the physical space and getting the image, processing the image and transmitting the data, and receiving the data and using it.

Since we had decided to go the image processing route, and given my lack of experience working with SFML on the Pi, I decided to move back to Windows and use a slightly more familiar game development framework, cocos2d-x. What this meant was that I could capture the video, do the image processing and run the game/client on the same computer (somewhat like a large Raspberry Pi, but with a lot more memory and a graphics card). Writing the game/client went much faster than I expected and it was tested early on using controllers. I wrote the game in a way that I could plug the tracking data in once it started coming.
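To give an idea of what “plug the data in later” meant in practice, here is a minimal sketch of that kind of setup – the names are made up for illustration, not from the actual codebase. The gameplay code only ever asks a source for player positions, so a controller-driven stub and the real tracking feed become interchangeable:

```cpp
// Sketch of an input-source abstraction the game loop can swap at startup.
#include <vector>

struct PlayerPos { int id; float x, y; };

class PositionSource {
public:
    virtual ~PositionSource() = default;
    // Called once per frame by the game loop.
    virtual std::vector<PlayerPos> poll() = 0;
};

// Early testing: positions driven by controller input.
class ControllerSource : public PositionSource {
public:
    std::vector<PlayerPos> poll() override {
        // ... read gamepad axes and integrate them into positions ...
        return positions_;
    }
private:
    std::vector<PlayerPos> positions_;
};

// Later: positions coming from the tracking pipeline over the network.
class TrackingSource : public PositionSource {
public:
    std::vector<PlayerPos> poll() override {
        // ... return the latest frame of parsed tracking data ...
        return positions_;
    }
private:
    std::vector<PlayerPos> positions_;
};
```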

In the beginning I used a regular webcam to capture the image of a controlled space, with a solid background, enough light and small objects that I could track. With the help of TSPS (Toolkit for Sensing People in Spaces – http://www.tsps.cc) I was able to calibrate the camera to detect blobs. TSPS thankfully did a lot of the heavy lifting and packaged the data as JSON that it could serve via TCP, websockets and so on. The game client then read this data (I tried both websockets in cocos2d-x and TCP via SDL), parsed it and rendered the game accordingly.
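As a rough idea of what reading that data looks like, here is a sketch of turning one TSPS person message into a screen position using RapidJSON (which ships with cocos2d-x). The field names are from memory of the TSPS JSON output, so treat them as approximate rather than gospel:

```cpp
// Sketch: extract a tracked person's id and centroid from a TSPS JSON message.
#include "rapidjson/document.h"

struct TrackedPerson { int id; float x, y; };

// TSPS centroids are reported relative to the camera frame (0..1),
// so scale them up to the render target.
bool parsePerson(const char* json, float screenW, float screenH, TrackedPerson& out)
{
    rapidjson::Document doc;
    doc.Parse(json);
    if (doc.HasParseError() || !doc.HasMember("id") || !doc.HasMember("centroid"))
        return false;

    out.id = doc["id"].GetInt();
    out.x  = static_cast<float>(doc["centroid"]["x"].GetDouble()) * screenW;
    out.y  = static_cast<float>(doc["centroid"]["y"].GetDouble()) * screenH;
    return true;
}
```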

Cocos2d-x provided an API for websockets and I initially planned to use this. I was eventually able to get the JSON data flowing from TSPS to my game, but there was a strange lag and loss of data in between. To keep things simple, I switched to TCP via SDL_net, got a raw dump of all the data from TSPS, and parsed all of it per frame myself. Here is a video showing that version working on a desktop as a client for TSPS with a webcam hooked on.
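The receiving side boiled down to something like the sketch below – simplified, and assuming newline-delimited messages (the delimiter is an assumption for illustration, not a detail from the original setup): open a TCP connection with SDL_net, drain whatever bytes arrived this frame, and split out complete messages for the parser.

```cpp
// Simplified sketch of the TCP client side: connect to TSPS with SDL_net,
// drain whatever arrived this frame, and hand complete messages to the parser.
#include <SDL_net.h>
#include <string>
#include <vector>

class TspsClient {
public:
    bool connect(const char* host, Uint16 port) {
        if (SDLNet_Init() < 0) return false;
        IPaddress ip;
        if (SDLNet_ResolveHost(&ip, host, port) < 0) return false;
        socket_ = SDLNet_TCP_Open(&ip);
        if (!socket_) return false;
        set_ = SDLNet_AllocSocketSet(1);
        SDLNet_TCP_AddSocket(set_, socket_);
        return true;
    }

    // Called once per game frame: returns every complete newline-terminated
    // message that has arrived since the last call.
    std::vector<std::string> pollMessages() {
        std::vector<std::string> messages;
        while (SDLNet_CheckSockets(set_, 0) > 0 && SDLNet_SocketReady(socket_)) {
            char chunk[4096];
            int n = SDLNet_TCP_Recv(socket_, chunk, sizeof(chunk));
            if (n <= 0) break;                    // connection closed or error
            buffer_.append(chunk, n);
        }
        size_t pos;
        while ((pos = buffer_.find('\n')) != std::string::npos) {
            messages.push_back(buffer_.substr(0, pos));
            buffer_.erase(0, pos + 1);
        }
        return messages;
    }

private:
    TCPsocket socket_ = nullptr;
    SDLNet_SocketSet set_ = nullptr;
    std::string buffer_;
};
```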

Camera woes

One of the biggest problems I faced during the whole process was getting a camera to work the way I wanted it to. The problem was that we had a large room and a fixed ceiling height. At such a short distance, the image from the webcam covered only a small box in the middle of the room. To solve this, I tested an array of cameras and eventually ended up using a DSLR with a very wide angle lens. The DSLR with the lens gave us a large enough region to work with, but how do you then get data from a DSLR into TSPS (which works well with webcams and even the Kinect)? Thankfully, newer cameras have a live preview mode which can be captured like a stream; all you need to do is keep the camera active. Enter SparkoCam – software that emulates a webcam and takes input from many kinds of cameras that offer a live preview. This worked fine with a short cable plugged into the PC, but the camera’s USB interface couldn’t draw enough power over a 10 meter cable, for which I eventually used a powered USB hub near the camera.

The second problem was the battery pack on the camera. The live preview mode drew little power, but a fully charged battery pack would last just about 4 hours – which meant that we had to swap these packs every 4 hours. The alternative was a powered battery pack that plugged directly into an AC source, but I couldn’t find one in time.

The final setup

My final software and hardware setup looked something like this, and as crazy as it sounds, it worked without any problems throughout the day (with 2+ battery changes) for the 7 days we were at the Serendipity Festival.

Canon EOS Rebel T1i with a wide angle lens -> connected to a powered USB hub -> connected to the PC via a 10 meter USB cable -> which runs SparkoCam, which captures the live preview and simulates a webcam -> which is read by TSPS, processed, tagged and then sent out via a TCP port -> which is read by the game/client -> which renders the whole thing and displays it on a 14 x 9 feet video wall.

I leave you with some images, behind the scenes and more.

Thank you 🙂

This would not have been possible without the help of the awesome folk at Srishti – http://srishti.ac.in – especially Geetha, and everyone at CEMA <3, and my co-collaborators Vinod K.A., Divya Prabha, Ishan Srivastava and Sushim Ghatak.

More links

The Entanglement Video – https://www.youtube.com/watch?v=uPkIc_oQTA0
Images from the Entanglement on Instagram – https://www.instagram.com/explore/tags/theentanglementgoa2016/
A News article about the installations – http://bit.ly/goasf2016_article0