My talk about Lo-fi AR at EnthusiastiCon Berlin

[Image: Raspberry Pi sewn into a suit]
On the 25th of May, I gave a lightning talk about lo-fi AR at EnthusiastiCon, a conference that celebrates programming. To use the organizers’ words, the conference is about ‘the strange, the wonderful, and the clever solutions to unusual problems’. This blog post sums up the talk for those who missed it. Additionally, I’ll add some links to resources I found useful when getting started with lo-fi tech.

What’s AR?

To make sure everyone in the audience understood the tech behind AR, I started my talk with short definitions that I then challenged.

Augmented Reality, often abbreviated as AR, alters a user’s perception of reality. Virtual Reality, in contrast, replaces one’s perception with a digital environment. Current understandings and examples of both forms are mostly visual.

[Image: slide showing an HTC Vive headset and a no-name mobile headset]
On the left, you can see a typical headset, the HTC Vive. Its functionality relies on able-bodied, binocular human vision. Inside the headset are two lenses through which a human user looks at two images that show the same content from slightly different angles. The images are displayed on an internal screen, and the human brain turns these 2D images into a 3D world. The headset on the right works similarly; the difference is that it is way cheaper. Instead of a built-in screen, you put a smartphone into it, which then serves as the screen. That comes with certain limitations, but also possibilities!

[Image: slide noting that nonhuman animals perceive their surroundings not predominantly visually]
What happens, however, if binocular human vision isn’t the limit for our tech? Different animals perceive their surroundings in different ways. Other mammals such as cats and rabbits, but also most birds and fish, have partly binocular and partly monocular vision. Some birds can additionally see UV light. Insects have compound eyes, and spiders have up to eight eyes. Bats don’t rely much on vision and use echolocation to detect prey. Sharks can sense electric fields.

My experimental research on AR

With my collaborator Jasper Meiners, I have developed experiments to open augmented reality up to multi-species perceptions. We started our AR experiments with research into animal perception. In the experiment I presented at EnthusiastiCon, we asked ourselves what happens when eyes become detached from the head and how human testers react to multi-eyed vision.

[Image: jumping spider Phidippus regius]
Jumping spiders of the species Phidippus regius can see sharply within a range of about 30 cm. In the picture, you can see my jumping spider Cyan. With her eight eyes, she is an expert when it comes to multi-eyed perception.

[Image: photos of the prototyping sessions with webcams and test persons]
To simulate multi-eyed vision, my first idea was to use multiple webcams and the game engine Unity. At that point, Jasper and I were working with the HTC Vive headset I showed you at the beginning. The problems began when we tried to handle the input of multiple cameras via one USB card on a VR PC running Windows. When it finally kind of worked by using one USB port per camera, we attached the cameras to my wrists and knees and arranged the video streams in my field of vision. This was exciting, but the cables of the headset limited my movements and I couldn’t freely explore my new set of detached eyes.

Going lo-fi

We wondered how to read the video streams from the cameras without being tied to a heavy computer. Our idea was to use one of the lightest computers we could afford, the Raspberry Pi. Since people build home surveillance systems with Raspberry Pis, it is relatively easy to find examples of how to fetch camera output. If you use a Raspberry Pi camera board module, there’s great documentation online on how to stream, modify and analyze video with Python and OpenCV. For my purpose of simply streaming video, I chose a tool called Motion. Motion can stream video and also trigger scripts depending on changes in the footage.
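
To give an idea of how little is needed for plain streaming, here is a minimal sketch of what the relevant part of a Motion configuration could look like. This is not our actual setup: the device path, resolution, port and script path are placeholders, and the option names are from Motion 4.x, so check the documentation of the version you install.

    # excerpt from motion.conf (placeholder values)
    videodevice /dev/video0       # the attached USB webcam
    width 640
    height 480
    framerate 15

    # serve the camera as an MJPEG stream on the local network
    stream_port 8081
    stream_localhost off          # let other devices open the stream
    stream_maxrate 15

    # optionally run a script whenever Motion detects movement
    on_event_start /home/pi/on_motion.sh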

[Image: code snippets of Motion’s configuration and the arrangement of videos in HTML]
I used a router to stream the videos locally to a website. With only HTML and CSS, Jasper arranged the videos and used blend modes for overlapping effects. On the slide you can see both how Motion looks on a Raspberry Pi and how the video streams are arranged via HTML.
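
Motion serves each camera as an MJPEG stream that a browser can display directly in an img element, so the page itself can stay very simple. The sketch below illustrates the idea rather than reproducing our actual page; the IP addresses, port and element names are placeholders for your own setup.

    <!-- sketch: four MJPEG streams from the Raspberry Pis, overlapped with CSS blend modes -->
    <!DOCTYPE html>
    <html>
      <head>
        <style>
          body  { background: black; margin: 0; }
          .eye  { position: absolute; width: 50vw; mix-blend-mode: screen; }
          #cam1 { top: 0;    left: 0; }
          #cam2 { top: 0;    left: 40vw; }
          #cam3 { top: 40vh; left: 0; }
          #cam4 { top: 40vh; left: 40vw; }
        </style>
      </head>
      <body>
        <img class="eye" id="cam1" src="http://192.168.0.11:8081/">
        <img class="eye" id="cam2" src="http://192.168.0.12:8081/">
        <img class="eye" id="cam3" src="http://192.168.0.13:8081/">
        <img class="eye" id="cam4" src="http://192.168.0.14:8081/">
      </body>
    </html>

With mix-blend-mode: screen, the streams brighten each other where they overlap; other blend modes such as multiply or difference give different overlapping effects.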

[Image: the Raspberry Pi lo-fi suit in an exhibition setup]
To attach the Raspberry Pis and webcams to the human body, we built a lo-fi AR suit. The Raspberry Pis and power banks are sewn into the back of the suit. Two webcams can be used on the upper half of the tester’s body and the other two on the lower half.

This is a recording of a performance we did with the lo-fi AR suit at a media arts festival in Kassel. I am performing with the suit. People in the exhibition see on a monitor the same imagery that I see in my headset. They liked watching themselves being seen by my webcams. I can only use the suit for half an hour, and afterwards I feel like I’ve swum for two hours. It’s apparently hard work for the brain to process four video streams that are connected to one’s body.

[Image: slide with answers to the research questions as described in the text below]
Coming back to the initial research questions, this experiment indicates that a different configuration of visual inputs does translate into a different sense of space and leads to adapted movements. If you want to actively use your knees as visual inputs, moving your body closer to the ground feels quite reasonable.

Why lo-fi AR? How can you get started?

[Image: pixelated flower]
So, why would you get started with lo-fi AR? AR hasn’t yet settled into a specific technology with specific tools, so there’s lots of room for experimentation. Lo-fi means not only figuring things out with inexpensive technology, but also seeing the value in imperfections, glitches and the experimental. Another reason to get started is that developing lo-fi AR applications is a lot of fun! Or because you like pixelated flowers!

Throughout my work with AR, I have used a range of approaches. If you want to get started, here’s what you can do:

  • With the game engine Unity, you can fetch video streams and rearrange the imagery in a 3D environment. By following this tutorial, you can assign names to your different webcams and switch between them. By attaching the inputs to different game objects, you can arrange a collage of the streams. Unity lets you export to common commercial headsets such as the HTC Vive and Oculus, as well as build apps for mobile AR. When using mobile AR, you can use your mobile phone’s real-time camera input and apply shaders to alter it. In the example, the color white is replaced by black. [Image: city scene in which everything that’s white is rendered as black]
  • In a different collaborative experiment, we used particle.io and a bunch of different sensors to measure air pollution. Depending on the sensor inputs, my team’s app altered the mobile phone’s camera imagery. On the left, you can see the portable sensor packages and on the right, the altered camera imagery. The dark spots indicate strong carbon dioxide pollution. We performed this experiment as city walks in Leipzig. We measured very locally and made information available that’s usually not accessible to citizens. In combination with sensors, lo-fi AR can empower citizens. [Image: sensor packages and altered camera imagery]
  • A promising option I haven’t tested much is WebVR. A-Frame is a web framework with great documentation. Using HTML, CSS and A-Frame’s WebVR-specific elements, you can build and animate simple three-dimensional shapes (see the sketch after this list). The cool thing is that users don’t have to install anything; they just visit your website.
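
To give a taste of A-Frame, here is a minimal hello-world scene in the style of its documentation, not something from my talk. The version number in the script URL is just an example; use a current release.

    <!-- a minimal A-Frame scene: an animated box, a sphere and a sky,
         viewable in a WebVR-capable browser without installing anything -->
    <!DOCTYPE html>
    <html>
      <head>
        <script src="https://aframe.io/releases/0.9.2/aframe.min.js"></script>
      </head>
      <body>
        <a-scene>
          <a-box position="-1 0.5 -3" rotation="0 45 0" color="#4CC3D9"
                 animation="property: rotation; to: 0 405 0; loop: true; dur: 4000"></a-box>
          <a-sphere position="0 1.25 -5" radius="1.25" color="#EF2D5E"></a-sphere>
          <a-sky color="#ECECEC"></a-sky>
        </a-scene>
      </body>
    </html>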

I hope you feel encouraged to start your own AR experiments. Why not, for example, build lo-fi AR that relies only on touch? Thanks again to EnthusiastiCon for the opportunity to present on lo-fi AR. I enjoyed the conference very much and recommend that y’all go next year if you can.
