The Technical Aspects of “Skin”

11:36 AM in General by Adityo Pratomo

Just as a quick reminder, at this stage we've pretty much decided what our artwork will be. It will have a plastic bag texture on the screen that the audience can interact with. We will also have a grid of plastic bags as the physical part of the installation, representing the dynamic surface of the plastic bag on the screen. The plastic bags will have LEDs on them, which will react both to what happens on the screen and to the audience's position in the environment. We're aiming to give the audience the sensation of touching skin, which is why we give them the chance to do it not only virtually but also physically, and why the installation needs a physical aspect.

From that design, I then formulated how each part of the artwork should work and how the parts are connected, which is shown in the picture below:

I will now describe how each part was built.

The Processing Part

This is the heart of the artwork, as the whole interaction process is based on what happens on the screen. For this, we will have a plastic bag texture applied to the whole screen. This surface will then react to the audience's movement in front of the screen.

The plastic bag texture was achieved in two steps. First, we did a cloth simulation using the Traer Physics library for Processing. We created a grid of particles connected by springs. That way, the particles' positions and velocities can be manipulated dynamically, with a degree of bounciness so they can return to their default positions (i.e. flat on the screen). To further simulate the feel of a plastic surface, I applied heavy damping to the particles, so that the surface only bounces once, in slow motion, which is what you would expect from a plastic surface.
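
To give an idea of how this looks in code, here is a minimal sketch of a spring grid built with the Traer Physics library. The grid size, spring stiffness and damping values are illustrative, not the ones used in the final piece.

import traer.physics.*;

int COLS = 20;
int ROWS = 15;
float SPACING = 30;

ParticleSystem physics;
Particle[][] grid;

void setup() {
  size(640, 480, P3D);
  physics = new ParticleSystem(0, 0.5);    // no gravity, heavy drag so the surface settles quickly
  grid = new Particle[ROWS][COLS];

  // lay the particles out flat on the screen
  for (int y = 0; y < ROWS; y++) {
    for (int x = 0; x < COLS; x++) {
      grid[y][x] = physics.makeParticle(1.0, x * SPACING, y * SPACING, 0);
    }
  }

  // connect each particle to its right and bottom neighbours with springs
  for (int y = 0; y < ROWS; y++) {
    for (int x = 0; x < COLS; x++) {
      if (x < COLS - 1) physics.makeSpring(grid[y][x], grid[y][x + 1], 0.2, 0.9, SPACING);
      if (y < ROWS - 1) physics.makeSpring(grid[y][x], grid[y + 1][x], 0.2, 0.9, SPACING);
    }
  }

  // pin the border so the sheet stays in place
  for (int x = 0; x < COLS; x++) { grid[0][x].makeFixed(); grid[ROWS - 1][x].makeFixed(); }
  for (int y = 0; y < ROWS; y++) { grid[y][0].makeFixed(); grid[y][COLS - 1].makeFixed(); }
}

void draw() {
  background(0);
  physics.tick();                          // advance the simulation one step
  stroke(255);
  for (int y = 0; y < ROWS; y++) {
    for (int x = 0; x < COLS; x++) {
      point(grid[y][x].position().x(), grid[y][x].position().y(), grid[y][x].position().z());
    }
  }
}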

After I finished the particle part, I applied the plastic bag texture on top of the particles. To do this, I created a triangle strip polygon over the particle grid and laid the plastic bag texture on top of it. Johnny created this texture from a photo he took, manipulated further in Photoshop. While this may seem like an easy part, in truth it proved tricky, as we had to find the right balance between a clearly perceived plastic bag texture and the maximum amount of detail that can be displayed clearly on the SmartSlab. In practice, we went back and forth several times before we finally nailed the final form of the texture.
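
Roughly, the textured surface is drawn like this, assuming the particle grid from the sketch above and an image file whose name ("plastic.png") is just a placeholder here:

PImage plasticTexture;

void loadTexture() {
  plasticTexture = loadImage("plastic.png");   // placeholder file name
  noStroke();
}

void drawSurface() {
  // one triangle strip per row of the grid, with the texture stretched across it
  for (int y = 0; y < ROWS - 1; y++) {
    float v1 = map(y,     0, ROWS - 1, 0, plasticTexture.height);
    float v2 = map(y + 1, 0, ROWS - 1, 0, plasticTexture.height);
    beginShape(TRIANGLE_STRIP);
    texture(plasticTexture);
    for (int x = 0; x < COLS; x++) {
      float u = map(x, 0, COLS - 1, 0, plasticTexture.width);
      vertex(grid[y][x].position().x(),     grid[y][x].position().y(),     grid[y][x].position().z(),     u, v1);
      vertex(grid[y + 1][x].position().x(), grid[y + 1][x].position().y(), grid[y + 1][x].position().z(), u, v2);
    }
    endShape();
  }
}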


The next step was to manipulate that plastic bag texture using the image from the Kinect. Initially, I was aiming to use the coordinates from the Kinect's depth image to dictate which particles' positions could be manipulated. This method proved to be performance heavy, resulting in a lagged display. Rob then came to help and suggested that we could instead use the Kinect's depth image directly as a parameter to offset the particles, without having to extract each depth pixel's x and y coordinates. The result was magical: the plastic bag surface interacted with the movement seen by the Kinect with no delay on the display and faster performance overall. We then had to tweak a few more parameters to make the system suitable for the exhibition, even though at that point the interactive screen was already quite impressive.
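
The gist of Rob's suggestion looks something like the sketch below, where depthImg is assumed to be the Kinect's depth image (as a PImage from the openkinect library) already resized to the grid's dimensions; the offset range is illustrative.

void applyDepthToGrid(PImage depthImg) {
  depthImg.loadPixels();
  for (int y = 0; y < ROWS; y++) {
    for (int x = 0; x < COLS; x++) {
      Particle p = grid[y][x];
      // use the brightness of the depth pixel directly, no coordinate extraction needed
      float d = brightness(depthImg.pixels[x + y * COLS]);
      // push the particle along z in proportion to the depth value;
      // the springs pull it back once the person moves away
      float targetZ = map(d, 0, 255, 0, -120);
      p.position().set(p.position().x(), p.position().y(), targetZ);
    }
  }
}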

In the end, we managed to come up with a better mapping between the Kinect's depth image and the particles. I also applied lighting to the surface, to bring out the shape of the person on the screen and to give the interaction a better effect. At this point Johnny, who has more experience with 3D, suggested that to get a more detailed display we could either make the polygons smaller, so the lighting is resolved in more detail, or apply a shader instead of Processing's light method. Due to lack of time we left this out and noted it as a suggestion for future work.
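
By that point the draw loop looked roughly like this, building on the snippets above; the light position and colours are only illustrative.

void draw() {
  background(0);
  physics.tick();

  ambientLight(40, 40, 40);                              // dim base light
  pointLight(255, 255, 255, width / 2, height / 2, 300); // highlight that picks out the bumps

  applyDepthToGrid(depthImg);   // depthImg as in the previous sketch
  drawSurface();
}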

The Arduino Part

Now, if the screen was the heart, then for me the physical element would either make or break the whole installation. The lights emphasize the message we are trying to convey, which is why I put a lot of pressure on myself to make sure they would actually work. Luckily, technically speaking this is quite a simple job, and we already had a working prototype from our previous concept.

The idea was to turn lights on depending on where the action on the screen happened, a simple mapping. We also wanted to reward people who actually try to interact with the plastic bags. That's why we added extra LEDs that are activated depending on the person's distance from the plastic bags: the closer the audience gets, the more LEDs are turned on.

To achieve that, I opted to use two separate Arduino boards: one board is connected to the Processing sketch via an XBee wireless link, so the laptop and the Arduino can sit in separate places, while the other receives input from an IR sensor that senses the audience's distance. The two Arduinos are completely separate. This may seem inefficient, but for me it provides a failover system: if one Arduino stops working, we still have LEDs lit in the plastic bags. Plus, since I had two Arduinos lying around in my room, why not use them both.

For the Arduino that receives input from the IR sensor, the workflow is quite streamlined. After connecting the IR sensor to the Arduino, I programmed the Arduino to read the sensor's input. I used a lookup table to interpolate the readings; it's easier this way because the raw data from the IR sensor, being analog, is quite hard to read and use directly. With this, I created a mapping between the voltage from the IR sensor and an object's distance from it. I then mapped that distance to the number of LEDs that light up: again, the closer the audience is to the sensor, the more LEDs are turned on. Quite a simple process altogether.
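
A sketch of that Arduino side, assuming a Sharp-style analog IR sensor on pin A0 and four LEDs on pins 2 to 5; the lookup-table values are illustrative and the real table has to be calibrated against the actual sensor.

const int IR_PIN = A0;
const int LED_PINS[] = {2, 3, 4, 5};
const int NUM_LEDS = 4;

// lookup table: raw ADC reading -> approximate distance in cm
const int rawTable[]  = {600, 450, 330, 250, 200, 160, 130};
const int distTable[] = { 20,  30,  40,  50,  60,  70,  80};
const int TABLE_SIZE = 7;

// linearly interpolate a distance from the raw reading
int readDistance() {
  int raw = analogRead(IR_PIN);
  if (raw >= rawTable[0]) return distTable[0];
  if (raw <= rawTable[TABLE_SIZE - 1]) return distTable[TABLE_SIZE - 1];
  for (int i = 0; i < TABLE_SIZE - 1; i++) {
    if (raw <= rawTable[i] && raw >= rawTable[i + 1]) {
      return map(raw, rawTable[i], rawTable[i + 1], distTable[i], distTable[i + 1]);
    }
  }
  return distTable[TABLE_SIZE - 1];
}

void setup() {
  for (int i = 0; i < NUM_LEDS; i++) pinMode(LED_PINS[i], OUTPUT);
}

void loop() {
  int distance = readDistance();
  // the closer the audience, the more LEDs are lit
  int litCount = map(constrain(distance, 20, 80), 80, 20, 0, NUM_LEDS);
  for (int i = 0; i < NUM_LEDS; i++) {
    digitalWrite(LED_PINS[i], i < litCount ? HIGH : LOW);
  }
  delay(50);
}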

Connecting Processing and Arduino

For the other Arduino, the one that talks to the Processing sketch, the process involved more work. The first step was deciding which part of the screen interaction would be passed on as a message to the Arduino, and how. After some tinkering, I decided that the tracked position in the depth image could be the parameter that turns on the LEDs. For this, I used one of the examples from Dan Shiffman's openkinect library that tracks a position in the depth image. This proved light enough that it didn't affect the overall performance of the system.
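
The tracking boils down to averaging the positions of the closest depth pixels, in the spirit of the library's point-tracking example. Here depth is assumed to be the raw depth array from the openkinect library, and the threshold value is just a starting point.

PVector trackPerson(int[] depth, int w, int h) {
  float sumX = 0, sumY = 0;
  int count = 0;
  int threshold = 700;                      // raw depth units, tuned for the space
  for (int y = 0; y < h; y++) {
    for (int x = 0; x < w; x++) {
      int d = depth[x + y * w];
      if (d > 0 && d < threshold) {         // pixel is close enough to be part of a person
        sumX += x;
        sumY += y;
        count++;
      }
    }
  }
  if (count == 0) return null;              // nobody in range
  return new PVector(sumX / count, sumY / count);
}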

After that milestone was reached, the next step was connecting those parameters to the Arduino to turn on the LEDs. I already knew this communication would go over the serial port. So I first tried to make Processing do a bunch of serial writes, depending on the parameters from the Kinect's depth image, to create different LED effects. But this caused a massive lag in the system and the display. I then tried a different approach: do only two serial writes and let the Arduino interpret the data itself, in short, load balancing. That turned out to be the solution.
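
One way the two-write scheme can look: Processing sends just two bytes per frame, say a zone index and an intensity, and the Arduino expands them into an LED pattern on its own. The byte meanings, the variable names (port, personPos) and the pin numbers below are assumptions for the sake of the example.

// Processing side: two writes per frame instead of one write per LED
// (port is a processing.serial.Serial already opened on the XBee's serial port)
int zone = int(map(personPos.x, 0, 640, 0, 5));        // which column of bags to light
int intensity = int(map(personPos.y, 0, 480, 0, 255)); // how bright
port.write(zone);
port.write(intensity);

// Arduino side: read the two bytes and work out the LED pattern locally
const int LED_PINS[] = {3, 5, 6, 9, 10, 11};   // PWM-capable pins
const int NUM_LEDS = 6;

void setup() {
  Serial.begin(9600);
  for (int i = 0; i < NUM_LEDS; i++) pinMode(LED_PINS[i], OUTPUT);
}

void loop() {
  if (Serial.available() >= 2) {
    int zone = Serial.read();
    int intensity = Serial.read();
    for (int i = 0; i < NUM_LEDS; i++) {
      // light only the LED for the active zone, scaled by the intensity byte
      analogWrite(LED_PINS[i], i == zone ? intensity : 0);
    }
  }
}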

That step was done over the USB port on my laptop. Now to the real part: connecting the laptop and the Arduino wirelessly using XBee. For this I used two XBee Series 1 modules, one for sending and one for receiving. This was probably the trickiest bit of the whole process, given that I had never used XBee before. But after some reading I found that XBee communication requires a few steps. First, both XBee modules have to be configured so that they are on the same network. I did this with the CoolTerm application on my Mac, programming both modules using XBee AT commands. After some testing to make sure they could communicate with each other, I connected the sending XBee to my laptop and used it as Processing's serial port to send data to the receiving XBee. Voila, it worked. Not as hard as I imagined at first.
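
For reference, configuring the two Series 1 modules in CoolTerm comes down to a short AT command session like the one below. The PAN ID and addresses are made-up example values; the sender and receiver simply swap the MY and DL values.

+++          (enter command mode and wait for OK)
ATID 3001    (PAN ID, must be the same on both modules)
ATMY 1       (this module's 16-bit address: 1 on the sender, 2 on the receiver)
ATDL 2       (destination address: 2 on the sender, 1 on the receiver)
ATBD 3       (serial baud rate, 3 = 9600)
ATWR         (save the settings to the module)
ATCN         (exit command mode)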

Conclusion

In the end we managed to get every technical part of the system working. It was quite a lengthy process, but we enjoyed every bit of it. Of course, with so many building blocks we always ran a bigger risk of failure; it was then up to us to cope with it whenever a failure occurred.