An Exhibition on Holiday

2:20 PM in General by Adityo Pratomo

Hello everyone :)

I just want to share this experience of exhibiting my installation during the holiday. I am currently participating in a sound art exhibition called “Derau” (the Indonesian word for “noise”). The exhibition is organised by a new media art community named Common Room. It runs for 16 days and is held in a small 2-level gallery (to add some creepiness, it’s located in front of a graveyard).

In this exhibition I’m showing my interactive sound art installation called “Robobrain”. I’ve written a post about it on my blog, so you can head there :) In short, the installation generates sounds based on people’s interaction with it through the simplest form of interface: knobs. The point is, I was trying to practise what we’ve learned this semester in terms of interactivity and developing a concept. I only used an Arduino board and some electronic components and sensors for this installation, as practice in creating a single, computer-less system.

So, there you have it. I hope everyone has a pleasant holiday.

Cheers,

Didit

Thing I Missed: Syphon

1:24 PM in General by Adityo Pratomo

The name says it all. Apparently there’s an application that enables sharing of video between applications such as Quartz Composer and Max/MSP. It’s called Syphon, and you can read more about it here:

http://syphon.v002.info/

A port for Processing is on its way, but for now some people are using it with JSyphon, Syphon’s port for Java. Some discussions on Syphon and Processing here:

http://forum.processing.org/topic/syphon-integration-with-processing

Can’t believe that I missed this one, as it may have helped me in linking animations from different apps. But maybe it can be included in my next project.
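To give a feel for how simple the sharing side is, here is a minimal sketch of publishing a Processing canvas over Syphon. The package and class names (codeanticode.syphon, SyphonServer) are from the Processing port that was still in development at the time of writing, so treat them as assumptions rather than a confirmed API.

```
// Minimal sketch: publish each frame of this sketch as a Syphon source
// so another Syphon-aware app (Quartz Composer, Max/MSP, VDMX...) can grab it.
// Class/package names are assumptions based on the Processing port.
import codeanticode.syphon.*;

SyphonServer server;

void setup() {
  size(400, 400, P3D);                          // Syphon needs an OpenGL renderer
  server = new SyphonServer(this, "Processing Frames");
}

void draw() {
  background(0);
  ellipse(mouseX, mouseY, 60, 60);
  server.sendScreen();                          // publish the current frame
}
```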

Hope this helps :)

Fy: a picture is worth a thousand words

12:57 AM in General by susannec

Given that we are visual people, why not show the progression of the project predominantly through photos – enjoy.

Human Skin – I found these images, taken with a coloured scanning electron micrograph, absolutely fascinating. All of these images are magnifications of the skin. The first image (magnification x117 at 10cm wide) details the ridges of a human fingerprint pattern. The second image (magnification x750 when printed 10cm wide) is the skin surface of a 40-year-old man. Lastly, the third image (magnification x840 at 10cm wide) shows the surface layer of the human skin, the epidermis. I consider this image most relevant to our installation as “the outer layer of the epidermis (the stratum corneum) is a tough coating formed from overlapping layers of dead skin cells, which are continually sloughed off and replaced with cells from the living layers beneath.”

(First and third images) Human skin outermost layer. Images courtesy of www.psmicrographs.co.uk. (Second image) courtesy of www.sciencephoto.com

Conceptual model – re-use and re-purposing of plastic bags. Notice the resemblance to the images above: “overlapping layers of dead skin cells”.

cells formation with plastic bags

Construction of skin – Plastic bags taped to wire

wire and plastic bags

Let them breathe – We wanted to test what it would look like outside. The gentle breeze helped them dance.

Let’s get serious – the overall design shape was established

replenishing of skin

Hard wiring – we placed a few LEDs along each row of bags.

testing of the LEDs

The rejuvenation of skin – the LEDs represent the sense of touch. Like human skin, this installation is equipped with sensors to detect the movement of people, and it responds by glowing. Interacting with the screen also activates the LEDs via the Arduino.

replenishing of skin - testing of the LEDs, arduino and screen
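For those curious about the wiring-up of this step, here is a rough sketch of driving the LEDs from Processing through the Arduino (Firmata) library, with an IR sensor providing the movement reading. The pin numbers and threshold are placeholders, not the values from our actual build.

```
// Sketch of the LED side of Skin: read an IR sensor through Firmata and
// light a few LED rows when movement (or screen interaction) is detected.
// Pin numbers and the threshold below are assumptions for illustration.
import processing.serial.*;
import cc.arduino.*;

Arduino arduino;
int[] ledPins = {2, 3, 4, 5};      // digital pins the LED rows are wired to (assumed)

void setup() {
  arduino = new Arduino(this, Arduino.list()[0], 57600);   // board runs StandardFirmata
  for (int pin : ledPins) arduino.pinMode(pin, Arduino.OUTPUT);
}

void draw() {
  int ir = arduino.analogRead(0);          // IR sensor on analog pin 0 (assumed)
  boolean movement = ir > 300;             // placeholder threshold
  glow(movement || screenIsBeingTouched()); // screen interaction can also trigger the glow
}

void glow(boolean on) {
  for (int pin : ledPins) {
    arduino.digitalWrite(pin, on ? Arduino.HIGH : Arduino.LOW);
  }
}

// Hypothetical hook: in the real sketch this would come from the Kinect/screen code.
boolean screenIsBeingTouched() {
  return false;
}
```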

In preparation for the big day – yes it is the palm of my hand

The shedding of dead skin – bringing together the screen and the physical installation.

Update #6 Final blog post… Woo hoo!

12:52 AM in General by Inez Ang

 

click here to watch video

 

Artistic Statement
Over the next 20 years, the area around the Grid gallery will be a place in transition with land reclamation of the western edge of the city and the construction of Barangaroo. This re-engineering would further detach its inhabitants from an already transient space. Watering Hole is an attempt to nourish this void by providing permanence in impermanent times.

It is an oasis in the heart of the city. One that tries to recapture the spirit of a community through simple play. Using water as the portal to a school of virtual fish, senses are aroused and soothed by aesthetics which encourage calmness both in body and mind. Running one’s fingers through gentle bubbling water creates a generative soundscape while affecting the behaviour of the fish. In its full implementation, it is an ecosystem that lives and thrives according to the quality of its immediate environment and interaction with the community.

Through playful engagement and the calming aesthetic of the pond, it’s hoped some relief may be brought to a community fragmented by corporations and under siege by the promise of a future.

Design Process
Research and Ideation
The project began on the highways of cyberspace and byways of Sydney where grounding research on the proposed installation site, Grid Gallery, took place. As the context behind the space emerged through archival photos, Google Maps, news reports, street observation and interviews, one thing was certain – it’s a transitory space with disconnected inhabitants, and it was about to undergo a major and long-drawn overhaul. This inspired the idea to create an eco-system that would provide a sort of permanence throughout these impermanent times.

An interactive fish pond based on generative algorithms was conceived and its ‘quality of life’ would depend on its physical environment and interaction from passers-by. The screen would play on a mental model of a window into the underwater world while water becomes the user interface. Bubbles generated by an air pump and a submersible light in the water would not only provide visual and haptic feedback, but the tactile quality of water also stimulates the sense of touch and has been shown to have an intrinsic appeal to user’s memories.

Designing the Interaction

Drawing on research into the field of HCI, the interaction modalities were designed around “The Audience Funnel”. It attempts to draw the passer-by into direct interaction with its water interface by attracting attention and arousing curiosity through a combination of sound, graphical animation and physical responses.

Upon further reflection, a decision was made to remove the on-screen instructions to make the experience more immersive and encourage more exploratory behaviour from the users. This meant that users would be guided purely by feedback and it was our job to craft its responsiveness to establish the ‘virtual connection’.

The look-and-feel of the installation was inspired by Ommwriter and DoNothingFor2Minutes.com, both from the Calming Technology paradigm. Through design, they aim to reduce stress and encourage in their users ‘a restful state of alertness’. This mantra became the core of our user experience . We would use technology to create a fluid screen aesthetic, gentle bubbling and a generative soundscape to create an oasis of calm.

After a brainstorm on graphical aesthetics, blues, greens and neutrals were chosen for our colour palette and we dove into creating an ambient swarm of small fish with realistic water ripples against a dreamy backdrop in Processing. However, we quickly learnt after our first screen test that large, low-resolution displays work best with simple, solid shapes, non-gradated fills and minimal clutter. Working with large displays also means the eye has more area to process, so visual feedback has to call attention to itself for the user to register it.

Testing and iterating through the design phase also helped us to refine the interaction. Our final design employs a more fluid form of interaction where natural actions are caught and rewarded. The feedback is quick and its effects taper off slowly, allowing visual aftereffects to linger and sound to build up in layers, creating a meditative atmosphere. We also tried to give our fish a personality by making them skittish and sensitive to quick movements. This consistent response sends a strong signal to users to slow down.

Technical Development

We chose Processing as our development environment. Due to last-minute hardware difficulties, the final setup consisted of one computer handling the Kinect and screen component (Comp A) and another handling the Arduino component (Comp B), with the two connected via the oscP5 library.

Hand movements above the water are captured in the depth image of the Kinect using the OpenKinect library, which is processed using OpenCV and custom algorithms in Comp A. Depending on the gesture performed, Processing would relay information either to the screen or via OSC to Comp B, which turns on the LED light and air pump via Arduino and SeeedStudio’s Relay Shield.

If movement is captured by the PIR sensor on Comp B, the LED light and air pump would momentarily turn on and a message would be sent via OSC to Comp A where a short stream of virtual bubbles would be released.
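For anyone curious about the glue between the two machines, here is a minimal sketch of the OSC exchange using oscP5. The IP address, port numbers and the /pir address pattern are placeholders, not the values from our setup; in reality the sender lives in the Comp B sketch and the oscEvent handler in the Comp A sketch.

```
// Minimal sketch of the OSC exchange between Comp B (PIR + Arduino) and
// Comp A (Kinect + screen). Address pattern, IP and ports are assumptions.
import oscP5.*;
import netP5.*;

OscP5 osc;
NetAddress compA;

void setup() {
  osc = new OscP5(this, 12000);                      // listen on port 12000 (assumed)
  compA = new NetAddress("192.168.0.10", 12001);     // Comp A's address (assumed)
}

// Comp B side: called when the PIR sensor fires
void motionDetected() {
  OscMessage m = new OscMessage("/pir");             // hypothetical address pattern
  m.add(1);
  osc.send(m, compA);
}

// Comp A side: release a short stream of virtual bubbles when the message arrives
void oscEvent(OscMessage m) {
  if (m.checkAddrPattern("/pir")) {
    // releaseVirtualBubbles();                      // hypothetical helper in the screen sketch
  }
}
```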

To implement the technicalities, we acquired skills both in software and hardware. Projects on OpenProcessing.org provided us with reference points and Daniel Shiffman’s blog and Kickstarter project “Nature of Code” equipped us with crucial knowledge on programming vectors, particle systems and steering behaviours. Going through the OpenCV API also helped us gain the skills to refine our blob tracking to filter out multiple hands over a small region-of-interest. We also researched into basic electronics (and acquired soldering skills to fix wrongly cut cables) in order to work with the Arduino and the other components.

Reflection
Our main agendas for the project were to experiment with technology and understand the public design space. Selecting a simple concept to work with during the group ideation phase gave us a lot of latitude in the design space to craft and fine-tune the user experience. We were able to make a simple concept better, rather than struggle just to make a grand idea work. With not more than a year of programming experience between us, we’ve certainly acquired a multitude of technical knowledge and are just beginning to understand how people behave in such spaces.

We were pleased to hear people comparing the slow hand movements to taiji on a few occasions, and one person even remarked that the experience was meditative. This meant our design was, to some degree, successful in conveying the feeling/idea of calm to our users. One thing we did underestimate was the appeal of the fish tank itself, having heard “Oh look, a fish tank!” more than a couple of times. This novelty item in the middle of a public space created a honeypot effect for us and probably got us more attention than the fish on the screen. People also tend to become a little self-conscious when they step up to interact and will usually stop after a try or two if they don’t get a response. It’s a fast, dynamic space where expectations are high and reactions are immediate. The difference between a positive and negative response, judging from facial expressions, is really a matter of seconds. However, people will make an effort to understand the interaction if the content and experience are engaging enough.

On the downside, we didn’t notice anyone’s hand touching the water, which meant we lost one of our most powerful feedback mechanisms: the tactile feel of the water bubbles. People also tended to be engrossed in the happenings on the screen while interacting and missed the light and bubbles turning on and off. This was probably because the visual feedback in the physical tank was not in the natural line of sight. For future development, one of the major improvements would be to use a shallower water receptacle to eliminate the boundaries between the user and the water surface, yet maintain the visual novelty that served us so well. We could also create more ‘bubble points’ in the water to make it seem more playful and enticing to touch. These are some of the things we took into consideration when designing the Grid Gallery mockup.

Fy: Skin Final Report and Reflection

12:30 AM in General by Adityo Pratomo

Skin is a mixed-media screen interaction that incorporates a physical spatial installation. In the words of Juhani Pallasmaa, “As we look, the eye touches, and before we even see an object, we have already touched it and judged its weight, temperature and surface texture.” Skin plays with the sense of touch, bringing attention to the unconscious element of touch in vision and the hidden tactile experience that defines the sensuous qualities of the perceived.


The Experience
As the viewer interacts with the screen, what can be seen and defined is a skin-like surface, layered and generated from the movement of the viewer. The skin fluidly reveals edges and corners, blending the texture of plastic bags to express the temporality and continuity of the viewer’s presence. The interaction with the screen can be experienced as both mental and physical; it allows exploration of the hapticity of the skin-like screen surface. The term haptic means relating to, or based on, the sense of touch. For this reason, the physical spatial installation was critical in activating all the senses within a new environment. The concept – encouraging viewers to touch with their eyes and sense with their being – brings the experience to life.

To create this intended user experience, we played around with the technologies introduced to us during the course of the semester. We managed to come up with a mix of motion sensing using Kinect and Processing and physical feedback through lights using Arduino and IR sensors.

Finally, after going through the long process of creating this artwork, from its conception to building the early prototype, testing it and then coming back with the final prototype, we finally came to showcase the artwork to its intended audience: the public. It was the exhibition night.

The Exhibition Night

On the night Skin was introduced to the public, we suffered more technical difficulties. For some reason, the LEDs that were working several minutes before our showcase suddenly died; we only managed to get 2 of the 14 LEDs working. On the other hand, the screen interaction worked well and was reactive enough to the audience’s movement. In short, we had the screen interaction working as planned, but not the physical interaction.

The effect of that condition was actually quite severe. On one hand, we had two kids playing happily with it; one was even caught saying “are we inside that plastic bag?” while pointing to the plastic bags hung on the tree. A great response that showed we were able to give the audience the intended experience and perception of the artwork. It also showed that the displayed bag texture read well enough that the audience could actually see and judge that it was a plastic bag. I interviewed another audience member to make sure of it, and she gave me positive feedback.

On the other hand, we had feedback from other audience members saying that the whole artwork was broken, simply because only 2 LEDs were working. This prevented them from interacting with it, because they saw it as a broken thing and nobody wants to play with a broken toy, even though some people clearly had fun with it.

Another thing we observed was that some people actually tried to touch the plastic bags, hoping that something would happen on the screen. When I told them that touching only turned on the LEDs inside the bags, they wished it could do more than that. They were hoping for a dialogue between the screen and the plastic bags, not only the one-way interaction we were providing that night.

Things We Learned

The chance to exhibit the artwork and actually test it with the public was a great opportunity to prove our theories about what the artwork should provide, as well as to get precious feedback on the development of the prototype. With that in mind, having a partially working prototype and less than positive feedback was actually valuable for us.

In the end, the weaknesses in our artwork pointed us to several things:

1. With more elements in the environment, the audience expects more to be possible with all of those objects. A one-way interaction, or even a loose interaction with just one thing, simply won’t be enough.

2. We had something working on the screen, but the experience just wasn’t enough for the audience because it was too easy for them, even though what we were aiming for in the first place was something the audience could easily pick up. In practice, too easy doesn’t cut it either.

3. The physical objects should be able to amplify the whole experience in case the screen fails to satisfy what the audience expects. This can be achieved with a richer mapping between action and reaction on these objects.

4. Better quality in the displayed texture and body form should also help engage the audience, because they can then see the interaction between the body and the screen more clearly. Using smaller and more numerous polygons, as well as a proper shader, might help reach this goal.

These points can then be regarded as the critical steps that need to be fulfilled in order to create a more engaging interactive installation for deployment in public space.

The Future

Having pointed out the possible room for improvement in our installation, we’d also like to propose how the installation would fit the space of the Grid Gallery. We stay true to our initial aim of lighting up the whole place through exploration, giving the performer (the person interacting with the installation) and the spectator (people viewing it without interacting) a chance to enjoy a brighter Grid Gallery area.

Here are our mock-ups of how we would transform the Grid Gallery area.

Conclusion

It’s been an interesting project to work on. The chance to actually build something for public display is quite rare, especially having it tested with the public. It would be even better if, in the future, we could build upon this stage of the prototype.

Other Documentation

Here’s a link to the video documentation of Skin, made as a comprehensive visual introduction to and demonstration of what “Skin” is.

Skin from Adityo Pratomo on Vimeo.


Making Space – Universe Invented

12:24 AM in General by Kevin

Making Space – Interactive Space Installation from kychan on Vimeo.

Journey into the Cosmos

Cosmic imagery has been created many times in TV, movies and games, either as 2D imagery or passive 3D renderings, but it has rarely been experienced interactively through motion detection. Looking at the night sky as children, we see the distant stars only as flat points because we are not able to move around them. In this installation, we wanted to create as immersive an experience of space as we could using the large screen. This was the beginning of Making Space – a desire to use beauty to inspire.

 

Collective Human Pursuit

The Making Space universe is concerned with the cosmos as a reflection of our society and collective human pursuits. The gravity of each individual atom exerts an influence on the solar system, as part of something bigger. In human society, strangers come together in pursuit of something, and collective actions ultimately manifest as advances in the society we live in now. Like stars on a starry night, the daily actions of single human beings are important pieces of a functioning society as a whole. Each singularity is in relation to others, as part of something bigger.

Using motion detection, the viewer is able to engage with the universe by navigating the cosmos, and collectively create the universe. The installation aims to build relationships and connect members of the space through collective actions. The individual viewer is able to immerse themselves in the space experience while leaving a permanent mark in the universe. Ultimately, each individual influence creates a powerful scene for others in the space to appreciate.

 

Making + Chaos (The Chaos Theory)

Figuring out how to make the complex aesthetics generative was the biggest challenge in the project. The gradients of cosmic colours in the galaxies, the layers of clouds, the stars at different depths and the contrast of light and dark colours all needed to be created as input. We could not afford over-complex rendering processes with thousands of objects in the scene. The Carina Nebula image was our aesthetic reference, and trying to reproduce the same aesthetic quality within all the limitations seemed overwhelming. In “You Must First Invent The Universe“, we conducted small experiments to make the construction manageable and to find ways to achieve the result without having to hand-craft every one of the million atoms in space.

In our research we were fascinated by the discovery of the Mandelbrot set (probably common knowledge to people with a science/IT background), an image with endless complexity (chaos) created by a simple loop. Learning that chaos is deeply intertwined in the universe and its beauty really influenced both the development of the concept and the code. Using fractals was an attractive idea because it involves simply creating a single star object and a replication process that progressively alters the original, adding variation and complexity to the whole scene. Unfortunately, the fractal experiments did not produce results to our liking, mainly due to limited coding knowledge, but they provided the thinking we needed for the rest of the project. If a form of self-replication can be achieved, it can be used to create the complex gradients of cosmic colours and to generate objects in space at random, creating complexity and unpredictable beauty.
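To show just how little code that “simple loop” needs, here is a classic minimal Mandelbrot sketch in Processing (the viewport bounds and iteration count are arbitrary choices, not anything from our project).

```
// The classic "simple loop" behind the Mandelbrot set's endless complexity:
// iterate z = z^2 + c for each pixel and colour by how fast it escapes.
size(400, 300);
loadPixels();
int maxIter = 100;
for (int px = 0; px < width; px++) {
  for (int py = 0; py < height; py++) {
    float a0 = map(px, 0, width, -2.5, 1);   // real axis
    float b0 = map(py, 0, height, -1, 1);    // imaginary axis
    float a = 0, b = 0;
    int n = 0;
    while (a*a + b*b < 4 && n < maxIter) {
      float aTemp = a*a - b*b + a0;
      b = 2*a*b + b0;
      a = aTemp;
      n++;
    }
    pixels[py*width + px] = (n == maxIter) ? color(0) : color(map(n, 0, maxIter, 0, 255));
  }
}
updatePixels();
```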

Iterations

“Nine Experiments Later”, we had figured out most of what was needed to make the universe, and by now we had a good candidate. Time was starting to run out and progress was slow; often we would run into coding blocks that we could not solve for days. To make sure things ran on schedule, we broke the project into components and tested small and often. This avoided wasting work by building something that wouldn’t make it into the final cut.

(from top, left to right)

Aesthetics Ver.3 (22/05/2011): Revealed limitations in displaying gradients and textures. The cloud texture turned out to be an unrecognisable blur; this prompted us to create images with higher contrast.

MillionCubes Ver.3 (26/05/2011): The universe in its most basic existence.  Tested with little expectation and turned out better than expected, with a decent glow and recognisable rotation.  Needed to improve glow, form, and sense of depth, but the contrast is good.

Billboards Ver.4 (04/06/2011): Improved the form of stars.  Needed to keep colours more consistent and within similar class/hue.

Billboards Ver.4 (04/06/2011): Variation of the above. Needed more weight in the centre of the stage, the focal point.

Prototype B (29/05/2011): Rotation and variation of colours were good. Fluid effects were not a suitable type of interaction, but they provided a good revelation of what colours were possible in the final product.

Billboards Ver.7 (08/06/2011): The gamble with a 2-layer colour texture didn’t pay off, so gradients were really the way to go, and after this we switched back to using gradients for the stars. This test was perhaps the most valuable, as it provided some final answers for the project. After it, we improved the contrast and smoothness of the colours to make the scene more spacey, and created an improved overlapping effect of colours. The interaction worked only okay; it needed a delay or extension of detection to keep the zoom-in effect smooth. The zoom-in effect itself worked nicely and showed that the platform was mostly working as we wanted. The final version came at version 11, and by then we had a smoothly running zoom effect, a time extender for detection and beautiful aesthetics that were just right :)

A more detailed record of prototype evaluations can be found in our Evaluation post.

 

Afterwards

On launch night, one of the viewers told us he felt that each star was like a moment in one’s life: the ones shining brightly and close together are the glorious years, and the lonely, distant ones are like bad times. It seemed the installation managed to create emotional resonance with viewers. Personally, seeing the installation in action sends chills down my spine, perhaps because it didn’t come easy and took so much work :P

The project met much resistance in its making and it has been a long journey, but in the end I think the project is a success. It’s amazing that different people we talked to saw different things in it. I think that’s also the nature of the cosmos – eternally mysterious and filled with human imagination.

quick little explanation

12:06 AM in General by johnny campos

Well, I thought I would give a quick little explanation of how we made our project. After our group came up with a new concept, we had to figure out a way to program it in Processing. Didit had mentioned that he had seen a cloth-morphing library on the internet somewhere, so I searched the net and found a library that specifically dealt with a form of cloth simulation.

Traer Physics library



 

Basically, what this library lets us do is work with a 3D mesh that already has predefined vertices on it. Once you have a vertex position, you are able to shape the mesh any way you want. So what we did was use the Kinect to detect the user; once it did, the sketch would change the values of specific vertices and move them either forwards or backwards depending on the movement of the user. By the end of the project, with all the coding done, we could finally see some nice fluid movement in the model.
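For reference, here is a rough sketch of the spring-mesh idea using the Traer Physics library, with the mouse standing in for the Kinect depth reading. The grid size, spring constants and the way a vertex gets displaced are all assumptions for illustration (including the Vector3D.set call), not our actual code.

```
// A small spring mesh standing in for the plastic-bag "cloth".
// Values and the displacement method are assumptions for illustration.
import traer.physics.*;

int COLS = 20, ROWS = 15;
float SPACING = 20;
ParticleSystem physics;
Particle[][] mesh;

void setup() {
  size(500, 400, P3D);
  physics = new ParticleSystem(0, 0.1);          // no gravity, a little drag
  mesh = new Particle[COLS][ROWS];
  for (int i = 0; i < COLS; i++) {
    for (int j = 0; j < ROWS; j++) {
      mesh[i][j] = physics.makeParticle(1.0, 50 + i * SPACING, 50 + j * SPACING, 0);
      if (i > 0) physics.makeSpring(mesh[i-1][j], mesh[i][j], 0.2, 0.1, SPACING);
      if (j > 0) physics.makeSpring(mesh[i][j-1], mesh[i][j], 0.2, 0.1, SPACING);
    }
  }
  // pin the top and bottom rows so the sheet stays in place
  for (int i = 0; i < COLS; i++) { mesh[i][0].makeFixed(); mesh[i][ROWS-1].makeFixed(); }
}

void draw() {
  background(0);
  // where the Kinect sees the user, push the nearest vertex forward/backward;
  // here the mouse position stands in for that depth reading
  int ci = constrain((mouseX - 50) / (int) SPACING, 0, COLS - 1);
  int cj = constrain((mouseY - 50) / (int) SPACING, 0, ROWS - 1);
  Particle p = mesh[ci][cj];
  p.position().set(p.position().x(), p.position().y(), 60);   // assumed Vector3D.set()

  physics.tick();
  stroke(255);
  for (int i = 1; i < COLS; i++) {
    for (int j = 0; j < ROWS; j++) {
      line(mesh[i-1][j].position().x(), mesh[i-1][j].position().y(), mesh[i-1][j].position().z(),
           mesh[i][j].position().x(),   mesh[i][j].position().y(),   mesh[i][j].position().z());
    }
  }
}
```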

 

Graphic design

One of the main things we had to change was how the user interacted with the screen. Going from our original idea, where we had an animated gif on someone’s head, to having a graphic texture fitted to the screen was a pretty big change for me: my job suddenly switched from 3D modeller to graphic designer. One of the main challenges was how to display a texture on the SmartSlab and still have it look like plastic. This is where I came in with my magic. First, I got a highly detailed image of a plastic bag to see how it looked on the SmartSlab. As expected, our group couldn’t even tell that there was plastic on the screen, so I took that high-res image into Photoshop, where I played with the contrast of the image and changed the colour so that it matched the Arduino colour. After that we ran another test, but unfortunately it still wasn’t displaying what we wanted to see, so it was back to Photoshop. After the contrast method failed, I had to think of other ways to make the plastic look like plastic on the SmartSlab, so I did my own experiments with filters, colour balance and contrast and was able to produce some nice images.

After we had tested each of these images, we could see that the picture below is the one that most resembled a plastic bag on the SmartSlab.

So those were some of the changes we implemented. One of my team members will soon be uploading the report, which will speak more about the process our assignment went through.

 

 

Simplicity Final Blog Post

12:05 AM in General by Hendy Santono

Concept

The concept of “Bokehlicious” originated from our team’s ideas (Simplicity). The combination of our grounding research and ideation produced the idea of relaxation. The purpose is to give passers-by a unique experience through the screen. We would like to capture their interest, take them out of their daily routine and encourage them to notice the surrounding environment. Our concept is to construct an urban installation that makes a connection between pedestrians and media facades.

http://www.youtube.com/watch?v=VWubD7-9-_E

Concept Development

Previously, we were using the color of the user’s t-shirt to set the color of the bokeh inside their silhouette, but when user testing was conducted we found that most people produced gloomy-colored bokeh inside their silhouette. Following the advice Deborah gave us, we scrapped the t-shirt-based color and fixed the palette to bright colors.

Currently, the silhouette emits a bright color that users can change when their silhouette collides with another silhouette.

At first, we had some cracks in the background because of bad background extraction, but during user testing we found some interesting behavior: users actually used the cracks in the background (which emit bokeh) to create a different color, extracting bokeh from them and putting it inside their body. Based on this observation, we made the cracks more readily usable.

The bokeh inside the silhouette represents the colors of life that naturally occur in daily life; we give users the freedom to choose which color represents them by interacting with other people and changing the color of their bokeh.

The background represents nature in the city, and the weather makes the environment look real and dynamic, supported by sound as background music.

Developing the Prototype

Our original concept was based on relaxing sound, a nature background, a weather forecast and an interactive sketch running on the screen.

Throughout development, our aesthetic constantly changed into what we now have, based on testing on the SmartSlab, feedback from users who tried our prototype and, most importantly, advice from Deborah when she saw the SmartSlab screen.

Exhibition Night & Reflection

At the exhibition, we had a problem capturing an image to extract the background, because people kept moving around in front of the screen.

Through the exhibition, we found that people do enjoy doing stuff in front of the SmartSlab; even people who weren’t paying attention to the screen felt a dramatic change of mood when our sketch came on (thanks to the contrast between the serious theme of the team before us and our team’s theme of relaxation).

The exhibition was great; I loved how people participated with the screen. They looked like they enjoyed it so much and found something new they had never seen before. Our suggestion is to put a short description or a poster next to the screen to let people know the concept of the installation.

Technical:

Hardware & Software:
- MacBook Air notebook
- Microsoft Kinect
- Processing
- borrowed MacBook Pro notebook

Processing Library:
-openkinect
-opencv
-minim

Credits:

General Structure
Based on kinect toolkit provided by Rob Saunders

Background Removal
Based on Dan Sullivan’s background removal
http://itp.nyu.edu/varwiki/Classword/HRS10code#KinectRemoveBackground

Center point & blob tracking
Based on kinect toolkit provided by Rob Saunders
Based on the opencv blob example

Rain Effect
Based on David Miro
http://www.openprocessing.org/visuals/?visualID=2319

Improved based on Anastasis Chandras
http://www.openprocessing.org/visuals/?visualID=9299

Bokeh Particle System
Based on particle system from Nicholas Kelly

Based on Ari Prasetyo
http://www.openprocessing.org/visuals/?visualID=9152

References:

Grounding Research
http://www.idea9102.net/wp/archives/568

Ideation Q&A
http://www.idea9102.net/wp/archives/1122

Simplicity Progress Report (Interim Presentation)
http://www.idea9102.net/wp/archives/2396

Smartslab Basic Prototype Testing
http://www.idea9102.net/wp/archives/2604

User Evaluation
http://www.idea9102.net/wp/archives/3274

Special thanks to:
- Rob Saunders
- Martin Tomitsch
- Nicholas Kelly
- Deborah Turnbull
- Keir
- Chris Law

We are nothing without you guys :)

Update #5: Getting physical

11:01 PM in General by Inez Ang

Getting Arduino to work in Processing is simple enough with a step-by-step guide here.

Arduino + PIR sensor
To begin working with our Sparkfun PIR sensor, we used a great sensor report by an ITP student. Based on some forum threads, analog values could be used to measure distance so we tried extensively to find patterns in the values, making data logs in text files along the way. We gave up in the end because the values were horribly inconsistent.

Since the PIR sensor is constantly streaming information and changes quite quickly, tying our state machine directly to it made a real mess of the interface, with bubbles going off and on all the time and the sound terribly disrupted. Thanks to Jeremy Blum, we understood the concept of interrupts and found a wonderful Processing code example on software debouncing to stabilise the reading. *We also love the fact that the code was written for a tweeting cat mat :)*

This debouncing technique was so useful that we implemented the same logic to fine tune our gesture recognition algorithm.
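For anyone wanting the gist of the technique, here is a minimal sketch of time-based software debouncing in Processing: a new reading only becomes the official state after it has held steady for a while. The 400ms window is a placeholder, not the value we used, and the debounce code we adapted came from the cat-mat example linked above.

```
// Minimal software debounce: a raw reading must hold its new value for
// DEBOUNCE_MS before the rest of the sketch (the state machine) reacts to it.
int stableState = 0;        // the debounced value the state machine uses
int candidateState = 0;     // the value the sensor is currently reporting
int candidateSince = 0;     // when the candidate value first appeared
int DEBOUNCE_MS = 400;      // placeholder hold time

void debounce(int raw) {
  if (raw != candidateState) {
    candidateState = raw;           // reading changed: restart the timer
    candidateSince = millis();
  } else if (candidateState != stableState && millis() - candidateSince > DEBOUNCE_MS) {
    stableState = candidateState;   // held long enough: accept the new state
  }
}
```

The same pattern, applied to the gesture classifier instead of the PIR pin, is what kept our bubbles and sound from flickering on and off.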

Arduino + Relay Shield
Although we could have built our own relay circuit to control the air pump, neither of us had worked with voltage higher than 9V and honestly, the symbols on the schematics were a little too much to handle at this point. Choosing a relay shield required a bit of research into how much electrical load we needed (220V – 240V) and how much the relay could carry. We settled for the SeeedStudio shield due to availability and the fact that it switches 120V – 250V.
In theory, it’s quite simple. The air pump is plugged into the wall socket as usual, while the negative wire feeds into the relay. This relay is a Single-Pole-Double-Throw type (SPDT, thanks Make: Electronics!), so you connect one end to COM (the common terminal) and the other to either NO (normally open) or NC (normally closed). According to the data sheet, the only difference is whether the output for ‘on’ is 0 or 1. The rest is up to the code, which tells the relay to connect or disconnect the power and so switch the pump on or off.
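Code-wise, switching the pump from Processing through the Firmata-based Arduino library looks roughly like this. The pin number is a placeholder, and whether HIGH means “pump on” depends on whether the pump is wired to NO or NC.

```
// Sketch of switching the air pump via the relay shield from Processing.
// Pin number is an assumption; HIGH = pump on assumes the NO terminal is used.
import processing.serial.*;
import cc.arduino.*;

Arduino arduino;
int RELAY_PIN = 7;   // digital pin driving the relay channel (placeholder)

void setup() {
  arduino = new Arduino(this, Arduino.list()[0], 57600);  // board runs StandardFirmata
  arduino.pinMode(RELAY_PIN, Arduino.OUTPUT);
}

void pumpOn()  { arduino.digitalWrite(RELAY_PIN, Arduino.HIGH); }  // close the contact
void pumpOff() { arduino.digitalWrite(RELAY_PIN, Arduino.LOW);  }  // open the contact
```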

Click to play video

In our haste, we cut both cables when we should have only cut one, so it was out with the soldering iron, heat-shrink tubing and gaffer tape to be doubly sure. It was an ecstatic moment when the bubbles came on, but it soon turned into a ???? moment when we realised there would be no pins left if we attached the shield. A lot more Googling later, the answer was to break out the shield onto a breadboard to free up some pins.

The submersible LED was chosen due to its low power consumption (6V, 30mA). To save $35 buying a transformer for it, we decided to take a calculated risk – snip off the original power pin, expose the leads and stick it into the Arduino. Fortunately for us, it worked with a 100Ω resistor to protect it. The resistor value was calculated using the formula R = V/I.

 

Update #4: Dang those algorithms!

10:06 PM in General by Inez Ang

 

Cracking the gestures
Probably one of the trickiest coding hurdles for us was figuring out how to tell the different gestures apart. We first operated on a grid model that tracked the time the user takes to pass through different areas. If more time was spent in one area, it would be a STILL gesture; crossing more areas in a short amount of time would be a LONG SWIPE. But this code proved very cumbersome and overly tedious to implement.

After a good night’s sleep, we realised a more fluid and accurate model would be to track the distance between co-ordinates over time. This way we could dynamically set boundaries and get point-accurate co-ordinates for the elements to react to. An array was created to store blob centroids every even frame (for better performance), and time could be calculated by looking at the co-ordinates at specific array positions, e.g. to find out how far the hand has moved over a second (at 30fps), the co-ordinates at positions 29 and 14 would be compared.
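A minimal sketch of that idea is below: a rolling buffer of centroids sampled every even frame, with the travelled distance over the last second used to classify the gesture. The thresholds are placeholders, not our tuned values.

```
// Rolling buffer of blob centroids, stored every even frame as described above.
// At ~30fps this holds about two seconds of positions (30 samples at 15/sec).
ArrayList<PVector> history = new ArrayList<PVector>();

void recordCentroid(float cx, float cy) {
  if (frameCount % 2 == 0) {
    history.add(new PVector(cx, cy));
    if (history.size() > 30) history.remove(0);
  }
}

// Distance travelled over the last second: compare positions 29 and 14,
// i.e. 15 samples (~1 second) apart.
float distanceOverLastSecond() {
  if (history.size() < 30) return 0;
  return PVector.dist(history.get(29), history.get(14));
}

// Placeholder thresholds: little movement reads as STILL, a lot as LONG SWIPE.
String classifyGesture() {
  float d = distanceOverLastSecond();
  if (d < 10)  return "STILL";
  if (d > 150) return "LONG SWIPE";
  return "NONE";
}
```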

Compensation for those blobs
During development, we decided that we would only focus on crafting the interface for 1 user. After scrutinising the OpenCV API, we discovered that the blobs returned in the blob array are in the order of largest to smallest. To filter out all the other extraneous hands, we would track only the centroid of the first (biggest) blob.
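In code, the filtering boils down to taking blobs[0]. The sketch below follows the old hypermedia OpenCV library our pipeline was built on; the exact method signatures and threshold value are from memory, so treat them as assumptions.

```
// Take only the largest blob (blobs[0]) as the tracked hand.
// Library calls follow the old hypermedia OpenCV binding; signatures assumed.
import hypermedia.video.*;

OpenCV opencv;

void setup() {
  size(640, 480);
  opencv = new OpenCV(this);
  opencv.allocate(640, 480);              // working buffer for the depth image
}

// depthImg would come from the Kinect's depth map each frame
PVector biggestHand(PImage depthImg) {
  opencv.copy(depthImg);
  opencv.threshold(80);                   // placeholder threshold
  Blob[] blobs = opencv.blobs(100, width * height / 2, 10, false);
  if (blobs.length == 0) return null;
  // blobs come back ordered largest to smallest, so index 0 is the hand we track
  return new PVector(blobs[0].centroid.x, blobs[0].centroid.y);
}
```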

The results of our field test showed a problem with our hand tracking algorithm. If only the hand is in the Kinect’s FOV, the center point of the hand would be accurate. However, if the user stretches across the tank and the arm is in the FOV as well, the center point would end up in the middle of the arm. This offset threw the responsiveness of the system out the door.

Fixing this problem was a laborious task, requiring us to generate data logs and find patterns among all the numbers. Fortunately, there was a consistent correlation between the size of the blob area and the amount of x/y offset, and we were able to fine-tune our hand tracking.

Click for video

City___ The Final Word

9:31 PM in General by Jonathan McEwan

CITY___ is an interactive, social experimental artwork which visually represents feelings in an attractive fluid dynamic simulation. These feelings are sourced in real-time from Sydney centred social network content.

Three modes of interaction work cohesively together to form the basis of CITY___’s “Interaction Onion”. The primary layer is a passive interaction mode where feelings sourced from social networks are streamed into the visualisation as jets of colour. A reactive interaction mode forms the secondary layer, which blooms immense colour into the simulation in response to people’s answers to a question posed at the mobile site: http://cityunderscore.pgcreative.net. The tertiary layer is an interactive mode that captures people’s movement to swill the fluid injected by the first two layers.

 

Conception

City Mood concept story board - Phil Gough

CITY___ was created from the amalgamation of the concepts created by Phil Gough and Jonathan McEwan.

Phil’s concept (Grounding; Ideation) was to translate the feeling of Sydney at any point in time into a vivid visual display. Jonathan’s concept (Grounding; Ideation) was to provide a colourful fluid simulation as an abstract urban playground.

Through this combination, CITY___ was born. For more details of the project concept please visit: Team Plasma’s Ideation, CITY___’s Name Day and City___ High Resolution Images blog posts.
  

Challenges

Colours in Culture - informationisbeautiful.net

The process of bringing CITY___ into fruition wasn’t without its difficulties. These challenges were broken down and addressed as part of our design process.

Most of our design decisions were conceived in our individual ideation process, and then ironed out in our combined design. See Team Plasma’s Ideation, and In response to all your feedback, mostly about colours… blog posts.

The most tedious challenge of the project was balancing the visuals of the fluid dynamic simulation. There were a number of variables which required fine-tuning to achieve the right complement of aesthetics we had envisaged. In addition, the fluid simulation library (MSAFluid) was only configurable to a point, and required us to directly modify its behaviour. For more information regarding the challenges and solutions during the implementation phase, please refer to the three-part blog post: CITY___ Implementation Process: Part I; Part II; Part III
  

Observation Exercise and User Reaction

Before the exhibition, City___ was displayed on the SmartSlab to evaluate its progress. This was a good opportunity to observe how people behave with such an unusual display introduced into their surroundings. The simulation was allowed to run for a few hours, while users were observed. This observation highlighted three reactions.

  • Ignored - Many people paid no attention at all to the project. They may not have noticed that the screen was displaying anything at all. This is probably because the screen is a normal part of the environment, but has not been used very often.
  • Novelty - Some people saw the animation as a novel change from their routine. The display provided some enjoyment, or distraction, but they didn’t attempt to engage themselves with the project
  • Investigation - Some people took time to sit and observe the simulation. This gave us the opportunity to discuss the project with them, and find out their thoughts. All of the users who we were able to chat with said that they enjoyed the aesthetics display but many did not realise that the display was interactive. This was probably due to the orientation of the screen relative to the pedestrians. Of the group of users who took time to play with the screen, after observing others interacting with it or through conversation with us, there were some who introduced and demonstrated the project to their friends and discussed the project without our intervention.

We see this situation as an ideal result for our project.
 
 

Evaluation & Improvements

After our testing, dry run and exhibition night, we took the time to reflect on the success of our implementation. Our final product was very close to the vision we conceived in our ideation phase. That being said, there were a few areas which could be improved to boost the quality of our design, especially with the transition from the Grid Gallery to the SmartSlab. The fundamental shift was from an environmental art piece for an urban façade to an exhibition-based performance for a free-standing screen. Challenges initially unforeseen while trying to satisfy the design brief were uncovered through this transition.

The use of a standard web cam over the Kinect was a decision made in response to the context of the urban setting. In comparison, a web cam is inexpensive, and still afforded us the same level of motion capture as the Kinect. What we sacrificed in this decision had some major ramifications for our presentation in the context of the Wilkinson building courtyard:

  1. No defined interaction zone - Use of the Kinect would give us access to depth based movement detection which would allow us to define a set area for interaction. Without this defined zone, a plethora of noise was generated from the large crowd of people standing in the background. During the exhibit this overload of movement made it more difficult to interpret what was going on and derive meaning from the project.
  2. Inadequate light source - Even with our calibration tool (via TouchOSC) we found a limit to the web cam’s ability to compensate for low light conditions. In theory, as the Kinect has its own light source (infrared projector) and specialised camera, it would be better suited to the lower light conditions experienced during the exhibition.

What we would potentially lose by using the Kinect is the blanket cone of movement detection the web cam gave us. To highlight the importance of this, we found that the web cam would pick up the movement of traffic through the entrance of the courtyard. As an unexpected side effect of the web cam’s position, the traffic affected the screen in a unique and obvious way. We found that this triggered a connection in people’s minds between their actions within the immediate space and the consequences on the screen.

Our design didn’t include sound due to the absence of speakers at the Grid Gallery site and concerns about vandalism. This was carried over to the design at the Wilkinson building. From observing other presentations, it was evident that people’s immersion was greatly enhanced by the presence of audio and music. Our final prototype incorporates sound, but only to satisfy the performance-based shift required for the exhibition setting.

One of our main concerns during the exhibition was the absence of explanation. We found that during the exhibit, people required encouragement to visit the site rather than investigating/exploring of their own accord. Once the concept was explained, people were impressed by the ingenuity and amazed at the interaction possibilities at their command. Inspiring a person to explore and/or investigate the concept was a hurdle we had not anticipated in the exhibition context.
 

That’s a wrap!

Overall our efforts during this project have been extremely educational and have allowed us to glean valuable insights into urban design and media facades. The prototype of CITY___ shows an applied design process through the success of our presentation during the exhibit.

Simplicity: User Evaluation

9:00 PM in General by Hendy Santono

Hi guys!
Sorry for the late update from us. Unfortunately, our blog post didn’t go through and was saved as a draft; we only just realised and are posting it again. After we did the user evaluation on Tuesday 7 June 2011, we found a few major problems, such as:
- Bokeh kept changing colour (blinking) because of background noise
- The background (theme) should change after 5 people have interacted, but it kept changing (blinking) as well because of background noise
- Bokeh left marks/trails whenever people moved

Following the user evaluation, we discussed the aesthetic point of view with Deborah and decided to change our colour concept to make it more striking. Previously, our concept was to change the colour based on the t-shirt the user was wearing; we coded this previous concept and managed to get the middle point of what the Kinect sees.

We then tested on the SmartSlab. Unfortunately, the colour the Kinect picked up was always white, because the dominant colour there is white. Our new concept uses predetermined bright colours, and the colour changes based on how many people are in front of the screen. The purpose is to make people feel relaxed and to make it more concrete, which is the main purpose of this project.

This video shows what we got while we did the user evaluation

To deal with the background noise, we also changed how the background switches. To keep the background stable, we now only change the background when the ‘w’ key is pressed.
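The idea is just a manual re-capture of the reference frame; a minimal sketch of it is below. The variable names are placeholders, and the live frame is assumed to be refreshed from the Kinect/camera elsewhere in the sketch.

```
// Manual background capture: the reference frame only updates when 'w' is pressed,
// so a moving crowd can no longer make the background "blink".
PImage bgFrame;        // stored reference frame used for background extraction
PImage currentFrame;   // assumed to be refreshed each draw() from the Kinect/camera

void keyPressed() {
  if (key == 'w' && currentFrame != null) {
    bgFrame = currentFrame.get();   // snapshot the current frame as the new background
  }
}
```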

The Journey of a shoal of red Carp – Final Report of Interactive Installation “Fancy Carp”

9:00 PM in General by tao deng

Project Introduction

“Fancy Carp – My water babies in Grid Gallery” is an interactive installation designed for Grid Gallery – a large-scale outdoor panel outside Energy Australia’s new City North zone substation. Grid Gallery is a 15m(w) x 1m(h) panel composed of many LED (Light Emitting Diode) enabled facade modules.

Grid Gallery

 

Fancy Carp installation on the SmartSlab (test version)


Ideation

The journey started about two months ago. Back then, these red-colored fancy carp were a group of orange or verdant tropical fish living in a huge water tank. In our team member Ted’s conception, there were light-footed bubbles, limpid sea water, slender seaweed and even a huge blue shark: an underwater world full of childlike imagination and pure color tones.

Ted’s Fish Tank Ideation http://www.idea9102.net/wp/archives/1523

 

Seeing a spot full of nature’s own colors and motions is always pleasant; deep inside, we are probably still those funny-faced monkeys living in a wild jungle. Nature, especially water, can arouse vague and remote memories from when we were still fetuses. Yet more significantly, we had to give our “nature-featured” installation a bit of character in order to make it appealing.

 

If I had the freedom as a designer to reassemble my memories, I would want to relive the scenes that are most beautiful and enduring. Fancy carp are almost my immediate thought when it comes to water gardens and ornamental fish, as they probably are for many other people of Chinese origin. Those red fish gather in the hundreds or even thousands in the water in spring; they carve themselves into your memory with their amazing mass appearance. They’re probably not strikingly beautiful, but they’re truly unique in being able to stay this close to each other.

Traditional Chinese Painting themed Fancy Carp


Famous Tourist Attraction in City Hang Zhou

“Fancy Carp” – My water babies in Grid Gallery Concept: http://www.idea9102.net/wp/archives/1946

 

Final Concept

To reflect the spirit of “Urban Playground”, as well as to address the harmonious interaction between the screen and passers-by, we incorporated three kinds of interaction in terms of how the screen responds to the motions detected by the Kinect:

  • Feeding – fish swim forward to shadow.

  • Shadowing – shape of shadow and ink painting

 

Upon Rob’s suggestion, shadowing became the core interaction of the installation. In a way, “shadowing” is also the embodiment of the most adorable characteristic fancy carp possess. Within a shoal of red carp, the boundaries between different kinds, shapes, sizes or any other diversity no longer exist; as a species, they simply stay close to each other, and that makes the appearance of a shoal of fish an amazing scene.

 

External Influences

Design Influences: The Chinese painting style we rooted our design theme in gave the installation its own character; however, in a way it also constrained the potential design space, given that we had to use simple color tones. There is sometimes another version of Fancy Carp in my mind for the presentation night; in this virtual existence of Fancy Carp I see only a group of red carp and color paint drawn on the screen. And that is beautiful enough.

 

Internal Influences: All of our lecturers and tutors, as well as Deborah Turnbull and Keir Winesmith, provided enthusiastic support during studio time on our group project. Although there may be debates when working as a group, I found it good fun to work as a team. Communication brings lots of satisfaction to both parties, as we’re all social animals. Successful communication is probably also the essential spirit of any art piece.

 

External Influences: The best thing about studying in a studio environment is that you can always learn from others. This semester, we were lucky enough to work alongside some excellent groups who are brilliant at forming, designing and executing concepts. Not only did we as a team learn abundantly from the other groups, but as a little community from a Chinese background in Australia we also learned from other cultures and the excellent qualities they displayed in the studio. Precision, diligence, enthusiasm, persistence and of course teamwork, as well as many other qualities, may seem irrelevant to the design, but they are the basic elements of any successful group project.

 

The Installation

We went through different stages during the design process. The journey of a group of red Carp starts from

A fish vs. a group of fish: http://www.idea9102.net/wp/archives/2304

This post explains how we drew a fish and a group of fish in Processing. This was the first version of the fish we wanted to display on screen. We drew a few fish gif pictures in Photoshop and coded them to be animated, but then we found a sample called “flocking” in the Processing examples. We found more “flocking” examples on the OpenProcessing website, and then we made the fish move in a “flocking” way.

“Negotiation between Realities and Imaginations”

http://www.idea9102.net/wp/archives/2551

This was the test we did on the SmartSlab to choose a black over a white canvas as the background. We used fluid as ink to match our design idea – Chinese ink. We considered a white background with black ink, but after testing on the SmartSlab the white background was too bright, so we decided to go with the new idea.

Eventually they’ll get there, slowly but steadily – Fancy Carp

http://www.idea9102.net/wp/archives/2762

Once the red carp were given interactive capabilities, they became extremely slow. The face detection code was making the animation run very slowly; Rob helped us fix this problem.

 

“Fancy Carp Evaluation on Presentation Night”

http://www.idea9102.net/wp/archives/2905

 

We changed the Kinect detection code during the project. In the first version, we used the average point of the image to control the fluid ink force and the center of the blob to control the fish direction. We changed the detection to use the center of the blob only, because we found the original code too complex.
Here is a short and sweet interaction video of Fancy Carp that we took on the presentation night. Once again, it proved that “responsive creatures are always capable of triggering a whole range of positive emotions in us”, and here the responsive creatures are the people interacting with our installation.
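For readers curious about the centre-of-blob steering mentioned above, here is a small sketch of the Shiffman-style “seek” behaviour that drives a fish toward the detected blob centre. The speed and force values are placeholders, not our tuned settings.

```
// Steering a carp toward the blob centre, in the spirit of the flocking/steering
// examples we adapted. maxSpeed and maxForce are placeholder values.
class Carp {
  PVector pos = new PVector(random(width), random(height));
  PVector vel = new PVector();
  float maxSpeed = 2.5;
  float maxForce = 0.05;

  void seek(PVector target) {
    PVector desired = PVector.sub(target, pos);   // vector pointing at the target
    desired.normalize();
    desired.mult(maxSpeed);
    PVector steer = PVector.sub(desired, vel);    // steering = desired - velocity
    steer.limit(maxForce);                        // gentle turns keep the shoal calm
    vel.add(steer);
    vel.limit(maxSpeed);
    pos.add(vel);
  }
}

// In draw(), each fish is steered toward the blob centroid from the Kinect:
//   for (Carp c : shoal) c.seek(blobCentre);
```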

 

 

Findings & Conclusions

  • Design Constraints V.S. full expressions

If we could redesign the project, I wish we would seriously consider the constraints an LED screen brings to the visual results. We should also spend more time refining the visual expression. Delicate patterns and fine lines are barely visible on the screen, yet with all those neatly arranged, beehive-like little illuminants, the screen is an animal in itself with its own characteristics. It’s suitable for boisterous music and fun, continuous motion; it’s also good for calm scenes and soft, gentle colors. There just has to be a fine balance between accommodating the screen’s constraints and exploiting its power in terms of visual and audio expression. But one installation can only have one dominant feature, otherwise it looks bad.

  • Nature’s own rhyme

Nature is forever a treasury of endless inspiration; not only that, it’s also the finest artist, making amazing art with the richest color tones and the most exciting motions. We as designers or animators are sometimes more like imitators: we put the motions of animals’ heartbeats on the screen, we try to assemble the scenes that touched us in our cognitive world and try to engage others. However, artists who are not constrained by nature’s own rhyme will be more creative and fulfilled. In the world drawn by Pablo Picasso, things are distorted and misplaced – probably because the world looks different to everyone, and only the greatest can reproduce a world with a brand-new look that we’ve never seen.

  • The role of randomness during design process

While we emphasised and tried our best to work to the strict schedule of a group project, there is always randomness during the design process. If we hadn’t tested black and white backgrounds, we would probably have a white background for the fish; if we hadn’t had a pre-presentation night, we might have had lots of particles on the screen on the big night. It’s amazing to have fulfilled our design concept after going through all the changes and problems. If there is anything unique about our installation and design, it’s probably because we went through lots of randomness, and that makes our theme and installation truly special – as was our experience in the Installation studio as a team!

 

Thanks everyone who helped us to complete this project.

Update #3: Fishee fishee

8:33 PM in General by Inez Ang

 

Interface design

Taking away the results from the last screen test, we’ve reworked the look and feel inspired by one of the images from the mood board.

 

Based on the second test, our images are looking way better. Crisp shapes, solid colour backgrounds, soft glows when needed. It reminds us of deep sea creatures and looks slightly surreal. Perfect for our nocturnal demo. Now I’m starting to feel calm.

Click for video.

While interacting with it, George pointed out that he couldn’t tell which screen reaction was linked to his action and suggested making the food blink. He made a lot of sense, because the screen is such a large area that the eye can become lost in all the visual information. Plus, because the graphics are so low-res, the mind has to actively fill in the details while trying to figure out what’s going on. Hence, feedback has to be horribly obvious and call attention to itself. So George, now we shall have blinking food and fish. Thanks!

We found the best way to animate the fish was to create a gif animation (thanks Didit!) and store it in a Fish class with infinitely customisable variables. That way we could change the size, speed and colour. We tried all sorts of ways to change the colour of the loaded gif and finally realised, looking through the GifAnimation library API, that we had to first load the animation into a PImage array. To make multiple fish blink for 5 seconds each after they eat the food, we had to implement a counter for each fish object AND store each blinking fish in an ArrayList AND delete each one when it’s done. I’m rambling but you get the point… It’s a helluva lot of work for blinking fish, George.
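Stripped down to its bones, the blink bookkeeping looks something like the sketch below. The frame array is assumed to be already loaded from the gif, and the blink rate is a placeholder.

```
// Skeleton of the blinking-fish bookkeeping: a per-fish timer plus an
// ArrayList of currently blinking fish. fishFrames is assumed to be loaded
// from the gif elsewhere (as a PImage array, per the GifAnimation API).
PImage[] fishFrames;

class Fish {
  float x, y, size;
  int blinkUntil = 0;    // millis() timestamp until which this fish blinks

  void display() {
    boolean blinking = millis() < blinkUntil;
    if (blinking && (frameCount / 4) % 2 == 0) return;   // skip drawing: the "blink"
    image(fishFrames[frameCount % fishFrames.length], x, y, size, size);
  }
}

ArrayList<Fish> blinkingFish = new ArrayList<Fish>();

// Called when a fish reaches the food: blink for 5 seconds
void startBlinking(Fish f) {
  f.blinkUntil = millis() + 5000;
  if (!blinkingFish.contains(f)) blinkingFish.add(f);
}

// Drop fish from the list once their blink time is up
void updateBlinking() {
  for (int i = blinkingFish.size() - 1; i >= 0; i--) {
    if (millis() > blinkingFish.get(i).blinkUntil) blinkingFish.remove(i);
  }
}
```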

The swarming was achieved by hacking together various pieces of code from Daniel Shiffman’s Autonomous Steering Behaviours tutorial and Ambient Aquarium by Miles Glass. It was quite a task to understand these fairly complex algorithms, but Shiffman’s Kickstarter project ‘Nature of Code’ gave us a good head start with its chapters on vectors and particle systems.
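
At the heart of those behaviours is Reynolds’ ‘seek’: steer by the difference between the velocity you want and the velocity you have. A bare-bones illustration (not the tutorial code itself):

```
// Bare-bones 'seek' steering (after Reynolds / Shiffman), illustrative only.
PVector pos, vel;
float maxSpeed = 3, maxForce = 0.1;

void setup() {
  size(600, 200);
  pos = new PVector(width / 2, height / 2);
  vel = new PVector(1, 0);
}

void draw() {
  background(0);
  PVector target = new PVector(mouseX, mouseY);

  // desired velocity points straight at the target, at full speed
  PVector desired = PVector.sub(target, pos);
  desired.normalize();
  desired.mult(maxSpeed);

  // steering force = desired velocity - current velocity, clamped to maxForce
  PVector steer = PVector.sub(desired, vel);
  steer.limit(maxForce);

  vel.add(steer);
  vel.limit(maxSpeed);
  pos.add(vel);

  noStroke();
  fill(255);
  ellipse(pos.x, pos.y, 12, 12);   // our lone 'fish' chasing the mouse
}
```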

The red particles represented our attempt to play with the virtual space and introduce a little flight of fancy; they are an implementation of a code library from Generative Gestalt. To get the glow on the bubbles, we tried using blurs but found they slowed the system down heaps (yes Rob, blurs are expensive). Lucky for us, Takumi had posted his beautiful glowing bubbles on OpenProcessing.org. The algorithm is quite clever really: it’s just 30 ellipses slightly increasing in size, each with a stroke of decreasing opacity to give that tapering-off effect. People can be so clever!
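
The glow trick in isolation looks something like this (a rough re-creation of the technique, not Takumi’s actual code):

```
// ~30 concentric ellipses, each a little bigger and a little fainter.
void setup() {
  size(400, 400);
}

void draw() {
  background(0);
  drawGlowBubble(width / 2, height / 2, 60);
}

void drawGlowBubble(float x, float y, float baseSize) {
  noFill();
  for (int i = 0; i < 30; i++) {
    stroke(180, 220, 255, map(i, 0, 29, 120, 0));  // fade out towards the edge
    ellipse(x, y, baseSize + i * 2, baseSize + i * 2);
  }
}
```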

 

Sound composition

The punk philosophy of making music with three chords inspired the implementation of the generative soundscape. We isolated three key user actions and assigned each an instrument with a different timbre. Each instrument comprised three different musical notes, one of which plays at random when its action is performed. As users interact, they get a nice layering of sounds, and no one ever hears the same tune twice. The sounds were created and processed in GarageBand to avoid copyright infringement.
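
One way to wire that up, for example with the Minim library (illustrative only – the .wav filenames are placeholders for the GarageBand samples):

```
// One instrument = three short samples; its action triggers one at random.
import ddf.minim.*;

Minim minim;
AudioSample[] feedNotes = new AudioSample[3];

void setup() {
  size(200, 200);
  minim = new Minim(this);
  for (int i = 0; i < 3; i++) {
    feedNotes[i] = minim.loadSample("feed_" + i + ".wav", 512);
  }
}

void draw() { }

void mousePressed() {
  // when the 'feed' action happens, play one of its three notes at random
  feedNotes[int(random(feedNotes.length))].trigger();
}
```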

Click for sound demo.

 

Interaction Design

Early in the design phase, the range of user interaction was defined by hand movement along the x and z-axis of the water. It was reasoned that having the user dip their hand in the water would produce a stronger response and hence a deeper connection with the experience. However, a test <link> with the Kinect mounted above a tank of water revealed that depth tracking through moving water was impossible. While we could have achieved this tracking using a normal webcam, the accuracy of the Kinect’s IR tracking made it a robust sensor for low-light conditions, and this was a good enough reason to redesign our interaction around it.

Now focusing on movements along the x-axis only, the speed of the moving hand and stillness became the key aspects of our interaction. To encourage stillness, we implemented an element of challenge in our design <link>. We designed three stages of responses in 20-second phases as a way to motivate users to keep their hands still for a minute. While this idea was conceptually sound, our informal user test, conducted a week before the dry run, showed that people generally did not keep their hand still while it was above the water and, when asked to, showed signs of bewilderment and discomfort <link>. Coupled with feedback from the interim presentation that public exhibitions are a more dynamic space, we came to the conclusion that overly structured interactions like this may prove ineffective. Another important point: although the system was designed for one user, users don’t really care – hands came in from all directions, often a few at a time.

Based on our observations during the test, we tried to make the system more intuitive by remodelling the system’s reaction times around how people were actually using it. We also tried to steer users towards, and catch, ‘desired’ actions to reward, rather than imposing a methodical sequence of events. Casting the fish as a skittish character gave them a consistent response to guide the user: move too fast and the fish run away, so move slowly. Here’s a breakdown of the new mode of interaction (with a toy sketch of the skittish rule just below):
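
Stripped to its core, the skittish rule is just a speed threshold (a toy illustration, not our actual code – the mouse stands in for the tracked hand):

```
// Track how fast the 'hand' moves each frame; above the threshold the fish
// scatter, below it they drift closer.
float prevX, prevY;
float fleeThreshold = 15;   // pixels per frame; tune to taste

void setup() {
  size(600, 200);
  prevX = mouseX;
  prevY = mouseY;
}

void draw() {
  background(0);
  float handSpeed = dist(mouseX, mouseY, prevX, prevY);
  prevX = mouseX;
  prevY = mouseY;

  fill(255);
  if (handSpeed > fleeThreshold) {
    text("too fast - fish scatter away", 20, height / 2);
  } else {
    text("slow and still - fish drift closer", 20, height / 2);
  }
}
```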

 

The weekend and the night before the dry run were a lot of field testing and hardware calibration for us. We finalised the Kinect’s position in the tree, made sure the OpenCV ROI was mapping well, and sorted out access to various cables and a host of logistics. Martin was very helpful in figuring out where to put what and came up with the idea of storing everything in milk crates. Here’s what our final exhibition plan looks like.

While our second user test on the dry-run night ran into some technical difficulties with the Arduino component, the interaction with the screen went quite well overall. People seemed to enjoy the aesthetics of the sound and display, and the fun of the tank as the interface. There were usually multiple hands in the tank at once, and we’re glad we decided to filter out all but the biggest blob in our code. The interaction was finally coming together, and what was left was to sort out bugs in the code. Rob and Debra also gave us some tips on how to decorate our spartan tank.
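
For what it’s worth, the biggest-blob filter is only a few lines whatever blob-detection library you use (the Blob class here is a stand-in, not the library’s actual type):

```
// Keep only the largest detection per frame.
class Blob {
  float x, y, area;
  Blob(float x, float y, float area) { this.x = x; this.y = y; this.area = area; }
}

Blob biggest(ArrayList<Blob> blobs) {
  Blob best = null;
  for (Blob b : blobs) {
    if (best == null || b.area > best.area) best = b;
  }
  return best;   // null when nothing was detected this frame
}

void setup() {
  // quick check with made-up detections: three 'hands' in the tank
  ArrayList<Blob> detections = new ArrayList<Blob>();
  detections.add(new Blob(100, 50, 800));
  detections.add(new Blob(300, 60, 2400));   // the biggest hand
  detections.add(new Blob(500, 40, 1200));
  println("tracking blob with area " + biggest(detections).area);
}
```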

Final post-Hanging Garden-Team Spicy Tuna

7:31 PM in General by Roven Yu

Project overview

The objective of this project was to design and create a playful interactive urban screen for the Grid Gallery, a 15-metre-long, 1-metre-high LED screen on the corner of Erskine and Sussex Streets near the CBD.

Our goal was to create a garden-like city space that lets audiences get closer to the natural environment after a day’s work. When no passer-by is detected, a garden scene is displayed: colorful flowers spinning, swinging, blooming and stretching at different speeds and in different directions across the canvas. Once an audience member comes closer to the screen, the garden disappears and is replaced by fairies that follow them. The audience enjoys the visual feast of a butterfly transforming into a fairy, and is surprised to see their own fairy avatar on the screen.

The final prototype was displayed on the Smart Slab located at the Architecture Faculty of Sydney University during the exhibition night.
(The link below is to our video)

http://www.youtube.com/watch?v=m1kyP8IE2MI

Inspiration

Have you ever felt, while passing a place, that you wanted to just sit nearby for a few minutes? Or found somewhere you could totally relax and refresh your mind after a day of struggling with work or study? That is why we set out to build a space that is not only beautiful but also surprises our audience in many ways.

To make our dream site come true, the main concern throughout the whole process was what kind of concept, graphics and interaction our audience would accept.

Design process

In the grounding research phase, we found that our potential audience is mostly busy workers, and that they prefer something unique and eye-catching, such as road-sign-like devices or a design with green elements.
(This is the link to the grounding research blog post)
http://www.idea9102.net/wp/archives/820

We each developed design concepts based on the user research findings. After discussion we decided to go with Paul’s idea, which contained three stages using simple but colorful graphics:

Stage 1: Interaction with sound and vehicles (flowers blossom)
Stage 2: Interaction with passers-by (vines and branches grow and follow; a butterfly eventually becomes a fairy)
Stage 3: Interaction with audiences who stay in the interaction zone (the Opera House appears)

(This is the link to the ideation and sketching blog post)
http://www.idea9102.net/wp/archives/1574
(Above are the links to Paul’s research and ideation articles since we decided to use his idea)

After refining the idea through discussion, we took the third stage out of the original design and decided to use 3D models in the scenes to see what the effect would be.
(This is the link to the project concept and progress report)
http://www.idea9102.net/wp/archives/2180

We did a series of tests of our graphics in a low-resolution environment, trying to find the most suitable glow levels and colors for our animation. We got rid of the flower patterns whose details were too fine to be recognisable on the low-resolution screen. We made another change as well: instead of the fairy leading the audience across the screen, we decided to have the fairy come from far to near, to give a more intimate interaction between audience and fairy. We collected useful feedback at the evaluation stage and made improvements accordingly; for example, some users pointed out that the display was visually impressive but easy to get bored with after staring at it for a while. To make the screen livelier and less dull, we added looping background music to ease the atmosphere and a particle system to enrich the display.
(This is the link to the evaluation report and a few testing we have made for the graphics)
http://www.idea9102.net/wp/archives/2885

http://www.youtube.com/watch?v=BsMljFostdY

http://www.youtube.com/watch?v=x7aQz0kNn7E

http://www.youtube.com/watch?v=Ex0QG_0r6oU

Then we started programming. The general approach was to import movie clips into a Processing program and implement the interaction with the Kinect. We put a lot of effort into trying to use a webcam as our sensor before digging into the Kinect, but considering physical factors such as lighting, plus the technical difficulties, we decided to use the Kinect instead. We tested different libraries with randomly downloaded movie clips to build a beta program skeleton, so that once we finished the 3D models we only needed to render out the clips (background and fairies) at the proper length and size and import them into the beta version. A minimal skeleton of this movie-plus-Kinect approach is sketched after the link below.
(This is the link to the building progress)
http://www.idea9102.net/wp/archives/2543
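
The overall structure was something like this (an illustrative sketch only: the filenames are placeholders, and a mouse press stands in for the Kinect detecting a passer-by):

```
// Two looping movie clips; presence switches which one is shown.
import processing.video.*;

Movie garden, fairy;
boolean passerBy = false;

void setup() {
  size(750, 50);                    // roughly the Grid Gallery's 15:1 proportions
  garden = new Movie(this, "garden.mov");
  fairy  = new Movie(this, "fairy.mov");
  garden.loop();
  fairy.loop();
}

void movieEvent(Movie m) {
  m.read();                         // grab new frames as they arrive
}

void draw() {
  // stage 1: garden when nobody is near; stage 2: fairy when someone is
  if (passerBy) image(fairy, 0, 0, width, height);
  else          image(garden, 0, 0, width, height);
}

void mousePressed() {
  passerBy = !passerBy;             // stand-in for Kinect presence detection
}
```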

Working with the webcam using OpenCV:

Working with average point tracking in openKinect:

Final prototype: working with blob detection in openKinect

Compared to the original idea, we got rid of the interaction with vehicles and sound, removed the third stage, and focused on the first and second stages.

Stage 1: Background movie displayed while there is no passer-by

Stage 2: Butterfly transforms into a fairy while following a passer-by

Overall, we made some major changes to our concept, graphics and interaction design. Conceptually, we had been focusing on creating a natural scene and letting fairies lead the audience through the installation; after refining the design, we re-interpreted the concept as a garden-like city space and increased the number of fairy avatars to three. To find the glow levels and graphic colors that read best on a low-res screen, we decided to keep the background simple and choose some of the brightest colors for the fairies so they would stand out even more. On the other hand, we hit many setbacks during the evaluation period, such as instability of the program after we attached the movie clips, the size of the particle system, and the narrow interaction zone.

Conclusion

Although there were many obstacles, which forced a few scope changes, they also led us to add even more variety to the project. After the ideation and sketching phase, we repositioned the concept to be simpler, and therefore more intuitive, which gave us a general shape of what we were going to create. We then refined the program again and again based on both user feedback and our own assumptions. There are still improvements we could make given more time, but we were happy to see everyone enjoying it on the launch night!

We gratefully appreciate all the support from Rob, Martin, Nick, Deborah and Keir!!

Team Spicy Tuna: Paul, Roven

Fy: Just a refresher

1:44 PM in General by susannec

Our group, Fy., is an art and design collaborative trio comprising Adityo Pratomo, Johnny Campos and Susanne Chan. Emerging from different disciplines, the cross-pollination of skills – computer engineering, music, multimedia and architecture – encourages creative and dynamic exploration.

From the original idea, Field of Bag Trees, to the new concept, Skin, the journey of this screen installation draws a smile to my face. Let me explain, starting from the beginning.

“If space junk is the human debris that litters the universe, Junkspace is the residue mankind leaves on the planet.” – Architect, Rem Koolhaas

It was this quote that ignited the idea to expand the Grid Gallery to encompass the space underneath the Western Distributor. Instead of simply promoting media screen art and screen interaction, we decided to incorporate physical object interaction, enabling various artistic genres to be showcased. The expanded gallery space transforms the unnoticed screens and the ugliness of the Western Distributor into a cultural experience, making it a destination.

Initially, we were leaning towards a sustainability idea around plastic bags and their impact on the environment. We wanted people to realise how we have neglected to consider that there are 3.76 billion plastic bags in landfill (according to Clean Up Australia), so the re-use and re-purposing of plastic bags became a focal point. Populating the bags around the screen and the highway would draw attention to the neglected space beneath the Western Distributor. The plastic bags were a metaphor for the neglected space in our minds – something we just don’t consider, as it doesn’t impact us directly. We wanted to drive this neglect home by replacing viewers’ heads with plastic bags (you can read the process behind Field of Bag Trees here). However, at this point we were told that the concept was too dark. It was by no means meant to suggest suicidal thoughts, although it’s understandable how the deeper meaning could have been lost.

After much heated and fervent group discussion, the decision to focus on our strong points provided the driving force for a better concept. We wanted to let the viewer linger in the urban playground, slowing down the city, the gallery and the neglected spaces. We wanted to invite viewers to look at the area with a different eye by giving them the time and space to gaze rather than glance, to interact rather than ignore – to experience the space sensed by the body through tactility, movement and presence. The decision to concentrate on the hapticity of the screen in combination with the physicality of the plastic bag was unanimous.


Final blog post for Liquid Light :)

1:34 PM in General by Mela

 

 


Every day, we cross paths with our familiar strangers — the melancholy man who sits on the same seat in the train, the young woman sharing our steps to the office building, the teenager with the same penchant for our favourite restaurant. We are aware of their existence, but we do not communicate with them…

Or do we?

We all consist of energy. Our auras emanate from our beings, mostly unseen… Yet the intimacy of our auras brings forth a sense of propinquity. We are more connected to each other than we think; the distances between us are not empty spaces, but are entities permeated by the energies we produce.

In our installation titled, “Liquid Light,” we attempt to visualize the invisible energies and interactions that exist between us and our familiar strangers.

 

Design Ideas

We had several ideas we wanted to implement in our installation. The primary ones were the visualization of human auras and of our interconnectedness with one another. When the viewer steps in front of the installation, a glow appears around their silhouette, alluding to their aura or energy. When another viewer enters the space, they too generate an aura on the screen, but something else is created as well: a connection, a line of energy, between the first aura and the second.

Our group liked the concept of familiar strangers and envisioned an installation that would emphasize our connectedness with them. Hopefully, this installation will start a dialogue between two familiar strangers when they get visual affirmation that they are, indeed, connected somehow.

As an individual stays longer in front of the screen, their aura slowly changes colors, hinting at the dynamism of people’s auras. People have constantly changing energies, and this was an obvious way to display that visually. The longer a person stays on screen, the more ripples form around their aura as well.

Besides making use of the SmartSlab (as required by the design brief), we also wanted to create an immersive experience that engaged the senses. Movement around the installation generates different sounds. After a few minutes, the screen fades to white and triggers a mist to gently fall onto the crowd.

 

Project Development

The project was an amalgamation of ideas from all our presentations. After we formed a group, we discussed the best features of our ideas and how they could best be combined. We created a project schedule listing the major milestones we wanted to reach each week, to ensure we completed our installation successfully before the due date.

 

Week 1
  • Aura connections
  • Water/Bubbles [see through-ish]
Week 2
  • Auras
  • Heating up, charging
Week 3
  • Bringing all the aspects together
Other
  • Mist
  • Sound

 

Here’s a quick visual run-through of how our code progressed:

[storyboard]

[01]

[02]

[03]

[04]

[05]

[06]

[07]

 

For a more detailed description on the development of Liquid Light, check out our previous blog entries. Links to our previous presentations, videos, and photos can be found in these posts.

 

All Posts: http://www.idea9102.net/wp/groups/team-liquid-light/

Initial ideation presentation: http://www.idea9102.net/wp/archives/2048

First programming attempts and mist experimentation: http://www.idea9102.net/wp/archives/2366

Processing sketch testing at the SmartSlab: http://www.idea9102.net/wp/archives/2674

More Kinect testing and project refinement: http://www.idea9102.net/wp/archives/2734

 

User Evaluation

Our group was lucky in that our initial Processing sketches translated quite well to the SmartSlab. We did not have to make any major changes to our code after testing pre-generated videos on the screen. The user testing sessions below refer to sessions where we tried hooking the Kinect up to the screen.

 

Session 1: 27 May 2011

  • The Kinect did not work properly from behind the thick glass windows. There was a definite lag between the viewers’ movements and the auras displayed onscreen. Blob detection was also quite faulty – it wasn’t detecting the blobs that well, and there was a big disconnect between the viewers’ actions and the auras displayed on screen.
    • Action: The SmartSlab was going to be moved from behind the patio glass windows to the doors, which could be opened and therefore would not obstruct the Kinect’s view.
  • Users liked the general aesthetics of the work. However, the sketch was way too bright on the SmartSlab. There were times when the screen would be overwhelmed by brightness. This, combined with the low resolution of the screen, made it really hard for people to understand what was taking place.
    • Action: We reduced the glow that was being generated by the aura connections.
  • The mist worked well, both in terms of spatial placement and height. It caused the desired “wow” effect on the audience. Everyone loved it – it was not as “wet” as we had feared it may be. However, it was only visible from a few angles, depending on how the light got reflected. For example, if the screen at that moment was particularly bright, the mist would be visible; otherwise, you couldn’t see it at all. It was nearly impossible to capture on our smartphone cameras. More than one participant suggested that we use a projector to enhance the effect and make the mist more visible.
    • Action: We decided to create a second Processing sketch that would be triggered by events in the first, main sketch. The second sketch would consist of an image of a mist. It would be projected partially on the physical mist and partially on the floor. This would give users the impression of being surrounded by water.
    • Action: We also decided to include sound in our installation to make the entire experience more immersive.

 

Session 2: 07 June 2011

[User Test Video #1] [User Test Video #2]

  • We had previously tested the mist on our laptop screens and finally decided on one with white water set against a purple background. However, when tested on the projector in front of the SmartSlab, the image was barely distinguishable. Some users suggested changing the purple background, which would help enhance the image’s overall definition and contrast.
    • Action: The Processing mist sketch was changed to display white mist on a black background. This did help visibility.

  • The interaction with the screen was responsive, engaging and intuitive.
  • The sketch appeared monochromatic. While we had programmed in many different colours, the sketch only switched to the next colour after a full minute. Since the people we tested with would flit in front of the screen rather than stay for long periods, the aura colours were golden most of the time.
    • Action: We adjusted the colour timings so the colours rotate much faster, and added more colours to the cycle (a minimal sketch of this timing follows this list).
  • So far, the audio we had found and implemented sounded pleasant and appropriate. However, since we were only testing it in the SmartSlab area, we still didn’t know what it would sound like on the big screen.
  • Setting up the sketch on the SmartSlab computer during a group performance proved to require further coordination. Finding the radio cables and settings took far longer than acceptable and caused a lack of synchronisation between image and sound.
    • Action: We implemented a “setup phase” at the beginning of the sketch’s running time, with extra controls to signal when the installation was actually “ready” for public interaction. That allowed the group to properly configure cables and make sure all settings were in place before image and sound started playing and the public could initiate the interactive experience.
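
The faster rotation amounts to little more than this (an illustrative sketch, not our production code – the palette and interval are made-up values):

```
// Rotate the aura colour through a palette based on elapsed time.
color[] palette = {
  color(255, 200, 60),   // golden
  color(120, 200, 255),  // icy blue
  color(200, 120, 255),  // violet
  color(120, 255, 160)   // green
};
int interval = 10 * 1000;   // ms per colour - much faster than the original minute

void setup() {
  size(400, 200);
  noStroke();
}

void draw() {
  int index = (millis() / interval) % palette.length;
  // blend towards the next colour for a smooth transition
  float t = (millis() % interval) / float(interval);
  color current = lerpColor(palette[index], palette[(index + 1) % palette.length], t);
  background(0);
  fill(current);
  ellipse(width / 2, height / 2, 120, 120);   // stand-in for the aura
}
```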

 

Session 3: 08 June 2011 (Group dry run)

  • With the screen moved to the other side of the patio, the interaction zone floor became a decked wooden surface, which reflected light even less than the concrete surface tested the night before. Changing the background colour of the projected mist image proved insufficient to make it visible.
    • Action: We decided to remove the projections. They were more trouble than they were worth: they took too long to set up correctly and, even once we had, they were barely visible.
  • Having lots of people interacting at the same time (a scenario we had not been able to test until then) caused the sketch’s performance to degrade a bit.
  • Sound was really good on the big audio speakers, causing a highly enthusiastic interactive reaction from the audience.
  • Mist worked very well, complementing the sketch and the sound.

 

 

Technical implementation

We used the following technologies in our installation:

 

Credits

Our project would not have been possible without the expertise of our lecturers, friends, and various colleagues from the electronic art community who have made their work freely available online for us to study and build upon! The code listed below was used for inspiration – some elements of the algorithms were retained, but they were modified when integrated into our installation code.

  • Background removal:
    Based on code by Rob Saunders (courtesy to the project)
  • Projected mist (eventually discarded from the final presentation):
    Based on “Sentient Greenhouse” by Robert Francis, licensed under
    Creative Commons Attribution-Share Alike 3.0 and GNU GPL license.
    Work: http://openprocessing.org/visuals/?visualID=22411


A Flaneur’s Trace

12:50 PM in General by Heather

We engaged the idea that the screen could capture a moment in time and keep possession of it; that a person could leave a remnant of their experience.

We wished to create a curious manifestation that would spark the interest of the pedestrian waiting on the street corner. This is reflected via an interactive projection that appears at the foot of the pedestrian. It projects an evocative ‘path’, using vivid imagery, that leads them to the screen.

Once they arrive at the screen, the passerby immediately catches sight of their silhouette reflected back at them. As they stand motionless for a moment, colour and movement begin to slowly emerge from their silhouette, encouraging the passerby to slow down and notice the curious detail surrounding them. The amount of colour they generate is proportionate to the amount of time they stand in front of the screen. They are creating a piece of art which becomes part of the space.

As they move away from the screen, their silhouette and the colour they generated remain on the screen as a trace of their experience, left for others to see. Slowly the image fades over time, replaced by other remnants of people’s moments. The trace is transient, just as a person’s place in the city is.
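
One common way to get that slow fade is to never fully clear the frame; a minimal illustration of the technique (not necessarily how our sketch did it):

```
// Instead of clearing the canvas, draw a low-alpha black rectangle each
// frame so older marks fade away gradually.
void setup() {
  size(600, 200);
  background(0);
  noStroke();
}

void draw() {
  fill(0, 5);                        // the smaller the alpha, the slower the fade
  rect(0, 0, width, height);
  fill(255, 180, 80);
  ellipse(mouseX, mouseY, 20, 20);   // stand-in for a visitor's coloured trace
}
```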

The individual who, a short time ago, was waiting on the street in a monotonous state is now a Flaneur – a wanderer of the streets who establishes a temporary relationship with what he or she sees, engaging in an unplanned journey through the landscape and encountering an entirely new and authentic experience.

 

Developing the concept

The concept for “A Flaneur’s Trace” evolved from our grounding research (Heather’s, Steph’s, Jane’s), which recognised an opportunity to take passersby out of their daily routine and immerse them in a thoughtful experience. We were able to identify that the potential users consisted predominantly of office workers, the Grid Gallery being located near the heart of the financial district. As a result, we wished to create a curious manifestation that would capture their interest, take them out of their daily routine and encourage them to take notice of their surroundings.

We began by combining Heather and Steph‘s ideas and, as a group, built on these concepts to create a sophisticated installation with the potential to develop a long-term relationship and identification between the pedestrian, the media facade and his or her environment:

Developing the prototype

Our concept would be comprised of two components: an interactive sketch running on the screen and another controlling an interactive floor projection. Using what we’d learnt earlier in the semester, we decided to build our installation using Processing and chose to use the Microsoft Kinect as our sensor.

We visited the site to make a thorough evaluation about how we would successfully incorporate the sensors and projectors required for our installation into the site surroundings. We wanted to do this in an unobtrusive manner in order to not take away from the user experience. We made enquiries to professionals who deal with the installation of sensors and from the information they provided we found ways that, in the professional world, we would be able to install the hardware and implement the brief.

Evolution of our prototype

Throughout the development of our prototype, our aesthetics were constantly evolving to incorporate feedback and findings from our testing. During the testing, we realised that the colours and effects that looked great on the computer screen were quite underwhelming when played on the Smartslab. For example, the reds, yellows, pinks and oranges seemed to be overtaken by red, which created the look of a fireball. This red fireball contrasted with the blue silhouette and made the installation look like fire and ice, portraying a theme quite different from our concept. As a result, we looked into different colour palettes and focused on the kind of mood we wanted the colours to evoke.

We also had some slight issues with the floor projection not appearing on the wooden floor boards. To combat this, we made the attractors in the floor projection white so they would stand out. We also had to minimise the brightness of the screen as it was drowning out the projection.

For the exhibition night, we added some introductory text to our installation to spark some anticipation amongst the crowd. We also decided to add music to our installation to assist in creating a mood and sparking interest throughout the entire demonstration. The final version of our processing code can be found here.

Exhibition Night & Reflections

We had a slight issue with people standing in front of the screen when we took our background image. This meant that their silhouettes were almost embedded in the screen for the whole time. This could have caused a slight confusion but thankfully, no one seemed to notice.

Initially, users wanted to move around and manipulate their silhouette through a diverse range of movements. They appeared to enjoy their silhouette and their ability to control what they were seeing. It was quite captivating to view the variety of interactions that emerged. At the outset, users were interacting with the screen individually; however, as time went by, they began to collaborate with each other and started interacting as a group.

For the first couple of minutes of the demonstration, users didn’t realise they could generate colour from their silhouettes, as they were constantly moving around. Once they caught onto the second dimension of the work (with a little input from us), they started to stand still and fill the screen with colour. As there were so many people interacting with the screen, there was a limited amount of space that each person’s colour could take up, but they still understood the concept and playfully interacted with the screen in the ways they chose.

The floor projection appeared unnoticed throughout the first half of the demonstration until Keir subtly moved in and began exploring the projection. This was intriguing to observe as everyone else caught on to what he was doing and immediately wanted to join in. As we had some slight issues calibrating the floor projection on the night (even though it worked perfectly in the run-through), it wasn’t coming out of their feet; the attractors were appearing in front of them instead. This, however, didn’t seem to matter as the users interacted with it differently and instead tried to ‘catch’ the attractors and stamp on them with their feet.

It was extremely rewarding to see our work come to fruition. We were taken aback by the variety of interactions that appeared and the feedback we received.

Special thanks to everyone who has supported us throughout the semester: Rob, Martin, Keir, Nick & Deborah :)

Ciao for now,

Skin Test (not in a Dermatologist’s Practice)

12:33 PM in General by Adityo Pratomo

After putting the whole system in place, including the physical part, we proceeded to the next step: testing how the whole artwork works as a system. We did this simple testing twice, once three days before the exhibition night and once at the dry-run session, the day before the exhibition. First of all, I won’t call it proper usability testing, because in truth we were just testing the functionality, with some degree of consideration of how users would interact with it. This post shows how we improved the prototype, going back and forth before finally calling it the final prototype.

In the first test, we focused more on the screen part, and the LEDs were simply turned on. We found that the early form of the plastic bag texture required more contrast so that it could be seen properly on the screen. We also found the best range from the screen to the audience at which the plastic bag can be recognised, which in turn affects the threshold value for the Kinect. Another outcome of this test was the decision to change the plastic bag colour to blue, so that it matched the blue of the LEDs in the plastic bags. At this stage we simply hung the plastic bags on the tree, forming a wall, but we realised this form would create a barrier between the audience and the screen, so we had to go back to the drawing board and decide where to put them. Another thing we realised is that we needed more LEDs, because 8 LEDs just won’t give a good effect on the plastic bags. Here’s an image from that first test.

The second test, during the dry run, was done so the system could adapt to the new SmartSlab environment. Sadly, due to some technical difficulties we were unable to put the interactive LEDs inside the plastic bags, but we had the LEDs working as their own entities anyway. Again, not a proper test, except for the screen, because we could still see how the screen looked with the new plastic bag texture. Even though we managed to get the screen interacting, the interaction just felt wrong, because people could only interact with it from a very close distance. I thought it was because of the threshold value for the Kinect; that was the initial suspect. This is the vital point of failure, because the screen interaction is pretty much the heart of everything, so having it less interactive would just kill the whole installation. From the visual point of view, the new plastic bag texture looked nice. And even though we missed testing the interactivity with the plastic bags, we found a good way to hang them that keeps the audience curious without distracting them from interacting with the screen.

As a solution to the problem we encountered during the dry run, I did some fixing and tweaking of the code. I then found out that the problem was not only the threshold value: with the canvas at 600 x 200, I had forgotten to re-scale the Kinect’s depth image. That small mistake proved costly, because as somebody stands further from the Kinect, only the upper part of their body is seen by the Kinect, and it gets worse at the position where we intended the interaction to happen (i.e. the position where the plastic bag texture can be seen properly). That explains why no interaction could be seen on the screen during the dry run. So a simple procedure of rescaling the image fixed the whole problem.
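
The fix, in essence, is a single rescale before the depth image is used (illustrative – ‘depthImage’ stands in for whatever the Kinect library returns):

```
// The Kinect's depth image is 640 x 480, so before it drives a 600 x 200
// canvas it has to be rescaled to the canvas size.
PImage fitToCanvas(PImage depthImage) {
  PImage scaled = depthImage.get();   // copy so the original stays intact
  scaled.resize(width, height);       // e.g. 640 x 480 -> 600 x 200
  return scaled;
}
```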

 

The Technical Aspects of “Skin”

11:36 AM in General by Adityo Pratomo

Just as a quick reminder: at this stage we had pretty much decided what our artwork would be. It has a plastic bag texture on the screen that the audience can interact with. We also have a grid of plastic bags as the physical part of the installation, representing the dynamic surface of the plastic bag on the screen. The plastic bags have LEDs on them which react to what happens on the screen, as well as to the audience’s position in the whole environment. We’re aiming to give the audience the sensation of touching skin, which is why we give them the chance to do it not only virtually but also physically, and why the installation needs a physical aspect.

From that design, I then formulated what each part of the artwork should be and how they are connected, which is shown in the picture below:

I will now describe how each part was built.

The Processing Part

This is the heart of the artwork, as the whole interaction process is based on what happens on the screen. For this, we have a plastic bag texture applied across the whole screen. This surface is then reactive to the audience’s movement in front of the screen.

The plastic bag texture was achieved in two steps. First we did a cloth simulation using the Traer Physics library for Processing: we created a grid of particles connected by springs, so that the particles’ positions and velocities can be manipulated dynamically with a degree of bounce, letting them return to their default position (i.e. flat on the screen). To further simulate a plastic surface, I applied heavy damping to the particles, so the surface only bounces once, in slow motion, which is what you expect from a plastic surface.
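
The cloth step, stripped down, looked something like this (a sketch from memory – the Traer method names should be checked against the library version you have, and the spring and damping numbers are placeholders):

```
// A grid of particles joined by springs; heavy drag gives the slow,
// plastic-like settling.
import traer.physics.*;

ParticleSystem physics;
int cols = 30, rows = 10;
Particle[][] grid = new Particle[cols][rows];

void setup() {
  size(600, 200, P3D);
  physics = new ParticleSystem(0, 0.2);        // no gravity, heavy drag
  float dx = width / float(cols - 1);
  float dy = height / float(rows - 1);
  for (int i = 0; i < cols; i++) {
    for (int j = 0; j < rows; j++) {
      grid[i][j] = physics.makeParticle(1.0, i * dx, j * dy, 0);
      if (i > 0) physics.makeSpring(grid[i - 1][j], grid[i][j], 0.2, 0.4, dx);
      if (j > 0) physics.makeSpring(grid[i][j - 1], grid[i][j], 0.2, 0.4, dy);
    }
  }
  // pin the border so the sheet stays stretched flat across the screen
  for (int i = 0; i < cols; i++) { grid[i][0].makeFixed(); grid[i][rows - 1].makeFixed(); }
  for (int j = 0; j < rows; j++) { grid[0][j].makeFixed(); grid[cols - 1][j].makeFixed(); }
}

void draw() {
  background(0);
  physics.tick();                              // advance the simulation
  stroke(255);
  for (int i = 0; i < cols; i++) {
    for (int j = 0; j < rows; j++) {
      point(grid[i][j].position().x(), grid[i][j].position().y());
    }
  }
}
```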

Once the particle part was finished, I applied the plastic bag texture on top of the particles. To do this, I created a triangle-strip polygon over the particle grid and laid the plastic bag texture on top of it. Johnny created this texture from a photo he took, manipulated further in Photoshop. While this may seem an easy part, in truth it proved tricky, as we had to find the right balance between a clearly perceivable plastic bag texture and the maximum detail that can be displayed clearly on the SmartSlab. In practice, we went back and forth several times before we nailed the final form of the texture.
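
Texturing one row of that grid with a triangle strip is the core trick (an illustrative fragment – ‘top’ and ‘bottom’ stand in for two neighbouring rows of particle positions, and it needs the P2D or P3D renderer):

```
// Lay a texture over two rows of points using a TRIANGLE_STRIP.
void drawTexturedStrip(PVector[] top, PVector[] bottom, PImage bagTexture) {
  textureMode(IMAGE);               // uv coordinates in texture pixels
  noStroke();
  beginShape(TRIANGLE_STRIP);
  texture(bagTexture);
  for (int i = 0; i < top.length; i++) {
    float u = map(i, 0, top.length - 1, 0, bagTexture.width);
    vertex(top[i].x,    top[i].y,    u, 0);
    vertex(bottom[i].x, bottom[i].y, u, bagTexture.height);
  }
  endShape();
}
```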

 

The next step was to manipulate that plastic bag texture using the image from the Kinect. Initially, I was aiming to use the coordinates from the Kinect’s depth image to dictate which particles’ positions could be manipulated. This method proved too heavy on performance, resulting in a lagging display. Rob then came to the rescue and suggested we simply use the Kinect’s depth image as a parameter to offset the particles, without having to extract each depth pixel’s x and y coordinates. The result was magical: the plastic bag surface responded to movement in front of the Kinect with no delay on the display and faster performance overall. We then had to tweak a few more parameters to make the system suitable for the exhibition, even though at that point the interactive screen was already quite impressive.
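
The spirit of that suggestion, in a simplified form (not the exact code): sample the rescaled depth image at each grid point and use its brightness directly as the offset.

```
// Brightness of the depth image at each grid point becomes a z-offset
// for that point; the springs pull the surface back when the person moves on.
float[][] depthOffsets(PImage depth, int cols, int rows, float strength) {
  float[][] offsets = new float[cols][rows];
  depth.loadPixels();
  for (int i = 0; i < cols; i++) {
    for (int j = 0; j < rows; j++) {
      int x = int(map(i, 0, cols - 1, 0, depth.width - 1));
      int y = int(map(j, 0, rows - 1, 0, depth.height - 1));
      float b = brightness(depth.pixels[y * depth.width + x]);
      offsets[i][j] = map(b, 0, 255, 0, strength);   // nearer = bigger push
    }
  }
  return offsets;   // apply these as offsets/forces on the cloth particles
}
```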

In the end, we came up with better parameters for the mapping between the Kinect’s depth image and the particles. I also applied lighting to the surface, to bring out the person’s shape on the screen and give the interaction a better effect. At this point Johnny, who has more experience with 3D, suggested that to get a more detailed display we could either make the polygons smaller, so the lighting picks up more detail, or apply a shader instead of Processing’s light methods. Due to lack of time we left this out, noting it as a suggestion for the future.

The Arduino Part

Now, if the screen is the heart, then for me the physical element of this installation will either make or break the whole thing. The lights emphasise the message we are trying to convey, which is why I put a lot of pressure on myself to make sure the lights would actually work. Luckily, technically speaking this is quite a simple job, and we already had a working prototype from our previous concept.

The idea was to turn lights on depending on where the action on the screen happened – a simple mapping. We also wanted to reward people who actually try to interact with the plastic bags, so we added LEDs that are activated by the person’s range relative to the bags: the closer the audience gets to the plastic bags, the more LEDs turn on.

To achieve that, I opted to use two separate Arduino boards. One board is connected to the Processing sketch via an XBee wireless link, so the laptop and the Arduino can sit in separate places. The other Arduino receives input from an IR sensor that senses the audience’s range. Both Arduinos are separate entities. This may seem inefficient, but for me it provides a failover system: if one Arduino stops working, we still have LEDs turning on in the plastic bags. Plus, since I had two Arduinos lying around my room, why not use them?

For the Arduino that receives input from the IR sensor, the workflow is quite streamlined. After connecting the IR sensor and the Arduino, I programmed the Arduino to receive and read the input from the sensor. I used a lookup table to interpolate the sensor readings; it’s easier this way because the raw data from the IR sensor is quite hard to read and use directly, since it’s an analog sensor. This gave me a mapping from the voltage output of the IR sensor to the range of an object from the sensor, and I then mapped that range to the number of LEDs to light up – again, the closer the audience is to the sensor, the more LEDs turn on. Quite a simple process altogether.
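
The lookup-table idea, written here in Processing syntax for clarity (the real version runs on the Arduino in C, but the interpolation logic is the same – the table values below are placeholders, not a real calibration):

```
// Piecewise-linear interpolation over a small voltage/distance table,
// then a mapping from distance to number of lit LEDs.
float[] tableVolts = { 0.4, 0.8, 1.5, 2.3, 3.1 };   // measured sensor output
float[] tableCm    = { 80,  40,  20,  12,  8   };   // distance at that voltage

float voltsToCm(float v) {
  if (v <= tableVolts[0]) return tableCm[0];
  for (int i = 1; i < tableVolts.length; i++) {
    if (v <= tableVolts[i]) {
      // interpolate between the two surrounding table entries
      float t = (v - tableVolts[i - 1]) / (tableVolts[i] - tableVolts[i - 1]);
      return lerp(tableCm[i - 1], tableCm[i], t);
    }
  }
  return tableCm[tableCm.length - 1];
}

int ledsForDistance(float cm, int totalLeds) {
  // the closer the visitor, the more LEDs light up
  return int(constrain(map(cm, 80, 8, 0, totalLeds), 0, totalLeds));
}
```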

Connecting Processing and Arduino

For the other Arduino, the one that talks to the Processing sketch, the process involved more work. The first step was deciding which part of the screen interaction would be passed on as a message to the Arduino, and how. After some tinkering, I decided that the tracked position in the depth image could be the parameter that turns on the LEDs. For this, I used one of the examples from Dan Shiffman’s openkinect library that tracks a position within the depth image. This proved light enough that it didn’t affect the overall performance of the system.

Once that milestone was reached, the next step was connecting those parameters to the Arduino to turn on the LEDs. I already knew this communication would go over a serial port. At first I had Processing do a whole bunch of serial writes, depending on the parameters from the Kinect’s depth image, to produce different LED effects. But this caused a massive lag in the system and the display. So I tried a different approach: do only two serial writes and let the Arduino interpret the data – load balancing, in short. That solved the problem.
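
The Processing side of that ‘write little, let the Arduino interpret’ approach looked roughly like this (a simplified sketch – the port index, baud rate and the two values being sent are illustrative):

```
// Send just two compact values over serial and only when they change;
// the Arduino decides what the LEDs do with them.
import processing.serial.*;

Serial xbeePort;
int lastZone = -1, lastIntensity = -1;

void setup() {
  size(600, 200);
  // pick the port your sending XBee (or Arduino) shows up on; index 0 is a guess
  xbeePort = new Serial(this, Serial.list()[0], 9600);
}

void draw() {
  background(0);
  // the mouse stands in for the tracked position in the depth image
  int zone      = int(map(mouseX, 0, width, 0, 15));
  int intensity = int(map(mouseY, 0, height, 0, 15));
  if (zone != lastZone || intensity != lastIntensity) {
    xbeePort.write(zone);        // serial write #1
    xbeePort.write(intensity);   // serial write #2
    lastZone = zone;
    lastIntensity = intensity;
  }
}
```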

That step was done over the USB port on my laptop. Then came the real part: connecting the laptop and the Arduino wirelessly using the XBee protocol. For this I used two XBee Series 1 chips, one for sending and one for receiving. This was probably the trickiest bit of the whole process, given that I had never used XBee before. But after some reading I found that a few steps are required to communicate via XBee. First of all, I had to program both XBee chips so they are on the same network; I used the application CoolTerm on my Mac to do this, programming both chips with XBee commands. After some testing to make sure they could communicate with each other, I connected the sending XBee to my laptop and used it as Processing’s serial port to send data to the receiving XBee. Voila, it worked – not as hard as I had imagined.

Conclusion

In the end we managed to get every technical part of the system working. It was quite a lengthy process, but we enjoyed every bit of it. Of course, with so many building blocks there was always a bigger risk of failure; it was then up to us to cope with it whenever a system failure occurred.