Z Looking Glass: Project SmartSurfaces

An insider's observation of Project SmartSurface

CMUcam3: a working module, but not a working CMUcam3-Arduino system (December 19, 2009)

Filed under: Uncategorized — zilinzen @ 1:51 pm

After spending nearly 20 extra hours fixing the peggy2 board I mentioned in the previous post, there were only a few hours left for me to work on the CMUcam3-Arduino system and its facial-recognition-driven motor system.

CMUcam3-Arduino Driving System

The basic programming architecture is shown below.

When the surface is in “interaction mode”, the CMUcam3 camera provides the only input to the system: it watches for people nearby. Based on the information provided by the camera, the program stored on the Arduino Mega decides on and sends the corresponding commands to the motor controllers I described in the post about the Pololu motor controller. Each motor controller drives a different part of the surface, such as the arm, shoulder, or head, and executes the commands it receives.

What we get in the end are the resulting mechanical motions carried out by the DC motors. In addition, the CMUcam3 camera provides a visual feed that controls the color mixing of the LED matrix, again through the Arduino Mega.
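To give a sense of that chain, here is a minimal sketch of how the Arduino Mega could forward a decision to one motor controller over its serial line. The start byte, device id, and direction/magnitude framing below are hypothetical placeholders for illustration, not the actual protocol values of the Pololu controllers we used.

// Minimal sketch: Arduino Mega forwarding a motor command over Serial1.
// START_BYTE, SHOULDER_ID and the direction/magnitude framing are
// hypothetical placeholders, not the real Pololu protocol values.

const byte START_BYTE  = 0x80;   // assumed start-of-packet marker
const byte SHOULDER_ID = 0x01;   // assumed id of the shoulder controller

// Send a signed speed (-127..127) to one controller.
void setMotorSpeed(byte deviceId, int speed) {
  byte direction = (speed >= 0) ? 0x00 : 0x01;   // assumed command ids
  byte magnitude = (byte)min(abs(speed), 127);

  Serial1.write(START_BYTE);
  Serial1.write(deviceId);
  Serial1.write(direction);
  Serial1.write(magnitude);
}

void setup() {
  Serial1.begin(9600);             // serial line to the motor controllers
}

void loop() {
  setMotorSpeed(SHOULDER_ID, 40);  // e.g. rotate the shoulder slowly
  delay(50);
}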

When there are no people around, the surface is in “solar tracking mode”, which means the LDR light sensors dictate the commands for the entire system. The LDRs sense changes in sunlight intensity and, similar to the CMUcam3, feed that information to the Arduino Mega, which controls the motion of the surface. That is how the surface can be “heliotropic”. Based on this architecture, it seems that as long as each module works and can establish proper communication with the other modules, it should be a piece of cake to get a “functional” and “smart” surface. Things always work as they should in theory; that is not the case in reality.
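To make the two-mode architecture concrete, the top-level loop might look roughly like the sketch below. The pin numbers, the deadband, and the helper functions are assumptions I am using for illustration; the real decision about whether a person is present comes from the CMUcam3 data described next.

// Rough sketch of the top-level loop: interaction mode when the camera
// sees a person, otherwise heliotropic solar tracking with two LDRs.
// Pin numbers, thresholds and helper names are assumptions.

const int LDR_LEFT  = A0;   // assumed analog pins for the light sensors
const int LDR_RIGHT = A1;
const int DEADBAND  = 30;   // ignore small differences in light level

bool faceDetected() {
  return false;             // placeholder: decided from the CMUcam3 stream
}

void runInteractionMode() {
  // placeholder: follow the person and drive the LED matrix color mixing
}

void panSurface(int speed) {
  // placeholder: forward a speed command to the motor controllers
}

void setup() {
}

void loop() {
  if (faceDetected()) {
    runInteractionMode();
  } else {
    // Solar tracking: rotate toward whichever side sees more light.
    int diff = analogRead(LDR_LEFT) - analogRead(LDR_RIGHT);
    if (diff > DEADBAND)       panSurface(-40);  // more light on the left
    else if (diff < -DEADBAND) panSurface(40);   // more light on the right
    else                       panSurface(0);    // roughly facing the sun
  }
  delay(50);
}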

Facial detection is implemented by an algorithm based on the well-known 2004 paper “Robust Real-Time Face Detection” by P. Viola and M. Jones. However, facial detection is not the same as facial tracking. The first thing the surface must accomplish in “interaction mode” is to “recognize and focus” on a person; if it keeps staring at a red ball on the ground, there is no way the human-machine interaction can begin. After several hours of working through the lines and lines of code, I managed to gain access to the coordinates of the sub-frame of the face detected by the camera, relative to the whole picture frame, as shown below. This breakthrough means I can set a threshold around the center of the frame: if the sub-frame containing a person's face falls outside that threshold, the CMUcam3 can send a command to the Arduino Mega, which tells the DC motors to move the body so that it is always “facing” the target person in front of it.

Once the coordinates of a face within a frame are obtained, facial tracking is possible.
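The centering test itself is short once those coordinates are available. Here is a sketch of the idea in plain C; the frame size, the tolerance band, and the struct layout are illustrative assumptions rather than the actual CMUcam3 types.

/* Sketch of the centering test.  The 176x143 frame size (assumed
 * low-resolution mode), the tolerance band and the struct layout are
 * illustrative, not the real CMUcam3 structures. */

#define FRAME_WIDTH  176
#define CENTER_BAND   20    /* pixels of tolerance around the frame center */

typedef struct {
  int x0, y0, x1, y1;       /* corners of the detected face sub-frame */
} face_box_t;

/* Returns 'L', 'R' or 'C': which way the body should turn to keep the
 * face centered.  One character is enough, since each serial command
 * to the Arduino is a single byte. */
char pan_command(const face_box_t *face) {
  int face_center_x  = (face->x0 + face->x1) / 2;
  int frame_center_x = FRAME_WIDTH / 2;
  int error          = face_center_x - frame_center_x;

  if (error < -CENTER_BAND) return 'L';   /* face is left of center  */
  if (error >  CENTER_BAND) return 'R';   /* face is right of center */
  return 'C';                             /* close enough: hold still */
}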

So far, I have successfully hacked the facial detection algorithm. All I needed then was to let the camera communicate with the Arduino Mega, and that was the biggest obstacle I was not able to remove within the six hours before the final presentation in the gallery.

When it comes to serial communication, there are lots of unforeseen bugs and glitches hidden so deep that they keep the whole system from functioning. Here is a quick catch-up for those of you who are not familiar with the subject: serial communication, at least for the Arduino, is carried out in bytes. Basically, you can only send one character or one number at a time, and the corresponding byte value can be found in the “Dec” column of an ASCII table. Since all the Arduino needs to know is whether to keep a motor on or off for the next millisecond, one character or number should do the job. When the CMUcam3 was talking to a PC via HyperTerminal, everything worked perfectly. But when I hooked the camera up to the Arduino Mega, I kept receiving three sets of seemingly random ASCII numbers instead of the single set I got when experimenting in Windows HyperTerminal. I was stuck on this issue for several hours and failed to resolve it before the final presentation started in the gallery. My guess is that I need some bitwise operations, such as arithmetic shifts, to reassemble the three sets of numbers into a valid number or character. Maybe with another two hours I would have figured it out. But the reality was that our final SmartSurface is not “smart” because of the communication failure between the CMUcam3 and the Arduino Mega, the two most important components of the “brain”.
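For what it's worth, the reassembly I had in mind would look roughly like the sketch below: read the incoming bytes in order and shift them back into a single number. The three-byte, most-significant-first framing and the baud rate are my guesses about what the camera is sending, not something I managed to verify before the deadline.

// Sketch of the bitwise reassembly idea.  The 3-byte, MSB-first framing
// is an assumption about the CMUcam3 output, not a verified format.

long reassembleValue() {
  while (Serial1.available() < 3) {
    ;                                  // wait until all three bytes arrive
  }
  long b2 = Serial1.read();            // most significant byte (assumed order)
  long b1 = Serial1.read();
  long b0 = Serial1.read();            // least significant byte
  return (b2 << 16) | (b1 << 8) | b0;  // shift the pieces back together
}

void setup() {
  Serial.begin(115200);                // USB link back to the PC for debugging
  Serial1.begin(115200);               // assumed baud rate of the CMUcam3 link
}

void loop() {
  if (Serial1.available() >= 3) {
    Serial.println(reassembleValue()); // print the reconstructed value
  }
}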

Again, things always work as they should in theory, but not always in the real world. A valuable lesson I learned from this is the danger of linear thinking: adding two working modules does not always yield a working integrated system. Anticipating this kind of integration problem and planning enough time to solve it are the steps I miscalculated in this project. The bright side is that this was my first time programming a vision system, using C/C++ skills I had never learned before. I was surprised how far I managed to shorten the learning curve, from several months to a couple of days. Experience-wise, I enjoyed it.

A SmartSurface that is not so "smart": final presentation day at the gallery. The CMUcam3 was not installed because of the communication issues.

Here is a video demonstrating some mechanical motions we managed to pull off at the very end:
