Thursday, May 14, 2015

Final Report





Teleport to Anywhere with a Single Click













University of Pennsylvania
ESE 350: Embedded Systems/Microcontroller Laboratory
Yixuan Geng, Yifan Xu


Table of Contents
Abstract
Introduction
Eye Orientation Detection
  Explore
  Solution
IMU Fusion
  Hardware
  Software
Robot Design
  Mechanical
  Electrical
  Software
Video Streaming
  Hardware
  Software
Testing and Evaluation
Conclusion

Abstract
Yi is a remote robot that provides people with a truly immersive way to see the world through the eyes of Yi. Our motivation is to give people with movement difficulties (e.g. patients, the elderly) a means of exploring the outside world.

Yi has two separate parts connected through the internet. The first part is the robot. It is driven by several servo motors that mimic the movements of a person's head and eyes, and it carries two cameras that stream 3-D video back to the user. The second part is the control and display part. We plan to use EOG or IR eye-tracking techniques to capture the movement of the user's eyes, and an IMU to capture the movement of the user's head. These inputs determine the orientation and focus of the robot and its cameras. A two-screen display shows the camera video.

Introduction
Goal of the Project
To build a remote robot and head mounted display system that can provide users with a truly immersive way to see the world through the eyes of the remote robot.
Potential Uses:
  1. Immobile people can use Yi to explore the outside world without leaving their bed or wheelchair.
  2. Can be used to improve the user experience of teleconferences.
  3. Can be used for people to explore hazardous places.
Overall architecture
Head-mounted display and sensor system
The user will use a head-mounted display and sensor system to see through the cameras of the remote robot and provide commands through the movement of the head and eyes. The display will use an Oculus-like headset (we will not be focusing on the display at this stage). To track head movement, we will mount an IMU on the headset. To track eyeball orientation, we will use the EOG (electrooculography) technique: electrodes placed around the eye will detect the change in the electric field as the eye moves. We will also design our own filter and amplification circuit. A Raspberry Pi will process the sensory information and send the orientation information over Wi-Fi to the other Raspberry Pi on the remote robot.
Remote robot
The remote robot will have 9 servos in total, plus a Raspberry Pi with a Wi-Fi module and two camera modules. The first three servos will control the roll, pitch, and yaw of the robot main frame based on the orientation of the user's head. The cameras will function as the user's remote 'eyes', and each will be controlled by 3 servos. Two servos correspond to the pitch and yaw of the eye. The third servo will adjust the focus of the camera based on the focal distance calculated from the orientation of the user's eyes (see the sketch below). This is important when the user is trying to focus on a near object. We are using two cameras to provide separate video to each eye and generate a 3-D effect, which we will explore further after the mechanics, sensing, and controls work well.
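As a rough illustration of how the focal distance could be computed from eye orientation, here is a minimal sketch of the vergence geometry (the inter-pupillary distance and the example angle are assumptions for illustration, not measured values):

import math

IPD_M = 0.063   # assumed inter-pupillary distance in meters (illustrative value)

def fixation_distance(inward_rotation_deg):
    # Distance to the fixated point when each eye rotates inward by the given angle.
    theta = math.radians(inward_rotation_deg)
    if theta <= 0:
        return float("inf")      # eyes parallel: focus at infinity
    return (IPD_M / 2.0) / math.tan(theta)

print(fixation_distance(3.0))    # about 0.6 m for a 3-degree inward rotation per eye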
Process:
The device will have a process seen in the following block diagram:
Eye Orientation Detection
For eye orientation detection, there are multiple possible methods, including EOG, infrared reflection, and image processing. We chose EOG as our detection technique because, compared to the other two techniques, it doesn't require a device in front of the eye.
EOG stands for electrooculography. This technique is based on the fact that the eyeball has an electrical polarity: the cornea carries a more positive charge than the retina, so eye movement changes the nearby electric field. If we place electrodes around the eye, the electrode voltage readings will vary with the eye movement.
Explore: Signal Processing with High-Pass-Filter
The signal first goes through a high-pass filter that removes the drifting component. It then goes through a buffer, a second-stage low-pass filter at 16 Hz, and finally a Twin-T notch filter at 60 Hz.
An interesting thing about this circuit is that the drifting of the voltage level, which was a major problem for my previous circuit, is now gone. The voltage level always stays around 0 V (or any voltage I want), which is great for microcontroller sampling because it never goes out of the 0-5 V range. The downside is that when the signal stays at some voltage level (e.g. when the user is staring at a fixed angle), the output voltage decays to zero within maybe 100 ms (the half-period depends on the circuit design). So I need some algorithm to recover the signal.
The upper-left graph is the Vout of the circuit. When I roll my eyes to the right, Vout goes up. If we zoom in, we see that the signal decays exponentially to the reference point after each peak, which means that in "free fall" (when Vin doesn't change), dVout/dt = C0 * (Vout - C1), where C0 is a constant that can be measured and C1 is the bias voltage I set. So to recover the signal, I can calculate dVout/dt, add back the "free-fall decay speed" of C0 * (Vout - C1) to get dVin/dt, and then integrate dVin/dt to get the orientation of the eye. The upper-right graph is the Vin recovered by this algorithm.
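A minimal Python sketch of this recovery idea (the function and variable names below are ours for illustration: tau is the measured decay time constant, i.e. the reciprocal of C0's magnitude, and v_ref is the bias voltage C1):

import numpy as np

def recover_eog(vout, dt, tau, v_ref):
    # vout: sampled filter output (V), dt: sample period (s),
    # tau: measured decay time constant (s), v_ref: the bias voltage C1 (V)
    dvout = np.gradient(vout, dt)           # dVout/dt
    dvin = dvout + (vout - v_ref) / tau     # add back the free-fall decay term
    return np.cumsum(dvin) * dt             # integrate to recover Vin (up to a constant)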
However, the algorithm doesn't work as nicely as above every time. See the following as an example:
The problem is that there's still the annoying drifting. Over about 10 seconds the random drift could account for as much as 50% of the output, which means that if the output changes by 100, the actual signal could have changed by any number between 50 and 100. This accuracy is unacceptable!
Also, there's the problem of choosing C1 (the reference voltage). Since there's an integration step in my algorithm, changing C1 by 0.01 V would cause a massive drift of the output. It becomes really hard for the microcontroller to set the reference voltage correctly in real time. How do I get that reference voltage, which is supposed to be constant but is actually different across different tests for unknown reasons? An averaging algorithm would be problematic when the signal is not symmetric.
So I decided to give up this circuit! This theory is beautiful, but there's always the distance between theory and reality.
Solution: Signal Processing without High-Pass-Filter
New circuit:
Not so different: just that the high-pass filter and the buffer are gone. I tried this circuit and found that the drifting became really small over short periods of time. Why so much smaller than with the circuit I used two weeks ago? I guess that circuit had too many cascaded RC low-pass filters, cascaded without any buffers in between, so the cut-off frequency was actually much lower than what I calculated (around 4 Hz instead of the 16 Hz I intended). This makes the drift relatively bigger compared to the EOG signal, since drift usually has a much lower frequency. And when I had 10 RC LPFs cascaded like that, it's no surprise that I got a crazy 5 V drift when the EOG signal was only about 200 mV. With the new circuit, the cut-off frequency stays at 16 Hz, so the drift and the EOG signal are "treated" quite equally. Over about a minute the drift is around 0.1-0.2 V when the signal is at 0.5 V.
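For reference, the textbook estimate below gives the overall cut-off of n identical, buffered first-order RC stages; my stages were unbuffered, which loads each stage and pushes the cut-off even lower, but the estimate already lands near the ~4 Hz I observed:

import math

def cascaded_cutoff(fc_single_hz, n_stages):
    # -3 dB frequency of n identical, buffered first-order low-pass stages
    return fc_single_hz * math.sqrt(2 ** (1.0 / n_stages) - 1)

print(cascaded_cutoff(16.0, 10))   # about 4.3 Hz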

Although the drifting problem is bigger for this circuit than for the previous one, the short-term performance is definitely better. Check out the experiment below testing the linear relation between the EOG signal and eye movement. The user's eyes focus on the five colored points in the graph below, one after another. From the signal on the oscilloscope we see that there is a linear relation between the EOG signal and the eye orientation, and that the short-term accuracy of the system is good.
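That linear relation is what lets us turn the EOG voltage into a gaze angle. A minimal calibration sketch is shown below; the voltage readings and angles are made-up example numbers, not our measurements:

import numpy as np

# Known gaze angles (degrees) of the five calibration points and the mean EOG
# voltage (V) recorded while fixating each of them (example values only).
angles_deg = np.array([-30.0, -15.0, 0.0, 15.0, 30.0])
eog_volts  = np.array([-0.48, -0.24, 0.01, 0.26, 0.50])

# Least-squares line: angle ~ gain * voltage + offset
gain, offset = np.polyfit(eog_volts, angles_deg, 1)

def voltage_to_angle(v):
    return gain * v + offset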
This is our prototype circuit:
We wrapped up our circuit (two layers, one for horizontal eye movement and the other for vertical):



IMU Fusion
This is our final IMU setup. It is placed on top of the user's head, tilted backward at about 45 degrees.


Hardware
We are using the LSM9DS0 as our IMU chip. It includes a 3D digital linear acceleration sensor, a 3D digital angular rate sensor, and a 3D digital magnetic sensor. The chip is connected to an mbed via I2C, as shown above, for data processing. The data from the eye detection circuit is also sent to this mbed, which packages the data and sends it to a Raspberry Pi for communication with the robot. The connection between the mbed and the LSM9DS0 is as follows:
Software
For the purposes of the project, we need real-time pitch and yaw of the user's head. For pitch (vertical head rotation), we rely on the accelerometer and gyroscope, and we use a Kalman filter to filter out as much noise as possible.
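Below is a minimal sketch of the kind of angle-plus-gyro-bias Kalman filter we mean, written in Python for readability (the actual filter runs on the mbed, and the noise constants here are placeholders rather than our tuned values):

import math

class PitchKalman:
    # Two-state Kalman filter: estimated pitch angle and gyro bias.
    def __init__(self, q_angle=0.001, q_bias=0.003, r_measure=0.03):
        self.angle = 0.0                         # filtered pitch (rad)
        self.bias = 0.0                          # estimated gyro bias (rad/s)
        self.P = [[0.0, 0.0], [0.0, 0.0]]        # error covariance
        self.q_angle, self.q_bias, self.r_measure = q_angle, q_bias, r_measure

    def update(self, accel_angle, gyro_rate, dt):
        # Predict using the gyro rate, then correct with the accelerometer angle.
        self.angle += dt * (gyro_rate - self.bias)
        P = self.P
        P[0][0] += dt * (dt * P[1][1] - P[0][1] - P[1][0] + self.q_angle)
        P[0][1] -= dt * P[1][1]
        P[1][0] -= dt * P[1][1]
        P[1][1] += self.q_bias * dt
        S = P[0][0] + self.r_measure
        K0, K1 = P[0][0] / S, P[1][0] / S
        y = accel_angle - self.angle
        self.angle += K0 * y
        self.bias += K1 * y
        P00, P01 = P[0][0], P[0][1]
        P[0][0] -= K0 * P00
        P[0][1] -= K0 * P01
        P[1][0] -= K1 * P00
        P[1][1] -= K1 * P01
        return self.angle

# One common way to get the accelerometer pitch estimate (axis convention may differ):
# accel_angle = math.atan2(-ax, math.sqrt(ay * ay + az * az))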





Robot Design
Mechanical
We are using 6 servos to control the robot body and the onboard cameras. Two normal-size servos (Power HD 6001B) control the pitch and yaw of the robot body frame. Four micro servos (Tower Pro SG5R) control the pitch and yaw of the two cameras, two servos per camera. The overall design can be seen in the following graph:


The first normal-size servo's horn is attached to the bottom acrylic sheet, with its base attached to the second acrylic sheet. In this way, when the servo horn rotates, the body of the whole robot also rotates, corresponding to the yaw of the head. The second normal-size servo is attached to the two vertical acrylic sheets, with the camera part attached to its sides, so when it rotates, the camera part also rotates, corresponding to the pitch of the head.

For each of the cameras, the second micro servo is attached to the first micro servo, while the first micro servo is attached to the acrylic sheet supporting the whole camera part. The cameras are attached to the second servo's horn. With this design, we can keep the center of the camera fixed while changing the yaw and pitch of the camera.

Finally, two supporting vertical acrylic sheets are added to hold the two Raspberry Pis used for video streaming, data processing, and servo control.

Electrical
We use two Raspberry Pis in our robot, one for each Pi camera. The first Pi also handles tasks such as communication with the sensing part and servo control. These are shown in the following image:

We are using a dedicated PWM control circuit (Adafruit 16-Channel 12-bit PWM/Servo Driver - I2C interface - PCA9685) for accurate servo control. As can be seen from the image above, the servos are connected to pins 0, 1, 7, 8, 14, and 15 of the PWM driver. Power is supplied through an external power supply connected to the black and red wires. Connection instructions between the Raspberry Pi and the PWM driver can be found in this online tutorial by Adafruit (https://learn.adafruit.com/adafruit-16-channel-servo-driver-with-raspberry-pi).


The Raspberry Pis are powered by micro USB cables, and internet connection is provided through the onboard Ethernet port. Each Pi camera is connected directly to the onboard camera port.



Software
For the PWM driver, we use the Adafruit_PWM_Servo_Driver library. Through the library, we can specify the channel, frequency, duty cycle, and polarity. Using Adafruit_PWM_Servo_Driver.py, we initialize 6 channels for the 6 servos, each starting at its midpoint. The duty cycle, which corresponds to the angle of each servo, is updated with information from the sensing part. Communication is performed over TCP/IP, using the standard Python socket library.
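A condensed sketch of that receive-and-drive loop is shown below. The setPWMFreq/setPWM calls come from the Adafruit library mentioned above; the port number, angle range, and tick constants are placeholders, since the real midpoints and duty-cycle-per-degree values were measured per servo:

import socket
from Adafruit_PWM_Servo_Driver import PWM   # library from the Adafruit tutorial

pwm = PWM(0x40)        # default I2C address of the PCA9685 board
pwm.setPWMFreq(60)     # 60 Hz servo refresh rate

SERVO_MIN, SERVO_MID, SERVO_MAX = 150, 375, 600   # illustrative 12-bit tick counts

def set_servo_angle(channel, angle_deg):
    # Map -90..+90 degrees onto the PWM window of one channel.
    angle_deg = max(-90.0, min(90.0, angle_deg))
    ticks = int(SERVO_MID + (angle_deg / 90.0) * (SERVO_MAX - SERVO_MID))
    pwm.setPWM(channel, 0, ticks)

# Receive "yaw,pitch" messages from the sensing Pi over TCP and apply them.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("0.0.0.0", 5005))               # port number is an assumption
server.listen(1)
conn, _ = server.accept()
while True:
    msg = conn.recv(64)                      # message framing simplified for this sketch
    if not msg:
        break
    yaw, pitch = [float(x) for x in msg.decode().split(",")[:2]]
    set_servo_angle(0, yaw)
    set_servo_angle(1, pitch)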

After we receive the angles from the sensing part, we apply a 5-point moving-average filter to smooth the data. We also impose a maximum and minimum on each angle, as well as a deadzone to deal with servo shaking. We use MJPGStreamer for video streaming, which is explained in detail in the next section.
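A minimal sketch of that smoothing step (the limits and deadzone width below are placeholders, not our tuned values):

from collections import deque

WINDOW = 5                             # 5-point moving average
DEADZONE_DEG = 2.0                     # ignore changes smaller than this
ANGLE_MIN, ANGLE_MAX = -60.0, 60.0     # illustrative per-servo limits

history = deque(maxlen=WINDOW)
last_command = 0.0

def filter_angle(raw_angle):
    # Smooth, clamp, and deadband one incoming angle before commanding a servo.
    global last_command
    history.append(raw_angle)
    smoothed = sum(history) / len(history)
    smoothed = max(ANGLE_MIN, min(ANGLE_MAX, smoothed))
    if abs(smoothed - last_command) < DEADZONE_DEG:
        return last_command            # inside the deadzone: hold the previous command
    last_command = smoothed
    return smoothed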

Video Streaming
Hardware
We use the Pi Camera Rev 1.3 for video streaming. The camera is connected directly to the camera port on the Raspberry Pi. The connections can be seen in the following graph:

Software
We use an open source project called MJPGStreamer to interface with the camera module and stream the video over HTTP. The setup code can be found in the cam_setup file:
cd /usr/src
cd mjpg-streamer
cd mjpg-streamer
cd mjpg-streamer-experimental
export LD_LIBRARY_PATH=.
./mjpg_streamer -o "output_http.so -w ./www" -i "input_raspicam.so -x 640 -y 480 -fps 20"
Resolution and fps can be changed by editing the -x, -y, and -fps arguments in the cam_setup file. Night mode can also be enabled by adding "-night" after "-fps 20".
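A quick way to sanity-check the stream from another machine is to grab a single frame over HTTP, as in the sketch below (this assumes MJPGStreamer's default output_http port of 8080 and its snapshot action; replace <pi-address> with the robot Pi's IP):

try:
    from urllib.request import urlopen   # Python 3
except ImportError:
    from urllib2 import urlopen          # Python 2

snapshot = urlopen("http://<pi-address>:8080/?action=snapshot").read()
with open("frame.jpg", "wb") as f:
    f.write(snapshot)                     # one JPEG frame from the Pi camera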

Testing and Evaluation
We tested each part individually and then ran overall tests of the whole system. For eye detection, after constructing the circuit, we checked each component with an oscilloscope to ensure it was working properly. For the IMU, we conducted tests by rotating about each Euler axis individually. We found that the fusion software we were using was not accurate, in the sense that the yaw output changes with pitch; we plan to investigate this issue during the summer break. The servos were tested individually to find the midpoint and duty cycle per degree. Before connecting it to the sensing part, the robot was tested by manually inputting angles. The cameras were tested using the MJPGStreamer web page.

Finally, after each individual part was working properly, the robot and the sensing part were connected over TCP/IP. The sensing part sends the angles of the user's head and eye orientation to the robot, and after a moving average is applied, these angles are mapped to the servos. When tested together, the parts worked properly, meaning we have successfully created a telepresence robot that is controlled directly by the user's head and eye movements. To look at something else, the user can simply turn his/her head or roll his/her eyes. This is a very intuitive control mechanism and can provide users with a truly immersive experience. However, there is still more work to do. First, the servos sometimes shake. Second, we haven't constructed an accurate model for the EOG signal, so the output angles are not very accurate. Finally, our IMU fusion program hasn't been calibrated.


Conclusion
After 5 weeks, we successfully built the head-eye orientation detection system and a robot that moves two cameras correspondingly and streams the video over the internet. We have tested the entire system for stability more than 100 times, and it proved highly reliable within a short period of time (usually 30 seconds) after a system reset. After that short period the system might not work properly due to the complex relation between the EOG signal and the user's metabolism. We will do further experiments to figure that out.

Saturday, April 25, 2015

Baseline Demo

Hi guys!!!
We have our baseline demo in the following video:


We are going to work harder next week to build our final product. We have the following goals:
1. compact our sensing stuff to one box
2. smooth the servos

Eye Detection _ Update 3 _ April 14 - 25

We finally finished our first milestone demo!!! This past week has been nothing but lab for me. I know it's been about two weeks since my last post when I got the eye signal without the help of Biopac for the first time. I thought we could demo the next day, but I just couldn't get the signal processing circuit to work again. Maybe there were some bad connections on the breadboard somewhere. Maybe the amplifiers or the electrodes were worn out. Maybe my signal got wrecked just by someone walking around me (It's true! The electrodes can become that sensitive! I'll show you a video later.)

So I decided to solder my circuit onto a new board so that I wouldn't need to worry about bad connections. Guess what? The new board didn't work, not even the first stage of the circuit, the DIY instrumentation amplifier. When I set both input voltages to 0, it output a 5V high-frequency oscillation, which I couldn't explain. I needed a fine-tuned instrumentation amplifier!

So I waited about a week for my AD620 to arrive. I followed some advice online and changed my circuit:
Coming out of the in-amp, the signal goes through a high-pass filter, a buffer, a second-order low-pass filter, and a 60Hz "Twin-T" notch filter.




An interesting thing about this circuit is that the drifting of the voltage level, which was a major problem for my previous circuit, is now gone. The voltage level always stays around 0 V (or any voltage I want), which is great for microcontroller sampling because it never goes out of the 0-5 V range. The downside is that when the signal stays at some voltage level (e.g. when the user is staring at a fixed angle), the output voltage decays to zero within maybe 100 ms (the half-period depends on the circuit design). So I need some algorithm to recover the signal.
The upper-left graph is the Vout of the circuit. When I roll my eyes to the right, Vout goes up. If we zoom in, we see that the signal decays exponentially to the reference point after each peak, which means that in "free fall" (when Vin doesn't change), dVout/dt = C0 * (Vout - C1), where C0 is a constant that can be measured and C1 is the bias voltage I set. So to recover the signal, I can calculate dVout/dt, add back the "free-fall decay speed" of C0 * (Vout - C1) to get dVin/dt, and then integrate dVin/dt to get the orientation of the eye. The upper-right graph is the Vin recovered by this algorithm.

However, the algorithm doesn't work as nicely as above every time. See the following as an example:
The problem is that there's still the annoying drifting. Over about 10 seconds the random drift could account for as much as 50% of the output, which means that if the output changes by 100, the actual signal could have changed by any number between 50 and 100. This accuracy is unacceptable!

Also, there's the problem of choosing C1 (the reference voltage). Since there's an integration step in my algorithm, changing C1 by 0.01 V would cause a massive drift of the output. It becomes really hard for the microcontroller to set the reference voltage correctly in real time. How do I get that reference voltage, which is supposed to be constant but is actually different across different tests for unknown reasons? An averaging algorithm would be problematic when the signal is not symmetric.

So I decided to give up this circuit! This theory is beautiful, but there's always the distance between theory and reality.

New circuit: 

Not so different: just that the high-pass filter and the buffer are gone. I tried this circuit and found that the drifting became really small over short periods of time. Why so much smaller than with the circuit I used two weeks ago? I guess that circuit had too many cascaded RC low-pass filters, cascaded without any buffers in between, so the cut-off frequency was actually much lower than what I calculated (around 4 Hz instead of the 16 Hz I intended). This makes the drift relatively bigger compared to the EOG signal, since drift usually has a much lower frequency. And when I had 10 RC LPFs cascaded like that, it's no surprise that I got a crazy 5 V drift when the EOG signal was only about 200 mV. With the new circuit, the cut-off frequency stays at 16 Hz, so the drift and the EOG signal are "treated" quite equally. Over about a minute the drift is around 0.1-0.2 V when the signal is at 0.5 V.

Although this is much better than before, I still need to work on this next week. So far I have tried some algorithms to eliminate noise and the drifting of the HPF. For example, I took the speed at each point and filtered out all the low-speed points. This filters out most of the noise and drifting, but due to the exponential-decay property of that HPF circuit, some of the low-speed points play an important part in the recovery algorithm. Without them I always under-compensate for the decay, and thus the reference drifts. I guess it won't be such a big problem for the new circuit since it doesn't have that exponential-decay property. I will put this part in the final post after I finish more testing on noise elimination, smoothing, and the final wrap-up. Check out Yifan's post for our milestone demo! It's really cool to see the robot's eyes move in the same way my eyes do!



Thursday, April 9, 2015

Eye Detection _ Update 2 _ April 3 - 9

This past week I've been working on building a circuit to process the EOG signal without using Biopac. First of all, everyone has been telling me that I might get my eyes fried. The reason is that there can be a large AC voltage on the mains ground, so if I connect my eyes to the mains ground, my eyes might be exposed to high AC voltage. The solution is easy: just keep the entire EOG circuit away from the mains power supply. I used batteries to power the op-amps and the mbed in my sensing circuit. This mbed uses a ZigBee chip to send the processed signal to another mbed that is connected to the oscilloscope.

The first few days were not easy. I used a Duracell USB instant charger to power my circuit, so my circuit was single-supply (0-5 V), which was a big mistake that I didn't realize at first. The circuit just wouldn't work, because the op-amps I was using were LM741s, and their input voltage range is the power-supply range minus 4 V. Considering that my supply from the USB charger was not exactly 5 V but around 4.5 V, I only got an input voltage range of 0.5 volts! So I borrowed another charger to make the circuit split-supply.



The circuit is not very different from what I built for Biopac. The V+ and V- electrode signals first go into an instrumentation amplifier, whose output is gain * (V+ - V-). This signal then goes through a bunch of low-pass filters and amplifiers: most of the AC noise is filtered out, and the DC component is amplified to a much larger extent than the noise. Considering that the EOG signal is only 3 uV to 3.5 mV, I set the DC gain at 20 and the AC gain for a 10 Hz signal at 1/2 for the instrumentation op-amp.

It is not as easy as it seems. Sometimes the output voltage level drifted slowly from 0 to 5 V and then back from 5 V to 0, which I just couldn't explain. Later I found some possible explanations:
1. My electrodes were slowly falling off of my face
2. Maybe I was just using some bad op-amps.
3. Voltage supply was not stable.

To eliminate these three possible causes for the problem, I...
1. wore a band to keep the electrodes on my face.
2. switched to LTC 1050 op-amps from the Bioengineering labs. Compared to the LM741 they seem much better for small-signal, high-accuracy tasks.
3. got the chargers fully charged before using them, and put capacitors in parallel with the power supply.

Also, the noise seems to be much, much bigger than the EOG signal. Fortunately it is at high frequency and seems to be repetitive. Therefore I did the following:
1. Add a bunch of low-pass filters. Alternatively I could just use one low-pass filter with a very low cut-off frequency, but then the response of the system becomes too slow. (When I set the cut-off frequency at 1 Hz, the half-period reaction time was about 1 second. Unacceptable!)
2. Use an averaging algorithm on the mbed to eliminate noise: I let the mbed calculate the average of all the measurements in the past 15 ms (sketched below).
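The averaging step, expressed as a Python sketch for clarity (the real code runs on the mbed; the sample rate here is an assumption):

from collections import deque

SAMPLE_PERIOD_S = 0.001                  # assumed 1 kHz ADC sampling
WINDOW_S = 0.015                         # average over the past 15 ms
window = deque(maxlen=int(WINDOW_S / SAMPLE_PERIOD_S))

def smoothed_reading(read_adc):
    # read_adc: callable returning the latest EOG voltage sample
    window.append(read_adc())
    return sum(window) / len(window)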

And after a long time of fine-tuning the circuit, I got this video:
You can see that there was a buggy moment in the video when the voltage level incorrectly stayed at maybe 5V. I still need to work on that. But anyway I've got my circuit working. And tomorrow Yifan and I are going to connect the sensing unit to the robotic unit. Something amazing is gonna happen!!

Eye Detection _ Update 1 _ Mar 27-29


Sorry for the late update! I'll try to remember what happened in the first three days. Basically I tested the Biopac with electrodes around my eyes. The signal is very clean, though really small. So I built a very simple amplifier and low-pass filter: the amplifier has a gain of 10 and the filter has a cut-off frequency of 30 Hz. It worked really well. See the video below:

In this video the signal corresponds to the horizontal movement of the eye. The center electrode acts as the ground; the other two are Vin+ and Vin-. According to EOG theory, Vin+ minus Vin- is proportional to the horizontal orientation of the eye.

Then I did a linearity test to see whether the signal is linearly dependent on eye movement. What I did was stare at the red and blue circles from the leftmost one to the rightmost one, one by one, and then from the rightmost back to the leftmost. Here are the circles I stared at and the signal I got:

Tuesday, April 7, 2015

Update 2: Camera orientation control 2.0 and Main Frame Control 1.0

Hi Everyone!

We have finished an updated version of our camera orientation control system. We have improved the design of the camera slot as well as the connection between the servos and the parts.


We have also designed a body frame, which has 2 degrees of freedom corresponding to the yaw and pitch of the robot head.


Plan for this week:
(1) work on Gyroscope sensing
(2) integration between eye sensing and control

Wednesday, April 1, 2015

1st Update: Camera Orientation Control

Hi Everyone!!!

This is the first update of our project. For the past week, we have been working on using EOG to sense eye orientation and on the mechanical design of the robot eye. This post talks about version 1 of the mechanical design:



We use 2 servos for each camera: the bottom one controls the yaw of the eye, while the side one controls the pitch. This design ensures the camera rolls like an eyeball, with its center fixed and all rotations about the center.

We also used the Adafruit 16-channel servo driver to expand the number of servos controlled by a single Raspberry Pi.

Details about this driver can be found via this link: https://learn.adafruit.com/adafruit-16-channel-servo-driver-with-raspberry-pi?view=all