Friday, May 4, 2012

Pushing Images to a Server

Once the images are generated by the Kinect and ffmpeg, we need to push them to a server before any other application can read them. For each image that ffmpeg generates, we first calculate its size (in is the FILE pointer):

fseek(in, 0, SEEK_END);
int lSize = ftell(in);
rewind(in);
We then send this size to the server so that it knows how many bytes to read before writing the image to its file system.

sprintf(Buffer, "%d", lSize);
send(s, Buffer, sizeof(Buffer), 0);
Now we need to send the actual image to the server. We read MAX_BUF bytes from the currently open file into Buffer and send them over the socket. When fewer than MAX_BUF bytes remain, fread returns 0 because it can no longer fill a complete buffer, so we track how many bytes have been sent and how many remain. After each full buffer, if the remainder is less than MAX_BUF, we read and send only that many bytes.
while (1) {
    bzero(Buffer, sizeof(Buffer));
    /* fread returns the number of complete MAX_BUF-sized items read (0 or 1) */
    len = fread(Buffer, sizeof(Buffer), 1, in);
    if (len <= 0) {
        break;
    }
    send(s, Buffer, sizeof(Buffer), 0);
    sz += len * MAX_BUF;
    int rem = lSize - sz;
    if (rem > 0 && rem < MAX_BUF) {
        /* tail of the file: read and send only the bytes that are left */
        len = fread(Buffer, rem, 1, in);
        send(s, Buffer, rem, 0);
        sz += rem;
        break;
    }
}
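On the other end, the server mirrors this protocol: it reads the size message first, then loops on the socket until that many bytes have arrived. Here is a minimal sketch of that receiving side (the function name and connection handling are hypothetical, not taken from our actual server code; MAX_BUF is assumed to match the sender's buffer size):

#include <stdio.h>
#include <stdlib.h>
#include <sys/socket.h>

#define MAX_BUF 1024   /* assumed to match the sender's buffer size */

/* Receive one image on the connected socket s and write it to path */
int receive_image(int s, const char *path)
{
    char buffer[MAX_BUF];

    /* the sender transmits the size as a string in a full-sized buffer */
    if (recv(s, buffer, sizeof(buffer), 0) <= 0)
        return -1;
    long size = atol(buffer);

    FILE *out = fopen(path, "wb");
    if (out == NULL)
        return -1;

    long received = 0;
    while (received < size) {
        ssize_t n = recv(s, buffer, sizeof(buffer), 0);
        if (n <= 0)
            break;                       /* connection closed or error */
        long take = ((long)n < size - received) ? (long)n : size - received;
        fwrite(buffer, 1, take, out);    /* never write past the advertised size */
        received += n;
    }

    fclose(out);
    return (received >= size) ? 0 : -1;
}

Note that TCP is a byte stream, so a production version would also loop until the full size message has arrived; this sketch assumes it comes in a single recv.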

Getting Images From The Kinect


Once all the libfreenect Kinect drivers were successfully installed, we had to figure out how to adapt the available methods to fit our needs. There is not much documentation available for libfreenect, so the easiest approach was to base our program on the existing sample programs that come with the library. While the glview program was a good introduction to the Kinect's features, the record.c program provided more insight into how to get the RGB and depth streams.

Before you can start getting video and depth information from the Kinect, there are a few steps needed to initialize the device: include the libfreenect header file, get a context, select the subdevices, and finally open the device.

#include <libfreenect.h>

freenect_context *ctx;
freenect_device *dev;

freenect_init(&ctx, NULL);
freenect_select_subdevices(ctx, (freenect_device_flags)(FREENECT_DEVICE_MOTOR | FREENECT_DEVICE_CAMERA));
freenect_open_device(ctx, &dev, 0);

If all of this works correctly, you can set the modes and start the video and depth streams.

freenect_set_video_mode(dev, freenect_find_video_mode(FREENECT_RESOLUTION_MEDIUM, FREENECT_VIDEO_RGB));
freenect_start_video(dev);
freenect_set_depth_mode(dev, freenect_find_depth_mode(FREENECT_RESOLUTION_MEDIUM, FREENECT_DEPTH_11BIT));
freenect_start_depth(dev);

At this point the Kinect's RGB and IR cameras start doing their job. But to do something with the data the Kinect produces, you need to register callback functions that get called every time RGB or depth data is received.

freenect_set_video_callback(dev, rgb_cb_ffmpeg);
freenect_set_depth_callback(dev, depth_cb_ffmpeg);
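One detail that is easy to miss: these callbacks only fire while the program pumps libfreenect's event loop, so the main loop needs to call freenect_process_events repeatedly. A minimal sketch (running is a hypothetical flag you would clear on shutdown):

/* Callbacks are invoked from inside freenect_process_events() */
while (running && freenect_process_events(ctx) >= 0) {
    /* rgb_cb_ffmpeg and depth_cb_ffmpeg are called from here */
}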

Now whenever the program receives video or depth data from the Kinect, it calls rgb_cb_ffmpeg or depth_cb_ffmpeg respectively. Since we decided to send frames over a TCP socket rather than use a streaming protocol like RTSP, we use ffmpeg to generate jpeg images of the frames. The callback functions run the following code to create a jpeg on the file system each time a frame arrives.

snprintf(cmd, 1024, "ffmpeg -pix_fmt rgb24 -s %dx%d -r 5 -f rawvideo -vframes 1 "
"-i /dev/stdin -f image2 -r 5 Pics/sample%d.jpg",
FREENECT_FRAME_W, FREENECT_FRAME_H, rgb_count);
proc = popen(cmd, "w");
fwrite(rgb, freenect_get_current_video_mode(dev).bytes, 1, proc);
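For context, the snippet above lives inside the video callback, which libfreenect calls with the device, the frame buffer, and a timestamp. A trimmed sketch of how it might fit together (the global rgb_count counter and the pclose call are our additions, not shown in the original snippet):

int rgb_count = 0;   /* frame counter used in the output file name */

void rgb_cb_ffmpeg(freenect_device *dev, void *rgb, uint32_t timestamp)
{
    char cmd[1024];
    FILE *proc;

    /* build the ffmpeg command that turns one raw RGB frame into a jpeg */
    snprintf(cmd, sizeof(cmd),
             "ffmpeg -pix_fmt rgb24 -s %dx%d -r 5 -f rawvideo -vframes 1 "
             "-i /dev/stdin -f image2 -r 5 Pics/sample%d.jpg",
             FREENECT_FRAME_W, FREENECT_FRAME_H, rgb_count);

    /* pipe the raw frame into ffmpeg's stdin */
    proc = popen(cmd, "w");
    if (proc == NULL)
        return;
    fwrite(rgb, freenect_get_current_video_mode(dev).bytes, 1, proc);
    pclose(proc);   /* wait for ffmpeg to finish writing the jpeg */
    rgb_count++;
}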

Once you've written all this code*, you want to see your Kinect in action. The easiest way to compile your program is to place it in the include subdirectory of your libfreenect directory. To compile, run the following:

gcc -lfreenect -o yourProgram yourProgram.c

If that doesn't work, try:

cc -c yourProgram.c
cc  yourProgram.o -o  yourProgram -lfreenect
To run the program:

sudo LD_PRELOAD="/usr/local/lib/libfreenect.so" ./yourProgram

Remember that you need superuser privileges to run the program!



*NOTE: The code in this blog post is not complete. It is only intended to get you started with your Kinect.





Monday, April 30, 2012

Controlling an iRobot Create over serial

Controlling an iRobot device is remarkably easy. If done over the receive and transmit pins on the DB-25 connector, no connection configuration is needed and you can simply push messages over a GPIO. For us, it was easier to open and connect a USB-serial cable than to build connections from the BeagleBoard to the Create ourselves. Driving and retrieving information from the Create is simple once you've opened and configured the serial port properly; here's a code snippet showing how.

In the beginning of the file:

#include <stdio.h>
#include <fcntl.h>
#include <termios.h>
#include <unistd.h>

/* Serial port path and baud rate */
#define CREATE_SERIAL_PORT "/dev/ttyUSB0"
#define CREATE_SERIAL_BRATE B57600

Somewhere else in the file (such as a function):

int fd = open(CREATE_SERIAL_PORT, O_RDWR | O_NOCTTY | O_NDELAY);

// can't open port
if (fd == -1) {
  printf("Error opening port\n");
  return -1;
}
// open successful
else {
  printf("Serial port opened with status: %d\n", fd);
  fcntl(fd, F_SETFL, 0);
}

// configure port
struct termios portSettings;
tcgetattr(fd, &portSettings);

if (cfsetispeed(&portSettings, CREATE_SERIAL_BRATE) != 0)
  printf("Failed setting baud rate for input");
if (cfsetospeed(&portSettings, CREATE_SERIAL_BRATE) != 0)
  printf("Failed setting baud rate for output");

// 8 data bits, no parity, one stop bit (8N1), raw mode
portSettings.c_cflag &= ~PARENB;
portSettings.c_cflag &= ~CSTOPB;
portSettings.c_cflag &= ~CSIZE;
portSettings.c_cflag |= CS8;
cfmakeraw(&portSettings);

if (tcsetattr(fd, TCSANOW, &portSettings) != 0) {
  printf("Failed pushing port settings.\n");
  return fd;
}


Since C has no dedicated serial API, POSIX systems expose the port as a device file: you access it with open and close, and configure it through termios. After opening the port, you need to check that it opened properly, set the desired configuration, and then make sure those settings were actually applied.

After this, communication is very simple and well explained in the iRobot Create manual, found here:
http://www.irobot.com/filelibrary/pdfs/hrd/create/Create%20Open%20Interface_v2.pdf

There are messages that let you drive the Create a specific distance or rotate a specific amount, but for our application we found it easier to drive, sleep for a specified time, and then stop, like so:


wd = write(fd, initsafe, sizeof(initsafe));
wd = write(fd, forward, sizeof(forward));
sleep(1);
wd = write(fd, initsafe, sizeof(initsafe));
wd = write(fd, stop, sizeof(stop));


Where those messages were specified by:

unsigned char initsafe[] = {128, 131};
unsigned char forward[] = {137, 0, 100, 128, 0};
unsigned char stop[] = {137, 0, 0, 0, 0};

Other similar and useful messages are:

unsigned char init[] = {128};
unsigned char initfull[] = {128, 132};
unsigned char LED[] = {139, 8, 0, 128};
unsigned char beep[] = {141, 1};
unsigned char reverse[] = {137, 255, 100, 128, 0};
unsigned char left[] = {137, 0, 100, 0, 1};
unsigned char right[] = {137, 0, 100, 255, 255};
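All of the 137 messages above follow the OI manual's Drive command layout: the opcode followed by velocity (mm/s) and radius (mm), each a signed 16-bit value sent high byte first, with the special radius 0x8000 meaning drive straight. A small hypothetical helper makes that encoding explicit:

#include <stdint.h>
#include <unistd.h>

/* Hypothetical helper: encode and send a Drive command (opcode 137).
 * velocity: -500..500 mm/s; radius: -2000..2000 mm, or 0x8000 for straight. */
ssize_t send_drive(int fd, int16_t velocity, int16_t radius)
{
    uint16_t v = (uint16_t)velocity;   /* two's complement, high byte first */
    uint16_t r = (uint16_t)radius;
    unsigned char msg[5] = {137, v >> 8, v & 0xFF, r >> 8, r & 0xFF};
    return write(fd, msg, sizeof(msg));
}

For example, send_drive(fd, 100, (int16_t)0x8000) is equivalent to the forward message above: straight ahead at 100 mm/s.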

Next, we wanted to be able to read the sensor data from all of the available sensors on the Create.

unsigned char packet[] = {142, 6};


wd = write(fd, initfull, sizeof(initfull));
wd = write(fd, packet, sizeof(packet));


// a-f are unsigned chars holding the first six sensor bytes;
// extras is a buffer for the remainder of the group
read(fd, &a, sizeof(unsigned char));
read(fd, &b, sizeof(unsigned char));
read(fd, &c, sizeof(unsigned char));
read(fd, &d, sizeof(unsigned char));
read(fd, &e, sizeof(unsigned char));
read(fd, &f, sizeof(unsigned char));
read(fd, extras, sizeof(extras));


The 142 opcode asks the Create for a group of sensor data packets (group 6 returns all of them), and from that you can read the values in whatever way you prefer. From this we created a set of cases to allow the robot to move if the action was safe, and to stop if something happened that could damage the device, such as falling down a flight of stairs.
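As one concrete example of those cases: per the OI manual, the first byte returned for group 6 is packet 7 (bumps and wheel drops), so a guard like the following (a sketch, assuming a above holds that first byte) can stop the robot when it hits something or a wheel leaves the ground:

/* Packet 7: bit 0 = bump right, bit 1 = bump left, bits 2-4 = wheel drops */
if (a & 0x1F) {
    write(fd, stop, sizeof(stop));   /* something's wrong: stop driving */
}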




Wednesday, April 25, 2012

Testing "live" stream with the Android App

After doing plenty of research on how to stream video from the Kinect to the Android app, we decided to start by saving images and creating the effect of a live stream from those images. The ImageView is continuously updated by a simple Handler that receives update messages from a continuously running AsyncTask, which connects to the server and downloads images. The Handler provides a way to update the ImageView on the UI thread (the only thread allowed to touch it) while the downloading happens on a background thread (the AsyncTask).

private final Handler mHandler = new Handler() {
    @Override
    public void handleMessage(Message msg) {
        switch (msg.what) {
        case UPDATED:
            imageView.setImageBitmap(bitmapImage);
            break;
        }
    }
};

Our test used the animation above: 6 separate images of about 25 kB each (roughly the size of the ones created by the Kinect), with about 400 ms of download time between images, i.e. roughly 2.5 frames per second. This created a pretty good video effect, so we were confident in our approach.

Monday, April 23, 2012

Installing Kinect Drivers



There are a lot of resources out there for installing open source drivers for the Kinect. We decided to go with libfreenect for its simplicity and ease of installation. It also comes with a few demo programs that are helpful for verifying that everything is installed correctly, and as a reference for integrating the Kinect into your own software. Copy-and-paste installation instructions:

sudo apt-get install git-core cmake libglut3-dev pkg-config build-essential libxmu-dev libxi-dev libusb-1.0-0-dev
git clone git://github.com/OpenKinect/libfreenect.git
cd libfreenect
mkdir build
cd build
cmake ..
make
sudo make install
sudo ldconfig /usr/local/lib64/
sudo glview

The last command opens a window showing the RGB and depth streams from the Kinect. This fails if you are not running a GUI, in which case something like the tilt demo or record program is sufficient for quick testing.



Now, if you want to actually see the Kinect output on the BeagleBoard itself, you need to install a desktop environment such as Xfce or LXDE. We installed LXDE at first to check that everything was working, but haven't used it since. There was honestly no point in installing it, but in case you want to know how...

Xfce:
sudo apt-get install xfce4 gdm xubuntu-gdm-theme xubuntu-artwork
LXDE:
sudo apt-get install lxde



After restarting, the interface should be visible on a directly connected monitor. This is not advised though, because it slows the system down quite a bit.

We also installed a number of other packages, some of which we won't get the opportunity to use (like ARToolKitPlus) because of time constraints.

Some essentials:
sudo apt-get install build-essential libavformat-dev ffmpeg

These packages form the core of our streaming program, and ffmpeg is relatively easy to use.

For ARToolKitPlus:
wget http://launchpad.net/artoolkitplus/trunk/2.2.1/+download/ARToolKitPlus-2.2.1.tar.bz2
tar -xjf ARToolKitPlus-2.2.1.tar.bz2
cd ARToolKitPlus-2.2.1
mkdir build
cd build
cmake ..
make
sudo make install


Also, in order to access all of the Kinect drivers you need to set up permissions, either to access everything as root or to enable use by a specific user. This is done through a udev rules file located at:
/etc/udev/rules.d/66-kinect.rules

Copy and paste this into the file:
# Rules for Kinect ##################################################
SYSFS{idVendor}=="045e", SYSFS{idProduct}=="02ae", MODE="0660", GROUP="video"
SYSFS{idVendor}=="045e", SYSFS{idProduct}=="02ad", MODE="0660", GROUP="video"
SYSFS{idVendor}=="045e", SYSFS{idProduct}=="02b0", MODE="0660", GROUP="video"
### END #############################################################

Then,
sudo adduser ubuntu video

Once all of this is done (and packages have been installed to your liking) you can begin to hack the Kinect to fit the needs of your project!

Adapted from these sources:
http://openkinect.org/wiki/Getting_Started#Ubuntu_Manual_Install
http://www.ecse.monash.edu.au/twiki/bin/view/WSRNLab/BeagleBoardConfigurationForKinect

Monday, April 9, 2012

Wifi access using AirPennNet

Anyone who has used AirPennNet knows how flaky and annoying it can be. On a PC or Mac you have to install SecureW2 after connecting to AirPennNet-help and follow their configuration procedure. The BeagleBoard has an ethernet port, but when plugged in on campus you need to authenticate through a browser to gain access, and we also wanted to be untethered, free to move and work anywhere. There is no wifi on the board, but a USB wireless-N dongle is convenient and easy, and many of them are supported by the generic wext driver. We had to install wpasupplicant to be compatible with the network (sudo apt-get install wpasupplicant).

Edit the interfaces file (/etc/network/interfaces) and place this into it:


auto wlan0
iface wlan0 inet dhcp
pre-up wpa_supplicant -iwlan0 -c/etc/wpa_supplicant.conf -B
wpa-driver wext
wpa-conf /etc/wpa_supplicant.conf



Next, move one level up to /etc and edit wpa_supplicant.conf, filling in your user information:


ctrl_interface=/var/run/wpa_supplicant
ctrl_interface_group=0
eapol_version=2
ap_scan=1

network={
    priority=1
    ssid="AirPennNet"
    key_mgmt=WPA-EAP
    eap=TTLS
    phase2="auth=PAP"
    identity="<YOUR_PENNNAME>"
    password="<YOUR_PASSWORD>"
    ca_cert="/etc/ssl/certs/UTN_USERFirst_Hardware_Root_CA.pem"
}


The first file sets up the wifi interface for DHCP and loads the driver and WPA support on boot as a background daemon. The second file sets up the connection to AirPennNet with the correct security settings and user info. There is an obvious security issue with placing your credentials in a plain-text file, but it makes for easy access and simple setup. The alternative is to manually start wpa_cli and enter your info on every login.

After all of this is set up, an IP address should be issued to the BeagleBoard on the next boot. To check, run ifconfig -a to see all of the available network connections. If wlan0 exists and has an IP address, everything is all set! AirPennNet can still be flaky, and it sometimes takes multiple tries to connect, though that's true of our personal computers too...

Adapted from:
http://www.seas.upenn.edu/cets/answers/airpennnet-linux.html
A decent general reference for network configuration:
https://help.ubuntu.com/10.04/serverguide/C/network-configuration.html

Setting up a BeagleBoard

We set up our BeagleBoard-xM with Ubuntu 11.04 (Natty Narwhal), located at:

http://cdimage.ubuntu.com/releases/11.04/release/
We chose the preinstalled headless image for OMAP3.

After completing the download, insert the SD card (on a Linux machine). After making sure it's unmounted, move into the download directory and enter this command, changing the file name, block size, and device location as appropriate.


sudo sh -c 'zcat ./ubuntu-netbook-10.10-preinstalled-headless-netbook-armel+omap.img.gz | dd bs=4M of=/dev/sde ; sync'

Note: this BeagleBoard-xM only supports up to an 8 GB micro SD card.

Once the sync is complete the card can be put into the board and booted!

You can watch the boot either by connecting an HDMI monitor and keyboard directly to the device, or by using screen to tunnel in over a USB-serial connection (from Ubuntu):

screen /dev/ttyUSB0 115200