
Raspberry Pi: setting up a Model A with the camera module as a bot

Using my Model B with a USB video camera was fun, but with the new camera module and the cheaper Model A out I wanted to give those a go.

Parts:
rPi Model A (no Ethernet, one USB) $25
Camera module for the rPi $25
RTL8188cus wifi dongle (<$5)
Something to drive around
Circuitry to drive the motors
Circuitry to tell you when the batteries are low: http://zonerobotics.com/wordpress/?p=257
Connect Over Serial:

To do this I used a USB<->UART converter plugged directly into the rPi header, following the pinout described here: http://elinux.org/RPi_Low-level_peripherals. I connected at 115200 baud, 8N1, with a PuTTY session.
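If you'd rather script the console than sit in PuTTY, here's a quick pyserial sketch with the same settings (just a sketch; adjust the device path for your adapter):

import serial  # pyserial

# 115200 baud, 8 data bits, no parity, 1 stop bit -- the Pi console defaults
port = serial.Serial('/dev/ttyUSB0', baudrate=115200,
                     bytesize=serial.EIGHTBITS, parity=serial.PARITY_NONE,
                     stopbits=serial.STOPBITS_ONE, timeout=1)
port.write(b'\n')  # nudge the console into printing a login prompt
print(port.read(100).decode(errors='replace'))
port.close()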

Connect wifi:

http://learn.adafruit.com/adafruits-raspberry-pi-lesson-3-network-setup/setting-up-wifi-with-occidentalis

Update rPi, setup Camera files:

http://www.tweaktown.com/guides/5617/raspberry-pi-camera-module-review-and-tutorial-guide/index2.html

Free up some space (I'm on a 2 GB SD card)

This will remove the X11/GUI interface. You might want a bigger SD card if you still need that; I don't need it for now.

sudo apt-get remove `sudo dpkg --get-selections | grep -v "deinstall" | grep x11 | sed s/install//`
After that I have about 400 MB left 🙁 not much to play with. If you want to trim off more, here is an article: http://www.cnx-software.com/2012/07/31/84-mb-minimal-raspbian-armhf-image-for-raspberry-pi/

Streaming from the Camera Module:

Looking around, I see no /dev/video0. From digging through the vast internet, I don't think there is a Video4Linux driver yet. How lame; I thought this would be plug and play. I've written my own CMOS camera drivers and broadcast the frames over Bluetooth with an STM32, so I know it's not trivial, but I was hoping for a simpler solution given that this camera was expensive and the Pi is a powerhouse compared to my other units. See this article for more details about why it does not work and some ways it might:

http://raspberrypi.stackexchange.com/questions/7446/how-can-i-stream-h264-video-from-raspberry-camera-module-via-apache-nginx-for-re

This page is more optimistic, with three different ways to stream:

http://www.mybigideas.co.uk/RPi/RPiCamera/

First I had to install VLC; ouch, 132 MB gone. Then I tried:

raspivid -o - -t 9999999 |cvlc -vvv stream:///dev/stdin --sout '#rtp{sdp=rtsp://:8554/}' :demux=h264

The latency was really bad; it seemed like minutes before I actually got video. It kept restarting over and over, hitting this error:

[0xd01d28] main input error: ES_OUT_SET_(GROUP_)PCR  is called too late (pts_delay increased to 1953 ms)
[0xd01d28] main input error: ES_OUT_RESET_PCR called

Then I found a link about "low latency" streaming with GStreamer. I had been using GStreamer on my Linux box the other day with much interest, so I was drawn to this:

Raspberry Pi camera board – Gstreamer

Before installing GStreamer I deleted more stuff with apt-get remove, as in the article above. That left me with about 300 MB; I didn't uninstall VLC since it was the only thing I had working so far. GStreamer was 154 MB; these packages are huge. There went another 20 minutes waiting for that.

I fought with this for a while and could never get a client to work with GStreamer. I tried VLC again with slower/smaller settings and it was just as crappy.

VLC Sorta Success:

After working with these tools for a long time, I got this to work pretty well. It's smooth and fast, with no freezing:

raspivid -t 999999 -h 600 -w 800 -fps 30 -hf -b 2000000 -o - |cvlc -vvv stream:///dev/stdin --sout '#standard{access=http,mux=ts,dst=:8080}' :demux=h264

Each time I made it smaller, with less frame rate/size/bitrate, it got worse and worse. With a smaller bitrate/fps it would freeze and disconnect, which does not make any sense to me; with less data it should work better, right? But I think scaling the image in any way means more processing, which hogs the CPU and screws everything up. I read tons of articles; the last one was this:

http://www.raspberrypi.org/phpBB3/viewtopic.php?f=43&t=43969

Making a Webpage:

I installed Apache, then used this in a video.html page:

<!DOCTYPE html>
<html><body>
<OBJECT classid="clsid:9BE31822-FDAD-461B-AD51-BE1D1C159921"
 codebase="http://downloads.videolan.org/pub/videolan/vlc/latest/win32/axvlc.cab"
 width="800" height="600" id="vlc" events="True">
 <param name="Src" value="http://PI_IP_ADDRESS:8080/" />
 <param name="ShowDisplay" value="True" />
 <param name="AutoLoop" value="False" />
 <param name="AutoPlay" value="True" />
 <embed id="vlcEmb" type="application/x-google-vlc-plugin" version="VideoLAN.VLCPlugin.2" autoplay="yes" loop="no" width="640" height="480"
 target="http://PI_IP_ADDRESS:8080/" ></embed>
</OBJECT>
</body></html>

Latency !!:

I'm getting something like 4 seconds of latency, which ruins everything; I had much better results with the USB cam. However, these people seem to think it will work much faster, with "no latency", so I'm going to try their netcat solution later:

http://www.raspberrypi.org/phpBB3/viewtopic.php?f=43&t=39996&start=50#p341674

Yes, at least a 3 second delay, and no change in delay with bitrate. Looking at the nc solution, this is the most complete example I can find (first line runs on the Pi, second on Windows):

raspivid -t 999999 -o - | nc 192.168.1.76 5001
c:\apps\nc\nc.exe -L -p 5001 | c:\apps\mplayer\mplayer.exe -fps 31 -cache 1024 -

My exact example was as follows. On the Pi:

raspivid -t 999999 -vf -hf -b 2000000 -o - | nc 192.168.2.17 5001

Then on the Windows machine (where I had to download mplayer and all the codecs), the matching receive command:

c:\apps\nc\nc.exe -L -p 5001 | c:\apps\mplayer\mplayer.exe -fps 31 -cache 1024 -

This started off just like the rest, but after about a minute of running, the latency was gone. There is maybe 200-300 ms of latency; I could snap my fingers and see it on screen well before I could say "one Mississippi" (very scientific).

Conclusion:
I don't get it. I read all these "it does not have enough power" posts online, etc. My STM32 could pump data out via DMA as fast as we could process it.

  1. Video is shaky; the camera is stable but what you are looking at appears to be in an earthquake
  2. Motion is frequently blurred and pixelated, smearing across the screen
  3. The camera is much more expensive than a USB solution
  4. All normal ways of streaming have 3+ seconds of latency; some other methods (netcat) have less, but they are not reliable and seem to crash all of a sudden. Plus they do not stream into a webpage or anything simple.
  5. Reducing the bitrate or resolution just seems to make things worse

If I have to choose between this and a simple USB camera, I was having much more luck with the USB camera. I can see the Raspberry Pi camera working as a webcam or something that takes pictures, but as eyes for a teleop bot, the USB camera may not be as HQ, but it also did not smear, shake, lag or cost as much!


STM32 code revision

Having thought about this for a long time, I've decided to refactor all of my STM32 code to make it more versatile. I need to be able to turn peripherals on and off at will and put them on a schedule. The current system is very static.

So I've devised a way to use nodes, serve them up, and configure them at will from the server. This helps a lot with configuration and should let me put an easy configuration face onto any STM32.

I'm still torn between the Pi and my own hardware. I think I could add my current hardware to the Pi and get really good expandability plus the camera on the Raspberry Pi; however, I want to make sure users are able to configure and tweak all of the settings through their Pi rather than just get a video stream.

Goals:

Hardware/comm agnostic (can switch from USART to SPI, etc.)
All peripherals follow a standard interface
One state machine, with schedules
Will work well with the Pi
Each peripheral has its own commands

Protocol:

Encapsulated, with multiple commands at once inside:
ID:TotalSize:[ID:Size:Data …. n …. ID:Size:Data]CRC
The outer "ID" is the ID of the server or hardware; each inner "ID" is the ID of the command that will handle "Size" bytes of data. The data itself is:
CMD:Data
So, for example, an "update schedule" command (0x01?) that sends the data 0xFF will cause the peripheral to fire on every interval of the state machine. Commands that span peripherals will start from 0; custom commands will start at 0xFF and count down. So for instance a command that sends video config data will start from the upper range, and 0xFF will mean different things to different peripherals.

Actually, I think we could get away with no overall size and just have an identifier followed by data. On the client side we reset on the ID and continually calculate the CRC until we have a match. A sketch of the framed version follows.
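To make the framing concrete, here's a rough Python sketch of the format; the byte widths (one-byte IDs and sizes, a 16-bit CRC) are placeholders until I nail down the real spec.

import struct

def crc16(data, poly=0x1021):
    # straightforward CRC-16-CCITT; the firmware may end up using another polynomial
    crc = 0xFFFF
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ poly) if (crc & 0x8000) else (crc << 1)
            crc &= 0xFFFF
    return crc

def pack_frame(hw_id, commands):
    # inner payload: ID:Size:Data repeated for each command
    payload = b"".join(struct.pack("BB", cmd_id, len(data)) + data
                       for cmd_id, data in commands)
    # outer wrapper: ID:TotalSize:[payload]CRC
    frame = struct.pack("BB", hw_id, len(payload)) + payload
    return frame + struct.pack(">H", crc16(frame))

def unpack_frame(frame):
    hw_id, total = struct.unpack_from("BB", frame)
    payload = frame[2:2 + total]
    (crc,) = struct.unpack_from(">H", frame, 2 + total)
    if crc != crc16(frame[:2 + total]):
        raise ValueError("bad CRC")
    commands, offset = [], 0
    while offset < len(payload):
        cmd_id, size = struct.unpack_from("BB", payload, offset)
        commands.append((cmd_id, payload[offset + 2:offset + 2 + size]))
        offset += 2 + size
    return hw_id, commands

# example: one "update schedule" command (0x01) with data 0xFF,
# telling the peripheral to fire on every state machine interval
frame = pack_frame(0x10, [(0x01, b"\xff")])
print(unpack_frame(frame))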

Raspberry Pi + Wifi Dongle + USB Webcam = remote control web bot


I will try to explain all of the steps I went through to do this. Since I've already done it, I might miss a few; feel free to add comments and I'll respond and/or update the post. I'm not going to rewrite all of the instructions found on other pages; I will just link to them.

Setup Your Raspberry Pi

OS
I'm assuming you have some Debian-based version of Linux running on your Raspberry Pi. I used a version of the OS compiled with ROS included, because I plan on using ROS ASAP. I got my image from here:

http://www.instructables.com/id/Raspberry-Pi-and-ROS-Robotic-Operating-System/
Direct Download Link of Image (Raspbian)

Connection
Please use an SSH client and Ethernet to set up the initial connection, or if you already have wifi set up, all the better. I'll assume you're talking to your device over Ethernet for now.

WebCam
Plug your webcam into the USB port, then run "lsusb". Hopefully you see a new device there; if not, you will need to search the net on getting your camera to work. Most of the information for Ubuntu or Debian is still relevant, and I find lots of solutions in posts that are not Pi-specific.

If you do have a new device, look at the ID and check the list of supported/verified devices here: http://elinux.org/RPi_VerifiedPeripherals. If it's not there you might be out of luck, but try anyway.

The application I used was mjpg_streamer. Instructions on how to install it and make it start on boot can be found here.
http://www.phillips321.co.uk/2012/11/05/raspberrypi-webcam-mjpg-stream-cctv/

Note that once you have that software installed, I did not use the commands in that post. I reduced the frame rate and used raw YUV. If you have a camera that supports MJPG you might save some CPU cycles by using it, but chasing real time, this is what I did:

mjpg_streamer -i "/usr/lib/input_uvc.so -d /dev/video0 -y -q 40 -r 160x120 -f 10" -o "/usr/lib/output_http.so -p 8081 -c un:pw -w /home/pi/mjpg-streamer/mjpg-streamer/www/" -b

The last argument with the www path threw me for a loop. It is the location of your HTML pages; a whole bunch of really good demo pages come with the package, so if you don't delete them I would use them to test at this point and make sure everything works.

Making it start on boot is also covered in the instructions I linked above.

At this point you should be able to stream your camera over Ethernet. Test different frame rates and watch the frame count on the demo JavaScript page.

I removed the cable from my webcam and soldered the USB connector directly to the board. I might not do that next time, but a 5 ft cable was really getting in the way; a 6 inch cable would be nice. I also took out the mic, which was simply glued to the inside of the webcam.

Wifi
Plug in your wifi dongle. Mine was kinda fat with a huge plastic enclosure, so I stripped that off, which let me plug two devices into my Model B. Again, use lsusb and see if you have a new device; if you do, try ifconfig and see if there is a wlan0 device. At this point I set my device to automatically connect to my wireless router using a static IP address and WPA. I'm going to try to set it up as an access point later, which will be easy on my phone but a pain for my wired computers. Anyway, this is what my entry looks like in /etc/network/interfaces; add a similar entry for connecting to your router.

auto wlan0
allow-hotplug wlan0
iface wlan0 inet static
address 192.168.2.86
netmask 255.255.255.0
broadcast 192.168.2.255
gateway 192.168.2.1
wpa-ssid "belkin.544"
wpa-psk "yourwifipassword"

You will have to change the password and will more than likely have another SSID. Once you add that to your interfaces file, type "ifconfig wlan0 down" then "ifconfig wlan0 up" and see if your wifi is actually connected. The easiest check I can think of is to ping the static IP you assigned to the wifi device from the computer you're using, or you can unplug the Ethernet and try to ping something on your network.

If you have issues, search the internet; these are basic wifi setup steps on any Linux distro.

GPIO
To turn the motors on and off, or left and right, you will need control of your GPIO. I used WebIOPi; the installation instructions can be found on their wiki, along with how to install it as a service that starts on boot.
https://code.google.com/p/webiopi/wiki/INSTALL

This portion was really easy to install, so I won't go into detail on it.

Putting it together
So now we have a webserver listening on one port and GPIO control listening on another. I did not know whether to make a page with the webstream and put it in the WebIOPi www folder, or a page with GPIO and put it in the mjpg_streamer www directory. I think I tried both, and the one that worked best was putting my test.html in the WebIOPi directory. I haven't tested this a whole bunch, but I know that way works.

So create a new page in /usr/share/webiopi/htdocs and then you can begin working on your remote control page. The stream itself is almost too easy, a one-liner:
<img width="320" height="240" src="http://192.168.2.86:8081/?action=stream">
The GPIO was a little more finicky. I could get NOTHING to work until I put webiopi().refreshGPIO(true); at the beginning of my webiopi().ready function. I don't know why, and it is not stated on their page.

Here is the full code of my test.html. I have to switch both motors at the same time since it's a tracked vehicle.

pi@raspberrypi:/usr/share/webiopi/htdocs$ cat test.html
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.1//EN"
"http://www.w3.org/TR/xhtml11/DTD/xhtml11.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en">
 <head>
 <title>zBotPiTest</title>
 <meta http-equiv="content-type" content="text/html; charset=iso-8859-1" />
 <script type="text/javascript">
/* Copyright (C) 2013 Michael McCarty http://www.zonerobotics.com
 */
 </script>
</head>
<body>
<img width="320" height="240" src="http://192.168.2.86:8081/?action=stream"><br/>
<script type="text/javascript" src="/webiopi.js"></script>
<script type="text/javascript">
function makeMUMDButton(id,label)
 {
 var button = $('<button type="button" class="Default">');
 button.attr("id", id);
 button.text(label);
 // wrap in closures so the handlers fire on the events,
 // not immediately when the button is created
 button.bind("mousedown", function(){ mousedown(id); });
 button.bind("mouseup", function(){ mouseup(id); });
 return button;
 }
 webiopi().ready(function()
 {
 webiopi().refreshGPIO(true);
 var content, button;
 content = $("#content");
 button = webiopi().createGPIOButton(21,"21");content.append(button);
 button = webiopi().createGPIOButton(22,"22");content.append(button);
 button = webiopi().createGPIOButton(10,"10");content.append(button);
 button = webiopi().createGPIOButton(9,"9");content.append(button);
 button = webiopi().createGPIOButton(11,"11");content.append(button);
}
);
function mousedown(dir){
switch(dir)
{
case 1:
webiopi().digitalWrite(0,1);
webiopi().digitalWrite(4,1);
break;
case 2:
webiopi().digitalWrite(1,1);
webiopi().digitalWrite(17,1);
break;
case 3:
mouseup(0);
webiopi().digitalWrite(1,1);
webiopi().digitalWrite(4,1);
break;
case 4:
mouseup(0); // all off
webiopi().digitalWrite(0,1);
webiopi().digitalWrite(17,1);
break;
default:
mouseup(0); // just stop
}
 }
// this will turn all motors off !
 function mouseup(){
 webiopi().digitalWrite(0,0);
 webiopi().digitalWrite(1,0);
 webiopi().digitalWrite(4,0);
 webiopi().digitalWrite(17,0);
 }
</script>
 <div id="content" align="left"></div>
<button type="button" class="Default" onmousedown="mousedown(1);" onmouseup="mouseup(1);" id="1" label="f">f</button>
<button type="button" class="Default" onmousedown="mousedown(2);" onmouseup="mouseup(2);" id="2" label="f">b</button>
<button type="button" class="Default" onmousedown="mousedown(3);" onmouseup="mouseup(3);" id="3" label="f">l</button>
<button type="button" class="Default" onmousedown="mousedown(4);" onmouseup="mouseup(4);" id="4" label="f">r</button>
</body>
</html>

I didn't use the WebIOPi wrappers for creating buttons because they did not seem to work with arguments, and I would rather pass an argument to my mouseup and mousedown than create a different function for each button.

Wrap Up
The other thing you will note is that at startup the GPIOs are inputs and could be in a bad state. You can't wait for your webpage to load to fix them. Fortunately, WebIOPi has a way of setting them up at start in its config file; a sketch follows, and you can read about it here:
https://code.google.com/p/webiopi/wiki/CONFIGURATION#GPIO_setup
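For my pins that would be something like this in the [GPIO] section of the WebIOPi config (pin numbers from my test.html above; double-check the exact file and syntax on their wiki):

[GPIO]
# drive every motor pin low as soon as WebIOPi starts
0  = OUT 0
1  = OUT 0
4  = OUT 0
17 = OUT 0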

Apart from that, my bot runs one side for a few seconds when I first start up, and when the battery dies it remains in whatever state it was in, even if that state was going forward. To fix the first part I plan to add a circuit that does not activate my motor drivers until a timer has expired. The second problem I also plan on solving with a circuit, so that when the battery gets close to low the Pi is notified and can set the GPIO and notify the user before shutting off.

Hardware
I'm using an LM7805 driven by two CR123 lithium batteries. I can drive around for about 30 minutes before my batteries die. It does not take long to recharge them, and I have an extra pair always charging. The batteries themselves are very lightweight and are only about $1 each on eBay. There is a big cap on the 5 V side of the 7805 in case there is a dip during motor drive; this way the circuit does not lose power when we dump a lot of current into the motors.

My motor drivers are TA7291S/SG because that's what I have around. I soldered them dead-bug style to a female header. This lets the motor drivers and power circuitry stay with the body and simply plug onto the Pi GPIO header.

I'm using the Pololu 30T track set http://www.pololu.com/catalog/product/1416 and some micro motors I got off eBay a while ago. The case was built from plexiglass to a design I made years ago.

Pictures

Update: Speed/FPS Issues!

I was going crazy trying to figure out: if USB 2.0 runs at "480 Mbit/s (effective throughput up to 35 MB/s or 280 Mbit/s)", why the hell does my frame rate drop to almost nothing at 320x240? Then it occurred to me that even though what goes to the webpage is pretty small, my camera does not do JPEG compression, so everything going through the USB port is raw, and the USB port is shared with the wifi. Those two devices together could quickly max out the bandwidth.

Just a quick math note: the raw data going through my port is 320 x 240 x 2 = 153,600 bytes per frame, and at 10 fps that is 1,536,000 bytes, about 1.5 MB/s …

Yeah, I take all that back; the USB should not be the bottleneck. It might be the two devices on the same port. As a test I used a raw USB cam on my normal PC and was getting real time at 640x480. So I don't know; I'll try to test a cam that supports H.264 or MJPG streaming once I get one. For now it looks like low resolution and 150Kbs will have to do. Maybe the Raspberry Pi CSI camera will solve all of this?
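Here's that back-of-the-envelope check as a quick Python note (figures from above; the 35 MB/s is the effective USB 2.0 throughput quoted earlier):

def raw_video_bandwidth(width, height, bytes_per_pixel, fps):
    # raw (uncompressed) video bandwidth in bytes per second
    return width * height * bytes_per_pixel * fps

usb2_effective = 35e6  # ~35 MB/s effective USB 2.0 throughput

bw = raw_video_bandwidth(320, 240, 2, 10)  # raw YUV, 2 bytes/pixel, 10 fps
print("%.2f MB/s of ~%.0f MB/s available" % (bw / 1e6, usb2_effective / 1e6))
# about 1.5 MB/s -- nowhere near the USB 2.0 ceiling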


Raspberry Pi vs Small MCU type Solutions

I have known about the Raspberry Pi for quite some time. So why did I spend so much time working with embedded systems like PIC and STM32? Initially I was drawn to the extremely low cost and power requirements of those devices. My initial estimates were for a controllable tracked video bot for as little as $25, which is still realistic.

I'm going to weigh the costs and benefits of each system, and we will see they are different beasts, kind of like comparing apples to oranges. So yes, I think both systems are very desirable. The Pi gets me where I want to be faster, and it can support ROS. To me ROS makes all the difference, so I'm going down the Pi path for that reason. But I'm going to release all of my code for the STM and everything I have worked on so far, in case someone else wants to pursue the embedded path.

The last thing I completed was a phone-controlled bot based on the STM32F4 Discovery board. I was using a Bluetooth module and was getting about a 250x100 frame every 2 seconds on my phone. I have a full API that's easy to use, and all of the source is open and free here. There are Android and pure Java clients that get the video feed. Even though the HC-05 module boasts 1.3 Mbps, it also says 160 kbps, which is the max speed I was able to push through it. If anyone figures out the actual max speed, please let me know!

https://code.google.com/p/stm-phone-camera-bot/source/checkout

With all of this working at a frame every 2 seconds, I still had in the back of my head that JPEG at 1/16 compression could make that 8 frames per second. I was also planning to implement the Nordic nRF2401, which gives a possible 2 Mbps and would greatly increase the frame rate. What I would end up with would be pretty spectacular, but it would not run ROS.

So let's compare everything.

Assuming!
The motor control circuit is the same
The power/charge circuit is the same
And disregarding passive components as negligible

Plan A:

STM32F4 with a Bluetooth module connected directly to the telephone.
Difficulty to implement: Moderate
Todo: Interface with the OV2640, more drivers for the HC-05
Modular: Low

Product         Cost($)   dBm   Range(ft)   RAM(MB)   Clock(MHz)   Mbps   Volts   mA    Watts
STM32F407VET6   10.99     -     -           0.125     168          -      3.3     87    0.287
HC-05            5.66     4     30          -         -            -      3.3     50    0.165
OV2640           9.99     -     -           -         -            -      3.3     45    0.149
sum             26.64     4     30          0.125     168          0      9.9     182   0.601

Plan B:

STM32F4 based, with a Nordic chip. For this to work I would need to make a dongle board converting the Nordic radio to USB. This would be very hard and possibly slow things down considerably, and I would need chips on both ends to control everything. I would use a lower-cost STM32 for the dongle end.
Development: Hard
Todo: Dongle, SPI nRF interface driver, camera driver

Product          Qty   Unit($)   Total($)   dBm   Range(ft)   RAM(MB)   Clock(MHz)   Mbps   Volts   mA    Watts
STM32F407VET6    1     10.99     10.99      -     -           0.125     168          -      3.3     87    0.287
STM32F051K8U6    1      3.17      3.17      -     -           0.0625    48           -      3.3     22    0.073
OV2640           1      9.99      9.99      -     -           -         -            -      3.3     45    0.149
FTDI-FT230XS-R   1      2.04      2.04      -     -           -         -            2      5.5     8.3   0.046
nRF2401AG        2      4.2       8.4       -     -           -         -            1      3.6     18    0.065
2.4GHz Antenna   2      3.25      6.5       -     984         -         -            -      -       -     0
sum              8     33.64     41.09      0     984         0.1875    216          1      19      180   0.619

Plan C:

Use a Raspberry Pi (Model A) and a USB camera, streaming over wifi. Of course there are obvious issues with this, like how to embed a USB dongle or webcam into the final product. For a few units I can dismantle and solder, but I would need a better solution for mass production. I will also need to integrate a USB hub, which was not necessary in the other designs.
Difficulty: done
Todo: Add configuration, create circuit board, test

Product           Qty   Unit($)   Total($)   dBm   Range(ft)   RAM(MB)   Clock(MHz)   Mbps   Volts   mA     Watts
rPi Model A       1     35        35         -     -           256       700          -      5       500    2.5
Logitec C200      1     3.3       3.3        -     -           -         -            -      5       200    1
RTL8188cus        1     4.75      4.75       -65   300         -         -            150    3.3     600    1.98
2.4GHz Antenna    1     3.25      3.25       -     -           -         -            -      -       -      0
TUSB2046BIVFRG4   1     3.2       3.2        -     -           -         -            -      3.3     40     0.132
8GB microSD       1     5.95      5.95       -     -           -         -            -      -       -      -
sum               6     55.45     55.45      -65   300         256       700          150    16.6    1340   5.612

Summarizing, and including the Model B since the Model A does not always seem to be available.

Product   Total($)   dBm   Range(ft)   RAM(MB)   Clock(MHz)   Mbps   Volts   mA     Watts
STM BT    26.64      4     30          0.125     168          0.16   9.9     182    0.601
STM NRF   41.09      0     984         0.1875    216          1      19      180    0.619
RPI - A   55.45      -65   300         256       700          150    16.6    1340   5.612
RPI - B   71.45      -65   300         256       700          150    5       1540   7.7

The big things to note are 150 Mbps (Pi) vs 1 Mbps (STM), and roughly 1.5 A (Pi) vs 0.18 A (STM). So, all else being the same, the STM plan would last about 8 times as long as the Pi on the same battery charge. For example, the two CR123 rechargeables at 800 mAh I'm using give the Pi 32 minutes; the STM would last about 4.4 hours on the same two light batteries. However, when we are pumping the motors continually, the amps used by the Pi might not matter much: if our motors (FA-130s, for example) draw from 200 mA up to 2.2 A each, call it 2 A, then we need a larger battery solution anyway. The 150 Mbps vs 1 Mbps is a no-brainer if you are trying to push any decent video size through the device.
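A quick sanity check of those runtime numbers (capacities and draws as quoted above, assuming an ideal constant-current discharge):

def runtime_minutes(capacity_mah, draw_ma):
    # ideal runtime: capacity divided by a constant current draw
    return capacity_mah / draw_ma * 60

pack = 800  # two CR123 rechargeables, 800 mAh
print(runtime_minutes(pack, 1500))  # Pi at ~1.5 A  -> 32 minutes
print(runtime_minutes(pack, 180))   # STM at 180 mA -> ~267 minutes, ~4.4 hours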

At a cost of only $14 more, the Pi seems like the clear way to continue. The STM plans would only support one fixed set of peripheral devices (camera, wireless, etc.), whereas the Pi has tons of drivers and a huge list of already-supported devices.

So with all that in mind, I'm moving forward knowing that I will be pulling something like 3 A and will need a large battery solution. Since the motors and several other parts of the circuit would be the same regardless of the "brain", we can always go back. But with things like ROS and Ubuntu I don't think there is going to be any turning back. The most I might do is use my STM32 knowledge to integrate smart peripherals into the Pi.

How to Stream video from your Raspberry Pi and control GPIO

There are a plethora of packages out there for streaming video. The one I found that gets closest to real time is mjpg_streamer. That does not mean it's the best, or that you cannot do better with another package; it just means this is the one I got working best so far.

To install, follow the instructions here:
http://www.tanzilli.com/video_streaming
If that's not enough info you can look here too:
http://www.phillips321.co.uk/2012/11/05/raspberrypi-webcam-mjpg-stream-cctv/

The final command I used runs a very low image size to give me about 10 fps:

mjpg_streamer -i "/usr/lib/input_uvc.so -d /dev/video0 -y -r 160x120 -f 10" -o "/usr/lib/output_http.so -p 8081 -w /home/pi/mjpg-streamer/mjpg-streamer/www/" -b

From there you can browse to your Pi at yourpiip:8081 and you should see pages streaming your video. The tricky thing not mentioned on those pages is that the last -w argument points at the web pages, which the instructions have you delete. Save those pages, or when connecting you will get a "404 not found" because you pointed mjpg_streamer at pages that don't exist. That's why I have the full path in my argument, as you can see.

From there, GPIO was a snap. I used WebIOPi; you can find the instructions here:
https://code.google.com/p/webiopi/wiki/INSTALL
And yes, you can have them both running at once (camera and GPIO) as long as they are on different ports.

I installed both as services, so my Pi starts right up, streams video, and opens its GPIO. I hooked up some lights and put them in front of the camera; now I can toggle them on and off from anywhere in the world (since I forwarded the ports) and watch them.

The stock WebIOPi page won't let me drive, since I have to turn on two GPIOs at the same time to go forward or back, but it is good for testing. Next I'm moving on to making my own JavaScript GPIO control; then I can connect it to the chassis and drive it around!

Raspberry Pi, Goals

Through unfortunate events I burnt up my STM32F4 Discovery, so I thought to myself: now would be a good time to load up the Raspberry Pi and see what I can do with it.

I found an Instructable with a link to Raspbian with ROS built in. http://www.instructables.com/id/Raspberry-Pi-and-ROS-Robotic-Operating-System/step2/Writing-the-image-to-the-SD-card/

Then I found instructions on how to set up a wifi webcam that runs off solar batteries. The camera is served across the web to any browser. http://www.instructables.com/id/Raspberry-Pi-Completely-Wireless-IP-Camera-Solar/

To manipulate the GPIO there are several different methods; there is a very exhaustive page that shows how to do it in basically any language you want. It looks like PWM might be out of scope for the first run, but controlling my already-built motor drivers should be easy. http://elinux.org/Rpi_Low-level_peripherals

GOAL:

Putting this all together, we should have what we wanted very quickly: a robot that can drive around on wifi, providing a video feed to a client. It looks like, just by installing software, the client can be a webpage. I will have to find a way to integrate the motor control and the video feed on the same page. If I can accomplish this, I will be far past where roughly two years of MCU work and integrating directly with the camera have taken me.

Once this works:

Daughter board
– Gets Power from batteries
– Charges Batteries
– Monitors Power
– Motor Drivers (4)
– Servo Drivers
– LED and Laser Connectors
– Camera Connector ?

Body
– Build a body that fits batteries, pi and daughterboard
– Construct and test

Other thoughts
I don't know if we could take the schematic of the Pi, strip out the stuff we don't want and include the stuff we do, such as wifi, motor drivers, etc. If we could make our own version of the rPi, that would be better than fitting a square peg into a hexagonal hole.

There is a ton of interest in the rPi right now; it is buzzing all over the internet and every Tom, Dick and Harry is making accessories for it. It will be nice to ride that wave instead of using MCUs that are less renowned. I can say that if we make accessories they should cost less than the Pi itself; if I make a full video bot package we should be able to keep the cost below $100.

Unpleasant Bluetooth Surprises

Having used these Bluetooth modules a lot, I have become pretty familiar with them. There are several different models out there, but most of them are banging on the BC417 core for Bluetooth.

The idea was that we could run at the maximum baud the AT commands allow, 1,382,400. With that we could do about 3.5 frames a second raw at 251x100 and 2 bytes per pixel, in a perfect world with no overhead of course. Then I would move to a camera with built-in JPEG compression, get a roughly 1/16 compression ratio, and be at 56 fps, at which point I could start trading up to a bigger image and get closer to 20 fps.

Wrong! After setting the Bluetooth to a higher baud, 921,600, I maxed out at 160 kbps received. This did not make any sense to me, so I looked at the module's specs and saw:

Rate: Asynchronous: 2.1Mbps(Max) / 160 kbps.

There was another spec for synchronous mode, but USART is async. I really don't know why these are written this way. Maybe it's 2.1 Mbps upload and 160 kbps down; maybe it's 2.1 Mbps on the USART side and 160 kbps max on the Bluetooth side; or maybe it's simply saying that the max is 2.1 Mbps and the other side needs at least 160 kbps.

So I dove deeper into the BC417 datasheet. It shows the UART can communicate as fast as 2,764,800 baud, but if you look at the RFCOMM section (9.2.1) you see it has a 350 kbps max data rate. I have no idea how these chips can take data in at a huge rate and send it out at a much slower one; maybe I would see a buffer overflow if I pushed too much. Actually, I think I did, at about 700+ bytes.

Bottom line: the max is 350 kbps for what we are using, and all I can seem to get out of it is 160 kbps, which may be a limitation of how they implemented the host MCU on the Bluetooth module; the datasheet for the BT module is confusing (to me). Even if we could get 350 kbps, that would be only 0.87 fps raw (a frame every 1.14 s), or possibly ~13 fps with JPEG compression.

What I have working now is 160 kbps at a decent range. That gives me 0.39 fps, a frame every 2.56 seconds, or roughly 5 fps with compression. If I put my API on the computer and use a serial adapter I can get 921,600 baud (the max of my Prolific USB converter) and see 2 fps. 2 fps is much better than a frame every 2.56 seconds, and it still sucks!
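All the frame rate numbers in this post come from the same arithmetic; here it is as a small Python sketch (251x100 frames at 2 bytes per pixel; the 1/16 JPEG ratio is still an assumption):

FRAME_BITS = 251 * 100 * 2 * 8  # one raw 251x100 frame at 2 bytes/pixel, in bits

def fps(link_bps, jpeg_ratio=1):
    # frames per second over a link, optionally assuming compression
    return link_bps / (FRAME_BITS / jpeg_ratio)

print(fps(1382400))      # ~3.4 fps raw at the max AT-command baud
print(fps(350000))       # ~0.87 fps raw at the RFCOMM ceiling
print(fps(350000, 16))   # ~14 fps assuming 1/16 JPEG compression
print(fps(160000))       # ~0.40 fps raw at the 160 kbps I actually get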

Options?

I could use the Nordic nRF2401 chips and implement a dongle as I had in mind before. I know Android devices are getting better at using USB converters and such. This would give me decent performance.

Or a Raspberry Pi with wifi.

Indecision and setbacks

At the point where I had video working and had migrated everything to the F4, where all I had left to do was move it to the Android phone (and test) to achieve my milestone, I got seriously sidetracked by ROS.

I seem to run into this stumbling block again and again: I get so far along in a project that I already have the next generation in mind, and it is so much better (in my mind) that I'm prone to throwing away large portions of the previous design and code to move to the "next thing" without finishing the almost-completed product or milestone. I am going to stop doing that. I see now that by using ROS and the open libraries out there my robot could go much further than I alone could ever take it; however, I only learned that lesson by going this far down the road I took. I'm sure not all of the lessons have been learned yet, so with such a small amount of development left to meet my modest milestone, I'm going to revert to my old code and meet it before going off on any tangents.

Tangents

I can rest a little knowing that all of the ideas I had will be written down and I can pickup where I left off.

Basically everything I have done up until now has already been done, and much better, in ROS by the amazing people at Willow Garage. You could sum up my most intricate design as a high-power micro pumping video out over rosserial to rosjava_core and viewing it in rviz. Given 20 years, maybe I could have written all of that code myself. There is no "game" in ROS, it is real robotics, but we can create a subscriber or service that does what we want.

So to sum up how I plan to move forward with ROS after meeting my last milestone is this list.

  1. Investigate existing hardware that works with rosserial and could use a camera. It looks like ROS works on embedded Linux or Arduino (but not all Arduinos yet). Maybe the Raspberry Pi is the best option, since it already does wifi and embedded Linux and has drivers for the camera.
  2. Embed ROS or rosserial into the device, depending on which one you choose. If we go the Pi route we can just run ROS or rosjava right on the Pi itself. Then we basically have a full robotics platform for $35 plus an SD card and wifi; we would want to make it headless eventually.
  3. Get the device working with a camera, then publish a camera node in ROS.
  4. Set up the motors and subscribe to a joystick publisher. Joystick to motors.
  5. Create clients to view/subscribe to the camera node and publish the joystick node. Maybe we could use rosjava or some other available software and simply point it at the rosmaster? Either way we want the already predefined and usable ROS format, and we can use the available ROS tools to test and debug all kinds of stuff.
  6. Create an easy-to-use API for beginners so they can easily use our robot and develop their own applications. I know ROS has a learning curve; maybe we could wrap it and make it simpler.

So, I'm going back to meeting my milestone even if it sets me back from ROS for a few more weeks. I want to make sure I've learned all the lessons of my current path and not just 85% of them. But then I'm going full speed with ROS and more than likely the Raspberry Pi, which is where I should have started to begin with, imho.

ROS/Pi Cost:
Raspberry Pi $35
Wifi WL-700N-RXS $11 (omg, are you kidding me? 150 Mbps!)
Camera board $25 (official rPi camera, expected to be released soon)
Body $30?

All of that adds up to $101; my goal was always below $100, and that is really close. With the Pi there is the possibility of multiple cameras, and there should be a huge reduction in the amount of development. The bandwidth is almost 200x, the speed is about 10x, but the price is only about 2x (for the main board vs. the STM32F4).

Granted, if I spent years developing the STM32F103 into something usable it could possibly cost less than $20 for the whole board, but in my opinion that's not worth it.

And if I ever want to, I can always go back to the milestone (video and control through the phone) and pick up where I left off.

http://wiki.jigsawrenaissance.org/ROS_on_RaspberryPi

http://www.raspberrypi.org/archives/tag/camera-board

http://pingbin.com/2012/12/setup-wifi-raspberry-pi/

http://www.ros.org/wiki/rosserial


A First Glance at the zBot API

I've created my first "Java" tutorial. It's still not quite finished; I want to add motor and GPIO control, plus a little more tinsel. But I thought it might be handy to have it here so I can show it to other people.

Basically, if you create this class in your Java main and call startBot(), the bot will set itself up and start polling; then, as soon as an image is ready, you will get a callback, onNewImage(), where you can display it on the screen. (There is a small main() sketch at the bottom of the class.)

I’m going to add a joystick and some more output to the java frame before I’m done with this example.

// These imports assume the RXTX serial library (gnu.io); the zBot API
// classes (ZBot, IBotListener, ZImage, EzReadVars) come from the zBot
// jar and are assumed to be on the classpath.
import gnu.io.CommPort;
import gnu.io.CommPortIdentifier;
import gnu.io.SerialPort;

import java.awt.BorderLayout;
import java.awt.Dimension;
import java.awt.image.BufferedImage;
import java.awt.image.DataBufferUShort;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

import javax.swing.ImageIcon;
import javax.swing.JFrame;
import javax.swing.JLabel;

public class SimpleBotTutorial implements IBotListener{

// baud was not defined in the original listing; 921600 is the serial
// speed mentioned elsewhere on this blog
int baud = 921600;

ZBot zBot = null;
InputStream inS;
OutputStream outS;

JFrame frame = new JFrame("Video Feed");
JLabel JLabelImage = new JLabel();

public void logI(String str)
{
      System.out.println(str);
}

private boolean getStreams()
{
try
{
      CommPortIdentifier portIdentifier;
      portIdentifier = CommPortIdentifier.getPortIdentifier("COM3");

      CommPort commPort = portIdentifier.open(this.getClass().getName(),2000);
      SerialPort serialPort = (SerialPort) commPort;
      serialPort.setSerialPortParams(baud,SerialPort.DATABITS_8,SerialPort.STOPBITS_1,SerialPort.PARITY_NONE);

      inS = serialPort.getInputStream();
      outS = serialPort.getOutputStream();
      return true;
}
catch (Exception e)
{
      // TODO Auto-generated catch block
      e.printStackTrace();
      // TODO FAIL EXIT
      return false;
}

}

private void setupGui()
{
      frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
      frame.setMinimumSize(new Dimension(251,150));
      frame.getContentPane().add(JLabelImage, BorderLayout.CENTER);
      frame.pack();
      frame.setVisible(true);
}

public boolean startBot()
{
      // if we cant get the stream no reason continuing
      if(!getStreams())
      {
            return false;
      }

      // setup our zbot and pass it the streams
      zBot = new ZBot( inS, outS );
      // tell it we implement its callbacks for video and events
      zBot.setIbl(this);

      // wait a second for a connection, then check
      try {
            Thread.sleep(5000);
      } catch (InterruptedException e) {
            // TODO Auto-generated catch block
      e.printStackTrace();
      }

      // check to see if its ready or we timed out
      if(zBot.getState() < 2)
      {
            // fail
            zBot.pollEngine.stopSvc();
            // close our streams
            try {
                  inS.close();
                  outS.close();
            } catch (IOException e) {
                  // TODO Auto-generated catch block
                  e.printStackTrace();
            }
      // let caller know we failed
      return false;
      }

      // setup the gui to get our data
      setupGui();

return true;
}

public void printAllBotValues()
{
      this.logI("=================================================");
      for ( EzReadVars rv : EzReadVars.values())
      {
            this.logI("Var: " + rv + " Value: " + zBot.getValue(rv));
      }
}

public void rawToImage(byte[] rawData, int w, int h)
{
      // make new container for the image
      BufferedImage NewImg = new BufferedImage(w, h, BufferedImage.TYPE_USHORT_565_RGB);
      // convert to short for raster
      short[] picAsShort = new short[w*h];
      ByteBuffer.wrap(rawData).order(ByteOrder.BIG_ENDIAN).asShortBuffer().get(picAsShort);
      // put directly in raster of image
      short[] imgData = ((DataBufferUShort)NewImg.getRaster().getDataBuffer()).getData();
      System.arraycopy(picAsShort, 0, imgData, 0, picAsShort.length);
      // set our jlabel to the image
      this.JLabelImage.setIcon(new ImageIcon(NewImg));
      // refresh
      frame.repaint();
}

@Override
public void onNewImage(ZImage image) {
      // turn it into a image and display it
      rawToImage(image.getData(),image.getWidth(),image.getHeight());
}

@Override
public void onNewEvent(int event) {
      // TODO Auto-generated method stub
}
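
// Hypothetical usage sketch (not part of the original listing): create
// the tutorial object and start the bot; onNewImage() then keeps the
// JFrame updated with video.
public static void main(String[] args)
{
      SimpleBotTutorial tutorial = new SimpleBotTutorial();
      if(!tutorial.startBot())
      {
            tutorial.logI("Bot failed to start, check the COM port");
            return;
      }
      // dump the bot's current variables once we are connected
      tutorial.printAllBotValues();
}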
}

ZBot, Meet ROS (Robot Operating System)

There are several things I've thought of that I realize will take me a long time to do: lasers, navigation, facial recognition, cloud workload balancing and much more. As of now I'm basically building an easy-to-use RC video tank that I'm going to put a game on top of.

For some reason I decided to investigate ROS (Robot Operating System) more, and now I'm really starting to see how it works. For this reason I've put off active development a little while I investigate ROS and make sure my system can be implemented as a ROS package. I believe that using ROS will remove decades of futile "boilerplate" development and give me quick access to all of the things that initially got me interested in robotics.

ROS is amazing; it has really blown my mind to see how they have put all of these things together into a working system. In ROS everything is super loosely coupled. Your "robot" is simply a system of talkers, listeners and thinkers, or in ROS terms publishers, subscribers and services. I've only got about a day of research in me, but I think I have the basic concept. A sensor will talk, "publishing" its information at an interval and registering with a "master". Another node, which could drive a motor or whatever, can find that "publisher" via the "master" and subscribe to it; in this way your eyes, ears and sensors are all separate. As far as I can tell a "service" is something that can process hefty data and respond with an answer, like the logic that glues the pieces together.
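To make the publisher idea concrete, here's a minimal rospy sketch (a toy of my own; the node, topic and read_sensor() are made up):

#!/usr/bin/env python
import rospy
from std_msgs.msg import Float32

def read_sensor():
    # stand-in for real hardware
    return 1.0

def sensor_node():
    # a "sensor" that publishes a reading at a fixed interval;
    # any number of nodes can subscribe to the 'range' topic
    rospy.init_node('range_sensor')
    pub = rospy.Publisher('range', Float32, queue_size=10)
    rate = rospy.Rate(10)  # 10 Hz
    while not rospy.is_shutdown():
        pub.publish(Float32(read_sensor()))
        rate.sleep()

if __name__ == '__main__':
    sensor_node()

A motor node would just be the mirror image: rospy.Subscriber('range', Float32, callback).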

So how does this all fit in with ZBot? Well, ROS runs under Linux, so we can't just cram ROS into our STM32. We could abandon our platform and use a $30 chip that runs Linux and go from there (this is what I plan on experimenting with on my Raspberry Pi), but we cannot meet our goals with a system that merely "runs Linux". So at some point we need to create a "bridge" from our driver that turns our motors and our camera into ROS topics and publishes them to the ROS master. This opens the door to more sensors, things such as odometers, position sensors, etc. But more importantly it opens the door to using those new things without having to write the navigation code and such.

Right now I think the main development will continue as planned; we still want an RC tank with video, but I'm going to be darn sure I'm not spending any time replacing functionality that exists in ROS. Once I get to a point where everything works as I want it to, I will write a driver (bridge) for ROS. It will be similar to the NXT driver for ROS. We simply need to add topics for reading our camera and controlling our motors.

ROS actually takes it a step further with their Android "rosjava_core" and their partnership with Google. Using these tools you can take a cell phone and open all of its sensors up as ROS topics. I'm currently working on this also. So if my ZBot is a ROS topic and my phone is too, then I can simply place my phone on top of my ZBot and retrieve GPS, accelerometer data and a faster video stream from the phone itself. All of this data gets shipped over wifi or 4G to a ROS master running on some Linux server that shares control of the zbot/phone (with all these added sensors). At this point the ZBot becomes like a "phone chariot" and we have a robot that can go anywhere (given 4G/GPS). Of course all of this could be opened up to other nodes, such that if there is an alarm the robots investigate, plus a service that ships the data to Google for cloud processing. Basically everything I could imagine is possible, and I'm now realizing I don't have to do it all!

So I'll try not to get too sidetracked by ROS, but I want to be sure my ZBot is a good fit now so I'm not kicking myself in the butt later. To do that I need to dig into the ROS system a little and set one up, etc.

I should be able to meet my milestone of "video on phone, controllable bot" before embarking on the ROS discovery, and I really should, so I think that's what I'll do.