
Unpleasant Bluetooth Surprises

I have been working with these Bluetooth modules long enough to become pretty familiar with them. There are several different models out there, but most of them are built around the BC417 core.

The idea was that we could run the link at the maximum baud rate the AT commands allow, 1,382,400 baud. At that rate we could push about 3.5 frames per second of raw 251×100 video at 2 bytes per pixel, in a perfect world with no overhead. Then I would move to a camera with built-in JPEG compression, get roughly a 1/16 compression ratio, and jump to around 56 fps, at which point I could trade some of that for a bigger image and still land near 20 fps.

Wrong! After setting the Bluetooth module to a higher baud rate of 921,600 I maxed out at about 160 kbps on the receive side. That made no sense to me, so I looked at the module's specs and saw:

Rate: Asynchronous: 2.1Mbps(Max) / 160 kbps.

There was another spec for synchronous mode, but USART is asynchronous. I really don't know why the specs are written this way. Maybe it's 2.1 Mbps up and 160 kbps down, maybe it's 2.1 Mbps on the USART side and 160 kbps on the Bluetooth side, or maybe it simply means the maximum is 2.1 Mbps and the other side needs at least 160 kbps.

So I dove deeper into the BC417 datasheet. It shows the UART can run as fast as 2,764,800 baud, but if you look at the RFCOMM section (9.2.1) you see a 350 kbps maximum data rate. I have no idea how these chips are supposed to take data in at a huge rate and push it out at a much slower one; maybe I would see a buffer overflow if I pushed too much. Actually, I think I did at around 700+ bytes.

Bottom line: the maximum is 350 kbps for what we are using, and all I can actually get out of it is 160 kbps, which may be a limitation of how the host MCU on the Bluetooth module is implemented; the module's datasheet is confusing (to me). So even if we could get 350 kbps, that would only be 0.87 fps raw (a frame every 1.14 seconds), or possibly ~13 fps with JPEG compression.

What I have working now is 160 kbps at a decent range. That gives me 0.39 fps, or a frame every 2.56 seconds, which would be roughly 5 fps with compression. If I put my API on the computer and use a serial adapter I can get 921,600 baud (the max of the Prolific USB converter) and see about 2 fps. Two fps is much better than a frame every 2.56 seconds, and it still sucks!
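
To keep all of these numbers straight, here is the back-of-the-envelope arithmetic from this post in code form. It is only a sketch: it assumes the 251×100 frame at 2 bytes per pixel and the ~1/16 JPEG ratio from above, and it ignores protocol overhead and start/stop bits.

// Rough fps estimates for a 251x100 RGB565 frame over the link rates discussed above.
public class FpsEstimate {
      static final int FRAME_BITS = 251 * 100 * 2 * 8; // 401,600 bits per raw frame
      static final double JPEG_RATIO = 16.0;            // assumed ~1/16 compression

      static void report(String link, double bitsPerSecond) {
            double rawFps = bitsPerSecond / FRAME_BITS;
            System.out.printf("%-26s raw %.2f fps (%.2f s/frame), jpeg ~%.1f fps%n",
                        link, rawFps, 1.0 / rawFps, rawFps * JPEG_RATIO);
      }

      public static void main(String[] args) {
            report("1,382,400 baud (ideal)", 1382400);   // ~3.4 fps raw, ~55 fps jpeg
            report("350 kbps (RFCOMM max)", 350000);     // ~0.87 fps raw
            report("160 kbps (measured)", 160000);       // ~0.40 fps raw, ~2.5 s/frame
            report("921,600 baud (USB serial)", 921600); // ~2.3 fps raw before framing overhead
      }
}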

Options?

I could use the Nordic nRF24L01 chips and implement a dongle as I had in mind before. I know Android devices are getting better at working with USB converters and the like. This would give me decent performance.

Raspberry Pi with WiFi.

Indecision and setbacks

Right at the point where I had video working and had migrated everything to the F4, with nothing left to do but move it to the Android phone (and test) to hit my milestone, I got seriously sidetracked by ROS.

I seem to run into this stumbling block again and again: I get so far along in the project that I already have the next generation in mind, and it's so much better (in my mind) that I'm prone to throwing away large portions of the previous design and code to move to the “next thing” without finishing the almost-completed product or milestone. I am going to stop doing that. I see now that by using ROS and the open libraries out there my robot could go much further than I alone could ever take it; however, I only learned that lesson by going this far down the road I took. I'm sure not all of the lessons have been learned yet, so with such a small amount of development left to meet my modest milestone, I'm going to revert to my old code and meet that milestone before going off on any tangents.

Tangents

I can rest a little knowing that all of the ideas I had will be written down, and I can pick up where I left off.

Basically everything I have done up until now has already been done, and much better, in ROS by the amazing people at Willow Garage. You could sum up my most intricate design as a high-powered micro pumping video out over rosserial to rosjava_core and viewing it in rviz. Given 20 years, maybe I could have written all of that code myself. There is no “game” in ROS, it is real robotics, but we can create a subscriber or service that does what we want.

So, to sum up, here is how I plan to move forward with ROS after meeting my last milestone:

  1. Investigate existing hardware that works with rosserial and could use a camera. It looks like ROS works on embedded Linux or Arduino (but not all Arduinos yet). The Raspberry Pi may be the best option, since it can already do WiFi and embedded Linux and would have drivers for the camera.
  2. Embed ROS or rosserial into the device, depending on which one is chosen. If we go the Pi route we can run ROS or rosjava right on the Pi itself. Then we basically have a full robotics platform for $35 plus an SD card and WiFi; we would want to make it headless eventually.
  3. Get the device working with a camera, then publish a camera node in ROS.
  4. Set up the motors and subscribe to a joystick publisher. Joystick to motors.
  5. Create clients to view/subscribe to the camera node and publish the joystick topic (see the sketch after this list). Maybe we could use rosjava or some other available software and simply point it at the ROS master? Either way we get the predefined, well-understood ROS message formats, and we can use the existing ROS tools to test and debug all kinds of stuff.
  6. Create an easy-to-use API for beginners so they can easily use our robot and develop their own applications. I know ROS has a learning curve; maybe we could wrap it and make it simpler.
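
To give item 5 some shape, here is a minimal sketch of what the joystick side could look like as a rosjava node. It follows the standard rosjava_core “Talker” pattern; the node name, the cmd_vel topic, the Twist message and the exact class and method names are assumptions on my part, not working code from this project.

import org.ros.concurrent.CancellableLoop;
import org.ros.namespace.GraphName;
import org.ros.node.AbstractNodeMain;
import org.ros.node.ConnectedNode;
import org.ros.node.topic.Publisher;

// Hypothetical joystick publisher: reads stick values and publishes them as
// geometry_msgs/Twist on "cmd_vel" so a motor node on the robot can subscribe.
public class JoystickTalker extends AbstractNodeMain {

      @Override
      public GraphName getDefaultNodeName() {
            return GraphName.of("zbot/joystick_talker"); // node name is an assumption
      }

      @Override
      public void onStart(final ConnectedNode connectedNode) {
            final Publisher<geometry_msgs.Twist> publisher =
                        connectedNode.newPublisher("cmd_vel", geometry_msgs.Twist._TYPE);

            connectedNode.executeCancellableLoop(new CancellableLoop() {
                  @Override
                  protected void loop() throws InterruptedException {
                        geometry_msgs.Twist twist = publisher.newMessage();
                        twist.getLinear().setX(readThrottle());  // forward/back
                        twist.getAngular().setZ(readSteering()); // turn rate
                        publisher.publish(twist);
                        Thread.sleep(50); // ~20 Hz
                  }
            });
      }

      // Placeholders for whatever joystick source the client ends up using.
      private double readThrottle() { return 0.0; }
      private double readSteering() { return 0.0; }
}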

So, I'm going back to meeting my milestone even if it sets me back from ROS for a few more weeks. I want to make sure I've learned all the lessons of my current path and not just 85% of them. But after that I'm going full speed with ROS and, more than likely, the Raspberry Pi, which is where I should have started to begin with, imho.

ROS/Pi cost:
Raspberry Pi $35
WiFi WL-700N-RXS $11 (omg, are you kidding me? 150 Mbps!)
Camera board $25 (official rPi camera, expected to be released soon)
Body $30?

All of that adds up to $101; my goal was always below $100, and that is really close. With the Pi there is the possibility of multiple cameras, and there should be a huge reduction in the amount of development. The bandwidth is almost 200x and the speed about 10x, but the price is only about 2x (for the main board versus the STM32 F4).

Granted, if I spent years developing the STM32F103 into something usable it could possibly cost less than $20 for the whole board, but in my opinion that's not worth it.

And if I ever want to, I can always go back to the milestone (video and control through the phone) and pick up where I left off.

http://wiki.jigsawrenaissance.org/ROS_on_RaspberryPi

http://www.raspberrypi.org/archives/tag/camera-board

http://pingbin.com/2012/12/setup-wifi-raspberry-pi/

http://www.ros.org/wiki/rosserial

 

A First Glance at the zBot API

I've created my first “Java” tutorial. It's still not quite finished; I want to add motor and GPIO control, plus a little more tinsel. But I thought it might be handy to have it here so I can show it to other people.

Basically, if you build this in your Java main and call startBot(), the bot will set itself up and start polling; then, as soon as an image is ready, you get a callback onNewImage() where you can display it on the screen.

I’m going to add a joystick and some more output to the java frame before I’m done with this example.

// Imports the example needs. The serial classes here come from the RXTX library (gnu.io);
// swap for javax.comm if that is the serial stack you use. The zBot classes (ZBot, ZImage,
// IBotListener, EzReadVars) come from the zBot API and their package is not shown here.
import java.awt.BorderLayout;
import java.awt.Dimension;
import java.awt.image.BufferedImage;
import java.awt.image.DataBufferUShort;
import java.io.IOException;
import java.io.InputStream;
import java.io.OutputStream;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import javax.swing.ImageIcon;
import javax.swing.JFrame;
import javax.swing.JLabel;
import gnu.io.CommPort;
import gnu.io.CommPortIdentifier;
import gnu.io.SerialPort;

public class SimpleBotTutorial implements IBotListener{

ZBot zBot = null;
InputStream inS;
OutputStream outS;

// Serial baud rate; 921600 matches the Prolific USB adapter mentioned in an earlier post.
// Adjust to whatever your adapter/module is actually set to.
int baud = 921600;

JFrame frame = new JFrame("Video Feed");
JLabel JLabelImage = new JLabel();

public void logI(String str)
{
      System.out.println(str);
}

private boolean getStreams()
{
try
{
      CommPortIdentifier portIdentifier;
      portIdentifier = CommPortIdentifier.getPortIdentifier("COM3");

      CommPort commPort = portIdentifier.open(this.getClass().getName(),2000);
      SerialPort serialPort = (SerialPort) commPort;
      serialPort.setSerialPortParams(baud,SerialPort.DATABITS_8,SerialPort.STOPBITS_1,SerialPort.PARITY_NONE);

      inS = serialPort.getInputStream();
      outS = serialPort.getOutputStream();
      return true;
}
catch (Exception e)
{
      // TODO Auto-generated catch block
      e.printStackTrace();
      // TODO FAIL EXIT
      return false;
}

}

private void setupGui()
{
      frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
      frame.setMinimumSize(new Dimension(251,150));
      frame.getContentPane().add(JLabelImage, BorderLayout.CENTER);
      frame.pack();
      frame.setVisible(true);
}

public boolean startBot()
{
      // if we cant get the stream no reason continuing
      if(!getStreams())
      {
            return false;
      }

      // setup our zbot and pass it the streams
      zBot = new ZBot( inS, outS );
      // tell it we implement its callbacks for video and events
      zBot.setIbl(this);

      // wait a few seconds for a connection, then check
      try {
            Thread.sleep(5000);
      } catch (InterruptedException e) {
            // TODO Auto-generated catch block
            e.printStackTrace();
      }

      // check to see if it's ready or we timed out
      if(zBot.getState() < 2)
      {
            // fail
            zBot.pollEngine.stopSvc();
            // close our streams
            try {
                  inS.close();
                  outS.close();
            } catch (IOException e) {
                  // TODO Auto-generated catch block
                  e.printStackTrace();
            }
            // let caller know we failed
            return false;
      }

      // setup the gui to get our data
      setupGui();

      return true;
}

public void printAllBotValues()
{
      this.logI("=================================================");
      for ( EzReadVars rv : EzReadVars.values())
      {
            this.logI("Var: " + rv + " Value: " + zBot.getValue(rv));
      }
}

public void rawToImage(byte[] rawData, int w, int h)
{
      // make new container for the image
      BufferedImage NewImg = new BufferedImage(w, h, BufferedImage.TYPE_USHORT_565_RGB);
      // convert to short for raster
      short[] picAsShort = new short[w*h];
      ByteBuffer.wrap(rawData).order(ByteOrder.BIG_ENDIAN).asShortBuffer().get(picAsShort);
      // put directly in raster of image
      short[] imgData = ((DataBufferUShort)NewImg.getRaster().getDataBuffer()).getData();
      System.arraycopy(picAsShort, 0, imgData, 0, picAsShort.length);
      // set our jlabel to the image
      this.JLabelImage.setIcon(new ImageIcon(NewImg));
      // refresh
      frame.repaint();
}

@Override
public void onNewImage(ZImage image) {
      // turn it into a image and display it
      rawToImage(image.getData(),image.getWidth(),image.getHeight());
}

@Override
public void onNewEvent(int event) {
      // TODO Auto-generated method stub
}
}
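
For completeness, this is the kind of main I have in mind to launch the tutorial; it is only a sketch and leaves out the joystick and extra output mentioned above.

public class SimpleBotTutorialMain {
      public static void main(String[] args) {
            SimpleBotTutorial tutorial = new SimpleBotTutorial();
            if (!tutorial.startBot()) {
                  System.err.println("Could not connect to the bot on COM3.");
                  return;
            }
            // The bot is polling now; frames arrive through onNewImage()
            // and show up in the "Video Feed" window.
            tutorial.printAllBotValues();
      }
}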

ZBot, Meet ROS (Robot Operating System)

I have thought of several things that I realize it will be a long time before I'm able to do: lasers, navigation, facial recognition, cloud workload balancing and much more. As of now I'm basically building an easy-to-use RC video tank that I'm going to put a game on top of.

For some reason I decided to investigate ROS (Robot Operating System) more, and now I'm really starting to see how it works. For this reason I've put active development off a little while I investigate ROS and make sure my system can be implemented as a ROS package. I believe that using ROS will remove decades of futile “boilerplate” development and give me quick access to all of the things that initially got me interested in robotics.

ROS is amazing; it has really blown my mind to see how they have put all of these pieces together into a working system. In ROS everything is super loosely coupled. Your “robot” is simply a system of talkers, listeners and thinkers, or in ROS terms publishers, subscribers and services. I've only got about a day of research in, but I think I have the basic concept. A sensor talks, “publishing” its information at an interval and registering with a “master”. Another node, which could be a motor driver or whatever, can find that “publisher” through the “master” and subscribe to it; in this way your eyes, ears and sensors are all separate. As far as I can tell, a “service” is something that can process hefty data and respond with an answer, like the logic that glues the pieces together.

So how does this all fit in with ZBot? Well, ROS runs under Linux, so we can't just cram ROS into our STM32. We could abandon our platform and use a ~$30 board that runs Linux and go from there (this is what I plan to experiment with on my Raspberry Pi), but we cannot meet our goals with a system that merely “runs Linux”. So at some point we need to create a “bridge” from our driver that turns our motors and our camera into ROS topics and publishes them to the ROS master. This opens the door to more sensors such as odometers, position sensors, etc. More importantly, it opens the door to using those new things without having to write the navigation code and such ourselves.

Right now I think the main development will continue as planned; we still want an RC tank with video, but I'm going to be darn sure I'm not spending any time replacing functionality that already exists in ROS. Once I get to a point where everything works as I want it to, I will write a driver (bridge) for ROS. It will be similar to the NXT driver for ROS. We simply need to add topics for reading our camera and controlling our motors.

ROS actually takes it a step further with their Android “rosjava_core” work and their partnership with Google. Using that, you can take a cell phone and expose all of its sensors as ROS topics. I'm currently working on this too. So if my ZBot is a set of ROS topics and my phone is as well, then I can simply place the phone on top of the ZBot and retrieve GPS, accelerometer data and a faster video stream from the phone itself. All of this data gets shipped over WiFi or 4G to a ROS master running on some Linux server that shares control of the zbot/phone (with all these added sensors). At that point the ZBot becomes a kind of “phone chariot” and we have a robot that can go anywhere (given 4G/GPS). Of course, all of this could be opened up to other nodes, so that if an alarm goes off the robots investigate, with a service that ships the data to Google for cloud processing. Basically everything I could imagine is possible, and I'm now realizing I don't have to do it all!

So I'll try not to get too sidetracked by ROS, but I want to be sure my zbot is a good fit now so I'm not kicking myself later. In order to do that I need to dig a little into the ROS system, set one up, etc.

I should be able to meet my milestone of “video on phone, controllable bot” before embarking on the ROS discovery, and I really should, so I think that's what I'll do.

Update: migrated to STM32 F4, easy-to-use Java API, video in progress.

Migration to F4

Since I'm a lazy sort, I took the easy way out and bought a unit with enough memory to actually support an image of at least QCIF resolution. I found an easy-to-use board on eBay that came with all kinds of modules I could just plug into the F4 Discovery and get going, even a camera. I think this was a good choice, since my main downfall is lack of development time.

All of the code has been migrated over and works as it did on the F3, although I only have one ADC now. I have actually made a few improvements to the code and the protocol. So now I have more speed and 192 KB of RAM to work with.

On another note, I do plan on getting this working and then possibly migrating a version back to the STM32 F3. I could still use the F3 with GPIO-driven DMA to control the camera. Working with the F4, I realize it's great and will get me where I want to go faster, but it also costs about three times as much, and the F3 could do everything I need given the right amount of memory.

Easy to use Java API

Somewhere along the line I decided to write code with full comments and to extract everything that was OS-specific. I segregated and divided all the objects into a structure that exposes to the user only what they need. At one point I was convinced the Bluetooth module was not working, so I switched away from the Android platform, and in doing so confirmed the API works well in a Windows environment too.

Basically this is how the API works.

  1. Create a new “zBot”, passing it an input and an output stream. This makes it transmission-medium agnostic.

    zBot = new ZBot( inS, outS );

  2. Then you simply wait for the bot's state to be 2 or above, meaning it's ready. I will make this an enum soon.
  3. Then you can simply pull the values provided in the “read values” enum (EzReadVars). For example, this prints out all available vars, like ADC, GPIO, time, etc.

    for ( EzReadVars rv : EzReadVars.values())
    {
          this.logI("Var: " + rv + " Value: " + zBot.getValue(rv));
    }

  4. For the image, you can either read out the bytes of the image yourself or subscribe and get a callback when a new image gets pulled from the bot.

    while(zBot.getImageState() != EzImageState.Ready){}
    int width = zBot.getValue(EzReadVars.VideoWidth);
    int height = zBot.getValue(EzReadVars.VideoHeight);
    //int bpp = zBot.getValue(EzReadVars.v);
    byte[] pic = zBot.getRawFrame();

This is cross-platform and pretty easy to use, but far from done. I really want to tie everything down into enums and document everything. Then I'll make examples for Android, Windows and Linux and release those. That will give people everything they need to get started with the API.
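
Putting the steps above together, a minimal polling-style client looks roughly like this. The stream setup is omitted and the enum names are taken from the snippets above, so treat it as a sketch of the intended API rather than finished example code.

// Sketch: connect, wait until the bot reports ready, then grab one raw frame.
static void pollOnce(InputStream inS, OutputStream outS) throws Exception {
      ZBot zBot = new ZBot(inS, outS);            // step 1: streams from serial, bluetooth, etc.

      while (zBot.getState() < 2) {               // step 2: state 2 or above means ready
            Thread.sleep(100);
      }

      for (EzReadVars rv : EzReadVars.values()) { // step 3: dump all readable vars
            System.out.println("Var: " + rv + " Value: " + zBot.getValue(rv));
      }

      while (zBot.getImageState() != EzImageState.Ready) { } // step 4: wait for a frame
      int width = zBot.getValue(EzReadVars.VideoWidth);
      int height = zBot.getValue(EzReadVars.VideoHeight);
      byte[] pic = zBot.getRawFrame();            // raw RGB565, width * height * 2 bytes
      System.out.println("Got " + pic.length + " bytes (" + width + "x" + height + ")");
}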

Video in progress

I'm so tired of saying this. The video is by far the hardest part for me. I finally have good data coming out at 251×100 resolution in RGB565. I don't know why the camera is set to this resolution, but through painstakingly slow testing I found that it is. It's something I can work with, at least for now. The height might actually be more than 100, but I stopped the DCMI transfer at 100 lines.

Now I'm seeing past that to the fact that DMA pushing my image out the USART while DCMI is writing the image into the same memory does not seem to work; I think they conflict when they hit the bus. So I'm going to implement double buffering. To do this I have to add some code to the zbot hardware side and some to the Java side as well. Basically it will take a picture into one buffer, and when we send it an “update” it will fill the other buffer; that way we can pull image A, hit “update”, pull image B, hit “update”, and so on. The unit is always taking a snapshot into the picture buffer we just finished receiving. This should not be too hard.
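
On the Java side, the pull loop for that ping-pong scheme would look roughly like this. The requestBufferSwap() call is hypothetical, it stands in for whatever “update” command I end up adding to the protocol; getRawFrame() is the existing call.

// Double-buffer pull loop: while we download one buffer over the USART,
// the DCMI/DMA on the bot is free to fill the other, and vice versa.
void streamFrames(ZBot zBot) throws Exception {
      boolean readingA = true;
      while (true) {
            byte[] frame = zBot.getRawFrame(); // pull the buffer the bot just finished
            zBot.requestBufferSwap();          // hypothetical "update": tell the bot to start
                                               // capturing into the buffer we just read out
            onNewFrame(frame, readingA ? "A" : "B");
            readingA = !readingA;
      }
}

void onNewFrame(byte[] rawRgb565, String buffer) {
      // display or process the frame here
}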

Here is a picture of the color bars at 251×100 that I pulled. At least now I can crank up the speed and start measuring my actual fps.

Test image from the robot using the OV7670

Change in Direction!

After thinking and thinking about what I'm actually developing versus what I plan to attract the market with, I realize they don't match up at all. Adding features that let users build a script of I2C commands to be forwarded across the network to a chip on the other side of the zbot may be useful to some people, but not to the majority.

I realize that I need to work only on the features necessary to meet my initial goals. Then I can add in all the neat features and the SDK that others will use. So what are my initial goals?

Initial Goals

  • A fun game with zbots, like an FPS
  • Video
  • Easy to use, cheap

Things I have done that are not related to those goals

  • I2C forwarding
  • Java service, subscriptions
  • Basically the whole client (it was for experimenting, not games)
  • A C# library

I started to realize this when I was thinking about the USB “dongle” and how it would be better as an Ethernet “dongle”. Then I started thinking about how it would need to support multiple devices, and so on. The cost of the “dongle” would be similar to that of the zbot itself, doubling the cost; not good. Then I thought about how the dongle could host the service and save the user from tying up a “server” computer just to connect to zbots. These aren't the days of people configuring hard-to-use servers so their friends can play Quake anymore. Just by requiring a dongle and a server I would exclude a large number of potential users.

The solution? Bluetooth and Android, yes, what I already knew a long time ago. I had abandoned this idea a while back because Bluetooth was too short range and not fast enough at long range, plus it was a little pricey. But given the cost of a “dongle” I could basically have WiFi, so if I want to keep the cost down a “dongle” is not the way to go, yet. Since I started this project there have been (I think) other Bluetooth solutions created. They should give me about 1 Mbps and about 30 ft of range. Not the best numbers, but a good start. Aside from that, they speak USART, so I would not have to change any of my current code.

So, I'm scrapping any goals not related to the immediate ones. There is just a ZBot and an app that runs on Android phones. The ZBot will be a toy that you use with your phone to fight with your friends, capture the flag, or do missions in the real world. My development time should be about a tenth of what it was, and we should reach the goal much faster. Then, after the initial goal, we add in the SDK, the extra features and the server/client model.

Progress C#, jpg, server … ugh

I've been making lots of progress. I can now get 14 fps reliably from the server. I take the data in as raw RGB565, convert it to JPEG in the Java server, and from there send it to a C# client. So once I get the real video feed from the robot, the pipeline should be fairly efficient and fast. The client/server can easily meet the desired (initial) output of 10 fps.
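
The 565-to-JPEG step in the Java server is essentially the following. This is a minimal sketch: the real server streams the bytes to the client instead of returning them, and quality settings and buffer reuse are left out.

import java.awt.Graphics2D;
import java.awt.image.BufferedImage;
import java.awt.image.DataBufferUShort;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import javax.imageio.ImageIO;

public final class FrameEncoder {

      /** Wraps a raw RGB565 frame in a BufferedImage and encodes it as JPEG bytes. */
      public static byte[] rgb565ToJpeg(byte[] raw, int w, int h) throws IOException {
            // Copy the raw shorts straight into a 565 raster (same trick as rawToImage above).
            BufferedImage img565 = new BufferedImage(w, h, BufferedImage.TYPE_USHORT_565_RGB);
            short[] pixels = ((DataBufferUShort) img565.getRaster().getDataBuffer()).getData();
            ByteBuffer.wrap(raw).order(ByteOrder.BIG_ENDIAN).asShortBuffer().get(pixels);

            // Draw into a plain RGB image so the stock JPEG writer will accept it.
            BufferedImage rgb = new BufferedImage(w, h, BufferedImage.TYPE_INT_RGB);
            Graphics2D g = rgb.createGraphics();
            g.drawImage(img565, 0, 0, null);
            g.dispose();

            ByteArrayOutputStream out = new ByteArrayOutputStream();
            ImageIO.write(rgb, "jpg", out);
            return out.toByteArray();
      }
}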

For now I'm using random data as the source of the RGB video values, but I've also tested with a raw byte array.

So, a few more additions to the client and I will be ready to go back to the camera/bot portion. The system is designed so that you could have multiple servers and multiple robots; you simply double-click one to view/control it. Setting up the subscriptions and all of that took quite a bit of time.

Sprint1 done, protocol established.

I've finally fleshed out a protocol and gotten it working well. I rejected the CRC because in most cases I won't be using it; however, I left all the code and the space in the packet for it in case we need it later.

Now that we can read and write to the device across any medium (SPI, USART, etc.), I need to start adding the functionality that actually uses what we read and write with the protocol.
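
Just to illustrate the shape of it (this is not the actual frame layout, only a hypothetical example of a read/write protocol that reserves space for an optional CRC):

// Hypothetical frame: [cmd][var id][length][payload...][crc lo][crc hi]
// The two CRC bytes are always present but currently ignored, matching the idea of
// leaving room in the packet so a checksum can be switched on later.
static byte[] buildWriteFrame(byte varId, byte[] payload) {
      byte[] frame = new byte[3 + payload.length + 2];
      frame[0] = 0x02;                   // command: write variable (made-up value)
      frame[1] = varId;                  // which global variable on the STM32
      frame[2] = (byte) payload.length;  // payload length
      System.arraycopy(payload, 0, frame, 3, payload.length);
      // frame[frame.length - 2] and frame[frame.length - 1] stay 0 until CRC is enabled
      return frame;
}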

I have also been researching wireless solutions and found several. I really like Nordic's RF line but found another, more complete (and cheap) solution on eBay from HopeRF. The HopeRF board comes with a built-in antenna and all of the required components, so all you need to connect is an SPI interface. The IC is very similar in functionality to the Nordic chips. The cost is about $3 per board; I ordered 6 of them. The speed I can get is 1 Mbps and the range is about 60-90 meters, which is at least 200 feet.

Pushing my protocol to higher speeds, I was able to get a throughput of about 150 kB/s. With a 128×96 px image at 2 bytes per pixel (about 25 KB per frame), that works out to possibly 6 fps raw. I'm aiming for raw first; once that whole system is working, then I'll work on compression. The idea is that later, when the whole thing works, I will have 1 Mbps, which allows a larger raw image, and when we still need more I can enable compression, but by that point I will already have a usable, fun system. This is opposed to working on compression for months, like I did before, and having no working system.

So:
Motors, I2C, wireless, camera (raw 128×96), circuit
are the priorities for now, and they are attainable within a short amount of time; I have already had all of these things working separately from each other.

On to sprint 2!

STM32 woes gone?

I don't know why, but on revisiting it all of the USARTs work just fine now, so I'm steaming ahead with the ST-Link debugger. I've finished most of the protocol; I can do reads and writes to all the global variables used within the STM32 program. The only things left are the CRC, and making the Java server poll the zBot as fast as possible and maintain a snapshot of its variables to share with a client, provided a client is connected.

On another note, I really like the Nordic chips as a wireless solution. They have an “on air” data rate of up to 2 Mbps. I'm still not sure of the range, but if it's the same as comparable ICs it's about 1000 feet, which is sufficient for now. The cost is a measly $7, much less than a WiFi solution. The only downside is I'd have to create a dongle to plug into the computer, but I think the cost and development offset make it worthwhile.

 

http://www.mouser.com/ProductDetail/Nordic-Semiconductor/nRF24L01P-T/?qs=sGAEpiMZZMt%252bz66QWnul0XZ9nqKlGruUzD33zj574R0%3d

A few more days and I'll be done with the protocol, and then I think I'll be on the camera again. I'm aiming for 128×96 raw at 10 fps (roughly 245 KB/s), which I can squeeze over a 250 KB/s link. After that the sky's the limit, but that's a quickly achievable goal.