Monday, April 23, 2007

ZX Spectrum dreams

Got to mention this post on Slashdot:

http://games.slashdot.org/games/07/04/23/1214256.shtml

Although the Sinclair ZX Spectrum was not my first computer, it was a (large) extension of my first (the ZX81) and had a huge impact on my programming and game playing habits.

I also wrote an article about games on the machine ages ago:

http://www.lightsoft.co.uk/PD/rob/speccy_g.html

Monday, April 09, 2007

What Next Robot?

Guess I missed telling you about a decision on the robot project.

We'd been doing mostly motor control and related sensing, and got the robot to move a fixed distance and turn (albeit with less accuracy than moving in a straight line).

Rather than make the position control really, really good, I thought we should work on something we don't know will work at all: the camera.

Most of the maze-solving robots use simple infra-red sensors, either along the tops of walls or, more likely, pointed at the side walls (which is better for angular momentum reasons), sometimes with some not-so-simple processing. I fancied using a camera. Cameras have been talked about a lot, but in reality machine vision is not an overly simple problem for individuals, especially spare-time hobbyists, to tackle - though not impossible. For us an additional problem is that adding infra-red sensors as well as the camera would require input/output expansion on our current microcontroller and extra mechanical work that I'm not overly keen to do.

There are several really big unknowns with the camera solution:
  1. Can we communicate with the camera? (both the serial and parallel interfaces)
  2. Can we perform the algorithm to know where we are when stationary? (the camera is probably useless without this ability)
  3. Can we get a picture to know where we are when moving? (might need IR sensors that we don't currently have if we can't do this)
  4. Is the angle of view good enough? (might well need a wide-angle lens as it stands)
To do this, we need at least a single picture of the maze from the camera, from which we can evaluate all of the above points. So that's what I'm working on.

Doing this now rather than later will tell us whether the current philosophy behind the robot is feasible and whether we need hardware adjustments or extra sensors to compensate for what we can't do with the camera. We are effectively bringing the risk forward in the project, allowing us more time to compensate if the camera doesn't work in some way.

At some point I'll detail the theory behind the camera position sensing.

Robot Camera Commands

(Hopefully this won't end up as a robot blog!)

I've been working on the camera command interface, which is an I2C serial interface.

I've split this into two sections:

1. A lower part that communicates with the on-board I2C peripheral registers and just reads or writes a data stream (although it also handles the device address and read/write bit). As with all low-level drivers, this has turned out more complicated than it first appeared - especially since I need multiple state machines.

2. A camera-specific controller that reads and writes specific camera registers and uses (1) above to do that without knowing about the serial interface. This is the bit I'm doing at the moment (there's a rough sketch of the split below).

This division simplifies each section and focuses it on being good at the work at hand.
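To make the split concrete, here's a rough sketch in C of how the two layers might fit together. Everything in it is an assumption for illustration - the function names, the 0x21 device address and the register numbers are placeholders, and the layer 1 functions here are stand-ins for the real state-machine driver that talks to the microcontroller's I2C peripheral.

/* A minimal sketch of the two-layer split described above. All names,
 * addresses and register numbers are placeholders - the real driver
 * runs state machines against the on-board I2C peripheral registers. */

#include <stdint.h>
#include <stdio.h>

/* ---- Layer 1: low-level I2C stream driver ------------------------ */
/* Handles start/stop, the device address and the read/write bit, and
 * moves a byte stream; these stubs just report what they would do.   */

static int i2c_write(uint8_t addr, const uint8_t *data, uint8_t len)
{
    printf("I2C write to 0x%02X:", addr);
    for (uint8_t i = 0; i < len; i++)
        printf(" %02X", data[i]);
    printf("\n");
    return 0;                      /* 0 = success */
}

static int i2c_read(uint8_t addr, uint8_t *data, uint8_t len)
{
    printf("I2C read of %u byte(s) from 0x%02X\n", (unsigned)len, addr);
    for (uint8_t i = 0; i < len; i++)
        data[i] = 0;               /* placeholder data */
    return 0;
}

/* ---- Layer 2: camera-specific register access, built on layer 1 -- */

#define CAM_I2C_ADDR 0x21u         /* placeholder 7-bit device address */

/* Write one camera register: register number followed by the value. */
static int cam_write_reg(uint8_t reg, uint8_t value)
{
    uint8_t buf[2] = { reg, value };
    return i2c_write(CAM_I2C_ADDR, buf, 2);
}

/* Read one camera register: write the register number, then read back. */
static int cam_read_reg(uint8_t reg, uint8_t *value)
{
    if (i2c_write(CAM_I2C_ADDR, &reg, 1) != 0)
        return -1;
    return i2c_read(CAM_I2C_ADDR, value, 1);
}

int main(void)
{
    uint8_t id = 0;
    cam_write_reg(0x12, 0x80);     /* hypothetical "reset" register */
    cam_read_reg(0x0A, &id);       /* hypothetical "product ID" register */
    return 0;
}

The point is that layer 2 never sees the serial interface at all; if the camera moved to a different bus, only layer 1 would change.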