Monday, April 14, 2008

Micromouse contest - MINOS 08 - report 3

At the end of the first day we had a really good meal at a local Italian restaurant and I think everyone got on really well and enjoyed themselves.

On the second day, the main thing (apart from a very nice breakfast) was the competition. This was pretty informal and there was a good atmosphere. It was timed and scored for each run.

Adrian divided the competition into heats and finals, with the slowest qualifiers going first in the finals. The finals themselves were divided into wall-follower and maze-solver races, and even the mice that didn't finish a heat were given a chance to run. Perhaps there were over a dozen mice competing, with a fair number of different designs.

Contact and non-contact wall followers ran together in a single race - there probably weren't enough mice to split the competition into three races.

Quite a lot of the mice suffered very badly from wall detection problems. The contest used a standard plastic wall set; it's the third year these walls have been used, and some people hadn't altered their mouse sensors, so something was clearly up. I did wonder whether the walls had aged somehow and become less reflective of IR. They looked OK in visible light, of course.

Between races I got a chance to do some more on the vision processing code. A few things to sort out yet - but I'm getting further bit by bit. Of course seeing all of those mice compete really made me want to run our mouse. Still, without the vision processing the only place it will go is straight into a wall!

The event finished earlier than scheduled, somewhere between noon and 1pm. The only bad thing about this was a four hour wait at Heathrow, which is probably one of the most boring, soul-destroying places on the planet. At least I had Alan to talk to, my phone (to ring Claire) and my laptop. Things could have been SO much worse!

Saturday, April 12, 2008

Micromouse contest - MINOS 08 - report 2

Today was mostly talks and presentations. There were a couple of 'practice' sessions where we took the opportunity to take some shots of a full size maze using the robot's camera for off-line processing. A full maze is 16x16 cells and measures 2.88m square - so it's a bit big for my house. (We have a 5x5 practice maze.)

There were talks on accelerometers, intelligent mice, chassis design, another paper on machine vision (but done very differently from ours), basic mouse design and many other interesting topics, with much food for thought thrown in. There seem to be a lot of details to building a really good mouse - something that isn't apparent on the surface.

My talk went well (Alan tells me) and we got some good ideas, including the names of some useful image processing algorithms. A later talk (by Tony Wilcox) reminded us that we could use programmable logic plus fast memory to effectively remove the CMUcam3 (which we use for processing and as the FIFO) from our design. Strangely, we had talked about that and dropped the idea in favour of a FIFO plus second micro. I guess, although a lot of programmable logic is in-circuit programmable, I was more comfortable (and thought I could work quicker) with standard code. Today's presentation - and our vision work over the last few weeks - might have changed my mind.

We're not going to do that immediately, though - the algorithms are the same whether a processor or programmable logic runs them. It would, however, give us a way out of any processing speed problem we come up against, since we could do the capture AND at least the first two image processing stages (bitwise filter and red extract) in hardware on the programmable logic. It might even leave us enough power to process a higher resolution camera.
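For a feel of why those first two stages suit hardware, here's a minimal C sketch of what they might look like. The names "bitwise filter" and "red extract" are ours, but the implementations below are assumed interpretations, not our actual code: bit-masking and channel comparisons on RGB565 pixels, exactly the sort of per-pixel logic that maps cheaply onto programmable logic.

```c
#include <stdint.h>

/* Hypothetical sketch only - the real stages may differ.
   Assumes the camera delivers RGB565 pixels. */

/* "Bitwise filter" (assumed): keep only the top 3 bits of each
   channel, so later comparisons are coarse and cheap. */
static inline uint16_t bitwise_filter(uint16_t rgb565)
{
    return rgb565 & 0xE71C;  /* top 3 bits of R (15-13), G (10-8), B (4-2) */
}

/* "Red extract" (assumed): flag pixels whose red channel clearly
   dominates green and blue - e.g. the red wall tops in the maze. */
static inline int red_extract(uint16_t rgb565)
{
    unsigned r = (rgb565 >> 11) & 0x1F;  /* 5 bits */
    unsigned g = (rgb565 >> 5)  & 0x3F;  /* 6 bits */
    unsigned b =  rgb565        & 0x1F;  /* 5 bits */
    g >>= 1;                             /* scale green to 5 bits */
    return r > 16 && r > g + 4 && r > b + 4;
}
```

Both functions are pure combinational logic on a single pixel - no state, no multiplies - which is why doing them during capture, before anything touches RAM, is attractive.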

Hopefully more tomorrow.

Micromouse conference - Minos 08 - report 1

So Alan and I are on our way to this tiny conference. We got up at 4:50 (yawn) to catch the significantly cheaper flight. The plane was overbooked, but we got on in the end (last two people aboard - it was close enough that they talked to us about bumping us to the next flight).

We'd hoped to have our mouse running, but it's not quite there. We have done most of the vision processing as a Cocoa application, which lets us process pictures taken by the robot into, ultimately, wall and robot positional data.

At the moment the Cocoa application does everything except properly convert the coordinates to the world domain. I've been working on a couple of constants we need to insert, but haven't had a chance to try them in the program over the last few nights.

I've volunteered to do a presentation at the conference to show people what we've been doing. We spent Wednesday and Thursday writing it, and I spent last night (mostly) packing. However, I have to be careful not to focus overly on the slides and actually show people the program - which is what I really want to present.

After the conference we need to finish the world coordinate conversion and then port the vision processing algorithms to embedded robot code. The Cocoa app was definitely not written for execution speed or a small RAM footprint, but it shouldn't take very long to convert. We can then see whether it has any major problems: execution speed, poor positional accuracy, motion blur, etc.

I'm hoping that I might get a chance to run the application during the conference with those constants inserted and see whether it's complete rubbish or not.

Hopefully more later!
