Friday, February 29, 2008

Copy of email about Robot Vision

I've got the images coming out properly now, in their proper colours. And as we suspected, I've had no problems getting images - so it was the fact that I hadn't set up the ports before initializing the camera. I've attached an image for your interest.

(it's a bit yellow because of the lighting)

As for the colour order, well, I was trying to understand why it wasn't BGRG as per the manual, but GRGB...

Additionally, I noticed we are getting a yellow last vertical column of pixels (when looking at white). Magnify the attached image if you want to see it. I suspected that this had something to do with the fact that the colour order (RGB order) wasn't what I thought it should be, i.e. we were a sub-pixel out. This bug was complicated by the fact that my bitmap (BMP) image converter had a minor buggette in it (the BMP format stores the pixels as BGR, for Intel/little-endian reasons).
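
For what it's worth, the byte swap that caught me out amounts to roughly this (a sketch only, not the actual converter code - the function name is made up):

// A minimal sketch of the reordering a BMP writer has to do:
// BMP stores each 24-bit pixel as B,G,R, so an RGB buffer has to
// be reordered before it is written out.
#include <cstdio>

// Hypothetical helper: write one row of 'width' RGB pixels to an
// already-opened BMP file positioned at the pixel data.
void write_bmp_row(std::FILE* f, const unsigned char* rgb, int width)
{
    for (int x = 0; x < width; ++x) {
        unsigned char bgr[3];
        bgr[0] = rgb[3 * x + 2];   // blue is last in RGB, first in BMP
        bgr[1] = rgb[3 * x + 1];   // green is unchanged
        bgr[2] = rgb[3 * x + 0];   // red is first in RGB, last in BMP
        std::fwrite(bgr, 1, 3, f);
    }
    // (A real writer would also pad each row to a 4-byte boundary.)
}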

The text below the dashed line explains what I saw when I looked at the raw HEX file and what I conclude from this.

After reading the text below you might think, 'why does this matter since the image looks OK?'. Well, one thing is that since we are proposing to do colour detection ... if we end up needing every single bit of resolution, we need to understand what we are actually getting in terms of colour pixel position. That Bayer filter is bad enough without having shifted colour pixels we don't understand...

If resolution turns out not to be a problem then it's probably irrelevant.


--------------

Suspected hardware bug ...
When CCIR656 mode is off (CCIR601 mode) we get the last byte of the line (1280-wide line) as 0x10 (i.e. black level).
When CCIR656 mode is on we get the last byte of the line as 0xff.

CCIR656 mode adds a 4-byte start header and end header (FF 00 00 xx), although these don't normally appear in the FIFO data since they are outside of the HREF window.

Therefore it looks like we are dropping the first pixel written to the FIFO.

We also see this "feature" in their code, where they read green first (GRGB) even though the data sheet clearly says BGRG ... (we have to do this as well).
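
Roughly what the read order works out to, assuming (as described above) that the first byte written to the FIFO is dropped - a sketch only, not the real driver code, and 'line' is a hypothetical buffer already read back from the FIFO for one row:

// Instead of the documented B G R G order for each group of four
// bytes, we actually pull them out as G R G B.
struct BayerGroup { unsigned char g1, r, g2, b; };

BayerGroup read_bayer_group(const unsigned char* line, int group)
{
    const unsigned char* p = line + 4 * group;
    BayerGroup bg;
    bg.g1 = p[0];   // data sheet says blue should come first...
    bg.r  = p[1];   // ...but with the dropped first byte we see green,
    bg.g2 = p[2];   //    red, green, blue, so the blue sample lands a
    bg.b  = p[3];   //    sub-pixel later than expected
    return bg;
}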

I *suspect* this has something to do with the HREF/NAND gate hack (rather than, say, the pixel clock, which appears to be set to the rising edge for data valid in both the FIFO and the camera).

However, since it really only affects the last pixel it probably doesn't affect us, as long as we keep in mind that the pixel positions are moved for blue pixels, and I'm not going to spend any more time on this.


Regards,
Rob

Sunday, February 10, 2008

Multi-threading

Stu has been doing lots of threading recently and seems to be having lots of fun. It makes me want to incorporate threading in the programs I'm working on at the moment!

It appears that there is one big advantage and one big disadvantage:

Advantage: you can avoid explicit state machines - sometimes essential with other people's code you have no control over (e.g. OS calls, library calls, etc. that just block until completed). This simplifies your code immensely. It can turn one complex program into several simple programs. (There's a sketch of this below.)

Disadvantage: mistakes in the threading design or programming are horrible to deal with and can cause almost random behaviour that can be nearly impossible to debug and track down.
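
As a rough illustration of the advantage - a minimal pthreads sketch, where blocking_library_call() and handle_result() are made-up stand-ins, not real API calls:

#include <pthread.h>
#include <unistd.h>
#include <cstdio>

int blocking_library_call()          // pretend OS/library call that
{                                    // just blocks until completed
    sleep(2);
    return 42;
}

void handle_result(int result)
{
    std::printf("worker got %d\n", result);
}

void* worker(void*)
{
    handle_result(blocking_library_call());   // blocking is fine here
    return 0;
}

int main()
{
    pthread_t t;
    pthread_create(&t, 0, worker, 0);         // main thread carries on

    std::printf("main thread is still responsive\n");

    pthread_join(t, 0);                       // wait for the worker
    return 0;
}

Without the extra thread, the main loop would need an explicit state machine to keep polling around that blocking call.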


Comments?

Random read if you have a spare 5 minutes:


http://math.hws.edu/javanotes/c8/s1.html#robustness.1.1


You know, I thought that access to vectors via [] would be fine, unchecked, in programs you understand totally yourself. However, a recent medium-complexity C++ program* (a text converter in this case) showed me that this is not the case. It was crashing on the gcc build. I compiled it with VS2005.Net (to debug it) and luckily the debug version traps vector [] out-of-range accesses (thanks Microsoft! I don't say that very often!). It immediately showed me the illegal access!

It's trivial to subclass vector to make Vec (it's in Stroustrup's TC++PL) where [] access causes at() to run - at() is the same as [] but range-checked. I hadn't done it because I thought there was no way I was going to fall foul of it in such 'simple' programs that I had written myself.
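
The idea is roughly this (my paraphrase, not the exact code from the book):

// A minimal sketch of Vec: operator[] simply forwards to at(), so an
// out-of-range index throws std::out_of_range instead of silently
// reading or writing the wrong memory.
#include <vector>

template<class T> class Vec : public std::vector<T> {
public:
    Vec() : std::vector<T>() {}
    explicit Vec(typename std::vector<T>::size_type s) : std::vector<T>(s) {}

    T& operator[](typename std::vector<T>::size_type i)
    {
        return this->at(i);              // checked access
    }
    const T& operator[](typename std::vector<T>::size_type i) const
    {
        return this->at(i);
    }
};

With that in place, a stray index throws std::out_of_range (and gives you a sensible place to break in the debugger) instead of a mystery crash.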

(This isn't meant to be a lecture by the way!)

Which goes to show - perhaps all array accesses should be checked, all strings/containers should be self-expanding (i.e. avoiding buffer overflows), memory should be garbage collected, etc., and the Java people really had it right - in all cases, even for games, embedded, real-time, and all the rest. (*2)

I'm not sure how far Obj-C goes in this regard ... I've asked Stu.

Python has these things. C++ has a sort of poor halfway house inherited from C (if I'm being honest), even if it's blazingly fast (*3). And I'm not suggesting we all convert to Java - just that I think these are things that should apply to all programming languages regardless, at least as a switchable option. Let's face it - a lot of the ancient BASICs had checked limits on array access.

Some new, big programs that I'm involved in worry me - manual locking with threads, manual memory allocation, etc. These are all sources of bugs that won't necessarily get caught, leaving us nowhere near a CORRECT and ROBUST program (let alone proving those things). Are we building in the same faults that our current systems suffer from - great from the outside but an extension and maintenance nightmare in 10 years' time?

Also - why does it appear that Java is so much slower than C++ on the applications we run at the moment? It can't just be the limit checks outlined above. And surely the JIT compilers are fast? Is it the type of applications that are written in it - i.e. network-heavy and dependent on the response time of remote servers?


BFN
Rob

* NOTE1: The reason it's C++ (when all the other programs I'm writing to do text processing are Python) is the number of cross-references across the several hundred files I have to parse. Even though I keep duplicate data in both vectors and maps to make access hyper-fast, it still takes many minutes to run over the code-set I'm processing.
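
Roughly the arrangement I mean (names made up, not the actual converter code): the parsed records live once in a vector, and a map just holds indices into it, so lookup by name and iteration in file order are both fast.

#include <cstddef>
#include <map>
#include <string>
#include <vector>

struct Record {
    std::string name;
    std::string body;
};

class CrossRefIndex {
public:
    void add(const Record& r)
    {
        byName[r.name] = records.size();
        records.push_back(r);
    }
    const Record* find(const std::string& name) const
    {
        std::map<std::string, std::size_t>::const_iterator it = byName.find(name);
        return it == byName.end() ? 0 : &records[it->second];
    }
private:
    std::vector<Record> records;                 // file order, fast iteration
    std::map<std::string, std::size_t> byName;   // name -> index, fast lookup
};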

* NOTE2: I'm aware of things like programming by contract, defensive programming, leaving your assertions in production code, etc.

* NOTE3: I won't even mention that it appears that only 50% of C++ compilers complain about uninitialised variables being used... Grrrr...

Sunday, February 03, 2008

Shaping what we do

What a huge gap! I didn't realise I hadn't posted for this long.

An interesting realisation I had last week (that is probably obvious to many people) is that everything we do is shaped by our limitations as humans - that includes physical and mental abilities.

I want to talk about lifetimes - but I'll digress for a bit.

One of the things I mentioned several times last year (maybe even on this blog) was that one thing that bothered me is people talking about the 'unlimited abilities' of the mind. I have no issue with 'unlimited possibilities' (or at least a good approximation), but I see no evidence anywhere of unlimited abilities. I'm not talking about people's often under-appreciated ability (by themselves or others) to learn any skill. I'm talking about things like: you can remember an unlimited amount of information or, and this is my favourite gripe, you have unlimited multitasking abilities.

One example: I was at a workshop to do with several technologies and some engineers were discussing the idea that the mind has unlimited multitasking ability. Someone cited the example of what one of their children had learnt to do simultaneously: chat on instant messaging with friends, chat on the phone, do a couple of other things and do their homework. Well, maybe, but probably not. There are people (scientists, actually) who study what you can and can't do at the same time - it was reported in New Scientist. And their findings show that the human brain is anything but unlimited in terms of multitasking ability.

Another example: talking on a mobile held in your hand while driving a car physically reduces your ability to drive - because one hand is tied up. But it's been shown that talking on the phone itself reduces your ability to drive, and this is linked to your limited multitasking ability.

Back to lifetimes.

It seems that the things we do as the human race are limited by the length of our lives. For instance, and this is the example that sprang to mind last week, getting to the moon or even another planet in our solar system is possible. But getting to another star, whilst difficult from a resource point of view, is probably limited by time - not just the time to engineer a solution, but the mission time. It would probably take many decades even at the fastest speed we could achieve. I think this is the limiting factor.

The same goes for people doing research; ever wondered why a very futuristic technology is only 10-15 years away? Funding. But why is funding limited to this time-scale? Because people want to see results in their working lifetime. I appreciate you also have things like inflation and gains on other investments competing against you.

Of course, I'm attributing the cause in all cases to a single thing. But I bet it has a much larger effect than people appreciate - and may not even be aware of - because we live within this time span and therefore our thoughts are bounded by it without our conscious realisation. It's also depressing to think that you are going to die before all the good stuff happens :-)

So what's all this got to do with a programming blog?

Well, if we ever get artificial intelligences with replaceable body parts, their lives might extend to hundreds of years (assuming they don't have fatal accidents). What will their take be on a 500-year trip to the stars? Maybe a lot different from ours.

And assuming war doesn't break out between humans and the AI, what will they make of our short-lived race?