I'm a tech-nerd.... and you have a valid question. Thinking about it, I think the bottleneck would be the coil itself. For example, if you have a large coil on the bottom of your 'stick', it takes a finite amount of time for electrons to run through that coil and register a value. A changing field around that coil would surely change the characteristics of that coil, but it would still take a long time for the electrons to run around that coil winding, and then back up to the processor for a 'sample'. I could be wrong, but this is most likely the bottleneck.
I'm wondering, though... if I were to do something like that, I would develop multiple smaller coils, perhaps a dozen (for quicker sampling), and sample each one. Then the processor would no longer be looking for a 'field change' on one coil, but instead for a 'walking' change across the coils. It would probably have an accelerometer in it to determine the swing rate, angle, etc., to add into its calculations... and it would sample each coil much faster, then compare the samples from coil to coil as the 'anomaly' walked across them to determine a 'hit'. You would also have to account for interference between neighboring coils... perhaps a 'daisy chained' or 'individual sample' at _MHz, with each coil's characteristics mapped in RAM as this quick processor did its thing.
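Just to sketch what I mean by a 'walking' change: here's a rough, made-up illustration in Python. The coil readings, threshold, and function name are all hypothetical... the idea is simply that a real target should produce a peak that marches steadily from one coil to the next, while noise jumps around at random.

```python
# Hypothetical sketch of the 'walking coil' idea: instead of watching one
# big coil for a field change, sample a row of small coils and look for a
# peak that marches across adjacent coils as the target passes underneath.
# All values and thresholds below are invented for illustration.

def detect_walking_hit(frames, threshold=0.5):
    """frames[t][i] = reading of coil i at sample t.
    Returns True if the strongest reading moves steadily
    across adjacent coils (a 'walking' anomaly)."""
    peaks = []
    for frame in frames:
        strongest = max(range(len(frame)), key=lambda i: frame[i])
        if frame[strongest] >= threshold:
            peaks.append(strongest)
    if len(peaks) < 3:
        return False
    steps = [b - a for a, b in zip(peaks, peaks[1:])]
    # A real target sweeps coil-to-coil in one direction; noise doesn't.
    return all(s == steps[0] and s != 0 for s in steps)

# Simulated sweep: anomaly walks from coil 0 to coil 3.
walking = [
    [0.9, 0.1, 0.0, 0.0],
    [0.1, 0.8, 0.1, 0.0],
    [0.0, 0.1, 0.9, 0.1],
    [0.0, 0.0, 0.1, 0.8],
]
# Random spikes with no consistent direction.
noise = [
    [0.9, 0.0, 0.0, 0.0],
    [0.0, 0.0, 0.9, 0.0],
    [0.0, 0.9, 0.0, 0.0],
]
print(detect_walking_hit(walking))  # True  - peak marches 0 -> 1 -> 2 -> 3
print(detect_walking_hit(noise))    # False - peak jumps around
```

A real unit would obviously fold in the accelerometer data (swing speed tells you how fast the peak *should* walk), but this shows the basic compare-across-coils idea.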
But, I'm already thwarting my own idea, because I just realized that for depth you need a large coil, since detection depth scales with the width of the coil (e.g., the Bigfoot coil). So my guess is that they have already matched the processor speed to the coil size. ...a dozen smaller coils would give a much shallower detection depth....
Let me think on that for a bit....
You hear that Whites? I'm available for consulting.
Copyright (c) 2009 mikeofaustin