Introduction
The Vision class is responsible for taking images and converting them
into something the analyzer can understand. The process of converting an
image into a table state is completely hidden from the rest of CUE, so
that different vision methods can be tested. Currently, the job of
vision is broken up into the following key stages.
- Calibration:
Calibration is currently done only at startup, but the
interface can also force the Vision class to recalibrate if necessary.
The Vision class takes in an image, preferably of the table with no balls
and, more importantly, no people obscuring the edges of the table. From
this image, the edges of the table are found using the Hough transform.
This allows a lookup table to be created which maps an (inherently
three-dimensional) image coordinate to a two-dimensional model
coordinate. In addition, the boundaries of the table are found, which
allows circle detection to run much more efficiently: only the table
area is examined, not the regions outside the table, which contain a
significant amount of noise.
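The image-to-model mapping described above can be sketched as a planar homography fitted to four table-corner correspondences. Everything below is illustrative, not the actual CUE code: the corner coordinates, the 100x50 model units for the table, and the function names are all assumptions.

```python
def solve(A, b):
    # Gaussian elimination with partial pivoting (pure Python).
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def homography(img_pts, model_pts):
    # Direct linear transform with h33 fixed to 1: each of the four
    # corner correspondences contributes two rows to an 8x8 system.
    A, b = [], []
    for (x, y), (u, v) in zip(img_pts, model_pts):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = solve(A, b) + [1.0]
    return [h[0:3], h[3:6], h[6:9]]

def map_point(H, x, y):
    # Perspective divide: image coordinate -> 2-D table coordinate.
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)

# Hypothetical corners found by the Hough transform, mapped to a
# made-up 100x50-unit table model.
H = homography([(120, 80), (520, 90), (600, 400), (60, 390)],
               [(0, 0), (100, 0), (100, 50), (0, 50)])
```

A per-pixel lookup table would then just be `map_point` evaluated once for every pixel inside the table boundary.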
- FindCircles:
Using a Hough transform, circles are found, which represent areas
that most likely contain balls. Currently, this works very well when the
balls are spread apart, but is considerably less effective on the break.
This was expected, as the circles overlap in the image.
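A minimal sketch of the Hough idea for circles, assuming a known ball radius and an already-computed list of edge pixels (in practice these would come from an edge detector run over the table area only); the function name and all numbers are hypothetical:

```python
import math

def hough_circle_center(edge_points, radius, width, height):
    # Each edge pixel votes for every center that could have produced
    # it at the given radius; the accumulator peak is the most likely
    # ball center. Overlapping balls smear each other's peaks, which
    # is why detection degrades on the break.
    acc = {}
    for x, y in edge_points:
        for t in range(0, 360, 5):
            a = round(x - radius * math.cos(math.radians(t)))
            b = round(y - radius * math.sin(math.radians(t)))
            if 0 <= a < width and 0 <= b < height:
                acc[(a, b)] = acc.get((a, b), 0) + 1
    return max(acc.items(), key=lambda kv: kv[1])[0]

# Hypothetical usage: edge pixels sampled from a circle of radius 10
# centered at (30, 30) in a 64x64 image.
edges = [(round(30 + 10 * math.cos(math.radians(t))),
          round(30 + 10 * math.sin(math.radians(t))))
         for t in range(0, 360, 10)]
center = hough_circle_center(edges, 10, 64, 64)
```

A real detector would keep every accumulator cell above a vote threshold (one per ball) rather than just the single peak.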
- GetPixelColors:
For each of the circles found, the pixel colors are saved as an image.
This image will be passed to Keith's analyzer, which will return an ID
representing the most likely ball.
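Saving a circle's pixels as a small image might look like the sketch below, assuming the image is a row-major grid of RGB tuples; the helper name and the background fill color are made up for illustration:

```python
def extract_ball_patch(image, cx, cy, r, background=(0, 0, 0)):
    # image[y][x] -> (r, g, b). Copy the pixels inside the circle of
    # radius r centered at (cx, cy) into a (2r+1)x(2r+1) patch;
    # pixels outside the circle (or off the image) are filled with a
    # background color so the analyzer sees only ball pixels.
    h, w = len(image), len(image[0])
    patch = []
    for y in range(cy - r, cy + r + 1):
        row = []
        for x in range(cx - r, cx + r + 1):
            inside = (x - cx) ** 2 + (y - cy) ** 2 <= r * r
            if inside and 0 <= x < w and 0 <= y < h:
                row.append(image[y][x])
            else:
                row.append(background)
        patch.append(row)
    return patch

# Hypothetical usage: a solid red 10x10 image, ball at (5, 5), radius 2.
img = [[(255, 0, 0)] * 10 for _ in range(10)]
patch = extract_ball_patch(img, 5, 5, 2)
```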
What's left
Currently, the following things need to be worked on:
- Creating the tableState. By next week, I hope to have successfully
merged my code with Keith's, and ideally everyone else's, so that I can
return a useful table state.
- More to come when I finish this part
Carleton Jillson
Last modified 2/18/98