Step 1: The image is rotated and flipped horizontally to be aligned
with the actual position of the robot.
Step 2: The image is cropped to only include the beacons area.
Step 3: The raw image is labeled which allows to refer to the image
currently being processed at a later time.
Step 4: RGB filtering is performed to detect the first color.
Step 5: Closing process is performed to connect nearby detected objects.
Step 6: Blob-size filtering is performed to remove the image noise.
Step 7: The center-of-gravity of the detected beacon is marked as
the position of the beacon in the image plane.
Step 8: Steps 4–7 are repeated for the second and third colors,
using the labeled raw image.
Algorithm 1: The real-time image-processing algorithm.
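The per-color part of the algorithm (Steps 4–7) can be sketched as follows. This is a minimal illustration using NumPy and SciPy's `ndimage` module, not the authors' implementation; the color bands, the 3×3 closing structure, and the blob-size threshold `min_blob` are all assumed values for demonstration.

```python
import numpy as np
from scipy import ndimage

def find_beacon(img, lo, hi, min_blob=20):
    """Detect one beacon color in an RGB image; return (x, y) or None.

    lo/hi are assumed per-channel RGB bounds for the color of interest.
    """
    # Step 4: RGB filtering - keep pixels whose channels fall in [lo, hi]
    mask = np.all((img >= lo) & (img <= hi), axis=-1)

    # Step 5: closing connects nearby detected fragments into one blob
    mask = ndimage.binary_closing(mask, structure=np.ones((3, 3)))

    # Step 6: blob-size filtering - discard connected components that
    # are too small to be a beacon (image noise)
    labels, n = ndimage.label(mask)
    if n == 0:
        return None
    sizes = ndimage.sum(mask, labels, range(1, n + 1))
    keep = [i + 1 for i, s in enumerate(sizes) if s >= min_blob]
    if not keep:
        return None

    # Step 7: center of gravity of the largest surviving blob marks the
    # beacon position in the image plane
    best = keep[int(np.argmax([sizes[i - 1] for i in keep]))]
    cy, cx = ndimage.center_of_mass(labels == best)
    return (cx, cy)
```

Steps 1–3 and 8 would wrap this routine: flip/rotate and crop the frame once, keep that labeled raw image, and call `find_beacon` once per beacon color with the corresponding RGB bounds.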