Section outline

  • Color Detection and Object Tracking with OpenCV

    Color detection is one of the most effective and beginner-friendly techniques for enabling vision in robotics. By detecting specific colors in the video feed, your robot can identify and track colored objects such as balls, lines, or markers. This section walks you through the entire process of setting up real-time color tracking using OpenCV, from color filtering to object localization and tracking logic.

      1. Understanding the HSV Color Space
      Unlike the RGB (or OpenCV's default BGR) format, the HSV (Hue, Saturation, Value) color space is much better suited to color detection because it separates image brightness (Value) from color information (Hue and Saturation). This makes color filtering more reliable under varying lighting conditions. Note that for 8-bit images OpenCV stores hue in the range 0 to 179, while saturation and value run from 0 to 255.

      HSV Breakdown:

      • Hue: Type of color (e.g., red, blue, green)
      • Saturation: Intensity or purity of the color
      • Value: Brightness of the color

      To use HSV in OpenCV, convert your image frame using:

      hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
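
      In a robot, this conversion normally runs inside a live capture loop. A minimal sketch, assuming a webcam on index 0 (the window name and quit key are arbitrary choices):

      import cv2

      cap = cv2.VideoCapture(0)  # assumption: default webcam at index 0
      while True:
          ret, frame = cap.read()
          if not ret:                # no frame available (camera unplugged, end of stream)
              break
          # convert the BGR frame OpenCV delivers into HSV for color filtering
          hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
          cv2.imshow("frame", frame)
          if cv2.waitKey(1) & 0xFF == ord('q'):  # press 'q' to stop
              break
      cap.release()
      cv2.destroyAllWindows()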

      2. Defining Color Ranges
      Each color you want to detect needs a lower and an upper HSV bound. For example, to detect a red object:

      lower_red = np.array([0, 120, 70])
      upper_red = np.array([10, 255, 255])
      mask = cv2.inRange(hsv, lower_red, upper_red)

      The mask is a binary image: pixels that fall inside the defined color range become white (255) and everything else becomes black (0).
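
      One caveat: red wraps around the hue axis, so it appears both near 0 and near 179 on OpenCV's hue scale. A more robust red filter combines two ranges; a sketch reusing hsv from step 1 (the saturation and value thresholds are tuning choices):

      import numpy as np

      # red sits at both ends of the hue scale, so build two masks and merge them
      lower_red1 = np.array([0, 120, 70])
      upper_red1 = np.array([10, 255, 255])
      lower_red2 = np.array([170, 120, 70])
      upper_red2 = np.array([180, 255, 255])
      mask1 = cv2.inRange(hsv, lower_red1, upper_red1)
      mask2 = cv2.inRange(hsv, lower_red2, upper_red2)
      mask = cv2.bitwise_or(mask1, mask2)  # white where either range matched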

      3. Finding Object Contours
      Once you have a color mask, use contour detection to identify the shape and location of the object:

      contours, _ = cv2.findContours(mask, cv2.RETR_TREE, cv2.CHAIN_APPROX_SIMPLE)
      for cnt in contours:
          area = cv2.contourArea(cnt)
          if area > 500:
              x, y, w, h = cv2.boundingRect(cnt)
              cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)

      This code draws a rectangle around the detected object if its area is large enough, reducing noise from small false detections.
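
      If you expect only one object of the target color, it is often enough to keep just the largest contour instead of boxing every blob; a small sketch using contours and frame from the snippet above:

      if contours:
          # keep only the biggest blob and ignore the rest
          largest = max(contours, key=cv2.contourArea)
          if cv2.contourArea(largest) > 500:
              x, y, w, h = cv2.boundingRect(largest)
              cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)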

      4. Object Tracking Logic
      To track an object, find its center and map its position relative to the frame’s center. You can then control a robot’s motors to follow the object based on its position:

      • If the object is left of center, turn left
      • If the object is right of center, turn right
      • If it is roughly centered, move forward

      Sample logic to calculate the object's center (the steering decision itself is sketched after this snippet):

      cx = x + w // 2
      cy = y + h // 2
      cv2.circle(frame, (cx, cy), 5, (255, 0, 0), -1)
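
      Turning the three rules above into code amounts to comparing cx against the frame's horizontal center with a small dead band. The sketch below only prints the decision; the 40-pixel tolerance is an arbitrary starting value, and the print call stands in for whatever motor API your robot uses:

      frame_center_x = frame.shape[1] // 2   # half of the frame width
      tolerance = 40                         # dead band in pixels; tune for your setup

      if cx < frame_center_x - tolerance:
          command = "turn left"
      elif cx > frame_center_x + tolerance:
          command = "turn right"
      else:
          command = "move forward"
      print(command)  # replace with motor control calls on a real robot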

      5. Handling Multiple Colors and Objects
      You can track multiple colors simultaneously by applying multiple masks and merging them using bitwise operations. For example:

      combined_mask = cv2.bitwise_or(mask1, mask2)

      When several or overlapping objects are detected, decide which one to act on: sorting contours by area (follow the largest) or by proximity to the last known position are common strategies, depending on the task.
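
      A sketch of the sort-by-area approach, assuming mask1 and mask2 were built with cv2.inRange as in step 2:

      combined_mask = cv2.bitwise_or(mask1, mask2)
      contours, _ = cv2.findContours(combined_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

      # largest detections first; drop tiny blobs that are probably noise
      contours = sorted(contours, key=cv2.contourArea, reverse=True)
      targets = [c for c in contours if cv2.contourArea(c) > 500]
      if targets:
          x, y, w, h = cv2.boundingRect(targets[0])  # track the largest object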

      6. Optimizing Performance

      • Use a smaller frame size to reduce lag (e.g., 320x240; see the sketch below)
      • Apply a Gaussian blur before thresholding to suppress noise
      • Filter contours by area and shape to reject false positives
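
      A rough sketch of the first two points, applied at the top of the capture loop (the 320x240 target size and the 5x5 blur kernel are example values to tune):

      # downscale to cut per-frame processing time, then blur to smooth sensor noise
      frame = cv2.resize(frame, (320, 240))
      blurred = cv2.GaussianBlur(frame, (5, 5), 0)
      hsv = cv2.cvtColor(blurred, cv2.COLOR_BGR2HSV)  # threshold the blurred frame, not the raw one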


      Color detection provides a solid foundation for computer vision in robotics. You can now identify and follow a ball, detect specific markers on the floor, or even trigger actions based on the presence of certain colors. In upcoming sections, we will expand this capability using object detection models and more complex image analysis techniques.