
Raspberry Pi + Garage door opener: Part 2

In Part 1 of my ‘Raspberry Pi + Garage door’ series, I showed a super simple way to control a garage door with a script that could potentially be run from the Internet.

This part expands on that and tackles the question: ‘How do I know the state of my garage door if I’m not at home?’

Because the code does nothing more than simulate a single button press on the remote control, you would normally have to watch the door with your own eyes to know whether it was opening, stopping, or closing. That’s a problem when your eyes are nowhere close to it.

I wanted to come up with a solution that didn’t involve installing new equipment, such as a switch to detect the door’s position. I decided to utilize what I already had in the garage: a camera. Namely, this one: Foscam FI8910W

The idea is to use the camera to grab an image, pipe that image into OpenCV to detect known objects, and then declare the door open or closed based on those results.
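
Grabbing the frame is just an HTTP request to the camera. Here’s a minimal sketch of that step in Python, assuming the Foscam’s snapshot.cgi endpoint and a made-up address and credentials (the actual project may fetch frames differently):

    import urllib.request

    import cv2
    import numpy as np

    # Hypothetical camera address and credentials -- substitute your own.
    SNAPSHOT_URL = "http://192.168.1.50/snapshot.cgi?user=admin&pwd=secret"

    def grab_frame(url=SNAPSHOT_URL):
        """Fetch a single JPEG snapshot from the camera and decode it for OpenCV."""
        with urllib.request.urlopen(url, timeout=5) as response:
            data = np.frombuffer(response.read(), dtype=np.uint8)
        # Grayscale is enough: the shapes are high-contrast black on white.
        return cv2.imdecode(data, cv2.IMREAD_GRAYSCALE)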

I whipped up a couple of shapes in Photoshop to stick on the inside of my door:


Shapes taped to the inside of the garage door

I then cropped out the shapes from the above picture to make templates for OpenCV to match.

Templates for OpenCV
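
Loading the cropped templates is then straightforward; the file names below are made up for illustration and aren’t necessarily what the project uses:

    import cv2

    # Hypothetical file names for the cropped template images.
    TEMPLATE_FILES = ["pentagon.png", "triangle.png", "circle.png"]

    # Load each template as grayscale so it matches the decoded camera frame.
    templates = [cv2.imread(name, cv2.IMREAD_GRAYSCALE) for name in TEMPLATE_FILES]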

The basic algorithm is this:

  1. Get the latest image from the camera
  2. Look for our templates with OpenCV
  3. If all objects (templates) were detected, the door is closed – otherwise it’s open
Shapes successfully detected
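
Steps 2 and 3 map nicely onto OpenCV’s matchTemplate. Here’s a rough sketch of how that detection could look; the match threshold and function names are illustrative, not necessarily what the project uses:

    import cv2

    MATCH_THRESHOLD = 0.8  # minimum normalized correlation to count as a hit (illustrative value)

    def find_shape(frame, template):
        """Return the (x, y) of the best match for `template` in `frame`, or None if it's too weak."""
        scores = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)
        _, best_score, _, best_location = cv2.minMaxLoc(scores)
        return best_location if best_score >= MATCH_THRESHOLD else None

    def door_is_closed(frame, templates):
        """Step 3: the door is considered closed only when every shape template is found."""
        return all(find_shape(frame, t) is not None for t in templates)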


To help make step 3 more accurate, I added a horizontal threshold value which is defined in the configuration file. Basically, it guards against false positives: if the objects we detect are horizontally aligned (their vertical positions fall within that threshold), we can be pretty certain we have the right ones.
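
In code, that check could be as simple as comparing the vertical spread of the matched points against the configured threshold (the function and parameter names here are just illustrative):

    def horizontally_aligned(points, horizontal_threshold):
        """True if the detected match points sit on (roughly) the same horizontal line.

        `points` are the (x, y) locations from template matching, and
        `horizontal_threshold` is the maximum allowed spread in y (pixels),
        read from the configuration file.
        """
        ys = [y for _, y in points]
        return max(ys) - min(ys) <= horizontal_threshold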

I was happy to find that the shapes worked well in low-light conditions. That’s probably because my garage isn’t very deep, so the camera’s IR range is sufficient, and because the black shapes on white paper have high contrast.

Currently I have some experimental code in the project for detecting state changes. This will not only provide more information (e.g. the door is opening because we detect the pentagon has moved up x pixels), but it’s also good for events (e.g. when the alarm system is on, let me know if the door changes state).
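
To give a feel for that idea, here’s a rough sketch of classifying motion from how far a tracked shape has moved between frames; it isn’t the project’s actual implementation, and the pixel threshold is made up:

    def classify_motion(previous_y, current_y, min_delta=5):
        """Guess the door's motion from a tracked shape's vertical movement.

        Image coordinates grow downward, so a shape moving *up* (y decreasing)
        means the door is opening. `min_delta` (pixels) filters out matching
        jitter between frames -- the value is illustrative.
        """
        delta = current_y - previous_y
        if delta <= -min_delta:
            return "opening"
        if delta >= min_delta:
            return "closing"
        return "idle"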

I’ve tested running this on the Raspberry Pi and it works fine, though it can be a good bit slower than a full-blown machine. I have a Raspberry Pi 2 on order and it’ll be interesting to see the difference. Since this code doesn’t need anything specific to the Raspberry Pi, someone may prefer to run it on a faster box to get more info in the short time span it takes for a door to open or close.

I’ve created a video to demo the script in action!

GitHub: https://github.com/twstokes/door_detect