To check for motion, the strategy is to take two pictures with a short delay between them, slightly blur both to account for noise in the images, find the pixels that changed from the first picture to the second, and count those changed pixels. The result is a black-and-white image in which white pixels mark pixels that changed and black pixels mark pixels that did not.
The first four lines import the necessary packages: cv2 for image processing, os for saving images to a flash drive, and time for temporarily pausing the program. Ignoring the method definitions for now, the next step is to create the video capture object; the integer 0 selects which camera to use, in this case the Logitech camera.
Line 9 creates a variable that keeps track of the index of each photo taken, which is needed for taking multiple pictures.
Line 10 is where we create an array named images that we will eventually populate with the two images. Line 11 denotes the location where we will be storing the pictures. Line 14 takes the first image. cam.read() returns a tuple of two values: the first is a boolean indicating whether the image was taken successfully, and the second is the image itself. In this case we only want the second one. Next, we pause the program briefly and take the second image.
On line 17, we blur both images by calling the blur_image(image) method defined on line 5, then find the difference between the two blurred images with cv2's absdiff(image, image) method. The result looks something like this:
If you are running this code on a computer, you can un-comment line 18 to see this. Line 19 takes the image we created on line 17 and, through cv2's threshold method, turns every pixel white (255) if its difference is greater than 7 and black (0) if it is not. This both filters out random noise that blurring might not have completely eliminated and highlights the changes between the two pictures.
To see this in action, un-comment line 20. I'm not going to show the result, mainly because it's kinda creepy.
Line 22 adds up all the white pixels, which represent the difference between the two images, and prints out that number.
Line 23 checks whether the number of changed pixels is greater than 1000 (a threshold I arrived at through trial and error); if it is, the image is saved to the specified folder under the name frame-#.png, where # is the image's index. Line 25 then increments the index in preparation for the next image.
Lines 27-29 specify what happens if Escape is pressed, but this part is completely optional.
Congratulations! If you followed this tutorial, your Raspberry Pi should now take pictures whenever it detects motion. No more angry seniors and no more robot-snatching burglars!