I am building an interactive installation using live video footage. English isn't my native language, so apologies for any mistakes.

I want to use live webcam footage. The program should have a reference image of the background and subtract it from the moving foreground. Every few frames it should capture the foreground and stack those snapshots on top of each other at reduced opacity, so that a transparent trail forms as people walk by.

It should look like this: http://postimg.org/image/ebse5tt5v/

I'm not a skilled programmer, but I have some experience in Processing. I'm using the OpenCV library and I got background subtraction working; however, it doesn't use a reference image. Is this even possible in Processing + OpenCV? And does anyone know how?

Any help is appreciated, thanks!

Processing sketch:
import gab.opencv.*;
import processing.video.*;

Capture cam;
OpenCV opencv;

void setup() {
  size(640, 480);
  cam = new Capture(this, 640, 480, 30);
  cam.start();
  
  opencv = new OpenCV(this, 640, 480);
  // opencv.capture(640, 480);
  opencv.startBackgroundSubtraction(5, 3, 0.1);
  
}

void draw() {
  if(cam.available()){
    cam.read(); 
  }
  
  image(cam, 0, 0);  
  opencv.loadImage(cam); 
  
  opencv.updateBackground();  
  opencv.dilate();
  opencv.erode();

  noFill();
  stroke(255, 0, 0);
  strokeWeight(4);
  for (Contour contour : opencv.findContours()) {
    contour.draw();
  }
}

// For a Capture, frame events arrive via captureEvent(), not movieEvent().
void captureEvent(Capture c) {
  c.read();
}

1 solution

The first part of my answer may come as a surprise, because it isn't even discussed in the OpenCV documentation: strict background subtraction, even based on a reference background image (called the background model), is theoretically impossible. Even two images provide incomplete information about the scene; they say nothing about, for example, reflections of background objects on the foreground and vice versa. Here is a simple example illustrating the idea: a foreground object can cast a shadow on a background object. Because your reference picture does not contain this shadow, the shadowed area can be mistaken for part of the foreground object, which is not really the case. The root of the problem is that a sequence of 2D images does not provide enough information to build a 3D model accounting for all ray tracing and reflection/dissipation on all surfaces, to say nothing of dissipation in volume. So all methods of background separation are approximate and can give quite poor results, depending on the scene.
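The shadow case can be reduced to a few numbers. Here is a minimal plain-Java sketch (the pixel values are made up for illustration) showing how a naive per-pixel difference against the reference crosses the same threshold for a real foreground object and for a mere shadow:

```java
// Tiny numeric illustration (hypothetical grayscale values, 0-255) of why
// naive per-pixel differencing misreads cast shadows as foreground.
public class ShadowFalsePositive {
    // Absolute brightness difference against the reference background image.
    static boolean looksLikeForeground(int reference, int current, int threshold) {
        return Math.abs(current - reference) > threshold;
    }

    public static void main(String[] args) {
        int threshold = 30;

        // A background pixel now covered by a foreground object: large change.
        System.out.println(looksLikeForeground(200, 60, threshold));  // prints true (correct)

        // The same background pixel merely darkened by a cast shadow:
        // still background, but the brightness drop crosses the threshold.
        System.out.println(looksLikeForeground(200, 150, threshold)); // prints true (false positive)
    }
}
```

Both cases look identical to a per-pixel test, which is why real background subtractors add shadow handling and morphological cleanup on top of the raw difference.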

Background subtraction in OpenCV, based on a pre-recorded background model, is explained here:
http://docs.opencv.org/master/db/d5c/tutorial_py_bg_subtraction.html
http://docs.opencv.org/master/d1/dc5/tutorial_background_subtraction.html
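For the reference-image approach the asker wants, the underlying pixel math is just an absolute difference against one fixed frame, followed by a threshold — in the Processing sketch you would apply the same idea to the pixels of a stored reference frame instead of calling updateBackground(). A minimal plain-Java sketch of that core step (grayscale values; the function name is hypothetical):

```java
public class ReferenceDiff {
    /**
     * Builds a binary foreground mask by thresholding the per-pixel
     * absolute difference between a live frame and a fixed reference image.
     * Inputs are 0-255 grayscale values; true in the mask means foreground.
     */
    static boolean[] foregroundMask(int[] reference, int[] frame, int threshold) {
        boolean[] mask = new boolean[reference.length];
        for (int i = 0; i < reference.length; i++) {
            mask[i] = Math.abs(frame[i] - reference[i]) > threshold;
        }
        return mask;
    }

    public static void main(String[] args) {
        int[] reference = {200, 200, 200, 200};  // empty scene, captured once
        int[] frame     = {200,  60,  70, 205};  // someone now covers pixels 1-2
        boolean[] mask = foregroundMask(reference, frame, 30);
        for (boolean m : mask) System.out.print(m + " ");
        // prints: false true true false
    }
}
```

In the actual sketch you would capture the reference frame once (e.g. on a key press, while the scene is empty) and run this comparison in draw() on each new camera frame.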

Note that the demonstrated operations are not the end of the story. They only give you a black-and-white mask of the background separation. You still have to preserve the foreground image and put it onto the white areas (multiplication can work), and replace the black background with, say, zero-opacity pixels. You can also get some misclassifications: background pixels mistakenly recognized as foreground, and vice versa.
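The masking-and-stacking step the asker wants can then be sketched as simple alpha blending in plain Java (grayscale for brevity; the 50% opacity is an arbitrary choice, and all names are hypothetical). Each stamped snapshot pulls the masked pixels toward the silhouette, which is what produces the fading-trail effect:

```java
public class TrailComposite {
    /**
     * Blends a foreground snapshot onto an accumulated canvas.
     * Only pixels where the mask is true are drawn, at the given opacity
     * (standard "source over" blending, 0-255 grayscale values).
     */
    static void stampSnapshot(int[] canvas, int[] snapshot, boolean[] mask, double opacity) {
        for (int i = 0; i < canvas.length; i++) {
            if (mask[i]) {
                canvas[i] = (int) Math.round(opacity * snapshot[i] + (1 - opacity) * canvas[i]);
            }
        }
    }

    public static void main(String[] args) {
        int[] canvas   = {0, 0, 0};          // trail canvas, starts black
        int[] snapshot = {255, 255, 255};    // a white silhouette
        boolean[] mask = {true, true, false};

        stampSnapshot(canvas, snapshot, mask, 0.5);
        // first stamp: masked pixels reach 128 (halfway to the silhouette)
        stampSnapshot(canvas, snapshot, mask, 0.5);
        // second stamp over the same spot gets brighter: 192; unmasked pixel stays 0
        System.out.println(canvas[0] + " " + canvas[1] + " " + canvas[2]);
        // prints: 192 192 0
    }
}
```

Calling this every few frames from draw() with the current mask and camera frame would accumulate the trail; in Processing itself the same effect can be had by drawing the masked snapshot with tint() onto an offscreen PGraphics buffer.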

—SA

