The first part of my answer may come as a surprise, because it is not even discussed in the OpenCV documentation: strict background subtraction, even based on a reference background image (called the background model), is theoretically impossible. This is because even two images provide incomplete information about the scene; they do not describe, for example, reflections of background objects on the foreground and vice versa. Here is a simple example illustrating the idea: a foreground object can cast a shadow on some background object. As your reference picture does not have this shadow, the shadowed area can be mistaken for part of the foreground object, which is not really the case. The root of the problem is this: a sequence of "2D" images does not provide enough information to build a 3D model taking into account all ray tracing and reflection/dissipation on all surfaces, let alone scattering within volumes. So, all methods of background separation are approximate and can give you quite poor results, depending on the scene.
Background subtraction in OpenCV, based on a pre-recorded background model, is explained here:
http://docs.opencv.org/master/db/d5c/tutorial_py_bg_subtraction.html[^],
http://docs.opencv.org/master/d1/dc5/tutorial_background_subtraction.html[^].
Note that the demonstrated operations are not the end of the story. You get only a black-and-white mask of the background separation. You have to preserve the foreground image and put it on the white spots (multiplication can work), and replace the black background with, say, zero-opacity pixels. You can still have some false positives: foreground pixels mistakenly recognized as background.
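The multiplication and zero-opacity steps above can be sketched in NumPy alone. The `frame` and `mask` arrays here are placeholder inputs (a BGR frame and the 0/127/255 mask a subtractor would produce); the shapes are arbitrary:

```python
import numpy as np

# Placeholder inputs (assumptions for this sketch): a BGR frame and a
# subtractor-style mask where 255 = foreground, 127 = shadow, 0 = background.
frame = np.random.randint(0, 256, (120, 160, 3), dtype=np.uint8)
mask = np.zeros((120, 160), dtype=np.uint8)
mask[40:80, 60:100] = 255  # foreground region

# "Multiplication" step: keep only pixels the mask marks as true foreground
# (shadow pixels, value 127, are treated as background here).
binary = (mask == 255).astype(np.uint8)
foreground = frame * binary[:, :, None]

# Zero-opacity step: append an alpha channel, fully transparent (0) on
# background, fully opaque (255) on foreground.
bgra = np.dstack([foreground, binary * 255])  # shape (H, W, 4)
```

Saving `bgra` as PNG (e.g. with `cv2.imwrite`) preserves the transparency, so the cut-out foreground can be composited over any other image.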
—SA