(as discussed previously)
We have ways of ignoring moving objects, which reveals the background over time. Over even longer spans (roughly 5-10 minutes, depending on the scene) we can compare background images for differences. That could be used to detect things like a backpack that is left somewhere, for example.
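A minimal sketch of the two ideas above: averaging pixel values over many frames makes moving objects fade out (revealing the background), and comparing a later frame against that learned background flags pixels that changed. Frames are modeled here as flat lists of grayscale values for illustration; a real implementation would use OpenCV/NumPy arrays, and the `threshold` value is an assumed tuning parameter.

```python
def learn_background(frames):
    """Per-pixel mean over the warm-up frames: transient moving
    objects contribute little, so the result approximates the
    static background."""
    n = len(frames)
    acc = [0] * len(frames[0])
    for frame in frames:
        for i, px in enumerate(frame):
            acc[i] += px
    return [a / n for a in acc]

def diff_pixels(background, current, threshold=30):
    """Indices of pixels differing from the learned background by
    more than `threshold` -- candidates for 'left object' blobs."""
    return [i for i, (b, c) in enumerate(zip(background, current))
            if abs(b - c) > threshold]
```

In practice the per-pixel differences would then be grouped into connected blobs and filtered by size, but the comparison step is the core of it.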
So basically the system would "warm up" (analyze enough frames to learn the background), save the background image, then repeatedly look for differences against that saved background. These differences (blobs) would indicate objects placed in the scene and left there for some length of time. Such objects would be circled in red, saved/logged as separate images, and an alarm event (or equivalent) would be raised. An API control will be provided to "clear" alarms and reset the system (indicating that the current background image/scene is OK, returning to the initial state, etc.).
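The life cycle described above (warm up, snapshot the background, alarm on persistent differences, "clear" to accept the current scene) could be sketched as a small state machine. All names here are illustrative, not a real API, and frame comparison is reduced to equality for brevity; persistence is counted in frames rather than wall-clock minutes.

```python
class AbandonedObjectMonitor:
    """Hypothetical sketch of the warm-up / monitor / alarm cycle."""

    def __init__(self, warmup_frames=2, persist_frames=3):
        self.warmup_frames = warmup_frames    # frames needed to learn background
        self.persist_frames = persist_frames  # how long a difference must remain
        self.background = None
        self.seen = 0
        self.persist = 0
        self.alarms = []                      # logged alarm frames

    def feed(self, frame):
        """Process one frame; return True if an alarm was raised."""
        self.seen += 1
        if self.background is None:
            if self.seen >= self.warmup_frames:
                self.background = frame       # snapshot after warm-up
            return False
        if frame != self.background:
            self.persist += 1                 # difference must persist
            if self.persist == self.persist_frames:
                self.alarms.append(frame)     # log/save the offending frame
                return True
        else:
            self.persist = 0                  # difference vanished; reset
        return False

    def clear(self, frame):
        """API 'clear' control: accept current scene as the new background."""
        self.background = frame
        self.persist = 0
        self.alarms = []
```

For example, feeding the same frame twice completes warm-up, three differing frames then trip an alarm, and `clear()` returns the monitor to its initial (un-alarmed) state with the new scene as baseline.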