Poor man's PTZ auto tracking

DanDenver

Getting comfortable
Joined
May 3, 2021
Messages
492
Reaction score
791
Location
Denver Colorado
Can DeepStack/SenseAI tell you where in the image the object of interest is located? Like so many pixels high and so many pixels over?
Just curious, as I could then tell my PTZ where to turn, thus making my non-auto-tracking PTZ a poor man's auto-tracking PTZ.
I would create 9 presets on the PTZ corresponding to nine zones of the view. Then, if I can learn where the object is, I can point the PTZ at the proper area.
A tad clunky, but it could work.
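For what it's worth, DeepStack's detection endpoint does report pixel coordinates: each prediction in the response carries a `label`, a `confidence`, and a bounding box as `x_min`/`y_min`/`x_max`/`y_max`. A minimal sketch of pulling an aim point out of one of those predictions (the helper name is mine; the dict mirrors the response shape):

```python
def box_center(prediction):
    """Return the (x, y) pixel center of a DeepStack-style prediction.

    DeepStack's /v1/vision/detection endpoint returns a "predictions" list
    of dicts shaped like the example below; the box corners are in pixels
    relative to the submitted image.
    """
    cx = (prediction["x_min"] + prediction["x_max"]) / 2
    cy = (prediction["y_min"] + prediction["y_max"]) / 2
    return cx, cy

# Example prediction, shaped like one entry of the JSON "predictions" list
person = {"label": "person", "confidence": 0.92,
          "x_min": 100, "y_min": 50, "x_max": 200, "y_max": 350}
print(box_center(person))  # (150.0, 200.0)
```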
 

wittaj

IPCT Contributor
Joined
Apr 28, 2019
Messages
25,191
Reaction score
49,089
Location
USA
There is another way to do it that would probably be more responsive than waiting for DeepStack; I think the object would be out of view before the frame is processed and the camera moved.

Before I got an autotrack PTZ, I created a poor man's tracking PTZ using clone cameras in BI. My wide-angle overview camera was cloned 7 times, and I created a zone for each area of the field of view, which would then call the PTZ to the preset number for that area. It wasn't as good as a tracking PTZ, but it worked surprisingly well. As someone walked up my driveway, the PTZ would follow them just by going to the preset number for whichever zone they were in.
 

wittaj

You don't have an overview camera now? It is only one overview camera and then you clone it as many times as you need in BI. The CPU overhead is minimal. You could even hide the views.

Even if DS could take your PTZ field of view and say an object is so many pixels high and so many pixels over, I think your concept wouldn't work. How would it then move around as a poor man's PTZ if it doesn't have an overview to know where to send it?

Under your example, you would have the PTZ set up as an overview and it says the object is at X,Y on the screen, so it zooms in to that area via Preset. Now what?

How does it move around the yard to the next preset as the person moves? It has just zoomed into another preset area that is tighter than the FoV of the original preset.
 

DanDenver

Yep, having fast response times from the AI would be critical. If it lost track, it would just zoom out and start over; thus a poor man's version for sure.
I was not really envisioning any zoom, as this approach is janky, but more just following an object as it moved around (so always some space around the subject). I just have not been able to find a small PTZ that auto tracks.
 

wittaj

Even if it doesn't zoom, the presets would have to have a lot of overlap to work properly, and I am still struggling to figure out how to make it work without an overview.

Main view detects object at X,Y in the view, so it swings to Preset 1

Preset 1 now detects object at X,Y in the view, so it swings to Preset 2

Preset 2 now detects object at X,Y in the view, so it swings to Preset 3

ETC.

That works for one specific spot in the field of view.

But all this would know is the object is at X,Y, but not know direction of travel.

So what if Preset 2 now detects the object at X-50, Y-50? How do you get it to go to the preset that corresponds to that location?

If you were to set up each field of view with, say, 4 quadrants and a preset for each quadrant, you would then be setting up 4 clones for each PTZ preset to direct it where to move based on which quadrant the object is in.

There isn't a way in BI to take a field of view and tell it: if the object is at this X,Y, go to Preset 1, but if it is at X-50, Y-50, go to this other preset.
 

DanDenver

Yeah, that is why I was curious whether the AI not only reported that a person was found at such-and-such percent confidence, but also the X/Y coordinates.
With some scripting I could convert those X/Y coordinates to a specific zone of the camera's current view.
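That scripting step is small. A sketch of mapping an object's center point to one of 9 presets laid out as a 3x3 grid (the preset numbering and frame size are assumptions for illustration):

```python
def quadrant_preset(cx, cy, frame_w, frame_h):
    """Map an object's center (cx, cy) in pixels to one of 9 presets.

    Assumes presets are numbered 1..9 left-to-right, top-to-bottom:
        1 2 3
        4 5 6
        7 8 9
    """
    col = min(int(cx / frame_w * 3), 2)  # clamp so cx == frame_w stays in column 2
    row = min(int(cy / frame_h * 3), 2)
    return row * 3 + col + 1

# A person centered at (1600, 500) in a 1920x1080 frame falls in the
# right column, middle row -> preset 6.
print(quadrant_preset(1600, 500, 1920, 1080))  # 6
```

A real version would then send that preset number to the camera (e.g. via its HTTP/ONVIF PTZ interface) instead of just returning it.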
 

wittaj

It has been a while since I have used the .dat file, but take a look at that; I think it might have coordinates in it.
 

jawa6988

n3wb
Joined
Dec 3, 2023
Messages
2
Reaction score
1
Location
US
So at risk of reviving a dead post:

Wouldn't this be totally possible now without using specific presets? Maybe not with AI, but at least with the motion zones on the PTZ cam. I believe it can be configured, for example, to move left by a small increment if the motion is in zone A, up if it is in zone B, etc. As the camera moves, the motion zones hold their positions with respect to the lens and the field of view, not the landscape LOL. I imagine tracking may be slow due to the delay waiting for the camera to stop moving and then start detecting motion again, but I think it could work for a cheap PTZ cam like the plasticky indoor WiFi ones. I have a couple that are WiFi only and have no homing function, so they lose their positions after a few moves, and I'm looking for a way to make them a little more useful before I just toss them out.
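That zone-to-increment idea can be sketched in a few lines. Everything here is assumed for illustration (zone names, step size, mechanical limits); tracking the position in software also works around the missing homing function, as long as you start from a known position:

```python
# Hypothetical sketch: nudge the camera toward whichever edge zone saw motion,
# and keep the pan/tilt position in software since the camera has no homing.
STEP = 5          # degrees per nudge (assumed)
PAN_LIMIT = 170   # assumed mechanical limits, in degrees from center
TILT_LIMIT = 45

ZONE_DELTAS = {
    "left":   (-STEP, 0), "right": (STEP, 0),
    "up":     (0, STEP),  "down":  (0, -STEP),
    "center": (0, 0),     # object already centered: hold position
}

def next_position(pan, tilt, motion_zone):
    """Return the new (pan, tilt) after nudging toward the motion zone,
    clamped to the assumed mechanical limits."""
    dp, dt = ZONE_DELTAS[motion_zone]
    pan = max(-PAN_LIMIT, min(PAN_LIMIT, pan + dp))
    tilt = max(-TILT_LIMIT, min(TILT_LIMIT, tilt + dt))
    return pan, tilt

print(next_position(0, 0, "left"))     # (-5, 0)
print(next_position(-168, 0, "left"))  # clamped at the limit: (-170, 0)
```

A real version would send the computed position to the stepper driver after each motion event rather than returning it.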
 

DanDenver

I did not update this thread, but I wrote a program that scraped the log output of BI, which does indeed contain the coordinates when a 'person' is found. You can then use those coordinates to move the camera.
Remember, this approach involved no zooming.

My approach was simple: create 9 presets, and when motion is detected in any of the 9 zones, center the camera on that zone. This, of course, would only be effective on slow-moving objects.

However, I ran into a math problem: the coordinates include the size of the object as well, and reverse-engineering that into usable info exceeded my limit of interest. For the panning effect to have any value, the size of the found object ('person') matters. Math problems make me sleepy.
It was a fun adventure, but I am too lazy to take it to a useful level.
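For anyone picking this up: if the logged detection is a bounding box, the size doesn't need to be reverse-engineered; folding it into a single aim point is enough for panning. This assumes a left/top/width/height layout (check your own log for the actual format), and the 2/3-height aim line is just one arbitrary choice:

```python
def aim_point(x, y, w, h):
    """Collapse an assumed left/top/width/height box into a point to aim at.

    Horizontal center keeps the person mid-frame. Aiming a bit below the
    vertical center (here the box's 2/3 line) keeps a standing person's
    body in view rather than centering on their head.
    """
    return x + w / 2, y + h * 2 / 3

print(aim_point(100, 200, 60, 180))  # (130.0, 320.0)
```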

The approach in post #2 above requires multiple cameras, which is exactly what I was trying to avoid, but it seems like a viable option.

Note your idea of simply panning the camera a "small increment" is good, but realize that by the time the AI coordinates hit the log file and you parse that info, too much time has passed for the info to be actionable. Such an approach would require a more native solution, possibly in the camera's own software (which is how those auto-tracking PTZs are sold, right?).
This is why I proposed breaking the view down into 9 zones. A slow-moving object takes a little while to move from one zone to another, and that time budget is critical because you have to get the coordinates, move the camera, read the scene again, and keep repeating; none of that is instantaneous. Since my camera is an overview camera covering a large area, I felt it met the basic requirements.
 

jawa6988

That's some fun info. I didn't know the coordinates were in the log. I played with my simple, non-AI approach a little over the weekend on a camera rig I built using an ESP board with stepper motors for PT control. You're right, timing is the issue with any method (which is why it really needs to live in the camera itself). This camera is a board I pulled from a laptop, soldered to a USB cable, and connected to the RasPi that runs Klipper for the 3D printers, so it's already ridiculously overcomplicated and adds a lot of latency. It's not a performance rig by any means HAHA... but the motion tracking does work if it's a slow-moving object, like a person just milling about, or a repeated action (like the 3D printer bed moving). It works well enough to center the camera on the printer while it's actively printing and then move to center on me when I enter my workshop, but there's no way I'd expect it to actually track and keep up with a person, dog, or vehicle.
 