
Advanced Pixel Search Library


FastFrench

What do you think of this library?

35 members have voted

  1. What do you think about FastFind?

    • It's really great, I can't imagine doing a script without it
    • It's of some use for me in my current scripts
    • I could use it some day
    • Looks nice, but of no use for me
    • I've tried it, but it doesn't fit my needs
    • Sounds good on paper, but I can't make it work (bug, or too difficult to use)
  2. Have you tried it?

    • Yes, I'm using it right now
    • Somewhat, I've done some testing, will surely use it later
    • I've downloaded it and just played a little with the packaged demo scripts
    • I've downloaded it, but not tried it so far
    • Not downloaded it so far, but I probably will some day
    • It's of no interest to me.
  3. What is missing or should be improved?

    • It has all the features I may need for pixel handling
    • OK, it's pretty fast, but couldn't it be faster?
    • Too hard to use; could you simplify usage?
    • Some additional features would be nice to have (please explain in a message)
    • It really lacks some decent documentation (I still hope to find someone to help with that)
    • Some critical features are missing, so I can't use it (please explain in a message)
    • I've found some minor bugs (please explain in a message)
    • I've found some serious bugs (please explain in a message)
    • I've never tried it so far, can't tell
    • It would be nice if you could provide easy access to the associated tool - FFShowPixel
    • I would like to use it from other languages. Could you provide wrappers? (please explain in a message)


Recommended Posts

Hello,

I didn't receive any email notifications for new posts in this topic, even though I had subscribed to it... Strange.

Anyway, now I could test some code for 2.0 if you want.

Thank you, I'll probably ask when there's something to test ;)

But I'm still looking for someone ready to handle the update of the AutoIt wrapper.

Excuse me, can I reproduce the original PixelSearch search mode with FFNearestPixel?

I need the coordinates of the first matching pixel. Thanks.

With FFNearestPixel, you have finer control over the search order, but you can't do exactly the same thing as with PixelSearch. However, if you set the reference point to the top-left corner of your search area, the result will usually be the same as PixelSearch with its default scan order.
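
For instance, a minimal sketch of that approach, assuming the AutoIt wrapper's FFNearestPixel takes the reference point, the colour and then the search area (check the wrapper's parameter list) and returns an array with the coordinates of the pixel found; the coordinates and colour below are made up:

; Emulate PixelSearch(100, 100, 500, 400, 0xFF0000) by putting the reference
; point on the top-left corner of the search area.
$aPixel = FFNearestPixel(100, 100, 0xFF0000, True, 100, 100, 500, 400)
If IsArray($aPixel) Then
    ConsoleWrite("Pixel found at " & $aPixel[0] & "," & $aPixel[1] & @CRLF)
Else
    ConsoleWrite("Color not found in the search area" & @CRLF)
EndIf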

Thank you, I'll probably ask when there's something to test ;)

But I'm still looking for someone ready to handle the update of the AutoIt wrapper.

That's what I meant by testing :)

If you want to send me the code, I could try to make the wrapper.


I was hoping someone could help me with how to use snapshots. As you can see below, I want to use FFColorCount in a loop, but the only way I can do this is by taking a new snapshot every time it runs. I tried several things, but I can't figure out how to use the same snapshot for all the FFColorCount calls in the loop.

FFSetDefaultSnapShot(0)
$snap = FFSnapShot($iLeft, $iTop, $iRight, $iBottom)

For $iRow = $iTop To $iBottom
    $iRowCount = FFColorCount($iColor, 0, True, $iLeft, $iRow, $iRight, $iRow)
Next

Just check the parameters of FFColorCount...

FFSetDefaultSnapShot(0)
$snap = FFSnapShot($iLeft, $iTop, $iRight, $iBottom)

For $iRow = $iTop To $iBottom
    $iRowCount = FFColorCount($iColor, 0, False) ; False: reuse the existing snapshot instead of taking a new one
Next

BUT then you can only count on the full snapshot, so it doesn't help: if you need to count each row separately, you do have to take a new snapshot covering a different row each time (as you did).
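
So a per-row count ends up looking like the loop in the question; a minimal sketch that just collects the counts in an array (the parameter order colour / shade variation / force-new-snapshot / area is taken from the snippets above):

; Count $iColor on each row separately, forcing a fresh one-row snapshot per iteration.
Local $aRowCounts[$iBottom - $iTop + 1]
For $iRow = $iTop To $iBottom
    $aRowCounts[$iRow - $iTop] = FFColorCount($iColor, 0, True, $iLeft, $iRow, $iRight, $iRow)
Next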

Hi, good job on FF2.0

I was trying FFComputeMeanValues on 4 rectangular snapshots of a webcam picture to do motion detection, and specifically to work out which direction a person is going.

To do this I put rectangles on doors A, B, C and D (the white rectangle in the bottom right).

I take the snapshots every 100 ms and collect each successive detection area in a string of all the areas involved, so, in theory, by analysing the first and last detections I should know the starting and ending crossings and therefore the direction of the person.

It works quite well, but I have some problems:

1) To the right of D there is a window, so when someone passes in that area they can cast shadows on door A, and this triggers the mean values of A without anyone crossing there (a false detection).

2) Apart from that, sometimes when someone crosses from D to A or vice versa, I get sequences like this:

correct:

DDDBA

AAABBABBDDD

NOT correct:

from D to A: DBDBDABAC

from A to D: ABDDADA

which means that at the end of the motion there is still movement detected in an area different from the real final one.

For area C, it could be that some pixels are still out of range after the person passes through A.

For area A, it could be that the door is still slowly moving when the person arrives at D.

The point is that these false detections could actually be real if someone changes their mind and really does want to go to C, or to return to A where they came from...

So, how can I get the real direction of a person while avoiding false detections?
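
For reference, a rough, hypothetical sketch of the polling loop described above. The rectangles, baseline values and threshold are placeholders, and the way FFComputeMeanValues is called here (snapshot slot in, array of mean values out) is an assumption to check against the wrapper:

; One snapshot slot per door area, polled every 100 ms; an area letter is
; appended to the detection string whenever its mean value moves away from
; a previously measured baseline by more than a threshold.
Local $aAreas[4][4] = [[10, 50, 80, 200], [120, 50, 190, 200], [230, 50, 300, 200], [340, 50, 410, 200]]
Local $aBaseline[4] = [100, 100, 100, 100] ; reference mean value per area, measured with the scene empty
Local $iThreshold = 20, $sAreaNames = "ABCD", $sDetections = ""

While 1
    For $i = 0 To 3
        FFSnapShot($aAreas[$i][0], $aAreas[$i][1], $aAreas[$i][2], $aAreas[$i][3], $i) ; one slot per area
        $aMean = FFComputeMeanValues($i) ; assumed to return the mean values of snapshot $i
        If IsArray($aMean) And Abs($aMean[0] - $aBaseline[$i]) > $iThreshold Then
            $sDetections &= StringMid($sAreaNames, $i + 1, 1) ; record "A".."D"
        EndIf
    Next
    Sleep(100) ; poll every 100 ms as described
WEnd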


  • 2 weeks later...

I'm having trouble with FFIsDifferent and FFLocalizeChanges... just a very basic example:

$InitialSnapshot = FFSnapShot(0, 0, 0, 0, 0, WinGetHandle("[ACTIVE]"))
Sleep(5000)
$NewSnapshot = FFSnapShot(0, 0, 0, 0, 0, WinGetHandle("[ACTIVE]"))
$ChangeResult = FFLocalizeChanges($InitialSnapshot, $NewSnapshot, 0) ; have tried using FFIsDifferent here also
If IsArray($ChangeResult) = False Then Msgbox(0,"Error","Result is false")

The result always comes back false, regardless of whether the window has changed. I added FFSaveJPG so I could check, and the window is definitely changing during the 5000 ms. I figured that with FFLocalizeChanges I'd be able to get back the number of differing pixels between the two snapshots, yet the result is False (i.e. not an array of information).

Any idea what I'm doing wrong?

Thanks in advance

Steve


You are putting your two snapshots in the same memory slot (0). Furthermore, the return value of the FFSnapShot function is just 1, so there's no point storing it in a variable.

You should do it like this:

FFSnapShot(0, 0, 0, 0, 0, WinGetHandle("[ACTIVE]")) ; snapshot of the active window into memory slot 0
Sleep(5000)
FFSnapShot(0, 0, 0, 0, 1, WinGetHandle("[ACTIVE]")) ; second snapshot into memory slot 1
$ChangeResult = FFLocalizeChanges(0, 1, 0) ; compare slots 0 and 1, shade variation 0
If IsArray($ChangeResult) = False Then Msgbox(0,"Error","Result is false")

This way you put the two snapshots in memory slots 0 and 1 and compare them with FFLocalizeChanges. Also adjust the shade variation to a value greater than 0 to avoid false positives due to noise.


I saw this beautiful Siemens alarm detection video.

This is what I'm looking for.

Do you think it's possible to obtain a similar result?

Determining the barycentre of a detection rectangle is easy, and so is tracking it to obtain the path.

The problem is determining a rectangle that fits the size of a person, and above all splitting it into 2 rectangles when 2 people go in different directions.

With FFLocalizeChanges, this could be done by monitoring the width of the detection: if it's wider than a threshold, split it into a rectangle of the default maximum width of a person on the left and another one on the right.

The best option would be separate tracking of multiple objects in a scene, which would need a threshold check on every group of changing pixels in the scene so that the different groups can be selected separately. You could build an array of pixel groups with their coordinates.

Another problem could be the perspective of a barrier line: at the moment we use a 2D position system to check the detection, but it should take perspective into account; that part could be done inside the AutoIt script.


You are putting your two snapshots in the same memory slot (0).

Ahh - many thanks frank10. I've adjusted it and it works well. The script is to be used to monitor a given window on a regular basis to see if it changes. If there has been no change after a set period, the snapshot is to be saved as a JPG and emailed to the helpdesk. If the snapshots differ after the period of time (5000 ms), the initial snapshot (memory slot 0) needs to be overwritten by the current snapshot, in order to check again after the next period. Would the FFDuplicateSnapShot feature work here? For instance:

FFDuplicateSnapShot(1, 0)

Thanks again

Steve
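
A minimal sketch of the monitoring loop described above, assuming FFDuplicateSnapShot(1, 0) does copy memory slot 1 over slot 0 as proposed (the parameter order and the shade-variation value of 10 are assumptions to check against the wrapper):

FFSnapShot(0, 0, 0, 0, 0, WinGetHandle("[ACTIVE]"))       ; reference snapshot in slot 0
While 1
    Sleep(5000)
    FFSnapShot(0, 0, 0, 0, 1, WinGetHandle("[ACTIVE]"))   ; current snapshot in slot 1
    $aChanges = FFLocalizeChanges(0, 1, 10)               ; compare slots 0 and 1, shade variation 10
    If IsArray($aChanges) Then
        FFDuplicateSnapShot(1, 0) ; window changed: the current snapshot becomes the new reference
    Else
        ; no change over the whole period: save slot 1 as JPG (e.g. with FFSaveJPG) and email it, then stop
        ExitLoop
    EndIf
WEnd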


I saw this beautiful Siemens alarm detection video. This is what I'm looking for. [...] The best option would be separate tracking of multiple objects in a scene [...]

Interesting. But you're right, detecting and tracking several changing sub-areas at the same time starts to be somewhat out of the scope of FastFind, and I would say maybe even of a scripted solution. For decent performance, you would probably need to implement specific algorithms in an optimized language (C++ or something similar).

Now, if you make some assumptions to simplify the problem, maybe you can find a way to do it with the tools you have here (FastFind + AutoIt).


I was trying FFComputeMeanValues on 4 rectangular snapshots of a webcam picture to do motion detection [...] from A to D: ABDDADA [...] So, how can I get the real direction of a person while avoiding false detections?

You could do some filtering, but I can't find a reliable way to treat ABDDADA as a correct A-to-D path: the mean of the last 3 steps is closer to A than to D. Maybe you should/could adjust the detection parameters to limit false detections in A.
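
For what it's worth, a small sketch of that kind of last-steps filter, which indeed picks A rather than D for the "ABDDADA" sequence (purely illustrative, plain AutoIt string handling):

; Pick the area that appears most often in the last 3 detections,
; e.g. "ABDDADA" -> last 3 are "ADA" -> "A" wins 2 to 1 (so the filter fails for this sequence).
Func _LastAreaByMajority($sDetections)
    Local $sTail = StringRight($sDetections, 3)
    Local $sBest = "", $iBest = 0
    For $i = 1 To StringLen($sTail)
        Local $sArea = StringMid($sTail, $i, 1)
        Local $iCount = StringLen($sTail) - StringLen(StringReplace($sTail, $sArea, ""))
        If $iCount > $iBest Then
            $iBest = $iCount
            $sBest = $sArea
        EndIf
    Next
    Return $sBest
EndFunc

ConsoleWrite(_LastAreaByMajority("ABDDADA") & @CRLF) ; prints "A"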


I think the problem is also related to shadows cast by the door, the window or the person himself while walking.

When someone is in D, sometimes they block the window's light from D and cast a shadow onto A that triggers the mean values...

The same happens when the door in A is moving: it reflects light that triggers changes in C.

Anyway, I had a look at the beautiful open-source OpenCV: that's the way to detect and track movement; obviously it doesn't rely only on mean/luma values.

I would like to try it, and maybe use some DllCall from AutoIt if I get it to work, but I don't know if that's possible with all those huge .h, .dll and .lib files; maybe with a very restricted set of functions, just the ones needed...

Do you know anything about that? Would you be interested in trying a bit of conversion to AutoIt, maybe in another thread?


I think the problem is also related to shadows cast by the door, the window or the person himself while walking. [...] Do you know anything about that? Would you be interested in trying a bit of conversion to AutoIt, maybe in another thread?

Sorry, but I don't think I'd have the time for that.

