
OpenCV UDF



7 hours ago, Lion66 said:

Is it possible to use this point search method with any comparison method, and will it work with rotated or resized images?

I am not sure I understand the question.
If by comparison method you mean variants of BFMatcher, then the answer is no, at least with pure OpenCV: OpenCV only implements compute for the ORB and BRISK algorithms.
If by comparison method you mean matchTemplate or compareHist, the answer is most likely no, because they are only reliable for non-rotated and non-scaled images.
I am not an expert in image processing; I am only speaking from a school-project point of view.

 

7 hours ago, Lion66 said:

Instead of a frame of four corners, I would draw a circle centered on the cluster of matched points, if there are tools for this.

If I am not wrong, you want to find an object in a scene.
Given the keypoints of the object, I don't see how you would match them with the keypoints of the scene without computing descriptors for both the object and the scene.
If you find a working example in Python or C++, I will try to convert it with the UDF functions.

 

7 hours ago, Lion66 said:

P.S. The new version of the UDF is missing $CV_BGR2GRAY=6 in the file cv_constants.au3.

Old examples no longer work. Was this done on purpose?

Not really on purpose.
Constants that are enums on the C++ side are now generated from the C++ files and are available in cv_enums.au3.
cv_constants.au3 contains constants that are not part of OpenCV but are used with OpenCV.

$CV_BGR2GRAY is now $CV_COLOR_BGR2GRAY
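
For reference, the same enum exists in OpenCV's Python bindings; a minimal sketch (cv2 code, not UDF code, with a hypothetical input file):

import cv2

# COLOR_BGR2GRAY is the same OpenCV enum (value 6) that the UDF now
# exposes as $CV_COLOR_BGR2GRAY in cv_enums.au3
img = cv2.imread("box.png")
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
print(cv2.COLOR_BGR2GRAY)  # prints 6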


Thank you for your answers.

So far, I see a low detection rate for objects with my image sets (less than 50%) in all the examples: in your code and in all the Python examples that I found.

I don't understand why this is happening.

I haven't figured out the relationships between the keypoint detection, matching, and frame-drawing algorithms yet.

Perhaps the examples on the Internet contain only basic commands?

Or I don't understand anything.


Hi smbape

I'm trying to rotate the picture by an arbitrary angle.

What is the $matRotationMatrix2D parameter?

You may spot other errors as well.

Thank you.

Local $ptemp = _cveImreadAndCheck("wally3.png")
Local $angle = 10
Local $scale = 1
Local $matRotationMatrix2D

_cveMatGetWidth($ptemp)
_cveMatGetHeight($ptemp)
Local $center = [_cveMatGetWidth($ptemp)/2, _cveMatGetHeight($ptemp)]

Local $rot_mat = _cveGetRotationMatrix2DMat($center, $angle, $scale, $matRotationMatrix2D)
_cveWarpAffineMat($ptemp, $ptemp, $rot_mat, 1, $CV_INTER_LINEAR)

_cveImshowMat("", $ptemp)
_cveWaitKey()

 


Hi @Lion66

Here is how to do it

Local $ptemp = _cveImreadAndCheck("wally3.jpg")
_cveImshowMat("Original", $ptemp)

Local $angle = 10 ; Rotation angle in degrees. Positive values mean counter-clockwise rotation
Local $scale = 1 ; Isotropic scale factor

; grab the dimensions of the image and calculate the center of the image
Local $size = _cvSize()
_cveMatGetSize($ptemp, $size)
Local $center = DllStructCreate($tagCvPoint2D32f)
$center.x = $size.width / 2
$center.y = $size.height / 2

; rotate our image by $angle degrees around the center of the image
; this is done by computing the rotation matrix with _cveGetRotationMatrix2DMat
; then apply the rotation matrix on the image with _cveWarpAffineMat

#Region compute the rotation matrix
Local $rot_mat = _cveMatCreate()
_cveGetRotationMatrix2DMat($center, $angle, $scale, $rot_mat)
#EndRegion compute the rotation matrix

#Region apply the rotation matrix
; The rotated image can be larger than the original: its width and height
; are bounded by sqrt(width^2 + height^2) * $scale
; For simplicity, put the result in a matrix of size { width, height },
; which may clip the corners of the rotated image.
; Otherwise, we would have to apply $rot_mat to each corner of the image,
; then compute the rotated image size from the rotated corners
Local $rotated = _cveMatCreate()
_cveWarpAffineMat($ptemp, $rotated, $rot_mat, $size, $CV_INTER_LINEAR)
#EndRegion apply the rotation matrix

; display the rotated image
_cveImshowMat("Rotated", $rotated)
_cveWaitKey()
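
For comparison, here is a hedged Python sketch of the same operation using OpenCV's cv2 bindings (not the UDF), including the optional step mentioned in the comments above: growing the output canvas from the rotated corners so nothing is clipped. The file name "wally3.jpg" is taken from the post above.

import cv2
import numpy as np

img = cv2.imread("wally3.jpg")
(h, w) = img.shape[:2]
center = (w / 2, h / 2)
angle = 10   # degrees, positive = counter-clockwise
scale = 1.0  # isotropic scale factor

# rotation matrix around the image center
M = cv2.getRotationMatrix2D(center, angle, scale)

# bounding size of the rotated image, equivalent to rotating the 4 corners
cos, sin = abs(M[0, 0]), abs(M[0, 1])
new_w = int(h * sin + w * cos)
new_h = int(h * cos + w * sin)

# shift the transform so the rotated image stays centered in the new canvas
M[0, 2] += (new_w / 2) - center[0]
M[1, 2] += (new_h / 2) - center[1]

rotated = cv2.warpAffine(img, M, (new_w, new_h), flags=cv2.INTER_LINEAR)
cv2.imshow("Rotated (Python)", rotated)
cv2.waitKey()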

 


Hi @smbape

Returning to the topic of keypoint detection, I have come to the conclusion that the ORB method gives unsatisfactory results.
I get good results in the examples using KAZE (not AKAZE) and BRISK. I am attaching the Python examples.
I'd be happy if you could do the same with your library.
You would need to choose between KAZE and BRISK, leave the choice of match types and, if possible, add the ability to change the number of matches ("dmatches" in the example).
And an additional question:
Is it possible to calculate the coordinates of the region that is drawn at the end of the example, relative to the picture in which we are looking for the object?
Thank you.

BRISK-DetectAndFrame.py KAZE-DetectAndFrame.py


Hello again @Lion66

6 hours ago, Lion66 said:

I get good results in the examples using KAZE (not AKAZE) and BRISK. I am attaching the Python examples.

I tried the attached file and didn't get any better results.
I first thought I might be doing something wrong, but I got the same results in C++.

## extract the matched keypoints
src_pts  = np.float32([kpts1[m.queryIdx].pt for m in dmatches]).reshape(-1,1,2)
dst_pts  = np.float32([kpts2[m.trainIdx].pt for m in dmatches]).reshape(-1,1,2)
    
## find homography matrix and do perspective transform
M, mask = cv2.findHomography(src_pts, dst_pts, cv2.RANSAC,5.0)

You are finding the homography using all the matches, even the worst ones. I don't see how you could get a better result with that.
Can you try your Python code with box.png and box_in_scene.png and give feedback?
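
To illustrate the point about the worst matches, here is a hedged sketch (not your attached script) of one way to keep only the strongest matches before findHomography, assuming BRISK descriptors and OpenCV's sample images box.png / box_in_scene.png:

import cv2
import numpy as np

img1 = cv2.imread("box.png", cv2.IMREAD_GRAYSCALE)           # object
img2 = cv2.imread("box_in_scene.png", cv2.IMREAD_GRAYSCALE)  # scene

brisk = cv2.BRISK_create()
kpts1, desc1 = brisk.detectAndCompute(img1, None)
kpts2, desc2 = brisk.detectAndCompute(img2, None)

# Hamming distance for binary descriptors, cross-check to drop one-sided matches
bf = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(bf.match(desc1, desc2), key=lambda m: m.distance)
good = matches[:50]  # keep only the 50 best matches instead of all of them

src_pts = np.float32([kpts1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst_pts = np.float32([kpts2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)

M, mask = cv2.findHomography(src_pts, dst_pts, cv2.RANSAC, 5.0)
print("RANSAC inliers:", int(mask.sum()), "of", len(good))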

6 hours ago, Lion66 said:

if possible, add the ability to change the number of matches ("dmatches" in the example)

I am not sure I understand the request. dmatches has the same values as matches; there cannot be more matches.

 

6 hours ago, Lion66 said:

Is it possible to calculate the coordinates of the region that is drawn at the end of the example, relative to the picture in which we are looking for the object?

I am not sure I understand the question. dst is already relative to the scene picture.
Do you mean calculating the region as a rectangle rather than the 4 corner points (dst)?
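
If the latter is what you mean, here is a speculative sketch, reusing img1 and the homography M from the sketch above, that projects the object corners into the scene and reduces them to an axis-aligned rectangle in scene coordinates:

h, w = img1.shape[:2]  # size of the object image
corners = np.float32([[0, 0], [w, 0], [w, h], [0, h]]).reshape(-1, 1, 2)
dst = cv2.perspectiveTransform(corners, M)               # the 4 projected corner points
x, y, rect_w, rect_h = cv2.boundingRect(np.int32(dst))   # enclosing upright rectangle
print("region in scene:", x, y, rect_w, rect_h)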

160732-opencv-udf.au3

