
Leaderboard

Popular Content

Showing content with the highest reputation on 06/09/2025 in all areas

  1. Working towards the 1.9.0 release was taking too long, so I'm releasing 1.8.4 in the meantime. Notable changes: the ignoreInternalInIncludes setting would also ignore declarations in the current file; it has been fixed so that only internal declarations in included files are ignored. Also, when resolving included files, the same file could be loaded from disk multiple times; this has been fixed as well, which improves performance when opening au3 files.
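    As a rough sketch of what the corrected behaviour means in practice (the file names and the double-underscore naming used here to mark "internal" declarations are assumptions for illustration, not taken from the release notes):

    ; MyInclude.au3 (hypothetical included file)
    Func __MyInclude_Helper() ; internal declaration in an included file: still ignored when ignoreInternalInIncludes is enabled
    EndFunc

    ; MyScript.au3 (hypothetical current file)
    #include "MyInclude.au3"
    Func __Local_Helper() ; internal declaration in the current file: no longer ignored after this fix
    EndFunc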
    1 point
  2. ioa747

    SciTE AI assistant

    I added a new function, _AI_Request(), which asynchronously sends a request to _AI_Call() and waits for the response. I also added a new parameter, $iThink, to enable or disable thinking if the model supports it (0 = none, 1 = yes, not visible, 2 = yes, visible; default is 0), in line with the new ability to enable or disable thinking (a feature since version 0.9.0). Ollama now has the ability to enable or disable thinking. This gives users the flexibility to choose the model's thinking behavior for different applications and use cases. When thinking is enabled, the output separates the model's thinking from the model's output. When thinking is disabled, the model will not think and will output the content directly. Models that support thinking: DeepSeek R1 and Qwen 3; more will be added under thinking models.
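    A minimal usage sketch (the parameter order shown here is an assumption for illustration; check the SciTE AI assistant UDF for the actual signature of _AI_Request):

    ; Hypothetical call - prompt first, $iThink last; the real signature may differ.
    Local $sPrompt = "Summarize what this AutoIt script does."
    Local $sAnswer = _AI_Request($sPrompt, 2) ; $iThink = 2: thinking enabled and shown in the output
    ConsoleWrite($sAnswer & @CRLF)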
    1 point
  3. smbape

    Mediapipe UDF

    After a long hesitation about whether it would be fun to do or not, I finally did it. This UDF provides a way to use mediapipe in AutoIt. The usage is similar to the Python usage of mediapipe.

    Prerequisites
    Download and extract autoit-mediapipe-0.10.24-opencv-4.11.0-com-v0.5.0.7z into a folder
    Download the opencv UDF from here

    Sources
    Here

    Documentation
    A generated documentation for functions is available here (v0.5.0)

    Examples
    More examples can be found here (v0.5.0)
    To run them, please follow these instructions (v0.5.0)

    Face Detection with MediaPipe Tasks

#Region ;**** Directives created by AutoIt3Wrapper_GUI ****
#AutoIt3Wrapper_UseX64=y
#AutoIt3Wrapper_Change2CUI=y
#AutoIt3Wrapper_Au3Check_Parameters=-d -w 1 -w 2 -w 3 -w 4 -w 5 -w 6
#AutoIt3Wrapper_AU3Check_Stop_OnWarning=y
#EndRegion ;**** Directives created by AutoIt3Wrapper_GUI ****

;~ Sources:
;~     https://colab.research.google.com/github/google-ai-edge/mediapipe-samples/blob/88792a956f9996c728b92d19ef7fac99cef8a4fe/examples/face_detector/python/face_detector.ipynb
;~     https://github.com/google-ai-edge/mediapipe-samples/blob/88792a956f9996c728b92d19ef7fac99cef8a4fe/examples/face_detector/python/face_detector.ipynb
;~ Title: Face Detection with MediaPipe Tasks

#include "autoit-mediapipe-com\udf\mediapipe_udf_utils.au3"
#include "autoit-opencv-com\udf\opencv_udf_utils.au3"

_Mediapipe_Open("opencv-4.11.0-windows\opencv\build\x64\vc16\bin\opencv_world4110.dll", "autoit-mediapipe-com\autoit_mediapipe_com-0.10.24-4110.dll")
_OpenCV_Open("opencv-4.11.0-windows\opencv\build\x64\vc16\bin\opencv_world4110.dll", "autoit-opencv-com\autoit_opencv_com4110.dll")
OnAutoItExitRegister("_OnAutoItExit")

; Tell mediapipe where to look its resource files
_Mediapipe_SetResourceDir()

; Where to download data files
Global Const $MEDIAPIPE_SAMPLES_DATA_PATH = @ScriptDir & "\examples\data"

; STEP 1: Import the necessary modules.
Global $download_utils = _Mediapipe_ObjCreate("mediapipe.autoit.solutions.download_utils")
_AssertIsObj($download_utils, "Failed to load mediapipe.autoit.solutions.download_utils")

Global $mp = _Mediapipe_get()
_AssertIsObj($mp, "Failed to load mediapipe")

Global $cv = _OpenCV_get()
_AssertIsObj($cv, "Failed to load opencv")

Global $autoit = _Mediapipe_ObjCreate("mediapipe.tasks.autoit")
_AssertIsObj($autoit, "Failed to load mediapipe.tasks.autoit")

Global $vision = _Mediapipe_ObjCreate("mediapipe.tasks.autoit.vision")
_AssertIsObj($vision, "Failed to load mediapipe.tasks.autoit.vision")

Main()

Func Main()
	Local $_IMAGE_FILE = $MEDIAPIPE_SAMPLES_DATA_PATH & "\brother-sister-girl-family-boy-977170.jpg"
	Local $_IMAGE_URL = "https://i.imgur.com/Vu2Nqwb.jpg"

	Local $_MODEL_FILE = $MEDIAPIPE_SAMPLES_DATA_PATH & "\blaze_face_short_range.tflite"
	Local $_MODEL_URL = "https://storage.googleapis.com/mediapipe-models/face_detector/blaze_face_short_range/float16/1/blaze_face_short_range.tflite"

	Local $url, $file_path
	Local $sample_files[] = [ _
			_Mediapipe_Tuple($_IMAGE_FILE, $_IMAGE_URL), _
			_Mediapipe_Tuple($_MODEL_FILE, $_MODEL_URL) _
			]

	For $config In $sample_files
		$file_path = $config[0]
		$url = $config[1]
		If Not FileExists($file_path) Then
			$download_utils.download($url, $file_path)
		EndIf
	Next

	; Compute the scale to make drawn elements visible when the image is resized for display
	Local $scale = 1 / resize_and_show($cv.imread($_IMAGE_FILE), Default, False)

	; STEP 2: Create an FaceDetector object.
Local $base_options = $autoit.BaseOptions(_Mediapipe_Params("model_asset_path", $_MODEL_FILE)) Local $options = $vision.FaceDetectorOptions(_Mediapipe_Params("base_options", $base_options)) Local $detector = $vision.FaceDetector.create_from_options($options) ; STEP 3: Load the input image. Local $image = $mp.Image.create_from_file($_IMAGE_FILE) ; STEP 4: Detect faces in the input image. Local $detection_result = $detector.detect($image) ; STEP 5: Process the detection result. In this case, visualize it. Local $image_copy = $image.mat_view() Local $annotated_image = visualize($image_copy, $detection_result, $scale) Local $bgr_annotated_image = $cv.cvtColor($annotated_image, $CV_COLOR_RGB2BGR) resize_and_show($bgr_annotated_image, "face_detector") $cv.waitKey() ; STEP 6: Closes the detector explicitly when the detector is not used in a context. $detector.close() EndFunc ;==>Main Func isclose($a, $b) Return Abs($a - $b) <= 1E-6 EndFunc ;==>isclose ; Checks if the float value is between 0 and 1. Func is_valid_normalized_value($value) Return $value >= 0 And $value <= 1 Or isclose(0, $value) Or isclose(1, $value) EndFunc ;==>is_valid_normalized_value #cs Converts normalized value pair to pixel coordinates. #ce Func _normalized_to_pixel_coordinates($normalized_x, $normalized_y, $image_width, $image_height) If Not (is_valid_normalized_value($normalized_x) And is_valid_normalized_value($normalized_y)) Then ; TODO: Draw coordinates even if it's outside of the image bounds. Return Default EndIf Local $x_px = _Min(Floor($normalized_x * $image_width), $image_width - 1) Local $y_px = _Min(Floor($normalized_y * $image_height), $image_height - 1) Return _OpenCV_Point($x_px, $y_px) EndFunc ;==>_normalized_to_pixel_coordinates #cs Draws bounding boxes and keypoints on the input image and return it. Args: image: The input RGB image. detection_result: The list of all "Detection" entities to be visualize. Returns: Image with bounding boxes. 
#ce Func visualize($image, $detection_result, $scale = 1.0) Local $MARGIN = 10 * $scale ; pixels Local $ROW_SIZE = 10 ; pixels Local $FONT_SIZE = $scale Local $FONT_THICKNESS = 2 * $scale Local $TEXT_COLOR = _OpenCV_Scalar(255, 0, 0) ; red Local $bbox_thickness = 3 * $scale Local $keypoint_color = _OpenCV_Scalar(0, 255, 0) Local $keypoint_thickness = 2 * $scale Local $keypoint_radius = 2 * $scale Local $annotated_image = $image.copy() Local $width = $image.width Local $height = $image.height Local $bbox, $start_point, $end_point, $keypoint_px Local $category, $category_name, $probability, $result_text, $text_location For $detection In $detection_result.detections ; Draw bounding_box $bbox = $detection.bounding_box $start_point = _OpenCV_Point($bbox.origin_x, $bbox.origin_y) $end_point = _OpenCV_Point($bbox.origin_x + $bbox.width, $bbox.origin_y + $bbox.height) $cv.rectangle($annotated_image, $start_point, $end_point, $TEXT_COLOR, $bbox_thickness) ; Draw keypoints For $keypoint In $detection.keypoints $keypoint_px = _normalized_to_pixel_coordinates($keypoint.x, $keypoint.y, $width, $height) $cv.circle($annotated_image, $keypoint_px, $keypoint_thickness, $keypoint_color, $keypoint_radius) Next ; Draw label and score $category = $detection.categories(0) $category_name = $category.category_name $probability = Round($category.score, 2) $result_text = $category_name & ' (' & $probability & ')' $text_location = _OpenCV_Point($MARGIN + $bbox.origin_x, $MARGIN + $ROW_SIZE + $bbox.origin_y) $cv.putText($annotated_image, $result_text, $text_location, $CV_FONT_HERSHEY_PLAIN, $FONT_SIZE, $TEXT_COLOR, $FONT_THICKNESS) Next Return $annotated_image EndFunc ;==>visualize Func resize_and_show($image, $title = Default, $show = Default) If $title == Default Then $title = "" If $show == Default Then $show = True Local Const $DESIRED_HEIGHT = 480 Local Const $DESIRED_WIDTH = 480 Local $w = $image.width Local $h = $image.height If $h < $w Then $h = $h / ($w / $DESIRED_WIDTH) $w = $DESIRED_WIDTH Else $w = $w / ($h / $DESIRED_HEIGHT) $h = $DESIRED_HEIGHT EndIf Local $interpolation = ($DESIRED_WIDTH > $image.width Or $DESIRED_HEIGHT > $image.height) ? 
$CV_INTER_CUBIC : $CV_INTER_AREA If $show Then Local $img = $cv.resize($image, _OpenCV_Size($w, $h), _OpenCV_Params("interpolation", $interpolation)) $cv.imshow($title, $img.convertToShow()) EndIf Return $w / $image.width EndFunc ;==>resize_and_show Func _OnAutoItExit() _OpenCV_Close() _Mediapipe_Close() EndFunc ;==>_OnAutoItExit Func _AssertIsObj($vVal, $sMsg) If Not IsObj($vVal) Then ConsoleWriteError($sMsg & @CRLF) Exit 0x7FFFFFFF EndIf EndFunc ;==>_AssertIsObj Face Landmarks Detection with MediaPipe Tasks #Region ;**** Directives created by AutoIt3Wrapper_GUI **** #AutoIt3Wrapper_UseX64=y #AutoIt3Wrapper_Change2CUI=y #AutoIt3Wrapper_Au3Check_Parameters=-d -w 1 -w 2 -w 3 -w 4 -w 5 -w 6 #AutoIt3Wrapper_AU3Check_Stop_OnWarning=y #EndRegion ;**** Directives created by AutoIt3Wrapper_GUI **** ;~ Sources: ;~ https://colab.research.google.com/github/google-ai-edge/mediapipe-samples/blob/88792a956f9996c728b92d19ef7fac99cef8a4fe/examples/face_landmarker/python/[MediaPipe_Python_Tasks]_Face_Landmarker.ipynb ;~ https://github.com/google-ai-edge/mediapipe-samples/blob/88792a956f9996c728b92d19ef7fac99cef8a4fe/examples/face_landmarker/python/[MediaPipe_Python_Tasks]_Face_Landmarker.ipynb ;~ Title: Face Landmarks Detection with MediaPipe Tasks #include "autoit-mediapipe-com\udf\mediapipe_udf_utils.au3" #include "autoit-opencv-com\udf\opencv_udf_utils.au3" _Mediapipe_Open("opencv-4.11.0-windows\opencv\build\x64\vc16\bin\opencv_world4110.dll", "autoit-mediapipe-com\autoit_mediapipe_com-0.10.14-4110.dll") _OpenCV_Open("opencv-4.11.0-windows\opencv\build\x64\vc16\bin\opencv_world4110.dll", "autoit-opencv-com\autoit_opencv_com4110.dll") OnAutoItExitRegister("_OnAutoItExit") ; Tell mediapipe where to look its resource files _Mediapipe_SetResourceDir() ; Where to download data files Global Const $MEDIAPIPE_SAMPLES_DATA_PATH = @ScriptDir & "\examples\data" ; STEP 1: Import the necessary modules. Global $download_utils = _Mediapipe_ObjCreate("mediapipe.autoit.solutions.download_utils") _AssertIsObj($download_utils, "Failed to load mediapipe.autoit.solutions.download_utils") Global $solutions = _Mediapipe_ObjCreate("mediapipe.solutions") _AssertIsObj($solutions, "Failed to load mediapipe.solutions") Global $landmark_pb2 = _Mediapipe_ObjCreate("mediapipe.framework.formats.landmark_pb2") _AssertIsObj($landmark_pb2, "Failed to load mediapipe.framework.formats.landmark_pb2") Global $autoit = _Mediapipe_ObjCreate("mediapipe.tasks.autoit") _AssertIsObj($autoit, "Failed to load mediapipe.tasks.autoit") Global $vision = _Mediapipe_ObjCreate("mediapipe.tasks.autoit.vision") _AssertIsObj($vision, "Failed to load mediapipe.tasks.autoit.vision") Global $mp = _Mediapipe_get() _AssertIsObj($mp, "Failed to load mediapipe") Global $cv = _OpenCV_get() _AssertIsObj($cv, "Failed to load opencv") Main() Func Main() Local $_IMAGE_FILE = $MEDIAPIPE_SAMPLES_DATA_PATH & "\business-person.png" Local $_IMAGE_URL = "https://storage.googleapis.com/mediapipe-assets/business-person.png" Local $_MODEL_FILE = $MEDIAPIPE_SAMPLES_DATA_PATH & "\face_landmarker.task" Local $_MODEL_URL = "https://storage.googleapis.com/mediapipe-models/face_landmarker/face_landmarker/float16/1/face_landmarker.task" Local $url, $file_path Local $sample_files[] = [ _ _Mediapipe_Tuple($_IMAGE_FILE, $_IMAGE_URL), _ _Mediapipe_Tuple($_MODEL_FILE, $_MODEL_URL) _ ] For $config In $sample_files $file_path = $config[0] $url = $config[1] If Not FileExists($file_path) Then $download_utils.download($url, $file_path) EndIf Next ; STEP 2: Create an FaceLandmarker object. 
Local $base_options = $autoit.BaseOptions(_Mediapipe_Params("model_asset_path", $_MODEL_FILE)) Local $options = $vision.FaceLandmarkerOptions(_Mediapipe_Params("base_options", $base_options, _ "output_face_blendshapes", True, _ "output_facial_transformation_matrixes", True, _ "num_faces", 1)) Local $detector = $vision.FaceLandmarker.create_from_options($options) ; STEP 3: Load the input image. Local $image = $mp.Image.create_from_file($_IMAGE_FILE) ; STEP 4: Detect hand landmarks from the input image. Local $detection_result = $detector.detect($image) ; STEP 5: Process the classification result. In this case, visualize it. Local $annotated_image = draw_landmarks_on_image($image.mat_view(), $detection_result) resize_and_show($cv.cvtColor($annotated_image, $CV_COLOR_RGB2BGR)) $cv.waitKey() ; STEP 6: Closes the face detector explicitly when the face detector is not used in a context. $detector.close() EndFunc ;==>Main Func draw_landmarks_on_image($rgb_image, $detection_result) ; Compute the scale to make drawn elements visible when the image is resized for display Local $scale = 1 / resize_and_show($rgb_image, Default, False) Local $face_landmarks_list = $detection_result.face_landmarks Local $annotated_image = $rgb_image.copy() Local $face_landmarks_proto ; Loop through the detected faces to visualize. For $face_landmarks In $face_landmarks_list ; Draw the face landmarks. $face_landmarks_proto = $landmark_pb2.NormalizedLandmarkList() For $landmark In $face_landmarks $face_landmarks_proto.landmark.append($landmark_pb2.NormalizedLandmark(_Mediapipe_Params("x", $landmark.x, "y", $landmark.y, "z", $landmark.z))) Next $solutions.drawing_utils.draw_landmarks(_Mediapipe_Params( _ "image", $annotated_image, _ "landmark_list", $face_landmarks_proto, _ "connections", $solutions.face_mesh.FACEMESH_TESSELATION, _ "landmark_drawing_spec", Null, _ "connection_drawing_spec", $solutions.drawing_styles.get_default_face_mesh_tesselation_style($scale))) $solutions.drawing_utils.draw_landmarks(_Mediapipe_Params( _ "image", $annotated_image, _ "landmark_list", $face_landmarks_proto, _ "connections", $solutions.face_mesh.FACEMESH_CONTOURS, _ "landmark_drawing_spec", Null, _ "connection_drawing_spec", $solutions.drawing_styles.get_default_face_mesh_contours_style(0, $scale))) $solutions.drawing_utils.draw_landmarks(_Mediapipe_Params( _ "image", $annotated_image, _ "landmark_list", $face_landmarks_proto, _ "connections", $solutions.face_mesh.FACEMESH_IRISES, _ "landmark_drawing_spec", Null, _ "connection_drawing_spec", $solutions.drawing_styles.get_default_face_mesh_iris_connections_style($scale))) Next Return $annotated_image EndFunc ;==>draw_landmarks_on_image Func resize_and_show($image, $title = Default, $show = Default) If $title == Default Then $title = "" If $show == Default Then $show = True Local Const $DESIRED_HEIGHT = 480 Local Const $DESIRED_WIDTH = 480 Local $w = $image.width Local $h = $image.height If $h < $w Then $h = $h / ($w / $DESIRED_WIDTH) $w = $DESIRED_WIDTH Else $w = $w / ($h / $DESIRED_HEIGHT) $h = $DESIRED_HEIGHT EndIf Local $interpolation = ($DESIRED_WIDTH > $image.width Or $DESIRED_HEIGHT > $image.height) ? 
$CV_INTER_CUBIC : $CV_INTER_AREA If $show Then Local $img = $cv.resize($image, _OpenCV_Size($w, $h), _OpenCV_Params("interpolation", $interpolation)) $cv.imshow($title, $img.convertToShow()) EndIf Return $w / $image.width EndFunc ;==>resize_and_show Func _OnAutoItExit() _OpenCV_Close() _Mediapipe_Close() EndFunc ;==>_OnAutoItExit Func _AssertIsObj($vVal, $sMsg) If Not IsObj($vVal) Then ConsoleWriteError($sMsg & @CRLF) Exit 0x7FFFFFFF EndIf EndFunc ;==>_AssertIsObj Face Stylizer #Region ;**** Directives created by AutoIt3Wrapper_GUI **** #AutoIt3Wrapper_UseX64=y #AutoIt3Wrapper_Change2CUI=y #AutoIt3Wrapper_Au3Check_Parameters=-d -w 1 -w 2 -w 3 -w 4 -w 5 -w 6 #AutoIt3Wrapper_AU3Check_Stop_OnWarning=y #EndRegion ;**** Directives created by AutoIt3Wrapper_GUI **** ;~ Sources: ;~ https://colab.research.google.com/github/google-ai-edge/mediapipe-samples/blob/88792a956f9996c728b92d19ef7fac99cef8a4fe/examples/face_stylizer/python/face_stylizer.ipynb ;~ https://github.com/google-ai-edge/mediapipe-samples/blob/88792a956f9996c728b92d19ef7fac99cef8a4fe/examples/face_stylizer/python/face_stylizer.ipynb ;~ Title: Face Stylizer #include "autoit-mediapipe-com\udf\mediapipe_udf_utils.au3" #include "autoit-opencv-com\udf\opencv_udf_utils.au3" _Mediapipe_Open("opencv-4.11.0-windows\opencv\build\x64\vc16\bin\opencv_world4110.dll", "autoit-mediapipe-com\autoit_mediapipe_com-0.10.14-4110.dll") _OpenCV_Open("opencv-4.11.0-windows\opencv\build\x64\vc16\bin\opencv_world4110.dll", "autoit-opencv-com\autoit_opencv_com4110.dll") OnAutoItExitRegister("_OnAutoItExit") ; Tell mediapipe where to look its resource files _Mediapipe_SetResourceDir() ; Where to download data files Global Const $MEDIAPIPE_SAMPLES_DATA_PATH = @ScriptDir & "\examples\data" ; STEP 1: Import the necessary modules. Global $download_utils = _Mediapipe_ObjCreate("mediapipe.autoit.solutions.download_utils") _AssertIsObj($download_utils, "Failed to load mediapipe.autoit.solutions.download_utils") Global $autoit = _Mediapipe_ObjCreate("mediapipe.tasks.autoit") _AssertIsObj($autoit, "Failed to load mediapipe.tasks.autoit") Global $vision = _Mediapipe_ObjCreate("mediapipe.tasks.autoit.vision") _AssertIsObj($vision, "Failed to load mediapipe.tasks.autoit.vision") Global $mp = _Mediapipe_get() _AssertIsObj($mp, "Failed to load mediapipe") Global $cv = _OpenCV_get() _AssertIsObj($cv, "Failed to load opencv") Main() Func Main() Local $_IMAGE_FILE = $MEDIAPIPE_SAMPLES_DATA_PATH & "\business-person.png" Local $_IMAGE_URL = "https://storage.googleapis.com/mediapipe-assets/business-person.png" Local $_MODEL_FILE = $MEDIAPIPE_SAMPLES_DATA_PATH & "\face_stylizer_color_sketch.task" Local $_MODEL_URL = "https://storage.googleapis.com/mediapipe-models/face_stylizer/blaze_face_stylizer/float32/latest/face_stylizer_color_sketch.task" Local $url, $file_path Local $sample_files[] = [ _ _Mediapipe_Tuple($_IMAGE_FILE, $_IMAGE_URL), _ _Mediapipe_Tuple($_MODEL_FILE, $_MODEL_URL) _ ] For $config In $sample_files $file_path = $config[0] $url = $config[1] If Not FileExists($file_path) Then $download_utils.download($url, $file_path) EndIf Next ; Preview the images. resize_and_show($cv.imread($_IMAGE_FILE), "face_stylizer: preview") ; STEP 2: Create an FaceLandmarker object. Local $base_options = $autoit.BaseOptions(_Mediapipe_Params("model_asset_path", $_MODEL_FILE)) Local $options = $vision.FaceStylizerOptions(_Mediapipe_Params("base_options", $base_options)) Local $stylizer = $vision.FaceStylizer.create_from_options($options) ; STEP 3: Load the input image. 
Local $image = $mp.Image.create_from_file($_IMAGE_FILE) ; STEP 4: Retrieve the stylized image Local $stylized_image = $stylizer.stylize($image) ; STEP 5: Show the stylized image Local $rgb_stylized_image = $cv.cvtColor($stylized_image.mat_view(), $CV_COLOR_RGB2BGR) resize_and_show($rgb_stylized_image, "face_stylizer: stylized") $cv.waitKey() ; STEP 6: Closes the stylizer explicitly when the stylizer is not used in a context. $stylizer.close() EndFunc ;==>Main Func resize_and_show($image, $title = Default, $show = Default) If $title == Default Then $title = "" If $show == Default Then $show = True Local Const $DESIRED_HEIGHT = 480 Local Const $DESIRED_WIDTH = 480 Local $w = $image.width Local $h = $image.height If $h < $w Then $h = $h / ($w / $DESIRED_WIDTH) $w = $DESIRED_WIDTH Else $w = $w / ($h / $DESIRED_HEIGHT) $h = $DESIRED_HEIGHT EndIf Local $interpolation = ($DESIRED_WIDTH > $image.width Or $DESIRED_HEIGHT > $image.height) ? $CV_INTER_CUBIC : $CV_INTER_AREA If $show Then Local $img = $cv.resize($image, _OpenCV_Size($w, $h), _OpenCV_Params("interpolation", $interpolation)) $cv.imshow($title, $img.convertToShow()) EndIf Return $w / $image.width EndFunc ;==>resize_and_show Func _OnAutoItExit() _OpenCV_Close() _Mediapipe_Close() EndFunc ;==>_OnAutoItExit Func _AssertIsObj($vVal, $sMsg) If Not IsObj($vVal) Then ConsoleWriteError($sMsg & @CRLF) Exit 0x7FFFFFFF EndIf EndFunc ;==>_AssertIsObj Gesture Recognizer with MediaPipe Tasks #Region ;**** Directives created by AutoIt3Wrapper_GUI **** #AutoIt3Wrapper_UseX64=y #AutoIt3Wrapper_Change2CUI=y #AutoIt3Wrapper_Au3Check_Parameters=-d -w 1 -w 2 -w 3 -w 4 -w 5 -w 6 #AutoIt3Wrapper_AU3Check_Stop_OnWarning=y #EndRegion ;**** Directives created by AutoIt3Wrapper_GUI **** ;~ Sources: ;~ https://colab.research.google.com/github/google-ai-edge/mediapipe-samples/blob/88792a956f9996c728b92d19ef7fac99cef8a4fe/examples/gesture_recognizer/python/gesture_recognizer.ipynb ;~ https://github.com/google-ai-edge/mediapipe-samples/blob/88792a956f9996c728b92d19ef7fac99cef8a4fe/examples/gesture_recognizer/python/gesture_recognizer.ipynb ;~ Title: Gesture Recognizer with MediaPipe Tasks #include "autoit-mediapipe-com\udf\mediapipe_udf_utils.au3" #include "autoit-opencv-com\udf\opencv_udf_utils.au3" _Mediapipe_Open("opencv-4.11.0-windows\opencv\build\x64\vc16\bin\opencv_world4110.dll", "autoit-mediapipe-com\autoit_mediapipe_com-0.10.14-4110.dll") _OpenCV_Open("opencv-4.11.0-windows\opencv\build\x64\vc16\bin\opencv_world4110.dll", "autoit-opencv-com\autoit_opencv_com4110.dll") OnAutoItExitRegister("_OnAutoItExit") ; Tell mediapipe where to look its resource files _Mediapipe_SetResourceDir() ; Where to download data files Global Const $MEDIAPIPE_SAMPLES_DATA_PATH = @ScriptDir & "\examples\data" Global $download_utils = _Mediapipe_ObjCreate("mediapipe.autoit.solutions.download_utils") _AssertIsObj($download_utils, "Failed to load mediapipe.autoit.solutions.download_utils") ; STEP 1: Import the necessary modules. 
Global $mp = _Mediapipe_get() _AssertIsObj($mp, "Failed to load mediapipe") Global $cv = _OpenCV_get() _AssertIsObj($cv, "Failed to load opencv") Global $landmark_pb2 = _Mediapipe_ObjCreate("mediapipe.framework.formats.landmark_pb2") _AssertIsObj($landmark_pb2, "Failed to load mediapipe.framework.formats.landmark_pb2") Global $autoit = _Mediapipe_ObjCreate("mediapipe.tasks.autoit") _AssertIsObj($autoit, "Failed to load mediapipe.tasks.autoit") Global $vision = _Mediapipe_ObjCreate("mediapipe.tasks.autoit.vision") _AssertIsObj($vision, "Failed to load mediapipe.tasks.autoit.vision") Global $mp_hands = $mp.solutions.hands Global $mp_drawing = $mp.solutions.drawing_utils Global $mp_drawing_styles = $mp.solutions.drawing_styles Main() Func Main() Local $IMAGE_FILENAMES[] = ['thumbs_down.jpg', 'victory.jpg', 'thumbs_up.jpg', 'pointing_up.jpg'] Local $_MODEL_FILE = $MEDIAPIPE_SAMPLES_DATA_PATH & "\gesture_recognizer.task" Local $_MODEL_URL = "https://storage.googleapis.com/mediapipe-models/gesture_recognizer/gesture_recognizer/float16/1/gesture_recognizer.task" Local $sample_files[UBound($IMAGE_FILENAMES) + 1] $sample_files[0] = _Mediapipe_Tuple($_MODEL_FILE, $_MODEL_URL) Local $url, $file_path, $name For $i = 0 To UBound($IMAGE_FILENAMES) - 1 $name = $IMAGE_FILENAMES[$i] $file_path = $MEDIAPIPE_SAMPLES_DATA_PATH & "\" & $name $url = "https://storage.googleapis.com/mediapipe-tasks/gesture_recognizer/" & $name $sample_files[$i + 1] = _Mediapipe_Tuple($file_path, $url) Next For $config In $sample_files $file_path = $config[0] $url = $config[1] If Not FileExists($file_path) Then $download_utils.download($url, $file_path) EndIf Next ; STEP 2: Create an GestureRecognizer object. Local $base_options = $autoit.BaseOptions(_Mediapipe_Params("model_asset_path", $_MODEL_FILE)) Local $options = $vision.GestureRecognizerOptions(_Mediapipe_Params("base_options", $base_options)) Local $recognizer = $vision.GestureRecognizer.create_from_options($options) Local $image, $recognition_result, $top_gesture, $hands_landmarks For $image_file_name In $IMAGE_FILENAMES ; STEP 3: Load the input image. $image = $mp.Image.create_from_file($MEDIAPIPE_SAMPLES_DATA_PATH & "\" & $image_file_name) ; STEP 4: Recognize gestures in the input image. $recognition_result = $recognizer.recognize($image) ; STEP 5: Process the result. In this case, visualize it. $top_gesture = $recognition_result.gestures(0) (0) $hands_landmarks = $recognition_result.hand_landmarks display_image_with_gestures_and_hand_landmarks($image, $top_gesture, $hands_landmarks) Next $cv.waitKey() ; STEP 6: Closes the gesture recognizer explicitly when the gesture recognizer is not used in a context. $recognizer.close() EndFunc ;==>Main #cs Displays an image with the gesture category and its score along with the hand landmarks. #ce Func display_image_with_gestures_and_hand_landmarks($image, $gesture, $hands_landmarks) ; Display gestures and hand landmarks. 
Local $annotated_image = $cv.cvtColor($image.mat_view(), $CV_COLOR_RGB2BGR) Local $title = StringFormat("%s (%.2f)", $gesture.category_name, $gesture.score) ; Compute the scale to make drawn elements visible when the image is resized for display Local $scale = 1 / resize_and_show($annotated_image, Default, False) Local $hand_landmarks_proto For $hand_landmarks In $hands_landmarks $hand_landmarks_proto = $landmark_pb2.NormalizedLandmarkList() For $landmark In $hand_landmarks $hand_landmarks_proto.landmark.append($landmark_pb2.NormalizedLandmark(_Mediapipe_Params("x", $landmark.x, "y", $landmark.y, "z", $landmark.z))) Next $mp_drawing.draw_landmarks( _ $annotated_image, _ $hand_landmarks_proto, _ $mp_hands.HAND_CONNECTIONS, _ $mp_drawing_styles.get_default_hand_landmarks_style($scale), _ $mp_drawing_styles.get_default_hand_connections_style($scale)) Next resize_and_show($annotated_image, $title) EndFunc ;==>display_image_with_gestures_and_hand_landmarks Func resize_and_show($image, $title = Default, $show = Default) If $title == Default Then $title = "" If $show == Default Then $show = True Local Const $DESIRED_HEIGHT = 480 Local Const $DESIRED_WIDTH = 480 Local $w = $image.width Local $h = $image.height If $h < $w Then $h = $h / ($w / $DESIRED_WIDTH) $w = $DESIRED_WIDTH Else $w = $w / ($h / $DESIRED_HEIGHT) $h = $DESIRED_HEIGHT EndIf Local $interpolation = ($DESIRED_WIDTH > $image.width Or $DESIRED_HEIGHT > $image.height) ? $CV_INTER_CUBIC : $CV_INTER_AREA If $show Then Local $img = $cv.resize($image, _OpenCV_Size($w, $h), _OpenCV_Params("interpolation", $interpolation)) $cv.imshow($title, $img.convertToShow()) EndIf Return $w / $image.width EndFunc ;==>resize_and_show Func _OnAutoItExit() _OpenCV_Close() _Mediapipe_Close() EndFunc ;==>_OnAutoItExit Func _AssertIsObj($vVal, $sMsg) If Not IsObj($vVal) Then ConsoleWriteError($sMsg & @CRLF) Exit 0x7FFFFFFF EndIf EndFunc ;==>_AssertIsObj Hand Landmarks Detection with MediaPipe Tasks #Region ;**** Directives created by AutoIt3Wrapper_GUI **** #AutoIt3Wrapper_UseX64=y #AutoIt3Wrapper_Change2CUI=y #AutoIt3Wrapper_Au3Check_Parameters=-d -w 1 -w 2 -w 3 -w 4 -w 5 -w 6 #AutoIt3Wrapper_AU3Check_Stop_OnWarning=y #EndRegion ;**** Directives created by AutoIt3Wrapper_GUI **** ;~ Sources: ;~ https://colab.research.google.com/github/google-ai-edge/mediapipe-samples/blob/88792a956f9996c728b92d19ef7fac99cef8a4fe/examples/hand_landmarker/python/hand_landmarker.ipynb ;~ https://github.com/google-ai-edge/mediapipe-samples/blob/88792a956f9996c728b92d19ef7fac99cef8a4fe/examples/hand_landmarker/python/hand_landmarker.ipynb ;~ Title: Hand Landmarks Detection with MediaPipe Tasks #include "autoit-mediapipe-com\udf\mediapipe_udf_utils.au3" #include "autoit-opencv-com\udf\opencv_udf_utils.au3" _Mediapipe_Open("opencv-4.11.0-windows\opencv\build\x64\vc16\bin\opencv_world4110.dll", "autoit-mediapipe-com\autoit_mediapipe_com-0.10.14-4110.dll") _OpenCV_Open("opencv-4.11.0-windows\opencv\build\x64\vc16\bin\opencv_world4110.dll", "autoit-opencv-com\autoit_opencv_com4110.dll") OnAutoItExitRegister("_OnAutoItExit") ; Tell mediapipe where to look its resource files _Mediapipe_SetResourceDir() ; Where to download data files Global Const $MEDIAPIPE_SAMPLES_DATA_PATH = @ScriptDir & "\examples\data" Global $download_utils = _Mediapipe_ObjCreate("mediapipe.autoit.solutions.download_utils") _AssertIsObj($download_utils, "Failed to load mediapipe.autoit.solutions.download_utils") ; STEP 1: Import the necessary modules. 
Global $mp = _Mediapipe_get() _AssertIsObj($mp, "Failed to load mediapipe") Global $cv = _OpenCV_get() _AssertIsObj($cv, "Failed to load opencv") Global $solutions = _Mediapipe_ObjCreate("mediapipe.solutions") _AssertIsObj($solutions, "Failed to load mediapipe.solutions") Global $landmark_pb2 = _Mediapipe_ObjCreate("mediapipe.framework.formats.landmark_pb2") _AssertIsObj($landmark_pb2, "Failed to load mediapipe.framework.formats.landmark_pb2") Global $autoit = _Mediapipe_ObjCreate("mediapipe.tasks.autoit") _AssertIsObj($autoit, "Failed to load mediapipe.tasks.autoit") Global $vision = _Mediapipe_ObjCreate("mediapipe.tasks.autoit.vision") _AssertIsObj($vision, "Failed to load mediapipe.tasks.autoit.vision") Global $mp_hands = $mp.solutions.hands Global $mp_drawing = $mp.solutions.drawing_utils Global $mp_drawing_styles = $mp.solutions.drawing_styles Main() Func Main() Local $_IMAGE_FILE = $MEDIAPIPE_SAMPLES_DATA_PATH & "\woman_hands.jpg" Local $_IMAGE_URL = "https://storage.googleapis.com/mediapipe-tasks/hand_landmarker/woman_hands.jpg" Local $_MODEL_FILE = $MEDIAPIPE_SAMPLES_DATA_PATH & "\hand_landmarker.task" Local $_MODEL_URL = "https://storage.googleapis.com/mediapipe-models/hand_landmarker/hand_landmarker/float16/1/hand_landmarker.task" Local $url, $file_path Local $sample_files[] = [ _ _Mediapipe_Tuple($_IMAGE_FILE, $_IMAGE_URL), _ _Mediapipe_Tuple($_MODEL_FILE, $_MODEL_URL) _ ] For $config In $sample_files $file_path = $config[0] $url = $config[1] If Not FileExists($file_path) Then $download_utils.download($url, $file_path) EndIf Next ; STEP 2: Create an ImageClassifier object. Local $base_options = $autoit.BaseOptions(_Mediapipe_Params("model_asset_path", $_MODEL_FILE)) Local $options = $vision.HandLandmarkerOptions(_Mediapipe_Params("base_options", $base_options, _ "num_hands", 2)) Local $detector = $vision.HandLandmarker.create_from_options($options) ; STEP 3: Load the input image. Local $image = $mp.Image.create_from_file($_IMAGE_FILE) ; STEP 4: Detect hand landmarks from the input image. Local $detection_result = $detector.detect($image) ; STEP 5: Process the classification result. In this case, visualize it. Local $annotated_image = draw_landmarks_on_image($image.mat_view(), $detection_result) resize_and_show($cv.cvtColor($annotated_image, $CV_COLOR_RGB2BGR)) $cv.waitKey() ; STEP 6: Closes the hand detector explicitly when the hand detector is not used in a context. $detector.close() EndFunc ;==>Main Func draw_landmarks_on_image($rgb_image, $detection_result) ; Compute the scale to make drawn elements visible when the image is resized for display Local $scale = 1 / resize_and_show($rgb_image, Default, False) Local $MARGIN = 10 * $scale ; pixels Local $FONT_SIZE = $scale Local $FONT_THICKNESS = 2 * $scale Local $HANDEDNESS_TEXT_COLOR = _OpenCV_Scalar(88, 205, 54) ; vibrant green Local $hand_landmarks_list = $detection_result.hand_landmarks Local $handedness_list = $detection_result.handedness Local $annotated_image = $rgb_image.copy() Local $width = $annotated_image.width Local $height = $annotated_image.height Local $hand_landmarks, $handedness, $hand_landmarks_proto Local $min_x, $min_y, $text_x, $text_y ; Loop through the detected hands to visualize. For $idx = 0 To $hand_landmarks_list.size() - 1 $hand_landmarks = $hand_landmarks_list($idx) $handedness = $handedness_list($idx) $min_x = 1 $min_y = 1 ; Draw the hand landmarks. 
$hand_landmarks_proto = $landmark_pb2.NormalizedLandmarkList() For $landmark In $hand_landmarks $hand_landmarks_proto.landmark.append($landmark_pb2.NormalizedLandmark(_Mediapipe_Params("x", $landmark.x, "y", $landmark.y, "z", $landmark.z))) If $landmark.x < $min_x Then $min_x = $landmark.x If $landmark.y < $min_y Then $min_y = $landmark.y Next $solutions.drawing_utils.draw_landmarks( _ $annotated_image, _ $hand_landmarks_proto, _ $solutions.hands.HAND_CONNECTIONS, _ $solutions.drawing_styles.get_default_hand_landmarks_style($scale), _ $solutions.drawing_styles.get_default_hand_connections_style($scale)) ; Get the top left corner of the detected hand's bounding box. $text_x = $min_x * $width $text_y = $min_y * $height - $MARGIN ; Draw handedness (left or right hand) on the image. $cv.putText($annotated_image, $handedness(0).category_name, _ _OpenCV_Point($text_x, $text_y), $CV_FONT_HERSHEY_DUPLEX, _ $FONT_SIZE, $HANDEDNESS_TEXT_COLOR, $FONT_THICKNESS, $CV_LINE_AA) Next Return $annotated_image EndFunc ;==>draw_landmarks_on_image Func resize_and_show($image, $title = Default, $show = Default) If $title == Default Then $title = "image" If $show == Default Then $show = True Local Const $DESIRED_HEIGHT = 480 Local Const $DESIRED_WIDTH = 480 Local $w = $image.width Local $h = $image.height If $h < $w Then $h = $h / ($w / $DESIRED_WIDTH) $w = $DESIRED_WIDTH Else $w = $w / ($h / $DESIRED_HEIGHT) $h = $DESIRED_HEIGHT EndIf Local $interpolation = ($DESIRED_WIDTH > $image.width Or $DESIRED_HEIGHT > $image.height) ? $CV_INTER_CUBIC : $CV_INTER_AREA If $show Then Local $img = $cv.resize($image, _OpenCV_Size($w, $h), _OpenCV_Params("interpolation", $interpolation)) $cv.imshow($title, $img.convertToShow()) EndIf Return $w / $image.width EndFunc ;==>resize_and_show Func _OnAutoItExit() _OpenCV_Close() _Mediapipe_Close() EndFunc ;==>_OnAutoItExit Func _AssertIsObj($vVal, $sMsg) If Not IsObj($vVal) Then ConsoleWriteError($sMsg & @CRLF) Exit 0x7FFFFFFF EndIf EndFunc ;==>_AssertIsObj Image Classifier with MediaPipe Tasks #Region ;**** Directives created by AutoIt3Wrapper_GUI **** #AutoIt3Wrapper_UseX64=y #AutoIt3Wrapper_Change2CUI=y #AutoIt3Wrapper_Au3Check_Parameters=-d -w 1 -w 2 -w 3 -w 4 -w 5 -w 6 #AutoIt3Wrapper_AU3Check_Stop_OnWarning=y #EndRegion ;**** Directives created by AutoIt3Wrapper_GUI **** ;~ Sources: ;~ https://colab.research.google.com/github/google-ai-edge/mediapipe-samples/blob/88792a956f9996c728b92d19ef7fac99cef8a4fe/examples/image_classification/python/image_classifier.ipynb ;~ https://github.com/google-ai-edge/mediapipe-samples/blob/88792a956f9996c728b92d19ef7fac99cef8a4fe/examples/image_classification/python/image_classifier.ipynb ;~ Title: Image Classifier with MediaPipe Tasks #include "autoit-mediapipe-com\udf\mediapipe_udf_utils.au3" #include "autoit-opencv-com\udf\opencv_udf_utils.au3" _Mediapipe_Open("opencv-4.11.0-windows\opencv\build\x64\vc16\bin\opencv_world4110.dll", "autoit-mediapipe-com\autoit_mediapipe_com-0.10.14-4110.dll") _OpenCV_Open("opencv-4.11.0-windows\opencv\build\x64\vc16\bin\opencv_world4110.dll", "autoit-opencv-com\autoit_opencv_com4110.dll") OnAutoItExitRegister("_OnAutoItExit") ; Tell mediapipe where to look its resource files _Mediapipe_SetResourceDir() ; Where to download data files Global Const $MEDIAPIPE_SAMPLES_DATA_PATH = @ScriptDir & "\examples\data" Global $download_utils = _Mediapipe_ObjCreate("mediapipe.autoit.solutions.download_utils") _AssertIsObj($download_utils, "Failed to load mediapipe.autoit.solutions.download_utils") ; STEP 1: Import the 
necessary modules. Global $mp = _Mediapipe_get() _AssertIsObj($mp, "Failed to load mediapipe") Global $cv = _OpenCV_get() _AssertIsObj($cv, "Failed to load opencv") Global $autoit = _Mediapipe_ObjCreate("mediapipe.tasks.autoit") _AssertIsObj($autoit, "Failed to load mediapipe.tasks.autoit") Global $vision = _Mediapipe_ObjCreate("mediapipe.tasks.autoit.vision") _AssertIsObj($vision, "Failed to load mediapipe.tasks.autoit.vision") Main() Func Main() Local $IMAGE_FILENAMES[] = ['burger.jpg', 'cat.jpg'] Local $url, $file_path For $name In $IMAGE_FILENAMES $file_path = $MEDIAPIPE_SAMPLES_DATA_PATH & "\" & $name $url = "https://storage.googleapis.com/mediapipe-tasks/image_classifier/" & $name If Not FileExists($file_path) Then $download_utils.download($url, $file_path) EndIf Next Local $_MODEL_FILE = $MEDIAPIPE_SAMPLES_DATA_PATH & "\efficientnet_lite0.tflite" If Not FileExists($_MODEL_FILE) Then $download_utils.download("https://storage.googleapis.com/mediapipe-models/image_classifier/efficientnet_lite0/float32/1/efficientnet_lite0.tflite", $_MODEL_FILE) EndIf ; STEP 2: Create an ImageClassifier object. Local $base_options = $autoit.BaseOptions(_Mediapipe_Params("model_asset_path", $_MODEL_FILE)) Local $options = $vision.ImageClassifierOptions(_Mediapipe_Params("base_options", $base_options, "max_results", 4)) Local $classifier = $vision.ImageClassifier.create_from_options($options) Local $image, $classification_result, $top_category, $title For $image_name In $IMAGE_FILENAMES ; STEP 3: Load the input image. $image = $mp.Image.create_from_file($MEDIAPIPE_SAMPLES_DATA_PATH & "\" & $image_name) ; STEP 4: Classify the input image. $classification_result = $classifier.classify($image) ; STEP 5: Process the classification result. In this case, visualize it. $top_category = $classification_result.classifications(0).categories(0) $title = StringFormat("%s (%.2f)", $top_category.category_name, $top_category.score) resize_and_show($cv.cvtColor($image.mat_view(), $CV_COLOR_RGB2BGR), $title) Next $cv.waitKey() ; STEP 5: Closes the classifier explicitly when the classifier is not used in a context. $classifier.close() EndFunc ;==>Main Func resize_and_show($image, $title = Default, $show = Default) If $title == Default Then $title = "" If $show == Default Then $show = True Local Const $DESIRED_HEIGHT = 480 Local Const $DESIRED_WIDTH = 480 Local $w = $image.width Local $h = $image.height If $h < $w Then $h = $h / ($w / $DESIRED_WIDTH) $w = $DESIRED_WIDTH Else $w = $w / ($h / $DESIRED_HEIGHT) $h = $DESIRED_HEIGHT EndIf Local $interpolation = ($DESIRED_WIDTH > $image.width Or $DESIRED_HEIGHT > $image.height) ? 
$CV_INTER_CUBIC : $CV_INTER_AREA If $show Then Local $img = $cv.resize($image, _OpenCV_Size($w, $h), _OpenCV_Params("interpolation", $interpolation)) $cv.imshow($title, $img.convertToShow()) EndIf Return $w / $image.width EndFunc ;==>resize_and_show Func _OnAutoItExit() _OpenCV_Close() _Mediapipe_Close() EndFunc ;==>_OnAutoItExit Func _AssertIsObj($vVal, $sMsg) If Not IsObj($vVal) Then ConsoleWriteError($sMsg & @CRLF) Exit 0x7FFFFFFF EndIf EndFunc ;==>_AssertIsObj Image Embedding with MediaPipe Tasks #Region ;**** Directives created by AutoIt3Wrapper_GUI **** #AutoIt3Wrapper_UseX64=y #AutoIt3Wrapper_Change2CUI=y #AutoIt3Wrapper_Au3Check_Parameters=-d -w 1 -w 2 -w 3 -w 4 -w 5 -w 6 #AutoIt3Wrapper_AU3Check_Stop_OnWarning=y #EndRegion ;**** Directives created by AutoIt3Wrapper_GUI **** ;~ Sources: ;~ https://colab.research.google.com/github/google-ai-edge/mediapipe-samples/blob/88792a956f9996c728b92d19ef7fac99cef8a4fe/examples/image_embedder/python/image_embedder.ipynb ;~ https://github.com/google-ai-edge/mediapipe-samples/blob/88792a956f9996c728b92d19ef7fac99cef8a4fe/examples/image_embedder/python/image_embedder.ipynb ;~ Title: Image Embedding with MediaPipe Tasks #include "autoit-mediapipe-com\udf\mediapipe_udf_utils.au3" #include "autoit-opencv-com\udf\opencv_udf_utils.au3" _Mediapipe_Open("opencv-4.11.0-windows\opencv\build\x64\vc16\bin\opencv_world4110.dll", "autoit-mediapipe-com\autoit_mediapipe_com-0.10.14-4110.dll") _OpenCV_Open("opencv-4.11.0-windows\opencv\build\x64\vc16\bin\opencv_world4110.dll", "autoit-opencv-com\autoit_opencv_com4110.dll") OnAutoItExitRegister("_OnAutoItExit") ; Tell mediapipe where to look its resource files _Mediapipe_SetResourceDir() ; Where to download data files Global Const $MEDIAPIPE_SAMPLES_DATA_PATH = @ScriptDir & "\examples\data" Global $download_utils = _Mediapipe_ObjCreate("mediapipe.autoit.solutions.download_utils") _AssertIsObj($download_utils, "Failed to load mediapipe.autoit.solutions.download_utils") ; STEP 1: Import the necessary modules. 
Global $mp = _Mediapipe_get() _AssertIsObj($mp, "Failed to load mediapipe") Global $cv = _OpenCV_get() _AssertIsObj($cv, "Failed to load opencv") Global $autoit = _Mediapipe_ObjCreate("mediapipe.tasks.autoit") _AssertIsObj($autoit, "Failed to load mediapipe.tasks.autoit") Global $vision = _Mediapipe_ObjCreate("mediapipe.tasks.autoit.vision") _AssertIsObj($vision, "Failed to load mediapipe.tasks.autoit.vision") Global $cosine_similarity = _Mediapipe_ObjCreate("mediapipe.tasks.autoit.components.utils.cosine_similarity") _AssertIsObj($cosine_similarity, "Failed to load mediapipe.tasks.autoit.components.utils.cosine_similarity") Main() Func Main() Local $IMAGE_FILENAMES[] = ['burger.jpg', 'burger_crop.jpg'] Local $url, $file_path For $name In $IMAGE_FILENAMES $file_path = $MEDIAPIPE_SAMPLES_DATA_PATH & "\" & $name $url = "https://storage.googleapis.com/mediapipe-assets/" & $name If Not FileExists($file_path) Then $download_utils.download($url, $file_path) EndIf Next Local $_MODEL_FILE = $MEDIAPIPE_SAMPLES_DATA_PATH & "\mobilenet_v3_small.tflite" If Not FileExists($_MODEL_FILE) Then $download_utils.download("https://storage.googleapis.com/mediapipe-models/image_embedder/mobilenet_v3_small/float32/1/mobilenet_v3_small.tflite", $_MODEL_FILE) EndIf ; Create options for Image Embedder Local $base_options = $autoit.BaseOptions(_Mediapipe_Params("model_asset_path", $_MODEL_FILE)) Local $l2_normalize = True ;@param {type:"boolean"} Local $quantize = True ;@param {type:"boolean"} Local $options = $vision.ImageEmbedderOptions(_Mediapipe_Params( _ "base_options", $base_options, _ "l2_normalize", $l2_normalize, _ "quantize", $quantize)) ; Create Image Embedder Local $embedder = $vision.ImageEmbedder.create_from_options($options) ; Format images for MediaPipe Local $first_image = $mp.Image.create_from_file($MEDIAPIPE_SAMPLES_DATA_PATH & "\" & $IMAGE_FILENAMES[0]) Local $second_image = $mp.Image.create_from_file($MEDIAPIPE_SAMPLES_DATA_PATH & "\" & $IMAGE_FILENAMES[1]) Local $first_embedding_result = $embedder.embed($first_image) Local $second_embedding_result = $embedder.embed($second_image) Local $similarity = $cosine_similarity.cosine_similarity($first_embedding_result.embeddings(0), $second_embedding_result.embeddings(0)) ConsoleWrite('@@ Debug(' & @ScriptLineNumber & ') : $similarity = ' & $similarity & @CRLF) ;### Debug Console resize_and_show($cv.cvtColor($first_image.mat_view(), $CV_COLOR_RGB2BGR), $IMAGE_FILENAMES[0]) resize_and_show($cv.cvtColor($second_image.mat_view(), $CV_COLOR_RGB2BGR), $IMAGE_FILENAMES[1]) $cv.waitKey() ; Closes the embedder explicitly when the embedder is not used in a context. $embedder.close() EndFunc ;==>Main Func resize_and_show($image, $title = Default, $show = Default) If $title == Default Then $title = "" If $show == Default Then $show = True Local Const $DESIRED_HEIGHT = 480 Local Const $DESIRED_WIDTH = 480 Local $w = $image.width Local $h = $image.height If $h < $w Then $h = $h / ($w / $DESIRED_WIDTH) $w = $DESIRED_WIDTH Else $w = $w / ($h / $DESIRED_HEIGHT) $h = $DESIRED_HEIGHT EndIf Local $interpolation = ($DESIRED_WIDTH > $image.width Or $DESIRED_HEIGHT > $image.height) ? 
$CV_INTER_CUBIC : $CV_INTER_AREA If $show Then Local $img = $cv.resize($image, _OpenCV_Size($w, $h), _OpenCV_Params("interpolation", $interpolation)) $cv.imshow($title, $img.convertToShow()) EndIf Return $w / $image.width EndFunc ;==>resize_and_show Func _OnAutoItExit() _OpenCV_Close() _Mediapipe_Close() EndFunc ;==>_OnAutoItExit Func _AssertIsObj($vVal, $sMsg) If Not IsObj($vVal) Then ConsoleWriteError($sMsg & @CRLF) Exit 0x7FFFFFFF EndIf EndFunc ;==>_AssertIsObj Image Segmenter #Region ;**** Directives created by AutoIt3Wrapper_GUI **** #AutoIt3Wrapper_UseX64=y #AutoIt3Wrapper_Change2CUI=y #AutoIt3Wrapper_Au3Check_Parameters=-d -w 1 -w 2 -w 3 -w 4 -w 5 -w 6 #AutoIt3Wrapper_AU3Check_Stop_OnWarning=y #EndRegion ;**** Directives created by AutoIt3Wrapper_GUI **** ;~ Sources: ;~ https://colab.research.google.com/github/google-ai-edge/mediapipe-samples/blob/88792a956f9996c728b92d19ef7fac99cef8a4fe/examples/image_segmentation/python/image_segmentation.ipynb ;~ https://github.com/google-ai-edge/mediapipe-samples/blob/88792a956f9996c728b92d19ef7fac99cef8a4fe/examples/image_segmentation/python/image_segmentation.ipynb ;~ Title: Image Segmenter #include "autoit-mediapipe-com\udf\mediapipe_udf_utils.au3" #include "autoit-opencv-com\udf\opencv_udf_utils.au3" _Mediapipe_Open("opencv-4.11.0-windows\opencv\build\x64\vc16\bin\opencv_world4110.dll", "autoit-mediapipe-com\autoit_mediapipe_com-0.10.14-4110.dll") _OpenCV_Open("opencv-4.11.0-windows\opencv\build\x64\vc16\bin\opencv_world4110.dll", "autoit-opencv-com\autoit_opencv_com4110.dll") OnAutoItExitRegister("_OnAutoItExit") ; Tell mediapipe where to look its resource files _Mediapipe_SetResourceDir() ; Where to download data files Global Const $MEDIAPIPE_SAMPLES_DATA_PATH = @ScriptDir & "\examples\data" Global $download_utils = _Mediapipe_ObjCreate("mediapipe.autoit.solutions.download_utils") _AssertIsObj($download_utils, "Failed to load mediapipe.autoit.solutions.download_utils") ; STEP 1: Import the necessary modules. 
Global $mp = _Mediapipe_get() _AssertIsObj($mp, "Failed to load mediapipe") Global $cv = _OpenCV_get() _AssertIsObj($cv, "Failed to load opencv") Global $autoit = _Mediapipe_ObjCreate("mediapipe.tasks.autoit") _AssertIsObj($autoit, "Failed to load mediapipe.tasks.autoit") Global $vision = _Mediapipe_ObjCreate("mediapipe.tasks.autoit.vision") _AssertIsObj($vision, "Failed to load mediapipe.tasks.autoit.vision") Main() Func Main() Local $IMAGE_FILENAMES[] = ['segmentation_input_rotation0.jpg'] Local $url, $file_path For $name In $IMAGE_FILENAMES $file_path = $MEDIAPIPE_SAMPLES_DATA_PATH & "\" & $name $url = "https://storage.googleapis.com/mediapipe-assets/" & $name If Not FileExists($file_path) Then $download_utils.download($url, $file_path) EndIf Next Local $_MODEL_FILE = $MEDIAPIPE_SAMPLES_DATA_PATH & "\deeplab_v3.tflite" If Not FileExists($_MODEL_FILE) Then $download_utils.download("https://storage.googleapis.com/mediapipe-models/image_segmenter/deeplab_v3/float32/1/deeplab_v3.tflite", $_MODEL_FILE) EndIf Local $BG_COLOR = _OpenCV_Scalar(192, 192, 192) ; gray Local $MASK_COLOR = _OpenCV_Scalar(255, 255, 255) ; white ; Create the options that will be used for ImageSegmenter Local $base_options = $autoit.BaseOptions(_Mediapipe_Params("model_asset_path", $_MODEL_FILE)) Local $options = $vision.ImageSegmenterOptions(_Mediapipe_Params("base_options", $base_options, _ "output_category_mask", True)) ; Create the image segmenter Local $segmenter = $vision.ImageSegmenter.create_from_options($options) Local $image, $segmentation_result, $category_mask, $image_data Local $fg_image, $bg_image, $fg_mask Local $output_image, $blurred_image ; Loop through demo image(s) For $image_file_name In $IMAGE_FILENAMES ; Create the MediaPipe image file that will be segmented $image = $mp.Image.create_from_file($MEDIAPIPE_SAMPLES_DATA_PATH & "\" & $image_file_name) ; mediapipe uses RGB images while opencv uses BGR images ; Convert the BGR image to RGB $image_data = $cv.cvtColor($image.mat_view(), $CV_COLOR_RGB2BGR) ; Retrieve the masks for the segmented image $segmentation_result = $segmenter.segment($image) $category_mask = $segmentation_result.category_mask ; Generate solid color images for showing the output segmentation mask. $fg_image = $cv.Mat.create($image_data.size(), $CV_8UC3, $MASK_COLOR) $bg_image = $cv.Mat.create($image_data.size(), $CV_8UC3, $BG_COLOR) ; Foreground mask corresponds to all 'i' pixels where category_mask[i] > 0.2 $fg_mask = $cv.compare($category_mask.mat_view(), 0.2, $CV_CMP_GT) ; Draw fg_image on bg_image based on the segmentation mask. $output_image = $bg_image.copy() $fg_image.copyTo($fg_mask, $output_image) resize_and_show($output_image, 'Segmentation mask of ' & $image_file_name) ; Blur the image background based on the segmentation mask. $blurred_image = $cv.GaussianBlur($image_data, _OpenCV_Size(55, 55), 0) $image_data.copyTo($fg_mask, $blurred_image) resize_and_show($blurred_image, 'Blurred background of ' & $image_file_name) Next $cv.waitKey() ; Closes the segmenter explicitly when the segmenter is not used ina context. 
$segmenter.close() EndFunc ;==>Main Func resize_and_show($image, $title = Default, $show = Default) If $title == Default Then $title = "" If $show == Default Then $show = True Local Const $DESIRED_HEIGHT = 480 Local Const $DESIRED_WIDTH = 480 Local $w = $image.width Local $h = $image.height If $h < $w Then $h = $h / ($w / $DESIRED_WIDTH) $w = $DESIRED_WIDTH Else $w = $w / ($h / $DESIRED_HEIGHT) $h = $DESIRED_HEIGHT EndIf Local $interpolation = ($DESIRED_WIDTH > $image.width Or $DESIRED_HEIGHT > $image.height) ? $CV_INTER_CUBIC : $CV_INTER_AREA If $show Then Local $img = $cv.resize($image, _OpenCV_Size($w, $h), _OpenCV_Params("interpolation", $interpolation)) $cv.imshow($title, $img.convertToShow()) EndIf Return $w / $image.width EndFunc ;==>resize_and_show Func _OnAutoItExit() _OpenCV_Close() _Mediapipe_Close() EndFunc ;==>_OnAutoItExit Func _AssertIsObj($vVal, $sMsg) If Not IsObj($vVal) Then ConsoleWriteError($sMsg & @CRLF) Exit 0x7FFFFFFF EndIf EndFunc ;==>_AssertIsObj Interactive Image Segmenter #Region ;**** Directives created by AutoIt3Wrapper_GUI **** #AutoIt3Wrapper_UseX64=y #AutoIt3Wrapper_Change2CUI=y #AutoIt3Wrapper_Au3Check_Parameters=-d -w 1 -w 2 -w 3 -w 4 -w 5 -w 6 #AutoIt3Wrapper_AU3Check_Stop_OnWarning=y #EndRegion ;**** Directives created by AutoIt3Wrapper_GUI **** ;~ Sources: ;~ https://colab.research.google.com/github/google-ai-edge/mediapipe-samples/blob/88792a956f9996c728b92d19ef7fac99cef8a4fe/examples/interactive_segmentation/python/interactive_segmenter.ipynb ;~ https://github.com/google-ai-edge/mediapipe-samples/blob/88792a956f9996c728b92d19ef7fac99cef8a4fe/examples/interactive_segmentation/python/interactive_segmenter.ipynb ;~ Title: Interactive Image Segmenter #include "autoit-mediapipe-com\udf\mediapipe_udf_utils.au3" #include "autoit-opencv-com\udf\opencv_udf_utils.au3" _Mediapipe_Open("opencv-4.11.0-windows\opencv\build\x64\vc16\bin\opencv_world4110.dll", "autoit-mediapipe-com\autoit_mediapipe_com-0.10.14-4110.dll") _OpenCV_Open("opencv-4.11.0-windows\opencv\build\x64\vc16\bin\opencv_world4110.dll", "autoit-opencv-com\autoit_opencv_com4110.dll") OnAutoItExitRegister("_OnAutoItExit") ; Tell mediapipe where to look its resource files _Mediapipe_SetResourceDir() ; Where to download data files Global Const $MEDIAPIPE_SAMPLES_DATA_PATH = @ScriptDir & "\examples\data" Global $download_utils = _Mediapipe_ObjCreate("mediapipe.autoit.solutions.download_utils") _AssertIsObj($download_utils, "Failed to load mediapipe.autoit.solutions.download_utils") ; STEP 1: Import the necessary modules. 
Global $mp = _Mediapipe_get() _AssertIsObj($mp, "Failed to load mediapipe") Global $cv = _OpenCV_get() _AssertIsObj($cv, "Failed to load opencv") Global $autoit = _Mediapipe_ObjCreate("mediapipe.tasks.autoit") _AssertIsObj($autoit, "Failed to load mediapipe.tasks.autoit") Global $vision = _Mediapipe_ObjCreate("mediapipe.tasks.autoit.vision") _AssertIsObj($vision, "Failed to load mediapipe.tasks.autoit.vision") Global $containers = _Mediapipe_ObjCreate("mediapipe.tasks.autoit.components.containers") _AssertIsObj($containers, "Failed to load mediapipe.tasks.autoit.components.containers") Main() Func Main() Local $IMAGE_FILENAMES[] = ['cats_and_dogs.jpg'] Local $url, $file_path For $name In $IMAGE_FILENAMES $file_path = $MEDIAPIPE_SAMPLES_DATA_PATH & "\" & $name $url = "https://storage.googleapis.com/mediapipe-assets/" & $name If Not FileExists($file_path) Then $download_utils.download($url, $file_path) EndIf Next Local $_MODEL_FILE = $MEDIAPIPE_SAMPLES_DATA_PATH & "\magic_touch.tflite" If Not FileExists($_MODEL_FILE) Then $download_utils.download("https://storage.googleapis.com/mediapipe-models/interactive_segmenter/magic_touch/float32/1/magic_touch.tflite", $_MODEL_FILE) EndIf Local $x = 0.68 ;@param {type:"slider", min:0, max:1, step:0.01} Local $y = 0.68 ;@param {type:"slider", min:0, max:1, step:0.01} Local $BG_COLOR = _OpenCV_Scalar(192, 192, 192) ; gray Local $MASK_COLOR = _OpenCV_Scalar(255, 255, 255) ; white Local $OVERLAY_COLOR = _OpenCV_Scalar(100, 100, 0) ; cyan Local $RegionOfInterest_Format = $vision.InteractiveSegmenterRegionOfInterest_Format Local $RegionOfInterest = $vision.InteractiveSegmenterRegionOfInterest Local $NormalizedKeypoint = $containers.keypoint.NormalizedKeypoint ; Create the options that will be used for InteractiveSegmenter Local $base_options = $autoit.BaseOptions(_Mediapipe_Params("model_asset_path", $_MODEL_FILE)) Local $options = $vision.InteractiveSegmenterOptions(_Mediapipe_Params("base_options", $base_options, _ "output_category_mask", True)) ; Create the interactive segmenter Local $segmenter = $vision.InteractiveSegmenter.create_from_options($options) Local $image, $roi, $segmentation_result, $category_mask, $image_data Local $fg_image, $bg_image, $fg_mask Local $output_image, $blurred_image, $overlayed_image Local $keypoint_px, $alpha Local $color = _OpenCV_Scalar(255, 255, 0) Local $thickness = 10 Local $radius = 2 Local $scale ; Loop through demo image(s) For $image_file_name In $IMAGE_FILENAMES ; Create the MediaPipe image file that will be segmented $image = $mp.Image.create_from_file($MEDIAPIPE_SAMPLES_DATA_PATH & "\" & $image_file_name) ; Compute the scale to make drawn elements visible when the image is resized for display $scale = 1 / resize_and_show($image.mat_view(), Default, False) ; mediapipe uses RGB images while opencv uses BGR images ; Convert the BGR image to RGB $image_data = $cv.cvtColor($image.mat_view(), $CV_COLOR_RGB2BGR) ; Retrieve the masks for the segmented image $roi = $RegionOfInterest(_Mediapipe_Params("format", $RegionOfInterest_Format.KEYPOINT, _ "keypoint", $NormalizedKeypoint($x, $y))) $segmentation_result = $segmenter.segment($image, $roi) $category_mask = $segmentation_result.category_mask ; Generate solid color images for showing the output segmentation mask. 
$fg_image = $cv.Mat.create($image_data.size(), $CV_8UC3, $MASK_COLOR) $bg_image = $cv.Mat.create($image_data.size(), $CV_8UC3, $BG_COLOR) ; Foreground mask corresponds to all 'i' pixels where category_mask[i] > 0.1 $fg_mask = $cv.compare($category_mask.mat_view(), 0.1, $CV_CMP_GT) ; Draw fg_image on bg_image based on the segmentation mask. $output_image = $bg_image.copy() $fg_image.copyTo($fg_mask, $output_image) ; Compute the point of interest coordinates $keypoint_px = _normalized_to_pixel_coordinates($x, $y, $image.width, $image.height) ; Draw a circle to denote the point of interest $cv.circle($output_image, $keypoint_px, $thickness * $scale, $color, $radius * $scale) ; Display the segmented image resize_and_show($output_image, 'Segmentation mask of ' & $image_file_name) ; Blur the image background based on the segmentation mask. $blurred_image = $cv.GaussianBlur($image_data, _OpenCV_Size(55, 55), 0) $image_data.copyTo($fg_mask, $blurred_image) ; Draw a circle to denote the point of interest $cv.circle($blurred_image, $keypoint_px, $thickness * $scale, $color, $radius * $scale) ; Display the blurred image resize_and_show($blurred_image, 'Blurred background of ' & $image_file_name) ; Create an overlay image with the desired color (e.g., (255, 0, 0) for red) $overlayed_image = $cv.Mat.create($image_data.size(), $CV_8UC3, $OVERLAY_COLOR) ; Create an alpha channel based on the segmentation mask with the desired opacity (e.g., 0.7 for 70%) ; fg_mask values are 0 where the mask should not apply and 255 where it should ; multiplying by 0.7 / 255.0 gives values that are 0 where the mask should not apply and 0.7 where it should $alpha = $fg_mask.convertTo($CV_32F, Null, 0.7 / 255.0) ; repeat the alpha mask for each image channel color $alpha = $cv.merge(_OpenCV_Tuple($alpha, $alpha, $alpha)) ; Blend the original image and the overlay image based on the alpha channel $overlayed_image = $cv.add($cv.multiply($image_data, $cv.subtract(1.0, $alpha), Null, Default, $CV_32F), $cv.multiply($overlayed_image, $alpha, Null, Default, $CV_32F)) ; Draw a circle to denote the point of interest $cv.circle($overlayed_image, $keypoint_px, $thickness * $scale, $color, $radius * $scale) ; Display the overlayed image resize_and_show($overlayed_image, 'Overlayed foreground of ' & $image_file_name) Next $cv.waitKey() ; Closes the segmenter explicitly when the segmenter is not used ina context. $segmenter.close() EndFunc ;==>Main Func isclose($a, $b) Return Abs($a - $b) <= 1E-6 EndFunc ;==>isclose ; Checks if the float value is between 0 and 1. Func is_valid_normalized_value($value) Return ($value > 0 Or isclose(0, $value)) And ($value < 1 Or isclose(1, $value)) EndFunc ;==>is_valid_normalized_value #cs Converts normalized value pair to pixel coordinates. #ce Func _normalized_to_pixel_coordinates($normalized_x, $normalized_y, $image_width, $image_height) If Not (is_valid_normalized_value($normalized_x) And is_valid_normalized_value($normalized_y)) Then ; TODO: Draw coordinates even if it's outside of the image bounds. 
Return Default EndIf Local $x_px = _Min(Floor($normalized_x * $image_width), $image_width - 1) Local $y_px = _Min(Floor($normalized_y * $image_height), $image_height - 1) Return _OpenCV_Point($x_px, $y_px) EndFunc ;==>_normalized_to_pixel_coordinates Func resize_and_show($image, $title = Default, $show = Default) If $title == Default Then $title = "" If $show == Default Then $show = True Local Const $DESIRED_HEIGHT = 480 Local Const $DESIRED_WIDTH = 480 Local $w = $image.width Local $h = $image.height If $h < $w Then $h = $h / ($w / $DESIRED_WIDTH) $w = $DESIRED_WIDTH Else $w = $w / ($h / $DESIRED_HEIGHT) $h = $DESIRED_HEIGHT EndIf Local $interpolation = ($DESIRED_WIDTH > $image.width Or $DESIRED_HEIGHT > $image.height) ? $CV_INTER_CUBIC : $CV_INTER_AREA If $show Then Local $img = $cv.resize($image, _OpenCV_Size($w, $h), _OpenCV_Params("interpolation", $interpolation)) $cv.imshow($title, $img.convertToShow()) EndIf Return $w / $image.width EndFunc ;==>resize_and_show Func _OnAutoItExit() _OpenCV_Close() _Mediapipe_Close() EndFunc ;==>_OnAutoItExit Func _AssertIsObj($vVal, $sMsg) If Not IsObj($vVal) Then ConsoleWriteError($sMsg & @CRLF) Exit 0x7FFFFFFF EndIf EndFunc ;==>_AssertIsObj Language Detector with MediaPipe Tasks #Region ;**** Directives created by AutoIt3Wrapper_GUI **** #AutoIt3Wrapper_UseX64=y #AutoIt3Wrapper_Change2CUI=y #AutoIt3Wrapper_Au3Check_Parameters=-d -w 1 -w 2 -w 3 -w 4 -w 5 -w 6 #AutoIt3Wrapper_AU3Check_Stop_OnWarning=y #EndRegion ;**** Directives created by AutoIt3Wrapper_GUI **** ;~ Sources: ;~ https://colab.research.google.com/github/google-ai-edge/mediapipe-samples/blob/88792a956f9996c728b92d19ef7fac99cef8a4fe/examples/language_detector/python/[MediaPipe_Python_Tasks]_Language_Detector.ipynb ;~ https://github.com/google-ai-edge/mediapipe-samples/blob/88792a956f9996c728b92d19ef7fac99cef8a4fe/examples/language_detector/python/[MediaPipe_Python_Tasks]_Language_Detector.ipynb ;~ Title: Language Detector with MediaPipe Tasks #include "autoit-mediapipe-com\udf\mediapipe_udf_utils.au3" #include "autoit-opencv-com\udf\opencv_udf_utils.au3" _Mediapipe_Open("opencv-4.11.0-windows\opencv\build\x64\vc16\bin\opencv_world4110.dll", "autoit-mediapipe-com\autoit_mediapipe_com-0.10.14-4110.dll") OnAutoItExitRegister("_OnAutoItExit") ; Tell mediapipe where to look its resource files _Mediapipe_SetResourceDir() ; Where to download data files Global Const $MEDIAPIPE_SAMPLES_DATA_PATH = @ScriptDir & "\examples\data" ; STEP 1: Import the necessary modules. Global $download_utils = _Mediapipe_ObjCreate("mediapipe.autoit.solutions.download_utils") _AssertIsObj($download_utils, "Failed to load mediapipe.autoit.solutions.download_utils") Global $autoit = _Mediapipe_ObjCreate("mediapipe.tasks.autoit") _AssertIsObj($autoit, "Failed to load mediapipe.tasks.autoit") Global $text = _Mediapipe_ObjCreate("mediapipe.tasks.autoit.text") _AssertIsObj($text, "Failed to load mediapipe.tasks.autoit.text") Main() Func Main() Local $_MODEL_FILE = $MEDIAPIPE_SAMPLES_DATA_PATH & "\language_detector.tflite" Local $_MODEL_URL = "https://storage.googleapis.com/mediapipe-models/language_detector/language_detector/float32/latest/language_detector.tflite" Local $url, $file_path Local $sample_files[] = [ _ _Mediapipe_Tuple($_MODEL_FILE, $_MODEL_URL) _ ] For $config In $sample_files $file_path = $config[0] $url = $config[1] If Not FileExists($file_path) Then $download_utils.download($url, $file_path) EndIf Next ; Define the input text that you wants the model to classify. 
Local $INPUT_TEXT = "分久必合合久必分" ;@param {type:"string"} ; STEP 2: Create a LanguageDetector object. Local $base_options = $autoit.BaseOptions(_Mediapipe_Params("model_asset_path", $_MODEL_FILE)) Local $options = $text.LanguageDetectorOptions(_Mediapipe_Params("base_options", $base_options)) Local $detector = $text.LanguageDetector.create_from_options($options) ; STEP 3: Get the language detcetion result for the input text. Local $detection_result = $detector.detect($INPUT_TEXT) ; STEP 4: Process the detection result and print the languages detected and their scores. For $detection In $detection_result.detections ConsoleWrite(StringFormat("%s: (%.2f)", $detection.language_code, $detection.probability) & @CRLF) Next ; STEP 5: Closes the detector explicitly when the detector is not used ina context. $detector.close() EndFunc ;==>Main Func _OnAutoItExit() _Mediapipe_Close() EndFunc ;==>_OnAutoItExit Func _AssertIsObj($vVal, $sMsg) If Not IsObj($vVal) Then ConsoleWriteError($sMsg & @CRLF) Exit 0x7FFFFFFF EndIf EndFunc ;==>_AssertIsObj Object Detection with MediaPipe Tasks #Region ;**** Directives created by AutoIt3Wrapper_GUI **** #AutoIt3Wrapper_UseX64=y #AutoIt3Wrapper_Change2CUI=y #AutoIt3Wrapper_Au3Check_Parameters=-d -w 1 -w 2 -w 3 -w 4 -w 5 -w 6 #AutoIt3Wrapper_AU3Check_Stop_OnWarning=y #EndRegion ;**** Directives created by AutoIt3Wrapper_GUI **** ;~ Sources: ;~ https://colab.research.google.com/github/google-ai-edge/mediapipe-samples/blob/88792a956f9996c728b92d19ef7fac99cef8a4fe/examples/object_detection/python/object_detector.ipynb ;~ https://github.com/google-ai-edge/mediapipe-samples/blob/88792a956f9996c728b92d19ef7fac99cef8a4fe/examples/object_detection/python/object_detector.ipynb ;~ Title: Object Detection with MediaPipe Tasks #include "autoit-mediapipe-com\udf\mediapipe_udf_utils.au3" #include "autoit-opencv-com\udf\opencv_udf_utils.au3" _Mediapipe_Open("opencv-4.11.0-windows\opencv\build\x64\vc16\bin\opencv_world4110.dll", "autoit-mediapipe-com\autoit_mediapipe_com-0.10.14-4110.dll") _OpenCV_Open("opencv-4.11.0-windows\opencv\build\x64\vc16\bin\opencv_world4110.dll", "autoit-opencv-com\autoit_opencv_com4110.dll") OnAutoItExitRegister("_OnAutoItExit") ; Tell mediapipe where to look its resource files _Mediapipe_SetResourceDir() ; Where to download data files Global Const $MEDIAPIPE_SAMPLES_DATA_PATH = @ScriptDir & "\examples\data" Global $download_utils = _Mediapipe_ObjCreate("mediapipe.autoit.solutions.download_utils") _AssertIsObj($download_utils, "Failed to load mediapipe.autoit.solutions.download_utils") ; STEP 1: Import the necessary modules. 
Global $mp = _Mediapipe_get() _AssertIsObj($mp, "Failed to load mediapipe") Global $cv = _OpenCV_get() _AssertIsObj($cv, "Failed to load opencv") Global $autoit = _Mediapipe_ObjCreate("mediapipe.tasks.autoit") _AssertIsObj($autoit, "Failed to load mediapipe.tasks.autoit") Global $vision = _Mediapipe_ObjCreate("mediapipe.tasks.autoit.vision") _AssertIsObj($vision, "Failed to load mediapipe.tasks.autoit.vision") Main() Func Main() Local $_IMAGE_FILE = $MEDIAPIPE_SAMPLES_DATA_PATH & "\cat_and_dog.jpg" Local $_IMAGE_URL = "https://storage.googleapis.com/mediapipe-tasks/object_detector/cat_and_dog.jpg" Local $_MODEL_FILE = $MEDIAPIPE_SAMPLES_DATA_PATH & "\efficientdet_lite0.tflite" Local $_MODEL_URL = "https://storage.googleapis.com/mediapipe-models/object_detector/efficientdet_lite0/int8/1/efficientdet_lite0.tflite" Local $url, $file_path Local $sample_files[] = [ _ _Mediapipe_Tuple($_IMAGE_FILE, $_IMAGE_URL), _ _Mediapipe_Tuple($_MODEL_FILE, $_MODEL_URL) _ ] For $config In $sample_files $file_path = $config[0] $url = $config[1] If Not FileExists($file_path) Then $download_utils.download($url, $file_path) EndIf Next ; Compute the scale to make drawn elements visible when the image is resized for display Local $scale = 1 / resize_and_show($cv.imread($_IMAGE_FILE), Default, False) ; STEP 2: Create an ObjectDetector object. Local $base_options = $autoit.BaseOptions(_Mediapipe_Params("model_asset_path", $_MODEL_FILE)) Local $options = $vision.ObjectDetectorOptions(_Mediapipe_Params("base_options", $base_options, _ "score_threshold", 0.5)) Local $detector = $vision.ObjectDetector.create_from_options($options) ; STEP 3: Load the input image. Local $image = $mp.Image.create_from_file($_IMAGE_FILE) ; STEP 4: Detect objects in the input image. Local $detection_result = $detector.detect($image) ; STEP 5: Process the detection result. In this case, visualize it. Local $image_copy = $image.mat_view() Local $annotated_image = visualize($image_copy, $detection_result, $scale) Local $bgr_annotated_image = $cv.cvtColor($annotated_image, $CV_COLOR_RGB2BGR) resize_and_show($bgr_annotated_image, "object_detection") $cv.waitKey() ; STEP 6: Closes the detector explicitly when the detector is not used ina context. $detector.close() EndFunc ;==>Main #cs Draws bounding boxes and keypoints on the input image and return it. Args: image: The input RGB image. detection_result: The list of all "Detection" entities to be visualize. scale: Scale to keep drawing visible after resize Returns: Image with bounding boxes. 
#ce Func visualize($image, $detection_result, $scale = 1.0) Local $MARGIN = 10 * $scale ; pixels Local $ROW_SIZE = 10 ; pixels Local $FONT_SIZE = $scale Local $FONT_THICKNESS = $scale Local $TEXT_COLOR = _OpenCV_Scalar(255, 0, 0) ; red Local $bbox, $start_point, $end_point Local $category, $category_name, $probability, $result_text, $text_location For $detection In $detection_result.detections ; Draw bounding_box $bbox = $detection.bounding_box $start_point = _OpenCV_Point($bbox.origin_x, $bbox.origin_y) $end_point = _OpenCV_Point($bbox.origin_x + $bbox.width, $bbox.origin_y + $bbox.height) $cv.rectangle($image, $start_point, $end_point, $TEXT_COLOR, 3) ; Draw label and score $category = $detection.categories(0) $category_name = $category.category_name $probability = Round($category.score, 2) $result_text = $category_name & ' (' & $probability & ')' $text_location = _OpenCV_Point($MARGIN + $bbox.origin_x, $MARGIN + $ROW_SIZE + $bbox.origin_y) $cv.putText($image, $result_text, $text_location, $CV_FONT_HERSHEY_PLAIN, $FONT_SIZE, $TEXT_COLOR, $FONT_THICKNESS) Next Return $image EndFunc ;==>visualize Func resize_and_show($image, $title = Default, $show = Default) If $title == Default Then $title = "" If $show == Default Then $show = True Local Const $DESIRED_HEIGHT = 480 Local Const $DESIRED_WIDTH = 480 Local $w = $image.width Local $h = $image.height If $h < $w Then $h = $h / ($w / $DESIRED_WIDTH) $w = $DESIRED_WIDTH Else $w = $w / ($h / $DESIRED_HEIGHT) $h = $DESIRED_HEIGHT EndIf Local $interpolation = ($DESIRED_WIDTH > $image.width Or $DESIRED_HEIGHT > $image.height) ? $CV_INTER_CUBIC : $CV_INTER_AREA If $show Then Local $img = $cv.resize($image, _OpenCV_Size($w, $h), _OpenCV_Params("interpolation", $interpolation)) $cv.imshow($title, $img.convertToShow()) EndIf Return $w / $image.width EndFunc ;==>resize_and_show Func _OnAutoItExit() _OpenCV_Close() _Mediapipe_Close() EndFunc ;==>_OnAutoItExit Func _AssertIsObj($vVal, $sMsg) If Not IsObj($vVal) Then ConsoleWriteError($sMsg & @CRLF) Exit 0x7FFFFFFF EndIf EndFunc ;==>_AssertIsObj Pose Landmarks Detection with MediaPipe Tasks #Region ;**** Directives created by AutoIt3Wrapper_GUI **** #AutoIt3Wrapper_UseX64=y #AutoIt3Wrapper_Change2CUI=y #AutoIt3Wrapper_Au3Check_Parameters=-d -w 1 -w 2 -w 3 -w 4 -w 5 -w 6 #AutoIt3Wrapper_AU3Check_Stop_OnWarning=y #EndRegion ;**** Directives created by AutoIt3Wrapper_GUI **** ;~ Sources: ;~ https://colab.research.google.com/github/google-ai-edge/mediapipe-samples/blob/88792a956f9996c728b92d19ef7fac99cef8a4fe/examples/object_detection/python/object_detector.ipynb ;~ https://github.com/google-ai-edge/mediapipe-samples/blob/88792a956f9996c728b92d19ef7fac99cef8a4fe/examples/object_detection/python/object_detector.ipynb ;~ Title: Pose Landmarks Detection with MediaPipe Tasks #include "autoit-mediapipe-com\udf\mediapipe_udf_utils.au3" #include "autoit-opencv-com\udf\opencv_udf_utils.au3" _Mediapipe_Open("opencv-4.11.0-windows\opencv\build\x64\vc16\bin\opencv_world4110.dll", "autoit-mediapipe-com\autoit_mediapipe_com-0.10.14-4110.dll") _OpenCV_Open("opencv-4.11.0-windows\opencv\build\x64\vc16\bin\opencv_world4110.dll", "autoit-opencv-com\autoit_opencv_com4110.dll") OnAutoItExitRegister("_OnAutoItExit") ; Tell mediapipe where to look its resource files _Mediapipe_SetResourceDir() ; Where to download data files Global Const $MEDIAPIPE_SAMPLES_DATA_PATH = @ScriptDir & "\examples\data" ; STEP 1: Import the necessary modules. 
Global $mp = _Mediapipe_get() _AssertIsObj($mp, "Failed to load mediapipe") Global $cv = _OpenCV_get() _AssertIsObj($cv, "Failed to load opencv") Global $download_utils = _Mediapipe_ObjCreate("mediapipe.autoit.solutions.download_utils") _AssertIsObj($download_utils, "Failed to load mediapipe.autoit.solutions.download_utils") Global $solutions = _Mediapipe_ObjCreate("mediapipe.solutions") _AssertIsObj($solutions, "Failed to load mediapipe.solutions") Global $landmark_pb2 = _Mediapipe_ObjCreate("mediapipe.framework.formats.landmark_pb2") _AssertIsObj($landmark_pb2, "Failed to load mediapipe.framework.formats.landmark_pb2") Global $autoit = _Mediapipe_ObjCreate("mediapipe.tasks.autoit") _AssertIsObj($autoit, "Failed to load mediapipe.tasks.autoit") Global $vision = _Mediapipe_ObjCreate("mediapipe.tasks.autoit.vision") _AssertIsObj($vision, "Failed to load mediapipe.tasks.autoit.vision") Main() Func Main() Local $_IMAGE_FILE = $MEDIAPIPE_SAMPLES_DATA_PATH & "\girl-4051811_960_720.jpg" Local $_IMAGE_URL = "https://cdn.pixabay.com/photo/2019/03/12/20/39/girl-4051811_960_720.jpg" Local $_MODEL_FILE = $MEDIAPIPE_SAMPLES_DATA_PATH & "\pose_landmarker_heavy.task" Local $_MODEL_URL = "https://storage.googleapis.com/mediapipe-models/pose_landmarker/pose_landmarker_heavy/float16/1/pose_landmarker_heavy.task" Local $url, $file_path Local $sample_files[] = [ _ _Mediapipe_Tuple($_IMAGE_FILE, $_IMAGE_URL), _ _Mediapipe_Tuple($_MODEL_FILE, $_MODEL_URL) _ ] For $config In $sample_files $file_path = $config[0] $url = $config[1] If Not FileExists($file_path) Then $download_utils.download($url, $file_path) EndIf Next ; STEP 2: Create an PoseLandmarker object. Local $base_options = $autoit.BaseOptions(_Mediapipe_Params("model_asset_path", $_MODEL_FILE)) Local $options = $vision.PoseLandmarkerOptions(_Mediapipe_Params( _ "base_options", $base_options, _ "output_segmentation_masks", True)) Local $detector = $vision.PoseLandmarker.create_from_options($options) ; STEP 3: Load the input image. Local $image = $mp.Image.create_from_file($_IMAGE_FILE) ; STEP 4: Detect pose landmarks from the input image. Local $detection_result = $detector.detect($image) ; STEP 5: Process the detection result. In this case, visualize it. Local $annotated_image = draw_landmarks_on_image($image.mat_view(), $detection_result) ; Display the image resize_and_show($cv.cvtColor($annotated_image, $CV_COLOR_RGB2BGR), "Pose Landmarks Detection with MediaPipe Tasks : Image") ; Visualize the pose segmentation mask. Local $segmentation_mask = $detection_result.segmentation_masks(0).mat_view() resize_and_show($segmentation_mask, "Pose Landmarks Detection with MediaPipe Tasks : Mask") $cv.waitKey() ; STEP 6: Closes the detector explicitly when the detector is not used ina context. $detector.close() EndFunc ;==>Main Func draw_landmarks_on_image($rgb_image, $detection_result) ; Compute the scale to make drawn elements visible when the image is resized for display Local $scale = 1 / resize_and_show($rgb_image, Default, False) Local $pose_landmarks_list = $detection_result.pose_landmarks Local $annotated_image = $rgb_image Local $pose_landmarks_proto ; Loop through the detected poses to visualize. For $pose_landmarks In $pose_landmarks_list ; Draw the pose landmarks. 
$pose_landmarks_proto = $landmark_pb2.NormalizedLandmarkList() For $landmark In $pose_landmarks $pose_landmarks_proto.landmark.append($landmark_pb2.NormalizedLandmark(_Mediapipe_Params("x", $landmark.x, "y", $landmark.y, "z", $landmark.z))) Next $solutions.drawing_utils.draw_landmarks( _ $annotated_image, _ $pose_landmarks_proto, _ $solutions.pose.POSE_CONNECTIONS, _ $solutions.drawing_styles.get_default_pose_landmarks_style($scale)) Next Return $annotated_image EndFunc ;==>draw_landmarks_on_image Func resize_and_show($image, $title = Default, $show = Default) If $title == Default Then $title = "" If $show == Default Then $show = True Local Const $DESIRED_HEIGHT = 480 Local Const $DESIRED_WIDTH = 480 Local $w = $image.width Local $h = $image.height If $h < $w Then $h = $h / ($w / $DESIRED_WIDTH) $w = $DESIRED_WIDTH Else $w = $w / ($h / $DESIRED_HEIGHT) $h = $DESIRED_HEIGHT EndIf Local $interpolation = ($DESIRED_WIDTH > $image.width Or $DESIRED_HEIGHT > $image.height) ? $CV_INTER_CUBIC : $CV_INTER_AREA If $show Then Local $img = $cv.resize($image, _OpenCV_Size($w, $h), _OpenCV_Params("interpolation", $interpolation)) $cv.imshow($title, $img.convertToShow()) EndIf Return $w / $image.width EndFunc ;==>resize_and_show Func _OnAutoItExit() _OpenCV_Close() _Mediapipe_Close() EndFunc ;==>_OnAutoItExit Func _AssertIsObj($vVal, $sMsg) If Not IsObj($vVal) Then ConsoleWriteError($sMsg & @CRLF) Exit 0x7FFFFFFF EndIf EndFunc ;==>_AssertIsObj Text Classifier with MediaPipe Tasks #Region ;**** Directives created by AutoIt3Wrapper_GUI **** #AutoIt3Wrapper_UseX64=y #AutoIt3Wrapper_Change2CUI=y #AutoIt3Wrapper_Au3Check_Parameters=-d -w 1 -w 2 -w 3 -w 4 -w 5 -w 6 #AutoIt3Wrapper_AU3Check_Stop_OnWarning=y #EndRegion ;**** Directives created by AutoIt3Wrapper_GUI **** ;~ Sources: ;~ https://colab.research.google.com/github/google-ai-edge/mediapipe-samples/blob/88792a956f9996c728b92d19ef7fac99cef8a4fe/examples/text_classification/python/text_classifier.ipynb ;~ https://github.com/google-ai-edge/mediapipe-samples/blob/88792a956f9996c728b92d19ef7fac99cef8a4fe/examples/text_classification/python/text_classifier.ipynb ;~ Title: Text Classifier with MediaPipe Tasks #include "autoit-mediapipe-com\udf\mediapipe_udf_utils.au3" _Mediapipe_Open("opencv-4.11.0-windows\opencv\build\x64\vc16\bin\opencv_world4110.dll", "autoit-mediapipe-com\autoit_mediapipe_com-0.10.14-4110.dll") OnAutoItExitRegister("_OnAutoItExit") ; Tell mediapipe where to look its resource files _Mediapipe_SetResourceDir() ; Where to download data files Global Const $MEDIAPIPE_SAMPLES_DATA_PATH = @ScriptDir & "\examples\data" Global $download_utils = _Mediapipe_ObjCreate("mediapipe.autoit.solutions.download_utils") _AssertIsObj($download_utils, "Failed to load mediapipe.autoit.solutions.download_utils") ; STEP 1: Import the necessary modules. Global $mp = _Mediapipe_get() _AssertIsObj($mp, "Failed to load mediapipe") Global $autoit = _Mediapipe_ObjCreate("mediapipe.tasks.autoit") _AssertIsObj($autoit, "Failed to load mediapipe.tasks.autoit") Global $text = _Mediapipe_ObjCreate("mediapipe.tasks.autoit.text") _AssertIsObj($text, "Failed to load mediapipe.tasks.autoit.text") Main() Func Main() Local $_MODEL_FILE = $MEDIAPIPE_SAMPLES_DATA_PATH & "\bert_classifier.tflite" If Not FileExists($_MODEL_FILE) Then $download_utils.download("https://storage.googleapis.com/mediapipe-models/text_classifier/bert_classifier/float32/1/bert_classifier.tflite", $_MODEL_FILE) EndIf ; Define the input text that you want the model to classify. 
Local $INPUT_TEXT = "I'm looking forward to what will come next." ; STEP 2: Create a TextClassifier object. Local $base_options = $autoit.BaseOptions(_Mediapipe_Params("model_asset_path", $_MODEL_FILE)) Local $options = $text.TextClassifierOptions(_Mediapipe_Params("base_options", $base_options)) Local $classifier = $text.TextClassifier.create_from_options($options) ; STEP 3: Classify the input text. Local $classification_result = $classifier.classify($INPUT_TEXT) ; STEP 4: Process the classification result. In this case, print out the most likely category. Local $top_category = $classification_result.classifications(0).categories(0) ConsoleWrite('@@ Debug(' & @ScriptLineNumber & ') : ' & StringFormat('%s (%.2f)', $top_category.category_name, $top_category.score) & @CRLF) ;### Debug Console ; STEP 6: Closes the classifier explicitly when the classifier is not used ina context. $classifier.close() EndFunc ;==>Main Func _OnAutoItExit() _Mediapipe_Close() EndFunc ;==>_OnAutoItExit Func _AssertIsObj($vVal, $sMsg) If Not IsObj($vVal) Then ConsoleWriteError($sMsg & @CRLF) Exit 0x7FFFFFFF EndIf EndFunc ;==>_AssertIsObj Text Embedding with MediaPipe Tasks #Region ;**** Directives created by AutoIt3Wrapper_GUI **** #AutoIt3Wrapper_UseX64=y #AutoIt3Wrapper_Change2CUI=y #AutoIt3Wrapper_Au3Check_Parameters=-d -w 1 -w 2 -w 3 -w 4 -w 5 -w 6 #AutoIt3Wrapper_AU3Check_Stop_OnWarning=y #EndRegion ;**** Directives created by AutoIt3Wrapper_GUI **** ;~ Sources: ;~ https://colab.research.google.com/github/google-ai-edge/mediapipe-samples/blob/88792a956f9996c728b92d19ef7fac99cef8a4fe/examples/text_embedder/python/text_embedder.ipynb ;~ https://github.com/google-ai-edge/mediapipe-samples/blob/88792a956f9996c728b92d19ef7fac99cef8a4fe/examples/text_embedder/python/text_embedder.ipynb ;~ Title: Text Embedding with MediaPipe Tasks #include "autoit-mediapipe-com\udf\mediapipe_udf_utils.au3" #include "autoit-opencv-com\udf\opencv_udf_utils.au3" _Mediapipe_Open("opencv-4.11.0-windows\opencv\build\x64\vc16\bin\opencv_world4110.dll", "autoit-mediapipe-com\autoit_mediapipe_com-0.10.14-4110.dll") OnAutoItExitRegister("_OnAutoItExit") ; Tell mediapipe where to look its resource files _Mediapipe_SetResourceDir() ; Where to download data files Global Const $MEDIAPIPE_SAMPLES_DATA_PATH = @ScriptDir & "\examples\data" Global $download_utils = _Mediapipe_ObjCreate("mediapipe.autoit.solutions.download_utils") _AssertIsObj($download_utils, "Failed to load mediapipe.autoit.solutions.download_utils") ; STEP 1: Import the necessary modules. 
Global $mp = _Mediapipe_get() _AssertIsObj($mp, "Failed to load mediapipe") Global $autoit = _Mediapipe_ObjCreate("mediapipe.tasks.autoit") _AssertIsObj($autoit, "Failed to load mediapipe.tasks.autoit") Global $text = _Mediapipe_ObjCreate("mediapipe.tasks.autoit.text") _AssertIsObj($text, "Failed to load mediapipe.tasks.autoit.text") Main() Func Main() Local $_MODEL_FILE = $MEDIAPIPE_SAMPLES_DATA_PATH & "\bert_embedder.tflite" If Not FileExists($_MODEL_FILE) Then $download_utils.download("https://storage.googleapis.com/mediapipe-models/text_embedder/bert_embedder/float32/1/bert_embedder.tflite", $_MODEL_FILE) EndIf ; Create your base options with the model that was downloaded earlier Local $base_options = $autoit.BaseOptions(_Mediapipe_Params("model_asset_path", $_MODEL_FILE)) ; Set your values for using normalization and quantization Local $l2_normalize = True ;@param {type:"boolean"} Local $quantize = False ;@param {type:"boolean"} ; Create the final set of options for the Embedder Local $options = $text.TextEmbedderOptions(_Mediapipe_Params( _ "base_options", $base_options, "l2_normalize", $l2_normalize, "quantize", $quantize)) Local $embedder = $text.TextEmbedder.create_from_options($options) ; Retrieve the first and second sets of text that will be compared Local $first_text = "I'm feeling so good" ;@param {type:"string"} Local $second_text = "I'm okay I guess" ;@param {type:"string"} ; Convert both sets of text to embeddings Local $first_embedding_result = $embedder.embed($first_text) Local $second_embedding_result = $embedder.embed($second_text) ; Retrieve the cosine similarity value from both sets of text, then take the ; cosine of that value to receie a decimal similarity value. Local $similarity = $text.TextEmbedder.cosine_similarity($first_embedding_result.embeddings(0), _ $second_embedding_result.embeddings(0)) ConsoleWrite('@@ Debug(' & @ScriptLineNumber & ') : $similarity = ' & $similarity & @CRLF) ;### Debug Console EndFunc ;==>Main Func _OnAutoItExit() _Mediapipe_Close() EndFunc ;==>_OnAutoItExit Func _AssertIsObj($vVal, $sMsg) If Not IsObj($vVal) Then ConsoleWriteError($sMsg & @CRLF) Exit 0x7FFFFFFF EndIf EndFunc ;==>_AssertIsObj
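For reference, the similarity value printed above is plain cosine similarity: the dot product of the two embedding vectors divided by the product of their magnitudes, giving a value between -1 and 1 (closer to 1 means the two texts point in nearly the same direction in embedding space). The tiny helper below is only an illustrative sketch of that formula on ordinary AutoIt arrays; it is not part of the UDF, and the name _CosineSimilarity is made up for this example.

; Illustrative only: cosine similarity of two equal-length numeric arrays
Func _CosineSimilarity($aVec1, $aVec2)
    Local $fDot = 0, $fNorm1 = 0, $fNorm2 = 0
    For $i = 0 To UBound($aVec1) - 1
        $fDot += $aVec1[$i] * $aVec2[$i]
        $fNorm1 += $aVec1[$i] * $aVec1[$i]
        $fNorm2 += $aVec2[$i] * $aVec2[$i]
    Next
    If $fNorm1 = 0 Or $fNorm2 = 0 Then Return 0
    Return $fDot / (Sqrt($fNorm1) * Sqrt($fNorm2))
EndFunc   ;==>_CosineSimilarity

; Two vectors pointing in the same direction give a similarity of ~1.0
Local $aA[3] = [1, 2, 3]
Local $aB[3] = [2, 4, 6]
ConsoleWrite("similarity = " & _CosineSimilarity($aA, $aB) & @CRLF)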
    1 point
  4. smbape

    OpenCV v4 UDF

    I wanted to use OpenCV v4+ in AutoIt. I found Opencv UDF on the forum, but there was no support for OpenCV v4+ This UDF provides support for OpenCV v4+ Update There is a new implementation using COM. It is almost as easy as python to use It is also possible to interact with GDI+ Download and extract opencv-4.11.0-windows.exe into a folder Download and extract autoit-opencv-4.11.0-com-v2.7.0.7z into a folder Sources Here Documentation A generated documentation for functions is available here (v2.7.0) Examples More samples can be found here (v2.7.0) To run them, please follow these instructions (v2.7.0) Showing an image #Region ;**** Directives created by AutoIt3Wrapper_GUI **** #AutoIt3Wrapper_UseX64=y #EndRegion ;**** Directives created by AutoIt3Wrapper_GUI **** #include "autoit-opencv-com\udf\opencv_udf_utils.au3" _OpenCV_Open("opencv-4.11.0-windows\opencv\build\x64\vc16\bin\opencv_world4110.dll", "autoit-opencv-com\autoit_opencv_com4110.dll") OnAutoItExitRegister("_OnAutoItExit") Example() Func Example() Local $cv = _OpenCV_get() If Not IsObj($cv) Then Return Local $img = _OpenCV_imread_and_check(_OpenCV_FindFile("samples\data\lena.jpg")) $cv.imshow("Image", $img) $cv.waitKey() $cv.destroyAllWindows() EndFunc ;==>Example Func _OnAutoItExit() _OpenCV_Close() EndFunc ;==>_OnAutoItExit Drawing contours #Region ;**** Directives created by AutoIt3Wrapper_GUI **** #AutoIt3Wrapper_UseX64=y #EndRegion ;**** Directives created by AutoIt3Wrapper_GUI **** #include "autoit-opencv-com\udf\opencv_udf_utils.au3" _OpenCV_Open("opencv-4.11.0-windows\opencv\build\x64\vc16\bin\opencv_world4110.dll", "autoit-opencv-com\autoit_opencv_com4110.dll") OnAutoItExitRegister("_OnAutoItExit") Example() Func Example() Local $cv = _OpenCV_get() If Not IsObj($cv) Then Return Local $img = _OpenCV_imread_and_check("samples\data\pic1.png") Local $img_grey = $cv.cvtColor($img, $CV_COLOR_BGR2GRAY) $cv.threshold($img_grey, 100, 255, $CV_THRESH_BINARY) Local $thresh = $cv.extended[1] Local $contours = $cv.findContours($thresh, $CV_RETR_TREE, $CV_CHAIN_APPROX_SIMPLE) ConsoleWrite("Found " & UBound($contours) & " contours" & @CRLF & @CRLF) $cv.drawContours($img, $contours, -1, _OpenCV_Scalar(0, 0, 255), 2) $cv.imshow("Image", $img) $cv.waitKey() $cv.destroyAllWindows() EndFunc ;==>Example Func _OnAutoItExit() _OpenCV_Close() EndFunc ;==>_OnAutoItExit Showing an image in autoit GUI #Region ;**** Directives created by AutoIt3Wrapper_GUI **** #AutoIt3Wrapper_UseX64=y #EndRegion ;**** Directives created by AutoIt3Wrapper_GUI **** #include "autoit-opencv-com\udf\opencv_udf_utils.au3" #include <GUIConstantsEx.au3> _OpenCV_Open("opencv-4.11.0-windows\opencv\build\x64\vc16\bin\opencv_world4110.dll", "autoit-opencv-com\autoit_opencv_com4110.dll") OnAutoItExitRegister("_OnAutoItExit") Example() Func Example() Local $cv = _OpenCV_get() If Not IsObj($cv) Then Return #Region ### START Koda GUI section ### Form= Local $FormGUI = GUICreate("show image in autoit gui", 400, 400, 200, 200) Local $Pic = GUICtrlCreatePic("", 0, 0, 400, 400) GUISetState(@SW_SHOW) #EndRegion ### END Koda GUI section ### Local $img = _OpenCV_imread_and_check(_OpenCV_FindFile("samples\data\lena.jpg")) _OpenCV_imshow_ControlPic($img, $FormGUI, $Pic) Local $nMsg While 1 $nMsg = GUIGetMsg() Switch $nMsg Case $GUI_EVENT_CLOSE ExitLoop EndSwitch WEnd $cv.destroyAllWindows() EndFunc ;==>Example Func _OnAutoItExit() _OpenCV_Close() EndFunc ;==>_OnAutoItExit Showing an image in an autosized autoit GUI #Region ;**** Directives created by AutoIt3Wrapper_GUI **** 
#AutoIt3Wrapper_UseX64=y #EndRegion ;**** Directives created by AutoIt3Wrapper_GUI **** #include "autoit-opencv-com\udf\opencv_udf_utils.au3" #include <GUIConstantsEx.au3> _OpenCV_Open("opencv-4.11.0-windows\opencv\build\x64\vc16\bin\opencv_world4110.dll", "autoit-opencv-com\autoit_opencv_com4110.dll") OnAutoItExitRegister("_OnAutoItExit") Example() Func Example() Local $cv = _OpenCV_get() If Not IsObj($cv) Then Return #Region ### START Koda GUI section ### Form= Local $FormGUI = GUICreate("show image in autoit gui [WINDOW_AUTOSIZE]", 400, 400, 200, 200) Local $Pic = GUICtrlCreatePic("", 0, 0, 400, 400) GUISetState(@SW_SHOW) #EndRegion ### END Koda GUI section ### Local $img = _OpenCV_imread_and_check(_OpenCV_FindFile("samples\data\lena.jpg")) ; get the image size and resize the GUI and the PIC control WinMove($FormGUI, "", Default, Default, $img.width, $img.height) GUICtrlSetPos($Pic, Default, Default, $img.width, $img.height) _OpenCV_imshow_ControlPic($img, $FormGUI, $Pic) Local $nMsg While 1 $nMsg = GUIGetMsg() Switch $nMsg Case $GUI_EVENT_CLOSE ExitLoop EndSwitch WEnd $cv.destroyAllWindows() EndFunc ;==>Example Func _OnAutoItExit() _OpenCV_Close() EndFunc ;==>_OnAutoItExit Screen capture #Region ;**** Directives created by AutoIt3Wrapper_GUI **** #AutoIt3Wrapper_UseX64=y #EndRegion ;**** Directives created by AutoIt3Wrapper_GUI **** #include "autoit-opencv-com\udf\opencv_udf_utils.au3" _OpenCV_Open("opencv-4.11.0-windows\opencv\build\x64\vc16\bin\opencv_world4110.dll", "autoit-opencv-com\autoit_opencv_com4110.dll") OnAutoItExitRegister("_OnAutoItExit") Example() Func Example() Local $cv = _OpenCV_get() If Not IsObj($cv) Then Return Local $iLeft = 200 Local $iTop = 200 Local $iWidth = 400 Local $iHeight = 400 Local $aRect[4] = [$iLeft, $iTop, $iWidth, $iHeight] Local $img = _OpenCV_GetDesktopScreenMat($aRect) $cv.imshow("Screen capture", $img) $cv.waitKey() $cv.destroyAllWindows() EndFunc ;==>Example Func _OnAutoItExit() _OpenCV_Close() EndFunc ;==>_OnAutoItExit Find template #Region ;**** Directives created by AutoIt3Wrapper_GUI **** #AutoIt3Wrapper_UseX64=y #EndRegion ;**** Directives created by AutoIt3Wrapper_GUI **** #include "autoit-opencv-com\udf\opencv_udf_utils.au3" _OpenCV_Open("opencv-4.11.0-windows\opencv\build\x64\vc16\bin\opencv_world4110.dll", "autoit-opencv-com\autoit_opencv_com4110.dll") OnAutoItExitRegister("_OnAutoItExit") Example() Func Example() Local $cv = _OpenCV_get() If Not IsObj($cv) Then Return Local $img = _OpenCV_imread_and_check(_OpenCV_FindFile("samples\data\mario.png")) Local $tmpl = _OpenCV_imread_and_check(_OpenCV_FindFile("samples\data\mario_coin.png")) ; The higher the value, the higher the match is exact Local $threshold = 0.8 Local $aMatches = _OpenCV_FindTemplate($img, $tmpl, $threshold) Local $aRedColor = _OpenCV_RGB(255, 0, 0) Local $aMatchRect[4] = [0, 0, $tmpl.width, $tmpl.height] For $i = 0 To UBound($aMatches) - 1 $aMatchRect[0] = $aMatches[$i][0] $aMatchRect[1] = $aMatches[$i][1] ; Draw a red rectangle around the matched position $cv.rectangle($img, $aMatchRect, $aRedColor) Next $cv.imshow("Find template example", $img) $cv.waitKey() $cv.destroyAllWindows() EndFunc ;==>Example Func _OnAutoItExit() _OpenCV_Close() EndFunc ;==>_OnAutoItExit Video capture file #Region ;**** Directives created by AutoIt3Wrapper_GUI **** #AutoIt3Wrapper_UseX64=y #EndRegion ;**** Directives created by AutoIt3Wrapper_GUI **** #include "autoit-opencv-com\udf\opencv_udf_utils.au3" #include <Misc.au3> 
_OpenCV_Open("opencv-4.11.0-windows\opencv\build\x64\vc16\bin\opencv_world4110.dll", "autoit-opencv-com\autoit_opencv_com4110.dll") OnAutoItExitRegister("_OnAutoItExit") Example() Func Example() Local $cv = _OpenCV_get() If Not IsObj($cv) Then Return Local $cap = _OpenCV_ObjCreate("cv.VideoCapture").create(_OpenCV_FindFile("samples\data\vtest.avi")) If Not $cap.isOpened() Then ConsoleWriteError("!>Error: cannot open the video file." & @CRLF) Exit EndIf Local $frame = _OpenCV_ObjCreate("cv.Mat") While 1 If _IsPressed("1B") Or _IsPressed(Hex(Asc("Q"))) Then ExitLoop EndIf If Not $cap.read($frame) Then ConsoleWriteError("!>Error: cannot read the video or end of the video." & @CRLF) ExitLoop EndIf $cv.imshow("capture video file", $frame) Sleep(30) WEnd $cv.destroyAllWindows() EndFunc ;==>Example Func _OnAutoItExit() _OpenCV_Close() EndFunc ;==>_OnAutoItExit Video capture camera #Region ;**** Directives created by AutoIt3Wrapper_GUI **** #AutoIt3Wrapper_UseX64=y #EndRegion ;**** Directives created by AutoIt3Wrapper_GUI **** #include "autoit-opencv-com\udf\opencv_udf_utils.au3" #include <Misc.au3> _OpenCV_Open("opencv-4.11.0-windows\opencv\build\x64\vc16\bin\opencv_world4110.dll", "autoit-opencv-com\autoit_opencv_com4110.dll") OnAutoItExitRegister("_OnAutoItExit") Example() Func Example() Local $cv = _OpenCV_get() If Not IsObj($cv) Then Return Local $iCamId = 0 Local $cap = _OpenCV_ObjCreate("cv.VideoCapture").create($iCamId) If Not $cap.isOpened() Then ConsoleWriteError("!>Error: cannot open the camera." & @CRLF) Exit EndIf Local $frame = _OpenCV_ObjCreate("cv.Mat") While 1 If _IsPressed("1B") Or _IsPressed(Hex(Asc("Q"))) Then ExitLoop EndIf If $cap.read($frame) Then ;; Flip the image horizontally to give the mirror impression $frame = $cv.flip($frame, 1) $cv.imshow("capture camera", $frame) Else ConsoleWriteError("!>Error: cannot read the camera." & @CRLF) EndIf Sleep(30) WEnd $cv.destroyAllWindows() EndFunc ;==>Example Func _OnAutoItExit() _OpenCV_Close() EndFunc ;==>_OnAutoItExit Resize an image with GDI+ #Region ;**** Directives created by AutoIt3Wrapper_GUI **** #AutoIt3Wrapper_UseX64=y #EndRegion ;**** Directives created by AutoIt3Wrapper_GUI **** #include "autoit-opencv-com\udf\opencv_udf_utils.au3" _OpenCV_Open("opencv-4.11.0-windows\opencv\build\x64\vc16\bin\opencv_world4110.dll", "autoit-opencv-com\autoit_opencv_com4110.dll") _GDIPlus_Startup() OnAutoItExitRegister("_OnAutoItExit") Example() Func Example() Local $cv = _OpenCV_get() If Not IsObj($cv) Then Return Local $bMethod = 1 Local $img = _OpenCV_imread_and_check(_OpenCV_FindFile("samples\tutorial_code\yolo\scooter-5180947_1920.jpg")) Local $resized If $bMethod Then $resized = $img.GdiplusResize(600, 400) Else Local $hImage = $img.convertToBitmap() Local $hResizedImage = _GDIPlus_ImageResize($hImage, 600, 400) _GDIPlus_BitmapDispose($hImage) $resized = $cv.createMatFromBitmap($hResizedImage) _GDIPlus_BitmapDispose($hResizedImage) EndIf $cv.imshow("Resized with GDI+", $resized) $cv.waitKey() $cv.destroyAllWindows() EndFunc ;==>Example Func _OnAutoItExit() _GDIPlus_Shutdown() _OpenCV_Close() EndFunc ;==>_OnAutoItExit
    1 point
  5. This UDF brings the power and flexibility of jq to AutoIt scripts. jq is an open-source, powerful, and flexible command-line JSON processor. As it says on their website, jq is like 'sed' for JSON. jq can be used for the simplest of tasks, like retrieving JSON objects and values (parsing), all the way up to very advanced JSON processing using its numerous built-in functions and conditional processing. Its built-in functions can handle math, selection, conditional processing, mapping, object and array manipulation, flattening, reduction, grouping, and much more. You can even create your own jq functions. You can learn more about jq, and play with it in real time using the online jq playground, on their website. Here are some helpful links to get you more familiar with jq, what can be done with it, its built-in functions, and its syntax.

jq Website: https://jqlang.github.io/jq/
jq Manual: https://jqlang.github.io/jq/manual/
jqWiki (FAQ, Cookbook, Advanced Topics): https://github.com/jqlang/jq/wiki

jq is a single 32- or 64-bit executable that has no other dependencies. Just like the SQLite UDF, the only requirement to use this UDF is that the jq executable reside in a location from which the UDF can execute it. The latest win32 and win64 versions are included in the UDF download. You can always get newer versions from the jq website.

jq at a high level

Like 'sed', jq reads JSON in, either through STDIN or from one or more files, processes it through one or more "filters", and outputs the results. You can optionally supply "options" that affect how it reads the input, where it gets its "filters", and how it writes its output. It looks a little like this:

JSON ---> jq processor (using supplied filters and options) ---> Output

So in jq lingo, you basically use "filters" to tell jq what you want it to do; that is why the main functions in the UDF file ( _jqExec() and _jqExecFile() ) refer to filters and options. Please note that jq works with relatively strict JSON: all JSON read must conform to the standard. Luckily, jq is pretty good at identifying where a format error exists in non-standard JSON.

The jq UDF

There are two main functions in the UDF file, _jqExec and _jqExecFile. With these two functions, you can do pretty much anything that jq can do. The only difference between the two functions is whether the JSON is supplied as a string or as a file. The two primary functions simply call the jq executable with the supplied information, after properly formatting the parameters. There are additional functions in the UDF to easily pretty-print your JSON, compact-print your JSON, dump the JSON data with its associated paths, and check whether specific JSON keys exist, but they all just execute _jqExec or _jqExecFile with the proper filter. There are also a couple of extra functions to display which versions of the UDF and the jq executable you are currently using, and a couple of functions to enable and disable logging of jq information for debugging purposes. Most of the jq UDF functions set @error if unsuccessful; some also include @extended info. Please see the actual function headers for more information on their usage and return values.

The two primary functions below just format your jq request and pass it on to the jq executable. They also properly escape double quotes (") used in the filter. For most simple tasks, you just need to supply the JSON source and a filter.
_jqExec($sJson, $sFilter, $sOptions = Default, $sWorkingDir = Default) Or _jqExecFile($sJsonFile, $sFilter, $sOptions = Default, $sWorkingDir = Default) Using jq in your script As stated earlier, the jq executable must reside somewhere where the script can locate and execute it. The _jqInit() function always has to be executed before any jq processing occurs. _jqInit() merely locates the executable or uses the supplied path. It also clears any previous debug log. The jq UDF folder contains a jq example script that has several examples to how to do some of the most common JSON processing tasks. Here are a few examples to get you started: How to pretty-print some JSON #include "jq.au3" ;_jqInit is only needed if the jq executale is not in the PATH or @ScriptDir _jqInit() If @error Then Exit ConsoleWrite("ERROR: Unable to initialize jq - @error = " & @error & @CRLF) $sJson = '{"fruits":[{"Apple":{"color":"Red","season":"Fall"}}, {"Banana":{"color":"Yellow","season":"Summer"}}]}' $sCmdOutput = _jqPrettyPrintJson($sJson) ConsoleWrite(@CRLF & "Pretty-Print JSON" & @CRLF & $sCmdOutput & @CRLF) How to compact-print some JSON #include "jq.au3" ;Initialize jq environment _jqInit() If @error Then Exit ConsoleWrite("ERROR: Unable to initialize jq - @error = " & @error & @CRLF) $sJson = '{ "fruits" : [{"Apple" : {"color":"Red","season":"Fall"}}, {"Banana":{"color":"Yellow","season":"Summer"}}]}' $sCmdOutput = _jqCompactPrintJson($sJson) ConsoleWrite(@CRLF & "Compact-Print JSON" & @CRLF & $sCmdOutput & @CRLF) Dump JSON data (paths and values) #include "jq.au3" ;Initialize jq environment _jqInit() If @error Then Exit ConsoleWrite("ERROR: Unable to initialize jq - @error = " & @error & @CRLF) $sJson = '{ "fruits" : [{"Apple" : {"color":"Red","season":"Fall"}}, {"Banana":{"color":"Yellow","season":"Summer"}}]}' $sCmdOutput = _jqDump($sJson) ConsoleWrite(@CRLF & "Dump JSON paths and values" & @CRLF & $sCmdOutput & @CRLF) How to GET JSON values #include "jq.au3" ;Initialize jq environment _jqInit() If @error Then Exit ConsoleWrite("ERROR: Unable to initialize jq - @error = " & @error & @CRLF) $sJson = '{"Apple" : {"color":"Red","season":"Fall"}, "Banana":{"color":"Yellow","season":"Summer"}}' $sFilter = '.Banana.color' $sCmdOutput = _jqExec($sJson, $sFilter) ConsoleWrite("Get color of banana" & @CRLF) ConsoleWrite("Input: : " & _jqCompactPrintJson($sJson) & @CRLF) ConsoleWrite("Filter : " & $sFilter & @CRLF) ConsoleWrite("Output : " & $sCmdOutput & @CRLF) or #include "jq.au3" ;Initialize jq environment _jqInit() If @error Then Exit ConsoleWrite("ERROR: Unable to initialize jq - @error = " & @error & @CRLF) $sJson = '{"Apple" : {"color":"Red","season":"Fall"}, "Banana":{"color":"Yellow","season":"Summer"}}' $sFilter = 'getpath(["Banana", "color"])' $sCmdOutput = _jqExec($sJson, $sFilter) ConsoleWrite("Get color of banana" & @CRLF) ConsoleWrite("Input: : " & _jqCompactPrintJson($sJson) & @CRLF) ConsoleWrite("Filter : " & $sFilter & @CRLF) ConsoleWrite("Output : " & $sCmdOutput & @CRLF) Check for the existence of a key #include "jq.au3" ;Initialize jq environment _jqInit() If @error Then Exit ConsoleWrite("ERROR: Unable to initialize jq - @error = " & @error & @CRLF) $sJson = '{"Apple":{"color":"Red","season":"Fall"}, "Banana":{"color":"Yellow","season":"Summer"}}' $sFilter = '.Banana | has("color")' $sCmdOutput = _jqExec($sJson, $sFilter) ConsoleWrite("Check for existence of color key within Banana object" & @CRLF) ConsoleWrite("Input: : " & _jqCompactPrintJson($sJson) & @CRLF) ConsoleWrite("Filter : " & 
$sFilter & @CRLF) ConsoleWrite("Output : " & $sCmdOutput & @CRLF) Count of how many Items in an object #include "jq.au3" ;Initialize jq environment _jqInit() If @error Then Exit ConsoleWrite("ERROR: Unable to initialize jq - @error = " & @error & @CRLF) $sJson = '{"Apple":{"color":"Red"}, "Banana":{"color":"Yellow","season":"Summer"}}' $sFilter = '.Banana | length' $sCmdOutput = _jqExec($sJson, $sFilter) ConsoleWrite("How many items in the Banana object" & @CRLF) ConsoleWrite("Input: : " & _jqCompactPrintJson($sJson) & @CRLF) ConsoleWrite("Filter : " & $sFilter & @CRLF) ConsoleWrite("Output : " & $sCmdOutput & @CRLF) How to PUT/Create/Modify JSON #include "jq.au3" ;Initialize jq environment _jqInit() If @error Then Exit ConsoleWrite("ERROR: Unable to initialize jq - @error = " & @error & @CRLF) $sInput = "" $sFilter = 'setpath(["Apple","color"];"Red") | setpath(["Banana","color"];"Yellow") | setpath(["Banana","season"];"Summer")' $sOptions = '-n' ;required if no input supplied $sCmdOutput = _jqExec($sInput, $sFilter, $sOptions) ConsoleWrite("Update/Create JSON" & @CRLF) ConsoleWrite("Filter : " & $sFilter & @CRLF) ConsoleWrite("Output : " & @CRLF & $sCmdOutput & @CRLF) List all of the fruits (top-level keys) #include "jq.au3" ;Initialize jq environment _jqInit() If @error Then Exit ConsoleWrite("ERROR: Unable to initialize jq - @error = " & @error & @CRLF) $sJson = '{"Apple":{"color":"Red"}, "Banana":{"color":"Yellow","season":"Summer"}}' $sFilter = 'keys | .[]' $sCmdOutput = _jqExec($sJson, $sFilter) ConsoleWrite("List all top-level keys (fruits)" & @CRLF) ConsoleWrite("Input : " & $sJson & @CRLF) ConsoleWrite("Filter : " & $sFilter & @CRLF) ConsoleWrite("Output : " & @CRLF & $sCmdOutput & @CRLF) Calculate the sum of all of the objects' price * qty #include "jq.au3" ;Initialize jq environment _jqInit() If @error Then Exit ConsoleWrite("ERROR: Unable to initialize jq - @error = " & @error & @CRLF) $sJson = '[{"id":1,"price":20.00,"qty":10},{"id":2,"price":15.00,"qty":20.25},{"id":3,"price":10.50,"qty":30}]' $sFilter = 'map(.price * .qty) | add' $sCmdOutput = _jqExec($sJson, $sFilter) ConsoleWrite("Calculate the sum of all of the objects' price * qty" & @CRLF) ConsoleWrite("Input : " & $sJson & @CRLF) ConsoleWrite("Filter : " & $sFilter & @CRLF) ConsoleWrite("Output : " & $sCmdOutput & @CRLF) The examples above, and the ones in the example files, merely scratch the surface of what jq can do. It may look intimidating at first but it really isn't that bad once you start playing with it. You can find several more examples and play around with them or create your own using jqPlayground. jqPlayground is a separate, stand alone, interactive tool for creating, testing and learning JSON processing using jq. If you have any questions regarding the UDF, or how to perform a certain task using jq, I'll try my best to answer them. Since jq has been around for a while now, there's also several jq-related questions and answers on StackOverflow. >>> Download the UDF in the Files Section <<<
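All of the snippets above pass the JSON in as a string with _jqExec. For completeness, here is a minimal sketch of the file-based variant, _jqExecFile, using the signature shown earlier; the temporary file written to @TempDir is only there to make the example self-contained. Read JSON from a file using _jqExecFile #include "jq.au3"

;Initialize jq environment
_jqInit()
If @error Then Exit ConsoleWrite("ERROR: Unable to initialize jq - @error = " & @error & @CRLF)

;Write a small JSON file just so this example is self-contained
$sJsonFile = @TempDir & "\fruits.json"
FileDelete($sJsonFile)
FileWrite($sJsonFile, '{"Apple":{"color":"Red","season":"Fall"}, "Banana":{"color":"Yellow","season":"Summer"}}')

$sFilter = '.Apple.season'
$sCmdOutput = _jqExecFile($sJsonFile, $sFilter)

ConsoleWrite("Get season of apple (from a file)" & @CRLF)
ConsoleWrite("Filter : " & $sFilter & @CRLF)
ConsoleWrite("Output : " & $sCmdOutput & @CRLF)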
    1 point