[MediaPipe] Pose Landmark Detection
AI, ML, DL 2025. 2. 11. 20:32
Let's detect pose landmarks using MediaPipe.
import numpy as np
import cv2
from mediapipe import solutions
from mediapipe.framework.formats import landmark_pb2
import mediapipe as mp
from mediapipe.tasks import python
from mediapipe.tasks.python import vision

def draw_landmarks_on_image(rgb_image, detection_result):
    pose_landmarks_list = detection_result.pose_landmarks
    annotated_image = np.copy(rgb_image)

    # Loop through the detected poses to visualize.
    for idx in range(len(pose_landmarks_list)):
        pose_landmarks = pose_landmarks_list[idx]

        # Draw the pose landmarks.
        pose_landmarks_proto = landmark_pb2.NormalizedLandmarkList()
        pose_landmarks_proto.landmark.extend([
            landmark_pb2.NormalizedLandmark(x=landmark.x, y=landmark.y, z=landmark.z)
            for landmark in pose_landmarks
        ])
        solutions.drawing_utils.draw_landmarks(
            annotated_image,
            pose_landmarks_proto,
            solutions.pose.POSE_CONNECTIONS,
            solutions.drawing_styles.get_default_pose_landmarks_style())
    return annotated_image

# Create a PoseLandmarker object.
base_options = python.BaseOptions(model_asset_path='pose_landmarker_full.task')
# https://ai.google.dev/edge/mediapipe/solutions/vision/pose_landmarker
options = vision.PoseLandmarkerOptions(base_options=base_options,
                                       output_segmentation_masks=True)
detector = vision.PoseLandmarker.create_from_options(options)

# Load the input image.
image = mp.Image.create_from_file("pose.jpg")

# Detect pose landmarks from the input image.
detection_result = detector.detect(image)

# Process the detection result. In this case, visualize it.
annotated_image = draw_landmarks_on_image(image.numpy_view(), detection_result)
cv2.imshow('sean', cv2.cvtColor(annotated_image, cv2.COLOR_RGB2BGR))
cv2.waitKey(0)

# Visualize the pose segmentation mask.
# (Cast to uint8 so cv2.imshow interprets the 0-255 range correctly.)
# segmentation_mask = detection_result.segmentation_masks[0].numpy_view()
# visualized_mask = np.repeat(segmentation_mask[:, :, np.newaxis], 3, axis=2) * 255
# cv2.imshow('sean', visualized_mask.astype(np.uint8))
# cv2.waitKey(0)
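Because output_segmentation_masks=True is set, the detector also returns a per-pixel person mask; the commented-out block above shows a plain black-and-white view of it. Below is a minimal sketch of an alternative visualization (my own addition, not from the original post) that blends the mask with the input image:

# A sketch (not in the original post): blend the person segmentation mask
# with the input image to highlight the detected person.
segmentation_mask = detection_result.segmentation_masks[0].numpy_view()
mask_3c = np.repeat(segmentation_mask[:, :, np.newaxis], 3, axis=2)

frame = image.numpy_view().astype(np.float64)
highlighted = (frame * (0.3 + 0.7 * mask_3c)).astype(np.uint8)

cv2.imshow('mask overlay', cv2.cvtColor(highlighted, cv2.COLOR_RGB2BGR))
cv2.waitKey(0)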
Attached model file: pose_landmarker_full.task (8.96 MB)
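If the attachment is unavailable, the model can also be downloaded programmatically. A minimal sketch, assuming the model is still hosted at the URL published in the MediaPipe pose landmarker docs linked above (verify the URL there before relying on it):

import urllib.request

# Assumed hosting URL from the MediaPipe pose landmarker docs; may change.
MODEL_URL = ('https://storage.googleapis.com/mediapipe-models/pose_landmarker/'
             'pose_landmarker_full/float16/1/pose_landmarker_full.task')
urllib.request.urlretrieve(MODEL_URL, 'pose_landmarker_full.task')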
Type in the source and run it. The detector returns 33 pose landmarks per person, indexed as follows (an example of reading them by index appears after the list):
0 - nose
1 - left eye (inner)
2 - left eye
3 - left eye (outer)
4 - right eye (inner)
5 - right eye
6 - right eye (outer)
7 - left ear
8 - right ear
9 - mouth (left)
10 - mouth (right)
11 - left shoulder
12 - right shoulder
13 - left elbow
14 - right elbow
15 - left wrist
16 - right wrist
17 - left pinky
18 - right pinky
19 - left index
20 - right index
21 - left thumb
22 - right thumb
23 - left hip
24 - right hip
25 - left knee
26 - right knee
27 - left ankle
28 - right ankle
29 - left heel
30 - right heel
31 - left foot index
32 - right foot index
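As a quick sketch of how these indices are used (assuming the detection_result and image from the script above, with at least one detected person):

# A minimal sketch, assuming detection_result and image come from the
# script above and at least one pose was detected.
NOSE, LEFT_SHOULDER, RIGHT_SHOULDER = 0, 11, 12

landmarks = detection_result.pose_landmarks[0]  # first detected person

# Coordinates are normalized to [0, 1] relative to the image size.
nose = landmarks[NOSE]
print(f'nose: x={nose.x:.3f}, y={nose.y:.3f}, z={nose.z:.3f}')

# Convert a normalized landmark to pixel coordinates.
h, w = image.numpy_view().shape[:2]
ls, rs = landmarks[LEFT_SHOULDER], landmarks[RIGHT_SHOULDER]
print(f'left shoulder pixel: ({int(ls.x * w)}, {int(ls.y * h)})')
print(f'shoulder width (normalized): {abs(ls.x - rs.x):.3f}')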