Mediapipe into development #53
Conversation
def detect(self, image: np.ndarray) -> MediapipeObservation:
    rgb_image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)  # assumes we're always getting BGR input - check with Jon to verify
I think BGR is a very cv2-specific convention at this point; it's a rather old convention (.bmp images were BGR, for example), and RGB is a much more common standard these days.
Is Basler RGB? That is, do images come out of the Basler pipeline as RGB or BGR?
We should have a policy of converting cv2 images to RGB immediately upon read, and assuming they are RGB anywhere else in their lifetime.
Yep, the RGB camera comes out in RGB, the IR in monochrome.
I think immediate conversion sounds like a good idea.
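A minimal sketch of what that convert-on-read policy could look like (the `read_frame_rgb` helper is hypothetical, not part of this PR):

```python
from typing import Optional

import cv2
import numpy as np


def read_frame_rgb(capture: cv2.VideoCapture) -> Optional[np.ndarray]:
    """Read a frame from an OpenCV capture and convert it to RGB immediately,
    so everything downstream can assume RGB for the rest of the frame's lifetime."""
    success, bgr_frame = capture.read()
    if not success:
        return None
    # cv2.VideoCapture returns BGR frames; convert once, at the boundary.
    return cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2RGB)
```

With a helper like this at the capture boundary, the trackers themselves would never need a BGR-to-RGB conversion.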
return self.num_body_points + (2 * self.num_single_hand_points) + self.num_face_points

@property
def pose_trajectories(self) -> np.ndarray[..., 3]:  # TODO: not sure how to type these array sizes, seems like it doesn't like the `self` reference
Typing is tricky with numpy arrays - I think there are packages that help with it, but nothing particularly venerable.
I think there's a way to set up custom type annotations and validators in pydantic (or dataclasses)? Or just type hint them as `np.ndarray` and do a shape validation on initialization or read?
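A rough sketch of the "plain `np.ndarray` hint plus shape validation on initialization" option (the wrapper class and expected shape here are assumptions, not taken from the PR):

```python
from dataclasses import dataclass

import numpy as np


@dataclass
class PoseTrajectories:
    """Hypothetical wrapper: hint the field as a plain np.ndarray and enforce
    the expected (n_frames, n_points, 3) shape at construction time."""

    data: np.ndarray

    def __post_init__(self) -> None:
        # Validate the shape once, on initialization, instead of encoding it in the type hint.
        if self.data.ndim != 3 or self.data.shape[-1] != 3:
            raise ValueError(
                f"expected an array of shape (n_frames, n_points, 3), got {self.data.shape}"
            )
```

Pydantic would let you do the same check in a field validator if the project ends up preferring that over dataclasses.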
This is great! Works like a charm :) (Note: the wizard pose is because I was trying to face the camera while pressing the Pause (space) bar with my elbow 💪 😂) Gonna merge this into my branch so I can play with it, thanks! I left some notes on your comments, looking forward to chatting about your experience building this thing.
Great @jonmatthis, your changes look good as well! Looking at this and the charuco tracker, I think it's worth declaring some of the methods as abstract in the base class.
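For reference, a minimal sketch of what that could look like (the `BaseTracker` name, the `annotate_image` method, and the untyped `detect` return are assumptions based on the `detect` signature quoted above, not the actual class in this repo):

```python
from abc import ABC, abstractmethod

import numpy as np


class BaseTracker(ABC):
    """Hypothetical shared tracker interface: the methods each concrete tracker
    (mediapipe, charuco, ...) would be required to implement."""

    @abstractmethod
    def detect(self, image: np.ndarray):
        """Run detection on a single RGB image and return an observation."""
        ...

    @abstractmethod
    def annotate_image(self, image: np.ndarray) -> np.ndarray:
        """Draw the latest detection onto a copy of the image."""
        ...
```

Subclasses that forget to implement one of these would then fail loudly at instantiation rather than at call time.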
Applying new architecture changes to the Mediapipe tracker.
Currently running fine in the webcam demo. For now I'm using the built-in mediapipe drawing utils for animation, as I figured @jonmatthis or @aaroncherian would have opinions on how best to annotate the output.
TODOs: