Channel: ROS Answers: Open Source Q&A Forum - RSS feed

How to get the correct X Y distance from depth camera

Hi, I would like to create a TF for an object detected with OpenCV using a depth camera. Looking at the code from the book "ROS Robotics By Example", they take the X, Y coordinates detected in the picture and put them directly into the TF. I am confused here: the X, Y coordinates are just pixel positions in the image, so why can they be put into the TF directly? Thanks!

```python
# Find the circumcircle of the green ball and draw a blue outline around it
(self.cf_u, self.cf_v), radius = cv2.minEnclosingCircle(ball_image)
ball_center = (int(self.cf_u), int(self.cf_v))

# This function builds the Crazyflie base_link tf transform and publishes it.
def update_cf_transform(self, x, y, z):
    # send position as transform from the parent "kinect2_ir_optical_frame" to the
    # child "crazyflie/base_link" (described by crazyflie.urdf.xacro)
    self.pub_tf.sendTransform((x, y, z),
                              tf.transformations.quaternion_from_euler(self.r, self.p, self.y),
                              rospy.Time.now(),
                              "crazyflie/base_link",
                              "kinect2_ir_optical_frame")
```

[https://github.com/PacktPublishing/ROS-Robotics-By-Example/blob/master/Chapter_9_code/crazyflie_autonomous/scripts/detect_crazyflie.py](https://github.com/PacktPublishing/ROS-Robotics-By-Example/blob/master/Chapter_9_code/crazyflie_autonomous/scripts/detect_crazyflie.py)
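For reference, the usual way to turn a pixel (u, v) plus a depth reading into metric X, Y in the camera's optical frame is pinhole back-projection. A minimal sketch, not the book's code: the intrinsics fx, fy, cx, cy here are illustrative placeholders (in practice they come from the camera's camera_info topic or an image_geometry PinholeCameraModel):

```python
# Sketch: back-project a pixel (u, v) with depth Z (meters) into metric
# (X, Y, Z) in the camera's optical frame, using the pinhole camera model.
# fx, fy (focal lengths in pixels) and cx, cy (principal point) would
# normally be read from the depth camera's camera_info message; the
# numeric values used below are made up for illustration.

def pixel_to_metric(u, v, depth, fx, fy, cx, cy):
    """Return (X, Y, Z) in meters in the optical frame."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)

# Example: a pixel exactly at the principal point, 2 m away,
# maps to X = Y = 0 in the optical frame.
print(pixel_to_metric(960.0, 540.0, 2.0, 1081.37, 1081.37, 960.0, 540.0))
# → (0.0, 0.0, 2.0)
```

The result is a position in the optical frame (here that would be kinect2_ir_optical_frame), which is exactly the kind of metric value a sendTransform call expects, rather than raw pixel coordinates.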




