I am trying to calibrate a projector-camera stereo pair. Is there a projector-to-camera calibration package in ROS? I am looking for something similar to the monocular or stereo camera calibration tools available in ROS. (http://wiki.ros.org/camera_calibration/Tutorials/MonocularCalibration)
↧
ROS Projector Camera calibration package
↧
Changing the dictionary of aruco in aruco_ros package
I am using a Logitech C270 at 640x480 resolution to detect ArUco markers with the aruco_ros package. I can detect the marker when the distance between the marker and the camera is 180 cm, but I want to detect it at 240 cm. The marker I used is from the 6x6 dictionary. I tried a 5x5 marker but that failed. I think that if I can change the default dictionary from 6x6 to 4x4, I can solve this problem. I need help changing the dictionary, or any other idea that could assist me, please.
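A lower-bit dictionary gives physically larger bits on the same printed marker, which generally helps at longer range, so the 4x4 idea is reasonable. As a point of reference, here is a minimal sketch of 4x4 detection using OpenCV's aruco module directly (not aruco_ros itself; whether aruco_ros exposes a dictionary parameter depends on the package version, and the exact API names vary between OpenCV releases):

    # Minimal sketch: detect 4x4 ArUco markers with OpenCV's aruco module.
    # Assumes opencv-contrib-python < 4.7; newer releases renamed these calls.
    import cv2

    dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
    parameters = cv2.aruco.DetectorParameters_create()

    image = cv2.imread("frame.png")  # hypothetical test frame from the C270
    corners, ids, rejected = cv2.aruco.detectMarkers(image, dictionary,
                                                     parameters=parameters)
    print(ids)  # None if no 4x4 marker was found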
↧
↧
camera calibration algorithm
Hello everyone!
I've read all the official ROS tutorials (topics, nodes, packages, etc.), and then I wanted to calibrate my camera
(Bus 001 Device 002: ID 058f:3881 Alcor Micro Corp.)
[screen](http://free-picload.com/image/G040Y)
I did everything described in this [article](http://wiki.ros.org/camera_calibration) and also tried following this [video](https://www.youtube.com/watch?v=WvGcxJ1L3NM), but nothing happened.
My sequence of actions:
1) git clone image_pipeline into the src folder of catkin_ws
2) entered the catkin_ws folder, then ran catkin_make
3) ran roscore in another terminal
4) source devel/setup.bash
5) rosdep install camera_calibration
Up to this step everything was fine.
6) Then I checked which topics were running with rostopic list (and here I hit the first problem). I saw only
/rosout
/rosout_agg
[screen](http://free-picload.com/image/G0gqk)
although I should also see
/camera/camera_info
/camera/image_raw
I ignored this and went to the next step. The calibration window opened, but the picture froze, and the terminal printed a series of errors along the lines of "error data jpeg". The camera calibration window does not receive images from the camera.
Can someone explain what I am doing incorrectly, please?
Thank you
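For reference: the /camera/camera_info and /camera/image_raw topics only exist while a camera driver node is running; building image_pipeline does not start one. A hypothetical sequence using the usb_cam driver (the device path is an assumption, and the checkerboard parameters are the ones from the linked wiki tutorial; adapt both to your setup):

    rosrun usb_cam usb_cam_node _video_device:=/dev/video0
    rosrun camera_calibration cameracalibrator.py --size 8x6 --square 0.108 image:=/usb_cam/image_raw camera:=/usb_cam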
↧
Problems when relaunching camera
I am using the Kinetic distribution and getting an error when relaunching the camera after closing it with Ctrl+C in the terminal. The first launch after starting the PC works fine; on the second launch I get the following error:

    [ERROR] [1548665838.684033928]: VIDIOC_S_FMT error 5, Input/output error
    [usb_cam-1] process has died [pid 7194, exit code 1, cmd /opt/ros/kinetic/lib/usb_cam/usb_cam_node __name:=usb_cam __log:=/home/nuvo/.ros/log/a60e58e0-22c3-11e9-8944-f81a670f7983/usb_cam-1.log].
    log file: /home/nuvo/.ros/log/a60e58e0-22c3-11e9-8944-f81a670f7983/usb_cam-1*.log
How can I solve it?
Command I am using to launch it is:
    roslaunch usb_cam usb_cam-test.launch
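If it helps to narrow things down, the V4L2 layer can be queried directly to see whether the device still answers format requests after the first run; a diagnostic sketch, assuming the camera is /dev/video0:

    v4l2-ctl -d /dev/video0 --list-formats-ext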
↧
↧
How to retrieve absolute pose in ROS
Hello,
I'm currently doing detection of ArUco and Alvar fiducial markers (glued on a cube), trying to grasp the cube based on the detection of the marker. The camera is mounted on the Panda from Franka Emika, which is controlled by ROS with the aid of MoveIt! (I'm sending some joint_values to the robot so it can roughly scan the table it stands on). Here is a link to a video where you can see what I'm doing.
Link: https://youtu.be/oiPJZO2T00k
Technical part:
I want to retrieve the absolute pose of the marker, i.e. the transform from my first frame, in this case `/world`, to `ar_marker_3`, and then send this to the robot as its goal. But for some reason (as seen in the video), every time the robot moves in the XY direction of the table, the frame of the marker varies as well, meaning the pose of the marker is actually not absolute (I presume). So my question is: is this correct? Or is my way of thinking wrong? I thought that the pose of the marker relative to the world SHOULD always be fixed, since the marker is not moving.
I think this question is maybe not specific to ROS but to robotics in general, but I'm quite at a dead end, so any help would be appreciated. Thanks!
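A minimal sketch of the lookup being described, using tf2 (frame names from this question, minus the leading slash that tf2 rejects; the node name is hypothetical):

    import rospy
    import tf2_ros

    rospy.init_node('marker_pose_lookup')  # hypothetical node name
    tf_buffer = tf2_ros.Buffer()
    tf_listener = tf2_ros.TransformListener(tf_buffer)
    rospy.sleep(1.0)  # give the listener time to fill the buffer

    # Pose of ar_marker_3 expressed in the world frame; rospy.Time(0)
    # requests the latest available transform.
    ts = tf_buffer.lookup_transform('world', 'ar_marker_3',
                                    rospy.Time(0), rospy.Duration(1.0))
    print(ts.transform.translation, ts.transform.rotation)

If the marker really is static, this lookup should return an approximately constant transform; systematic variation correlated with robot motion usually points at the camera's mounting transform or the camera calibration rather than at tf itself.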
Pictures of the tf tree, in case they help:
[C:\fakepath\tftree1.png](/upfiles/15494925571229308.png)
[C:\fakepath\tftree2.png](/upfiles/154949256882357.png)
↧
How to add two usb 2.0 cameras with libuvc_camera driver?
Hi! I am a student conducting research, and I really appreciate your work on ROS development. I have run into some problems using USB cameras in ROS. Here is the relevant information.
**Names and versions of stacks:** ROS Kinetic with libuvc_camera
**Platform:** Ubuntu 16.04
**Problem:** I want to use two generic USB 2.0 cameras with the libuvc_camera package. Currently, I can only use one camera, and I can only show its video stream with the Image display type (not the Camera display type) in RViz. My question is how to use two USB 2.0 cameras in ROS and show them with the Camera display type. I am not sure whether I need to change the driver code or the launch file. If code changes are necessary, could you give me some suggestions, please? I might have left out some important details; please let me know if you need any more information from me. Thank you so much.
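A common pattern, sketched in the hypothetical launch file below, is to run two driver nodes in separate namespaces and tell the devices apart by serial number (the vendor/product/serial values are placeholders; lsusb shows the real ones). Note also that RViz's Camera display requires a camera_info topic alongside the image, so each camera needs calibration data loaded for that display type to work.

    <launch>
      <!-- Hypothetical two-camera setup; vendor/product/serial are placeholders. -->
      <group ns="camera_left">
        <node pkg="libuvc_camera" type="camera_node" name="uvc_camera">
          <param name="vendor" value="0x0000"/>
          <param name="product" value="0x0000"/>
          <param name="serial" value="LEFT_SERIAL"/>
          <param name="frame_id" value="camera_left"/>
        </node>
      </group>
      <group ns="camera_right">
        <node pkg="libuvc_camera" type="camera_node" name="uvc_camera">
          <param name="vendor" value="0x0000"/>
          <param name="product" value="0x0000"/>
          <param name="serial" value="RIGHT_SERIAL"/>
          <param name="frame_id" value="camera_right"/>
        </node>
      </group>
    </launch>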
↧
Camera Info Subscriber Throws Error
I am trying to subscribe to the camera info topic provided by my stereo-camera using Python:
self.camera_info = rospy.Subscriber("/nerian_stereo/stereo_camera_info", CameraInfo)
however, I get the following error:
[ERROR] [1550489001.885020747]: Client [/ros_node] wants topic /nerian_stereo/stereo_camera_info to have datatype/md5sum [sensor_msgs/CameraInfo/c9a58c1b0b154e0e6da7578cb991d214], but our version has [nerian_stereo/StereoCameraInfo/eca3b527e5502c28c9f17e8c40cf1d1c]. Dropping connection.
I did check if that topic publishes or not, and it does publish the following:
    header:
      seq: 0
      stamp:
        secs: 1550489069
        nsecs: 658059275
      frame_id: "map"
    left_info:
      header:
        seq: 195108
        stamp:
          secs: 1550489069
          nsecs: 658059275
        frame_id: "map"
      height: 592
      width: 800
      distortion_model: "plumb_bob"
      D: [-0.06367426558562306, 0.09877103851635476, 0.00031019558294633924, -0.00034850082816263023, -0.035294875542571706]
      K: [684.0436298696771, 0.0, 383.8977028127636, 0.0, 681.8825674053443, 291.46442657960296, 0.0, 0.0, 1.0]
      R: [0.9999975506787532, 0.0020932805291636923, 0.0007188971558162386, -0.0020925240090497767, 0.9999972578637527, -0.0010514790758340117, -0.000721096225178569, 0.00104997219086539, 0.9999991887889872]
      P: [677.5530232455858, 0.0, 380.73382568359375, 0.0, 0.0, 677.5530232455858, 298.38965225219727, 0.0, 0.0, 0.0, 1.0, 0.0]
      binning_x: 1
      binning_y: 1
      roi:
        x_offset: 0
        y_offset: 0
        height: 0
        width: 0
        do_rectify: False
    right_info:
      header:
        seq: 195108
        stamp:
          secs: 1550489069
          nsecs: 658059275
        frame_id: "map"
      height: 592
      width: 800
      distortion_model: "plumb_bob"
      D: [-0.07050014821240644, 0.15121497345155674, 0.001179938195432741, 0.0005238836140711377, -0.13054262535915223]
      K: [688.81655046723, 0.0, 378.2111821877113, 0.0, 686.8626877361024, 304.3933390790509, 0.0, 0.0, 1.0]
      R: [0.9999990871452151, -0.0007296095146042925, 0.0011372680829972413, 0.0007284141541189546, 0.9999991822576714, 0.0010511407178051629, -0.0011380340752739026, -0.0010503113560976435, 0.9999988008615304]
      P: [677.5530232455858, 0.0, 380.73382568359375, -67.95124548529029, 0.0, 677.5530232455858, 298.38965225219727, 0.0, 0.0, 0.0, 1.0, 0.0]
      binning_x: 1
      binning_y: 1
      roi:
        x_offset: 0
        y_offset: 0
        height: 0
        width: 0
        do_rectify: False
    Q: [1.0, 0.0, 0.0, -380.7133483886719, 0.0, 1.0, 0.0, -298.3891296386719, 0.0, 0.0, 0.0, 677.5579223632812, 0.0, 0.0, 9.971163749694824, 0.0]
    T_left_right: [-0.1002891008148319, 7.317194896777106e-05, -0.00011405569753621334]
    R_left_right: [0.9999959342341739, 0.0028204958709017384, -0.00041990256476554416, -0.0028213726499287835, 0.9999938100485598, -0.00210231323429016, 0.00041397039979239043, 0.0021034893883887227, 0.99999770197781]
    ---
What am I doing wrong?
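For comparison, the error text itself says the topic type is nerian_stereo/StereoCameraInfo rather than sensor_msgs/CameraInfo, and rospy.Subscriber also needs a callback argument. A hedged sketch of a corrected subscriber (the callback body and node name are illustrative):

    import rospy
    from nerian_stereo.msg import StereoCameraInfo  # the type named in the error

    def info_callback(msg):
        # left_info / right_info are sensor_msgs/CameraInfo fields, per the echo above
        rospy.loginfo("left K: %s", msg.left_info.K)

    rospy.init_node('stereo_info_listener')  # hypothetical node name
    rospy.Subscriber('/nerian_stereo/stereo_camera_info',
                     StereoCameraInfo, info_callback)
    rospy.spin()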
↧
How to get the correct X Y distance from depth camera
Hi, I would like to create a TF for an object detected in OpenCV using a depth camera. When I look at the code from the book "ROS Robotics By Example", they use the X, Y coordinates detected in the picture and put them directly into the TF. I am confused here: the X, Y coordinates are just pixels in the photo, so why can they be put into the TF directly? Thanks!
    # Find the circumcircle of the green ball and draw a blue outline around it
    (self.cf_u, self.cf_v), radius = cv2.minEnclosingCircle(ball_image)
    ball_center = (int(self.cf_u), int(self.cf_v))

    # This function builds the Crazyflie base_link tf transform and publishes it.
    def update_cf_transform(self, x, y, z):
        # send position as transform from the parent "kinect2_ir_optical_frame"
        # to the child "crazyflie/base_link" (described by crazyflie.urdf.xacro)
        self.pub_tf.sendTransform((x, y, z),
                                  tf.transformations.quaternion_from_euler(self.r, self.p, self.y),
                                  rospy.Time.now(),
                                  "crazyflie/base_link",
                                  "kinect2_ir_optical_frame")
[https://github.com/PacktPublishing/ROS-Robotics-By-Example/blob/master/Chapter_9_code/crazyflie_autonomous/scripts/detect_crazyflie.py](https://github.com/PacktPublishing/ROS-Robotics-By-Example/blob/master/Chapter_9_code/crazyflie_autonomous/scripts/detect_crazyflie.py)
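For reference, the usual way to turn a pixel into a metric position uses the depth value at that pixel together with the camera intrinsics from CameraInfo; a small sketch of that back-projection (the book's script may handle this step elsewhere):

    def pixel_to_point(u, v, z, fx, fy, cx, cy):
        """Back-project pixel (u, v) with depth z (meters) into the
        camera optical frame, using pinhole intrinsics fx, fy, cx, cy."""
        x = (u - cx) * z / fx
        y = (v - cy) * z / fy
        return (x, y, z)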
↧
↧
Transform camera frame to 3D point
Hi all,
I am working on the KITTI dataset. How can I transform a detected object from the camera frame to 3D point cloud coordinates?
Thanks in advance!
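A sketch of the camera-frame half of this, assuming a sensor_msgs/CameraInfo is available for the (rectified) image; mapping the result into the point cloud frame then needs the camera-to-lidar extrinsics (for KITTI, from its calibration files, or via a tf lookup):

    from image_geometry import PinholeCameraModel

    def pixel_to_camera_point(camera_info_msg, u, v, depth):
        """Back-project pixel (u, v) with known depth (m) into the camera frame.
        camera_info_msg: sensor_msgs/CameraInfo for the image."""
        model = PinholeCameraModel()
        model.fromCameraInfo(camera_info_msg)
        ray = model.projectPixelTo3dRay((u, v))  # unit ray through the pixel
        scale = depth / ray[2]                   # stretch the ray so z == depth
        return tuple(c * scale for c in ray)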
↧
Obtaining Camera Pose via TF
I have the calibration board fixed on the workstation, and I have its world coordinates and have entered them in the URDF; all good. I used OpenCV's pose estimation module to find the board's pose (rvecs and tvecs) in the camera frame as well. This is also OK. However, I want to get the camera pose relative to the world frame, and to do that I think I need to use TF. So basically, we can formulate the problem as follows:
*Given:*
**World -> Board (board in world frame)**
**Board -> Camera (board in camera frame)**
*Find:*
**World -> Camera (world in camera frame, and vice versa, camera in the world frame).**
I have the world and board in my URDF, but since I don't have the camera in it (the camera pose is what I am trying to find out anyway), I don't know how I could utilise TF functionalities.
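For what it's worth, this particular composition can also be done without tf, as plain 4x4 matrices: world->camera = (world->board) * (board->camera), where board->camera is the inverse of OpenCV's board-in-camera pose. A sketch under those assumptions:

    import numpy as np
    import cv2

    def camera_in_world(world_T_board, rvec, tvec):
        """world_T_board: 4x4 board pose in the world frame (from the URDF).
        rvec, tvec: OpenCV pose of the board in the camera frame."""
        R, _ = cv2.Rodrigues(rvec)
        camera_T_board = np.eye(4)
        camera_T_board[:3, :3] = R
        camera_T_board[:3, 3] = tvec.ravel()
        board_T_camera = np.linalg.inv(camera_T_board)
        return world_T_board.dot(board_T_camera)  # camera pose in the world frame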
Does anyone have experience with this setup?
↧
Unable to get Point Grey USB camera to work in Ubuntu
Hello,
I have a problem using a Point Grey Firefly MV FMVU-13S2C digital camera on Ubuntu 11.04.
This is the Ubuntu version:
adrian@ubuntu:~$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 11.04
Release: 11.04
Codename: natty
As far as I know, the Point Grey Firefly MV FMVU-13S2C uses an IEEE 1394 interface to transfer data to the computer, so I downloaded Coriander, the Linux GUI for IEEE 1394/FireWire cameras.
However, when I plugged in the camera and launched Coriander, this error appeared:
Warning: could not find a Digital Camera on the bus.
Please check that:
-the cables are properly set
-the devices on the bus are properly powered
-your camera is compliant with the Digital Camera specs (see http://www.1394ta.org).
Then I ran the command `lsusb`:
adrian@ubuntu:~$ lsusb
Bus 003 Device 002: ID 1e10:2002 Point Grey Research, Inc.
Bus 003 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
Bus 002 Device 004: ID 17ef:1003 Lenovo Integrated Smart Card Reader
Bus 002 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub
Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 001 Device 007: ID 04f2:b217 Chicony Electronics Co., Ltd
Bus 001 Device 006: ID 0a5c:217f Broadcom Corp. Bluetooth Controller
Bus 001 Device 005: ID 147e:2016 Upek Biometric Touchchip/Touchstrip Fingerprint Sensor
Bus 001 Device 004: ID 046d:c52b Logitech, Inc. Unifying Receiver
Bus 001 Device 003: ID 0765:5001 X-Rite, Inc.
Bus 001 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Apparently, the first entry is the camera.
I then guessed that maybe I needed to configure udev to work with the camera, so I ran `sudo gedit /etc/udev/rules.d/10-pointgrey.rules` and typed in the following text:

    BUS=="usb", SYSFS{idVendor}=="1e10", SYSFS{idProduct}=="2002", GROUP="plugdev"

Then I restarted udev with `sudo restart udev` and started Coriander again, but the same error still appears.
Then I opened the official Point Grey website and downloaded the FlyCapture2 software. Surprisingly, this software is able to recognize the camera. Why can't Coriander recognize the camera? The camera1394 ROS node cannot recognize the camera either.
Can anyone help me solve this problem? Thanks in advance!
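One aside on the udev rule above: the SYSFS{} match keys are deprecated on newer udev versions, so a hypothetical modern equivalent would be:

    SUBSYSTEM=="usb", ATTRS{idVendor}=="1e10", ATTRS{idProduct}=="2002", GROUP="plugdev", MODE="0664"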
↧
How to publish tf between base_link and camera
I want to publish a transform between the camera and base_link, but I am not able to. Please help. I have seen the predefined packages, but since I use ROS Kinetic I cannot use those.
This is what I have done so far:
    import rospy
    from nav_msgs.msg import Odometry
    import tf

    def func1(msg, name):
        # Broadcast a fixed identity transform from base_link to the camera
        # frame each time an odometry message arrives.
        br = tf.TransformBroadcaster()
        br.sendTransform((0, 0, 0),
                         tf.transformations.quaternion_from_euler(0, 0, 0),
                         rospy.Time.now(),
                         name,          # child frame
                         "base_link")   # parent frame

    if __name__ == '__main__':
        rospy.init_node('optical_cam_transform_publisher')
        cam_frame = "optical_cam_frame"
        rospy.Subscriber('/odom', Odometry, func1, cam_frame)
        rospy.spin()
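Since the camera offset from base_link is fixed, a latched static transform would also work and avoids re-broadcasting on every /odom message; a sketch using tf2_ros, which is available on Kinetic (the zero translation and identity rotation are placeholders for the real camera offset):

    import rospy
    import tf2_ros
    from geometry_msgs.msg import TransformStamped

    rospy.init_node('optical_cam_static_tf')
    broadcaster = tf2_ros.StaticTransformBroadcaster()

    t = TransformStamped()
    t.header.stamp = rospy.Time.now()
    t.header.frame_id = 'base_link'          # parent
    t.child_frame_id = 'optical_cam_frame'   # child
    t.transform.rotation.w = 1.0             # identity rotation, zero offset

    broadcaster.sendTransform(t)  # latched; published once
    rospy.spin()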
↧
↧
launching image_view for a topic
I am trying to view the images my robot's cameras are receiving remotely.
The following command will bring up a window with the video stream:
$ rosrun image_view image_view image:=/stereo/right/image_raw
However, I cannot find any information on how to put the "`image:=/stereo/right/image_raw`" part into a launch file.
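A sketch of the launch-file equivalent, expressing the command-line remap with a `<remap>` tag inside the node element (the node name is arbitrary):

    <launch>
      <node pkg="image_view" type="image_view" name="right_image_view">
        <remap from="image" to="/stereo/right/image_raw"/>
      </node>
    </launch>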
Alternatively, how can I use image_view in a Python node to embed the video in a GUI of my own design?
I'm using:
ROS electric
Ubuntu 11.10
uvc_camera
Tkinter
↧
April Tags 2 Camera Setup
I am working with apriltags2_ros to do some simple tracking, but I am unable to set up my webcam to work with the apriltags2_ros package. I am using this link as a reference: http://wiki.ros.org/apriltags2_ros/Tutorials/Detection%20in%20a%20video%20stream
When I launch continuous_detection.launch and run rqt with image_view, no image from my webcam is displayed. How would I set this up to work with my webcam?
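For reference, the linked tutorial drives the detector through launch arguments. A hypothetical invocation against a usb_cam webcam stream (argument names as in that tutorial; the driver package and topic names are assumptions to verify) might be:

    roslaunch usb_cam usb_cam-test.launch
    roslaunch apriltags2_ros continuous_detection.launch camera_name:=/usb_cam image_topic:=image_raw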
↧
sensor_msgs/Image data to sensor_msgs/CompressedImage data
I have an industrial camera (DVP interface), and I added some code in the driver to convert the cv image into ros image data. I want to use this camera to test the [fiducials package](http://wiki.ros.org/fiducials). But it needs to subscribe to a `sensor_msgs/CompressedImage` topic, and now the driver can only provide the `sensor_msgs/Image` topic. Is there any way to convert the `sensor_msgs/Image` data into `sensor_msgs/CompressedImage` data?
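One option that avoids touching the driver is image_transport's republish node, which converts between transports on the fly. (Also note that if the driver publishes through image_transport with the compressed plugin installed, a /compressed sibling topic may already exist next to the raw one.) A sketch with placeholder topic names:

    rosrun image_transport republish raw in:=/camera/image_raw compressed out:=/camera/image_repub
    # the compressed stream then appears on /camera/image_repub/compressed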
↧
Reading Stereo Camera Calibration using rosparam
Hi all!
I have a YAML file which contains calibration matrices for a stereo pair of cameras. I just can't figure out how to get the calibration data into a CameraInfoManager instance. The YAML file looks like this:
    # Transform that takes a point in lidar frame and puts it in the left DAVIS camera frame.
    T_cam0_lidar:
    - [ 0.01140786, -0.99985524,  0.01262402,  0.03651404]
    - [ 0.04312291, -0.01212116, -0.99899624, -0.08637514]
    - [ 0.99900464,  0.0119408 ,  0.04297839, -0.12882002]
    - [ 0.        ,  0.        ,  0.        ,  1.        ]
    cam0:
      # Transform that takes a point in the left DAVIS IMU frame to the left DAVIS camera frame.
      T_cam_imu:
      - [0.9998771896957381, -0.015128404695721132, -0.004091075349566317, -0.002760732198752687]
      - [0.015081690576531692, 0.9998233340761795, -0.011217987615716542, -0.001652095229536009]
      - [0.004260062852482415, 0.011154909598696812, 0.99992871, -0.017829494499329843]
      - [0.0, 0.0, 0.0, 1.0]
      projection_matrix:
      - [199.6530123165822, 0.0, 177.43276376280926, 0.0]
      - [0.0, 199.6530123165822, 126.81215684365904, 0.0]
      - [0.0, 0.0, 1.0, 0.0]
      rectification_matrix:
      - [0.999877311526236, 0.015019439766575743, -0.004447282784398257]
      - [-0.014996983873604017, 0.9998748347535599, 0.005040367172759556]
      - [0.004522429630305261, -0.004973052949604937, 0.9999774079320989]
      camera_model: pinhole
      distortion_coeffs: [-0.048031442223833355, 0.011330957517194437, -0.055378166304281135, 0.021500973881459395]
      distortion_model: equidistant
      intrinsics: [226.38018519795807, 226.15002947047415, 173.6470807871759, 133.73271487507847]
      resolution: [346, 260]
      rostopic: /davis/left/image_raw
    cam1:
      # Transform that takes a point in the left DAVIS IMU frame to the right DAVIS camera frame.
      T_cam_imu:
      - [0.9999459669775138, -0.002658051013083418, -0.010049770654903653, -0.10052351879548456]
      - [0.002628365498611729, 0.9999921475216119, -0.0029659045702786734, -0.002149609013368439]
      - [0.01005757526494452, 0.0029393298430320275, 0.9999451012529955, -0.01780039434280798]
      - [0.0, 0.0, 0.0, 1.0]
      # Transform that takes a point in the left DAVIS camera frame to the right DAVIS camera frame.
      T_cn_cnm1:
      - [0.9999285439274112, 0.011088072985503046, -0.004467849222081981, -0.09988137641750752]
      - [-0.011042817783611191, 0.9998887260774646, 0.01002953830336461, -0.0003927067773089277]
      - [0.004578560319692358, -0.009979483987103495, 0.9999397215256256, 1.8880107752680777e-06]
      - [0.0, 0.0, 0.0, 1.0]
      projection_matrix:
      - [199.6530123165822, 0.0, 177.43276376280926, -19.941771812941038]
      - [0.0, 199.6530123165822, 126.81215684365904, 0.0]
      - [0.0, 0.0, 1.0, 0.0]
      rectification_matrix:
      - [0.9999922706537476, 0.003931701344419404, -1.890238450965101e-05]
      - [-0.003931746704476347, 0.9999797362744968, -0.005006836150689904]
      - [-7.83382948021244e-07, 0.0050068717705076754, 0.9999874655386736]
      camera_model: pinhole
      distortion_coeffs: [-0.04846669832871334, 0.010092844338123635, -0.04293073765014637, 0.005194706897326005]
      distortion_model: equidistant
      intrinsics: [226.0181418548734, 225.7869434267677, 174.5433576736815, 124.21627572590607]
      resolution: [346, 260]
      rostopic: /davis/right/image_raw
and I am able to access the data through a node handle after loading the file with a `<rosparam command="load">` tag in my launch file:

    std::vector<double> test;
    nh.getParam("event_stereo/stereo_params/cam0/projection_matrix", test);
This works fine, so I am able to access what is in the YAML through the node handle. What I'd like to do now is actually load the matrices into an image_geometry::PinholeCameraModel so I can rectify incoming images, and load the camera extrinsics in order to do some stereo camera operations. I've tried all sorts of increasingly inelegant things, but there must be an easy way...
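In case it helps to see the shape of the mapping, here is the idea sketched in Python (the C++ path is analogous): copy the YAML fields into a sensor_msgs/CameraInfo and hand that to image_geometry. The field mapping below is an assumption based on this file (intrinsics = [fx, fy, cx, cy]), and note that image_geometry's rectification assumes a plumb_bob-style model, so the equidistant (fisheye) coefficients here may need cv2.fisheye instead:

    import yaml
    from sensor_msgs.msg import CameraInfo
    from image_geometry import PinholeCameraModel

    with open('stereo_params.yaml') as f:   # hypothetical file path
        cam0 = yaml.safe_load(f)['cam0']

    info = CameraInfo()
    info.width, info.height = cam0['resolution']
    fx, fy, cx, cy = cam0['intrinsics']     # assumed [fx, fy, cx, cy] order
    info.K = [fx, 0.0, cx, 0.0, fy, cy, 0.0, 0.0, 1.0]
    info.D = cam0['distortion_coeffs']
    info.distortion_model = cam0['distortion_model']
    info.R = sum(cam0['rectification_matrix'], [])  # flatten 3x3 row-major
    info.P = sum(cam0['projection_matrix'], [])     # flatten 3x4 row-major

    model = PinholeCameraModel()
    model.fromCameraInfo(info)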
Many thanks in advance!
↧
↧
How to create an occupancy grid map from my camera topic
Hello,
I am new to ROS and Gazebo. I am trying to generate an occupancy grid map using the camera topic of my robot. I also want to do object recognition and autonomous navigation. How can I do that? I am using ROS Melodic and Gazebo 9.0.
I will also add a radar sensor and generate another occupancy grid map,
and finally I will fuse these two maps based on Dempster-Shafer theory.
I really need help.
Thanks.
↧
Combine two parent-child transformations for common link
I am trying to connect two tf trees via a known link by following the advice from @tfoote [here](https://answers.ros.org/question/322015/tf-static_transform_publisher-removes-link-instead-of-linking-two-tf-trees/?answer=322017#post-id-322017). One link belongs to the robot tf tree and the other belongs to a camera tf tree that is not part of the robot URDF. `ar_tracker_alvar` provides the known link from the camera to a link defined in the robot URDF. I have looked up the transformations using `tf2_ros`, where `robot_link_1` and `alvar_link_1` share the same point in space:
    tf_buffer = tf2_ros.Buffer()
    tf_listener = tf2_ros.TransformListener(tf_buffer)  # needed so the buffer fills
    known_transform_1 = tf_buffer.lookup_transform('robot_link_1', 'base_link', rospy.Time.now(), rospy.Duration(1.0))
    known_transform_2 = tf_buffer.lookup_transform('alvar_link_1', 'camera_link', rospy.Time.now(), rospy.Duration(1.0))
I have added the position information to get a new combined xyz translation:
    t = TransformStamped()
    t1 = known_transform_1.transform.translation
    t2 = known_transform_2.transform.translation
    t.transform.translation.x = t1.x + t2.x
    t.transform.translation.y = t1.y + t2.y
    t.transform.translation.z = t1.z + t2.z
The resulting vector appears to be the correct length from the camera to the robot base. Next, my understanding of how to combine the rotation information is as follows:
    q_robot = known_transform_1.transform.rotation
    q_camera = known_transform_2.transform.rotation
    rotation = tf.transformations.quaternion_multiply(
        [q_robot.x, q_robot.y, q_robot.z, q_robot.w],
        [q_camera.x, q_camera.y, q_camera.z, q_camera.w])
I am using this information to send to a broadcaster to provide a link between the two trees:
    tf_broadcaster = tf2_ros.TransformBroadcaster()
    tf_broadcaster.sendTransform(t)  # t still needs header.stamp, header.frame_id and child_frame_id set
Combining the quaternions, however, does not work as expected.
[screenshot: combined transform visualized in RViz]
As you can see, the rotation is off.
I have tried changing the placement of ar tags as well as other analytical solutions, but I appear to be missing a critical piece of information for how to combine the rotation components.
How do I perform the translation to link the two trees?
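For what it's worth, adding translations component-wise and multiplying quaternions separately only composes correctly when the first rotation is identity; the second translation must first be rotated into the first frame. Composing the two transforms as 4x4 homogeneous matrices handles that coupling. A sketch reusing `known_transform_1` and `known_transform_2` from above (the multiplication order depends on how lookup_transform was called, so verify it against your frames):

    import numpy as np
    import tf.transformations as tfs

    def to_matrix(transform):
        """geometry_msgs/Transform -> 4x4 homogeneous matrix."""
        t, q = transform.translation, transform.rotation
        m = tfs.quaternion_matrix([q.x, q.y, q.z, q.w])
        m[:3, 3] = [t.x, t.y, t.z]
        return m

    # base_link -> robot_link_1 composed with the inverse of
    # camera_link -> alvar_link_1, assuming the two links coincide:
    base_T_camera = to_matrix(known_transform_1.transform).dot(
        np.linalg.inv(to_matrix(known_transform_2.transform)))
    translation = tfs.translation_from_matrix(base_T_camera)
    rotation = tfs.quaternion_from_matrix(base_T_camera)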
You can see in the visualization that the robot and camera appear to be aligned as expected.
For more context, below is the outcome of a tag placed in a different location:
[screenshot: tag placed in a different location]
EDIT:
Trying the suggestion by @janindu to directly link `robot_link_1` and `camera_link` results in a similar situation where the tf tree is connected but the translation is still incorrect at a different pose. See the following:
[screenshot: result after linking robot_link_1 and camera_link directly]
[https://answers.ros.org/question/322015/tf-static_transform_publisher-removes-link-instead-of-linking-two-tf-trees/](https://answers.ros.org/question/322015/tf-static_transform_publisher-removes-link-instead-of-linking-two-tf-trees/)
↧
Moving camera with librviz does not update view
Hi,
I am on Ubuntu 16.04 with ROS Kinetic and am trying to move the camera position in a 3D visualizer widget programmatically using librviz. I basically followed the provided [tutorial](http://docs.ros.org/indigo/api/librviz_tutorial/html/index.html) and came up with:
manager_ = new rviz::VisualizationManager( render_panel_ );
render_panel_->initialize( manager_->getSceneManager(), manager_ );
manager_->initialize();
manager_->startUpdate();
controller_ = manager_->getViewManager()->getCurrent();
controller_->getCamera()->moveRelative(Ogre::Vector3(0, 0.1, 0));
Unfortunately, this does not do anything, and `controller_->getCamera()->setPosition(...)` has no effect either.
I only see a change when I add a `controller_->getCamera()->lookAt(...)` call, which produces the desired change in position plus a generally unwanted change in orientation.
It seems like `moveRelative(...)` and `setPosition(...)` do not trigger an update of the view. Is there a way to make this work without `lookAt(...)`, or am I missing something important here?
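A plausible explanation is that the active view controller re-applies its own stored state to the Ogre camera on each update cycle, overwriting direct camera manipulation. If that is the case, driving the controller's properties instead of the raw camera should work; a hedged sketch (the property names below belong to the Orbit controller and will differ for other controller types):

    // Sketch: set the view controller's own properties; it moves the camera
    // itself on the next update. "Distance" and "Focal Point" are Orbit
    // controller property names (assumption: Orbit is the current controller).
    rviz::ViewController* controller = manager_->getViewManager()->getCurrent();
    controller->subProp("Distance")->setValue(2.0f);
    controller->subProp("Focal Point")->subProp("X")->setValue(0.5f);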
↧