Channel: ROS Answers: Open Source Q&A Forum - RSS feed

Camera Calibration fails to run

I am running ROS Indigo on Ubuntu 14.04 and trying to follow the camera calibration tutorial on the ROS wiki: http://wiki.ros.org/camera_calibration. I run the following command:

```
rosrun camera_calibration cameracalibrator.py --size 8x6 --square 0.108 image:=/my_camera/image camera:=/my_camera
```

and get the following error:

```
Traceback (most recent call last):
  File "/opt/ros/indigo/lib/camera_calibration/cameracalibrator.py", line 47, in <module>
    import cv2
ImportError: numpy.core.multiarray failed to import
```

I thought it had something to do with updating numpy and ran `rosdep update`, but it made no difference. What is a possible way to solve this problem?

shifting existing source code to ROS

I have not used ROS before and have gone through the beginner tutorials. I want to create two nodes: one on the robot's processor and one on a remote computer connected to the same network. Over SSH I cannot see the camera feed from the robot, and Remote Desktop lags, so I need a balance between the two: rely on fast SSH for sending commands and on ROS for publishing the camera feed. I have existing source code that I want to move to ROS, not all of it, just the part that publishes the camera feed. After going through the wiki I still haven't figured out how to go about it. For example, should I create a catkin workspace with roscpp etc. and move the existing folder into `src`, then use `catkin_make` to compile the files that include ROS headers and plain `make` to compile the rest of the repository? The source code is in C++, and both the robot's processor and the remote computer run Ubuntu 14.04.2.
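For reference, a minimal sketch of what the camera-publishing part could look like as a standalone ROS node, assuming the existing code can hand over frames as `cv::Mat` (the topic name `camera/image` and the `cv::VideoCapture` loop are placeholders for the existing capture code):

```cpp
// Minimal camera publisher node: wraps an OpenCV capture loop and publishes
// frames over the network via image_transport.
#include <ros/ros.h>
#include <image_transport/image_transport.h>
#include <cv_bridge/cv_bridge.h>
#include <sensor_msgs/Image.h>
#include <std_msgs/Header.h>
#include <opencv2/highgui/highgui.hpp>

int main(int argc, char** argv)
{
  ros::init(argc, argv, "camera_publisher");
  ros::NodeHandle nh;
  image_transport::ImageTransport it(nh);
  image_transport::Publisher pub = it.advertise("camera/image", 1);

  cv::VideoCapture cap(0);   // the existing capture code can replace this
  if (!cap.isOpened())
    return 1;

  ros::Rate rate(30);
  while (ros::ok())
  {
    cv::Mat frame;
    cap >> frame;
    if (!frame.empty())
    {
      // Convert the OpenCV image to a sensor_msgs/Image and publish it
      sensor_msgs::ImagePtr msg =
          cv_bridge::CvImage(std_msgs::Header(), "bgr8", frame).toImageMsg();
      pub.publish(msg);
    }
    ros::spinOnce();
    rate.sleep();
  }
  return 0;
}
```

Only this node needs to live in the catkin workspace (with roscpp, image_transport, and cv_bridge as dependencies in its package.xml and CMakeLists.txt); the rest of the repository can stay outside and keep its existing build.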

camera calibration with non-standard image size

I am trying to calibrate a mvBluefox-igc for a vision-based estimator. When I run the camera calibration, a black bar appears at the bottom of the image in the window, so the image looks as if a bunch of zeroed rows were added to its bottom. I can get the calibration to work, but I believe the coefficients that are returned are being corrupted by the empty rows at the bottom of the image. The estimator misbehaves in a way that makes me believe the estimated f_y is larger than it really is. Could the coefficients be corrupted by the additional empty rows in the calibration image? All help is appreciated.
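If the padding really is added by the driver, one possible workaround (a sketch, assuming the true sensor height is known, e.g. 480 valid rows) is to republish a cropped image and run the calibrator against that topic instead:

```cpp
// Republishes the camera image with the zero-padded rows at the bottom
// removed, so the calibrator only ever sees valid pixels.
#include <ros/ros.h>
#include <image_transport/image_transport.h>
#include <cv_bridge/cv_bridge.h>

static const int VALID_ROWS = 480;   // assumption: real sensor height

image_transport::Publisher pub;

void imageCallback(const sensor_msgs::ImageConstPtr& msg)
{
  cv_bridge::CvImagePtr cv = cv_bridge::toCvCopy(msg, msg->encoding);
  if (cv->image.rows > VALID_ROWS)
  {
    // Keep only the valid region; clone() detaches it from the padded buffer
    cv->image = cv->image(cv::Rect(0, 0, cv->image.cols, VALID_ROWS)).clone();
  }
  pub.publish(cv->toImageMsg());
}

int main(int argc, char** argv)
{
  ros::init(argc, argv, "image_cropper");
  ros::NodeHandle nh;
  image_transport::ImageTransport it(nh);
  pub = it.advertise("image_cropped", 1);
  image_transport::Subscriber sub = it.subscribe("image_raw", 1, imageCallback);
  ros::spin();
  return 0;
}
```

Note that intrinsics calibrated on the cropped image refer to the cropped frame, so the same crop would have to be applied at run time before undistorting or feeding the estimator.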

libuvc launch gets "permission denied opening usb" error

Hi all, newbie to ROS here. I am following [libuvc_camera](http://wiki.ros.org/action/fullsearch/libuvc_camera?action=fullsearch&context=180&value=linkto%3A%22libuvc_camera%22), trying to run my webcam and publish to the /camera/image_raw topic. catkin_make is OK, but when I launch I get a "permission denied opening" error:

```
PARAMETERS
 * /camera/mycam/auto_exposure: 3
 * /camera/mycam/auto_white_balance: False
 * /camera/mycam/camera_info_url: file:///tmp/cam.yaml
 * /camera/mycam/frame_rate: 15
 * /camera/mycam/height: 480
 * /camera/mycam/index: 1
 * /camera/mycam/product: 0x0
 * /camera/mycam/serial:
 * /camera/mycam/timestamp_method: start
 * /camera/mycam/vendor: 0x0
 * /camera/mycam/video_mode: uncompressed
 * /camera/mycam/width: 640
 * /rosdistro: indigo
 * /rosversion: 1.11.13

NODES
  /camera/
    mycam (libuvc_camera/camera_node)

ROS_MASTER_URI=http://localhost:11311

core service [/rosout] found
process[camera/mycam-1]: started with pid [12166]
[ERROR] [1445862646.193876279]: Permission denied opening /dev/bus/usb/001/002
[camera/mycam-1] process has died [pid 12166, exit code 255, cmd /home/visioner/libuvc_ws/devel/lib/libuvc_camera/camera_node __name:=mycam __log:=/home/visioner/.ros/log/f0455380-7b93-11e5-a62a-e4f89ca8e3f5/camera-mycam-1.log].
log file: /home/visioner/.ros/log/f0455380-7b93-11e5-a62a-e4f89ca8e3f5/camera-mycam-1*.log
all processes on machine have died, roslaunch will exit
shutting down processing monitor...
```

Any ideas about this? I am pretty sure my cameras are OK: they show up in `ls /dev/video*`, and I even wrote a simple OpenCV program showing that I can get the camera running. Thanks!

Getting the picoflexx depth sensor working on ROS

Hi, I'm trying to get the picoflexx depth sensor working with ROS Indigo on Ubuntu 14.04. I can run the normal royaleviewer.sh from the official "libroyale-1.0.5.40-LINUX-64Bit" package, and the depth camera works fine. Then I tried this wrapper: https://github.com/fmina/ros_picoflexx It compiles fine, but when I run the launch file I get this error:

```
process[firefly/picoflexx_cam-1]: started with pid [9406]
[ INFO] [1445772351.805227168]: Detecting cameras...
Detected 1 camera(s).
[firefly/picoflexx_cam-1] process has died [pid 9406, exit code -11, cmd /home/cyril/catkin_ws/devel/lib/ros_picoflexx/ros_picoflexx_node __name:=picoflexx_cam __log:=/home/cyril/.ros/log/e35b1f4c-7b07-11e5-9dd2-247703332438/firefly-picoflexx_cam-1.log].
log file: /home/cyril/.ros/log/e35b1f4c-7b07-11e5-9dd2-247703332438/firefly-picoflexx_cam-1*.log
```

Using rosrun leads to a segmentation fault, which I think is why we see exit code -11. Has anyone hit this error and found a fix?

EDIT: As suggested by gvdhoorn, I ran the node under GDB by adding the following to the node parameters:

```
launch-prefix="gdb -ex run --args"
```

The segmentation fault occurs in the function `void PicoFlexxCamera::Initialize()`, when it checks the camera id. As a workaround, I just removed the if condition. The original code is:

```cpp
for (int i = 0; i < camlist.size(); i++) {
  if (camlist[i] == camera_id_) {
    camera_device_ = manager_.createCamera(camlist[i]);
    ROS_INFO_STREAM(camera_name_ << " initialized correctly");
  }
}
```

So I just commented out the if:

```cpp
for (int i = 0; i < camlist.size(); i++) {
  //if (camlist[i] == camera_id_) {
    camera_device_ = manager_.createCamera(camlist[i]);
    ROS_INFO_STREAM(camera_name_ << " initialized correctly");
  //}
}
```

The node works well, and I can visualize the PointCloud2 in rviz. If someone has a better solution, please post it.
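A slightly less drastic variant of the same workaround, sketched against the snippet above (it only uses the identifiers already present in `PicoFlexxCamera::Initialize()`, and I have not checked what `camera_id_` actually contains for this device), keeps the id check but never leaves the device uninitialized:

```cpp
// Drop-in replacement for the lookup loop in PicoFlexxCamera::Initialize():
// keep the id comparison, but fall back to the first detected camera instead
// of silently skipping initialization when the configured id does not match.
if (camlist.size() == 0) {
  ROS_ERROR("No cameras detected, cannot initialize");
  return;
}

bool found = false;
for (size_t i = 0; i < camlist.size(); i++) {
  if (camlist[i] == camera_id_) {   // same comparison as the original code
    camera_device_ = manager_.createCamera(camlist[i]);
    found = true;
    break;
  }
}

if (!found) {
  ROS_WARN("Configured camera id not found, falling back to the first detected camera");
  camera_device_ = manager_.createCamera(camlist[0]);
}
ROS_INFO_STREAM(camera_name_ << " initialized correctly");
```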

knowing an object's dimensions in ROS with a Basler camera

Hi ROS users, I use ROS Indigo and Ubuntu 14.04 and have a Basler camera. I would like to know the width of an object (a cube, for example). So the process would be: 1) the camera detects the object, 2) a ROS node reports the width. Is there any package that does that? (Maybe I could do it with rviz.) Thanks a lot.
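For reference, with a single calibrated camera the width in pixels only maps to a metric width if the distance to the object is also known. A sketch of that back-projection with the pinhole model follows; the values for fx (from the K matrix of sensor_msgs/CameraInfo) and the distance Z are assumptions you would take from your calibration and from depth or known geometry:

```cpp
// Pinhole back-projection: metric width of an object from its width in pixels.
// Assumes a calibrated camera (fx in pixels, i.e. K[0] of CameraInfo) and a
// known distance Z from the camera to the object.
#include <cstdio>

double metricWidth(double pixel_width, double fx, double distance_z)
{
  // Similar triangles: width_pixels = fx * width_meters / Z
  return pixel_width * distance_z / fx;
}

int main()
{
  const double fx = 1200.0;  // assumed focal length in pixels
  const double z  = 0.80;    // assumed distance to the cube in meters
  const double px = 150.0;   // measured width of the cube in pixels
  std::printf("estimated width: %.3f m\n", metricWidth(px, fx, z));
  return 0;
}
```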

Looking for a stereo USB camera driver for Jade

Hi, I'm new to the ROS system, and my first project should be really simple: I'd like to use rtabmap for stereo SLAM. My problem is that I was not able to find a camera driver node. For example, http://wiki.ros.org/uvc_camera would be great, but it is not released for Jade. Does anyone have a working stereo camera driver node on Jade? Could you please tell me how you got it to work? Thanks! Adam

URDF prismatic joint on 2 axes

Hi all.
I have a URDF robot model in Gazebo with a camera, and I want to move the camera along two axes, Y and Z.
I know that I have to use the prismatic joint type, but I can't figure out how to set it up. I've tried to create two joints between base_link and camera_link, one for the Z slide and the other for the Y slide, but I get these errors:

```
[ERROR] [1447412621.880447632, 1337.300000000]: Tried to advertise a service that is already advertised in this node [/tablet_camera/set_parameters]
[ERROR] [1447412621.989409540, 1337.300000000]: Tried to advertise a service that is already advertised in this node [/tablet_camera/image_raw/compressedDepth/set_parameters]
[ERROR] [1447412622.090016972, 1337.300000000]: Tried to advertise a service that is already advertised in this node [/tablet_camera/image_raw/compressed/set_parameters]
[ERROR] [1447412622.263917910, 1337.300000000]: Tried to advertise a service that is already advertised in this node [/tablet_camera/image_raw/theora/set_parameters]
[ERROR] [1447412627.912015293, 1337.300000000]: This robot has a joint named "camera_joint_y" which is not in the gazebo model.
[FATAL] [1447412627.912106858, 1337.300000000]: Could not initialize robot simulation interface
```

(The transmission and camera-plugin XML from my xacro file did not survive the paste: the transmissions are transmission_interface/SimpleTransmission with an EffortJointInterface for each joint, and the camera plugin publishes image_raw and camera_info for camera_link under the tablet_camera namespace.) Of course I've created the position controller yaml file etc. for the controller manager.

SetCameraInfo successfully called but no change in /camera_info topic

Hello there, I am trying to write a little C++ package to apply custom-made calibration files on my Indigo system, using the following code (inspired by [this](http://answers.ros.org/question/43623/how-to-call-service-set_camera_info-of-gscam-package/) thread):

```cpp
#include <ros/ros.h>
#include <sensor_msgs/CameraInfo.h>
#include <sensor_msgs/SetCameraInfo.h>
#include <camera_calibration_parsers/parse.h>
#include <sstream>
#include <string>

using namespace std;

int main(int argc, char **argv)
{
  ros::init(argc, argv, "apply_camera_info");

  if (argc != 3)
  {
    ROS_INFO("usage: apply_camera_info <camera namespace> <calibration file>");
    return 1;
  }

  ros::NodeHandle n;

  std::ostringstream oss;
  oss << argv[1] << "set_camera_info";
  std::string s = oss.str();

  ros::ServiceClient client = n.serviceClient<sensor_msgs::SetCameraInfo>(s);

  std::string camera_name = argv[1];
  std::string filename = argv[2];

  sensor_msgs::CameraInfo camera_info;
  camera_calibration_parsers::readCalibration(filename, camera_name, camera_info);

  sensor_msgs::SetCameraInfo srv;
  srv.request.camera_info = camera_info;

  if (client.call(srv))
  {
    std::ostringstream sss;
    sss << srv.request.camera_info;
    ROS_INFO("%s", sss.str().c_str());
    ROS_INFO("Calibration successfully applied.");
  }
  else
  {
    ROS_ERROR("Failed to call service /set_camera_info");
    return 1;
  }

  return 0;
}
```

Actually the code seems to work fine: ROS_INFO prints the correct camera_info values after I run the command. However, when I run `rostopic echo /camera_info` in another tab, I still get the old values, so no update is actually taking place. Would you have any suggestions for me? By the way, I am using a ueye camera. Thanks, Ben
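One way to check programmatically whether the driver actually picked up the new calibration, rather than eyeballing `rostopic echo`, is a small sketch like the following; the topic name `/camera_info` is taken from the question and may need adjusting to the driver's namespace:

```cpp
// Compares the K matrix currently published on /camera_info with the one in a
// calibration file, to verify whether set_camera_info took effect.
#include <ros/ros.h>
#include <ros/topic.h>
#include <sensor_msgs/CameraInfo.h>
#include <camera_calibration_parsers/parse.h>
#include <algorithm>
#include <string>

int main(int argc, char** argv)
{
  ros::init(argc, argv, "check_camera_info");
  if (argc != 3)
  {
    ROS_INFO("usage: check_camera_info <camera name> <calibration file>");
    return 1;
  }

  sensor_msgs::CameraInfo from_file;
  std::string camera_name = argv[1];
  camera_calibration_parsers::readCalibration(argv[2], camera_name, from_file);

  ros::NodeHandle n;
  sensor_msgs::CameraInfoConstPtr live =
      ros::topic::waitForMessage<sensor_msgs::CameraInfo>("/camera_info", n,
                                                          ros::Duration(5.0));
  if (!live)
  {
    ROS_ERROR("No message received on /camera_info");
    return 1;
  }

  bool same = std::equal(live->K.begin(), live->K.end(), from_file.K.begin());
  ROS_INFO("published K %s the calibration file", same ? "matches" : "does NOT match");
  return 0;
}
```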

Can't run uvc_camera package on Jade

Hi, I have a problem building and running http://wiki.ros.org/uvc_camera. I need a stereo camera driver providing 2 images and 2 camera_info messages on Jade. I found uvc_camera and tried building it from source using this guide: https://defendtheplanet.net/2014/11/05/using-ros-indigo-webcam-by-the-uvc_camera-usb-video-class-package/ . The `rosdep install` part failed with "missing resource", but if I use catkin_make it checks the dependencies anyway, right? catkin_make is successful: `[100%] Built target uvc_stereo_node`. But when I try to launch it, it fails:

```
ERROR: cannot launch node of type [uvc_camera/camera_node]: can't locate node [camera_node] in package [uvc_camera]
```

Is this an installation problem? Can you link me some tutorials about building packages from source? Thank you in advance!

Trying to get position data from Aruco Package

Hi all, I am trying to get position data using the [aruco ros package](http://wiki.ros.org/aruco). I can get the marker to appear on the screen, and it is highlighted, but no position data is being published to the markers topic. If I enable output to the screen in the launch file, there is output that identifies the marker and contains 4 2D points, as well as a 3D translation and a rotation transform (from what I read on the Aruco site). The last two are all zeros, but the 2D points seem to be what I want. Which topic should these be broadcast on? I am also assuming I'll need the translation and rotation transform; could they be defaulting to zero because I don't have a camera config file (which is indeed my situation)?
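If the camera calibration really is missing, that would be consistent with what you see: the 2D corner detection works without intrinsics, but the 3D translation and rotation need them and stay at zero. For picking up the pose once calibration is in place, a minimal listener sketch follows; the topic name `/aruco_single/pose` and the single-marker aruco_ros node are assumptions, so check `rostopic list` for the name your launch file actually produces:

```cpp
// Minimal listener for a marker pose published as geometry_msgs/PoseStamped.
#include <ros/ros.h>
#include <geometry_msgs/PoseStamped.h>

void poseCallback(const geometry_msgs::PoseStamped::ConstPtr& msg)
{
  ROS_INFO("marker at x=%.3f y=%.3f z=%.3f (frame %s)",
           msg->pose.position.x, msg->pose.position.y, msg->pose.position.z,
           msg->header.frame_id.c_str());
}

int main(int argc, char** argv)
{
  ros::init(argc, argv, "aruco_pose_listener");
  ros::NodeHandle nh;
  ros::Subscriber sub = nh.subscribe("/aruco_single/pose", 10, poseCallback);
  ros::spin();
  return 0;
}
```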

Error while interfacing my webcam to ROS

I installed the **ros-indigo-usb-cam** package and executed `rosrun usb_cam usb_cam_node ~video_device "/dev/video0"`, which resulted in the following error:

```
[ERROR] [1435589390.503107344]: outbuf size mismatch. pic_size: 604160 bufsize: 614400
```

How can I fix this error for my Logitech USB webcam? Also, how can I view the camera's vision specifications in ROS?

Is there a way to switch PTAM in order to make it work with the bottom camera?

Hello, I'm interested in using the bottom camera to build a map of the floor of a room. But when I toggle the camera to use the bottom one, PTAM always resets and there are no keypoints on the screen. Does someone know which part of the code I need to modify to make it work, or is there another way to do mapping with the bottom camera?

ROS-supported inexpensive ($200~$300) global shutter cameras

Could someone please suggest a global shutter camera for a machine vision project? These are the requirements:

1. Resolution more than 640x480
2. Field of view 120 degrees or greater (horizontal)
3. 60 fps or more
4. Global shutter
5. USB connectivity
6. Color or monochrome
7. $200 ~ $300

I have seen Aptina MT9V034 / MT9M021 based cameras, but I am not sure whether there are drivers for ROS. Furthermore, any PointGrey camera suggestions would be helpful as well. Thank you.

libuvc_ros not building

Hello, I'm trying to use this package: http://wiki.ros.org/libuvc_camera But I get this error when I run catkin_make:

```
CMake Error at libuvc_ros-master/libuvc_camera/CMakeLists.txt:9 (find_package):
  By not providing "Findlibuvc.cmake" in CMAKE_MODULE_PATH this project has
  asked CMake to find a package configuration file provided by "libuvc", but
  CMake did not find one.

  Could not find a package configuration file provided by "libuvc" with any
  of the following names:

    libuvcConfig.cmake
    libuvc-config.cmake

  Add the installation prefix of "libuvc" to CMAKE_PREFIX_PATH or set
  "libuvc_DIR" to a directory containing one of the above files.  If "libuvc"
  provides a separate development package or SDK, be sure it has been
  installed.

-- Configuring incomplete, errors occurred!
See also "/home/paulo/catkin_ws/build/CMakeFiles/CMakeOutput.log".
See also "/home/paulo/catkin_ws/build/CMakeFiles/CMakeError.log".
make: *** [cmake_check_build_system] Error 1
Invoking "make cmake_check_build_system" failed
```

Is this my fault, or are there missing config files in this package?

How to enable/add camera on Kuka Youbot in Gazebo?

Hi, I am pretty new to ROS, Gazebo, and Ubuntu. I'm trying to add a camera to the Kuka Youbot model, which I installed from the youbot website: http://www.youbot-store.com/wiki/index.php?title=Gazebo_simulation&hmswSSOID=10b4d7be36c130126e02a9c81ce579a7f71c954f. I'm using the Indigo version of ROS, and I've tried to enable the cameras that came with the package, but I couldn't get any of them to appear in the simulation. From what I know, to use one of the cameras you just include the reference and simulation tags of the camera in the youbot.urdf.xacro file. I tried that, but it didn't work. I need this for a senior design project in which I need to simulate lane keeping on a road using cameras. I would appreciate any help from anyone with experience with this. Thanks.

migrating camera_pose_calibration to Indigo

Hello guys, I'm trying to use this [calibration package](http://wiki.ros.org/camera_pose_calibration) for 2 monocular cameras that are fixed on the bottom of a quadcopter. According to the [documentation](http://wiki.ros.org/camera_pose_calibration?distro=groovy) this package is capable of doing this.

### my setup

My setup is shown in this figure: ![drawing](https://cloud.githubusercontent.com/assets/15015938/11775751/c3d099b6-a241-11e5-8bf0-48350c6ff591.png)

- Ubuntu 14.04 (Trusty)
- ROS distro: Indigo
- catkin workspace built with catkin tools

### my goal

My goal is to get the pose between the two bottom cameras and also the tf transformation of the bigger one (I already have the tf from the small bottom camera).

### installation problems

The problem with the whole story is more or less the installation. I checked with

```
rosdep db
```

for the package ros-indigo-camera-pose so I could use

```
sudo apt-get install ros-indigo-camera-pose
```

but had no success. My second attempt was to install from source through GitHub. Somehow catkin tools does not recognise the camera_pose package when I try to build it with

```
catkin build [package]
```

Checking the dependencies with `rosdep` fails too. So my first thought was to update this package so that catkin tools can build it. Any suggestion how to do this without studying informatics for 5 years? :D

ROS support for video feeds from 2 robots

Hi, may I know whether ROS is able to support video feed streaming from two robots at the same time, or does it have to be done with one robot at a time?
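For reference, nothing in ROS limits you to one feed: as long as the two robots publish their cameras under different namespaces, a single node (or rviz/rqt) can consume both streams at once. A minimal sketch follows; the topic names `/robot1/camera/image_raw` and `/robot2/camera/image_raw` are assumptions standing in for whatever namespaces your robots actually use:

```cpp
// Subscribes to camera streams from two robots at the same time; ROS
// distinguishes the feeds purely by topic name / namespace.
#include <ros/ros.h>
#include <sensor_msgs/Image.h>

void onRobot1(const sensor_msgs::ImageConstPtr& msg)
{
  ROS_INFO("robot1 frame %ux%u", msg->width, msg->height);
}

void onRobot2(const sensor_msgs::ImageConstPtr& msg)
{
  ROS_INFO("robot2 frame %ux%u", msg->width, msg->height);
}

int main(int argc, char** argv)
{
  ros::init(argc, argv, "dual_feed_listener");
  ros::NodeHandle nh;
  ros::Subscriber s1 = nh.subscribe("/robot1/camera/image_raw", 1, onRobot1);
  ros::Subscriber s2 = nh.subscribe("/robot2/camera/image_raw", 1, onRobot2);
  ros::spin();
  return 0;
}
```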

Intel RealSense R200 Camera Calibration

I've been trying to calibrate the Intel RealSense camera using various ROS tools for some time. I'm running Ubuntu 14.04 LTS with ROS Indigo. The nodelet I've been using is: [http://wiki.ros.org/RealSense_R200](http://wiki.ros.org/RealSense_R200) Has anyone been able to get this camera calibrated within ROS? I'm going to post basically everything I've tried, since it might be helpful for others, but this is going to be very long and verbose. Here's what I've tried:

**PTAM** ([http://wiki.ros.org/ethzasl_ptam/Tutorials/camera_calibration](http://wiki.ros.org/ethzasl_ptam/Tutorials/camera_calibration))

Result: I can launch the camera calibration tool, but the output is distorted and the nodelet crashes shortly after launching. See: ![PTAM camera calibration output](https://storage.jumpshare.com/preview/i6NYjVW-SOv-76bC4znba1Fi_g7sBlmeUY6mozOK9NEtHImXWjbVEahYFd0wWJZCk-xbdU7j9RiXc76aOBhMyN0Iq-_ZMIwlJNqsu6s4bO0F1kR3dMUjedqC16uBUu85) And the following terminal output:

```
  Welcome to CameraCalibrator
  --------------------------------------
  Parallel tracking and mapping for Small AR workspaces
  Copyright (C) Isis Innovation Limited 2008

  Gui is on
[cameracalibrator-1] process has died [pid 5428, exit code -11, cmd /home/mark/catkin_ws/devel/lib/ptam/cameracalibrator image:=camera/color/image_raw pose:=pose __name:=cameracalibrator __log:=/home/mark/.ros/log/806c7c96-b3ce-11e5-a526-001c42361d26/cameracalibrator-1.log].
log file: /home/mark/.ros/log/806c7c96-b3ce-11e5-a526-001c42361d26/cameracalibrator-1*.log
```

And the only thing I changed in PtamFixParams.yaml was:

```
ImageSizeX: 640
ImageSizeY: 480
```

**ROS Calibration** [http://wiki.ros.org/camera_calibration](http://wiki.ros.org/camera_calibration)

Result: the R200 nodelet doesn't provide a /camera/set_camera_info service, so this method can't launch:

```
('Waiting for service', '/camera/set_camera_info', '...')
Service not found
```

**libuvc_camera** [http://wiki.ros.org/libuvc_camera](http://wiki.ros.org/libuvc_camera)

Result: also fails. Note: I don't launch the R200 nodelet when launching this driver. I think the main problem is that there are multiple video streams coming over one USB connection. Here are my launch file, output, and video4linux information: [http://pastebin.com/mD8hvL4u](http://pastebin.com/mD8hvL4u). What's worth noting is that in the launch you specify one index, but the device actually has three /dev/videoX entries. I'm not sure how to map these indices to a global index, but I think if I could I'd be good to go.

And of course, please suggest any other calibration methods; this is just what I've tried so far. Thanks!

How to use Openni_Tracker with a moving camera?

Hi, I know that openni_tracker doesn't support a moving-camera application. I read that replacing the SceneAnalyzer with one that handles a moving camera may work. Does anyone know how? Or maybe someone knows a simple package to track one person from a mobile system using a Kinect? Thanks!

