Channel: ROS Answers: Open Source Q&A Forum - RSS feed

How to use ROS with 4 Realsense D435 depth/IR to create volumetric holograms

How can I use ROS with four RealSense D435 cameras to create volumetric holograms, similar to how it's done by Intel in this video: https://www.youtube.com/watch?v=9oQDz_cfUlo ? I will provide a reward for the solution. Thanks.
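
As a starting point, here is a minimal launch sketch (an assumption, not Intel's setup) for bringing up several D435s as separate, namespaced driver instances with the realsense2_camera package; the serial numbers are placeholders you can read off with the RealSense SDK's rs-enumerate-devices tool:

    <launch>
      <!-- One namespaced driver per camera; serial numbers are placeholders. -->
      <include file="$(find realsense2_camera)/launch/rs_camera.launch">
        <arg name="camera"    value="cam_1"/>
        <arg name="serial_no" value="XXXXXXXXXXXX"/>
      </include>
      <include file="$(find realsense2_camera)/launch/rs_camera.launch">
        <arg name="camera"    value="cam_2"/>
        <arg name="serial_no" value="YYYYYYYYYYYY"/>
      </include>
      <!-- ...repeat for cam_3 and cam_4... -->
    </launch>

The volumetric fusion itself (registering the four point clouds into one model) would still have to be done downstream, after extrinsically calibrating the four cameras into a common frame.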

Camera for Turtlebot 2 with Skeleton Tracking (2018)

Hi all, I am new to ROS and am now exploring the TurtleBot 2. I would like to add skeleton-tracking features to my TurtleBot. After some web searching, I found skeleton-tracking packages such as: http://wiki.ros.org/openni_tracker http://wiki.ros.org/skeleton_markers#Works_Only_With_Microsoft_Kinect These support the Xbox 360 Kinect or Xbox One Kinect, and those cameras have reached EOL. Question: what camera would you recommend? On the TurtleBot 2, the Orbbec Astra camera is the successor to the Kinect, but that also means it can't work with skeleton-tracking packages such as openni_tracker and skeleton_markers. Any recommendation for a camera that works with the TurtleBot setup and also with the available skeleton-tracking ROS packages? Thanks a lot for your reply / reading this question.

image stream using camera

Hello guys, my question might seem easy to many, since I am still new to ROS and OpenCV. I am following the tutorials on the ROS website, specifically "Converting between ROS images and OpenCV images (C++)", and I am trying to work through point 4 of that tutorial, "An example ROS node". I ran catkin_make on my system and it works fine. However, to run the node I need to provide an image stream from a camera. How can I do that? I don't know where to go from here. I would definitely appreciate any help. Thanks.
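
In the meantime, one simple way to get a stream is a small node that reads a webcam with OpenCV and republishes the frames; a minimal sketch, where the topic name and device index are assumptions (an existing driver package such as usb_cam or cv_camera would work just as well):

    #!/usr/bin/env python
    # Minimal webcam publisher: provides an image stream the tutorial node can subscribe to.
    import rospy
    import cv2
    from cv_bridge import CvBridge
    from sensor_msgs.msg import Image

    rospy.init_node('webcam_publisher')
    pub = rospy.Publisher('camera/image_raw', Image, queue_size=1)
    bridge = CvBridge()
    cap = cv2.VideoCapture(0)  # device index 0 is an assumption

    rate = rospy.Rate(30)
    while not rospy.is_shutdown():
        ok, frame = cap.read()
        if ok:
            # OpenCV captures BGR, which matches the 'bgr8' encoding
            pub.publish(bridge.cv2_to_imgmsg(frame, encoding='bgr8'))
        rate.sleep()
    cap.release()

Then point the tutorial node at camera/image_raw, or remap its input topic to whatever your camera publishes.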

Pose estimation error with ar_pose

I am using the ar_pose package to estimate the pose of an ArUco marker. The camera is a simulated Gazebo camera. [The camera's plugin description from the URDF was stripped of its XML tags by the forum; the recoverable values include a 30.0 Hz update rate, a 0.872764 rad horizontal FOV, BGR8 format, Gaussian noise with mean 0.0 and stddev 0.007, the topics camera/image_raw and camera/camera_info, the frame camera_link_optical, and zeroed distortion parameters.] In the launch file I ran the ar_pose node like this: [the launch XML was likewise stripped from the post]. The estimation result is shown in the picture. ![pose estimation result](https://i.imgur.com/PDGSZ9V.png) It seems that the detected pose is not aligned with the marker pose. Does anyone know why? Thanks for your attention!

API for changing a camera resolution

What is the standard ROS way of implementing on-the-fly resolution changing for a camera node? I haven't found any official articles on this, nor any popular camera nodes that support it. This feature is critical for my application, and I want to implement it for e.g. cv_camera, but what should the API be? Should it be dynamic_reconfigure parameters? I have never actually seen such parameters in a real-life package. Or should it be some ROS service, maybe set_camera_info? Or should plain standard parameters be used, with the node polling them periodically?
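
For reference, the dynamic_reconfigure variant would look roughly like this: a sketch of a .cfg file for a hypothetical my_camera package (package, node, and parameter names and ranges are made up), whose generated server the node would attach a callback to:

    #!/usr/bin/env python
    # cfg/Camera.cfg -- dynamic_reconfigure sketch for on-the-fly resolution changes.
    PACKAGE = "my_camera"  # hypothetical package name
    from dynamic_reconfigure.parameter_generator_catkin import *

    gen = ParameterGenerator()
    #       name      type   level  description         default  min  max
    gen.add("width",  int_t, 0,     "Image width, px",  640,     160, 1920)
    gen.add("height", int_t, 0,     "Image height, px", 480,     120, 1080)

    exit(gen.generate(PACKAGE, "my_camera_node", "Camera"))

In the reconfigure callback the node would have to restart its capture stream and republish a matching camera_info, which may be why few existing drivers implement this.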

rviz Camera doesn't render input image & geometry

Hello, I'm trying to overlay an image and a render in one window, following this tutorial: http://wiki.ros.org/rviz/DisplayTypes/Camera. All input looks correct, but unfortunately I get only a black window, with no errors or warnings. Simple image view and 3D render both work correctly. ![image description](https://preview.ibb.co/ip8OE7/Selection_036.png) ros-kinetic-rviz/xenial,now 1.12.15-0xenial-20180319-132427-0800 amd64 [installed] Any help?

How to implement a semantic camera in ROS + Morse

Hi ROS community! :) I'm a beginner in ROS and I'm trying to implement a [**semantic camera**](https://www.openrobots.org/morse/doc/1.2/user/sensors/semantic_camera.html) on a ROS + MORSE + Blender platform, in order to get data about specific objects in the Blender scenario. The problem is that there isn't much documentation or example code to follow. I followed this documentation from MORSE: [Semantic camera](https://www.openrobots.org/morse/doc/1.2/user/sensors/semantic_camera.html) and [SemanticCameraPublisher class](https://www.openrobots.org/morse/doc/1.2/user/code/morse.middleware.ros.html#morse.middleware.ros.semantic_camera.SemanticCameraPublisher). Both list some parameters as well as the code to use with the MORSE Builder API, but I don't know how or where to use them in my code. In my builder.py I set:

    from morse.builder import *

    atrv = ATRV()  # the robot
    atrv.translate(x=2.5, y=3.2, z=0.0)  # starting location

    # adding a passive object
    table = PassiveObject('props/objects', 'SmallTable')
    table.setgraspable()
    table.translate(x=0, y=0, z=-5)
    table.properties(Object=True, Graspable=True, Label="TABLE")

    # adding a semantic camera
    camera = SemanticCamera()
    atrv.append(camera)
    camera.translate(x=0.3, z=0.762)
    camera.add_interface('ros', topic='/camera')

My goal is for the ATRV robot to identify the table object, like this [article](http://airccse.com/jares/papers/1116jares01.pdf) does. In the subscriber code, I tried to define a callback to print **get_local_data()**, but I don't know what object I need to import, because the message is a String type (rospy publishes the data of the semantic camera as JSON in a ROS String message). My current code is as follows, but it is not working at all:

    import rospy
    from std_msgs.msg import String

    def callback_semcam(msg):
        reponse = String.get_local_data()
        tmp = reponse.result()
        print(tmp['visible_objects'])

    if __name__ == '__main__':
        rospy.init_node("obstacle_check_node")
        rospy.Subscriber('/camera', String, callback_semcam)
        rospy.spin()  # this will block until you hit Ctrl+C

When I run the code above, I get this error:

    [ERROR] [WallTime: 1524369527.637321] bad callback:
    Traceback (most recent call last):
      File "/opt/ros/indigo/lib/python2.7/dist-packages/rospy/topics.py", line 720, in _invoke_callback
        cb(msg)
      File "obstacle_check.py", line 27, in callback_semcam
        reponse = String.get_local_data()
    AttributeError: type object 'String' has no attribute 'get_local_data'

Any idea what we need to take into account when using a semantic camera in ROS?
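
Since the post above notes that MORSE publishes the semantic camera data as JSON inside a std_msgs/String, one likely fix is to parse msg.data in the callback rather than calling get_local_data(), which is a method of the MORSE sensor object, not of the ROS message. A minimal sketch (the 'visible_objects' key is taken from the code above):

    import json
    import rospy
    from std_msgs.msg import String

    def callback_semcam(msg):
        # msg.data is the JSON string produced by MORSE's SemanticCameraPublisher
        data = json.loads(msg.data)
        print(data['visible_objects'])

    if __name__ == '__main__':
        rospy.init_node("obstacle_check_node")
        rospy.Subscriber('/camera', String, callback_semcam)
        rospy.spin()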

Images from fisheye camera are displayed correctly with BGR8 encoding but not with MONO8

Hi, I want to output grayscale images from a fisheye camera (FOV < 180 degrees). I understand that ROS currently doesn't support fisheye [distortion models](http://docs.ros.org/kinetic/api/sensor_msgs/html/namespacesensor__msgs_1_1distortion__models.html). However, if I use the BGR8 image encoding (by setting sensor_msgs::image_encodings::BGR8), I can still get a "normal" distorted image, like this: ![normal_BGR8](/upfiles/1524389791658917.png) But if I change to MONO8 encoding, the image is unusable: ![weird_MONO8](/upfiles/15243898507167106.png) These two images were captured with my camera firmly attached to the table, so I don't understand why I can get a "normal" color image but not a grayscale one.
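
As a software workaround while the driver's mono mode misbehaves, the working BGR8 stream could be converted to grayscale in a small relay node; a minimal sketch with assumed topic names:

    import rospy
    import cv2
    from cv_bridge import CvBridge
    from sensor_msgs.msg import Image

    rospy.init_node('bgr_to_mono')
    bridge = CvBridge()
    pub = rospy.Publisher('fisheye/image_mono', Image, queue_size=1)

    def cb(msg):
        # Convert the color image to grayscale, preserving the original header.
        gray = cv2.cvtColor(bridge.imgmsg_to_cv2(msg, 'bgr8'), cv2.COLOR_BGR2GRAY)
        out = bridge.cv2_to_imgmsg(gray, 'mono8')
        out.header = msg.header
        pub.publish(out)

    rospy.Subscriber('fisheye/image_color', Image, cb)
    rospy.spin()

A sheared or repeating mono image like the second screenshot often points to a row-stride mismatch, i.e. the message's step field not matching the actual bytes per row, which is worth checking in the driver.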

Insta360 camera support

Has anyone tried to use the Insta360 Pro camera with ROS? I don't see any official ROS support on their site.

How to change pitch of camera in Gazebo?

I am working on a TB3 simulation in Gazebo like this one: https://www.youtube.com/watch?v=RgOvX3rUsI4 I want to change the pitch of the camera on the TB3 so that it looks lower, and the TB3 can still see the ball even when it is very close. But which file and parameter should I change? The .urdf.xacro? The .gazebo.xacro? model.config? model.sdf? There are many files with similar settings, which confuses me. Please advise, thank you.
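
For what it's worth, in the TurtleBot3 description the camera pose comes from a fixed joint in the URDF/xacro, and pitch is the middle rpy value, in radians (positive tilts the optical axis downward with the usual conventions). An illustrative sketch, with assumed joint/link names and numbers rather than the real TB3 values:

    <!-- Illustrative only: tilt the camera down by 0.3 rad about the Y axis (pitch). -->
    <joint name="camera_joint" type="fixed">
      <parent link="base_link"/>
      <child  link="camera_link"/>
      <origin xyz="0.07 0.0 0.12" rpy="0 0.3 0"/>
    </joint>

If your launch file spawns the robot from the .urdf.xacro (as the standard turtlebot3_gazebo launch files do), model.config and model.sdf are not used.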

Create a subscriber to an Image topic

Hi, I'm trying to write a node that subscribes to an image topic; however, after almost copying the ROS tutorial, I keep getting an error. The relevant parts of my code are:

    ros::NodeHandle nh;
    image_transport::ImageTransport it(nh);
    image_transport::Subscriber sub;
    sub = it.subscribe("/sensors/camera/top/image_raw", 1, Imtest::callbackSubscriber);

And my callback is:

    void Imtest::callbackSubscriber(const sensor_msgs::ImageConstPtr& msg)

The error I get is:

    /home/kal3-5/workspaces/my_first_ws/src/imtest_ros_tool/src/imtest/imtest.cpp:41:83: error: invalid use of non-static member function
    sub=it.subscribe("/sensors/camera/top/image_raw",1, Imtest::callbackSubscriber);

My ROS version is Kinetic, on Ubuntu 16.04.
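
For the record, the compiler is complaining because a non-static member function is being passed where a free function is expected; image_transport's subscribe() has an overload that takes the member-function pointer together with the object instance, so inside an Imtest member function the call would look like this (a sketch against the code above):

    // Pass the member-function pointer and the instance it should be called on.
    sub = it.subscribe("/sensors/camera/top/image_raw", 1,
                       &Imtest::callbackSubscriber, this);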

read the sensor data of gazebo - sdf model in ROS

As I understand it, with a plugin, URDF, and xacro we can publish sensor data (laser, camera, contact, ...) to ROS topics by spawning the model with urdf_spawner from the gazebo_ros package. I am wondering whether there is a way to use SDF models and their sensor data directly in ROS, because I want to use the drcsim models, which are defined as SDF. I think the best example of using a robot model and sensor data in Gazebo and ROS is: roslaunch gazebo_plugins multi_robot_scenario.launch
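
For the spawning half of this, the gazebo_ros spawn_model script accepts SDF directly; a one-line sketch with placeholder paths and names:

    # Spawn an SDF model into a running Gazebo; file and model names are placeholders.
    rosrun gazebo_ros spawn_model -sdf -file /path/to/model.sdf -model my_robot

The sensors then publish to ROS only if the SDF references ROS-aware plugins (e.g. libgazebo_ros_camera.so), which, as far as I can tell, the drcsim models do.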

How to execute a ros callback just once

I want to subscribe to openni's "camera/rgb/camera_info" topic to get the camera matrix. Since the matrix doesn't change, I would like to execute the callback just once. How could I achieve this? Is the only way to unsubscribe? Or is it possible to use a ROS service for this? Regards, Oier
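
One pattern that fits this exactly is rospy.wait_for_message, which subscribes internally, returns the first message, and then unsubscribes; a minimal sketch:

    import rospy
    from sensor_msgs.msg import CameraInfo

    rospy.init_node('camera_matrix_reader')
    # Blocks until one message arrives on the topic, then returns it.
    info = rospy.wait_for_message('camera/rgb/camera_info', CameraInfo)
    K = info.K  # 3x3 intrinsic matrix as a row-major tuple of 9 floats

In roscpp the equivalent is ros::topic::waitForMessage; otherwise, unsubscribing from inside the callback itself also works.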

Monocular Camera and 2D Lidar calibration

I have a monocular camera and a 2D lidar mounted on top of a platform; both units are static relative to each other. I would like to know the procedure for calibrating the two units (i.e. finding the extrinsics between them). Is there a package for this in ROS? Lidar module: RPLIDAR.

rviz Camera display incorporating CameraInfo operational parameters?

Does rviz support incorporating the operational parameters described in the camera_info message ([sensor_msgs/CameraInfo](http://www.ros.org/doc/api/sensor_msgs/html/msg/CameraInfo.html "CameraInfo")) when overlaying the data from elsewhere (robot model, tf, markers) while displaying a camera feed? The source ([rviz camera display](https://github.com/ros-visualization/rviz/blob/groovy-devel/src/rviz/default_plugin/camera_display.cpp "rviz camera display")) shows creating a projection based on the camera height and width fields, as well as the projection camera matrix P - the calibration parameters. These are typically described by full camera frame calibration data [rep 104](http://www.ros.org/reps/rep-0104.html "rep 104"). Is rviz not following the CameraInfo rep? Should it create the projection matrix based on the roi and binning parameters?

white balance problem for USB camera

Hi all, I have an Orbbec Astra Pro camera and I can get the RGB image using the libuvc_camera package. However, the RGB image I get doesn't look very good; you can see it below. ![image description](/upfiles/1527707570696579.png) It looks like there is a yellow layer on top of the image. I was told this is due to a bad white balance. I tried to enable the auto_white_balance setting in libuvc, but it makes no difference. This is the launch file I ran: [the launch file XML was stripped from the post]. Does anyone know how to solve this white balance problem with my Astra Pro camera? I'm running ROS Kinetic on Ubuntu 16.04. Thanks!
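
Since the original launch file was lost above, here is purely a sketch of where the setting would go, assuming libuvc_camera's camera_node and the auto_white_balance parameter the post mentions; the vendor/product IDs are placeholders for the Astra Pro's UVC endpoint:

    <launch>
      <node pkg="libuvc_camera" type="camera_node" name="uvc_camera">
        <!-- Placeholders: fill in the IDs reported by lsusb for the Astra Pro. -->
        <param name="vendor"  value="0x0000"/>
        <param name="product" value="0x0000"/>
        <param name="auto_white_balance" value="true"/>
      </node>
    </launch>

If auto white balance really has no effect, a fixed white_balance_temperature (if the camera's UVC firmware exposes that control) is the other knob to try.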

custom stereo camera publisher fails mysteriously

I'm trying to make a stereo camera for visual odometry. My Python script to publish the images consistently stops after ~500 images, with no errors. I have cameras plugged into the two CSI ports on the standard Raspberry Pi Compute Module breakout board. I'm using Kinetic on Raspbian. I'm using the picamera library so I can access the cameras as if I were using raspivid. This brings in both cameras as one side-by-side image with good sync (better than 1/60 s anyway), which I then split up and publish using sensor_msgs. I *think* building the sensor_msgs messages by hand is the only way to do it, as I need to set identical timestamps, but I'm a novice. Also, the publish rate is pretty slow: I'm asking for 10 FPS but getting about half that. The message queue is set to 10; would that be a problem? I suspect I'm going to have to redo it in C++, but I'd rather avoid that because Python is quick to work in. Can someone tell me what I'm doing wrong, or a better way?

    #!/usr/bin/env python
    # picamera stereo ROS node using dual CSI Pi CS3 board
    # Wes Freeman 2018
    # modified from code by Adrian Rosebrock, pyimagesearch.com
    # and jensenb, https://gist.github.com/jensenb/7303362

    from picamera.array import PiRGBArray
    from picamera import PiCamera
    import time
    import numpy as np
    import cv2
    import rospy
    from sensor_msgs.msg import CameraInfo, Image
    import yaml

    # cam resolution; x must be a multiple of 128: 640, 512, 384
    res_x = 384  # per camera
    res_y = 288

    # initialize the camera
    print("Init camera...")
    camera = PiCamera(stereo_mode='side-by-side', stereo_decimate=False)
    camera.resolution = (res_x*2, res_y)
    camera.framerate = 5
    #camera.exposure_mode = 'sports'
    camera.exposure_mode = 'antishake'
    camera.video_stabilization = True
    camera.awb_mode = 'tungsten'
    rawCapture = PiRGBArray(camera, size=(res_x*2, res_y))  # picamera struct backed by a numpy array

    # set up the publishers
    print("init publishers")
    # queue_size should be roughly equal to FPS
    left_img_pub = rospy.Publisher('left_camera/image_color', Image, queue_size=10)
    right_img_pub = rospy.Publisher('right_camera/image_color', Image, queue_size=10)
    rospy.init_node('stereo_pub')

    print("Setup done, entering main loop")
    framecount = 0
    # capture frames from the camera
    try:
        for frame in camera.capture_continuous(rawCapture, format="rgb", use_video_port=True):
            framecount += 1
            # grab the raw NumPy array representing the image and split it;
            # the right half starts at column res_x, not res_x+1 (the original
            # off-by-one dropped a column, so width/step no longer matched the data)
            left_image = frame.array[:, :res_x, :]
            right_image = frame.array[:, res_x:, :]
            # clear the stream in preparation for the next frame
            rawCapture.truncate(0)

            stamp = rospy.Time.now()

            left_img_msg = Image()
            left_img_msg.height = res_y
            left_img_msg.width = res_x
            left_img_msg.step = res_x*3  # bytes per row: pixels * channels * bytes per channel (1 normally)
            left_img_msg.encoding = 'rgb8'
            left_img_msg.header.frame_id = 'image_raw'
            left_img_msg.header.stamp = stamp
            # tobytes() is far faster than the original flatten().tolist()
            left_img_msg.data = left_image.tobytes()

            right_img_msg = Image()
            right_img_msg.height = res_y
            right_img_msg.width = res_x
            right_img_msg.step = res_x*3
            right_img_msg.encoding = 'rgb8'
            right_img_msg.header.frame_id = 'image_raw'
            right_img_msg.header.stamp = stamp
            right_img_msg.data = right_image.tobytes()

            # publish the image pair
            left_img_pub.publish(left_img_msg)
            right_img_pub.publish(right_img_msg)
            print("published frame: ", framecount)
    except rospy.ROSInterruptException:
        camera.close()

How to combine two pose estimates (from cameras) ?

Hi, I am currently in a situation where I have multiple cameras and I wish to combine their measurements into a common frame: the **world frame**. To do this, I use an AprilTag to find each camera's world pose at the start, and then use it later on. So I use 4x4 homogeneous transformation matrices (T) and calculate my world pose like this: T_world_to_robot = T_world_to_camera x T_camera_to_robot. I am quite certain my `T_camera_to_robot` matrix is correct (I visualise this by projecting a 3D mesh of the robot; **please see the image below**). However, I am getting poor results from this equation (the x, y, and z coordinates are off by *50 cm* when the robot is close to or far from the camera). I average the world estimates obtained from each camera to produce the `T_world_to_robot` matrix. **Hence my questions:**

1. Is this the right way to obtain the world pose of the robot when using multiple cameras?
2. Assuming Q1 is answered and I can get good world estimates, can I just average the world estimates from each camera (average the x, y, z coordinates and roll, pitch, yaw) to obtain a combined result?

The blue mesh is projected using `T_camera_to_robot` (and this seems to align perfectly). The magenta mesh is projected using `T_world_to_robot`. ![image description](/upfiles/15284980016493369.png)
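
On question 2: averaging roll/pitch/yaw directly is fragile near the angle wrap-around. A common alternative is to average the translations component-wise and the rotations as sign-aligned, normalized quaternion sums, which is a reasonable approximation when the per-camera estimates are close together. A numpy sketch, assuming each camera yields a translation and a quaternion in [x, y, z, w] order:

    import numpy as np

    def average_poses(translations, quaternions):
        """Average translations component-wise and quaternions by
        sign-aligned normalized summation (valid for nearby rotations)."""
        t_avg = np.mean(np.asarray(translations, dtype=float), axis=0)
        qs = np.asarray(quaternions, dtype=float)
        # q and -q represent the same rotation: align signs to the first sample
        for i in range(1, len(qs)):
            if np.dot(qs[0], qs[i]) < 0.0:
                qs[i] = -qs[i]
        q_avg = qs.sum(axis=0)
        q_avg /= np.linalg.norm(q_avg)
        return t_avg, q_avg

Weighting each camera's estimate by its expected accuracy (e.g. by distance to the tag) instead of averaging uniformly would be the natural refinement.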

Adding official support for Industrial cameras

Hello, I'm a sales engineer with an industrial camera manufacturer (Lumenera). I would like to know the process for getting our cameras officially supported, as we have had a few customers use them successfully in their ROS projects. I look forward to hearing back from you. Thanks, Brendan

OpenCV 3 runtime error "Invalid pointer to file storage" cv_bridge + third party library

I'm trying to publish the image of a [picozense camera](https://www.pico-interactive.com/zense) in a ROS node on **ROS Kinetic, Ubuntu 16.04**. I built a catkin package and included the camera's API (.h and .so files), which is OpenCV3-compatible. This works fine; I can call the API functions from the C++ source code. However, OpenCV crashes at runtime with an error that, according to most posts I found, indicates a conflict between OpenCV 2 and OpenCV 3. But my installed **OpenCV version is 3.3.1** and I don't have any OpenCV 2.x installed. (Also, I'm using cv_bridge, which [depends on OpenCV3](https://github.com/ros-perception/vision_opencv/blob/kinetic/cv_bridge/CMakeLists.txt) components.) The only OpenCV component I added to my dependencies and used in my code is *highgui*. If I leave it out, the node does not crash at all. But even with highgui the node sometimes runs successfully. How can I fix the error?

**OpenCV Runtime Error**

    rosusr@ubuntu:~/catkin_ws$ rosrun picozense publisher
    1
    OpenCV Error: Bad argument (Invalid pointer to file storage) in cvGetFileNodeByName, file /tmp/binarydeb/ros-kinetic-opencv3-3.3.1/modules/core/src/persistence.cpp, line 861
    terminate called after throwing an instance of 'cv::Exception'
      what():  /tmp/binarydeb/ros-kinetic-opencv3-3.3.1/modules/core/src/persistence.cpp:861: error: (-5) Invalid pointer to file storage in function cvGetFileNodeByName
    Aborted (core dumped)

**Running the node works sometimes** (the initialization just fails because the camera is not connected)

    rosusr@ubuntu:~/catkin_ws$ rosrun picozense publisher
    1
    2
    PsInitialize status: -1
    Initialize failed!
    Shutdown status: 0

**publisher.cpp** (the angle-bracket include names were eaten by the forum; from the code below they are presumably the following)

    #include <ros/ros.h>
    #include <iostream>
    #include <opencv2/highgui/highgui.hpp>
    #include "PicoZense_api.h"

    int main(int argc, char** argv){
        ros::init(argc, argv, "publisher");
        ros::NodeHandle nh;
        PsReturnStatus status;
        std::cout << "1" << std::endl;
        status = PsInitialize();
        std::cout << "2" << std::endl;
        std::cout << "PsInitialize status: " << status << std::endl;
        if (status != PsReturnStatus::PsRetOK){
            std::cout << "Initialize failed!" << std::endl;
            status = PsShutdown();
            std::cout << "Shutdown status: " << status << std::endl;
            return -1;
        }
        // ...
        // using opencv highgui etc. here
        // ...
        return 0;
    }

**CMakeLists.txt**

    cmake_minimum_required(VERSION 2.8.3)
    project(picozense)
    add_compile_options(-std=c++11)

    find_package(catkin REQUIRED COMPONENTS
      cv_bridge
      image_transport
      sensor_msgs
      message_generation
    )
    find_package(OpenCV 3 REQUIRED COMPONENTS highgui)

    generate_messages(DEPENDENCIES sensor_msgs)

    catkin_package(
      CATKIN_DEPENDS cv_bridge image_transport sensor_msgs message_runtime
    )

    include_directories(
      include  # picoZense API header file is here (/picozense/include/)
      ${catkin_INCLUDE_DIRS}
      ${OpenCV_INCLUDE_DIRS}
    )
    message(STATUS "catkin_include_dirs: " ${catkin_INCLUDE_DIRS})
    message(STATUS "opencv_include_dirs: " ${OpenCV_INCLUDE_DIRS})

    add_executable(publisher src/publisher.cpp)
    add_dependencies(publisher ${catkin_EXPORTED_TARGETS} ${${PROJECT_NAME}_EXPORTED_TARGETS})
    target_link_libraries(publisher
      ${catkin_LIBRARIES}
      ${OpenCV_LIBRARIES}
      ${PROJECT_SOURCE_DIR}/lib/x64/libpicozense_api.so
    )
    message(STATUS "opencv_libraries: " ${OpenCV_LIBRARIES})

**package.xml** (the XML tags were stripped by the forum; reconstructed from the remaining text, so the exact tag variants are a best guess)

    <?xml version="1.0"?>
    <package format="2">
      <name>picozense</name>
      <version>0.0.0</version>
      <description>The picozense package</description>
      <maintainer>rosusr</maintainer> <!-- email attribute lost in the post -->
      <license>TODO</license>
      <buildtool_depend>catkin</buildtool_depend>
      <build_depend>cv_bridge</build_depend>
      <build_depend>image_transport</build_depend>
      <build_depend>sensor_msgs</build_depend>
      <build_depend>message_generation</build_depend>
      <build_export_depend>cv_bridge</build_export_depend>
      <build_export_depend>image_transport</build_export_depend>
      <build_export_depend>sensor_msgs</build_export_depend>
      <build_export_depend>message_generation</build_export_depend>
      <exec_depend>cv_bridge</exec_depend>
      <exec_depend>image_transport</exec_depend>
      <exec_depend>sensor_msgs</exec_depend>
      <exec_depend>message_runtime</exec_depend>
    </package>

**ldd libpicozense_api.so**

    linux-vdso.so.1 =>  (0x00007ffc19ea8000)
    libpthread.so.0 => /lib/x86_64-linux-gnu/libpthread.so.0 (0x00007ff34274c000)
    libstdc++.so.6 => /usr/lib/x86_64-linux-gnu/libstdc++.so.6 (0x00007ff3423ca000)
    libm.so.6 => /lib/x86_64-linux-gnu/libm.so.6 (0x00007ff3420c1000)
    libgcc_s.so.1 => /lib/x86_64-linux-gnu/libgcc_s.so.1 (0x00007ff341eaa000)
    libc.so.6 => /lib/x86_64-linux-gnu/libc.so.6 (0x00007ff341ae0000)
    /lib64/ld-linux-x86-64.so.2 (0x000055f87df4e000)

I've already searched and tried a lot, but any help is very much appreciated!
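
One quick check that often pins down 2.x-vs-3.x mixing (offered as a diagnostic, not a confirmed fix for this case): list which OpenCV libraries the built executable actually loads, e.g.

    # Two different libopencv_core versions here would confirm mixed OpenCV builds.
    ldd devel/lib/picozense/publisher | grep opencv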