Hi
I have mounted an Insta360 camera on a robot and I am using the gscam package to stream the video footage. When I launch it and try to view the footage wirelessly at the control station, I see a huge delay. How can I fix that delay?
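A hedged suggestion, not a confirmed fix: a large wireless delay often comes from shipping raw images over WiFi, which saturates the link and makes frames queue up. One thing worth trying is viewing the stream with a compressed transport on the station side, e.g.:

    rosrun image_view image_view image:=/camera/image_raw _image_transport:=compressed

where `/camera/image_raw` stands for whatever topic gscam publishes in your setup.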
↧
How to decrease delay in Gscam camera transmission wirelessly?
↧
get position of an object from image
My environment is Gazebo 7 with ROS Kinetic, and I work on a virtual robots project.
My robot model only supports a thermal camera and an RGB camera (I don't have a depth camera).
I can detect an object at position `x`, `y` in the robot's camera image, with width `w` and height `h`.
Now my question is: how can I find the position of that object in the map?
For example: if my robot position is (250, 250) and `x, y, w, h` are (72, 22, 23, 109) in the camera (the camera resolution is (240, 320)), how can I get the position of that object? (I know the true position is (245, 269).)
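Without a depth camera, the usual trick is to back-project the pixel to a ray with the pinhole model and intersect that ray with the ground plane, then transform the result into the map frame with TF. A minimal sketch, assuming known intrinsics (`fx`, `fy`, `cx`, `cy`), a known camera height above the floor, and an optical frame with +z forward and +y down (all of these are assumptions, not values from the question):

    import numpy as np

    def pixel_to_ground(u, v, fx, fy, cx, cy, cam_height):
        # Back-project the pixel into a ray in the camera optical frame.
        ray = np.array([(u - cx) / fx, (v - cy) / fy, 1.0])
        if ray[1] <= 0:
            return None  # ray points at or above the horizon, never hits the floor
        # Scale the ray until its downward (+y) component equals the camera height.
        point = ray * (cam_height / ray[1])
        # point = (lateral offset, height drop, forward distance) in the camera
        # frame; transform it with TF to get map coordinates.
        return point

    # e.g. the bottom-center of the box (72, 22, 23, 109), where the object
    # presumably touches the floor:
    # pixel_to_ground(72 + 23 / 2.0, 22 + 109, fx, fy, cx, cy, cam_height)

Use the bottom edge of the bounding box, since that is where the object meets the ground.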
↧
↧
Create a subscriber to an Image topic
Hi,
I'm trying to write a node that subscribes to an image topic; however, after almost copying the ROS tutorial, I keep getting an error.
The relevant parts of my code are:
ros::NodeHandle nh;
image_transport::ImageTransport it(nh);
image_transport::Subscriber sub;
sub=it.subscribe("/sensors/camera/top/image_raw",1, Imtest::callbackSubscriber);
And my callback is:
void Imtest::callbackSubscriber(const sensor_msgs::ImageConstPtr& msg)
The error I get is:
/home/kal3-5/workspaces/my_first_ws/src/imtest_ros_tool/src/imtest/imtest.cpp:41:83: error: invalid use of non-static member function
sub=it.subscribe("/sensors/camera/top/image_raw",1, Imtest::callbackSubscriber);
My version of ROS is Kinetic, Ubuntu 16.04
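A hedged note on the likely cause: a non-static member function cannot be passed as a plain callback. The image_transport overload for member callbacks takes a pointer-to-member plus the object instance, so the subscribe call would look like `sub = it.subscribe("/sensors/camera/top/image_raw", 1, &Imtest::callbackSubscriber, this);`.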
↧
calibrating virtual camera
hi everyone.
I spawn my robot model by launching a robot_spawn file. It's an SDF robot model, and it has a camera that uses the libgazebo_ros_openni_kinect.so driver.
The robot spawns successfully, I can get images from its topic, and the $(camera_topic)/camera_info topic is published successfully.
My question is: is the info published on camera_info correct? Should I calibrate my camera? How can I calibrate my virtual camera?
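A hedged note: the Gazebo camera plugins fill camera_info directly from the simulated pinhole model, so the published intrinsics are exact for the simulated sensor (typically with zero distortion) and no calibration should be needed. If you want to verify it anyway, you can spawn a checkerboard model in the world and run the standard tool against the simulated topics, e.g.:

    rosrun camera_calibration cameracalibrator.py --size 8x6 --square 0.108 image:=<your image topic> camera:=<your camera namespace>

where the board size and square dimensions must match the checkerboard you spawn.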
↧
get world coordinates from image coordinates
Hi everyone, I know maybe someone asked this question before, but I couldn't find my answer on this forum.
I have a calibrated virtual single camera in Gazebo 8 and ROS Kinetic. After applying some OpenCV filters, an object is now detected at `x = 79.0, y = 23.0, w = 28.0, h = 127.0` in image coordinates, and this info is published on the `/robot0/object` topic.
I'm using the `image_geometry` package to get the position of that object in the Gazebo simulated world. I also know the true position of that object, in this case `x = 245, y = 269`. I'm using the `gmapping` package, and the size of the map is set to `500, 500`.
This is my code that gets the object info from the `/robot0/object` topic and tries to estimate the object's position in the Gazebo simulated world:
    def get_object(array):
        global img_geo, robot_odometry
        u = array.data[0] + (array.data[2] // 2)
        v = array.data[1] + array.data[3]
        time = rospy.Time(0)
        listener = tf.listener.TransformListener()
        camera_point = img_geo.projectPixelTo3dRay(img_geo.rectifyPoint((u, v)))
        point_msg.pose.position.x = camera_point[0] + robot_odometry.pose.pose.position.x
        point_msg.pose.position.y = camera_point[1] + robot_odometry.pose.pose.position.y
        point_msg.pose.position.z = 0
        point_msg.pose.orientation.x = 0
        point_msg.pose.orientation.y = 0
        point_msg.pose.orientation.z = 0
        point_msg.pose.orientation.w = 1
        point_msg.header.frame_id = img_geo.tfFrame()
        point_msg.header.stamp = time
        try:
            listener.waitForTransform(img_geo.tfFrame(), 'map', time, rospy.Duration(1))
            tf_point = listener.transformPose('map', point_msg)
            print(convert_from_robot_to_map(tf_point.pose.position.y, tf_point.pose.position.x))
        except Exception:
            pass

    def convert_from_robot_to_map(robot_y, robot_x):
        global map_info
        map_x = (robot_x - map_info.info.origin.position.x) / map_info.info.resolution
        map_y = (robot_y - map_info.info.origin.position.y) / map_info.info.resolution
        return map_y, map_x
But unfortunately the printed result isn't correct. The printed info (`print(convert_from_robot_to_map(...))`) is `y = 250, x = 249`, which is almost the robot's position in the map (the robot position in this case is `x = 250, y = 250`). The `camera_point` from projecting the pixel to a 3D ray is `(-0.23228537548412242, 0.0925648488771315, 0.9682330572174006)`, and the `tf_point` values are `(-2.3449200226706126, 0.910935376894018)`.
I also passed my camera info to `img_geo.fromCameraInfo()`.
What is wrong in this code? What should I do to get the true coordinates of the object in the Gazebo world?
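A hedged observation: `projectPixelTo3dRay` returns a unit-scale direction in the camera optical frame, so adding its components directly to the robot's odometry mixes two different frames, and that hand-shifted pose is then transformed again by TF. The usual pattern is to express the point purely in the camera frame, scale the ray until it reaches the ground, and let TF handle the whole camera-to-map transform. A sketch, assuming the camera's height above the floor is known (`cam_height`) and the optical frame has +y pointing down (assumptions, not values from the question):

    ray = img_geo.projectPixelTo3dRay(img_geo.rectifyPoint((u, v)))
    scale = cam_height / ray[1]  # distance along the ray to the floor
    point_msg.header.frame_id = img_geo.tfFrame()
    point_msg.header.stamp = rospy.Time(0)
    # Express the point in the camera frame only; no odometry added by hand.
    point_msg.pose.position.x = ray[0] * scale
    point_msg.pose.position.y = ray[1] * scale
    point_msg.pose.position.z = ray[2] * scale
    point_msg.pose.orientation.w = 1
    tf_point = listener.transformPose('map', point_msg)  # now truly in the map frame

If the camera frame is connected to the map frame in the TF tree, the robot's pose is already accounted for by the transform. Also, creating the `TransformListener` inside the callback gives it almost no time to fill its buffer; it is usually created once, at startup.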
↧
↧
Depth Camera Plugin for Gazebo?
Which depth camera plugin is used for current versions of Gazebo (7/8)? I want something that publishes a depth image topic or a point cloud topic. I tried the Kinect plugin (from this tutorial: http://gazebosim.org/tutorials?tut=ros_depth_camera&cat=connect_ros), but it appears that it doesn't work with newer versions of Gazebo.
↧
Access Sony XC-56 camera on Fanuc robot using ROS
Hello,
I am trying to figure out how to use Fanuc's Sony camera with ROS. The robot runs on an R-30iB controller, and on the PC I am using ROS Kinetic. The PC and the controller communicate over an Ethernet cable. Is it possible to somehow stream the video from the controller to an IP address and use the camera as an IP camera? Is there any other option to get the camera image to the PC without using the Fanuc iRVision software? Has anyone already succeeded in getting the camera working with ROS using one of the available ROS camera drivers, or at least in getting the camera image to a PC running Linux?
Thanks!
↧
Best mapping technique to use with LiDAR distance sensor, camera and IMU data?
I have a Raspberry Pi with ROS Kinetic installed, along with a camera, a TeraRanger One distance sensor, and a PX4 flight controller providing IMU data. I want to map an indoor environment. Do you have any suggestions for a mapping algorithm to use here? The TeraRanger One only publishes sensor_msgs/Range, not LaserScan. Any suggestions for mapping techniques using one of these sensors, or a combination of them, are welcome.
↧
Set RViz camera/view on fuerte
Hi,
I'm writing an RViz plugin which allows the user to play rosbags and record them.
While playing the rosbags I would like to make it possible to do a tracking shot in the scene. Therefore I have to set the current view in RViz.
Is there a way to do this programmatically?
There are some articles which say it is possible with an RViz plugin, but it's never explained anywhere.
If it is not possible in a single plugin, is it possible via librviz?
Thanks in advance, best regards, Josch.
EDIT:
The node should run on fuerte and Ubuntu 12.04 LTS.
EDIT 2:
New question: How do I get an instance of the VisualizationManager? Creating my own requires a RenderPanel and a class implementing WindowManagerInterface.
↧
↧
No camera message received with tum_simulator
I am running tum_simulator on ROS Kinetic, Ubuntu 16.04.
When I run:
    $ roslaunch cvg_sim_gazebo ardrone_testworld.launch
I get the following warning:
xacro: Traditional processing is deprecated. Switch to --inorder processing!
To check for compatibility of your document, use option --check-order.
For more infos, see http://wiki.ros.org/xacro#Processing_Order
inconsistent namespace redefinitions for xmlns:xacro:
old: http://playerstage.sourceforge.net/gazebo/xmlschema/#interface
new: http://ros.org/wiki/xacro (/home/dyana/catkin_make/src/cvg_sim_gazebo/urdf/quadrotor_base.urdf.xacro)
inconsistent namespace redefinitions for xmlns:xacro:
old: http://playerstage.sourceforge.net/gazebo/xmlschema/#interface
new: http://ros.org/wiki/xacro (/home/dyana/catkin_make/src/cvg_sim_gazebo/urdf/sensors/sonar_sensor.urdf.xacro)
inconsistent namespace redefinitions for xmlns:xacro:
old: http://playerstage.sourceforge.net/gazebo/xmlschema/#interface
new: http://ros.org/wiki/xacro (/home/dyana/catkin_make/src/cvg_sim_gazebo/urdf/sensors/generic_camera.urdf.xacro)
inconsistent namespace redefinitions for xmlns:xacro:
old: http://playerstage.sourceforge.net/gazebo/xmlschema/#interface
new: http://ros.org/wiki/xacro (/home/dyana/catkin_make/src/cvg_sim_gazebo/urdf/sensors/generic_camera.urdf.xacro)
redefining global property: pi
when processing file: /home/dyana/catkin_make/src/cvg_sim_gazebo/urdf/quadrotor_sensors.urdf.xacro
xacro.py is deprecated; please use xacro instead
and when I run:
    $ rostopic echo /ardrone/front/camera_info
I get the following message:
    WARNING: no messages received and simulated time is active.
    Is /clock being published?
Is it because the camera xacro is not loaded properly?
I did not have these issues when running on ROS Indigo; they only appeared after I moved to Kinetic and Gazebo 7.14.
How do I resolve this?
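A hedged first check: the warning only says that no simulated clock is seen, so it is worth running `rostopic echo /clock` to confirm Gazebo is publishing time at all. If the simulation failed to start or is paused, every topic echo will print exactly this warning, independent of whether the camera xacro loaded.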
↧
openni2_launch ASUS Xtion 2 No devices connected
Hi everyone,
I attempted to connect an ASUS Xtion 2 with openni2.launch, but I could not.
I received the following output:
PARAMETERS
* /camera/camera_nodelet_manager/num_worker_threads: 4
* /camera/depth_rectify_depth/interpolation: 0
* /camera/driver/auto_exposure: True
* /camera/driver/auto_white_balance: True
* /camera/driver/color_depth_synchronization: False
* /camera/driver/depth_camera_info_url:
* /camera/driver/depth_frame_id: camera_depth_opti...
* /camera/driver/depth_registration: False
* /camera/driver/device_id: #1
* /camera/driver/rgb_camera_info_url:
* /camera/driver/rgb_frame_id: camera_rgb_optica...
* /rosdistro: indigo
* /rosversion: 1.11.21
NODES
/camera/
camera_nodelet_manager (nodelet/nodelet)
depth_metric (nodelet/nodelet)
depth_metric_rect (nodelet/nodelet)
depth_points (nodelet/nodelet)
depth_rectify_depth (nodelet/nodelet)
depth_registered_sw_metric_rect (nodelet/nodelet)
driver (nodelet/nodelet)
points_xyzrgb_sw_registered (nodelet/nodelet)
register_depth_rgb (nodelet/nodelet)
rgb_rectify_color (nodelet/nodelet)
/
camera_base_link (tf/static_transform_publisher)
camera_base_link1 (tf/static_transform_publisher)
camera_base_link2 (tf/static_transform_publisher)
camera_base_link3 (tf/static_transform_publisher)
auto-starting new master
process[master]: started with pid [24252]
ROS_MASTER_URI=http://localhost:11311
setting /run_id to d70c95f0-8c9d-11e7-98a5-9c4e36cd2e30
process[rosout-1]: started with pid [24272]
started core service [/rosout]
process[camera/camera_nodelet_manager-2]: started with pid [24283]
process[camera/driver-3]: started with pid [24289]
process[camera/rgb_rectify_color-4]: started with pid [24291]
process[camera/depth_rectify_depth-5]: started with pid [24292]
process[camera/depth_metric_rect-6]: started with pid [24311]
process[camera/depth_metric-7]: started with pid [24326]
process[camera/depth_points-8]: started with pid [24336]
process[camera/register_depth_rgb-9]: started with pid [24339]
process[camera/points_xyzrgb_sw_registered-10]: started with pid [24359]
process[camera/depth_registered_sw_metric_rect-11]: started with pid [24365]
process[camera_base_link-12]: started with pid [24376]
process[camera_base_link1-13]: started with pid [24403]
process[camera_base_link2-14]: started with pid [24413]
process[camera_base_link3-15]: started with pid [24439]
[ERROR] [1503999521.814253704]: Skipping XML Document "/opt/ros/indigo/share/swri_transform_util/nodelet_plugins.xml" which had no Root Element. This likely means the XML is malformed or missing.
[ INFO] [1503999521.821898422]: Initializing nodelet with 4 worker threads.
[ INFO] [1503999522.394933929]: No matching device found.... waiting for devices. Reason: std::string openni2_wrapper::OpenNI2Driver::resolveDeviceURI(const string&) @ /tmp/binarydeb/ros-indigo-openni2-camera-0.2.7/src/openni2_driver.cpp @ 631 : Invalid device number 1, there are 0 devices connected.
And here is the CMakeLists.txt in openni2_camera:
cmake_minimum_required(VERSION 2.8.3)
project(openni2_camera)
find_package(catkin REQUIRED camera_info_manager dynamic_reconfigure image_transport nodelet sensor_msgs roscpp message_generation)
find_package(Boost REQUIRED COMPONENTS system thread)
find_package(PkgConfig)
# pkg_check_modules(PC_OPENNI2 libOpenNi2)
# if (NOT PC_OPENNI2_FOUND)
# pkg_check_modules(PC_OPENNI2 REQUIRED openni2)
# endif()
generate_dynamic_reconfigure_options(cfg/OpenNI2.cfg)
add_service_files(FILES
GetSerial.srv)
generate_messages()
catkin_package(
INCLUDE_DIRS include
LIBRARIES openni2_wrapper
CATKIN_DEPENDS camera_info_manager dynamic_reconfigure image_transport nodelet sensor_msgs roscpp message_runtime
DEPENDS libOpenNI2
)
include_directories(include
${catkin_INCLUDE_DIRS}
${Boost_INCLUDE_DIRS}
/home/myname/xtion2/ASUS-Linux-x64-OpenNI2.2/Include
)
link_directories(/home/myname/xtion2/ASUS-Linux-x64-OpenNI2.2/Redist)
find_path(OPENNI2_INCLUDE_PATH NAMES OpenNI.h
PATHS
/home/myname/xtion2/ASUS-Linux-x64-OpenNI2.2/Include
)
find_library(OPENNI2_LIBRARY NAMES OpenNI2 libOpenNI2
PATHS
/home/myname/xtion2/ASUS-Linux-x64-OpenNI2.2/Redist
)
add_library(openni2_wrapper
src/openni2_convert.cpp
src/openni2_device.cpp
src/openni2_device_info.cpp
src/openni2_timer_filter.cpp
src/openni2_frame_listener.cpp
src/openni2_device_manager.cpp
src/openni2_exception.cpp
src/openni2_video_mode.cpp
)
target_link_libraries(openni2_wrapper ${catkin_LIBRARIES} ${OPENNI2_LIBRARY} ${Boost_LIBRARIES} )
add_executable(test_wrapper test/test_wrapper.cpp )
target_link_libraries(test_wrapper openni2_wrapper ${Boost_LIBRARIES})
add_library(openni2_driver_lib
src/openni2_driver.cpp
)
target_link_libraries(openni2_driver_lib openni2_wrapper ${catkin_LIBRARIES} ${Boost_LIBRARIES} )
add_dependencies(openni2_driver_lib ${PROJECT_NAME}_gencfg ${PROJECT_NAME}_generate_messages_cpp)
add_library(openni2_camera_nodelet
ros/openni2_camera_nodelet.cpp
)
target_link_libraries(openni2_camera_nodelet openni2_driver_lib ${catkin_LIBRARIES} ${Boost_LIBRARIES} )
add_dependencies(openni2_camera_nodelet ${PROJECT_NAME}_gencfg)
add_executable(openni2_camera_node
ros/openni2_camera_node.cpp
)
target_link_libraries(openni2_camera_node openni2_driver_lib ${catkin_LIBRARIES} ${Boost_LIBRARIES} )
add_dependencies(openni2_camera_node ${PROJECT_NAME}_gencfg ${PROJECT_NAME}_generate_messages_cpp)
add_executable(list_devices
src/list_devices.cpp
)
target_link_libraries(list_devices openni2_wrapper)
if (UNIX AND NOT APPLE)
add_executable(usb_reset src/usb_reset.c)
set(ADDITIONAL_EXECUTABLES "usb_reset")
endif()
install(TARGETS openni2_wrapper openni2_camera_nodelet openni2_camera_node list_devices openni2_driver_lib ${ADDITIONAL_EXECUTABLES}
ARCHIVE DESTINATION ${CATKIN_PACKAGE_LIB_DESTINATION}
LIBRARY DESTINATION ${CATKIN_PACKAGE_LIB_DESTINATION}
RUNTIME DESTINATION ${CATKIN_PACKAGE_BIN_DESTINATION}
)
install(DIRECTORY include/${PROJECT_NAME}/
DESTINATION ${CATKIN_PACKAGE_INCLUDE_DESTINATION}
)
# add xml file
install(FILES openni2_nodelets.xml
DESTINATION ${CATKIN_PACKAGE_SHARE_DESTINATION}
)
I could connect the Xtion 2 with NiViewer.
If you have managed to connect an Xtion 2 with openni2.launch, would you tell me how to do it, please?
Thank you.
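A hedged thing to check, since the CMakeLists.txt above builds against the ASUS SDK: make sure the node also loads the ASUS Redist libraries at runtime rather than the system libOpenNI2 (which, as far as I know, does not recognize the Xtion 2). For example, before launching:

    export LD_LIBRARY_PATH=/home/myname/xtion2/ASUS-Linux-x64-OpenNI2.2/Redist:$LD_LIBRARY_PATH

using the same Redist path as in your CMakeLists.txt.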
↧
How to Turn a Camera Down on Stageros
Hi. I have the following robot:
    define roomba_cam position
    (
      size [0.33 0.33 0.1]
      # this block approximates the circular shape of a Roomba
      block(
        points 16
        point[0] [ 0.225 0.000 ]
        point[1] [ 0.208 0.086 ]
        point[2] [ 0.159 0.159 ]
        point[3] [ 0.086 0.208 ]
        point[4] [ 0.000 0.225 ]
        point[5] [ -0.086 0.208 ]
        point[6] [ -0.159 0.159 ]
        point[7] [ -0.208 0.086 ]
        point[8] [ -0.225 0.000 ]
        point[9] [ -0.208 -0.086 ]
        point[10] [ -0.159 -0.159 ]
        point[11] [ -0.086 -0.208 ]
        point[12] [ -0.000 -0.225 ]
        point[13] [ 0.086 -0.208 ]
        point[14] [ 0.159 -0.159 ]
        point[15] [ 0.208 -0.086 ]
        z [0 0.1]
      )
      localization "gps"
      localization_origin [0 0 0 0]
      color "gray50"
      camera
      (
        origin [0 0 -0.11 0]
        # camera properties
        resolution [ 1280 720 ]
        range [ 0.2 9.0 ]
        fov [ 70.0 40.0 ]
        pantilt [ 0.0 0.0 ]
        # model properties
        size [ 0.1 0.07 0.05 ]
        color "black"
        watts 100.0 # TODO find watts for sony pan-tilt camera
      )
    )
I put the camera underneath the robot because I want to place it at the top of the simulation, and I need the camera to face down, but it is facing forward. Can anyone help me turn it to face down?
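A hedged guess, not verified against the Stage source: the camera model's `pantilt [ pan tilt ]` property takes angles in degrees, so something like `pantilt [ 0.0 90.0 ]` (or `-90.0`, depending on the sign convention) may tilt the camera toward the floor, which seems cleaner than mounting it with a negative z offset.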
↧
Raspicam_node stops working after 30 seconds
Hi all
I installed the raspicam_node package on a Raspberry Pi. The node starts up well and publishes the right data for around 30 seconds. After that, no data is transmitted any more, and there are no error messages in the console. I tested it with different network configurations, and also with all nodes running on one and the same machine, and it's always the same: after some time (10s - 60s) the camera stops working. Nevertheless, the light on the camera is still on. And when I use a node with rospy (a Python node), it works fine.
Can anyone help me? Thank you in advance.
↧
↧
hector quadrotor camera change
I am trying to change the camera that comes with hector_quadrotor_demo, which is a front-facing camera.
I noticed that the command `roslaunch hector_quadrotor_demo outdoor_flight_gazebo.launch` loads the model from the file quadrotor_hokuyo_utm30lx.urdf.xacro.
There, I see a front-facing camera.
Somewhere in the files I see that there is a downward camera, but when I include the code it never shows up in RViz.
The quadrotor_hokuyo_utm30lx.urdf.xacro file has this code:
But if I try to include the downward one, exactly as found in quadrotor_downward_cam.urdf.xacro:
It doesn't work.
I already tried modifying the files and changing the .urdf that is loaded, but no matter what I change, it always loads the same configuration. I also ran catkin_make in the ~/catkin_ws directory, and nothing.
I am using ROS Kinetic and Gazebo 1.7.
↧
How to execute ROS commands on remote host via application?
Hello everyone,
I am a newbie in the ROS environment and I have a problem. I have a master node on my Raspberry Pi, which is connected to the same local network. I communicate with the RPi using an SSH client like PuTTY, or just in a terminal, in order to run ROS commands on it.
But I want to do the same thing from my own application: by clicking one button in the GUI, I want to establish an SSH connection, log in to the system, set camera parameters, switch between different cameras, run the cv_camera node on the remote host, and run a subscriber node on my local computer to display camera frames.
How do I implement this? (I use Python and PyQt.) I thought about calling a shell script after clicking the button that would do all these tasks automatically, but maybe there is a simpler way to do it...
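One possible approach is the paramiko library, which gives you an SSH connection directly from Python, so no external script is needed. A minimal sketch; the hostname, credentials, and ROS distro path below are placeholder assumptions:

    import paramiko

    def start_remote_camera():
        # Open an SSH connection to the Pi (fill in your own host/credentials).
        client = paramiko.SSHClient()
        client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
        client.connect('raspberrypi.local', username='pi', password='secret')
        # Source the ROS environment, then start the camera node remotely.
        client.exec_command(
            'bash -c "source /opt/ros/kinetic/setup.bash && rosrun cv_camera cv_camera_node"')
        return client  # keep the connection (and the remote node) alive

In PyQt you would wire this to a button with `button.clicked.connect(start_remote_camera)`, ideally running it in a worker thread so the GUI does not block.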
↧
How to subcribe 2 image topics with 1 node in same time ?
**Hi buddy**,
My name is Toan. I am a newbie in ROS. I'm using a Kinect camera and trying to subscribe to 2 image topics with 1 node, but it seems something in my code is wrong. Subscribing to 1 topic with 1 node works fine; in detail, if I comment out case 1 or case 2, it works with 1 node and 1 topic.
What should I do now?
This is my code:
    #!/usr/bin/env python
    import sys
    import rospy
    import numpy as np
    import math
    from std_msgs.msg import String
    from sensor_msgs.msg import Image
    from cv_bridge import CvBridge, CvBridgeError
    import cv2

    #------Start of Class
    class image_converter:
        def __init__(self):
            self.bridge = CvBridge()
            self.image_sub = rospy.Subscriber("/kinect/rgb", Image, self.callback_rgb)      # Case 1
            self.image_sub = rospy.Subscriber("/kinect/depth", Image, self.callback_depth)  # Case 2

        def callback_rgb(self, data):
            cv_image_rgb = self.bridge.imgmsg_to_cv2(data, "bgr8")
            cv2.imshow("RGB_Sub", cv_image_rgb)
            if cv2.waitKey(1) & 0xFF == ord('q'):
                rospy.signal_shutdown('Quit')
                cv2.destroyAllWindows()

        def callback_depth(self, data):
            cv_image_depth = self.bridge.imgmsg_to_cv2(data, "bgr8")
            cv2.imshow("Depth_Sub", cv_image_depth)
            if cv2.waitKey(1) & 0xFF == ord('q'):
                rospy.signal_shutdown('Quit')
                cv2.destroyAllWindows()

    def main():
        img_cvt = image_converter()
        rospy.init_node('image_converter', anonymous=True)
        try:
            rospy.spin()
        except KeyboardInterrupt:
            print("Shutting down")

    if __name__ == '__main__':
        main()
Error:
(RGB_Sub:9479): Gtk-WARNING **: gtk_disable_setlocale() must be called before gtk_init()
[xcb] Unknown request in queue while dequeuing
[xcb] Most likely this is a multi-threaded client and XInitThreads has not been called
[xcb] Aborting, sorry about that.
python: ../../src/xcb_io.c:179: dequeue_pending_request: Assertion `!xcb_xlib_unknown_req_in_deq' failed.
Aborted (core dumped)
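A hedged diagnosis based on the `XInitThreads has not been called` line: each rospy subscriber runs its callback in its own thread, so with two topics you end up calling `cv2.imshow`/`cv2.waitKey` from two different threads, and the GUI backend is not thread-safe. The usual fix is to only store the images in the callbacks and do all drawing in the main thread. A sketch; the topic names and the 30 Hz display rate are taken from/assumed around your code:

    import rospy
    import cv2
    from sensor_msgs.msg import Image
    from cv_bridge import CvBridge

    class ImagePair(object):
        def __init__(self):
            self.bridge = CvBridge()
            self.rgb = None
            self.depth = None
            rospy.Subscriber("/kinect/rgb", Image, self.cb_rgb)
            rospy.Subscriber("/kinect/depth", Image, self.cb_depth)

        def cb_rgb(self, msg):    # runs in a subscriber thread: store only
            self.rgb = self.bridge.imgmsg_to_cv2(msg, "bgr8")

        def cb_depth(self, msg):  # runs in another subscriber thread: store only
            self.depth = self.bridge.imgmsg_to_cv2(msg, "bgr8")

    if __name__ == '__main__':
        rospy.init_node('image_pair_viewer')
        pair = ImagePair()
        rate = rospy.Rate(30)
        while not rospy.is_shutdown():   # all GUI calls happen in the main thread
            if pair.rgb is not None:
                cv2.imshow("RGB_Sub", pair.rgb)
            if pair.depth is not None:
                cv2.imshow("Depth_Sub", pair.depth)
            if cv2.waitKey(1) & 0xFF == ord('q'):
                break
            rate.sleep()
        cv2.destroyAllWindows()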
↧
↧
Wide angle fisheye camera for ROS
Hi, for my application I would like to use a fisheye camera with at least a 270-degree field of view. Could anyone recommend one with ROS support (or at least a C++ API) and a USB connection?
↧
lidar_camera_calibration stalls
I am using the following library to try to calibrate a Velodyne LiDAR to a PiCam RGB camera: https://github.com/ankitdhall/lidar_camera_calibration. It took me a while to configure, but now everything builds and runs; however, the program hangs. Here are the printouts I get when I run it:
user@computer$ roslaunch lidar_camera_calibration find_transform.launch
... logging to /home/manny/.ros/log/703a468e-d0dc-11e5-9811-b827eb0c153d/roslaunch-manny-13746.log
Checking log directory for disk usage. This may take awhile.
Press Ctrl-C to interrupt
Done checking log file disk usage. Usage is <1GB.
started roslaunch server http://manny:33471/
SUMMARY
========
PARAMETERS
* /aruco_mapping/calibration_file: /home/manny/Code/...
* /aruco_mapping/marker_size: 0.1016
* /aruco_mapping/num_of_markers: 1
* /aruco_mapping/roi_allowed: False
* /aruco_mapping/space_type: plane
* /lidar_camera_calibration/camera_frame_topic: /raspicam_node/im...
* /lidar_camera_calibration/camera_info_topic: /raspicam_node/ca...
* /lidar_camera_calibration/velodyne_topic: /velodyne_points
* /rosdistro: melodic
* /rosversion: 1.14.3
NODES
/
aruco_mapping (aruco_mapping/aruco_mapping)
find_transform (lidar_camera_calibration/find_transform)
ROS_MASTER_URI=http://stick:11311
process[aruco_mapping-1]: started with pid [13756]
process[find_transform-2]: started with pid [13757]
Size of the frame: 1280 x 960
Limits: -2.5 to 2.5
Limits: -4 to 4
Limits: 0 to 1.7
Number of markers: 1
Intensity threshold (between 0.0 and 1.0): 0.0004
useCameraInfo: 0
Projection matrix:
[981.17163, 0, 640, 0;
0, 981.17163, 480, 0;
0, 0, 1, 0]
MAX_ITERS: 100
[ INFO] [1547151515.113124565]: Reading CameraInfo from configuration file
[ INFO] [1547151515.790099973]: Calibration file path: /home/manny/Code/catkin_ws/src/aruco_mapping/data/picam_calibration.ini
[ INFO] [1547151515.848735811]: Number of markers: 1
[ INFO] [1547151515.848845526]: Marker Size: 0.1016
[ INFO] [1547151515.848888023]: Type of space: plane
[ INFO] [1547151515.848919640]: ROI allowed: 0
[ INFO] [1547151515.848945892]: ROI x-coor: 0
[ INFO] [1547151515.848975687]: ROI y-coor: 0
[ INFO] [1547151515.849009739]: ROI width: 0
[ INFO] [1547151515.849052101]: ROI height: 8096
[ INFO] [1547151515.881308433]: Calibration data loaded successfully
It seems to me that the calibration configuration files are being loaded correctly, but that the lidar_camera_calibration program cannot "latch onto" the ROS topics (I mean, connect to and receive the data from the ROS topics that are streaming data from the live devices, the LiDAR and the RGB camera). This is just a guess. Another possibility is that the program gives no further feedback until it finds the calibration board. If that is the case, it is a bit unfortunate, because it would be nice to know whether it can even read the data from the devices. At the moment, the devices cannot see the calibration board, but they could when I tried a few days ago.
When I list the ROS topics, I get the three that I think it needs, `/raspicam_node/camera_info`, `/raspicam_node/image/compressed`, and `/velodyne_points`.
user@computer$ rostopic list
/aruco_markers
/aruco_poses
/clicked_point
/diagnostics
/initialpose
/lidar_camera_calibration_rt
/move_base_simple/goal
/raspicam_node/camera_info
/raspicam_node/image/compressed
/raspicam_node/parameter_descriptions
/raspicam_node/parameter_updates
/rosout
/rosout_agg
/scan
/tf
/tf_static
/velodyne_nodelet_manager/bond
/velodyne_nodelet_manager_cloud/parameter_descriptions
/velodyne_nodelet_manager_cloud/parameter_updates
/velodyne_nodelet_manager_driver/parameter_descriptions
/velodyne_nodelet_manager_driver/parameter_updates
/velodyne_nodelet_manager_laserscan/parameter_descriptions
/velodyne_nodelet_manager_laserscan/parameter_updates
/velodyne_packets
/velodyne_points
`lidar_camera_calibration` doesn't crash. It just "sits there", cursor blinking.
I suppose my next step in troubleshooting will be to look more closely at their C++ code to see where it's hanging, and maybe add a bunch more printouts around there.
Any help will be greatly appreciated. Thank you.
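One hedged observation from the topic list above: only `/raspicam_node/image/compressed` exists, with no raw `/raspicam_node/image`. If aruco_mapping subscribes to the raw image topic, it will wait forever without printing an error. A possible workaround is to republish the compressed stream as a raw one with image_transport and point the configuration at the new topic:

    rosrun image_transport republish compressed in:=/raspicam_node/image raw out:=/raspicam_node/image_repub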
↧
[ROS2] image_transport and RVIZ2 Camera, something wrong
Hi,
using Rviz2 and image_transport with Crystal Clemmys, I noticed that the `Camera` plugin in Rviz2 is not aligned with how `image_transport::CameraPublisher` creates the topic structure.
For an image topic `image/data`, the `Camera` plugin expects the `camera_info` as `image/data/camera_info`, whereas `image_transport::CameraPublisher` publishes it as `image/camera_info` (as in ROS1).
Which is wrong, the Camera plugin or image_transport?
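Until the naming is reconciled, one hedged workaround is a tiny relay node that republishes the CameraInfo messages under the name the Rviz2 plugin expects. The topic names below follow the example above and are assumptions; note the sketch is written against a newer rclpy (the exact `create_publisher` signature changed around Crystal/Dashing, so adjust for your version):

    import rclpy
    from rclpy.node import Node
    from sensor_msgs.msg import CameraInfo

    class CameraInfoRelay(Node):
        # Republish camera_info under the name the Rviz2 Camera plugin expects.
        def __init__(self):
            super().__init__('camera_info_relay')
            self.pub = self.create_publisher(CameraInfo, 'image/data/camera_info', 10)
            self.sub = self.create_subscription(
                CameraInfo, 'image/camera_info', self.pub.publish, 10)

    def main():
        rclpy.init()
        rclpy.spin(CameraInfoRelay())
        rclpy.shutdown()

    if __name__ == '__main__':
        main()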
↧