Hi,
I am working on a project and I need to find the pose of a pattern, but I don't know how to do it. I know that some calibration packages use pattern poses to calibrate the camera, so I tried to look inside one, but I found nothing (I have missed something for sure, but I don't know what).
Thank you in advance!
Bwaki
PS: I use an Orbbec Astra camera, already calibrated, on Ubuntu 16.04 with ROS1 Kinetic.
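For reference, a minimal sketch of one common approach, assuming a 9x6 chessboard with 25 mm squares (the intrinsics K and distortion D below are placeholders for your existing Astra calibration): detect the pattern corners with OpenCV and recover the pattern pose with solvePnP.

    import cv2
    import numpy as np

    # Placeholders: inner-corner count, square size, and the intrinsics
    # from your existing calibration.
    pattern_size = (9, 6)
    square_size = 0.025  # metres
    K = np.array([[525.0, 0.0, 320.0], [0.0, 525.0, 240.0], [0.0, 0.0, 1.0]])
    D = np.zeros(5)

    # 3D corner coordinates in the pattern's own frame (z = 0 plane)
    objp = np.zeros((pattern_size[0] * pattern_size[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:pattern_size[0], 0:pattern_size[1]].T.reshape(-1, 2)
    objp *= square_size

    gray = cv2.cvtColor(cv2.imread('frame.png'), cv2.COLOR_BGR2GRAY)
    found, corners = cv2.findChessboardCorners(gray, pattern_size)
    if found:
        # rvec/tvec describe the pattern's pose in the camera frame
        ok, rvec, tvec = cv2.solvePnP(objp, corners, K, D)
        R, _ = cv2.Rodrigues(rvec)
        print('pattern position in camera frame (m): %s' % tvec.ravel())

This is essentially what the calibration packages do internally for every view of the board.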
↧
How to find the pose of a pattern
↧
How to read the video data from a VPort P16-1MP-M12-IR camera
I have a rugged VPort P16-1MP-M12-IR camera which provides an HD (720p, 1280x720) video image and features an H.264/MJPEG IP dome design. In addition, the camera features EN 50155 compliance, vandal proofing, a rugged M12 Ethernet port, a built-in microphone, digital input, PoE power input, and an IR illuminator for day and night imaging.
Can anybody suggest a ROS driver package for it, in order to read the video image from the camera?
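In case no dedicated driver turns up, a rough sketch of a generic fallback, assuming the camera exposes an RTSP stream (the URL below is a placeholder; the real path is in the camera's documentation): republish the stream as a ROS image topic with OpenCV and cv_bridge.

    import cv2
    import rospy
    from cv_bridge import CvBridge
    from sensor_msgs.msg import Image

    rospy.init_node('vport_publisher')
    pub = rospy.Publisher('camera/image_raw', Image, queue_size=1)
    bridge = CvBridge()

    # Placeholder URL: the real RTSP path comes from the camera's manual/web UI
    cap = cv2.VideoCapture('rtsp://192.168.1.10/stream1')
    rate = rospy.Rate(30)
    while not rospy.is_shutdown():
        ok, frame = cap.read()
        if ok:
            msg = bridge.cv2_to_imgmsg(frame, encoding='bgr8')
            msg.header.stamp = rospy.Time.now()
            pub.publish(msg)
        rate.sleep()
    cap.release()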
↧
How to execute ROS commands on remote host via application?
Hello everyone,
I am a newbie in the ROS environment and I have a problem. I have a master node on my Raspberry Pi, which is connected to the same local network as my computer. I communicate with the RPi using an SSH client like PuTTY, or just in a terminal, in order to run ROS commands on it.
But I want to do the same thing from my own application: by clicking one button in the GUI, I want to establish an SSH connection, log in to the system, set camera parameters, switch between different cameras, run the cv_camera node on the remote host, and run a subscriber node on my local computer to display the camera frames.
How can I implement this? (I use Python and PyQt.) I thought about calling a shell script after clicking the button that would do all these tasks automatically, but maybe there is a simpler way to do it...
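A minimal sketch of one option, using the paramiko SSH library (the hostname, credentials, command, and ROS setup path below are all placeholders); a function like this can be wired directly to a PyQt button's clicked signal:

    import paramiko

    def run_remote(host, user, password, command):
        """Run one command on the remote host over SSH and return its output."""
        client = paramiko.SSHClient()
        client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
        client.connect(host, username=user, password=password)
        # Non-interactive shells skip ~/.bashrc, so source the ROS setup explicitly
        stdin, stdout, stderr = client.exec_command(
            'source /opt/ros/kinetic/setup.bash && ' + command)
        output = stdout.read()
        client.close()
        return output

    # e.g. wired to a button:
    # button.clicked.connect(
    #     lambda: run_remote('raspberrypi.local', 'pi', 'secret',
    #                        'rosrun cv_camera cv_camera_node'))

Note that stdout.read() blocks until the command exits, so a long-running node should instead be started in the background (e.g. with nohup) rather than waited on.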
↧
How can I use RESONON PIKA L hyperspectral camera with ROS?
Hi, I am using a RESONON PIKA L hyperspectral camera and I want to use it with ROS. Can someone give me a hint on how to do that? Is there a driver available for this camera?
↧
ZED Camera calibration file in YAML?
I have a ZED camera calibration file in .conf format. However, the ROS node requires a .yaml file as input. I am not sure how to write an equivalent .yaml file.
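For reference, the camera_info YAML that ROS camera nodes expect generally looks like the sketch below, one file per camera eye. Every number here is a placeholder that would have to be copied from the matching section of the ZED .conf (fx, fy, cx, cy and the distortion coefficients k1, k2, p1, p2, k3):

    image_width: 1280
    image_height: 720
    camera_name: zed_left
    camera_matrix:
      rows: 3
      cols: 3
      data: [700.0, 0.0, 640.0, 0.0, 700.0, 360.0, 0.0, 0.0, 1.0]
    distortion_model: plumb_bob
    distortion_coefficients:
      rows: 1
      cols: 5
      data: [-0.17, 0.02, 0.0, 0.0, 0.0]
    rectification_matrix:
      rows: 3
      cols: 3
      data: [1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0, 1.0]
    projection_matrix:
      rows: 3
      cols: 4
      data: [700.0, 0.0, 640.0, 0.0, 0.0, 700.0, 360.0, 0.0, 0.0, 0.0, 1.0, 0.0]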
↧
Camera plugin error (VirtualBox, Kinetic)
When I try to echo the image_raw topic of the camera, the simulation crashes:
rostopic echo /robotino/camera1/image_raw
WARNING: no messages received and simulated time is active. Is /clock being published?
[ INFO] [1531314909.385823119, 0.033000000]: Camera Plugin: Using the 'robotNamespace' param: '/'
[ INFO] [1531314909.403410859, 0.033000000]: Camera Plugin (ns = /), set to ""
[ INFO] [1531314909.709798404, 0.067000000]: waitForService: Service [/gazebo/set_physics_properties] is now available.
[ INFO] [1531314909.893404319, 0.197000000]: Physics dynamic reconfigure ready.
gzserver: /build/ogre-1.9-mqY1wq/ogre-1.9-1.9.0+dfsg1/OgreMain/src/OgreRenderSystem.cpp:546: virtual void Ogre::RenderSystem::setDepthBufferFor(Ogre::RenderTarget*): Assertion `bAttached && "A new DepthBuffer for a RenderTarget was created, but after creation" "it says it's incompatible with that RT"' failed.
Aborted (core dumped)
[gazebo-2] process has died [pid 13310, exit code 134, cmd /opt/ros/kinetic/lib/gazebo_ros/gzserver -e ode /home/ulysses/catkin_ws/src/robotino_gazebo/worlds/robotino.world __name:=gazebo __log:=/home/ulysses/.ros/log/6cd65eba-850c-11e8-804e-08002782662e/gazebo-2.log].
log file: /home/ulysses/.ros/log/6cd65eba-850c-11e8-804e-08002782662e/gazebo-2.log
↧
Finding pose in base coordinate when given input point on camera image
Hello,
I'm new to ROS, and I'm using ROS Indigo on Ubuntu 14.04.
With a mobile manipulator, I'm going to implement a teleoperation system that a user can operate from a distance.
In particular, for the manipulation part, I'm thinking of a scheme in which the user sees the robot's head camera view (on a remote PC) and determines the pose of the end-effector from it.
For example, if the user gives a specific input point (with a mouse click) on the image from the head camera, the system calculates the pose of that point in the base coordinate frame (so that the end-effector can be moved to that pose).
Therefore, I would like to ask about the validity of this plan, and, if possible, could you recommend the most appropriate method (or any package that I can apply)?
Thank you very much.
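The plan is valid in principle, provided there is a depth for the clicked pixel (e.g. from a registered depth image). A minimal sketch of the geometry, with the camera_info topic, the depth value, and the base frame name all as placeholders: project the pixel to a ray with image_geometry, scale it by the depth, and transform the point into the base frame with tf2.

    import rospy
    import tf2_ros
    import tf2_geometry_msgs  # registers PointStamped with tf2
    from image_geometry import PinholeCameraModel
    from sensor_msgs.msg import CameraInfo
    from geometry_msgs.msg import PointStamped

    rospy.init_node('pixel_to_base')
    model = PinholeCameraModel()
    model.fromCameraInfo(
        rospy.wait_for_message('/head_camera/camera_info', CameraInfo))

    tf_buffer = tf2_ros.Buffer()
    tf2_ros.TransformListener(tf_buffer)
    rospy.sleep(1.0)  # give the listener time to fill the buffer

    u, v = 320, 240   # the clicked pixel (placeholder)
    depth = 0.8       # metres; read from a registered depth image in practice

    ray = model.projectPixelTo3dRay((u, v))  # unit ray in the optical frame
    p = PointStamped()
    p.header.frame_id = model.tfFrame()
    p.header.stamp = rospy.Time(0)
    p.point.x = ray[0] * depth / ray[2]
    p.point.y = ray[1] * depth / ray[2]
    p.point.z = depth
    print(tf_buffer.transform(p, 'base_link'))  # point expressed in the base frame

Note that the pixel alone only pins down the 3D position; the end-effector orientation is a separate choice (e.g. approach along the camera ray, or straight down).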
↧
generating monocular data-set from a camera
I am implementing the ORB_SLAM2 library for a monocular camera, using ROS Lunar to process my own sequences. I want to ask: what is the best way to generate a dataset from a given camera? I have run some of the existing datasets, like TUM, KITTI, and EuROC, and now I want to create my own. I can record video from the USB camera and now want to create a dataset out of it to process with ORB_SLAM2.
[link text]( http://vision.in.tum.de/data/datasets/rgbd-dataset/download )
[link text](http://www.cvlibs.net/datasets/kitti/eval_odometry.php)
There is also a somewhat related question, with Python scripts, at the following link; I don't know the command to run them.
[link text](https://answers.ros.org/question/11537/creating-a-bag-file-out-of-a-image-sequence/)
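For reference, a rough sketch of one way to split a recorded video into a TUM-style sequence (an rgb/ directory plus an rgb.txt index of `timestamp filename` lines); the video file name and the fps fallback are placeholders:

    import cv2
    import os

    video = cv2.VideoCapture('capture.avi')  # placeholder file name
    fps = video.get(cv2.CAP_PROP_FPS) or 30.0
    if not os.path.isdir('dataset/rgb'):
        os.makedirs('dataset/rgb')

    with open('dataset/rgb.txt', 'w') as index:
        i = 0
        while True:
            ok, frame = video.read()
            if not ok:
                break
            stamp = i / fps
            name = 'rgb/%.6f.png' % stamp
            cv2.imwrite(os.path.join('dataset', name), frame)
            index.write('%.6f %s\n' % (stamp, name))  # TUM format: timestamp filename
            i += 1
    video.release()

ORB_SLAM2's TUM monocular example reads exactly this rgb.txt layout, together with a yaml file holding the camera intrinsics.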
↧
Unable to run a specific node in node manager
There is this camera node I can't run in node manager. It keeps crashing and is unable to detect any cameras. However, I am able to run it smoothly from the terminal.
Also, I am able to run other nodes (camera or otherwise) smoothly from the node manager.
↧
Fusion of Radar and Camera
Hi
Is there any developed package for the fusion of a TI radar and a ZED camera? I am looking for something like what is shown in the video below.
https://www.youtube.com/watch?v=sPQGtIXaXqA
Regards,
Saptarshi
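Whatever package is used, the usual first step is time-synchronizing the two streams. A minimal sketch with message_filters, assuming (as placeholders) that the radar publishes a PointCloud2 and the listed topic names:

    import rospy
    import message_filters
    from sensor_msgs.msg import Image, PointCloud2

    def fused_callback(image_msg, radar_msg):
        # Both messages arrive with approximately matching timestamps;
        # projecting radar returns into the image would happen here.
        rospy.loginfo('pair: %s / %s', image_msg.header.stamp, radar_msg.header.stamp)

    rospy.init_node('radar_camera_fusion')
    image_sub = message_filters.Subscriber('/zed/left/image_rect_color', Image)
    radar_sub = message_filters.Subscriber('/ti_radar/points', PointCloud2)  # placeholder
    sync = message_filters.ApproximateTimeSynchronizer(
        [image_sub, radar_sub], queue_size=10, slop=0.05)
    sync.registerCallback(fused_callback)
    rospy.spin()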
↧
Black Image: pylon_camera
I am using a Resonon Pika L, which has a pylon (hyperspectral) camera, and I am using it with the pylon ROS package. I followed the article at http://wiki.ros.org/pylon_camera to set up the package.
After running `rosrun pylon_camera pylon_camera_node`, I run `rosrun image_view image_view image:=/pylon_camera_node/image_raw`, but I only get a black image.
I also checked the data published on the `/pylon_camera_node/image_raw` topic: it is just a matrix full of 0's. So I am not sure how to see the images from the camera. I am new to ROS. I can see some gray-line images using the PylonViewerApp.
Any help in this regard would be highly appreciated.
↧
USB camera does not take any pictures when ROS is launched at system startup
I have been trying to take pictures using a USB cam connected to my Jetson TX2. My code works pretty well when I execute it from a terminal using the command `roslaunch my_ros my_launch_file.launch`: it takes the pictures and saves them to a particular path. However, when I run the launch file at system startup, the code runs and the USB camera is triggered (the light on the camera turns on), but no images are generated in the given path. Here is my code, along with explanations.
    import rospy
    from std_msgs.msg import Float32MultiArray
    import cv2
    import commands
    import time

    no_of_cameras = 2  # define number of attached cameras
    time.sleep(120)  # allow some time for the system to see the USB camera

    # this function returns the number of connected cameras
    def getCameraInfo():
        Res = commands.getoutput("ls -l /dev/video*")  # search for attached cameras
        CamCnt = Res.count("video")  # "video" occurs twice per camera in the listing
        return CamCnt / 2

    def process_image(frame, i):
        ###
        ### Image processing code comes here
        ###
        if i == 6:
            res = True
        else:
            res = False
        return res

    def talker():
        pub = rospy.Publisher('servo', Float32MultiArray, queue_size=10)
        rospy.init_node('talker', anonymous=True)
        # check for the number of cameras continuously
        while True:
            er = getCameraInfo()
            if er > 1:
                break
            time.sleep(5)
        cap = cv2.VideoCapture(1)  # the indicator light on my USB cam turns ON here
        rate1 = rospy.Rate(0.5)
        forward_slow = Float32MultiArray()
        dont_move = Float32MultiArray()
        forward_slow.data = [90, 84]
        dont_move.data = [90, 90]
        i = 0
        while not rospy.is_shutdown():
            # define path and name of the image
            image_name = "/home/nvidia/Desktop/my_ros/cam" + str(i) + ".png"
            ret, frame = cap.read()
            cv2.imwrite(image_name, frame)
            res = process_image(frame, i)  # image processing; not related to the problem
            if res:
                rospy.loginfo(dont_move)
                pub.publish(dont_move)
            else:
                rospy.loginfo(forward_slow)
                pub.publish(forward_slow)
            rospy.sleep(1)
            i = i + 1
            rate1.sleep()
        cap.release()  # the indicator light on my USB cam turns OFF here

    if __name__ == '__main__':
        try:
            talker()
        except rospy.ROSInterruptException:
            pass
What I am basically trying to do here is take pictures at predefined intervals and then process them. According to the result of the image processing, I will control the robot with some basic commands (go forward, don't move, ...). But I cannot figure out what sort of problem may be preventing the program from saving images when it is run automatically at system startup. Any comments are appreciated. (ROS distro: Kinetic)
↧
Adding a camera to a model in Gazebo? (Beginner)
Hello! I am very new to using things like Gazebo and ROS. I was wondering how one might take a model in Gazebo (like a TurtleBot or a quadrotor), attach a camera to it, and see the feed.
My guess is that I have to go into the model files and add camera code, but I am unfamiliar with that code and how it works. And how would I get the feed? Is there a simpler way to do this?
Can someone please guide me through this process? I have Gazebo 5, and I am running Ubuntu.
Thank you!
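For the model-file route, a rough sketch of the usual SDF addition (the names, pose, and plugin parameters are placeholders, and the gazebo_ros plugin assumes Gazebo was launched through ROS): paste a sensor like this inside one of the model's <link> elements.

    <sensor name="camera" type="camera">
      <pose>0.1 0 0.2 0 0 0</pose>
      <update_rate>30</update_rate>
      <camera>
        <horizontal_fov>1.047</horizontal_fov>
        <image>
          <width>640</width>
          <height>480</height>
        </image>
        <clip>
          <near>0.1</near>
          <far>100</far>
        </clip>
      </camera>
      <!-- gazebo_ros plugin republishes the frames as a ROS image topic -->
      <plugin name="camera_controller" filename="libgazebo_ros_camera.so">
        <cameraName>camera1</cameraName>
        <imageTopicName>image_raw</imageTopicName>
        <cameraInfoTopicName>camera_info</cameraInfoTopicName>
        <frameName>camera_link</frameName>
      </plugin>
    </sensor>

Then something like `rosrun image_view image_view image:=/camera1/image_raw` should show the feed (the exact topic name depends on the namespace).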
↧
Camera recommendation for depth measurement
Hi all,
First, thank you for taking the time to read this post. I'm a graduate student in mechanical engineering, currently assigned too many tasks related to computer science :( What we are trying to do is get the normal vectors of selected points located on one object, and then reconstruct the object from its visual information, captured either by cameras or by laser sensors. I'd really appreciate your help with the following questions.
1. We are looking for a good camera for depth measurement. I'm wondering if anyone knows the brand or model of the camera used in this [video](https://www.youtube.com/watch?v=iaF43Ze1oeI)? The project comes from Google's AI robotics work. Also, I'd really appreciate other camera recommendations for the same setup. Several people recommended Intel cameras, but our project manager requires us to use the best camera available. Are there better cameras that you would recommend?
2. [This](https://www.youtube.com/watch?v=iaF43Ze1oeI) camera (the link is the same as above) seems to be a monocular camera. Is there any reason why a monocular camera is used here?
3. I assume a laser scanner could provide better performance for depth measurement of the whole object under the same setup (a fixed camera placed on top of the robot manipulator). The question is: could we identify the selected points on the object? We could mark those selected points with different colors.
4. From my limited understanding, laser scanners provide point cloud information (please correct me if I'm wrong). If we want a better understanding of how to use point clouds, such as reconstructing a point cloud into the shape of the object, what classes would you recommend? I know there is the PCL library, but it looks kind of confusing to me. (A small reconstruction sketch follows at the end of this post.)
Thank you again for your help.
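On question 4, a minimal sketch of the normals-plus-reconstruction pipeline, using the Open3D library (0.10+ API) as a gentler alternative to raw PCL; the file name and parameter values are placeholders:

    import open3d as o3d

    pcd = o3d.io.read_point_cloud('scan.pcd')  # placeholder capture
    # Estimate a normal at every point from its local neighbourhood
    pcd.estimate_normals(
        search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.05, max_nn=30))
    # Poisson reconstruction turns the oriented cloud into a surface mesh
    mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
        pcd, depth=8)
    o3d.visualization.draw_geometries([mesh])

After estimate_normals, `pcd.normals` holds the per-point normal vectors, which is exactly the quantity asked about for the selected points.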
↧
Sensor topic Hz decreased when recording data with rosbag
Hello, experts.
I used rosbag to acquire our sensor data (camera and lidar).
The specs are below:
Camera Node: 1980x1208 (FHD, RGB), 20 Hz
Lidar Node: 64 channels, 20 Hz
I recorded the topics on a PC using rosbag, but the rates decreased.
(command: `$ rosbag record camera_topic lidar_points_topic`)
When I checked on another PC, the rates were as below:
Camera Node: 20 Hz -> 5~6 Hz
Lidar Node: 20 Hz -> 8~10 Hz
I don't know what the problem depends on.
How can I debug this problem?
Is there general information that would help with this kind of problem?
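One factor worth ruling out (an assumption, not a diagnosis): `rosbag record` buffers messages in RAM before writing them to disk, and an FHD camera plus a 64-channel lidar at 20 Hz can exceed both the default 256 MB buffer and the disk's write throughput, in which case messages are dropped. A quick test is recording with an unlimited buffer:

    $ rosbag record -b 0 camera_topic lidar_points_topic

If the rates recover, a faster disk or per-message compression (`--lz4`) are the usual mitigations.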
↧
How can I publish camera footage from an Insta360 camera?
Hi, I have used the usb_cam node for publishing a single camera's footage before. But when I connected the Insta360 camera and tried to publish it, I see nothing but a screen of dots on the top half and green on the bottom half. Can anyone please help me publish the live footage of the Insta360 camera in ROS?
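A half-green frame with noise often indicates that the driver is decoding the stream with the wrong pixel format or resolution. A sketch of the usb_cam parameters worth checking; the values are placeholders, and `v4l2-ctl --list-formats-ext` (from v4l-utils) lists the modes the camera actually offers:

    <launch>
      <node name="usb_cam" pkg="usb_cam" type="usb_cam_node" output="screen">
        <param name="video_device" value="/dev/video1"/>  <!-- placeholder -->
        <param name="image_width" value="1920"/>
        <param name="image_height" value="960"/>
        <param name="pixel_format" value="mjpeg"/>  <!-- try mjpeg vs yuyv -->
      </node>
    </launch>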
↧
Error while trying to install the ifm3d camera
ifm3d is a 3D camera; for more details, see: https://github.com/lovepark/ifm3d
When I run catkin_make_isolated I get these errors:
e/nvidia/catkin_ws/build_isolated/ifm3d'
-- ifm3d found component: camera
-- ifm3d found component: framegrabber
CMake Warning at /usr/lib/cmake/ifm3d-0.9.0/ifm3d-config.cmake:44 (message):
ifm3d could not find component: image
Call Stack (most recent call first):
CMakeLists.txt:4 (find_package)
CMake Error at CMakeLists.txt:4 (find_package):
Found package configuration file:
/usr/lib/cmake/ifm3d-0.9.0/ifm3d-config.cmake
but it set ifm3d_FOUND to FALSE so package "ifm3d" is considered to be NOT
FOUND.
-- Configuring incomplete, errors occurred!
See also "/home/nvidia/catkin_ws/build_isolated/ifm3d/CMakeFiles/CMakeOutput.log".
<== Failed to process package 'ifm3d':
Command '['/home/nvidia/catkin_ws/devel_isolated/gpio_control/env.sh', 'cmake', '/home/nvidia/catkin_ws/src/ifm3d-ros', '-DCATKIN_DEVEL_PREFIX=/home/nvidia/catkin_ws/devel_isolated/ifm3d', '-DCMAKE_INSTALL_PREFIX=/home/nvidia/catkin_ws/install_isolated', '-G', 'Unix Makefiles']' returned non-zero exit status 1
Reproduce this error by running:
==> cd /home/nvidia/catkin_ws/build_isolated/ifm3d && /home/nvidia/catkin_ws/devel_isolated/gpio_control/env.sh cmake /home/nvidia/catkin_ws/src/ifm3d-ros -DCATKIN_DEVEL_PREFIX=/home/nvidia/catkin_ws/devel_isolated/ifm3d -DCMAKE_INSTALL_PREFIX=/home/nvidia/catkin_ws/install_isolated -G 'Unix Makefiles'
Command failed, exiting.
catkin_make_isolated 44,82s user 24,52s system 140% cpu 49,351 total
And in ~/dev/ifm3d/build, I run `make check` and get these errors:
[ RUN ] Image.MoveCtor
unknown file: Failure
C++ exception with description "Lib: XMLRPC Timeout - can you `ping` the sensor?" thrown in the test body.
[ FAILED ] Image.MoveCtor (1035 ms)
[ RUN ] Image.MoveAssignmentOperator
unknown file: Failure
C++ exception with description "Lib: XMLRPC Timeout - can you `ping` the sensor?" thrown in the test body.
[ FAILED ] Image.MoveAssignmentOperator (2032 ms)
[ RUN ] Image.CopyCtor
unknown file: Failure
C++ exception with description "Lib: XMLRPC Timeout - can you `ping` the sensor?" thrown in the test body.
[ FAILED ] Image.CopyCtor (2022 ms)
[ RUN ] Image.CopyAssignmentOperator
unknown file: Failure
C++ exception with description "Lib: XMLRPC Timeout - can you `ping` the sensor?" thrown in the test body.
[ FAILED ] Image.CopyAssignmentOperator (2037 ms)
[ RUN ] Image.References
unknown file: Failure
C++ exception with description "Lib: XMLRPC Timeout - can you `ping` the sensor?" thrown in the test body.
[ FAILED ] Image.References (2031 ms)
[ RUN ] Image.CloudMechanics
unknown file: Failure
C++ exception with description "Lib: XMLRPC Timeout - can you `ping` the sensor?" thrown in the test body.
[ FAILED ] Image.CloudMechanics (2025 ms)
[ RUN ] Image.XYZImage
unknown file: Failure
C++ exception with description "Lib: XMLRPC Timeout - can you `ping` the sensor?" thrown in the test body.
[ FAILED ] Image.XYZImage (2026 ms)
[ RUN ] Image.ComputeCartesian
unknown file: Failure
C++ exception with description "Lib: XMLRPC Timeout - can you `ping` the sensor?" thrown in the test body.
[ FAILED ] Image.ComputeCartesian (2034 ms)
[ RUN ] Image.TimeStamp
unknown file: Failure
C++ exception with description "Lib: XMLRPC Timeout - can you `ping` the sensor?" thrown in the test body.
[ FAILED ] Image.TimeStamp (1019 ms)
[ RUN ] Image.IlluTemp
unknown file: Failure
C++ exception with description "Lib: XMLRPC Timeout - can you `ping` the sensor?" thrown in the test body.
[ FAILED ] Image.IlluTemp (1017 ms)
[----------] 10 tests from Image (17281 ms total)
[----------] Global test environment tear-down
[==========] 10 tests from 1 test case ran. (17281 ms total)
[ PASSED ] 0 tests.
[ FAILED ] 10 tests, listed below:
[ FAILED ] Image.MoveCtor
[ FAILED ] Image.MoveAssignmentOperator
[ FAILED ] Image.CopyCtor
[ FAILED ] Image.CopyAssignmentOperator
[ FAILED ] Image.References
[ FAILED ] Image.CloudMechanics
[ FAILED ] Image.XYZImage
[ FAILED ] Image.ComputeCartesian
[ FAILED ] Image.TimeStamp
[ FAILED ] Image.IlluTemp
10 FAILED TESTS
modules/image/test/CMakeFiles/check_image.dir/build.make:57: recipe for target 'modules/image/test/CMakeFiles/check_image' failed
make[3]: *** [modules/image/test/CMakeFiles/check_image] Error 1
CMakeFiles/Makefile2:751: recipe for target 'modules/image/test/CMakeFiles/check_image.dir/all' failed
make[2]: *** [modules/image/test/CMakeFiles/check_image.dir/all] Error 2
CMakeFiles/Makefile2:76: recipe for target 'CMakeFiles/check.dir/rule' failed
make[1]: *** [CMakeFiles/check.dir/rule] Error 2
Makefile:184: recipe for target 'check' failed
make: *** [check] Error 2
Do you guys understand my problem?
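Reading the two logs together: the CMake output says the installed ifm3d library was built without its `image` component, which the ROS wrapper's find_package requires, so ifm3d_FOUND ends up FALSE; and the `make check` failures are all XMLRPC timeouts, i.e. the test suite could not reach a sensor on the network (as the message itself suggests, check whether you can ping the camera). If the library exposes a CMake option to enable the image module (the flag name below is an assumption; check ifm3d's own README for the exact name and its OpenCV/PCL prerequisites), a rebuild would look roughly like:

    $ cd ~/dev/ifm3d/build
    $ cmake -DBUILD_MODULE_IMAGE=ON .. && make && sudo make install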
↧
swapping between two topics in rqt_image_view
Hello all,
I am a newbie to ROS. I am trying to visualize the image feed from two cameras in `rqt` using `image_view`. Conventionally, one needs two `image_view` windows, but given the space constraints of the screen, I want to display both images in a single window and visualize one camera at a time; i.e., when I click a button I want to see the image from `camera1`, and when I click another button I want to see the image from `camera2`. Could someone suggest a plain-sailing method to do this?
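One plain-sailing option is the stock `topic_tools mux` node (the topic names below are placeholders): multiplex both camera topics into one output topic, point a single viewer at that output, and have the buttons call the mux's `select` service.

    $ rosrun topic_tools mux image_mux/out camera1/image_raw camera2/image_raw mux:=image_mux
    $ rosrun rqt_image_view rqt_image_view image_mux/out
    $ rosservice call /image_mux/select camera2/image_raw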
↧
In Gazebo, my robot can't see AR tag
OS : Ubuntu 16.04
ROS : Kinetic
Simulator : Gazebo
I made an AR tag model using Blender (.dae)
and made an SDF file for Gazebo. I can see the AR tag in Gazebo.
----------
4 0 0 0 0 0 true 2 0 0 0 0 0 model://ar_tags_for_turtlebot3_exp_house/meshes/ar_tag_00.dae
----------
But my robot can't see the AR tag. It just sees the box!!!
In RViz it is the same. The robot can't sense any obstacle with the LIDAR, but with the camera the robot can see the box.
I don't know why. Please help me.
↧
How to take image from kinect and visualize using opencv?
I have a Kinect set up using OpenNI, and I want to use OpenCV for further processing. When I use imgmsg_to_cv2() and cv2.imshow, no image shows up and the terminal freezes. What am I missing?
    import rospy
    import cv2
    import cv_bridge
    from sensor_msgs.msg import Image

    class Frame:
        def __init__(self):
            self.bridge = cv_bridge.CvBridge()
            cv2.namedWindow("window", 1)
            self.image_sub = rospy.Subscriber('camera/rgb/image_raw', Image,
                                              self.img_callback)

        def img_callback(self, data):
            # Note: rospy runs this callback on a background thread; on some
            # systems cv2.imshow only works reliably from the main thread.
            image = self.bridge.imgmsg_to_cv2(data, desired_encoding='bgr8')
            cv2.imshow("window", image)
            cv2.waitKey(3)

    if __name__ == '__main__':
        rospy.init_node('image_subscriber', anonymous=True)
        frame = Frame()
        rospy.spin()
↧