Commit 693189b3 authored by Younes Oubkis

successful build

parent b125b26f
MIT License
Copyright (c) 2022 Kell ideas Ltd.
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
# ROS wrapper for ArUco Opencv module
## Quick start
1. Prepare the camera topics.
The ROS driver for the camera you are using should publish two topics: one with camera images (most commonly `image_raw`, `image_color`, `image_mono` or `image_rect`) and one with camera calibration data (`camera_info`). Both topics should be published in the same namespace. Let's assume the driver publishes on these topics:
```
/camera1/image_raw
/camera1/camera_info
```
> :warning: As of now the wrapper does not support multiple camera sources published on the same topics.
The camera should be calibrated (the better the calibration, the more accurate the marker pose estimation). If the camera is uncalibrated (the D, K, R and P matrices in the `camera_info` messages are filled with zeros) or the calibration is invalid, you should [recalibrate the camera](http://wiki.ros.org/camera_calibration/Tutorials/MonocularCalibration).
Also, make sure that the `frame_id` published with the Camera image headers is not empty.
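As a rough illustration of this checklist (not part of the package), the conditions above can be tested against the fields of a `camera_info` message; here the message is faked with a plain dict, with field names following `sensor_msgs/CameraInfo`:
```
def camera_info_ok(cam_info: dict) -> bool:
    """Return True if the calibration looks usable and frame_id is set."""
    if not cam_info["frame_id"]:
        return False  # an empty frame_id breaks TF publishing
    # An uncalibrated camera publishes a K matrix filled with zeros
    return any(v != 0.0 for v in cam_info["K"])

calibrated = {"frame_id": "camera1_optical",
              "K": [500.0, 0, 320, 0, 500.0, 240, 0, 0, 1]}
uncalibrated = {"frame_id": "camera1_optical", "K": [0.0] * 9}
print(camera_info_ok(calibrated), camera_info_ok(uncalibrated))
```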
2. Prepare the marker.
You can use the scripts in the `aruco_opencv` package to generate PDF files with ArUco markers.
Let's create a 0.2 x 0.2 m marker (including the white margin) with ID 0:
```
rosrun aruco_opencv create_marker -s 0.2 0
```
The output will look like this:
```
ArUco dictionary: 4X4_50
Inner marker bits: 4
Marker border bits: 1
Pixels per bit: 1
Margin pixels: 1
Marker side size: 0.1500 m - 6 pixels
Output image side size: 0.200 m - 8 pixels
Output DPI: 1.016
Generating marker with ID 0...
Converting images to pdf and writing to output file markers.pdf...
```
Please note the `Marker side size` value; it will be passed to the marker tracker as a parameter.
Make sure to print the marker at its original scale.
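The sizes in the output above follow directly from the pixel counts: the script picks a DPI so that the whole image (marker plus margins) spans the requested size. A minimal sketch of that arithmetic for the 0.2 m marker above:
```
# Reproduces the size math printed by create_marker for a 4X4 dictionary
# marker with 1 border bit, 1 pixel per bit and a 1-pixel white margin.
INCH_TO_METERS = 0.0254

inner_bits, border_bits, pixels_per_bit, margin_pixels = 4, 1, 1, 1
requested_size = 0.2  # the -s argument, in meters, including margins

marker_side_pixels = (inner_bits + 2 * border_bits) * pixels_per_bit  # 6
output_side_pixels = marker_side_pixels + 2 * margin_pixels           # 8
dpi = output_side_pixels / (requested_size / INCH_TO_METERS)
marker_size = (marker_side_pixels / dpi) * INCH_TO_METERS

print(f"Marker side size: {marker_size:.4f} m")  # 0.1500 - the tracker parameter
print(f"Output DPI: {dpi:.3f}")                  # 1.016
```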
3. Run the marker tracker.
Start the `aruco_opencv/ArucoTracker` nodelet with the `cam_base_topic` parameter set to the camera image topic name and `marker_size` set to the value noted in the previous step:
```
rosrun nodelet nodelet standalone aruco_opencv/ArucoTracker _cam_base_topic:=camera1/image_raw _marker_size:=0.15
```
If the images are rectified (undistorted), you should also set the `image_is_rectified` parameter to `true`:
```
rosrun nodelet nodelet standalone aruco_opencv/ArucoTracker _cam_base_topic:=camera1/image_raw _image_is_rectified:=true _marker_size:=0.15
```
4. Visualize the data.
For each received image, `aruco_tracker` will publish a message on the `aruco_detections` topic:
```
rostopic echo /aruco_detections
```
Put the marker in front of the camera. If the marker is detected, the `markers` array should contain the marker poses.
The marker poses are also published on TF. You can visualize the data in RViz by setting the fixed frame to the `frame_id` of the camera and adding a TF display.
On the `/aruco_tracker/debug` topic you can see the camera images with the frame axes of the detected markers drawn on top of them.
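The marker poses in these messages come from OpenCV rotation/translation vectors (`rvec`/`tvec`); the package converts them in its `convert_rvec_tvec` helper (declared in `utils.hpp`). A dependency-free sketch of the rotation part of such a conversion (the package's exact implementation may differ):
```
import math

def rvec_to_quaternion(rvec):
    """Convert an axis-angle rotation vector (OpenCV rvec) to (x, y, z, w)."""
    angle = math.sqrt(sum(c * c for c in rvec))
    if angle < 1e-12:
        return (0.0, 0.0, 0.0, 1.0)  # no rotation -> identity quaternion
    s = math.sin(angle / 2.0) / angle
    return (rvec[0] * s, rvec[1] * s, rvec[2] * s, math.cos(angle / 2.0))

# A marker rotated 90 degrees around the camera Z axis:
print(rvec_to_quaternion((0.0, 0.0, math.pi / 2)))
```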
BasedOnStyle: LLVM
ColumnLimit: 100
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Changelog for package aruco_opencv
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
0.3.1 (2023-02-22)
------------------
* Fix build for Debian Buster
* Contributors: Błażej Sowa
0.3.0 (2023-02-22)
------------------
* Add Board detection (`#6 <https://github.com/fictionlab/aruco_opencv/issues/6>`_)
* Rename SingleMarkerTracker to ArucoTracker
* Add BoardPose msg, change MarkerDetection to ArucoDetection
* Load board descriptions from yaml file
* Add more boards to example configuration
* Change default marker dictionary
* Add board pose estimation
* Lock camera info for board estimation
* Ignore duplicate image frames
* Add scripts for generating markers and boards
* Fix included headers file extensions
* Port changes from foxy branch
* Simplify filling the camera matrix from camera info
* Contributors: Błażej Sowa
0.2.0 (2022-09-07)
------------------
* Move message definitions to aruco_opencv_msgs package (`#2 <https://github.com/fictionlab/aruco_opencv/issues/2>`_)
* Fix build for Debian Buster
* Publish transforms from output_frame to markers on tf2
* Transform marker poses to specified output frame
* Allow changing camera info without restarting the tracker
* Add dynamically reconfigurable parameters for corner refinement
* Contributors: Błażej Sowa
0.1.0 (2022-07-07)
------------------
* Initial version of the package
* Contributors: Błażej Sowa
cmake_minimum_required(VERSION 3.0.2)
project(aruco_opencv)
find_package(catkin REQUIRED COMPONENTS
aruco_opencv_msgs
cv_bridge
dynamic_reconfigure
image_transport
nodelet
roscpp
tf2_geometry_msgs
tf2_ros
)
find_package(OpenCV ${OpenCV_VERSION} REQUIRED COMPONENTS aruco)
find_package(yaml-cpp REQUIRED)
generate_dynamic_reconfigure_options(
cfg/ArucoDetector.cfg
)
catkin_package()
include_directories(
include
${catkin_INCLUDE_DIRS}
${YAML_CPP_INCLUDE_DIRS}
)
add_library(${PROJECT_NAME}
src/aruco_tracker.cpp
src/utils.cpp
)
target_link_libraries(${PROJECT_NAME}
${catkin_LIBRARIES}
opencv_aruco
${YAML_CPP_LIBRARIES}
)
add_dependencies(${PROJECT_NAME}
${PROJECT_NAME}_gencfg
)
install(
TARGETS ${PROJECT_NAME}
LIBRARY DESTINATION ${CATKIN_PACKAGE_LIB_DESTINATION}
)
install(
PROGRAMS
scripts/create_board
scripts/create_marker
DESTINATION ${CATKIN_PACKAGE_BIN_DESTINATION}
)
install(FILES nodelet_plugins.xml
DESTINATION ${CATKIN_PACKAGE_SHARE_DESTINATION}
)
install(
DIRECTORY
config
launch
DESTINATION ${CATKIN_PACKAGE_SHARE_DESTINATION}
)
# aruco_opencv
A ROS package for detecting ArUco markers. The difference between this package and [aruco_ros](https://github.com/pal-robotics/aruco_ros) is that it uses the [aruco module](https://docs.opencv.org/4.x/d9/d6a/group__aruco.html) from OpenCV libraries instead of the [original ArUco library](https://www.uco.es/investiga/grupos/ava/node/26).
#!/usr/bin/env python3
from dynamic_reconfigure.parameter_generator_catkin import *
gen = ParameterGenerator()
gen.add("adaptiveThreshWinSizeMin", int_t, 0, "", 3, 3, 100)
gen.add("adaptiveThreshWinSizeMax", int_t, 0, "", 23, 3, 100)
gen.add("adaptiveThreshWinSizeStep", int_t, 0, "", 10, 1, 100)
gen.add("adaptiveThreshConstant", double_t, 0, "", 7.0, 0.0, 100.0)
gen.add("minMarkerPerimeterRate", double_t, 0, "", 0.03, 0.0, 4.0)
gen.add("maxMarkerPerimeterRate", double_t, 0, "", 4.0, 0.0, 4.0)
gen.add("polygonalApproxAccuracyRate", double_t, 0, "", 0.03, 0.0, 0.3)
gen.add("minCornerDistanceRate", double_t, 0, "", 0.05, 0.0, 0.25)
gen.add("minDistanceToBorder", int_t, 0, "", 3, 0, 100)
gen.add("minMarkerDistanceRate", double_t, 0, "", 0.05, 0.0, 0.25)
gen.add("markerBorderBits", int_t, 0, "", 1, 1, 3)
gen.add("perspectiveRemovePixelPerCell", int_t, 0, "", 4, 1, 20)
gen.add("perspectiveRemoveIgnoredMarginPerCell", double_t, 0, "", 0.13, 0.0, 0.5)
gen.add("maxErroneousBitsInBorderRate", double_t, 0, "", 0.35, 0.0, 1.0)
gen.add("minOtsuStdDev", double_t, 0, "", 5.0, 0.0, 30.0)
gen.add("errorCorrectionRate", double_t, 0, "", 0.6, 0.0, 1.0)
corner_refinement_enum = gen.enum(
[
gen.const("None", int_t, 0, ""),
gen.const("Subpix", int_t, 1, ""),
gen.const("Contour", int_t, 2, ""),
gen.const("Apriltag", int_t, 3, ""),
],
"",
)
gen.add(
"cornerRefinementMethod", int_t, 0, "", 0, 0, 3, edit_method=corner_refinement_enum
)
gen.add("cornerRefinementWinSize", int_t, 0, "", 5, 2, 10)
gen.add("cornerRefinementMaxIterations", int_t, 0, "", 30, 1, 100)
gen.add("cornerRefinementMinAccuracy", double_t, 0, "", 0.1, 0.01, 1.0)
exit(gen.generate("aruco_opencv", "aruco_tracker", "ArucoDetector"))
cam_base_topic: usb_cam/image_raw
image_is_rectified: false
output_frame: ''
marker_dict: ARUCO_ORIGINAL
publish_tf: true
marker_size: 0.0742
image_queue_size: 1
# Dynamically reconfigurable Detector parameters
# https://docs.opencv.org/4.2.0/d5/dae/tutorial_aruco_detection.html
adaptiveThreshWinSizeMin: 3
adaptiveThreshWinSizeMax: 23
adaptiveThreshWinSizeStep: 10
adaptiveThreshConstant: 7
minMarkerPerimeterRate: 0.03
maxMarkerPerimeterRate: 4.0
polygonalApproxAccuracyRate: 0.03
minCornerDistanceRate: 0.05
minDistanceToBorder: 3
minMarkerDistanceRate: 0.05
markerBorderBits: 1
perspectiveRemovePixelPerCell: 4
perspectiveRemoveIgnoredMarginPerCell: 0.13
maxErroneousBitsInBorderRate: 0.35
minOtsuStdDev: 5.0
errorCorrectionRate: 0.6
cornerRefinementMethod: 0 # 0 - None, 1 - Subpix, 2 - Contour, 3 - Apriltag
cornerRefinementWinSize: 5
cornerRefinementMaxIterations: 30
cornerRefinementMinAccuracy: 0.1
// Copyright 2022 Kell Ideas sp. z o.o.
//
// Permission is hereby granted, free of charge, to any person obtaining a copy
// of this software and associated documentation files (the "Software"), to deal
// in the Software without restriction, including without limitation the rights
// to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
// copies of the Software, and to permit persons to whom the Software is
// furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
// THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
// THE SOFTWARE.
#pragma once
#include <unordered_map>
#include <opencv2/aruco.hpp>
#include <opencv2/calib3d.hpp>
#include "geometry_msgs/Pose.h"
#include "cv_bridge/cv_bridge.h"
namespace aruco_opencv
{
geometry_msgs::Pose convert_rvec_tvec(const cv::Vec3d & rvec, const cv::Vec3d & tvec);
extern const std::unordered_map<std::string, cv::aruco::PREDEFINED_DICTIONARY_NAME> ARUCO_DICT_MAP;
} // namespace aruco_opencv
<launch>
<node name="aruco_tracker" pkg="nodelet" type="nodelet"
args="standalone aruco_opencv/ArucoTracker" output="screen">
<rosparam command="load"
file="$(find aruco_opencv)/config/aruco_tracker.yaml" />
<param name="board_descriptions_path"
value="$(find aruco_opencv)/config/board_descriptions.yaml" />
</node>
</launch>
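The launch file above points the tracker at a board descriptions file. Judging from the fields read in `load_boards()` (see `aruco_tracker.cpp`), each entry of the YAML sequence looks roughly like this (the values here are illustrative, not from the shipped configuration):
```
# Illustrative board description; field names taken from the loader code.
- name: board_1          # used as board_name in BoardPose messages
  first_id: 0            # ID of the first marker on the board
  markers_x: 2           # grid dimensions
  markers_y: 2
  marker_size: 0.1       # side of a single marker, in meters
  separation: 0.02       # gap between markers, in meters
  frame_at_center: true  # put the board frame at its center, not a corner
```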
<library path="lib/libaruco_opencv">
<class name="aruco_opencv/ArucoTracker" type="aruco_opencv::ArucoTracker" base_class_type="nodelet::Nodelet">
<description>
Tracks positions of ArUco markers and boards.
</description>
</class>
</library>
<?xml version="1.0"?>
<?xml-model
href="http://download.ros.org/schema/package_format3.xsd"
schematypens="http://www.w3.org/2001/XMLSchema"?>
<package format="3">
<name>aruco_opencv</name>
<version>0.3.1</version>
<description>
ArUco marker detection using aruco module from OpenCV libraries.
</description>
<maintainer email="support@fictionlab.pl">Fictionlab</maintainer>
<license>MIT</license>
<author email="blazej@fictionlab.pl">Błażej Sowa</author>
<buildtool_depend>catkin</buildtool_depend>
<build_depend>roscpp</build_depend>
<build_depend>image_transport</build_depend>
<build_depend>tf2_ros</build_depend>
<build_depend>tf2_geometry_msgs</build_depend>
<build_depend>cv_bridge</build_depend>
<build_depend>nodelet</build_depend>
<build_depend>dynamic_reconfigure</build_depend>
<build_depend>aruco_opencv_msgs</build_depend>
<build_depend>yaml-cpp</build_depend>
<exec_depend>roscpp</exec_depend>
<exec_depend>image_transport</exec_depend>
<exec_depend>tf2_ros</exec_depend>
<exec_depend>tf2_geometry_msgs</exec_depend>
<exec_depend>cv_bridge</exec_depend>
<exec_depend>nodelet</exec_depend>
<exec_depend>dynamic_reconfigure</exec_depend>
<exec_depend>yaml-cpp</exec_depend>
<exec_depend>python3-opencv</exec_depend>
<exec_depend>python3-numpy</exec_depend>
<exec_depend>python3-img2pdf</exec_depend>
<export>
<nodelet plugin="${prefix}/nodelet_plugins.xml" />
</export>
</package>
#!/usr/bin/env python3
from __future__ import annotations
import argparse
import numpy as np
import cv2
import img2pdf
INCH_TO_METERS = 0.0254
ARUCO_DICTS = {
"4X4_50": cv2.aruco.DICT_4X4_50,
"4X4_100": cv2.aruco.DICT_4X4_100,
"4X4_250": cv2.aruco.DICT_4X4_250,
"4X4_1000": cv2.aruco.DICT_4X4_1000,
"5X5_50": cv2.aruco.DICT_5X5_50,
"5X5_100": cv2.aruco.DICT_5X5_100,
"5X5_250": cv2.aruco.DICT_5X5_250,
"5X5_1000": cv2.aruco.DICT_5X5_1000,
"6X6_50": cv2.aruco.DICT_6X6_50,
"6X6_100": cv2.aruco.DICT_6X6_100,
"6X6_250": cv2.aruco.DICT_6X6_250,
"6X6_1000": cv2.aruco.DICT_6X6_1000,
"7X7_50": cv2.aruco.DICT_7X7_50,
"7X7_100": cv2.aruco.DICT_7X7_100,
"7X7_250": cv2.aruco.DICT_7X7_250,
"7X7_1000": cv2.aruco.DICT_7X7_1000,
"ARUCO_ORIGINAL": cv2.aruco.DICT_ARUCO_ORIGINAL,
"APRILTAG_16h5": cv2.aruco.DICT_APRILTAG_16h5,
"APRILTAG_25h9": cv2.aruco.DICT_APRILTAG_25h9,
"APRILTAG_36h10": cv2.aruco.DICT_APRILTAG_36h10,
"APRILTAG_36h11": cv2.aruco.DICT_APRILTAG_36h11,
}
def parse_args() -> argparse.Namespace:
parser = argparse.ArgumentParser(
formatter_class=argparse.ArgumentDefaultsHelpFormatter
)
parser.add_argument(
"-o",
dest="output",
type=str,
default="markers.pdf",
help="Path to output PDF file containing ArUco marker boards",
)
parser.add_argument(
"-d",
dest="dict",
type=str,
default="ARUCO_ORIGINAL",
choices=list(ARUCO_DICTS.keys()),
help="Dictionary of ArUco markers to use",
)
parser.add_argument(
"-b",
"--border-bits",
type=int,
default=1,
help="Number of border bits (black squares surrounding the inner marker)",
)
parser.add_argument(
"-p", "--pixels-per-bit", type=int, default=1, help="Pixels per marker bit"
)
parser.add_argument(
"-m",
"--margin-pixels",
type=int,
default=1,
help="Number of white margin pixels",
)
parser.add_argument(
"-s",
"--size",
type=float,
default=0.10,
help="Output board X axis size (with margins) in meters",
)
parser.add_argument(
"--sep",
"--separation-pixels",
dest="separation_pixels",
type=int,
default=2,
help="Number of white pixels between markers",
)
parser.add_argument(
"-x",
"--markers-x",
type=int,
default=2,
help="Number of markers in the X direction",
)
parser.add_argument(
"-y",
"--markers-y",
type=int,
default=2,
help="Number of markers in the Y direction",
)
parser.add_argument(
"id", type=int, nargs="+", help="ID of the first marker in the board"
)
return parser.parse_args()
def main():
args = parse_args()
aruco_dict = cv2.aruco.getPredefinedDictionary(ARUCO_DICTS[args.dict])
marker_side_pixels = (
aruco_dict.markerSize + 2 * args.border_bits
) * args.pixels_per_bit
output_x_pixels = (
marker_side_pixels * args.markers_x
+ args.separation_pixels * (args.markers_x - 1)
+ 2 * args.margin_pixels
)
output_y_pixels = (
marker_side_pixels * args.markers_y
+ args.separation_pixels * (args.markers_y - 1)
+ 2 * args.margin_pixels
)
dpi = output_x_pixels / (args.size / INCH_TO_METERS)
marker_size = (marker_side_pixels / dpi) * INCH_TO_METERS
separation_size = (args.separation_pixels / dpi) * INCH_TO_METERS
output_y_size = (output_y_pixels / dpi) * INCH_TO_METERS
print(f"ArUco dictionary: {args.dict}")
print(f"Inner marker bits: {aruco_dict.markerSize}")
print(f"Marker border bits: {args.border_bits}")
print(f"Pixels per bit: {args.pixels_per_bit}")
print(f"Margin pixels: {args.margin_pixels}")
print(f"Marker side size: {marker_size:.4f} m - {marker_side_pixels} pixels")
print(f"Separation size: {separation_size:.4f} m - {args.separation_pixels} pixels")
print(f"Board Size: {args.markers_x} x {args.markers_y} markers")
print(
f"Output image size: {args.size:.3f} x {output_y_size:.3f} m - {output_x_pixels} x {output_y_pixels} pixels"
)
print(f"Output DPI: {dpi:.3f}")
images = []
for marker_id in args.id:
print(f"Generating board with first marker ID {marker_id}...")
board = cv2.aruco.GridBoard_create(
args.markers_x,
args.markers_y,
marker_size,
separation_size,
aruco_dict,
marker_id,
)
marker = np.zeros((output_y_pixels, output_x_pixels), dtype="uint8")
board.draw(
(output_x_pixels, output_y_pixels),
marker,
args.margin_pixels,
args.border_bits,
)
img = cv2.imencode(".png", marker)[1].tobytes()
images.append(img)
print(f"Converting images to pdf and writing to output file {args.output}...")
my_layout_fun = img2pdf.get_fixed_dpi_layout_fun((dpi, dpi))
with open(args.output, "wb") as output_file:
output_file.write(img2pdf.convert(images, layout_fun=my_layout_fun))
if __name__ == "__main__":
main()
#!/usr/bin/env python3
from __future__ import annotations
import argparse
import numpy as np
import cv2
import img2pdf
ARUCO_DICTS = {
"4X4_50": cv2.aruco.DICT_4X4_50,
"4X4_100": cv2.aruco.DICT_4X4_100,
"4X4_250": cv2.aruco.DICT_4X4_250,
"4X4_1000": cv2.aruco.DICT_4X4_1000,
"5X5_50": cv2.aruco.DICT_5X5_50,
"5X5_100": cv2.aruco.DICT_5X5_100,
"5X5_250": cv2.aruco.DICT_5X5_250,
"5X5_1000": cv2.aruco.DICT_5X5_1000,
"6X6_50": cv2.aruco.DICT_6X6_50,
"6X6_100": cv2.aruco.DICT_6X6_100,
"6X6_250": cv2.aruco.DICT_6X6_250,
"6X6_1000": cv2.aruco.DICT_6X6_1000,
"7X7_50": cv2.aruco.DICT_7X7_50,
"7X7_100": cv2.aruco.DICT_7X7_100,
"7X7_250": cv2.aruco.DICT_7X7_250,
"7X7_1000": cv2.aruco.DICT_7X7_1000,
"ARUCO_ORIGINAL": cv2.aruco.DICT_ARUCO_ORIGINAL,
"APRILTAG_16h5": cv2.aruco.DICT_APRILTAG_16h5,
"APRILTAG_25h9": cv2.aruco.DICT_APRILTAG_25h9,
"APRILTAG_36h10": cv2.aruco.DICT_APRILTAG_36h10,
"APRILTAG_36h11": cv2.aruco.DICT_APRILTAG_36h11,
}
def parse_args() -> argparse.Namespace:
parser = argparse.ArgumentParser(
formatter_class=argparse.ArgumentDefaultsHelpFormatter
)
parser.add_argument(
"-o",
dest="output",
type=str,
default="markers.pdf",
help="Path to output PDF file containing ArUco markers",
)
parser.add_argument(
"-d",
dest="dict",
type=str,
default="ARUCO_ORIGINAL",
choices=list(ARUCO_DICTS.keys()),
help="Dictionary of ArUco markers to use",
)
parser.add_argument(
"-b",
"--border-bits",
type=int,
default=1,
help="Number of border bits (black squares surrounding the inner marker)",
)
parser.add_argument(
"-p", "--pixels-per-bit", type=int, default=1, help="Pixels per marker bit"
)
parser.add_argument(
"-m",
"--margin-pixels",
type=int,
default=1,
help="Number of white margin pixels",
)
parser.add_argument(
"-s",
"--size",
type=float,
default=0.10,
help="Output marker size (with margins) in meters",
)
parser.add_argument("id", type=int, nargs="+", help="ID of the marker to generate")
return parser.parse_args()
def main():
args = parse_args()
aruco_dict = cv2.aruco.getPredefinedDictionary(ARUCO_DICTS[args.dict])
marker_side_pixels = (
aruco_dict.markerSize + 2 * args.border_bits
) * args.pixels_per_bit
output_side_pixels = marker_side_pixels + 2 * args.margin_pixels
dpi = output_side_pixels / (args.size / 0.0254)
marker_size = (marker_side_pixels / dpi) * 0.0254
print(f"ArUco dictionary: {args.dict}")
print(f"Inner marker bits: {aruco_dict.markerSize}")
print(f"Marker border bits: {args.border_bits}")
print(f"Pixels per bit: {args.pixels_per_bit}")
print(f"Margin pixels: {args.margin_pixels}")
print(
f"Marker side size: {marker_size:.4f} m - {marker_side_pixels} pixels"
)
print(f"Output image side size: {args.size:.3f} m - {output_side_pixels} pixels")
print(f"Output DPI: {dpi:.3f}")
images = []
for marker_id in args.id:
print(f"Generating marker with ID {marker_id}...")
marker = np.zeros((marker_side_pixels, marker_side_pixels), dtype="uint8")
cv2.aruco.drawMarker(
aruco_dict, marker_id, marker_side_pixels, marker, args.border_bits
)
if args.margin_pixels > 0:
marker = np.pad(marker, args.margin_pixels, constant_values=255)
img = cv2.imencode(".png", marker)[1].tobytes()
images.append(img)
print(f"Converting images to pdf and writing to output file {args.output}...")
my_layout_fun = img2pdf.get_fixed_dpi_layout_fun((dpi, dpi))
with open(args.output, "wb") as output_file:
output_file.write(img2pdf.convert(images, layout_fun=my_layout_fun))
if __name__ == "__main__":
main()
// Copyright 2022 Kell Ideas sp. z o.o.
//
// Permission is hereby granted, free of charge, to any person obtaining a copy
// of this software and associated documentation files (the "Software"), to deal
// in the Software without restriction, including without limitation the rights
// to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
// copies of the Software, and to permit persons to whom the Software is
// furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
// THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
// THE SOFTWARE.
#include <mutex>
#include <opencv2/aruco.hpp>
#include <opencv2/calib3d.hpp>
#include <yaml-cpp/yaml.h>
#include <nodelet/nodelet.h>
#include <pluginlib/class_list_macros.h>
#include <ros/ros.h>
#include <cv_bridge/cv_bridge.h>
#include <dynamic_reconfigure/server.h>
#include <image_transport/camera_common.h>
#include <image_transport/image_transport.h>
#include <tf2_geometry_msgs/tf2_geometry_msgs.h>
#include <tf2_ros/transform_broadcaster.h>
#include <tf2_ros/transform_listener.h>
#include <aruco_opencv/ArucoDetectorConfig.h>
#include <aruco_opencv/utils.hpp>
#include <aruco_opencv_msgs/ArucoDetection.h>
namespace aruco_opencv {
class ArucoTracker : public nodelet::Nodelet {
// Parameters
std::string cam_base_topic_;
bool image_is_rectified_;
std::string output_frame_;
std::string marker_dict_;
bool transform_poses_;
bool publish_tf_;
double marker_size_;
int image_queue_size_;
std::string board_descriptions_path_;
// ROS
ros::Publisher detection_pub_;
ros::Subscriber cam_info_sub_;
ros::Time last_msg_stamp;
bool cam_info_retrieved_ = false;
image_transport::ImageTransport *it_;
image_transport::ImageTransport *pit_;
image_transport::Subscriber img_sub_;
image_transport::Publisher debug_pub_;
dynamic_reconfigure::Server<aruco_opencv::ArucoDetectorConfig> *dyn_srv_;
// Aruco
cv::Mat camera_matrix_;
cv::Mat distortion_coeffs_;
cv::Mat marker_obj_points_;
cv::Ptr<cv::aruco::DetectorParameters> detector_parameters_;
cv::Ptr<cv::aruco::Dictionary> dictionary_;
std::vector<std::pair<std::string, cv::Ptr<cv::aruco::Board>>> boards_;
// Thread safety
std::mutex cam_info_mutex_;
// Tf2
tf2_ros::Buffer tf_buffer_;
tf2_ros::TransformListener *tf_listener_;
tf2_ros::TransformBroadcaster *tf_broadcaster_;
public:
ArucoTracker()
: camera_matrix_(3, 3, CV_64FC1), distortion_coeffs_(4, 1, CV_64FC1, cv::Scalar(0)),
marker_obj_points_(4, 1, CV_32FC3) {}
private:
void onInit() override {
auto &nh = getNodeHandle();
auto &pnh = getPrivateNodeHandle();
detector_parameters_ = cv::aruco::DetectorParameters::create();
retrieve_parameters(pnh);
if (ARUCO_DICT_MAP.find(marker_dict_) == ARUCO_DICT_MAP.end()) {
ROS_ERROR_STREAM("Unsupported dictionary name: " << marker_dict_);
return;
}
dictionary_ = cv::aruco::getPredefinedDictionary(ARUCO_DICT_MAP.at(marker_dict_));
if (!board_descriptions_path_.empty())
load_boards();
dyn_srv_ = new dynamic_reconfigure::Server<aruco_opencv::ArucoDetectorConfig>(pnh);
dyn_srv_->setCallback(boost::bind(&ArucoTracker::reconfigure_callback, this, _1, _2));
if (transform_poses_)
tf_listener_ = new tf2_ros::TransformListener(tf_buffer_);
if (publish_tf_)
tf_broadcaster_ = new tf2_ros::TransformBroadcaster();
// set coordinate system in the middle of the marker, with Z pointing out
marker_obj_points_.ptr<cv::Vec3f>(0)[0] = cv::Vec3f(-marker_size_ / 2.f, marker_size_ / 2.f, 0);
marker_obj_points_.ptr<cv::Vec3f>(0)[1] = cv::Vec3f(marker_size_ / 2.f, marker_size_ / 2.f, 0);
marker_obj_points_.ptr<cv::Vec3f>(0)[2] = cv::Vec3f(marker_size_ / 2.f, -marker_size_ / 2.f, 0);
marker_obj_points_.ptr<cv::Vec3f>(0)[3] =
cv::Vec3f(-marker_size_ / 2.f, -marker_size_ / 2.f, 0);
it_ = new image_transport::ImageTransport(nh);
pit_ = new image_transport::ImageTransport(pnh);
detection_pub_ = nh.advertise<aruco_opencv_msgs::ArucoDetection>("aruco_detections", 5);
debug_pub_ = pit_->advertise("debug", 1);
NODELET_INFO("Waiting for first camera info...");
std::string cam_info_topic = image_transport::getCameraInfoTopic(cam_base_topic_);
cam_info_sub_ = nh.subscribe(cam_info_topic, 1, &ArucoTracker::callback_camera_info, this);
img_sub_ =
it_->subscribe(cam_base_topic_, image_queue_size_, &ArucoTracker::callback_image, this);
}
void retrieve_parameters(ros::NodeHandle &pnh) {
pnh.param<std::string>("cam_base_topic", cam_base_topic_, "camera/image_raw");
ROS_INFO_STREAM("Camera Base Topic: " << cam_base_topic_);
pnh.param<bool>("image_is_rectified", image_is_rectified_, false);
ROS_INFO_STREAM("Assume images are rectified: " << (image_is_rectified_ ? "YES" : "NO"));
pnh.param<std::string>("output_frame", output_frame_, "");
if (output_frame_.empty()) {
ROS_INFO("Marker detections will be published in the camera frame");
transform_poses_ = false;
} else {
ROS_INFO("Marker detections will be transformed to \'%s\' frame", output_frame_.c_str());
transform_poses_ = true;
}
pnh.param<std::string>("marker_dict", marker_dict_, "4X4_50");
ROS_INFO_STREAM("Marker Dictionary name: " << marker_dict_);
pnh.param<bool>("publish_tf", publish_tf_, true);
ROS_INFO_STREAM("TF publishing is " << (publish_tf_ ? "enabled" : "disabled"));
pnh.param<double>("marker_size", marker_size_, 0.15);
ROS_INFO_STREAM("Marker size: " << marker_size_);
pnh.param<int>("image_queue_size", image_queue_size_, 1);
ROS_INFO_STREAM("Image Queue size: " << image_queue_size_);
pnh.param<std::string>("board_descriptions_path", board_descriptions_path_, "");
}
void load_boards() {
ROS_INFO_STREAM("Trying to load board descriptions from " << board_descriptions_path_);
YAML::Node descriptions;
try {
descriptions = YAML::LoadFile(board_descriptions_path_);
} catch (const YAML::Exception &e) {
ROS_ERROR_STREAM("Failed to load board descriptions: " << e.what());
return;
}
if (!descriptions.IsSequence()) {
ROS_ERROR_STREAM("Failed to load board descriptions: root node is not a sequence");
return;
}
for (const YAML::Node &desc : descriptions) {
std::string name;
try {
name = desc["name"].as<std::string>();
const bool frame_at_center = desc["frame_at_center"].as<bool>();
const int markers_x = desc["markers_x"].as<int>();
const int markers_y = desc["markers_y"].as<int>();
const double marker_size = desc["marker_size"].as<double>();
const double separation = desc["separation"].as<double>();
auto board = cv::aruco::GridBoard::create(markers_x, markers_y, marker_size, separation,
dictionary_, desc["first_id"].as<int>());
if (frame_at_center) {
double offset_x = (markers_x * (marker_size + separation) - separation) / 2.0;
double offset_y = (markers_y * (marker_size + separation) - separation) / 2.0;
for (auto &obj : board->objPoints) {
for (auto &point : obj) {
point.x -= offset_x;
point.y -= offset_y;
}
}
}
boards_.push_back(std::make_pair(name, board));
} catch (const YAML::Exception &e) {
ROS_ERROR_STREAM("Failed to load board '" << name << "': " << e.what());
continue;
}
ROS_INFO_STREAM("Successfully loaded configuration for board '" << name << "'");
}
}
void reconfigure_callback(aruco_opencv::ArucoDetectorConfig &config, uint32_t level) {
if (config.adaptiveThreshWinSizeMax < config.adaptiveThreshWinSizeMin)
config.adaptiveThreshWinSizeMax = config.adaptiveThreshWinSizeMin;
if (config.maxMarkerPerimeterRate < config.minMarkerPerimeterRate)
config.maxMarkerPerimeterRate = config.minMarkerPerimeterRate;
detector_parameters_->adaptiveThreshWinSizeMin = config.adaptiveThreshWinSizeMin;
detector_parameters_->adaptiveThreshWinSizeMax = config.adaptiveThreshWinSizeMax;
detector_parameters_->adaptiveThreshWinSizeStep = config.adaptiveThreshWinSizeStep;
detector_parameters_->adaptiveThreshConstant = config.adaptiveThreshConstant;
detector_parameters_->minMarkerPerimeterRate = config.minMarkerPerimeterRate;
detector_parameters_->maxMarkerPerimeterRate = config.maxMarkerPerimeterRate;
detector_parameters_->polygonalApproxAccuracyRate = config.polygonalApproxAccuracyRate;
detector_parameters_->minCornerDistanceRate = config.minCornerDistanceRate;
detector_parameters_->minDistanceToBorder = config.minDistanceToBorder;
detector_parameters_->minMarkerDistanceRate = config.minMarkerDistanceRate;
detector_parameters_->markerBorderBits = config.markerBorderBits;
detector_parameters_->perspectiveRemovePixelPerCell = config.perspectiveRemovePixelPerCell;
detector_parameters_->perspectiveRemoveIgnoredMarginPerCell =
config.perspectiveRemoveIgnoredMarginPerCell;
detector_parameters_->maxErroneousBitsInBorderRate = config.maxErroneousBitsInBorderRate;
detector_parameters_->minOtsuStdDev = config.minOtsuStdDev;
detector_parameters_->errorCorrectionRate = config.errorCorrectionRate;
#if CV_VERSION_MAJOR >= 4
detector_parameters_->cornerRefinementMethod = config.cornerRefinementMethod;
#else
detector_parameters_->doCornerRefinement = config.cornerRefinementMethod == 1;
#endif
detector_parameters_->cornerRefinementWinSize = config.cornerRefinementWinSize;
detector_parameters_->cornerRefinementMaxIterations = config.cornerRefinementMaxIterations;
detector_parameters_->cornerRefinementMinAccuracy = config.cornerRefinementMinAccuracy;
}
void callback_camera_info(const sensor_msgs::CameraInfo &cam_info) {
std::lock_guard<std::mutex> guard(cam_info_mutex_);
if (image_is_rectified_) {
for (int i = 0; i < 9; ++i)
camera_matrix_.at<double>(i / 3, i % 3) = cam_info.P[i];
} else {
for (int i = 0; i < 9; ++i)
camera_matrix_.at<double>(i / 3, i % 3) = cam_info.K[i];
distortion_coeffs_ = cv::Mat(cam_info.D, true);
}
if (!cam_info_retrieved_) {
NODELET_INFO("First camera info retrieved.");
cam_info_retrieved_ = true;
}
}
  void callback_image(const sensor_msgs::ImageConstPtr &img_msg) {
    ROS_DEBUG_STREAM("Image message address [SUBSCRIBE]:\t" << img_msg.get());

    if (!cam_info_retrieved_)
      return;

    if (img_msg->header.stamp == last_msg_stamp) {
      ROS_DEBUG("The new image has the same timestamp as the previous one (duplicate frame?). "
                "Ignoring...");
      return;
    }
    last_msg_stamp = img_msg->header.stamp;

    auto callback_start_time = ros::Time::now();

    // Convert the image
    auto cv_ptr = cv_bridge::toCvShare(img_msg);

    std::vector<int> marker_ids;
    std::vector<std::vector<cv::Point2f>> marker_corners;
    cv::aruco::detectMarkers(cv_ptr->image, dictionary_, marker_corners, marker_ids,
                             detector_parameters_);

    int n_markers = marker_ids.size();
    std::vector<cv::Vec3d> rvec_final(n_markers), tvec_final(n_markers);

    aruco_opencv_msgs::ArucoDetection detection;
    detection.header.frame_id = img_msg->header.frame_id;
    detection.header.stamp = img_msg->header.stamp;
    detection.markers.resize(n_markers);
    {
      std::lock_guard<std::mutex> guard(cam_info_mutex_);
#if CV_VERSION_MAJOR >= 4
      cv::parallel_for_(cv::Range(0, n_markers), [&](const cv::Range &range) {
#else
      // Pre-4.x OpenCV has no lambda-based parallel_for_, so run the same loop
      // sequentially. The GNU statement expression `({ ... });` lets both
      // preprocessor branches share the closing `});` below.
      const cv::Range range = cv::Range(0, n_markers);
      ({
#endif
        for (int i = range.start; i < range.end; i++) {
          int id = marker_ids[i];
#if CV_VERSION_MAJOR >= 4
          cv::solvePnP(marker_obj_points_, marker_corners[i], camera_matrix_, distortion_coeffs_,
                       rvec_final[i], tvec_final[i], false, cv::SOLVEPNP_IPPE_SQUARE);
#else
          cv::solvePnP(marker_obj_points_, marker_corners[i], camera_matrix_, distortion_coeffs_,
                       rvec_final[i], tvec_final[i], false, cv::SOLVEPNP_ITERATIVE);
#endif
          detection.markers[i].marker_id = id;
          detection.markers[i].pose = convert_rvec_tvec(rvec_final[i], tvec_final[i]);
        }
      });
      for (const auto &board_desc : boards_) {
        std::string name = board_desc.first;
        auto &board = board_desc.second;
        cv::Vec3d rvec, tvec;
        int valid = cv::aruco::estimatePoseBoard(marker_corners, marker_ids, board, camera_matrix_,
                                                 distortion_coeffs_, rvec, tvec);
        if (valid > 0) {
          aruco_opencv_msgs::BoardPose bpose;
          bpose.board_name = name;
          bpose.pose = convert_rvec_tvec(rvec, tvec);
          detection.boards.push_back(bpose);
          rvec_final.push_back(rvec);
          tvec_final.push_back(tvec);
          n_markers++;
        }
      }
    }

    if (transform_poses_ && n_markers > 0) {
      detection.header.frame_id = output_frame_;
      geometry_msgs::TransformStamped cam_to_output;
      // Retrieve camera -> output_frame transform
      try {
        cam_to_output = tf_buffer_.lookupTransform(output_frame_, img_msg->header.frame_id,
                                                   img_msg->header.stamp, ros::Duration(1.0));
      } catch (tf2::TransformException &ex) {
        ROS_WARN("%s", ex.what());
        return;
      }
      for (auto &marker_pose : detection.markers)
        tf2::doTransform(marker_pose.pose, marker_pose.pose, cam_to_output);
      for (auto &board_pose : detection.boards)
        tf2::doTransform(board_pose.pose, board_pose.pose, cam_to_output);
    }

    if (publish_tf_ && n_markers > 0) {
      std::vector<geometry_msgs::TransformStamped> transforms;
      for (auto &marker_pose : detection.markers) {
        geometry_msgs::TransformStamped transform;
        transform.header.stamp = detection.header.stamp;
        transform.header.frame_id = detection.header.frame_id;
        transform.child_frame_id = std::string("marker_") + std::to_string(marker_pose.marker_id);
        tf2::Transform tf_transform;
        tf2::fromMsg(marker_pose.pose, tf_transform);
        transform.transform = tf2::toMsg(tf_transform);
        transforms.push_back(transform);
      }
      for (auto &board_pose : detection.boards) {
        geometry_msgs::TransformStamped transform;
        transform.header.stamp = detection.header.stamp;
        transform.header.frame_id = detection.header.frame_id;
        transform.child_frame_id = std::string("board_") + board_pose.board_name;
        tf2::Transform tf_transform;
        tf2::fromMsg(board_pose.pose, tf_transform);
        transform.transform = tf2::toMsg(tf_transform);
        transforms.push_back(transform);
      }
      tf_broadcaster_->sendTransform(transforms);
    }
    detection_pub_.publish(detection);

    if (debug_pub_.getNumSubscribers() > 0) {
      auto debug_cv_ptr = cv_bridge::toCvCopy(img_msg, "bgr8");
      cv::aruco::drawDetectedMarkers(debug_cv_ptr->image, marker_corners, marker_ids);
      {
        std::lock_guard<std::mutex> guard(cam_info_mutex_);
        // n_markers is an int; match its type to avoid a signed/unsigned comparison
        for (int i = 0; i < n_markers; i++) {
#if CV_VERSION_MAJOR >= 4
          cv::drawFrameAxes(debug_cv_ptr->image, camera_matrix_, distortion_coeffs_, rvec_final[i],
                            tvec_final[i], 0.2, 3);
#else
          cv::aruco::drawAxis(debug_cv_ptr->image, camera_matrix_, distortion_coeffs_,
                              rvec_final[i], tvec_final[i], 0.2);
#endif
        }
      }
      debug_pub_.publish(debug_cv_ptr->toImageMsg());
    }
    auto callback_end_time = ros::Time::now();
    double whole_callback_duration = (callback_end_time - callback_start_time).toSec();
    double image_send_duration = (callback_start_time - img_msg->header.stamp).toSec();

    NODELET_DEBUG("Image callback completed. The callback started %.4f s after "
                  "the image frame was grabbed and completed its execution in %.4f s.",
                  image_send_duration, whole_callback_duration);
  }
};

}  // namespace aruco_opencv

PLUGINLIB_EXPORT_CLASS(aruco_opencv::ArucoTracker, nodelet::Nodelet)
// Copyright 2022 Kell Ideas sp. z o.o.
//
// Permission is hereby granted, free of charge, to any person obtaining a copy
// of this software and associated documentation files (the "Software"), to deal
// in the Software without restriction, including without limitation the rights
// to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
// copies of the Software, and to permit persons to whom the Software is
// furnished to do so, subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in
// all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
// FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL
// THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
// LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
// OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
// THE SOFTWARE.
#include "tf2/convert.h"
#include "tf2_geometry_msgs/tf2_geometry_msgs.h"

#include "aruco_opencv/utils.hpp"

namespace aruco_opencv {

using cv::aruco::PREDEFINED_DICTIONARY_NAME;

geometry_msgs::Pose convert_rvec_tvec(const cv::Vec3d &rvec, const cv::Vec3d &tvec) {
  geometry_msgs::Pose pose_out;

  cv::Mat rot(3, 3, CV_64FC1);
  cv::Rodrigues(rvec, rot);

  tf2::Matrix3x3 tf_rot(rot.at<double>(0, 0), rot.at<double>(0, 1), rot.at<double>(0, 2),
                        rot.at<double>(1, 0), rot.at<double>(1, 1), rot.at<double>(1, 2),
                        rot.at<double>(2, 0), rot.at<double>(2, 1), rot.at<double>(2, 2));
  tf2::Quaternion tf_quat;
  tf_rot.getRotation(tf_quat);

  pose_out.position.x = tvec[0];
  pose_out.position.y = tvec[1];
  pose_out.position.z = tvec[2];
  tf2::convert(tf_quat, pose_out.orientation);

  return pose_out;
}
const std::unordered_map<std::string, cv::aruco::PREDEFINED_DICTIONARY_NAME> ARUCO_DICT_MAP = {
    {"4X4_50", PREDEFINED_DICTIONARY_NAME::DICT_4X4_50},
    {"4X4_100", PREDEFINED_DICTIONARY_NAME::DICT_4X4_100},
    {"4X4_250", PREDEFINED_DICTIONARY_NAME::DICT_4X4_250},
    {"4X4_1000", PREDEFINED_DICTIONARY_NAME::DICT_4X4_1000},
    {"5X5_50", PREDEFINED_DICTIONARY_NAME::DICT_5X5_50},
    {"5X5_100", PREDEFINED_DICTIONARY_NAME::DICT_5X5_100},
    {"5X5_250", PREDEFINED_DICTIONARY_NAME::DICT_5X5_250},
    {"5X5_1000", PREDEFINED_DICTIONARY_NAME::DICT_5X5_1000},
    {"6X6_50", PREDEFINED_DICTIONARY_NAME::DICT_6X6_50},
    {"6X6_100", PREDEFINED_DICTIONARY_NAME::DICT_6X6_100},
    {"6X6_250", PREDEFINED_DICTIONARY_NAME::DICT_6X6_250},
    {"6X6_1000", PREDEFINED_DICTIONARY_NAME::DICT_6X6_1000},
    {"7X7_50", PREDEFINED_DICTIONARY_NAME::DICT_7X7_50},
    {"7X7_100", PREDEFINED_DICTIONARY_NAME::DICT_7X7_100},
    {"7X7_250", PREDEFINED_DICTIONARY_NAME::DICT_7X7_250},
    {"7X7_1000", PREDEFINED_DICTIONARY_NAME::DICT_7X7_1000},
    {"ARUCO_ORIGINAL", PREDEFINED_DICTIONARY_NAME::DICT_ARUCO_ORIGINAL},
#if CV_VERSION_MAJOR >= 4
    {"APRILTAG_16h5", PREDEFINED_DICTIONARY_NAME::DICT_APRILTAG_16h5},
    {"APRILTAG_25h9", PREDEFINED_DICTIONARY_NAME::DICT_APRILTAG_25h9},
    {"APRILTAG_36h10", PREDEFINED_DICTIONARY_NAME::DICT_APRILTAG_36h10},
    {"APRILTAG_36h11", PREDEFINED_DICTIONARY_NAME::DICT_APRILTAG_36h11},
#endif
};
} // namespace aruco_opencv
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Changelog for package aruco_opencv_msgs
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
0.3.1 (2023-02-22)
------------------
0.3.0 (2023-02-22)
------------------
* Add Board detection (`#6 <https://github.com/fictionlab/aruco_opencv/issues/6>`_)
* Add BoardPose msg, change MarkerDetection to ArucoDetection
* Contributors: Błażej Sowa
0.2.0 (2022-09-07)
------------------
* Move message definitions to aruco_opencv_msgs package (`#2 <https://github.com/fictionlab/aruco_opencv/issues/2>`_)
* Contributors: Błażej Sowa
0.1.0 (2022-07-07)
------------------
cmake_minimum_required(VERSION 3.0.2)
project(aruco_opencv_msgs)

find_package(catkin REQUIRED COMPONENTS
  geometry_msgs
  message_generation
  std_msgs
)

add_message_files(
  DIRECTORY msg
  FILES
  ArucoDetection.msg
  BoardPose.msg
  MarkerPose.msg
)

generate_messages(
  DEPENDENCIES
  geometry_msgs
  std_msgs
)

catkin_package(
  CATKIN_DEPENDS
  geometry_msgs
  message_runtime
  std_msgs
)
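A catkin package that generates messages also needs matching dependency declarations in its `package.xml`, which is not included in this dump. A minimal sketch consistent with the CMakeLists above (the description and maintainer fields are placeholders, not taken from the repository):

```xml
<?xml version="1.0"?>
<package format="2">
  <name>aruco_opencv_msgs</name>
  <version>0.3.1</version>
  <description>Message definitions for the aruco_opencv package</description>
  <maintainer email="todo@example.com">TODO</maintainer>
  <license>MIT</license>

  <buildtool_depend>catkin</buildtool_depend>

  <!-- message_generation is needed at build time, message_runtime at run time -->
  <build_depend>message_generation</build_depend>
  <exec_depend>message_runtime</exec_depend>

  <depend>geometry_msgs</depend>
  <depend>std_msgs</depend>
</package>
```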
std_msgs/Header header
aruco_opencv_msgs/MarkerPose[] markers
aruco_opencv_msgs/BoardPose[] boards
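Combining this `ArucoDetection` definition with the `MarkerPose` (`marker_id` + `pose`) and `BoardPose` (`board_name` + `pose`) fields used by the tracker code above, an echoed detection might look like this (all values below are hypothetical):

```yaml
header:
  seq: 42
  stamp: {secs: 1663000000, nsecs: 0}
  frame_id: "camera_optical_frame"  # or the configured output frame when transform_poses is enabled
markers:
  - marker_id: 3
    pose:
      position: {x: 0.12, y: -0.05, z: 0.87}
      orientation: {x: 0.0, y: 0.0, z: 0.0, w: 1.0}
boards:
  - board_name: "my_board"
    pose:
      position: {x: -0.10, y: 0.02, z: 1.20}
      orientation: {x: 0.0, y: 0.0, z: 0.0, w: 1.0}
```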