
Autonomy reference stack

The husarion-ugv-autonomy template repository provides a Docker Compose workspace that bundles the navigation stack (Nav2), SLAM, and the docking server required by the Panther and Lynx robots.

compose.hardware.yaml
  Launches the navigation and docking containers on the real robot. Environment variables such as OBSERVATION_TOPIC, OBSERVATION_TOPIC_TYPE, CAMERA_* and SLAM are forwarded, so you can point the stack at any LiDAR/camera without editing the file. Networking is set to host for zero-conf DDS discovery.

compose.simulation.yaml
  Adds a Gazebo container plus a components.yaml model layout. use_sim_time is preset, so clocks are synchronised automatically.

config/
  Holds runtime parameters: nav2_params.yaml and pc2ls_params.yaml (planner, controller and point-cloud-to-laser tuning), components.yaml (where each sensor is mounted in simulation), docking_server.yaml, apriltag.yaml (dock poses and tag size), and cyclonedds.xml (DDS QoS / discovery options).

sensors/
  Optional compose fragments for bespoke drivers; drop in another file when the default containers do not cover your sensor (see the example fragment after this table).
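As an illustration of such a fragment, the sketch below wraps a hypothetical LiDAR driver. The file name, service name and image are all assumptions; only the pattern (an extra compose file merged into the stack) comes from the repository layout:

# sensors/compose.my_lidar.yaml (hypothetical file, service and image names)
services:
  my_lidar:
    image: my_vendor/lidar_driver:latest  # hypothetical vendor driver image
    network_mode: host                    # join the same zero-conf DDS discovery as the main stack

Merging the fragment in at start-up uses standard Docker Compose file stacking:

docker compose -f compose.hardware.yaml -f sensors/compose.my_lidar.yaml up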

Helper commands (Justfile)

just dock DOCK_NAME        # Dock the Husarion UGV to the charging dock using the navigation stack
just dock-direct DOCK_NAME # Dock the Husarion UGV to the charging dock without the navigation stack
just start-hardware        # Start the navigation stack on the User Computer inside the Husarion UGV
just start-simulation      # Start the Gazebo simulator with the navigation stack
just start-visualization   # Configure and run the Husarion WebUI
just stop-visualization    # Stop the Husarion WebUI
just undock                # Undock the Husarion UGV from the charging dock
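For example, a hardware bring-up with a 2D LiDAR might look like the following; the topic name, message type and SLAM flag are illustrative values, not documented defaults:

# Forwarded into the containers by compose.hardware.yaml (values are illustrative)
export OBSERVATION_TOPIC=/scan
export OBSERVATION_TOPIC_TYPE=sensor_msgs/msg/LaserScan
export SLAM=True

# Launch the navigation and docking containers on the robot
just start-hardware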

Upstream ROS 2 code

Both the navigation and docking containers are built from the husarion/husarion-ugv-autonomy:<distro>-<tag> image, which in turn packages the sources from husarion_ugv_autonomy_ros (SLAM, the Nav2 bring-up, and the docking state machine). Fork that repository if you need to change node logic, then rebuild the image locally.
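A minimal local-rebuild sketch, assuming your fork keeps a Dockerfile at its root (the actual build entry point and tag scheme may differ; check the repository's README):

# Build a replacement image from your fork; the tag is illustrative,
# following the <distro>-<tag> naming used by the compose files
cd husarion_ugv_autonomy_ros
docker build -t husarion/husarion-ugv-autonomy:humble-custom .

Point the image entries in compose.hardware.yaml or compose.simulation.yaml at the new tag before restarting the stack.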

Sensor interface

The stack is sensor-agnostic: any device that publishes LaserScan or PointCloud2 (plus a TF linking it to base_link) and any RGB-D or mono camera that publishes Image + CameraInfo can be used. Simply export the five environment variables listed in the README, or add a new service under sensors/ that runs your proprietary driver, as in the fragment shown earlier.
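If your driver does not broadcast the mounting transform itself, the standard tf2_ros tool can publish it; the child frame name and offsets below are illustrative:

# Publish a static transform from base_link to the sensor frame (offsets are illustrative)
ros2 run tf2_ros static_transform_publisher --x 0.2 --z 0.15 --frame-id base_link --child-frame-id laser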

Extended notes

A detailed explanation of the architecture and customisation workflow is available in the blog post: https://husarion.com/blog/husarion-ugv-autonomy/