multiple-object-tracking-lidar

by praveen-palanisamy

C++ implementation to detect, track and classify multiple objects using LIDAR scans or point clouds


Multiple object detection, tracking and classification from LIDAR scans/point-clouds


Sample demo of multiple object tracking using LIDAR scans

PCL-based ROS package, implemented in C++, to detect/cluster --> track --> classify static and dynamic objects in real time from LIDAR scans.

Features:

  • K-D tree based point-cloud processing for object feature detection
  • Unsupervised Euclidean cluster extraction (3D) or k-means clustering based on the detected features, with refinement using RANSAC (2D)
  • Stable tracking (object IDs and data association) with an ensemble of Kalman filters
  • More robust than k-means clustering with mean-flow tracking

Usage:

Follow the steps below to use this (`multi_object_tracking_lidar`) package:
  1. Create a catkin workspace (if you do not have one set up already).
  2. Navigate to the `src` folder in your catkin workspace: `cd ~/catkin_ws/src`
  3. Clone this repository: `git clone https://github.com/praveen-palanisamy/multiple-object-tracking-lidar.git`
  4. Compile and build the package: `cd ~/catkin_ws && catkin_make`
  5. Add the catkin workspace to your ROS environment: `source ~/catkin_ws/devel/setup.bash`
  6. Run the `kf_tracker` ROS node in this package: `rosrun multi_object_tracking_lidar kf_tracker`
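
By default, the node listens for input on the `filtered_cloud` topic. If your sensor or simulator publishes its point cloud on a different topic, you can remap it when launching the node. The command below is a sketch; `/your_sensor/points` is a placeholder for whatever topic your LiDAR driver actually publishes on:

```bash
# Remap the node's input topic (filtered_cloud) to your actual point-cloud topic.
# "/your_sensor/points" is a placeholder name; substitute your driver's topic.
rosrun multi_object_tracking_lidar kf_tracker filtered_cloud:=/your_sensor/points
```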

If all went well, the ROS node should be up and running! As long as you have point clouds published on the `filtered_cloud` rostopic, you should see outputs from this node published on the `obj_id`, `cluster_0`, `cluster_1`, …, `cluster_5` topics, along with markers on the `viz` topic which you can visualize using RViz.
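
For a quick sanity check of the outputs from the command line (a sketch assuming the default topic names listed above):

```bash
# Confirm the node is advertising its output topics
rostopic list | grep -E "obj_id|cluster_|viz"

# Stream one of the outputs to the terminal
rostopic echo /obj_id

# Inspect the markers: start RViz and add a display subscribed to the "viz" topic
rviz
```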

Supported point-cloud streams/sources:

The input point clouds can come from:
  1. A real LiDAR, or
  2. A simulated LiDAR, or
  3. A point cloud dataset, or
  4. Any other data source that produces point clouds
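
Whichever source you use, the node only needs the point clouds to arrive on the `filtered_cloud` topic. Besides remapping at launch (shown above), you can relay an existing stream onto that topic with the standard `topic_tools` package. This is a sketch; `/sim/lidar/points` is a hypothetical source topic:

```bash
# Forward an existing point-cloud stream onto the topic this package listens on.
# "/sim/lidar/points" is a placeholder; substitute your actual source topic.
rosrun topic_tools relay /sim/lidar/points /filtered_cloud
```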

Citing

If you use the code or snippets from this repository in your work, please cite:

@software{praveen_palanisamy_2019_3559187,
  author       = {Praveen Palanisamy},
  title        = {{praveen-palanisamy/multiple-object-tracking-lidar: 
                   Multiple-Object-Tracking-from-Point-Clouds_v1.0.2}},
  month        = dec,
  year         = 2019,
  publisher    = {Zenodo},
  version      = {1.0.2},
  doi          = {10.5281/zenodo.3559187},
  url          = {https://doi.org/10.5281/zenodo.3559186}
}

Wiki

Check out the Wiki pages:

  1. Multiple-object tracking from pointclouds using a Velodyne VLP-16
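
For a concrete hardware setup, the wiki describes tracking with a Velodyne VLP-16. A minimal end-to-end sketch, assuming the standard `velodyne_pointcloud` driver (which publishes the scan on `/velodyne_points`):

```bash
# Terminal 1: start the VLP-16 driver (from the velodyne_pointcloud package)
roslaunch velodyne_pointcloud VLP16_points.launch

# Terminal 2: run the tracker with its input remapped to the Velodyne topic
rosrun multi_object_tracking_lidar kf_tracker filtered_cloud:=/velodyne_points
```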
