Self-Supervised Scene Flow Estimation with 4-D Automotive Radar

Our model architecture is composed of the Radar-Oriented Flow Estimation (ROFE) module and the Static Flow Refinement (SFR) module. The ROFE module consumes two consecutive 4-D radar point clouds and outputs a coarse scene flow. The SFR module first generates a static mask and then estimates a rigid ego-motion transformation used to refine the static flow vectors. The entire model can be trained end-to-end with our proposed novel self-supervised losses.
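
The sketch below illustrates how the two modules could be composed at inference time, assuming a PyTorch implementation. The class name, tensor shapes and the way the rigid flow replaces the static flow vectors are illustrative assumptions rather than the released code; the internals of the ROFE and SFR modules are not shown.

import torch
import torch.nn as nn


class RadarSceneFlow(nn.Module):
    """Composition of the two modules described above (illustrative sketch only)."""

    def __init__(self, rofe: nn.Module, sfr: nn.Module):
        super().__init__()
        self.rofe = rofe  # Radar-Oriented Flow Estimation: coarse per-point flow
        self.sfr = sfr    # Static Flow Refinement: static mask + rigid ego-motion

    def forward(self, pc1, pc2, feat1, feat2):
        # pc1, pc2:     (B, N, 3) two consecutive 4-D radar point clouds
        # feat1, feat2: (B, N, C) per-point features (e.g. RRV, RCS, power)
        coarse_flow = self.rofe(pc1, pc2, feat1, feat2)     # (B, N, 3)
        static_mask, R, t = self.sfr(pc1, coarse_flow)      # (B, N), (B, 3, 3), (B, 3)

        # Flow that the estimated rigid ego-motion alone would induce on pc1.
        rigid_flow = pc1 @ R.transpose(1, 2) + t.unsqueeze(1) - pc1

        # Replace the coarse flow of static points with the rigid ego-motion flow.
        refined_flow = torch.where(static_mask.unsqueeze(-1).bool(),
                                   rigid_flow, coarse_flow)
        return refined_flow, static_mask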

Abstract


Scene flow allows autonomous vehicles to reason about the arbitrary motion of multiple independent objects, which is the key to long-term mobile autonomy. While scene flow estimation from LiDAR has progressed recently, it remains largely unknown how to estimate scene flow from a 4-D radar, an increasingly popular automotive sensor valued for its robustness against adverse weather and lighting conditions. Compared with LiDAR point clouds, radar data are drastically sparser, noisier and of much lower resolution. Annotated datasets for radar scene flow are also absent and costly to acquire in the real world. These factors jointly make radar scene flow estimation a challenging problem. This work aims to address the above challenges and estimate scene flow from 4-D radar point clouds by leveraging self-supervised learning. A robust scene flow estimation architecture and three novel losses are bespoke designed to cope with intractable radar data. Real-world experimental results validate that our method is able to robustly estimate the radar scene flow in the wild and effectively supports the downstream task of motion segmentation.

Visualization of the 6-dim measurements of a 4-D radar point cloud, including the 3-D positional information, relative radial velocity (RRV), radar cross section (RCS) and power measurements. RRV characterises the instantaneous motion level of objects in the scene relative to the ego-vehicle, while the RCS and power measurements characterise the reflectivity of those objects in different aspects.
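
As a concrete toy illustration of this 6-dim representation, each radar point can be stored as a row of positional and feature values. The field order, units and values below are assumptions for illustration only.

import numpy as np

# Each row: [x, y, z, rrv, rcs, power]
#   x, y, z : 3-D position in the radar frame (m)
#   rrv     : relative radial velocity w.r.t. the ego-vehicle (m/s)
#   rcs     : radar cross section (dBsm), one measure of reflectivity
#   power   : received signal power (dB), another measure of reflectivity
radar_points = np.array([
    [12.4,  1.8, 0.3, -5.2, 3.1, 28.7],   # non-zero RRV: likely moving relative to us
    [ 6.0, -2.5, 0.1,  0.0, 9.8, 35.2],   # zero RRV: likely static relative to us
])

positions = radar_points[:, :3]   # geometry used for flow estimation
features  = radar_points[:, 3:]   # extra per-point cues fed to the network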

Qualitative Results


For evaluation, we collect an in-house dataset by driving a vehicle in the wild for 43 km. Our model is trained end-to-end in a self-supervised setting on the unannotated training set and is evaluated on the testing set. Visualizations of our scene flow estimation and motion segmentation results on the in-house dataset are shown below.

Scene Flow Estimation

Scene flow estimation visualization. The figures on the left are the corresponding images captured by the camera. Points from the first frame and the second frame are coloured blue and magenta respectively, while the warped point cloud is shown in green. Yellow circles denote zoomed-in regions. Generally, the green points should lie closer to the magenta points if the scene flow is accurately predicted.
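
The same check can be done numerically: warp the first point cloud by the predicted flow and measure how far the warped points land from the second frame. The helper below is a hypothetical sketch, not the paper's evaluation protocol.

import numpy as np
from scipy.spatial import cKDTree

def warp_and_score(pc1: np.ndarray, pc2: np.ndarray, flow: np.ndarray) -> float:
    """pc1, flow: (N, 3); pc2: (M, 3). Returns the mean nearest-neighbour distance."""
    warped = pc1 + flow                    # the green points in the figures
    dists, _ = cKDTree(pc2).query(warped)  # distance to the closest magenta point
    return float(dists.mean())             # smaller generally means a better flow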


Motion Segmentation

Visualization of motion segmentation results. The left column shows our prediction while the right column shows the ground truth. Moving and stationary points are rendered in pink and teal, respectively. Note that this is non-trivial as the ego-vehicle is itself moving in both scenes.
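
One simple way to derive such a moving/static segmentation from a predicted flow and an estimated rigid ego-motion is to threshold the residual between the two. The sketch below is an illustrative assumption; the threshold value and the exact criterion used in the paper may differ.

import numpy as np

def segment_motion(pc1, flow, R, t, thresh=0.1):
    """pc1, flow: (N, 3); R: (3, 3); t: (3,). Returns a boolean mask, True = moving."""
    rigid_flow = pc1 @ R.T + t - pc1               # flow explained by ego-motion alone
    residual = np.linalg.norm(flow - rigid_flow, axis=1)
    return residual > thresh                       # large residual => moving point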


Demo Video


We also run our method on the publicly available View-of-Delft dataset and provide a demo video showing our qualitative results. The top row shows the corresponding RGB image with radar points projected onto it. The bottom row shows the radar points in bird's-eye view (BEV), coloured according to the magnitude and direction of the scene flow.
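
A common way to encode flow magnitude and direction as colours is an HSV mapping, with direction as hue and magnitude as brightness. The function below sketches this idea; the actual colour scheme used in the demo video may differ.

import numpy as np
import matplotlib.colors as mcolors

def flow_to_rgb(flow_xy: np.ndarray, max_mag: float = 1.0) -> np.ndarray:
    """flow_xy: (N, 2) BEV flow vectors. Returns (N, 3) RGB colours in [0, 1]."""
    angle = np.arctan2(flow_xy[:, 1], flow_xy[:, 0])        # flow direction
    mag = np.linalg.norm(flow_xy, axis=1)                   # flow magnitude
    hsv = np.stack([(angle + np.pi) / (2 * np.pi),          # hue encodes direction
                    np.ones_like(mag),                      # full saturation
                    np.clip(mag / max_mag, 0.0, 1.0)],      # value encodes magnitude
                   axis=1)
    return mcolors.hsv_to_rgb(hsv)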

Citation


@article{ding2022raflow,
  author={Ding, Fangqiang and Pan, Zhijun and Deng, Yimin and Deng, Jianning and Lu, Chris Xiaoxuan},
  journal={IEEE Robotics and Automation Letters},
  title={Self-Supervised Scene Flow Estimation with 4-D Automotive Radar},
  year={2022},
  volume={7},
  number={3},
  pages={8233-8240},
  doi={10.1109/LRA.2022.3187248}
}

Acknowledgments


This research is supported by the EPSRC, as part of the CDT in Robotics and Autonomous Systems at Heriot-Watt University and The University of Edinburgh (EP/S023208/1).