FLAT
Fooling LiDAR Perception via Adversarial Trajectory Perturbation
- Yiming Li 1 *
- Congcong Wen 1, 2 *
- Felix Juefei-Xu 3
- Chen Feng 1 †
* Equal contributions.
† The corresponding author is Chen Feng
Abstract
LiDAR point clouds collected from a moving vehicle are functions of its trajectories, because the sensor motion needs to be compensated to avoid distortions. When autonomous vehicles are sending LiDAR point clouds to deep networks for perception and planning, could the motion compensation consequently become a wide-open backdoor in those networks, due to both the adversarial vulnerability of deep learning and GPS-based vehicle trajectory estimation that is susceptible to wireless spoofing? We demonstrate such possibilities for the first time: instead of directly attacking point cloud coordinates, which requires tampering with the raw LiDAR readings, adversarial spoofing of a self-driving car's trajectory with small perturbations is enough to make safety-critical objects undetectable or detected with incorrect positions. Moreover, a polynomial trajectory perturbation is developed to achieve a temporally smooth and highly imperceptible attack. Extensive experiments on 3D object detection have shown that such attacks not only lower the performance of state-of-the-art detectors effectively, but also transfer to other detectors, raising a red flag for the community.

Motion Distortion in LiDAR
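To make the recovery step discussed in this section concrete, here is a minimal 2D sketch of undoing motion compensation via linear pose interpolation and rigid-body transforms. The function names, the SE(2) pose parameterization, and the normalized timestamps are illustrative simplifications, not the paper's implementation.

```python
import numpy as np

def se2(x, y, yaw):
    """Homogeneous 2D rigid-body transform for a pose (x, y, yaw)."""
    c, s = np.cos(yaw), np.sin(yaw)
    return np.array([[c, -s, x], [s, c, y], [0.0, 0.0, 1.0]])

def undo_compensation(points_xy, ts, pose_start, pose_end):
    """Map a motion-compensated sweep back to its distorted form.

    points_xy:  (N, 2) points expressed in the sweep-end frame
    ts:         (N,) per-point capture times normalized to [0, 1]
    pose_*:     (x, y, yaw) LiDAR poses at sweep start and end
    Compensation applies p_comp = T_end^-1 T_t p_raw per point, so the
    distorted point is recovered as p_raw = T_t^-1 T_end p_comp, with
    T_t obtained by linearly interpolating the two poses.
    """
    T_end = se2(*pose_end)
    start = np.asarray(pose_start, float)
    end = np.asarray(pose_end, float)
    out = np.empty((len(points_xy), 2), dtype=float)
    for i, (p, t) in enumerate(zip(points_xy, ts)):
        T_t = se2(*((1.0 - t) * start + t * end))  # linear pose interpolation
        p_raw = np.linalg.inv(T_t) @ T_end @ np.array([p[0], p[1], 1.0])
        out[i] = p_raw[:2]
    return out
```

With identical start and end poses the sweep is unchanged; a real implementation would interpolate full SE(3) poses (e.g. SLERP for the rotation) rather than this planar simplification.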
LiDAR measurements are acquired as the sensor's beams rotate, so the points in a full sweep are captured at different timestamps, introducing motion distortion that jeopardizes vehicle perception. Autonomous systems generally use the LiDAR's location and orientation from the localization system to correct this distortion, and most LiDAR-based datasets [2, 7] apply this synchronization before release. Hence, the performance of current 3D perception algorithms on distorted point clouds remains unexplored. In this work, we recover the point cloud before motion correction through linear pose interpolation and rigid-body transformation.

White Box Attack
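Because the whole pipeline from trajectory to detection loss is differentiable in the white-box setting described below, the attack can be sketched as projected gradient ascent on the ego trajectory. The toy scalar loss here stands in for motion compensation plus the detector; the function name, step size, and L-inf budget are illustrative assumptions, not the paper's exact optimizer.

```python
import torch

def attack_trajectory(traj, loss_fn, steps=20, lr=0.01, eps=0.05):
    """PGD-style trajectory spoofing (a sketch, not the paper's method).

    traj:    (T, D) ego poses over one sweep
    loss_fn: differentiable map from a trajectory to a scalar detection
             loss (stand-in for motion compensation + the detector)
    The adversary ascends the loss while keeping each pose perturbation
    inside an L-inf ball of radius eps, so the spoofing stays small.
    """
    delta = torch.zeros_like(traj, requires_grad=True)
    for _ in range(steps):
        loss = loss_fn(traj + delta)
        loss.backward()
        with torch.no_grad():
            delta += lr * delta.grad.sign()  # signed gradient ascent step
            delta.clamp_(-eps, eps)          # project onto the eps-ball
            delta.grad.zero_()
    return (traj + delta).detach()
```

In the paper's setting the gradient flows through the (differentiable) motion-compensation step into PointRCNN's classification or regression loss; here any differentiable `loss_fn` illustrates the mechanics.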
Our white-box model, PointRCNN [31], uses PointNet++ [27] as its backbone and has two stages: stage-1 generates a proposal from each foreground point, and stage-2 refines the proposals in a canonical coordinate frame. Since PointRCNN takes the raw point cloud as input, gradients can flow smoothly to the point cloud and then to the vehicle trajectory. In this work, we individually attack the classification and regression branches in stage-1 and stage-2, giving four attack targets in total.

Black Box Attack
PointPillars [14] proposes a fast point cloud encoder using a pseudo-image representation: it divides the point cloud into pillars and uses PointNet [26] to extract a feature for each pillar. Because of this non-differentiable preprocessing stage, the gradient cannot reach the point cloud. Hu et al. [11] proposed augmenting PointPillars with a visibility map, achieving better precision; we use PointPillars++ to denote this variant. We use perturbations learned on the white-box PointRCNN to attack the black-box PointPillars++, in order to examine the transferability of our attack pipeline.
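The polynomial trajectory perturbation mentioned in the abstract can be sketched as follows: rather than perturbing every pose independently, the offset at time t is a low-degree polynomial in t, so only a handful of coefficients are optimized and the spoofed trajectory stays temporally smooth. The function name and coefficient layout are illustrative assumptions.

```python
import numpy as np

def polynomial_perturbation(ts, coeffs):
    """Evaluate a smooth trajectory offset sum_k c_k * t^k at times ts.

    ts:     (T,) pose timestamps normalized to [0, 1]
    coeffs: (K, D) polynomial coefficients, one row per degree,
            one column per pose dimension (e.g. x, y, yaw)
    Returns (T, D) offsets to add to the clean trajectory.
    """
    ts = np.asarray(ts, float)
    coeffs = np.asarray(coeffs, float)
    basis = np.stack([ts ** k for k in range(len(coeffs))], axis=1)  # Vandermonde
    return basis @ coeffs
```

Attacking the K x D coefficients instead of T x D raw poses both shrinks the search space and rules out jerky, easily detected perturbations.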
a) Original detections. Green/red boxes denote ground truth/predictions, respectively.
b) Detections after attack. Green/red boxes denote ground truth/predictions, respectively.