Abstract
Neuromorphic cameras feature asynchronous, event-based, pixel-level processing and are particularly useful for object tracking in dynamic environments. Current approaches to feature extraction and optical flow with high-performing hybrid RGB-event vision systems require large computational models and supervised learning, which pose challenges for embedded vision and demand annotated datasets. In this work, we propose ED-DCFNet, a small and efficient (< 72k) unsupervised multi-domain learning framework that extracts shared event-frame features without requiring annotations, at comparable performance. Furthermore, we introduce an open-source event- and frame-based dataset that captures indoor scenes under various lighting and motion conditions in realistic scenarios, which can be used for model building and evaluation. The dataset is available at https://github.com/NBELab/UnsupervisedTracking.
Original language | English |
---|---|
Title of host publication | Proceedings - 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, CVPRW 2024 |
Publisher | IEEE Computer Society |
Pages | 2191-2199 |
Number of pages | 9 |
ISBN (Electronic) | 9798350365474 |
DOIs | |
State | Published - 2024 |
Event | 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, CVPRW 2024 - Seattle, United States. Duration: 16 Jun 2024 → 22 Jun 2024 |
Publication series
Name | IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops |
---|---|
ISSN (Print) | 2160-7508 |
ISSN (Electronic) | 2160-7516 |
Conference
Conference | 2024 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, CVPRW 2024 |
---|---|
Country/Territory | United States |
City | Seattle |
Period | 16/06/24 → 22/06/24 |
Bibliographical note
Publisher Copyright: © 2024 IEEE.
Keywords
- event camera
- neuromorphic vision
- object tracking