About the Role
Develop and deploy cutting-edge perception algorithms that enable autonomous vehicles to accurately detect, classify, and track objects and to understand their environment from camera, lidar, and radar inputs. This role is fundamental to the vehicle's situational awareness.
Responsibilities
- Design and implement deep learning models for object detection, tracking, and segmentation.
- Integrate and process data from lidar, radar, and cameras.
- Develop sensor fusion techniques to create a robust environmental model.
- Evaluate and optimize perception system performance for accuracy and latency.
- Collaborate with calibration, data annotation, and planning teams.
Requirements
- Strong expertise in computer vision or 3D perception for autonomous vehicles.
- Experience with deep learning frameworks (e.g., TensorFlow, PyTorch).
- Proficiency in C++ and Python.
- Familiarity with lidar, radar, and camera sensor data processing.
Preferred Qualifications
- Experience with multi-sensor fusion, robust tracking, or domain adaptation for perception models.
Benefits
- Competitive salary, stock options & 401(k) plan with match
- Daily catered lunches & a fully stocked kitchen
- Comprehensive health and wellness benefits, including paid parental leave
- Company events, retreats, and team social hours
- Collaborative office space built for productivity & wellbeing
- Furry friends are welcome at our pet-friendly office