About the Role
This role involves developing and integrating advanced perception algorithms for autonomous systems, with a focus on sensor data processing and interpretation. The engineer will contribute to improving the robustness and accuracy of object detection, tracking, and scene understanding in complex environments.
Responsibilities
- Design and implement software for processing various sensor data (e.g., LiDAR, camera, radar).
- Develop and integrate algorithms for object detection, classification, tracking, and semantic segmentation.
- Optimize perception pipelines for real-time performance on embedded platforms.
- Conduct rigorous testing and validation of perception systems in simulation and real-world scenarios.
- Collaborate with ML engineers and robotics teams to deploy perception solutions.
Requirements
- Bachelor's or Master's degree in Computer Science, Robotics, or a related engineering field.
- Strong programming skills in C++ and Python.
- Experience with perception algorithms and libraries (e.g., OpenCV, PCL).
- Familiarity with deep learning frameworks (e.g., PyTorch, TensorFlow) for perception tasks.
- Experience with real-time operating systems (RTOS) and embedded systems.
Preferred Qualifications
- Ph.D. in a relevant field.
- Experience with sensor fusion techniques.
- Experience with autonomous driving or aerial systems.