Senior/Staff Machine Learning Engineer, Perception
Agtonomy
Software Engineering
South San Francisco, CA, USA
Posted on Sep 30, 2025
About Us
At Agtonomy, we’re not just building tech—we’re transforming how vital industries get work done. Our Physical AI and fleet services turn heavy machinery into intelligent, autonomous systems that tackle the toughest challenges in agriculture, turf, and beyond. Partnering with industry-leading equipment manufacturers, we’re creating a future where labor shortages, environmental strain, and inefficiencies are relics of the past. Our team is a tight-knit group of bold thinkers—engineers, innovators, and industry experts—who thrive on turning audacious ideas into reality. If you want to shape the future of industries that matter, this is your shot.
About the Role
We’re looking for a skilled software engineer to build and refine perception algorithms that give our autonomous tractors human-like awareness in rugged environments. You’ll develop computer vision and machine learning systems to process noisy data from cameras, LiDAR, and radar, enabling tractors to navigate whatever dirty, messy conditions they encounter. This role is hands-on: you’ll write production-grade software, optimize models for embedded hardware, and test your work on real tractors at operating farms around the world. Working closely with team members across the autonomy stack, you’ll own critical pieces of our perception system, driving innovations that make it generalized, safe, and reliable.
What you'll do
- Develop computer vision and machine learning models for real-time perception systems, enabling tractors to identify crops, obstacles, and terrain in varied, unpredictable conditions.
- Build sensor fusion algorithms to combine camera, LiDAR, and radar data, creating robust 3D scene understanding that handles challenges like crop occlusions or GNSS drift.
- Optimize models for low-latency inference on resource-constrained hardware, balancing accuracy and performance.
- Design and test data pipelines to curate and label large sensor datasets, ensuring high-quality inputs for training and validation, with tools to visualize and debug failures.
- Analyze performance metrics and iterate on algorithms to improve accuracy and efficiency of various perception subsystems.
What you’ll bring
- An MS or PhD in Computer Science, AI, or a related field, or 5+ years of industry experience building vision-based perception systems.
- Deep expertise in developing and deploying machine learning models, particularly for perception tasks such as object detection, segmentation, mono/stereo depth estimation, sensor fusion, and scene understanding.
- Strong understanding of integrating data from multiple sensors like cameras, LiDAR, and radar.
- Experience handling large datasets efficiently and organizing them for labeling, training, and evaluation.
- Fluency in Python and experience with ML/CV frameworks like TensorFlow, PyTorch, or OpenCV, with the ability to write efficient, production-ready code for real-time applications.
- Proven ability to design experiments, analyze performance metrics (e.g., mAP, IoU, latency), and optimize algorithms to meet stringent performance requirements in dynamic settings.
- An eagerness to get your hands dirty, and the agility to thrive in a fast-moving, collaborative, small-team environment with lots of ownership.
What makes you a strong fit
- Experience architecting multi-sensor ML systems from scratch.
- Experience with foundation models for robotics or Vision-Language-Action (VLA) models.
- Experience with compute-constrained pipelines including optimizing models to balance the accuracy vs. performance tradeoff, leveraging TensorRT, model quantization, etc.
- Experience implementing custom operations in CUDA.
- Publications at top-tier perception/robotics conferences (e.g., CVPR, ICRA).
- Passion for sustainable agriculture and securing our food supply chain.
Benefits
• 100% covered medical, dental, and vision for the employee (partner, children, or family is additional)
• Commuter Benefits
• Flexible Spending Account (FSA)
• Life Insurance
• Short- and Long-Term Disability
• 401k Plan
• Stock Options
• Collaborative work environment alongside a passionate, mission-driven team
Our interview process is generally conducted in five (5) phases:
1. Phone Screen with Hiring Manager (30 minutes)
2. Technical Evaluation in Domain (1 hour)
3. Software Engineering Evaluation (1 hour)
4. Panel Interview (Video interviews scheduled with key stakeholders, each interview will be 30 to 60 minutes)