Data Engineer Jobs at Nuro with Visa Sponsorship
Nuro builds autonomous delivery technology, and its Data Engineer roles sit at the intersection of robotics, real-time data pipelines, and large-scale infrastructure. The company has a consistent track record of sponsoring work visas across multiple categories, making it a realistic target for international candidates in this field.
Who We Are
Nuro is a self-driving technology company on a mission to make autonomy accessible to all. Founded in 2016, Nuro is building the world’s most scalable driver, combining cutting-edge AI with automotive-grade hardware. Nuro licenses its core technology, the Nuro Driver™, to support a wide range of applications, from robotaxis and commercial fleets to personally owned vehicles. With technology proven over years of self-driving deployments, Nuro gives automakers and mobility platforms a clear path to AVs at commercial scale, empowering a safer, richer, and more connected future.
About the Role
Nuro uses many different bench-top systems to evaluate and regression-test aspects of the software and hardware integration layer. Together, these systems make up the performance simulation platform.
At Nuro, every autonomy code change, from ML model updates to changes in the map radius around the robot or the number of evaluated trajectories, must be validated for real-time performance on actual robot compute hardware before it reaches the road. You will own the infrastructure that makes this possible.
Our Performance Simulation Platform is a hybrid benchmarking system: physical bench-top rigs running production robot compute (NVIDIA Thor platform), orchestrated by cloud-native infrastructure (Kubernetes, GCP), with automated data pipelines feeding performance metrics into BigQuery and Grafana, pre/post-simulation processing, custom tracing and profiling tools, and much more.
Engineers across the company rely on this platform daily to answer questions like:
- How will my new ML model affect contention on the GPU?
- How does a new data format impact onboard logging rate or network contention as more data flows through the system?
- How much memory should be allocated for this new module, and how does it fit into the overall system budget?
You’ll be responsible for development, integration, and the evolution of this platform — from the bare-metal OS and networking layer through the job orchestration and CI/CD integration up to the data analysis and visualization layer. This is a high-ownership, high-autonomy role on a small team where your work directly gates the release velocity of the entire autonomy stack. You’ll be the technical DRI for the platform — setting the roadmap, making architectural calls, representing the platform's needs to the leadership team, and ensuring the system scales through multiple hardware generations.
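Questions like the memory-budget one above often reduce to a gate that compares measured metrics against per-module budgets. A minimal sketch of that pattern, with entirely hypothetical module names and budget values (these are illustrative, not Nuro's actual modules or gates):

```python
# Hypothetical merge-blocking budget gate: compare measured memory usage
# against per-module budgets and report any overruns.
# Module names and budget values are invented for illustration.

BUDGETS_MB = {"perception": 4096, "planner": 2048, "logging": 512}

def check_memory_budgets(measured_mb: dict[str, float]) -> list[str]:
    """Return human-readable budget violations; an empty list means the gate passes."""
    violations = []
    for module, used in measured_mb.items():
        budget = BUDGETS_MB.get(module)
        if budget is not None and used > budget:
            violations.append(f"{module}: {used:.0f} MB exceeds budget of {budget} MB")
    return violations
```

A CI integration would run this after a bench-top benchmark and fail the merge when the returned list is non-empty.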
About the Work
- Benchmarking Infrastructure: Develop and maintain the job orchestration layer that schedules, executes, and validates autonomy performance benchmarks across a fleet of physical bench-top systems — integrated into CI/CD pipelines as merge-blocking and release-blocking quality gates.
- Platform Reliability & Observability: Build monitoring, alerting, and self-healing automation for the bench fleet. Proactively identify systemic risks — capacity bottlenecks, hardware degradation patterns, infrastructure single points of failure — before they become outages. Track utilization, failure rates, and capacity trends to ensure the platform scales ahead of organizational demand.
- Performance Data Pipelines: Design and build end-to-end data pipelines that capture fine-grained performance metrics (CPU/GPU utilization, memory bandwidth, E2E latency, scheduling jitter) from bench-top runs, process them at scale, and surface actionable insights through dashboards and automated regression detection.
- Statistical Analysis & Experimentation: Work with Data Science to develop rigorous experimentation methodology for performance results from non-deterministic autonomy workloads — including variance analysis, significance testing, and regression detection.
- Bare-Metal & OS Platform: Guide the SRE team through the OS and system-level configuration of bench hardware — including Linux kernel tuning, boot infrastructure, networking, and hardware bring-up — ensuring the platform faithfully reproduces production robot compute behavior.
- Drive Platform & Allocation Strategy: Own the planning lifecycle for the benchmarking fleet across hardware generations. Partner with engineering and program leadership to negotiate hardware allocation, model utilization scenarios under real-world constraints, and present data-backed trade-off recommendations — balancing testing coverage, user throughput, cost, and SLA commitments against finite physical resources.
- Cross-Functional Collaboration: Partner with Hardware Engineering, NPI (New Product Introduction), SRE (Site Reliability Engineering), Perception, Behavior, and Data Science teams to translate their performance analysis needs into robust, self-service infrastructure.
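The statistical regression detection mentioned above can be sketched in a few lines: flag a regression when the candidate's mean latency exceeds the baseline's by more than a few standard errors of the difference. This is a crude z-test for illustration only; a production pipeline would use a proper significance test (e.g. Welch's t-test) plus variance analysis:

```python
import statistics

def is_regression(baseline: list[float], candidate: list[float],
                  z_threshold: float = 3.0) -> bool:
    """Rough sketch: flag a latency regression when the candidate mean exceeds
    the baseline mean by more than z_threshold standard errors of the
    difference. Illustrative only, not Nuro's actual methodology."""
    mean_b, mean_c = statistics.mean(baseline), statistics.mean(candidate)
    # Standard error of the difference between the two sample means.
    se = (statistics.variance(baseline) / len(baseline)
          + statistics.variance(candidate) / len(candidate)) ** 0.5
    if se == 0.0:
        return mean_c > mean_b
    return (mean_c - mean_b) / se > z_threshold
```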
About You
- Experience: 3+ years of industry software engineering experience.
- Software Engineering: Strong proficiency in Python and working proficiency in C++. You write clean, testable, well-documented code and care about long-term maintainability.
- Data Engineering: Experience building data pipelines, ingestion, transformation, storage, and visualization. Familiarity with SQL and analytical workflows.
- Systems & Infrastructure: Deep comfort with Linux systems — you’ve configured kernels, debugged boot issues, written systemd units, or managed bare-metal infrastructure. You understand networking, storage, and compute at a level beyond "it just works."
- Technical Leadership: Experience setting technical vision and roadmap for a project or platform, driving alignment across multiple stakeholders. You’ve independently identified the cross-functional partners needed to unblock and deliver, and you’ve briefed senior engineering leadership on trade-offs and recommendations.
- AI-Native: You treat AI as a core part of your engineering workflow, not an occasional shortcut — you use agentic tooling (e.g., Claude Code) across the development lifecycle and you understand the boundaries of when AI output demands extra scrutiny versus when it accelerates you.
- Bias for Action: Comfortable operating in ambiguous, fast-moving environments where you need to balance long-term architecture with short-term delivery.
Bonus Points:
- Experience with performance engineering, especially around tooling integration (perf, Perfetto, pprof, eBPF, NVIDIA Nsight Systems, NVIDIA CUPTI).
- Experience in robotics or AV, particularly with NVIDIA DriveOS stack.
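One of the metrics the platform captures is scheduling jitter. A toy way to see what such a measurement looks like is to request a fixed sleep repeatedly and record how much each wakeup overshoots; real tooling (perf, Perfetto, Nsight Systems) traces this at the kernel level, so this is only an illustration:

```python
import statistics
import time

def measure_sleep_jitter(period_s: float = 0.005, samples: int = 20) -> dict[str, float]:
    """Toy stand-in for scheduling-jitter measurement: sleep for a fixed
    period and record the wakeup overshoot in microseconds."""
    overshoots_us = []
    for _ in range(samples):
        start = time.perf_counter()
        time.sleep(period_s)
        elapsed = time.perf_counter() - start
        # Overshoot is how much longer we slept than requested.
        overshoots_us.append(max(0.0, (elapsed - period_s) * 1e6))
    return {
        "mean_us": statistics.mean(overshoots_us),
        "max_us": max(overshoots_us),
    }
```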
At Nuro, your base pay is one part of your total compensation package. For this position, the reasonably expected base pay range is between $152,000 and $228,000 for the level at which this job has been scoped. Your base pay will depend on several factors, including your experience, qualifications, education, location, and skills. In the event that you are considered for a different level, a higher or lower pay range would apply. This position is also eligible for an annual performance bonus, equity, and a competitive benefits package.
At Nuro, we celebrate differences and are committed to a diverse workplace that fosters inclusion and psychological safety for all employees. Nuro is proud to be an equal opportunity employer and expressly prohibits any form of workplace discrimination based on race, color, religion, gender, sexual orientation, gender identity or expression, national origin, age, genetic information, disability, veteran status, or any other legally protected characteristics. #LI-DNP

Tips for Finding Data Engineer Jobs at Nuro
Tailor your portfolio to autonomous systems data
Nuro's engineering work centers on sensor fusion, vehicle telemetry, and real-time pipeline infrastructure. Highlight projects involving high-throughput data ingestion, time-series processing, or distributed systems rather than generic ETL or analytics work.
Find Nuro's open Data Engineer roles on Migrate Mate
Search Migrate Mate to filter specifically for Data Engineer openings at Nuro that include visa sponsorship. This saves time you'd otherwise spend manually screening job listings that don't confirm sponsorship upfront.
Address specialty occupation requirements in your resume
USCIS scrutinizes Data Engineer petitions for specialty occupation status. Make sure your resume explicitly ties your degree field to the technical responsibilities of the role, connecting your background in computer science, engineering, or a related discipline to the position's requirements.
Prepare for a technical interview process with infrastructure depth
Nuro's Data Engineer interviews typically probe distributed systems design, pipeline reliability, and data modeling at scale. Come prepared to discuss fault-tolerant architectures and experience with cloud-native tooling, since autonomous vehicle data demands low-latency, high-reliability infrastructure.
Clarify the LCA timeline with your recruiting contact
Your employer must file a certified Labor Condition Application with DOL before the H-1B petition reaches USCIS. Ask Nuro's recruiting team early in the offer stage when they typically initiate this step so you can plan your start date accordingly.
Frequently Asked Questions
Does Nuro sponsor H-1B visas for Data Engineers?
Yes, Nuro sponsors H-1B visas for Data Engineer roles. The company has an established immigration support process for technical positions, and Data Engineer is a role that typically meets USCIS specialty occupation criteria given its degree requirements in computer science, engineering, or a closely related field. Confirm sponsorship availability with the recruiter when you receive an offer.
Which visa types does Nuro commonly sponsor for Data Engineer roles?
Nuro sponsors several visa categories for Data Engineers, including H-1B, H-1B1 for Chilean and Singaporean nationals, TN for Canadian and Mexican nationals, F-1 OPT and CPT for students, J-1 for exchange visitors, and employment-based Green Card pathways such as EB-2 and EB-3. The right category depends on your nationality, educational background, and current immigration status.
How do I apply for Data Engineer jobs at Nuro?
You can browse and apply for Data Engineer positions at Nuro directly through Migrate Mate, which filters for roles that include visa sponsorship. When applying, tailor your materials to Nuro's autonomous systems focus, emphasizing pipeline engineering, distributed data infrastructure, and experience with large-scale real-time or sensor-driven data. Apply early since technical roles at robotics companies often move through interviews quickly.
What qualifications does Nuro expect for Data Engineer candidates?
Nuro typically looks for a bachelor's degree or higher in computer science, software engineering, or a related technical field. Strong candidates demonstrate hands-on experience with distributed data systems, stream processing frameworks, cloud infrastructure, and data pipeline design at scale. Experience adjacent to robotics, autonomous systems, or high-frequency telemetry data is a meaningful differentiator for this specific company.
How do I plan my timeline if Nuro is sponsoring my H-1B?
If you need an H-1B and aren't currently in valid status, the standard lottery cap means you'd need to be selected in the April registration window, with employment typically starting October 1. If you're already on H-1B or another valid status, Nuro can file a transfer petition at any time. USCIS premium processing is available and reduces adjudication to roughly 15 business days if timing is critical.