Backend Engineer Jobs at NVIDIA with Visa Sponsorship
Backend Engineer roles at NVIDIA sit at the intersection of infrastructure scale and cutting-edge GPU computing, covering everything from distributed systems to high-performance data pipelines. NVIDIA has a consistent track record of sponsoring work visas for engineering talent, including H-1B, E-3, and employment-based Green Card pathways.
INTRODUCTION
Reinforcement learning post-training is driving some of the most significant capability gains in AI today. It is the process that teaches a model to reason through hard problems, follow complex instructions, and act as an autonomous agent. It is also one of the hardest infrastructure challenges in the field. RL requires inference, rollout generation, and training running in a continuous loop. The rollout step is what makes it hard: the model must interact with environments, tools, and other models to produce the signal that drives learning. Coordinating actor, critic, and reward models across heterogeneous hardware at scale pushes the limits of what distributed systems can do.
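The continuous loop described above can be sketched in a few lines. This is a toy illustration, not NVIDIA's implementation: every function name here is hypothetical, the "policy" is a single number, and the update rule is simple hill-climbing rather than a real RL algorithm. A production system runs these same three stages (rollout, reward, training update) across actor, critic, and reward models on heterogeneous hardware.

```python
import random

# Toy sketch of the inference -> rollout -> reward -> training loop.
# All names are hypothetical; a real system coordinates actor, critic,
# and reward models across thousands of GPUs.

def rollout(policy, prompt):
    # Rollout: the policy interacts with an "environment" (here, trivially).
    return policy * len(prompt)

def reward(response, target=12.0):
    # Reward model: higher is better; peak when the response hits the target.
    return -abs(response - target)

def train_step(policy, prompts, rng, step_size=0.1):
    # Training update: simple hill-climbing driven by the rollout reward,
    # standing in for a real policy-gradient step.
    candidate = policy + rng.uniform(-step_size, step_size)
    current = sum(reward(rollout(policy, p)) for p in prompts)
    proposed = sum(reward(rollout(candidate, p)) for p in prompts)
    return candidate if proposed > current else policy

def rl_loop(prompts, steps=200, seed=0):
    rng = random.Random(seed)
    policy = 0.0
    for _ in range(steps):
        policy = train_step(policy, prompts, rng)
    return policy
```

Even in this toy, the structural point holds: the rollout stage dominates the loop, and in a real system it is the stage that must talk to external environments, tools, and other models.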
NVIDIA is building an RL Frameworks engineering team to develop the open-source tools and infrastructure that AI researchers and post-training teams depend on. The team spans the full software stack, from collaborating closely with the researchers and labs pushing the frontier, to contributing to RL frameworks like VeRL, Miles, and TorchTitan, to improving the distributed runtimes they depend on, including Ray and Monarch. Whether your strength is working with researchers to understand and address their needs, optimizing deep learning frameworks, or building distributed infrastructure, we want to hear from you. Come join us to build the systems that enable the next generation of AI.
ROLE AND RESPONSIBILITIES
You will architect and build RL post-training infrastructure that scales efficiently from experimentation on a single GPU to production across thousands of nodes. This means tuning RL training-inference-rollout loops on GPUs, CPUs, and LPUs for performance where it matters, contributing to and improving the performance and usability of open-source RL frameworks, and partnering with the teams who own them. The role also spans fault tolerance, elastic scaling, and fast restarts so long-running distributed training jobs survive failures, stragglers, and resource contention.
Beyond GPU-accelerated training, this work includes partnering with teams building CPU-driven rollout workloads, including tool-use, code execution, and agentic environments, supplying the systems and framework engineering needed to run them efficiently alongside GPU- or LPU-accelerated generation and GPU-accelerated training. It also means advocating for researcher and partner needs with NVIDIA's networking, math library, and compiler teams so the capabilities RL workloads require get prioritized and delivered, and working with hardware teams to take advantage of next-generation hardware capabilities in post-training workloads.
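The "fast restarts" responsibility above boils down to a checkpoint-and-resume pattern: persist loop state atomically so a long-running job resumes after a failure instead of starting over. A minimal sketch, with hypothetical names and JSON standing in for real checkpoint formats:

```python
import json
import os

# Hypothetical sketch of checkpoint-and-restart for a long-running job.
# The write-to-temp-then-rename pattern keeps the checkpoint atomic:
# the file on disk is always either the old complete state or the new one.

def save_checkpoint(path, state):
    tmp = path + ".tmp"
    with open(tmp, "w") as f:
        json.dump(state, f)
    os.replace(tmp, path)  # atomic on POSIX filesystems

def load_checkpoint(path):
    if os.path.exists(path):
        with open(path) as f:
            return json.load(f)
    return {"step": 0, "loss": None}  # fresh start

def run_job(path, total_steps=10, fail_at=None):
    state = load_checkpoint(path)  # resume from the last checkpoint
    for step in range(state["step"], total_steps):
        if step == fail_at:
            raise RuntimeError("simulated node failure")
        state = {"step": step + 1, "loss": 1.0 / (step + 1)}
        save_checkpoint(path, state)  # checkpoint after every step
    return state
```

Real distributed training adds the hard parts this sketch omits: sharded checkpoints across thousands of ranks, straggler detection, and elastic rescheduling of the surviving workers.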
BASIC QUALIFICATIONS
- MS or PhD in Computer Science, Computer Engineering, or a related field (or equivalent experience)
- 5+ years of professional experience in distributed systems, high-performance computing, deep learning infrastructure, or ML systems engineering
- Strong proficiency in Python and C/C++
- Demonstrated experience building or contributing to large-scale distributed systems or runtime frameworks in production at a frontier AI lab, hyperscaler, or major technology company
- Strong verbal and written communication skills and the ability to collaborate across organizational and geographic boundaries
PREFERRED QUALIFICATIONS
Depth in one or more of the following technical areas:
- Reinforcement learning for LLM post-training (RLHF, PPO, GRPO, DPO, reward modeling), including how algorithms map to distributed execution and the systems challenges they create (heterogeneous placement, rollouts, environment execution, resharding between training and generation)
- PyTorch internals, including distributed training primitives (FSDP, tensor parallelism, pipeline parallelism) and their composition
- Kubernetes runtime internals (container lifecycle, pod scheduling, resource quotas, GPU allocation)
- End-to-end distributed systems design (service boundaries, data flows, consistency models, failure modes, recovery approaches)
Experience in any of the following areas is a plus:
- Deep expertise in networking (NCCL, NVLink, InfiniBand), advanced multi-dimensional parallelisms (Megatron-LM, FSDP2, TP/DP/PP, MoE), or memory optimizations (quantization-aware training, mixed precision)
- Experience integrating high-performance inference engines (vLLM, SGLang, TensorRT-LLM) into RL training loops for GPU-accelerated rollout
- Strong background in actor- and task-based distributed programming (Ray, Monarch, or comparable systems)
- Familiarity with multi-turn training, multi-agent co-evolution, or VLM post-training
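Of the algorithms listed above, GRPO is a good example of how an RL method shapes the systems problem: it samples a group of rollouts per prompt and normalizes each reward against the group's own statistics, which removes the need for PPO's separate critic model. A minimal sketch of one common formulation of that advantage computation:

```python
import statistics

# Sketch of GRPO-style group-relative advantages: each rollout's reward
# is normalized against the mean and standard deviation of its group
# (all rollouts sampled for the same prompt). Dropping the learned
# critic is what changes the distributed placement problem.

def group_relative_advantages(rewards, eps=1e-8):
    """rewards: scalar rewards for one group of rollouts for one prompt."""
    mean = statistics.fmean(rewards)
    std = statistics.pstdev(rewards)
    return [(r - mean) / (std + eps) for r in rewards]
```

Systems-wise, this is why rollout generation dominates GRPO workloads: every training example requires a whole group of sampled responses, so inference throughput and resharding between generation and training become the bottleneck.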
Ways to stand out from the crowd:
- Open-source contributions to RL post-training or distributed training projects (e.g., VeRL, Miles, TorchTitan, OpenRLHF, NeMo-Aligner, DeepSpeed-Chat), including significant work on framework internals where applicable
- Kubernetes work beyond routine operations (custom operators, GPU device plugins, or scheduling contributions)
- Direct experience operating frontier-scale training (RL post-training at thousands of GPUs and/or large-scale LLM or multimodal pre-training)
- Hands-on experience with production distributed failures at scale (stragglers, resource contention, hardware faults)
Widely considered to be one of the technology world’s most desirable employers, NVIDIA offers highly competitive salaries and a comprehensive benefits package. As you plan your future, see what we can offer to you and your family.
Your base salary will be determined based on your location, experience, and the pay of employees in similar positions. The base salary range is 184,000 USD - 287,500 USD for Level 4, and 224,000 USD - 356,500 USD for Level 5. You will also be eligible for equity and benefits.
Applications for this job will be accepted at least until April 27, 2026. This posting is for an existing vacancy.
NVIDIA uses AI tools in its recruiting processes.
NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.

Tips for Finding Backend Engineer Jobs at NVIDIA
Align your portfolio to NVIDIA's stack
NVIDIA's backend engineering interviews weight systems design and GPU-adjacent infrastructure heavily. Build demonstrable experience in CUDA-compatible data pipelines, distributed computing, or high-throughput API design before applying, so your portfolio speaks directly to their hiring bar.
Target teams with recurring backend openings
NVIDIA's cloud infrastructure, AI platform, and autonomous vehicle divisions post backend roles consistently. Tracking which business units publish these roles repeatedly signals where headcount is stable, which correlates directly with employer willingness to invest in visa sponsorship.
Confirm sponsorship scope during the recruiter screen
NVIDIA sponsors H-1B, E-3, and employment-based Green Cards, but sponsorship availability can vary by role level and business unit. Ask the recruiter directly which visa types the hiring team has sponsored for this specific position before progressing to technical rounds.
Prepare your credentials for specialty occupation documentation
USCIS requires your degree to relate directly to the role for H-1B specialty occupation status. Backend engineering at NVIDIA typically maps to computer science, computer engineering, or a closely related field, so gather your transcripts and degree equivalency evaluations before an offer is extended.
Understand NVIDIA's PERM timeline if targeting a Green Card
If NVIDIA initiates PERM labor certification for EB-2 or EB-3, DOL processing can take six months or longer before the I-140 petition is even filed. Factor this into your planning, particularly if you're nearing the end of an initial H-1B approval period.
Browse open backend roles using Migrate Mate
Filtering for NVIDIA backend roles by visa sponsorship type saves significant research time. Use Migrate Mate to surface active openings that match your visa category and engineering background, so you're applying to positions where sponsorship is already confirmed.
Frequently Asked Questions
Does NVIDIA sponsor H-1B visas for Backend Engineers?
Yes, NVIDIA sponsors H-1B visas for Backend Engineers. The company participates in the annual H-1B cap lottery for new applicants and also files cap-exempt petitions where eligible. Because NVIDIA operates across multiple business units with ongoing backend infrastructure needs, sponsorship for this function is well-established, though availability can vary by team and seniority level. Confirming sponsorship intent with the recruiter early in the process is recommended.
How do I apply for Backend Engineer jobs at NVIDIA?
Applications go through NVIDIA's careers portal, where backend roles are listed by team and location. Tailoring your resume to emphasize distributed systems, high-performance computing, or GPU infrastructure experience improves your screening odds. You can also browse NVIDIA Backend Engineer roles filtered by visa sponsorship type on Migrate Mate, which helps you identify active openings aligned to your work authorization situation before you apply.
Which visa types does NVIDIA commonly sponsor for Backend Engineers?
NVIDIA sponsors H-1B visas for the broadest range of backend engineering candidates. Australian citizens working in qualifying specialty occupation roles are also eligible for E-3 sponsorship, which bypasses the H-1B lottery entirely. For longer-term pathways, NVIDIA initiates EB-2 and EB-3 Green Card sponsorship through the PERM labor certification process, typically after an employee has been in role for a period of time.
What qualifications does NVIDIA expect for Backend Engineer roles?
Most Backend Engineer positions at NVIDIA require a bachelor's degree or higher in computer science, computer engineering, or a directly related field. Practically, NVIDIA's bar for backend engineering skews heavily toward systems-level work: experience with large-scale distributed systems, low-latency infrastructure, or data pipeline architecture is common across job postings. Familiarity with GPU computing environments or CUDA-based workflows is a differentiator for roles sitting closer to the hardware layer.
How do I manage visa timing when joining NVIDIA as a Backend Engineer?
If you're transferring an existing H-1B from another employer, NVIDIA can file an H-1B transfer petition and you're authorized to start work once USCIS receives it, without waiting for approval. For new H-1B applicants, employment can't begin until October 1 following a successful lottery selection. E-3 applicants can move faster since there's no lottery, with consular processing often completing within a few weeks of receiving a certified Labor Condition Application from the Department of Labor.