Senior Software Developer Jobs at NVIDIA with Visa Sponsorship
NVIDIA's Senior Software Developer roles span GPU architecture, AI inference, and large-scale systems work that sits at the frontier of what the industry is building. NVIDIA has an established track record of sponsoring international engineers across multiple visa categories, making it a realistic target for skilled developers navigating U.S. work authorization.
INTRODUCTION
Reinforcement learning post-training is driving some of the most significant capability gains in AI today. It is the process that teaches a model to reason through hard problems, follow complex instructions, and act as an autonomous agent. It is also one of the hardest infrastructure challenges in the field. RL requires inference, rollout generation, and training running in a continuous loop. The rollout step is what makes it hard: the model must interact with environments, tools, and other models to produce the signal that drives learning. Coordinating actor, critic, and reward models across heterogeneous hardware at scale pushes the limits of what distributed systems can do.
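The continuous loop described above can be sketched in miniature. This is a toy, schematic illustration only: every name below is a hypothetical placeholder, not an NVIDIA or framework API, and a real system distributes each stage across many heterogeneous nodes rather than running in a single process.

```python
# Schematic sketch of the inference -> rollout -> training loop described
# above. All names are illustrative toys; in production, generation,
# reward scoring, and training each run on separate distributed workers.

def generate_rollout(policy, prompt):
    """Rollout: the policy interacts with an environment to produce a
    trajectory (here, just a list of scalar 'actions')."""
    return [policy(prompt + i) for i in range(4)]

def score_rollout(trajectory):
    """Reward model: turns a trajectory into a scalar learning signal
    (a fixed baseline plus the trajectory mean, purely for illustration)."""
    return 1.0 + sum(trajectory) / len(trajectory)

def update_policy(weight, reward, lr=0.1):
    """Training: nudge the policy toward higher-reward behavior."""
    return weight + lr * reward

weight = 0.0
policy = lambda x: weight * x  # toy "policy": scale the input

for _ in range(3):  # the continuous loop: generate, score, train
    trajectory = generate_rollout(policy, prompt=1)
    reward = score_rollout(trajectory)
    weight = update_policy(weight, reward)
```

Even in this toy, the structural difficulty is visible: the rollout step depends on the current policy, so generation and training cannot be fully decoupled, which is what forces the tight coordination across hardware the paragraph above describes.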
NVIDIA is building an RL Frameworks engineering team to develop the open-source tools and infrastructure that AI researchers and post-training teams depend on. The team spans the full software stack, from collaborating closely with the researchers and labs pushing the frontier, to contributing to RL frameworks like VeRL, Miles, and TorchTitan, to improving the distributed runtimes they depend on, including Ray and Monarch. Whether your strength is working with researchers to understand and address their needs, optimizing deep learning frameworks, or building distributed infrastructure, we want to hear from you. Come join us to build the systems that enable the next generation of AI.
ROLE AND RESPONSIBILITIES
You will architect and build RL post-training infrastructure that scales efficiently from experimentation on a single GPU to production across thousands of nodes. This means tuning RL training-inference-rollout loops on GPUs, CPUs, and LPUs for performance where it matters, contributing to and improving the performance and usability of open-source RL frameworks, and partnering with the teams who own them. The role also spans fault tolerance, elastic scaling, and fast restarts so long-running distributed training jobs survive failures, stragglers, and resource contention.
Beyond GPU-accelerated training, this work includes partnering with teams building CPU-driven rollout workloads, including tool-use, code execution, and agentic environments, supplying the systems and framework engineering needed to run them efficiently alongside GPU- or LPU-accelerated generation and GPU-accelerated training. It also means advocating for researcher and partner needs with NVIDIA's networking, math library, and compiler teams so the capabilities RL workloads require get prioritized and delivered, and working with hardware teams to take advantage of next-generation hardware capabilities in post-training workloads.
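The fast-restart requirement mentioned above follows a familiar pattern: persist training state atomically so a restarted long-running job resumes from its last completed step instead of from scratch. The sketch below is a minimal, hedged illustration under stated assumptions; the file name and state shape are invented for the example and do not reflect any framework's actual checkpoint format.

```python
import json
import os
import tempfile

# Illustrative sketch of atomic checkpointing for fast restarts. The
# checkpoint path and state layout are hypothetical, chosen for the demo.
CKPT = os.path.join(tempfile.gettempdir(), "rl_ckpt_demo.json")

def save_checkpoint(step, state):
    tmp = CKPT + ".tmp"
    with open(tmp, "w") as f:
        json.dump({"step": step, "state": state}, f)
    os.replace(tmp, CKPT)  # atomic rename: a crash never leaves a torn file

def load_checkpoint():
    if not os.path.exists(CKPT):
        return 0, {}  # fresh start
    with open(CKPT) as f:
        ckpt = json.load(f)
    return ckpt["step"], ckpt["state"]

start, state = load_checkpoint()
for step in range(start, start + 5):  # resumes here after a restart
    state["loss"] = 1.0 / (step + 1)  # stand-in for one training step
    save_checkpoint(step + 1, state)

resumed_step, _ = load_checkpoint()  # a restarted job would begin here
os.remove(CKPT)  # cleanup for the demo
```

The write-to-temp-then-rename step is the essential detail: a job killed mid-save still finds a consistent last checkpoint on restart, which is what lets thousand-node runs survive stragglers and hardware faults without losing progress.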
BASIC QUALIFICATIONS
- MS or PhD in Computer Science, Computer Engineering, or a related field (or equivalent experience)
- 5+ years of professional experience in distributed systems, high-performance computing, deep learning infrastructure, or ML systems engineering
- Strong proficiency in Python and C/C++
- Demonstrated experience building or contributing to large-scale distributed systems or runtime frameworks in production at a frontier AI lab, hyperscaler, or major technology company
- Strong verbal and written communication skills and the ability to collaborate across organizational and geographic boundaries
PREFERRED QUALIFICATIONS
Depth in one or more of the following technical areas:
- Reinforcement learning for LLM post-training (RLHF, PPO, GRPO, DPO, reward modeling), including how algorithms map to distributed execution and the systems challenges they create (heterogeneous placement, rollouts, environment execution, resharding between training and generation)
- PyTorch internals, including distributed training primitives (FSDP, tensor parallelism, pipeline parallelism) and their composition
- Kubernetes runtime internals (container lifecycle, pod scheduling, resource quotas, GPU allocation)
- End-to-end distributed systems design (service boundaries, data flows, consistency models, failure modes, recovery approaches)
Experience in any of the following areas is a plus:
- Deep expertise in networking (NCCL, NVLink, InfiniBand), advanced multi-dimensional parallelisms (Megatron-LM, FSDP2, TP/DP/PP, MoE), or memory optimizations (quantization-aware training, mixed precision)
- Experience integrating high-performance inference engines (vLLM, SGLang, TensorRT-LLM) into RL training loops for GPU-accelerated rollout
- Strong background in actor- and task-based distributed programming (Ray, Monarch, or comparable systems)
- Familiarity with multi-turn training, multi-agent co-evolution, or VLM post-training
Ways to stand out from the crowd:
- Open-source contributions to RL post-training or distributed training projects (e.g., VeRL, Miles, TorchTitan, OpenRLHF, NeMo-Aligner, DeepSpeed-Chat), including significant work on framework internals where applicable
- Kubernetes work beyond routine operations (custom operators, GPU device plugins, or scheduling contributions)
- Direct experience operating frontier-scale training (RL post-training at thousands of GPUs and/or large-scale LLM or multimodal pre-training)
- Hands-on experience with production distributed failures at scale (stragglers, resource contention, hardware faults)
Widely considered to be one of the technology world’s most desirable employers, NVIDIA offers highly competitive salaries and a comprehensive benefits package. As you plan your future, see what we can offer to you and your family.
Your base salary will be determined based on your location, experience, and the pay of employees in similar positions. The base salary range is 184,000 USD - 287,500 USD for Level 4, and 224,000 USD - 356,500 USD for Level 5. You will also be eligible for equity and benefits.
Applications for this job will be accepted at least until April 27, 2026. This posting is for an existing vacancy.
NVIDIA uses AI tools in its recruiting processes.
NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.

Tips for Finding Senior Software Developer Jobs at NVIDIA
Align Your Portfolio to GPU or AI Work
NVIDIA's engineering teams prioritize deep systems expertise, not generalist experience. Before applying, tailor your portfolio to highlight GPU programming, CUDA, parallel computing, or AI inference work. Generic software projects won't differentiate you in a highly specialized applicant pool.
Verify Your Degree Field Matches the Role
H-1B and E-3 sponsorship both require your position to qualify as a specialty occupation, meaning your degree field must directly relate to the role. A computer science or electrical engineering degree maps cleanly; an unrelated field may require supporting documentation to establish equivalency.
Target NVIDIA's Roles Through Migrate Mate
Filter for Senior Software Developer openings at NVIDIA with confirmed visa sponsorship using Migrate Mate. This removes the guesswork of identifying which roles are actively open to H-1B or E-3 candidates, so you apply where sponsorship is already in scope.
Understand NVIDIA's Internal Transfer Option
If you're already in the U.S. on a valid visa, an H-1B transfer to NVIDIA avoids the lottery entirely. NVIDIA can file a cap-exempt transfer petition, meaning you can start without waiting for an October 1 start date if you've already been counted against the cap.
Ask About LCA Wage Level Before Negotiating
NVIDIA files a Labor Condition Application with the DOL before submitting your H-1B petition. The LCA locks in a wage level. Ask your recruiter which wage level the role is classified at, since this directly affects your offer range and USCIS adjudication.
Prepare for a Technical Interview Process Built Around Systems Depth
NVIDIA's Senior Software Developer interviews typically involve multiple rounds focused on low-level systems design, memory architecture, and performance optimization. Practicing general coding problems isn't enough; prepare for domain-specific rounds where hardware-software interaction is the core evaluation criterion.
NVIDIA is hiring Senior Software Developers across the US. Find yours.
Frequently Asked Questions
Does NVIDIA sponsor H-1B visas for Senior Software Developers?
Yes, NVIDIA sponsors H-1B visas for Senior Software Developer roles. The process involves NVIDIA filing a Labor Condition Application with the DOL and then submitting an H-1B petition to USCIS on your behalf. For new H-1B holders, this is subject to the annual lottery, with a cap-exempt transfer available if you already hold H-1B status with another employer.
How do I apply for Senior Software Developer jobs at NVIDIA?
Apply directly through NVIDIA's careers portal, where Senior Software Developer roles are listed by team and location. You can also browse confirmed visa-sponsoring openings at NVIDIA through Migrate Mate, which filters for roles where international sponsorship is in scope. Tailor your application to the specific engineering domain, whether that's AI, graphics, networking, or systems software, rather than submitting a generic resume.
Which visa types does NVIDIA commonly use for Senior Software Developer roles?
NVIDIA sponsors H-1B visas as the primary work authorization path for Senior Software Developers. Australian citizens can pursue the E-3 visa, which bypasses the lottery and allows for a faster timeline. For longer-term permanent residence, NVIDIA also supports EB-2 and EB-3 Green Card sponsorship, typically initiated after a period of employment through the PERM labor certification process.
What qualifications does NVIDIA expect for Senior Software Developer roles?
NVIDIA's Senior Software Developer positions generally require a bachelor's or master's degree in computer science, electrical engineering, or a closely related field. Beyond the degree, these roles demand demonstrated expertise in systems-level programming, performance optimization, and often domain-specific knowledge in GPU computing, AI frameworks, or networking. Several years of post-graduation industry experience is expected, not just academic credentials.
How do I understand the visa sponsorship timeline for a Senior Software Developer offer at NVIDIA?
Timeline depends on your current visa status. If you need a new H-1B, the lottery runs in March for an October 1 start date, meaning six or more months between offer and start. E-3 applicants can often complete consular processing in four to eight weeks. H-1B transfers for candidates already in status can proceed more quickly, with USCIS premium processing available to reduce the adjudication window to around two weeks.