ML Software Engineer Jobs at NVIDIA with Visa Sponsorship
NVIDIA hires ML Software Engineers across research, inference infrastructure, and model deployment teams, and the company has a consistent track record of sponsoring work visas for this function. If you're an international candidate targeting a role here, you're applying to one of the most active technical employers in the sponsorship space.
See All ML Software Engineer at NVIDIA Jobs
Overview
Showing 5 of 35+ ML Software Engineer jobs at NVIDIA


See all 35+ ML Software Engineer Jobs at NVIDIA
Sign up for free to unlock all listings, filter by visa type, and get alerts for new ML Software Engineer Jobs at NVIDIA.
Get Access To All Jobs
INTRODUCTION
NVIDIA has continuously reinvented itself over two decades. Our invention of the GPU in 1999 sparked the growth of the PC gaming market, redefined modern computer graphics, and revolutionized parallel computing. More recently, GPU deep learning ignited modern AI — the next era of computing. NVIDIA is a “learning machine” that constantly evolves by adapting to new opportunities that are hard to solve, that only we can tackle, and that matter to the world. This is our life’s work, to amplify human imagination and intelligence. Make the choice to join us today!
ROLE AND RESPONSIBILITIES
As a member of the GPU AI/HPC Infrastructure team, you will provide leadership in the design and implementation of groundbreaking GPU compute clusters that run demanding deep learning, high performance computing, and computationally intensive workloads. We seek a technical leader to identify architectural changes and/or completely new approaches for our GPU compute clusters. As an expert, you will help us with the strategic challenges we encounter, including: compute, networking, and storage design for large-scale, high-performance workloads; effective resource utilization in a heterogeneous compute environment; evolving our private/public cloud strategy; and capacity modeling and growth planning across our global computing environment.
What you'll be doing:
- Provide leadership and strategic guidance on the management of large-scale HPC systems including the deployment of compute, networking, and storage.
- Develop and improve our ecosystem around GPU-accelerated computing including developing scalable automation solutions.
- Build and maintain AI and ML heterogeneous clusters on-premises and in the cloud.
- Create and cultivate customer and cross-team relationships to reliably sustain the clusters and meet evolving user needs.
- Support our researchers to run their workloads including performance analysis and optimizations.
- Conduct root cause analysis and suggest corrective action. Proactively find and fix issues before they occur.
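The capacity modeling and growth planning mentioned in the role description can be made concrete with a toy projection. This is a minimal sketch for intuition only; the constant-growth model and the `project_gpu_demand` helper are our own illustration, not NVIDIA's actual planning methodology:

```python
def project_gpu_demand(current_gpus: int, quarterly_growth: float, quarters: int) -> list[int]:
    """Project GPU counts under a constant quarterly growth rate.

    Toy model: real capacity planning would also account for utilization
    targets, queue wait times, power and cooling limits, and procurement
    lead times.
    """
    demand = [current_gpus]
    for _ in range(quarters):
        demand.append(round(demand[-1] * (1 + quarterly_growth)))
    return demand

# A cluster of 1,000 GPUs growing 15% per quarter for one year.
projection = project_gpu_demand(1000, 0.15, 4)
```

Even a crude model like this turns "demand is growing fast" into a node count that a procurement conversation can act on.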
BASIC QUALIFICATIONS
What we need to see:
- Bachelor’s degree in Computer Science, Electrical Engineering or related field or equivalent experience.
- 5+ years of experience designing and operating large-scale compute infrastructure.
- Experience with AI/HPC advanced job schedulers, such as Slurm, K8s, PBS, RTDA or LSF.
- Proficient in administering CentOS/RHEL and/or Ubuntu Linux distributions.
- Solid understanding of cluster configuration management tools such as Ansible, Puppet, Salt.
- In-depth understanding of container technologies like Docker, Singularity, Podman, Shifter, Charliecloud.
- Proficiency in Python programming and bash scripting.
- Applied experience with AI/HPC workflows that use MPI.
- Experience analyzing and tuning performance for a variety of AI/HPC workloads.
- Passion for continual learning and staying ahead of emerging technologies and effective approaches in the HPC and AI/ML infrastructure fields.
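As a small illustration of the Slurm and Python skills listed above, here is the kind of automation a cluster team might write around the scheduler. The `summarize_node_states` helper and the sample data are hypothetical; the only real assumption is the `sinfo -h -o '%n %t'` output format of one node name and state per line:

```python
from collections import Counter

def summarize_node_states(sinfo_output: str) -> Counter:
    """Tally node states from `sinfo -h -o '%n %t'`-style output.

    In production this string would come from running sinfo via
    subprocess; the summary could then drive alerting or auto-draining
    of unhealthy nodes.
    """
    states = Counter()
    for line in sinfo_output.strip().splitlines():
        _node, state = line.split()
        states[state] += 1
    return states

# Sample scheduler output: two allocated nodes, one idle, one drained.
sample = """\
gpu-node-001 alloc
gpu-node-002 idle
gpu-node-003 drain
gpu-node-004 alloc
"""
summary = summarize_node_states(sample)
```

Interviewers for infrastructure roles often probe exactly this seam: can you wrap a scheduler CLI in scripts that scale to thousands of nodes?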
PREFERRED QUALIFICATIONS
Ways to stand out from the crowd:
- Background with NVIDIA GPUs, CUDA Programming, NCCL and MLPerf benchmarking.
- Experience with Machine Learning and Deep Learning concepts, algorithms and models.
- Familiarity with InfiniBand with IPoIB and RDMA.
- Understanding of fast, distributed storage systems like Lustre and GPFS for AI/HPC workloads.
- Familiarity with deep learning frameworks like PyTorch and TensorFlow.
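For the NCCL and MLPerf benchmarking item above, one number worth understanding is bus bandwidth. Below is a sketch of the ring all-reduce cost model popularized by the nccl-tests suite; the helper name is ours, but the 2*(n-1)/n factor is the standard ring all-reduce correction:

```python
def allreduce_bus_bandwidth_gbps(bytes_per_rank: int, num_ranks: int, seconds: float) -> float:
    """Convert an all-reduce timing into bus bandwidth in GB/s.

    Ring all-reduce moves 2*(n-1)/n times the buffer size per rank, so
    bus bandwidth = (bytes / time) * 2*(n-1)/n. This correction makes
    results comparable across different rank counts.
    """
    algo_bw = bytes_per_rank / seconds  # algorithm bandwidth, bytes/s
    bus_bw = algo_bw * 2 * (num_ranks - 1) / num_ranks
    return bus_bw / 1e9

# 1 GB per rank across 8 GPUs completing in 10 ms -> ~175 GB/s bus bandwidth.
bw = allreduce_bus_bandwidth_gbps(1_000_000_000, 8, 0.01)
```

Being able to explain why the measured figure is below the link's line rate, and which factor (topology, message size, protocol) is responsible, is the sort of depth these postings are looking for.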
COMPENSATION
Your base salary will be determined based on your location, experience, and the pay of employees in similar positions. The base salary range is 152,000 USD - 241,500 USD for Level 3, and 184,000 USD - 287,500 USD for Level 4. You will also be eligible for equity and benefits.
Applications for this job will be accepted at least until April 28, 2026. This posting is for an existing vacancy.
NVIDIA uses AI tools in its recruiting processes.
NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.

Tips for Finding ML Software Engineer Jobs at NVIDIA
Align your portfolio to NVIDIA's ML stack
NVIDIA recruits heavily for roles involving CUDA optimization, TensorRT, and large-scale training pipelines. Before applying, make sure your GitHub, published work, or project descriptions explicitly reference these frameworks rather than generic machine learning experience.
Target teams where sponsorship is routine
NVIDIA's Deep Learning Frameworks, Inference, and Applied Research teams consistently hire international engineers. Filtering job postings by these business units focuses your effort on positions where the hiring workflow already includes visa filing steps.
Secure your credential evaluation before interviews start
If your engineering or computer science degree is from outside the United States, get a credential evaluation from a NACES-approved service before you reach the offer stage. NVIDIA's immigration team needs this document to support an H-1B specialty occupation determination, and delays here can push your start date.
Understand the H-1B cap registration window
USCIS opens H-1B cap registration each March for a roughly two-week window. If NVIDIA extends an offer after April, your start date will shift to October 1 of the following fiscal year unless you're already in a cap-exempt status like OPT or a prior H-1B transfer.
Clarify E-3 eligibility early if you're Australian
Australian citizens can use the E-3 visa, which has no lottery and a faster timeline than the H-1B. NVIDIA sponsors E-3 visas for qualifying ML Software Engineer roles, so mention your citizenship at the offer stage so their legal team can file the Labor Condition Application with the DOL accordingly.
Use Migrate Mate to find open ML roles with confirmed sponsorship
Searching for NVIDIA ML Software Engineer openings across generic job boards surfaces roles without any visa context. Migrate Mate filters specifically for positions where NVIDIA has an active sponsorship history, so you're targeting openings that are already verified for international candidates.
NVIDIA is hiring ML Software Engineers across the US. Find your role.
Find ML Software Engineer at NVIDIA Jobs
Frequently Asked Questions
Does NVIDIA sponsor H-1B visas for ML Software Engineers?
Yes, NVIDIA sponsors H-1B visas for ML Software Engineers. The company works with immigration counsel to file H-1B petitions for qualifying hires, which includes submitting a Labor Condition Application to the DOL and a Form I-129 to USCIS. Because H-1B is subject to the annual cap and lottery, your start date depends on when NVIDIA extends the offer relative to the USCIS registration window each March.
How do I apply for ML Software Engineer jobs at NVIDIA?
Apply directly through NVIDIA's careers portal, where ML Software Engineer postings are listed by team and location. Tailor your resume to emphasize GPU computing, model optimization, or large-scale training experience relevant to the specific team. You can also browse current NVIDIA ML Software Engineer openings filtered by sponsorship eligibility through Migrate Mate, which surfaces roles where NVIDIA has sponsored international candidates for this function.
Which visa types does NVIDIA commonly use for ML Software Engineers?
NVIDIA sponsors H-1B for most international ML Software Engineer hires and E-3 for Australian citizens, which bypasses the H-1B lottery. For engineers pursuing permanent residence, NVIDIA has a track record of supporting EB-2 and EB-3 Green Card sponsorship through the PERM labor certification process, typically after an employee has established tenure in a qualifying role.
What qualifications does NVIDIA expect for ML Software Engineer roles?
Most NVIDIA ML Software Engineer postings require a bachelor's degree or higher in computer science, electrical engineering, or a closely related field, and the role must qualify as a specialty occupation under USCIS standards. Practically, NVIDIA's technical bar emphasizes systems-level ML knowledge: familiarity with CUDA, distributed training frameworks like Megatron or PyTorch, and experience deploying models at scale distinguish competitive candidates from applicants with general ML backgrounds.
How do I plan my timeline for an NVIDIA ML Software Engineer role with visa sponsorship?
If you're on OPT, factor in that NVIDIA's recruiting cycles for technical roles often span eight to twelve weeks from application to offer. An H-1B cap case filed for you in March has an October 1 start date, so coordinate your OPT expiration against that window. If you're cap-exempt or eligible for E-3, NVIDIA can file outside the lottery and target an earlier start date, sometimes within 30 to 90 days of offer acceptance.
See which NVIDIA teams are hiring ML Software Engineers and sponsoring visas right now.
Search ML Software Engineer at NVIDIA Jobs