AI ML Engineering Jobs at NVIDIA with Visa Sponsorship
NVIDIA's AI ML Engineering roles sit at the intersection of GPU architecture, large-scale model training, and production inference systems. NVIDIA has a strong track record of sponsoring international engineers across H-1B, E-3, and employment-based Green Card pathways, making it a viable target for visa-dependent candidates with deep ML expertise.
INTRODUCTION
NVIDIA has continuously reinvented itself over two decades. Our invention of the GPU in 1999 sparked the growth of the PC gaming market, redefined modern computer graphics, and revolutionized parallel computing. More recently, GPU deep learning ignited modern AI — the next era of computing. NVIDIA is a “learning machine” that constantly evolves by adapting to new opportunities that are hard to solve, that only we can tackle, and that matter to the world. This is our life’s work, to amplify human imagination and intelligence. Make the choice to join us today!
ROLE AND RESPONSIBILITIES
As a member of the GPU AI/HPC Infrastructure team, you will provide leadership in the design and implementation of groundbreaking GPU compute clusters that run demanding deep learning, high performance computing, and computationally intensive workloads. We seek a technical leader to identify architectural changes and/or completely new approaches for our GPU Compute Clusters. As an expert, you will help us tackle strategic challenges, including compute, networking, and storage design for large-scale, high-performance workloads; effective resource utilization in a heterogeneous compute environment; evolving our private/public cloud strategy; capacity modeling; and growth planning across our global computing environment.
What you'll be doing:
- Provide leadership and strategic guidance on the management of large-scale HPC systems including the deployment of compute, networking, and storage.
- Develop and improve our ecosystem around GPU-accelerated computing including developing scalable automation solutions.
- Build and maintain AI and ML heterogeneous clusters on-premises and in the cloud.
- Create and cultivate customer and cross-team relationships to reliably sustain the clusters and meet evolving user needs.
- Support our researchers to run their workloads including performance analysis and optimizations.
- Conduct root cause analysis and suggest corrective action. Proactively find and fix issues before they occur.
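The proactive issue-finding described above can be sketched in Python. This is a minimal, hypothetical example: the CSV layout imitates `nvidia-smi --query-gpu=... --format=csv,noheader` style output, but the exact fields, thresholds, and function names here are illustrative assumptions, not NVIDIA's documented schema; a real deployment would pull telemetry from `nvidia-smi` or DCGM rather than a hard-coded sample.

```python
# Hedged sketch: flag GPUs whose reported temperature or ECC error count
# breaches a threshold, given nvidia-smi-style CSV rows of
# "index, temperature_c, ecc_errors". Field layout is an assumption.

def parse_gpu_telemetry(csv_text):
    """Parse 'index, temp_c, ecc_errors' rows into dicts."""
    gpus = []
    for line in csv_text.strip().splitlines():
        index, temp, ecc = (field.strip() for field in line.split(","))
        gpus.append({"index": int(index), "temp_c": int(temp), "ecc_errors": int(ecc)})
    return gpus

def flag_unhealthy(gpus, max_temp_c=85, max_ecc=0):
    """Return indices of GPUs that exceed temperature or ECC error thresholds."""
    return [g["index"] for g in gpus
            if g["temp_c"] > max_temp_c or g["ecc_errors"] > max_ecc]

sample = """0, 61, 0
1, 92, 0
2, 70, 3"""
print(flag_unhealthy(parse_gpu_telemetry(sample)))  # → [1, 2]
```

In practice a check like this would run on a schedule and drain flagged nodes from the scheduler before user jobs land on them.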
BASIC QUALIFICATIONS
What we need to see:
- Bachelor’s degree in Computer Science, Electrical Engineering or related field or equivalent experience.
- 5+ years of experience designing and operating large-scale compute infrastructure.
- Experience with AI/HPC advanced job schedulers, such as Slurm, K8s, PBS, RTDA or LSF.
- Proficient in administering CentOS/RHEL and/or Ubuntu Linux distributions.
- Solid understanding of cluster configuration management tools such as Ansible, Puppet, Salt.
- In-depth understanding of container technologies like Docker, Singularity, Podman, Shifter, Charliecloud.
- Proficiency in Python programming and bash scripting.
- Applied experience with AI/HPC workflows that use MPI.
- Experience analyzing and tuning performance for a variety of AI/HPC workloads.
- Passion for continual learning and staying ahead of emerging technologies and effective approaches in the HPC and AI/ML infrastructure fields.
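To make the scheduler and capacity-modeling requirements above concrete, here is a small sketch of capacity accounting over scheduler node states. The input imitates what `sinfo -h -o "%D %t"` (node count, node state) might print on a Slurm cluster, but the format string, state names, and function names are illustrative assumptions rather than a definitive Slurm integration.

```python
# Hedged sketch: summarize scheduler node states and compute utilization
# of schedulable capacity. Input rows of "<count> <state>" mimic a
# Slurm sinfo summary; states used here (alloc/idle/drain) are assumptions.

from collections import Counter

def node_state_counts(sinfo_text):
    """Aggregate '<count> <state>' rows into a state -> node-count mapping."""
    counts = Counter()
    for line in sinfo_text.strip().splitlines():
        count, state = line.split()
        counts[state] += int(count)
    return counts

def utilization(counts):
    """Fraction of schedulable nodes currently allocated (alloc / (alloc + idle))."""
    alloc, idle = counts.get("alloc", 0), counts.get("idle", 0)
    schedulable = alloc + idle
    return alloc / schedulable if schedulable else 0.0

sample = """96 alloc
24 idle
8 drain"""
counts = node_state_counts(sample)
print(utilization(counts))  # 96 / 120 → 0.8
```

Note that drained nodes are deliberately excluded from the denominator; tracking them separately (as a "capacity lost to maintenance" metric) is the kind of design choice a capacity model has to make explicit.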
PREFERRED QUALIFICATIONS
Ways to stand out from the crowd:
- Background with NVIDIA GPUs, CUDA programming, NCCL, and MLPerf benchmarking.
- Experience with Machine Learning and Deep Learning concepts, algorithms and models.
- Familiarity with InfiniBand with IPoIB and RDMA.
- Understanding of fast, distributed storage systems like Lustre and GPFS for AI/HPC workloads.
- Familiarity with deep learning frameworks like PyTorch and TensorFlow.
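The NCCL familiarity listed above centers on collectives such as all-reduce. For large messages, NCCL's classic strategy is a ring all-reduce: a reduce-scatter phase followed by an all-gather phase, each taking n-1 steps for n ranks. The sketch below simulates that communication pattern in plain Python purely for intuition; it is not NCCL's implementation, and the staging of "sends" per step is an artifact of simulating simultaneous transfers in one process.

```python
# Hedged sketch: simulate ring all-reduce (sum) across n simulated ranks.
# Each buffer is split into n chunks; n-1 reduce-scatter steps accumulate
# partial sums around the ring, then n-1 all-gather steps circulate the
# finished chunks. This mirrors the pattern NCCL uses for large messages.

def ring_allreduce(rank_buffers):
    n = len(rank_buffers)
    length = len(rank_buffers[0])
    assert length % n == 0, "for simplicity, buffer length must divide evenly"
    chunk = length // n
    bufs = [list(b) for b in rank_buffers]  # copy; one buffer per rank

    def sl(i):  # slice covering chunk i
        return slice(i * chunk, (i + 1) * chunk)

    # Reduce-scatter: after n-1 steps, rank r holds the fully reduced
    # chunk (r + 1) % n. Sends are staged so all ranks "transmit" at once.
    for step in range(n - 1):
        sends = [(r, (r - step) % n, bufs[r][sl((r - step) % n)]) for r in range(n)]
        for r, idx, data in sends:
            dst = (r + 1) % n
            bufs[dst][sl(idx)] = [a + b for a, b in zip(bufs[dst][sl(idx)], data)]

    # All-gather: circulate each completed chunk around the ring.
    for step in range(n - 1):
        sends = [(r, (r + 1 - step) % n, bufs[r][sl((r + 1 - step) % n)]) for r in range(n)]
        for r, idx, data in sends:
            bufs[(r + 1) % n][sl(idx)] = data

    return bufs

print(ring_allreduce([[1, 2], [3, 4]]))  # → [[4, 6], [4, 6]]
```

The point of the ring topology is that each rank sends and receives only 2(n-1)/n of the buffer in total, so bandwidth per link stays roughly constant as the cluster grows; that property is why interview discussions of NCCL often start here.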
COMPENSATION
Your base salary will be determined based on your location, experience, and the pay of employees in similar positions. The base salary range is 152,000 USD - 241,500 USD for Level 3, and 184,000 USD - 287,500 USD for Level 4. You will also be eligible for equity and benefits.
Applications for this job will be accepted at least until April 28, 2026. This posting is for an existing vacancy.
NVIDIA uses AI tools in its recruiting processes.
NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.
Tips for Finding AI ML Engineering Jobs at NVIDIA
Tailor your portfolio to NVIDIA's stack
NVIDIA's AI ML Engineering roles consistently require hands-on experience with CUDA, TensorRT, and distributed training frameworks. Document projects involving GPU optimization or large model inference before applying, not after you land an interview.
Target teams building production AI systems
NVIDIA hires ML engineers into distinct verticals: autonomous vehicles, cloud AI, and developer tools. Applying to a team whose product aligns with your domain significantly improves your chances of clearing the technical screen.
Prepare for the specialty occupation standard early
USCIS requires H-1B petitions to demonstrate the role qualifies as a specialty occupation. For AI ML Engineering, that means your degree field and job duties need to align precisely. Gather transcripts and any graduate research documentation before your employer files.
Understand NVIDIA's internal immigration timeline
Large technology employers typically begin H-1B cap-subject filings in March for an October start date. If you're interviewing in Q4 or Q1, factor this window into your offer negotiation so your start date aligns with USCIS processing.
Use Migrate Mate to filter open roles by visa type
NVIDIA posts AI ML Engineering positions across multiple teams simultaneously. Use Migrate Mate to filter live openings specifically by visa sponsorship type, so you apply to roles where your visa category is already confirmed as supported.
Frequently Asked Questions
Does NVIDIA sponsor H-1B visas for AI ML Engineers?
Yes, NVIDIA sponsors H-1B visas for AI ML Engineering roles. For cap-subject candidates, NVIDIA files petitions in the annual H-1B lottery window, which opens in March. If you're already in H-1B status with another employer, NVIDIA can file a transfer petition outside the lottery, which avoids the wait.
Which visa types does NVIDIA sponsor for AI ML Engineering roles?
NVIDIA sponsors H-1B visas for most international candidates in AI ML Engineering. Australian citizens can pursue the E-3 visa instead, which has no annual lottery and is generally faster to obtain. For candidates on a longer-term path, NVIDIA also supports EB-2 and EB-3 Green Card sponsorship once you're established in the role.
How do I apply for AI ML Engineering jobs at NVIDIA?
Applications go through NVIDIA's careers portal. Most AI ML Engineering roles require a technical screen covering GPU programming, model optimization, or systems design, followed by multiple rounds of interviews. Migrate Mate aggregates NVIDIA's open AI ML Engineering positions filtered by visa sponsorship type, which makes it easier to identify the right roles before applying directly on NVIDIA's site.
What qualifications does NVIDIA look for in AI ML Engineering candidates?
NVIDIA typically expects a bachelor's or master's degree in computer science, electrical engineering, or a closely related field. Practical experience with CUDA, large-scale model training, and inference optimization carries significant weight. For H-1B purposes, your degree field needs to align with the specific role, so a degree in a tangential discipline may require additional documentation showing equivalency.
How long does the visa sponsorship process take when joining NVIDIA?
Timeline depends on your visa category. E-3 consular processing typically takes two to six weeks once your employer files the Labor Condition Application with DOL. H-1B transfers for candidates already in status can take two to four months under standard USCIS processing. Cap-subject H-1B candidates must wait for the October 1 fiscal year start date, meaning an offer signed in spring may not result in a start date until fall.