AI ML Engineer Jobs at NVIDIA with Visa Sponsorship
NVIDIA's AI ML Engineer roles sit at the intersection of GPU architecture, large-scale model training, and applied research. The company has a consistent track record of sponsoring work visas for this function, making it a realistic target for international engineers with strong ML credentials.
Overview
Showing 5 of 35+ AI ML Engineer jobs at NVIDIA


See all 35+ AI ML Engineer Jobs at NVIDIA
Sign up for free to unlock all listings, filter by visa type, and get alerts for new AI ML Engineer Jobs at NVIDIA.
INTRODUCTION
NVIDIA has continuously reinvented itself over two decades. Our invention of the GPU in 1999 sparked the growth of the PC gaming market, redefined modern computer graphics, and revolutionized parallel computing. More recently, GPU deep learning ignited modern AI — the next era of computing. NVIDIA is a “learning machine” that constantly evolves by adapting to new opportunities that are hard to solve, that only we can tackle, and that matter to the world. This is our life’s work, to amplify human imagination and intelligence. Make the choice to join us today!
ROLE AND RESPONSIBILITIES
As a member of the GPU AI/HPC Infrastructure team, you will provide leadership in the design and implementation of groundbreaking GPU compute clusters that run demanding deep learning, high performance computing, and computationally intensive workloads. We seek a technical leader to identify architectural changes and/or completely new approaches for our GPU compute clusters. As an expert, you will help us with the strategic challenges we encounter, including: compute, networking, and storage design for large-scale, high-performance workloads; effective resource utilization in a heterogeneous compute environment; evolving our private/public cloud strategy; and capacity modeling and growth planning across our global computing environment.
What you'll be doing:
- Provide leadership and strategic guidance on the management of large-scale HPC systems including the deployment of compute, networking, and storage.
- Develop and improve our ecosystem around GPU-accelerated computing including developing scalable automation solutions.
- Build and maintain AI and ML heterogeneous clusters on-premises and in the cloud.
- Create and cultivate customer and cross-team relationships to reliably sustain the clusters and meet evolving user needs.
- Support our researchers to run their workloads including performance analysis and optimizations.
- Conduct root cause analysis and recommend corrective action; proactively find and fix issues before they impact users.
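The proactive fault-finding described above usually means automated health checks that drain bad nodes before jobs land on them. The sketch below is a minimal, hypothetical example: the metric names, thresholds, and sample values are illustrative stand-ins for what a real DCGM or nvidia-smi scraper would produce, not NVIDIA's actual tooling.

```python
# Hypothetical proactive node health check. Metric names and thresholds
# are illustrative assumptions, not real NVIDIA infrastructure values.

ECC_ERROR_LIMIT = 0   # any uncorrectable ECC error flags the node
MIN_FREE_MEM_GB = 4   # below this, workloads tend to OOM

def unhealthy_nodes(metrics):
    """Return (node, reason) pairs for nodes that should be drained."""
    flagged = []
    for node, m in metrics.items():
        if m["ecc_uncorrectable"] > ECC_ERROR_LIMIT:
            flagged.append((node, "uncorrectable ECC errors"))
        elif m["free_mem_gb"] < MIN_FREE_MEM_GB:
            flagged.append((node, "low free GPU memory"))
    return flagged

# Sample data standing in for values scraped from a GPU telemetry agent.
sample = {
    "gpu-node-01": {"ecc_uncorrectable": 0, "free_mem_gb": 70.0},
    "gpu-node-02": {"ecc_uncorrectable": 3, "free_mem_gb": 75.0},
    "gpu-node-03": {"ecc_uncorrectable": 0, "free_mem_gb": 1.5},
}

for node, reason in unhealthy_nodes(sample):
    print(f"{node}: {reason}")
```

A production version would feed flagged nodes into the scheduler's drain command rather than printing them.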
BASIC QUALIFICATIONS
What we need to see:
- Bachelor’s degree in Computer Science, Electrical Engineering or related field or equivalent experience.
- 5+ years of experience designing and operating large-scale compute infrastructure.
- Experience with AI/HPC advanced job schedulers, such as Slurm, K8s, PBS, RTDA or LSF.
- Proficient in administering CentOS/RHEL and/or Ubuntu Linux distributions.
- Solid understanding of cluster configuration management tools such as Ansible, Puppet, Salt.
- In-depth understanding of container technologies like Docker, Singularity, Podman, Shifter, Charliecloud.
- Proficiency in Python programming and bash scripting.
- Applied experience with AI/HPC workflows that use MPI.
- Experience analyzing and tuning performance for a variety of AI/HPC workloads.
- Passion for continual learning and staying ahead of emerging technologies and effective approaches in the HPC and AI/ML infrastructure fields.
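The list above pairs scheduler experience (Slurm and friends) with Python proficiency; in practice that often looks like small scripts that summarize cluster state. Here is a hedged sketch that computes GPU allocation from `sinfo`-style text. The sample output and its field layout are assumptions for illustration, not a real cluster's format.

```python
# Illustrative pairing of the Python + scheduler skills above: summarize
# GPU allocation from sinfo-style output. The sample text and the
# "<nodelist> <state> <gpus_total> <gpus_alloc>" layout are assumptions.

SAMPLE_SINFO = """\
gpu-a100-[01-04] alloc 8 8
gpu-a100-[05-08] alloc 8 6
gpu-h100-[01-02] idle  8 0
"""

def gpu_utilization(sinfo_text):
    """Return (allocated, total) GPU counts summed across node groups."""
    total = allocated = 0
    for line in sinfo_text.strip().splitlines():
        _nodes, _state, gpus_total, gpus_alloc = line.split()
        total += int(gpus_total)
        allocated += int(gpus_alloc)
    return allocated, total

alloc, total = gpu_utilization(SAMPLE_SINFO)
print(f"{alloc}/{total} GPUs allocated ({alloc / total:.0%})")
```

A real version would shell out to `sinfo` with an explicit output format flag instead of hard-coding text.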
PREFERRED QUALIFICATIONS
Ways to stand out from the crowd:
- Background with NVIDIA GPUs, CUDA Programming, NCCL and MLPerf benchmarking.
- Experience with Machine Learning and Deep Learning concepts, algorithms and models.
- Familiarity with InfiniBand with IPoIB and RDMA.
- Understanding of fast, distributed storage systems like Lustre and GPFS for AI/HPC workloads.
- Familiarity with deep learning frameworks like PyTorch and TensorFlow.
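The NCCL and InfiniBand items above come down to reasoning about interconnect traffic. A standard back-of-envelope model: a ring all-reduce moves roughly 2 × (N − 1)/N × S bytes per GPU for an S-byte buffer across N GPUs. The link speed below is an assumed figure for illustration, not a measured one.

```python
# Back-of-envelope estimate related to the NCCL/InfiniBand items above.
# Ring all-reduce per-GPU traffic: 2 * (N - 1) / N * S bytes.

def ring_allreduce_bytes(buffer_bytes, num_gpus):
    """Approximate per-GPU traffic for one ring all-reduce."""
    return 2 * (num_gpus - 1) / num_gpus * buffer_bytes

# Example: 1 GiB of gradients reduced across 8 GPUs.
traffic = ring_allreduce_bytes(1 * 2**30, 8)
print(f"{traffic / 2**30:.2f} GiB moved per GPU")  # 1.75 GiB

# Time lower bound on an assumed 50 GB/s link (ignores latency/overlap):
seconds = traffic / 50e9
print(f"~{seconds * 1e3:.1f} ms per all-reduce")
```

Estimates like this are how capacity-modeling conversations about fabric bandwidth usually start, before any benchmarking with NCCL tests.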
COMPENSATION
Your base salary will be determined based on your location, experience, and the pay of employees in similar positions. The base salary range is 152,000 USD - 241,500 USD for Level 3, and 184,000 USD - 287,500 USD for Level 4. You will also be eligible for equity and benefits.
Applications for this job will be accepted at least until April 28, 2026. This posting is for an existing vacancy.
NVIDIA uses AI tools in its recruiting processes.
NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.

Tips for Finding AI ML Engineer Jobs at NVIDIA
Align your portfolio with NVIDIA's research priorities
NVIDIA's ML engineering roles heavily favor experience with CUDA, transformer architectures, and distributed training at scale. Frame your project portfolio around these specifically before applying, not general ML work.
Target roles within NVIDIA's applied AI teams
NVIDIA hires AI ML Engineers across both research and product-facing teams. Applied teams building inference infrastructure or model optimization tooling tend to move faster through offer and filing stages than pure research tracks.
Prepare your credentials for a specialty occupation determination
USCIS evaluates whether your role qualifies as a specialty occupation under H-1B. For AI ML Engineer positions, document how your degree field (computer science, electrical engineering, or applied mathematics, for example) directly maps to the job duties listed in your offer.
Understand NVIDIA's E-3 pathway if you hold Australian citizenship
NVIDIA sponsors E-3 visas, which have no lottery and a faster timeline than H-1B. If you're an Australian citizen, flag this during your interview process so the recruiting team routes you through the correct filing pathway from day one.
Confirm the filing timeline against your current status expiry
If you're on OPT or a 60-day grace period, map your status end date against USCIS H-1B premium processing timelines, currently 15 business days after receipt. NVIDIA's legal team will need lead time, so raise your deadline at the offer stage, not after signing.
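Mapping a status end date against the 15-business-day premium processing window mentioned above is simple date arithmetic. The sketch below uses hypothetical dates and, for simplicity, skips only weekends, not federal holidays, so treat the result as a rough lower bound on your buffer.

```python
# Rough timeline planner for the advice above: 15 business days of
# premium processing after USCIS receipt vs. a status end date.
# Dates are hypothetical; federal holidays are ignored for simplicity.
from datetime import date, timedelta

def add_business_days(start, days):
    """Advance `start` by `days` weekdays (Mon-Fri), skipping weekends."""
    current = start
    while days > 0:
        current += timedelta(days=1)
        if current.weekday() < 5:  # 0-4 are Mon-Fri
            days -= 1
    return current

receipt_date = date(2026, 5, 4)   # hypothetical USCIS receipt date
status_end = date(2026, 6, 15)    # hypothetical OPT end date
decision_by = add_business_days(receipt_date, 15)

print(f"Premium processing decision expected by {decision_by}")
print(f"Buffer before status expiry: {(status_end - decision_by).days} days")
```

If the buffer comes out thin or negative, that is the number to raise with the recruiting and legal teams at the offer stage.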
Use Migrate Mate to identify open AI ML Engineer roles at NVIDIA
Searching broadly across job boards misses roles that are actively open to visa sponsorship. Use Migrate Mate to filter specifically for AI ML Engineer positions at NVIDIA where H-1B or E-3 sponsorship is confirmed.
NVIDIA is hiring AI ML Engineers across the US. Find yours.
Frequently Asked Questions
Does NVIDIA sponsor H-1B visas for AI ML Engineers?
Yes, NVIDIA sponsors H-1B visas for AI ML Engineers. The company has a well-established immigration process and works with experienced legal counsel to handle H-1B filings. If you receive an offer, NVIDIA's recruiting team will walk you through the sponsorship timeline. Premium processing is available through USCIS if your status deadline is tight.
Which visa types does NVIDIA commonly sponsor for AI ML Engineer roles?
NVIDIA sponsors H-1B visas for most international AI ML Engineer hires. Australian citizens are eligible for the E-3 visa, which skips the lottery and typically processes faster. For engineers on longer-term pathways, NVIDIA also sponsors EB-2 and EB-3 Green Card petitions, including PERM labor certification filings through the DOL.
What qualifications does NVIDIA expect for AI ML Engineer roles?
Most AI ML Engineer openings at NVIDIA require a bachelor's degree at minimum, with many senior roles expecting a master's or PhD in computer science, electrical engineering, or a closely related field. Hands-on experience with GPU computing, large-scale model training, and frameworks like PyTorch or JAX is expected. Practical experience with inference optimization or CUDA programming strengthens applications significantly.
How do I apply for AI ML Engineer jobs at NVIDIA?
You can find and apply for AI ML Engineer roles at NVIDIA directly through NVIDIA's careers portal, or browse verified open roles filtered for visa sponsorship through Migrate Mate. When applying, tailor your resume to reflect experience with distributed training, GPU optimization, or production ML systems. NVIDIA's recruiting process typically includes technical screens followed by multi-round system design and coding interviews.
How do I plan my H-1B filing timeline when targeting NVIDIA?
H-1B cap-subject filings open in March each year, with an October 1 start date. If you're on OPT, confirm your STEM OPT extension is in place to bridge the gap if needed. NVIDIA initiates the employer-side LCA filing with DOL before submitting the H-1B petition to USCIS, so communicate your status expiry to the recruiter as early as the offer stage.