E-3 Visa AI Research Engineer Jobs
AI Research Engineer roles qualify as E-3 specialty occupations, making them a strong fit for Australian professionals seeking U.S. sponsorship. The E-3 has no lottery and no annual cap, so your timeline depends on your employer's LCA filing and your consulate appointment, not a random draw.
Overview
Showing 5 of 432+ AI Research Engineer jobs
See all 432+ AI Research Engineer jobs
Sign up for free to unlock all listings, filter by visa type, and get alerts for new AI Research Engineer roles.
Get Access To All Jobs
INTRODUCTION
NVIDIA's GPUs are at the core of modern AI infrastructure, from training large-scale models to running inference in production. That position depends on software as much as hardware, and compiler engineering is a big part of what makes it work.
We are looking for an outstanding AI Research Engineer / Applied Scientist focused on compilers and low-level optimization to join the team and develop groundbreaking technologies in machine learning compilers and AI systems. We build innovative AI compiler solutions that work together with NVIDIA's software stack to provide comprehensive acceleration for modern machine learning models.
ROLE AND RESPONSIBILITIES:
- Help trailblaze company efforts in applying AI within conventional compilation pipelines.
- Design and implement AI-based technology addressing core problems of low-level GPU programming.
- Build training pipelines for supervised fine-tuning and reinforcement learning (RL/RLHF-style or policy optimization variants).
- Define model inputs/outputs over low-level compiler representations.
- Develop evaluation frameworks to measure code quality, runtime, compile-time overhead, and correctness.
- Apply intelligent, domain-task-based prompt engineering.
- Collaborate with compiler engineers to integrate learned policies into production toolchains.
- Prototype and iterate on model architectures, prompts, and fine-tuning strategies for scheduling and allocation tasks.
- Create datasets from compiler traces, optimization passes, and target-specific performance signals.
- Apply RL techniques to optimize for downstream objectives (performance, spill reduction, instruction-level parallelism, etc.) and run rigorous experiments, ablations, and benchmarking across workloads and hardware targets.
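The RL bullet above can be sketched, under heavy simplification, as a bandit-style loop over candidate compiler configurations. Everything here is hypothetical: the flag names and runtimes are hard-coded stand-ins for real benchmark measurements, and epsilon-greedy stands in for the policy-gradient methods the role describes.

```python
import random

# Hypothetical compiler configurations and simulated runtimes (ms).
# In a real pipeline these would come from benchmarking actual builds.
CONFIGS = ["O2", "O3", "O3+unroll", "O3+vectorize"]
TRUE_RUNTIME_MS = {"O2": 12.0, "O3": 9.5, "O3+unroll": 8.1, "O3+vectorize": 10.2}

def measure(config: str) -> float:
    """Simulated benchmark run: lower runtime is better."""
    return TRUE_RUNTIME_MS[config]

def pick_best_config(trials: int = 200, epsilon: float = 0.1, seed: int = 0) -> str:
    """Epsilon-greedy bandit: explore configs, exploit the lowest mean runtime."""
    rng = random.Random(seed)
    counts = {c: 1 for c in CONFIGS}
    means = {c: measure(c) for c in CONFIGS}  # seed each arm with one run
    for _ in range(trials):
        if rng.random() < epsilon:
            config = rng.choice(CONFIGS)        # explore a random config
        else:
            config = min(means, key=means.get)  # exploit current best
        runtime = measure(config)
        counts[config] += 1
        means[config] += (runtime - means[config]) / counts[config]  # running mean
    return min(means, key=means.get)

print(pick_best_config())  # -> O3+unroll
```

A production system would replace the simulated `measure` with real compile-and-run benchmarks and optimize richer objectives such as spill counts and instruction-level parallelism.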
BASIC QUALIFICATIONS:
- M.S./Ph.D. in Computer Engineering, Computer Science, or a related technical field (or equivalent experience).
- 5+ years of experience building AI/ML systems.
- Strong software engineering skills in Python and at least one systems language (C++ preferred).
- Hands-on experience training/fine-tuning large models (Transformers, PEFT/LoRA, distributed training).
- Solid understanding of machine learning fundamentals and experimentation best practices.
- Experience with reinforcement learning (e.g., policy gradients, actor-critic, offline RL, bandit-style optimization).
- Knowledge of prompt-engineering techniques.
- Ability to work across research and engineering, from prototype to production.
PREFERRED QUALIFICATIONS:
- Distributed training/inference at scale.
- Experience working with the NVIDIA NeMo framework.
- Understanding of GPU performance and experience with benchmarking suites and performance profiling tools.
- Formal methods or static analysis familiarity for correctness guarantees.
- CUDA programming experience.
With competitive salaries and a generous benefits package, we are widely considered to be one of the technology world's most desirable employers. We have some of the most forward-thinking and hardworking people in the world working for us and, due to unprecedented growth, our elite engineering teams are rapidly expanding. If you're a creative and autonomous engineer with a real passion for technology, we want to hear from you.
Your base salary will be determined based on your location, experience, and the pay of employees in similar positions. The base salary range is 152,000 USD - 241,500 USD. You will also be eligible for equity and benefits.
Applications for this job will be accepted at least until April 26, 2026.
This posting is for an existing vacancy.
NVIDIA uses AI tools in its recruiting processes.
NVIDIA is committed to fostering a diverse work environment and proud to be an equal opportunity employer. As we highly value diversity in our current and future employees, we do not discriminate (including in our hiring and promotion practices) on the basis of race, religion, color, national origin, gender, gender expression, sexual orientation, age, marital status, veteran status, disability status or any other characteristic protected by law.

Tips for Finding E-3 Visa Sponsorship as an AI Research Engineer
Translate your research credentials for U.S. employers
Australian honours degrees and PhD programs aren't always understood by U.S. hiring managers. Frame your thesis work, published research, and conference contributions in terms of the DOL specialty occupation standard: a specific bachelor's degree in a directly related field.
Target employers with active LCA filing history
Search the DOL's Foreign Labor Certification Data Center disclosure files to verify that a company has filed LCAs for AI or machine learning roles before. Prior LCA activity signals an established process and an HR team that won't treat your sponsorship as a first-time experiment.
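Checking an employer's filing history can be scripted once you download a disclosure file. The sketch below uses a tiny hypothetical CSV slice; real disclosure files are large downloads whose column names (here assumed to be `EMPLOYER_NAME`, `JOB_TITLE`, `VISA_CLASS`) vary slightly by fiscal year, so verify the header of the file you actually download.

```python
import csv
from collections import Counter
from io import StringIO

# Hypothetical slice of a DOL LCA disclosure file for illustration only.
SAMPLE = """EMPLOYER_NAME,JOB_TITLE,VISA_CLASS
NVIDIA CORPORATION,AI RESEARCH ENGINEER,E-3 Australian
NVIDIA CORPORATION,SOFTWARE ENGINEER,H-1B
ACME ROBOTICS,MACHINE LEARNING ENGINEER,H-1B
"""

def sponsorship_history(csv_text: str, employer: str) -> Counter:
    """Count LCA filings per visa class for employers whose name matches `employer`."""
    counts = Counter()
    for row in csv.DictReader(StringIO(csv_text)):
        if employer.lower() in row["EMPLOYER_NAME"].lower():
            counts[row["VISA_CLASS"]] += 1
    return counts

print(dict(sponsorship_history(SAMPLE, "nvidia")))
# {'E-3 Australian': 1, 'H-1B': 1}
```

For a real check, read the downloaded CSV with `csv.DictReader(open(path))` instead of the inline sample; any employer with prior E-3 or H-1B rows has navigated the LCA process before.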
Flag E-3 eligibility early in the interview process
Most U.S. tech employers default to assuming H-1B when they hear 'visa sponsorship.' Clarify upfront that you're Australian and eligible for E-3, which requires no lottery and can be approved before your start date, removing the 12-month hiring uncertainty that deters sponsors.
Align your job title to DOL-recognized specialty occupations
Titles like 'AI Researcher' or 'Research Scientist' map cleanly to DOL specialty occupation categories; vague titles like 'AI Lead' or 'Innovation Engineer' can complicate LCA certification. Work with your employer to ensure the job title and description match the USCIS specialty occupation definition.
Use Migrate Mate's E-3 filing service to streamline your offer stage
Once you have an offer, use Migrate Mate's E-3 filing service to handle your LCA filing, DS-160, and consulate preparation end-to-end. This keeps your employer's legal burden low and reduces the back-and-forth that delays start dates on complex AI research roles.
Prepare for consulate-specific technical scrutiny
Consular officers at Sydney and Melbourne sometimes probe the specialty occupation nexus for AI roles, particularly when your degree is in a related field like mathematics or electrical engineering rather than computer science directly. Bring documentation linking your academic background to the specific research methods in your job offer.
AI Research Engineer jobs are hiring across the US. Find yours.
AI Research Engineer E-3 Visa: Frequently Asked Questions
Where can I find AI Research Engineer jobs with E-3 visa sponsorship?
Migrate Mate is built specifically for Australian professionals searching for E-3 sponsorship roles in the U.S. You can filter by job title and see which employers have a history of filing for E-3 or related work visas. That employer-level data saves you from applying to companies that have never navigated sponsorship before.
How much does it cost to get an E-3 visa?
Migrate Mate's E-3 filing service covers the entire process for $499, including the Labor Condition Application, visa document preparation, and consulate appointment guidance. Traditional immigration lawyers charge $2,000–$5,000+ for the same work. The E-3 has less paperwork than most work visas, so paying thousands for legal help is usually unnecessary.
Does an AI Research Engineer role qualify as a specialty occupation for the E-3?
Yes. AI Research Engineer roles require at least a bachelor's degree in computer science, machine learning, electrical engineering, or a closely related field, which meets the USCIS specialty occupation standard. The key is that the job description must show the degree is a prerequisite, not just preferred. Roles that accept 'any technical degree' can run into LCA complications, so the job title and duties need to be specific.
How does the E-3 visa compare to the H-1B for AI Research Engineer roles?
For Australian nationals, the E-3 is significantly more practical than the H-1B for this role. There's no annual lottery, no cap, and no waiting until October 1 to start. An employer can file your LCA with the DOL, receive certification, and have you in a consulate appointment within weeks of signing your offer. H-1B requires winning a random lottery draw, then waiting up to six months before employment can begin.
Can I switch employers or projects on an E-3 as an AI Research Engineer?
You can change employers, but your new employer must file a fresh LCA before you begin work with them. There's no portability provision like some other visa categories. If you're moving between AI research teams within the same company, a new LCA is generally not required unless your role, location, or wage level changes materially. Plan for a gap of two to four weeks between offer acceptance and cleared LCA certification.
See which AI Research Engineer employers are hiring and sponsoring visas right now.
Search AI Research Engineer Jobs