ML Research Engineer Jobs in USA with Visa Sponsorship
ML Research Engineer roles attract strong H-1B sponsorship from AI labs, large tech companies, and research-focused startups. A master's or PhD in computer science or a related field is standard, and employers regularly sponsor both new graduates and experienced researchers. For detailed occupation requirements, see the O*NET profile.
See all 721+ ML Research Engineer jobs
Sign up for free to unlock all listings, filter by visa type, and get alerts for new ML Research Engineer roles.
Get Access To All Jobs
About Liquid AI
Spun out of MIT CSAIL, we build general-purpose AI systems that run efficiently across deployment targets, from data center accelerators to on-device hardware, ensuring low latency, minimal memory usage, privacy, and reliability. We partner with enterprises across consumer electronics, automotive, life sciences, and financial services. We are scaling rapidly and need exceptional people to help us get there.
The Opportunity
Our Audio team is building frontier speech-language models that handle STT, TTS, and speech-to-speech in a single architecture. This role sits at the center of applied audio model development, working directly with the technical lead to ship production systems that run on-device under real-time constraints. You will own critical workstreams across data pipelines, evaluation systems, and customer deployments. If you want high ownership on rare technical problems in a small, elite team where your code ships, this is the role.
What We're Looking For
We need someone who:
- Builds first, theorizes later: You ship working systems, not just notebooks. Production-grade code is your default, not a stretch goal.
- Owns outcomes end-to-end: From data pipelines to customer deployments, you take responsibility for the full stack without waiting for someone else to handle the hard parts.
- Thrives under constraints: On-device, low-latency, memory-limited systems excite you. You see constraints as design parameters, not blockers.
- Ramps quickly on new territory: Gaps in specific subdomains are fine if you close them fast. You seek out feedback and stay focused on what moves the needle.
The Work
- Build and scale data pipelines for audio model training, including preprocessing, augmentation, and quality filtering at scale
- Design, implement, and maintain evaluation systems that measure multimodal performance across internal and public benchmarks
- Fine-tune and adapt audio models for customer-specific use cases, owning delivery from requirements through deployment
- Contribute production code to the core audio repository, collaborating with infrastructure and research teams
- Support experimentation under real hardware constraints, shifting between customer work and core development as priorities evolve
Desired Experience
Must-have
- Strong programming fundamentals with demonstrated ability to write clean, maintainable, production-grade code
- Experience building and shipping production ML systems beyond model training (data pipelines, evals, serving infrastructure)
- Proficiency in PyTorch and familiarity with distributed training frameworks (DeepSpeed, FSDP, or similar)
- Track record of collaborating effectively in shared codebases with high engineering standards
Nice-to-have
- Direct experience with audio/speech models (ASR, TTS, vocoders, diarization, or speech-to-speech systems)
- Experience designing and running large-scale training experiments on distributed GPU clusters
- Open-source contributions that demonstrate code quality and engineering judgment
What Success Looks Like (Year One)
- Within 6 months, you independently deliver production-ready data pipelines or evaluation systems and own at least one customer workstream end-to-end
- Your PRs to the core audio repo are accepted without heavy rework, demonstrating strong judgment in system design
- By year end, you operate as a second pillar to the technical lead, unblocking parallel workstreams and raising overall team velocity
What We Offer
- Rare technical problems: Work on audio-to-audio frontier systems with real ownership in a team small enough that your contributions ship directly to production.
- Compensation: Competitive base salary with equity in a unicorn-stage company
- Health: We pay 100% of medical, dental, and vision premiums for employees and dependents
- Financial: 401(k) matching up to 4% of base pay
- Time Off: Unlimited PTO plus company-wide Refill Days throughout the year

How to Get Visa Sponsorship as an ML Research Engineer
Target employers with established research divisions
AI labs and large tech companies with dedicated research teams sponsor H-1B petitions far more consistently than startups without legal infrastructure. Look for organizations with a track record of LCA filings under ML-related job titles.
Lead with publications and research output
Sponsoring employers want evidence of original contribution. A strong publication record at NeurIPS, ICML, or ICLR signals exactly the kind of specialized expertise USCIS expects to see supporting an H-1B specialty occupation claim.
Understand how your degree field maps to the role
USCIS requires a direct relationship between your degree field and the position. Computer science, electrical engineering, statistics, and applied mathematics are the most defensible degree fields for ML Research Engineer sponsorship petitions.
Ask about premium processing during your offer negotiation
USCIS premium processing reduces the H-1B petition review window to 15 business days. Research-focused employers routinely pay this fee, and it is a reasonable expectation to raise during the offer discussion stage.
Employers across the US are hiring ML Research Engineers. Find yours.
Frequently Asked Questions
Do ML Research Engineer roles qualify as H-1B specialty occupations?
Yes. ML Research Engineering consistently qualifies as a specialty occupation because the role requires at minimum a bachelor's degree in a specific technical field, such as computer science, statistics, or electrical engineering. Employers at AI labs and major tech companies have a strong track record of approved H-1B petitions for this title, and the position's theoretical depth reinforces the specialty occupation argument.
Is a PhD required to get H-1B sponsorship as an ML Research Engineer?
Not always, but it depends heavily on the employer and the seniority of the role. Research-track positions at dedicated AI labs frequently require a PhD as a minimum qualification, which actually strengthens the specialty occupation case for H-1B purposes. Applied or production-focused ML Research Engineer roles at larger tech companies may sponsor candidates with a strong master's degree and relevant publications or industry experience.
How does the H-1B lottery affect ML Research Engineers specifically?
ML Research Engineers face the same general-category lottery odds as most other H-1B applicants, with a selection rate around 25% in recent years. However, candidates holding a master's degree or PhD from a U.S. institution also qualify for the advanced-degree exemption pool, which has historically offered slightly better odds. Employers at universities and nonprofit research institutions are cap-exempt, meaning those roles bypass the lottery entirely.
Can an ML Research Engineer self-petition for a green card without employer sponsorship?
Yes, the EB-2 National Interest Waiver is the most viable self-petition path for ML researchers. USCIS has granted NIW approval to researchers demonstrating that their work has national importance and that they are well-positioned to advance it. A strong publication record, citations, and evidence of impact on the field significantly improve approval odds. Some exceptional researchers also qualify for the EB-1A extraordinary ability category.
Where can I find ML Research Engineer jobs that offer visa sponsorship?
Migrate Mate is built specifically for international candidates and filters for roles where employers are open to sponsoring work visas. Browsing ML Research Engineer listings on Migrate Mate lets you focus on employers with verified sponsorship histories rather than sorting through postings that exclude international applicants. It is the most direct way to find research roles aligned with your visa situation.
What is the prevailing wage requirement for sponsored ML Research Engineer jobs?
U.S. employers sponsoring a visa must pay at least the prevailing wage, which is what workers in the same role, area, and experience level typically earn. The Department of Labor sets this rate to make sure companies aren't hiring foreign workers simply because they'd accept lower pay than a U.S. worker. It varies by job title, location, and experience. You can look up current prevailing wage rates for any occupation and location using the OFLC Wage Search page.
See which ML Research Engineer employers are hiring and sponsoring visas right now.
Search ML Research Engineer Jobs