ML Research Engineer Jobs for OPT Students
ML Research Engineer jobs are among the most OPT-friendly roles in tech. Most positions sit squarely within STEM-designated degree programs, making you eligible for the 24-month STEM OPT extension. Roles span industry labs, academic research centers, and AI-focused startups that are accustomed to sponsoring F-1 students.
About Liquid AI
Spun out of MIT CSAIL, we build general-purpose AI systems that run efficiently across deployment targets, from data center accelerators to on-device hardware, ensuring low latency, minimal memory usage, privacy, and reliability. We partner with enterprises across consumer electronics, automotive, life sciences, and financial services. We are scaling rapidly and need exceptional people to help us get there.
The Opportunity
Our Audio team is building frontier speech-language models that handle speech-to-text (STT), text-to-speech (TTS), and speech-to-speech in a single architecture. This role sits at the center of applied audio model development, working directly with the technical lead to ship production systems that run on-device under real-time constraints. You will own critical workstreams across data pipelines, evaluation systems, and customer deployments. If you want high ownership on rare technical problems in a small, elite team where your code ships, this is the role.
What We're Looking For
We need someone who:
- Builds first, theorizes later: You ship working systems, not just notebooks. Production-grade code is your default, not a stretch goal.
- Owns outcomes end-to-end: From data pipelines to customer deployments, you take responsibility for the full stack without waiting for someone else to handle the hard parts.
- Thrives under constraints: On-device, low-latency, memory-limited systems excite you. You see constraints as design parameters, not blockers.
- Ramps quickly on new territory: Gaps in specific subdomains are fine if you close them fast. You seek out feedback and stay focused on what moves the needle.
The Work
- Build and scale data pipelines for audio model training, including preprocessing, augmentation, and quality filtering at scale
- Design, implement, and maintain evaluation systems that measure multimodal performance across internal and public benchmarks
- Fine-tune and adapt audio models for customer-specific use cases, owning delivery from requirements through deployment
- Contribute production code to the core audio repository, collaborating with infrastructure and research teams
- Support experimentation under real hardware constraints, shifting between customer work and core development as priorities evolve
Must-Have Experience
- Strong programming fundamentals with demonstrated ability to write clean, maintainable, production-grade code
- Experience building and shipping production ML systems beyond model training (data pipelines, evals, serving infrastructure)
- Proficiency in PyTorch and familiarity with distributed training frameworks (DeepSpeed, FSDP, or similar)
- Track record of collaborating effectively in shared codebases with high engineering standards
Nice-to-have
- Direct experience with audio/speech models (ASR, TTS, vocoders, diarization, or speech-to-speech systems)
- Experience designing and running large-scale training experiments on distributed GPU clusters
- Open-source contributions that demonstrate code quality and engineering judgment
What Success Looks Like (Year One)
- Within 6 months, you independently deliver production-ready data pipelines or evaluation systems and own at least one customer workstream end-to-end
- Your PRs to the core audio repo are accepted without heavy rework, demonstrating strong judgment in system design
- By year end, you operate as a second pillar to the technical lead, unblocking parallel workstreams and raising overall team velocity
What We Offer
- Rare technical problems: Work on audio-to-audio frontier systems with real ownership in a team small enough that your contributions ship directly to production.
- Compensation: Competitive base salary with equity in a unicorn-stage company
- Health: We pay 100% of medical, dental, and vision premiums for employees and dependents
- Financial: 401(k) matching up to 4% of base pay
- Time Off: Unlimited PTO plus company-wide Refill Days throughout the year

How to Get Visa Sponsorship as an ML Research Engineer
Target STEM OPT-eligible employers first
Not every company is enrolled in E-Verify, which is required for the 24-month STEM OPT extension, so confirm enrollment before applying. Industry research labs and larger tech companies are almost always enrolled; smaller startups need to be checked individually.
Align your degree field to the role explicitly
STEM OPT eligibility depends on your degree matching your job. ML Research Engineer roles typically require Computer Science, Electrical Engineering, or Statistics. If your degree is adjacent, document how your coursework directly supports the research function.
Prioritize companies with existing H-1B sponsorship history
An employer willing to sponsor your future H-1B is far more valuable than one that isn't. Check public OFLC disclosure data to see which companies have filed H-1B petitions for research engineering roles. Past behavior is the strongest signal.
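To make the OFLC lookup concrete, here is a minimal sketch of the kind of filtering you would do over LCA disclosure records. The OFLC publishes these quarterly as spreadsheets; the column names below (EMPLOYER_NAME, JOB_TITLE) mirror the typical disclosure-file layout, but the rows are made-up sample data and the keyword list is an assumption for illustration.

```python
# Sketch: count H-1B LCA filings per employer for research-engineering
# titles. Rows here are invented sample data standing in for a parsed
# OFLC disclosure file; in practice you would load the real spreadsheet.

from collections import Counter

sample_rows = [
    {"EMPLOYER_NAME": "Acme AI Labs", "JOB_TITLE": "Machine Learning Research Engineer"},
    {"EMPLOYER_NAME": "Acme AI Labs", "JOB_TITLE": "Research Engineer, Speech"},
    {"EMPLOYER_NAME": "Globex Corp", "JOB_TITLE": "Software Engineer"},
    {"EMPLOYER_NAME": "Initech", "JOB_TITLE": "ML Research Engineer"},
]

# Keywords are a judgment call; broaden or narrow them for your search.
KEYWORDS = ("research engineer", "machine learning")

def sponsorship_counts(rows):
    """Count filings per employer whose job title matches the keywords."""
    hits = Counter()
    for row in rows:
        title = row["JOB_TITLE"].lower()
        if any(kw in title for kw in KEYWORDS):
            hits[row["EMPLOYER_NAME"]] += 1
    return hits

print(sponsorship_counts(sample_rows))
```

Employers with repeated matching filings, like "Acme AI Labs" in this sample, are the ones with demonstrated sponsorship history for your role.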
Address OPT directly in your cover letter or outreach
Many hiring managers assume international students require immediate visa sponsorship. Clarifying upfront that you have up to three years of OPT authorization, including the STEM extension, removes the biggest objection before it becomes one.
Leverage your research publications and GitHub to stand out
ML Research Engineer hiring is credential-heavy. Published papers, reproducible code repositories, and Kaggle competition results compensate for visa status concerns by making your technical contribution undeniable. Employers hire talent they cannot easily replace.
ML Research Engineer jobs are hiring across the US. Find yours.
Find ML Research Engineer Jobs
See all 342+ ML Research Engineer jobs
Sign up for free to unlock all listings, filter by visa type, and get alerts for new ML Research Engineer roles.
Get Access To All Jobs
Frequently Asked Questions
Do ML Research Engineer jobs qualify for the 24-month STEM OPT extension?
Yes, in nearly all cases. ML Research Engineer roles fall under CIP codes tied to Computer Science, Electrical Engineering, Applied Mathematics, and Statistics, all of which are STEM-designated. Your degree field must match the role, and your employer must be enrolled in E-Verify. If both conditions are met, you're eligible for the full 24-month extension on top of your initial 12-month OPT.
Where is the best place to find ML Research Engineer jobs that sponsor OPT students?
Migrate Mate is built specifically for F-1 OPT students and filters for employers actively open to hiring international candidates. General job boards mix in roles that quietly exclude visa holders, wasting application time. Migrate Mate surfaces ML Research Engineer openings from companies with demonstrated sponsorship history, so you can focus your search on realistic opportunities.
Can I work at a university research lab on OPT as an ML Research Engineer?
Yes. University and academic lab positions qualify as standard OPT employment as long as the role is directly related to your field of study and you're paid as an employee rather than receiving a stipend through a fellowship. Confirm with your DSO that the position qualifies before accepting. Academic labs are also typically E-Verify enrolled, making them fully compatible with STEM OPT extension requirements.
What happens to my OPT if my ML Research Engineer job ends unexpectedly?
You can accrue up to 90 days of aggregate unemployment during your initial 12-month OPT; the STEM OPT extension adds 60 more, for a 150-day limit across the full authorization period. During any unemployment you must report your status to your DSO and actively pursue new qualifying employment. Finding a new qualifying role quickly is critical to maintaining lawful status.
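The day-counting arithmetic is simple but easy to get wrong under stress, so here is a back-of-the-envelope sketch. The limits assumed below (90 days aggregate on initial OPT, 150 once the STEM extension is approved) are the commonly cited figures; always confirm current rules with your DSO.

```python
# Hypothetical helper: how many OPT unemployment days remain.
# Assumed limits: 90 days aggregate on initial OPT, 150 days aggregate
# with the STEM OPT extension. Verify these with your DSO.

def remaining_unemployment_days(days_used: int, on_stem_extension: bool) -> int:
    """Return unemployment days left before hitting the aggregate limit."""
    if days_used < 0:
        raise ValueError("days_used cannot be negative")
    limit = 150 if on_stem_extension else 90
    return max(limit - days_used, 0)
```

For example, 40 days of unemployment during initial OPT would leave 50 days of cushion before the 90-day limit.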
Does contract or consulting work count as valid OPT employment for an ML Research Engineer?
Yes, during your initial 12-month OPT, self-employment and contract work are permitted as long as the work is directly related to your degree field and you can document the relationship between your ML research activities and your academic training. STEM OPT is stricter: the employer you report must be E-Verify enrolled and maintain a bona fide employer-employee relationship, which generally rules out pure freelance or self-employed arrangements. Short-term consulting contracts with established companies are more straightforward to document properly.
See which ML Research Engineer employers are hiring and sponsoring visas right now.
Search ML Research Engineer Jobs