Data Engineer Jobs at Liquid AI with Visa Sponsorship
Liquid AI hires Data Engineers to build and maintain the data infrastructure powering its AI research and model development. The company sponsors work visas for engineering roles, making it a realistic target if you're on F-1 OPT, CPT, or an H-1B and want to work at the frontier of AI.
See All Data Engineer Jobs at Liquid AI
Overview
Showing 5 of 25+ Data Engineer jobs at Liquid AI


See all 25+ Data Engineer Jobs at Liquid AI
Sign up for free to unlock all listings, filter by visa type, and get alerts for new Data Engineer Jobs at Liquid AI.
Get Access To All Jobs
ABOUT LIQUID AI
Spun out of MIT CSAIL, we build general-purpose AI systems that run efficiently across deployment targets, from data center accelerators to on-device hardware, ensuring low latency, minimal memory usage, privacy, and reliability. We partner with enterprises across consumer electronics, automotive, life sciences, and financial services. We are scaling rapidly and need exceptional people to help us get there.
THE OPPORTUNITY
Our Audio team is building frontier speech-language models that handle STT, TTS, and speech-to-speech in a single architecture. This role sits at the center of applied audio model development, working directly with the technical lead to ship production systems that run on-device under real-time constraints. You will own critical workstreams across data pipelines, evaluation systems, and customer deployments. If you want high ownership on rare technical problems in a small, elite team where your code ships, this is the role.
WHAT WE'RE LOOKING FOR
We need someone who:
- Builds first, theorizes later: You ship working systems, not just notebooks. Production-grade code is your default, not a stretch goal.
- Owns outcomes end-to-end: From data pipelines to customer deployments, you take responsibility for the full stack without waiting for someone else to handle the hard parts.
- Thrives under constraints: On-device, low-latency, memory-limited systems excite you. You see constraints as design parameters, not blockers.
- Ramps quickly on new territory: Gaps in specific subdomains are fine if you close them fast. You seek out feedback and stay focused on what moves the needle.
THE WORK
- Build and scale data pipelines for audio model training, including preprocessing, augmentation, and quality filtering at scale
- Design, implement, and maintain evaluation systems that measure multimodal performance across internal and public benchmarks
- Fine-tune and adapt audio models for customer-specific use cases, owning delivery from requirements through deployment
- Contribute production code to the core audio repository, collaborating with infrastructure and research teams
- Support experimentation under real hardware constraints, shifting between customer work and core development as priorities evolve
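To give candidates a concrete sense of the first bullet above, here is a minimal, illustrative sketch of an audio quality-filtering and augmentation step. All names, thresholds, and structure are hypothetical assumptions for illustration, not Liquid AI's actual stack; a real pipeline would operate on waveform tensors via a library such as torchaudio.

```python
# Hypothetical sketch of one audio data-pipeline stage: quality filtering
# (drop clips that are too short, too long, or effectively silent) followed
# by a simple gain augmentation. Thresholds and names are illustrative only.

from dataclasses import dataclass

@dataclass
class Clip:
    clip_id: str
    duration_s: float      # clip length in seconds
    samples: list[float]   # mono waveform, normalized to [-1.0, 1.0]

def passes_quality_filter(clip: Clip, min_s: float = 1.0, max_s: float = 30.0) -> bool:
    """Reject clips outside the duration window or with near-silent audio."""
    if not (min_s <= clip.duration_s <= max_s):
        return False
    peak = max((abs(s) for s in clip.samples), default=0.0)
    return peak > 0.01  # assumed silence threshold

def augment_gain(clip: Clip, gain: float) -> Clip:
    """Scale amplitude by `gain`, clipping back into [-1.0, 1.0]."""
    scaled = [max(-1.0, min(1.0, s * gain)) for s in clip.samples]
    return Clip(clip.clip_id, clip.duration_s, scaled)

def run_pipeline(clips: list[Clip], gain: float = 1.5) -> list[Clip]:
    """Filter out low-quality clips, then apply gain augmentation."""
    return [augment_gain(c, gain) for c in clips if passes_quality_filter(c)]
```

At production scale this logic would run as a distributed job over millions of clips, but the shape of the work (filter, transform, emit training-ready data) is the same.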
DESIRED EXPERIENCE
Must-have:
- Strong programming fundamentals with demonstrated ability to write clean, maintainable, production-grade code
- Experience building and shipping production ML systems beyond model training (data pipelines, evals, serving infrastructure)
- Proficiency in PyTorch and familiarity with distributed training frameworks (DeepSpeed, FSDP, or similar)
- Track record of collaborating effectively in shared codebases with high engineering standards
Nice-to-have:
- Direct experience with audio/speech models (ASR, TTS, vocoders, diarization, or speech-to-speech systems)
- Experience designing and running large-scale training experiments on distributed GPU clusters
- Open-source contributions that demonstrate code quality and engineering judgment
WHAT SUCCESS LOOKS LIKE (YEAR ONE)
- Within 6 months, you independently deliver production-ready data pipelines or evaluation systems and own at least one customer workstream end-to-end
- Your PRs to the core audio repo are accepted without heavy rework, demonstrating strong judgment in system design
- By year end, you operate as a second pillar to the technical lead, unblocking parallel workstreams and raising overall team velocity
WHAT WE OFFER
- Rare technical problems: Work on audio-to-audio frontier systems with real ownership in a team small enough that your contributions ship directly to production.
- Compensation: Competitive base salary with equity in a unicorn-stage company
- Health: We pay 100% of medical, dental, and vision premiums for employees and dependents
- Financial: 401(k) matching up to 4% of base pay
- Time Off: Unlimited PTO plus company-wide Refill Days throughout the year

Tips for Finding Data Engineer Jobs at Liquid AI
Align your portfolio to AI data pipelines
Liquid AI builds novel AI architectures, so your projects should show experience with large-scale data ingestion, feature engineering, or ML training pipelines. Generic ETL work is less compelling than examples tied to model development workflows.
Confirm LCA filing with your recruiter early
Before accepting an offer, ask Liquid AI's recruiting team whether a Labor Condition Application is already in progress with the DOL. LCA certification must precede your H-1B petition, and delays here can push your start date back significantly.
Use Migrate Mate to filter verified sponsorship roles
Sponsorship availability varies by team and budget cycle at AI startups. Use Migrate Mate to surface Data Engineer openings at Liquid AI that are confirmed to sponsor, so you're not spending time on roles that won't support your visa.
Prepare credentials for a specialty occupation case
Data Engineer roles at AI companies typically qualify as H-1B specialty occupations, but USCIS scrutinizes whether your specific degree field aligns with the position. Have your transcripts and any equivalency evaluations ready before the petition stage.
Request cap-exempt filing if you have prior H-1B history
If you've been counted against the H-1B cap at a previous employer, Liquid AI can file your transfer petition outside the April lottery window. Clarify your prior H-1B history with HR so they can route your petition correctly from the start.
Liquid AI is hiring Data Engineers across the US. Find your role.
Find Data Engineer Jobs at Liquid AI
Frequently Asked Questions
Does Liquid AI sponsor H-1B visas for Data Engineers?
Yes, Liquid AI sponsors H-1B visas for Data Engineer roles. As an AI research company operating in a highly specialized technical space, it participates in the annual H-1B cap season with USCIS. If you already hold H-1B status with another employer, a transfer to Liquid AI can be filed at any time without waiting for the April lottery.
Which visa types are commonly used for Data Engineer roles at Liquid AI?
Liquid AI supports H-1B, F-1 OPT, and F-1 CPT for Data Engineer positions, along with TN status for Canadian and Mexican nationals in qualifying roles. F-1 students hired during their program can often start on CPT, then transition to OPT post-graduation while an H-1B petition is prepared for the next cap season.
What qualifications or experience does Liquid AI expect for Data Engineer roles?
Liquid AI looks for engineers with experience building data infrastructure at scale, particularly pipelines that feed machine learning or model training systems. Proficiency in tools like Spark, Airflow, or cloud-native data platforms is relevant. A bachelor's degree or higher in computer science, data engineering, or a closely related field is typically expected to satisfy H-1B specialty occupation requirements.
How do I apply for Data Engineer jobs at Liquid AI?
You can browse and apply for Data Engineer openings at Liquid AI directly through their careers page or through Migrate Mate, which surfaces roles with confirmed visa sponsorship so you can filter specifically for positions that match your immigration status. When applying, tailor your resume to highlight experience with ML-adjacent data systems, as Liquid AI's work centers on novel AI model development.
How do I plan my timeline if I need Liquid AI to sponsor an H-1B?
The H-1B cap lottery opens in March and work authorization begins October 1 if selected. Plan to have an offer in hand by February or early March so your employer can register you in time. If you're on OPT and your authorization extends beyond October 1, you have more flexibility. STEM OPT provides up to 24 months of additional work authorization, giving you two additional cap seasons to secure an H-1B.
See which Data Engineer openings at Liquid AI are hiring and sponsoring visas right now.
Search Data Engineer Jobs at Liquid AI