AI Data Engineer Jobs at Liquid AI with Visa Sponsorship
Liquid AI hires AI Data Engineers to build and maintain the data infrastructure behind its next-generation liquid neural network models. The company sponsors H-1B, F-1 OPT, and TN visas for technical roles, making it a realistic target if you're on a work authorization timeline.
Overview
Showing 5 of 25+ AI Data Engineer jobs at Liquid AI


See all 25+ AI Data Engineer Jobs at Liquid AI
Sign up for free to unlock all listings, filter by visa type, and get alerts for new AI Data Engineer Jobs at Liquid AI.
Get Access To All Jobs
ABOUT LIQUID AI
Spun out of MIT CSAIL, we build general-purpose AI systems that run efficiently across deployment targets, from data center accelerators to on-device hardware, ensuring low latency, minimal memory usage, privacy, and reliability. We partner with enterprises across consumer electronics, automotive, life sciences, and financial services. We are scaling rapidly and need exceptional people to help us get there.
THE OPPORTUNITY
Our Audio team is building frontier speech-language models that handle STT, TTS, and speech-to-speech in a single architecture. This role sits at the center of applied audio model development, working directly with the technical lead to ship production systems that run on-device under real-time constraints. You will own critical workstreams across data pipelines, evaluation systems, and customer deployments. If you want high ownership on rare technical problems in a small, elite team where your code ships, this is the role.
WHAT WE'RE LOOKING FOR
We need someone who:
- Builds first, theorizes later: You ship working systems, not just notebooks. Production-grade code is your default, not a stretch goal.
- Owns outcomes end-to-end: From data pipelines to customer deployments, you take responsibility for the full stack without waiting for someone else to handle the hard parts.
- Thrives under constraints: On-device, low-latency, memory-limited systems excite you. You see constraints as design parameters, not blockers.
- Ramps quickly on new territory: Gaps in specific subdomains are fine if you close them fast. You seek out feedback and stay focused on what moves the needle.
THE WORK
- Build and scale data pipelines for audio model training, including preprocessing, augmentation, and quality filtering at scale
- Design, implement, and maintain evaluation systems that measure multimodal performance across internal and public benchmarks
- Fine-tune and adapt audio models for customer-specific use cases, owning delivery from requirements through deployment
- Contribute production code to the core audio repository, collaborating with infrastructure and research teams
- Support experimentation under real hardware constraints, shifting between customer work and core development as priorities evolve
DESIRED EXPERIENCE
Must-have:
- Strong programming fundamentals with demonstrated ability to write clean, maintainable, production-grade code
- Experience building and shipping production ML systems beyond model training (data pipelines, evals, serving infrastructure)
- Proficiency in PyTorch and familiarity with distributed training frameworks (DeepSpeed, FSDP, or similar)
- Track record of collaborating effectively in shared codebases with high engineering standards
Nice-to-have:
- Direct experience with audio/speech models (ASR, TTS, vocoders, diarization, or speech-to-speech systems)
- Experience designing and running large-scale training experiments on distributed GPU clusters
- Open-source contributions that demonstrate code quality and engineering judgment
WHAT SUCCESS LOOKS LIKE (YEAR ONE)
- Within 6 months, you independently deliver production-ready data pipelines or evaluation systems and own at least one customer workstream end-to-end
- Your PRs to the core audio repo are accepted without heavy rework, demonstrating strong judgment in system design
- By year end, you operate as a second pillar to the technical lead, unblocking parallel workstreams and raising overall team velocity
WHAT WE OFFER
- Rare technical problems: Work on audio-to-audio frontier systems with real ownership in a team small enough that your contributions ship directly to production.
- Compensation: Competitive base salary with equity in a unicorn-stage company
- Health: We pay 100% of medical, dental, and vision premiums for employees and dependents
- Financial: 401(k) matching up to 4% of base pay
- Time Off: Unlimited PTO plus company-wide Refill Days throughout the year

Tips for Finding AI Data Engineer Jobs at Liquid AI
Align your portfolio to liquid neural networks
Liquid AI's core research centers on liquid neural networks and time-series modeling. Before applying, build or document projects involving sequential data pipelines, continuous-time models, or dynamic inference systems to directly match what their engineering teams are building.
Verify your OPT STEM extension eligibility early
AI Data Engineer roles at Liquid AI qualify under STEM OPT, giving F-1 graduates up to 36 months of work authorization without H-1B sponsorship. File your STEM OPT extension at least 90 days before your initial OPT expires to avoid a gap in authorization.
Target Liquid AI's research-adjacent data roles
Liquid AI's data engineering needs are tightly coupled to model training infrastructure, not just analytics pipelines. Applying to roles that mention feature engineering for neural architectures or training data curation signals that you understand the technical stack and strengthens the business case for sponsorship.
Get your specialty occupation documentation in order
For H-1B petitions, USCIS scrutinizes whether an AI Data Engineer role qualifies as a specialty occupation requiring a specific degree. Gather transcripts, degree equivalency evaluations if you studied outside the U.S., and reference letters that tie your credentials directly to the job duties.
Use Migrate Mate to find open AI Data Engineer roles at Liquid AI
Liquid AI posts AI Data Engineer openings across different hiring cycles. Use Migrate Mate to filter specifically for Liquid AI's sponsored positions so you can track new listings and apply during active hiring windows rather than chasing roles that have already closed.
Clarify TN eligibility before your offer negotiation
Canadian and Mexican nationals can pursue a TN visa for AI Data Engineer roles under the computer systems analyst category, bypassing the H-1B lottery entirely. Confirm with your prospective team at Liquid AI that the job duties map to a qualifying TN occupation before the offer is finalized.
Liquid AI is hiring AI Data Engineers across the US. Find yours.
Find AI Data Engineer Jobs at Liquid AI
Frequently Asked Questions
Does Liquid AI sponsor H-1B visas for AI Data Engineers?
Yes, Liquid AI sponsors H-1B visas for AI Data Engineer roles. Because cap-subject H-1B petitions go through an annual lottery, timing matters: Liquid AI would need to register you during the March registration window, with employment starting no earlier than October 1 if you are selected. Cap-exempt pathways are not available for standard industry hires.
Which visa types are commonly used for AI Data Engineer roles at Liquid AI?
The most common pathways for AI Data Engineers at Liquid AI are H-1B, F-1 OPT, F-1 CPT, and TN. F-1 OPT is often the entry point for recent graduates, with STEM OPT extensions providing runway while an H-1B petition is prepared. Canadian and Mexican nationals can explore the TN visa as a lottery-free alternative.
How do I apply for AI Data Engineer jobs at Liquid AI?
You can find open AI Data Engineer positions at Liquid AI through Migrate Mate, which filters specifically for visa-sponsored roles at companies like Liquid AI. When applying, tailor your resume to highlight experience with large-scale data pipelines, model training infrastructure, and any work involving neural network architectures to match Liquid AI's research-driven engineering environment.
What qualifications does Liquid AI expect for AI Data Engineer roles?
Liquid AI typically looks for candidates with a bachelor's or master's degree in computer science, data engineering, machine learning, or a closely related field. Hands-on experience with distributed data systems, Python-based ML tooling, and data pipeline orchestration is expected. Familiarity with training data workflows for deep learning models is a meaningful differentiator given Liquid AI's research focus.
How do I plan my timeline if I need H-1B sponsorship at Liquid AI?
The H-1B lottery registration opens each March for an October 1 start date, so you'd need an offer in place before the registration window closes. If you're on F-1 OPT, your work authorization can bridge the gap. USCIS premium processing, if Liquid AI elects to use it, reduces adjudication to around 15 business days after filing.
See which teams at Liquid AI are hiring AI Data Engineers and sponsoring visas right now.
Search AI Data Engineer Jobs at Liquid AI