Liquid AI H-1B Visa Sponsorship Jobs USA
Liquid AI sponsors H-1B visas for roles in artificial intelligence research and software engineering, making it a relevant target for international candidates in cutting-edge AI fields. As an emerging AI company, it sponsors selectively, prioritizing highly specialized technical talent.
See All Liquid AI Jobs
Overview
Showing 5 of 53+ Liquid AI H-1B visa sponsorship jobs in the USA


See all 53+ Liquid AI H-1B visa sponsorship jobs in the USA
Sign up for free to unlock all listings, filter by visa type, and get alerts for new Liquid AI H-1B visa sponsorship roles in the USA.
Get Access To All Jobs
About Liquid AI
Spun out of MIT CSAIL, we build general-purpose AI systems that run efficiently across deployment targets, from data center accelerators to on-device hardware, ensuring low latency, minimal memory usage, privacy, and reliability. We partner with enterprises across consumer electronics, automotive, life sciences, and financial services. We are scaling rapidly and need exceptional people to help us get there.
The Opportunity
This is a rare chance to sit at the intersection of frontier vision-language models and real-world deployment. You'll own applied post-training work for VLMs end-to-end for some of the world's largest enterprises, while still contributing directly to Liquid's core multimodal model development.
Unlike most roles that force a trade-off between customer impact and foundational work, this role gives you both: deep ownership over how vision-language models are adapted, evaluated, and shipped, and a direct line into the evolution of Liquid's multimodal post-training stack.
If you care about visual understanding, data quality, evaluation, and making VLMs actually work in production, this is a chance to shape how applied multimodal AI is done at a foundation model company.
What We're Looking For
We need someone who:
- Takes ownership: Owns VLM post-training projects end-to-end, from customer requirements through delivery and evaluation.
- Thinks end-to-end: Can reason across visual data curation, training, alignment, and evaluation as a single system.
- Is pragmatic: Optimizes for model quality and customer outcomes over publications or theory.
- Communicates clearly: Can translate between customer needs and internal technical teams, and push back when needed.
The Work
- Act as the technical owner for enterprise customer VLM post-training engagements.
- Translate customer requirements into concrete multimodal post-training specifications and workflows.
- Design and execute visual data generation, filtering, and quality assessment processes, including image-text pair curation, annotation pipelines, and synthetic data generation for visual tasks.
- Run supervised fine-tuning, preference alignment, and reinforcement learning workflows for vision-language models.
- Design task-specific evaluations for visual understanding, grounding, OCR, document parsing, and other multimodal capabilities. Interpret results and feed learnings back into core post-training pipelines.
Desired Experience
Must-have:
- Hands-on experience with data generation and evaluation for VLM or multimodal post-training.
- Experience training or fine-tuning vision-language models using SFT, preference alignment, and/or RL.
- Strong intuition for visual data quality, annotation design, and multimodal evaluation.
- Familiarity with vision encoders, image-text architectures, and how visual representations interact with language model backbones.
Nice-to-have:
- Experience with visual grounding, document understanding, OCR, or video understanding tasks.
- Experience contributing to shared or general-purpose multimodal post-training infrastructure.
- Prior exposure to customer-facing or applied ML delivery environments.
- Familiarity with alignment or RL techniques beyond basic supervised fine-tuning in the multimodal setting.
What Success Looks Like (Year One)
- Independently owns and delivers enterprise VLM post-training projects with minimal oversight.
- Is trusted by customers as the technical owner, demonstrating strong judgment and delivery quality on multimodal workloads.
- Has made durable contributions to Liquid's general-purpose multimodal post-training pipelines by feeding applied learnings back into baseline model development.
What We Offer
- Real ML work: You will fine-tune vision-language models, generate multimodal data, and ship solutions, not just configure API calls. Your work feeds directly back into our core model development.
- Compensation: Competitive base salary with equity in a unicorn-stage company.
- Health: We pay 100% of medical, dental, and vision premiums for employees and dependents.
- Financial: 401(k) matching up to 4% of base pay.
- Time Off: Unlimited PTO plus company-wide Refill Days throughout the year.

Job Roles at Liquid AI
How to Get H-1B Visa Sponsorship at Liquid AI
Target roles that align with Liquid AI's core research
Liquid AI focuses on foundational AI model development. H-1B sponsorship is most common for roles in machine learning research, systems engineering, and applied AI: positions where specialized technical depth is genuinely hard to fill domestically.
Confirm sponsorship before applying
Not every open role at a technology company comes with H-1B sponsorship. Check each job listing carefully for explicit sponsorship language, or use Migrate Mate to filter for verified H-1B sponsors and avoid wasting applications on roles that won't support your visa.
Highlight research credentials and publications
Liquid AI competes for talent at the frontier of AI. Candidates with academic publications, conference contributions, or open-source AI work stand out; these credentials also strengthen the specialty occupation argument in your H-1B petition.
Understand the H-1B petition timeline early
H-1B sponsorship at any technology company requires lead time. Cap-subject petitions are filed in April for an October start. Raise your visa status early in conversations so Liquid AI's legal team can plan the petition timeline around your situation.
Ask directly about their immigration support process
Smaller and emerging AI companies vary in how structured their H-1B processes are. During interviews or offer negotiations, ask whether they work with an immigration attorney and what the typical sponsorship timeline looks like for new hires.
Demonstrate long-term value to the team
H-1B sponsorship is a multi-year commitment for an employer. Framing your skills as core to Liquid AI's research roadmap, rather than just an immediate hire, reassures hiring managers that the sponsorship investment is worthwhile for a specialized technology role.
Liquid AI is hiring across the US. Find your role.
Find Liquid AI Jobs
See all 53+ Liquid AI jobs
Sign up for free to unlock all listings, filter by visa type, and get alerts for new Liquid AI roles.
Get Access To All Jobs
Frequently Asked Questions
Does Liquid AI sponsor H-1B visas?
Yes, Liquid AI sponsors H-1B visas. As a technology company focused on AI research and development, Liquid AI has sponsored H-1B petitions for specialized roles where they need international talent with expertise not readily available in the domestic labor market.
Which roles at Liquid AI typically receive H-1B sponsorship?
H-1B sponsorship at Liquid AI is concentrated in technical departments, particularly machine learning research, AI systems engineering, and software development. These roles require degree-level expertise in a specific field, which is the core requirement for a qualifying specialty occupation under H-1B regulations.
How do I navigate the H-1B application process at Liquid AI?
Once you receive and accept an offer, Liquid AI works with an immigration attorney to file your H-1B petition. The employer handles Labor Condition Application filing with the Department of Labor, followed by the I-129 petition with USCIS. Your role is to provide supporting documents including degree credentials and employment records.
How do I find H-1B-sponsored job openings at Liquid AI?
Migrate Mate lists verified H-1B sponsors including companies in the AI and technology space, letting you filter specifically for employers with confirmed sponsorship history. This saves significant time compared to scanning general job boards where sponsorship eligibility is rarely clear upfront.
What timeline should I expect for H-1B sponsorship at Liquid AI?
For cap-subject H-1B petitions, the annual registration window opens in March, with petitions for selected registrations filed beginning April 1 and employment starting October 1 at the earliest. If you are already in valid H-1B status with another employer, a transfer to Liquid AI can happen more quickly outside the cap cycle.
What is the prevailing wage for H-1B jobs at Liquid AI?
H-1B employers must pay at least the prevailing wage, which is determined when they file the Labor Condition Application with the Department of Labor. The rate is based on the role, location, and experience level, and ensures international hires are paid comparably to U.S. workers in the same position. You can look up prevailing wage rates for any occupation and location using the DOL's OFLC Wage Search tool.
See which Liquid AI roles are open and visa-eligible right now.
Search Liquid AI Jobs