VP Engineering Jobs in USA with Visa Sponsorship
VP Engineering roles attract strong H-1B and O-1A sponsorship from U.S. tech companies, though the executive seniority raises nonimmigrant intent scrutiny at consulates. Most employers file through specialized immigration counsel given the complexity and compensation involved. For detailed occupation requirements, see the O*NET profile.
About Fluidstack
At Fluidstack, we’re building the infrastructure for abundant intelligence. We partner with top AI labs, governments, and enterprises, including Mistral, Poolside, Black Forest Labs, Meta, and more, to unlock compute at the speed of light. We’re working with urgency to make AGI a reality. As such, our team is highly motivated and committed to delivering world-class infrastructure. We treat our customers’ outcomes as our own, taking pride in the systems we build and the trust we earn. If you’re motivated by purpose, obsessed with excellence, and ready to work very hard to accelerate the future of intelligence, join us in building what's next.
About The Role
As VP of Engineering, you will own the full software and SRE organizations responsible for our managed orchestration (Kubernetes and SLURM) offerings as well as our managed inference services. You will set the technical direction, build and scale the team, and personally drive architectural decisions that determine how the world's leading AI organizations train and serve their models. You still ship production systems at scale and can go deep on a kernel scheduler, NCCL collective, or KV cache implementation when it matters. You think in terms of systems boundaries, failure modes, and second-order effects. You know how to grow engineering organizations without losing velocity. You ensure we strike the right balance between fast delivery and reliable operation.
You Will
- Own and scale the engineering organization across managed Kubernetes and SLURM, as well as our managed inference product, including Software Engineers and SREs across all three product areas.
- Set the technical and architectural roadmap for cluster orchestration and AI inference serving, from bare-metal provisioning through control-plane design and developer-facing APIs.
- Drive reliability, performance, and scalability standards across the stack, owning SLAs for customers running production AI training and inference workloads on Fluidstack infrastructure.
- Partner closely with Product, Sales, and Customer Success to translate customer needs from top AI labs and enterprises into concrete engineering investments and prioritization decisions.
- Establish engineering culture, hiring bar, and operational practices that attract and retain exceptional talent in a competitive market.
- Remain hands-on at the level of design reviews, architecture decisions, and critical incident response, maintaining deep technical credibility with the team.
- Build and maintain a high-trust, high-accountability team environment where engineers own outcomes end-to-end, from design through production operations.
Basic Qualifications
- 10+ years of software engineering or systems engineering experience, with at least 4 years managing engineering teams including both Software Engineers and SREs.
- Deep hands-on experience with Kubernetes and SLURM in production environments, including scheduling internals, resource management, and multi-tenant cluster operations.
- Strong background in bare-metal infrastructure and GPU/accelerator systems, including server imaging, networking (InfiniBand/RoCE), firmware, and hardware lifecycle management.
- Demonstrated ability to build and scale AI inference serving infrastructure, including familiarity with inference optimization techniques (quantization, continuous batching, speculative decoding, KV cache management).
- Track record of building and growing high-performing engineering organizations of 40+ engineers across complex, cross-functional domains.
- Strong communicator who can represent technical strategy to executive leadership, customers, and board-level stakeholders.
Preferred Qualifications
- Prior experience in an AI infrastructure neocloud, hyperscaler (AWS, GCP, Azure), or AI lab (OpenAI, Anthropic, DeepMind) in a senior technical or engineering leadership role.
- Hands-on experience with large-scale GPU cluster operations: multi-node training job scheduling, collective communication tuning, topology-aware placement, and fault recovery.
- Familiarity with frontier model inference serving frameworks (vLLM, TensorRT-LLM, SGLang) and the systems-level tradeoffs involved in latency, throughput, and cost optimization.
- Experience with GPU NPI processes, cluster bring-up, and hardware qualification at scale.
- Exposure to agentic inference workloads and the distinct systems requirements they impose relative to batch or streaming inference.
- Contributions to open-source infrastructure projects in the Kubernetes, SLURM, or MLOps ecosystems.
Salary And Benefits
The base salary range for this role is $280,000 to $450,000. Starting salary will be determined based on relevant experience, skills, and market location. In addition to base salary, this role includes a meaningful equity package, performance bonus, and the following benefits:
- Competitive total compensation package (cash + equity)
- Health, dental, and vision insurance
- Retirement plan
- Generous PTO policy
We are committed to pay equity and transparency. Fluidstack is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans’ status, or any other characteristic protected by law. Fluidstack will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.
You will receive a confirmation email once your application has been successfully submitted. If there is an error with your submission and you did not receive a confirmation email, please email careers@fluidstack.io with your resume/CV, the role you've applied for, and the date you submitted your application, and someone from our recruiting team will be in touch.
Compensation Range: $280K - $450K

How to Get Visa Sponsorship for VP Engineering Roles
Lead with business impact, not technical credentials
Consular officers and USCIS adjudicators evaluating VP-level H-1B petitions scrutinize whether the role genuinely requires a specialty occupation degree. Frame your application around specific business outcomes and organizational scope, not just technical skills.
Consider the O-1A if your profile is strong
VPs with publications, speaking engagements, board memberships, or documented business impact often qualify for O-1A extraordinary ability. It bypasses the H-1B lottery entirely and carries no annual cap, making it worth evaluating before committing to H-1B.
Prepare for heightened nonimmigrant intent scrutiny
Executive roles raise flags at visa interviews because officers may question whether you intend to immigrate eventually. Strong ties to your home country, a clear temporary purpose, and a well-documented employment contract all help counter this concern.
Ensure your employer's petition reflects actual VP duties
USCIS denials at this level often cite vague job descriptions. The I-129 petition should specifically describe team size, budget authority, reporting structure, and the specialized knowledge that makes a bachelor's degree in a specific field genuinely necessary.
Confirm the employer has sponsored at this level before
Not all companies that sponsor engineers have experience sponsoring executive roles. VP-level petitions require more detailed organizational evidence. Ask HR whether their immigration counsel has handled director or VP filings previously before accepting an offer.
Use premium processing to protect your start date
VP roles often come with firm start dates tied to organizational planning cycles. Premium processing delivers an initial USCIS decision within 15 business days, significantly reducing the risk of a delayed approval derailing your onboarding timeline.
Frequently Asked Questions
Do VP Engineering roles qualify as specialty occupations for H-1B purposes?
Yes, but the petition needs to be built carefully. USCIS requires that the position normally requires a bachelor's degree or higher in a specific specialty field. For VP Engineering, a degree in computer science, electrical engineering, or a related discipline typically satisfies this, provided the job description clearly reflects that requirement and not just general management duties.
Is a computer science degree required to get H-1B sponsorship as a VP of Engineering?
Not necessarily a CS degree specifically, but you need a bachelor's or higher in a field directly related to the role. Engineering, software systems, or information technology degrees are all accepted. USCIS also allows three years of specialized work experience to substitute for each year of missing formal education, which helps candidates with non-traditional academic backgrounds.
How can I find VP Engineering jobs that offer visa sponsorship in the U.S.?
Most general job boards don't filter by visa sponsorship willingness, which wastes time at the executive level where sponsorship conversations happen late in the process. Migrate Mate is built specifically for international candidates and shows VP Engineering roles at companies actively open to sponsorship, so you can focus your outreach on employers who are already aligned.
Does holding a VP title make it harder to maintain H-1B nonimmigrant intent?
It can. Consular officers sometimes view senior executive roles as evidence of immigrant intent, particularly if the compensation and organizational stature suggest long-term establishment in the U.S. The strongest defense is a well-documented temporary employment contract, evidence of ties abroad, and a clear employer justification for why this role requires international talent specifically.
Can a company sponsor a VP Engineering role on an L-1A instead of an H-1B?
Yes, if you've worked for the sponsoring company abroad for at least one continuous year within the past three years in a managerial or executive capacity. The L-1A is often preferable for VP-level transfers because it bypasses the H-1B lottery, has a direct pathway to the EB-1C green card, and has no annual numerical cap limiting availability.
What is the prevailing wage requirement for sponsored VP Engineering jobs?
U.S. employers sponsoring a visa must pay at least the prevailing wage, which is what workers in the same role, area, and experience level typically earn. The Department of Labor sets this rate to make sure companies aren't hiring foreign workers simply because they'd accept lower pay than a U.S. worker. It varies by job title, location, and experience. You can look up current prevailing wage rates for any occupation and location using the OFLC Wage Search page.