Cloud Engineer Jobs at Anthropic with Visa Sponsorship
Anthropic hires Cloud Engineers to build and maintain the secure, scalable infrastructure that powers frontier AI research. The company has a consistent track record of sponsoring work visas for this function, and Cloud Engineer roles appear regularly across their open positions.
About Anthropic
Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.
ABOUT THE ROLE
As a Cluster Deployment Engineer, you will own how large-scale AI compute clusters physically come together inside our datacenter fleet. You will set the deployment-engineering strategy for cluster build-out — how racks are organized into pods, halls, and sites; how compute, network, power, and cooling systems interface at the rack boundary; and how deployment scope flows cleanly from hardware specification to facility delivery to a running cluster. This role is focused on deployment engineering, not on datacenter network or systems design — your scope is making sure clusters land cleanly and predictably, not designing the fabrics or facilities themselves.
This is a senior individual-contributor role with broad technical influence. You will work across hardware, networking, facilities, supply chain, and construction to ensure that every generation of accelerator we deploy lands in a datacenter that is ready for it — on schedule, at full density, and with every piece of required infrastructure accounted for. You will be the person who sees around corners: anticipating how next-generation rack designs will stress our facilities, where our deployment model will break at scale, and what needs to change now so that the next cluster turn-up is faster and more predictable than the last.
You will operate at the intersection of engineering strategy and execution discipline, partnering with internal research and systems teams, external developers, engineering firms, and OEM partners to deliver cluster capacity at the speed the frontier demands.
Responsibilities
- Own cluster-level deployment strategy — define how AI compute clusters are organized across the floor, how racks interconnect, and how cluster topology requirements translate into facility and deployment scope across a portfolio of sites.
- Set rack interface standards spanning power, network, mechanical, thermal, and spatial domains, and ensure that every deployment includes the complete set of infrastructure required to bring a cluster online.
- Drive multi-threaded cluster bring-up programs across hardware, networking, power, and cooling — owning plans, dependencies, and critical paths from hardware specification through energization and turn-up.
- Partner with internal engineering teams — research, systems, networking, and hardware — to translate cluster requirements into deployable facility scope, and to derisk onboarding of new hardware platforms well ahead of delivery.
- Lead external partner execution with developers, engineering firms, OEMs, and construction teams, driving technical reviews, deviation management, and handoffs that keep deployments on schedule and within specification.
- Improve cluster turn-up reliability and repeatability — identify systemic gaps in deployment scope, tooling, and partner interfaces, and drive durable fixes that reduce time-to-serve for new capacity.
- Define and track deployment KPIs — cluster readiness, schedule adherence, scope completeness, time-to-first-packet — and use historical trends to forecast risk and inform capacity planning.
- Coordinate cross-functional readiness across supply chain, security, operations, and construction to ship production-ready compute capacity.
- Provide crisp executive visibility on deployment progress, tradeoffs, and risks across a portfolio of concurrent cluster programs.
- Design cluster interfaces for durability — define rack and cluster-level interfaces that remain robust across hardware generations, so that facility scope and deployment models do not need to be reinvented every time the underlying hardware changes.
- Build cluster layout and BOM tooling — create and maintain the tools, templates, and data models that turn cluster topology and rack specifications into accurate floor layouts, deployment sequences, and complete bills of materials, replacing one-off spreadsheets with repeatable, auditable workflows.
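To illustrate the kind of tooling the last responsibility describes, here is a minimal sketch (not Anthropic's actual system — the rack model name, field names, and quantities are hypothetical) of rolling per-rack specifications up into a cluster-level bill of materials:

```python
from collections import Counter
from dataclasses import dataclass

@dataclass(frozen=True)
class RackSpec:
    """Hypothetical rack specification; all counts are per rack."""
    model: str
    power_whips: int   # facility power connections per rack
    leaf_uplinks: int  # network cables to the leaf-switch layer
    cdu_hoses: int     # liquid-cooling hose pairs

def cluster_bom(racks: list[RackSpec]) -> Counter:
    """Aggregate per-rack line items into a cluster-level bill of materials."""
    bom = Counter()
    for rack in racks:
        bom["rack:" + rack.model] += 1
        bom["power_whip"] += rack.power_whips
        bom["leaf_uplink_cable"] += rack.leaf_uplinks
        bom["cdu_hose_pair"] += rack.cdu_hoses
    return bom

# Example: a pod of 8 identical accelerator racks
pod = [RackSpec("accel-gen1", power_whips=4, leaf_uplinks=8, cdu_hoses=2)] * 8
bom = cluster_bom(pod)
print(bom["leaf_uplink_cable"])  # 64
```

The point of a structure like this, as opposed to one-off spreadsheets, is that the same rack data model can feed floor layouts, deployment sequences, and BOMs from a single auditable source.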
YOU MAY BE A GOOD FIT IF YOU
- Have 10+ years of experience in hyperscale datacenter environments, with senior-level responsibility for cluster deployment, large-scale IT integration, or equivalent infrastructure programs.
- Have delivered AI, HPC, or high-density compute clusters at scale and developed a strong intuition for the constraints that govern cluster deployment — interconnect reach, adjacency, power density, and thermal limits.
- Can operate fluently across the boundary between IT hardware and facility infrastructure, and have set interface standards that held up across multiple hardware generations and sites.
- Have led cross-functional programs with both internal engineering teams and external developers, engineering firms, and OEM partners, and are effective at driving alignment across organizational levels.
- Combine strong systems thinking with execution discipline — comfortable zooming from cluster topology and portfolio strategy down to the specific interface detail that will otherwise become a field issue.
- Communicate clearly with technical and executive audiences, and can distill complex, multi-disciplinary programs into decisions and tradeoffs leadership can act on.
- Thrive in ambiguous, fast-moving environments where the hardware, the scale, and the requirements are all changing simultaneously.
- Hold a Bachelor's degree in Electrical Engineering, Mechanical Engineering, Computer Engineering, or equivalent practical experience.
STRONG CANDIDATES MAY ALSO
- Have direct experience deploying leading-edge AI accelerator clusters at hyperscale.
- Have shaped reference designs, deployment standards, or cluster-level playbooks that were adopted across a fleet.
- Have experience working across multiple geographies and understand how regional codes, climate, utility constraints, and supply chains shape cluster-level decisions.
- Have partnered closely with hardware and system providers on long-term platform onboarding and bring-up.
- Have experience building the program mechanisms — roadmaps, milestones, risk registers, runbooks — that make delivery predictable at massive scale.
The annual compensation range for this role is listed below.
For sales roles, the range provided is the role’s On Target Earnings ("OTE") range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role.
Annual Salary:
$320,000 - $405,000 USD
Logistics
Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience
Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience
Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position
Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.
Visa sponsorship: We do sponsor visas! However, we aren't able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.
We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work. We think AI systems like the ones we're building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.
Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you're ever unsure about a communication, don't click any links—visit anthropic.com/careers directly for confirmed position openings.
How we're different
We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We're an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.
The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.
Come work with us!
Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues.
Guidance on Candidates' AI Usage: Learn about our policy for using AI in our application process.
Tips for Finding Cloud Engineer Jobs at Anthropic
Align your cloud certifications to AI infrastructure
Anthropic's Cloud Engineer roles sit inside an AI research environment, so certifications in AWS, GCP, or Azure carry more weight when paired with experience in GPU cluster management, distributed training pipelines, or high-throughput storage systems.
Target roles labeled infrastructure or platform engineering
Anthropic posts Cloud Engineer positions under several titles. Filtering for infrastructure, platform, or reliability engineering surfaces the highest-volume sponsorship track for this function and reduces time spent on roles with different hiring patterns.
Browse open Cloud Engineer roles using Migrate Mate
Migrate Mate aggregates Cloud Engineer openings at verified sponsoring employers like Anthropic, so you can filter directly by visa type and confirm sponsorship eligibility before you apply, rather than discovering the situation after an offer.
Request your LCA before your start date
For H-1B transfers to Anthropic, your new employer must file a certified Labor Condition Application with the DOL before petitioning USCIS. Confirming this timeline with your recruiter at the offer stage prevents a gap between your last day and portability protection.
Clarify E-3 and H-1B1 eligibility early if you qualify
Australian nationals can use the E-3 pathway, and Chilean or Singaporean nationals the H-1B1; both are cap-exempt and can be processed at a U.S. consulate without the lottery. Raising this with your Anthropic recruiter before the offer letter is drafted speeds the sponsorship decision considerably.
Prepare documentation showing systems ownership, not just contribution
Anthropic's specialty occupation determination for Cloud Engineers typically requires evidence that your degree and role share a direct relationship. Letters of recommendation and project documentation that show you designed and owned systems, not just operated them, strengthen your H-1B petition.
Frequently Asked Questions
Does Anthropic sponsor H-1B visas for Cloud Engineers?
Yes, Anthropic sponsors H-1B visas for Cloud Engineer roles and has done so consistently. The company participates in the annual H-1B cap lottery for new applicants and also supports cap-exempt transfers for candidates already holding H-1B status with another employer, which bypasses the lottery entirely and allows for a faster start.
Which visa types does Anthropic commonly sponsor for Cloud Engineers?
Anthropic sponsors H-1B visas for the broadest range of Cloud Engineer candidates. Australian citizens can pursue the E-3 visa, which is cap-exempt and processed at a U.S. consulate without lottery exposure. Chilean and Singaporean nationals may qualify for the H-1B1, another cap-exempt option. All three classifications require an employer-filed Labor Condition Application certified by the DOL.
How do I apply for Cloud Engineer jobs at Anthropic?
Applications go through Anthropic's careers portal, where Cloud Engineer roles are posted under infrastructure, platform, and reliability engineering categories. You can also find and filter verified sponsoring openings on Migrate Mate, which lets you confirm visa sponsorship before investing time in an application. Tailoring your resume to highlight distributed systems design and cloud-native architecture at scale improves your chances at the screening stage.
What qualifications and experience does Anthropic expect for Cloud Engineer roles?
Anthropic's Cloud Engineer postings typically require a bachelor's degree or higher in computer science, computer engineering, or a closely related field, alongside hands-on experience with large-scale cloud infrastructure on AWS, GCP, or Azure. Experience with Kubernetes, infrastructure-as-code tooling like Terraform, and systems that support machine learning workloads is frequently listed as a core requirement rather than a preference.
How do I plan my timeline if I need Anthropic to sponsor my H-1B?
If you need a new H-1B, plan around the April 1 filing deadline and the October 1 employment start date that governs the cap season. USCIS opens registration in March, so you need an offer in hand before that window. Premium processing is available and currently cuts the adjudication period to roughly 15 business days, which matters if your OPT or current status expires close to your intended start date.