AI Data Engineer Jobs at Anthropic with Visa Sponsorship
Anthropic hires AI Data Engineers to build and maintain the data infrastructure that trains frontier AI models. The company sponsors H-1B, H-1B1, and E-3 visas for this function, and its active sponsorship track record reflects consistent demand for qualified international candidates in this role.
Overview
Showing 5 of 41+ AI Data Engineer jobs at Anthropic


See all 41+ AI Data Engineer Jobs at Anthropic
Sign up for free to unlock all listings, filter by visa type, and get alerts for new AI Data Engineer Jobs at Anthropic.
Get Access To All Jobs
About Anthropic
Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.
About The Team
As AI training and deployments scale, the volume of data we need to monitor and understand is exploding. Our team uses Claude itself to make sense of this data. We own an integrated set of tools enabling Anthropic to ask open-ended questions, surface unexpected patterns, and maintain meaningful human oversight over massive datasets. Our tools are widely adopted internally — powering ongoing enforcement, threat intelligence investigations, model audits, and more — and we’re looking for experienced engineers and researchers to both scale up existing applications and go zero-to-one on new ones.
About The Role
As a Research Engineer on our team, you'll design and build systems that let AI analyze large, unstructured datasets — think tens or hundreds of thousands of conversations or documents — and produce structured, trustworthy insights. You'll work across the full stack, from core analysis frameworks through user-facing apps and interfaces. This is a high-leverage role. The tools you build will be used by dozens of researchers and investigators, and directly shape our ability to measure and mitigate both misuse and misalignment.
Responsibilities
- Design and implement AI-based monitoring systems for AI training and deployment
- Extend and improve core frameworks for processing large volumes of unstructured text
- Partner with researchers and safety teams across Anthropic to understand their analytical needs and build solutions
- Develop agentic integrations that allow AI systems to autonomously investigate and act on analytical findings
- Contribute to the strategic direction of the team, including decisions about what to build, what to partner on, and where to invest
You May Be a Good Fit If You
- Have 5+ years of software engineering experience, with meaningful exposure to ML systems
- Are excited about the problem of scaling human oversight of AI systems
- Are familiar with LLM application development (context engineering, evaluation, orchestration)
- Enjoy building tools that other people use — you care about UX, reliability, and documentation
- Can context-switch between deep infrastructure work and user-facing product thinking
- Thrive in collaborative, cross-functional environments
Strong Candidates May Also Have
- Research experience in AI safety, alignment, or responsible deployment
- Practical experience with both data science and engineering, including developing and using large-scale data processing frameworks
- Experience with productionizing internal tools or building developer-facing platforms
- Background in building monitoring or observability systems
- Comfort with ambiguity — our team is small and growing, and you'll help define what we become
Annual Salary
$320,000—$405,000 USD
Logistics
Education requirements: We require at least a Bachelor's degree in a related field or equivalent experience.
Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.
Visa sponsorship: We do sponsor visas! However, we aren't able to successfully sponsor visas for every role and every candidate. If we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.
We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work. We think AI systems like the ones we're building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.
Your safety matters to us.
To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you're ever unsure about a communication, don't click any links—visit anthropic.com/careers directly for confirmed position openings.
How We're Different
We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We're an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills. The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.
Come work with us!
Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues.
Guidance on Candidates' AI Usage:
Learn about our policy for using AI in our application process.

Tips for Finding AI Data Engineer Jobs at Anthropic
Tailor your portfolio to frontier model data
Anthropic's AI Data Engineer roles center on large-scale training data pipelines and RLHF dataset construction. Showcase projects involving data curation at scale, human feedback workflows, or pre-training data quality systems rather than generic ETL or business intelligence work.
Confirm your visa category before applying
Anthropic sponsors H-1B, H-1B1, and E-3 visas for this role. If you're an Australian, Singaporean, or Chilean national, clarify with the recruiter early whether they'll file under your country-specific category (E-3 for Australians, H-1B1 for Singaporeans and Chileans) or a standard H-1B, since processing timelines and renewal structures differ meaningfully between those categories.
Time your application around H-1B registration windows
If you need H-1B sponsorship and aren't currently in a cap-exempt status, the USCIS registration window opens each March for an October 1 start date. Starting your Anthropic job search in the preceding fall gives you enough runway to clear interviews before registration opens.
Request premium processing during offer negotiation
USCIS offers premium processing for H-1B petitions, reducing adjudication to roughly 15 business days. Raising this with Anthropic's recruiting team before you sign the offer letter is easier than requesting it after the petition has already been filed.
Search verified AI Data Engineer roles on Migrate Mate
Filter for AI Data Engineer openings at Anthropic on Migrate Mate to see roles confirmed to accept your specific visa type. This saves time you'd otherwise spend emailing recruiters to confirm sponsorship eligibility before applying.
Address specialty occupation evidence in your application materials
USCIS scrutinizes whether AI Data Engineer roles qualify as specialty occupations. Frame your resume and cover letter around the theoretical and applied depth required: degree-level knowledge of distributed systems, data provenance, or ML data infrastructure signals that the role isn't generalist.
Anthropic is hiring AI Data Engineers across the US. Find yours.
Find AI Data Engineer Jobs at Anthropic
Frequently Asked Questions
Does Anthropic sponsor H-1B visas for AI Data Engineers?
Yes, Anthropic sponsors H-1B visas for AI Data Engineer roles and has an active sponsorship track record for this function. If you're subject to the H-1B cap and haven't been selected in a prior lottery, timing your offer to align with the March USCIS registration window is the most practical path forward.
Which visa types does Anthropic commonly use for AI Data Engineer roles?
Anthropic sponsors H-1B, H-1B1, and E-3 visas for AI Data Engineer positions. H-1B is the most broadly applicable. H-1B1 is available to Singaporean and Chilean nationals, and E-3 is available exclusively to Australian citizens. Each carries different cap and filing rules, renewal structures, and dependent work authorization rules, so confirm which applies to your nationality early in the process.
What qualifications does Anthropic expect for AI Data Engineer positions?
Anthropic typically expects a bachelor's degree or higher in computer science, statistics, or a related technical field, along with hands-on experience building large-scale data pipelines. For roles supporting model training, experience with human feedback data, data quality systems, or distributed data infrastructure is particularly relevant. Depth in Python, SQL, and familiarity with ML workflows strengthens your application significantly over general data engineering experience.
How do I apply for AI Data Engineer jobs at Anthropic?
You can apply directly through Anthropic's careers page. To confirm visa sponsorship eligibility before applying, browse AI Data Engineer openings at Anthropic on Migrate Mate, where roles are filtered by visa type so you can identify positions that match your sponsorship needs. During the application process, be prepared for multiple technical rounds focused on data systems design, coding, and occasionally a domain-specific assessment tied to AI data workflows.
How long does the visa sponsorship process take once Anthropic extends an offer?
For H-1B petitions filed with USCIS, standard processing takes three to five months. Premium processing reduces adjudication to around 15 business days. E-3 and H-1B1 petitions processed at a U.S. consulate abroad are typically faster, often decided at the interview appointment. Factor in LCA certification with the DOL, which precedes the USCIS filing and generally takes one to two weeks, when planning your start date.
See which AI Data Engineer roles at Anthropic are hiring and sponsoring visas right now.
Search AI Data Engineer Jobs at Anthropic