AI Research Engineer Jobs at Anthropic with Visa Sponsorship
Anthropic hires AI Research Engineers to work at the frontier of large language model development, safety research, and interpretability. The company has a consistent track record of sponsoring work visas for this function, covering multiple visa categories for qualified candidates from outside the United States.
About Anthropic
Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.
About The Team
As AI training and deployments scale, the volume of data we need to monitor and understand is exploding. Our team uses Claude itself to make sense of this data. We own an integrated set of tools enabling Anthropic to ask open-ended questions, surface unexpected patterns, and maintain meaningful human oversight over massive datasets. Our tools are widely adopted internally — powering ongoing enforcement, threat intelligence investigations, model audits, and more — and we’re looking for experienced engineers and researchers to both scale up existing applications and go zero-to-one on new ones.
About The Role
As a Research Engineer on our team, you'll design and build systems that let AI analyze large, unstructured datasets — think tens or hundreds of thousands of conversations or documents — and produce structured, trustworthy insights. You'll work across the full stack, from core analysis frameworks through user-facing apps and interfaces. This is a high-leverage role. The tools you build will be used by dozens of researchers and investigators, and directly shape our ability to measure and mitigate both misuse and misalignment.
Responsibilities
- Design and implement AI-based monitoring systems for AI training and deployment
- Extend and improve core frameworks for processing large volumes of unstructured text
- Partner with researchers and safety teams across Anthropic to understand their analytical needs and build solutions
- Develop agentic integrations that allow AI systems to autonomously investigate and act on analytical findings
- Contribute to the strategic direction of the team, including decisions about what to build, what to partner on, and where to invest
You May Be a Good Fit If You
- Have 5+ years of software engineering experience, with meaningful exposure to ML systems
- Are excited about the problem of scaling human oversight of AI systems
- Are familiar with LLM application development (context engineering, evaluation, orchestration)
- Enjoy building tools that other people use — you care about UX, reliability, and documentation
- Can context-switch between deep infrastructure work and user-facing product thinking
- Thrive in collaborative, cross-functional environments
Strong Candidates May Also Have
- Research experience in AI safety, alignment, or responsible deployment
- Practical experience with both data science and engineering, including developing and using large-scale data processing frameworks
- Experience with productionizing internal tools or building developer-facing platforms
- Background in building monitoring or observability systems
- Comfort with ambiguity — our team is small and growing, and you'll help define what we become
Annual Salary
$320,000—$405,000 USD
Logistics
Education requirements: We require at least a Bachelor's degree in a related field or equivalent experience.
Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.
Visa sponsorship: We do sponsor visas! However, we aren't able to sponsor visas for every role or every candidate. If we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.
We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work. We think AI systems like the ones we're building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.
Your safety matters to us.
To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you're ever unsure about a communication, don't click any links—visit anthropic.com/careers directly for confirmed position openings.
How We're Different
We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We're an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills. The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.
Come work with us!
Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues.
Guidance on Candidates' AI Usage:
Learn about our policy for using AI in our application process.

Tips for Finding AI Research Engineer Jobs at Anthropic
Align your research to Anthropic's published work
Anthropic publishes papers on constitutional AI, mechanistic interpretability, and model evaluation. Referencing specific threads of that work in your application signals genuine fit and strengthens the specialty occupation framing your H-1B petition will need.
Prepare a targeted research portfolio early
USCIS scrutinizes specialty occupation claims for AI roles. Compile publications, conference presentations, and technical reports that establish your field before you start interviewing. Gaps in documentation slow petitions and can trigger Requests for Evidence.
Confirm your visa category with the recruiter upfront
Anthropic sponsors H-1B, H-1B1, and E-3 visas for this role. Australian, Chilean, and Singaporean nationals should confirm which category applies to them at the offer stage, since the filing timelines and employer obligations differ meaningfully across those three types.
Use Migrate Mate to filter live AI Research Engineer openings
Anthropic's research hiring moves quickly and positions aren't always well-indexed. Use Migrate Mate to surface active AI Research Engineer roles at Anthropic filtered by visa sponsorship, so you're applying to positions currently open rather than outdated listings.
Understand Anthropic's internal transfer and team placement process
Anthropic often hires AI Research Engineers into specific research teams rather than a general pool. Clarify at the offer stage whether your role will sit under a named team, since the job duties stated on your LCA with DOL must match your actual day-to-day work.
Account for H-1B cap timing if you're currently on OPT
If you're on F-1 OPT, your H-1B petition must be filed for an October 1 start date. That means Anthropic needs to initiate your LCA and USCIS registration in January through March, so surface your status to the recruiter before you receive the formal offer.
Frequently Asked Questions
Does Anthropic sponsor H-1B visas for AI Research Engineers?
Yes, Anthropic sponsors H-1B visas for AI Research Engineers. The company has an established pattern of filing petitions for this role and supports candidates through the full process, including the Labor Condition Application with DOL and the USCIS petition. If you're subject to the H-1B cap, confirm your registration timeline with the recruiting team early in the offer process.
How do I apply for AI Research Engineer jobs at Anthropic?
Applications go through Anthropic's careers portal. The process typically includes a technical screen, research discussions, and a multi-stage interview covering both engineering depth and alignment with Anthropic's safety-focused research agenda. Migrate Mate is a reliable way to track current AI Research Engineer openings at Anthropic filtered by visa sponsorship eligibility, so you're targeting active roles.
Which visa types does Anthropic sponsor for AI Research Engineers?
Anthropic sponsors H-1B, H-1B1, and E-3 visas for AI Research Engineers. H-1B is the most common path for nationals from most countries. H-1B1 applies to Chilean and Singaporean nationals, and E-3 applies to Australian citizens. Each visa has different filing requirements and timelines, so confirming your category with the recruiter at the offer stage avoids delays.
What qualifications does Anthropic expect for AI Research Engineer roles?
Anthropic's AI Research Engineer roles typically require a graduate degree in computer science, mathematics, or a related field, along with demonstrable experience in machine learning research. Familiarity with large language model training, interpretability methods, or AI safety evaluation is weighted heavily. Published work or contributions to research that align with Anthropic's technical agenda materially strengthens your candidacy and supports your visa petition's specialty occupation classification.
How long does the visa sponsorship process take for an Anthropic offer?
The timeline depends on the visa type. E-3 and H-1B1 visas can be applied for directly at a U.S. consulate and can move within a few weeks of offer acceptance if your documents are ready. H-1B petitions require DOL LCA certification before the USCIS filing, which adds several weeks. Premium processing through USCIS reduces the adjudication window to 15 business days, and many tech employers at Anthropic's scale elect that option.