Job Description
About Anthropic
Anthropic’s mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.
About the Team
Safeguards Labs is a new team operating at the intersection of research and engineering, chartered to investigate novel safety methods that protect Claude and the people who use it. We prototype new approaches to safe models, usage safeguards, and production safety — pressure-testing ideas through offline analysis and subsets of traffic before they graduate into production systems run by our partner Safeguards teams. Our work overlaps closely with account abuse, model behavior safeguards, and other safeguard subteams, and we serve as a research arm that can take on ambitious, ambiguous problems and turn them into deployed defenses.
About the Role
We're hiring research engineers to define and execute the Labs research agenda. You'll scope your own projects, run experiments end-to-end, and decide when an idea is ready to hand off to a production team — or when to kill it and move on. The team is small and being built deliberately around a roughly 3:1 mix of researchers to software engineers, so each person has substantial latitude over what they work on and high leverage on the team's direction.
Responsibilities:
- Lead and contribute to research projects investigating new methods for detecting misuse of Claude, identifying malicious organizations and accounts, strengthening model safeguards, and addressing other safety needs.
- Design and run offline analyses over model usage data to surface abuse patterns, build classifiers and detection systems, and evaluate their effectiveness.
- Develop and iterate on prototypes that could eventually feed signals into the real-time safeguards path, partnering with engineers on tech transfer.
- Contribute to a broader research portfolio investigating methods for detecting abusive behavior in chat-based or agentic workflows, and for training the model to robustly refrain from dangerous responses or behaviors without over-refusing.
- Build evaluations and methodologies for measuring whether safeguards actually work, including in agentic settings.
- Write up findings clearly so they inform decisions across Trust & Safety, research, and product teams.
You may be a good fit if you:
- Have a track record of independently driving research projects from ambiguous problem statements to concrete results, ideally in AI, ML, security, integrity, or a related technical field.
- Are comfortable scoping your own work and switching between research, engineering, and analysis as a project demands.
- Have working familiarity with how large language models operate — sampling, prompting, training — even if LLMs aren't your primary background.
- Are proficient in Python and comfortable working with large datasets.
- Care about the societal impacts of AI and want your work to directly reduce real-world harm.
Strong candidates may also have:
- Experience building and training machine learning models, including classifiers for abuse, fraud, integrity, or security applications.
- Knowledge of evaluation methodologies for language models and experience designing evals.
- Experience with agentic environments and evaluating model behavior in them.
- Background in trust and safety, integrity, fraud detection, threat intelligence, or adversarial ML.
- Experience with red teaming, jailbreak research, or interpretability methods like steering vectors.
- A history of taking research prototypes and transferring them into production systems.
The annual compensation range for this role is listed below.
For sales roles, the range provided is the role’s On Target Earnings ("OTE") range, meaning that the range includes both the sales commissions/sales bonuses target and annual base salary for the role.
Logistics
Minimum education: Bachelor’s degree or an equivalent combination of education, training, and/or experience
Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience
Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position
Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.
Visa sponsorship: We do sponsor visas! However, we aren't able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.
We encourage you to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work. We think AI systems like the ones we're building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.
Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you're ever unsure about a communication, don't click any links—visit anthropic.com/careers directly for confirmed position openings.
How we're different
We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We're an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.
The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.
Come work with us!
Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues.
Guidance on Candidates' AI Usage: Learn about our policy for using AI in our application process.