Anthropic | $240,000-$285,000
Steph's Notes:
Same note as with the other T&S Lead role: companies are very good at acknowledging when certain roles might come into contact with disturbing content but are very bad at addressing how they plan to support your mental health post-exposure to said content.
Unfortunately, Anthropic doesn't appear to be different in this respect, and I'd encourage anyone who ends up interviewing for this role to ask some pointed questions about it.
Original Job Description: Trust and Safety Responsible Deployment Policy Lead
As the Trust and Safety (T&S) Responsible Deployment Policy Lead, you will manage and grow a high-impact team focused on developing policies that guide the safe and responsible deployment of Anthropic's tools and services. Your team will set policy frameworks that help limit harms associated with high-risk sectors like healthcare and finance, create guidelines for third-party developers, identify risks associated with new product features and functions along with mitigation strategies, and more. You'll work cross-functionally with Legal, Public Policy, Sales, and Product to uphold robust and comprehensive policies that prevent harm while enabling innovation.
IMPORTANT CONTEXT ON THIS ROLE: In this position you may be exposed to and engage with explicit content spanning a range of topics, including those of a sexual, violent, or psychologically disturbing nature. This role may require responding to escalations during weekends and holidays.
Responsibilities:
- Lead, grow and motivate a team of policy managers through coaching, career development and performance management
- Set OKRs and the long-term strategy for the Responsible Deployment Policy Team
- Oversee policy workflows for drafting, reviewing, approving, and implementing responsible deployment policies across Anthropic’s products and services
- Work with the team to develop frameworks for deploying tools and services in high-risk industries and provide oversight and final approval of new and updated responsible deployment policies
- Establish decision-making frameworks for evaluating potential harms both with and without these policies
- Monitor policy development progress and jump in where needed to maintain velocity in a high-growth environment
- Work with your team to evaluate and offer policy judgements in cases involving harmful use or sensitive users
- Work closely with key stakeholders to incorporate their research, feedback and deep expertise into the policy development process
- Research and stay up-to-date on AI industry risks and work with your team to align our policies to mitigate them
You may be a good fit if you:
- Have 8+ years of experience in trust and safety, policy, national security, or risk
- Have 3+ years of experience managing a team (experience scaling teams is a plus!)
- Have experience with, or a deep interest in, AI and its impact on society
- Have crisp writing, presentation, and critical thinking skills
- Can work cross-functionally with technical and non-technical collaborators and get stakeholder buy-in on policies and on detection and enforcement strategies
- Can think creatively, with an automation-first mindset, about how to use technology in a way that is safe and beneficial and that ultimately furthers the goal of advancing safe AI systems
Annual Salary (USD)
- The expected salary range for this position is $240k-$285k.
We encourage you to apply even if you do not believe you meet every single qualification; not all strong candidates will.
Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work. We think AI systems like the ones we're building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.
Compensation and Benefits
Anthropic’s compensation package consists of three elements: salary, equity, and benefits. We are committed to pay fairness and aim for these three elements collectively to be highly competitive with market rates.
Equity - On top of this position's salary (listed above), equity will be a major component of the total compensation. We aim to offer higher-than-average equity compensation for a company of our size, and communicate equity amounts at the time of offer issuance.
US Benefits
The following benefits are for our US-based employees:
- Optional equity donation matching at a 3:1 ratio, up to 50% of your equity grant.
- Comprehensive health, dental, and vision insurance for you and all your dependents.
- 401(k) plan with 4% matching.
- 22 weeks of paid parental leave.
- Unlimited PTO – most staff take 4-6 weeks each year, sometimes more!
- Stipends for education, home office improvements, commuting, and wellness.
- Fertility benefits via Carrot.
- Daily lunches and snacks in our office.
- Relocation support for those moving to the Bay Area.
UK Benefits
The following benefits are for our UK-based employees:
- Optional equity donation matching at a 3:1 ratio, up to 50% of your equity grant.
- Private health, dental, and vision insurance for you and your dependents.
- Pension contribution (matching 4% of your salary).
- 22 weeks of paid parental leave.
- Unlimited PTO – most staff take 4-6 weeks each year, sometimes more!
- Health cash plan.
- Life insurance and income protection.
- Daily lunches and snacks in our office.
This compensation and benefits information is based on Anthropic’s good faith estimate for this position as of the date of publication and may be modified in the future. Employees based outside of the UK or US will receive a different benefits package. The level of pay within the range will depend on a variety of job-related factors, including where you place on our internal performance ladders, which is based on factors including past work experience, relevant education, and performance on our interviews or in a work trial.
How we're different
We believe that the highest-impact AI research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact — advancing our long-term goals of steerable, trustworthy AI — rather than work on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We're an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills. We do not have boundaries between engineering and research, and we expect all of our technical staff to contribute to both as needed.
The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.
Come work with us!
Anthropic is a public benefit corporation based in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues.