I continue to wish that Anthropic would address how they're mitigating the mental health risks of Trust & Safety work, and there are some minor flags ("fast-paced environment," hello darkness my old friend), but it's a solid Eh, It's Probably Fine.
I always want to call attention to the fact that companies are very good at acknowledging when certain roles might come into contact with disturbing content but are very bad at addressing how they plan to support your mental health post-exposure to said content.
Same note as with the other T&S Lead role applies here: the posting acknowledges the disturbing content but says nothing about post-exposure support.
The "ambiguous environments" bit stands out to me in particular, since all of the responsibilities of the role seem pretty concrete in scope – so where's the ambiguity supposed to be coming from?
Yes, it's AI. I'm as surprised as you. The job description is honest and straightforward, the compensation is fantastic for a role at this level, the benefits are great, the job application is thoughtful and not burdensome to applicants, and their Careers page is clear and informative. Wild, huh?