Overview
Team: Trust, Safety & Platform Integrity
Employment Type: Full-Time, Permanent
Work Style: Hybrid
Job Overview
At Datemil Inc., maintaining the safety, integrity, and trust of our platforms is essential to everything we build. As Integrity Operations Manager – Sensitive Content, you will lead the operational strategy and daily execution for the Sensitive Content pillar, ensuring that high-risk and policy-sensitive material is reviewed, classified, and handled accurately at scale.
This role oversees moderation operations, vendor/BPO performance, labeling workflows, and AI-supported review pipelines that help reduce harm while protecting the member experience. You will work closely with Policy, Product, Engineering, QA, and external partners to ensure our moderation systems remain consistent, scalable, and aligned with company standards.
The ideal candidate combines strong operational leadership, trust & safety expertise, vendor management experience, and thoughtful use of AI-assisted moderation tools, while always keeping human judgment and member impact at the center of decisions.
Sensitive Content Operations
Lead the day-to-day operations for the Sensitive Content pillar, overseeing teams responsible for classifying images and content at scale to support safety, integrity, and harm-reduction outcomes.
Ensure moderation workflows are accurate, efficient, and aligned with internal policies and community standards.
Maintain high operational discipline while balancing speed, quality, and risk.
Vendor / BPO & AI Moderation Partnerships
Own external moderation vendor and BPO relationships end-to-end.
Set clear performance expectations, define SLAs, and run regular governance reviews.
Monitor delivery quality, throughput, backlog health, and cost efficiency.
Ensure distributed teams consistently meet operational and quality targets.
Partner with AI and automation teams to maintain strong human-in-the-loop (HITL) review quality.
Policy, Taxonomy & Labeling Operations
Translate sensitive content policies and taxonomies into clear labeling instructions and moderation guidelines.
Build and maintain annotation standards that can be applied consistently across internal and vendor teams.
Validate understanding through calibration sessions, sampling reviews, and feedback loops.
Support taxonomy updates, new harm categories, and changes to policy definitions.
Data Labeling & AI Safety Support
Coordinate data labeling workflows used to support AI safety systems and ML-driven moderation pipelines.
Ensure annotation quality, reviewer alignment, and dataset reliability.
Work with cross-functional partners to maintain high standards for training data.
Balance automation with human oversight to maintain fairness and accuracy.
Quality & Performance Management
Define quality measurement frameworks, including sampling plans, alignment checks, and error tracking.
Identify patterns in review errors, inconsistencies, or drift.
Partner with QA and Learning & Development teams to improve training programs.
Implement audits and quality improvement initiatives that raise accuracy across teams.
Special Projects & Operational Improvements
Lead special labeling or moderation projects such as:
taxonomy changes
new harm patterns
targeted quality improvements
pipeline redesign
Drive projects from insight to execution with clear timelines and measurable outcomes.
Continuously improve workflows to support scale, efficiency, and consistency.
Reporting, Metrics & Insights
Build and maintain operational reporting on:
quality scores
throughput
backlog health
escalation volume
cost / efficiency metrics
Use data to explain trends, identify risks, and guide decisions.
Present clear performance updates to cross-functional partners and leadership.
Cross-Functional Collaboration
Partner closely with Policy, Product, Engineering, QA, L&D, and Vendor teams.
Help design moderation tools, workflows, and pipeline improvements.
Bring an agile mindset to operational changes and system updates.
Ensure safety considerations are included in new product features.
Escalations & High-Sensitivity Issues
Own escalations related to high-risk or sensitive content.
Make careful, consistent decisions under pressure.
Balance speed, safety, user impact, and business risk.
Model calm, disciplined leadership in complex situations.
Qualifications
Typically requires 4–6 years of relevant experience, though we welcome candidates with alternative backgrounds demonstrating equivalent skills.
Experience leading large-scale moderation, vendor, or BPO operations, including performance management, QA programs, and delivery across distributed teams.
Strong knowledge of trust & safety policy taxonomies and how to turn them into real moderation workflows.
Experience coordinating data labeling for AI safety / ML systems, including annotation standards, reviewer alignment, and dataset quality.
Comfortable working with operational data and reporting; able to spot trends and explain them clearly.
Familiarity with SQL, dashboards, or analytics tools is a strong plus.
A people-first leader who can coach through ambiguity, set clear expectations, and create psychologically safe ways of working.
Demonstrated ability to collaborate across Policy, Product, Engineering, QA, and vendor partners while taking ownership of results.
Thoughtful understanding of AI and automation — knowing when it helps, when human judgment is required, and how to maintain strong HITL quality.
Strong organizational skills and attention to detail.
Comfortable working in high-volume, fast-moving environments.
Able to handle confidential or sensitive material with professionalism and care.
Salary Range: $95,000 – $120,000 annually, depending on experience and qualifications.
Datemil Inc. is the parent company of Datemil Date, V.I.Pursuit, Concierge Matchmaking, Plink Bestie, Networking, and Coaching. Our platforms are designed to help people build meaningful connections in environments that are safe, respectful, and trustworthy.
We believe strong integrity operations are essential to building communities where people feel secure and valued.
We encourage thoughtful and responsible use of AI to support operational efficiency, labeling workflows, and moderation tools. However, decisions involving sensitive content always require human judgment, accountability, and careful review.
Candidates may use AI tools during the application process but should not use them to misrepresent experience or provide inauthentic responses.
Datemil Inc. is committed to building an inclusive workplace where all employees feel respected and supported. We welcome applicants from all backgrounds and life experiences and provide reasonable accommodations throughout the hiring process.
We may use AI-assisted tools to support parts of the recruitment process, such as transcription, summarization, or resume matching. These tools are used only to assist our team, and all final hiring decisions are made by people.
Participation in AI-supported interviews is optional and will not affect your candidacy.
Why Join Us
Help lead safety and integrity operations at scale
Work on real-world trust & safety challenges
Shape AI-assisted moderation workflows
Partner with product, policy, and engineering teams
Contribute to building safer digital communities
Ready to apply?
If this role matches your background and interests, reach out to the team to begin the application process.