
Designing a Ratings & Age Classification System for Online Games
Ratings Policy Intern | Netflix
context
As Netflix expanded into interactive entertainment, it faced a new challenge: how to responsibly classify and communicate online gaming content to a global audience.
Existing rating systems like ESRB, PEGI, and IARC focused primarily on traditional game content—violence, language, and sexual themes—and offered limited coverage of risks unique to online play, such as toxicity, gambling-like features, and user-generated content.
Goal: Develop a modern, scalable framework for rating Netflix games that reflects both traditional content and emerging behavioral risks, giving parents and players the clarity to make informed choices.
problem
Netflix needed a trusted ratings model that could extend its established maturity system into the world of games—balancing global consistency, cultural nuance, and digital safety.
Challenges included:
Lack of standardized signals for online interactivity and toxicity
Limited representation of monetization and gambling-like mechanics
Need for a scalable system that fits Netflix’s cross-media ecosystem
Parental confusion around what “online interactions” actually entail
Challenge: Build a next-generation ratings model that addresses the full spectrum of online experiences—from gameplay content to player behavior.
process
1. Research & Benchmarking
Conducted a comparative analysis of global rating systems (ESRB, PEGI, IARC, ACB).
Identified gaps in current models, particularly around interactivity, discrimination, and monetization.
Consulted with internal content policy, product, and operations teams to align goals.
2. Framework Design
Authored the exploratory policy report “Designing a New Ratings & Age Classification System for Online Games.”
Proposed a multi-dimensional Netflix Game Rating Scale covering five axes:
Content Intensity – violence, sex, language, discrimination, drugs, horror
Interactivity – multiplayer, chat, UGC, roleplay
Monetization – loot boxes, gambling-like systems
Discrimination & Toxicity – harassment, slurs, stereotyping
Systemic Risks – persistent IDs, cross-platform exposure
Example rating output:
“Rated 12+ for violence; online interactivity may expose players to discriminatory chat; contains loot boxes.”
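To make the five-axis scale and the advisory format concrete, the sketch below shows one way a per-title rating record could be represented and rendered in Python. The class, field names, and age-band strings are illustrative assumptions for this write-up, not the schema proposed in the report.

```python
from dataclasses import dataclass, field

@dataclass
class GameRating:
    """Hypothetical per-title rating record for the five-axis scale."""
    title: str
    age_band: str                                            # e.g. "7+", "12+", "16+", "18+"
    content_descriptors: list = field(default_factory=list)  # Content Intensity axis
    interactivity_notes: list = field(default_factory=list)  # Interactivity / toxicity exposure
    monetization_flags: list = field(default_factory=list)   # Monetization axis

    def advisory(self) -> str:
        """Render a storefront advisory string like the example output above."""
        parts = [f"Rated {self.age_band} for {', '.join(self.content_descriptors)}"]
        if self.interactivity_notes:
            parts.append("online interactivity may expose players to "
                         + ", ".join(self.interactivity_notes))
        if self.monetization_flags:
            parts.append("contains " + ", ".join(self.monetization_flags))
        return "; ".join(parts) + "."

# Example reproducing the rating output shown above.
example = GameRating(
    title="Sample Title",
    age_band="12+",
    content_descriptors=["violence"],
    interactivity_notes=["discriminatory chat"],
    monetization_flags=["loot boxes"],
)
print(example.advisory())
# -> Rated 12+ for violence; online interactivity may expose players to
#    discriminatory chat; contains loot boxes.
```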
3. Policy & Tooling Proposals
Developed reviewer playbooks with decision trees for toxicity, gambling, and UGC moderation (a simplified toxicity tree is sketched after this list).
Recommended override-only advisories and optional interaction/stereotype tags for transparency.
Proposed ML classifiers to detect toxic chat, gambling mechanics, and risky user-generated content.
Outlined UX improvements for storefront advisories and parental controls.
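As a rough illustration of how a playbook decision tree for toxicity could be encoded, the Python sketch below maps a few review signals to an escalation outcome. The signal names, the 5-per-1,000 threshold, and the outcome labels are hypothetical, chosen only to show the branching shape of such a tree.

```python
def toxicity_escalation(has_open_chat: bool,
                        slur_filter_enabled: bool,
                        ugc_sharing: bool,
                        prior_reports_per_1k_players: float) -> str:
    """Hypothetical decision tree mapping review signals to a reviewer outcome.

    Returns one of: "no_advisory", "interaction_tag",
    "toxicity_advisory", "escalate_to_policy_team".
    """
    if not has_open_chat and not ugc_sharing:
        return "no_advisory"                  # no player-to-player exposure
    if has_open_chat and not slur_filter_enabled:
        return "escalate_to_policy_team"      # unmitigated exposure path
    if prior_reports_per_1k_players >= 5.0:
        return "toxicity_advisory"            # sustained complaint volume
    return "interaction_tag"                  # disclose interactivity only

# Example: open chat with a slur filter and low complaint volume.
print(toxicity_escalation(True, True, False, 1.2))  # -> "interaction_tag"
```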
4. Testing & Implementation Plan
Designed a 90-day rollout roadmap including pilot testing with 50 Netflix titles.
Proposed A/B testing of advisories to measure parental comprehension and complaint reduction.
Defined success metrics:
↑ Parental understanding of advisories
↓ Complaint rates for toxicity/gambling
≥85% reviewer agreement across labels
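One way the ≥85% target could be monitored during the pilot is with a simple pairwise percent-agreement calculation across reviewers, sketched below in Python; Cohen's or Fleiss' kappa would be the more rigorous statistic, and the reviewer data here is invented for illustration.

```python
from itertools import combinations

def pairwise_agreement(labels_by_reviewer: dict[str, list[str]]) -> float:
    """Fraction of (title, reviewer-pair) comparisons where labels match.

    labels_by_reviewer maps reviewer -> list of labels, one per title,
    in the same title order for every reviewer.
    """
    reviewers = list(labels_by_reviewer)
    n_titles = len(labels_by_reviewer[reviewers[0]])
    matches = total = 0
    for a, b in combinations(reviewers, 2):
        for i in range(n_titles):
            total += 1
            matches += labels_by_reviewer[a][i] == labels_by_reviewer[b][i]
    return matches / total

# Toy example: three reviewers labelling four titles.
ratings = {
    "r1": ["12+", "16+", "7+", "12+"],
    "r2": ["12+", "16+", "7+", "16+"],
    "r3": ["12+", "18+", "7+", "12+"],
}
score = pairwise_agreement(ratings)
print(f"{score:.0%}", "meets target" if score >= 0.85 else "below 85% target")
```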
“The system bridges entertainment and safety—helping families understand not just what’s in a game, but how it behaves online.”
outcome
Created a scalable, future-ready ratings framework adaptable to Netflix’s global gaming catalog.
Provided actionable recommendations for policy, product, and ML teams to enhance player transparency.
Established a foundation for cross-platform trust and parental confidence in Netflix Games.
impact
Advanced Netflix’s readiness for ethical and transparent game publishing.
Introduced policy innovation that merges entertainment standards with online safety and user trust.
Strengthened the bridge between content policy, technology, and player experience.
Deepened understanding of global regulatory alignment across the PEGI, IARC, and ACB systems.