Register your interest with us to gain access to our development alpha. Enter your details and we’ll be in touch within 48 hours.
Make in-game player communities, chatrooms and online social platforms safer and more inclusive environments.
Ally underpins the duty of care that games studios and social platforms have towards their communities, helping to safeguard against abusive behaviour, grooming and other unwanted activity – all grounded in listening to players about what makes them feel safe.
Using the power of machine learning and predictive analytics, Ally detects potentially abusive language and behaviours.
The system takes a player-centric approach, asking players whether they are OK and learning which situations, language and individuals fall within their "safe zone" comfort level. By combining player and community preferences, Ally can take quick action when needed.
Smart abuse detection is the first line of defence. Ally monitors all in-game chat and behaviour rather than relying on sampling, freeing your moderation team to focus on the worst perpetrators and abuse incidents.
Whether or not a player reports it, potential abuse is detected and automatically investigated through a context-aware, multi-stage triage process.
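As a rough illustration of what a multi-stage triage pipeline can look like, the sketch below escalates a chat event through two hypothetical stages. All names, keywords and thresholds here are illustrative assumptions, not Ally's actual implementation:

```python
from dataclasses import dataclass, field

@dataclass
class ChatEvent:
    player_id: str
    text: str
    context: dict = field(default_factory=dict)

def stage_keyword_screen(event: ChatEvent) -> bool:
    # Hypothetical first pass: a cheap keyword screen flags candidates.
    flagged = {"idiot", "loser"}
    return any(word in flagged for word in event.text.lower().split())

def stage_context_check(event: ChatEvent) -> bool:
    # Hypothetical second pass: context raises suspicion, e.g.
    # repeated messages directed at the same target.
    return event.context.get("repeat_count", 0) >= 3

def triage(event: ChatEvent) -> str:
    """Escalate through stages; stop early when a stage clears the event."""
    if not stage_keyword_screen(event):
        return "clear"
    if not stage_context_check(event):
        return "watch"        # log for pattern analysis, no action yet
    return "investigate"      # open a case for the safeguarding team
```

A one-off insult might only be logged to "watch", while the same language repeated at one target escalates to "investigate".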
If we come across an abuse incident that we don’t already know how to deal with for a particular player, our AI natural language character (driven by Character Engine) chats to the player to check how they feel about it.
Each player or user defines what “safe” means for them. Our natural language classifiers enable players to confirm whether their comfort has been breached.
We take a multi-level monitoring approach which can be tailored according to your architecture and needs.
Based on what each player sees and experiences, our analysis enables Ally to identify stalking, begging or even non-verbal abuse, in both one-to-one and many-to-one scenarios.
We also surface new patterns of behaviour, uncovering situations you may not have known to look for.
Based on custom settings defined by the Community Safeguarding team, our Triage Manager decides how the system responds to each abusive scenario as it is detected.
Intervene against abusers with automated actions from the Allybot, such as muting, automatic reminders of community rules or suspension, or with direct action from the Community Safeguarding team for the most serious abuse.
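One way to picture how the Triage Manager's custom settings drive the response is a simple severity-to-action policy. The levels and action names below are hypothetical placeholders; in practice the Community Safeguarding team defines them:

```python
# Hypothetical mapping from detected severity to an automated Allybot
# action; the real settings are defined by the Community Safeguarding team.
ACTION_POLICY = {
    "low": "remind_rules",          # automatic reminder of community rules
    "medium": "mute",               # temporary mute
    "high": "suspend",              # suspension of the abuser
    "critical": "escalate_to_team", # direct action by the safeguarding team
}

def respond(severity: str) -> str:
    """Pick the configured response; unknown severities go to the team."""
    return ACTION_POLICY.get(severity, "escalate_to_team")
```

Defaulting unknown severities to human escalation keeps novel abuse scenarios in front of the community team rather than silently unhandled.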
We can create a case file for further analysis by the community team, whether a player proactively reports an instance of abuse or responds to an Ally enquiry.
We all have different boundaries in different contexts. Ally empowers the user to define what ‘safe’ means for them.
However those boundaries are drawn, Ally's artificial intelligence copes with ease. It's all about the community you want to create: your community team decides how Ally should intervene at each level of abuse.
As players gain more experience in a game or chat room, or build their own community with whom they feel relaxed, their response to problematic language may change. They’re free to tell Ally whenever their preferences change – or even how they’re feeling on any given day.
The power of predictive analytics enables us to detect potentially abusive patterns of behaviour through signature analysis. This helps community moderation teams develop rules for new abuse scenarios and identify the most serious long-term abuse.
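As a loose sketch of signature analysis, a behavioural signature can be expressed as a set of feature thresholds that must all be reached over a player's recent activity. The feature names and thresholds below are invented for illustration only:

```python
from collections import Counter

# Hypothetical behavioural signature: feature counts that, taken together,
# suggest a long-term pattern such as persistent targeting of one player.
STALKING_SIGNATURE = {"follows_target": 5, "messages_after_block": 2}

def matches_signature(events: list[str], signature: dict[str, int]) -> bool:
    """True when observed feature counts reach every threshold in the signature."""
    counts = Counter(events)
    return all(counts[feature] >= minimum for feature, minimum in signature.items())
```

A moderation team could maintain a library of such signatures, adding a new one whenever analysis surfaces an abuse scenario they had not previously been looking for.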
We would love for you to join our closed alpha for Character Engine and Ally. Please sign up here and let us know a little about yourself.