Popular game developers Riot Games and Ubisoft have recently teamed up to develop an AI database that will combat toxicity in online games.
Toxicity remains prevalent in online games to this day. The anonymity of online play allows some players to vent their real-life frustrations on others, and the pressure to win in competitive matches pushes some to say and do things that are not welcome.
Online game developers have already put many safeguards in place to combat this problem, including chat moderation, bans for toxic players, and rewards for good behavior. Even so, toxic players remain a common sight in online games.
That is why two of the biggest game developers, Riot Games and Ubisoft, have teamed up to develop an AI-based database that will learn to detect and moderate toxic behavior in online games.
Riot Games x Ubisoft Collaboration:
The collaboration was announced in a blog post on the official Riot Games website titled “Zero Harm in Comms: Riot and Ubisoft Working Together on Research Project to Create More Positive Gaming Communities”. The research project itself is called “Zero Harm in Comms”.
As mentioned previously, Riot Games and Ubisoft plan to develop AI-based preemptive moderation tools that detect and mitigate disruptive behavior in-game. With data gathered from games made by both companies, the AI can be trained to cover a wider range of use cases.
As more and more people get into gaming, developers are looking to automate the moderation process due to a lack of manpower and resources. This AI database could be the solution to that problem: after proper training, the AI should be able to detect harmful and toxic behavior and ban offending players automatically.
That’s all we know so far about this new collaboration between Riot Games and Ubisoft. Keep an eye on our website for more gaming news like this.