AI in games might’ve just proven itself useful for a change—Activision claims Call of Duty’s seen a 43% drop in ‘disruptive voice chat’ since the start of the year thanks to its robo-snitch
AI in games is a weird subject—glassy-eyed, soulless neo-NPCs, hideous AI art slapped onto social media accounts and, thanks to the seemingly irresistible, gravitational allure of its power as a buzzword to sell software and tat, a lot of confusion as to what “AI-powered” actually means.
I’m on record as a sceptic myself. While I think AI—that is, specifically, generative AI—doesn’t really have much to offer games in terms of writing or art, I must admit, with a mouthful of humble pie, that Activision (of all the studios) might’ve stumbled onto a good use for it with the help of Modulate.
I say might’ve only because it’s my job to be sceptical, and because these are all numbers from Activision itself, which has a clear and vested interest in wanting you to believe that it makes good decisions. Still, as per its toxicity report blog post, the numbers are allegedly looking pretty good.
“Since rolling out an improved voice chat enforcement in June 2024,” Activision claims, “Call of Duty has seen a combined 67% reduction in repeat offenders of voice-chat based offences in Modern Warfare 3 and Call of Duty: Warzone. In July 2024, 80% of players that were issued a voice chat enforcement since launch did not re-offend. Exposure to disruptive voice chat continues to fall, dropping by 43% since January 2024.”
The AI software in question is ToxMod, which Activision announced it’d be including in Call of Duty in August of last year. This software essentially acts as a goody-two-shoes narc that’s always watching your games—it’s not responsible for banning anybody, but it will listen to and report your flaming for moderation, hopefully handled by a team of people able to make up for the fact that AI has a habit of imagining things.
What’s more—and this is a big claim, which I again entreat you to handle with scepticism—ToxMod can supposedly listen to tone of voice, allowing it to separate banter from genuine hate. It has a severity rating system, too.
For example, certain slurs have been reclaimed by the communities they once targeted, so when ToxMod detects one, Modulate says it keeps an eye on the immediate reactions of other people present: “If someone says the n-word and clearly offends others in the chat, that will be rated much more severely than what appears to be reclaimed usage that is incorporated naturally into a conversation.”
There’s some other common-sense stuff, too. If there’s a young-sounding person in chat, offences will be rated higher. We grown adults can probably handle some bad words, but I think we can all agree that teaching little Timmy how to insult my sexuality with his whole chest probably isn’t great for society as a whole.
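Modulate hasn’t published how ToxMod actually weighs any of this, so take the following as a loose sketch of the sort of scoring the blog post gestures at; every field name and number below is invented for illustration:

```python
from dataclasses import dataclass

@dataclass
class VoiceClip:
    """One flagged snippet of voice chat (all fields hypothetical)."""
    contains_slur: bool
    tone_hostile: bool             # tone analysis reads aggression, not banter
    listeners_reacted_badly: bool  # other players audibly took offence
    minor_in_lobby: bool           # a young-sounding voice is present

def severity(clip: VoiceClip) -> float:
    """Toy severity score in [0, 1]; higher means escalate to human
    review sooner. ToxMod's real model is proprietary; this only mirrors
    the factors Modulate describes: what was said, how it was said, how
    others reacted, and who was listening.
    """
    score = 0.0
    if clip.contains_slur:
        # Reclaimed, conversational usage rates lower than usage that
        # visibly offends the lobby.
        score += 0.8 if clip.listeners_reacted_badly else 0.3
    if clip.tone_hostile:
        score += 0.2
    if clip.minor_in_lobby:
        score *= 1.5  # offences rate higher with young players present
    # Enforcement still goes through people: the model just flags.
    return min(score, 1.0)
```

Feed that a clip where a slur lands badly in a lobby with a kid present and you’d hit maximum escalation; the same word used conversationally among adults barely registers.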
There are privacy concerns, sure—though while I’m not saying the situation’s great, that ship has long since sailed. Adverts are tailored to your search history, your data on any social media platform is cut up and sold to the highest bidder, and if you have a smartphone, chances are a record of your exact location going back months exists somewhere. All this to say, I think an AI-based reporting system is probably the least of our worries.
If Activision’s numbers are taken on good faith, it’s a solid trade-off. Call of Duty is one of the games most infamous for toxic voice chat—whether that’s numerically true is another thing, but I’m of the opinion that a game generally earns that kind of reputation for a reason. Easing the pressure on besieged mod teams and taking the responsibility for reporting out of players’ hands so they can focus on, y’know, playing the game instead of navigating report menus? That seems like a net good, and my hat is, reluctantly, tipped.