In the multiplayer game League of Legends, players who use abusive language can see their words used against them in a court of their peers. The technology behind this jury can be applied to other online communities.
Swearing and name-calling are common in online multiplayer games, but the creators of one such game, League of Legends, recently demonstrated a new system designed to stop bad behavior and curb its spread. The aptly named Tribunal collects reports of negative behavior, including chat logs, and presents the worst cases to the game's forums, where players vote on whether the conduct is acceptable. Offenders convicted of especially bad behavior can be banned from the game.
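The article does not publish Riot's actual algorithm, but the report-then-vote pipeline it describes can be sketched roughly as follows. The vote quorum, the punish-vote threshold, and all names here are illustrative assumptions, not details from the source:

```python
# Hypothetical sketch of a Tribunal-style case queue: reported chat logs are
# bundled into a case, peers vote, and a verdict is issued once enough votes
# arrive. Quorum and threshold values are invented for illustration.
from collections import Counter
from dataclasses import dataclass, field


@dataclass
class Case:
    player_id: str
    chat_log: list[str]                       # evidence shown to voters
    votes: list[str] = field(default_factory=list)  # "punish" or "pardon"


def verdict(case: Case, quorum: int = 20, punish_ratio: float = 0.7):
    """Return 'punish', 'pardon', or None if too few votes have been cast."""
    if len(case.votes) < quorum:
        return None                           # case stays open
    tally = Counter(case.votes)
    if tally["punish"] / len(case.votes) >= punish_ratio:
        return "punish"                       # e.g. trigger a ban review
    return "pardon"
```

A real system would also weight voters by accuracy and escalate repeat offenders, but the core idea is simply peer voting over collected evidence.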
What’s the Big Idea?
Jeff Lin, lead designer of social systems at Riot Games, says that League of Legends "can create behavioural profiles for every player" and measure how often each one uses abusive language. Riot has also used simple warnings at the start of battles to nudge players towards better gameplay. A system similar to the Tribunal could work for other online communities, imposing social norms in a realm where anonymity often gives people license to behave badly. According to University of Michigan professor Cliff Lampe, "This really helps to shape sites…these social structures lead to more sustainable sites."
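The kind of per-player behavioural profile Lin describes could, in its simplest form, track the rate of flagged language across a player's chat messages. The word list and the rate metric below are illustrative assumptions, not Riot's implementation:

```python
# Hypothetical sketch of a behavioural profile: a running count of chat
# messages and of messages containing flagged terms, exposing an abuse rate.
FLAGGED_TERMS = {"noob", "idiot"}  # placeholder list, not a real lexicon


class BehaviourProfile:
    def __init__(self) -> None:
        self.messages = 0
        self.flagged = 0

    def record(self, message: str) -> None:
        """Log one chat message and note whether it contains flagged terms."""
        self.messages += 1
        if any(word in FLAGGED_TERMS for word in message.lower().split()):
            self.flagged += 1

    @property
    def abuse_rate(self) -> float:
        """Fraction of this player's messages containing flagged language."""
        return self.flagged / self.messages if self.messages else 0.0
```

A profile like this gives moderators a quantitative signal: players whose abuse rate stays high over many matches can be warned, queued for peer review, or banned.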