
Why I made Azodu

submitted by Erebus

Azodu is a discussion platform in which moderation is handled exclusively by AI. While human moderation worked well enough during the early days of the internet, it has gradually come under the influence of corporations, state actors, political groups, and other forms of institutional power. There is no better example of this than Reddit, which is now an echo chamber of propaganda and partisan talking points.

The Moderation Problem

While it is easy to lay out a clear set of rules that almost everyone agrees on ("don't be racist", "don't threaten people", etc.), it is incredibly difficult for people to enforce those rules in an unbiased and consistent manner.

The mind of a machine is codified (literally), while the human mind is not. An AI mind is simply code: a written-down set of rules and procedures to effect a result. Therefore, with AI moderation, it is possible to codify not only the rules but also the interpretation of the rules. That is the goal with Azodu.

How do I know AI mods aren’t biased?

Currently I use OpenAI’s moderation endpoint.

The API more or less answers the question "is this content malicious?" If it returns true, the content is rejected; if it returns false, the content is accepted.
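In practice, the accept/reject decision above can be a thin wrapper around OpenAI's `/v1/moderations` endpoint. Here is a minimal sketch, assuming standard API-key authentication; the function names are illustrative, not Azodu's actual code:

```python
import json
import urllib.request

MODERATION_URL = "https://api.openai.com/v1/moderations"


def moderate(text: str, api_key: str) -> dict:
    """Send text to OpenAI's moderation endpoint and return the parsed JSON response."""
    req = urllib.request.Request(
        MODERATION_URL,
        data=json.dumps({"input": text}).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


def is_accepted(moderation_response: dict) -> bool:
    """Accept the submission only if the model did not flag it."""
    return not moderation_response["results"][0]["flagged"]
```

The response includes a boolean `flagged` field per input, plus per-category scores; this sketch keys the decision on `flagged` alone, matching the binary accept/reject behavior described above.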

While I don’t have access to the base model (which is closed source), my testing has confirmed the model is surprisingly unbiased. At least I believe it is an order of magnitude better than human moderation on Reddit. That is not to say that AI models can't be biased. I am very much aware of the danger.

What if OpenAI’s model doesn’t stay unbiased?

Using OpenAI’s moderation endpoint is a temporary measure to first prove the viability of the concept: 100% automated AI moderation without human intervention.

My eventual plan is to train our own models and open source them, so the mechanisms for moderation and content approval are 100% open to scrutiny by the public.

Do you use any human moderation?

The platform will have a six-month grace period in which some human moderation will be used. It will not be the primary means of moderation, but a last resort if the AI moderation (which is very much experimental) fails or is exploited. I expect to fail a lot, and to learn a lot, while the platform is still new and growing. This is a first-of-its-kind experiment.

The goal is to eventually have 100% automated AI moderation using 100% open source models: remove the human element from moderation entirely, and with it the bias. Read The Case for AI Moderation, which further explains the risks and rewards.

Think of human moderation on Azodu as the training wheels. Eventually, the training wheels will need to come off to fulfill the full vision of the project.

What if the AI model wrongfully rejects my submission?

During the six-month grace period, we will review and discuss rejections in the Discord server, which is a place for any and all feedback. Again, the goal is to eventually reach 100% accuracy with no human intervention in the moderation process.

How can I trust any of this?

Most social media sites (e.g. Facebook, Twitter, Reddit) were created in an era in which state actors and political institutions had very little understanding of the internet and its capacity to spread information. Because of this, no strong measures were built into Facebook or Reddit to protect the free exchange of ideas.

Azodu was born in an era in which the free exchange of information is very much under attack. Every day, powerful individuals and institutions work to manipulate and control information on the internet to shape the way we think. Azodu is built from the ground up with the deliberate intent to protect the free exchange of ideas. I believe AI can help in this battle by eliminating the need for easily corruptible human moderators.

We will not go the way of Reddit. I do not view freedom of speech as a triviality. I believe freedom of speech is a basic human right. I see the danger of silencing someone as much greater than the danger of allowing them to say whatever they want.

To learn more about how the platform is intended to work, read the How it works page.