Automated Moderation Using OpenAI

Maximizing user experience with AI-powered tools

Joran Quinten · Published in Better Programming · Dec 12, 2022 · 4 min read

To get started, you need to sign up at OpenAI and generate an API key.

Next, we’ll set up a simple Nuxt project. Use the following commands to scaffold a starter project and add the openai package (the Nuxt docs prefer yarn, but feel free to follow the npm or pnpm steps):

npx nuxi init openai.moderation
cd openai.moderation
yarn install
yarn add openai
yarn dev -o

This should result in the starter project running on (typically) http://localhost:3000.

Now open the project in your favorite IDE, and let’s get started!

Configure the key

Create an .env file in the root of the project containing this line (replace with your personal key):

OPENAI_API_KEY=ALWAYSKEEPSECRETSTOYOURSELF

Next, open the nuxt.config.ts and make sure it looks like this:

export default defineNuxtConfig({
  runtimeConfig: {
    OPENAI_API_KEY: process.env.OPENAI_API_KEY,
  },
})

Setting up the API

In order to communicate with the OpenAI endpoint, we’ll need a server of our own.

In Nuxt, adding an API endpoint is just as easy as adding a file in a server/api folder.

So first create that folder structure and place the following in a file called moderate.post.ts:

export default defineEventHandler(async (event) => {
  const body = await readBody(event)
  return body?.message
})

This will just return whatever we post to the /api/moderate endpoint (Nuxt will set up the routing for us).
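Before wiring up OpenAI, it can help to see the echo behavior in isolation. Stripped of the Nuxt plumbing (readBody replaced by a plain argument), the handler above boils down to this sketch:

```typescript
// Simulates the echo behavior of moderate.post.ts outside of Nuxt:
// readBody(event) is replaced by a plain `body` argument.
function echoModerate(body?: { message?: string }): string | undefined {
  // Return whatever was posted as `message`, or undefined if absent.
  return body?.message;
}

console.log(echoModerate({ message: "hello" })); // "hello"
console.log(echoModerate(undefined)); // undefined
```

The optional chaining (body?.message) means a missing or malformed body yields undefined instead of throwing.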

The input component

We’re going to create a small component that takes text input and hits the endpoint we’ve set up on submit, so that we can validate the response.

Create a Moderate.vue component in a components folder in the root of the project.

Let’s start by defining the scripts using the script setup notation:

<script setup lang="ts">
const input = ref("");
const result = ref<any[]>([]);

const onSubmit = async () => {
  const response = await $fetch("/api/moderate", {
    method: "post",
    body: { message: input.value },
  });
  result.value.unshift(response);
  input.value = "";
};
</script>

First, we set up refs to hold the input and the result, and we define a handler that calls the endpoint we’ve already set up, sending the input as a message property on the body. (The .value refers to the ref’s mutable, reactive value.)
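If the .value indirection seems odd: a ref is essentially a container object whose value property Vue tracks. A hand-rolled illustration of the idea (not Vue’s actual implementation, which additionally triggers re-renders on change):

```typescript
// A toy "ref": a container with a mutable `value` property.
// Vue's real ref() also notifies subscribers when `value` changes.
function toyRef<T>(initial: T): { value: T } {
  return { value: initial };
}

const input = toyRef("");
input.value = "some user text"; // mutate through .value, as in onSubmit
console.log(input.value); // "some user text"
```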

Now we’ll add a template with:

  • A small form containing an input;
  • A submit button that will call the onSubmit handler;
  • A place to display the output of the endpoint

You can style it however you want; styling isn’t really the purpose of this tutorial, though. Just go ahead and paste this below the script tag:

<template>
  <div>
    <div class="input">
      <input type="text" v-model="input" />
      <button type="submit" @click="onSubmit">Validate moderation</button>
    </div>
    <div class="output">
      <ul>
        <li :key="i.id" v-for="i in result">
          {{ i.results }}
        </li>
      </ul>
    </div>
  </div>
</template>

Now save this file and let’s load the component in app.vue by replacing its contents with this:

<template>
  <div>
    <Moderate />
  </div>
</template>

You should now see the component running on your localhost. Once you insert some text and hit submit, it should be returned by our own endpoint and show up in the component as part of the list.

Adding intelligence

Finally, we’ll update the moderate.post.ts file to make use of the OpenAI capabilities. The moderation API is one of the more straightforward ones, so it's a good one to get started with.

Instead of returning the body.message immediately, we'll first configure the OpenAI client by instantiating it with the key, then query the moderation endpoint with the contents of the message. Since the handler is already async, we can await the OpenAI call directly.

The file should look like this:

import { Configuration, OpenAIApi } from 'openai';

export default defineEventHandler(async (event) => {
  const body = await readBody(event)

  // read the key from the runtime config we set up earlier
  const config = useRuntimeConfig()

  // set up the configuration
  const configuration = new Configuration({
    apiKey: config.OPENAI_API_KEY,
  });

  // instantiate the openaiClient
  const openaiClient = new OpenAIApi(configuration);

  // make the call to the moderation endpoint
  const res = await openaiClient.createModeration({
    input: body?.message,
  });

  // return the result
  return res.data
})

That’s it! You can now test this out by being very aggressive towards the input field. You should see an assessment of your input across various categories and scores, similar to this example:

{
  "id": "modr-XXXXX",
  "model": "text-moderation-001",
  "results": [
    {
      "categories": {
        "hate": false,
        "hate/threatening": false,
        "self-harm": false,
        "sexual": false,
        "sexual/minors": false,
        "violence": false,
        "violence/graphic": false
      },
      "category_scores": {
        "hate": 0.18805529177188873,
        "hate/threatening": 0.0001250059431185946,
        "self-harm": 0.0003706029092427343,
        "sexual": 0.0008735615410842001,
        "sexual/minors": 0.0007470346172340214,
        "violence": 0.0041268812492489815,
        "violence/graphic": 0.00023186142789199948
      },
      "flagged": false
    }
  ]
}
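Given a response of this shape, the simplest way to act on it is to check the flagged boolean inside each result. A minimal sketch (the ModerationResult interface here is a hypothetical subset of the real response, covering only the fields we need):

```typescript
// Hypothetical subset of the moderation response shape shown above.
interface ModerationResult {
  results: { flagged: boolean }[];
}

// Returns true if any result in the response was flagged.
function isFlagged(response: ModerationResult): boolean {
  return response.results.some((r) => r.flagged);
}

console.log(isFlagged({ results: [{ flagged: false }] })); // false
console.log(isFlagged({ results: [{ flagged: true }] })); // true
```

On the server, you could return isFlagged(res.data) instead of the raw response to keep the category details out of the client.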

If you’re done with this example, one of the fun ways to play around with OpenAI is by using the image generation API.

With the basis we’ve laid, you should be able to either modify the existing code or build your own integration in a framework you prefer.

Using these sorts of tools can help a lot when publishing user-generated content. Bear in mind, though, that this is just an example and not a real-world implementation. Also, as OpenAI suggests, always keep some human eyes on hand when dealing with these sorts of things. A valid use case for this example would be to preemptively flag submissions before publishing.
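For that pre-publication use case, you could go a step further than the binary flagged field and apply your own, stricter thresholds to the category_scores. A sketch of the idea (the 0.1 threshold is an arbitrary example for illustration, not an OpenAI recommendation):

```typescript
// Hypothetical subset of a single result's category_scores, as in the example response.
interface CategoryScores {
  [category: string]: number;
}

// Queue a submission for human review when any category score
// exceeds a deliberately strict custom threshold.
function needsHumanReview(scores: CategoryScores, threshold = 0.1): boolean {
  return Object.values(scores).some((score) => score > threshold);
}

// The "hate" score from the example response above exceeds 0.1:
console.log(needsHumanReview({ hate: 0.188, violence: 0.004 })); // true
console.log(needsHumanReview({ hate: 0.0001, violence: 0.004 })); // false
```

Lowering the threshold trades more moderator workload for fewer missed cases; where to set it depends on your content and audience.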

Using AI to reduce the load on humans, without completely removing them, would be a sensible use of current capabilities. AI, just like humans, still has flaws, but we can utilize it to assist us with simple tasks.

Written by Joran Quinten

Dad with ♡ for web, tech, science, tinkering with stuff & photography. Works as an Interaction Developer. Tweets stuff @joranquinten & writes every now and then
