Detect and Blur Human Faces on Your Website Using JavaScript

AI that can detect and blur human faces can play a key role in content moderation for your website in this digital era.

Hrishikesh Pathak
Better Programming

--

Detect and blur human faces using Pixlab demo

Content moderation is essential for a website. If you are developing a website where users can upload images, you have to be extra cautious. You can’t trust your users: if they upload objectionable content, you, as the creator of the site, can end up being the one who suffers for it.

Every modern web 2.0 application has a content moderation system in place. Popular websites like Facebook, Instagram, and Twitter use both automatic and manual content moderation.

But for a small team or an individual developer, manual content moderation is neither feasible nor economical. Instead, we can use artificial intelligence (AI) based automation to detect objectionable content and blur it out.

Building AI-based automation is not easy. You would have to hire talented developers and gather a lot of data to train your model. But we can take a shortcut here: there are many software-as-a-service (SaaS) platforms on the market that can help us with this.

Pixlab is a SaaS platform that exposes its state-of-the-art AI models through a user-friendly application programming interface (API). You can easily integrate those services into your app. Check out this link for more information.

Agenda

In this article, we are going to build a web app using the Pixlab API. I will use vanilla JavaScript.

You can apply the same logic in any framework, such as React, Vue, or Angular. As we use the Pixlab API heavily, make sure to obtain an API key to follow along.

In the web app, we first take an image input from the user. Then we detect the human faces present in the image. Finally, we blur those faces and render the resulting image in our app.

We will use the facedetect API endpoint to detect human faces in the image. Then we make another request to the mogrify endpoint, passing the face coordinates we received from the facedetect API, to blur the image.

The final version of our project will look like this.

Live video demo

As we are building a web app, we can’t make requests directly to the Pixlab servers due to CORS restrictions, which exist to protect users. You can learn more about CORS here. Therefore, we build a proxy server using Node.js and enable CORS on it. Then we make all requests from our front end to that proxy server, which routes them to the Pixlab APIs, bypassing the CORS restrictions.

Enough talk, let’s build our web application.

Project setup

Before diving deep into the tutorial, let’s scaffold our project. We need to build both the front end and the back end (as a proxy server). Therefore, make two directories named frontend and backend inside your project root.

Inside your frontend directory, make three files named index.html, style.css, and index.js. Install the Live Server extension in VS Code to serve these static files.

Inside your backend directory, initialize an npm project by running these commands.

cd backend
npm init -y

As we are building an Express.js app as our proxy server, let’s install all the dependencies in one go.

npm install axios cors dotenv express express-fileupload form-data
npm install --save-dev nodemon

Now replace the scripts section of your package.json file with these two commands.

"scripts": {
"dev": "nodemon server.js",
"start": "node server.js"
},

Now create a server.js file inside your backend directory.

After all this setup, the project structure will look something like this.

.
├── backend
│   ├── package.json
│   ├── package-lock.json
│   └── server.js
└── frontend
    ├── index.html
    ├── index.js
    └── style.css

Let’s quickly go over what each of these npm packages does in our project.

  1. axios: Axios is a popular HTTP client in the Node.js world. It makes writing complex requests straightforward.
  2. cors: The cors library adds CORS headers to the responses our server sends. You can also customize the CORS policy extensively with this package.
  3. dotenv: This package lets us define and use environment variables in our Node.js project. We need it to hide the API key and other secrets that we don’t want to push to GitHub.
  4. express: This library needs no introduction. It is a very popular server framework with middleware support in the Node.js world.
  5. express-fileupload: This library works as middleware and gives us access to the files uploaded from the client.
  6. form-data: This package provides the browser’s FormData object in the Node.js environment. I use it to make multipart/form-data requests to the Pixlab API.
  7. nodemon: This is a development dependency that automatically restarts the server whenever your JavaScript files change.

Let’s build our proxy Node.js server

As I mentioned earlier, the browser’s CORS policy prevents us from calling the Pixlab API directly from our front-end app. Therefore, we will build a Node.js server that proxies our requests to the Pixlab API.

In this section, I use the terms client and front end interchangeably. Please keep this in mind.
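Before wiring up the individual routes, here is a minimal sketch of the server.js scaffold, using the packages we installed earlier. The port number and the PIXLAB_API_KEY environment variable name are my own choices; adjust them to your setup.

// server.js: a minimal scaffold for our proxy server
const express = require("express");
const cors = require("cors");
const fileUpload = require("express-fileupload");
require("dotenv").config(); // loads PIXLAB_API_KEY from a .env file

const app = express();

app.use(cors());         // allow requests from our front end
app.use(express.json()); // parse JSON request bodies
app.use(fileUpload());   // expose uploaded files on req.files

// The /upload, /facedetect, and /mogrify routes described below go here.

const PORT = process.env.PORT || 5000;
app.listen(PORT, () => console.log(`Proxy server listening on port ${PORT}`));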

Proxy the user-uploaded image to the Pixlab API

Proxying the uploaded image is the trickiest part of our project. Pixlab prefers to receive an online image link when processing a request. To upload our local image to a storage bucket, Pixlab provides a developer-friendly API called store.

This API accepts a POST request. The body of the request should be multipart/form-data containing the user-uploaded image and the API key. If the request is successful, the API uploads the image to an online storage bucket and returns a link to it.

In our proxy server, we take the user’s file input on the /upload route. We access the uploaded image using the express-fileupload package. After adding this package as middleware, we can access the uploaded file through the req.files object.

Then we construct the multipart/form-data request using the form-data package mentioned earlier and append the uploaded image and the API key to it. You can use the dotenv package here to hide your API key and access it as an environment variable.

After constructing the multipart/form-data body, we submit the request to the Pixlab API. Whatever response we get, if its status is 200, we pipe it back to the client.

The code of our /upload path looks like this.
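Here is a minimal sketch of that route. The store endpoint URL and the form field names (file, key) follow the Pixlab documentation as I understand it, and the PIXLAB_API_KEY variable name is my own choice, so double-check these against the official docs.

// At the top of server.js, alongside the other requires:
const axios = require("axios");
const FormData = require("form-data");

app.post("/upload", async (req, res) => {
  try {
    // express-fileupload puts the uploaded file on req.files;
    // "image" is the field name the front end uses when appending the file.
    const image = req.files.image;

    // Build the multipart/form-data body with the image and the API key.
    const form = new FormData();
    form.append("file", image.data, image.name);
    form.append("key", process.env.PIXLAB_API_KEY);

    // Forward the upload to Pixlab's store endpoint.
    const response = await axios.post("https://api.pixlab.io/store", form, {
      headers: form.getHeaders(),
    });

    // Pass Pixlab's response (which contains the hosted image link) to the client.
    res.status(response.status).json(response.data);
  } catch (err) {
    res.status(500).json({ error: err.message });
  }
});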

If this request succeeds, we get back a link to the user-uploaded image. We keep this link to use in the facedetect and mogrify API requests.

Proxy facedetect API (Face Detection)

Now let’s build the facedetect API proxy using Node.js. We use the /facedetect path to proxy this API. To read the JSON data sent by the client, we use the express.json() middleware in our server.

First, we grab the image URL sent by the client (the one we got from the previous request) and make a GET request to the Pixlab API with this image URL and our Pixlab API key. Then we simply forward the response to the client.

The code for /facedetect path looks like this.
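A minimal sketch of this route could look like the following. The query parameter names (img, key) are based on the Pixlab facedetect documentation as I understand it; verify them before use.

app.post("/facedetect", async (req, res) => {
  try {
    // The client sends the hosted image link as JSON, e.g. { "imageUrl": "..." }.
    const { imageUrl } = req.body;

    // Ask Pixlab to detect faces in the hosted image.
    const response = await axios.get("https://api.pixlab.io/facedetect", {
      params: {
        img: imageUrl,
        key: process.env.PIXLAB_API_KEY,
      },
    });

    // Forward the list of face coordinates to the client.
    res.status(response.status).json(response.data);
  } catch (err) {
    res.status(500).json({ error: err.message });
  }
});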

After a successful request, we get a list of face coordinates from the server and send these coordinates to the client. We need them in the mogrify API call to blur people’s faces.

Proxy mogrify API (Face Blur)

We use the /mogrify path of our server to call Pixlab’s mogrify API. The client provides the image URL and the face coordinates obtained from the two previous requests. After parsing the client-provided data, we make a POST request to the Pixlab mogrify API.

The code inside /mogrify looks like this.
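Here is a minimal sketch of this route. The request body fields (img, key, cord) follow the Pixlab mogrify documentation as I understand it; confirm the exact shape of the coordinates payload in the official docs.

app.post("/mogrify", async (req, res) => {
  try {
    // The client sends the hosted image link and the detected face coordinates.
    const { imageUrl, faces } = req.body;

    // Ask Pixlab to blur the regions given by the face coordinates.
    const response = await axios.post("https://api.pixlab.io/mogrify", {
      img: imageUrl,
      key: process.env.PIXLAB_API_KEY,
      cord: faces,
    });

    // Forward the link to the blurred image to the client.
    res.status(response.status).json(response.data);
  } catch (err) {
    res.status(500).json({ error: err.message });
  }
});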

After a successful request, it returns a version of the previously uploaded image with the faces blurred.

Then we pass the new blurred image link to the client. Now the client can use this link to display the image.

Building the front-end part

An intuitive front end is essential from the user’s perspective. In this section, we build the front-end part of our application. For the sake of simplicity, I keep the front end as minimal as possible.

Get user file input

First, populate your index.html file with the bare minimum HTML markup. For reference, this is my starting template for this project.
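Something along these lines is enough as a starting point (the title and heading text are placeholders you can change freely):

<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="UTF-8" />
    <meta name="viewport" content="width=device-width, initial-scale=1.0" />
    <title>Detect and Blur Faces</title>
    <link rel="stylesheet" href="style.css" />
  </head>
  <body>
    <h1>Detect and blur human faces</h1>
    <!-- The file input, image tags, and button will go here. -->
    <script src="index.js"></script>
  </body>
</html>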

In the above HTML code, we link our CSS and JavaScript files to the HTML and build the bare-bones structure of our website.

Now, to take a file input from the user, we add an input tag to our HTML file. Make sure to add the accept attribute so that only JPG and PNG images are accepted.

Now add two image tags to your HTML markup: one for showing the user-uploaded image and another for rendering the processed, blur-faced image from the Pixlab API server.

Finally, add a button to invoke the image processing.
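Taken together, the body of the page might contain markup like this. The element IDs are my own naming and simply need to match the ones referenced in index.js below.

<!-- File picker: only accept JPG and PNG images. -->
<input type="file" id="imageInput" accept="image/jpeg, image/png" />

<!-- The original image picked by the user. -->
<img id="image" alt="Original image" />

<!-- The processed image with blurred faces. -->
<img id="finalImage" alt="Image with blurred faces" />

<!-- Starts the upload, detect, and blur sequence. -->
<button id="processBtn">Blur faces</button>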

Make our front-end interactive

Inside the index.js file, we first grab all the DOM nodes we need: the file input for the image (imageInput), the two image tags that display the initial (image) and final (finalImage) results, and the button (processBtn) that starts the process.

When the user picks a new image with the file picker, we read it as a data URL and render it in the initial image tag.
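A minimal sketch of this part of index.js, assuming the element IDs from the markup above, could look like this:

// Grab the DOM nodes we need. The IDs match the markup above.
const imageInput = document.getElementById("imageInput");
const image = document.getElementById("image");
const finalImage = document.getElementById("finalImage");
const processBtn = document.getElementById("processBtn");

// Preview the picked image by reading it as a data URL.
imageInput.addEventListener("change", () => {
  const file = imageInput.files[0];
  if (!file) return;

  const reader = new FileReader();
  reader.onload = () => {
    image.src = reader.result;
  };
  reader.readAsDataURL(file);
});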

Now that we have the user-picked image in hand, it is time to send requests to our proxy server and start the image processing.

Image processing using Pixlab API

In this process, we make a total of three requests to the server every time the user uploads an image. These requests depend on one another, so we have to make them strictly in order.

  1. Uploading the image to the remote server: To upload the image to the proxy server, we make a POST request to the /upload route with the user-picked image. We write a helper function for each of these steps to make the process easier (see the sketch after this list).

  2. Calling the face detection API: Using the remote image link we got from the previous request, we call the face detection API by making a POST request to the /facedetect route of our proxy server.

  3. Blur the detected faces: From the previous query, we get the coordinates of the faces in the image we uploaded. Now we call the /mogrify proxy route to blur the image, again making a POST request with the face coordinates and the image link.

In return, this query gives us the link to the blurred image. We will use this URL to display the image to our users.
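Here is a minimal sketch of the three helper functions, assuming the proxy server runs on localhost port 5000 and using fetch for the requests. The function names are my own.

// Base URL of our proxy server; adjust the port if yours differs.
const SERVER_URL = "http://localhost:5000";

// 1. Upload the picked image to the proxy, which stores it via Pixlab
//    and returns a link to the hosted copy.
async function uploadImage(file) {
  const formData = new FormData();
  formData.append("image", file); // field name must match req.files.image on the server

  const res = await fetch(`${SERVER_URL}/upload`, {
    method: "POST",
    body: formData,
  });
  return res.json();
}

// 2. Ask the proxy to run face detection on the hosted image.
async function detectFaces(imageUrl) {
  const res = await fetch(`${SERVER_URL}/facedetect`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ imageUrl }),
  });
  return res.json();
}

// 3. Ask the proxy to blur the detected faces.
async function blurFaces(imageUrl, faces) {
  const res = await fetch(`${SERVER_URL}/mogrify`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ imageUrl, faces }),
  });
  return res.json();
}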

The button that manages all these functions

All of these steps are managed by the process button we defined earlier. It makes the requests one by one to each endpoint and passes the required values from one function to the next. The process button is the manager of our front end.
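A sketch of the click handler that ties the helpers together could look like this. The response field names (link, faces) depend on what the Pixlab APIs actually return, so adapt them to the responses you receive.

// Run the three steps in order when the user clicks the process button.
processBtn.addEventListener("click", async () => {
  const file = imageInput.files[0];
  if (!file) return;

  // 1. Upload the image and get its hosted link.
  const uploadResult = await uploadImage(file);
  const imageUrl = uploadResult.link; // field name depends on the store API response

  // 2. Detect the faces in the hosted image.
  const detectResult = await detectFaces(imageUrl);
  const faces = detectResult.faces; // field name depends on the facedetect response

  // 3. Blur the faces and display the result.
  const blurResult = await blurFaces(imageUrl, faces);
  finalImage.src = blurResult.link; // field name depends on the mogrify response
});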

Bonus

If you have read this far, here is the GitHub project link for you. I have made a couple of changes here and there to make the web app look nicer. You can also check out the CSS, which I haven’t included in this article.

If you have read this article all the way through, I am very glad to have produced content that people want to read.

Do you have any queries? I am available on Twitter as @hrishikshpathak. Make your version of this web app and show me on Twitter. Till then bye.
