How I Use My Terminal as a Webcam

The TE-WE project

Prakhar Kaushik
Better Programming


First, a demo

Te-We is a project that lets you use your webcam entirely from the terminal. We will build a Python-powered terminal stream from the webcam feed.

One day I went looking for a webcam application on my system and couldn't find one. I run Debian Linux and had installed every piece of software manually, so I had never downloaded anything for the webcam.

So I thought, “why not make something for this that works in a terminal?” I came across an image-to-ASCII project and thought, “why not combine a video stream with something similar?” That is how TE-WE came to be.

This whole thing is ASCII art of a video stream from the webcam.

If you find it interesting, you can watch the full unscripted programming video below.

So, getting back to the code, let's start.

Step 1

Create an ASCII image using an actual image.

Step 2

Replace the source image with frames from a video stream and print them one over the other to get the effect of continuous video.

Done!

Let's discuss everything in detail. Our first aim will be to convert the following image:

Deadpool (Image source: walpaperlist)

to an ASCII image like the one below.

Deadpool (Image source: Author’s terminal)

Once that's done, all that's left is to use a video stream as the source of frames for the program.

Let's see the code first.
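The original code was embedded as a gist; what follows is a minimal sketch of that first step, assuming Pillow (PIL) is installed. The function name image_to_ascii, the character ramp, and the output width are my own placeholders, not the original's.

# A minimal sketch of the image-to-ASCII step (not the original gist).
# Assumes Pillow is installed; the character ramp and sizes are arbitrary choices.
from PIL import Image

# Darker pixels map to denser characters, lighter pixels to sparser ones.
ASCII_CHARS = "@%#*+=-:. "

def image_to_ascii(img, width=120):
    # Shrink the image; halve the height to compensate for tall terminal cells.
    w, h = img.size
    height = int(h * width / w * 0.5)
    img = img.resize((width, height))

    rgb = img.convert("RGB")
    # Save the RGB value of every pixel *before* the grayscale conversion,
    # so we can colorize the characters later.
    colors = list(rgb.getdata())

    gray = rgb.convert("L")  # intensity (0-255) per pixel
    pixels = list(gray.getdata())

    chars = [ASCII_CHARS[p * len(ASCII_CHARS) // 256] for p in pixels]
    rows = ["".join(chars[i:i + width]) for i in range(0, len(chars), width)]
    return "\n".join(rows), colors

if __name__ == "__main__":
    art, colors = image_to_ascii(Image.open("deadpool.jpg"))
    print(art)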

Let me explain everything here. For now, ignore the part that records the RGB color of each pixel; we will come back to it. The rest is simple: convert the whole RGB image to a black-and-white image so that we can get the intensity of each pixel.

Then, based on the intensity of each pixel, we will find a suitable character for each pixel and create an image out of it.

But the result is still black and white, so the next step is to find the approximate ANSI color for each pixel so that the image can be printed in color.

Let's look at the code to convert RGB to ANSI. It was adapted from the torrycrass/image-to-ansi library.
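Here is a simplified version of the idea (not the exact code from that repository): xterm's 256-color palette contains a 6×6×6 color cube (codes 16–231) plus a 24-step grayscale ramp (codes 232–255), so each RGB channel can be snapped to its nearest level.

# A simplified RGB-to-ANSI-256 approximation (not the exact code from
# torrycrass/image-to-ansi): snap each channel to xterm's 6x6x6 color cube.
def rgb_to_ansi256(r, g, b):
    # Use the 24-step grayscale ramp (codes 232-255) for near-gray pixels.
    if abs(r - g) < 10 and abs(g - b) < 10:
        if r < 8:
            return 16          # black
        if r > 248:
            return 231         # white
        return 232 + (r - 8) * 24 // 247
    # Otherwise map each channel to one of 6 levels in the color cube (16-231).
    return 16 + 36 * (r * 6 // 256) + 6 * (g * 6 // 256) + (b * 6 // 256)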

With this, we can get an ANSI color code from an RGB value; now we just need to do that for every pixel.

And here's where the part we skipped earlier comes into action: we read each pixel's RGB value and save its color separately before converting the image to black and white.
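To tie it together, here is a rough sketch of how the saved colors could be applied when printing, reusing the hypothetical helpers from the earlier sketches. (In the article's version the ANSI code is saved at that earlier step; in this sketch the raw RGB was saved and is converted at print time.)

# Sketch of how the saved per-pixel colors could be applied when printing
# (helper names are mine, carried over from the earlier sketches).
def print_colored(art, colors):
    out = []
    i = 0
    for ch in art:
        if ch == "\n":
            out.append("\033[0m\n")    # reset the color at the end of each row
            continue
        r, g, b = colors[i]
        out.append(f"\033[38;5;{rgb_to_ansi256(r, g, b)}m{ch}")
        i += 1
    print("".join(out) + "\033[0m")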

And with that, the first part is done.

Step 2

Adding a video stream is simple, and examples are easy to find: we just use OpenCV to grab frames from the webcam and print each one over the last.
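As a rough sketch, assuming OpenCV (cv2) is installed and reusing the helpers defined above, the loop could look like this:

# A rough sketch of the video loop, assuming OpenCV (cv2) and the helpers above.
import cv2
from PIL import Image

cap = cv2.VideoCapture(0)          # default webcam
try:
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # OpenCV gives BGR; convert to RGB before handing the frame to Pillow.
        frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
        art, colors = image_to_ascii(Image.fromarray(frame), width=100)
        # Move the cursor to the top-left so each frame overwrites the last.
        print("\033[H", end="")
        print_colored(art, colors)
finally:
    cap.release()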

So here we are, done.

Feedback

Do leave feedback on the project. If you want to check out the source code, it’s on GitHub at pr4k/Te-We.
