Python Wallpaper Engine

A fresh new look every day with Python

Sam Berry
Better Programming

--

In this project, you’ll learn some new web scraping tricks by making an app that automatically finds a desktop wallpaper and applies it to your PC, using Python.

So, what will we need?

A Source of Images

For our image source, we’ll need a website with a collection of high-quality images at a suitable resolution. It shouldn’t use JavaScript to load the images, because then we wouldn’t be able to use Beautiful Soup 4 (the go-to Python library for efficient web scraping) and would have to resort to a library that is effectively an automated browser (e.g. Selenium or MechanicalSoup). Those have their uses but tend to be slow and inefficient for what we want to do.

Now the problem is that pretty much all wallpaper websites like Unsplash and Pexels load the images with JS, so we will need to resort to a different method.

Reddit is known to be very easy to scrape, and after having a look, I found several subreddits with cool pictures and wallpapers. The best one I found was r/wallpaper, which had lots of content. So this will be our source.

A Way To Download These Images

Programmatically getting posts from Reddit is easy. In fact, you don’t need a web scraping library at all.

At the end of each subreddit URL, you can append /new.json, /hot.json, or /top.json to get a JSON file with the data on all the recent posts in that category.

More on JSON

JSON stands for JavaScript Object Notation, and it’s basically a way of saving/loading a dictionary to/from a file. A lot of APIs use it to return data, and it can be parsed easily in Python.
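For example, here is a quick round trip with Python’s built-in json module (the dictionary is made up purely for illustration):

```python
import json

# A Python dictionary serialised to a JSON string and parsed back again
post = {"title": "Foggy forest", "ups": 1024}
text = json.dumps(post)       # a plain string, safe to send or save
restored = json.loads(text)   # back to a dictionary
```

After the round trip, restored is equal to the original dictionary.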

Try loading up this URL: https://www.reddit.com/r/wallpaper/hot.json.

As you can see, it takes the structure:

{"kind": "X", "data": {"modhash": "X", "dist": X, "children": [{}, {}, {}], ...}}

We are interested in that last part: "children".

“children” is an array of dictionaries where each dictionary contains the data of a post (title, author, contents, etc.).

Let’s have a try at parsing this in Python.

First, import the requests library (a third-party package — install it with pip install requests) so that we can get the JSON file from the URL:

import requests

Now use the HTTP GET method that requests data from a specific resource to get the JSON data:

textData = requests.get("https://www.reddit.com/r/wallpaper/hot.json").content

Try printing the response from the request. In my case, I was returned {"message": "Too Many Requests", "error": 429}.

I haven’t made too many requests, but the server knows that this isn’t a genuine request from a browser or a genuine Reddit app.

To fix this, we need to supply a user agent in the headers. This site shows the latest user agents, so I’ll include that in my request’s headers:

myHeaders={"User-Agent":"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/90.0.4430.85 Safari/537.36"}textData = requests.get("https://www.reddit.com/r/wallpaper/hot.json",headers=myHeaders).content

After printing the string again, I was returned the large JSON file that we want.

Now that we have the JSON data in text, we’ll need to parse it, so import the built-in JSON library:

import json

To convert the text data into a dictionary, we’ll use the loads method. You can read more about various JSON methods on W3Schools.

jsonData = json.loads(textData)

Following the structure I pointed out earlier, we can find the children (the posts) by accessing nested keys in the newly constructed dictionary. We will access the children like so: jsonData -> "data" -> "children".

posts = jsonData["data"]["children"]

You can now get the image URL of any post. Let’s try with the first post:

posts[0]["data"]["url"]

This gave me the following URL: https://i.redd.it/24vbhq06y6v61.jpg, which is a full-res image. We need to find a way to save this to the disk.

We can perform another HTTP GET request to the image’s URL, which gives us the raw bytes of the image, and then write those bytes to a file opened in binary mode:

imageContents = requests.get(posts[0]["data"]["url"], headers=myHeaders).content
with open("wallpaper.jpg", "wb") as imageFile:
    imageFile.write(imageContents)

This successfully saves it to disk. Cool. But we want it to be random, so let’s change that 0 index to a random number between 0 and the length of the posts list:
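Using the built-in random module, that might look like this (the posts list here is a placeholder standing in for the one fetched from Reddit above):

```python
import random

# Placeholder standing in for the posts list fetched from Reddit earlier
posts = [{"data": {"url": f"https://i.redd.it/example{n}.jpg"}} for n in range(25)]

# randrange gives a random index from 0 up to (but not including) len(posts)
index = random.randrange(len(posts))
imageUrl = posts[index]["data"]["url"]
```

Every run then downloads a different post’s image instead of always the first one.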

Now to actually set it as your wallpaper.

According to this Stack Overflow thread, on Windows, you can use the ctypes module. For me, this code worked:

import ctypes
import os

SPI_SETDESKWALLPAPER = 20
# SystemParametersInfoW needs an absolute path; %USERPROFILE% would not be expanded
ctypes.windll.user32.SystemParametersInfoW(SPI_SETDESKWALLPAPER, 0, os.path.abspath("wallpaper.jpg"), 3)

If this doesn’t work for you or you’re on Linux/Unix, have a look through that thread and try some different things.

Now we’ll compile all that code together and make it repeat after a fixed time interval:
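Here is one minimal sketch of the combined script, using a 30-minute interval and the Windows-only ctypes call from above (the helper function names are my own choices, not part of any API):

```python
import ctypes
import json
import os
import random
import time

import requests  # third-party: pip install requests

SPI_SETDESKWALLPAPER = 20
HEADERS = {"User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64) "
           "AppleWebKit/537.36 (KHTML, like Gecko) Chrome/90.0.4430.85 Safari/537.36"}


def get_random_image_url():
    # Fetch the hot posts of r/wallpaper and return one random post's image URL
    textData = requests.get("https://www.reddit.com/r/wallpaper/hot.json",
                            headers=HEADERS).content
    posts = json.loads(textData)["data"]["children"]
    return posts[random.randrange(len(posts))]["data"]["url"]


def set_wallpaper(path):
    # Windows-only: apply the image at `path` as the desktop wallpaper
    ctypes.windll.user32.SystemParametersInfoW(
        SPI_SETDESKWALLPAPER, 0, os.path.abspath(path), 3)


def main():
    while True:
        # Download a random wallpaper, save it to disk, and apply it
        imageContents = requests.get(get_random_image_url(), headers=HEADERS).content
        with open("wallpaper.jpg", "wb") as imageFile:
            imageFile.write(imageContents)
        set_wallpaper("wallpaper.jpg")
        time.sleep(30 * 60)  # wait 30 minutes before changing again


if __name__ == "__main__":
    main()
```

The __main__ guard keeps the loop from running if the file is ever imported rather than launched directly.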

Lastly, you might want it to run at startup. If so, you will also want it to be windowless, so let’s look into that.

  • Save the Python file as .pyw rather than .py. Windows then runs it with pythonw.exe, which creates no console window.
  • Right-click on the .pyw file and click “Create shortcut.”
  • Press Windows key + R and type shell:startup.
  • Drag the new shortcut into the Explorer window that opens.

And you’re done. The script will run every time you start your PC and will randomly change your wallpaper every 30 minutes.
