So that is the video I wanted to end up with. Here I detail how to get to that point. In my case, I use a website that can show me a heatmap of my running at various points in time. I wanted to make a video of 2021, showing the progress per month in Brussels, Belgium. I also wanted some simple, repeatable steps to get there. The map view takes up a certain part of my browser window; that is the part I wanted to turn into this video.

Getting there requires three steps:

  • Acquiring pictures: this is going to depend on what you want to see. All pictures should be the same size, and the region you are interested in should be in the same place in each.
  • Cropping pictures: keep only the relevant part of the acquired pictures.
  • Making a video: use the cropped pictures and combine them into a video file.

Acquire Pictures

Note: this section is pretty specific to what I needed; you will probably have to adjust it to your situation.

Open what you want to make a timelapse video of and position it how you want. You will change what is rendered between each screenshot, but you do not want the area to move around your screen. In my case, I open the website showing my heatmap, centre the map on the area I want, get the zoom right, and then do not move the map or the browser window any more for the remainder of this step.

With your area set up, have the different renders pop up there. In my case, I change the map’s time filter. This shows different lines (representing GPS traces of my runs) on the map, but the zoom and position do not change. I set it to 31 December 2020 and take a fullscreen screenshot (cropping will happen later). Change the time filter, take another screenshot, repeat till you have enough pictures.
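
If you want to script the screenshots themselves, a command line capture tool can do it. A minimal sketch, assuming a Linux desktop with scrot installed (macOS has screencapture instead; adjust to your environment):

# wait 5 seconds so you can focus the browser, then capture the full screen
scrot -d 5 "heatmap.$(date +%Y%m%d%H%M%S).png"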

Crop Pictures

The situation now: you have multiple pictures of the same size. Your area of interest is the same for every picture. In my example, I have several screenshots of my entire screen. The map I am interested in covers the same area in every screenshot.

To do this consistently, we can use the convert command line tool provided by ImageMagick. The first step is finding out what dimensions we want to crop out of our pictures. This is a bit of trial and error: fill in some numbers and adjust them until the cropped result shows what you want. You can use whatever output file name you like, just keep in mind that convert will overwrite existing files without asking.

convert inputfile.png -crop '1600x1500+750+300' outputfile.cropped.png

Here the numbers are the ones you want to play with. The format is WIDTHxHEIGHT+XOFFSET+YOFFSET, so the example above keeps a 1600 by 1500 pixel region starting 750 pixels from the left and 300 pixels from the top.

Alternatively, you could look in an image viewer to decide on the pixel measurements you want.
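
If you prefer exact numbers over pure trial and error, ImageMagick's identify can print the size of a screenshot, which helps when working out the width, height, and offsets:

identify -format '%wx%h\n' inputfile.png
# prints the image's width and height, e.g. 3840x2160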

Once you have settled on your cropping values, you can use them to crop every one of the pictures. Either do this manually one by one, or use the following script.

#!/bin/bash

# Crop every PNG in the current folder to the same region.
# Adjust the geometry (WIDTHxHEIGHT+XOFFSET+YOFFSET) to the values you settled on.
for file in *.png; do
  convert "$file" -crop '1600x1500+750+300' "${file/.png/.cropped.png}"
done

You now have a bunch of pictures, all cropped to the relevant area. You can move them to a different folder to make the next step easier.
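
For example, to move the cropped pictures into their own folder (the folder name is just a suggestion):

mkdir -p cropped
mv *.cropped.png cropped/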

Combine Pictures to a Video

The situation now: you have a folder with pictures of the same size. You want to throw them together into a video file. The frames should appear in the same order as the file names do in ls or a glob pattern.
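
If you want to double check that order, listing the files the way the glob would expand them shows exactly what ffmpeg will see (the file names here are just examples):

ls -1 *.cropped.png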

For this step, we use ffmpeg. It is a really powerful tool; the only difficulty is finding the right incantation of options to make it do what you want. In our case, the following seems to suffice (a fully filled-in example follows the flag descriptions below):

ffmpeg -f image2 -pattern_type glob -framerate 1 -i 'AGLOBMATCHINGYOURFILES' -s WIDTHxHEIGHT -pix_fmt yuv420p video.mp4
  • -f image2 sets the input format.
  • -pattern_type glob is so we can use glob matching, which I found the easiest approach. If your images are named pic001.png, pic002.png, … then you can also skip glob matching and use ffmpeg’s built-in sequence syntax.
  • -framerate 1 is a simple way to adjust the speed of the video. 1 means 1 frame per second; higher numbers mean more frames per second.
  • -i 'GLOB' specifies the input files. E.g. for bxl.202012.png, bxl.202101.png, bxl.202102.png, … I used -i 'bxl.*'
  • -s WIDTHxHEIGHT specifies the width and height of your video. I think this might not actually be required (it can get that from the image size, surely), but I have not tested that yet. Just fill in the same width and height you used for cropping.
  • -pix_fmt yuv420p: the generated video file was not playing in Firefox or Chrome. A helpful SO answer told me to use this since the default (yuv444p) does not seem to be compatible with current browsers.
  • -vcodec libx264: I did not end up using this flag since ffmpeg seemed to default to that codec anyway. libx265 would be another option, but Firefox said it was not currently supported.
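
Put together with the example values used earlier in this post (the bxl.* file names, the 1600x1500 crop size, and 1 frame per second), the full command would look something like this:

ffmpeg -f image2 -pattern_type glob -framerate 1 -i 'bxl.*' -s 1600x1500 -pix_fmt yuv420p video.mp4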

In my case, I wanted to leave the first and the last frame showing for a longer time. I am sure there is some ffmpeg way to go about that, but I just added some copies of my first and final file before running the ffmpeg command.
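
A rough sketch of that trick, assuming the cropped files are named like bxl.202012.cropped.png through bxl.202112.cropped.png (the exact names do not matter, the copies just need to sort before the first and after the last file in the glob):

# extra copies of the first frame, named so they sort before every real frame
for i in 1 2 3; do cp bxl.202012.cropped.png "bxl.000000.$i.cropped.png"; done
# extra copies of the last frame, named so they sort after every real frame
for i in 1 2 3; do cp bxl.202112.cropped.png "bxl.999999.$i.cropped.png"; done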

Either way, you are left with a video.mp4 file that shows what we wanted. You can play some more with ffmpeg if you want different video files, but this was good enough for me.


Possible improvement: automatically adding some text on each image. In my case that would be the date for that image. It turns out convert can also do that. To make it fully automatic, you would probably want to parse the date out of the image name or something similar.
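
A hedged sketch of what that could look like, assuming cropped files named like bxl.202101.cropped.png (the names, font size, and position are just examples):

#!/bin/bash

# Stamp each cropped picture with the date parsed from its file name,
# e.g. bxl.202101.cropped.png gets "2021-01" in the bottom-right corner.
for file in bxl.*.cropped.png; do
  date_part=$(echo "$file" | cut -d. -f2)      # e.g. 202101
  label="${date_part:0:4}-${date_part:4:2}"    # e.g. 2021-01
  convert "$file" -gravity southeast -pointsize 48 -fill white \
    -annotate +20+20 "$label" "${file/.cropped/.labeled}"
done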