OpenStreetCam fisheye removal

I record a lot of my cycling routes with a GoPro Hero 5 Black mounted to my bike.
After recording I upload the images (1 image/sec) to OpenStreetCam.
But the face and license-plate detection cannot cope with the fisheye distortion in the GoPro images, so I have to remove it before uploading.

This is the script I use:

#!/bin/bash

MOGRIFY=$(command -v mogrify)
if [ -z "$MOGRIFY" ]; then
  echo "you need to install imagemagick"
  exit 1
fi

if [ "$#" -ne 1 ]; then
  echo "Add folder to process as argument."
  exit 1
fi

source_folder=$1
path=$(realpath "$1")
dest_folder=$(dirname "$path")/z_fisheye/$(basename "$1")
source_count=$(find "$source_folder" -name '*.JPG' | wc -l)
echo "number of files to process: ${source_count}"

if [ -f "${source_folder}/000_fisheye.done" ]; then
  echo "${source_folder} already processed!"
  exit
fi

mkdir -p "$dest_folder"
(
  cd "$source_folder" || exit 1
  for photo in *.JPG; do
    if [ -f "${dest_folder}/${photo}" ]; then
      echo -n "."
    else
      "$MOGRIFY" -path "${dest_folder}" -distort barrel "0 0 -0.2" "${photo}"
      echo -n "."
    fi
  done | pv -pte -i0.1 -s${source_count} > /dev/null
)

# check if all images from source folder are in dest_folder
dest_count=$(find "$dest_folder" -name '*.JPG' | wc -l)
if [ "$source_count" -eq "$dest_count" ]; then
  touch "${source_folder}/000_fisheye.done"
fi
After all files in a folder are processed, a file named 000_fisheye.done is created in the source folder.
This helps me find the folders I can upload to OpenStreetCam.
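The skip check inside the loop can also be run standalone as a dry run, to see which images in a folder still need processing. A minimal sketch (pending_images is a hypothetical helper name, not part of the script above):

```shell
#!/bin/bash
# List the .JPG files in a source folder that have no processed
# counterpart in the destination folder yet -- the same check the
# script's inner loop uses to decide whether to call mogrify.
pending_images() {
  local src=$1 dst=$2 photo
  for photo in "$src"/*.JPG; do
    [ -e "$photo" ] || continue            # glob matched nothing
    if [ ! -f "$dst/$(basename "$photo")" ]; then
      basename "$photo"
    fi
  done
}
```

For example, `pending_images ride z_fisheye/ride | wc -l` (hypothetical paths) prints how many images are still unprocessed.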

Progress bars in Bash

In a Bash script I needed a progress bar for a for loop.

The best solution was to use pv.

photo_count=$(find . -name '*.JPG' | wc -l)
for photo in *.JPG; do
    do_some_stuff "$photo"
    echo -n "."
done | pv -pte -i0.1 -s${photo_count} > /dev/null
The count is necessary to get a real progress bar and an ETA.
How it works: pv counts the characters written to it (here '.') until the size given with -s is reached.

The result looks like:

0:00:03 [==========>                              ] 30% ETA 0:00:10
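The mechanism can be demonstrated even without pv installed: because the loop emits exactly one byte per finished item, any byte counter on the pipe sees the loop's progress. A small self-contained sketch, with wc -c as a stand-in for pv:

```shell
#!/bin/bash
# One byte per finished item: whatever counts the bytes on the pipe
# (pv with -s, or wc -c here) effectively counts finished iterations.
total=10
emitted=$(for i in $(seq 1 "$total"); do
  : # real work would happen here
  echo -n "."
done | wc -c)
echo "emitted ${emitted} of ${total} progress bytes"
```

With pv in place of wc -c, the same byte count drives the percentage and the ETA.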

Run AllenNLP on AWS P2 with ArchLinux

DISCLAIMER

uplinklabs discontinued support for EC2, so this blog post will probably stop working in the future!

Quest: Run AllenNLP with GPU support on an AWS p2/p3 instance with ArchLinux.

Steps:

  1. Go to https://www.uplinklabs.net/projects/arch-linux-on-ec2/

  2. Start a new EC2 instance by clicking the "ebs hvm x86_64 lts" link for your preferred region.

  3. Choose p2.xlarge as the instance type (all other p2 and p3 types work the same)

  4. Configure the stuff you need (allow ssh, storage, ...)

  5. Launch instance (save the pem-certificate!)

  6. Get the IP address from the ec2 instances view

  7. Connect to server

ssh -i certificate.pem root@<IP-ADDRESS>

Yay! We have a shell on archlinux.

  8. First you may have to repopulate the archlinux-keyring

pacman-key --init
pacman-key --populate archlinux

  9. Next, update the already installed packages

pacman --noconfirm -Syu

  10. Install a few packages (we need nvidia-dkms and not nvidia because the kernel is a custom one, not the default archlinux one)

pacman --noconfirm -S git cudnn cuda python-virtualenvwrapper nvidia-dkms nvidia-utils htop tmux

  11. Reboot the server to load the new kernel with the nvidia module

reboot

  12. After the reboot, connect to the server again

ssh -i certificate.pem root@<IP-ADDRESS>

  13. Test if the nvidia module is loaded

nvidia-smi

  14. Add virtualenvwrapper to your shell

source /usr/bin/virtualenvwrapper.sh

  15. And create a virtualenv

mkvirtualenv testing

  16. Install allennlp

pip install allennlp

  17. And test if PyTorch/AllenNLP finds the GPU

python -c "import torch; print('cuda:', torch.cuda.is_available())"

  18. (Optional) run the AllenNLP test suite

allennlp test-install

Now you can use AllenNLP with GPU on AWS!
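For reference, the server-side steps above can be bundled into one shell function. This is only a sketch (provision_allennlp is a hypothetical name, not from the post), and because of the reboot in the middle you would run it in two halves on a real instance:

```shell
#!/bin/bash
# Sketch of the server-side setup steps as a single function.
# Assumption: run as root on a freshly launched Arch EC2 instance;
# the reboot splits it into two halves in practice.
provision_allennlp() {
  pacman-key --init
  pacman-key --populate archlinux
  pacman --noconfirm -Syu
  pacman --noconfirm -S git cudnn cuda python-virtualenvwrapper \
    nvidia-dkms nvidia-utils htop tmux
  # reboot here so the custom kernel loads the nvidia-dkms module,
  # then reconnect and continue with the remaining steps:
  source /usr/bin/virtualenvwrapper.sh
  mkvirtualenv testing
  pip install allennlp
  python -c "import torch; print('cuda:', torch.cuda.is_available())"
}
# provision_allennlp   # uncomment to run on the instance
```

The function is only defined, not called, so the file can be copied onto the instance and sourced before running it step by step.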