How to run PyTorch with GPU and CUDA 9.2 support on Google Colab


It has been a while since I wrote my first tutorial about running deep learning experiments on Google's GPU-enabled Jupyter notebook interface, Colab. Since then, several of my blog posts have walked through running Keras, TensorFlow, or Caffe on Colab with GPU acceleration.

  1. "How to run Object Detection and Segmentation on a Video Fast for Free" - My first tutorial on Colab, colab notebook direct link.
  2. "Quick guide to run TensorBoard in Google Colab", - Colab notebook direct link.
  3. Run caffe-cuda on Colab - Colab notebook direct link.

One framework missing from Colab's pre-installed packages is PyTorch. Recently, I have been checking out a video-to-video synthesis model that requires running on Linux, plus there are gigabytes of data and pre-trained models to download before I can take the shiny model for a spin. I was wondering, why not give Colab a try by leveraging its awesome download speed and freely available GPU?

Enjoy the Colab notebook link for this tutorial.

Let's start by installing CUDA on Colab.

Installing CUDA 9.2

Why not other CUDA versions? Here are three reasons.

  1. As of 9/7/2018, CUDA 9.2 is the highest version officially supported by PyTorch, as shown on its website pytorch.org.
  2. Some of you might worry that installing CUDA 9.2 could conflict with TensorFlow, since TF so far only supports up to CUDA 9.0. Relax: think of a Colab notebook as a sandbox. Even if you break it, it can be reset easily with a few button clicks, and in my testing TensorFlow still works just fine after installing CUDA 9.2.
  3. If you install CUDA version 9.0, you might come across an issue when compiling native CUDA extensions for PyTorch. Some sophisticated PyTorch projects contain custom C++/CUDA extensions for custom layers/operations, which run faster than their pure Python implementations. The downside is that you need to compile them from source for each platform (see the sketch after this list). In Colab's case, which runs on an Ubuntu Linux machine, the g++ compiler is used to compile the native CUDA extensions. CUDA version 9.0 has a bug when working with g++ to compile these extensions, which is why we pick CUDA version 9.2, where the bug is fixed.
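
To make this concrete, here is a minimal sketch of the kind of setup script such projects use to build a native CUDA extension with PyTorch's torch.utils.cpp_extension helpers. The extension name and source files below are hypothetical, not part of vid2vid.

# setup.py - minimal sketch of a custom CUDA extension build (hypothetical example)
from setuptools import setup
from torch.utils.cpp_extension import CUDAExtension, BuildExtension

setup(
    name='my_cuda_op',                       # hypothetical extension name
    ext_modules=[
        CUDAExtension('my_cuda_op', [
            'my_cuda_op.cpp',                # C++ bindings (hypothetical file)
            'my_cuda_op_kernel.cu',          # CUDA kernel (hypothetical file)
        ]),
    ],
    cmdclass={'build_ext': BuildExtension},  # drives nvcc and g++ under the hood
)

Building something like this is exactly where the CUDA/g++ compatibility matters, since nvcc hands the host-side code off to g++.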

Back to installing: the Nvidia developer site will ask for the Ubuntu version on which you want to run CUDA. To find out, run the cell below in a Colab notebook.

!cat /etc/*-release

It returns the information you want.

VERSION="17.10 (Artful Aardvark)"

After that, you will be able to navigate through the target platform selections. Choose the installer type "deb (local)", then right-click the "Download (1.2 GB)" button to copy the link address.

Back in the Colab notebook, paste the link after a wget command to download the file. A 1.2 GB file only takes about 10 seconds to download on Colab, which means there is no coffee break -_-.

!wget https://developer.nvidia.com/compute/cuda/9.2/Prod2/local_installers/cuda-repo-ubuntu1710-9-2-local_9.2.148-1_amd64

Run the following cell to complete the CUDA installation.

!dpkg -i cuda-repo-ubuntu1710-9-2-local_9.2.148-1_amd64
# You can use this line to find out the directory name
# !ls /var/ | grep cuda-repo
!apt-key add /var/cuda-repo-9-2-local/7fa2af80.pub
!apt-get update
!apt-get install cuda

If you see lines like these at the end of the output, the installation was successful.

Setting up cuda (9.2.148-1) ...
Processing triggers for libc-bin (2.26-0ubuntu2.1) ...
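
You can also double-check that the toolkit landed on the machine. The check below assumes the cuda package installed into the default /usr/local/cuda location (a symlink to /usr/local/cuda-9.2).

# Optional sanity check: list installed toolkits and print the nvcc version
!ls /usr/local | grep cuda
!/usr/local/cuda/bin/nvcc --version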

Now continue with PyTorch.

Install PyTorch

This part is very easy. Go to pytorch.org, where there is a selector for how you want to install PyTorch. In our case:

  • OS: Linux
  • Package Manager: pip
  • Python: 3.6, which you can verify by running !python --version in a Colab cell.
  • CUDA: 9.2

It will give you the line below to run, after which the installation is done!

pip3 install torch torchvision
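
As a quick sanity check, the cell below confirms that PyTorch can see the GPU. This assumes a GPU runtime is selected in Colab (Runtime > Change runtime type > GPU); otherwise torch.cuda.is_available() returns False and the device query will fail.

import torch

print(torch.__version__)               # PyTorch version installed by pip
print(torch.cuda.is_available())       # should print True on a GPU runtime
print(torch.cuda.get_device_name(0))   # name of the GPU Colab assigned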

Run vid2vid demo

Out of curiosity about how well PyTorch performs with the GPU enabled on Colab, let's try the recently published Video-to-Video Synthesis demo, a PyTorch implementation of NVIDIA's method for high-resolution photorealistic video-to-video translation. The demo video that turns poses into a dancing body looks enticing.

(Image: pose-to-dance synthesis example)

Besides, the demo depends on custom-built CUDA extensions, which gives us the chance to test out the installed CUDA toolkit.

The cell below does all the work, from getting the code to running the demo with the pre-trained model.

!git clone https://github.com/NVIDIA/vid2vid
# Change into the repo so the scripts below can be found
%cd vid2vid
!pip install dominate requests
# This step downloads and sets up several CUDA extensions
!python scripts/download_flownet2.py
# Download the pre-trained model (the smaller single-GPU model)
!python scripts/download_models_g1.py
# Run the demo
!python test.py --name label2city_1024_g1 --dataroot datasets/Cityscapes/test_A --loadSize 1024 --n_scales_spatial 3 --use_instance --fg --n_downsample_G 2 --use_single_G

The generated frames go to the directory results/label2city_1024_g1/test_latest/images, and you can display one of them by running the cell below.

from IPython.display import Image
Image(filename='results/label2city_1024_g1/test_latest/images/fake_B_stuttgart_00_000000_000003_leftImg8bit.jpg') 

That wraps up this tutorial.

Conclusion and further thoughts

This short post shows you how to get PyTorch running with a GPU and CUDA backend on Colab quickly and for free. Unfortunately, the authors of vid2vid haven't posted a testable edge-to-face or pose-to-dance demo yet, which I am eagerly waiting for. So far, the demo only serves to verify our PyTorch installation on Colab. Feel free to connect with me on social media, where I will keep you posted on my future projects and other practical deep learning applications.
