Using VS Code and Podman to Develop SYCL Applications With DPC++'s CUDA Backend

I recently wanted to create a development container for VS Code to develop applications using SYCL based on the CUDA backend of the oneAPI DPC++ (Data Parallel C++) compiler. As I’m running Fedora, it seemed natural to use Podman’s rootless containers instead of Docker for this. This turned out to be more challenging than expected, so I’m going to summarize my setup in this post. I’m using Fedora Linux 36 with Podman version 4.1.0.

Prerequisites

Since DPC++ is going to use CUDA behind the scenes, you will need an NVIDIA GPU and the corresponding kernel driver for it. I’ve been using the NVIDIA GPU driver from RPM Fusion. Note that you do not have to install CUDA on the host; it is part of the development container alongside the DPC++ compiler.
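
To confirm that the driver is loaded and working, you can run nvidia-smi on the host; it should list your GPU and the installed driver version.

nvidia-smi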

Next, you require Podman, which on Fedora can be installed by executing

sudo dnf install -y podman
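
To check that rootless Podman itself works, you can run a throwaway container as your regular, non-root user, for example:

podman run --rm registry.fedoraproject.org/fedora:36 echo "rootless Podman works"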

Finally, you require VS Code and the Remote - Containers extension. Just follow the instructions behind those links.
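
If you prefer the terminal, the extension can also be installed via VS Code’s command-line interface, assuming the code launcher is on your PATH (the identifier below is the marketplace ID of the Remote - Containers extension):

code --install-extension ms-vscode-remote.remote-containers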

Installing and Configuring the NVIDIA Container Toolkit

The default configuration of the NVIDIA Container Toolkit does not work with Podman, so it needs to be adjusted. Most steps are based on this guide by Red Hat, which I will repeat below.

  1. Add the repository:

    curl -sL https://nvidia.github.io/nvidia-docker/rhel9.0/nvidia-docker.repo | sudo tee /etc/yum.repos.d/nvidia-docker.repo
    

    If you aren’t using Fedora 36, you might have to replace rhel9.0 with your distribution; see the instructions.

  2. Next, install the nvidia-container-toolkit package.

    sudo dnf install -y nvidia-container-toolkit
    
  3. Red Hat’s guide mentions configuring two settings in /etc/nvidia-container-runtime/config.toml. However, when using Podman with --userns=keep-id to map the UID of the user running the container to the user inside the container, you have to change a third setting. Open /etc/nvidia-container-runtime/config.toml with

    sudo -e /etc/nvidia-container-runtime/config.toml
    

    and change the following three lines:

    #no-cgroups = false
    no-cgroups = true
    
    #ldconfig = "@/sbin/ldconfig"
    ldconfig = "/sbin/ldconfig"
    
    #debug = "/var/log/nvidia-container-runtime.log"
    debug = "~/.local/nvidia-container-runtime.log"
    
  4. Next, you have to create a new SELinux policy to enable GPU access within the container (a quick way to verify GPU access follows after this list):

    curl -sLO https://raw.githubusercontent.com/NVIDIA/dgx-selinux/master/bin/RHEL7/nvidia-container.pp
    sudo semodule -i nvidia-container.pp
    sudo nvidia-container-cli -k list | sudo restorecon -v -f -
    sudo restorecon -Rv /dev
    
  5. Finally, tell VS Code to use Podman instead of Docker by going to User Settings › Extensions › Remote - Containers and changing Remote › Containers: Docker Path to podman, as in the image below.

(Screenshot of the Remote › Containers: Docker Path setting.)

Replace docker with podman.
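
Before switching to VS Code, it is worth checking that rootless Podman can now access the GPU. A minimal check, assuming the toolkit installed its OCI hook under /usr/share/containers/oci/hooks.d/ and using one of NVIDIA’s CUDA base images (adjust the image tag to one that matches your driver):

podman run --rm --security-opt=label=disable \
    --hooks-dir=/usr/share/containers/oci/hooks.d/ \
    docker.io/nvidia/cuda:11.7.1-base-ubi8 nvidia-smi

If everything is configured correctly, this prints the same nvidia-smi output as running it directly on the host.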

Using the Development Container

I created an example project that is based on a container image that provides the oneAPI DPC++ compiler together with its CUDA backend and the CUDA toolkit.

You can install additional tools by editing the project’s Dockerfile.
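
For instance, assuming the base image is Debian- or Ubuntu-based (an assumption; use the appropriate package manager otherwise), adding a debugger could look like this:

RUN apt-get update \
    && apt-get install -y --no-install-recommends gdb \
    && rm -rf /var/lib/apt/lists/*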

To use the example project, clone it:

git clone https://github.com/sebp/vscode-sycl-dpcpp-cuda.git

Next, open the vscode-sycl-dpcpp-cuda directory with VS Code. At this point, VS Code should recognize that the project contains a development container and suggest reopening the project in the container.

(Screenshot of the prompt to reopen the project in the container.)

Click Reopen in Container.

Initially, this step will take some time because the container image has to be downloaded and VS Code installs additional tools inside the container. Subsequently, VS Code will reuse this container, and opening the project in it will be quick.

Once the project has been opened within the container, you can open the example SYCL application in the file src/sycl-example.cpp. The project is configured to use the DPC++ compiler with the CUDA backend by default. Therefore, you just have to press Ctrl+Shift+B to compile the example file. Using the terminal, you can now execute the compiled program, which should print the GPU it is using and the numbers 0 to 31.
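
For reference, a minimal SYCL program with that behavior could look roughly like the following. This is a sketch, not the exact contents of src/sycl-example.cpp, and it assumes the SYCL 2020-style API of recent DPC++ releases:

#include <sycl/sycl.hpp>
#include <iostream>

int main() {
    // Request a GPU device; with the CUDA backend this is the NVIDIA GPU.
    sycl::queue q{sycl::gpu_selector_v};
    std::cout << "Running on "
              << q.get_device().get_info<sycl::info::device::name>() << "\n";

    constexpr int N = 32;
    int data[N];
    {
        // The buffer copies its contents back to data when it goes out of scope.
        sycl::buffer<int, 1> buf{data, sycl::range<1>{N}};
        q.submit([&](sycl::handler &cgh) {
            sycl::accessor acc{buf, cgh, sycl::write_only};
            cgh.parallel_for(sycl::range<1>{N}, [=](sycl::id<1> i) {
                acc[i] = static_cast<int>(i[0]);
            });
        });
    }
    for (int i = 0; i < N; ++i)
        std::cout << data[i] << "\n";
}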

Alternatively, you can compile and run the program in one step by executing the Test task: open the Command Palette (F1, Ctrl+Shift+P) and search for Run Test Task.
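
If you would rather invoke the compiler manually in the integrated terminal, the CUDA backend is selected via DPC++’s -fsycl-targets flag; something along these lines should work inside the container:

clang++ -fsycl -fsycl-targets=nvptx64-nvidia-cuda src/sycl-example.cpp -o sycl-example
./sycl-example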

Conclusion

While getting VS Code to work with a rootless Podman container that has access to the GPU was rather cumbersome, I hope this guide will make it less painful for others. The example project should provide a good reference for a devcontainer.json that uses rootless Podman containers with GPU access. If you aren’t interested in SYCL or DPC++, you can replace the existing Dockerfile. There are two steps that are essential for this to work:

  1. Create a vscode user inside the container.
  2. Make sure to create the directories that the VS Code server (inside the container) requires write access to (see the sketch below).

Otherwise, you will encounter various permission denied errors.
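
As a rough illustration, the relevant part of such a Dockerfile might look like the following. The user name matches the one mentioned above, while the exact directories and layout are assumptions you may need to adapt to your image:

# Create a non-root user named "vscode" inside the container.
RUN useradd --create-home --shell /bin/bash vscode

# Pre-create the directories the VS Code server writes to and hand them
# over to that user, so it does not run into permission denied errors.
RUN mkdir -p /home/vscode/.vscode-server/bin /home/vscode/.vscode-server/extensions \
    && chown -R vscode:vscode /home/vscode

USER vscode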
