NVIDIA Clara – How to set up your annotation server and kickstart AI-driven MedTech projects

Version 1

Announced in March 2019 at NVIDIA’s GPU Technology Conference, Clara is a powerful suite of tools that aims to put the company at the forefront of the “democratization” of AI. The end goal: empowering data scientists with powerful computing solutions for medical imaging. Clara is especially interesting for preparing high-quality datasets of medical images, as its AI-assisted segmentation speeds up the process considerably.

The cool thing about Clara is that not only can you use an extensive library of pre-trained models for this purpose, but you can also load your own.

While the tool itself is very powerful and useful, its setup can be a little bit tricky the first time.

Read on to find out how we got it up and running here at ScanDiags. Let’s start!

Step 1 – Make an NVIDIA developer account

Follow this link and input your information to create an NVIDIA developer account. This will give you access to all the resources available on their portal (pre-trained models, SDKs and more).

Step 2 – Install Clara SDK

To get started, you’ll need to install the SDK on your server or local machine. To do so, you will need the NVIDIA drivers and NVIDIA Docker installed on your machine. If you already have these requirements satisfied, you can skip the following two command blocks and directly pull the SDK.

Install the NVIDIA driver:

    sudo apt update && sudo apt upgrade -y && \
    sudo apt purge nvidia* -y && \
    sudo add-apt-repository -y ppa:graphics-drivers && \
    sudo apt update && \
    sudo apt-get install -y nvidia-390

Test it with:

    nvidia-smi

Install NVIDIA Docker:

    curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add - && \
    distribution=$(. /etc/os-release;echo $ID$VERSION_ID) && \
    curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | sudo tee /etc/apt/sources.list.d/nvidia-docker.list && \
    sudo apt-get update && \
    sudo apt-get install -y nvidia-docker2 && \
    sudo pkill -SIGHUP dockerd

Test it with:

    sudo docker run --runtime=nvidia --rm nvidia/cuda:9.2-base-ubuntu18.04 nvidia-smi

Once you have these requirements satisfied, simply pull the docker image with:

    docker pull nvcr.io/nvidia/clara-train-sdk:v2.0

That’s it! The Clara SDK is now installed and ready to be used.

Step 3 – Let’s activate it

    docker run --shm-size=1g --ulimit memlock=-1 --ulimit stack=67108864 \
      --runtime=nvidia -it \
      --mount type=bind,source=/path_to/folder/with_config_and_model/,target=/data \
      -p 5000:5000 nvcr.io/nvidia/clara-train-sdk:v2.0 /bin/bash


Great – your Clara server is up and running!
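For convenience, the docker run invocation above can be wrapped in a small launcher function. This is just a sketch: the model directory and host port are placeholders to adapt to your layout, and the DRY_RUN flag (an addition of ours, not part of Clara) lets you print the command for inspection before running it.

```shell
#!/bin/sh
# Sketch: wrap the Clara `docker run` invocation from Step 3 in a function.
# MODEL_DIR and HOST_PORT are placeholders -- adapt them to your setup.
start_clara() {
    model_dir="$1"
    host_port="${2:-5000}"
    cmd="docker run --shm-size=1g --ulimit memlock=-1 --ulimit stack=67108864 --runtime=nvidia -it --mount type=bind,source=${model_dir},target=/data -p ${host_port}:5000 nvcr.io/nvidia/clara-train-sdk:v2.0 /bin/bash"
    if [ "${DRY_RUN:-0}" = "1" ]; then
        # Print the command instead of running it (useful for checking flags)
        echo "$cmd"
    else
        eval "$cmd"
    fi
}
```

For example, `DRY_RUN=1 start_clara /data/clara_models 5000` prints the full command without starting the container.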

Now you will have access to different “Admin” utilities that will allow you to manage your custom models:

[Image: Admin model-management utilities]

You will be able to use them from the command line with curl, as in the following example:

    curl -X PUT "{your_folder}" -F "config=@config.json;type=application/json" -F "data=@model.zip"
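To make the upload a bit more concrete, here is a hedged sketch of a small helper around that curl call. The `/admin/model/<name>` endpoint path and the default server address are assumptions on our part; verify them against the AIAA server documentation for your SDK version. A DRY_RUN flag (our addition) prints the command instead of sending it.

```shell
#!/bin/sh
# Sketch: upload a model archive plus its config.json to a running Clara
# AIAA server. The /admin/model/<name> endpoint path is an assumption --
# check the server docs for your SDK version.
upload_model() {
    server="${1:-http://127.0.0.1:5000}"
    name="$2"
    config="$3"   # path to config.json
    archive="$4"  # path to model.zip (or .tgz)
    cmd="curl -X PUT ${server}/admin/model/${name} -F config=@${config};type=application/json -F data=@${archive}"
    if [ "${DRY_RUN:-0}" = "1" ]; then
        echo "$cmd"
    else
        curl -X PUT "${server}/admin/model/${name}" \
             -F "config=@${config};type=application/json" \
             -F "data=@${archive}"
    fi
}
```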

Now that you know how to set up the Clara server, you’ll need a model that is Clara-ready.

Step 4 – Prepare your TensorFlow model

To try out Clara, we trained a pure TensorFlow implementation of the V-Net neural network. We used our own data, but if you just want to test it out, simply follow their instructions and use the publicly available LiTS (Liver Tumor Segmentation) challenge dataset.


You will need to prepare a folder containing the following checkpoint files:

  • model.ckpt.meta
  • model.ckpt.data-00000-of-00001
  • model.ckpt.index

These files will be used by Clara to load your model and TensorFlow graph. It’s very important that they all share the same base name (model.ckpt above). Zip them all together (.zip or .tgz format) and upload the archive to the server.
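The packaging step can be scripted. A minimal sketch, assuming the checkpoint files use the model.ckpt base name described above:

```shell
#!/bin/sh
# Sketch: bundle the three checkpoint files into a .tgz archive for upload.
# Assumes the files share the base name model.ckpt, as described above.
package_checkpoint() {
    src_dir="$1"      # directory containing the model.ckpt.* files
    out_archive="$2"  # e.g. model.tgz
    # -C changes into the directory so the archive contains flat paths
    tar -czf "$out_archive" -C "$src_dir" \
        model.ckpt.meta model.ckpt.data-00000-of-00001 model.ckpt.index
}
```

Using tar keeps the archive flat, which avoids the server having to search nested directories for the checkpoint.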

Step 5 – The mighty config.json file

The last thing Clara needs from you is a configuration file, served alongside the model.

Now, I found this quite tricky at first, as the available documentation doesn’t make it clear how or when this file should be generated.

This file gives Clara an overview of your model and is essential to using the annotation tool successfully. Below is a minimal template to make it work. Make sure to customize the values in curly brackets with your data (the // comments are notes for you, not valid JSON – remove them from the final file).

    {
      "version": "1",
      "type": "segmentation",
      "labels": ["liver"],
      "description": "My super cool segmentation model",
      "format": "CKPT",
      "threshold": 0.5,
      "roi": [
        {height},     // i.e. 128
        {width},      // i.e. 256
        {n_channels}  // i.e. 3 for RGB images
      ],
      "sigma": 3.0,
      "padding": 20.0,
      "input_nodes": {
        "image": "{name of input node}"
      },
      "output_nodes": {
        "model": "{name of output node}"
      },
      "pre_transforms": [
        {
          "name": "ResampleVolume",
          "args": {
            "applied_key": "image",
            "resolution": "image_resolution",
            "target_resolution": [1.0, 1.0, 1.0]
          }
        },
        {
          "name": "VolumeTo4DArray",
          "args": {
            "fields": "image"
          }
        },
        {
          "name": "ScaleIntensityRange",
          "args": {
            "field": "image",
            "a_min": -21,
            "a_max": 189,
            "b_min": 0.0,
            "b_max": 1.0,
            "clip": true
          }
        }
      ],
      "post_transforms": []
    }
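Because the template contains // comments and {placeholders}, it will not parse as-is. Here is a sketch of writing a filled-in config.json and validating it; the roi values and node names are example values only (the "images"/"softmax" names match the V-Net discussed below), and the pre_transforms are left empty for brevity.

```shell
#!/bin/sh
# Sketch: write a filled-in config.json (example values, no comments)
# and check that it parses as valid JSON.
write_config() {
    out="$1"
    cat > "$out" <<'EOF'
{
  "version": "1",
  "type": "segmentation",
  "labels": ["liver"],
  "description": "My super cool segmentation model",
  "format": "CKPT",
  "threshold": 0.5,
  "roi": [128, 256, 3],
  "sigma": 3.0,
  "padding": 20.0,
  "input_nodes": {"image": "images"},
  "output_nodes": {"model": "softmax"},
  "pre_transforms": [],
  "post_transforms": []
}
EOF
    # python3 -m json.tool exits non-zero on invalid JSON
    python3 -m json.tool "$out" > /dev/null
}
```

Validating locally is much faster than uploading and waiting for the server to reject a malformed file.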

To find the names of the input and output nodes, check your graph using TensorBoard. For example, for the V-Net, the graph looks like this:


Messy, right? Let’s zoom in:

We have “softmax” as the output layer and “images” as the input layer.


That’s it! Now you have everything you need to load your model onto the Clara AIAA tool. While at first the process may seem convoluted (no pun intended), you’ll find that once you understand the basic concept, loading models onto the server becomes quite fast and easy.

If you or your company works in the field of medical imaging, like us here at ScanDiags, I’m sure you’ll find this useful for creating neat datasets for your tasks.
