Deploying JupyterHub with Kubernetes on OpenStack

Jupyter is now widely used for teaching and research. Deploying JupyterHub on Kubernetes has enabled reliable setups that scale to thousands of users.
There are many cloud computing vendors (Google, Amazon, …), and the first attempts to use JupyterHub with Kubernetes were based on them. But relying on vendor clouds increases the risk of vendor lock-in.
In addition, there are many pre-existing academic clouds managed by people with a high level of expertise and a thorough knowledge of their infrastructure and associated tools. These are often more cost-effective for research and education. Could we build upon these academic clouds to provide scalable, high-quality infrastructure for education and research?
In this post, we focus on how to deploy JupyterHub with Kubernetes on OpenStack: a first attempt at building academic cloud computing in France.

This post is split into two parts (see links below).

Why deploy JupyterHub on OpenStack is a high-level description of our problem and our steps to solve it. It explains why we want to deploy JupyterHub on OpenStack, what difficulties we encountered, and what we plan to do in the near future.

A technical guide to deploying JupyterHub on OpenStack is an in-depth guide that you may follow in order to deploy your own JupyterHub on Kubernetes on OpenStack. It’s designed for anyone interested in replicating our deployment on their own infrastructure.

Our story, our difficulties and our plans

Why deploy JupyterHub on OpenStack?

JupyterHub, the multi-user Jupyter server, has been actively developed since 2014 and has seen rapidly growing adoption in the past year.

You may know about Zero to JupyterHub, which provides step-by-step instructions for installing JupyterHub using a vendor-managed Kubernetes cluster. The guide also explains how to set up a Kubernetes cluster on many vendor clouds such as AWS and Azure, and more recently on OpenShift. But what about cloud infrastructures based on open-source software, such as OpenStack? While cloud vendors often provide many tools that make your life easier, OpenStack requires more explicit configuration and setup.

Earlier this year, we set up a working group across several French universities to explore how to easily set up JupyterHub for teaching and research on our academic cloud infrastructures. It turns out that the technology used across these academic clouds is OpenStack. One of the objectives of this working group is to make deploying JupyterHub on OpenStack just as easy as following Zero to JupyterHub on vendor infrastructure.

Note: We are not the first to work on this problem, and we should also mention the work done in Canada through a project of PIMS, Compute Canada, and Cybera. They have developed their own deployment tools using terraform and ansible scripts. Our approach differs in that, while we use the same technological stack, we prefer not to build a custom deployment tool that we would need to maintain over time.

Issues we encountered

We started this deployment in January of this year and have had a bumpy path since then. To begin, we looked at what already existed in the OpenStack world and came across Kubespray, which offers great facilities and a lot of flexibility when you want to deploy a Kubernetes cluster. An interesting fact about Kubespray is that it’s not dedicated to OpenStack infrastructures, so you should be able to follow the same procedure for other deployments, such as a bare-metal cluster.

Using Kubespray, we very quickly had a Kubernetes cluster on OpenStack. However, we ran into network problems: lost packets made the JupyterHub completely unusable. It took us a long time to realize that we had MTU issues, and even longer to solve them. To make things harder, we were working on a production platform, which made it very difficult to update. We finally solved the problem by using a test platform where we had more freedom.

In Kubespray, there are various CNIs (Container Network Interfaces), and one of them (weave) allows you to modify the MTU. We tried to configure it carefully on the production platform, but we kept facing the same problem. Trying a new version of OpenStack on the test platform, we were finally able to solve it, which suggests that something had also gone wrong with the LoadBalancer. For more explanation, see the MTU section in the technical description below.

We thought we could deploy JupyterHub with Kubernetes on OpenStack in a few weeks but, as you can see, that’s not what happened at all. That’s why it was important for us to share our experience, in the hope that it makes the process easier for others. In the last section, we’ll cover more of the technical details of our deployment.

What’s next?

For us, the installation of JupyterHub on OpenStack was just the first step of a long journey. We are now able to offer our researchers and students a JupyterHub, but we want more. Here’s a short wish-list for our deployments.

On-demand environments. Imagine offering researchers and teachers an even more flexible platform where they can create their own work environments and distribute them without going through central IT to install their packages. As you may have guessed, this is why we are interested in what BinderHub has to offer.

The steps described above also work for the installation of BinderHub. We deployed a BinderHub on OpenStack alongside a Docker Hub registry. Kubespray also offers the possibility of deploying a private registry, and we would like to test it with BinderHub.

Cluster monitoring. It would also be great to monitor the Kubernetes cluster. This would allow us to inspect the usage rates and resources available on the deployment. Kubespray’s roadmap includes adding Grafana and Prometheus installations.

Authentication for BinderHub. Currently, BinderHub does not support authentication for users. However, a recent pull request on this subject was merged in BinderHub, so this is likely to change soon.

Persistent storage in BinderHub. It is also currently not possible to persist storage across BinderHub sessions. Once authentication is possible in BinderHub, we’d also like to connect user accounts to their storage so that they can keep their work over time. This will also require being able to mount the home directory of each user.

We will work on all these items in the coming months.

The Technical Details

This part details the setup of a Kubernetes cluster and JupyterHub using a bare OpenStack infrastructure. To make it as reproducible as possible, we will start by listing the versions we used.

  • OpenStack: Pike
  • Kubespray: commit 3632290
  • Kubernetes: 1.11.3
  • Helm: 2.9.1
  • JupyterHub: 0.7.0

Now that we’ve described the components and the versions used, let’s deploy our JupyterHub on OpenStack!

The deployment steps are the following:

  • Connect to our OpenStack infrastructure
  • Download Kubespray
  • Create your infrastructure using terraform
  • Deploy your Kubernetes cluster using ansible
  • Deploy your JupyterHub using Helm chart
  • Enjoy!

Connect to OpenStack

Kubespray needs access to your OpenStack infrastructure in order to create all the instances needed for your Kubernetes cluster, using the OpenStack CLI (Command-Line Interface). When you log in to your OpenStack dashboard, you can download the environment variables needed to use the CLI.

We chose to download the OpenStack RC File V3. You should obtain something like this:

Note that we’ve removed the lines that prompt for your password each time you use the CLI, setting the password directly in the file instead. You also have to provide OS_CLOUD and OS_CACERT (even if you don’t have a certificate to access your OpenStack infrastructure, the variable must be defined; you can leave it blank).
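As a sketch, such an OpenStack RC v3 file looks like the following. Every value below is a placeholder to replace with your own; only the OS_* variable names are standard:

```shell
#!/usr/bin/env bash
# Sketch of an OpenStack RC v3 file; all values are placeholders.
export OS_AUTH_URL="https://openstack.example.org:5000/v3"
export OS_IDENTITY_API_VERSION=3
export OS_PROJECT_NAME="my-project"
export OS_PROJECT_DOMAIN_NAME="Default"
export OS_USER_DOMAIN_NAME="Default"
export OS_USERNAME="my-user"
# Password set directly instead of the interactive prompt (see note above)
export OS_PASSWORD="my-password"
export OS_REGION_NAME="RegionOne"
export OS_INTERFACE=public
# Required by Kubespray even when empty (see note above)
export OS_CLOUD=""
export OS_CACERT=""
```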

Now, install the OpenStack CLI from the command line. We’ll show two ways to do this below:

  • with virtualenv
virtualenv ~/openstack
source ~/openstack/bin/activate
pip install python-openstackclient
  • with conda
conda create -n openstack python=3.6
source activate openstack
pip install python-openstackclient

Next, source your rc file to activate it

source rc_file

and test your connection

openstack project list

You should be able to see your projects listed.

Once you have access, you will need some information in order to use terraform with Kubespray. You should find the following things (we have highlighted them in the images below):

  1. The name of the image you want to deploy. To list the available images, run the following command:
openstack image list
List of OpenStack images

  2. The ID of the flavor describing the type of machine you want to deploy (the flavor must be a UUID and not an integer ID). To find it, run this command:

openstack flavor list
List of OpenStack flavors

Once you’ve got this information, it’s time to install Kubespray.

Install Kubespray

Because Kubespray is simply a GitHub repository, we don’t “install” it in the traditional sense; we only clone the repository to our machine. Since Kubespray is a project that evolves quickly, we’ll list the commit that we used for this post. Run the following commands to get Kubespray:

git clone https://github.com/kubernetes-sigs/kubespray.git
cd kubespray
git checkout 3632290

Next, prepare the files describing your Kubernetes cluster. We’ll follow the documentation given by Kubespray, changing just a few flags. We encourage you to follow the procedure described below, as the documentation seems to contain some errors.

Kubespray uses terraform and ansible to deploy your Kubernetes cluster. ansible needs an inventory file describing your cluster in order to execute the playbook roles on it. Kubespray provides a skeleton dedicated to OpenStack platforms to provision your cluster using terraform and create the inventory file for ansible accordingly. To use the skeleton provided by Kubespray, the steps are the following:

cp -LRp contrib/terraform/openstack/sample-inventory inventory/jhub
cd inventory/jhub
ln -s ../../contrib/terraform/openstack/hosts

jhub is the name of the directory we chose to store our inventory, but you can pick whatever you want.

If you look at the inventory/jhub directory, you will see:

  • the terraform file describing your inventory.
  • group_vars: the directory where we set all the variables used by ansible scripts provided by Kubespray.

Let’s describe our inventory.

Initialize Terraform

In this file, you can specify different kinds of Kubernetes clusters, with a floating IP for each VM. A floating IP means that you ask OpenStack for a public IP address in order to connect to the VM from an external network. You can also have a bastion host that you must log in to before reaching your Kubernetes cluster.

In the following, we choose to have one master VM and two nodes for our Kubernetes cluster. Another important part is to specify a GlusterFS to provide storage resources for JupyterHub (database and home directories). Our inventory file looks like this:
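As a sketch, using variable names from Kubespray’s OpenStack Terraform sample (all values below are placeholders; see the sample file for the full set of variables):

```hcl
# Sketch only: every value below is a placeholder for your infrastructure.
cluster_name = "jhub"

# One master, two nodes, and one GlusterFS node
number_of_k8s_masters              = 1
number_of_k8s_nodes                = 2
number_of_gfs_nodes_no_floating_ip = 1
gfs_volume_size_in_gb              = 50

# Image and SSH user (depends on the image you deploy)
image        = "CentOS-7"
image_gfs    = "CentOS-7"
ssh_user     = "centos"
ssh_user_gfs = "centos"

# Flavors must be UUIDs, not integer IDs (see `openstack flavor list`)
flavor_k8s_master = "0d5e1a4b-placeholder"
flavor_k8s_node   = "0d5e1a4b-placeholder"
flavor_gfs_node   = "0d5e1a4b-placeholder"

# Networking (see `openstack network list`)
network_name    = "jhub-internal"
external_net    = "6cd08271-placeholder"  # ID of the external network
floatingip_pool = "ext-net"               # name of the floating IP pool
dns_nameservers = ["192.168.0.1"]         # site-specific DNS
```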

The flavors are the same for the master, nodes, and GlusterFS, but you can choose differently. We also added dns_nameservers to be sure that we have correct DNS on each node. We will check in future experiments whether it is really necessary.

The ID of the external network and the name of the floatingip_pool can be obtained with the following command:

openstack network list
List of OpenStack networks

You need ssh keys to be able to connect to the nodes. From the documentation of Kubespray:

Ensure your local ssh-agent is running and your ssh key has been added. This step is required by the terraform provisioner:
eval $(ssh-agent -s)
ssh-add ~/.ssh/id_rsa

Now, it’s time to initialize terraform. It’s important that the next steps be run from the root directory of Kubespray.

terraform init contrib/terraform/openstack

Now, create your VMs!

terraform apply -state=inventory/jhub/terraform.tfstate -var-file=inventory/jhub/ contrib/terraform/openstack

At the end of this process, you can see your instances in the dashboard of OpenStack. It’s important to keep the information given at the end of the output.

At this stage, you’ve just created several VMs with the images given in the file. You don’t have a Kubernetes cluster up and running yet. It’s the next step!

Note: If you want to destroy all that you’ve done, run this command:

terraform destroy -state=inventory/jhub/terraform.tfstate -var-file=inventory/jhub/ contrib/terraform/openstack

Configure your Kubernetes cluster

Again, Kubespray lets you configure your Kubernetes cluster in many ways. We will show you one setup, but once you understand the procedure, you should be able to make your own choices.

Let’s start by checking that we can ping our VMs. Add the following ssh configuration file, which we called ssh-nodes.conf, to your inventory/jhub directory:
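A sketch of such a file, assuming a CentOS image and a master floating IP of 192.168.100.10 (both placeholders to adapt to your deployment):

```
# Sketch of ssh-nodes.conf: the addresses and user are placeholders.

# The master node, reachable via its external (floating) IP
Host 192.168.100.10
  User centos
  IdentityFile ~/.ssh/id_rsa

# The internal network: hop through the master with a ProxyCommand
Host 10.0.0.*
  User centos
  IdentityFile ~/.ssh/id_rsa
  ProxyCommand ssh -W %h:%p centos@192.168.100.10
```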

You also have to modify the ssh_args variable in the ansible.cfg file in the root directory of Kubespray accordingly:

ssh_args = -F inventory/jhub/ssh-nodes.conf -o ControlMaster=auto -o ControlPersist=30m -o ConnectionAttempts=100 -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no

Make sure that you give the right external address and that your internal network is 10.0.0.*.

To check that everything is configured correctly, this command:

ansible -i inventory/jhub/hosts -m ping all

should report a SUCCESS response (with a “pong” answer) for every host.

You can now install your Kubernetes cluster with the ansible scripts provided by Kubespray. To do that, you will edit the files found in the group_vars directory in inventory/jhub.

You need several things to have a JupyterHub up and running:

A CNI (Container Network Interface). Kubespray offers different CNIs for your Kubernetes cluster: cilium, calico, contiv, weave, or flannel. We will choose calico.

Storage for the data. We’ll deploy a GlusterFS and add storage on the Kubernetes cluster to have access to it.

A LoadBalancer to access the service from the external network. There are two kinds of LoadBalancers on OpenStack: Neutron or Octavia. You can use both with Kubespray. We will choose Neutron, but it may be preferable to use Octavia in the future.

So how do we configure all these items?

First, open the file inventory/jhub/group_vars/all/all.yml and modify the following entries

bootstrap_os: centos
cloud_provider: openstack

Note that the DNS address is specific to our infrastructure.
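The DNS entry itself is not reproduced here. Assuming it is set via Kubespray’s upstream_dns_servers variable in the same file, it would look something like this (the address is a placeholder for your site’s resolver):

```yaml
# Hypothetical example: replace with your own DNS resolver(s)
upstream_dns_servers:
  - 192.168.0.1
```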

Now, open the file inventory/jhub/group_vars/all/openstack.yml and configure the LoadBalancer

openstack_lbaas_enabled: True
openstack_lbaas_subnet_id: "48ec8433-..."
openstack_lbaas_floating_network_id: "6cd08271-..."

The two IDs are those given at the end of the terraform apply step.

Open the file inventory/jhub/group_vars/k8s-cluster/k8s-cluster.yml and set persistent_volumes_enabled to true and resolvconf_mode to host_resolvconf.
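Concretely, these correspond to the following entries in inventory/jhub/group_vars/k8s-cluster/k8s-cluster.yml:

```yaml
resolvconf_mode: host_resolvconf
persistent_volumes_enabled: true
```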

Our OpenStack cloud infrastructure is configured with a VXLAN tunnel whose header is 50 bytes. We use Calico with Kubernetes, which also uses a VXLAN tunnel with a 50-byte header. So, with a default MTU of 1500 bytes, 100 bytes are already taken by the headers, and we need to configure Calico’s MTU carefully to be sure that the packet size (headers included) doesn’t exceed 1500 bytes.
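The arithmetic behind the value we use is simple enough to write down (sizes in bytes):

```python
ETHERNET_MTU = 1500    # default MTU of the instances' network interfaces
VXLAN_HEADER = 50      # bytes added by one VXLAN encapsulation

# Two nested VXLAN tunnels: OpenStack's and Calico's.
calico_mtu = ETHERNET_MTU - 2 * VXLAN_HEADER
print(calico_mtu)  # prints 1400
```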

To configure the MTU of calico, edit the file inventory/jhub/group_vars/k8s-cluster/k8s-net-calico.yml and set the calico_mtu flag to 1400.

It’s important to note that setting the MTU had no effect for OpenStack versions earlier than Pike: the LoadBalancer didn’t work correctly.

The last file to modify is inventory/jhub/group_vars/k8s-cluster/addons.yml. JupyterHub uses a Helm chart to deploy everything you need on the Kubernetes cluster, and Kubespray can install Helm for you. So just set the helm_enabled flag to true.

Now we can run the ansible playbook:

ansible-playbook --become -i inventory/jhub/hosts cluster.yml

You can take a coffee break, because installing everything takes time. At the end of this process, you have a Kubernetes cluster up and running.

To be sure, log in to the master node (the external address given by terraform) and enter the command:

kubectl -n kube-system get pods

You should be able to see all pods of the kube-system namespace running.

The last step is to install the persistent volume from our GlusterFS.

ansible-playbook --become -i inventory/jhub/hosts ./contrib/network-storage/glusterfs/glusterfs.yml

If you log in again to the master of your Kubernetes cluster and enter the following command

kubectl get pv

you will see your GlusterFS storage connected to your Kubernetes cluster.

Install JupyterHub

Now that you have a Kubernetes cluster running, the procedure to install JupyterHub is exactly the same as the one described in Zero to JupyterHub. The only difference is that you don’t have to install Helm, since Kubespray did it for you. We’ll post the commands below, and you can go to the Zero to JupyterHub website for more information.

The first step is to log in to the master node of your Kubernetes cluster. Then, initialize Helm.

helm init --service-account tiller
kubectl patch deployment tiller-deploy --namespace=kube-system --type=json --patch='[{"op": "add", "path": "/spec/template/spec/containers/0/command", "value": ["/tiller", "--listen=localhost:44134"]}]'

Next, follow the procedure described in “Setting up JupyterHub” in the Zero to JupyterHub with Kubernetes 0.7.0 documentation.
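In outline, and following the Zero to JupyterHub 0.7.0 instructions, you create a minimal config.yaml; the token below is a placeholder that you generate yourself:

```yaml
# Minimal config.yaml for the JupyterHub Helm chart (sketch).
proxy:
  # Placeholder: replace with the output of `openssl rand -hex 32`
  secretToken: "replace-with-a-random-64-character-hex-string"
```

Then add the JupyterHub chart repository with helm repo add jupyterhub https://jupyterhub.github.io/helm-chart/ followed by helm repo update, and install the chart with helm install jupyterhub/jupyterhub --version=0.7.0 --name=jhub --namespace=jhub -f config.yaml (Helm 2 syntax, consistent with the versions listed above).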

If the LoadBalancer did its job, you should be able to see the external IP for connecting to your JupyterHub:

kubectl -n jhub get svc

Enter this address in your web browser; if everything worked, you will see the JupyterHub login page:

JupyterHub login page

Wrapping up and feedback

The steps above describe our attempts at running a JupyterHub on Kubernetes using OpenStack. There are likely many other ways to accomplish the same thing, and we’d love to hear feedback on the best procedure to install JupyterHub or BinderHub on OpenStack infrastructure. If you encounter any issues, please leave a comment or ping us on the gitter channel of JupyterHub or Binder.

Thanks to the Project Jupyter team for their review and helpful comments and especially to Sylvain Corlay and Chris Holdgraf.

About the Authors (alphabetical order)

  • David Delavennat, Research Engineer in Scientific Infrastructures at CMLS (Polytechnique/CNRS) and INSMI (CNRS)
  • Loïc Gouarin, Research Engineer in Scientific Computing at CMAP (Polytechnique/CNRS)
  • Guillaume Philippon, Research Engineer in Scientific Infrastructures at LAL (IN2P3/CNRS)