GPUs

Usage of GPUs

The Kubernetes cluster has access to several virtual GPUs.

Currently, all GPUs are allocated to JupyterHub, which has covered all use cases so far.

If, however, there is a compelling reason why your service requires access to a GPU, please feel free to send a request with an explanation to cloud@uni-muenster.de, and we will evaluate the case together with you.

In any case, GPUs in Kubernetes cannot be used for long-running processes; they should be allocated within Jobs, even when they back a service, so that the resources are freed again once the computation is complete.
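As a rough sketch, a GPU Job could look like the following. The resource name nvidia.com/gpu, the container image, and the command are assumptions and may differ in this cluster depending on how the virtual GPUs are exposed:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: gpu-compute-job              # hypothetical name
spec:
  backoffLimit: 1                    # do not retry endlessly if the workload fails
  ttlSecondsAfterFinished: 600       # clean up the finished Job after 10 minutes
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: compute
          image: nvcr.io/nvidia/cuda:12.4.1-runtime-ubuntu22.04  # hypothetical image
          command: ["python", "/app/train.py"]                   # hypothetical workload
          resources:
            limits:
              nvidia.com/gpu: 1      # resource name may differ with the cluster's device plugin
```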

Furthermore, it may take several minutes for a GPU job to start, because a separate virtual machine has to be launched for each GPU pod. It is also possible that all GPUs are in use elsewhere, for example in JupyterHub, in which case you have to wait until a GPU becomes available again.
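A Job like the sketch above can be submitted with `kubectl apply -f gpu-job.yaml` (file name assumed). While the pod is still in the `Pending` state, `kubectl describe pod <pod-name>` shows the scheduling events, which is a quick way to see whether it is still waiting for a GPU node to come up or for a GPU to become free.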