We support two main use cases: running services in virtual machines and storing data on network shares.
Services may be any kind of traditional service, accessible via TCP or UDP on a port with a fixed IP address or DNS name. Complex services with multiple frontend and backend servers, private networks, and firewall-protected networking are also supported. OpenStack provides an extensive API, which allows all actions to be scripted.
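As an illustration of scripting against the API, firewall rules for a service can be managed with the OpenStack CLI. This is a sketch; the names `web-sg` and `frontend-1` are placeholders, and the commands assume you have sourced valid cloud credentials:

```shell
# Create a security group and allow HTTPS from anywhere.
# "web-sg" is an example name, not a site convention.
openstack security group create web-sg --description "Web frontend"
openstack security group rule create web-sg \
    --protocol tcp --dst-port 443 --remote-ip 0.0.0.0/0

# Attach the group to an existing VM named "frontend-1" (example name).
openstack server add security group frontend-1 web-sg
```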
In addition, OpenStack can provide network shares for storing large amounts of data.
If root access is required, you can create NFS shares, which restrict access by IP address.
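A minimal sketch of creating an NFS share and granting IP-based access with the OpenStack CLI (Manila plugin); the share name, size, and IP address are examples:

```shell
# Create a 100 GB NFS share (name "project-data" is an example).
openstack share create NFS 100 --name project-data

# Grant read/write access to a single client IP (example address).
openstack share access create project-data ip 10.1.2.3 --access-level rw
```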
For data that needs to be directly accessible from workstations, Usershares can be created.
:warning: Data analysis should primarily be done on JupyterHub (for interactive work or visualization) or on Palma. Project requests whose primary use case is connecting via SSH and doing data analysis will be rejected.
Containerized services are best run in Kubernetes. Please refer to the documentation for more information.
Please contact cloud@uni-muenster.de if you need more information on choosing the best environment for your service.
The requirements for a VM depend heavily on the application it will run. Quotas are initially set according to the project proposal. VMs should get a small root disk (10-20 GB is usually sufficient); additional storage should be attached via extra virtual disks or shares. Note that NVMe storage is only available to a very limited extent and should therefore be reserved for databases with strict latency requirements. Virtual disks are stored with 5-fold replication, so larger amounts of data should go on shares, which use 8+3 erasure coding. Do not over-provision RAM: a VM can always be allocated more RAM later by changing its "flavor".
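Attaching additional storage as a virtual disk can be sketched with the OpenStack CLI as follows; the volume name, size, and server name are examples, and the commands assume valid cloud credentials:

```shell
# Create a 500 GB data volume (name "data-vol" is an example).
openstack volume create --size 500 data-vol

# Attach it to an existing VM named "db-1" (example name);
# inside the VM it then appears as an extra block device.
openstack server add volume db-1 data-vol
```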
The requested quotas for a project should initially provide enough room for testing. Quotas can be changed with a brief mail to openstack@uni-muenster.de.
Officially provided images include CentOS, Ubuntu, Debian, Fedora, Suse and Windows.
Custom images can be uploaded and used in OpenStack. Note that these must run the "cloud-init" tool at startup. A list of common images can be found at https://docs.openstack.org/image-guide/obtain-images.html. For best performance, please convert your images to "raw" format before uploading.
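Converting and uploading an image could look like this; the file and image names are examples, and the commands assume `qemu-img` is installed and cloud credentials are sourced:

```shell
# Convert a qcow2 cloud image to raw format (file names are examples).
qemu-img convert -f qcow2 -O raw debian-12.qcow2 debian-12.raw

# Upload the raw image to OpenStack ("my-debian-12" is an example name).
openstack image create --disk-format raw --container-format bare \
    --file debian-12.raw my-debian-12
```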
Access depends heavily on the chosen application scenario. The easiest way is access via Usershares (reachable at \\wwu.de\ddfs or from Palma/JupyterHub). For data on virtual disks or NFS shares, the data must usually be transferred via SSH/rsync or a Samba/FTP server running on a virtual machine.
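A transfer via rsync over SSH might look like this; the hostname, user, and paths are examples:

```shell
# Copy a local results directory to a VM over SSH
# (hostname, user, and paths are placeholders).
rsync -avz --progress ./results/ debian@vm.example.org:/data/results/
```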
OpenStack VMs and VMs in the VMware virtualization environment can share data via IP-based NFS shares or Usershares.
Both Puppet and Ansible can be used for maintenance without any problems. In addition, VM setup can be automated via externally available interfaces (e.g. OpenStack Heat, Terraform, Vagrant). Boot scripts can be supplied via cloud-init.
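A minimal cloud-init user-data file might look like this; the package and service names are examples:

```yaml
#cloud-config
# Update the package index and install nginx on first boot
# (nginx is an example package).
package_update: true
packages:
  - nginx
runcmd:
  - systemctl enable --now nginx
```

Such a file can be passed when creating a VM with `openstack server create --user-data user-data.yaml ...`; cloud-init executes it on the first boot.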
Authentication on newly created VMs is initially done via SSH key: you first store your public key in OpenStack and select it when creating the VM. When the VM starts for the first time, the key is automatically injected into the "authorized_keys" file. If a different type of user authentication is desired, you have to set it up yourself.
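The key workflow can be sketched with the OpenStack CLI; the key name, flavor, image, and network names are examples and must exist in your project:

```shell
# Upload an existing public key under the name "mykey" (example name).
openstack keypair create --public-key ~/.ssh/id_ed25519.pub mykey

# Boot a VM with that key; flavor, image, and network are placeholders.
openstack server create --flavor m1.small --image debian-12 \
    --key-name mykey --network my-net my-vm
```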
For load balancing in OpenStack, it is better to scale horizontally, i.e., to distribute your service across multiple VMs. This allows the load to be spread more evenly across the available servers, and VMs can then be migrated more easily, depending on the load situation of individual machines, to improve the distribution.
A built-in load balancer is also available in OpenStack. It creates an HAProxy pair that can distribute requests to different backends; L7 routing, TLS termination, and health checks are also possible. If you are considering load balancing for virtual machines, chances are high that your service is better suited to Kubernetes. Please contact cloud@uni-muenster.de if you need more information on choosing the best environment for your service.
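Setting up a basic HTTP load balancer with the Octavia CLI could be sketched as follows; all names, the subnet, and the member address are examples:

```shell
# Create a load balancer with a VIP on an existing subnet (example names).
openstack loadbalancer create --name web-lb --vip-subnet-id my-subnet

# Listen for HTTP on port 80.
openstack loadbalancer listener create --name web-listener \
    --protocol HTTP --protocol-port 80 web-lb

# Create a round-robin backend pool for that listener.
openstack loadbalancer pool create --name web-pool \
    --lb-algorithm ROUND_ROBIN --listener web-listener --protocol HTTP

# Add a backend VM (example address) and an HTTP health check.
openstack loadbalancer member create --subnet-id my-subnet \
    --address 192.0.2.10 --protocol-port 80 web-pool
openstack loadbalancer healthmonitor create --delay 5 --timeout 3 \
    --max-retries 2 --type HTTP web-pool
```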