Documentation

This laboratory provides shared access to some “expensive” computational resources, offered as research-oriented interactive services.

Resources

The laboratory hardware consists of:

At peak, power consumption is about 500W.

Accounts

Each participant is identified by a unique user account, with its own username and password. These credentials are specific to the laboratory and have no relation to other services on Autistici / Inventati.

There is currently no user self-management interface, so please reach out to info@autistici.org for changes.

Services

Notebook

The main service offered by the lab is computational notebooks.

These are interactive Python environments built with Jupyter, a system that is widely used in academia and elsewhere. There is plenty of high-quality documentation about it on the Internet.

We offer a single runtime kernel (Python 3.11), which includes a number of widely used Python ML libraries such as PyTorch, sentence-transformers, and the Hugging Face libraries. If you need anything else, let us know and we’ll try to include it.
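To check whether a particular library is already available in the kernel, you can probe for it from a notebook cell. A minimal sketch (the library names below are just examples):

```python
import importlib.util

def has_library(name: str) -> bool:
    """Return True if the named library is importable in this kernel."""
    return importlib.util.find_spec(name) is not None

# Libraries the lab kernel is documented to ship with:
for lib in ("torch", "sentence_transformers", "transformers"):
    print(lib, has_library(lib))
```

If something you need is missing, that is the moment to write to us.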

Each user has access to their own isolated runtime environment, with an associated persistent storage volume.

The runtime kernel has access to all GPU resources. There is currently no resource-allocation mechanism; instead we rely on users self-limiting their resource usage to minimize conflicts. In the future we will likely introduce a queuing system for batch processing and similar quota-like mechanisms, in order to improve resource utilization and make things smoother for users.
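Until such a queuing system exists, it is polite to check current GPU usage before launching a heavy job. One way to do that, assuming the nvidia-smi tool is available in the environment, is a small helper like this (a sketch, not an official lab tool):

```python
import subprocess
from typing import List, Optional, Tuple

def gpu_memory_usage(output: Optional[str] = None) -> List[Tuple[int, ...]]:
    """Return (used_MiB, total_MiB) pairs, one per GPU.

    If `output` is None, query the local GPUs via nvidia-smi;
    otherwise parse the given CSV text (useful for testing).
    """
    if output is None:
        output = subprocess.check_output(
            ["nvidia-smi", "--query-gpu=memory.used,memory.total",
             "--format=csv,noheader,nounits"],
            text=True,
        )
    return [tuple(int(x) for x in line.split(","))
            for line in output.strip().splitlines()]

# Example with captured nvidia-smi output (4 GiB of 24 GiB in use):
print(gpu_memory_usage("4096, 24576"))  # [(4096, 24576)]
```

If the GPUs are already busy, consider waiting or scaling your job down.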

In any case, should you have particular necessities that are not met by the above, do not hesitate to contact us.

Storage

Each account has access to local storage space. It is backed by NVMe drives, so it is very fast. However, the overall disk space is limited and generally not intended for long-term storage (among other things, we have made the explicit decision not to back it up).

It is generally preferable to access large datasets remotely, e.g. using S3-like APIs or similar methods.
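For example, a remote file can be streamed to local storage in fixed-size chunks, so a large dataset never has to fit in memory. A minimal standard-library sketch (the URL and destination path are placeholders):

```python
import shutil
import urllib.request

def fetch_to_local(url: str, dest_path: str, chunk_size: int = 1 << 20) -> None:
    """Stream a remote file to local storage in chunks (default 1 MiB),
    so the whole file is never held in memory at once."""
    with urllib.request.urlopen(url) as src, open(dest_path, "wb") as dst:
        shutil.copyfileobj(src, dst, chunk_size)

# Usage (placeholder URL and path):
# fetch_to_local("https://example.org/dataset.tar", "dataset.tar")
```

For S3-like APIs specifically, a dedicated client library offers the same streaming behavior with authentication built in.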

It is possible to access the storage space using SSH / SCP on port 2222. Only public-key authentication is supported.
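For convenience you can add an entry to your ~/.ssh/config; the hostname and username below are placeholders, so substitute the values provided with your account:

```
Host lab
    HostName <laboratory hostname>
    Port 2222
    User <your username>
    IdentityFile ~/.ssh/id_ed25519
```

With this in place, `ssh lab` and `scp somefile lab:` work without extra flags.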

Ollama

We offer an experimental OpenAI-like API, implemented using Ollama, for simple inference-oriented tasks. Models managed by this service are shared among all users, to optimize resource usage.
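Since the API is OpenAI-like, it can be called from Python with only the standard library. The sketch below assumes the API key is passed as a Bearer token and that the service exposes the usual OpenAI-compatible chat endpoint; the key and model name are placeholders, so adjust them to your account details:

```python
import json
import urllib.request

API_BASE = "https://ollama.ula.inventati.org"  # the lab endpoint
API_KEY = "YOUR_API_KEY"  # placeholder: your personal key

def build_chat_request(prompt: str, model: str = "some-model") -> urllib.request.Request:
    """Build an OpenAI-style chat completion request with Bearer auth."""
    payload = json.dumps({
        "model": model,  # placeholder model name
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return urllib.request.Request(
        f"{API_BASE}/v1/chat/completions",
        data=payload,
        headers={
            "Authorization": f"Bearer {API_KEY}",  # assumed auth scheme
            "Content-Type": "application/json",
        },
    )

# To actually send the request:
# with urllib.request.urlopen(build_chat_request("Hello")) as resp:
#     print(resp.read().decode())
```

Any OpenAI-compatible client library should work the same way, pointed at the base URL above.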

The API is reachable at https://ollama.ula.inventati.org/ and requires authentication with an API key. Unfortunately, the ollama command-line tool does not support authentication; one way around this is to run a local authenticating forwarding proxy:
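As an illustration, such a proxy can be sketched in a few lines of standard-library Python: it listens locally, forwards each request to the lab endpoint, and injects the API key. The Bearer auth scheme and the listening port are assumptions (11434 is the ollama client's default):

```python
import http.server
import urllib.error
import urllib.request

UPSTREAM = "https://ollama.ula.inventati.org"  # the lab endpoint
API_KEY = "YOUR_API_KEY"  # placeholder: your personal key

class AuthProxy(http.server.BaseHTTPRequestHandler):
    """Forward every local request upstream, adding the API key."""

    def _forward(self):
        length = int(self.headers.get("Content-Length") or 0)
        body = self.rfile.read(length) if length else None
        headers = {"Authorization": f"Bearer {API_KEY}"}  # assumed auth scheme
        if self.headers.get("Content-Type"):
            headers["Content-Type"] = self.headers["Content-Type"]
        req = urllib.request.Request(
            UPSTREAM + self.path, data=body, headers=headers,
            method=self.command,
        )
        try:
            with urllib.request.urlopen(req) as resp:
                data, status = resp.read(), resp.status
        except urllib.error.HTTPError as err:
            # Pass upstream errors (4xx/5xx) through to the client.
            data, status = err.read(), err.code
        self.send_response(status)
        self.send_header("Content-Length", str(len(data)))
        self.end_headers()
        self.wfile.write(data)

    do_GET = do_POST = _forward

# To run: http.server.HTTPServer(("localhost", 11434), AuthProxy).serve_forever()
# Then point the ollama client at http://localhost:11434 (its default port).
```

This is a sketch only; a production proxy would also stream responses and handle more HTTP methods.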

With such a proxy in place, you can use the ollama client locally, transparently.

References

On Jupyter: