This web portal is intended to give HPC users a view of the overall use of the HPC cluster and of their own use. The portal uses the information collected on compute nodes and management servers to produce the information in the various modules:
* [jobstats](docs/jobstats.md)
* [accountstats](docs/accountstats.md)
* [cloudstats](docs/cloudstats.md)
* [quotas](docs/quotas.md)
* [top](docs/top.md)
* [usersummary](docs/usersummary.md)

[Documentation](docs/index.md)
Some examples of the available graphs are displayed in the documentation of each module.
This portal is designed to be modular: some modules can be disabled if the required data is not needed or collected. Some modules have optional dependencies and will hide some graphs if the data is not available.
This portal also supports OpenStack; users can see their use without having to install a monitoring agent in their OpenStack VMs.
Staff members can also see the use of any user, to help them optimize their use of the HPC and OpenStack clusters.
Some information collected is also available to the general public.
## Design
Performance metrics are stored in Prometheus; multiple exporters are used to gather this data, and most of them are optional.
The Django portal also accesses various MySQL databases, like the databases of Slurm and Robinhood (if installed), to gather some information. Time series are stored with Prometheus for better performance. Compatible alternatives to Prometheus like Thanos, VictoriaMetrics, and Grafana Mimir should work without any problems (Thanos is used in production). Recording rules in Prometheus are used to pre-aggregate some stats for the portal.
## Data sources

Various data sources are used to populate the content of this portal.
Some pre-aggregation is done using recording rules in Prometheus. The required recording rules are documented in the data sources documentation.
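As an illustration of the kind of pre-aggregation involved, a Prometheus recording rule looks like the following (the rule and metric names here are hypothetical, not the portal's actual required rules):

```yaml
groups:
  - name: portal_preaggregation
    interval: 1m
    rules:
      # Hypothetical example: pre-aggregate per-job CPU usage so the portal
      # can query one series per job instead of the raw per-core counters.
      - record: slurm_job:core_usage:rate1m
        expr: sum by (slurmjobid) (rate(slurm_job_core_usage_total[1m]))
```

The actual rules required by each module are listed in the data sources documentation.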
[Data sources documentation](docs/data.md)
# Accountstats
Users can also see the aggregated use of the users in the same group. This also shows the current priority of this account in Slurm, and a few months of history of how much computing resources were used.
<a href="accountstats.png"><img src="accountstats.png" alt="Stats per account" width="100"/></a>
# Cloudstats
The stats of the VMs running on OpenStack can be viewed. This uses the stats from libvirtd; no agent needs to be installed in the VMs. An overall stats page is available for staff, and pages per project and per VM are available for the users.
# Data sources
The main requirement to monitor a Slurm cluster is to install slurm-job-exporter and open read-only access to the Slurm MySQL database. The other data sources on this page can be installed to gather more data; some features will not be available if the exporter required to gather the stats is not configured.
## slurm-job-exporter
[slurm-job-exporter](https://github.com/guilbaults/slurm-job-exporter) is used to capture information from cgroups managed by Slurm on each compute node. This gathers CPU, memory, and GPU utilization.
## Access to the database of slurmacct
This MySQL database is accessed by a read-only user. It does not need to be on the same database server where Django is storing its data.
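As a sketch, such a read-only account could be created along these lines (the user name and network are illustrative assumptions; `slurm_acct_db` is the default name of the Slurm accounting database and may differ on your site):

```sql
-- Hypothetical read-only account for the portal; adjust names, host, and password.
CREATE USER 'portal_ro'@'10.0.0.%' IDENTIFIED BY 'choose-a-strong-password';
GRANT SELECT ON slurm_acct_db.* TO 'portal_ro'@'10.0.0.%';
FLUSH PRIVILEGES;
```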
## slurm-exporter
[slurm-exporter](https://github.com/guilbaults/prometheus-slurm-exporter/tree/osc) is used to capture stats from Slurm, like the priority of each user. This portal uses a fork, branch `osc` in the linked repository; this fork supports GPU reporting and sshare stats.
## lustre\_exporter and lustre\_exporter\_slurm
These two exporters are used to gather information about Lustre usage.
# Test environment

A test and development environment using the local `uid` resolver and dummy allocations is provided to test the portal.
To use it, copy `example/local.py` to `userportal/local.py`. The other functions are documented in `common.py` if any other overrides are needed for your environment.
To quickly test and bypass authentication, add this line to `userportal/settings/99-local.py`. Other local configuration can be added in this file to override the default settings. This bypasses authentication and uses the `REMOTE_USER` header or environment variable to authenticate the user, which makes it possible to try the portal without setting up a full IdP environment. The `REMOTE_USER` method can also be used with some IdPs such as Shibboleth, but a SAML2-based IdP is now the preferred authentication method for production.
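As a sketch of what such an override involves, Django's stock `REMOTE_USER` support is enabled with settings along these lines (this is the standard Django mechanism, not necessarily the portal's exact configuration):

```python
# userportal/settings/99-local.py — hedged sketch using Django's built-in
# REMOTE_USER support; assumes MIDDLEWARE is defined in an earlier settings file.
MIDDLEWARE += ['django.contrib.auth.middleware.RemoteUserMiddleware']
AUTHENTICATION_BACKENDS = ['django.contrib.auth.backends.RemoteUserBackend']
```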
Examine the default configuration in `userportal/settings/` and override any settings in `99-local.py` as needed.
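Starting the portal in this mode is a standard Django development-server invocation; a plausible form (the exact command may differ) is:

```shell
# Sketch: run the dev server with REMOTE_USER preset so the authentication
# bypass logs in as "someuser"; the real invocation may differ.
REMOTE_USER=someuser python manage.py runserver
```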
This will run the portal with the user `someuser` logged in as a staff member.
Automated Django tests are also available; they can be run with:
```
python manage.py test
```
This will test the various modules, including reading job data from the Slurm database and Prometheus. A temporary database for Django is created automatically for the tests; Slurm and Prometheus data are read directly from production data with a read-only account. A representative user, job, and account need to be defined for use in the tests; check the `90-tests.py` file for an example.
# Documentation

TrailblazingTurtle is a web portal for HPC clusters. It is designed to be a single point of entry for users to access information about the cluster, their jobs, and the performance of the cluster. It is designed to be modular, so that it can be easily extended to support new features.

# Design

The Django portal will access various MySQL databases, like the databases of Slurm and Robinhood (if installed), to gather some information.

Time series are stored with Prometheus for better performance. Compatible alternatives to Prometheus like Thanos, VictoriaMetrics, and Grafana Mimir should work without any problems (Thanos is used in production). Recording rules in Prometheus are used to pre-aggregate some stats for the portal.
# Installation
Before installing in production, [a test environment should be set up to test the portal](development.md). This makes it easier to fully configure each module and to modify functions as needed, such as how the allocations are retrieved. Installing Prometheus and some exporters is also recommended to test the portal with real data.
The portal can be installed directly on a Rocky8 Apache web server or with Nginx and Gunicorn. The portal can also be deployed as a container with Podman or Kubernetes. Some scripts used to deploy both Nginx and Django containers inside the same pod are provided in the `podman` directory.
The various recommendations for any normal Django production deployment can be followed.
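For the Nginx and Gunicorn option, a minimal invocation could look like this (assuming the WSGI module is `userportal.wsgi`, as the repository layout suggests; the worker count and port are illustrative):

```shell
# Hedged sketch: serve the Django app with Gunicorn; Nginx would proxy
# to this address and serve the static files.
gunicorn userportal.wsgi:application --bind 127.0.0.1:8000 --workers 4
```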
# Jobstats
Each user can see their current use of the cluster and a few hours into the past. The stats for each job are also available; information about CPU, GPU, memory, filesystem, InfiniBand, power, etc. is available per job. The submitted job script can also be collected from the Slurm server, then stored and displayed in the portal. Some automatic recommendations are given to the user based on the content of their job script and the stats of their job.
<a href="user.png"><img src="user.png" alt="Stats per user" width="100"/></a>
<a href="job.png"><img src="job.png" alt="Stats per job" width="100"/></a>
# Nodes
This main page presents the list of nodes in the cluster, with a small graph representing the cores, memory, and local disk used. Each node links to a detailed page with more information about the node, similar to the jobstats page.
## Screenshots

### Nodes list



### Node details

