# Federating System and User metrics to S3 in Red Hat OpenShift for AWS
This content is authored by Red Hat experts, but has not yet been tested on every supported configuration.
This guide walks through federating Prometheus metrics to S3 storage.
ToDo - Add Authorization in front of Thanos APIs

## Prerequisites

- A ROSA cluster deployed with STS
- aws CLI

## Set up environment

- Create environment variables
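For example (the values below are placeholders; set the cluster name, region, and namespace to match your own environment):

```bash
export CLUSTER_NAME=my-cluster          # placeholder: your ROSA cluster name
export REGION=us-east-2                 # placeholder: your cluster's AWS region
export NAMESPACE=thanos                 # namespace the Thanos components will run in
export S3_BUCKET=${CLUSTER_NAME}-thanos # bucket name used for metric storage
export SCRATCH_DIR=/tmp/scratch         # working directory for generated files
mkdir -p ${SCRATCH_DIR}
```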
- Create namespace
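Assuming `$NAMESPACE` was set in the step above:

```bash
# Create (and switch to) the project the Thanos components will run in
oc new-project ${NAMESPACE}
```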

## AWS Preparation

- Create an S3 bucket
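Using the variables from the environment setup:

```bash
# Create the bucket that Thanos will write metric blocks into
aws s3 mb s3://${S3_BUCKET} --region ${REGION}
```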
- Create a Policy for access to S3
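A minimal policy document granting Thanos read/write access to the bucket (the exact action list your setup needs may differ; adjust as required):

```bash
cat << EOF > ${SCRATCH_DIR}/s3-policy.json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket",
        "s3:GetObject",
        "s3:PutObject",
        "s3:DeleteObject"
      ],
      "Resource": [
        "arn:aws:s3:::${S3_BUCKET}",
        "arn:aws:s3:::${S3_BUCKET}/*"
      ]
    }
  ]
}
EOF
```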
- Apply the Policy
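Create the policy in IAM and keep its ARN for later steps (the policy name here is a suggestion):

```bash
POLICY_ARN=$(aws iam create-policy \
  --policy-name ${CLUSTER_NAME}-thanos \
  --policy-document file://${SCRATCH_DIR}/s3-policy.json \
  --query 'Policy.Arn' --output text)
echo ${POLICY_ARN}
```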
- Create a Trust Policy
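Because the cluster uses STS, the trust policy should allow the workload's ServiceAccount to assume the role via the cluster's OIDC provider. A sketch follows; the ServiceAccount name `aws-prometheus-proxy` is an assumption, so match it to the one the Helm chart actually creates:

```bash
# Look up the cluster's OIDC issuer and your AWS account id
export OIDC_PROVIDER=$(oc get authentication.config.openshift.io cluster \
  -o jsonpath='{.spec.serviceAccountIssuer}' | sed 's|^https://||')
export AWS_ACCOUNT=$(aws sts get-caller-identity --query Account --output text)

cat << EOF > ${SCRATCH_DIR}/trust-policy.json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::${AWS_ACCOUNT}:oidc-provider/${OIDC_PROVIDER}"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "${OIDC_PROVIDER}:sub": "system:serviceaccount:${NAMESPACE}:aws-prometheus-proxy"
        }
      }
    }
  ]
}
EOF
```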
- Create Role for AWS Prometheus and CloudWatch
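Create the role with the trust policy written above and keep its ARN (the role name is a suggestion):

```bash
ROLE_ARN=$(aws iam create-role \
  --role-name ${CLUSTER_NAME}-thanos-s3 \
  --assume-role-policy-document file://${SCRATCH_DIR}/trust-policy.json \
  --query 'Role.Arn' --output text)
echo ${ROLE_ARN}
```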
- Attach the Policies to the Role
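Assuming `$POLICY_ARN` was captured when the policy was created:

```bash
aws iam attach-role-policy \
  --role-name ${CLUSTER_NAME}-thanos-s3 \
  --policy-arn ${POLICY_ARN}
```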

## Deploy Operators

- Add the MOBB chart repository to your Helm
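The MOBB charts are published at the repository below:

```bash
helm repo add mobb https://rh-mobb.github.io/helm-charts/
```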
- Update your repositories
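```bash
helm repo update
```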
-
Use the
mobb/operatorhubchart to deploy the needed operators
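The invocation might look like the following; the release name `custom-operators` is arbitrary and `operators.yaml` is a hypothetical values file, so consult the chart's documented values for the exact schema of operators to subscribe:

```bash
helm upgrade -n ${NAMESPACE} custom-operators mobb/operatorhub \
  --install --values operators.yaml
```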

## Deploy Thanos Store Gateway

- We use Grafana Alloy to scrape the Prometheus metrics and ship them to Thanos, which then stores them in S3. Grafana Alloy currently requires running as a specific user, so we must set a SecurityContextConstraint to allow it.
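A sketch of the SCC grant; the ServiceAccount name `alloy` is an assumption, so substitute the one the chart actually creates:

```bash
# Allow the Alloy ServiceAccount to run as the user id it expects
oc adm policy add-scc-to-user anyuid -z alloy -n ${NAMESPACE}
```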
- Deploy ROSA Thanos S3 Helm Chart
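A sketch of the install, wiring in the role ARN, region, and bucket from the AWS preparation steps; the value keys (`aws.roleArn` and friends) are assumptions, so check the chart's documented values:

```bash
helm upgrade -n ${NAMESPACE} rosa-thanos-s3 mobb/rosa-thanos-s3 --install \
  --set "aws.roleArn=${ROLE_ARN}" \
  --set "aws.region=${REGION}" \
  --set "aws.bucket=${S3_BUCKET}"
```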
- Append remoteWrite settings to the user-workload-monitoring config to forward user workload metrics to Thanos.
Check if the User Workload Config Map exists:
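```bash
oc -n openshift-user-workload-monitoring get configmap user-workload-monitoring-config
```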
If the config doesn’t exist run:
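A sketch of creating the config; it assumes the chart exposes a Thanos receive Service named `thanos-receive` on port 9091 in `$NAMESPACE`, so verify the actual Service name and port before applying:

```bash
cat << EOF | oc apply -f -
apiVersion: v1
kind: ConfigMap
metadata:
  name: user-workload-monitoring-config
  namespace: openshift-user-workload-monitoring
data:
  config.yaml: |
    prometheus:
      remoteWrite:
        - url: "http://thanos-receive.${NAMESPACE}.svc.cluster.local:9091/api/v1/receive"
EOF
```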
Otherwise update it with the following:
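Edit the existing ConfigMap (`oc -n openshift-user-workload-monitoring edit configmap user-workload-monitoring-config`) so that `config.yaml` contains a `remoteWrite` stanza like the one below; the Service name and port are assumptions to verify against your deployment:

```yaml
data:
  config.yaml: |
    prometheus:
      remoteWrite:
        - url: "http://thanos-receive.thanos.svc.cluster.local:9091/api/v1/receive"
```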

## Check metrics are flowing by logging into Grafana

- Get the Route URL for Grafana (remember it's https) and login using username `root` and the password you updated to (or the default of `secret`).
- Once logged in, go to Dashboards->Manage and expand the federated-metrics group; you should see the cluster metrics dashboards. Click on the Use Method / Cluster Dashboard and you should see metrics. \o/
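The Route URL can be fetched like this; the route name `grafana-route` is an assumption, so list the routes in the namespace if it differs:

```bash
echo "https://$(oc -n ${NAMESPACE} get route grafana-route -o jsonpath='{.spec.host}')"
```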

## Cleanup

- Delete the Helm Charts
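Assuming the release names used at install time:

```bash
helm -n ${NAMESPACE} delete rosa-thanos-s3
helm -n ${NAMESPACE} delete custom-operators
```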
- Delete the namespace
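```bash
oc delete project ${NAMESPACE}
```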
- Delete the S3 bucket
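The bucket must be emptied before it can be removed:

```bash
aws s3 rm s3://${S3_BUCKET} --recursive
aws s3 rb s3://${S3_BUCKET}
```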
- Delete the AWS IAM Role and Policy
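Assuming the role name and `$POLICY_ARN` from the AWS preparation steps, detach the policy first, then delete both:

```bash
aws iam detach-role-policy --role-name ${CLUSTER_NAME}-thanos-s3 --policy-arn ${POLICY_ARN}
aws iam delete-role --role-name ${CLUSTER_NAME}-thanos-s3
aws iam delete-policy --policy-arn ${POLICY_ARN}
```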