Using OpenShift Lightspeed with AWS Bedrock on ROSA
This content is authored by Red Hat experts, but has not yet been tested on every supported configuration.
OpenShift Lightspeed is an AI-powered assistant that helps developers and administrators interact with OpenShift using natural language. This guide walks you through integrating OpenShift Lightspeed with AWS Bedrock on Red Hat OpenShift Service on AWS (ROSA).
Prerequisites
- A ROSA cluster (4.20+)
- AWS CLI configured with appropriate credentials
- `oc` CLI logged in as `cluster-admin`
- `rosa` CLI
- An AWS account with access to Amazon Bedrock
- Access to a supported foundation model in Bedrock (e.g., Claude or Llama)
Architecture Overview
OpenShift Lightspeed uses Large Language Models (LLMs) to provide intelligent assistance. By integrating with AWS Bedrock, you can leverage AWS-managed foundation models while keeping your OpenShift environment secure and compliant.
The integration uses:
- AWS Bedrock: Provides the foundation models for AI inference
- IRSA (IAM Roles for Service Accounts): Enables secure authentication from ROSA to AWS Bedrock
- OpenShift Lightspeed Operator: Manages the Lightspeed service on your cluster
- Bedrock Proxy: Translation layer that bridges Lightspeed with Bedrock (see below)
Bedrock Proxy Component
The `bedrock-proxy` is a critical translation layer that enables OpenShift Lightspeed to communicate with AWS Bedrock. OpenShift Lightspeed is built to work with OpenAI-compatible APIs, but AWS Bedrock has its own unique API format. Rather than modifying Lightspeed itself, this lightweight proxy makes Bedrock "speak OpenAI" so they can communicate seamlessly.
What the Bedrock Proxy Does:

- API Translation
  - Receives OpenAI-format requests from Lightspeed (`/v1/chat/completions`)
  - Translates them to Bedrock format and calls the appropriate model
- Message Format Conversion
  - Amazon Nova doesn't support `system` role messages
  - The proxy extracts system prompts and prepends them to the first user message
  - Ensures only `user` and `assistant` roles are sent to Bedrock
- Parameter Mapping
  - Converts OpenAI's `max_tokens` → Bedrock's `max_new_tokens`
  - Transforms message structure from simple strings to Nova's `content: [{"text": "..."}]` format
- Streaming Support
  - Converts Bedrock streaming responses to OpenAI-compatible Server-Sent Events (SSE)
  - Reformats Bedrock's `contentBlockDelta` events into OpenAI's `delta` format
  - Ensures Lightspeed receives responses in the expected streaming format
- Authentication
  - Uses IRSA (IAM Roles for Service Accounts) for secure AWS authentication
  - The pod's service account token is projected and used to assume the AWS IAM role
  - No static credentials needed; all authentication is handled via the service account
- Multi-Model Support
  - Handles both Claude and Amazon Nova models
  - Automatically detects the model type and applies the appropriate format conversion
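To make the conversion concrete, here is a hedged illustration of what the proxy might produce for a Nova model. The model ID, field names, and exact Nova schema below are illustrative assumptions, not the project's verified payloads:

```json
{
  "openai_request": {
    "model": "amazon.nova-lite-v1:0",
    "max_tokens": 512,
    "messages": [
      {"role": "system", "content": "You are a helpful OpenShift assistant."},
      {"role": "user", "content": "How do I list pods?"}
    ]
  },
  "translated_nova_request": {
    "inferenceConfig": {"max_new_tokens": 512},
    "messages": [
      {
        "role": "user",
        "content": [
          {"text": "You are a helpful OpenShift assistant.\n\nHow do I list pods?"}
        ]
      }
    ]
  }
}
```

Note how the `system` message is folded into the first `user` message, `max_tokens` becomes `max_new_tokens`, and string content becomes a `content: [{"text": ...}]` list.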
Enable Amazon Bedrock Access
- Enable model access in Amazon Bedrock

  Navigate to the AWS Bedrock console and enable access to your desired foundation model. For this guide, we'll use Anthropic Claude.

- Request model access if needed

  If you don't have access to the model, request it through the AWS Bedrock console:

  - Navigate to Amazon Bedrock → Model access
  - Click "Request model access"
  - Select the model(s) you want to use
  - Submit the request
Configure IAM for Bedrock Access
- Set environment variables
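The original command block is environment-specific; a sketch of the variables the later steps reference might look like the following (names and values are placeholders, and `jq` is assumed to be installed):

```shell
export AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
export AWS_REGION=us-east-1                      # region where Bedrock model access was granted
export CLUSTER_NAME=my-rosa-cluster              # your ROSA cluster name
export LIGHTSPEED_NAMESPACE=openshift-lightspeed

# OIDC endpoint of the cluster without the https:// scheme (used in the IRSA trust policy)
export OIDC_ENDPOINT=$(rosa describe cluster -c ${CLUSTER_NAME} -o json \
  | jq -r .aws.sts.oidc_endpoint_url | sed 's|https://||')
```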
- Create the IAM policy for Bedrock access
- Create the IAM policy and capture the ARN
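A minimal sketch of these two steps, assuming the policy name `rosa-lightspeed-bedrock` and a least-privilege action list (adjust both for your environment):

```shell
cat <<EOF > bedrock-policy.json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "bedrock:InvokeModel",
        "bedrock:InvokeModelWithResponseStream"
      ],
      "Resource": "*"
    }
  ]
}
EOF

# Create the policy and capture its ARN for the role-attachment step
POLICY_ARN=$(aws iam create-policy \
  --policy-name rosa-lightspeed-bedrock \
  --policy-document file://bedrock-policy.json \
  --query Policy.Arn --output text)
echo ${POLICY_ARN}
```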
- Create the IAM role with a trust policy for IRSA
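A hedged sketch, assuming the proxy will run as a service account named `bedrock-proxy` in `${LIGHTSPEED_NAMESPACE}` and reusing the variables set earlier. The final `echo` covers the "get the role ARN" step:

```shell
cat <<EOF > trust-policy.json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::${AWS_ACCOUNT_ID}:oidc-provider/${OIDC_ENDPOINT}"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "${OIDC_ENDPOINT}:sub": "system:serviceaccount:${LIGHTSPEED_NAMESPACE}:bedrock-proxy"
        }
      }
    }
  ]
}
EOF

ROLE_ARN=$(aws iam create-role \
  --role-name rosa-lightspeed-bedrock \
  --assume-role-policy-document file://trust-policy.json \
  --query Role.Arn --output text)
aws iam attach-role-policy \
  --role-name rosa-lightspeed-bedrock \
  --policy-arn ${POLICY_ARN}
echo ${ROLE_ARN}
```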
- Get the role ARN for later use
Install OpenShift Lightspeed
- Create the namespace
- Create the service account with the IAM role annotation
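For example (the service account name `bedrock-proxy` is an assumption, and `eks.amazonaws.com/role-arn` is the annotation honored by the AWS pod identity webhook on ROSA STS clusters):

```shell
oc create namespace ${LIGHTSPEED_NAMESPACE}

cat <<EOF | oc apply -f -
apiVersion: v1
kind: ServiceAccount
metadata:
  name: bedrock-proxy
  namespace: ${LIGHTSPEED_NAMESPACE}
  annotations:
    eks.amazonaws.com/role-arn: ${ROLE_ARN}
EOF
```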
- Create the OperatorGroup

  The OperatorGroup tells the Operator Lifecycle Manager (OLM) which namespaces the operator should monitor.
- Subscribe to the Operator

  The Subscription links the operator from the Red Hat catalog to your cluster.
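A sketch of both resources; the `stable` channel and `lightspeed-operator` package name are assumptions you can confirm with `oc get packagemanifests -n openshift-marketplace | grep lightspeed`:

```shell
cat <<EOF | oc apply -f -
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: openshift-lightspeed
  namespace: ${LIGHTSPEED_NAMESPACE}
spec:
  targetNamespaces:
  - ${LIGHTSPEED_NAMESPACE}
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: lightspeed-operator
  namespace: ${LIGHTSPEED_NAMESPACE}
spec:
  channel: stable
  name: lightspeed-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
EOF
```

You can then watch `oc get csv -n ${LIGHTSPEED_NAMESPACE}` until the operator's CSV reports `Succeeded`.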
- Wait for the operator to be installed
- Deploy PostgreSQL for conversation cache

  OpenShift Lightspeed requires PostgreSQL for conversation caching.
- Create the PostgreSQL password secret
- Wait for PostgreSQL to be ready
- Build and deploy the Bedrock OpenAI-compatible proxy

  Since OpenShift Lightspeed doesn't natively support Bedrock, we need to build a proxy that provides an OpenAI-compatible API. Create a simple proxy application that exposes `/v1/chat/completions` and forwards requests to Bedrock.
- Expose the OpenShift internal image registry
- Get the OpenShift internal registry route
- Log in to the OpenShift internal registry
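These three steps can be sketched as follows (assuming `podman`; use the `docker` equivalents if you prefer):

```shell
# Enable the default external route for the internal registry
oc patch configs.imageregistry.operator.openshift.io/cluster \
  --type=merge --patch '{"spec":{"defaultRoute":true}}'

# Capture the registry hostname from the route
REGISTRY=$(oc get route default-route -n openshift-image-registry \
  -o jsonpath='{.spec.host}')

# Log in with your OpenShift token
podman login -u $(oc whoami) -p $(oc whoami -t) ${REGISTRY}
```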
- Build and tag the image for the internal registry

  If building on a Mac, specify the platform to ensure compatibility with OpenShift's x86_64 nodes.
- Push the image to the OpenShift internal registry
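For example (the image name and tag are placeholders):

```shell
# --platform ensures an x86_64 image even when building on Apple Silicon
podman build --platform linux/amd64 \
  -t ${REGISTRY}/${LIGHTSPEED_NAMESPACE}/bedrock-proxy:latest .

podman push ${REGISTRY}/${LIGHTSPEED_NAMESPACE}/bedrock-proxy:latest
```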
- Deploy the proxy
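A minimal Deployment and Service sketch; the internal image path, port 8000, and resource names are assumptions to adapt:

```shell
cat <<EOF | oc apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: bedrock-proxy
  namespace: ${LIGHTSPEED_NAMESPACE}
spec:
  replicas: 1
  selector:
    matchLabels:
      app: bedrock-proxy
  template:
    metadata:
      labels:
        app: bedrock-proxy
    spec:
      serviceAccountName: bedrock-proxy   # carries the IRSA role annotation
      containers:
      - name: bedrock-proxy
        image: image-registry.openshift-image-registry.svc:5000/${LIGHTSPEED_NAMESPACE}/bedrock-proxy:latest
        ports:
        - containerPort: 8000
        env:
        - name: AWS_REGION
          value: ${AWS_REGION}
---
apiVersion: v1
kind: Service
metadata:
  name: bedrock-proxy
  namespace: ${LIGHTSPEED_NAMESPACE}
spec:
  selector:
    app: bedrock-proxy
  ports:
  - port: 8000
    targetPort: 8000
EOF
```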
- Wait for the proxy to be ready
- Create a credentials secret for the provider

  Even though the proxy uses IRSA for authentication, the OLSConfig schema requires a `credentialsSecretRef` with a key named `apitoken`. We'll create a dummy secret to satisfy the validation.
- Create the OLSConfig custom resource
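A hedged sketch of both the dummy secret and the OLSConfig; the provider fields and the Claude model ID below are illustrative, so verify the exact schema with `oc explain olsconfig.spec --recursive`:

```shell
oc create secret generic bedrock-api-keys \
  --from-literal=apitoken=dummy -n ${LIGHTSPEED_NAMESPACE}

cat <<EOF | oc apply -f -
apiVersion: ols.openshift.io/v1alpha1
kind: OLSConfig
metadata:
  name: cluster
spec:
  llm:
    providers:
    - name: bedrock
      type: openai    # the proxy speaks the OpenAI-compatible API
      url: http://bedrock-proxy.${LIGHTSPEED_NAMESPACE}.svc.cluster.local:8000/v1
      credentialsSecretRef:
        name: bedrock-api-keys
      models:
      - name: anthropic.claude-3-5-sonnet-20240620-v1:0
  ols:
    defaultProvider: bedrock
    defaultModel: anthropic.claude-3-5-sonnet-20240620-v1:0
EOF
```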
Verify the OLSConfig
- Check if the OLSConfig was created successfully
- Verify there are no validation errors
- Check the OLSConfig status

  Look for events and status conditions. A healthy OLSConfig should show:

  - `Valid: True`
  - No error messages in the events
- Check the operator logs for reconciliation issues
Verify the Installation
- Check the Lightspeed deployment status
- Verify the pods are running
- Check the Lightspeed application server logs
- Verify the Bedrock proxy is working
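The checks above might look like this; the `lightspeed-app-server` deployment name matches the one referenced under Troubleshooting, and the `/v1/models` endpoint is an assumption about the proxy:

```shell
oc get deployments -n ${LIGHTSPEED_NAMESPACE}
oc get pods -n ${LIGHTSPEED_NAMESPACE}
oc logs -n ${LIGHTSPEED_NAMESPACE} deployment/lightspeed-app-server

# Smoke-test the proxy's OpenAI-compatible endpoint from inside the cluster
oc run curl-test --rm -it -n ${LIGHTSPEED_NAMESPACE} \
  --image=registry.access.redhat.com/ubi9/ubi-minimal -- \
  curl -s http://bedrock-proxy:8000/v1/models
```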
Access OpenShift Lightspeed
- Access Lightspeed through the OpenShift web console

  Navigate to the OpenShift console and look for the Lightspeed icon (typically in the top navigation bar or help menu).
- Test the integration

  Try asking questions like:
- “How do I create a new project?”
- “Show me how to deploy a containerized application”
- “What are the current pod resources in my cluster?”
Troubleshooting
OLSConfig Validation Errors
If you see validation errors like `credentialsSecretRef: Required value` or `missing key 'apitoken'`:
- Ensure the credentials secret exists and has the correct key.
- The secret must have a key named `apitoken` (not `api_key`):
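For example (the secret name `bedrock-api-keys` is an assumption; use whatever your OLSConfig references):

```shell
# The output should list an "apitoken" key
oc get secret bedrock-api-keys -n openshift-lightspeed -o jsonpath='{.data}'
```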
Environment Variables Not Expanding
If the OLSConfig shows a literal `${LIGHTSPEED_NAMESPACE}` in URLs instead of the actual namespace:
- Check that the environment variable is set.
- If using a heredoc, ensure you use `cat <<EOF` (without quotes around the delimiter) to allow variable expansion, or replace the variable with the actual namespace value `openshift-lightspeed`.
- Verify the URLs in the OLSConfig:
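The expansion behavior is easy to demonstrate with plain shell: an unquoted heredoc delimiter expands variables, while a quoted one passes them through literally:

```shell
LIGHTSPEED_NAMESPACE=openshift-lightspeed

# Unquoted delimiter: the variable expands
cat <<EOF
url: http://bedrock-proxy.${LIGHTSPEED_NAMESPACE}.svc.cluster.local:8000/v1
EOF

# Quoted delimiter: the variable is left unexpanded as literal text
cat <<'EOF'
url: http://bedrock-proxy.${LIGHTSPEED_NAMESPACE}.svc.cluster.local:8000/v1
EOF
```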
Operator Not Reconciling
If the OLSConfig exists but no app server pods are created:
- Check operator logs for errors:
- Trigger a manual reconciliation:
- Wait for the `lightspeed-app-server` pod to be created (image pull can take 3-5 minutes):
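A sketch of these checks; the operator deployment name is an assumption, and the annotation key is arbitrary (any metadata change on the CR prompts the operator to reconcile):

```shell
# Operator logs
oc logs -n openshift-lightspeed deployment/lightspeed-operator-controller-manager

# Nudge the operator by touching the CR (the annotation key/value are arbitrary)
oc annotate olsconfig cluster last-touched="$(date +%s)" --overwrite

# Watch for the app server pod
oc get pods -n openshift-lightspeed -w
```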
Permission Errors
If you see permission errors in the logs:
Verify the IAM role has the correct permissions:
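For example, assuming the role and policy names used earlier in this guide:

```shell
aws iam list-attached-role-policies --role-name rosa-lightspeed-bedrock

# Inspect the actual policy document
aws iam get-policy-version \
  --policy-arn ${POLICY_ARN} \
  --version-id $(aws iam get-policy --policy-arn ${POLICY_ARN} \
      --query Policy.DefaultVersionId --output text)
```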
Model Access Issues
Verify model access in Bedrock:
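For example, list the foundation models available to your account in the target region:

```shell
aws bedrock list-foundation-models --region ${AWS_REGION} \
  --query 'modelSummaries[].modelId' --output table
```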
Service Account Annotation
Verify the service account has the correct IAM role annotation:
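For example (the service account name is an assumption; note the escaped dots in the annotation key for JSONPath):

```shell
oc get sa bedrock-proxy -n openshift-lightspeed \
  -o jsonpath='{.metadata.annotations.eks\.amazonaws\.com/role-arn}'
```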
Cleanup
To remove OpenShift Lightspeed and associated resources:
- Delete the OLSConfig
- Delete the operator subscription
- Delete the namespace
- Remove AWS IAM resources
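A sketch of the cleanup, assuming the resource names used earlier in this guide:

```shell
oc delete olsconfig cluster
oc delete subscription lightspeed-operator -n openshift-lightspeed
oc delete namespace openshift-lightspeed

aws iam detach-role-policy --role-name rosa-lightspeed-bedrock --policy-arn ${POLICY_ARN}
aws iam delete-role --role-name rosa-lightspeed-bedrock
aws iam delete-policy --policy-arn ${POLICY_ARN}
```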