
Using OpenShift Lightspeed with AWS Bedrock on ROSA

This content is authored by Red Hat experts, but has not yet been tested on every supported configuration.

This guide has been validated on OpenShift 4.20. Operator CRD names, API versions, and console paths may differ on other versions.

OpenShift Lightspeed is an AI-powered assistant that helps developers and administrators interact with OpenShift using natural language. This guide walks you through integrating OpenShift Lightspeed with AWS Bedrock on Red Hat OpenShift Service on AWS (ROSA).

Prerequisites

  • A ROSA cluster (version 4.20 or later)
  • AWS CLI configured with appropriate credentials
  • oc CLI logged in as cluster-admin
  • rosa CLI
  • An AWS account with access to Amazon Bedrock
  • Access to a supported foundation model in Bedrock (for example, Anthropic Claude or Amazon Nova)

Architecture Overview

OpenShift Lightspeed uses Large Language Models (LLMs) to provide intelligent assistance. By integrating with AWS Bedrock, you can leverage AWS-managed foundation models while keeping your OpenShift environment secure and compliant.

The integration uses:

  • AWS Bedrock: Provides the foundation models for AI inference
  • IRSA (IAM Roles for Service Accounts): Enables secure authentication from ROSA to AWS Bedrock
  • OpenShift Lightspeed Operator: Manages the Lightspeed service on your cluster
  • Bedrock Proxy: Translation layer that bridges Lightspeed with Bedrock (see below)

Bedrock Proxy Component

The bedrock-proxy is a critical translation layer that enables OpenShift Lightspeed to communicate with AWS Bedrock. OpenShift Lightspeed is built to work with OpenAI-compatible APIs, but AWS Bedrock has its own unique API format. Rather than modifying Lightspeed itself, this lightweight proxy makes Bedrock “speak OpenAI” so they can communicate seamlessly.

What the Bedrock Proxy Does:

  1. API Translation

    • Receives OpenAI format requests from Lightspeed (/v1/chat/completions)
    • Translates them to Bedrock format and calls the appropriate model
  2. Message Format Conversion

    • Amazon Nova doesn’t support system role messages
    • The proxy extracts system prompts and prepends them to the first user message
    • Ensures only user and assistant roles are sent to Bedrock
  3. Parameter Mapping

    • Converts OpenAI’s max_tokens → Bedrock’s max_new_tokens
    • Transforms message structure from simple strings to Nova’s content: [{"text": "..."}] format
  4. Streaming Support

    • Converts Bedrock streaming responses to OpenAI-compatible Server-Sent Events (SSE)
    • Reformats Bedrock’s contentBlockDelta events into OpenAI’s delta format
    • Ensures Lightspeed receives responses in the expected streaming format
  5. Authentication

    • Uses IRSA (IAM Roles for Service Accounts) for secure AWS authentication
    • The pod’s service account token is projected and used to assume the AWS IAM role
    • No static credentials are needed; all authentication flows through the projected service account token
  6. Multi-Model Support

    • Handles both Claude and Amazon Nova models
    • Automatically detects model type and applies appropriate format conversion
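
The conversions in points 2 and 3 are easy to see in code. A minimal sketch, with a hypothetical helper name (the real proxy's code may differ):

```python
def to_nova_request(openai_req: dict) -> dict:
    """Convert an OpenAI /v1/chat/completions body to an Amazon Nova body."""
    system_parts = [m["content"] for m in openai_req["messages"]
                    if m["role"] == "system"]
    turns = [dict(m) for m in openai_req["messages"] if m["role"] != "system"]

    # Nova rejects the "system" role: prepend system text to the first user turn.
    if system_parts and turns and turns[0]["role"] == "user":
        turns[0]["content"] = "\n".join(system_parts) + "\n\n" + turns[0]["content"]

    return {
        # Nova expects content as a list of text blocks, not a plain string.
        "messages": [{"role": m["role"], "content": [{"text": m["content"]}]}
                     for m in turns],
        # OpenAI's max_tokens maps to Nova's max_new_tokens.
        "inferenceConfig": {"max_new_tokens": openai_req.get("max_tokens", 512)},
    }
```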

Enable Amazon Bedrock Access

  1. Enable model access in Amazon Bedrock

    Navigate to the AWS Bedrock console and enable access to your desired foundation model. For this guide, we’ll use Anthropic Claude.

  2. Request model access if needed

    If you don’t have access to the model, request it through the AWS Bedrock console:

    • Navigate to Amazon Bedrock → Model access
    • Click “Request model access”
    • Select the model(s) you want to use
    • Submit the request
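
    With access granted, you can confirm which models are visible from the CLI. A quick check, assuming the AWS CLI is configured for the same account and region:

```shell
# List the Anthropic model IDs available to this account/region.
aws bedrock list-foundation-models \
  --by-provider anthropic \
  --query 'modelSummaries[].modelId' \
  --output table
```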

Configure IAM for Bedrock Access

  1. Set environment variables

  2. Create IAM policy for Bedrock access

  3. Create the IAM policy and capture the ARN

  4. Create IAM role with trust policy for IRSA

  5. Get the role ARN for later use
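
Taken together, the five steps above might look like the following sketch. The cluster name, the policy/role name (lightspeed-bedrock), the service account name (bedrock-proxy), and the broad `"Resource": "*"` are illustrative assumptions; substitute and scope them for your environment:

```shell
# 1. Environment variables (illustrative values).
export CLUSTER_NAME=my-rosa-cluster
export AWS_REGION=us-east-1
export AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)
export OIDC_PROVIDER=$(rosa describe cluster -c ${CLUSTER_NAME} -o json \
  | jq -r '.aws.sts.oidc_endpoint_url' | sed 's~https://~~')

# 2. Policy allowing model invocation, including streaming.
cat <<EOF > bedrock-policy.json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Action": ["bedrock:InvokeModel", "bedrock:InvokeModelWithResponseStream"],
    "Resource": "*"
  }]
}
EOF

# 3. Create the policy and capture its ARN.
export POLICY_ARN=$(aws iam create-policy --policy-name lightspeed-bedrock \
  --policy-document file://bedrock-policy.json \
  --query Policy.Arn --output text)

# 4. Trust policy letting the proxy's service account assume the role (IRSA).
cat <<EOF > trust-policy.json
{
  "Version": "2012-10-17",
  "Statement": [{
    "Effect": "Allow",
    "Principal": {"Federated": "arn:aws:iam::${AWS_ACCOUNT_ID}:oidc-provider/${OIDC_PROVIDER}"},
    "Action": "sts:AssumeRoleWithWebIdentity",
    "Condition": {"StringEquals": {
      "${OIDC_PROVIDER}:sub": "system:serviceaccount:openshift-lightspeed:bedrock-proxy"
    }}
  }]
}
EOF
aws iam create-role --role-name lightspeed-bedrock \
  --assume-role-policy-document file://trust-policy.json
aws iam attach-role-policy --role-name lightspeed-bedrock \
  --policy-arn ${POLICY_ARN}

# 5. Role ARN for later use.
export ROLE_ARN=$(aws iam get-role --role-name lightspeed-bedrock \
  --query Role.Arn --output text)
```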

Install OpenShift Lightspeed

  1. Create the namespace

  2. Create the service account with the IAM role annotation

  3. Create the OperatorGroup

    The OperatorGroup tells the Operator Lifecycle Manager (OLM) which namespaces the operator should monitor.

  4. Subscribe to the Operator

    The Subscription links the operator from the Red Hat catalog to your cluster.

  5. Wait for the operator to be installed
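
A sketch of steps 1-5, assuming the proxy's service account is named bedrock-proxy and `${ROLE_ARN}` comes from the IAM section. The channel and catalog names are assumptions; verify them against the catalog on your cluster:

```shell
# 1-2. Namespace and IRSA-annotated service account.
oc create namespace openshift-lightspeed
oc create serviceaccount bedrock-proxy -n openshift-lightspeed
oc annotate serviceaccount bedrock-proxy -n openshift-lightspeed \
  eks.amazonaws.com/role-arn=${ROLE_ARN}

# 3-4. OperatorGroup and Subscription for the Lightspeed operator.
cat <<EOF | oc apply -f -
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: openshift-lightspeed
  namespace: openshift-lightspeed
---
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: lightspeed-operator
  namespace: openshift-lightspeed
spec:
  channel: stable
  name: lightspeed-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
EOF

# 5. Watch until the CSV reports Succeeded.
oc get csv -n openshift-lightspeed -w
```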

  6. Deploy PostgreSQL for conversation cache

    OpenShift Lightspeed requires PostgreSQL for conversation caching.
  7. Create PostgreSQL password secret

  8. Wait for PostgreSQL to be ready
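
Steps 6-8 might look like the following sketch. The image, secret, and object names are illustrative, and this single-replica deployment is not production-hardened:

```shell
# 7. Password secret consumed by the database.
oc create secret generic postgres-credentials -n openshift-lightspeed \
  --from-literal=password="$(openssl rand -base64 16)"

# 6. Minimal PostgreSQL Deployment and Service.
cat <<EOF | oc apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
  namespace: openshift-lightspeed
spec:
  replicas: 1
  selector: {matchLabels: {app: postgres}}
  template:
    metadata: {labels: {app: postgres}}
    spec:
      containers:
      - name: postgres
        image: registry.redhat.io/rhel9/postgresql-16:latest
        env:
        - name: POSTGRESQL_USER
          value: lightspeed
        - name: POSTGRESQL_DATABASE
          value: lightspeed
        - name: POSTGRESQL_PASSWORD
          valueFrom: {secretKeyRef: {name: postgres-credentials, key: password}}
        ports: [{containerPort: 5432}]
---
apiVersion: v1
kind: Service
metadata:
  name: postgres
  namespace: openshift-lightspeed
spec:
  selector: {app: postgres}
  ports: [{port: 5432}]
EOF

# 8. Wait for readiness.
oc rollout status deployment/postgres -n openshift-lightspeed
```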

  9. Build and deploy Bedrock OpenAI-compatible proxy

    Since OpenShift Lightspeed doesn’t natively support Bedrock, we need to build a proxy that provides an OpenAI-compatible API.

    Create a simple proxy application:
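
A minimal sketch of the proxy and its Dockerfile, assuming FastAPI, uvicorn, and boto3, and the Nova InvokeModel request/response shapes. The file names and the MODEL_ID variable are illustrative, and streaming (point 4 in the overview above) is omitted for brevity:

```shell
cat > app.py <<'EOF'
"""Minimal OpenAI-compatible facade over Amazon Bedrock (sketch)."""
import json
import os

import boto3
from fastapi import FastAPI, Request

app = FastAPI()
bedrock = boto3.client("bedrock-runtime",
                       region_name=os.environ.get("AWS_REGION", "us-east-1"))

@app.post("/v1/chat/completions")
async def chat(request: Request):
    body = await request.json()
    # Nova rejects the "system" role: fold system prompts into the first user turn.
    system = "\n".join(m["content"] for m in body["messages"]
                       if m["role"] == "system")
    turns = [dict(m) for m in body["messages"] if m["role"] != "system"]
    if system and turns and turns[0]["role"] == "user":
        turns[0]["content"] = system + "\n\n" + turns[0]["content"]
    payload = {
        "messages": [{"role": m["role"], "content": [{"text": m["content"]}]}
                     for m in turns],
        "inferenceConfig": {"max_new_tokens": body.get("max_tokens", 512)},
    }
    resp = bedrock.invoke_model(modelId=os.environ["MODEL_ID"],
                                body=json.dumps(payload))
    out = json.loads(resp["body"].read())
    text = out["output"]["message"]["content"][0]["text"]
    # Reply in the OpenAI chat-completion shape Lightspeed expects.
    return {"object": "chat.completion",
            "choices": [{"index": 0, "finish_reason": "stop",
                         "message": {"role": "assistant", "content": text}}]}
EOF

cat > Dockerfile <<'EOF'
FROM registry.access.redhat.com/ubi9/python-312
RUN pip install --no-cache-dir fastapi uvicorn boto3
COPY app.py .
CMD ["uvicorn", "app:app", "--host", "0.0.0.0", "--port", "8000"]
EOF
```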

  10. Expose the OpenShift internal image registry

  11. Get the OpenShift internal registry route

  12. Login to the OpenShift internal registry

  13. Build and tag the image for the internal registry

    If building on a Mac, specify the platform to ensure compatibility with OpenShift’s x86_64 nodes.
  14. Push the image to OpenShift internal registry

  15. Deploy the proxy

  16. Wait for proxy to be ready
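
Steps 10-16 as one hedged sequence. The image name (bedrock-proxy) and the Deployment/Service manifests are illustrative:

```shell
# 10-12. Expose the internal registry and log in to it.
oc patch configs.imageregistry.operator.openshift.io/cluster \
  --type merge -p '{"spec":{"defaultRoute":true}}'
REGISTRY=$(oc get route default-route -n openshift-image-registry \
  -o jsonpath='{.spec.host}')
podman login -u $(oc whoami) -p $(oc whoami -t) ${REGISTRY}

# 13-14. Build for x86_64 (important on Apple Silicon) and push.
podman build --platform linux/amd64 \
  -t ${REGISTRY}/openshift-lightspeed/bedrock-proxy:latest .
podman push ${REGISTRY}/openshift-lightspeed/bedrock-proxy:latest

# 15-16. Deploy under the IRSA-annotated service account and wait.
cat <<EOF | oc apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: bedrock-proxy
  namespace: openshift-lightspeed
spec:
  replicas: 1
  selector: {matchLabels: {app: bedrock-proxy}}
  template:
    metadata: {labels: {app: bedrock-proxy}}
    spec:
      serviceAccountName: bedrock-proxy
      containers:
      - name: proxy
        image: image-registry.openshift-image-registry.svc:5000/openshift-lightspeed/bedrock-proxy:latest
        ports: [{containerPort: 8000}]
---
apiVersion: v1
kind: Service
metadata:
  name: bedrock-proxy
  namespace: openshift-lightspeed
spec:
  selector: {app: bedrock-proxy}
  ports: [{port: 8000}]
EOF
oc rollout status deployment/bedrock-proxy -n openshift-lightspeed
```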

  17. Create a credentials secret for the provider

    Even though the proxy uses IRSA for authentication, the OLSConfig schema requires a credentialsSecretRef with a key named apitoken. We’ll create a dummy secret to satisfy the validation.
  18. Create the OLSConfig custom resource
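
A sketch of steps 17-18. The secret name, provider name, and model ID are illustrative, and the field names follow the OLSConfig schema; verify them against the CRD installed on your cluster (oc explain olsconfig.spec):

```shell
# 17. Dummy secret: the schema requires an `apitoken` key even with IRSA.
oc create secret generic bedrock-api-keys -n openshift-lightspeed \
  --from-literal=apitoken=irsa-not-used

# 18. Point Lightspeed at the proxy's in-cluster Service.
cat <<EOF | oc apply -f -
apiVersion: ols.openshift.io/v1alpha1
kind: OLSConfig
metadata:
  name: cluster
spec:
  llm:
    providers:
    - name: bedrock
      type: openai
      url: http://bedrock-proxy.openshift-lightspeed.svc.cluster.local:8000/v1
      credentialsSecretRef:
        name: bedrock-api-keys
      models:
      - name: us.amazon.nova-pro-v1:0
  ols:
    defaultProvider: bedrock
    defaultModel: us.amazon.nova-pro-v1:0
EOF
```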

Verify the OLSConfig

  1. Check if the OLSConfig was created successfully

  2. Verify there are no validation errors

  3. Check the OLSConfig status

    Look for events and status conditions. A healthy OLSConfig should show:

    • Valid: True
    • No error messages in the events
  4. Check the operator logs for reconciliation issues
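
For example (the operator deployment name is an assumption and may differ between operator versions):

```shell
oc get olsconfig cluster
oc describe olsconfig cluster | tail -n 20
oc logs -n openshift-lightspeed \
  deployment/lightspeed-operator-controller-manager --tail=50
```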

Verify the Installation

  1. Check the Lightspeed deployment status

  2. Verify the pods are running

  3. Check the Lightspeed application server logs

  4. Verify the Bedrock proxy is working
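
For example, assuming the names used earlier in this guide:

```shell
oc get deployments -n openshift-lightspeed
oc get pods -n openshift-lightspeed
oc logs -n openshift-lightspeed deployment/lightspeed-app-server --tail=50

# Smoke-test the proxy from inside the cluster (ubi-minimal ships curl-minimal).
oc run curl-test --rm -it \
  --image=registry.access.redhat.com/ubi9/ubi-minimal -- \
  curl -s http://bedrock-proxy.openshift-lightspeed.svc.cluster.local:8000/v1/chat/completions \
  -H 'Content-Type: application/json' \
  -d '{"model":"us.amazon.nova-pro-v1:0","messages":[{"role":"user","content":"Hello"}]}'
```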

Access OpenShift Lightspeed

  1. Access Lightspeed through the OpenShift web console

    Navigate to the OpenShift console and look for the Lightspeed icon (typically in the top navigation bar or help menu).

  2. Test the integration

    Try asking questions like:

    • “How do I create a new project?”
    • “Show me how to deploy a containerized application”
    • “What are the current pod resources in my cluster?”

Troubleshooting

OLSConfig Validation Errors

If you see validation errors like “credentialsSecretRef: Required value” or “missing key ‘apitoken’”:

  1. Ensure the credentials secret exists and has the correct key:

  2. The secret must have a key named apitoken (not api_key):
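
For example, assuming the secret is named bedrock-api-keys as earlier in this guide:

```shell
# Inspect which keys the secret carries; the output must include "apitoken".
oc get secret bedrock-api-keys -n openshift-lightspeed -o jsonpath='{.data}'

# Recreate it with the expected key if necessary.
oc delete secret bedrock-api-keys -n openshift-lightspeed
oc create secret generic bedrock-api-keys -n openshift-lightspeed \
  --from-literal=apitoken=irsa-not-used
```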

Environment Variables Not Expanding

If the OLSConfig shows literal ${LIGHTSPEED_NAMESPACE} in URLs instead of the actual namespace:

  1. Check that the environment variable is set:

  2. If using a heredoc, use an unquoted delimiter (cat <<EOF, not cat <<'EOF') so the shell expands variables, or replace the variable with the literal namespace, openshift-lightspeed.

  3. Verify the URLs in the OLSConfig:
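
For example:

```shell
# 1. Is the variable set in the current shell?
echo "${LIGHTSPEED_NAMESPACE:-<unset>}"

# 3. Do the provider URLs contain a real namespace, not a literal placeholder?
oc get olsconfig cluster -o jsonpath='{.spec.llm.providers[*].url}'
```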

Operator Not Reconciling

If the OLSConfig exists but no app server pods are created:

  1. Check operator logs for errors:

  2. Trigger a manual reconciliation:

  3. Wait for the lightspeed-app-server pod to be created (image pull can take 3-5 minutes):
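
A sketch of the three checks. The deployment name and the annotation key are assumptions; any annotation change on the CR is enough to trigger a reconcile:

```shell
# 1. Operator logs.
oc logs -n openshift-lightspeed \
  deployment/lightspeed-operator-controller-manager --tail=100

# 2. Nudge reconciliation with a harmless annotation change.
oc annotate olsconfig cluster force-sync="$(date +%s)" --overwrite

# 3. Watch for the app server pod to appear.
oc get pods -n openshift-lightspeed -w
```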

Permission Errors

If you see permission errors in the logs:

Verify the IAM role has the correct permissions:
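
For example, assuming the role and policy names from the IAM section:

```shell
aws iam list-attached-role-policies --role-name lightspeed-bedrock
aws iam get-policy-version --policy-arn ${POLICY_ARN} \
  --version-id $(aws iam get-policy --policy-arn ${POLICY_ARN} \
    --query Policy.DefaultVersionId --output text)
```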

Model Access Issues

Verify model access in Bedrock:
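
For example, checking that the model ID configured in the OLSConfig is actually available in your region:

```shell
aws bedrock list-foundation-models --region ${AWS_REGION} \
  --query 'modelSummaries[?contains(modelId, `nova`)].modelId'
```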

Service Account Annotation

Verify the service account has the correct IAM role annotation:
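
For example:

```shell
oc get serviceaccount bedrock-proxy -n openshift-lightspeed \
  -o jsonpath='{.metadata.annotations.eks\.amazonaws\.com/role-arn}'
```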

Cleanup

To remove OpenShift Lightspeed and associated resources:

  1. Delete the OLSConfig

  2. Delete the operator subscription

  3. Delete the namespace

  4. Remove AWS IAM resources
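
A sketch, assuming the names used earlier in this guide (detach the policy before deleting the role):

```shell
oc delete olsconfig cluster
oc delete subscription lightspeed-operator -n openshift-lightspeed
oc delete namespace openshift-lightspeed

aws iam detach-role-policy --role-name lightspeed-bedrock \
  --policy-arn ${POLICY_ARN}
aws iam delete-role --role-name lightspeed-bedrock
aws iam delete-policy --policy-arn ${POLICY_ARN}
```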

Additional Resources
