Track every model invocation and break free from opaque hourly granularity

Traditional cloud cost reporting often obscures the specifics of AI spend. Daily and hourly aggregates fail to reveal which teams, projects, or customers are driving costs. And without the ability to attribute costs to specific API calls or business units, it’s difficult to pinpoint the cause when AI spending escalates.

For organizations expanding their AI initiatives, this lack of visibility presents significant obstacles: accountability for AI usage among teams is compromised, opportunities for optimization are not readily apparent, and cost allocation devolves into a laborious manual process relying on spreadsheets and approximations.

This integration transforms AI from an opaque, unmanageable cost center into a fully visible, allocated, and optimized expense category—giving you the same level of financial control over AI that you have for compute, storage, and other cloud resources.

What's New

We’re excited to unveil our new Amazon Bedrock Integration, the latest addition to nOps Inform, designed specifically to help your business gain unprecedented visibility into AI spending with second-by-second granularity for every model invocation. Starting today, you can access this new integration directly from your Organization Settings under Integrations.

Revolutionary Granularity for Every API Call

The Amazon Bedrock Integration captures every single model invocation in real-time through Bedrock’s Model Invocation Logs, giving you second-level precision instead of daily or hourly summaries. Now you can track individual API calls: see exactly when each model was invoked, who invoked it, how long it took, what tokens were consumed, and what it cost—transforming AI from a black box expense into a fully transparent cost center.

Unlike traditional cost reporting, which shows at best hourly granularity per model, you can now inject metadata into each API call to track costs per user, client, business unit, and more. Every model, every call, and every token is fully tracked and visible.

True Cost Allocation with Request Metadata

Request Metadata becomes tags in nOps, letting you attribute every dollar spent to specific teams, projects, customers, or business units. Now you can allocate AI costs on Bedrock accurately: slice and dice your Bedrock spending by any custom tag you define in your API calls, from team names to customer IDs to application versions.

Include metadata in your Bedrock API calls like `team: 'engineering'`, `project: 'chatbot-v2'`, or `customer_id: '12345'`, and every key-value pair automatically becomes a cost allocation dimension in nOps. Finally, you can answer questions like “How much does our AI chatbot cost per customer?” or “Which engineering team needs to optimize their AI usage?” without manual tracking or guesswork.
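Metadata like this can be attached through the Bedrock Converse API’s `requestMetadata` parameter, which Bedrock writes into the Model Invocation Log for each call. A minimal sketch using boto3 — the model ID and tag keys (`team`, `project`, `customer_id`) are illustrative; use whatever dimensions your organization allocates costs by:

```python
def tagged_converse_request(prompt: str, team: str, customer_id: str) -> dict:
    """Build Converse API arguments with cost-allocation metadata attached.

    Every requestMetadata key/value pair is recorded in the Model
    Invocation Log and surfaces as a cost allocation tag in nOps.
    """
    return {
        "modelId": "anthropic.claude-3-haiku-20240307-v1:0",  # example model
        "messages": [{"role": "user", "content": [{"text": prompt}]}],
        "requestMetadata": {  # hypothetical tag keys -- choose your own
            "team": team,
            "project": "chatbot-v2",
            "customer_id": customer_id,
        },
    }

def send(prompt: str, team: str, customer_id: str) -> dict:
    """Invoke the model (requires boto3 and AWS credentials)."""
    import boto3
    client = boto3.client("bedrock-runtime", region_name="us-east-1")
    return client.converse(**tagged_converse_request(prompt, team, customer_id))
```

Because the metadata rides along with the request itself, no separate tagging pipeline is needed: the attribution is captured at the moment of invocation.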

Who Benefits Most

FinOps Teams

Automatically attribute every Bedrock dollar to the right team, project, or customer using request metadata as tags, enabling accurate cost allocation without manual tracking.

Engineering Leads

Pinpoint exactly which applications, features, or code paths drive AI costs and identify optimization opportunities by analyzing token consumption, latency, and costs at the individual invocation level.

Finance & Executive Teams

Transform AI spending from an opaque line item into a fully transparent, manageable cost center with complete visibility to set budgets, monitor AI ROI, and make data-driven investment decisions.

How It Works

The integration works by connecting nOps to your Amazon Bedrock Model Invocation Logs stored in CloudWatch. Once you enable Model Invocation Logs in Bedrock and configure the appropriate IAM permissions for nOps, the platform automatically retrieves and processes every model invocation with complete metadata preservation.
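Model Invocation Logging can be turned on from the Bedrock console, or programmatically via the `PutModelInvocationLoggingConfiguration` API. A hedged sketch with boto3 — the log group name and role ARN are placeholders, and the role must grant Bedrock permission to write to your log group:

```python
def invocation_logging_config(log_group: str, role_arn: str) -> dict:
    """Logging config that sends every Bedrock model invocation to CloudWatch Logs."""
    return {
        "cloudWatchConfig": {
            "logGroupName": log_group,
            "roleArn": role_arn,  # role Bedrock assumes to write the logs
        },
        "textDataDeliveryEnabled": True,  # include text prompts/completions
    }

def enable_invocation_logging(log_group: str, role_arn: str) -> None:
    """Apply the config (requires boto3 and AWS credentials)."""
    import boto3
    bedrock = boto3.client("bedrock", region_name="us-east-1")
    bedrock.put_model_invocation_logging_configuration(
        loggingConfig=invocation_logging_config(log_group, role_arn)
    )

# Example (placeholder ARN and log group name):
# enable_invocation_logging(
#     "/aws/bedrock/model-invocations",
#     "arn:aws:iam::123456789012:role/BedrockLoggingRole",
# )
```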

The integration captures comprehensive data for every API call including model type, input/output tokens, latency, timestamps, and any request metadata you’ve included—giving you complete visibility into your AI spending patterns.
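To see what that captured data looks like, here is a sketch of pulling the cost-relevant fields out of one invocation log record. The field names (`modelId`, `input.inputTokenCount`, `output.outputTokenCount`, `requestMetadata`) follow the Bedrock invocation-log schema as we understand it; verify them against records in your own log group:

```python
import json

def summarize_invocation(log_line: str) -> dict:
    """Extract cost-relevant fields from one Model Invocation Log record."""
    record = json.loads(log_line)
    return {
        "model": record.get("modelId"),
        "timestamp": record.get("timestamp"),
        "input_tokens": record.get("input", {}).get("inputTokenCount", 0),
        "output_tokens": record.get("output", {}).get("outputTokenCount", 0),
        "tags": record.get("requestMetadata", {}),  # becomes nOps dimensions
    }

# A fabricated sample record for illustration:
sample = json.dumps({
    "timestamp": "2024-06-01T12:00:00Z",
    "modelId": "anthropic.claude-3-haiku-20240307-v1:0",
    "input": {"inputTokenCount": 42},
    "output": {"outputTokenCount": 128},
    "requestMetadata": {"team": "engineering", "customer_id": "12345"},
})
print(summarize_invocation(sample))
```

Because every record carries its own token counts and metadata, per-call costs can be attributed without sampling or estimation.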

How to Get Started

To start using the Bedrock Inform Integration, navigate to Organization Settings > Integrations > Inform and click the Add Bedrock Integration button. You’ll need to provide the CloudWatch log group name where your Model Invocation Logs are stored. Want more details? Check out our detailed integration guide with step-by-step instructions and IAM policy templates.

If you're already on nOps...

Have questions about Bedrock Cost Visibility? Need help getting started? Our dedicated support team is here for you. Simply reach out to your Customer Success Manager or visit our Help Center. If you’re not sure who your CSM is, send our Support Team a message.

If you’re new to nOps…

nOps was recently ranked #1 with five stars in G2’s cloud cost management category, and we optimize $2+ billion in cloud spend for our customers.

Join our customers using nOps to understand your cloud costs and leverage automation with complete confidence by booking a demo with one of our AWS experts.