Providers¶
Providers are LLM (Large Language Model) services that power your agents. Ag2Trust supports multiple providers, allowing you to choose the best model for each use case.
Supported Providers¶
| Provider | Models | Best For |
|---|---|---|
| OpenAI | GPT-4o, GPT-4o-mini, GPT-4 Turbo | General purpose, fast responses |
| Anthropic | Claude Opus 4, Claude Sonnet 4 | Complex reasoning, longer context |
| AWS Bedrock | Claude, Llama, Mistral, Titan | AWS-native, data residency, multiple model families |
Adding a Provider¶
Via Dashboard¶
- Navigate to Settings > Providers
- Click Add Provider
- Select the provider type
- Enter your API key
- Give it a name (e.g., "OpenAI Production")
- Click Save
Provider Configuration¶
Configuration varies by provider type:
OpenAI / Anthropic (Direct API)¶
| Field | Required | Description |
|---|---|---|
| Name | Yes | Identifier for this provider config |
| Type | Yes | openai or anthropic |
| API Key | Yes | Your provider API key |
| Base URL | No | Custom endpoint (OpenAI-compatible APIs) |
AWS Bedrock¶
| Field | Required | Description |
|---|---|---|
| Name | Yes | Identifier for this provider config |
| Type | Yes | bedrock |
| AWS Access Key ID | Yes | IAM user access key |
| AWS Secret Access Key | Yes | IAM user secret key |
| AWS Region | Yes | Region where Bedrock is enabled (e.g., us-east-1) |
| Session Token | No | For temporary credentials from AWS STS |
All Required Fields Must Be Present
AWS Access Key ID, Secret Access Key, and Region are all strictly required. If any field is missing, agents will start in "ack-only" mode and cannot process LLM requests.
Security¶
Credential Encryption¶
Provider API keys are encrypted using AWS KMS envelope encryption:
```
┌─────────────────────────────────────────────────┐
│              Encryption Process                 │
│                                                 │
│  1. Generate random DEK (Data Encryption Key)   │
│  2. Encrypt API key with DEK (AES-256-GCM)      │
│  3. Encrypt DEK with KMS master key             │
│  4. Store: encrypted_key + encrypted_DEK        │
└─────────────────────────────────────────────────┘
```
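The four-step process above can be sketched in Python. This is an illustrative local implementation using the `cryptography` package, with a local AES-GCM key wrap standing in for the AWS KMS master-key call (it is not Ag2Trust's actual code):

```python
# Sketch of envelope encryption: a random DEK encrypts the secret,
# and a master key encrypts (wraps) the DEK. In production the wrap
# step would be a kms:Encrypt call rather than a local key.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def envelope_encrypt(plaintext: bytes, master_key: bytes) -> dict:
    dek = AESGCM.generate_key(bit_length=256)          # 1. random DEK
    nonce = os.urandom(12)
    ciphertext = AESGCM(dek).encrypt(nonce, plaintext, None)  # 2. AES-256-GCM
    wrap_nonce = os.urandom(12)
    wrapped_dek = AESGCM(master_key).encrypt(wrap_nonce, dek, None)  # 3. wrap DEK
    # 4. store ciphertext and wrapped DEK together
    return {"ciphertext": ciphertext, "nonce": nonce,
            "wrapped_dek": wrapped_dek, "wrap_nonce": wrap_nonce}

def envelope_decrypt(blob: dict, master_key: bytes) -> bytes:
    dek = AESGCM(master_key).decrypt(blob["wrap_nonce"], blob["wrapped_dek"], None)
    return AESGCM(dek).decrypt(blob["nonce"], blob["ciphertext"], None)

master = AESGCM.generate_key(bit_length=256)
blob = envelope_encrypt(b"sk-example-api-key", master)
assert envelope_decrypt(blob, master) == b"sk-example-api-key"
```

The design means a KMS compromise alone does not expose stored keys, and rotating the master key only requires re-wrapping the small DEKs, not re-encrypting every credential.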
Key Points¶
- API keys are never stored in plain text
- Keys are decrypted only when starting an agent
- Decrypted keys are passed to containers via environment variables (in-memory only)
- No .env files containing credentials
Credential Flow¶
```mermaid
sequenceDiagram
    participant D as Dashboard
    participant B as Backend
    participant K as AWS KMS
    participant A as Agent Container
    D->>B: Save provider (API key)
    B->>K: Encrypt API key
    K-->>B: Encrypted credential
    B->>B: Store encrypted
    Note over B,A: Later: Agent Start
    B->>K: Decrypt credential
    K-->>B: Plain API key
    B->>A: Pass via env var
    A->>A: Use for LLM calls
```
Model Selection¶
OpenAI Models¶
| Model | Speed | Intelligence | Cost | Use Case |
|---|---|---|---|---|
| gpt-4o | Fast | High | $$ | General purpose |
| gpt-4o-mini | Very Fast | Good | $ | High volume, simple tasks |
| gpt-4-turbo | Medium | High | $$$ | Complex reasoning |
Anthropic Models¶
| Model | Speed | Intelligence | Cost | Use Case |
|---|---|---|---|---|
| claude-sonnet-4-20250514 | Fast | High | $$ | Balanced performance |
| claude-opus-4-20250514 | Medium | Very High | $$$ | Complex analysis |
AWS Bedrock Models¶
Bedrock provides access to multiple model families through AWS:
| Model Family | Example Model ID | Best For |
|---|---|---|
| Anthropic Claude 3.5 | anthropic.claude-3-5-sonnet-20241022-v2:0 | Latest, best performance |
| Anthropic Claude 3 | anthropic.claude-3-sonnet-20240229-v1:0 | General purpose, reasoning |
| Meta Llama | meta.llama3-70b-instruct-v1:0 | Open-source, cost-effective |
| Mistral | mistral.mixtral-8x7b-instruct-v0:1 | Fast inference, coding |
| Amazon Titan | amazon.titan-text-express-v1 | AWS-native, embeddings |
| Cohere | cohere.command-r-plus-v1:0 | RAG, enterprise search |
Model Availability
Available models vary by AWS region. You must enable model access in the AWS Bedrock console before using a model.
Choosing a Model¶
- High volume + simple tasks → gpt-4o-mini
- General purpose → gpt-4o or claude-sonnet-4
- Complex reasoning → claude-opus-4
- Code generation → claude-sonnet-4 or mistral
- Cost sensitive → gpt-4o-mini or llama3
- AWS data residency → AWS Bedrock (any model)
Multiple Providers¶
You can configure multiple providers for different purposes:
Example Setup¶
| Provider Name | Type | Use Case |
|---|---|---|
| openai-production | OpenAI | Production agents |
| openai-development | OpenAI | Testing (separate quota) |
| anthropic-complex | Anthropic | Complex reasoning tasks |
Benefits¶
- Quota separation: Don't let dev testing affect prod limits
- Model specialization: Use the right model for each task
- Failover: Switch providers if one has issues
- Cost management: Track spending per use case
AWS Bedrock Setup¶
AWS Bedrock allows you to use models hosted in your own AWS account, providing data residency controls and consolidated AWS billing.
Prerequisites¶
- AWS Account with Bedrock access enabled in your region
- IAM User or Role with appropriate permissions
- Model Access enabled in the Bedrock console for desired models
Required IAM Permissions¶
Create an IAM policy with these minimum permissions:
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "bedrock:InvokeModel",
        "bedrock:InvokeModelWithResponseStream"
      ],
      "Resource": "arn:aws:bedrock:*::foundation-model/*"
    }
  ]
}
```
Least Privilege
Restrict the Resource ARN to specific models for tighter security.
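For example, the policy statement below scopes access to a single Claude model in one region (the region and model ID here are illustrative; substitute your own):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "bedrock:InvokeModel",
        "bedrock:InvokeModelWithResponseStream"
      ],
      "Resource": "arn:aws:bedrock:us-east-1::foundation-model/anthropic.claude-3-5-sonnet-20241022-v2:0"
    }
  ]
}
```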
Enable Model Access¶
Before using a model, you must request access in the AWS console:
- Go to AWS Console > Amazon Bedrock > Model access
- Click Manage model access
- Select the models you want to use
- Submit the access request (access to some models is granted instantly; others require approval)
Adding a Bedrock Provider¶
- Navigate to Settings > Providers
- Click Add Provider
- Select AWS Bedrock as the type
- Enter your credentials:
- Name: e.g., "Bedrock Production"
- AWS Access Key ID: Your IAM access key
- AWS Secret Access Key: Your IAM secret key
- AWS Region: e.g., us-east-1
- Click Save
Model ID Format¶
Bedrock uses specific model identifiers. When creating an agent type, use the full model ID:
| Model | Model ID |
|---|---|
| Claude 3.5 Sonnet | anthropic.claude-3-5-sonnet-20241022-v2:0 |
| Claude 3 Sonnet | anthropic.claude-3-sonnet-20240229-v1:0 |
| Claude 3 Haiku | anthropic.claude-3-haiku-20240307-v1:0 |
| Llama 3 70B | meta.llama3-70b-instruct-v1:0 |
| Mistral Large | mistral.mistral-large-2407-v1:0 |
Finding Model IDs
View available model IDs in the AWS Bedrock console under Foundation models, or list them with the AWS CLI.
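For instance, the following command lists the model IDs available in a region (it requires configured AWS credentials, so the output will vary by account and region):

```shell
aws bedrock list-foundation-models \
  --region us-east-1 \
  --query "modelSummaries[].modelId"
```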
Using Temporary Credentials¶
For enhanced security, use AWS STS temporary credentials:
- Generate temporary credentials via STS AssumeRole
- Include the Session Token when adding the provider
- Update credentials before they expire (typically 1-12 hours)
```shell
# Example: Get temporary credentials
aws sts assume-role \
  --role-arn arn:aws:iam::123456789:role/BedrockAgentRole \
  --role-session-name ag2trust-session
```
Bedrock-Specific Errors¶
| Error | Cause | Solution |
|---|---|---|
| AccessDeniedException | Model not enabled | Enable model in Bedrock console |
| ValidationException | Invalid model ID | Verify model ID format |
| ResourceNotFoundException | Model not in region | Check regional availability |
| ThrottlingException | Rate limit exceeded | Reduce request rate or request quota increase |
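When ThrottlingException occurs, backing off and retrying is the usual remedy. The sketch below shows the generic pattern; the exception type, retry count, and delays are placeholders, not Ag2Trust or boto3 specifics:

```python
# Generic exponential-backoff retry: wait base_delay * 2**attempt
# between attempts, re-raising after the final failure.
import time

def retry_with_backoff(fn, retries=4, base_delay=0.5, retryable=(RuntimeError,)):
    """Call fn(); on a retryable error, back off exponentially and retry."""
    for attempt in range(retries):
        try:
            return fn()
        except retryable:
            if attempt == retries - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))

# Simulated throttled call: fails twice, then succeeds.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("ThrottlingException (simulated)")
    return "ok"

assert retry_with_backoff(flaky, base_delay=0.01) == "ok"
```

In real Bedrock code you would catch the botocore ClientError for ThrottlingException rather than RuntimeError; jittering the delay also helps when many agents retry at once.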
Rate Limits¶
Provider Rate Limits¶
Each LLM provider enforces its own rate limits:
| Provider | Typical Limits |
|---|---|
| OpenAI | Varies by tier (TPM, RPM) |
| Anthropic | Varies by tier |
| AWS Bedrock | Varies by model and account quotas |
Ag2Trust Rate Limits¶
Ag2Trust applies additional rate limiting as a safeguard:
| Limit Type | Value |
|---|---|
| Agent tool calls | 5/minute |
| HTTP requests | 3/minute |
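A limit such as "5 tool calls per minute" is typically enforced with a sliding window over recent call timestamps. This is a hedged sketch of that pattern, not Ag2Trust's actual limiter:

```python
# Sliding-window rate limiter: allow at most max_calls within any
# trailing window of window_s seconds.
import time
from collections import deque

class SlidingWindowLimiter:
    def __init__(self, max_calls, window_s):
        self.max_calls = max_calls
        self.window_s = window_s
        self.calls = deque()  # timestamps of accepted calls

    def allow(self, now=None):
        now = time.monotonic() if now is None else now
        # Drop timestamps that have aged out of the window.
        while self.calls and now - self.calls[0] >= self.window_s:
            self.calls.popleft()
        if len(self.calls) < self.max_calls:
            self.calls.append(now)
            return True
        return False

limiter = SlidingWindowLimiter(max_calls=5, window_s=60)
assert all(limiter.allow(now=float(i)) for i in range(5))  # first five pass
assert limiter.allow(now=5.0) is False                     # sixth is rejected
assert limiter.allow(now=61.0) is True                     # window has rolled
```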
Monitoring Usage¶
Dashboard Metrics¶
Track provider usage in the Dashboard:
- Tokens consumed per agent
- Response times
- Error rates
- Cost estimates
Per-Agent Statistics¶
Returns:
- Total tokens used
- Average response time
- Messages processed
- Errors encountered
Best Practices¶
1. Use Separate Keys for Environments¶
Provider: openai-production
└── Production API key with higher limits
Provider: openai-development
└── Development API key for testing
2. Monitor Token Usage¶
Set up alerts for:
- Unusual token consumption spikes
- High error rates
- Slow response times
3. Choose Models Wisely¶
| Task Complexity | Recommended |
|---|---|
| Simple Q&A | gpt-4o-mini |
| Customer support | gpt-4o |
| Code review | claude-sonnet-4 |
| Complex analysis | claude-opus-4 |
| AWS data residency | Bedrock (Claude/Llama) |
4. Rotate Keys Periodically¶
- Generate new API key at provider
- Add new provider config in Ag2Trust
- Update agent types to use new provider
- Restart affected agents
- Delete old provider config
Troubleshooting¶
"Invalid API Key"¶
- Verify key is correct (no extra spaces)
- Check key hasn't been revoked at provider
- Ensure key has required permissions
"Rate Limit Exceeded"¶
- Check your provider's rate limit tier
- Reduce agent activity or spread load across multiple provider configurations
- Consider upgrading your provider plan
"Model Not Available"¶
- Verify model name is correct
- Check model is available in your region
- Some models require special access
Bedrock: "AccessDeniedException"¶
- Enable model access in AWS Bedrock console
- Verify IAM permissions include bedrock:InvokeModel
- Check the model is available in your selected region
Bedrock: "Invalid credentials"¶
- Verify AWS Access Key ID and Secret are correct
- Check IAM user hasn't been deactivated
- If using session tokens, ensure they haven't expired
Provider API¶
List Providers¶
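The list endpoint is presumably a GET on the same path used for creation; the response shape shown here is illustrative only, not a documented schema:

```
GET /api/providers
```

```json
[
  {"name": "OpenAI Production", "type": "openai"},
  {"name": "Bedrock Production", "type": "bedrock"}
]
```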
Add Provider¶
```
POST /api/providers
Content-Type: application/json

{
  "name": "OpenAI Production",
  "type": "openai",
  "api_key": "sk-..."
}
```
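For a Bedrock provider, the request body carries AWS credentials instead of an API key. The field names below follow the configuration table earlier in this page, but the exact JSON keys are an assumption:

```
POST /api/providers
Content-Type: application/json

{
  "name": "Bedrock Production",
  "type": "bedrock",
  "aws_access_key_id": "AKIA...",
  "aws_secret_access_key": "...",
  "aws_region": "us-east-1"
}
```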
Delete Provider¶
Cannot Delete In-Use Providers
Providers with active agents cannot be deleted. Stop or reassign agents first.