📖 Documentation

Integration Guides

Connect any tool to SecureKeyHub in minutes. Full OpenAI API compatibility means your existing code works; just change the base URL.

🦞 OpenClaw: QQ Bot Powered by SecureKeyHub

OpenClaw is a full-featured QQ bot framework. By pointing it at SecureKeyHub, you get access to 40+ models with smart routing, cost control, and real-time billing, all without exposing raw provider keys.

~/.openclaw/hub/config.json
{
  "provider": "openai_compatible",
  "base_url": "https://api.securekeyhub.com/v1",
  "api_key": "skh_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
  "models": [
    { "name": "deepseek-v4-flash", "model": "deepseek-v4-flash" },
    { "name": "deepseek-v4-pro", "model": "deepseek-v4-pro" },
    { "name": "glm-4.7-flash", "model": "glm-4.7-flash" },
    { "name": "qwq-32b", "model": "qwq-32b" }
  ]
}
💡 All SecureKeyHub model aliases work directly. No need to configure individual API keys per model: a single SecureKeyHub key manages them all.
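Because the file above is plain JSON, a quick sanity check can be scripted before restarting the bot. A minimal sketch (the field names follow the example above; the `validate` helper is illustrative, not part of OpenClaw):

```python
import json

# The OpenClaw provider config from above, embedded as a string for the check.
CONFIG = """
{
  "provider": "openai_compatible",
  "base_url": "https://api.securekeyhub.com/v1",
  "api_key": "skh_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx",
  "models": [
    { "name": "deepseek-v4-flash", "model": "deepseek-v4-flash" },
    { "name": "deepseek-v4-pro", "model": "deepseek-v4-pro" }
  ]
}
"""

def validate(raw: str) -> list[str]:
    """Parse the config and return the model aliases it exposes."""
    cfg = json.loads(raw)  # raises ValueError if the JSON is malformed
    assert cfg["provider"] == "openai_compatible"
    assert cfg["base_url"].startswith("https://")
    assert cfg["api_key"].startswith("skh_"), "expected a SecureKeyHub key"
    return [m["name"] for m in cfg["models"]]

print(validate(CONFIG))  # → ['deepseek-v4-flash', 'deepseek-v4-pro']
```

Note that `json.loads` rejects `//` comments, which is why the config file itself must stay comment-free.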

🤖 Hermes Agent: Command-Line AI Assistant

Hermes Agent natively supports SecureKeyHub. Configure it as your model provider and use any supported model with full streaming, tool calls, and cost tracking.

~/.hermes/config.yaml
# ~/.hermes/config.yaml
provider: securekeyhub
model: deepseek-v4-flash
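The config above is a flat set of key/value pairs, so a tool embedding it doesn't need a full YAML parser. A dependency-free reader, sketched under the assumption that the file stays flat (no nesting, no quoting):

```python
# Minimal reader for a flat key/value config like the one above.
# This is a sketch, not a general YAML parser.
def read_flat_yaml(text: str) -> dict[str, str]:
    cfg = {}
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments and blank lines
        if not line:
            continue
        key, _, value = line.partition(":")
        cfg[key.strip()] = value.strip()
    return cfg

sample = """\
# ~/.hermes/config.yaml
provider: securekeyhub
model: deepseek-v4-flash
"""
print(read_flat_yaml(sample))  # → {'provider': 'securekeyhub', 'model': 'deepseek-v4-flash'}
```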

🔌 OpenAI-Compatible SDK

Any OpenAI SDK works with SecureKeyHub. Just change the base URL and API key. No library changes, no vendor lock-in.

Python
from openai import OpenAI

client = OpenAI(
    base_url="https://api.securekeyhub.com/v1",
    api_key="skh_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
)

response = client.chat.completions.create(
    model="deepseek-v4-flash",
    messages=[{"role": "user", "content": "Hello!"}]
)
print(response.choices[0].message.content)
Node.js
import OpenAI from 'openai';

const client = new OpenAI({
  baseURL: 'https://api.securekeyhub.com/v1',
  apiKey: 'skh_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
});

const response = await client.chat.completions.create({
  model: 'deepseek-v4-flash',
  messages: [{ role: 'user', content: 'Hello!' }]
});

console.log(response.choices[0].message.content);
cURL
curl https://api.securekeyhub.com/v1/chat/completions \
  -H "Authorization: Bearer skh_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "deepseek-v4-flash",
    "messages": [{"role": "user", "content": "Hello!"}]
  }'
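All three examples send the same HTTP request. For illustration, here is that request assembled with nothing but the Python standard library; a sketch using the URL and key placeholder from the examples above (the request is built but deliberately not sent):

```python
import json
import urllib.request

BASE_URL = "https://api.securekeyhub.com/v1"
API_KEY = "skh_xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"

def build_chat_request(model: str, content: str) -> urllib.request.Request:
    """Assemble the same request the cURL example sends (not executed here)."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": content}],
    }).encode("utf-8")
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=body,
        headers={
            "Authorization": f"Bearer {API_KEY}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_chat_request("deepseek-v4-flash", "Hello!")
print(req.full_url)      # https://api.securekeyhub.com/v1/chat/completions
print(req.get_method())  # POST
```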

Supported Models

All models are available through SecureKeyHub at competitive pricing. Prices are listed as input / output per 1M tokens.

| Model Alias | Underlying Model | Best For | Price / 1M tokens |
|---|---|---|---|
| deepseek-v4-flash | DeepSeek V4 Flash | Lightweight chat, lowest cost | $0.15 / $0.60 |
| deepseek-v4-pro | DeepSeek V4 Pro | Complex reasoning, code gen | $1.50 / $6.00 |
| deepseek-reasoner | DeepSeek R1 | Deep reasoning, math | $0.55 / $2.19 |
| deepseek-chat | DeepSeek Chat | General conversation | $0.27 / $1.10 |
| glm-4.7-flash | GLM-4.7-Flash | Chinese optimized, fast | $0.10 / $0.10 |
| qwq-32b | QwQ-32B | General, creative writing | $0.30 / $0.60 |
| kimi-k2.5 | Kimi K2.5 | Long context, Chinese | $0.45 / $1.80 |
| llama-4-scout | Llama 4 Scout | Open-source | $0.15 / $0.60 |
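The per-1M-token rates in the table make per-call cost a simple weighted sum. A sketch of that arithmetic (the `estimate_cost` helper is illustrative, not a SecureKeyHub API; rates are copied from the table above):

```python
# Prices from the table above, in USD per 1M tokens: (input, output).
PRICES = {
    "deepseek-v4-flash": (0.15, 0.60),
    "deepseek-v4-pro":   (1.50, 6.00),
    "deepseek-reasoner": (0.55, 2.19),
    "deepseek-chat":     (0.27, 1.10),
    "glm-4.7-flash":     (0.10, 0.10),
    "qwq-32b":           (0.30, 0.60),
    "kimi-k2.5":         (0.45, 1.80),
    "llama-4-scout":     (0.15, 0.60),
}

def estimate_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimated USD cost for one call, from the per-1M-token rates."""
    inp, out = PRICES[model]
    return (input_tokens * inp + output_tokens * out) / 1_000_000

# 100k prompt tokens + 20k completion tokens on deepseek-v4-flash:
print(round(estimate_cost("deepseek-v4-flash", 100_000, 20_000), 4))  # → 0.027
```

Real bills come from the dashboard's real-time billing; this is only a pre-call estimate.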

Get Started in 3 Steps

1

Create Account

Sign up at dashboard.securekeyhub.com; no credit card required.

2

Copy Your API Key

One key to rule them all. No more managing per-provider credentials.

3

Start Calling Models

Use any OpenAI-compatible SDK. Just change the base URL and you're live.

Go to Dashboard →