MCP Server Overview
Understanding HyperMemory as an MCP server
HyperMemory exposes its memory capabilities through the Model Context Protocol (MCP) — an open standard for connecting AI agents to external tools.
What is MCP?
The Model Context Protocol is a standard interface for AI agents to interact with external services. Think of it like USB for AI: a universal connector that any compatible agent can plug into.
Key benefits:
- No SDK required — Standard protocol, no library dependencies
- Universal compatibility — Works with any MCP-compatible agent
- Streaming support — Real-time responses via SSE
- Structured tools — Well-defined inputs and outputs
HyperMemory as an MCP server
HyperMemory runs as a remote MCP server at api.hypermemory.io. Your agent connects over HTTP/SSE, authenticates with an API key, and gains access to memory tools.
┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐
│ │ HTTP │ │ Query │ │
│ Your Agent │ ◄─────► │ MCP Server │ ◄─────► │ Hypergraph │
│ (Claude, etc.) │ /SSE │ hypermemory.io │ │ Database │
│ │ │ │ │ │
└─────────────────┘ └─────────────────┘ └─────────────────┘
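In practice, connecting means adding HyperMemory to your agent's MCP server configuration. The exact shape varies by client, and the field names and the `/mcp` endpoint path below are illustrative assumptions; check your client's documentation and your HyperMemory dashboard for the real values:

```json
{
  "mcpServers": {
    "hypermemory": {
      "url": "https://api.hypermemory.io/mcp",
      "headers": {
        "Authorization": "Bearer YOUR_API_KEY"
      }
    }
  }
}
```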
Available memory tools
HyperMemory exposes 8 MCP tools for memory operations:
| Tool | Description | Query Cost |
|---|---|---|
| memory_store | Store a new memory node | Free |
| memory_recall | Query memories by natural language | 1 query |
| memory_find_related | Find nodes related to a given node | 1 query |
| memory_get_relationships | Get edges/hyperedges for a node | 1 query |
| memory_update | Update an existing node | Free |
| memory_forget | Delete a node | Free |
| memory_export_subgraph | Export a portion of the graph | 1 query |
| memory_load_link | Import a subgraph | Free |
Write operations (store, update, forget, load_link) are free and don’t count against your query limit.
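Each of these tools is invoked with a standard MCP `tools/call` JSON-RPC request. A minimal sketch of building those payloads (the tool names come from the table above; the exact argument shape for `memory_recall` is an assumption):

```python
import json

def build_tool_call(request_id: int, name: str, arguments: dict) -> dict:
    """Build a JSON-RPC 2.0 `tools/call` request, the message shape MCP
    clients use to invoke a named tool."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": name, "arguments": arguments},
    }

# One metered read (memory_recall) and one free write (memory_store).
recall = build_tool_call(1, "memory_recall", {"query": "current Q3 priorities"})
store = build_tool_call(2, "memory_store", {"content": "Q3 priority is API performance"})
print(json.dumps(recall, indent=2))
```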
How tools appear to your agent
When your agent connects to HyperMemory, it discovers these tools automatically. Here’s what Claude sees:
{
  "tools": [
    {
      "name": "memory_store",
      "description": "Store a new memory in the knowledge graph",
      "inputSchema": {
        "type": "object",
        "properties": {
          "content": {
            "type": "string",
            "description": "The memory content to store"
          },
          "node_type": {
            "type": "string",
            "description": "Category of the memory"
          },
          "metadata": {
            "type": "object",
            "description": "Additional key-value data"
          }
        },
        "required": ["content"]
      }
    }
  ]
}
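A custom client can check arguments against that `inputSchema` before sending a call. This is a minimal sketch handling only the `string` and `object` types that appear in the schema above, not a full JSON Schema validator:

```python
def validate_input(schema: dict, arguments: dict):
    """Check required fields and basic types against an MCP inputSchema."""
    type_map = {"string": str, "object": dict}
    missing = [f for f in schema.get("required", []) if f not in arguments]
    if missing:
        return False, f"missing required field(s): {', '.join(missing)}"
    for field, value in arguments.items():
        spec = schema["properties"].get(field)
        if spec and not isinstance(value, type_map.get(spec["type"], object)):
            return False, f"{field} must be a {spec['type']}"
    return True, "ok"

# The memory_store schema from the listing above.
schema = {
    "type": "object",
    "properties": {
        "content": {"type": "string"},
        "node_type": {"type": "string"},
        "metadata": {"type": "object"},
    },
    "required": ["content"],
}
ok, msg = validate_input(schema, {"content": "Q3 priority is API performance"})
```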
Your agent can then call these tools naturally:
User: "Remember that our Q3 priority is API performance"
Agent (thinking): I should store this as a memory
Agent (calls): memory_store(
  content="Q3 priority is API performance",
  node_type="decision"
)
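On the wire, that call becomes a standard MCP `tools/call` request (the request id is arbitrary):

```json
{
  "jsonrpc": "2.0",
  "id": 2,
  "method": "tools/call",
  "params": {
    "name": "memory_store",
    "arguments": {
      "content": "Q3 priority is API performance",
      "node_type": "decision"
    }
  }
}
```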
Supported clients
HyperMemory works with any MCP-compatible client:
| Client | Status | Notes |
|---|---|---|
| Claude Desktop | ✅ Supported | Native MCP support |
| OpenClaw | ✅ Supported | Native MCP support |
| CrewAI | ✅ Supported | Via MCP tools |
| OpenAI Agents | ✅ Supported | Via function calling adapter |
| Custom Python | ✅ Supported | Use MCP client library |
| Custom Node.js | ✅ Supported | Use MCP client library |
Connection flow
1. Configure — Add HyperMemory to your agent’s MCP server list
2. Authenticate — Provide your API key in the Authorization header
3. Discover — Your agent learns the available tools via the MCP handshake
4. Use — Your agent calls memory tools as needed
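The first three steps can be sketched as the messages a client actually sends. The `initialize` and `tools/list` methods are standard MCP; the `/mcp` endpoint path and the `Bearer` header format are assumptions, so substitute the values from your HyperMemory dashboard:

```python
# Steps 1-2: where to connect and how to authenticate.
ENDPOINT = "https://api.hypermemory.io/mcp"  # path is an assumption
HEADERS = {
    "Authorization": "Bearer YOUR_API_KEY",  # replace with your real key
    "Content-Type": "application/json",
}

# Step 3: the discovery handshake, an `initialize` request followed by
# `tools/list`, both standard MCP methods.
initialize = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "initialize",
    "params": {
        "protocolVersion": "2024-11-05",
        "capabilities": {},
        "clientInfo": {"name": "my-agent", "version": "0.1.0"},
    },
}
list_tools = {"jsonrpc": "2.0", "id": 2, "method": "tools/list"}
```

After the `tools/list` response arrives, step 4 is just `tools/call` requests like the ones shown earlier.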
For step-by-step setup instructions, see the Connect your agent guide.
Error handling
MCP tools return structured errors:
{
  "error": {
    "code": "INVALID_PARAMETER",
    "message": "node_id is required",
    "details": {
      "parameter": "node_id",
      "expected": "string"
    }
  }
}
Common error codes:
| Code | Description |
|---|---|
| INVALID_PARAMETER | Missing or invalid parameter |
| NODE_NOT_FOUND | Referenced node doesn’t exist |
| UNAUTHORIZED | Invalid or missing API key |
| RATE_LIMITED | Too many requests |
| QUOTA_EXCEEDED | Query limit reached |
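A client-side policy for reacting to these codes might look like the following. The retry/escalate/fail split is an illustrative design choice, not part of the HyperMemory API:

```python
def handle_error(error: dict) -> str:
    """Map a structured HyperMemory error code to a client action."""
    code = error.get("code")
    if code == "RATE_LIMITED":
        return "retry"      # transient: back off, then resend the request
    if code in ("UNAUTHORIZED", "QUOTA_EXCEEDED"):
        return "escalate"   # needs operator action: fix the key or raise the quota
    return "fail"           # INVALID_PARAMETER / NODE_NOT_FOUND: fix the call itself

action = handle_error({"code": "RATE_LIMITED", "message": "Too many requests"})
```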