Integration Guides

CrewAI Integration

Complete guide to integrating HyperMemory with CrewAI

This guide covers integrating HyperMemory with CrewAI for persistent memory across agent crews.

Prerequisites

  • CrewAI installed (pip install crewai)
  • A HyperMemory account
  • A HyperMemory API key

Installation

Step 1: Install MCP support

pip install crewai crewai-tools mcp

Step 2: Set environment variable

export HYPERMEMORY_API_KEY="your_api_key_here"
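The tools below read this variable at call time, so a missing key only surfaces mid-crew. A small startup guard fails fast instead (a hypothetical helper, not part of HyperMemory):

```python
import os

def require_api_key() -> str:
    """Return the HyperMemory API key, failing loudly if it is not set."""
    key = os.environ.get("HYPERMEMORY_API_KEY")
    if not key:
        raise RuntimeError(
            "HYPERMEMORY_API_KEY is not set; export it before running the crew."
        )
    return key
```

Call `require_api_key()` once at the top of your entry script before constructing any agents.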

Step 3: Create memory tools

Create a tools file for HyperMemory integration:

# hypermemory_tools.py
import os
from typing import Optional

from crewai_tools import BaseTool
from mcp import Client

HYPERMEMORY_URL = "https://api.hypermemory.io/mcp"

def _client() -> Client:
    """Create an MCP client authenticated with the HyperMemory API key."""
    return Client(
        server_url=HYPERMEMORY_URL,
        headers={"Authorization": f"Bearer {os.environ['HYPERMEMORY_API_KEY']}"}
    )

class MemoryStoreTool(BaseTool):
    name: str = "memory_store"
    description: str = "Store a memory in the knowledge graph"
    
    def _run(self, content: str, node_type: str = "fact", metadata: Optional[dict] = None):
        return _client().call_tool_sync(
            "memory_store",
            {"content": content, "node_type": node_type, "metadata": metadata or {}}
        )

class MemoryRecallTool(BaseTool):
    name: str = "memory_recall"
    description: str = "Query memories by natural language"
    
    def _run(self, query: str, max_results: int = 5):
        return _client().call_tool_sync(
            "memory_recall",
            {"query": query, "max_results": max_results}
        )

class MemoryFindRelatedTool(BaseTool):
    name: str = "memory_find_related"
    description: str = "Find memories related to a specific node"
    
    def _run(self, node_id: str, depth: int = 2):
        return _client().call_tool_sync(
            "memory_find_related",
            {"node_id": node_id, "depth": depth}
        )

Basic usage

Create agents with memory

from crewai import Agent, Task, Crew
from hypermemory_tools import MemoryStoreTool, MemoryRecallTool, MemoryFindRelatedTool

# Create memory tools
memory_store = MemoryStoreTool()
memory_recall = MemoryRecallTool()
memory_find_related = MemoryFindRelatedTool()

# Create a researcher agent with memory
researcher = Agent(
    role="Research Analyst",
    goal="Research topics and store findings in persistent memory",
    backstory="You are a thorough researcher who documents all findings.",
    tools=[memory_store, memory_recall, memory_find_related],
    verbose=True
)

# Create a writer agent with memory access
writer = Agent(
    role="Content Writer",
    goal="Write content based on research stored in memory",
    backstory="You are a skilled writer who builds on documented research.",
    tools=[memory_recall, memory_find_related],
    verbose=True
)

Create tasks that use memory

research_task = Task(
    description="""
    Research the topic: {topic}
    
    1. First, check memory for existing knowledge: memory_recall("{topic}")
    2. Conduct new research
    3. Store all findings using memory_store with appropriate types:
       - Use node_type="research" for findings
       - Use node_type="source" for references
       - Include metadata like date, confidence, source
    4. Create relationships between related findings
    """,
    agent=researcher,
    expected_output="Research summary with stored memory references"
)

writing_task = Task(
    description="""
    Write a report on: {topic}
    
    1. Query memory for all research: memory_recall("{topic} research")
    2. Find related context: memory_find_related on key findings
    3. Synthesize into a coherent report
    4. Store the final report in memory with node_type="report"
    """,
    agent=writer,
    expected_output="Complete report based on researched knowledge"
)

Run the crew

crew = Crew(
    agents=[researcher, writer],
    tasks=[research_task, writing_task],
    verbose=True
)

result = crew.kickoff(inputs={"topic": "AI agent memory systems"})

Advanced patterns

Crew callbacks for automatic memory

from crewai import Crew
from datetime import datetime

def on_task_complete(task, output):
    """Store task outputs automatically"""
    memory_store = MemoryStoreTool()
    memory_store._run(
        content=f"Task '{task.description[:50]}...' completed: {str(output)[:200]}",
        node_type="task_output",
        metadata={
            "task_id": str(task.id),
            "agent": task.agent.role,
            "timestamp": datetime.now().isoformat()
        }
    )

crew = Crew(
    agents=[researcher, writer],
    tasks=[research_task, writing_task],
    task_callback=on_task_complete
)

Shared crew memory

Multiple crews sharing knowledge:

# Research crew
research_crew = Crew(
    agents=[researcher],
    tasks=[research_task],
    verbose=True
)

# Analysis crew (different run, same memory)
analysis_crew = Crew(
    agents=[analyst],
    tasks=[Task(
        description="""
        Analyze findings from previous research.
        Start by recalling: memory_recall("research findings")
        """,
        agent=analyst,
        expected_output="Analysis of previously stored research findings"
    )]
)

# Research crew stores findings
research_crew.kickoff(inputs={"topic": "market trends"})

# Analysis crew retrieves and builds on them
analysis_crew.kickoff()  # Has access to stored research

Memory-driven agent selection

def select_agent_by_expertise(topic):
    """Select the best agent based on stored expertise"""
    memory_recall = MemoryRecallTool()
    
    # Check which agents have expertise
    expertise = memory_recall._run(
        query=f"agent expertise in {topic}",
        max_results=10
    )
    
    # Return the agent with the most relevant stored expertise
    # (agents_by_name and default_agent are defined elsewhere in your app)
    if expertise['results']:
        best_match = expertise['results'][0]
        return agents_by_name[best_match['metadata']['agent_name']]
    
    return default_agent
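The selection step can be isolated into a pure helper that is easy to unit-test without live memory. This is a sketch: the `{"results": [...]}` shape mirrors the `memory_recall` output used above, and the `agent_name` metadata key is an assumption of this example:

```python
def pick_agent_name(recall_results: dict, default_name: str) -> str:
    """Return the agent name from the top recall hit, or the default.

    `recall_results` is assumed to have the shape returned by memory_recall:
    {"results": [{"metadata": {"agent_name": ...}, ...}, ...]}.
    """
    results = recall_results.get("results") or []
    for hit in results:
        name = (hit.get("metadata") or {}).get("agent_name")
        if name:
            return name  # first hit with a usable name wins
    return default_name  # no expertise recorded yet
```

You would then map the returned name onto your own agent registry before kicking off the crew.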

Error handling

import time

from hypermemory_tools import MemoryRecallTool

memory_recall = MemoryRecallTool()

try:
    results = memory_recall._run("project status")
except Exception as e:
    if "RATE_LIMITED" in str(e):
        print("Rate limited - implementing backoff")
        time.sleep(30)
        results = memory_recall._run("project status")
    elif "UNAUTHORIZED" in str(e):
        print("API key invalid - check HYPERMEMORY_API_KEY")
        raise
    else:
        print(f"Memory error: {e}")
        results = {"results": []}  # Fallback to empty
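Rather than repeating that try/except around every tool call, the rate-limit branch can be generalized into a small retry helper with exponential backoff (a sketch; the `RATE_LIMITED` string match mirrors the example above):

```python
import time

def with_backoff(fn, *args, retries=3, base_delay=1.0, **kwargs):
    """Call fn, retrying with exponential backoff on rate-limit errors."""
    for attempt in range(retries + 1):
        try:
            return fn(*args, **kwargs)
        except Exception as e:
            # Re-raise anything that isn't a rate limit, or the final failure.
            if "RATE_LIMITED" not in str(e) or attempt == retries:
                raise
            time.sleep(base_delay * (2 ** attempt))  # 1s, 2s, 4s, ...
```

Usage: `results = with_backoff(memory_recall._run, "project status")`.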

Troubleshooting

Agent not using memory tools

Symptoms: Agent responds without checking memory

Solutions:

  1. Be explicit in task descriptions: “First, query memory using memory_recall…”
  2. Add memory to agent backstory: “You always check existing knowledge before research”
  3. Use task callbacks to verify tool usage

Crew forgetting between runs

Symptoms: New crew runs don’t have previous knowledge

Solutions:

  1. Verify same API key is used (same graph)
  2. Check that previous stores succeeded
  3. Use explicit graph_id if using multiple graphs

Slow crew execution

Symptoms: Tasks take a long time when memory operations are involved

Solutions:

  1. Reduce max_results in recall queries
  2. Use more specific queries
  3. Cache frequently-accessed memories locally
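For the third point, a small TTL cache in front of recall avoids re-querying the same memories on every task. A sketch, keyed by query string (the cache wraps any fetch callable, e.g. `MemoryRecallTool()._run`):

```python
import time

class TTLCache:
    """Cache recall results per query for `ttl` seconds."""

    def __init__(self, fetch, ttl: float = 300.0):
        self._fetch = fetch        # e.g. MemoryRecallTool()._run
        self._ttl = ttl
        self._store = {}           # query -> (expires_at, result)

    def get(self, query: str):
        now = time.monotonic()
        hit = self._store.get(query)
        if hit and hit[0] > now:
            return hit[1]          # still fresh: skip the network call
        result = self._fetch(query)
        self._store[query] = (now + self._ttl, result)
        return result
```

Usage: `cached_recall = TTLCache(MemoryRecallTool()._run)` then `cached_recall.get("project status")`. Keep the TTL short enough that agents still see memories stored by other crews mid-run.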

Best practices

Define clear memory responsibilities — Designate which agent stores vs. retrieves to avoid duplicates.

Use consistent metadata across agents for better cross-agent queries.

Avoid storing intermediate outputs — Only store final, valuable findings to keep memory clean.
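One way to enforce consistent metadata is to route every agent's `memory_store` call through a shared builder. A sketch; the field names here are illustrative conventions, not a HyperMemory schema:

```python
from datetime import datetime, timezone

def build_metadata(agent_role: str, source: str = "", confidence: float = 1.0, **extra) -> dict:
    """Build a metadata dict with the same core keys for every agent."""
    meta = {
        "agent": agent_role,
        "source": source,
        "confidence": confidence,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    meta.update(extra)  # per-call additions without losing the core keys
    return meta
```

Then `memory_store._run(content=..., node_type="research", metadata=build_metadata("Research Analyst", source="web"))` keeps cross-agent queries on `agent`, `source`, and `confidence` reliable.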

Next steps