
WebHooks

Supported Resources

The Memory pipeline primarily exposes three resources:

| Resource | Public meaning | Typical use |
| --- | --- | --- |
| fact | Facts processing | Clean or review facts extracted from Message input. |
| summary | Summary processing | Adjust long-term memory or sync it to a profile system. |
| topic | Topic processing | Review topic aggregation or sync topic summaries. |

Supported Events

| Event | Timing | Main-flow impact |
| --- | --- | --- |
| before_add | Before write. | Can modify the data to be written. |
| before_llm | Before the LLM call. | Can only append an instruction. |
| after_llm | After the LLM call. | For audit and notification; does not modify the result. |

Different resources support different events: Facts and Summary support all three (before_add, before_llm, and after_llm); Topic supports only before_add.

Common Use Cases

Data Cleaning

Use fact.before_add to filter or fix Facts before they are written. This is useful for sensitive data, inconsistent formatting, missing fields, or business-specific entity normalization.
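As a sketch, a fact.before_add hook might drop empty observations and redact email addresses before the write. The observation shape (a dict with a "content" string) is an assumption for illustration, not a documented schema:

```python
import re

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.\w+")


def clean_observations(observations):
    """Drop empty observations and redact email addresses.

    Assumes each observation is a dict with a "content" string;
    adjust to the actual payload shape your hook receives.
    """
    cleaned = []
    for obs in observations:
        content = obs.get("content", "").strip()
        if not content:
            continue  # skip empty entries instead of writing them
        cleaned.append({**obs, "content": EMAIL_RE.sub("[redacted]", content)})
    return cleaned
```

A fact.before_add endpoint would then respond with `{"data": {"observations": clean_observations(payload["data"]["observations"])}}`.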

LLM Rule Injection

Return an instruction from fact.before_llm or summary.before_llm. GUMem appends it to the default user prompt, letting you add business rules, compliance requirements, or quality standards.

before_llm cannot replace the full prompt or change the system prompt.

Audit and Monitoring

Use after_llm to receive LLM output and parsed results for logging, alerts, quality analysis, or human review. This stage does not modify written data.
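A minimal audit sketch: log a structured record for each after_llm event and flag empty results. The "parsed" field name inside data is an assumption for illustration; substitute the actual stage payload:

```python
import json
import logging

logger = logging.getLogger("gumem.audit")


def audit_after_llm(payload):
    """Build and log an audit record for an after_llm event.

    The "parsed" key is a hypothetical field name; use whatever
    parsed-result field your stage payload actually carries.
    """
    parsed = payload.get("data", {}).get("parsed") or []
    record = {
        "hook_id": payload.get("hook_id"),
        "resource": payload.get("resource"),
        "user_id": payload.get("context", {}).get("user_id"),
        "parsed_count": len(parsed),
    }
    if not parsed:
        logger.warning("empty LLM result: %s", json.dumps(record))
    else:
        logger.info("llm result: %s", json.dumps(record))
    return record
```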

External Sync

Use summary.before_add or topic.before_add to receive long-term memory or topic aggregation results before write. This is useful for user profiles, CRM, recommendation systems, or audit streams.
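For example, a summary.before_add hook might map the incoming propositions to an update for an external profile store. The CRM-side field names (profile_id, traits) and the proposition "text" field are illustrative assumptions:

```python
def build_profile_sync(payload):
    """Map a summary.before_add payload to a hypothetical CRM update.

    profile_id/traits are illustrative CRM fields, and "text" is an
    assumed proposition field; the actual HTTP call to the external
    system would follow this mapping step.
    """
    propositions = payload.get("data", {}).get("propositions", [])
    return {
        "profile_id": payload["context"]["user_id"],
        "traits": [p.get("text", "") for p in propositions],
    }
```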

Request Shape

GUMem sends a POST request to the configured target_url.

```json
{
  "hook_id": "hook_xxx",
  "hook_name": "Fact quality checker",
  "project_id": "project_1",
  "resource": "fact",
  "event": "before_llm",
  "mode": "sync",
  "triggered_at": "2026-04-24T06:00:00Z",
  "context": {
    "user_id": "user_1",
    "thread_id": "thread_1",
    "message_ids": ["msg_1"]
  },
  "data": {
    "messages": []
  }
}
```

context contains the user, thread, message, or Summary identifiers for the current stage. data contains the stage-specific payload.

Response Shape

If you do not need to modify data, return any 2xx response.

```json
{
  "ok": true
}
```

To affect the main flow, return a data object.

```json
{
  "data": {
    "...": "..."
  }
}
```

before_llm

before_llm can only append an instruction.

```json
{
  "data": {
    "instruction": "Only extract explicitly supported facts. Ignore speculation."
  }
}
```

GUMem appends this instruction to the default user prompt. Returning a full prompt or system prompt is not accepted.

before_add

before_add can adjust data before write, but the returned value must pass validation.

Facts before write:

```json
{
  "data": {
    "observations": []
  }
}
```

Summary before write:

```json
{
  "data": {
    "propositions": []
  }
}
```

Topic before write:

```json
{
  "data": {
    "topics": []
  }
}
```

These field names are part of the interface payload. Use the public concepts Facts, Summary, and Topic to reason about them.

Sync and Async Modes

Use sync when the hook needs to affect the main flow, such as filtering data before write, appending LLM instructions, or adjusting Summary before storage.

Use async for logs, audit sync, metrics, or notifications. Async hook return values do not affect the main flow.
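One way to keep an async endpoint fast is to acknowledge immediately and hand the payload to a background worker. A minimal stdlib sketch (the handler function would be called from a route handler; the audit-write step is hypothetical):

```python
import queue
import threading

work_queue: "queue.Queue[dict]" = queue.Queue()


def worker():
    # Drain events in the background: write audit logs, push metrics, etc.
    while True:
        payload = work_queue.get()
        try:
            pass  # hypothetical: audit_log.write(payload)
        finally:
            work_queue.task_done()


threading.Thread(target=worker, daemon=True).start()


def handle_async_event(payload):
    # Enqueue and acknowledge; GUMem ignores async return values anyway.
    work_queue.put(payload)
    return {"ok": True}
```

A production deployment would typically replace the in-process queue with a durable one (e.g. a message broker), since queued events are lost if the process dies.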

Failure Handling

Webhook failures do not stop the main flow. These cases are ignored and logged:

  • Timeout.
  • Target unreachable.
  • Non-2xx response.
  • Invalid JSON.
  • data is not an object.
  • Returned data fails validation.
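Because malformed responses are silently ignored rather than surfaced as errors, it can help to mirror the documented shape checks in your own tests. A sketch covering the generic rules only (resource-specific validation is not reproduced here):

```python
def is_valid_hook_response(body):
    """Check the documented response shape: a JSON object whose
    optional "data" member must itself be an object.

    Resource-specific field validation (observations, propositions,
    topics) is intentionally out of scope for this sketch.
    """
    if not isinstance(body, dict):
        return False
    if "data" in body and not isinstance(body["data"], dict):
        return False
    return True
```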

Minimal Example

```python
from fastapi import FastAPI, Request

app = FastAPI()


@app.post("/webhooks/fact-before-llm")
async def fact_before_llm(request: Request):
    # Inspect payload["resource"], payload["event"], and payload["data"] as needed.
    payload = await request.json()

    return {
        "data": {
            "instruction": "Prefer durable user preferences and explicit facts."
        }
    }
```

Next Step

Read Add Memory and Query Memory to understand the write and recall flows affected by WebHooks.