Lifecycle Operations
Manage memory dynamics: consolidation, decay, mode switching, forgetting, export, and import.
lifecycle.consolidate
Trigger memory consolidation -- migrating mature episodic memories into the semantic store as distilled knowledge.
Endpoint: POST /v1/lifecycle/consolidate
Request
```json
{
  "strategy": "incremental",
  "min_age_hours": 24,
  "min_access_count": 3,
  "dry_run": false
}
```

| Field | Type | Required | Default | Description |
|---|---|---|---|---|
| strategy | ConsolidationStrategy | No | incremental | full, incremental, or selective |
| min_age_hours | u32 | No | 0 | Minimum age of episodic records to consider |
| min_access_count | u32 | No | 0 | Minimum access count to qualify |
| dry_run | bool | No | false | Preview without making changes |
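A client can fill the documented defaults before sending the request. A minimal Python sketch; the helper name and local default-merging are illustrative, not part of any official SDK:

```python
import json

# Defaults copied from the table above; merging them client-side makes
# the effective request explicit and catches misspelled field names early.
CONSOLIDATE_DEFAULTS = {
    "strategy": "incremental",
    "min_age_hours": 0,
    "min_access_count": 0,
    "dry_run": False,
}

def consolidate_request(**overrides):
    """Merge caller overrides onto the documented defaults, rejecting typos."""
    unknown = set(overrides) - set(CONSOLIDATE_DEFAULTS)
    if unknown:
        raise ValueError(f"unknown fields: {sorted(unknown)}")
    return {**CONSOLIDATE_DEFAULTS, **overrides}

# Matches the example request body above.
body = consolidate_request(min_age_hours=24, min_access_count=3)
print(json.dumps(body))
```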
Response
```json
{
  "records_processed": 150,
  "records_migrated": 12,
  "records_compressed": 8,
  "records_pruned": 5,
  "semantic_nodes_created": 12
}
```

lifecycle.decay_tick
Manually trigger a decay computation cycle.
In server mode, Cerememory also runs decay automatically in the background at the configured interval.
Endpoint: POST /v1/lifecycle/decay-tick
Request
```json
{
  "tick_duration_seconds": 3600
}
```

| Field | Type | Required | Default | Description |
|---|---|---|---|---|
| tick_duration_seconds | u32 | No | None | Simulated time to add |
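The effect of a simulated tick can be pictured with a toy decay model. This is an assumption for illustration only: Cerememory's actual decay function and pruning threshold are internal and not specified here; the sketch assumes simple exponential decay with a configurable half-life:

```python
# Toy model (assumed, not Cerememory's real algorithm): every record has a
# strength in [0, 1] that decays exponentially; records falling below a
# threshold would be candidates for pruning.
import math

def decay_tick(strengths, tick_duration_seconds,
               half_life_s=7 * 24 * 3600, threshold=0.05):
    factor = 0.5 ** (tick_duration_seconds / half_life_s)
    updated = [s * factor for s in strengths]
    below = sum(1 for s in updated if s < threshold)
    return updated, below

# One simulated hour against a one-week half-life barely moves strengths,
# but a record already near the threshold can cross it.
updated, below = decay_tick([0.9, 0.2, 0.04], 3600)
```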
Response
```json
{
  "records_updated": 342,
  "records_below_threshold": 15,
  "records_pruned": 8
}
```

lifecycle.dream_tick
Trigger dream processing -- summarize raw journal entries into curated episodic and semantic memories. This is analogous to memory consolidation during sleep.
Endpoint: POST /v1/lifecycle/dream-tick
Request
```json
{
  "session_id": "sess_abc123",
  "dry_run": false,
  "max_groups": 10,
  "include_private_scratch": false,
  "include_sealed": false,
  "promote_semantic": true,
  "secrecy_levels": ["public", "sensitive"]
}
```

| Field | Type | Required | Default | Description |
|---|---|---|---|---|
| session_id | string | No | None | Process only entries from this session. When omitted, processes all pending entries |
| dry_run | bool | No | false | Preview summarization without creating records |
| max_groups | u32 | No | 10 | Maximum number of topic groups to process per call |
| include_private_scratch | bool | No | false | Include private_scratch visibility entries |
| include_sealed | bool | No | false | Include sealed (encrypted) entries |
| promote_semantic | bool | No | true | Promote factual content to the semantic store |
| secrecy_levels | SecrecyLevel[] | No | All levels | Restrict to specific secrecy levels |
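A hedged sketch of building this request body client-side. The helper and the set of known secrecy levels are assumptions drawn only from the example above; the real SecrecyLevel enum may contain more values:

```python
# Assumption: only the levels seen in the example request; the protocol's
# SecrecyLevel enum may define others.
KNOWN_SECRECY_LEVELS = {"public", "sensitive"}

def dream_tick_request(session_id=None, dry_run=False, max_groups=10,
                       include_private_scratch=False, include_sealed=False,
                       promote_semantic=True, secrecy_levels=None):
    body = {
        "dry_run": dry_run,
        "max_groups": max_groups,
        "include_private_scratch": include_private_scratch,
        "include_sealed": include_sealed,
        "promote_semantic": promote_semantic,
    }
    # Omit optional fields entirely so the server applies its own defaults.
    if session_id is not None:
        body["session_id"] = session_id
    if secrecy_levels is not None:
        bad = set(secrecy_levels) - KNOWN_SECRECY_LEVELS
        if bad:
            raise ValueError(f"unknown secrecy levels: {sorted(bad)}")
        body["secrecy_levels"] = list(secrecy_levels)
    return body
```

Omitting `session_id` (rather than sending `null`) matches the table's "when omitted, processes all pending entries" behavior.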
Response
```json
{
  "groups_processed": 5,
  "raw_records_processed": 45,
  "episodic_summaries_created": 8,
  "semantic_nodes_created": 3
}
```

In server mode, Cerememory can run dream processing automatically in the background at the configured interval (`dream.background_interval_secs`).
lifecycle.set_mode
Switch the global recall mode between Human and Perfect.
Endpoint: PUT /v1/lifecycle/mode
Request
```json
{
  "mode": "perfect",
  "scope": ["episodic", "semantic"]
}
```

| Field | Type | Required | Default | Description |
|---|---|---|---|---|
| mode | RecallMode | Yes | -- | human or perfect |
| scope | StoreType[] | No | All stores | Limit the change to specific stores |
Response
Returns 204 No Content on success.
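A small sketch of validating the body before issuing the PUT. The mode values come from the table above; the store names are taken from examples elsewhere on this page and may not be the complete StoreType enum:

```python
VALID_MODES = {"human", "perfect"}
# Assumption: stores seen in this page's examples; StoreType may list more.
KNOWN_STORES = {"episodic", "semantic"}

def set_mode_body(mode, scope=None):
    if mode not in VALID_MODES:
        raise ValueError(f"mode must be one of {sorted(VALID_MODES)}")
    body = {"mode": mode}
    if scope is not None:
        unknown = set(scope) - KNOWN_STORES
        if unknown:
            raise ValueError(f"unknown stores: {sorted(unknown)}")
        body["scope"] = list(scope)
    return body
```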
lifecycle.forget
Permanently delete memory records.
Endpoint: DELETE /v1/lifecycle/forget
Request
```json
{
  "record_ids": ["01916e3a-5678-7000-8000-000000000001"],
  "store": null,
  "temporal_range": null,
  "cascade": true,
  "confirm": true
}
```

| Field | Type | Required | Default | Description |
|---|---|---|---|---|
| record_ids | Uuid[] | No | None | Specific records to delete |
| store | StoreType | No | None | Delete all records in a store |
| temporal_range | TemporalRange | No | None | Delete records within a time range |
| cascade | bool | No | false | Also delete associated records |
| confirm | bool | Yes | -- | Must be true to proceed |
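Because this call is destructive, a client wrapper can refuse to build a request with no selector. A sketch; the no-selector guard is a client-side convention introduced here, not an API requirement (the API itself only mandates `confirm = true`):

```python
def forget_body(record_ids=None, store=None, temporal_range=None, cascade=False):
    # Client-side guard (assumption, not enforced by the API): refuse to
    # build a delete request that selects nothing.
    if record_ids is None and store is None and temporal_range is None:
        raise ValueError("no selector: pass record_ids, store, or temporal_range")
    body = {"cascade": cascade, "confirm": True}
    if record_ids is not None:
        body["record_ids"] = list(record_ids)
    if store is not None:
        body["store"] = store
    if temporal_range is not None:
        body["temporal_range"] = temporal_range
    return body

body = forget_body(record_ids=["01916e3a-5678-7000-8000-000000000001"],
                   cascade=True)
```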
Response
```json
{
  "records_deleted": 3
}
```

lifecycle.export
Export memory records to a CMA archive.
Endpoint: POST /v1/lifecycle/export
Request
```json
{
  "format": "cma",
  "stores": ["episodic", "semantic"],
  "include_raw_journal": true,
  "encrypt": true,
  "encryption_key": "correct horse battery staple"
}
```

| Field | Type | Required | Default | Description |
|---|---|---|---|---|
| format | string | No | cma | Archive format; cma is currently the only supported value |
| stores | StoreType[] | No | All stores | Limit export to specific stores |
| include_raw_journal | bool | No | false | Include raw journal entries in the archive |
| encrypt | bool | No | false | Encrypt the archive |
| encryption_key | string | No | None | Archive passphrase when encrypt is true |
Response Metadata
```json
{
  "archive_id": "export_20260319_100000",
  "size_bytes": 1048576,
  "record_count": 342,
  "checksum": "sha256:abc123..."
}
```

In gRPC, this metadata is sent with the first streamed chunk of archive bytes. In the CLI and MCP tool, Cerememory writes the bytes directly to the requested output path.
Transport Notes
- CLI: `cerememory export --output ./backup.cma ...`
- MCP: `export` requires an explicit `output_path` and always includes the raw journal
- gRPC: The archive is streamed in chunks; metadata arrives in the first chunk
Archive Format
CMA archives are JSON Lines:
- Line 1: `ArchiveHeader` (`version`, `timestamp`, `record_count`, optional `curated_record_count`, optional `raw_record_count`)
- Lines 2..N: One record per line. Curated-only archives serialize `MemoryRecord` directly; bundle archives serialize an `ArchiveEntry` envelope tagged with `"kind": "memory_record"` or `"kind": "raw_journal_record"` and a `data` payload.
- Last line: `ArchiveFooter` carrying a SHA-256 hex digest of the record lines.
Two format versions are recognized:
| Version | Carries | When emitted |
|---|---|---|
| 1.0 | Curated MemoryRecord lines only | include_raw_journal = false |
| 2.0 | ArchiveEntry envelopes mixing curated and raw journal records | include_raw_journal = true (or any MCP export call) |
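The JSON Lines layout above can be checked client-side. A hedged sketch of verifying a version-1.0 archive: the footer field name (`checksum`) and the exact bytes covered by the digest (record lines without trailing newlines) are assumptions, so treat this as illustrative rather than a reference verifier:

```python
import hashlib
import json

def verify_cma_v1(lines):
    """Check a version-1.0 CMA archive, given as a list of JSON Lines
    (one string per line, no trailing newlines)."""
    header = json.loads(lines[0])
    footer = json.loads(lines[-1])
    if header.get("version") != "1.0":
        raise ValueError("not a version-1.0 archive")
    digest = hashlib.sha256()
    for raw in lines[1:-1]:          # record lines only, per the list above
        digest.update(raw.encode("utf-8"))
    # Footer field name is assumed; the doc only says the footer carries
    # a SHA-256 hex digest of the record lines.
    return digest.hexdigest() == footer.get("checksum")
```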
When `encrypt = true`, the entire archive is wrapped with ChaCha20-Poly1305 AEAD using an Argon2id-derived key (64 MiB memory, 3 iterations). The encrypted bundle layout is `salt(16) || nonce(12) || ciphertext+tag`.
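The documented bundle layout can be split mechanically. This sketch only parses the framing; actual decryption would additionally need a ChaCha20-Poly1305 implementation and the Argon2id-derived key:

```python
def split_bundle(data: bytes):
    """Split an encrypted CMA bundle into salt, nonce, and ciphertext+tag,
    following the documented layout salt(16) || nonce(12) || ciphertext+tag."""
    # ChaCha20-Poly1305 appends a 16-byte authentication tag, so any valid
    # bundle is at least 16 + 12 + 16 bytes long.
    if len(data) < 16 + 12 + 16:
        raise ValueError("bundle too short to contain salt, nonce, and tag")
    salt, nonce, ct_and_tag = data[:16], data[16:28], data[28:]
    return salt, nonce, ct_and_tag
```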
lifecycle.import
Import records from a CMA archive.
Endpoint: POST /v1/lifecycle/import
CMP Request
```json
{
  "archive_id": "export_20260319_100000",
  "strategy": "merge",
  "conflict_resolution": "keep_newer",
  "decryption_key": "correct horse battery staple",
  "archive_data": "<raw CMA bytes>"
}
```

| Field | Type | Required | Default | Description |
|---|---|---|---|---|
| archive_id | string | Yes | -- | Archive identifier |
| strategy | ImportStrategy | No | merge | merge or replace |
| conflict_resolution | ConflictResolution | No | keep_newer | keep_existing, keep_imported, or keep_newer |
| decryption_key | string | No | None | Passphrase for encrypted archives |
| archive_data | bytes | No | None | Raw CMA archive data when using the protocol directly |
Response
```json
{
  "records_imported": 342
}
```

CLI Equivalent

```
cerememory import ./memories-backup.cma --conflict-resolution keep_newer
```

The current CLI reads the archive bytes from disk, derives `archive_id` from the filename, and imports with the protocol's merge strategy while letting you choose the conflict resolution policy.
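The three conflict resolution policies can be illustrated with a toy resolver. The comparison field (`updated_at`) is an assumption for illustration; the server's actual tie-breaking key is not documented here:

```python
def resolve(existing, imported, policy="keep_newer"):
    """Pick which record survives an import conflict. Assumes records carry
    an ISO-8601 `updated_at` field, which compares correctly as a string."""
    if policy == "keep_existing":
        return existing
    if policy == "keep_imported":
        return imported
    if policy == "keep_newer":
        return imported if imported["updated_at"] > existing["updated_at"] else existing
    raise ValueError(f"unknown policy: {policy}")
```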
Next Steps
Observe system state and memory metadata