
Lifecycle Operations

Manage memory dynamics: consolidation, decay, mode switching, forgetting, export, and import.

lifecycle.consolidate

Trigger memory consolidation -- migrating mature episodic memories into the semantic store as distilled knowledge.

Endpoint: POST /v1/lifecycle/consolidate

Request

```json
{
  "strategy": "incremental",
  "min_age_hours": 24,
  "min_access_count": 3,
  "dry_run": false
}
```

| Field | Type | Required | Default | Description |
|---|---|---|---|---|
| strategy | ConsolidationStrategy | No | incremental | full, incremental, or selective |
| min_age_hours | u32 | No | 0 | Minimum age of episodic records to consider |
| min_access_count | u32 | No | 0 | Minimum access count to qualify |
| dry_run | bool | No | false | Preview without making changes |

Response

```json
{
  "records_processed": 150,
  "records_migrated": 12,
  "records_compressed": 8,
  "records_pruned": 5,
  "semantic_nodes_created": 12
}
```
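As a client-side illustration of the eligibility thresholds above (the Record shape here is hypothetical, not Cerememory's actual data model):

```python
from dataclasses import dataclass

@dataclass
class Record:
    age_hours: int
    access_count: int

def eligible(r: Record, min_age_hours: int, min_access_count: int) -> bool:
    # A record qualifies once it is both old enough and accessed often
    # enough, mirroring the min_age_hours / min_access_count fields.
    return r.age_hours >= min_age_hours and r.access_count >= min_access_count

records = [Record(30, 5), Record(10, 5), Record(48, 1)]
# With the sample request values (24 h, 3 accesses), only the first qualifies.
to_migrate = [r for r in records if eligible(r, 24, 3)]
```

Running with dry_run: true first lets you compare the predicted counts against a sketch like this before committing.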

lifecycle.decay_tick

Manually trigger a decay computation cycle.

In server mode, Cerememory also runs decay automatically in the background at the configured interval.

Endpoint: POST /v1/lifecycle/decay-tick

Request

```json
{
  "tick_duration_seconds": 3600
}
```

| Field | Type | Required | Default | Description |
|---|---|---|---|---|
| tick_duration_seconds | u32 | No | None | Simulated time to add |

Response

```json
{
  "records_updated": 342,
  "records_below_threshold": 15,
  "records_pruned": 8
}
```
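The decay function itself is not documented here; purely as a toy sketch, assume exponential decay with an invented half-life and prune threshold:

```python
# Toy parameters -- the real decay model and its constants are not
# specified by this API and may differ entirely.
HALF_LIFE_SECONDS = 7 * 24 * 3600
PRUNE_THRESHOLD = 0.05

def decay(strength: float, tick_duration_seconds: int) -> float:
    # Exponential decay: strength halves every HALF_LIFE_SECONDS
    # of simulated time added by the tick.
    return strength * 0.5 ** (tick_duration_seconds / HALF_LIFE_SECONDS)

strengths = [0.9, 0.2, 0.0501]
after = [decay(s, 3600) for s in strengths]
# Records whose strength falls under the threshold would be counted
# in records_below_threshold and become candidates for pruning.
below_threshold = [s for s in after if s < PRUNE_THRESHOLD]
```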

lifecycle.dream_tick

Trigger dream processing -- summarize raw journal entries into curated episodic and semantic memories. This is analogous to memory consolidation during sleep.

Endpoint: POST /v1/lifecycle/dream-tick

Request

```json
{
  "session_id": "sess_abc123",
  "dry_run": false,
  "max_groups": 10,
  "include_private_scratch": false,
  "include_sealed": false,
  "promote_semantic": true,
  "secrecy_levels": ["public", "sensitive"]
}
```

| Field | Type | Required | Default | Description |
|---|---|---|---|---|
| session_id | string | No | None | Process only entries from this session. When omitted, processes all pending entries |
| dry_run | bool | No | false | Preview summarization without creating records |
| max_groups | u32 | No | 10 | Maximum number of topic groups to process per call |
| include_private_scratch | bool | No | false | Include private_scratch visibility entries |
| include_sealed | bool | No | false | Include sealed (encrypted) entries |
| promote_semantic | bool | No | true | Promote factual content to the semantic store |
| secrecy_levels | SecrecyLevel[] | No | All levels | Restrict to specific secrecy levels |

Response

```json
{
  "groups_processed": 5,
  "raw_records_processed": 45,
  "episodic_summaries_created": 8,
  "semantic_nodes_created": 3
}
```

In server mode, Cerememory can run dream processing automatically in the background at the configured interval (dream.background_interval_secs).
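A rough sketch of the filtering and grouping these parameters imply (the entry fields, the secret level, and grouping by a topic key are all illustrative assumptions, not Cerememory internals):

```python
from itertools import islice

entries = [
    {"topic": "rust", "secrecy": "public", "visibility": "normal"},
    {"topic": "rust", "secrecy": "sensitive", "visibility": "normal"},
    {"topic": "health", "secrecy": "public", "visibility": "private_scratch"},
    {"topic": "travel", "secrecy": "secret", "visibility": "normal"},
]

def select_entries(entries, secrecy_levels, include_private_scratch=False):
    # Mirror the request filters: drop disallowed secrecy levels and,
    # unless opted in, private_scratch entries.
    return [e for e in entries
            if e["secrecy"] in secrecy_levels
            and (include_private_scratch or e["visibility"] != "private_scratch")]

def group_by_topic(entries, max_groups=10):
    groups = {}
    for e in entries:
        groups.setdefault(e["topic"], []).append(e)
    # Cap the number of topic groups handled in one call.
    return dict(islice(groups.items(), max_groups))

selected = select_entries(entries, {"public", "sensitive"})
groups = group_by_topic(selected, max_groups=10)
```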

lifecycle.set_mode

Switch the global recall mode between Human and Perfect.

Endpoint: PUT /v1/lifecycle/mode

Request

```json
{
  "mode": "perfect",
  "scope": ["episodic", "semantic"]
}
```

| Field | Type | Required | Default | Description |
|---|---|---|---|---|
| mode | RecallMode | Yes | -- | human or perfect |
| scope | StoreType[] | No | All stores | Limit the change to specific stores |

Response

Returns 204 No Content on success.

lifecycle.forget

Permanently delete memory records.

Endpoint: DELETE /v1/lifecycle/forget

Request

```json
{
  "record_ids": ["01916e3a-5678-7000-8000-000000000001"],
  "store": null,
  "temporal_range": null,
  "cascade": true,
  "confirm": true
}
```

| Field | Type | Required | Default | Description |
|---|---|---|---|---|
| record_ids | Uuid[] | No | None | Specific records to delete |
| store | StoreType | No | None | Delete all records in a store |
| temporal_range | TemporalRange | No | None | Delete records within a time range |
| cascade | bool | No | false | Also delete associated records |
| confirm | bool | Yes | -- | Must be true to proceed |

Response

```json
{
  "records_deleted": 3
}
```
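A client-side guard for this destructive call might look like the sketch below. Only confirm: true is documented as mandatory; requiring at least one selector is an assumption added for safety here:

```python
def validate_forget_request(req: dict) -> None:
    # confirm is the only required field and must be literally true.
    if req.get("confirm") is not True:
        raise ValueError("forget requires confirm: true")
    # Assumed safety check: target records via at least one selector.
    if not any(req.get(k) for k in ("record_ids", "store", "temporal_range")):
        raise ValueError("provide record_ids, store, or temporal_range")

validate_forget_request({
    "record_ids": ["01916e3a-5678-7000-8000-000000000001"],
    "cascade": True,
    "confirm": True,
})
```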

lifecycle.export

Export memory records to a CMA archive.

Endpoint: POST /v1/lifecycle/export

Request

```json
{
  "format": "cma",
  "stores": ["episodic", "semantic"],
  "include_raw_journal": true,
  "encrypt": true,
  "encryption_key": "correct horse battery staple"
}
```

| Field | Type | Required | Default | Description |
|---|---|---|---|---|
| format | string | No | cma | Archive format; cma is currently the only supported value |
| stores | StoreType[] | No | All stores | Limit export to specific stores |
| include_raw_journal | bool | No | false | Include raw journal entries in the archive |
| encrypt | bool | No | false | Encrypt the archive |
| encryption_key | string | No | None | Archive passphrase when encrypt is true |

Response Metadata

```json
{
  "archive_id": "export_20260319_100000",
  "size_bytes": 1048576,
  "record_count": 342,
  "checksum": "sha256:abc123..."
}
```

In gRPC, this metadata is sent with the first streamed chunk of archive bytes. In the CLI and MCP tool, Cerememory writes the bytes directly to the requested output path.

Transport Notes

  • CLI: cerememory export --output ./backup.cma ...
  • MCP: export requires an explicit output_path and always includes the raw journal
  • gRPC: The archive is streamed in chunks; metadata arrives in the first chunk

Archive Format

CMA archives are JSON Lines:

  • Line 1 — ArchiveHeader (version, timestamp, record_count, optional curated_record_count, optional raw_record_count)
  • Lines 2..N — One record per line. Curated-only archives serialize MemoryRecord directly; bundle archives serialize an ArchiveEntry envelope tagged with "kind": "memory_record" or "kind": "raw_journal_record" and a data payload.
  • Last line — ArchiveFooter carrying a SHA-256 hex digest of the record lines.
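The layout can be exercised with a minimal round-trip. The exact header/footer field names and the precise bytes fed to the digest are assumptions here; only the JSONL-plus-footer-checksum shape comes from the docs:

```python
import hashlib
import json

def build_archive(records: list) -> str:
    # Minimal curated-only (version 1.0) archive: header line,
    # one record per line, then a footer with the digest.
    record_lines = [json.dumps(r) for r in records]
    digest = hashlib.sha256(
        "".join(line + "\n" for line in record_lines).encode()
    ).hexdigest()
    header = json.dumps({"version": "1.0", "record_count": len(records)})
    footer = json.dumps({"checksum": f"sha256:{digest}"})
    return "\n".join([header, *record_lines, footer]) + "\n"

def verify_archive(text: str) -> bool:
    # Recompute the digest over the record lines and compare with the footer.
    lines = text.splitlines()
    record_lines = lines[1:-1]
    digest = hashlib.sha256(
        "".join(line + "\n" for line in record_lines).encode()
    ).hexdigest()
    return json.loads(lines[-1])["checksum"] == f"sha256:{digest}"

archive = build_archive([{"id": "r1"}, {"id": "r2"}])
```

Any edit to a record line invalidates the footer digest, so verify_archive catches tampering or truncation of the record section.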

Two format versions are recognized:

| Version | Carries | When emitted |
|---|---|---|
| 1.0 | Curated MemoryRecord lines only | include_raw_journal = false |
| 2.0 | ArchiveEntry envelopes mixing curated and raw journal records | include_raw_journal = true (or any MCP export call) |

When encrypt = true, the entire archive is wrapped with ChaCha20-Poly1305 AEAD using an Argon2id-derived key (64 MiB memory, 3 iterations). The encrypted bundle layout is salt(16) || nonce(12) || ciphertext+tag.
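Splitting the stated byte layout is straightforward; actual decryption (Argon2id key derivation plus ChaCha20-Poly1305) is omitted here and would need a cryptography library:

```python
def split_encrypted_bundle(data: bytes):
    # Layout per the docs: salt(16) || nonce(12) || ciphertext+tag.
    # The Poly1305 tag is 16 bytes, so any valid bundle is at least 44 bytes.
    if len(data) < 16 + 12 + 16:
        raise ValueError("bundle too short")
    salt, nonce, ct = data[:16], data[16:28], data[28:]
    return salt, nonce, ct

# Dummy bundle: zeroed salt and nonce plus 32 bytes of "ciphertext+tag".
bundle = bytes(16) + bytes(12) + b"\x00" * 32
salt, nonce, ct = split_encrypted_bundle(bundle)
```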

lifecycle.import

Import records from a CMA archive.

Endpoint: POST /v1/lifecycle/import

CMP Request

```json
{
  "archive_id": "export_20260319_100000",
  "strategy": "merge",
  "conflict_resolution": "keep_newer",
  "decryption_key": "correct horse battery staple",
  "archive_data": "<raw CMA bytes>"
}
```

| Field | Type | Required | Default | Description |
|---|---|---|---|---|
| archive_id | string | Yes | -- | Archive identifier |
| strategy | ImportStrategy | No | merge | merge or replace |
| conflict_resolution | ConflictResolution | No | keep_newer | keep_existing, keep_imported, or keep_newer |
| decryption_key | string | No | None | Passphrase for encrypted archives |
| archive_data | bytes | No | None | Raw CMA archive data when using the protocol directly |

Response

```json
{
  "records_imported": 342
}
```
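The three conflict policies can be sketched as follows; the updated_at timestamp field is an assumption used only to illustrate keep_newer, not a documented part of the record schema:

```python
def resolve(existing: dict, imported: dict, policy: str = "keep_newer") -> dict:
    # existing and imported are two versions of the same record (same id).
    if policy == "keep_existing":
        return existing
    if policy == "keep_imported":
        return imported
    if policy == "keep_newer":
        # On a timestamp tie, max() keeps the first argument (existing).
        return max(existing, imported, key=lambda r: r["updated_at"])
    raise ValueError(f"unknown policy: {policy}")

a = {"id": "r1", "updated_at": 100, "text": "old"}
b = {"id": "r1", "updated_at": 200, "text": "new"}
winner = resolve(a, b, "keep_newer")
```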

CLI Equivalent

```bash
cerememory import ./memories-backup.cma --conflict-resolution keep_newer
```

The current CLI reads the archive bytes from disk, derives archive_id from the filename, and imports with the protocol's merge strategy while letting you choose the conflict resolution policy.

Next Steps

Introspect Operations

Observe system state and memory metadata