Truths

Truths are the stable, core facts Iris knows about you. Unlike regular memories that capture contextual details, Truths represent distilled knowledge that's relevant across conversations - your name, key relationships, fundamental preferences, and important life facts.

Why Truths?

Regular memories excel at capturing contextual information, but some facts about you are universally relevant. You shouldn't need to remind Iris of your name or that you have a partner every time you start a new conversation.

The challenge is identifying which memories deserve this "always available" status. Rather than guessing at extraction time, Truths are earned through behavioral evidence - memories that consistently prove useful across many conversations naturally get promoted.

How Truths Work

Truths sit in a separate layer above regular memories:

Conversations → Memories → Truths
              (semantic search)

When Iris retrieves context for a conversation:

  1. Truths are evaluated first - pinned Truths are always included, others are ranked by relevance to the current conversation
  2. Memories are then searched semantically based on what you're discussing

This means even Truths are contextually ranked. If you have 10 Truths but room for only 5, the ones most relevant to your current topic surface. Your name is always relevant; your coffee preference might not be while you're debugging code.
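The selection logic above could be sketched roughly as follows. This is illustrative Python, not the actual implementation; the data shapes and the `relevance` lookup are assumptions, though the defaults for `max_dynamic` and the similarity threshold appear in the Configuration section below.

```python
def select_truths(truths, relevance, max_dynamic=7, similarity_threshold=0.40):
    """Pick Truths for a conversation's context (illustrative sketch).

    truths: list of dicts with 'id' and 'pinned' keys.
    relevance: dict mapping truth id -> relevance score for the current topic.
    """
    # Pinned Truths bypass relevance ranking entirely.
    pinned = [t for t in truths if t["pinned"]]

    # Remaining Truths must clear the similarity threshold; the top
    # `max_dynamic` by relevance are kept.
    dynamic = [t for t in truths if not t["pinned"]
               and relevance.get(t["id"], 0.0) >= similarity_threshold]
    dynamic.sort(key=lambda t: relevance[t["id"]], reverse=True)

    return pinned + dynamic[:max_dynamic]
```

Note how a pinned Truth with zero relevance still makes the cut, while an unpinned Truth below the threshold never does.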

Truth Sources

Truths come from three places:

| Source | Description |
| --- | --- |
| Promoted | Automatically distilled from memories that prove consistently useful |
| User Created | Manually added through the Truths UI |
| Agent Created | Iris creates these during conversation when it identifies core facts |

Automatic Promotion (Distillation)

The distillation process runs nightly and evaluates memories based on behavioral evidence:

  • Access frequency - How often was this memory retrieved?
  • Access recency - Is it still being accessed regularly?
  • Consolidation generation - Has it been refined through memory consolidation?

Memories must meet both thresholds to become candidates:

  1. Percentile threshold - In the top N% of memories by access count
  2. Absolute threshold - Minimum number of accesses (default: 10)

This dual-threshold approach prevents premature promotion. A memory accessed twice might be in the top 20% if you only have 10 memories, but it hasn't really proven its value yet.
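In code, the dual-threshold check might look like this. A sketch only, with assumed data shapes; the real selection runs inside the nightly distillation job and may differ.

```python
def promotion_candidates(memories, percentile=5, min_accesses=10, min_total=20):
    """Return memories eligible for Truth promotion (illustrative sketch).

    memories: list of dicts with an 'accesses' count.
    A memory must be in the top `percentile`% by access count AND have at
    least `min_accesses` accesses. Distillation is skipped entirely while
    the total memory count is below `min_total`.
    """
    if len(memories) < min_total:
        return []
    ranked = sorted(memories, key=lambda m: m["accesses"], reverse=True)
    cutoff = max(1, round(len(ranked) * percentile / 100))
    top = ranked[:cutoff]
    # The absolute threshold filters out memories that rank highly only
    # because the overall collection is small.
    return [m for m in top if m["accesses"] >= min_accesses]
```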

An LLM then analyzes each candidate to determine if it contains a stable, generalizable fact versus an episodic event.

For example, "User prefers TypeScript over JavaScript" is promotable - it's a stable preference. But "User was debugging a TypeScript error yesterday" is not - it's a temporal event that will become irrelevant.

Truth Refinement

When new evidence supports an existing Truth, it gets refined - updated to incorporate the new information while maintaining its core meaning. This lets Truths evolve naturally as Iris learns more about you.

For example, if a Truth says "Works as a software developer" and new conversations reveal you're specifically a "senior backend engineer at a fintech company", refinement would update the Truth to be more precise.

Generation Tracking

Each Truth tracks its generation - how many times it's been refined:

| Generation | Meaning |
| --- | --- |
| 0 | Original Truth (directly from promotion or manual creation) |
| 1 | Refined once with new evidence |
| 2+ | Multiple refinements |
| Max (5) | Cannot be refined further |

Generation tracking helps you understand how "derived" a Truth is from its original evidence. A generation 0 Truth is closely tied to concrete observations. A higher generation Truth has been through multiple refinement cycles.

NOTE

Truths at the maximum generation (default: 5) are protected from further refinement. This prevents Truths from drifting too far from their original evidence.
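A minimal sketch of the refinement guard, assuming a dict-shaped Truth record (the field names here are assumptions):

```python
MAX_GENERATION = 5  # truths.max_generation

def refine_truth(truth, new_content):
    """Apply a refinement, bumping the generation counter (sketch).

    Truths at MAX_GENERATION, or otherwise protected, are returned
    unchanged so they cannot drift further from their original evidence.
    """
    if truth["generation"] >= MAX_GENERATION or truth.get("protected"):
        return truth  # refinement refused
    return {**truth, "content": new_content, "generation": truth["generation"] + 1}
```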

Temporal Fact Handling

Some Truths contain information that changes over time. The refinement system understands temporal updates:

  • Age-related: "Has a 12-year-old child" updates to "Has a 13-year-old child" when new evidence shows the child's birthday passed
  • Job/Role: "Works as senior engineer" updates to "Works as staff engineer" after a job promotion
  • Location: "Lives in Brooklyn" updates to "Lives in Queens" after a move
  • Status: "Dating partner" updates to "Married to partner" after a wedding

These aren't treated as contradictions - they're natural progressions where newer information supersedes older information.

Conflict Detection

Sometimes new evidence genuinely contradicts an existing Truth. Rather than blindly merging conflicting information, Iris detects contradictions and handles them appropriately.

How Conflicts Work

When the refinement process encounters new evidence that contradicts an existing Truth:

  1. Analysis - An LLM evaluates whether the new evidence contradicts or enriches the Truth
  2. Evidence Strength - The new evidence is scored based on access count, recency, and consolidation
  3. Resolution - Based on the evidence strength and Truth protection status, the conflict is either auto-resolved or flagged for review

Evidence Strength Calculation

New evidence is scored on a 0-27+ scale:

| Factor | Points |
| --- | --- |
| Access count | 1 point per access (capped at 20) |
| Recency bonus | +5 if accessed within 7 days |
| Consolidation bonus | +2 per consolidation generation |
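The scoring table translates directly into code. A sketch, assuming timezone-aware UTC timestamps:

```python
from datetime import datetime, timedelta, timezone

def evidence_strength(access_count, last_accessed, consolidation_generation,
                      now=None):
    """Score new evidence on the 0-27+ scale described above (sketch)."""
    now = now or datetime.now(timezone.utc)
    score = min(access_count, 20)          # 1 point per access, capped at 20
    if now - last_accessed <= timedelta(days=7):
        score += 5                         # recency bonus
    score += 2 * consolidation_generation  # consolidation bonus
    return score
```

For example, a memory with 8 accesses, touched yesterday, at consolidation generation 1 scores 8 + 5 + 2 = 15, just enough to clear the default auto-resolve threshold.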

Resolution Rules

| Condition | Resolution |
| --- | --- |
| Protected Truth (user-created, agent-created, or pinned) | Always flagged for review |
| Strong evidence (15+ points) against unprotected Truth | Auto-updated |
| Weak evidence against any Truth | Flagged for review |
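These rules reduce to a short decision function. Illustrative Python; the 'source' values are assumptions based on the three Truth sources listed earlier.

```python
AUTO_RESOLVE_THRESHOLD = 15  # truths.auto_resolve_strength_threshold

def resolve_conflict(truth, strength):
    """Decide how a detected contradiction is handled (sketch).

    truth: dict with 'source' ('promoted', 'user', 'agent') and 'pinned'.
    strength: evidence strength score of the contradicting evidence.
    """
    protected = truth["source"] in ("user", "agent") or truth["pinned"]
    if protected:
        return "flag_for_review"   # protected Truths are never auto-updated
    if strength >= AUTO_RESOLVE_THRESHOLD:
        return "auto_update"       # strong evidence wins against unprotected
    return "flag_for_review"       # weak evidence always gets human review
```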

Managing Conflicts

The Conflicts page in the UI shows all flagged conflicts. For each conflict, you'll see:

  • The existing Truth content
  • The new evidence that triggered the conflict
  • A proposed update (what the system thinks the Truth should become)
  • The reasoning behind the proposed change
  • The evidence strength score

You can resolve conflicts by:

  • Accept - Use the proposed content as-is
  • Reject - Keep the original Truth, discard the new evidence
  • Merge - Edit the proposed content before accepting

TIP

The merge option lets you tweak the AI's proposed update. Sometimes it's close but not quite right - editing saves time compared to rejecting and manually updating.

Truths About Other People

Truths can capture stable facts about people in your inner circle - your partner, children, or close family. These are valuable context that Iris should remember.

Subject Preservation

When a Truth is about someone other than you, the subject must be preserved in the Truth statement:

| Memory | Correct Truth | Wrong Truth |
| --- | --- | --- |
| "Partner had a doctor appointment about a chronic condition" | "Partner is managing a chronic health condition" | "Is managing a chronic health condition" |
| "Son was diagnosed with ADHD" | "User's son has ADHD" | "Has ADHD" |

The wrong examples are ambiguous - they could be interpreted as being about you rather than the person they actually describe.

Category Hints

The memory's category provides context for subject identification:

  • Relationships - High chance the memory is about someone else
  • Health - Could be about you or someone close to you
  • Personal/Professional/Preferences/Goals/Hobbies - Usually about you

Pinning

You can pin any Truth to ensure it's always included in context, regardless of relevance scoring. This is useful for:

  • Facts that define how Iris should address or interact with you
  • Information important enough that it should never be filtered out
  • Core identity facts you want guaranteed in every conversation

Pinned Truths are unlimited - if you pin 20 Truths, all 20 will be included. Use this thoughtfully since it affects your context window budget.

Pinning also protects promoted Truths from automatic refinement.

NOTE

User-created and agent-created Truths are already protected from automatic refinement by default, so pinning them is only necessary when you want guaranteed inclusion in context.

Configuration

| Setting | Default | Description |
| --- | --- | --- |
| truths.max_dynamic | 7 | Maximum non-pinned Truths to include per conversation |
| truths.percentile_threshold | 5 | Top N% of memories by access count are candidates for promotion |
| truths.min_absolute_access_count | 10 | Minimum access count required for promotion candidacy |
| truths.min_total_memories | 20 | Minimum memories required before distillation runs |
| truths.max_generation | 5 | Maximum refinement generations before a Truth is protected |
| truths.auto_resolve_strength_threshold | 15 | Evidence strength required to auto-resolve a conflict |
| truths.similarity_threshold | 0.40 | Minimum relevance score for a Truth to be included |
| truths.duplicate_threshold | 0.80 | Similarity threshold for detecting duplicate Truths |
| truths.stale_days | 90 | Days without access before a Truth is considered stale |

Managing Truths

Via the UI

The Truths page lets you:

  • View all your Truths with their source, generation, and access statistics
  • Create new Truths manually
  • Edit existing Truth content
  • Pin/unpin Truths
  • Delete Truths you no longer want
  • View Truth history and source memories

The Conflicts page lets you:

  • Review flagged conflicts
  • Accept, reject, or merge proposed changes
  • See conflict resolution history

Via Conversation

Iris has tools to manage Truths during natural conversation:

  • store_truth - Create a new Truth when you tell Iris something important
  • search_truths - Find existing Truths
  • update_truth - Modify a Truth's content
  • delete_truth - Remove a Truth

You can say things like "Remember that I'm allergic to shellfish - that's important" and Iris will create a Truth rather than a regular memory.
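A store_truth invocation for that request might carry a payload like the following. The shape is hypothetical; only the tool names above come from this documentation.

```python
# Hypothetical tool-call payload; the argument names are illustrative.
tool_call = {
    "tool": "store_truth",
    "arguments": {
        "content": "User is allergic to shellfish",
        "pinned": True,  # flagged as important, so guarantee inclusion
    },
}
```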

Running Distillation Manually

You can trigger distillation via Artisan:

```bash
# Process all users
php artisan iris:distill-truths --queue

# Process a specific user
php artisan iris:distill-truths --user=1 --queue

# Preview candidates without promoting (dry run)
php artisan iris:distill-truths --dry-run

# View distillation statistics
php artisan iris:distill-truths --user=1 --stats
```

The --queue flag dispatches jobs for background processing with automatic retry handling for rate limits.