Why this matters
The model does not see the raw repository by magic. Claude Code assembles a carefully shaped context before each turn, and this chapter follows that path from the first cached snapshot to the final prompt text.
Big picture first
Claude Code does not treat memory as one last add-on step. CLAUDE.md content enters during getUserContext(), memory mechanics can also enter during getSystemPrompt(), and override logic later chooses the final prompt body. That is the easiest way to understand this subtree: context and memory can feed the prompt in more than one place, but the final assembly rules still decide what the model sees.
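A minimal sketch can make that multi-entry flow concrete. Only the names getUserContext() and getSystemPrompt() come from this chapter; the PromptParts shape, the assemblePrompt helper, and every signature below are illustrative assumptions, not the real implementation:

```typescript
// Hypothetical sketch of the flow described above: context and memory
// feed the prompt in more than one place, and override logic picks the
// final body last. All shapes here are assumptions for illustration.
interface PromptParts {
  userContext: string[];   // CLAUDE.md content lands here first
  systemPrompt: string[];  // memory mechanics may also land here
}

function getUserContext(claudeMd: string | null): string[] {
  // CLAUDE.md enters during user-context collection.
  return claudeMd ? [claudeMd] : [];
}

function getSystemPrompt(memorySections: string[]): string[] {
  // Memory mechanics can also enter during default prompt construction.
  return ["<default system prompt>", ...memorySections];
}

function assemblePrompt(
  claudeMd: string | null,
  memorySections: string[],
  override?: string
): string {
  const parts: PromptParts = {
    userContext: getUserContext(claudeMd),
    systemPrompt: getSystemPrompt(memorySections),
  };
  // Override logic chooses the final prompt body last.
  const body = override ?? parts.systemPrompt.join("\n");
  return [body, ...parts.userContext].join("\n");
}
```

The point of the sketch is the shape of the pipeline, not the details: two separate collection steps feed the same final assembly, and the override decision happens after both have run.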
How this part breaks down
The rest of the subtree keeps the teaching order readable, but the runtime can surface memory in more than one place:
- context-snapshots-and-cached-inputs: Start with the values Claude Code gathers before it even builds a prompt.
- default-system-prompt-and-dynamic-sections: See how the big default system prompt is built from cached and dynamic sections.
- prompt-override-priority-and-final-assembly: Follow the rules that decide whether default, custom, coordinator, or agent prompts win.
- claude-md-and-memory-file-loading: See how CLAUDE.md files and auto memory become model-visible instructions.
- prompt-and-memory-data-structures: Review the key types and object shapes before you revisit the deeper pages.
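The override rules named above can be previewed with a small sketch. The precedence order here (agent over coordinator over custom over default) is only an assumption drawn from the way the chapter lists the candidates, and pickFinalPrompt is a hypothetical name, not the real function:

```typescript
// Hypothetical precedence sketch: which prompt body wins.
// The ordering is an illustrative assumption, not the confirmed rule.
interface PromptCandidates {
  defaultPrompt: string;
  customPrompt?: string;
  coordinatorPrompt?: string;
  agentPrompt?: string;
}

function pickFinalPrompt(c: PromptCandidates): string {
  // More specific sources win; the default is the fallback of last resort.
  return c.agentPrompt ?? c.coordinatorPrompt ?? c.customPrompt ?? c.defaultPrompt;
}
```

Whatever the real ordering turns out to be, the structural idea is the same: several candidate bodies exist at assembly time, and exactly one of them becomes the prompt the model sees.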
Takeaways
- The prompt path is a pipeline, not a single string concatenation.
- The chapters keep a clear teaching order even though memory enters the runtime in more than one place.
- The appendix exists so the later code pages can build on named data shapes instead of introducing them mid-explanation.