Claude Code /goal & Session Management: How to Continue Multi-Day Tasks with AI Without Losing Your Place
If you've used Claude Code for any length of time, you've probably hit this wall: "I have no idea what Claude is actually doing right now." It reads files, edits code, runs terminal commands — but the overall flow never quite comes into view. At first, I spent my time scrolling back through the chat window trying to reconstruct "wait, what did it do earlier?" The bigger problem was that picking up where I left off the day before meant re-explaining everything from scratch: "Yesterday we converted up through UserService, and the tests got stuck here…"
Claude Code is a development tool where an AI agent works directly with a real terminal and codebase. A recent update introduced a revamped Agent View alongside /goal and Session management — and the core of this change is that developers can now clearly see and control what the AI is working toward. It genuinely feels like a shift in the paradigm for working with agents.
By the end of this post, you'll know how to carry out multi-day migrations or large-scale refactoring work with Claude in a systematic, uninterrupted way.
Core Concepts
Seeing How Claude Code Works: Agent View
Previously, Claude Code's work process was nothing but a stream of text flowing like a chat log. Agent View is a panel that shows the currently running agents, any subagents, and each agent's role and progress — all in one screen.
When Claude Code handles complex tasks, an orchestrator agent creates multiple subagents to process work in parallel. Agent View visualizes this structure as a tree.
```
[Orchestrator]
├── [Subagent A] Scanning src/api directory ✓ Done
├── [Subagent B] Analyzing src/components ⏳ In progress
└── [Subagent C] Validating test files ⏸ Waiting
```

Orchestrator: The top-level agent that designs the overall task and delegates subtasks to subagents. Like a project manager, it keeps the "big picture" in view while distributing work to each agent.
This structure means you can intervene and course-correct when a specific agent is taking longer than expected or drifting in the wrong direction. Without Agent View, I once had Claude reading the wrong files for 30 minutes without realizing it — and had to roll everything back afterward. That kind of situation is much rarer now.
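As a rough mental model, the orchestrator/subagent split behaves like a coordinator fanning out independent tasks and collecting their statuses. This sketch is purely conceptual: the names, statuses, and structure are invented for illustration, not Claude Code's actual internals.

```typescript
// Conceptual sketch of the orchestrator/subagent tree that Agent View renders.
// All names and statuses here are illustrative, not Claude Code APIs.
type Status = "waiting" | "running" | "done";

interface Subagent {
  name: string;
  task: string;
  status: Status;
}

async function runSubagent(agent: Subagent, work: () => Promise<void>): Promise<Subagent> {
  agent.status = "running";
  await work(); // e.g. scan a directory, convert a file
  agent.status = "done";
  return agent;
}

async function orchestrate(): Promise<Subagent[]> {
  const tree: Subagent[] = [
    { name: "Subagent A", task: "Scan src/api", status: "waiting" },
    { name: "Subagent B", task: "Analyze src/components", status: "waiting" },
    { name: "Subagent C", task: "Validate tests", status: "waiting" },
  ];
  // The orchestrator fans subtasks out in parallel and waits for all of them,
  // which is why Agent View can show several agents "in progress" at once.
  return Promise.all(tree.map((a) => runSubagent(a, async () => {})));
}
```

The point of the model: because subtasks run concurrently, a single chat transcript can't show you their interleaved progress, but a tree view can.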
/goal: How to Give the Agent a Clear Direction
/goal is a command for explicitly declaring an objective that spans the entire session. Unlike a one-off instruction like "edit this file," a goal declared with /goal serves as Claude's reference point for every decision as long as the session runs.
```
/goal Bring unit test coverage for the payment module above 80%, while ensuring existing integration tests don't break
```

With this declaration in place, Claude will ask itself "does this change contribute to the goal?" throughout the conversation. If an unrelated refactoring request comes in while it's writing tests, it will check first: "This task isn't directly related to the current goal. Should I proceed?"
Honestly, I once wrote something vague like "improve code quality" and discovered Claude had been busily reorganizing names — completely unrelated to tests. A goal that's too abstract gives Claude room to justify almost anything, so being specific from the start really matters.
Why it's useful: Long conversations tend to drift away from the original intent. /goal acts as an anchor, keeping Claude from losing the thread.
Sessions: Picking Up Exactly Where You Left Off
The session feature saves your work state and conversation context so you can reload it later. Previously, closing Claude Code meant losing all context. Now you can save and resume work in discrete sessions.
```
# List sessions
/session list

# Resume a specific session
/session resume auth-migration-day1

# Save the current session
/session save "Payment module test work"
```

A session stores the goal declared with /goal, the list of files changed so far, and the codebase context the agent has accumulated. When you resume the next day, you don't have to re-explain where you left off.
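To make the stored state concrete, you can picture a saved session as a record like the one below. This is a hypothetical shape for illustration only: the field names are invented, and Claude Code's actual persistence format is not documented here.

```typescript
// Hypothetical sketch of what a saved session might contain, based on the
// description above (goal, changed files, accumulated context).
// Field names are invented for illustration.
interface SessionRecord {
  name: string;            // e.g. "auth-migration-day1"
  goal: string;            // the /goal declaration
  changedFiles: string[];  // files touched so far
  contextSummary: string;  // what the agent has learned about the codebase
  savedAt: string;         // ISO timestamp
}

const session: SessionRecord = {
  name: "auth-migration-day1",
  goal: "Migrate the Express.js auth module to NestJS without breaking tests",
  changedFiles: ["src/users/user.service.ts", "src/users/user.service.spec.ts"],
  contextSummary: "UserService converted; AuthGuard conversion not started",
  savedAt: new Date().toISOString(),
};
```

Whatever the real format is, the key property is that all three pieces (goal, file list, accumulated context) travel together, which is what makes next-day resumption work without re-explanation.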
Practical Application
Legacy Migration: Carrying a Three-Day Backend Task Through Without Interruption
Imagine your team is migrating an older Express.js codebase to NestJS — a multi-day effort where session continuity is everything.
```
# Day 1: Declare goal and start session
/goal Migrate the Express.js-based auth module to NestJS.
Preserve the existing API interface, and keep existing tests passing at each step.

# ... analyze codebase, work through UserService conversion ...

# Save session before logging off
/session save "auth-module-migration-day1"
```

```
# Day 2: Resume session
/session resume auth-module-migration-day1
```

Claude's response: "We completed the UserService conversion yesterday. Today we'll continue with the AuthGuard."
Claude picks up with the previous day's context, goal, and changes intact.
| Step | Old Approach | With Sessions |
|---|---|---|
| Context recovery | Re-summarize yesterday's conversation | Instantly restored with /session resume |
| Goal consistency | Re-state constraints with every instruction | /goal automatically serves as the reference |
| Progress tracking | Manage a checklist manually | Check in real time via Agent View |
Frontend Component Refactoring: Monitoring Multi-Agent Work
This isn't just for backend work. There are plenty of useful frontend scenarios too. Converting legacy class components to functional ones is a perfect fit — the scope is wide but the pattern is repetitive.
```
/goal Convert class-based React components in src/components to functional components.
Keep existing snapshot tests passing for each component,
and apply useState/useEffect patterns consistently.
```

With this goal declared, Claude automatically breaks the work into pieces. Agent View displays something like this:
```
[Orchestrator] Planning component conversion
├── [Subagent A] Scan and convert src/components/ui
├── [Subagent B] Scan and convert src/components/forms
└── [Subagent C] Validate snapshot tests for converted components
```

Agent View tip: You can see each subagent's status (running / done / waiting) and the file it's currently processing in real time. The tree structure can feel unfamiliar at first — start by following the orchestrator node and it'll click quickly.
Pros and Cons
Pros
| Item | Details |
|---|---|
| Improved visibility | See what each agent is doing in real time |
| Context persistence | Resume a session without rebuilding codebase understanding from scratch |
| Goal consistency | /goal reduces the drift that happens mid-task |
| Parallel work control | Monitor and intervene in multi-agent work mid-flight |
| Task reproducibility | Session logs let you trace back "how did we end up here?" |
Cons and Caveats
| Item | Details | Mitigation |
|---|---|---|
| Session bloat | Long sessions accumulate context tokens, which can degrade response quality | Recommended to split sessions by feature or by day |
| /goal ambiguity | An abstract goal declaration can lead Claude to make odd judgments | Include specific success criteria (coverage numbers, target directories, etc.) |
| Agent View learning curve | Multi-agent structure can be confusing at first — hard to know which agent to watch | Follow the orchestrator node to understand the tree |
Context window (token limit): The cap on how much text an LLM can process in one pass. The longer a session runs, the closer it gets to this limit — and the model starts "forgetting" older information.
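To see why long sessions degrade, consider a toy context-budget model. The 4-characters-per-token heuristic below is a rough rule of thumb, and the trimming strategy is invented for illustration: real tokenizers and Claude Code's actual context management are more sophisticated.

```typescript
// Toy model of context pressure: a rough chars/4 token estimate and a trim
// that drops the oldest messages first. Illustrative only.
function estimateTokens(text: string): number {
  return Math.ceil(text.length / 4); // common rough heuristic, not a real tokenizer
}

function trimToBudget(messages: string[], budgetTokens: number): string[] {
  const kept: string[] = [];
  let used = 0;
  // Walk from newest to oldest, keeping what fits. Older turns fall off first,
  // which is exactly the "forgetting older information" effect described above.
  for (let i = messages.length - 1; i >= 0; i--) {
    const cost = estimateTokens(messages[i]);
    if (used + cost > budgetTokens) break;
    kept.unshift(messages[i]);
    used += cost;
  }
  return kept;
}
```

Splitting sessions by feature or by day is the manual version of this trim: you reset the budget before old context starts crowding out new work.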
The Most Common Mistakes in Practice
The first is declaring /goal too vaguely. I wrote "improve the quality of the entire codebase" early on and discovered Claude spending a long time tidying variable names while never touching the tests. A goal that vague lets Claude treat almost anything as a quality improvement and run with it. Something like "raise unit test coverage in the src/payment module from the current 42% to above 80%" — with scope and a number — works far better.
The second is letting a session run indefinitely. Accumulating a week's worth of work in one session causes context to blur together, and Claude's response quality drops noticeably. After running into this problem, I made a habit of splitting sessions at PR boundaries — and management became much cleaner.
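In practice the PR-boundary split looks something like this, using the /session and /goal commands introduced earlier. The session names and goal text are made up for illustration:

```
# End of PR #1's work: save and close out the session
/session save "payments-refactor-pr1"

# Next PR starts fresh: new goal, new session
/goal Extract the retry logic in src/payment into a shared helper
/session save "payments-refactor-pr2"
```

Each session then maps cleanly to one reviewable unit of work, and none of them runs long enough to hit context-bloat territory.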
The third is opening Agent View and then not actually checking it. I once had a subagent reading the wrong directory the entire time during a multi-agent run, didn't notice, and had to roll everything back afterward. For large tasks, it's worth checking agent status in Agent View periodically throughout. You don't need to look every five minutes, but scanning the tree every 10–15 minutes makes a real difference.
Closing Thoughts
Claude Code's Agent View, /goal, and session management are the turning point that transforms an AI coding assistant from a "one-off question tool" into a "long-term work partner." Running complex, multi-day tasks together with AI in a structured way is now genuinely achievable — and you can see what's happening throughout the process at a glance.
Three steps you can take right now:
- Update Claude Code to the latest version. Install it with npm install -g @anthropic-ai/claude-code (or pnpm add -g @anthropic-ai/claude-code for pnpm environments), then check whether the Agent View panel appears.
- Try /goal on a project you know well. Starting with a clearly scoped goal — something like "find all console.log calls in the current project and replace them with an appropriate logger, while keeping existing tests passing" — makes the impact immediately tangible.
- After a work session, save with /session save <name>, then resume the next day with /session resume. Experiencing the context carry over intact is genuinely striking the first time.
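As a concrete picture of what the second step's console.log-replacement goal could converge on, here is a minimal level-aware logger sketch. Every name here (createLogger, the level ordering) is invented for illustration; Claude would propose its own shape for your project.

```typescript
// Minimal logger sketch: one possible "appropriate logger" that a /goal like
// the console.log-replacement example might produce. All names are illustrative.
type Level = "debug" | "info" | "warn" | "error";
const ORDER: Record<Level, number> = { debug: 0, info: 1, warn: 2, error: 3 };

function createLogger(minLevel: Level = "info") {
  const emit = (level: Level, ...args: unknown[]) => {
    if (ORDER[level] < ORDER[minLevel]) return; // suppress below threshold
    // Route to the matching console method so downstream tooling still works.
    (level === "debug" ? console.log : console[level])(`[${level}]`, ...args);
  };
  return {
    debug: (...a: unknown[]) => emit("debug", ...a),
    info: (...a: unknown[]) => emit("info", ...a),
    warn: (...a: unknown[]) => emit("warn", ...a),
    error: (...a: unknown[]) => emit("error", ...a),
  };
}

// Before: console.log("user created", userId);
// After:  logger.info("user created", userId);
const logger = createLogger("info");
```

The "keep existing tests passing" constraint in the goal is what steers Claude toward a drop-in replacement like this rather than a rewrite of every call site.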