30 September 2025
Almanac: an Integrated Research Environment
Research workflows everywhere follow the same arc: consumption → synthesis → creation, whether you’re a financial analyst assessing risk, a PM mapping a market, or an academic surveying a field. Today that arc is fractured: we read in browsers and PDF viewers (or outsource to “deep research”), then hop to Docs/Notion/Word to write. Context shatters, flow breaks, quality suffers.
What’s changing and why Almanac exists
1) Bolt-on copilots aren’t enough.
Chat assistants embedded into legacy tools give you a conversational partner, but they operate through human-optimized UIs that are opaque to agents. Asking an agent to edit a table rendered as pixels is brittle; even with multimodality, you get errors and inconsistency.
Almanac’s stance: the right abstraction is dual-faced. Keep the ergonomic, human UI, but also expose a first-class agent interface: deterministic, structured, and semantically addressable so the conversational partner can select sections, apply math tools, insert citations, or refactor passages without guessing at pixels. That substrate makes collaboration reliable and auditable (traceability, rollback) while preserving a delightful editor. This requires re-architecting core interaction models across product and ecosystem layers.
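To make the contrast with pixel-level manipulation concrete, here is a minimal sketch of what a structured, semantically addressable agent interface could look like. The shape and names (DocumentNode, AgentEditor, insertCitation, and so on) are illustrative assumptions, not Almanac’s actual API.

```typescript
// Hypothetical sketch of a structured agent interface to a document.
// All names are illustrative; Almanac's real API may differ entirely.

// Every block in the document has a stable, semantic address.
interface DocumentNode {
  id: string;                       // stable identifier, e.g. "sec-methodology/tbl-risk-1"
  kind: "section" | "paragraph" | "table" | "figure" | "citation";
  children: DocumentNode[];
  text?: string;
}

interface Citation {
  sourceId: string;                 // points at an entry in the source explorer
  locator?: string;                 // page, paragraph, or timestamp within the source
}

// Every operation returns a receipt, which is what makes collaboration
// auditable: edits can be traced to an actor and rolled back.
interface EditReceipt {
  editId: string;
  actor: "human" | "agent";
  timestamp: string;
  undo: () => void;
}

// Agents act on nodes by id, never on rendered pixels, so every edit is
// deterministic, attributable, and reversible.
interface AgentEditor {
  select(nodeId: string): DocumentNode | undefined;
  replaceText(nodeId: string, next: string): EditReceipt;
  insertCitation(nodeId: string, cite: Citation): EditReceipt;
  moveNode(nodeId: string, newParentId: string, index: number): EditReceipt;
}
```

The specifics matter less than the principle: edits address semantic nodes and return receipts, which is what makes traceability and rollback possible alongside the human-facing editor.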
2) “Deep Research” creates new friction.
Users frustrated by scattered tooling are turning to systems that orchestrate swarms of agents and return 10,000-word reports. The bottleneck simply moves: you still have to iterate toward the result you actually need, but your only levers are chat or a minimal canvas, which makes precise structural edits, targeted comparisons, and evidence-aware revisions slow and awkward. It’s also single-player, while real knowledge work is multi-player, with shared context, roles, permissions, and versioned edits. And inquiry is Brownian: you hop across adjacent questions, and fire-and-forget workflows can’t keep up.
Almanac’s model: AI-first, workspace-native, iterative
Almanac is an Integrated Research Environment (IRE): a workspace-native operating surface where humans and agents are co-equal and accountable. Almanac is powered by Alma, an always-available work partner that works the way you do.
Live research, not a background job.
You state intent in chat; instead of the system disappearing for 20 minutes, Almanac opens a live research workspace:
Left Explorer: sources populate as the agent reads; status and coverage remain visible.
Center Canvas: a continuously growing, multimodal document with excerpts, figures, tables, and inline citations.
You stay in flow: offload browsing and first-pass extraction, keep comprehension and synthesis.
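As a rough illustration of how such a live workspace could be modeled, the sketch below pairs an explorer of sources (with read status and coverage) with a canvas of citation-bearing blocks. The types and field names are assumptions made for illustration, not Almanac’s internals.

```typescript
// Illustrative data model for a live research workspace; field names are
// assumptions, not Almanac's actual schema.

interface Source {
  id: string;
  url: string;
  status: "queued" | "reading" | "read";    // visible in the left Explorer
  coverageNotes?: string;                   // what the agent has extracted so far
}

type CanvasBlock =
  | { kind: "excerpt"; sourceId: string; text: string }
  | { kind: "figure"; sourceId: string; caption: string }
  | { kind: "table"; sourceId: string; rows: string[][] }
  | { kind: "prose"; text: string; citations: string[] };   // human/agent synthesis

interface ResearchWorkspace {
  explorer: Source[];       // populates as the agent reads
  canvas: CanvasBlock[];    // grows continuously during the session
}

// The agent appends evidence to the canvas as it reads, so the human can
// skim, reorder, and synthesize without waiting for a final report.
function appendExcerpt(ws: ResearchWorkspace, sourceId: string, text: string): void {
  ws.canvas.push({ kind: "excerpt", sourceId, text });
}
```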
Co-steer with real controls.
At any point you can constrain scope, request comparisons, impose structure, or apply templates. Alma keeps you in the loop, asking clarifying questions when intent is ambiguous.
One surface from gather → draft.
As collection winds down, the same canvas becomes the drafting environment. Alma operates across an autonomy slider: from in-workflow assistance (autocomplete, citations, refactors), to a chat sidecar for targeted actions, to delegable execution for sections or full drafts.
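One way to picture the autonomy slider is as an explicit mode on every request, as in the hypothetical sketch below; the mode names are assumptions chosen only to mirror the three levels described above.

```typescript
// Hypothetical encoding of the autonomy slider; names are illustrative.

type AutonomyMode =
  | "inline"      // in-workflow assistance: autocomplete, citations, refactors
  | "sidecar"     // chat sidecar: targeted, human-reviewed actions
  | "delegated";  // delegable execution: drafting sections or full documents

interface DraftRequest {
  mode: AutonomyMode;
  targetNodeId?: string;   // semantic address of the section to work on
  instructions: string;
}

// The same request shape works at every level; only how much work the agent
// takes on before checking back in changes.
const example: DraftRequest = {
  mode: "delegated",
  targetNodeId: "sec-competitive-landscape",
  instructions: "Draft this section from the comparison table and cite all sources.",
};
```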
Why this wins
The future won’t be defined by bolt-on copilots or 10,000-word dumps. It will be set by a workspace-native IRE that pairs a human-centric editor with a structured agent interface, making collaboration reliable, measurable, and fast.
Almanac compresses time-to-insight, elevates output quality, and compounds a workflow-data moat: every action, citation, and decision becomes structured signal that improves Alma and your team over time.