Documentation Index

Fetch the complete documentation index at: https://docs.getcitable.com/llms.txt

Use this file to discover all available pages before exploring further.

The Action Engine is the page you open first thing on a Monday. It turns scan results into specific recommendations: write this blog post, fix this page, reply to this thread, launch this ad. Each card tells you what to do, why, and how long it’ll take.

How cards work

Each card is one recommendation. Cards show:
  • A short title and a one-paragraph description
  • Why this action matters — usually the prompts you’re losing on or the gap it closes
  • The estimated effort in minutes
  • A deep link to where the work happens (Brand Studio for content, the page editor for technical fixes, etc.)
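The fields above can be pictured as a simple record. This is a minimal sketch of that shape — the class and field names are illustrative, not Citable’s actual API:

```python
from dataclasses import dataclass

@dataclass
class ActionCard:
    """One Action Engine recommendation (illustrative shape, not the real schema)."""
    title: str            # short title
    description: str      # one-paragraph description
    rationale: str        # why it matters: the prompts you're losing on, or the gap it closes
    effort_minutes: int   # estimated effort
    deep_link: str        # where the work happens (Brand Studio, page editor, ...)

card = ActionCard(
    title="Answer the pricing-comparison prompt",
    description="Publish a page comparing plans against the top alternative.",
    rationale="You lose on four purchase-intent prompts about pricing.",
    effort_minutes=45,
    deep_link="https://app.example.com/brand-studio/briefs/123",  # hypothetical URL
)
```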
You decide what to do with each card:
  • Approve — accept the recommendation and move into the work. For content, this routes to Brand Studio with the brief pre-filled. For technical fixes, it gives you the exact change to make.
  • Dismiss — close the card with a reason. The reason is recorded, so future recommendations can avoid the same mistake.
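The approve/dismiss flow can be sketched as a small routing function. This is hypothetical logic under the assumptions above (approvals route to the deep link, dismissals must carry a reason); it is not Citable’s code:

```python
def handle_decision(card, decision, reason=None):
    """Route one card decision (illustrative, not Citable's implementation)."""
    if decision == "approve":
        # Approvals move into the work: content routes to Brand Studio with
        # the brief pre-filled; technical fixes show the exact change to make.
        return {"status": "approved", "next": card["deep_link"]}
    if decision == "dismiss":
        if not reason:
            # The reason is recorded so future recommendations
            # can avoid the same mistake.
            raise ValueError("Dismissals require a reason.")
        return {"status": "dismissed", "reason": reason}
    raise ValueError(f"Unknown decision: {decision}")

approved = handle_decision({"deep_link": "/brand-studio/briefs/123"}, "approve")
dismissed = handle_decision({}, "dismiss", reason="not aligned with our focus")
```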

How cards get ranked

The Action Engine sorts cards across all five action types by expected impact:
  • How under-performing the affected prompts are
  • How strategically important those prompts are (purchase-intent beats general awareness)
  • How easy the action is relative to that impact
  • How many AI engines the fix is likely to move
Today’s batch shows up first, this week’s underneath, then next week’s, then the backlog. Most weeks you should clear today’s and this week’s; everything below is fine to defer.
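The four ranking signals above amount to a weighted score. Here is a minimal sketch of that idea — the weights, field names, and 0-to-1 scales are made up for illustration; Citable’s actual model is not public:

```python
def expected_impact(card):
    """Combine the four ranking signals into one score (illustrative weights)."""
    return (
        card["underperformance"] * 0.35    # how badly the affected prompts perform
        + card["strategic_value"] * 0.30   # purchase-intent beats general awareness
        + card["ease"] * 0.20              # how easy the action is relative to impact
        + card["engines_moved"] * 0.15     # how many AI engines the fix should move
    )

def rank(cards):
    # Highest expected impact first; the UI then buckets the sorted list
    # into today, this week, next week, and the backlog.
    return sorted(cards, key=expected_impact, reverse=True)

cards = [
    {"id": "fix-faq-page", "underperformance": 0.9, "strategic_value": 0.8,
     "ease": 0.6, "engines_moved": 0.5},
    {"id": "new-blog-post", "underperformance": 0.4, "strategic_value": 0.3,
     "ease": 0.9, "engines_moved": 0.2},
]
ranked = rank(cards)
# fix-faq-page outranks new-blog-post: badly under-performing,
# high-intent prompts beat an easy but low-impact action.
```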

How it learns from you

The Action Engine watches what you approve and what you dismiss. Recommendations you’ve dismissed for “not aligned with our focus” stop appearing for that theme. Recommendations you’ve approved and shipped feed the 30-day measurement loop — actions that worked are weighted up in similar future recommendations. After a few weeks of approving and dismissing, the queue starts looking more like your queue and less like a generic top-10 list.
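That feedback loop can be sketched as a filter plus a re-weighting pass. A minimal sketch, assuming each card carries a theme and a score, and each history entry records the decision and whether a shipped action worked in the 30-day measurement; names and the 1.25 boost are illustrative:

```python
def personalize(queue, history):
    """Filter and re-weight the queue from past decisions (illustrative logic)."""
    # Themes dismissed as "not aligned with our focus" stop appearing.
    muted = {h["theme"] for h in history
             if h["decision"] == "dismiss"
             and h["reason"] == "not aligned with our focus"}
    # Themes whose approved-and-shipped actions worked get weighted up.
    boosted = {h["theme"] for h in history
               if h["decision"] == "approve" and h.get("worked")}
    visible = [dict(c) for c in queue if c["theme"] not in muted]
    for c in visible:
        if c["theme"] in boosted:
            c["score"] *= 1.25  # hypothetical boost for proven themes
    return sorted(visible, key=lambda c: c["score"], reverse=True)

queue = [
    {"theme": "pricing-content", "score": 1.0},
    {"theme": "webinar-promotion", "score": 2.0},
]
history = [
    {"theme": "webinar-promotion", "decision": "dismiss",
     "reason": "not aligned with our focus"},
    {"theme": "pricing-content", "decision": "approve", "worked": True},
]
personalized = personalize(queue, history)
# The dismissed theme disappears; the theme that worked is weighted up.
```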

What you’ll see in your first week

The Action Engine surfaces a mix on day one — usually a content recommendation, a technical fix on an existing page, and one operational suggestion (most often “enable another AI engine in your scan”). The mix shifts over time as Citable learns which categories you act on. The full list of cards lives on the Action Engine page. Top picks also surface on the dashboard so you can act without an extra click.