
Documentation Index

Fetch the complete documentation index at: https://docs.frontic.com/llms.txt

Use this file to discover all available pages before exploring further.

A command in the Context Base is a named operation the agent runs when you invoke it by name. Commands share the same shape as skills — the content body is the same kind of instruction — but there’s one crucial difference: the agent never auto-picks a command. It runs only when you ask for it. Use commands for work that’s too narrow or too invasive to auto-apply: a code-review, a test-checkout, a verify-accessibility — things you sometimes want to run and don’t want the agent starting on its own.
Commands currently don’t sync to your local repo via frontic context init. Only skills and rules are exported to .claude/ and .cursor/ today. You invoke commands from the Frontic surfaces (Studio chat, MCP in your editor) where the agent can read them directly from the project’s Context Base.

How commands fit with the other Context Base types

The four Context Base types compared:
| Type | Triggered by | Shape | Example |
| --- | --- | --- | --- |
| Commands | Forced by name only | Task procedure | `code-review` |
| Skills | A matching domain (or forced by name) | Domain expertise | `seo-optimizer` |
| Rules | Always active | Always-on constraint | `product-images-through-cdn` |
| Guides | Explicitly referenced | Reference document | `authentication-flow` |
A skill is knowledge, a rule is a constraint, a guide is reference material — all three are things the agent reads. A command is the only one that’s an action. You ask for a skill implicitly (by describing the task) or explicitly (by naming it); you invoke a command by name to have the agent run its defined flow.

What commands are good for

Reviews & checks

code-review, verify-accessibility, check-performance — things you run when you want them, not every time the agent touches the codebase. Invoking them explicitly keeps the agent from second-guessing.

Build recipes

build-landing-page, scaffold-product-detail-page, add-block-component — the team’s way of creating something you put together often enough that “build me one of those” should always land the same shape.
The common pattern: work you want done a specific way when you ask for it, not every time the agent looks at a task.

Writing a command

Open Studio → Context → Commands in the admin app and click New Command. Creation asks for two things:
  • Name — short, memorable, triggerable. code-review and build-landing-page are good.
  • Description — one sentence for the commands list.
Once the command is created, you land in the editor, where you write the operation itself in Markdown: the steps in order, with enough detail that the agent doesn’t have to guess. New commands start as Draft; flip the status to Active when the content is ready to run.

Commands can be scoped to the current project (the default) or promoted to your organization, so they apply across every project on your team.

Unlike skills (domain expertise) and rules (constraints), the command body is an actionable procedure the agent is expected to execute when triggered.
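As a sketch of the shape a command body takes in the editor (the `check-broken-links` name and all of its steps are hypothetical, invented purely for illustration):

```md
# check-broken-links

<!-- Hypothetical example: adapt the steps to your own procedure. -->

Scan the page or route I point you at for broken internal links.

1. **Identify the target.** If I didn't give you a route, ask
   before scanning anything.

2. **Collect every internal link** the page renders, including
   links inside reused blocks.

3. **Report dead and redirected links**, grouped by severity,
   with the source location of each one.

4. **Don't auto-fix.** List the changes you'd make and wait for
   my approval.
```

A name, a one-line statement of intent, then numbered steps: that structure keeps the agent from guessing, and it is the same pattern both full examples on this page follow.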

Example: build-landing-page

A creation recipe — a repeatable “build me one of these” pattern that bakes in the team’s choices so every landing page lands in the same shape.
```md
# build-landing-page

Build a new marketing landing page from a brief. Before starting,
I'll give you a short brief — the offer, the primary call-to-action,
the audience, and any imagery direction. The flow:

1. **Confirm the brief.** Summarize what you understood back to me.
   If anything is ambiguous (CTA target, audience, hero copy),
   ask before scaffolding. Landing pages are cheap to rebuild but
   annoying to partially redo.

2. **Add a page route** under `pages/marketing/` following the
   project's slug convention. Wire it into the navigation entry
   for marketing pages (not the main site nav).

3. **Lay out the page using existing blocks.** Start from
   `HeroWithCTA`, then one or two supporting blocks from the
   marketing block library (`FeatureGrid`, `TestimonialStrip`,
   `SplitImageText`). Don't invent new blocks unless the brief
   genuinely needs one — if it does, stop and ask.

4. **Wire the SEO composite.** Title, description, canonical URL,
   OG image. Pull the OG image from the brief's hero imagery.

5. **Translate all user-facing strings** from the start — don't
   leave them hardcoded "to translate later". Use the existing
   key prefixes for the marketing namespace.

6. **Verify the page renders in both default and secondary
   markets.** Request context resolution is where most landing
   page bugs come from on this project.

7. **Give me a preview URL** and list what's left for me to review
   (imagery review, copy check, analytics tags).

### Reminders

- Landing pages on this project all follow the hero + three-block
  layout. If the brief asks for more, check in before adding.
- Analytics tagging goes through the `useMarketingEvent` composable
  — don't wire events manually.
- Never inline images. Every image goes through the image CDN;
  `<FronticImage>` handles it.
```
Now when anyone types `/build-landing-page` for the spring sale campaign, the agent follows the same flow. Every landing page matches the rest of the site.

Example: verify-accessibility

An audit command — a different shape from a build recipe. The agent reviews something that already exists and reports back.
```md
# verify-accessibility

Run an accessibility pass on the page, component, or feature I'm
pointing you at. Goal: surface issues that would fail our team's
a11y baseline, with enough context that I can fix them myself or
have you fix them.

1. **Identify the target.** If I gave you a route, audit that
   route's rendered output. If I gave you a component, audit that
   component's markup and any direct children it renders.

2. **Check the baselines** in this order (stop and report any
   failures — don't keep digging if the first few are broken):
   - Every interactive element has an accessible name
   - Color contrast meets WCAG AA on brand colors (don't trust the
     token defaults blindly; check the combinations in use on
     this view)
   - Keyboard-only navigation reaches every interactive element in
     a sensible order, with visible focus indicators
   - Form fields have visible labels, not only placeholders
   - Images have alt text (empty `alt=""` for decorative is fine;
     meaningful images need real text)

3. **Scan for our project-specific gotchas:**
   - Custom dropdowns built from divs (should use `<select>` or the
     `<FronticSelect>` component which handles a11y for you)
   - Tooltips without keyboard triggers (hover-only tooltips fail
     keyboard users)
   - Modal dialogs that don't trap focus or restore it on close

4. **Produce a report** grouped by severity:
   - **Blocker** — fails our baseline, must fix before merge
   - **Should-fix** — compliant but fragile or below our bar
   - **Nice-to-have** — small polish items

5. **Don't auto-fix.** Ask me before touching anything. Some
   "issues" are intentional (we have specific cases where we use
   `aria-hidden` deliberately), and I want to confirm per item.
```
Typing `/verify-accessibility` gets you a consistent report every time — the same baselines, the same gotchas, the same severity language. No restating “check this, check that” in the prompt.

Triggering a command

Commands are invoked as slash commands from the agent chat — the same mechanism in Studio, Claude Code, Cursor, and any other MCP-capable editor:
`/build-landing-page` — brief for the spring sale campaign attached.
`/verify-accessibility` on the checkout confirmation page.
In Studio, there’s a second path: the command menu lets you pick a command from a list and run it without typing the slash — handy when you don’t remember the exact name.
[Screenshot: Studio chat command menu listing available commands like `build-landing-page` and `verify-accessibility`.]
The agent loads the command’s content, confirms the parameters, and starts executing. You stay in control: destructive steps still go through normal approval, and the agent will pause for confirmation at any step that needs it.

Command or skill — how to choose

Commands and skills share the same content shape, so the decision is about what the content describes and when you want it to fire:
  • Is it a domain the agent should carry into every relevant task? → Skill. `shopware-headless-commerce`, `seo-optimizer`, `product-variants-and-scopes` — expertise that applies across many tasks. Let the agent discover and apply it on its own.
  • Is it a specific procedure you only want run when you ask? → Command. `code-review`, `verify-accessibility`, `build-landing-page` — narrow actions with a beginning, middle, and end. They stay out of every task until invoked.
Tiebreaker: “would I be annoyed if the agent picked this up on its own without me asking?” If yes, it’s a command.

Related pages

  • Context Base — the mental model for the whole Context Base.
  • Skills — task-specific reusable know-how.
  • Rules — always-on guardrails build-side agents follow.
  • Guides — reference documents the agent reads on demand.