Prompt management
Prompt management is currently in beta.
We'd love to hear your feedback as we develop this feature.
Prompt management lets you create and update LLM prompts directly in PostHog. When you use prompts through the SDK, they're fetched at runtime with caching and fallback support, so you can iterate on prompts without deploying code.
Why use prompt management?
- Update prompts without code deploys – Change prompts instantly from the PostHog UI
- Non-engineers can iterate – Product and content teams can tweak prompts without touching code
- Track prompt usage – Link prompts to generations to see which prompts drive which outputs
- Versioning – Every change creates an immutable version you can view, compare, or restore
- (soon) A/B testing – Support for testing different prompt variants using Experiments
Creating prompts
- Navigate to LLM analytics > Prompts
- Click New prompt
- Enter a name for your prompt
- Write your prompt content, using `{{variables}}` for dynamic values
- Click Create prompt
This creates version 1 of your prompt. Each subsequent edit creates a new immutable version.
Markdown preview
When viewing a prompt, markdown rendering is enabled by default, formatting your prompt text with headings, lists, bold, and other markdown elements. Click the markdown icon next to the Prompt label to toggle it off and view plain text. When editing a prompt, the view switches to plain text automatically.
Prompt naming rules
- Names are immutable after creation (cannot be changed)
- Only letters, numbers, hyphens, and underscores allowed (`^[a-zA-Z0-9_-]+$`)
- Names must be unique within your project
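The naming rule above can be checked client-side before attempting to create a prompt. A minimal sketch using the pattern from the rules (the function name is illustrative, not part of the SDK):

```python
import re

# Pattern from the naming rules: letters, digits, hyphens, and underscores only
PROMPT_NAME_PATTERN = re.compile(r"^[a-zA-Z0-9_-]+$")

def is_valid_prompt_name(name: str) -> bool:
    """Return True if the name satisfies the prompt naming rules."""
    return bool(PROMPT_NAME_PATTERN.match(name))

print(is_valid_prompt_name("welcome-email_v2"))  # valid characters only
print(is_valid_prompt_name("welcome email"))     # space is not allowed
```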
Managing prompts via MCP
You can also manage prompts through the PostHog MCP server using AI agents like Claude Code, Cursor, or any MCP-connected tool.
The MCP server provides four prompt management tools:
| Tool | Description |
|---|---|
| `prompt-list` | List all team prompts with optional name filtering |
| `prompt-get` | Get a prompt by name, including full content |
| `prompt-create` | Create a new prompt with a unique name and content |
| `prompt-update` | Update an existing prompt's content by name |
This enables teams to manage prompts programmatically from agent workflows without using the web UI.
Template variables
Use double curly braces, like `{{variable_name}}`, to define variables in your prompts.
Variable names can include letters, numbers, underscores, hyphens, and dots.
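The SDK handles substitution for you, but the variable syntax can be illustrated with a manual sketch. This is not the SDK's implementation; it only demonstrates the `{{name}}` placeholder format and the allowed character set described above:

```python
import re

# Matches {{name}} where the name may contain letters, digits, _, -, and .
VARIABLE_RE = re.compile(r"\{\{\s*([a-zA-Z0-9_.-]+)\s*\}\}")

def render_prompt(template: str, variables: dict) -> str:
    """Replace each {{name}} placeholder with its value from `variables`."""
    def substitute(match):
        name = match.group(1)
        # Leave unknown placeholders untouched rather than erroring
        return str(variables.get(name, match.group(0)))
    return VARIABLE_RE.sub(substitute, template)

rendered = render_prompt(
    "Summarize {{doc.title}} for {{audience}}.",
    {"doc.title": "the Q3 report", "audience": "executives"},
)
print(rendered)
```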
Versioning
Every prompt change creates a new immutable version. Previous versions are preserved and accessible.
How versioning works
- The first save creates version 1
- Each subsequent publish increments the version number
- Previous versions are never modified
- By default, the SDK fetches the latest version
Publish a new version
- Open a prompt and click Edit latest
- Make your changes
- Click Publish version
If someone else published a new version while you were editing, you'll see a conflict error. Refresh the page to load the latest version and try again.
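The conflict behavior described above is a form of optimistic concurrency: a publish succeeds only if it was based on the latest version. A simplified in-memory sketch (class and method names are illustrative, not the PostHog API):

```python
class ConflictError(Exception):
    """Raised when another writer published a version in the meantime."""

class Prompt:
    def __init__(self, content: str):
        self.version = 1   # the first save creates version 1
        self.content = content

    def publish(self, new_content: str, based_on_version: int) -> int:
        # Reject the publish if the editor started from a stale version
        if based_on_version != self.version:
            raise ConflictError(
                f"expected version {based_on_version}, latest is {self.version}"
            )
        self.version += 1  # each publish increments the version number
        self.content = new_content
        return self.version

prompt = Prompt("v1 content")
prompt.publish("v2 content", based_on_version=1)  # succeeds, now version 2
try:
    prompt.publish("stale edit", based_on_version=1)  # another writer is at v2
except ConflictError as e:
    print("conflict:", e)
```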
Restore a previous version
- Open a prompt and select a version from the Version history sidebar
- Click Use as latest
- Edit the prompt content if needed
- Click Publish version
This publishes the old content as a new version. The original version remains unchanged.
Compare versions
Compare the content of two prompt versions side-by-side to see what changed. This is available for prompts with two or more versions.
To compare versions:
- Open a prompt and click Compare versions next to the Prompt label
- The diff view shows the current version against the previous version by default
- Use the version dropdown to change the comparison target
You can also click the compare icon on any version in the Version history sidebar to compare it with the currently selected version. The comparison target is highlighted with a Comparing tag in the sidebar.
Unchanged regions are automatically collapsed in the diff view. Click Compare versions again to exit the diff view.
Archive a prompt
Click Archive to remove a prompt from active use. This archives all versions of the prompt. Any code fetching the prompt by name stops resolving it.
Using prompts in code
Prerequisites
- Personal API key (`phx_...`) – Used as Bearer auth for prompt fetches
- Project Token (`phc_...`) – Used as `token` query param so prompt reads are resolved deterministically to the right project
- PostHog SDK – Install the Python or JavaScript SDK with the AI package
When you initialize Prompts with a PostHog client, the SDK uses the client's project token automatically. If you initialize Prompts directly, you must pass both keys.
Use your app host (for example, https://us.posthog.com or https://eu.posthog.com) for prompt reads, not an ingestion host like https://us.i.posthog.com.
Python
JavaScript/TypeScript
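As a rough illustration of the authentication scheme described in the prerequisites, the sketch below only assembles the request pieces: Bearer auth with the personal API key and the project token as a `token` query parameter on the app host. The endpoint path and function name here are placeholders, not the real SDK or API surface; use the SDK for actual prompt fetches.

```python
from urllib.parse import urlencode

def build_prompt_request(app_host: str, path: str,
                         personal_api_key: str, project_token: str):
    """Assemble the URL and headers for a prompt read, per the docs:
    Bearer auth with the personal API key (phx_...) and the project
    token (phc_...) as a `token` query parameter on the app host."""
    query = urlencode({"token": project_token})
    url = f"{app_host.rstrip('/')}{path}?{query}"
    headers = {"Authorization": f"Bearer {personal_api_key}"}
    return url, headers

# The path below is hypothetical; the SDK resolves the real endpoint.
url, headers = build_prompt_request(
    "https://us.posthog.com", "/api/prompts/my-prompt",
    "phx_example", "phc_example",
)
print(url)
print(headers["Authorization"])
```

Note that the host is the app host (`https://us.posthog.com`), not an ingestion host, matching the guidance above.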
Caching
Prompts are cached on the SDK side to minimize latency and API calls:
- Default TTL: 5 minutes (300 seconds)
- Configurable per-request: Override with `cache_ttl_seconds` (Python) or `cacheTtlSeconds` (JS)
- Stale-while-revalidate: If a fetch fails, the cached value is used (even if expired)
- Fallback support: Provide a fallback value that's used when both fetch and cache fail
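The cache behavior above can be sketched as a small TTL cache that serves stale entries when a fetch fails, falling back to a caller-supplied value as a last resort. This is an illustration of the described semantics, not the SDK's internal code:

```python
import time

class PromptCache:
    """TTL cache with stale-while-revalidate and fallback, as described above."""

    def __init__(self, ttl_seconds: float = 300.0):  # default TTL: 5 minutes
        self.ttl = ttl_seconds
        self._entries = {}  # name -> (value, fetched_at)

    def get(self, name, fetch, fallback=None):
        entry = self._entries.get(name)
        if entry and time.monotonic() - entry[1] < self.ttl:
            return entry[0]                      # fresh cache hit
        try:
            value = fetch(name)                  # refresh from the API
            self._entries[name] = (value, time.monotonic())
            return value
        except Exception:
            if entry:
                return entry[0]                  # stale-while-revalidate
            return fallback                      # both fetch and cache failed

cache = PromptCache(ttl_seconds=0)  # TTL of 0 forces a refetch on every call
print(cache.get("greeting", lambda n: "Hello {{name}}!"))

def failing_fetch(name):
    raise RuntimeError("network down")

print(cache.get("greeting", failing_fetch))  # serves the (expired) cached copy
print(cache.get("missing", failing_fetch, fallback="fallback prompt"))
```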
Linking prompts to traces
To track which prompts and versions are used in which generations, include the `$ai_prompt_name` and `$ai_prompt_version` properties when capturing generation events.
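A small sketch of building the event properties that link a generation back to a prompt. The two `$ai_*` keys come from the docs above; the helper function and the extra `model` property are illustrative, and the actual capture call is done through your PostHog SDK client:

```python
def generation_properties(prompt_name: str, prompt_version: int, **extra) -> dict:
    """Build event properties that link a generation to a prompt version.

    `$ai_prompt_name` and `$ai_prompt_version` are the linking properties;
    anything passed via `extra` is just illustrative.
    """
    props = {
        "$ai_prompt_name": prompt_name,
        "$ai_prompt_version": prompt_version,
    }
    props.update(extra)
    return props

props = generation_properties("welcome-email", 3, model="gpt-4o")
print(props["$ai_prompt_name"], props["$ai_prompt_version"])
```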
Once linked, you can:
- Filter generations by prompt name and version
- View related traces from the prompt detail page
- See which specific version was used in each trace
- Analyze which prompt versions perform best
Limits
- Maximum prompt size – 1 MB per prompt
- Maximum versions per prompt – 2,000