Use this workflow when many prompts target one model (for example Claude 3.7) and you want replicas that keep the same template and settings but run on another model (for example Claude 3.5 Sonnet on Bedrock). The List prompts, Retrieve prompt, and Create prompt endpoints drive the migration.
Auth: Use an Admin API key or a Workspace API key with prompt permissions. Send x-portkey-api-key on every request.

When to use this

  • Migrate dozens of prompts to a new default model after a provider or catalog change
  • Keep originals untouched by creating named replicas (for example append -replica)
  • Automate what would otherwise be repeated copy-paste in Prompt Studio

How it works

  1. List all prompts and collect their IDs.
  2. Retrieve each prompt’s full definition (template string, parameters, virtual_key, metadata, etc.).
  3. Create a new prompt per ID with the same body fields and a new model value.
Replace the example model string (anthropic.claude-3-5-sonnet) with the exact model identifier your workspace uses.

Step 1: List prompts and collect IDs

import requests

BASE = "https://api.portkey.ai/v1"
headers = {"x-portkey-api-key": "YOUR_API_KEY"}

r = requests.get(f"{BASE}/prompts", headers=headers)
r.raise_for_status()

prompt_ids = [item["id"] for item in r.json()["data"]]
print(prompt_ids)
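If you rerun the migration, the list will include replicas created by earlier runs. A small filter makes reruns idempotent; this sketch assumes each list item carries a "name" field alongside "id" (log one response to confirm).

```python
# Optional: skip prompts that are already replicas so reruns stay idempotent.
REPLICA_SUFFIX = "-replica"

def ids_to_replicate(items):
    """Return IDs of prompts whose names do not already end in the suffix."""
    return [
        item["id"]
        for item in items
        if not item.get("name", "").endswith(REPLICA_SUFFIX)
    ]

items = [
    {"id": "p1", "name": "support-bot"},
    {"id": "p2", "name": "support-bot-replica"},
]
print(ids_to_replicate(items))  # ['p1']
```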
Python (steps 2–3): run the snippets in order in the same session so BASE, headers, and prompt_ids / prompt_data stay in scope. If you prefer Node.js, native fetch (Node.js 18+) works for the same requests.

Step 2: Fetch one prompt’s full configuration

Fetch a single prompt first to see which fields the API returns before building the create payload (names vary slightly by version; always log one response and adjust keys if needed).
prompt_id = prompt_ids[0]
url = f"{BASE}/prompts/{prompt_id}"

r = requests.get(url, headers=headers)
r.raise_for_status()
prompt_data = r.json()
print(prompt_data)
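Because field names can differ between API versions, an explicit field map is safer than hard-coded keys. This is a sketch, assuming the retrieve response is a flat dict; adjust FIELDS after logging one real response.

```python
# Hypothetical helper: copy known fields with .get() fallbacks, then drop
# anything the retrieve call did not return rather than sending null.
FIELDS = ["collection_id", "string", "parameters", "virtual_key",
          "template_metadata"]

def build_payload(data, target_model, suffix="-replica"):
    payload = {f: data.get(f) for f in FIELDS}
    payload["name"] = data["name"] + suffix
    payload["model"] = target_model
    return {k: v for k, v in payload.items() if v is not None}

sample = {"name": "faq", "string": "Hi {{user}}",
          "parameters": {"temperature": 0.3}}
print(build_payload(sample, "anthropic.claude-3-5-sonnet"))
```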

Step 3: Create a single replicated prompt

The replica reuses template content and metadata, overrides model, and uses a distinct name so it does not collide with the original.
TARGET_MODEL = "anthropic.claude-3-5-sonnet"

payload = {
    "name": prompt_data["name"] + "-replica",
    "collection_id": prompt_data["collection_id"],
    "string": prompt_data["string"],
    "parameters": prompt_data["parameters"],
    "virtual_key": prompt_data["virtual_key"],
    "model": TARGET_MODEL,
    "version_description": prompt_data.get(
        "prompt_version_description", "Replicated prompt"
    ),
    "template_metadata": prompt_data["template_metadata"],
}

r = requests.post(f"{BASE}/prompts", json=payload, headers=headers)
r.raise_for_status()
print(r.json())

Full loop: replicate every prompt

import requests

BASE = "https://api.portkey.ai/v1"
TARGET_MODEL = "anthropic.claude-3-5-sonnet"

headers = {"x-portkey-api-key": "YOUR_API_KEY"}

list_res = requests.get(f"{BASE}/prompts", headers=headers)
list_res.raise_for_status()
prompt_ids = [row["id"] for row in list_res.json()["data"]]

for prompt_id in prompt_ids:
    res = requests.get(f"{BASE}/prompts/{prompt_id}", headers=headers)
    res.raise_for_status()
    data = res.json()

    payload = {
        "name": data["name"] + "-replica",
        "collection_id": data["collection_id"],
        "string": data["string"],
        "parameters": data["parameters"],
        "virtual_key": data["virtual_key"],
        "model": TARGET_MODEL,
        "version_description": data.get(
            "prompt_version_description", "Replicated"
        ),
        "template_metadata": data["template_metadata"],
    }

    r = requests.post(f"{BASE}/prompts", json=payload, headers=headers)
    r.raise_for_status()
    print(r.json())
  • Field names: if retrieve responses use different keys (for example nested version objects), log one response and map fields explicitly.
  • Null collection_id: omit the field, or pass null only if the create API accepts it for your workspace.
  • Rate limits: add backoff or batching for very large prompt libraries.

After replication

  • Point applications at the new prompt IDs or keep names predictable (for example *-replica) and resolve by name if your tooling supports it.
  • For runtime calls, use the Prompt API (/v1/prompts/{promptId}/completions) with the replica’s ID.
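Calling a replica at runtime can be sketched like this; the request body shape ({"variables": ...}) is an assumption, so check the Prompt API reference for your version before relying on it.

```python
import requests

BASE = "https://api.portkey.ai/v1"
headers = {"x-portkey-api-key": "YOUR_API_KEY"}

def completion_url(prompt_id):
    """Build the Prompt completions URL for a given prompt ID."""
    return f"{BASE}/prompts/{prompt_id}/completions"

def run_replica(prompt_id, variables):
    """Render and run a prompt by ID via the Prompt completions endpoint."""
    r = requests.post(
        completion_url(prompt_id),
        json={"variables": variables},  # body shape: see your API reference
        headers=headers,
    )
    r.raise_for_status()
    return r.json()

# Example call (requires a real prompt ID and API key):
# print(run_replica("pp-my-prompt-replica", {"user": "Ada"}))
```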

Summary

  1. GET /v1/prompts → collect IDs
  2. GET /v1/prompts/{id} → read full config
  3. POST /v1/prompts → same body + new model + new name
Bulk replication avoids manual duplication, keeps templates aligned, and makes model upgrades repeatable across the workspace.
Last modified on April 8, 2026