Brand context
How project metadata (colors, fonts, tone, content inventory) shapes the AI motion prompt.
Every generation includes a project-scoped brand context that's injected into the vision prompt. Two different projects pointed at the same source image produce different motion — the brand context tells the model what your product is, who you sell to, and what tone it should hit.
What lives in the brand context
```ts
type ProjectBrand = {
  brandName: string;            // "Acme"
  colors: {
    primary: '#0d9373';
    secondary: '#1f2937';
    accent: '#f59e0b';
    background: '#0a0a0a';
    foreground: '#fafafa';
  };
  fonts: { heading: 'Inter'; body: 'Inter' };
  logoUrl?: string;

  // Populated by deep-crawl:
  contentInventory?: {
    tagline: string;
    description: string;
    features: { title; description; icon? }[];
    pricing: { tier; price; features }[];
    testimonials: { quote; author; role? }[];
    stats: { label; value }[];
    announcements: { title; date?; body }[];
    codeExamples: { language; code; label? }[];
  };
  messagingContext?: {
    tone: string;               // "technical, confident"
    targetAudience: string;     // "platform engineers"
    valueProps: string[];
    ctas: string[];
  };
  designContext?: {
    vibe: string;
    borderRadius: 'sharp' | 'rounded' | 'pill';
    backgroundPattern?: 'dot-grid' | 'line-grid' | 'gradient-only' | 'solid';
    hasGlassmorphism: boolean;
    hasGradientBorders: boolean;
    hasRadialGlows: boolean;
    monoFont?: string;
    sectionLabelStyle?: string;
    colorScheme: 'dark' | 'light';
  };
};
```
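For orientation, here is what a populated config might look like. Every value is illustrative, not taken from a real project, and contentInventory is omitted for brevity:

```ts
// Hypothetical brand config after a deep crawl (illustrative values only).
const acme: ProjectBrand = {
  brandName: 'Acme',
  colors: {
    primary: '#0d9373',
    secondary: '#1f2937',
    accent: '#f59e0b',
    background: '#0a0a0a',
    foreground: '#fafafa',
  },
  fonts: { heading: 'Inter', body: 'Inter' },
  messagingContext: {
    tone: 'technical, confident',
    targetAudience: 'platform engineers',
    valueProps: ['Low-overhead tracing'],
    ctas: ['Start free'],
  },
  designContext: {
    vibe: 'dark, minimal',
    borderRadius: 'sharp',
    backgroundPattern: 'dot-grid',
    hasGlassmorphism: false,
    hasGradientBorders: true,
    hasRadialGlows: true,
    colorScheme: 'dark',
  },
};
```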
How it's injected
The vision model receives the brand context as a system-prompt block before it inspects the source image:
```text
Brand: Acme — fast SaaS observability platform
Tone: technical, confident
Target: platform engineers
Colors: primary #0d9373, accent #f59e0b
Vibe: dark, sharp edges, dot-grid backgrounds, no glassmorphism
```
Then it generates a motion prompt that fits both the source image and the brand. Without brand context, the model defaults to a generic "make this image move tastefully" prompt that ignores your product identity.
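The actual prompt assembly is internal to the service, but a minimal sketch of producing a preamble in that shape from a ProjectBrand could look like this:

```ts
// Minimal sketch, not the actual implementation: format brand fields into
// the preamble shape shown above.
function brandPreamble(b: ProjectBrand): string {
  const description = b.contentInventory?.description;
  const lines = [
    `Brand: ${b.brandName}${description ? ` — ${description}` : ''}`,
    b.messagingContext ? `Tone: ${b.messagingContext.tone}` : '',
    b.messagingContext ? `Target: ${b.messagingContext.targetAudience}` : '',
    `Colors: primary ${b.colors.primary}, accent ${b.colors.accent}`,
    b.designContext ? `Vibe: ${b.designContext.vibe}` : '',
  ];
  return lines.filter(Boolean).join('\n');
}
```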
Three ways to populate it
1. Project → Settings → Brand. Fields for colors, fonts, brand name. Quickest if you already know your tokens.
2. Send a domain to POST /api/projects/:id/detect-brand. Firecrawl scrapes it, extracts colors + fonts + brand name. Lightweight — runs in ~10 seconds, fills in colors/fonts only.
3. Send a domain to POST /api/projects/:id/deep-crawl. Scrapes representative pages, then runs Claude Sonnet 4 over the markdown to extract contentInventory + messagingContext + designContext. Takes ~30-60 seconds; populates everything.
Crawl results are cached in KV for 24h; reruns within that window are free.
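If you are scripting this rather than using the dashboard, a rough sketch of calling the two crawl endpoints follows. The JSON body shape and response handling are assumptions; auth is the same session cookie the other API examples in these docs use:

```ts
// Sketch: trigger brand detection for a project via the documented endpoints.
// The { domain } body shape is an assumption; SESSION_COOKIE is a placeholder
// for whatever authenticated cookie you hold.
const BASE = 'https://motioness.com';

async function detectBrand(projectId: string, domain: string, deep = false) {
  const path = deep ? 'deep-crawl' : 'detect-brand';
  const res = await fetch(`${BASE}/api/projects/${projectId}/${path}`, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json', Cookie: process.env.SESSION_COOKIE ?? '' },
    body: JSON.stringify({ domain }),
  });
  if (!res.ok) throw new Error(`${path} failed: ${res.status}`);
  return res.json(); // fast crawl: colors/fonts; deep crawl: everything
}

// Usage: fast color/font detection, then the full deep crawl.
// await detectBrand('proj_123', 'acme.com');
// await detectBrand('proj_123', 'acme.com', true);
```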
Per-asset overrides
Brand context applies to every generation in the project. To bypass it for a single asset, pass ?prompt=… and your override replaces the brand-aware prompt:
```text
…/hero.png.mp4?prompt=cinematic+slow+pan+left
```
Override prompts are capped at 500 chars. Use them sparingly: you lose the brand-specific guidance.
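If you build override URLs in code, encode the prompt and enforce the cap yourself. A small sketch; whether the cap is measured before or after encoding isn't specified, so this checks the raw text, and the asset URL is hypothetical:

```ts
// Sketch: append a per-asset prompt override to an asset URL.
function withPromptOverride(assetUrl: string, prompt: string): string {
  if (prompt.length > 500) throw new Error('prompt override is capped at 500 chars');
  const u = new URL(assetUrl);
  u.searchParams.set('prompt', prompt); // spaces serialize as '+', as in the example above
  return u.toString();
}

// withPromptOverride('https://motioness.com/acme/hero.png.mp4', 'cinematic slow pan left')
// -> https://motioness.com/acme/hero.png.mp4?prompt=cinematic+slow+pan+left
```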
When brand context is empty
Projects with an empty brand config still generate fine; the prompt is just less opinionated. New projects start with an empty brand_config, and the dashboard nudges you to run brand detection on first sign-in.
Inspecting the prompt
Every asset row stores vision_prompt in D1. To see what the vision model wrote for a given asset:
```bash
curl -b cookie.txt "https://motioness.com/api/assets/$ASSET_ID" | jq .asset.vision_prompt
```
Useful when an output looks off — read the prompt, decide whether to improve the brand context or pass ?prompt= for that one URL.
Updating brand context
Whenever you update the brand on the project, the next generation picks it up. Already-generated assets are content-addressed and won't re-run unless you regenerate them (POST /api/assets/:id/regenerate) or vary idem.
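A sketch of forcing that re-run from a script, under the same session-cookie assumption as above (the response shape isn't documented here):

```ts
// Sketch: regenerate one asset so it picks up the updated brand context.
async function regenerate(assetId: string) {
  const res = await fetch(`https://motioness.com/api/assets/${assetId}/regenerate`, {
    method: 'POST',
    headers: { Cookie: process.env.SESSION_COOKIE ?? '' },
  });
  if (!res.ok) throw new Error(`regenerate failed: ${res.status}`);
  return res.json();
}
```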