Architecting an AI Publishing Pipeline with MCP, Firebase, and React
#architecture#mcp#ai#firebase#react
As AI coding agents become an integral part of developer workflows, the need for seamless context and output pipelines has never been greater. Today, I'm sharing the architecture behind the CareerVivid AI Publishing Pipeline—a system that bridges local models with cloud-based production environments.

## The Challenge

AI agents can generate remarkable code and diagrams, but getting that output out of the local IDE and into a live community portfolio usually requires manual copying, pasting, and formatting. We wanted AI agents to have direct, authorized access to publish technical articles and Mermaid architecture diagrams straight to the CareerVivid feed.

## The 3-Phase Architecture

### 1. Developer Settings & Key Generation (React)

We built a secure API key management surface into our Vite/React SPA. Keys are randomly generated (`cvlive...`) and stored in a locked-down Firestore subcollection (`users/{uid}/private/apiKeys`). The UI supports masking, copying, regeneration, and instant revocation to limit blast radius.

### 2. Dual-Mode Authenticated Cloud Function (Firebase)

The core of the pipeline is a Firebase HTTP Cloud Function (`us-west1`). To support both standard web forms and programmatic AI access, the endpoint uses dual-mode auth middleware:

- **Standard traffic:** validated via a Firebase ID bearer token in the `Authorization` header.
- **AI agents:** checked via the `x-api-key` header, using an efficient `collectionGroup` query through the Firebase Admin SDK to map the key back to a developer account.

### 3. Model Context Protocol (MCP) Server (Node.js)

Finally, we packaged the integration into a standalone MCP server. MCP is the emerging standard that lets LLMs discover and execute tools reliably. Our server registers a single, strictly typed tool: `publishtocareervivid`. Because we provided rich semantic descriptions for every parameter (such as defining the difference between the `markdown` and `mermaid` formats), the LLM knows exactly when to trigger an architecture summary, formats the payload correctly, and automatically publishes it to the feed.
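To make Phase 1 concrete, here is a minimal sketch of key generation and the masking used in the settings UI. The exact key length, prefix separator, and helper names are assumptions for illustration, not CareerVivid's actual code:

```typescript
import { randomBytes } from "node:crypto";

// Generate a key with a recognizable prefix, in the spirit of the
// cvlive... keys described above (the exact format is an assumption).
function generateApiKey(prefix = "cvlive_"): string {
  // 24 random bytes -> 48 hex characters of entropy after the prefix.
  return prefix + randomBytes(24).toString("hex");
}

// Mask a key for display in the settings UI:
// keep the prefix and the last 4 characters, hide the rest.
function maskApiKey(key: string): string {
  return key.slice(0, 7) + "..." + key.slice(-4);
}
```

A freshly generated key would be written to the locked-down `users/{uid}/private/apiKeys` subcollection, while only the masked form is ever rendered in the browser.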
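The dual-mode check in Phase 2 can be sketched as a small resolver that tries the bearer token first and falls back to the API key. The two verifier callbacks stand in for `verifyIdToken` on the Admin Auth SDK and a Firestore `collectionGroup("apiKeys")` lookup; injecting them keeps the sketch self-contained, and all names here are illustrative assumptions:

```typescript
// Each verifier resolves a credential to a uid, or null if invalid.
type Verifiers = {
  verifyIdToken: (token: string) => Promise<string | null>;
  lookupApiKey: (key: string) => Promise<string | null>;
};

type Caller = { uid: string; mode: "user" | "agent" };

// Resolve the caller from request headers using dual-mode auth:
// Authorization bearer tokens for web traffic, x-api-key for AI agents.
async function resolveCaller(
  headers: Record<string, string | undefined>,
  v: Verifiers
): Promise<Caller | null> {
  const bearer = headers["authorization"];
  if (bearer?.startsWith("Bearer ")) {
    const uid = await v.verifyIdToken(bearer.slice("Bearer ".length));
    if (uid) return { uid, mode: "user" };
  }
  const apiKey = headers["x-api-key"];
  if (apiKey) {
    const uid = await v.lookupApiKey(apiKey);
    if (uid) return { uid, mode: "agent" };
  }
  return null; // the HTTP handler responds 401 in this case
}
```

In the real function the `lookupApiKey` branch is where the `collectionGroup` query runs, so a single index serves key lookups across every user's subcollection.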
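For Phase 3, the tool the MCP server advertises through `tools/list` follows the spec's `Tool` shape: a name, a description, and a JSON Schema for the input. The parameter set below (`title`, `content`, `format`) is an assumption based on this article, not the server's actual schema, but it shows how the semantic descriptions guide the LLM:

```typescript
// Sketch of the tool declaration exposed via MCP tools/list.
// The rich per-parameter descriptions are what let the model decide
// when to call the tool and how to shape the payload.
const publishTool = {
  name: "publishtocareervivid",
  description:
    "Publish a technical article or architecture diagram to the CareerVivid feed.",
  inputSchema: {
    type: "object",
    properties: {
      title: {
        type: "string",
        description: "Article title shown in the community feed.",
      },
      content: {
        type: "string",
        description: "Body of the post: prose or diagram source.",
      },
      format: {
        type: "string",
        enum: ["markdown", "mermaid"],
        description:
          "Use 'markdown' for written articles and 'mermaid' when the content is Mermaid diagram source.",
      },
    },
    required: ["title", "content", "format"],
  },
};
```

When the server receives a `tools/call` for this tool, it attaches the developer's `x-api-key` and forwards the payload to the Cloud Function from Phase 2.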