<?xml version="1.0" encoding="utf-8"?>
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:content="http://purl.org/rss/1.0/modules/content/">
    <channel>
        <title>Mukesh Biswas — Blog</title>
        <link>https://Itsmemukesh.github.io/portfolio/blog/</link>
        <description>Technical writing, AI, and documentation strategy.</description>
        <lastBuildDate>Sun, 04 Jan 2026 00:00:00 GMT</lastBuildDate>
        <docs>https://validator.w3.org/feed/docs/rss2.html</docs>
        <generator>https://github.com/jpmonette/feed</generator>
        <language>en</language>
        <copyright>© 2026 Mukesh Biswas</copyright>
        <item>
            <title><![CDATA[I Built an AI Agent That Writes Documentation — Here's What I Learned]]></title>
            <link>https://Itsmemukesh.github.io/portfolio/blog/contentGen-blog-article</link>
            <guid>https://Itsmemukesh.github.io/portfolio/blog/contentGen-blog-article</guid>
            <pubDate>Sun, 04 Jan 2026 00:00:00 GMT</pubDate>
            <description><![CDATA[There's a quiet crisis in technical documentation that nobody puts on a roadmap, nobody files a ticket for, and nobody celebrates solving. Features ship fast. Designs change at 3pm on a Friday. A Jira ticket gets marked Done while the help article describing it doesn't exist yet — or worse, still describes the old behavior. The documentation team is always playing catch-up, always the last to know, always sprinting to close a gap that opens faster than it can be filled.]]></description>
            <content:encoded><![CDATA[<p>There's a quiet crisis in technical documentation that nobody puts on a roadmap, nobody files a ticket for, and nobody celebrates solving. Features ship fast. Designs change at 3pm on a Friday. A Jira ticket gets marked Done while the help article describing it doesn't exist yet — or worse, still describes the old behavior. The documentation team is always playing catch-up, always the last to know, always sprinting to close a gap that opens faster than it can be filled.</p>
<p>I've lived that cycle as a technical writer for years. So I built something to fix it.</p>
<p>This is the story of <strong>contentGen</strong> — a custom AI agent I built inside VS Code that reads a Jira ticket, studies the Figma design, cross-references Confluence, checks the existing documentation codebase, and then writes a finished, convention-compliant help topic. What used to consume the better part of a working day now takes under 30 minutes. This article is about how it works, what it took to build it, and why I think VS Code agents are one of the most underestimated tools in the industry right now.</p>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="1-the-problem-nobody-talks-about-in-tech-docs">1. The Problem Nobody Talks About in Tech Docs<a href="https://itsmemukesh.github.io/portfolio/blog/contentGen-blog-article#1-the-problem-nobody-talks-about-in-tech-docs" class="hash-link" aria-label="Direct link to 1. The Problem Nobody Talks About in Tech Docs" title="Direct link to 1. The Problem Nobody Talks About in Tech Docs" translate="no">​</a></h2>
<p>Let's paint the picture honestly.</p>
<p>When a new feature ships, a technical writer typically needs to:</p>
<ul>
<li class="">Find and read the Jira ticket (and its child tickets, and its linked tickets)</li>
<li class="">Track down the Confluence page where someone wrote a discovery brief three sprints ago</li>
<li class="">Open the Figma design file and navigate to the right screens, which may have been reorganized since the last time you looked</li>
<li class="">Cross-check whether existing documentation already covers part of this feature</li>
<li class="">Understand the naming conventions, the product terminology, the UI labels that are actually shipping</li>
<li class="">Figure out which folder in a documentation codebase this content belongs in</li>
<li class="">Write a first draft that matches the established writing style, structure, and formatting conventions</li>
<li class="">Review it against style guides, snippet libraries, and localization constraints</li>
</ul>
<p>That's not a writing task. That's a research project that happens to end in writing. And the research phase alone — before a single word of documentation is typed — routinely takes <strong>five to six hours</strong> for a single medium-complexity feature.</p>
<p>Now multiply that across a fast-moving product team with multiple concurrent releases, and you start to understand why documentation debt compounds so quickly.</p>
<p>The bottleneck isn't skill. It's context-gathering. And context-gathering is exactly what AI is good at.</p>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="2-what-is-a-vs-code-agent--and-why-most-people-underestimate-them">2. What Is a VS Code Agent — And Why Most People Underestimate Them<a href="https://itsmemukesh.github.io/portfolio/blog/contentGen-blog-article#2-what-is-a-vs-code-agent--and-why-most-people-underestimate-them" class="hash-link" aria-label="Direct link to 2. What Is a VS Code Agent — And Why Most People Underestimate Them" title="Direct link to 2. What Is a VS Code Agent — And Why Most People Underestimate Them" translate="no">​</a></h2>
<p>Most people who've used GitHub Copilot think of it as an autocomplete engine that got smarter. A helpful pair programmer. A fast way to generate boilerplate. That's a reasonable impression if you've only used the chat panel.</p>
<p>But VS Code Copilot agents are something different in kind, not just degree.</p>
<p>An agent in VS Code isn't just a chatbot with context about your file. It's a <strong>configurable, tool-wielding, goal-oriented workflow runner</strong>. You define it in a plain Markdown file with a YAML header. You give it a description, a list of tools it's allowed to use, a set of skills it can load, behavioral guardrails, a decision workflow, and a clear understanding of what it's supposed to do — and what it must never do.</p>
<p>When invoked, the agent can:</p>
<ul>
<li class="">Read and write files in your repository</li>
<li class="">Execute search queries across your codebase</li>
<li class="">Call external APIs and MCP (Model Context Protocol) servers — including Jira, Confluence, Figma, and GitHub</li>
<li class="">Ask clarifying questions using structured question cards</li>
<li class="">Load modular "skill" definitions to apply specialized domain knowledge</li>
<li class="">Hold itself to a strict confirmation gate before making changes</li>
</ul>
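<p>To make that concrete, here is the rough shape of such a definition. This is an illustrative skeleton, not contentGen's actual file: the frontmatter fields follow the general custom-agent convention (YAML header plus Markdown instructions), and the tool names are placeholders:</p>

```markdown
---
description: Drafts convention-compliant help topics from a Jira ticket.
tools: ['search', 'editFiles', 'fetch', 'atlassian', 'figma', 'github']
---

# contentGen

You are a documentation research-and-drafting agent.

## Boundaries
- Never write outside the en-us source folders.
- Never create or modify files without explicit user confirmation.

## Workflow
1. Interview the user (load skill: determine-content-scope).
2. Research Jira, Figma, Confluence, and the existing repo.
3. Propose a content plan and wait for approval.
4. Write, then validate against repo conventions.
```

<p>Everything interesting lives in the Markdown body: the workflow, the guardrails, the decision logic. The YAML header is just the registration.</p>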
<p>The key insight that most developers and writers miss is this: <strong>VS Code agents aren't assistants that respond to prompts. They are autonomous actors that execute workflows.</strong> The difference matters enormously once you start building with them.</p>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="3-meet-contentgen-my-documentation-agent">3. Meet contentGen: My Documentation Agent<a href="https://itsmemukesh.github.io/portfolio/blog/contentGen-blog-article#3-meet-contentgen-my-documentation-agent" class="hash-link" aria-label="Direct link to 3. Meet contentGen: My Documentation Agent" title="Direct link to 3. Meet contentGen: My Documentation Agent" translate="no">​</a></h2>
<p><strong>contentGen</strong> is a custom VS Code agent I designed and built from scratch. The elevator pitch is simple:</p>
<blockquote>
<p><em>Give it a Jira ticket number. It reads the design, checks the documentation codebase, and produces a finished, ready-to-review help topic in the correct file, in the correct folder, following the correct conventions.</em></p>
</blockquote>
<p>But the real value isn't the output — it's what happens between the input and the output. contentGen doesn't generate documentation by guessing. It researches first, synthesizes what it finds, and only writes once it understands the feature as well as a writer who has spent half a day on it.</p>
<p>It sits inside the documentation repository as a single <code>.agent.md</code> configuration file. When a writer — or a PM, or an engineer who wants to contribute — invokes it from VS Code, it takes over the research and context-gathering phase entirely, leaving the writer to review and refine rather than discover and draft.</p>
<p>The agent is purpose-built for a MadCap Flare documentation environment, integrated with Atlassian's Jira and Confluence, connected to Figma, and wired into the GitHub repository. It knows the project's folder structure, writing conventions, snippet libraries, template patterns, and localization constraints. It doesn't just write documentation. It writes <em>this team's</em> documentation.</p>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="4-how-it-actually-works-the-research-then-write-pipeline">4. How It Actually Works: The Research-Then-Write Pipeline<a href="https://itsmemukesh.github.io/portfolio/blog/contentGen-blog-article#4-how-it-actually-works-the-research-then-write-pipeline" class="hash-link" aria-label="Direct link to 4. How It Actually Works: The Research-Then-Write Pipeline" title="Direct link to 4. How It Actually Works: The Research-Then-Write Pipeline" translate="no">​</a></h2>
<p>The core design principle of contentGen is <strong>research before writing</strong>. The agent will not touch a source file until it has built a complete picture of what it's creating.</p>
<p>Here's the pipeline in plain English:</p>
<p><strong>Step 1 — Understand the request.</strong> The agent begins by asking the writer a structured set of questions using an interactive question card: Is this a new topic, a revision, or release notes? Which app or product does it belong to? Is there a Jira ticket, a Figma link, or a known file path? It collects only what it needs and recovers missing context through research when possible.</p>
<p><strong>Step 2 — Jira first.</strong> The agent pulls the ticket, reads the description, acceptance criteria, and linked sub-tasks. It follows links to related engineering tickets or discovery briefs to understand not just <em>what</em> the feature does, but <em>why it exists</em> and <em>who it affects</em>.</p>
<p><strong>Step 3 — Figma as the source of truth for UI.</strong> The agent opens the linked Figma design and reads it — literally capturing the actual UI labels, button names, field titles, workflow states, and task sequences from the designed interface. This is critical. Documentation that's written from a ticket description and not the design frequently uses wrong labels, wrong names, and wrong sequences. Figma eliminates that.</p>
<p><strong>Step 4 — Confluence for context.</strong> Any Confluence pages linked to the ticket — discovery briefs, rollout notes, business rules, permissions documentation — are read and synthesized. Edge cases, caveats, and non-obvious behaviors that never make it into a ticket description often live here.</p>
<p><strong>Step 5 — Repo search for existing coverage.</strong> Before creating anything new, the agent searches the documentation codebase for existing topics, snippets, variables, and templates that relate to the feature. It checks whether the content already exists, whether it needs extending, and whether there are established patterns from similar features it should follow.</p>
<p><strong>Step 6 — Propose before writing.</strong> The agent presents a high-level summary of its findings: the proposed file path, the sections it plans to create, the template it will use, any variables or snippets it plans to reference. The writer reviews this plan and confirms before a single source file is touched. This gate is non-negotiable — the agent cannot proceed without approval.</p>
<p><strong>Step 7 — Write with conventions loaded.</strong> Only after confirmation does the agent write the source file. As it does, it loads its specialized skills to apply the correct HTML structure, UI formatting rules, and localization constraints. The output lands in the right <code>en-us</code> folder, in the right format, ready for review.</p>
<p><strong>Step 8 — Validate.</strong> The agent checks its output against repo conventions and obvious structural issues before reporting completion.</p>
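<p>To make the Step 6 gate concrete, here is the kind of plan summary the agent presents before touching anything. This is a hypothetical example; the ticket, paths, and names are invented:</p>

```markdown
## Proposed plan (please confirm)

- Task type:   New topic
- Ticket:      PROJ-1234 (Bulk export for audit logs)
- Target file: Content/en-us/admin/audit-logs-export.htm
- Template:    task-topic template
- Sections:    Overview / Before you begin / Steps / Result
- Reuse:       snippet "ExportPermissionsNote"; variable "ProductName"

Reply "confirm" to write, or correct anything above.
```

<p>A plan in this shape takes seconds to review and catches the expensive mistakes, like the wrong folder or a duplicate topic, before any file exists.</p>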
<p>The total time for this pipeline, from invocation to a ready-to-review draft: <strong>under 30 minutes.</strong> The equivalent manual process averaged <strong>five to six hours.</strong></p>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="5-the-skills-architecture-modular-intelligence">5. The Skills Architecture: Modular Intelligence<a href="https://itsmemukesh.github.io/portfolio/blog/contentGen-blog-article#5-the-skills-architecture-modular-intelligence" class="hash-link" aria-label="Direct link to 5. The Skills Architecture: Modular Intelligence" title="Direct link to 5. The Skills Architecture: Modular Intelligence" translate="no">​</a></h2>
<p>One of the most important design decisions I made while building contentGen was separating its domain knowledge into modular "skill" files rather than baking everything into the agent definition.</p>
<p>A skill in this system is a standalone Markdown file — a <code>SKILL.md</code> — that contains deep, tested, task-specific instructions. The agent loads skills dynamically using the <code>read_file</code> tool at the moment they're needed. It doesn't carry all knowledge all the time. It reaches for the right expertise at the right step.</p>
<p>contentGen uses five skills:</p>
<ul>
<li class=""><strong><code>determine-content-scope</code></strong> — Helps the agent interview the user effectively to understand the task type, delivery scope, and app location without over-asking or under-asking.</li>
<li class=""><strong><code>interactive-question-ux</code></strong> — Governs how the agent asks questions: structured cards, clear options, no unnecessary prompts.</li>
<li class=""><strong><code>apply-flare-conventions</code></strong> — The deep rulebook for MadCap Flare file structure, topic anatomy, TOC integration, and snippet usage.</li>
<li class=""><strong><code>format-ui-elements</code></strong> — Handles correct formatting of buttons, menus, fields, paths, and UI labels in the HTML source.</li>
<li class=""><strong><code>multilingual-guardrails</code></strong> — Ensures the agent only writes in the correct locale folders and never accidentally touches translated content.</li>
</ul>
<p>This modular approach does something subtle but powerful: it makes the agent <strong>maintainable</strong>. If the style guide changes, you update one skill file. If the localization rules get more complex, you update one skill file. The agent itself stays stable. The knowledge it applies evolves independently.</p>
<p>It also makes the system <strong>extensible</strong>. Want to add support for a new documentation product? Create a new skill file that describes its conventions. The agent picks it up automatically.</p>
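<p>As an illustration of what a skill file looks like in practice, here is a sketch in the spirit of <code>format-ui-elements</code>. The specific rules below are invented for the example, not the team's real conventions:</p>

```markdown
# SKILL: format-ui-elements

Load this skill whenever writing UI references in topic source.

## Rules
- Bold the exact label captured from Figma: click **Save changes**, never "save".
- Menu paths use the > separator: **Settings > Users > Roles**.
- Field names are bold; values the user types are monospace.

## Anti-patterns
- Never paraphrase a label ("the save button") when the design says "Save changes".
```

<p>Note how little of this is abstract guidance. Each rule is checkable against the output, which is what makes the skill behave consistently across invocations.</p>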
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="6-built-in-guardrails-this-agent-asks-before-it-acts">6. Built-In Guardrails: This Agent Asks Before It Acts<a href="https://itsmemukesh.github.io/portfolio/blog/contentGen-blog-article#6-built-in-guardrails-this-agent-asks-before-it-acts" class="hash-link" aria-label="Direct link to 6. Built-In Guardrails: This Agent Asks Before It Acts" title="Direct link to 6. Built-In Guardrails: This Agent Asks Before It Acts" translate="no">​</a></h2>
<p>I want to spend a moment on something that I think separates a well-engineered AI agent from an impressive demo: <strong>intentional restraint</strong>.</p>
<p>contentGen has explicit boundaries encoded into its definition. These aren't afterthoughts — they are first-class design requirements.</p>
<p><strong>It will not invent product behavior.</strong> If Jira, Figma, Confluence, and the existing codebase don't contain enough information to accurately describe a feature, the agent stops and asks. It will not fill gaps with plausible-sounding fabrications, because in documentation, a plausible-sounding wrong answer creates real user confusion.</p>
<p><strong>It will not write without confirmation.</strong> The proposal-then-confirm gate before writing source files isn't optional. Every invocation requires a human to review the plan and approve it. This keeps the writer in the loop without requiring them to do the research.</p>
<p><strong>It will not wander out of scope.</strong> The agent changes only the files required by the user's request. If it notices something unrelated that looks inconsistent or outdated in a nearby file, it does not fix it. Scope creep in an automated writing agent produces hard-to-review diffs and unpredictable results.</p>
<p><strong>It will not touch non-English content.</strong> The multilingual guardrail enforces a hard boundary: contentGen works exclusively in the <code>en-us</code> source folders. Translated content is produced through a separate localization pipeline and must never be overwritten by an agent operating on English source.</p>
<p><strong>It stops and asks when uncertain.</strong> If the correct app folder can't be verified from context, it asks. If a variable name is ambiguous, it asks. If the ticket doesn't have enough detail to determine which existing topic should be revised, it asks. A concise, targeted question is always preferable to a confident wrong answer.</p>
<p>These guardrails are what make the agent trustworthy enough to run on a production documentation codebase. Impressive output from an AI that can't be trusted is a liability, not an asset.</p>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="7-what-it-took-to-build-it-design-decisions-and-non-obvious-challenges">7. What It Took to Build It: Design Decisions and Non-Obvious Challenges<a href="https://itsmemukesh.github.io/portfolio/blog/contentGen-blog-article#7-what-it-took-to-build-it-design-decisions-and-non-obvious-challenges" class="hash-link" aria-label="Direct link to 7. What It Took to Build It: Design Decisions and Non-Obvious Challenges" title="Direct link to 7. What It Took to Build It: Design Decisions and Non-Obvious Challenges" translate="no">​</a></h2>
<p>Building contentGen from scratch taught me things about AI agents that I didn't find in any tutorial.</p>
<p><strong>The <code>.agent.md</code> format is deceptively powerful.</strong> The file format is simple YAML frontmatter plus Markdown. But the depth of behavioral instruction you can encode in that Markdown — the workflow steps, the decision logic, the tool usage patterns, the explicit boundaries — is substantial. The skill is not in learning the syntax. It's in knowing what to write.</p>
<p><strong>MCP server integration is where the real power lives.</strong> Connecting the agent to Atlassian's MCP server, Figma's MCP server, and GitHub's MCP server transformed it from a smart file editor into a genuine research agent. These integrations let it reach out to the actual systems of record — not cached summaries, not hallucinated approximations — and pull real, current information. Getting these configured correctly required understanding how MCP servers expose their tools and how to reference them in the agent's tool list.</p>
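<p>For readers who want to wire up something similar: in recent versions of VS Code, MCP servers are typically declared in a <code>.vscode/mcp.json</code> file. The entries below are illustrative only; the exact server names, commands, and endpoints depend on what each vendor currently ships:</p>

```json
{
  "servers": {
    "github": {
      "type": "stdio",
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-github"]
    },
    "atlassian": {
      "type": "sse",
      "url": "https://mcp.atlassian.com/v1/sse"
    }
  }
}
```

<p>Once a server is registered, its tools show up by name and can be granted to an agent through the tool list in its definition file.</p>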
<p><strong>Telling the agent where things live is the hardest part.</strong> The most brittle aspect of any documentation agent is the folder-mapping problem: given a feature description, determining which of a dozen possible <code>en-us</code> product folders the content belongs in. I solved this partly through the <code>determine-content-scope</code> skill, partly through the repo instructions that map product names to folder paths, and partly by instructing the agent to ask when the evidence is ambiguous rather than guess. Getting that triangulation right took more iteration than any other aspect of the build.</p>
<p><strong>Skill files need to be tested like code.</strong> Early versions of the skill definitions produced inconsistent results because they were too abstract. The skill files that work well are the ones that contain concrete, specific, tested instructions — the same level of precision you'd want in a unit test. Vague guidance produces vague output.</p>
<p><strong>The confirmation gate changed how I thought about the agent.</strong> Originally, I built the confirmation step as a safety feature. Over time I came to see it differently: it's a communication protocol between the agent and the writer. The proposal summary the agent produces before writing is itself valuable — it forces the agent to articulate its reasoning and gives the writer a fast way to spot misunderstandings before they become wrong files.</p>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="8-real-world-impact-before-and-after">8. Real-World Impact: Before and After<a href="https://itsmemukesh.github.io/portfolio/blog/contentGen-blog-article#8-real-world-impact-before-and-after" class="hash-link" aria-label="Direct link to 8. Real-World Impact: Before and After" title="Direct link to 8. Real-World Impact: Before and After" translate="no">​</a></h2>
<p>Let me give you the honest comparison.</p>
<p><strong>Before contentGen,</strong> the research phase of a documentation task looked like this: open Jira and spend 20-30 minutes reading the ticket and all its linked context. Search for the Confluence page (which may or may not be linked from the ticket). Open Figma and navigate to the right frame, re-learning the design tooling every time it gets updated. Search the documentation codebase for related topics. Find the right template. Read the style guide to confirm the right formatting for the feature type. All of that before writing the first sentence. <strong>Minimum time: five to six hours for a moderately complex feature.</strong> Sometimes a full day.</p>
<p><strong>After contentGen,</strong> the writer provides the Jira ticket number and answers a four-question card. The agent reads everything, synthesizes it, and presents a complete content plan in one step. The writer reviews and approves it. The agent writes the draft. <strong>Total elapsed time: under 30 minutes.</strong></p>
<p>The quality difference matters too. Manually researched documentation is only as good as what the writer happened to find and prioritize. contentGen reads all of it — every linked ticket, every Confluence note, every Figma annotation — and incorporates it. The drafts it produces are more complete on first pass precisely because the research is more thorough than any human would do under deadline pressure.</p>
<p>This isn't about replacing writers. The writer's review and refinement step is still essential, and the agent is built to require it. What's been eliminated is the grinding, low-creativity, high-effort research that precedes real writing — the part that consumes time without producing insight.</p>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="9-what-this-tells-us-about-the-future-of-docs--and-ai-agents">9. What This Tells Us About the Future of Docs — and AI Agents<a href="https://itsmemukesh.github.io/portfolio/blog/contentGen-blog-article#9-what-this-tells-us-about-the-future-of-docs--and-ai-agents" class="hash-link" aria-label="Direct link to 9. What This Tells Us About the Future of Docs — and AI Agents" title="Direct link to 9. What This Tells Us About the Future of Docs — and AI Agents" translate="no">​</a></h2>
<p>I built contentGen to solve a problem I had today. But what it implies about the near future of technical communication is worth sitting with.</p>
<p>The documentation workflow I described in section one — research, synthesize, locate, draft, format, validate — is fundamentally a workflow that requires navigating multiple systems, applying many different rule sets, and producing a structured output in a constrained format. That is exactly the class of problem that well-designed AI agents handle well.</p>
<p>What we're beginning to see is a bifurcation in how AI gets used in knowledge work. On one side, there's AI-assisted work: a writer uses AI to go faster on tasks they're already doing. On the other side, there's AI-delegated work: a writer defines a workflow precisely enough that an agent can execute it end-to-end, and the writer's role becomes oversight, review, and direction.</p>
<p>contentGen lives on the second side of that line. And the gap between the two is not technical capability — it's workflow design. The teams that will lead in AI-assisted documentation aren't the ones with the best prompts. They're the ones who invest in encoding their workflows precisely enough that agents can run them reliably.</p>
<p>VS Code agents — and the MCP protocol that wires them to real data systems — represent a platform that most of the industry hasn't fully absorbed yet. The tools are here. The integration points are here. The gap is in knowing how to design for them.</p>
<p>If you're a technical writer, a docs engineer, or a PM who owns documentation quality and you're not experimenting with agents built on this platform, this is the moment to start.</p>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="10-whats-next-for-contentgen">10. What's Next for contentGen<a href="https://itsmemukesh.github.io/portfolio/blog/contentGen-blog-article#10-whats-next-for-contentgen" class="hash-link" aria-label="Direct link to 10. What's Next for contentGen" title="Direct link to 10. What's Next for contentGen" translate="no">​</a></h2>
<p>contentGen is working in production, but the backlog of ideas for what it could do next is longer than when I started.</p>
<p>At the top of the list: <strong>release notes automation.</strong> The agent already supports drafting release notes as a task type, but I want to develop a dedicated workflow that can produce a complete, multi-feature release note document from a sprint's worth of Jira tickets in a single invocation.</p>
<p>I'm also exploring <strong>revision detection</strong> — giving the agent the ability to monitor a watched set of Jira tickets and proactively flag when documentation it previously generated may be outdated based on changes to the linked design or ticket status.</p>
<p>Longer term, I want to extend the skills architecture to support <strong>multiple documentation products</strong> more dynamically, allowing a single agent invocation to coordinate changes across related help systems when a shared feature spans multiple products.</p>
<p>There's also the question that every tool builder eventually confronts: <strong>should I share it?</strong> The core agent pattern is generic enough to be useful outside this specific codebase. I'm thinking through what a generalized version of contentGen would look like — one that could be seeded with a team's own style guide, folder conventions, and tool connections.</p>
<p>If you're working on something similar, or thinking about how to bring this kind of workflow automation to your documentation team — I'd genuinely like to compare notes. Find me on <a href="https://www.linkedin.com/in/mukesh-biswas-tech-writer/" target="_blank" rel="noopener noreferrer" class="">LinkedIn</a>.</p>]]></content:encoded>
            <category>ai-agents</category>
            <category>vs-code</category>
            <category>technical-writing</category>
            <category>documentation</category>
            <category>mcp</category>
            <category>productivity</category>
            <category>github-copilot</category>
        </item>
        <item>
            <title><![CDATA[Cloud Documentation Best Practices for SaaS Teams]]></title>
            <link>https://Itsmemukesh.github.io/portfolio/blog/cloud-documentation-best-practices</link>
            <guid>https://Itsmemukesh.github.io/portfolio/blog/cloud-documentation-best-practices</guid>
            <pubDate>Wed, 15 Jan 2025 00:00:00 GMT</pubDate>
            <description><![CDATA[Cloud documentation is a different beast from on-premise docs. Products change weekly, infrastructure is invisible, and your readers range from infrastructure engineers to non-technical executives. Here's what I've learned writing docs for cloud SaaS products at scale.]]></description>
            <content:encoded><![CDATA[<p>Cloud documentation is a different beast from on-premise docs. Products change weekly, infrastructure is invisible, and your readers range from infrastructure engineers to non-technical executives. Here's what I've learned writing docs for cloud SaaS products at scale.</p>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="why-cloud-docs-fail">Why Cloud Docs Fail<a href="https://itsmemukesh.github.io/portfolio/blog/cloud-documentation-best-practices#why-cloud-docs-fail" class="hash-link" aria-label="Direct link to Why Cloud Docs Fail" title="Direct link to Why Cloud Docs Fail" translate="no">​</a></h2>
<p>Most cloud documentation failures come from the same three root causes:</p>
<ol>
<li class=""><strong>Docs are written once, never updated.</strong> Cloud products ship constantly. Docs tied to a single release cycle fall out of date in weeks.</li>
<li class=""><strong>Architecture lives only in engineers' heads.</strong> Concepts like tenancy, region routing, or identity federation are never properly explained because everyone on the team assumes the reader already knows.</li>
<li class=""><strong>No clear audience separation.</strong> A single doc trying to serve a DevOps engineer, a security administrator, and an end user will serve none of them well.</li>
</ol>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="the-docs-as-code-foundation">The Docs-as-Code Foundation<a href="https://itsmemukesh.github.io/portfolio/blog/cloud-documentation-best-practices#the-docs-as-code-foundation" class="hash-link" aria-label="Direct link to The Docs-as-Code Foundation" title="Direct link to The Docs-as-Code Foundation" translate="no">​</a></h2>
<p>The single highest-ROI investment for any cloud documentation team is treating docs like code. That means:</p>
<ul>
<li class=""><strong>Version control everything.</strong> Every doc change is a commit, reviewable and reversible.</li>
<li class=""><strong>Docs live next to the product.</strong> When an engineer merges a feature, the doc update is part of the same PR.</li>
<li class=""><strong>CI/CD validates your docs.</strong> Broken links, missing screenshots, and outdated API parameters get caught before they reach users.</li>
</ul>
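<p>A minimal version of that third point might look like the following GitHub Actions workflow. This is a sketch, not a drop-in config: it assumes a Markdown docs folder and the lychee link checker, neither of which is universal:</p>

```yaml
# .github/workflows/docs-ci.yml - illustrative docs validation
name: docs-ci
on:
  pull_request:
    paths: ['docs/**']
jobs:
  validate:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Fail the PR if any doc contains a broken link
      - uses: lycheeverse/lychee-action@v2
        with:
          args: --no-progress docs/
          fail: true
```

<p>The point isn't the specific tool. It's that the check runs on every pull request, so a broken link blocks the merge instead of reaching a user.</p>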
<p>At Diligent, moving to a docs-as-code workflow reduced our time-to-publish from days to hours and cut stale-doc reports by over 60%.</p>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="structure-for-cloud-concepts">Structure for Cloud Concepts<a href="https://itsmemukesh.github.io/portfolio/blog/cloud-documentation-best-practices#structure-for-cloud-concepts" class="hash-link" aria-label="Direct link to Structure for Cloud Concepts" title="Direct link to Structure for Cloud Concepts" translate="no">​</a></h2>
<p>Cloud products have unique structural needs. Here's a framework that works:</p>
<h3 class="anchor anchorTargetStickyNavbar_Vzrq" id="conceptual-foundation-first">Conceptual Foundation First<a href="https://itsmemukesh.github.io/portfolio/blog/cloud-documentation-best-practices#conceptual-foundation-first" class="hash-link" aria-label="Direct link to Conceptual Foundation First" title="Direct link to Conceptual Foundation First" translate="no">​</a></h3>
<p>Before any how-to, ensure your readers understand:</p>
<ul>
<li class="">What the product <strong>is</strong> (one sentence, no jargon)</li>
<li class="">What problem it <strong>solves</strong> (use a real customer scenario)</li>
<li class="">How it fits into their <strong>existing stack</strong> (a simple architecture diagram does more than three paragraphs)</li>
</ul>
<h3 class="anchor anchorTargetStickyNavbar_Vzrq" id="task-oriented-procedures">Task-Oriented Procedures<a href="https://itsmemukesh.github.io/portfolio/blog/cloud-documentation-best-practices#task-oriented-procedures" class="hash-link" aria-label="Direct link to Task-Oriented Procedures" title="Direct link to Task-Oriented Procedures" translate="no">​</a></h3>
<p>Users come to docs to accomplish something. Structure every procedure around a job-to-be-done:</p>
<div class="language-text codeBlockContainer_Ckt0 theme-code-block" style="--prism-color:#F8F8F2;--prism-background-color:#282A36"><div class="codeBlockContent_QJqH"><pre tabindex="0" class="prism-code language-text codeBlock_bY9V thin-scrollbar" style="color:#F8F8F2;background-color:#282A36"><code class="codeBlockLines_e6Vv"><span class="token-line" style="color:#F8F8F2"><span class="token plain">Before you begin</span><br></span><span class="token-line" style="color:#F8F8F2"><span class="token plain">  Prerequisites (access, roles, dependencies)</span><br></span><span class="token-line" style="color:#F8F8F2"><span class="token plain" style="display:inline-block"></span><br></span><span class="token-line" style="color:#F8F8F2"><span class="token plain">Steps</span><br></span><span class="token-line" style="color:#F8F8F2"><span class="token plain">  1. ...</span><br></span><span class="token-line" style="color:#F8F8F2"><span class="token plain">  2. ...</span><br></span><span class="token-line" style="color:#F8F8F2"><span class="token plain" style="display:inline-block"></span><br></span><span class="token-line" style="color:#F8F8F2"><span class="token plain">Result / What happens next</span><br></span></code></pre></div></div>
<h3 class="anchor anchorTargetStickyNavbar_Vzrq" id="reference-as-a-separate-layer">Reference as a Separate Layer<a href="https://itsmemukesh.github.io/portfolio/blog/cloud-documentation-best-practices#reference-as-a-separate-layer" class="hash-link" aria-label="Direct link to Reference as a Separate Layer" title="Direct link to Reference as a Separate Layer" translate="no">​</a></h3>
<p>API references, configuration parameters, and error codes belong in their own section. Don't bury them inside procedures — engineers will find them via search, not by reading linearly.</p>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="ai-augmented-documentation-workflows">AI-Augmented Documentation Workflows<a href="https://itsmemukesh.github.io/portfolio/blog/cloud-documentation-best-practices#ai-augmented-documentation-workflows" class="hash-link" aria-label="Direct link to AI-Augmented Documentation Workflows" title="Direct link to AI-Augmented Documentation Workflows" translate="no">​</a></h2>
<p>In 2025, the teams winning at cloud documentation are using AI as a force multiplier, not a replacement. My current workflow:</p>
<ol>
<li class=""><strong>AI drafts the skeleton</strong> from an OpenAPI spec or Jira ticket</li>
<li class=""><strong>I add judgment</strong> — use cases, caveats, the "why behind the what"</li>
<li class=""><strong>SME review</strong> catches technical inaccuracies</li>
<li class=""><strong>AI checks</strong> for consistency, terminology drift, and broken links post-review</li>
</ol>
<p>The key insight: AI is excellent at structure and completeness; humans are essential for accuracy and context.</p>
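<p>Step 4 can begin as something far simpler than a model call: a deterministic glossary check catches most terminology drift before AI ever sees the draft. A minimal sketch, where the glossary entries are invented examples:</p>

```python
# Preferred term -> discouraged variants. These entries are made up for the sketch.
GLOSSARY = {
    "workspace": ["project space", "work area"],
    "sign in": ["log in", "login"],
}

def find_terminology_drift(text):
    """Return (variant, preferred) pairs for each discouraged variant found.
    Naive substring matching; real checks would respect word boundaries."""
    hits = []
    lowered = text.lower()
    for preferred, variants in GLOSSARY.items():
        for variant in variants:
            if variant in lowered:
                hits.append((variant, preferred))
    return hits
```

<p>Drift flagged this way is cheap to fix at review time and expensive to fix after three articles have shipped with three different names for the same feature.</p>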
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="metrics-that-actually-matter">Metrics That Actually Matter<a href="https://itsmemukesh.github.io/portfolio/blog/cloud-documentation-best-practices#metrics-that-actually-matter" class="hash-link" aria-label="Direct link to Metrics That Actually Matter" title="Direct link to Metrics That Actually Matter" translate="no">​</a></h2>
<p>Stop measuring documentation by page count. Measure:</p>
<ul>
<li class=""><strong>Support ticket deflection rate</strong> — are users self-serving?</li>
<li class=""><strong>Time-on-page for critical procedures</strong> — are they reading, or bouncing?</li>
<li class=""><strong>Search-exit rate</strong> — are users finding what they searched for?</li>
<li class=""><strong>API onboarding time</strong> — how long until a developer's first successful API call?</li>
</ul>
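<p>Two of these metrics reduce to simple ratios over event counts. A sketch, with invented event names standing in for whatever your analytics tool actually exports:</p>

```python
def doc_metrics(events):
    """Compute two outcome metrics from raw event counts.
    The keys in `events` are hypothetical names, not a real analytics schema."""
    searches = events["searches"]
    exits = events["search_exits"]          # left the docs straight from results
    sessions = events["doc_sessions"]
    tickets = events["tickets_after_docs"]  # tickets opened after reading a doc
    return {
        "search_exit_rate": exits / searches if searches else 0.0,
        "ticket_deflection_rate": 1 - tickets / sessions if sessions else 0.0,
    }
```

<p>The point is not the arithmetic; it's that both numbers measure reader outcomes, which page count never does.</p>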
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="closing-thoughts">Closing Thoughts<a href="https://itsmemukesh.github.io/portfolio/blog/cloud-documentation-best-practices#closing-thoughts" class="hash-link" aria-label="Direct link to Closing Thoughts" title="Direct link to Closing Thoughts" translate="no">​</a></h2>
<p>The best cloud documentation is invisible — users find what they need, do what they came to do, and move on. That only happens when docs are treated as a product, not an afterthought.</p>
<p>Start with docs-as-code, build a strong conceptual layer, use AI to scale, and measure outcomes over output.</p>
<p>Questions or war stories? Find me on <a href="https://www.linkedin.com/in/mukesh-biswas-tech-writer/" target="_blank" rel="noopener noreferrer" class="">LinkedIn</a>.</p>]]></content:encoded>
            <category>cloud</category>
            <category>documentation</category>
            <category>saas</category>
            <category>technical-writing</category>
        </item>
        <item>
            <title><![CDATA[API Documentation Strategies That Developers Actually Use]]></title>
            <link>https://Itsmemukesh.github.io/portfolio/blog/api-documentation-strategies</link>
            <guid>https://Itsmemukesh.github.io/portfolio/blog/api-documentation-strategies</guid>
            <pubDate>Wed, 08 Jan 2025 00:00:00 GMT</pubDate>
            <description><![CDATA[A beautifully written API reference that no developer reads is a failure. Here's how to write API documentation that gets found, understood, and bookmarked.]]></description>
            <content:encoded><![CDATA[<p>A beautifully written API reference that no developer reads is a failure. Here's how to write API documentation that gets found, understood, and bookmarked.</p>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="the-developers-first-question">The Developer's First Question<a href="https://itsmemukesh.github.io/portfolio/blog/api-documentation-strategies#the-developers-first-question" class="hash-link" aria-label="Direct link to The Developer's First Question" title="Direct link to The Developer's First Question" translate="no">​</a></h2>
<p>When a developer lands on your API documentation, they have one question before any other: <em>"Can this API do what I need in the time I have?"</em></p>
<p>That question needs to be answered within 90 seconds. If it isn't, they leave.</p>
<p>Everything about good API documentation design flows from this constraint.</p>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="the-three-layers-of-api-docs">The Three Layers of API Docs<a href="https://itsmemukesh.github.io/portfolio/blog/api-documentation-strategies#the-three-layers-of-api-docs" class="hash-link" aria-label="Direct link to The Three Layers of API Docs" title="Direct link to The Three Layers of API Docs" translate="no">​</a></h2>
<p>Great API documentation works at three distinct levels simultaneously:</p>
<h3 class="anchor anchorTargetStickyNavbar_Vzrq" id="1-the-quickstart-010-minutes">1. The Quickstart (0–10 minutes)<a href="https://itsmemukesh.github.io/portfolio/blog/api-documentation-strategies#1-the-quickstart-010-minutes" class="hash-link" aria-label="Direct link to 1. The Quickstart (0–10 minutes)" title="Direct link to 1. The Quickstart (0–10 minutes)" translate="no">​</a></h3>
<p>A single, complete, copy-paste-runnable example that produces a real, meaningful result. Not "Hello World" — something that demonstrates the actual value of the API. At Diligent, our best-performing quickstarts showed a governance workflow completed end-to-end, not just an authentication handshake.</p>
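<p>What "copy-paste-runnable" looks like in miniature: a sketch of the authenticated call a quickstart would walk the reader through. The base URL, header names, and payload shape are all illustrative, not a real API:</p>

```python
import json
import urllib.request

BASE_URL = "https://api.example.com/v1"  # hypothetical endpoint for illustration

def build_create_workflow_request(token, payload):
    """Build the authenticated POST a quickstart would have the reader run."""
    data = json.dumps(payload).encode("utf-8")
    return urllib.request.Request(
        f"{BASE_URL}/workflows",
        data=data,
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# The real quickstart's final line would send it and print the result:
# response = urllib.request.urlopen(build_create_workflow_request(token, {...}))
```

<p>Notice that the example carries the reader all the way to a meaningful request, not just a token exchange.</p>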
<h3 class="anchor anchorTargetStickyNavbar_Vzrq" id="2-the-concept-guide-1030-minutes">2. The Concept Guide (10–30 minutes)<a href="https://itsmemukesh.github.io/portfolio/blog/api-documentation-strategies#2-the-concept-guide-1030-minutes" class="hash-link" aria-label="Direct link to 2. The Concept Guide (10–30 minutes)" title="Direct link to 2. The Concept Guide (10–30 minutes)" translate="no">​</a></h3>
<p>Explains the mental model. What is an "entity" in your system? How does authentication work? What are rate limits and how do they apply to this specific API? Developers who skip the concept guide write fragile integrations that break in production.</p>
<h3 class="anchor anchorTargetStickyNavbar_Vzrq" id="3-the-reference-always">3. The Reference (always)<a href="https://itsmemukesh.github.io/portfolio/blog/api-documentation-strategies#3-the-reference-always" class="hash-link" aria-label="Direct link to 3. The Reference (always)" title="Direct link to 3. The Reference (always)" translate="no">​</a></h3>
<p>Complete, exhaustive, accurate. Every parameter, every response code, every error message. Auto-generated from your OpenAPI spec — but always human-reviewed. Auto-generation handles completeness; humans handle clarity.</p>
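<p>The auto-generation half of that split can be as small as flattening the spec into rows for a writer to review. A sketch against a toy OpenAPI-shaped dict:</p>

```python
def reference_skeleton(spec):
    """Flatten an OpenAPI-style dict into (METHOD, path, summary) rows
    ready for human review. Only the tiny spec shape this sketch needs."""
    rows = []
    for path, methods in sorted(spec.get("paths", {}).items()):
        for method, op in sorted(methods.items()):
            rows.append((method.upper(), path,
                         op.get("summary", "TODO: write a summary")))
    return rows
```

<p>The <code>TODO</code> rows are where the human-review pass earns its keep: auto-generation guarantees every endpoint appears; a writer guarantees each one says something useful.</p>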
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="what-makes-a-reference-actually-useful">What Makes a Reference Actually Useful<a href="https://itsmemukesh.github.io/portfolio/blog/api-documentation-strategies#what-makes-a-reference-actually-useful" class="hash-link" aria-label="Direct link to What Makes a Reference Actually Useful" title="Direct link to What Makes a Reference Actually Useful" translate="no">​</a></h2>
<p>The worst API references are technically complete but practically useless. Here's the difference:</p>
<p><strong>Bad parameter description:</strong></p>
<blockquote>
<p><code>include_archived</code> — boolean. Whether to include archived items.</p>
</blockquote>
<p><strong>Good parameter description:</strong></p>
<blockquote>
<p><code>include_archived</code> — boolean, default: <code>false</code>. When <code>true</code>, the response includes items with status <code>archived</code>. Use this when building audit reports or data exports. Omit for standard product workflows where archived items should be hidden.</p>
</blockquote>
<p>The good version adds the default, the context for when to use the parameter, and when to leave it out. It takes 30 more seconds to write and saves developers hours of debugging.</p>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="error-messages-are-documentation">Error Messages Are Documentation<a href="https://itsmemukesh.github.io/portfolio/blog/api-documentation-strategies#error-messages-are-documentation" class="hash-link" aria-label="Direct link to Error Messages Are Documentation" title="Direct link to Error Messages Are Documentation" translate="no">​</a></h2>
<p>Every error response your API returns is documentation your developers will read at 2am when something is broken. Write them accordingly.</p>
<div class="language-json codeBlockContainer_Ckt0 theme-code-block" style="--prism-color:#F8F8F2;--prism-background-color:#282A36"><div class="codeBlockContent_QJqH"><pre tabindex="0" class="prism-code language-json codeBlock_bY9V thin-scrollbar" style="color:#F8F8F2;background-color:#282A36"><code class="codeBlockLines_e6Vv"><span class="token-line" style="color:#F8F8F2"><span class="token punctuation" style="color:rgb(248, 248, 242)">{</span><span class="token plain"></span><br></span><span class="token-line" style="color:#F8F8F2"><span class="token plain">  </span><span class="token property">"error"</span><span class="token operator">:</span><span class="token plain"> </span><span class="token punctuation" style="color:rgb(248, 248, 242)">{</span><span class="token plain"></span><br></span><span class="token-line" style="color:#F8F8F2"><span class="token plain">    </span><span class="token property">"code"</span><span class="token operator">:</span><span class="token plain"> </span><span class="token string" style="color:rgb(255, 121, 198)">"PERMISSION_DENIED"</span><span class="token punctuation" style="color:rgb(248, 248, 242)">,</span><span class="token plain"></span><br></span><span class="token-line" style="color:#F8F8F2"><span class="token plain">    </span><span class="token property">"message"</span><span class="token operator">:</span><span class="token plain"> </span><span class="token string" style="color:rgb(255, 121, 198)">"The authenticated user does not have the 'admin.users.read' permission required for this endpoint."</span><span class="token punctuation" style="color:rgb(248, 248, 242)">,</span><span class="token plain"></span><br></span><span class="token-line" style="color:#F8F8F2"><span class="token plain">    </span><span class="token property">"details"</span><span class="token operator">:</span><span class="token plain"> </span><span class="token punctuation" style="color:rgb(248, 248, 242)">{</span><span class="token 
plain"></span><br></span><span class="token-line" style="color:#F8F8F2"><span class="token plain">      </span><span class="token property">"required_permission"</span><span class="token operator">:</span><span class="token plain"> </span><span class="token string" style="color:rgb(255, 121, 198)">"admin.users.read"</span><span class="token punctuation" style="color:rgb(248, 248, 242)">,</span><span class="token plain"></span><br></span><span class="token-line" style="color:#F8F8F2"><span class="token plain">      </span><span class="token property">"documentation"</span><span class="token operator">:</span><span class="token plain"> </span><span class="token string" style="color:rgb(255, 121, 198)">"https://docs.example.com/permissions#admin-users-read"</span><span class="token plain"></span><br></span><span class="token-line" style="color:#F8F8F2"><span class="token plain">    </span><span class="token punctuation" style="color:rgb(248, 248, 242)">}</span><span class="token plain"></span><br></span><span class="token-line" style="color:#F8F8F2"><span class="token plain">  </span><span class="token punctuation" style="color:rgb(248, 248, 242)">}</span><span class="token plain"></span><br></span><span class="token-line" style="color:#F8F8F2"><span class="token plain"></span><span class="token punctuation" style="color:rgb(248, 248, 242)">}</span><br></span></code></pre></div></div>
<p>Not:</p>
<div class="language-json codeBlockContainer_Ckt0 theme-code-block" style="--prism-color:#F8F8F2;--prism-background-color:#282A36"><div class="codeBlockContent_QJqH"><pre tabindex="0" class="prism-code language-json codeBlock_bY9V thin-scrollbar" style="color:#F8F8F2;background-color:#282A36"><code class="codeBlockLines_e6Vv"><span class="token-line" style="color:#F8F8F2"><span class="token punctuation" style="color:rgb(248, 248, 242)">{</span><span class="token plain"></span><br></span><span class="token-line" style="color:#F8F8F2"><span class="token plain">  </span><span class="token property">"error"</span><span class="token operator">:</span><span class="token plain"> </span><span class="token string" style="color:rgb(255, 121, 198)">"403 Forbidden"</span><span class="token plain"></span><br></span><span class="token-line" style="color:#F8F8F2"><span class="token plain"></span><span class="token punctuation" style="color:rgb(248, 248, 242)">}</span><br></span></code></pre></div></div>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="interactive-examples-are-non-negotiable">Interactive Examples Are Non-Negotiable<a href="https://itsmemukesh.github.io/portfolio/blog/api-documentation-strategies#interactive-examples-are-non-negotiable" class="hash-link" aria-label="Direct link to Interactive Examples Are Non-Negotiable" title="Direct link to Interactive Examples Are Non-Negotiable" translate="no">​</a></h2>
<p>In 2025, static code snippets are table stakes. Developers expect to run examples directly in the docs. Tools like Scalar, Redoc, or Swagger UI with real sandboxed credentials remove friction from the "first successful call" moment, which is the most critical milestone in developer onboarding.</p>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="the-changelog-is-part-of-the-docs">The Changelog Is Part of the Docs<a href="https://itsmemukesh.github.io/portfolio/blog/api-documentation-strategies#the-changelog-is-part-of-the-docs" class="hash-link" aria-label="Direct link to The Changelog Is Part of the Docs" title="Direct link to The Changelog Is Part of the Docs" translate="no">​</a></h2>
<p>API consumers need to know when things change. A well-maintained changelog:</p>
<ul>
<li class="">Lists every breaking change with a migration guide</li>
<li class="">Notes deprecations with a sunset date (six months' notice at minimum)</li>
<li class="">Calls out performance improvements or new capabilities</li>
</ul>
<p>Treat the API changelog as a product, not an administrative chore.</p>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="measuring-api-documentation-success">Measuring API Documentation Success<a href="https://itsmemukesh.github.io/portfolio/blog/api-documentation-strategies#measuring-api-documentation-success" class="hash-link" aria-label="Direct link to Measuring API Documentation Success" title="Direct link to Measuring API Documentation Success" translate="no">​</a></h2>
<p>The metric I care about most: <strong>time to first successful API call for a new developer</strong>. Track it. If it's over 30 minutes, something is wrong. If it's over an hour, your docs are actively costing you integrations.</p>
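<p>Tracking that metric can be as simple as joining two event streams per developer. A sketch, where the event kinds are invented names for whatever your telemetry actually emits:</p>

```python
from datetime import datetime

def time_to_first_call(events):
    """Minutes from signup to the first successful API response, per developer.
    `events` rows are (developer_id, timestamp, kind) tuples; the kinds
    'signup' and 'first_2xx' are hypothetical names for this sketch."""
    signup, first_ok = {}, {}
    for dev, ts, kind in events:
        if kind == "signup":
            signup[dev] = ts
        elif kind == "first_2xx" and dev not in first_ok:
            first_ok[dev] = ts
    return {
        dev: (first_ok[dev] - signup[dev]).total_seconds() / 60
        for dev in first_ok if dev in signup
    }
```

<p>Plot the distribution, not just the mean: a long tail of developers taking hours tells you exactly which part of the onboarding path is broken.</p>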
<p>Other useful signals:</p>
<ul>
<li class="">GitHub issues or Stack Overflow questions citing your docs</li>
<li class="">Support tickets that start with "I read the docs but..."</li>
<li class="">Direct developer feedback in post-integration surveys</li>
</ul>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="final-thought">Final Thought<a href="https://itsmemukesh.github.io/portfolio/blog/api-documentation-strategies#final-thought" class="hash-link" aria-label="Direct link to Final Thought" title="Direct link to Final Thought" translate="no">​</a></h2>
<p>Developers are your most skeptical readers. They don't trust what they can't verify. Every claim in your API docs should be provable with a runnable example. When in doubt: show, don't tell.</p>]]></content:encoded>
            <category>api-documentation</category>
            <category>developer-experience</category>
            <category>technical-writing</category>
        </item>
        <item>
            <title><![CDATA[Cross-Functional Collaboration for Technical Writers]]></title>
            <link>https://Itsmemukesh.github.io/portfolio/blog/cross-functional-collaboration-tips</link>
            <guid>https://Itsmemukesh.github.io/portfolio/blog/cross-functional-collaboration-tips</guid>
            <pubDate>Wed, 01 Jan 2025 00:00:00 GMT</pubDate>
            <description><![CDATA[Technical writing doesn't happen in isolation. The quality of your documentation is directly proportional to the quality of your relationships with product managers, engineers, and designers. Here's how to build those relationships deliberately.]]></description>
            <content:encoded><![CDATA[<p>Technical writing doesn't happen in isolation. The quality of your documentation is directly proportional to the quality of your relationships with product managers, engineers, and designers. Here's how to build those relationships deliberately.</p>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="why-collaboration-is-a-technical-skill">Why Collaboration Is a Technical Skill<a href="https://itsmemukesh.github.io/portfolio/blog/cross-functional-collaboration-tips#why-collaboration-is-a-technical-skill" class="hash-link" aria-label="Direct link to Why Collaboration Is a Technical Skill" title="Direct link to Why Collaboration Is a Technical Skill" translate="no">​</a></h2>
<p>Most technical writers get trained on tools and style guides. Few get trained on how to extract information from a senior engineer who has three minutes between meetings, or how to get a PM to prioritize documentation in a sprint that's already overloaded.</p>
<p>These are learnable skills, and they matter as much as your ability to write a clean procedure.</p>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="working-with-engineers">Working With Engineers<a href="https://itsmemukesh.github.io/portfolio/blog/cross-functional-collaboration-tips#working-with-engineers" class="hash-link" aria-label="Direct link to Working With Engineers" title="Direct link to Working With Engineers" translate="no">​</a></h2>
<p>Engineers are your most valuable source of technical truth. They are also, frequently, your most time-constrained stakeholders.</p>
<p><strong>What works:</strong></p>
<ul>
<li class=""><strong>Async-first.</strong> Never ask for a meeting when a Slack thread or a commented PR would work. Engineers prefer async; it respects their flow state.</li>
<li class=""><strong>Specific questions.</strong> "Can you explain how the authentication works?" gets a 20-minute explanation. "In the auth flow, does the refresh token expire if the access token is never used?" gets a two-sentence answer you can write directly into docs.</li>
<li class=""><strong>Review, don't approve.</strong> Send engineers a draft and ask them to flag anything factually wrong. Don't ask them to write — they will hate it. Do ask them to correct — they will care about accuracy.</li>
<li class=""><strong>Embed in sprints.</strong> Attend standups. Read Jira tickets. You'll learn things nobody thought to tell you.</li>
</ul>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="working-with-product-managers">Working With Product Managers<a href="https://itsmemukesh.github.io/portfolio/blog/cross-functional-collaboration-tips#working-with-product-managers" class="hash-link" aria-label="Direct link to Working With Product Managers" title="Direct link to Working With Product Managers" translate="no">​</a></h2>
<p>PMs control the roadmap and, often indirectly, the documentation budget. They're also the best source for the "why" behind features — context that makes the difference between a procedure and an explanation.</p>
<p><strong>What works:</strong></p>
<ul>
<li class=""><strong>Ask about the user problem, not the feature.</strong> "What problem does this solve for the user?" produces better documentation material than "What does this feature do?"</li>
<li class=""><strong>Negotiate doc scope early.</strong> Documentation scope should be in the definition of done. If it's not, advocate for it at sprint planning, not retrospective.</li>
<li class=""><strong>Provide metrics.</strong> PMs respond to data. If your knowledge base reduced support tickets by 30%, that's the language that earns documentation a seat at the table.</li>
<li class=""><strong>Flag documentation risk.</strong> If a feature is shipping without adequate docs, that's a business risk. Name it clearly: "Shipping X without documentation means support ticket volume for this workflow will increase."</li>
</ul>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="working-with-designers">Working With Designers<a href="https://itsmemukesh.github.io/portfolio/blog/cross-functional-collaboration-tips#working-with-designers" class="hash-link" aria-label="Direct link to Working With Designers" title="Direct link to Working With Designers" translate="no">​</a></h2>
<p>The best time to collaborate with a designer is before the UI is finalized, not after. UX writing — error messages, tooltips, onboarding flows — is documentation that lives in the product.</p>
<p><strong>What works:</strong></p>
<ul>
<li class=""><strong>Join design reviews.</strong> Figma comments from the documentation perspective catch issues ("This error message doesn't tell the user what to do next") before they're coded.</li>
<li class=""><strong>Maintain a content inventory.</strong> Keep a shared doc of all UI strings that need copy. Designers can't write everything; having a designated owner prevents omissions.</li>
<li class=""><strong>Align on terminology early.</strong> If the product calls it a "workspace" and docs call it a "project," users get confused. Terminology alignment is a joint responsibility.</li>
</ul>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="the-information-architecture-problem">The Information Architecture Problem<a href="https://itsmemukesh.github.io/portfolio/blog/cross-functional-collaboration-tips#the-information-architecture-problem" class="hash-link" aria-label="Direct link to The Information Architecture Problem" title="Direct link to The Information Architecture Problem" translate="no">​</a></h2>
<p>The hardest cross-functional challenge in technical writing is information architecture: deciding what lives where, who owns what, and how content stays consistent across docs, UI, and support.</p>
<p>My approach:</p>
<ol>
<li class=""><strong>Establish a single source of truth.</strong> Pick one place (Confluence, Notion, a GitHub repo) where canonical product definitions live.</li>
<li class=""><strong>Make it everyone's job to update it.</strong> Not just the writer's. When a PM changes a feature name, they update the glossary.</li>
<li class=""><strong>Review it quarterly.</strong> Entropy is constant. Content rots. Schedule time to audit.</li>
</ol>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="when-collaboration-breaks-down">When Collaboration Breaks Down<a href="https://itsmemukesh.github.io/portfolio/blog/cross-functional-collaboration-tips#when-collaboration-breaks-down" class="hash-link" aria-label="Direct link to When Collaboration Breaks Down" title="Direct link to When Collaboration Breaks Down" translate="no">​</a></h2>
<p>Sometimes stakeholders don't engage, reviews get skipped, and docs ship with known errors. This happens. What matters is how you respond:</p>
<ul>
<li class=""><strong>Document the gap.</strong> File a ticket, send a Slack message with a timestamp. Create a record.</li>
<li class=""><strong>Quantify the impact.</strong> "We shipped without documenting the new permission model. Support has logged 14 tickets in 48 hours" is more actionable than "we need better process."</li>
<li class=""><strong>Propose the fix, not just the problem.</strong> Come with a solution: a 30-minute doc sprint, a dedicated review slot, a definition-of-done update.</li>
</ul>
<h2 class="anchor anchorTargetStickyNavbar_Vzrq" id="building-long-term-trust">Building Long-Term Trust<a href="https://itsmemukesh.github.io/portfolio/blog/cross-functional-collaboration-tips#building-long-term-trust" class="hash-link" aria-label="Direct link to Building Long-Term Trust" title="Direct link to Building Long-Term Trust" translate="no">​</a></h2>
<p>Every accurate doc you ship builds trust with engineering. Every support ticket you deflect builds trust with product. Every clear error message builds trust with design. These compound over time.</p>
<p>The technical writer who is embedded, collaborative, and data-driven eventually becomes the person the team consults before shipping, not after.</p>
<p>That's the goal.</p>]]></content:encoded>
            <category>collaboration</category>
            <category>product-management</category>
            <category>technical-writing</category>
            <category>career</category>
        </item>
    </channel>
</rss>