<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom" ><generator uri="https://jekyllrb.com/" version="3.10.0">Jekyll</generator><link href="https://nicoappel.github.io/feed.xml" rel="self" type="application/atom+xml" /><link href="https://nicoappel.github.io/" rel="alternate" type="text/html" /><updated>2026-01-18T23:31:48+00:00</updated><id>https://nicoappel.github.io/feed.xml</id><title type="html">Working on It</title><author><name>Nico Appel</name></author><entry><title type="html">Why Your Docs Should Live in a Repo Now</title><link href="https://nicoappel.github.io/2025/12/30/ai-as-documentation-engine.html" rel="alternate" type="text/html" title="Why Your Docs Should Live in a Repo Now" /><published>2025-12-30T00:00:00+00:00</published><updated>2025-12-30T00:00:00+00:00</updated><id>https://nicoappel.github.io/2025/12/30/ai-as-documentation-engine</id><content type="html" xml:base="https://nicoappel.github.io/2025/12/30/ai-as-documentation-engine.html"><![CDATA[<p>Should your internal documentation live in Google Docs or a git repository?</p>

<p>Most teams stopped asking this question years ago. Google Docs won. Non-technical people can’t use git. End of debate.</p>

<p>Except it’s not the end. The reasoning made sense when humans did all the typing, formatting, and filing. That assumption is dissolving faster than most people realize.</p>

<p>I want to examine this question again – not to relitigate old arguments, but because the ground shifted underneath us. We’re going to stress test both approaches. I’ll make the case for repositories, then surface the strongest objections and address them directly.</p>

<p>The answer, IMO, is clear. But there’s something else most teams overlook entirely. And that might matter more than the tooling decision itself.</p>

<h2 id="the-direct-comparison">The Direct Comparison</h2>

<h3 id="someone-leaves-the-company">Someone Leaves the Company</h3>

<p>Someone leaves the company. Their name appears on the team page, in project documentation, in onboarding guides – you’re not sure where else. In Google Docs, you start searching. Document by document. Hoping you don’t miss one. Hoping someone else didn’t create a doc you don’t know about.</p>

<p>In a repository with AI as your interface, you say: “This person is no longer working for us. Remove them from the team page, and find any other places in the documentation where they’re mentioned.”</p>

<p>The AI searches. Comes back: “I found five files affected by this. Here are the proposed changes.” You review. You approve. One merge request. Complete coverage. Version history intact.</p>

<p>This is a class of capability Google Docs simply doesn’t have.</p>

<h3 id="what-changed-this-month">What Changed This Month</h3>

<p>Here’s another use case, something I use regularly: “What changed in our documentation this month?”</p>

<p>With a repository, the AI reads the commit history and summarizes – by author, by topic, by domain. With Google Docs, you’re clicking through revision histories one document at a time. If you even remember which documents to check.</p>
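<p>Under the hood, that “what changed this month” query maps onto plain git plumbing. A minimal sketch, run in a throwaway repo so it’s self-contained – the specific flags and formats here are illustrative, not a prescription:</p>

```shell
# Demo repo so the commands below have something to read.
# In practice, an AI agent runs the log queries inside your docs repo.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email "demo@example.com"
git config user.name "Demo"
echo "# Team" > team.md
git add team.md
git commit -qm "docs: add team page"

# Per-commit view: hash, author, subject, limited to a time window.
summary=$(git log --since="1 month ago" --pretty=format:"%h %an %s")
echo "$summary"

# Per-author rollup of commit counts.
git shortlog -sn HEAD
```

<p>The AI’s job is translating “what changed this month, by author and topic” into queries like these and summarizing the result – history the repository records for free.</p>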

<h3 id="quality-automation">Quality Automation</h3>

<p>And then there’s quality. Documentation I produce through AI doesn’t have typos. Not because I’m careful – because nothing is typed by hand. Quality agents scan the repository continuously. They check for broken links, inconsistent terminology, naming conventions. Standards that humans struggle to maintain consistently become trivial to enforce.</p>
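<p>To make “quality agents” concrete: one such check is a broken-link scan over the docs tree. A minimal sketch in Python – the regex and function name are illustrative, not from any particular tool, and a real agent would also verify external links over HTTP:</p>

```python
# Sketch of one "quality agent" check: scan a docs tree for markdown
# links whose local targets don't exist.
import re
from pathlib import Path

LINK = re.compile(r"\[[^\]]*\]\(([^)\s#]+)")  # capture the link target

def broken_links(root: str) -> list[tuple[str, str]]:
    """Return (file, target) pairs for relative links that resolve nowhere."""
    problems = []
    for md in Path(root).rglob("*.md"):
        for target in LINK.findall(md.read_text(encoding="utf-8")):
            if target.startswith(("http://", "https://", "mailto:")):
                continue  # external links would need an HTTP check instead
            if not (md.parent / target).exists():
                problems.append((str(md), target))
    return problems
```

<p>Run on a schedule or in CI, a handful of small checks like this enforces exactly the standards humans drift on.</p>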

<p>Google Docs has version history. It has comments. It has suggesting mode. For many teams, it’s genuinely sufficient for writing documents.</p>

<p>But it can’t do comprehensive cross-repository updates. It can’t answer “what changed this month” across all your documentation. It can’t enforce quality standards automatically. The repository approach doesn’t just match Google Docs on convenience – it exceeds Google Docs’ capability ceiling.</p>

<h3 id="the-ais-native-habitat">The AI’s Native Habitat</h3>

<p>Another dimension most people miss: When you put documentation in a repository, you’re putting it in the AI’s native habitat.</p>

<p>These tools were trained on code repositories. That’s their training ground – where they learned to navigate file structures, parse markdown, understand diffs, follow conventions. When you ask an AI to operate on a repo, you’re asking it to work in the environment it knows best.</p>

<p>Your documentation isn’t just for your human team anymore. It’s for your AI team. Every file you structure well, every convention you follow, every piece of context you make explicit – it serves both audiences. The humans who need to understand what’s happening, and the AI agents who need to operate reliably.</p>

<p>Google Docs might be parseable. But the repo is where the AI lives.</p>

<h2 id="stress-test">Stress Test</h2>

<p>I asked Claude to argue against this approach – to surface some objections because I wanted to find the holes I hadn’t articulated yet. The challenges it came back with were reasonable. Here’s how I addressed them.</p>

<style>
.chat-container { max-width: 100%; margin: 2rem 0; }
.chat-bubble { padding: 1rem 1.25rem; margin: 1rem 0; border-radius: 1rem; max-width: 85%; }
.challenge { background: #f1f1f1; border-left: 3px solid #666; margin-right: auto; }
.response { background: #e8f4e8; border-left: 3px solid #2a7d2a; margin-right: auto; }
.chat-label { font-size: 0.75rem; text-transform: uppercase; letter-spacing: 0.05em; color: #666; margin-bottom: 0.5rem; font-weight: 600; }
.chat-bubble p { margin: 0; }
.chat-bubble p + p { margin-top: 0.75rem; }
</style>

<div class="chat-container">

<div class="chat-bubble challenge">
<div class="chat-label">Challenge</div>
<p><strong>Verification.</strong> If contributors can't read the diffs, how do they know the AI did what they asked?</p>
</div>

<div class="chat-bubble response">
<div class="chat-label">Response</div>
<p>The AI presents its intentions before acting. "Here are the five files I'll modify. Here's each change. Proceed?" You don't parse syntax. You approve intent and outcome.</p>
</div>

<div class="chat-bubble challenge">
<div class="chat-label">Challenge</div>
<p><strong>Friction.</strong> Google Docs: see typo, click, fix. Five seconds. Your model adds steps.</p>
</div>

<div class="chat-bubble response">
<div class="chat-label">Response</div>
<p>I don't write documentation by hand anymore. I describe what I want. The AI writes. Typos don't get introduced because nothing is typed manually. Quality agents catch what slips through. The "fix a typo" scenario barely exists in this model.</p>
</div>

<div class="chat-bubble challenge">
<div class="chat-label">Challenge</div>
<p><strong>Reliability.</strong> AI hallucinates. Errors look authoritative when well-formatted.</p>
</div>

<div class="chat-bubble response">
<div class="chat-label">Response</div>
<p>Active use surfaces problems. Query the documentation daily, run operations against it, and inconsistencies reveal themselves. What gets used stays accurate. What sits unread drifts – regardless of what tool produced it.</p>
</div>

<div class="chat-bubble challenge">
<div class="chat-label">Challenge</div>
<p><strong>Training.</strong> People still need to understand git, version control, how to troubleshoot.</p>
</div>

<div class="chat-bubble response">
<div class="chat-label">Response</div>
<p>Conceptual understanding, not operational skill. Know what version control accomplishes and why it matters. The AI operates the tools. Training people on manual git operations is optimizing for the past.</p>
</div>

<div class="chat-bubble challenge">
<div class="chat-label">Challenge</div>
<p><strong>Bottleneck.</strong> Every change goes through merge requests. Who reviews?</p>
</div>

<div class="chat-bubble response">
<div class="chat-label">Response</div>
<p>Most changes need no review. This isn't production code. Edits are documented. Rollback is trivial. Review exists for awareness when needed, not gatekeeping. And unlike live-edit systems, you can see exactly what changed and why.</p>
</div>

</div>

<p>The objections are reasonable. I don’t think they hold.</p>

<p>And there’s a deeper question underneath all of this.</p>

<h2 id="the-dissolution">The Dissolution</h2>

<p>The distinction between “technical” and “non-technical” people is eroding faster than most realize.</p>

<p>Most teams never considered putting documentation in a git repository because git is for code. Documentation is just something people write into documents – Word, then Google Docs. Different worlds entirely.</p>

<p>There have been attempts to bridge them. Wikis, for example, with Notion being a blend of Google Docs and a wiki. But my taste runs toward keeping things as close as you can to the bare metal: <strong>text files, version control, minimal abstraction.</strong></p>

<p>The suggestion that docs should live in version-controlled text files, structured like a codebase, surprises most folks. The outdated assumption is that you’d be tied to the command line, typing git commands and resolving merge conflicts by hand.</p>

<p>No more. AI agents changed the equation. The interface layer shifted.</p>

<p>If you’re not seeing this yet, I’m not surprised. It’s still somewhat hidden. But AI can be the documentation interface. Not someday – now. I’m telling you from the trenches: it works.</p>

<p>What does onboarding look like in this model? You teach people the concepts. What is version control? Why does it matter? What’s a commit, a branch, a merge request? You walk them through it once – write some markdown, see what it looks like, understand what a diff shows you.</p>

<p>Then you hand them the AI agent, and probably some dictation tool.</p>

<p>From that point on, they describe what they want in natural language. The AI handles the markdown, the file placement, the structure, the formatting. Contributors don’t need to remember syntax. They need to understand what they’re trying to accomplish.</p>

<p>The barrier that kept documentation in Google Docs – “git is for code, not for us” – is dissolving. If you’re still making decisions based on that assumption, you might be solving a problem that’s disappearing.</p>

<h2 id="forget-documentation">Forget Documentation</h2>

<p>Really, this is not a tooling question. It isn’t even about documentation as it has been understood for the longest time.</p>

<p>It’s a directional choice – about what you’re developing your team, your organization, and yourself toward.</p>

<p>If you solve the “non-technical people can’t use git” problem by adding abstraction layers – a CMS on top, a friendly UI that hides the machinery – you might win convenience. But you also prevent people from developing primitive fluency<sup id="fnref:1" role="doc-noteref"><a href="#fn:1" class="footnote" rel="footnote">1</a></sup>: the intuition for how these building blocks work together and compound into capability over time.</p>

<p>From my point of view, you have to <strong>actually buy into this direction.</strong> Performatively doing a little bit of AI – “we do use AI, sure. We transcribe meetings” – won’t cut it. That’s not going to put you or your team in step with what’s happening.</p>

<p>I acknowledge it’s difficult to keep up. It really is. However, this is already happening. And unlike betting on future model capabilities, there’s little downside to moving this way. We’re talking about what works <em>right now.</em></p>

<p>Put yourself and your organization into a state where you’re leveraging what’s already possible – which is in fact a multiplier – and be ready to leverage what’s coming.</p>

<p>Jack Clark in his <a href="https://nicoappel.github.io/2025/12/23/import-ai-438-silent-sirens-flashing-for-us-all.html">recent newsletter</a>:</p>

<blockquote>
  <p>“By the summer I expect that many people who work with frontier AI systems will feel as though they live in a parallel world to people who don’t. And I expect this will be more than just a feeling.”</p>
</blockquote>

<p>More on this to come.</p>

<div class="footnotes" role="doc-endnotes">
  <ol>
    <li id="fn:1" role="doc-endnote">
      <p>Primitive fluency: the intuition for how simple building blocks – text files, version control, AI interaction patterns – work together and compound into capability. Term from Nate B. Jones. <a href="#fnref:1" class="reversefootnote" role="doc-backlink">&#8617;</a></p>
    </li>
  </ol>
</div>]]></content><author><name>Nico Appel</name></author><category term="ai-agents" /><category term="documentation" /><category term="work-primitives" /><summary type="html"><![CDATA[Should your internal documentation live in Google Docs or a git repository?]]></summary></entry><entry><title type="html">The Work Your AI Can’t See</title><link href="https://nicoappel.github.io/2025/12/29/the-work-your-ai-cant-see.html" rel="alternate" type="text/html" title="The Work Your AI Can’t See" /><published>2025-12-29T00:00:00+00:00</published><updated>2025-12-29T00:00:00+00:00</updated><id>https://nicoappel.github.io/2025/12/29/the-work-your-ai-cant-see</id><content type="html" xml:base="https://nicoappel.github.io/2025/12/29/the-work-your-ai-cant-see.html"><![CDATA[<p>There’s an overlooked benefit to working with agents: in order to make working with agents work, <strong>you’re going to make work work.</strong></p>

<p>The gap that most organizations stumble over isn’t technical per se. It won’t be solved by more capable models, a better UI, or the next release. It’s the same gap that’s plagued knowledge work for decades: work that exists only in people’s heads, processes that depend on the right person being in the room, documentation that’s either outdated or unfindable or both.</p>

<p>Whether the AI models improve next month or next year, the work before us is the same. But there’s some good news.</p>

<p><img src="/assets/images/work-ai-cant-see-dusty-docs.png" alt="A dusty leather-bound book titled &quot;How We Work&quot; covered in cobwebs" />
<em>Last updated: Q3 2019. “We’ll get to it next sprint.”</em></p>

<h2 id="the-primitives">The Primitives</h2>

<p><a href="https://www.youtube.com/watch?v=4Bg0Q1enwS4">Nate B. Jones</a> frames it as questions that every workflow needs to answer explicitly:</p>

<ol>
  <li><strong>System of record</strong> – Where’s the thing we change? What’s the canonical source of truth?</li>
  <li><strong>State</strong> – How do we see that it changed? What’s the before/after?</li>
  <li><strong>Gate</strong> – How do we approve it? What are the defined transitions?</li>
  <li><strong>Checks</strong> – How do we prove it worked? (“It looks good” is not a check.)</li>
  <li><strong>Rollback</strong> – How do we undo it when it doesn’t?</li>
  <li><strong>Traceability</strong> – Who did what, and why?</li>
</ol>

<p>This framing resonates with my work at TightOps. A decade of work in and with distributed-team operations, across different companies and cultures, kept surfacing the same fundamental failure – and it is not the technology. It is a <strong>legibility</strong> issue: work that isn’t readable, findable, or actionable by anyone who wasn’t in the room when it happened.</p>

<p>James C. Scott developed this into a framework in <em>Seeing Like a State</em> – legibility as what institutions need to function: standardized forms that make complex realities readable and governable. The surname. The cadastral map. The grid city. But that legibility is often crude and extractive, erasing local variation to facilitate state control.</p>

<p>What we are dealing with now is different.</p>

<p>The “magic” of working with AI is that you don’t have to standardize <em>how</em> people work. You can provide the AI with fairly unstructured braindumps, going back and forth between topics, closing some loops and opening others. You can let your conversations be messy. The AI reads through the full transcript and restructures it. It finds the threads, identifies the decisions, surfaces what changed (with some caveats, but that is for another post).</p>

<p>The legibility happens at the output, not the process. People stay human. The AI handles the translation.</p>

<p>Is some nuance lost? Of course. Summarization means emphasis, means choices about what matters. But the cost is negligible, the result is useful, and you end up with a system of record that’s transparent and fair rather than extractive.</p>

<h2 id="outsource-boring-work">Outsource boring work</h2>

<p>For years, the answer to all of this was “better documentation.” And for years, the response was a collective shrug. Documentation felt like a technical nice-to-have. So-called non-technical people found it even more tedious (good luck getting your sales team to write and maintain documentation). Busy executives had revenue to chase and fires to put out. The cost was upfront – someone has to write it, maintain it, keep it current – and the benefit was abstract, later, somewhere down the line.</p>

<p>So it didn’t happen. Or it happened once, grew stale, and became another artifact nobody trusted.</p>

<p>Two things changed.</p>

<p>First, documentation became more important. It’s no longer just operational hygiene. It is now the key that, excuse my hyperbole, “unlocks an army of agents.” A different vector of scale entirely. The organizations that have their work documented in legible, structured form can tap into capabilities that others simply cannot access. Documentation got promoted from nice-to-have to strategic lever.</p>

<p>Second, the burden shifted. AI can take messy context – a Slack thread, a voice note, a rambling meeting transcript – and turn it into structured, legible documentation. It can figure out where that information belongs in your existing system. It can update what’s already there rather than creating yet another conflicting source.</p>

<p>The skill and discipline that documentation always required? The AI handles that now. You don’t write documentation from scratch – you capture what’s already happening and let AI structure it. What’s left is curation: deciding what matters, what to feed in, what to keep current. A human still steers. But the documentation becomes so much more convenient to create <em>and</em> maintain, and proves its value so obviously every day, that you’ll go there voluntarily.</p>

<h2 id="compounding">Compounding</h2>

<p>The more you document, the more capable your AI becomes. Not in some abstract “training” sense – you’re <em>not</em> fine-tuning a model. You’re building context.</p>

<p>This is the broader picture: context engineering matters more than prompt engineering. When I tell the AI to process an input and update our team’s documentation accordingly, I don’t fix the prompt when I’m not satisfied with the execution. I fix the context – a readme file, a missing link to a related file, a formatting convention.</p>

<p>The full picture of how your business works, captured in plain text that any AI (or human) can read.</p>

<p>Ask a question about your pricing strategy, and the AI can reference what you’ve already decided. Ask how to solve a problem with your current tool stack, and it knows what tools you’re using. New hires onboard from the same source that agents use. Decisions don’t have to be re-explained every time someone new joins the conversation – human or otherwise.</p>

<p>This is what it means to scale without losing coherence. <strong>The documentation isn’t overhead. It’s the substrate.</strong></p>

<h2 id="entry-point">Entry Point</h2>

<p>Pick one workflow, one project, one team. Audit it against the primitives:</p>

<ul>
  <li>Where’s the system of record? What’s the canonical source of truth?</li>
  <li>Can you see the before/after when something changes?</li>
  <li>What’s the gate? How do changes get approved?</li>
  <li>What checks prove it worked? (Not “it looks good” – something objective.)</li>
  <li>Can you roll back when it doesn’t work?</li>
  <li>Can you trace who did what, and why?</li>
</ul>

<p>The gaps you find will tell you exactly where the friction lives – why things aren’t shipping, why execution feels harder than it should, why scaling keeps breaking what used to work.</p>

<p>And then: start documenting. Not by writing from scratch, but by capturing what’s already happening and letting AI structure it. Talk. Record. Transcribe. Let the machine make it legible. (I’ll likely publish some templates or guiding ideas for this. But really, the AIs are plenty capable of getting you started. Also: “Can you roll back when it doesn’t work?” Sure you can. You can ask AI to restructure your docs, to replace terminology that you want updated, to break up one file gone too large and unwieldy into smaller ones. Let ‘em work. Review. Approve. Correct direction. It’s inexpensive.)</p>

<p>Bottom line: The primitives that make work agent-legible are the same ones that make work work. Solve for one, you solve for both.</p>]]></content><author><name>Nico Appel</name></author><category term="ai-agents" /><category term="workflows" /><category term="documentation" /><category term="operations" /><summary type="html"><![CDATA[There’s an overlooked benefit to working with agents: in order to make working with agents work, you’re going to make work work.]]></summary></entry><entry><title type="html">Quoting Nate</title><link href="https://nicoappel.github.io/2025/12/27/year-end-reflections-why-2025-was-a-pretty-good-year-for-ai-but-not-for-the-reasons-you-think.html" rel="alternate" type="text/html" title="Quoting Nate" /><published>2025-12-27T00:00:00+00:00</published><updated>2025-12-27T00:00:00+00:00</updated><id>https://nicoappel.github.io/2025/12/27/year-end-reflections-why-2025-was-a-pretty-good-year-for-ai-but-not-for-the-reasons-you-think</id><content type="html" xml:base="https://nicoappel.github.io/2025/12/27/year-end-reflections-why-2025-was-a-pretty-good-year-for-ai-but-not-for-the-reasons-you-think.html"><![CDATA[<blockquote>
  <p>What gives me hope is that the slop isn’t inevitable, and we humans have a pretty reliable history of finding great works (like Shakespeare) out of the sea of slop we ourselves have written. Good stuff rises to the top.</p>
</blockquote>

<blockquote>
  <p>When you build actual systems—when there’s discipline around what gets generated and how it gets checked before it goes out—you can produce work that’s dramatically better than what most humans produce unassisted. I’ve seen marketing copy that people actually click on, emails that get responses, ad creative that outperforms what teams were producing manually. The difference isn’t the model; it’s everything around the model. Retrieval to ground the claims. Validation to catch the errors. Human checkpoints for edge cases. Taste applied at some point in the process.</p>
</blockquote>

<blockquote>
  <p>Is the content information-dense? Is it something you can come back to? Does it respect your time? Those are the right questions, and AI can help you answer them well—if you build the systems to make it happen rather than just connecting a model to a publish button and hoping for the best.</p>
</blockquote>

<p>via <a href="https://open.substack.com/pub/natesnewsletter/p/year-end-reflections-why-2025-was">Nate</a></p>]]></content><author><name>Nate</name></author><category term="AI Trends" /><category term="Artificial Intelligence" /><category term="Technology" /><category term="Future of Work" /><category term="Productivity" /><summary type="html"><![CDATA[What gives me hope is that the slop isn’t inevitable, and we humans have a pretty reliable history of finding great works (like Shakespeare) out of the sea of slop we ourselves have written. Good stuff rises to the top.]]></summary></entry><entry><title type="html">Quoting Jack Clark</title><link href="https://nicoappel.github.io/2025/12/23/import-ai-438-silent-sirens-flashing-for-us-all.html" rel="alternate" type="text/html" title="Quoting Jack Clark" /><published>2025-12-23T00:00:00+00:00</published><updated>2025-12-23T00:00:00+00:00</updated><id>https://nicoappel.github.io/2025/12/23/import-ai-438-silent-sirens-flashing-for-us-all</id><content type="html" xml:base="https://nicoappel.github.io/2025/12/23/import-ai-438-silent-sirens-flashing-for-us-all.html"><![CDATA[<blockquote>
  <p>By the summer I expect that many people who work with frontier AI systems will feel as though they live in a parallel world to people who don’t. And I expect this will be more than just a feeling</p>
</blockquote>

<p>via <a href="https://importai.substack.com/p/import-ai-438-cyber-capability-overhang">Jack Clark</a></p>]]></content><author><name>Jack Clark</name></author><summary type="html"><![CDATA[By the summer I expect that many people who work with frontier AI systems will feel as though they live in a parallel world to people who don’t. And I expect this will be more than just a feeling]]></summary></entry><entry><title type="html">Quoting Tim Dettmers</title><link href="https://nicoappel.github.io/2025/12/15/why-agi-will-not-happen-tim-dettmers.html" rel="alternate" type="text/html" title="Quoting Tim Dettmers" /><published>2025-12-15T00:00:00+00:00</published><updated>2025-12-15T00:00:00+00:00</updated><id>https://nicoappel.github.io/2025/12/15/why-agi-will-not-happen-tim-dettmers</id><content type="html" xml:base="https://nicoappel.github.io/2025/12/15/why-agi-will-not-happen-tim-dettmers.html"><![CDATA[<p>Refreshing read.</p>

<blockquote>
  <p>Computation is physical</p>

  <p>A key problem with ideas, particularly those coming from the Bay Area, is that they often live entirely in the idea space. Most people who think about AGI, superintelligence, scaling laws, and hardware improvements treat these concepts as abstract ideas that can be discussed like philosophical thought experiments. In fact, a lot of the thinking about superintelligence and AGI comes from Oxford-style philosophy. Oxford, the birthplace of effective altruism, mixed with the rationality culture from the Bay Area, gave rise to a strong distortion of how to clearly think about certain ideas. All of this sits on one fundamental misunderstanding of AI and scaling: computation is physical.</p>
</blockquote>

<p>via <a href="https://timdettmers.com/2025/12/10/why-agi-will-not-happen/">Tim Dettmers</a></p>]]></content><author><name>Tim Dettmers</name></author><category term="Hardware scaling" /><category term="AI economics" /><category term="AGI skepticism" /><category term="Artificial Intelligence" /><category term="Computation limits" /><summary type="html"><![CDATA[Refreshing read.]]></summary></entry><entry><title type="html">Quoting The Decoder</title><link href="https://nicoappel.github.io/2025/10/21/a-changing-internet-wikipedia-sees-drop-in-traffic-as-ai-and-social-platforms-bypass-links.html" rel="alternate" type="text/html" title="Quoting The Decoder" /><published>2025-10-21T00:00:00+00:00</published><updated>2025-10-21T00:00:00+00:00</updated><id>https://nicoappel.github.io/2025/10/21/a-changing-internet-wikipedia-sees-drop-in-traffic-as-ai-and-social-platforms-bypass-links</id><content type="html" xml:base="https://nicoappel.github.io/2025/10/21/a-changing-internet-wikipedia-sees-drop-in-traffic-as-ai-and-social-platforms-bypass-links.html"><![CDATA[<blockquote>
  <p>The <a href="https://diff.wikimedia.org/2025/10/17/new-user-trends-on-wikipedia/">Wikimedia Foundation</a> says page views have dropped by about eight percent compared to last year. The foundation points to generative AI tools and social networks that display Wikipedia content <a href="https://the-decoder.com/pew-finds-that-only-1-percent-of-users-click-a-source-link-directly-from-googles-ai-overviews/">without sending users to the site</a>. Bots that increasingly resemble real users are also putting more strain on Wikipedia’s infrastructure.</p>
</blockquote>

<p>World’s most valuable data. The closest thing to a common, more or less global, understanding we have.</p>

<p>via <a href="https://the-decoder.com/a-changing-internet-wikipedia-sees-drop-in-traffic-as-ai-and-social-platforms-bypass-links/">The Decoder</a></p>]]></content><author><name>The Decoder</name></author><category term="Internet Trends" /><category term="social media" /><category term="Online Traffic" /><category term="Artificial Intelligence" /><category term="Wikipedia" /><summary type="html"><![CDATA[The Wikimedia Foundation says page views have dropped by about eight percent compared to last year. The foundation points to generative AI tools and social networks that display Wikipedia content without sending users to the site. Bots that increasingly resemble real users are also putting more strain on Wikipedia’s infrastructure.]]></summary></entry><entry><title type="html">The Majority AI View</title><link href="https://nicoappel.github.io/2025/10/21/the-majority-ai-view-anil-dash.html" rel="alternate" type="text/html" title="The Majority AI View" /><published>2025-10-21T00:00:00+00:00</published><updated>2025-10-21T00:00:00+00:00</updated><id>https://nicoappel.github.io/2025/10/21/the-majority-ai-view-anil-dash</id><content type="html" xml:base="https://nicoappel.github.io/2025/10/21/the-majority-ai-view-anil-dash.html"><![CDATA[<p>Not only too big to fail, but too deep into impossible to keep promises.</p>

<p>Early on, while there was less information, experience, even research, it was tough to argue against wild speculations (hype). Times have changed. For those actively using and following the technology, nothing but a moderate view seems at all reasonable.</p>

<p>I still feel like some of the people working at the “labs” are so inundated or immersed that maybe they are truthful but starkly mistaken. They should be smarter than that. The other explanation is that they are bullshitting and lying, misrepresenting.</p>

<p>Hating to be or become more cynical, I may be making a similar error by not following Occam’s razor. Maybe greed is all that’s needed to explain this.</p>

<p>via <a href="https://www.anildash.com/2025/10/17/the-majority-ai-view/">Anil Dash</a></p>]]></content><author><name>Anil Dash</name></author><category term="Tech Ethics" /><category term="AI Critique" /><category term="Technology Industry" /><category term="Artificial Intelligence" /><category term="Tech Culture" /><summary type="html"><![CDATA[Not only too big to fail, but too deep into impossible to keep promises.]]></summary></entry><entry><title type="html">Daring Fireball: Markdown</title><link href="https://nicoappel.github.io/2025/10/09/daring-fireball-markdown.html" rel="alternate" type="text/html" title="Daring Fireball: Markdown" /><published>2025-10-09T00:00:00+00:00</published><updated>2025-10-09T00:00:00+00:00</updated><id>https://nicoappel.github.io/2025/10/09/daring-fireball-markdown</id><content type="html" xml:base="https://nicoappel.github.io/2025/10/09/daring-fireball-markdown.html"><![CDATA[<p>It’s freaking amazing to me that one guy, John Gruber, came up with Markdown. It has been widely adopted and personally I think if I were to get something like that on my record, I’d be done.</p>

<p>via <a href="https://daringfireball.net/projects/markdown/">John Gruber</a></p>]]></content><author><name>John Gruber</name></author><category term="open-source software" /><category term="web development" /><category term="HTML conversion" /><category term="text formatting" /><category term="markdown" /><summary type="html"><![CDATA[It’s freaking amazing to me that one guy, John Gruber, came up with Markdown. It has been widely adopted and personally I think if I were to get something like that on my record, I’d be done.]]></summary></entry><entry><title type="html">Oldletters</title><link href="https://nicoappel.github.io/2025/10/08/oldletters.html" rel="alternate" type="text/html" title="Oldletters" /><published>2025-10-08T00:00:00+00:00</published><updated>2025-10-08T00:00:00+00:00</updated><id>https://nicoappel.github.io/2025/10/08/oldletters</id><content type="html" xml:base="https://nicoappel.github.io/2025/10/08/oldletters.html"><![CDATA[<p>One of the problems with newsletters is that they are covering the latest product/software releases, developments, trends, what have you.</p>

<p>Hence, everything is more or less a regurgitation of a press release. It may also be “first impressions,” or some initial test of a thing. There is not much substance. Technical specs, dimensions, version numbers, and similar metrics dominate.</p>

<p>Beyond newsletters, though, we have cultivated this race to immediate coverage. However, the cases where this is actually useful are rare – gossip is probably a more legitimate use case for immediate coverage.</p>

<p>How about some <strong>“oldletters“?</strong></p>

<p>They would cover things only after thorough testing, meaning real experience gained by using X <em>over time</em>. A couple of weeks, or a couple of months of usage is what my gut tells me should be about right.</p>

<p>Come to think of it, this category exists, and I have seen it on YouTube with titles such as “6 Months With the …“ (which then is naturally followed by “… – I wasn’t expecting this” 🤦)</p>

<p>I’m more thinking of:</p>

<ul>
  <li>What happened to X?</li>
  <li>What became of Y?</li>
  <li>X after the dust has settled</li>
</ul>

<p>Reviews where there is sufficient time in the “Re” to allow for a more balanced perspective and different kinds of insights.</p>]]></content><author><name>Nico Appel</name></author><summary type="html"><![CDATA[One of the problems with newsletters is that they are covering the latest product/software releases, developments, trends, what have you.]]></summary></entry><entry><title type="html">Building with AI Is Outsourcing</title><link href="https://nicoappel.github.io/2025/10/06/building-with-ai-is-outsourcing.html" rel="alternate" type="text/html" title="Building with AI Is Outsourcing" /><published>2025-10-06T00:00:00+00:00</published><updated>2025-10-06T00:00:00+00:00</updated><id>https://nicoappel.github.io/2025/10/06/building-with-ai-is-outsourcing</id><content type="html" xml:base="https://nicoappel.github.io/2025/10/06/building-with-ai-is-outsourcing.html"><![CDATA[<p>If you’re building things with AI (LLMs and inference), you have to realize that this is simply a new, or more modern, form of outsourcing.</p>

<p>It’s about figuring out what needs to be done, breaking it down into workflows, writing up instructions, and then having <em>somebody else</em> take care of that.</p>

<p>So in essence, it’s just outsourcing, and the same problems that traditionally apply to outsourcing also apply here.</p>

<p>You have the classic problems: dependency, responsibility for the work, potential IP theft, and the need to keep on top of your providers.</p>]]></content><author><name>Nico Appel</name></author><category term="Outsourcing" /><category term="Intellectual Property" /><category term="Workflows" /><category term="Technology Management" /><category term="Artificial Intelligence" /><category term="aal" /><summary type="html"><![CDATA[If you’re building things with AI (LLMs and inference), you have to realize that this is simply a new, or more modern, form of outsourcing.]]></summary></entry></feed>