How to Use AI for Writing
Based on the interview with Tyler Cowen here, I wanted to extract and expand on some aspects Tyler brought up.
Tyler has been very bullish on AI, and has reportedly avoided a debate with Gary Marcus. I haven’t really looked deeper into that, but let me say that I side with Cowen’s enthusiasm in this particular context: for curious, learning-oriented knowledge workers, AI in its “primitive” form (chatbots) is an immense tool.
How Cowen is using it – and his recommendations – make a lot of sense, and are highly practical. He is a writer and an economist. Whether he’s the best person to judge the intricate technical limitations of the technology is another matter.
I fed Claude the full transcript and asked for a “comprehensive breakdown” (you can find the output here).
In One Sentence
Learn to manage AI effectively while developing complementary human skills.
Asking Better Questions
Cowen emphasizes the importance of specific, practical questions rather than broad queries:
“They’re asking it questions that are too general. They’re not willing to put in enough of their own time generating context.”
In my own work, I have made the same observation: context is key.
Cowen doesn’t mention the way context is handled in Claude’s implementation of the projects feature. I find that feature to be very useful. For my upcoming in-person workshop here in Berlin, titled “Towards Chronic Health – Building Your Personal Well-Being System with AI”, I will share some practical tips on creating and maintaining context for an ongoing project.
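To make the “generating context” advice concrete, here is a minimal sketch of one way to maintain reusable project context and front-load it into each question. This is not Cowen’s method and not how Claude’s projects feature works internally; all names here (`ProjectContext`, `goal`, `notes`) are illustrative assumptions.

```python
# A hypothetical sketch: keep standing project context in one place,
# then prepend it to every specific question you ask a chatbot.
from dataclasses import dataclass, field


@dataclass
class ProjectContext:
    goal: str
    notes: list[str] = field(default_factory=list)

    def add_note(self, note: str) -> None:
        self.notes.append(note)

    def build_prompt(self, question: str) -> str:
        # Front-load the standing context, then ask the specific question.
        context_block = "\n".join(f"- {n}" for n in self.notes)
        return (
            f"Project goal: {self.goal}\n"
            f"Background notes:\n{context_block}\n\n"
            f"Question: {question}"
        )


ctx = ProjectContext(goal="Draft a workshop on personal well-being systems")
ctx.add_note("Audience: knowledge workers new to AI chatbots")
ctx.add_note("Format: in-person, hands-on")
prompt = ctx.build_prompt("Suggest a structure for the opening session.")
print(prompt)
```

The design point is simply that context lives outside any single exchange: you invest time curating it once, and every question you ask afterwards inherits it.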
The Three Layers of Understanding AI
This is where we can see how Cowen is extrapolating from his own user experience, combined with what he’s reading. He emphasizes that in order to understand AI, there are three layers:
- Access to the best systems: Using premium models with the most advanced capabilities. Cowen makes an argument for paying $200 to get access to o1-Pro from OpenAI. I haven’t tried it, but I am not really tempted either.
- Understanding the rate of improvement: It is difficult for most people to develop any intuition for this aspect. But the constant reminder ought to be that “this is the dumbest model you’ll have to work with”.
- Envisioning decentralized AI networks: Understanding how AIs will develop their own “republic of science” with multiple agents interacting, correcting each other, and creating collective intelligence.
The last point is taken directly from what Cowen said. He is extrapolating to AI agents that communicate directly with each other. He also said during the interview that for now, the user has to do or facilitate this part with their interactions.
The Rising Value of Secrets and Networks
This point caught my attention. It’s an interesting observation.
Personally, I feel like I have neglected network activation for too long. I’m trying to catch up with that, though.
What is somewhat troubling is how much weight the word “public” carries in what follows. There’s a flip side to this: certain interest groups might want to prevent some of the more valuable information from staying (or becoming) public.
Cowen’s reasoning is this: in a world where public information is accessible to everyone through AI, we will see that
- Secrets become more valuable: “Humans know secrets. Maybe AIs can be fed secrets, but they don’t in general know secrets.”
- Social networks increase in importance: “Your network of humans is not just 20% more valuable. It could be 50x more valuable.”
- In-person interactions gain value: “Traveling and meeting people becomes way more important.”
Not included in the summary: Tyler also talked about a kind of game that humans play, in which secrets are traded.
On DeepSeek
Tyler apparently cherishes DeepSeek for its weirdness, insofar as it isn’t nearly as heavily pressured to behave well as the other frontier models.
Use DeepSeek occasionally to avoid getting trapped in bland thinking.