AI

The last time I wrote, it was about a book that proclaims we should stop building AI right now, because inevitably we will build something too powerful and it will end humanity. One refutation I've heard is that even if AI becomes superintelligent and malicious, it's more likely to cause harm that humanity then adapts to, based on a new understanding of the risk, than to wipe out all of humanity.

Regardless, I want to talk about how AI is impacting things now. There's so much AI news and change that it's hard to know what to say. Although AI can be very interesting, a lot of people are oversaturated with it by now. The current AI trends are Open Claw, an agent that you can run on your own device and have do things autonomously for you; Moltbook, the social media site built by Open Claw agents, for agents; and Claude Cowork, a spin-off of Claude Code, but for regular work.

On Twitter my entire timeline is filled with AI news, but a lot of it is memes. A couple of weeks ago seeing these memes was funny, but now it's kind of exhausting. There are people like Simon Willison who have spent the past year blogging about AI, and every day they have a new update. The Open Claw creator Peter Steinberger maintains his own blog, and at one point I'm pretty sure he mentioned spending thousands of dollars on AI subscriptions.

On the one hand, being able to build more software can be very exciting. After all, in the past 26+ years software has revolutionized things. There are now self-driving cars and machines that talk. In addition to that, even simpler technological solutions are impressive. I despise having to do a tooth mold for a retainer. The plaster you bite on triggers my gag reflex, and it's effectively torture, but now a wand can be waved around your mouth to make a 3D scan so you don't have to use those molds. At the eye doctor there are now alternatives to dilating your eyes: a camera can photograph your retina and provide the images that before would have required dilation and a day of messed-up vision.

Platforms such as Stripe, Etsy, and Shopify have enabled people to build businesses on their own. YouTube has enabled a creator economy that competes with traditional media. Substack has provided an alternative to print media and a way to give people a voice. Even Amazon provides a means for people to self-publish their own books, and Gumroad provides a platform where people can sell them. Technology has its downsides, but what it has empowered is amazing.

As a side note, I think there should be more government intervention to prevent the negative effects of technology. In NYC a good case of this has been congestion pricing. The streets are less crowded, there is less air pollution, and there is less street noise. This is something brought about by the government that would be hard to organize otherwise. The tech companies themselves are trying to optimize shareholder value, and it's valid that they strive for this. If we want to limit that, then it's the government's place to figure out how to keep competition in the markets or how to enforce consumer protections. I'm far from well versed in this, but one contentious point is that the European Union has far stronger consumer protections but much lower GDP growth. That's the source of a whole other discussion around governance, protections, and so on, so I'll leave it at that.


In March 2024, when I signed up for a poetry class, one of the first things I did was try to have AI critique my poems. Maybe I learned something from the class. Even if my poetry felt clunky, and I often felt like I was masquerading as a poet, I learned that poetry is about your emotions, and AI feels at odds with that. When AI was less capable I also asked it to read my blog posts and write something. The problem is it made up an experience, and what it wrote made no sense. I'm sure I could use AI to discuss my writing with it. It's also possible that AI could help channel your emotions in a certain way to write poems, but I'm not interested in experiencing those art forms in that way at this moment (very noncommittal).

What I did do from October 2024 pretty much until October 2025, and possibly onward, is rely heavily on AI to do my job. When I first started doing this it was more novel. I was learning to use a new tool. Could I be super detailed and have the plan come to life? Beyond the novelty, the AI was augmenting my abilities as well. One thing I've noticed when writing code is that it can take a long time to get started if you are working with code you haven't seen before. Oftentimes I am reading the code and trying to build a mental model of what I need to do. It takes inertia to get going. If AI has this problem at all, it exists on a much smaller time window, so it throws out code that you can start working with more quickly.

Another area where I gained speed is that I didn't have to deal with the minutiae of learning a new language. I didn't have to worry about the syntax, and any time there was a syntax error I could have the AI fix it. As AI improved over time, I didn't have to be as descriptive. The AI could start running the tests itself, see the output, and iterate on this process on its own. At the same time, I did start to feel my knowledge diminishing.

A common thought with AI-generated code is that you have to understand the code that is there. However, from my experience as a software engineer, I always understand the code I wrote myself better than the code others write. Even though part of the job is reviewing other people's code, it's rare that you understand code you reviewed better than code you wrote. A benefit of writing the code is that it sticks with you. When people want to know how the system works, you have a better understanding of the details. You are aware of where to find the code. You might know the particulars of what is happening.

These benefits aren't as prevalent when you use AI to write code. You can read the code, but at least in my experience I am not reading it as carefully as other code. For me this has culminated in reducing my AI reliance. Where possible I try to solve the debugging error on my own or with Google. If I am going to use AI, I'll ask targeted questions or ask for understanding. It's interesting because the industry's reception to AI is different. A friend who works at Meta told me that the internal policy is to use AI 100% of the time. When people mentioned that you would lose understanding if you didn't do things yourself, the answer was that you should ask the AI for help, or ask the AI to explain things.

I don't know enough about pedagogy, but I think there's a benefit to learning from your failures. It's good to overcome struggle. The company I work at is less bullish on AI. I don't necessarily think that's right, but it's interesting how it creates different incentives. Some people don't use AI at all. One of my colleagues was embarrassed to be among the heaviest users of an AI code editor. Another teammate told me that they have the AI help generate the plan with them, but then they do the coding themselves. They mentioned that they felt they were getting dumber from their AI usage.

What's hard is that the trends are moving pretty fast, and we don't have a good basis of comparison to determine what appropriate ways of using AI are. Anthropic is studying this; for instance, their latest study measured how AI assistance impacts coding skills. The temptation with AI agents that produce code from English (or other natural language) descriptions is to wonder if this is the next level of abstraction.

A programming problem can be represented in binary (0s and 1s). An abstraction above this is assembly language, a low-level programming language describing specific operations on the machine. Higher-level languages provide an abstraction above assembly and compile down to it. A natural progression could be to say that Large Language Models (LLMs) convert natural language into a programming language. In fact, a few months ago someone made the facetious argument that the LLM might as well generate assembly language instead of another programming language. I believe one of the points they were making is that people still want to see the program and be able to reason about it. If we were generating assembly language, it would be harder to do that.
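To make the layering concrete, here's a minimal sketch using Python's built-in dis module, which shows the lower-level bytecode that a small high-level function compiles down to (the exact output varies by Python version):

```python
import dis

def add(a, b):
    # A high-level description of the computation.
    return a + b

# Python compiles this function to bytecode, a lower-level representation
# closer to what the interpreter's virtual machine actually executes.
dis.dis(add)

# Output looks roughly like (varies by Python version):
#   LOAD_FAST     a
#   LOAD_FAST     b
#   BINARY_OP     + (add)
#   RETURN_VALUE
```

The bytecode is still readable if you squint, but it's clearly harder to reason about than `return a + b`, which is the same complaint you'd have about an LLM emitting assembly directly.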

Another difference between Large Language Models and programming languages is that programming languages are unambiguous and deterministic. Large Language Models are based on a statistical process, and you aren't necessarily guaranteed the same output for the same input (from my limited LLM understanding). From a programming perspective this isn't desirable, because it makes predicting what's going to happen more difficult. Steve Yegge actually has a good piece on this about software survival. The point is that for working software tools, such as the search tool grep, it's more beneficial for the large language model to use the tool instead of doing its own thing. In fact, a big part of what powers agents is a set of tools the agent knows how to use to aid it in its task.
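As a rough illustration of what "use the tool instead" looks like, here's a hypothetical sketch of a grep-backed search tool that an agent loop could call. The function name and wiring are my own invention, not any particular framework's API:

```python
import subprocess

def grep_tool(pattern: str, path: str) -> str:
    """A hypothetical search tool an agent could be handed.

    Instead of asking the model to scan files itself, the agent shells out
    to grep, which is deterministic and battle-tested.
    """
    result = subprocess.run(
        ["grep", "-rn", pattern, path],
        capture_output=True,
        text=True,
    )
    # grep exits with status 1 when nothing matches; treat that as empty.
    return result.stdout or "(no matches)"

# An agent loop (not shown) would call this when the model asks for a
# search and feed the output back into the conversation.
print(grep_tool("TODO", "./src"))
```

The model decides when to search and what to search for; the actual searching is left to a tool whose behavior is predictable.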

Even if the analogy of LLMs as the new compiler breaks down, they are still being used to develop code. Although long-running experiments have been made fun of and haven't been huge successes, it is likely that agents will improve at doing long-running tasks.

Personally, I find there is value and merit in having a deeper understanding of the code myself. In the interview process, companies currently agree. Even though the way software is written might be changing, the way we evaluate engineers in interviews is still based on writing code without AI. This is at least one argument for limiting reliance on AI.

One valid criticism of how we are interacting with AI is that the interface is limited. Even though the tool is very powerful, we are still interacting with it in the terminal. To be fair, developers still use the terminal heavily, and it is a very powerful tool. However, if it's important for a developer to keep understanding the code they are overseeing, then perhaps having a terminal edit a bunch of lines and show a summary of what changed isn't the best interface.

What's not clear is the role that people will play in the process. From a practical point of view, software is a means to create something. If you can verify that the creation is doing what's expected without looking at the code, do you need to see the code? The current reason for seeing the code is that humans have to maintain it, or at least steer the AI that maintains it. Humans have experience with best practices, but that experience is passed down from making mistakes and from other developers. Do we lose those lessons if we are further removed from the code? Does it matter?

The current general consensus is that you should review the code that is generated. Someone has to be responsible for the code, and as the author and the employee, that's you. I've already mentioned that in my experience you learn and absorb more of what you are doing if you do it yourself entirely. However, if the goal of the business is to maximize shareholder value, AI can accelerate things. AI used in certain ways will reduce your skills. Even if you are trying to responsibly review what the AI has done, perhaps at some point you'll have lost sight of the overarching architecture.

I think the point is something like this: AI is a powerful tool, but it's not clear how best to use it. In a company setting it's good to have the ability to accelerate things. However, accelerating can have long-term consequences both for the company and for the individual. Considering that human capital is currently a company's most valuable asset, it can be useful for companies to invest in it. If AI is a replacement for human capital, then I understand the desire for a company to put more force behind AI.

Anyway, AI has been on my mind a lot. I spent a whole Saturday or something a few weeks back trying to write about AI. It's a weird tool that is changing the software industry in an interesting way. When AI was writing words and not programs, I didn't expect it to start writing programs. Surprisingly (but not in hindsight), it has gotten better at writing programs than it has at writing words. That's the complicated piece of AI: it's hard to predict how it'll impact things. I feel like I'll have more thoughts about this, but for now I need to post this in the hopes of unclogging the blog arteries.
