There’s a tremendous amount of hype and hate around AI these days. Most techies seem to be on the hype side and most normies are on the hate side. Everybody has “good enough” reasons for their takes on this. Nobody is obviously wrong. I don’t think the technology is far enough along that we can say it is clearly a net negative, like cryptocurrency has turned out to be, but the concerns that it could go that way are reasonable and we should not leave the risk abatement to the tech industry that has shown so little concern for its effects on our society.
Since I’m expected to be on the hype side of this equation, I felt I should frame this as The Good, the Bad and the Ugly without too much emphasis on The Good. There’s plenty of content out there talking about how great AI is for coding and various art forms. That is not my goal here. Looking at where I’ve gone wrong and where others have run into difficulties seems much more valuable. If I’ve run into these problems in my first 8 months of earnestly using AI, then I’m sure many other folks have bounced off of these same issues.
The Good
Overall I’m pretty happy with my AI experiences. I’ve been able to complete a few projects that I’d been working on for years. I’ve gotten a few new projects to a useful state. One way that you can see how AI has changed my level of productivity is my graph of github contributions:
Since September I’ve gone from 3-4 contributions per day to 12-15 contributions per day. Thanks to prodding from Claude and Copilot code reviews I’m happy to say that the quality level of my contributions has gone up as well. Many things that I would have been comfortable punting until later are now done on the first touch. That was a pleasant surprise six months ago and I’d say it is still true now.
Here are the projects that have reached the point where I hope other folks will try them out and see how they work for them.
- gh-observer is an extension for the github CLI that makes it easier to see what is happening with your GitHub Actions after you do a `git push`.
- www-chicks-net is the source repo for this website. I had converted things over to Hugo well before adopting AI, but I’ve been able to post more consistently every month with AI helping me with first drafts and adding features. The emoji page is something I use many times per day to get emojis onto my clipboard, after getting annoyed with the ads and page crashes of one of the sites that had been doing this for years. I would not have attempted to get all of the JavaScript working this well without AI assistance or outsourcing.
- fini-net/template-repo is a repo template that checks a bunch of boxes for you. Beyond saving you hours of time, it lets you skip worrying about compliance, and it includes my `just`-based developer workflow that allows the entire PR process to be done from the command line. (And you would probably have never gotten back to that compliance stuff, right?)
- fini-coredns-example shows that you can use `coredns` with files generated by `dnscontrol` in the BIND format. It also demonstrates how you can containerize your DNS and distribute the server and data as a single artifact.
- data-curated is my personal playground for data analysis. One of my experiments led to a series of videos on YouTube showing top contributors to some open source projects. These have led to more views, comments, subscribers, and total viewing time (TVT) than anything else I’ve done. Even better, I was able to do it completely on the command line, without any of the video recording or editing that usually eats up more of my time than I get back in TVT.
- homebrew-chicks lets you install some of my projects via `brew`.
- homebrew-freelawproject is my attempt to make the `x-ray` tool from the FreeLawProject easier for folks to access. I’d like to extend this to some of their other projects when I get some more time.
- google-plus-posts-dumper is pretty niche, but this Rust project will convert your Google+ posts into Markdown that will work on your Hugo-compatible blog.
- fini-infra is Infrastructure as Code using `opentofu` for my consulting company services that are running in DigitalOcean. I’m really happy with how I was able to make this work for static serving of websites with github repos as the origin, but it is also an example that will show up in a later section.
- datadog-service-analyzer will help you find services in Datadog that are not in your service catalog.
- There’s a ton of AI art sprinkled through these projects thanks to Galaxy.AI (affiliate link). I’ve mostly been using the Nano Banana models for image generation, but I’ve also had fun generating videos that nobody watches and making fake headshots. I’ll try to post the collection of headshots some day.
Wow, that’s a lot. Let’s see if I can balance this out with the negative categories.
The Bad
AI can seem wise, but we know that it is just generating text based on patterns
that it witnessed in its training data. AI can seem sentient, even though it
is just an echo of the sentient beings that used their brains to write something
in the good old days. AI can generate huge amounts of code, but if you’re not
asking for the right things in the right way at the right time, you end up with
a pile of embarrassing and useless slop. We’ve seen that those problems are not
enough to stop
people who should have known better
from generating heaps of garbage code and being absurdly proud of themselves.
[sigh].
But what about me? I’ve been in the computer business for almost forty years. Four decades of seeing what works and what doesn’t should make me a bit wiser. I’ve got war stories from SunOS to AWS. I can smell sand traps in other people’s projects from 100 yards away. Hopefully you can smell the Overconfidence Soup I’m brewing over here. Even my younger colleagues were telling me that they were super-interested in seeing what I’d do with this AI stuff; they had been braver about diving into this new world than I could bring myself to be. Honestly, if it weren’t for seeing their success with it, I would have stayed on the Luddite side for longer. What I’m doing works for what I need to do. Why mess it up with this emerging AI stuff? I’m still grateful for their example and encouragement, but I had to find some of the traps through experiential learning.
Mythical Person Month
The Mythical Person Month is my unauthorized rewrite of the very popular and widely cited book The Mythical Man-Month. I really enjoyed reading this decades ago. It had a major influence on my thought process for project management and I recommended it to others for many years.
In the last few years I’ve heard the feedback that The Mythical Man-Month is not great to recommend to my female colleagues because the content is rather sexist in the stories that it uses to illustrate various points. The language of the original chapters certainly feels like it emerged from a different era.
So I thought it would be a perfect AI project for a feminist to tackle. I was smart enough to split things up into phases so I didn’t overfill the context window. I was tech-bro enough to find PDFs of multiple editions of the original book and extract the text from them. Getting a clean text out of the PDFs was a bit challenging, but eventually the AI and I got it out.
I was clear on what the mission was and I started chunking through groups of chapters. Things were flowing. I made it all the way through the book with a complete rewrite. I moved on to the wrapping up phase by working on turning the new text back into a decent looking PDF book. I extracted images from the originals so I could include them in the right places.
It felt so close to being done. And then I realized the AI had hallucinated things in multiple chapters. It was so eager to help me get this project done that it made it look like Fred Brooks had been a more evolved writer than he actually was.
Now it feels so hopeless. Even months later I’m still not ready to move past the betrayal and get this project done. The feelings linger not only because I was one of the tech bros disrespecting copyrights, but because I’m not sure how to build guardrails into a project like this so that you can be comfortable with the AI doing any of the work.
Code has built-in guardrails from needing to compile, pass unit tests, and whatever other verify steps you engage in. Those implicit guardrails effectively limit how much the AI can get away with hallucinating. I cannot count how many times the AI has thought of something and it didn’t make it past the first try. Writing projects don’t have those implicit guardrails and I’m not sure how they could even be built.
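To make the idea concrete, here is one crude guardrail I can imagine for a writing project like this (hypothetical; this is not something I actually built): mechanical checks that flag any rewritten chapter whose shape has drifted too far from the original, so a human reviews exactly the chapters where the AI may have improvised. It would not catch every hallucination, but it would have caught a chapter that was quietly padded or invented.

```python
# Hypothetical sketch: structural sanity checks for an AI-rewritten chapter.
# A faithful rewrite should keep roughly the same shape as the source text;
# big drifts in paragraph count or overall length are worth a human look.
def rewrite_looks_plausible(original: str, rewrite: str) -> list[str]:
    """Return a list of warnings; an empty list means the checks passed."""
    warnings = []

    orig_paras = [p for p in original.split("\n\n") if p.strip()]
    new_paras = [p for p in rewrite.split("\n\n") if p.strip()]

    # Allow a little restructuring, but not wholesale addition or removal.
    if abs(len(new_paras) - len(orig_paras)) > max(2, len(orig_paras) // 5):
        warnings.append("paragraph count drifted from the original")

    # A rewrite that doubles or halves the chapter is probably inventing
    # or dropping content, not just modernizing the language.
    ratio = len(rewrite) / max(1, len(original))
    if not 0.7 <= ratio <= 1.3:
        warnings.append(f"length ratio {ratio:.2f} outside 0.7-1.3")

    return warnings
```

The thresholds here are guesses; the point is that even dumb invariants turn "trust the whole book" into "review the three chapters that got flagged".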
I do have a few ideas for how to save and complete this project. I have the two editions of the physical book sitting right here on my bookshelf. But this going off of the rails was my first huge disappointment with AI.
Static Web Serving with DigitalOcean
Writing “Infrastructure as Code” is a very comfortable space for me. After doing shell, puppet, chef, ansible, terraform, and opentofu I feel pretty confident in being able to do almost anything I set out to do. I’ve also spent most of the last 15 years operating in AWS at all sorts of different scales. So I feel pretty good about my cloud skills even though it has predominantly been with AWS. A lot of folks build their clouds to be compatible with AWS, so many tools and concepts are truly portable across the major cloud providers. So there’s another dose of Overconfidence Soup for you.
I dabble in DigitalOcean for my consulting company (FINI). We have been running our DNS servers in a geographically diverse way for many years in DigitalOcean. I’ve been glad to see them develop their portfolio of cloud services in a careful way for many years and I wanted to try them out for something, but I wasn’t sure what that would be. Eventually I decided that “static web serving” would be the thing. I have files. You serve the files. Hopefully there’s a Content Delivery Network (CDN) layer that I can add on the front of this, and life is good.
The fini-infra repo is
where I started building the opentofu modules to accomplish this task.
I was sure that what I wanted was a bucket (like S3) holding the static
content, with a CDN stuck in front of it.
For PRs from #27
up to #52
over ~6 weeks I was trekking towards that goal. It felt like I was
going to keep getting closer and closer. Milestones were getting
accomplished.
But ultimately it didn’t really work. That flavor of DigitalOcean
CDN didn’t handle URL rewriting so you couldn’t get index.html types of
things to work. There could have been some other problems I’m
forgetting. It felt so close, but it was so far. Then at some point
the AI said something like “you could solve this by using Digital
Ocean’s App Platform instead”. And I’m like: wait, what? Why?
It turns out that I had been barking up the wrong tree the whole time. The DigitalOcean App Platform does actually solve all of my problems in this case. In addition to having none of the CDN problems I had with the bucket+CDN method, I also got to eliminate the bucket and connect the CDN directly to my github repo. I never needed a bucket for this. I only thought I needed the bucket because of my AWS experience. The App Platform also eliminated the issue of syncing content into the bucket – it picks up any changes automatically.
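For the curious, the shape of the App Platform approach in `opentofu` looks roughly like this. Treat it as a sketch based on my reading of the DigitalOcean provider docs, not a drop-in module; the names and values here are illustrative assumptions, and the provider schema may differ in detail. The key point is that the site builds straight from the GitHub repo, so there is no bucket and no sync step.

```hcl
# Sketch: a static site on DigitalOcean App Platform, built from a GitHub
# repo on every push. Attribute names per the digitalocean provider docs;
# verify against the current provider version before relying on this.
resource "digitalocean_app" "static_site" {
  spec {
    name   = "www-example" # hypothetical app name
    region = "nyc"

    static_site {
      name          = "site"
      source_dir    = "/"
      build_command = "hugo"   # e.g. for a Hugo site
      output_dir    = "public" # Hugo's default build output

      github {
        repo           = "example-org/www-example" # hypothetical repo
        branch         = "main"
        deploy_on_push = true # no bucket sync; changes deploy automatically
      }
    }
  }
}
```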
My lesson here was that AI can take you very far in the wrong direction, and that you need to be prepared to abandon a lot of time and work and just chalk it up to an educational experience.
Streaming logs from GitHub Actions
As the commit logs will testify, both of those diversions were in 2025. I had only been actively using AI for a few months. Now in 2026 I’ve gotten a variety of projects done. I’ve learned a lot. I’ve finished some projects that had been languishing for years or longer. Life is good. What could possibly go wrong now? The Overconfidence Soup is tastiest after a long string of successes.
So I decided to remove an irritation by writing some code. I was tired of the github CLI’s mediocre effort at showing you what’s going on with your github actions. I had shell/just scripts that would handle the delay where the command bombs because the github actions aren’t running yet. But that just led to the uninformative summary of what your actions were doing. So gh-observer was born.
My experience with this project has been really good overall. It solved the problem I had. It has been used by some of my colleagues and at least one random person who found it on github. The repo got a few stars. Happy me. Happy customers. Life is good. But that wouldn’t be a very interesting story, so let’s highlight where it went off of the rails.
My random user from github asked very nicely for a feature I had also been wanting. Let’s stream the logs while a job is running so you can see what it is doing while you wait. I can watch the logs in the web UX, so I should be able to get the same info in realtime in my terminal, just like everything else I’ve been able to do for my development life cycle.
It sounds so simple. Write up another github issue with the problem description. Ask the LLM to write the code. Create a PR, work through the code reviews. Another feature is done. Hahaha, no. The feature is not done. Make more changes, push, test. Still no. Try again. Still no. Abandon the first PR and try again. Not only does it still not work, it is failing in the same way. Abandon another PR. Try a different model. Same problem. Maybe opencode/glm-5 is just not as great as Claude, so pull out the Claude Code and try a new PR. Same shit, different day! Oh no, what is going on here?
I made at least five different attempts at accomplishing this. I began to question my own abilities. Luckily blaming myself was just keeping me from seeing the real problem. There was no way for any LLM to succeed at this, because I was asking them to do the impossible.
But you are probably wondering: how could this simple thing be impossible? You’re not alone, but we are a very small cohort of concerned netizens. GitHub does not provide this info in their API. Ironically, gitLAB does provide this out-of-the-box in their CLI tool. (Competition does not matter once you are the dominant player, it seems.)
In this case the bulk of the blame falls on GitHub and their willfully incomplete API. I do not expect AIs to accomplish something that is effectively impossible because of the available APIs. Being able to try things that seem impossible to us is one of the good things about AI. The thing to keep an eye out for is that the AI is unlikely to warn you that you are trying to do the impossible, even after you’ve repeatedly tried to do it.
I’m not sure how much longer it would have taken me to figure out that
I was trying to do the impossible if I wasn’t watching the internal
thinking process of the LLM that you can see in much more detail with
opencode than recent versions of Claude Code.
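To sketch why this was a dead end: GitHub’s REST API will tell you about a running job’s steps, but (as I found the hard way) the logs endpoint only serves logs once a job has completed, so the best any tool can do is poll statuses. The endpoint path and field names below follow GitHub’s REST API docs as I understand them; the watch loop itself is just illustrative, not `gh-observer`’s actual code.

```python
# Sketch: the closest the GitHub REST API gets to "streaming" a running job
# is polling per-step status. Live log content for in-progress jobs is not
# available; /actions/jobs/{job_id}/logs only works after completion.
import json
import time
import urllib.request

API = "https://api.github.com"


def fetch_jobs(owner: str, repo: str, run_id: int, token: str) -> list[dict]:
    """Fetch the jobs for a workflow run via the REST API."""
    req = urllib.request.Request(
        f"{API}/repos/{owner}/{repo}/actions/runs/{run_id}/jobs",
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/vnd.github+json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["jobs"]


def describe(job: dict) -> str:
    """Summarize a job as far as the API lets us see: per-step status only."""
    steps = ", ".join(
        f"{s['name']}={s.get('conclusion') or s['status']}"
        for s in job.get("steps", [])
    )
    return f"{job['name']} [{job['status']}]: {steps}"


def watch(owner: str, repo: str, run_id: int, token: str, interval: int = 10):
    """Poll until every job in the run has completed."""
    while True:
        jobs = fetch_jobs(owner, repo, run_id, token)
        for job in jobs:
            print(describe(job))
        if all(j["status"] == "completed" for j in jobs):
            break
        time.sleep(interval)
```

No amount of prompting can make a loop like this produce log lines the API never exposes, which is exactly why every model failed the same way.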
The Ugly
I’m sure I’m not the first or the last person to get led far down the wrong path by AI. I’m also not the only person to warn about the risks of AI. The CEO of Anthropic is certainly the most visible person in the warning camp, but it is also a good idea to question the motives of someone who has already gotten obscenely rich on this technology. For the long tail of economically diverse folks without clear conflicts of interest, we should review their risk assessments and consider how valid and relevant they are to our own situations. I will try to cover a few of these topics.
Data centers suck
I’m sure that is the right title for this section, but it does not convey my personal feelings about data centers. I love data centers. I’ve tried to get people tours of data centers to show off how amazing they are. I spent 40 nights in one year at the Aloft in Ashburn, VA while doing data center work. The wifi in data centers is better than you can imagine. Live stream 3 movies while downloading 30 more, with no lag. If you’re a gamer, then you’ll be the Low Ping Bastard for sure. I’d love to get to work in a data center again. (wink, wink, nudge, nudge)
But just as AI has become a demon in the popular zeitgeist, data centers have become a safe target for hate. If that’s you, keep on being you, but I’m happy to say it is much more of a mixed story than is often portrayed in blogs or the media. Maybe the downsides outweigh the upsides, but don’t pretend there are no upsides.
Power is naturally a concern in building any data center. The owners and the community should keep in mind that it will consume enormous amounts of power on a continual basis. That power consumption will keep growing, even if the square footage remains constant. The data center should not get a huge discount for their use case. It should be charged at rates that are similar to commercial or industrial users. As with other industrial and commercial power consumers, if grid capacity is lacking they should help fund the improvements to the grid. These grid improvements help the reliability and available capacity for their region.
Connectivity is the bandwidth we all use to get to The Internet. Any data center is going to improve the connectivity of the area. Without connectivity, a data center is like an island far off in the middle of the ocean. That means bringing in fiber optic lines to connect to the backbone of the internet. Doing this once is not enough. Each vendor will want a redundant path into the data center so that there are no single points of failure. Those paths will wind through the surrounding community, connecting to existing Points of Presence. That means that everybody’s connectivity goes up, bandwidth goes up, and latency goes down. You don’t have to pay more or upgrade anything to experience the benefits of this. Your slow speeds probably had nothing to do with the pipe into your house and everything to do with how well your provider is connected to the backbone. Once there’s more bandwidth and competition in your area, it becomes cheaper for your provider to take advantage of that and your service gets better, like “magic”.
Water is used to cool the air, and I’m sure it will continue to be consumed at a steady rate for the life of the data center. There are parts of the country with plenty of water that can build data centers with reckless abandon. There are parts of the country that don’t have enough water already, so adding data centers there is going to mean that people or farms need to move someplace else. I’m not a fan of this, but that looks like how it will go.
Noise is another valid concern with data centers. I’ve been outside some very large data centers and you couldn’t hear them when you were 100 yards away. That seems like it should be adequate for most places. I don’t want to see data centers right next to someone’s house. Zoning regulations generally take care of this in the US. The reports that folks can hear them a mile away are sad. I hope these are isolated occurrences and that we regulate data centers enough so that we minimize these problems.
So, I recognize that there are valid concerns around data centers. I think we can and should regulate the downsides effectively, so that most communities get a chance to appreciate the upsides of improved infrastructure, which are harder to recognize as part of these debates.
Kiss your job goodbye
Many folks have talked about all of the jobs that are going to disappear thanks to AI. Since the early days of the Industrial Revolution scholars have reassured us that the old pre-revolutionary jobs will be replaced by different post-revolutionary jobs. So everybody that wants a job will still have a job, they just might need to retrain. And if we believe the economists, this has held true for the last two hundred years.
With AI, we are told that no longer holds true. The AI will replace you and there will be nothing else to go do. Is AI really going to turn out to be able to do all of that? Maybe or maybe not. Either way, the Silicon Valley Brain Trust believes it and they’ve started laying off tens of thousands of experienced, capable engineers while we see how this works out.
Keep in mind that this is the same Silicon Valley Brain Trust that thought they overhired during COVID and hasn’t shown any signs of learning from that or prior experiences. There’s a lot of failing upward with these folks. So just because they’re in positions of amazing power, you don’t have to assume that they’re correct about the future or the past.
There are a variety of limitations, concerns, and glaring problems with the technology we have today. Some of those will get fixed. Others will become more problematic. Nobody knows what the proportions of those things will turn out to be, but lots of folks will tell you they know.
I don’t believe that laying off swaths of people is a good idea. If this is the only way you can free up money to pay for AI tools, then you should be working on the fundamentals of your business instead of chasing the latest fashion in technology.
Benchmarks are bullshit
I love the idea of benchmarks. I have tried to build benchmarks. Sadly in the computer business it is just another thing to game. IBM is very proud of all of the benchmarks they’re at the top of the leaderboard on. Nobody expects new interesting things to come out of IBM, but they have plenty of smart people that can find a new way to beat an existing score on a benchmark. Please don’t get seduced by these sideshow acts.
AI benchmarks are just as much of a mess as transaction processing and CPU metrics and every other form of technology benchmark there has ever been. Technology never stays the same long enough for the rules to mature the way NASCAR’s did, evolving from the notoriously cheat-filled early days to the recent history of extreme testing and relatively few cheaters.
Researchers at UC Berkeley found that they could hack their way to 100% scores on most of the major AI benchmarks. The AIs are known to be good at working around limitations and finding novel solutions. This was already a problem before the researchers highlighted how bad it was.
So if you’re not going to believe in benchmarks from vendors or neutral third parties, what can you do? Test for yourself. Make your own benchmark with your own use case and try all of the models until you find the one that scores the best for you.
(I have not done this yet, but hopefully soon I will get back to it. I’m planning on trying promptfoo.)
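Until then, a homegrown harness does not have to be fancy. Here is a minimal sketch of the idea: a handful of your own test cases, scored against each model’s answers. `ask_model` is a stand-in for whichever client library you actually use, and the cases here are obviously toy examples; substitute prompts from your real workload.

```python
# Sketch of a personal benchmark: score each model on YOUR use cases instead
# of trusting leaderboard numbers. ask_model is a placeholder for a real
# client; the scoring loop is the point.
from typing import Callable

# Replace these with prompts drawn from your actual workload.
CASES = [
    {"prompt": "Convert 'hello_world' to CamelCase.", "expect": "HelloWorld"},
    {"prompt": "What is 17 * 6?", "expect": "102"},
]


def score_model(ask_model: Callable[[str], str]) -> float:
    """Fraction of cases where the expected answer appears in the reply."""
    hits = sum(
        1 for case in CASES if case["expect"] in ask_model(case["prompt"])
    )
    return hits / len(CASES)


# Usage idea: run score_model once per provider and compare.
#   for name, client in {"model_a": ask_a, "model_b": ask_b}.items():
#       print(name, score_model(client))
```

Substring matching is a blunt grader, which is part of why tools like promptfoo exist, but even this much tells you more about your use case than a vendor leaderboard does.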
Economics are confusing
Does it make sense to pay for AI when it would be cheaper to have a human do the same task? Our AI prices are currently lower than the real cost to provide these services. This is being funded by private equity and venture capital funds that will expect to see a return on their investment someday. There could be breakthroughs that make it scale more economically than we can currently imagine, but until then we should expect prices to rise. The economics will shift, and if folks are not thinking well about the short-term economics, there’s even less chance of them considering how these factors are likely to change in the future.
There is also a good open question of whether the AI bubble will burst and how this will affect the numerous AI companies, data center vendors, and the broader economy that has been propped up by this AI bubble in recent years.
Conclusion
The potential of AI is amazing. What folks have already done with it is amazing. There has been a lot of slop, but I hold out the hope that this will inspire folks to take a keener interest in ensuring quality. I also hoped that the Internet would bring on a new Age of Letters. I’m sorry that didn’t work out for us.
There are a lot of rough edges to the technology as it exists. We will need millions of skilled practitioners for years to herd this technology into healthy directions. Ultimately this will help the hordes of unskilled practitioners to build things that don’t turn out as badly.
There are plenty of opportunities for regulators to help this turn out better. We can already identify harms from data centers and AI that can and should be mitigated by thoughtful regulation. The current crop of laws to ban data centers is an over-reaction and distracts from the work of building the sorts of regulations that protect the public and allow capitalism to operate.
Meta
- I wrote the first draft of this in `vim` without AI assistance. I will let Copilot and Claude review this and will probably accept a lot of their suggestions. You can check the PR for this post if you care about those gory details.
- I will cross-post this on LinkedIn on Monday.
- The banner was generated in Galaxy.AI (affiliate link) with the Nano Banana Pro model.
