Never Stop Breaking Up

Just wasn’t meant to be

About two months ago[1] I signed up for a frontier LLM / AI subscription. It was the lowest plan at Anthropic so I could use Claude Code. I have a small website business[2] that had a lot of stuff broken for a while. Although I had paid a few hundred dollars to a few different developers and even tried to hire several more, I wasn’t able to get anyone to help out or write a single line of code. It’s not that fixing the various code problems within a WordPress plugin is beyond me[3] but more that tracking down and fixing a bazillion little problems would have been extremely time consuming[4] and I just didn’t have the time.

Okay, enough justifications – I signed up for Anthropic at $20/month and honestly, it was fantastic. I have built out two or three big projects, easily a dozen medium projects, and I have no idea how many minor items. I could go from idea to description to implementation so much faster than I could have alone, it’s not even funny. I’m confident I will keep using several of the things I’ve built for a very long time. The $20/month plan has its limitations – you get a limited amount of amorphous compute during any 5-hour stretch as well as a weekly cap, with more compute available during “non-peak” hours. I know you get a ton more compute with the $200/month plan, and honestly it’s almost certainly worth it to a full-time developer, but I have so many misgivings about funding companies whose value proposition involves boiling oceans of drinking water, slurping up energy, enabling surveillance states, and allowing computers to make decisions in wartime.

Anyhow, I cancelled my subscription today just before it was about to renew for the second time. I’ve given Anthropic $40 of my money and gotten well more than that in value, so I’m fairly content with that transaction. But now that my bigger projects are done, I don’t have a need for continued use and can make do with either free options or rolling code by hand.

I was tempted. I’m still tempted. If I paid several hundred dollars to real humans and received nothing, I could absolutely find a way to spend $240/year to build more complicated things faster. Even without these justifications[5] I can absolutely afford $20/month.[6] But, much like an evil ring that grants you some modest powers, I’m pretty sure the hidden costs just aren’t worth it.

When I started using a paid LLM again,[7] I wondered how long I would keep paying for it. I probably got value out of ChatGPT for about two or three months, and after that I mostly kept it out of convenience, inertia, and the urge to make stupid pictures.[8] I stopped using it because I wasn’t getting steady value out of it and I didn’t like continuing to fund OpenAI. Would I keep the Claude subscription for months longer than I was really using it – out of the convenience of having a frontier LLM on tap?

It didn’t hurt that it felt like Claude was steadily getting less intelligent and helpful.[9] If I were a more paranoid or cynical person I would believe cell phone manufacturers make their phones slow down just as the new flagship phones are released, and frontier LLM companies dumb their models down when the newest, pricier models come out.

… but maybe slightly tempted?

As frugal as I am, I’m willing to pay for a frontier model because they’re incredibly helpful in realizing my ideas. However, I don’t want to support most of the frontier companies,[10] their evil alliances,[11] or their side quests to block other AI companies from developing, devour the earth’s energon cubes, and boil the oceans.

I mean, why can’t I just do this on a small scale at home? Part of the problem is that even trying to get my hands on a very small PC is becoming unnecessarily expensive. At the time I’m writing this, the Raspberry Pi 5 16GB[12] is going for $305, closing in on triple the initial MSRP of $120. Adding a case, some cables, the AI HAT+ 2, a heat sink / cooler, and a beefier power supply would probably bring the cost to $600. I could buy a whole extra brand new desktop PC for that price. Or just use my current desktop to run an LLM in the background.

Which is what I’m doing literally right now.

I’m running LM Studio on my modest PC[13] to serve up small LLMs to VS Code and Cline, to work through some small Python codebases for a few projects. After quite a lot of trial and error, I’ve basically settled on Qwen 3.5 9B with a 4-bit quantization as the best model I can run on my machine that can actually help. It is punishingly slow… but it does work. Something that might have taken a frontier model 5-10 seconds takes my machine probably an hour. Some light web research suggests that a frontier model is probably operating around 50-100 tokens per second while my machine can manage a blazing 1-2 tokens per second.
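For anyone curious how this setup hangs together: LM Studio can expose whatever model it has loaded through a local OpenAI-compatible HTTP server (by default on port 1234), which is how tools like VS Code + Cline talk to it. Here’s a minimal sketch of hitting that endpoint from plain Python; the model name below is a placeholder, since the real identifier is whatever LM Studio reports for the loaded model.

```python
import json
import urllib.request

# LM Studio's default local OpenAI-compatible endpoint.
LMSTUDIO_URL = "http://localhost:1234/v1/chat/completions"

def build_chat_request(prompt, model="qwen-9b-4bit", max_tokens=512):
    """Build an OpenAI-style chat completion payload for a local model."""
    return {
        "model": model,  # placeholder; use the identifier LM Studio shows
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
        "temperature": 0.2,  # keep it boring for code-oriented tasks
    }

def ask_local_llm(prompt):
    """POST the prompt to the local server and return the reply text."""
    data = json.dumps(build_chat_request(prompt)).encode()
    req = urllib.request.Request(
        LMSTUDIO_URL, data=data, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

At 1-2 tokens per second, the call itself is the easy part; the waiting is where the hour goes.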

The man has a point…

Since I’m rambling here anyhow… I’m going to backtrack slightly, just so I can give a little context. Sometimes I’ll find myself stuck in a cognitive loop of frustration and rabbit holes and decision paralysis. Writing these things down lets me excise, er, exorcise[14] these thought-demons at the cost of inflicting them upon my legions of loyal readers. I find jotting things down in a semi-organized fashion means I don’t have to keep all the little pieces of ideas swirling around in my brain. I can finally relax, knowing they’ve been realized… somewhere. This is why I’ll jot down some sketches, create some scraps of code, or tuck a note away in Standard Notes.[15][16] Well, working with frontier models makes me hate their rate limits and everything they stand for, which makes me want to build my own. Where was I?

Right. I’ve been swirling around the vortex of working with frontier LLMs: getting sick of paying for and/or supporting them, trying some free API resources, bumping into their free-tier limits, falling down a rabbit hole investigating what it would cost to build a machine of my own, getting disgusted at the cost and figuring I’ll just run them on my current machine, getting slightly frustrated at the time it takes to do anything meaningful, and wondering about maybe throwing a few dollars at a frontier LLM… just to get this project finished. But I don’t need a frontier LLM right now and I don’t need to get things done fast… especially when I should be doing the work I perform in exchange for the money I use to pay my mortgage.

¿Por que no los dos?

In some ways, having a very slow LLM at my disposal is actually helpful. Yes, it does mean I have to listen to my little PC’s fan hum to itself for an hour to accomplish something kinda basic. But, then again… it’s busy working on something, freeing me up to do other things.

Like write blog posts.

He’s got a point…

Plus, there are some possibly realistic uses for this kind of super low cost basic research / experimentation. I’ve been using this cobbled-together system of various LLMs, frontier and local, plus my modest Python skills, to try to create a semi-useful virtual assistant. I’ve connected it to a few very small LLMs so it can act as a human-ish interface for useful scripts,[17] connected it over the Matrix protocol so I can talk to it securely from a phone even when I’m not home, and now that I know which kinds of models work for some simple Python code generation, I could have a useful slow coding helper wherever I need it. Frankly, the main use of the coding assistant for me right now is building deterministic scripts that help me on a daily basis. There are other directions I could imagine taking this project from here. By adding a Meshtastic node to my home setup and carrying a small Meshtastic device with me, I could stay in touch with my very slow and low bandwidth PC wherever I was. With a solar panel or power supply, I could even run all this entirely off grid. Going completely off grid isn’t something I’m super into, since I like having easy access to broadband and grocery stores, but it sure would be neat and a good excuse to buy a few small Meshtastic devices.
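The “human-ish interface for useful scripts” part doesn’t even have to start with an LLM. A cheap first pass is keyword routing: known commands go straight to deterministic scripts, and only unmatched messages fall through to the slow local model. Here’s a sketch of that idea, with hypothetical handler names standing in for the real scripts:

```python
import re

# Hypothetical handlers -- stand-ins for real scripts (reminders, downloads, etc).
def set_reminder(args):
    return f"Reminder set: {args}"

def fetch_file(args):
    return f"Downloading: {args}"

# Map trigger patterns to handlers; anything unmatched falls through to the LLM.
ROUTES = [
    (re.compile(r"^remind me (?:to )?(.+)", re.I), set_reminder),
    (re.compile(r"^download (.+)", re.I), fetch_file),
]

def route(message):
    """Return a handler's reply, or None to signal 'hand this to the LLM'."""
    for pattern, handler in ROUTES:
        m = pattern.match(message.strip())
        if m:
            return handler(m.group(1))
    return None
```

The nice property is that the deterministic paths respond instantly, so the 1-2 tokens-per-second model only gets involved when there’s genuinely nothing better to do.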

Of course, once I start spinning around the idea of a Meshtastic node, I’ll want to bundle it with a Raspberry Pi 5…

Software Development with LLMs
  1. Series Plugin Test for Illustrative Purposes Only
  2. ChatGPT WordPress Plugins
  3. Coding with an LLM Sidekick
  4. Python Practice with an LLM
  5. Not Team AI
  6. Never Stop Breaking Up
  7. Weakness
  1. You know, before our latest war and the revelations that AI companies were helping power the country’s military. []
  2. Very boring []
  3. I’m kinda decent at plugin dev for someone with zero training []
  4. Cue meme of Don Draper yelling “That’s what the money is for!” []
  5. Forgive the humble brag []
  6. Just look at all these streaming services I pay for. []
  7. I paid for ChatGPT in 2023 and 2024 []
  8. I made several “make it more” style pictures… []
  9. I was going to find a link to support this … sense – but there were honestly too many links to too many articles I didn’t want to vet.  Suffice it to say the “vibe” I got is that as of April 2026, I’m not the only one who feels like Claude got stupider.  My impression of the consensus is that Claude got too many users, resource usage went up, and quality went down. []
  10. OpenAI, Anthropic, Grok/Twitter/Elon, Google/Evil, or even Microsoft []
  11. billionaires, oligarchs, fascists, surveillance states, Bezos, Musk, or certain president-grifters []
  12. If you can find one! []
  13. Bought long before RAM-pocalypse []
  14. Sheesh. []
  15. I used to use plain text files, then Google Keep, but you know what – this service is great and it’s not Google or evil []
  16. As far as I know []
  17. Downloading files automatically, setting reminders, etc []

Not Team AI

Look, I hate AI slop as much as the next person.  My kiddo has been taking a college class where they’ve been delving into the ideas swirling around AI/LLMs and from what I gather, the class is nearly incomprehensible.  Just like my toaster, oven, toaster oven, fridge, and dryer don’t need wifi – neither does every damn thing need a thick coating of AI slop all over it.

Another Marvel reference?

I’ve been thinking about AI as a variation on the “super soldier serum” administered to Steve Rogers.  Given to a good man, he can be better.  Given to the Red Skull, well, he gets worse.  Instead of only making things better, it seems to simply magnify the attributes of a thing.

I guess I’m struggling with the idea of whether it’s hypocritical of me to use AI for things when so often it just makes things worse.[1]  And, I admit it is fairly self-serving to liken my uses to that of Steve Rogers and assign derogatory attributes to other uses.

Maybe it’s that I’m using AI/LLMs to add micro improvements to my own life, rather than pushing it on others?  After trying to work with free AIs on some projects, I decided to pay $20 for a month of premium Claude Pro access.  While using the free ones, I discovered:

  • Claude’s free chat would lock a conversation after a certain context length if you uploaded any documents
  • Gemini would time-gate a conversation by not letting you use it after a certain amount of use in a given period
  • ChatGPT would time-gate a conversation if you uploaded anything, but would merely drop to a lower power model if you didn’t upload content and instead just worked through the chat interface

Overall, ChatGPT was more useful as long as I didn’t upload anything, and I could “make do” with the lower tier models.  I’d paid for the premium tier of ChatGPT for a few months about two years ago and quickly became disillusioned with it.  I found that it would start to chase its own tail, forgetting the thread of a conversation and project, randomly refactoring stable code, and hallucinating functions, variables, and the names of functions and variables.  It was more work to keep it on the rails than it was to just work on my project.  I ended up largely shelving several projects as a result.  I’d tried unsuccessfully to hire someone, I didn’t have the time to work on them by myself, and I sure as hell didn’t have the bandwidth to babysit[2] an LLM.

However, working with various LLMs recently gave me a glimmer of hope.  Perhaps they could be useful after all?  Poring over documentation, searching for answers, and consulting Reddit and StackOverflow were options, but they all had their own special problems.  In any case, these days all of these options (except documentation)[3] were getting more difficult to use as people started abandoning public forums in favor of just asking an AI.

One of my favorite XKCD comics :)

So, what have I been working on?  Well, I signed up for Claude Pro on 02/09/2026 and in the just over three weeks since then:

  1. WordPress Plugin.
    1. An overhaul of a website’s registration system.  I had been using a now-defunct WordPress plugin on a different website which was basically crumbling to pieces as WordPress and the world moved on.  My needs were simple – so a few days of tinkering with Claude Pro got me something that … just worked for my purposes.  It eliminated all spam robot signups in a way that nothing I’d tried before had been able to manage.  There were a lot of moving pieces to this plugin, and there were certainly some growing pains, but it worked very well, very quickly.  I have built plugins for WordPress before and could well do so again even without an AI, but the speed of the model at building all the trivial or tedious stuff is by definition super-human.  Since the site’s ability to turn visitors into users into (hopefully) a few dollars is dependent upon the ease of registering, this one single change easily justified the $20 cost of using Pro.  That $20 accelerated this from a project I’d been putting off for literal years because I knew how long it would take me alone, to … solved in a few days.
  2. Python Assistant Script.
    1. As a friend was quick to remind me, I’m very late to the voice-activated computer assistant / smart home party.  I’d been working on a version of this with three free frontier LLM models, but it was too much, spread across too many platforms to be really cohesive or to survive shuttling parts among these resources undamaged.  Progress on this project has been slower than building a single WordPress plugin, but it has definitely been boosted.  I regularly have to join online meetings where the information to join is sprinkled like breadcrumbs across multiple disparate pages on a given website, sometimes requiring a pseudo-registration process to reach.  Doing all these things manually is a real headache when I haven’t had my morning coffee.  And, let’s be honest, it’s way more fun to throw hours at figuring out how to solve a problem than it is to actually face one’s problems.  I would estimate that this feature will save me about 15 minutes once a week.  Using the above XKCD logic, I’m time/energy/effort-positive if I could build this feature in less than 5 days.  I probably got it working in a few hours.  At the same time, I’ve been “bolting on” new features – a scheduler, time queries, weather queries, media control over my computer, with more features on the way.[4]
  3. A YouTube Management Chrome Plugin.
    1. I have this unfortunate habit of keeping too many tabs open.  While this is bad enough on its own, keeping a lot of YouTube tabs open will eat a huge amount of system memory very quickly.  I didn’t have the time at the moment to watch the videos, didn’t want to lose them, and didn’t want to go through the hassle of adding them to playlists.  Instead, apparently I had enough time to build a Chrome plugin that would go through all of my tabs, bookmark each one to a special bookmark folder, sort them into sub-folders, and then close those tabs.  I don’t know that this will ever “save” me time, but it certainly is helping my system work better and keeping my tab monster from getting too far out of control.  I think I’m going to extend this plugin to be a little more practical, though – it could work for more than just YouTube videos, mass-closing tabs, bookmarking them so they’re not lost, then sorting them into sub-folders.
  4. Email Entries for Work.
    1. My day job requires entry of data into a web portal.  It’s a good content management system, but not great for data entry.  It’s designed for humans to insert data, slowly, one entry at a time.  The UI requires a couple of duplicate keystrokes and/or mouse clicks.  While I deeply dislike having to do something stupid even once, I absolutely loathe having to do something stupid twice.  It’s basically my kryptonite.  Rather than enter emails into this system by hand, which I fucking hate, I wrote a Python script that pulls data from Outlook into a CSV, then exports the email data into an HTML file that reviews each email and suggests an entry code for it.  Once that data’s been cleaned and formatted, I upload it into another script I wrote to work with my employer’s website, which then begins the process of uploading each entry.  Since the data entry website has all kinds of dynamic elements and animated features, I can’t simply populate fields – I have to give each one time to load.  Instead of just uploading an Excel/CSV sheet, I have to wait for each entry to play its little animations, wait for the data to populate, and then click each one manually because the animations sometimes don’t work well.  However, it’s a million times less painful than having to type all this bullshit in myself.
    2. Don’t worry, I don’t upload any of my email or data into any LLM.  All the logic which pulls data out of my Outlook and builds things out of it runs on my local machine.
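The “suggests an entry code” step in that pipeline can be plain rules rather than anything smart, which is also what keeps it entirely local. Here’s a sketch of the idea; the codes, keywords, and CSV column name are made up for illustration, since the real ones live in my employer’s system:

```python
import csv
import re

# Hypothetical entry codes and the subject keywords that hint at them.
CODE_RULES = [
    ("MTG", re.compile(r"\b(meeting|agenda|invite)\b", re.I)),
    ("INV", re.compile(r"\b(invoice|payment|billing)\b", re.I)),
    ("SUP", re.compile(r"\b(help|support|issue)\b", re.I)),
]
DEFAULT_CODE = "GEN"  # fall back to a generic code

def suggest_code(subject):
    """Return the first entry code whose keyword pattern matches the subject."""
    for code, pattern in CODE_RULES:
        if pattern.search(subject):
            return code
    return DEFAULT_CODE

def review_rows(csv_path):
    """Read the Outlook-exported CSV and attach a suggested code to each row."""
    with open(csv_path, newline="", encoding="utf-8") as f:
        rows = list(csv.DictReader(f))
    for row in rows:
        row["suggested_code"] = suggest_code(row.get("Subject", ""))
    return rows
```

A human still reviews every suggestion in the HTML file before anything is uploaded, so a wrong guess here costs a click, not an error in the portal.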

I never could have built so much, so fast, without the help of a frontier AI.  None of the local LLMs I’ve tried got even close and none of the free-level AIs could maintain coherence long enough to help.

Claude Pro isn’t without its problems – I still had to monitor the code closely and keep it from forgetting certain key features or deciding to completely refactor the code.  At the $20 level, I can choose among several different models of supposedly different quality that consume tokens at different rates, and I’m limited to a certain amount of compute within a 4 hour window and a certain amount each week.  Even so, I’ve had more than enough compute for the tasks I’ve been doing.  While these things have been super helpful to me… none of them are cutting edge research or huge trade secrets.  In the chat interface you can switch language models, but doing so requires restarting in a new conversation entirely.  In Claude Code you can switch models, but I feel like the LLM lost the thread a little when I did this.

I am a frugal man and tried to do this with free LLM access, but the benefit of more capable, more coherent models, with the increased ability to share an entire code base (with the help of Claude Code + GitHub), for $20 has been an unbeatable deal.  I’ve got a few ideas for some additional projects that could benefit from keeping the subscription going and will probably give it another month.  I don’t know that I’d need year-round access though.

  1. “Do I contradict myself? Very well then I contradict myself, I am large, I contain multitudes.” – Walt Whitman []
  2. And, let’s be real – train []
  3. RTFM, I guess []
  4. Screenshots, giving me a daily briefing, etc []