Weakness

🤦‍♂️

I just wrote a very large blog post about kicking frontier LLMs to the curb.  The problem I’m facing is that running a useful-scale LLM on my extremely modest PC is not just slow, it’s difficult.  I don’t mind waiting 30 minutes or even an hour for it to work on a small piece of a bigger project, but coming back after an hour to find it made things worse, or stopped after 5 minutes, means I have to figure out how to kick start it.1

My PC isn’t fancy.  It’s about 3 years old and has 32GB of RAM, of which 16GB is “shared VRAM”, meaning it’s basically using half of its RAM as if it were VRAM.  The result is a machine that’s decent for most work tasks2 but has poor performance for games, video editing, big 3D model rendering / editing, and… LLM use.  If I had unlimited time and patience, I could probably flog Qwen 3.5 9B with a 4-bit quantization into working well enough over a long enough timeline on my current PC.

I’ve looked into what it would cost to build either a stand-alone system or an entire secondary machine just for these kinds of tasks plus home LLM inference use.  None of these options is particularly attractive at this time.  Single-board computers like the Raspberry Pi, Orange Pi, Jetson Nano, and others would probably cost in the range of $500 and likely wouldn’t crack 5 tokens per second.  A GPU in an external enclosure would cost around $700 for 16GB and could possibly run up to 40 tokens per second; however, it would probably be kinda loud and take up desk space.  A Mac Mini with 16GB of unified memory could probably reach 10-15 tokens per second for $600 or so – a lot slower than a full external GPU, but silent.

Is that too much to ask?

Honestly, none of these options are super attractive right now.  I wouldn’t mind building a DIY rig with an SBC, but that’s a lot of money for not a lot of speed.  I wouldn’t mind getting a Mac, and while it would likely be easier to set up than a Raspberry Pi and could run larger models, it wouldn’t work much faster than the Pis.  The benefit of either an SBC or a Mac Mini is that I could set them up and put them in some unused corner of the house.  Even if the GPU enclosure route is more power and speed for less money, it would be both loud and tied to my PC at all times.

None of these solutions are perfect, but pretty much all of them are some combination of expensive and only a modest increase over my current computing abilities.

Anyhow, I broke down and gave $10 to OpenRouter.ai.

This is not an endorsement – it’s just what I settled on using after poking at various other options.  I’d looked into getting a plan through Alibaba’s Qwen, Kimi AI, Groq3, Deepseek, and other LLM API aggregators like Together.AI.  OpenRouter.ai doesn’t charge for 50 daily API calls to a few of their “free” models, but if I carry a $10 credit balance I can make 1,000 calls per day and use more models.  It was easy to kick the tires on their free plan, find it worked well enough for my purposes, and hand them $104 in exchange for 20x more API calls per day.
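For the curious, OpenRouter’s API follows the OpenAI chat-completions convention, so driving it from a script is straightforward.  Here’s a minimal sketch – the endpoint URL, the `openai/gpt-oss-120b` model id, and the `OPENROUTER_API_KEY` variable name are my assumptions from their documentation, not something from this post:

```python
import json
import os
import urllib.request

# Assumed OpenAI-compatible endpoint; check OpenRouter's docs before relying on it.
API_URL = "https://openrouter.ai/api/v1/chat/completions"

def build_request(model: str, prompt: str) -> dict:
    """Assemble a chat-completion payload for a single user message."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

def ask(model: str, prompt: str) -> str:
    """Send one prompt and return the model's reply text."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_request(model, prompt)).encode("utf-8"),
        headers={
            # Key name is an assumption; set it in your environment first.
            "Authorization": f"Bearer {os.environ['OPENROUTER_API_KEY']}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

Swapping between the free and paid models is then just a matter of changing the `model` string.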

If I’m going to use an LLM while still determined to avoid OpenAI/ChatGPT, Anthropic/Claude, Elon/Grok, Google/Gemini, and their ilk, I have to turn to other models.  I need something that’s better than the modern, barren StackOverflow but doesn’t need to be a giant evil LLM either.  I’m having a fair bit of success with GPT-OSS 120B, MiniMax M2.5, and Qwen models.

I’m not doing anything groundbreaking.  I restarted the virtual assistant project from scratch a few weeks ago and am just working on getting the pieces operational.  These skills aren’t anything wild – control over my PC’s media functions, modest automated regular downloading of files, communication over the Matrix protocol, etc.  Even the wakeword, STT5, and TTS6 systems aren’t very new.  The only “new” thing I’m trying to do is tie these pieces together with a little bit of personality from an LLM.

Even without groundbreaking innovations, it’s interesting to see the “cost” of this inference.  Yesterday I used approximately 12 million tokens, largely with GPT-OSS 120B.  Right now Claude is about $1/M tokens for Haiku, $3/M tokens for Sonnet, and $5/M tokens for Opus.78  It looks like the going rate for GPT-OSS 120B is about $0.04/M tokens.  Having used Claude models last month and GPT-OSS now, I can say Haiku is very useful, but their other models aren’t 3x and 5x more useful.  More importantly, there is no way Haiku is 25x better, or Opus 125x better, than GPT-OSS 120B.  I don’t doubt these models might cost that much more to develop and run, but I’m just not seeing a jump in utility that justifies these costs.  I’ll admit that Haiku could probably have done the job in half the tokens, but even so it feels like there’s an upper limit to how useful an LLM can be.  Or, rather, an upper limit to how useful an LLM can be to me.
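Since those ratios are doing a lot of the work here, this is the back-of-the-envelope arithmetic, using the approximate input prices quoted above:

```python
# Approximate input prices quoted in this post, in $/M tokens.
RATES = {
    "GPT-OSS 120B": 0.04,
    "Claude Haiku": 1.00,
    "Claude Sonnet": 3.00,
    "Claude Opus": 5.00,
}

def session_cost(tokens_millions: float, rate_per_million: float) -> float:
    """Dollar cost for a given number of millions of tokens."""
    return tokens_millions * rate_per_million

tokens = 12  # roughly yesterday's usage, in millions of tokens
for model, rate in RATES.items():
    print(f"{model}: ${session_cost(tokens, rate):.2f}")
# Yesterday's traffic on GPT-OSS 120B comes to $0.48; the same traffic
# on Opus would be $60.00 -- the 125x price gap mentioned above.
```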

I just read an interesting blog post / article specifically about Anthropic’s recent publicity blitz / stunt regarding their “Mythic” model.  They are supposedly not releasing the model to the public because it is so smart and dangerous.  Suffice it to say, the author makes a convincing case that Anthropic’s claims are smoke and mirrors.  One particular section struck a chord with me:

[W]hat am I getting for $25 per million input tokens that I cannot get from the open-weights ecosystem for more than two orders of magnitude less — roughly 227× cheaper, at eleven cents per million?

What, indeed?

As much as I like to fiddle with little gadgets, make and tinker with things, and even like the odd new shiny toy, I’m not a fan of shoving email/push notifications/cloud/crypto/NFT/blockchain/wifi/mesh/AI into every damn thing.  I don’t need push notifications from my toaster, don’t need to preheat my oven before I get home, don’t want to have an AI analyze the mustard collection in my fridge and offer recipes.

If an LLM like GPT-OSS 120B, released in August of 2025, can handle meaningful coding tasks swiftly, what more do regular people really need of an LLM?  I’m not sure they need anything more.  I do think large corporations, data brokers, and governments are probably already licking their lips at the idea of being able to build better profiles for consumers.91011

Perhaps one day I’ll try to bolt on some features that require some novel problem solving – like the ability to research things on the internet, check emails, draft email replies / queries, maybe even do some light scheduling or administrative work.

Software Development with LLMs
  1. Series Plugin Test for Illustrative Purposes Only
  2. ChatGPT WordPress Plugins
  3. Coding with an LLM Sidekick
  4. Python Practice with an LLM
  5. Not Team AI
  6. Never Stop Breaking Up
  7. Weakness
  1. What a funny phrase “kick start”.  I wonder if people mostly think of the crowdfunding platform rather than its original usage? []
  2. It does get bogged down in very large PDFs and spreadsheets []
  3. NOT Grok.  Groq is, as best as I understand them, a chip company that builds devices that can run inference on medium-sized LLMs very quickly []
  4. Plus credit card processing fees []
  5. Speech to text []
  6. Text to speech []
  7. These are the “input” $/M token prices.  Claude’s “output” generation $/M token prices are 5x the input cost.  I’m just trying to keep their pricing plan information simple/streamlined for ease of reading and reference []
  8. For the curious, ChatGPT’s pricing is $0.20/M tokens for their 5.4 nano model, 5.4 mini is $0.75/M tokens, and their flagship 5.4 model is $2.50/M tokens. []
  9. I was going to say “users”, but really, the regular people here aren’t the “users” – the companies and governments are.  I may very well need to start calling people “usees”. []
  10. Use-ees? []
  11. It sounds good in my head, but doesn’t seem to track properly when typed []

Coding with an LLM Sidekick

I fell down a rabbit hole recently which led me to think about my experiences in the nascent field of “prompt engineering.”12

As a thought experiment, I was thinking about what I’ve managed to accomplish working with an LLM, the challenges along the way, and perhaps even where I can see the frayed edges of its current limitations.

After several starts and stops trying to hire someone to assist with a website I own, I turned to the idea of getting help from an LLM. 3 4  After all, some of them were touted as being able to actually draft code, right?  Besides, if the first step in even hiring a developer is just being able to describe what you need, and the first step of getting an LLM to generate some code is defining what I need, then…

There’s no way this is going to work, right?
  1. Task 1:  Pie Chart WordPress Plugin

    1. I started off with a simple and easy to define task.  My original plugin was a quick and dirty bit of code, so if ChatGPT could create a WordPress plugin, there was a chance it could do something simple like this.
    2. My first attempt was a wildly spectacular, but highly educational, failure.  A brief description of the plugin’s function was enough to get a WordPress plugin template file with very little functionality.  Then came the arduous LLM wrangling: my asking it for refinements, it losing track of the conversation, and the endless sincere heartfelt apologies from ChatGPT about forgetting really basic pieces of information along the way.  Some changes were minor, but it kept changing the names of variables, functions, and the plugin itself, switching APIs, and forgetting requirements.  It was constant whack-a-mole that spanned nearly 90 pages of text.
    3. My next attempt was more focused.  I created a framework for discussions, provided more context, goals, descriptions of workflow, and resources for examples.  The result was a lot better, with portions of largely functional code.  However, the LLM kept forgetting things, renaming variables, files, directories, etc.
    4. Next I created the directory structure and blank placeholder files, zipped these, and uploaded them as an attachment for the LLM to review – along with a description of the contents and the above additional context.  This was even better than before, but after a certain depth of conversation no amount of reminding could bring the LLM around to the core of the conversation.
    5. My thinking was that after a certain level of conversation, the LLM was not going to be able to synthesize all of the nuance of our conversations plus the content of the code drafted.  To get around this I would begin a conversation, make a little progress, then ask it to summarize the project, the current status, and a plan for completion – which was fed into an entirely new conversation.  This way, Conversation N was able to provide a succinct and complete description which Conversation N+1 could use as a jumping-off point.  My theory was that the LLM would be best positioned to create a summary that would be useful to another LLM.
    6. This process of minor “restarts” in the conversation was one of the most successful and powerful techniques I’ve employed to combat LLM hallucinations and forgetfulness.
  2. Task 2:  Blog Post Series Plugin

    1. After rewriting the above pie chart plugin using an LLM, I turned my attention to a slightly more complicated plugin.  The pie chart plugin is really just a single file which turns a shortcode with a little bit of data into a nice looking pie chart.  There’s no options page, no cross post interaction, database queries or anything.  It was really just a test to see if an LLM could really draft a basic piece of working code.
    2. The series plugin is still a reasonably simple piece of code, but it has several additional features which require a settings page, saving settings, custom database queries, and organizing information across multiple pages.  It’s also one of the most used plugins on this website.
    3. I figured I would try feeding the LLM a description of my plugin, all the code in a directory structure, and then my initial “base” prompt which explains our roles, needs, resources, and scaffolding for a discussion.  I asked the LLM to summarize the function and features of the plugin, which it did quite nicely.  I added a few additional features I had previously worked on and asked it to incorporate this into the description.  Asking the LLM to simply “build this WordPress plugin” was met with a “you need to hire a developer” recommendation.  However, asking it to propose a workflow for building a plugin with these features was successful.  I was provided with a roadmap for building5 my plugin.
    4. This system worked reasonably well, allowing me to compartmentalize the steps, backtrack, retrace, revise code, work on a section, then another, sometimes going back to a prior section at the LLM’s direction.  The LLM still tended to get lost and renamed variables/paths/directories/filenames, but it was less pronounced than before.  I did find it harder to use the “summarize and restart” strategy when dealing with a multi-step code development system.  However, it was still workable since I could upload all the code produced so far.
    5. The result was a new plugin, with better functionality than what I’d written myself 10 years before.  Here, the new strategy of having the LLM break the project into sections and providing a roadmap was particularly helpful.
  3. Strategy:  Conversational Scaffolding
    1. I mentioned “conversational scaffolding” and “frameworks” for discussing things with the LLM above.  This was an overarching and evolving strategy I use to help focus the LLM on the goals, keep it on track, and hopefully help it provide meaningful and useful replies.  The full text of my “prompt framework” file is too large to include here, but I’m happy to provide the highlights.
    2. Personas.  I assigned the LLM three distinct personas with differing backgrounds, strengths, and goals.  Their personas were defined in reference to one another, so the first would activate, the second would then review and interact with the first, after this process completed the third would be activated, perhaps interact with the first two, then it would move on.  I would say this process was rather successful.
    3. Myself.  I would describe myself, my goals, level of expertise, etc.  I found that if I referred to myself as an expert, the LLM would not be as likely to offer me code proposals – but if I described myself as a newbie, it would recommend I hire a developer rather than tackle such a complex problem myself.
    4. Rules for Conversation.  These are a collection of 12 rules (at last count) which helped myself and the LLM interact.  The high points are:
      1. Answer Numbering, Answer Format, Eliminate Guesswork, Organize Assumptions, Conversational Review, Complex Answers, Context Refresher, Problem Solving Approach, File Structure, @Rules, and Personas.
      2. Each of these items was followed by a few sentences explaining something about how the LLM should expect to receive information and react.  My favorite of these was the rule “@Rules”, which directed the LLM to begin its response by reviewing the Rules and following them.
    5. Knowledge.  There are a number of programming languages and technical topics I’m interested in and have used an LLM to address.  To this point, I solicited a list of useful resources from the LLM and started including a “Knowledge” section where I listed dozens of the most important resources for the languages and APIs I most commonly use.
    6. By beginning each prompt with the above “framework” (~10k of text) and following it up with a short description of my project or a file to consider, I found I was able to jump right into the project without having to provide additional significant background information.
  4. Task 3:  “Project Drift”
    1. This is a considerably more complicated task I will simply refer to as “Project Drift.”  This isn’t a real codename since the developer base is all of exactly one dude, but I don’t want to name the location/website for a variety of reasons.  In any case, Project Drift involves multiple user interfaces, numerous settings, database queries, data sanitation and validation procedures, administrator functions, and numerous other facets.  All of the above tasks and attempts were basically part of the run-up to this (ongoing) project.
    2. Using the LLM’s ability to open and read a ZIP file, as well as propose code, has been invaluable.  This in conjunction with my prompt framework allows me to get the LLM up to speed after a micro-restart – and it’s summarization procedures help me get back in the mindset after I’ve stepped away from the project for a few days.
    3. Since this project isn’t done yet, I can only give a progress report.  It’s going very well.  Much of the heavy lifting – the scaffolding of the code – can be assembled for me, with tedious database queries and chunks of code provided.  There are still large areas where the LLM is unable to be very helpful, namely pinpointing a bug in the code (or between code sections).  That still requires a knowledgeable hand at the helm.
    4. As a solo-coder, having the assistance of another “persona” to keep me on track with a given section of code has been helpful.  I have only assigned three personas, but I could see adding a few more to fulfill different roles.

I would estimate Project Drift is roughly 30-50% complete, but this is still an incredible amount of progress in a very short time.  I would also estimate it has cut my development time by 90% (but on the easiest and most tedious stuff).
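For what it’s worth, the “summarize and restart” technique from Task 1 is mechanical enough to sketch in code.  This is illustrative only – the `chat` callable stands in for whatever model API or chat window is in use, and the prompt text is a paraphrase of my approach, not a magic incantation:

```python
SUMMARY_PROMPT = (
    "Summarize this project for another LLM: describe the goal, the current "
    "status of the code, and a concrete plan for completion."
)

def restart_conversation(chat, history: list[str]) -> list[str]:
    """End Conversation N by asking for a hand-off summary, then return
    that summary as the only context for Conversation N+1."""
    summary = chat(history + [SUMMARY_PROMPT])
    return [summary]

def work(chat, framework: str, task: str, rounds: int) -> list[str]:
    """Run a few rounds of work, micro-restarting the context each round
    so the conversation never grows deep enough for the LLM to get lost."""
    history = [framework, task]
    for _ in range(rounds):
        history.append(chat(history))  # make a little progress
        # Re-prepend the prompt framework so every conversation starts with it.
        history = [framework] + restart_conversation(chat, history)
    return history
```

The point of the structure is that each fresh conversation sees only the ~10k framework plus one dense summary, instead of 90 pages of accumulated back-and-forth.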

  1. I know, it feels pretentious, doesn’t it? []
  2. I’ve got the same knee-jerk reaction to “visionary,” “thought leader,” “polymath,” and “futurist.” []
  3. Don’t get me wrong, some of the developers I’d hired simply disappeared while other relationships didn’t work out due to timing.  I don’t think anyone was malicious, just… busy, really. []
  4. Still, the job needs to be done. []
  5. Re-building? []