Building a Jarvis-inspired, voice-activated, LLM-powered virtual assistant

Just another day at the office

I’d like my computer to be smarter, more interactive, and able to handle boring stuff for me, and I’d also like to play around with some LLM / AI stuff… which brings me to this project.  I’ve got a ton of basic things I’d love for it to do – manage lists, reminders, some Outlook functions, some media functions – and then also be able to interact with me, all via voice commands.  Yes, you can do this with ChatGPT and probably others – but I am loath to provide any outside resource with more of “me” (DNA, biometrics, voice, ambient noises, etc.) than absolutely necessary.  Plus, I’ve been tinkering with these little LLMs for a while now, and I want to see just what I can build out of them and with their assistance.

I’m not great at Python1, so I admittedly enlisted the help of some very large LLMs.  I started the main project in conjunction with ChatGPT, used Gemini to answer basic questions about Python syntax and general programming, and used Claude for random things.  The reason for keeping my general questions in Gemini rather than ChatGPT was so that I would not “pollute” the ChatGPT flow of discussions with irrelevant sidetracks.  This was the same reason for separating out the Claude discussions too.  I find Claude reasonably helpful for coding tasks, but the usage limits are too restrictive.

My kiddo asked me how much of the code was written by these models versus my own code.  I’d say the raw code was mostly written by LLMs – but I’m able to tinker, debug, and… above all, learn.  I’d rather be the one writing the code from scratch, but I’m treating these LLMs like water wings.  I know they’re not keeping me fully afloat – I’m actually the one treading water, putting it all together, and learning how to do it myself.  Also… said kiddo was interested in building one too – so I’m helping teach someone else manually, and learning more that way.2

Ingredients

As with many of my projects, I started by testing the individual pieces to see if I could get things working.  In order, I validated each piece of the process:

  • Could I get Python to record audio?
  • Could I get Python to transcribe that audio?
  • Could I get Python to use an API to run queries in LM Studio?
    • Yep!  Using the openai Python package, I could send queries to LM Studio’s local API server once an LLM had been loaded into memory (see the sketch after this list)
  • Could I get Python to get my computer to respond to a “wakeword”?
    • Yep!  There’s another Python module for wakeword detection using PocketSphinx.  This was an interesting romp.  I found that I had to really tinker with the audio data being sent to the wakeword detector for the wakeword to be properly recognized, and then fiddle with the timing to make sure what came after the wakeword was properly captured before being sent to the LLM.  Otherwise, “Jarvis, set a timer for 15 minutes” would become… “Jarvis, for 15 minutes”, since “Jarvis” would get picked up by the wakeword detector but the rest wasn’t caught in time to be processed by Whisper.
  • Could I get Python to verbally recite statements out loud?
    • Yep!  I used text-to-speech via Piper.  However, this process took a while.  One thing I learned was that you need not just the voice model’s *.ONNX file, but also the *.JSON config file associated with it.
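
As an illustration of that third bullet, here’s a minimal sketch of the kind of query I mean, assuming LM Studio’s local server is running on its default port with a model already loaded (the prompt and model name are just placeholders):

```python
# Minimal sketch: send a query to whatever model LM Studio has loaded,
# via its OpenAI-compatible local server (default: http://localhost:1234/v1)
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

response = client.chat.completions.create(
    model="local-model",  # LM Studio answers with the currently loaded model
    messages=[{"role": "user", "content": "Set a timer for 15 minutes."}],
)
print(response.choices[0].message.content)
```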

Until this point, I had been running LLMs with the training wheels of LM Studio’s API.  I really like the LM Studio program, but I don’t want to be dependent upon their software when I’m trying to roll my own LLM interface.  Python can run LLMs directly using “llama-cpp-python” – except that it throws errors on the version of Python I was running (3.14) and was known to work with a prior version (3.11).  (A sketch of the direct call appears after the venv commands below.)

This led me to learning about running “virtual environments” within Python, so that I can keep both versions of Python on my computer but run my code within a specific container tied to the version I need.  The first command below creates the virtual environment within my project folder; the second “activates” it.

  • py -3.11 -m venv .venv
    • This creates the virtual environment (in a folder named .venv), locked to Python 3.11
  • .venv\Scripts\activate
    • This activates the virtual environment, so I can start working inside it
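
With the 3.11 environment active, calling an LLM directly looks roughly like this – a hedged sketch assuming llama-cpp-python is installed and a GGUF model file has already been downloaded (the path is a placeholder):

```python
# Hedged sketch: load a downloaded GGUF model directly with
# llama-cpp-python, no LM Studio server required
from llama_cpp import Llama

llm = Llama(model_path="models/my-model.gguf", n_ctx=2048)  # placeholder path

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Hello, Jarvis."}],
    max_tokens=128,
)
print(out["choices"][0]["message"]["content"])
```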

Back to work!

The man’s got a job to do

Building a Pipeline

This is where things really seemed to take off.  I was able to disconnect my script from LM Studio and use Python to directly call the LLMs I’ve downloaded.  This was reasonably straightforward – and I was suddenly able to go from: wakeword -> Whisper-transcribed query -> LLM response -> Piper-recited reply.  Then, it was reasonably easy to have the script listen for certain words and perform certain actions (setting timers was the first such instance; a sketch of that dispatch follows).
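
A minimal sketch of that kind of dispatch, with start_timer() and ask_llm() as hypothetical stand-ins for the real helpers:

```python
# Sketch of the keyword dispatch: check the transcribed request for
# known phrases before falling through to the LLM
import re
import threading

def start_timer(minutes: int) -> None:
    # Hypothetical helper: fire a callback after the requested delay
    threading.Timer(minutes * 60, lambda: print("Timer done!")).start()

def ask_llm(text: str) -> str:
    # Hypothetical helper: pass the request to the LLM pipeline
    return "LLM response goes here"

def handle_request(text: str) -> str:
    match = re.search(r"set a timer for (\d+) minutes?", text, re.IGNORECASE)
    if match:
        minutes = int(match.group(1))
        start_timer(minutes)
        return f"Timer set for {minutes} minutes."
    return ask_llm(text)  # anything unmatched goes to the LLM
```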

Optimizations, Problems, Solutions

Complicating factors

Building something that kind of worked brought me to some new and interesting ideas, challenges, and problems:

  • The original cobbled-together process was something like:  record audio, transcribe through Whisper, delete the recording, pass the transcribed statement to the LLM, give the LLM’s reply to Piper, generate a new recording, play that recording.  However, this process has some obvious “slop” where I’m making and deleting two temporary audio files.  The solution was to find ways to feed the recording directly into Whisper and feed Piper’s output directly to the speakers, cutting out the two audio files (see the sketch after this list for the input half).
  • I realized that I wanted the script to do more than just shove everything I say or ask into an LLM – to be really useful, the script would have to be more than a verbal interface to a basic LLM.  This is where I started bolting on a few other things – like calling a very small LLM to parse the initial request into one of:
    1. Something that can be easily accomplished by a Python script (such as setting a timer)
    2. Something that needed to be handled by a larger LLM (summarize, translate, explain)
    3. Something that maybe a small model could address easily (provide a simple answer to a simple question)
  • I ran into some problems at this point.  I spent a lot of time trying to constrain a small LLM3 to figure out what the user wanted and assign labels/tasks accordingly.  After a lot of fiddling, it turns out that an LLM is a “generative” model, and it wants to “make” something.  Trying to force it to choose among only a dozen “words”4 kept bumping into problems: it would have trouble choosing between two options, choose inconsistently, and sometimes just make up new keywords.  Now, I could have written a simple Python script which just did basic word-matching to sort the incoming phrases – but it seemed entirely counterproductive to build a Python word-matching process to help a tiny AI.  I then tried building a small “decision tree” of multiple small LLM calls to properly sort between “easy Python script call” and “better call a bigger LLM to help understand what this guy is talking about”, and quickly stopped.  Building a gigantic decision tree out of little LLM calls was proving to be a bigger task, adding latency and error with each call.  I was hoping to use a small LLM to make the voice interaction with the computer simple and seamless, pass bigger tasks to a larger LLM for handling, and sprinkle in little verbal acknowledgements and pauses to help everything feel more natural.  Instead, I was spending too much time building ways to make a small LLM stupider, doing this repeatedly, and still ending up with too much slop.
  • And, frankly, it felt weird to try to lobotomize a small LLM into doing something as simple as “does the user’s request best fall into one of 12 categories?”  Yes, small LLMs can easily start to hallucinate, lose track of a conversation, make mistakes, etc.  But to constrain one so tightly that I’m telling it that it may only reply with one of 12 words feels… odd.
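
On that first point, here’s a minimal sketch of the input half of the fix, assuming the openai-whisper and sounddevice packages (the model size and fixed recording length are placeholder choices):

```python
# Sketch: record the microphone straight into a NumPy array and hand it
# to Whisper in memory, so no temporary WAV file is created or deleted
import sounddevice as sd
import whisper

model = whisper.load_model("base")  # placeholder model size

def listen_and_transcribe(seconds: float = 5.0, rate: int = 16000) -> str:
    audio = sd.rec(int(seconds * rate), samplerate=rate,
                   channels=1, dtype="float32")
    sd.wait()  # block until the recording finishes
    result = model.transcribe(audio.flatten())  # in-memory, no temp file
    return result["text"].strip()
```
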
Tell me what I want to hear and this can all stop

Over the last few days I’ve been tinkering with building an “intent classifier” or “intent encoder” to do the kind of automatic sorting I was trying to force an LLM to do.  As I understand this process, you feed the classifier a bunch of example statements that have been pre-sorted into different “intent slugs.”  The benefit of a classifier is that it can only reply with one of these intent slugs and will never produce anything else.  It’s also way faster.  Calling a small5 LLM with a sorting question could produce a sometimes-reliable6 answer in about 0.2 seconds, which is almost unnoticeable.  Calling a classifier to sort should produce a 97%-reliable result within 0.05 seconds.  That is so fast it’s imperceptible.

I haven’t tried this yet.  I’ve built up a pile of “examples” from largely synthetic data to feed into a classifier, produce an ONNX file7, and try out.  However, I wanted to pause at this juncture to write up what I’ve been working on.  I say synthetic data because I didn’t hand-write more than 3,000 examples across some 50 different intent slugs.  Instead, I wrote a list of slugs, described what each one should be associated with, created a small set of examples, and then asked Gemini to produce reasonable-sounding examples based on this information.8  The resulting list appeared pretty good – but needed to be manually edited and tidied up.  I wanted to remove most of the punctuation and adjust the way numbers and statements showed up, because I’m simply not confident that Whisper will transcribe “Add bananas to shopping list” consistently enough for the classifier to interpret it correctly.
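
I haven’t settled on a library yet, but as a stand-in, the general shape of the idea looks something like this in scikit-learn (the example phrases and slugs are just illustrative):

```python
# Stand-in sketch of the intent classifier: train on pre-sorted example
# statements; predictions can only ever be one of the trained intent slugs
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

examples = [
    ("set a timer for 15 minutes", "make_timer"),
    ("what timers do i have running", "list_timers"),
    ("add bananas to the shopping list", "add_to_list"),
    # ... ~3,000 synthetic examples across ~50 intent slugs
]
texts, slugs = zip(*examples)

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                    LogisticRegression(max_iter=1000))
clf.fit(texts, slugs)

# The classifier can only ever answer with one of the trained slugs
print(clf.predict(["please add milk to the shopping list"])[0])
```

If something like this pans out, I believe a tool such as skl2onnx can then export the fitted pipeline to the kind of ONNX file mentioned above.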

As I tinker with this project… I’m also looking at how I might be able to extend it into further projects.  Not only might it be a great way to help me be more productive, but I might be able to create a really small version that could be put into a companion bot.  A little companion bot with limited space, power, inputs, and ability to emote could be far more lifelike, independent, and non-deterministic in its responses and actions.

  1. Yet!!
  2. Thanks, Mr. Feynman!
  3. Giving it limited context windows, limited tokens to use, and highly restrictive system prompts
  4. Make timer, list timers, make a reminder, add to a list, recite a list, media buttons, etc.
  5. ~1B parameters
  6. Let’s say 65% reliable
  7. Yes!  Just like the voice models!!
  8. I know, more self-reflecting LLM garbage…

ChatGPT WordPress Plugins

This is kinda bananas.  Years ago I wrote a plugin to solve a problem I had.  I wanted a simple WordPress plugin where I could insert a shortcode into a blog post, specify a series title, and have it automatically search up all the other blog posts that used the same shortcode and series title, and then insert a nice-looking list of the blog posts in that series, in chronological order.

It was one of my first plugins, still available on WordPress.org – just hidden, since it hasn’t been updated in almost a decade.  It still mostly works to this very day, if occasionally a little buggy.  After several WordPress versions, it no longer properly displays the series title, which is a real shame.

On a whim, I tried using ChatGPT to generate some plugins.

Here’s an example of my old plugin and the new ChatGPT-written plugin (in that order):

Default Series Title

See how bad that was? It completely mangled the title.

Edit:  Since publishing this post, I realized that I would have to choose one of the following:

  1. Leaving the old defunct plugin in place just to make a point about how it didn’t stand the test of a decade’s worth of WordPress updates, but then also leaving broken series titles sprinkled through my back catalog of blog posts.
  2. Going back through nearly 10 years of blog posts1,2 to change them over to the new plugin shortcode.
  3. Disabling the old plugin, but having the new plugin work with the old shortcode as well as its own new shortcode, at the cost of losing an example of how badly the old plugin performed.

I went with option 3.  Just take my word for it, it looked bad.

He makes a valid point

Now for the ChatGPT version:

Software Development with ChatGPT
  1. ChatGPT WordPress Plugins

It took me about an hour to whip up a working WordPress plugin with the same core functionality.  I would break down the time I spent as follows:

Time Spent Creating Series Plugin with ChatGPT

But, that’s not all!  You see, as I was writing this blog post, I realized it would be fun to include a pie chart to indicate the time I’d spent on this.  Unfortunately, the plugin I had written to do exactly this many years ago has apparently completely given up the ghost.  Thus, before I proceeded to this very sentence, I used ChatGPT to create a plugin for displaying custom pie charts!

Time Spent Creating Pie Chart Plugin with ChatGPT

Obviously, this plugin took a lot longer.  The first few versions had all kinds of problems between the HTML canvas code and figuring out how to make sure the JavaScript was not loading too early or too late.  In the end, I just asked it whether it was even capable of creating a pie chart – and it gave me a piece of workable JavaScript.  I told it to refactor the plugin using that same JavaScript, and then it was a matter of fine-tuning the result.

If you don’t know anything about writing WordPress plugins, you could probably use ChatGPT to create a very simple plugin.  However, once things get slightly more complicated, it will likely require some troubleshooting to figure out what is happening.  In the series plugin, it took me a while to root through the WordPress functions to figure out that ChatGPT was trying to use a function in a way that simply did not work.  I explained to ChatGPT that that particular function could not operate in that way, explained how the data it was feeding into that function needed to be modified first, and then asked it to refactor the code.  From that point forward, it started to look a lot better.  There were some additional quirks – like putting more than one series title in a single post would only display one.  I suspect these problems – ChatGPT taking shortcuts to generate code, hardcoding certain variables and names, not considering that the code might need to run more than once on a page – may be difficult for it to anticipate and address.  Without some degree of WordPress development knowledge, I think a novice user armed only with ChatGPT would need to do a lot of refactoring, asking the program to regenerate the plugin from scratch many times, before arriving at a workable result.  Then again, a million monkeys at typewriters, right?

I think ChatGPT could be great for creating relatively simple plugins like a series plugin, a pie chart plugin, or even a table of contents plugin.  Having seen how much time it cut out of the development process, I think it would be interesting to try developing an A/B testing plugin or something more complicated.

I think the next task is to see if I can get it to generate QMK code for a keyboard, Arduino code, Raspberry Pi code, or a Chrome extension.

I can already see some ways to improve both of the ChatGPT-generated plugins used in this blog post.  My original series plugin included two arrows at the bottom so the reader could navigate to the prior or next post in the series.  And I think it would be great if the chart plugin had a feature where I could specify the units, so the magnitude data would be included with the labels.  I may try getting it to shoehorn these updates in later…

If you see these reflected in the charts above, I must have already done it.  :)

  1. NGL, I can really be a lot sometimes.
  2. Um, you’ve probably gathered that.

Series Plugin Test for Illustrative Purposes Only

The only purpose for this post is to serve as a reference for a more interesting and useful post.

Software Development with LLMs
  1. Series Plugin Test for Illustrative Purposes Only
  2. ChatGPT WordPress Plugins
  3. Coding with an LLM Sidekick
