Building a Jarvis-inspired, voice-activated, LLM-powered virtual assistant

Just another day at the office

I’d like my computer to be smarter, more interactive, and able to handle boring stuff for me, and I’d also like to play around with some LLM / AI stuff… which brings me to this project.  I’ve got a ton of basic things I’d love for it to do – manage lists, reminders, some Outlook functions, some media functions – and then also be able to interact with me, all via voice commands.  Yes, you can do this with ChatGPT and probably others – but I am loath to provide any outside resource with more of “me” (DNA, biometrics, voice, ambient noises, etc.) than absolutely necessary.  Plus, I’ve been tinkering with these little LLMs for a while now to see just what I can build out of them and with their assistance.

I’m not great at Python1, so I admittedly enlisted the help of some very large LLMs.  I started the main project in conjunction with ChatGPT, used Gemini to answer basic questions about Python syntax and programming, and Claude for random things.  The reason for keeping my general questions in Gemini rather than ChatGPT was so that I wouldn’t “pollute” the ChatGPT flow of discussion with irrelevant sidetracks.  This was the same reason for separating out the Claude discussions too.  I find Claude reasonably helpful for coding tasks, but the use limits are too restrictive.

My kiddo asked me how much of the code was written by these models versus my own code.  I’d say the raw code was mostly written by LLMs – but I’m able to tinker, debug, and… above all, learn.  I’d rather be the one writing the code from scratch, but I’m treating these LLMs like water wings.  I know I’m not keeping myself fully afloat – but I am the one treading water, putting it all together, and learning how to do it myself.  Also… said kiddo was interested in building one too – so I’m helping teach someone else manually, and learning more that way.2

Ingredients

As with many of my projects, I started by testing the individual pieces to see if I could get things working.  In order, I validated each piece of the process:

  • Could I get Python to record audio?
  • Could I get Python to transcribe that audio?
  • Could I get Python to use an API to run queries in LM Studio?
    • Yep!  Using the openai Python package, I could send queries to LM Studio after an LLM had been loaded into memory (a minimal sketch of this kind of call appears after this list)
  • Could I get Python to get my computer to respond to a “wakeword”?
    • Yep!  There’s another Python module for wakeword detection using PocketSphinx.  This was an interesting romp.  I found that I had to really tinker with the data being sent to the wakeword detector for the wakeword to be properly recognized, and then fiddle with the timing to make sure what came after the wakeword was properly captured before being sent to the LLM.  Otherwise, “Jarvis, set a timer for 15 minutes” would become… “Jarvis, for 15 minutes” – the “Jarvis” would get picked up by the wakeword detector, but the rest wasn’t caught in time to be processed by Whisper.
  • Could I get Python to recite statements out loud?
    • Yep!  I used text-to-speech via Piper.  However, getting this working took a while.  One thing I learned was that you need not just the voice model’s *.ONNX file, but also the *.JSON file associated with it.
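For that LM Studio step, here’s a minimal sketch of the kind of call involved – assuming LM Studio’s local server is running on its default port (1234) with a model already loaded; the model name and prompt are just placeholders:

    # Minimal sketch: query a model loaded in LM Studio through its
    # OpenAI-compatible local server (assumes the server is running at the
    # default http://localhost:1234 and a model is already loaded).
    from openai import OpenAI

    client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

    response = client.chat.completions.create(
        model="local-model",  # placeholder; LM Studio serves whatever model is loaded
        messages=[
            {"role": "system", "content": "You are a helpful voice assistant."},
            {"role": "user", "content": "Set a timer for 15 minutes."},
        ],
    )
    print(response.choices[0].message.content)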

Up until this point, I had been running LLMs with the training wheels of LM Studio’s API.  I really like the LM Studio program, but I don’t want to be dependent on their service when I’m trying to roll my own LLM interface.  Python can run LLMs directly using “llama-cpp-python” – except that it threw errors on the version of Python I was running (3.14) and was only known to work with an earlier version (3.11).

This led me to learn about running “virtual environments” within Python, which let me keep both versions of Python on my computer but run my code within a specific container tied to the version I need.  The first command below created the virtual environment within my project folder; the second command “activates” that virtual environment.

  • py -3.11 -m venv venv
    • This created the virtual environment, locked to Python 3.11
  • venv\Scripts\activate
    • This activates the virtual environment, so I can start working inside it

Back to work!

The man’s got a job to do

Building a Pipeline

This is where things really seemed to take off.  I was able to disconnect my script from LM Studio and use Python to directly call the LLMs I’ve downloaded.  These steps were reasonably straightforward – and I was suddenly able to go from wakeword -> Whisper-transcribed query -> LLM response -> Piper-recited reply.  Then, it was reasonably easy to have the script listen for certain words and perform certain actions (setting timers was the first such instance).
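As a toy illustration of that last step (my own sketch, not the project’s actual code), the transcribed text can be checked for simple keywords before anything is sent to the LLM; the say and ask_llm callables below are stand-ins for the Piper playback and LLM query pieces:

    # Tiny keyword router: handle simple requests (like timers) in plain
    # Python and fall through to the LLM for everything else.
    # `say` and `ask_llm` are stand-ins for the Piper and LLM pieces.
    import re
    import threading

    def start_timer(minutes, say):
        # Speak a reminder after the requested number of minutes.
        threading.Timer(minutes * 60, say, args=[f"Your {minutes} minute timer is done."]).start()
        say(f"Timer set for {minutes} minutes.")

    def handle(text, say, ask_llm):
        match = re.search(r"timer for (\d+) minute", text.lower())
        if match:
            start_timer(int(match.group(1)), say)
        else:
            say(ask_llm(text))

    # Example usage with print/echo stand-ins:
    handle("Jarvis, set a timer for 15 minutes", say=print, ask_llm=lambda q: f"(LLM would answer: {q})")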

Optimizations, Problems, Solutions

Complicating factors

Building something that kind of worked brought me to new and interesting ideas, challenges, and problems:

  • The original cobbled-together process was something like: record audio, transcribe through Whisper, delete the recording, pass the transcribed statement to the LLM, give that statement to Piper, generate a new recording, play that recording.  However, this process has some obvious “slop” where I’m making and deleting two temporary audio files.  The solution was to find ways to feed the recording directly into Whisper and feed Piper’s response directly to the speakers, cutting out the two audio files (a rough sketch of the recording half of this appears after the list).
  • I realized that I wanted the script to do more than just shove everything I say or ask into an LLM – to be really useful, it would have to be more than a verbal interface for a basic LLM.  This is where I started bolting on a few other things – like calling a very small LLM to parse the initial request into one of three buckets:
    1. Something that can be easily accomplished by a Python script (such as setting a timer)
    2. Something that needed to be handled by a larger LLM (summarize, translate, explain)
    3. Something that maybe a small model could address easily (provide simple answer to a simple question)
  • I ran into some problems at this point.  I spent a lot of time trying to constrain a small LLM3 to figure out what the user wanted and assign labels/tasks accordingly.  After a lot of fiddling, it turns out that an LLM is generally a “generative” model and it wants to “make” something.  My trying to force it to make a choice among only a dozen “words”4 was really bumping into problems where it would have trouble choosing between two options, choose inconsistently, and sometimes just make up new keywords.  Now, I could come up with a simple Python script which just did basic word-matching to sort the incoming phrases – but it seemed entirely counterproductive to build a Python word-matching process to help a tiny AI.  I then tried building a small “decision tree” of multiple small LLM calls to properly sort between “easy Python script call” and “better call a bigger LLM to help understand what this guy is talking about” and quickly stopped.  Again, my building a gigantic decision tree out of little LLM calls was proving to be a bigger task, adding latency and error with each call.  I was hoping to use a small LLM to make the voice interaction with the computer simple and seamless and then pass bigger tasks to a larger LLM for handling, sprinkling in little verbal acknowledgements and pauses to help everything feel more natural.  Instead I was spending too much time building ways to make a small LLM stupider, doing this repeatedly, and then still ending up with too much slop.
  • And, frankly, it felt weird to try and lobotomize a small LLM into doing something as simple as “does the user’s request best fall into one of 12 categories?”  Yes, small LLM’s can easily start to hallucinate, they can lose track of a conversation, make mistakes, etc.  But, to constrain one so tightly that I’m telling it that it may only reply with one of 12 words feels… odd?
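Here’s a rough sketch of the “no temporary audio file” idea on the recording side, assuming the sounddevice and faster-whisper packages (not necessarily the exact libraries the project ended up using):

    # Record straight into a NumPy buffer and hand it to Whisper without
    # ever writing a WAV file.  Assumes sounddevice + faster-whisper;
    # 16 kHz mono float32 is the format Whisper expects.
    import sounddevice as sd
    from faster_whisper import WhisperModel

    RATE = 16000
    model = WhisperModel("base.en", compute_type="int8")

    def listen_and_transcribe(seconds=5):
        audio = sd.rec(int(seconds * RATE), samplerate=RATE, channels=1, dtype="float32")
        sd.wait()  # block until the recording finishes
        segments, _ = model.transcribe(audio.flatten())
        return " ".join(segment.text.strip() for segment in segments)

    print(listen_and_transcribe())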
Tell me what I want to hear and this can all stop

Over the last few days I’ve been tinkering with building an “intent classifier” or “intent encoder” to do the kind of automatic sorting I was trying to force an LLM to do.  As I understand this process, you feed the classifier a bunch of example statements that have been pre-sorted into different “intent slugs.”  The benefit of a classifier is that it can only reply with one of these intent slugs and will never produce anything else.  It’s also way faster.  Calling a small5 LLM with a sorting question could produce a sometimes reliable6 answer in about 0.2 seconds, which is almost unnoticeable.  Calling a classifier to sort should produce a 97% reliable result within 0.05 seconds, which is so fast it is imperceptible.

I haven’t tried this yet.  I’ve built up a pile of “examples” from largely synthetic data to feed into a classifier, produce an ONNX file7, and try out.  However, I wanted to pause at this juncture to write up what I’ve been working on.  I say synthetic because I didn’t hand-write the more than 3,000 examples across some 50 different intent slugs.  I wrote a list of slugs, described what each one should be associated with, created a small set of examples, and then asked Gemini to produce reasonable-sounding examples based on this information.8  The list looked pretty good – but it needed to be manually edited and tidied up.  I wanted to remove most of the punctuation and adjust the ways numbers and statements showed up, because I’m simply not confident that Whisper’s transcription of “Add bananas to shopping list” will land close enough to the training examples for the classifier to interpret it correctly.
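To make the idea concrete, here’s a rough sketch of the sort of classifier I have in mind – assuming scikit-learn, with made-up phrases and slugs; exporting the trained model to ONNX (e.g. with skl2onnx) would be a separate step:

    # Toy intent classifier: TF-IDF features + logistic regression.
    # The training phrases and intent slugs below are illustrative only.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline

    examples = [
        ("set a timer for 15 minutes", "make_timer"),
        ("start a five minute timer", "make_timer"),
        ("add bananas to the shopping list", "add_to_list"),
        ("put milk on the grocery list", "add_to_list"),
        ("what timers are running", "list_timers"),
    ]
    texts, labels = zip(*examples)

    clf = make_pipeline(
        TfidfVectorizer(ngram_range=(1, 2)),
        LogisticRegression(max_iter=1000),
    )
    clf.fit(texts, labels)

    # Unlike a small LLM, this can only ever answer with one of the known slugs.
    print(clf.predict(["add eggs to the shopping list"])[0])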

As I tinker with this project… I’m also looking at how I might be able to extend it into further projects.  Not only might it be a great way to help me be more productive, but I might be able to create a really small version that could be put into a companion bot.  A little companion bot with limited space, power, inputs, and ability to emote could be far more lifelike, independent, and non-deterministic in its responses and actions.


  1. Yet!!
  2. Thanks, Mr. Feynman!
  3. Giving it limited context windows, limited tokens to use, and highly restrictive system prompts
  4. Make a timer, list timers, make a reminder, add to a list, recite a list, media buttons, etc.
  5. ~1B parameters
  6. Let’s say 65% reliable
  7. Yes!  Just like the voice models!!
  8. I know, more self-reflecting LLM garbage…

Minecraft Recovery Bundle

Yep, I’m an adult who enjoys playing Minecraft.  Now that this is out of the way…

I enjoy playing in survival mode and building farms for various resources, carving out a base into a mountain side, collecting hard to find items, building something of a fortress to house my resources and “pets.”  Sometimes my kids will join my “world” and help work with me on some project – or just want to do their own thing.  When they do, I like having enough resources so they can build whatever it is they want.  As I’m out exploring or gathering resources, sometimes I’ll end up in a bit of trouble or just be a few materials shy of accomplishing what I need.  For that reason, I have a special bundle I keep in my ender chest stocked with the kinds of things I might need to help me with some common problems or, in a pinch, get me out of a real jam.

Here’s what I keep in that bundle along with the uses for those materials:

Qty | Item | Uses
1 | Hopper | Helping unload, sort things
1 | Arrow | Using with a bow enchanted with Infinity
1 | Crafting Table | Easy access to crafting
1 | Ender Chest | Easy access to organized inventory
1 | Chest | Chest or, with the shulker shells, a shulker box
2 | Shulker Shell | Shulker box
1 | String | Making another bundle
3 | Leather | Bundle or ghast harness
1 | Golden Apple | Healing
1 | Nametag | Naming and preventing a mob from despawning
1 | Anvil | Adding enchantments to equipment or using a nametag
3 | Spruce Wood | Crafting many different things
2 | Ice | Portable water
1 | Gold Block | Crafting gold boots to avoid piglin hassles
3 | Glass | Water bottles to duplicate water, ghast harness
2 | Trap Door | Entering end portals, crawl mining
3 | Wool | Bed or ghast harness
1 | Respawn Anchor | Creating a respawn location deep in the Nether
2 | Glowstone | Powering the respawn anchor
1 | Lodestone | Marking a location for use with a compass
1 | Lead | Leading or trapping a mob
1 | Pointed Dripstone | Trap, mob farm, or duplicating water
1 | Dripstone Block | Duplicating water, making mud or clay
1 | Redstone Block | Compass
4 | Iron Block | Iron golem, iron tools, shears, flint and steel
1 | Amethyst Cluster | Spyglass, brush
2 | Copper Block | Brush, copper golem
1 | Feather | Brush
1 | Pumpkin | Iron golem, snow golem, copper golem, carved pumpkin, pumpkin seeds
2 | Snow Block | Snow golem
1 | Dried Ghast | Flying safely
1 | Flint | Flint and steel
1 | Eye of Ender | Ender chest
1 | Bone Block | Speeding plant growth
4 | Spruce | Growing a large spruce tree
4 | Dirt | Growing a large spruce tree, food
1 | Carrot | Food, growing food
60 items total

The most common things I’ll use this bundle for are:

  • Quickly get a hopper, ender chest, or make an extra bundle to help with inventory management
  • The crafting table for quickly crafting something
  • Using the one arrow with my “infinity bow”

A bundle lets me store items, but I have to pull out everything placed into the bundle after the desired item, which can make rooting around deep inside something of a hassle.

It’s extremely rare for me to dig any deeper into this particular bundle past the string and leather … but, if you’re stuck far away, across treacherous territory, deep in the nether, deep in a hole, underground, lost, or need to save a location or mob, this would be a very good pack to have around.

Keychron Keyboard Bluetooth Won’t Work

From dipping my toes into the mechanical keyboard subreddit, it seems some people look down on Keychron keyboards.  It was pricier than other mechanical bluetooth keyboards, but I like being able to reassign keys, I like having nifty RGB lights, and it seemed to have very good reviews.  Sure, perhaps an artisanal, grass fed, locally sourced, single origin, free range, ethically sourced mechanical keyboard would be better or cheaper… but this keyboard arrived quickly, looks good, and worked immediately out of the box, at a price I was willing to pay.

Anyhow, if you’re here, it’s because something went wrong.

  • Symptoms:
    • At first, my Keychron K10 Pro keyboard stopped being able to use the shift keys to type capital letters or the symbols on the number keys.  Doing a factory reset on the keyboard fixed that, though I then had to re-assign the special keys (screenshot, RGB changes, media keys).  Unfortunately, the keyboard then wouldn’t work over bluetooth.
  • What I tried:
    • I tried pretty much every combination of starting/restarting the board, flipping between USB/cable and BT, re-flashing and updating the keyboard firmware, then the keyboard bluetooth firmware, turning the PC bluetooth on and off, restarting the computer several times, and reassigning the keys using the launcher.
  • What worked:
    • One of the various troubleshooting pages suggested that I try FN + J + Z to factory reset the keyboard.  Others suggested FN + 1, FN + 2, or FN + 3.  After a little while I thought – wait a second… why don’t I try FN + 2 or FN + 3?  In doing so, I saw the bluetooth name for the keyboard pop up on the computer!  I guess for some reason the keyboard is only recognized on FN + 2 or FN + 3.  I don’t really know why this worked, but I’m happy the keyboard is back.

I hope this helps someone else (or perhaps… future me!)

Tap Light Focus Timer System

I’ve been procrastineering on a “sticky note timer” which would incorporate an e-ink display, be portable, be updatable via WiFi, show me what I should be working on, and flash lights at me to give me a sense of movement, time passing, and urgency.  Sometimes I use the word “procrastineering” to refer to when I start to spiral on a project and end up in analysis paralysis.  But I think it is more appropriately used when I’m doing a deep dive on a project while I really have something much more important or urgent I should be working on.

A long time ago I added a few components to an off-the-shelf dollar store tap light and turned it into a game buzzer.  While the sticky note timer project was marinating / incubating1 in the back of my brain, I realized that maybe I don’t need or even want something that high-tech.  Maybe what I need is something dead simple?  As cool as the sticky note timer project is – and it really is neat – there are a lot of pieces to the puzzle and a fair bit of maintenance that goes along with it once it’s finished.  You have to connect to it, upload a list, set up timers, etc.

I finally decided on something not so easily adjustable, but still flexible in its simplicity.  Rather than making the setup (adding / updating / uploading lists to a timer) something I have to do in order to start the timer, what if I made it part of the timing?

First, let’s look at the setup: a dollar store tap light, which includes a lot of handy parts – a battery holder, a push button switch, several springs, and a simple, attractive enclosure.


On the far left is a basic off-the-shelf dollar store tap light.  Next to it are two others I had previously modified to work as game / timer buzzers.2  The last picture is the wiring diagram, except that I wired the ATTiny chip to the positive wire coming from the button switch.  Whenever I hit the button, it toggles the circuit on and off.

Using some parts from my electronics bin3, I cobbled together a prototype on a breadboard that would do the following when the button was hit (a rough sketch of this sequence follows the list):

  • Turn orange for 1 minute and beep 3 times in the last 3 seconds
  • Beep once more and turn green for 12 minutes, then fade from yellow through orange over the last 3 minutes
  • Flash red and beep three times after 15 minutes had elapsed (12 minutes of green and 3 minutes of color fading)
  • Turn off, go to a low power mode, and then wake up long enough to flash blue every 8 seconds
  • After 5 minutes, flash green and beep twice
  • Then keep repeating the 8-second blue flashes, plus the green flash and beeps every 5 minutes
Animation of LED timer button
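Here’s a rough CircuitPython-style sketch of that sequence, purely for illustration – the real build runs on an ATtiny45 programmed separately, and this version assumes a board with an onboard NeoPixel and a piezo buzzer on pin A0, with the button handling and deep-sleep power saving left out:

    # Illustrative reconstruction of the light/beep sequence (not the
    # actual ATtiny45 firmware).  Assumes board.NEOPIXEL and a piezo on A0.
    import time
    import board
    import neopixel
    import pwmio

    pixel = neopixel.NeoPixel(board.NEOPIXEL, 1, brightness=0.4)
    buzzer = pwmio.PWMOut(board.A0, frequency=880, duty_cycle=0)

    ORANGE, GREEN, YELLOW = (255, 80, 0), (0, 255, 0), (255, 200, 0)
    RED, BLUE, OFF = (255, 0, 0), (0, 0, 255), (0, 0, 0)

    def beep(times):
        for _ in range(times):
            buzzer.duty_cycle = 32768   # ~50% duty cycle = tone on
            time.sleep(0.15)
            buzzer.duty_cycle = 0
            time.sleep(0.15)

    def show(color, seconds):
        pixel.fill(color)
        time.sleep(seconds)

    show(ORANGE, 57)                    # 1 minute planning window...
    beep(3)                             # ...with beeps in the last few seconds
    time.sleep(2)
    beep(1)
    show(GREEN, 12 * 60)                # 12 minute work block
    show(YELLOW, 90)                    # stand-in for the 3 minute yellow-to-orange fade
    show(ORANGE, 90)
    for _ in range(3):                  # red alert at the 15 minute mark
        show(RED, 0.3)
        show(OFF, 0.3)
    beep(3)

    while True:                         # idle reminders until the button resets things
        for _ in range(5 * 60 // 8):    # blue blip roughly every 8 seconds
            show(BLUE, 0.2)
            show(OFF, 7.8)
        show(GREEN, 1)                  # every 5 minutes: green flash and two beeps
        beep(2)
        pixel.fill(OFF)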

You’re probably wondering – what’s with all these timers and lights and beeps?  Here’s how I use them:

  • Place and slap the button to get going
    • I put my phone on my desk and the timer right on top of my phone.  It’s a big 4″ diameter timer and covers the phone pretty well.  I can’t pick up my phone without seeing this timer ticking down.  This is a huge difference between a phone app and a physical thing standing between me and my phone.  There are some web browser based apps – but these don’t really work for me.  Either I have to keep that window open and on top or … I’ll forget it exists.  This timer is right there, front and center, on my desk and lit up no matter where my desktop might take me.
    • Plus, it’s actually a little therapeutic to slap the tap light.  Pushbutton switches like this are built to take a bit of abuse and the physical action of hitting the light is a lot of fun.
  • Orange for 1 minute
    • This is the replacement for the “maintain / update a list.”  Instead of having to fuss with a list, I’ve dumped myself directly into work.  I’m suddenly racing the clock for 60 seconds to write all the things I want to try and accomplish in the next 15 minutes.  Maybe it’s a few emails, make some phone calls, or write / edit a document.  After 57 seconds, the buzzer will beep three times to let me know that the 15 minute timer is about to start.
    • Or, if you already have a particular task to work on, you could use this time to follow a process like Steven Kotler’s suggestions on tactical transitions to a flow state4.  His three step process is:
      • Anchor your body
        • Practice box breathing.5  You could box breathe 3 times in one minute and have a few seconds left over to psych yourself up.
      • Focus your mind
        • Write down one clear goal.
      • Trigger your ritual
        • Recite a mantra, perform a gesture, start a “work” playlist
  • Green for 15 minutes
    • It’s go time!  Whatever I wrote down, now I’m in a race to work on those things – and those things only.  I can’t let new emails, calls, etc, distract me – that buzzer is going off in 15 minutes.  As the timer closes in on 15 minutes, with just 3 minutes to go, it turns yellow and fades to orange.  If I look up / down and see this, I know I’m in the home stretch and I’ve got to start moving fast to wrap things up.
  • Red alert!
    • Once the 15 minutes is up the light flashes red and beeps to let me know I’m off the hook.  Now, if I’ve already hit peak productivity, I could keep going.  If I got sidetracked, it’s an alert for me to restart the timer and get back to it.
  • Blue flashes, 5 minute green flash and beeps
    • These blue flashes happen once every 8 seconds6 just to keep the timer present in my vision so it doesn’t just disappear into the mess on my desk.
    • If I finished out the 15 minute block of work time and I don’t stop the timer, the 5 minute timer is my reminder to return to my desk, reset the timer, and get going again.
    • If I ended up working past my 15 minute block of work time, the 5 minute beeps still give me a sense of how much time has passed.7
    • Importantly – if I get distracted by a sidequest, one of the beeps every 5 minutes is bound to catch my attention and remind me I’m supposed to restart the timer and get back to work.

So… does it work?  For me, yes!  Here’s why:

  • The hardest part of getting started is getting started.  My tendency is to want to collect all the stuff I’d need, get real comfy, make a list, look up some documents, etc.  This system short circuits all that.  I just need to be able to slap the big button sitting on top of my phone.  If I can manage that, I get 60 seconds to collect myself and then it’s time to rock and roll.  That’s enough time to take some deep breaths, start a playlist, or just sit quietly before I get started.
  • It covers up my biggest distraction.  Unlike an app on the phone or my desktop computer, I can literally cover up my phone with this big damn button.  I won’t see any notifications, and if I want to pick up my phone, I have to actually look at and touch the button – which is itself a reminder to get back to work.
  • It plays into a sense of play, urgency, and my own overdeveloped sense of competitiveness.  I enjoy hitting the timer to turn it on – and I want to beat that 15 minute timer.
  • The 5 minute timer acts like a built in break timer.  If I can get through 15 minutes of work, I can goof off, write a blog post, and without doing anything else that 5 minute timer can bring me back.
  • It includes a “failsafe” to bring me back to the timer if I get distracted by a sidequest.  If I miss the 15 minute timer, there’s another 5 minute timer around the corner.  Even between timers, there’s an intermittent flash of blue light to grab my attention.

The only meaningful “downside” to this timer button for me is there’s no pause button.  However, this isn’t exactly bad.  It helps me really hone in on what’s important and what’s interesting.  If a family member asks me for something or a call comes in, I just need to weigh the benefit of addressing the intrusion against having to restart the timer.  And realistically, if I pause the timer, I’m going to need some time to drop back into “flow” anyhow.

  1. Fermenting?  Festering?
  2. The older ones would flash orange a few times to alert you the game was going to start, turn green, fade from yellow to red, then flash red and buzz after 15 seconds.
  3. I used an ATTiny45 because I had one, but it’s not much more expensive to use an Adafruit Trinket, a buzzer, an RGB/NeoPixel LED, and some wire.  In a subsequent version, I also used a small prototyping board like the Adafruit Perma-Proto boards.
  4. It’s the second slide.
  5. TL;DR: Breathe in slowly through the nose for 4 seconds, hold for 4 seconds, breathe out slowly through the mouth for 4 seconds, hold for 4 seconds, repeat.
  6. Because that’s the longest the little microchip can stay in “deep sleep” between wake-ups to conserve battery life.
  7. I may adjust the program so the first five minutes is 1 beep, the second five minutes is two beeps, etc.

Prusa Lack Stack, LED Lighting, CircuitPython Tweaks

Much like those recipes on the internet where the author tells you their life story or inspiration, I’ve got a lot to share before I get to the punchline of this blog post (a bunch of CircuitPython tweaks).  Edit:  On second thought:

  • Keep the lines of code <250
  • Try using mpy-cross.exe to compress the *.py to a *.mpy file

This is a bit of a winding road, so buckle up.

Admission time – I bought a Prusa1 about three years ago, but never powered it on until about a month ago.  It was just classic analysis paralysis / procrastineering.  I wanted to set up the Prusa Lack enclosure – but most of the parts couldn’t be printed on my MonoPrice Mini Delta, which meant I had to set up the Prusa first and find a place to set it up.  But, I also wanted to install the Pi Zero W upgrade so I could connect to it wirelessly – but there was a Pi shortage and it was hard to find the little headers too.  Plus, that also meant printing a new plate to go over where the Pi Zero was installed, a plate that I could only print on the Prusa, but I didn’t have a place to set it up…

ANYHOW, we’ve since moved, I set up the Prusa (still without the Pi Zero installed), and printed a Prusa Lack stack connector to house/organize my printers.  Unlike the official version, I didn’t have to drill any pilot holes or screw anything into the legs of the Lack tables.

Once the Lack tables were put together, I set about putting in some addressable LEDs off Amazon.  I found a strip that had the voltage (5V for USB power), density (60 LEDs per meter), and length (5 meters) I wanted at a pretty good price (<$14, shipped).  I did find one LED with a badly soldered SMD component which caused a problem, but I cut the strip to either side of it, then soldered it back together.  Faster and less wasteful than a return, at the cost of a single pixel and a bit of solder.

The Lack stack is three tables tall, keeps extra filament under the bottom of the first table, my trusty Brother laser printer on top of the first table, my trusty Monoprice Mini Delta (Roberto) on top of the second table, and the Prusa (as yet unnamed Futurama robot reference… Crushinator?) on top.  Since I don’t need to illuminate the laser printer, I didn’t run any LED’s above it.  I did run a bunch of LED’s around the bottom of the third printer…  this is difficult to explain, so I should just show a picture.

When Adafruit launched their QtPy board about four years ago, I picked up several of them.  I found CircuitPython was a million times easier for me to code than Arduino, not least because it meant I didn’t have to compile, upload, then run – I could just hit “save” in Mu and see whether the code worked.  I also started buying their 2MB flash chips to solder onto the backs of the QtPys for a ton of extra space.  Whenever I put a QtPy into a project, I would just buy another one (or two) to replace it.  There’s one in my Cloud-E robot and one in my wife’s octopus robot.  Now, there’s one powering the LEDs in my Lack stack too.

I soldered headers and the 2MB chip into one of the QtPy’s, which now basically lives in a breadboard so I can experiment with it before I commit those changes to a final project.  After I got some decent code to animate the 300 or so pixels, I soldered an LED connector directly into a brand new QtPy and uploaded the code – and it worked!

Or, so I thought.  The code ran – which is good.  But, it ran slowly, really slowly – which was bad.  The extra flash memory shouldn’t have impacted the little MCU’s processor or the onboard RAM – just given it more space to store files.  The only other difference I could think of was that the QtPy + SOIC chip required a different bootloader from the stock QtPy bootloader to recognize the chip.  I tried flashing the alternate “Haxpress” bootloader to the new QtPy, but that didn’t help either.  Having exhausted my limited abilities, I turned to the Adafruit discord.

I’ll save you from my blind thrashing about and cut to the chase:

  • Two very kind people, Neradoc and anecdata, figured out the reason the unmodified QtPy was running slower was because the QtPy + 2MB chip running Haxpress “puts the CIRCUITPY drive onto the flash chip, freeing a lot of space in the internal flash to put more things.”
    • This bit of code shows how to test how quickly the QtPy was able to update the LED strip.
      • import board
      • import neopixel
      • from supervisor import ticks_ms
      • pixels = neopixel.NeoPixel(board.A1, 300)  # pin and pixel count assumed for illustration
      • t0 = ticks_ms()
      • pixels.fill(0xFF0000)
      • t1 = ticks_ms()
      • print(t1 - t0, "ms")
    • It turns out the stock QtPy needed 192ms to update 300 LED’s.  This doesn’t seem like a lot, until you realize that’s 1/5th of a second, or 5 frames a second.  For animation to appear fluid, you need at least 24 frames per second.  If you watched a cartoon at 5 frames per second, it would look incredibly choppy.
    • The Haxpress QtPy with the 2MB chip could update 300 LED’s at just 2ms or 500 frames per second.  This was more than enough for an incredibly fluid looking animation.
    • Solution 1:  Just solder in my last 2MB chip.  Adafruit has been out of these chips for several months now.  My guess is they’re going to come out with a new version of the QtPy which has a lot more space on board.
      • Even so, I’ve got several QtPy’s and they could all use the speed/space boost.  I’m not great at reading/interpreting a component’s data sheet, but using the one on Adafruit, it looks like these on Digikey would be a good match.
  • The second item was that I kept running into a “memory allocation” error while writing animations for these LEDs.  This seemed pretty strange, since just adding a single very innocuous line of code could send the QtPy into “memory allocation” errors.
    • Then I remembered that there’s a limit of about 250 lines of code.  Just removing vestigial code and removing some comments helped tremendously.
    • The next thing I could do would be to compress some of the animations from Python (*.py) code into *.mpy files, which use less memory.  I found a copy of the necessary compression/compiler program on my computer (mpy-cross.exe), but it appeared to be out of date.  I didn’t save the location where I found the file, so I had to search for it all over again.  Only after giving up and moving on to search for “how many lines of code for circuitpython on a microcontroller” did I find the location again by accident.  Adafruit, of course.  :)  (A quick usage sketch follows this list.)
    • I’m pretty confident I will need to find the link to the latest mpy-cross.exe again in the future.  On that day, when I google for a solution I’ve already solved, I hope this post is the first result.  :)
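For future reference, the mpy-cross usage itself is pretty simple – assuming you’ve downloaded the mpy-cross build that matches your CircuitPython version, and using “animations.py” as a stand-in file name:

  • mpy-cross animations.py
    • This produces animations.mpy alongside the original file
  • Copy the .mpy file to the CIRCUITPY drive (and remove the matching .py) so import animations picks up the compiled version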

The animations for the Lack table are coming along.  I’ve got a nice “pulse” going, a rainbow pattern, color chases, color wipes, and a “matrix rain” / sparkle effect that mostly works.


I started this blog post roughly 7 months2 before I finally hit publish.  After all that fuss, I ended up switching from CircuitPython (which I find easy to read, write, maintain, and update) to Arduino because it was able to hold more code and run more animations.  Besides the pulse animations, rainbow patterns, color chases, color wipes, and matrix rain, it’s also got a halo animation and some Nyan Cat-inspired chases, and it plays the animations at a lower brightness for 12 hours a day (which is intended to be less harsh at night).  I could probably add a light sensor, but I don’t really want to take everything apart to add one component.

  1. The i3 MK3S+!
  2. January 7, 2025

[2025] Google Pixel Boot Loop Fixes

In the 7 years since I wrote a blog post about rescuing my Google Pixel from a boot loop, people have started reaching out to me desperately looking for a way to fix their phones.  This particularly horrible glitch happens at the worst time – when your phone storage is completely full of pictures and videos.  In my case, we were on vacation and not near wifi when I happened to fill up the phone storage and it got stuck in a boot loop.1

Google Support was adamant there was no way to recover my data and that my options were to factory wipe the phone myself or send it to them so they could do it.  Of the resources I found back in 2018, almost nothing survived Google’s march of “progress” and destruction of their own older resources.  The links to Google’s own Pixel support forums and other resources no longer work – and there are no working Archive.org / Wayback Machine links.

Anyhow, if you’re stuck in the same situation as I was – without the resources and links I had back then, perhaps if you dig around you can still find a way?

“If you have a problem, if no one else can help, and if you can find them, maybe you can…”

Before you get started – a warning.  I don’t currently have this problem and am trying to piece together how I fixed it 7 years ago on an older phone, using guides that are no longer accessible.  I haven’t verified any of these links and resources; I’m just some rando on the internet who is trying to help you out because some other internet randos helped me out a long time ago.  Google has a nasty habit of deleting their own resources and shuffling things around.  I don’t know the first thing about installing new operating systems on phones, and following any of these links or suggestions might permanently damage your phone.  But, as I mentioned before… I tried this because Google Support was beyond unhelpful and I was completely out of options.

You’ve been warned

The basic framework for the fix was:

  1. Get the phone to “Recovery Mode” so at least it isn’t boot looping, overheating, and chewing up your battery.
    1. If you have an unlocked phone, or a locked phone from Google which you could theoretically unlock over a terminal, you should be able to get the phone into “Safe Mode,” where it will be able to turn on and access the operating system, but with only limited other apps usable.
  2. Find and install the latest Android ADB (Android Debug Bridge) and FastBoot (an Android diagnostic tool)
    1. I say “latest,” but I’m not an expert and am not currently having this problem.  Perhaps it’s best to use the version which most closely matches your phone?  Anyhow, I installed ADB on the root of my PC and then created a path to it with “SET PATH=%PATH%;c:\adb” so the operating system would know it could access those resources.
  3. Try to find a “Rescue OTA” (Android Rescue Over-the-Air update) for your phone model.
    1. This would essentially be the same update that you might get when you let your phone download and install an update overnight via WiFi – the only difference being that you’ve downloaded it onto your PC and are going to try to shove it back into the phone over a cable.
  4. Try to “sideload” the OTA update back into the phone using ADB / Fastboot (I don’t remember the specific steps to do this – but since these resources are constantly being worked on, I assume someone has written a guide).
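For what it’s worth, my understanding of the typical sideload flow (unverified – double-check against a current guide for your exact model) looks something like this, with the OTA file name being whatever you downloaded:

  • adb reboot recovery (or use the hardware buttons to reach Recovery Mode)
  • In recovery, choose “Apply update from ADB”
  • adb devices
    • Confirms the PC can see the phone (it should show up as “sideload”)
  • adb sideload your-downloaded-ota.zip
    • Pushes the Rescue OTA into the phone over the cable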

If this post helped you out or you found some resources helpful, please let me know so I can update this post and help others.

Good luck!

  1. It was also overheating – which might have been a contributing factor to the boot loop, or caused by the constant booting and looping

Capstan Drives as alternatives to Planetary Gears?

Sometimes I hate the algorithm, and sometimes it shows me cool new robotics / mechanics / gadgets and makers.  Aaed Musa has been working on something called a “Capstan Drive,” which is a rope-driven alternative to gears.  By removing gears and teeth and replacing them with rope, you cut down on noise, eliminate backlash, and get high torque, low inertia, and low cost – with the major costs being a limited range of movement and a vertical path for the rope to travel over.  Aaed’s video is well worth a watch and his blog well worth reading.  But… if you want to get a sense of how the Capstan drive works…

Capstan drive in action

The benefit of a planetary gear is that it’s a very vertically compact method for changing rotational speed and torque, at the cost of complexity.  With a Capstan Drive (I don’t know if this is supposed to be capitalized), the rope needs to be wrapped around the thinner shaft several times to prevent slippage.  As Aaed notes:

One question that I had when first exploring this reducer was “why doesn’t the rope slip if it’s just wrapped around the smaller drum?”. The answer to that question lies in the capstan equation. With each turn of rope on a drum, the amount of friction increases exponentially. With 3-5 turns of rope, there is enough friction for slipping to not be an issue.
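That capstan equation is worth spelling out: the tension ratio is T_load / T_hold = e^(μφ), where μ is the coefficient of friction and φ is the total wrap angle in radians – so it really does grow exponentially with each extra turn.  A quick back-of-the-envelope in Python (the friction coefficient here is just an assumed, plausible value, not a measured one):

    # Capstan equation: tension ratio grows exponentially with wrap angle.
    import math

    mu = 0.3  # assumed coefficient of friction for cord on a drum (illustrative only)
    for turns in (1, 3, 5):
        phi = 2 * math.pi * turns      # total wrap angle in radians
        ratio = math.exp(mu * phi)     # T_load / T_hold
        print(f"{turns} turn(s): holding force multiplied ~{ratio:,.0f}x")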

Aaed indicated he was using Dyneema DM20 cord as it has almost no stretch to it.  I wonder if something like fishing line would work?


Fixing a coiled zipper that won’t close

I have a favorite soft pencil case made from faux leather that I’ve been using for more than 20 years, but the zipper had gotten finicky and started to not close.  At first it had a problem zipping closed on just one side, but today it wouldn’t close at all – the slider was just moving back and forth without closing anything.

After a quick search, I found a video by UCAN Zipper USA with a solution that fixed it immediately.  The narrator said the problem was that the zipper slider had started to “open a little bit” with repeated use.  I suspect the slider on my pencil case opened a little from vigorous use, or from accidentally zipping it over something that had been caught in the zipper teeth.

The solution was quite simple:

  • Inspect the closing side of the zipper to see whether one side is more “open” or riding higher than the other.
  • Using pliers, gently clamp that side down just a little, then try to open/close the zipper.  If it doesn’t quite engage yet, clamp down a little more.
Gently clamp the rear / closing side of the zipper where it appears to be loose / open / ride higher

That’s it!  It worked like a charm for me.  While this worked for a coiled zipper, I suspect it would also work for a molded tooth zipper as well.

Slow Progress…

… is still progress.

I designed a planetary gear assembly, more to see whether parts this small would even turn out than to actually make a working component.  The gears are about 3 mm thick, but half of that is the larger part.  I forgot that you can’t have a two-level gear mesh against another identical gear, so these didn’t move at all.

A test planetary gear assembly

I reprinted the parts, this time increasing the center hole size and also removing the teeth from the larger side.  It kinda works, but it’s very finicky.  This might be a side effect of these gears being very thin and the teeth very small.  I think it’s probably worth sacrificing gear ratio in favor of larger, more consistent teeth.

Small improvements

The OpenSCAD code is a mess: lots of vestigial code remains, lots of non-working parts are commented out, and it all just needs more comments in general.  I hate looking at it.  But, as one of my favorite memes goes…

I mean, he’s got a point

Thermal QR Code Sticker Success!

I could not be happier with how this little thermal label printer turned out!  The biggest use case I had for it was to create small QR codes I could stick in my various maker notebooks, so that I could easily connect specific pages in my notebooks back to blog posts – essentially being able to embed unlimited digital resources into a simple page.


Basically, I arranged the QR codes and text in Inkscape, exported to a flat JPG, saved to my phone, and then printed.

The failed prints you see were printed at Dense, Medium, and then Light, but all came out useless. I realized it was because I had exported the image at 72 DPI, which meant that once the image was exported to either PNG or JPG, the image had gray aliasing between what should have been sharp black and white edges.  This caused the printer to treat the grays as black, which meant the black areas were obscuring the lighter areas, making it harder to scan the images.

I exported at 900 DPI and it printed on “Light” flawlessly.  Each QR code sticker is only 12.5mm square, I can fit 8 of them per sticker sheet, each includes a short label, and they can be read by my phone very easily.  Now, I don’t think a 900 DPI image is required to print fine details, but I figured why the hell not give it a shot?

The first website QR code generator I tried was actually a sneaky one.  Rather than creating a QR code for the destination, it ran the URLs through its own URL shortener, then output that QR code.  I chose that generator since it permits you to select the desired error correction level, but the result was basically useless to me.  If I wanted a QR code pointing to a short-code, I would have pointed it at my own short URL service.  While an unshortened URL will create a larger or denser QR code, it has the benefit of being somewhat transparent.  When you scan an unshortened URL, your scanning app can show you the destination that would be hidden by a URL shortener.  I ended up using this website to generate the QR codes, which allows you to specify the URL, choose from various error correction levels, and then download in a variety of formats.
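If you’d rather skip the web generators entirely, the same thing can be done locally – here’s a rough sketch using the Python qrcode package (the URL, error correction level, and sizes are just examples):

    # Generate a QR code locally with a chosen error correction level
    # (qrcode package assumed; URL and sizing are placeholders).
    import qrcode
    from qrcode.constants import ERROR_CORRECT_M

    qr = qrcode.QRCode(error_correction=ERROR_CORRECT_M, box_size=10, border=4)
    qr.add_data("https://example.com/some/long/unshortened/post-url")
    qr.make(fit=True)

    img = qr.make_image(fill_color="black", back_color="white")
    img.save("qr.png")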

I was able to pack detailed, unshortened URLs into just 12.5 mm squares, plus 4.5-point font labels.  I might be able to print smaller than this, but I don’t have any pressing need to do so.  I’ve seen some suggestions that a QR code should be printed at least 10mm square, and this is just above that limit.  However, I suspect those guidelines are for commercial use, whereas these codes are likely to be rarely scanned and don’t need to be optimized for widespread use – just for my own personal benefit.
