Since the tree itself only has a low-powered MCU, we need another machine to act as a listener.
The architecture is:
- A machine in my office runs the Whisper model and listens for words.
- If certain keywords are detected, it maps them to a corresponding command (e.g. run a theater chase sequence in green).
- It sends that command to the tree over the network.
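The keyword-matching step can be sketched roughly like this. This is a minimal illustration, not the actual implementation: the command names and keyword table here are hypothetical, and the real code would forward the matched command to the tree over the network (e.g. via `NWConnection`).

```swift
import Foundation

// Hypothetical keyword-to-command table; the command strings are
// illustrative placeholders, not what the tree firmware actually accepts.
let commands: [(keywords: [String], command: String)] = [
    (["chase", "green"], "theater_chase green"),
    (["rainbow"], "rainbow_cycle"),
    (["off"], "lights_off"),
]

/// Return the first command whose keywords all appear in the
/// Whisper transcript, or nil if nothing matches.
func matchCommand(in transcript: String) -> String? {
    // Split the transcript into lowercase words, dropping punctuation.
    let words = Set(
        transcript.lowercased()
            .components(separatedBy: CharacterSet.alphanumerics.inverted)
            .filter { !$0.isEmpty }
    )
    return commands.first { entry in
        entry.keywords.allSatisfy { words.contains($0) }
    }?.command
}
```

Requiring *all* keywords of an entry to appear makes the matching order-independent, so "green theater chase" and "do a theater chase in green" both resolve to the same command.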
For now I’m running it on iOS and macOS, so I wrote the current implementation in Swift. The code is still at “hack” status, but it’s working well!
Now it’s time to test it when talking to coworkers at Automattic.