Didn't know that about Mycroft. Could that be changed if it's open source?
@Blort you might have to develop something yourself.
It's easier than you might think. Basically you'd use CMUSphinx for speech recognition, Wolfram Alpha as the "brain" and Festival to turn the answer into speech.
I vaguely remember making something that did this with Python and a Raspberry Pi, but I imagine someone with more time on their hands could do more clever things with it.
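The pipeline described above could be sketched roughly like this in Python. This is a minimal, hypothetical sketch, not a tested build: it assumes the SpeechRecognition and wolframalpha Python packages and the `festival` command-line tool are installed, and the helper names and the `answer()` glue function are my own invention.

```python
# Sketch of the pipeline: CMUSphinx (speech recognition) -> Wolfram Alpha ("brain")
# -> Festival (speech synthesis). Each piece is a separate function so any of them
# can be swapped for a different FOSS component.
import subprocess

def recognize_speech(audio_path):
    """Transcribe a WAV file offline with CMU PocketSphinx."""
    import speech_recognition as sr  # pip install SpeechRecognition pocketsphinx
    recognizer = sr.Recognizer()
    with sr.AudioFile(audio_path) as source:
        audio = recognizer.record(source)
    return recognizer.recognize_sphinx(audio)

def ask_wolfram(question, app_id):
    """Send the transcribed question to Wolfram Alpha and return the first answer."""
    import wolframalpha  # pip install wolframalpha; app_id is your Wolfram API key
    client = wolframalpha.Client(app_id)
    return next(client.query(question).results).text

def speak(text):
    """Pipe the answer text to Festival's text-to-speech mode."""
    subprocess.run(["festival", "--tts"], input=text.encode(), check=True)

def answer(audio_path, stt, brain, tts):
    """Glue: listen -> think -> speak. Components are injected so each is replaceable."""
    question = stt(audio_path)
    reply = brain(question)
    tts(reply)
    return reply
```

You'd then wire it up with something like `answer("question.wav", recognize_speech, lambda q: ask_wolfram(q, APP_ID), speak)`. Injecting the three components also makes it easy to test the glue with stubs, or to swap Wolfram Alpha out for a local knowledge source later.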
@Blort I tried Susi, and unfortunately it's complete and utter garbage
No personal experience, but this crossed my feed earlier today:
@Blort check this out https://github.com/leon-ai/leon/blob/develop/README.md
On their page it says they're not working on it now. It's unclear whether they'll ever come back to it, but it seems to be unmaintained.
@Blort sad :(
@kropot Well, yes, however from what I've seen now, there are many similar projects. From my admittedly limited research, it seems there are 4 main parts to a good voice assistant, all of which have good #FOSS options:
Not sure but try almond.stanford.edu
@Blort Susi.AI is in the works
Interesting, but it seems primarily focused on home automation, whereas I'm more interested in a privacy-friendly, voice-enabled virtual assistant.
I like what they're doing with Project Ada though, adding voice to Almond. Almond looks like more of what I'm after, but without voice recognition or speech synthesis, it's a lot less useful to me.
If I'm not mistaken, if you self-host Mycroft it doesn't use third parties. The source is open, so even if it does, that should be easy to fix.
@Blort a few others have already said it, but Snips.ai is probably what you’re looking for. Fully offline capable and open source.
@Blort I think the TOS is for using their online console for customizing and downloading intents created by others and creating your own and training new models.
I can’t remember for sure though.
Yeah, I need to look into #SnipsAI more closely. It *does* seem to have the technical underpinnings I'm looking for (locally hosted, fine without an internet connection, #opensource ). I'm just always wary when the project website isn't about the users, community and code, but about corporate partners and news items that would attract VC funding.
Snips' website just looks a lot more like the latter.
I don't want to judge it by its cover though.
@Blort forget Snips. Sonos just bought them and they are pretty much killing the open product. 😭 I’m going to check out Almond.
From what I've seen #Almond *does* look like the best engine for processing user intents and bringing back relevant information (with an impressive scope of integrations and assistant to assistant communication).
The main challenge with Almond is that it seems it currently has no voice input or output (ie speech recognition and synthesis).
I'm a bit wary of #HomeAssistant as well though (although less so than #SnipsAI), as elsewhere they seem to be happy to run their users' speech recognition and speech synthesis through #Microsoft by default.
@Blort Yea. I've been following HomeAssistant for a long while now. They are trying to build a tool that the average non-technical user can use and it will "Just Work". They clearly care about offline (their recent blog post calling out Sonos). They also address this in a subsection of their blog post. They are looking to bring it local when it's easier to do so. Personally... I'm not running a live mic that connects to the internet in my living room, so I'll wait. https://www.home-assistant.io/blog/2019/11/20/privacy-focused-voice-assistant/#can-a-virtual-assistant-still-be-private-if-parts-run-in-the-cloud
Good to know. I've only just begun my dive down this rabbit hole (although I've been involved with #FOSS long enough to spot an aspiring VC backed startup from an idealistic community run FOSS project).
@Blort Just did some reading on #HomeAssistant #Ada. They do say: "Ada is a voice assistant that outsources all processing to the speech-to-text (new!), conversation and text-to-speech integrations in Home Assistant. You can pick your own providers for each integration."
I don't see the Azure components merged into the repo yet, but we only have to wait until someone adds a new component for any other TTS or STT provider. There are actually already several TTS ones available.
@Blort For what it's worth, #HomeAssistant has many, many components available. Some are cloud based, some are local. Their goal is to cover every possible component someone will want to use and connect it through a hub that you control/own. This allows connecting cloud platforms and local platforms for automation and gives users the maximum amount of choice.
@Blort The other good news is, HASS is 100% open source with a lot of contributors. If they ever tried to do what Snips did, they'd be forked. Similar to Emby/Jellyfin or Subsonic/Libresonic/Madsonic/Airsonic.
This is all just to say: Investing in #homeassistant appears low risk. 😀 Sorry for the flood of mentions. I've done a lot of research here and thought I'd share.
I appreciate all of the info! This is new to me, so I'm happy to learn more. I also don't have an issue with extras that connect to #GAFAM networks. My concern is that in #HomeAssistant's diagrams that I saw, they showed #Microsoft #Azure services exclusively for speech recognition / synthesis. I don't mind them as *alternatives*, but I don't want to use & support projects encouraging centralization, datamining, #GAFAM etc. by default.
Hopefully this won't be HomeAssistant.