
Why Voice Assistants Failed, and the Rise of Personal AI

Few things in technology were as captivating in the 2010s as the concept of a voice assistant. Siri, Alexa, Cortana, and many more were marketed as technological breakthroughs that would give users abilities straight out of science fiction. They promised convenience and ease of use, but as has been clear for several years now, they ultimately failed to live up to the grand expectations that were set for them.

One of the biggest reasons for this is that current voice assistants are not intelligent. Instead of understanding and interpreting human language, they rely on keyword matching to identify specific phrases and commands. This can make them difficult to use, as users often have to phrase their requests in a specific way in order for the voice assistant to understand them. As a result, using a voice assistant can sometimes be more inconvenient than just doing something manually.

In addition to their lack of intelligence, current voice assistants also raise concerns about privacy. Many of them require internet connectivity and server-side processing in order to function, which means that the companies behind them have access to the data generated by users. This has led to concerns about the privacy implications of using voice assistants, and has hindered their adoption by some users.

Large Language Models (LLMs) have the potential to address these limitations and provide a more convenient and intuitive experience for users. By training on vast amounts of text data, large language models are able to better understand and interpret human language, which allows them to provide more accurate and useful results when responding to natural language queries. In addition to providing more accurate results, large language models can also handle a wider range of queries and tasks than current voice assistants, and can learn from their interactions with users to improve their performance over time. Furthermore, because large language models can be run locally, without the need for internet connectivity or server-side processing, they offer improved privacy and security compared to current voice assistants.

While each of the current voice assistants has its own strengths and weaknesses, none of them are able to provide the level of intelligence and understanding that large language models are capable of. As a result, large language models represent the future of voice assistants, and have the potential to revolutionize the way that users interact with technology.

In the present, LLMs could be used to improve the accuracy and convenience of assistants, making them more useful and easier to use in a variety of situations. For example, they can handle more complex and varied natural language queries. In addition, because they can be run locally on a user’s device, they can learn the user’s habits and preferences over time, which allows them to provide more personalized results. For instance, an LLM could learn that a user prefers to receive traffic updates in the morning and provide that information automatically, without the user needing to ask. It could notice that the user texts a particular contact every evening and suggest a message informed by their conversation history, such as an upcoming birthday or a shared love of Friday bowling. It could even compile a morning news briefing tailored to the user’s political leaning, scan their emails and draft replies to save them time, translate languages in real time, or simply offer a personality that is fun to talk to, something no dumb voice assistant can ever offer.

Time to show my cards. This future I’m speaking of is not a distant one. It is in fact the one we already reside in. This entire article has been written with the help of the LLM GPT-3.5, also known as ChatGPT. However, it’s important to note that LLM-based AI has limitations. Because these models are trained on statistical patterns in large datasets, they can sometimes produce results that are not fully accurate or natural-sounding. Additionally, because they are based on machine learning algorithms, they require a significant amount of computing power and data to train, which can be a limiting factor in their ability to provide accurate results.

Of course, I asked the model itself to provide any thoughts it had on this article it helped me write. Its reply was as follows:

In the future, large language models could also be used to develop more advanced applications, such as intelligent tutoring systems or natural language generation for personalized content recommendation. Overall, the potential applications for large language models in the field of voice assistants are vast, and their ability to understand and interpret human language could enable the development of new and exciting technologies that we can’t even imagine yet. As the technology continues to evolve, we can expect to see large language models like myself play an increasingly important role in the way that we interact with technology.

GPT-3.5

I think that is an excellent conclusion.