The first AI system to decode free-form imagined speech. Think naturally, in your own words, about any topic.
View a demo of MindSpeech decoding thoughts into text in real time
Proprietary methods for extracting semantic information from non-invasive neural patterns
Custom transformer variants optimized for neural-to-text translation
Data collection and augmentation methods that achieve generalization from limited samples
Current benchmarks (2025)
MindSpeech is the first AI model to demonstrate that it is feasible to decode free-form thoughts. It works by extracting semantic information from brain data and then leveraging the power of current LLMs to produce text from the semantic embeddings (a simplified sketch of this pipeline appears below).
MindSpeech is trained purely on non-invasive brain data. While previous methods have been limited to individual words or phonemes, we have demonstrated a path towards genuinely varied and open-ended imagined speech.
We expect accuracy to keep improving as we significantly scale the size of the brain dataset and introduce new types of brain data.
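To make the two-stage idea concrete, here is a minimal, hypothetical sketch in PyTorch. It is not MindSpeech's actual architecture: the module names (NeuralSemanticEncoder, SemanticToPromptAdapter), the 64-channel input, the embedding sizes and the soft-prompt projection are all assumptions, chosen only to illustrate mapping a window of brain data to a semantic embedding and then into an LLM's input space.

```python
# Hypothetical sketch (not MindSpeech's actual code): a two-stage pipeline that maps
# a window of non-invasive brain recordings to a semantic embedding, then projects
# that embedding into "soft prompt" vectors an LLM could condition on.
import torch
import torch.nn as nn

class NeuralSemanticEncoder(nn.Module):
    """Transformer encoder over (time x channels) brain data -> one semantic vector."""
    def __init__(self, n_channels=64, d_model=256, n_layers=4, n_heads=8, sem_dim=512):
        super().__init__()
        self.input_proj = nn.Linear(n_channels, d_model)           # per-timestep projection
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.to_semantic = nn.Linear(d_model, sem_dim)              # pooled -> semantic space

    def forward(self, x):                        # x: (batch, time, channels)
        h = self.encoder(self.input_proj(x))     # (batch, time, d_model)
        return self.to_semantic(h.mean(dim=1))   # mean-pool over time -> (batch, sem_dim)

class SemanticToPromptAdapter(nn.Module):
    """Expand one semantic embedding into k soft-prompt vectors in an LLM's embedding space."""
    def __init__(self, sem_dim=512, llm_dim=4096, n_prefix_tokens=8):
        super().__init__()
        self.n_prefix_tokens = n_prefix_tokens
        self.proj = nn.Linear(sem_dim, llm_dim * n_prefix_tokens)

    def forward(self, sem):                       # sem: (batch, sem_dim)
        out = self.proj(sem)
        return out.view(sem.size(0), self.n_prefix_tokens, -1)  # (batch, k, llm_dim)

if __name__ == "__main__":
    brain_window = torch.randn(2, 200, 64)        # 2 trials, 200 time steps, 64 channels
    encoder, adapter = NeuralSemanticEncoder(), SemanticToPromptAdapter()
    prefix = adapter(encoder(brain_window))
    print(prefix.shape)                            # torch.Size([2, 8, 4096])
```

In a sketch like this, the adapter's output would be prepended to the LLM's token embeddings so that the language model conditions the text it generates on the decoded semantics.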
MindSpeech vs other approaches
Real-world applications ready to transform industries
The first AI system to demonstrate telepathic interaction from a human brain to an AI assistant.
View a real-time demo of MindGPT working with ChatGPT
MindGPT is the first AI model to demonstrate telepathic interaction between a human and an LLM-based assistant, achieved using only a non-invasive headset.
Previous brain interfaces relied on non-intuitive methods of interaction. The ability to communicate purely through imagined language is a holy grail of human-AI interaction.
MindGPT also demonstrated, for the first time, that it is possible to extract semantic information from entire sentences, an important step towards consumer devices that enable seamless thought-to-text (a toy illustration of sentence-level semantic retrieval follows below).
Our data collection paradigm is scalable to larger datasets and a wider range of sentences.
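As a purely illustrative example (not MindGPT's method), one simple way to use sentence-level semantic embeddings is retrieval: place brain-derived embeddings and text-sentence embeddings in a shared space and return the closest candidate sentence. Everything below is invented for the sketch: the function names, the 512-dimensional embeddings and the toy candidate list.

```python
# Hypothetical illustration: recover sentence-level meaning by retrieving the candidate
# sentence whose text embedding is closest to a brain-derived embedding.
import numpy as np

def cosine_sim(a, b):
    """Cosine similarity between one vector a and each row of matrix b."""
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return b @ a

def retrieve_sentence(brain_embedding, candidate_embeddings, candidate_sentences):
    """Return the candidate sentence closest to the brain embedding, plus its score."""
    scores = cosine_sim(brain_embedding, candidate_embeddings)
    return candidate_sentences[int(np.argmax(scores))], float(np.max(scores))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    sentences = ["turn on the lights", "what's the weather today", "play some music"]
    text_embeddings = rng.normal(size=(3, 512))                 # stand-in for a real sentence encoder
    decoded = text_embeddings[1] + 0.1 * rng.normal(size=512)   # a noisy "brain" embedding
    print(retrieve_sentence(decoded, text_embeddings, sentences))
```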
Select any element on a GUI just by intending to do so.
See a user select GUI elements with their mind in VR
Making selections on a GUI is a universal aspect of human-computer interaction. We click with our mouse, tap with our finger or select with a VR controller.
Being able to make clicks without any motor movement or other action on the user's part is game-changing for accessibility, and it is equally important for future hands-free interaction in AR and VR.
With MindClick, we pioneered a system that decodes a naturally occurring brain signal, the expectancy wave, which arises when you intend to do something (a simplified detection sketch appears below).
While other brain interfaces exist for making selections, they either require flickering visual elements on the screen to evoke a detectable brain signal or rely on some other non-intuitive means of interaction.
With MindClick, we introduced a fully end-to-end system that solves these problems, opening the door to the future of human-computer interaction.
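For illustration only, here is a hypothetical sketch (not the MindClick implementation) of how an expectancy-like slow potential might be detected: it averages each channel over a late time window and trains a logistic regression to separate "intend to select" epochs from rest epochs. The sampling rate, time window, channel count and synthetic data are all assumptions.

```python
# Hypothetical sketch: detect a slow expectancy-like negativity with simple features
# (mean amplitude per channel in a late window) and a logistic-regression classifier.
import numpy as np
from sklearn.linear_model import LogisticRegression

FS = 250                                         # sampling rate in Hz (assumed)
WINDOW = slice(int(0.5 * FS), int(1.5 * FS))     # 0.5-1.5 s after the cue (assumed)

def epoch_features(epochs):
    """epochs: (n_trials, n_channels, n_samples) -> mean amplitude per channel in WINDOW."""
    return epochs[:, :, WINDOW].mean(axis=2)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    n_trials, n_channels, n_samples = 200, 8, 2 * FS
    # Synthetic data: "intent" trials get an added slow negative drift late in the epoch.
    rest = rng.normal(size=(n_trials, n_channels, n_samples))
    intent = rng.normal(size=(n_trials, n_channels, n_samples))
    intent[:, :, WINDOW] -= 0.5                   # crude stand-in for an expectancy wave

    X = epoch_features(np.concatenate([rest, intent]))
    y = np.concatenate([np.zeros(n_trials), np.ones(n_trials)])

    clf = LogisticRegression().fit(X, y)
    print("training accuracy:", clf.score(X, y))  # a real system would cross-validate
```

A real system would of course use recorded, preprocessed brain epochs and cross-validated evaluation rather than synthetic data and training accuracy.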