Our AI Models

Three world-first capabilities

MindSpeech

The first AI system to decode free-form imagined speech. Think naturally, in your own words, about any topic.

Watch MindSpeech Work

View a demo of MindSpeech decoding thoughts into text in real time

30%
Accuracy on Open Vocabulary
<1s
Processing Latency
250+
Topic Domains Tested

Three Key Innovations

01

Neural Signal Processing

Proprietary methods for extracting semantic information from non-invasive neural patterns

02

Architecture Design

Custom transformer variants optimized for neural-to-text translation (see the sketch after this list)

03

Training Paradigm

Data collection and augmentation methods that achieve generalization from limited samples
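To make innovation 02 concrete, here is a minimal, purely illustrative sketch of how a transformer variant could map a window of non-invasive neural signals to a semantic embedding. All names, layer sizes, and signal shapes (EEGSemanticEncoder, 64 channels, 500 Hz, 768-dimensional embeddings) are assumptions for the sketch, not details of the actual MindSpeech architecture.

```python
import torch
import torch.nn as nn

class EEGSemanticEncoder(nn.Module):
    def __init__(self, n_channels=64, d_model=256, n_layers=4, embed_dim=768):
        super().__init__()
        # Temporal convolution turns raw multi-channel samples into a sequence of tokens.
        self.patchify = nn.Conv1d(n_channels, d_model, kernel_size=25, stride=10)
        layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=8, dim_feedforward=4 * d_model, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        # Projection into the semantic space a downstream language model can consume.
        self.to_semantic = nn.Linear(d_model, embed_dim)

    def forward(self, eeg):                   # eeg: (batch, channels, samples)
        tokens = self.patchify(eeg)           # (batch, d_model, n_tokens)
        tokens = tokens.transpose(1, 2)       # (batch, n_tokens, d_model)
        encoded = self.encoder(tokens)
        pooled = encoded.mean(dim=1)          # simple mean pooling over time
        return self.to_semantic(pooled)       # (batch, embed_dim) semantic embedding

# Smoke test on random data: a 2-second window at 500 Hz across 64 channels.
model = EEGSemanticEncoder()
dummy_eeg = torch.randn(2, 64, 1000)
print(model(dummy_eeg).shape)                 # torch.Size([2, 768])
```

Mean pooling over time keeps the sketch short; a production model would more likely use learned pooling or cross-attention over the encoded tokens.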

Performance Metrics

Current benchmarks (2025)

Accuracy
30%
Share of test results judged commercially viable
User Generalization
6 hours
Time needed to adapt to a completely new user
Topic Coverage
300+
Distinct domains successfully tested
Processing Latency
<1s
End-to-end response time

MindSpeech is the first AI model to demonstrate that it is feasible to decode free-form thoughts. It works by extracting semantic information from brain data and then leveraging the power of current LLMs to produce text from the semantic embeddings.

MindSpeech is trained purely from non-invasive brain data. While previous methods have been limited to individual words or phonemes, we have demonstrated a path towards genuinely varied and open-ended imagined speech.

We believe that accuracy will continue to improve as we significantly scale the size of the brain dataset and introduce new types of brain data.
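As a rough illustration of the two-stage pipeline described above, the sketch below takes a decoded semantic embedding and selects the LLM-proposed phrasing whose text embedding is most similar to it. The helpers llm_propose_candidates and embed_text are hypothetical stand-ins, and the whole flow is an assumption about how semantic embeddings could be bridged to an LLM, not a description of MindSpeech's actual method.

```python
import numpy as np

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def llm_propose_candidates(context: str, n: int = 5) -> list[str]:
    # Hypothetical stand-in for an LLM call that proposes plausible phrasings
    # given the current conversational/topic context.
    return [f"{context} (candidate phrasing {i})" for i in range(n)]

def embed_text(text: str, dim: int = 768) -> np.ndarray:
    # Hypothetical stand-in for a text-embedding model living in the same
    # semantic space the brain-side encoder was trained to predict.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.standard_normal(dim)

def decode_to_text(brain_embedding: np.ndarray, context: str) -> str:
    # Score each LLM-proposed candidate against the decoded brain embedding
    # and return the closest one.
    candidates = llm_propose_candidates(context)
    scored = [(cosine(brain_embedding, embed_text(c)), c) for c in candidates]
    return max(scored)[1]

decoded_embedding = np.random.default_rng(0).standard_normal(768)
print(decode_to_text(decoded_embedding, "thinking about tomorrow's weather"))
```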

How We Compare

MindSpeech vs other approaches

Capability
MindSpeech
Meta Decoder
Neuralink
Academic Best
Free-form thoughts
Unlimited vocabulary
~50 words
New user generalization
Non-invasive
Demonstrated system

Use Cases When Scaled

Real-world applications poised to transform industries as these capabilities scale

🧠

Immediate Applications

  • Silent AI assistant interaction
  • Accessibility for speech disabilities
  • Private communication in public
  • Thought-based note-taking
  • Hands-free device control
🚀

Platform Applications

  • OS-level thought input
  • Universal API for any software
  • New app categories built on thought
  • AR/VR thought interfaces
  • Brain-to-brain communication

MindGPT

The first AI system to demonstrate telepathic interaction from a human brain to an AI assistant.

See It In Action

View a real-time demo of MindGPT working with ChatGPT

3+
Full sentences
<5s
Processing Latency
First
Brain interaction with LLM

Decoding Semantic Information with MindGPT

MindGPT is the first AI model to demonstrate telepathic interaction between a human and an LLM-based assistant, achieved using only a non-invasive headset.

Previous brain interfaces have relied on non-intuitive methods of interaction; the ability to communicate purely through imagined language is a holy grail of human-AI interaction.

MindGPT also demonstrated for the first time that it is possible to extract semantic information from entire sentences, an important step towards consumer devices that turn thought into text seamlessly.

Our data collection paradigm is scalable to larger datasets and more sentences.
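The sketch below illustrates, under stated assumptions, what the interaction loop could look like in code: an imagined sentence is decoded from brain data and handed straight to a chat assistant. decode_imagined_sentence and ChatAssistant are hypothetical stand-ins for the headset decoder and a ChatGPT-style client; no real API calls are made.

```python
from dataclasses import dataclass, field

def decode_imagined_sentence(eeg_window) -> str:
    # Hypothetical stand-in for the brain-to-text decoder (non-invasive data -> sentence).
    return "What is the weather like in London today?"

@dataclass
class ChatAssistant:
    # Hypothetical stand-in for a ChatGPT-style client; a real system would send
    # self.history to an LLM chat endpoint instead of echoing the message.
    history: list = field(default_factory=list)

    def reply(self, user_message: str) -> str:
        self.history.append({"role": "user", "content": user_message})
        answer = f"(assistant response to: {user_message!r})"
        self.history.append({"role": "assistant", "content": answer})
        return answer

def telepathic_turn(eeg_window, assistant: ChatAssistant) -> str:
    thought = decode_imagined_sentence(eeg_window)   # no typing, no speech
    return assistant.reply(thought)

assistant = ChatAssistant()
print(telepathic_turn(eeg_window=None, assistant=assistant))
```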

MindClick

Select any element on a GUI just by intending to do so.

Watch the demo

See a user select GUI elements with their mind in VR

~80%
Accuracy of selections made when the user intends to click
<0.5s
Processing Latency
Any
2D or 3D GUI

Unlocking the telepathic mouse click

Making selections on a GUI is a universal aspect of human-computer interaction. We click with our mouse, tap with our finger or select with a VR controller.

Being able to click without any motor movement or other action on the part of the user is game-changing for accessibility, and equally important for future hands-free interaction in AR and VR.

With MindClick, we pioneered a system that decodes a naturally occurring brain signal, the expectancy wave, which arises when you intend to do something.

While other brain interfaces exist for making selections, they either require flickering visual elements on the screen to evoke a detectable brain signal or rely on some other non-intuitive means of interaction.

With MindClick, we introduced a fully end-to-end system that solves these problems, opening the door to the future of human-computer interaction.
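As a loose illustration of this idea, the sketch below epochs EEG around a hover event, summarizes the slow drift that characterizes an expectancy-type potential, and trains a simple classifier that only emits a click when the drift looks intentional. Sampling rate, channel count, features, and threshold are all assumptions, and synthetic data is used only to keep the example runnable; this is not MindClick's actual pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

RNG = np.random.default_rng(0)
FS = 250                      # sampling rate in Hz (assumed)
EPOCH_S = 1.0                 # one-second epoch while a GUI element is hovered

def slow_drift_features(epoch: np.ndarray) -> np.ndarray:
    # epoch: (channels, samples). Late-window mean minus an early baseline captures
    # the slow negative deflection characteristic of an expectancy-type potential.
    baseline = epoch[:, : FS // 4].mean(axis=1)       # first 250 ms
    late = epoch[:, -(FS // 2):].mean(axis=1)         # last 500 ms
    return late - baseline

def synth_epoch(intend: bool, n_channels: int = 8) -> np.ndarray:
    # Synthetic data purely so the example runs: "intend" epochs get an added
    # negative drift on the first few (nominally fronto-central) channels.
    epoch = RNG.standard_normal((n_channels, int(FS * EPOCH_S)))
    if intend:
        epoch[:4] += -np.linspace(0.0, 2.0, int(FS * EPOCH_S))
    return epoch

# Train a simple intention-vs-rest classifier on labelled hover epochs.
X = np.array([slow_drift_features(synth_epoch(i % 2 == 0)) for i in range(200)])
y = np.array([1 if i % 2 == 0 else 0 for i in range(200)])
clf = LogisticRegression().fit(X, y)

# At run time: hover -> collect epoch -> classify -> emit a click only when confident.
p_click = clf.predict_proba(slow_drift_features(synth_epoch(True)).reshape(1, -1))[0, 1]
if p_click > 0.8:
    print(f"click emitted (p={p_click:.2f})")
```

Gating the click on a high predicted probability is one simple way to trade a little sensitivity for far fewer accidental selections.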