AI: Less Talk, More Understanding
AI has become the main subject of internet hype in recent years, much like cryptocurrency a decade ago. Large language models can do things that seemed impossible not long ago: write code, produce text, explain complex phenomena. They look almost intelligent.
But the intelligence of modern AI built on LLMs is a widespread misconception. Anyone who has used them extensively understands: they don’t actually know anything. They can generate text that sounds confident and logical but is essentially complete nonsense with no connection to reality. In familiar domains, a person will notice. In unfamiliar ones — they’ll easily be fooled by the confident tone and plausible form. The AI itself feels no difference whatsoever.
Recently a dialogue went viral where various AI systems were asked: if a car wash is a hundred meters from your house, is it easier to walk there or drive? The AI confidently decided to walk, missing the obvious point that the car itself has to get to the wash. And I recently discussed with ChatGPT the symbolism of the white swan in the Bohemian Rhapsody music video. It turned out the swan was allegedly a central image, even though in reality there is simply no swan in the video.
Why does this happen? Because LLMs are not intelligence. They are very advanced text autocomplete. Much like a phone keyboard suggests a word from the first few letters, only in an infinitely more complex form. The model continues the conversation in the most plausible way. And no matter how natural the result sounds, it doesn’t become intelligence.
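To make the "advanced autocomplete" point concrete, here is a deliberately tiny sketch: a toy bigram model that extends a sentence by always picking the statistically most likely next word. The word table is invented for illustration; real LLMs do this over tokens with billions of parameters, but the principle of "continue with the most plausible next piece" is the same.

```python
# Toy bigram "language model": for each word, the words observed to
# follow it and how often. Entirely made-up counts for illustration.
BIGRAMS = {
    "the": {"car": 3, "wash": 1, "swan": 1},
    "car": {"wash": 4, "is": 1},
    "wash": {"is": 2},
    "is": {"near": 1, "easy": 1},
}

def continue_text(words, steps=4):
    """Extend the text by repeatedly picking the most probable next word."""
    words = list(words)
    for _ in range(steps):
        options = BIGRAMS.get(words[-1])
        if not options:
            break
        # Greedy decoding: take the single most likely continuation.
        words.append(max(options, key=options.get))
    return " ".join(words)

print(continue_text(["the"]))  # → "the car wash is near"
```

Note what is missing: the model has no idea what a car wash *is*. It only knows which words tend to follow which, which is exactly why fluent output and actual knowledge are different things.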
About twenty-five years ago I was writing my PhD dissertation on self-learning systems based on knowledge and logical inference. The AI of that era couldn't speak beautifully, but it had formal rules, strict structure, and explainable conclusions. Hallucinations were architecturally impossible there. Those systems had serious limitations that kept them from going mainstream, but the idea itself never disappeared. I never defended the dissertation — life went in a different direction — but those concepts stayed with me.
The industry’s path today is simple: make models bigger, add context, build agentic chains. Faster, higher, stronger. But no matter how much you pump up autocomplete, it will remain autocomplete — with hallucinations and without real knowledge.
That is exactly why I have been working on Cogentis AI for the past few weeks.
It is a platform that adds a layer of formalized knowledge, verifiable sources, and logical inference on top of LLMs. In this architecture, the LLM is the interface — the ability to ask questions in natural language and receive answers just as naturally as today. But the answers themselves are no longer born from probabilities. The result can be “I don’t know,” “insufficient data,” or a conclusion with a clear logical chain that can always be verified.
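The core of that idea can be sketched in a few lines. The following is a minimal forward-chaining illustration, not the Cogentis AI implementation: the facts, rule names, and German examples are invented. The point is the contract: a query either returns a conclusion together with the chain that derived it, or honestly answers "insufficient data" instead of guessing.

```python
# Invented example facts and rules about German grammar.
FACTS = {"german_noun(Haus)", "article(das, Haus)"}
RULES = [
    # (premises, conclusion): if all premises are known, derive the conclusion.
    ({"german_noun(Haus)", "article(das, Haus)"}, "gender(Haus, neuter)"),
    ({"gender(Haus, neuter)"}, "plural_hint(Haus, umlaut_er)"),
]

def query(goal):
    """Forward-chain over the rules; answer with a derivation or admit ignorance."""
    known = set(FACTS)
    chain = []
    changed = True
    while changed:
        changed = False
        for premises, conclusion in RULES:
            if conclusion not in known and premises <= known:
                known.add(conclusion)
                chain.append((sorted(premises), conclusion))
                changed = True
    if goal in known:
        return {"answer": goal, "derivation": chain}
    return {"answer": "insufficient data", "derivation": []}

print(query("plural_hint(Haus, umlaut_er)"))  # conclusion plus its chain
print(query("gender(Auto, neuter)"))          # honest "insufficient data"
```

Every answer here is either traceable to explicit facts and rules or an admission that the knowledge base cannot support it. That is the structural difference from sampling the most plausible-sounding continuation.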
Theory without practice is dead, so the focus is immediately on applied systems, where the difference is clearly visible: an LLM fantasizes while a knowledge system answers correctly. As with the old expert systems, one natural application is education. In my case: learning German.
Textbooks and language apps approach this task extremely superficially. Textbooks walk everyone through the same linear curriculum. Apps rely on gamification and often teach things of little real value. But a learner is not a table of checkboxes: they already know some things, grasp some things faster, and get stuck on others.
Language is a complex network of interconnected concepts, rules, and contexts. One thing pulls another along. That is what knowledge actually is. A system built on Cogentis AI knows what has already been learned, where the gaps are, which dependencies are critical for progress. It explains material through concepts the learner already understands and builds a personal learning trajectory. Not a course. Not a textbook. Not a game. But a living knowledge map of a specific person.
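The "one thing pulls another along" structure is, formally, a dependency graph over concepts. A hypothetical fragment of such a map, with invented concept names, shows how a personal trajectory falls out of it: order the concepts so prerequisites come first, then skip whatever the learner already knows.

```python
from graphlib import TopologicalSorter

# Hypothetical fragment of a German-learning knowledge map:
# each concept maps to the concepts it depends on.
PREREQS = {
    "noun_gender": [],
    "definite_articles": ["noun_gender"],
    "accusative_case": ["definite_articles"],
    "dative_case": ["definite_articles"],
    "two_way_prepositions": ["accusative_case", "dative_case"],
}

def learning_path(known):
    """Order the remaining concepts so every prerequisite comes first."""
    order = TopologicalSorter(PREREQS).static_order()
    return [c for c in order if c not in known]

# A learner who already knows genders and articles skips straight to cases.
print(learning_path({"noun_gender", "definite_articles"}))
```

Two learners with different `known` sets get different trajectories from the same map, which is exactly what a linear textbook cannot do.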
And of course, this architecture is applicable far beyond languages. If it works for such a complex domain, it fits any domain with structured knowledge: from education to analytics, from medicine to law. LLMs become a convenient interface for people. But not a source of truth.
Today everyone has an AI in their pocket that speaks beautifully — often better than many of us. But it still doesn’t know how to know. And suddenly, ideas I was working on a quarter century ago have become critically important again. We already know how to build systems that talk. We know how to build systems that know. We just haven’t truly connected them yet.
It’s time to do that. Less talk — more understanding.