Your Brain with Ergodic

Michaela Murphy
Jun 27, 2025

How we are approaching and prioritising continuous learning to lessen “cognitive debt” when utilising AI tools
Like many, we have seen the stream of articles following the recent publication from the MIT Media Lab, Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task, exploring the cognitive and learning implications of using large language models (LLMs) for task completion.
I (and likely many others) have been expecting a study like this for some time: receiving instantaneous responses with little-to-no need for our own domain knowledge or research surely encourages us to exert less effort. Yet seeing the empirical effects on our cognitive function, engagement, and active recall is still shocking.
Concerns about the impact of rapid LLM adoption have featured prominently in growing scepticism over AI’s introduction into education and its popularity amongst young users, amid worries that AI dependency will lead to diminished long-term cognitive function, impaired skill and language development, and lower academic performance. Yet these concerns do not only affect younger and university-aged users: lead author Nataliya Kosmyna is already working on an expanded study of AI reliance and reduced critical thinking and problem solving in engineering, which may have implications for organisations seeking to replace entry-level coders with AI.
Engagement, memory recall, and the development of critical-thinking skills are all vital to our personal and professional growth, and the generalised use of these AI tools calls into question the value of “AI literacy” versus intellectual autonomy, underscoring a broader concern: how we use AI matters.
Users employ LLMs in a range of roles: passive answer generators, principles-first exploration tools, resource discovery, active collaborators, and final-check assistants. This variability in usage is critical to understanding cognitive impact. In the discussion, Nataliya Kosmyna notes: “Withholding LLM tools during early stages might support memory formation. Brain-only group's stronger behavioral recall, supported by more robust EEG connectivity, suggests that initial unaided effort promoted durable memory traces, enabling more effective reactivation even when LLM tools were introduced later” [140]. This finding underscores the importance of continued research, especially as reliance on LLMs grows and usage norms continue to emerge or evolve. While there will always be a more superficial or lazy approach to engaging with AI, designing frameworks that encourage consistent and active interaction may be essential for supporting long-term memory, user engagement, and the maintenance of core cognitive skills, while leveraging the full benefits of AI tools.
While writing this piece, Andre Franca, CTO of Ergodic, reminded me that we have already seen a shade of this story before, with the use and widespread availability of calculators. “There's a reason why we still learn arithmetic at school despite the fact that we have calculators. When we solve a quadratic equation at school, the goal isn't only to solve the problem but rather to develop the skills of problem solving – That's where creativity buds and can blossom – Whether you're a painter looking for the right shade of blue or an engineer looking for the right way of designing an API. The moment we stop "problem solving" is the moment we become mere machines, going mindlessly through repetitive tasks. Imagine that for a dystopia: a world in which AI solves all the problems while humans just execute without thinking, no room for creativity at all”.
That leads us to a BIG question: At what point does the relentless pursuit of 'productivity' through AI begin to undermine the cognitive well-being and creativity of individuals who adopt technologies to remain competitive and accelerate their work?
And, that’s where we think our methodology at Ergodic makes a difference.
We understand the proliferation and growing reliance on LLMs and other artificial intelligence tools are impacting learning and reshaping how people think. It’s our perspective that without continuous learning and domain-specific input, individuals risk losing subject matter expertise and may fail to detect instances where LLMs hallucinate, making it imperative to create avenues to facilitate true collaboration between humans and machines. At every stage of development, we consider how to enhance our platform’s reasoning capabilities and action recommendations, while actively incorporating individual perspectives and expertise to improve accuracy and personalisation.
When talking about this topic with our CSO, Andrej Nikonov, he explained the daily thinking that goes into building and ideating new features within our platform. “Our platform is designed to encourage users to provide domain-specific knowledge and use-case-specific requirements. It frequently prompts users to specify what they mean to encourage knowledge-sharing and creativity. More generally, however, it enables users to ask complex questions that might not previously have been considered. Users do not need to be data scientists to pose complex, data-related questions to the agent, as it can transform these into a variety of specific routines to analyse the data. It also displays its reasoning process, enabling users to question or delve deeper into the methodologies used by the agent to approach a problem. All of this reduces the barrier between users and the variety of methods for doing complex evaluations of variable data sets.”
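Ergodic's platform internals are not public, so the following is a loose, hypothetical sketch only: it illustrates the general pattern Andrej describes (an agent that prompts the user to clarify vague terms, decomposes a plain-language question into specific analysis routines, and keeps a visible reasoning trace). Every name, class, and routine here is invented for illustration.

```python
# Hypothetical sketch only -- Ergodic's actual implementation is not public.
# All names (AnalysisPlan, plan_analysis, the routine strings) are invented.
from dataclasses import dataclass, field

@dataclass
class AnalysisPlan:
    question: str
    clarifications: dict = field(default_factory=dict)
    routines: list = field(default_factory=list)   # concrete analysis steps
    reasoning: list = field(default_factory=list)  # trace visible to the user

def plan_analysis(question: str, clarify) -> AnalysisPlan:
    """Turn a plain-language question into inspectable analysis routines."""
    plan = AnalysisPlan(question=question)

    # 1. Prompt the user to specify what they mean by ambiguous terms,
    #    encouraging them to contribute domain knowledge.
    for term in ("performance", "recent"):
        if term in question.lower():
            plan.clarifications[term] = clarify(term)
            plan.reasoning.append(f"Asked user to define '{term}'.")

    # 2. Decompose the question into specific, data-level routines.
    plan.routines = [
        "load_sales_table",
        "filter_rows(period=clarifications['recent'])",
        "aggregate(metric=clarifications['performance'])",
        "compare_against_baseline",
    ]
    plan.reasoning.append(
        f"Decomposed question into {len(plan.routines)} routines."
    )
    return plan

# Usage: the 'clarify' callback stands in for an interactive prompt.
plan = plan_analysis(
    "How is recent sales performance trending?",
    clarify=lambda term: {"performance": "revenue",
                          "recent": "last 90 days"}[term],
)
for step in plan.reasoning:  # the reasoning trace stays open to questioning
    print(step)
```

The point of the sketch is the shape, not the specifics: the user never writes the analysis code, yet every clarification and decomposition decision is recorded where they can inspect and challenge it.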
This article isn't about villainising AI or ChatGPT (we are an AI startup, after all!) or prescribing how to use AI tools; rather, it highlights the improvements we believe are necessary across the AI sector to foster a more transparent and responsible approach to tech development. Because at Ergodic, we believe that what makes us human is what should flourish. Machines and tools can help you discover problems and their root causes, identify the potential impacts of mitigating actions, and do much more that demystifies queries and builds a mental model of the problems arising from data.
References:
Chow, A.R. (2025). ChatGPT May Be Eroding Critical Thinking Skills, According to a New MIT Study. [online] TIME. Available at: https://time.com/7295195/ai-chatgpt-google-learning-school/ [Accessed 25 Jun. 2025].
Marrone, R. and Vitomir Kovanovic (2025). MIT researchers say using ChatGPT can rot your brain. The truth is a little more complicated. [online] The Conversation. Available at: https://theconversation.com/mit-researchers-say-using-chatgpt-can-rot-your-brain-the-truth-is-a-little-more-complicated-259450 [Accessed 25 Jun. 2025].
MIT Media Lab. (2025). Project Overview ‹ Your Brain on ChatGPT – MIT Media Lab. [online] Available at: https://www.media.mit.edu/projects/your-brain-on-chatgpt/overview/ [Accessed 25 Jun. 2025].
Harvard Graduate School of Education. (2024). The Impact of AI on Children’s Development. [online] Available at: https://www.gse.harvard.edu/ideas/edcast/24/10/impact-ai-childrens-development [Accessed 26 Jun. 2025].