
The Broader Picture: Generative AI, Education, and the Future of Democracy

October 17, 2025

Nathalie A. Smuha, University of Toronto

A new academic year has kicked off, and schools, colleges and universities around the world are still grappling with the elephant in the room: Generative AI (GenAI). Some institutions have adopted an open stance, allowing the use of GenAI unless stated otherwise. Out of a desire to ‘embrace the future’, and acknowledging that ‘students are using it anyway’, they are – perhaps unwittingly – propelling its normalization. Others maintain a default prohibition, permitting GenAI only where explicitly allowed. All, however, continue to struggle with how to integrate AI into their curricula, and how to evaluate students’ abilities based on output that may not be (fully) theirs.

In this blogpost, my aim is not to appraise the most appropriate stance. Instead, I would like to broaden the discussion on GenAI in education and consider its implications for the democratic project. To do so, I shift the focus from output to process: when young people turn to GenAI during their education, what happens along the way? We are all familiar with the convenience GenAI enables, and the enhanced access it offers to information – an essential public good in democracies. Yet in this post, I want to draw attention to three ways in which students’ (over)reliance on GenAI can also critically affect their ability to engage in civic participation: it can erode their cognitive skills, impact their social and emotional development, and influence the political discourse they are exposed to. None of these concerns can be mitigated by the GenAI policy of a single educational institution – nor should we expect such institutions to bear this responsibility. Instead, I call for a fundamental rethinking of GenAI’s role in education and society, focusing not only on the outputs students deliver, but on the underlying processes that shape them as future democratic citizens.

Cognitive impact

Comparisons of Generative AI and calculators are ubiquitous. Sure, we must teach children how to read and write, just like we teach them how to perform basic calculations. And once they reach a certain level, they can rely on calculators for more complex computations. So why would the same not apply to GenAI? Scholars have by now recounted the many ways in which this analogy is fallacious. I will not repeat them here, but let me highlight one in particular, and that is GenAI’s ability to carry out tasks that are entwined with our ability to engage in critical thinking and to make value judgments: reading and writing.

Writing is one of the main functions for which users rely on ChatGPT, and this is not surprising; the capacity of GenAI to draft whole chunks of text in reply to user prompts is quite extraordinary. GenAI is also highly popular for summarizing, thereby reducing the need to read texts ourselves. Language, however, is the tool through which we shape and communicate our views, structure our thinking, and explore our values. We teach students to read and write not (only) because we want them to be able to produce a good text, but because we want to teach them how to think and reflect. For this, the process matters far more than the output. And it is only by training those muscles, also as adults, that they can be kept fit; there are no shortcuts to thinking.

This seems to be confirmed by recent studies. While offering short-term convenience, the use of GenAI can diminish critical thinking, erode cognitive skills, and reduce our brain activity during use. Though people may feel they perform better – the output they produce may indeed be of higher quality – by sidestepping the process, they retain very little.

This does not mean that reliance on GenAI by definition erodes our cognitive skills, or that all reading and writing must per se serve as a tool to train our thought muscles. Where the process is of little importance, reliance on GenAI will typically be harmless, and can even be highly beneficial. Yet to understand when and how it is beneficial, one already needs a good sense of judgment – something one acquires through the process of experience. For those who do not (yet) have the discipline to control their human laziness, who are not (yet) able to meaningfully discern between benign and critical uses, or who are insecure or under high pressure to perform (as is the case in many educational environments), this does not hold. Paradoxically, the more they rely on GenAI, the less they develop their skills; thus the more insecure they remain, and the more they turn to GenAI – a vicious circle that happens to be convenient for GenAI providers.

The consequences of this circle reach far beyond education though, as critical thinking is a precondition of democracy. It is what allows us to evaluate arguments and take informed decisions, it makes us more resilient against manipulation, it facilitates deliberative reasoning through a multitude of perspectives, and it enhances our ability to hold those who are in power to account. In other words: if problematic reliance on GenAI debilitates students’ ability to critically reflect and think, all members of society are affected.

Social impact

Chatbot providers have been targeting educational institutions in particular with free or cheap licenses. Hooking students on GenAI early on can deepen their dependence on the technology in the private sphere too. Beyond enabling the collection of even more data (of a highly intimate nature), this also raises social concerns. A recent study in Harvard Business Review estimates that, as of 2025, GenAI chatbots are most prevalently used for companionship and therapy. This includes systems like ChatGPT, Gemini and Meta AI, which, despite their general-purpose nature, are designed in ways that cater to that role.

Chatbots capitalize on our inherent impulse to anthropomorphize, as this optimizes engagement. Rationally, we know we are interacting with a chatbot. But emotionally, it is extremely difficult to withstand the sensations that naturally arise when we deal with an entity developed to act as human-like as possible. Think of the (intentional) use of expressions like ‘Aah’ and ‘Hmm…’ to mimic human thought, the interjection of emojis, or the small dots to indicate ‘someone’ is typing. Chatbots’ flattering (or even sycophantic) demeanor also caters to our innate need to be liked and to feel understood. While this need is not a weakness (it enables us to thrive as social beings), it does imply a vulnerability, which is being exploited.

Encouraging the use of chatbots as non-judgmental companions that are always available, always agree, and excel at mimicking empathy (without an ounce of care or understanding) creates emotional attachment and dependency. When OpenAI released GPT-5 earlier this year and disabled previous versions of the chatbot, numerous people suffered significant distress, as they had come to rely emotionally on GPT-4. The experience with Replika’s chatbots shows that this is not an isolated case. In more tragic scenarios, the design choices that foster emotional dependency can also exacerbate the risk of manipulation, increase isolation, fuel delusions, and even incite people to take their own lives or the lives of others.

Young people tend to struggle more than others with the vulnerabilities that are part of human relationships, such as the risk of friction, unavailability, rejection, and disagreement. These unpleasant phenomena can be avoided when emotionally relying on chatbots, which appears to fuel their adoption. Yet, developing the capacity to engage with other human beings – challenging as this may be, especially if they think differently – is not only an important individual skill: it is essential for civic discourse and democracy. It is precisely by going through the process of building meaningful (even if at times complex) human relationships, that we better learn how to co-exist in a pluralistic society. The bypassing of this process is therefore of societal concern too.

Political impact

There is a third way in which overreliance on GenAI may hinder the democratic project, and that is the influence exerted by GenAI providers through their chatbots – which are being used by millions around the world. First of all, GenAI providers are unable to entirely prevent a chatbot from fabricating facts or reflecting inaccuracies: its stochastic nature renders it a fundamentally untrustworthy information source. Yet the marketing hype these providers generate, including in education, makes it easy to forget this important detail.

More importantly, although ‘hallucinations’ are an involuntary constraint, chatbot providers can also shape their systems’ impact through deliberate design choices. They decide how anthropomorphic their chatbots are, what they are optimized for, which datasets they are trained on, which tests the systems undergo, how they are finetuned, and how they are constrained. These choices are not neutral: they are heavily value-laden. And while every technology embeds certain values, these are manifested far more explicitly in a tool like GenAI, which is increasingly used as an informational one-stop-shop.

From contextualizing historical events to framing moral controversies, people rely on chatbots to gather information about anything and everything, including matters that are morally and politically sensitive. As Buyl and others demonstrated, however, chatbots reflect the worldview of their creators, generating risks of political co-optation. By controlling which information is shared with users and how, chatbot providers thus exert considerable power to shape people’s ideological worldviews. In times where scientific facts are increasingly polarized and treated as opinions, inconvenient truths are being rewritten, and wars are waged against independent data sources, one can wonder how that power will be used.

Users of DeepSeek, a Chinese GenAI chatbot, know that it has been designed to reflect the Chinese party line when asked about politically sensitive matters, such as Taiwan. Yet in the US, too, GenAI providers seem to do their utmost to please the very leaders who are devaluing information. In August 2025, the White House issued an executive order to ‘prevent woke AI’. Acknowledging that millions of people are relying on GenAI to ‘learn new skills, consume information, and navigate their daily lives’, the US President is openly using his power to ensure that chatbots reflect his ideology – in the name of ‘debiasing’ them. Several providers, like X and Meta, have already eagerly taken or announced actions in this direction. Since the GenAI market is highly concentrated and virtually all providers are US-based, the political instrumentalization of GenAI can significantly impact democracies across the world. In sum, dependence on GenAI, whether used as a writing aid, informational tool or companion, may unwittingly also pave the way for political influence.

The path forward

Each of these impacts of GenAI – cognitive, social and political – can affect the democratic project, and each merits further research. The sections above can hence be read as a research agenda. Yet for the alarmed reader, a practical question inevitably arises: what can be done to counter these concerns? Simply regulating them away is no straightforward matter – especially in a world where technology companies have gotten into the habit of placing products on the market before considering the law, happily dealing with litigation afterwards. That said, GenAI does not operate in a legal vacuum, and educational institutions must ensure that the chatbots they offer comply with privacy law, consumer protection law, human rights law and technology law. Despite calls for deregulation, threats aimed at dissuading the enforcement of digital rules, and flaws in legislation like the AI Act, these laws should be leveraged to curb GenAI’s worst excesses, and new ones may need to be created.

Beyond regulation, for the many schools now drafting AI policies and evaluating curriculum integration, it is crucial to consider the technology’s broader implications. In liberal democracies, educational institutions are not only tasked with imparting knowledge; they play a crucial role in preparing students to become responsible democratic citizens. This leads to the difficult but necessary task of evaluating what role GenAI can and cannot play in achieving this goal, by asking: how does it shape the competences that students need to meaningfully partake in the democratic project? Answering this question presupposes a clear (re-)articulation of what those competences are and why they matter – which calls for wide public debate. While educational institutions can take on a far more active role in organizing such a debate, they should not carry this burden on their own.

In essence, GenAI’s societal impact requires a fundamental rethinking of the educational curriculum, with a focus on process. The task is hence far more complex than uncritically adopting or prohibiting GenAI. As part of a deeper reflection exercise, institutions could, for instance, also proactively develop, implement or promote applications that enhance the very capabilities which GenAI’s irresponsible use might erode. When designed with different incentives and affordances, GenAI could also be steered towards fostering critical thinking, social engagement and civic discourse.

Ultimately, the future of education is entwined with the future of democracy. We therefore all have a responsibility to ensure that educational institutions can keep on carrying out their unique role in our democracies – a role that is only increasing in importance with the rise of GenAI. And if GenAI is to play a role in our educational systems, it is equally our responsibility to steer the technology and its providers in a direction that supports, rather than undermines, democratic values.

Suggested citation: Nathalie A. Smuha, The Broader Picture: Generative AI, Education, and the Future of Democracy, Int’l J. Const. L. Blog, Oct. 17, 2025, at: http://www.iconnectblog.com/the-broader-picture-generative-ai-education-and-the-future-of-democracy/

One Comment

  • Paulinyi says:

    Thank you for this thoughtful and careful piece. I strongly agree with your concerns regarding the use of generative AI in early educational stages, particularly when core cognitive and argumentative skills are still being formed.
    I would add, however, a distinction between formative learning and post-formative intellectual work. In academic and professional contexts, we all know how strongly hierarchical structures shape outcomes: access to resources, visibility, and influence tends to concentrate at the top. Individual thinkers have historically been unable to compete with well-funded research groups, editorial boards, or institutional networks, regardless of their underlying intellectual capacity.
    From this perspective, generative AI does not merely “simplify” writing or argumentation. For already trained individuals, it can function as a proxy for processes that were previously external and institutional: iterative revision, exposure to counter-arguments, stylistic refinement, and structural coherence. The risk, then, is not cognitive erosion per se, but a mismatch between the tool and the stage of intellectual development at which it is introduced.
