AI makes our daily life easier. Although we mostly use it to boost our performance and efficiency at work, we take advantage of it at home too: writing and updating a grocery list, or asking Alexa to play our favourite music or call a loved one - AI is becoming ever more integrated into our lives. Because the technology is still new, we are not fully aware of its downsides and challenges. While we may envision and experience how easy life can be with the support of AI, we can also become its victims. So a new question has arisen: is AI making us smarter or dumber?
What do we use AI for?
As we explored in a previous article, Are you an AI power user?, AI has become central to our work. The chatbot we talk to online when we have a question about a subscription, the assistant that helps us pay an electricity bill, or the “recruiter” who contacts us on LinkedIn are all examples of AI in action.
More and more industries benefit from AI - healthcare, finance, and transportation among them, and even education can harness its advantages. Diagnostics, appointment scheduling, answering the most-searched medical questions, analysing market data, creating budgets, and weather forecasting all rely heavily on predictive modelling powered by AI and machine learning.
Software developers use ChatGPT to fix bugs in their code, marketers generate images, brainstorming teams ask an AI for ideas and solutions, and newsletters are scheduled and sent to target audiences. All of this is making some professions obsolete while getting the job done faster and more efficiently - though not always better than humans would.
People quickly learned to notice when an article or post was written by AI, or when a video was AI-generated - whether it’s the missing fingers on people’s hands in a video or the overuse of emojis in a text. We are becoming more and more avoidant of anything AI-related, and still drawn to human-created content. Personalisation has never been this easy, yet the uncanny valley - the eerie sensation we feel when encountering a robot with human-like characteristics - is on the rise.
AI is similar to the calculator - we can no longer live without one when we try to divide 24,569 by 45, but in return, we forget how to do basic arithmetic.
AI and critical thinking – enhancing or diminishing?
A team of researchers from Carnegie Mellon University and Microsoft decided to look into the effect of AI on critical thinking. In their recent paper, they surveyed 319 knowledge workers to explore when and how people perceive themselves engaging in critical thinking. According to the results, when workers use GenAI primarily to ensure the quality of their output - for example, to meet specific criteria - they do engage in critical thinking, and it can improve work efficiency.
However, this can also lead to overreliance on GenAI tools, resulting in less critical thinking effort. Effort shifts from problem-solving to verifying information and integrating AI responses, and from execution to task stewardship. In practice, we often rely on the information GenAI provides without fact-checking or even questioning the content we read. Yet GenAI doesn’t deserve that trust: ask it to argue for or against the same topic, and it can convince you of either side, depending on your preconceptions.
AI tool usage and cognitive offloading
According to another recently published study, there is a significant negative correlation between the frequency of AI tool use and critical thinking. Even though AI tools offer astonishing benefits, they also decrease our engagement in deep, reflective critical thinking through cognitive offloading - relying on the external environment to reduce our cognitive demand, such as taking notes during a meeting or writing a shopping list.
We are prone to using AI the same way, which encourages us to use our brain and memory less - and why wouldn’t we, if there is a tool to handle our cognitively challenging tasks? In the study, younger participants were more at risk of AI dependence and scored lower in critical thinking than older participants. Higher educational attainment was linked to better critical thinking, so education might be a good way to avoid AI dependence and to foster healthy habits of GenAI use.
What’s next for AI and our critical thinking abilities?
The more we use AI in our work or private lives, the more we trust its output. This means we forget to verify its accuracy, and we risk compromising our standards of excellence. There should be a balance: we should treat AI as a tool that supports us and our work, not as a replacement for human interaction or critical thinking.
Higher education, regulation, and AI training all need to play a part in ensuring that professionals don’t rely too heavily on GenAI, that they understand AI’s limitations, and that they verify its output. Without proper training and regulation, we will become dumber and may lose one of our biggest assets: critical thinking.