New study says AI is making humans less able to think critically
Artificial intelligence (AI) may be making us less intelligent, a new study suggests, by encouraging humans to rely on “cognitive offloading” and gradually lose important critical thinking skills.
A new study published in Societies, a Swiss-based international academic journal, has explored how humans' increasing reliance on AI tools may be impairing their critical thinking skills, particularly through the phenomenon of “cognitive offloading.”
This research highlights profound implications for professionals in high-risk fields, such as law and forensic science, where overreliance on technology can lead to mistakes with serious consequences.

Cognitive offloading refers to the practice of relying on technology such as AI to store, process, or perform cognitive functions that humans previously carried out on their own, such as memory, analysis, or decision-making. This can reduce the role of individual intelligence in those activities.
The use of AI by lawyers and expert witnesses in legal contexts is a growing trend. However, it also poses many risks when AI tools are applied without proper oversight or verification.
This new research further illuminates those risks, highlighting how the convenience AI brings can gradually degrade the quality of human decision-making and critical analytical abilities.
Research findings on the phenomenon of “cognitive offloading” and the impact of AI
The study surveyed 666 participants from diverse demographic backgrounds to assess the impact of AI tools on critical thinking skills. Key findings include:
Cognitive offloading: People who regularly use AI tend to outsource intellectual tasks to the technology, relying on AI to solve problems and make decisions rather than actively practicing independent critical thinking.
Skill decline: Over time, people who rely heavily on AI tools gradually lose the ability to critically evaluate information and draw thoughtful conclusions, reflecting an erosion of critical thinking.
Generational gap: Younger people tend to rely more heavily on AI tools than older groups, raising concerns about the long-term implications for the expertise and judgment of future professionals.

Researchers warn that while AI can optimize workflows and boost productivity, over-reliance on the technology can lead to a “knowledge gap,” leaving users unable to verify or challenge AI-generated results.
When experts blindly trust AI results without verifying their accuracy, they can inadvertently make serious errors, undermine cases, damage reputations, and erode trust in their expertise.
Any profession that requires judgment and in-depth knowledge can fall victim to “cognitive offloading,” as the study shows.
While AI tools can improve workflows, without careful human oversight they can also undermine the standards of excellence that professionals are expected to maintain.
Fields such as law, insurance, and forensic science, which rely heavily on human expertise, face both potential benefits and unforeseen challenges from AI.
Similar risks in how AI is applied in the fields of law and forensic science
While AI can be a valuable aid in data analysis or case preparation, there is growing concern that professionals and lawyers may become too reliant on these tools without thoroughly testing their accuracy.
When professionals in the legal or forensic science fields place too much trust in AI, they expose themselves to several serious risks:
Unverified data: AI tools can produce results that appear plausible but are inaccurate, as demonstrated in cases where false evidence or faulty calculations were introduced into legal proceedings.
Professional decline: Over time, regularly delegating complex tasks to AI can gradually erode the skills needed to critically evaluate or challenge evidence.
Reduced accountability: Blind trust in AI fosters a lack of personal responsibility, setting a dangerous precedent in which mistakes are easily overlooked or dismissed.
AI and human expertise: A balance is needed
The key takeaway from this research is that AI should be viewed as a tool that enhances human capabilities rather than one that replaces them entirely. To maintain this balance, the researchers suggest that:
Human expertise is the foundation: Human knowledge and expertise must always be the basis for decision-making. AI-generated results need to be verified by qualified experts and placed in the appropriate context.

Critical thinking is an indispensable element: Users need to actively engage in analyzing and critiquing AI-generated output, questioning the validity of the data and exploring alternative interpretations.
Regulation and specialized training are required: As AI becomes more prevalent, industries need to set strict standards for adopting the technology and ensure that professionals are well trained and understand both the potential and the limitations of AI.
Whether in everyday tasks or in high-risk fields such as law and forensic science, the human element remains pivotal to ensuring accuracy, accountability, and ethical integrity.
Without proper oversight and meaningful human involvement, AI can undermine the professional standards and trust that practitioners are expected to uphold.