This article is republished from The Conversation under a Creative Commons license.
The rapid spread of artificial intelligence has people wondering: who is likely to embrace AI in their daily lives? Many assume that it is the tech-savvy – those who understand how AI works – who are most eager to adopt it.
Surprisingly, our new research published in the Journal of Marketing shows the opposite. People with less knowledge about AI are actually more open to using the technology. We call this difference in willingness to adopt the "lower literacy–higher receptiveness" link.
This link appears in different groups, settings and even countries. For example, our analysis of data from market research firm Ipsos from 27 countries shows that people in countries with lower average AI literacy are more receptive to AI adoption than people in countries with higher literacy.
Similarly, our research among US students shows that those with less knowledge of AI are more likely to use AI for tasks such as academic assignments.
The reason behind this link lies in the way AI now performs tasks that we once thought only humans could do. When AI creates a work of art, writes a heartfelt response, or plays a musical instrument, it can feel almost magical, as if it is entering human territory.
Of course, AI does not actually possess human qualities. A chatbot can generate an empathetic response, but it doesn't feel empathy. People with more technical knowledge about AI understand this.
They understand how algorithms (the sets of rules a computer follows to perform a task), training data (the examples used to teach an AI system), and computer models work together. This makes the technology less mysterious.
On the other hand, people with less understanding may view AI as magical and awe-inspiring. We suggest that this sense of magic makes them more open to using AI tools.
Our studies show that this link between lower literacy and higher receptiveness is strongest for AI tools used in areas that people associate with human qualities, such as providing emotional support or advice. For tasks that don't evoke the same sense of human qualities – such as analyzing test results – the pattern reverses. People with higher AI literacy are more receptive to these applications because they focus on the efficiency of AI, rather than any "magical" qualities.
It's not about ability, fear or ethics
Interestingly, this connection between lower literacy and higher receptiveness persists even though people with lower AI literacy are more likely to view AI as less capable, less ethical, and even a bit scary. Their openness to AI seems to stem from their amazement at what it can do, despite these perceived drawbacks.
This finding offers new insight into why people respond so differently to emerging technologies. Some studies find that consumers favor algorithmic judgments over human ones, a phenomenon called "algorithm appreciation," while others document skepticism, or "algorithm aversion." Our research points to perceptions of AI's "magical" nature as a key factor shaping these responses.
These insights pose a challenge for policymakers and educators. Efforts to increase AI literacy may inadvertently dampen people's enthusiasm for using AI by making it seem less magical. This creates a tricky balance between helping people understand AI and keeping them open to its adoption.