Sincere new world—A conversation about AI and human thought


There have been major advances in the development of artificial intelligence (AI). Can we trust this technology? Photo by Google DeepMind via Unsplash.


Imagine you are scrolling on social media and come across an ad for something that was on your mind just a few hours ago: perhaps a book about psychology, or a pair of sparkly red heels like Dorothy’s in The Wizard of Oz. You think you must have texted a friend about wanting to learn psychology or buy red shoes. But after scrutinising every word you have recently spoken or typed, you conclude that these were merely notions you had in mind, never verbalised desires.

Welcome to the future, where AI’s ability to guess your thoughts is as uncanny as your mother’s. So how does AI get to know you better than you know yourself? To separate fact from fiction, I asked an expert about some common concerns regarding the relationship between AI and humans. Answering my questions is Benjamin Hardin, a DPhil student in computer science at the University of Oxford, focusing on human-computer interaction.

‘Welcome to the future, where AI’s ability to guess your thoughts is as uncanny as your mother’s.’

Ester Paolocci: Researchers at the University of Texas recently published results showing that they were able to reconstruct phrases participants were thinking of, using functional magnetic resonance imaging (fMRI) and a language model (a precursor to ChatGPT). A language model is a computer program designed to generate human language based on the input it receives. When coupled with fMRI, which measures blood flow and oxygenation levels in the brain, this technique can predict human thought with 70% accuracy. The concept is currently limited by bulky fMRI equipment. Do you foresee a time when lighter, faster, and more accessible technology (such as phones or VR headsets) will be able to record individuals’ thoughts?
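For curious readers, the guess-and-check idea behind that study can be sketched in a few lines of Python. This is a heavily simplified illustration, not the researchers’ actual code: propose_continuations and predict_fmri are hypothetical stand-ins for the language model and the trained brain-response model.

```python
import numpy as np

def decode_step(text_so_far, measured_scan, propose_continuations, predict_fmri):
    """Pick the candidate next phrase whose predicted brain response
    best matches the fMRI activity actually recorded."""
    candidates = propose_continuations(text_so_far)    # language model's guesses
    def mismatch(candidate):
        predicted = predict_fmri(text_so_far + " " + candidate)
        return np.linalg.norm(predicted - measured_scan)
    return min(candidates, key=mismatch)               # keep the best-fitting guess
```

Repeating a step like this phrase by phrase yields a running reconstruction of what the participant is thinking, which helps explain why decoded output reads like a paraphrase rather than a verbatim transcript.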

Benjamin Hardin: Definitely! There are already small devices (usually worn in the ears) that can learn to interpret responses from detected brainwaves or electric brain impulses. The user “teaches” the device which brainwaves correspond to specific desired actions. These systems are still limited and cannot read a full sentence you might be thinking. In my opinion, rudimentary systems like this will be on the market within five to ten years, probably in headphones. As for recognising all of a user’s thoughts, I think we are at least 20 years away from portable technology that can do this. Currently, the biggest challenge is that understanding complex thoughts requires reading brain signals from multiple angles on the head, and systems that need that coverage will not be convenient to wear.
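The “teaching” step Ben describes can be pictured as simple pattern matching. Below is a toy sketch of that idea only, under the assumption that each command produces a roughly repeatable signal; real earbud brain-computer interfaces rely on far more sophisticated signal processing, and the class here is purely illustrative.

```python
import numpy as np

class BrainwaveCommands:
    """Toy matcher: the user records example signals per action,
    and new signals are snapped to the closest learned pattern."""

    def __init__(self):
        self.patterns = {}  # action name -> average training signal

    def teach(self, action, example_signals):
        # Average several recordings made while the user repeatedly
        # "thinks" the same command.
        self.patterns[action] = np.mean(np.asarray(example_signals, float), axis=0)

    def interpret(self, signal):
        # Return the taught action whose stored pattern is nearest.
        signal = np.asarray(signal, float)
        return min(self.patterns,
                   key=lambda a: np.linalg.norm(signal - self.patterns[a]))
```

Because every incoming signal is forced onto the nearest of a handful of taught patterns, a system like this can trigger ‘play’ or ‘pause’ but has no way to represent a full sentence, which is exactly the limitation Ben notes.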

EP: Decoding brainwaves sounds like a complex feat! Of course, this would have positive implications, such as helping doctors communicate with unresponsive (i.e. comatose or non-verbal) patients. And deciphering how and where our thoughts originate could allow us to answer some of the most fundamental questions in neuroscience. The brain comprehending its own origin seems more plausible with the help of AI.

Nevertheless, there might be some challenges. Geoffrey Hinton, the “Godfather of AI”, recently resigned from Google with a public warning about the dangers of AI. AI could be exploited to spread misinformation, rig elections, and massively influence society. For example, targeted ads seem harmless, but the underlying concept borders on Pavlovian conditioning.

A flashing notification, as you scroll, could prompt you to frivolously buy an air-fryer when you have no real use for one. We already know that companies like Cambridge Analytica have sold data obtained using AI, without consent from social media users. It seems that when you mention something, targeted ads pop up minutes later. Algorithms are now so good, it feels as if they are predicting your next thought, even before you think it. Is this just a coincidence? Or is AI beating you at your own game? Which apps are the worst transgressors when it comes to data privacy? Which ones can we trust?

BH: When it comes to coincidence versus predicting your thoughts, it is a bit of both. Partly it is the Baader–Meinhof phenomenon: a combination of selective attention and confirmation bias. You can observe it when you buy a new item and suddenly start noticing it everywhere, because your mind has begun to pay attention to it. Similarly, when your friend mentions a water park and you later see an ad for a water park, you wonder if apps are listening to your conversations.

‘Algorithms are now so good, it feels as if they are predicting your next thought, even before you think it.’

Ester Paolocci

Here is where it gets tricky. Unless your phone has been infected by a virus, it is not listening to what you say. Conversations in the real world are not used to target ads: apps are prohibited from doing that by the operating system, and keeping the microphone constantly on to convert speech to text would drain your battery. Instead, it is a case of selective attention, or of the algorithm being good at predicting your interests.

Nevertheless, most apps do indeed track non-verbal conversations through what you type. As far as we know, iMessage and WhatsApp do not do this. Facebook Messenger, however, has admitted to scanning messages for harmful behaviour, though it is unclear whether it also uses conversations to inform advertisements. TikTok is probably the biggest data collector among mainstream apps. It was recently discovered that it tracks everything you type in the app. It monitors every video you watch, the video’s content, the time you spend on it, and so on. It uses all of this information to build a model of what you like, curating videos accordingly (and probably selling your data). That is something to keep in mind when using the platform.

EP: Some specialists argue that AI predictions have become so good that they can guess your thoughts before you have them. Others, however, have demonstrated that enabling active listening on mobile phones is easily achieved and probably being exploited as we speak. After all, how can Siri or Alexa “hear” their names and activate if they are not always listening?

BH: It is hard to determine how secure these systems truly are, but I can describe how the companies claim they work. When waiting for the wake word (“Hey Siri”, “Alexa”, etc.), the device is constantly listening for those words. If nothing matches, the sound is immediately deleted. Theoretically, the system is not even converting speech to text, but only looking for soundwaves that resemble those specific words. If the device detects a match, it starts converting sound to text to understand the user’s query. The text is then sent as a request to the cloud, where a response is created. Users can, however, opt in to share audio recordings to help improve the system.
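In code, that claimed pipeline looks roughly like the loop below. It is a sketch of the described behaviour, not any vendor’s implementation: record_chunk, transcribe, and send_to_cloud are hypothetical stand-ins, and the crude waveform correlation replaces the small on-device neural network real assistants use for wake-word spotting.

```python
import numpy as np

def similarity(chunk, template):
    """Crude normalised correlation between two audio buffers."""
    n = min(len(chunk), len(template))
    a = np.asarray(chunk[:n], float)
    b = np.asarray(template[:n], float)
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(np.dot(a, b) / denom) if denom else 0.0

def listen_loop(record_chunk, wake_template, transcribe, send_to_cloud,
                threshold=0.85):
    while True:
        chunk = record_chunk()                      # short rolling audio buffer
        if similarity(chunk, wake_template) < threshold:
            continue                                # no match: audio is discarded,
                                                    # never transcribed or stored
        query_audio = record_chunk()                # match: capture the query
        text = transcribe(query_audio)              # speech-to-text happens only now
        print(send_to_cloud(text))                  # only the text request leaves
```

The privacy claim is visible in the structure: speech-to-text and the network request sit behind the wake-word check, so unmatched audio never gets past the first lines of the loop.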

Alexa users should assume everything said after the “Alexa” wake word is used for ads. There have been cases where Alexa has recorded private conversations, though the company claims this was a software bug. Apple claims Siri anonymises requests and does not use them for advertising. Overall, Apple deals well with privacy because the Siri microphone cannot be accessed by other apps. Still, high-profile individuals may be attacked with elaborate viruses that break down this barrier. The average iPhone user with updated software, however, should not be concerned about these viruses.

‘We should not simply ban AI from doing a job; we did not ban automated textile production so that more people could sew.’

Benjamin Hardin

EP: Should it be the individual’s responsibility to regulate their own use of advanced technologies and social media? So far, the internet resembles the Wild West of the virtual world. Unprecedented cases have been brought before courts worldwide over the use of personal information shared without legal consent from individuals (especially minors), as well as the spread of misinformation, particularly regarding politics. Do you think policymakers will be able to keep up with advancing technology?

BH: No, I think regulation of social media has already shown that laws and courts will struggle to keep up with technology. However, I am hopeful for the future, as there are many researchers focused on technology regulation. Having that research in place should mean the time between the introduction of a technology and its regulation gets increasingly shorter. I think the hardest aspect of using AI responsibly will be deciding whether to allow AI to do a job once it becomes better than humans at it.

On one hand, it may lead to better efficiency and increased economic output. On the other, professionals may have trouble finding alternative employment, because AI’s elimination of jobs may outpace job creation. We should not simply ban AI from doing a job; we did not ban automated textile production so that more people could sew. We should, however, ensure that people can always find meaningful and fulfilling work. Maybe there will be a hybrid approach that limits the adoption rate of AI in certain sectors, giving people time to adapt? Regulating AI job replacement will be tricky.


As I thank Ben, many thoughts run through my head. Countless sci-fi novels have entertained stories of sentient AI, asking whether AI could learn to have “human” thought. Yet one rarely considers how AI might alter the human mind, behaviour, and society. Such technology has the potential to greatly benefit humanity.

At the same time, the only thing separating introspection from actualisation is speech. If AI could take that discretion away from us, whether those thoughts were anticipated by an algorithm or perceived through brainwaves, our subconscious would sit vulnerably in unfamiliar hands. Not only would we be forcefully catapulted into a sincere new world, with no inhibition gating our contemplations, but we would also be heavily influenced by a system that would predict our every move.     

One could reject advancing technology, but then one would become a societal outcast – with no social media to organise events, no clubcard for grocery discounts, no current knowledge of contemporary “culture” such as memes. Both prospects are scary, but if we are in luck, policymakers, with the aid of researchers like Ben, will regulate AI use and data governance to help guide us into this intelligent new future. One thing is certain, however: it is going to be a step-by-step learning experience for machine and human alike. Will mankind keep up?

