We are not talking about AI enough (really!)
I’m not an insomniac, but sleep didn’t come for me on Tuesday until early morning.
It was likely the result of the overload of hard-to-process information I’d received that evening. Hours before, I’d attended a talk on the future of AI, or rather the opportunities of a new era. While this may have pushed my nervous system into fight/flight/freeze, what was discussed that night felt essential.
This is the dawn of a new era AND an existential crisis for humankind. Light topic, I know. Enjoy!
As always, I appreciate you reading me, and feel free to drop me a line with your thoughts or questions. (Oh and check the footnotes if some of the terminology isn't familiar!)
Another day, another story about AI.
That’s what I could have said before stepping into a private auditorium at the centre of the compound of one of the famed Swiss banks on Tuesday.
One of the names featured on the invitation intrigued me: Alexandre Pouget, professor of neuroscience at the University of Geneva - currently head of the laboratory of cognitive computational neuroscience. I had skipped the title of his talk: La prise de décision et le libre arbitre dans les cerveaux artificiels et naturels - Decision-making and free will in artificial and natural brains (my translation). Had I given this a glance, I’d have already felt the echoes of my early efforts to grapple with the existential aspects of free will.
I got in through a nondescript fire door just as the evening program started. A sea of heads announced the room was full. I shuffled past the company's upper echelons standing against the wall and sat next to a charming older couple. We shared roused glances, laughs and worried whispers over the following two hours.
For starters, we heard from our hosts, who decoded geopolitical trends, interest rates and other AI-related investment forecasts. They pitched new services centred around emerging technologies.
Despite a willingness to give the well-suited economists and wealth managers my focused attention, my body slid into a gentle lecture-induced torpor. A great desire to close my eyelids came upon me. The brightness of the ceiling lights was unforgiving. I snapped back in my chair, uncrossed my legs and took a deep breath. No napping!
Thankfully, Professeur Pouget stepped onto the stage. The slender, bespectacled fifty-something wore casual attire with a matching attitude, hinting at his presentation skills as much as his expertise.
The atmosphere in the room shifted. As he began to speak, the air itself felt charged with the weight of an impending paradigm shift.
‘We are here to talk about an existential revolution.’
While the economists had painted a picture of our financial future, Professor Pouget started to sketch the blueprint of our species' evolution. The leap from interest rates to neural networks felt jarring. Our speaker meant to wake me (us?) up.
The French and US educated researcher explained further: we are currently creating the next step in the evolution of the species, marking the next big transition in our evolution. We are ushering in the post-Homo sapiens era.
I grabbed my phone, which until then had been safely ensconced in the bag on my lap. As the speaker went on, I began frantically typing ‘franglais’ (French-English) notes and snapping photos of the slides as they appeared behind him, trying to keep up with the crashing waves of information coming at us. He started us off with the basics, going back to the ancestry of today’s Large Language Models (LLMs).
We heard about neural functions, synaptic weights, learning algorithms, backpropagation, CPU vs GPU chips, etc. My inner circuits buzzed. I found myself extending my upper body, antenna-like, toward the centre of the room.
This man was opening doors far beyond the ‘how tos’ of being more innovative, creative or productive with AI. My gut was just as mobilised as the rest of me. Electrified as I was, I could pick up a slight discomfort in the room at large, despite the speaker’s attempts at light-hearted jokes. Hmm.
Le libre arbitre est-il possible?
Does free will exist?
This was the question I ran with for my philosophy dissertation, the test that crowned my baccalauréat (the French equivalent of A levels or exams getting you from secondary school to tertiary education). The topic has been a subject of debate since the Greeks: are humans capable of making choices without being conditioned by past events?
I was delighted with the subject. I’d been primed for it by our super ‘prof de philo’, Monsieur Botaro. Building the argument felt simple. The writing, less so. I scribbled with passion for the duration of the exam. I remember handing over the manuscript pages to the examiner before skipping away with confidence.
My take? Free will doesn’t exist. How did I construct my thesis? I couldn’t tell you now, though I remember borrowing from literature to help illustrate my point. I got 16 out of 20 - a pleasing mark that implied my success.
Professeur Pouget asked us the same question, rhetorically of course.
He added that his peers, academics who’ve been pondering this millennia-old debate, translated the term differently. Instead of free will as an abstract concept that implies moral judgement, they referred to it as a ‘sense of agency’, or the idea that we have the capacity to make decisions and act on them.
His opinion?
‘Free will, sense of agency, ça, on oublie - forget it!’
‘The world is born of cause and effect’, as all scientists understand (I paraphrase). He added:
‘We can measure how neurons function, which is why [we know] a sense of agency doesn’t exist.’
For him, determinism wins.
All decisions and actions made by every single one of us are causally inevitable.
‘Why do I think I have a sense of agency or free will?
That is the question that matters’, he said.
If we are built as he says, and every single action and decision we make, every thought, is the result of cause and effect, of electrical charges in our brains, why do we think we have free will?
How is it that we think we are able to control our lives, our decisions, when apparently, that is not the case?
Away from the glee that bubbled up when the topic of free will was introduced, tension and tightness started to make themselves felt in my abdomen and upper back. Prof. Pouget continued by breaking down the functions of the premotor cortex versus the motor cortex (a distinction that was brand new to me): essentially, the two systems that underpin our decision-making (motor cortex) and our illusory sense of agency (premotor cortex).
To explain their functions, he described a study of epileptic patients whose neurons were stimulated first in one area, then in the other.
While stimulation of one area produced action the patients experienced as deliberate (the motor cortex at work), this is where it got weird. Cause and effect at play:
⚡️doctors zapped brain tissue in the premotor cortex while telling the patient not to move a muscle.
↳ effect: the patient raised their arm
↴ knock-on effect: the patient told a story about why they wanted to move their arm, and why they did, despite being told not to. Although they hadn’t initiated the action, their brain constructed a narrative for the decision, as if it had been voluntary, when it was anything but.
Right. Discomfort crept further in.
This argument advances a bleak determinism. I have never had a choice in anything I’ve ever done. Oh, and it’s the same for you, by the way. We are the product of our conditioning and circumstances. Everything that we do is simple cause and effect, masked by neuronal activity, a consequence of our brain’s anatomy.
I don’t know about you, but the absence of true possibility bothers me.
What of synchronicity? What of spirituality?
By that point it was nearly 8pm, my stomach grumbled, my mouth felt dry. All this hard thinking had whetted my appetite.
Prof. Pouget ramped up to this conclusion:
We are at the start of an existential revolution because this mechanistic sense of agency is perfectly applicable to AI: to self-determining androids (cue stills of classic sci-fi films behind the speaker).
He asked aloud:
Can AI surpass us, humans?
How soon will this happen?
Can we build regulations?
Is the idea of safety rails and governance even realistic?
Before opening up the discussion for what turned into a lengthy Q&A, the accomplished academic voiced his concerns about terrifying uses of these revolutionary technologies, illustrating them with published news that the US Air Force is already testing AI in the air.
A gasp in the auditorium. The information is not hidden, yet not one of us seemed to know about this, or about the Chinese response: cue pictures of military robot dogs with built-in rifles.
By now do you get why I had trouble sleeping?
THE FUTURE OF WORK, THE FUTURE OF MONEY
Among the many jokes that came up during the Q&A (we needed to dispel the tension), several touched on how there’s no point in anyone studying maths anymore, learning to code (gen AI can do it for you, so why the DIY?), or becoming a radiologist (again, AI is apparently better than humans, or will be in a matter of months).
Allegedly, lawyers should be safe for a few years. Economists are fine, given that economic models are so hard to predict. I missed my calling. Meanwhile, hedge funds are apparently throwing boatloads of cash at top Silicon Valley talent to develop their own AI systems. Think gen AI wealth builder.
Eventually, I raised my hand. Knowing as I do that some businesses are more ethically minded than others, I wanted to know whether there is a way to invest in ‘ethical AI’ companies. Can they be singled out? By what means?
Given that this conference was highlighting both the investment opportunities and the potential harms of AI in the wrong hands, would this not be of use to all of us?
In this AI race, as in all markets, we the consumers hold an ace up our sleeve. We can use our wallet to back businesses sharing our values, those committed to ethical standards, guardrails, and crucially, the common good.
The wealth managers looked at each other puzzled. No such option exists on the financial markets.
They offered that, just as ESG funds were developed, a structure needs to be put in place to evaluate how ethical AI and tech companies are.
I was advised that I can, as an individual, figure it out myself. If you are like me, the onus is on us to find out who and how to back ethically-minded technologists shaping our future.
WE ALL NEED TO DO OUR OWN RESEARCH - FACT CHECK PLEASE
I left convinced that, as professor Pouget said himself, we are actually not talking about AI enough.
We need to have more forums in the presence of the greater public. Scientists need to break down the development of this life-altering technology, tell us about how and why these innovations can and will change life as we know it.
We (the people) need to be given a chance to take an active part in this conversation, so we understand the promises of positive applications of AI, just as much as the doomsday cyborg-dog war scenarios.
We need to be told plainly about the weight of uncertainty that lies ahead for humanity.
Here’s my final takeaway.
It falls to us to do the work: research, learn and choose how we want to participate in this existential revolution. If you’ve got skin in the game (like a child, for example), you can’t afford not to learn more about this.
If you don’t agree with me, take it from Taylor Swift, who last week not only endorsed Kamala Harris for her policies, but also highlighted this responsibility of seeking and fact-checking information so we can make informed choices. (She was also hit by fake AI images of herself supporting Trump.)
We, the people, can support ‘good tech’.
AND we should make this a priority, so that collectively we get better industry leaders than the crazed near-trillionaires currently driving innovation in technology.
What do you think?
Please let me know. This is just the start, let’s continue this conversation.
PS. As apparently I have no free will, I was clearly meant to enthusiastically take notes and report back to you. I’ve asked Claude, the AI (by Anthropic, the more ethical challenger to GPT-4), to give me some pointers about where I could make my essay better. It helped. But not as much as Freddie, my amazing copywriter.
DIVE DEEPER
A few actionable suggestions to get you started:
Oprah has just done an AI special with Sam Altman and Bill Gates among others. Read the cliff notes of her interview here.
Read this op-ed from the Scientific American where AI ethicist Alex Hanna and linguistics professor Emily Bender made the case that corporate AI labs are misdirecting regulatory attention to imaginary, world-ending scenarios as a bureaucratic manoeuvring ploy. [source Techcrunch]
Sign up for newsletters from trusted writers like Charlie Warzel’s Galaxy Brain at The Atlantic, Casey Newton from Platformer (news at the intersection of Silicon Valley and democracy), or tech publications covering AI, like TechCrunch, Wired (available in multiple countries and languages), or The Verge.
Give podcasts a try from tech-inclined hosts like Tim Ferriss who will bring a variety of voices on the topic regularly.
Consider following the AI research departments of leading universities like MIT, Stanford, or Oxford through their public websites and publications. (That one’s a step too far for me).
Or simply reply ‘yes’ next time anyone invites you to a conference on emerging technologies. It may just change your perspective, as it did for me.
Footnotes
In neuroscience and computer science, synaptic weight refers to the strength or amplitude of a connection between two nodes, corresponding in biology to the amount of influence the firing of one neuron has on another. The term is typically used in artificial and biological neural network research.
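For the technically curious, here is a minimal, purely illustrative sketch (my own, not from the talk) of what a synaptic weight does in an artificial neuron: each input is multiplied by its weight, the results are summed, and the total is squashed by an activation function. Large weights amplify an input’s influence; weights near zero mute it.

```python
import math

def neuron_output(inputs, weights, bias=0.0):
    """Weighted sum of inputs, passed through a sigmoid activation."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    # Sigmoid squashes any real number into the range (0, 1)
    return 1.0 / (1.0 + math.exp(-total))

# Same inputs, different weights: the weight decides the influence.
strong = neuron_output([1.0, 1.0], [4.0, 0.01])   # first input dominates
muted = neuron_output([1.0, 1.0], [0.01, 0.01])   # both inputs barely matter
print(round(strong, 2), round(muted, 2))
```

Training a neural network (via backpropagation, also mentioned in the talk) is essentially the process of nudging millions or billions of these weights until the network’s outputs become useful.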
A graphics processing unit (GPU) is a specialised electronic circuit initially designed for digital image processing and to accelerate computer graphics, being present either as a discrete video card or embedded on motherboards, mobile phones, personal computers, workstations, and game consoles. After their initial design, GPUs were found to be useful for non-graphic calculations involving embarrassingly parallel problems due to their parallel structure. Other non-graphical uses include the training of neural networks and cryptocurrency mining.
Free will is the capacity or ability to choose between different possible courses of action. Free will is closely linked to the concepts of moral responsibility, praise, culpability, and other judgements which apply only to actions that are freely chosen. It is also connected with the concepts of advice, persuasion, deliberation, and prohibition. Wikipedia
Determinism is the philosophical view that all events in the universe, including human decisions and actions, are causally inevitable.
ESG: Environmental, social, and governance (ESG) is shorthand for an investing principle that prioritises environmental issues, social issues, and corporate governance. Investing with ESG considerations is sometimes referred to as responsible investing or, in more proactive cases, impact investing.