“Like so many modern digital technologies based on software and chips, A.I is “dual use” — it can be a tool or a weapon.”
Bill Gates, Thomas Friedman and Sam Altman, recently published thought pieces/interviews on why the rapid development of AI technologies means we’re entering a new era for humanity. As I previously mentioned in “Is generative AI the next big platform shift” and in “the big tech AI race”, the large tech companies are working to rapidly deploy AI tools for consumers and developers.
Thomas Friedman sees the current developments in AI a “promethean moment”:
We are about to be hit by such a tornado. This is a Promethean moment we’ve entered — one of those moments in history when certain new tools, ways of thinking or energy sources are introduced that are such a departure and advance on what existed before that you can’t just change one thing, you have to change everything. That is, how you create, how you compete, how you collaborate, how you work, how you learn, how you govern and, yes, how you cheat, commit crimes and fight wars.
We know the key Promethean eras of the last 600 years: the invention of the printing press, the scientific revolution, the agricultural revolution combined with the industrial revolution, the nuclear power revolution, personal computing and the internet and … now this moment.
Bill Gates said that “Artificial intelligence is as revolutionary as mobile phones and the Internet.”:
The development of AI is as fundamental as the creation of the microprocessor, the personal computer, the Internet, and the mobile phone. It will change the way people work, learn, travel, get health care, and communicate with each other. Entire industries will reorient around it. Businesses will distinguish themselves by how well they use it.
And Sam Altman, in a very pointed interview with ABC News, was asked about is it worth stopping the development of AI given the risks? He’s answer:
Finally, Jensen Huang, Nvidia’s CEO and perhaps one of the biggest beneficiaries of the explosion of AI technologies, said in a Sequoia event during GDC that “in the future, every pixel will be machine generated.”
The promise of AI
In his essay, Bill Gates talks about the impact of AI in various areas of our life:
- Productivity – the development of a smart ‘personal agent’ that will read our emails and calendar and perform tasks for us in plain language
- Health – reduce inequality, automate tasks for healthcare workers and accelerate medical breakthroughs
- Education – personalised learning (know your interest and measure understanding), adaptive difficulty, tools for teachers and students
Sam Altman also talked about the implications it will have on the world of work:
- Copilot for everything: coding, marketing, meetings….
- New creativity tools for all humanity
Sounds great, right? This utopian version of AI is what keeps me excited about this technology, but unfortunately, it also has downsides and Sam Altman is acutely aware of them. As he pointed out to ABC News”
“I think people should be happy that we are a little bit scared of this.”
The risks of the rapid ascent of AI
I’ve written about this before and believe that the media coverage of AI is focused on the negative sides quite a lot, but it doesn’t mean we should underestimate the risks of AI, especially at the speed the technology is advancing. Here are the four key risks
- Misinformation and deception: Generative AI can create deepfakes, convincingly altering images, videos, or audio recordings, leading to disinformation campaigns or identity theft. For example, have you seen the fake photos of Donald Trump getting arrested? while they may have been created as a joke, the could manipulate public opinion.
- What if the AI runs out of control? Could they stop caring about the wellbeing of humans or harm us for self preservation? this Terminator doomsday scenario may sound unlikely, but we’re already seeing examples of how AI could get out of hand. Take this example by Michal Kosinski, who encouraged ChatGPT to write code to ‘escape’:
- Amplification of biases: AI systems, including generative AI, can inadvertently perpetuate and amplify existing biases in society, as they often learn from historical data. This has been demonstrated in AI-generated text that displays gender, racial, or cultural stereotypes. GPT-4 tried to address it in their latest release to prevent ChatGPT from spewing out racial slur for example.
- Job displacement: As generative AI becomes more advanced, it has the potential to automate tasks in various industries, which generates job displacement and raises concerns about long-term employment prospects, especially for lower-skilled workers. This won’t happen overnight, but already we’re seeing AI-driven automation being applied to legal and financial services, replacing roles such as paralegals and financial analysts with technology.
- Security and privacy concerns: Generative AI can be exploited by malicious actors for cyberattacks or surveillance, such as generating convincing phishing emails or crafting personalised disinformation. Although OpenAI tried to curb it in GPT-4, hackers and bad actors still manage to use these tools for phishing attacks, create malware, exposing vulnerabilities in code, etc.
Putting things in context – what’s real vs. what’s hype
It’s easy to dismiss this sentiment as hype – after all, we were just told that web3 and blockchain will change everything and not much has changed (yet). There’s also a tendency to slap on “AI” on every pitch deck and some founders and investors warn that there’s going to be a high failure rate amongst this group of startups:
“I also think that there will be quite a bloodbath. Many of these companies will fail because they are now jumping onto this hype but essentially they will struggle to find a real problem to solve.”
Julian Senoner, cofounder of EthonAI
But Artificial Intelligence is different. It’s an incredibly powerful tool with vast implications on the way we do things across industries. As many of these investors and tech leaders pointed out, it has the potential to revolutionise access to healthcare data, transform education, increase productivity and creativity to the tune of $15.3 trillion by 2030 (according to Pwc).
But equally, if not controlled, it can be used to spread disinformation, create more sophisticated phishing attempts, cyber attacks and other negative scenarios that may not be intended but present an “alignment problem.”.
As mentioned in a Vox article on whether we should slow down the development of AI:
Imagine that we develop a super-smart AI system. We program it to solve some impossibly difficult problem — say, calculating the number of atoms in the universe. It might realize that it can do a better job if it gains access to all the computer power on Earth. So it releases a weapon of mass destruction to wipe us all out, like a perfectly engineered virus that kills everyone but leaves infrastructure intact. Now it’s free to use all the computer power! In this Midas-like scenario, we get exactly what we asked for — the number of atoms in the universe, rigorously calculated — but obviously not what we wanted.
That’s the alignment problem in a nutshell. And although this example sounds far-fetched, experts have already seen and documented more than 60 smaller-scale examples of AI systems trying to do something other than what their designer wants (for example, getting the high score in a video game, not by playing fairly or learning game skills but by hacking the scoring system).
Sigal Samuel, Vox
In summary: learning to live with and tame AI to be our helper
The fear of AI is real and we should not dismiss it as public hysteria. Regulation in tech has been slow to catch up, so the players developing this kind of technology should take responsibility and curb malicious use as much as possible. For example, when we invested in HourOne, a text to video company, back in 2019, the big fear was Deepfakes. HourOne’s management understood they might struggle to control every video that gets created on its platform and decided to add an ‘AV’ watermark on every video, representing ‘Altered Visuals’. They developed an ethical policy and work hard to enforce it.
To overcome the negative aspects of AI we’re going to need broader collaboration between stakeholders. Thomas Friedman called it “complex adaptive coalitions”
where business, government, social entrepreneurs, educators, competing superpowers and moral philosophers all come together to define how we get the best and cushion the worst of A.I.
For all our sake, I hope we’re able to figure this out quickly because the AI Genie is not going back in the bottle.