Opinions expressed by Emprenderos contributors are their own.
“The age of AI has begun,” declared Bill Gates this March, reflecting on a demonstration of OpenAI’s technology, which passed an AP Bio exam and gave a thoughtful, emotional answer when asked what it would say to a father with a sick child.
At the same time, tech giants like Microsoft and Google have been racing to develop AI technology, integrate it into their existing ecosystems and dominate the market. In February, Microsoft CEO Satya Nadella challenged Google’s Sundar Pichai to come out and “dance” on the AI battlefield.
For businesses, it’s a challenge to keep up. On the one hand, AI promises to streamline workflows, automate tedious tasks, and increase overall productivity. On the other hand, the AI sphere is fast-paced, with new tools appearing all the time. Where should businesses place their bets to stay ahead of the curve?
And now, many tech experts are pushing back. Leaders like Apple co-founder Steve Wozniak and Tesla’s Elon Musk, along with more than 1,300 other AI industry experts, professors and luminaries, signed an open letter calling for a six-month pause on AI development.
At the same time, the “Godfather of AI,” Geoffrey Hinton, resigned from his post as one of Google’s top AI researchers and warned in The New York Times about the dangers of the technology he had helped create.
Even OpenAI CEO Sam Altman, the company behind ChatGPT, joined the chorus of warning voices during a congressional hearing.
But what are these warnings about? Why do tech experts say AI could pose a threat to business, and even humanity?
Here’s a closer look at their warnings.
For starters, there is a very business-focused concern.
Liability
While AI systems have developed amazing capabilities, they are far from flawless. ChatGPT, for example, has made up scientific references in articles it helped write.
Consequently, the question of liability arises. If a company uses AI to complete a task and provides wrong information to a customer, who is liable for damages? The business? The AI provider?
None of this is clear right now. And traditional commercial insurance policies generally don’t cover AI-related liabilities.
Regulators and insurers are scrambling to catch up. Only recently did the EU come up with a framework to regulate AI liability.
Related: Controlling the AI revolution through the power of legal liability
Large-scale data theft
Another concern involves the unauthorized use of data and threats to cybersecurity. AI systems often store and manage large amounts of sensitive information, much of it collected in legal gray areas.
This could make them attractive targets for cyberattacks.
“In the absence of strong privacy regulations (US) or adequate and timely enforcement of existing laws (EU), companies tend to collect as much data as they can,” explained Merve Hickok, Research Director of the Center for AI and Digital Policy, in an interview with The Cyber Express.
“AI systems tend to connect previously disparate data sets,” Hickok continued. “This means that data breaches can lead to the exposure of more granular data and can cause even more serious damage.”
Bad actors are also turning to AI to generate disinformation. Not only can this have serious ramifications for political figures, especially with an election year approaching; it can also cause direct damage to businesses.
Whether targeted or accidental, misinformation is already rampant online. AI will likely increase the volume and make it harder to detect.
Imagine AI-generated photos of business leaders, audio that mimics a politician’s voice, or artificial news anchors announcing convincing but false economic news. Business decisions triggered by such false information could have disastrous consequences.
Related: Pope Francis didn’t actually wear a white down coat. But it won’t be the last time you’re fooled by an AI-generated image.
Demotivated and less creative team members
Employers are also debating how AI will affect the psyche of individual workers.
“Should we automate all jobs, including those that are fulfilling? Should we develop non-human minds that can eventually outnumber, outsmart, obsolete and replace us?” asks the open letter.
According to Matt Cronin, national security and cybercrime coordinator at the US Department of Justice, the answer is a resounding “No.” Large-scale replacement would devastate the motivation and creativity of people in the workforce.
“Mastering a domain and deeply understanding a subject takes significant time and effort,” he writes in The Hill. “For the first time in history, an entire generation can skip this process and still progress in school and work. However, reliance on generative AI comes at a hidden price. You’re not really learning, at least not in a way that significantly benefits you.”
Ultimately, widespread reliance on AI could erode the competence of team members, including their critical thinking skills.
Related: AI can replace (some) jobs, but it can’t replace human connection. Here’s why.
Economic and political instability
It is unknown what economic changes the widespread adoption of AI will bring, but they will likely be big and fast. After all, a recent Goldman Sachs estimate projected that two-thirds of today’s jobs could be partially or fully automated, with unpredictable ramifications for individual companies.
According to experts’ most pessimistic outlook, AI could also incite political instability. This could range from electoral manipulation to truly apocalyptic scenarios.
In an op-ed in Time magazine, decision theorist Eliezer Yudkowsky called for a blanket halt to AI development. He and others argue that we are not ready for powerful AI and that unfettered development could lead to catastrophe.
AI tools have immense potential to boost business productivity and drive success.
However, it is crucial to be aware of the dangers posed by AI systems — not only according to doomsayers and technoskeptics, but according to the very people who developed these technologies.
This awareness can temper companies’ approach to AI with the critical caution needed for successful adoption.