In many circumstances AI performance is far better than that of humans; at the same time, there are many areas where AI lags behind. Although generative AI is now widely used across numerous human activities, a couple of American researchers have noted that "generative AI had a long way to go before it would not require intensive human oversight".
AI's "helping hand" is most evident in spheres such as helping lawyers review contracts, marketers build websites, or coders get technical advice, as well as in writing applications, press releases, social media posts, etc. Some possible pros and cons of AI, as described by the US scholars, are outlined below.
AI’s pros
Generative AI should be treated like a person but also recognized as a software process, says Wharton management professor Ethan Mollick: treating AI like a person is the best way to work with it, but one should remember what it really is underneath. Mollick believes the probability of computers becoming sentient is small, but he considers it one of the four scenarios in his book "Co-Intelligence: Living and Working with AI."
He emphasizes that the two most likely scenarios are exponential or linear growth of AI and encourages exploring how AI can enhance productivity and improve lives. Mollick also discusses the importance of entrepreneurs using AI and the responsibility of tech companies in regulation.
AI has even been shown to respond to people in crisis with more empathy than some doctors and therapists. Still, this leaves one "in an interesting trap," said Mollick, co-director of the Generative AI Lab at Wharton: "Treat it like a person and you're 90% of the way there. At the same time, you have to remember you are dealing with a software process."
The “anthropomorphism of AI” can fuel fears of it becoming an existential threat: in reality, AI will likely continue to grow and help improve human lives. Everyone agrees that regulation is necessary, but figuring out the details is difficult.
Thus prof. Mollick underlines that high-powered, open-source models can be easily stripped of human controls "with just a little bit of work", leaving them open to manipulation by scammers. But too much preemptive regulation could stifle experimentation and progress; instead, Mollick advocates for "fast regulations" that can be enacted as problems arise.
Reference: Mollick E. Co-Intelligence: How to Live and Work with AI. 2024. The book explains what it means to think and work together with smart machines.
AI’s cons
Opponents of this view argue that AI will most likely create more jobs for people because it needs intensive human oversight to produce usable results. Modern work is complex, and most jobs involve much more than the tasks AI is good at, such as summarizing text or "generating output based on prompts", notes prof. V. Yakubovich, executive director of Wharton's Mack Institute for Innovation Management, who offers the following arguments.
First, while generative AI has advanced rapidly, it still has a long way to go before it can function autonomously and predictably, which are key features that make it reliable.
Second, large language models (LLMs), such as those behind ChatGPT, are capable of processing vast amounts of data, but "they cannot parse it accurately and are prone to misleading information", a phenomenon known as AI hallucinations.
Third, companies are risk-averse and need to maintain a high degree of efficiency and control to be successful: i.e. they wouldn’t “be rushing to lay off all their people in exchange for technology that still has a lot of bugs to work out”.
Thus, prof. V. Yakubovich says, AI has certain imperfections. Despite these shortcomings, generative AI has been touted for its ability to handle what many consider mundane communication at work, such as interacting with customers online, producing reports, writing press releases, etc.
The professor points out that many of those tasks have already been taken from workers: for example, chatbots handle customer complaints, and client-facing employees are often given scripted language vetted by lawyers.
Besides, companies do not want AI involved in politically sensitive matters, especially where there are legal concerns. "What I see so far in talking to senior leaders of companies is that they try to avoid completely using models in politically charged cases because they know they will have more work to do adjudicating among the different parties", prof. V. Yakubovich says.
Data science has been around for years, Yakubovich said, yet many companies still lack good infrastructure to organize the tremendous information that the technology is capable of collecting. Even if they built it, humans are still an indispensable part of making sense of it all: “If you want to curate everything, it’s a lot of work, and this is where more jobs will emerge,” he said.
Reference: Yakubovich V. AI Can't Replace You at Work. Here's Why. 2024.
Source: https://knowledge.wharton.upenn.edu/article/ai-cant-replace-you-at-work-heres-why/
Conclusions by the mentioned authors:
= AI is more likely to produce more jobs, not fewer;
= LLMs save time by processing massive data, but humans make the output usable;
= Companies are risk-averse, so they won't adopt imperfect technology on a large scale.
= The anthropomorphism of AI can fuel fears of it becoming an existential threat. In reality, AI will likely continue to grow and help improve human lives.
= Presently, AI is a "digital solution" for numerous entrepreneurs on a tight budget.
= Industry leaders should implement fast regulations that address problems as they arise and don’t hinder experimentation.