Implications of generative AI in tech and beyond

August 10, 2023 09:34

Since the release of ChatGPT to the public late last year, generative AI has seen an exponential rate of adoption. The major breakthrough behind these models is the transformer architecture, first developed by Google in 2017. No one quite predicted that this innovation, originally designed to address language translation, would offer a scalable framework for the development of highly performing Large Language Models (LLMs). Now that the genie is out of the bottle, two questions arise: which players are best positioned to capitalize on these technological advances, and what are the broader implications, both within and outside the tech space?

1. Who does the value of generative AI accrue to?

Hyperscalers, i.e. the large cloud service providers such as Microsoft, Alphabet, Meta and Amazon, are facing a unique opportunity to cement their position in the value chain. Historically, they focused on memory, storage and process automation. The more of the value of a human brain that can be integrated into the technology, the larger the share of value creation they will be able to capture.

Beyond the position of the hyperscalers, there is much potential for vertically specialized generative AI models with strong domain expertise to thrive, especially in areas that are hard to solve. We will quickly see the emergence of highly specialized large language models (LLMs). For example, Bloomberg recently released an LLM specialized in financial data. Learning by doing will be a powerful driver, and generalist AI providers won't do enough learning in specific areas. Specialists also build brand equity in specific areas. Therefore, generalists are not going to own everything.

The question of generalist vs specialist AI is distinct from the question of who runs the infrastructure. Specialized AI players could very well be run on the clouds of hyperscalers because of better economics. The hyperscalers/generalists are not trying to do everything but rather are trying to build a framework that specialists can use.

AI-specialized chip manufacturers are set to capture part of the value-add. While Nvidia has emerged as a generalist leader in this space, the market will be large enough to allow for multiple established players to coexist. Moreover, highly specialized chip makers will likely capture parts of the AI market as well.

Across other industries, established companies that succeed in incorporating generative AI technology into their strategy and delivery will have opportunities to gain market share, while new entrants may succeed in disrupting industries.

2. Cybersecurity implications

The technology is currently buggy and unreliable, yet widely available. This makes it much more effective for cybercrime than for cybersecurity at the moment. The greater ease of coding that LLMs offer means it is easier to produce malware. Well-targeted, highly personalized spear-phishing emails designed to deploy malware will hence likely increase.

More generally speaking, adoption by governments will likely proceed at a slower pace than in the private sector, which could put government security at risk. Against this backdrop, and given the rush to get LLMs out quickly, there is a risk of market failure in terms of security. 2024 may be the year that cyber losses become more than just a cost of doing business.

Generative AI expands the ease of disinformation, while providing new tools to sift through information. The ease of creating deep fakes will create considerable challenges, in particular a blurring in the perception of what's real and what's not.

How can risks be mitigated?
1. Self-imposed restraint by tech companies: large players have announced a variety of ethical AI initiatives. However, such self-imposed restraints are likely to be offset by competitive pressures.
2. Regulation, legislation, technical constraints: given the pace of innovation and adoption, regulation is far behind and is unlikely to be effective.
3. Policing services: a new market for policing services will likely emerge to validate the output of AI models. New methods to verify human identity will also be needed.

While there is room for new companies to occupy the new policing space, there is a debate about whether existing security players will be best positioned to provide effective defenses against the evolving face of cybercrime. The opportunity is to build a new generation of cybersecurity tools that themselves incorporate generative AI to enable the recognition of malware at speed and scale. A greater reliance on AI as a support in delivering cybersecurity will only gain in importance, given the skill shortage in the cybersecurity space.

3. China and generative AI

US tech companies are ahead in the generative AI race and Chinese companies feel the need to catch up. As in other areas of technology, there is a clear possibility of a parallel AI universe emerging within China’s area of influence, leading to a bifurcation in AI models between China and the West.

There are clear signs that the Chinese government is getting significantly involved in setting rules and norms for the AI space. In 2022, AI regulation was passed governing companies' use of algorithms in online recommendation systems. Additionally, China has spent the past few years providing guidance and frameworks for AI, particularly on its ethical use.

4. Implications for mental health

Given rising signs of a mental health crisis in the Western world, understanding the extent to which AI can help alleviate the situation is very relevant. More broadly, according to Research and Markets, the potential for AI-based healthcare solutions is significant and projected to reach USD 103 bn by 2028. However, there is a significant level of reluctance that would need to be overcome. According to a Pew Research Center survey, 60% of Americans would feel uncomfortable if their health care provider relied on AI for medical care, only 38% think it would lead to better health outcomes, and 75% are concerned that healthcare providers will move too fast in adopting the technology. Turning to mental health specifically, 79% don't want a chatbot supporting them. 46% of U.S. adults say AI chatbots should only be used by people who are also seeing a therapist, while another 28% say they should not be available to people at all.

So, while patients may get more comfortable over time as familiarity with chatbot interactions increases, these remain important hurdles in the near term. What does speak for greater adoption is the fact that chatbots are starting to respond as if they had agency. Moreover, already today some people can forget for moments that they are interacting with a bot. Hence, as the technology improves, the chatbot as a substitute for a human may well gain traction.

Regardless, some areas of concern remain. Mental healthcare seekers are in a vulnerable state, even if that state is temporary. They may be less able to express themselves and give feedback. Following the interaction, they may disappear without the possibility of follow-up. Compared to a human therapist, a chatbot's ability to read emotions and interpret non-verbal cues is more limited, especially in a text conversation. A recent Harvard Business School study found that companion AI chatbots have a number of deficiencies from a mental health perspective.

The broader question of the mental health implications of generative AI remains unclear, and empirical evidence is largely lacking.

5. Implications for the labor market and productivity

It's important to distinguish between tasks and jobs (bundles of tasks). Generative AI means that a new set of tasks can and will eventually be automated. Generative AI is capable of automating many non-routine analytical tasks, many of which, until recently, we thought could only be performed by humans. Which tasks are actually displaced depends on the cost of labor in any given location. In most jobs, there are tasks that can be automated, so jobs will evolve. Generative AI will also lead to job augmentation, where employees are not only more productive but are able to do new things thanks to humans and the new technology working together. The actual path of job creation is very difficult to predict.

The impact on the labor force will be differentiated. According to economist Daron Acemoglu, there are three responses to automation:
• upskill: results in a higher-paying job
• reskill: lateral evolution with similar pay
• deskill: these are the losers from automation; they are viewed as problematic by governments

Historical examples of automation effects are useful to keep in mind. Engel's pause describes the effect during the industrial revolution, which eventually led to higher living standards for workers after several generations. However, it hurt workers during a protracted transition phase.

Today, since generative AI potentially affects skilled employees who tend to have more agency than those affected by prior waves of automation, the labor market impact may be less significant. Nonetheless, public spending on reskilling and upskilling will be important. It's an area of concern for governments, as 66% of job seekers say they need upskilling or reskilling, according to surveys.

Companies are grappling with the question of what humans can do that machines can't. At its best, the human brain can think in terms of future causality, but this requires high-level thinking and curiosity. Generative AI models, in contrast, are excellent at processing correlations, and they do not have a sense of time. Problem solving, coaching, communication, listening and supportive empathy are all skills in which humans have an advantage, and they are present in a large number of job types. Note, however, that stressed human brains are not very differentiated from machines, finding it hard to make predictions or show empathy.

This is happening against a backdrop where nearshoring / reshoring is proceeding, albeit at a slower pace than some experts had expected. One of the bottlenecks for nearshoring is easy accessibility to robots and automation, which are necessary for the economics to work. However, AI advances can alleviate these bottlenecks. The level of skill needed for automation is being reduced, with AI advances playing an important role. The shift away from legacy automation technologies will free up engineering resources that can be redeployed. SMEs in particular will be able to deploy automation on their own as breakeven costs for nearshoring / reshoring will fall.

6. Implications for education

Advances in AI will play an important role in specific areas within the broader education system, but will not completely revolutionize the mainstream of education. Impacts will be differentiated by segment and geography.

The areas most affected will be adult education, language education and the assessment business.
• In K-12, the initial discussion around the risk of undetected plagiarism has abated as detection tools emerge that can handle generative AI.
• In higher education, impact will be on doubt resolution and facilitating research.
• Limited impact expected in corporate training, with the exception of short courses for which AI is very well suited.
• Minimal impact is expected in pre-K.

Education is likely to absorb much more capital in the future as digital and online education grow from a still small base. For example, digital expenditures currently account for less than 5% of spending in education. Venture capital funding to ed tech is 6x less than in health and 4x less than in mobility.

Geographically speaking, AI advances should have less immediate economic impact on countries with fewer knowledge workers in terms of labor displacement, i.e. EM. However, education will become a national imperative in large EMs to avoid an increasing capability gap with DMs. Countries such as Nigeria or Egypt see it as an issue of national security. This will increase the funding for education in EMs. 

Head of research and sustainability, Thematic Equities, Pictet Asset Management