Artificial intelligence and its effects on employment. A study by the ILO.

1. The study published by the ILO at the end of 2023, entitled "Generative AI and Jobs: A global analysis of potential effects on job quantity and quality"[1], points out that most jobs and industries are only partially exposed to automation and are more likely to be complemented than replaced by the latest wave of generative artificial intelligence, such as ChatGPT.

The greatest impact of this technology is therefore likely to be not the destruction of jobs but rather the potential changes in job quality, notably in work intensity and autonomy. Clerical work turned out to be the category with the greatest technological exposure, with nearly a quarter of its tasks considered highly exposed and more than half of its tasks showing medium-level exposure. In other occupational groups – including managers, professionals and technicians – only a small share of tasks was found to be highly exposed, while around a quarter showed medium exposure levels. The report documents considerable differences in the effects of Artificial Intelligence across countries.

This is because the deployment of artificial intelligence depends heavily on a country's degree of economic, social and technological development. The study finds that 5.5 per cent of total employment in high-income countries is potentially exposed to the automating effects of the technology, whereas in low-income countries the automation risk concerns only around 0.4 per cent of employment.

On the other hand, the potential for augmentation is nearly equal across countries, suggesting that, with the right policies in place, this new wave of technological transformation could offer important benefits, especially for developing countries. The paper concludes that the socioeconomic impacts of generative AI will largely depend on how its diffusion is managed. It will be essential to promote and design policies that support an orderly, fair and consultative transition. The insights of this study underline the need for proactive policies that focus on job quality, ensure fair transitions and are grounded in dialogue and adequate regulation.

Below we reproduce the text of the Introduction to the study, together with an accompanying text box (What are GPTs?).

2. Introduction.

Each new wave of technological progress intensifies debates on automation and jobs. Current debates on Artificial Intelligence (AI) and jobs recall those of the early 1900s with the introduction of the moving assembly line, or even those of the 1950s and 1960s, which followed the introduction of the early mainframe computers. While there have been some nods to the alienation that technology can bring by standardizing and controlling work processes, in most cases, the debates have centred on two opposing viewpoints: the optimists, who view new technology as the means to relieve workers from the most arduous tasks, and the pessimists, who raise alarm about the imminent threat to jobs and the risk of mass unemployment. What has changed in debates on technology and workers, however, is the types of workers affected. While the advances in technology in the early, mid and even late-1900s were primarily focused on manual workers, technological development since the 2010s, in particular the rapid progress of Machine Learning (ML), has centred on the ability of computers to perform non-routine, cognitive tasks, and by consequence potentially affect white-collar or knowledge workers. In addition, these technological advancements have occurred in the context of much stronger interconnectedness of economies across the globe, leading to a potentially larger exposure than location-based, factory-level applications. Yet despite these developments, to an average worker, even in the most highly developed countries, the potential implications of AI have, until recently, remained largely abstract.

The launch of ChatGPT marked an important advance in the public’s exposure to AI tools. In this new wave of technological transformation, machine learning models have started to leave the labs and begin interacting with the public, demonstrating their strengths and weaknesses in daily use. The chat function dramatically shortened the distance between AI and the end user, simultaneously providing a platform for a wide range of custom-made applications and innovations.

Given these significant advancements, it is not surprising that concerns over potential job loss have resurged. While it is impossible to predict how generative AI will further develop, the current capabilities and future potential of this technology are central to discussions of its impact on jobs. Sceptics tend to believe that these machines are nothing more than “stochastic parrots” – powerful text summarizers, incapable of “learning” and producing original content, with little future for general purpose use and unsustainable computing costs (Bender et al. 2021).

On the other hand, more recent technical literature focused on testing the limits of the latest models suggests an increasing capability to carry out “novel and difficult tasks that span mathematics, coding, vision, medicine, law, psychology and more”, and a general ability to produce responses exhibiting some forms of early “reasoning” (Bubeck et al. 2023). Some assessments go as far as suggesting that machine learning models, especially those based on large neural networks used by Generative Pre-trained Transformers (GPT, see Text Box 1), might have the potential to eventually become a general-purpose technology (Goldfarb, Taska, and Teodoridis 2023; Eloundou et al. 2023).

This would have multiplier effects on the economy and labour markets, as new products and services would likely spring from this technological platform. As social scientists, we are not in a position to take sides in these technical debates. Instead, we focus on the already demonstrated capabilities of GPT-4, including custom-made chatbots with retrieval of private content (such as collections of documents, e-mails and other material), natural language processing functions of content extraction, preparation of summaries, automated content generation, semantic text searches and broader semantic analysis based on text embeddings. Large Language Models (LLMs) can also be combined with other ML models, such as speech-to-text and text-to-speech generation, potentially expanding their interaction with different types of human tasks.

Finally, the potential of interacting with live web content through custom agents and plugins, as well as the multimodal character of GPT-4 (not limited to text, but also capable of reading and generating images), makes it likely that this type of technology will expand into new areas, thereby increasing its impact on labour. Building on these observations, this study seeks to add a global perspective to the already lively debate on the possible changes that may result in labour markets as a consequence of the recent advent of generative AI.

We stress that our work focuses on the concepts of "exposure" and "potential": it does not imply automation, but rather identifies occupations, and the associated employment figures, for jobs that are more likely to be affected by GPT-4 and similar technologies in the coming years. The objective of this exercise is not to derive headline figures, but rather to analyse the direction of possible changes in order to facilitate the design of appropriate policy responses, including the possible consequences for job quality.

The analysis is based on 4-digit occupational classifications and their corresponding tasks in the ISCO-08 standard. It uses the GPT-4 model to estimate occupational and task-level scores of exposure to GPT technology and subsequently links these scores to official ILO statistics to derive global employment estimates. We also apply embedding-based text analysis and semantic clustering algorithms to provide a better understanding of the types of tasks that have a high automation potential, and discuss how the automating and augmenting effects will strongly depend on a range of additional factors and the specific country context.

We discuss the results of this analysis in the broader context of labour market transformations. We put particular focus on the current disparities in digital access across countries of different income levels, the potential for this new wave of technological transformation to aggravate such disparities, and the ensuing consequences for productivity and income. We also give consideration to the jobs with the highest automation and augmentation potential and discuss gender-specific differences.

The analysis does not take into account the new jobs that will be created to accompany the technological advancement. Twenty years ago, there were no social media managers; thirty years ago, there were few web designers; and no amount of data modelling would have rendered a priori predictions concerning a vast array of other occupations that have emerged in the past decades. As demonstrated by Autor et al. (2022), some 60 per cent of employment in 2018 in the United States was in jobs that did not exist in the 1940s. Indeed, the main value of studies such as this one is not in the precise estimates, but rather in understanding the possible direction of change.
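To make the embedding-and-clustering step above concrete, the following is a minimal sketch of what such a pass over task descriptions could look like in Python. The embedding model name, the sample tasks and the choice of k-means are illustrative assumptions; the paper does not specify this exact pipeline.

```python
# Minimal sketch: group task descriptions by semantic similarity.
# Model name, sample tasks and the k-means choice are illustrative
# assumptions, not the exact pipeline used in the ILO paper.
import numpy as np
from openai import OpenAI
from sklearn.cluster import KMeans

client = OpenAI()  # reads OPENAI_API_KEY from the environment

tasks = [
    "Prepare routine correspondence and reports",
    "Schedule appointments and maintain calendars",
    "Operate heavy construction machinery",
]

# One embedding vector per task description.
resp = client.embeddings.create(model="text-embedding-3-small", input=tasks)
X = np.array([d.embedding for d in resp.data])

# Group semantically similar tasks (number of clusters chosen arbitrarily).
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
for task, label in zip(tasks, labels):
    print(label, task)
```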

Such insights are necessary for proactively designing policies that can support orderly, fair, and consultative transitions, rather than dealing with change in a reactive manner. For this reason, we also emphasize the potential effects of technological change on working conditions and job quality and the need for workplace consultation and regulation to support the creation of quality employment and to manage transitions in the labour market. We hope that this research will contribute to needed policy debates on digital transformation in the world of work. While the analysis outlines potential implications for different occupational categories, the outcomes of the technological transition are not pre-determined. It is humans that are behind the decision to incorporate such technologies and it is humans that need to guide the transition process. It is our hope that this information can support the development of policies needed to manage these changes for the benefit of current and future societies. We intend to use this broad global study as an opening to more in-depth analyses at country level, with a particular focus on developing countries.

3. Text Box 1: What are GPTs?

Generative Pre-Trained Transformers belong to the family of Large Language Models – a type of Machine Learning model based on neural networks. The "generative" part refers to their ability to produce output of a creative nature, which in language models can take the form of sentences, paragraphs or entire text structures, with characteristics often indistinguishable from those produced by humans. "Pre-trained" refers to the initial training on a large corpus of text data, typically through unsupervised or self-supervised learning, during which the model learns about the text structure by temporarily masking part of the content and trying to minimize errors in the prediction of the masked words. Following pre-training, such models are further fine-tuned with the use of labelled data and so-called "reinforcement learning", making them more suitable for specific tasks.
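As a toy illustration of this self-supervised objective (not actual GPT training code), the snippet below computes the cross-entropy penalty a model incurs when predicting a masked word; the vocabulary and probabilities are invented for the example.

```python
# Toy illustration of the masking objective described above: the model
# assigns probabilities to candidate words for a masked position and is
# penalized (cross-entropy) when the true word receives low probability.
import numpy as np

vocab = ["reports", "bananas", "meetings"]
true_word = "reports"  # the word hidden behind the mask

# Hypothetical model output for: "Prepare routine [MASK] and e-mails"
probs = np.array([0.7, 0.05, 0.25])

loss = -np.log(probs[vocab.index(true_word)])
print(f"cross-entropy loss: {loss:.3f}")  # lower when the model is right
```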

This part of training is often perceived as a specialized job, executed by a handful of technical experts. In reality, it is labour-intensive and involves many invisible contributors (Dzieza 2023). Its prerequisite is the production of vast amounts of labelled data, typically done by workers on crowdsourcing platforms. The term "Transformers" refers to the underlying model architecture, which uses numerous mechanisms, such as attention and self-attention frameworks, to develop weights reflecting the importance of text elements, such as words in a sentence, which are subsequently used for predictions (Vaswani et al. 2017). While GPT specifically refers to the models developed by OpenAI (GPT-1, 2, 3 and 4), this type of architecture is used by many more language models already available commercially.
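For readers who want to see the attention mechanism in concrete terms, here is a minimal numpy sketch of the scaled dot-product attention introduced in Vaswani et al. (2017); the shapes and random inputs are toy values for illustration only.

```python
# Minimal sketch of scaled dot-product attention (Vaswani et al. 2017):
# each position's output is a weighted average of value vectors, with
# weights derived from query-key similarity. Toy shapes for illustration.
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    d_k = K.shape[-1]
    weights = softmax(Q @ K.T / np.sqrt(d_k))  # importance of each element
    return weights @ V

rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))   # 4 tokens, query/key dimension 8
K = rng.normal(size=(4, 8))
V = rng.normal(size=(4, 16))  # value dimension 16
print(scaled_dot_product_attention(Q, K, V).shape)  # (4, 16)
```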

The launch of ChatGPT on 30 November 2022 made GPTs more popular among the public, as it made it possible for individuals with no programming knowledge to interact with GPT-3 (and eventually GPT-4) through a chatbot function with a human-like tone. For research purposes and more complex applications, such language models are typically more powerful when used through an Application Programming Interface (API). An API is a developer access point that relies on a query-response protocol with the use of programming software. In our case, we rely on a Python script based on the OpenAI library, designed to connect to the GPT-4 model, provide a fine-tuned prompt and receive a response, which is subsequently stored in a database on our server. This enables bulk processing of large numbers of requests and relies on a GPT-4 model with more parameters than what is accessible through the public Chat function.
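The following is a minimal sketch of the kind of script-plus-API workflow described above, using the openai Python package and a local SQLite database; the prompt text, model identifier and table schema are placeholders rather than the authors' actual script.

```python
# Minimal sketch of the API workflow described above: send a prompt to a
# GPT-4 model via the OpenAI Python library and store the reply locally.
# Prompt, model name and schema are placeholders, not the authors' script.
import sqlite3
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

db = sqlite3.connect("responses.db")
db.execute("CREATE TABLE IF NOT EXISTS responses (task TEXT, reply TEXT)")

task = "Prepare routine correspondence and reports"
resp = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user",
               "content": f"Rate the GPT exposure of this task: {task}"}],
)
reply = resp.choices[0].message.content

db.execute("INSERT INTO responses VALUES (?, ?)", (task, reply))
db.commit()
```

In a bulk run, the single request above would simply be wrapped in a loop over all ISCO-08 task descriptions, with each reply appended to the same table.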


[1] Cf. ILO Working Paper 96; authors: Paweł Gmyrek, Janine Berg, David Bescond.