April 8, 2026

Humans in the Loop: The evolution of work in early experiments with generative AI

How can generative AI lead to better jobs? In January 2026, approximately half of American workers reported using AI, but how new technologies have affected the quality of their jobs remains largely unclear. Drawing on a study of more than twenty companies across four major industry groups – healthcare and life sciences; retail; finance, insurance, and real estate; and manufacturing – this paper identifies patterns in how organizations have experimented with generative AI; how those experiments have changed the roles of workers; and how organizations can support high-quality jobs as they integrate new technologies.


The applications of generative AI among the companies we studied were directed toward three common challenges. The bottleneck problem arises when workers are responsible for a growing number of simple tasks that get in the way of higher value-added work. Generative AI tools have been aimed at relieving these bottlenecks by speeding up near-routine tasks. The cafeteria problem emerges in processes that require workers to consult experts from various domains and integrate their input into a product, document, or idea. Organizations have looked to generative AI to predict what those domain experts might say based on what they have produced in the past. The learning curve problem refers to the extra time and effort novices require to complete a complex task in a new domain. Generative AI tools have been directed at helping workers perform as if they had more experience – and develop expertise – in new domains.


Across the applications of generative AI addressing these challenges, there has been a shift in the core tasks that professional and technical workers are being asked to perform. Where generative AI tools are being deployed, workers are increasingly asked to perform supervisory control tasks as the “human in the loop,” overseeing and analyzing a process rather than executing it manually. Although supervisory control tasks may be new to workers in law or healthcare, the “human in the loop” concept is not new. Workers in a range of occupations, from airline pilots and manufacturing technicians to utility operators, already supervise automated systems, and existing guidelines for how workers in these roles can thrive can inform the use of generative AI.


Supervisory control jobs vary widely in quality and compensation. Whereas operating complex systems in nuclear power and aerospace is widely considered interesting and well-paid (if intense) work, machine operators overseeing automated equipment in industrial environments frequently receive lower pay, and their roles are harder for employers to fill. A similar range of jobs is emerging in generative AI environments: some generative AI tools may require humans in the loop to perform tedious tasks reviewing AI-generated content, whereas other roles call for supervisory control work interpreting and troubleshooting information, which requires higher skill and offers more engagement.


Employees have significant discretion over how they use generative AI tools – and can often shape how those tools affect their daily work. Although public discussions have frequently presented the march of generative AI as an inevitable force to which companies must react, there has been significant variation in how organizations have chosen to deploy these tools. This variation affirms that management practices matter for how generative AI will shape jobs.


Employers, policymakers, and technology providers have several levers to make jobs affected by generative AI more interesting and higher quality: designing interfaces for transparency and situational awareness, designing jobs for learning and mobility, and supporting training that develops judgment and domain expertise. On each of these fronts, public policy can make such practices easier for organizations to pursue.