SAN FRANCISCO, Jan 28 (Reuters) - Artificial intelligence data startup Turing, part of a growing group of companies supplying human trainers to AI labs, said on Tuesday that its revenue tripled to $300 million last year and that it turned profitable.
The Palo Alto-headquartered company, which was valued at $1.1 billion in 2021, counts OpenAI, Google, Anthropic and Meta among its customers.
As AI models grow more complex, demand for human trainers with specialized expertise has surged, driving up the valuations of companies in the space, including Turing's competitor Scale AI.
AI data companies match workers with the relevant skills to specific projects aimed at improving AI models, sparing the AI labs the burden of managing large numbers of trainers themselves.
Turing has a database of more than four million human experts, including software developers and scientists with doctoral degrees, who can be hired to label data for AI models.
Such services do not come cheap: a single complex annotation can cost hundreds of dollars, and advanced AI models can require millions of annotations. Meta, for example, used more than 10 million human annotations to train its Llama 3 models, Meta executive Joe Spisak said last year.
As AI labs run into the "data wall," the point at which a shortage of additional internet training data limits further gains in model performance, they are increasingly turning to human data companies such as Turing to make their AI models smarter, Turing CEO Jonathan Siddharth told Reuters in an interview.
"Companies like Turing are instrumental in maintaining the scalability of models, compensating for the data shortfall that currently exists," Siddharth stated.