Large, creative AI models will transform lives and labour markets

They bring enormous promise and peril. In the first of three special articles we explain how they work

Since November 2022, when OpenAI, the company which makes ChatGPT, first opened the chatbot to the public, there has been little else that the tech elite has wanted to talk about. As this article was being written, the founder of a London technology company messaged your correspondent unprompted to say that this kind of AI is “essentially all I’m thinking about these days”. He says he is in the process of redesigning his company, valued at many hundreds of millions of dollars, around it. He is not alone.
ChatGPT embodies more knowledge than any human has ever known. It can converse cogently about mineral extraction in Papua New Guinea, or about TSMC, a Taiwanese semiconductor firm that finds itself in the geopolitical crosshairs. GPT-4, the artificial neural network which powers ChatGPT, has aced exams that serve as gateways for people to enter careers in law and medicine in America. It can generate songs, poems and essays. Other “generative AI” models can churn out digital photos, drawings and animations.
Running alongside this excitement is deep concern, inside the tech industry and beyond, that generative AI models are being developed too quickly. GPT-4 is a type of generative AI called a large language model (LLM). Tech giants like Alphabet, Amazon and Nvidia have all trained their own LLMs, and given them names like PaLM, Megatron, Titan and Chinchilla.
The lure grows greater

The London tech boss says he is “incredibly nervous about the existential threat” posed by AI, even as he pursues it, and is “speaking with [other] founders about it daily”. Governments in America, Europe and China have all started mulling new regulations. Prominent voices are calling for the development of artificial intelligence to be paused, lest the software somehow run out of control and damage, or even destroy, human society. To calibrate how worried or excited you should be about this technology, it helps first to understand where it came from, how it works and what the limits are to its growth.
The contemporary explosion of the capabilities of AI software began in the early 2010s, when a software technique called “deep learning” became popular. Using the magic mix of vast datasets and powerful computers running neural networks on graphics processing units (GPUs), deep learning dramatically improved computers’ abilities to recognise images, process audio and play games. By the late 2010s computers could do many of these tasks better than any human.
But neural networks tended to be embedded in software with broader functionality, like email clients, and non-coders rarely interacted with these AIs directly. Those who did often described their experience in near-spiritual terms. Lee Sedol, one of the world’s best players of Go, an ancient Chinese board game, retired from the game after Alphabet’s neural-net-based AlphaGo software crushed him in 2016. “Even if I become the number one,” he said, “there is an entity that cannot be defeated.”
By working in the most human of mediums, conversation, ChatGPT is now allowing the internet-using public to experience something similar: a kind of intellectual vertigo caused by software that has suddenly improved to the point where it can perform tasks once exclusively in the domain of human intelligence.
Despite that feeling of magic, an LLM is, in reality, a giant exercise in statistics. Prompt ChatGPT to finish the sentence: “The promise of large language models is that they…” and you will get an immediate response. How does it work?
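
Before the step-by-step answer, a toy sketch in Python of what “an exercise in statistics” means in miniature. The probabilities below are invented for illustration; a real model computes a distribution over its entire vocabulary at every step:

```python
import random

# A real LLM assigns a probability to every token in its vocabulary,
# conditioned on the text so far, then samples one. These numbers are
# invented for illustration only.
next_token_probs = {
    " will": 0.40,
    " can":  0.25,
    " may":  0.20,
    " are":  0.15,
}

prompt = "The promise of large language models is that they"
tokens = list(next_token_probs)
weights = list(next_token_probs.values())

# Sample one continuation; repeating this, token by token, is in
# essence how the model writes.
next_token = random.choices(tokens, weights=weights, k=1)[0]
print(prompt + next_token)
```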
First, the language of the query is converted from words, which neural networks cannot handle, into a representative set of numbers (see graphic). GPT-3, which powered an earlier version of ChatGPT, does this by splitting text into chunks of characters, called tokens, which commonly occur together. These tokens can be words, like “love” or “are”, affixes, like “dis” or “ised”, and punctuation, like “?”. GPT-3’s dictionary contains details of 50,257 tokens.
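
For the curious, this tokenisation step can be reproduced with OpenAI’s open-source tiktoken library (a sketch assuming it is installed via `pip install tiktoken`); its “gpt2” encoding is the 50,257-token vocabulary shared by GPT-2 and GPT-3:

```python
import tiktoken

# Load the 50,257-token vocabulary used by GPT-2 and GPT-3.
enc = tiktoken.get_encoding("gpt2")
print(enc.n_vocab)  # 50257

# Text becomes a list of integer token ids...
ids = enc.encode("The promise of large language models is that they...")
print(ids)

# ...and each id maps back to a chunk of characters: a word, an affix
# or a punctuation mark.
print([enc.decode([i]) for i in ids])
```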