A quick scan of the headlines makes it seem like generative artificial intelligence is everywhere these days. In fact, some of these headlines may actually be written by generative AI, like OpenAI’s ChatGPT, a chatbot that has demonstrated an uncanny ability to produce text that appears to have been written by a human.
But what do people really mean when they say “generative AI”?
Before the generative AI boom of the last few years, when people talked about AI, they typically talked about machine learning models that can learn to make a prediction based on data. For example, such models are trained using millions of examples to predict whether a particular X-ray shows signs of a tumor or whether a particular borrower is likely to default on a loan.
Generative AI can be thought of as a machine learning model trained to create new data rather than make a prediction about a specific dataset. A generative AI system is one that learns to generate new objects that resemble the data it was trained on.
“When it comes to the actual machinery underlying generative AI and other types of AI, the differences can be a bit blurry. Often the same algorithms can be used for both,” says Phillip Isola, associate professor of electrical engineering and computer science at MIT and member of the Computer Science and Artificial Intelligence Laboratory (CSAIL).
And despite the hype that accompanied the release of ChatGPT and its counterparts, the technology itself is not entirely new. These powerful machine learning models draw on research and computational advances dating back more than 50 years.
An increase in complexity
An early example of generative AI is a much simpler model known as a Markov chain. The technique is named after Andrey Markov, a Russian mathematician who in 1906 introduced this statistical method to model the behavior of random processes. In machine learning, Markov models have long been used for next-word prediction tasks, such as the autocomplete feature in an email program.
In text prediction, a Markov model generates the next word in a sentence by looking at the previous word or a pair of previous words. But because these simple models can only see so far back, they are not good at generating plausible text, says Tommi Jaakkola, Thomas Siebel Professor of Electrical Engineering and Computer Science at MIT, who is also a member of CSAIL and the Institute for Data, Systems and Society (IDSS).
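The bigram version of this idea fits in a few lines of code. The sketch below is an illustration, not any production autocomplete system: it records which words follow which in a tiny made-up corpus, then generates text by repeatedly sampling a successor of the current word — so it can never "see" further back than one word.

```python
import random
from collections import defaultdict

def build_bigram_model(text):
    """Map each word to the list of words observed right after it."""
    words = text.split()
    model = defaultdict(list)
    for prev, nxt in zip(words, words[1:]):
        model[prev].append(nxt)
    return model

def generate(model, start, length=8, seed=0):
    """Repeatedly sample a successor of the current word."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        successors = model.get(out[-1])
        if not successors:
            break
        out.append(rng.choice(successors))
    return " ".join(out)

corpus = "the cat sat on the mat the cat ate the fish"
model = build_bigram_model(corpus)
print(generate(model, "the"))
```

Because each step conditions only on the previous word, the output is locally plausible but drifts globally — exactly the limitation Jaakkola describes.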
“We were generating things way before the last decade, but the biggest difference here is in terms of the complexity of objects we can generate and the scale at which we can train these models,” he explains.
Just a few years ago, researchers tended to focus on finding a machine learning algorithm that makes the best use of a specific data set. But that focus has shifted slightly, and many researchers are now using larger data sets, perhaps with hundreds of millions or even billions of data points, to train models that can achieve impressive results.
The basic models underlying ChatGPT and similar systems work in much the same way as a Markov model. But one big difference is that ChatGPT is much bigger and more complex with billions of parameters. And it’s been trained on a huge amount of data—in this case, much of the publicly available text on the Internet.
In this huge text corpus, words and sentences appear in sequences with certain dependencies. This repetition helps the model understand how to cut text into statistical chunks that have some predictability. It learns the patterns in these blocks of text and uses this knowledge to suggest what might come next.
More powerful architectures
While larger datasets are a catalyst that led to the generative AI boom, a number of major research advances also led to more complex deep-learning architectures.
In 2014, a machine learning architecture known as a generative adversarial network (GAN) was proposed by researchers at the University of Montreal. GANs use two models that work together: One learns to generate a target output (like an image), and the other learns to distinguish true data from the output of the generator. The generator tries to trick the discriminator, and in the process learns to make more realistic outputs. The image generator StyleGAN is based on these types of models.
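The adversarial idea can be seen in a deliberately tiny numpy sketch — my own simplification, not the original GAN: one-dimensional "real" data, a linear generator, and a logistic discriminator, with the gradient updates written out by hand.

```python
import numpy as np

rng = np.random.default_rng(0)

# "Real" data: samples from a 1-D Gaussian the generator must imitate.
REAL_MEAN, REAL_STD = 4.0, 0.5
def sample_real(n):
    return rng.normal(REAL_MEAN, REAL_STD, n)

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

# Generator G(z) = a*z + b, discriminator D(x) = sigmoid(w*x + c):
# two scalar parameters each, so the gradients fit on one line.
a, b = 1.0, 0.0          # generator parameters
w, c = 0.1, 0.0          # discriminator parameters
lr, batch = 0.05, 64

for step in range(3000):
    z = rng.normal(size=batch)
    x_fake = a * z + b
    x_real = sample_real(batch)

    # Discriminator ascent on log D(real) + log(1 - D(fake)).
    d_real, d_fake = sigmoid(w * x_real + c), sigmoid(w * x_fake + c)
    w += lr * np.mean((1 - d_real) * x_real - d_fake * x_fake)
    c += lr * np.mean((1 - d_real) - d_fake)

    # Generator ascent on log D(fake): try to fool the discriminator.
    d_fake = sigmoid(w * x_fake + c)
    a += lr * np.mean((1 - d_fake) * w * z)
    b += lr * np.mean((1 - d_fake) * w)

print(f"generator now samples around {b:.2f} (target mean {REAL_MEAN})")
```

As the discriminator learns that real samples sit near 4, the generator's offset `b` is pushed toward 4 to keep fooling it — the same tug-of-war, in miniature, that drives systems like StyleGAN.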
Diffusion models were introduced a year later by researchers at Stanford University and the University of California at Berkeley. By iteratively refining their output, these models learn to generate new data examples that resemble samples in a training dataset and have been used to create realistic-looking images. A diffusion model is at the heart of the text-to-image generation system Stable Diffusion.
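The "iterative refinement" at the heart of diffusion can be illustrated with a toy of my own: instead of a learned denoising network, the sketch below cheats and uses the closed-form score (gradient of the log-density) of a known 1-D Gaussian, then refines pure noise into samples from it via Langevin steps.

```python
import numpy as np

rng = np.random.default_rng(0)

# Target distribution the "model" should generate: N(mu, sigma^2).
# A real diffusion model learns its denoiser from data; here we use
# the closed-form score of a known Gaussian as a stand-in.
mu, sigma = 3.0, 1.0
def score(x):
    return -(x - mu) / sigma**2

# Start from pure noise and iteratively refine with Langevin steps:
# a small move up the score plus a dose of fresh noise each step.
x = rng.normal(size=5000)          # initial noise, far from the target
step = 0.1
for _ in range(500):
    x = x + step * score(x) + np.sqrt(2 * step) * rng.normal(size=x.shape)

print(f"refined samples: mean={x.mean():.2f}, std={x.std():.2f}")
```

Each pass nudges the samples toward higher-probability regions; after many passes the cloud of noise has been reshaped into samples that resemble the training distribution — the same principle, scaled up enormously, behind Stable Diffusion's images.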
In 2017, researchers at Google introduced the transformer architecture, which has been used to develop large language models such as those that power ChatGPT. In natural language processing, a transformer encodes each word in a corpus of text as a token and then generates an attention map that captures each token’s relationships with all other tokens. This attention map helps the transformer understand the context when generating new text.
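The attention map described above is just a small matrix computation. This sketch shows scaled dot-product attention on three made-up token vectors (the numbers and dimensions are invented for illustration; a real transformer adds learned projections, multiple heads, and many layers):

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V."""
    d = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d)                      # token-to-token affinities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)     # each row sums to 1
    return weights @ V, weights

# Three "tokens", each a 4-dimensional vector (made-up numbers).
rng = np.random.default_rng(0)
tokens = rng.normal(size=(3, 4))
out, attn_map = attention(tokens, tokens, tokens)      # self-attention
print(np.round(attn_map, 2))
```

Row *i* of `attn_map` says how much token *i* "attends to" every other token; the output mixes the value vectors according to those weights, which is how context flows into each token's representation.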
These are just a few of many approaches that can be used for generative AI.
A variety of applications
Common to all these approaches is that they convert inputs into a set of tokens, which are numerical representations of chunks of data. As long as your data can be converted into this standard token format, you can, in theory, apply these methods to generate new data that looks similar.
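A minimal word-level tokenizer makes the idea concrete. The vocabulary and text below are invented for illustration (real systems use subword schemes over far larger vocabularies), but the principle is the same: anything you can chop into discrete chunks can be mapped to integers and back.

```python
def make_vocab(corpus):
    """Assign each distinct word a stable integer id."""
    return {word: i for i, word in enumerate(sorted(set(corpus.split())))}

def encode(text, vocab):
    """Text -> token ids: the numerical format models actually see."""
    return [vocab[w] for w in text.split()]

def decode(ids, vocab):
    """Token ids -> text."""
    inverse = {i: w for w, i in vocab.items()}
    return " ".join(inverse[i] for i in ids)

vocab = make_vocab("the cat sat on the mat")
ids = encode("the cat sat", vocab)
print(ids, "->", decode(ids, vocab))
```

Once data is in this form, the model never sees "words" or "pixels" at all — only sequences of integers, which is why the same machinery can generate text, images, or protein structures.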
“Your mileage may vary depending on how noisy your data is and how difficult the signal is to extract, but it really comes closer to the way a general-purpose CPU can take in any kind of data and start processing it in a unified way,” says Isola.
This opens up a huge range of applications for generative AI.
For example, Isola’s group uses generative AI to create synthetic image data that can be used to train another intelligent system, such as by teaching a computer vision model how to recognize objects.
Jaakkola’s group uses generative AI to design new protein structures or valid crystal structures that specify new materials. In the same way that a generative model learns the dependencies of language, if it is shown crystal structures instead, it can learn the relationships that make structures stable and realizable, he explains.
However, while generative models can achieve incredible results, they are not the best choice for all types of data. For tasks that involve making predictions about structured data, like tabular data in a spreadsheet, generative AI models tend to be outperformed by traditional machine learning methods, says Devavrat Shah, the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science at MIT and a member of the IDSS and the Laboratory for Information and Decision Systems.
“The highest value they have in my mind is to become this amazing interface to machines that is human-friendly. In the past, humans had to talk to machines in machine language to make things happen. Now this interface has figured out how to talk to both humans and machines,” says Shah.
Raising red flags
Generative AI chatbots are now being used in call centers to field questions from human customers, but this application highlights a potential red flag for implementing these models: worker displacement.
In addition, generative AI can inherit and propagate biases found in training data, or amplify hate speech and false statements. The models have the capacity to plagiarize and can generate content that looks like it was produced by a specific human creator, raising potential copyright issues.
On the other hand, Shah suggests that generative AI could empower artists, who could use generative tools to help them make creative content that they wouldn’t otherwise have the means to produce.
In the future, he sees generative artificial intelligence changing the economy in many disciplines.
One promising future Isola sees for generative AI is its use in manufacturing. Instead of having a model create a picture of a chair, perhaps it could generate a plan for a chair that could be produced.
He also sees future applications of generative AI systems to develop more generally intelligent AI agents.
“There are differences in how these models work and how we think the human brain works, but I think there are also similarities. We have the ability to think and dream in our heads, to come up with interesting ideas or plans, and I think generative AI is one of the tools that will allow agents to do that too,” says Isola.