GPT-3’s ability to mimic human-written text with uncanny realism seemed like a milestone on the road to true machine intelligence. Here’s an example of what it can do.
First, here’s the prompt a human provided it: “In a shocking finding, scientists discovered a herd of unicorns living in a remote, previously unexplored valley in the Andes Mountains. Even more surprising to the researchers was the fact that the unicorns spoke perfect English.”
Here’s how GPT-3 finished the story: “They also were found to have perfectly coiffed hair, and wore what appeared to be Dior makeup.
“‘We were shocked to discover the unicorns,’ said anthropologist Daniel St. Maurice. ‘They were like nothing we had ever seen before. We had heard legends of the unicorns, but never thought they actually existed.’
“When the scientists first arrived in the valley, the unicorns were surprised and startled by the presence of humans, but were also excited. The unicorns welcomed the researchers and explained that they had been waiting for them for a very long time.”
As you can see, GPT-3 is capable of producing complex sentences that read as though they could have been written by a human. Machines that can use language in this way matter for several reasons. Language is crucial to making sense of the everyday world: humans use it to communicate, to share ideas, and to describe concepts. An AI that mastered language would acquire a better understanding of the world in the process.
When people started to try GPT-3 for themselves last summer, thousands of examples of its versatility flooded social media. It even sparked debates about whether GPT-3 was the first artificial general intelligence. It’s not. Despite the incredibly convincing passages of text it can churn out, GPT-3 doesn’t do anything fundamentally new. What it shows instead is that size can be everything. GPT-3’s abilities also make AI’s growing problems hard to ignore. If you want to know the state of deep learning today, look at GPT-3: it is a microcosm of the best and worst in AI.