Talking to models instead of invoking them
Prediction: We will use many future NLU models by talking to them instead of treating them like programming APIs.
Enormous language models, GPT-3 in particular, have spawned a new way of extracting results from a model that could be either a temporary blip on the radar or a change as fundamental as the hardware-software divide itself. Time will tell.
To set up the analogy: there was a time when software was hardware. Algorithms were etched onto chips. Later, machine code and punchcards permitted easier — but still low-level — direction of the hardware. Today, decades later, we write software with high-level concepts completely divorced from the machine that executes them.
Today, AI is still software. Programmers still etch "AI rules" directly into their code. Learned models speed the process but still require very low-level interaction, as if the model were a complex API call: inputs are encoded as long sequences of numbers, and outputs are interpreted from long sequences of probabilities.
The picture below shows this low-level interaction with a text classifier: to determine that an input is of type Name, you have to interpret the array of probabilities in the output. Very software-like. What if instead you could just talk to the model?
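To make the "very software-like" part concrete, here is a minimal sketch of that style of invocation. The `classify` function, the label set, and the toy encoding are hypothetical stand-ins for a real trained model; the point is that the caller works entirely in token ids and probability arrays, never in language.

```python
import numpy as np

# Hypothetical label set for a small text classifier.
LABELS = ["Name", "Place", "Date", "Other"]

def classify(token_ids):
    """Stand-in for a trained model: returns one probability per label.
    A real model would be a learned function of the encoded input."""
    rng = np.random.default_rng(sum(token_ids))  # deterministic fake output
    logits = rng.normal(size=len(LABELS))
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()  # softmax over labels

# The caller does all the low-level work: encode text as numbers...
token_ids = [ord(c) for c in "John Smith"]  # toy "encoding"
probs = classify(token_ids)

# ...then interpret a raw probability array to recover the answer.
print(LABELS[int(np.argmax(probs))], probs)
```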
Software was hardware before it became software. AI was software before it became a conversational entity. Enter Deep Learning, Large Language Models, and GPT-3.
Suddenly, it's possible to interact with some models by talking to them. Because these models encode sufficiently general-purpose knowledge, and the encoding itself is derived from reading human language, human language turns out to be a viable way to give them instructions and read back results.
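For instance, the classification task from the sketch above can be posed in plain English. A minimal sketch against the GPT-3 completions API available at the time of writing (the legacy `openai.Completion` endpoint; the model name, prompt wording, and parameters are illustrative, and an OpenAI API key is assumed to be configured):

```python
import openai  # legacy GPT-3 client; assumes OPENAI_API_KEY is set

# The same classification task, phrased as English instead of an
# encoded input and a probability array.
prompt = (
    'Decide whether the text is a Name, a Place, a Date, or Other.\n\n'
    'Text: "John Smith"\n'
    'Answer:'
)

response = openai.Completion.create(
    engine="davinci",  # GPT-3 base model at the time of writing
    prompt=prompt,
    max_tokens=3,
    temperature=0,  # keep the answer as deterministic as possible
)

# The "output format" is just more English.
print(response.choices[0].text.strip())  # e.g. "Name"
```

No encoding step, no probability arrays: the instructions and the result are both ordinary text.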
Experiencing this first-hand is like discovering modern programming languages after having only used punchcards. At the time of writing, the robustness of this style of invocation is still relatively unproven in production, but it's a glimpse of something that can't be unseen.
There's a good chance that within a few years "prompt engineering" will be common practice: programmers searching for the human-language prompts that coax a general-purpose model into returning the specific results they want.
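What might that search look like? A hedged sketch, treating prompt engineering as trying candidate phrasings and scoring each against a few labeled examples (the templates, examples, and scoring rule below are assumptions for illustration, not an established method; `complete` reuses the legacy GPT-3 call from the sketch above):

```python
import openai  # assumes the same GPT-3 setup as the previous sketch

EXAMPLES = [("John Smith", "Name"), ("Paris", "Place"), ("1999-01-01", "Date")]

# Candidate phrasings of the same request; none is obviously "the" right one.
CANDIDATE_PROMPTS = [
    'Classify the text as Name, Place, or Date.\nText: "{text}"\nAnswer:',
    'Q: Is "{text}" a Name, a Place, or a Date?\nA:',
    'The string "{text}" is best described as a',
]

def complete(prompt):
    response = openai.Completion.create(
        engine="davinci", prompt=prompt, max_tokens=3, temperature=0
    )
    return response.choices[0].text

def accuracy(template):
    """Fraction of labeled examples a prompt template gets right."""
    hits = sum(
        label.lower() in complete(template.format(text=text)).lower()
        for text, label in EXAMPLES
    )
    return hits / len(EXAMPLES)

# "Prompt engineering" as a plain search over human-language programs.
best = max(CANDIDATE_PROMPTS, key=accuracy)
print("best prompt template:", best)
```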