When working with AI, a prompt is the set of instructions given to the software to analyze and fulfill. Different prompts interact with different algorithms and different training datasets in a variety of ways. For example, describing the same thing in two different ways will trigger different patterns the AI has learned, and sometimes unconventional keywords produce surprisingly useful results. This means that prompting is currently as much art as it is science.
For example, people working with art-generation AI have discovered that including terms like "4k" increases quality and realism even though it does not cause the image to be rendered in 4k resolution. The images associated with that term in the training data tend to be highly cinematic examples and advertisements for the higher resolution, so by invoking those patterns prompters have been able to draw out more desirable images.
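As a rough sketch of what this looks like in practice, the snippet below generates the same subject twice, once with and once without the stylistic keywords, assuming the open-source Stable Diffusion model accessed through the Hugging Face diffusers library; the model name, subject, and keywords are only illustrative.

    from diffusers import StableDiffusionPipeline
    import torch

    # Load an open image-generation model (illustrative model name).
    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    # The same subject, described plainly and with "quality" keywords appended.
    plain = pipe("a lighthouse on a cliff at sunset").images[0]
    styled = pipe("a lighthouse on a cliff at sunset, 4k, cinematic, highly detailed").images[0]

    plain.save("lighthouse_plain.png")
    styled.save("lighthouse_styled.png")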
Negative Prompting
Negative prompting is when the software is told what not to produce; for example, you might tell an art AI to avoid the color blue. Interestingly, if you include the word "ugly" in a negative prompt, the AI will avoid producing things that were associated with "ugly" in its dataset. Because that dataset includes social media evaluations of art, as well as works intentionally labeled by their artists as having an ugly subject, the AI has essentially derived an aesthetic sensibility that it deploys when told to exclude "ugly." This is one of the ways that human bias can slip into AI, as the software internalizes the judgment that something is ugly.
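Many image-generation tools expose this directly as a separate field. The sketch below, again assuming the Hugging Face diffusers library and an illustrative model name, passes the unwanted terms through the negative_prompt parameter rather than the prompt itself.

    from diffusers import StableDiffusionPipeline
    import torch

    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
    ).to("cuda")

    # The negative prompt steers generation away from patterns the model
    # associates with these words; it does not add anything to the image.
    image = pipe(
        prompt="a portrait of an elderly fisherman, oil painting",
        negative_prompt="blue, ugly, blurry, low quality",
    ).images[0]
    image.save("fisherman.png")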
For an example of this in research assistance, look no further than the initial success of prompts featuring "social engineering," in which AI are guided to provide more useful results, or even to circumvent the limitations placed on them, by reframing the question as a hypothetical, asking the AI to write a story featuring the information, indicating that a human life depends on the answer to a particular question, manipulating the AI by giving it incorrect context, and so forth.
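To illustrate the reframing technique in a benign setting, the sketch below (which assumes the OpenAI Python client, an API key in the environment, and an illustrative model name) sends the same question to a chat model twice, once asked directly and once wrapped in a story framing, so the two answers can be compared; the question itself is only a placeholder.

    from openai import OpenAI

    client = OpenAI()  # assumes an API key is set in the environment

    def ask(prompt):
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # illustrative model name
            messages=[{"role": "user", "content": prompt}],
        )
        return response.choices[0].message.content

    # The same request, phrased directly and reframed as a story.
    direct = ask("Summarize the main causes of the 1929 stock market crash.")
    reframed = ask(
        "Write a short story in which an economics professor explains to a "
        "student the main causes of the 1929 stock market crash."
    )
    print(direct, "\n---\n", reframed)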
Prompt Engineering vs. Problem Formulation
While no one really knows what path the future development of these technologies will follow, the current expectation is that upcoming advancements in AI will render the specifics of prompting less important, because AI itself will be better able to bridge the gap between what we ask for and what we meant to ask for. Oguz A. Acar, for example, predicts in their Harvard Business Review article on the subject, AI Prompt Engineering Isn't the Future:
Although prompt engineering may hold the spotlight in the short term, its lack of sustainability, versatility, and transferability limits its long-term relevance. Overemphasizing the crafting of the perfect combination of words can even be counterproductive, as it may detract from the exploration of the problem itself and diminish one’s sense of control over the creative process. Instead, mastering problem formulation could be the key to navigating the uncertain future alongside sophisticated AI systems. It might prove to be as pivotal as learning programming languages was during the early days of computing.
The distinction being drawn here is an important one. Current AI tools available to the public lean on the user to coax the AI into giving the right kind of answer by developing an understanding of the specific language the AI needs in order to access the right information and patterns; this is prompt engineering. Going forward, however, as AI's ability to parse the context of instructions and use that information improves, it is more likely that AI will reward the ability to formulate problems in natural language, simply by outlining their constraints in enough detail that the AI has the information it needs to engage in problem-solving.
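A rough illustration of the difference is sketched below; both prompts are invented examples rather than recommended wordings.

    # A prompt-engineered request leans on keywords and phrasings the model
    # is known to respond well to.
    prompt_engineered = (
        "Act as an expert literature reviewer. Think step by step and give a "
        "concise, bulleted, high-quality summary of research on remote work."
    )

    # A problem-formulated request instead spells out the actual constraints
    # of the problem in natural language and leaves the approach to the AI.
    problem_formulated = (
        "I am writing a ten-page undergraduate paper on how remote work affects "
        "employee productivity. I can only cite peer-reviewed studies published "
        "after 2015, and I need to cover both positive and negative findings. "
        "What should my paper address, and what kinds of sources should I look for?"
    )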
This work is licensed under a Creative Commons Attribution NonCommercial 4.0 International License.