AI-Generated Text Could Increase Threats

Nearly universal access to models that deliver human-sounding text in seconds presents a turning point, according to new research from WithSecure (formerly known as F-Secure Business). The research details experiments conducted using GPT-3 (Generative Pre-trained Transformer 3), a family of large language models that use machine learning to generate text.

The experiments used prompt engineering – the practice of discovering inputs to a large language model that yield desirable or useful results – to produce a variety of content the researchers deemed harmful.
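At its simplest, prompt engineering means systematically varying the components of a prompt and observing how the output changes. The following sketch illustrates the idea with an invented template; the personas, tones, and wording are hypothetical examples, not prompts from the WithSecure research.

```python
# Hypothetical illustration of prompt engineering: composing a prompt from
# interchangeable parts and varying one part at a time. None of the text
# below comes from the WithSecure experiments.

def build_prompt(task: str, tone: str, persona: str) -> str:
    """Compose a prompt from interchangeable components."""
    return f"You are {persona}. In a {tone} tone, {task}"

# Varying a single component shows how small input changes can steer
# a model's generated output -- the core idea behind prompt engineering.
variants = [
    build_prompt("summarize this quarterly report.", tone, "a helpful assistant")
    for tone in ("formal", "casual", "urgent")
]

for v in variants:
    print(v)
```

A researcher probing a model for misuse potential would iterate in the same way, swapping in personas, framings, and instructions to see which inputs elicit harmful output.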

Numerous experiments assessed how changes in inputs to the available models affected the synthetic text output. The goal was to identify how AI language generation can be misused through malicious and creative prompt engineering, in hopes the research could be used to direct the creation of safer large language models.

The experiments covered phishing and spear-phishing, harassment, social validation for scams, the appropriation of a written style, the creation of deliberately divisive opinions, and the use of the models to create prompts for malicious text and fake news.

“The fact that anyone with an internet connection can now access powerful large language models has one very practical consequence: it’s now reasonable to assume any new communication you receive may have been written with the help of a robot,” said WithSecure Intelligence Researcher Andy Patel, who spearheaded the research. “Going forward, AI’s use to generate both harmful and useful content will require detection strategies capable of understanding the meaning and purpose of written content.”

The responses from the models in these use cases, along with the general development of GPT-3 models, led the researchers to several conclusions, including (but not limited to):

  • Prompt engineering will develop as a discipline, as will malicious prompt creation.
  • Adversaries will develop capabilities enabled by large language models in unpredictable ways.
  • Identifying malicious or abusive content will become more difficult for platform providers.
  • Large language models give criminals the ability to make the targeted communications used in an attack more effective.

“We began this research before ChatGPT made GPT-3 technology available to everyone,” Patel said. “This development increased our urgency and efforts, because, to some degree, we are all Blade Runners now, trying to figure out if the intelligence we’re dealing with is ‘real,’ or artificial.”

The full research is available at https://labs.withsecure.com/publications/creatively-malicious-prompt-engineering.