With prompt engineers among the workers most in demand in the wake of generative AI’s arrival in the enterprise, it was inevitable that someone would investigate whether their role, too, could be automated, or at least facilitated, by AI.
And, indeed, a recent study focused on how to write the best prompts for a large language model (LLM) to solve mathematical problems has found that another AI gets better results than a human. The study sought to determine whether human-generated “positive thinking” prompts—such as “this will be fun!” or “take a deep breath and think”—produce better responses. The results were mixed across different LLMs.