Large Language Models (LLMs)
Large Language Models are algorithms that can generate text based on input prompts. They are designed to learn the statistical patterns in language and to use these patterns to generate coherent and grammatically correct text. LLMs are a specific type of language model that uses a particular neural network architecture, the Transformer, to generate text.
Large Language Models such as ChatGPT have been criticized because (1) copyright-protected data has been used for training, (2) there is a lack of transparency and explanation capability (we literally don't understand what the billions or trillions of parameters inside an LLM mean), (3) they replace human cognitive activity such as thinking and writing, and (4) they may facilitate the early introduction of an Artificial General Intelligence (AGI) or Artificial Superintelligence (ASI).
We would like to recommend the work of Stuart Russell, Max Tegmark, Gary Marcus and, more recently, Geoff Hinton on this topic.
It is also a reality that LLMs are being used by billions of people around the world, including for psychological purposes. As clinical psychologists, we have a duty to assist and to provide information.
What is Prompt Engineering?
Prompt engineering is the process of designing and refining prompts that are used to generate responses from language models such as ChatGPT. Language models are algorithms that can generate text based on input prompts, and prompt engineering involves crafting the prompts in a way that produces desired outputs.
Prompt engineering can involve analysing existing data to identify common patterns and themes that can be used in prompts, as well as tailoring prompts to suit the specific objectives and target audience. Effective prompt engineering can result in language models that generate high-quality and relevant responses to a wide range of tasks and applications, such as generating text for chatbots, language translation, content creation, and more.
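As a simple illustration of the idea, the sketch below builds a reusable prompt template whose slots (role, audience, task, constraints) are filled in before the prompt is sent to a model. The template wording, the slot names, and the generate() placeholder are our own illustrative assumptions rather than a prescribed recipe; generate() would be replaced by a call to whichever LLM service is actually used.

```python
# A minimal sketch of prompt engineering: a reusable template whose slots
# (role, audience, task, constraints) are filled in before the prompt is
# sent to a language model. generate() is a placeholder for any LLM API.

PROMPT_TEMPLATE = (
    "You are {role}.\n"
    "Audience: {audience}\n"
    "Task: {task}\n"
    "Constraints: {constraints}\n"
)

def build_prompt(role: str, audience: str, task: str, constraints: str) -> str:
    """Fill the template so the model receives explicit context, goals and limits."""
    return PROMPT_TEMPLATE.format(
        role=role, audience=audience, task=task, constraints=constraints
    )

def generate(prompt: str) -> str:
    """Placeholder for a call to an LLM service; replace with a real API call."""
    raise NotImplementedError

if __name__ == "__main__":
    prompt = build_prompt(
        role="a careful assistant working for a psychology practice",
        audience="adult clients with no technical background",
        task="Explain in plain language what a sleep diary is and why it can help.",
        constraints="Be concise, avoid jargon, and do not give medical advice.",
    )
    print(prompt)  # The same template can be refined and reused across tasks.
```

Refining such a template, for instance by tightening the constraints or adding worked examples, and then comparing the resulting outputs is essentially what prompt engineering involves in practice.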
Language models can sometimes produce "hallucinations": text that appears fluent but is inaccurate, nonsensical, or irrelevant to the prompt. Such hallucinations are inappropriate in the context of psychological AI. They can occur for a few reasons:
Insufficient training data
Language models require large amounts of high-quality training data to learn how to generate coherent and relevant text. If the training data is insufficient or of poor quality, the language model may not have learned the necessary patterns and structures to produce accurate responses.
Overfitting
If a language model is trained too much on a specific type of data or prompt, it may become overfitted and produce overly specific and irrelevant responses to other types of prompts.
Ambiguity in language
Language is often ambiguous, and the same word or phrase can have multiple meanings depending on the context; the word "depressed", for example, can describe a clinical condition, a passing mood, or an economic downturn. Language models may struggle to disambiguate these meanings and produce responses that are inconsistent with the intended one.
Inherent limitations of AI
While language models have made significant progress in recent years, they still have inherent limitations due to the complexity of human language and the challenges of simulating human-level intelligence.
Overall, the "hallucinations" produced by language models highlight the need for monitoring these systems to ensure they are generating accurate and relevant responses.
What is Prompt Engineering for Psychology (PEP)?
General-purpose AI can produce clever answers, but often without consistency or accountability. JoBot™ takes a different approach. It is trained on the observed practice of a specific professional (what they say, decide, and do) so that it can propose what that professional would likely do next. Every proposed output is then checked against a formal rule engine. At the core of this engine are rules expressed in formal logic, which capture relevant laws, codes of ethics, and organisational policies. Because these rules are written in a clear, testable logical form, they can be injected into JoBot™’s prompt, guiding its reasoning during generation and blocking any output that would breach professional obligations. This makes the AI not only predictable but also verifiably compliant with the standards that govern real-world practice.
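To make the idea of rule checking more concrete, the sketch below shows one possible shape for such an engine: each rule pairs a plain-language statement of an obligation with a predicate that tests a proposed output, violating outputs are blocked, and the statements can be rendered as text for injection into the prompt. The rule names, wording, and predicates are our own illustrative assumptions, not JoBot™'s actual rule engine, which is expressed in formal logic.

```python
# An illustrative sketch of rule-guided generation: each rule couples a
# plain-language obligation with a predicate over a proposed output. Outputs
# that breach any rule are blocked, and the obligations can be prepended to
# the prompt to steer generation. This is not JoBot's actual rule engine.

from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Rule:
    name: str
    statement: str                    # The obligation in plain language.
    violates: Callable[[str], bool]   # True if the output breaches the rule.

RULES: List[Rule] = [
    Rule(
        name="no_diagnosis",
        statement="The assistant must not state or imply a clinical diagnosis.",
        violates=lambda text: "you have" in text.lower() and "disorder" in text.lower(),
    ),
    Rule(
        name="confidentiality",
        statement="The assistant must not disclose identifying client information.",
        violates=lambda text: "client's name is" in text.lower(),
    ),
]

def rules_as_prompt() -> str:
    """Render the obligations so they can be injected into the model's prompt."""
    return "\n".join("- " + r.statement for r in RULES)

def check_output(text: str) -> list[str]:
    """Return the names of any rules a proposed output would breach."""
    return [r.name for r in RULES if r.violates(text)]

if __name__ == "__main__":
    proposed = "Based on what you describe, you have an anxiety disorder."
    print(check_output(proposed))  # ['no_diagnosis'] -> this output would be blocked
    print(rules_as_prompt())       # Obligations, ready to prepend to the prompt.
```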
Being aligned with a trusted professional means that JoBot™ does not drift toward vague generalisations or untested speculation. Instead, it learns directly from the professional’s own corpus while remaining constrained by formal logic rules that express their duties and obligations. In this way, JoBot™ becomes a structured, auditable partner: anchored both in the human professional’s real behaviour and in the external frameworks that guarantee safe practice. Clients and organisations can therefore trust that JoBot™ will act as an extension of the professional, guided by explicit logic and bound by the same standards that define responsible work.
Please email us for more information: jobot@jobot.ai
