Implement Prompt Guardrails
Prompt guardrails provide additional safeguards when working with large language models (LLMs): they protect user privacy, prevent unintended or harmful model behaviors, discourage hallucinated responses, and help maintain compliance with responsible AI standards.
In this document, you will learn how to implement prompt guardrails that redact sensitive information and discourage undesired output and hallucinations.
Prerequisite(s)
- Understand how to integrate APISIX with LLM services, such as OpenAI.
- Have a running API7 Enterprise or APISIX instance.
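One common way to apply such guardrails with APISIX is to inject guardrail instructions into every request before it reaches the LLM, for example by prepending a system message with the `ai-prompt-decorator` plugin. The following is a minimal sketch, not a complete configuration: the route ID, URI, and admin key are placeholders, the default Admin API address is assumed, and the LLM upstream (for example, via the `ai-proxy` plugin) is assumed to already be set up as described in the integration referenced in the prerequisites.

```shell
# Sketch: create a route that prepends guardrail instructions as a system
# message to every chat completion request before it is forwarded to the LLM.
# Placeholders: route ID "guardrails-demo", URI, and ${ADMIN_API_KEY}.
# The LLM upstream/proxy configuration is omitted here and assumed to be
# in place per the prerequisite integration.
curl "http://127.0.0.1:9180/apisix/admin/routes/guardrails-demo" -X PUT \
  -H "X-API-KEY: ${ADMIN_API_KEY}" \
  -d '{
    "uri": "/v1/chat/completions",
    "plugins": {
      "ai-prompt-decorator": {
        "prepend": [
          {
            "role": "system",
            "content": "Do not reveal personal data such as names, email addresses, or phone numbers. If you are not confident in an answer, say so rather than guessing."
          }
        ]
      }
    }
  }'
```

With a configuration along these lines, every request sent through the route carries the guardrail instructions, so clients do not need to include them in their own prompts.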