Demonstrate-Search-Predict, better known as DSPy, is a now-famous Stanford paper focused on optimizing how LLMs are prompted. The basic idea is to provide examples instead of instructions.
Ax supports DSPy and lets you set examples on each prompt. It also lets you run an optimizer, which runs the prompt with inputs from a test set and validates the outputs against that same set. In short, the optimizer helps you capture good examples across the entire tree of prompts your workflow is built from.
Pick a prompt strategy
There are various prompts available in Ax; pick one based on your needs (a short sketch follows the list below).
Generate - Generic prompt that all other prompts inherit from.
ReAct - For reasoning combined with multi-step function calling.
ChainOfThought - Improves performance by reasoning before providing the answer.
RAG - Uses a vector database to add context and improve performance and accuracy.
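For illustration, here is a minimal sketch of pulling one of these in and creating the AI client it runs against. It assumes a recent @ax-llm/ax release, where these classes are exported with an Ax prefix (AxGen, AxChainOfThought, and so on); check the exports of your version.

```typescript
// Minimal sketch (assumes a recent @ax-llm/ax release; export names may differ by version)
import { AxAI, AxChainOfThought } from '@ax-llm/ax';

// One AI client is shared by whichever prompt strategy you pick
const ai = new AxAI({
  name: 'openai',
  apiKey: process.env.OPENAI_APIKEY as string,
});
```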
Create a signature
A signature defines the task you want to do, the inputs you’ll provide, and the outputs you expect the LLM to generate.
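For example, a summarization task could be written as the signature below. This is a sketch: the field names textToSummarize and shortSummary are made up for illustration, and the quoted hint after the output field is an optional description of what you expect back.

```typescript
// Sketch: a signature describing the task, its inputs, and its expected outputs
// (field names are hypothetical; pick names that describe your own task)
const summarize = new AxChainOfThought(
  `textToSummarize -> shortSummary "summarize in 5 to 10 words"`
);
```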
The next thing, optional but the most important for improving the performance of your prompts, is to set examples. When we say “performance,” we mean the ratio of times the LLM does exactly what you expect to the times it fails.
Examples are the best way to communicate to the LLM what you want it to do. The patterns captured in high-quality examples guide the LLM far more effectively than instructions alone.
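As a rough sketch, attaching examples to the prompt above looks like this. The setExamples call is taken from the Ax API as I understand it, and the example data is invented purely for illustration:

```typescript
// Sketch: a few high-quality examples that show the LLM the pattern to follow
// (the data here is invented purely for illustration)
summarize.setExamples([
  {
    textToSummarize:
      'The quarterly report shows revenue grew 12% while operating costs stayed flat...',
    shortSummary: 'Revenue up 12%, costs flat this quarter',
  },
  {
    textToSummarize:
      'The city council voted to extend the bike lane network into the northern suburbs...',
    shortSummary: 'Council extends bike lanes to northern suburbs',
  },
]);
```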
Use this prompt
You are now ready to use this prompt in your workflows.
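Calling it looks roughly like the sketch below. Note that whether the AI client is passed to forward or to the constructor has changed between Ax versions, so treat this as an outline rather than the exact call.

```typescript
// Sketch: running the prompt against real input
const res = await summarize.forward(ai, {
  textToSummarize: 'Paste the long article or document text here...',
});

console.log(res.shortSummary);
```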
Easy enough! This is all you need.
DSPy prompt tuning
What if you want more performance, or want to run this with a smaller model? You may have heard that you can tune your prompts with DSPy. Yes, this is true. In short, you use a big LLM to generate better examples for every prompt in your entire flow of prompts.
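Here is a sketch of what that can look like with Ax's bootstrap few-shot optimizer. The class and method names (AxBootstrapFewShot, compile, setDemos) and the option shapes are assumptions based on recent Ax releases, the metric's exact type is version-dependent so it is left loosely typed, and the training data is invented for illustration:

```typescript
// Sketch: tuning the prompt with a bootstrap few-shot optimizer
// (names and options are assumptions; consult the Ax docs for your version)
import { AxBootstrapFewShot } from '@ax-llm/ax';

// A small labelled set used to generate and validate candidate examples (made-up data)
const trainingData = [
  { textToSummarize: 'Long article one...', shortSummary: 'Expected summary one' },
  { textToSummarize: 'Long article two...', shortSummary: 'Expected summary two' },
];

// Metric: did the prediction match the labelled output?
// (exact string match is crude; a real metric would usually be fuzzier)
const metric = ({
  prediction,
  example,
}: {
  prediction: Record<string, unknown>;
  example: Record<string, unknown>;
}) => prediction.shortSummary === example.shortSummary;

const optimizer = new AxBootstrapFewShot({
  ai, // use a big, capable model here to generate the examples
  program: summarize,
  examples: trainingData,
});

// compile() runs the program over the set and keeps the demos that pass the metric
const result = await optimizer.compile(metric);

// Load the winning demos back into the prompt, then run it on a smaller model
summarize.setDemos(result.demos);
```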