Role
As a [anything, really: Joe Rogan, a React developer, a JSON generator, you name it]
- anchors the conversation in a persona and frames the context for everything that follows
Instruction
- sets how the model should act and reason
- provides rules
- should contain information about the date, location, and a summary of previous conversations (if needed to handle the instruction properly)
- without such information, the model is limited by its knowledge cutoff
Context
- includes a set of data that can be manually provided, generated by the model, or dynamically added by the application's logic
- should be clearly separated from the rest of the prompt to avoid misinterpretation
- e.g. using a separator such as `###` (or any other sequence unlikely to appear in the data); without separation, parts of the context could be misunderstood as parts of the instruction
- can be generated by the model itself, as a result of reflection or a chain of thought
- unnecessary noise should be avoided
- the context should be kept as short as it can be without losing information, to avoid excessive token costs
- context should be described, e.g., as `next meetings in the calendar [...]`
- if the context does not contain sufficient information, this should be indicated, together with what to do instead:
- the model should use its own knowledge base
- the model should redirect to human contact
- etc.
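The separation and insufficient-context rules above can be sketched as a small prompt builder. The function name, the `###` delimiter, the fallback wording, and the sample calendar entries are all illustrative, not from any particular SDK:

```typescript
// Minimal sketch: keep the instruction, the described context, and the
// question apart with a `###` separator, and tell the model what to do
// when the context is insufficient. All names and data are illustrative.
function buildPrompt(
  instruction: string,
  contextLabel: string,
  context: string,
  question: string
): string {
  return [
    instruction,
    "If the context below is insufficient, say so instead of guessing.",
    "###",
    `${contextLabel}:`,
    context,
    "###",
    question,
  ].join("\n");
}

const prompt = buildPrompt(
  "Answer using only the provided context.",
  "Next meetings in the calendar",
  "2024-06-03 10:00 Standup\n2024-06-03 14:00 Design review",
  "When is the design review?"
);
```

The separator only has to be a sequence that is unlikely to occur inside the context data itself; `###` is a common convention, not a requirement.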
- prompts can be composed (one prompt for summarising the website, another one to deal with the information)
- in such cases the combined prompts should be well-defined to avoid unwanted behavior (due to the non-deterministic nature of LLMs)
- if one prompt fails, the subsequent ones will fail too (use `try..catch`-style error handling)
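A composed, two-prompt flow with the suggested `try..catch` might look like the sketch below; `callModel` is a hypothetical stand-in for a real LLM client (which would normally be asynchronous and awaited):

```typescript
// Hypothetical stand-in for an LLM call; a real client would be async.
function callModel(prompt: string): string {
  return `response to: ${prompt.slice(0, 40)}`;
}

// Two composed prompts: the first summarises a website, the second works
// on that summary. If the first call throws, try..catch stops the chain
// cleanly instead of feeding a broken result into the second prompt.
function summariseThenAnswer(pageText: string, question: string): string {
  try {
    const summary = callModel(
      `Summarise the following website:\n###\n${pageText}`
    );
    return callModel(
      `Using this summary:\n###\n${summary}\n###\n${question}`
    );
  } catch (err) {
    throw new Error(`prompt chain failed: ${(err as Error).message}`);
  }
}

const answer = summariseThenAnswer(
  "Example page text",
  "What is it about?"
);
```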
Examples
- guide the behavior of a given LLM
- focus
- speech style
- response format
- dataset classification
- etc.
- LLMs can learn from patterns and subtle changes in the examples provided (few-shot learning, in-context learning)
- Examples can be instructions at the same time
Respond in JSON format: {"name": "Krystian", "email": "krystian@example.com"}. [...]
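A few-shot prompt in which the examples double as the format instruction might look like this. The second and third contacts (Anna, John) are invented for the sketch; only the Krystian example comes from the text above:

```typescript
// The worked pairs teach the JSON response format by pattern alone;
// here the examples *are* the instruction. Anna and John are invented.
const fewShotPrompt = [
  "Extract contact data.",
  'Input: "Hi, I am Krystian, reach me at krystian@example.com"',
  'Output: {"name": "Krystian", "email": "krystian@example.com"}',
  'Input: "This is Anna, you can write to anna@example.com"',
  'Output: {"name": "Anna", "email": "anna@example.com"}',
  'Input: "John here, john@example.com"',
  "Output:",
].join("\n");
```

Ending the prompt with a bare `Output:` nudges the model to complete the pattern in the same JSON shape.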
Question
- the query that the model is to process, e.g.:
- a data set for transformation,
- a question,
- a command,
- a simple message as part of a longer conversation
- might be treated by the model as an instruction to execute, which may not be desired in some cases
Completion (answer)
- not a part of the prompt itself, but still a part of the token window
- in a chat setting, the answer is passed along with the subsequent messages (the whole conversation is re-sent on each turn)
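How a completion re-enters the token window in a chat can be sketched as follows; the `Message` shape mirrors common chat APIs but is not tied to any specific one:

```typescript
// Each completed turn is appended to the history; the whole array is
// what the model receives on the next call, so the previous answer
// becomes part of the token window.
type Message = {
  role: "system" | "user" | "assistant";
  content: string;
};

const history: Message[] = [
  { role: "system", content: "You are a helpful assistant." },
];

function addTurn(userText: string, assistantText: string): void {
  history.push({ role: "user", content: userText });
  history.push({ role: "assistant", content: assistantText });
}

addTurn("Hi!", "Hello! How can I help?");
// history now holds 3 messages; all of them go with the next request
```

This is also why long conversations get expensive: every earlier completion is billed again as input tokens on each new turn.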