Due to the non-deterministic nature of LLMs, executing the same instructions repeatedly may yield different results (unlike pure functions, which always return the same output for the same input).
This behavior is undesirable for application logic that performs specific tasks.
Additional challenges arise when prompts are modified and when input data is unpredictable (for example, in a chatbot you cannot know in advance what the user will type).
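One common way to contain the unpredictable-input side is to guard the prompt template defensively. The sketch below is a minimal illustration only: the `sanitize_user_input` helper, the character limit, and the prompt text are assumptions, not part of any particular framework or API.

```python
MAX_INPUT_CHARS = 4_000  # assumed limit; tune to the model's real context budget

def sanitize_user_input(raw: str) -> str:
    """Guard the prompt template against arbitrary chat input.

    Strips non-printable control characters and truncates overly long
    messages so unpredictable user text cannot break the prompt.
    """
    cleaned = "".join(ch for ch in raw if ch.isprintable() or ch in "\n\t")
    return cleaned[:MAX_INPUT_CHARS].strip()

# Example: raw chat input with stray control characters and padding.
raw_message = "   Hello!\x00 Please summarize my order history...   "
prompt = (
    "Answer the user's request in one short paragraph.\n"
    f"User message: {sanitize_user_input(raw_message)}"
)
print(prompt)
```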
WARNING
You can only influence the model's behavior; you cannot guarantee that the generated result will always match your assumptions.
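Because the generated result can never be fully trusted, application code typically validates it and retries a bounded number of times before giving up. The sketch below assumes a hypothetical `call_llm(prompt)` helper standing in for whatever client the application actually uses; the retry count and the expected JSON shape are illustrative, not prescriptive.

```python
import json

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a real LLM client call."""
    raise NotImplementedError("wire this to your actual model client")

def generate_validated(prompt: str, max_attempts: int = 3) -> dict:
    """Ask the model for JSON and verify it before using it.

    Any given attempt may return malformed JSON or omit required fields,
    so each response is parsed and checked; after `max_attempts` failures
    the caller gets an explicit error instead of a silently bad result.
    """
    last_error: Exception | None = None
    for _ in range(max_attempts):
        raw = call_llm(prompt)
        try:
            data = json.loads(raw)
            if not isinstance(data, dict) or "answer" not in data:
                raise ValueError("missing required 'answer' field")
            return data
        except (json.JSONDecodeError, ValueError) as exc:
            last_error = exc  # retry: the next attempt may well differ
    raise RuntimeError(f"model output failed validation: {last_error}")
```

The retry loop leans on the same non-determinism described above: a response that fails validation once may well succeed on the next attempt, while the attempt cap keeps the application from looping forever on a prompt the model simply cannot satisfy.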