Model settings control the behavior of the AI model you select to power your application. These settings give you the flexibility to fine-tune the output and response style of your AI.
The temperature setting adjusts the randomness of the model's responses. A temperature closer to 0 makes the model more likely to return the same response each time, while a temperature closer to 1 produces more varied, creative responses. Note that setting the temperature too high also increases the model's tendency to hallucinate.
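As an illustration, here is a minimal sketch of the difference in practice. It uses the OpenAI Python SDK purely as an example (the document does not specify a provider; most model APIs expose an equivalent temperature parameter), and assumes an API key is available in the environment:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

prompt = "Suggest a name for a note-taking app."

# temperature=0: responses are largely repeatable across runs
deterministic = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": prompt}],
    temperature=0,
)

# temperature=0.9: responses vary more from run to run
creative = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": prompt}],
    temperature=0.9,
)

print(deterministic.choices[0].message.content)
print(creative.choices[0].message.content)
```

Running the script several times should show the first call returning near-identical answers, while the second call's answers change more noticeably between runs.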
The other setting to configure is the maximum response size: the maximum number of output tokens the model is allowed to generate. If your app requires longer responses, increase the token limit; if it only produces short responses, a smaller limit is sufficient.
Note that models will differ in the maximum number of response tokens available.
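Continuing the same illustrative sketch (again assuming the OpenAI Python SDK, which is not specified in the original), the output limit can be set per request. The chosen cap of 256 tokens is only an example value:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Cap the response at 256 output tokens; generation stops once the limit is reached.
response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Summarize the benefits of unit testing."}],
    max_tokens=256,
)

print(response.choices[0].message.content)

# A finish_reason of "length" indicates the response was cut off by the token cap.
print(response.choices[0].finish_reason)
```

Checking the finish reason is a simple way to detect whether your limit is too small for the responses your app needs.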