LLM Configurations

LLM Configurations are containers that bring together all components of working with a Large Language Model:

  1. LLM API Client: The API client to use for this specific configuration; it holds the actual credentials used to access the vendor or service.
  2. Completion Settings: Settings applied to every generation request, such as temperature, top-k, and token limits.
  3. Model Roles: The roles applied to any generation request template, such as “Human” or “Assistant”.
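
The three components above can be pictured as a single container object. The sketch below is illustrative only, assuming hypothetical names (`LLMApiClient`, `CompletionSettings`, `LLMConfiguration`) that are not part of the original text:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of an LLM Configuration container; all names are illustrative.

@dataclass
class LLMApiClient:
    vendor: str   # e.g. "openai" -- which vendor or service to call
    api_key: str  # the actual credential used to access that service

@dataclass
class CompletionSettings:
    # Settings applied to every generation request made with this configuration.
    temperature: float = 0.7
    top_k: int = 40
    max_tokens: int = 1024

@dataclass
class LLMConfiguration:
    client: LLMApiClient
    settings: CompletionSettings = field(default_factory=CompletionSettings)
    # Role labels substituted into generation request templates.
    roles: dict = field(
        default_factory=lambda: {"user": "Human", "model": "Assistant"}
    )

# Assemble one configuration from its three components.
config = LLMConfiguration(
    client=LLMApiClient(vendor="example-vendor", api_key="<redacted>"),
)
```

Any part of the UI that talks to an LLM could then accept a single `LLMConfiguration` value rather than separate credential, settings, and role arguments.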

LLM Configurations are used throughout the UI to configure every interaction with an LLM.