LLM Configurations
LLM Configurations are containers that bring together all components of working with a Large Language Model:
- LLM API Client: the API client used by this configuration; it holds the actual credentials for accessing the vendor or service.
- Completion Settings: settings applied to every generation request, such as temperature, Top-K, and token limits.
- Model Roles: the roles applied to any generation request template, such as “Human” or “Assistant”.
LLM Configurations are used throughout the UI to configure any interaction with an LLM.
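The three components above can be sketched as a single container object. This is a minimal illustrative sketch, not the actual implementation; the class and field names (`LLMConfiguration`, `CompletionSettings`, `api_client`, etc.) are assumptions chosen for clarity.

```python
from dataclasses import dataclass, field

@dataclass
class CompletionSettings:
    """Settings applied to every generation request (hypothetical fields)."""
    temperature: float = 0.7
    top_k: int = 40
    max_tokens: int = 512

@dataclass
class LLMConfiguration:
    """Container bundling the client, settings, and roles for one LLM setup."""
    # Reference to a configured API client; the credentials live with the client,
    # not in the configuration itself.
    api_client: str
    settings: CompletionSettings = field(default_factory=CompletionSettings)
    # Role labels substituted into the generation request template.
    roles: dict = field(default_factory=lambda: {"user": "Human", "assistant": "Assistant"})

# Example: a configuration pointing at a hypothetical client named "openai-prod".
config = LLMConfiguration(api_client="openai-prod")
print(config.roles["user"])  # → Human
```

Bundling these pieces into one named configuration means any part of the UI that needs to talk to an LLM can reference a single object rather than assembling credentials, settings, and roles separately.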