Montag has scriptable components, which can be called in three ways:

  1. Raw prompt hook: called on the raw prompt before it is sent to the model
  2. Raw response hook: called on the model's response before it is sent to the user
  3. Web request: called as an HTTP POST request

Scripts should be written in the Tengo syntax; the full syntax documentation is available on GitHub.

Built-in functions

To enable interoperability within Montag, several custom built-in functions and variables have been incorporated to make scripts more useful:

montagRun(funcName string, input string) string This function calls AI functions that you have specified in the AI functions section of the UI. Useful for scripted functions that might make use of additional LLM inputs.

montagManagedRun(funcName string, input string) map[string]string This function calls AI functions that you have specified in the AI functions section of the UI, similar to montagRun, but it returns an error to the script instead of failing the whole script (hence “managed”). Its return value has two fields: result and error; the error field will be a non-empty string if there is an error with the call.
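For example, a sketch of the managed pattern (the AI function name “summarizer” is hypothetical):

fmt := import("fmt")

montagResponse := montagUserMessage // pass through by default

out := montagManagedRun("summarizer", montagUserMessage)
if out["error"] == "" {
    montagResponse = out["result"]
} else {
    fmt.println("summarizer failed: " + out["error"])
}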

montagMakeHttpRequest(method string, url string, headers map[string]string, body string) map{status: int, response: string} This function makes an HTTP request to an external source and returns the status code and the response body as a string.
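Example usage (the endpoint URL here is a placeholder):

json := import("json")

resp := montagMakeHttpRequest("GET", "https://example.com/api/items", {"Accept": "application/json"}, "")
if resp["status"] == 200 {
    items := json.decode(resp["response"])
}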

montagAddToHistory(role string, content string) int This method will add an entry to the prompt history, to be rendered by the template (if enabled in the template) later in the execution chain. The returned value can be ignored.

montagAddToContext(string) This method adds an entry to the context object of the prompt, to be rendered later in the template (if enabled).
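For example, a sketch that seeds the history and context before the template renders (the role names "user" and "assistant" and the strings themselves are illustrative assumptions):

montagAddToHistory("user", "What is the deployment process?")
montagAddToHistory("assistant", "Deployments run through the CI pipeline.")
montagAddToContext("Deployment runbook, section 3")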

montagSendFile(title string, filename string) string This method will send a file to Slack and return the URL of the file.

montagKV(operation string, key string, value string) This function allows you to GET and SET arbitrary data in a K/V store (the local DB). Example usage:

fmt := import("fmt")

montagKV("set", "KeyFoo", "Bar")
val := montagKV("get", "KeyFoo")  // "Bar"
val2 := montagKV("get", "Bar")    // "Bar" was set as a value, not a key, so nothing is found


montagSendMessage(string) This method enables you to send messages to the Slack chat that the bot is currently engaged in (BETA).

montagGetSnippet(slug string) This function enables you to fetch a snippet from the Snippet store in the UI.
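For example, a snippet can be fetched and posted to the chat (the slug "greeting" is hypothetical; montagSendMessage is BETA, as noted above):

greeting := montagGetSnippet("greeting")
montagSendMessage(greeting)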

montagGetSecret(name string) string Will return an unencrypted secret to be used in the script. Secrets are encrypted at rest.

montagVectorSearch(namespace string, numResults int, query string) Will search the vector DB of the bot, in the namespace that is supplied. The namespace must be in the Allowed Namespaces list of the bot running the script, and only has access to the vector DB that the bot uses for its standard context lookups.
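Example usage (the namespace "docs" is hypothetical and must be on the bot's Allowed Namespaces list; this sketch assumes the call returns an iterable of text results):

results := montagVectorSearch("docs", 3, montagUserMessage)
for i, r in results {
    montagAddToContext(r)
}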

Montag Specific Variables

Tengo scripts do not have return values; instead, Montag will pluck a set of response variables from the script after it has been executed. The variables available are:

montagUserMessage: This is the user input message as a string

montagUserHistory: The current user history; an array of maps, each containing the role and the message

montagResponse: This is the response that will be sent to the user or to the model

montagOutputs: This is a catch-all map you can use to return multiple output values

montagOverride: This is only used in the raw prompt hook and can be used to interrupt the command chain to send a response directly to the end-user, bypassing the model.

montagContext: A list of the data in the context array sent to the LLM (only available in Response Hooks)

montagContextTitles: A list of the titles (usually URLs) in the context array sent to the LLM (only available in Response Hooks)

montagResources: A list of resource objects that have been provided by any Resource Expanders running on the bot
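For example, a response hook might append the context titles as a sources footer (this sketch assumes montagResponse is pre-populated with the model's response in response hooks):

text := import("text")

if len(montagContextTitles) > 0 {
    montagResponse = montagResponse + "\n\nSources:\n- " + text.join(montagContextTitles, "\n- ")
}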

An example

Below is an example of a raw prompt hook that discovers Jira links and sends them to an AI function called “bddHelper”, returning the output of that AI function as an initial response before continuing.

fmt := import("fmt")
json := import("json")
text := import("text")
base64 := import("base64")

msg := montagUserMessage

// pass through
montagResponse := msg
montagOverride := ""
montagOutputs := {
    "foo": "bar"
}
user := montagGetSecret("JIRAUsername")
token := montagGetSecret("JIRAToken")
combined := user + ":" + token
basicAuth := "Basic " + base64.encode(combined)
headers := {"Authorization": basicAuth, "Content-Type": "application/json"}
endpoint := ""

if text.contains(msg, "") {
    // Get URL from message
    fmt.println("detected JIRA link")
    found := text.re_find("[-a-zA-Z0-9()@:%_.~#?&//=]*", msg, 1)
    // fmt.println(found)
    ticketURL := found[0][0]["text"]
    // fmt.println(ticketURL)

    ticketIDs := text.re_find("[A-Z]+-[0-9]+", ticketURL, 1)
    url := endpoint+ticketIDs[0][0]["text"]+"?fields=description"

    // fmt.println(url)

    details := montagMakeHttpRequest("GET", url, headers, "")
    // fmt.println(details)

    body := details["response"]

    asJson := json.decode(body)
    description := asJson["fields"]["description"]


    bddHelperOutput := montagRun("bddHelper", string(json.encode({"Input": description, "Meta": {}})))
    montagOverride = bddHelperOutput
}