AI Developer Portal Intro
If you are on a platform team and you want to make AI available to your developers, it’s safe to assume they will want best-in-class tooling to write their AI-enabled applications. For example, they may wish to write their code in Python locally, and perhaps even use a toolkit like Langchain to build their applications.
These tools have safe defaults and clients for popular AI vendors such as OpenAI and Anthropic, so they are easy to get started with.
However, as a platform engineer, you worry that your developers could create liability risk if they were to utilise sensitive company data, whether confidential internal data or even more sensitive customer data, in their applications without knowing whether that data may leak into a vendor’s training set, or whether it is even allowed to pass the edges of your corporate firewall.
That’s what we are trying to solve with the Montag AI Developer Portal. It’s a small implementation, but it provides:
- A way for engineers to define which data sources (e.g. text collections) they wish to make use of
- A way for them to select the correct LLM to use with that data
- A way for them to monitor the usage of those applications
- A way for them to see the history of that application’s prompts
- A way for administrators to gate which applications are allowed and for what purpose, with direct approval flows to ensure auditability
- A way for administrators to quickly cut off access if a problem is identified
Setting up the portal
There are three main steps needed to get the developer portal ready for AI Developer users.
Step 1: Create your data sources
Data Sources in the Developer Portal are your Text Collections that have been placed into the Global Workspace. Let’s say we have three Text Collections, and we will assign each of them a different privacy tier:
- Our Documentation Collection: Public
- Our Internal Support Messages: Confidential
- Customer Use Case Database: PII
Once these are created, we’re ready to move on to step two…
Step 2: Create your LLM Configurations
These are the LLM clients and configurations that you, as a platform team, wish to make available to the company. They are the same LLM Configs we would use to set up a Bot or an AI Function.
In this guide, let’s assume you have three different AI vendors:
- OpenAI: For Public data
- OpenAI Enterprise: For Confidential data
- A locally running instance of Llama-2-16b: For PII data
The privacy settings for each of these LLM Configs are set in the LLM Client configuration (a sub-object of the LLM Config), which you can find just under “LLM Configurations” in the left-hand nav.
Step 3: Create a developer account
Go to the Access Controls page and create a new Access Control. Give the account an email address and a password, limit it to its own namespace, and then assign it the “AI Developer” role.
That’s it, your Portal is ready to use!
Using the Portal
Step 1: Log in as the developer and create an Application
When the developer logs in they will first see an empty dashboard, with an invitation to create a new application. In the nav, they can see the assets available to them; these have read-only permissions, so the developer can investigate the setup of the platform and read descriptions, but that’s it.
The developer can then create an application, either from the Applications view or by selecting “Create Application” in the dashboard.
When they create an application, they will be presented with a form that asks them to select the data sources and the LLM they wish to use.
- Data Sources: These are your Text Collections. This selection is optional; they may not wish to query any of your text collections for their AI project, but if they do, selecting them here enables explicit access for this Application.
- LLM: The data sources selected in the drop-down determine which LLMs are available to this application. Even if the user mixes a public and a more private data source, only the LLM rated for the highest privacy tier among the selections becomes selectable in the LLM dropdown, and it will be the only usable model (see the sketch after this list).
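To make that selection rule concrete, here is a minimal sketch of the tier-resolution logic, using the three tiers and vendors from this guide. The names, ordering, and data structures are illustrative; the portal’s internal representation may differ:

```python
# Illustrative only: privacy tiers ordered from least to most restrictive.
TIER_ORDER = ["Public", "Confidential", "PII"]

# Hypothetical mapping of LLM Configs to the privacy tier they are rated for,
# mirroring the three vendors set up in Step 2.
LLM_TIERS = {
    "OpenAI": "Public",
    "OpenAI Enterprise": "Confidential",
    "Llama-2-16b (local)": "PII",
}

def required_tier(selected_source_tiers: list[str]) -> str:
    """The app must use an LLM rated for the most restrictive selected tier."""
    return max(selected_source_tiers, key=TIER_ORDER.index)

def selectable_llms(selected_source_tiers: list[str]) -> list[str]:
    """Only the LLM(s) rated for the required tier remain selectable."""
    needed = required_tier(selected_source_tiers)
    return [name for name, tier in LLM_TIERS.items() if tier == needed]

# Mixing the public docs collection with the PII customer database
# leaves only the locally running model usable.
print(selectable_llms(["Public", "PII"]))  # ['Llama-2-16b (local)']
```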
Once they have made their selections and created the app, and if you have the MONTAG_ADMIN environment variable set, an email will be sent to the admin to notify them of a pending application submission.
Step 2: Admin approval
On the platform side, the admin can view the application, and then view the API token that has been auto-created. The token is disabled by default, so the admin must enable it before the developer can continue.
Step 3: Developer starts to code!
Back to the developer: once the token has been enabled, they should click the “Show” button in their application list.
This will show their app endpoints, and the app token will be visible so they can start to work with it.
In the app “Show” view, Montag provides the user with three endpoints. These are OpenAI- and Pinecone-SDK-compatible APIs that act as drop-in replacements for the vendor-provided ones (a usage sketch follows the list):
- Completions: To create text completions - i.e. chat with the AI
- Embeddings: If the user wishes to use the vector database, they can use this endpoint to encode their query. It should target the correct namespace to ensure the correct encoding model is used.
- Vector: This is the Pinecone-compatible API endpoint that they can query with the embeddings they have created with the Embeddings endpoint.
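Here is a minimal sketch of a retrieval-augmented query against these three endpoints, using the standard OpenAI and Pinecone Python SDKs. The base URL, token, model names, namespace, and metadata field are hypothetical placeholders; substitute the real endpoint URLs and app token from your app’s “Show” view:

```python
from openai import OpenAI
from pinecone import Pinecone

# Hypothetical values: copy the real endpoint URLs and token
# from the app's "Show" view in the portal.
APP_TOKEN = "your-app-token"
MONTAG_BASE = "https://montag.example.com/api/apps/my-app"

# The Completions and Embeddings endpoints are OpenAI-compatible,
# so the stock OpenAI client works against them unchanged.
client = OpenAI(base_url=f"{MONTAG_BASE}/v1", api_key=APP_TOKEN)

# 1. Encode the query via the app-scoped Embeddings endpoint, which
#    targets the correct namespace's encoding model.
emb = client.embeddings.create(
    model="text-embedding-ada-002",  # placeholder model name
    input="How do I rotate my API keys?",
)
vector = emb.data[0].embedding

# 2. Query the Pinecone-compatible Vector endpoint with that embedding.
pc = Pinecone(api_key=APP_TOKEN)
index = pc.Index(host=f"{MONTAG_BASE}/vector")  # hypothetical host
res = index.query(vector=vector, top_k=3, include_metadata=True,
                  namespace="docs")  # namespace name is illustrative

# 3. Feed the retrieved context into a completion.
#    Assumes the stored metadata has a "text" field.
context = "\n".join(m.metadata["text"] for m in res.matches)
chat = client.chat.completions.create(
    model="montag-default",  # placeholder; routed to the approved LLM
    messages=[
        {"role": "system", "content": f"Answer using this context:\n{context}"},
        {"role": "user", "content": "How do I rotate my API keys?"},
    ],
)
print(chat.choices[0].message.content)
```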
This means that the developer can now write code using the OpenAI SDK or a toolkit like Langchain but, under the hood, actually interact with whatever LLM Montag has made available to the user.
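For example, here is the same drop-in idea sketched with Langchain; the base URL, token, and model name are again placeholders for the values from the app’s “Show” view:

```python
from langchain_openai import ChatOpenAI

# Point the standard Langchain OpenAI chat model at the Montag
# Completions endpoint (URL, token, and model name are placeholders).
llm = ChatOpenAI(
    base_url="https://montag.example.com/api/apps/my-app/v1",
    api_key="your-app-token",
    model="montag-default",
)

# The code reads like ordinary OpenAI usage, but Montag routes the
# request to whichever LLM the application was approved for.
print(llm.invoke("Summarise our API key rotation policy.").content)
```

Because the endpoints are drop-in compatible, existing Langchain chains and OpenAI SDK code should work without modification beyond the base URL and token.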