# aiconfig

Python | Node | Docs | Discord

Full documentation: aiconfig.lastmileai.dev

## Overview

AIConfig saves prompts, models, and model parameters as source-control-friendly configs. This allows you to iterate on prompts and model parameters separately from your application code.

1. **Prompts as configs**: a standardized JSON format to store generative AI model settings, prompt inputs/outputs, and flexible metadata.
2. **Model-agnostic SDK**: Python & Node SDKs to use aiconfig in your application code. AIConfig is designed to be model-agnostic and multi-modal, so you can extend it to work with any generative AI model, including text, image, and audio.
3. **AI Workbook editor**: a notebook-like playground to edit aiconfig files visually, run prompts, tweak models and model settings, and chain things together.

## What problem it solves

Today, application code is tightly coupled with an application's generative AI settings -- prompts, parameters, and model-specific logic are all jumbled in with app code. This:

* increases complexity
* makes it hard to iterate on prompts or try different models
* makes it hard to evaluate prompt/model performance

AIConfig helps unwind this complexity by separating prompts, model parameters, and model-specific logic from your application:

* simplifies application code -- simply call `config.run()`
* open the aiconfig in a playground to iterate quickly
* version control and evaluate the aiconfig -- it's the AI artifact for your application
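For a sense of what that looks like in code, here is the application-side pattern in miniature -- the Getting Started section below builds this up step by step, and `travel.aiconfig.json` is the config file created there:

```python
import asyncio
from aiconfig import AIConfigRuntime

async def main():
    # Prompts, model choice, and model settings all live in the aiconfig file,
    # not in application code.
    config = AIConfigRuntime.load("travel.aiconfig.json")

    # Application code shrinks to: run a named prompt.
    await config.run("get_activities")

asyncio.run(main())
```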
*(Diagram: AIConfig flow)*

## Quicknav

* Getting Started
  + Create an AIConfig
  + Run a prompt
  + Pass data into prompts
  + Prompt chains
  + Callbacks and monitoring
* SDK Cheatsheet
* Cookbooks and guides
  + CLI Chatbot
  + RAG with AIConfig
  + Prompt routing
  + OpenAI function calling
  + Chain of Verification
* Supported models
  + LLaMA2 example
  + Hugging Face (Mistral-7B) example
  + PaLM
* Extensibility
* Contributing
* Roadmap
* FAQ

## Features

* **Source-control friendly** aiconfig format to save prompts and model settings, which you can use for evaluation, reproducibility, and simplifying your application code.
* **Multi-modal and model-agnostic.** Use with any model, and serialize/deserialize data with the same aiconfig format.
* **Prompt chaining and parameterization** with `{{handlebars}}` templating syntax, allowing you to pass dynamic data into prompts (as well as between prompts).
* **Streaming** supported out of the box, allowing you to get playground-like streaming wherever you use aiconfig.
* **Notebook editor.** AI Workbooks editor to visually create your aiconfig, and use the SDK to connect it to your application code.

## Install

Install with your favorite package manager for Node or Python.

### Node.js (npm or yarn)

```bash
npm install aiconfig
```

```bash
yarn add aiconfig
```

### Python (pip3 or poetry)

```bash
pip3 install python-aiconfig
```

```bash
poetry add python-aiconfig
```

Detailed installation instructions.

### Set your OpenAI API Key

> **Note:** Make sure to specify the API keys (such as `OPENAI_API_KEY`) in your environment before proceeding.

In your CLI, set the environment variable:

```bash
export OPENAI_API_KEY=my_key
```
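If you prefer not to export the key in your shell, you can also set it from Python before loading any config. This is a standard-library sketch, not an aiconfig API; replace `my_key` with your actual key (and avoid committing real keys to source control):

```python
import os

# Equivalent to `export OPENAI_API_KEY=my_key`, scoped to this process only.
os.environ.setdefault("OPENAI_API_KEY", "my_key")
```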
## Getting Started

We cover Python instructions here; for Node.js, please see the detailed Getting Started guide.

In this quickstart, you will create a customizable NYC travel itinerary using aiconfig. This AIConfig contains a prompt chain to get a list of travel activities from an LLM and then generate an itinerary in an order specified by the user.

Link to tutorial code: here

*(Video: getting_started_example.mp4)*

### Download travel.aiconfig.json

> **Note:** Don't worry if you don't understand all the pieces of this yet, we'll go over it step by step.

```json
{
  "name": "NYC Trip Planner",
  "description": "Intrepid explorer with ChatGPT and AIConfig",
  "schema_version": "latest",
  "metadata": {
    "models": {
      "gpt-3.5-turbo": {
        "model": "gpt-3.5-turbo",
        "top_p": 1,
        "temperature": 1
      },
      "gpt-4": {
        "model": "gpt-4",
        "max_tokens": 3000,
        "system_prompt": "You are an expert travel coordinator with exquisite taste."
      }
    },
    "default_model": "gpt-3.5-turbo"
  },
  "prompts": [
    {
      "name": "get_activities",
      "input": "Tell me 10 fun attractions to do in NYC."
    },
    {
      "name": "gen_itinerary",
      "input": "Generate an itinerary ordered by {{order_by}} for these activities: {{get_activities.output}}.",
      "metadata": {
        "model": "gpt-4",
        "parameters": {
          "order_by": "geographic location"
        }
      }
    }
  ]
}
```

### Run the get_activities prompt

You don't need to worry about how to run inference for the model; it's all handled by AIConfig. The prompt runs with gpt-3.5-turbo since that is the default_model for this AIConfig.

Create a new file called app.py and enter the following code:

```python
import asyncio
from aiconfig import AIConfigRuntime, InferenceOptions

async def main():
    # Load the aiconfig
    config = AIConfigRuntime.load('travel.aiconfig.json')

    # Run a single prompt (with streaming)
    inference_options = InferenceOptions(stream=True)
    await config.run("get_activities", options=inference_options)

asyncio.run(main())
```

Now run this in your terminal with the command:

```bash
python3 app.py
```

### Run the gen_itinerary prompt

In your app.py file, replace the config.run line with:

```python
await config.run("gen_itinerary", params=None, options=inference_options)
```

Re-run the command in your terminal:

```bash
python3 app.py
```

This prompt depends on the output of get_activities. It also takes in parameters (user input) to determine the customized itinerary. Let's take a closer look.

gen_itinerary prompt:

```
"Generate an itinerary ordered by {{order_by}} for these activities: {{get_activities.output}}."
```

prompt metadata:

```json
{
  "metadata": {
    "model": "gpt-4",
    "parameters": {
      "order_by": "geographic location"
    }
  }
}
```

Observe the following:

1. The prompt depends on the output of the get_activities prompt.
2. It also depends on an order_by parameter (using {{handlebars}} syntax).
3. It uses gpt-4, whereas the get_activities prompt it depends on uses gpt-3.5-turbo.

Effectively, this is a prompt chain between the gen_itinerary and get_activities prompts, as well as a model chain between gpt-3.5-turbo and gpt-4.

Let's run this with AIConfig. Replace the config.run call above with:

```python
await config.run(
    "gen_itinerary",
    params={"order_by": "duration"},
    options=inference_options,
    run_with_dependencies=True,
)
```

Notice how simple the syntax is to perform a fairly complex task: running two different prompts across two different models and chaining one's output into the other's input. The code will just run get_activities, then pipe its output as an input to gen_itinerary, and finally run gen_itinerary.

### Save the AIConfig

Let's save the AIConfig back to disk, and serialize the outputs from the latest inference run as well:

```python
# Save the aiconfig to disk, and serialize outputs from the model run
config.save('updated.aiconfig.json', include_outputs=True)
```

### Edit the aiconfig in a notebook editor

We can iterate on an aiconfig using a notebook-like editor called an AI Workbook. Now that we have an aiconfig file artifact that encapsulates the generative AI part of our application, we can iterate on it separately from the application code that uses it.

1. Go to https://lastmileai.dev.
2. Go to the Workbooks page: https://lastmileai.dev/workbooks
3. Click the dropdown from '+ New Workbook' and select 'Create from AIConfig'.
4. Upload travel.aiconfig.json.

*(Video: upload_config.mp4)*

Try out the workbook playground here: NYC Travel Workbook

We are working on a local editor that you can run yourself. For now, please use the hosted version on https://lastmileai.dev.

## Additional Guides

There is a lot you can do with aiconfig. We have several other tutorials to help you get started:

* Create an AIConfig from scratch
* Run a prompt
* Pass data into prompts
* Prompt chains
* Callbacks and monitoring

Here are some example uses:

* CLI Chatbot
* RAG with AIConfig
* Prompt routing
* OpenAI function calling
* Chain of thought

## OpenAI Introspection API

If you are already using OpenAI completion APIs in your application, you can get started very quickly and start saving those messages in an aiconfig. Simply add the following lines to your imports:

```python
import openai
from aiconfig import AIConfigRuntime
from aiconfig.ChatCompletion import create_and_save_to_config

new_config = AIConfigRuntime.create("my_aiconfig", "This is my new AIConfig")
openai.chat.completions.create = create_and_save_to_config(aiconfig=new_config)
```

Now you can continue using the openai completion API as normal. When you want to save the config, just call new_config.save() and all your openai completion calls will get serialized to disk.

Detailed guide here.
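For example, after the patch above, an ordinary OpenAI call runs as usual while also being captured into `new_config` (a minimal sketch; the model and message content are just placeholders):

```python
# A regular OpenAI chat call -- application code is unchanged, but the
# patched create() also records the prompt and its output into new_config.
completion = openai.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Tell me a joke about config files."}],
)
# Use the response as you normally would, then persist the captured calls.
new_config.save()
```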
## Supported Models

AIConfig supports the following models out of the box:

* OpenAI chat models (GPT-3, GPT-3.5, GPT-4)
* LLaMA2 (running locally)
* Google PaLM models (PaLM chat)
* Hugging Face text generation models (e.g. Mistral-7B)

### Examples

* OpenAI
* LLaMA example
* Hugging Face (Mistral-7B) example
* PaLM

If you need to use a model that isn't provided out of the box, you can implement a ModelParser for it (see Extending AIConfig). We welcome contributions.

## AIConfig Schema

AIConfig specification

## AIConfig SDK

Read the Usage Guide for more details.

The AIConfig SDK supports CRUD operations for prompts, models, parameters, and metadata. Here are some common examples.

The root interface is the AIConfigRuntime object. That is the entrypoint for interacting with an AIConfig programmatically. Let's go over a few key CRUD operations to give a glimpse.

### AIConfig create

```python
config = AIConfigRuntime.create("aiconfig name", "description")
```

### Prompt resolve

resolve deserializes an existing Prompt into the data object that its model expects.

```python
config.resolve("prompt_name", params)
```

params are overrides you can specify to resolve any {{handlebars}} templates in the prompt. See the gen_itinerary prompt in the Getting Started example.

### Prompt serialize

serialize is the inverse of resolve -- it serializes the data object that a model understands into a Prompt object that can be serialized into the aiconfig format.

```python
config.serialize("model_name", data, "prompt_name")
```

### Prompt run

run is used to run inference for the specified Prompt.

```python
config.run("prompt_name", params)
```

### run_with_dependencies

This is a variant of run that re-runs all prompt dependencies. For example, in travel.aiconfig.json, the gen_itinerary prompt references the output of the get_activities prompt using {{get_activities.output}}. Running this function will first execute get_activities, and use its output to resolve the gen_itinerary prompt before executing it.

This is transitive, so it computes the directed acyclic graph of dependencies to execute. Complex relationships can be modeled this way.

```python
config.run_with_dependencies("gen_itinerary")
```

### Updating metadata and parameters

Use the get/set_metadata and get/set_parameter methods to interact with metadata and parameters (set_parameter is just syntactic sugar to update "metadata.parameters").

```python
config.set_metadata("key", data, "prompt_name")
```

> **Note:** if "prompt_name" is specified, the metadata is updated specifically for that prompt. Otherwise, the global metadata is updated.
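For instance, to update the order_by parameter from the Getting Started example programmatically -- a sketch that assumes set_parameter mirrors set_metadata's (key, value, prompt_name) signature, per the syntactic-sugar note above:

```python
# Update the {{order_by}} parameter on the gen_itinerary prompt
# (shorthand for setting "metadata.parameters" on that prompt).
config.set_parameter("order_by", "cost", "gen_itinerary")

# Update a global metadata field on the aiconfig itself.
config.set_metadata("last_reviewed", "2023-11-17")

# Persist the changes back to disk.
config.save()
```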
### Register new ModelParser

Use AIConfigRuntime.register_model_parser if you want to use a different ModelParser, or configure AIConfig to work with an additional model.

AIConfig uses the model name string to retrieve the right ModelParser for a given Prompt (see AIConfigRuntime.get_model_parser), so you can register a different ModelParser for the same ID to override which ModelParser handles a Prompt.

For example, suppose I want to use MyOpenAIModelParser to handle gpt-4 prompts. I can do the following at the start of my application:

```python
AIConfigRuntime.register_model_parser(myModelParserInstance, ["gpt-4"])
```

### Callback events

Use callback events to trace and monitor what's going on -- helpful for debugging and observability.

```python
from aiconfig import AIConfigRuntime, CallbackEvent, CallbackManager

config = AIConfigRuntime.load('aiconfig.json')

async def my_custom_callback(event: CallbackEvent) -> None:
    print(f"Event triggered: {event.name}", event)

callback_manager = CallbackManager([my_custom_callback])
config.set_callback_manager(callback_manager)

await config.run("prompt_name")
```

Read more here.

## Extensibility

AIConfig is designed to be customized and extended for your use-case. The Extensibility guide goes into more detail.

Currently, there are three core ways to extend AIConfig:

1. Supporting other models -- define a ModelParser extension (a rough skeleton follows below)
2. Callback event handlers -- tracing and monitoring
3. Custom metadata -- save custom fields in aiconfig
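To make (1) concrete, here is a rough skeleton of a custom ModelParser. The class and method signatures below are illustrative assumptions based on the responsibilities described in the FAQ (serialize, deserialize, run) -- consult the Extensibility guide for the SDK's actual base class and interface:

```python
# Hypothetical skeleton -- names and signatures are assumptions, not the
# SDK's documented API. In practice, subclass the SDK's ModelParser base.
from aiconfig import AIConfigRuntime

class MyEndpointModelParser:
    """Teaches AIConfig how to talk to a custom model endpoint."""

    def id(self):
        # The model name string used to look this parser up.
        return "my-custom-model"

    def serialize(self, prompt_name, data, aiconfig):
        # Convert the endpoint's request/response data into aiconfig Prompts.
        ...

    def deserialize(self, prompt, aiconfig, params):
        # Convert an aiconfig Prompt into the payload the endpoint expects.
        ...

    async def run(self, prompt, aiconfig, options, params):
        # Call the endpoint and return its output.
        ...

# Register it at application startup, as shown above.
AIConfigRuntime.register_model_parser(MyEndpointModelParser(), ["my-custom-model"])
```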
## Contributing to aiconfig

This is our first open-source project and we'd love your help. See our contributing guidelines -- we would especially love help adding support for additional models that the community wants.

## Cookbooks

We provide several guides to demonstrate the power of aiconfig. See the cookbooks folder for examples to clone.

### Chatbot

* Wizard GPT -- speak to a wizard on your CLI
* CLI-mate -- helps you make code-mods interactively on your codebase

### Retrieval Augmented Generation (RAG)

* RAG with AIConfig

At its core, RAG is about passing data into prompts. Read how to pass data with AIConfig.

### Function calling

* OpenAI function calling

### Prompt routing

* Prompt routing

### Chain of Thought

A variant of chain-of-thought is Chain of Verification, used to help reduce hallucinations. Check out the aiconfig cookbook for CoVe:

* Chain of Verification

### Using local LLaMA2 with aiconfig

* LLaMA example

### Hugging Face text generation

* Hugging Face (Mistral-7B) example

### Google PaLM

* PaLM

## Roadmap

This project is under active development. If you'd like to help, please see the contributing guidelines. Please create issues for additional capabilities you'd like to see.

Here's what's already on our roadmap:

* Evaluation interfaces: allow aiconfig artifacts to be evaluated with user-defined eval functions.
  + We are also considering integrating with existing evaluation frameworks.
* Local editor for aiconfig: enable you to interact with aiconfigs more intuitively.
* OpenAI Assistants API support
* Multi-modal ModelParsers:
  + GPT4-V support
  + DALL-E 3
  + Whisper
  + Hugging Face image generation

## FAQs

### How should I edit an aiconfig file?

Editing a config should be done either programmatically via the SDK or via the UI (workbooks):

* Programmatic editing.
* Edit with a workbook editor: this is similar to editing an ipynb file as a notebook (most people never touch the json ipynb directly).

You should only edit the aiconfig by hand for minor modifications, like tweaking a prompt string or updating some metadata.

### Does this support custom endpoints?

Out of the box, AIConfig already supports all OpenAI GPT* models, Google's PaLM model, and any "text generation" model on Hugging Face (like Mistral). See Supported Models for more details.

Additionally, you can install aiconfig extensions for additional models (see the question below).

### Is OpenAI function calling supported?

Yes. This example goes through how to do it. We are also working on adding support for the Assistants API.

### How can I use aiconfig with my own model endpoint?

Model support is implemented as ModelParsers in the AIConfig SDK, and the idea is that anyone, including you, can define a ModelParser (and even publish it as an extension package).

All that's needed to use a model with AIConfig is a ModelParser that knows:

* how to serialize data from a model into the aiconfig format
* how to deserialize data from an aiconfig into the type the model expects
* how to run inference for the model

For more details, see Extensibility.

### When should I store outputs in an aiconfig?

The AIConfigRuntime object is used to interact with an aiconfig programmatically (see the SDK usage guide). As you run prompts, this object keeps track of the outputs returned from the model.

You can choose to serialize these outputs back into the aiconfig by using the config.save(include_outputs=True) API. This can be useful for preserving context -- think of it like session state.

For example, you can use aiconfig to create a chatbot, and use the same format to save the chat history so it can be resumed for the next session. You can also choose to save outputs to a different file than the original config -- config.save("history.aiconfig.json", include_outputs=True).

### Why should I use aiconfig instead of things like configurator?

It helps to have a standardized format specifically for storing generative AI prompts, inference results, model parameters, and arbitrary metadata, as opposed to a general-purpose configuration schema. With that standardization, you just need a layer that knows how to serialize/deserialize from that format into whatever the inference endpoints require.

### This looks similar to ipynb for Jupyter notebooks

We believe that notebooks are a perfect iteration environment for generative AI -- they are flexible, multi-modal, and collaborative. The multi-modality and flexibility offered by notebooks and ipynb offers a good interaction model for generative AI.

The aiconfig file format is extensible like ipynb, and the AI Workbook editor allows rapid iteration in a notebook-like IDE. AI Workbooks are to AIConfig what Jupyter notebooks are to ipynb.

There are two areas where we are going beyond what notebooks offer:

1. aiconfig is more source-control friendly than ipynb. ipynb stores binary data (images, etc.) by encoding it in the file, while aiconfig recommends using file URI references instead.
2. aiconfig can be imported and connected to application code using the AIConfig SDK.