https://github.com/Forethought-Technologies/AutoChain

Forethought-Technologies / AutoChain — Build lightweight, extensible, and testable LLM Agents (autochain.forethought.ai, MIT license)
# AutoChain

Large language models (LLMs) have shown huge success in a range of text generation tasks and enable developers to build generative agents based on objectives expressed in natural language. However, most generative agents require heavy customization for specific purposes, and supporting different use cases with existing tools and frameworks can be overwhelming. As a result, building a custom generative agent is still very challenging. In addition, evaluating such agents, which is usually done by manually trying out different scenarios, is a repetitive and expensive task.

AutoChain takes inspiration from LangChain and AutoGPT and aims to solve both problems by providing a lightweight and extensible framework for developers to build their own agents using LLMs with custom tools, and to automatically evaluate different user scenarios with simulated conversations.
Experienced LangChain users will find AutoChain easy to navigate, since the two share similar but simpler concepts. The goal is to enable rapid iteration on generative agents by simplifying both agent customization and evaluation. If you have any questions, please feel free to reach out to Yi Lu at yi.lu@forethought.ai.

## Features

* Lightweight and extensible generative agent pipeline
* Agents that can use different custom tools and support OpenAI function calling
* Simple memory tracking for conversation history and tools' outputs
* Automated multi-turn agent conversation evaluation with simulated conversations

## Setup

Quick install:

```shell
pip install autochain
```

Or install from source after cloning this repository:

```shell
cd autochain
pyenv virtualenv 3.10.11 venv
pyenv local venv
pip install .
```

Set `PYTHONPATH` and `OPENAI_API_KEY`:

```shell
export OPENAI_API_KEY=
export PYTHONPATH=`pwd`
```

Run your first conversation with the agent interactively:

```shell
python autochain/workflows_evaluation/conversational_agent_eval/change_shipping_address_test.py -i
```

## How does AutoChain simplify building agents?

Compared to existing frameworks, AutoChain provides a lightweight framework and simplifies the agent building process in a few ways:

1. **Easy prompt update.** Engineering and iterating over prompts is a crucial part of building a generative agent. AutoChain makes it easy to update prompts and visualize prompt outputs. Run with the `-v` flag to print verbose prompts and outputs to the console.
2. **Up to 2 layers of abstraction.** To enable rapid iteration, AutoChain removes most of the abstraction layers found in alternative frameworks.
3. **Automated multi-turn evaluation.** Evaluation is the most painful and least well-defined part of building generative agents. Updating an agent to perform better in one scenario often causes regressions in other use cases. AutoChain provides a testing framework that automatically evaluates an agent's ability under different user scenarios.
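The "simple memory tracking" feature above can be pictured as a plain conversation buffer that stores messages and tool outputs for later prompt construction. The following is a conceptual sketch only, not AutoChain's actual `BufferMemory` implementation; the class and method names here are illustrative:

```python
# Conceptual sketch of conversation-buffer memory; AutoChain's real
# BufferMemory (autochain.memory.buffer_memory) has a different API.
class SimpleBufferMemory:
    def __init__(self):
        self.messages = []      # chronological conversation history
        self.tool_outputs = {}  # tool results keyed by tool name

    def save_message(self, role, content):
        self.messages.append({"role": role, "content": content})

    def save_tool_output(self, tool_name, output):
        self.tool_outputs[tool_name] = output

    def load_history(self):
        # Render the history the way it might be injected into a prompt
        return "\n".join(f"{m['role']}: {m['content']}" for m in self.messages)


memory = SimpleBufferMemory()
memory.save_message("user", "What is the weather today?")
memory.save_tool_output("Get weather", "Today is a sunny day")
memory.save_message("assistant", "Today is a sunny day.")
```

Keeping tool outputs alongside the message history lets the agent reuse earlier tool results in later turns without calling the tool again.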
## Example usage

If you have experience with LangChain, you already know 80% of the AutoChain interfaces. AutoChain aims to make building custom generative agents as straightforward as possible, with as few abstractions as possible.

The most basic example uses the default `Chain` and `ConversationalAgent`:

```python
from autochain.chain.chain import Chain
from autochain.memory.buffer_memory import BufferMemory
from autochain.models.chat_openai import ChatOpenAI
from autochain.agent.conversational_agent.conversational_agent import ConversationalAgent

llm = ChatOpenAI(temperature=0)
memory = BufferMemory()
agent = ConversationalAgent.from_llm_and_tools(llm=llm)
chain = Chain(agent=agent, memory=memory)

print(chain.run("Write me a poem about AI")['message'])
```

Just like in LangChain, you can add a list of tools to the agent:

```python
tools = [
    Tool(
        name="Get weather",
        func=lambda *args, **kwargs: "Today is a sunny day",
        description="""This function returns the weather information"""
    )
]

memory = BufferMemory()
agent = ConversationalAgent.from_llm_and_tools(llm=llm, tools=tools)
chain = Chain(agent=agent, memory=memory)

print(chain.run("What is the weather today")['message'])
```

AutoChain also supports function calling for OpenAI models. Behind the scenes, it turns the function spec into the OpenAI format without explicit instruction, so you can keep using the same `Tool` interface you are familiar with:

```python
llm = ChatOpenAI(temperature=0)
agent = OpenAIFunctionsAgent.from_llm_and_tools(llm=llm, tools=tools)
```

See more examples under `autochain/examples` and the workflow evaluation test cases, which can also be run interactively. Read more in the detailed components overview.
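The conversion mentioned above can be illustrated with a small standalone sketch. This is not AutoChain's actual code; it only shows the general shape of mapping a `Tool`-style spec (name, description, argument types) onto the JSON schema that the OpenAI function-calling API expects. The `arg_types` parameter and the helper name are assumptions for illustration:

```python
def tool_to_openai_function(name, description, arg_types):
    """Map a simple tool spec onto the OpenAI function-calling schema.

    `arg_types` maps argument names to JSON-schema type strings,
    e.g. {"city": "string"}.
    """
    return {
        "name": name.replace(" ", "_"),  # function names may not contain spaces
        "description": description,
        "parameters": {
            "type": "object",
            "properties": {arg: {"type": t} for arg, t in arg_types.items()},
            "required": list(arg_types),
        },
    }


spec = tool_to_openai_function(
    "Get weather",
    "This function returns the weather information",
    {"city": "string"},
)
```

Because the framework performs a conversion like this automatically, the same tool definition works for both `ConversationalAgent` and `OpenAIFunctionsAgent`.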
## Workflow Evaluation

It is notoriously hard to evaluate generative agents built with LangChain or AutoGPT. An agent's behavior is nondeterministic and sensitive to small changes in the prompt or model, so it is hard to know what effect an update to the agent will have across all relevant use cases.

The usual approach to evaluation is to run the agent through a large number of preset queries and evaluate the generated responses. However, that is limited to single-turn conversations, is general rather than task-specific, and is expensive to verify.

To evaluate agents effectively, AutoChain introduces a workflow evaluation framework that simulates conversations between a generative agent and LLM-simulated test users under different user contexts and desired conversation outcomes. This makes it easy to add test cases for new user scenarios, and the framework uses LLMs to judge whether a given multi-turn conversation reached the intended outcome. Read more about our evaluation strategy.

### How to run workflow evaluations

You can either run your tests in interactive mode or run the full suite of test cases at once. `autochain/workflows_evaluation/conversational_agent_eval/change_shipping_address_test.py` contains a few example test cases.

To run all the cases defined in a test file:

```shell
python autochain/workflows_evaluation/conversational_agent_eval/change_shipping_address_test.py
```

To run your tests interactively, add `-i`:

```shell
python autochain/workflows_evaluation/conversational_agent_eval/change_shipping_address_test.py -i
```

Looking for more details on how AutoChain works?
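The simulated-conversation idea above can be sketched in plain Python. This is a conceptual illustration only: the stubbed `agent_reply` and `simulated_user_reply` callables stand in for LLM calls, and the keyword check stands in for the LLM judge; AutoChain's real workflow evaluation framework has a different API:

```python
# Conceptual sketch of a simulated multi-turn evaluation loop.
# agent_reply and simulated_user_reply stand in for LLM calls.
def run_simulated_conversation(agent_reply, simulated_user_reply,
                               first_user_message, max_turns=5):
    """Alternate agent and simulated-user turns; return the transcript."""
    transcript = [("user", first_user_message)]
    for _ in range(max_turns):
        agent_msg = agent_reply(transcript)
        transcript.append(("agent", agent_msg))
        user_msg = simulated_user_reply(transcript)
        if user_msg is None:  # the simulated user is satisfied; stop
            break
        transcript.append(("user", user_msg))
    return transcript


def reached_outcome(transcript, keyword):
    # In AutoChain an LLM judges the outcome; a keyword check stands in here.
    return any(keyword in msg for role, msg in transcript if role == "agent")


# Stubbed example: a user who asks about the weather once and then stops.
transcript = run_simulated_conversation(
    agent_reply=lambda t: "Today is a sunny day",
    simulated_user_reply=lambda t: None,
    first_user_message="What is the weather today?",
)
```

Varying the simulated user's context and desired outcome while keeping this loop fixed is what lets the framework cover many user scenarios cheaply.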
See our components overview.