https://github.com/jackmpcollins/magentic
# magentic

Easily integrate Large Language Models into your Python code. Simply use the `@prompt` decorator to create functions that return structured output from the LLM. Mix LLM queries and function calling with regular Python code to create complex logic.

magentic is

* **Compact:** Query LLMs without duplicating boilerplate code.
* **Atomic:** Prompts are functions that can be individually tested and reasoned about.
* **Transparent:** Create "chains" using regular Python code. Define all of your own prompts.
* **Compatible:** Use `@prompt` functions as normal functions, including with decorators like `@lru_cache`.
* **Type Annotated:** Works with linters and IDEs.

Continue reading for sample usage, or go straight to the examples directory.

## Installation

```
pip install magentic
```

or using poetry

```
poetry add magentic
```

Configure your OpenAI API key by setting the `OPENAI_API_KEY` environment variable or using `openai.api_key = "sk-..."`. See the OpenAI Python library documentation for more information.

## Usage

The `@prompt` decorator allows you to define a template for a Large Language Model (LLM) prompt as a Python function. When this function is called, the arguments are inserted into the template, then this prompt is sent to an LLM which generates the function output.

```python
from magentic import prompt


@prompt('Add more "dude"ness to: {phrase}')
def dudeify(phrase: str) -> str:
    ...  # No function body as this is never executed


dudeify("Hello, how are you?")
# "Hey, dude! What's up? How's it going, my man?"
```

The `@prompt` decorator will respect the return type annotation of the decorated function. This can be any type supported by pydantic, including a pydantic model.

```python
from magentic import prompt
from pydantic import BaseModel


class Superhero(BaseModel):
    name: str
    age: int
    power: str
    enemies: list[str]


@prompt("Create a Superhero named {name}.")
def create_superhero(name: str) -> Superhero:
    ...


create_superhero("Garden Man")
# Superhero(name='Garden Man', age=30, power='Control over plants', enemies=['Pollution Man', 'Concrete Woman'])
```

An LLM can also decide to call functions. In this case, the `@prompt`-decorated function returns a `FunctionCall` object which can be called to execute the function using the arguments provided by the LLM.

```python
from typing import Literal

from magentic import prompt, FunctionCall


def activate_oven(temperature: int, mode: Literal["broil", "bake", "roast"]) -> str:
    """Turn the oven on with the provided settings."""
    return f"Preheating to {temperature} F with mode {mode}"


@prompt(
    "Prepare the oven so I can make {food}",
    functions=[activate_oven],
)
def configure_oven(food: str) -> FunctionCall[str]:
    ...


output = configure_oven("cookies!")
# FunctionCall(<function activate_oven at 0x...>, temperature=350, mode='bake')
output()
# 'Preheating to 350 F with mode bake'
```
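Because the function is not executed until the returned `FunctionCall` object is invoked, ordinary Python code can decide whether and when to run it. A minimal sketch reusing `configure_oven` from above (the confirmation prompt is purely illustrative and not part of magentic):

```python
call = configure_oven("a roast chicken")
# Nothing has been executed yet: the oven is only activated when the FunctionCall is invoked
if input("Activate the oven? [y/n] ") == "y":
    print(call())  # prints the result of activate_oven with the LLM-chosen arguments
```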
Sometimes the LLM requires making one or more function calls to generate a final answer. The `@prompt_chain` decorator will resolve `FunctionCall` objects automatically and pass the output back to the LLM to continue until the final answer is reached.

In the following example, when `describe_weather` is called the LLM first calls the `get_current_weather` function, then uses the result of this to formulate its final answer which gets returned.

```python
from magentic import prompt_chain


def get_current_weather(location, unit="fahrenheit"):
    """Get the current weather in a given location"""
    # Pretend to query an API
    return {
        "location": location,
        "temperature": "72",
        "unit": unit,
        "forecast": ["sunny", "windy"],
    }


@prompt_chain(
    "What's the weather like in {city}?",
    functions=[get_current_weather],
)
def describe_weather(city: str) -> str:
    ...


describe_weather("Boston")
# 'The current weather in Boston is 72°F and it is sunny and windy.'
```

LLM-powered functions created using `@prompt` and `@prompt_chain` can be supplied as functions to other `@prompt`/`@prompt_chain` decorators, just like regular Python functions. This enables increasingly complex LLM-powered functionality, while allowing individual components to be tested and improved in isolation. See the examples directory for more.

## Streaming

The `StreamedStr` (and `AsyncStreamedStr`) class can be used to stream the output of the LLM. This allows you to process the text while it is being generated, rather than receiving the whole output at once.

```python
from magentic import prompt, StreamedStr


@prompt("Tell me about {country}")
def describe_country(country: str) -> StreamedStr:
    ...


# Print the chunks while they are being received
for chunk in describe_country("Brazil"):
    print(chunk, end="")
# 'Brazil, officially known as the Federative Republic of Brazil, is ...'
```

Multiple `StreamedStr` can be created at the same time to stream LLM outputs concurrently. In the below example, generating the description for multiple countries takes approximately the same amount of time as for a single country.

```python
from time import time

countries = ["Australia", "Brazil", "Chile"]

# Generate the descriptions one at a time
start_time = time()
for country in countries:
    # Converting `StreamedStr` to `str` blocks until the LLM output is fully generated
    description = str(describe_country(country))
    print(f"{time() - start_time:.2f}s : {country} - {len(description)} chars")

# 22.72s : Australia - 2130 chars
# 41.63s : Brazil - 1884 chars
# 74.31s : Chile - 2968 chars

# Generate the descriptions concurrently by creating the StreamedStrs at the same time
start_time = time()
streamed_strs = [describe_country(country) for country in countries]
for country, streamed_str in zip(countries, streamed_strs):
    description = str(streamed_str)
    print(f"{time() - start_time:.2f}s : {country} - {len(description)} chars")

# 22.79s : Australia - 2147 chars
# 23.64s : Brazil - 2202 chars
# 24.67s : Chile - 2186 chars
```
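The same streaming pattern works in asynchronous code with `AsyncStreamedStr`. A minimal sketch, assuming `AsyncStreamedStr` is importable from the top-level package like `StreamedStr`, and that the coroutine is awaited to obtain the stream (as in the Asyncio section below):

```python
import asyncio

from magentic import prompt, AsyncStreamedStr


@prompt("Tell me about {country}")
async def describe_country_async(country: str) -> AsyncStreamedStr:
    ...


async def main():
    # Print the chunks while they are being received
    async for chunk in await describe_country_async("Brazil"):
        print(chunk, end="")


asyncio.run(main())
```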
## Object Streaming

Structured outputs can also be streamed from the LLM by using the return type annotation `Iterable` (or `AsyncIterable`). This allows each item to be processed while the next one is being generated. See the example in examples/quiz for how this can be used to improve user experience by quickly displaying/using the first item returned.

```python
from collections.abc import Iterable
from time import time

from magentic import prompt
from pydantic import BaseModel


class Superhero(BaseModel):
    name: str
    age: int
    power: str
    enemies: list[str]


@prompt("Create a Superhero team named {name}.")
def create_superhero_team(name: str) -> Iterable[Superhero]:
    ...


start_time = time()
for hero in create_superhero_team("The Food Dudes"):
    print(f"{time() - start_time:.2f}s : {hero}")

# 2.23s : name='Pizza Man' age=30 power='Can shoot pizza slices from his hands' enemies=['The Hungry Horde', 'The Junk Food Gang']
# 4.03s : name='Captain Carrot' age=35 power='Super strength and agility from eating carrots' enemies=['The Sugar Squad', 'The Greasy Gang']
# 6.05s : name='Ice Cream Girl' age=25 power='Can create ice cream out of thin air' enemies=['The Hot Sauce Squad', 'The Healthy Eaters']
```

## Asyncio

Asynchronous functions / coroutines can be used to concurrently query the LLM. This can greatly increase the overall speed of generation, and also allow other asynchronous code to run while waiting on LLM output. In the below example, the LLM generates a description for each US president while it is waiting on the next one in the list. Measuring the characters generated per second shows that this example achieves a 7x speedup over serial processing.

```python
import asyncio
from time import time
from typing import AsyncIterable

from magentic import prompt


@prompt("List ten presidents of the United States")
async def iter_presidents() -> AsyncIterable[str]:
    ...


@prompt("Tell me more about {topic}")
async def tell_me_more_about(topic: str) -> str:
    ...


# For each president listed, generate a description concurrently
start_time = time()
tasks = []
async for president in await iter_presidents():
    # Use asyncio.create_task to schedule the coroutine for execution before awaiting it
    # This way descriptions will start being generated while the list of presidents is still being generated
    task = asyncio.create_task(tell_me_more_about(president))
    tasks.append(task)

descriptions = await asyncio.gather(*tasks)

# Measure the characters per second
total_chars = sum(len(desc) for desc in descriptions)
time_elapsed = time() - start_time
print(total_chars, time_elapsed, total_chars / time_elapsed)
# 24575 28.70 856.07


# Measure the characters per second to describe a single president
start_time = time()
out = await tell_me_more_about("George Washington")
time_elapsed = time() - start_time
print(len(out), time_elapsed, len(out) / time_elapsed)
# 2206 18.72 117.78
```

## Additional Features

* The `functions` argument to `@prompt` can contain async/coroutine functions. When the corresponding `FunctionCall` objects are called the result must be awaited.
* The `Annotated` type annotation can be used to provide descriptions and other metadata for function parameters. See the pydantic documentation on using `Field` to describe function arguments.
* The `@prompt` and `@prompt_chain` decorators also accept a `model` argument. You can pass an instance of `OpenaiChatModel` (from `magentic.chat_model.openai_chat_model`) to use GPT4 or configure a different temperature. See the sketch below.
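As a rough illustration of the last two points, the oven example from the Usage section can be given a parameter description via `Annotated`/`Field` and a non-default model. This is a sketch only: the `OpenaiChatModel` constructor arguments shown (model name and `temperature`) are assumed, so check the class signature in `magentic.chat_model.openai_chat_model`.

```python
from typing import Annotated, Literal

from magentic import FunctionCall, prompt
from magentic.chat_model.openai_chat_model import OpenaiChatModel
from pydantic import Field


def activate_oven(
    temperature: Annotated[int, Field(description="Target temperature in degrees Fahrenheit")],
    mode: Literal["broil", "bake", "roast"],
) -> str:
    """Turn the oven on with the provided settings."""
    return f"Preheating to {temperature} F with mode {mode}"


@prompt(
    "Prepare the oven so I can make {food}",
    functions=[activate_oven],
    model=OpenaiChatModel("gpt-4", temperature=0.5),  # assumed constructor arguments
)
def configure_oven(food: str) -> FunctionCall[str]:
    ...
```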
## Configuration

The order of precedence of configuration is

1. Arguments passed when initializing an instance in Python
2. Environment variables

The following environment variables can be set.

| Environment Variable          | Description        | Example |
| ----------------------------- | ------------------ | ------- |
| `MAGENTIC_OPENAI_MODEL`       | OpenAI model       | gpt-4   |
| `MAGENTIC_OPENAI_TEMPERATURE` | OpenAI temperature | 0.5     |

## Type Checking

Many type checkers will raise warnings or errors for functions with the `@prompt` decorator due to the function having no body or return value. There are several ways to deal with these.

1. Disable the check globally for the type checker. For example in mypy by disabling error code `empty-body`.

   ```toml
   # pyproject.toml
   [tool.mypy]
   disable_error_code = ["empty-body"]
   ```

2. Make the function body `...` (this does not satisfy mypy) or `raise`.

   ```python
   @prompt("Choose a color")
   def random_color() -> str:
       ...
   ```

3. Use comment `# type: ignore[empty-body]` on each function. In this case you can add a docstring instead of `...`.

   ```python
   @prompt("Choose a color")
   def random_color() -> str:  # type: ignore[empty-body]
       """Returns a random color."""
   ```