# Open Message Format (OMF)

## Overview

OMF is a compact, user-friendly specification that defines a lightweight API contract between client and server for building conversational agents, and defines a standard schema for the "messages" object, which contains user and assistant interactions.
The "messages" object schema in OMF serves a dual purpose, to define a standard way to exchange user and assistant conversations from the client to your backend server, and from your backend server to an upstream LLM. Developers building conversational agents and generative AI applications waste precious time on two things: * Writing integration code that works only with specific client-side UI tools such as Gradio or Streamlit, which reduces their ability to experiment with new tools quickly. * Writing integration code what works with specific LLMs, reducing their ability to easily swap between LLMs for quick experimentation. Writing boilerplate integration code results in a slowed pace of development and a system that is rigid and difficult to change. OMF eliminates this undifferentiated heavy lifting by defining a standard API contract and a schema to exchange messages for conversational apps, and providers built-in convertors that make it easy for developers to experiment with different LLMs. The key advantages of using OMF are: * Interoperability: OMF standardizes prompt interactions between clients, servers, and LLMs. OMF is also interoperable across multiple popular LLMs. * Extensibility: OMF is designed to be extended in order to fit all developers' use cases. Benefits for Developers OMF simplifies the process of sending messages, making it easier to deploy conversational agents and other LLM-based tools and applications. It removes the guesswork for developers on how to send and receive messages. image image (1) How to Use OMF 1. Setting Up Your Endpoint a. Choose Your Development Environment You can use any programming language and web framework that supports OpenAPI specifications. Common choices include: * Python: Flask, FastAPI, or Django. * JavaScript/Node.js: Express.js. * Java: Spring Boot. * Go: Gin or Echo. * Rust: Actix-web or Rocket. b. Implement the API Endpoints If tools like openapi-generator-cli are not be viable for creating server stubs, you can manually implement the endpoint described in the OMF spec: 1. Manually Create the /message Endpoint: + In your chosen framework, define a POST endpoint /message. + Ensure that this endpoint accepts and processes the JSON payload as defined in the OMF spec. + The endpoint should accept a an array of the Message object. Each message will have a role and a content field, and the content could be text, base64-encoded images, or image URLs. 2. Message Handling Logic: + Parse the incoming JSON request into appropriate data models. The Message object should be parsed with a role and an array of ContentItem objects. + Implement logic to handle different types of content, such as text, images, and image URLs. + If you want to, you can directly send the array to an LLM by just passing it in the messages parameter for many LLM providers. You may need to create some tools to convert depending on the model you use. 3. Construct Responses: + Based on the request, generate a response that follows the ResponseMessage schema outlined in the specification. c. 
#### c. Example Setup

Here's a simplified example of how to implement the `/message` endpoint in Python using Flask:

```python
from flask import Flask, request, jsonify
import openai

app = Flask(__name__)

@app.route('/message', methods=['POST'])
def handle_message():
    messages = request.json
    try:
        # Send the received messages directly to the OpenAI API
        response = openai.chat.completions.create(
            model="gpt-4o-mini",
            messages=messages
        )
        # Return the first choice's message content directly
        return response.choices[0].message.content, 200
    except openai.OpenAIError as e:
        # Handle OpenAI API errors
        return jsonify({"error": str(e)}), 500

if __name__ == '__main__':
    app.run(port=8080)
```

#### d. Testing Locally

Once the endpoint is implemented, you can test it locally using curl, Postman, or any other HTTP client. For example, you can send the following request with curl:

```bash
curl -X POST http://localhost:8080/message \
  -H "Content-Type: application/json" \
  -d '[
    {
      "role": "user",
      "content": "Hello World"
    }
  ]'
```

### 2. Testing Your Implementation

After setting up the server, you can test the `/message` endpoint using tools like:

* curl (as shown above)
* Postman: import the OpenAPI specification and generate requests directly to interact with your locally running server.

Ensure that your endpoint processes the incoming messages correctly and returns appropriate responses in line with the OMF specification.

### 3. Deploying Your API

Once your API is working locally, you can deploy it using your preferred method, such as:

* Containerization: use Docker to containerize your application and deploy it to cloud services like AWS, Azure, or GCP.
* Dedicated server: run the application on a dedicated server behind a production-ready web server and reverse proxy.

## Extending the Specification

OMF is designed to be flexible and generic, allowing users to extend and expand the specification to fit specific needs. For instance, users can add arguments specific to OpenAI roles, or modify the specification for providers like Cohere that require separate `message` and `chat_history` parameters (a conversion sketch follows the comparison table below). An example modification might include adding the `name` parameter for the OpenAI user role:

```yaml
Message:
  type: object
  properties:
    role:
      type: string
      description: |
        Role of the message sender. Examples include:
        - `system`: For system messages
        - `user`: For messages from the user
        - `assistant`: For messages from the assistant
    content:
      oneOf:
        - type: array
          items:
            $ref: "#/components/schemas/ContentItem"
          description: |
            Content of the message. It can be a list of content items,
            such as text and images.
        - type: string
          description: text content of the message
    name:
      type: string
      description: the name of the user sending this message
  required:
    - role
    - content
```

This modification to the base spec adjusts for the `name` parameter in the OpenAI user message, allowing developers to have chats with multiple named participants. The section added to the spec above is:

```yaml
name:
  type: string
  description: |
    The name of the user sending the message.
```

## Comparison

| LLM Provider   | Compatible with Stock OMF | Benefits from Additional Tooling | Requires Additional Tooling |
| -------------- | ------------------------- | -------------------------------- | --------------------------- |
| OpenAI         | Yes                       | Yes                              | No                          |
| Mistral AI     | Yes                       | Yes                              | No                          |
| Anthropic      | Yes                       | Yes                              | No                          |
| IBM            | No                        | No                               | Yes                         |
| Google         | Yes (Requires Conversion) | Yes                              | No                          |
| Amazon Bedrock | Yes (Requires Conversion) | Yes                              | No                          |
| Cohere         | Yes (Requires Conversion) | Yes                              | No                          |
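For the providers marked "Yes (Requires Conversion)", a small adapter between OMF and the provider's native format is usually enough. As one hedged illustration, the sketch below restructures an OMF-style messages array into the separate `message` and `chat_history` parameters mentioned above for Cohere; the `USER`/`CHATBOT`/`SYSTEM` role names and the `message` key are assumptions based on Cohere's published chat format, not part of OMF itself.

```python
# Illustrative sketch: convert an OMF messages array into the separate
# `message` / `chat_history` parameters used by Cohere-style chat APIs.
# The "USER"/"CHATBOT"/"SYSTEM" role names and output keys are assumptions
# based on Cohere's v1 chat format; verify against the provider's docs.
from typing import List, Tuple


def omf_to_cohere(messages: List[dict]) -> Tuple[str, List[dict]]:
    role_map = {"user": "USER", "assistant": "CHATBOT", "system": "SYSTEM"}
    history = []
    for m in messages:
        content = m["content"]
        # Flatten list-of-ContentItem content down to its text parts
        if isinstance(content, list):
            content = " ".join(
                item.get("text", "") for item in content
                if item.get("type") == "text"
            )
        history.append({"role": role_map[m["role"]], "message": content})
    # The latest message is sent separately from the prior history
    latest = history.pop()
    return latest["message"], history
```

This assumes the final element of the array is the newest user turn; the returned pair can then be passed as the `message` and `chat_history` arguments of the provider's chat call.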
Note: Some models have unique parameters, such as `message_id` or `name`. These parameters, while easy to add for specific models, are not universal and are therefore not included in the base specification. Some models also have function-calling capabilities, but because function calls are more relevant to a full ChatCompletions setup, they belong to the Open Completions API.

## Roadmap

Future improvements include:

| Feature or Improvement | Description |
| --- | --- |
| Additional message metadata parameter | Allows servers to gain more insight into the origins or details of the messages object before it is sent to an LLM. |
| Extended version of the spec for the Open Completions API | Currently being worked on to provide more robust functionality. |
| `dependentRequired` keyword for OpenAPI 3.1.0 | Will be implemented once more tools support OpenAPI 3.1.0, ensuring content types align with the `type` in `ContentItem`. |
| Add more details to the `Metadata` object | Possible additions include a `timestamp` parameter, a `completion_tokens` parameter, and a `prompt_tokens` parameter. |

We encourage you to suggest and upvote new roadmap items via GitHub issues.