[HN Gopher] Show HN: Open Message Format - a compact, vendor-agn...
       ___________________________________________________________________
        
       Show HN: Open Message Format - a compact, vendor-agnostic spec for
       messages
        
       Author : aayushwhiz
       Score  : 24 points
       Date   : 2024-09-09 13:30 UTC (9 hours ago)
        
 (HTM) web link (github.com)
 (TXT) w3m dump (github.com)
        
       | aayushwhiz wrote:
       | Open Message Format (OMF) is a specification for structuring
       | message exchanges between users, servers, and large language
       | models (LLMs). It defines an API contract and schema for the
       | "message" object, which supports communication in conversational
       | agents. OMF allows developers to work with different LLMs and
       | tools without writing extra translation code.
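The description above can be made concrete with a small sketch. The field names below (role, content) follow the common OpenAI-style convention the thread later compares OMF against; they are an assumption for illustration, not quoted from the spec:

```python
# Hypothetical sketch of an OMF-style message object.
# Field names (role, content) follow the widespread OpenAI-style
# convention; the spec's actual schema may differ.
from dataclasses import dataclass, asdict

VALID_ROLES = {"system", "user", "assistant"}

@dataclass
class Message:
    role: str       # one of: system / user / assistant
    content: str    # the message text

    def __post_init__(self):
        if self.role not in VALID_ROLES:
            raise ValueError(f"unknown role: {self.role!r}")

conversation = [
    Message("system", "You are a helpful assistant."),
    Message("user", "What does OMF stand for?"),
]
# Plain dicts, ready to serialize as JSON for any provider gateway.
payload = [asdict(m) for m in conversation]
```

Because the object carries only role and content, the same array can be handed to any provider adapter without translation code, which is the interoperability claim the spec makes.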
        
       | lbotos wrote:
        | Advice: consider moving the "comparison box" higher up, as
        | it's the easiest and fastest way for a new reader to see the
        | current state. My first thought was "how many of the big
        | players are compatible with this?" and, surprisingly, many
        | are.
       | 
        | The second, larger uphill battle is that specs like this are
        | usually driven by a company or some sort of consortium. What
        | is your relationship to those orgs that would let you
        | influence or encourage adoption of the spec?
        
       | rollulus wrote:
       | I think it would be challenging to come up with a more ambiguous
       | name.
        
         | aayushwhiz wrote:
         | Hi, do you think it would be beneficial to specify that it is
         | for LLM interaction within the name itself?
        
       | ianbicking wrote:
       | Lots of broken links in the doc, though I guess the YAML file
       | specifies everything: https://github.com/open-llm-
       | initiative/open-message-format/b...
       | 
        | The metadata "tokens" field is a string [1]... that doesn't
        | seem right. Request and response tokens generally need to be
        | separated, as they are usually priced separately.
       | 
        | It doesn't specify how the messages have to be arranged, if
        | at all. Some providers force system/user/assistant/user...
        | with the user last. But strict requirements on message order
        | seem to be going away; a sort of Postel's Law adaptation,
        | perhaps.
       | 
       | Gemini has a way of basically doing text completion by leaving
       | out the role [2]. But I suppose that's out of the standard.
       | 
       | Parameters like top_p are very eclectic between providers, and so
       | I suppose it makes sense to leave them out, but temperature is
       | pretty universal.
       | 
       | In general this looks like a codification of a minimal OpenAI GPT
       | API, which is reasonable. It's become the de facto standard, and
       | provider gateways all seem to translate to and from the OpenAI
       | API. I think it would be easier to understand if the intro made
       | it more clear that it's really trying to specify an emergent
       | standard and isn't proposing something new.
       | 
       | [1] https://github.com/open-llm-initiative/open-message-
       | format/b...
       | 
       | [2] https://ai.google.dev/gemini-api/docs/text-
       | generation?lang=r...
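The token-accounting point can be illustrated with a sketch: rather than a single string field, usage metadata is typically split into integer prompt and completion counts. The names below mirror the OpenAI usage object and are an assumption about what the fix would look like, not the spec's own fields:

```python
# Sketch of split token accounting, assuming OpenAI-style names
# (prompt_tokens / completion_tokens); the spec's names may differ.
from dataclasses import dataclass

@dataclass
class Usage:
    prompt_tokens: int       # request side, often billed at one rate
    completion_tokens: int   # response side, often billed at another

    @property
    def total_tokens(self) -> int:
        return self.prompt_tokens + self.completion_tokens

    def cost(self, prompt_rate: float, completion_rate: float) -> float:
        """Cost given per-token rates for each side; a single
        string field could not support this calculation."""
        return (self.prompt_tokens * prompt_rate
                + self.completion_tokens * completion_rate)

u = Usage(prompt_tokens=120, completion_tokens=48)
```

Keeping the two counts as separate integers is what makes per-side billing (and aggregation across requests) possible downstream.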
        
         | un1imited wrote:
         | hey @ianbicking - thanks a lot for the feedback. I've merged a
         | change to fix the links [1].
         | 
         | > The metadata tokens is a string [1]... that doesn't seem
         | right. Request/response tokens generally need to be separated,
         | as they are usually priced separately.
         | 
          | You are right about the metadata. Request and response
          | tokens are billed separately and should be captured
          | accordingly. I've put up a PR to address that [2].
         | 
         | > It doesn't specify how the messages have to be arranged, if
         | at all. But some providers force system/user/assistant/user...
         | with user last. ...
         | 
          | We do assume the last message in the array is from the
          | user, but we are not enforcing it at the moment.
         | 
         | [1] https://github.com/open-llm-initiative/open-message-
         | format/p...
         | 
         | [2] https://github.com/open-llm-initiative/open-message-
         | format/p...
        
           | ianbicking wrote:
           | I've hit cross-LLM-compatibility errors in the past with
           | message order, multiple system messages, and empty messages.
           | 
            | Multiple system messages are kind of a hack to invoke
            | that distinct role in different positions, especially
            | the last position. I.e., the second-to-last message is
            | what the user said, and the last message is a system
            | message telling the LLM to REALLY FOLLOW THE
            | INSTRUCTIONS and not get overly distracted by the user.
            | (Though personally I usually rewrite the user message
            | for that purpose.)
           | 
            | Multiple user messages in a row are likely caused by
            | some failure in the system to produce an assistant
            | response, like no network. You could ask the client to
            | collapse those, but I think it is most correct to allow
            | them. The user understands the two messages as distinct.
           | 
           | Multiple assistant messages, or no trailing user message, is
           | a reasonable way to represent "please continue" without a
           | message. These could also be collapsed, but that may or may
           | not be accurate depending on how the messages are truncated.
           | 
           | This all gets even more complicated once tools are
           | introduced.
           | 
           | (I also notice there's no max_tokens or stop reason. Both are
           | pretty universal.)
           | 
           | These message order questions do open up a more meta question
           | you might want to think about and decide on: is this a
           | prescriptive spec that says how everyone _should_ behave, a
           | descriptive spec that is roughly the outer bounds of what
           | anyone (either user or provider) can expect... or a
           | combination like prescriptive for the provider and
           | descriptive for the user.
           | 
           | Validation suites would also make this clearer.
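A validation suite along those lines could start as a single function encoding the stricter providers' ordering rules, so callers can see exactly which constraints a message history violates. The rule set below is illustrative of the issues raised in this thread, not taken from the spec:

```python
# Illustrative validator for the stricter providers' ordering rules.
# Which of these a spec should actually mandate (prescriptive vs.
# descriptive) is exactly the open question discussed above.
def validate_order(messages: list[dict]) -> list[str]:
    """Return a list of rule violations; an empty list means the
    history satisfies the strictest common provider constraints."""
    problems = []
    if not messages:
        return ["empty message list"]
    roles = [m.get("role") for m in messages]
    if roles.count("system") > 1:
        problems.append("multiple system messages")
    if roles[-1] != "user":
        problems.append("last message is not from the user")
    if any(a == b == "user" for a, b in zip(roles, roles[1:])):
        problems.append("consecutive user messages")
    if any(not m.get("content") for m in messages):
        problems.append("empty message content")
    return problems

ok = [{"role": "system", "content": "Be terse."},
      {"role": "user", "content": "hi"}]
```

Returning a list of violations rather than raising lets a descriptive spec report everything at once, while a prescriptive gateway could simply reject any non-empty result.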
        
             | aayushwhiz wrote:
              | Yeah, I can completely see this. The goal was to cover
              | specifically the messages object, not a completions
              | object: in my experience, you usually send messages
              | from the front end to the backend, and then create the
              | completion request with all the additional parameters
              | when sending from the backend to an LLM provider. So
              | when just sending from an application to the server,
              | capturing only the messages object seemed ideal. The
              | design also tries to maximize cross-compatibility, so
              | it is not what the format "should be"; instead, it is
              | trying to be a format that everyone can adopt without
              | disrupting current setups.
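That split, where the client sends only the messages array and the backend attaches the provider-specific completion parameters, can be sketched as follows (the function and parameter names here are hypothetical):

```python
# Hypothetical backend step: wrap the client's vendor-agnostic
# messages array into a provider completion request, adding the
# parameters the client never needs to know about.
def build_completion_request(messages: list[dict],
                             model: str = "example-model",
                             temperature: float = 0.7,
                             max_tokens: int = 512) -> dict:
    """Merge a plain messages array with completion parameters."""
    return {
        "model": model,
        "messages": messages,        # passed through unchanged
        "temperature": temperature,  # completion-time knobs live
        "max_tokens": max_tokens,    # here, not in the message format
    }

req = build_completion_request([{"role": "user", "content": "hello"}])
```

Keeping temperature, max_tokens, and similar knobs out of the message format is what lets the same front-end payload target different providers.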
        
       | ianburrell wrote:
       | If these aren't generic messages, then I wouldn't use something
       | as generic as Open Message Format. I would suggest qualifying the
       | name. Open LLM Message Format, OLMF. Or LOMF. Or OLIMF. Something
       | to qualify the scope and advertise what it is for.
        
         | aayushwhiz wrote:
          | I see, thank you for the feedback. I will brainstorm on
          | the name. I originally thought "Open Message Format" was
          | somewhat catchy, but I can totally see how it is generic
          | and not as descriptive as it should be.
        
       | deepsun wrote:
       | Squat all Open-* names!
        
       | endofreach wrote:
       | Couldn't get past the thoughts about the naming after reading the
       | first paragraph...
       | 
       | I propose a better name: Simple Message Spec (SMS)!
        
       ___________________________________________________________________
       (page generated 2024-09-09 23:01 UTC)