[HN Gopher] Show HN: OpenLIT - Open-Source LLM Observability with OpenTelemetry
       ___________________________________________________________________
        
       Show HN: OpenLIT - Open-Source LLM Observability with OpenTelemetry
        
        Hey HN, we're excited to share something we've been working
        on: OpenLIT. After a preview that some of you might recall, we
        are now proudly announcing our first stable release!
         
        *What's OpenLIT?* Simply put, OpenLIT is an open-source tool
        designed to make monitoring your Large Language Model (LLM)
        applications straightforward. It's built on OpenTelemetry and
        aims to reduce the complexity of observing the behavior and
        usage of your LLM stack.
         
        *Beyond Basic Text Generation:* OpenLIT isn't restricted to
        text and chatbot outputs. It now includes automatic monitoring
        for GPT-4 Vision, DALL·E, and OpenAI Audio, so we're ready to
        support your multi-modal LLM projects through a single
        platform. And we're not stopping here; more updates and model
        support are on the way!
         
        *Key Features:*
        - *Instant Alerts:* Immediate insights on cost & token usage,
          in-depth usage analysis, and latency metrics.
        - *Comprehensive Coverage:* Supports a range of LLM providers,
          vector DBs, and frameworks - everything from OpenAI and
          AnthropicAI to ChromaDB, Pinecone, and LangChain.
        - *Aligned with Standards:* OpenLIT follows the OpenTelemetry
          Semantic Conventions for GenAI, ensuring your monitoring
          efforts meet the community's best practices.
         
        *Wide Integration Compatibility:* For those already using
        observability tools, OpenLIT integrates with various telemetry
        destinations, including the OpenTelemetry Collector, Jaeger,
        and Grafana Cloud, expanding your data's reach and utility.
         
        *Getting Started:* Check our quickstart guide and explore how
        OpenLIT can enhance your LLM project monitoring:
        https://docs.openlit.io/latest/quickstart
         
        We genuinely believe OpenLIT can change the game in how LLM
        projects are monitored and managed. Feedback from this
        community is invaluable as we continue to improve and expand,
        so if you have thoughts, suggestions, or questions, we're all
        ears. Let's push the boundaries of LLM observability together.
         
        Check out OpenLIT here: https://github.com/openlit/openlit
         
        Thanks for checking it out!
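[Editor's note] The cost-and-token metric mentioned under Key Features reduces to simple arithmetic over per-request usage counts. A minimal sketch of that calculation; the prices below are illustrative placeholders, not OpenLIT's actual pricing table:

```python
# Hypothetical per-1K-token prices (USD) for illustration only; a real
# monitor looks these up per model and keeps them up to date.
PRICING = {
    "gpt-4": {"prompt": 0.03, "completion": 0.06},
    "gpt-3.5-turbo": {"prompt": 0.0005, "completion": 0.0015},
}

def request_cost(model: str, prompt_tokens: int, completion_tokens: int) -> float:
    """Estimate the USD cost of one LLM request from its token usage."""
    price = PRICING[model]
    return (prompt_tokens / 1000) * price["prompt"] \
         + (completion_tokens / 1000) * price["completion"]

# Example: a GPT-4 call with 1000 prompt and 500 completion tokens.
print(round(request_cost("gpt-4", 1000, 500), 4))  # 0.06
```

Aggregating these per-request estimates over time is what turns raw token counts into the cost dashboards and alerts described above.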
        
       Author : aman_041
       Score  : 36 points
       Date   : 2024-04-26 09:45 UTC (2 days ago)
        
 (HTM) web link (github.com)
 (TXT) w3m dump (github.com)
        
       | talboren wrote:
       | How does this differ from Traceloop's Openllmetry?
       | https://github.com/traceloop/openllmetry
        
         | sshb wrote:
          | From what I can see, Openllmetry asks you to call the
          | tracer manually for non-default OpenAI libraries (i.e.
          | anything other than Python/Node.js) [1]
         | 
         | OpenLIT might be easier to integrate with any language that
         | supports OTEL for HTTP clients -- you just trace your HTTP
         | calls to OpenAI.
         | 
         | [1] https://www.traceloop.com/docs/openllmetry/getting-
         | started-r...
        
           | nirga wrote:
            | How does it do that for Ruby, for example? (which is in
            | the
           | link you provided). OTEL instrumentation for HTTP doesn't
           | instrument the body so you won't be able to see token usage,
           | prompts and completions. Or am I missing something?
        
       | Eridrus wrote:
       | It would be great for this to actually explain what sorts of
       | metrics are being computed here beyond what you get for free by
       | instrumenting the requests library.
       | 
       | From looking at the screenshots, it looks like it can monitor
       | number of tokens, which seems useful, but I'm not clear why that
       | needed a whole big project.
       | 
       | I feel like the stuff you actually want to monitor in prod for ML
       | that you don't get from infra monitoring are things that are not
       | trivial to drop in because you want a sense for how well the ML
       | components are working, which is generally pretty application
       | specific. Having a general framework for that seems useful, but
       | not really what we have here, at least for the moment.
       | 
        | Also, it just seems a bit weird for this to have its own UI.
       | Part of the point of OTEL is so that you can send all your
       | metrics to one place. Not totally possible all the time and
       | turning metrics into dashboards takes time, but the point of OTEL
       | seems to be to separate these concerns.
        
       | xyst wrote:
       | I browsed through the readme, and don't quite understand how LLM
       | or "GenAI" is supposed to help me with observability.
       | 
        | Pro tip: if you truly want to build an open-source community
        | around this, don't build it on proprietary chat platforms
        | (e.g. Slack). Slack in particular only keeps a small amount
        | of history.
        
         | chaos_emergent wrote:
         | I don't think it was ambiguous at all, perhaps you just don't
         | have the problem they describe?
         | 
         | They have an observability platform _for_ LLMs and other
         | generative AI services. Just like you can use observability
         | tools on services like databases or APIs, they've made an
         | observability tool for the aforementioned AI services and the
         | applications built with them. They use OpenTelemetry, a pretty
         | standard observability protocol, to interoperate with other
         | tools that ingest observability-oriented data.
        
       | asabla wrote:
        | So, do I understand this right: is this supposed to be a
        | centralized (in a way) application for collecting all
        | OpenTelemetry data related to LLMs? And is it supposed to
        | support various related services and/or frameworks today?
       | 
        | I see how this can be really useful. But at the same time,
        | how do you see yourselves differentiating from cloud-hosted
        | equivalents? (e.g. dashboard-like services in Azure and
        | similar).
       | 
       | Anyhow, interesting project. I'll keep an eye on it for future
       | use
        
       | jackmpcollins wrote:
       | Does the dashboard/UI support traces? I would love a tool in
       | which to view opentelemetry traces, that can neatly display full
       | prompt and response for the spans that represent LLM queries. I'm
       | planning to add opentelemetry instrumentation to magentic [1] and
       | looking for a UI that is easy to run locally that makes it easy
       | to see what an agent is doing (via OTEL traces). I have more of
       | my thoughts on the github issue:
       | https://github.com/jackmpcollins/magentic/issues/136
       | 
       | [1] https://github.com/jackmpcollins/magentic
        
         | Eridrus wrote:
         | You can add whatever span attributes you like to otel traces,
         | and then show those attributes in whatever UI you have (I use
         | Grafana).
        
       ___________________________________________________________________
       (page generated 2024-04-28 23:00 UTC)