[HN Gopher] Show HN: Arch - an intelligent prompt gateway built ...
       ___________________________________________________________________
        
       Show HN: Arch - an intelligent prompt gateway built on Envoy
        
        Hi HN! My name is Adil Hafeez, and I am the Co-Founder at Katanemo
        and the lead developer behind Arch - an open source project that
        helps developers build fast generative AI apps. Previously I worked
        on Envoy at Lyft.
         
        Engineered with purpose-built LLMs, Arch handles the critical but
        undifferentiated tasks related to the handling and processing of
        prompts, including detecting and rejecting jailbreak attempts,
        intelligently calling "backend" APIs to fulfill the user's request
        represented in a prompt, routing to and offering disaster recovery
        between upstream LLMs, and managing the observability of prompts
        and LLM interactions in a centralized way - all outside business
        logic.
         
        Here are some additional key details of the project:
         
        * Built on top of Envoy and written in Rust. It runs alongside
          application servers, and uses Envoy's proven HTTP management
          and scalability features to handle traffic related to prompts
          and LLMs.
         
        * Function calling for fast agentic and RAG apps. Engineered with
          purpose-built LLMs to handle fast, cost-effective, and accurate
          prompt-based tasks like function/API calling and parameter
          extraction from prompts.
         
        * Prompt guardrails to prevent jailbreak attempts and ensure safe
          user interactions without writing a single line of code.
         
        * Manages LLM calls, offering smart retries, automatic cutover,
          and resilient upstream connections for continuous availability.
         
        * Uses the W3C Trace Context standard to enable complete request
          tracing across applications, ensuring compatibility with
          observability tools, and provides metrics to monitor latency,
          token usage, and error rates, helping optimize AI application
          performance.
         
        This is our first release, and we would love to build alongside
        the community. We are just getting started on reinventing what we
        can do at the networking layer for prompts. Do check it out on
        GitHub at https://github.com/katanemo/arch/. Please leave a
        comment or feedback here and I will be happy to answer!
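        [Editor's note] The W3C Trace Context standard mentioned above
        fixes the shape of the `traceparent` header a gateway propagates.
        A minimal illustrative sketch of generating and parsing one
        (this is the spec's format, not Arch's actual code):

```python
import secrets

def make_traceparent() -> str:
    """Build a W3C Trace Context `traceparent` value:
    version-traceid-parentid-flags, all lowercase hex."""
    trace_id = secrets.token_hex(16)   # 16 random bytes -> 32 hex chars
    parent_id = secrets.token_hex(8)   # 8 random bytes -> 16 hex chars
    return f"00-{trace_id}-{parent_id}-01"  # version 00, sampled flag set

def parse_traceparent(value: str) -> dict:
    version, trace_id, parent_id, flags = value.split("-")
    if len(trace_id) != 32 or len(parent_id) != 16:
        raise ValueError("malformed traceparent")
    return {"version": version, "trace_id": trace_id,
            "parent_id": parent_id, "sampled": bool(int(flags, 16) & 1)}
```

        Because the header is standardized, any observability backend
        that speaks Trace Context can stitch the gateway's spans into an
        end-to-end trace.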
        
       Author : adilhafeez
       Score  : 10 points
       Date   : 2024-10-16 21:27 UTC (1 hour ago)
        
        
       | sparacha wrote:
       | Hey HN - my name is Salman and I am Adil's Co-Founder. Would
       | love to hear your feedback. Here is a link to our public
       | roadmap; please let us know if there are things you'd like us
       | to work on first:
       | 
       | https://github.com/orgs/katanemo/projects/1
        
         | adilhafeez wrote:
         | You can also see the current list of issues at
         | https://github.com/katanemo/arch/issues, and post new feature
         | requests and bug reports there.
        
       | lionkor wrote:
       | Hi, I'm curious how preventing jailbreaks protects the _user_?
       | 
       | > Prompt guardrails to prevent jailbreak attempts and ensure safe
       | user interactions [...]
        
         | adilhafeez wrote:
         | Jailbreak detection ensures a smooth developer experience by
         | controlling what user traffic makes its way to the model. With
         | jailbreak detection (and other guardrails soon to be added),
         | developers can short-circuit responses, and with observability
         | they can get insights into how users are interacting with
         | their APIs.
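         [Editor's note] A hypothetical sketch of the short-circuit
         behavior described here (not Arch's actual code; the classifier
         and threshold are illustrative): a guardrail scores the prompt,
         and a flagged request is answered at the gateway without ever
         reaching the upstream LLM.

```python
def handle_prompt(prompt, classify_jailbreak, call_llm, threshold=0.8):
    """Gateway-style guardrail: score the prompt with a small,
    purpose-built classifier and short-circuit flagged requests
    before the upstream LLM is called."""
    if classify_jailbreak(prompt) >= threshold:
        # Canned refusal; the model never sees the prompt.
        return {"status": 400, "body": "Blocked by prompt guardrails."}
    return {"status": 200, "body": call_llm(prompt)}
```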
        
         | sparacha wrote:
         | That's a fair point - technically it protects the application
         | from malicious attempts to subvert the desired LLM experience.
         | The more specific language (and I think we could do better
         | here) would be that Arch ensures users remain within the bounds
         | of an intended LLM experience. That at least was the intention
         | behind "ensure safe user interactions"...
        
         | harlanlewis wrote:
         | Untrusted inputs to systems with agency or access to privileged
         | data. Here's a data exfiltration example in Google AI Studio:
         | 
         | https://x.com/wunderwuzzi23/status/1821210923157098919
        
       | debarshri wrote:
       | Lately, I have seen a few gateways around LLMs - namely
       | openrouter, portkey.ai, etc.
       | 
       | My key question is: who is the ideal customer for a proxy or
       | gateway like this? Why couldn't it be an extension or plugin of
       | existing LBs, proxies, etc.?
        
         | sparacha wrote:
         | Two things
         | 
         | 1/ Arch builds on Envoy so that we don't re:invent all the
         | HTTP(s)/TCP level capabilities needed in a modern gateway for
         | applications. So in that sense, we agree with you that it
         | should "extend" something vs. rewriting the whole stack. There
         | are several security and robustness guarantees that we borrow
         | from Envoy as a result of this. To be more specific, a lot of
         | Arch's core implementation today is an Envoy filter written in
         | Rust.
         | 
         | 1/ Arch's core design point is around the handling and
         | processing of prompts, which we believe are nuanced and opaque
         | user request that require secure handling, intelligent routing,
         | robust observability, and integration with backend (API)
         | systems for personalization - all outside business logic. This
         | requires the use of models and LLMs that are fast, cheap and
         | capable to help developers stay focused on application
         | features. For example, Arch uses (fast) purpose-built LLMs for
         | jailbreak detection, converts prompts into API semantics for
         | personalization, and (eventually) automatically routing to the
         | best outbound LLM based on the complexity of a prompt to
         | improve the cost/speed of an app.
         | 
         | We believe #2 will continue to be different and evolve further
         | away from traditional API/HTTP routing that it will require
         | constant invention and work to make the lives of developers
         | easy.
         | 
         | Hope this helps!
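         [Editor's note] The "smart retries, automatic cutover" behavior
         the post describes amounts to something like the following
         sketch (hypothetical, not Arch's implementation; the provider
         callables and `TransientError` type are illustrative):

```python
import time

class TransientError(Exception):
    """A retryable upstream failure (timeout, 429, 5xx, ...)."""

def call_with_cutover(prompt, providers, max_retries=2, base_delay=0.1):
    """Try each upstream LLM in order; back off and retry transient
    failures, then cut over to the next provider in the list."""
    last_err = None
    for call in providers:                 # e.g. [call_openai, call_mistral]
        for attempt in range(max_retries + 1):
            try:
                return call(prompt)
            except TransientError as err:
                last_err = err
                time.sleep(base_delay * (2 ** attempt))  # exponential backoff
    raise last_err  # every provider exhausted
```

         Doing this in the gateway, rather than in each app, gives every
         service the same availability behavior without duplicated code.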
        
         | retrovrv wrote:
         | I'm affiliated with Portkey, so can answer who would need such
         | a proxy/gateway:
         | 
         | Sidenote: Arch is def interesting!
         | 
         | A typical user we've seen at Portkey is a mid- or large-size
         | eng org where a central "Gen AI team" has now come up. This
         | Gen AI team builds services that the rest of the company uses
         | to build whatever AI features or products they want.
         | 
         | To build such a service, they need traditional API Gateway
         | features like rate limiting, access rules, and also AI-specific
         | features like universal API to multiple LLM providers,
         | universal routing, central guardrails, AI-native observability
         | + central dashboard for other stakeholders, and more.
         | 
         | It can absolutely be a plugin on top of existing gateways -
         | we've explored putting Portkey on Kong - but the need for a
         | dedicated AI gateway remains: one that can do all of the
         | things I described in an easier way.
         | 
         | Solutions like Langchain/Llamaindex etc. probably also fit in
         | somewhere here, but a dedicated service for "ops"-related
         | issues around LLM APIs is something we're seeing orgs adopt
         | as a good practice.
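         [Editor's note] Of the traditional gateway features listed
         above, rate limiting is the most self-contained to sketch. A
         minimal token-bucket limiter (illustrative only, not how
         Portkey or Arch implement it):

```python
import time

class TokenBucket:
    """Allow `rate` requests per second, with bursts up to `capacity`."""
    def __init__(self, rate, capacity):
        self.rate, self.capacity = rate, capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

         An AI gateway would typically key such buckets per tenant or
         per API key, and often meter tokens rather than requests.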
        
       | edude03 wrote:
       | Envoy is legendary in (dev)ops circles, but I don't understand
       | what it lends to the AI space. I feel like building a separate
       | backend service that runs behind envoy would make more sense but
       | that's just me.
        
         | sparacha wrote:
         | We agree Envoy is legendary - and per se doesn't lend anything
         | to the AI space. That's essentially what we are doing here,
         | building on top of Envoy to add capabilities specifically for
         | AI and prompts. For instance, we use Envoy's filtering
         | capabilities to handle and process prompts - this was get to
         | keep all the robustness and security features for TCP/HTTP from
         | Envoy and solve the critical but undifferentiated tasks related
         | to prompts like safety, observability, routing, function
         | calling, etc.
        
       | crb wrote:
       | Tetrate and Bloomberg want to contribute their code to Envoy to
       | create "Envoy AI Gateway", similarly to how there is an "Envoy
       | Gateway" spec. Do you see this as being complementary or
       | competitive with your work?
       | 
       | https://tetrate.io/press/tetrate-and-bloomberg-collaborate-o...
        
         | sparacha wrote:
         | It's early days, so while there might be some overlap, I am
         | sure there is a lot that we can do together to build
         | complimentary products.
         | 
         | Based on the press release, its kinda hard to tell exactly how
         | different/alike we will be, but Arch will always be "designed-
         | first" for prompts and LLM application workloads without
         | exposing all Envoy related features. And Envoy is "designed-
         | first" for micro-services application workloads. So there will
         | be some overlap but our design principles will deviate over
         | time I feel. But we are very open to collaborating with the
         | community here...
        
       | 3np wrote:
       | That's quite an overloaded name... Good luck with SEO for "how do
       | i run arch on linux?" :p
        
         | sparacha wrote:
         | Yea - that's gonna be an issue. It was a hotly debated name,
         | but we figured SEO will adapt with GenAI, and the way we know
         | SEO today might not be a problem. Let's see - tricky for sure.
        
       | dang wrote:
       | Offtopic technical note: I've created a new post for this because
       | the previous one (https://news.ycombinator.com/item?id=41801315)
       | was old enough to fall out of the ranked stories on HN.
       | 
       | We picked it for the second-chance pool
       | (https://news.ycombinator.com/item?id=26998308) when it was
       | already several days old, and by the time the thread got going,
       | it basically got evicted from cache. This is a manual workaround
       | to correct that. Sorry all!
       | 
       | I've moved the comments from the other thread hither, which is
       | why most of them are hours older than the current submission is.
        
       | hahdflakdfwdasd wrote:
       | Since it's an Envoy filter, it was trivial to set up in Istio
       | Ingress. Good stuff.
        
         | sparacha wrote:
         | Great to hear! Can you walk us through how you are planning
         | on using the filter? The filter today communicates with a
         | model_server service for the intelligence piece. Were you
         | thinking of using the filter for outbound LLM traffic
         | management?
        
         | adilhafeez wrote:
         | You are right - since Arch is an ingress Wasm filter, it can
         | be set up inside Istio just like any other Envoy filter. You
         | would need to pass the arch_config somehow, which should be
         | easy. We will have samples/demos for Istio and K8s deployments
         | sometime in the future. If you want us to focus on this area
         | more, please go ahead and create an issue on our issues page
         | at https://github.com/katanemo/arch/issues
        
       ___________________________________________________________________
       (page generated 2024-10-16 23:01 UTC)