[HN Gopher] Connect-Web: It's time for Protobuf/gRPC to be your ...
___________________________________________________________________
Connect-Web: It's time for Protobuf/gRPC to be your first choice in
the browser
Author : slimsag
Score : 126 points
Date : 2022-08-04 17:20 UTC (5 hours ago)
(HTM) web link (buf.build)
(TXT) w3m dump (buf.build)
| newaccount2021 wrote:
| dathinab wrote:
| But protobuf isn't great; proto3 especially is tuned exactly
| for Google's needs, tools and approaches. But Google's needs,
| tools and approaches don't fit most smaller companies.
|
| And gRPC adds a bunch of additional complexity and failure
| modes. Sure, it's worth considering, especially for larger
| projects. But a lot of projects get by just fine with a
| surprisingly small number of endpoints.
| akshayshah wrote:
| For the most part, we agree! Protobuf isn't perfect. gRPC is
| _definitely_ optimized for megacorporation-sized problems.
|
| No matter who you are, though, it's not fun to hand-write
| boilerplate data types and fetch calls or quibble over which
| parameters should go in the path and which should be query
| params. Even when you're actually sending JSON on the wire and
| using regular HTTP/1.1 POSTs, Protobuf and connect-web make it
| easy to skip right to the interesting part of your code.
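|
| As a rough sketch (the generated ElizaService import and the
| demo URL are placeholders, so treat this as illustrative), a
| unary call with connect-web looks something like this:
|
|     import {
|       createConnectTransport,
|       createPromiseClient,
|     } from "@bufbuild/connect-web";
|     // Hypothetical output of the connect-web code generator:
|     import { ElizaService } from "./gen/eliza_connectweb";
|
|     const transport = createConnectTransport({
|       baseUrl: "https://demo.connect.build",
|     });
|     const client = createPromiseClient(ElizaService, transport);
|     // Plain objects in, plain objects out - no hand-written
|     // fetch or JSON plumbing.
|     const res = await client.say({ sentence: "Hello" });
|     console.log(res.sentence);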
| boardwaalk wrote:
| It looks like the Connect protocol supports client streaming,
| which gRPC-Web doesn't, but the only way for a server to speak
| connect-web is to write it in Go and use their libraries?
|
| What about servers in other languages? Are there any plans
| for, e.g., Envoy support for going from Connect on the
| frontend to gRPC on the backend? Like gRPC-Web to gRPC?
| akshayshah wrote:
| The gRPC-Web _protocol_ supports streaming - it supports
| everything that standard gRPC does.* The client libraries that
| operate in web browsers don't support client streaming, because
| every major browser currently buffers request bodies.
|
| Right now, you're correct - the easiest way for another server
| to speak the Connect protocol is to use connect-go. We're
| planning to support more backend languages soon: Node.js is up
| next, with the JVM maybe afterwards. We'd also love to
| contribute (time, code, review, whatever) to any other backend
| implementations - the protocol is not that complex. File an
| issue on one of our projects or ping us in our Slack, it'll
| probably be an easy sell.
|
| Envoy support is complicated. The honest truth is that we
| haven't looked into it all that much because it feels rude -
| Envoy's architecture requires first-party plugins to live in
| the main repo, and it feels impolite to attempt a C++ code dump
| for our new project. Maybe once Connect takes the world by
| storm :) Even if we did do that, a Connect-to-gRPC translation
| layer would only work for binary payloads - converting JSON to
| binary protobuf requires the schemas, and redeploying proxies
| every time the schema changes is an operational nightmare.
|
| * Worth noting that gRPC-Web doesn't really have a
| specification. There's a narrative description of how gRPC-Web
| differs from standard gRPC, but they specifically mention that
| the protocol is defined by the reference implementation in
| Envoy.
| boardwaalk wrote:
| Thank you for the detailed reply!
|
| Random thoughts: I personally would not mind requiring
| Protobuf payloads so the proxy portion can stay dumb --
| keeping proxies up to date sounds no fun. Losing
| inspectability (not having JSON) is a minus but not a deal
| breaker. Requiring servers to use a new library is a big lift
| for people, and hard to sell if what you have works. Having a
| proxy you can drop in the meantime, while getting client
| streaming at the same time, seems nice. In any case, having
| the shiny Typescript friendly libraries that can easily
| switch between gRPC-Web and Connect is nice, so thanks for
| that.
|
| For what it's worth, our servers are C++ and use Arrow
| Flight, which is why I'm even interested in client streaming
| at all (its DoPut method has a streaming argument). My
| current solution
| is to add a regular gRPC method that behaves similarly to
| DoPut but is a unary call for browser clients. Not terrible,
| but it'd be nice to not have to do that.
|
| I may well take you up on filing an issue/pinging y'all on
| Slack.
| rektide wrote:
| If the browser folk had ever made HTTP Push available to web
| pages, such that actual grpc could run, I'd agree. gRPC and
| the web are at an impasse versus browser makers who are
| resistant to new capabilities & who gave up on push as
| unsuccessful after never letting us try. Here's a good recent
| (4 year old) issue; it links back to much older issues.
|
| https://github.com/whatwg/fetch/issues/607
|
| The rest of the protobuf world has gotten so much better. But
| the grpc-on-the-web hacks feel bad. The awful option of
| running http3-over-webtransport-over-http3 is becoming
| alluring given the lack of browser care for the basics, and
| that's terrifying (it means the page loading & running its own
| in-page/userland http3 stack). But there doesn't seem to be
| room for progress & innovation & even basic continuity/ways
| forward, with HTTP Push getting shoved out the airplane door.
|
| The work here to build yet another gRPC-like protocol to work
| around those limitations is heroic. It's unfortunate that
| protocol development in general is hamstrung by these awkward,
| longstanding limitations in the browser.
| akshayshah wrote:
| From my limited perspective, it'd certainly be nice if browsers
| exposed more of HTTP. I also appreciate that the web platform
| is far larger, and far more backward-compatible, than any code
| I've ever even considered writing. If browser makers are slow
| to expose new protocol features, that's a cost I'm happy to
| pay.
|
| At least as I understand it, HTTP Push isn't the problem. The
| key problem is that browsers don't want to expose HTTP
| trailers. I understand their reluctance - even putting aside
| the security concerns, it's a huge semantic change. Many HTTP
| response headers, like etags, are typically computed by
| buffering the response, running some compute, then sending
| headers and the buffered body. All of those would, presumably,
| now need to also be sendable as trailers. What if the server
| sends both an etag header and trailer, and they don't match?
| The rabbit hole goes pretty far. Here's the Chromium issue
| where the gRPC and Chrome teams debate the issue:
| https://bugs.chromium.org/p/chromium/issues/detail?id=691599.
| There's also a series of whatwg issues about exposing trailers
| in the fetch API: https://github.com/whatwg/fetch/issues/34.
|
| For a protocol that sits above L7, the gRPC HTTP/2 protocol is
| liberal about diving past the semantics outlined in RFC 9110
| and specifying transport-level details. (As an example,
| trailers-only responses must be sent in a _single_ HTTP/2
| headers frame with the EOS bit set.) That degree of specificity
| unlocks some performance gains, but it makes it near-impossible
| to speak this protocol using an existing HTTP library - a
| traditional strength of building on top of L7. For broad
| adoption in a _lot_ of environments, gRPC-Web (and Connect) are
| IMO much better protocols.
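|
| For unary calls, for example, a Connect request is just a POST
| that any HTTP client can make - no trailers, no binary
| framing. Roughly (the demo service path here is from memory,
| so treat it as illustrative):
|
|     // Call a Connect endpoint with nothing but fetch. The
|     // body is plain JSON; the RPC is identified by its path.
|     const res = await fetch(
|       "https://demo.connect.build/" +
|         "buf.connect.demo.eliza.v1.ElizaService/Say",
|       {
|         method: "POST",
|         headers: { "Content-Type": "application/json" },
|         body: JSON.stringify({ sentence: "Hello" }),
|       },
|     );
|     console.log(await res.json()); // { "sentence": "..." }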
| maerF0x0 wrote:
| For those griping about using JSON manually with their APIs ...
| This tool is a big help for working with gRPC (at least from
| the CLI)
|
| https://github.com/fullstorydev/grpcurl
| lotw_dot_site wrote:
| No demo. I was wondering what kinds of things I could be doing
| here that I couldn't do before (with websockets, webrtc, etc) but
| demo.connect.build just redirects to connect.build. I'd only
| known of "Protocol Buffers" as something that was internal to
| Google, but never knew exactly to what end.
| akshayshah wrote:
| The little demo application is on the connect.build homepage
| and on https://connect.build/demo. I completely forgot to fix
| the redirect from demo.connect.build to point to the actual
| demo page - thank you for reminding me :)
|
| Protocol Buffers are just a schema language. They're
| spiritually akin to OpenAPI, Thrift, Avro, and a bunch of other
| contract-type things. They're _also_ a binary serialization
| format that's usually faster and more compact than JSON. The
| intention is to use Protocol Buffers as a language-independent
| way of describing data bags and network APIs, and then use that
| schema/contract to generate code in the language you actually
| work in.
|
| It's not that you _can't_ do that with websockets, webrtc, or
| just RESTful HTTP/1.1 - it's that you shouldn't _need_ to. The
| hand-written code to define the same data bag
| classes/interfaces/structs/etc in multiple languages, then wire
| them up to the same HTTP boilerplate isn't difficult or
| interesting, it's just toil. For all the times you're just
| calling an API, Protobuf lets you get on with the interesting
| work and skip the boilerplate. Ideally, things work well even
| if you're writing client code in a language that the server
| team doesn't have any expertise in.
|
| If you're doing something unusual and need
| webrtc/websockets/etc, go for it!
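|
| As a small sketch of what that looks like with our protobuf-es
| runtime (the user.proto schema and the generated import path
| are made up for illustration):
|
|     // Given a schema like:
|     //
|     //   syntax = "proto3";
|     //   message User {
|     //     string name = 1;
|     //     uint32 age  = 2;
|     //   }
|     //
|     // codegen hands you the data bag and its serialization:
|     import { User } from "./gen/user_pb";
|
|     const u = new User({ name: "Ada", age: 36 });
|     const bytes = u.toBinary();  // compact binary protobuf
|     const back = User.fromBinary(bytes);
|     const json = u.toJson();     // or interoperable JSON
|     console.log(back.name, json);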
| lotw_dot_site wrote:
| Nevermind, I see the demo is the Eliza widget on the main page.
|
| So, I can do Eliza with it then...
|
| I'll keep that in mind for the next time that I need to get an
| Eliza widget working ;)
| [deleted]
| fizx wrote:
| There seem to be two groups of protobuf libraries for HTTP/1.1,
| the ones that want to make it look like binary+HTTP/2 grpc (grpc-
| web, grpc-gateway), and the ones that make it look like idiomatic
| web tech (twirp).
|
| I'm excited to see more ideas in the second category.
| jrockway wrote:
| grpc-gateway is actually in the second class. It allows you to
| annotate your protobuf so as to create a JSON + HTTP method
| based API (people call this "REST") that is translated to gRPC
| for your actual backend.
|
| (gRPC/Web is protobufs over HTTP/1.1. Different from strict
| HTTP/1.1 transcoding, though, which is also possible for gRPC
| to do.)
|
| I think grpc-gateway is absolutely what 99% of users want to
| do. You can generate client stubs (it will even output an
| OpenAPI spec) like you can do for gRPC + protos, but it's all
| HTTP/1.1 + JSON over the wire, so if you really want to hand
| craft curl requests, you can. (But with an OpenAPI spec,
| you'll probably prefer just plugging it into some GUI tool to
| make these one-off requests. Even curl diehards like myself.)
| grpc-gateway
| is also up front about not supporting things browsers can't do,
| like bidirectional streaming. gRPC/Web always disappointed me
| on that front; "coming soon" for years.
|
| In the past, I used grpc-gateway (in process, some minor
| downsides compared to running the gateway as a separate
| process) + gRPC for my service's internal admin API. It was
| very few lines of code whenever you wanted to add a new
| endpoint, and because of the OpenAPI spec, it became available
| for use in our admin GUI as soon as you pushed the new server
| code. If someone wanted to add a new endpoint, there was
| basically no tedium: edit the proto, run "go generate ...",
| implement the generated interface with your actual
| functionality, and push to production. We embedded the
| generated OpenAPI spec in the doc, so you could just point at
| app.example.com/OpenAPI.json and clients would auto-update to
| see your new endpoint.
|
| I actually converted in-place a hand-coded REST API; all the
| requests and responses and endpoints stayed the same (thanks to
| the power of the grpc-gateway annotations), but I deleted all
| the code in every handler that did the marshaling and
| unmarshaling. The result was something that was very easy to
| add to and maintain. I liked it a lot and would do it again
| without hesitation. (We used Retool as the GUI. Not sure I'd
| recommend it, but it worked fine. I've seen some HN posts about
| open source alternatives, those look awesome.)
|
| I have also written several apps that use gRPC/Web. I didn't
| find the experience bad, but the Javascript bundle size is
| huge, so I never felt great about it. grpc-gateway should be
| the first thing you look at, your server code will be simple,
| and your client code will be small. And hey, if you have non-
| web clients, plain gRPC is great too.
| mf192 wrote:
| Curious, do you see connect-web falling into the former or the
| latter group?
| time4tea wrote:
| I just love how a zero byte protobuf message deserialises into a
| valid struct. Very reassuring.
| net_ wrote:
| In the other direction, the "all fields at default value
| serializes as size 0 message that you can't send" behavior has
| been an unending annoyance since switching to proto3.
| jeffbee wrote:
| That's the example I always use for the people who say that
| protocol buffers are "type-safe" which they emphatically are
| not. Any empty buffer will successfully decode as any message
| that has only optional fields!
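|
| Concretely, a sketch with protobuf-es and a hypothetical
| generated User message (other runtimes behave the same way):
|
|     import { User } from "./gen/user_pb"; // any generated type
|
|     // Zero bytes on the wire still "parses" successfully:
|     const ghost = User.fromBinary(new Uint8Array(0));
|     console.log(ghost.name); // "" - every field at its default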
| 0x457 wrote:
| Well, isn't it still type-safe then? If every field is
| optional, then a zero-byte message is a totally valid message.
| The issue is that every field is optional, and there are no
| required fields at all.
|
| I'm very annoyed that all fields are optional. Which means I
| have to do additional validation on the server side, and in a
| language like Rust it's doubly annoying because now you have
| `T` or `Option<T>` and it's confusing for some types:
|
| - `string` will be Rust's `String`, except it is an empty
|   string by default
| - All "custom" types are `Option<T>`, even if that makes no
|   sense semantically
|
| So now you have to check if a string is empty, which I guess
| is technically a string anyway. However, now you have no idea
| whether it is an empty string as a value or a lack of value.
| You end up wrapping `string` into a `StringValue`, and now the
| linter complains that you shouldn't use those.
|
| Overall, the parser for protobuf messages got simpler, but
| enforcing safety is now on the server's implementation.
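|
| A tiny TypeScript sketch of the same ambiguity, assuming the
| usual shape of a proto3-generated type:
|
|     // `name` is always a string, defaulting to "", with no
|     // way to represent "unset" in the type itself.
|     interface User {
|       name: string;
|     }
|
|     const fromWire: User = { name: "" };
|     // Empty on purpose, or never set? The type can't tell
|     // you, so the server ends up doing ad-hoc validation or
|     // wrapping the field in a StringValue.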
| dathinab wrote:
| the idea of protobuf is "init to default value and then
| incremental merge message fragments".
|
| Which is even crazier than missing parts having a default
| value, when you consider how that behaves in some cases of
| mismatched schemas (it can also introduce annoying
| limitations when trying to create idiomatic libraries in some
| languages, especially ones with "type-safety").
|
| And then add in some of the "automatic type conversions" it
| does.
|
| And to top it off, some edge cases tend to diverge between
| implementations in different languages, no matter what the
| original intention was.
|
| And the result is uh, just uh, very very uh.
|
| I understand why Google uses them; it fits them. It's
| probably a good idea given Google-scale problems, company and
| team structure, and the "skill level" of each member.
|
| But that doesn't mean it's a good fit for everyone.
| jeffbee wrote:
| To be fair to protobuf, its own docs make no claims about
| "safe". That is projected upon it by people who do not
| understand it.
| bjt2n3904 wrote:
| "It's time for..."
|
| Good grief. Pretentious much?
| dboreham wrote:
| Also a reliable indicator that it was never time.
| mosdl wrote:
| Has anyone used their protobuf-es library? Seems interesting and
| better than the other libraries from a quick glance. I wonder if
| there are bazel rules for it.
| akshayshah wrote:
| Probably not - we wrote it alongside connect-web, so it also
| launched very recently :)
|
| We do think it's the best option for protobuf code generation
| in TypeScript, though. It comes from quite a few years of
| experience with proto & TS: the primary author is also the
| author and maintainer of protobuf-ts, and we had the author of
| ts-proto review it and give us a thumbs-up before launch.
|
| There _could_ be Bazel rules for it - we certainly write enough
| other Bazel integrations. Mind filing an issue on
| https://github.com/bufbuild/protobuf-es?
| mosdl wrote:
| Sure, opened an issue.
| masukomi wrote:
| protobuffers are no end of pain for me at work.
|
| Yes, the guarantee of no breaking changes in an API is wonderful.
| Everything else sucks.
|
| Not being able to use JSON to test your api? sucks.
|
| Having to convert the changed .proto files to a new library for
| your language (gem for ruby in my case) and then have your system
| use that instead of the latest release (because you haven't
| tested it yet) sucks.
|
| A terrible ruby library written by people who were obviously not
| ruby devs (it makes FooServiceService classes for example)?
| sucks.
|
| hoop-jumping to force a breaking change in something you thought
| was good, but isn't and hasn't been released yet? Sucks
|
| horribly degraded iteration loops on new functionality? sucks
|
| not being able to host it anywhere that doesn't support HTTP2
| (Heroku)? sucks
|
| Unless you're working on something at google scale and the
| performance difference and bandwidth savings are actually
| noticeable RUN FOR THE HILLS. Protobuffers will bring you nothing
| but pain.
| ghayes wrote:
| You missed my biggest gripe: the Golang-influenced decision
| that "zero strictly equals nil." This decision is
| non-idiomatic in, honestly, most languages.
| FridgeSeal wrote:
| It's especially awful because protobufs will incorrectly
| parse non-matching messages, filling in blank and mismatched
| fields, instead of just failing out.
|
| Had that happen more than once, and it made for an extremely
| confusing situation.
| shadowgovt wrote:
| It makes a lot of sense for a wire format to save space by
| having a default value and omitting the field when its value
| is the default, and by standardizing on a specific default,
| the protobuffer protocol avoids developers having to wonder
| what the default is for every field.
|
| I agree that it can be a sharp edge for the tool if you don't
| see it coming... It can be convenient to deserialize a
| protobuffer into an in-memory representation for working with
| it. Proto doesn't do this for you because it would be
| unnecessary overhead if your code can just work with the
| protobuffers as-is, and protobuffer errs on the side of
| speed.
| franciscop wrote:
| Agreed, also with Node.js; we didn't even see meaningful
| improvements in data size for our use case without double
| encoding. The size of fairly repetitive data with JSON+GZIP
| was on the same order of magnitude as the raw data with
| protobuffers. To be fair, the protobuffers included some
| unwanted fields, but decoding them was already expensive
| enough; were we really going to decode AND re-encode them? No
| way: decode, slice in JS, and send the JSON. With GZIP it was
| the same size.
| [deleted]
| shadowgovt wrote:
| Honestly, that's one of the nicest things about JSON. While
| it has a lot of on the wire redundancy, it is a very
| compressible shape of redundancy.
| hardwaresofton wrote:
| I know it's unlikely, but has anyone ever tried using protobuf
| + HTTP/1/2/3 with `Content-Type`? It's an experiment I've been
| wanting to run forever but haven't found time to.
|
| I wonder if all the speed gains people are seeing would work
| just fine with reasonable `Content-Type` and maybe an extra
| header for intended schema.
| akshayshah wrote:
| Couldn't agree more, especially about the un-idiomatic
| libraries in many languages and gRPC's insistence on end-to-end
| HTTP/2 and trailer support. All of this is, in essence, what
| we're trying to solve with the Connect family of libraries.
|
| Eventually, we'd like you to have `connect-rails` (or maybe
| `connect-rack`, I'm an awful Ruby programmer). It would
| automatically support the gRPC-Web and Connect protocols, and
| hopefully standard gRPC too (no idea if rails/rack/etc supports
| trailers). It should feel like writing any other HTTP code, and
| it should fit right into any existing web application. We'd
| probably end up writing a new protobuf runtime for Ruby, so the
| generated classes feel idiomatic. Now your Ruby code is
| accessible to curl, bash scripts, and plain fetch from the
| browser, but it's also accessible to a whole ecosystem of
| clients in other languages - without anybody writing the boring
| plumbing code over and over.
|
| Protobuf and gRPC aren't perfect, by any stretch, but they're
| effective. At least IMO, they're also past critical mass in
| medium-to-large systems: Cap'n Proto _is_ nicer and better-
| designed, but it doesn't have the adoption and language support
| that Protobuf has (imperfect as the support may be). Once your
| backend is gRPC and protobuf, it's really convenient to have
| that extend out to the front-end and mobile - at least IMO,
| it's just not worth the effort to hand-convert to RESTful JSON
| or babysit a grpc-gateway that needs to be redeployed every
| time you add a field to a message.
| kitsunesoba wrote:
| It's also increasingly common for languages and platforms to
| come with decent JSON parsing capabilities, which is a huge
| point in JSON's favor in my eyes -- fewer dependencies to
| wrestle with is always welcome.
| aseipp wrote:
| Also, even if you _do_ want to have some kind of binary-ized
| transport with a strict type-checked schema and compilers for
| multiple languages -- ultimately a noble goal, I think -- there
| are much better options available, design and technology wise.
| Cap'n Proto is wonderful, with a very strong foundational
| design and a good implementation, its major drawback being a
| relatively sparse set of working languages and a lack of
| tooling. But then again, the quality of 3rd party Protocol
| Buffers libraries has always been a complete gamble (test it
| until you break it, fork it and patch it to make do), and as
| you note the tools aren't all there for gRPC either.
|
| So in general I completely agree. If you are exposing a Web API
| to consumers -- just keep doing RPC, with HTTP APIs, with
| normal HTTP verbs and JSON bodies or whatever. Even GraphQL,
| despite having a lot of pitfalls, and being "hip and fancy",
| still just works normally with cURL/JSON at the end of the day
| because it still works on these principles. And that matters a
| lot for consumers of Web APIs where some of your users might
| literally be bash scripts.
| agotterer wrote:
| Take a look at Twirp (https://github.com/twitchtv/twirp) open
| sourced by TwitchTv. It's a lot lighter weight than gRPC. It
| does use Protobufs but addresses some of the concerns you
| mentioned: you can test with JSON payloads, it works over
| HTTP/1.1 and HTTP/2, it has good client libraries, and it
| doesn't require a proxy.
|
| They address your concerns in more detail in the Twirp release
| announcement (2018) -
| https://blog.twitch.tv/en/2018/01/16/twirp-a-sweet-new-rpc-f...
| akshayshah wrote:
| Twirp is excellent. Their protocol is _very_ similar to the
| Connect protocol for unary RPCs.
|
| However, the semantics that Twirp exposes are different from
| gRPC. That means that it's impossible (or at least _very_
| awkward) to have the same server-side code transparently
| support Twirp and gRPC clients. The Twirp ecosystem is
| smaller, and has more variable quality, than the gRPC
| ecosystem; when you choose Twirp for your server, you're
| betting that you'll never want a client in a language where
| Twirp is painfully bad.
|
| Our hope is that the Connect protocol appeals to the same
| folks who like Twirp, and that the server-side Connect
| implementations make it easier to interop with the gRPC
| ecosystem.
| dontlaugh wrote:
| We use it at work, it does indeed address all of the
| concerns.
|
| The problem is gRPC, not protobuf.
| ohCh6zos wrote:
| The only really nice thing I can say about protobuffers is that
| they've caused less trouble than gRPC has caused.
| francislavoie wrote:
| Seconded, for PHP. It's such a nightmare to work with.
| halfmatthalfcat wrote:
| gRPC-web just isn't there yet but appreciate the work in the
| space. There's also an issue with a proliferation of
| transpilation (protoc) projects:
|
| - grpc-web (the standard binary protocol) with the optional TS
| emit
|
| - grpc-web-text with the optional TS emit
|
| - grpc-ts
|
| - ts-proto
|
| Which to use when, why and the issues with interop between
| backend/frontend server implementations. Also the streaming story
| isn't great. Also the need for Envoy as another piece of infra to
| manage isn't great either.
| lucas_codes wrote:
| What is grpc-web-text?
| mf192 wrote:
| You're right, gRPC-web isn't there and likely won't get there.
| That's why we need something like connect-web.
|
| - no proxies
| - smaller bundle size
| - idiomatic TS code in both runtime and generated code
| - streaming
| - ... and more
| FpUser wrote:
| Been using POST-only, JSON-based RPC of my own cooking for
| ages. Worked well for me and my clients.
| victorvosk wrote:
| or... https://github.com/t3-oss/create-t3-app
| hmillison wrote:
| I have used protobufs and grpc on the web before. Maybe this
| project is the magic bullet that makes it easy to do, but in
| the past the TypeScript support from Google and 3rd parties
| (improbable eng) was lacking and there was little interest in
| making simple changes to make it easier to use.
|
| On top of that, the increase in JS shipped to users due to the
| size of the generated protobuf objects was unacceptable for
| anything that cares about keeping its bundle size manageable.
| Have you done anything to address this issue?
| timostamm wrote:
| Not sure if it is a magic bullet, but it was definitely written
| by TypeScript developers, for TypeScript developers.
|
| The generated TypeScript code is already pretty minimal because
| all serialization ops are implemented with reflection instead
| of generated code (which is only marginally slower than
| generated code in JS).
|
| But you can also switch to generating JavaScript + TypeScript
| declaration files, which is truly minimal: JavaScript is an
| entirely dynamic language, so we actually only generate a
| small snippet of metadata in the .js output and create a class
| at run time with a function call. The generated typings
| (.d.ts) give you type safety, autocompletion in the IDE, and
| so on.
|
| You can see the output here:
| https://github.com/bufbuild/protobuf-es/blob/main/packages/p...
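|
| Paraphrased (the linked file is the real output), the
| generated .js for a message boils down to a single call that
| hands the field metadata to the runtime:
|
|     import { proto3, ScalarType } from "@bufbuild/protobuf";
|
|     // Roughly what the generated .js contains - no per-field
|     // serialization code, just a message type built at
|     // run time from this metadata.
|     export const SayRequest = proto3.makeMessageType(
|       "buf.connect.demo.eliza.v1.SayRequest",
|       () => [
|         { no: 1, name: "sentence", kind: "scalar",
|           T: ScalarType.STRING },
|       ],
|     );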
| rockemsockem wrote:
| The article specifically mentions that bundle sizes for grpc
| packages in the past were unacceptable and that they've made
| big improvements.
| hmillison wrote:
| That's great news! I missed that in my initial skim of the
| article.
| mf192 wrote:
| Ye, fwiw there is an example code size comparison here:
|
| https://github.com/bufbuild/connect-
| web/blob/main/packages/c...
|
| I'm sure someone will chime in on the implementation details,
| but hopefully others can give it a try with their projects!
| sgammon wrote:
| I am so incredibly excited about this. Buf is an amazing tool
| that every proto developer should know
|
| Great work Buf team
| vosper wrote:
| If you've got Typescript on front-end and back-end then I'd
| recommend looking at tRPC. The schema is your types, which might
| often just be the type of your DB response, leading to low-code
| but fully type-safe coding.
|
| https://trpc.io/
| junon wrote:
| Or just use AJV which has the ability to statically represent
| JSON Schemas as full-fledged types, so that the schemas
| themselves become the authoritative IDL, via the new
| JSONDataType in AJV@7.
| sgammon wrote:
| pretty useless as an RPC layer if it imposes language
| restrictions...
| bluepizza wrote:
| Why?
| akshayshah wrote:
| tRPC is awesome - if you're writing full-stack TypeScript and
| don't care about other languages, it's very convenient.
|
| Protobuf might be a more appealing choice if you want to
| support clients in other languages, or if you're interested in
| getting a bit more value out of your contract language (e.g.,
| breaking change detection).
| fyrn- wrote:
| Cap'n Proto is so much better. Really, a lot of alternatives
| are, but Cap'n Proto is made by the original protobuf dev, so
| his description of why he made it is relevant. I use protobuf
| every day at work and honestly the generated code is
| incredibly awful compared to pretty much any alternative.
| We're way too bought in to switch now though, sadly.
| melony wrote:
| There aren't enough good cap'n proto code generators (compared
| to protobuffs/flatbuffs). It is not viable unless your stack is
| mainly C++.
| junon wrote:
| I use it in Rust just fine, FWIW.
| kentonv wrote:
| Hi. That's me. I'm glad you like it.
|
| Unfortunately, melony is right: I've only really been able to
| focus on the C++ implementation of Cap'n Proto. Some of the
| others are solid (e.g. Go, Rust), but many are sort of half-
| baked weekend projects of random people, which were later
| abandoned. As a result it's tough for me to recommend Cap'n
| Proto in polyglot environments. Maybe someday I'll be able to
| find the time / resources to build out the ecosystem more
| fully. Great tooling is really important for any of this to
| work.
|
| I'm also an investor in Buf FWIW. I think what they're doing is
| pretty cool.
| junon wrote:
| Hi, I like capnproto as well, thanks for making it. :)
| AtNightWeCode wrote:
| This is not the first attempt to run gRPC from the web. It is
| still a bad idea since it does not play nice with common web
| infra. I don't understand what one is even trying to
| accomplish with this.
| biggestlou wrote:
| I strongly suggest that you read the post to see why this
| project is different and specifically intended to play nicely
| with common web infra.
| yandie wrote:
| But then we have to use yet another protocol?
| timostamm wrote:
| If you want to be able to see data in the browser's network
| inspector, you do. If you are already serving gRPC-web, you
| can use the gRPC-web transport in connect-web; it supports
| both protocols. It doesn't support plain gRPC because web
| browsers do not expose trailers, and gRPC uses them
| extensively.
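|
| Switching between the two is a one-line change on the client,
| roughly (the base URL is a placeholder):
|
|     import {
|       createConnectTransport,
|       createGrpcWebTransport,
|     } from "@bufbuild/connect-web";
|
|     // Same generated clients either way; only the transport
|     // changes, depending on what your server speaks.
|     const transport = createGrpcWebTransport({
|       baseUrl: "https://api.example.com",
|     });
|     // const transport = createConnectTransport({
|     //   baseUrl: "https://api.example.com",
|     // });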
| theogravity wrote:
| I would really emphasize the no-proxy-required part in the
| intro of this post. To me it's one of the huge factors for not
| using gRPC from the web.
| no_circuit wrote:
| No proxy is required if you only write servers in Go with
| their connect-go library. Packing protos is not glamorous
| work, but I'm not willing to adopt a gRPC protocol that limits
| implementation choices.
|
| If you have gRPC services, chances are you already have an L7
| reverse proxy like Envoy running with grpc/grpc-web support,
| whether you run it yourself or get it as part of your cloud
| provider. So IMO their connect-go library is only useful if
| you directly expose the server on the internet and only use
| the Go programming language.
| rektide wrote:
| But your services have to support the Connect protocol. You
| can also ship services that support gRPC-Web directly & run
| without a proxy.
|
| This is definitely a step up, nicely convenient. But it has
| downsides too. gRPC is incredibly well supported by a huge
| number of observability tools and service-layer/service-mesh
| systems. Few have support for gRPC-Web. Which, if any, have
| support for Connect-Web?
|
| In general, it's just massively sad that the browser never got
| around to making HTTP Push useful for protocols. gRPC should
| have been positioned to do great things; it was a very common-
| sense overlay atop HTTP2 that provided just a little more
| semantics to say how things ought to work. It triangulated
| nicely on where we were going, at its inception, back in
| 2014/2015. But the browser basically never evolved, never gave
| developers access to do the good stuff we thought we'd start
| to have access to. I still see brand new specs that shout out
| to the Low Level Extensibility Manifesto[1], but for some
| reason, in terms of the most base layer of the web, HTTP, this
| simple guidance has been roundly ignored & the capabilities
| were never delivered. So gRPC has been stuck with a pretty ok
| protocol & a frustratingly incompatible browser.
|
| [1] https://github.com/extensibleweb/manifesto
| sjroot wrote:
| Yeah this is the deal breaker for me as well. This smells
| like an embrace-extend-extinguish campaign frankly?
|
| 1. Embrace gRPC and protobuf
| 2. Extend with Connect, get everyone using this in their APIs
|    and web apps
| 3. Drift away from established gRPC and/or protobuf standards,
|    build platforms/business around this and trap as many folks
|    as possible
|
| As silly as it may seem, one thing that really sends this
| signal for me is the nice trademark symbol on the Buf name.
| Their intention for all of this stuff is to build a business.
| akshayshah wrote:
| In many ways, you're spot on. We are trying to build a
| business: a schema registry for Protobuf. We think that
| schema-driven APIs makes a lot of sense, that Protobuf is
| the de facto best schema language, and that the biggest gap
| is good tooling to share and leverage the schemas.
|
| To retire in Scrooge McDuck-sized vaults of gold,* though,
| we need to make Protobuf super awesome so that individual
| developers and companies are willing to use it. It's not
| super-awesome today, and gRPC is often even worse. To
| everyone's benefit (I hope), we have the funding and runway
| to tackle this problem.
|
| From our perspective, your plan looks more like:
|
| (1) Meet everyone where they are, which is mostly Protobuf
| v3 and gRPC. Adopting a new implementation of a
| standardized wire protocol in some portion of your system
| is risky, but not crazy.
|
| (2) Offer a graceful upgrade path to an RPC protocol we
| think is better, and genuinely solves many pain points with
| the current status quo.
|
| (3) If we're wildly successful, influence the Protobuf
| language community whenever a v4 is under discussion. There
| are certainly warts to fix and nice features to add, but
| stability and widespread support have a powerful magic of
| their own.
|
| Good intentions are hard to prove, of course. Hopefully
| releasing good, well-supported OSS code can be a first step
| in building some trust.
|
| * Kidding about the piles of gold. I wouldn't say no, but
| this is the most fun I've had at work in a decade :)
| xyzzy_plugh wrote:
| > Which if any have support for Connect-Web?
|
| As far as I can tell it's just HTTP, so... all of them?
| rektide wrote:
| gRPC is also just HTTP, yet these tools go much, much
| further to provide good observability of the specific gRPC
| transactions happening over HTTP. With application-level
| protocols flowing atop the hypertext transfer protocol,
| being able to reach in & see what the app protocol is doing
| is invaluable for observability. So no, just about none of
| these tools will have actually good Connect-Web support.
| Being able to see streams, parameters, and a host of other
| capabilities is unlikely to just work fine.
| akshayshah wrote:
| Unless you're all-in on server streaming, the Connect
| protocol is indistinguishable from RESTful HTTP to this
| class of L7 proxies - it's just a POST to a Protobuf-
| looking path, with an `application/json` or
| `application/proto` Content-Type. Most of the
| proxies/service mesh things that provide rich gRPC
| instrumentation are also capable of providing rich
| instrumentation for this style of HTTP.
| rektide wrote:
| I dunno. I'd like to see some comparisons. HTTP is so
| broad. Observability tools do well with it. But I think
| it's deceitful & disingenuous to blow this topic off like
| this.
|
| Streams use features like HTTP2+ Push, & there's all
| manner of semantic meaning implicit in headers & schemas
| that regular http won't see but that these tools pick up
| on. I want to be sold that the complex multi-directional,
| multi-stream cases work fine and are fully observable, but
| I seriously suspect we're overreducing, not being honest,
| by discarding the difficulties of tuning observability to
| specific protocols here & saying general http support will
| buy us everything.
| 89vision wrote:
| I once worked on a home-built browser grpc implementation
| where the protobufs were serialized over the wire in base64.
| It certainly had its kinks to work out, but sharing/generating
| the service definitions and models from the same protobuf
| markup between backend and frontend was really nice.
| miohtama wrote:
| I was working with Cosmos nodes in a hackathon and they did
| RPC where one had Protobufs embedded, base64-encoded, within
| JSON.
| nine_k wrote:
| Doesn't OpenAPI (nee Swagger) give you the same for plain HTTP
| + JSON?
| 89vision wrote:
| yeah, I've done both, but I far prefer writing protobuf
| markup over swagger. YMMV
| AtNightWeCode wrote:
| Yes, you can even generate models from JSON directly. Then
| there is JSON Schema and other schema-first options that
| mostly no one uses anymore.
___________________________________________________________________
(page generated 2022-08-04 23:00 UTC)