[HN Gopher] gRPC: The Bad Parts
___________________________________________________________________
gRPC: The Bad Parts
Author : temp_praneshp
Score : 139 points
Date : 2024-06-26 11:30 UTC (1 day ago)
(HTM) web link (kmcd.dev)
(TXT) w3m dump (kmcd.dev)
| tempest_ wrote:
| One of my favourite bits is having to pass a JSON string to the
| Python library to configure a service. To this day I am not
| entirely sure it is adhering to the config.
| ycombinatrix wrote:
| same in java
| hamandcheese wrote:
| I remember being surprised at how hard it was to read the source
| code for grpc Java. There's an incredible amount of indirection
| at every turn.
|
| This made it extremely hard to get answers to questions that were
| undocumented.
|
| It's a shame because I know Google can put out easy to read code
| (see: the go standard library).
| hinkley wrote:
| Sass was the first code generator I ever met that produced
| decent output. I'd been coding for 15 years at that point. Less
| is the second one.
|
| That's the end of this story. There are basically two people in
| the world who have demonstrated that they can be trusted to
| generate code that isn't an impenetrable quagmire of pain and
| confusion.
|
| I doubt it's an accident that both emitted declarative output
| rather than imperative, but I would be happy to be proven
| wrong.
| ot wrote:
| > It's a shame because I know Google can put out easy to read
| code (see: the go standard library).
|
| My guess is that the difference is that go is managed by a
| small group of engineers that have strong opinions, really care
| about it, and they have reached "fuck you level", so they can
| prioritize what they think is important instead of what would
| look good on a promo packet.
| delusional wrote:
| I think it's partly a culture thing. Java developers love
| indirection, and they're used to an ecosystem that doesn't want
| to be understood. An ecosystem that wants you to google
| whatever obtuse error message it decides to spit out, and paste
| whatever half-thought-out annotation some blog post spits back
| into your code to make it work.
|
| I've worked with people who considered anything that wasn't
| programmed with annotations to be "too advanced" for their use-
| case.
| hot_gril wrote:
| Java is on life support as a language, but the ecosystem is
| strong; that's why it has all these weird features via
| annotations. And people who use Java are just trying to get
| stuff done like everyone else.
| cjensen wrote:
| The generated C++ interfaces to gRPC are also filled with an
| incredible amount of indirection and unnecessary concepts. I'd
| say it's a "bad at writing complex things simply" culture
| rather than being Java-specific.
| hot_gril wrote:
| Autogenerated code in general tends to be unreadable. It's
| not easy and/or not a priority.
| jakjak123 wrote:
| Some of the details inside protobuf in Java can be very
| convoluted, but they are also the result of intense
| benchmarking, years of experience, and a long tail of deep
| legacy support for old Java.
|
| Honestly I found the Java bindings to be way better designed
| and thought out than Golang. On a consumer level, the immutable
| message builders are fantastic, the one-ofs are decent compared
| to what Java can offer, and the service bindings actually
| provide a beautiful abstraction with their 0-1-many model. In
| Golang, if you only have to deal with unary RPCs they are OK I
| guess, but I really miss the immutable messages.
| jeffbee wrote:
| "Adds a build step" is just not a thing you notice in any way if
| you also use Bazel, which I imagine Google imagines everyone
| doing. I don't really agree with any of the complaints in this
| article since they are sort of vague and apparently from the
| point of view of browsers, whereas I am a backend developer, but
| I think there is a large universe of things to complain about
| with gRPC. First and foremost, it seems as though bazel, protobuf
| C++, and gRPC C++ developers have never spoken to each other and
| are apparently not aware that it is almost impossible to build
| and link a gRPC C++ server with bazel. The bazel people have made
| it impossible to build protobuf with bazel 7 and the protobuf
| people have made it impossible to use bazel with protobuf 27,
| while the gRPC people and the rules_proto people are not even in
| the conversation. The other complaint from the C++ side is that
| the implementation is now so slow that Go and Java beat it
| easily, making C++ people look stupid at work.
| rstat1 wrote:
| The last time I attempted to use GRPC++ it was pretty hard to
| build even without the heaping monstrosity that is Bazel.
| athorax wrote:
| I do generally agree the tooling sucks, but as mentioned, buf and
| the connectrpc ecosystem have made it much easier to get things
| going.
| jayd16 wrote:
| It's pretty ironic, but Microsoft decided to lean into gRPC
| support for C#/ASP.NET, and it's honestly really well done and
| has great devx.
| PaulWaldman wrote:
| Why is this ironic?
| jiveturkey wrote:
| Google ate their lunch with Chrome, so long ago.
| eddythompson80 wrote:
| What does that have to do with grpc support in C#?
| jayd16 wrote:
| I just meant that one of the better implementations is in a
| language Google doesn't heavily use. Maybe it's not ironic
| and just a refreshing example of OSS at work.
| austin-cheney wrote:
| Yes, all of it.
|
| Google claims gRPC with protobuf yields a 10-11x performance
| improvement over HTTP. I am skeptical of those numbers because
| really it comes down to the frequency of data parsing into and
| out of the protobuf format.
|
| At any rate just use JSON with WebSockets. It's stupid simple and
| still 7-8x faster than HTTP with far less administrative overhead
| than either HTTP or gRPC.
| nevir wrote:
| > because really it comes down to the frequency of data parsing
| into and out of the protobuf format.
|
| Protobuf is intentionally designed to NOT require any parsing
| at all. Data is serialized over the wire (or stored on disk) in
| the same format/byte order that it is stored in memory
|
| (Yes, that also means that it's not validated at runtime)
|
| Or are you referencing the code we all invariably write
| before/after protobuf to translate into a more useful format?
| sa46 wrote:
| You're likely thinking of Cap'n'Proto or flatbuffers.
| Protobuf definitely requires parsing. Zero values can be
| omitted on the wire so there's not a fixed layout, meaning
| you can't seek to a field. In order to find a field's value,
| you must traverse the entire message and decode each tag
| number, since the last tag wins.
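|
| A minimal sketch of that traversal in Go (the field lookup and
| wire-type handling are simplified for illustration; truncation
| checks are mostly elided):
|
|     // readVarint decodes one base-128 varint; every protobuf
|     // field starts with one (key = field number + wire type).
|     func readVarint(buf []byte) (v uint64, n int) {
|         for shift := uint(0); n < len(buf); shift += 7 {
|             b := buf[n]
|             n++
|             v |= uint64(b&0x7f) << shift
|             if b&0x80 == 0 {
|                 return v, n
|             }
|         }
|         return 0, 0 // truncated input
|     }
|
|     // findField scans the whole message for a varint field,
|     // keeping the last occurrence, because there is no fixed
|     // layout to seek into.
|     func findField(msg []byte, target uint64) (val uint64, ok bool) {
|         for i := 0; i < len(msg); {
|             key, n := readVarint(msg[i:])
|             if n == 0 {
|                 return 0, false // malformed
|             }
|             i += n
|             num, wire := key>>3, key&7
|             switch wire {
|             case 0: // varint
|                 v, n := readVarint(msg[i:])
|                 i += n
|                 if num == target {
|                     val, ok = v, true // last tag wins
|                 }
|             case 2: // length-delimited: skip the payload
|                 l, n := readVarint(msg[i:])
|                 i += n + int(l)
|             default: // other wire types elided in this sketch
|                 return 0, false
|             }
|         }
|         return val, ok
|     }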
| thinkharderdev wrote:
| > Protobuf is intentionally designed to NOT require any
| parsing at all
|
| This is not true at all. If you have a language-specific
| class codegen'd by protoc then the in-memory representation
| of that object is absolutely not the same as the serialized
| representation. For example:
|
| 1. Integer values are varint encoded in the wire format but
| obviously not in the in-memory format
|
| 2. This depends on the language of course but variable length
| fields are stored inline in the wire format (and length-
| prefixed) while the in-memory representation will typically
| use some heap-allocated type (so the in-memory representation
| has a pointer in that field instead of the data stored
| inline)
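|
| As a quick illustration of point 1 in Go: the value 300 takes two
| bytes on the wire (0xAC 0x02) but is a fixed-width integer in
| memory, so encoding and decoding always involve real work:
|
|     // appendVarint encodes v in protobuf's base-128 varint
|     // format: 7 bits per byte, high bit = "more bytes follow".
|     func appendVarint(b []byte, v uint64) []byte {
|         for v >= 0x80 {
|             b = append(b, byte(v)|0x80)
|             v >>= 7
|         }
|         return append(b, byte(v))
|     }
|
|     // appendVarint(nil, 300) == []byte{0xAC, 0x02}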
| arp242 wrote:
| > Data is serialized over the wire (or stored on disk) in the
| same format/byte order that it is stored in memory
|
| That's just not true. You can read about the wire format over
| here, and AFAIK no mainstream language stores things in
| memory like this: https://protobuf.dev/programming-
| guides/encoding
|
| I've had to debug protobuf messages, which is not fun at all,
| and it's absolutely parsed.
| cstrahan wrote:
| > Protobuf is intentionally designed to NOT require any
| parsing at all.
|
| As others have mentioned, this is simply not the case, and
| the VARINT encoding is a trivial counterexample.
|
| It is this required decoding/parsing that (largely)
| distinguishes protobuf from Google's flatbuffers:
|
| https://github.com/google/flatbuffers
|
| https://flatbuffers.dev/
|
| Cap'n Proto (developed by Kenton Varda, the former Google
| engineer who, while at Google, re-wrote/refactored Google's
| protobuf to later open source it as the library we all know
| today) is another example of zero-copy (de)serialization.
| leetharris wrote:
| > At any rate just use JSON with WebSockets. It's stupid simple
| and still 7-8x faster than HTTP with far less administrative
| overhead than either HTTP or gRPC.
|
| gRPC is not supposed to be a standard web communication layer.
|
| There are times where you need a binary format and extremely
| fast serialization/deserialization. Video games are one example
| where binary formats are greatly preferred over JSON.
|
| But I do agree that people keep trying to shove gRPC (or
| similar) into things where they aren't needed.
| anon291 wrote:
| Depending on the use case, it's often better to just copy
| structs directly, with maybe some care for endianness
| (little-endian). But at this point, the two most popular
| platforms, ARM and x86, agree on endianness and most
| alignment.
|
| There's almost no reason why RPC should not just be
| send(sk, (void *)&mystruct, sizeof(struct mystructtype), 0)
| hot_gril wrote:
| Won't work if your struct has any pointers in it.
| anon291 wrote:
| I'd recommend not doing that then. Of course the same is
| true if you coerce a pointer to an int64 and store it in
| a protobuf.
| hot_gril wrote:
| It's not the pointers themselves so much as what they're
| typically used for. How would you do dynamic sizing?
| Imagine sending just a struct of integer arrays this way,
| you'd have to either know their sizes ahead of time or
| just be ok with sending a lot of empty bits up to some
| max size. And recursive structures would be impossible.
|
| You could get around this with a ton of effort around
| serdes, but it'd amount to reinventing ASN.1 or Protobuf.
| nly wrote:
| A lot of protocols in low latency trading systems just
| have fixed maximum size strings and will right pad with
| NUL or ASCII space characters.
|
| Packed structs with fixed size fields, little endian
| integers and fixed point is heaven to work with.
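|
| A sketch of that style in Go with encoding/binary (the message
| layout here is invented for illustration):
|
|     import (
|         "encoding/binary"
|         "io"
|     )
|
|     // Fixed layout: every field has a known size and offset, so
|     // a reader can decode at a fixed position with no parsing.
|     type Quote struct {
|         Symbol [8]byte // ASCII, right-padded with NUL
|         Price  uint64  // fixed point: price * 10^4
|         Qty    uint32
|         Flags  uint32
|     }
|
|     func writeQuote(w io.Writer, q *Quote) error {
|         // binary.Write emits the fields in order, little-endian,
|         // with no field names or tags on the wire.
|         return binary.Write(w, binary.LittleEndian, q)
|     }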
| hot_gril wrote:
| I can see that in niche situations, particularly if you
| have a flat structure and uniform hardware. Cap'n Proto
| is also a way to do zero-parsing, but it has other costs.
| jamesmunns wrote:
| Do all of your platforms have the same word width? The same
| -fshort-enums settings? Do you know that none of your data
| structures include pointers? Do all of your systems use the
| same compiler? Compiler version?
|
| I agree it will _usually_ work, but this becomes an ABI
| concern, and it's surprisingly common to have ABI
| mismatches on one platform with the items I've noted above.
| lanstin wrote:
| I've seen wire protocols that had junk for the alignment
| buffering in such structs. And I've seen people have to
| do a whole lot of work to make the wire protocol work on
| a newer compiler/platform. Also, the whole point of a
| network protocol being documented is that it decouples
| the interface (msgs over a network) from the
| implementation (parsing and acting on msgs). Your v1
| server might be able to just copy the read buffer into a
| struct, but your v2 server won't. And it is possible and
| desirable to live in a world where you can change your
| implementation but leave your interface alone (although
| some parts of the software ecosystem seem to not know
| this nice fact and implicitly fight against realizing
| it).
|
| My issue with gRPC is simple: the Go gRPC server code
| does a lot of allocations. I have a gRPC service where
| each container does 50-80K/second of incoming calls and I
| spend a ton of time in GC and in allocating headers for
| all the msgs. I have a similar REST service where I use
| fasthttp with 0 allocs (but with a stupidly high number
| of connections due to the lack of multiplexing through the
| connection).
| neonsunset wrote:
| Go's GC wasn't really made with throughput maximization
| in mind. It's a language that doesn't scale that well to
| take advantage of beefy nodes and has a weak compiler. I
| suppose Google's vision for it is to "crank the
| replica count up". gRPC servers built on top of ASP.NET
| Core, Java Vert.x and Rust Thruster will provide you with
| much higher throughput on multi-core nodes.
|
| https://github.com/LesnyRumcajs/grpc_bench/discussions/44
| 1
| throwaway894345 wrote:
| Ignoring the incompatibilities in word size, endianness,
| etc, how does a Go or JavaScript or etc program on the
| receiving end know what `mystruct` is? What if you want to
| send string, list, map, etc data?
| hot_gril wrote:
| gRPC is meant for backend microservices among other things,
| and it's still painful for that for the reasons the article
| describes, all of which could've been fixed or avoided.
| Internally Google doesn't even use gRPC; they use something
| similar that has better internal support, according to
| https://news.ycombinator.com/item?id=17690676
|
| I also don't see what'd stop it from being used generally for
| websites calling the backend API. Even if you don't care
| about the efficiency (which is likely), it'd be nice to get
| API definitions built in instead of having to set up OpenAPI.
| randomdata wrote:
| _> Internally Google doesn't even use gRPC; they use
| something similar that has better internal support
| according to https://news.ycombinator.com/item?id=17690676_
|
| But that says that they _do_ use gRPC internally on
| projects that are new enough to have been able to adopt it?
| hot_gril wrote:
| In 2018, there was some kind of push towards gRPC
| internally, but it was since abandoned and reversed among
| the few who actually switched. They still don't use it
| internally, only externally in some places.
| randomdata wrote:
| So, wrong link?
| nly wrote:
| That's great, but protobuf is slow as shit. I wouldn't use it
| in games.
|
| If I was using something slow that needed flexibility I'd
| probably go with Avro since it has more powerful schema
| evolution.
|
| If I wanted fast I'd probably use SBE or Flatbuffers
| (although FB is also slow to serialise)
| doctorpangloss wrote:
| > gRPC is not supposed to be a standard web communication
| layer.
|
| It kind of is. What do you think WebTransport in HTTP/3 is?
| It's basically gRPC Next. The only reason gRPC didn't make it
| as the standard web communication layer is because of one
| disastrous decision by one Chrome engineer in
| https://issues.chromium.org/issues/40388906, maybe because he
| woke up on the wrong side of the bed.
| kentonv wrote:
| > Google claims gRPC with protobuf yields a 10-11x performance
| improvement over HTTP.
|
| That... doesn't make any sense, since gRPC is layered on top of
| HTTP. There must be missing context here.
| randomdata wrote:
| gRPC was based on an HTTP draft that predated the
| standardization of HTTP/2, so presumably that statement was
| said about HTTP/1. HTTP/2 may not have existed at the time it
| was asserted.
| lanstin wrote:
| gRPC gives you multiplexing of slow requests over one TCP
| connection, which reduces all the work and memory related
| to one socket per pending request; gRPC also means you don't
| have to put the string name of a field on the wire, which
| makes your messages smaller, which puts less stress on the
| memory system and the network, assuming your field values
| are roughly as large as your field names.
| randomdata wrote:
| Multiplexing is an HTTP/2 feature. But as gRPC was based
| on an early HTTP/2 draft, it beat HTTP/2 to the punch.
| Thus it is likely that HTTP/2 didn't exist at the time
| the statement was made and therefore HTTP/1 would have
| been implied.
| hot_gril wrote:
| Must be comparing to the equivalent JSON-over-HTTP usage.
| perezd wrote:
| Protobuf can typically be about 70-80% smaller than the
| equivalent JSON payloads. If you care about network I/O costs
| (at a large scale), that translates directly into cost savings.
|
| Additionally, I think people put a lot of trust into JSON
| parsers across ecosystems "just working", and I think that's
| something more people should look into (it's worse than you
| think): https://seriot.ch/projects/parsing_json.html
| hot_gril wrote:
| I agree, there's a lot to gain from getting away from JSON,
| but gRPC needs HTTP/1 support and better tooling to make that
| happen.
| austin-cheney wrote:
| Let's say I wanted to transfer a movie in MKV container
| format. It's binary and large, at about 4 GB. Would I use JSON
| for that? No. Would I use gRPC/protobuf for that? No.
|
| I would open a dedicated TCP socket and a file system stream.
| I would then pipe the file system stream to the network
| socket. No matter what you still have to deal with packet
| assembly because if you are using TLS you have small packets
| (max size varies by TLS revision). If you are using
| WebSockets you have control frames and continuation frames and
| frame header assembly. Even with that administrative overhead
| it's still a fast and simple approach.
|
| When it comes to application instructions, data from some
| data store, any kind of primitive data types, and so forth I
| would continue to use JSON over WebSockets.
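|
| The pipe-the-file approach described above is only a few lines
| in Go (the address and filename here are made up):
|
|     package main
|
|     import (
|         "io"
|         "log"
|         "net"
|         "os"
|     )
|
|     func main() {
|         conn, err := net.Dial("tcp", "peer.example.com:9000")
|         if err != nil {
|             log.Fatal(err)
|         }
|         defer conn.Close()
|
|         f, err := os.Open("movie.mkv")
|         if err != nil {
|             log.Fatal(err)
|         }
|         defer f.Close()
|
|         // io.Copy streams the file onto the socket in chunks;
|         // the runtime and kernel handle the buffering.
|         if _, err := io.Copy(conn, f); err != nil {
|             log.Fatal(err)
|         }
|     }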
| doctorpangloss wrote:
| > JSON with WebSockets. It's stupid simple and still 7-8x faster
| than HTTP with far less administrative overhead than either
| HTTP or gRPC.
|
| Everyone doing what you are saying ends up reinventing parts of
| gRPC, on top of reinventing parts of RabbitMQ. It isn't ever
| "stupid simple." There are ways to build the things you need in
| a tightly coupled and elegant way, but what people want is
| Next.js, that's the coupling they care about, and it doesn't
| have a message broker (neither does gRPC), and it isn't a proxy
| (which introduces a bajillion little problems into WebSockets),
| and WebSockets lifetimes don't correspond to session lifetimes,
| so you have to reinvent that too, and...
| austin-cheney wrote:
| _but what people want is Next.js,_
|
| What people? Developers? This is why I will not do that work
| anymore. Don't assume to know what I want based upon some
| tool set or tech stack that you find favorable. I hate (HATE
| HATE) talk of tech stacks, the fantasy of the developer who
| cannot write original software, who does not measure things,
| and cannot provide their own test automation. They scream
| their stupidity for all the world to hear when they start
| crying about reinventing wheels, or some other empty cliche,
| instead of just delivering a solution.
|
| What I want is two things:
|
| 1. Speed. This is not an assumption of speed. It's the result
| of various measurements in different execution contexts.
|
| 2. Less effort. I want to send a message across a network...
| and done. In this case you have some form of instruction or
| data package and then you literally just write that to the
| socket. That is literally 2 primitive instructions without
| abstractions like _ex: socket.write(JSON.stringify(thing));_. It
| is without round trips, without headers, without anything
| else. You are just done.
| jakjak123 wrote:
| I don't even care about the performance. I just want some way to
| version my messages that is backward and forwards compatible
| and can be delivered in all the languages we use in production.
| I have tried to consume JSON over WebSockets before and it's
| always a hassle with the evolution of the data format. Just
| version it in protobuf and push the bytes over WebSocket if you
| have a choice. Also, load balancing WebSocket services can be
| a bitch. Just rolling out our WebSocket service would have
| disconnected 500k clients in 60 seconds if we hadn't done a
| huge amount of work.
| neonsunset wrote:
| A lot of tooling badness comes out of the fact that gRPC
| integration in its lingua franca, Go, requires manual wiring of
| protoc.
|
| I don't know why or how there isn't a one-liner option there,
| because my experience with using gRPC in C# has been vastly
| better:
|
|     dotnet add package Grpc.Tools   // note below
|     <Protobuf Include="my_contracts.proto" />
|
| and you have the client and server boilerplate (client: give it
| a URL and it's ready for use; server: inherit from the base class
| and implement call handlers as appropriate). It is all handled
| behind the scenes by protoc integration that plugs into msbuild,
| and the end user rarely has to deal with its internals directly,
| unless someone abused definitions in .proto to work as a weird
| DSL for an end-to-end testing environment and got carried away
| with namespacing (which makes protoc plugins die for _most_
| languages, so it's not that common of an occurrence). The package
| readme is easy to follow too:
| https://github.com/grpc/grpc/blob/master/src/csharp/BUILD-IN...
|
| Note: usually you need Grpc.Net.Client and Google.Protobuf too but
| that's two `dotnet add package`s away.
| tracker1 wrote:
| Similar experiences with web services via WCF. It was in
| dealing with anything published that wasn't .Net where it got
| difficult. PHP services were not complaint with their own WSDL,
| similar for internal types in Java from some systems. It was
| often a mess compared to the C# experience, hence everyone
| moving towards REST or simpler documentation that was easy to
| one-off as needed, or use an API client.
| atombender wrote:
| The Go tooling for gRPC is inexplicably bad, both in terms of
| ergonomics and in terms of performance.
|
| The GoGoProtobuf [1] project was started to improve both. It
| would generate nice Go types that followed Go's conventions.
| And it uses fast binary serialization without needing to resort
| to reflection.
|
| Unfortunately, the gRPC/Protobuf team(s) at Google is famously
| resistant to changes, and was unwilling to work with the GoGo
| maintainers. As a result, the GoGo project is now dead. [2]
|
| I've never used Buf, but it looks like it might fix most of the
| issues with the Go support.
|
| [1] https://github.com/gogo/protobuf
|
| [2] https://x.com/awalterschulze/status/1584553056100057088
| mdhb wrote:
| I think there is a really nice opportunity to take some emerging
| open source standards such as CBOR for the wire format, CDDL for
| the schema definition and code generation inputs, and
| WebTransport for the actual transport layer.
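|
| The CBOR piece, at least, is already easy to try from Go; a
| sketch assuming the github.com/fxamacker/cbor/v2 package (the
| struct and its integer keys are invented):
|
|     import "github.com/fxamacker/cbor/v2"
|
|     // Integer map keys keep the wire compact, protobuf-style,
|     // instead of repeating string field names in every message.
|     type Event struct {
|         ID   uint64   `cbor:"1,keyasint"`
|         Tags []string `cbor:"2,keyasint"`
|     }
|
|     b, err := cbor.Marshal(Event{ID: 42, Tags: []string{"a"}})
|     // ...
|     var e Event
|     err = cbor.Unmarshal(b, &e)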
| devmunchies wrote:
| My biggest issue with gRPC is direct mapping of IP addresses in
| the config or at runtime. From the docs: "When sending a gRPC
| request, the client must determine the IP address of the service
| name." https://grpc.io/docs/guides/custom-name-resolution/
|
| My preferred approach would be to map my client to a "topic" and
| then any number of servers can subscribe to the topic. Completely
| decoupled, scaling up is much easier.
|
| My second biggest issue is proto file versioning.
|
| I'm using NATS for cross-service comms and it's great. Just wish
| it had a low-level serialization mechanism for more efficient
| transfer like gRPC.
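|
| For reference, that subject-based decoupling is a few lines with
| the nats.go client (subject and queue names are invented, error
| handling elided):
|
|     import (
|         "time"
|
|         "github.com/nats-io/nats.go"
|     )
|
|     nc, _ := nats.Connect(nats.DefaultURL)
|     defer nc.Drain()
|
|     // Any number of servers can join the same queue group;
|     // NATS load-balances requests across them.
|     nc.QueueSubscribe("orders.lookup", "workers",
|         func(m *nats.Msg) {
|             m.Respond([]byte(`{"status":"ok"}`))
|         })
|
|     // The client addresses a subject, never an IP.
|     resp, err := nc.Request("orders.lookup",
|         []byte(`{"id":1}`), time.Second)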
| dilyevsky wrote:
| There's https://github.com/nats-rpc/nrpc
| skywhopper wrote:
| The worst part of all is that most people don't need gRPC, but
| use it anyway. It's a net addition of complexity and you're very
| likely not getting the actual benefits. I've seen countless
| simple REST APIs built with language-native tooling burned to the
| ground to be replaced with layers of gRPC trash that requires
| learning multiple new tools and DSLs, is harder to troubleshoot
| and debug, and ultimately tends to force API rigidity far sooner
| than is healthy.
|
| One project I worked on was basically just a system for sharing a
| JSON document to multiple other systems. This was at a golang
| shop on AWS. We could have used an S3 bucket. But sure, an API
| might be nice so you can add a custom auth layer or add server
| side filters and queries down the road. So we built a REST API in
| a couple of weeks.
|
| But then the tech lead felt bad that we hadn't used gRPC like the
| cool kids on other teams. What if we needed a Python client so we
| could build an Ansible plugin to call the API?? (I mean, Ansible
| plugins can be in any language; it's a REST API, Ansible already
| supports calling that (or you could just use curl); or you could
| write the necessary Python to call the REST API in like three
| lines of code.) So we spent months converting to gRPC, except we
| needed to use the Connect library because it's cooler, except it
| turns out it doesn't support GET calls, and no one else at the
| company was using it.
|
| By the time we built the entire service, we had spent months; it
| was impossible to troubleshoot, just calling the API for testing
| required all sorts of harnesses and mocks, no good CLI tooling,
| and we were generating a huge Python library to support the
| Ansible use case, but it turned out that wasn't going to work for
| other reasons.
|
| Eventually everyone on that team left the company or moved to
| other projects. I don't think anything came of it all but we
| probably cost the company a million dollars. Go gRPC!
| thinkharderdev wrote:
| > The worst part of all is that most people don't need gRPC,
| but use it anyway. It's a net addition of complexity and you're
| very likely not getting the actual benefits. I've seen
| countless simple REST APIs built with language-native tooling
| burned to the ground to be replaced with layers of gRPC trash
| that requires learning multiple new tools and DSLs, is harder
| to troubleshoot and debug, and ultimately tends to force API
| rigidity far sooner than is healthy.
|
| This sounds odd to me because I don't really see how gRPC would
| cause any of those issues?
|
| > layers of gRPC trash
|
| What layers? Switching from REST (presumably JSON over http) to
| gRPC shouldn't introduce any new "layers". It's replacing one
| style of API call with a different one.
|
| > learning multiple new tools and DSLs
|
| New tools sure, you need protoc or buf to build the bindings
| from the IDL, but what is the new DSL you need to learn?
|
| > ultimately tends to force API rigidity far sooner than is
| healthy
|
| How does gRPC force API rigidity? It is specifically designed
| to be evolvable (sometimes to its usability detriment IMO)
|
| There are some definite footguns with gRPC and I am becoming
| increasingly annoyed with Protobuf in particular as the years
| go on, but going back to REST APIs still seems like a huge step
| backwards to me. With gRPC you get a workflow that starts with
| a well-defined interface and all the language bindings
| client/server stubs are generated from that with almost zero
| effort. You can kind of/sort of do that with REST APIs using
| openapi specs but in my experience it just doesn't work that
| well and language support is sorely lacking.
| arp242 wrote:
| > What layers? Switching from REST (presumably JSON over
| http) to gRPC shouldn't introduce any new "layers".
|
| Of course it does, starting with the protobufs and code
| generation. You say yourself in your very next reply:
|
| "New tools sure, you need protoc or buf to build the bindings
| from the IDL, but what is the new DSL you need to learn?"
|
| And the DSL is presumably protobuf, which you yourself are
| "increasingly annoyed" with.
| PaulWaldman wrote:
| This anecdote highlights scope creep and mismanagement, not a
| fault of gRPC.
| ivancho wrote:
| I think the anecdote highlights that there's no incremental
| way to approach gRPC; it's not a low-risk, small-footprint
| prototype project that can be introduced slowly and
| integrated with existing systems and environments. Which,
| well, it is a bit of a fault of gRPC.
| perezd wrote:
| I think that's not true. There are plenty of incremental
| ways to adopt gRPC. For example, there are packages that
| can facade/match your existing REST APIs[1][2].
|
| Reads like a skill issue to me.
|
| [1]: https://github.com/grpc-ecosystem/grpc-gateway
|
| [2]: https://github.com/connectrpc/vanguard-go
| DandyDev wrote:
| People use it - like I do - because they like the improved type
| safety compared to REST. We use gRPC at $dayjob and I would
| hate going back to the stringly typed mess that is JSON over
| REST or the _really_ absurdly over engineered complexity trap
| that is GraphQL. gRPC lets us build type safe, self-documented
| internal APIs easily and with tooling like Buf, most of the
| pain is hidden.
|
| The DSL I consider a plus. If you build REST APIs you will
| usually also resort to using a DSL to define your APIs, at
| least if you want to easily generate clients. But in this case
| the DSL is OpenAPI, which is an error prone mess of YAML or
| JSON specifications.
| lanstin wrote:
| I use it because: 1. I am not writing a network API without a
| solid spec, and 2. I want to decouple the number of tcp
| connections from the amount of pending work. I don't want one
| wonky msg to consume many resources and I want a spike of
| traffic to cause more msgs to be sent to the worker pools not
| cause a bunch of TCP connection establishment, SSL
| handshakes, etc. I also find it personally offensive to send
| field names in each network msg, as per JSON or XML.
| jakjak123 wrote:
| This, 100%. I am never going back to stringly typed JSON in
| whatever random URL structure that team felt like doing that
| week. GraphQL is made for Facebook-type graph problems. It's
| way overcomplicated for most use cases. I just want a lingua
| franca DSL to enforce my API specification in a consistent
| manner. I don't care if it's PUT, POST, or PATCH. Just keep it easy
| to automate tooling.
| lalaithion wrote:
| We use the textproto format extensively at work; it's super nice
| to be able to define tests of your APIs by using .textpb inputs
| and golden .textpb outputs. We have so many more tests using this
| method than if we manually called the APIs in a programming
| language for each test case, and I wouldn't want to use JSON for
| test inputs, since it lacks comments.
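|
| A sketch of that golden-file pattern in Go with the prototext
| package (the message type and the function under test are
| placeholders):
|
|     import (
|         "os"
|         "testing"
|
|         "google.golang.org/protobuf/encoding/prototext"
|         "google.golang.org/protobuf/proto"
|     )
|
|     func TestSearch(t *testing.T) {
|         in, err := os.ReadFile("testdata/search_input.textpb")
|         if err != nil {
|             t.Fatal(err)
|         }
|         var req testpb.SearchRequest // placeholder generated type
|         if err := prototext.Unmarshal(in, &req); err != nil {
|             t.Fatal(err)
|         }
|
|         got := Search(&req) // placeholder function under test
|
|         golden, _ := os.ReadFile("testdata/search_golden.textpb")
|         var want testpb.SearchResponse
|         if err := prototext.Unmarshal(golden, &want); err != nil {
|             t.Fatal(err)
|         }
|         if !proto.Equal(got, &want) {
|             t.Errorf("Search() = %v, want %v", got, &want)
|         }
|     }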
| jvolkman wrote:
| If you use IntelliJ, you can annotate your text protos with
| some header comments and unlock schema validation, completion,
| etc.
| sudorandom wrote:
| Author here: Sorry that I was so harsh on textproto. You are
| right that it has some strengths over JSON... I'm actually a
| fan of JSONC for this reason. It does limit you on tooling...
| but so does textproto, right?
|
| I think the bigger thing that I'm worried about is that gRPC
| has so many mandatory features that it can become hard to make
| a good implementation in new languages. To be honest there are
| some languages where the gRPC implementation is just not great
| and I blame the feature bloat... and I think textproto was a
| good demonstration of that feature bloat to me.
| jakjak123 wrote:
| Yeah, the feature bloat makes it a hurdle to make some good
| quality implementations. I mostly stayed in Java for years;
| those bindings are quite good. grpc-web is OK I guess,
| protobuf-ts is great, Swift came along nicely, but I have always
| been saddened by how awful the bindings are in Go. You will get
| terrible enums, terrible one-ofs, and the interceptors and
| service registration are very awkward.
| slavomirvojacek wrote:
| I am surprised no-one is mentioning Buf for all the great work
| they've done with the CLI and Connect for much better devex,
| tooling, and interoperability.
| stairlane wrote:
| Something I didn't see listed was the lack of a package manager
| for protos.
|
| For example if I want to import some common set of structs into
| my protos, there isn't a standardized or wide spread way to do
| this. Historically I have had to resort to either copying the
| structs over or importing multiple protoc generated modules in my
| code (not in my protos).
|
| If there was a 'go get' or 'pip install' equivalent for protos,
| that would be immensely useful; for me and my colleagues at
| least.
| JeffMcCune wrote:
| https://buf.build/ is this, no?
| ergl wrote:
| It is mentioned under the "Bad tooling" section
| cletus wrote:
| Story time: the whole development of protobuf was... a mess. It
| was developed and used internally at Google long before it was
| ever open sourced.
|
| Protobuf was designed first and foremost for C++. This makes
| sense. All of Google's core services are in C++. Yes there's Java
| (and now Go and to some extent Python). I know. But protobuf was
| and is a C++-first framework. It's why you have features like
| arena allocation [1].
|
| Internally there was protobuf v1. I don't know a lot about this
| because it was mostly gone by the time I started at Google.
| protobuf v2 was (and, I imagine, still is) the dominant form of
| protobuf.
|
| Now, this isn't to be confused with the _API version_, which is
| a completely different thing. You would specify this in BUILD
| files and it was a complete nightmare because it largely wasn't
| interchangeable. The big difference is with java_api_version = 1
| or 2. Java API v1 was built like the java.util.Date class.
| Mutable objects with setters and getters. v2 changed this to the
| builder pattern.
|
| At the time (this may have changed) you couldn't build the
| artifacts for both API versions and you'd often want to reuse key
| protobuf definitions that other people owned so you ended up
| having to use v1 API because some deep protobuf hadn't been
| migrated (and probably never would be). It got worse because
| sometimes you'd have one dependency on v1 and another on v2 so
| you ended up just using bytes fields because that's all you could
| do. This part was a total mess.
|
| What you know as gRPC was really protobuf v3 and it was designed
| largely for Cloud (IIRC). It's been some years so again, this may
| have changed, but there was never any intent to migrate protobuf
| v2 to v3. There was no clear path to do that. So any protobuf v3
| usage in Google was really just for external use.
|
| I explain this because gRPC fails the dogfood test. It's lacking
| things because Google internally doesn't use it.
|
| So why was this done? I don't know the specifics but I believe it
| came down to licensing. While protobuf v2 was open sourced, the
| RPC component (internally called "Stubby") never was. I _believe_
| it was a licensing issue with some dependency, but it's been
| a while and honestly I never looked into the specifics. I just
| remember hearing that it couldn't be done.
|
| So when you read about things like poor JSON support (per this
| article), it starts to make sense. Google doesn't internally use
| JSON as a transport format. Protobuf is, first and foremost, a
| wire format for C++-centric APIs (in Stubby). Yes, it was used in
| storage too (eg Bigtable).
|
| Protobuf in JavaScript was a particularly horrendous
| Frankenstein. Obviously JavaScript doesn't support binary formats
| like protobuf. You have to use JSON. And the JSON bridges to
| protobuf were all uniquely awful for different reasons. My
| "favorite" was pblite, which used a JSON array indexed by the
| protobuf tag number. With large protobufs with a lot of optional
| fields you ended up with messages like:
|
|     [null,null,null,null,...(700 more nulls)...,null,{/*my message*/}]
|
| GWT (for Java) couldn't compile Java API protobufs for various
| reasons so had to use a variant as well. It was just a mess. All
| for "consistency" of using the same message format everywhere.
|
| [1]: https://protobuf.dev/reference/cpp/arenas/
| mike_hearn wrote:
| There was never any licensing issue; do you think Google would
| depend on third-party software for anything as core as the RPC
| system? The issue was simply that at the time, there was a
| culture in which "open sourcing" things was being used as an
| excuse to rewrite them. The official excuse was that everything
| depended on everything else, but that wasn't really the case.
| Open sourcing Stubby could certainly have been done. You just
| open source the dependencies too, refactor to make some
| optional if you really need to. But rewriting things is fun,
| yes? Nobody in management cared enough to push back on this,
| and at some point it became just the way things were done.
|
| So, protobuf1, which was perfectly serviceable, wasn't open
| sourced; it was rewritten into proto2. In that case migration
| did happen, and some fundamental improvements were made (e.g.
| proto1 didn't differentiate between byte arrays and strings),
| but as you say, migration was extremely tough and many aspects
| were arguably not improvements at all. Java codebases
| drastically over-use the builder/immutable object pattern IMO.
|
| And then Stubby wasn't open sourced, it was rewritten as gRPC
| which is "Stubby inspired" but without the really good parts
| that made Stubby awesome, IMO. gRPC is a shadow of its parent
| so no surprise no migration ever happened.
|
| And then Borg wasn't open sourced, it was rewritten as
| Kubernetes which is "Borg inspired" but without the really good
| part that make Borg awesome, IMO. Etc.
|
| There's definitely a theme there. I think only Blaze/Bazel is
| core infrastructure in which the open source version is
| actually genuinely the same codebase. I guess there must be
| others, just not coming to mind right now.
|
| Using the same format everywhere was definitely a good idea
| though. Maybe the JS implementations weren't great, but the
| consistency of the infrastructure and feature set of Stubby was
| a huge help to me back in the days when I was an SRE being on-
| call for a wide range of services. Stubby servers/clients are
| still the most insanely debuggable and _runnable_ system I ever
| came across, by far, and my experience is now a decade out of
| date so goodness knows what it must be like these days. At one
| point I was able to end a multi-day logs service outage, just
| using the built-in diagnostics and introspection tools that
| every Google service came with by default.
| ainar-g wrote:
| Re. bad tooling. grpcurl[1] is irreplaceable when working with
| gRPC APIs. It allows you to make requests even if you don't have
| the .proto around.
|
| [1]: https://github.com/fullstorydev/grpcurl
| CommonGuy wrote:
| One can use Kreya for a GUI version
| abdusco wrote:
| > make requests even if you don't have the .proto around
|
| Like this?
|
|     grpcurl -d '{"id": 1234, "tags": ["foo","bar"]}' \
|         grpc.server.com:443 my.custom.server.Service/Method
|
| How is that even possible? How could grpcurl know how to
| translate your request to binary?
| ainar-g wrote:
| If I recall correctly, gRPC has a server reflection service,
| and it's probably using that.
| jakjak123 wrote:
| I just build a cli in Java or Go. It literally takes minutes to
| build a client.
| mattboardman wrote:
| My biggest complaint with gRPC is proto3 making all nested type
| fields optional while making primitives always present with
| default values. gRPC is contract based so it makes no sense to me
| that you can't require a field. This is especially painful from
| an ergonomics viewpoint. You have to null-check every field with
| a non-primitive type.
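|
| In the Go bindings, for instance, that ends up looking like this
| (the types and field names are hypothetical):
|
|     // import "google.golang.org/grpc/status" and
|     // "google.golang.org/grpc/codes"
|
|     // Nested message: may be nil, so it must be checked.
|     cust := order.GetCustomer()
|     if cust == nil {
|         return nil, status.Error(codes.InvalidArgument,
|             "customer is required")
|     }
|
|     // Scalar: always "present", 0 if unset -- the two cases are
|     // indistinguishable unless the field is marked optional.
|     total := order.GetTotalCents()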
| dudus wrote:
| If I remember correctly the initial version allowed required
| fields but it caused all sorts of problems when trying to
| migrate protos because a new required field breaks all
| consumers almost by definition. So updating protos in isolation
| becomes tricky.
|
| The problem went away with all optional fields so it was
| decided the headache wasn't worth it.
| bunderbunder wrote:
| I used to work at a company that used proto2 as part of a
| homegrown RPC system that predated gRPC. Our coding standards
| strongly discouraged making anything but key fields required,
| for exactly this reason. They were just too much of a
| maintenance burden in the long run.
|
| I suspect that not having nullable fields, though, is just a
| case of letting an implementation detail, keeping the message
| representation compatible with C structs in the core
| implementation, bleed into the high-level interface. That
| design decision is just dripping with "C++ programmers
| getting twitchy about performance concerns" vibes.
| fisian wrote:
| That decision seems practical (especially at Google scale).
|
| I think the main problem with it is that you cannot
| distinguish whether the field has the default value or just
| wasn't set (which is just error-prone).
|
| However, there are solutions to this that add very little
| overhead to the code and to message size (see e.g. [1]).
|
| [1]: https://protobuf.dev/programming-guides/dos-donts/
| sebastos wrote:
| The choice to make 'unset' indistinguishable from 'default
| value' is such an absurdly boneheaded decision, and it
| boggles my mind that real software engineers allowed proto3
| to go out that way.
|
| I don't get what part of your link I'm supposed to be
| looking at as a solution to that issue? I wasn't aware of a
| good solution except to have careful application logic
| looking for sentinel values? (which is garbage)
| jsnell wrote:
| Yes, proto3 as released was garbage, but they later made
| it possible to get most proto2 behaviors via
| configuration.
|
| Re: your question, for proto3 a field that's declared as
| "optional" will allow distinguishing between set to
| default vs. not set, while non-"optional" fields don't.
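|
| With protoc-gen-go, for example, an "optional" scalar is
| generated as a pointer, so presence is a nil check (the message,
| field, and helper functions here are hypothetical):
|
|     // proto3:
|     //   message Update { optional int32 retries = 1; }
|
|     if u.Retries != nil {
|         apply(*u.Retries) // explicitly set, even if set to 0
|     } else {
|         applyDefault() // field was absent on the wire
|     }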
| pavon wrote:
| proto2 allowed both required fields and optional fields, and
| there were pros and cons to using both, but both were
| workable options.
|
| Then proto3 went and implemented a hybrid that was the worst
| of both worlds. They made all fields optional, but eliminated
| the introspection that let a receiver know if a field had
| been populated by the sender. Instead they silently populated
| missing fields with a hardcoded default that could be a
| perfectly meaningful value for that field. This effectively
| made all fields required for the sender, but without the
| framework support to catch when fields were accidentally not
| populated. And it made it necessary to add custom signaling
| between the sender and receiver to indicate message versions
| or other mechanisms so the receiver could determine which
| fields the sender was expected to have actually populated.
| evanmoran wrote:
| You can now detect field presence in several cases:
| https://protobuf.dev/programming-
| guides/field_presence/#pres...
|
| This is very solid with message types, but for basic types
| you can mark the field `optional` if needed as well
| (essentially making the value nullable).
| hot_gril wrote:
| proto2 required was basically unusable for nuanced reasons,
| so you had to make everything optional, when in many cases
| it was unnecessary. Leading to a lot of null vs 0 mistakes.
|
| Proto3 made primitives all non-optional, default 0. But
| messages were all still optional, so you could always wrap
| primitives that really needed to be optional. Then they
| added optionals for primitives too, implemented internally
| as wrapper messages.
| FridgeSeal wrote:
| According to the blog post of one of the guys who worked on
| proto3: the complexity around versioning and required fields
| was exacerbated because Google also has "middle boxes" that
| will read the protos and forward them on. Having a contract
| change between 2 services is fine, required fields are
| probably fine, but having random middle boxes really makes
| everything worse for no discernible benefit.
| silverlyra wrote:
| If you want to precisely capture when a field must be present
| (or will always be set on the server), the field_behavior
| annotation captures more of the nuance than proto2's required
| fields: https://github.com/googleapis/googleapis/blob/master/
| google/...
|
| You could (e.g.) annotate all key fields as IDENTIFIERs.
| Client code can assume those will always be set in server
| responses, but are optional when making an RPC request to
| create that resource.
|
| (This may just work in theory, though - I'm not sure which
| code generators have good support for field_behavior.)
| tantalor wrote:
| RPC/proto is for transport.
|
| Required/validation is for application.
| returningfory2 wrote:
| If that's true, why have types in proto at all? Shouldn't
| everything be an "any" type at the transport layer, and the
| application can validate the types are correct?
| usrnm wrote:
| Different types can and do use different encodings
| hot_gril wrote:
| proto3 added support for optional primitives sorta recently.
| I've always been happy without them personally, but it was
| enough of a shock for people used to proto2 that it made sense
| to add them back.
| sebastos wrote:
| Just out of curiosity, what domain were you working in where
| "0.0" and "no opinion" were _always_ the same thing? The lack
| of optionals has infuriated me for years and I just can't
| form a mental model of how anybody ever found this acceptable
| to work with.
| hot_gril wrote:
| Not really one domain in particular, just internal services
| in general. In many cases, the field is intended to be
| required in the first place. If not, surprisingly 0 isn't a
| real answer a lot of the time, so it means none or default:
| any kind of numeric ID that starts at 1 (like ticket
| numbers), TTLs, error codes, enums (0 is always UNKNOWN).
| Similarly with empty strings.
|
| I have a hard time thinking of places I really need both 0
| and none. The only example that comes to mind is building
| room numbers, in some room search message where we wanted
| null to mean wildcard. In those cases, it's not hard to
| wrap it in a message. Probably the best argument for
| optionals is when you have a lot of boolean request options
| where null means default, but even then I prefer instead
| naming them such that the defaults are all false, since
| that's clearer anyway.
|
| It did take some getting used to and caused some downstream
| changes to how we design APIs. I think we're better for it
| because the whole null vs 0 thing can be tedious and error-
| prone, but it's very opinionated.
| jakjak123 wrote:
| I had the same experience. It was a bit awkward for 6
| months, but down the line we learned to design better
| apis, and dealing with nullable values are tedious at
| best. It's just easier knowing that a string or integer
| will _never_ cause a nullpointer.
| jakjak123 wrote:
| Like nearly every time, empty string and 0 for integers can
| be treated the same as "no value" if you think about it.
| Are you sending data or sending opinions? Usually to force
| a choice, you would make an enum or a one-of where the zero-
| value means the client has forgotten to set it and it can
| be modelled as an API error. Whether the value was actually
| on the wire or not is not really that important.
| hot_gril wrote:
| Yeah, I think it's best to first rethink of null as just,
| not 0 and not any other number. What that means depends
| on the context.
|
| Tangent: I've seen an antipattern of using enums to
| describe the "type" of some polymorphic object instead of
| just using a oneof with type-specific fields. Which gets
| even worse if you later decide something fits multiple
| types.
| jakjak123 wrote:
| I love oneofs in languages with good support, but they
| are woeful in Golang and the "official" grpc-web message
| types.
| kortex wrote:
| 0 as a default for "no int" is tolerable; 0.0 as a
| default for "no float" is an absolute nightmare in any
| domain remotely related to math, machine learning, or
| data science.
|
| We dealt with a bug that for weeks was silently
| corrupting the results of trials pitting the performance
| of various algos against each other. Because a valid
| response was "no reply/opt out", combined with a bug in
| processing the "opt out" enum, also combined with a bug
| in score aggregation, functions were treated like they
| replied "0.0" instead of "confidence = None".
|
| It really should have defaulted to NaN for missing floats.
| jakjak123 wrote:
| Same, I rarely felt a need for this distinction.
| jakjak123 wrote:
| It was a little weird at first, but if you just read the 1.5
| pages on why protobuf decided to work like this, it made perfect
| sense to me. It will seem overcomplicated, though, to web
| developers who are used to being able to update all their
| clients on a whim at any moment.
| _zoltan_ wrote:
| In my latest project I actually needed an RPC library that was
| hardware-accelerated, and I was surprised gRPC doesn't do RDMA,
| for example. Why is that?
| cryptonector wrote:
| > Bad tooling
|
| Lolwut. This is what was always said about ASN.1 and the reason
| that this wheel has to be reinvented periodically.
| throwaway894345 wrote:
| It can be true for both ASN.1 and gRPC? Moreover, definitions
| of "bad" can vary.
| bunderbunder wrote:
| All good points. But I'd argue that the single worst part of gRPC
| is the impenetrability of its ecosystem. And I think that, in
| turn, is born of complexity. The thing is so packed with features
| and behaviors and temporal coupling and whatnot that it's
| difficult to produce a compatible third-party implementation.
| That means that, in effect, the only true gRPC implementation is
| the one that Google maintains. Which, in turn, means that the
| only languages with good enough gRPC support to allow betting
| your business on it are the ones that Google supports.
|
| And a lot of these features arguably have a poor cost/benefit
| tradeoff for anyone who isn't trying to solve Google problems. Or
| they introduce painful constraints such as not being consumable
| from client-side code in the browser.
|
| I keep wishing for an alternative project that only specifies a
| simpler, more compatible, easier-to-grok subset of gRPC's feature
| set. There's almost zero overlap between the features that I love
| about gRPC, and the features that make it difficult to advocate
| for adopting it at work.
| neonsunset wrote:
| It is possible to do quite well, as demonstrated by .NET.
|
| Edit: and, if I remember correctly, gRPC tooling for it is
| maintained by about 1-3 people that are also responsible for
| other projects, like System.Text.Json. You don't need numbers
| to make something that is nice to use; quite often, they make
| it more difficult, even.
| pixl97 wrote:
| >It is possible to do quite well, as demonstrated by {one of
| the other largest companies in the world}
|
| FTFY
| svaha1728 wrote:
| No disrespect to the other developers on the team, but James
| Newton-King is no ordinary developer.
| kodablah wrote:
| For the longest time (up until earlier this year[0]), you
| couldn't even get the proto details from a gRPC error. IME GP
| is correct, there are so many caveats to gRPC implementations
| that unless it is a prioritized, canonical implementation it
| will be missing things. It seems there are tons of gRPC
| libraries out there that fell behind or were missing features
| that you don't know until you need them (e.g. I recently had
| to do a big lift to implement HTTP proxy support in Tonic for
| a project).
|
| 0 - https://github.com/grpc/grpc-dotnet/issues/303
| neonsunset wrote:
| > unless it is a prioritized, canonical implementation it
| will be missing things
|
| Perhaps. But for now, such a canonical implementation (I
| assume you are referring to the Go one?) is painful to
| access and has an all-around really bad user experience. I'm
| simply pointing out that more ecosystems could learn from
| .NET's example (or just adopt it) rather than continuing to
| exist in their own bubble, and more engineers could have
| zero tolerance for tooling with bad UX as it becomes
| more popular.
|
| Now, with that said, I do not think gRPC is ultimately
| good, but I do think it's less bad than many other
| "language-agnostic" options - I've been burned badly enough
| by Avro, thanks. And, at least within .NET, accessing gRPC
| is even easier than OpenAPI-generated clients. If you care
| about overhead, Protobuf will always be worse than bespoke
| solutions like RKYV or MemoryPack.
|
| I'm looking forward to solutions inspired by gRPC yet made
| specifically on top of HTTP/3's WebTransport, but that is
| something that is yet to come.
| jakjak123 wrote:
| .NET looks quite good, as well as Swift actually. I have the
| most experience with Java, and its bindings are almost as nice
| as the .NET ones. I have also used Go quite a bit, and those are
| pretty much awful. It takes so much practice and knowledge to
| use them well in Go.
| PaulHoule wrote:
| People complain about any system which is more complex and
| performant than plain ordinary JSON. Remember how Microsoft
| pushed "Web Services" and introduced AJAX where the last letter
| was supposed to be XML?
|
| Microsoft could make the case that many many features in Web
| Services were essential to making them work but people figured
| out you could just exchange JSON documents in a half-baked way
| and... it works.
| doctorpangloss wrote:
| > But I'd argue that the single worst part of gRPC is the
| impenetrability of its ecosystem
|
| I have had the opposite experience. I visit exactly two
| repositories on GitHub, which seem to have the vast majority of
| the functionality I need.
|
| > The thing is so packed with features and behaviors and
| temporal coupling and whatnot that it's difficult to produce a
| compatible third-party implementation.
|
| Improbable did. But who cares? Why do we care about compatible
| third party implementations? The gRPC maintainers merge third
| party contributions. They care. Everyone should be working on
| one implementation.
|
| > features arguably have a poor cost/benefit tradeoff for
| anyone who isn't trying to solve Google problems.
|
| Maybe.
|
| We need less toil, less energy spent reinventing half of
| Kubernetes and half of gRPC.
| jakjak123 wrote:
| As someone in a very small company, no affiliation with any
| Google employees, gRPC and protobuf has been a godsend in many,
| many ways. My only complaint is that protoc is cumbersome af to
| use, and that is almost solved by buf.build. Except for our
| most used language, Java.
|
| Protobufs has allowed us to version, build and deliver native
| language bindings for a multitude of languages and platforms in
| a tiny team for years now, without a single fault. We have the
| power to refactor and design our platform and apis in a way
| that we never had before. We love it.
| epgui wrote:
| A lot of this kind of criticism rubs me the wrong way, especially
| complaining about having to use words or maths concepts, or
| having to learn new things. That's often not really a statement
| on the inherent virtue of a tool, and more of a statement on the
| familiarity of the author.
|
| I don't want to sound flippant, but if you don't want to learn
| new things, don't use new tools :D
| usrnm wrote:
| Sending a request and getting a response back is not a new
| concept; it's about as old as computer networks in general, and
| gRPC is the only framework that refers to this concept as
| "unary". This is the original argument from the article and I
| tend to agree with it
| epgui wrote:
| Monads and functors are nothing new either, but that doesn't
| mean giving them that name was a bad idea.
|
| Moreover, the term "unary" is used to distinguish from other,
| non-unary options: https://grpc.io/docs/what-is-grpc/core-
| concepts/
| throwaway894345 wrote:
| > I don't want to sound flippant, but if you don't want to
| learn new things, don't use new tools :D
|
| That's precisely the problem. The author wants to convince
| people (e.g., his colleagues) to use a new tool, but he has to
| convince them to learn a bunch of new things including a bunch
| of new things that aren't even necessary.
| jscheel wrote:
| The tooling around gRPC with bazel when using python is so bad
| it's almost impossible to work with, which is hilarious
| considering they both come from Google. Then I had additional
| problems getting it to work with Ruby. Then I had more problems
| getting it to work in k8s, because of load balancing with http/2.
| Combine those issues with a number of other problems I ran into
| with gRPC, and I ended up just building a small JSON-RPC
| implementation that fit our needs perfectly.
| doctorpangloss wrote:
| Another point of view is, don't use Bazel. In my experience,
| Gradle is less of a headache and well supported.
| jscheel wrote:
| For python? Gradle doesn't really support python.
| metadat wrote:
| Gradle is an anti-pattern. Just stick with Maven and live a
| happy life.
|
| As someone who's used Gradle tons, 6 months ago I wrote in
| detail about why not gradle:
|
| https://news.ycombinator.com/item?id=38875936
|
| Gradle still might be less bad than Bazel, though.
| cyberax wrote:
| I'm surprised the author doesn't mention ConnectRPC:
| https://connectrpc.com/
|
| It solves ALL the problems of vanilla gRPC, and it's even
| compatible with gRPC clients! It grew out of the Twirp protocol,
| which I liked so much I made a C++ implementation:
| https://github.com/Cyberax/twirp-cpp
|
| But the ConnectRPC folks went further and built a complete
| infrastructure for RPC, including a package manager (buf.build)
| and integration with observability:
| https://connectrpc.com/docs/go/observability/
|
| And most importantly, they also provide a library to do rich
| validation (mandatory fields, field limits, formats, etc):
| https://buf.build/bufbuild/protovalidate
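|
| A minimal sketch of enforcing those constraints from Go, assuming
| a hypothetical generated package userv1 and the protovalidate-go
| runtime:
|
|     package main
|
|     import (
|         "fmt"
|
|         "github.com/bufbuild/protovalidate-go"
|
|         userv1 "example.com/gen/user/v1" // hypothetical codegen
|     )
|
|     func main() {
|         v, err := protovalidate.New()
|         if err != nil {
|             panic(err)
|         }
|         // Constraints like (buf.validate.field).string.email
|         // live in the .proto; Validate enforces them at runtime.
|         msg := &userv1.User{Email: "not-an-email"}
|         if err := v.Validate(msg); err != nil {
|             fmt.Println("validation failed:", err)
|         }
|     }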
|
| Oh, and for the unbounded message size problem, you really need
| to use streaming. gRPC supports it, as does ConnectRPC.
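|
| For illustration, a server-streaming handler that sends a large
| payload in chunks instead of one giant message (hypothetical pb
| types and load helper, plain gRPC-Go):
|
|     // Hypothetical schema: rpc Download(DownloadRequest)
|     // returns (stream Chunk).
|     func (s *server) Download(req *pb.DownloadRequest,
|         stream pb.FileService_DownloadServer) error {
|         data, err := s.load(req.GetName()) // assumed helper
|         if err != nil {
|             return err
|         }
|         const chunk = 64 * 1024 // stay well under size limits
|         for off := 0; off < len(data); off += chunk {
|             end := min(off+chunk, len(data))
|             if err := stream.Send(&pb.Chunk{Data: data[off:end]}); err != nil {
|                 return err
|             }
|         }
|         return nil
|     }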
| athorax wrote:
| The author does mention it in the article, and is also a
| contributor to supporting tooling:
|
| https://github.com/sudorandom/protoc-gen-connect-openapi
| sudorandom wrote:
| Author here: I definitely should have been more explicit about
| my love of ConnectRPC, buf, protovalidate, etc. I do mention
| ConnectRPC, but maybe not as loudly as I could have. I
| definitely try to avoid confessing my love for ConnectRPC in
| every post, but sometimes it's hard not to, because they've
| made such good strategic decisions that just make sense and
| round out the ecosystem so well.
| cyberax wrote:
| My fault, I did read the article but I overlooked the
| ConnectRPC mention.
| hot_gril wrote:
| I don't understand why there isn't an HTTP/1 mode for gRPC. It
| would cover the super common use case of client-to-server
| calls. Give people who already have a typical JSON-over-HTTP
| API something that's the same except more efficient and with a
| nicer API spec.
|
| You know what's ironic? Google App Engine doesn't support
| HTTP/2. Actually, a lot of platforms don't.
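|
| For what it's worth, this is roughly what the Connect protocol
| gives you; a sketch of a plain HTTP/1.1 JSON call against a
| hypothetical Connect-style endpoint:
|
|     package main
|
|     import (
|         "fmt"
|         "io"
|         "net/http"
|         "strings"
|     )
|
|     func main() {
|         // Unary Connect calls are plain POSTs to
|         // /<package.Service>/<Method>; URL is hypothetical.
|         resp, err := http.Post(
|             "https://api.example.com/greet.v1.GreetService/Greet",
|             "application/json",
|             strings.NewReader(`{"name": "World"}`))
|         if err != nil {
|             panic(err)
|         }
|         defer resp.Body.Close()
|         body, _ := io.ReadAll(resp.Body)
|         fmt.Println(resp.Status, string(body))
|     }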
| bunderbunder wrote:
| The streaming message transfer modes are the main thing that
| make it difficult.
| hot_gril wrote:
| Streaming seems like it'd work without too much effort. It'd
| be less efficient for sure, but it's also not a very common
| use case.
| bunderbunder wrote:
| In general, I agree. But my understanding is that the
| problem isn't streaming in the abstract; it's supporting
| certain details of the streaming protocol outlined in the
| gRPC spec.
| pcj-github wrote:
| FWIW I never worked at Google, and I've used protobuf/gRPC
| extensively at work and in nearly all of my side projects.
| Personally, I think it's great overall. I do wish trailers were
| an optional feature, though.
| kookamamie wrote:
| Insisting on a particularly exotic flavor of HTTP/2 is its most
| severe design flaw, I think. Especially as it could have worked
| in an agnostic manner, e.g. on top of WebSockets.
| sudorandom wrote:
| Author here: it's nerdy web trivia, but HTTP trailers are
| actually in the HTTP/1.1 spec, although very few browsers, load
| balancers, programming languages, etc. implemented them at the
| time, since they weren't super useful for the web. You are
| definitely correct that it is an exotic feature that often gets
| forgotten about.
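|
| For the curious, Go's net/http can emit HTTP/1.1 trailers on a
| chunked response; a minimal sketch (the Grpc-Status name here
| just mirrors what gRPC uses):
|
|     package main
|
|     import (
|         "fmt"
|         "net/http"
|     )
|
|     func handler(w http.ResponseWriter, r *http.Request) {
|         // Declare the trailer before writing the body...
|         w.Header().Set("Trailer", "Grpc-Status")
|         fmt.Fprintln(w, "hello")
|         // ...then set its value afterwards; it is sent after
|         // the final chunk.
|         w.Header().Set("Grpc-Status", "0")
|     }
|
|     func main() {
|         http.HandleFunc("/", handler)
|         http.ListenAndServe(":8080", nil)
|     }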
| FridgeSeal wrote:
| > Why does gRPC have to use such a non-standard term for this
| that only mathematicians have an intuitive understanding of? I
| have to explain the term every time I use it.
|
| Who are you working with lol? Nobody I've worked with has
| struggled with this concept, and I've worked with a range of
| devs, including very junior and non-native-English speakers.
|
| > Also, it doesn't pass my "send a friend a cURL example" test
| for any web API.
|
| Well yeah. It's not really intended for that use-case?
|
| > The reliance on HTTP/2 initially limited gRPC's reach, as not
| all platforms and browsers fully supported it
|
| Again, not the intended use-case. Where does this web-browsers-
| are-the-be-all-end-all-of-tech attitude come from? Not everything
| needs to be based around browser support. I do agree on HTTP/3
| support lacking though.
|
| > lack of a standardized JSON mapping
|
| Because JSON has an extremely anaemic set of types that either
| fail to encode the same semantics, or require all sorts of extra
| verbosity to encode. I have the opposite experience with
| protobuf: I know the schema, so I know the data I get will be
| valid; I don't need to rely on "look at the json to see if I got
| the field capitalisation right".
|
| > It has made gRPC less accessible for developers accustomed to
| JSON-based APIs
|
| Because _god forbid_ they ever had to learn anything new right?
| Nope, better for the rest of us to just constantly bend over
| backwards to support the darlings who "only know json" and
| apparently can't learn anything else, ever.
|
| > Only google would think not solving dependency management is
| the solution to dependency management
|
| Extremely good point. Will definitely be looking at Buf the next
| time I touch GRPC things.
|
| GRPC is a lower-overhead, binary RPC for server-to-server or
| client-server use cases that want better performance and the
| faster integration that a shared schema/IDL permits. Being able
| to drop in some proto files and automatically _have_ a package
| with the methods available, without having to spend time wiring
| up URLs and writing types and parsing logic, is amazing. Sorry
| it's not a good fit for serving your webpage; criticising it for
| not being good at web stuff is like blaming a tank for not
| winning street races.
|
| GRPC isn't without its issues and shortcomings. I'd like to see
| better enums and a _stronger_ type system, and defs HTTP/3 or
| raw QUIC transport.
| lanstin wrote:
| I use protobuf to specify my protocol, then generate a
| swagger/OpenAPI spec, then use some swagger codegen to generate
| REST client libraries. For a proxy server I have to fill in
| some stub methods to parse the JSON and turn it into a gRPC
| call, but for the gRPC server there is a library that
| generates a REST service listener that just calls into the gRPC
| server code. It works fine. I had to annotate the proto file to
| say which REST path to use.
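|
| That flow sounds like grpc-gateway; a rough sketch of the proxy
| side, with a hypothetical generated package and endpoint:
|
|     package main
|
|     import (
|         "context"
|         "net/http"
|
|         "github.com/grpc-ecosystem/grpc-gateway/v2/runtime"
|         "google.golang.org/grpc"
|         "google.golang.org/grpc/credentials/insecure"
|
|         pb "example.com/gen/greet/v1" // hypothetical codegen
|     )
|
|     func main() {
|         mux := runtime.NewServeMux()
|         opts := []grpc.DialOption{
|             grpc.WithTransportCredentials(insecure.NewCredentials()),
|         }
|         // Generated from the google.api.http annotations in the
|         // .proto; proxies REST+JSON to the gRPC server.
|         err := pb.RegisterGreetServiceHandlerFromEndpoint(
|             context.Background(), mux, "localhost:50051", opts)
|         if err != nil {
|             panic(err)
|         }
|         http.ListenAndServe(":8080", mux)
|     }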
| sudorandom wrote:
| Hey, author here:
|
| >> Why does gRPC have to use such a non-standard term for this
| that only mathematicians have an intuitive understanding of? I
| have to explain the term every time I use it.
|
| > Who are you working with lol? Nobody I've worked with has
| struggled with this concept, and I've worked with a range of
| devs, including very junior and non-native-English speakers.
|
| This is just a small complaint. It's super easy to explain what
| unary means but it's often infinitely easier to use a standard
| industry term and not explain anything.
|
| >> Also, it doesn't pass my "send a friend a cURL example" test
| for any web API.
|
| > Well yeah. It's not really intended for that use-case?
|
| Yeah, I agree. Being easy to use isn't the intended use-case
| for gRPC.
|
| >> The reliance on HTTP/2 initially limited gRPC's reach, as
| not all platforms and browsers fully supported it
|
| > Again, not the intended use-case. Where does this web-
| browsers-are-the-be-all-end-all-of-tech attitude come from? Not
| everything needs to be based around browser support. I do agree
| on HTTP/3 support lacking though.
|
| I did say browsers here, but the "platform" I am thinking of
| right now is actually Unity, since I work in the game
| industry. Unity doesn't have support for HTTP/2. It seems that
| I have different experiences than you, but I still think this
| point is valid. gRPC didn't need to be completely broken on
| HTTP/1.1.
|
| >> lack of a standardized JSON mapping
|
| > Because JSON has an extremely anaemic set of types that
| either fail to encode the same semantics, or require all sorts
| of extra verbosity to encode. I have the opposite experience
| with protobuf: I know the schema, so I know the data I get will
| be valid; I don't need to rely on "look at the json to see if I
| got the field capitalisation right".
|
| I agree that it's much easier to stick to protobuf once you're
| completely bought-in, but not every project is greenfield.
| Before there was a well-defined JSON mapping, and tooling that
| adhered to it, it was very hard to transition from JSON to
| protobuf. Now it's a lot easier.
|
| >> It has made gRPC less accessible for developers accustomed
| to JSON-based APIs
|
| > Because god forbid they ever had to learn anything new right?
| Nope, better for the rest of us to just constantly bend over
| backwards to support the darlings who "only know json" and
| apparently can't learn anything else, ever.
|
| No comment. I think we just have different approaches to
| teaching.
|
| >> Only google would think not solving dependency management is
| the solution to dependency management
|
| > Extremely good point. Will definitely be looking at Buf the
| next time I touch GRPC things.
|
| I'm glad to hear it! I've had nothing but excellent
| experiences with buf tooling and their employees.
|
| > GRPC is a lower-overhead, binary RPC for server-to-server or
| client-server use cases that want better performance and the
| faster integration that a shared schema/IDL permits. Being able
| to drop in some proto files and automatically have a package
| with the methods available, without having to spend time wiring
| up URLs and writing types and parsing logic, is amazing. Sorry
| it's not a good fit for serving your webpage; criticising it
| for not being good at web stuff is like blaming a tank for not
| winning street races.
|
| Without looping in the frontend (aka the web), the
| contract-based philosophy of gRPC is much less compelling,
| because you would have to use a completely different contract
| language for service-to-service (protobuf) than for
| frontend-to-service (maybe OpenAPI). For the record: I very
| much prefer protobuf to OpenAPI as the "contract source of
| truth". gRPC-Web exists because people wanted to make this
| work, but they built their street racer with some tank parts.
|
| > GRPC isn't without its issues and shortcomings. I'd like to
| see better enums and a stronger type system, and defs HTTP/3 or
| raw QUIC transport.
|
| Totally agree!
| randomdata wrote:
| _> It's super easy to explain what unary means but it's
| often infinitely easier to use a standard industry term and
| not explain anything._
|
| What's the standard term? While I agree that unary isn't
| widely known, I don't think I have ever heard of any other
| word used in its place.
|
| _> gRPC didn't need to be completely broken on HTTP/1.1._
|
| It didn't need to per se (although you'd lose a lot of the
| reason why it was created), but as gRPC was designed
| before HTTP/2 was finalized, it was still believed that
| everyone would want to start using HTTP/2. HTTP/1 support
| seemed unnecessary.
|
| And as it was designed before HTTP/2 was finalized, it is not
| like it could have ridden on the coattails of libraries that
| have since figured out how to commingle HTTP/1 and HTTP/2.
| They had to write HTTP/2 from scratch in order to implement
| gRPC, so supporting HTTP/1 as well would have greatly ramped
| up the complexity.
|
| Frankly, their assumption should have been right. It's a
| sorry state that they got it wrong.
| tgma wrote:
| gRPC is deliberately designed not to be dependent on protobuf
| for its message format; it can be used to transfer other
| serialization formats. However, the canonical stub generator,
| which is not hard to replace at all, assumes proto, so when
| people hear gRPC they really think of protobuf-over-gRPC. Most
| of the complaints should be directed at protobuf, with or
| without gRPC.
|
| The primary misfeature of gRPC itself, irrespective of
| protobuf, is relying on trailers for the status code, which
| hindered its adoption in web browsers without an edge proxy
| that could translate between the gRPC and gRPC-Web wire
| formats. That alone IMO hindered its universal applicability
| and adoption quite a bit.
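|
| In gRPC-Go, for instance, the codec is pluggable; a sketch of
| registering a JSON codec in place of protobuf (illustrative,
| not production-ready):
|
|     package jsoncodec
|
|     import (
|         "encoding/json"
|
|         "google.golang.org/grpc/encoding"
|     )
|
|     // jsonCodec satisfies grpc-go's encoding.Codec interface,
|     // swapping protobuf for plain JSON as the message format.
|     type jsonCodec struct{}
|
|     func (jsonCodec) Marshal(v any) ([]byte, error) {
|         return json.Marshal(v)
|     }
|
|     func (jsonCodec) Unmarshal(data []byte, v any) error {
|         return json.Unmarshal(data, v)
|     }
|
|     func (jsonCodec) Name() string { return "json" }
|
|     func init() {
|         // Clients opt in with grpc.CallContentSubtype("json").
|         encoding.RegisterCodec(jsonCodec{})
|     }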
| bootloop wrote:
| Do you know of an example where this is done? I didn't know
| that and we are currently using a customized wire format (based
| on a patched Thrift), so I thought gRPC wouldn't be an option
| for us.
| jakjak123 wrote:
| This is the true design issue with gRPC as I see it; adoption
| would be way bigger without it. I love protobuf, though; gRPC
| is just alright. At least gRPC makes it so much simpler to
| build powerful automation and tooling than the wild west of
| randomly created 'json'-ish REST-ish APIs.
| cherryteastain wrote:
| For me, the breaking point was when I saw the C++ bindings
| unironically recommend [1] that you use terrible anti-patterns
| such as "delete this". I find it unlikely that all these
| incredibly well paid Google engineers are unaware of how people
| avoid these anti-patterns in idiomatic C++ (e.g. with
| std::shared_ptr). The only remaining sensible explanation is
| that Google-internal gRPC C++ tooling must have utilities that
| abstract away this ugly underbelly, which we mere mortals are
| not privy to.
|
| [1] https://grpc.io/docs/languages/cpp/callback/
___________________________________________________________________
(page generated 2024-06-27 23:01 UTC)