[HN Gopher] REST vs GraphQL vs gRPC
       ___________________________________________________________________
        
       REST vs GraphQL vs gRPC
        
       Author : gibbonsd1
       Score  : 128 points
       Date   : 2021-03-15 14:40 UTC (8 hours ago)
        
 (HTM) web link (www.danhacks.com)
 (TXT) w3m dump (www.danhacks.com)
        
       | chrismorgan wrote:
       | Not loading for me (PR_CONNECT_RESET_ERROR on Firefox,
       | ERR_CONNECTION_RESET on Chrome).
       | 
       | https://web.archive.org/web/20210315144620/https://www.danha...
        
       | _ZeD_ wrote:
       | ... vs soap?
        
         | bluejekyll wrote:
         | vs corba?
         | 
         | Between soap and grpc, grpc is the better choice at this point.
         | It's simpler to implement, and has decent multi-language
         | support.
        
       | sillyquiet wrote:
       | In my opinion, it makes very little sense to compare GraphQL to
       | REST from a client perspective - if you are only going to be
       | hitting a single API endpoint, use REST (or gRPC I guess). The
       | overhead of GraphQL doesn't make it worth using at that scale.
       | 
       | The strength and real benefit of GraphQL comes in when you have
       | to assemble a UI from multiple data sources and reconcile that
       | into a negotiable schema between the server and the client.
        
         | fastball wrote:
         | Though there are also solutions like Hasura where GraphQL makes
         | sense at approximately any scale because it allows you to
         | create an API from nothing in about 10 minutes.
        
       | claytongulick wrote:
       | Surprised no one has mentioned what (to me) is the killer feature
       | of REST, JSON-patch [1]
       | 
       | Completely solves the PUT verb mutation issue and even allows
       | event-sourced like distributed architectures with readability (if
       | your patch is idempotent)
       | 
       | I married it to mongoose [2] and added an extension called json-
       | patch-rules to whitelist/blacklist operations [3] and my API life
       | became a very happy place.
       | 
       | I've replaced hundreds of endpoints with json-patch APIs and some
       | trivial middleware.
       | 
       | When you couple that stack with fast-json-patch [4] on the client
       | you just do a simple deep compare between a modified object and a
        | cloned one to construct a patch doc.
       | 
       | This is the easiest and most elegant stack I've ever worked with.
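        | 
        | The client side is tiny - a sketch using fast-json-patch [4]
        | (endpoint path invented for illustration):
        | 
        |     import { compare, deepClone } from 'fast-json-patch';
        | 
        |     const original = await fetch('/api/users/3').then(r => r.json());
        |     const modified = deepClone(original);
        |     modified.email = 'new@example.com';
        | 
        |     // diff the clone against the edits into an RFC 6902 patch
        |     // doc, e.g. [{ op: 'replace', path: '/email', value: '...' }]
        |     const patch = compare(original, modified);
        | 
        |     await fetch('/api/users/3', {
        |       method: 'PATCH',
        |       headers: { 'Content-Type': 'application/json-patch+json' },
        |       body: JSON.stringify(patch),
        |     });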
       | 
       | [1] http://jsonpatch.com/
       | 
       | [2] https://www.npmjs.com/package/mongoose-patcher
       | 
       | [3] https://github.com/claytongulick/json-patch-rules
       | 
       | [4] https://www.npmjs.com/package/fast-json-patch
        
       | [deleted]
        
       | meowzero wrote:
       | Another con of GraphQL (and probably GRPC) is caching. You
       | basically get it for free with REST.
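        | 
        | A minimal sketch of what "for free" means in practice (express
        | server; the user lookup is a stub for illustration):
        | 
        |     import express from 'express';
        | 
        |     const app = express();
        |     // stub lookup for illustration
        |     const getUser = async (id: string) => ({ id, name: 'banana' });
        | 
        |     app.get('/users/:id', async (req, res) => {
        |       // any plain HTTP cache (browser, CDN, reverse proxy) can
        |       // now cache this response, no client library required
        |       res.set('Cache-Control', 'public, max-age=60');
        |       res.json(await getUser(req.params.id));
        |     });
        | 
        |     app.listen(3000);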
       | 
       | REST can also return protobufs, with content type
       | application/x-protobuf. Heck, it can return any Content-Type. It
       | doesn't have to be confined to JSON.
       | 
       | GRPC needs to support the language you're using. It does support
       | a lot of the popular languages now. But most languages have some
       | sort of http server or client to handle REST.
        
         | rhacker wrote:
          | Well actually, it can be more advanced with GraphQL. GQL
          | clients have the ability to invalidate caches if they know
          | the same ID has been deleted or edited, and can in some
          | cases even avoid a new fetch.
         | 
         | That being said some of those advanced use cases may be off by
         | default in Apollo.
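          | 
          | For example, with Apollo Client's normalized cache you can
          | evict by ID yourself (a sketch; type/id values invented):
          | 
          |     import { InMemoryCache } from '@apollo/client';
          | 
          |     const cache = new InMemoryCache();
          |     // entries are normalized by __typename + id, so after a
          |     // delete mutation you can drop the entry and collect any
          |     // now-dangling references without refetching everything
          |     cache.evict({
          |       id: cache.identify({ __typename: 'User', id: 3 }),
          |     });
          |     cache.gc();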
        
           | codegladiator wrote:
           | How will the client know ?
        
         | ec109685 wrote:
          | We solve the cacheability part by supporting aliases for
          | queries: we extended the GraphQL console to support saving a
          | query with an alias.
         | 
         | Also, with gzip, the size of json is not a big deal given
         | redundant fields compress well.
        
         | phaedryx wrote:
         | I believe that GraphQL handles this with "persisted queries."
         | Basically, you ask the server to "run standard query
         | 'queryname'." Since it is a standard, predefined query you can
         | cache the results.
         | 
         | Apollo support link: https://www.apollographql.com/docs/apollo-
         | server/performance...
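          | 
          | The request shape is roughly (hash value invented):
          | 
          |     // instead of POSTing the full query text, send a GET with
          |     // the sha256 of a predefined query -- which plain HTTP
          |     // caches can then treat like any other GET
          |     const params = new URLSearchParams({
          |       operationName: 'GetUser',
          |       variables: JSON.stringify({ id: 3 }),
          |       extensions: JSON.stringify({
          |         persistedQuery: { version: 1, sha256Hash: '3d2ab...' },
          |       }),
          |     });
          |     const res = await fetch('/graphql?' + params);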
        
           | thinkingkong wrote:
            | The browser supports built-in caching without requiring a
           | specific library; additionally the infrastructure of the web
           | provides this as well.
        
         | tlrobinson wrote:
         | I think the benefits of HTTP caching are often exaggerated,
         | especially for APIs and single page applications.
         | 
         | Often I'll want much more control over caching and cache
         | invalidation than what you can do with HTTP caching.
         | 
          | I'd be interested to see an analysis of major websites' usage of
         | HTTP caching on non-static (i.e. not images, JS, etc)
         | resources. I bet it's pretty minimal.
        
           | jayd16 wrote:
           | You can put a RESTful response on S3 (or even stub a whole
           | service) but AFAIK you can't do that for gRPC or GraphQL.
        
         | cogman10 wrote:
         | What popular languages aren't supported by GRPC?
        
           | meowzero wrote:
           | Scala, Swift, Rust, C, etc. I guess with Scala you can
           | probably use the Java one. But they have one for Kotlin...
        
             | sshb wrote:
             | Swift is at least supported via Objective-C, and swift-grpc
             | looks solid https://github.com/grpc/grpc-swift
        
             | chills wrote:
             | Scala has several (approximately one per effect library):
             | https://scalapb.github.io/docs/grpc/
        
               | meowzero wrote:
               | That's actually cool. It's been a while since I worked on
               | GRPC with Scala. Back then we only had that thin wrapper
               | around Java.
        
             | LawnGnome wrote:
              | Rust has tonic, which I've used to good effect as both a
              | client and a server.
        
               | onei wrote:
               | I'll second that. Using prost directly was a little
                | rough, but with tonic on top it's been a dream.
        
       | throwaway4good wrote:
        | Am I the only one who simply does remote procedure calls over
       | http(s) via JSON? Not REST as in resource modelling but simply
       | sending a request serialized as a JSON object and getting a
       | response back as a JSON object.
        
         | [deleted]
        
         | sneak wrote:
         | I've done JSON-RPC at scale before and the one downside to it
         | is that you have to write a custom caching proxy for readonly
         | calls that understands your API.
         | 
         | With REST you can just use a normal HTTP caching proxy for all
         | the GETs under certain paths, off the shelf.
         | 
         | Using a hybrid (JSON-RPC for writes and authenticated reads,
         | REST for global reads) would have saved me a lot of time spent
         | building and maintaining a JSON-RPC caching layer.
         | 
         | There is benefit to a GET/POST split, and JSON-RPC forces even
         | simple unauthenticated reads into a POST.
         | 
         | The other issue with JSON-RPC is, well, json. It's not the
         | worst, but it's also not the best. json has no great
         | canonicalization so if you want to do signed requests or
         | responses you're going to end up putting a string of json
          | (inner) into a key's value at some point. Doing that in
         | protobuf seems less gross to me.
        
           | hashkb wrote:
           | If I'm going to neuter HTTP like that, I at least do RPC over
           | websockets for realtime feel. And I still usually run it
           | through the whole Rails controller stack so I don't drive
           | myself insane.
        
           | throwaway4good wrote:
           | Personally I prefer to have explicit control over the caching
           | mechanism rather than leaving it to network elements or
           | browser caching.
           | 
            | That is, explicitly cache the information in your JavaScript
            | frontend or have your backend explicitly cache. That way it
            | is easy to understand, and you can also control under what
            | circumstances a cache is invalidated.
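            | 
            | E.g. the frontend half can be as dumb as (a sketch):
            | 
            |     const cache = new Map<string, Promise<unknown>>();
            | 
            |     function cachedCall(url: string): Promise<unknown> {
            |       if (!cache.has(url)) {
            |         cache.set(url, fetch(url).then(r => r.json()));
            |       }
            |       return cache.get(url)!;
            |     }
            | 
            |     // invalidation is explicit and in your hands:
            |     function invalidate(url: string) {
            |       cache.delete(url);
            |     }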
        
             | sneak wrote:
             | > _Personally I prefer to have explicit control over the
             | caching mechanism rather than leaving it to network
             | elements or browser caching._
             | 
             | I'm not talking about browser caching, I'm talking about
             | the reverse proxy that fronts your ("backend") service to
             | the internet. High traffic global/unauthenticated reads,
             | especially those that never change, should get cached by
             | the frontend (of the "backend", not the SPA) reverse proxy
             | and not tie up appservers. (In our case, app servers were
             | extremely fat, slow, and ridiculously slow to scale up.)
        
               | throwaway4good wrote:
               | I am sure you have good reasons for your concrete design
               | but in the general case: Why not simply build the caching
               | into your backend services rather than having a proxy do
               | it based on the specifics of http protocol? It would be
               | simpler and far more powerful.
        
         | danpalmer wrote:
         | This is fine at a small scale. When you're one dev or a small
         | team you can understand the whole system and you'll benefit
         | from this simplicity.
         | 
         | When you're many devs, many APIs, many resources, it really
         | pays to have a consistent, well-defined way to do this. GraphQL
         | is very close to what you've described, with some more defined
         | standards. GRPC is close as well, except the serialisation
         | format isn't JSON, it's something more optimised.
         | 
         | As a team grows these sorts of standards emerge from the first-
         | pass versions anyway. These just happen to be pre-defined ones
         | that work well with other things that you could choose to use
         | if you wanted to.
        
           | sagichmal wrote:
           | It's fine at medium and large scale, too, as long as the
           | service doesn't change its API very much, and/or it doesn't
           | have too many consumers. It only breaks down when managing
           | change becomes too difficult. IMO way way too many people opt
           | in to the complexity of IDL-based protocols without good
           | reason.
        
         | corytheboyd wrote:
         | No, I have seen many such approaches. It draws undue criticism
         | when the actual REST API starts to suffer due to people getting
         | lazy, at which point they lump the RPC style calls into the
         | blame.
        
         | throwaway4good wrote:
          | Just to be clear - I am not suggesting JSON-RPC, as there is
          | no envelope and the name of the invoked procedure is in the
          | HTTP request line.
         | 
          | For example:
          | 
          |     POST /api/listPosts HTTP/1.1
          |     { userId: "banana", fromDate: 2342342342, toDate: 2343242 }
          | 
          | Response:
          | 
          |     HTTP/1.1 200 OK
          |     [ { id: 32432, title: "Happy banana", userId: "banana" }, ... ]
          | 
          | Or in case of an error:
          | 
          |     HTTP/1.1 500 Internal Server Error
          |     { type: "class name of exception raised server side",
          |       message: "Out of bananas" }
          | 
          | The types can be specified with TypeScript if needed.
        
           | throwaway4good wrote:
            | The point is to get free of the resource modelling paradigm,
            | the meaning of various http methods, and potential caching.
            | And also not have the silly overhead of JSON-RPC.
        
         | yeswecatan wrote:
         | My team has started doing this for workflow orchestration. When
         | our workflows (implemented in Cadence) need to perform some
         | complex business logic (grab data from 3 sources and munge, for
         | example) we handle that with a RPC-style endpoint.
        
       | jrsj wrote:
        | Big missing con for GraphQL here -- optimization. Doing the right
        | thing in simple cases is easy; in more complex cases you're
        | looking at having to do custom operations based on query
        | introspection, which is an even bigger pain in the ass than using
        | REST in the first place - UNLESS all of your data is in one
        | database, OR, if you're using it as a middleman between your
        | clients and other backend services, you have a single read-
        | through cache like Facebook's, which allows you to basically act
        | as if everything were in a single database.
        
       | sshb wrote:
       | I've recently stumbled upon WebRPC
       | https://github.com/webrpc/webrpc
       | 
       | It solves gRPC's inability to work nicely with web browsers.
        
         | Sean-Der wrote:
         | https://github.com/jsmouret/grpc-over-webrtc is another way to
         | get into the browser.
         | 
         | It's nice that you don't have to do any translation. Just gRPC
         | in/out of the browser.
        
         | alexhutcheson wrote:
         | Are you familiar with gRPC-web?
         | 
         | https://github.com/grpc/grpc-web
        
           | sshb wrote:
           | Yes, it bloats js bundle size quite a lot due to protobuf.
        
       | ledauphin wrote:
       | a big thing not called out here that both gRPC and GraphQL
       | natively do, but that REST does not natively do, is schema
       | definition for the payloads.
       | 
       | It's massively useful to know exactly what 'type' a received
       | payload is, as well as to get built-in, zero-boilerplate feedback
       | if the payload you construct is invalid.
        
         | sagichmal wrote:
         | Sure, but that utility also carries massive costs: the
         | infrastructure and tooling required to work with those schema
         | definitions, especially as they change over time. It's
         | certainly not the case that the benefits always, or even
         | usually, outweigh those costs.
        
       | TeeWEE wrote:
        | For people who didn't try gRPC yet, this is for me the winning
        | feature:
        | 
        | "Generates client and server code in your programming language.
        | This can save engineering time from writing service calling code"
        | 
        | It saves around 30% development time on features with lots of API
        | calls. And it scales better as the project grows, since there is
        | a strict contract.
        | 
        | Human readability is overrated for APIs.
        
         | adamkl wrote:
         | Code generation and strong contracts are good (and C#/Java
         | developers have been doing this forever with SOAP/XML), but
         | they do place some serious restrictions on flexibility.
         | 
         | I'm not sure how gRPC handles this, but adding an additional
         | field to a SOAP interface meant regenerating code across all
         | the clients else they would fail at runtime while deserializing
         | payloads.
         | 
         | A plus for GraphQL is that because each client request is a
         | custom query, new fields added to the server have no impact on
         | existing client code. Facebook famously said in one of their
         | earlier talks on GraphQL that they didn't version their API,
         | and have never had a breaking change.
         | 
            | Really, I don't think gRPC and GraphQL should even be
            | compared, since they support radically different use cases.
        
           | malkia wrote:
            | Learned that at Google: for each service method, introduce
            | individual Request and Response messages, even if you could
            | reuse one. This way you can extend without breaking.
            | 
            | Also, never reuse an old/deleted field (number), and be very
            | careful if you change the type (better not to).
        
             | fatnoah wrote:
             | These were basic principles I put in place at a previous
             | company that had a number of API-only customers.
             | Request/Response messages. Point releases could only
             | contain additive changes. Any changes to existing
             | behaviors, removal of fields, or type changes required
             | incrementing the API version, with support for current and
             | previous major versions.
        
           | mattnewton wrote:
           | > I'm not sure how gRPC handles this, but adding an
           | additional field to a SOAP interface meant regenerating code
           | across all the clients else they would fail at runtime while
           | deserializing payloads.
           | 
            | This is basically the reason every field in proto2 tends to
            | be marked optional and proto3 fields are "optional" by
            | default. IIRC the spec says to just ignore these fields if
            | they aren't set, or if they are present but the parser
            | doesn't know how to use them (but not to delete them, in
            | case the message needs to be forwarded. Of course this only
            | works if you don't reserialize it. Edit: this is not true,
            | see below).
        
             | malkia wrote:
             | I do remember some debate around required/optional in
             | proto2, and how the consensus was to not have required in
             | proto3 - parties had good arguments on both sides, but I
             | think ultimately the backward-forward compatibility
             | argument won. With required, you can never retire a field,
             | and older service/client would not work correctly. I
             | haven't used proto2 since then, been using proto3 - but was
             | not aware of what another poster here mentioned about proto
             | 3.5 - so now have to read...
        
             | jeffbee wrote:
             | Even if a process re-serializes a message, unknown fields
             | will be preserved, if using the official protobuf libraries
             | proto2 or 3.5 and later. Only in 3.0 did they drop unknown
             | fields, which was complete lunacy. That decision was
             | reverted for proto 3.5.
             | 
             | Some of the APIs (C++ for example) provide methods to
             | access unknown fields, in case they were only mostly
             | unknown.
        
               | malkia wrote:
               | Oh, I see now - https://github.com/protocolbuffers/protob
               | uf/releases/tag/v3.... (General) - I wasn't aware of that
               | as I think I've started using proto3 after that version.
               | Good to know.
        
           | AlexITC wrote:
           | Protobuf was designed to keep backwards/forwards
           | compatibility, you can easily add any fields to your messages
           | without breaking old clients at the network layer (unless you
           | intend to break them at the application layer);
        
         | emodendroket wrote:
          | So when are SOAP and WSDL coming back into vogue?
        
         | andrew_ wrote:
         | You don't get much more human readable than GraphQL syntax.
         | There are now oodles of code generation tools available for
          | GraphQL schemas which take most of the heavy lifting out of
         | the equation.
        
           | tmpz22 wrote:
           | Do the code generators create efficient relational queries? I
           | don't know about everyone else but my production data is
           | highly relational.
        
             | jrockway wrote:
             | I use gqlgen for a Go backend and the combination of our
             | schema design and the default codegen results in it opening
             | one database connection for every row that ends up in the
             | output. You can manually adjust it to not do that, but it
             | doesn't seem like a good design to me.
             | 
             | (It also does each of these fetches in a separate
             | goroutine, leading me to believe that it's really designed
             | to be a proxy in front of a bunch of microservices, not an
             | API server for a relational database. Even in that case,
             | I'm not convinced it's an entirely perfect design -- for a
             | large resultset you're probably going to pop the circuit
             | breaker on that backend when you make 1000 requests to it
             | in parallel all at the exact same instant. Because our
             | "microservice" was Postgres, we very quickly determined
             | where to set our max database connection limit, because
             | Postgres is particularly picky about not letting you open
             | 1000 connections to it.
             | 
             | I had the pleasure of reading the generated code and
             | noticing the goroutine-per-slice-element design when our
             | code wrapped the entire request in a database transaction.
             | Transactions aren't thread-safe, so multiple goroutines
             | would be consuming the bytes out of the network buffer in
             | parallel, and this resulted in very obvious breakages as
             | the protocol failed to be decoded. Fun stuff! I'll point
             | out that if you are a Go library, you should not assume the
             | code you're calling is thread-safe... but they did the
             | opposite. Random un-asked-for parallelism is why I will
             | always prefer dumb RPCs to a query language on top of a
             | query language. Sometimes you really want to be explicit
             | rather than implicit, even if being explicit is kind of
             | boring.)
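              | 
              | (For what it's worth, the canonical fix for that
              | row-per-fetch pattern is batching with a dataloader. A
              | sketch in TS terms; query() is an assumed db helper:)
              | 
              |     import DataLoader from 'dataloader';
              | 
              |     // query() is an assumed db helper returning rows
              |     declare function query(
              |       sql: string, args: unknown[]): Promise<any[]>;
              | 
              |     // batch all author ids requested while resolving one
              |     // GraphQL query into a single SQL round-trip
              |     const users = new DataLoader(
              |       async (ids: readonly number[]) => {
              |         const rows = await query(
              |           'SELECT * FROM users WHERE id = ANY($1)', [ids]);
              |         const byId = new Map(rows.map(r => [r.id, r]));
              |         return ids.map(id => byId.get(id) ?? null);
              |       });
              | 
              |     // the resolver makes N load() calls, but one query runs
              |     const resolvers = {
              |       Post: {
              |         author: (p: { authorId: number }) =>
              |           users.load(p.authorId),
              |       },
              |     };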
        
         | karmakaze wrote:
         | Thrift supports many serializations both binary and human
         | readable that you could choose at runtime, say for debugging.
         | 
         | I never did use the feature, having got tired of using Thrift
         | for other reasons (e.g. poor interoperability Java/Scala).
        
           | morelisp wrote:
           | Wait, thrift interoperates poorly with Java? We are stuck
           | using it in go because another (Java-heavy) team exposes
           | their data via it, and the experience has been awful even for
           | a simple service. So what is it good in?
        
             | karmakaze wrote:
             | This was a long way back. Specifically, the Scrooge (or
             | Finagle) generator for Scala and Java supported different
             | versions of Thrift libraries. Maybe the problem was related
             | to having a project with both Java and Scala and over the
             | wire interop would have been fine--don't remember the exact
             | details.
        
         | mumblemumble wrote:
         | Also, gRPC's human readability challenges are overblown. A
         | naked protocol buffer datagram, divorced from all context, is
         | difficult to interpret. But gRPC isn't just protocol buffers.
         | 
         | There's a simple, universal two-step process to render it more-
         | or-less a non-issue. First, enable the introspection API on all
         | servers. This is another spot where I find gRPC beats Swagger
         | at its own game: any server can be made self-describing, for
         | free, with one line of code. Second, use tools that understand
         | gRPC's introspection API. For example, grpcurl is a command-
         | line tool that automatically translates protobuf messages
         | to/from JSON.
        
         | sa46 wrote:
         | I adore gRPC but figuring out how to use it from browser
         | JavaScript is painful. The official grpc-web [1] client
         | requires envoy on the server which I don't want. The
         | improbable-eng grpc-web [2] implementation has a native Go
         | proxy you can integrate into a server, but seems riddled with
         | caveats and feels a bit immature overall.
         | 
         | Does grpc-web work well? Is there a way to skip the proxy layer
         | and use protobufs directly if you use websockets?
         | 
          | [1]: https://github.com/grpc/grpc-web [2]:
          | https://github.com/improbable-eng/grpc-web
        
         | mattwad wrote:
         | Not for me. I tried this with Javascript a couple years ago,
         | and it was painful mapping to and from gRPC types everywhere.
         | There's no "undefined" for example if you have a union type.
          | You have to come up with an "EMPTY" value. On top of that, we
         | had all kinds of weird networking issues that we just weren't
         | ready to tackle the same way we could with good ol' HTTP.
        
         | omginternets wrote:
         | The flip side (IMHO, at least), is that simple build-chains are
         | _underrated_.
         | 
         | As much as I love a well-designed IDL (I'm a Cap'n Proto user,
          | myself), the first thing I reach for is ReST. In most cases,
         | it's sufficient, and in all cases it keeps builds simple and
         | dependencies few.
        
           | sagichmal wrote:
           | > The flip side (IMHO, at least), is that simple build-chains
           | are underrated.
           | 
           | Wow, yes! Yes!
           | 
           | IDLs represent substantial complexity, and complexity always
           | needs to be justified. Plain, "optimistically-schema'd" ;)
           | REST, or even just JSON-over-HTTP, should be your default
           | choice.
        
         | gravypod wrote:
         | In my personal experience the other huge benefit is you now
         | have a source of truth for documentation. Since your API
         | definition _is_ code you can leave comments in it and code
         | review it all while having a language that is very approachable
         | for anyone who is familiar with the C family of languages.
         | 
         | For a newcomer having `message Thing {}` and `service
         | ThingChanger {}` is very approachable because it maps directly
         | into the beginner's native programming language. If they
         | started out with Python/C++/Java you can say "It's like a class
         | that lives on another computer" and they instantly get it.
        
         | codegladiator wrote:
         | Have you seen OpenAPI ? Generating client and server code in
         | your programming language for rest/graphql/grpc is not new.
        
         | nwienert wrote:
         | On the GraphQL side you can use gqless[0] (or the improved fork
         | I helped sponsor, here[1]). It's by far the best DX I've had
         | for any data fetching library: fully typed calls, no strings at
         | all, no duplication of code or weird importing, no compiler,
         | and it resolves the entire tree and creates a single fetch
         | call.
         | 
         | [0] https://github.com/gqless/gqless
         | 
         | [1] https://github.com/PabloSzx/new_gqless
        
         | dtech wrote:
          | There are solutions like that for GraphQL [1] and REST too. For
         | REST OpenAPI/Swagger has a very large ecosystem, but does
         | depend on the API author making one.
         | 
         | [1] https://graphql-code-generator.com/
        
           | atraac wrote:
            | In C# one can have decent documentation based on just a few
            | attributes, so you don't even have to write that much to
            | have that schema available to your clients.
        
           | mumblemumble wrote:
           | I can't speak to GraphQL, but, when I was doing a detailed
           | comparison, I found that OpenAPI's code generation facilities
            | weren't even in the same league as gRPC's.
           | 
           | Buckle up, this is going to be a long comparison. Also,
           | disclaimer, this was OpenAPI 2 I was looking at. I don't know
           | what has changed in 3.
           | 
           | gRPC has its own dedicated specification language, and it's
           | vastly superior to OpenAPI's JSON-based format. Being not
           | JSON means you can include comments, which opens up the
           | possibility of using .proto files as a one stop shop for
           | completely documenting your protocols. And the gRPC code
           | generators I played with even automatically incorporate these
           | comments into the docstrings/javadoc/whatever of the
           | generated client libraries, so people developing against gRPC
           | APIs can even get the documentation in their editor's pop-up
           | help.
           | 
           | Speaking of editor support, using its own language means that
           | the IDEs I tried (vscode and intellij) offer much better
           | editor assistance for .proto files than they do for OpenAPI
           | specs. And it's just more concise and readable.
           | 
           | Finally, gRPC's proto files can import each other, which
           | offers a great code reuse story. And it has a standard
           | library for handling a lot of common patterns, which helps to
           | eliminate a lot of uncertainty around things like, "How do we
           | represent dates?"
           | 
           | Next is the code generation itself. gRPC spits out a library
           | that you import, and it's a single library for both clients
           | and servers. The server side stuff is typically an abstract
           | class that you extend with your own implementation. I
           | personally like that, since it helps keep a cleaner
           | separation between "my code" and "generated code", and also
            | makes life easier if you want more than one service
           | publishing some of the same APIs.
           | 
           | OpenAPI doesn't really have one way of doing it, because all
           | of the code generators are community contributions (with
           | varying levels of documentation), but the two most common
           | ways to do it are to generate only client code, or to
           | generate an entire server stub application whose
           | implementation you fill in. Both are terrible, IMO. The first
           | option means you need to manually ensure that the client and
           | server remain 100% in sync, which eliminates one of the major
           | potential benefits of using code generation in the first
           | place. And the second makes API evolution more awkward, and
           | renders the code generation all but useless for adding an API
           | to an existing application.
           | 
           | gRPC's core team rules the code generation with an iron fist,
           | which is both a pro and a con. On the upside, it means that
           | things tend to behave very consistently, and all the official
           | code generators meet a very high standard for maturity. On
           | the downside, they tend to all be coded to the gRPC core
           | team's standards, which are very enterprisey, and designed to
           | try and be as consistent as possible across target languages.
           | Meaning they tend to feel awkward and unidiomatic for every
           | single target platform. For example, they place a high
           | premium on minimizing breaking changes, which means that the
           | Java edition, which has been around for a long time,
           | continues to have a very Java 7 feel to it. That seems to rub
           | most people (including me) the wrong way nowadays.
           | 
           | OpenAPI is much more, well, open. Some target platforms even
           | have multiple code generators representing different people's
           | vision for what the code should look like. Levels of
           | completeness and documentation vary wildly. So, you're more
           | likely to find a library that meets your own aesthetic
           | standards, possibly at the cost of it being less-than-perfect
           | from a technical perspective.
           | 
           | For my part, I came away with the impression that, at least
           | if you're already using Envoy, anyway, gRPC + gRPC-web may be
           | the least-fuss and most maintainable way to get a REST-y (no
           | HATEOAS) API, too.
        
             | onei wrote:
             | I don't entirely disagree that gRPC tooling is nicer and
              | more complete in some areas, but there are some
              | misconceptions here.
             | 
             | You can specify openapi v2/3 as YAML and get comments that
             | way. However, the idea is that you add descriptions to the
             | properties, models, etc. It's almost self-documenting in
             | v3, and looks about the same in v2, although I've used v2
             | less so can't be sure.
             | 
             | I can't speak to editor support, but openapi has an online
             | editor that has linting and error checking. Not sure if
              | that's available as a plugin somewhere, but it's likely a
              | little more awkward if it is, by virtue of being a YAML or
              | JSON file rather than a bespoke file extension.
             | 
             | I've seen v3 share at least models across multiple files -
             | it can definitely be done. String formats for dates and
             | uuids are available, but likely not as rich as the protobuf
             | ecosystem as you mention.
             | 
             | And I wholeheartedly agree that the lack of consistent
             | implementation is a problem in openapi. I tried to use v3
                | for Rust recently and gave up due to its many rough edges
             | for my use case. It's a shame - the client generation would
             | have been a nice feature to get for free.
        
               | mumblemumble wrote:
               | All fair points.
               | 
               | I had forgotten about the YAML format; I probably skipped
               | over it because I am not a fan of YAML. As far as the
               | description features go, they're something, but the lack
               | of ability to stick extra information just anywhere in
               | the file for the JSON format severely hampers the story
               | for high-level documentation. I'm not a fan of the "the
               | whole is just the sum of the parts" approach to
               | documentation; not every important thing to know can
               | sensibly be attached to just one property or resource.
        
       | bluejekyll wrote:
       | Additional GraphQL con: it requires some thought and planning in
       | order to ensure data is cacheable in CDNs and other reverse
       | proxies. This is generally simpler in REST because the APIs tend
        | to be more single use. In GraphQL you have to essentially
        | predefine all the queries in order to achieve the same
        | cacheability of responses, and then identify those predefined
        | queries in requests, much as you would distinct REST endpoints.
        
       | thdxr wrote:
       | Highly recommend taking a look at the JSONAPI spec -
       | https://jsonapi.org/
       | 
       | It directly addresses the cons mentioned in the article while
       | retaining all the pros
        
         | sagichmal wrote:
         | Tried it. Definitely not ready yet, and the scope may be large
         | enough that it won't ever get there. Also, many of its design
         | choices are fundamentally in tension with statically typed
         | languages.
         | 
         | I think you can probably formalize JSON API schemas in a useful
         | way, but JSON API ain't it.
        
       | ctvo wrote:
       | Bananas vs. Apples vs. Oranges
        
         | phaedryx wrote:
         | Pro: eating the skin of an apple is easier
        
         | codegladiator wrote:
          | Con: Bananas have the thickest skin, so they're harder to peel
        
           | giantrobot wrote:
           | Con: They taste gross.
        
       | speedgoose wrote:
       | > JSON objects are large and field names are repetitive
       | 
       | I used to write protocol buffer stuff for this reason. But I
       | realized after some time that compressed json is almost as good
       | if not better depending on the data, and a lot simpler and nicer
        | to use. You can consider pre-sharing a dictionary if you always
        | compress the same tiny messages. Of course json + compression is
        | a bit more cpu intensive than protocol buffers, but it doesn't
        | have an impact on anything in most use cases.
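        | 
        | For example, in node (dictionary contents invented; the same
        | dictionary must be shared by both sides):
        | 
        |     import { deflateSync, inflateSync } from 'zlib';
        | 
        |     const msg = { userId: 'banana', fromDate: 2342342342 };
        | 
        |     // repetitive field names mostly compress away
        |     const packed = deflateSync(JSON.stringify(msg));
        | 
        |     // a pre-shared dictionary helps for tiny, similar messages
        |     const dictionary = Buffer.from('"userId":"fromDate":"toDate":');
        |     const small = deflateSync(JSON.stringify(msg), { dictionary });
        |     const back = JSON.parse(
        |       inflateSync(small, { dictionary }).toString());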
        
         | ericbarrett wrote:
         | I think your instinct to reach for the straightforward solution
         | is good. gRPC has advantages, but it also comes with complexity
         | since you have to bring all the tooling along. And the CPU
         | burden of (de-)serializing JSON is a very different story than
         | when Protobufs were developed in 2001.
        
       | nym3r0s wrote:
       | Apache Thrift is another great way of server-server & client-
       | server communication. Pretty similar to protobuf but feels a bit
       | more mature to me.
       | 
       | https://thrift.apache.org/
        
       | mixxit wrote:
       | I wish gRPC was bidirectional
       | 
       | For now I will stick to SignalR
        
         | malkia wrote:
         | It supports bidi, and also single request, multiple responses,
         | or multiple requests, single response -
         | https://grpc.io/docs/what-is-grpc/core-concepts/#bidirection...
        
       | hankchinaski wrote:
        | I don't know about you, but in my experience, unless you have
        | Google-sized microservices and infrastructure, gRPC (with
        | protocol buffers) is just tedious.
       | 
       | - need an extra step when doing protoc compilation of your models
       | 
       | - cannot easily inspect and debug your messages across your
       | infrastructure without a proper protobuf decoder/encoder
       | 
       | If you only have Go microservices talking via RPC there is GOB
       | encoding which is a slimmed down version of protocol buffer, it's
       | self describing, cpu efficient and natively supported by the Go
       | standard library and therefore probably a better option -
       | although not as space efficient. If you talk with other non-Go
       | services then a JSON or XML transport encoding will do the job
       | too (JSON rpc).
       | 
        | The GraphQL one is great as what is commonly known as 'backend
        | for frontend' - but inside the backend. It makes designing an
        | easy-to-use (and supposedly more efficient) API easier for the
        | FE, but much less so for the backend, which warrants increased
        | implementation complexity and maintenance.
        | 
        | Good old REST is admittedly not as flexible as RPC or GraphQL,
        | but it does the job for simpler and smaller APIs, albeit I
        | admit I see, anecdotally, it being used less and less.
        
         | lokar wrote:
          | I wish gRPC had the same ability as stubby (what google uses
         | internally): logging rpc (calls and replies, full msg or just
         | the hdr) to a binary file and a nice set of tools to decode and
         | analyze them.
        
           | jeffbee wrote:
           | https://github.com/grpc/proposal/blob/master/A16-binary-
           | logg...
        
         | gravypod wrote:
         | > unless you have Google's size microservices and
         | infrastructure gRPC (with protocol buffers) is just tedious
         | 
         | Having used gRPC in very small teams (<5 engineers touching
         | backend stuff) I had a very different experience from yours.
         | 
         | > need an extra step when doing protoc compilation of your
         | models
         | 
         | For us this was hidden by our build systems. In one company we
         | used Gradle and then later Bazel. In both you can set it up so
         | you plop a .proto into a folder and everything "works" with
         | autocompletes and all.
         | 
         | > cannot easily inspect and debug your messages across your
         | infrastructure without a proper protobuf decoder/encoder
         | 
         | There's a lot of tooling that has recently been developed that
         | makes all of this much easier.
         | 
         | - https://github.com/fullstorydev/grpcurl
         | 
         | - https://github.com/uw-labs/bloomrpc
         | 
         | - https://kreya.app/
         | 
         | You can also use grpc-web as a reverse proxy to expose normal
         | REST-like endpoints for debugging as well.
         | 
         | > If you talk with other non-Go services then a JSON or XML
         | transport encoding will do the job too (JSON rpc).
         | 
         | The benefit of protos is they're a source of truth across
         | multiple languages/projects with well known ways to maintain
          | backwards compatibility.
         | 
         | You can even build tooling to automate very complex things:
         | 
         | - Breaking Change Detector: https://docs.buf.build/breaking-
         | usage/
         | 
         | - Linting (Style Checking): https://docs.buf.build/lint-usage/
         | 
         | There's many more things that can be done but you get the idea.
         | 
          | On top of this you get something else that is way better: a
          | relatively fast server that's configured & interfaced with the
         | same way in every programming language. This has been a massive
         | time sink in the past where you have to investigate nginx/*cgi,
         | sonic/flask/waitress/wsgi, rails, and hundreds of other things
         | for every single language stack each with their own gotchas.
         | gRPC's ecosystem doesn't really have that pain point.
        
         | jayd16 wrote:
         | > unless you have Google's size microservices and
         | infrastructure
         | 
         | The protobuf stuff can start to pay off as early as when you
         | have two or more languages in the project.
        
         | [deleted]
        
       | hashkb wrote:
        | Too concise. This wouldn't be helpful for making a choice, or
        | will mislead. Right off the top, it's not necessary to write a
        | REST endpoint for each use case. Many REST APIs have filtering
        | and joining, just like gql.
       | 
       | Edit: claiming gql solves over/underfetch without mentioning that
       | you're usually still responsible for implementing it (and it can
       | be complex) in resolvers is borderline dishonest.
        
         | parhamn wrote:
          | I'd call it completely lacking, not concise. E.g. it's already
          | conflating transport & serialization. But HN loves a good
          | serializer debate so this will be an active thread.
        
           | majkinetor wrote:
           | Must be a bad student homework ...
        
       | zedr wrote:
       | > Easily discoverable data, e.g. user ID 3 would be at /users/3.
       | All of the CRUD (Create Read Update Delete) operations below can
       | be applied to this path
       | 
       | Strictly speaking, that's not what REST considers "easily
       | discoverable data". That endpoint would need to have been
       | discovered by navigating the resource tree, starting from the
       | root resource.
       | 
       | Roy Fielding (author of the original REST dissertation): "A REST
       | API must not define fixed resource names or hierarchies (an
       | obvious coupling of client and server). (...) Instead, allow
       | servers to instruct clients on how to construct appropriate URIs,
       | such as is done in HTML forms and URI templates, by defining
       | those instructions within media types and link relations.
       | [Failure here implies that clients are assuming a resource
       | structure due to out-of band information, such as a domain-
       | specific standard, which is the data-oriented equivalent to RPC's
       | functional coupling].
       | 
       | A REST API should be entered with no prior knowledge beyond the
       | initial URI (bookmark) and set of standardized media types that
       | are appropriate for the intended audience (i.e., expected to be
       | understood by any client that might use the API). "[1]
       | 
       | 1. https://roy.gbiv.com/untangled/2008/rest-apis-must-be-
       | hypert...
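        | 
        | In other words, the only URI a client should hardcode is the
        | entry point (a sketch; the response shape is invented):
        | 
        |     // GET / -> { links: { users: 'https://api.example.com/users' } }
        |     const root = await fetch('https://api.example.com/')
        |       .then(r => r.json());
        | 
        |     // follow server-provided links instead of building paths
        |     const users = await fetch(root.links.users)
        |       .then(r => r.json());
        |     const user = await fetch(users.items[0].links.self)
        |       .then(r => r.json());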
        
         | smadge wrote:
         | The web has a standard for identifying resources, URIs. One
         | nice thing about URIs is they can be URLs. A single integer id
         | doesn't even identify a user since if I give you 3 you have no
         | idea if it is users/3 or posts/3, or users/3 on Twitter or
         | users/3 on Google, or the number of coffees I have drunk today.
        
         | arethuza wrote:
         | You are quite correct, but by this stage the original
         | definition of REST to include HATEOAS has pretty much been
         | abandoned by most people.
         | 
         | Edit: Pretty much every REST API I see these days explains how
         | to construct your URLs to do different things - rather than
         | treating all URLs as opaque. Mind you having tried to create
         | 'pure' HATEOAS REST API I think I prefer the contemporary
         | approach!
        
           | zedr wrote:
           | I agree with your preference. I too lean towards a pragmatic
           | approach to REST, which I've seen referred to as "RESTful",
           | as in the popular book "RESTful Web APIs" by Richardson.
        
       | thinkingkong wrote:
        | All these patterns are helpful because they're consistent. The
        | GraphQL and gRPC systems are "good" because they're schema
        | driven, which makes automated tooling easier. That being said,
        | the problem doesn't really exist at smaller scales. Saving
        | bytes on the wire is such a weird optimization at early to mid
        | stages that - unless latency is your _actual_ problem - it ends
        | up being an optimization with very little business value.
        
       | bigmattystyles wrote:
       | Maybe it's because I'm in the .net world, but why is there never
       | any love for odata? If you have SQL in the back, it's great!
        
         | littlecranky67 wrote:
          | OData had its moment, but for at least the past couple of
          | years there has been no maintained JS OData library that is
          | not buggy and fully usable in modern environments.
        
           | bigmattystyles wrote:
            | I can't disagree there, and for all the work MS is putting
            | into it right now in dotnetcore - I don't understand how
            | they can have this big a blind spot.
        
             | pietromenna wrote:
              | I agree with you that there is support for oData v2 and v4,
              | but they are not exactly mainstream out there. I like oData
              | v4 and I try to use it when it is possible.
        
       | mdavidn wrote:
       | There are a variety of tricks to solve the over/under-fetching
       | problems of REST. My default approach is JSON:API, which defines
       | standard query parameters for clients to ask the server to return
       | just a subset of fields or to return complete copies of
       | referenced resources.
       | 
       | https://jsonapi.org/
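        | 
        | For instance, sparse fieldsets and includes look like this (a
        | sketch; resource names invented):
        | 
        |     // ask only for the fields you need, and side-load the
        |     // referenced author in the same response
        |     const params = new URLSearchParams({
        |       'fields[articles]': 'title,body',
        |       include: 'author',
        |     });
        |     const doc = await fetch('/articles?' + params, {
        |       headers: { Accept: 'application/vnd.api+json' },
        |     }).then(r => r.json());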
        
       | sudowing wrote:
       | The `versus` nature of this question was the driving force behind
       | a project I built last year.
       | 
       | I've been in multiple shops where REST was the standard -- and
       | while folks had interest in exploring GraphQL or gRPC, we could
       | not justify pivoting away from REST to the larger team.
       | Repeatedly faced with this `either-or`, I set out to build a
       | generic app that would auto provision all 3 (specifically for
       | data-access).
       | 
       | I posted in verbose detail about that project a few months ago,
       | so here I'll just provide a summary: The project auto provisions
       | REST, GraphQL & gRPC services that support CRUD operations to
       | tables, views and materialized views of several popular databases
       | (postgres, postgis, mysql, sqlite). The services support full
       | CRUD (with validation), geoquery (bbox, radius, custom wkt
       | polygon), complex_resources (aggregate & sub queries), middleware
       | (access the query before db execution), permissions (table/view
       | level CRUD configs), field redaction (enable query support --
       | without publication), schema migrations, auto generated
       | openapi3/swagger docs, auto generated proto file.
       | 
       | I hope this helps anyone in a spot where this `versus`
       | conversation pops up.
       | 
       | original promotional piece:
       | https://news.ycombinator.com/item?id=25600934
       | 
       | docker implementation: https://github.com/sudowing/service-
       | engine-template
       | 
       | youtube playlist:
       | https://www.youtube.com/playlist?list=PLxiODQNSQfKOVmNZ1ZPXb...
        
         | joekrill wrote:
         | What conclusions did you come to?
        
           | sudowing wrote:
            | I can't say I reached any conclusions -- I can only offer
            | that the handful of people that have offered feedback found
            | it feature-rich and easy to use.
        
       | mumblemumble wrote:
       | I feel like this is rather shallow, and, by focusing so heavily
       | on just the transport protocol, misses a lot of more important
       | details.
       | 
       | For starters, REST and "JSON over HTTP/1.1" are not necessarily
       | synonyms. This description conflates them, when really there are
       | three distinct ways to use JSON over HTTP/1.1: Actual REST
       | (including HATEOAS), the "openAPI style" (still resource-
       | oriented, but without HATEOAS), and JSON-RPC. For most users, the
        | relative merits of these three styles are going to be a
       | much bigger deal than the question of whether or not to use JSON
       | as the serialization format.
       | 
       | Similarly, for gRPC, you have a few questions: Do you want to do
       | a resource-oriented API that can easily be reverse proxied into a
       | JSON-over-HTTP1.1 API? If so then you gain the ability to access
       | it from Web clients, but may have to limit your use of some of
       | gRPC's most distinctive features. How much do you want to lean
       | toward resource-orientation compared to RPC? gRPC has good
       | support for mixing and matching the two, and making an
       | intentional decision about how you do or do not want to mix them
       | is again probably a much bigger deal in the long run than the
       | simple fact of using protocol buffers over HTTP/2.
       | 
       | GraphQL gives clients a lot of flexibility, and that's great, but
       | it also puts a lot of responsibility on the server. With GraphQL,
       | clients get a lot of latitude to construct queries however they
       | want, and the people constructing them won't have any knowledge
       | about which kinds of querying patterns the server is prepared to
       | handle efficiently. So there's a certain art to making sure you
       | don't accidentally DOS attack yourself. Guarding against this
       | with the other two API styles can be a bit more straightforward,
       | because you can simply not create endpoints that translate into
       | inefficient queries.
        
         | stevebmark wrote:
         | This post is just a brief summary of each protocol, I don't
         | understand how it made it to the front page.
        
           | applecrazy wrote:
           | I think it's because of the robust discussion in these
           | comments.
        
             | lazysheepherd wrote:
             | I almost never read articles in HN.
             | 
             | If I see title "something something GraphQL" in HN, I do
             | not think "ooh someone wrote an article about GraphQL".
             | 
             | I rather see it as "folks in HN discussing about GraphQL
             | today!"
             | 
              | I can google and get tens or even hundreds of articles
              | about GraphQL, but I won't get the "points and counter
              | points" discussion that I get here.
        
             | ZephyrBlu wrote:
             | Story of HN. Come for the content, stay for the comments.
        
       | stickfigure wrote:
       | No mention of what I see as the biggest con of GraphQL: You must
       | build a lot of rate limiting and security logic, or your APIs are
       | easily abused.
       | 
       | A naive GraphQL implementation makes it trivial to fetch giant
       | swaths of your database. That's fine with a 100% trusted client,
       | but if you're using this for a public API or web clients, you can
       | easily be DOSed. Even accidentally!
       | 
       | Shopify's API is a pretty good example of the lengths you have to
       | go to in order to harden a GraphQL API. It's ugly:
       | 
       | https://shopify.dev/concepts/about-apis/rate-limits
       | 
        | You have to limit not just the number of calls, but the
        | quantity of data fetched. And pagination is gross, with
        | `edges` and `node`. This is straight from their examples:
        | 
        |     {
        |       shop {
        |         id
        |         name
        |       }
        |       products(first: 3) {
        |         edges {
        |           node {
        |             handle
        |           }
        |         }
        |       }
        |     }
       | 
       | Once you fetch a few layers of edges and nodes, queries become
       | practically unreadable.
       | 
       | The more rigid fetching behavior of REST & gRPC provides more
       | predictable performance and security behavior.
        
         | hn_throwaway_99 wrote:
         | This is what I see as a huge misconception of GraphQL, and
         | unfortunately proliferates due to lots of simple "Just expose
         | your whole DB as a GraphQL API!" type tools.
         | 
         | It's quite simple (easier in my opinion than in REST) to build
         | a targeted set of GraphQL endpoints that fit end-user needs
         | while being secure and performant. Also, as the other user
         | posted, "edges" and "nodes" has nothing to do with the core
         | GraphQL spec itself.
        
           | slow_donkey wrote:
           | I don't disagree with you, but graphql just lends itself well
           | to bad decisions and many times when I've poked at graphql
           | endpoints they share these issues (missing auth after first
           | later, exposing schema by accident, no depth/cost limit). I
           | think a combination of new technology w/o standardized best
           | practices and startups being resource constrained
           | proliferates poor security with graphql.
           | 
           | Of course, the same could happen for standard REST as well,
           | but I think the foot guns are more limited.
        
             | Aeolun wrote:
             | > no depth/cost limit
             | 
              | Or you can do it like us: there's no depth at all, since
              | our types do not have any possible subqueries.
        
         | stevebmark wrote:
         | edges and node come from Relay, not from the core GraphQL spec.
         | They're just one way to do pagination.
         | 
         | I like edges and node: they give you a place to encode
         | information about the relationship between the two objects, if
         | you want to. And if all your endpoints standardize on this
         | Relay pagination, you get standard cursor/offset fetching,
         | along with the option to add relationship metadata in the
         | future if you want, without breaking your schema or clients.
         | 
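         | A sketch of what that metadata point can look like (the type
         | and field names here are made up): the edge is the natural
         | home for facts about the relationship itself.
         | 
         |     import { gql } from 'graphql-tag';
         | 
         |     // Hypothetical schema sketch: "since" describes the
         |     // relationship, so it lives on the edge rather than on
         |     // either node. User and PageInfo are assumed to be
         |     // defined elsewhere in the schema.
         |     const typeDefs = gql`
         |       type FriendEdge {
         |         cursor: String!
         |         since: String
         |         node: User!
         |       }
         | 
         |       type FriendConnection {
         |         edges: [FriendEdge!]!
         |         pageInfo: PageInfo!
         |       }
         |     `;
         | 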
         | edit: the page you linked to has similar rate limiting behavior
         | for both REST and GraphQL lol
        
           | andrewingram wrote:
           | Technically the connection spec is part of GraphQL itself
           | now, but as an optional recommendation, not something you're
           | obliged to use.
           | 
           | That said, like you I am a fan.
           | 
           | It's a pretty defensible pattern, more here for those
           | interested: https://andrewingram.net/posts/demystifying-
           | graphql-connecti...
           | 
           | The overall verbosity of a GraphQL query tends to not be a
           | huge issue either, because in practice individual components
           | are only concerning themselves with small subsets of it (i.e.
           | fragments). I'm a firm believer that people will have a
           | better time with GraphQL if they adopt Relay's bottom-up
           | fragment-oriented pattern, rather than a top-down query-
           | oriented pattern - which you often see in codebases by people
           | who've never heard of Relay.
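           | 
           | A tiny sketch of the bottom-up style (all names here are
           | hypothetical): each component declares the fields it needs
           | as a fragment, and the page-level query just composes them.
           | 
           |     import { gql } from 'graphql-tag';
           | 
           |     // The Avatar component owns its own data requirements...
           |     const AVATAR_FRAGMENT = gql`
           |       fragment Avatar_user on User {
           |         avatarUrl
           |       }
           |     `;
           | 
           |     // ...and the page query composes fragments without
           |     // knowing what is inside them.
           |     const PAGE_QUERY = gql`
           |       query Page($id: ID!) {
           |         user(id: $id) {
           |           name
           |           ...Avatar_user
           |         }
           |       }
           |       ${AVATAR_FRAGMENT}
           |     `;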
        
           | wyattjoh wrote:
           | Seconded. I feel that the pagination style that Relay offers
           | is typically better than 99% of the custom pagination
           | implementations out there. There's no reason why the cursor
           | impl can't just do limit/skip under the hood (if that's what
           | you want to do), but it frees you to change to cursor-based
           | pagination _easily_.
           | 
           |     {
           |       products(first: 3) {
           |         pageInfo {
           |           hasNextPage
           |           endCursor
           |         }
           |         edges {
           |           cursor
           |           node {
           |             handle
           |           }
           |         }
           |       }
           |     }
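           | 
           | A sketch of that flexibility (Node-flavored, names made up):
           | the cursor is opaque to clients, so nothing stops it from
           | being a dressed-up offset today and a real key tomorrow.
           | 
           |     // Today: a base64-encoded offset, plain limit/skip
           |     // under the hood. Tomorrow: a real key, with no schema
           |     // change.
           |     const encodeCursor = (offset: number): string =>
           |       Buffer.from(`offset:${offset}`).toString('base64');
           | 
           |     const decodeCursor = (cursor: string): number =>
           |       Number(
           |         Buffer.from(cursor, 'base64')
           |           .toString('utf8')
           |           .split(':')[1],
           |       );
           | 
           |     console.log(encodeCursor(3));               // "b2Zmc2V0OjM="
           |     console.log(decodeCursor(encodeCursor(3))); // 3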
        
         | andrew_ wrote:
         | Rate limiting and security are trivial these days, with an
         | abundance of directive libs available, ready to use out of the
         | box, and every major third party auth provider boasting ease of
         | use with common GraphQL patterns. I'd argue what you see as the
         | biggest con is actually a strength now.
         | 
         | > And pagination is gross, with `edges` and `node`
         | 
         | This just reads like an allergic reaction to "the new" and to
         | change. Edges and Nodes are elegant, less error prone than
         | limits and skips, and most importantly, datasource
         | independent.
        
           | verdverm wrote:
           | I'd be interested to see a graphql library that makes
           | security trivial. Could you add some links?
           | 
           | In my experience, securing nested assets based on
           | owner/editor/reader/anon was rather difficult and required
           | inspecting the schema stack. I was using the Apollo stack.
           | 
           | This was in the context of apps in projects in accounts
           | (common pattern for SaaS where one email can have permissions
           | in multiple orgs or projects)
        
         | QuinnWilton wrote:
         | I'm a huge fan of GraphQL, and work full-time on a security
         | scanner for GraphQL APIs, but denial of service is a huge (but
         | easily mitigated) risk of GraphQL APIs, simply because of the
         | lack of education and resources surrounding the topic.
         | 
         | One fairly interesting denial of service vector that I've found
         | on nearly every API I've scanned has to do with error messages.
         | Many APIs don't bound the number of error messages that are
         | returned, so you can query for a huge number of fields that
         | aren't in the schema, and then each of those will translate to
         | an error message in the response.
         | 
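         | A sketch of that probe (the field names are made up, and the
         | endpoint is assumed not to cap errors):
         | 
         |     // Each nonexistent field produces its own validation
         |     // error, so a small request buys a very large "errors"
         |     // array in the response.
         |     const fields = Array.from(
         |       { length: 10_000 },
         |       (_, i) => `bogus_${i}`,
         |     );
         |     const payload = JSON.stringify({
         |       query: `{ ${fields.join(' ')} }`,
         |     });
         |     // POST the payload to the /graphql endpoint and compare
         |     // request size to response size.
         | 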
         | If the server supports fragments, you can also sometimes
         | construct a recursive payload that expands, like the billion
         | laughs attack, into a massive response that can take down the
         | server, or eat up their egress costs.
        
         | tshaddox wrote:
         | I'm not convinced that it's any more difficult to implement
         | robust rate limiting or other performance guarantees for
         | GraphQL than for a REST API with comparable functionality. As
         | soon as you
         | start implementing field/resource customizability in a REST API
         | you have roughly the same problems guaranteeing performance.
         | JSON:API [0], for example, specifies how to request fields on
         | related objects with a syntax like
         | `/articles/1?include=author,comments.author`, which is
         | comparable to the extensibility you get by default in GraphQL.
         | Different libraries which help you implement JSON:API or
         | GraphQL may differ in how you opt in or opt out of this sort of
         | extensibility, and perhaps in practice GraphQL libraries tend
         | to require opting _out_ (and GraphQL consumers might tend to
         | expect a lot of this extensibility), but at the end of the day
         | there's little difference in principle for two APIs with
         | comparable functionality. And, as others have noted, the
         | popular GraphQL implementations I've seen all make it fairly
         | straightforward to limit things like the query depth or total
         | number of entities requested.
         | 
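         | For concreteness, a rough side-by-side (the GraphQL schema
         | here is assumed, not part of either spec):
         | 
         |     // JSON:API: one call, related records included
         |     const url = '/articles/1?include=author,comments.author';
         | 
         |     // Rough GraphQL equivalent (hypothetical schema)
         |     const query = `{
         |       article(id: 1) {
         |         title
         |         author { name }
         |         comments { author { name } }
         |       }
         |     }`;
         | 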
         | Of course, if the argument is simply that it tends to be more
         | challenging to manage performance of GraphQL APIs simply
         | because GraphQL APIs tend to offer a lot more functionality
         | than REST APIs, then of course I agree, but that's not a
         | particularly useful observation. Indeed having no API at all
         | would further reduce the challenge!
         | 
         | [0] https://jsonapi.org/format/#fetching-includes
        
           | ucarion wrote:
           | > Of course, if the argument is simply that it tends to be
           | more challenging to manage performance of GraphQL APIs simply
           | because GraphQL APIs tend to offer a lot more functionality
           | than REST APIs, then of course I agree, but that's not a
           | particularly useful observation. Indeed having no API at all
           | would further reduce the challenge!
           | 
           | On their own, such arguments are indeed not useful. But if
           | you can further point out that GraphQL has more functionality
           | than is required, then you can basically make a YAGNI-style
           | argument against GraphQL.
        
       | ryandvm wrote:
       | I am totally expecting the next fad in web development to be just
       | exposing a raw SQL interface to the front-end...
        
         | js8 wrote:
         | Simplify things? No way! The next fad will be SQL over GraphQL.
        
       | lokar wrote:
       | This is really just a comparison of the basic wire formats.
       | Full RPC systems like gRPC are much more than that.
        
         | sbayeta wrote:
         | Could you please elaborate briefly? Thanks!
        
           | swyx wrote:
           | I think they are talking about how it is standard for gRPC
           | systems to generate server and client code that makes them
           | very easy to use. See this comment for more:
           | https://news.ycombinator.com/item?id=26466902
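           | 
           | As a taste, the Node flavor (a sketch using @grpc/grpc-js
           | with runtime proto loading; the proto file and service names
           | are the usual hello-world assumptions):
           | 
           |     import * as grpc from '@grpc/grpc-js';
           |     import * as protoLoader from '@grpc/proto-loader';
           | 
           |     // Load the service definition straight from the .proto
           |     // file; static codegen tools produce typed stubs with
           |     // the same call shape.
           |     const def = protoLoader.loadSync('helloworld.proto');
           |     const proto = grpc.loadPackageDefinition(def) as any;
           | 
           |     const client = new proto.helloworld.Greeter(
           |       'localhost:50051',
           |       grpc.credentials.createInsecure(),
           |     );
           |     client.sayHello(
           |       { name: 'world' },
           |       (err: Error | null, reply: any) => {
           |         if (err) throw err;
           |         console.log(reply.message);
           |       },
           |     );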
        
             | lokar wrote:
             | Also, many others:
             | 
             | - Standard Authentication and identity (client and server)
             | - Authorization support
             | - Overload protection and flow control
             | - Tracing
             | - Standard logging
             | - Request prioritization
             | - Load balancing
             | - Health checks and server status
        
       | sneak wrote:
       | One thing that people seem to gloss over when comparing these is
       | that you also need to compare the serialization. gRPC using
       | protobuf means you get actual typed data, whereas typing in JSON
       | (used by the other two) is a mess (usually worked around by
       | jamming anything ambiguous-in-JavaScript like floats, dates,
       | times, etc. into strings).
       | 
       | If you're using typed languages on either the client or the
       | server, having a serialization system that preserves those types
       | is a very nice thing indeed.
        
         | dboreham wrote:
         | > usually worked around by jamming anything ambiguous like
         | floats, dates, times, etc. into strings
         | 
         | because nobody has ever done this with protobuf...
         | 
         | btw what is the protobuf standard type for "date"?
        
           | sneak wrote:
           | int64 representing unix epoch millis (in UTC) is what I
           | usually use. You need to jump through an additional hoop to
           | store timezone or offset.
           | 
           | You can, of course, do the thing that JS requires you always
           | do and put an ISO8601 date in a string. This has the benefit
           | of storing the offset data in the same field/var.
           | 
           | JavaScript needs int64s as strings, I believe, because JS
           | numbers max out at 53 bits of integer precision.
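           | 
           | A quick illustration:
           | 
           |     // JS numbers are IEEE-754 doubles: integers are exact
           |     // only up to 2^53 - 1, so a full int64 can't round-trip
           |     // through Number.
           |     console.log(Number.MAX_SAFE_INTEGER); // 9007199254740991
           |     console.log(9007199254740993);        // 9007199254740992
           | 
           |     // Epoch millis fit comfortably, but arbitrary int64
           |     // fields get serialized as strings (or handled as
           |     // BigInt):
           |     const big = BigInt('9007199254740993');
           |     console.log(big + 1n);                // 9007199254740994n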
        
           | [deleted]
        
           | teeray wrote:
           | That would be a Timestamp, found in the "well-known types"
           | [0]. You can use your own encoding (milliseconds since epoch,
            | RFC3339 string, etc.), but using Timestamp gets you some auto-
           | generated encoding / decoding functions in the supported
           | languages.
           | 
           | [0] https://developers.google.com/protocol-
           | buffers/docs/referenc...
        
             | jeffbee wrote:
             | Timestamp is fundamentally flawed and should only be used
             | in applications without any kind of performance/efficiency
             | concerns, or for people who really need a range of ten
              | thousand years. The problem is that Timestamp should not
              | have used variable-length integers for its fields. The
             | fractional part of a point in time is uniformly
             | distributed, so most timestamps are going to have either 4
             | or 5 bytes in their representation, meaning int32 is worse
             | than fixed32 on average. The whole part of an epoch offset
              | in seconds is also pretty large: it takes 5 bytes to
             | represent the present time. Since you also have two field
             | tags, Timestamp requires 11-12 bytes to represent the
             | current time, and it's expensive to decode because it takes
             | the slowest-possible path through the varint decoder.
             | 
              | Reasonable people can use a fixed64 field representing
              | nanoseconds since the unix epoch, which will be very fast,
              | takes 9 bytes including the field tag, and yields a range
              | of 584 years, which isn't bad at all.
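              | 
              | Back-of-envelope version of that math (a sketch; varints
              | carry 7 payload bits per byte):
              | 
              |     // Encoded length of an unsigned protobuf varint:
              |     // len = ceil(bitLength(n) / 7).
              |     const varintLen = (n: bigint): number => {
              |       let len = 1;
              |       while (n >= 128n) {
              |         n >>= 7n;
              |         len++;
              |       }
              |       return len;
              |     };
              | 
              |     console.log(varintLen(1_615_800_000n)); // 2021 epoch seconds -> 5
              |     console.log(varintLen(999_999_999n));   // worst-case nanos   -> 5
              |     console.log(varintLen(267_000_000n));   // nanos < 2^28       -> 4
              |     // vs. one fixed64 of epoch nanos: always 8 bytes + 1 tag = 9.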
        
             | jayd16 wrote:
             | A timestamp is not quite the same thing as a calendar date.
        
       ___________________________________________________________________
       (page generated 2021-03-15 23:01 UTC)