I like the idea of gRPC because I wanted the contract, but I tried it on a small service and I think I would avoid it in the future. Too many rough edges and features I didn't really need. I was using it mainly from Rust and Python (maybe it is better in Go?) but it had a whole bunch of Google stuff in there I didn't need:
- Configuring the python client with a json string that did not seem to have a documented schema
- Error types that were overly general in some ways and overly specific in other ways
- HAProxy couldn't easily health check the service
There were a few others that I can't remember because it was ~5 years ago. I liked the idea of the contract, and protobuf seemed easy to write, but I had no need for client-side DNS load balancing and the like, and I was not working in Go.
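For what it's worth, the undocumented-feeling JSON string in the Python client is probably gRPC's "service config", whose schema lives in the grpc/grpc repo rather than the client API docs. A hedged sketch of the kind of document it accepts (here a retry policy; the service name is made up):

```python
import json

# Hedged sketch: gRPC channels accept a per-channel "service config" as a
# JSON string via the "grpc.service_config" channel option. The schema is
# defined in the grpc/grpc repo, not the client API reference.
# "something.v1.MyService" is an illustrative, made-up service name.
service_config = {
    "methodConfig": [{
        "name": [{"service": "something.v1.MyService"}],
        "retryPolicy": {
            "maxAttempts": 3,
            "initialBackoff": "0.1s",
            "maxBackoff": "1s",
            "backoffMultiplier": 2,
            "retryableStatusCodes": ["UNAVAILABLE"],
        },
    }]
}

# It would then be passed to the channel roughly like:
#   grpc.insecure_channel(target,
#       options=[("grpc.service_config", json.dumps(service_config))])
print(json.dumps(service_config, indent=2))
```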
kjuulh 22 hours ago [-]
I think connect-rpc[0] strikes a good balance between normal HTTP APIs and gRPC. It supports protobuf as JSON, so you could think of it as an opinionated HTTP API spec. A health check would be just a call to a URL: /something.v1.MyService/MyMethod -d '{ "input": "something" }'.
It works really well, and the tooling is pretty good, though it isn't that widely supported yet; Rust, for one, doesn't have an implementation. But I've been using it at work, and we basically haven't had any issues with it (Go and TypeScript).
But the good thing is that it can interoperate with normal gRPC servers, etc. That of course locks it into using the protobuf wire format, which is part of the trouble ;)
0: https://connectrpc.com/
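To make the health-check point concrete, here's a rough sketch of how a Connect-style unary call maps onto plain HTTP: POST to {base}/{package.Service}/{Method} with a JSON body. The base URL and service name are made up, and no network call is made:

```python
import json

def connect_request(base_url: str, service: str, method: str, payload: dict):
    """Build the URL, headers, and body for a Connect-style unary call.
    Connect maps each RPC to POST {base}/{package.Service}/{Method}
    with a JSON (or binary protobuf) body."""
    url = f"{base_url}/{service}/{method}"
    headers = {"Content-Type": "application/json"}
    body = json.dumps(payload)
    return url, headers, body

# Illustrative names only; any HTTP client (curl, requests, ...) can speak this.
url, headers, body = connect_request(
    "https://api.example.com", "something.v1.MyService", "MyMethod",
    {"input": "something"})
print(url)   # -> https://api.example.com/something.v1.MyService/MyMethod
```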
+1 for connect-rpc. After go-micro's maintainer went off the rails, I ripped it all out and switched to connect.
smnscu 11 hours ago [-]
> After go-micro's maintainer went off the rails
What do you mean by this? Genuinely curious, as someone who's followed that project in the past.
asim 9 hours ago [-]
He probably means when I took VC funding in 2019 and started to rip apart the framework to try to build a platform and business. The 2-3 years after were very chaotic.
My goal was never to serve the community but instead to leverage it to build a business. Ultimately that failed. The truth is it's very difficult to sustain open source. Go-micro was never the end goal. It was always a stepping stone to a platform, e.g. a microservices PaaS. A lot of hard lessons learned along the way.
Now with Copilot and AI I'm able to go back and fix a lot of issues, but nothing will fix trust with a community or the passage of time. People move on. It served a certain purpose at a certain time.
Note: the company behind connect-rpc raised $100m, though more for a build system around protobuf than for the RPC framework. But this was my thinking as well: the ability to raise $10-20m would create the space to build the platform off the back of the success of the framework.
jamesu 21 hours ago [-]
Using connectrpc was a pretty refreshing experience for me. Implementing a client for the HTTP stuff at least is pretty easy!
I was able to implement a basic runner for forgejo using the protobuf spec for the runner + libcurl within a few days.
dewey 22 hours ago [-]
I've only enjoyed using Protobuf + gRPC after we started using https://buf.build. Before that it was always a pain: Makefiles running obscure commands, developers having different versions of the protobuf compiler installed, and all kinds of paper cuts like that.
Now it's just "buf generate": every developer has the exact same settings defined in the repo, and on the frontend side we just import the generated TypeScript client and have all the types instantly available there. It's also nice to have hosted documentation to link people to.
My experience is mostly with Go, Python and TS.
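For flavor, the "exact same settings in the repo" part comes from checking in a buf.gen.yaml that pins plugins and output paths, so "buf generate" behaves the same on every machine. Roughly (plugin choices and paths here are illustrative and vary by setup and buf version):

```yaml
# buf.gen.yaml (v2 schema); plugins shown are examples, not a recommendation
version: v2
plugins:
  - remote: buf.build/protocolbuffers/go
    out: gen/go
  - remote: buf.build/bufbuild/es
    out: gen/ts
```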
PessimalDecimal 8 hours ago [-]
buf.build sounds interesting as a middle ground for using protos without going all-in on the Bazel build ecosystem.
neRok 7 hours ago [-]
> - Configuring the python client with a json string that did not seem to have a documented schema
I'm far from an expert, yet I came to believe that what you've described is basically a "code smell". And the smell probably comes from seemingly innocuous things like enums.
And you wondered if the solution was using Go, but no, it isn't. I was actually using Go at the time myself (this was a few years ago, and I used Twirp rather than gRPC) -- but I realised that the RDBMS > "Server(Go)" layer had quirks, and then the "Server(Go)" > "API(JS)" layer had other quirks -- and so I realised that you may as well "splat" out every attribute/relationship. Because ultimately, that's the problem...
Eg: is it a null field, or undefined, or empty, or false, or [], or {}? ...
[] == my valentines day inbox. :P
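The ambiguity in that list is easy to demonstrate; a small sketch (plain JSON, no protobuf involved) of how many distinct "nothing" states a consumer has to tell apart:

```python
import json

def tags_state(raw: str) -> str:
    """Classify what 'no tags' means in an incoming JSON payload.
    Conflating these states silently loses information."""
    obj = json.loads(raw)
    if "tags" not in obj:
        return "undefined"      # key absent entirely
    v = obj["tags"]
    if v is None:
        return "null"           # key present, explicitly null
    if v is False:
        return "false"          # key present, but a boolean?!
    if v == []:
        return "empty list"     # key present, empty
    return "has data"

for raw in ['{}', '{"tags": null}', '{"tags": []}', '{"tags": false}']:
    print(raw, "->", tags_state(raw))
```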
aanet 10 hours ago [-]
I’m old enough to remember the days of CORBA (Orbix, Iona, BEA, anyone ??) and its IDL, the IDL compiler, stubs and other props and doodads. Ah yes, and the registry as well, where the services registered themselves, and the discovery mechanisms.
gRPC (a very Googly thing) took it all, hook, line, and sinker, and made it URL-esque.
Can’t recall how the ORB overhead has been resolved in gRPC.
speedbird 9 hours ago [-]
And still I found it easier to use than gRPC.
jeffbee 7 hours ago [-]
There is nothing analogous to the ORB in gRPC, so your complaint seems hallucinated. Callers are entirely responsible for figuring out where their services are. gRPC holds no opinions whatsoever about how your servers are named or how their lifecycles are managed. There is not a "bus".
allanrbo 20 hours ago [-]
For those situations where you need just a little bit of protobuf in your project, and don't want to bother with the whole proto ecosystem of codegen and deps:
https://github.com/allanrbo/pb.py
pm90 16 hours ago [-]
My gripe with gRPC is that it doesn’t play super well with Kubernetes Services… you have to take a little bit of care: you need to understand how k8s Services work and how load balancing in gRPC works. Ideally I would want to use protobuf as an interchange format, and a “dumb” HTTP server that understands it.
That being said… once you do configure it properly it can be a powerful tool. The complexity though is usually not worth it unless you’re at a certain scale.
whs 15 hours ago [-]
I wrote a gRPC xDS server for Kubernetes that is configuration-free. Basically just load the xDS client library into your code, then use xds:///servicename.namespace (unlike DNS, the namespace is always required). It should be as lightweight as, and scale in a similar way to, the cluster DNS.
My company has run this exact code in production since it was created in 2022. We probably have well over 1000 rps of gRPC traffic running internally, including over the public internet for hybrid cloud connectivity. That being said, gRPC's xDS client is not always bug-free.
https://github.com/wongnai/xds
Have you checked out https://connectrpc.com/
We are using this for the basic HTTP server/client compatibility. And that means we can also use it from a web context without any proxy setup.
jeffbee 7 hours ago [-]
If you just want to send a protobuf to a host:port there's no reason you can't do that with gRPC. Client load balancers are something you can optionally layer on top.
est 20 hours ago [-]
> The contract-first philosophy
gRPC/protobuf is largely a Google cult. I've seen too many projects with complex business logic simply give up and embed JSON strings inside pb. Like WTF...?
Everything was good in the beginning, as long as everyone submitted their .proto to a centralized repo. Once one team starts to host their own, things get broken quickly.
It occurred to me that gRPC could optionally just serve those .proto files in the initial h2 handshake on the wire. It would add just a few kilobytes but solve a big problem.
PessimalDecimal 8 hours ago [-]
> Everything was good in the beginning, as long as everyone submitted their .proto to a centralized repo. Once one team starts to host their own, things get broken quickly.
Is this an issue with protobufs per se though? It's a data schema. How are people supposed to develop to a shared schema if a team doesn't - you know - share their schema? That could happen with any other particular choice for how schemas are defined.
ragall 5 hours ago [-]
It's a problem with PB because it requires everything to be typed (unless you use Any), which requires all middleware to eagerly type-check all data passing through. With JSON, validation is typically done only by the endpoints, which allows for much faster development.
There was a blog post a few years ago where an engineer working on the Google Cloud console complained that simply adding a checkbox to one of the pages required modifying ~20 internal protos and 6 months of rollout. That's an obvious downside that I wish I knew how to fix.
https://kmcd.dev/posts/protobuf-unknown-fields/ discusses the scenario you're hinting at.
It's possible in the story you mention that each of those ~20 internal protos were different messages, and each hop between backends was translating data between nearly identical schemas. In that case, they'd all need to be updated to transport that data. But that's different and the result of those engineers' choice for how to structure their service definitions.
matja 17 hours ago [-]
> As it occured to me, gRPC could optionally just serve those .proto files in the initial h2 handshake on the wire
Do you mean the reflection protocol, or some other .proto files?
jcgrillo 19 hours ago [-]
I personally really like gRPC and protobufs. I think they strike a good balance between a number of indirectly competing objectives. However, I completely agree with your observation that as soon as you move beyond a single source of truth for the .proto files it all goes to shit. I've seen some horrible things: generated code committed to version control and copied between repos, .proto files duplicated and manually kept up to date (or not). Both had hilarious failure modes. There is no viable synchronization mechanism except to ensure that each .proto file is defined in exactly one place, that each time someone touches a .proto file all the downstream dependencies on that file are updated (everyone who consumes any code generated from that .proto), and that for every such change clients are deployed before servers. Usually these invariants are maintained by meatspace protocols, which invariably fail.
jeffbee 19 hours ago [-]
I don't see why any of that would be necessary. There are simple rules for protobuf compatibility and people only need to follow them. Never re-use a field number to mean something else. Never change the type of a field. That's it. Those are the only rules. If you follow them you don't have to think about any of that stuff that you mentioned.
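Those rules fall straight out of the wire format: every value is tagged with its field number and wire type, so a decoder can simply skip numbers it has never heard of. A toy sketch of that behavior (varint and length-delimited wire types only; not a real protobuf library):

```python
def read_varint(buf: bytes, i: int):
    """Read a base-128 varint starting at index i; return (value, next index)."""
    shift = result = 0
    while True:
        b = buf[i]
        i += 1
        result |= (b & 0x7F) << shift
        if not (b & 0x80):
            return result, i
        shift += 7

def decode_known(buf: bytes, known: dict):
    """Decode fields whose numbers we know; silently skip the rest.
    `known` maps field number -> field name. Handles only wire types
    0 (varint) and 2 (length-delimited) for brevity."""
    out, i = {}, 0
    while i < len(buf):
        key, i = read_varint(buf, i)
        field, wire = key >> 3, key & 0x7
        if wire == 0:
            val, i = read_varint(buf, i)
        elif wire == 2:
            length, i = read_varint(buf, i)
            val = buf[i:i + length]
            i += length
        else:
            raise ValueError("wire type not handled in this sketch")
        if field in known:           # unknown field numbers are skipped,
            out[known[field]] = val  # which is why adding fields is safe
    return out

# An "old" reader that only knows field 1 still parses a message where
# a "new" writer added field 5: the extra field is skipped, not an error.
msg = bytes([0x08, 0x96, 0x01,   # field 1, varint 150
             0x28, 0x01])        # field 5, varint 1 (unknown to reader)
print(decode_known(msg, {1: "id"}))   # -> {'id': 150}
```

Reusing a field number with a different meaning breaks exactly this scheme: an old reader would decode the new value as the old field.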
jcgrillo 19 hours ago [-]
Absolutely! Forward and backward compatibility are one of the wonderful things about protobufs. And that all goes wrong when you try to define the interface in more than one place.
EDIT: also, although the wire protocol may tolerate unknown or missing data, almost always the application doesn't.
EDIT AGAIN: I'm not saying this is how it should be just that this is the low energy state the socio-technical system seems to arrive at over time. So ideally it should be simple but due to imperfect decisions it gets horribly complicated over time.
jeffbee 19 hours ago [-]
I fail to see how the application will even be aware of unknown data. Explain what practical problem could possibly arise if you think a message has 4 fields and I send you a fifth one.
Edited to reply to your edits: People who are just bozos with computers will never be kept from bozotry by any interchange format. If they lack any semblance of foresight then maybe they simply should get a different line of work. Postel's law is in force here. If you start sending me emails with extra headers my email program is never going to care. Protobufs are the same way.
jcgrillo 16 hours ago [-]
Apologies for the delay, this site appears to be rate limiting me. Yeah, used correctly they're great. But they're almost never used correctly in practice. I agree this is bozotry in the extreme, but it's widespread. To avoid it all they'd need to do is read like 4 pages of well-written, accessible documentation, but sadly that bar is too high. I don't blame protobufs! It's just that, somehow, what should be an elegant, simple system turns into a nightmare in practice. Every. Goddamn. Time. Not unlike when people try to use Kafka. That isn't to say the tool shouldn't be used, just that maybe we need a better way to organize/educate/hire engineers so they don't ruin things so badly. Or at least some way to impose an upper bound on the damage they can do. Maybe there's some kind of regularization effect if you force everyone to work with Map<Object, Object> JSON. Or maybe it's just the state everything devolves to eventually.
jayd16 19 hours ago [-]
It does have discovery built in. Is that what you want?
est 17 hours ago [-]
you mean grpc.reflection.v1alpha.ServerReflection? Close enough, sadly not generally enabled.
cyberax 19 hours ago [-]
Protobuf is good, but it's not perfect. The handling of "oneof" fields is weird, the Python bindings were written by drunk squirrels, enums are strange, etc.
gRPC is terrible, but ConnectRPC allows sane integration of PB with regular browser clients. Buf.build also has a lot of helpful tools, like backwards compatibility checking.
But it's not worse than other alternatives like Thrift. And waaaaaaaaaayyyyyy better than OpenAPI monstrosities.