Don’t Build a Distributed Monolith

Ben Christensen (Facebook, Hystrix; formerly at Netflix)

Description

In pursuit of ease and obedience to the DRY ("don't repeat yourself") axiom, it is common to build a distributed monolith instead of the intended decoupled microservice architecture. This talk will explore the common pitfalls, the pain they cause, and approaches to connecting microservices that permit the promised separation of concerns, allowing varied architectures, platforms, languages, timelines, and incremental rewrites of a microservice system.

Transcript

So my goal today is to convince all of you to not couple your systems with binary dependencies. And so what I’m going to be kind of talking about is my experience working with distributed systems and microservices and the trend I have seen to very easily fall into this mistake of optimizing for the short term, which ends up really costing us down the road and resulting in what I have come to call a distributed monolith, which really starts to remove a lot of the benefits of the microservice architecture.

So I’m going to be talking about two things today: shared libraries and network clients. And the shared libraries I’m talking about are those that are required. You can’t run your service without them. They’re often called “the platform”. They’re often called “the core” or “platform core”, those types of things. And I’m also talking about shared libraries of the transitive variety. The type wear you pull one thread and you get the whole sweater, where you add a single dependency and you end up with 50 libraries or 200 libraries pulled in and you only wanted one method.

The network clients I’m going to talk about are the official variety, the kind where you cannot talk to the service unless you use their official client. And so I’m going to talk about why these two macro-level concepts cause problems. So what does binary coupling look like in a service? There are lots of ways that you can do this. I’m not going to go through all of them, but some of them stand out. Some of them are very innocent at first.

But the point is that it doesn’t take long until you can have hundreds of libraries that are required. The key word there is required to run a system. And if you can’t actually launch a service and have it interact with your microservice architecture unless you have these hundreds of libraries, and those are the only hundreds of libraries that can work, then we’re really losing a lot of the benefits of the microservice architecture. And this is a distributed monolith, because we’ve really just taken a monolithic code base and spread it out across a network.

So have you ever seen it take months to upgrade a library across a company? For example, I’ve watched this happen a couple of times with Guava, trying to upgrade Guava across a company because everyone is depending upon it, but it’s all pulled in transitively. And you want to upgrade to a new major version of it, and engineers are actually assigned months of effort to go figure out how to get that diff and then spend a weekend upgrading the company. That’s a distributed monolith as well.

And if you’re trying to introduce a new language or major foundational tech stack and it’s going to take you over a year to introduce it, because you’re having to reinvent a lot of wheels and reverse engineer how the base platform works, you are also probably dealing with a distributed monolith. And I’ve seen all these things and fought against them. And they are painful. And so what these symptoms represent is lost benefits of the microservice architecture, and in many ways, if we’re going to end up with these symptoms we probably should have stayed closer to the monolith anyway, because we’re paying all the costs of the distributed system.

We’re not really getting the benefits of it. And so one of those lost benefits is being able to embrace polyglot. Now, when I talk about polyglot, I am not talking about let’s have three dozen different languages and every service written in something different. That is a whole different extreme that is very unlikely to be of any benefit to anybody. Generally, it does make sense for there to be core technologies that a broad portion of the company is familiar with, even if it’s just for sharing of knowledge and operational support when everything breaks at 3:00 a.m. Christmas Eve. Been there.

And so it’s very useful if everyone can actually have a collective understanding of how to debug the JVM runtime or the V8 runtime, etc. However, over time, you will find that there are different services that are served better by different languages, or there are different skill sets that you can hire or who feel more comfortable in different languages. Node.js has a group of people who think that it is the right solution. Others think that C++ is best. And then there’s Java.

Even within Java, there are whole splinter groups, all the different splinter groups within that. And so a question is, are you actually capable of allowing all these different tech stacks to coexist in the system and to do so idiomatically? Also, if we couple ourselves, we lose the benefit of organizational and technical decoupling. And so here I’m talking about an organization being able to grow such that the individual teams and organizations can evolve technically without coupled collaboration between them.

So can an individual team adopt a new technology or platform without convincing a central authority? That’s an important thing. If you don’t have that, you’re really not benefiting from microservices. And can they choose something more specific? Could you choose a different concurrency model than the core platform? This is one I specifically fought against, where the core concurrency model of “the platform” had been thread-per-request for good reason.

We had some use cases that would be better served by an event loop model, but basically we were going to have to rewrite the entire platform and reverse engineer everything if we wanted to do that. Temporal decoupling is another reason. So by temporal, I’m talking about over time the reasons why we make certain tech choices will change or just versions of libraries increase and we want to adopt the newer tech.

So for example, if you’re in the Java world and you adopted Tomcat years ago and it made sense then, but you want to adopt Netty now, can you actually do so without upgrading the entire company at once? Or can you simply upgrade to the newest version of Guava because you want to use some new functionality in it, but you can’t because some core central platform is transitively holding you to three versions ago. But isn’t shared code the right thing to do? This is what we’ve been taught from our earliest times learning how to write code.

And I want to argue that it’s not always actually the right thing or the best thing to do. And it’s because it’s not necessarily the right principle to prioritize when we’re talking about distributed systems. So I’m going to go to an authority other than me. “Building Microservices” is a book by Sam Newman. He’s got a few great quotes on this. So first of all, you lose true technology heterogeneity. And so this is about: if I take all my business logic, or the particular domain of how something should work, and I put it in shared code, then the only way for anybody to actually access that is by running the exact same technology stack so that they can use that shared code.

Second, it actually prevents you from being able to independently scale different services. So a concrete example here is if you have a thread-per-request model and a synchronous, blocking style in one system, that might work totally fine because that system only ever needs to scale up to 10 servers. But if I’ve got one that I now need to scale up to 10,000 servers, but I could get it to be 5,000 or 2,000 if I adopted a different concurrency model, shouldn’t I be free to adopt the different tech stack and concurrency model on that one without having to take the whole company with it?

Another one is the ability to deploy changes in isolation. So if I’ve got my business logic sitting in a shared library, and I have a bug or I need to put new behavior into it, and that requires me getting 10 other teams to accept my change and then deploy, I’m not doing a microservice architecture. That’s a distributed monolith and I’m having to synchronize the deployments of everybody to get my change out. And then this one actually hit close to home for me: once you start having shared code as the thing that propagates behavior, the seams in your system are really difficult to tackle.

And so when I was working on Hystrix years ago, Hystrix is a library for fault tolerance and bulkheading, etc. It’s actually a far more expensive and heavyweight solution than it would need to be if you didn’t have to worry about shared code. And so it, by design, had to be able to encapsulate any client-side code that could be running and shove it all off onto another thread. And that’s an incredibly heavyweight solution, and if you actually are properly decoupled at the network boundaries, you could start to deal with your fault tolerance and resilience and other logic in a much more efficient manner.
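
For a sense of what that heavyweight isolation looks like in practice, here is a minimal Hystrix sketch; the command name, group key, and the stand-in client call are hypothetical, not something from the talk. The opaque client-library code runs inside run() on Hystrix’s own thread pool, with a fallback when the dependency is failing.

```java
import com.netflix.hystrix.HystrixCommand;
import com.netflix.hystrix.HystrixCommandGroupKey;

// Hypothetical command wrapping an opaque client-library call so that a
// misbehaving dependency can't take the caller's request threads down with it.
public class GetUserCommand extends HystrixCommand<String> {
    private final long userId;

    public GetUserCommand(long userId) {
        super(HystrixCommandGroupKey.Factory.asKey("UserService"));
        this.userId = userId;
    }

    @Override
    protected String run() {
        // whatever client-side code the service team shipped runs here,
        // isolated on Hystrix's separate thread pool
        return callUserServiceClient(userId);
    }

    @Override
    protected String getFallback() {
        return "anonymous"; // silent default when the dependency is dead
    }

    private String callUserServiceClient(long id) {
        return "user-" + id; // stand-in for the real client call
    }
}

// usage: String user = new GetUserCommand(42).execute();
```

The thread hop and extra thread pool are exactly the cost being described: necessary when you must wrap arbitrary shared client code, and avoidable when the boundary is a protocol instead.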

So, for any of you who are doing a deep scan in your brain on what that acronym means again, if you’ve lost it, it’s “don’t repeat yourself”. This is baked into us from early, early in our programming experience. And there’s actually a section, literally, called “DRY and the Perils of Code Reuse in a Microservice World” in Sam Newman’s book. And this approach of “don’t repeat yourself” can be deceptively dangerous in a microservice architecture. So why?

Well, one of the reasons why we actually do a microservice architecture is so that we can decouple the producer, the service, from the consumers, so that when one changes the other ones don’t all have to change. And if we favor the DRY principle, we can start to break this very benefit of microservices. This is an important statement. So if your use of shared code ever leaks outside of your service boundary, you have introduced coupling. So shared code, in and of itself, is not problematic when you’re using it internally for your implementation.

But as soon as it starts to leak across your network boundaries and across your service boundaries, that’s when it starts to become a problem, when it couples those systems together. And I agree with this statement that the evils of too much coupling between services are far worse than those caused by code duplication. And honestly, you should just go read that section of the book, if not the whole book. So I was lucky enough that I got to review this while he was writing it.

And he’s done a great job of really walking through the principles that we should follow. So page 59 is where those quotes came from, so you can go read it all in context and get the rest of the details. So I want to go through some of the outcomes that I’ve seen over time as these things happen in a distributed system. So the first thing that happens is the client library, written by the service team, becomes the only official way to access the service and no other way will work.

And you start to see this when they make the network APIs themselves opaque. You actually aren’t aware of what protocol they’re using or what encoding formats. You don’t know what the APIs are. And the only way to access the system is, here’s the Java API, or pick your language. This then leads to the next problem. So if I have a single official client, it’s then very easy for service logic to start to drift into the client, because it’s now my formal way of talking.

You can do it all in the name of performance and just ignore actually fixing the networking libraries, and just say that I can shave off some time here, or it’s just easier to make a conditional check in the client. And all of a sudden, I’m starting to actually run a lot of my service in the client. This then leads to the fact that it basically becomes impossible to adopt any new architectures and languages, because the entire system is tightly coupled through these formal client libraries that have already made the decision of what the architecture and language and tech stack is, and I have to use them.
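
A small sketch of that drift: a hypothetical regional-restriction rule that belongs behind the service API but has been bolted onto the “official” client, so every consumer now ships, runs, and must redeploy that business rule.

```java
// Hypothetical "official" client with service logic hiding inside it.
public class OfficialCatalogClient {

    public String getVideoTitle(long videoId, String countryCode) {
        // business rule that should live behind the service's API
        if ("DE".equals(countryCode) && isRestricted(videoId)) {
            return "Not available in your region";
        }
        return fetchFromService(videoId);
    }

    private boolean isRestricted(long videoId) {
        return videoId % 2 == 0; // stand-in for a real rules lookup
    }

    private String fetchFromService(long videoId) {
        return "Title for " + videoId; // stand-in for the actual network call
    }
}
```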

And so if I ever want to adopt anything new, I actually have to figure out what to do with the dozens of clients that are all based upon a decision years ago. And these ultimately lead to pretty far reaching effects over time. So for one thing, the consuming team is now at the mercy of the service owner. Whatever the service owner chooses to do, whatever code they choose to put in their client, whatever their deployment cycle is, or whenever they need to fix a bug, the consuming team basically has no choice except to accept what they give.

In the public internet world, this means you go find a new vendor. Within a company, typically the pressures of deliverable dates and organizational pressure, and those types of things, basically mean that you just have to move forward, and it’s typically not a very easy thing to push back and say, “I don’t agree with what you’re pushing upon us.” And then you get into these cross-organizational disputes over timelines and priorities. Anyone who actually tries to buck that trend may go off and try to reverse-engineer the client and create their own black market client.

And I’ve seen this happen, but then they become very brittle. They’re missing the nuances of whatever logic is sitting inside that official client, especially anything that changes over time. And then the service owner, because they don’t actually support public APIs or third-party clients, could change and break their protocols and data contracts at any point in time. So this solution becomes very brittle. And the service owners are projecting their decisions onto all of their consumers across the company.

And so that includes whatever architectural choice the service owner makes and also, importantly, the resource utilization. So if their client decides that they’re going to open sockets or use threads or thread pools or allocate memory or do caching or any of those different things, those decisions are all made by someone else whose code comes into your system and, in your runtime, makes decisions that you can’t control. And then you multiply that by however many client libraries and decisions from all these different teams, and the operational complexity is now every consumer’s concern instead of being isolated to the service owner’s implementation.

And so imagine you’re a consumer. You’re a tier that needs to actually consume data from 10 different services, and I’ve been given 10 client libraries to use. Now all the complexity of the code and the technical decisions of all those is my problem to operate and debug, even at 3:00 a.m. when everything crashes and burns, and I’m looking through stack traces and memory allocation of things that we never wrote and never opted into supporting.

So what’s the alternative after my rant against all these things? Contracts and protocols. In particular, just like our programming languages have interfaces and APIs, services should hide all their implementation details and expose data contracts and network protocols. So if I’ve got a network protocol and a data contract, all of a sudden I can consume it from any language and any technology. A consumer can iterate over time and change as they wish, and they have no dependency on the service implementation, so the two can evolve independently of each other.
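
As a minimal sketch of that from the consumer’s side: talking to a service purely through its documented protocol (HTTP) and data contract (JSON), with no official client library involved. The endpoint and payload here are hypothetical.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class RecommendationsConsumer {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        HttpRequest request = HttpRequest.newBuilder(
                URI.create("http://recommendations.internal/v1/users/42/top")) // hypothetical endpoint
            .header("Accept", "application/json")
            .GET()
            .build();

        // Any language or stack that speaks HTTP and JSON can do exactly this;
        // nothing here depends on the service's implementation choices.
        HttpResponse<String> response =
            client.send(request, HttpResponse.BodyHandlers.ofString());

        System.out.println(response.body());
    }
}
```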

This happens to be why the internet has been so successful. It’s been able to evolve completely independently. There’s an interesting thing, though, that happens within a company. Even though we base ourselves off of the internet technologies, because we don’t have that hard decoupling of organizational boundaries and different priorities, we end up often making decisions that end up breaking the very things that made the internet so successful.

This is where there’s a whole lot of buts that show up. There’s all these reasons why we’re doing this. So, there’s a few that actually deserve talking about. So, there’s things like standardized logging, fault injection, distributed tracing, discovery, routing, bulkheading, etc. that actually do need solutions. Distributed tracing is one of them that actually, if you have one service in the system that doesn’t do it right, the distributed tracing is kind of broken for everybody. And so you can’t just say, “Well screw it. We’re not going to have any standardization whatsoever.”

You do need to address standardization for this to work, because the internet doesn’t exactly have distributed tracing, and within a company that’s a very nice thing to have. So there are legitimate needs for standardization. However, we do not need binary coupling to achieve this. So first of all, I want to talk about standardization via protocols and contracts. There are a lot of things that we can address. Not all of them, but a lot of them we can address by declaring them in the protocol and in the data contracts of our services, and then allowing independent libraries to evolve that handle all the implementation details of that.

The important part here is that consumers can choose which implementation of this they’re going to actually embrace, or whether they want to adopt a new tech stack. Let’s say Go is the new hotness and I want to go and adopt Go. If my team feels that it is beneficial enough for us to go and embrace Go, we can actually reimplement the necessary libraries against the protocols and contracts to do so. One example of this is the public AWS APIs. They’re very good at actually documenting them with the actual protocol that they stick to, HTTP predominantly, and the data contracts. And then it’s separate teams and the community who actually generate the common libraries that most people end up using.

Most people don’t actually talk directly to the RESTful APIs. Most of them are using some library. But those are done independently, and I can implement anything against those APIs. The second approach is to use auditing rather than binary coupling to ensure standardization. So this is one where, for things like tracing, we need something that audits a service before it comes in. And so it’s effectively an integration test for a new service. So if I want to bring a new service in, if I’m going to use some common tech stack that most services use, then it’s probably pretty easy to get through that auditing process because everyone’s done it.
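
As a rough illustration, one such automated audit check, for trace propagation, might look something like this. The header name, internal URL, and echo endpoint are hypothetical, not a real standard.

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Sketch of an "audit" integration test: call the candidate service with a
// tracing header and assert that it doesn't drop the identifier.
public class TracePropagationAudit {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        String traceId = "audit-" + System.nanoTime();

        HttpRequest request = HttpRequest.newBuilder(
                URI.create("http://candidate-service.internal/echo-headers")) // hypothetical endpoint
            .header("X-Trace-Id", traceId)                                     // hypothetical header
            .GET()
            .build();

        HttpResponse<String> response =
            client.send(request, HttpResponse.BodyHandlers.ofString());

        if (!response.body().contains(traceId)) {
            throw new AssertionError("Service dropped the trace identifier");
        }
        System.out.println("Trace propagation check passed");
    }
}
```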

But if I’m a team that, four years in, is trying to do something new, and it’s worth me taking a little bit longer to bring on a new tech stack, then there’s an auditing process to actually assert that yes, you actually show up properly with distributed tracing. You’re not breaking the flow of the identifiers needed for that. Yes, you’re logging everything off into the right system. No, you’re not putting a huge security hole in our system, those types of things. So a good question would be, doesn’t this actually make it harder to bring on a new service?

And honestly, it easily could, but it doesn’t have to. And so one thing I’ve seen work well are common tech stacks. They’re very similar; they resemble the platform that we talked about earlier. The difference is that these are not formalized as the only way to do something, and that they’re built against protocols and contracts, and conform with the auditing process. But you can have multiple of these stacks. I can have one for the Java systems, and even within Java, the Scala and Clojure teams might want to do something different that is more idiomatic for them.

Node.js can have their own, Go can have their own. And over time, I might end up with a Java one that is circa Java 6 and another that’s circa Java 8, one with Tomcat, one with Netty. And so these different stacks can exist. You get the benefits of reusing other people’s work and collaborating together, but you actually still allow the decoupling over time. The key is that the existence of the protocols and the contracts allows new stacks to be built. If new stacks can’t be built, then everyone is going to be coupled to whatever that core platform is.

A principle would be that basically, if I want to, I can actually go build a new stack against those protocols and contracts. So a litmus test would be, can I actually take a team of engineers who are interested in Node.js becoming a legit thing in my service and actually build something without convincing the rest of the company? Because if you have to convince the rest of the company, basically you won’t succeed most of the time, just getting through the convincing part.

You go and try to convince a group of C++ engineers that you’re going to bring Node.js in, that’s like Cold War type discussions. It just won’t happen. And honestly, we shouldn’t have to have those conversations in order for teams to do this. And also, sidecars and proxies do have their places, but that should not be the requirement, because they actually bring in performance and operational concerns of their own.

I’ve watched the use of sidecars actually be a massive issue for Node.js people, who spend their entire life trying to debug what’s going on in the sidecar rather than just building their system and operating it. So that’s a whole different discussion. So what might this look like, to actually build something like this? So with shared libraries, first of all, we’ve got to tame our transitive dependencies. If you intend on your library being shared across systems and a dependency is internal, you can shade it.

When I pull in a dependency, I should not get 50 other libraries just because you decided to use them for a method. Or just copy/paste the method you need. It’s really frustrating when you want to use the hashCode method from Apache Commons, and now I get 10 jars from 5 years ago, because sometimes copy/paste really is the best solution. If the dependency is part of your public API, neither of those are options, though.
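
As a minimal sketch of the copy/paste point: the few lines you actually wanted can be written locally with nothing but the JDK’s java.util.Objects, instead of exporting a transitive dependency to every consumer. The class here is hypothetical.

```java
import java.util.Objects;

// Hypothetical value type: equals/hashCode with the JDK only,
// no Apache Commons jars exported to anyone.
public final class VideoKey {
    private final long videoId;
    private final String region;

    public VideoKey(long videoId, String region) {
        this.videoId = videoId;
        this.region = region;
    }

    @Override
    public boolean equals(Object o) {
        if (this == o) return true;
        if (!(o instanceof VideoKey)) return false;
        VideoKey other = (VideoKey) o;
        return videoId == other.videoId && Objects.equals(region, other.region);
    }

    @Override
    public int hashCode() {
        return Objects.hash(videoId, region);
    }
}
```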

And when a dependency is part of your public API, things become much more strict. You basically can’t have breaking changes on that library now. And so this means that if there’s a library that bumps its major version every so often, and I think you can all think of a few that do this, then you basically can’t use it in your public APIs for shared libraries across system boundaries, because that means that every time you want to change it, the whole company has to do one mass upgrade.

So RxJava is something I’ve worked on, and OkHttp is from Square. Both have adopted a different approach to this. RxJava basically made the commitment to everybody that the 1.x major version will never break, because it is intended to be in the public API of libraries that are shared. And if it were to ever break, because a V2 comes along, all of a sudden the company would either have to do this massive upgrade or you could never actually adopt V2. So it’s actually going to namespace V2 independently so the two can coexist.
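
To illustrate that namespacing: RxJava 1.x lives in the rx package and RxJava 2.x in io.reactivex, published under different Maven coordinates, so both can sit on one classpath and no big-bang company-wide upgrade is ever forced. A small sketch:

```java
import rx.Observable;          // RxJava 1.x (package "rx")
import io.reactivex.Flowable;  // RxJava 2.x (package "io.reactivex")

// Both major versions coexist in the same JVM because they never share a namespace.
public class CoexistingRx {
    public static void main(String[] args) {
        Observable.just("from RxJava 1").subscribe(System.out::println);
        Flowable.just("from RxJava 2").subscribe(System.out::println);
    }
}
```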

Netty did this as well, which was helpful. And OkHttp has chosen to adopt the same model. On to the network clients bit. So there are a few different ways of solving this. Swagger is a very popular one, and it’s rather interesting. It is popular enough that it is actually evolving into the OpenAPI Initiative. And the premise here is that I can declare my RESTful APIs. It’s a very strict set of things. Basically, it’s request/response over HTTP. That’s it. That’s what you do with Swagger. But within that, it is actually very powerful and comes with a lot of tools for creating documentation, browsing it and playing with the APIs, and generating clients in lots of different languages.

Thrift has been around a long time. They’ve shown how to do this. Protobuf is another one where you declare your interface definition, what your types are. You can generate the code out of it. Quark by Datawire, who’s hosting us today, is doing some really interesting work in this space. I actually am pretty excited about where Quark is going, because it goes beyond those previous three in that it doesn’t assume that all you want to do is request/response or JSON over HTTP or something like that.

It is open to allowing us to define new protocols and interaction models, including streaming and subscriptions, those types of things. And so something that I’m exploring a lot is being able to move beyond just a request/response get, and being able to represent in our definitions single responses, multi-responses, a finite number of responses (so if it’s always a page of 10), or infinite response streams for push subscriptions. There are other things that internally we would typically want to put in our declarations as well. For example, caching tiers. The microservice model is, I don’t want to know about your dependencies.

And so if you’ve got data stores or whatever, they’re hidden behind you. Sometimes there are services where it makes sense, for performance reasons, that I want you to always hit memcached first. In that case, though, that becomes client-side knowledge that I have to have, so perhaps we should put that into our definitions so that it actually says, “Here’s the origin and the caching tiers,” and I can generate clients out of it, so that whatever memcached client you choose to use, I know how to interact with that protocol.

Fallback values: not all fallback values can be done like this, but for a lot of the systems in my experience, their fallback is actually to fail silently with some nil value, or there’s some legitimate default value that can cover the case where the system is dead. And then there are things that don’t really matter in public internet communication, because you’re hidden behind big load balancers, but that internally do matter: things like flow control and health metrics, and the bidirectional communication between client and server that allows client implementations, when you code-gen them, to actually do things like proper client-side load balancing decisions.
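
To make that concrete, here is a rough, hypothetical sketch in Java of what a richer service definition might capture: the shape of each interaction (single response, bounded page, infinite stream) and a legitimate fallback value, declared up front so clients in any language could be generated from it. The interface and type names are illustrative only.

```java
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.Flow;

// Hypothetical service definition; not a real IDL, just the idea expressed in Java.
public interface VideoCatalogContract {

    // single response: classic request/response
    CompletableFuture<Video> video(long videoId);

    // finite multi-response: always a bounded page (say, 10 items)
    Flow.Publisher<Video> topTitles(String profileId);

    // infinite response stream: push subscription
    Flow.Publisher<VideoEvent> titleChangeEvents();

    // a declared default that a generated client could fall back to
    // when the service is dead (the "fail silently with a legit value" case)
    default Video fallbackVideo() {
        return new Video(-1, "Title unavailable");
    }

    record Video(long id, String title) {}

    record VideoEvent(long videoId, String kind) {}
}
```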

These are the types of things that do not yet really seem baked into these systems. Before I left Netflix, this was the type of stuff we were starting to explore: how to make this just part of the service definition. So why do we actually fail at this so often in our companies? We start out well-intentioned to do this and we fail. I really think it comes down to just very simple things. We go for whatever is easy and at hand, and we know how to use shared libraries and send them around.

We’ve already got Artifactory, or Maven Central, or npm or whatever, and so we just use that as the mechanism to do it. I also think that in the short term it often feels more productive, to get up and going right away. It feels like we’re able to innovate much faster, because I can just ship it around with the things that I know easily. And also, I think that, unfortunately, it’s partly because of the service owners: the service has to exist first, before the client.

And since they get to make the decisions first, they’re making the ones that are easiest for them, and then that gets propagated. And by the time those decisions are made, typically there’s some deadline that has to be hit, and so pushing back on that typically doesn’t happen. It’s like, “Yeah, yeah. We get it. It’s tech debt,” and then five years later that tech debt is a platform with 400 jars that happens to pull in Hadoop because no one understands why.

And it is what it is. And so I think that these things actually are very innocent things that end up in this coupling. And the problem is that the cost of delaying the decoupling is actually very, very high. When you push it off down the road, untangling it later is incredibly hard and very expensive. And as I’ve seen efforts to try and do this, just talking about them ends up taking years. Okay? Just talking about how to untangle the mess takes years, let alone actually making it happen.

And so this is particularly ironic, because we actually know the solutions for this and they exist. And they have a pretty limited tax. It does take a little bit of extra thought up front to make it happen and a little bit of extra effort right at the beginning, but once the tools are in place, it’s pretty straightforward. And my goal is that I convince at least some of you to look beyond the short-term ease of this and avoid binary coupling by leveraging contracts, protocols, and automated tooling around our systems, so we can actually get the benefits of microservices and service-oriented architectures. Thank you.

Questions. Pardon me? Oh, he was asking if something like Spring Cloud is going against basically everything I just said. So I have generally had a hesitant relationship with things like Spring and Akka. With those types of things, basically you give them a finger and they take your arm. They basically promise, “We will solve everything for you.” I’m okay with those if you’re using them within a self-contained service. If you’re okay choosing that for your service, I’m fine.

I’m not okay with it being the architecture for the whole system, though. In fact, I’ve been pretty impressed with the stuff from Spring over the last couple years, but I’m not okay with it taking these systems and saying, “My entire company’s entire architecture is going to basically adopt one of these huge frameworks.” Because you’ve just coupled yourself to that forever. So that’s kind of my take. I’m okay with them within the common stacks. If you want to have a Spring stack that teams can adopt, that’s fine.

As long as it’s not preventing me from also using Go or Node.js or whatever in the different places, and as long as anything Spring is an implementation detail inside; same with Akka. If I want to use Akka within a service, that’s great. It’s a very powerful, awesome piece of tech. But then I should expose APIs out and not expect my entire company to all be Akka actors, because it’s not the right solution everywhere.

Question:

Ben: It’s a challenging one. And so, oftentimes when I talk to folks, I try and liken it to our streets, our road systems. We as a nation had to agree upon certain contracts and protocols of how cars get around. But then once we’ve agreed upon those, I don’t care what your car is. And a part of what… we’re out of time? Oh, okay. And so, by default, I expect most places to just end up at HTTP to start with for request/response. The way that I see this evolving is, in the same way that I don’t need a central authority to govern how all the platforms of all the systems should work, any given service should be able to opt in to saying, for whatever reason, there’s a new protocol that makes sense for our use cases that our clients can opt in to using.

And typically the way I’ve seen this need to be exposed is that you’ve got your foundational HTTP one, which is kind of the lingua franca. Then you can start to opt into WebSockets or Aeron, or, like Steve and I over there have been working on, a new one called ReactiveSocket. It starts to expose things that HTTP just doesn’t do. And just let it organically grow. I don’t believe it’s the right thing to try and get the whole company together and say, “Thou shalt,” for all the same reasons I’ve just given.

And just let them organically grow amongst the services where it makes sense. And if it’s successful, others will choose to adopt it.

Question:

Ben: The baseline is whatever the… honestly, that’s going to come down to first-mover advantage. That’s going to be the founder of your company, typically. Whatever they chose to use, that will end up probably being that baseline. So like at Google, it’s pretty predominantly Protobuf; at Facebook it’s Thrift. Different places have that foundation, and then it typically takes a pretty strong reason to move away from those things at the protocol level. The data contracts can evolve a lot more independently from the protocol.

Some of the stuff that I’ve been doing over the last year, and am continuing to do right now, is starting to look at: we’ve got request/response working well. How can we allow, in the negotiation between a client and server, that if both of them can support this other protocol, we opt into that and we get new behaviors?

Question:

Ben: Oh, yeah. There’s no way on Earth that we would succeed in getting a large organization to all, in one swoop, adopt a new protocol. But the nature of microservices is they’re point-to-point typically. And so you can start to expose these new abilities when the client and service actually have a reason to do something beyond the norm.

Question:

Ben: It is mostly a cultural one. That’s why most of what I was doing today is not talking about technical solutions. It was mostly about trying to convince people of it. Because the technical things are actually not rocket science. Any other questions?

Question:

Ben: Yeah, the question was what about decoupling from your…was it migrating some of the binary?

Question:

Ben: Okay, so decoupling binary dependencies into microservices. That’s effectively what microservices should be. So if I’ve got shared libraries that I’m shipping all around, basically the whole point of microservices is that I stop doing that and I take that library, turn it into a service, put a public API around it, and that’s what a microservice should be. So that question is, in and of itself, the answer to the problem. Yep. I’ve got business logic that is changing, some algorithm that is deciding what this user should see or not see.

That should not be a library. That should be a service. That’s the point that I can rapidly iterate on that and everyone just automatically gets it when they consume from that. Yep. Was there a second question?

Question:

Ben: Yes, the question was “If everyone is reimplementing the client, who is responsible for maintaining the contract?” So this is one of those things where there’s a balance on both sides. So I referred earlier to the fact that, in general, because engineers are lazy by default, the common pattern that will probably evolve is that you will end up reaching for something that is already there unless you have a strong reason to go build something. And that typically only happens every once in a while, so the expectation is that a handful of them already exist.

Then the question is, who’s actually asserting that those behave correctly? And to me, both sides have to share that. The server has to actually program defensively, like any server should, so that validating whatever input is coming in, and the load shedding and all those things, is done by the server. Even formal clients often get this stuff wrong. I have, many times, seen the official client of a service DDoS their own service. And so a service should already be capable of defending its own contracts.

And then whoever takes on the burden of actually writing a new client is obviously taking upon themselves the burden of getting it right. Like if you’re going to go, for example, and adopt a completely new tech stack, and that means you have to reimplement the routing and load balancing stuff, that’s somewhat non-trivial. You should be able to actually do that, but it’s still upon you and your team to do it right. But you should not require some central authority to do that.

Question:

Ben: On intraservice? Interservice. So within a particular…within a microservice, it’s talking to itself? Inter, intra…my English skills are confusing me right now.

Question:

Ben: Oh, you can’t really have a microservice architecture without that happening, honestly. So my expectation is as soon as you do a microservice architecture, you fully expect to have these huge fanouts and man, I’ve even seen where it’s a loop for legitimate reasons, ironically. And so a full graph is expected in a service architecture like this.

Question:

Ben: That’s fine. Sure. I, yeah, so I would expect that. Thank you for having me.
