Opening Keynote: Trends in Microservices – Richard Li, Datawire

Richard Li (Datawire)

Description

As more companies have adopted microservices, conventional wisdom on microservices architecture and best practices has started to converge. Based on Datawire’s experience with dozens of companies, Richard Li discusses the evolution of these key trends, including polyglot architectures, the service mesh, and the role of operations.

Transcript

Kelsey Evans:                         All right, hello everyone, and welcome to the Microservices Practitioner Virtual Summit. I’m Kelsey, and I’m gonna be your moderator for this whole week. I’m here with Richard Li, who is going to start with our opening keynote, Trends in Microservices. But first, just a quick walk through the features that you have on Zoom so that you can interact with the speakers. On the bottom panel you have a Q&A button, as well as a chat button that you can use to chat with attendees, and the Q&A button can be used to ask questions. Feel free to pop those in throughout the presentations, and we’ll get to them in the last 10 minutes or so. If you have any questions, you can use that or the chat button. So, I’m gonna hand it over to Richard, who will get started now.

Richard Li:                              Thanks Kelsey. So, thank you everyone for joining. Let’s see if I can … My mouse cursor has disappeared. So, we asked the folks who signed up what role they’re in, and we have a little over 600 people registered: about a third of you are developers, a quarter of you represent some form of engineering management, and 12% are DevOps engineers. What I think is most interesting is that about 15% of you self-identify as platform engineers. And this is one of the trends that I’ll be talking about, and it’s one of these things that we’ve seen more and more as people start to adopt microservices and build more cloud native applications.

If you’re tweeting about anything that happens, we have a hashtag, #msvsummit, and we’re also experimenting for the first time with an online chat on Gitter, as opposed to the Zoom chat; the Gitter chat will be persistent across all the different sessions, across the different days. What we’re doing this year, as opposed to last year … Last year, for those of you who joined, we ran a single multi-hour microservices extravaganza. This time, we’re trying to split up the different talks across multiple days.

So, we’re kicking things off today, I’m talking a little bit about trends in microservices, and then Kevin and Doug from Squarespace will be talking about microservices at Squarespace. Tomorrow, Lauri Apple from Zalando will be talking about how to avoid a GitHub junkyard, and then Paul MacKay from Ancestry will talk about their adoption of Kubernetes as they journey towards microservices. On Wednesday, Rafael Schloming from Datawire will talk about how developers can successfully adopt microservices. And finally, on Thursday, Matt Klein from Lyft will be talking about how Envoy, the open source layer 7 proxy that he wrote, was actually deployed in a service mesh at Lyft.

I wanna thank our sponsors: CloudBees, which has Jenkins World, their developer conference, coming up in August in San Francisco, and StackPointCloud, which lets you do on-demand Kubernetes on Amazon, Rackspace, or a number of other infrastructure providers.

So, I think at this point everyone is familiar with microservices, and the promise of microservices: you take the single release cycle of a monolithic application, break it into multiple independent release cycles, and you suddenly have much faster velocity. Datawire has been working with microservices for the past two and a half years, and one of the things we found is that the technology has really, really evolved, so what I wanted to do was highlight some of the things that we’ve learned, and that the industry as a whole has started to adopt, in terms of different technology trends associated with microservices. I’m here to talk about five different trends that we’ve seen.

So the first trend is about the evolving role of operations. I alluded to this a little bit earlier when I talked about the platform engineer, but if you think about the historical role of operations, it used to be focused on the reliability, availability, and scalability of your base-level infrastructure: your network, compute, and storage. And that’s still important to operations today. Obviously, if you don’t have reliable compute, network, and storage, you shouldn’t be worrying about anything else. But now, with Amazon, and Google, and so forth, this has really gotten much better, and what we’re finding is that the traditional role of operations, focused on this basic reliability, availability, and scalability, has dramatically expanded from this little segment to a much bigger segment focused on developer productivity.

And what we mean by developer productivity is things like the development environment: how do you make your application, which consists of multiple services, resilient, highly available, and scalable? How do you build a continuous deployment, or continuous delivery, pipeline? My developers wanna be able to monitor the application, so that if there’s actually a software bug, which in operations land I can’t fix, my developers can quickly identify it, root-cause it, and fix it. How do I do canary deployments?

So all of these are things that, organically, we’ve seen operations folks step up to, and what they’ve done is start to transform themselves into platform engineers, where instead of just thinking about operations and keeping your compute systems running, they’re providing common tools and infrastructure for the rest of the team. Some of that common infrastructure could be just reliable compute, but some of it might be a set of libraries to make your microservices more resilient. And that’s a big trend, where we’re seeing operations evolve into platform.

And so, right after this talk the guys from Squarespace will be talking about how they’ve adopted microservices. When I met with them, probably a month ago, it was interesting because they were, in large part, operations guys who have evolved into a platform team, and they now provide a common set of tools and infrastructure for the rest of the developers, so they can actually be more productive.

The second trend that we’ve seen is that it’s a polyglot world, and the reality is, it always has been. What we found is that it’s really, really hard to build an effective microservices architecture with a single language stack. Companies that try to mandate a single language stack run into two problems. The first is, you might say everything needs to be written in, say, Java 8. The problem, of course, becomes: what happens when Java 9 comes around? Because you are then faced with two different choices.

One is, you could say, “Well, we’re gonna do this big forklift upgrade, and upgrade everything at the same time to Java 9.” And the problem with that is, you end up with what Ben Christensen calls a distributed monolith, and you take a massive loss [inaudible 00:07:16] while all your teams hurriedly try to coordinate and figure out how to get to Java 9, and you do this big upgrade all at once. Which is probably not the best strategy; it defeats the entire purpose of microservices. So what you do instead is incrementally adopt Java 9 alongside Java 8, and suddenly you’re actually running two language stacks. They might both be Java, but one’s Java 9 and one’s Java 8. And the same thing happens, at a smaller scale, with libraries, where one service is using one version of an RPC library and another service is using a different version. So the reality is that even if you mandate a single language, you typically end up with multiple language stacks.

And the other use case, of course, is the more common way people think about polyglot, which is that different languages are good at different things. We see Python as a very popular choice for data analysis. We see NodeJS as a popular choice for UI rendering. And so, as organizations evolve, they really start to adopt different language stacks for different purposes, for velocity. So it’s important for our infrastructure to really take into account the polyglot nature of the universe.

This leads to the third trend that we’ve seen, which is the service mesh. So, if you think about what a service mesh is, you can think about a series of services: service A calls service B, which calls service C, and so forth. And if there’s a failure somewhere in that chain, how do you make sure that the entire system doesn’t completely collapse? The analogy is Christmas tree lights: if one of your lights is unreliable, you don’t want your entire strand to stop functioning. And the way you actually accomplish this, that people have converged on, is what’s called a service mesh. What you do in a service mesh is, you have all your microservices, and next to each of your services you have a special proxy, or library, that takes care of all your network calls.

And so that service, instead of talking directly to another service, talks to that proxy locally, which then talks to the other service. And the reason you do this is that the proxy can automatically layer in resilience, observability, and security semantics that the developer who wrote the service doesn’t actually need to worry about. This has become a very popular design pattern, especially as you start to add more services at scale. And so the service mesh, by handling how services communicate with each other, lets you actually scale your development productivity while adding resilience, security, and observability to your architecture.
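To make that pattern concrete, here is a minimal sketch, not from the talk and not any particular mesh’s API, of what a service-to-service call can look like once a sidecar proxy is in place; the localhost port and the Host-header convention are illustrative assumptions.

```python
# Sketch of the sidecar pattern: instead of dialing service-b directly,
# the application sends every outbound call to a proxy running next to it
# on localhost. The port (15001) and the Host-header convention are
# illustrative assumptions, not any specific mesh's API.
import urllib.request

SIDECAR = "http://127.0.0.1:15001"   # local proxy, one per service instance

def call_service(service_name: str, path: str) -> bytes:
    # The app only ever talks to the local proxy; the proxy resolves the
    # target service and layers in retries, timeouts, TLS, and metrics.
    req = urllib.request.Request(
        f"{SIDECAR}{path}",
        headers={"Host": service_name},  # tells the proxy which upstream we want
    )
    with urllib.request.urlopen(req, timeout=2.0) as resp:
        return resp.read()

# Hypothetical usage: service A asking service B for a recommendation.
# body = call_service("service-b", "/recommendations/42")
```

The point of the sketch is that the calling code contains no resilience or security logic at all; those concerns live entirely in the proxy sitting next to the service.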

And the history of this is actually really interesting. I would argue that Netflix was one of the pioneers of the service mesh with their Java stack. They wrote a series of projects … Hystrix, which is still very popular, Ribbon, Eureka, Piranha … different libraries that you added to your stack. Shortly after Netflix open sourced a lot of this stuff, Airbnb open sourced a different architecture that it called SmartStack, which was built around HAProxy. It was not as sophisticated as the Netflix stack, but it had one big advantage: because it was a proxy, it did not require you to program in Java. The Netflix stack only worked with Java, while the Airbnb stack worked with any language, because it was just a proxy. And other companies, like Yelp, started adopting SmartStack.

Most recently, we’ve seen the team at Lyft open source a modern alternative to HAProxy, called Envoy, which adds in a lot of the features that you would see in the Netflix stack, with an architecture similar to SmartStack. It can be managed by a control plane called Istio, which Google and IBM have worked on, and today we see this as one of the leading open source projects in the service mesh space. And so, based on the second trend I talked about, around polyglot, we’ve seen the market start to really adopt this proxy-based approach, to give developers the freedom to pick whatever language they want for their particular task. I’m also excited that this Thursday, at 1 o’clock eastern time, Matt Klein will be talking about the use of Envoy in Lyft’s production environment, and how they deployed Envoy as a service mesh.

And service meshes are pretty new, so when we asked all of you about them, almost 60% said you hadn’t heard of service meshes, which is why I added this slide to explain a little bit more about them. Only 3% of you are actually running a service mesh in production. We expect this to grow over the next few years, and we certainly think that service meshes are a major trend as people adopt microservices.

The fourth trend is that there’s a lot of open source infrastructure that people are starting to standardize around, and the first thing we’ve seen is Kubernetes. The vast majority of the companies we see are really starting to standardize on Kubernetes, particularly companies that have adopted microservice architectures in the past two years. We’ve seen just tremendous momentum behind Kubernetes, and you can get Kubernetes in the cloud from Google, from Microsoft, from Rackspace, from IBM, or you can deploy it yourself on Amazon. And as you can see from our audience, 16% of you are actually running production services on Kubernetes, while another 26% of you are actively evaluating Kubernetes.

And, as I mentioned before, we see Docker, Kubernetes, and Envoy being some of the core components that people are starting to use for microservices. We see these being adopted not just from a technology perspective, that is, because they have the best features and functions, but really because this is where we see the most community momentum. And I think that’s one of the keys to successful open source projects: the amount of community momentum. We see a lot of momentum around these three projects.

And the final trend we’ve seen is what we call bottoms up adoption. If you think about traditional enterprise software trends, like, “We’re going to adopt J2EE. We’re going to adopt agile. We’re adopting the cloud,” your CTO or VP of engineering makes this decision, communicates it to his or her managers, and it’s really a top down kind of initiative. With microservices we’ve actually seen some top down initiatives, but a lot of bottoms up, where a developer or an operations person says, “This is taking forever. I’m just gonna start writing this service with an empty repo, wire it up with REST calls, and work on it organically.” And we’ve seen this pattern repeat enough that it’s actually surprisingly successful. And we think it’s a trend that should be encouraged.
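In practice, that “empty repo, wire it up with REST calls” starting point can be very small indeed. Here is a hypothetical, minimal first service using only the Python standard library; the route, port, and payload are illustrative, not anything prescribed in the talk.

```python
# A hypothetical, minimal first microservice: one HTTP endpoint built with
# nothing but the Python standard library. Route, port, and payload are
# illustrative assumptions.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class HelloService(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/health":
            body = json.dumps({"status": "ok"}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

if __name__ == "__main__":
    # Run alongside the existing monolith and grow the service organically.
    HTTPServer(("0.0.0.0", 8080), HelloService).serve_forever()
```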

And so, one of the things we do when we talk to organizations is encourage them not to necessarily just think about how to architect a microservices architecture, but really to start with developers, start writing some code that does something useful, and then, over time, think about things like service meshes, and routing, and all the other things you need to think about. Really, just get started by deploying a service and seeing what you can learn. This Wednesday at 1:00 PM, my colleague Rafael will be speaking about developer-led adoption of microservices.

So, that’s it, and we’re going to see if anyone has any questions. Otherwise, we’ll take a quick 15 minute break, and have Squarespace on in a few minutes.

Kelsey Evans:                         Okay, so if anyone has any questions you can go ahead and use your Q&A button on the bottom of your screen to post those now.

Richard Li:                              And also, we’ll check the Gitter chat.

There’s a question.

Kelsey Evans:                         Okay. So we have one question; how are you seeing the microservices approach coexist with legacy SOA and [inaudible 00:16:36] from SOA?

Richard Li:                              It’s a great question. So, we like to think about microservices as service oriented development, as opposed to service oriented architecture, and by that I mean the developers actually start by just writing a service and deploying it side by side with your existing infrastructure. With a traditional SOA, what we’ve seen is that developers literally just start writing a microservice and don’t actually try to touch the rest of the infrastructure. Sometimes that microservice talks to your ESB, or whatever infrastructure you use to wire together your SOA; just as frequently it does not. So I would say that microservices is sort of the organic, bottoms up version of SOA, if that makes any sense.

Kelsey Evans:                         [inaudible 00:17:35]. We’ll wait a little bit longer just in case anyone’s typing any questions. Okay, we have another question. If each microservice can be in its own language, then is it worth it to lose out on common functionality, like logging, auditing, security, et cetera?

Richard Li:                              It’s a great question. So, the answer is, you do want some common functionality, and one of the ways you accomplish that is with the service mesh. Now, I would say you probably don’t wanna adopt, like, 30 different languages, right? But across your core languages, what you wanna do is support common cross-cutting semantics, like logging and security, and the way you do that is usually with this proxy that sits adjacent to each of your services.

And there are also other ways to do this, you can also just say, “We’re gonna use StatsD as our stats gathering tool,” and make sure you have a StatsD library in each of your supported languages. But it actually is important, and I’m glad you asked the question, to support common functionality across all your languages, such as logging and debugging. The service mesh makes that much easier, but there are also other strategies you probably will need to take.
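As a small illustration of that second approach, here is a sketch of the kind of thin StatsD-style client that each supported language stack might wrap; it assumes the plain StatsD line protocol over UDP on the default port 8125, and the metric names are made up.

```python
# Sketch of the "common StatsD library per language" approach: each language
# stack wraps the same plain-text StatsD protocol, so every service emits
# metrics the same way. Host, port 8125, and metric names are illustrative.
import socket

class StatsdClient:
    def __init__(self, host: str = "127.0.0.1", port: int = 8125):
        self.addr = (host, port)
        self.sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

    def incr(self, metric: str, count: int = 1) -> None:
        # StatsD counter line format: "<metric>:<value>|c"
        self.sock.sendto(f"{metric}:{count}|c".encode(), self.addr)

    def timing(self, metric: str, millis: float) -> None:
        # StatsD timer line format: "<metric>:<value>|ms"
        self.sock.sendto(f"{metric}:{millis}|ms".encode(), self.addr)

# Hypothetical usage inside any service, regardless of language or purpose:
# stats = StatsdClient()
# stats.incr("orders.api.requests")
# stats.timing("orders.api.latency", 42.0)
```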

Kelsey Evans:                         Okay, two more questions have come in. One is, can the proxy only be used with Kubernetes, or with any microservices framework?

Richard Li:                              So the proxy approach can be used with any microservices framework; it’s just a design pattern. Airbnb, for example, deployed SmartStack seven or eight years ago, when they were a Rails application, and I believe an Aero data center. Similarly, Yelp actually used SmartStack on Amazon, and so forth. Envoy itself is just a proxy, so it can also be used on any infrastructure. The Istio project, whose goal is to produce an out-of-the-box service mesh, only supports Kubernetes today, but they have a stated goal to support other sorts of infrastructure. So, this is my long-winded way of saying that, depending on how much work you want to do today, you have different choices. If you’re on Kubernetes there is more of an out-of-the-box solution; if you’re on other infrastructure there might be a little bit more engineering to do, but that amount of work is decreasing as we speak.

Kelsey Evans:                         Okay, another question. How do you see microservices and data science integrating, and if so, what tools or approaches work?

Richard Li:                              So microservices and data science, I actually don’t know what you’re referring to, and I don’t have a lot of expertise around data science. The only thing I can say, and maybe this doesn’t actually answer your question, is that we’ve seen a number of data oriented companies build data science type services using a microservices architecture. So, a great example would be … What’s that design clothes company for women?

Kelsey Evans:                         Stitch Fix.

Richard Li:                              Stitch Fix. Yeah, so Stitch Fix actually uses a microservice architecture, and they do a lot of data analytics to provide custom clothing … They send you a box of clothes every month, based on their data analysis of all the questions they ask you, and apparently it works very well. From a microservice architecture perspective, they’ve built a series of services that do a lot of the analytics, and what’s interesting about them is that they view microservices really more as a data storage problem, because the question becomes, “How do you actually shard all this data so that it’s accessible to each of your services in an interesting and rapid way?”

And so, I think that same paradigm, around how do you actually get your teams to independently develop functionality, still applies, but there’s some interesting questions around how you actually organize that data, and architect that data. Do you have one big data lake? Do you have many, many different pools of data, and if so, how do you ensure consistency? Those are some of the challenges that Stitch Fix actually wrestles with.

Kelsey Evans:                         Okay, great. So those are all of our questions that have come in. If you didn’t get a chance to ask a question feel free to post it in the Gitter chat and we’ll get to it throughout the week. Right now we’re gonna pause for a moment, and then start with Squarespace right at 1:30. Thank you.
