Modern-Day Architecture Design Patterns for Software Professionals (medium.com/better-programming)
167 points by tanmaydesh5189 on Oct 14, 2020 | 68 comments


Has anybody legitimately had good experiences with microservices these days? Especially if you're not starting with a monolith that has done well? My experience at several companies, several projects, has been that what was actually just a poorly built, inflexible, monolithic application becomes a poorly built, distributed, networked, polyglot application that is now 100x less flexible. Sure you can reimplement any microservice interface any time you want and drop in the replacement, but good luck figuring out where that was being called and how many unique and interesting ways it might break if you don't implement the edge cases in the same way.

Microservices and patterns like the ones in this article really add a serious amount of cost in the form of complexity, and you had better understand, far better than this article explains, the actual ways that your new and distributed system might fail. What happens when this circuit breaker opens and you stop creating accounts, but your transaction circuit breaker hasn't opened, and because you're doing CQRS you've already told your client "202 Accepted"? Be ready to deal with that.
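
To make the failure mode concrete: a circuit breaker can only fail fast once it trips; nothing about it can retract a response you already sent. A minimal sketch of the pattern (the consecutive-failure policy and the values are illustrative, not how any particular library does it):

    package breaker

    import (
        "errors"
        "sync"
        "time"
    )

    var ErrOpen = errors.New("circuit open: failing fast")

    // Breaker trips after `threshold` consecutive failures and rejects calls
    // for `cooldown`, after which it lets traffic try again ("half-open").
    type Breaker struct {
        mu        sync.Mutex
        threshold int
        cooldown  time.Duration
        failures  int
        openedAt  time.Time
    }

    func New(threshold int, cooldown time.Duration) *Breaker {
        return &Breaker{threshold: threshold, cooldown: cooldown}
    }

    func (b *Breaker) Call(fn func() error) error {
        b.mu.Lock()
        if b.failures >= b.threshold && time.Since(b.openedAt) < b.cooldown {
            b.mu.Unlock()
            // Fail fast. Whatever was already promised to the client
            // (e.g. that "202 Accepted") is the caller's problem now.
            return ErrOpen
        }
        b.mu.Unlock()

        err := fn()

        b.mu.Lock()
        defer b.mu.Unlock()
        if err != nil {
            b.failures++
            if b.failures >= b.threshold {
                b.openedAt = time.Now() // (re)open the breaker
            }
            return err
        }
        b.failures = 0 // any success closes it again
        return nil
    }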

In my experience, you're much better off knowing how to build and iterate on a scalable monolithic architecture first. Build something that's good and that you can iterate on pretty easily. Figure out how to profile monolithic applications for performance and squeeze more throughput out of your system before you try your hand at splitting that architecture up and adding the myriad of distributed-systems problems on top of the essential complexity.

Don't underestimate the value of a compiler error or a right-click-to-refactor IDE. You don't get those when suddenly everything is loosely coupled microservices over a service mesh.

There is plenty of tech to master to manage a half decent networked monolith without adding your own accidental complexity to the equation.

Just to add a reference to a primary inspiration, on top of my anecdotes: https://www.martinfowler.com/bliki/MonolithFirst.html


At my previous workplaces: no, it has always been an unmitigated disaster. And in one case I think moving from the original monolith to a microservice-based architecture killed the business.

At my current workplace: yes! We have about a dozen separate "microservices" and it works pretty well actually. The main difference I can see is that the boundary layers are extremely well defined and completely asynchronous. There are no synchronous REST or RPC calls between services at all. And it actually works!

I think the key is loose coupling in the extreme.


Exact same experience here.

At my previous workplace it was a disaster.

They hired too many developers in order to grow, and that led to too many people working on the same codebase. Microservices were adopted to solve that. After two years our product was exactly the same as before, except it had many more issues with latency and errors. Headcount was many times bigger, though.

The problem was microservices being adopted for political rather than technical reasons. They wanted teams to own their stuff, but the boundaries were too fuzzy.

We had a lot of duplicated data because leadership never decided which team owned what. Also, some features had to be built by duplicating data across two different services: if team #2 couldn't convince team #1 to send certain events to their microservice, team #2 had to duplicate the whole DB table using Kafka (very appropriate, right?) and run a cron job.

Another problem arose when they banned "microservices" but asked teams to build "well-sized services". New teams had a limit on how many services they could have, and to get around the limitation they started mixing completely unrelated functionality into a single service (and yep, that was sanctioned by leadership).

My current company has Microservices, but they exist purely for technical reasons and are extremely loosely coupled, so it works great.


Before anyone even touches "microservices" they should be forced to read Domain-Driven Design. If you do not have a "Bounded Context" and are not splitting your "microservices" by that then you're doing it very wrong.

A lot of the time I've seen people use "microservices" to fix what are just horrible design issues in their framework / codebase of choice.


Is that a book? Would love a reference.


Domain-Driven Design by Eric Evans is the usual reference. There are some other well-regarded books on the subject too, e.g. by Vaughn Vernon.


> I think the key is loose coupling in the extreme.

Yeah, I agree for sure. I think that's kind of the key whether you're building a distributed system or a monolith, though - e.g. building on small interfaces that are easy to implement. In general, I think the thing about microservices is that they're a lot more sensitive to initial direction because of the added complexity of major architecture changes.


In general I agree, but I think microservices applied in the right context can be useful. For example, say you have a system that manages bookings for a really popular musical act and frequently crashes with load when tickets go on sale. In that case it could be helpful to have the frontend service that deals with the booking requests isolated so it doesn't bring down the rest of the system.

The problem is microservices for the sake of microservices, which seems to be the majority. When a design pattern is applied just because somebody is familiar with it and not because it's the best fit to the problem at hand - that's when things get messy.


Maybe I'm just not up on the startup or smaller company scene, but isn't it still pretty important to acquire those customers before you worry about whether or not they'll take your systems down? I'd expect that to give you plenty of time to build the monolith, figure out if it's too brittle, too unscalable, etc., and maybe do a ground-up rewrite at low cost before you solidify too much by breaking things up.

Even if you do need to scale very quickly right away, it's so much easier to profile and optimize a single process. I've been successful in optimizing e.g. a Go library from 100 KB/s to 700 MB/s with about two weeks of work (`pprof` and `go test -bench` are amazing tools), which then handled 20k tps in an application, and could have scaled out but didn't actually need to. I've also gotten millions of rows in a Postgres database to be full-text searchable in under 10ms, also with maybe only a week of work. I can't readily think of a time (and I've had plenty of opportunity these days) when I've successfully diagnosed and optimized an underperforming distributed system, despite having spent months discussing and attempting to measure performance issues and trying various "yeah maybe that could work" fixes.
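
For the curious, that kind of optimization loop is mostly just a benchmark plus the profiler. A minimal sketch of the shape of it, with gzip standing in as a hypothetical hot path:

    package codec_test

    import (
        "bytes"
        "compress/gzip"
        "testing"
    )

    // sampleInput stands in for a representative payload of whatever library
    // is actually under test.
    var sampleInput = bytes.Repeat([]byte("some representative payload "), 1<<12)

    func BenchmarkEncode(b *testing.B) {
        b.SetBytes(int64(len(sampleInput))) // makes the output report MB/s
        b.ReportAllocs()
        for i := 0; i < b.N; i++ {
            var buf bytes.Buffer
            w := gzip.NewWriter(&buf)
            w.Write(sampleInput)
            w.Close()
        }
    }

Run it with `go test -bench=Encode -cpuprofile=cpu.out`, then `go tool pprof cpu.out` to see where the time actually goes; rinse and repeat.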

Granted, it's almost certainly always been a case of microservices for the sake of microservices, but I always come back to the lack of tooling (no compiler to tell you that you broke some contract), the siren call of polyglot microservices that makes it a lot harder to build common tooling, and the temptation to prematurely hand microservices to different teams (further adding to the lack of agility), as really solid reasons to stay smaller as long as possible.


Yeah I'd agree that for most startups a monolith makes more sense. It's usually easier to add microservices later if they're needed than to go the other way.

Your point about the Go binary reminds me of a related problem. We so often come up with solutions (like microservices) to performance problems that just add more complexity, when often the performance problems can be solved by reducing complexity. Then, as you say, it becomes so complex that it's more difficult to fix problems later.


Dangerous statement... Is it because the 'problem solvers' in question don't in fact know how to solve the complexity in code?


Possibly somewhat, but I think more often it's that reducing complexity isn't valued in many organisations. It's usually faster to patch over a solution than to go and reduce complexity in the existing system. Also, the larger the system, the less willingness there is to risk changing something that already exists - technically and politically.

As an example, imagine that you have a very slow service. You could dig in and find the bottlenecks, and rewrite the service to remove those bottlenecks. Or you could just add a caching layer on top. The cache layer likely solves the problem quickly, but it adds more complexity to the system overall.
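
To make the trade-off concrete: a cache-aside layer is only a few lines of code, but it immediately drags in TTL choice, invalidation on writes, and thundering-herd behaviour. A rough sketch (in-process map as the cache; loadFromDB is a hypothetical slow path):

    package cache

    import (
        "sync"
        "time"
    )

    type entry struct {
        value     string
        expiresAt time.Time
    }

    type Cache struct {
        mu   sync.Mutex
        data map[string]entry
        ttl  time.Duration
    }

    func New(ttl time.Duration) *Cache {
        return &Cache{data: make(map[string]entry), ttl: ttl}
    }

    // Get serves from the cache when the entry is fresh, otherwise falls back
    // to the slow path and stores the result. New problems introduced: how
    // stale is acceptable (ttl), who invalidates on writes, and what happens
    // when many callers miss the same key at once.
    func (c *Cache) Get(key string, loadFromDB func(string) (string, error)) (string, error) {
        c.mu.Lock()
        e, ok := c.data[key]
        c.mu.Unlock()
        if ok && time.Now().Before(e.expiresAt) {
            return e.value, nil
        }
        v, err := loadFromDB(key)
        if err != nil {
            return "", err
        }
        c.mu.Lock()
        c.data[key] = entry{value: v, expiresAt: time.Now().Add(c.ttl)}
        c.mu.Unlock()
        return v, nil
    }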


You don't need microservices to do this.

Build a monolith which does everything. Deploy multiple instances. Route booking requests to a subset of the instances. Route other requests to the other instances.

We did this at an old job. We had a data-intensive app which had a web UI for users, an admin UI, and an API for data export. They all had lots of common code for persistence, querying, aggregation, etc. Because of the common code, the simplest way to implement the three interfaces was in a single codebase. But because API requests could sometimes crash the app (we didn't do a great job here!), we segregated the API into its own pool of instances. We didn't segregate the admin UI, but could have done that if necessary.

As well as simplicity, another advantage of this approach is that the decision about how to split up traffic is very dynamic: you can change it just by adding or removing instances and changing load balancer config, without having to make any code changes. If you split the traffic up into separate microservices, you have to make that decision upfront, and changing it is a lot of work.
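
A sketch of the idea, in case it helps: the "split" lives entirely in the routing layer, with identical monolith instances behind each pool. In practice this is load-balancer config; the little Go reverse proxy and the pool addresses below are just illustrative stand-ins.

    package main

    import (
        "log"
        "net/http"
        "net/http/httputil"
        "net/url"
    )

    // pool returns a proxy to a backend pool; in reality each address would be
    // a load balancer or DNS name fronting several identical monolith instances.
    func pool(target string) *httputil.ReverseProxy {
        u, err := url.Parse(target)
        if err != nil {
            log.Fatal(err)
        }
        return httputil.NewSingleHostReverseProxy(u)
    }

    func main() {
        bookingPool := pool("http://booking-pool.internal:8080") // instances reserved for booking spikes
        defaultPool := pool("http://default-pool.internal:8080") // everything else

        mux := http.NewServeMux()
        mux.Handle("/bookings/", bookingPool)
        mux.Handle("/", defaultPool)

        log.Fatal(http.ListenAndServe(":80", mux))
    }

Changing the split is a routing change, not a code or deployment-topology change.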


> Build a monolith which does everything. Deploy multiple instances. Route booking requests to a subset of the instances. Route other requests to the other instances.

Don't you now have "microservices" bundled together in a single binary - and deployed together? If each of your components exposes different APIs and each instance only handles certain endpoints, then you kind of have microservices - not literally, but it seems the idea is the same.

Simplicity is a benefit, but I don't really see how the dynamic traffic routing is a huge benefit vs. microservices - you can dynamically scale microservices up and down too.

I am not saying this is a bad approach, by the way, but just that it isn't much different from microservices.


I think you can use such an architecture with almost zero or minimal code change. You don't have to split the shared endpoints using remote API calls, since they are all available inside your monolith.


Agreed. Which is why I mentioned simplicity being a benefit - if you already have a monolith that you are seeing issues with. But the parent said "You don't need microservices to do this."; this solution isn't much different from microservices.


The GP solution has very little in common with microservices - it’s just a load balanced architecture with specific pools for different workloads. Just like we could do in the 90s.

Microservices are only microservices if they have both narrow focus and radical independence. If they have narrow focus without the independence then you probably have a monolith with 25 executables to deploy.


Are all "web servers" connecting to the same database (cluster or instance), without application-specific database partitioning?


In our case, they did. If you have a clustered database, they could perhaps use different nodes in the cluster. If you have read replicas, they could have separate pools of read replicas.


This often feels like a meme: "I had a scalability problem, so I made everything distributed and asynchronous. Now I have three problems!"

It's not that the idea is fundamentally broken, but I suspect a lot of people dramatically misjudge the trade-offs. Never underestimate the power and efficiency of one big, fast server, nor the added complexity that is introduced by taking loose coupling to extreme levels.


Yes at a large ecommerce company with many teams.

Each team is able to treat their own microservice as a product, with its own release schedule and roadmap. Decoupled from other teams.

If you're having to touch multiple services to implement changes, you've sliced your services in the wrong way.


>Has anybody legitimately had good experiences with microservices these days? Especially if you're not starting with a monolith that has done well? My experience at several companies, several projects, has been that what was actually just a poorly built, inflexible, monolithic application becomes a poorly built, distributed, networked, polyglot application that is now 100x less flexible. Sure you can reimplement any microservice interface any time you want and drop in the replacement, but good luck figuring out where that was being called and how many unique and interesting ways it might break if you don't implement the edge cases in the same way.

It depends on the team. If there is a large team, then yes, microservices could be a good choice. It is not a silver bullet. In the end it comes down to the quality of the developers. With a group of good developers, a good product can be built with microservices or a monolith. With functional programming or OO or declarative style.

Stop listening to marketing people who use buzzwords without understanding the underlying tech.


> Has anybody legitimately had good experiences with microservices these days?

I have, but in a specific environment. Almost all of the final requirements were known in advance. Once installed, the final version would not receive any updates other than critical bug fixes.

Even in that environment it became clear that any issue is a lot more work to debug, and that it will probably be the maintainer of the other service who has to do the troubleshooting.

The places where interaction occurs become very important, but these are now in separate projects. It is easy to mess up code that you cannot see.


I think one of the reasons why so many devs immediately want to jump to microservices is because they haven't learned how to architect an application well and have been burned by entangled responsibilities. E.g. many Rails apps, if you're not super disciplined, devolve into a hot mess where your database is everywhere (even in your views), tests take forever to load because there is no isolation, framework updates take forever and so on. Other frameworks like Spring are a bit less bad, but they still don't force you to write properly layered architectures, use domain driven design, etc.

Of course, the idea that this is magically fixed if you split things up into smaller services, is a bit misguided. Yes, API contracts are more explicit (especially in dynamically typed languages where you otherwise lack any sort of contract between pieces of your application), but there are still all sorts of implicit dependencies. At some point, someone might say "ah let's have service B also use the database of service A because of this one feature", or, "you can only call this service after you have called that other service in exactly that way" etc. and then you slowly start creating that distributed monolith...


People are always looking for a silver bullet but that obviously does not exist.

There are technical pros and cons to both monoliths and microservices.

Either way, the most important is to have a good, functional team. A team that has built a good monolith will probably build good microservices. A team that has built a messy monolith will probably do even worse with microservices.

My view is that microservices are useful at scale and that a well-designed monolith can be split into microservices as needed later on.


>Has anybody legitimately had good experiences with microservices these days?

Yes. I work for a company that started out with a few separate monoliths that were hacked together under intense time pressure by some smart folks that didn't anticipate the scale they were going to be operating under. As the team expanded and more talent was added, they were able to partition up some of the functionality into separate services.

When I joined we had a small team with a mixture of legacy systems written in Java, Perl, PHP, Scala and Python. Some of the microservices were poorly done, others were fine. The new teams were able to completely overhaul each service one at a time to where they were stable and needing almost no maintenance/babysitting.

Would microservices have been a good starting decision for this company, given its current level of engineering talent? Probably not. However, for a bootstrapped team crunched for time, with enough foresight to know that rewrites would be needed but not certain where, microservices can end up mitigating a lot of unknowable future issues.


Yes; for us, it's a refinement of Service-Oriented Architecture rather than hype.

Schemas (with well defined major and minor versioning), routing (with timeouts) and service discovery (in the more general sense of being able to find the service you need) are all essential components.

I fear Conway's law is at its strongest with microservices, and actually getting teams to collaborate is the biggest challenge.


I've worked with two major (well, relatively) microservices architectures, and honestly they made things many times more complicated and expensive than they should have been.

In one, I came in after a year and a half and they still had nothing to show for it other than some marketing (this was fairly straightforward e-commerce). They tried to bolt Java applications together using Amazon Kinesis (which isn't intended as a service bus), and we quickly ran into issues because the stock / inventory service was separate from the order processing service, so any incoming order would have to do a weird round trip to confirm stock before it could confirm the order to the end-user. To name but one example.

The other was for a shipping company, where every 'model' (users, addresses, shipments, etc.) was put in its own microservice. That one started out with NodeJS services before (for some reason, probably political / hiring related) migrating to Scala, only adding to the complexity of what, in essence, was a CRUD application where the heavy lifting was still being done in the legacy back-end / mainframe systems.

I don't get it. Glad I'm working on a project right now where I have full control over things, it's also a CRUD project but I can build it as a straight monolith.

One other anecdote I have is where we built a serverless application. It too was e-commerce, but the heavy lifting was done by as-a-services: we used Commercetools as the e-commerce back-end-as-a-service, Contentful as CMS-as-a-service, Adyen for payment processing, and a heap (about 30) of 'serverless' functions deployed in Netlify as the glue. It seemed to work all right, better than the microservices attempt (because the individual functions didn't have to talk to one another; they responded to either user / client requests, or events generated from one of the services doing the heavy lifting).

Anyway if I had to do ALL of those again (and if I actually had a say in it, lol) I'd just write them as plain old Java monoliths.

The reason that's not considered an option much is because it's boring, and highly paid software engineers don't want to do boring stuff. The second reason is that if you have boring code, you get mediocre developers. That's why in some projects I've seen them push for Scala instead of plain Java (which would've done just fine): if you pick Scala you weed out a good 95% or so of developers, the reasoning being that one already has to be a good developer to start to comprehend Scala.

Of course, it backfired because these developers are so 'good' that they're also stubborn and will do things their own way, which may not be the same way as other devs. That may have just been my limited experience though, IDK.


In most async implementations, the final stock check just cancels the order, which is fairly rare.

An eventually consistent implementation is good enough for displaying to the user.

The extra complexity of displaying consistent and up-to-date stock inventory to the user isn't really worth the cost compared to the fairly rare occurrence of having to cancel an order.


> They tried to bolt Java applications together using Amazon Kinesis (which isn't intended as a service bus)

In my travels I've learned that 'Using %NOT_ESB% as an ESB' is the first sign you are entering microservice hell.


Monoliths always sound like a good idea, but they always lead to some low-code platform garbage that the org has settled upon. Microservices stay as proper code.


Gotta disagree with you. A well built / designed / simple monolith beats any poorly implemented microservice architecture for the same domain problem any time.

If you cannot separate the domain problem correctly in microservice architecture, be prepared for the worst.


Gotta agree with you - but my point was 'low-code garbage' - this is what happens in enterprises if a monolith is the goal; if you rule out the monolith, then you aren't suffering with Pega, Appian, Salesforce, etc. Monoliths are great, but they are also an open manhole you need to close in enterprise development.


A giant shithole is what you get when you create a monolith that can fit into many business processes / cases. One additional case to handle introduces much more complexity to the application.


For sure. Next, after selecting a low-code monolithic stack, we are informed that we can reuse the low-code solution, which is like doubling down on the monolithic approach. But of course, with 30 consultants.


Isn't a modern Linux system a bunch of microservices?


Not really, it's a bunch of tightly connected non-networked components (the OS and userland stack--even with dbus increasingly in the mix, the reliability/failure domains are really different than those of web services).


Another great one is "Service Consolidation": when you notice that some of your microservices spend most of their time chattering with each other, it's time to clump them together into a regular old service.


I've spent a lot of time over the past couple of months thinking about how it would be great to move to a microservice architecture to help with the "ball of mud" our codebase has turned into. The benefits seem really great, but it's such a daunting hill to climb. You can't just have a couple of different services and boom, everything works. You need to worry about consistency, efficiency (e.g. no database joins), and so many other things. Then there's the question of how you handle things that used to be simple - e.g. a POST request that used to return an id now just returns 202 Accepted and that's it, since you have event sourcing and the instance is eventually created.

In the end, I wonder how many companies really need microservices. What if boundaries between apps were actually enforced?


You can still have isolation of responsibilities between modules, like "this module only reads and writes to that database schema". Modules combined with appropriate glue then make a service that is "fat" in responsibilities but lean in network latencies. Code reviews that also check that module separation is maintained really help.


I'd go a lot further and say you should invest a ton in enforcing module boundaries automatically. Shopify's approach of writing their own package manager comes to mind - you cannot rely on code review to keep code clean. Entropy will naturally lead to boundaries breaking down. If you want the separation of microservices without a network to enforce it, you need robust tooling to enforce it.
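
One cheap way to get some of that enforcement without a custom package manager is a test that fails the build when a module imports something it shouldn't. A rough sketch in Go (the module paths and the single rule are made up for illustration):

    package boundaries_test

    import (
        "go/parser"
        "go/token"
        "path/filepath"
        "strings"
        "testing"
    )

    // Hypothetical rule: billing must not reach into the orders internals.
    var forbidden = map[string][]string{
        "billing": {"example.com/app/orders/internal"},
    }

    func TestModuleBoundaries(t *testing.T) {
        fset := token.NewFileSet()
        for module, banned := range forbidden {
            files, err := filepath.Glob(filepath.Join(module, "*.go"))
            if err != nil {
                t.Fatal(err)
            }
            for _, file := range files {
                f, err := parser.ParseFile(fset, file, nil, parser.ImportsOnly)
                if err != nil {
                    t.Fatal(err)
                }
                for _, imp := range f.Imports {
                    path := strings.Trim(imp.Path.Value, `"`)
                    for _, b := range banned {
                        if strings.HasPrefix(path, b) {
                            t.Errorf("%s imports %s, which crosses a module boundary", file, path)
                        }
                    }
                }
            }
        }
    }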


In Java land, I have found that multi-project Gradle builds and copious tests have gotten us pretty far without needing to resort to a custom package manager.


multi project gradle builds are a god-send


Do you have any links re: that Shopify package manager? Google seems to confuse software and postal wrapping...



> efficiency (e.g. no database joins)

This is WRONG. I was on a team that somehow thought they could make decent software without using PostgreSQL as-is. They were wrong: how many devs know how to implement, CORRECTLY, ACID?

ALL OF ACID?

Then if you "remove", for "efficiency", the single piece of software that ACTUALLY implements it correctly, what exactly do you expect?

Well, they ended up running in circles, avoiding ACID, being very "efficient" running 12 Docker images (!) and not implementing actual features. I left because I pointed this out and was deemed not a good fit.

With respect to microservices and all their past selves (the n-tier variations), the best advice is:

Use an RDBMS, properly.

Maybe add a cache such as Redis.

End.

This is all you need for the vast majority of apps. And it will scale, and it will perform very well. If it does NOT scale and does not perform well, then it is far MORE likely that the app code is wrong than that PostgreSQL/Redis is wrong.
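
For illustration, "use an RDBMS properly" mostly means letting the database do the ACID part instead of re-implementing it across services. A minimal sketch with database/sql (table and column names are made up; the Postgres driver import is an assumption):

    package main

    import (
        "context"
        "database/sql"

        _ "github.com/lib/pq" // Postgres driver, assumed
    )

    // transfer moves money between two accounts. Atomicity, isolation and
    // durability come from Postgres, not from application code.
    func transfer(ctx context.Context, db *sql.DB, from, to, amount int64) error {
        tx, err := db.BeginTx(ctx, &sql.TxOptions{Isolation: sql.LevelSerializable})
        if err != nil {
            return err
        }
        defer tx.Rollback() // no-op if we commit

        if _, err := tx.ExecContext(ctx,
            `UPDATE accounts SET balance = balance - $1 WHERE id = $2`, amount, from); err != nil {
            return err
        }
        if _, err := tx.ExecContext(ctx,
            `UPDATE accounts SET balance = balance + $1 WHERE id = $2`, amount, to); err != nil {
            return err
        }
        return tx.Commit()
    }

Split those two accounts across two "microservices" and you get to rebuild all of that yourself, badly.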


I think it's not really microservices you pine for, but it's being able to untangle said ball of mud into discrete and - more importantly - manageable codebases.

100K LOC is hard to work with, but 10K is manageable.

I do think that's why developers idealize the microservices architecture; they don't want to have to keep 100K LOC in their head, or work with 10 other developers; they want to tend to their own 10K LOC garden, they want to isolate a problem space and focus on that, instead of the big picture which is too much for any one person to deal with.


I absolutely agree, and that's delivered by simple modular architecture over well designed interfaces.

Adding runtime networking and server management complexity to that is a completely different thing, though!


You're absolutely right.


Exactly this. Even if I set aside all the complexity of a distributed architecture and the tons of new failure conditions, what about the efficiencies lost in the conversion to microservices? I have quite a few core objects/tables that everyone uses, that anyone can join against or use in their ORM mapping. What happens when they are all in separate schemas and microservices?


Have a look at "Working Effectively with Legacy Code" by Michael Feathers, I think it was.

The book is a bit old, but it's about process, not fad technology, and its age has not affected it at all.

It describes the accurate and correct process to fix the problem you describe.


Circuit breaker is a design pattern, not an architecture pattern.

The event sourcing part is badly written. I'm guessing that one is supposed to look at a separate event log instead of querying the data directly.

I'm not understanding the Sidecar pattern either. Does L4/L7 imply L4, 5, 6 and 7, or just L4 (transport layer) and L7 (application layer)? Does sidecar then simply mean separating application logic from cross-cutting concerns (communication, security, logging, etc.)?


I've only ever seen the sidecar pattern described with containers. https://docs.microsoft.com/en-us/azure/architecture/patterns...


Yes, basically as you guessed. Applications receive input, manage local resources, and produce output. The sidecar proxy is their interface with the world. The proxy handles cross-cutting infrastructure concerns: service discovery, federation, distributed tracing, and so on.

Blog posts about Envoy proxy as a sidecar in a service mesh context should offer plenty of examples.
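
A stripped-down sketch of the shape of it, if that helps: the application listens on localhost and knows nothing about tracing or logging; the sidecar does. (The ports and the X-Request-Id header are just assumptions here; a real mesh would use Envoy or similar rather than hand-rolled code.)

    package main

    import (
        "crypto/rand"
        "encoding/hex"
        "log"
        "net/http"
        "net/http/httputil"
        "net/url"
        "time"
    )

    func requestID() string {
        b := make([]byte, 8)
        rand.Read(b)
        return hex.EncodeToString(b)
    }

    func main() {
        // The actual application, unaware of the mesh.
        app, err := url.Parse("http://127.0.0.1:8080")
        if err != nil {
            log.Fatal(err)
        }
        proxy := httputil.NewSingleHostReverseProxy(app)

        handler := http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
            id := r.Header.Get("X-Request-Id")
            if id == "" {
                id = requestID()
            }
            r.Header.Set("X-Request-Id", id) // cross-cutting concern: trace propagation
            start := time.Now()
            proxy.ServeHTTP(w, r)
            // Cross-cutting concern: access logging, done once, outside the app.
            log.Printf("id=%s method=%s path=%s took=%s", id, r.Method, r.URL.Path, time.Since(start))
        })

        // The sidecar is what the rest of the world talks to.
        log.Fatal(http.ListenAndServe(":9090", handler))
    }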


What the article calls CQRS is just read replication (still very valid, and common); it's just not called CQRS. CQRS is about actually having different data representations on the command and query sides, and is an orthogonal concept to read replicas.

source: https://www.martinfowler.com/bliki/CQRS.html
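
To illustrate the distinction: in CQRS the read side is a different model entirely, built from events (or some other sync mechanism), not a byte-for-byte replica. A toy sketch, with all names made up:

    package main

    import "fmt"

    // Write side: commands append events.
    type OrderPlaced struct {
        OrderID  string
        Customer string
        Total    int
    }

    type WriteModel struct {
        events []OrderPlaced
    }

    func (w *WriteModel) PlaceOrder(id, customer string, total int) OrderPlaced {
        ev := OrderPlaced{OrderID: id, Customer: customer, Total: total}
        w.events = append(w.events, ev) // persist/append the event
        return ev
    }

    // Read side: a query-optimised projection whose shape has nothing to do
    // with how the write side stores orders.
    type CustomerTotals map[string]int

    func (c CustomerTotals) Apply(ev OrderPlaced) {
        c[ev.Customer] += ev.Total
    }

    func main() {
        write := &WriteModel{}
        read := CustomerTotals{}

        read.Apply(write.PlaceOrder("o-1", "alice", 30))
        read.Apply(write.PlaceOrder("o-2", "alice", 12))

        fmt.Println(read["alice"]) // 42, answered from the read model alone
    }

A read replica, by contrast, has exactly the same schema as the primary; only the traffic is split.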


For another reference on CQRS that goes deeper than Mr Fowler's wonderful wiki: Udi Dahan did a lot of practical work, and then education, regarding CQRS systems.


I used to write articles like this. And I used to relish the details of building distributed systems. No more. The reality is that the monolith everyone loves to rage about as the grand opposition to microservices is actually built on an operating system that abstracts away the details programmers before us had to deal with. And I imagine they too talked of the tradeoffs between design patterns. If an architecture and operating environment promotes distributed systems development, then these problems go away and you actually find it's a fundamentally better experience. Unix pipes were written in a single day and they became the thing that redefined how programs were written. Imagine if all your favourite tools like ls, grep, and ps were just one monolith. Holy heck, no.

I think microservices are waiting for the distributed operating system and development environment that makes life as easy as Unix did with C programming and bash.

All these things in the article? You shouldn't have to care about them.


I very much agree. Have you seen Unison, the language written by a prominent Scala author? Do you think that it provides enough abstractions for distributing processing to push it over the "tipping point"?

https://www.unisonweb.org/


First I'm seeing of unison, will have a look thanks. My primary language became Go when that came out, and I don't foresee myself moving to another language until there's a paradigm shift. Meaning, I think the majority of services we need will be written in Go and be exposed as very standard consumable APIs much like unix tools were with text. The shift beyond that, I think is to a language that understands a live environment of these services as opposed to yet another language that looks to replace existing languages.

Basically I think Go is the last language needed on the backend for the way we write software today, much as C reached its pinnacle 10-15 years ago. C programmers moved to Java or dynamic scripting languages for different platforms. Go dominates the Cloud, and I think the platforms we program for beyond that might be something higher level. Maybe voice. Maybe something visual, who knows.


> Imagine all your favourite tools like ls, grep, ps we're just one monolith

Like BusyBox? :-)


> Many modern-day applications need to be built at an enterprise scale, sometimes even at an internet scale.

I would argue the contrary: most applications don't need to be built to scale. Building distributed systems (which is what scaling is about) costs a lot of effort and therefore money. Throwing more hardware at it and traditional optimization are enough for 99% of projects (and you only do that after a project turns out to be taking off).


Yes - modern-day scaling is generally of aspirations rather than applications!


I am just shy of 10 years of professional experience now and I have learned one very important lesson:

You can safely ignore the opinion of anyone/anything that claims there's a "golden solution".

That goes for ALL aspects of programming. EVERYTHING has a tradeoff, and it concerns me how often that pro vs con analysis is just never done.

I've worked in all the flavors here:

* Well designed, easy to use interfaces in a massive monolith

* Poorly designed monoliths with litanies of gotchas and edge cases

* Super fast microservices that "just work"

* Garbage applications tightly coupled across a network layer

What it all comes down to is the strength and ability of the design phase. Good engineers = good products.

This shouldn't need to be said but the comments also appear to sidestep that problem with a few proclaiming "golden solutions".

Be wary :)


I think one of the more useful ones not mentioned would be idempotency. That is, ensure that for any given message, processing it multiple times will be at least tolerated, but ideally yield identical state. This eases so many complexities of dealing with failures (it's always safe to retry, even if the recipient may have processed your first attempt), coherency, etc.
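
A minimal sketch of what that looks like in a consumer, assuming messages carry a stable ID (the in-memory set stands in for something durable, like a unique-keyed table):

    package main

    import (
        "fmt"
        "sync"
    )

    type Message struct {
        ID   string
        Body string
    }

    type Consumer struct {
        mu        sync.Mutex
        processed map[string]bool
    }

    func NewConsumer() *Consumer {
        return &Consumer{processed: make(map[string]bool)}
    }

    func (c *Consumer) Handle(m Message) {
        c.mu.Lock()
        if c.processed[m.ID] {
            c.mu.Unlock()
            return // already applied; a duplicate delivery is harmless
        }
        c.processed[m.ID] = true
        c.mu.Unlock()

        // Apply the effect at most once per message ID.
        fmt.Println("processing", m.ID, m.Body)
    }

    func main() {
        c := NewConsumer()
        msg := Message{ID: "evt-1", Body: "charge card"}
        c.Handle(msg)
        c.Handle(msg) // a retry after an ambiguous failure is now a no-op
    }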




The "Strangler" pattern _is_ the facade pattern?


Yes I would largely agree.

One useful distinction is about the scale or context of applying the base idea - Facade is typically a code class level pattern, whereas Strangler as described here is an application or service pattern.

At a scale in the middle is the DTO pattern, representing a facade on the data objects internal/external to a service.



