Then why are you thinking about architecture patterns? Get your business plan and requirements sorted, and THEN start looking into what platform(s) you will service and the different patterns that will meet your requirements.
Never underestimate the power of human stupidity -
I'm old. I know stuff - JSOP
Many patterns, such as those in the GoF book, aren't "architectural". They're more tactical, although they can help to make an architecture better.
Jim Coplien, who was well-known in the C++ world, got involved with patterns early on. Later, he said that there was too much focus on patterns. The emphasis needs to be on encapsulation, polymorphism, and inheritance. Patterns are the exception when those things, by themselves, don't produce loosely coupled code.
To understand a pattern, you need to have seen what code looked like before the pattern was applied, and after. Then, when you write or find code that looks like the "before" code, you'll know how the pattern can improve it. That's usually how a pattern gets used. How code evolves to satisfy its specifications is what determines which patterns are used; it is wrong to start out by saying which patterns will be used before you have a good sense of the high-level design.
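As a concrete, entirely made-up before/after sketch (none of these names come from any post here), this is the kind of conditional the Strategy pattern is usually introduced to replace:

```java
// Hypothetical "before": the variation lives in a conditional.
class PriceBefore {
    static long discounted(long cents, String customerType) {
        if (customerType.equals("member")) return cents * 90 / 100;
        if (customerType.equals("vip"))    return cents * 75 / 100;
        return cents;
    }
}

// "After": the Strategy pattern puts each variant behind one interface,
// so callers can supply new rules without editing existing code.
interface Discount { long apply(long cents); }

class MemberDiscount implements Discount { public long apply(long c) { return c * 90 / 100; } }
class VipDiscount    implements Discount { public long apply(long c) { return c * 75 / 100; } }
class NoDiscount     implements Discount { public long apply(long c) { return c; } }

class PriceAfter {
    static long discounted(long cents, Discount d) { return d.apply(cents); }
}
```

Recognizing the "before" shape in real code is what tells you the pattern might pay off; starting from the "after" shape with no such code in hand is exactly the mistake described above.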
The most correct thing previous commenters posted is that you need to handle business requirements first.
The other correct thing is that GoF patterns are mostly tactical.
Given that, in my answer I'll assume that "architectural patterns" means things like monolith, microservices, CQRS, etc.
With that said, I'd start with a monolith application, since microservices bring a lot of overhead that you might not need until you've pitched a PoC to some investors, or might not need at all if you're not going to have a team of sufficient size or your application won't endure high loads.
Then your application will evolve according to your business requirements, and to the fact that your understanding of the business will evolve too.
Say you discover that your business problem consists of multiple subdomains. Then you can handle them via vertical slices.
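A minimal sketch of what vertical slices can look like inside a monolith, assuming two hypothetical subdomains (Billing and Shipping); each slice owns its request type and handler end to end, so it can later be extracted into a microservice without untangling shared layers:

```java
// Two hypothetical slices inside one deployable monolith. Nothing here is
// shared between Billing and Shipping except the JVM process.

// --- billing slice ---
record CreateInvoice(String customerId, long amountCents) {}

class CreateInvoiceHandler {
    String handle(CreateInvoice cmd) {
        // billing persistence and domain logic live only in this slice
        return "invoice-for-" + cmd.customerId();
    }
}

// --- shipping slice ---
record ScheduleDelivery(String orderId) {}

class ScheduleDeliveryHandler {
    String handle(ScheduleDelivery cmd) {
        return "delivery-for-" + cmd.orderId();
    }
}

public class MonolithDemo {
    public static void main(String[] args) {
        System.out.println(new CreateInvoiceHandler().handle(new CreateInvoice("c42", 999)));
        System.out.println(new ScheduleDeliveryHandler().handle(new ScheduleDelivery("o7")));
    }
}
```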
Or you may need to handle spikes in application load. Then you might need to extract some microservices.
In general, my advice is to start with the simplest solution unless you encounter requirements that demand otherwise.
Poorly considered, but it fits well within the current software engineering zeitgeist, so it makes sense that it would get written. That's the fancy way to say: "it's a fad, don't worry about it". And stop reading Uncle Bob.
"it's not flexible", not if you consider adding an extra case to be a Big Deal, but the alternative is adding an entire new class.
Even if you are of the opinion that creating new classes is somehow easier than adding new cases, spend a little time thinking about what happens when the signature of that virtual function is changed, or when something about the API the subclasses use to implement themselves changes. The fix you're about to write has the annoying property of being highly non-local: you have to "hunt down" all the places that need to change, and in the best case they correspond to compile errors.
If a requirement is broken into N cases but the code is spread non-locally over N classes, your code does not look like the requirement, which makes it harder to check whether it matches.
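A small sketch of the two shapes, using an invented shipping-cost rule (all names are mine): the switch keeps all N cases of the requirement next to each other, while the class hierarchy spreads them out, so a signature change ripples through every subclass.

```java
// All N cases of the (invented) requirement sit in one place:
enum Region { DOMESTIC, EU, OVERSEAS }

class ShippingCost {
    static long centsFor(Region r, long weightGrams) {
        switch (r) {
            case DOMESTIC: return 500;
            case EU:       return 900 + weightGrams / 100;
            case OVERSEAS: return 2500 + weightGrams / 50;
            default: throw new IllegalArgumentException("unknown region " + r);
        }
    }
}

// The polymorphic version spreads the same rule over N classes. Adding a
// parameter to centsFor now means hunting down every subclass, wherever it
// lives; at best the compiler finds them for you.
abstract class RegionCost {
    abstract long centsFor(long weightGrams);
}
class DomesticCost extends RegionCost { long centsFor(long w) { return 500; } }
class EuCost       extends RegionCost { long centsFor(long w) { return 900 + w / 100; } }
class OverseasCost extends RegionCost { long centsFor(long w) { return 2500 + w / 50; } }
```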
"it's not SOLID", maybe, but SOLID is subjective and overrated.
"horrific", let's not even talk about it.
The presented "better way" presupposes that we have a convenient instance of the "class that represents a particular reason". How did we get that? Chances are there's a switch hiding in a factory pattern or something; that's just moving the problem. And if there is some pattern such as new AddressChanged().Update(), how about we don't wrap it in a class and just call a method that does that?
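To illustrate the suspicion, here is a hypothetical sketch (the ChangeHandler and factory names are mine, loosely echoing the article's new AddressChanged().Update()): the conditional doesn't disappear, it just relocates.

```java
// The reason-specific classes, in the article's style (names hypothetical):
interface ChangeHandler { void update(); }

class AddressChanged implements ChangeHandler {
    public void update() { System.out.println("update address"); }
}
class NameChanged implements ChangeHandler {
    public void update() { System.out.println("update name"); }
}

// ...but some code still has to pick the class, so the branch survives,
// merely moved into a factory:
class ChangeHandlerFactory {
    static ChangeHandler forReason(String reason) {
        switch (reason) {
            case "address": return new AddressChanged();
            case "name":    return new NameChanged();
            default: throw new IllegalArgumentException(reason);
        }
    }
}
```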
The article forgets to point out that the only domain it considered in the first place is the business-logic domain. Even if the advice made sense there (which I don't agree with either), it won't make sense anywhere else. There's no way Math.Min would have its if replaced by polymorphism.
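For what it's worth, the Java counterpart is essentially this, and a class-per-branch version would be absurd:

```java
// Essentially how java.lang.Math.min(int, int) behaves; a plain conditional
// is the right tool, and no one would ship a class hierarchy with an
// AIsSmaller / BIsSmaller subclass per branch.
class MinSketch {
    static int min(int a, int b) {
        if (a <= b) return a;
        return b;
    }
}
```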
First, of course, writers have different goals than programmers. Always keep that in mind when reading postings on the web.
From the link:
"New requirements come along. Who would have thought? You were so sure nothing would happen"
Wrong, wrong, wrong.
Attempting to write code in case something might happen is a sure way to end up with code that is difficult to maintain.
If you already have requirements, even if they're just from a whiteboard, then your design and implementation should of course take them into account. But if you claim to be writing your code so that it will be easy to add requirements that are as yet unknown, then you are at least ignorant of the challenges of maintaining legacy code, especially if your claim is about something that might happen far in the future.
What actually ends up happening with that hubris is that code must be maintained for years or decades even though it serves no purpose. The added complexity makes it harder, not easier, to add new features.
The best you can do now for such future possibilities is to write code that is easy to understand, and to make it clear how the code you write now meets the requirements you have now. Then the programmer ten years from now who must add a feature that is actually needed will at least be able to understand what your code is doing, and what it very likely needs to continue to do even with the new feature in place.
I wouldn't label the if statement obsolete, but I've found dynamic polymorphism (hidden behind the facade of the SOLID mnemonic) to be a pretty convenient way for all the OOD keywords to find their place. And indeed, before entering the thread my guess was that the author would make the case that if..else violates the open-closed principle. My guess was right. Although, if I were the author, I'd avoid words such as "terrible" and "horrific".