|
Do not think of Azure as an alternative to your development computer.
It is an alternative to the server (farm) hosting your application.
And while Azure and AWS can run virtual machines, that is an outdated way of deploying software, and you will see limited benefit from running plain VMs in Azure.
Docker starts to bring some benefits (especially if you need to run on-prem as well), but a "cloud native" architecture, where you use Azure storage solutions, authentication, serverless, a message bus, and so on, is where you see the full benefit.
|
|
|
|
|
Hey, can you elaborate a bit on how Docker gives benefits when running on-premises? I use Docker a lot in my dev environment, but not for deploying to production.
|
|
|
|
|
It tends to eliminate/minimize many environmental/configurational concerns.
The segregation of containerization means that environment/configuration changes for system A less often unexpectedly and negatively impact system B. For example, Windows' hosts file: outside containers, it is shared by everything sitting on that machine. With containers, each has its own, and it is already set up as part of creating the container. In the same vein, you have less scripting/manual configuration of the hardware/OS on the deployment target, as most of that stuff would/should live in the containers.
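To make that concrete, here is a minimal sketch (service names, image tags, and addresses are all made up): with Compose, each service can carry its own hosts-file entries and environment, so reconfiguring one service cannot bleed into another.

```yaml
# docker-compose.yml (illustrative; all names are placeholders)
services:
  system-a:
    image: mycompany/system-a:latest
    extra_hosts:
      # Written into THIS container's /etc/hosts only
      - "legacy-api.internal:10.0.0.5"
    environment:
      - FEATURE_FLAG=on
  system-b:
    image: mycompany/system-b:latest
    # No extra_hosts here: system-b never sees system-a's hosts entry,
    # unlike a shared Windows hosts file on the machine itself.
```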
|
|
|
|
|
Thanks very much for your informative answer. So, specifically on the orchestration/management of these containers on, let's say, a Windows VM running in the customer's datacentre: wouldn't you also need to deploy Kubernetes or Red Hat OpenShift to run these containers in the wild? Doesn't that add a level of complexity that a standard deployment doesn't have?
|
|
|
|
|
You pretty much need Kubernetes etc for the same features you would need for more advanced VM management (i.e. anything above "start a VM on this server"). If all you need is "start this program on this machine", all you need is "docker run" or maybe "docker-compose up" with containers.
I think a lot of the reason VMs are considered "simple" is that most developers are just given one, managed on infrastructure that is "other people's problem". So it is not really an apples-to-apples comparison.
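For the simple case described above, the whole "deployment" can be a single Compose file (image name and port are placeholders):

```yaml
# docker-compose.yml - "start this program on this machine", nothing more.
services:
  myapp:
    image: mycompany/myapp:1.0   # hypothetical image
    restart: unless-stopped      # survive reboots without any orchestrator
    ports:
      - "8080:80"                # host port 8080 -> container port 80
```

Then `docker-compose up -d` is the entire deployment procedure; Kubernetes only enters the picture once you need multi-node scheduling, rolling upgrades, or self-healing across machines.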
|
|
|
|
|
I'd say that using those things does simplify scaling and deployment.
It may well be that many systems would benefit from that simplification, though you could always "roll your own".
Certainly for some though, you're not going to need more than a single container instance somewhere.
For example, maybe you use containers so developers can easily pass around builds to host locally so that the code they are really focused on is able to hit those locally hosted containers as dependencies.
It is maybe especially beneficial when both sides of the fence are being worked on: "A" depends on "B", but you are actively altering both. As you and others make changes, it isn't just the repos that change; the containers coming out of the build pipelines reflect those changes too. The biggest benefit comes when those containers are the artifacts that eventually hit testing/production environments. But there would still be benefit even if your "real environment instances" were hosted on cloud instances, or were not containerized at all. Provided you enforce that changes to a container are also reflected in the environments it represents, you still eliminate a bunch of "works on my machine".
In simpler use cases, it might be that orchestration tooling complicates things more than simplifies them.
|
|
|
|
|
Besides the configuration management mentioned already, a container also consumes fewer resources than a VM.
You typically have many applications running on a host. With VMs, that means a lot of memory consumed spinning up multiple copies of the kernel and various background services. This does not happen with Docker: containers only use the memory their processes actually need.
You can also see less disk usage, as base image layers are shared - though how much depends a lot on the containers you run; if you run a bunch of images from different sources, there may not be many shared layers.
And in the case of autoscale, the time to start a new instance of a container is typically measured in seconds (as long as you have a machine ready to run it). If you have a single application on-prem this is not a big benefit, as you would need the hosts anyway, but in large environments or clouds it can make a difference. And if you have an app that has high load for a short time (for example once a month), it can suddenly be a lot cheaper to be in the cloud and only pay for what you use.
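To illustrate the layer-sharing point: when several images are built from the same base, the base layers are stored on the host only once. A sketch (paths and project names are placeholders):

```dockerfile
# app-a/Dockerfile and app-b/Dockerfile can both start from this same base.
# The aspnet:8.0 layers are downloaded and stored once on the host;
# each application image only adds its own thin layer on top.
FROM mcr.microsoft.com/dotnet/aspnet:8.0
COPY publish/ /app
ENTRYPOINT ["dotnet", "/app/App.dll"]
```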
|
|
|
|
|
"cloud native" can also be much more expensive than a VM.
|
|
|
|
|
Yep. You need to do a price/benefit calculation. If you need nothing from the cloud and it is old software - keep it in a VM. If it is new software and doesn't need the benefits offered by "cloud native", use Docker. Or Docker + some cloud services.
We are stuck with customers that are not exactly pushing the technological limits (and I thought banks were bad), so we are now starting to build in Docker containers (so we have a predictable and source-controlled build environment - and easy reproduction on a local dev box), then pull the artifacts out for the dinosaurs to run in their IIS, while we can spin up a Docker image if we have to.
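A minimal sketch of that "build in a container, pull the artifacts out" flow (SDK version, project name, and paths are assumptions):

```dockerfile
# The toolchain version is pinned here, not on the build server, so every
# build (CI or a local dev box) uses an identical, source-controlled environment.
FROM mcr.microsoft.com/dotnet/sdk:8.0 AS build
WORKDIR /src
COPY . .
RUN dotnet publish MyApp.csproj -c Release -o /out
# /out now holds the published artifacts.
```

The artifacts can then be extracted (for example with `docker create` followed by `docker cp`) and handed to a conventional IIS deployment, or copied into a runtime image for Docker hosting.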
|
|
|
|
|
So I see Docker is important after all. My problem with cost is trying to put 27 applications using 30 databases into one database and one build image to avoid the per-database charge. Also, I need to know how to feed our punched cards into Docker.
|
|
|
|
|
I would stop for a moment and think before calling stuff "outdated".
For a lot of workloads/systems, a VM is a perfectly good solution.
Running a VM or two at an IaaS provider is much cheaper than having to pay Azure bills.
|
|
|
|
|
I have no hesitation calling VMs outdated. It can still be a perfectly good solution for running legacy software - where the rewrite would cost more than the benefits. Even for new software we do have to support it as some of our customers have very valid reasons to self host, and... well... let's just say it takes way more than 10 years for them to introduce something new. I guess we should just be happy they have VMs and are not carrying machines around.
Docker is a reasonable solution that gives some benefit no matter if you run in your own data-center, IaaS, or Azure/AWS etc, but of course the software needs to be designed for it.
More cloud native (using more cloud services) will give Azure (or AWS etc) additional benefits that IaaS and self-hosting just can't deliver. If those benefits are worth anything to your project is of course up to your own "finger in the air" cost/benefit calculations.
Spend 100 a month on an Azure resource and they will ask "do we still need this". Spend 1000 a month in salaries to build and maintain your own autoscale, failover, configuration management, ... and they will say "we are investing in our product".
|
|
|
|
|
Oh autoscale.
I can run a lot of users on a single VM and don't need auto scale.
We have high touch sales process and we do B2B.
We also do "web applications", so we are doing a lot of writes, and "eventual consistency" is not good enough: people expect to read back what they write right away.
And we don't have that many reads on the database; we don't have Twitter problems, like Shakira tweeting something and it being pushed out to all her followers.
Auto scale and other stuff seem like killer feature if you serve lots of content and have a lot of reads.
But if you write to the database and the database is the bottleneck, auto-scaling the application would create more problems than it solves.
So yeah, that is the part I do not understand - everyone is scaling application instances, but you have to write to the database anyway, and for my workloads it does not even make sense.
|
|
|
|
|
Of course everyone does not need autoscale. It is not a killer feature. Just a requirement in some use cases and not in others... as basically everything else in software development.
The only use case I can see where containers might be a bit more complicated than running a VM is running multiple web sites on a single box. IIS makes this easy. With Docker, each site will be on its own port, so you need something in front to listen on 443 and do the HTTPS termination for the containers. This means something like NGINX or Traefik. Not rocket science, but definitely an area that is still too complex. Traefik being able to automatically pick up containers as they are spun up and serve them is a good step, though there is still a bit to wrap your head around.
Besides that... hard to beat:
docker-compose up.
If I had to run a modern application in a VM, I would install Docker in that VM.
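A sketch of that Traefik setup (domain, image names, and TLS details are illustrative; a real setup would also configure certificates, e.g. via Let's Encrypt): Traefik listens on 443 and discovers containers to route to via their labels.

```yaml
# docker-compose.yml - one reverse proxy in front of several sites (placeholders)
services:
  traefik:
    image: traefik:v3.0
    command:
      - "--providers.docker=true"
      - "--providers.docker.exposedbydefault=false"
      - "--entrypoints.websecure.address=:443"
    ports:
      - "443:443"
    volumes:
      # Read-only access to the Docker socket, so Traefik can watch containers
      - /var/run/docker.sock:/var/run/docker.sock:ro
  site-a:
    image: mycompany/site-a:latest   # hypothetical
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.site-a.rule=Host(`a.example.com`)"
      - "traefik.http.routers.site-a.entrypoints=websecure"
      - "traefik.http.routers.site-a.tls=true"
```

Adding another site is just another service with its own `Host(...)` label; Traefik picks it up as the container starts, with no proxy restart.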
|
|
|
|
|
Because your employer says that you must?
|
|
|
|
|
|
|
Azure has multiple hosts, so redundancy and theoretically more uptime.
Did I mention I'm a silverbug? Ya reckon I trust Amazon not to lock me out?
Don't introduce useless dependencies. Ever.
|
|
|
|
|
|
It's not a one-size-fits-all solution. If you have no use for it (and not everybody does), don't force a square peg in a round hole.
And there's nothing wrong with that.
And don't let marketing tell you otherwise.
|
|
|
|
|
I'll give you a real world example, which was actually my first experience and introduction to Azure.
To obfuscate enough to avoid self-promotion, I'll simply refer to the product as MWC, a .net web-based timeclock system consisting of 4 basic pieces:
0: A small app for clocking in/out designed for touchscreen. Usage for this customer was 60 sites and an average of 5 workstations per site, so a potential for 300+ simultaneous connections. Remember that this is a timeclock, so inputs will be fairly concentrated and predictable.
1: A management portal. Usage in this case was also 60 sites plus several admin workstations. Patterns of usage are again predictable...concentrated in the morning and afternoon.
2: SQL Database
3: SQL Server Reports - This one became necessary due to Azure webapp restrictions plus the tight deadline for the project. (I had a co-worker who couldn't help code, but could do SSRS reports)
So here you have a couple of related web applications that have vastly different needs (really 3 if you count the reports delivered via the SSRS web portal). Then you have the database, which obviously needs connections to everything else. Self-hosting was my first thought (everything's local, fast, and under my thumb), but it was out of the question. My network reliability is OK, but not good enough. Considering a web host, I went shopping. At the time (probably still, not sure), MS had a free 90-day trial period, so I gave it a shot. Here's how it ended up.
The tiny workhorse web app debuted in the Shared Tier but was quickly bumped to Standard. Cost at the time: est: $50/mth.
The management portal web app started (and stayed) in the Free Tier.
The database tier was S1. I don't remember exactly, but at the time it was around $30/mth.
All told, it was around $90 a month for everything. I realize I could have gone with a cheap php/mysql host but I swore off PHP a project before and even though the prototype was already written in php, I was scrapping it anyway.
To summarize, Azure gives you a lot of flexibility with regard to scaling any component either up or out, SSL was automatic on the web apps, and it just worked. In the end, I found an even better and cheaper alternative to the original setup: an Azure VM that now runs 2 of these MWC systems plus another half-dozen web apps for other customers. I've got SSRS set up to serve out reports (the ones that haven't been ported to DevExpress yet). With the VM, you have complete control of the environment, just like self-hosting. Maintenance is a breeze: just RDC in. If you do go for Azure, especially a VM, you can save a lot of money by paying up front. I paid for (reserved) 3 years up front (at $22.86/month), so it winds up being around $20/month now. I should say that it's not a beast (Xeon E5 @2.3GHz and 8GB on Server 2016), but watching Performance during peak hours shows that it's hardly breaking a sweat. Damn, I've got work to do! Good luck Eddie!
|
|
|
|
|
There are times when the data you are dealing with needs to be accessed by others outside of your physical location. Consider the following real-life example.

We generate hundreds of thousands of food inspection documents each year, with inspections performed using software that we design and maintain. We use C# for coding our software and SQL to store our data. Other states also use our software and produce similar amounts of documents. The USDA wanted to be able to view the documents and associated data without having to contact each state to get the documents they wanted at any given time. Hosting the data at our main HQ was not a viable solution. Instead, we used Azure to host the data. We make connections from our local on-site software to load the documents and their inspection data into the SQL database in Azure daily. We also have a password-protected web-based interface where authorized users can pull up desired documents, print them out, etc.

This was our first truly cloud-based project, so there was a lot of learning involved. I suggest that you do NOT attempt to buy a domain through Azure. We could never get the domain name purchase to go through, so we wound up purchasing elsewhere and then pointing to Azure through custom DNS records. Setting up the custom DNS records and getting everything to work correctly was somewhat complex (because an Azure SSL certificate was also involved), but we eventually wound up with a solution that is a perfect fit for our needs.
|
|
|
|
|
Because you don't need as much ram as Mauve?
|
|
|
|
|
I prefer it for my customers.
Here's my situation, a customer wants an application where they can do A, B and C.
I always propose a web application, as they can easily access it at work, from home, on the road, on their PC, tablet, phone, etc.
In fact, the access on the phone and on the road has been a big issue for some of my clients, so web is a totally valid solution.
However, they're just small companies that do not have servers or an IT department.
Using Azure allows me to run a web application with barely any input from my clients for about €60 a month.
I hook it up to Azure DevOps and I have fully automated build and release pipelines in minutes.
I get a SQL Server database for another €5 a month.
Having some additional services costs nothing extra as they can go on the €60 plan.
My clients do not need to buy a server, they do not need an additional IT-person, they do not need updates or whatever.
I can do it all for them and I don't need physical access to anything, nor VPN or what have you.
The €60 a month isn't an issue for my customers and it gives a lot of ease and flexibility.
Other situations include hyperscale, or usage that is very spiky, whether planned or unplanned.
For example, I have a customer that has two jobs a day for 12 administrations and a couple of agents, so around 40 jobs in total, but it's a pretty intensive job that can take up to ten minutes (depending on the administration and agent, some only take seconds).
I've chosen an Azure Function, which is serverless (and so "free" if you're not using it) and it just runs about ten minutes a day.
Due to the scaling nature of Functions, it "decides" how many instances to run and within about ten minutes all jobs are done.
That same concept would work for jobs that could trigger at any time, but also couldn't trigger for days.
Do you really want to buy a server for those 10 minutes a day!?
There are plenty of use cases for the cloud, or Azure in particular.
Of course you do need a stable and preferably fast internet connection, but most businesses are already heavily dependent on internet anyway.
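The timer-triggered Function described above boils down to a small binding configuration (the binding name and schedule below are illustrative; Azure Functions timer schedules use six-field NCRONTAB expressions, this one firing daily at 06:00 and 18:00 to match "two jobs a day"):

```json
{
  "bindings": [
    {
      "name": "dailyJobsTimer",
      "type": "timerTrigger",
      "direction": "in",
      "schedule": "0 0 6,18 * * *"
    }
  ]
}
```

On the consumption plan you pay per execution and duration, so ten minutes of work a day costs close to nothing, and the platform decides how many instances to fan out.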
|
|
|
|
|
I don't know why people like it.
It's like they took a perfectly good language (C) and pythonized it.
No, thankfully it doesn't use significant whitespace, but I get the impression that whoever designed Rust actually hates the C language family and wants bad things to happen to them.
You can keep this nonsense.
If I wanted training wheels, I'd use VB.
Joy. Yet another language I get to learn enough of simply to port things away into a proper language.
Edited to add: anybody who designs a grammar with the construction "fn main()" needs to have their compiler taken away and be forced to use scripting languages until they can prove to the world that they can use context-free grammars properly.
|
|
|
|
|