|
I still use it - can't see a reason not to
"Life should not be a journey to the grave with the intention of arriving safely in a pretty and well-preserved body, but rather to skid in broadside in a cloud of smoke, thoroughly used up, totally worn out, and loudly proclaiming "Wow! What a Ride!"" - Hunter S. Thompson - RIP
|
|
|
|
|
Nope, I always roll my own.
|
|
|
|
|
Just write messages into a text file?
It turns out my case is simple. I just dump my messages into a text file whose name includes a timestamp.
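For what it's worth, that approach fits in a few lines. Here is a minimal Python sketch; the directory name, file name format, and line format are my own invention for illustration:

```python
from datetime import datetime
from pathlib import Path

# One log file per run, with the start timestamp baked into the name.
LOG_DIR = Path("logs")
LOG_DIR.mkdir(exist_ok=True)
LOG_FILE = LOG_DIR / f"app-{datetime.now():%Y%m%d-%H%M%S}.log"

def log(message: str) -> None:
    # Each line also gets its own timestamp, so the file is self-describing.
    with LOG_FILE.open("a", encoding="utf-8") as f:
        f.write(f"{datetime.now():%Y-%m-%d %H:%M:%S} {message}\n")

log("application started")
log("something happened")
```

Appending with `open(..., "a")` keeps it crash-tolerant in the simple sense: whatever was flushed before a crash is still on disk.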
diligent hands rule....
modified 23-Mar-22 13:21pm.
|
|
|
|
|
Pretty much. Maybe
I don't use logging packages (well, I did in another development life). I live in the embedded world, and for the last slate of products, we rolled our own. I won't get into writing stuff to the file system and worrying about loss of power. What I will tell you to do is to use a comma-delimited format for your logging. Think ahead of time about what information is useful to you and set up standards: error levels, associated data. But keep it all comma delimited.
Being able to suck this into Excel is priceless.
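A minimal sketch of that kind of fixed-column, comma-delimited log in Python (the column names, error codes, and file name here are my own invention, not a standard). Using the `csv` module rather than naive string joins matters: it quotes any field that itself contains a comma, so a message can't break the column alignment when you pull the file into Excel:

```python
import csv
import os
from datetime import datetime

LOG_PATH = "device_log.csv"
COLUMNS = ["timestamp", "level", "code", "message"]  # agree on these up front

def log(level: str, code: int, message: str) -> None:
    is_new = not os.path.exists(LOG_PATH)
    # newline="" is required so the csv module controls line endings itself.
    with open(LOG_PATH, "a", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        if is_new:
            writer.writerow(COLUMNS)  # header row makes the Excel import self-describing
        writer.writerow(
            [datetime.now().isoformat(timespec="seconds"), level, code, message]
        )

log("ERROR", 42, "sensor timeout, channel 3")
log("INFO", 0, "boot complete")
```

Note the embedded comma in the error message survives intact because `csv.writer` quotes that field.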
Charlie Gilley
“They who can give up essential liberty to obtain a little temporary safety deserve neither liberty nor safety.” BF, 1759
Has never been more appropriate.
|
|
|
|
|
Why would I use Azure when its speed depends on my Internet connection, and SQL Server on my computer or network is much faster (I think)?
Ed
|
|
|
|
|
Depends on what you want.
If you want a mobile DB that can be accessed by any device, any time, over a LAN, WAN, mobile or wired connection, then Azure is one way to go.
If you want security, speed, and low cost, then ... ummm ... it probably isn't ...
"I have no idea what I did, but I'm taking full credit for it." - ThisOldTony
"Common sense is so rare these days, it should be classified as a super power" - Random T-shirt
AntiTwitter: @DalekDave is now a follower!
|
|
|
|
|
Do not think of Azure as an alternative to your development computer.
It is an alternative to the server (farm) hosting your application.
And while Azure and AWS can run virtual machines, that is an outdated way of doing software deployment, and you will get limited benefit from running those in Azure.
Docker starts to present some benefits (specifically if you need to run on-prem as well), but a "cloud native" architecture, where you use Azure storage solutions, authentication, serverless, message bus, ... is where you see the full benefit.
|
|
|
|
|
Hey, can you elaborate a bit on how Docker gives benefits when running on-premises? I use Docker a lot in my dev environment but not for deploying to production.
|
|
|
|
|
It tends to eliminate/minimize many environmental/configurational concerns.
The segregation of containerization means that environment/configuration changes for system A less often unexpectedly and negatively impact system B. For example, Windows' hosts file: without containers, it is shared by everything sitting on that machine. Containers? Each has its own, and as a matter of creating the container, it's already set up. In the same vein, you have less scripting/manual configuration of the hardware/OS on the deployment target, as most of that stuff would/should live in the containers.
|
|
|
|
|
Thanks very much for your informative answer. So, specifically on the orchestration/management of these containers on, let's say, a Windows VM running in the customer's datacentre: wouldn't you also need to deploy Kubernetes or Red Hat OpenShift to run these containers in the wild? Doesn't that add a level of complexity that a standard deployment doesn't have?
|
|
|
|
|
You pretty much need Kubernetes etc. for the same features you would need for more advanced VM management (i.e. anything above "start a VM on this server"). If all you need is "start this program on this machine", then with containers all you need is "docker run" or maybe "docker-compose up".
I think a lot of the reason VMs are considered "simple" is that most developers are just given one, managed on infrastructure that is "other people's problem". So it's not really an apples-to-apples comparison.
|
|
|
|
|
I'd say that using those things does simplify scaling and deployment.
It may be true for many things that you would benefit from that simplification while you could always "roll your own".
Certainly for some though, you're not going to need more than a single container instance somewhere.
For example, maybe you use containers so developers can easily pass around builds to host locally so that the code they are really focused on is able to hit those locally hosted containers as dependencies.
It is maybe especially beneficial if both sides of the fence are being worked: "A" depends on "B" but you are actively altering both. As you and others make changes, it isn't just the repos changing but also the containers coming out of build pipelines reflecting those changes. The most benefit comes when those containers are artifacts that eventually hit testing/production environments. But there would still be benefit even if your "real environment instances" were hosted on cloud instances, or were not containerized at all. Provided you enforce that changes to a container must also be reflected in the environments it represents, you still eliminate a bunch of "works on my machine".
In simpler use cases, it might be that orchestration tooling complicates things more than simplifies them.
|
|
|
|
|
Besides the configuration management mentioned already, a container also consumes fewer resources than a VM.
You typically have many applications running on a host. With VMs, that means a lot of memory consumed spinning up multiple copies of the kernel and various background services. This does not happen with Docker: containers use only the memory their processes need.
You could also see less disk usage, as base images can be shared - though how much will depend a lot on the containers you run; if you run a bunch of images from different sources, there might not be that many shared images.
And in the case of autoscale, the time to start a new instance of a container is typically measured in seconds (as long as you have a machine ready to run it). If you have a single application on-prem, that's not a big benefit, as you would need the hosts anyway; but in large environments or clouds it can make a difference. And if you have an app that has high load for a short time (for example, once a month), it can suddenly be a lot cheaper to be in the cloud and only pay for what you use.
|
|
|
|
|
"cloud native" can also be much more expensive than a VM.
|
|
|
|
|
Yep. You need to do a price/benefit calculation. If you need nothing from the cloud and it is old software, keep it in a VM. If it is new software and doesn't need the benefits offered by "cloud native", use Docker. Or Docker + some cloud services.
We are stuck with customers that are not exactly pushing the technological limits (and I thought banks were bad), so we are now starting to build in Docker containers (so we have a predictable and source-controlled build environment, and easy reproduction on a local dev box), then pull the artifacts out for the dinosaurs to run in their IIS, while we can spin up a Docker image if we have to.
|
|
|
|
|
So I see Docker is important after all. My problem with cost is trying to put 27 applications using 30 databases into one database and build an image, to avoid the per-database charge. Also, I need to know how to feed our punched cards into Docker.
|
|
|
|
|
I would stop for a moment and think before calling stuff "outdated".
For a lot of workloads/systems, a VM is a perfectly good solution.
Running a VM or two at an IaaS provider is much cheaper than having to pay Azure bills.
|
|
|
|
|
I have no hesitation calling VMs outdated. It can still be a perfectly good solution for running legacy software - where the rewrite would cost more than the benefits. Even for new software we do have to support it as some of our customers have very valid reasons to self host, and... well... let's just say it takes way more than 10 years for them to introduce something new. I guess we should just be happy they have VMs and are not carrying machines around.
Docker is a reasonable solution that gives some benefit no matter if you run in your own data-center, IaaS, or Azure/AWS etc, but of course the software needs to be designed for it.
More cloud native (using more cloud services) will give Azure (or AWS etc) additional benefits that IaaS and self-hosting just can't deliver. If those benefits are worth anything to your project is of course up to your own "finger in the air" cost/benefit calculations.
Spend 100 a month on an Azure resource and they will ask "do we still need this". Spend 1000 a month in salaries to build and maintain your own autoscale, failover, configuration management, ... and they will say "we are investing in our product".
|
|
|
|
|
Oh autoscale.
I can run a lot of users on a single VM and don't need autoscale.
We have a high-touch sales process and we do B2B.
We also do "web applications", so we are doing a lot of writes, and "eventual consistency" is not good enough; people expect to read back what they write right away.
We don't have that many reads on the database either; we don't have Twitter problems like Shakira tweeting something and pushing it to her followers.
Autoscale and other such stuff seem like killer features if you serve lots of content and have a lot of reads.
But if you write to the database and the database is the bottleneck, auto-scaling the application would create more problems than it solves.
So yeah, that is the part I do not understand: everyone is scaling application instances, but you have to write to the database anyway, and for my workloads it does not even make sense.
|
|
|
|
|
Of course, not everyone needs autoscale. It is not a killer feature. It is just a requirement in some use cases and not in others... like basically everything else in software development.
The only use case I can see where containers might be a bit more complicated than running a VM is running multiple web sites on a single box. IIS makes this easy. With Docker, each site will be on its own port, so you need something in front to listen on 443 and do the HTTPS for the containers. This means something like NGINX or Traefik. Not rocket science, but definitely an area that is still too complex. Traefik being able to automatically pick up containers as they are spun up and serve them is a good step, though, but still a bit to wrap your head around.
Besides that... hard to beat:
docker-compose up.
If I had to run a modern application in a VM, I would install Docker in that VM.
|
|
|
|
|
Because your employer says that you must?
|
|
|
|
|
|
|
Azure has multiple hosts, so redundancy and theoretically more uptime.
Did I mention I'm a silverbug? Ya reckon I trust Amazon not to lock me out?
Don't introduce useless dependencies. Ever.
Bastard Programmer from Hell
"If you just follow the bacon Eddy, wherever it leads you, then you won't have to think about politics." -- Some Bell.
|
|
|
|
|