|
It depends on what you want to achieve. For me it's currently of no use, since it doesn't support GUI applications. For services and console applications it looks good so far.
Rules for the FOSW[^]
if (!string.IsNullOrWhiteSpace(_signature))
{
    MessageBox.Show("This is my signature: " + Environment.NewLine + _signature);
}
else
{
    MessageBox.Show("404 - Signature not found");
}
|
If you include HTTP/HTML in your "GUI" concept, then Docker can handle GUIs. Quite a few Dockerized applications provide a user interface of mice and menus, like any other web application.
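Exposing a containerized web UI is a one-liner; for instance, with the stock nginx image:
docker run -d -p 8080:80 nginx
# the container's web server is now a "GUI" at http://localhost:8080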
Anything that goes over IP will work. I guess you could even do X11 (remember X11?), although I've never heard of anyone doing that.
Anything non-IP will give you problems, though, whether user I/O or other I/O. You can't plug a USB device into a container. Or some instrumentation interface. Or physical interfaces like I2C/SPI. Or even a serial port.
Some people are trying to tunnel USB over IP: your Dockerized application is given a driver API stub that marshals all the parameters into an IP packet and forwards it to a machine "out there in the free world", which unwraps the IP packet and feeds the parameters into a real physical interface. This is not provided as a basic mechanism; consider it a somewhat experimental hack, which may cause some problems (e.g. far higher latency than you'd experience with direct physical access). In principle, you could similarly tunnel any protocol over IP (hey, that's exactly what RFC 791 describes as its primary purpose!), but the only such effort I am aware of is with USB.
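For the curious, the Linux usbip tooling is one such effort. A rough sketch of how it's driven (the bus ID and hostname here are made up):
# on the Linux box that holds the physical USB device:
modprobe usbip-host && usbipd -D    # load the export driver, start the daemon
usbip list -l                       # find the device's bus ID
usbip bind -b 1-1                   # export the device at bus ID 1-1
# on the consuming side:
modprobe vhci-hcd                   # virtual host controller that receives the device
usbip attach -r usb-host.example.com -b 1-1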
The only "standard" alternative to IP is that you can mount a host file system in a running container, one or more files in that file system being pipes. The "external" end of the pipe may be whatever that works in a non-Dockerized world.
As a main rule, any tunneling-over-IP or pipe solution requires a general-purpose machine on the outside. The USB solution I know of requires it to be a Linux machine. I guess it could be the same machine that hosts the Docker engine, if that is a Linux box. If you have to set up another Linux box just to hold your physical USB interface, then your gain from Dockerizing becomes somewhat limited.
The IP tunneling means that you have the freedom to place the physical interface anywhere in the (internet) world, as long as the proper software to handle the IP communication is available, but I guess latency could be a significant problem for e.g. trans-Atlantic USB connections.
I would not advocate any such tunneling solution. I think use of Docker should be limited to pure processing work with only "primitive" I/O requirements, or to plain web applications running HTTP/HTML.
|
Yeah, it's important. Containers will (hopefully) replace VMs as the virtualized environment of choice.
"Never attribute to malice that which can be explained by stupidity."
- Hanlon's Razor
|
Nathan Minier wrote: replace
Complement VMs. FTFY...
These are very different ideas with very different capabilities... There are things that containers can't do, and the other way around...
"The greatest enemy of knowledge is not ignorance, it is the illusion of knowledge". Stephen Hawking, 1942- 2018
|
Of course, but if you've seen how VMs are largely used in the enterprise...
"Never attribute to malice that which can be explained by stupidity."
- Hanlon's Razor
|
Can Docker let me test an app I'm developing against multiple versions of Windows?
If I can't test against 7, 8.1, 10, 2008 R2, 2012, 2012 R2 and 2016 with Docker, then I need actual VMs.
And that's just for the supported versions of Windows.
So...they serve different purposes. One isn't a replacement for the other.
|
You're right. Far too many people use VMs as a replacement for containers, which seems to be what you're advocating for general use because of a development edge case.
"Never attribute to malice that which can be explained by stupidity."
- Hanlon's Razor
|
Testing against multiple OSes is an edge case?
|
In an enterprise environment? Absolutely.
"Never attribute to malice that which can be explained by stupidity."
- Hanlon's Razor
|
Enterprises are the worst offenders when it comes to keeping up to date. We've all heard the horror stories about enterprises being unable to move away from XP for one reason or another. When MS kept pushing back the XP support cutoff date, it wasn't because of home users.
So I'd have to think that testing against multiple OSes is NOT an edge case, but should be done by any developer or tester who wants to sell/use software in any business that's in that boat.
(disclaimer: I work for a tiny company, but we sell primarily to larger enterprises with tens of thousands of servers)
|
...you're missing my point.
Testing against multiple OS types is not how enterprises commonly use VMs; it's a use case that is 100% an edge case for how VMs are used in the enterprise, regardless of your own personal usage of VMs in your work environment (no argument that it's perfectly valid).
Enterprises typically use VMs to lighten their physical server requirements, which is good, but in doing so have embraced standing up a new VM for whatever whim the management teams happen to have (like a unique web server per department, for instance), which is bad. The time and resources wasted managing (and securing) the bloat of extraneous VMs, whose workloads would be BETTER served by containers, are my complaint here.
"Never attribute to malice that which can be explained by stupidity."
- Hanlon's Razor
|
Gotcha. I figured I missed your point, I just wasn't sure how.
You're totally right. That said, I'll also add that using the wrong tools for a job is not strictly the domain of large enterprises.
|
I can upvote you only once, but that's the exact point of this...
Since Docker (and other containers) came into focus, it has become a matter of 'fashion' to hammer VMs and glorify containers...
It's like we would make cakes with fresh vegetables instead of sugar from now on...
"The greatest enemy of knowledge is not ignorance, it is the illusion of knowledge". Stephen Hawking, 1942- 2018
|
I never understood why a large fraction of Docker buffs show panic reactions every time someone suggests that Docker is a kind of virtualization. Of course there are things that, say, VMware will do that Docker won't do - and the other way around. So Docker isn't identical to VMware.
Yet the concept of virtualization has been applied in lots of other ways. VMware is not The Only Definition of virtualization. When I run an Ubuntu application in a Docker container on my Windows machine, and it operates in its own network world and sees a Unix-style file system rather than the physical NTFS file system underneath - of course those are examples of virtualization!
It seems to me like Docker buffs are really trying to say: Forget about competing alternatives - this is something completely different. You shouldn't even consider making any feature-by-feature comparison, because they are so different. Virtualization is out. Containers provide an operating environment which is independent of the underlying hardware, and that isn't virtualization. You can run different base layers (e.g. different OS distributions) in simultaneously running containers on a single host, but that isn't virtualization. You can create multiple fully independent networks for groups of containers to communicate among themselves; these networks have separate, independent network address spaces, so they don't interfere with each other even if they use identical addresses, but that isn't virtualization. Multiple containers running from the same image have identical local file systems at startup, but if they make modifications, one container's changes are invisible to the other containers, even with identical file names, but that isn't virtualization.
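To make the network point concrete, two user-defined networks are all it takes to give two groups of containers fully separate network worlds (image name made up):
docker network create net-a
docker network create net-b
docker run -d --network net-a --name app-a myimage
docker run -d --network net-b --name app-b myimage
# app-a and app-b each see only their own network;
# neither can address the other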
We have decided to use a different terminology - we refer to the local file system as a "union FS" to mark a distance from a virtual file system. We call it a "named" network to distinguish it from a virtual network. Hey, they have different names, how could they then represent similar concepts?
Docker containers realize one set of virtualization concepts, VMware another. It sure seems to me that Docker has made a great selection of concepts for a fairly lightweight kind of virtualization. Nevertheless, it is creating virtual environments and virtual resources, and mapping these onto more or less arbitrary physical hardware. Just like all virtualization does.
Because Docker essentially ignores all I/O facilities other than IP and Unix-style file systems, it has a somewhat easier job than those VMs that take full responsibility for I/O (and other hardware access). So Docker can say "Why do xxx to provide a virtual resource - Docker doesn't need it?" Sure, when you do not provide e.g. general I/O, then you don't need it. Still, you are virtualizing those resources that you do provide!
I think Docker is great for a large subset of tasks. But why should it displace other virtualization methods for other tasks? Docker is not universal: It cannot handle arbitrary I/O. It cannot handle arbitrary OSes - the base layer (usually an OS kernel) has a rather limited set of APIs to the host for realizing its own provisions, in particular with respect to I/O and device access. Say, if you need to run one container providing a Windows GUI, one running a MacOS application and a few running Linux applications, the Linux Docker implementation cannot handle the first two. VMware can. So for that use, why shouldn't I "be allowed to" run VMware?
This seems to me very much like a turf war, where terms and definitions are used as mechanisms to push competitors away. If Docker could take over all tasks, it would make more sense, but since there are lots of issues Docker cannot handle, it will never fully replace VMs, only a certain fraction of them. So why not make clear where Docker is suitable, and leave it at that?
|
Docker? Aint nobody got time fo' that.
xcopy c:\inetpub\wwwroot\mysite c:\inetpub\wwwroot\mysite-v2 /E /I
*dusts hands*
|
I haven't jumped in yet, been dancing around the subject for some time though.
Everyone has a photographic memory; some just don't have film. Steven Wright
|
If you want to virtualize Windows Forms applications, you can try the Cameyo packager; it's free for up to 50 users.
|
My only experience with Docker is swearing like one.
"These people looked deep within my soul and assigned me a number based on the order in which I joined."
- Homer
|
I'm supposed to be looking into this as an option for migrating a legacy LOB desktop application. As soon as I get my Server 2016 box ready for production, I'll be able to check it out. I'm not about to pay for a hosted environment unless I absolutely have to.
"Go forth into the source" - Neal Morse
|
I've heard the term "Docker".
Never seen it. No idea what it is. Don't need it. Don't care.
If it's not broken, fix it until it is.
Everything makes sense in someone's mind.
Ya can't fix stupid.
|
Vunic wrote: Is there anybody out there who still doesn't have a need to look into this?
I did not have the need to know about it until I was asked in an interview if I was familiar with it.
|
It is a technology we are looking at where I am employed.
Really haven't touched on it too much; I have real (not virtual) things to do. So, to borrow from Rune Haako, I will "send a droid", or in my case an intern.
We had one of them there walking discussions yesterday, and I was kinda like "so Docker is like the new Java, we just have containers instead of jars". But it was pointed out to me today that you can run Java inside of Docker, but not vice versa.
Director of Transmogrification Services
Shinobi of Query Language
Master of Yoda Conditional
|
I worked with it a fair bit in the past. Docker isn't really the only way to achieve any of the things it offers, and in my experience it's downright counterproductive if you end up in an organisation that tries to fit everything into Docker containers.
I'd recommend reading about The Twelve-Factor App if you're completely unfamiliar with Docker or containers - it's basically a set of 'best practices' for using Docker. They're ideas that actually make Docker a nice enough development experience, but would be downright stupid if you tried to use them all outside of Docker (e.g. all config should be environment variables), although a few of them are just good common-sense rules.
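For example, the "config in environment variables" rule boils down to launching the container like this (image name and variables made up):
docker run -e DATABASE_URL="postgres://db.internal/app" \
           -e LOG_LEVEL=debug \
           myapp:latest
# the app reads DATABASE_URL and LOG_LEVEL at startup instead of
# shipping an environment-specific config file inside the image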
All in all, my conclusion was that well-architected software, along with decent automation scripts for managing infrastructure and dependencies, can be easier than trying to maintain dozens of different container images. And no-one I know who advocates using Docker is actually conscious of the fact that security updates still need to be installed into their images - along with all the testing that comes with that.
|