|
megaadam wrote: Make double standards great again!
But a lot of standards do have several standard numbers, because they have been developed in close cooperation between two or more standards organizations. Quite a few telecommunication standards have both an ISO number and an ITU "recommendation" number (like X.509). Some IEEE standards are identical to ISO standards. In Germany, DIN (Deutsches Institut für Normung) is the German member body of ISO. They were very early with some standards (like DIN 45500, which all old-time hi-fi freaks know well) - parts of it were later made into ISO standards with different numbers.
In a few cases, the standards have small "editorial" differences, such as whether the final part(s) are called an "appendix" or an "annex", or mandatory definitions of certain terms such as MAY and MUST. There may be other formal requirements, too: ITU refers to specific regulatory units in the telecom world, which is against ISO principles, so those references are replaced by terms like "the management organization", without identifying a specific one. The technical content of the standard is completely unaffected by these differences.
Sometimes you may see national standards such as Norsk Standard 646 - NS 646 is identical to ISO 646 ("ASCII") but with an addendum defining its use in Norway. Fortunately, 646 was unused in the NS number series; in other cases, the NS number differs from the ISO number for the same technical content. And for some standards, the English text isn't even translated into Norwegian for the NS version.
|
|
|
|
|
I thought everything these days was made in China!
I just bought a TV that says "Built in Antennae" and I don't even know where the hell that is!
I do all my own stunts, but never intentionally!
JaxCoder.com
|
|
|
|
|
Have you never heard of the battle in the Antennes?
I have lived with several Zen masters - all of them were cats.
His last invention was an evil Lasagna. It didn't kill anyone, and it actually tasted pretty good.
|
|
|
|
|
I used to fight with Rabbit ears years ago and occasionally tin foil but never with antennae.
I do all my own stunts, but never intentionally!
JaxCoder.com
|
|
|
|
|
REMEMBER THE ANTENNAE!
I wanna be a eunuchs developer! Pass me a bread knife!
|
|
|
|
|
This is Antennae!
“That which can be asserted without evidence, can be dismissed without evidence.”
― Christopher Hitchens
|
|
|
|
|
I hear it's highly recommended for get-togethers. Graduations, weddings, anniversaries, all kinds of ceremonies - they're the best for receptions.
|
|
|
|
|
Oops. The beings on planet Antennae accidentally left the label on the TV before exporting it. And Trump thinks he has a trade imbalance with China -- if he only knew the true truth!
Latest Article - Azure Function - Compute Pi Stress Test
Learning to code with python is like learning to swim with those little arm floaties. It gives you undeserved confidence and will eventually drown you. - DangerBunny
Artificial intelligence is the only remedy for natural stupidity. - CDP1802
|
|
|
|
|
Marc Clifton wrote: if he only knew the true truth
If only...
I do all my own stunts, but never intentionally!
JaxCoder.com
|
|
|
|
|
|
let me guess, it stopped working and you want to try and fix it yourself so you're looking for a tech manual.
Be careful, the "Warranty Void if Removed" sticker really is damn hard to get off without destroying it, use the sharpest blade you can find and be patient, really patient.
|
|
|
|
|
A Chinese satellite state I believe. Hmm, but then with that name...
|
|
|
|
|
Here it is - off Antarctica!
Freedom is the freedom to say that two plus two make four. If that is granted, all else follows.
-- 6079 Smith W.
|
|
|
|
|
No wonder I don't know where it is, it's cold there and I don't do cold!
I do all my own stunts, but never intentionally!
JaxCoder.com
|
|
|
|
|
It's only winter slightly more than 11 months a year. The rest of the time is summer, and you'd be as happy as can be!
Freedom is the freedom to say that two plus two make four. If that is granted, all else follows.
-- 6079 Smith W.
|
|
|
|
|
I've lived in the south (US) 99% of my life, and when it gets below 60 I am cold, especially now that I'm older. I haven't seen snow in almost 40 years, and even that's been too soon!
I do all my own stunts, but never intentionally!
JaxCoder.com
|
|
|
|
|
60 degrees? When I lived in England any time the temperature went above 60 degrees in the summer they started talking about a "heat-wave"!
- I would love to change the world, but they won’t give me the source code.
|
|
|
|
|
Isn't it the capital of Madagascar?
Whenever you find yourself on the side of the majority, it is time to pause and reflect. - Mark Twain
|
|
|
|
|
This wash tag has made the rounds on the Internet here in Norway: Produsert i Kalkun[^].
For those of you who do not read Norwegian: A turkey is called "en kalkun" in Norwegian.
|
|
|
|
|
I tried to avoid the Docker hype for a while, as I am not really enthusiastic about Docker for Windows, but as the pressure is mounting I had to give in. Sadly, no one seems to realize the amount of work that will be needed on the builder side, and also to get the images served properly.
Well, enough whining for now. I had a look at this overview: https://www.slant.co/topics/2436/~docker-image-private-registries[^]
And to me Harbor looks like an interesting choice, but I would like to hear if anyone has had any experience with it in a Windows environment.
Looking forward to your reaction(s)
|
|
|
|
|
I guess I would have stopped at "in a Windows environment." My experience with Hyper-V was horrid. Docker running under a VM in Windows was OK. Neither seems like a useful solution for any kind of problem I can think of, unless the Windows/Docker relationship has moved beyond first base.
Latest Article - Azure Function - Compute Pi Stress Test
Learning to code with python is like learning to swim with those little arm floaties. It gives you undeserved confidence and will eventually drown you. - DangerBunny
Artificial intelligence is the only remedy for natural stupidity. - CDP1802
|
|
|
|
|
Quote: “Microservices is a silver bullet, magic pill, instant fix, and can't-go-wrong solution to all of software's problems. In fact, as soon you implement even the basics of microservices all of your dreams come true; you will triple productivity, reach your ideal weight, land your dream job, win the lottery 10 times, and be able to fly, clearly.”
https://dzone.com/articles/microservices-anti-patterns[^]
|
|
|
|
|
We are in the process of firmly establishing the Docker Registry from docker.com. We are developing low-level software, so we mostly do system builds and testing - no orchestration or swarming, no microservices. The reason for using Docker is to keep control over the build tools: we must be able to pick up an old project and rebuild a delivery from two or three years ago, identical to the last bit. Using dockerized tools is one element in reaching this goal.
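As a sketch of what that pinning looks like in practice - the registry host, image name, and digest below are all placeholders, and the build script location matches the mounted-source-tree setup described further down:

```shell
# Re-run an old build against the exact toolchain image, pinned by immutable
# digest rather than by a mutable tag (host, name, and digest are placeholders):
docker run --rm -v "$PWD:/src" -w /src \
  registry.example.com/buildtools/gcc-base@sha256:0123abcd... \
  bash /src/build.sh
```

Pinning by digest (`@sha256:...`) rather than by tag is what makes the rebuild bit-for-bit reproducible: a tag can be repointed later, a digest cannot.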
We did initial trials on Windows, just to learn what it is, but our IT guys want central servers on Linux (they seem to prefer CentOS, but I think the developers pressured for Ubuntu on the registry server). We already have a small handful of production lines and half a dozen repositories (i.e. image names) using the registry. It seems to be fairly stable.
But: The free version has no access control whatsoever: Any developer can push any self-built garbage image to the server. Young software developers are as rebellious as a teenage son, always trying to ignore rules and sneak around blocks. So we must either switch to the paid version (our budget guys prefer not to), or investigate the open-source Portus[^] solution - our IT guys are evaluating Portus right now.
When deleting, only pointers are deleted. The garbage collector is sort of lazy; you have to wake him up manually (or by an alarm clock). He is rather careless, too: he first makes a round to mark what to dispose of, then a second round to pick it up. If someone pushes another image between the rounds, saying "But I would like to use that layer!", he may still dispose of the layer in his second round anyway. Their own words are "stop-the-world GC". We will set the cron-ometer to Monday mornings at 04:00, and everyone who is supposed to upload images will know to sleep tight on Monday mornings rather than pushing images.
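The wake-up call itself, for reference - a sketch assuming the registry runs in a container named `registry` using the stock configuration path from the official image:

```shell
# Stop-the-world garbage collection inside the registry container
# (add --dry-run first to see what would be collected):
docker exec registry bin/registry garbage-collect /etc/docker/registry/config.yml

# Matching cron entry for Monday mornings at 04:00:
# 0 4 * * 1  docker exec registry bin/registry garbage-collect /etc/docker/registry/config.yml
```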
The free version provides neither a web interface to the registry nor a stand-alone UI - not even a command-line client (but you would definitely want a GUI of sorts to get an overview of the registry). For now we are using curl for REST calls ... which is slightly above drawing the bit pattern to send out on the line, but not by much. You can find a number of free front ends on GitHub, but I haven't seen any ready-to-use binaries, and most certainly not for Windows. While I surely could retrieve the source code, set up Linux in a virtual machine, pick up all the required build tools and run the build, doing that for twelve alternatives is a little cumbersome. I haven't done it yet.
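For the curious, the curl-level "browsing" amounts to two endpoints of the Registry HTTP API v2 (the host name below is a placeholder for your own registry):

```shell
# List all repositories (image names) in the registry:
curl -s http://registry.example.com:5000/v2/_catalog

# List the tags of one repository:
curl -s http://registry.example.com:5000/v2/myrepo/tags/list
```

Both return small JSON documents, so even without a GUI the output is at least human-readable.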
One point that is independent of which registry solution you choose:
In the experimentation phase, images were built without any discipline or order, so except for the Ubuntu base layer, almost every image had its own set of layers. And they were huge - each version tended to be 4-5 gigabytes.
There are two reasons for this. First: We decided against "one image, one tool" with an "external" build script calling the tools in turn. Rather, we put all the tools for a build step into a single container; this makes it much easier to keep track of consistent toolboxes where we know that the various tools' versions go together. The build step is controlled by a bash script running inside the container (it is located in the checked-out source tree, which is mounted in the container at startup). So images tend to be large (but there are not as many of them).
Second: The experimenting developers seemed to be scared of layers, trying to reduce their number by loading as many tools, Python packages and what-have-you as possible in one single build step for the image. So every layer was different, with nothing shared (except for the Ubuntu base), and disk space requirements were huge, since the tiniest little version update required a complete 5 GB image rebuild from the bottom.
So: We are now establishing a tree structure of base layers: With Ubuntu 18.04 LTS at the bottom, we create an image with a stable set of basic build management tools, common to all build tasks and not expected to change, and we use this "ubuntutools" (rather than the raw Ubuntu) layer as a base to build on. Then we add a fairly stable gcc, and a set of C/C++ related tools to make a "gcc base layer" for the more specialized images to be based on. On the ubuntutools base we also build a Python branch with a fairly large set of pre-installed Python packages (we currently use around 150 of them) and a set of Python tools. Our developers frequently request new packages; then we lay a thin "veneer" layer on top of the common Python layer, adding to the large set already in the base.
The art is in determining which tools are super-stable and can be put in the lower layers (like CMake and Ninja - they do come in new versions, but we rarely require the update), which are medium-stable (like gcc - we do not switch to a new release until the old one no longer works for us), and which are volatile elements (like Python packages under development) that must be placed in the leaf nodes. When we have to update a low- or intermediate-level layer, the tree must grow a new branch, but we require a documented need for that update before we accept it - a developer's wish to always run "the latest and greatest version" is not sufficient. (In many cases, when the update requirement is for a single component, we can instead provide a veneer layer that overrides the version from the lower layer.)
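In Dockerfile terms, the tree looks roughly like this - three separate Dockerfiles shown in one listing as a sketch; the registry host, tags, and package names are placeholders, not our real ones:

```dockerfile
# --- ubuntutools/Dockerfile: stable build management on the raw Ubuntu base ---
FROM ubuntu:18.04
RUN apt-get update && apt-get install -y --no-install-recommends cmake ninja-build git

# --- gcc-base/Dockerfile: C/C++ toolchain on top of ubuntutools ---
FROM registry.example.com/base/ubuntutools:1
RUN apt-get update && apt-get install -y --no-install-recommends gcc g++ gdb

# --- python-veneer/Dockerfile: one newly requested package on the common Python image ---
FROM registry.example.com/base/python:1
RUN pip install --no-cache-dir some-requested-package
```

Each leaf Dockerfile stays one or two lines long, because everything stable already sits in the base it names in its `FROM` line.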
This structure has a number of benefits:
Using an already-built, complex image as a base reduces (leaf) image build time drastically.
Dockerfiles for the top layers are very simple.
A lot of disk space is saved, both in the registry and in the Docker engine.
The layer cache in the engine is used far more efficiently.
Network traffic to retrieve layers/images from the registry is significantly reduced.
When several containers run simultaneously, they will to a much larger degree share code segments in RAM, even if they run different images, as long as those images are built on the same "high level" base image.
Startup times may be somewhat reduced: The probability of a (medium layer) image already being present in RAM increases.
The only significant disadvantage is that to see all the tool versions in your image, you have to trace backwards through multiple levels of base images. We are documenting the entire tree on our intranet, where you can click yourself backwards layer by layer, getting far more information than you could find in a huge "single level" Dockerfile - and certainly in a much more readable format!
But most of all: enforcing this tree structure helps us keep those unruly developers under control, so they don't go wild with a plethora of incompatible tool versions (which would kill the idea of reproducible builds!).
|
|
|
|
|
Thanks, useful information!
Seems my suspicion of the (free) Docker Registry is confirmed, on Slant someone commented:
Quote: Biggest CON there is that it cannot control deleting of images properly
Bottom line: this makes the Docker registry suck when your hard disk fills up at the wrong time and you cannot push out your builds! Of course this has nothing to do with the "Enterprise Grade Private Docker Registry", which seems fine and reasonably priced too.
|
|
|
|
|
As far as I can see, even the free version CAN delete images properly, but you have to run a garbage collection (analogous to emptying your Recycle Bin in Windows) to actually free up the space.
Another detail: When you use the REST API "by hand", deletion requires a SHA that I haven't yet discovered how to read from the registry itself. (Maybe I am expected to locally calculate the SHA of the image manifest - I believe that is what it really is!) So I have to pull the image to the Docker engine, which can provide the SHA I need through "docker images --digests". I hope to find a way where I don't have to pull a huge image across the network just to delete it!
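Two ways around the full pull, as a sketch - the registry host and repository below are placeholders. The registry will actually hand you the digest itself in the `Docker-Content-Digest` response header, provided you send the right `Accept` header; and since the digest is just the sha256 of the raw manifest bytes, it can also be computed locally from a fetched manifest:

```shell
# Ask the registry for the digest directly (no pull needed; host is a placeholder):
#
#   curl -sI -H 'Accept: application/vnd.docker.distribution.manifest.v2+json' \
#     http://registry.example.com:5000/v2/myrepo/manifests/latest \
#     | grep -i docker-content-digest
#
# Or compute it locally from the raw manifest bytes; here a tiny stand-in
# file takes the place of a manifest fetched with that same Accept header:
printf '{"schemaVersion": 2}' > manifest.json
digest="sha256:$(sha256sum manifest.json | awk '{print $1}')"
echo "$digest"
```

Beware that the bytes must be exactly what the registry served - re-serializing the JSON changes the hash, which is presumably why computing it "by hand" is so fiddly.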
|
|
|
|
|