|
Marc Clifton wrote: what hardware on what platform is less of an issue, IMO, unless you need real performance.
Ah, but that, IMO, is where the real challenges lie. Writing performant code for a resource-intensive task is the kind of challenge that I really enjoy.
Freedom is the freedom to say that two plus two make four. If that is granted, all else follows.
-- 6079 Smith W.
|
|
|
|
|
FedEx is planning to launch an ecommerce platform called “fdx” later this year. What are the shipping choices?
|
|
|
|
|
Kent Sharkey wrote: What are the shipping choices? Dog and pony show.
|
|
|
|
|
Kent Sharkey wrote: What are the shipping choices?
Pony Express.
|
|
|
|
|
The CAMM2 spec was recently finalized, and memory makers are testing the waters. You mean we'll soon be able to bend the pins on our memory chips again?
Great times!
|
|
|
|
|
In the latest wave of anti-ad blocker measures implemented by YouTube, AdBlock users have been reporting widespread performance issues with the extension enabled. Great news for those who enjoy 15 minutes of ads every 10 minutes.
|
|
|
|
|
DOSP involves initially releasing software under a proprietary license, followed by a planned transition to an open source license. OPeN SOURCe... eventually? Sometime?
meh
|
|
|
|
|
The tech sector has kicked off the new year with a spate of fresh job cuts that are coming at the same time as the industry is doubling down on investments into artificial intelligence. One couldn't be causing the other, could it?
|
|
|
|
|
Apple customers wanting to buy an Apple Vision Pro may have to sit through a lengthy sales pitch, including a 25-minute in-store demonstration on how to use the headset. What if you just want to buy it, and not marry it?
|
|
|
|
|
Java is not the language it used to be, and that is mostly a good thing. Here are eleven ways Java is evolving to meet the challenges of the future. Because this one goes to 11: throws VersionException?
|
|
|
|
|
A six year old Linux kernel mailing list discussion has been reignited over the prospects of converting the Linux kernel to supporting modern C++ code. If you kids keep asking, I'm going to turn this car around
And it's BACK TO WINNIPEG!
|
|
|
|
|
Not a Linux user here so I can see both sides of this.
Pro - Modern C and C++ have tools to assist in eliminating buffer errors, which are the single biggest technical source of breaches.
Con - Changing working and reasonably well debugged code is stupid without an overriding reason to do so.
Seriously, if the goal is to eliminate C code, then Linux needs to shift to something like Rust, Java, or C# for the OS. The entire C language is a security breach waiting to happen, as it's impossible to ever verify correctness in something as complex as a modern operating system. IBM (OS/360), Honeywell (OS/360 emulator), Digital Equipment (VMS), MIT (Multics), etc., all knew this and wrote their operating systems using descriptor-controlled counted buffers. The result was that their operating systems were secure by both design and implementation. It was K&R who broke this model by creating C and writing the first version of Unix in C. We've been fighting memory bugs ever since, even though we had the solution right from the beginning.
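The idea can be sketched in C itself. This is a hypothetical illustration, not the actual OS/360 or VMS mechanism (those enforced the descriptor in hardware or in the OS, not in user code): a buffer is only reachable through a descriptor that carries its length, and every access goes through a bounds check.

```c
#include <stddef.h>
#include <stdio.h>
#include <stdlib.h>

/* Hypothetical sketch of a descriptor-controlled counted buffer:
 * the program never touches raw memory directly, only a descriptor
 * that carries the buffer's length, and every access is checked. */
typedef struct {
    size_t len;          /* count of valid bytes */
    unsigned char *data; /* the underlying storage */
} buf_desc;

/* Checked read: fault (here, abort) on any out-of-range access. */
unsigned char buf_get(const buf_desc *b, size_t i) {
    if (i >= b->len) {
        fprintf(stderr, "memory exception: index %zu >= length %zu\n", i, b->len);
        abort();
    }
    return b->data[i];
}

/* Checked write: same integer comparison guards the store. */
void buf_set(buf_desc *b, size_t i, unsigned char v) {
    if (i >= b->len) {
        fprintf(stderr, "memory exception: index %zu >= length %zu\n", i, b->len);
        abort();
    }
    b->data[i] = v;
}
```

Note that each check is a single integer comparison against the stored count, which is why systems built this way paid so little for the safety.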
|
|
|
|
|
obermd wrote: ...descriptor-controlled counted buffers Wow! Putting this in quotes and googling it returns 0 results. Do you have any sources overviewing this approach? It sounds interesting; I hadn't heard of these before.
|
|
|
|
|
Other than personally doing system-level programming on OpenVMS and assembly-language programming on OS/360, no. Multics pulled this concept from OS/360. You literally didn't have access to the underlying memory, as it was all controlled by the OS. If you attempted to read or write outside the memory buffer you had requested from the OS, your program would fault with a memory exception.
Also of note: OpenVMS went to a Black Hat conference one year and still stands as the only OS to go to a Black Hat conference and NOT get breached. The following year, the conference changed the rules and required any OS being submitted to run natively on the Intel line of processors.
|
|
|
|
|
Interesting! Thanks!
ps - Did those checks slow down programs much? I imagine the checks had to occur for every memory access, even if it was done at the OS level...
|
|
|
|
|
The checks were part of the OS, so there's no real way to tell whether they were slowing down programs. My understanding of VMS is that the checks were O(1) integer comparisons of buffer sizes, so their impact was very small.
I do know that early garbage collection schemes, such as those found in Symbolics Lisp Machines and early versions of Smalltalk, could consume up to 30% of the CPU.
|
|
|
|
|
obermd wrote: I do know that early garbage collection schemes such as those found in Symbolics Lisp Machines and early versions of Smalltalk could consume up to 30% of the CPU. Yeoch! Interesting...
|
|
|
|
|
obermd wrote: I do know that early garbage collection schemes such as those found in Symbolics Lisp Machines and early versions of Smalltalk could consume up to 30% of the CPU.
Ironically, while its early garbage collection was awful, the Smalltalk community, and especially the Self language, pioneered many of the techniques now used in runtimes like .NET, the JVM, and JavaScript.
"If you don't fail at least 90 percent of the time, you're not aiming high enough."
Alan Kay.
|
|
|
|
|
A full flow analysis, as part of a static code analyzer, can do a lot to remove the need for checks: the compiler could suppress e.g. length / index / null checking when the flow analysis showed that the index or pointer could not possibly be outside the object or null. If the compiler did such complete flow analysis, that is. Usually, it doesn't.
I have been using static analyzers telling me things like "In module A, x is set to 20, and used in a call to module B function f as argument y. f uses y without modification to a call to function g in module C, as argument z. g uses z to index string s, but s is declared with length 16, so this would cause a runtime error".
The trace will often contain conditions, like "at function f, line 211, the local variable a may be set to a negative value, in which case the if-test at line 342 will take the false branch, causing function C.g to be called with z argument set to 20" or something like that.
The flow analyzer traces every call to g back to the origin of whatever ends up as argument z. If every possible path that leads up to z indexing s goes back to some origin that will always be within legal limits, then there is no need for check instructions.
A trivial example: If the string index is a loop control variable with constant bounds within the string's index limits, and there is no other indexing of the string within this loop, check instructions are superfluous.
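The analyzer trace described earlier might correspond to code like the following. All names here (modules A, B, C; functions f, g) are hypothetical, just fleshing out the example: a constant 20 flows unmodified through two calls into an index of a 16-byte string, which a flow analyzer can flag at compile time, while in-range calls need no check instructions at all.

```c
#include <stddef.h>

static char s[16];      /* declared with length 16 */

/* Module C: g uses z to index string s. */
char g(size_t z) {
    return s[z];        /* safe only if every caller proves z < 16 */
}

/* Module B: f passes y through to g without modification. */
char f(size_t y) {
    return g(y);
}

/* Module A: x is set to 20 and used as argument y in a call to f.
 * The analyzer traces 20 -> y -> z -> s[20] and reports a
 * guaranteed out-of-bounds access; calling this would be a bug. */
char a_entry(void) {
    size_t x = 20;
    return f(x);
}
```

For calls where the traced value is provably in range (say, f(3)), the same analysis shows the check instructions in g are superfluous.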
Religious freedom is the freedom to say that two plus two make five.
|
|
|
|
|
That is a great system! But it sounds like it would slow down compiling A LOT! Probably worth it in the end...
|
|
|
|
|
You know what would be awesome? An article! Or a collection of articles on your experience.
Just saying.
GCS/GE d--(d) s-/+ a C+++ U+++ P-- L+@ E-- W+++ N+ o+ K- w+++ O? M-- V? PS+ PE Y+ PGP t+ 5? X R+++ tv-- b+(+++) DI+++ D++ G e++ h--- r+++ y+++* Weapons extension: ma- k++ F+2 X
The shortest horror story: On Error Resume Next
|
|
|
|
|
A half-century ago, "Saturday Review" asked some of the era's visionaries for their predictions of what 2024 would look like. Here are their hits and misses. Shiny jumpsuits and flying cars?
Although I see that Zardoz was released in 1974, so perhaps we should be wearing a little less than shiny jumpsuits.
|
|
|
|
|
Didn't realize Zardoz was that old.
|
|
|
|
|
"There are two simple reasons." Because multiplication is hard?
Why, I bet almost 50% of developers are in the bottom half of the scale.
|
|
|
|
|
Kent Sharkey wrote: 50% of developers are in the bottom half of the scale below the median.
FTFY
Freedom is the freedom to say that two plus two make four. If that is granted, all else follows.
-- 6079 Smith W.
|
|
|
|