|
"You cannot rely on information being stable or remain available."
clap clap - well said sir. Interestingly, for large purchases from Amazon or anywhere else, I take a screen grab of the deal. I've had several occasions where the terms simply changed from the day before, as if by magic. It's well known that Amazon uses the shopping model it has built for you and dynamically changes prices. I can easily see articles and whatnot being subtly modified.
Charlie Gilley
“They who can give up essential liberty to obtain a little temporary safety deserve neither liberty nor safety.” BF, 1759
Has never been more appropriate.
|
|
|
|
|
trønderen wrote: That is a fundamental property of the Internet: You cannot rely on information being stable or remain available.
Going beyond my initial question, yes, you're absolutely right.
It's one thing to update information. Broken links make the web that much less useful. Have you ever saved a technical blog post from a few years ago that points back to MSDN documentation? Good luck following those links nowadays.
IMO, if the web had to be redesigned from scratch, versioning would be a thing, and every page (or browsers themselves?) would have a button to bring up the version history, and have the ability to present different versions side-by-side so you could tell at a glance what's changed. Build a git-like infrastructure underneath the whole web, so to speak. Why not?
The Wayback Machine is one thing. But pages easily break as soon as they start relying on scripts, and because of refresh frequency, many "in-between" versions are missing altogether.
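As an aside, the Wayback Machine does expose its snapshot index programmatically: the availability endpoint (https://archive.org/wayback/available?url=...&timestamp=...) returns JSON naming the capture closest to a given timestamp. A minimal sketch of parsing that reply (the response shape is the documented one; the function name is my own invention):

```python
import json

def closest_snapshot(api_response: str):
    """Parse a Wayback availability reply; return (url, timestamp) or None.

    The API nests its answer under archived_snapshots -> closest, which is
    absent entirely when no capture of the page exists.
    """
    data = json.loads(api_response)
    snap = data.get("archived_snapshots", {}).get("closest")
    return (snap["url"], snap["timestamp"]) if snap else None
```

Pointing a fetch at that endpoint and feeding the body through a parser like this is about as close as the current web gets to the "version history button" idea.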
That being said, I have no idea how one would deal with changes made for the sake of security fixes. Static pages are one thing, but when "live" pages with an entire infrastructure behind them have to be kept running... I'm just not seeing any easy solution.
|
|
|
|
|
I look at it every morning. Rarely read anything.
|
|
|
|
|
Same. I go over the subject lines (which is trivially fast with an RSS reader), delete most of them, and the few articles that seem interesting, I keep around as unread. Then as I have free time, I'll go back to those unread items. I try to keep the list as small as possible.
|
|
|
|
|
dandy72 wrote: Anyone know anything about journalism that can shed some light as to what the real motive might be?
I do not know the details of the updates you see, but there is a known practice of updating articles to head off a lawsuit without actually apologizing for some accusation or misinformation in the original 'news'. It works because most people will not re-read an item that pops up again after weeks or months, just like you... except the lawyers.
"If builders built buildings the way programmers wrote programs, then the first woodpecker that came along would destroy civilization." ― Gerald Weinberg
|
|
|
|
|
dandy72 wrote: I'm sure I'm reading too much into this, as the topics in those revised articles are generally rather benign.
If they are just correcting minor details, then I would agree. However, journalistic integrity would still require them to note that a change was made from the original.
Of course, journalistic integrity is very difficult to find these days.
Freedom is the freedom to say that two plus two make four. If that is granted, all else follows.
-- 6079 Smith W.
|
|
|
|
|
Daniel Pfeffer wrote: However, journalistic integrity would still require them to note that a change was made from the original.
This is my main point exactly. They make changes, but do not make any note of it, or what was changed or why. I notice it because the article (re-)appears at the bottom of my feed as an unread item.
But someone just looking at a random article would have no idea the content no longer matches the original. So depending on when they read it, different people would recall different facts being stated. I'd expect better from the BBC.
|
|
|
|
|
Firstly, I am not asking for help on a programming issue here. I'm mostly just trying to see if anyone else here is experiencing, or has in the past experienced, any problems using the ClosedXML .NET library to open/read Excel files.
I have a simple process that has worked flawlessly every day for over 2 years, then out of the blue it started failing. Basically, ClosedXML was choking trying to open an Excel (*.xlsx) file. By choking, I mean it threw an IO exception reporting that the file was corrupted. The weird thing is, I can copy that file to my desktop, open it in Excel, save it, copy it back to the server, and it works fine.
I am aware that a new version of OpenXML was released (right around the time that my process began failing???) with quite a few breaking changes. Coincidence?...I don't know yet.
What's new in the Open XML SDK | Microsoft Learn[^]
What I've tried:
0: Go to GitHub and get the latest ClosedXML libs. This required a .NET Framework upgrade to 4.6+. No problem...it compiles, but on opening a spreadsheet it complains about the XMLDocument version. Back to GitHub, get that version, and try again. It compiles fine, but on opening a spreadsheet it now whines about a netstandard library it can't find. (Surely this is an indication of incompatibility.) I tried different versions/combinations, but the only way to get things working again was to revert to the original framework and original libraries. Back to square one.
1: Plead with the new IT guy responsible for scheduling that job to please change the format to CSV! (the previous IT guy was on a power trip and refused to change it despite numerous requests)
2: Investigate the idea of simply extracting the sheet1.xml file from the archive and parsing it myself. Then again, if it were that easy, I wonder why so few solutions mention this approach. It's possible I'm looking at an extremely simple/limited structure (no formulas/formatting/etc.) in this particular file, but it looks feasible. If #1 fails, this will probably be the next path of attack.
3: Install Excel on the customer's server. Ya know, they did give me an admin account so in theory, I can install anything required to get the job done. Also, I have an old Office 2007 disk around here that should still work even if it never gets registered. If 1 and 2 fail, this may be the only option. Not only is it the worst option, but it's also the slowest option. I'd like to avoid this one.
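For what it's worth, option 2 looks workable for a simple sheet: an .xlsx file is just a zip archive, with cell data in xl/worksheets/sheet1.xml and text values stored once in xl/sharedStrings.xml and referenced by index. A rough sketch (in Python for brevity; the same idea carries over to .NET's System.IO.Compression plus an XML reader, and it assumes the default single-sheet layout with no formulas, inline strings, or formatting):

```python
import zipfile
import xml.etree.ElementTree as ET

# SpreadsheetML namespace used throughout the workbook parts
NS = {"m": "http://schemas.openxmlformats.org/spreadsheetml/2006/main"}

def read_sheet1(path):
    """Return rows of cell values from xl/worksheets/sheet1.xml."""
    with zipfile.ZipFile(path) as z:
        # Shared strings are stored once; cells with t="s" refer to them by index.
        shared = []
        if "xl/sharedStrings.xml" in z.namelist():
            sst = ET.fromstring(z.read("xl/sharedStrings.xml"))
            shared = [si.findtext("m:t", default="", namespaces=NS)
                      for si in sst.findall("m:si", NS)]
        sheet = ET.fromstring(z.read("xl/worksheets/sheet1.xml"))
        rows = []
        for row in sheet.findall(".//m:row", NS):
            cells = []
            for c in row.findall("m:c", NS):
                v = c.findtext("m:v", default="", namespaces=NS)
                # t="s" marks a shared-string reference; anything else is taken as-is
                cells.append(shared[int(v)] if c.get("t") == "s" else v)
            rows.append(cells)
        return rows
```

This sidesteps ClosedXML entirely, which also means it sidesteps whatever validation made ClosedXML declare the file corrupted, so it may read files Excel itself would complain about.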
"Go forth into the source" - Neal Morse
"Hope is contagious"
|
|
|
|
|
kmoorevs wrote: if anyone else here is or has in the past been experiencing any problems using the ClosedXML .Net library to open/read Excel files.
Nope...
.
.
.
.
Never used it.
run, hides and ducks
M.D.V.
If something has a solution... Why do we have to worry about?. If it has no solution... For what reason do we have to worry about?
Help me to understand what I'm saying, and I'll explain it better to you
Rating helpful answers is nice, but saying thanks can be even nicer.
|
|
|
|
|
|
|
You can't install Interop on a server because of licencing.
Well, you can, or at least used to be able to, but it took a couple of registry hacks.
// TODO: Insert something here Top ten reasons why I'm lazy
1.
|
|
|
|
|
Or in our case it wasn't allowed.
|
|
|
|
|
|
I have used it on a number of occasions and it works well.
|
|
|
|
|
|
But that was the implication from:
Quote: Avoid Interop.
|
|
|
|
|
|
Well, what is your point?
|
|
|
|
|
|
|
Nope, just offering unsolicited advice.
|
|
|
|
|
Feel free not to bother in future.
|
|
|
|
|
And you needn't bother to respond either.
|
|
|
|
|
If you don't post pointless or silly comments there will not be anything to respond to.
|
|
|
|
|