Wordle 900 3/6*
🟨⬛🟨🟨⬛
🟩🟨⬛🟨🟩
🟩🟩🟩🟩🟩
Wordle 900 4/6
🟨⬛⬛⬛⬛
🟨🟨⬛🟨🟩
⬛🟩🟩🟩🟩
🟩🟩🟩🟩🟩
Wordle 900 3/6
⬛⬛🟨🟨⬛
🟩🟨⬛🟨🟩
🟩🟩🟩🟩🟩
Wordle 900 5/6
🟨⬛⬛⬛🟨
⬛🟩⬛🟩⬛
⬛🟩🟨🟩⬛
🟨🟩🟩🟩⬛
🟩🟩🟩🟩🟩
Wordle 900 3/6
⬛⬛🟨⬛⬛
⬛⬛🟨⬛🟩
🟩🟩🟩🟩🟩
⬛⬛🟨🟨⬛
⬛⬛🟨⬛⬛
🟨🟨⬛🟨⬛
🟩🟩🟩🟩🟩
In a closed society where everybody's guilty, the only crime is getting caught. In a world of thieves, the only final sin is stupidity. - Hunter S. Thompson - RIP
Wordle 900 4/6*
⬛⬛🟨⬛⬛
🟨⬛🟨🟨⬛
🟨🟩🟩🟩⬛
🟩🟩🟩🟩🟩
Happiness will never come to those who fail to appreciate what they already have. -Anon
And those who were seen dancing were thought to be insane by those who could not hear the music. -Friedrich Nietzsche
Wordle 900 3/6
🟨⬛🟨🟨⬛
🟨🟩🟩🟩⬛
🟩🟩🟩🟩🟩
Ok, I have had my coffee, so you can all come out now!
I've been subscribed to the BBC World News RSS feed for years. Yes, RSS feeds are still a thing. I wouldn't have it any other way either, given that nobody places ads on those feeds (or perhaps rather, nobody bothers to, for some reason). But that's not the point.
For quite a while now, I've noticed that they often re-surface old articles - days, weeks, even months old - articles they've already published before, republishing them with updated bits and pieces: adjusted numbers, details that weren't there before, that sort of thing. Some of the same articles show up time and time again.
I never find these to be of particular interest (no matter what got updated), so I just delete these "new" entries that show up as unread at the bottom of the chronological list.
I really wonder who those updates are for. RSS has fallen out of favor, so very few people should even notice. I can only assume that, among the population at large, only people searching for an article on a specific topic might find them, and read the latest version as if it were the first published instance (and really, how would one even know, unless they're marked as such, which they never are?). What's the point? After a while, if something's really worth bringing up again, doesn't it warrant having a brand-new article written instead? If it's not, then presumably you're concluding people don't care enough, so as a reporter, you should just let those old articles go...
I don't like to see history rewritten. If it has to do with fact-checking, or new details having come to light, I've seen newspapers publish follow-up articles, corrections as part of an addendum, that sort of thing. These online articles, however, don't get an addendum; the original gets modified and then passed off as if it were "as originally written".
I'm not sure whether this is common and other news sources do the same; this is the only news feed I subscribe to, so it's the only place I've seen it.
Anyone know anything about journalism that can shed some light as to what the real motive might be?
I'm sure I'm reading too much into this, as the topics in those revised articles are generally rather benign.
dandy72 wrote: I've been subscribed to the BBC World News RSS feed for years.
They are probably having to correct the bits that are incorrect/downright lies.
They gave the job to AI.
I've given up trying to be calm. However, I am open to feeling slightly less agitated.
I'm begging you, for the benefit of everyone, don't be STUPID.
That is a fundamental property of the Internet: You cannot rely on information being stable or remaining available.
If you really need to have something documented, make a copy of that web page. It may be difficult sometimes; you may have to resort to PrtScn, as 'Save Page As...' may not capture everything you want to save. Always check the saved page.
Updating news articles is common practice with web newspapers. Sometimes, when reporting from an ongoing event, they put something along the lines of 'This story will be updated', but often they do not. Often, the 'breaking news' updates the first few paragraphs of the story, but if you read to the bottom, there can be a lot down there that they didn't remember to update - such as 'Due to the car accident, the E6 is closed for all traffic', while the (updated) headline and first few sentences declare 'E6 is now open after having been closed for two hours'.
Sometimes, when revising 'facts', they add a small note indicating that 'A previous version of this story said so-and-so. This has been updated to such-and-such'.
The one newspaper that really p***ed me off was one where I could provoke an update. Officially, they were open to reader comments. If you made a comment that conflicted with the newspaper's views, the article would be 'updated'. Several times I compared the article as it was when I made my comment - with my comment displayed, showing that it had been accepted - with the version marked 'Last updated at ...', and not a single character had changed, except for the 'Last updated' time.
And the list of comments was empty. Each updated version had its own chain of comments. So they could pretend to accept disagreeing comments, while wiping unwanted comments by making a no-changes 'update'. If asked, they could present a technical explanation ('Every revision has its own comment chain - that's just how it is!'). And it was true: the new revision got a new URL, so if I saved the URL of the revision I commented on, I could use it to retrieve the version with my unwanted comment still in the comment chain.
I think it is far more honest to simply declare 'We do not accept comments in disagreement with our views', and either inform the commenter of the reason his comment was rejected, or leave an entry in the comment chain indicating that a comment was censored. Preferably both.
Re: "You cannot rely on information being stable or remaining available." I recently learned the Wayback Machine is useful. It even has copies of my own minor website from many years prior.
If it is "semi-stable", Wayback may be of use. Often it is not. Observe how many snapshots there are of an article: sometimes they come weeks or months apart. Compare the time of the first snapshot with the publication date of the newspaper story: the story may have been through several modifications before the first Wayback snapshot.
There are lots of web pages not available on Wayback. It crawls from one page to another through links, and if none of its already-known pages link to some local web newspaper, Wayback will not discover it. There used to be an entry on Wayback's main page, something like 'Please index this web page!' It may still be somewhere, but I am not sure where (I haven't spent much time looking!)
trønderen wrote: You cannot rely on information being stable or remaining available
This, IMHO, is one of the serious deficiencies of the Internet (and also its strength). Nothing is cast in stone on the Internet; everything is flaky.
In the days of the hardcopy prints from the printing press, creating a second version was a lot of effort. Now, soft copies can be updated in a trice.
"You cannot rely on information being stable or remaining available."
Clap clap - well said, sir. Interestingly, for large purchases from Amazon or anywhere else, I make a screen grab of the deal. I've had several occasions where the terms changed from one day to the next as if by magic. It's well known that Amazon uses the shopping model it has built for you to dynamically change prices. I can easily see articles and whatnot being subtly modified.
Charlie Gilley
"They who can give up essential liberty to obtain a little temporary safety deserve neither liberty nor safety." - BF, 1759
Has never been more appropriate.
trønderen wrote: That is a fundamental property of the Internet: You cannot rely on information being stable or remaining available.
Going beyond my initial question, yes, you're absolutely right.
It's one thing to update information. Broken links make the web that much less useful. Have you ever saved a technical blog post from a few years ago that points back to MSDN documentation? Good luck following those links nowadays.
IMO, if the web had to be redesigned from scratch, versioning would be a thing, and every page (or browsers themselves?) would have a button to bring up the version history, and have the ability to present different versions side-by-side so you could tell at a glance what's changed. Build a git-like infrastructure underneath the whole web, so to speak. Why not?
The Wayback Machine is one thing. But pages easily break as soon as they start relying on scripts, and because of the refresh frequency, many "in-between" versions are missing altogether.
That being said, I have no idea how one would deal with changes when they're made for the sake of security fixes. Static pages are one thing, but when "live" pages with an entire infrastructure behind them have to be kept running... I'm just not seeing any easy solution.
I look at it every morning. Rarely read anything.
Same. I go over the subject lines (which is trivially fast with an RSS reader), delete most of them, and the few articles that seem interesting, I keep around as unread. Then as I have free time, I'll go back to those unread items. I try to keep the list as small as possible.
dandy72 wrote: Anyone know anything about journalism that can shed some light as to what the real motive might be?
I do not know the details of the updates you see, but there is a case for updates made to prevent lawsuits without actually apologizing for some accusation or misinformation in the original 'news' - and that's because most people will not read an item that pops up again after weeks or months, just like you... except lawyers.
"If builders built buildings the way programmers wrote programs, then the first woodpecker that came along would destroy civilization." - Gerald Weinberg
dandy72 wrote: I'm sure I'm reading too much into this, as the topics in those revised articles are generally rather benign.
If they are just correcting minor details, then I would agree. However, journalistic integrity would still require them to note that a change was made from the original.
Of course, journalistic integrity is very difficult to find these days.
Freedom is the freedom to say that two plus two make four. If that is granted, all else follows.
-- 6079 Smith W.
Daniel Pfeffer wrote: However, journalistic integrity would still require them to note that a change was made from the original.
This is exactly my main point. They make changes, but do not make any note of it, of what was changed, or why. I notice it because the article (re-)appears at the bottom of my feed as an unread item.
But someone just looking at a random article would have no idea the content no longer matches the original. So depending on when they read it, different people would recall different facts being stated. I'd expect better from the BBC.
Quote: Yes, RSS feeds are still a thing. I wouldn't have it any other way either
NNTP (Network News Transfer Protocol) is better.
inariy wrote: NNTP (Network News Transfer Protocol) is better.
RSS and NNTP really serve different purposes. I don't see why anyone with an RSS feed would publish the same thing over Usenet.
I still use both every day. There really is no overlap.
Firstly, I am not asking for help on a programming issue here. I'm mostly just trying to see if anyone else here is experiencing, or has in the past experienced, problems using the ClosedXML .NET library to open/read Excel files.
I have a simple process that has worked flawlessly every day for over 2 years, then out of the blue it started failing. Basically, ClosedXML was choking trying to open an Excel (*.xlsx) file. By choking, I mean it threw an IO exception reporting that the file was corrupted. The weird thing is, I can copy that file to my desktop, open it in Excel, save it, copy it back to the server, and it works fine.
I am aware that a new version of OpenXML was released (right around the time my process began failing?) with quite a few breaking changes. Coincidence? ...I don't know yet.
What's new in the Open XML SDK | Microsoft Learn[^]
What I've tried:
0: Go to GitHub and get the latest ClosedXML libs. This required a .NET Framework upgrade to 4.6+. No problem... it compiles; go to open a spreadsheet and it complains about the XMLDocument version. Go back to GitHub, get that version, and try again. It compiles fine; go to open a spreadsheet and now it whines about a netstandard library it can't find (surely an indication of incompatibility). I tried different versions/combinations, but the only way to get it working again was reverting to the original framework and original libraries. Back to square one.
1: Plead with the new IT guy responsible for scheduling that job to please change the format to CSV! (the previous IT guy was on a power trip and refused to change it despite numerous requests)
2: Investigate the idea of simply extracting the sheet1.xml file from the archive and parsing it myself. Then I wondered: if it's that easy, why do so few solutions mention this approach? It's possible that I'm looking at an extremely simple/limited structure (no formulas/formatting/etc.) in this particular file, but it looks feasible. In the event that #1 fails, this will probably be the next path of attack.
3: Install Excel on the customer's server. Ya know, they did give me an admin account so in theory, I can install anything required to get the job done. Also, I have an old Office 2007 disk around here that should still work even if it never gets registered. If 1 and 2 fail, this may be the only option. Not only is it the worst option, but it's also the slowest option. I'd like to avoid this one.
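For what it's worth, option 2 is quite doable for a simple sheet: an .xlsx file is just a zip archive, with the cell data in xl/worksheets/sheet1.xml and most text stored by index in xl/sharedStrings.xml. Here's a rough sketch of the idea in Python (the same approach works in .NET with System.IO.Compression and an XML reader); it's an assumption-laden minimal parser that handles only plain values and shared strings, ignoring formulas, dates, and formatting:

```python
import zipfile
import xml.etree.ElementTree as ET

# The SpreadsheetML namespace used throughout every .xlsx part
NS = "{http://schemas.openxmlformats.org/spreadsheetml/2006/main}"

def read_sheet1(path):
    """Return sheet1 as a list of rows (each a list of cell strings).

    Minimal sketch: resolves shared-string cells (t="s") and plain
    <v> values only; formulas, dates, and styles are ignored.
    """
    with zipfile.ZipFile(path) as z:
        shared = []
        if "xl/sharedStrings.xml" in z.namelist():
            sst = ET.fromstring(z.read("xl/sharedStrings.xml"))
            # Each <si> may hold several <t> runs; join them per entry
            shared = ["".join(t.text or "" for t in si.iter(NS + "t"))
                      for si in sst]
        sheet = ET.fromstring(z.read("xl/worksheets/sheet1.xml"))
        rows = []
        for row in sheet.iter(NS + "row"):
            cells = []
            for c in row.iter(NS + "c"):
                v = c.find(NS + "v")
                text = v.text if v is not None else ""
                if c.get("t") == "s":      # value is a shared-string index
                    text = shared[int(text)]
                cells.append(text)
            rows.append(cells)
        return rows
```

One caveat with this sketch: empty cells are simply absent from the XML, so column positions would have to be recovered from each cell's r="A1"-style reference if the file can contain gaps.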
"Go forth into the source" - Neal Morse
"Hope is contagious"