|
Richard Andrew x64 wrote: "What would you do with 33% less development time?"
Testing whatever I developed in such a hurry.
|
|
|
|
|
Productivity is not measured in time. If you're focusing on time, you've lost the plot.
|
|
|
|
|
Am I the only one amazed at the corporate incompetence displayed by some websites? I'm thinking not just of the huge number of broken links, missing images and all the rest, but also of completely useless paywalls.
I used to have a (free, trial) subscription to the Daily Telegraph. When it ended and I got presented with a paywall, it took precisely ONE CLICK on my browser to get past it. The DT paywall is entirely dependent on Javascript and with my "One-click Javascript toggle" Chrome plugin, I now have full, unrestricted access to DT content (should I want it).
Exactly the same with the New York Times website. Disabling JS at the Guardian stops all the nags AND the cookie requests. At other sites (even with JS enabled), if there's a pop-up blocking the screen, just use the HTML inspector and delete the hiding DIV (you may also need to remove the "position:fixed" style from the main content div). But it's generally ridiculously insecure.
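The "delete the hiding DIV" trick can be sketched as a heuristic: full-screen nag overlays are typically `position:fixed` with a high z-index. In a real browser console you'd iterate `document.querySelectorAll('div')` and call `el.remove()`; in this sketch plain objects stand in for DOM nodes so the heuristic can be shown outside a browser (the object shape and the z-index threshold are illustrative assumptions, not any site's actual markup):

```javascript
// Heuristic: a full-screen blocking overlay is usually position:fixed
// with a high z-index so it sits on top of the article content.
function looksLikeBlockingOverlay(el) {
  const s = el.style || {};
  return s.position === 'fixed' && Number(s.zIndex || 0) >= 1000;
}

// Returns the elements that would remain after removing suspected overlays.
function stripOverlays(elements) {
  return elements.filter((el) => !looksLikeBlockingOverlay(el));
}

// Example page: one article div, one paywall overlay.
const page = [
  { tag: 'div', id: 'article', style: { position: 'static' } },
  { tag: 'div', id: 'paywall', style: { position: 'fixed', zIndex: '9999' } },
];
const kept = stripOverlays(page); // only the article survives
```

Which is exactly the point about insecurity: the content is already delivered to the client, and the "lock" is a div the client is free to throw away.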
BTW I don't make a habit of reading stuff I'm supposed to pay for. If I come across a paywall on a site I may see if it goes with JS off, maybe read that article, and not return to the site.
If you're a web developer, do you find that your organisation actually checks what you deliver? As in testing links, forms, and security? Or is it entirely down to the IT department?
|
|
|
|
|
Maybe testing and developing such paywalls properly would cost more than doing it right would gain.
Most people don't know how to get past a paywall anyway.
|
|
|
|
|
Some sites, even though they made sure that toggling JS off doesn't circumvent their paywall, still didn't make actually going through the paywall necessary. You just had to do something more, or different, like the DIV-deleting trick or editing the JS.
I'd guess that for some sites this led to plugins aimed at specific paywalls, bringing circumvention to the masses even more than the JS toggle did.
It probably also meant that content sharing, rather than happening via a link that brings exposure, just happened via a screenshot or copy-paste instead.
SEO is maybe a big part too for many sites. If they put all their content behind a wall then it won't get indexed. They want their site found when people go searching for things they have.
|
|
|
|
|
I first read your subject line as "corporate incontinence", but it still works as well.
"Taking the piss" as our friends from the UK and OZ would say.
Software Zen: delete this;
|
|
|
|
|
Somewhat related... does anyone clean up their JavaScript errors?
The reason I ask is that I experienced a disastrous rollout of a line-of-business (LOB) web application. The project had a super-tight deadline, which meant that some of the JavaScript hadn't been thoroughly tested. There's nothing worse than phone call after phone call from managers asking what to do when IE reports a script error.
I remember my first thoughts: 'why are they running IE?' and 'why are they running it with the option turned on to report all errors?'
So I started running IE set to alert on JavaScript errors and cleaned up the application. Trying to browse the web with that setting was pointless... errors everywhere.
"Go forth into the source" - Neal Morse
"Hope is contagious"
|
|
|
|
|
Technical question:
For the Daily Telegraph, do you disable all JS for the site, or do you disable only specific scripts?
The difficult we do right away...
...the impossible takes slightly longer.
|
|
|
|
|
|
DerekT-P wrote: "the huge number of broken links, missing images and all the rest" Not to speak of "future" events that took place years ago.
They probably learned from Wikipedia. I just looked up some information about a highway route: the Wikipedia article stated that work on remodeling the road is planned to start in 2017. (As far as I know, it hasn't started yet.) Give me an hour and I could easily find a hundred pages referring to "future" events more than five years old, or not updated with more-than-five-year-old results from e.g. elections, but there must be many millions of such pages with grossly outdated information in Wikipedia.
I guess that lots of the Wikipedia information was entered by volunteers, enthusiasts who wanted Wikipedia to become a super-great encyclopedia - and they succeeded at the time. But the enthusiasm didn't extend to maintenance.
Few commercial companies develop their own web pages nowadays; they hire a web design company to build the pages, and the pages may be great at the time. But the budget for the following years doesn't cover maintenance. I have told several companies that they need to update their web pages, and they answer: 'The problem is that the web design company is defunct.' Even when they don't give this answer, the reply usually starts with 'We know, but ...'.
Even if the company is in the IT business, it most likely is not developing its own web pages. Today it takes a lot more than just knowing HTML. You can't expect the developer of embedded code, an accounting system or a network protocol implementation to be intimately familiar with web design tools. Besides, you can't expect programmers to know very much about UI and graphical design. (That goes for JavaScript programmers as well, if we are to judge by the results!)
Regarding broken links: doesn't Wikipedia have a crawler that regularly goes through the articles to check the links? How is it that there are still huge numbers of links not flagged as 'broken link'?
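(Wikipedia does in fact run bots, such as InternetArchiveBot, that tag dead links and substitute archive copies, though coverage is uneven.) The core of such a checker is simple; here is a minimal sketch, with the HTTP client injected as `fetchStatus` so the classification logic can run without network access. The function and stub names are illustrative, not any real bot's code:

```javascript
// Minimal link-checker sketch: classify each URL as broken if the request
// fails outright or returns an HTTP error status.
async function findBrokenLinks(urls, fetchStatus) {
  const broken = [];
  for (const url of urls) {
    try {
      const status = await fetchStatus(url); // e.g. an HTTP HEAD request
      if (status >= 400) broken.push(url);   // 404, 410, 500... count as broken
    } catch {
      broken.push(url);                      // DNS failure, timeout, etc.
    }
  }
  return broken;
}

// Stub standing in for a real HTTP client in this example.
const stubStatus = async (url) => {
  if (url.includes('gone')) return 404;
  if (url.includes('down')) throw new Error('ECONNREFUSED');
  return 200;
};
```

The hard part is not detection but what to do next: a soft-404 page returns status 200, and deciding whether to tag, archive-link, or delete still needs a human or much cleverer heuristics.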
For the Wikipedia articles, I know what response to expect from you: well, why don't you update the articles yourself? -- Writing (including updating) an encyclopedia article that is reliable, complete (at the expected level) and up to date requires a writer who is on the inside, working in the field. Updating all the Wikipedia articles I find grossly outdated would require me to learn dozens of new knowledge areas at a professional level. Sorry, that is not possible. Not for me, at least.
|
|
|
|
|
For the most part my web browsing experience is positive. Yes, occasionally it breaks, hits a dead end or something not intended, but given how dynamic the internet and the software that uses it are, that's no surprise.
It's mostly an upside. I am of the age when Netscape on a dial-up line was heaven.
"A little time, a little trouble, your better day"
Badfinger
|
|
|
|
|
jmaida wrote: "as dynamic as the internet and the associated software that use it, no surprise" The WWW is the ultimate memory hole. Once it was thought of as a mechanism for making information available. It has turned out to be just as much a mechanism for making information disappear without a trace.
I guess that I, more than most people, make private copies of essential web information that I expect to be memory-holed. Much too often, I am right. I might need that information to "prove" this or that, but it isn't watertight: I could have manufactured the page locally. As long as the information is still available on the internet, an https link serves as an authentication of the source, but once the information is downloaded, that proof has no value: there is no way to "sign" a web page copy with the https key. Any claim about its source rests solely on my word.
Most web information that I copy is of course "informal", not of legal importance. Yet I am one of those people frequently referring friends and contacts to stuff I have picked up on the internet. I make it a habit to always check whether the URL is still available. Frequently, it isn't, so I have to send them a copy of my copy.
In the early days of video streaming services, even those companies didn't have enough storage capacity to keep every movie available indefinitely. It was regularly reported that 'After Sept. 1st, the following movies will no longer be available: ...', and I nodded: 'Yes, but they are still available on my DVD/BD shelf!' That is one major reason why I always rejected streaming services. (I guess a second reason for removing movies had to do with copyright issues, but even that didn't affect my DVD/BD shelf.) I have made it a habit to read 'Always available' as 'Available until it is no longer available'.
If you are rarely bothered by the web memory hole, it could be accidental: you mainly access sites where information doesn't disappear. Another explanation is that you have grown up with the internet: when I talk with younger people, it is as if they never notice that a link is dead. It is less significant than a mosquito; they go on to the next link as if they never saw the dead one. Sometimes I am that way myself, but when I am really searching for information, it annoys me a lot. (Or it p**ses me off, but that might be an illegal term in the Lounge.)
|
|
|
|
|
I understand and I, too, make snapshots or copies of web info because it can disappear or get lost. Nature of the beast.
"A little time, a little trouble, your better day"
Badfinger
|
|
|
|
|
Re fixing stuff on Wikipedia... yes, it's annoying when stuff is demonstrably wrong or outdated. But no, I certainly wouldn't suggest fix-it-yourself on Wikipedia. I think I know a reasonable amount about a small number of subjects, but I wouldn't risk (a) getting it wrong and (b) doing so very publicly by editing Wikipedia. For one thing, I don't have the time to track down the references for any change I might make.
Re the memory hole... yes, I do see that but maybe not so much as you. I do take copies of stuff that I know is really going to be useful further down the line, but that's more to do with making it easy for me to find. I can save things in a hierarchy or structure that makes sense to me, and in a searchable way, rather than just saving shortcuts (albeit also in a hierarchical structure). Shortcuts break, even if the info is still on the web somewhere.
I volunteer with one charitable organisation that spent a lot of money on a website revamp. At the time I did a full review: broken links, typos, security holes, major performance issues, the lot. I sent it to the guy who was managing the rewrite, who responded "thanks, but our contract with the developers doesn't include maintenance / updates; it is how it is". They have an interface for adding news items, but that's it. They can't even update the "About us" pages. Worse, there's an online shop area which has about 20 items (most with "placeholder" images) out of several hundred in the physical shop. I've pointed this out, but the response has been that the web company charges too much to add an item to the shop pages.
Even in 2023, far too many companies are totally ignorant of IT and even how to purchase external support.
|
|
|
|
|
This thread also reminds me of:
a) the recent one about comments in code, and coders who forget to update them when the code changes, and
b) the long term problem of information rot being faced by ChatGPT etc.
|
|
|
|
|
Disabling a paywall to get free access to something that should be paid for is theft, pure and simple. Just because you can pick a lock and steal something doesn't make it ok. The people providing the content you access are doing so to earn a living.
|
|
|
|
|
Yes, I understand that. My professional curiosity means I often ask "how?" when things are done on a website that I visit, be it a paywall, an animation, a particular layout etc. My observations here are that the people who put those "locks" in place are doing a lousy job.
And BTW, if I were to browse to these sites with JavaScript already disabled, I wouldn't even know there WAS a paywall.
|
|
|
|
|
haughtonomous wrote: "Disabling a paywall to get free access to something that should be paid for is theft, pure and simple. Just because you can pick a lock and steal something doesn't make it ok." The major problem with pay sites is that in most cases, the only way to access the single article you want to read is to pay a significant amount for a subscription. Most likely, that will increase the amount of junk mail in your inbox, and you may have to fight for months or even years to get that subscription cancelled later. There is also a significant risk that your reading habits will be tracked and analyzed.
The major problem, though: in international forums, you may encounter links to hundreds (or thousands, if you browse a lot) of different pay sites. Most of us cannot afford to subscribe to hundreds of web newspapers, or whatever, just to read that one article referred to in some other forum.
If there were a way to pay for access to a single article, or maybe for 8 hours of access, with no strings attached and anonymously - somewhat like buying a single copy at a newspaper stand - I would probably read a lot more of those pay articles. But no one has succeeded in selling such a micropayment system to the web newspapers.
It would not be hard to build one; the big problem is making the news sites accept it. If I were asked for a solution, I would suggest something based on the logic of Kerberos: I go to a ticket office (a TGS, in Kerberos terms) and check out a ticket to a given pay site. The TGS serves a lot of different sites. The ticket I obtain is valid for, say, any one article, or maybe for multiple accesses within an 8-hour period. The site would not need to know anything about me and would need no account to charge. Every month the ticket office reports to the site: I have sold so-and-so many single tickets and so-and-so many 8-hour tickets to your site; here is the payment for them!
The ticket office can invoice me for the tickets I have checked out that month, to any of the sites served by that office, with no knowledge of which articles I read. The ticket office will only know which sites I visited.
I could be anonymous even to the ticket office: in Kerberos, you authenticate yourself to a login server that does not sell tickets, except to the ticket office (a "TGT", Ticket Granting Ticket). So the different ticket offices (there may be several) all report back: the customer who presented TGT #24568 checked out a total of 43 single tickets and nine 8-hour tickets. The login server invoices me for those, not knowing the sites, and then forwards the payment to the ticket offices, which distribute the payment (collected for all customers) to the sites.
The biggest problem is not building such a system, but making the web sites adopt it. They would probably fear that without the subscription tie-in they would lose (a fraction of) their regular customers, downplaying the value of 'casual' customers. I think they are wrong - their sites would be more attractive! Also, they would lose the income from selling tracking data about those casual customers.
I don't spend effort trying to scale paywalls. Rather, I have 'blacklisted' several news sites: I used to read the stuff they make available for free, and see links to pay stories that I would have read if they offered a pay-per-view solution. But they don't, so now I have stopped reading even their free stuff. When someone provides links to any of those sites, I skip over the link. When they can't sell me what I want, I do not buy. Not even the 'first sample is free' stories.
|
|
|
|
|
Corporate incompetence affects all levels of corporations, not just web sites.
And I will put the responsibility for it squarely on the Harvard MBA attitude---"What's this quarter's profits?"
Long-term thinking/planning is anathema to immediate profitability.
If I were God/King/Absolute Benevolent Dictator, no corporation would be allowed to have more than 5% market share, and no engineer would be permitted to design a product until they had spent at least 5 years in the field using the company's products, 5 years in product maintenance and 5 years apprenticing under a senior engineer.
But I'm not God/King/Absolute Benevolent Dictator (at least not yet), so, for now, I will only refuse to hire crappy Harvard-MBA-educated management.
So... would you like to hear how I really feel about Harvard MBAs?
|
|
|
|
|
My bad experiences have been with Stanford MBAs.
|
|
|
|
|
rjmoses wrote: so, for now, I will only refuse to hire crappy Harvard MBA educated management. You can already do more than me...
M.D.V.
If something has a solution... Why do we have to worry about?. If it has no solution... For what reason do we have to worry about?
Help me to understand what I'm saying, and I'll explain it better to you
Rating helpful answers is nice, but saying thanks can be even nicer.
|
|
|
|
|
I don't think it is incompetence playing out here. They just want to have their cookie and eat it too, and they somewhat succeed. What I mean is that they would like the full text to be read by bots and indexed by Google and other search engines, but they also want most interested users to pay to read it. Having it protected by JS (which can be disabled) means that bots can read it in full and index it. Also, most of the interested users will still pay - either because they aren't technical enough, or because they value their time at more than the few dollars saved by playing with JS, or out of decency.
When it comes to the ton of JS errors - that's the effect of cost-cutting. Corporations try to ship faster to stay competitive, and if hardly anyone sees those errors, then they don't matter enough to be fixed.
|
|
|
|
|
There is a technical reason: they have to allow the Google crawler to index the site. 12ft Ladder
|
|
|
|
|
Which is also ridiculously easy to detect.
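Easy to detect, in fact, with the method Google itself documents: reverse-resolve the client IP, check that the hostname ends in googlebot.com or google.com, then forward-resolve that hostname and confirm it maps back to the same IP (a User-Agent string alone can be faked). A sketch, with the resolver injected so the logic runs without live DNS; `isGooglebot` and the stub values are illustrative, a real server would plug in `node:dns/promises`:

```javascript
// Crawler verification: reverse DNS on the client IP, domain check,
// then forward DNS on the name to confirm the round trip.
async function isGooglebot(ip, resolver) {
  const names = await resolver.reverse(ip);
  for (const name of names) {
    if (!/\.(googlebot|google)\.com$/.test(name)) continue;
    const addrs = await resolver.resolve(name);
    if (addrs.includes(ip)) return true; // round trip confirms the claim
  }
  return false;
}

// Stub resolver standing in for node:dns/promises in this example.
const stubResolver = {
  reverse: async (ip) =>
    ip === '66.249.66.1' ? ['crawl-66-249-66-1.googlebot.com'] : ['host.example.net'],
  resolve: async (name) =>
    name === 'crawl-66-249-66-1.googlebot.com' ? ['66.249.66.1'] : ['203.0.113.9'],
};
```

So a site could serve full text only to verified crawlers and a hard server-side cutoff to everyone else; that they don't bother suggests the leaky JS paywall is a choice, not an oversight.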
|
|
|
|
|
Meh, it's a 95%+ solution for them; most people are cut off from the articles.
If they really wanted it secure, they wouldn't load the content; they'd cut it off at the server. Arguably, that takes a bit more work, but it's more of a management headache when you have dozens or hundreds of content contributors.
That said, corporate incompetence abounds. Researching the purchase of expert-user software on sites whose aim is to get you to contact them, without giving you enough information to know whether it's worth the effort. A pizza site that doesn't show you the cost of what you're ordering until you add it to the cart. (If that's an intentional strategy, it is just silly.) Topping that one off, pardon the pun: when we went to pay, we couldn't use a gift card, so we had to cancel the cart and call in the order. What an e-commerce fail.
It seems that many companies never use their own damn site!
|
|
|
|
|