|
Agree 110%.
Relating it to a film franchise is a pretty good comparison (from a theoretical standpoint at least) - can you say "Hangover III"? Very little coming out of Hollywood these days is original.
|
|
|
|
|
"If you don't have time to do it right, when will you have time to do it over?"
|
|
|
|
|
Right does not mean pretty. If it works, it works. The problem with companies that wrote ugly code that worked is that they did not start planning for upgrades early enough and waited too long to rewrite things. The ugly code that lasted is the code where updates happen once a decade or so. Companies need to assess how long their code is expected to stay running and start refactoring and improving it midway through. Sadly, those that do go belly-up still make a lot of money for their upper management, but customers and employees be damned.
|
|
|
|
|
"Right" includes maintainability.
|
|
|
|
|
I’ve written a lot of DirectX code and I’ve written about DirectX extensively. I’ve even produced online training courses on DirectX. It really isn’t as hard as some developers make it out to be. There’s definitely a learning curve, but once you get over that, it’s not hard to understand how and why DirectX works the way it does. Still, I’ll admit that the DirectX family of APIs could be easier to use. A few nights ago, I decided to remedy this. I stayed up all night and wrote a little header file.... A challenge to all of the “C++ is hard” or “DirectX is hard” arguments.
|
|
|
|
|
Do I need replication? Actually, that question should be replaced with 'Can I afford data loss?'. Naturally the answer is 'Hell no! Losing the data would ruin my business!' So could you afford to lose the data from the last hour, the last 15 minutes, or the last minute? Or do you need a zero-data-loss solution? It's not enough to just enable replication in your database: if you want your data to be completely safe, you must face a tough decision about which replication mode best fits your requirements. The choice isn't easy, but it pays off when your data is still accessible right after a disaster.
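To make that trade-off concrete, here is a toy sketch (in Python, with made-up names; this is not any real database's API) of how the replication mode determines the data-loss window when the primary dies:

```python
def replicate(writes, mode, crash_at, lag=5):
    """Toy model: apply writes on a primary, mirror them to one replica,
    then 'crash' the primary at index crash_at.  Returns how many
    acknowledged writes the replica never received (the data loss)."""
    replica = []
    acked = 0
    for i, w in enumerate(writes):
        if i == crash_at:
            break                        # primary dies here
        if mode == "synchronous":
            replica.append(w)            # ack only after the replica has it
            acked += 1
        else:  # asynchronous
            acked += 1                   # ack immediately, ship later...
            if i >= lag:                 # ...so the replica trails by `lag`
                replica.append(writes[i - lag])
    return acked - len(replica)

# Synchronous: zero loss. Asynchronous: you lose the in-flight window.
print(replicate(list(range(100)), "synchronous", crash_at=50))   # 0
print(replicate(list(range(100)), "asynchronous", crash_at=50))  # 5
```

The `lag` parameter is the stand-in for replication delay: synchronous mode pays for zero loss with write latency on every transaction, while asynchronous mode trades a bounded loss window for speed.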
|
|
|
|
|
Database replication is sold to an unsuspecting public under false pretences.
The threat is that your company will go out of business if your database is lost and so you need to replicate it.
How are you going to run your business if you are in California and an earthquake strikes?
Your employees may have been killed in the earthquake. Or they may be busy trying to locate their loved ones.
I talked to a PG&E (Pacific Gas & Electric) company representative about getting a second feeder line to power our buildings in case an earthquake knocked out the one and only power line.
He said that would be the last of my worries. There was (and is) a 3' gas main 300 feet from my building. If an earthquake centered near it ruptured the pipeline, the resulting fireball would suck out all the oxygen within 500 feet.
Think about that before you talk about database replication.
|
|
|
|
|
Vivic wrote: How are you going to run your business if you are in California and an earthquake strikes?
The same way that the company I work for does (even though Oxfordshire doesn't suffer from earthquakes).
We have an off-site disaster recovery location (in a different town) that we hire, and we can have the business back up and running in 2 days using off-site backups, replication, etc.
Every day, thousands of innocent plants are killed by vegetarians.
Help end the violence EAT BACON
|
|
|
|
|
We backed up our data on a second computer that did nothing but backups. It would be what the article's author calls semi-synchronous, but considering that there were no users on the replication machine and the connection between the two computers was 100 Mb/sec, I would say that it was pretty much synchronous.
We backed up the data daily onto DAT tapes... one tape could hold the entire database. The DAT tape was stored offsite. We had daily backups for 30/31 days, month-end tapes, and year-end tapes.
We didn't sign up for an off-site disaster recovery location for the simple reason that we could get a trailer with the computer (the size of a two-drawer file cabinet) and half a dozen terminals on our site, or any other location, within your two-day period.
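The rotation described above (daily tapes kept for 30/31 days, plus month-end and year-end tapes) is essentially a grandfather-father-son scheme. A minimal sketch, assuming the last backup of each month and year is the one promoted:

```python
from datetime import date, timedelta

def tape_label(d: date) -> str:
    """Classify a backup date: 'year-end' for Dec 31, 'month-end' for the
    last day of any other month, 'daily' otherwise (kept ~30/31 days)."""
    nxt = d + timedelta(days=1)
    if nxt.year != d.year:
        return "year-end"
    if nxt.month != d.month:
        return "month-end"
    return "daily"

print(tape_label(date(2013, 6, 5)))    # daily
print(tape_label(date(2013, 6, 30)))   # month-end
print(tape_label(date(2013, 12, 31)))  # year-end
```

Only the daily tapes get overwritten on the next cycle; the month-end and year-end tapes are retired to off-site storage, which matches the retention periods the post describes.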
It still doesn't answer the question: how do you run the business if the employees don't/can't show up?
|
|
|
|
|
We've got 50 employees and growing, so our parent company insisted that we prepare for such events.
Every day, thousands of innocent plants are killed by vegetarians.
Help end the violence EAT BACON
|
|
|
|
|
We were a 1500-employee $900 million annual sales company.
Our auditors looked at our backup situation and couldn't fault it.
|
|
|
|
|
Network partitions are a contentious subject. Some claim that modern networks are reliable and that we are too concerned with designing for theoretical failure modes. They often accept that single-node failures are common but argue that we can reliably detect and handle them. Conversely, others subscribe to Peter Deutsch’s Fallacies of Distributed Computing and disagree. They attest that partitions do occur in their systems, and that, as James Hamilton of Amazon Web Services neatly summarizes, “network partitions should be rare but net gear continues to cause more issues than it should.” The answer to this debate radically affects the design of distributed databases, queues, and applications. So who’s right? Much of what we know about real-world distributed systems is founded on guesswork and rumor.
|
|
|
|
|
Just when you thought SQL Server couldn’t get better, Microsoft is announcing the features for SQL Server 2014. They haven’t announced the licensing/pricing, but I’ll tell you what I do know so far.... There are very real improvements in here for everybody. If you’re a DBA on a multi-terabyte database, you’re going to love the SSD buffer pool extensions and the granular index rebuilds. If you’re BI-curious, you’re going to be experimenting with the clustered column store indexes. If you’re a software-as-a-service vendor with lots of clients, you’re going to love failover cluster support for CSVs and query performance improvements. And if you’re a developer who works with a SQL Server back end, you’ve got all kinds of new tricks to scale. SSD array caches? That sounds pretty cool... and it's just the start.
|
|
|
|
|
Just a pity the pricing model has put it so far out of reach.
MySQL, RavenDB, MongoDB, CouchDB - so many cheaper or free options that scale.
I absolutely love the tools that SQL brings, but when all's said and done I just want my DB to work and be reliable.
cheers,
Chris Maunder
The Code Project | Co-founder
Microsoft C++ MVP
|
|
|
|
|
I’m thrilled to share that our next major release, Visual Studio 2013, will be available later this year, with a preview build publicly available at Build 2013 in San Francisco at the end of the month. In his keynote demo and follow-on foundational session today at TechEd, Brian Harry highlighted some of the new ALM capabilities coming in this release and in the cloud, including new features focused on business agility, quality enablement, and DevOps. Here are a few of my favorites... What's in it? CaMeLcAsE mEnUs? A pastel color palette? Live tiles? Read on to find out.
|
|
|
|
|
Let's see:
Here are a few of my favorites:
Agile portfolio management, which enables you to plan your agile projects “at scale” by showing the hierarchical relationship between work being done in multiple teams across your organization.
Couldn't care less. Ultimately, Agile isn't about charts and reports; it's about human beings communicating with human beings.
Cloud-based load testing, a new capability of Team Foundation Service that takes advantage of the elastic scalability of Windows Azure to generate traffic, simulating thousands of simultaneous virtual users so as to help you understand how your web applications and services operate under load.
Couldn't care less. Don't use Azure, probably never will. Why would I, when it's easy enough to host my own server?
Code information indicators that provide information about unit tests, work items, code references, and more, all directly within the code editor in Visual Studio, increasing developer productivity by enabling project-related contextual information to be viewed and consumed without leaving the editor.
And we have a new buzzword bingo winner! And with it, promises of enabling, productivity increasing, improved consumption... and visual clutter!
A team room integrated into TFS, improving the collaboration amongst team members via a real-time and persistent chat room that integrates with data and interactions elsewhere in TFS.
Yet another way to reduce the productivity gains of the "enabling project-related contextual information" by constant interruption along the lines of "hey, what did you think of the latest Star Trek movie?" chat-room chatter. What, Mr. S, do you think will be the first feature to be disabled in VS2013!!! That is, if it even can be!
Identity integrated into Visual Studio, such that the IDE is connected to backend services that support, for example, roaming the developer’s settings as the developer moves from installation to installation.
Great idea, except that I never have need to roam. What with screen sharing tools, etc., it's completely unnecessary for a developer to move from installation to installation. Dammit Mr. S, I'm a developer, not an on-call site repairman!
Support in TFS for integrated code comments that facilitate code reviews with increased transparency and traceability.
Hellllooooo....wake up callllll....programmers don't comment! And if they comment, it's usually not the stuff that I want to review with my fellow devs anyways - in fact, I'm more interested in code reviews of the stuff that isn't commented! Effective code reviews have more to do with information sharing, approaches to problems, and architecture, than they do with "// opening async channel..."
A .NET memory dump analyzer, which enables developers to easily explore .NET objects in a memory dump and to compare two memory dumps in pursuit of finding and fixing memory leaks.
What??? .NET has memory leaks??? Wasn't that supposed to have been solved??? And, OMG, I'm back into the days of 6502 assembly language programming, poring over memory dumps. w-T-f.
Git support built into Visual Studio 2013, both on the client and on the server, including in the on-premises Team Foundation Server 2013.
Oh God. The world would be a better place without Git. Git away from me, you, you, thing!
Marc
|
|
|
|
|
No Marc
Now, I want you to tell us what you _really_ think
Bryce
MCAD
---
|
|
|
|
|
bryce wrote: Now, I want you to tell us what you _really_ think
Bah Humbug!
Marc
|
|
|
|
|
I like Git. So Git out of town!
Gryphons Are Awesome! Gryphons Are Awesome!
|
|
|
|
|
Once again you said it so I don't have to.
Thanks Marc.
"The secret of happiness is freedom, and the secret of freedom, courage."
Thucydides (B.C. 460-400)
|
|
|
|
|
Matthew Faithfull wrote: Once again you said it so I don't have to.
Does that mean I'm the voice of all curmudgeonous CPians?
Marc
|
|
|
|
|
I wouldn't go that far. I don't think they're measuring curmudgeonousness on the Clifton scale (logarithmic at weekends, exponential on weekdays, hyperbolic on bank holidays) just yet, but perhaps the Wikipedia update just hasn't passed the censors yet.
"The secret of happiness is freedom, and the secret of freedom, courage."
Thucydides (B.C. 460-400)
|
|
|
|
|
Matthew Faithfull wrote: the Clifton scale (logarithmic at weekends, exponential on weekdays, hyperbolic on bank holidays)
I'll have to remember that one!
Marc
|
|
|
|
|
Not sure I know where you stand??? Please, tell us how you really feel!!
If your actions inspire others to dream more, learn more, do more and become more, you are a leader. - John Q. Adams
You must accept one of two basic premises: Either we are alone in the universe, or we are not alone in the universe. And either way, the implications are staggering. - Wernher von Braun
Only two things are infinite, the universe and human stupidity, and I'm not sure about the former. - Albert Einstein
|
|
|
|
|
Quote: A .NET memory dump analyzer, which enables developers to easily explore .NET objects in a memory dump and to compare two memory dumps in pursuit of finding and fixing memory leaks.
What??? .NET has memory leaks??? Wasn't that supposed to have been solved??? And, OMG, I'm back into the days of 6502 assembly language programming, poring over memory dumps. w-T-f.
HAHAHAHA!
Nailed it.
|
|
|
|
|