
Programming for Performance Part 2: Understanding Your Target Platform and Audience

How to understand your target platform and audience

In Part 1, I talked about being smart about code reuse. The story I have here about understanding your target platform reaffirms that viewpoint while also venturing into a new realm of concerns. This post is the second in a multi-part series that I will be working on over the next few weeks. I will update it with links to the other posts as they become available.

Built For Speed

Several years back, at the company I was working for, we were building a new B2B web application. This application demanded extremely high performance and had some very complex business rules (it dealt with pharmaceuticals and prescriptions). Up until that point, all of our websites had been written in classic ASP, which is definitely not known for high performance but was still better than the other options available when those sites were built. We researched two different Microsoft-based solutions, ASP.NET and ATL Server, and we ended up deciding on ATL Server for two reasons:

  1. because it ran as unmanaged C++ code, we could get a higher level of performance from the site, and
  2. we could reuse the C++ objects that our object dev team was building for the batch data processing and Windows applications being developed at the same time to manage the backend data for the site.

The Web is a Special Beast

You'd think that with unmanaged C++ code, we'd have a site that just flew, right? Unfortunately, that didn't turn out to be the case. Sure, the site was fast to start out with, but once we started dealing with a lot of data, everything slowed to a crawl. The reason? The business/data objects we were using were built, as I have already mentioned, for batch processes and Windows applications. What's the big difference between those types of applications and a web application? State. Web applications do not maintain object state from one request to the next. Yes, you can serialize objects and use things like viewstate and session state, but doing that with large objects leads to many performance issues of its own. What happened was that these objects were not just loading the data needed for the web view; they were loading all data for the instance, along with all descendant object data. This led to objects that performed literally hundreds of queries just to show a single summary page, and when you went to view one of the child pages, all of that data had to be reloaded for the child.
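To make the query explosion concrete, here is a minimal C# sketch (C# being where the site eventually landed). Everything in it is hypothetical: the Prescription/Refill names, the fake Db class, and the counts are invented for illustration and are not the actual objects from the project. It simply contrasts a batch-style object that eagerly loads its whole graph with a web-specific object that loads only what one page renders.

```csharp
using System;
using System.Collections.Generic;

// Stand-in for the data layer; every call represents one database round trip.
static class Db
{
    public static int QueryCount;
    public static void Execute(string sql) => QueryCount++;
}

class RefillRecord { }
class InteractionRecord { }

// Batch-style object: constructing one prescription eagerly loads every
// descendant collection, whether or not the caller needs it.
class PrescriptionBatch
{
    public List<RefillRecord> Refills = new List<RefillRecord>();
    public List<InteractionRecord> Interactions = new List<InteractionRecord>();

    public PrescriptionBatch(int id)
    {
        Db.Execute("SELECT * FROM Prescription WHERE Id = @id");
        Db.Execute("SELECT * FROM Refill WHERE PrescriptionId = @id");
        Db.Execute("SELECT * FROM Interaction WHERE PrescriptionId = @id");
    }
}

// Web-specific object: one set-based query returns just the columns the
// summary page actually renders.
static class PrescriptionWebView
{
    public static void LoadSummaries(int patientId) =>
        Db.Execute("SELECT Id, DrugName, Status FROM Prescription WHERE PatientId = @p");
}

class Program
{
    static void Main()
    {
        // A summary page listing 100 prescriptions via the batch objects:
        for (int id = 1; id <= 100; id++) { var p = new PrescriptionBatch(id); }
        Console.WriteLine("Batch-style objects: " + Db.QueryCount + " queries"); // 300

        Db.QueryCount = 0;
        PrescriptionWebView.LoadSummaries(42);
        Console.WriteLine("Web summary object:  " + Db.QueryCount + " query");   // 1
    }
}
```

The batch-style object is perfectly reasonable when a long-running process will eventually touch all of that data; on a stateless web request, though, every page load pays the full eager-loading bill all over again.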

When testing revealed the big slowdown we were having, we ran into one of the classic problems that always arises when you have multiple groups of developers, each working on individual components: the blame game. The object development team blamed the web team for not knowing how to write C++ that used the objects correctly in ATL Server. And, of course, the web development team blamed the object team for writing objects that didn't perform well on the web. In reality, neither group was truly to blame, and at the same time, we both were. No one had spent the time to think about the differences between a web application and a batch or Windows application. Everyone just said, "Well, we've got the object team building these business and data objects, so since we don't want to duplicate this complex business logic, we'll just use the same objects." The belief was that this would save us a ton of time and let us finish the project more quickly overall. It did save us time initially, but obviously, that didn't last. Once we were able to convince the powers that be that this solution just wasn't going to work, and we convinced the object team leader that his group would need to create some additional web objects, we had nearly doubled the original estimate for time to completion. Had we thought about the unique factors of web applications in advance, we would have saved a large chunk of time. We ended up with roughly the same amount of code either way, but the time it took to get there hurt the project greatly.

Audience Matters Too

Once we had the site up and running, everything seemed to be going smoothly. That is, until updates needed to be made to the site. When we decided to use ATL Server, we already knew the downside of having to restart IIS to release from memory any DLLs that needed to be replaced. We had a server farm in place, so it was easy enough to take the servers out of the farm one at a time to replace the DLLs. The problem arose with the frequency of updates. The business unit was demanding that changes and fixes go out as soon as they were available, rather than being batched up into a large release. When it was once every few weeks, updating the DLLs in this manner wasn't a huge deal, but when it had to be done nearly every day, we soon ran into pushback from our server admins, who were also our release coordinators (if you could call them that).

This clearly wasn't a long-term solution given the way our business unit functioned, so we went back to the drawing board. We were already in the planning stages of 2.0 for the site, so we decided that the best course of action was to bite the bullet and move to ASP.NET instead. We already had our own separate objects with duplicated business logic anyway, so jumping to C# for everything wasn't too much of a stretch. Also, if you've ever built a web application in C++ and ATL Server, you know that it isn't exactly easy; it's definitely not as easy as ASP.NET is (and was, even back with .NET 1.1). This meant that changes would be much quicker and easier to implement once we had converted.

So, after we were able to convince the very non-technical business sponsor that going back to classic ASP was not an option (a long story for another time), we built 2.0 using ASP.NET. Performance was not quite as good as it had been with ATL Server (although much better than what we had with the hacked-together web business objects), but we were able to update the site much more efficiently, which meant that the business got its changes released more quickly. The moral here is that you need to think about your audience and find a happy medium when it comes to performance. In this case, the business unit was willing to sacrifice some performance for their end users in order to provide them with a steadier stream of updates and fixes. And for our clients, the constant updates were definitely more important to the decision-makers than some small-to-moderate performance gains for their lower-level employees (the ones who would actually be using the web application).

Conclusion

As you should be able to tell by now, had we looked more closely at both the platform (the web) and the audience (our business unit and client management) and their unique constraints at the start of this project, we probably would have arrived at the same final resting place (ASP.NET with custom business objects built just for the web) without doing what essentially amounted to two nearly complete rewrites. Of course, we were all a bunch of college students and first-year grads, so maybe we wouldn't have; but had we spent a little more time researching and held a few meetings at the beginning of the project, the probability of success would have been much greater. This also shows more of the reasons why reusing existing code (as discussed in Part 1) can cause problems with performance. In the third part of this series, I will provide another example showcasing the issues discussed in the first two parts, and expand on them to cover Normalization Gone Mad. Don't miss it; it's sure to be a good time for all.





