It has been mentioned and covered many times that extracting a huge amount of data from your database takes a lot of time! For 1 million records, 26 seconds is about what you can expect. There are two ways to go: one if you've got a lot of money to spare (and waste... yes, waste), the other is to use your wit and cut this query down.
First method - Go to the nearest (or farthest) hardware store, buy a lot of RAM, a new CPU, or should I say a whole mainframe. Then use it!
Second method - This is the approach I would take. What you do is add a new clause to the query:
TOP 100
so the query would look like this:
SELECT TOP 100 * FROM ...
What would this do? It selects only the first 100 records from the table, making the query faster to execute and return to your application for rendering as a table, or whatever else you want to do with the results (usually display them). One caveat: without an ORDER BY clause, the database doesn't guarantee which 100 rows count as "first", so add one if the order matters.
How will this help? Have you ever watched users read records in a table? Only a few records are shown at a time; I usually show just 20 records to the user at once. No user is going to look at 1 million records in one sitting. You show them the first 100, then if they want more they can request the next 100, and so on and so forth. This is much better: suppose a user has a Pentium 4 computer with 1 GB of RAM; they would leave your website if you sent them that much data to process and render. Send less data, and if they want to read more, send the next batch, and so on. No user is interested in reading every record you have; they want the ones relevant to them, especially their own.
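The request-the-next-100 idea above is just paging. Here is a minimal sketch of it, assuming a SQLite database for illustration; SQLite uses LIMIT/OFFSET instead of SQL Server's TOP, but the principle is identical: fetch one page of rows per request instead of the whole table. The table and column names are made up for the example.

```python
import sqlite3

# Hypothetical in-memory table standing in for the big 1-million-row table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE records (id INTEGER PRIMARY KEY, name TEXT)")
conn.executemany(
    "INSERT INTO records (name) VALUES (?)",
    [(f"user {i}",) for i in range(1000)],
)

PAGE_SIZE = 100

def fetch_page(page):
    """Return one page of rows; the user asks for the next page on demand."""
    # ORDER BY makes the paging deterministic -- without it the database
    # is free to return the rows of each page in any order it likes.
    return conn.execute(
        "SELECT id, name FROM records ORDER BY id LIMIT ? OFFSET ?",
        (PAGE_SIZE, page * PAGE_SIZE),
    ).fetchall()

first_page = fetch_page(0)   # rows with id 1..100
second_page = fetch_page(1)  # rows with id 101..200
```

Each call ships only 100 rows to the client, so even the Pentium 4 user gets a fast page, and the next page costs another small query only if they actually ask for it.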