I'm seeing two errors here.
Firstly, average usage and consumption per item is an aggregation of data. Filtering and aggregation are exactly what the database excels at, so why on earth would you move the whole bunch of data off the database? Do your aggregation in the query.
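A minimal sketch of what I mean, assuming a hypothetical detail table `usage_log(item_id, quantity, used_on)`: let the database do the scan and hand back only the summary rows.

```sql
-- Hypothetical schema: usage_log(item_id, quantity, used_on).
-- The database filters and aggregates; the client receives one row
-- per item instead of the millions of detail rows behind them.
SELECT item_id,
       AVG(quantity) AS avg_usage,
       SUM(quantity) AS total_consumption
FROM   usage_log
WHERE  used_on >= '2015-01-01'   -- filter in the database too
GROUP BY item_id;
```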
We have a database where several tables hold more than 200 million rows, and an aggregate over a pivot of 6 million rows completes in less than 6 seconds. But the result set is only a few KB.
Shuffling data takes linear time, and moving 4.2 GB of data is not a process you can optimize without upgrading your hardware.
Secondly, never use a clustered primary key if you have secondary indexes. I'm not going to explain why, since Markus Winand does it so much better.
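To illustrate the point (table and column names are made up): in SQL Server you can declare the primary key nonclustered, so the secondary indexes your queries actually use don't have to drag the clustered key around as their row locator.

```sql
-- Hypothetical example: declare the PK NONCLUSTERED so the table stays
-- a heap, or so a narrower clustered index can be chosen separately.
CREATE TABLE dbo.StockMovement (
    movement_id BIGINT NOT NULL,
    item_id     INT    NOT NULL,
    quantity    INT    NOT NULL,
    moved_on    DATE   NOT NULL,
    CONSTRAINT PK_StockMovement
        PRIMARY KEY NONCLUSTERED (movement_id)
);

-- Secondary index serving the aggregation queries.
CREATE INDEX IX_StockMovement_item
    ON dbo.StockMovement (item_id, moved_on);
```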
<edit>Also consider using an indexed view; most aggregates can be used in an indexed view. The exceptions I can think of at the moment are MIN() and MAX().</edit>
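A sketch of such an indexed view, against a hypothetical table `dbo.StockMovement(item_id, quantity)`. Note that SQL Server requires SCHEMABINDING and a COUNT_BIG(*) column, and AVG() itself is not allowed either, but SUM() and COUNT_BIG() let you derive the average at read time.

```sql
-- Hypothetical indexed view; SCHEMABINDING and COUNT_BIG(*) are required.
CREATE VIEW dbo.v_ItemConsumption
WITH SCHEMABINDING
AS
SELECT item_id,
       SUM(quantity) AS total_quantity,   -- avg = total_quantity / row_count
       COUNT_BIG(*)  AS row_count
FROM   dbo.StockMovement
GROUP BY item_id;
GO

-- The unique clustered index is what materializes the aggregates.
CREATE UNIQUE CLUSTERED INDEX IX_v_ItemConsumption
    ON dbo.v_ItemConsumption (item_id);
```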