You mentioned 60-70 requests created "at a moment". Is that per second?
If installed on a powerful enough machine, SQL Server is good enough.
However, we need to think a little 'out of the box': stop worrying about a logging problem and start thinking about a pure SQL Server efficiency problem.
Store all rows in a heap table: no indexes, no clustering, because otherwise every insert has to write both to the table and to each index. You just want to reduce SQL Server's write time by using a heap.
Have this heap use a single, large enough data file, maybe even on a server devoted to it, depending on the actual workload.
The heap table should have a column of type bit that indicates whether the row has been 'processed' or 'transferred'.
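A minimal sketch of that table, including the dedicated data file; all names (LogDb, LogHeapFG, dbo.RequestLog) and the sizes are just illustrative assumptions, not a prescription:

-- Optional: a dedicated filegroup/data file just for the heap.
ALTER DATABASE LogDb ADD FILEGROUP LogHeapFG;
ALTER DATABASE LogDb ADD FILE
    (NAME = 'LogHeapData', FILENAME = 'D:\Data\LogHeapData.ndf', SIZE = 10GB)
    TO FILEGROUP LogHeapFG;

-- The heap itself: no PRIMARY KEY and no indexes at all, so each
-- insert only writes to the data pages.
CREATE TABLE dbo.RequestLog
(
    RequestId   bigint        NOT NULL,
    CreatedAt   datetime2(3)  NOT NULL,
    Payload     nvarchar(400) NULL,
    Transferred bit           NOT NULL DEFAULT (0)
) ON LogHeapFG;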
Have a Job or process that runs every so many seconds, programmed to (see the sketch after this list):
1. Update all rows in the heap table to 'transferred'.
2. Copy those rows to a more structured table (maybe in another database) that has indexes and all of that; that is the table you want to use for querying, reporting, and filtering.
3. Delete the processed rows from the heap.
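A minimal sketch of that job as one T-SQL batch, reusing the hypothetical names from above (dbo.RequestLog, plus an assumed Reporting database holding the indexed table); the transaction guarantees rows are never deleted without having been copied:

BEGIN TRANSACTION;

-- 1. Mark everything currently sitting in the heap as transferred.
--    Rows that arrive after this statement keep Transferred = 0 and
--    will be picked up by the next run.
UPDATE dbo.RequestLog
SET Transferred = 1
WHERE Transferred = 0;

-- 2. Copy the marked rows into the indexed reporting table.
INSERT INTO Reporting.dbo.RequestLogHistory (RequestId, CreatedAt, Payload)
SELECT RequestId, CreatedAt, Payload
FROM dbo.RequestLog
WHERE Transferred = 1;

-- 3. Remove them from the heap. TABLOCK matters here: deleting from a
--    heap without a table lock may leave the emptied pages allocated.
DELETE FROM dbo.RequestLog WITH (TABLOCK)
WHERE Transferred = 1;

COMMIT TRANSACTION;

Schedule it with a SQL Server Agent job running every few seconds.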
Basically, you should keep the heap table short and not run anything else against it (no reports).
Those are just rules of thumb, basic guidelines.
I have not been able to experiment with the following recommendation, but it is worth a look: avoid variable-length data types (varchar, nvarchar, varbinary); make all rows fixed-length and, if possible, proportional to the size of SQL Server pages (see Understanding Pages and Extents in the SQL Server documentation).
Matching row size to page size is meant to avoid fragmentation, which can be a real problem with heaps; see SQL Server Heaps, and Their Fragmentation on Simple Talk.
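As a hedged illustration of that idea (again with made-up names): char, nchar, and binary columns always occupy their declared length, so every row has the same, predictable size:

-- Fixed-length variant of the heap table (illustrative only).
CREATE TABLE dbo.RequestLogFixed
(
    RequestId   bigint       NOT NULL,  -- 8 bytes
    CreatedAt   datetime2(3) NOT NULL,  -- 7 bytes
    Message     char(1000)   NOT NULL,  -- always 1000 bytes, space-padded
    Transferred bit          NOT NULL DEFAULT (0)
);
-- With only fixed-length columns, rows can be sized so a whole number
-- of them fits in the ~8,060 bytes of row space on each 8 KB page.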
Good luck