Hi everyone,

I have a web API which communicates with multiple external services (SOAP, REST, etc.). All requests to the external services are made in parallel (60–70 requests created at a time). I need to know the best practice for logging all of these requests and responses.

I need to know the best method in terms of performance, data organized for searching and filtering, minimum storage, etc.

What I have tried:

I tried SQL Server FILESTREAM, but it caused file-system issues on the server because the logging is done under extremely high traffic.
Posted
Updated 14-Apr-17 15:46pm

1 solution

You mentioned 60–70 requests created at a time; is that per second?
If installed on a powerful enough machine, SQL Server is good enough.

However, we need to think a little 'out of the box': stop worrying about it as a logging problem and start thinking about it as a pure SQL Server efficiency problem.

Store all rows in a heap table: no indexes, no clustering, because it takes extra work to write both to the table and to an index. You just want to minimize SQL Server's write time by using a heap.
... Have this heap use a single, large enough data file... maybe even on a server devoted to it, depending on the actual workload.
The target heap table should have a column of type bit that indicates whether the row has been 'processed' or 'transferred'.
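As a rough sketch of such a heap table (all table and column names here are illustrative, not from the question):

```sql
-- Write-optimized staging heap: no PRIMARY KEY, no clustered index,
-- so inserts only touch the heap pages.
CREATE TABLE dbo.RequestLogStaging
(
    LogId        uniqueidentifier NOT NULL DEFAULT NEWID(),
    LoggedAtUtc  datetime2(3)     NOT NULL DEFAULT SYSUTCDATETIME(),
    ServiceName  varchar(100)     NOT NULL,  -- which external service was called
    RequestBody  nvarchar(max)    NULL,
    ResponseBody nvarchar(max)    NULL,
    Transferred  bit              NOT NULL DEFAULT 0  -- the 'processed/transferred' flag
);
```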

Have a job or process that runs every few seconds, programmed to:
1. update all rows in the heap table to 'transferred'.
2. copy those rows to a more structured table (at another database maybe) that has indexes and all of that; that is the table you want to use for querying and creating reports and filtering.
3. delete the processed rows from the heap.
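The three steps above could be sketched as a single T-SQL batch for the job (again with illustrative names; the reporting database/table is an assumption):

```sql
BEGIN TRANSACTION;

-- 1. Mark all rows currently in the heap as transferred.
UPDATE dbo.RequestLogStaging
SET Transferred = 1
WHERE Transferred = 0;

-- 2. Copy the marked rows into the indexed reporting table
--    (in another database, used for querying and filtering).
INSERT INTO Reporting.dbo.RequestLog (LogId, LoggedAtUtc, ServiceName, RequestBody, ResponseBody)
SELECT LogId, LoggedAtUtc, ServiceName, RequestBody, ResponseBody
FROM dbo.RequestLogStaging
WHERE Transferred = 1;

-- 3. Delete the transferred rows from the heap, keeping it short.
DELETE FROM dbo.RequestLogStaging
WHERE Transferred = 1;

COMMIT TRANSACTION;
```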

Basically, you should keep the heap table short and not run anything else against it (no reports).
Those are just rules of thumb, basic guidelines.


I have not been able to experiment with the following recommendation, but it is worth a look:
Avoid variable-length data types (varchar, nvarchar, varbinary); all rows should be fixed-length and, if possible, sized proportionally to the SQL Server page size (see Understanding Pages and Extents[^]).
The match-to-page-size advice is to avoid fragmentation, which can be a real problem with heaps: SQL Server Heaps, and Their Fragmentation - Simple Talk[^]
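A fixed-length variant of the staging heap might look like this (char sizes are purely illustrative; the idea is that every row occupies the same space, so rows pack predictably into 8 KB pages):

```sql
-- Fixed-length staging heap: char/binary instead of varchar/nvarchar,
-- so every row has the same on-page size.
CREATE TABLE dbo.RequestLogStagingFixed
(
    LogId        uniqueidentifier NOT NULL DEFAULT NEWID(),
    LoggedAtUtc  datetime2(3)     NOT NULL DEFAULT SYSUTCDATETIME(),
    ServiceName  char(100)        NOT NULL,  -- padded to a fixed width
    RequestBody  char(3500)       NULL,      -- sized so rows divide evenly into a page
    ResponseBody char(3500)       NULL,
    Transferred  bit              NOT NULL DEFAULT 0
);
```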

Good luck

This content, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)
