Hi,

Currently I am working on a telecom project where I have to process a few hundred CDR files per minute (a CDR is a text file containing the details of a call in comma-separated format; each line represents one call, and most files contain up to 8k lines). We have to process each line (formatting, rating, etc.) and insert those values into a table. We were using C# for the front end and SQL for the back-end processing, and everything was working fine at a few hundred CDR files per minute; but per the current requirement we have to process a few thousand files at a time, and our current process fails terribly.
So, is there any alternative for faster data processing? I have heard about big data (Hadoop), but CDRs are perfectly structured data, which is not the ideal case for big data applications. Please suggest any other alternatives.

What I have tried:

We were inserting the data on a row-by-row basis. I tried to change it to a bulk insert, but it didn't work because the table we are inserting into is also used by other services (such as the reporting service), so we cannot lock the table for more than 100 ms.
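To give an idea, the per-row insert looks roughly like this (simplified; the table and column names are placeholders, not our real schema):

using System.Data.SqlClient;

// Simplified sketch of the current per-row insert; table and column
// names (CdrRecords, Caller, ...) are placeholders.
class CdrInserter
{
    public void InsertCdrLine(SqlConnection conn, string[] fields)
    {
        using (var cmd = new SqlCommand(
            "INSERT INTO CdrRecords (Caller, Callee, DurationSec) " +
            "VALUES (@caller, @callee, @duration)", conn))
        {
            cmd.Parameters.AddWithValue("@caller", fields[0]);
            cmd.Parameters.AddWithValue("@callee", fields[1]);
            cmd.Parameters.AddWithValue("@duration", int.Parse(fields[2]));
            cmd.ExecuteNonQuery(); // one network round trip per CDR line
        }
    }
}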
Comments
Mehdi Gholam 17-May-16 3:01am    
First, measure where your bottleneck is by timing every step.
Patrice T 18-May-16 17:52pm    
You will need to show the code that processes your data.

1 solution

In my opinion, you first have to find out where your program is failing, that is, where the bottleneck is. It could be, for instance, in the CDR format parsing or in the data insertion into the database (or both).
Once you have located the bottleneck, you can try to improve the performance (e.g. by optimizing the parsing algorithm).
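As a starting point, here is a minimal sketch of how the two stages could be timed per file; the parsing below and the InsertRow method are placeholders for your actual processing and insert code:

using System;
using System.Diagnostics;
using System.IO;

class CdrProfiler
{
    // Times parsing and insertion separately for one CDR file.
    public static void ProfileFile(string path)
    {
        var parse = Stopwatch.StartNew();
        string[] lines = File.ReadAllLines(path);
        var records = new string[lines.Length][];
        for (int i = 0; i < lines.Length; i++)
            records[i] = lines[i].Split(',');  // real code also formats/rates
        parse.Stop();

        var insert = Stopwatch.StartNew();
        foreach (var record in records)
            InsertRow(record);                 // placeholder for the DB insert
        insert.Stop();

        Console.WriteLine("{0}: parse {1} ms, insert {2} ms",
            Path.GetFileName(path),
            parse.ElapsedMilliseconds,
            insert.ElapsedMilliseconds);
    }

    static void InsertRow(string[] record)
    {
        // plug the existing per-row insert in here
    }
}

If the insert side turns out to dominate, one common pattern on SQL Server is to SqlBulkCopy the rows into a separate staging table that nothing else reads, and then move them into the shared table in small batches, so that no single statement holds a lock on it for long.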
 
