I am learning ASP.NET with C# and SQL Server. I came across a huge fixed-width text file with 1,780 columns. I have to import this file into a SQL Server table. I would appreciate any help with this.
Thanks
Ibrahim Sharief
isharief291@gmail.com
|
Use the SQL Server Import and Export Wizard.
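Another option for a fixed-width file this wide is BULK INSERT with a non-XML bcp format file. This is only a sketch: the table name, file paths and column widths below are made up, and with 1,780 columns you would generate the format file programmatically from your layout spec rather than by hand.

```sql
-- Sketch: load a fixed-width file via a bcp format file.
-- dbo.ImportTarget and the paths are hypothetical.
BULK INSERT dbo.ImportTarget
FROM 'C:\data\bigfile.txt'
WITH (
    FORMATFILE = 'C:\data\bigfile.fmt',  -- non-XML format file describing the fixed-width fields
    TABLOCK                              -- allows a minimally logged load where possible
);
```

The format file gives each field a fixed data length, an empty terminator for all but the last field, and the row terminator on the last one (shown here with three illustrative columns):

```
12.0
3
1   SQLCHAR   0   10   ""       1   Col001   ""
2   SQLCHAR   0   8    ""       2   Col002   ""
3   SQLCHAR   0   12   "\r\n"   3   Col003   ""
```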
|
Besides just importing it, you might want to consider whether you really want a table with that many columns.
|
Hi,
I want to show results something like this:
The first row contains the opening balance, calculated as Sum(Balance) where datefield <= DateByUser.
The following rows show the records between the dates given by the user and evaluate the balance on each row.
The output should look like this:
VchrNo   DrAmnt   CrAmnt   Balance
00       00       00       1570
113      10       00       1580
115      15       00       1585
123      10       00       1595
123      12       00       1607
...
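One way this could be done in T-SQL is with a windowed running sum (SQL Server 2012 and later). This is a sketch only: the table name Vouchers, the assumption that the balance moves by DrAmnt - CrAmnt, and the @DateByUser/@ToDate parameters are all guesses based on the output above.

```sql
-- Opening balance row, then a running balance per voucher.
-- Vouchers, VchrNo, DrAmnt, CrAmnt and datefield are assumed names.
DECLARE @DateByUser date = '2024-01-01', @ToDate date = '2024-01-31';

SELECT '00' AS VchrNo, 0 AS DrAmnt, 0 AS CrAmnt,
       (SELECT SUM(DrAmnt - CrAmnt) FROM Vouchers
        WHERE datefield <= @DateByUser) AS Balance
UNION ALL
SELECT VchrNo, DrAmnt, CrAmnt,
       (SELECT SUM(DrAmnt - CrAmnt) FROM Vouchers
        WHERE datefield <= @DateByUser)
       + SUM(DrAmnt - CrAmnt) OVER (ORDER BY datefield, VchrNo
                                    ROWS UNBOUNDED PRECEDING) AS Balance
FROM Vouchers
WHERE datefield > @DateByUser AND datefield <= @ToDate;
```

Note that a UNION ALL does not guarantee row order; to be safe, add a sort-key column to each branch and an ORDER BY on the outer query so the opening-balance row always comes first.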
|
Thanks for the reply.
The solution you provided helped me a lot and I am about to solve my problem.
|
I'm looking for a method/example/pseudo-code of importing a CSV file into a SQL Server table with several requirements:
1) I do not know the quantity of columns, column header nor data type of each column prior to the program running. Along with the header fields, I have another file which describes the type and size of each column (like integer, decimal or string) and I can already create the destination table based on this input.
2) Because I don't know the format of the input table before running the program (i.e. the input table changes every time), I can't create a class specifically designed to describe the contents of each record from the CSV file.
3) I already have a method working where a DataTable gets fully populated and a SqlBulkCopy instance uses the .WriteToServer() method to insert all of the records. HOWEVER, there's a problem when the source file gets too large (over 500 MB or so) and the result is an OutOfMemoryException.
4) The AdHocGeek has a partial solution which works with strings only but doesn't address integers, decimals, dates, etc.
The problem is that I can't see a way to use an IDataReader into a complex (i.e. many field types) table on the fly. I've added the .ColumnMappings() method to determine the name of the columns with which to map the input columns into the destination SQL Server table. When the application is run however, it gets an error indicating "The given value of type String from the data source cannot be converted to type decimal of the specified target column".
I can't determine why the DataTable works and the DataReader doesn't and can't find any examples to prove otherwise.
Any ideas or examples would be appreciated.
|
Well I would say you are screwed, only because you are trying to transform the data in your load. IMHO transforms are the biggest time waster ever inflicted on the load process. I would split the operation into a load process and then a transform. Caveat: I am not addressing the size issue as I have never had the problem (and 500 MB is very small beer in our environment).
I would do the following:
1) Read in the data file and get the header record.
2) Create a staging table in SQL Server exactly matching the column headers, with every column varchar (or nvarchar if needed).
3) Convert the CSV file to a DataTable (this article [^] may help).
4) Use BulkCopy to load the data (everything is varchar so it WILL load).
5) Use a stored proc to do the transforms from the staging table to the target table. If you have to use a dictionary file then you are going to have to take that into account, and it will be a challenge (probably a crap load of dynamic SQL).
6) Drop the staging table.
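The transform step above could look something like the sketch below; the staging/target table and column names are invented for illustration, and TRY_CONVERT (SQL Server 2012+) returns NULL for unconvertible values instead of raising an error, which makes bad rows easy to find afterwards.

```sql
-- Move typed data from the all-varchar staging table to the target.
-- dbo.Staging, dbo.Target and their columns are hypothetical names.
INSERT INTO dbo.Target (Id, Amount, TradeDate, Comment)
SELECT TRY_CONVERT(int,           s.Id),
       TRY_CONVERT(decimal(18,2), s.Amount),
       TRY_CONVERT(date,          s.TradeDate),
       s.Comment
FROM dbo.Staging AS s;

DROP TABLE dbo.Staging;  -- step 6
```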
Never underestimate the power of human stupidity
RAH
|
That's a very interesting idea that should work. I would need to create a temporary SP based on the fields and data types for the conversion. It also pushes the field validation to the SP but that shouldn't be a problem.
Thanks for the feedback.
|
Allan Watkins wrote: a DataTable gets fully populated
That sounds like a bad idea; there's no need to have more than one record in memory at a time.
Allan Watkins wrote: cannot be converted
Sounds like you may need to specify a Parse routine. Take a look at the values that are causing the trouble -- it's possible that they are empty.
|
Hi,
Why am I getting the above error for this stored procedure?
CREATE DEFINER=`root`@`192.168.1.%` PROCEDURE `sp_open_file`(IN param_file_no int, IN param_clinic_id int, IN param_doctor_id int, IN param_account_category int, IN param_account_id int)
BEGIN
    DECLARE param_patient_account_number BIGINT;
    DECLARE param_price_list_id int;
    DECLARE param_account_number BIGINT;

    SET param_patient_account_number = (SELECT account_number FROM patients WHERE file_no = param_file_no);

    IF (param_account_category = 1) THEN
        SET param_account_number = param_patient_account_number;
    ELSE
        SET param_account_number = (SELECT account_number FROM accounts WHERE account_id = param_account_id);
    END IF;

    SET param_price_list_id = (SELECT price_list_id FROM accounts WHERE account_number = param_account_number);

    INSERT INTO visits (file_no, clinic_id, doctor_id, account_category, account_number, price_list_id)
    VALUES (param_file_no, param_clinic_id, param_doctor_id, param_account_category, param_account_number, param_price_list_id);

    UPDATE patients SET file_is_open = TRUE;
END
Technology News @ www.JassimRahma.com
|
...because one of the three subqueries returns more than one row:
(SELECT account_number FROM patients WHERE file_no = param_file_no);
(SELECT account_number FROM accounts WHERE account_id = param_account_id);
(SELECT price_list_id FROM accounts WHERE account_number = param_account_number);
Meaning that the table holds more than a single record with the requested Id.
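A quick way to see which key values are duplicated is a GROUP BY/HAVING check; the table and column names come from the procedure above, and the same pattern works in both MySQL and SQL Server:

```sql
-- Lists any file_no that appears on more than one patients row;
-- run the same pattern against accounts.account_id and
-- accounts.account_number for the other two subqueries.
SELECT file_no, COUNT(*) AS occurrences
FROM patients
GROUP BY file_no
HAVING COUNT(*) > 1;
```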
|
Quote: Meaning that the table holds more than a single record with the requested Id.
True, this is the main reason for such errors. Several resolutions are available for this error:
1) Use the TOP 1 keyword, i.e.:
(SELECT TOP 1 account_number FROM patients WHERE file_no = param_file_no);
(SELECT TOP 1 account_number FROM accounts WHERE account_id = param_account_id);
(SELECT TOP 1 price_list_id FROM accounts WHERE account_number = param_account_number);
(Note that TOP 1 is T-SQL syntax; in MySQL, as in your procedure, you would use LIMIT 1 instead.)
2) Some of your data may require one or more conditions (filters) in your SQL query.
3) If any duplicate entries are found, please remove them.
Maulik Dusara
Sr. Software Engineer
|
MaulikDusara wrote: True, this is the main reason for such errors. Many resolution are available for this errors.
1) Use Top 1 keyword i.e.
That's assuming that more of those records are allowed in the table; and it being an "Id", this COULD lead to an inconsistent database, making the problem worse.
|
I agree with Eddy; while TOP 1 fixes this problem, it ignores the underlying data problem: the user was expecting one record per ID and is not getting the correct result.
Never underestimate the power of human stupidity
RAH
|
Quote: not getting the correct result.
I also agree, and I also mentioned that Quote: Many resolutions are available for this error.
Please also check my other resolutions.
Maulik Dusara
Software Engineer/Team Leader
|
MaulikDusara wrote: Please also check my other resolutions.
That's the point; he should not "try resolutions", but analyze the bug; if there are multiple records coming back on a key-field selection, chances are that there's an inconsistency. Selecting the "top 1" might hide that problem a little longer.
|
I have drafted a rough design of a new database for a job application web site. I have a quintillion lookup tables with no more than a primary key and a string column, e.g. Province, with Id int and Province nvarchar(50). In this case, Province seems a natural and logical name for the description field, but in others the description is more vague and is better named simply Description.
Now, if I use the name Description for all description columns in the DB, I gain the opportunity to use more generalized T-SQL and C# code to deal with lookup tables, but I lose in semantics, especially where e.g. a WorkCategory lookup table grows to represent a business entity and not just a value domain; then Description would become less meaningful among more than just two columns.
I understand there is no black and white, clear cut answer to this question, but I am only asking for suggestions and input from people who may have faced similar dilemmas before.
|
Do NOT go down the path of Description; do precisely what you are doing: name your fields intelligently, and only when nothing comes to mind use Description. It'll save you a crap load of refactoring in the future.
I use a code generator which reduces the cost of refactoring and I still get irritated when I do this.
Never underestimate the power of human stupidity
RAH
|
Mycroft Holmes wrote: I use a code generator which reduces the cost of refactoring
So do I, namely the T4-based code generated by the EF Wizard or Power Tools, but I'm interested in what causes the mountains of refactoring you are warning me against. Is it, as I suspect, when you need to rename Description to something more specific?
|
Brady Kelly wrote: Is it, as I suspect, when you need to rename Description to something more specific?
You suspect correctly, although I have to admit my problem is caused more by using views and never allowing Select * From ViewName
But I find it immensely irritating to get to the reporting phase of an app only to wish I had used something more descriptive than description. And having to rename them in a query when you have 5 description fields hanging off a Fact table view!
Never underestimate the power of human stupidity
RAH
|
I see. I'm currently on a big renaming exercise, and will try to reduce my Description usage. Luckily that field isn't used in any FKs.
|
I mostly agree with Mycroft; the odds definitely favour what he recommends.
But as he and I seem to work on very different types of systems, I generally never display the column headings, and in many cases (Code, Meaning) will suffice. Yet in many others a broader range of text values is required, so more descriptive names are suitable; one such I have here has (ID, Tag, Description) -- the Tag is used for a Dictionary and the Description is used in a ComboBox. There's another example here that pre-dates me wherein four different text values are required: (ID, Name, Description, LongName, ShortName).
Oh, I just remembered another one that I created a week or two ago (Code, Meaning, Description) -- the Description is intended only as documentation for developers looking at the table; this table contains flags to be used in bitmaps.
What matters more is using names like "Name" and "Description" which are or may be reserved words -- you should probably avoid that, and by being in the habit of using more descriptive names you may never need to worry about that.
Consider each table on its own; don't try to enforce a single rule on all of them, but err on the side of caution. As I hope I showed here, habits are hard to break, so please try not to follow my path.
(Now I'm picturing Jacob Marley...)
|
PIEBALDconsult wrote: Description is intended only as documentation for developers looking at the table
That kind of info I am storing in a separate data dictionary, which has not yet moved beyond an Excel doc, but I am seriously considering writing something to use SQL Server's Extended Properties for data dictionary data.
PIEBALDconsult wrote: What matters more is using names like "Name" and "Description" which are or may be reserved words -- you should probably avoid that, and by being in the habit of using more descriptive names you may never need to worry about that.
Yes, and even names that aren't reserved but are prickly, like having a table called ReferenceType with Id and Type columns. I have avoided all of those things.
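For reference, extended properties can be written with SQL Server's documented sys.sp_addextendedproperty procedure; the schema, table and column names below are just placeholders:

```sql
-- Attach a data-dictionary description to a hypothetical dbo.Province.Id column.
EXEC sys.sp_addextendedproperty
    @name       = N'MS_Description',
    @value      = N'Surrogate key for the Province lookup table.',
    @level0type = N'SCHEMA', @level0name = N'dbo',
    @level1type = N'TABLE',  @level1name = N'Province',
    @level2type = N'COLUMN', @level2name = N'Id';

-- Read it back with the companion function.
SELECT objname, value
FROM fn_listextendedproperty(N'MS_Description', N'SCHEMA', N'dbo',
                             N'TABLE', N'Province', N'COLUMN', N'Id');
```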