|
You cannot update through a DataReader. It retrieves a read-only, forward-only stream of data, one record at a time.
If I read your question correctly, you are doing some processing on each record after reading, then need to set a flag in the database to indicate that record has been processed.
You will need a second connection and command object for this. You can't use the same connection/command as the DataReader, because it keeps its connection open internally throughout its lifetime.
The best way to accomplish this will vary slightly depending on how many records you expect to process and whether the processing will run a long time (cumulative).
For a relatively short run:
1. Open a SqlConnection for updating and attach it to a SqlCommand
2. Read the next record from the DataReader
3. Perform the needed processing
4. Generate the update SQL for the current record
5. Use the SqlCommand to run the update SQL
6. Repeat steps 2 - 5 until the last record has been processed and updated
7. Close and dispose of the update SqlConnection/SqlCommand
For a long run:
1. Create a SqlConnection for updating and attach it to a SqlCommand
2. Read the next record from the DataReader
3. Perform the needed processing
4. Generate the update SQL for the current record
5. Open the SqlConnection for the update
6. Use the SqlCommand to run the update SQL
7. Close the SqlConnection
8. Repeat steps 2 - 7 until the last record has been processed and updated
9. Close and dispose of the update SqlConnection/SqlCommand
Hope this answers your question and helps out.
Kevin Rucker, Application Programmer
QSS Group, Inc.
United States Coast Guard OSC
Kevin.D.Rucker@uscg.mil
"Programming is an art form that fights back." -- Chad Hower
|
|
|
|
|
Spectre_001 wrote: You can not update through a DataReader
No, but you can through ExecuteReader.
In your scenarios... generate a parameterized update command up front, then set the parameters for each row and execute it -- no need to keep generating the same SQL statement.
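For example, the per-record update can be a single parameterized statement (the table and column names here are made up for illustration):

```sql
-- Hypothetical schema: mark one record as processed.
-- In ADO.NET you add @RecordId to SqlCommand.Parameters once,
-- then just change its Value and call ExecuteNonQuery for each row.
UPDATE MyTable
SET Processed = 1
WHERE RecordId = @RecordId;
```

This also lets SQL Server reuse the cached plan instead of compiling a fresh statement per row.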
|
|
|
|
|
That doesn't help if he needs to read, process, then update.
Kevin Rucker, Application Programmer
QSS Group, Inc.
United States Coast Guard OSC
Kevin.D.Rucker@uscg.mil
"Programming is an art form that fights back." -- Chad Hower
|
|
|
|
|
Short answer: you can't. It is a data READER, not an updater. It's not like opening a Recordset in VB6.
You need to execute appropriate SQL to update the records (i.e. UPDATE table SET field=value WHERE key=id).
To do it the way you want, it would be better to use a DataSet or something a little more robust; there's plenty of info on MSDN.
|
|
|
|
|
Table-A has four columns. I want to select values from Table-A with the values from the Event column as column names, as shown in Table-B. I need a stored procedure for this, if it's possible.
TABLE-A
ID.No  Name   Event       Amount_Paid
1      Terry  ReliefFund  100
1      Terry  Donation    200
2      John   Festival    100
2      John   Donation    300
TABLE-B
ID.No  Name   ReliefFund  Donation  Festival
1      Terry  100         200       0
2      John   0           300       100
Please help..., thanks
modified on Saturday, December 11, 2010 12:45 AM
|
|
|
|
|
What query have you developed for this?
What error does it show?
With Regards,
Kasson
|
|
|
|
|
|
SELECT * FROM mytable AS A
PIVOT (
SUM(Amount_Paid)
FOR Event IN ([ReliefFund], [Donation], [Festival])
) AS M
Note that PIVOT returns NULL where no matching row exists; wrap the pivoted columns in ISNULL(..., 0) if you need zeros as shown in Table-B.
|
|
|
|
|
This article[^] may be of use to you
Never underestimate the power of human stupidity
RAH
|
|
|
|
|
This Link[^] may help you.
|
|
|
|
|
I have an SQL query which gives the following output:
OutputFileid
-----------
4
5
6
I want to get this output in a horizontal fashion, like 4,5,6, so that I can pass it to another query as an IN parameter.
I was getting that, but the second query returned an error saying "Conversion failed when converting the varchar value '4,5,6' to data type int".
Following is the full stored procedure which I am using to get the desired result.
ALTER procedure [dbo].[IsBCastRefExistsforProcess]
(@processId int)
as
begin
DECLARE @FILEIDs varchar(100)
select @FILEIDs = COALESCE(@FILEIDs + ',', '') + convert(varchar, outputfileid) from OutputFiles where ProcessId = @processId
PRINT @FILEIDs
if (LEN(@FILEIDs) > 0)
begin
select eg.EJVColumnId,eg.EJVColumnName
from EJVGedaColumns eg,
GedaColumnMapping gc
where eg.EJVColumnId = gc.EJVColumnId
and eg.EJVColumnName like '%BCAST_REF%'
and gc.OutputFileid in (@FILEIDs)
end
end
When I tried to execute exec IsBCastRefExistsforProcess 1234, it gave the above error. Is there any way I can achieve this?
select @FILEIDs = COALESCE(@FILEIDs + ',', '') + convert(varchar, outputfileid) from OutputFiles where ProcessId = @processId
This gives me the values in horizontal fashion.
I tried giving 'and gc.OutputFileid in (Convert(int,@FILEIDs))' - but ended up with the same error.
Please help.
Thanks
Success is the good fortune that comes from aspiration, desperation, perspiration and inspiration.
|
|
|
|
|
I fixed it!
I put the same query in dynamic SQL and it worked, like this:
SET @SQL = 'select eg.EJVColumnId,eg.EJVColumnName,gc.OutputFileid from EJVGedaColumns eg,GedaColumnMapping gc
where eg.EJVColumnId = gc.EJVColumnId and eg.EJVColumnName like ''%BCAST_REF%''
and gc.OutputFileid in ('+@FILEIDs+')'
EXEC (@SQL)
Hope this will help somebody.
Thanks
Success is the good fortune that comes from aspiration, desperation, perspiration and inspiration.
|
|
|
|
|
meeram395 wrote: EXEC (@SQL)
Other than EXEC, you can also use the sp_executesql procedure, which gives you a way to pass parameters with strong datatypes and avoid datatype mismatch conflicts.
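A sketch of that approach applied to the earlier query. Note the ID list itself still has to be concatenated into the string, since an IN list can't be passed as a single parameter, but values such as the LIKE pattern can be typed parameters:

```sql
DECLARE @SQL nvarchar(max);
SET @SQL = N'select eg.EJVColumnId, eg.EJVColumnName
from EJVGedaColumns eg
join GedaColumnMapping gc on eg.EJVColumnId = gc.EJVColumnId
where eg.EJVColumnName like @pattern
and gc.OutputFileid in (' + @FILEIDs + N')';
-- The pattern travels as a real parameter, not as quoted text in the string
EXEC sp_executesql @SQL, N'@pattern varchar(50)', @pattern = '%BCAST_REF%';
```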
|
|
|
|
|
Thanks very much Hiren, for the help. I will try that.
Success is the good fortune that comes from aspiration, desperation, perspiration and inspiration.
|
|
|
|
|
As an alternative: you could create a function that returns[^] a table (with those ints). You could even pass these as parameters[^]
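A minimal sketch of such a split function (the name dbo.SplitInts and the loop-based approach are just one way to do it):

```sql
CREATE FUNCTION dbo.SplitInts (@list varchar(max))
RETURNS @ids TABLE (id int)
AS
BEGIN
    DECLARE @pos int;
    WHILE LEN(@list) > 0
    BEGIN
        SET @pos = CHARINDEX(',', @list);
        IF @pos = 0
        BEGIN
            -- Last (or only) value in the list
            INSERT INTO @ids VALUES (CAST(@list AS int));
            SET @list = '';
        END
        ELSE
        BEGIN
            INSERT INTO @ids VALUES (CAST(LEFT(@list, @pos - 1) AS int));
            SET @list = SUBSTRING(@list, @pos + 1, LEN(@list));
        END
    END
    RETURN;
END
```

The original query could then read `and gc.OutputFileid in (SELECT id FROM dbo.SplitInts(@FILEIDs))` with no dynamic SQL at all.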
I are Troll
|
|
|
|
|
|
Thanks
|
|
|
|
|
Thank you very much Eddy. This is too good.
Success is the good fortune that comes from aspiration, desperation, perspiration and inspiration.
|
|
|
|
|
This is one of the times that I'm sad that I work in Oracle.
There's no such thing in Oracle, as far as I know.
"When did ignorance become a point of view" - Dilbert
|
|
|
|
|
Sample join table query in MySQL.
Please help
|
|
|
|
|
Good subject lines give good Google results.
|
|
|
|
|
5
Had it been written the way Yoda speaks, I would have given it a 6.
"When did ignorance become a point of view" - Dilbert
|
|
|
|
|
Whatever it takes to get people to try Google as a first step when in need.
|
|
|
|
|
Well said.
With Regards,
Kasson
|
|
|
|
|
Overview... The client has an inventory DB in SQL Server with over 2 million records. The inventory DB is constantly updated by a .NET 3.5 WinForms app that queries another third-party DB via web services; the inventory is essentially in constant update. The client is looking for faster performance, as it's taking days to cycle through the whole table, and the inventoried items will continue to grow.
I have no control over the structure of the SQL table, as it's tied to other applications, and I have no control over the third-party web service that I'm getting the updated info from. I only have control over the WinForms app that communicates with the two. It simply queries the web service for inventory data, does some work with the data, and updates the results into the SQL DB. But it has to run down through the entire inventory table.
I've inherited this app, so most of the work I've done has been specific functional tweaks and code cleanup, but now the client is looking for more speed. Throwing five figures' worth of hardware horsepower at it, plus my tweaks, isn't giving him the results he was expecting. While his old hardware was inadequate for what he was doing, I explained that throwing muscle at it doesn't really fix slow software.
Sending UPDATE queries one record at a time, whether via the .NET app or even by calling a stored procedure, just seems inefficient when you're dealing with a dataset of this size. And when it takes days to update, it runs 24/7, which causes other issues.
My thought is to do the work with the data via a DataTable and then do one big update to SQL. I'm not talking about loading all the records; currently the app loops through chunks of 500-1000 records at a time. There has to be something way better than a one-by-one record update wrapped in a big For loop.
Advice, Comments, Opinions?
I've done similar work with DataTables, but that was for manipulating CSV files and on a much smaller scale (< 1000 rows). It seems logical that the principles would be the same, and faster than calling SQL directly to update one record at a time.
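For reference, the set-based pattern I have in mind (table and column names here are invented) would bulk-load each processed batch into a staging table, then apply a single UPDATE:

```sql
-- Staging table filled in one shot per batch
-- (e.g. with SqlBulkCopy from the app's DataTable)
CREATE TABLE #Staging (ItemId int PRIMARY KEY, NewQty int, NewPrice money);

-- One set-based update per batch instead of 500-1000 single-row statements
UPDATE inv
SET inv.Qty   = s.NewQty,
    inv.Price = s.NewPrice
FROM Inventory AS inv
JOIN #Staging AS s ON s.ItemId = inv.ItemId;
```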
Thanks In Advance!
|
|
|
|