Okay,
So performance suffers because you fetch so many records (not a real-world situation, but still a good test case).
Anyway, since the scalar subquery is correlated, it's executed once for every row in the result (in this case 10080 times). Limiting the result set would therefore greatly improve performance.
Another thing: try adding a new index on column SerialNumber (or SerialNumber, ID). Leave the existing ID index as it is. I would guess you'll see quite different performance.
Also, you could add an index on Details.SN if it isn't already indexed.
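For reference, the suggested indexes might be created like this (the index names are made up here; table and column names come from the thread):

```sql
-- Composite index so the correlated subquery can seek on
-- SerialNumber and read ID from the same index
CREATE INDEX IX_Results_SerialNumber_ID
    ON Results (SerialNumber, ID);

-- Support the join from Details to Results
CREATE INDEX IX_Details_SN
    ON Details (SN);
```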
Mika
The need to optimize rises from a bad design.
My articles[ ^]
Thanks again, Mika, for your help.
For your information, this is the solution, suggested by someone at another site, that worked:
SELECT Details.*, r1.VE
FROM Details LEFT JOIN
    [SELECT r1.* FROM Results r1
     INNER JOIN (SELECT SerialNumber, Max(ID) AS maxID
                 FROM Results
                 GROUP BY SerialNumber) AS r2
     ON r1.SerialNumber = r2.SerialNumber AND r1.ID = r2.maxID]. AS r1
ON (Details.SN = r1.SerialNumber);
The actual query I'm using is a bit more complex than this, as it involves several more conditions in the LEFT JOIN and some more or less simple WHERE clauses to limit the information returned from the Details table, but the basic idea of this query does solve the problem I described.
Thanks again.
I've got a stored procedure running across 2 servers which either inserts or updates multiple rows of data into a series of tables from a single source table. I've run into a problem where, if the source table doesn't contain a record for a given ID, MS DTC throws an exception and bombs out of the transaction, e.g.

UPDATE p
SET p.Column1 = pm.Column1
FROM Property p
INNER JOIN Address a
    ON p.PropertyRef = a.PropertyRef
INNER JOIN remotedb.PropertyMaster pm
    ON pm.PropertyRef = p.PropertyRef

errors with 'The Microsoft Distributed Transaction Coordinator (MS DTC) has cancelled the distributed transaction.'. Has anyone encountered this - and ideally, does anyone know of a solution / workaround?
TIA
Dave
It definitely isn't definatley
modified on Monday, October 13, 2008 12:02 PM
A few issues:
- In the statement you have a reference to src, but it's not defined. Typo?
- I know that the source cannot have more than one matching row if UPDATE is used like this (or at least the results may be incorrect), but could it also be that there must be exactly one matching source row? Then again, I'm just guessing.
The need to optimize rises from a bad design.
My articles[ ^]
Yeah, the src part was a typo (now updated in the original question).
I think you're probably right on your second point; it appears that DTC is expecting exactly one record set, but since it gets none, it just refuses to work. Frustrating!
It definitely isn't definatley
modified on Monday, October 13, 2008 1:23 PM
I made a simple test case and ran it in both SQL Server 2008 and 2005 without DTC and had no problems. The test was the following:

CREATE TABLE Test1 (
    Column1 varchar(50)
);

INSERT INTO Test1 (Column1)
VALUES ('ABC');

CREATE TABLE Test2 (
    Column2 varchar(50)
);

UPDATE Test1
SET Test1.Column1 = Test2.Column2
FROM Test2;

UPDATE Test1
SET Test1.Column1 = a.Column2
FROM Test2 a
INNER JOIN Test2 b
    ON a.Column2 = b.Column2;

In all cases I received 0 rows updated, as expected. I was curious since typically, if DTC makes a stop, it's because an error has occurred in some node and it simply rolls back all pending transactions. So I was thinking that there may be another error behind the one you received, but I had no confirmation of that.
One thing you could test is to modify your statement like this:
UPDATE p
SET p.Column1 = (SELECT pm.Column1
                 FROM remotedb.PropertyMaster pm
                 WHERE pm.PropertyRef = p.PropertyRef)
FROM Property p
INNER JOIN Address a
    ON p.PropertyRef = a.PropertyRef
Perhaps there's a difference in the behaviour, or at least you'll get some other info.
The need to optimize rises from a bad design.
My articles[ ^]
Cheers Mika,
I actually tried something similar on a couple of different machines and only found the problem on the original server, so it's almost certainly down to a configuration issue (even though my client's DBA team reckon it's not). I've sent them a copy of DTCPing to do some diagnostics, so hopefully that'll find the issue. Failing all else, I'll get them to reinstall MSDTC and hopefully that'll mend everything.
Thanks for your help
Dave
It definitely isn't definatley
You're welcome.
If you find out it's a configuration issue and not a broken install, I'd be really glad if you could share the info.
I'm having a hard time understanding what configuration option would modify the behaviour so dramatically, since DTC is just a controller and doesn't have to understand what actually happens under the hood. It just needs to know whether everything went as expected.
Hopefully you'll find the solution one way or another without too much pain.
Mika
The need to optimize rises from a bad design.
My articles[ ^]
Despite protests from my client's DBA team, the server was upgraded to SP2 (previously SP1) and a hotfix was applied http://support.microsoft.com/kb/937517/en-us[^] - apparently there's a known problem which boils down to an incorrect attention signal being passed to the linked server which forces the linked server to rollback the transaction.
*feels vindicated
It definitely isn't definatley
Great!
Hopefully that solves the problem. Based on the hotfix info, it could be exactly what you were experiencing.
It's also confirmed that this wasn't intended behaviour, as we expected.
The need to optimize rises from a bad design.
My articles[ ^]
Hi,
How can I create a transaction so that whenever I update a row of a table, no other user can change or access it?
It's like a transaction on an account: only one user can operate on an account at a time.
I tried the following SQL statement: LOCK [ TABLE ] name IN lock_mode
but it gives the error "incorrect syntax near table".
Should I create a stored procedure and then use SET TRANSACTION ISOLATION LEVEL SERIALIZABLE after creating a transaction?
Please help.
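For comparison, here is a minimal T-SQL sketch of locking a single account row for the duration of a transaction (table and column names are hypothetical; note that LOCK TABLE is not SQL Server syntax, which is why it raised a syntax error):

```sql
BEGIN TRANSACTION;

-- UPDLOCK takes an update lock so no other transaction can modify
-- the row until we commit; ROWLOCK keeps the lock at row granularity.
-- Other sessions reading with UPDLOCK will block until COMMIT.
SELECT Balance
FROM Accounts WITH (UPDLOCK, ROWLOCK)
WHERE AccountID = 42;

UPDATE Accounts
SET Balance = Balance - 100
WHERE AccountID = 42;

COMMIT TRANSACTION;
```

This blocks other writers (and other UPDLOCK readers) on that one row only; plain readers are still served according to the session's isolation level.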
Ashfield wrote: This is the 3rd time you have asked the question, the answer is NOT going to change!!!!!
"The clue train passed his station without stopping." - John Simmons / outlaw programmer
"Real programmers just throw a bunch of 1s and 0s at the computer to see what sticks" - Pete O'Hanlon
"Not only do you continue to babble nonsense, you can't even correctly remember the nonsense you babbled just minutes ago." - Rob Graham
Hi,
I'm using a MySQL database.
In my table adapter, I have 2 queries.
Query(1)
INSERT INTO comment
(commentID, reviewID, commentStatus, submitterID, commentDate, lastUpadatedDate)
VALUES (?commentID, ?reviewID, ?commentStatus, ?submitterID, ?commentDate, ?lastUpadatedDate)
Query(2)
INSERT INTO commentdetail
(commentDetailID, commentID, commentType, locator1, locator2, comment, remark)
VALUES (?commentDetailID, ?commentID, ?commentType, ?locator1, ?locator2, ?comment, ?remark)
I can add Query(1), but every time I try to add Query(2), it displays the following error:
The query builder failed.
Attempted to read or write protected memory. This is often an indication that other memory is corrupt.
I need this urgently.
Does anyone know how to solve this error, please?
Thanks in advance.
I use .NET 2.0 and VS 2005
I haven't really used MySQL, so I'm just guessing here.
In the second statement you have potentially reserved words, comment and remark. Try removing them from the statement:
INSERT INTO commentdetail
(commentDetailID, commentID, commentType, locator1, locator2)
VALUES (?commentDetailID, ?commentID, ?commentType, ?locator1, ?locator2)
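Alternatively, if those columns must stay, MySQL allows reserved words as identifiers when they are quoted with backticks (a guess at the fix, untested here):

```sql
INSERT INTO commentdetail
    (commentDetailID, commentID, commentType, locator1, locator2, `comment`, remark)
VALUES
    (?commentDetailID, ?commentID, ?commentType, ?locator1, ?locator2, ?comment, ?remark)
```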
The need to optimize rises from a bad design.
My articles[ ^]
Hi,
Can someone please help me with an issue I am having?
I have created a table named "Countries". The table has a CountryID and a Country. What needs to happen is that the user must be able to add his/her country to the database. The problem I am having is the following:
The CountryID is NOT auto-numbered. I need to write a trigger that executes BEFORE the insert. It basically needs to check what the latest record ID is; if there are no records, the CountryID should be 1, and if there are records, the trigger must take the last record's ID and add 1.
I have never worked with triggers, so I am in a bit of a pickle. Any help will be appreciated!
Illegal Operation
Why can't you change the CountryID field into an identity (autonumber) field?
Triggers work on the inserted record and therefore fire after the insert. An identity field is the designed methodology; you are trying to make a trigger do the job of another function.
Alternatively, you can create an insert proc that gets the number before the insert begins. This is subject to corruption in high-volume tables (2 attempts at the same time - very unlikely, but it must be considered).
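A sketch of that kind of insert proc, with locking hints added to narrow the race window (table and column names come from the question; the hint usage is my assumption):

```sql
CREATE PROCEDURE AddCountry
    @Country varchar(100)
AS
BEGIN
    BEGIN TRANSACTION;

    DECLARE @NewID int;

    -- UPDLOCK + HOLDLOCK serialize concurrent callers, so two
    -- simultaneous inserts cannot read the same MAX value.
    SELECT @NewID = ISNULL(MAX(CountryID), 0) + 1
    FROM Countries WITH (UPDLOCK, HOLDLOCK);

    INSERT INTO Countries (CountryID, Country)
    VALUES (@NewID, @Country);

    COMMIT TRANSACTION;
END
```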
Never underestimate the power of human stupidity
RAH
Ok, point taken. What happens when you use auto-numbering and have 10 entries in your table, and now you need to delete entry number 7? What are you going to do to fix the table?
Illegal Operation
Define "fix" the table. I'm guessing you are stressing about the missing sequential number 7 in the identity field.
This is Database 101: ID fields must NEVER contain intelligent data; they are used to identify the record and link it to other tables. The sooner you get this idea, the happier you will be. If you are using the ID field for anything other than locating the record (e.g. sorting), then your design is tragically flawed.
I recommend you get a book on database design and data structuring (mine are so old they are no longer published).
Never underestimate the power of human stupidity
RAH
Hold your horses!
Triggers only fire after the insert, not before; just use id_country as the primary key and set autonumber (counter) there.
I also think you don't need triggers for now.
nelsonpaixao@yahoo.com.br
trying to help & get help
Hi,
I have a data file which consists of data as below:
Header
xx|yy|zz
aa|bb|cc
Footer
While I am inserting into the table using BULK INSERT, the pipe (|) is also getting inserted into the table.
Here is the query I am using to insert the data with BULK INSERT:
BULK INSERT #TmpStList
FROM 'D:\PSC07\HRDATA\HR2SSTS_TRAINING.TXT' WITH
(FIRSTROW = 1, LASTROW = 3, DATAFILETYPE = 'char', FIELDTERMINATOR = '|',
ROWTERMINATOR = '\n', KEEPNULLS)
I want a result like this:
field1  field2  field3
xx      yy      zz
aa      bb      cc
but I got this:
field1    field2  field3
Headerxx  yy      zz
aa        bb      cc
I don't want the Header (or the |) to end up in the inserted rows.
Your header and footer rows should have as many columns as the data rows, so you should modify your data to:
Header||
xx|yy|zz
aa|bb|cc
Footer||
Although you can start from the second row using FIRSTROW=2, note what the documentation says:
"The FIRSTROW attribute is not intended to skip column headers. Skipping headers is not supported by the BULK INSERT statement. When skipping rows, the SQL Server Database Engine looks only at the field terminators, and does not validate the data in the fields of skipped rows."
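With the header and footer padded to three fields as above, a sketch of skipping the header entirely (same file path and options as in the question):

```sql
BULK INSERT #TmpStList
FROM 'D:\PSC07\HRDATA\HR2SSTS_TRAINING.TXT' WITH
(
    FIRSTROW = 2,        -- skip the padded header line
    LASTROW = 3,         -- stop before the footer
    DATAFILETYPE = 'char',
    FIELDTERMINATOR = '|',
    ROWTERMINATOR = '\n',
    KEEPNULLS
);
```

Because the engine only counts field terminators when skipping, this works only if the header contains the same number of | characters as a data row, which is exactly why the padding matters.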
The need to optimize rises from a bad design.
My articles[ ^]
Hi,
I am finishing an application that has a database. The application is stand-alone, so no network here (1 database for 1 application).
Will I get errors when installing the application on another computer with no SQL Server installed?
I think I have to type ...attached...path...whatever in the connection string and all will be fine.
You can edit and get reports in the application; I only allow users to trigger stored procedures (with parameters).
Will I get errors?
nelsonpaixao@yahoo.com.br
trying to help & get help
Yes. You are using a service-based database, which means the database must be attached to a server when you use it. SQL Server must be installed on the target machine and the database attached to it (or you can attach it when using it).
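If SQL Server Express is installed on the target machine, the attach-on-use option can be done from the connection string. A sketch (the database file name and path are hypothetical):

```
Data Source=.\SQLEXPRESS;
AttachDbFilename=|DataDirectory|\MyDatabase.mdf;
Integrated Security=True;
User Instance=True;
```

This avoids the manual attach step, but SQL Server Express still has to be installed; the connection string alone does not remove that requirement.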
Eslam Afifi