|
Nope, I did not issue a commit. I usually have it commented out and, if all goes well, highlight the commit and execute it!
Thanks a bunch!
|
|
|
|
|
Glad it helps.
For what it's worth, and I am sure others will disagree with me - don't use the begin/commit/rollback transaction system for admin work on the database.
Write a select query to test the update, create a backup table of the rows you are about to update, run the update and check the changes.
If there is a problem restore the rows using the backup table you created.
Using the transaction system to get you out of trouble is bad practice and will one day land you in trouble.
If you use your current system you are going to lock the rows and there is a real danger that you will forget the commit or you will accidentally run a commit when you did not mean to.
How do I know this?
I used to do what you are now doing, years ago, and through considerable embarrassment learnt to use the method I illustrate above.
In summary:
Don't run an update without testing it and knowing what the results will be.
Take a backup of the rows being updated, so you can restore them in case something goes wrong - which it won't, as you fully tested the update with a select statement first.
“That which can be asserted without evidence, can be dismissed without evidence.”
― Christopher Hitchens
|
|
|
|
|
Creating a backup table every time an update is executed does not make sense. The database fills up with all these _bak tables and begins to get messy really quickly.
|
|
|
|
|
I don't think you read my post fully.
Before the update run a query like this:
select * into temp_20140219 from cash_withdrawals where userid = 1234
You then run your update:
update cash_withdrawals set withdrawals = -1000000 where userid = 1234
You then check that the update worked correctly:
select * from cash_withdrawals where userid = 1234
If everything looks correct you then drop the backup table.
drop table temp_20140219
If there is a problem you restore the data:
delete from cash_withdrawals where userid = 1234
insert into cash_withdrawals
select * from temp_20140219
If you carry on with things the way you currently do you will get burnt, maybe not this week but it will happen eventually...
[EDIT]SQL syntax error corrected[/EDIT]
“That which can be asserted without evidence, can be dismissed without evidence.”
― Christopher Hitchens
|
|
|
|
|
GuyThiebaut wrote: Using the transaction system to get you out of trouble is bad practice and will one day land you in trouble. Transactions are great if you need multiple statements to work or fail as if they are one single statement; and yes, one should always take a five-second pause to reflect before doing a commit. To make a short story long..
We do a lot of things on autopilot. I've seen other people drive cars and look around when turning in a bend. They all peek in a similar fashion, starting on the left-shoulder, checking each angle, unto the far right. That routine has been drilled in to protect one from their own autopilot. Check each angle, then turn.
Programmers get the same tendencies, and most of us remember each crash and the habit it taught us; still fanatically pressing Ctrl-Shift-S in the days of autorecover? Selecting your entire post and copying it to the clipboard in case of a time-out? We're very familiar with crashes, but there are few resources that explain where there be dragons.
So, before committing; check whether you got the correct server (left shoulder), correct database (left screen), correct credentials (mirror), explain to an imaginary friend what your query should do (look ahead) - then explain what each statement actually does (back-mirror), then find someone to blame when it goes wrong (right mirror).
Now you can commit. Just don't do it without the five-second delay - it works as well as Gmail's "undo send" button. It'll save you often, but not often enough to live without backups.
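Most of those pre-flight checks can even be scripted; a sketch of the kind of query one might run before anything destructive (only built-in T-SQL functions, nothing assumed about your schema):

```sql
-- Which server, database, and login am I actually connected as,
-- and do I already have an open transaction I forgot about?
SELECT @@SERVERNAME  AS server_name,
       DB_NAME()     AS current_database,
       SUSER_SNAME() AS login_name,
       @@TRANCOUNT   AS open_transactions;
```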
Bastard Programmer from Hell
If you can't read my code, try converting it here[^]
|
|
|
|
|
begin tran
-- insert or update or delete query
commit tran -- if the insert/update/delete was correct
rollback tran -- if the insert/update/delete was incorrect
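Fleshed out against the example table used earlier in this thread (table and column names are assumed from those posts, not a real schema), that pattern might look like:

```sql
BEGIN TRAN;

-- Hypothetical admin change, using names from the earlier example
UPDATE cash_withdrawals
SET withdrawals = 0
WHERE userid = 1234;

-- Inspect the result before deciding what to do
SELECT * FROM cash_withdrawals WHERE userid = 1234;

COMMIT TRAN;      -- run this if the rows look correct
-- ROLLBACK TRAN; -- run this instead if they do not
```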
|
|
|
|
|
Have a read of my posts - I am saying that yes this can be done, however there are too many risks involved for me to consider it a suitable pattern for administration on a database.
“That which can be asserted without evidence, can be dismissed without evidence.”
― Christopher Hitchens
|
|
|
|
|
vkEE wrote: Do I have to issue a rollback statement In general, yes. Transactions are pending until either a commit or rollback.
Bastard Programmer from Hell
If you can't read my code, try converting it here[^]
|
|
|
|
|
But like GuyThiebaut mentioned, since the update query errored out, the connection would be lost, and hence it's as good as a rollback.
|
|
|
|
|
You are guessing that the data has not been updated.
How do you know that the data has not been updated?
Because the connection was dropped?
Because the Microsoft manual said so or because GuyThiebaut told you so?
That is very sloppy thinking and will get you into trouble one day...
You need to check your updates and KNOW that the update did or did not happen.
“That which can be asserted without evidence, can be dismissed without evidence.”
― Christopher Hitchens
|
|
|
|
|
I know the update did not go through because the table has a trigger on a field called updatedate, which sets that field to the current datetime when a change occurs, and there were no rows in the database stamped with the time at which I had performed the update.
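For reference, the kind of trigger described might look something like this - the table, column, and trigger names here are assumptions for illustration, not the actual schema:

```sql
CREATE TRIGGER trg_cash_withdrawals_updatedate
ON cash_withdrawals
AFTER UPDATE
AS
BEGIN
    -- Stamp every updated row with the current datetime
    UPDATE cw
    SET cw.updatedate = GETDATE()
    FROM cash_withdrawals AS cw
    INNER JOIN inserted AS i ON i.id = cw.id;
END
```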
Thanks.
|
|
|
|
|
vkEE wrote: and there were no rows in the database with the time I had performed an update
...and you are the only person who could have updated data at that time?
It pains me to see you so confident in such a precarious system.
At least I won't be there to say "I told you so" when the SHTF the next time you rely on this sort of method to update a large number of rows.
“That which can be asserted without evidence, can be dismissed without evidence.”
― Christopher Hitchens
|
|
|
|
|
vkEE wrote: But like GuyThiebaut mentioned, since the update query errored out, the connection would be lost and hence as good as rollback.
More importantly: no single part of it was committed! (A rollback also ends the transaction and cleans it up, but a pending transaction is hardly a disaster - confirming the change by committing it would have been.)
You can check whether there are pending transactions:
SELECT @@TRANCOUNT;
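And if the count comes back nonzero, a rollback will clear it:

```sql
IF @@TRANCOUNT > 0
    ROLLBACK TRAN;  -- ends the pending transaction without committing anything
```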
Bastard Programmer from Hell
If you can't read my code, try converting it here[^]
|
|
|
|
|
Run the rollback - it cannot hurt
|
|
|
|
|
Hi,
I'm not sure this is the best way to do this. I have an agent running as a service on thousands of servers. Each server can delete and add several hundred records into one table everyday.
I have set the KEY as an integer using IDENTITY(1,1). During the initial testing I have noticed that SQL Server does not reuse the KEY. This makes me wonder what happens when I've added and deleted 4 billion records (the max size of an integer). Does it automatically wrap around, so I don't need to worry about this?
This is SQL Server, and I'm wondering if I should be doing database REORGs like we do with DB2.
I know it would have been nice to be able to use a KEY and not worry about this. From what I've read the serial number from each distributor is supposed to be unique; however, I've run into duplicates right from the start, where the distributor is using zeros as the serial number.
Thanks,
Glenn
|
|
|
|
|
Yes, I can define it that way, but that only moves the potential problem, doesn't fix it.
Glenn
|
|
|
|
|
gmhanna wrote: that only moves the potential problem, doesn't fix it.
How much data are you inserting?
Assuming 1,000,000 servers, each inserting 1,000,000 records per day, it would take you just under 25,252 years 332 days to use up the capacity of a bigint column starting at 1 and ignoring negative keys. If you start at the minimum value, it would take just under 50,505 years 298 days.
At that point, I think it's fair to say it's Someone Else's Problem.
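If you want to sanity-check that arithmetic yourself (assuming the same 10^12 inserts per day as above), SQL Server will do it inline:

```sql
-- bigint max / (1,000,000 servers * 1,000,000 rows per day) = days to exhaustion
SELECT 9223372036854775807 / 1000000000000       AS days_to_exhaust, -- 9223372
       9223372036854775807 / 1000000000000 / 365 AS rough_years;     -- roughly 25 thousand, ignoring leap years
```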
"These people looked deep within my soul and assigned me a number based on the order in which I joined."
- Homer
|
|
|
|
|
But it sure is nice to see someone planning for the future. He definitely learned from Y2K.
There are only 10 types of people in the world, those who understand binary and those who don't.
|
|
|
|
|
Go with bigint on all transaction tables; int breaks when it hits the limit. The horizon for bigint is beyond your lifetime, so settle for that.
Never underestimate the power of human stupidity
RAH
|
|
|
|
|
gmhanna wrote: wonder what happens when I've added and deleted 4 Billion records
1) Instead of running a delete command, use truncate table. That will reseed the identity field, thus allowing you to regenerate the numbers. (Not that there is anything wrong with what others suggested of just using a BIGINT for the ID field - once MAX INT has been reached you will no longer be able to insert records into that table.)
2) If you are unable to truncate the table, then you can reseed the key. More information can be found here http://blog.sqlauthority.com/2007/03/15/sql-server-dbcc-reseed-table-identity-value-reset-table-identity/[^]
A quick example
CREATE TABLE Test(id INT IDENTITY(1, 1), TestValue INT)
GO
INSERT INTO test VALUES(1)
INSERT INTO test VALUES(2)
INSERT INTO test VALUES(3)
INSERT INTO test VALUES(4)
INSERT INTO test VALUES(5)
GO
SELECT * FROM test
GO
DELETE test
GO
INSERT INTO test VALUES(1)
INSERT INTO test VALUES(2)
INSERT INTO test VALUES(3)
INSERT INTO test VALUES(4)
INSERT INTO test VALUES(5)
GO
SELECT * FROM test
GO
TRUNCATE TABLE TEST
GO
INSERT INTO test VALUES(1)
INSERT INTO test VALUES(2)
INSERT INTO test VALUES(3)
INSERT INTO test VALUES(4)
INSERT INTO test VALUES(5)
GO
SELECT * FROM test
GO
DBCC CHECKIDENT(test, reseed, 1)
GO
INSERT INTO test VALUES(1)
INSERT INTO test VALUES(2)
INSERT INTO test VALUES(3)
INSERT INTO test VALUES(4)
INSERT INTO test VALUES(5)
GO
SELECT * FROM test
GO
DROP TABLE test
gmhanna wrote: This is SQL Server, and I'm wondering if I should be doing a database REORGs like we do with DB2.
From IBM's website, it looks like the equivalent feature in SQL Server is clustered index management (more information http://technet.microsoft.com/en-us/library/ms189858.aspx[^])
Common sense is admitting there is cause and effect and that you can exert some control over what you understand.
|
|
|
|
|
Hi everyone!
I'm starting a project in BIDS. I want to create cubes based on replication databases for Salesforce and PRISM as data sources.
Does anyone have experience with developing Salesforce reports?
I would very much appreciate any suggestions you might have!
Thanks in advance.
Kind Regards
Karina
|
|
|
|
|
It's entirely doable, our EDW team does it.
Loosely
1) Pull data from SalesForce.com into a staging environment (we use their dbamp application to do it)
2) Pull data from Prism into a staging environment (AS400 ODBC connection)
3) Pair the data together into Facts and Dimensions. Log data mismatches for user reporting.
Both systems should have customer numbers that match up; it's entirely probable that somewhere Salesforce.com is getting your Prism customer numbers (or should be).
4) Once a solid data model has been built generating a cube off of that data is then easy.
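Step 3 might be sketched as something like this - the staging table and column names here are hypothetical, not your actual schema:

```sql
-- Log Salesforce customers that have no matching Prism customer number
INSERT INTO staging_mismatch_log (CustomerNumber, SourceSystem)
SELECT s.CustomerNumber, 'Salesforce'
FROM staging_salesforce AS s
LEFT JOIN staging_prism AS p
       ON p.CustomerNumber = s.CustomerNumber
WHERE p.CustomerNumber IS NULL;
```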
For more conceptual information -> http://en.wikipedia.org/wiki/Data_warehouse[^]
Common sense is admitting there is cause and effect and that you can exert some control over what you understand.
|
|
|
|
|
Hi!
I am trying to create a statement that deletes rows where UpdateDate is older than 7 days and then selects those rows for me. I need to get those rows back to my application to write the log file. In 2 steps (select and then delete by ID) it's no problem. But is there a way to do this in one single step?
What I need to get back is table.ID, table.CreationDate, table.Name ... It is nearly the complete row.
THX for your help!
|
|
|
|
|
Member 9935038 wrote: But is there a way to do this in one single step? What do you mean by "single step"?
If you need them to be a single operation then it'd be wrapped in a transaction. That'd mean two extra statements.
If you're wondering if there's a single command that does both a select and a delete, SQL Server's OUTPUT clause on a DELETE comes close; otherwise you can wrap 'em both in a stored procedure.
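On SQL Server, a DELETE with an OUTPUT clause can return the deleted rows in the same statement; a sketch, with the table name hypothetical and the column names taken from the question:

```sql
DELETE FROM MyTable
OUTPUT deleted.ID, deleted.CreationDate, deleted.Name
WHERE UpdateDate < DATEADD(DAY, -7, GETDATE());
```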
Bastard Programmer from Hell
If you can't read my code, try converting it here[^]
|
|
|
|