|
Hi guys,
I have these three queries and I would like to combine them into one table so that I can read from one table using my DataReader(). I don't want to use subqueries, because some of the queries will return more than one row and SQL Server does not like that.
(
SELECT [number], [dateTime]
FROM [Database].[dbo].[table]
WHERE [dateTime] BETWEEN '2010-04-30 00:00:00' AND '2010-04-30 23:59:59'
AND [number] NOT IN ('227000000', '2274000000', '2770000000')
)
(
SELECT RIGHT([number_pair], 11) AS [number_pair]
FROM [Database].[dbo].[table]
WHERE [number_pair] != '' AND [used] = '1' AND [number_pair] IN
(
SELECT [num_pair] FROM [Database].[dbo].[table]
WHERE [dateTime] BETWEEN '2010-04-30 00:00:00' AND '2010-04-30 23:59:59'
AND [number] NOT IN ('227000000', '2274000000', '2770000000')
)
)
(
SELECT column FROM [Database].[dbo].[table]
WHERE [sent] = '1' AND
[sent_date] BETWEEN '2010-04-30 00:00:00' AND '2010-04-30 23:59:59'
)
Please, can anyone help me?
Thanks,
Morgs
|
|
|
|
|
The three queries don't return the same number of columns, so they can't be combined using UNION. First you need to make the three queries return the same number of columns, with corresponding columns having the same data types. Then you can use UNION or UNION ALL to combine them into one big result set.
|
|
|
|
|
Hi Loyal,
Thanks for your reply. The problem is that these queries return different numbers of columns and rows, e.g. the 1st query returns 3 columns, while the 2nd and 3rd return 1 column each.
|
|
|
|
|
How do you expect to read from one table if they all have a different number of columns?
You can have 3 SELECT queries in 1 stored procedure and still use a DataReader. The DataReader class has a NextResult() method which moves on to the next result set.
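As a sketch of that approach (the procedure name and parameters are made up, using the tables from the original post), the client calls NextResult() on the reader after consuming each result set:

```sql
CREATE PROCEDURE dbo.GetDailyReport
    @dayStart DATETIME,
    @dayEnd   DATETIME
AS
BEGIN
    -- Result set 1: numbers in the time window
    SELECT [number], [dateTime]
    FROM [Database].[dbo].[table]
    WHERE [dateTime] BETWEEN @dayStart AND @dayEnd;

    -- Result set 2: used number pairs
    SELECT RIGHT([number_pair], 11) AS [number_pair]
    FROM [Database].[dbo].[table]
    WHERE [number_pair] != '' AND [used] = '1';

    -- Result set 3: sent rows in the time window
    SELECT [sent_date]
    FROM [Database].[dbo].[table]
    WHERE [sent] = '1' AND [sent_date] BETWEEN @dayStart AND @dayEnd;
END
```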
|
|
|
|
|
Thanks a lot, I will try this one.
|
|
|
|
|
That is not a problem. Simply determine which query returns the most columns, and add dummy columns to the other queries so that each query returns the same number of columns. Use 0 for numerical columns and '' for text columns.
So for example:
SELECT NumericalColumn1, NumericalColumn2, textColumn1
FROM Table1
WHERE blabla
UNION ALL
SELECT NumericalColumn1, 0, ''
FROM Table2
WHERE blabla
UNION ALL
SELECT 0, 0, textColumn1
FROM Table3
WHERE blabla
My advice is free, and you may get what you paid for.
|
|
|
|
|
Hi everyone,
I'm using a huge database of geostatistical blocks (millions of records). I would like to read the records one by one and process only some of them. I tried the DataReader but it's terribly slow. Is there a "read next" or something like that in SQL?
I'm programming in C#.
Thanks in advance,
Yvon
|
|
|
|
|
If you are only processing a subset of the records, why not cut down the size of the result set by using a WHERE clause to filter out the records you won't need? (Sorry if you already know that, but your question isn't clear.)
Like:
select * from [table] where [column] = 'foo'
It's best to bring back only those records you'll actually need. Perhaps you could explain your requirement a little more clearly: e.g. are you intending to do this in SQL or C#?
me, me, me
"The dinosaurs became extinct because they didn't have a space program. And if we become extinct because we don't have a space program, it'll serve us right!"
Larry Niven
nils illegitimus carborundum
|
|
|
|
|
In fact, I had something like SELECT [IJK],[XC],[YC],[ZC] FROM [MSDP]; and it was taking 10 to 15 minutes to get the records.
I tried changing it to SELECT TOP 200 [IJK],[XC],[YC],[ZC] FROM [MSDP] ORDER BY [IJK]. It's fast for the first 200 records.
To get the following records, I use SELECT TOP 200 [IJK],[XC],[YC],[ZC] FROM [MSDP] WHERE [IJK] NOT IN (SELECT TOP 200 [IJK] FROM [MSDP] ORDER BY [IJK]) ORDER BY [IJK];
the 200 in the subquery changes in a loop to 400, 600 and so on.
At the beginning it's quite fast, but performance degrades quickly.
Any help?
|
|
|
|
|
Does it need to be ordered? If it does, at least put an index on [IJK]; that should help. Consider indexing the other columns too. Have you looked at the execution plan? How does it look?
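For example, a clustered index on [IJK] (assuming its values are unique, which the paging logic seems to rely on):

```sql
-- Drop UNIQUE if [IJK] values can repeat.
CREATE UNIQUE CLUSTERED INDEX IX_MSDP_IJK ON [MSDP] ([IJK]);
```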
me, me, me
"The dinosaurs became extinct because they didn't have a space program. And if we become extinct because we don't have a space program, it'll serve us right!"
Larry Niven
nils illegitimus carborundum
|
|
|
|
|
This way you're selecting all records (just in parts).
Where's your filter (WHERE clause), since you only need some of the records?
|
|
|
|
|
I didn't put in the WHERE clause because it's very complex, but it works.
|
|
|
|
|
Your logic must be really, really complex if it cannot be translated to a WHERE clause.
However, assuming you are right, you can always at least cut out some of the records with a WHERE clause. Look at the search logic you have written in your application, find the single condition that filters out the most records, and translate only that to SQL.
My advice is free, and you may get what you paid for.
|
|
|
|
|
That is going to be really slow, since retrieving e.g. records 4,801-5,000 uses 5,000 records' worth of processing to retrieve 200 records, and retrieving records 49,801-50,000 uses 50,000 records' worth. To get through 1,000,000 records would require processing about 2,500,000,000 records.
To add insult to injury, deleting records from the table may cause some records (not involved in the deletion) to be skipped. Adding records may cause some records to be duplicated, but that's a far smaller problem.
If the value in column IJK uniquely identifies records, use "SELECT TOP 200 ... WHERE [IJK] > @lastijk", where parameter @lastijk is the last value you retrieved. I'd guess that IJK is probably unique, since if it isn't your code won't work properly at any speed. If it is not unique, follow "SELECT TOP 200 ... WHERE [IJK] > @lastijk" with "SELECT ... WHERE [IJK] = @lastijk" (no TOP 200 qualifier there, and use the IJK of the last record from the first query as @lastijk); all of the records where IJK equals @lastijk will appear in the second query, so ignore any rows from the first query where IJK equals @lastijk.
Incidentally, to get reasonable performance you'll need an index (preferably clustered) on [IJK]. I'd guess you probably have one already, or else even the first 200 records would be really slow.
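A sketch of that keyset approach (parameter name and type are illustrative; [IJK] is assumed unique):

```sql
-- First page:
SELECT TOP 200 [IJK], [XC], [YC], [ZC]
FROM [MSDP]
ORDER BY [IJK];

-- Every following page: @lastijk holds the [IJK] of the last row
-- returned by the previous page (type assumed INT here).
DECLARE @lastijk INT = 4999;
SELECT TOP 200 [IJK], [XC], [YC], [ZC]
FROM [MSDP]
WHERE [IJK] > @lastijk
ORDER BY [IJK];
```

Because each page seeks straight to @lastijk in the index, the cost per page stays constant instead of growing with the page number.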
|
|
|
|
|
Thanks, that looks good to me. The performance was degrading quickly; this would probably fix it...
|
|
|
|
|
Yvon Bourgouin wrote: At the beginning, it's quite fast but the performances are degrading quite fast
I'm not surprised. Think about what you are asking the database to do here:
SELECT TOP 200...
SELECT TOP 200 WHERE NOT IN (SELECT TOP 200...)
SELECT TOP 200 WHERE NOT IN (SELECT TOP 400...)
SELECT TOP 200 WHERE NOT IN (SELECT TOP 600...)
... repeat lots of times ...
SELECT TOP 200 WHERE NOT IN (SELECT TOP 1,000,000...)
If I understand you correctly, you are running the same query over and over, asking the database to do more work every time.
You say that you are only processing a small number of the rows. So why fetch them all? Why not put something into the WHERE clause to cut down the number of rows you select? Once you have fetched the rows, how do you decide which ones to process and which to ignore, and what is stopping you from moving that selection logic back to the database? Sorry if that's obvious; I'm sure you have considered it, but I don't quite understand why you don't do it.
|
|
|
|
|
Returning millions of records takes time. If you can't limit the results via a WHERE clause, you might like to try the following approach;
CREATE PROCEDURE GetData
(
@start INT
)
AS
WITH Query
AS
(
SELECT
ROW_NUMBER() OVER
(
ORDER BY [IJK] ASC
) as position,
[IJK],
[XC],
[YC],
[ZC]
FROM [MSDP]
)
SELECT [IJK],
[XC],
[YC],
[ZC]
FROM Query
WHERE position BETWEEN @start + 1 AND @start + 200
To run the sp, you use
EXEC GetData 200000 (replace the 200000 with the start value). You could also return more than 200 records at a time by changing the sp.
|
|
|
|
|
Thanks, that might solve my problem...
|
|
|
|
|
Yvon Bourgouin wrote: million of records
Yvon Bourgouin wrote: I would like to read the records one by one
Can you spot the problem here? Trying to iterate over millions of rows will, of course, be slow.
Why would you not limit the returned results to the rows you actually want?
|
|
|
|
|
I just had to go through and add View Definition and Execute permissions to a couple dozen scalar-valued functions for a user. I had to right-click each one, click Permissions, click Search, type the login, click Check Names, then check the two permissions and click OK. I could have sworn there was a way to manage object permissions in bulk for a given user, but I can't seem to find it.
|
|
|
|
|
It really depends on your setup. You could have set up an AD group, managed the users via AD, and simply added the group (when searching) to the functions.
You could have used a script to do the same job. We have a setup where the admin can assign procs to users via the app's UI and script the change.
If you restore the database to another server, you may run into the situation where the permissions are named the same but have a different ID and are no longer valid, and you need to repeat the process.
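To sketch the scripted route (the user name is made up): generate one GRANT per scalar-valued function from the catalog views and run the output:

```sql
-- Emits a GRANT statement for every scalar-valued function (type 'FN');
-- copy the output into a query window and execute it.
SELECT 'GRANT EXECUTE, VIEW DEFINITION ON ['
       + SCHEMA_NAME(schema_id) + '].[' + name + '] TO [SomeUser];'
FROM sys.objects
WHERE type = 'FN';
```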
Never underestimate the power of human stupidity
RAH
|
|
|
|
|
The problem isn't the number of users; it's the number of database objects. I want to bring up a list of scalar-valued functions and apply permissions for a single user or group to multiple functions at once. Even if I scripted it, that would mean a separate GRANT statement for each function.
|
|
|
|
|
You could employ schemas to segregate your objects (I don't, and I think it would be irritating to have to include the schema in every query); otherwise you have to live with your decision to apply user-level permissions to database objects.
I'm curious: do your users access your database directly (SSMS/QA) or only through an application?
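For what it's worth, a schema-scoped grant covers every object in the schema, including ones created later; a sketch (the user name is made up):

```sql
-- One statement instead of one GRANT per function.
GRANT EXECUTE, VIEW DEFINITION ON SCHEMA::[dbo] TO [SomeUser];
```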
Never underestimate the power of human stupidity
RAH
|
|
|
|
|
Hi All,
Recently, when I try to restore files from my backup (MyDB.BAK), I find two entries: one saying Full backup, and another which contains the date of the backup.
I really don't know why I am getting two different entries. Previously I only used to get a single entry for the current date's backup.
Have I changed something to get a duplicate backup? The file is now double the size it was two months ago.
I am just doing a full backup.
In short, this backup (MyDB.bak) contains two backup sets of the same type. It's a duplicate, and as a result the file is double the size of my DB.
Is there any way I could get only one backup?
Thanks
|
|
|
|
|
I suspect you checked Append when selecting the destination (in SSMS). When restoring via SSMS, you can select which backup set you want to restore. I am surprised that restoring both sets doubles the size of your database though; I would expect it to overwrite.
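If the aim is to keep only a single backup set in the file, backing up WITH INIT overwrites any existing sets instead of appending to them; a sketch (the path is made up):

```sql
-- WITH INIT overwrites existing backup sets in the destination file;
-- the default (NOINIT) appends a new set on every backup.
BACKUP DATABASE [MyDB]
TO DISK = 'C:\Backups\MyDB.BAK'
WITH INIT;
```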
Never underestimate the power of human stupidity
RAH
|
|
|
|
|