|
Thanks, Pradeep.
I will try it and get back to you soon.
|
|
|
|
|
Dear All,
I am building a web application using ASP.NET C# (Visual Studio 2005), with SQL Server 2005 as the back end.
I now want to generate reports through SQL Server Reporting Services.
Please give me some useful links so that I can use reports generated by SQL Server 2005 from my web application.
Thanks
regards
imran khan
|
|
|
|
|
1. If there are duplicate rows in a table, how do you retrieve them? We can use SELECT DISTINCT(fieldname), but that is not what I mean. Is there any other way?
2. After reading with
while(dr.Read())
{}
can we close the reader automatically without calling dr.Close()?
yamini
|
|
|
|
|
1. Listing only the duplicate rows is possible. I need the table structure and some sample data.
2. To my knowledge, the garbage collector should take care of it.
Regards
KP
|
|
|
|
|
Suppose there are five similar records with the same name, id, and dept. Without writing a query in SQL Server, how are the duplicates removed, so that only a single record is retrieved?
Thanks,
yamini
|
|
|
|
|
I'm sorry,
I'm not following your question. Can you explain in detail?
Regards
KP
|
|
|
|
|
Regarding #2
Although you don't have to call Close(), it is advisable. While the DataReader is open you are tying up the connection, so it cannot be used until the reader has been collected by the garbage collector, and you don't have any control over when the garbage collector will collect the reader.
Another option is to wrap it in a using statement and have C# close it when it drops out of scope:
using(SqlDataReader dr = cmd.ExecuteReader())
{
while(dr.Read())
{
}
}
|
|
|
|
|
1. Take a look at using GROUP BY with HAVING COUNT(*) > 1 to identify duplicated rows.
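A minimal sketch of what that looks like, assuming a hypothetical table named Employees with the name, id, and dept columns from the earlier post:

```sql
-- List the combinations of values that occur more than once
SELECT name, id, dept, COUNT(*) AS occurrences
FROM Employees
GROUP BY name, id, dept
HAVING COUNT(*) > 1;

-- Or return each duplicated row only once
SELECT DISTINCT name, id, dept
FROM Employees;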
Deja View - the feeling that you've seen this post before.
|
|
|
|
|
I am doing a project in Visual Studio .NET (language C#). How can I store documents (like Word, Excel, PPT, etc.) in SQL Server 2000?
Can anybody please provide some useful links or the exact method/code?
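One common approach is to store the file bytes in an image column (SQL Server 2000 does not have varbinary(max)). Here is a rough sketch, assuming a hypothetical table Documents (DocName varchar(255), DocData image) and a placeholder connection string:

```csharp
using System;
using System.Data.SqlClient;
using System.IO;

class StoreDocument
{
    static void Main()
    {
        // Read the whole file into a byte array
        byte[] fileBytes = File.ReadAllBytes(@"C:\report.doc");

        using (SqlConnection conn = new SqlConnection("your connection string here"))
        using (SqlCommand cmd = new SqlCommand(
            "INSERT INTO Documents (DocName, DocData) VALUES (@name, @data)", conn))
        {
            // A byte[] parameter maps to the image column
            cmd.Parameters.AddWithValue("@name", "report.doc");
            cmd.Parameters.AddWithValue("@data", fileBytes);
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }
}
```

To read the document back, select the column into a byte[] and write it to disk with File.WriteAllBytes.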
|
|
|
|
|
|
Hi, I'm new to SQL and I want to give the INSERT statement a variable as its value instead of a constant. Can this be done? If yes, then how?
Thanks
|
|
|
|
|
Yes, sure you can do that. All you need to do is add parameters to your INSERT statement.
Example:
insert into temptable (col1, col2) values (@username, @date)
@username and @date are the parameters you have to assign values to in your command object.
Example:
myCommand.Parameters.AddWithValue("@username", "value of the username");
myCommand.Parameters.AddWithValue("@date", "value of the date");
Then you execute the command.
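A complete sketch of the steps above, assuming the hypothetical temptable from the example and a placeholder connection string:

```csharp
using System;
using System.Data.SqlClient;

class ParameterizedInsert
{
    static void Main()
    {
        using (SqlConnection conn = new SqlConnection("your connection string here"))
        using (SqlCommand cmd = new SqlCommand(
            "INSERT INTO temptable (col1, col2) VALUES (@username, @date)", conn))
        {
            // Values are sent separately from the SQL text, which also
            // protects against SQL injection.
            cmd.Parameters.AddWithValue("@username", "imran");
            cmd.Parameters.AddWithValue("@date", DateTime.Now);
            conn.Open();
            cmd.ExecuteNonQuery();
        }
    }
}
```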
Regards,
|
|
|
|
|
There are 2 Tables, 1.Marks 2.Ctrl
Marks Table have following Data :
CtrlKey M1 M2 M3 M4 M5 M6
1 10 18 00 13 12 19
2 20 21 23 24 18 AB
...
Ctrl Table Have following Data :
CtrlKey Rollno
1 11
1 12
1 13
1 14
1 15
1 16
2 17
2 18
2 19
2 20
2 21
2 22
...
There is a new ctrlkey for every 6 rollnos.
I want the output to look like this:
Rollno Marks (like Foreign Key)
11 10
12 18
13 00
14 13
15 12
16 19
17 20
18 21
...
Please help me write a single SELECT query to get this output.
Girish Kumar Sharma
|
|
|
|
|
hi,
I don't understand the relation between the Marks and Ctrl tables.
regards
Jijo
jijo jose
|
|
|
|
|
Both tables (Marks and Ctrl) have a common field, "ctrlkey", and are related through it.
Please have another look at it and help as early as possible.
|
|
|
|
|
This cannot be done using a simple SELECT query.
1. Create a cursor over the "Marks" table.
2. Create a cursor over the "Ctrl" table which fetches the corresponding data for the ctrlkey fetched by the outer cursor.
3. In the inner loop you can arrange the data as you require.
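A rough T-SQL sketch of the outer cursor (table and column names come from the post; the mark variables are declared varchar because values like 'AB' appear in the data, and the pairing of rollnos with mark columns by rollno order is an assumption):

```sql
DECLARE @ctrlkey int
DECLARE @m1 varchar(10), @m2 varchar(10), @m3 varchar(10),
        @m4 varchar(10), @m5 varchar(10), @m6 varchar(10)

DECLARE marks_cur CURSOR FOR
    SELECT CtrlKey, M1, M2, M3, M4, M5, M6 FROM Marks

OPEN marks_cur
FETCH NEXT FROM marks_cur INTO @ctrlkey, @m1, @m2, @m3, @m4, @m5, @m6

WHILE @@FETCH_STATUS = 0
BEGIN
    -- Inner step: fetch the six rollnos for this ctrlkey
    -- (SELECT Rollno FROM Ctrl WHERE CtrlKey = @ctrlkey ORDER BY Rollno)
    -- and pair them with @m1..@m6, e.g. by inserting into a temp table.
    FETCH NEXT FROM marks_cur INTO @ctrlkey, @m1, @m2, @m3, @m4, @m5, @m6
END

CLOSE marks_cur
DEALLOCATE marks_cur
```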
Regards
KP
|
|
|
|
|
Hello all
This may seem like a simple question, but I'm not looking for a solution, I'm looking for a best practice. I have a SQL table that contains 293 columns and may eventually contain thousands of rows. Such a massive table could be a source of performance issues.
My question is: is it acceptable to keep the table this way, or would it be better to divide it into more than one table linked by foreign key constraints? For example, each table containing 50 columns or so?
Waiting eagerly for your expertise concerning this issue.
Regards
|
|
|
|
|
Nader Elshehabi wrote: I have a SQLTable that contains 293 columns, and may even contain thousands of rows. Such a massive table could be a source of performance issues.
Absolutely - But, it really depends on how you want to use the table.
Nader Elshehabi wrote: Is it satisfactory to keep the table this way, or should it be better if I divide the table on more than one tables, and link them using Foreign Key constraints?
To me that suggests that the table is not normalised. All data in the table must be dependent on the primary key. If you have repeating data in a table that suggests that a separate table needs to be created. For example:
A table Family: MotherName, FatherName, Child1Name, Child2Name, Child3Name, Child4Name
This has some limitations. First, a maximum of 4 children are permitted. It also assumes everyone lives in a perfect home and all siblings share the same parents.
So, if you have repeating columns (often characterised by the need to include a digit) then that needs to be separated out into a new table.
Nader Elshehabi wrote: For example each Table containing 50 columns or so?
You should search for useful relationships rather than splitting at an arbitrary number of columns. By splitting at every 50th column you run the risk of splitting related data (or data that is often used together) into separate tables which will slow down any query.
If there is no way to normalise the data then consider what groups of columns tend to be used together. Then split it that way, ensuring that each table maintains a one-to-one relationship with the others. This will cause some issues during an insert where some rows won't exist yet. So, design it to accept a zero-to-one relationship - that way missing rows in other tables don't affect the whole. You might also find that there are groups of columns where if one column isn't filled, neither are the others in the group. You can then use this knowledge to eliminate the need to store a row where no information is used.
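A minimal sketch of the normalisation described above, using the hypothetical Family example:

```sql
-- Instead of Family(MotherName, FatherName, Child1Name, ..., Child4Name):
CREATE TABLE Family (
    FamilyId   int IDENTITY PRIMARY KEY,
    MotherName varchar(100),
    FatherName varchar(100)
);

CREATE TABLE Child (
    ChildId   int IDENTITY PRIMARY KEY,
    FamilyId  int NOT NULL REFERENCES Family(FamilyId),
    ChildName varchar(100)
);
-- Any number of children per family, and no empty ChildN columns.
```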
|
|
|
|
|
Thank you Colin for your prompt reply I really appreciate it.
The table describes a medical procedure which has 293 parameters to be filled into the report. Some of them, of course, can be null, but none of them is a repetition. Each procedure, i.e. each row, is given a GUID unique to that procedure, used as the primary key.
My idea about dividing the table wasn't at the exact 50th column; maybe I wasn't clear on that point, sorry. The procedure can be roughly divided into several steps, each containing about 50 parameters. As you said, this will give me a great headache in maintaining the relations between the tables using the procedure's GUID as a foreign key. It will also make the queries very troublesome.
Currently the table may work fine, but my concern is for when it's filled with thousands of rows. Is it worth the effort to divide the table the way you suggested, or will it work fine in the future as a single table with 293 columns?
Regards
|
|
|
|
|
I don't think you need to worry about thousands of rows as much as the total number of bytes across your 293 columns. If you are using SQL 2005 then your page size is 8060 bytes. You really don't want a single row to go over the page size; that can cause a lot of performance issues. If you can say that your total max row size is around 6000 bytes then I wouldn't worry about performance. If your max row size is over 8060 bytes then you really should break up the columns.
If you only have thousands, like 10,000 rows, in a table there is a pretty good chance that a query wouldn't even use an index. Which is the next point: performance has more to do with indexes being set up properly than with the number of columns. SQL Server always reads full pages, so the smaller the row size, the more rows are read with one page read. Anyway, that is probably a lot more than you wanted.
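One way to estimate the maximum row size is to sum the declared column lengths from the system catalog. A rough sketch for SQL Server 2005, using a hypothetical table name:

```sql
-- Approximate maximum in-row bytes for one table.
-- Note: max_length is -1 for (max)/text types, which are stored off-row,
-- so this is only a rough estimate for fixed and limited-length columns.
SELECT SUM(c.max_length) AS approx_max_row_bytes
FROM sys.columns c
WHERE c.object_id = OBJECT_ID('dbo.ProcedureReport');
```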
Here's a link to potential problems you might have:
http://www.sqlservercentral.com/columnists/awarren/2862.asp
Hope that helps.
Ben
|
|
|
|
|
Thank you so much kubben. Your reply has been very helpful and guiding.
Kindest Regards
|
|
|
|
|
Hi,
I am using MS SQL 2005 and am having difficulty understanding how to create a stored procedure which first verifies that data is found, and then, if so, processes that data.
ALTER PROCEDURE dbo.PostNews
@Subject VARCHAR(255),
@Body TEXT
AS
/* Has news already been posted within the last 10 minutes? */
IF EXISTS(SELECT TOP 1 DATEDIFF(MINUTE, tableNews.CreateDate, GETDATE()) AS MinuteDiff FROM tableNews WHERE tableNews.Subject=@Subject) BEGIN
/** THIS DOES NOT WORK **/
IF(MinuteDiff <= 10) RETURN 1;
END
/* Post news entry. */
RETURN 0;
RETURN
If possible it would be great to find out how you guys attempt these types of SQL statements. What could I do to resolve the above issue, if the result is found I want to return 1, otherwise return 0 once the news posting has been added.
Many thanks,
Lea Hayes
|
|
|
|
|
lhayes00 wrote: /** THIS DOES NOT WORK **/
IF(MinuteDiff <= 10) RETURN 1;
END
Because MinuteDiff does not exist at this point in the code.
Okay - Let's look at the IF statement
lhayes00 wrote: IF EXISTS(SELECT TOP 1 DATEDIFF(MINUTE, tableNews.CreateDate, GETDATE()) AS MinuteDiff FROM tableNews WHERE tableNews.Subject=@Subject)
IF EXISTS(SELECT) does a check to discover whether the SELECT statement will return any rows. It doesn't care what those rows are, and it doesn't store the results anywhere. It is simply: is anything at all returned?
Because of that, any calculations performed in the SELECT clause (in the place you'd normally return column data) are ignored. Putting a TOP 1 in is pointless because the operation will short-circuit the moment any row is found. Putting a DATEDIFF function in is pointless because its result is ignored.
What you might want to do instead is something like this:
DECLARE @tenMinutesAgo DATETIME
SET @tenMinutesAgo = DATEADD(minute, -10, GETDATE())
IF EXISTS (SELECT * FROM tableNews WHERE CreateDate >= @tenMinutesAgo AND Subject=@Subject)
BEGIN
RETURN 1;
END
RETURN 0;
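Putting it together, the whole procedure might look something like this. This is a sketch: the actual INSERT for the news posting is an assumption, since the original post didn't show it, and the tableNews column names are taken from the post.

```sql
ALTER PROCEDURE dbo.PostNews
    @Subject VARCHAR(255),
    @Body TEXT
AS
BEGIN
    DECLARE @tenMinutesAgo DATETIME
    SET @tenMinutesAgo = DATEADD(minute, -10, GETDATE())

    -- Has news with this subject been posted within the last 10 minutes?
    IF EXISTS (SELECT * FROM tableNews
               WHERE CreateDate >= @tenMinutesAgo AND Subject = @Subject)
    BEGIN
        RETURN 1
    END

    -- Post the news entry (column list assumed)
    INSERT INTO tableNews (Subject, Body, CreateDate)
    VALUES (@Subject, @Body, GETDATE())

    RETURN 0
END
```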
|
|
|
|
|
Thank you very much for your reply.
I have just tried running that in my project and it works perfectly!
In addition, it is somewhat more optimal, which is great.
|
|
|
|
|
You're welcome. I'm glad it worked.
|
|
|
|