Hi, I hope this will help you:
SELECT
    (SELECT TOP 1 m2.clmn1 FROM mytable m2 WHERE m2.clmn2 = m.clmn2) clmn1, -- this subquery gets the first value of the clmn1 column
    clmn2, clmn3, clmn7
FROM mytable m
GROUP BY m.clmn2, m.clmn3, m.clmn7
HAVING
    COUNT(clmn2) > 0 AND
    COUNT(clmn3) > 0 AND
    COUNT(clmn7) > 0
I Love T-SQL
"Don't torture yourself,let the life to do it for you."
If my post helps you kindly save my time by voting my post.
www.aktualiteti.com
---
Yes! Blue_Boy, I really, really appreciate your reply. You are great and helpful.
Your answer is very clever.
Many thanks
---
You are always welcome bro
---
Hello,
Following the FAQ, I am attempting to query the DB to obtain the list of table names with the following query:
SELECT * FROM sqlite_master WHERE type='table' ORDER BY name
I am using an ODBC driver and have written code in C#:
DbConnection.Open();
OdbcCommand DbCommand = DbConnection.CreateCommand();
DbCommand.CommandText = "SELECT * FROM sqlite_master WHERE type='table' ORDER BY name";
DbReader = DbCommand.ExecuteReader();
List<string> TableNames = new List<string>();
for (int i = 0; i < DbReader.FieldCount; i++)
{
    TableNames.Add(DbReader.GetName(i));
    Trace.WriteLine(DbReader.GetName(i));
}
DbReader.Close();
DbConnection.Close();
I have a feeling that DbReader.GetName(i) is returning the wrong thing, rather than the query being incorrect.
Can anyone help me figure out a way to identify the names of tables in an SQLite3 DB?
Thanks,
Matt
---
The DbReader you obtained can be used to retrieve the table names from the "name" field:
DbReader["name"]
So your program should be modified like this:
DbConnection.Open();
OdbcCommand DbCommand = DbConnection.CreateCommand();
DbCommand.CommandText = "SELECT * FROM sqlite_master WHERE type='table' ORDER BY name";
DbReader = DbCommand.ExecuteReader();
List TableNames = new List();
while (DbReader.Read())
{
    TableNames.Add(DbReader["name"]);
    Trace.WriteLine(DbReader["name"]);
}
DbReader.Close();
DbConnection.Close();
Your original code was trying to retrieve all the fields in one record.
Good luck!
---
Thanks for the modified code!
I can't create a list like that for some reason (maybe I'm missing an import), so I kept the casting. This is what I have now:
OdbcCommand DbCommand = DbConnection.CreateCommand();
DbCommand.CommandText = ("SELECT name FROM sqlite_master WHERE type='table' ORDER BY name");
DbReader = DbCommand.ExecuteReader();
List<string> TableNames = new List<string>();
while (DbReader.Read())
{
TableNames.Add(DbReader["name"].ToString());
Trace.WriteLine(DbReader["name"]);
}
DbReader.Close();
if (TableNames.Contains("bandwidth_records"))
{
    DbConnection.Close();
}
else
{
}
Thanks!
---
Hello,
I have an issue with addresses that I need to parse into streetNum, Direction (if present), and streetName.
The data can look as such:
1234 McKean Street -- Needs to look like StreetNum: 1234, Direction: , StreetName: McKean
1234 Mc Kean Street -- Needs to look like StreetNum: 1234, Direction: , StreetName: Mc Kean
1234 E McKean Street -- Needs to look like StreetNum: 1234, Direction: E, StreetName: McKean
1234 E Mc Kean Street -- Needs to look like StreetNum: 1234, Direction: E, StreetName: Mc Kean
(Edit: I went ahead and gave the 526 addresses that include 1/2 an underscore, so since there is no space anymore they are easy to query.)
As far as I can tell those are all the options. There are approximately 600,000 records so it really has to be automated. There can be A LOT of changes in this data as the city may update a large portion of it at any time so I need to be able to do this on a monthly basis.
One thing to note: I am trying to do this so I can compare this data to another table of 70,000 records, to automate a search I would normally do using a "like" query in Access. I already streamlined that data, as it was only 70k records with a turnover of about 50 a month, so that was easy enough; now I need to streamline this data so I can compare the two tables.
That data is in this format:
streetNumber 1324, 1324-28, 1324-1/2
direction N,S,E,W
streetName McKean (I went through by hand and took all the spaces out)
I post this information in case it is easier to do this another way. It would be really nice to have the data parsed, but the variation, with a space as the only delimiter, makes it difficult to parse.
Any help would be much appreciated.
Thank you,
George
modified on Friday, December 4, 2009 10:27 AM
---
Let me know if it helps
Sample data
declare @t table(completeaddress varchar(100))
insert into @t
select '1234 McKean Street' union all select '1234 Mc Kean Street' union all
select '1234 E McKean Street' union all select '1234 E Mc Kean Street'
Query
select
    StreetNum = 'StreetNum : ' + SUBSTRING(completeaddress, 1, 4) + ','
    ,Direction = 'Direction : ' +
        case when ltrim(rtrim(SUBSTRING(completeaddress, 5, 2))) in ('N','S','E','W')
             then SUBSTRING(completeaddress, 5, 2) else '' end + ','
    ,StreetName = 'StreetName : ' +
        case when ltrim(rtrim(SUBSTRING(completeaddress, 5, 2))) in ('N','S','E','W')
             then SUBSTRING(completeaddress, 7, LEN(completeaddress))
             else SUBSTRING(completeaddress, 6, LEN(completeaddress)) end
from @t
Output:
StreetNum : 1234, Direction : , StreetName : McKean Street
StreetNum : 1234, Direction : , StreetName : Mc Kean Street
StreetNum : 1234, Direction : E, StreetName : McKean Street
StreetNum : 1234, Direction : E, StreetName : Mc Kean Street
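The fixed SUBSTRING offsets assume every street number is four digits. If the numbers vary in length, a variation on the same idea (a sketch against the same @t sample, splitting on the first space with CHARINDEX; it assumes every address contains at least one space) would be:

```sql
-- Split on the first space rather than at fixed positions, so street
-- numbers of any length work. Assumes the @t sample table above.
select
     StreetNum  = left(completeaddress, charindex(' ', completeaddress) - 1)
    ,Direction  = case when token in ('N','S','E','W') then token else '' end
    ,StreetName = case when token in ('N','S','E','W')
                       then substring(rest, charindex(' ', rest) + 1, len(rest))
                       else rest end
from @t
cross apply (select rest = substring(completeaddress,
                                     charindex(' ', completeaddress) + 1,
                                     len(completeaddress))) a
cross apply (select token = case when charindex(' ', rest) > 0
                                 then left(rest, charindex(' ', rest) - 1)
                                 else rest end) b
```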
Niladri Biswas
---
SET NOCOUNT ON
DECLARE @ID uniqueidentifier,
@vFirstName nvarchar(255),
@vLastName nvarchar (10)
DECLARE customer CURSOR FOR
SELECT @ID,
FirstName,
LastName
FROM user
OPEN Customer
FETCH Customer INTO @ID,
@vFirstName,
@vLastName
WHILE @@Fetch_Status = 0
BEGIN
FETCH Customer INTO @ID,
@vFirstName,
@vLastName
Insert into test values (@ID,@vFirstName, @vLastName)
END
CLOSE Customer
DEALLOCATE Customer
RETURN
The above cursor works good, but what I want to change is this: instead of taking the ID from the user table, I want to
SET ID = newid(), but on adding this after DECLARE customer I am getting an error.
How can I get around this problem?
Thank You
Andyyy
---
AndyInUK wrote: The above cursor works good,
Are you sure? I think you lose your first record.
Why select the ID in your cursor if you don't use it. Then you should be able to add your set just before the insert - or even as part of it.
Bob
Ashfield Consultants Ltd
Proud to be a 2009 Code Project MVP
---
Ashfield wrote: you should be able to add your set just before the insert
Oh yeah, I was trying to do the SET way up after DECLARE customer; it works nicely before the Insert. That was a schoolboy error.
Ashfield wrote: I think you lose your first record.
Yes, I am actually losing one record; can you please tell me what I did wrong there?
Ashfield wrote: Why select the ID in your cursor if you don't use it.
Yes, I removed it once I started setting the ID.
Thanks for your help
---
In your WHILE loop you need to put your INSERT before your FETCH; you fetched your first record outside the loop, and the second FETCH overwrote it before it was ever inserted.
OPEN Customer
FETCH Customer INTO ......
WHILE @@Fetch_Status = 0
BEGIN
    Insert into test values (.....)
    FETCH Customer INTO ....
END
Pleased to help
---
I'd like to add a note that cursors are really bad for performance. This one is very easily converted to a WHILE loop. If you need help, let me know.
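For what it's worth, the cursor above can be rewritten without CURSOR syntax. A sketch (assuming the same user and test tables as in the question; the @rows staging table is my own addition):

```sql
-- Stage the rows with a row number so they can be walked one at a time
-- without a cursor. Column names and types are taken from the question.
DECLARE @rows table (rn int IDENTITY(1,1), FirstName nvarchar(255), LastName nvarchar(10))

INSERT INTO @rows (FirstName, LastName)
SELECT FirstName, LastName FROM [user]

DECLARE @i int, @max int, @fn nvarchar(255), @ln nvarchar(10)
SELECT @i = 1, @max = COUNT(*) FROM @rows

WHILE @i <= @max
BEGIN
    SELECT @fn = FirstName, @ln = LastName FROM @rows WHERE rn = @i
    INSERT INTO test VALUES (NEWID(), @fn, @ln)
    SET @i = @i + 1
END
```

That said, the row-by-row work is still there; a single set-based INSERT ... SELECT avoids it entirely.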
---
Cursors are slow because they process results row by row, and a WHILE loop does the same; how can it be faster, then?
---
Avoid cursors where possible. You could do this with one statement...
insert into test (id, vfirstname, vlastname)
select newid(), FirstName, LastName from [user]
---
Hi,
Try to understand the output of NewId()
If I issue SELECT NEWID(), the output will be something like
3B311450-65BB-4AF8-898B-0CD02C5086E2
It is a hyphenated alphanumeric string. If your destination table has the Id column as an integer type, then you need to either change the column type from integer to varchar, or extract only the numeric characters from NEWID() for future use.
I have created a sample for you (the id datatype is varchar). Look into it; I hope it gives you some insight.
declare @tblSource table(id int identity, FirstName varchar(50),LastName varchar(50))
declare @tblDest table(id varchar(max) , FirstName varchar(50),LastName varchar(50))
insert into @tblSource
select 'firstname1', 'lastname1' union all
select 'firstname2', 'lastname2' union all
select 'firstname3', 'lastname3' union all
select 'firstname4', 'lastname4' union all
select 'firstname5', 'lastname5'
Query:
insert into @tblDest (id,FirstName,LastName)
select NEWID(),FirstName,LastName
from @tblSource
select * from @tblDest
Output:
id FirstName LastName
D2A3D81C-F76E-44A4-9D47-83FBE4DDB76B firstname1 lastname1
0B827C0E-EB68-43EA-A423-B10575EAF572 firstname2 lastname2
EB4DB402-3D8F-4C09-9E37-004CE37DE1FD firstname3 lastname3
E6CE9239-1E95-4660-B5AC-00DBFEE28474 firstname4 lastname4
345F70CC-B957-41CC-977C-9FBA7A81E912 firstname5 lastname5
In case you need to strip out only the numbers, here is an example
declare @str varchar(max)
set @str = 'D2A3D81C-F76E-44A4-9D47-83FBE4DDB76B'
;with cte as (
    select 1 as rn
    union all
    select rn + 1 from cte where rn < LEN(@str)
),
cte2 as (
    select rn, chars
    from cte
    cross apply (select SUBSTRING(@str, rn, 1) chars) X
    where chars like '%[0-9]%'
)
select numericData = (
    select cast(chars as varchar(max))
    from cte2
    for xml path('')
)
Output:
numericData
23817644494783476
Niladri Biswas
modified on Tuesday, December 8, 2009 4:12 AM
---
I am retrieving a lot of data and want this to be as fast as possible.
Let's say I have a database with two tables: tablePerson and tableFiles. A person has one file, which is stored in a BLOB. In my solution as it stands, I select * from the person table, and while reading from it I select one row at a time from the table with the BLOB. This adds up to a lot of one-row queries against the BLOB table, which has 678,000 rows, and all these queries are taking a lot of time.
So, how can I improve performance on this? Any tips?
This is something similar to my code:
using (SqlCommand cmd = new SqlCommand("select * from tablePerson", _conn))
{
    SqlDataReader r = cmd.ExecuteReader();
    while (r.Read())
    {
        Person p = new Person();
        string documentID = Convert.ToString(r["p_docid"]);
        p.LocationToDocument = ExtractDocument(documentID);
    }
    r.Close();
    r.Dispose();
}
...
private string ExtractDocument(string docid)
{
    string filename = "";
    using (SqlCommand cmd = new SqlCommand("select d_docid, d_title, d_BLOB from tableDocument where d_docid=@paramDocID", _conn))
    {
        SqlParameter p = new SqlParameter("@paramDocID", System.Data.SqlDbType.Text);
        p.Value = docid;
        cmd.Parameters.Add(p);
        SqlDataReader r = cmd.ExecuteReader(System.Data.CommandBehavior.SequentialAccess);
        while (r.Read())
        {
        }
    }
    return filename;
}
---
Why not do it in one select with a join?
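A sketch of what that single statement might look like, assuming the table and column names from the question (tablePerson.p_docid matching tableDocument.d_docid):

```sql
-- One round trip: every person row arrives together with its document,
-- instead of issuing 678,000 single-row lookups against the BLOB table.
SELECT p.*, d.d_title, d.d_BLOB
FROM tablePerson p
JOIN tableDocument d ON d.d_docid = p.p_docid
```

On the C# side, a single SqlDataReader opened with CommandBehavior.SequentialAccess can then stream the BLOB column row by row.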
---
As Ashfield said, join the two tables so that you retrieve everything at once. I would add that you probably want paging as well: instead of processing 1 M records in one go, you can process 1,000 records at a time, etc.
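Since this looks like SQL Server 2005/2008, paging can be sketched with ROW_NUMBER() (the page size of 1,000 and the ordering column p_docid are illustrative, not from the question):

```sql
-- Page @page of the person table, @pagesize rows at a time.
DECLARE @page int, @pagesize int
SELECT @page = 1, @pagesize = 1000

;WITH numbered AS (
    SELECT p.*, rn = ROW_NUMBER() OVER (ORDER BY p.p_docid)
    FROM tablePerson p
)
SELECT *
FROM numbered
WHERE rn BETWEEN (@page - 1) * @pagesize + 1 AND @page * @pagesize
```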
---
Good Day All
I have a table with an ID field that was not an identity field. This field has no duplicates. Now I want to change this numeric field into an identity field, like this:
--STEP 7 ADD THE IDENTITY BACK IN TABLE MTM_ACTV_STAFF
ALTER TABLE [New_Jaco].[dbo].[MTM_ACTV_STAFF]
ALTER COLUMN [ID] IDENTITY(1,1)
but I get an error that says:
Msg 156, Level 15, State 1, Line 3
Incorrect syntax near the keyword 'IDENTITY'.
Thanks
Vuyiswa Maseko,
Spoted in Daniweb-- Sorry to rant. I hate websites. They are just wierd. They don't behave like normal code.
C#/VB.NET/ASP.NET/SQL7/2000/2005/2008
http://www.vuyiswamaseko.com
vuyiswa@its.co.za
http://www.itsabacus.co.za/itsabacus/
---
A quick guess (I never script these things, I use SSMS), but don't you need a data type in the ALTER, as in the CREATE?
ALTER COLUMN [ID] INT IDENTITY(1,1)
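If the data type alone does not fix it: as far as I know, SQL Server does not support adding the IDENTITY property to an existing column with ALTER COLUMN at all. A common workaround (a sketch against the table from the question; note that it renumbers the rows) is to add a new identity column and swap it in:

```sql
-- Add a fresh identity column, drop the old one, then rename.
-- Existing ID values are NOT preserved; to keep them you would copy the
-- data into a new table using SET IDENTITY_INSERT ... ON instead.
ALTER TABLE [New_Jaco].[dbo].[MTM_ACTV_STAFF]
    ADD [ID_new] INT IDENTITY(1,1)
GO
ALTER TABLE [New_Jaco].[dbo].[MTM_ACTV_STAFF]
    DROP COLUMN [ID]
GO
EXEC sp_rename 'New_Jaco.dbo.MTM_ACTV_STAFF.ID_new', 'ID', 'COLUMN'
```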
Never underestimate the power of human stupidity
RAH
---
Hi all, I have what might well be considered a stupid question, but fortunately none of you are my employer or a former varsity prof, so I'll ask away.
I've always endeavored to keep my databases normalized to the best of my ability and, for the most part, I think I'm fairly well set in that line of thinking.
At my current company though some of our databases are anything but normalized. When I challenged my boss about it he responded that it was done deliberately because it is not supposed to be a relational database. It is a DB that accepts, stores and processes transactions at quite a fierce tempo and they didn't want every INSERT or SELECT statement to have to internally cross reference lots of other tables.
Fair enough, it makes sense. But then it occurred to me that I probably don't know enough about DB design because I would never have thought of that. I would simply have designed a normalized DB because that's the way I'm programmed.
So I turned to Wikipedia, but even there I can't seem to find much information on non-normalized DBs, most likely because I don't really know what search terms to use.
So my question is:
1. When should a DB be normalized and when not?
2. Can anyone point me to an online article or two that deals with the topic?
3. Am I correct in assuming that a normalized DB is called a "Relational Database"?
4. What would you call a DB like ours here that is not normalized?
---
Your boss may actually be right (I work with batch processes, not transactions), because with every FK and index on a table, the CRUD operations need to do additional work to maintain the relationships. I still doubt it is a valid answer, though; only a reporting database should not be normalised, IMHO.
Can't help more than that, and I hope you find the ammo to shoot this one down.
---
Thanks, but I'm not disputing that my boss is right. I think he has a valid point, so I'm not really looking for ammo to shoot his argument down. I'd just like to make sure I really understand DB design and not blindly stick to normalized DBs if that's not always the best way to go.