Hi all, I am trying to consume a web service using T-SQL, but I am getting a null result.
The web service URL is http://localhost/SSISWebservice/service1.asmx
The invoked URL is http://localhost/SSISWebservice/service1.asmx/HelloWorld
I found two samples and tried them against a local web service, but neither of them worked: they either returned no results or generated an error.
Thanks
Mohammad Al Hoss
Development To Me Is A Pleasure more than a Job
|
Are you using MSXML2.XMLHTTP and T-SQL, or are you using a CLR sproc to read the web service?
If you're using the XMLHTTP variant, then you might want to check whether OLE automation is enabled:
sp_configure 'show advanced options', 1;
GO
RECONFIGURE;
GO
sp_configure 'Ole Automation Procedures', 1;
GO
RECONFIGURE;
GO
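Once that's enabled, the XMLHTTP call usually takes this shape. A minimal sketch, assuming your HelloWorld method accepts plain GET requests (the URL is the one from your post); each sp_OA call returns an HRESULT in @hr that is worth checking, since a failure in any step will leave @response NULL:
DECLARE @obj INT, @hr INT, @response VARCHAR(8000);
-- Create the XMLHTTP COM object via OLE automation
EXEC @hr = sp_OACreate 'MSXML2.XMLHTTP', @obj OUT;
-- Open a synchronous GET against the method URL and send it
EXEC @hr = sp_OAMethod @obj, 'open', NULL, 'GET',
    'http://localhost/SSISWebservice/service1.asmx/HelloWorld', 'false';
EXEC @hr = sp_OAMethod @obj, 'send';
-- Read the response body
EXEC @hr = sp_OAGetProperty @obj, 'responseText', @response OUT;
SELECT @response AS Response;
EXEC sp_OADestroy @obj;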
Hope this helps
I are Troll
|
I've encountered an unusual situation in an import program I wrote. From what I can determine, the root of the problem is that I read data from an Excel spreadsheet (through a data conversion component which changes all data types to DT_STR) into an unconstrained table (no FKs, all columns varchar(255)). For some reason, currency values are occasionally imported with what I assume is their floating point representation. Confusingly, this doesn't appear to be consistent behaviour within a given file, e.g. 10098.73 -> 10098.73 but 8978.98 -> 8978.9799999999996.
In a second package, the data is imported into a correctly typed table (there are a few lookups etc. done along the way too). The type of the currency column is numeric(15,2), but I don't seem to be able to catch the data truncation, as it is reported as a data overflow instead, and I don't want to ignore the error completely in case, e.g., text has been entered instead of a number.
I'm assuming the easier route is to figure out why the numbers are occasionally stored as floating point in the text - can anyone shed any light?
Failing that, I might just write a sproc that will truncate the text automatically...!!
It definitely isn't definatley
|
moon_stick wrote: I'm assuming the easier route is to figure out why the numbers are occasionally stored as floating point in the text - can anyone shed any light?
Because $2 divided by three people equals something that's hard to store in binary: it results in 0.666666~, and the number of bits available is limited. It's a digital representation of an analogue value.
Excel sometimes keeps the "calculated" values without rounding them. Those values are already imprecise, since you can't encode every fraction exactly in a limited number of bits. It keeps the unrounded value around and displays a value with the specified formatting: in this case, a currency with two positions after the decimal separator.
It would be easier to test whether the data can be interpreted as a numeric, and if so, round it to two decimals. Keep in mind that you don't want to use these (formatted) numbers for further calculations; you'd want the original floats for that.
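A minimal sketch of that test-and-round step in T-SQL, assuming a staging table dbo.StagingImport with a varchar column RawAmount (both names are illustrative). Note that ISNUMERIC is famously lenient - it accepts currency symbols and scientific notation - so treat this as a starting point rather than complete validation:
SELECT RawAmount,
       CASE
           -- Only convert values that parse as a number; everything else
           -- stays NULL so it can be flagged or rejected separately
           WHEN ISNUMERIC(RawAmount) = 1
               THEN CAST(ROUND(CAST(RawAmount AS FLOAT), 2) AS NUMERIC(15, 2))
           ELSE NULL
       END AS CleanAmount
FROM dbo.StagingImport;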
I are Troll
|
Greetings! My name is Christos and I come from Greece, happy to meet you all!
Anyways,
I should start by describing my problem. We use an ERP program in conjunction with SQL Server and lots of tables that keep track of product records, customers and all those kinds of things. I am building a fairly simple warehouse inventory application. I have created a custom Inventory table that I would populate with records like {stock_code, Description, Quantity_remain, quantity_avail}. Those records already exist in 2 different tables in my database. The aim is that my program fetches 10 random codes daily from those 2 tables, bringing together the columns that my new custom table has, plus three more columns - inventory, date, user - with a query like this:
select top 10 stck.code, stck.descr, stck.active, stck.group_code, stck.categ_code,
stck_x_th.xwr_code, stck_x_th.thesi_code, stck_x_th.quant_remain
from stck left outer join stck_x_th on stck.code = stck_x_th.stck_code
order by newid()
I use the stck table to fetch product info like its code, description, group_code and categ_code, and then I outer join stck_x_th, which contains records concerning the quantities remaining or available associated with each code, and the "places" those quantities reside.
(those exact same columns are the ones my custom inventory table has)
I store them in a DataGridView control, with only one editable column - inventory - and then, when the user fills out the inventory quantities, I want to save the datagrid "as it is" into my new custom table.
I am creating a databound grid view control using one table adapter (which refers to the inventory table). Then I populate my table adapter using the aforementioned query with a FillBy method to populate the DataGrid (and therefore my table in SQL) with the exact same values my query returned from the other tables. But whenever I try to update the table, concurrency exceptions are raised.
It works like a charm if I enter the values in the grid cells manually myself, but not if I populate the grid via a query and then hit update. Any suggestions please?
I would appreciate it if you could take the trouble to post some sample code.
Thanks in Advance
|
So if I understand this correctly, you are querying the database, populating the DataGridView, and wanting to rely on the DataGridView's update command to put the data back into the database.
This is probably the ultimate corruption of a concept that I have ever seen. When MS supplied the DGV with command methods to do the CRUD work, it was designed for simple record-at-a-time work (and it is a great disservice to developers that this type of thing happens). What you are doing should be done entirely in the database; it should be performed by a stored procedure and never have anything to do with the UI.
You need to look into stored procedures and T-SQL; data warehousing is always done on the server side.
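As a rough illustration of the server-side approach - a sketch only, assuming the custom table is dbo.Inventory with the columns you listed, and that stck_x_th has a quant_avail column (adjust the names and the column mapping to your real schema):
CREATE PROCEDURE dbo.usp_CreateDailyInventorySheet
    @User NVARCHAR(50)
AS
BEGIN
    SET NOCOUNT ON;
    -- Copy 10 random products straight into the custom table;
    -- the inventory column starts out NULL for the user to fill in
    INSERT INTO dbo.Inventory
        (stock_code, Description, Quantity_remain, quantity_avail,
         inventory, [date], [user])
    SELECT TOP 10
        stck.code,
        stck.descr,
        stck_x_th.quant_remain,
        stck_x_th.quant_avail,  -- assumed column name
        NULL,
        GETDATE(),
        @User
    FROM stck
    LEFT OUTER JOIN stck_x_th ON stck.code = stck_x_th.stck_code
    ORDER BY NEWID();
END
The grid then binds to dbo.Inventory alone, so the adapter's UPDATE has a single table to reconcile against instead of a multi-table join.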
Never underestimate the power of human stupidity
RAH
|
Hi guys, I know strictly speaking this is not a database question, but since it involves massive data manipulation, and from past experience I know there are some really intelligent and creative people here, hopefully bouncing some ideas around could shed some light on a possible solution.
Conceptually, I need to compare the patterns of objects A, B, C, D, E, F. Each object has 5 attributes (a, b, c, d, e), the value of which can range from 0 to 9.
Object A B C D E F
a 6 9 1 9 9 3
b 8 3 5 1 1 3
c 7 0 8 3 5 8
d 7 5 6 5 5 2
e 8 9 2 2 8 0
The goal is to find out which objects have the highest match of attribute values. For example, in the above table, D & E have 3 matches, B & D have 2 matches, C & D have 1 match...etc.
Of course a simple loop comparing each object's pattern to every other would suffice if there were only 6 objects and 5 attributes, as above. But my eventual goal is to categorize some 10000 objects with 20 attributes.
For some reason I think there must be an elegant solution to this, possibly involving some ingenious/creative data sorting using arrays. But then again, I've been in programming long enough to know that simple concepts sometimes require unbelievably complex programming.
I'd greatly appreciate any suggestions or ideas. Many thanks again.
|
E has either 2 or 4 matches, otherwise I have missed your logic.
Are you counting each column that matches? E.g. if A has 1 match with each of 3 columns, is that one match or 3?
Do the values of the matches affect your requirement, e.g. is 3x9 worth more than 3x5, or is 4x2 < 2x5?
I can think of a few ways, all simple and iterative, so I'll not waste your time.
Never underestimate the power of human stupidity
RAH
|
Seems to me he is comparing each possible pair of two columns, so for N columns, that's N*(N-1)/2 column comparisons, where identical field values constitute a match. And he is contemplating N=10000,
and it looks like a huge job, possibly not a good approach to reach whatever the higher goal might be.
|
Thank you Holmes for the reply.
In essence I am trying to compare each column's pattern against the others, so if column A and column B have three attributes that are the same, then that's 3 matches for that pair.
Luc is right in saying there'll be n(n-1)/2 comparisons, given n columns. Given roughly 10000 columns, that's about 50 million comparisons. However, that assumes that during each comparison loop you can only compare 1 versus 1 at a time. Now I am trying to figure out whether there's an ingenious way to solve this problem efficiently (1 hour of processing time is okay by me).
Ultimately, the purpose of this exercise is to write code that can categorize various random patterns into groups. So for example, after this is done, you can observe and say, "Group 1 has a higher attribute a, lower attribute b, lower attribute c", and "Group 2 has a below-average attribute a, but a very high attribute d"... etc.
|
You could do what you want using SQL.
USE tempdb
GO
CREATE TABLE #Data
(
ObjectId INT NOT NULL,
AttributeId TINYINT NOT NULL,
Value TINYINT NOT NULL
)
GO
--Object A B C D E F
--a 6 9 1 9 9 3
--b 8 3 5 1 1 3
--c 7 0 8 3 5 8
--d 7 5 6 5 5 2
--e 8 9 2 2 8 0
INSERT INTO #Data (ObjectId, AttributeId, Value) VALUES (1, 1, 6);
INSERT INTO #Data (ObjectId, AttributeId, Value) VALUES (1, 2, 8);
INSERT INTO #Data (ObjectId, AttributeId, Value) VALUES (1, 3, 7);
INSERT INTO #Data (ObjectId, AttributeId, Value) VALUES (1, 4, 7);
INSERT INTO #Data (ObjectId, AttributeId, Value) VALUES (1, 5, 8);
INSERT INTO #Data (ObjectId, AttributeId, Value) VALUES (2, 1, 9);
INSERT INTO #Data (ObjectId, AttributeId, Value) VALUES (2, 2, 3);
INSERT INTO #Data (ObjectId, AttributeId, Value) VALUES (2, 3, 0);
INSERT INTO #Data (ObjectId, AttributeId, Value) VALUES (2, 4, 5);
INSERT INTO #Data (ObjectId, AttributeId, Value) VALUES (2, 5, 9);
INSERT INTO #Data (ObjectId, AttributeId, Value) VALUES (3, 1, 1);
INSERT INTO #Data (ObjectId, AttributeId, Value) VALUES (3, 2, 5);
INSERT INTO #Data (ObjectId, AttributeId, Value) VALUES (3, 3, 8);
INSERT INTO #Data (ObjectId, AttributeId, Value) VALUES (3, 4, 6);
INSERT INTO #Data (ObjectId, AttributeId, Value) VALUES (3, 5, 2);
INSERT INTO #Data (ObjectId, AttributeId, Value) VALUES (4, 1, 9);
INSERT INTO #Data (ObjectId, AttributeId, Value) VALUES (4, 2, 1);
INSERT INTO #Data (ObjectId, AttributeId, Value) VALUES (4, 3, 3);
INSERT INTO #Data (ObjectId, AttributeId, Value) VALUES (4, 4, 5);
INSERT INTO #Data (ObjectId, AttributeId, Value) VALUES (4, 5, 2);
INSERT INTO #Data (ObjectId, AttributeId, Value) VALUES (5, 1, 9);
INSERT INTO #Data (ObjectId, AttributeId, Value) VALUES (5, 2, 1);
INSERT INTO #Data (ObjectId, AttributeId, Value) VALUES (5, 3, 5);
INSERT INTO #Data (ObjectId, AttributeId, Value) VALUES (5, 4, 5);
INSERT INTO #Data (ObjectId, AttributeId, Value) VALUES (5, 5, 8);
INSERT INTO #Data (ObjectId, AttributeId, Value) VALUES (6, 1, 3);
INSERT INTO #Data (ObjectId, AttributeId, Value) VALUES (6, 2, 3);
INSERT INTO #Data (ObjectId, AttributeId, Value) VALUES (6, 3, 8);
INSERT INTO #Data (ObjectId, AttributeId, Value) VALUES (6, 4, 2);
INSERT INTO #Data (ObjectId, AttributeId, Value) VALUES (6, 5, 0);
GO
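-- Self-join the EAV rows: pair each object only with later objects
-- (d2.ObjectId > d1.ObjectId avoids double-counting), keep rows where
-- the same attribute carries the same value, then count matches per pair.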
SELECT SourceId,
MatchId,
COUNT(*) AS TotalMatches
FROM
(
SELECT d1.ObjectId AS SourceId,
d2.ObjectId AS MatchId
FROM #Data d1
INNER JOIN #Data d2
ON d2.ObjectId > d1.ObjectId
WHERE d2.AttributeId = d1.AttributeId
AND d2.Value = d1.Value
) t
GROUP BY SourceId, MatchId
ORDER BY COUNT(*) DESC
GO
DROP TABLE #Data
GO
|
Wow it really worked! Thank you so much for the help Russell.
This result looks awesome:
Object Object #ofMatches
4 5 3
2 4 2
2 5 2
3 4 1
1 5 1
2 6 1
3 6 1
Just curious, did it take you a long time to figure out how to do this?
|
You're welcome.
It took me more time to type in the sample code than it did to work out how to do it and run it.
The Entity-Attribute-Value (EAV) pattern is well known in database design. Many people have tried to use it instead of the standard normalized approach. It is not generally something I would recommend, but it suits your requirements perfectly.
One thing to remember is to put a clustered index on ObjectId and AttributeId to prevent table scans. This isn't really an issue with small data sets, but it will become one when you have more objects and attributes.
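For the #Data table from the sample above, that would be something along these lines (the index name is just illustrative):
-- Clustered index so the self-join can seek on (ObjectId, AttributeId)
-- instead of scanning the whole table
CREATE CLUSTERED INDEX IX_Data_Object_Attribute
    ON #Data (ObjectId, AttributeId);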
Ian
|
I need to import many Excel files into SQL Server 2005.
These Excel files have the same fields but different values.
for example:
table1
a b c
1 2 3
4 5 6
table2
a b c
22 33 44
77 88 66
Because there are many similar files in one directory,
I want to import all these files into SQL Server 2005
and build these tables automatically.
I don't know how to do it.
|
caiguosen wrote: I want to import all these files into SQL Server 2005
and build these tables automatically.
I don't know how to do it.
There's more than one way to do so. You might be interested in this[^] KB article.
I are Troll
|
I know this solution, but I have several thousand Excel files,
and their version is Excel 2.1; I still don't know how to import so many Excel 2.1 files into SQL Server.
I have a solution, but I first need to save the Excel files to a later version, and it can only import one file into SQL Server at a time.
For example:
SELECT * INTO XLImport5 FROM OPENROWSET('Microsoft.Jet.OLEDB.4.0',
'Excel 8.0;Database=D:\tt.SZ.xls', 'SELECT * FROM [tt$]')
|
caiguosen wrote: I know this solution, but I have several thousand Excel files,
and their version is Excel 2.1; I still don't know how to import so many Excel 2.1 files into SQL Server.
You're opening the Excel file as a database over Jet - that means you're importing from a lot of databases. You can't simply concatenate the Excel files either, as the Excel format is limited to 65536 rows.
There's an article on importing multiple files here[^] that might help.
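The usual T-SQL shape of a multi-file import is a loop over a directory listing. A rough sketch, assuming xp_cmdshell is enabled, the files live in D:\Imports, each workbook has a sheet named Sheet1, and a target table dbo.AllImports with matching columns already exists - note this uses the Jet provider, so it still only works for files in a format Jet can read:
-- Collect the file names from the directory
CREATE TABLE #Files (FileName NVARCHAR(260));
INSERT INTO #Files (FileName)
EXEC master..xp_cmdshell 'dir /b D:\Imports\*.xls';

DECLARE @file NVARCHAR(260), @sql NVARCHAR(MAX);
DECLARE file_cursor CURSOR FOR
    SELECT FileName FROM #Files WHERE FileName LIKE '%.xls';
OPEN file_cursor;
FETCH NEXT FROM file_cursor INTO @file;
WHILE @@FETCH_STATUS = 0
BEGIN
    -- Build and run one OPENROWSET import per file
    SET @sql = N'INSERT INTO dbo.AllImports
                 SELECT * FROM OPENROWSET(''Microsoft.Jet.OLEDB.4.0'',
                 ''Excel 8.0;Database=D:\Imports\' + @file + N''',
                 ''SELECT * FROM [Sheet1$]'')';
    EXEC sp_executesql @sql;
    FETCH NEXT FROM file_cursor INTO @file;
END
CLOSE file_cursor;
DEALLOCATE file_cursor;
DROP TABLE #Files;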
I are Troll
|
Thank you very much, Eddy Vluggen!
I'll try it.
|
My pleasure, and good luck
I are Troll
|
The version of the Excel files which I want to import is Excel 2.1;
if I use the following solution to import them, it does not work.
For example:
select *
into dbo.Temple FROM OPENROWSET('Microsoft.Jet.OLEDB.4.0',
'Excel 8.0;Database=D:\a.xls;HDR=YES',
'SELECT * FROM [a$]')
The fact is that I have to open each one and save it as a later version. Unfortunately, I have thousands of Excel files, so I can't open them one by one.
Are there any solutions that would let me change their version quickly?
Or any solution that can help me import Excel 2.1 files into SQL Server
directly?
|
caiguosen wrote: The version of the Excel files which I want to import is Excel 2.1;
if I use the following solution to import them, it does not work.
Can you try it with the Excel 5.0 format, like this?
select *
into dbo.Temple FROM OPENROWSET('Microsoft.Jet.OLEDB.4.0',
'Excel 5.0;Database=D:\a.xls;HDR=YES',
'SELECT * FROM [a$]')
caiguosen wrote: Are there any solutions that would let me change their version quickly?
I don't know of any; you'd have to Google for those. I'd try to read the file using C#, and write it directly to the database.
I are Troll
|
I tried the following solution, but it still does not work;
it cannot import Excel 2.1 files into SQL Server.
select *
into dbo.Temple FROM OPENROWSET('Microsoft.Jet.OLEDB.4.0',
'Excel 5.0;Database=D:\a.xls;HDR=YES',
'SELECT * FROM [a$]')
Do you have some C# code that can import Excel 2.1 into SQL Server 2005 quickly?
I googled some code written in C++, but most of it cannot read Excel 2.1, and the code that can read it is really slow.
The problem is that I have thousands of Excel files, not just one.
Or do you have a solution that can save many Excel 2.1 files as a later version?
If I open a file and save it as a newer version, it can be imported.
|
caiguosen wrote: Do you have some C# code that can import Excel 2.1 into SQL Server 2005 quickly?
I'm sorry, but I don't have that code. If I did, I would've posted it.
caiguosen wrote: I googled some code written in C++, but most of it cannot read Excel 2.1, and the code that can read it is really slow.
The problem is that I have thousands of Excel files, not just one.
There's no "quick" way to get those Excel files into the database. Version 2.1 is an old file format; perhaps you could find a tool that converts those files in batch, but I doubt it. If you can read the files, even if it's slow, then you can convert them.
If you can write a console application that reads a single file, then you can re-use it to import the other files. If "slowly" means it takes a minute per file, then 1000 files take 1000 minutes. That's roughly 16 hours, so the conversion would be done within a day.
I are Troll
|
OK, thank you for your help.
I think I need to look for other solutions; otherwise it will cost me too much time.
|
With reference to the following article:
http://www.codersbarn.com/post/2008/10/12/Bind-CheckBoxList-to-DataSet.aspx
I created the following tables and have a few questions:
1) Should I set all the MusicFeature values to zero by default?
2) How should I create the foreign key constraint for ProfileID? (see the sketch after the sample data)
3) ProfileID is used in tbl_Profile and is its primary key.
4) How should this table be fixed? Also, I need to keep in mind what should be NULL.
CREATE TABLE [dbo].[tbl_Music](
    [MusicID] [int] IDENTITY(1,1) NOT NULL,
    [ProfileID] [int] NULL,
    [MusicText] [nvarchar](64) NOT NULL,
    [MusicFeature] [bit] NULL,
    CONSTRAINT [PK_tbl_lookup_music] PRIMARY KEY CLUSTERED
    (
        [MusicID] ASC
    ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
--------------------------------
CREATE TABLE [dbo].[tbl_Profile](
    [ProfileID] [int] IDENTITY(1,1) NOT NULL,
    [UserID] [nvarchar](50) NULL,
    [UserName] [nvarchar](50) NULL,
    CONSTRAINT [PK_tbl_Profile] PRIMARY KEY CLUSTERED
    (
        [ProfileID] ASC
    ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
) ON [PRIMARY]
GO
SET ANSI_PADDING OFF
-----------------------------------
Sample data for tbl_Music:
MusicID ProfileID MusicText MusicFeature
1 NULL Prefer not to say NULL
2 NULL African NULL
3 NULL Blues NULL
4 NULL Caribbean NULL
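For (2), this is the shape of constraint I have in mind - a sketch only, and I'm not sure it's right:
-- Tentative: make tbl_Music.ProfileID reference tbl_Profile.ProfileID;
-- ProfileID stays nullable so the generic lookup rows (NULL ProfileID) still work
ALTER TABLE [dbo].[tbl_Music]
    ADD CONSTRAINT [FK_tbl_Music_tbl_Profile]
    FOREIGN KEY ([ProfileID]) REFERENCES [dbo].[tbl_Profile] ([ProfileID]);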