|
ChiYung wrote:
(coz the item in this bookstore includes many things such as gifts, cards....etc.)
Oh, that makes a bit more sense then... I thought that they were all books.
Create one table that contains only the ProductID, a ProductType (normal, consignment, etc.), and perhaps a text description; then create secondary tables that use the ProductID as a foreign key to provide the extra data.
This really goes back to the first idea I had, though: keep everything that is relevant to all products in one table (and no matter WHAT it is that you sell, there will be relevant similarities between them all).
Paul Watson wrote:
"At the end of the day it is what you produce that counts, not how many doctorates you have on the wall."
George Carlin wrote:
"Don't sweat the petty things, and don't pet the sweaty things."
Jörgen Sigvardsson wrote:
If the physicists find a universal theory describing the laws of universe, I'm sure the a**hole constant will be an integral part of that theory.
|
|
|
|
|
Thanks for your reply.
But I still have questions. I think I need to give you more information about my current problem.
HERE IS MY ORIGINAL DATABASE DESIGN:
For a normal product, I have to store these attributes:
<code>- Prod_id
- Prod_title
- Prod_ISBN
- Prod_publisher
- Prod_cost
- Prod_retailprice
- Prod_description
- Category_id
- Prod_max (Max On Hand)
- Prod_min (Min On Hand)
- Shelf_id
- Status_id</code>
For a consignment product, I have to store these attributes:
<code>- Cons_Prod_id
- Cons_Prod_description
- Cons_cost
- Cons_retailprice
- Cons_member_id [FK] (references the table that stores info about the person who consigned this item)</code>
For an invoice, I have:
<code>- Invoice_number
- Prod_id [FK]
- Cons_Prod_id [FK]
- Qty</code>
I don't think it makes sense to have both Prod_id and Cons_Prod_id in the invoice table, because a product that has a Prod_id won't have a Cons_Prod_id, and vice versa. One of the two columns will always be wasted space.
So I decided to modify the design and use an inheritance relationship, so that the invoice table needs only ONE Prod_id.
However, your suggestion only solves the problem of how to distinguish normal products from consignment products.
I still need a table to store the extra information, such as Cons_member_id, because some columns don't apply to both product types (e.g. the consignment table does not need Prod_max and Prod_min, but does need Cons_member_id).
So I'm stuck here now...
I really appreciate your help with this problem!
Thanks a lot!
|
|
|
|
|
tblProducts
<code>
-ProdID [PK]
-ProdDescription
-ProdType [FK]
-Cost
-RetailPrice
</code>
tblProdNormal
<code>
-ProdID [FK]
-Title
-ISBN
-Publisher
-CategoryID
-Max (Max On Hand)
-Min (Min On Hand)
-ShelfID
-StatusID
</code>
tblProdCons
<code>
-ProdID [FK]
-MemberID [FK] (references the table that stores info about the person who consigned this item)
</code>
This does a pretty good job of getting things down to the common attributes.
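A rough T-SQL sketch of that layout (the column types are my assumptions, and the tblInvoiceLine table at the end is a hypothetical addition showing how a single ProdID solves the earlier invoice problem):

```sql
-- Supertype: attributes common to every product
CREATE TABLE tblProducts (
    ProdID          int IDENTITY(1,1) PRIMARY KEY,
    ProdDescription varchar(255),
    ProdType        char(1) NOT NULL,  -- e.g. 'N' = normal, 'C' = consignment
    Cost            money,
    RetailPrice     money
)

-- Subtype: normal (book) products only
CREATE TABLE tblProdNormal (
    ProdID     int PRIMARY KEY REFERENCES tblProducts(ProdID),
    Title      varchar(255),
    ISBN       varchar(20),
    Publisher  varchar(100),
    CategoryID int,
    MaxOnHand  int,
    MinOnHand  int,
    ShelfID    int,
    StatusID   int
)

-- Subtype: consignment products only
CREATE TABLE tblProdCons (
    ProdID   int PRIMARY KEY REFERENCES tblProducts(ProdID),
    MemberID int NOT NULL  -- FK to the consigning member's table
)

-- Invoice lines now need only ONE product key,
-- regardless of the product's type
CREATE TABLE tblInvoiceLine (
    InvoiceNumber int,
    ProdID        int REFERENCES tblProducts(ProdID),
    Qty           int,
    PRIMARY KEY (InvoiceNumber, ProdID)
)
```

Because both subtype tables share the supertype's key, a join from the invoice line to tblProducts always works, and the type-specific data is joined in only when needed.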
|
|
|
|
|
How can I edit, save, and delete data in a DataGrid whose data source is a DataTable in a DataSet, so that it behaves like Microsoft Access?
|
|
|
|
|
Hi,
Please try this website, it will help you: www.asp.net. Go to Tutorials, then Web Controls, then DataGrids. Everything about the DataGrid is there.
Dabuskol
|
|
|
|
|
Thanks, I've found the solution: just put the EndCurrentEdit and Update calls for the DataSet in the CurrentCellChanged event of the DataGrid. I'm using a Windows application; I don't know whether it is the same in ASP.NET or not.
|
|
|
|
|
I have 16 million records in SQL Server 2000. I'm retrieving them using a stored procedure in which the dates "from" and "to" are my criteria, and at the same time I group all the records. Is this the best way? Please give advice...
My stored procedure works, although I'm not satisfied with the response time: for 700,000 records it took 1 minute and 38 seconds...
|
|
|
|
|
Check the indexes on the table you are selecting from. Is there an index on the date column?
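For example, assuming a hypothetical tblSales table whose SaleDate column is used in the date-range criteria, the index might look like:

```sql
-- An index on the date column lets SQL Server seek to the
-- requested range instead of scanning all 16 million rows.
CREATE INDEX IX_Sales_SaleDate ON tblSales (SaleDate)

-- If the GROUP BY columns are appended after the date column,
-- the same index can sometimes cover the grouping as well, e.g.:
-- CREATE INDEX IX_Sales_SaleDate_Cat ON tblSales (SaleDate, CategoryID)
```

Checking the query plan before and after (with SHOWPLAN) will confirm whether the range filter actually uses the index.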
Tomas Rampas
------------------------------
gedas CR s.r.o.
System analyst, MCP
TGM 840,
293 01 Mlada Boleslav,
Czech Republic
Telefon/phone +420(326)711411
Telefax/fax +420(326)711420
rampas@gedas.cz
http://www.gedas.com/
------------------------------
To be or not to be is true...
George Boole
|
|
|
|
|
Yes, there are indexes on all of the fields that are part of the GROUP BY.
Thank you for the effort. Let's close this item and I will create another one for the dynamic WHERE clause. I think my stored procedure is working fine.
|
|
|
|
|
The ADO Data Control shows a dialog box for building a connection string for the OLE DB providers present on the system. How can I make this dialog myself, for building connection strings at runtime?
|
|
|
|
|
<code>data source=SERVERNAME;initial catalog=DATABASENAME;password=PASSWORD;persist security info=True;user id=USERID;</code>
Try this one...
|
|
|
|
|
Hello,
I have got a problem with $SUBJECT.
When I call the Clear method of the DataSet, the data is no longer available, but the memory usage (per Task Manager) remains at the same level. So when I refill the DataSet, the memory allocation rises.
Any suggestions?
Thanks.
Tom
PS Remove _nospam to contact me via e-mail.
|
|
|
|
|
The free operation basically works by setting all the row references of all the tables to null (or Nothing, depending on your favourite language).
The actual freeing of memory is done by the garbage collector whenever it next decides to run.
The size you see in the Task Manager is (by default) the process's working set size - the set of all memory pages currently in physical memory for this process. Note that shared pages are counted once for each process, so the sum of this column can exceed the size of your physical memory.
If memory demands get tight, Windows will swap out pages that haven't been referenced recently. If there's an area of the managed heap that is all garbage waiting to be collected, which is one or more pages in size, this might get swapped out. Because the memory is writable, it will be written to the page file rather than just discarded.
The garbage collector won't reduce the virtual memory size of the managed heap unless it opts to compact the heap (moving objects down in memory). It does this periodically, IIRC.
I wouldn't worry about it.
|
|
|
|
|
|
Hi,
Simple question :
I have a table and one column is ID (int identity(1,1) not null).
Say I have 5 rows:
<code>ID  Name  Color
1   Bob   White
2   Jane  Black
3   Lucy  Red
4   Gang  Brown
5   Dub   Yellow</code>
If I delete row 3, the remaining IDs are 1, 2, 4, 5.
Is it possible to make the remaining IDs after the deletion become 1, 2, 3, 4?
That is, for the IDs above the deleted one to decrement.
__________________
Best regards,
Exceter.
|
|
|
|
|
You can create a DELETE trigger on this table and update the records with an ID greater than the deleted ID.
|
|
|
|
|
Sir,
Sorry for such a question, but I am new to SQL.
Could you show how that trigger is written?
Respectfully
exceter.
|
|
|
|
|
Why would you want to do this?
If your ID column is referenced from somewhere else, you will have to update all those tables too.
Let the hole be! Especially if you're using IDENTITY.
(It can be done, but it will be some work, as you first have to temporarily SET IDENTITY_INSERT ON for your table, then insert row 4 in row 3's place, delete row 4, insert row 5 in row 4's place, and so on. Which you would of course do in a loop.)
If you really, really need this functionality, it is better NOT to use an IDENTITY column, since that allows you to use an UPDATE statement instead (possibly in a trigger or a stored proc...).
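For completeness, here is a minimal sketch of the trigger idea from earlier in the thread, assuming a hypothetical table tblItems with a plain int ID column (not IDENTITY, as recommended above). It handles single-row deletes only; multi-row deletes would need more care:

```sql
CREATE TRIGGER trgCloseIdGap ON tblItems
FOR DELETE
AS
    -- Shift every ID above the deleted one down by one,
    -- closing the gap left by the delete.
    -- 'deleted' is the pseudo-table holding the removed row(s).
    UPDATE tblItems
    SET ID = ID - 1
    WHERE ID > (SELECT MIN(ID) FROM deleted)
```

Note the warning above still applies: any table that references these IDs from elsewhere would be left pointing at the wrong rows.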
|
|
|
|
|
Hi, I have a problem listing all the records in my mdb file.
steps.
1. I made a mymdb.mdb file.
2. added some records.
3. deleted some records.
4. added new records.
5. repeat 2,3,4 steps several times.
Finally, when I list all the records in the mdb file (I used MoveFirst(), read the record, and MoveNext() until all the records were read), the records added in step 4 appear somewhere near the records deleted in step 2.
I don't know whether this behavior is caused by my code or whether it is the nature of mdb files.
I want new records to always be placed at the end of the mdb file.
How can I do that?
Thanks.
|
|
|
|
|
Wormhole5230 wrote:
I don't know whether this behavior is caused by my code or whether it is the nature of mdb files
This is the nature of any DBMS:
you have no control over the physical order of records in a data file (unless you create a clustered index in SQL Server, and even in that case I'm not sure about the order). Access and SQL Server fill the space marked for deletion with new records.
Need a solution?
Add an Autonumber field and end your queries with ORDER BY ID.
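For example, assuming a hypothetical tblRecords table with an Autonumber column named ID:

```sql
-- Rows come back in insertion order regardless of where
-- the engine physically stored them inside the file.
SELECT * FROM tblRecords ORDER BY ID
```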
|
|
|
|
|
Hi
I just want to ask: does Access work on a network? If yes, how do I connect its file to an application?
Thanks in advance
|
|
|
|
|
Access 'works' on a network by having the Jet database engine running on the client's computer, accessing the .mdb file through file-sharing on the server (i.e. either through a mapped network drive letter or using a UNC path).
Basically, all clients open and lock the pieces of the file they need to use at any given time.
This can be a bit precarious and can sometimes lead to data corruption if the locking isn't exactly right.
It's probably better to use the SQL Server Desktop Engine[^] for shared data. This supports up to five concurrent queries (you can do more, but there's a query governor that wrecks performance). It's basically a governed version of SQL Server 2000.
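For what it's worth, connecting an application to a shared .mdb over the network typically uses a Jet OLE DB connection string pointing at a UNC path (the server, share, and file names below are placeholders):

```
Provider=Microsoft.Jet.OLEDB.4.0;Data Source=\\MyServer\MyShare\mydb.mdb;
```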
|
|
|
|
|
Mike Dimmick wrote:
This can be a bit precarious and can sometimes lead to data corruption if the locking isn't exactly right
Sorry, but this is flat out wrong and is perpetuating a myth to a new generation. There are no locking concerns for the programmer that can lead to data corruption.
The rest of what you said I can agree to though!
------------
|
|
|
|
|
Maybe I should rephrase:
It's quite difficult to do distributed locking on regions of a single file over the network. Historically, Jet hasn't always got it right, which can lead to inconsistent data in the data file.
You should ensure that all your clients have the latest service pack - currently Service Pack 7[^] - for Jet 4.0 in order to reduce the possibility of errors.
The situation does not so easily arise with MSDE or SQL Server because the file locking is all performed on the server computer, not distributed.
See this Knowledge Base article[^] for more details on how to minimise the possibility of corruption.
For preference, I would tend to use Access/Jet only for small databases with small volumes, with a limited number of concurrent operations, using a local data file. It makes a reasonable backing store for a web service if there aren't very many concurrent users.
|
|
|
|
|
Ahh, you linked to a Jet 3.0 article; it's possible that Jet 3.0 had more problems, but we've been using Jet 4.0 in a commercial networked application with thousands of users worldwide. Data corruption is exceedingly rare: I've probably seen it no more than 15 times with our product since we released in early 2000, and in every case it came down to an outdated Jet driver or an external hardware failure/lockup of some sort.
That being said, I agree with everything else you said; in fact, we are switching to MSDE now that enough of the world has a fast enough internet connection that we can include it with our software, which is distributed over the internet only.
------------
|
|
|
|