|
Hi,
Please try this website, it will help you: www.asp.net. Go to Tutorials, then Web Controls, then DataGrids. Everything about the DataGrid is there.
Dabuskol
|
|
|
|
|
Thanks, I've found the solution: just put the EndCurrentEdit and DataSet Update code in the CurrentCellChanged event of the DataGrid. I'm using a Windows application, so I don't know whether it's the same in ASP.NET or not.
|
|
|
|
|
I have 16 million records in SQL Server 2000. I'm retrieving them using a stored procedure in which the dates "from" and "to" are my criteria, and at the same time I group all the records. Is this the best way? Please give advice.
My stored procedure is working, although I'm not satisfied with the response time: for 700,000 records it took 1 minute and 38 seconds.
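For reference, a minimal sketch of the general shape of such a procedure (the table and column names here are hypothetical, not my actual schema):
-- Hypothetical sketch only; table and column names are made up.
CREATE PROCEDURE GetSummaryByDateRange
    @from datetime,
    @to   datetime
AS
    SELECT CustomerID, COUNT(*) AS RecordCount, SUM(Amount) AS Total
    FROM   Orders
    WHERE  OrderDate >= @from AND OrderDate < @to
    GROUP BY CustomerID
GO
The WHERE clause uses >= @from AND < @to so that the range test can use an index on the date column.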
|
|
|
|
|
Check the indexes on the table you are selecting from. Is there an index on the date column?
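For example, assuming a hypothetical Orders table whose OrderDate column is used in the date-range criteria:
-- Hypothetical example: index the column used for the date range
CREATE INDEX IX_Orders_OrderDate ON Orders (OrderDate)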
Tomas Rampas
------------------------------
gedas CR s.r.o.
System analyst, MCP
TGM 840,
293 01 Mlada Boleslav,
Czech Republic
Telefon/phone +420(326)711411
Telefax/fax +420(326)711420
rampas@gedas.cz
http://www.gedas.com/
------------------------------
To be or not to be is true...
George Boole
|
|
|
|
|
Yes, there were indexes on all the fields that are part of the GROUP BY.
Thank you for the effort. Let's close this item, and I will create another one for the dynamic WHERE clause. I think my stored procedure is working fine.
|
|
|
|
|
The ADO DC shows a dialog box for building a connection string for the OLE DB providers present on the system. How can I show this dialog myself for building connection strings at runtime?
|
|
|
|
|
Try this one...
data source=SERVERNAME;initial catalog=DATABASENAME;password=PASSWORD;persist security info=True;user id=USERID;
|
|
|
|
|
Hello,
I have a problem with $SUBJECT.
When I call the Clear method of the DataSet, the data is no longer available, but the memory usage (according to Task Manager) remains at the same level. So when I refill the DataSet, the memory allocation keeps rising.
Any suggestions?
Thanks.
Tom
PS Remove _nospam to contact me via e-mail.
|
|
|
|
|
The clear operation is performed basically by setting all the row references of all the tables to null (or Nothing, depending on your favourite language).
The actual freeing of memory is done by the garbage collector whenever it next decides to run.
The size you see in the Task Manager is (by default) the process's working set size - the set of all memory pages currently in physical memory for this process. Note that shared pages are counted once for each process, so the sum of this column can exceed the size of your physical memory.
If memory demands get tight, Windows will swap out pages that haven't been referenced recently. If there's an area of the managed heap that is all garbage waiting to be collected, which is one or more pages in size, this might get swapped out. Because the memory is writable, it will be written to the page file rather than just discarded.
The garbage collector won't reduce the virtual memory size of the managed heap unless it opts to compact the heap (moving objects down in memory). It does this periodically, IIRC.
I wouldn't worry about it.
|
|
|
|
|
|
Hi,
Simple question :
I have a table and one column is ID (int identity(1,1) not null).
Say I have 5 rows:
[u]ID Name Color [/u]
1 Bob White
2 Jane Black
3 Lucy Red
4 Gang Brown
5 Dub Yellow
If I delete row 3, the remaining IDs are 1, 2, 4, 5.
Is it possible to make the remaining IDs after deleting row 3 become 1, 2, 3, 4?
That is, to decrement the IDs above the deleted one.
__________________
Best regards,
Exceter.
|
|
|
|
|
You can create a DELETE trigger on this table and update the records with ID > the deleted ID (see the sketch further down this thread).
|
|
|
|
|
Sir,
Sorry for such a question, but I am new to SQL.
Could you show how that trigger is written?
Respectfully,
Exceter.
|
|
|
|
|
Why would you want to do this?
If your ID column is referenced from somewhere else, you will have to update all those tables too.
Let the hole be! Especially if you're using IDENTITY.
(It can be done, but it will be some work, as you will first have to temporarily SET IDENTITY_INSERT ON for your table. Insert row 4 in row 3's place, delete row 4, insert row 5 in row 4's place. (Which you will of course do in a loop.))
If you really, really need this functionality, it is better NOT to use an IDENTITY column, since that will allow you to use an UPDATE statement instead. (Possibly in a trigger or a stored proc...)
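To answer the earlier question, here is a minimal sketch of the suggested trigger. It assumes a hypothetical table MyTable whose ID column is a plain int, NOT an IDENTITY column, since IDENTITY columns cannot be updated (which is exactly the point above):
-- Minimal sketch; table and column names are hypothetical.
-- Requires ID to be a plain int, not IDENTITY.
CREATE TRIGGER trg_MyTable_CloseGaps ON MyTable
FOR DELETE
AS
    -- Decrement each surviving ID by the number of deleted rows below it,
    -- which closes all the gaps even for multi-row deletes.
    UPDATE MyTable
    SET ID = ID - (SELECT COUNT(*) FROM deleted d WHERE d.ID < MyTable.ID)
    WHERE ID > (SELECT MIN(ID) FROM deleted)
GO
And remember the caveat above: if ID is referenced from other tables, those references must be updated too (or the foreign keys declared with ON UPDATE CASCADE).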
|
|
|
|
|
Hi, I have a problem listing all the records in my .mdb file.
Steps:
1. I made a mymdb.mdb file.
2. Added some records.
3. Deleted some records.
4. Added new records.
5. Repeated steps 2-4 several times.
Finally, when I list all the records in the .mdb file (I use MoveFirst(), read a record, and MoveNext() until all the records have been read), the records added in step 4 appear somewhere near the records deleted in step 2.
I don't know whether this behavior is caused by my code or is in the nature of .mdb files.
I want new records to always appear at the end of the .mdb.
How can I do that?
Thanks.
|
|
|
|
|
Wormhole5230 wrote:
I don't know whether this behavior is caused by my code or is in the nature of .mdb files
This is in the nature of any DBMS:
you have no control over the physical order of records in a data file. (Unless you create a clustered index in SQL Server, but even in that case I'm not sure about the order.) Access and SQL Server fill the space marked for deletion with new records.
Need a solution?
Add an AutoNumber field and end your queries with ORDER BY ID.
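For example (hypothetical table and column names; in Access the ID column would be an AutoNumber, in SQL Server an identity column):
SELECT ID, Name FROM MyRecords ORDER BY ID
The rows then always come back in insertion order, regardless of where the engine physically stored them.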
|
|
|
|
|
Hi
I just want to ask: does Access work on a network? If yes, then how do I connect its file to my application?
Thanks in advance
|
|
|
|
|
Access 'works' on a network by having the Jet database engine running on the client's computer, accessing the .mdb file through file-sharing on the server (i.e. either through a mapped network drive letter or using a UNC path).
Basically, all clients open and lock the pieces of the file they need to use at any given time.
This can be a bit precarious and can sometimes lead to data corruption if the locking isn't exactly right.
It's probably better to use the SQL Server Desktop Engine[^] for shared data. This supports up to five concurrent queries (you can do more, but there's a query governor that wrecks performance). It's basically a governed version of SQL Server 2000.
|
|
|
|
|
Mike Dimmick wrote:
This can be a bit precarious and can sometimes lead to data corruption if the locking isn't exactly right
Sorry, but this is flat out wrong and is perpetuating a myth to a new generation. There are no locking concerns for the programmer that can lead to data corruption.
The rest of what you said I can agree to though!
------------
|
|
|
|
|
Maybe I should rephrase:
It's quite difficult to do distributed locking on regions of a single file over the network. Historically, Jet hasn't always got it right, which can lead to inconsistent data in the data file.
You should ensure that all your clients have the latest service pack - currently Service Pack 7[^] - for Jet 4.0 in order to reduce the possibility of errors.
The situation does not so easily arise with MSDE or SQL Server because the file locking is all performed on the server computer, not distributed.
See this Knowledge Base article[^] for more details on how to minimise the possibility of corruption.
For preference, I would tend to use Access/Jet only for small databases with small volumes, with a limited number of concurrent operations, using a local data file. It makes a reasonable backing store for a web service if there aren't very many concurrent users.
|
|
|
|
|
Ahh.. you linked to a Jet 3.0 article. It's possible that Jet 3.0 had more problems, but we've been using Jet 4.0 in a networked commercial application with thousands of users worldwide. Data corruption is exceedingly rare: I have probably seen it no more than 15 times with our product since we released in early 2000, and in every case it came down to an outdated Jet driver or an external hardware failure / lock-up of some sort.
That being said, I agree with everything else you said, and in fact we are switching to MSDE now that enough of the world has a fast enough Internet connection that we can include it with our software, which is distributed over the Internet only.
------------
|
|
|
|
|
Yes, it does. All you need is to be able to "see" the file on the network from each computer and have full rights to it and its folder, since there is a second .ldb file involved that must be created and destroyed by any one of the users sharing it.
Please disregard what the other person who answered your question said about corruption and locking. It's extremely rare and should not be a reason not to use it, although I too would suggest MSDE for anything more than a test or in-house application.
------------
|
|
|
|
|
Does anyone here know of any site/document that has done some sort of analysis on the performance (specifically speed) expectations of various ADO connections (JET and SQL)?
I have been doing DB programming using ADO for a while now (C++); however, I am never quite sure whether my software is performing optimally for the given task (saving a HUGE number of records (100K+), deleting, modifying, etc.).
I have read all the MSDN 'tips' on using ADO, but I have never been able to compare my performance against what it should be. I always think that my db connection could perform better, but can never prove it.
Any response will be noted.
Cheers.
"..Even my comments have bugs!"
Inspired by Toni78
|
|
|
|
|
ADO has the disadvantage of being a wrapper designed for Visual Basic and scripting languages. In order to support this, it does a lot of copying into small temporary objects.
Using the underlying OLE DB allows you to supply your own buffers for data rows, and also to use more advanced operations such as delayed updates (accumulating an update in memory, then performing the whole update in one network operation).
You may want to look into the OLE DB consumer templates. A helpful resource is the book OLE DB Consumer Templates: A Programmer's Guide[^].
You should definitely check whether the time is being taken in generating the resultset or in passing the resultset across the connection. If more time is taken in fetching the resultset, you might want to consider using the adAsyncFetch option to the Command's Execute method, which will fetch up to CacheSize rows before returning, then fetch the rest of the rows asynchronously. If you try to access a row not yet received, the operation will block until that row has been received.
Use only the cursors you need for an operation. If you're just reading a resultset, specify adLockReadOnly for the lock type, and adOpenForwardOnly for the cursor type. This allows SQL Server to simply squirt the rows down your connection (it's often called a firehose cursor for this reason).
For SQL Server, you should definitely get used to entering sample queries into Query Analyzer and looking at the Execution Plan that results. This can help you reformulate your queries to work better. The SQL Profiler tool can help you examine a workload and determine if you should add - or remove! - some indexes to improve performance.
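For example, in Query Analyzer you can ask for the textual plan without actually running the query (a minimal sketch; the query itself is hypothetical):
-- Show the estimated execution plan as text instead of executing
SET SHOWPLAN_TEXT ON
GO
SELECT CustomerID, COUNT(*) AS Orders
FROM Orders
WHERE OrderDate >= '20030101'
GROUP BY CustomerID
GO
SET SHOWPLAN_TEXT OFF
GO
Pressing Ctrl+L in Query Analyzer shows the same plan graphically.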
For more on optimising SQL Server queries, see Inside SQL Server 2000[^].
|
|
|
|
|
Thanks for your reply, Mike.
I've already adopted certain techniques, like the firehose cursor, but using the ADO interface.
I will definitely have a look at your list of suggested books.
"..Even my comments have bugs!"
Inspired by Toni78
|
|
|
|