|
Adding a foreign key will not conflict with the existing data, but it would help with data integrity.
I am only concerned about whether it will result in any loss of the data already there. I know that it should not, but since I've never done this before on a table in a production environment with data, I just needed to be doubly sure that it results in no data loss.
|
|
|
|
|
You may not want to do this in the production environment first. Duplicate the data in a development environment and test it there before doing it live in production.
I want to say it will not cause data loss, but I still suggest you do it in a development environment first. We have had mishaps in production before, even when we believed everything would be okay. When that happens, it can keep you in the office for a few days without any sleep.
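If it helps, here is a minimal sketch of the sequence I would test in development first (table and column names are invented; substitute your own). Adding the constraint WITH CHECK only validates the existing rows; it never modifies or deletes anything, and the ALTER simply fails if orphaned rows exist:

```sql
-- Hypothetical names: Orders.CustomerID should reference Customers.CustomerID.
-- 1. Find existing rows that would violate the new constraint (orphans):
SELECT o.*
FROM Orders AS o
LEFT JOIN Customers AS c ON c.CustomerID = o.CustomerID
WHERE c.CustomerID IS NULL;

-- 2. If step 1 returns nothing, add the constraint. WITH CHECK makes the
--    server validate existing rows, but no data is modified or deleted:
ALTER TABLE Orders WITH CHECK
    ADD CONSTRAINT FK_Orders_Customers
    FOREIGN KEY (CustomerID) REFERENCES Customers (CustomerID);
```

If step 1 does return rows, you fix or remove those orphans yourself before adding the key; the constraint itself never touches them.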
|
|
|
|
|
Many thanks, all. Done successfully.
|
|
|
|
|
I'm trying to do a SQL Server script that will backup a database to a file.
DECLARE @Filename NVARCHAR(MAX)
SELECT @Filename = N'\\{mymachinenameonnetwork}\SQL Backups\mydb_backup_' + convert(varchar, getdate(), 102) + '_' + replace(convert(varchar, getdate(), 8),':','')
SELECT @Filename
BACKUP DATABASE [mydb] TO DISK = @Filename WITH NOFORMAT, NOINIT, NAME = N'mydb-Full Database Backup', SKIP, NOREWIND, NOUNLOAD, STATS = 10
GO
This will not produce a backup file. But, if I create a variable of the entire statement and run it via sp_executesql, it works perfectly.
DECLARE @Cmd NVARCHAR(MAX)
SELECT @Cmd = 'BACKUP DATABASE [mydb] TO DISK = ''' + @Filename + ''' WITH NOFORMAT, NOINIT, NAME = N''mydb-Full Database Backup'', SKIP, NOREWIND, NOUNLOAD, STATS = 10'
EXEC sp_executesql @stmt = @Cmd
There must be some reason that I'm not seeing. Can anyone shed some light on this problem?
Thanks,
Ed
|
|
|
|
|
That's a very funny error.
Just change "DECLARE @Filename NVARCHAR(MAX)" to VARCHAR(MAX). It seems that this function doesn't support Unicode, or something like that.
Greetings
Covean
|
|
|
|
|
Thanks for your reply. Unfortunately, I made the change with the same result. I have to say that this one really has me stumped. I'm using the sp_executesql way, but I would love to know why the command way isn't working.
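For reference, here is the complete sp_executesql form I'm using now, lightly tidied as a sketch (the machine-name placeholder is unchanged from my first post):

```sql
DECLARE @Filename NVARCHAR(260), @Cmd NVARCHAR(MAX);

SELECT @Filename = N'\\{mymachinenameonnetwork}\SQL Backups\mydb_backup_'
    + CONVERT(VARCHAR, GETDATE(), 102) + '_'
    + REPLACE(CONVERT(VARCHAR, GETDATE(), 8), ':', '');

SELECT @Cmd = N'BACKUP DATABASE [mydb] TO DISK = N''' + @Filename
    + N''' WITH NOFORMAT, NOINIT, NAME = N''mydb-Full Database Backup'', '
    + N'SKIP, NOREWIND, NOUNLOAD, STATS = 10';

EXEC sp_executesql @stmt = @Cmd;
```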
|
|
|
|
|
Hi,
I'm considering using MySQL in my next project so can you please tell me:
1. What is the lightest and simplest version to install, and where can I get it? I'm using it as a replacement for Access, to be accessed from a C# application.
2. How can I install and configure it silently, like including a pre-configured response file in Oracle?
Thank you.
|
|
|
|
|
OK, I found the answers in the MySQL forum, but I still need advice from you experienced MySQL users, as this will be my first shot and I need to get it right. Please tell me what you would consider for my case: a light stand-alone database accessed through only one application.
Thank you.
|
|
|
|
|
No version of MySQL is particularly heavy. I don't know how to install and configure it silently, though. MySQL is usually used in a multi-client environment, while from your description you want to access it from just one application, and you would have to install an instance of MySQL alongside each copy of your application. That may be overkill.
There is another option for you: SQLite. If the restrictions on some of SQLite's data types are not a problem for you, you can try it instead of MySQL. It's equally easy to access SQLite from C#, and the setup can be completely silent.
|
|
|
|
|
|
I actually had a similar situation on a project I'm working on. I think SQLite limits the database to 2GB or so, and that was too restrictive for us, so we went with MySQL.
As for the silent install/config, you can run the .msi installer with the "-s" option, and then run the MySQLInstanceConfig.exe utility to set up your instance. This can be called from the command line with a set of switches for your particular setup (see here[^] for the list of parameters). I just put all the commands in a small console utility that runs as a custom action in our installer.
Dybs
The shout of progress is not "Eureka!" it's "Strange... that's not what i expected". - peterchen
|
|
|
|
|
That was quite informative.. Thanks mate!
|
|
|
|
|
The information you offered here is quite useful to me. I was not aware of the 2GB size limit of SQLite databases. I will sure check that out.
The way to silently install MySQL is also very helpful.
|
|
|
|
|
Looks like I was mistaken on the size limit (I must have been thinking of MSSQL Express). According to the SQLite documentation[^], the max page size for a database is 32K and the max number of pages is (1024^3)-1, bringing the total database size to around 32 TB! Oops, my bad. Sorry for the misinformation.
Dybs
The shout of progress is not "Eureka!" it's "Strange... that's not what i expected". - peterchen
|
|
|
|
|
|
I'm trying to build a form to search an access membership database in c#
I want to be able to search by first name and last name to start with, but sometimes I might not remember how to spell the whole name, so I want names that start with what I've typed in.
OleDbCommand command = _databaseConnection.CreateCommand();
command.CommandText = "SELECT * FROM MEMBERS WHERE fname like '?*' AND lname like '?*'";
command.Parameters.Add(new OleDbParameter("fname", firstName));
command.Parameters.Add(new OleDbParameter("lname", lastName));
I seem to have tried every combination of ?* ?% '?*' etc. in the SQL statement. Does anyone know the correct syntax for this?
Cheers
Russ
|
|
|
|
|
Try it without the quotes around the parameters. This should work:
command.CommandText = "SELECT * FROM MEMBERS WHERE fname LIKE '%' + ? + '%' AND lname LIKE '%' + ? + '%'";
(That matches anywhere in the name; if you only want names that start with the typed text, drop the leading '%' + so the pattern becomes ? + '%'.)
Source[^]
I are Troll
|
|
|
|
|
Hi guys, I am currently trying to create a powerful stock market scanner, like the one Yahoo or Google has. The scanner will allow users to choose among some 50 parameters to scan through several thousand stocks.
I am not sure how Yahoo or Google does it, but the only way I know is by using a SQL INTERSECT query. The query statement probably needs to be reconstructed each time a user adds an additional parameter. So when a parameter is added, I'll simply append "INTERSECT [new SQL query goes here]" to one big long query string.
Conceptually, it'll be like this:
SQL Query 1 (parameter 1)
INTERSECT
SQL Query 2 (parameter 2)
INTERSECT
SQL Query 3 (parameter 3)
INTERSECT
..
..
Is this method the best approach to create a scanner? I thought all other methods (INNER JOIN, views, CTEs, nested queries) would result in over-complicated query strings.
Or is there a better method (the scanner in Yahoo/Google seems to run really fast)?
I'd greatly appreciate any suggestions!! Thanks as always!
|
|
|
|
|
Have you tried this:
SELECT ... WHERE (parameter 1) AND (parameter 2) AND (parameter 3) ...
To speed this up, build an index on the fields you want to filter by.
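As a sketch (the stock table and column names here are invented for illustration):

```sql
-- Hypothetical schema for illustration.
CREATE TABLE Stocks (
    Symbol        VARCHAR(10) PRIMARY KEY,
    PERatio       DECIMAL(10,2),
    DividendYield DECIMAL(6,4),
    MarketCap     BIGINT
);

-- Index the columns users filter on most often:
CREATE INDEX IX_Stocks_Scan ON Stocks (PERatio, DividendYield, MarketCap);

-- One query, one pass over the data; each user-selected
-- parameter becomes one AND term in the WHERE clause:
SELECT Symbol
FROM Stocks
WHERE PERatio < 20              -- parameter 1
  AND DividendYield > 0.03      -- parameter 2
  AND MarketCap > 1000000000;   -- parameter 3
```

Compared with stacking INTERSECTs, this scans the table once instead of once per parameter and lets the optimizer use the index across all conditions together.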
|
|
|
|
|
I rarely use INTERSECT, so this may not be the best advice, but: your approach would run three SELECT statements against the same data structure, then compare the results and return the intersection. As Dmitry suggested, put the conditions in a WHERE clause instead.
You can almost certainly define the set of allowed parameters, so something like this will do the trick:
ALTER PROC [DocumentsForEntity]
--DECLARE
@FleetID int,
@RoomID INT,
@ShipID INT,
@AutomationID int,
@CostumeID INT,
@ShowID INT
AS
--SET @FleetID = 1
SELECT
*
FROM dbo.vwDocument
CROSS JOIN dbo.Settings s
WHERE s.SettingCode = 'DocumentURL'
AND (ISNULL(@FleetID,0) = 0 OR FleetID = @FleetID)
AND (ISNULL(@RoomID,0) = 0 or RoomID = @Roomid)
AND (ISNULL(@ShipID,0) = 0 or ShipID = @ShipID)
AND (ISNULL(@AutomationID,0) = 0 or AutomationID = @AutomationID)
AND (ISNULL(@CostumeID,0) = 0 or CostumeID = @CostumeID)
AND (ISNULL(@ShowID,0) = 0 or ShowID = @ShowID)
ORDER BY DocumentType,DocName
The user can then select 1 or more of the parameters.
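Calling it with only the parameters the user picked, passing NULL for the rest, would look something like this (IDs invented for the example):

```sql
-- Only FleetID and ShipID selected; the NULL parameters are skipped
-- by the ISNULL(...) = 0 tests in the WHERE clause.
EXEC [DocumentsForEntity]
    @FleetID = 1,
    @RoomID = NULL,
    @ShipID = 3,
    @AutomationID = NULL,
    @CostumeID = NULL,
    @ShowID = NULL;
```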
Never underestimate the power of human stupidity
RAH
|
|
|
|
|
Thank you, guys. I've considered the WHERE clause, but each filter criterion already looks very complex (with 3 to 4 nested SELECTs and subqueries, and a lot of nested WHERE clauses too), so I thought using WHERE would make it too complicated for SQL to handle.
|
|
|
|
|
After a brief break to play with GDI+, I'm back to the equipment database I've been toying with. I'm now looking at the structural alternatives to support record keeping for transformers, specifically test results. Transformers are tested at varying intervals, with different tests performed at different times. We test for water content in the insulating oil, for dissolved gasses (about 10 of them), for dielectric strength, and shear strength, and sometimes we also want to look at the turbidity and color. That's just for the oil! We also have to periodically test the insulation strength of the entire assembly (Doble tests), bushing BIL, impedance, CT ratios, and various other tests.
At first glance I was considering one humongous table called TransformerTests to record the dates, types of tests, and results, but this would be ridiculous to manage, and extremely wasteful, as most of the record fields would be empty. My second thought was to create tables for each type of test, then in the TransformerTests table simply enter the date of each with the type and record number of each test performed, using a query on the detail tables to retrieve test information and build reports.
The second way makes a lot of sense to me, but I'm wondering if there's a third, smarter way that someone here can suggest? Thoughts?
"A Journey of a Thousand Rest Stops Begins with a Single Movement"
|
|
|
|
|
The problem with having a table for each test is that when you need to add a new type of test, you are off into the database and code to extend your system. Try something like this:
Transformer (TransformerID, etc.) - this should actually be an equipment table, with Transformer as a type
TestType (ID, Name) - basically a placeholder for attributes
TestAttr (ID, AttrLabel) - these are varchar and are basically name/value pairs
TestTypeDefault (TestTypeID, AttrID) - see notes
TestTable (ID, TransformerID, TestTypeID, dates, pass/fail, comments, user) - only the fields common to ALL tests
TestResults (TestID, AttrID, AttrValue)
When you create a test type, you link the default attributes to the type.
Creating a test (attached to a transformer) copies the default attributes into the TestResults table, and allows you to add additional attributes to that test.
This will allow you to create new test types and tests without any programming. The major drawback is that all your data is in varchar fields and will need converting if you need to manipulate it properly. I did this back in the 90s for a manufacturing system and it worked well; comparing test results across a period of time was a PITA because of the formatting.
You could put a datatype indicator in the attribute table to help the UI do some validation, and grow it from there.
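A rough DDL sketch of the above (names and types are indicative only, not a finished design):

```sql
CREATE TABLE TestType (ID INT PRIMARY KEY, Name VARCHAR(50));
CREATE TABLE TestAttr (ID INT PRIMARY KEY, AttrLabel VARCHAR(50));

-- Which attributes a test type gets by default:
CREATE TABLE TestTypeDefault (
    TestTypeID INT REFERENCES TestType(ID),
    AttrID     INT REFERENCES TestAttr(ID),
    PRIMARY KEY (TestTypeID, AttrID)
);

-- Only the fields common to ALL tests live here:
CREATE TABLE TestTable (
    ID            INT PRIMARY KEY,
    TransformerID INT,              -- FK to the equipment table
    TestTypeID    INT REFERENCES TestType(ID),
    TestDate      DATETIME,
    Passed        BIT,
    Comments      VARCHAR(500),
    TestedBy      VARCHAR(50)
);

-- Name/value pairs; everything is varchar (the drawback noted above):
CREATE TABLE TestResults (
    TestID    INT REFERENCES TestTable(ID),
    AttrID    INT REFERENCES TestAttr(ID),
    AttrValue VARCHAR(100),
    PRIMARY KEY (TestID, AttrID)
);
```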
Never underestimate the power of human stupidity
RAH
|
|
|
|
|
Interesting approach, though I don't understand all of it. The attribute bit stumps me - what's that got to do with data and tables? I've only used Access, Reflex, FoxPro and Paradox in the past, so I tend to think of database structures in those terms. I'm going to have enough to do just keeping up with variations on equipment types. For instance, transformers can have Load Tap Changers (LTCs) which effectively act as regulators to change voltage in response to load changes. If one is so equipped I'll need to keep track of a separate object, along with its own tests and reading history, but never let it get disconnected from its parent transformer. Other transformers are associated with regulators as separate devices, and these can be separated from their parent transformers and installed in other locations. They also use different methods of oil protection - some are sealed, some have pressure tanks of nitrogen to maintain positive pressure, which has to be monitored. Others have oil reservoirs, and the levels have to be recorded in both the transformer and reservoir.
I think for now I need to keep it simple, if somewhat inefficient, just to avoid driving myself nuts. Fortunately there aren't a lot of tests that are done, and this industry hasn't changed in 50 years. They still think solid state relays are risky, new-fangled toys. It might be another 50 years before I have to add a new test, and I really won't give a damn by then.
One of my primary goals will be to incorporate trend reporting in the test results, once I get the inventory tracking functions in working order. A DB is only as useful as the money it saves you, and it's a lot cheaper to anticipate a transformer failure before it happens. Gas testing in particular is remarkably valuable, though the level of dissolved gasses is less important than the rate of change. This information must be easy to manipulate in the future in order to easily generate views with meaningful content, and while the tests haven't changed, the interpretation of the results has been steadily evolving.
I'll look into using attributes, though. It sounds extremely flexible, even if the supporting code has to be more complex. Thanks!
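As an aside, the rate-of-change reporting I mentioned could look something like this sketch (table and column names are made up; this assumes a SQL Server 2005+ back end with one row per gas reading):

```sql
-- Invented table: GasTests(TransformerID, TestDate, Acetylene, ...)
-- Pair each reading with the previous one for the same transformer,
-- then compute the change and the change per day.
WITH Ordered AS (
    SELECT TransformerID, TestDate, Acetylene,
           ROW_NUMBER() OVER (PARTITION BY TransformerID
                              ORDER BY TestDate) AS rn
    FROM GasTests
)
SELECT cur.TransformerID,
       cur.TestDate,
       cur.Acetylene,
       cur.Acetylene - prev.Acetylene AS Delta,
       (cur.Acetylene - prev.Acetylene)
         / NULLIF(DATEDIFF(DAY, prev.TestDate, cur.TestDate), 0) AS PerDay
FROM Ordered AS cur
JOIN Ordered AS prev
  ON prev.TransformerID = cur.TransformerID
 AND prev.rn = cur.rn - 1;
```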
"A Journey of a Thousand Rest Stops Begins with a Single Movement"
|
|
|
|
|
Roger Wright wrote: "Fortunately there aren't a lot of tests that are done, and this industry hasn't changed in 50 years."
In which case, by all means create a table for each test type. While not technically elegant, it WORKS WELL, which is the main criterion for any system. I live in a world where the tests can change from month to month and they want to restructure the universe every six months, so I need the flexibility to change rapidly.
Never underestimate the power of human stupidity
RAH
|
|
|
|
|