|
What do you want to know that for?
"Don't worry if it doesn't work right. If everything did, you'd be out of a job." (Mosher's Law of Software Engineering)
|
|
|
|
|
Hmm, well for starters, how it's used, whether I can implement it, and so on, and so forth...
With great code comes great complexity, so keep it simple, stupid...
|
|
|
|
|
I have VS2005 C# source code that was previously in SourceSafe (PVCS), and I want to unbind it from PVCS. I have TFS on my machine, which will be used for source control from now on.
How can I do this by manually editing the files that keep the binding information?
I tried deleting the GlobalSection(SourceCodeControl) section from the .sln file; if I save that change, I can still open the solution in the VS IDE.
What should I do, or what else do I need to do, to complete the process?
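(For reference, and only as an illustration since the exact keys depend on the SCC provider: besides the GlobalSection(SourceCodeControl) ... EndGlobalSection block in the .sln, each project file usually carries its own binding in Scc* properties, roughly like this:
<PropertyGroup>
  <SccProjectName>provider-specific value</SccProjectName>
  <SccLocalPath>provider-specific value</SccLocalPath>
  <SccAuxPath>provider-specific value</SccAuxPath>
  <SccProvider>provider-specific value</SccProvider>
</PropertyGroup>
Clearing or removing these in every .csproj, in addition to the .sln section, is usually what unbinding by hand amounts to.)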
Thanks in advance.
Gajesh
|
|
|
|
|
Open the project; under File | Source Control there are a number of options that may be useful.
Never underestimate the power of human stupidity
RAH
|
|
|
|
|
In File->Source Control->Change Source Control, uncheck the connected checkbox.
|
|
|
|
|
This option was not available in the File menu. I added "Change Source Control" through Rearrange Commands, but it is disabled (grayed out). What should I do?
FYI: I have TFS installed on the machine, not PVCS.
|
|
|
|
|
Oh, you have to change this, right?
You can run the following at a command prompt:
1. cd C:\Program Files\Microsoft Visual Studio 8\Common7\IDE
2. tf undo /workspace:WorkSpaceSample;YourUserName $/Path/FileName
(FileName is the locked file.)
Try it, good luck.
|
|
|
|
|
|
HELLO !!!!
You're an idiot. "Hi" is not even remotely a sensible subject. And if you type your question into Google, you'll find tons of answers.
Christian Graus
Driven to the arms of OSX by Vista.
"I am new to programming world. I have been learning c# for about past four weeks. I am quite acquainted with the fundamentals of c#. Now I have to work on a project which converts given flat files to XML using the XML serialization method" - SK64 ( but the forums have stuff like this posted every day )
|
|
|
|
|
Every time I read a response from you I think of Dr. House ....
|
|
|
|
|
Use the NumericUpDown control.
When you post a question next time, kindly put a title in the Subject field.
"hi" is not a subject!
*12Code
|
|
|
|
|
Use the MaskedTextBox control.
*12Code
|
|
|
|
|
Hi, you can visit www.codeincsharp.blogspot.com
Hope this helps.
Thanks
Please rate this answer as good if it fits your question.
|
|
|
|
|
Hi guys,
There are many user controls in my current project, and VS always initializes and loads these controls into the Toolbox. Sometimes it takes 20 minutes to finish, and during those 20 minutes VS is unresponsive and I can do nothing.
So I'm curious to know whether there's a way to stop VS from doing this.
Any help would be appreciated!
|
|
|
|
|
Hi,
I haven't tried this, but there is a setting under Tools > Options > Windows Forms Designer > AutoToolboxPopulate; you may have to check "Show all settings" first.
BTW: if your UserControls take that long to load, maybe you should limit what code they execute inside the Designer by guarding slow code with an if (!DesignMode) check. For example, there probably is no need to perform database accesses when a Form is loaded inside the Designer.
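A minimal sketch of that guard, assuming a hypothetical MyUserControl with a made-up LoadDataFromDatabase method; the check goes in OnLoad rather than the constructor, because DesignMode is not reliable that early:

using System;
using System.Windows.Forms;

public class MyUserControl : UserControl
{
    protected override void OnLoad(EventArgs e)
    {
        base.OnLoad(e);

        // Skip the expensive startup work while the control is hosted in the Designer.
        if (!DesignMode)
        {
            LoadDataFromDatabase();
        }
    }

    private void LoadDataFromDatabase()
    {
        // Hypothetical slow initialization (database access, file I/O, ...).
    }
}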
|
|
|
|
|
Thank you for your answer. Yes, it does work!!
|
|
|
|
|
You're welcome.
|
|
|
|
|
Dear all,
I need a way to read a large (500MB) TEXT file containing a URL on each line, remove duplicates, and then write the data back to the text file. I know it sounds simple, but I need my algorithm to be as fast as possible. I don't want to design a special data structure for this purpose; I just want plain C# to do it.
I have some questions in mind that I never found a good answer for:
1. When you use string s = File.ReadAllText(filename); an 'out of memory' exception is thrown (because the amount of data is too big), so we should read the file line by line. Does reading line by line affect the speed? Isn't it better to read the whole file and then do the processing?
2. What is the fastest data structure in C# for looking up data: a Hashtable, a List, a string[] array, ...?
3. Because speed matters, should I think about writing the data to a database after reading it from the file? Is that faster than writing directly back to a file? (For example, inserting the data into a MySQL server with a UNIQUE index on my URL field, so the insert itself takes care of duplicates.)
I know my question is a bit big, but if you have any ideas on any part I would be glad to hear your precious opinions.
Thank you.
|
|
|
|
|
Hi,
some questions:
1. do you need to preserve the order of the URLs?
2. what is going to happen with the file with unique URLs afterwards?
3. can you show, say, 10 different URLs so we can see how different they are?
4. what is generating the input file? can that code be slightly modified to gain a lot of speed afterwards?
|
|
|
|
|
1. There's no need to preserve the order of the URLs, but sorting the URLs and then removing adjacent identical records isn't doable here, because I also want to remove URLs with the same domain name.
Here is a sample of the URL list:
http://iri.siam108site.com/dssff/
http://ieggfiv.300mb.info/12hf8485n/
http://forum.nto.pl/chlopcy-z-biskupic-mieli-sie-bic-tymczasem-jeden-zginal-t19232.html
http://www.osiol.com.pl/uwaga-tvn-2007-2008-tvrip-t26379.html
http://fdss.300mb.info/12hf8485n/
http://perpetuum-lab.com.hr/forum/showtag-physiology.html
http://hrbikwi.siam108site.com/1i0gejcj9j6
So http://fdss.300mb.info/12hf8485n/ and http://hrbikwi.siam108site.com/1i0gejcj9j6 should be removed from the list.
2. A web crawler will crawl the URLs to grab page contents.
4. I wrote a program that grabs these URLs from search engines and then appends them to the end of a text file (without any kind of processing or checking of the data).
Sorry for the late response, it was 5 am when I asked the question :p
modified on Thursday, April 30, 2009 4:34 AM
|
|
|
|
|
OK,
1.
Generating a huge file and then searching for duplicates is a waste of time. If the code that generates such a list can be modified, modifying it will always be a much better approach.
It will take less code, less effort to develop, and fewer CPU cycles to execute.
2.
One way to reduce the postprocessing effort (but not the code) is to generate not one but many files, i.e. to apply some pre-sorting. Say you calculate an 8-bit hash value for each URL and distribute the URLs over 256 different files accordingly. That is easy, and it reduces your postprocessing to 256 much smaller jobs (the average file size is now 2MB, so you can use ReadAllLines); a rough sketch follows at the end of this post.
3.
The ultimate solution is to avoid duplicates right away.
I have written web crawlers before; obviously they need a way to avoid visiting the same URL twice, so they contain some logic for that.
Furthermore, you typically want to search breadth-first, i.e. read and process all of a page before switching to another page; that is more efficient than depth-first, since with depth-first you have to either keep in memory, or refetch, the content of all the pages you encounter while digging deeper.
The natural solution to both issues is to organize the crawler around a list containing all the URLs you have encountered (and should visit). Every anchor, image source, or style sheet encountered gets added, if not already in the list; when a page is done, move to the next one in the list (use an index). When the index reaches the end of the list, you are done, and the list then holds all the unique URLs. A sketch of that loop is also included below.
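A rough sketch of the 256-file idea, assuming made-up file names ("urls.txt" for the input, "bucket_NNN.txt" for the pieces); any 8-bit hash works, and GetHashCode() is fine here because the buckets only have to be consistent within this one run:

using System.IO;

class UrlPartitioner
{
    static void Main()
    {
        // One writer per bucket; 256 open files is not a problem.
        StreamWriter[] buckets = new StreamWriter[256];
        for (int i = 0; i < buckets.Length; i++)
            buckets[i] = new StreamWriter("bucket_" + i.ToString("D3") + ".txt");

        using (StreamReader reader = new StreamReader("urls.txt"))
        {
            string line;
            while ((line = reader.ReadLine()) != null)
            {
                // The 8-bit hash decides which file the URL goes to,
                // so identical URLs always end up in the same bucket.
                // (Hash only the host, e.g. new Uri(line).Host, if same-domain
                // URLs should also land in the same bucket.)
                int bucket = line.GetHashCode() & 0xFF;
                buckets[bucket].WriteLine(line);
            }
        }

        foreach (StreamWriter w in buckets)
            w.Close();

        // Each bucket is now small enough to dedupe in memory,
        // e.g. with File.ReadAllLines and a Dictionary.
    }
}

And a rough sketch of the list-plus-index crawler loop; FetchPage and ExtractLinks are hypothetical placeholders, not methods of any real library:

using System.Collections.Generic;

class CrawlerSketch
{
    // All URLs ever encountered, in the order they were found.
    private List<string> urls = new List<string>();
    // Fast membership test so the list never gets duplicates.
    private Dictionary<string, bool> seen = new Dictionary<string, bool>();

    public void Crawl(string startUrl)
    {
        AddIfNew(startUrl);

        // Breadth-first: finish page [index] completely before moving on.
        for (int index = 0; index < urls.Count; index++)
        {
            string html = FetchPage(urls[index]);
            foreach (string link in ExtractLinks(html))
                AddIfNew(link);
        }
        // When the loop ends, 'urls' holds every unique URL encountered.
    }

    private void AddIfNew(string url)
    {
        if (!seen.ContainsKey(url))
        {
            seen.Add(url, true);
            urls.Add(url);
        }
    }

    // Placeholders: download a page / extract anchors, image sources, style sheets.
    private string FetchPage(string url) { return ""; }
    private IEnumerable<string> ExtractLinks(string html) { return new string[0]; }
}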
|
|
|
|
|
Thank you for your solution. I like the idea of 256 files and a checksum. I will try it now.
|
|
|
|
|
To make it fairly simple, use a generic Dictionary where each URL is both the key and the value. Read each line of the text file one at a time and check whether the key already exists; if it doesn't, add it to the dictionary, and if it does, ignore it. When you are all done, use a foreach to go through the keys, writing each one to a file as text.
Something like
// reader and writer are a StreamReader/StreamWriter you opened earlier
Dictionary<string, string> Urls = new Dictionary<string, string>();
string key;
while ((key = reader.ReadLine()) != null)
{
    if (!Urls.ContainsKey(key))
    {
        Urls.Add(key, key);
    }
}
foreach (string s in Urls.Keys)
{
    writer.WriteLine(s);
}
Obviously you still need to do all of the opening and closing of the files, but this is probably the simplest way.
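If it helps, the opening and closing could be wrapped in using blocks, roughly like this (file names are just placeholders):

using (StreamReader reader = new StreamReader("urls.txt"))
using (StreamWriter writer = new StreamWriter("unique_urls.txt"))
{
    // ... the while loop and the foreach from above go here ...
}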
|
|
|
|
|
|
I think the easiest way to do this would be to provide your own Import/Export settings UI and handle the loading of data and saving to settings/saving to export file in your code. It should be fairly trivial.
|
|
|
|