First of all, please see my comment to the question.
Now, here are the components you really need. First, you need to be able to download each page you want to search. This can be done using the HttpWebRequest class:
HttpWebRequest Class (System.Net)[^]
For a starting point, you can look at the source code of the application I shared in full here:
how to download a file from internet[^].
This application is very small (only one code file) and clear, so it's not hard to see how it works. See also my other past answers:
FTP: Download Files[^],
how to download a file from the server in Asp.net 2.0[^],
get specific data from web page[^],
Performing a some kind of Web Request and getting result[^],
How to get particular data from a url using c#[^].
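To give you the basic idea, here is a minimal sketch (not taken from any of the answers linked above) of downloading a page's HTML as a string with HttpWebRequest; the URL is just a placeholder for whatever page you want to search:

```csharp
using System;
using System.IO;
using System.Net;

class PageDownloader
{
    // Downloads the content of a page and returns it as a string.
    static string DownloadPage(string url)
    {
        HttpWebRequest request = (HttpWebRequest)WebRequest.Create(url);
        using (HttpWebResponse response = (HttpWebResponse)request.GetResponse())
        using (Stream stream = response.GetResponseStream())
        using (StreamReader reader = new StreamReader(stream))
        {
            return reader.ReadToEnd();
        }
    }

    static void Main()
    {
        // Placeholder URL; substitute the page you actually want to search.
        string html = DownloadPage("http://example.com");
        Console.WriteLine(html.Length);
    }
}
```

In a real application you would also handle WebException (timeouts, 404s and so on) and respect the response encoding, but this shows the essential request/response cycle.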
Now, when you have the content from the Web, it is typically HTML data, which you need to parse in order to perform your search and, importantly, to find the other URLs to follow. I would recommend the HTML Agility Pack, an open-source product under the Microsoft Public License:
Html Agility Pack — Home[^].
Anyway, review this list of parsers:
Comparison of HTML parsers — Wikipedia, the free encyclopedia[^].
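As a rough sketch of the parsing step with the HTML Agility Pack: load the downloaded HTML into an HtmlDocument and select the anchor elements to collect the href values, which gives you the further URLs to search. The inline HTML string below is only a stand-in for page content you have already downloaded:

```csharp
using System;
using HtmlAgilityPack; // requires referencing the HTML Agility Pack assembly

class LinkExtractor
{
    static void Main()
    {
        // Stand-in for HTML you downloaded; in practice, pass the real page content.
        string html = "<html><body><a href='http://example.com'>link</a></body></html>";

        HtmlDocument document = new HtmlDocument();
        document.LoadHtml(html);

        // Select all anchors that have an href attribute and print the URLs;
        // these are the candidates for the next round of downloading and searching.
        foreach (HtmlNode node in document.DocumentNode.SelectNodes("//a[@href]"))
            Console.WriteLine(node.GetAttributeValue("href", string.Empty));
    }
}
```

Note that SelectNodes returns null when nothing matches, so a production version should check for that before iterating.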
Generally, what you need is close to Web scraping:
Web scraping — Wikipedia, the free encyclopedia[^].
—SA