Thank you for the clarification. Now that you have explained the purpose, I see little sense in most of what you suggested. The files are downloaded using the class
System.Net.HttpWebRequest
or the simplified class
System.Net.WebClient
:
https://msdn.microsoft.com/en-us/library/system.net.httpwebrequest%28v=vs.110%29.aspx,
https://msdn.microsoft.com/en-us/library/system.net.webclient%28v=vs.110%29.aspx.
If you needed to parse downloaded HTML files, I would advise using some HTML parser, first of all
Html Agility Pack, but I don't really think that sophisticated
Web scraping techniques used for hunting for different URLs will really be needed, because the sites are yours, so you could directly deliver (in any suitable resource) the URLs your client needs to use for downloading.
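In case you do end up parsing HTML, here is a minimal sketch of extracting link URLs with Html Agility Pack (requires the HtmlAgilityPack NuGet package; the HTML fragment is a made-up example):

```csharp
using System;
using HtmlAgilityPack;

class ParseSketch
{
    static void Main()
    {
        // Hypothetical HTML fragment, for illustration only:
        string html = "<html><body><a href='a.zip'>A</a><a href='b.zip'>B</a></body></html>";

        var doc = new HtmlDocument();
        doc.LoadHtml(html);

        // Select every anchor element that has an href attribute
        // and print the attribute value:
        foreach (var link in doc.DocumentNode.SelectNodes("//a[@href]"))
            Console.WriteLine(link.GetAttributeValue("href", ""));
    }
}
```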
When you download the files, you may need to view some of them, so some Web browser would be needed to view, for example, an HTML file, but WebKit could be overkill. You can use any
WebBrowser
component for this purpose. And any specific use of JavaScript, TypeScript, or Angular.JS in a desktop application sounds pretty much absurd to me. If I missed something, I'll be grateful for your explanations.
—SA