After fetching the HTML, the best way to get at all the links on the page is to use a library like HtmlAgilityPack. That way you can easily enumerate all the a-href nodes and inspect them for possible PDF files.
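As a minimal sketch (the URL here is just a placeholder), listing every href on a page with HtmlAgilityPack looks roughly like this:

```csharp
using System;
using HtmlAgilityPack;

class LinkLister
{
    static void Main()
    {
        // Fetch and parse the page (example URL only).
        var web = new HtmlWeb();
        HtmlDocument doc = web.Load("http://example.com/");

        // SelectNodes returns null when the XPath matches nothing,
        // so guard before iterating.
        var anchors = doc.DocumentNode.SelectNodes("//a[@href]");
        if (anchors == null) return;

        foreach (HtmlNode a in anchors)
        {
            string href = a.GetAttributeValue("href", string.Empty);
            Console.WriteLine(href);
        }
    }
}
```

Note that the hrefs you get back may be relative, so you'll usually want to resolve them against the page's base URL (e.g. with `new Uri(baseUri, href)`) before requesting them.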
Caveat: a URL pointing to a PDF file does not necessarily contain the string ".pdf". The only way to make sure you're really getting all PDFs linked from a page is to open every link you find on that page and make a header request for the document. The MIME type returned by the server is no absolute guarantee that the file is a PDF either, but it is far more reliable than looking at the URL extension alone.
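That header check can be sketched with HttpClient as below; `application/pdf` is the standard MIME type to look for, though keep in mind that servers can misreport it, and some reject HEAD requests entirely (in which case you'd fall back to a GET):

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;

class PdfProbe
{
    // Sends a HEAD request and checks the Content-Type the server reports.
    static async Task<bool> LooksLikePdfAsync(string url)
    {
        using var client = new HttpClient();
        using var request = new HttpRequestMessage(HttpMethod.Head, url);
        using var response = await client.SendAsync(request);

        string mediaType = response.Content.Headers.ContentType?.MediaType;
        return string.Equals(mediaType, "application/pdf",
                             StringComparison.OrdinalIgnoreCase);
    }

    static async Task Main()
    {
        // Example URL only.
        bool isPdf = await LooksLikePdfAsync("http://example.com/document");
        Console.WriteLine(isPdf);
    }
}
```

If you want certainty rather than a strong hint, the last resort is downloading the first few bytes and checking for the `%PDF-` magic number at the start of the file.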
If you're writing a crawler, you'll also want to follow links to other documents reachable from your page, since those may contain PDFs as well.
I hope I expressed myself clearly, but if you still have doubts feel free to leave me a comment.
Cheers!
-MRB