
Html Agility Pack - Massive information extraction from WWW pages

4 Feb 2014 · CPOL · 6 min read
What to do if a database of over 150,000 records is available only as a list of webpages, each holding just 50 records? You can spend a week clicking through it and die of boredom, or you can write a scraper that will do the work for you :)

Recently I needed to acquire a certain database. Unfortunately, it was published only as a website that presented 50 records per page. The whole database had more than 150 thousand records. What to do in such a situation? Click through 3,000 pages, manually collecting data in a text file? One week and it's done! ;) Better to write a program (a so-called scraper) which will do the work for you. The program has to do three things:

  • generate a list of addresses from which data should be collected;
  • visit the pages sequentially and extract information from their HTML code;
  • dump the data to local storage and log the work progress.

Address generation should be quite easy. For most sites, pagination is built with plain links in which the page number is clearly visible, either in the main part of the URL (http://example.com/somedb/page/1) or in the query string (http://example.com/somedb?page=1). If pagination is done via AJAX calls, the situation is a bit more complex, but let's not bother with that in this post... When you know the pattern for the page number parameter, all that's needed is a simple loop with something like:

C#
string url = string.Format("http://example.com/somedb?page={0}", pageNumber);
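
A minimal sketch of such a loop might look like this (the total of 3,000 pages is just a placeholder taken from the example above):

C#
// assumes using System.Collections.Generic; 3000 is a placeholder page count
var urls = new List<string>();
for (int pageNumber = 1; pageNumber <= 3000; pageNumber++)
{
    urls.Add(string.Format("http://example.com/somedb?page={0}", pageNumber));
}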

Now it's time for something more interesting. How to extract data from a webpage? You can use the WebRequest/WebResponse or WebClient classes from the System.Net namespace to get the page content and then obtain information via regular expressions. You can also try to treat the downloaded content as XML and scrutinize it with XPath or LINQ to XML. These are not good approaches, however. For a complicated page structure, writing a correct expression might be difficult, and one should also remember that in most cases webpages are not valid XML documents. Fortunately, the Html Agility Pack library was created. It allows convenient parsing of HTML pages, even those with malformed code (i.e., lacking proper closing tags). HAP goes through the page content and builds a document object model that can later be processed with LINQ to Objects or XPath.

To start working with HAP, install the NuGet package named HtmlAgilityPack (I was using version 1.4.6) and import the namespace with the same name. If you don't want to use NuGet (why not?), download the zip file from the project's website and add a reference to the HtmlAgilityPack.dll file suitable for your platform (the zip contains separate versions for .NET 4.5 and Silverlight 5, for example). The documentation in the .chm file might be useful too. Attention! When I opened the downloaded file (on Windows 7), the documentation looked empty. The "Unblock" option on the file's properties screen solved the problem.

Retrieving webpage content with HAP is very easy. You create an HtmlWeb object and call its Load method with the page address:

C#
HtmlWeb htmlWeb = new HtmlWeb();
HtmlDocument htmlDocument = htmlWeb.Load("http://en.wikipedia.org/wiki/Paintball");

In return, you will receive an object of the HtmlDocument class, which is the core of the HAP library.

HtmlWeb contains a bunch of properties that control how the document is retrieved. For example, it is possible to indicate whether cookies should be used (UseCookies) and what value of the User-Agent header should be included in the HTTP request (UserAgent). For me, the AutoDetectEncoding and OverrideEncoding properties were especially useful, as they let me correctly read documents with Polish characters.

C#
HtmlWeb htmlWeb = new HtmlWeb() { AutoDetectEncoding = false, OverrideEncoding = Encoding.GetEncoding("iso-8859-2") };

StatusCode (of type System.Net.HttpStatusCode) is another very useful property of HtmlWeb. With it you can check the result of the latest request.
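
A minimal sketch of such a check might look like this (what to do with a failed page - retry, skip, or abort - is up to you):

C#
HtmlDocument htmlDocument = htmlWeb.Load(url);
if (htmlWeb.StatusCode != System.Net.HttpStatusCode.OK)
{
    // the page was not retrieved correctly - log it and decide whether to retry or skip
    Console.WriteLine("Request for {0} failed with status {1}", url, htmlWeb.StatusCode);
}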

Having the HtmlDocument object ready, you can start extracting data. Here's an example of how to obtain link addresses and texts from the previously downloaded webpage (add using System.Linq):

C#
IEnumerable<HtmlNode> links = htmlDocument.DocumentNode.Descendants("a").Where(x => x.Attributes.Contains("href"));
foreach (var link in links)
{
    Console.WriteLine("Link href={0}, link text={1}", link.Attributes["href"].Value, link.InnerText);
}

The DocumentNode property, of type HtmlNode, points to the page's root. The Descendants method is used to retrieve all links (a tags) that contain an href attribute. After that, the texts and addresses are printed to the console. Quite easy, huh? A few other examples:

Getting the HTML code of the whole page:

C#
string html = htmlDocument.DocumentNode.OuterHtml;

Getting the element with the "footer" id:

C#
HtmlNode footer = htmlDocument.DocumentNode.Descendants().SingleOrDefault(x => x.Id == "footer"); 

Getting the children of the div with the "toc" id and displaying the names of the child nodes whose type is different from Text:

C#
IEnumerable<HtmlNode> tocChildren = htmlDocument.DocumentNode.Descendants().Single(x => x.Id == "toc").ChildNodes;
foreach (HtmlNode child in tocChildren)
{
    if (child.NodeType != HtmlNodeType.Text)
    {
        Console.WriteLine(child.Name);
    }
}

Getting list elements (li tags) that have the toclevel-1 class:

C#
IEnumerable<HtmlNode> tocLiLevel1 = htmlDocument.DocumentNode.Descendants()
    .Where(x => x.Name == "li" && x.Attributes.Contains("class")
    && x.Attributes["class"].Value.Split().Contains("toclevel-1"));

Notice that the Where filter is quite complex. A simple condition:

C#
Where(x => x.Name == "li" && x.Attributes["class"].Value == "toclevel-1")

is not correct! Firstly, there is no guarantee that each li tag will have the class attribute set, so we need to check whether the attribute exists to avoid a NullReferenceException. Secondly, the check for toclevel-1 is flawed. An HTML element might have many classes, so instead of using == it's worthwhile to use Contains(). A plain Value.Contains is not enough, though. What if we are looking for a "sec" class and an element has a "secret" class? Such an element would be matched too! Rather than Value.Contains, you should use Value.Split().Contains. This way an array of strings is checked with the equality operator (instead of searching a single string for a substring).

Getting the texts of all li elements which are nested in at least one other li element:

C#
var nestedLiTexts = from node in htmlDocument.DocumentNode.Descendants()
                    where node.Name == "li" && node.Ancestors("li").Count() > 0
                    select node.InnerText;

Beyond LINQ to Objects, XPath might also be used to extract information. For example:

Getting a tags whose href attribute value starts with # and is longer than 15 characters:

C#
IEnumerable<HtmlNode> links = htmlDocument.DocumentNode.SelectNodes("//a[starts-with(@href, '#') and string-length(@href) > 15]");

Finding li elements inside the div with id "toc" which are the third li element within their parent:

C#
IEnumerable<HtmlNode> listItems = htmlDocument.DocumentNode.SelectNodes("//div[@id='toc']//li[3]");

XPath is a complex tool and it's impossible to show all its great capabilities in this post...

HAP lets you explore page structure and content, but it also allows page modification and saving. It has helper methods useful for detecting document encoding (DetectEncoding), removing HTML entities (DeEntitize) and more... It is also possible to gather validation information (e.g., to check whether the original document had proper closing tags). These topics are beyond the scope of this post.
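
Just to give a taste, a tiny sketch of the entity and save helpers (the file name is arbitrary and footer is the node obtained in the earlier example):

C#
string plainText = HtmlEntity.DeEntitize(footer.InnerText); // e.g. turns &amp; into &
htmlDocument.Save("page_copy.html");                        // writes the document back to a file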

While processing consecutive pages, dump the useful information to the local store that best suits your needs. Maybe a .csv file will be enough for you, maybe a SQL database will be required? For me, a plain text file was sufficient.
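
In the simplest case that's a single call per page (records below is a hypothetical collection of already extracted rows, one string per record):

C#
// assumes using System.IO; records is a hypothetical IEnumerable<string> of extracted rows
File.AppendAllLines("scraped_data.txt", records);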

The last thing worth doing is ensuring that the scraper properly logs information about its work progress (you surely want to know how far your program got and whether it encountered any errors). For logging, it is best to use a specialized library such as log4net. There are a lot of tutorials on how to use log4net, so I will not write about it here. But I will show you a sample configuration which you can use in a console application:

XML
<?xml version="1.0" encoding="utf-8" ?>
<configuration>
    <configSections>
        <section name="log4net" type="log4net.Config.Log4NetConfigurationSectionHandler, log4net"/>          
    </configSections>
    <log4net>        
        <root>
            <level value="DEBUG"/>            
            <appender-ref ref="ConsoleAppender" />
            <appender-ref ref="RollingFileAppender"/>
        </root>
        <appender name="ConsoleAppender" type="log4net.Appender.ColoredConsoleAppender">
            <layout type="log4net.Layout.PatternLayout">
                <conversionPattern value="%date{ISO8601} %level [%thread] %logger - %message%newline" />
            </layout>
            <mapping>
                <level value="ERROR" />
                <foreColor value="White" />
                <backColor value="Red" />
            </mapping>
            <filter type="log4net.Filter.LevelRangeFilter">
                <levelMin value="INFO" />                
            </filter>
        </appender>         
        <appender name="RollingFileAppender" type="log4net.Appender.RollingFileAppender">
            <file value="Log.txt" />
            <appendToFile value="true" />
            <rollingStyle value="Size" />
            <maxSizeRollBackups value="10" />
            <maximumFileSize value="50MB" />
            <staticLogFileName value="true" />
            <layout type="log4net.Layout.PatternLayout">
                <conversionPattern value="%date{ISO8601} %level [%thread] %logger - %message%newline%exception" />
            </layout>
        </appender>
    </log4net>    
</configuration>

The above config contains two appenders: ConsoleAppender and RollingFileAppender. The first logs text to the console window, ensuring that errors are clearly distinguished by color. To reduce the amount of output, a LevelRangeFilter is set so that only entries with level INFO or higher are shown. The second appender logs to a text file (even entries with DEBUG level go there). The maximum size of a single file is set to 50MB and the total number of files is limited to 10. The current log is always in the Log.txt file...
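
For completeness, the minimal code needed to use this config from a console application looks roughly like this (the logger name and the logged variables are just placeholders):

C#
// assumes using log4net; and using log4net.Config;
XmlConfigurator.Configure();                    // reads the log4net section from App.config
ILog log = LogManager.GetLogger("Scraper");     // "Scraper" is an arbitrary logger name

log.InfoFormat("Processing page {0}", pageNumber);
log.Error("Failed to process page", exception); // exception is a caught System.Exception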

And that's all, the scraper is ready! Run it and let it labor for you. No dull, long hours of manual work - leave that to people who don't know how to program :)

Additionally, you can try a little exercise: instead of creating the list of all pages to visit upfront, determine only the first page and find the link to the next page in the currently processed one...
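
A rough outline of that approach might look like this (the rel="next" selector is just an assumption - the actual pagination markup depends on the site):

C#
string nextUrl = "http://example.com/somedb?page=1";
while (nextUrl != null)
{
    HtmlDocument doc = htmlWeb.Load(nextUrl);
    // ... extract records from doc here ...
    HtmlNode nextLink = doc.DocumentNode.SelectSingleNode("//a[@rel='next' and @href]"); // assumed markup
    nextUrl = nextLink != null ? nextLink.Attributes["href"].Value : null;
}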

P.S.: Keep in mind that HAP works on the HTML code that was sent by the server (this code is used by HAP to build the document model). The DOM you can observe in the browser's developer tools is often a result of script execution and might differ greatly from the one built directly from the HTTP response.

  • Update 08.12.2013: As requested, I created a simple demo (Visual Studio 2010 solution) of how to use Html Agility Pack and log4net. The app extracts some links from a wiki page and dumps them to a text file. The wiki page is saved to an htm file to avoid a dependency on a web resource that might change. Download.
  • Update 05.12.2013: Code samples with selecting by id now use Single instead of Where+First. It's good practice to use the Single method if you want to get exactly one element.

License

This article, along with any associated source code and files, is licensed under The Code Project Open License (CPOL)

