Parse HTML Components
This page explains how to parse and extract information from a page (local or remote).
Parsing HTML and extracting the relevant information is useful in many contexts: scanning a page for a price change, extracting a component, detecting broken links, etc.
AppSeed, in particular, uses HTML parsing for two things:
    Page structure detection
    Component extraction
For newcomers, AppSeed uses automation tools to convert lifeless UI Kits into simple starters generated for many frameworks and patterns. For instance, the open-source Pixel Lite design provided by Themesberg has been translated to Flask and Django using HTML parsing as the first phase of the translation process.
Required libraries and tools
    Python - the interpreter
    Beautiful Soup - a well-known parsing library
    Lxml - used to compensate for BS4's limitations

The process

The flow explained in this article executes a few simple steps:
    Load the HTML content - from a local file or a LIVE website
    Analyze the page and obtain the XPath expression for the target component
    Use the Lxml library to extract the component's HTML
    Format the component and save it on disk
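For readers who prefer a single script over a console session, the steps above can be sketched as follows. The helper name `extract_component` and the inline sample markup are illustrative, not part of the AppSeed tooling:

```python
# Minimal end-to-end sketch: load HTML, isolate a node via XPath with Lxml,
# then prettify it with Beautiful Soup.
from bs4 import BeautifulSoup
from lxml import html
from lxml.etree import tostring

def extract_component(html_page, xpath_expr):
    """Return the prettified HTML of the first node matching xpath_expr, or None."""
    html_dom = html.fromstring(html_page)
    matches = html_dom.xpath(xpath_expr)
    if not matches:
        return None
    return BeautifulSoup(tostring(matches[0]), 'lxml').prettify()

# A local sample; the content of a LIVE page fetched with requests works the same way.
sample = '<html><body><section id="features"><p>Hello</p></section></body></html>'
print(extract_component(sample, '//*[@id="features"]'))
```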
Install libraries via PIP
$ pip install requests
$ pip install lxml
$ pip install beautifulsoup4
From this point on, all the code is typed in a Python console:
$ python [ENTER]
>>>
Load the content from a local file
>>> f = open('./app/templates/index.html', 'r')
>>> html_page = f.read()
>>> f.close()
Load the content from a remote HTML file (the LIVE sample)
>>> import requests
>>> page = requests.get('https://demo.themesberg.com/pixel-lite/index.html')
>>> html_page = page.content
At this point, the html_page variable contains the entire HTML content (a string when read from the local file, bytes when fetched with requests - both are accepted by BS4 and Lxml) and we can use it to extract the components. To visualize the page structure, we can use the browser's developer tools:
HTML Parser - Target Component Inspection.
The target component will be extracted using an XPATH expression provided by the browser:
//*[@id="features"]
To extract the component, this XPath expression is passed to the Lxml library to isolate the code.
>>> from lxml import html
>>> html_dom = html.fromstring( html_page )
>>> component = html_dom.xpath( '//*[@id="features"]' )
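One detail worth keeping in mind: xpath() returns a list of matching elements, so it is safer to check the result before indexing. A small sketch using inline markup (the sample HTML is illustrative):

```python
# xpath() returns a list; an unmatched selector yields an empty list,
# so guard before accessing component[0].
from lxml import html

html_dom = html.fromstring('<div><section id="features">x</section></div>')
component = html_dom.xpath('//*[@id="features"]')
if component:
    print(component[0].tag)    # the match is an lxml Element object
else:
    print('selector matched nothing')
```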
To extract the raw HTML from the component object, we use the tostring helper exposed by the Lxml library:
>>> from lxml.etree import tostring
>>> component_html = tostring( component[0] )
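Note that tostring returns bytes by default; when a str is more convenient, lxml's documented encoding='unicode' option returns text directly. The snippet below uses a small inline sample:

```python
# tostring() yields bytes by default; encoding='unicode' yields a str.
from lxml import html
from lxml.etree import tostring

node = html.fromstring('<p id="demo">hello</p>')
as_bytes = tostring(node)                     # bytes
as_text = tostring(node, encoding='unicode')  # str
```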
The next step is to call Beautiful Soup and prettify the HTML before saving it on disk:
>>> from bs4 import BeautifulSoup as bs
>>> soup = bs( component_html, 'lxml' )
>>> soup.prettify()
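In the console, soup.prettify() only returns (and echoes) the formatted string; to actually complete the last step of the flow, write the result to a file. A sketch with an inline sample and an illustrative output path:

```python
# Write the prettified component to disk; the file name is illustrative.
from bs4 import BeautifulSoup

soup = BeautifulSoup('<section id="features"><p>Hello</p></section>', 'lxml')
with open('component.html', 'w') as f:
    f.write(soup.prettify())
```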
The component is fully extracted and parsable:
<section class="section section-lg pb-0" id="features">
  <div class="container">
    <div class="row">

      ...

      <div class="col-12 col-md-4">
        <div class="icon-box text-center mb-5 mb-md-0">
          <div class="icon icon-shape icon-lg bg-white shadow-lg border-light rounded-circle icon-secondary mb-3">
            <span class="fas fa-box-open">
            </span>
          </div>
          <h2 class="my-3 h5">
            80 components
          </h2>
          <p class="px-lg-4">
            Beatifully crafted and creative components made with great care for each pixel
          </p>
        </div>
      </div>

      ...

      </div>
    </div>
  </div>
</section>
The rendered version:
HTML Parser - Extracted Component.
