
Experimenting on data: Web-Scraping

On some desperate corona evening recently, I ended up scraping some library records from the Herzog August Bibliothek Wolfenbüttel. Since I obtained some information by web crawling that I really didn't need in the first place, I thought this would make a great first post for our 'Experiments on Data' series.

Pick a scraping library in the programming language of your choice

When you want to scrape a website, you'll probably need a scraping library first. There are many of them (such as Beautiful Soup or Selenium; my example uses requests and lxml). I'll show some examples in Python.
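To give a first idea of how the two play together: requests downloads a page and lxml parses it, roughly like this (a minimal sketch with a placeholder URL, just to check everything is installed):

import requests
from lxml import html

# download a page and parse the returned HTML
page = requests.get('https://example.org')   # placeholder URL
tree = html.fromstring(page.text)

# print the HTTP status code and the page title as a quick sanity check
print(page.status_code)
print(tree.xpath('//title/text()'))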


Pick a site to crawl and check out what its links look like

Then you need to pick a site to crawl. In case you want to crawl more than one page, you need to look at the links. In my case, the links looked super cryptic, but by following a few more of them (clicking on "Next page" and so on) I spotted the pattern and was ready to roll. So first, I made sure the program got all the pages I wanted.
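In my case, the result pages only differed in an offset at the end of the URL that went up in steps of ten, so you can also build the list of page offsets programmatically rather than typing them out (a small sketch; the step and the range depend on your own hit list):

# generate the page offsets '01', '11', ..., '61'
pages_list = ['%02d' % n for n in range(1, 62, 10)]
print(pages_list)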

(You can get the .ipynb and a PDF output where you can check out the results on my GitHub here. Just scroll down in the readme, where you can already see most of the code.)

A peek into the IPython Notebook…

(Look at the code here as an HTML page or here as slides; you have to download the .html files and open them in a browser.)

from lxml import html
import requests

# the OPAC result pages only differ in the FRST= offset at the end of the URL
url_front = 'http://opac.lbs-braunschweig.gbv.de/DB=2/SET=2/TTL=21/NXT?FRST='
starting_url = url_front + '01'

# offsets of the result pages I wanted (ten hits per page)
pages_list = ['01', '11', '21', '31', '41', '51', '61']

# build the full URLs and check they look right
the_pages = [url_front + item for item in pages_list]
for one_url in the_pages:
    print(one_url)

def print_elem(elem):
    # small helper: show an element's tag, its attributes and the first 200 characters of its text
    print("<%s>\nAttributes: %s;\n%s...\n\n" % (elem.tag, elem.attrib, elem.text_content()[:200].replace('\n', ' ')))

# download every result page and keep the responses
page_store = []
for one_page in the_pages:
    page = requests.get(one_page)
    page_store.append(page)

# parse each downloaded page and get a first impression of its content
for page in page_store:
    tree = html.fromstring(page.text)
    print_elem(tree)

# one of the detail pages I ended up looking at
mainpage = 'http://opac.lbs-braunschweig.gbv.de/DB=2/SET=2/TTL=37/MAT=/NOMAT=T/REL?PPN=080093043'
page = requests.get(mainpage)
tree = html.fromstring(page.text)
print_elem(tree)

# the actual hits live in table cells of class 'hit' inside the 'hitlist' table
hits = tree.xpath("//table[@summary='hitlist']/tr[@valign='top']/td[@class='hit' and @align='left']")
for hit in hits:
    print_elem(hit)

Now we get some results. Which is also when I realized that my master's thesis was the top search result, which I am hugely embarrassed about since I no longer stand by anything I said in it. Well, that's life, I guess.


Learn about the page’s HTML structure

Then I came up with a way of processing the results. Usually you're not scraping your own site, where you already know what the data looks like and probably already have the data anyway, so you wouldn't need to scrape it in the first place (if you are scraping your own site, you must be really bored, in which case I can recommend starting a blog…). Since you don't usually HTML-inspect sites that you're not web-developing yourself (or at least I don't), you won't know what the HTML structure looks like. But we need to find out all we can about the ids or classes of the elements containing the content we're interested in. If you've never inspected the HTML of a website before: you can usually get to the inspect view by right-clicking on the piece of the site you're interested in, where there will be an "Inspect" option or something similar (depending on your browser). There are usually also keyboard shortcuts you can use.
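If you prefer to poke around from Python instead of (or in addition to) the browser's inspector, lxml lets you do that too. A small sketch, assuming a tree parsed as in the code above, that lists all tables and their attributes so you can spot the one holding the hit list:

# list every table on the page together with its attributes;
# the interesting one here turns out to be the table with summary='hitlist'
for table in tree.xpath('//table'):
    print(table.attrib)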

# the same XPath as above, once for the links (the <a> elements) and once for the table cells
titles = {}
linklist = tree.xpath("//table[@summary='hitlist']/tr[@valign='top']/td[@class='hit' and @align='left']/a")
tds = tree.xpath("//table[@summary='hitlist']/tr[@valign='top']/td[@class='hit' and @align='left']")

# for each hit: print the title text and the (relative) link it points to
for link in linklist:
    print(link.text_content())
    print(link.attrib['href'])

# the table cells contain the full display text of each hit
for td in tds:
    print(td.text_content())
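The titles dict above is still empty, by the way. One way to fill it would be to map each hit's title to its link (a small sketch; it assumes the title texts are distinct enough to serve as keys):

# collect the hits into the titles dict: title text -> relative link
for link in linklist:
    titles[link.text_content().strip()] = link.attrib['href']

print(len(titles), 'titles collected')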


Go and get your data

Once you've identified what your data looks like, that's what you're going to extract, and that's usually the point where most of the work is already done. However, please note that not all sites like to be crawled, and you'll also run into trouble trying to scrape a site whose content is generated dynamically using JavaScript, because your scraper won't execute it (there are ways around this, such as browser-automation tools like Selenium, but it does get quite a bit more complicated there).
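For the record, a scraper driven by a real browser could look something like this (a minimal sketch, assuming you have Selenium and a matching browser driver such as geckodriver installed; the URL is just a placeholder):

from selenium import webdriver
from lxml import html

# start a real browser so that JavaScript on the page actually gets executed
driver = webdriver.Firefox()
driver.get('https://example.org/some-dynamic-page')   # placeholder URL

# page_source now contains the HTML after JavaScript has run,
# so it can be parsed with lxml just like a requests response
tree = html.fromstring(driver.page_source)
driver.quit()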

Save the data where you want it or process it further – and that’s it really. In my case, however, I don’t even know why I started this crawling project late at night. After all, I already have an extensive bibliography on the subject of my dissertation… Well, data moves in mysterious ways.
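If you do want to keep the results, writing them to a CSV file is probably the simplest option. A minimal sketch, reusing the titles dict from above (the filename is just an example):

import csv

# write every title and its link into a simple two-column CSV file
with open('hitlist.csv', 'w', newline='', encoding='utf-8') as f:
    writer = csv.writer(f)
    writer.writerow(['title', 'link'])
    for title, link in titles.items():
        writer.writerow([title, link])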

As you can see, this really wasn’t complicated or a lot of code. Maybe it motivates someone to try it out.

What are your useless experiments on data?

Happy crawling,

S

