Web Scraping At Scale



NextStep 2019 was an exciting event that drew professionals from multiple countries and several sectors. One of our most popular technical sessions was on how to scrape website data. Presented by Miguel Antunes, an OutSystems MVP and Tech Lead at one of our partners, Do iT Lean, this session is available on-demand. But, if you prefer to just quickly read through the highlights…keep reading, we’ve got you covered!


As developers, we all love APIs. They make our lives that much easier. However, there are times when APIs aren’t available, making it difficult for developers to access the data they need. Thankfully, there are still ways for us to access the data required to build great solutions.

What Is Web Scraping?

Web scraping is the act of pulling data directly from a website by parsing the HTML of the web page itself. Instead of going through the slow process of manually extracting data, web scraping uses automation to retrieve countless data points from any number of websites.

If a browser can render a page, and we can parse the HTML in a structured way, it’s safe to say we can perform web scraping to access all the data.
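
For instance, a minimal sketch in Python, using the requests and BeautifulSoup libraries purely as an illustration (the OutSystems approach described later uses its own component), shows the idea: fetch the page, parse the HTML into a structured document, and read data out of it.

```python
import requests
from bs4 import BeautifulSoup

# Fetch the raw HTML exactly as a browser would receive it.
html = requests.get("https://example.com").text

# Parse it into a structured document we can query.
soup = BeautifulSoup(html, "html.parser")

# Read data out of the parsed tree, e.g. the page title and all link targets.
print(soup.title.string)
print([a.get("href") for a in soup.find_all("a")])
```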

Benefits of Web Scraping and When to Use It

You don’t have to look far to come up with many benefits of web scraping.

  • No rate-limits: Unlike with APIs, there aren’t any rate limits to web scraping. With APIs, you need to register an account to receive an API key, limiting the amount of data you’re able to collect based on the limitations of the package you buy.
  • Anonymous access: Since there’s no API key, your information can’t be tracked. Only your IP address and cookies can be tracked, but that can easily be fixed through spoofing, allowing you to remain perfectly anonymous while accessing the data you need.
  • The data is already available: When you visit a website, the data is public and available. There are some legal concerns regarding this, but most of the time, you just need to understand the terms and conditions of the website you’re scraping, and then you can use the data from the site.

How to Web Scrape with OutSystems: Tutorial

Regardless of the language you use, there’s an excellent scraping library that’s perfectly suited to your project:

  • Python: BeautifulSoup or Scrapy
  • Ruby: Upton, Wombat or Nokogiri
  • Node: Scraperjs or X-ray
  • Go: Scrape
  • Java: Jaunt

OutSystems is no exception. Its Text and HTML Processing component is designed to interpret the text from the HTML file and convert it to an HTML Document (similar to a JSON object). This makes it possible to access all the nodes.

It also extracts information from plain text data with regular expressions, or from HTML with CSS selectors. You’ll be able to manipulate HTML documents with ease while sanitizing user input against HTML injection.

But what does web scraping look like in real life? Let’s take a look at scraping an actual website.

We start with a simple plan:

  • Pinpoint your target: a simple HTML website;
  • Design your scraping scheme;
  • Run and let the magic happen.

Scraping an Example Website

Our example website is www.bank-code.net, a site that lists all the SWIFT codes from the banking industry. There’s a ton of data here, so let’s get scraping.

This is what the website looks like:

If you want to collect these SWIFT codes for an internal project, copying them manually would take hours. With scraping, extracting the data takes a fraction of that time.

  • Navigate to your OutSystems personal environment, and start a new app (if you don't have one yet, sign-up for OutSystems free edition);
  • Choose “Reactive App”;
  • Fill in your app’s basic information, including its name and a description, and continue;
  • Click on “Create Module”;
  • Reference the library you’re going to use from the Forge component, which in this case is the “Text and HTML Processing” library;
  • Go to the website and copy the URL, for example: https://bank-code.net/country/PORTUGAL-%28PT%29/100. We’re going to use Portugal as a baseline for this tutorial;
  • In the OutSystems app, create a REST API integration with the website. It’s basically just a GET request; paste the copied URL as the endpoint;
  • Notice that the pagination offset is already present in the URL: it’s the “/100” part. Change that into a REST input parameter (a rough Python equivalent of this request is sketched after this list);
  • Out of our set of actions, we’ll use the ones designed to work with HTML, which in this case are Attributes or Elements. We can send the website’s HTML text to these actions, which return our HTML document, the one mentioned before that looks like a JSON object where you can access all the nodes of the HTML.
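
Outside OutSystems, the same integration boils down to a parameterized GET request. Here is a rough Python sketch of that call (the requests library and the helper name are illustrative, not part of the OutSystems tutorial); the offset argument plays the role of the REST input parameter described above:

```python
import requests

BASE_URL = "https://bank-code.net/country/PORTUGAL-%28PT%29/{offset}"

def get_page(offset: int) -> str:
    """Fetch one page of results; the trailing number is the pagination offset."""
    response = requests.get(BASE_URL.format(offset=offset))
    response.raise_for_status()
    return response.text  # raw HTML, to be parsed into an HTML document

html_text = get_page(100)
```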

Now we can create our action to scrape the website. Let’s call it “Scrape”, for example.

  • Use the endpoint previously created, which will gather the HTML. We’ll parse this HTML text into our document;
  • Going back to the website in Chrome, right-click the part of the page with the content you’d like to scrape. Click “Inspect” and, in the panel that opens, identify the table you’d like to scrape;
  • Since the table has its own ID, it will be unique across the HTML text, making it easy to identify in the text;
  • Since we now have the table, we want to get all of its rows. You can identify the selector for a row by expanding the HTML until you see the rows, right-clicking one of them, and choosing Copy > Copy Selector. This gives you “#tableID > tbody > tr:nth-child(1)” for the first row; since we want all of them, we’ll use “#tableID > tbody > tr”;
  • You now have all the elements for the table rows. It’s time to iterate over the rows and select each column;
  • Now, select each column’s text using the HTML document, the selector from the last action, and the column selector: “> td:nth-child(2)” selects the second column, which contains the Bank Name. For the other columns, you just need to change the “child(n)” index (see the parsing sketch after this list).
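
In Python terms, the same selectors can be applied with BeautifulSoup. This is only an illustrative sketch of the parsing step, not the OutSystems component itself: “tableID” stands in for the table’s actual ID, and the assumption that the first column holds the SWIFT code is hypothetical.

```python
from bs4 import BeautifulSoup

# html_text would be the page HTML fetched earlier; a tiny sample keeps this runnable.
html_text = "<table id='tableID'><tbody><tr><td>CODE</td><td>Bank Name</td></tr></tbody></table>"
soup = BeautifulSoup(html_text, "html.parser")

# "#tableID > tbody > tr" selects every row of the table.
for row in soup.select("#tableID > tbody > tr"):
    cells = row.select("td")
    if len(cells) >= 2:
        swift_code = cells[0].get_text(strip=True)  # td:nth-child(1), assumed to hold the code
        bank_name = cells[1].get_text(strip=True)   # td:nth-child(2) -> Bank Name
        print(swift_code, bank_name)
```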

Once you’ve scraped all the information, check whether the code already exists in the database. If it does, update the record; if it doesn’t, create it. When you hit 1-Click Publish, this gives you all the records for the first page of the website.
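
That update-or-create step is a plain upsert. Here is a minimal sketch using SQLite, with hypothetical table and column names standing in for the entities of the OutSystems app:

```python
import sqlite3

conn = sqlite3.connect("swift_codes.db")
conn.execute("CREATE TABLE IF NOT EXISTS banks (code TEXT PRIMARY KEY, name TEXT)")

def save_record(code: str, name: str) -> None:
    # Insert the record, or update the existing row if the code is already stored.
    conn.execute(
        "INSERT INTO banks (code, name) VALUES (?, ?) "
        "ON CONFLICT(code) DO UPDATE SET name = excluded.name",
        (code, name),
    )
    conn.commit()

save_record("ABCDPTPL", "Example Bank")  # placeholder values
```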

The process above is basically our tool for parsing the data from the first page: we identify the site, identify the content we want, and identify how to get the data. It runs through all the rows of the table and parses the text from each column, storing it in our database.

For the full code used in this example, you can go to the OutSystems Forge and download it from there.

Web Scraping Enterprise Scale: Real-Life Scenario - Frankort & Koning

So, you may think that this was a nice and simple example of scraping a website, but how can you apply this at the enterprise level? To illustrate this tool’s effectiveness at an enterprise-level, we’ll use a case study of Frankort & Koning, a company we did this for.


Frankort & Koning is a Netherlands-based fresh fruit and vegetable company. They buy products from producers and sell them to the market. Because they trade in fresh produce, their industry is heavily regulated, and Frankort & Koning needs to check each product that they buy to resell.

Imagine how taxing it would be to check each product coming into their warehouse to make sure that all the producers and their products are certified by the relevant industry watchdog. This needs to be done multiple times per day per product.

GlobalGap has a very basic database, which they use to give products a thirteen-digit GGN (GlobalGap Number). This number identifies the producer, allowing them to track all the products and determine if they're really fresh. This helps Frankort & Koning certify that the products are suitable to be sold to their customers. Since GlobalGap doesn't have an API to assist with this, this is where the scraping comes in.

To work with the database as it is now, you need to enter the GGN number into the website manually. Once the information loads, there will be an expandable table at the bottom of the page. Clicking on the relevant column will provide you with the producer’s information and whether they’re certified to sell their products. Imagine doing this manually for each product that enters the Frankort & Koning warehouse. It would be totally impractical.

How Did We Perform Web Scraping for Frankort & Koning?

We identified the need for some automation here. Selenium was a great tool to set up the automation we required. Selenium automates user interactions on a website. We created an OutSystems extension with Selenium and Chrome driver.

This allowed Selenium to run Chrome instances on the server. We also needed to give Selenium some instructions on how to do the human interaction. After we took care of the human interaction aspect, we needed to parse the HTML to bring the data to our side.

The instructions Selenium needed to automate the human interaction included identifying our base URL and the 'Accept All Cookies' button, which popped up when the website opened; we needed to identify that button so we could program a click on it.

We also needed instructions for interacting with the collapse icon on the results table and with the input where the GGN number would be entered. We set all of this up to run on an OutSystems timer, with Chrome in headless mode.

We told Selenium to go to our target website and find the cookie button and input elements. We then sent the keys, as if the user had typed the GGN number, and waited a moment for the page to render. After that, we iterated over all the results and output the HTML back to the OutSystems app.
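
In Python terms, the automation the extension performs looks roughly like the sketch below. The element locators, URL, and GGN value are hypothetical placeholders; the real GlobalGap page uses its own IDs and layout.

```python
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

options = Options()
options.add_argument("--headless")  # run Chrome on the server without a visible window
driver = webdriver.Chrome(options=options)

driver.get("https://database.globalgap.org/")  # illustrative base URL
wait = WebDriverWait(driver, 10)

# Dismiss the cookie banner so it doesn't block further clicks (hypothetical locator).
wait.until(EC.element_to_be_clickable((By.ID, "accept-all-cookies"))).click()

# Type the GGN into the search input, as a user would (hypothetical locator and value).
search_box = wait.until(EC.presence_of_element_located((By.ID, "ggn-input")))
search_box.send_keys("4049929000000")

# Expand the results table, wait for it to render, and hand the HTML back for parsing.
wait.until(EC.element_to_be_clickable((By.CSS_SELECTOR, ".collapse-toggle"))).click()  # hypothetical
wait.until(EC.presence_of_element_located((By.CSS_SELECTOR, "table")))
html = driver.page_source
driver.quit()
```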

This is how we tie together automation and user interaction with web scraping.

These are the numbers we worked with at Frankort & Koning:

  • 700+ producers supplying products
  • 160+ products provided each day
  • 900+ certificates - the number of checks they needed to perform daily
  • It would’ve taken about 15 hours to process this information manually
  • Instead, it took only two hours to process this information automatically

This is just one example of how web scraping can contribute to bottom-line savings in an organization.

Still Got Questions?

Just drop me a line! And in the meantime, if you enjoyed my session, take a look at the NextStep 2020 conference, now available on-demand, with more than 50 sessions presented by thought leaders driving the next generation of innovation.


Businesses that don’t rely on data have a meager chance of success in a data-driven world. One of the best sources of data is the data available publicly on various websites, and to get that data you have to employ a technique called web scraping or data scraping. The purpose of this article is to walk you through some of the things you need to do, and the issues you need to be cognizant of, when you do large-scale web scraping.

Building and maintaining a large number of web scrapers is a very complex project that, like any major project, requires planning, personnel, tools, budget, and skills.


You will most likely need to hire a few developers who know how to build scalable scrapers, set up servers and related infrastructure to run those scrapers without interruption, and integrate the data you extract into your business process.

You can use full-service professionals such as ScrapeHero to do all this for you or if you feel brave enough, you can tackle this yourself.

This article will give you tips on how to build scrapers on a large scale. To learn more about what web scraping is and how to build a web scraper using Python, you can read our guide –

How to Build Web Scrapers

The best way to build a web scraper is to use one of the many web scraping tools and frameworks. It’s recommended to select a web scraping framework for building your scrapers, like Scrapy (Python), PySpider (Python), or Puppeteer (JavaScript).


You can get started with building your own web scraping solution by following some of our web scraping tutorials.

The best reason to build your own web scraper is that you won’t run the risk of your developer(s) disappearing one day, leaving no one to maintain your scrapers. You also won’t lock yourself into the ecosystem of a proprietary tool, with no way to move hundreds of scrapers to another tool if it shuts down. This has already happened in our industry with a high-profile company; read more about that here – Demise of Kimono Labs and the fate of their users.

The choice of tools or frameworks should depend on a few factors related to the website(s) you plan to scrape. Some tools are better than others at handling complex sites. Generally speaking, the more flexible the tool, the steeper the learning curve; conversely, the easy-to-use tools may not handle complex sites or complex logic.

What is a complex website?

Any website built with an advanced JavaScript framework such as React or Angular is usually complex if you are extracting a lot of data from it. You will need a real web browser, driven by a tool such as Puppeteer or Selenium, to scrape the data. Alternatively, you can check whether the website has a REST API and reverse engineer it.

What is complex logic?

Here are some examples of complex logic:

  • Scraping details from a listing website, search with the listing name in another site, combine this data and save it to a database
  • Take a list of keywords, perform a search on Google Maps for each keyword, go to the result and extract contact details, repeat the same process on Yelp and a few other websites, and finally combine all this data.

Should you use a visual web scraping tool?

Visual web scraping tools are pretty good at extracting data from simple websites and are easy to get started with. But once you hit a wall, there isn’t much you can do. We recommend using visual tools only when the website isn’t too complicated and your scraping logic isn’t complex.

We are yet to find an open-source visual web scraping tool that can handle complex logic. If the website is complex and you need to do large-scale web scraping, you are better off building a scraper from scratch using a programming language like Python.

Which programming language is the best for building web scrapers?

Another common question we usually hear from our readers at ScrapeHero is: Which programming language should we use for building web scrapers?

We recommend that you use Python. Scrapy, the most popular web scraping framework, is built in Python. Python also has the largest number of web scraping frameworks and is excellent for parsing and processing data.

You can use tools like Selenium with Python to scrape most modern websites built with JavaScript frameworks like React, Angular, or VueJS. There is also a massive community of Python developers if you need to hire talent.

Learn more about building web scrapers

How to run web scrapers at large scale

There is a massive difference between writing and running one scraper that scrapes 100 pages and building a large-scale, distributed scraping infrastructure that can scrape thousands of websites or millions of pages a day.

Did you know?

ScrapeHero’s infrastructure can scrape websites at a rate of 3,000 pages per second, but we never use all of it on a single website. Be nice to the websites you scrape by not putting a heavy load on their servers and CDNs.

Here are some tips to run web scrapers on a large scale –

Distributed Web Scraping Architecture

For scraping millions of pages daily, you will need a few servers, a way to distribute your scrapers across them, and a way for the scrapers to communicate with each other. Here are some components you would need to make this happen –

URL Queue and a Data Queue – use a message broker like Redis, RabbitMQ, or Kafka to distribute URLs and data across the scrapers running on different servers. Design the scrapers to read URLs from a queue in the broker, scrape them, put the extracted data into another queue, and feed newly discovered URLs back into the URL queue. A separate process reads from the data queue and writes to a database while the scrapers are running. You can skip this step and write directly to the database from the scraper if you are not writing a lot of data.
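
As a rough illustration of that flow, here is a worker sketch built on redis-py. The queue names and the scrape() placeholder are assumptions, not a prescribed setup:

```python
import json
import redis

broker = redis.Redis(host="localhost", port=6379)

def scrape(url: str):
    """Placeholder: fetch the page, parse it, and return (record, discovered_urls)."""
    return {"url": url}, []

def run_worker():
    while True:
        # Block until a URL is available on the shared queue.
        _, raw_url = broker.blpop("url_queue")
        url = raw_url.decode()

        record, new_urls = scrape(url)

        # Push extracted data to the data queue for a separate writer process...
        broker.rpush("data_queue", json.dumps(record))
        # ...and feed newly discovered URLs back into the URL queue.
        for link in new_urls:
            broker.rpush("url_queue", link)
```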

You will also need a process manager to make sure your scraper scripts restart automatically if they are killed for some reason while parsing data.

Frameworks like Scrapy-Redis and PySpider let you skip a lot of the above tasks.

Scheduling website scrapers for periodic data collection

If you need to refresh data periodically, you can either run the scrapers manually or automate the process with a tool. If you are using Scrapy, scrapyd plus cron can schedule the spiders for you and update the data on whatever cadence you need. PySpider has a similar interface for doing this.
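
For example, with Scrapy deployed to scrapyd, scheduling a crawl is a single HTTP call that a cron job can fire on whatever interval you need. The project and spider names below are placeholders:

```python
import requests

# scrapyd listens on port 6800 by default; schedule.json queues a crawl of the given spider.
response = requests.post(
    "http://localhost:6800/schedule.json",
    data={"project": "myproject", "spider": "swift_codes"},  # placeholder names
)
print(response.json())  # e.g. {"status": "ok", "jobid": "..."}
```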

Databases to store a large number of records

Once you have this massive data trove, you need a place to save it. We would suggest using a NoSQL database like MongoDB, Cassandra or HBase to store this data, depending upon the frequency and speed of data scraping.
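
For instance, with MongoDB and pymongo, each scraped record can be upserted by a natural key so repeated crawls refresh rather than duplicate data. The database, collection, and field names here are illustrative:

```python
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
collection = client["scraping"]["products"]  # illustrative database/collection names

record = {"ggn": "4049929000000", "producer": "Example Producer", "certified": True}

# Upsert on the natural key so re-scraping updates the existing document instead of adding a copy.
collection.update_one({"ggn": record["ggn"]}, {"$set": record}, upsert=True)
```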

You can then extract this data from the database and integrate it with your business process. But before you do that, you should set up some reliable quality assurance tests for your data (more on that later).


Use IP Rotation and Proxies to Get Past Anti Scraping Tools

Large-scale scraping comes with a multitude of problems, and anti-scraping tools and techniques are among the biggest. There is a considerable number of bot mitigation and screen scraping protection tools, aka anti-scraping tools, such as Akamai, Distil Networks, ShieldSquare, and PerimeterX, that block your scrapers from accessing websites, usually with an IP ban.

If any of the websites you need to scrape use IP-based blocking, your servers’ IP addresses will be blacklisted in no time. The site will then stop responding to requests from your servers or show you a captcha, leaving you with very few options once you’re blacklisted.

Since we are talking about millions of requests, you might need some of the following:


  • Rotate the requests you make through 1,000+ private proxies if you are not dealing with a captcha or anti-scraping service (see the sketch after this list)
  • Make requests through a provider with 100,000+ geo proxies that are not completely blacklisted when dealing with most anti-scraping solutions
  • Or time and resources to reverse engineer and bypass the anti-scraping solutions
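
A minimal rotation sketch using the requests library; the proxy addresses are placeholders for whatever pool or provider you use:

```python
import itertools
import requests

# Placeholder pool; in practice these would be your private or provider-supplied proxies.
PROXIES = [
    "http://user:pass@proxy1.example.com:8000",
    "http://user:pass@proxy2.example.com:8000",
    "http://user:pass@proxy3.example.com:8000",
]
proxy_cycle = itertools.cycle(PROXIES)

def fetch(url: str) -> str:
    proxy = next(proxy_cycle)  # send each request through the next proxy in the pool
    response = requests.get(url, proxies={"http": proxy, "https": proxy}, timeout=30)
    response.raise_for_status()
    return response.text
```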

Learn more about proxies and IP bans

Data Validation and Quality Control

The data you scrape is only as good as its quality. To ensure the data you scraped is accurate and complete, you need to run a variety of quality assurance tests on it right after it is scraped. Especially when doing large-scale web scraping, validate every field of every record before you save or process it.

Having a set of tests for the integrity of the data is essential. You can automate some of this by using regular expressions to check whether the data follows a predefined pattern; if it doesn’t, generate alerts so the records can be inspected manually.

There are tools like Pandas, Cerberus, and Schema, built in Python, to help you validate the data record by record.
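
As a sketch of record-by-record validation, a Cerberus schema can flag any record that fails an expected pattern. The field names and the SWIFT-code pattern below are examples, not a prescribed format:

```python
from cerberus import Validator

# Example schema: an 8- or 11-character SWIFT code and a non-empty bank name.
schema = {
    "code": {"type": "string", "regex": r"^[A-Z0-9]{8}([A-Z0-9]{3})?$", "required": True},
    "bank": {"type": "string", "empty": False, "required": True},
}
validator = Validator(schema)

def check(record: dict) -> bool:
    if not validator.validate(record):
        # Alert or queue for manual inspection instead of silently saving bad data.
        print(f"Invalid record {record}: {validator.errors}")
        return False
    return True

check({"code": "ABCDPTPL", "bank": "Example Bank"})
```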

If the scraper is part of a data pipeline, you can set up multiple stages in the pipeline and verify the integrity and quality of the data using ETL tools.

Maintenance

Data Scrapers

Every website changes its structure now and then, and your scrapers need to change with it. Website scrapers usually need adjustments every few months or even weeks, since a minor change in a target website that affects the fields you scrape might give you incomplete data or crash the scraper, depending on your scraper’s logic.

You need a mechanism to alert you if a lot of the data you scrape suddenly turns out invalid or empty, so you can check the scrapers and the website for the change that triggered it. Once a scraper breaks, you need to fix it quickly, either manually or with logic sophisticated enough to repair itself, to prevent disruptions in your data pipeline.

Database and Storage

If you’re doing large-scale web scraping, you need lots of data storage, and it’s best to plan for it up-front. Small amounts of data can be stored in flat files or spreadsheets. But when the data becomes too large for a spreadsheet to handle, you need to evaluate storage options such as cloud storage and cloud-hosted databases (S3, Azure SQL, Azure Postgres, Redshift, Aurora, RDS, DynamoDB, Redis), relational databases (MySQL, Oracle, SQL Server), or NoSQL databases (MongoDB, Cassandra, etc.).

Depending on the size of the data, you will need to clean outdated data out of your database to save space and money, or scale up your systems if you still need the old data. Sharding and replication of databases can help here.

You need to make sure that you do not keep anything that is not of use after the scraping is complete.

Know when to ask for help

As you might have realized by now, this whole process is expensive and time-consuming. You need to be ready to take on this challenge for large scale web scraping.

You also need to know when to stop and ask for help. ScrapeHero has been doing all this and more for many years now; let us know if you need help building a scraper.

Or you can just worry about what to do with the data we deliver to you. Interested?

Turn the Internet into meaningful, structured and usable data
