Today, more and more websites are using Ajax for fancy user experiences, dynamic web pages, and many other good reasons. Crawling Ajax-heavy websites can be tricky and painful, so we are going to look at some tricks to make it easier.
Prerequisite
Before starting, please read the previous articles I wrote, Introduction to Web Scraping With Java and Handling Authentication, to learn how to set up your Java environment and get a basic understanding of HtmlUnit. After reading them you should already be a little more familiar with web scraping.
Setup
The first way we are going to look at for scraping Ajax websites with Java is using PhantomJS with Selenium and GhostDriver.
PhantomJS is a headless web browser based on WebKit, the rendering engine used in Safari (Chrome uses Blink, a fork of it). It is quite fast and does a great job of rendering the DOM like a normal web browser.
- First you'll need to download PhantomJS
- Then add the Selenium and GhostDriver dependencies to your pom.xml, as shown below:
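Something like this should work (the version numbers here are assumptions; check Maven Central for current ones):

```xml
<!-- Selenium WebDriver -->
<dependency>
    <groupId>org.seleniumhq.selenium</groupId>
    <artifactId>selenium-java</artifactId>
    <version>2.53.1</version>
</dependency>
<!-- GhostDriver bindings for PhantomJS -->
<dependency>
    <groupId>com.github.detro</groupId>
    <artifactId>phantomjsdriver</artifactId>
    <version>1.2.0</version>
</dependency>
```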
PhantomJS and Selenium
Now we're going to use Selenium and GhostDriver to “pilot” PhantomJS.
The example we are going to look at is a simple “See more” button on a news site that performs an Ajax call to load more news. You may think that spinning up PhantomJS just to click a simple button is a waste of time and overkill? Of course it is!
The news site is Inshorts.
As usual, we have to open the Chrome dev tools (or your favorite inspector) to see how to select the “Load More” button and then click on it.
Now let's look at some code:
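Here is a minimal sketch of what that setup method can look like (the CLI arguments are only examples; drop any flag you do not need):

```java
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.phantomjs.PhantomJSDriver;
import org.openqa.selenium.phantomjs.PhantomJSDriverService;
import org.openqa.selenium.remote.DesiredCapabilities;

public class AjaxScraper {

    private static WebDriver initPhantomJs() {
        DesiredCapabilities caps = new DesiredCapabilities();
        caps.setJavascriptEnabled(true);

        // Path to the PhantomJS binary (replace with your own path)
        caps.setCapability(
                PhantomJSDriverService.PHANTOMJS_EXECUTABLE_PATH_PROPERTY,
                "/usr/local/bin/phantomjs");

        // Example command-line arguments; PhantomJS accepts many more
        caps.setCapability(
                PhantomJSDriverService.PHANTOMJS_CLI_ARGS,
                new String[] { "--ignore-ssl-errors=true", "--load-images=false" });

        return new PhantomJSDriver(caps);
    }
}
```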
That's a lot of code just to set up PhantomJS and Selenium! I suggest you read the documentation to see the many arguments you can pass to PhantomJS. Note that you will have to replace /usr/local/bin/phantomjs with your own PhantomJS executable path.
Then, in a main method:
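A sketch of that main method (the URL, the button id, and the news-card class are assumptions; use whatever you find in the inspector, and add org.openqa.selenium.By to the imports):

```java
public static void main(String[] args) throws InterruptedException {
    WebDriver driver = initPhantomJs();
    driver.get("https://inshorts.com/en/read"); // assumed URL of the news site

    // Select the "Load More" button by its id and click it
    // ("load-more-btn" is a hypothetical id)
    driver.findElement(By.id("load-more-btn")).click();

    // Dirty: wait an arbitrary 800 ms for the Ajax call to complete
    Thread.sleep(800);

    // Count the articles on the page and print the result
    // (".news-card" is a hypothetical class name)
    int articleCount = driver.findElements(By.cssSelector(".news-card")).size();
    System.out.println("Articles loaded: " + articleCount);

    driver.quit();
}
```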
Here we call the initPhantomJs() method to set everything up, then we select the button by its id and click on it. The rest of the code counts the number of articles we have on the page and prints it to show what we have loaded.
We could also have printed the entire DOM with driver.getPageSource() and opened it in a real browser to see the difference before and after the click. I suggest you look at the Selenium WebDriver documentation; there are lots of cool methods for manipulating the DOM.
In the main method above I used a dirty trick, Thread.sleep(800), to wait for the Ajax call to complete. It's dirty because the delay is an arbitrary number, and the scraper could run faster if we waited only exactly as long as the Ajax call takes. There are other ways of solving this problem:
If you look at the function being executed when we click on the button, you'll see it uses jQuery to make the Ajax call. We can take advantage of an internal jQuery variable, jQuery.active, which counts the number of ongoing Ajax calls: we simply tell the WebDriver to wait until it drops back to 0.
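A minimal sketch using Selenium's WebDriverWait (WebDriverWait and ExpectedCondition live in org.openqa.selenium.support.ui, JavascriptExecutor in org.openqa.selenium; the 10-second timeout is arbitrary):

```java
// Wait until jQuery reports that no Ajax calls are in flight
WebDriverWait wait = new WebDriverWait(driver, 10);
wait.until((ExpectedCondition<Boolean>) d ->
        (Boolean) ((JavascriptExecutor) d).executeScript("return jQuery.active == 0;"));
```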
If we knew which DOM elements the Ajax call is supposed to render, we could instead use that id/class/XPath in a WebDriverWait condition, as sketched below.
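For example, still assuming the hypothetical news-card class for freshly rendered articles:

```java
// Wait until at least one new article element is present in the DOM
WebDriverWait wait = new WebDriverWait(driver, 10);
wait.until(ExpectedConditions.presenceOfElementLocated(By.cssSelector(".news-card")));
```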
Conclusion
So we've seen a little bit about how to use PhantomJS with Java.
The example I picked is really simple; it would have been easy to emulate the underlying request directly. A great tool for intercepting requests and reverse-engineering back-end APIs is Charles Proxy.
But sometimes, when there are tens of Ajax calls and lots of JavaScript being executed to render the page properly, it can be very hard to extract the data you want, and PhantomJS/Selenium is there to save you :)
Next time we will see how to do it by analyzing the Ajax calls and making the requests ourselves.
If you want to know how to do this in Python, you can read our great 👉 Python web scraping tutorial.
As usual, you can find all the code in my GitHub repo.
Rendering JavaScript at scale can be really difficult and expensive. This is exactly the reason why we started ScrapingBee, a web scraping API that takes care of this for you. It will also take care of proxies and CAPTCHAs, so don't hesitate to check it out: the first 1,000 API calls are on us.
The 10 Best Web Scraping Tools
Web scraping, web crawling, HTML scraping, and any other form of web data extraction can be complicated. Between obtaining the correct page source, parsing it correctly, rendering JavaScript, and getting the data in a usable form, there's a lot of work to be done. Different users have very different needs, and there are tools out there for all of them: people who want to build web scrapers without coding, developers who want to build web crawlers to crawl large sites, and everyone in between. Here is our list of the 10 best web scraping tools on the market right now. From open-source projects to hosted SaaS solutions to desktop software, there is sure to be something for everyone looking to make use of web data!
1. Scraper API
Website: https://www.scraperapi.com/
Who is this for: Scraper API is a tool for developers building web scrapers; it handles proxies, browsers, and CAPTCHAs so developers can get the raw HTML from any website with a simple API call.
Why you should use it: Scraper API doesn't burden you with managing your own proxies. It manages its own internal pool of hundreds of thousands of proxies from a dozen different proxy providers, and has smart routing logic that routes requests through different subnets and automatically throttles requests in order to avoid IP bans and CAPTCHAs. It's the ultimate web scraping service for developers, with special pools of proxies for ecommerce price scraping, search engine scraping, social media scraping, sneaker scraping, ticket scraping, and more! If you need to scrape millions of pages a month, you can use this form to ask for a volume discount.
2. ScrapeSimple
Website: https://www.scrapesimple.com
Who is this for: ScrapeSimple is the perfect service for people who want a custom scraper built for them. Web scraping is made as simple as filling out a form with instructions for what kind of data you want.
Why you should use it: ScrapeSimple lives up to its name with a fully managed service that builds and maintains custom web scrapers for customers. Just tell them what information you need from which sites, and they will design a custom web scraper to deliver the information to you periodically (could be daily, weekly, monthly, or whatever) in CSV format directly to your inbox. This service is perfect for businesses that just want a html scraper without needing to write any code themselves. Response times are quick and the service is incredibly friendly and helpful, making this service perfect for people who just want the full data extraction process taken care of for them.
3. Octoparse
Website: https://www.octoparse.com/
Who is this for: Octoparse is a fantastic tool for people who want to extract data from websites without having to code, while still retaining control over the full process through its easy-to-use user interface.
Why you should use it: Octoparse is the perfect tool for people who want to scrape websites without learning to code. It features a point-and-click screen scraper, allowing users to scrape behind login forms, fill in forms, input search terms, scroll through infinite scroll, render JavaScript, and more. It also includes a site parser and a hosted solution for users who want to run their scrapers in the cloud. Best of all, it comes with a generous free tier allowing users to build up to 10 crawlers for free. For enterprise-level customers, they also offer fully customized crawlers and managed solutions where they take care of running everything for you and just deliver the data to you directly.
4. ParseHub
Website: https://www.parsehub.com/
Who is this for: Parsehub is an incredibly powerful tool for building web scrapers without coding. It is used by analysts, journalists, data scientists, and everyone in between.
Why you should use it: Parsehub is dead simple to use; you can build web scrapers simply by clicking on the data that you want. It then exports the data in JSON or Excel format. It has many handy features such as automatic IP rotation, scraping behind login walls, going through dropdowns and tabs, getting data from tables and maps, and much more. In addition, it has a generous free tier, allowing users to scrape up to 200 pages of data in just 40 minutes! Parsehub is also nice in that it provides desktop clients for Windows, Mac OS, and Linux, so you can use it from your computer no matter what system you're running.
5. Scrapy
Website: https://scrapy.org
Who is this for: Scrapy is a web scraping library for Python developers looking to build scalable web crawlers. It’s a full on web crawling framework that handles all of the plumbing (queueing requests, proxy middleware, etc.) that makes building web crawlers difficult.
Why you should use it: As an open source tool, Scrapy is completely free. It is battle tested, has been one of the most popular Python libraries for years, and is probably the best Python web scraping tool for new applications. It is well documented and there are many tutorials on how to get started. In addition, deploying crawlers is very simple and reliable; the processes can run on their own once they are set up. As a fully featured web scraping framework, there are many middleware modules available to integrate various tools and handle various use cases (handling cookies, user agents, etc.).
6. Diffbot
Website: https://www.diffbot.com
Who is this for: Enterprises with specific data crawling and screen scraping needs, particularly those that scrape websites that often change their HTML structure.
Why you should use it: Diffbot is different from most page scraping tools out there in that it uses computer vision (instead of html parsing) to identify relevant information on a page. This means that even if the HTML structure of a page changes, your web scrapers will not break as long as the page looks the same visually. This is an incredible feature for long running mission critical web scraping jobs. While they may be a bit pricy (the cheapest plan is $299/month), they do a great job offering a premium service that may make it worth it for large customers.
7. Cheerio
Website: https://cheerio.js.org
Who is this for: NodeJS developers who want a straightforward way to parse HTML. Those familiar with jQuery will immediately appreciate the best JavaScript web scraping syntax available.
Why you should use it: Cheerio offers an API similar to jQuery, so developers familiar with jQuery will immediately feel at home using Cheerio to parse HTML. It is blazing fast and offers many helpful methods to extract text, html, classes, ids, and more. It is by far the most popular HTML parsing library written in NodeJS, and is probably the best NodeJS or JavaScript web scraping tool for new projects.
8. BeautifulSoup
Website: https://www.crummy.com/software/BeautifulSoup/
Who is this for: Python developers who just want an easy interface to parse HTML, and don’t necessarily need the power and complexity that comes with Scrapy.
Why you should use it: Like Cheerio for NodeJS developers, Beautiful Soup is by far the most popular HTML parser for Python developers. It’s been around for over a decade now and is extremely well documented, with many web parsing tutorials teaching developers to use it to scrape various websites in both Python 2 and Python 3. If you are looking for a Python HTML parsing library, this is the one you want.
9. Puppeteer
Website: https://github.com/GoogleChrome/puppeteer
Who is this for: Puppeteer is a headless Chrome API for NodeJS developers who want very granular control over their scraping activity.
Why you should use it: As an open source tool, Puppeteer is completely free. It is well supported, actively developed, and backed by the Google Chrome team itself. It is quickly replacing Selenium and PhantomJS as the default headless browser automation tool. It has a well thought out API and automatically installs a compatible Chromium binary as part of its setup process, meaning you don't have to keep track of browser versions yourself. While it's much more than just a web crawling library, it's often used to scrape website data from sites that require JavaScript to display information, since it handles scripts, stylesheets, and fonts just like a real browser. Note that while it is a great solution for sites that require JavaScript to display data, it is very CPU and memory intensive, so using it for sites where a full blown browser is not necessary is probably not a great idea. Most of the time a simple GET request should do the trick!
10. Mozenda
Website: https://www.mozenda.com/
Who is this for: Enterprises looking for a cloud based self serve webpage scraping platform need look no further. With over 7 billion pages scraped, Mozenda has experience in serving enterprise customers from all around the world.
Why you should use it: Mozenda allows enterprise customers to run web scrapers on its robust cloud platform. They set themselves apart with their customer service (providing both phone and email support to all paying customers). The platform is highly scalable and allows for on-premise hosting as well. Like Diffbot, they are a bit pricy; their lowest plans start at $250/month.
Honorable Mention 1. Kimura
Website: https://github.com/vifreefly/kimuraframework
Who is this for: Kimura is an open source web scraping framework written in Ruby; it makes it incredibly easy to get a Ruby web scraper up and running.
Why you should use it: Kimura is quickly becoming known as the best Ruby web scraping library, as it's designed to work with headless Chrome/Firefox, PhantomJS, and normal GET requests, all out of the box. Its syntax is similar to Scrapy's, and developers writing Ruby web scrapers will love all of the nice configuration options for things like setting a delay, rotating user agents, and setting default headers.
Honorable Mention 2. Goutte
Website: https://github.com/FriendsOfPHP/Goutte
Who is this for: Goutte is an open source web crawling framework written in PHP; it makes it super easy to extract data from HTML/XML responses using PHP.
Why you should use it: Goutte is a very straightforward, no-frills framework that is considered by many to be the best PHP web scraping library, as it's designed for simplicity, handling the vast majority of HTML/XML use cases without too much additional cruft. It also seamlessly integrates with the excellent Guzzle requests library, which allows you to customize the framework for more advanced use cases.
The open web is by far the greatest global repository of human knowledge; there is almost no information that you can't find through extracting web data. Because web scraping is done by people of all levels of technical ability and know-how, there are many tools available that serve everyone, from people who don't want to write any code to seasoned developers just looking for the best open source solution in their language of choice.
Hopefully, this list of tools has been helpful in letting you take advantage of this information for your own projects and businesses. If you have any web scraping jobs you would like to discuss with us, please contact us here. Happy scraping!