Capture your start URLs in your output with Scrapy response.meta
Every web scraping project has aspects that are different or interesting and worth remembering for future use.
This is a look at a recent real-world project, focusing on saving more than one start URL in the output.
This assumes basic knowledge of web scraping and of identifying selectors. See my other videos if you would like to learn more about selectors (XPath & CSS).
We want to fill all of the columns in our client’s master Excel sheet.
We could then provide them with a CSV which they can import and use as they wish.
We want 1,500+ properties, so we will be using Scrapy and Python.
One of the required fields requires us to pass the particular start url all the way through to the CSV (use response.meta)
Some of the required values are inside text and will require parsing with re (use regular expressions).
We don’t care about being fast, so edit “settings.py” with conservative values for concurrent connections and download delay (see the sketch below).
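As an illustration, conservative “settings.py” values might look something like this (the exact numbers are assumptions for the sketch, not the project’s actual values):

CONCURRENT_REQUESTS = 1
CONCURRENT_REQUESTS_PER_DOMAIN = 1
DOWNLOAD_DELAY = 3            # seconds to wait between requests
AUTOTHROTTLE_ENABLED = True   # back off further if the site responds slowly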
This is a German website, so I will use the Google Chrome browser and translate to English.
We will use Scrapy’s Request.meta attribute to achieve the following:
Capture whichever of the multiple start_urls is used – pass it all the way through to the output CSV.
Create a “meta” dictionary in the initial Request in start_requests
“surl” represents each of our start URLs.
(We have two: one for the ‘rent’ URL and one for the ‘buy’ URL; we could have many more if required. A sketch follows below.)
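Here is a minimal sketch of the approach described above, assuming hypothetical ‘rent’ and ‘buy’ listing URLs; the real project URLs and output fields will differ:

import scrapy

class PropertySpider(scrapy.Spider):
    name = "properties"
    start_urls = [
        "https://example.de/mieten",   # 'rent' start URL (placeholder)
        "https://example.de/kaufen",   # 'buy' start URL (placeholder)
    ]

    def start_requests(self):
        for surl in self.start_urls:
            # tag each request with the start url it came from
            yield scrapy.Request(surl, callback=self.parse, meta={"surl": surl})

    def parse(self, response):
        yield {
            "start_url": response.meta["surl"],   # carried all the way through to the CSV
            "url": response.url,
        }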
A common task is to track competitors’ prices and use that information as a guide to the prices you can charge; or, if you are buying, you can spot when a product is at a new lowest price. The purpose of this article is to describe how to web scrape Amazon.
Using Python, Scrapy, MySQL, and Matplotlib you can extract large amounts of data, query it, and produce meaningful visualizations.
In the example featured, we wanted to identify which Amazon books related to “web scraping” had been reduced in price over the time we had been running the spider.
If you want to run your spider daily then see the video for instructions on how to schedule a spider in CRON on a Linux server.
Procedure used for price tracking
query = '''
    select amzbooks2.*
    from (
        select amzbooks2.*,
               lag(price) over (partition by title order by posted) as prev_price
        from amzbooks2
    ) amzbooks2
    where prev_price <> price
'''
Visualize the stored data using Python and Matplotlib
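As a rough sketch of that step, assuming the amzbooks2 table used in the query above (title, price, posted) and placeholder MySQL connection details:

import pandas as pd
import matplotlib.pyplot as plt
from sqlalchemy import create_engine

engine = create_engine("mysql+pymysql://user:password@localhost/scrapydb")  # placeholder credentials
df = pd.read_sql("SELECT title, price, posted FROM amzbooks2", engine)

# plot a price history line for each book whose price has changed
for title, group in df.groupby("title"):
    group = group.sort_values("posted")
    if group["price"].nunique() > 1:
        plt.plot(group["posted"], group["price"], label=title[:40])

plt.xlabel("Date")
plt.ylabel("Price")
plt.legend()
plt.show()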
The most important thing when starting to scrape is to establish what you want in your final output.
Here are the data points we want to extract:
Now we can write our parse method, and once done, we can finally add on the “next page” code.
The Amazon pages have white space around the Author name(s), so this is a good example of when to use ‘normalize-space’.
We also had to make sure we weren’t splitting the partially parsed response too soon and removing the 2nd Author (if there was one).
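For example, inside the parse loop, normalize-space() can wrap the selector so the surrounding white space is stripped; the selectors below are placeholders, not Amazon’s exact markup:

for book in response.xpath('//div[contains(@class, "s-result-item")]'):   # placeholder result selector
    # normalize-space() collapses the white space around the author text
    author = book.xpath('normalize-space(.//span[contains(@class, "author")])').get()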
Some of the results are perhaps not what you want, but this is due to Amazon returning products which it thinks are in some way related to your search criteria!
By using pipelines in Scrapy, along with the process_item method we were able to filter much of what was irrelevant. The great thing about web scraping to an SQL database is the flexibility it offers once you have the data. SQL, Pandas, Matplotlib and Python are a powerful combination…
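To make that filtering concrete, here is a minimal sketch of the kind of process_item pipeline meant above; the item field name and keyword are assumptions for illustration:

from scrapy.exceptions import DropItem

class RelevanceFilterPipeline:
    def process_item(self, item, spider):
        # drop results Amazon returned that are not actually about web scraping
        if "scraping" not in item.get("title", "").lower():
            raise DropItem(f"Not relevant: {item.get('title')}")
        return item

The pipeline is then enabled via ITEM_PIPELINES in settings.py.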
If a login is proving impossible to get past, usually because the form data keeps changing, you can use Selenium to get through the login screen and then pass the response back into Scrapy.
It may sound like a workaround, and it is, but it’s a good way to get logged in so you can get the content much quicker than if you try and use Selenium to do it all.
Selenium is for testing, but sometimes you can combine Selenium and Scrapy to get the job done!
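Here is a rough sketch of that hand-off, logging in with Selenium and reusing the session cookies in Scrapy; the URLs, field names and credentials are placeholders, not taken from any real project:

import scrapy
from selenium import webdriver
from selenium.webdriver.common.by import By

class LoginSpider(scrapy.Spider):
    name = "login_example"

    def start_requests(self):
        driver = webdriver.Chrome()
        driver.get("https://example.com/login")                      # placeholder URL
        driver.find_element(By.NAME, "username").send_keys("user")   # placeholder credentials
        driver.find_element(By.NAME, "password").send_keys("pass")
        driver.find_element(By.XPATH, "//button[@type='submit']").click()
        cookies = {c["name"]: c["value"] for c in driver.get_cookies()}
        driver.quit()
        # hand the authenticated session over to Scrapy, which does the fast crawling
        yield scrapy.Request("https://example.com/members", cookies=cookies, callback=self.parse)

    def parse(self, response):
        pass  # scrape the logged-in pages here as normal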
The task was to scrape over 50,000 records from a website and be gentle on the site being scraped. A Raspberry Pi Zero was chosen to do this as speed was not a significant issue, and in fact, being slower makes it ideal for web scraping when you want to be kind to the site you are scraping and not request resources too quickly. This article describes using Scrapy, but BeautifulSoup or Requests would work in the same way.
The main considerations were:
Could it run Scrapy without issue?
Could it run with a VPN connection?
Would it be able to store the results?
A quick, short test proved that it could collect approx. 50,000 records per day, which meant it was entirely suitable.
I wanted a VPN tunnel from the Pi Zero to my VPN provider. This was an unknown, because I had only previously run it on a Windows PC with a GUI. Now I was attempting to run it from a headless Raspberry Pi!
This took approx 15 mins to set up. Surprisingly easy.
The only remaining challenges were:
Run the spider without having to leave my PC on as well (closing PuTTY in Windows would have terminated the process on the Pi) – that’s where nohup came in handy.
Transfer the output back to a PC (running Ubuntu inside a VM) – this is where rsync was handy (SCP could also have been used). Example commands for both follow below.
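For illustration only; the spider name, host and paths are placeholders:

# keep the spider running on the Pi after the PuTTY session is closed
nohup scrapy crawl myspider -o results.csv &
# later, pull the output back to the PC (scp would also work)
rsync -avz pi@192.168.1.10:/home/pi/results.csv ~/scrapes/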
As an overview of how web scraping works, here is a brief introduction to the process, with the emphasis on using Scrapy to scrape a listing site.
If you would like to know more, or would like us to scrape data from a specific site please get in touch.
*This article also assumes you have some knowledge of Python, and have Scrapy installed. It is recommended to use a virtual environment. Although the focus is on using Scrapy, similar logic will apply if you are using Beautiful Soup, or Requests. (Beautiful Soup does some of the hard work for you with find, and select).
Below is a basic representation of the process used to scrape a page / site
Identifying the div and class name
Using your web browser developer tools, traverse up through the elements (Chrome = Inspect Elements) until you find a ‘div’ (well, it’s usually a div) that contains the entire advert, and go no higher up the DOM.
(advert = typically: the thumbnail + mini description that leads to the detail page)
The HTML inside the ‘div’ will be the iterable that you use with the for loop. The “.” before the “//” in the XPath makes the selector relative to the current advert, so you can work through all of them, e.g. all 20 on a listings page that has 20 adverts per page.
Now that you have the XPath and have checked it in Scrapy shell, you can use it with a for loop and selectors for each piece of information you want to pick out. If you are using XPath, you can use the class name from the listing and just add “.” to the start, as highlighted below.
This “.” ensures you will be able to iterate through all 20 adverts at the same node level (i.e. all 20 on the page).
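Putting that together, the loop typically looks something like this inside the parse method; the ‘advert’ class name and inner selectors are placeholders for the ones you identify with the developer tools:

def parse(self, response):
    # one selector per advert on the listings page
    for advert in response.xpath('//div[contains(@class, "advert")]'):
        # the leading "." keeps each selector relative to this advert only
        title = advert.xpath('.//h2/a/text()').get()
        price = advert.xpath('normalize-space(.//span[contains(@class, "price")])').get()
        yield {"title": title, "price": price}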
To go to the details page we use “yield”, but we also have to pass the variables that we have picked out on the main page. So we use ‘meta’ (or the newer version, ‘cb_kwargs’).
Using ‘meta’ allows us to pass variables to the next function – in this case it’s called “fetch_details” – where they will be added to the rest of the variables collected and sent to the FEEDS export which makes the output file.
There is also a newer, recommended version of “meta” to pass variables between functions in Scrapy: “cb_kwargs”
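Here is a minimal sketch of both mechanisms, using the “fetch_details” callback mentioned above; the selectors and field names are placeholders:

def parse(self, response):
    for advert in response.xpath('//div[contains(@class, "advert")]'):
        title = advert.xpath('.//h2/a/text()').get()
        detail_url = advert.xpath('.//h2/a/@href').get()
        yield response.follow(
            detail_url,
            callback=self.fetch_details,
            cb_kwargs={"title": title},   # newer, recommended way to pass variables
            # meta={"title": title},      # the older 'meta' equivalent
        )

def fetch_details(self, response, title):
    # 'title' arrives here as a keyword argument thanks to cb_kwargs
    yield {
        "title": title,
        "description": response.xpath('normalize-space(//div[@class="description"])').get(),
    }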
Once you have all the data, it is time to use “yield” to send it to the FEEDS export.
This is the format and destination that you have set for your output file.
*Note: it can also be a database, rather than a JSON or CSV file.
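In recent Scrapy versions the FEEDS setting can be defined in settings.py (or in the spider’s custom_settings); the file name and format below are just examples:

FEEDS = {
    "output/listings.csv": {"format": "csv", "overwrite": True},
}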
You may wish to run all of your code from within the script, in which case you can do this:
from scrapy.crawler import CrawlerProcess   # this import goes at the start of the script

# main driver
if __name__ == "__main__":
    process = CrawlerProcess()
    process.crawl(MySpider)   # 'MySpider' is a placeholder for your own spider class
    process.start()
Web Scraping – Summary
We have looked at the steps involved and some of the code you’ll find useful when using Scrapy.
Identifying the HTML to iterate through is the key.
Try and find the block of code that has all of the listings / adverts, and then narrow it down to one advert/listing. Once you have done that you can test your code in “scrapy shell” and start building your spider.
(Scrapy shell can be run from your CLI, independent of your spider code):
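For example (the URL is a placeholder):

scrapy shell "https://example.com/listings"
>>> response.xpath('//div[contains(@class, "advert")]').getall()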