A common task is to track competitors' prices and use that information as a guide to the prices you can charge, or, if you are buying, to spot when a product hits a new lowest price. The purpose of this article is to describe how to web scrape Amazon.
Using Python, Scrapy, MySQL, and Matplotlib you can extract large amounts of data, query it, and produce meaningful visualizations.
In the example featured, we wanted to identify which Amazon books related to “web scraping” had been reduced in price over the time we had been running the spider.
If you want to run your spider daily then see the video for instructions on how to schedule a spider in CRON on a Linux server.
Procedure used for price tracking
query = '''select amzbooks2.*
           from (select amzbooks2.*,
                        lag(price) over (partition by title order by posted) as prev_price
                 from amzbooks2) amzbooks2
           where prev_price <> price'''
Visualize the stored data using Python and Matplotlib
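As a rough sketch of that last step (the database name, credentials, and driver choice here are assumptions; only the amzbooks2 table and the query above come from the project), the changed-price rows can be pulled into pandas and plotted:

```python
# Sketch only: assumes a local MySQL database holding the amzbooks2 table
# (title, price, posted) and the mysql-connector-python driver; the
# credentials below are placeholders.
import mysql.connector
import pandas as pd
import matplotlib.pyplot as plt

conn = mysql.connector.connect(
    host="localhost", user="scraper", password="secret", database="amazon"
)

# The same query as above: rows where a title's price differs from its previous price
query = '''select amzbooks2.* from (select amzbooks2.*,
           lag(price) over (partition by title order by posted) as prev_price
           from amzbooks2) amzbooks2 where prev_price <> price'''

# A raw connection is fine for a quick look (a SQLAlchemy engine is tidier for production)
df = pd.read_sql(query, conn)
conn.close()

# One line per title, showing how its price moved over time
for title, group in df.groupby("title"):
    plt.plot(group["posted"], group["price"], marker="o", label=title[:30])

plt.xlabel("Date scraped")
plt.ylabel("Price")
plt.title("'web scraping' books - price changes")
plt.legend(fontsize=7)
plt.tight_layout()
plt.show()
```

Each title whose price changed gets its own line, so the reductions stand out at a glance.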
The most important thing when starting to scrape is to establish what you want in your final output.
Here are the data points we want to extract: the title, the author(s), the price, and the date the listing was scraped (posted).
Now we can write our parse method, and once done, we can finally add on the “next page” code.
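As a rough sketch (the selectors are illustrative guesses rather than the exact ones used against the live Amazon results page), the parse method plus the "next page" code looks something like this:

```python
# Sketch only: the class names and XPath selectors are illustrative,
# not Amazon's exact markup.
import scrapy

class AmzBooksSpider(scrapy.Spider):
    name = "amzbooks"
    start_urls = ["https://www.amazon.co.uk/s?k=web+scraping"]

    def parse(self, response):
        for book in response.xpath('//div[@data-component-type="s-search-result"]'):
            yield {
                "title": book.xpath('.//h2//span/text()').get(),
                "price": book.xpath('.//span[@class="a-price-whole"]/text()').get(),
            }

        # The "next page" code - follow the pagination link if there is one
        next_page = response.xpath(
            '//a[contains(@class, "s-pagination-next")]/@href'
        ).get()
        if next_page:
            yield response.follow(next_page, callback=self.parse)
```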
The Amazon pages have white space around the author name(s), so this is a good example of when to use 'normalize-space'.
We also had to make sure we weren't splitting the partially parsed response too soon and losing the 2nd author (if there was one).
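For illustration, building on the parse sketch above where book is one search-result selector (the XPath is again a guess at the markup):

```python
# Sketch only: "book" is one search-result selector from the parse() sketch above;
# the XPath is illustrative, not Amazon's exact markup.
raw_authors = book.xpath(
    'normalize-space(.//div[contains(@class, "a-row")])'
).get() or ""                      # e.g. "by Jane Doe and John Smith"

# Split only after the whole string has been cleaned by normalize-space, so a
# 2nd author (if there is one) is kept rather than lost by splitting too soon.
authors = [a.strip() for a in raw_authors.replace("by ", "").split(" and ")]
```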
Some of the results are perhaps not what you want, but this is due to Amazon returning products which it thinks are in some way related to your search criteria!
By using pipelines in Scrapy, along with the process_item method, we were able to filter out much of what was irrelevant. The great thing about web scraping to an SQL database is the flexibility it offers once you have the data. SQL, Pandas, Matplotlib and Python are a powerful combination…
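A minimal sketch of that filtering, with an invented keyword check standing in for whatever relevance test the search actually needs:

```python
# Sketch only: the keyword filter is illustrative of the kind of check used.
from scrapy.exceptions import DropItem

class RelevantBooksPipeline:
    def process_item(self, item, spider):
        title = (item.get("title") or "").lower()
        # Amazon mixes in loosely related products, so drop anything whose
        # title doesn't mention the topic we searched for.
        if "scraping" not in title and "python" not in title:
            raise DropItem(f"Not relevant: {item.get('title')}")
        return item
```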
If you need to get past a login that is proving impossible, usually because the form data keeps changing, you can use Selenium to get through the login screen and then pass the response back into Scrapy.
It may sound like a workaround, and it is, but it’s a good way to get logged in so you can get the content much quicker than if you try and use Selenium to do it all.
Selenium is for testing, but sometimes you can combine Selenium and Scrapy to get the job done!
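A minimal sketch of that combination, assuming placeholder URLs and form field names, is to let Selenium do the login and then hand its cookies over to a normal Scrapy request:

```python
# Sketch only: example.com, the field names, and the credentials are placeholders.
import scrapy
from selenium import webdriver
from selenium.webdriver.common.by import By

class LoginSpider(scrapy.Spider):
    name = "login_spider"

    def start_requests(self):
        # Let Selenium deal with the awkward, ever-changing login form...
        driver = webdriver.Chrome()
        driver.get("https://example.com/login")
        driver.find_element(By.NAME, "username").send_keys("my_user")
        driver.find_element(By.NAME, "password").send_keys("my_pass")
        driver.find_element(By.XPATH, '//button[@type="submit"]').click()

        # ...then pass its session cookies to Scrapy so the crawl itself stays fast.
        cookies = {c["name"]: c["value"] for c in driver.get_cookies()}
        driver.quit()
        yield scrapy.Request(
            "https://example.com/members-area",
            cookies=cookies,
            callback=self.parse,
        )

    def parse(self, response):
        yield {"page_title": response.xpath("//title/text()").get()}
```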
This article describes how to form a Scrapy XPath selector to pick out the hidden value that you may need to POST along with a username and password when scraping a site with a login. These hidden values are dynamically created, so you must send them with your form data in your POST request.
Identify the source in the browser:
This is the xpath selector format you will need to use:
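The general pattern looks something like this; the field name csrf_token, the URLs, and the credentials are placeholders to be adapted to whatever hidden input the target site actually uses:

```python
# Sketch only: "csrf_token" and the URLs are placeholders for the real site's
# hidden input name and login endpoint.
import scrapy

class HiddenTokenLoginSpider(scrapy.Spider):
    name = "hidden_token_login"
    start_urls = ["https://example.com/login"]

    def parse(self, response):
        # Pick out the dynamically generated hidden value from the login form
        token = response.xpath(
            '//form//input[@type="hidden" and @name="csrf_token"]/@value'
        ).get()

        # POST it along with the username and password
        yield scrapy.FormRequest(
            url="https://example.com/login",
            formdata={
                "csrf_token": token,
                "username": "my_user",
                "password": "my_pass",
            },
            callback=self.after_login,
        )

    def after_login(self, response):
        self.logger.info("Logged in, landed on %s", response.url)
```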
The challenge was to scrape a site where the class names for each element were out of order / randomised. So the only way to get the data in the correct sequence was to sort through the CSS styles by left, top, and match to the class names in the divs…
The names were meaningless and there was no way of establishing an order; a quick check in developer tools showed the highlighter rectangle jumping all over the page in no particular order as we traversed the source code.
So we began the code by identifying the CSS and then parsing it.
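A minimal sketch of the idea, assuming the positions are declared in the page's <style> block as rules of the form .abc123 { left: 10px; top: 250px; } (the class names and values here are invented):

```python
# Sketch only: the CSS rule format and class names are assumptions about the page;
# style_text stands in for the raw contents of the scraped <style> block.
import re

style_text = """
.abc123 { left: 10px; top: 250px; }
.zzz999 { left: 310px; top: 250px; }
"""  # placeholder - in practice this comes from the scraped page

# Map each meaningless class name to its (left, top) position,
# e.g. ".abc123 { left: 10px; top: 250px; }" -> {"abc123": (10, 250)}
positions = {}
for name, left, top in re.findall(
    r'\.(\w+)\s*\{[^}]*left:\s*(\d+)px;[^}]*top:\s*(\d+)px', style_text
):
    positions[name] = (int(left), int(top))
```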
There was NO possibility of sequentially looping through the divs to extract ordered data.
After watching the intro to the challenge set on YouTube, and a handy hint from CMK we got to work.
From this article you will learn as much about coding with Python as you will about web scraping specifically. Note that I used requests_html, as it gave me the option to use XPath.
BeautifulSoup could also have been used.
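For reference, a minimal requests_html sketch with an XPath query (the URL is a placeholder):

```python
# Sketch only: the URL is a placeholder for the challenge page.
from requests_html import HTMLSession

session = HTMLSession()
r = session.get("https://example.com/challenge-page")

# requests_html supports XPath queries as well as CSS selectors
style_blocks = r.html.xpath("//style")    # the <style> element(s) holding the positions
divs = r.html.xpath("//div[@class]")      # the divs with the randomised class names

css_text = style_blocks[0].text if style_blocks else ""
```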
Python methods used:
x = round(1466,-2)
print(x) # 1500
x = round(1526,-2)
print(x) # 1500
x = round(1526,-1)
print(x) # 1530
I needed to use round because the only way to identify a "column" of text was to cross-reference the <style> "left px" and "top px" values with the class names used inside the divs. Rounding was required because there was occasionally a 2 or 3 px variation in the "left" value.
Item getter
from operator import itemgetter
I had to sort the values from the parsed css style in order of “left” to identify 3 columns of data, and then “top” to sort the contents of the 3 columns A, B, C.
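A rough sketch of that sorting, assuming a positions mapping of class name to (left, top) like the one built in the CSS-parsing sketch earlier (the values here are placeholders):

```python
# Sketch only: "positions" stands in for the class-name -> (left, top) mapping
# built from the parsed CSS; the values are placeholders.
from operator import itemgetter

positions = {"abc123": (12, 250), "def456": (311, 250), "ghi789": (609, 248)}

# Smooth the occasional 2-3 px jitter in "left" with round()
entries = [(name, round(left, -1), top) for name, (left, top) in positions.items()]

# Sort by "left" first to separate the 3 columns...
lefts = sorted({left for _, left, _ in entries})

# ...then by "top" to put each column's contents in reading order (A, B, C)
columns = []
for col_left in lefts:
    col = sorted((e for e in entries if e[1] == col_left), key=itemgetter(2))
    # each class name can then be looked up in its div to recover the text
    columns.append([name for name, _, _ in col])
```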
zipped = zip(ls_desc,ls_sellp,ls_suggp)
Zipping the 3 lists – read on to find out the issue with doing this…
rows = list(zipped)
So to get the data from the 3 columns, A (ls_desc), B (ls_sellp), and C (ls_suggp), I used zip, but… there were 2 values missing in column C!
A had 77 values,
B had 77 values,
C had 75!
Not only was there no text in 2 of the blanks in column C, there was also NO text or even any CSS.
We only identified this as an issue after running the code – visually the page looked consistent, but the last part of column "C" ends up out of sequence with the data in columns A and B, which are both correct.
The fix: go back and check whether column "C" has a value at the same "top" px value as column "B"; if not, insert an "x" or spacer into column C at that position.
This will need to be rewritten using dictionaries, creating one dictionary per ROW rather than my initial idea of one list per column zipped together!
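A sketch of that dictionary-per-row idea, keyed on the rounded "top" value so a missing cell in column C stays empty instead of shifting later values up (the column names and sample data are invented):

```python
# Sketch only: col_a/col_b/col_c stand in for the three columns, each a list of
# (top_px, text) pairs built from the sorted CSS positions; the data is invented.
col_a = [(100, "Widget"), (130, "Gadget")]
col_b = [(100, "9.99"), (130, "4.99")]
col_c = [(100, "12.99")]            # one value missing - no text, no CSS

rows = {}
for col_name, cells in (("desc", col_a), ("sell_price", col_b), ("sugg_price", col_c)):
    for top, text in cells:
        # One dictionary per row, keyed by the rounded "top" px value, so a gap
        # in column C can't push later values out of sequence.
        rows.setdefault(round(top, -1), {})[col_name] = text

for top in sorted(rows):
    row = rows[top]
    print(row.get("desc", ""), row.get("sell_price", ""), row.get("sugg_price", ""))
```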
Special thanks to "code monkey king" for the idea/challenge!
The task was to scrape over 50,000 records from a website and be gentle on the site being scraped. A Raspberry Pi Zero was chosen to do this as speed was not a significant issue, and in fact, being slower makes it ideal for web scraping when you want to be kind to the site you are scraping and not request resources too quickly. This article describes using Scrapy, but BeautifulSoup or Requests would work in the same way.
The main considerations were:
Could it run Scrapy without issue?
Could it run with a VPN connection?
Would it be able to store the results?
So a quick test proved that it could collect approx 50,000 records per day, which meant it was entirely suitable.
I wanted a VPN tunnel from the Pi Zero to my VPN provider. This was an unknown, because I had only previously run it on a Windows PC with a GUI. Now I was attempting to run it from a headless Raspberry Pi!
This took approx 15 mins to set up. Surprisingly easy.
The only remaining challenges were:
Run the spider without having to leave my PC on as well (closing PuTTY in Windows would have terminated the process on the Pi) – that's where nohup came in handy.
Transfer the output back to a PC (running Ubuntu inside a VM) – this is where rsync was handy (SCP could also have been used).