As a recent data science bootcamp grad, I now feel pretty comfortable tackling most web scraping problems. However, that definitely was not the case a month ago. I remember one of my classmates was scraping everything in sight during our second 3-week unit, while I had barely even looked at it. So, in an attempt to level up, I checked out the documentation and gave it a go. Unlike many open-source libraries for Python, the Selenium documentation was far from perfect, and on top of that, most of the tutorials were dated. Now, many tutorials and attempts later, I feel well equipped to face almost any challenge. To spare you those start-up struggles, I decided to give an overview of how to get started and share the resources that helped me the most!
This will not be an exhaustive overview of every web scraping resource available in Python, but it should be enough to get you started. The two tools I will cover in this post are Beautiful Soup (paired with Requests) and Selenium; the only other major option is Scrapy. First we will go over Beautiful Soup, then Selenium, and at the end we will bring them together. Along the way, I will discuss the pros and cons of each.
Beautiful Soup is a great resource, and more often than not it will be the one you reach for. It is a Python package used to parse HTML and XML. It has some pitfalls when it comes to dynamic websites, which is where Selenium comes in; even so, Beautiful Soup should be the first tool you turn to.
In fact, most of the time when you think Selenium is needed, it isn't: if the data you want is already present in the page source that Requests returns, Beautiful Soup alone will do the job.
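To make that concrete, here is a minimal sketch of the Beautiful Soup workflow. I'm parsing an inline HTML snippet here instead of fetching a live page (in practice you would pass `requests.get(url).text` to `BeautifulSoup`), and the tags and class names are made up for the example:

```python
from bs4 import BeautifulSoup

# A small HTML snippet standing in for a fetched page.
# In a real script: html = requests.get(url).text
html = """
<html>
  <body>
    <h1>Example Page</h1>
    <ul>
      <li class="item">First</li>
      <li class="item">Second</li>
    </ul>
  </body>
</html>
"""

# Parse the document with the built-in html.parser backend
soup = BeautifulSoup(html, "html.parser")

# Grab a single element by tag name
title = soup.find("h1").text

# Grab every <li> with class="item" and pull out the text
items = [li.text for li in soup.find_all("li", class_="item")]

print(title)  # Example Page
print(items)  # ['First', 'Second']
```

The same two calls, `find` and `find_all`, cover the bulk of everyday scraping; you only need something heavier when the content is rendered by JavaScript after the page loads.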