I'm looking for a person who can download all records from a website, parse them, and store them in a MySQL database. Since the website is updated quite often, the project should ideally be delivered in a form I can run here once a week to get all records re-inserted into my local database.
Here is an example: [url removed, login to view]
On this page you find all the information about this fighter: his height, birthday, weight, and most importantly, his previous opponents and the results of his fights. By following the links to his opponents' pages, we get the same information about them. There is no need to download information about the events, only the fighters, so you can restrict your downloads to the "/fighter" subdirectory.
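The crawl described above can be sketched as a breadth-first walk that follows only links under the "/fighter" subdirectory. Since the actual URL was removed from the post, the base address, link markup, and page limit below are placeholder assumptions, not the real site:

```python
# Hypothetical crawl sketch: BFS over fighter pages, restricted to the
# "/fighter" subdirectory as the job post requests. BASE is a placeholder
# for the removed site URL; link extraction is illustrative only.
from collections import deque
from urllib.parse import urljoin
import re

BASE = "http://example.com"  # placeholder; the real URL was removed

def extract_fighter_links(html, base=BASE):
    """Return absolute URLs of links that point into /fighter."""
    hrefs = re.findall(r'href="([^"]+)"', html)
    return [urljoin(base, h) for h in hrefs
            if urljoin(base, h).startswith(base + "/fighter")]

def crawl(start_url, fetch, limit=100000):
    """BFS from one fighter page. `fetch` is a callable url -> html,
    so the crawler can be exercised without network access."""
    seen, queue, pages = set(), deque([start_url]), []
    while queue and len(pages) < limit:
        url = queue.popleft()
        if url in seen:
            continue
        seen.add(url)
        html = fetch(url)
        pages.append((url, html))
        for link in extract_fighter_links(html):
            if link not in seen:
                queue.append(link)
    return pages

# Usage with a fake in-memory "site" standing in for HTTP fetches:
site = {
    BASE + "/fighter/a": '<a href="/fighter/b">B</a> <a href="/events/x">E</a>',
    BASE + "/fighter/b": '<a href="/fighter/a">A</a>',
}
pages = crawl(BASE + "/fighter/a", lambda u: site[u])
```

Taking `fetch` as a parameter keeps the traversal logic separate from HTTP details (retries, rate limiting), which matters for a weekly re-run against ~100,000 pages.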
The database might be quite large (~100,000 records or more), so the initial download might take a while.
Preferred languages are PHP, Perl, Bash, Python, or any combination of those. The database of choice is MySQL.
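For the parse-and-store step, a minimal Python sketch is below. The real page markup and database schema are unknown (the URL was removed), so the sample HTML, field names, and the `fighters` table are assumptions; the upsert form matters because a weekly re-run should refresh existing rows rather than duplicate them:

```python
# Hypothetical sketch: parse a fighter bio page and build a MySQL upsert.
# The sample HTML and field names are invented for illustration; a real
# job would use an HTML parser (lxml, BeautifulSoup) rather than regexes.
import re

SAMPLE_HTML = """
<div class="bio">
  <h1>John Doe</h1>
  <span class="height">5'11"</span>
  <span class="weight">155 lbs</span>
  <span class="birthday">1985-03-12</span>
</div>
"""

def parse_fighter(html):
    """Extract basic bio fields from one fighter page."""
    def grab(pattern):
        m = re.search(pattern, html)
        return m.group(1) if m else None
    return {
        "name": grab(r"<h1>(.*?)</h1>"),
        "height": grab(r'class="height">(.*?)<'),
        "weight": grab(r'class="weight">(.*?)<'),
        "birthday": grab(r'class="birthday">(.*?)<'),
    }

def upsert_sql(fighter):
    """Build a parameterized MySQL upsert (assumes a UNIQUE key on name),
    so the weekly run updates rows instead of inserting duplicates."""
    sql = ("INSERT INTO fighters (name, height, weight, birthday) "
           "VALUES (%s, %s, %s, %s) "
           "ON DUPLICATE KEY UPDATE height=VALUES(height), "
           "weight=VALUES(weight), birthday=VALUES(birthday)")
    params = (fighter["name"], fighter["height"],
              fighter["weight"], fighter["birthday"])
    return sql, params

fighter = parse_fighter(SAMPLE_HTML)
```

The `sql, params` pair would be passed to a MySQL driver's `cursor.execute()`; parameterized queries also avoid SQL injection from scraped text.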
Looking forward to seeing your bids!
33 freelancers are bidding on average ¥16321 for this job
Hello Sir/Madam, I am an expert in web scraping (I rank #1 for Web Scraping [url removed, login to view]). I have done many similar jobs and am ready to start. Thanks, Alex
Hi, we are experienced Perl and PHP programmers and have completed similar spidering projects successfully in Perl. Looking forward to hearing from you. Thanks
Dear potential employer, Perl/Web/MySQL professional here. Please accept this bid to have your task completed with the best professional quality, yet relatively cheaply. Looking forward to your reply.
Hi, my name is Maciek and I'm from Siedlce, Poland. I have been a Linux and MySQL administrator since 2005, and I am the network administrator at "DOMTEL" [url removed, login to view], a network provider running on Linux. I work with many Linux …