
Comments (4)

hellock commented on August 16, 2024

In my tests, this argument works well. Could you provide more information about the issue, such as your system and Python version, and the code snippet you are using?


devilsnare007 commented on August 16, 2024

I am using a conda environment with Python 3.5 and have already changed the default to file_idx_offset='auto' in downloader.py. I am trying to save all images to the same directory using the different built-in crawlers (currently Google and Bing). However, Bing seems to overwrite the files saved by Google, as the file name index always starts at 000001 no matter whether I set the value to 0 or 'auto', or even comment out def max_file_idx(self) in filesystem.py.

Here is the code snippet:

from icrawler.builtin import GoogleImageCrawler, BingImageCrawler
import os
import sys
import tensorflow as tf
import subprocess
import logging

# retrieve the first argument as the label to investigate
label = sys.argv[1]
directory = "D:/Tensorflow_Final/photos/trainingdata" + "/" + label

if not os.path.exists(directory):
    # the label directory does not exist yet, so create it
    os.mkdir(directory)

    google_crawler = GoogleImageCrawler(downloader_threads=10,
                                        storage={'root_dir': directory},
                                        log_level=logging.INFO)
    google_crawler.crawl(keyword=label, max_num=10)

    bing_crawler = BingImageCrawler(storage={'root_dir': directory},
                                    log_level=logging.DEBUG)
    bing_crawler.crawl(keyword=label, max_num=10)

Here is the cmd output:

(tfproj1) D:\Tensorflow_Final>python Test.py horse
2017-11-19 09:37:32,792 - INFO - icrawler.crawler - start crawling...
2017-11-19 09:37:32,792 - INFO - icrawler.crawler - starting 1 feeder threads...
2017-11-19 09:37:32,793 - INFO - feeder - thread feeder-001 exit
2017-11-19 09:37:32,793 - INFO - icrawler.crawler - starting 1 parser threads...
2017-11-19 09:37:32,794 - INFO - icrawler.crawler - starting 10 downloader threads...
2017-11-19 09:37:35,534 - INFO - parser - parsing result page https://www.google.com/search?q=horse&tbs=cdr%3A1%2Ccd_min%3A%2Ccd_max%3A%2Csur%3A&lr=&start=0&ijn=0&tbm=isch
2017-11-19 09:37:36,156 - INFO - downloader - image #1  http://cdn.thehorse.com/images/cms/2017/01/brown-horse-rearing-in-snow.jpg?preset=medium
2017-11-19 09:37:36,615 - INFO - downloader - image #2  http://cdn.thehorse.com/images/cms/2017/07/lean-performance-horse.jpg?preset=medium
2017-11-19 09:37:36,674 - INFO - downloader - image #3  http://www.merckvetmanual.com/-/media/manual/veterinary/section-images/horse_section.jpg
2017-11-19 09:37:36,748 - INFO - downloader - image #4  https://i.pinimg.com/736x/24/d7/ec/24d7ec8bb462a9568a900a22ca57aba4--horses-beautiful-wild-pretty-horses.jpg
2017-11-19 09:37:37,071 - INFO - downloader - image #5  https://images2.onionstatic.com/clickhole/3567/2/original/600.jpg
2017-11-19 09:37:37,171 - INFO - downloader - image #6  https://images2.onionstatic.com/clickhole/3564/7/original/600.jpg
2017-11-19 09:37:37,618 - INFO - downloader - image #7  https://upload.wikimedia.org/wikipedia/commons/d/de/Nokota_Horses_cropped.jpg
2017-11-19 09:37:37,987 - INFO - downloader - image #8  https://i.ytimg.com/vi/y1U1Eqfdg7w/maxresdefault.jpg
2017-11-19 09:37:38,110 - INFO - downloader - image #9  https://images2.onionstatic.com/clickhole/3567/5/original/600.jpg
2017-11-19 09:37:38,595 - INFO - downloader - image #10 http://ashs.com.au/images/New_Buttons-2017-03-24/StudBook3.png
2017-11-19 09:37:39,551 - INFO - downloader - downloaded images reach max num, thread downloader-002 is ready to exit
2017-11-19 09:37:39,552 - INFO - downloader - thread downloader-002 exit
2017-11-19 09:37:40,103 - INFO - downloader - downloaded images reach max num, thread downloader-005 is ready to exit
2017-11-19 09:37:40,103 - INFO - downloader - thread downloader-005 exit
2017-11-19 09:37:40,238 - INFO - downloader - downloaded images reach max num, thread downloader-008 is ready to exit
2017-11-19 09:37:40,239 - INFO - downloader - thread downloader-008 exit
2017-11-19 09:37:40,600 - INFO - parser - downloaded image reached max num, thread parser-001 is ready to exit
2017-11-19 09:37:40,600 - INFO - parser - thread parser-001 exit
2017-11-19 09:37:40,925 - INFO - downloader - downloaded images reach max num, thread downloader-003 is ready to exit
2017-11-19 09:37:40,925 - INFO - downloader - thread downloader-003 exit
2017-11-19 09:37:41,978 - INFO - downloader - downloaded images reach max num, thread downloader-009 is ready to exit
2017-11-19 09:37:41,978 - INFO - downloader - thread downloader-009 exit
2017-11-19 09:37:42,042 - INFO - downloader - downloaded images reach max num, thread downloader-010 is ready to exit
2017-11-19 09:37:42,042 - INFO - downloader - thread downloader-010 exit
2017-11-19 09:37:42,149 - INFO - downloader - downloaded images reach max num, thread downloader-006 is ready to exit
2017-11-19 09:37:42,150 - INFO - downloader - thread downloader-006 exit
2017-11-19 09:37:42,368 - INFO - downloader - downloaded images reach max num, thread downloader-007 is ready to exit
2017-11-19 09:37:42,369 - INFO - downloader - thread downloader-007 exit
2017-11-19 09:37:46,884 - INFO - downloader - downloaded images reach max num, thread downloader-004 is ready to exit
2017-11-19 09:37:46,884 - INFO - downloader - thread downloader-004 exit
2017-11-19 09:37:48,034 - INFO - downloader - downloaded images reach max num, thread downloader-001 is ready to exit
2017-11-19 09:37:48,034 - INFO - downloader - thread downloader-001 exit
2017-11-19 09:37:48,808 - INFO - icrawler.crawler - Crawling task done!
2017-11-19 09:37:48,808 - INFO - icrawler.crawler - start crawling...
2017-11-19 09:37:48,809 - INFO - icrawler.crawler - starting 1 feeder threads...
2017-11-19 09:37:48,810 - INFO - feeder - thread feeder-001 exit
2017-11-19 09:37:48,810 - INFO - icrawler.crawler - starting 1 parser threads...
2017-11-19 09:37:48,812 - INFO - icrawler.crawler - starting 1 downloader threads...
2017-11-19 09:37:49,470 - INFO - parser - parsing result page http://www.bing.com/images/search?q=horse&count=35&first=0
2017-11-19 09:37:53,382 - INFO - downloader - image #1  http://www.roberthague.com/sculpture/gallery/images/selene_horse_hague_800.jpg
2017-11-19 09:37:53,762 - INFO - downloader - image #2  https://crawler-cache-jellolabs-com.imgix.net/OjKfzP8aIOhwcsv6/source_photo.jpg
2017-11-19 09:37:54,914 - INFO - downloader - image #3  http://www.picsandgigglesphotography.com/img/s2/v53/p1873638982-4.jpg
2017-11-19 09:37:56,899 - INFO - downloader - image #4  http://www.naturalfishsa.com/en/prod/assets/jurel.jpg
2017-11-19 09:37:57,304 - INFO - downloader - image #5  https://crawler-cache-jellolabs-com.imgix.net/xETHdiBafG1mB9-H/source_photo.jpg
2017-11-19 09:37:57,774 - INFO - parser - no more page urls for thread parser-001 to parse
2017-11-19 09:37:57,774 - INFO - parser - thread parser-001 exit
2017-11-19 09:37:58,515 - INFO - downloader - image #6  http://www.xn--grnhorse-75a.dk/grünhorse/Camilla_West_files/10517431_10152606921667682_2998534746776919831_o.jpg
2017-11-19 09:37:59,204 - INFO - downloader - image #7  http://funnylookinghorse.com/wp-content/uploads/2013/05/Razor-Blade-480x640.jpg
2017-11-19 09:37:59,682 - INFO - downloader - image #8  http://farm9.staticflickr.com/8270/8709119082_edf1a56e04_z.jpg
2017-11-19 09:38:10,862 - ERROR - downloader - Exception caught when downloading file http://www.chinesefilms.cn/mmsource/images/2016/07/04/d84f9a159944497390345334b4291128.jpg, error: HTTPConnectionPool(host='www.chinesefilms.cn', port=80): Max retries exceeded with url: /mmsource/images/2016/07/04/d84f9a159944497390345334b4291128.jpg (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x000001FA3FD21940>: Failed to establish a new connection: [Errno 11002] getaddrinfo failed',)), remaining retry times: 2
2017-11-19 09:38:22,025 - ERROR - downloader - Exception caught when downloading file http://www.chinesefilms.cn/mmsource/images/2016/07/04/d84f9a159944497390345334b4291128.jpg, error: HTTPConnectionPool(host='www.chinesefilms.cn', port=80): Max retries exceeded with url: /mmsource/images/2016/07/04/d84f9a159944497390345334b4291128.jpg (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x000001FA41150D68>: Failed to establish a new connection: [Errno 11002] getaddrinfo failed',)), remaining retry times: 1
2017-11-19 09:38:33,190 - ERROR - downloader - Exception caught when downloading file http://www.chinesefilms.cn/mmsource/images/2016/07/04/d84f9a159944497390345334b4291128.jpg, error: HTTPConnectionPool(host='www.chinesefilms.cn', port=80): Max retries exceeded with url: /mmsource/images/2016/07/04/d84f9a159944497390345334b4291128.jpg (Caused by NewConnectionError('<urllib3.connection.HTTPConnection object at 0x000001FA3FDAF2E8>: Failed to establish a new connection: [Errno 11002] getaddrinfo failed',)), remaining retry times: 0
2017-11-19 09:38:38,191 - INFO - downloader - no more download task for thread downloader-001
2017-11-19 09:38:38,194 - INFO - downloader - thread downloader-001 exit
2017-11-19 09:38:38,845 - INFO - icrawler.crawler - Crawling task done!


hellock commented on August 16, 2024

You can specify the file_idx_offset argument in the crawl method; there is no need to modify the source code anywhere. The following example should work fine.

google_crawler = GoogleImageCrawler(downloader_threads=10,
                                    storage={'root_dir': directory},
                                    log_level=logging.INFO)
google_crawler.crawl(keyword=label, max_num=10, file_idx_offset='auto')

bing_crawler = BingImageCrawler(storage={'root_dir': directory},
                                log_level=logging.DEBUG)
bing_crawler.crawl(keyword=label, max_num=10, file_idx_offset='auto')
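
If you prefer to control the numbering yourself, crawl also accepts a plain integer for file_idx_offset. Below is a minimal sketch of that approach; the directory path and keyword are placeholders for illustration, and counting files with os.listdir is only an approximation of what 'auto' does.

import logging
import os

from icrawler.builtin import BingImageCrawler, GoogleImageCrawler

directory = "photos/horse"  # placeholder path for illustration
os.makedirs(directory, exist_ok=True)

google_crawler = GoogleImageCrawler(downloader_threads=10,
                                    storage={'root_dir': directory},
                                    log_level=logging.INFO)
google_crawler.crawl(keyword='horse', max_num=10)

# Start Bing's file numbering after the images Google already saved,
# so they are written as 000011.jpg onwards instead of overwriting
# 000001.jpg onwards. file_idx_offset='auto' performs an equivalent
# lookup of the existing indices for you.
offset = len(os.listdir(directory))

bing_crawler = BingImageCrawler(storage={'root_dir': directory},
                                log_level=logging.DEBUG)
bing_crawler.crawl(keyword='horse', max_num=10, file_idx_offset=offset)

In most cases 'auto' is the simpler choice, since it inspects the existing file names and continues from the highest index without any extra code.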


devilsnare007 commented on August 16, 2024

Thanks Kai, that worked!! I thought it had to be changed in the source file.

