
Comments (5)

SoheilKhodayari avatar SoheilKhodayari commented on June 17, 2024

Hi, thanks for your report!

Yes, as documented here in the README, the current version of the code requires that you provide your own settings file. You can find an example here. This file specifies the testbed, e.g., which websites the tool should test.
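For illustration only, a hypothetical sketch of what such a settings file might contain; the actual schema is in the linked example file, and every key and value below is an assumption, not the tool's real format:

```python
# Hypothetical settings sketch -- the real schema is in the repo's example file.
# All keys and values here are assumptions for illustration only.
SITES = {
    "1": {                                  # numeric site id used by the automator
        "url": "https://example.com",       # website under test
        "states_script": "FreeNPremium",    # state script that loads browser states
    },
}
```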

from basta-cosi.

Minghao1207 avatar Minghao1207 commented on June 17, 2024

Thanks a lot! The problem was solved after providing the settings file :D Here are several more problems I ran into afterwards:

According to 'Step 3: Selenium Webdrivers', the exact paths of the drivers need to be set in the function get_new_browser_driver in automator/main.py. My operating system is macOS, so I set the Chrome driver path like this:

chromedriver = '/usr/local/bin/chromedriver'

The Firefox driver is originally set up in:

driver = webdriver.Firefox(capabilities=firefox_capabilities)

Is there a need to change this and pass the exact path of geckodriver to the function? Also, which driver does the project use to drive Edge? I only see chromedriver and geckodriver in the README.

And when running 'python crawler_and_avi.py get_cosi_attacks' in the automator folder, it gives me this error:

Traceback (most recent call last):
  File "crawler_and_avi.py", line 916, in <module>
    main_crawl_url_response_headers(siteId, chunk_size=3)
  File "crawler_and_avi.py", line 601, in main_crawl_url_response_headers
    stateModule = __import__("%s.Scripts.%s"%(siteId, STATES_SCRIPT_FILE), fromlist=["states"])
ImportError: No module named 23.Scripts.FreeNPremium

Do I also need to provide FreeNPremium.py in 'automator\1\Scripts'? Is there an example of this script?

Thanks again for your time! I really appreciate your help :D


SoheilKhodayari avatar SoheilKhodayari commented on June 17, 2024

Hi, you have to set the correct path for the browser driver you want to use. You may need to change the function according to your OS/environment to make it work. Please refer to the Selenium documentation for details.

For Edge, please check here. You can download an appropriate driver from here and pass the executable path to webdriver.Edge(executable_path=executable_path).

Regarding your second issue, please note that this tool takes so-called state scripts as input. These are scripts that load a particular state inside the browser before testing, e.g., logged in as a normal user, as an admin, or as a premium user. As a tester, you can specify which states you want the tool to check for XS-Leaks.

You can name your state script file as you wish. An example is here. You should specify which state script the tool should use in the config.

"Do I also need to provide FreeNPremium.py in 'automator\1\Scripts'? Is there an example of this script?"

The code is asking for a state script called FreeNPremium.py because it has been specified in the config. Assuming you want to test for XS-Leaks between free and premium accounts, you need to create a script with two functions that log the browser in to those accounts, respectively.
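A hypothetical sketch of such a FreeNPremium.py: the function names, the selectors, the URLs, and the credentials are all placeholders (assumptions), not the project's real interface; see the linked example for the actual conventions.

```python
# FreeNPremium.py -- hypothetical state script sketch.
# All selectors, URLs, credentials, and function names below are placeholders.

def states():
    # Names of the browser states this script can load.
    return ["free", "premium"]

def _login(driver, email, password):
    """Fill in and submit a generic login form via the given WebDriver."""
    driver.get("https://example.com/login")
    driver.find_element_by_name("email").send_keys(email)
    driver.find_element_by_name("password").send_keys(password)
    driver.find_element_by_name("submit").click()

def login_free(driver):
    """Load the 'logged in as a free user' state into the browser."""
    _login(driver, "free-user@example.com", "free-password")

def login_premium(driver):
    """Load the 'logged in as a premium user' state into the browser."""
    _login(driver, "premium-user@example.com", "premium-password")
```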


Minghao1207 avatar Minghao1207 commented on June 17, 2024

Thanks so much for your help! I can successfully run "crawler_and_avi.py" now. But when I run crawler_find_urls.py, I get this error:

++ URL Crawling Started!
Traceback (most recent call last):
  File "crawler_find_urls.py", line 240, in <module>
    URLs = get_urls(siteId)
  File "crawler_find_urls.py", line 204, in get_urls
    currentAccountURLs = find_urls(siteId, state, spider_duration_mins=spider_duration_mins, ajax_duration_mins=ajax_duration_mins, Local=Local, IP=IP, CrawlFlag=CrawlFlag)
  File "crawler_find_urls.py", line 30, in find_urls
    psl = public_suffix_list(http=httplib2.Http(cache_dir), headers={'cache-control': 'max-age=%d' % (90000000*60*24)})
  File "/Users/tmh/Desktop/Basta-COSI-master/automator/publicsuffix.py", line 255, in public_suffix_list
    _response, content = http.request(url, headers=headers)
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/httplib2/__init__.py", line 1694, in request
    (response, content) = self._request(conn, authority, uri, request_uri, method, body, headers, redirections, cachekey)
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/httplib2/__init__.py", line 1434, in _request
    (response, content) = self._conn_request(conn, request_uri, method, body, headers)
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/httplib2/__init__.py", line 1360, in _conn_request
    raise ServerNotFoundError("Unable to find the server at %s" % conn.host)
httplib2.ServerNotFoundError: Unable to find the server at mxr.mozilla.org

I googled this error and found out that it occurs because the mxr source code is no longer available online, so I changed EFFECTIVE_TLD_NAMES in publicsuffix.py from
'http://mxr.mozilla.org/mozilla-central/source/netwerk/dns/effective_tld_names.dat?raw=1'
to
'https://chromium.googlesource.com/chromium/src/+/master/net/base/registry_controlled_domains/effective_tld_names.dat'
Afterwards I got this error:

++ URL Crawling Started!
Traceback (most recent call last):
  File "crawler_find_urls.py", line 240, in <module>
    URLs = get_urls(siteId)
  File "crawler_find_urls.py", line 204, in get_urls
    currentAccountURLs = find_urls(siteId, state, spider_duration_mins=spider_duration_mins, ajax_duration_mins=ajax_duration_mins, Local=Local, IP=IP, CrawlFlag=CrawlFlag)
  File "crawler_find_urls.py", line 30, in find_urls
    psl = public_suffix_list(http=httplib2.Http(cache_dir), headers={'cache-control': 'max-age=%d' % (90000000*60*24)})
  File "/Users/tmh/Desktop/Basta-COSI-master/automator/publicsuffix.py", line 255, in public_suffix_list
    _response, content = http.request(url, headers=headers)
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/httplib2/__init__.py", line 1694, in request
    (response, content) = self._request(conn, authority, uri, request_uri, method, body, headers, redirections, cachekey)
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/httplib2/__init__.py", line 1434, in _request
    (response, content) = self._conn_request(conn, request_uri, method, body, headers)
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/httplib2/__init__.py", line 1390, in _conn_request
    response = conn.getresponse()
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/httplib.py", line 1061, in getresponse
    raise ResponseNotReady()
httplib.ResponseNotReady

Running main.py gives me a similar error, so I guess I didn't use the right URL for EFFECTIVE_TLD_NAMES. Which URL should I use now that mxr.mozilla.org is no longer available?

Thanks again for all the help you have offered!


SoheilKhodayari avatar SoheilKhodayari commented on June 17, 2024

How about using https://publicsuffix.org/list/effective_tld_names.dat? Does that solve your problem?
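In other words, the fix would be a one-line change in automator/publicsuffix.py, swapping the dead URL for the maintained Public Suffix List (EFFECTIVE_TLD_NAMES is the constant you already found):

```python
# automator/publicsuffix.py: replace the dead mxr.mozilla.org URL with the
# canonical, maintained Public Suffix List endpoint.
EFFECTIVE_TLD_NAMES = "https://publicsuffix.org/list/effective_tld_names.dat"
```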

