anorov / cloudflare-scrape

A Python module to bypass Cloudflare's anti-bot page.

License: MIT License

Python 97.64% Makefile 2.36%
cloudflare anti-bot-page protected-page scrape scraping-websites

cloudflare-scrape's Introduction

cloudflare-scrape

A simple Python module to bypass Cloudflare's anti-bot page (also known as "I'm Under Attack Mode", or IUAM), implemented with Requests. Python versions 2.6 - 3.7 are supported. Cloudflare changes their techniques periodically, so I will update this repo frequently.

This can be useful if you wish to scrape or crawl a website protected with Cloudflare. Cloudflare's anti-bot page currently just checks if the client supports JavaScript, though they may add additional techniques in the future.

Due to Cloudflare continually changing and hardening their protection page, cloudflare-scrape requires Node.js to solve JavaScript challenges. This allows the script to easily impersonate a regular web browser without explicitly deobfuscating and parsing Cloudflare's JavaScript.

Note: This only works when the regular Cloudflare anti-bot page is enabled (the "Checking your browser before accessing..." loading page). If there is a reCAPTCHA challenge, you're out of luck. Thankfully, the JavaScript check page is much more common.

For reference, this is the default message Cloudflare uses for these sorts of pages:

Checking your browser before accessing website.com.

This process is automatic. Your browser will redirect to your requested content shortly.

Please allow up to 5 seconds...

Any script using cloudflare-scrape will sleep for 5 seconds on its first visit to any site with Cloudflare anti-bot protection enabled; no delay occurs on subsequent requests.

Installation

Simply run pip install cfscrape. You can upgrade with pip install -U cfscrape. The PyPI package is at https://pypi.python.org/pypi/cfscrape/

Alternatively, clone this repository and run python setup.py install.

Node.js dependency

Node.js version 10 or above is required to interpret Cloudflare's obfuscated JavaScript challenge.

Your machine may already have Node installed (check with node -v). If not, you can install it with apt-get install nodejs on Ubuntu >= 18.04 and Debian >= 9 and brew install node on macOS. Otherwise, you can get it from Node's download page or their package manager installation page.

Updates

Cloudflare regularly modifies their anti-bot protection page and improves their bot detection capabilities.

If you notice that the anti-bot page has changed, or if this module suddenly stops working, please create a GitHub issue so that I can update the code accordingly.

  • Many issues are a result of users not updating to the latest release of this project. Before filing an issue, please run the following command to update cloudflare-scrape to the latest version:
pip install -U cfscrape

If you are still encountering a problem, create a GitHub issue and please include:

  • The version number from pip show cfscrape.
  • The relevant code snippet that's experiencing an issue or raising an exception.
  • The full exception and traceback, if applicable.
  • The URL of the Cloudflare-protected page which the script does not work on.
  • A Pastebin or Gist containing the HTML source of the protected page.

If you've upgraded and are still experiencing problems, click here to create a GitHub issue and fill out the pertinent information.

Usage

The simplest way to use cloudflare-scrape is by calling create_scraper().

import cfscrape

scraper = cfscrape.create_scraper()  # returns a CloudflareScraper instance
# Or: scraper = cfscrape.CloudflareScraper()  # CloudflareScraper inherits from requests.Session
print(scraper.get("http://somesite.com").content)  # => "<!DOCTYPE html><html><head>..."

That's it. Any requests made from this session object to websites protected by Cloudflare anti-bot will be handled automatically. Websites not using Cloudflare will be treated normally. You don't need to configure or call anything further, and you can effectively treat all websites as if they're not protected with anything.

You use cloudflare-scrape exactly the same way you use Requests. (CloudflareScraper works identically to a Requests Session object.) Just instead of calling requests.get() or requests.post(), you call scraper.get() or scraper.post(). Consult Requests' documentation for more information.
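For example, a POST request through the scraper is handled the same way. A minimal sketch (the URL and form fields below are made up for illustration):

import cfscrape

scraper = cfscrape.create_scraper()

# POST works exactly like requests.post(); any Cloudflare challenge is
# solved transparently before the response is returned.
response = scraper.post("http://somesite.com/login", data={"user": "name", "pass": "secret"})
print(response.status_code)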

Options

Existing session

If you already have an existing Requests session, you can pass it to create_scraper() to continue using that session.

import requests
import cfscrape

session = requests.session()
session.headers = ...
scraper = cfscrape.create_scraper(sess=session)

Unfortunately, not all of Requests' session attributes are easily transferable, so if you run into problems with this, you should replace your initial sess = requests.session() call with sess = cfscrape.create_scraper().
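For instance, a minimal sketch of that replacement (the header value is illustrative):

import cfscrape

# Start from a scraper instead of requests.session(), so every session
# attribute lives on the CloudflareScraper from the beginning.
sess = cfscrape.create_scraper()
sess.headers.update({"Referer": "http://somesite.com/"})  # illustrative header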

Delays

Normally, when a browser is faced with a Cloudflare IUAM challenge page, Cloudflare requires the browser to wait 5 seconds before submitting the challenge answer. If a website is under heavy load, sometimes this may fail. One solution is to increase the delay (perhaps to 10 or 15 seconds, depending on the website). If you would like to override this delay, pass the delay keyword argument to create_scraper() or CloudflareScraper().

There is no need to override this delay unless cloudflare-scrape generates an error recommending you increase the delay.

scraper = cfscrape.create_scraper(delay=10)

Integration

It's easy to integrate cloudflare-scrape with other applications and tools. Cloudflare uses two cookies as tokens: one to verify you made it past their challenge page and one to track your session. To bypass the challenge page, simply include both of these cookies (with the appropriate user-agent) in all HTTP requests you make.

To retrieve just the cookies (as a dictionary), use cfscrape.get_tokens(). To retrieve them as a full Cookie HTTP header, use cfscrape.get_cookie_string().

get_tokens and get_cookie_string both accept Requests' usual keyword arguments (like get_tokens(url, proxies={"http": "socks5://localhost:9050"})). Please read Requests' documentation on request arguments for more information.

User-Agent Handling

The two integration functions return a tuple of (cookie, user_agent_string). You must use the same user-agent string for obtaining tokens and for making requests with those tokens, otherwise Cloudflare will flag you as a bot. That means you have to pass the returned user_agent_string to whatever script, tool, or service you are passing the tokens to (e.g. curl, or a specialized scraping tool), and it must use that passed user-agent when it makes HTTP requests.

If your tool already has a particular user-agent configured, you can make cloudflare-scrape use it with cfscrape.get_tokens("http://somesite.com/", user_agent="User-Agent Here") (also works for get_cookie_string). Otherwise, a randomly selected user-agent will be used.
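A short sketch of both calls with a custom user-agent (the agent string and URL are placeholders):

import cfscrape

ua = "SomeTool/1.0"  # hypothetical user-agent your tool already sends

# Both integration functions accept the same user_agent keyword argument.
tokens, user_agent = cfscrape.get_tokens("http://somesite.com/", user_agent=ua)
cookie_value, user_agent = cfscrape.get_cookie_string("http://somesite.com/", user_agent=ua)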


Integration examples

Remember, you must always use the same user-agent when retrieving or using these cookies. These functions all return a tuple of (cookie_dict, user_agent_string).

Retrieving a cookie dict through a proxy

get_tokens is a convenience function for returning a Python dict containing Cloudflare's session cookies. For demonstration, we will configure this request to use a proxy. (Please note that if you request Cloudflare clearance tokens through a proxy, you must always use the same proxy when those tokens are passed to the server. Cloudflare requires that the challenge-solving IP and the visitor IP stay the same.)

If you do not wish to use a proxy, just don't pass the proxies keyword argument. These convenience functions support all of Requests' normal keyword arguments, like params, data, and headers.

import cfscrape

proxies = {"http": "http://localhost:8080", "https": "http://localhost:8080"}
tokens, user_agent = cfscrape.get_tokens("http://somesite.com", proxies=proxies)
print(tokens)
# => {'cf_clearance': 'c8f913c707b818b47aa328d81cab57c349b1eee5-1426733163-3600', '__cfduid': 'dd8ec03dfdbcb8c2ea63e920f1335c1001426733158'}

Retrieving a cookie string

get_cookie_string is a convenience function for returning the tokens as a string for use as a Cookie HTTP header value.

This is useful when crafting an HTTP request manually, or working with an external application or library that passes on raw cookie headers.

import cfscrape
request = "GET / HTTP/1.1\r\n"

cookie_value, user_agent = cfscrape.get_cookie_string("http://somesite.com")
request += "Cookie: %s\r\nUser-Agent: %s\r\n" % (cookie_value, user_agent)

print(request)

# GET / HTTP/1.1\r\n
# Cookie: cf_clearance=c8f913c707b818b47aa328d81cab57c349b1eee5-1426733163-3600; __cfduid=dd8ec03dfdbcb8c2ea63e920f1335c1001426733158
# User-Agent: Some/User-Agent String

curl example

Here is an example of integrating cloudflare-scrape with curl. As you can see, all you have to do is pass the cookies and user-agent to curl.

import subprocess
import cfscrape

# With get_tokens() cookie dict:

# tokens, user_agent = cfscrape.get_tokens("http://somesite.com")
# cookie_arg = "cf_clearance=%s; __cfduid=%s" % (tokens["cf_clearance"], tokens["__cfduid"])

# With get_cookie_string() cookie header; recommended for curl and similar external applications:

cookie_arg, user_agent = cfscrape.get_cookie_string("http://somesite.com")

# With a custom user-agent string you can optionally provide:

# ua = "Scraping Bot"
# cookie_arg, user_agent = cfscrape.get_cookie_string("http://somesite.com", user_agent=ua)

result = subprocess.check_output(["curl", "--cookie", cookie_arg, "-A", user_agent, "http://somesite.com"])

Trimmed-down version that prints the page contents of any site protected with Cloudflare, via curl. (Warning: shell=True can be dangerous to use with subprocess in real code, since unescaped values are interpolated straight into the command line.)

url = "http://somesite.com"
cookie_arg, user_agent = cfscrape.get_cookie_string(url)
cmd = "curl --cookie {cookie_arg} -A {user_agent} {url}"
print(subprocess.check_output(cmd.format(cookie_arg=cookie_arg, user_agent=user_agent, url=url), shell=True))

cloudflare-scrape's People

Contributors

alzamer2, anorov, asl97, berdt, cvium, edmundmartin, jremes-foss, kittenswolf, lord8266, lukastribus, lukele, obskyr, r-darwish, rcoh, simopopov, spanglelabs, tobix, veteranbv, washedev


cloudflare-scrape's Issues

Syntax Errors?

I tried to use this script yesterday and I keep getting these kinds of errors

SyntaxError: SyntaxError: Unexpected token } ( @ 1 : 68 ) -> toLowerCase(),t[3]=t[3]&&new RegExp("(?:^|\s)"+t[3]+"(?:\s|$)")),t},ot=funct

I believe the "builder" part of the script is searching and returning the wrong challenge and or answer but I am not sure if I am correct and have no idea what part of the page is meant to be returned in this case.

The code I am executing is given below:

url = "http://kissmanga.com/" scraper = create_scraper() scraper.get(url)

Any help with regard to what is wrong (or what I am doing wrong) would be appreciated. Thanks.

Emails obfuscated

I am scraping a CF site and for the email field, I am getting:

u'/cdn-cgi/l/email-protection#8fe6e1e9e0cfe2e0eeede7eee3e9e2eefdeefbe7e0e1a1ece0e2'

Googling around, this looks like a Cloudflare feature called "E-mail address obfuscation". Source: https://sendy.co/forum/discussion/4956/email-addresses-in-email-body-converted-to-email-protected/p1

I found this solution and wanted to share it in case you want to add it as a function: http://www.saltycrane.com/blog/2015/07/calling-javascript-python-de-cloudflare-scraped-content/
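For what it's worth, the obfuscation is a simple XOR scheme and can be reversed in pure Python without calling out to JavaScript. A minimal sketch, assuming the input is the hex fragment after the # in the scraped href (this decoder is not part of cfscrape):

def decode_cfemail(encoded):
    # The first hex byte is the XOR key; each following byte XORed
    # with the key yields one character of the address.
    key = int(encoded[:2], 16)
    return "".join(
        chr(int(encoded[i:i + 2], 16) ^ key)
        for i in range(2, len(encoded), 2)
    )

print(decode_cfemail("8fe6e1e9e0cfe2e0eeede7eee3e9e2eefdeefbe7e0e1a1ece0e2"))
# => the plaintext email address hidden in the example above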

Start a new session once it expires

CloudFlare only lets you perform a certain number of requests until the session expires. Can we use response hooks to check if the page is the challenge page, and if so, start a new session?

Or is there another reason why this isn't possible (I see this has been asked before)?
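One possible shape for that response-hook idea, sketched with a hypothetical challenge check (the 503 status and "jschl" marker are assumptions about what the challenge page looks like, not a cfscrape API):

import cfscrape

def get_with_refresh(scraper, url):
    # If the response looks like the IUAM challenge page again,
    # start a fresh scraper and retry once.
    resp = scraper.get(url)
    if resp.status_code == 503 and "jschl" in resp.text:
        scraper = cfscrape.create_scraper()
        resp = scraper.get(url)
    return scraper, resp

scraper = cfscrape.create_scraper()
scraper, resp = get_with_refresh(scraper, "http://somesite.com/")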

I want to contribute by building this in node.js

Seeing the popularity of node.js, I believe we should convert your precious work into some kind of npm module. If you can help me with this, then I could be a contributor. Let me know your thoughts.
What I need from you:

  1. Basic flow of process
  2. Things to keep in mind

PS: I am not comfortable in Python

Cloudflare Update.

I'm currently having issues using your script, the same ones StevenVeshkini had.
From what I've gathered, there seem to be eight 'setTimeout's on the page, causing your script to get confused. I think the one your script wants is the eighth, but as a script noob I have no idea how to implement the correct search.

Thanks in advance,
plasmaboltrs

HTTP Error 521

I want to get the content of http://www.eoemarket.com/soft/28366.html

  • When I used urllib2 in Python, the error is
Traceback (most recent call last):
  File "5_youyishichang.py", line 7, in <module>
    html=urllib2.urlopen(url).read()
  File "/usr/lib/python2.7/urllib2.py", line 127, in urlopen
    return _opener.open(url, data, timeout)
  File "/usr/lib/python2.7/urllib2.py", line 410, in open
    response = meth(req, response)
  File "/usr/lib/python2.7/urllib2.py", line 523, in http_response
    'http', request, response, code, msg, hdrs)
  File "/usr/lib/python2.7/urllib2.py", line 448, in error
    return self._call_chain(*args)
  File "/usr/lib/python2.7/urllib2.py", line 382, in _call_chain
    result = func(*args)
  File "/usr/lib/python2.7/urllib2.py", line 531, in http_error_default
    raise HTTPError(req.get_full_url(), code, msg, hdrs, fp)
urllib2.HTTPError: HTTP Error 521: 
  • When I used curl www.eoemarket.com/soft/28366.html -v, the stdout is
* Hostname was NOT found in DNS cache
*   Trying 115.231.25.180...
* Connected to www.eoemarket.com (115.231.25.180) port 80 (#0)
> GET /soft/28366.html HTTP/1.1
> User-Agent: curl/7.35.0
> Host: www.eoemarket.com
> Accept: */*
> 
< HTTP/1.1 521 
< Date: Thu, 07 Apr 2016 08:41:58 GMT
< Content-Type: text/html
< Content-Length: 1009
< Connection: keep-alive
< Cache-Control: no-cache, no-store
* Server YUNDUN is not blacklisted
< Server: YUNDUN
< X-Cache:  from YUNDUN
< 
* Connection #0 to host www.eoemarket.com left intact
<html><body><script language="javascript"> window.onload=setTimeout("ix(107)", 200); function ix(QG) {var qo, mo="", no="", oo = [0x16,0x94,0x38,0x80,0x28,0x80,0x57,0xc8,0x0b,0x33,0x8b,0xa3,0x9b,0x44,0xfd,0x76,0xce,0x15,0x4a,0x4e,0xa2,0x39,0x52,0xaa,0x72,0x19,0x6d,0x72,0x17,0x7c,0x8f,0x54,0x17,0x5b,0xc3,0xc1,0x6a,0xa2,0xca,0x1e,0x95,0x4e,0xa6,0xee,0x96,0x4d,0x95,0xca,0x2e,0x80,0x43,0xb8,0x7b,0xe0,0x98,0x5b,0x19,0x7e,0x46,0xa9,0x7e,0xf1,0xd6,0x85,0x9a,0x0e,0x21,0x39,0x01,0x76,0xa6,0x0d,0x22];qo = "qo=71; do{oo[qo]=(-oo[qo])&0xff; oo[qo]=(((oo[qo]>>1)|((oo[qo]<<7)&0xff))-160)&0xff;}while(--qo>=2);"; eval(qo);qo = 70; do { oo[qo] = (oo[qo] - oo[qo - 1]) & 0xff; } while (-- qo >= 3 );qo = 1; for (;;) { if (qo > 70) break; oo[qo] = ((((((oo[qo] + 135) & 0xff) + 197) & 0xff) << 5) & 0xff) | (((((oo[qo] + 135) & 0xff) + 197) & 0xff) >> 3);qo++;}po = "";for (qo = 1; qo < oo.length - 1; qo++) if (qo % 7) po += String.fromCharCode(oo[qo] ^ QG);po += "\""; eval("qo=eval;qo(po);");}</script> </body></html>
  • I search "http error 521" and get this url: Error 521: Web server is down in cloudflare's official website.
    It seems this modules cloudflare-scrape could solve this problem,but it didn't return the entire html.
    If I use a browser such as Chrome, I could get the entire html.
  • pip list of my virtualenv
argparse (1.2.1)
cfscrape (1.6.1)
pip (1.5.4)
PyExecJS (1.3.1)
requests (2.9.1)
setuptools (2.2)
six (1.10.0)
wsgiref (0.1.2)

nodejs -v is v0.10.37. But when I execute

import cfscrape
scraper = cfscrape.create_scraper()
url="http://www.eoemarket.com/soft/28366.html"
scraper.get(url).content

The output is the same as curl's.
I am a rookie at Python, and my English is poor. Looking forward to your reply. Thank you very much. ^_^

response returns 403 Forbidden today trying cfscrape

Tried in Python 3, doesn't work; Python 2.7 works fine though O_O

#!/usr/bin/env python

import cfscrape

scraper = cfscrape.create_scraper() # returns a requests.Session object
print (scraper.get("https://testsite.com").text)

Response text

Desktop$ python3 testcf.py

<title>403 Forbidden</title>

403 Forbidden

nginx

Any reason why it's being blocked ?

Not working with Scrapy, unable to pass cookies, maybe

import scrapy
from scrapy.spider import BaseSpider
from scrapy.selector import Selector
from selenium import webdriver
from stack.items import StackItem
import requests
import cfscrape
from requests import Request

sess.mount("http://", cfscrape.CloudflareAdapter())
sess.mount("https://", cfscrape.CloudflareAdapter())

USER_AGENT = 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/42.0.2311.135 Safari/537.36'

class StackSpider(BaseSpider):
    name = "stack"
    allowed_domains = ["cloud flare site"]
    start_urls = ["cloud flare site"]

    #def __init__(self):
    #    self.driver = webdriver.PhantomJS()
    '''
    def parse_start_url(self, response):
        self.cookie = {'cf_clearance': 'e49068ee588706fdabc0c434eb66df533d12ec3c-1461787949-86400', '__cfduid': 'dda7da5497ba77c6063e79d48921df0a71461787944'}
        return super(Spider, self).parse_start_url(response)
    '''

    def start_requests(self):
        cf_requests = []
        for url in self.start_urls:
            token, agent = cfscrape.get_tokens(url, "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/46.0.2490.86 Safari/537.36")
            cf_requests.append(scrapy.Request(url=url,
                cookies=token,
                headers={'User-Agent': agent}))
        return cf_requests

    def parse(self, response):
        print "#####Parsing Hello World"
        item = StackItem()
        #self.driver.get(response.url)
        #self.driver.close()
        return item

Can not force JS_ENGINE on Mac OS X

Whenever I'm trying to run cfscrape from PHP using the native exec() function, I'm getting this error :

EnvironmentError: Your Javascript runtime 'JavaScriptCore' is not supported due to security concerns. Please use Node.js or PyV8. To force a specific engine, such as Node, call create_scraper(js_engine="Node")

Of course, I do try to force "Node" as the JS_ENGINE. Node.JS is installed on my machine and in my path. I have also tried forcing "V8" as the JS_ENGINE, same error. Like it's ignoring it completely.

Weird thing is though, if I run the Python script directly (in my console: python /path/to/script.py) it works exactly as expected.

Is it a bug or misuse on my end?

Cookie expire?

I am running an online private web server which helps me bypass Cloudflare. After running for about a month, the scraper stopped bypassing Cloudflare and instead returns the original browser-checking page. Is this a cookie-expiration problem, and how do I fix it?

Setting cookies doesn't work

If you manually set your own cookies, they will not be set if used on a page with Cloudflare anti-bot. It seems they get overwritten by cf_clearance and __cfduid instead of being merged.

The following does work if used on page without Cloudflare, but not on a page with it:
scraper.get("http://somesite.com", cookies=cookies)

cloudflare header changes

The Refresh header value is now missing from the initial scrape of the page, so cfscrape fails to recognize that it needs to solve a challenge.

Line 37 needs to be updated to reflect the current changes. I did a quick patch by replacing line 37 with an HTTP 503 error check and left line 38 the same (example). Not sure if it's the best option for a fix, but I don't see any other header values that would be an anti-bot indicator.

A POST request can not be performed without first performing a GET.

Trying to perform a POST with a freshly-create_scraper()ed scraper times out with the classic requests.exceptions.ConnectionError: ('Connection aborted.', error(10054, '[locale dependent]')) error. Performing a GET with the scraper first, and then the POST, works just fine.

I haven't tested this on a lot of sites, so it may not be generalizable this way, but it sure does seem like it.
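Until that's fixed, a workaround sketch consistent with the behavior described above (URL and payload are illustrative):

import cfscrape

scraper = cfscrape.create_scraper()

# Prime the session with a GET so the Cloudflare challenge is solved
# before the POST is attempted.
scraper.get("http://somesite.com/")
resp = scraper.post("http://somesite.com/endpoint", data={"key": "value"})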

Issues with latest merge

The site I am trying to use this on has a slight variation in its CloudFlare page. I'm also fairly certain all of the needed libraries are correctly installed.

Instead of:

html = lxml.html.fromstring(page.content)

It requires:

html = lxml.html.fromstring('cf-content')

But then I get this error:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "cfscrape.py", line 19, in grab_cloudflare
    challenge = html.find(".//input[@name='jschl_vc']").attrib["value"]
AttributeError: 'NoneType' object has no attribute 'attrib'

Incapsula

Hi,
Could you do the same for Incapsula?

Cant remote upload the picture

Hello.

I'm trying to remote-upload pictures from 4fuckr.com to my host.

Now the problem: 4fuckr.com set up Cloudflare's 5-second anti-DDoS protection. Well, fine, we just grab the cookies and done.

It's still not remote-uploading them to my host :<

Here's what I've done so far:

(remote upload script.php):
http://hastebin.com/raw/ufikasiqob

(getting cookies from 4fuckr website)
http://hastebin.com/raw/udoqotixog

(this is what cookies.txt looks like)
Cookie: cf_clearance=410e1a48ad9474304d7936c965373cbfdbf1f77d-1458209645-14400; __cfduid=d93afdc905b7cb5ca3f3bc9cd60a263dc1458209640

And that's how I "catch" images from 4fuckr.com:

http://weserv.capsload.it/push_nolog.php?p=https://images.4fuckr.com/cf57da1df454/6/c/8/d/6/6c8d67c209.jpg

Problem: THE PICTURES ARE WHITE (0 KB)

It's just from that website. :(

unable to login

Hi dude,

I am having issues with a script I am trying to make. Is it me being a noobie or is it the site?
Website: http://www.celticfc.tv/login
HTML of site: http://pastebin.com/raw/0VYjJfnm

import cfscrape
import requests
scraper = cfscrape.create_scraper() 



scraper.get("http://www.celticfc.tv/login", headers={"User-Agent": "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/51.0.2704.103 Safari/537.36", "Accept": "*/*", "Referer": "http://www.celticfc.tv/login"})

grabbedcookies = cfscrape.get_tokens("http://www.celticfc.tv/login", user_agent="Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/51.0.2704.103 Safari/537.36")

loginok = scraper.post("http://www.celticfc.tv/login", headers={"User-Agent": "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/51.0.2704.103 Safari/537.36", "Accept": "*/*", "Referer": "http://www.celticfc.tv/login","Content-Type": "application/x-www-form-urlencoded"}, data={"usernamecfc": "konrad", "password": "konrad"})

if 'logout' in loginok.text:
    print(loginok.text)
else:
    print("Login Failed")

The account works, and when I proxied the requests I noticed that when it sends the POST it still ends up back at the login page, even though the login works.

Source IP - Requests Toolbelt

I have requests_toolbelt working with the SourceAddressAdapter, and data originates from the specified IP (first print statement); however, passing that session to cfscrape reverts it to the default IP/interface (second print statement).

How would I get cfscrape to use the already established session and mounts?

import cfscrape
import requests
from requests_toolbelt.adapters.source import SourceAddressAdapter

s = requests.Session()
s.mount('http://', SourceAddressAdapter(('10.10.10.10', 31999)))
s.mount('https://', SourceAddressAdapter(('10.10.10.10', 31998)))

print(s.get("http://website.net/test.php").content)

scraper = cfscrape.create_scraper(s)

print(scraper.get("http://website.net/test.php").content)
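One thing worth trying, sketched under the assumption that transport adapters don't transfer when a session is passed in: create the scraper first, then mount the source-address adapters directly on it.

import cfscrape
from requests_toolbelt.adapters.source import SourceAddressAdapter

scraper = cfscrape.create_scraper()

# Mount the adapters on the scraper itself so its requests (including
# the challenge-solving ones) originate from the specified address.
scraper.mount('http://', SourceAddressAdapter(('10.10.10.10', 31999)))
scraper.mount('https://', SourceAddressAdapter(('10.10.10.10', 31998)))

print(scraper.get("http://website.net/test.php").content)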

Possible bug?

cfscrape.py", line 61, in solve_cf_challenge
answer = str(int(ctxt.eval(builder)) + len(domain))
SyntaxError: SyntaxError: Illegal return statement ( @ 2 : 77 ) -> !![]+!![]+!![]+!![]+!![]+!![]+!![]+!![])); return parseInt(UQTeuzc.TiasKf, 10)

Failing with TOR

cloudflare-scrape works like a charm from my IP, but it fails when I try to use it through Tor.

related: https://support.cloudflare.com/hc/en-us/articles/203306930-Does-CloudFlare-block-Tor-

This is an example of the page I got:

<!DOCTYPE html>
<!--[if lt IE 7]> <html class="no-js ie6 oldie" lang="en-US"> <![endif]-->
<!--[if IE 7]>    <html class="no-js ie7 oldie" lang="en-US"> <![endif]-->
<!--[if IE 8]>    <html class="no-js ie8 oldie" lang="en-US"> <![endif]-->
<!--[if gt IE 8]><!--> <html class="no-js" lang="en-US"> <!--<![endif]-->
<head>
<title>Attention Required! | CloudFlare</title>
<meta charset="UTF-8" />
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8" />
<meta http-equiv="X-UA-Compatible" content="IE=Edge,chrome=1" />
<meta name="robots" content="noindex, nofollow" />
<meta name="viewport" content="width=device-width,initial-scale=1,maximum-scale=1" />
<link rel="stylesheet" id="cf_styles-css" href="/cdn-cgi/styles/cf.errors.css" type="text/css" media="screen,projection" />
<!--[if lt IE 9]><link rel="stylesheet" id='cf_styles-ie-css' href="/cdn-cgi/styles/cf.errors.ie.css" type="text/css" media="screen,projection" /><![endif]-->
<style type="text/css">body{margin:0;padding:0}</style>
<!--[if lte IE 9]><script type="text/javascript" src="/cdn-cgi/scripts/jquery.min.js"></script><![endif]-->
<!--[if gte IE 10]><!--><script type="text/javascript" src="/cdn-cgi/scripts/zepto.min.js"></script><!--<![endif]-->
<script type="text/javascript" src="/cdn-cgi/scripts/cf.common.js"></script>
</head>
<body>
  <div id="cf-wrapper">
    <div class="cf-alert cf-alert-error cf-cookie-error" id="cookie-alert" data-translate="enable_cookies">Please enable cookies.</div>
    <div id="cf-error-details" class="cf-error-details-wrapper">
      <div class="cf-wrapper cf-header cf-error-overview">
        <h1 data-translate="challenge_headline">One more step</h1>
        <h2 class="cf-subheadline"><span data-translate="complete_sec_check">Please complete the security check to access</span> sports.betcoin.ag</h2>
      </div><!-- /.header -->

      <div class="cf-section cf-highlight cf-captcha-container">
        <div class="cf-wrapper">
          <div class="cf-columns two">
            <div class="cf-column">
              <div class="cf-highlight-inverse cf-form-stacked">
                <form class="challenge-form" id="challenge-form" action="/cdn-cgi/l/chk_captcha" method="get">
  <script type="text/javascript" src="/cdn-cgi/scripts/cf.challenge.js" data-type="normal" data-ray="297302c2d5d52926" async data-sitekey="6LfOYgoTAAAAAInWDVTLSc8Yibqp-c9DaLimzNGM" data-stoken="urFaI2UjzL7Q4gf4a-aeCBTXc1axcVPUoLk1n_YSXYddI_7UpGGFF-ImEq1D8kqkSkc1ihC4tL3e8oV4_mHATeYFu6mOUdUZxOBRKYy5b6w"></script>
  <div class="g-recaptcha"></div>
  <noscript id="cf-captcha-bookmark" class="cf-captcha-info">
    <div><div style="width: 302px">
      <div>
        <iframe src="https://www.google.com/recaptcha/api/fallback?k=6LfOYgoTAAAAAInWDVTLSc8Yibqp-c9DaLimzNGM&stoken=urFaI2UjzL7Q4gf4a-aeCBTXc1axcVPUoLk1n_YSXYddI_7UpGGFF-ImEq1D8kqkSkc1ihC4tL3e8oV4_mHATeYFu6mOUdUZxOBRKYy5b6w" frameborder="0" scrolling="no" style="width: 302px; height:422px; border-style: none;"></iframe>
      </div>
      <div style="width: 300px; border-style: none; bottom: 12px; left: 25px; margin: 0px; padding: 0px; right: 25px; background: #f9f9f9; border: 1px solid #c1c1c1; border-radius: 3px;">
        <textarea id="g-recaptcha-response" name="g-recaptcha-response" class="g-recaptcha-response" style="width: 250px; height: 40px; border: 1px solid #c1c1c1; margin: 10px 25px; padding: 0px; resize: none;"></textarea>
        <input type="submit" value="Submit"></input>
      </div>
    </div></div>
  </noscript>
</form>

              </div>
            </div>

            <div class="cf-column">
              <div class="cf-screenshot-container">
                <span class="cf-no-screenshot"></span>
              </div>
            </div>
          </div><!-- /.columns -->
        </div>
      </div><!-- /.captcha-container -->

      <div class="cf-section cf-wrapper">
        <div class="cf-columns two">
          <div class="cf-column">
            <h2 data-translate="why_captcha_headline">Why do I have to complete a CAPTCHA?</h2>
            <p data-translate="why_captcha_detail">Completing the CAPTCHA proves you are a human and gives you temporary access to the web property.</p>
          </div>

          <div class="cf-column">
            <h2 data-translate="resolve_captcha_headline">What can I do to prevent this in the future?</h2>
            <p data-translate="resolve_captcha_antivirus">If you are on a personal connection, like at home, you can run an anti-virus scan on your device to make sure it is not infected with malware.</p>
            <p data-translate="resolve_captcha_network">If you are at an office or shared network, you can ask the network administrator to run a scan across the network looking for misconfigured or infected devices.</p>
          </div>
        </div>
      </div><!-- /.section -->

      <div class="cf-error-footer cf-wrapper">
  <p>
    <span class="cf-footer-item">CloudFlare Ray ID: <strong>297302c2d5d52926</strong></span>
    <span class="cf-footer-separator">&bull;</span>
    <span class="cf-footer-item"><span data-translate="your_ip">Your IP</span>: 195.254.135.76</span>
    <span class="cf-footer-separator">&bull;</span>
    <span class="cf-footer-item"><span data-translate="performance_security_by">Performance &amp; security by</span> <a data-orig-proto="https" data-orig-ref="www.cloudflare.com/5xx-error-landing?utm_source=error_footer" id="brand_link" target="_blank">CloudFlare</a></span>
  </p>
</div><!-- /.error-footer -->

    </div><!-- /#cf-error-details -->
  </div><!-- /#cf-wrapper -->

  <script type="text/javascript">
  window._cf_translation = {};
  </script>
</body>
</html>

cloudflare-scrape Doesn't Work with Scrapy

I was practicing using Scrapy on a site that apparently just today implemented Cloudflare protection. After a bit of research, I tried cloudflare-scrape.

(http://www.endclothing.com/us/latest-products/latest-sneakers)
Full html: http://pastebin.com/CxgH9NzB

I'm using Python 2.7, Requests 2.8.1, PyExecJS (not sure which version), Node.js 0.10.25.

Along with adding the line "import cfscrape", I overwrote Scrapy's start_requests method to use cfscrape.get_tokens() as described in this post: http://stackoverflow.com/a/33290671

Here is my full spider.py file:
http://pastebin.com/mHDNw69G

The output is fairly limited. 0 items scraped (I expected 120). No errors in the log, just 503 status on the start_url. Here's the full log: http://pastebin.com/1HLijt5Z

How to modify existing requests script to include cfscrape?

I'm trying to understand the documentation, but I can't seem to get it right; I keep getting errors. Can anyone help me figure out how to wedge cfscrape into this existing requests script? I feel like I must be missing something really obvious.

#!/usr/bin/env python
import requests
import cfscrape

requests.packages.urllib3.disable_warnings()
headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64; rv:38.0) Gecko/20100101 Firefox/38.0'
}

login_url = 'https://example.com/login.php'
get_url = 'https://example.com/index.php'
payload = { 'username':     'username',
        'password':     'password',
        'keeplogged':   '1',
        'login':    'submit'}

with requests.session() as s:
    # Authenticate the session
    p = s.post(login_url, data=payload, verify=False)
    print(p.content)

    # An authorised request
    r = s.get(get_url, verify=False)
    print(r.text)
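For what it's worth, a minimal sketch of the change, assuming the only requirement is that the session solve Cloudflare challenges (login_url, get_url, and payload are the ones defined in the script above):

import cfscrape

# Swap requests.session() for cfscrape.create_scraper(); the rest of
# the script (post/get calls, payload, verify=False) stays the same.
with cfscrape.create_scraper() as s:
    p = s.post(login_url, data=payload, verify=False)
    print(p.content)

    r = s.get(get_url, verify=False)
    print(r.text)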

Post request - Connection website

Hi,

When I try to connect to this website, http://ascalion.eu, I'm not connected.

Username: fortest1
Password: fortest1

My code: http://paste.isomorphis.me/EmB&ln

Steps are:

  • POST request with data to ascalion.eu/connexion
  • GET request to check if my connection is established, but the response content doesn't contain "connected as fortest1", which is normally printed when I'm connected (via web browser)

I don't know why it doesn't work..

Is this a bug?

PS: the username and password are for testing; you can use them without any problems.

Regards,

JSLocker error

When I try to use this, I get an AttributeError: 'module' object has no attribute 'JSLocker' error. Any thoughts?

Not working

Is this project going to be updated, or should we forget about it?
It fails at math = re.search(r"a.value = (\d.+?);", script)

as a.value is no longer an int but an object transformed into an int with parseInt()

Not working with "get" URL

No answer from URLs in "get" (query-string) format; the "normal" URL works.

i.e. http://cb01.co works; http://cb01.co/?s=chocolat does not.

Traceback (most recent call last):
  File "", line 1, in
  File "/usr/local/lib/python2.7/dist-packages/requests/sessions.py", line 487, in get
    return self.request('GET', url, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/cfscrape/__init__.py", line 30, in request
    return self.solve_cf_challenge(resp, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/cfscrape/__init__.py", line 69, in solve_cf_challenge
    return self.get(submit_url, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/requests/sessions.py", line 487, in get
    return self.request('GET', url, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/cfscrape/__init__.py", line 30, in request
    return self.solve_cf_challenge(resp, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/cfscrape/__init__.py", line 69, in solve_cf_challenge
    return self.get(submit_url, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/requests/sessions.py", line 487, in get
    return self.request('GET', url, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/cfscrape/__init__.py", line 30, in request
    return self.solve_cf_challenge(resp, **kwargs)
  File "/usr/local/lib/python2.7/dist-packages/cfscrape/__init__.py", line 69, in solve_cf_challenge
    return self.get(submit_url, **kwargs)
(and so on....)

HTML: http://pastebin.com/jvZRbaYF

Module broken

Module has broken for me as of 2016/06/02.

Traceback (most recent call last):
  File "/home/matoro/documents/scripting/aniwatch-getcookies.py", line 1, in <module>
    import cfscrape; cookie_arg, user_agent = cfscrape.get_cookie_string("https://kissanime.to"); print (cookie_arg)
  File "/usr/lib/python3.5/site-packages/cfscrape/__init__.py", line 152, in get_cookie_string
    tokens, user_agent = cls.get_tokens(url, user_agent=user_agent, js_engine=None)
  File "/usr/lib/python3.5/site-packages/cfscrape/__init__.py", line 124, in get_tokens
    resp = scraper.get(url)
  File "/usr/lib/python3.5/site-packages/requests/sessions.py", line 487, in get
    return self.request('GET', url, **kwargs)
  File "/usr/lib/python3.5/site-packages/cfscrape/__init__.py", line 30, in request
    return self.solve_cf_challenge(resp, **kwargs)
  File "/usr/lib/python3.5/site-packages/cfscrape/__init__.py", line 52, in solve_cf_challenge
    js = self.extract_js(body)
  File "/usr/lib/python3.5/site-packages/cfscrape/__init__.py", line 73, in extract_js
    "t,r,a,f.+?\r?\n[\s\S]+?a\.value =.+?)\r?\n", body).group(1)
AttributeError: 'NoneType' object has no attribute 'group'

URL of page: https://kissanime.to/
Unblocked source of the page: https://gist.githubusercontent.com/matoro/7b68ca646d74aa85be75c574b3ada52e/raw/88428631317b9446584e58a671f680e4383f088b/kissanime.html

A bit of further information: I've been using this method up until this point with no problems. It is part of a much larger bash script I'm using as a learning tool for bash. The relevant section pastes the code import cfscrape; cookie_arg, user_agent = cfscrape.get_cookie_string("https://kissanime.to"); print (cookie_arg) into aniwatch-getcookies.py, executes it, then deletes the file. I can post the complete bash script if it will help.

I have a debug mode built into the script, which will print the cookies to stdout. When running it, I get the following:

[debug] Clearance cookies [!] Unable to parse Cloudflare anti-bots page. Try upgrading cloudflare-scrape, or submit a bug report if you are running the latest version. Please read https://github.com/Anorov/cloudflare-scrape#updates before submitting a bug report.

'https://kissanime.to' returned an error. Could not collect tokens. obtained.

Not working

For a few days now (last it worked was on 5/6/16), the module isn't working properly.
No exception is thrown, but there is no response at all. When I stop my program with a ctrl+c, I get the following:
Traceback (most recent call last):
  File "E:\Kissanime-dl\kissanime-dl.py", line 188, in <module>
    return bs(cfscraper.create_scraper().get(url).content, 'lxml')

followed by an endless loop of

  File "C:\Tools\Anaconda\lib\site-packages\requests\sessions.py", line 487, in get
    return self.request('GET', url, **kwargs)
  File "C:\Tools\Anaconda\lib\site-packages\cfscrape\__init__.py", line 30, in request
    return self.solve_cf_challenge(resp, **kwargs)
  File "C:\Tools\Anaconda\lib\site-packages\cfscrape\__init__.py", line 69, in solve_cf_challenge
    return self.get(submit_url, **kwargs)

and finally,

  File "C:\Tools\Anaconda\lib\site-packages\cfscrape\__init__.py", line 36, in solve_cf_challenge
    time.sleep(5)  # Cloudflare requires a delay before solving the challenge

The page I'm trying to scrape is KissAnime.
And here is the source code of the page.

I just noticed that somebody else had an issue with the same page, but that issue was marked closed. And my Traceback seems to be different.

Couldn't collect tokens

Hi, I've written code using cloudflare-scrape and read the "Integration" section, but I'm receiving an error: 'forum.goodchoiceshow.ru' returned an error. Could not collect tokens. Where did I make a mistake?

request = urllib2.Request(url + param_joiner + buildblock(random.randint(3,10)) + '=' + buildblock(random.randint(3,10)))
cookie_value, user_agent = cfscrape.get_cookie_string(host)
request.add_header('User-Agent', user_agent)
request.add_header('Cookie', cookie_value)
#request.add_header('User-Agent', random.choice(headers_useragents))
request.add_header('Cache-Control', 'no-cache')
request.add_header('Accept-Charset', 'ISO-8859-1,utf-8;q=0.7,*;q=0.7')
request.add_header('Referer', random.choice(headers_referers)+random.choice(keyword_top))
request.add_header('Keep-Alive', random.randint(110,120))
request.add_header('Connection', 'keep-alive')
request.add_header('Host', host)
index = random.randint(0,len(ips)-1)
proxy = urllib2.ProxyHandler({'http':ips[index]})
opener = urllib2.build_opener(proxy,urllib2.HTTPHandler)
urllib2.install_opener(opener)
try:
    urllib2.urlopen(request)
    if(flag==1): set_flag(0)
    if(code==500): code=0
except urllib2.HTTPError, e:
    set_flag(1)
    code=500
except urllib2.URLError, e:
    sys.exit()
else:
    inc_counter()
    urllib2.urlopen(request)
return(code)

Cloudflare update?

It seems that the script may have broken. Specifically, right here.

EDIT: It seems that the regex is fine. It's just that sometimes there is more than one "setTimeout", so the script chooses the wrong one.

Has Cloudflare's code been changed?

Source code for the webpage: http://pastebin.com/YAhwjGGu
Website url: https://kissanime.to/

Actual error:

Exception in thread Thread-1:
Traceback (most recent call last):
  File "/usr/lib/python3.5/threading.py", line 914, in _bootstrap_inner
    self.run()
  File "/usr/lib/python3.5/threading.py", line 862, in run
    self._target(*self._args, **self._kwargs)
  File "Gui.py", line 61, in scrape
    self.scraper.login("username", "password")
  File "/home/absolutezero273/Desktop/Projects/PyKissDl/PyKissDl/scraper.py", line 30, in login
    login_form.find('input', {'name': 'username'})['value'] = usr
AttributeError: 'NoneType' object has no attribute 'find'

CF change

Code stopped working today.

Thank you for your great work, anorov!

Not working

cloudflare page : http://pastebin.com/Jd9uHBtS (no idea why it's so long)
Traceback (most recent call last):
  File "cloudflarescrap.py", line 8, in
    raise ValueError("'%s' returned error %d, could not collect tokens." % (url, resp.status_code))
ValueError: 'http://csgo.steamanalyst.com' returned error 503, could not collect tokens.

For some reason, it sometimes gives this error, but more often it only gives the __cfduid cookie (cf_clearance empty). Edit: I just tried the same script 50 times: about half the runs errored, about half returned only __cfduid, and I only got both cookies twice. Edit: I just got both cookies 5 times in a row, but now it's back to errors for 20 or so tries.

I've got it working on another site, but for some reason it doesn't on this site.

syntax error at importing

After upgrading cfscrape with pip, I'm now unable to import cfscrape.

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\tools\python\lib\site-packages\cfscrape\__init__.py", line 107
    print "'%s' returned error %d, could not collect tokens.\n" % (url, resp.status_code)
                                                              ^
SyntaxError: invalid syntax

I get the same error on my Windows (8.1) and Linux (Ubuntu 14.04) machines, both running Python 3.4.2.

Am I doing something wrong? I tried uninstalling cfscrape and installing it again; PyExecJS and requests are also the latest versions.

No longer working

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "C:\Python27\lib\site-packages\requests\sessions.py", line 487, in get
    return self.request('GET', url, **kwargs)
  File "C:\Python27\lib\site-packages\cfscrape\__init__.py", line 30, in request
    return self.solve_cf_challenge(resp, **kwargs)
  File "C:\Python27\lib\site-packages\cfscrape\__init__.py", line 52, in solve_cf_challenge
    js = self.extract_js(body)
  File "C:\Python27\lib\site-packages\cfscrape\__init__.py", line 73, in extract_js
    "t,r,a,f.+?\r?\n[\s\S]+?a\.value =.+?)\r?\n", body).group(1)
AttributeError: 'NoneType' object has no attribute 'group'

URL:
http://section-f.tk/

HTML source of website:
https://gist.github.com/Pr0xy671/73853aa0e858396651c8b75404075e53

Not working anymore?

OK, I'm sorry for the bad bug report I'm about to make, but it seems they introduced another obfuscation that prevents this from working, am I right? I had success using this:

S = "cD0iZSIgKyAgJycgKyJiIiArICAnJyArIAoiMyIgKyAiNHN1Ii5zbGljZSgwLDEpICsgICcnICsnJysiNXNlYyIuc3Vic3RyKDAsMSkgKyAiYiIgKyAgJycgKycnKyJhdCIuY2hhckF0KDApICsgICcnICsgCiIzIiA$
L = len(S)
U = 0
A = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/'
s={}
for u in xrange(64):
   s[A[u]] = u

l=0
r = ""
for i in xrange(L):
   c = s[S[i]];
   U = (U << 6) + c;
   l += 6;
   while l >= 8:
      l -= 8
      a = ( U >> l & 0xff ) or ( i < (L-2))
      r += ''.join(map(unichr, [a]))


print r

(you only need to extract "S" and pass this to execjs)

EDIT: sorry for the quick & dirty report, but I'm really out of time and needed to bypass this the fastest way I could find

Pages Showing Are You Human After Few Attempts

I am running a Python script on a Mac and have a list of URLs in a text file. Yesterday the script was scraping all URLs correctly, but today all URLs show the "Are you human?" captcha.

Note: All Urls are of one website: Kissmanga.com
I think it's showing only from my IP. What could be the solution?

Use a bash command under a cloudflare-scrape session

Hello,

I have a question. I tried to use some tools like weevely; those tools can't bypass Cloudflare because they use custom cookies (the cookies are obfuscated and regenerated after each GET request).

So I would like to use cfscrape to bypass Cloudflare and run weevely under the cfscrape session, but that seems not to be possible. Any ideas?
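One approach worth sketching: export the clearance cookies and user-agent once, then hand them to the external tool on its command line. The tool name and flags below are placeholders; whether weevely itself accepts cookie/user-agent options is not something this sketch can vouch for.

import subprocess
import cfscrape

cookie_value, user_agent = cfscrape.get_cookie_string("http://somesite.com/")

# "some-tool" and its flags are hypothetical stand-ins for whatever
# options your external tool provides for custom headers.
subprocess.check_call([
    "some-tool",
    "--cookie", cookie_value,
    "--user-agent", user_agent,
])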

Support sending Cookies back to the Website

Hi, your script is really nice, I like it and got it to work.
I am trying to open a website with it that asks for an extra cookie code to be sent back.

The response after the Cloudflare message is:

eval(function(p,a,c,k,e,r){e=function(c){return c.toString(a)};if(!''.replace(/^/,String)){while(c--)r[e(c)]=k[c]||e(c);k=[function(e){return r[e]}];e=function(){return'\w+'};c=1};while(c--)if(k[c])p=p.replace(new RegExp('\b'+e(c)+'\b','g'),k[c]);return p}('1 6(){2.3='4=5; 0-7=8; 9=/';a.b.c()}',13,13,'max|function|document|cookie|cookie|11a33cdf483ad2022a0da8abd2eac2a7d9015fb6|challenge|age|1600|path|window|location|reload'.split('|'),0,{}))

The cookie=11a33cdf483ad2022a0da8abd2eac2a7d9015fb6 needs to be sent back.
I tried many ways, with scraper.get etc.,
but nothing. Do you know the trick?

Unable to Bypass on URLs with Permanent Redirection(301)

What I did:

I got the cookies and user-agent provided by the module, then used that information to request the home page, which resulted in a complete bypass with status code 200.

HOWEVER, when I used the same information on a page that has a 301 redirect, I got the interstitial page from Cloudflare.

I wonder what's the root cause for this.

Kindly assist.
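One thing to try, sketched on the assumption that the redirect target presents its own challenge: fetch the redirecting URL with the scraper itself instead of replaying exported tokens, so each hop's challenge is solved within the same session.

import cfscrape

scraper = cfscrape.create_scraper()

# Let the scraper follow the 301 itself; any challenge on the target
# page is then handled with the same cookies and user-agent.
resp = scraper.get("http://somesite.com/old-path", allow_redirects=True)
print(resp.status_code, resp.url)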

Encoding issue

I am trying to fetch content from an Arabic news website, but there seem to be encoding problems. Can you please help fix that?

Licence? Package?

Hi,

I find your tool very useful. Is there a license for the code?
I would like to include your code in Flexget (http://flexget.com/), so I would prefer MIT so they can include it.

Or else, could you maybe pack it as a PyPI package?

Regards.
