vterron / fear-and-greed
Python wrapper for CNN's Fear & Greed Index.
License: MIT License
While the unit tests are hermetic, we need an integration test to verify that we're still parsing CNN's website correctly.
We can use a GitHub scheduled event to run the test, e.g. hourly.
Doing date arithmetic directly on localized datetimes uses incorrect historical timezone offsets when the result crosses a DST boundary.
Instead, use tz.normalize().
See:
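A minimal sketch of the pytz pitfall being described, assuming the library in use is pytz (whose localized datetimes keep their original UTC offset through arithmetic):

```python
# Adding a timedelta to a pytz-localized datetime keeps the original
# UTC offset even when the result crosses a DST transition; normalize()
# corrects the offset afterwards.
import datetime
import pytz

tz = pytz.timezone("America/New_York")
# 2022-03-12 12:00 is EST (UTC-5); DST starts 2022-03-13 at 02:00.
before = tz.localize(datetime.datetime(2022, 3, 12, 12, 0))
after = before + datetime.timedelta(days=1)
# `after` is still labelled EST (UTC-5), which is now wrong.
fixed = tz.normalize(after)  # relabelled EDT (UTC-4)
```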
CNN recently updated their site, moving the index from https://money.cnn.com/data/fear-and-greed
to https://www.cnn.com/markets/fear-and-greed.
The value fetched from the old page no longer matches the new page.
Instead of always fetching the current index value, add an option to get()
to allow users to specify the date they're interested in.
This is almost trivial to implement after the changes we did in e7d9eb5.
For example, the data for 2022-04-25 is https://production.dataviz.cnn.io/index/fearandgreed/graphdata/2022-04-25.
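A hedged sketch of what such a date-aware get() could look like, using the graphdata URL pattern from the 2022-04-25 example above; the exact signature, the browser-like User-Agent header, and the response shape are assumptions:

```python
import datetime
from typing import Optional

import requests

BASE_URL = "https://production.dataviz.cnn.io/index/fearandgreed/graphdata"

def get(date: Optional[datetime.date] = None) -> dict:
    # With no date, fetch the current data; otherwise append the ISO
    # date to the URL, as in the 2022-04-25 example.
    url = BASE_URL if date is None else "{}/{}".format(BASE_URL, date.isoformat())
    r = requests.get(url, headers={"User-Agent": "Mozilla/5.0"})
    r.raise_for_status()
    return r.json()["fear_and_greed"]
```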
Cool project! Is there a way to grab a time series of values?
Current Output
ValueError: couldn't parse https://money.cnn.com/data/fear-and-greed/
Changes to Website
https://edition.cnn.com/markets/fear-and-greed
Is it possible to include historical data within the repo?
If not, where can I download the time series?
See @TheSnoozer's suggestion in e7d9eb5.
Perhaps also related to the idea of #17 and the truly minimal fix in #16 to get everything working again.
A real browser request includes more than just a user agent:
E.g.
import requests
headers = {
'authority': 'production.dataviz.cnn.io',
'sec-ch-ua': '"Chromium";v="91", " Not;A Brand";v="99"',
'sec-ch-ua-mobile': '?0',
'user-agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.114 Safari/537.36',
'accept': '*/*',
'origin': 'https://edition.cnn.com',
'sec-fetch-site': 'cross-site',
'sec-fetch-mode': 'cors',
'sec-fetch-dest': 'empty',
'referer': 'https://edition.cnn.com/',
'accept-language': 'en-US,en;q=0.9,de;q=0.8',
'if-none-match': 'W/2820292103763769309',
}
response = requests.get('https://production.dataviz.cnn.io/index/fearandgreed/graphdata', headers=headers)
So perhaps when you think about rotating user agents it would also be worth ensuring that the request is as close as possible to a normal browser request (by supplying all of these headers).
Perhaps on a broader scale this will at some point also be switched to using cookies, but I guess we are not there yet :-)
For now I think this is not mission critical, but I thought it's worth a thought.
The following error has been persisting for the last several months; I suppose the CNN website introduced a new look to their site.
File "/Users/sungmc/Projects/Finance/Python For Finance/venv/lib/python3.8/site-packages/fear_greed_index/CNNFearAndGreedIndex.py", line 117, in _load_fear_and_greed
text_soup_cnn.findAll("div", {"class": "modContent feargreed"})[0]
IndexError: list index out of range
Since #15 we set a user agent to prevent HTTP response code 418 ("I'm a teapot"). For increased robustness, instead of a fixed user agent ("Mozilla") we could rotate these agents, e.g. using a library such as scrapy-user-agents (but this one was last updated 4y ago; ideally we should use one with more up-to-date user agents).
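A minimal sketch of rotating user agents without taking on a stale dependency, using a hand-maintained list; the specific strings below are illustrative examples, not a curated up-to-date set:

```python
import random

import requests

# Hand-maintained pool of user agents to rotate through; entries here
# are illustrative and would need periodic refreshing.
USER_AGENTS = [
    "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/120.0.0.0 Safari/537.36",
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:121.0) Gecko/20100101 Firefox/121.0",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/17.1 Safari/605.1.15",
]

def fetch(url: str) -> requests.Response:
    # Pick a different agent per request instead of a fixed "Mozilla".
    return requests.get(url, headers={"User-Agent": random.choice(USER_AGENTS)})
```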
... while resetting the release version down to v0.1.
I try to get the data in JSON format by doing
`import requests, json
import pandas as pd
BASE_URL = "https://production.dataviz.cnn.io/index/fearandgreed/graphdata"
START_DATE = '2021-01-01'
r = requests.get("{}/{}".format(BASE_URL, START_DATE))
data = r.json()`
It returned
`---------------------------------------------------------------------------
JSONDecodeError Traceback (most recent call last)
/tmp/ipykernel_4281/2898561448.py in
9
10 r = requests.get("{}/{}".format(BASE_URL, START_DATE))
---> 11 data = r.json()
~/anaconda3/envs/yfinance/lib/python3.9/site-packages/requests/models.py in json(self, **kwargs)
899 if encoding is not None:
900 try:
--> 901 return complexjson.loads(
902 self.content.decode(encoding), **kwargs
903 )
~/anaconda3/envs/yfinance/lib/python3.9/json/__init__.py in loads(s, cls, object_hook, parse_float, parse_int, parse_constant, object_pairs_hook, **kw)
344 parse_int is None and parse_float is None and
345 parse_constant is None and object_pairs_hook is None and not kw):
--> 346 return _default_decoder.decode(s)
347 if cls is None:
348 cls = JSONDecoder
~/anaconda3/envs/yfinance/lib/python3.9/json/decoder.py in decode(self, s, _w)
335
336 """
--> 337 obj, end = self.raw_decode(s, idx=_w(s, 0).end())
338 end = _w(s, end).end()
339 if end != len(s):
~/anaconda3/envs/yfinance/lib/python3.9/json/decoder.py in raw_decode(self, s, idx)
353 obj, end = self.scan_once(s, idx)
354 except StopIteration as err:
--> 355 raise JSONDecodeError("Expecting value", s, err.value) from None
356 return obj, end
JSONDecodeError: Expecting value: line 1 column 1 (char 0)`
I would appreciate help fixing this problem.
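A likely cause is that the endpoint rejects the default python-requests user agent and returns a non-JSON error body, which is what makes r.json() fail at line 1 column 1. A hedged sketch of the same request with a browser-like header (the minimal "Mozilla/5.0" string is an assumption):

```python
import requests

BASE_URL = "https://production.dataviz.cnn.io/index/fearandgreed/graphdata"

def fetch_graphdata(start_date: str) -> dict:
    # Without a browser-like User-Agent the endpoint tends to return a
    # non-JSON error body, so r.json() raises JSONDecodeError.
    headers = {"User-Agent": "Mozilla/5.0"}
    r = requests.get("{}/{}".format(BASE_URL, start_date), headers=headers)
    r.raise_for_status()  # surface 4xx/5xx instead of decoding an error page
    return r.json()
```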
On Linux Mint, in a virtual environment, I have installed the package with 'pip install fear-and-greed'. When running the following statements in 'jupyter lab':
import fear_and_greed
fear_and_greed.get()
it returns the error
'---------------------------------------------------------------------------
ModuleNotFoundError Traceback (most recent call last)
/tmp/ipykernel_12922/2736172641.py in
----> 1 import fear_and_greed
2
3 fear_and_greed.get()
ModuleNotFoundError: No module named 'fear_and_greed'
I checked with 'conda list' that the package is indeed installed.
How can I fix this error?
I would appreciate help.
Best regards
Hello!
Two days ago this error appeared: "couldn't parse https://money.cnn.com/data/fear-and-greed/"
I think the site was updated.
Plus, run mypy as part of make test.
File "/Users/xxxxx/lib/python3.8/site-packages/fear_and_greed/cnn.py", line xxx, in get
raise ValueError("couldn't parse {}".format(URL))
ValueError: couldn't parse https://money.cnn.com/data/fear-and-greed/
Initially TestPyPI; will switch to PyPI once everything's ready.
Fork the code we had @ 22bab0a to a different Git repository, but this time install fear-and-greed
via pip and have the AWS Lambda function just call get().
I'm getting an error when I run the most basic "fear_and_greed.get()" command. I've tried uninstalling and reinstalling these packages with no different results:
Environment details:
Python 3.10.11
Ubuntu
Here are the error details from the Apache Airflow logs:
[2023-08-28, 00:00:07 UTC] {base.py:427} ERROR - Unable to deserialize response: pickle data was truncated
[2023-08-28, 00:00:07 UTC] {taskinstance.py:1824} ERROR - Task failed with exception
Traceback (most recent call last):
File "/home/bradl/.local/lib/python3.10/site-packages/airflow/decorators/base.py", line 220, in execute
return_value = super().execute(context)
File "/home/bradl/.local/lib/python3.10/site-packages/airflow/operators/python.py", line 181, in execute
return_value = self.execute_callable()
File "/home/bradl/.local/lib/python3.10/site-packages/airflow/operators/python.py", line 198, in execute_callable
return self.python_callable(*self.op_args, **self.op_kwargs)
File "/c/Users/bradl/Dropbox/Python/airflow_home/dags/dag_scrape_fear_and_greed.py", line 49, in get_fear_and_greed
fear_and_greed = fear_and_greed.get()
File "/home/bradl/.local/lib/python3.10/site-packages/fear_and_greed/cnn.py", line 59, in get
response = fetcher()["fear_and_greed"]
File "/home/bradl/.local/lib/python3.10/site-packages/fear_and_greed/cnn.py", line 48, in __call__
r = requests.get(URL, headers=headers)
File "/home/bradl/.local/lib/python3.10/site-packages/requests/api.py", line 73, in get
return request("get", url, params=params, **kwargs)
File "/home/bradl/.local/lib/python3.10/site-packages/requests/api.py", line 59, in request
return session.request(method=method, url=url, **kwargs)
File "/home/bradl/.local/lib/python3.10/site-packages/requests_cache/session.py", line 158, in request
return super().request(method, url, *args, headers=headers, **kwargs) # type: ignore
File "/home/bradl/.local/lib/python3.10/site-packages/requests/sessions.py", line 589, in request
resp = self.send(prep, **send_kwargs)
File "/home/bradl/.local/lib/python3.10/site-packages/requests_cache/session.py", line 205, in send
response = self._send_and_cache(request, actions, cached_response, **kwargs)
File "/home/bradl/.local/lib/python3.10/site-packages/requests_cache/session.py", line 233, in _send_and_cache
self.cache.save_response(response, actions.cache_key, actions.expires)
File "/home/bradl/.local/lib/python3.10/site-packages/requests_cache/backends/base.py", line 91, in save_response
self.responses[cache_key] = cached_response
File "/home/bradl/.local/lib/python3.10/site-packages/requests_cache/backends/sqlite.py", line 309, in __setitem__
self._write(key, value)
File "/home/bradl/.local/lib/python3.10/site-packages/requests_cache/backends/sqlite.py", line 331, in _write
con.execute(
sqlite3.DatabaseError: database disk image is malformed
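The traceback bottoms out in requests-cache's SQLite backend, so the corruption is in the on-disk HTTP cache rather than in the parser itself. A hedged sketch of a recovery step, assuming you can locate the cache file that fear_and_greed's requests_cache session writes (the path is whatever cnn.py configures, an assumption here):

```python
import pathlib

def clear_requests_cache(cache_path: str) -> None:
    # Delete the corrupted SQLite file so requests-cache rebuilds it on
    # the next request; the caller supplies the actual cache path.
    p = pathlib.Path(cache_path)
    if p.exists():
        p.unlink()
```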
i.e. run make test on push.