
fear-and-greed's People

Contributors

vterron


fear-and-greed's Issues

New version of CNN's website broke the library

CNN recently updated their website, moving the Fear & Greed page from https://money.cnn.com/data/fear-and-greed to https://www.cnn.com/markets/fear-and-greed.
The result fetched from the old page no longer matches the new page.

Time Series

Cool project! Is there a way to grab a time series of values?

Historic Data

Is it possible to include historical data within the repo?
If not, where can I download the time series?
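For reference, a minimal sketch of how a time series could be pulled straight from CNN's graphdata endpoint (the endpoint URL and the start-date path segment are taken from the request snippets in the issues further down; the response keys and the browser-like User-Agent header are assumptions, not a documented API):

import requests

# Hypothetical sketch: fetch the historical Fear & Greed series directly from
# CNN's graphdata endpoint, starting at a given date.
BASE_URL = "https://production.dataviz.cnn.io/index/fearandgreed/graphdata"
START_DATE = "2021-01-01"

# Assumption: a browser-like User-Agent is needed to avoid a 418 response.
headers = {"User-Agent": "Mozilla/5.0"}

r = requests.get(f"{BASE_URL}/{START_DATE}", headers=headers)
r.raise_for_status()
data = r.json()

# Assumption: the payload contains a historical series under a key such as
# "fear_and_greed_historical"; adjust to whatever the endpoint actually returns.
for point in data.get("fear_and_greed_historical", {}).get("data", []):
    print(point)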

Make requests as similar as possible to a true browser request

Perhaps this is also related to the idea in #17 and to the truly minimal fix in #16 that got everything back to work.

A real browser request has more than just a user agent. For example:

import requests

headers = {
    'authority': 'production.dataviz.cnn.io',
    'sec-ch-ua': '"Chromium";v="91", " Not;A Brand";v="99"',
    'sec-ch-ua-mobile': '?0',
    'user-agent': 'Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.114 Safari/537.36',
    'accept': '*/*',
    'origin': 'https://edition.cnn.com',
    'sec-fetch-site': 'cross-site',
    'sec-fetch-mode': 'cors',
    'sec-fetch-dest': 'empty',
    'referer': 'https://edition.cnn.com/',
    'accept-language': 'en-US,en;q=0.9,de;q=0.8',
    'if-none-match': 'W/2820292103763769309',
}

response = requests.get('https://production.dataviz.cnn.io/index/fearandgreed/graphdata', headers=headers)

So when thinking about rotating user agents, it would also be worth ensuring that the request is as close as possible to a normal browser request (by supplying all of these headers).

Perhaps on a broader scale this will at some point also have to switch to using cookies, but I guess we are not there yet :-)

For now I don't think this is mission critical, but it seemed worth a thought.

IndexError: list index out of range from text_soup_cnn.findAll("div", {"class": "modContent feargreed"})[0]

The following error has been persisting for the last several months; I suppose the CNN website introduced a new look.

File "/Users/sungmc/Projects/Finance/Python For Finance/venv/lib/python3.8/site-packages/fear_greed_index/CNNFearAndGreedIndex.py", line 117, in _load_fear_and_greed
text_soup_cnn.findAll("div", {"class": "modContent feargreed"})[0]
IndexError: list index out of range
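A more defensive variant of that lookup would fail with a clearer message when the element is missing (a sketch only, assuming BeautifulSoup is used as in the traceback above; the function name is made up):

from bs4 import BeautifulSoup

def extract_fear_greed_block(html):
    # Guard the lookup instead of indexing [0] directly, so a page redesign
    # produces an explicit error rather than an IndexError.
    soup = BeautifulSoup(html, "html.parser")
    matches = soup.find_all("div", {"class": "modContent feargreed"})
    if not matches:
        raise RuntimeError(
            "Could not find the 'modContent feargreed' element; "
            "CNN's page layout has probably changed."
        )
    return matches[0]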

Rotate user agents

Since #15 we set a user agent to prevent HTTP response code 418 ("I'm a teapot"). For increased robustness, instead of a fixed user agent ("Mozilla") we could rotate agents, e.g. using a library such as scrapy-user-agents (although that one was last updated four years ago; ideally we should use one with more up-to-date user agents).
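A minimal sketch of the idea, keeping a small hand-maintained list instead of a third-party library (the agent strings, the list itself, and the use of random.choice are illustrative assumptions):

import random
import requests

# Illustrative, hand-picked user-agent strings; in practice these would need
# to be refreshed periodically to stay current.
USER_AGENTS = [
    "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.114 Safari/537.36",
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:109.0) Gecko/20100101 Firefox/115.0",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/16.5 Safari/605.1.15",
]

URL = "https://production.dataviz.cnn.io/index/fearandgreed/graphdata"

def fetch_with_rotating_agent():
    # Pick a different user agent on each call to make blocking less likely.
    headers = {"User-Agent": random.choice(USER_AGENTS)}
    r = requests.get(URL, headers=headers)
    r.raise_for_status()
    return r.json()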

Get the error: JSONDecodeError: Expecting value: line 1 column 1 (char 0)

I try to get the data in JSON format with:

import requests, json
import pandas as pd

BASE_URL = "https://production.dataviz.cnn.io/index/fearandgreed/graphdata"
START_DATE = '2021-01-01'

r = requests.get("{}/{}".format(BASE_URL, START_DATE))
data = r.json()

It returned:

---------------------------------------------------------------------------
JSONDecodeError                           Traceback (most recent call last)
/tmp/ipykernel_4281/2898561448.py in <module>
      9
     10 r = requests.get("{}/{}".format(BASE_URL, START_DATE))
---> 11 data = r.json()

~/anaconda3/envs/yfinance/lib/python3.9/site-packages/requests/models.py in json(self, **kwargs)
    899         if encoding is not None:
    900             try:
--> 901                 return complexjson.loads(
    902                     self.content.decode(encoding), **kwargs
    903                 )

~/anaconda3/envs/yfinance/lib/python3.9/json/__init__.py in loads(s, cls, object_hook, parse_float, parse_int, parse_constant, object_pairs_hook, **kw)
    344         parse_int is None and parse_float is None and
    345         parse_constant is None and object_pairs_hook is None and not kw):
--> 346         return _default_decoder.decode(s)
    347     if cls is None:
    348         cls = JSONDecoder

~/anaconda3/envs/yfinance/lib/python3.9/json/decoder.py in decode(self, s, _w)
    335
    336         """
--> 337         obj, end = self.raw_decode(s, idx=_w(s, 0).end())
    338         end = _w(s, end).end()
    339         if end != len(s):

~/anaconda3/envs/yfinance/lib/python3.9/json/decoder.py in raw_decode(self, s, idx)
    353             obj, end = self.scan_once(s, idx)
    354         except StopIteration as err:
--> 355             raise JSONDecodeError("Expecting value", s, err.value) from None
    356         return obj, end

JSONDecodeError: Expecting value: line 1 column 1 (char 0)

I would appreciate help fixing this problem.
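For context, a non-JSON body here would be consistent with the user-agent issues above (the endpoint answers with HTTP 418 when the request does not look like a browser). A minimal diagnostic sketch, assuming a single browser-like User-Agent header is enough:

import requests

BASE_URL = "https://production.dataviz.cnn.io/index/fearandgreed/graphdata"
START_DATE = "2021-01-01"

# Inspect what actually came back before calling r.json().
r = requests.get(f"{BASE_URL}/{START_DATE}")
print(r.status_code)   # e.g. 418 when the request does not look like a browser
print(r.text[:200])    # an empty or HTML body explains the JSONDecodeError

# Retry with a browser-like User-Agent (assumption: this is sufficient here).
r = requests.get(f"{BASE_URL}/{START_DATE}", headers={"User-Agent": "Mozilla/5.0"})
if r.ok:
    data = r.json()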

ModuleNotFoundError: No module named 'fear_and_greed'

On Linux Mint, in a virtual environment, I installed the package with 'pip install fear-and-greed'. When running the following statements in JupyterLab,
import fear_and_greed
fear_and_greed.get()
it returns the error

---------------------------------------------------------------------------
ModuleNotFoundError                       Traceback (most recent call last)
/tmp/ipykernel_12922/2736172641.py in <module>
----> 1 import fear_and_greed
      2
      3 fear_and_greed.get()

ModuleNotFoundError: No module named 'fear_and_greed'

I checked with 'conda list' that the package is indeed installed.

How can I fix this error?

I would appreciate any help.
Best regards
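A common cause of this in Jupyter is that the notebook kernel runs a different Python interpreter than the one pip installed the package into. A minimal check, run inside the notebook (a general Jupyter technique, not specific to this package):

import sys

# Show which interpreter the Jupyter kernel is actually using; compare it with
# the virtual environment where `pip install fear-and-greed` was run.
print(sys.executable)

# If the two differ, installing into the kernel's interpreter usually helps:
#   !{sys.executable} -m pip install fear-and-greed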

Unable to deserialize response: pickle data was truncated

I'm getting an error when I run the most basic "fear_and_greed.get()" command. I've tried uninstalling and reinstalling the following packages, with no change in the result:

  • fear_and_greed
  • requests
  • requests_cache

Environment details:
Python 3.10.11
Ubuntu

Here are the error details from the Apache Airflow logs:

[2023-08-28, 00:00:07 UTC] {base.py:427} ERROR - Unable to deserialize response: pickle data was truncated
[2023-08-28, 00:00:07 UTC] {taskinstance.py:1824} ERROR - Task failed with exception
Traceback (most recent call last):
  File "/home/bradl/.local/lib/python3.10/site-packages/airflow/decorators/base.py", line 220, in execute
    return_value = super().execute(context)
  File "/home/bradl/.local/lib/python3.10/site-packages/airflow/operators/python.py", line 181, in execute
    return_value = self.execute_callable()
  File "/home/bradl/.local/lib/python3.10/site-packages/airflow/operators/python.py", line 198, in execute_callable
    return self.python_callable(*self.op_args, **self.op_kwargs)
  File "/c/Users/bradl/Dropbox/Python/airflow_home/dags/dag_scrape_fear_and_greed.py", line 49, in get_fear_and_greed
    fear_and_greed = fear_and_greed.get()
  File "/home/bradl/.local/lib/python3.10/site-packages/fear_and_greed/cnn.py", line 59, in get
    response = fetcher()["fear_and_greed"]
  File "/home/bradl/.local/lib/python3.10/site-packages/fear_and_greed/cnn.py", line 48, in __call__
    r = requests.get(URL, headers=headers)
  File "/home/bradl/.local/lib/python3.10/site-packages/requests/api.py", line 73, in get
    return request("get", url, params=params, **kwargs)
  File "/home/bradl/.local/lib/python3.10/site-packages/requests/api.py", line 59, in request
    return session.request(method=method, url=url, **kwargs)
  File "/home/bradl/.local/lib/python3.10/site-packages/requests_cache/session.py", line 158, in request
    return super().request(method, url, *args, headers=headers, **kwargs)  # type: ignore
  File "/home/bradl/.local/lib/python3.10/site-packages/requests/sessions.py", line 589, in request
    resp = self.send(prep, **send_kwargs)
  File "/home/bradl/.local/lib/python3.10/site-packages/requests_cache/session.py", line 205, in send
    response = self._send_and_cache(request, actions, cached_response, **kwargs)
  File "/home/bradl/.local/lib/python3.10/site-packages/requests_cache/session.py", line 233, in _send_and_cache
    self.cache.save_response(response, actions.cache_key, actions.expires)
  File "/home/bradl/.local/lib/python3.10/site-packages/requests_cache/backends/base.py", line 91, in save_response
    self.responses[cache_key] = cached_response
  File "/home/bradl/.local/lib/python3.10/site-packages/requests_cache/backends/sqlite.py", line 309, in __setitem__
    self._write(key, value)
  File "/home/bradl/.local/lib/python3.10/site-packages/requests_cache/backends/sqlite.py", line 331, in _write
    con.execute(
sqlite3.DatabaseError: database disk image is malformed
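Both messages ("pickle data was truncated" and "database disk image is malformed") point at a corrupted requests_cache SQLite file rather than at fear_and_greed itself, so a common workaround is to delete the cache database and retry. A minimal sketch, assuming the cache file's location is known (the path below is purely hypothetical):

import os

import fear_and_greed

# Hypothetical path to the requests_cache SQLite file backing the cached
# session; the real location depends on how and where the cache was configured.
CACHE_FILE = "/path/to/fear_and_greed_cache.sqlite"

# Remove the corrupted cache so the next request is fetched and re-cached fresh.
if os.path.exists(CACHE_FILE):
    os.remove(CACHE_FILE)

print(fear_and_greed.get())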
