
Comments (11)

bkleyn commented on May 13, 2024

Could you please try setting click_column="score" when defining the evaluation metrics as per below?

# Column names for the response, user, and item id columns
metric_params = {'click_column': 'score', 'user_id_column': 'ID', 'item_id_column':'MailerID'}


ayush488 commented on May 13, 2024

from jurity.recommenders import BinaryRecoMetrics, RankingRecoMetrics
from mab2rec.pipeline import benchmark

# Column names for the score, user, and item id columns
metric_params = {'click_column': 'score', 'user_id_column': 'ID', 'item_id_column': 'MailerID'}

# Evaluate performance at different k-recommendations
top_k_list = [4]

# List of metrics to benchmark
metrics = []
for k in top_k_list:
    metrics.append(BinaryRecoMetrics.AUC(**metric_params, k=k))
    metrics.append(BinaryRecoMetrics.CTR(**metric_params, k=k))
    metrics.append(RankingRecoMetrics.Precision(**metric_params, k=k))
    metrics.append(RankingRecoMetrics.Recall(**metric_params, k=k))
    metrics.append(RankingRecoMetrics.NDCG(**metric_params, k=k))
    metrics.append(RankingRecoMetrics.MAP(**metric_params, k=k))

# Benchmark the set of recommenders for the list of metrics,
# using training data and user features scored on test data
reco_to_results, reco_to_metrics = benchmark(recommenders,
                                             metrics=metrics,
                                             train_data=X_train,
                                             test_data=X_test,
                                             user_features=df_users_X,
                                             user_id_col='ID',
                                             item_id_col='MailerID',
                                             response_col='sales_net')
Traceback (most recent call last):

File "C:\Users\ayush\AppData\Local\Temp\2\ipykernel_5524\2509256242.py", line 27, in
response_col = 'sales_net')

File "C:\ProgramData\Anaconda3\envs\test_env\lib\site-packages\mab2rec\pipeline.py", line 417, in benchmark
return _bench(**args)

File "C:\ProgramData\Anaconda3\envs\test_env\lib\site-packages\mab2rec\pipeline.py", line 531, in _bench
recommendations[name])

File "C:\ProgramData\Anaconda3\envs\test_env\lib\site-packages\jurity\recommenders\combined.py", line 121, in get_score
return_extended_results)

File "C:\ProgramData\Anaconda3\envs\test_env\lib\site-packages\jurity\recommenders\auc.py", line 140, in get_score
return self._accumulate_and_return(results, batch_accumulate, return_extended_results)

File "C:\ProgramData\Anaconda3\envs\test_env\lib\site-packages\jurity\recommenders\base.py", line 121, in _accumulate_and_return
cur_result = self._get_results([results])

File "C:\ProgramData\Anaconda3\envs\test_env\lib\site-packages\jurity\recommenders\auc.py", line 146, in _get_results
return roc_auc_score(results[:, 0], results[:, 1])

File "C:\ProgramData\Anaconda3\envs\test_env\lib\site-packages\sklearn\metrics_ranking.py", line 560, in roc_auc_score
raise ValueError("multi_class must be in ('ovo', 'ovr')")

ValueError: multi_class must be in ('ovo', 'ovr')


ayush488 commented on May 13, 2024

Getting this error now?


ayush488 commented on May 13, 2024

Seems like the evaluation can only be done with click 0 or 1?


ayush488 commented on May 13, 2024

Could you please try setting click_column="score" when defining the evaluation metrics as per below?

# Column names for the response, user, and item id columns
metric_params = {'click_column': 'score', 'user_id_column': 'ID', 'item_id_column':'MailerID'}

Now it is giving a different error. Could you please help?


bkleyn commented on May 13, 2024

Yes, that's correct. All the recommendation metrics above are only well defined for binary (0 or 1) responses.


ayush488 commented on May 13, 2024

So there is no way to evaluate non-binary responses?


bkleyn commented on May 13, 2024

Unfortunately, we don't have non-binary metrics implemented in the Jurity library. Common alternatives such as Mean Absolute Error (MAE) and Mean Squared Error (MSE) are straightforward to compute yourself.
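For reference, MAE and MSE only take a few lines with NumPy. The arrays below are illustrative placeholders, not output from mab2rec:

```python
import numpy as np

# Placeholder values: actual continuous responses vs. a
# recommender's predicted scores (not real mab2rec output)
actual = np.array([3.0, 0.0, 5.5, 2.0])
predicted = np.array([2.5, 0.5, 5.0, 2.5])

mae = np.mean(np.abs(actual - predicted))   # Mean Absolute Error
mse = np.mean((actual - predicted) ** 2)    # Mean Squared Error
print(mae, mse)  # 0.5 0.25
```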

A frequently used alternative strategy is to binarize your response: set the response to 1 if the continuous value is above some threshold and 0 otherwise.
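A minimal sketch of that thresholding approach; the column names and the threshold of 50 are assumptions for illustration, not fixed values:

```python
import pandas as pd

# Hypothetical interaction data with a continuous response column
data = pd.DataFrame({'ID': [1, 1, 2],
                     'MailerID': [10, 11, 10],
                     'sales_net': [120.0, 0.0, 35.5]})

# Binarize: 1 if the continuous response exceeds the threshold, else 0
threshold = 50.0
data['response'] = (data['sales_net'] > threshold).astype(int)
print(data['response'].tolist())  # [1, 0, 0]
```

The binarized column can then be passed as `response_col` to `benchmark`.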


ayush488 commented on May 13, 2024


bkleyn commented on May 13, 2024

Yes, that's correct.


ayush488 commented on May 13, 2024
 File "C:\Users\ayush\AppData\Local\Temp\2\ipykernel_5524\2998262892.py", line 13, in <module>
    response_col = 'response')

  File "C:\ProgramData\Anaconda3\envs\test_env\lib\site-packages\mab2rec\pipeline.py", line 417, in benchmark
    return _bench(**args)

  File "C:\ProgramData\Anaconda3\envs\test_env\lib\site-packages\mab2rec\pipeline.py", line 526, in _bench
    save_file=None)

  File "C:\ProgramData\Anaconda3\envs\test_env\lib\site-packages\mab2rec\pipeline.py", line 286, in score
    recs_of_batch, scores_of_batch = recommender.recommend(contexts, excluded_arms_batch, return_scores=True)

  File "C:\ProgramData\Anaconda3\envs\test_env\lib\site-packages\mab2rec\rec.py", line 323, in recommend
    expectations = self.mab.predict_expectations(contexts)

  File "C:\ProgramData\Anaconda3\envs\test_env\lib\site-packages\mabwiser\mab.py", line 1229, in predict_expectations
    return self._imp.predict_expectations(contexts)

  File "C:\ProgramData\Anaconda3\envs\test_env\lib\site-packages\mabwiser\linear.py", line 151, in predict_expectations
    return self._parallel_predict(contexts, is_predict=False)

  File "C:\ProgramData\Anaconda3\envs\test_env\lib\site-packages\mabwiser\base_mab.py", line 227, in _parallel_predict
    for i in range(n_jobs))

  File "C:\ProgramData\Anaconda3\envs\test_env\lib\site-packages\joblib\parallel.py", line 1043, in __call__
    if self.dispatch_one_batch(iterator):

  File "C:\ProgramData\Anaconda3\envs\test_env\lib\site-packages\joblib\parallel.py", line 861, in dispatch_one_batch
    self._dispatch(tasks)

  File "C:\ProgramData\Anaconda3\envs\test_env\lib\site-packages\joblib\parallel.py", line 779, in _dispatch
    job = self._backend.apply_async(batch, callback=cb)

  File "C:\ProgramData\Anaconda3\envs\test_env\lib\site-packages\joblib\_parallel_backends.py", line 208, in apply_async
    result = ImmediateResult(func)

  File "C:\ProgramData\Anaconda3\envs\test_env\lib\site-packages\joblib\_parallel_backends.py", line 572, in __init__
    self.results = batch()

  File "C:\ProgramData\Anaconda3\envs\test_env\lib\site-packages\joblib\parallel.py", line 263, in __call__
    for func, args, kwargs in self.items]

  File "C:\ProgramData\Anaconda3\envs\test_env\lib\site-packages\joblib\parallel.py", line 263, in <listcomp>
    for func, args, kwargs in self.items]

  File "C:\ProgramData\Anaconda3\envs\test_env\lib\site-packages\mabwiser\linear.py", line 212, in _predict_contexts
    arm_to_expectation[arm] = arm_to_model[arm].predict(row)

  File "C:\ProgramData\Anaconda3\envs\test_env\lib\site-packages\mabwiser\linear.py", line 93, in predict
    beta_sampled = self.rng.multivariate_normal(self.beta, np.square(self.alpha) * self.A_inv)

  File "C:\ProgramData\Anaconda3\envs\test_env\lib\site-packages\mabwiser\utils.py", line 258, in multivariate_normal
    return np.squeeze(self.rng.multivariate_normal(mean, covariance, size=size, method='cholesky'))

  File "_generator.pyx", line 3625, in numpy.random._generator.Generator.multivariate_normal

  File "<__array_function__ internals>", line 6, in cholesky

  File "C:\ProgramData\Anaconda3\envs\test_env\lib\site-packages\numpy\linalg\linalg.py", line 763, in cholesky
    r = gufunc(a, signature=signature, extobj=extobj)

  File "C:\ProgramData\Anaconda3\envs\test_env\lib\site-packages\numpy\linalg\linalg.py", line 91, in _raise_linalgerror_nonposdef
    raise LinAlgError("Matrix is not positive definite")

LinAlgError: Matrix is not positive definite

Still getting an error after binarizing...

