ibm / elasticsearch-spark-recommender

Use Jupyter Notebooks to demonstrate how to build a Recommender with Apache Spark & Elasticsearch

Home Page: https://developer.ibm.com/code/patterns/build-a-recommender-with-apache-spark-and-elasticsearch/

License: Apache License 2.0

Languages: Jupyter Notebook 100.00%

Topics: spark elasticsearch jupyter python ibmcode

elasticsearch-spark-recommender's Introduction


Building a Recommender with Apache Spark & Elasticsearch

Recommendation engines are among the best-known, most widely used, and highest-value applications of machine learning. Yet while there are many resources covering the basics of training a recommendation model, relatively few explain how to actually deploy such models to create a large-scale recommender system.

This Code Pattern demonstrates the key elements of creating such a system, using Apache Spark and Elasticsearch.

This repo contains a Jupyter notebook illustrating how to use Spark to train a collaborative filtering recommendation model from ratings data stored in Elasticsearch, save the model factors to Elasticsearch, and then use Elasticsearch to serve real-time recommendations based on the model. The data comes from MovieLens and is a common benchmark dataset in the recommendations community. It consists of ratings given by users of the MovieLens movie rating system to various movies, together with metadata (title and genres) for each movie.

When you have completed this Code Pattern, you will understand how to:

  • Ingest and index user event data into Elasticsearch using the Elasticsearch Spark connector
  • Load event data into Spark DataFrames and use Spark's machine learning library (MLlib) to train a collaborative filtering recommender model
  • Export the trained model into Elasticsearch
  • Compute similar-item and personalized user recommendations using a script score query in Elasticsearch, and combine recommendations with search and content filtering (a query sketch follows this list)
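
For illustration, here is a minimal sketch of the kind of script score query involved, using the Elasticsearch Python client. The index name (movies), vector field name (model_factor), and query vector are assumptions for the example, not necessarily the exact names used in the notebook:

# Hypothetical sketch of an Elasticsearch 7.x script score query for
# similar-item recommendations; index and field names are assumptions.
from elasticsearch import Elasticsearch

es = Elasticsearch()
query_vec = [0.1] * 20  # stand-in for a factor vector from the trained model

body = {
    "query": {
        "script_score": {
            # optional content filter: restrict candidates with a regular query
            "query": {"query_string": {"query": "genres:children"}},
            "script": {
                # cosine similarity against each document's factor vector;
                # +1.0 keeps the score non-negative, as script_score requires
                "source": "cosineSimilarity(params.query_vector, 'model_factor') + 1.0",
                "params": {"query_vector": query_vec},
            },
        }
    }
}
results = es.search(index="movies", body=body)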

Architecture diagram

Flow

  1. Load the movie dataset into Spark.
  2. Use Spark DataFrame operations to clean up the dataset and load it into Elasticsearch.
  3. Using Spark MLlib, train a collaborative filtering recommendation model from the ratings data in Elasticsearch (see the training sketch after this list).
  4. Save the resulting model into Elasticsearch.
  5. Using Elasticsearch queries, generate some example recommendations. The Movie Database API is used to display movie poster images for the recommended movies.
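
As a rough sketch of step 3, training with Spark MLlib's ALS looks like the following. The column names match the MovieLens ratings schema; the hyperparameter values are illustrative, not the notebook's exact settings:

# Illustrative ALS training sketch; hyperparameter values are assumptions.
from pyspark.ml.recommendation import ALS

als = ALS(
    userCol="userId",     # MovieLens user identifier
    itemCol="movieId",    # MovieLens movie identifier
    ratingCol="rating",   # explicit rating value
    rank=20,              # dimensionality of the learned factor vectors
    regParam=0.1,
)
model = als.fit(ratings_df)  # ratings_df: a DataFrame of ratings
# model.userFactors and model.itemFactors hold the vectors saved to Elasticsearch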

Included components

  • Apache Spark: An open-source, fast and general-purpose cluster computing system
  • Elasticsearch: Open-source search and analytics engine
  • Jupyter Notebooks: An open-source web application that allows you to create and share documents that contain live code, equations, visualizations and explanatory text.

Featured technologies

  • Data Science: Systems and scientific methods to analyze structured and unstructured data in order to extract knowledge and insights.
  • Artificial Intelligence: Artificial intelligence can be applied to disparate solution spaces to deliver disruptive technologies.
  • Python: Python is a programming language that lets you work more quickly and integrate your systems more effectively.

Watch the Video

Steps

Follow these steps to create the required services and run the notebook locally.

  1. Clone the repo
  2. Set up Elasticsearch
  3. Download the Elasticsearch Spark connector
  4. Download Apache Spark
  5. Download the data
  6. Launch the notebook
  7. Run the notebook

1. Clone the repo

Clone the elasticsearch-spark-recommender repository locally. In a terminal, run the following command:

$ git clone https://github.com/IBM/elasticsearch-spark-recommender

2. Set up Elasticsearch

This Code Pattern currently depends on Elasticsearch 7.6.x. Go to the downloads page and download the appropriate package for your system. If you do not see a valid release version there, go to the previous release page.

In this Code Pattern readme we will base instructions on Elasticsearch 7.6.2.

For example, on Mac you can download the TAR archive and unzip it using the commands:

$ wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.6.2-darwin-x86_64.tar.gz
$ tar xfz elasticsearch-7.6.2-darwin-x86_64.tar.gz

Change directory to the newly unzipped folder using:

$ cd elasticsearch-7.6.2

Next, start Elasticsearch (do this in a separate terminal window in order to keep it up and running):

$ ./bin/elasticsearch

Note: the first time you try to run this command, you may see an error like the following:

ElasticsearchException[Failure running machine learning native code. This could be due to running on an unsupported OS or distribution, missing OS libraries, or a problem with the temp directory. To bypass this problem by running Elasticsearch without machine learning functionality set [xpack.ml.enabled: false].]

In this case, re-running the command should successfully start up Elasticsearch.

Finally, you will need to install the Elasticsearch Python client. You can do this by running the following command (do this in a separate terminal window from the one running Elasticsearch):

$ pip install elasticsearch
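
To confirm that the client can reach your running Elasticsearch instance, you can run a quick check like the following (this mirrors what the notebook itself does):

from elasticsearch import Elasticsearch

es = Elasticsearch()  # connects to localhost:9200 by default
print(es.info())      # prints cluster name and version info if ES is up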

3. Download the Elasticsearch Spark connector

The Elasticsearch Hadoop project provides connectors between Elasticsearch and various Hadoop-compatible systems, including Spark. The project provides a ZIP file to download that contains all these connectors. You will need to run your PySpark notebook with the Spark-specific connector JAR file on the classpath. Follow these steps to set up the connector (a usage sketch follows the steps):

  1. Download the elasticsearch-hadoop-7.6.2.zip file, which contains all the connectors. You can do this by running:
$ wget https://artifacts.elastic.co/downloads/elasticsearch-hadoop/elasticsearch-hadoop-7.6.2.zip
  2. Unzip the file by running:
$ unzip elasticsearch-hadoop-7.6.2.zip
  3. The JAR for the Spark connector is called elasticsearch-spark-20_2.11-7.6.2.jar and is located in the dist subfolder of the directory in which you unzipped the file above.
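
Once the JAR is on the Spark classpath, DataFrames can be written to and read from Elasticsearch through the "es" data source. A minimal sketch, assuming a SparkSession named spark and a ratings DataFrame (the es.nodes and es.port options shown are the connector defaults, included only for illustration):

# write a DataFrame to the "ratings" index
ratings.write.format("es") \
    .option("es.nodes", "localhost") \
    .option("es.port", "9200") \
    .save("ratings")

# read the index back into a DataFrame
ratings_from_es = spark.read.format("es").load("ratings")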

4. Download Apache Spark

This Code Pattern should work with any Spark 2.x version; however, this readme uses version 2.4.5.

Download Spark from the downloads page. Once you have downloaded the file, unzip it by running:

$ tar xfz spark-2.4.5-bin-hadoop2.7.tgz

Note: if you download a different version, adjust the relevant commands used above and elsewhere in this Code Pattern accordingly.

You will also need to install NumPy in order to use Spark's machine learning library, MLlib. If you don't have NumPy installed, run:

$ pip install numpy

5. Download the data

You will be using the MovieLens dataset of ratings given by a set of users to movies, together with movie metadata. There are a few versions of the dataset. You should download the "latest small" version.

Run the following commands from the base directory of the cloned Code Pattern repository:

$ cd data
$ wget http://files.grouplens.org/datasets/movielens/ml-latest-small.zip
$ unzip ml-latest-small.zip
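
The unzipped ml-latest-small folder contains CSV files with headers (ratings.csv, movies.csv, links.csv, tags.csv). As a minimal sketch, assuming a SparkSession named spark, loading them into Spark DataFrames looks like this:

# paths are relative to wherever the notebook is launched
ratings = spark.read.csv("data/ml-latest-small/ratings.csv", header=True, inferSchema=True)
movies = spark.read.csv("data/ml-latest-small/movies.csv", header=True, inferSchema=True)
ratings.show(5)  # quick sanity check of the loaded data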

6. Launch the notebook

The notebook should work with Python 2.7+ or 3.x (but has only been tested with Python 3.6).

To run the notebook you will need to launch a PySpark session within a Jupyter notebook. If you don't have Jupyter installed, you can install it by running the command:

$ pip install notebook

Remember to include the Elasticsearch Spark connector JAR from step 3 on the Spark classpath when launching your notebook.

Run the following command to launch your PySpark notebook server locally. For this command to work correctly, you will need to launch the notebook from the base directory of the Code Pattern repository that you cloned in step 1. If you are not in that directory, first cd into it.

PYSPARK_DRIVER_PYTHON="jupyter" PYSPARK_DRIVER_PYTHON_OPTS="notebook" ../spark-2.4.5-bin-hadoop2.7/bin/pyspark --driver-memory 4g --driver-class-path ../../elasticsearch-hadoop-7.6.2/dist/elasticsearch-spark-20_2.11-7.6.2.jar

This should open a browser window with the Code Pattern folder contents displayed. Click on the notebooks subfolder and then click on the elasticsearch-spark-recommender.ipynb file to launch the notebook.

Launch notebook

Optional:

In order to display the images in the recommendation demo, you will need to access The Movie Database API. Please follow the instructions to get an API key. You will also need to install the Python client using the command:

$ pip install tmdbsimple

The demo will still work without this API access, but no images will be displayed (so it won't look as good!).
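
For reference, typical tmdbsimple usage looks like the following (the movie ID 603 is just an example):

import tmdbsimple as tmdb

tmdb.API_KEY = 'YOUR_API_KEY'  # the key from your TMDb account

movie = tmdb.Movies(603)       # look up a movie by its TMDb ID
response = movie.info()        # fetches metadata and populates attributes
print(movie.title, movie.poster_path)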

7. Run the notebook

When a notebook is executed, each code cell in the notebook is run, in order, from top to bottom.

Each code cell is selectable and is preceded by a tag in the left margin. The tag format is In [x]:. Depending on the state of the notebook, the x can be:

  • A blank: the cell has never been executed.
  • A number: the relative order in which this code step was executed.
  • A *: the cell is currently executing.

There are several ways to execute the code cells in your notebook:

  • One cell at a time.
    • Select the cell, and then press the Play button in the toolbar. You can also hit Shift+Enter to execute the cell and move to the next cell.
  • Batch mode, in sequential order.
    • From the Cell menu bar, there are several options available. For example, you can Run All cells in your notebook, or you can Run All Below, which will start executing from the first cell under the currently selected cell and then continue executing all cells that follow.

Sample output

The example output in the data/examples folder shows the output of the notebook after running it in full. View it here.

Note: To see the code and markdown cells without output, you can view the raw notebook.

Troubleshooting

  • Error: java.lang.ClassNotFoundException: Failed to find data source: es.

If you see this error when trying to write data from Spark to Elasticsearch in the notebook, it means that the Elasticsearch Spark connector (elasticsearch-spark-20_2.11-7.6.2.jar) was not found on the class path by Spark when launching the notebook.

Solution: First try the launch command from step 6, ensuring you run it from the base directory of the Code Pattern repo.

If that does not work, try to use the fully-qualified path to the JAR file when launching the notebook, e.g.:

PYSPARK_DRIVER_PYTHON="jupyter" PYSPARK_DRIVER_PYTHON_OPTS="notebook" ../spark-2.4.5-bin-hadoop2.7/bin/pyspark --driver-memory 4g --driver-class-path /FULL_PATH/elasticsearch-hadoop-7.6.2/dist/elasticsearch-spark-20_2.11-7.6.2.jar

where FULL_PATH is the fully-qualified (not relative) path to the directory in which you unzipped the elasticsearch-hadoop ZIP file.

  • Error: org.elasticsearch.hadoop.EsHadoopIllegalStateException: SaveMode is set to ErrorIfExists and index ratings exists and contains data. Consider changing the SaveMode

If you see this error when trying to write data from Spark to Elasticsearch in the notebook, it means that you have already written data to the relevant index (for example the ratings data into the ratings index).

Solution: Try to continue working through the notebook from the next cell down. Alternatively, you can first delete all your indices and re-run the Elasticsearch command to create the index mappings (see the section Step 2: Load data into Elasticsearch in the notebook).
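
A minimal sketch of deleting the indices with the Elasticsearch Python client, assuming the index names used in the notebook (ratings, users, movies):

from elasticsearch import Elasticsearch

es = Elasticsearch()
for index in ["ratings", "users", "movies"]:
    if es.indices.exists(index=index):  # skip indices that were never created
        es.indices.delete(index=index)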

  • Error: ConnectionRefusedError: [Errno 61] Connection refused

You may see this error when trying to connect to Elasticsearch in the notebook. This likely means your Elasticsearch instance is not running.

Solution: In a new terminal window, cd to the directory in which Elasticsearch is installed and run ./bin/elasticsearch to start up Elasticsearch.

  • Error: Py4JJavaError: An error occurred while calling o130.save. : org.elasticsearch.hadoop.rest.EsHadoopNoNodesLeftException: Connection error (check network and/or proxy settings)- all nodes failed; tried [[127.0.0.1:9200]]

You may see this error when trying to read data from Elasticsearch into Spark (or writing data from Spark to Elasticsearch) in the notebook. This likely means your Elasticsearch instance is not running.

Solution: In a new terminal window, cd to the directory in which Elasticsearch is installed and run ./bin/elasticsearch to start up Elasticsearch.

  • Error: ImportError: No module named elasticsearch

If you encounter this error, it either means the Elasticsearch Python client is not installed, or cannot be found on the PYTHONPATH.

Solution: First try to install the client using $ pip install elasticsearch (if running in a Python virtual environment e.g. Conda or Virtualenv) or $ sudo pip install elasticsearch otherwise. If that doesn't work, add your site-packages folder to your Python path (e.g. on Mac: export PYTHONPATH=/Library/Python/2.7/site-packages for Python 2.7). See this Stackoverflow issue for another example on Linux. Note: the same general solution applies to any other module import error that you may encounter.

  • Error: HTTPError: 401 Client Error: Unauthorized for url: https://api.themoviedb.org/3/movie/1893?api_key=...

If you see this error in your notebook while testing your TMDb API access, or while generating recommendations, it means you have installed the tmdbsimple Python package but have not set up your API key.

Solution: Follow the instructions at the end of step 6 to set up your TMDb account and get your API key. Then copy the key into the tmdb.API_KEY = 'YOUR_API_KEY' line in the notebook cell at the end of Step 1: Prepare the data (i.e. replacing YOUR_API_KEY with the correct key). Once you have done that, execute that cell to test your access to the TMDb API.

Links

Note: the slide and video links below refer to an older version of this Code Pattern, which used the Elasticsearch Vector Scoring Plugin. Since Elasticsearch added native support for dense vector scoring, the plugin is no longer required. However, the details about the way in which the models and scoring functions work are still valid.
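
For context, native dense vector scoring is built on the dense_vector field type added in Elasticsearch 7.x. A hypothetical mapping sketch (the field name and dimensionality here are assumptions; the notebook defines its own mappings):

from elasticsearch import Elasticsearch

es = Elasticsearch()
# index whose documents carry a 20-dimensional model factor vector
mapping = {
    "mappings": {
        "properties": {
            "model_factor": {"type": "dense_vector", "dims": 20}
        }
    }
}
es.indices.create(index="movies", body=mapping)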

Learn more

  • Data Analytics Code Patterns: Enjoyed this Code Pattern? Check out our other Data Analytics Code Patterns
  • AI and Data Code Pattern Playlist: Bookmark our playlist with all of our Code Pattern videos
  • Watson Studio: Master the art of data science with IBM's Watson Studio
  • Spark on IBM Cloud: Need a Spark cluster? Create up to 30 Spark executors on IBM Cloud with our Spark service

License

This code pattern is licensed under the Apache Software License, Version 2. Separate third party code objects invoked within this code pattern are licensed by their respective providers pursuant to their own separate licenses. Contributions are subject to the Developer Certificate of Origin, Version 1.1 (DCO) and the Apache Software License, Version 2.

Apache Software License (ASL) FAQ

elasticsearch-spark-recommender's People

Contributors: dolph, imgbot[bot], kant, ljbennett62, markstur, mlnick, rhagarty, scottdangelo, stevemart, wwalisa

elasticsearch-spark-recommender's Issues

Getting Py4JJavaError error

Getting this error in Cell "Load Ratings and Movies ..."
Maybe a compatibility issue?

/Users/rhagarty/spark-2.2.0-bin-hadoop2.7/python/lib/py4j-0.10.4-src.zip/py4j/protocol.py in get_return_value(answer, gateway_client, target_id, name)
317 raise Py4JJavaError(
318 "An error occurred while calling {0}{1}{2}.\n".
--> 319 format(target_id, ".", name), value)
320 else:
321 raise Py4JError(

Py4JJavaError: An error occurred while calling o127.save.
: java.lang.ClassNotFoundException: Failed to find data source: es. Please find packages at http://spark.apache.org/third-party-projects.html
at org.apache.spark.sql.execution.datasources.DataSource$.lookupDataSource(DataSource.scala:549)
at org.apache.spark.sql.execution.datasources.DataSource.providingClass$lzycompute(DataSource.scala:86)
at org.apache.spark.sql.execution.datasources.DataSource.providingClass(DataSource.scala:86)
at org.apache.spark.sql.execution.datasources.DataSource.write(DataSource.scala:470)
at org.apache.spark.sql.execution.datasources.SaveIntoDataSourceCommand.run(SaveIntoDataSourceCommand.scala:48)
at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult$lzycompute(commands.scala:58)
at org.apache.spark.sql.execution.command.ExecutedCommandExec.sideEffectResult(commands.scala:56)
at org.apache.spark.sql.execution.command.ExecutedCommandExec.doExecute(commands.scala:74)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:117)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$execute$1.apply(SparkPlan.scala:117)
at org.apache.spark.sql.execution.SparkPlan$$anonfun$executeQuery$1.apply(SparkPlan.scala:138)
at org.apache.spark.rdd.RDDOperationScope$.withScope(RDDOperationScope.scala:151)
at org.apache.spark.sql.execution.SparkPlan.executeQuery(SparkPlan.scala:135)
at org.apache.spark.sql.execution.SparkPlan.execute(SparkPlan.scala:116)
at org.apache.spark.sql.execution.QueryExecution.toRdd$lzycompute(QueryExecution.scala:92)
at org.apache.spark.sql.execution.QueryExecution.toRdd(QueryExecution.scala:92)
at org.apache.spark.sql.DataFrameWriter.runCommand(DataFrameWriter.scala:610)
at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:233)
at org.apache.spark.sql.DataFrameWriter.save(DataFrameWriter.scala:217)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:280)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:214)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.ClassNotFoundException: es.DefaultSource
at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$21$$anonfun$apply$12.apply(DataSource.scala:533)
at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$21$$anonfun$apply$12.apply(DataSource.scala:533)
at scala.util.Try$.apply(Try.scala:192)
at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$21.apply(DataSource.scala:533)
at org.apache.spark.sql.execution.datasources.DataSource$$anonfun$21.apply(DataSource.scala:533)
at scala.util.Try.orElse(Try.scala:84)
at org.apache.spark.sql.execution.datasources.DataSource$.lookupDataSource(DataSource.scala:533)
... 29 more

FYI - Also seeing this in the console log:
17/10/10 13:11:47 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
17/10/10 13:11:50 WARN ObjectStore: Failed to get database global_temp, returning NoSuchObjectException

Elasticsearch connector confusion

@MLnick - step 3 is confusing. The title and text calls it "elasticsearch-spark", but the downloaded file and unzip command use "elasticsearch-hadoop". Can you clarify this step?

Take a look at journey development docs as guide

@MLnick - I assume you haven't had a chance to implement the suggested items in the journey checklist yet, so please start with the README at https://github.ibm.com/developer-journeys/journey-docs.

You can skip "Deploy to Bluemix" and CI, or any other items that don't fit this specific journey.

Looking specifically at what you do have, the description needs to be beefed up. Besides stating what it does, what makes it so compelling that data scientists should try it out? What is the goal of this journey? Here is a good example - https://github.com/IBM/pixiedust-facebook-analysis

Also, would be nice to use real package names instead of acronyms, at least the first time they are mentioned. Like "Elastic Search (ES) can be ... ". I know that I'm not in your target audience, but I had to look up ALS, TMDB, ...

And a question about the demo notebooks - are they independent, or does one feed the other? And what do they have to do with the run command? This wasn't clear to me. I haven't had a chance to load it yet and run it yet, so maybe it will become obvious.

"Recommendations" error

All the recommendation steps at the end of the notebook had errors, all similar to the following:

NameError Traceback (most recent call last)
in ()
----> 1 display_user_recs(12, num=5, num_last=5)

in display_user_recs(the_id, q, num, num_last, index)
103 i = 0
104 for movie in user_movies:
--> 105 movie_im_url = get_poster_url(movie['tmdbId'])
106 movie_title = movie['title']
107 user_html += "

%s

" % (movie_title, movie_im_url)

in get_poster_url(id)
9 poster_url = IMAGE_URL + movie['poster_path'] if 'poster_path' in movie and movie['poster_path'] is not None else ""
10 return poster_url
---> 11 except ModuleNotFoundError:
12 return "NA"
13

NameError: global name 'ModuleNotFoundError' is not defined

NOTE - this is probably related to the fact that I did NOT do the optional step of setting up the tmdb.API_KEY (I had errors). If so, the name of the recommended movie should still be listed, even if its poster can't be found.

Otherwise, the optional step should probably be required.

Py4JJavaError: An error occurred while calling o226.save. : java.lang.NoClassDefFoundError: scala/Product$class

i am getting this error
Py4JJavaError: An error occurred while calling o226.save.
: java.lang.NoClassDefFoundError: scala/Product$class
in this step

# write ratings data
ratings.write.format("es").save("ratings")
num_ratings_es = es.count(index="ratings")['count']
num_ratings_df = ratings.count()

# check write went ok
print("Dataframe count: {}".format(num_ratings_df))
print("ES index count: {}".format(num_ratings_es))

Please help me.

2018-05-15 18:38:15 ERROR NetworkClient:144 - Node [127.0.0.1:9200] failed (Connection refused: connect); selected next node [17.13.50.21:9200] 2018-05-15 18:38:19 ERROR NetworkClient:144 - Node [127.0.0.1:9200] failed (Connection refused: connect); no other nodes left - aborting...

Hello there, I cannot connect ES-Spark from my local machine to Elasticsearch on the server. I think the services need dependencies, so I connect Spark to Elasticsearch on the servers. This is my code:

es_read_conf = {
    # specify the node that we are sending data to (this should be the master)
    "es.nodes": '17.13.50.21:9200',

    # specify the port in case it is not the default port
    "es.port": '9200',

    # specify the read resource in the format 'index/doc-type'
    "es.resource": "stream-test/sample"
}

es_rdd = sc.newAPIHadoopRDD(
    inputFormatClass="org.elasticsearch.hadoop.mr.EsInputFormat",
    keyClass="org.apache.hadoop.io.NullWritable", 
    valueClass="org.elasticsearch.hadoop.mr.LinkedMapWritable", 
    conf=es_read_conf)

And I get this error:
2018-05-15 18:38:15 ERROR NetworkClient:144 - Node [127.0.0.1:9200] failed (Connection refused: connect); selected next node [17.13.50.21:9200] 2018-05-15 18:38:19 ERROR NetworkClient:144 - Node [127.0.0.1:9200] failed (Connection refused: connect); no other nodes left - aborting...

Can you help me? Thank you.

Add numpy installation to README

PySpark ML requires NumPy to be installed. If it's not installed, a ModuleNotFoundError is raised when trying to execute the ALS training cell in the notebook.

(I got this error because I was testing things out in a new conda env).

NameError: name 'spark' is not defined

# first import a few utility methods that we'll use later on
from IPython.display import Image, HTML, display

# check PySpark is running
spark


NameError Traceback (most recent call last)
in ()
2 from IPython.display import Image, HTML, display
3 # check PySpark is running
----> 4 spark

NameError: name 'spark' is not defined

I am running this file in Anaconda Jupyter. How can I solve this?

name 'spark' is not defined

Downloaded and installed everything. When I run the spark command in the "# check PySpark is running" cell, I get this:

NameError Traceback (most recent call last)
in ()
2 from IPython.display import Image, HTML, display
3 # check PySpark is running
----> 4 spark

NameError: name 'spark' is not defined

My commands:

SET PYSPARK_DRIVER_PYTHON=C:\Program Files (x86)\Python36-32\Scripts\jupyter.exe
SET PYSPARK_DRIVER_PYTHON_OPTS=notebook --no-browser
..\spark-2.2.0-bin-hadoop2.7\bin\pyspark --driver-memory 4g --driver-class-path ..\elasticsearch-hadoop-5.3.0\dist\elasticsearch-hadoop-5.3.0.jar

Support for ES 6.6?

I'm extremely new to ES but I've been going through the notebook, changing code where required to make it work (e.g. multiple types per index are no longer supported). I'm now stuck at the part of the example that retrieves/calculates the recommendations (after calculating the embeddings and bringing them back into ES).

Specifically in the fn_query this part of the ES query is out of date:

"script": {
"inline": "payload_vector_score",
"lang": "native",
"params": {
"field": "@model.factor",
"vector": query_vec,
"cosine" : cosine
}
}

It fails and I get the error:
RequestError: RequestError(400, 'search_phase_execution_exception', 'script_score: the script could not be loaded')

And I'm not sure how to get this up to speed for ES 6.6.

Cheers!

EDIT: Ah, I'm assuming the script is a plugin written by MLnick in his other repository? Which I notice has a TODO for porting to latest ES.

Elasticsearch error

@MLnick - getting this error in Step 2: Load data into Elasticsearch -

ImportError Traceback (most recent call last)
in ()
----> 1 from elasticsearch import Elasticsearch
2
3 # test your ES instance is running
4 es = Elasticsearch()
5 es.info(pretty=True)

ImportError: No module named elasticsearch

The work-around for me (running on Mac) was to:
export PYTHONPATH=/Library/Python/2.7/site-packages

Not sure if there is a better solution. Maybe we can add this to trouble-shooting as a "possible" solution?

PYSPARK_DRIVER_PYTHON is not defined

PYSPARK_DRIVER_PYTHON="jupyter"`` PYSPARK_DRIVER_PYTHON_OPTS="notebook" ../spark-2.2.0-bin-hadoop2.7/bin/pyspark --driver-memory 4g --driver-class-path ../../elasticsearch-hadoop-5.3.0/dist/elasticsearch-spark-20_2.11-5.3.0.jar

This gives an error in command prompt

PYSPARK_DRIVER_PYTHON is not recognized as an internal or external command, operable program or batch file

Thank you

Thank you for this project, it is very easy to set up and use, yet it is scalable and built on nice software.

I managed to feed it 45M+ ratings and get real-time recommendations on my MacBook Air. For now I have an issue where people receive recommendations of products with only 2 to 5 ratings, which is pretty weird, but I'll remove those from my sample and see how it goes (or could I tinker with the ALS parameters?)

Anyway thank you!

Use ALS instead of SVD?

This repo is great! But I still don't understand why we should use ALS instead of SVD (Step 3 in the Jupyter notebook)? Could you explain more about this experiment?

'Recommedation' errors

@MLnick - I grabbed the latest, and I'm still having issues with this.

Again, I did NOT install 'tmdbsimple' and skipped the 'optional' step of connecting to the TMDB API. Then when I get to the 'recommendation' steps, they all give me the same error:


UnboundLocalError Traceback (most recent call last)
in ()
----> 1 display_similar(1, num=5, q="genres:children")

in display_similar(the_id, q, num, index, dt)
144 """
145 movie, recs = get_similar(the_id, q, num, index, dt)
--> 146 q_im_url = get_poster_url(movie['tmdbId'])
147 if q_im_url == "NA":
148 display(HTML("Cannot import tmdbsimple. No movie posters will be displayed!"))

in get_poster_url(id)
10 poster_url = IMAGE_URL + movie['poster_path'] if 'poster_path' in movie and movie['poster_path'] is not None else ""
11 return poster_url
---> 12 except APIKeyError as ae:
13 return "KEY_ERR"
14 except ModuleNotFoundError as me:

UnboundLocalError: local variable 'APIKeyError' referenced before assignment

Maybe we should make this a MANDATORY step and force the user to figure out how to correctly install tmdbsimple?

Running into build issue

@MLnick - Seeing the following error:

$ PYSPARK_DRIVER_PYTHON="jupyter" PYSPARK_DRIVER_PYTHON_OPTS="notebook" ../../spark-2.2.0-bin-hadoop2.7/bin/pyspark --driver-memory 4g --driver-class-path ../../elasticsearch-hadoop-5.3.0//dist/elasticsearch-spark_2.11-5.3.0.jar
env: jupyter: No such file or directory

Getting error in running code

When I run this code, I get these errors in each of the situations below:

1)
Elasticsearch = 6.8.6
elasticsearch (python) = 7.5.1

RequestError: RequestError(400, 'illegal_argument_exception', 'Rejecting mapping update to [demo] as the final mapping would have more than 1 type: [movies, ratings, users]')
———————————
2)
Elasticsearch = 5.3.0
elasticsearch (python) = 5.4.0

RequestError: TransportError(400, 'search_phase_execution_exception', 'Failed to compile inline script [payload_vector_score] using lang [native]')
————————————
3)
Elasticsearch = 5.3.0
elasticsearch (python) = 7.5.1

TypeError: search() got multiple values for argument 'body'

README request

I think the README needs to add a reference to the movie data being used in this journey. In the intro, the first mention of it is in the phrase 'ratings data', which no one would associate with movies.

Can you expand paragraph 3 to include a mention of the MovieLens ratings data and what it contains? Maybe include a link to their website?

Support Elasticsearch 7.x

I too would be interested in seeing this repository updated to a later version of elasticsearch. With the removal of types from indexes, the introduction of the dense_vector data type, and the vector scoring plugin being incompatible with ES 7.x this sample could benefit from an update. There is a lot of extremely valuable information available here that is hard to apply to a current version of Elasticsearch.

Location of ml-latest-small folder

In your command to unzip this file, you should mention where it should reside. I would recommend:

cd /notebooks
unzip ml-latest-small.zip

This way, in step [3] of the notebook, the user can simply put:
PATH_TO_DATA = "ml-latest-small"
OR, better yet, you can make this the default so the user doesn't need to modify this cell.

I unzipped to , which caused me some pain trying to figure out what to set the PATH_TO_DATA path to be. Once I figured out that the base directory was /notebooks, I set my path to "../ml-latest-small".

How to run PYSPARK_DRIVER_PYTHON="jupyter" PYSPARK_DRIVER_PYTHON_OPTS="notebook" ../spark-2.2.3-bin-hadoop2.7/bin/pyspark --driver-memory 4g --driver-class-path /application/search/es-spark-recommender/elasticsearch-hadoop-5.3.0/dist/elasticsearch-spark-20_2.11-5.3.0.jar as a background process

PYSPARK_DRIVER_PYTHON="jupyter" PYSPARK_DRIVER_PYTHON_OPTS="notebook" ../spark-2.2.3-bin-hadoop2.7/bin/pyspark --driver-memory 4g --driver-class-path /application/search/es-spark-recommender/elasticsearch-hadoop-5.3.0/dist/elasticsearch-spark-20_2.11-5.3.0.jar
