
Comments (4)

jbouffard commented on August 16, 2024

Here's the script that I was trying to run: https://gist.github.com/jbouffard/408d9487c8561e26d7f6ea040c570cdd Ideally, I'd like to be able to run it with this command: ./spark-submit ./global_osm_tobler_gps.py. However, I found that I need to include the --jars parameter to make it work, even though the jars are assigned elsewhere in the GeoPySpark code base: https://github.com/locationtech-labs/geopyspark/blob/master/geopyspark/__init__.py#L118
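Roughly speaking, that assignment appears to do something like the following sketch (not geopyspark's actual code; the GEOPYSPARK_JARS_PATH default and the jar name are assumptions based on this thread):

import os
from pyspark import SparkConf

# Rough approximation of how the backend jar is assumed to be attached:
# the jar path is resolved from an environment variable and set as spark.jars.
jar_dir = os.environ.get("GEOPYSPARK_JARS_PATH", "/opt/jars")  # assumed default
backend_jar = os.path.join(jar_dir, "geotrellis-backend-assembly-0.3.1.jar")

conf = SparkConf()
# Setting spark.jars here may come too late for the driver classpath once
# spark-submit has already launched the JVM, which would explain why --jars
# is still needed on the command line.
conf.set("spark.jars", backend_jar)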


jamesmcclain commented on August 16, 2024

Can you include an example script and how (ideally) to run it?


jamesmcclain commented on August 16, 2024

This error

PYSPARK_PYTHON=/usr/bin/python3.4 PYSPARK_DRIVER_PYTHON=/usr/bin/python3.4 spark-submit test.py
...
org.apache.spark.SparkException: Failed to register classes with Kryo
        at org.apache.spark.serializer.KryoSerializer.newKryo(KryoSerializer.scala:138)
        at org.apache.spark.serializer.KryoSerializerInstance.borrowKryo(KryoSerializer.scala:289)
        at org.apache.spark.serializer.KryoSerializerInstance.<init>(KryoSerializer.scala:274)
        at org.apache.spark.serializer.KryoSerializer.newInstance(KryoSerializer.scala:184)
        at org.apache.spark.broadcast.TorrentBroadcast$.blockifyObject(TorrentBroadcast.scala:269)
        at org.apache.spark.broadcast.TorrentBroadcast.writeBlocks(TorrentBroadcast.scala:126)
        at org.apache.spark.broadcast.TorrentBroadcast.<init>(TorrentBroadcast.scala:88)
        at org.apache.spark.broadcast.TorrentBroadcastFactory.newBroadcast(TorrentBroadcastFactory.scala:34)
        at org.apache.spark.broadcast.BroadcastManager.newBroadcast(BroadcastManager.scala:56)
        at org.apache.spark.SparkContext.broadcast(SparkContext.scala:1410)
        at org.apache.spark.scheduler.DAGScheduler.submitMissingTasks(DAGScheduler.scala:997)
        at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$submitStage(DAGScheduler.scala:919)
        at org.apache.spark.scheduler.DAGScheduler.handleJobSubmitted(DAGScheduler.scala:863)
        at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.doOnReceive(DAGScheduler.scala:1683)
        at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1675)
        at org.apache.spark.scheduler.DAGSchedulerEventProcessLoop.onReceive(DAGScheduler.scala:1664)
        at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:48)
Caused by: java.lang.ClassNotFoundException: geotrellis.spark.io.kryo.KryoRegistrator
        at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
        at java.lang.Class.forName0(Native Method)
        at java.lang.Class.forName(Class.java:348)
        at org.apache.spark.serializer.KryoSerializer$$anonfun$newKryo$5.apply(KryoSerializer.scala:133)
        at org.apache.spark.serializer.KryoSerializer$$anonfun$newKryo$5.apply(KryoSerializer.scala:133)
        at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
        at scala.collection.TraversableLike$$anonfun$map$1.apply(TraversableLike.scala:234)
        at scala.collection.IndexedSeqOptimized$class.foreach(IndexedSeqOptimized.scala:33)
        at scala.collection.mutable.ArrayOps$ofRef.foreach(ArrayOps.scala:186)
        at scala.collection.TraversableLike$class.map(TraversableLike.scala:234)
        at scala.collection.mutable.ArrayOps$ofRef.map(ArrayOps.scala:186)
        at org.apache.spark.serializer.KryoSerializer.newKryo(KryoSerializer.scala:133)
        ... 16 more

occurs when attempting to spark-submit this script

from functools import partial
import geopyspark as gps
import fiona
import pyproj

from pyspark import SparkContext
from shapely.geometry import MultiPoint, MultiLineString, shape
from shapely.ops import transform
from geonotebook.wrappers import VectorData, TMSRasterData

# Set up our spark context
conf = gps.geopyspark_conf(appName="San Fran MVP", master="yarn-client")
sc = SparkContext(conf=conf)

# Simple smoke test: parallelize a small list and collect it back.
nums = [1, 2, 3]
rdd = sc.parallelize(nums)
print(rdd.collect())

on EMR, using either the hadoop account or an account created via OAuth. The problem is not specific to either account; it is caused by apparent differences in behavior between spark-submitting a script and running it through the PySpark shell, which is what Jupyter appears to be doing. This is not a simple configuration issue and requires deeper investigation.
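For reference, the ClassNotFoundException points at Kryo settings along these lines (a sketch using standard Spark configuration properties; the registrator class name comes from the stack trace, and I'm assuming geopyspark sets these values):

from pyspark import SparkConf

# Assumed Kryo configuration implied by the stack trace: the registrator class
# ships inside the GeoTrellis backend assembly jar, so the driver cannot load
# it unless that jar is on its classpath (which --jars provides).
conf = SparkConf()
conf.set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
conf.set("spark.kryo.registrator", "geotrellis.spark.io.kryo.KryoRegistrator")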

I will close this issue and open one in the geopyspark repository, because that seems to be the appropriate place.


jamesmcclain commented on August 16, 2024

Similarly, this

aws emr add-steps \
  --cluster-id j-xxxxxxxxxxxx \
  --steps 'Name=Spark,Jar=s3://us-east-1.elasticmapreduce/libs/script-runner/script-runner.jar,Args=[/bin/sh,-c,"GEOPYSPARK_JARS_PATH=/opt/jars PYSPARK_PYTHON=/usr/bin/python3.4 PYSPARK_DRIVER_PYTHON=/usr/bin/python3.4 spark-submit /home/hadoop/test.py"],ActionOnFailure=CONTINUE'

does not work, but this

aws emr add-steps \
  --cluster-id j-xxxxxxxxxxxx \
  --steps 'Name=Spark,Jar=s3://us-east-1.elasticmapreduce/libs/script-runner/script-runner.jar,Args=[/bin/sh,-c,"GEOPYSPARK_JARS_PATH=/opt/jars PYSPARK_PYTHON=/usr/bin/python3.4 PYSPARK_DRIVER_PYTHON=/usr/bin/python3.4 spark-submit --jars /opt/jars/geotrellis-backend-assembly-0.3.1.jar /home/hadoop/test.py"],ActionOnFailure=CONTINUE'

does.
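The only difference between the two steps is the --jars flag. Something like the following hypothetical helper (the /opt/jars location and jar name are taken from this cluster and may differ elsewhere) assembles the invocation that works:

import os

# Hypothetical helper: build the spark-submit command line that succeeded above.
def submit_command(script_path):
    jar_dir = os.environ.get("GEOPYSPARK_JARS_PATH", "/opt/jars")  # assumed default
    backend_jar = os.path.join(jar_dir, "geotrellis-backend-assembly-0.3.1.jar")
    return (
        "PYSPARK_PYTHON=/usr/bin/python3.4 "
        "PYSPARK_DRIVER_PYTHON=/usr/bin/python3.4 "
        "spark-submit --jars {} {}".format(backend_jar, script_path)
    )

print(submit_command("/home/hadoop/test.py"))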

