Commit message (Author, Date, Files changed, Lines -/+)
* Various fixes to configuration code (Matei Zaharia, 2013-12-28, 88 files changed, -536/+692)
|   - Got rid of global SparkContext.globalConf
|   - Pass SparkConf to serializers and compression codecs
|   - Made SparkConf public instead of private[spark]
|   - Improved API of SparkContext and SparkConf
|   - Switched executor environment vars to be passed through SparkConf
|   - Fixed some places that were still using system properties
|   - Fixed some tests, though others are still failing
|   This still fails several tests in core, repl and streaming, likely due to properties not being set or cleared correctly (some of the tests run fine in isolation).
* spark-544, introducing SparkConf and related configuration overhaul. (Prashant Sharma, 2013-12-25, 96 files changed, -478/+612)
|
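The two configuration commits above replace system-property plumbing with an explicit SparkConf object that is handed to SparkContext. A minimal usage sketch of that style of setup follows; the specific keys set here are illustrative examples, not a list of what the patches actually touch.

```scala
import org.apache.spark.{SparkConf, SparkContext}

// Minimal sketch of SparkConf-based configuration; the keys below are illustrative only.
object SparkConfExample {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf()
      .setMaster("local[2]")
      .setAppName("conf-example")
      .set("spark.executorEnv.FOO", "bar") // executor environment vars travel via SparkConf
    val sc = new SparkContext(conf)        // rather than via global java system properties
    println(sc.getConf.get("spark.app.name"))
    sc.stop()
  }
}
```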
* Merge pull request #280 from aarondav/minor (Patrick Wendell, 2013-12-20, 3 files changed, -17/+8)
|\
| | Minor cleanup for standalone scheduler
| | See commit messages
| * Fix compiler warning in SparkZooKeeperSession (Aaron Davidson, 2013-12-19, 1 file changed, -0/+1)
| |
| * Remove firstApp from the standalone scheduler Master (Aaron Davidson, 2013-12-19, 1 file changed, -10/+0)
| | As a lonely child with no one to care for it... we had to put it down.
| * Extraordinarily minor code/comment cleanup (Aaron Davidson, 2013-12-19, 2 files changed, -7/+7)
| |
* | Merge pull request #272 from tmyklebu/master (Patrick Wendell, 2013-12-19, 5 files changed, -16/+36)
|\ \
| | | Track and report task result serialisation time.
| | | - DirectTaskResult now has a ByteBuffer valueBytes instead of a T value.
| | | - DirectTaskResult now has a member function T value() that deserialises valueBytes.
| | | - Executor serialises value into a ByteBuffer and passes it to DTR's ctor.
| | | - Executor tracks the time taken to do so and puts it in a new field in TaskMetrics.
| | | - StagePage now reports serialisation time from TaskMetrics along with the other things it reported.
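To illustrate the idea in the bullet list above (the result is kept as serialized bytes, deserialized lazily, and the serialization time is recorded so it can surface in metrics), here is a simplified, hypothetical stand-in that uses plain Java serialization rather than Spark's Serializer; it is a sketch of the technique, not the DirectTaskResult code itself.

```scala
import java.io.{ByteArrayInputStream, ByteArrayOutputStream, ObjectInputStream, ObjectOutputStream}
import java.nio.ByteBuffer

// Hypothetical analogue of DirectTaskResult: the value is stored as serialized bytes,
// value() deserialises on demand, and the serialisation time is captured so it could
// be reported through something like TaskMetrics.
class SerializedResult[T](val valueBytes: ByteBuffer, val serializationTimeMs: Long) {
  def value(): T = {
    val in = new ObjectInputStream(new ByteArrayInputStream(valueBytes.array()))
    try in.readObject().asInstanceOf[T] finally in.close()
  }
}

object SerializedResult {
  def apply[T](v: T): SerializedResult[T] = {
    val start = System.currentTimeMillis()
    val bos = new ByteArrayOutputStream()
    val out = new ObjectOutputStream(bos)
    out.writeObject(v.asInstanceOf[AnyRef]) // requires a Serializable value
    out.close()
    new SerializedResult[T](ByteBuffer.wrap(bos.toByteArray), System.currentTimeMillis() - start)
  }
}
```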
| * | Add a serialisation time column to the StagePage. (Tor Myklebust, 2013-12-18, 1 file changed, -1/+5)
| | |
| * | objectSer -> valueSer in a test. (Tor Myklebust, 2013-12-17, 1 file changed, -2/+2)
| | |
| * | Missed a spot; had an objectSer here too. (Tor Myklebust, 2013-12-17, 1 file changed, -2/+2)
| | |
| * | Merge branch 'master' of git://github.com/apache/incubator-spark (Tor Myklebust, 2013-12-16, 4 files changed, -7/+7)
| |\ \
| * | | Incorporate pwendell's code review suggestions. (Tor Myklebust, 2013-12-16, 4 files changed, -9/+8)
| | | |
| * | | UI to display serialisation time of a stage. (Tor Myklebust, 2013-12-16, 1 file changed, -0/+6)
| | | |
| * | | Track task value serialisation time in TaskMetrics. (Tor Myklebust, 2013-12-16, 4 files changed, -15/+26)
| | | |
* | | | Merge pull request #276 from shivaram/collectPartition (Reynold Xin, 2013-12-19, 5 files changed, -8/+50)
|\ \ \ \
| |_|_|/
|/| | | Add collectPartition to JavaRDD interface.
| | | | This interface is useful for implementing `take` from other language frontends where the data is serialized. Also remove `takePartition` from PythonRDD and use `collectPartition` in rdd.py.
| | | | Thanks @concretevitamin for the original change and tests.
| * | | Add comment explaining collectPartitions's use (Shivaram Venkataraman, 2013-12-19, 1 file changed, -0/+2)
| | | |
| * | | Make collectPartitions take an array of partitions (Shivaram Venkataraman, 2013-12-19, 3 files changed, -12/+22)
| | | | Change the implementation to use runJob instead of PartitionPruningRDD. Also update the unit tests and the python take implementation to use the new interface.
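As a rough sketch of the runJob-based approach the commit above describes (not Spark's actual source, and runJob overload signatures have varied across Spark versions), collecting a chosen set of partitions can be expressed as:

```scala
import scala.reflect.ClassTag
import org.apache.spark.rdd.RDD

// Sketch only: run a job on just the requested partition ids and get back one array
// per partition, instead of first materialising a PartitionPruningRDD.
def collectPartitions[T: ClassTag](rdd: RDD[T], partitionIds: Array[Int]): Array[Array[T]] =
  rdd.context.runJob(rdd, (iter: Iterator[T]) => iter.toArray, partitionIds.toSeq)
```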
| * | | Add collectPartition to JavaRDD interface. (Shivaram Venkataraman, 2013-12-18, 5 files changed, -9/+39)
| | | | Also remove takePartition from PythonRDD and use collectPartition in rdd.py.
* | | | Merge pull request #278 from MLnick/java-python-tostring (Matei Zaharia, 2013-12-19, 2 files changed, -0/+5)
|\ \ \ \
| | | | | Add toString to Java RDD, and __repr__ to Python RDD
| | | | | Addresses [SPARK-992](https://spark-project.atlassian.net/browse/SPARK-992)
| * | | | Add toString to Java RDD, and __repr__ to Python RDD (Nick Pentreath, 2013-12-19, 2 files changed, -0/+5)
|/ / / /
* | | | Merge pull request #183 from aarondav/spark-959 (Reynold Xin, 2013-12-19, 1 file changed, -0/+2)
|\ \ \ \
| | | | | [SPARK-959] Explicitly depend on org.eclipse.jetty.orbit jar
| | | | | Without this, in some cases, Ivy attempts to download the wrong file and fails, stopping the whole build. See [bug](https://spark-project.atlassian.net/browse/SPARK-959) for more details.
| | | | | Note that this may not be the best solution, as I do not understand the root cause of why this only happens for some people. However, it is reported to work.
| * | | | [SPARK-959] Explicitly depend on org.eclipse.jetty.orbit jar (Aaron Davidson, 2013-12-18, 1 file changed, -0/+2)
| | | | | Without this, in some cases, Ivy attempts to download the wrong file and fails, stopping the whole build. See bug for more details.
| | | | | (This is probably also the beginning of the slow death of our recently prettified dependencies. Form follows function.)
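For context, the usual trigger for this class of failure is that the jetty-orbit POM declares a non-jar packaging, so dependency resolution may look for a file that does not exist. A hedged sketch of how an sbt build definition (sbt builds are Scala) can pin the jar artifact explicitly follows; the version string is a placeholder and this is not necessarily the exact change in the commit above.

```scala
import sbt._
import Keys._

// Illustrative only: force a concrete "jar" artifact for the orbit servlet dependency
// so resolution does not follow the POM's "orbit" packaging. Version is a placeholder.
object JettyOrbitWorkaround {
  val jettyOrbitServlet =
    ("org.eclipse.jetty.orbit" % "javax.servlet" % "3.0.0.v201112011016")
      .artifacts(Artifact("javax.servlet", "jar", "jar"))

  val settings = Seq(libraryDependencies += jettyOrbitServlet)
}
```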
* | | | | Merge pull request #247 from aarondav/minor (Reynold Xin, 2013-12-18, 13 files changed, -71/+48)
|\ \ \ \ \
| |/ / / /
|/| | | | Increase spark.akka.askTimeout default to 30 seconds
| | | | | In experimental clusters we've observed that a 10 second timeout was insufficient, despite having a low number of nodes and relatively small workload (16 nodes, <1.5 TB data). This would cause an entire job to fail at the beginning of the reduce phase. There is no particular reason for this value to be small as a timeout should only occur in an exceptional situation.
| | | | | Also centralized the reading of spark.akka.askTimeout to AkkaUtils (surely this can later be cleaned up to use Typesafe). Finally, deleted some lurking implicits. If anyone can think of a reason they should still be there, please let me know.
| * | | | In experimental clusters we've observed that a 10 second timeout was ↵ (Aaron Davidson, 2013-12-18, 13 files changed, -71/+48)
|/ / / /
| | | | insufficient, despite having a low number of nodes and relatively small workload (16 nodes, <1.5 TB data). This would cause an entire job to fail at the beginning of the reduce phase. There is no particular reason for this value to be small as a timeout should only occur in an exceptional situation.
| | | | Also centralized the reading of spark.akka.askTimeout to AkkaUtils (surely this can later be cleaned up to use Typesafe). Finally, deleted some lurking implicits. If anyone can think of a reason they should still be there, please let me know.
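A simplified sketch of the centralization idea described above: one helper owns the lookup of spark.akka.askTimeout with the new 30 second default, so callers stop re-reading the property themselves. This mirrors the description of the change, not the actual AkkaUtils code.

```scala
import scala.concurrent.duration._

// Sketch: a single place that resolves the ask timeout; the property name and the
// 30 second default come from the commit description above.
object AskTimeout {
  def get: FiniteDuration =
    System.getProperty("spark.akka.askTimeout", "30").toInt.seconds
}
```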
* | | | Merge pull request #267 from JoshRosen/cygwin (Reynold Xin, 2013-12-18, 4 files changed, -5/+55)
|\ \ \ \
| | | | | Fix Cygwin support in several scripts.
| | | | | This allows the spark-shell, spark-class, run-example, make-distribution.sh, and ./bin/start-* scripts to work under Cygwin. Note that this doesn't support PySpark under Cygwin, since that requires many additional `cygpath` calls from within Python and will be non-trivial to implement.
| | | | | This PR was inspired by, and subsumes, #253 (so close #253 after this is merged).
| * | | | Fix Cygwin support in several scripts. (Josh Rosen, 2013-12-15, 4 files changed, -5/+55)
| | | | | This allows the spark-shell, spark-class, run-example, make-distribution.sh, and ./bin/start-* scripts to work under Cygwin. Note that this doesn't support PySpark under Cygwin, since that requires many additional `cygpath` calls from within Python and will be non-trivial to implement.
| | | | | This PR was inspired by, and subsumes, #253 (so close #253 after this is merged).
* | | | | Merge pull request #274 from azuryy/master (Reynold Xin, 2013-12-18, 1 file changed, -1/+1)
|\ \ \ \ \
| | | | | | Fixed the example link in the Scala programming guide.
| | | | | | The old link cannot be accessed; I changed it to the new one.
| * | | | | changed the example links in the scala-programming-guide (fengdong, 2013-12-18, 1 file changed, -1/+1)
| | | | | |
| * | | | | Fixed the example link. (fengdong, 2013-12-18, 1 file changed, -1/+1)
| | |/ / /
| |/| | |
* | | | | Merge pull request #273 from rxin/top (Reynold Xin, 2013-12-17, 1 file changed, -0/+2)
|\ \ \ \ \
| |/ / / /
|/| | | | Fixed a performance problem in RDD.top and BoundedPriorityQueue
| | | | | BoundedPriorityQueue was actually traversing the entire queue to calculate the size, resulting in bad performance in insertion. This should also cherry pick cleanly into branch-0.8.
| * | | | Fixed a performance problem in RDD.top and BoundedPriorityQueue (size in ↵ (Reynold Xin, 2013-12-17, 1 file changed, -0/+2)
|/ / / /
| | | | BoundedPriority was actually traversing the entire queue to calculate the size, resulting in bad performance in insertion).
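To make the performance issue above concrete: a wrapper that only provides an iterator inherits Iterable's size, which walks every element, so a size check on each insertion makes insertion O(n). The following is a hedged sketch of that class of bug and the one-line style of fix, not Spark's BoundedPriorityQueue itself.

```scala
import java.util.{PriorityQueue => JPriorityQueue}
import scala.collection.JavaConverters._

// Illustrative bounded priority queue backed by java.util.PriorityQueue, keeping the
// largest `maxSize` elements seen (maxSize must be >= 1).
class BoundedQueue[A](maxSize: Int)(implicit ord: Ordering[A]) extends Iterable[A] {
  private val underlying = new JPriorityQueue[A](maxSize, ord)

  override def iterator: Iterator[A] = underlying.iterator.asScala

  // The fix in spirit: without this override, size comes from Iterable and traverses
  // the whole iterator, making every size check below O(n).
  override def size: Int = underlying.size

  def +=(elem: A): this.type = {
    if (size < maxSize) underlying.offer(elem)
    else if (ord.gt(elem, underlying.peek())) { underlying.poll(); underlying.offer(elem) }
    this
  }
}
```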
* | | | Merge pull request #268 from pwendell/shaded-protobuf (Patrick Wendell, 2013-12-16, 7 files changed, -82/+64)
|\ \ \ \
| |_|_|/
|/| | | Add support for Hadoop 2.2 to master (via shaded jars)
| | | | This patch does a few related things. NOTE: This may not compile correctly for ~24 hours until artifacts fully propagate to Maven Central.
| | | | 1. Uses shaded versions of akka/protobuf. For more information on how these versions were prepared, see [1].
| | | | 2. Brings the `new-yarn` project up-to-date with the changes for Akka 2.2.3.
| | | | 3. Some clean-up of the build now that we don't have to switch akka groups for different YARN versions.
| | | | [1] https://github.com/pwendell/spark-utils/tree/933a309ef85c22643e8e4b5e365652101c4e95de/shaded-protobuf
| * | | One other fix (Patrick Wendell, 2013-12-16, 1 file changed, -1/+1)
| | | |
| * | | Clean-up (Patrick Wendell, 2013-12-16, 2 files changed, -1/+2)
| | | |
| * | | Cleanup (Patrick Wendell, 2013-12-16, 2 files changed, -7/+0)
| | | |
| * | | Removing extra code in new yarn (Patrick Wendell, 2013-12-16, 1 file changed, -1/+0)
| | | |
| * | | Remove trailing slashes from repository specifications. (Patrick Wendell, 2013-12-16, 1 file changed, -5/+5)
| | | | The correct format is to not have a trailing slash. For me this caused non-deterministic failures due to issues fetching certain artifacts. The issue was that some of the maven caches would fail to fetch the artifact (due to the way that the artifact path was concatenated with the repository) and this short-circuited the download process in a silent way. Here is what the log output looked like:
| | | | Downloading: http://repo.maven.apache.org/maven2/org/spark-project/akka/akka-remote_2.10/2.2.3-shaded-protobuf/akka-remote_2.10-2.2.3-shaded-protobuf.pom
| | | | [WARNING] The POM for org.spark-project.akka:akka-remote_2.10:jar:2.2.3-shaded-protobuf is missing, no dependency information available
| | | | This was pretty brutal to debug since there was no error message anywhere and the path *looks* correct as reported by the Maven log.
| * | | Attempt with extra repositories (Patrick Wendell, 2013-12-16, 7 files changed, -76/+65)
|/ / /
* | | Merge pull request #270 from ewencp/really-force-ssh-pseudo-tty-master (Patrick Wendell, 2013-12-16, 1 file changed, -2/+2)
|\ \ \
| | | | Force pseudo-tty allocation in spark-ec2 script.
| | | | ssh commands need the -t argument repeated twice if there is no local tty, e.g. if the process running spark-ec2 uses nohup and the parent process exits. Without this change, if you run the script this way (e.g. using nohup from a cron job), it will fail setting up the nodes because some of the ssh commands complain about missing ttys and then fail.
| | | | (This version is for the master branch. I've filed a separate request for the 0.8 since changes to the script caused the patches to be different.)
| * | Force pseudo-tty allocation in spark-ec2 script. (Ewen Cheslack-Postava, 2013-12-16, 1 file changed, -2/+2)
| | |/
| |/|
| | | ssh commands need the -t argument repeated twice if there is no local tty, e.g. if the process running spark-ec2 uses nohup and the parent process exits.
* | | Merge pull request #245 from gregakespret/task-maxfailures-fix (Reynold Xin, 2013-12-16, 3 files changed, -5/+5)
|\ \ \
| |/ /
|/| | Fix for spark.task.maxFailures not enforced correctly.
| | | Docs at http://spark.incubator.apache.org/docs/latest/configuration.html say:
| | | ```
| | | spark.task.maxFailures
| | | Number of individual task failures before giving up on the job. Should be greater than or equal to 1. Number of allowed retries = this value - 1.
| | | ```
| | | Previous implementation worked incorrectly. When for example `spark.task.maxFailures` was set to 1, the job was aborted only after the second task failure, not after the first one.
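The off-by-one described above comes down to the comparison used when deciding whether to give up. A tiny illustrative check (not the actual scheduler code): with maxFailures = 1 the job must be abandoned at the first failure, which requires >= rather than >.

```scala
// Illustrative only: "number of allowed retries = maxFailures - 1" means the job is
// given up as soon as the failure count reaches maxFailures.
def shouldAbort(numFailures: Int, maxFailures: Int): Boolean =
  numFailures >= maxFailures // a ">" comparison here is the off-by-one the fix removes

// e.g. maxFailures = 1: shouldAbort(1, 1) == true, so the first failure aborts the job.
```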
| * | Fix tests. (Grega Kespret, 2013-12-10, 2 files changed, -2/+2)
| | |
| * | Fix for spark.task.maxFailures not enforced correctly. (Grega Kespret, 2013-12-09, 1 file changed, -3/+3)
| | |
* | | Merge pull request #265 from markhamstra/scala.binary.version (Patrick Wendell, 2013-12-15, 11 files changed, -70/+71)
|\ \ \
| |_|/
|/| | DRY out the POMs with scala.binary.version
| | | ...instead of hard-coding 2.10 repeatedly. As long as it's not a `<project>`-level `<artifactId>`, I think that we are okay parameterizing these.
| * | Use scala.binary.version in POMs (Mark Hamstra, 2013-12-15, 11 files changed, -70/+71)
| | |
* | | | Merge pull request #256 from MLnick/master (Josh Rosen, 2013-12-15, 1 file changed, -2/+6)
|\ \ \ \
| | | | Fix 'IPYTHON=1 ./pyspark' throwing ValueError
| | | | This fixes an annoying issue where running ```IPYTHON=1 ./pyspark``` resulted in:
```
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /__ / .__/\_,_/_/ /_/\_\   version 0.8.0
      /_/

Using Python version 2.7.5 (default, Jun 20 2013 11:06:30)
Spark context avaiable as sc.
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
/usr/local/lib/python2.7/site-packages/IPython/utils/py3compat.pyc in execfile(fname, *where)
    202         else:
    203             filename = fname
--> 204         __builtin__.execfile(filename, *where)

/Users/Nick/workspace/scala/spark-0.8.0-incubating-bin-hadoop1/python/pyspark/shell.py in <module>()
     30 add_files = os.environ.get("ADD_FILES").split(',') if os.environ.get("ADD_FILES") != None else None
     31
---> 32 sc = SparkContext(os.environ.get("MASTER", "local"), "PySparkShell", pyFiles=add_files)
     33
     34 print """Welcome to

/Users/Nick/workspace/scala/spark-0.8.0-incubating-bin-hadoop1/python/pyspark/context.pyc in __init__(self, master, jobName, sparkHome, pyFiles, environment, batchSize)
     70         with SparkContext._lock:
     71             if SparkContext._active_spark_context:
---> 72                 raise ValueError("Cannot run multiple SparkContexts at once")
     73             else:
     74                 SparkContext._active_spark_context = self

ValueError: Cannot run multiple SparkContexts at once
```
| | | | The issue arises since previously IPython didn't seem to respect ```$PYTHONSTARTUP```, but since at least 1.0.0 it has. Technically this might break for older versions of IPython, but most users should be able to upgrade IPython to at least 1.0.0 (and should be encouraged to do so :).
| | | | New behaviour:
```
Nicks-MacBook-Pro:incubator-spark-mlnick Nick$ IPYTHON=1 ./pyspark
Python 2.7.5 (default, Jun 20 2013, 11:06:30)
Type "copyright", "credits" or "license" for more information.

IPython 1.1.0 -- An enhanced Interactive Python.
?         -> Introduction and overview of IPython's features.
%quickref -> Quick reference.
help      -> Python's own help system.
object?   -> Details about 'object', use 'object??' for extra details.

SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/Users/Nick/workspace/scala/incubator-spark-mlnick/tools/target/scala-2.9.3/spark-tools-assembly-0.9.0-incubating-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/Users/Nick/workspace/scala/incubator-spark-mlnick/assembly/target/scala-2.9.3/spark-assembly-0.9.0-incubating-SNAPSHOT-hadoop1.0.4.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
13/12/12 13:08:15 WARN Utils: Your hostname, Nicks-MacBook-Pro.local resolves to a loopback address: 127.0.0.1; using 10.0.0.4 instead (on interface en0)
13/12/12 13:08:15 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to another address
13/12/12 13:08:15 INFO Slf4jEventHandler: Slf4jEventHandler started
13/12/12 13:08:15 INFO SparkEnv: Registering BlockManagerMaster
13/12/12 13:08:15 INFO DiskBlockManager: Created local directory at /var/folders/_l/06wxljt13wqgm7r08jlc44_r0000gn/T/spark-local-20131212130815-0e76
13/12/12 13:08:15 INFO MemoryStore: MemoryStore started with capacity 326.7 MB.
13/12/12 13:08:15 INFO ConnectionManager: Bound socket to port 53732 with id = ConnectionManagerId(10.0.0.4,53732)
13/12/12 13:08:15 INFO BlockManagerMaster: Trying to register BlockManager
13/12/12 13:08:15 INFO BlockManagerMasterActor$BlockManagerInfo: Registering block manager 10.0.0.4:53732 with 326.7 MB RAM
13/12/12 13:08:15 INFO BlockManagerMaster: Registered BlockManager
13/12/12 13:08:15 INFO HttpBroadcast: Broadcast server started at http://10.0.0.4:53733
13/12/12 13:08:15 INFO SparkEnv: Registering MapOutputTracker
13/12/12 13:08:15 INFO HttpFileServer: HTTP File server directory is /var/folders/_l/06wxljt13wqgm7r08jlc44_r0000gn/T/spark-8f40e897-8211-4628-a7a8-755562d5244c
13/12/12 13:08:16 INFO SparkUI: Started Spark Web UI at http://10.0.0.4:4040
2013-12-12 13:08:16.337 java[56801:4003] Unable to load realm info from SCDynamicStore
Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /__ / .__/\_,_/_/ /_/\_\   version 0.9.0-SNAPSHOT
      /_/

Using Python version 2.7.5 (default, Jun 20 2013 11:06:30)
Spark context avaiable as sc.
```
| * | | Making IPython PySpark compatible across versions <1.0.0. Also cleaned up ↵ (Nick Pentreath, 2013-12-15, 1 file changed, -1/+6)
| | | | '-i' option and made IPYTHON_OPTS work
| * | | Merge remote-tracking branch 'upstream/master' (Nick Pentreath, 2013-12-15, 197 files changed, -2905/+3470)
| |\| |
| * | | Fix 'IPYTHON=1 ./pyspark' throwing 'ValueError: Cannot run multiple ↵ (Nick Pentreath, 2013-12-12, 1 file changed, -2/+1)
| | | | SparkContexts at once'
* | | | Merge pull request #257 from tgravescs/sparkYarnFixName (Reynold Xin, 2013-12-15, 2 files changed, -0/+2)
|\ \ \ \
| | | | | Fix the --name option for Spark on Yarn
| | | | | Looks like the --name option accidentally got broken in one of the merges. The Client hangs if the --name option is used right now.