Commit message (Author, Date; Files changed, Lines -/+)
* [SPARK-959] Explicitly depend on org.eclipse.jetty.orbit jar (Aaron Davidson, 2013-12-18; 1 file, -0/+2)
    Without this, in some cases, Ivy attempts to download the wrong file and fails, stopping the whole build. See the bug for more details. (This is probably also the beginning of the slow death of our recently prettified dependencies. Form follows function.)
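A minimal sbt sketch of the kind of explicit pin this describes: forcing a plain `jar` artifact for the Orbit `javax.servlet` module so Ivy does not resolve the "orbit" packaging. The exact version string and build-file placement are assumptions, not a quote of the commit.

```scala
// build.sbt sketch: depend on the Jetty Orbit servlet-api module explicitly,
// telling Ivy to fetch it as a plain jar rather than the "orbit" packaging
// it otherwise resolves to (and then fails to download).
libraryDependencies += "org.eclipse.jetty.orbit" % "javax.servlet" % "2.5.0.v201103041518" artifacts Artifact("javax.servlet", "jar", "jar")
```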
* Merge pull request #267 from JoshRosen/cygwin (Reynold Xin, 2013-12-18; 4 files, -5/+55)
    Fix Cygwin support in several scripts. This allows the spark-shell, spark-class, run-example, make-distribution.sh, and ./bin/start-* scripts to work under Cygwin. Note that this doesn't support PySpark under Cygwin, since that requires many additional `cygpath` calls from within Python and will be non-trivial to implement. This PR was inspired by, and subsumes, #253 (so close #253 after this is merged).
  * Fix Cygwin support in several scripts. (Josh Rosen, 2013-12-15; 4 files, -5/+55)
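For illustration, a hypothetical Scala sketch of the path translation these scripts delegate to `cygpath -w`: the JVM expects Windows paths like C:\spark while the Cygwin shell passes around POSIX paths like /cygdrive/c/spark. The function and regex below are made up for illustration; the real scripts simply shell out to `cygpath`.

```scala
// Hypothetical sketch of POSIX-to-Windows path translation under Cygwin,
// the job the fixed scripts hand off to `cygpath -w`.
def toWindowsPath(posix: String): String = {
  val CygDrive = "/cygdrive/([a-zA-Z])(/.*)?".r
  posix match {
    case CygDrive(drive, rest) =>
      // /cygdrive/c/spark -> C:\spark
      drive.toUpperCase + ":" + Option(rest).getOrElse("/").replace('/', '\\')
    case other => other.replace('/', '\\')
  }
}
```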
* Merge pull request #274 from azuryy/master (Reynold Xin, 2013-12-18; 1 file, -1/+1)
    Fixed the example link in the Scala programming guide. The old link could not be accessed, so I changed it to the new one.
  * Changed the example links in the scala-programming-guide (fengdong, 2013-12-18; 1 file, -1/+1)
  * Fixed the example link. (fengdong, 2013-12-18; 1 file, -1/+1)
* Merge pull request #273 from rxin/top (Reynold Xin, 2013-12-17; 1 file, -0/+2)
    Fixed a performance problem in RDD.top and BoundedPriorityQueue. BoundedPriorityQueue's size method was actually traversing the entire queue to calculate the size, resulting in bad insertion performance. This should also cherry-pick cleanly into branch-0.8.
  * Fixed a performance problem in RDD.top and BoundedPriorityQueue (size in BoundedPriorityQueue was actually traversing the entire queue to calculate the size, resulting in bad performance in insertion). (Reynold Xin, 2013-12-17; 1 file, -0/+2)
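A simplified Scala sketch of the idea behind the fix (the real class wraps a java.util.PriorityQueue; names here are trimmed down): Iterable's default size walks the whole iterator, so each bounded insertion was O(n) just to check capacity. Delegating size to the backing queue's O(1) counter removes the traversal.

```scala
import java.util.{PriorityQueue => JPriorityQueue}
import scala.collection.JavaConverters._

// Simplified sketch: a bounded priority queue that keeps the largest
// maxSize elements seen so far.
class BoundedPriorityQueue[A](maxSize: Int)(implicit ord: Ordering[A])
    extends Iterable[A] {
  private val underlying = new JPriorityQueue[A](maxSize, ord)

  // The fix: report the backing queue's O(1) size instead of inheriting
  // Iterable.size, which traverses the iterator on every capacity check.
  override def size: Int = underlying.size

  def +=(elem: A): this.type = {
    if (size < maxSize) {
      underlying.offer(elem)
    } else {
      // Queue is full: replace the smallest element if the new one is larger.
      val head = underlying.peek()
      if (head != null && ord.gt(elem, head)) {
        underlying.poll()
        underlying.offer(elem)
      }
    }
    this
  }

  override def iterator: Iterator[A] = underlying.iterator.asScala
}
```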
* Merge pull request #268 from pwendell/shaded-protobuf (Patrick Wendell, 2013-12-16; 7 files, -82/+64)
    Add support for Hadoop 2.2 to master (via shaded jars). This patch does a few related things. NOTE: This may not compile correctly for ~24 hours until artifacts fully propagate to Maven Central.
    1. Uses shaded versions of akka/protobuf. For more information on how these versions were prepared, see [1].
    2. Brings the `new-yarn` project up-to-date with the changes for Akka 2.2.3.
    3. Some clean-up of the build now that we don't have to switch akka groups for different YARN versions.
    [1] https://github.com/pwendell/spark-utils/tree/933a309ef85c22643e8e4b5e365652101c4e95de/shaded-protobuf
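A hedged sbt sketch of what consuming the shaded artifacts looks like. The group and version match the Maven log quoted under the trailing-slash commit below; the exact module list is an assumption.

```scala
// build.sbt sketch: use the Akka artifacts republished under the
// org.spark-project.akka group, which bundle a shaded protobuf so they
// cannot clash with the protobuf 2.5 that Hadoop 2.2 brings in.
libraryDependencies ++= Seq(
  "org.spark-project.akka" %% "akka-actor"  % "2.2.3-shaded-protobuf",
  "org.spark-project.akka" %% "akka-remote" % "2.2.3-shaded-protobuf"
)
```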
  * One other fix (Patrick Wendell, 2013-12-16; 1 file, -1/+1)
  * Clean-up (Patrick Wendell, 2013-12-16; 2 files, -1/+2)
  * Cleanup (Patrick Wendell, 2013-12-16; 2 files, -7/+0)
  * Removing extra code in new yarn (Patrick Wendell, 2013-12-16; 1 file, -1/+0)
  * Remove trailing slashes from repository specifications. (Patrick Wendell, 2013-12-16; 1 file, -5/+5)
      The correct format is to not have a trailing slash. For me this caused non-deterministic failures due to issues fetching certain artifacts. The issue was that some of the Maven caches would fail to fetch the artifact (due to the way the artifact path was concatenated with the repository) and this short-circuited the download process in a silent way. Here is what the log output looked like:

          Downloading: http://repo.maven.apache.org/maven2/org/spark-project/akka/akka-remote_2.10/2.2.3-shaded-protobuf/akka-remote_2.10-2.2.3-shaded-protobuf.pom
          [WARNING] The POM for org.spark-project.akka:akka-remote_2.10:jar:2.2.3-shaded-protobuf is missing, no dependency information available

      This was pretty brutal to debug since there was no error message anywhere and the path *looks* correct as reported by the Maven log.
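The same rule expressed as an sbt resolver, for illustration only: the actual commit edits `<repository>` URLs in Maven POMs, and the resolver name here is made up.

```scala
// Correct: no trailing slash, so the artifact path concatenates cleanly.
resolvers += "maven-central" at "http://repo.maven.apache.org/maven2"

// The form the commit removes: a trailing slash that can corrupt the
// concatenated artifact URL and silently short-circuit the download.
// resolvers += "maven-central" at "http://repo.maven.apache.org/maven2/"
```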
  * Attempt with extra repositories (Patrick Wendell, 2013-12-16; 7 files, -76/+65)
* Merge pull request #270 from ewencp/really-force-ssh-pseudo-tty-master (Patrick Wendell, 2013-12-16; 1 file, -2/+2)
    Force pseudo-tty allocation in spark-ec2 script. ssh commands need the -t argument repeated twice if there is no local tty, e.g. if the process running spark-ec2 uses nohup and the parent process exits. Without this change, if you run the script this way (e.g. using nohup from a cron job), it will fail setting up the nodes because some of the ssh commands complain about missing ttys and then fail. (This version is for the master branch. I've filed a separate request for 0.8 since changes to the script caused the patches to be different.)
  * Force pseudo-tty allocation in spark-ec2 script. (Ewen Cheslack-Postava, 2013-12-16; 1 file, -2/+2)
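A hypothetical Scala wrapper just to make the repeated flag concrete (the real spark-ec2 tooling is a Python/shell script; `scala.sys.process` is used here only for illustration):

```scala
import scala.sys.process._

// Repeating -t twice forces ssh to allocate a pseudo-tty even when its own
// stdin is not a terminal, e.g. when the parent runs under nohup or cron.
// Returns the remote command's exit code.
def sshWithTty(host: String, remoteCmd: String): Int =
  Seq("ssh", "-t", "-t", host, remoteCmd).!
```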
* Merge pull request #245 from gregakespret/task-maxfailures-fix (Reynold Xin, 2013-12-16; 3 files, -5/+5)
    Fix for spark.task.maxFailures not enforced correctly. Docs at http://spark.incubator.apache.org/docs/latest/configuration.html say:
    ```
    spark.task.maxFailures
    Number of individual task failures before giving up on the job. Should be greater than or equal to 1. Number of allowed retries = this value - 1.
    ```
    The previous implementation worked incorrectly: when, for example, `spark.task.maxFailures` was set to 1, the job was aborted only after the second task failure, not after the first one.
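A schematic Scala sketch of the off-by-one (the real check lives in the scheduler; the names below are simplified stand-ins):

```scala
// failures: how many times this task has failed so far;
// maxFailures comes from spark.task.maxFailures.

// Buggy check: with maxFailures = 1, a task had to fail twice before
// failures > maxFailures became true, so one extra retry always slipped in.
def shouldAbortBuggy(failures: Int, maxFailures: Int): Boolean =
  failures > maxFailures

// Fixed check: abort once the task has failed maxFailures times, giving
// allowed retries = maxFailures - 1, exactly as the docs promise.
def shouldAbortFixed(failures: Int, maxFailures: Int): Boolean =
  failures >= maxFailures
```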
  * Fix tests. (Grega Kespret, 2013-12-10; 2 files, -2/+2)
  * Fix for spark.task.maxFailures not enforced correctly. (Grega Kespret, 2013-12-09; 1 file, -3/+3)
* Merge pull request #265 from markhamstra/scala.binary.version (Patrick Wendell, 2013-12-15; 11 files, -70/+71)
    DRY out the POMs with scala.binary.version instead of hard-coding 2.10 repeatedly. As long as it's not a `<project>`-level `<artifactId>`, I think that we are okay parameterizing these.
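The change itself is Maven XML (a `<scala.binary.version>` property substituted for the literal `2.10` in artifact ids). As a sketch of the same DRY idea on the sbt side, with illustrative values that are not taken from the commit:

```scala
// build.sbt sketch: state the binary version once and derive artifact ids
// from it, instead of repeating "2.10" in every dependency.
scalaVersion := "2.10.3"
val scalaBinary = "2.10" // plays the role of the POM's <scala.binary.version>

libraryDependencies += "org.apache.spark" % s"spark-core_$scalaBinary" % "0.9.0-incubating-SNAPSHOT"
// Idiomatic sbt goes one step further and lets %% append it automatically:
// libraryDependencies += "org.apache.spark" %% "spark-core" % "0.9.0-incubating-SNAPSHOT"
```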
  * Use scala.binary.version in POMs (Mark Hamstra, 2013-12-15; 11 files, -70/+71)
* Merge pull request #256 from MLnick/master (Josh Rosen, 2013-12-15; 1 file, -2/+6)
    Fix 'IPYTHON=1 ./pyspark' throwing ValueError. This fixes an annoying issue where running `IPYTHON=1 ./pyspark` resulted in:
    ```
    Welcome to
          ____              __
         / __/__  ___ _____/ /__
        _\ \/ _ \/ _ `/ __/ '_/
       /__ / .__/\_,_/_/ /_/\_\   version 0.8.0
          /_/

    Using Python version 2.7.5 (default, Jun 20 2013 11:06:30)
    Spark context avaiable as sc.
    ---------------------------------------------------------------------------
    ValueError                                Traceback (most recent call last)
    /usr/local/lib/python2.7/site-packages/IPython/utils/py3compat.pyc in execfile(fname, *where)
        202     else:
        203         filename = fname
    --> 204     __builtin__.execfile(filename, *where)

    /Users/Nick/workspace/scala/spark-0.8.0-incubating-bin-hadoop1/python/pyspark/shell.py in <module>()
         30 add_files = os.environ.get("ADD_FILES").split(',') if os.environ.get("ADD_FILES") != None else None
         31
    ---> 32 sc = SparkContext(os.environ.get("MASTER", "local"), "PySparkShell", pyFiles=add_files)
         33
         34 print """Welcome to

    /Users/Nick/workspace/scala/spark-0.8.0-incubating-bin-hadoop1/python/pyspark/context.pyc in __init__(self, master, jobName, sparkHome, pyFiles, environment, batchSize)
         70         with SparkContext._lock:
         71             if SparkContext._active_spark_context:
    ---> 72                 raise ValueError("Cannot run multiple SparkContexts at once")
         73             else:
         74                 SparkContext._active_spark_context = self

    ValueError: Cannot run multiple SparkContexts at once
    ```
    The issue arises since previously IPython didn't seem to respect `$PYTHONSTARTUP`, but since at least 1.0.0 it has. Technically this might break for older versions of IPython, but most users should be able to upgrade IPython to at least 1.0.0 (and should be encouraged to do so :). New behaviour:
    ```
    Nicks-MacBook-Pro:incubator-spark-mlnick Nick$ IPYTHON=1 ./pyspark
    Python 2.7.5 (default, Jun 20 2013, 11:06:30)
    Type "copyright", "credits" or "license" for more information.

    IPython 1.1.0 -- An enhanced Interactive Python.
    ?         -> Introduction and overview of IPython's features.
    %quickref -> Quick reference.
    help      -> Python's own help system.
    object?   -> Details about 'object', use 'object??' for extra details.
    SLF4J: Class path contains multiple SLF4J bindings.
    SLF4J: Found binding in [jar:file:/Users/Nick/workspace/scala/incubator-spark-mlnick/tools/target/scala-2.9.3/spark-tools-assembly-0.9.0-incubating-SNAPSHOT.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: Found binding in [jar:file:/Users/Nick/workspace/scala/incubator-spark-mlnick/assembly/target/scala-2.9.3/spark-assembly-0.9.0-incubating-SNAPSHOT-hadoop1.0.4.jar!/org/slf4j/impl/StaticLoggerBinder.class]
    SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
    SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
    13/12/12 13:08:15 WARN Utils: Your hostname, Nicks-MacBook-Pro.local resolves to a loopback address: 127.0.0.1; using 10.0.0.4 instead (on interface en0)
    13/12/12 13:08:15 WARN Utils: Set SPARK_LOCAL_IP if you need to bind to another address
    13/12/12 13:08:15 INFO Slf4jEventHandler: Slf4jEventHandler started
    13/12/12 13:08:15 INFO SparkEnv: Registering BlockManagerMaster
    13/12/12 13:08:15 INFO DiskBlockManager: Created local directory at /var/folders/_l/06wxljt13wqgm7r08jlc44_r0000gn/T/spark-local-20131212130815-0e76
    13/12/12 13:08:15 INFO MemoryStore: MemoryStore started with capacity 326.7 MB.
    13/12/12 13:08:15 INFO ConnectionManager: Bound socket to port 53732 with id = ConnectionManagerId(10.0.0.4,53732)
    13/12/12 13:08:15 INFO BlockManagerMaster: Trying to register BlockManager
    13/12/12 13:08:15 INFO BlockManagerMasterActor$BlockManagerInfo: Registering block manager 10.0.0.4:53732 with 326.7 MB RAM
    13/12/12 13:08:15 INFO BlockManagerMaster: Registered BlockManager
    13/12/12 13:08:15 INFO HttpBroadcast: Broadcast server started at http://10.0.0.4:53733
    13/12/12 13:08:15 INFO SparkEnv: Registering MapOutputTracker
    13/12/12 13:08:15 INFO HttpFileServer: HTTP File server directory is /var/folders/_l/06wxljt13wqgm7r08jlc44_r0000gn/T/spark-8f40e897-8211-4628-a7a8-755562d5244c
    13/12/12 13:08:16 INFO SparkUI: Started Spark Web UI at http://10.0.0.4:4040
    2013-12-12 13:08:16.337 java[56801:4003] Unable to load realm info from SCDynamicStore
    Welcome to
          ____              __
         / __/__  ___ _____/ /__
        _\ \/ _ \/ _ `/ __/ '_/
       /__ / .__/\_,_/_/ /_/\_\   version 0.9.0-SNAPSHOT
          /_/

    Using Python version 2.7.5 (default, Jun 20 2013 11:06:30)
    Spark context avaiable as sc.
    ```
  * Making IPython PySpark compatible across versions <1.0.0. Also cleaned up '-i' option and made IPYTHON_OPTS work (Nick Pentreath, 2013-12-15; 1 file, -1/+6)
  * Merge remote-tracking branch 'upstream/master' (Nick Pentreath, 2013-12-15; 197 files, -2905/+3470)
  * Fix 'IPYTHON=1 ./pyspark' throwing 'ValueError: Cannot run multiple SparkContexts at once' (Nick Pentreath, 2013-12-12; 1 file, -2/+1)
* Merge pull request #257 from tgravescs/sparkYarnFixName (Reynold Xin, 2013-12-15; 2 files, -0/+2)
    Fix the --name option for Spark on Yarn. Looks like the --name option accidentally got broken in one of the merges. The Client hangs if the --name option is used right now.
  * Fix the --name option for Spark on Yarn (Thomas Graves, 2013-12-12; 2 files, -0/+2)
* Merge pull request #264 from shivaram/spark-class-fix (Reynold Xin, 2013-12-15; 1 file, -1/+1)
    Use CoarseGrainedExecutorBackend in spark-class
  * Use CoarseGrainedExecutorBackend in spark-class (Shivaram Venkataraman, 2013-12-15; 1 file, -1/+1)
* Merge pull request #251 from pwendell/master (Reynold Xin, 2013-12-14; 1 file, -5/+7)
    Fix list rendering in YARN markdown docs. This is some minor clean-up which makes the list render correctly.
  * Fix list rendering in YARN markdown docs. (Patrick Wendell, 2013-12-10; 1 file, -5/+7)
* Merge pull request #249 from ngbinh/partitionInJavaSortByKey (Josh Rosen, 2013-12-14; 1 file, -0/+14)
    Expose numPartitions parameter in JavaPairRDD.sortByKey(). This change makes the Java and Scala APIs for sortByKey() the same.
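A hedged Scala sketch of the shape of such an overload (see the sub-commits that follow). Everything below is a schematic stand-in, not the real Spark classes: the actual JavaPairRDD wraps an RDD[(K, V)] and threads through a Comparator-based Ordering.

```scala
// Schematic sketch: a Java-facing wrapper that forwards the numPartitions
// parameter straight to the underlying Scala method.
class JavaPairWrapper[K, V](underlying: ScalaPairApi[K, V]) {
  // Existing overload: uses the RDD's current partition count.
  def sortByKey(ascending: Boolean): ScalaPairApi[K, V] =
    underlying.sortByKey(ascending, underlying.numPartitions)

  // New overload: lets Java callers pick the partition count, matching Scala.
  def sortByKey(ascending: Boolean, numPartitions: Int): ScalaPairApi[K, V] =
    underlying.sortByKey(ascending, numPartitions)
}

// Minimal stand-in for the Scala-side API so the sketch is self-contained.
trait ScalaPairApi[K, V] {
  def numPartitions: Int
  def sortByKey(ascending: Boolean, numPartitions: Int): ScalaPairApi[K, V]
}
```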
  * Hook directly to Scala API (Binh Nguyen, 2013-12-10; 1 file, -8/+6)
  * Leave default value of numPartitions to Scala code. (Binh Nguyen, 2013-12-10; 1 file, -2/+8)
  * Use braces to shorten the line. (Binh Nguyen, 2013-12-10; 1 file, -1/+3)
  * Expose numPartitions parameter in JavaPairRDD.sortByKey() (Binh Nguyen, 2013-12-10; 1 file, -2/+10)
      This change makes the Java and Scala APIs for sortByKey() the same.
* Merge pull request #259 from pwendell/scala-2.10 (Patrick Wendell, 2013-12-14; 196 files, -2900/+3449)
    Migration to Scala 2.10
    == Below description was written by Prashant Sharma ==
    This PR migrates Spark to Scala 2.10. Summary of changes apart from the Scala 2.10 migration:
    (Has no implications for users.)
    1. Migrated Akka to 2.2.3. Does not use remote death watch, for it has a bug where it tries to send messages to a dead node infinitely. Uses an indestructible actor system which tolerates errors only on executors.
    (Might be useful for users.)
    4. New configuration settings introduced:
        System.getProperty("spark.akka.heartbeat.pauses", "600")
        System.getProperty("spark.akka.failure-detector.threshold", "300.0")
        System.getProperty("spark.akka.heartbeat.interval", "1000")
    Defaults for these are fairly large, so as to only disable the failure detector that comes with Akka. The reason for doing so is that we have our own failure-detector-like mechanism in place, and this is just an overhead on top of that, plus it leads to a lot of false positives. But with these properties it is possible to enable them. A good use case for enabling them could be when someone wants Spark to be sensitive (in a controllable manner, of course) to GC pauses/network lags and quickly evict executors that experienced them. More information is included in configuration.md.
    Once we have SPARK-544 merged, I would like to deprecate at least these Akka properties and maybe others too.
    This PR is a duplicate of #221 (where all the discussion happened); that one pointed to master, this one points to the scala-2.10 branch.
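A small sketch of turning Akka's failure detector back on via these properties. The property names come from the description above; the specific values and the driver object are illustrative assumptions, and the properties must be set before the SparkContext is created.

```scala
// Hypothetical driver setup: lower the pause/interval thresholds so Akka's
// own failure detector actually fires on long GC pauses or network lag,
// instead of the near-disabled defaults described above.
object SensitiveDriver {
  def main(args: Array[String]): Unit = {
    System.setProperty("spark.akka.heartbeat.pauses", "60")        // illustrative value
    System.setProperty("spark.akka.failure-detector.threshold", "12.0") // illustrative value
    System.setProperty("spark.akka.heartbeat.interval", "5")       // illustrative value
    // ... create the SparkContext only after the properties are set ...
  }
}
```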
  * Merge pull request #262 from pwendell/mvn-fix (Patrick Wendell, 2013-12-13; 2 files, -1/+5)
      Fix Maven build issues in the 2.10 branch. Found some issues when locally testing Maven.
    * Fix maven build issues in 2.10 branch (Patrick Wendell, 2013-12-13; 2 files, -1/+5)
  * Merge pull request #261 from ScrapCodes/scala-2.10 (Reynold Xin, 2013-12-13; 1 file, -0/+7)
      Added a comment about ActorRef and ActorSelection difference.
    * Added a comment about ActorRef and ActorSelection difference. (Prashant Sharma, 2013-12-14; 1 file, -0/+7)
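For context, a hedged Akka 2.2-style sketch of the distinction that comment documents (the actor path and ids below are made up): an ActorSelection is only a path that may match zero or more actors, while an ActorRef points at one specific actor incarnation; an Identify round-trip resolves a selection to a concrete ref.

```scala
import akka.actor._

// Hypothetical illustration of ActorRef vs ActorSelection.
class Resolver extends Actor {
  // A selection is just an address pattern; messages sent to it are resolved
  // per-send and silently dropped if nothing lives at the path.
  val selection: ActorSelection =
    context.actorSelection("akka.tcp://spark@host:7077/user/worker")

  // Ask the selection to identify itself so we can obtain a real ActorRef,
  // which is bound to one specific actor incarnation.
  selection ! Identify("worker-lookup")

  def receive = {
    case ActorIdentity("worker-lookup", Some(ref)) =>
      context.watch(ref) // death watch needs an ActorRef, not a selection
    case ActorIdentity("worker-lookup", None) =>
      context.stop(self) // nothing lives at that path
  }
}
```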
  * Merge pull request #260 from ScrapCodes/scala-2.10 (Reynold Xin, 2013-12-13; 15 files, -49/+29)
      Review comments on the PR for scala 2.10 migration.
    * Review comments on the PR for scala 2.10 migration. (Prashant Sharma, 2013-12-13; 15 files, -49/+29)
  * Merge pull request #255 from ScrapCodes/scala-2.10 (Patrick Wendell, 2013-12-12; 2 files, -37/+47)
      Disabled yarn 2.2 in sbt and mvn build and added a message in the sbt build.
    * Disabled yarn 2.2 and added a message in the sbt build (Prashant Sharma, 2013-12-12; 2 files, -37/+47)
  * Merge pull request #254 from ScrapCodes/scala-2.10 (Patrick Wendell, 2013-12-11; 337 files, -6632/+18488)
      Scala 2.10 migration. (Description identical to pull request #259 above.)
    * A few corrections to documentation. (Prashant Sharma, 2013-12-12; 1 file, -7/+7)
    * Merge branch 'akka-bug-fix' of github.com:ScrapCodes/incubator-spark into akka-bug-fix (Prashant Sharma, 2013-12-11; 1 file, -1/+1)
      * added eclipse repository for spark streaming. (Prashant Sharma, 2013-12-11; 1 file, -1/+1)