path: root/bin
Commit message | Author | Age | Files | Lines
* [SPARK-1090] improvement on spark_shell (help information, configure memory) | CodingCat | 2014-02-17 | 1 | -6/+42
|
    https://spark-project.atlassian.net/browse/SPARK-1090

    spark-shell should print help information about its parameters and should allow users to configure executor memory. There is no documentation on how to set --cores/-c in spark-shell, and users should also be able to set executor memory through command-line options. In this PR I also check the format of the options passed by the user.

    Author: CodingCat <zhunansjtu@gmail.com>

    Closes #599 from CodingCat/spark_shell_improve and squashes the following commits:

    de5aa38 [CodingCat] add parameter to set driver memory
    915cbf8 [CodingCat] improvement on spark_shell (help information, configure memory)
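As an illustration of the option checking this commit describes, here is a hypothetical sketch, not the actual patch; the option names beyond --cores/-c and the variables they feed are assumptions:

```
# Hypothetical sketch of spark-shell option validation (names assumed).
usage() {
  echo "Usage: spark-shell [-c <cores>] [--execmem <mem, e.g. 512m|2g>]"
  exit 1
}

while [[ $# -gt 0 ]]; do
  case "$1" in
    -c|--cores)
      # Reject non-numeric core counts before launching.
      [[ "$2" =~ ^[0-9]+$ ]] || { echo "bad --cores value: $2" >&2; usage; }
      SPARK_SHELL_OPTS="$SPARK_SHELL_OPTS -Dspark.cores.max=$2"
      shift 2 ;;
    --execmem)
      # Memory values must look like 512m or 2g.
      [[ "$2" =~ ^[0-9]+[mg]$ ]] || { echo "bad --execmem value: $2" >&2; usage; }
      SPARK_EXECUTOR_MEMORY="$2"
      shift 2 ;;
    -h|--help) usage ;;
    *) shift ;;
  esac
done
```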
* Merge pull request #534 from sslavic/patch-1. Closes #534. | Stevo Slavić | 2014-02-04 | 1 | -1/+1
|
    Fixed wrong path to compute-classpath.cmd

    compute-classpath.cmd is in bin, not in sbin directory

    Author: Stevo Slavić <sslavic@gmail.com>

    == Merge branch commits ==

    commit 23deca32b69e9429b33ad31d35b7e1bfc9459f59
    Author: Stevo Slavić <sslavic@gmail.com>
    Date:   Tue Feb 4 15:01:47 2014 +0100

        Fixed wrong path to compute-classpath.cmd

        compute-classpath.cmd is in bin, not in sbin directory
* Merge pull request #484 from tdas/run-example-fix | Patrick Wendell | 2014-01-20 | 1 | -2/+11
|\
    Made run-example respect SPARK_JAVA_OPTS and SPARK_MEM.

    The bin/run-example script was not passing Java properties set through SPARK_JAVA_OPTS to the example. This is important for examples like Twitter** as the Twitter authentication information must be set through Java properties. Hence I added the same JAVA_OPTS code to run-example as in the bin/spark-class script. Also added SPARK_MEM, in case someone wants to run the example with a different amount of memory. This can be removed if it is not in tune with the intended semantics of the run-example scripts.

    @matei Please check this soon. I want this to go in 0.9-rc4.
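For context, a minimal sketch of the change being described, assuming the same JAVA_OPTS handling as bin/spark-class; $CLASSPATH and $EXAMPLE_CLASS stand in for values the real script computes earlier:

```
# Forward any user-supplied Java properties (e.g. Twitter credentials).
JAVA_OPTS="$SPARK_JAVA_OPTS"

# Honor SPARK_MEM if set, defaulting the heap size otherwise (default assumed).
SPARK_MEM=${SPARK_MEM:-512m}
JAVA_OPTS="$JAVA_OPTS -Xms$SPARK_MEM -Xmx$SPARK_MEM"

exec java $JAVA_OPTS -cp "$CLASSPATH" "$EXAMPLE_CLASS" "$@"
```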
| * Removed SPARK_MEM from run-examples. | Tathagata Das | 2014-01-20 | 1 | -5/+0
| |
| * Made run-example respect SPARK_JAVA_OPTS and SPARK_MEM. | Tathagata Das | 2014-01-20 | 1 | -2/+16
| |
* | Merge pull request #449 from CrazyJvm/master | Reynold Xin | 2014-01-20 | 1 | -3/+8
|\ \
| |/
|/|
    SPARK-1028: fix "set MASTER automatically fails" bug.

    spark-shell intends to set MASTER automatically if we do not provide the option when we start the shell, but there is a problem. The condition is "if [[ "x" != "x$SPARK_MASTER_IP" && "y" != "y$SPARK_MASTER_PORT" ]];". We will certainly set SPARK_MASTER_IP explicitly, but we probably do not set SPARK_MASTER_PORT, instead relying on Spark's default port 7077. So if we do not set SPARK_MASTER_PORT, the condition is never true. I think we should just use the default port if users do not set one explicitly.
| * fix some format problem. | CrazyJvm | 2014-01-16 | 1 | -2/+2
| |
| * fix "set MASTER automatically fails" bug. | CrazyJvm | 2014-01-16 | 1 | -3/+8
| |
    spark-shell intends to set MASTER automatically if we do not provide the option when we start the shell, but there is a problem. The condition is "if [[ "x" != "x$SPARK_MASTER_IP" && "y" != "y$SPARK_MASTER_PORT" ]];". We will certainly set SPARK_MASTER_IP explicitly, but we probably do not set SPARK_MASTER_PORT, instead relying on Spark's default port 7077. So if we do not set SPARK_MASTER_PORT, the condition is never true. I think we should just use the default port if users do not set one explicitly.
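A sketch of the shape of the fix, using only the variables quoted in the message; the actual patch may differ:

```
# Fall back to Spark's default master port instead of requiring
# SPARK_MASTER_PORT to be set before MASTER can be derived.
if [[ "x" != "x$SPARK_MASTER_IP" ]]; then
  # 7077 is the default master port when none is configured.
  MASTER="spark://${SPARK_MASTER_IP}:${SPARK_MASTER_PORT:-7077}"
  export MASTER
fi
```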
* | Fixed Windows spark shell launch script error. | Qiuzhuang Lian | 2014-01-16 | 2 | -3/+3
|/
    JIRA SPARK-1029: https://spark-project.atlassian.net/browse/SPARK-1029
* Merge branch 'master' into graphx | Reynold Xin | 2014-01-13 | 1 | -6/+1
|\
| * Merge pull request #353 from pwendell/ipython-simplify | Patrick Wendell | 2014-01-09 | 1 | -6/+1
| |\
    Simplify and fix pyspark script.

    This patch removes compatibility for IPython < 1.0 but fixes the launch script and makes it much simpler. I tested this using the three commands in the PySpark documentation page:

    1. IPYTHON=1 ./pyspark
    2. IPYTHON_OPTS="notebook" ./pyspark
    3. IPYTHON_OPTS="notebook --pylab inline" ./pyspark

    There are two changes:
    - We rely on the PYTHONSTARTUP env var to start PySpark.
    - Removed the quotes around $IPYTHON_OPTS... having quotes gloms them together as a single argument passed to `exec`, which seemed to cause ipython to fail (it instead expects them as multiple arguments).
| | * Small fix suggested by josh | Patrick Wendell | 2014-01-09 | 1 | -0/+1
| | |
| | * Simplify and fix pyspark script. | Patrick Wendell | 2014-01-07 | 1 | -7/+1
| | |
    This patch removes compatibility for IPython < 1.0 but fixes the launch script and makes it much simpler. I tested this using the three commands in the PySpark documentation page:

    1. IPYTHON=1 ./pyspark
    2. IPYTHON_OPTS="notebook" ./pyspark
    3. IPYTHON_OPTS="notebook --pylab inline" ./pyspark

    There are two changes:
    - We rely on the PYTHONSTARTUP env var to start PySpark.
    - Removed the quotes around $IPYTHON_OPTS... having quotes gloms them together as a single argument passed to `exec`, which seemed to cause ipython to fail (it instead expects them as multiple arguments).
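A simplified sketch of the launch logic these changes describe; the shell.py path is an assumption, not copied from the real script:

```
# PYTHONSTARTUP makes both plain python and ipython run the PySpark
# shell bootstrap on startup.
export PYTHONSTARTUP="$SPARK_HOME/python/pyspark/shell.py"

if [[ "$IPYTHON" = "1" ]]; then
  # $IPYTHON_OPTS stays unquoted on purpose: quoting would pass
  # "notebook --pylab inline" as one argument and break ipython.
  exec ipython $IPYTHON_OPTS
else
  exec python
fi
```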
* | | Finish d1d2b6d9b6b5f9cc45047507368a816903722d9e | Ankur Dave | 2014-01-10 | 1 | -1/+0
| | |
* | | graph -> graphx in bin/compute-classpath.sh | Ankur Dave | 2014-01-09 | 1 | -2/+2
| | |
* | | Merge remote-tracking branch 'spark-upstream/master' into HEAD | Ankur Dave | 2014-01-08 | 24 | -612/+709
|\| |
    Conflicts:
        README.md
        core/src/main/scala/org/apache/spark/util/collection/OpenHashMap.scala
        core/src/main/scala/org/apache/spark/util/collection/OpenHashSet.scala
        core/src/main/scala/org/apache/spark/util/collection/PrimitiveKeyOpenHashMap.scala
        pom.xml
        project/SparkBuild.scala
        repl/src/main/scala/org/apache/spark/repl/SparkILoop.scala
| * | Merge pull request #313 from tdas/project-refactor | Patrick Wendell | 2014-01-07 | 1 | -6/+1
| |\ \
| | |/
| |/|
    Refactored the streaming project to separate external libraries like Twitter, Kafka, Flume, etc.

    At a high level, these are the changes:

    1. All the external code was put in `SPARK_HOME/external/` as separate SBT projects and Maven modules. Their artifact names are `spark-streaming-twitter`, `spark-streaming-kafka`, etc. Both SparkBuild.scala and the pom.xml files have been updated. References to external libraries and repositories have been removed from the settings of the root and streaming projects/modules.
    2. To use the external functionality (say, creating a Twitter stream), the developer has to `import org.apache.spark.streaming.twitter._`. For the Scala API, the developer has to call `TwitterUtils.createStream(streamingContext, ...)`. For the Java API, the developer has to call `TwitterUtils.createStream(javaStreamingContext, ...)`.
    3. Each external project has its own Scala and Java unit tests. Note that the unit tests of each external library use classes of the streaming unit tests (`TestSuiteBase`, `LocalJavaStreamingContext`, etc.). To enable this code sharing among test classes, `dependsOn(streaming % "compile->compile,test->test")` was used in SparkBuild.scala. In streaming/pom.xml, an additional `maven-jar-plugin` was necessary to capture this dependency (see the comment inside the pom.xml for more information).
    4. Jars of the external projects have been added to the examples project but not to the assembly project.
    5. In some files, imports have been rearranged to conform to the Spark coding guidelines.
| | * Fixed examples/pom.xml and run-example based on Patrick's suggestions. | Tathagata Das | 2014-01-07 | 1 | -6/+1
| | |
| * | CR feedback (sbt -> sbt/sbt and correct JAR path in script) :) | Holden Karau | 2014-01-05 | 1 | -1/+1
| | |
| * | Finish documentation changes | Holden Karau | 2014-01-05 | 2 | -2/+2
| |/
| * Merge remote-tracking branch 'apache-github/master' into remove-binaries | Patrick Wendell | 2014-01-03 | 24 | -610/+712
| |\
    Conflicts:
        core/src/test/scala/org/apache/spark/DriverSuite.scala
        docs/python-programming-guide.md
| | * sbin/compute-classpath* -> bin/compute-classpath* | Prashant Sharma | 2014-01-03 | 4 | -2/+146
| | |
| | * sbin/spark-class* -> bin/spark-class* | Prashant Sharma | 2014-01-03 | 6 | -4/+266
| | |
| | * run-example -> bin/run-example | Prashant Sharma | 2014-01-02 | 2 | -2/+2
| | |
| | * Merge branch 'scripts-reorg' of github.com:shane-huang/incubator-spark into spark-915-segregate-scripts | Prashant Sharma | 2014-01-02 | 21 | -752/+448
| |/|
    Conflicts:
        bin/spark-shell
        core/pom.xml
        core/src/main/scala/org/apache/spark/SparkContext.scala
        core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/CoarseMesosSchedulerBackend.scala
        core/src/main/scala/org/apache/spark/ui/UIWorkloadGenerator.scala
        core/src/test/scala/org/apache/spark/DriverSuite.scala
        python/run-tests
        sbin/compute-classpath.sh
        sbin/spark-class
        sbin/stop-slaves.sh
| | * deprecate "spark" script and SPARK_CLASSPATH environment variable | Andrew xia | 2013-10-12 | 1 | -92/+0
| | |
| | * refactor $FWD variable | Andrew xia | 2013-09-29 | 3 | -4/+4
| | |
| | * rm bin/spark.cmd as we don't have a Windows test environment. Will add it later if needed | shane-huang | 2013-09-26 | 1 | -27/+0
| | |
    Signed-off-by: shane-huang <shengsheng.huang@intel.com>
| | * fix paths and change spark to use APP_MEM as application driver memory instead of SPARK_MEM; users should add application jars to SPARK_CLASSPATH | shane-huang | 2013-09-26 | 1 | -33/+8
| | |
    Signed-off-by: shane-huang <shengsheng.huang@intel.com>
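A speculative sketch of the memory handling this commit describes; everything except APP_MEM and SPARK_CLASSPATH is an assumption:

```
# Driver memory now comes from APP_MEM rather than SPARK_MEM
# (default value assumed), and user jars ride on SPARK_CLASSPATH.
APP_MEM=${APP_MEM:-512m}

exec java -Xms$APP_MEM -Xmx$APP_MEM \
  -cp "$SPARK_CLASSPATH:$CLASSPATH" "$MAIN_CLASS" "$@"
```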
| | * add scripts in bin | shane-huang | 2013-09-23 | 8 | -10/+155
| | |
    Signed-off-by: shane-huang <shengsheng.huang@intel.com>
| | * moved user scripts to bin folder | shane-huang | 2013-09-23 | 8 | -0/+418
| | |
    Signed-off-by: shane-huang <shengsheng.huang@intel.com>
| | * add admin scripts to sbin | shane-huang | 2013-09-23 | 13 | -704/+0
| | |
    Signed-off-by: shane-huang <shengsheng.huang@intel.com>
| | * added spark-class and spark-executor to sbin | shane-huang | 2013-09-23 | 1 | -1/+1
| | |
    Signed-off-by: shane-huang <shengsheng.huang@intel.com>
| * | Merge branch 'master' into scala-2.10 | Raymond Liu | 2013-11-13 | 5 | -9/+57
| |\ \
| * | | version changed 2.9.3 -> 2.10 in shell script. | Prashant Sharma | 2013-09-15 | 1 | -1/+1
| | | |
| * | | Merged with master | Prashant Sharma | 2013-09-06 | 13 | -121/+251
| |\ \ \
| | | |/
| | |/|
| * | | Merge branch 'master' of github.com:mesos/spark into scala-2.10 | Prashant Sharma | 2013-07-15 | 2 | -49/+65
| |\ \ \
    Conflicts:
        core/src/main/scala/spark/Utils.scala
        core/src/test/scala/spark/ui/UISuite.scala
        project/SparkBuild.scala
        run
| * \ \ \ Merge branch 'master' into master-merge | Prashant Sharma | 2013-07-12 | 2 | -0/+4
| |\ \ \ \
    Conflicts:
        README.md
        core/pom.xml
        core/src/main/scala/spark/deploy/JsonProtocol.scala
        core/src/main/scala/spark/deploy/LocalSparkCluster.scala
        core/src/main/scala/spark/deploy/master/Master.scala
        core/src/main/scala/spark/deploy/master/MasterWebUI.scala
        core/src/main/scala/spark/deploy/worker/Worker.scala
        core/src/main/scala/spark/deploy/worker/WorkerWebUI.scala
        core/src/main/scala/spark/storage/BlockManagerUI.scala
        core/src/main/scala/spark/util/AkkaUtils.scala
        pom.xml
        project/SparkBuild.scala
        streaming/src/main/scala/spark/streaming/receivers/ActorReceiver.scala
| * | | | | Removed some unnecessary code and fixed dependencies | Prashant Sharma | 2013-07-11 | 1 | -1/+1
| | | | | |
* | | | | | Added GraphX to classpath. | Reynold Xin | 2013-11-07 | 1 | -0/+1
| | | | | |
* | | | | | Merge remote-tracking branch 'spark-upstream/master' | Ankur Dave | 2013-10-30 | 1 | -4/+18
|\ \ \ \ \ \
| | |_|_|_|/
| |/| | | |
    Conflicts:
        project/SparkBuild.scala
| * | | | | Merge pull request #66 from shivaram/sbt-assembly-deps | Matei Zaharia | 2013-10-18 | 1 | -4/+18
| |\ \ \ \ \
    Add SBT target to assemble dependencies

    This pull request is an attempt to address the long assembly build times during development. Instead of rebuilding the assembly jar for every Spark change, it adds a new SBT target `spark` that packages all the Spark modules and builds an assembly of the dependencies. The workflow should now be something like:

    ```
    ./sbt/sbt spark   # Doing this once should suffice
    ## Make changes
    ./sbt/sbt compile
    ./sbt/sbt test or ./spark-shell
    ```
| | * | | | | Exclude assembly jar from classpath if using deps | Shivaram Venkataraman | 2013-10-16 | 1 | -10/+18
| | | | | | |
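A speculative sketch of the classpath logic this commit's title describes; all paths and names here are assumptions, not the real compute-classpath.sh:

```
# If a dependency-only assembly exists, put the module build output plus
# the deps jar on the classpath instead of the full assembly jar.
DEPS_JAR=$(ls "$FWDIR"/assembly/target/scala-*/spark-assembly-*-deps.jar 2>/dev/null | head -n 1)

if [[ -n "$DEPS_JAR" ]]; then
  CLASSPATH="$FWDIR/core/target/classes:$DEPS_JAR"
else
  # Otherwise fall back to the full assembly jar.
  CLASSPATH=$(ls "$FWDIR"/assembly/target/scala-*/spark-assembly-*.jar 2>/dev/null | head -n 1)
fi
```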
| | * | | | | Merge branch 'master' of https://github.com/apache/incubator-spark into sbt-assembly-deps | Shivaram Venkataraman | 2013-10-15 | 1 | -2/+0
| | |\ \ \ \ \
| | * | | | | | Add new SBT target for dependency assembly | Shivaram Venkataraman | 2013-10-09 | 1 | -0/+6
| | | |_|_|_|/
| | |/| | | |
* | | | | | | Merge branch 'master' of https://github.com/apache/incubator-spark into indexedrdd_graphx | Joseph E. Gonzalez | 2013-10-18 | 3 | -3/+39
|\| | | | | |
| * | | | | | SPARK-627, Implementing --config arguments in the scripts | KarthikTunga | 2013-10-16 | 1 | -1/+1
| | | | | | |
| * | | | | | SPARK-627, Implementing --config arguments in the scripts | KarthikTunga | 2013-10-16 | 2 | -2/+2
| | | | | | |
| * | | | | | Implementing --config argument in the scripts | KarthikTunga | 2013-10-16 | 2 | -7/+10
| | | | | | |
| * | | | | | Merge branch 'master' of https://github.com/apache/incubator-spark | KarthikTunga | 2013-10-15 | 1 | -2/+0
| |\ \ \ \ \ \
| | |/ / / /
| |/| | | |
    Updating local branch