author    Mark Grover <mark@apache.org>    2016-04-14 18:51:43 -0700
committer Reynold Xin <rxin@databricks.com>    2016-04-14 18:51:43 -0700
commit    ff9ae61a3b7bbbfc2aac93a99c05a9e1ea9c08bc (patch)
tree      b2ea17d703d24540a69f40c66f8bfc35c6a2cc37
parent    c80586d9e820d19fc328b3e4c6f1c1439f5583a7 (diff)
[SPARK-14601][DOC] Minor doc/usage changes related to removal of Spark assembly
## What changes were proposed in this pull request?

Removing references to assembly jar in documentation. Adding an additional (previously undocumented) usage of spark-submit to run examples.

## How was this patch tested?

Ran spark-submit usage to ensure formatting was fine. Ran examples using SparkSubmit.

Author: Mark Grover <mark@apache.org>

Closes #12365 from markgrover/spark-14601.
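The newly documented invocation mirrors what the `bin/run-example` wrapper already does. A minimal sketch of the added usage (the example class and its argument are illustrative values):

```sh
# Run a bundled example through spark-submit directly; SparkPi and its
# partition-count argument are illustrative, not required values.
./bin/spark-submit run-example SparkPi 10
```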
 bin/spark-class2.cmd                                                                              | 2 +-
 core/src/main/scala/org/apache/spark/deploy/PythonRunner.scala                                    | 2 +-
 core/src/main/scala/org/apache/spark/deploy/SparkSubmitArguments.scala                            | 3 ++-
 docs/building-spark.md                                                                            | 2 +-
 docs/sql-programming-guide.md                                                                     | 4 ++--
 sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetRelation.scala  | 2 +-
 sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveContext.scala                               | 2 +-
 7 files changed, 9 insertions(+), 8 deletions(-)
diff --git a/bin/spark-class2.cmd b/bin/spark-class2.cmd
index 579efff909..db680218dc 100644
--- a/bin/spark-class2.cmd
+++ b/bin/spark-class2.cmd
@@ -36,7 +36,7 @@ if exist "%SPARK_HOME%\RELEASE" (
)
if not exist "%SPARK_JARS_DIR%"\ (
- echo Failed to find Spark assembly JAR.
+ echo Failed to find Spark jars directory.
echo You need to build Spark before running this program.
exit /b 1
)
diff --git a/core/src/main/scala/org/apache/spark/deploy/PythonRunner.scala b/core/src/main/scala/org/apache/spark/deploy/PythonRunner.scala
index c0a9e3f280..6227a30dc9 100644
--- a/core/src/main/scala/org/apache/spark/deploy/PythonRunner.scala
+++ b/core/src/main/scala/org/apache/spark/deploy/PythonRunner.scala
@@ -62,7 +62,7 @@ object PythonRunner {
// ready to serve connections.
thread.join()
- // Build up a PYTHONPATH that includes the Spark assembly JAR (where this class is), the
+ // Build up a PYTHONPATH that includes the Spark assembly (where this class is), the
// python directories in SPARK_HOME (if set), and any files in the pyFiles argument
val pathElements = new ArrayBuffer[String]
pathElements ++= formattedPyFiles
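For orientation, the path assembled here ends up looking roughly like the sketch below; per the comment above, the entries come from the `--py-files` argument and the python directories in SPARK_HOME, and the archive names are placeholders:

```sh
# Hypothetical effective PYTHONPATH: --py-files entries first, then the
# python directories shipped under SPARK_HOME, then any existing value.
export PYTHONPATH="deps.zip:$SPARK_HOME/python:$SPARK_HOME/python/lib/py4j-src.zip:$PYTHONPATH"
```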
diff --git a/core/src/main/scala/org/apache/spark/deploy/SparkSubmitArguments.scala b/core/src/main/scala/org/apache/spark/deploy/SparkSubmitArguments.scala
index ec6d48485f..78da1b70c5 100644
--- a/core/src/main/scala/org/apache/spark/deploy/SparkSubmitArguments.scala
+++ b/core/src/main/scala/org/apache/spark/deploy/SparkSubmitArguments.scala
@@ -478,7 +478,8 @@ private[deploy] class SparkSubmitArguments(args: Seq[String], env: Map[String, S
val command = sys.env.get("_SPARK_CMD_USAGE").getOrElse(
"""Usage: spark-submit [options] <app jar | python file> [app arguments]
|Usage: spark-submit --kill [submission ID] --master [spark://...]
- |Usage: spark-submit --status [submission ID] --master [spark://...]""".stripMargin)
+ |Usage: spark-submit --status [submission ID] --master [spark://...]
+ |Usage: spark-submit run-example [options] example-class [example args]""".stripMargin)
outStream.println(command)
val mem_mb = Utils.DEFAULT_DRIVER_MEM_MB
diff --git a/docs/building-spark.md b/docs/building-spark.md
index 40661604af..fec442af95 100644
--- a/docs/building-spark.md
+++ b/docs/building-spark.md
@@ -192,7 +192,7 @@ If you have JDK 8 installed but it is not the system default, you can set JAVA_H
# Packaging without Hadoop Dependencies for YARN
-The assembly jar produced by `mvn package` will, by default, include all of Spark's dependencies, including Hadoop and some of its ecosystem projects. On YARN deployments, this causes multiple versions of these to appear on executor classpaths: the version packaged in the Spark assembly and the version on each node, included with `yarn.application.classpath`. The `hadoop-provided` profile builds the assembly without including Hadoop-ecosystem projects, like ZooKeeper and Hadoop itself.
+The assembly directory produced by `mvn package` will, by default, include all of Spark's dependencies, including Hadoop and some of its ecosystem projects. On YARN deployments, this causes multiple versions of these to appear on executor classpaths: the version packaged in the Spark assembly and the version on each node, included with `yarn.application.classpath`. The `hadoop-provided` profile builds the assembly without including Hadoop-ecosystem projects, like ZooKeeper and Hadoop itself.
# Building with SBT
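A representative build using this profile might look like the following sketch (the `-Pyarn` profile is an assumption here, chosen to match the YARN deployments the paragraph describes):

```sh
# Produce a distribution whose jars directory omits Hadoop and its
# ecosystem projects, deferring to the cluster's own Hadoop classpath.
./build/mvn -Pyarn -Phadoop-provided -DskipTests clean package
```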
diff --git a/docs/sql-programming-guide.md b/docs/sql-programming-guide.md
index 2d9849d032..77887f4ca3 100644
--- a/docs/sql-programming-guide.md
+++ b/docs/sql-programming-guide.md
@@ -1651,7 +1651,7 @@ SELECT * FROM jsonTable
Spark SQL also supports reading and writing data stored in [Apache Hive](http://hive.apache.org/).
However, since Hive has a large number of dependencies, it is not included in the default Spark assembly.
Hive support is enabled by adding the `-Phive` and `-Phive-thriftserver` flags to Spark's build.
-This command builds a new assembly jar that includes Hive. Note that this Hive assembly jar must also be present
+This command builds a new assembly directory that includes Hive. Note that this Hive assembly directory must also be present
on all of the worker nodes, as they will need access to the Hive serialization and deserialization libraries
(SerDes) in order to access data stored in Hive.
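For reference, a build enabling Hive support with the flags named above might look like this sketch:

```sh
# Include Hive (and the Thrift JDBC/ODBC server) in the build output; the
# result must also be available on all worker nodes, as noted above.
./build/mvn -Phive -Phive-thriftserver -DskipTests clean package
```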
@@ -1770,7 +1770,7 @@ The following options can be used to configure the version of Hive that is used
property can be one of three options:
<ol>
<li><code>builtin</code></li>
- Use Hive 1.2.1, which is bundled with the Spark assembly jar when <code>-Phive</code> is
+ Use Hive 1.2.1, which is bundled with the Spark assembly when <code>-Phive</code> is
enabled. When this option is chosen, <code>spark.sql.hive.metastore.version</code> must be
either <code>1.2.1</code> or not defined.
<li><code>maven</code></li>
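As a sketch of selecting the `builtin` option in practice (the values shown are just the defaults described above):

```sh
# Use the Hive 1.2.1 classes bundled with Spark for the metastore client;
# the metastore version must then be 1.2.1 or left unset.
./bin/spark-sql \
  --conf spark.sql.hive.metastore.jars=builtin \
  --conf spark.sql.hive.metastore.version=1.2.1
```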
diff --git a/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetRelation.scala b/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetRelation.scala
index b91e892f8f..bfe7aefe41 100644
--- a/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetRelation.scala
+++ b/sql/core/src/main/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetRelation.scala
@@ -784,7 +784,7 @@ private[sql] object ParquetRelation extends Logging {
// scalastyle:on classforname
redirect(JLogger.getLogger("parquet"))
} catch { case _: Throwable =>
- // SPARK-9974: com.twitter:parquet-hadoop-bundle:1.6.0 is not packaged into the assembly jar
+ // SPARK-9974: com.twitter:parquet-hadoop-bundle:1.6.0 is not packaged into the assembly
// when Spark is built with SBT. So `parquet.Log` may not be found. This try/catch block
// should be removed after this issue is fixed.
}
diff --git a/sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveContext.scala b/sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveContext.scala
index 505e5c0bb6..ff93bfc4a3 100644
--- a/sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveContext.scala
+++ b/sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveContext.scala
@@ -429,7 +429,7 @@ private[hive] object HiveContext extends Logging {
| Location of the jars that should be used to instantiate the HiveMetastoreClient.
| This property can be one of three options: "
| 1. "builtin"
- | Use Hive ${hiveExecutionVersion}, which is bundled with the Spark assembly jar when
+ | Use Hive ${hiveExecutionVersion}, which is bundled with the Spark assembly when
| <code>-Phive</code> is enabled. When this option is chosen,
| <code>spark.sql.hive.metastore.version</code> must be either
| <code>${hiveExecutionVersion}</code> or not defined.
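The hunk is cut off before the remaining options, but for contrast with `builtin`, a classpath-based configuration (an assumption, not shown in this diff) would look roughly like:

```sh
# Hypothetical: build the metastore client from an explicit classpath of
# Hive jars instead of the version bundled with Spark.
./bin/spark-shell \
  --conf spark.sql.hive.metastore.version=1.2.1 \
  --conf "spark.sql.hive.metastore.jars=/path/to/hive/lib/*"
```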