| author | Cheng Lian <lian@databricks.com> | 2016-06-29 22:50:53 -0700 |
|---|---|---|
| committer | Xiangrui Meng <meng@databricks.com> | 2016-06-29 22:50:53 -0700 |
| commit | bde1d6a61593aeb62370f526542cead94919b0c0 | |
| tree | 8630ee0675f1be8b45b8f4d72a9ced10f6d2eb80 /docs/sql-programming-guide.md | |
| parent | d3af6731fa270842818ed91d6b4d14708ddae2db | |
[SPARK-16294][SQL] Labelling support for the include_example Jekyll plugin
## What changes were proposed in this pull request?
This PR adds labelling support for the `include_example` Jekyll plugin, so that we may split a single source file into multiple line blocks with different labels, and include them as separate code snippets in the generated HTML page.
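A minimal sketch of the idea, assuming the `// $example on:<label>$` / `// $example off:<label>$` comment markers this PR introduces (the file name, labels, and code below are illustrative, not taken from the actual change):

```scala
// Hypothetical example source file with two labelled blocks; each block can
// be pulled into the docs independently via its label.
import org.apache.spark.sql.SparkSession

object SparkSQLExample {
  def main(args: Array[String]): Unit = {
    // $example on:init_session$
    val spark = SparkSession.builder()
      .appName("Spark SQL example")
      .getOrCreate()
    // $example off:init_session$

    // $example on:create_df$
    // Read a JSON file into a DataFrame and print its contents.
    val df = spark.read.json("examples/src/main/resources/people.json")
    df.show()
    // $example off:create_df$

    spark.stop()
  }
}
```

A docs page would then reference each block with its own tag, e.g. `{% include_example init_session scala/org/apache/spark/examples/sql/SparkSQLExample.scala %}`, and only the lines between the matching markers are rendered.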
## How was this patch tested?
Manually tested.
<img width="923" alt="screenshot at jun 29 19-53-21" src="https://cloud.githubusercontent.com/assets/230655/16451099/66a76db2-3e33-11e6-84fb-63104c2f0688.png">
Author: Cheng Lian <lian@databricks.com>
Closes #13972 from liancheng/include-example-with-labels.
Diffstat (limited to 'docs/sql-programming-guide.md')
-rw-r--r--  docs/sql-programming-guide.md | 41
1 file changed, 6 insertions(+), 35 deletions(-)
```diff
diff --git a/docs/sql-programming-guide.md b/docs/sql-programming-guide.md
index 6c6bc8db6a..68419e1331 100644
--- a/docs/sql-programming-guide.md
+++ b/docs/sql-programming-guide.md
@@ -63,52 +63,23 @@ Throughout this document, we will often refer to Scala/Java Datasets of `Row`s a
 <div class="codetabs">
 <div data-lang="scala" markdown="1">
 
-The entry point into all functionality in Spark is the [`SparkSession`](api/scala/index.html#org.apache.spark.sql.SparkSession) class. To create a basic `SparkSession`, just use `SparkSession.build()`:
-
-{% highlight scala %}
-import org.apache.spark.sql.SparkSession
-
-val spark = SparkSession.build()
-  .master("local")
-  .appName("Word Count")
-  .config("spark.some.config.option", "some-value")
-  .getOrCreate()
-
-// this is used to implicitly convert an RDD to a DataFrame.
-import spark.implicits._
-{% endhighlight %}
+The entry point into all functionality in Spark is the [`SparkSession`](api/scala/index.html#org.apache.spark.sql.SparkSession) class. To create a basic `SparkSession`, just use `SparkSession.builder()`:
 
+{% include_example init_session scala/org/apache/spark/examples/sql/RDDRelation.scala %}
 </div>
 
 <div data-lang="java" markdown="1">
 
-The entry point into all functionality in Spark is the [`SparkSession`](api/java/index.html#org.apache.spark.sql.SparkSession) class. To create a basic `SparkSession`, just use `SparkSession.build()`:
+The entry point into all functionality in Spark is the [`SparkSession`](api/java/index.html#org.apache.spark.sql.SparkSession) class. To create a basic `SparkSession`, just use `SparkSession.builder()`:
 
-{% highlight java %}
-import org.apache.spark.sql.SparkSession
-
-SparkSession spark = SparkSession.build()
-  .master("local")
-  .appName("Word Count")
-  .config("spark.some.config.option", "some-value")
-  .getOrCreate();
-{% endhighlight %}
+{% include_example init_session java/org/apache/spark/examples/sql/JavaSparkSQL.java %}
 </div>
 
 <div data-lang="python" markdown="1">
 
-The entry point into all functionality in Spark is the [`SparkSession`](api/python/pyspark.sql.html#pyspark.sql.SparkSession) class. To create a basic `SparkSession`, just use `SparkSession.build`:
-
-{% highlight python %}
-from pyspark.sql import SparkSession
-
-spark = SparkSession.build \
-  .master("local") \
-  .appName("Word Count") \
-  .config("spark.some.config.option", "some-value") \
-  .getOrCreate()
-{% endhighlight %}
+The entry point into all functionality in Spark is the [`SparkSession`](api/python/pyspark.sql.html#pyspark.sql.SparkSession) class. To create a basic `SparkSession`, just use `SparkSession.builder`:
 
+{% include_example init_session python/sql.py %}
 </div>
 
 <div data-lang="r" markdown="1">
```
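For the new `include_example` tags above to render anything, the referenced example sources need matching labelled regions. A sketch of what the `init_session` block in `RDDRelation.scala` might look like, mirroring the inline code removed above (the marker syntax and the surrounding scaffolding are assumptions, not part of this diff):

```scala
import org.apache.spark.sql.SparkSession

object RDDRelation {
  def main(args: Array[String]): Unit = {
    // $example on:init_session$
    val spark = SparkSession.builder()
      .master("local")
      .appName("Word Count")
      .config("spark.some.config.option", "some-value")
      .getOrCreate()

    // Used to implicitly convert an RDD to a DataFrame.
    import spark.implicits._
    // $example off:init_session$

    spark.stop()
  }
}
```

The plugin would then extract only the lines between the matching `init_session` markers when rendering the Scala tab, leaving the rest of the file out of the docs.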