- author: Dongjoon Hyun <dongjoon@apache.org>, 2016-06-27 21:58:16 -0700
- committer: Reynold Xin <rxin@databricks.com>, 2016-06-27 21:58:16 -0700
- commit: 50fdd866b55cb9b51427095e56b2aafea12a7c23
- tree: 52c34332185bdfe32992b19e29779f53bb6bf60a /project/SparkBuild.scala
- parent: 1b7fc5817203db5a56489b289fb1a0dd44b2e26b
[SPARK-16111][SQL][DOC] Hide SparkOrcNewRecordReader in API docs
## What changes were proposed in this pull request?
Currently, the Spark Scala/Java API documentation shows the **org.apache.hadoop.hive.ql.io.orc** package at the top:
http://spark.apache.org/docs/2.0.0-preview/api/scala/index.html#org.apache.spark.package
http://spark.apache.org/docs/2.0.0-preview/api/java/index.html
This PR hides `SparkOrcNewRecordReader` from API docs.
## How was this patch tested?
Manual (`build/sbt unidoc`).
The following screenshots show the API docs after this PR.
**Scala API doc**
![Scala API doc](https://app.box.com/representation/file_version_75673952621/image_2048/1.png?shared_name=2mdqydygs8le6q9x00356898662zjwz6)
**Java API doc**
![Java API doc](https://app.box.com/representation/file_version_75673951725/image_2048/1.png?shared_name=iv23eeqy3avvkqz203v9ygfaqeyml85j)
Author: Dongjoon Hyun <dongjoon@apache.org>
Closes #13914 from dongjoon-hyun/SPARK-16111.
Diffstat (limited to 'project/SparkBuild.scala')
-rw-r--r-- | project/SparkBuild.scala | 4
1 file changed, 3 insertions(+), 1 deletion(-)
```diff
diff --git a/project/SparkBuild.scala b/project/SparkBuild.scala
index 4b44469576..4c01ad3c33 100644
--- a/project/SparkBuild.scala
+++ b/project/SparkBuild.scala
@@ -720,6 +720,7 @@ object Unidoc {
     // Skip class names containing $ and some internal packages in Javadocs
     unidocAllSources in (JavaUnidoc, unidoc) := {
       ignoreUndocumentedPackages((unidocAllSources in (JavaUnidoc, unidoc)).value)
+        .map(_.filterNot(_.getCanonicalPath.contains("org/apache/hadoop")))
     },

     // Javadoc options: create a window title, and group key packages on index page
@@ -733,7 +734,8 @@ object Unidoc {
     unidocSourceBase := s"https://github.com/apache/spark/tree/v${version.value}",

     scalacOptions in (ScalaUnidoc, unidoc) ++= Seq(
-      "-groups" // Group similar methods together based on the @group annotation.
+      "-groups", // Group similar methods together based on the @group annotation.
+      "-skip-packages", "org.apache.hadoop"
     ) ++ (
       // Add links to sources when generating Scaladoc for a non-snapshot release
       if (!isSnapshot.value) {
```
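For reference, the patch combines two independent mechanisms, which can be sketched as a standalone sbt-unidoc settings fragment. This is an illustrative sketch, not the exact Spark build code: the `docSettings` name is made up, and the import paths assume the sbt 0.13-era sbt-unidoc plugin that SparkBuild.scala used at the time.

```scala
// Illustrative sketch (not the actual Spark build): hiding a package from
// both the Java and Scala unified API docs with sbt-unidoc.
import sbtunidoc.Plugin._            // old-style sbt-unidoc plugin (assumption)
import sbtunidoc.Plugin.UnidocKeys._

lazy val docSettings = Seq(
  // Javadoc side: javadoc has no package-exclusion option here, so filter
  // the collected source files by path before javadoc ever sees them.
  unidocAllSources in (JavaUnidoc, unidoc) := {
    (unidocAllSources in (JavaUnidoc, unidoc)).value
      .map(_.filterNot(_.getCanonicalPath.contains("org/apache/hadoop")))
  },

  // Scaladoc side: scaladoc supports -skip-packages natively; it takes a
  // colon-separated list of package prefixes to omit from the output.
  scalacOptions in (ScalaUnidoc, unidoc) ++= Seq(
    "-skip-packages", "org.apache.hadoop"
  )
)
```

The asymmetry is the point of the patch: a single switch is not enough because the Javadoc and Scaladoc generators are configured through different channels, so the package must be suppressed once per toolchain.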