author    hyukjinkwon <gurwls223@gmail.com>    2017-04-12 12:38:48 +0100
committer Sean Owen <sowen@cloudera.com>       2017-04-12 12:38:48 +0100
commit    ceaf77ae43a14e993ac6d1ff34b50256eacd6abb
tree      34bdb8c1ebbf4df008dc95a6c1234f203fa1dc49 /sql
parent    2e1fd46e12bf948490ece2caa73d227b6a924a14
[SPARK-18692][BUILD][DOCS] Test Java 8 unidoc build on Jenkins
## What changes were proposed in this pull request?

This PR proposes to run Spark unidoc as part of the tests to exercise the Javadoc 8 build, since the Javadoc 8 build is easily re-broken. There are several problems with this:

- It adds a little extra time to the test run. In my case, it took about 1.5 minutes more (`Elapsed :[94.8746569157]`). How this was measured is described under "How was this patch tested?".

- > One problem that I noticed was that Unidoc appeared to be processing test sources: if we can find a way to exclude those from being processed in the first place then that might significantly speed things up.

  (see joshrosen's [comment](https://issues.apache.org/jira/browse/SPARK-18692?focusedCommentId=15947627&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-15947627))

To make this automated build pass, this PR also fixes the existing Javadoc breaks, including the ones introduced by test code as described above. These fixes are similar to instances fixed previously; please refer to https://github.com/apache/spark/pull/15999 and https://github.com/apache/spark/pull/16013.

Note that this only fixes **errors**, not **warnings**. Please see my observation in https://github.com/apache/spark/pull/17389#issuecomment-288438704 about spurious errors caused by warnings.

## How was this patch tested?

Manually via `jekyll build` for the documentation build, and via running `./dev/run-tests`. The elapsed time was measured by manually adding `time.time()` as below:

```diff
 profiles_and_goals = build_profiles + sbt_goals
 print("[info] Building Spark unidoc (w/Hive 1.2.1) using SBT with these arguments: ", " ".join(profiles_and_goals))
+import time
+st = time.time()
 exec_sbt(profiles_and_goals)
+print("Elapsed :[%s]" % str(time.time() - st))
```

which produces

```
...
========================================================================
Building Unidoc API Documentation
========================================================================
...
[info] Main Java API documentation successful.
...
Elapsed :[94.8746569157]
...
```

Author: hyukjinkwon <gurwls223@gmail.com>

Closes #17477 from HyukjinKwon/SPARK-18692.
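For readers skimming the diffs below, most of the fixes fall into a few recurring categories: Scaladoc `[[...]]` links replaced with plain code spans (genjavadoc turns links that Javadoc cannot resolve into broken references), self-closing `<p/>` tags removed, HTML-special characters wrapped in `{@code ...}`, and mismatched `@param` names corrected. A small, made-up Java example of a doc comment written to survive Javadoc 8's doclint (the class and method are illustrative, not from this patch):

```java
public class DoclintCleanExample {
  /**
   * Starts the service.
   *
   * Paragraphs are separated by blank lines rather than a self-closing
   * {@literal <p/>} tag, which Javadoc 8 rejects as malformed HTML.
   *
   * Literal text containing HTML-special characters is wrapped in a code
   * tag, e.g. {@code cu=<username>&rn=<randomNumber>&s=<cookieSignature>},
   * rather than written raw, which Javadoc 8 would flag as bad HTML.
   */
  public void start() { }
}
```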
Diffstat (limited to 'sql')
-rw-r--r-- sql/catalyst/src/test/scala/org/apache/spark/sql/RandomDataGenerator.scala | 8
-rw-r--r-- sql/catalyst/src/test/scala/org/apache/spark/sql/UnsafeProjectionBenchmark.scala | 2
-rw-r--r-- sql/core/src/main/scala/org/apache/spark/sql/catalog/Catalog.scala | 2
-rw-r--r-- sql/core/src/test/scala/org/apache/spark/sql/DatasetSerializerRegistratorSuite.scala | 4
-rw-r--r-- sql/core/src/test/scala/org/apache/spark/sql/streaming/FileStreamSourceSuite.scala | 4
-rw-r--r-- sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamSuite.scala | 4
-rw-r--r-- sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamingQuerySuite.scala | 12
-rw-r--r-- sql/core/src/test/scala/org/apache/spark/sql/test/SQLTestUtils.scala | 18
-rw-r--r-- sql/core/src/test/scala/org/apache/spark/sql/test/TestSQLContext.scala | 2
-rw-r--r-- sql/hive-thriftserver/src/main/java/org/apache/hive/service/Service.java | 2
-rw-r--r-- sql/hive-thriftserver/src/main/java/org/apache/hive/service/ServiceOperations.java | 12
-rw-r--r-- sql/hive-thriftserver/src/main/java/org/apache/hive/service/auth/HttpAuthUtils.java | 2
-rw-r--r-- sql/hive-thriftserver/src/main/java/org/apache/hive/service/auth/PasswdAuthenticationProvider.java | 2
-rw-r--r-- sql/hive-thriftserver/src/main/java/org/apache/hive/service/auth/TSetIpAddressProcessor.java | 9
-rw-r--r-- sql/hive-thriftserver/src/main/java/org/apache/hive/service/cli/CLIServiceUtils.java | 2
-rw-r--r-- sql/hive-thriftserver/src/main/java/org/apache/hive/service/cli/operation/ClassicTableTypeMapping.java | 6
-rw-r--r-- sql/hive-thriftserver/src/main/java/org/apache/hive/service/cli/operation/TableTypeMapping.java | 2
-rw-r--r-- sql/hive-thriftserver/src/main/java/org/apache/hive/service/cli/session/SessionManager.java | 4
-rw-r--r-- sql/hive-thriftserver/src/main/java/org/apache/hive/service/server/ThreadFactoryWithGarbageCleanup.java | 6
-rw-r--r-- sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveInspectors.scala | 4
-rw-r--r-- sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/HiveQueryFileTest.scala | 2
-rw-r--r-- sql/hive/src/test/scala/org/apache/spark/sql/hive/orc/OrcTest.scala | 4
22 files changed, 60 insertions, 53 deletions
diff --git a/sql/catalyst/src/test/scala/org/apache/spark/sql/RandomDataGenerator.scala b/sql/catalyst/src/test/scala/org/apache/spark/sql/RandomDataGenerator.scala
index 8508697995..8ae3ff5043 100644
--- a/sql/catalyst/src/test/scala/org/apache/spark/sql/RandomDataGenerator.scala
+++ b/sql/catalyst/src/test/scala/org/apache/spark/sql/RandomDataGenerator.scala
@@ -117,11 +117,11 @@ object RandomDataGenerator {
}
/**
- * Returns a function which generates random values for the given [[DataType]], or `None` if no
+ * Returns a function which generates random values for the given `DataType`, or `None` if no
* random data generator is defined for that data type. The generated values will use an external
- * representation of the data type; for example, the random generator for [[DateType]] will return
- * instances of [[java.sql.Date]] and the generator for [[StructType]] will return a [[Row]].
- * For a [[UserDefinedType]] for a class X, an instance of class X is returned.
+ * representation of the data type; for example, the random generator for `DateType` will return
+ * instances of [[java.sql.Date]] and the generator for `StructType` will return a [[Row]].
+ * For a `UserDefinedType` for a class X, an instance of class X is returned.
*
* @param dataType the type to generate values for
* @param nullable whether null values should be generated
diff --git a/sql/catalyst/src/test/scala/org/apache/spark/sql/UnsafeProjectionBenchmark.scala b/sql/catalyst/src/test/scala/org/apache/spark/sql/UnsafeProjectionBenchmark.scala
index a6d9040938..769addf3b2 100644
--- a/sql/catalyst/src/test/scala/org/apache/spark/sql/UnsafeProjectionBenchmark.scala
+++ b/sql/catalyst/src/test/scala/org/apache/spark/sql/UnsafeProjectionBenchmark.scala
@@ -24,7 +24,7 @@ import org.apache.spark.sql.types._
import org.apache.spark.util.Benchmark
/**
- * Benchmark [[UnsafeProjection]] for fixed-length/primitive-type fields.
+ * Benchmark `UnsafeProjection` for fixed-length/primitive-type fields.
*/
object UnsafeProjectionBenchmark {
diff --git a/sql/core/src/main/scala/org/apache/spark/sql/catalog/Catalog.scala b/sql/core/src/main/scala/org/apache/spark/sql/catalog/Catalog.scala
index 074952ff79..7e5da012f8 100644
--- a/sql/core/src/main/scala/org/apache/spark/sql/catalog/Catalog.scala
+++ b/sql/core/src/main/scala/org/apache/spark/sql/catalog/Catalog.scala
@@ -510,7 +510,7 @@ abstract class Catalog {
def refreshTable(tableName: String): Unit
/**
- * Invalidates and refreshes all the cached data (and the associated metadata) for any [[Dataset]]
+ * Invalidates and refreshes all the cached data (and the associated metadata) for any `Dataset`
* that contains the given data source path. Path matching is by prefix, i.e. "/" would invalidate
* everything that is cached.
*
diff --git a/sql/core/src/test/scala/org/apache/spark/sql/DatasetSerializerRegistratorSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/DatasetSerializerRegistratorSuite.scala
index 0f3d0cefe3..92c5656f65 100644
--- a/sql/core/src/test/scala/org/apache/spark/sql/DatasetSerializerRegistratorSuite.scala
+++ b/sql/core/src/test/scala/org/apache/spark/sql/DatasetSerializerRegistratorSuite.scala
@@ -56,7 +56,9 @@ object TestRegistrator {
def apply(): TestRegistrator = new TestRegistrator()
}
-/** A [[Serializer]] that takes a [[KryoData]] and serializes it as KryoData(0). */
+/**
+ * A `Serializer` that takes a [[KryoData]] and serializes it as KryoData(0).
+ */
class ZeroKryoDataSerializer extends Serializer[KryoData] {
override def write(kryo: Kryo, output: Output, t: KryoData): Unit = {
output.writeInt(0)
diff --git a/sql/core/src/test/scala/org/apache/spark/sql/streaming/FileStreamSourceSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/streaming/FileStreamSourceSuite.scala
index 26967782f7..2108b118bf 100644
--- a/sql/core/src/test/scala/org/apache/spark/sql/streaming/FileStreamSourceSuite.scala
+++ b/sql/core/src/test/scala/org/apache/spark/sql/streaming/FileStreamSourceSuite.scala
@@ -44,8 +44,8 @@ abstract class FileStreamSourceTest
import testImplicits._
/**
- * A subclass [[AddData]] for adding data to files. This is meant to use the
- * [[FileStreamSource]] actually being used in the execution.
+ * A subclass `AddData` for adding data to files. This is meant to use the
+ * `FileStreamSource` actually being used in the execution.
*/
abstract class AddFileData extends AddData {
override def addData(query: Option[StreamExecution]): (Source, Offset) = {
diff --git a/sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamSuite.scala
index 5ab9dc2bc7..13fe51a557 100644
--- a/sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamSuite.scala
+++ b/sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamSuite.scala
@@ -569,7 +569,7 @@ class ThrowingIOExceptionLikeHadoop12074 extends FakeSource {
object ThrowingIOExceptionLikeHadoop12074 {
/**
- * A latch to allow the user to wait until [[ThrowingIOExceptionLikeHadoop12074.createSource]] is
+ * A latch to allow the user to wait until `ThrowingIOExceptionLikeHadoop12074.createSource` is
* called.
*/
@volatile var createSourceLatch: CountDownLatch = null
@@ -600,7 +600,7 @@ class ThrowingInterruptedIOException extends FakeSource {
object ThrowingInterruptedIOException {
/**
- * A latch to allow the user to wait until [[ThrowingInterruptedIOException.createSource]] is
+ * A latch to allow the user to wait until `ThrowingInterruptedIOException.createSource` is
* called.
*/
@volatile var createSourceLatch: CountDownLatch = null
diff --git a/sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamingQuerySuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamingQuerySuite.scala
index 2ebbfcd22b..b69536ed37 100644
--- a/sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamingQuerySuite.scala
+++ b/sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamingQuerySuite.scala
@@ -642,8 +642,10 @@ class StreamingQuerySuite extends StreamTest with BeforeAndAfter with Logging wi
*
* @param expectedBehavior Expected behavior (not blocked, blocked, or exception thrown)
* @param timeoutMs Timeout in milliseconds
- * When timeoutMs <= 0, awaitTermination() is tested (i.e. w/o timeout)
- * When timeoutMs > 0, awaitTermination(timeoutMs) is tested
+ * When timeoutMs is less than or equal to 0, awaitTermination() is
+ * tested (i.e. w/o timeout)
+ * When timeoutMs is greater than 0, awaitTermination(timeoutMs) is
+ * tested
* @param expectedReturnValue Expected return value when awaitTermination(timeoutMs) is used
*/
case class TestAwaitTermination(
@@ -667,8 +669,10 @@ class StreamingQuerySuite extends StreamTest with BeforeAndAfter with Logging wi
*
* @param expectedBehavior Expected behavior (not blocked, blocked, or exception thrown)
* @param timeoutMs Timeout in milliseconds
- * When timeoutMs <= 0, awaitTermination() is tested (i.e. w/o timeout)
- * When timeoutMs > 0, awaitTermination(timeoutMs) is tested
+ * When timeoutMs is less than or equal to 0, awaitTermination() is
+ * tested (i.e. w/o timeout)
+ * When timeoutMs is greater than 0, awaitTermination(timeoutMs) is
+ * tested
* @param expectedReturnValue Expected return value when awaitTermination(timeoutMs) is used
*/
def assertOnQueryCondition(
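The `timeoutMs` convention documented in the hunks above dispatches between the two public overloads of `StreamingQuery.awaitTermination`. A minimal usage sketch of those overloads, assuming a placeholder socket source on localhost:9999 (host, port, and class name are made up for illustration):

```java
import org.apache.spark.sql.Dataset;
import org.apache.spark.sql.Row;
import org.apache.spark.sql.SparkSession;
import org.apache.spark.sql.streaming.StreamingQuery;

public class AwaitTerminationSketch {
  public static void main(String[] args) throws Exception {
    SparkSession spark = SparkSession.builder()
        .master("local[2]")
        .appName("await-termination-sketch")
        .getOrCreate();

    // Placeholder source: assumes something is listening on localhost:9999.
    Dataset<Row> lines = spark.readStream()
        .format("socket")
        .option("host", "localhost")
        .option("port", 9999)
        .load();

    StreamingQuery query = lines.writeStream().format("console").start();

    // Timed overload (the "timeoutMs is greater than 0" case above): returns
    // true if the query terminated within the timeout, false otherwise.
    boolean terminated = query.awaitTermination(1000L);

    if (!terminated) {
      query.stop();
      // No-arg overload (the "timeoutMs is less than or equal to 0" case
      // above): blocks until the query stops; returns promptly here because
      // of the stop() call.
      query.awaitTermination();
    }

    spark.stop();
  }
}
```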
diff --git a/sql/core/src/test/scala/org/apache/spark/sql/test/SQLTestUtils.scala b/sql/core/src/test/scala/org/apache/spark/sql/test/SQLTestUtils.scala
index cab219216d..6a4cc95d36 100644
--- a/sql/core/src/test/scala/org/apache/spark/sql/test/SQLTestUtils.scala
+++ b/sql/core/src/test/scala/org/apache/spark/sql/test/SQLTestUtils.scala
@@ -41,11 +41,11 @@ import org.apache.spark.util.{UninterruptibleThread, Utils}
/**
* Helper trait that should be extended by all SQL test suites.
*
- * This allows subclasses to plugin a custom [[SQLContext]]. It comes with test data
+ * This allows subclasses to plugin a custom `SQLContext`. It comes with test data
* prepared in advance as well as all implicit conversions used extensively by dataframes.
- * To use implicit methods, import `testImplicits._` instead of through the [[SQLContext]].
+ * To use implicit methods, import `testImplicits._` instead of through the `SQLContext`.
*
- * Subclasses should *not* create [[SQLContext]]s in the test suite constructor, which is
+ * Subclasses should *not* create `SQLContext`s in the test suite constructor, which is
* prone to leaving multiple overlapping [[org.apache.spark.SparkContext]]s in the same JVM.
*/
private[sql] trait SQLTestUtils
@@ -65,7 +65,7 @@ private[sql] trait SQLTestUtils
* A helper object for importing SQL implicits.
*
* Note that the alternative of importing `spark.implicits._` is not possible here.
- * This is because we create the [[SQLContext]] immediately before the first test is run,
+ * This is because we create the `SQLContext` immediately before the first test is run,
* but the implicits import is needed in the constructor.
*/
protected object testImplicits extends SQLImplicits {
@@ -73,7 +73,7 @@ private[sql] trait SQLTestUtils
}
/**
- * Materialize the test data immediately after the [[SQLContext]] is set up.
+ * Materialize the test data immediately after the `SQLContext` is set up.
* This is necessary if the data is accessed by name but not through direct reference.
*/
protected def setupTestData(): Unit = {
@@ -250,8 +250,8 @@ private[sql] trait SQLTestUtils
}
/**
- * Turn a logical plan into a [[DataFrame]]. This should be removed once we have an easier
- * way to construct [[DataFrame]] directly out of local data without relying on implicits.
+ * Turn a logical plan into a `DataFrame`. This should be removed once we have an easier
+ * way to construct `DataFrame` directly out of local data without relying on implicits.
*/
protected implicit def logicalPlanToSparkQuery(plan: LogicalPlan): DataFrame = {
Dataset.ofRows(spark, plan)
@@ -271,7 +271,9 @@ private[sql] trait SQLTestUtils
}
}
- /** Run a test on a separate [[UninterruptibleThread]]. */
+ /**
+ * Run a test on a separate `UninterruptibleThread`.
+ */
protected def testWithUninterruptibleThread(name: String, quietly: Boolean = false)
(body: => Unit): Unit = {
val timeoutMillis = 10000
diff --git a/sql/core/src/test/scala/org/apache/spark/sql/test/TestSQLContext.scala b/sql/core/src/test/scala/org/apache/spark/sql/test/TestSQLContext.scala
index b01977a238..959edf9a49 100644
--- a/sql/core/src/test/scala/org/apache/spark/sql/test/TestSQLContext.scala
+++ b/sql/core/src/test/scala/org/apache/spark/sql/test/TestSQLContext.scala
@@ -22,7 +22,7 @@ import org.apache.spark.sql.SparkSession
import org.apache.spark.sql.internal.{SessionState, SessionStateBuilder, SQLConf, WithTestConf}
/**
- * A special [[SparkSession]] prepared for testing.
+ * A special `SparkSession` prepared for testing.
*/
private[sql] class TestSparkSession(sc: SparkContext) extends SparkSession(sc) { self =>
def this(sparkConf: SparkConf) {
diff --git a/sql/hive-thriftserver/src/main/java/org/apache/hive/service/Service.java b/sql/hive-thriftserver/src/main/java/org/apache/hive/service/Service.java
index b95077cd62..0d0e3e4011 100644
--- a/sql/hive-thriftserver/src/main/java/org/apache/hive/service/Service.java
+++ b/sql/hive-thriftserver/src/main/java/org/apache/hive/service/Service.java
@@ -49,7 +49,7 @@ public interface Service {
* The transition must be from {@link STATE#NOTINITED} to {@link STATE#INITED} unless the
* operation failed and an exception was raised.
*
- * @param config
+ * @param conf
* the configuration of the service
*/
void init(HiveConf conf);
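The one-word change above fixes a `@param` tag naming a parameter (`config`) that does not exist on `init(HiveConf conf)`; Javadoc 8's doclint reports such dangling `@param` references as errors (the `TableTypeMapping.java` change below is the same class of fix). A contrived illustration of the rule, using a made-up class:

```java
public class ParamTagSketch {

  // Javadoc 8 (doclint) fails the build if the @param name below is changed
  // to "config": no parameter named "config" exists on this method.

  /**
   * Initializes the component.
   *
   * @param conf the configuration of the component
   */
  public void init(String conf) { }
}
```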
diff --git a/sql/hive-thriftserver/src/main/java/org/apache/hive/service/ServiceOperations.java b/sql/hive-thriftserver/src/main/java/org/apache/hive/service/ServiceOperations.java
index a2c580d6ac..c3219aabfc 100644
--- a/sql/hive-thriftserver/src/main/java/org/apache/hive/service/ServiceOperations.java
+++ b/sql/hive-thriftserver/src/main/java/org/apache/hive/service/ServiceOperations.java
@@ -51,7 +51,7 @@ public final class ServiceOperations {
/**
* Initialize a service.
- * <p/>
+ *
* The service state is checked <i>before</i> the operation begins.
* This process is <i>not</i> thread safe.
* @param service a service that must be in the state
@@ -69,7 +69,7 @@ public final class ServiceOperations {
/**
* Start a service.
- * <p/>
+ *
* The service state is checked <i>before</i> the operation begins.
* This process is <i>not</i> thread safe.
* @param service a service that must be in the state
@@ -86,7 +86,7 @@ public final class ServiceOperations {
/**
* Initialize then start a service.
- * <p/>
+ *
* The service state is checked <i>before</i> the operation begins.
* This process is <i>not</i> thread safe.
* @param service a service that must be in the state
@@ -102,9 +102,9 @@ public final class ServiceOperations {
/**
* Stop a service.
- * <p/>Do nothing if the service is null or not
- * in a state in which it can be/needs to be stopped.
- * <p/>
+ *
+ * Do nothing if the service is null or not in a state in which it can be/needs to be stopped.
+ *
* The service state is checked <i>before</i> the operation begins.
* This process is <i>not</i> thread safe.
* @param service a service or null
diff --git a/sql/hive-thriftserver/src/main/java/org/apache/hive/service/auth/HttpAuthUtils.java b/sql/hive-thriftserver/src/main/java/org/apache/hive/service/auth/HttpAuthUtils.java
index 5021528299..f7375ee707 100644
--- a/sql/hive-thriftserver/src/main/java/org/apache/hive/service/auth/HttpAuthUtils.java
+++ b/sql/hive-thriftserver/src/main/java/org/apache/hive/service/auth/HttpAuthUtils.java
@@ -89,7 +89,7 @@ public final class HttpAuthUtils {
* @param clientUserName Client User name.
* @return An unsigned cookie token generated from input parameters.
* The final cookie generated is of the following format :
- * cu=<username>&rn=<randomNumber>&s=<cookieSignature>
+ * {@code cu=<username>&rn=<randomNumber>&s=<cookieSignature>}
*/
public static String createCookieToken(String clientUserName) {
StringBuffer sb = new StringBuffer();
diff --git a/sql/hive-thriftserver/src/main/java/org/apache/hive/service/auth/PasswdAuthenticationProvider.java b/sql/hive-thriftserver/src/main/java/org/apache/hive/service/auth/PasswdAuthenticationProvider.java
index e2a6de165a..1af1c1d06e 100644
--- a/sql/hive-thriftserver/src/main/java/org/apache/hive/service/auth/PasswdAuthenticationProvider.java
+++ b/sql/hive-thriftserver/src/main/java/org/apache/hive/service/auth/PasswdAuthenticationProvider.java
@@ -26,7 +26,7 @@ public interface PasswdAuthenticationProvider {
* to authenticate users for their requests.
* If a user is to be granted, return nothing/throw nothing.
* When a user is to be disallowed, throw an appropriate {@link AuthenticationException}.
- * <p/>
+ *
* For an example implementation, see {@link LdapAuthenticationProviderImpl}.
*
* @param user The username received over the connection request
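The contract documented above is: return normally to grant access, throw `AuthenticationException` to deny it. A hypothetical implementation sketch (the class name and the hard-coded check are made up; only the interface and its `Authenticate` method come from the code being patched):

```java
import javax.security.sasl.AuthenticationException;

import org.apache.hive.service.auth.PasswdAuthenticationProvider;

public class StaticPasswdAuthenticationProvider implements PasswdAuthenticationProvider {
  @Override
  public void Authenticate(String user, String password) throws AuthenticationException {
    // Grant by returning normally; deny by throwing.
    if (!"expected-password".equals(password)) {
      throw new AuthenticationException("Authentication failed for user: " + user);
    }
  }
}
```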
diff --git a/sql/hive-thriftserver/src/main/java/org/apache/hive/service/auth/TSetIpAddressProcessor.java b/sql/hive-thriftserver/src/main/java/org/apache/hive/service/auth/TSetIpAddressProcessor.java
index 645e3e2bbd..9a61ad4994 100644
--- a/sql/hive-thriftserver/src/main/java/org/apache/hive/service/auth/TSetIpAddressProcessor.java
+++ b/sql/hive-thriftserver/src/main/java/org/apache/hive/service/auth/TSetIpAddressProcessor.java
@@ -31,12 +31,9 @@ import org.slf4j.LoggerFactory;
/**
* This class is responsible for setting the ipAddress for operations executed via HiveServer2.
- * <p>
- * <ul>
- * <li>IP address is only set for operations that calls listeners with hookContext</li>
- * <li>IP address is only set if the underlying transport mechanism is socket</li>
- * </ul>
- * </p>
+ *
+ * - IP address is only set for operations that calls listeners with hookContext
+ * - IP address is only set if the underlying transport mechanism is socket
*
* @see org.apache.hadoop.hive.ql.hooks.ExecuteWithHookContext
*/
diff --git a/sql/hive-thriftserver/src/main/java/org/apache/hive/service/cli/CLIServiceUtils.java b/sql/hive-thriftserver/src/main/java/org/apache/hive/service/cli/CLIServiceUtils.java
index 9d64b102e0..bf2380632f 100644
--- a/sql/hive-thriftserver/src/main/java/org/apache/hive/service/cli/CLIServiceUtils.java
+++ b/sql/hive-thriftserver/src/main/java/org/apache/hive/service/cli/CLIServiceUtils.java
@@ -38,7 +38,7 @@ public class CLIServiceUtils {
* Convert a SQL search pattern into an equivalent Java Regex.
*
* @param pattern input which may contain '%' or '_' wildcard characters, or
- * these characters escaped using {@link #getSearchStringEscape()}.
+ * these characters escaped using {@code getSearchStringEscape()}.
* @return replace %/_ with regex search characters, also handle escaped
* characters.
*/
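The conversion documented above ('%' matches any sequence, '_' matches any single character) can be sketched as follows. This is a simplified stand-in, not Hive's actual implementation, and it ignores the escape character for brevity:

```java
import java.util.regex.Pattern;

public final class LikePatternSketch {
  /** Converts a SQL search pattern into an equivalent Java regex
      (simplified: escaped wildcards are not handled). */
  static String patternToRegex(String pattern) {
    StringBuilder regex = new StringBuilder();
    for (char c : pattern.toCharArray()) {
      if (c == '%') {
        regex.append(".*");   // '%' matches any sequence of characters
      } else if (c == '_') {
        regex.append('.');    // '_' matches any single character
      } else {
        regex.append(Pattern.quote(String.valueOf(c)));  // literal character
      }
    }
    return regex.toString();
  }

  public static void main(String[] args) {
    System.out.println("spark_1".matches(patternToRegex("spark%")));  // true
  }
}
```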
diff --git a/sql/hive-thriftserver/src/main/java/org/apache/hive/service/cli/operation/ClassicTableTypeMapping.java b/sql/hive-thriftserver/src/main/java/org/apache/hive/service/cli/operation/ClassicTableTypeMapping.java
index 05a6bf9384..af36057bda 100644
--- a/sql/hive-thriftserver/src/main/java/org/apache/hive/service/cli/operation/ClassicTableTypeMapping.java
+++ b/sql/hive-thriftserver/src/main/java/org/apache/hive/service/cli/operation/ClassicTableTypeMapping.java
@@ -28,9 +28,9 @@ import org.apache.hadoop.hive.metastore.TableType;
/**
* ClassicTableTypeMapping.
* Classic table type mapping :
- * Managed Table ==> Table
- * External Table ==> Table
- * Virtual View ==> View
+ * Managed Table to Table
+ * External Table to Table
+ * Virtual View to View
*/
public class ClassicTableTypeMapping implements TableTypeMapping {
diff --git a/sql/hive-thriftserver/src/main/java/org/apache/hive/service/cli/operation/TableTypeMapping.java b/sql/hive-thriftserver/src/main/java/org/apache/hive/service/cli/operation/TableTypeMapping.java
index e392c459cf..e59d19ea6b 100644
--- a/sql/hive-thriftserver/src/main/java/org/apache/hive/service/cli/operation/TableTypeMapping.java
+++ b/sql/hive-thriftserver/src/main/java/org/apache/hive/service/cli/operation/TableTypeMapping.java
@@ -31,7 +31,7 @@ public interface TableTypeMapping {
/**
* Map hive's table type name to client's table type
- * @param clientTypeName
+ * @param hiveTypeName
* @return
*/
String mapToClientType(String hiveTypeName);
diff --git a/sql/hive-thriftserver/src/main/java/org/apache/hive/service/cli/session/SessionManager.java b/sql/hive-thriftserver/src/main/java/org/apache/hive/service/cli/session/SessionManager.java
index de066dd406..c1b3892f52 100644
--- a/sql/hive-thriftserver/src/main/java/org/apache/hive/service/cli/session/SessionManager.java
+++ b/sql/hive-thriftserver/src/main/java/org/apache/hive/service/cli/session/SessionManager.java
@@ -224,7 +224,9 @@ public class SessionManager extends CompositeService {
* The username passed to this method is the effective username.
* If withImpersonation is true (==doAs true) we wrap all the calls in HiveSession
* within a UGI.doAs, where UGI corresponds to the effective user.
- * @see org.apache.hive.service.cli.thrift.ThriftCLIService#getUserName()
+ *
+ * Please see {@code org.apache.hive.service.cli.thrift.ThriftCLIService.getUserName()} for
+ * more details.
*
* @param protocol
* @param username
diff --git a/sql/hive-thriftserver/src/main/java/org/apache/hive/service/server/ThreadFactoryWithGarbageCleanup.java b/sql/hive-thriftserver/src/main/java/org/apache/hive/service/server/ThreadFactoryWithGarbageCleanup.java
index fb8141a905..94f8126552 100644
--- a/sql/hive-thriftserver/src/main/java/org/apache/hive/service/server/ThreadFactoryWithGarbageCleanup.java
+++ b/sql/hive-thriftserver/src/main/java/org/apache/hive/service/server/ThreadFactoryWithGarbageCleanup.java
@@ -30,12 +30,12 @@ import org.apache.hadoop.hive.metastore.RawStore;
* in custom cleanup code to be called before this thread is GC-ed.
* Currently cleans up the following:
* 1. ThreadLocal RawStore object:
- * In case of an embedded metastore, HiveServer2 threads (foreground & background)
+ * In case of an embedded metastore, HiveServer2 threads (foreground and background)
* end up caching a ThreadLocal RawStore object. The ThreadLocal RawStore object has
- * an instance of PersistenceManagerFactory & PersistenceManager.
+ * an instance of PersistenceManagerFactory and PersistenceManager.
* The PersistenceManagerFactory keeps a cache of PersistenceManager objects,
* which are only removed when PersistenceManager#close method is called.
- * HiveServer2 uses ExecutorService for managing thread pools for foreground & background threads.
+ * HiveServer2 uses ExecutorService for managing thread pools for foreground and background threads.
* ExecutorService unfortunately does not provide any hooks to be called,
* when a thread from the pool is terminated.
* As a solution, we're using this ThreadFactory to keep a cache of RawStore objects per thread.
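A simplified sketch of the pattern this class implements: a `ThreadFactory` whose threads run caller-supplied cleanup code. Note the real class hooks cleanup to thread garbage collection via a custom Thread subclass, whereas this sketch, under that simplifying assumption, runs cleanup when the task completes; the class name and placeholder cleanup are made up:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.ThreadFactory;

public class CleanupThreadFactorySketch implements ThreadFactory {
  private final Runnable cleanup;

  public CleanupThreadFactorySketch(Runnable cleanup) {
    this.cleanup = cleanup;
  }

  @Override
  public Thread newThread(Runnable task) {
    // Run the task, then always run the per-thread cleanup.
    return new Thread(() -> {
      try {
        task.run();
      } finally {
        cleanup.run();
      }
    });
  }

  public static void main(String[] args) {
    ExecutorService pool = Executors.newFixedThreadPool(
        2, new CleanupThreadFactorySketch(
            () -> System.out.println("cleaning up thread-local state")));
    pool.submit(() -> System.out.println("task done"));
    pool.shutdown();  // worker threads exit, triggering the cleanup hook
  }
}
```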
diff --git a/sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveInspectors.scala b/sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveInspectors.scala
index 6f5b923cd4..4dec2f71b8 100644
--- a/sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveInspectors.scala
+++ b/sql/hive/src/main/scala/org/apache/spark/sql/hive/HiveInspectors.scala
@@ -53,8 +53,8 @@ import org.apache.spark.unsafe.types.UTF8String
* java.sql.Date
* java.sql.Timestamp
* Complex Types =>
- * Map: [[MapData]]
- * List: [[ArrayData]]
+ * Map: `MapData`
+ * List: `ArrayData`
* Struct: [[org.apache.spark.sql.catalyst.InternalRow]]
* Union: NOT SUPPORTED YET
* The Complex types plays as a container, which can hold arbitrary data types.
diff --git a/sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/HiveQueryFileTest.scala b/sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/HiveQueryFileTest.scala
index e772324a57..bb4ce6d3aa 100644
--- a/sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/HiveQueryFileTest.scala
+++ b/sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/HiveQueryFileTest.scala
@@ -24,7 +24,7 @@ import org.apache.spark.sql.catalyst.util._
/**
* A framework for running the query tests that are listed as a set of text files.
*
- * TestSuites that derive from this class must provide a map of testCaseName -> testCaseFiles
+ * TestSuites that derive from this class must provide a map of testCaseName to testCaseFiles
* that should be included. Additionally, there is support for whitelisting and blacklisting
* tests as development progresses.
*/
diff --git a/sql/hive/src/test/scala/org/apache/spark/sql/hive/orc/OrcTest.scala b/sql/hive/src/test/scala/org/apache/spark/sql/hive/orc/OrcTest.scala
index 7226ed521e..a2f08c5ba7 100644
--- a/sql/hive/src/test/scala/org/apache/spark/sql/hive/orc/OrcTest.scala
+++ b/sql/hive/src/test/scala/org/apache/spark/sql/hive/orc/OrcTest.scala
@@ -43,7 +43,7 @@ private[sql] trait OrcTest extends SQLTestUtils with TestHiveSingleton {
}
/**
- * Writes `data` to a Orc file and reads it back as a [[DataFrame]],
+ * Writes `data` to a Orc file and reads it back as a `DataFrame`,
* which is then passed to `f`. The Orc file will be deleted after `f` returns.
*/
protected def withOrcDataFrame[T <: Product: ClassTag: TypeTag]
@@ -53,7 +53,7 @@ private[sql] trait OrcTest extends SQLTestUtils with TestHiveSingleton {
}
/**
- * Writes `data` to a Orc file, reads it back as a [[DataFrame]] and registers it as a
+ * Writes `data` to a Orc file, reads it back as a `DataFrame` and registers it as a
* temporary table named `tableName`, then call `f`. The temporary table together with the
* Orc file will be dropped/deleted after `f` returns.
*/