author | Dongjoon Hyun <dongjoon@apache.org> | 2016-04-02 17:50:40 -0700
---|---|---
committer | Reynold Xin <rxin@databricks.com> | 2016-04-02 17:50:40 -0700
commit | 4a6e78abd9d5edc4a5092738dff0006bbe202a89 (patch)
tree | 5ecbee86bb057139128b65b0f99405c51e637e38 /external/flume
parent | f705037617d55bb479ec60bcb1e55c736224be94 (diff)
[MINOR][DOCS] Use multi-line JavaDoc comments in Scala code.
## What changes were proposed in this pull request?
This PR converts all Scala-style multiline comments to Java-style (Javadoc) multiline comments in the Scala code.
(All comment-only changes over 77 files: +786 lines, −747 lines)
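For context, the two styles differ only in the layout of the opening line and the indentation of the continuation asterisks. A minimal sketch (the object and methods below are hypothetical, not taken from the patch):

```scala
// Illustration only: hypothetical code showing the two comment styles.
object CommentStyleDemo {

  /** Scala-style multiline comment: the documentation text starts on the
    * same line as the opening `/**`, and continuation lines align the `*`
    * under the second asterisk.
    */
  def scalaStyled(a: Int, b: Int): Int = a + b

  /**
   * Java-style (Javadoc) multiline comment: the opening `/**` stands alone,
   * and each continuation line starts with a `*` aligned under the first
   * asterisk. This is the style the PR converts the codebase to.
   */
  def javaStyled(a: Int, b: Int): Int = a + b
}
```

Both comment forms are legal Scala; the change is purely stylistic and produces no behavioral difference, which is why the patch is comment-only.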
## How was this patch tested?
Manual.
Author: Dongjoon Hyun <dongjoon@apache.org>
Closes #12130 from dongjoon-hyun/use_multiine_javadoc_comments.
Diffstat (limited to 'external/flume')
-rw-r--r-- | external/flume/src/main/scala/org/apache/spark/streaming/flume/FlumeInputDStream.scala | 15
1 file changed, 8 insertions(+), 7 deletions(-)
```diff
diff --git a/external/flume/src/main/scala/org/apache/spark/streaming/flume/FlumeInputDStream.scala b/external/flume/src/main/scala/org/apache/spark/streaming/flume/FlumeInputDStream.scala
index 7dc9606913..6e7c3f358e 100644
--- a/external/flume/src/main/scala/org/apache/spark/streaming/flume/FlumeInputDStream.scala
+++ b/external/flume/src/main/scala/org/apache/spark/streaming/flume/FlumeInputDStream.scala
@@ -185,13 +185,14 @@ class FlumeReceiver(
   override def preferredLocation: Option[String] = Option(host)
 
-  /** A Netty Pipeline factory that will decompress incoming data from
-   * and the Netty client and compress data going back to the client.
-   *
-   * The compression on the return is required because Flume requires
-   * a successful response to indicate it can remove the event/batch
-   * from the configured channel
-   */
+  /**
+   * A Netty Pipeline factory that will decompress incoming data from
+   * and the Netty client and compress data going back to the client.
+   *
+   * The compression on the return is required because Flume requires
+   * a successful response to indicate it can remove the event/batch
+   * from the configured channel
+   */
   private[streaming] class CompressionChannelPipelineFactory
     extends ChannelPipelineFactory {
 
     def getPipeline(): ChannelPipeline = {
```