author | hyukjinkwon <gurwls223@gmail.com> | 2016-04-01 22:51:47 -0700 |
---|---|---|
committer | Reynold Xin <rxin@databricks.com> | 2016-04-01 22:51:47 -0700 |
commit | d7982a3a9aa804e7e3a2004335e7f314867a5f8a (patch) | |
tree | d9c7604c13525a96c564e34c51b6e70648bc7bdf /sql/core/src/test/scala | |
parent | f414154418c2291448954b9f0890d592b2d823ae (diff) | |
[MINOR][SQL] Fix comment style and correct several styles and nits in CSV data source
## What changes were proposed in this pull request?
While preparing a PR (which turned out not to be an issue in the end), I corrected some style nits.
So I dropped the other changes and kept only the coding-style corrections.
- According to the [scala-style-guide#documentation-style](https://github.com/databricks/scala-style-guide#documentation-style), ScalaDoc-style comments are discouraged.
>```scala
>/** This is a correct one-liner, short description. */
>
>/**
> * This is correct multi-line JavaDoc comment. And
> * this is my second line, and if I keep typing, this would be
> * my third line.
> */
>
>/** In Spark, we don't use the ScalaDoc style so this
> * is not correct.
> */
>```
- Double newlines between consecutive methods were removed. According to [scala-style-guide#blank-lines-vertical-whitespace](https://github.com/databricks/scala-style-guide#blank-lines-vertical-whitespace), a single blank line appears
>Between consecutive members (or initializers) of a class: fields, constructors, methods, nested classes, static initializers, instance initializers.
- Removed useless parentheses in tests
- Used `mapPartitions()` instead of `mapPartitionsWithIndex()` where the partition index is unused.
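The last two bullet points can be sketched together in a short Scala snippet (the object and method names here are illustrative, not from the Spark codebase):

```scala
// Hypothetical illustration of the style points above.
object StyleExamples {
  // Consecutive members are separated by a single blank line, not two.
  def first(): Int = 1

  def second(): Int = 2

  // When the partition index is not needed, prefer `mapPartitions` over
  // `mapPartitionsWithIndex`. On a plain collection the same idea reads:
  //   data.zipWithIndex.map { case (x, _) => x + 1 }  // index computed but unused
  //   data.map(_ + 1)                                 // simpler, preferred
  def incrementAll(data: Seq[Int]): Seq[Int] = data.map(_ + 1)
}
```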
## How was this patch tested?
Existing unit tests were run, and `dev/run_tests` was used for style checks.
Author: hyukjinkwon <gurwls223@gmail.com>
Closes #12109 from HyukjinKwon/SPARK-14271.
Diffstat (limited to 'sql/core/src/test/scala')
-rw-r--r-- | sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/csv/CSVParserSuite.scala | 10 |
1 file changed, 5 insertions, 5 deletions
```diff
diff --git a/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/csv/CSVParserSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/csv/CSVParserSuite.scala
index c0c38c6787..dc54883277 100644
--- a/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/csv/CSVParserSuite.scala
+++ b/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/csv/CSVParserSuite.scala
@@ -46,7 +46,7 @@ class CSVParserSuite extends SparkFunSuite {
     var numRead = 0
     var n = 0
     do { // try to fill cbuf
-      var off = 0
+      var off = 0
       var len = cbuf.length
       n = reader.read(cbuf, off, len)
@@ -81,7 +81,7 @@ class CSVParserSuite extends SparkFunSuite {
   test("Regular case") {
     val input = List("This is a string", "This is another string", "Small", "", "\"quoted\"")
     val read = readAll(input.toIterator)
-    assert(read === input.mkString("\n") ++ ("\n"))
+    assert(read === input.mkString("\n") ++ "\n")
   }

   test("Empty iter") {
@@ -93,12 +93,12 @@ class CSVParserSuite extends SparkFunSuite {
   test("Embedded new line") {
     val input = List("This is a string", "This is another string", "Small\n", "", "\"quoted\"")
     val read = readAll(input.toIterator)
-    assert(read === input.mkString("\n") ++ ("\n"))
+    assert(read === input.mkString("\n") ++ "\n")
   }

   test("Buffer Regular case") {
     val input = List("This is a string", "This is another string", "Small", "", "\"quoted\"")
-    val output = input.mkString("\n") ++ ("\n")
+    val output = input.mkString("\n") ++ "\n"
     for(i <- 1 to output.length + 5) {
       val read = readBufAll(input.toIterator, i)
       assert(read === output)
@@ -116,7 +116,7 @@ class CSVParserSuite extends SparkFunSuite {
   test("Buffer Embedded new line") {
     val input = List("This is a string", "This is another string", "Small\n", "", "\"quoted\"")
-    val output = input.mkString("\n") ++ ("\n")
+    val output = input.mkString("\n") ++ "\n"
     for(i <- 1 to output.length + 5) {
       val read = readBufAll(input.toIterator, 1)
       assert(read === output)
```
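The `++ ("\n")` to `++ "\n"` edits in the diff are purely cosmetic: `++` is a binary operator, so the parentheses around its right-hand operand are redundant and both forms parse to the same call. A minimal sketch (the `input` value is illustrative):

```scala
// Both expressions desugar to input.mkString("\n").++("\n"),
// so removing the parentheses does not change behavior.
val input = List("a", "b", "c")

val withParens = input.mkString("\n") ++ ("\n")
val withoutParens = input.mkString("\n") ++ "\n"

assert(withParens == withoutParens)  // "a\nb\nc\n" in both cases
```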