author     Cheng Hao <hao.cheng@intel.com>    2015-07-23 10:28:20 -0700
committer  Reynold Xin <rxin@databricks.com>  2015-07-23 10:28:20 -0700
commit     19aeab57c1b0c739edb5ba351f98e930e1a0f984
tree       8ede978519543fd2655a5653588f5e6867034bfe
parent     52ef76de219c4bf19c54c99414b89a67d0bf457b
[Build][Minor] Fix building error & performance
1. When building the latest code with sbt, compilation fails with:
[error] /home/hcheng/git/catalyst/core/src/main/scala/org/apache/spark/ui/jobs/AllJobsPage.scala:78: match may not be exhaustive.
[error] It would fail on the following input: UNKNOWN
[error] val classNameByStatus = status match {
[error]
2. Potential performance issue when implicitly converting an Array[Any] to Seq[Any] and back
Author: Cheng Hao <hao.cheng@intel.com>
Closes #7611 from chenghao-intel/toseq and squashes the following commits:
cab75c5 [Cheng Hao] remove the toArray
24df682 [Cheng Hao] fix building error & performance
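Both fixes can be sketched in isolation. The following is a minimal Scala sketch, not Spark's actual code: `Status` and `convert` are hypothetical stand-ins for `JobExecutionStatus` and `convertToCatalyst`.

```scala
// Hypothetical stand-ins for Spark's JobExecutionStatus enum and
// CatalystTypeConverters.convertToCatalyst.
object Sketch {
  sealed trait Status
  case object Succeeded extends Status
  case object Failed    extends Status
  case object Running   extends Status
  case object Unknown   extends Status

  // Fix 1: handling every case of a sealed trait silences the
  // "match may not be exhaustive" compiler error.
  def classNameByStatus(status: Status): String = status match {
    case Succeeded => "succeeded"
    case Failed    => "failed"
    case Running   => "running"
    case Unknown   => "unknown"
  }

  // Placeholder for the real per-element conversion.
  def convert(v: Any): Any = v

  // Fix 2, before: Array -> Seq -> Array round-trips through an
  // implicit wrapper and an extra copy.
  def convertSlow(arr: Array[Any]): Array[Any] =
    arr.toSeq.map(convert).toArray

  // Fix 2, after: map directly on the array, one pass, no round-trip.
  def convertFast(arr: Array[Any]): Array[Any] =
    arr.map(convert)
}
```

The exhaustiveness warning becomes a hard error under `-Xfatal-warnings`, which is why the missing `UNKNOWN` case broke the sbt build.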
core/src/main/scala/org/apache/spark/ui/jobs/AllJobsPage.scala                         | 1 +
sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/CatalystTypeConverters.scala | 2 +-
2 files changed, 2 insertions(+), 1 deletion(-)
diff --git a/core/src/main/scala/org/apache/spark/ui/jobs/AllJobsPage.scala b/core/src/main/scala/org/apache/spark/ui/jobs/AllJobsPage.scala
index 2ce670ad02..e72547df72 100644
--- a/core/src/main/scala/org/apache/spark/ui/jobs/AllJobsPage.scala
+++ b/core/src/main/scala/org/apache/spark/ui/jobs/AllJobsPage.scala
@@ -79,6 +79,7 @@ private[ui] class AllJobsPage(parent: JobsTab) extends WebUIPage("") {
       case JobExecutionStatus.SUCCEEDED => "succeeded"
       case JobExecutionStatus.FAILED => "failed"
       case JobExecutionStatus.RUNNING => "running"
+      case JobExecutionStatus.UNKNOWN => "unknown"
     }
   // The timeline library treats contents as HTML, so we have to escape them; for the
diff --git a/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/CatalystTypeConverters.scala b/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/CatalystTypeConverters.scala
index 4067833d5e..bfaee04f33 100644
--- a/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/CatalystTypeConverters.scala
+++ b/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/CatalystTypeConverters.scala
@@ -402,7 +402,7 @@ object CatalystTypeConverters {
     case d: JavaBigDecimal => BigDecimalConverter.toCatalyst(d)
     case seq: Seq[Any] => seq.map(convertToCatalyst)
     case r: Row => InternalRow(r.toSeq.map(convertToCatalyst): _*)
-    case arr: Array[Any] => arr.toSeq.map(convertToCatalyst).toArray
+    case arr: Array[Any] => arr.map(convertToCatalyst)
     case m: Map[_, _] => m.map { case (k, v) => (convertToCatalyst(k), convertToCatalyst(v)) }.toMap
     case other => other