Commit message · Author · Date · Files · Lines changed (-removed/+added)
...
* [SPARK-10459] [SQL] Do not need to have ConvertToSafe for PythonUDF (Liang-Chi Hsieh, 2015-09-17, 1 file, -0/+4)
  JIRA: https://issues.apache.org/jira/browse/SPARK-10459. As mentioned in the JIRA, `PythonUDF` can actually process `UnsafeRow`. Specifically, the rows in `childResults` in `BatchPythonEvaluation` are projected to a `MutableRow`, so we can enable `canProcessUnsafeRows` for `BatchPythonEvaluation` and get rid of the redundant `ConvertToSafe`.
  Author: Liang-Chi Hsieh <viirya@appier.com>. Closes #8616 from viirya/pyudf-unsafe.
* [SPARK-10077] [DOCS] [ML] Add package info for java of ml/feature (Holden Karau, 2015-09-17, 1 file, -0/+108)
  Should be the same as SPARK-7808 but using Java for the code example. It would be great to add package docs for `spark.ml.feature`.
  Author: Holden Karau <holden@pigscanfly.ca>. Closes #8740 from holdenk/SPARK-10077-JAVA-PACKAGE-DOC-FOR-SPARK.ML.FEATURE.
* [SPARK-10282] [ML] [PYSPARK] [DOCS] Add @since annotation to pyspark.ml.recommendation (Yu ISHIKAWA, 2015-09-17, 1 file, -0/+28)
  Author: Yu ISHIKAWA <yuu.ishikawa@gmail.com>. Closes #8692 from yu-iskw/SPARK-10282.
* [SPARK-10274] [MLLIB] Add @since annotation to pyspark.mllib.fpm (Yu ISHIKAWA, 2015-09-17, 1 file, -1/+9)
  Author: Yu ISHIKAWA <yuu.ishikawa@gmail.com>. Closes #8665 from yu-iskw/SPARK-10274.
* [SPARK-10279] [MLLIB] [PYSPARK] [DOCS] Add @since annotation to pyspark.mllib.util (Yu ISHIKAWA, 2015-09-17, 1 file, -2/+26)
  Author: Yu ISHIKAWA <yuu.ishikawa@gmail.com>. Closes #8689 from yu-iskw/SPARK-10279.
* [SPARK-10278] [MLLIB] [PYSPARK] Add @since annotation to pyspark.mllib.tree (Yu ISHIKAWA, 2015-09-17, 1 file, -1/+35)
  Author: Yu ISHIKAWA <yuu.ishikawa@gmail.com>. Closes #8685 from yu-iskw/SPARK-10278.
* [SPARK-10281] [ML] [PYSPARK] [DOCS] Add @since annotation to pyspark.ml.clustering (Yu ISHIKAWA, 2015-09-17, 1 file, -0/+13)
  Author: Yu ISHIKAWA <yuu.ishikawa@gmail.com>. Closes #8691 from yu-iskw/SPARK-10281.
* [SPARK-10283] [ML] [PYSPARK] [DOCS] Add @since annotation to pyspark.ml.regression (Yu ISHIKAWA, 2015-09-17, 1 file, -0/+65)
  Author: Yu ISHIKAWA <yuu.ishikawa@gmail.com>. Closes #8693 from yu-iskw/SPARK-10283.
* [SPARK-10284] [ML] [PYSPARK] [DOCS] Add @since annotation to pyspark.ml.tuning (Yu ISHIKAWA, 2015-09-17, 1 file, -0/+28)
  Author: Yu ISHIKAWA <yuu.ishikawa@gmail.com>. Closes #8694 from yu-iskw/SPARK-10284.
* [MINOR] [CORE] Fixes minor variable name typo (Cheng Lian, 2015-09-17, 1 file, -2/+2)
  Author: Cheng Lian <lian@databricks.com>. Closes #8784 from liancheng/typo-fix.
* Tiny style fix for d39f15ea2b8bed5342d2f8e3c1936f915c470783. (Reynold Xin, 2015-09-16, 1 file, -1/+1)
* [SPARK-9794] [SQL] Fix datetime parsing in SparkSQL. (Kevin Cox, 2015-09-16, 2 files, -17/+42)
  This fixes https://issues.apache.org/jira/browse/SPARK-9794 by using a real ISO 8601 parser (courtesy of the XML component of the standard Java library). cc: angelini
  Author: Kevin Cox <kevincox@kevincox.ca>. Closes #8396 from kevincox/kevincox-sql-time-parsing.
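  For reference, a minimal sketch of the kind of ISO 8601 parsing the standard Java library offers; whether the patch uses exactly this call is an assumption here:

  ```scala
  import javax.xml.bind.DatatypeConverter

  // Parses an ISO 8601 timestamp, including fractional seconds and the
  // timezone offset, into a java.util.Calendar.
  val cal = DatatypeConverter.parseDateTime("2015-09-16T10:11:12.345-07:00")
  val millis = cal.getTimeInMillis // epoch milliseconds
  ```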
* [SPARK-10050] [SPARKR] Support collecting data of MapType in DataFrame. (Sun Rui, 2015-09-16, 6 files, -23/+123)
  1. Support collecting data of MapType from DataFrame.
  2. Support data of MapType in createDataFrame.
  Author: Sun Rui <rui.sun@intel.com>. Closes #8711 from sun-rui/SPARK-10050.
* [SPARK-10589] [WEBUI] Add defense against external site framing (Sean Owen, 2015-09-16, 5 files, -11/+24)
  Set `X-Frame-Options: SAMEORIGIN` to protect against frame-related vulnerabilities such as clickjacking.
  Author: Sean Owen <sowen@cloudera.com>. Closes #8745 from srowen/SPARK-10589.
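  Outside of Spark, the same defense is a one-line response header; a generic servlet-filter sketch (illustrative only, not Spark's actual implementation):

  ```scala
  import javax.servlet._
  import javax.servlet.http.HttpServletResponse

  // Adds X-Frame-Options to every response so the page can only be framed
  // by documents from the same origin.
  class FrameOptionsFilter extends Filter {
    override def init(config: FilterConfig): Unit = {}
    override def doFilter(req: ServletRequest, res: ServletResponse, chain: FilterChain): Unit = {
      res match {
        case http: HttpServletResponse => http.setHeader("X-Frame-Options", "SAMEORIGIN")
        case _ =>
      }
      chain.doFilter(req, res)
    }
    override def destroy(): Unit = {}
  }
  ```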
* [SPARK-10276] [MLLIB] [PYSPARK] Add @since annotation to pyspark.mllib.recommendation (Yu ISHIKAWA, 2015-09-16, 1 file, -1/+35)
  Author: Yu ISHIKAWA <yuu.ishikawa@gmail.com>. Closes #8677 from yu-iskw/SPARK-10276.
* [SPARK-10511] [BUILD] Reset git repository before packaging source distro (Luciano Resende, 2015-09-16, 1 file, -0/+1)
  The calculation of the Spark version downloads Scala and Zinc into the build directory, which inflates the size of the source distribution. Resetting the repo before packaging the source distribution fixes this issue.
  Author: Luciano Resende <lresende@apache.org>. Closes #8774 from lresende/spark-10511.
* [SPARK-10516] [MLLIB] Added values property in DenseVector (Vinod K C, 2015-09-15, 1 file, -0/+4)
  Author: Vinod K C <vinod.kc@huawei.com>. Closes #8682 from vinodkc/fix_SPARK-10516.
* [SPARK-10595] [ML] [MLLIB] [DOCS] Various ML guide cleanups (Joseph K. Bradley, 2015-09-15, 5 files, -35/+91)
  Various ML guide cleanups:
  * ml-guide.md: make it easier to access the algorithm-specific guides.
  * LDA user guide: EM often begins with useless topics, but running longer generally improves them dramatically. E.g., 10 iterations on a Wikipedia dataset produces useless topics, but 50 iterations produces very meaningful topics.
  * mllib-feature-extraction.html#elementwiseproduct: the "w" parameter should be "scalingVec".
  * Clean up the Binarizer user guide a little.
  * Document in Pipeline that users should not put an instance into a Pipeline in more than one place.
  * spark.ml Word2Vec user guide: clean up grammar/writing.
  * Chi-squared feature selector docs: improve the text.
  CC: mengxr feynmanliang
  Author: Joseph K. Bradley <joseph@databricks.com>. Closes #8752 from jkbradley/mlguide-fixes-1.5.
* [SPARK-9078] [SQL] Allow jdbc dialects to override the query used to check the table (sureshthalamati, 2015-09-15, 4 files, -4/+41)
  The current implementation uses a query with a LIMIT clause to find whether a table already exists. This syntax works only in some database systems. This patch changes the default query to one that is likely to work on most databases, and adds a new method to the JdbcDialect abstract class to allow dialects to override the default query.
  I looked at using the JDBC metadata calls; it turns out there is no common way to find the current schema, catalog, etc. There is a new method, Connection.getSchema(), but it is available only starting with JDK 1.7, and existing JDBC drivers may not implement it. Another option was to use the JDBC escape syntax clause for LIMIT, but it is unclear how well that is supported across databases. After looking at all the JDBC metadata options, my conclusion was that the most common approach is to use a simple SELECT query with 'WHERE 1 = 0', and allow dialects to customize it as needed.
  Author: sureshthalamati <suresh.thalamati@gmail.com>. Closes #8676 from sureshthalamati/table_exists_spark-9078.
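  A minimal sketch of such a dialect override, assuming the hook this patch adds is a `getTableExistsQuery` method on `JdbcDialect` (name per the PR):

  ```scala
  import org.apache.spark.sql.jdbc.{JdbcDialect, JdbcDialects}

  // Sketch: a dialect that keeps the portable default probe. The query
  // touches no rows thanks to WHERE 1=0, so it only checks table existence.
  object MyDialect extends JdbcDialect {
    override def canHandle(url: String): Boolean = url.startsWith("jdbc:mydb")
    override def getTableExistsQuery(table: String): String =
      s"SELECT 1 FROM $table WHERE 1=0"
  }

  JdbcDialects.registerDialect(MyDialect)
  ```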
* [SPARK-10613] [SPARK-10624] [SQL] Reduce LocalNode tests dependency on SQLContext (Andrew Or, 2015-09-15, 17 files, -636/+468)
  Instead of relying on `DataFrames` to verify our answers, we can just use simple arrays. This significantly simplifies the test logic for `LocalNode`s and reduces a lot of code duplicated from `SparkPlanTest`. This also fixes an additional issue, [SPARK-10624](https://issues.apache.org/jira/browse/SPARK-10624), where the output of `TakeOrderedAndProjectNode` is not actually ordered.
  Author: Andrew Or <andrew@databricks.com>. Closes #8764 from andrewor14/sql-local-tests-cleanup.
* [SPARK-10381] Fix mixup of taskAttemptNumber & attemptId in OutputCommitCoordinator (Josh Rosen, 2015-09-15, 17 files, -69/+174)
  When speculative execution is enabled, consider a scenario where the authorized committer of a particular output partition fails during the OutputCommitter.commitTask() call. In this case, the OutputCommitCoordinator is supposed to release that committer's exclusive lock on committing once that task fails. However, due to a unit mismatch (we used the task attempt number in one place and the task attempt id in another), the lock is not released, causing Spark to go into an infinite retry loop.
  This bug was masked by the fact that the OutputCommitCoordinator does not have enough end-to-end tests (the current tests use many mocks). Another contributing factor is that we have many similarly named identifiers with different semantics but the same data types (e.g. attemptNumber and taskAttemptId), with inconsistent variable naming that makes them difficult to distinguish. This patch adds a regression test and fixes the bug by always using task attempt numbers throughout this code.
  Author: Josh Rosen <joshrosen@databricks.com>. Closes #8544 from JoshRosen/SPARK-10381.
* [SPARK-10575] [SPARK CORE] Wrapped RDD.takeSample with Scope (vinodkc, 2015-09-15, 1 file, -37/+31)
  Removes return statements in RDD.takeSample and wraps it with `withScope`.
  Author: vinodkc <vinod.kc.in@gmail.com>. Author: vinodkc <vinodkc@users.noreply.github.com>. Author: Vinod K C <vinod.kc@huawei.com>. Closes #8730 from vinodkc/fix_takesample_return.
* [SPARK-10612] [SQL] Add prepare to LocalNode. (Reynold Xin, 2015-09-15, 1 file, -0/+8)
  The idea is that we should separate the function call that does memory reservation (i.e. prepare()) from the function call that consumes the input (e.g. open()), so all operators get a chance to reserve memory before any input is consumed.
  Author: Reynold Xin <rxin@databricks.com>. Closes #8761 from rxin/SPARK-10612.
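  A simplified, illustrative sketch of the lifecycle this separation enables (the real `LocalNode` interface has more to it):

  ```scala
  // prepare() reserves resources up front, open() starts consuming input,
  // so every operator in a chain can reserve memory before any of them
  // begins to pull rows.
  trait LocalNodeLifecycle {
    def prepare(): Unit // e.g. ask the memory manager for a reservation
    def open(): Unit    // start consuming the child's input
    def next(): Boolean // advance to the next row, if any
    def close(): Unit   // release resources
  }
  ```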
* [SPARK-10548] [SPARK-10563] [SQL] Fix concurrent SQL executions (Andrew Or, 2015-09-15, 3 files, -43/+132)
  *Note: this is for the master branch only.* The fix for branch-1.5 is at #8721.
  The query execution ID is currently passed from a thread to its children, which is not the intended behavior. This led to `IllegalArgumentException: spark.sql.execution.id is already set` when running queries in parallel, e.g.:

  ```scala
  (1 to 100).par.foreach { _ =>
    sc.parallelize(1 to 5).map { i => (i, i) }.toDF("a", "b").count()
  }
  ```

  The cause is that `SparkContext`'s local properties are inherited by default. This patch adds a way to exclude keys we don't want to be inherited, and makes SQL go through that code path.
  Author: Andrew Or <andrew@databricks.com>. Closes #8710 from andrewor14/concurrent-sql-executions.
* [SPARK-7685] [ML] Apply weights to different samples in Logistic Regression (DB Tsai, 2015-09-15, 7 files, -128/+303)
  In fraud detection datasets, almost all the samples are negative while only a couple of them are positive. This kind of highly imbalanced data will bias the models toward the negative class, resulting in poor performance. In scikit-learn, they provide a correction allowing users to over-/undersample the samples of each class according to given weights; in auto mode, it selects weights inversely proportional to class frequencies in the training set. This can be done more efficiently by multiplying the weights into the loss and gradient instead of actually over-/undersampling the training dataset, which is very expensive. http://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.html
  On the other hand, some training data may be more important, such as samples from tenured users, while samples from new users may matter less. We should be able to provide another "weight: Double" field in the LabeledPoint to weight samples differently in the learning algorithm.
  Author: DB Tsai <dbt@netflix.com>. Author: DB Tsai <dbt@dbs-mac-pro.corp.netflix.com>. Closes #7884 from dbtsai/SPARK-7685.
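  With this change, per-sample weights can be supplied alongside the training data; a minimal hedged example, assuming the 1.6-era `setWeightCol` API on spark.ml's LogisticRegression:

  ```scala
  import org.apache.spark.ml.classification.LogisticRegression

  // Each row carries a "weight" column; heavier rows contribute more to the
  // loss and gradient, correcting class imbalance without physically
  // over-/undersampling the data.
  val lr = new LogisticRegression()
    .setLabelCol("label")
    .setFeaturesCol("features")
    .setWeightCol("weight")

  // val model = lr.fit(trainingDF) // trainingDF columns: label, features, weight
  ```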
* [SPARK-10475] [SQL] improve column pruning for Project on Sort (Wenchen Fan, 2015-09-15, 2 files, -4/+26)
  Sometimes we can't push the whole `Project` down through `Sort`, but we still have a chance to push down part of it.
  Author: Wenchen Fan <cloud0fan@outlook.com>. Closes #8644 from cloud-fan/column-prune.
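  An illustrative query shape (my example, not from the patch): the `Sort` needs column `d`, so the `Project` can't be pushed below it wholesale, but a partial projection keeping only `{a, b, d}` can, shrinking the rows that actually get sorted.

  ```scala
  import sqlContext.implicits._ // for the $"..." column syntax

  // Sort requires "d"; the optimizer can still push a narrower projection
  // {a, b, d} beneath the Sort before applying the final select.
  val query = df.sort("d").select(($"a" + $"b").as("c"))
  ```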
* [SPARK-10437] [SQL] Support aggregation expressions in Order By (Liang-Chi Hsieh, 2015-09-15, 2 files, -4/+30)
  JIRA: https://issues.apache.org/jira/browse/SPARK-10437. If an expression in `SortOrder` is already resolved, such as `count(1)`, the corresponding rule in `Analyzer` that makes it work in ORDER BY is not applied.
  Author: Liang-Chi Hsieh <viirya@appier.com>. Closes #8599 from viirya/orderby-agg.
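  An illustrative query of the kind this enables (table and column names are made up):

  ```scala
  // Ordering by an aggregate expression directly in ORDER BY now resolves.
  sqlContext.sql(
    "SELECT key, count(1) AS cnt FROM events GROUP BY key ORDER BY count(1) DESC")
  ```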
* Revert "[SPARK-10300] [BUILD] [TESTS] Add support for test tags in ↵Marcelo Vanzin2015-09-1526-124/+147
| | | | | | run-tests.py." This reverts commit 8abef21dac1a6538c4e4e0140323b83d804d602b.
* [DOCS] Small fixes to Spark on Yarn doc (Jacek Laskowski, 2015-09-15, 1 file, -6/+6)
  * A follow-up to 16b6d18613e150c7038c613992d80a7828413e66, as the `--num-executors` flag is not supported.
  * Links + formatting.
  Author: Jacek Laskowski <jacek.laskowski@deepsense.io>. Closes #8762 from jaceklaskowski/docs-spark-on-yarn.
* Closes #8738 (Xiangrui Meng, 2015-09-15, 0 files, -0/+0)
  Also closes #8767, #2491, #6795, #2096 and #7722.
* [PYSPARK] [MLLIB] [DOCS] Replaced addversion with versionadded in mllib.random (noelsmith, 2015-09-15, 1 file, -1/+1)
  Missed this when reviewing `pyspark.mllib.random` for SPARK-10275.
  Author: noelsmith <mail@noelsmith.com>. Closes #8773 from noel-smith/mllib-random-versionadded-fix.
* [SPARK-10300] [BUILD] [TESTS] Add support for test tags in run-tests.py. (Marcelo Vanzin, 2015-09-15, 26 files, -147/+124)
  This change does two things:
  - Tags a few tests and adds the mechanism in the build to be able to disable those tags, in both Maven and SBT, for both JUnit and ScalaTest suites.
  - Adds some logic to run-tests.py to disable some tags depending on which files have changed; that's used to disable expensive tests when a module hasn't explicitly been changed, to speed up testing for changes that don't directly affect those modules.
  Author: Marcelo Vanzin <vanzin@cloudera.com>. Closes #8437 from vanzin/test-tags.
* [SPARK-10491] [MLLIB] move RowMatrix.dspr to BLAS (Yuhao Yang, 2015-09-15, 4 files, -41/+72)
  JIRA: https://issues.apache.org/jira/browse/SPARK-10491. We implemented dspr with sparse vector support in `RowMatrix`. This method is also used in WeightedLeastSquares and other places, so it would be useful to move it to `linalg.BLAS`. Let me know if new unit tests are needed.
  Author: Yuhao Yang <hhbyyh@gmail.com>. Closes #8663 from hhbyyh/movedspr.
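  For context, `dspr` is the symmetric rank-1 update A := alpha * x * x^T + A on a packed triangular array. A rough sketch of its contract (illustrative only, not Spark's implementation):

  ```scala
  // Packed upper-triangular, column-major storage: column j holds entries
  // A(0..j, j), so it starts at offset j*(j+1)/2 and has j+1 entries.
  def dspr(alpha: Double, x: Array[Double], u: Array[Double]): Unit = {
    val n = x.length
    var colStart = 0
    var j = 0
    while (j < n) {
      val axj = alpha * x(j)
      var i = 0
      while (i <= j) {
        u(colStart + i) += axj * x(i)
        i += 1
      }
      colStart += j + 1
      j += 1
    }
  }
  ```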
* Update version to 1.6.0-SNAPSHOT. (Reynold Xin, 2015-09-15, 38 files, -40/+49)
  Author: Reynold Xin <rxin@databricks.com>. Closes #8350 from rxin/1.6.
* [SPARK-10598] [DOCS] (Robin East, 2015-09-14, 1 file, -1/+1)
  Comments preceding the toMessage method state: "The edge partition is encoded in the lower 30 bytes of the Int, and the position is encoded in the upper 2 bytes of the Int." References to bytes should be changed to bits.
  This contribution is my original work and I license the work to the Spark project under its open source license.
  Author: Robin East <robin.east@xense.co.uk>. Closes #8756 from insidedctm/master.
* Small fixes to docs (Jacek Laskowski, 2015-09-14, 1 file, -5/+5)
  Links now work properly, plus consistent use of *Spark standalone cluster* ("Spark" capitalized, the rest lowercase; this seems to be the agreed convention elsewhere in the docs).
  Author: Jacek Laskowski <jacek.laskowski@deepsense.io>. Closes #8759 from jaceklaskowski/docs-submitting-apps.
* [SPARK-10275] [MLLIB] Add @since annotation to pyspark.mllib.random (Yu ISHIKAWA, 2015-09-14, 1 file, -0/+15)
  Author: Yu ISHIKAWA <yuu.ishikawa@gmail.com>. Closes #8666 from yu-iskw/SPARK-10275.
* [SPARK-10273] Add @since annotation to pyspark.mllib.feature (noelsmith, 2015-09-14, 1 file, -1/+57)
  Duplicated the since decorator from pyspark.sql into pyspark (also tweaked to handle functions without docstrings). Added `@since` to methods and `.. versionadded::` to classes (derived from the git file history in pyspark).
  Author: noelsmith <mail@noelsmith.com>. Closes #8633 from noel-smith/SPARK-10273-since-mllib-feature.
* [SPARK-9793] [MLLIB] [PYSPARK] PySpark DenseVector, SparseVector implement __eq__ and __hash__ correctly (Yanbo Liang, 2015-09-14, 2 files, -15/+107)
  The PySpark DenseVector and SparseVector `__eq__` methods should use semantic equality, and a DenseVector can be compared with a SparseVector. Implements the PySpark DenseVector and SparseVector `__hash__` methods based on the first 16 entries, so that PySpark Vector objects can be used in collections.
  Author: Yanbo Liang <ybliang8@gmail.com>. Closes #8166 from yanboliang/spark-9793.
* [SPARK-10542] [PYSPARK] fix serialize namedtuple (Davies Liu, 2015-09-14, 3 files, -1/+20)
  Author: Davies Liu <davies@databricks.com>. Closes #8707 from davies/fix_namedtuple.
* [SPARK-9851] Support submitting map stages individually in DAGScheduler (Matei Zaharia, 2015-09-14, 12 files, -63/+710)
  This patch adds support for submitting map stages in a DAG individually, so that we can make downstream decisions after seeing statistics about their output, as part of SPARK-9850. I also added more comments to many of the key classes in DAGScheduler. By itself, the patch is not super useful, except maybe to switch between a shuffle and a broadcast join, but with the other subtasks of SPARK-9850 we'll be able to make more interesting decisions.
  The main entry point is SparkContext.submitMapStage, which lets you run a map stage and see stats about the map output sizes. Other stats could also be collected through accumulators. See AdaptiveSchedulingSuite for a short example.
  Author: Matei Zaharia <matei@databricks.com>. Closes #8180 from mateiz/spark-9851.
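  A rough sketch of how the entry point is exercised, modeled loosely on the suite mentioned above; submitMapStage is an internal developer API, so real usage lives inside Spark's own packages and the exact signatures here are an assumption:

  ```scala
  import scala.concurrent.Await
  import scala.concurrent.duration._
  import org.apache.spark.{HashPartitioner, ShuffleDependency, SparkContext}

  def runMapStage(sc: SparkContext): Unit = {
    val rdd = sc.parallelize(1 to 1000, 10).map(x => (x % 16, x))
    val dep = new ShuffleDependency[Int, Int, Int](rdd, new HashPartitioner(4))
    // Runs only the map side of the shuffle and returns statistics about it.
    val stats = Await.result(sc.submitMapStage(dep), 1.minute)
    println(stats.bytesByPartitionId.mkString(", ")) // per-reducer output sizes
  }
  ```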
* [SPARK-10564] ThreadingSuite: assertion failures in threads don't fail the test (round 2) (Andrew Or, 2015-09-14, 1 file, -8/+15)
  This is a follow-up patch to #8723. I missed one case there.
  Author: Andrew Or <andrew@databricks.com>. Closes #8727 from andrewor14/fix-threading-suite.
* [SPARK-10543] [CORE] Peak Execution Memory Quantile should be Per-task Basis (Forest Fang, 2015-09-14, 2 files, -8/+23)
  Read `PEAK_EXECUTION_MEMORY` using `update` to get the per-task partial value instead of the cumulative value.
  I tested with this workload:

  ```scala
  val size = 1000
  val repetitions = 10
  val data = sc.parallelize(1 to size, 5)
    .map(x => (util.Random.nextInt(size / repetitions), util.Random.nextDouble))
    .toDF("key", "value")
  val res = data.toDF.groupBy("key").agg(sum("value")).count
  ```

  Before: ![image](https://cloud.githubusercontent.com/assets/4317392/9828197/07dd6874-58b8-11e5-9bd9-6ba927c38b26.png)
  After: ![image](https://cloud.githubusercontent.com/assets/4317392/9828151/a5ddff30-58b7-11e5-8d31-eda5dc4eae79.png)
  Tasks view: ![image](https://cloud.githubusercontent.com/assets/4317392/9828199/17dc2b84-58b8-11e5-92a8-be89ce4d29d1.png)
  cc andrewor14. I'd appreciate feedback on this, since I think you introduced the display of this metric.
  Author: Forest Fang <forest.fang@outlook.com>. Closes #8726 from saurfang/stagepage.
* [SPARK-10549] scala 2.11 spark on yarn with security - Repl doesn't work (Tom Graves, 2015-09-14, 1 file, -1/+2)
  Make this lazy so that it can set the YARN mode before creating the securityManager.
  Author: Tom Graves <tgraves@yahoo-inc.com>. Author: Thomas Graves <tgraves@staydecay.corp.gq1.yahoo.com>. Closes #8719 from tgravescs/SPARK-10549.
* [SPARK-10576] [BUILD] Move .java files out of src/main/scala (Sean Owen, 2015-09-14, 8 files, -0/+0)
  Move .java files in `src/main/scala` to the `src/main/java` root, except for `package-info.java` (which stays next to package.scala).
  Author: Sean Owen <sowen@cloudera.com>. Closes #8736 from srowen/SPARK-10576.
* [SPARK-10594] [YARN] Remove reference to --num-executors, add --properties-file (Erick Tryzelaar, 2015-09-14, 1 file, -1/+1)
  `ApplicationMaster` no longer has the `--num-executors` flag, and had an undocumented `--properties-file` configuration option. cc srowen
  Author: Erick Tryzelaar <erick.tryzelaar@gmail.com>. Closes #8754 from erickt/master.
* [SPARK-9996] [SPARK-9997] [SQL] Add local expand and NestedLoopJoin operators (zsxwing, 2015-09-14, 7 files, -15/+574)
  This PR is in conflict with #8535 and #8573. Will update this one when they are merged.
  Author: zsxwing <zsxwing@gmail.com>. Closes #8642 from zsxwing/expand-nest-join.
* [SPARK-6981] [SQL] Factor out SparkPlanner and QueryExecution from SQLContext (Edoardo Vacchi, 2015-09-14, 6 files, -128/+195)
  Alternative to PR #6122; in this case the refactored-out classes are replaced by inner classes with the same name, for backwards binary compatibility. This handles the change in a lighter-weight, backwards-compatible way.
  Author: Edoardo Vacchi <uncommonnonsense@gmail.com>. Closes #6356 from evacchi/sqlctx-refactoring-lite.
* [SPARK-10522] [SQL] Nanoseconds of Timestamp in Parquet should be positive (Davies Liu, 2015-09-14, 2 files, -14/+15)
  Otherwise Hive can't read it back correctly. Thanks vanzin for reporting this.
  Author: Davies Liu <davies@databricks.com>. Closes #8674 from davies/positive_nano.
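  A sketch of the normalization idea (names are illustrative, not Spark's): Parquet stores a timestamp as a Julian day plus nanoseconds within that day, and the nanosecond part must be kept non-negative by borrowing a day.

  ```scala
  val NanosPerDay = 86400L * 1000 * 1000 * 1000

  // Split a signed nanosecond offset into (julianDay, nanosOfDay) with
  // 0 <= nanosOfDay < NanosPerDay, borrowing one day when negative.
  def toDayAndNanos(nanos: Long): (Long, Long) = {
    var day = nanos / NanosPerDay
    var rem = nanos % NanosPerDay
    if (rem < 0) {
      rem += NanosPerDay
      day -= 1
    }
    (day, rem)
  }
  ```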
* [SPARK-10573] [ML] IndexToString output schema should be StringType (Nick Pritchard, 2015-09-14, 2 files, -3/+10)
  Fixes a bug where the IndexToString output schema was DoubleType. Correct me if I'm wrong, but it doesn't seem like the output needs to carry any "ML Attribute" metadata.
  Author: Nick Pritchard <nicholas.pritchard@falkonry.com>. Closes #8751 from pnpritchard/SPARK-10573.
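  A minimal usage sketch of the transformer this touches (column names and labels are made up):

  ```scala
  import org.apache.spark.ml.feature.IndexToString

  // Maps numeric indices back to their original string labels; the output
  // column is StringType after this fix.
  val converter = new IndexToString()
    .setInputCol("categoryIndex")
    .setOutputCol("categoryLabel")
    .setLabels(Array("a", "b", "c"))

  // val labeled = converter.transform(indexedDF)
  ```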