Author: Luciano Resende <lresende@apache.org>
Closes #11092 from lresende/SPARK-13189.
Update JDBC documentation based on http://stackoverflow.com/a/30947090/219530, as SPARK_CLASSPATH is deprecated.
Also, that's how it worked for me: it didn't work with SPARK_CLASSPATH or with --jars alone.
This would resolve https://issues.apache.org/jira/browse/SPARK-13040
Author: Sebastián Ramírez <tiangolo@gmail.com>
Closes #10948 from tiangolo/patch-docs-jdbc.
Fix for [SPARK-13002](https://issues.apache.org/jira/browse/SPARK-13002) about the initial number of executors when running with dynamic allocation on Mesos.
Instead of fixing it just for the Mesos case, made the change in `ExecutorAllocationManager`. It is already driving the number of executors running on Mesos, only not the initial value.
The `None` and `Some(0)` are internal details of the computation of resources to reserve in the Mesos backend scheduler. `executorLimitOption` has to be initialized correctly, otherwise the Mesos backend scheduler will either create too many executors at launch, or not create any executors and not be able to recover from this state.
Removed the 'special case' description in the doc. It was not totally accurate, and is not needed anymore.
This doesn't fix the same problem visible with Spark standalone. There is no straightforward way to send the initial value in standalone mode.
Somebody knowing this part of the yarn support should review this change.
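For reference, a minimal sketch of the dynamic-allocation settings involved (values are illustrative, not from this patch); the initial value is what `ExecutorAllocationManager` now propagates to the backend:
```scala
import org.apache.spark.SparkConf

// Illustrative values only; spark.dynamicAllocation.initialExecutors is the
// starting executor count that the backend scheduler should honor.
val conf = new SparkConf()
  .set("spark.dynamicAllocation.enabled", "true")
  .set("spark.shuffle.service.enabled", "true")          // required by dynamic allocation
  .set("spark.dynamicAllocation.minExecutors", "1")
  .set("spark.dynamicAllocation.initialExecutors", "4")
  .set("spark.dynamicAllocation.maxExecutors", "20")
```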
Author: Luc Bourlier <luc.bourlier@typesafe.com>
Closes #11047 from skyluc/issue/initial-dyn-alloc-2.
Author: Bill Chambers <bill@databricks.com>
Closes #11094 from anabranch/dynamic-docs.
minor fix for api link in ml onevsrest
Author: Yuhao Yang <hhbyyh@gmail.com>
Closes #11068 from hhbyyh/onevsrestDoc.
dir with mesos conf and add docs.
Fix zookeeper dir configuration used in cluster mode, and also add documentation around these settings.
Author: Timothy Chen <tnachen@gmail.com>
Closes #10057 from tnachen/fix_mesos_dir.
In [ml-guide](https://spark.apache.org/docs/latest/ml-guide.html#example-model-selection-via-cross-validation), there is an invalid reference to the `MulticlassClassificationEvaluator` apidoc.
https://spark.apache.org/docs/latest/api/scala/index.html#org.apache.spark.ml.evaluation.MultiClassClassificationEvaluator
Author: Lewuathe <lewuathe@me.com>
Closes #10996 from Lewuathe/fix-typo-in-ml-guide.
It seems to me that `lib` is better because the `datanucleus` jars are located in `lib` for release builds.
Author: Takeshi YAMAMURO <linguin.m.s@gmail.com>
Closes #10901 from maropu/DocFix.
This patch changes Spark's build to make Scala 2.11 the default Scala version. To be clear, this does not mean that Spark will stop supporting Scala 2.10: users will still be able to compile Spark for Scala 2.10 by following the instructions on the "Building Spark" page; however, it does mean that Scala 2.11 will be the default Scala version used by our CI builds (including pull request builds).
The Scala 2.11 compiler is faster than 2.10, so I think we'll be able to look forward to a slight speedup in our CI builds (it looks like it's about 2X faster for the Maven compile-only builds, for instance).
After this patch is merged, I'll update Jenkins to add new compile-only jobs to ensure that Scala 2.10 compilation doesn't break.
Author: Josh Rosen <joshrosen@databricks.com>
Closes #10608 from JoshRosen/SPARK-6363.
This is stated for --packages and --repositories. Without stating it for --jars, people expect a standard Java classpath to work, with expansion and with a delimiter other than a comma. Currently this is only stated in the --help for spark-submit: "Comma-separated list of local jars to include on the driver and executor classpaths."
Author: James Lohse <jimlohse@users.noreply.github.com>
Closes #10890 from jimlohse/patch-1.
cluster mode
JIRA 1680 added a property called spark.yarn.appMasterEnv. This PR draws users' attention to this special case by adding an explanation in configuration.html#environment-variables
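For illustration, a sketch of the special case being documented (the variable name `MY_ENV_VAR` is hypothetical):
```scala
import org.apache.spark.SparkConf

// In YARN cluster mode, environment variables for the Application Master
// are set through the spark.yarn.appMasterEnv.[VAR] properties.
val conf = new SparkConf()
  .set("spark.yarn.appMasterEnv.MY_ENV_VAR", "some-value")
```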
Author: Andrew <weiner.andrew.j@gmail.com>
Closes #10869 from weineran/branch-yarn-docs.
for streaming-akka project
Since `actorStream` now lives in an external project, we should add the linking and deploying instructions for it.
A follow up PR of #10744
Author: Shixiong Zhu <shixiong@databricks.com>
Closes #10856 from zsxwing/akka-link-instruction.
inconsistent with Scala's Iterator->Iterator
Fix Java function API methods for flatMap and mapPartitions to require producing only an Iterator, not Iterable. Also fix DStream.flatMap to require a function producing TraversableOnce only, not Traversable.
CC rxin pwendell for API change; tdas since it also touches streaming.
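For comparison, the Scala API these Java signatures are being aligned with (a small sketch, assuming `sc` is an existing SparkContext as in the shell):
```scala
// Scala already uses Iterator => Iterator for mapPartitions and a
// TraversableOnce-producing function for flatMap; the Java API now matches.
val doubled = sc.parallelize(1 to 10)
  .mapPartitions(iter => iter.map(_ * 2))   // Iterator[Int] => Iterator[Int]
val words = sc.parallelize(Seq("a b", "c d"))
  .flatMap(line => line.split(" "))         // produces a TraversableOnce
```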
Author: Sean Owen <sowen@cloudera.com>
Closes #10413 from srowen/SPARK-3369.
Update the user guide for RFormula feature interactions. We also document other new features in Spark 1.6, such as support for string labels.
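A minimal sketch of the interaction syntax the updated guide covers (column names are hypothetical):
```scala
import org.apache.spark.ml.feature.RFormula

// "clicked ~ country:hour" uses the interaction of country and hour as the
// feature; string-valued columns are encoded automatically.
val formula = new RFormula()
  .setFormula("clicked ~ country:hour")
  .setFeaturesCol("features")
  .setLabelCol("label")
// val output = formula.fit(dataset).transform(dataset)   // dataset assumed
```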
Author: Yanbo Liang <ybliang8@gmail.com>
Closes #10222 from yanboliang/spark-11965.
cluster mode in closure handling
Clarify that modifying a driver-local variable won't have the desired effect in cluster modes, and may or may not work as intended in local mode.
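For context, the kind of code the clarified section warns about (a sketch; `sc` assumed as in the shell):
```scala
var counter = 0
val rdd = sc.parallelize(1 to 100)

// In cluster modes each executor mutates its own copy of the closure's
// counter, so the driver-side value stays 0; use an Accumulator instead.
rdd.foreach(x => counter += x)
println(s"Counter value: $counter")
```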
Author: Sean Owen <sowen@cloudera.com>
Closes #10866 from srowen/SPARK-12760.
…local vs cluster
srowen Thanks for the PR at https://github.com/apache/spark/pull/10866! Sorry it took me a while.
This is related to https://github.com/apache/spark/pull/10866; basically, the assignment in the lambda expression in the Python example is invalid:
```
In [1]: data = [1, 2, 3, 4, 5]
In [2]: counter = 0
In [3]: rdd = sc.parallelize(data)
In [4]: rdd.foreach(lambda x: counter += x)
File "<ipython-input-4-fcb86c182bad>", line 1
rdd.foreach(lambda x: counter += x)
^
SyntaxError: invalid syntax
```
Author: Mortada Mehyar <mortada.mehyar@gmail.com>
Closes #10867 from mortada/doc_python_fix.
- Remove Akka dependency from core. Note: the streaming-akka project still uses Akka.
- Remove HttpFileServer
- Remove Akka configs from SparkConf and SSLOptions
- Rename `spark.akka.frameSize` to `spark.rpc.message.maxSize` (see the sketch below). I think it's still worth keeping this config because using `DirectTaskResult` or `IndirectTaskResult` depends on it.
- Update comments and docs
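A sketch of the renamed setting from the list above (illustrative value; the unit is MB):
```scala
import org.apache.spark.SparkConf

// spark.akka.frameSize is replaced by spark.rpc.message.maxSize, which still
// governs whether DirectTaskResult or IndirectTaskResult is used.
val conf = new SparkConf()
  .set("spark.rpc.message.maxSize", "128")
```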
Author: Shixiong Zhu <shixiong@databricks.com>
Closes #10854 from zsxwing/remove-akka.
properties
Several Spark properties that are equivalent to spark-submit command-line options are missing from the documentation.
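For illustration, a few property/flag pairs of the kind being documented (a sketch, not the exact list added by this patch):
```scala
import org.apache.spark.SparkConf

// Examples of properties that mirror spark-submit command-line options:
//   --driver-memory  ->  spark.driver.memory
//   --jars           ->  spark.jars
//   --files          ->  spark.files
val conf = new SparkConf()
  .set("spark.driver.memory", "4g")
  .set("spark.jars", "/path/to/a.jar,/path/to/b.jar")
  .set("spark.files", "/path/to/data.txt")
```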
Author: felixcheung <felixcheung_m@hotmail.com>
Closes #10491 from felixcheung/sparksubmitdoc.
Author: Sun Rui <rui.sun@intel.com>
Closes #10201 from sun-rui/SPARK-12204.
Include the following changes:
1. Add "streaming-akka" project and org.apache.spark.streaming.akka.AkkaUtils for creating an actorStream
2. Remove "StreamingContext.actorStream" and "JavaStreamingContext.actorStream"
3. Update the ActorWordCount example and add the JavaActorWordCount example
4. Make "streaming-zeromq" depend on "streaming-akka" and update the codes accordingly
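A rough sketch of the `AkkaUtils.createStream` entry point from item 1 above (the actor class and names here are hypothetical, and the exact signature should be checked against the streaming-akka API):
```scala
import akka.actor.Props
import org.apache.spark.streaming.akka.{ActorReceiver, AkkaUtils}

// Hypothetical receiver actor that stores every String message it receives.
class EchoReceiver extends ActorReceiver {
  def receive = { case s: String => store(s) }
}

// Wiring it into a streaming context (ssc assumed to exist):
// val lines = AkkaUtils.createStream[String](ssc, Props[EchoReceiver](), "echoReceiver")
```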
Author: Shixiong Zhu <shixiong@databricks.com>
Closes #10744 from zsxwing/streaming-akka-2.
shivaram Sorry it took longer to fix some conflicts; this is the change to add an alias for `table`.
Author: felixcheung <felixcheung_m@hotmail.com>
Closes #10406 from felixcheung/readtable.
Author: scwf <wangfei1@huawei.com>
Author: Marcelo Vanzin <vanzin@cloudera.com>
Author: WangTaoTheTonic <wangtao111@huawei.com>
Author: w00228970 <wangfei1@huawei.com>
Closes #10238 from vanzin/SPARK-2750.
integration doc
This PR added instructions to the Kinesis integration page for Python users on how to get the Kinesis assembly jar, like the Kafka doc.
Author: Shixiong Zhu <shixiong@databricks.com>
Closes #10822 from zsxwing/kinesis-doc.
integration doc
This PR added instructions to the Flume integration page for Python users on how to get the Flume assembly jar, like the Kafka doc.
Author: Shixiong Zhu <shixiong@databricks.com>
Closes #10746 from zsxwing/flume-doc.
http://spark.apache.org/docs/latest/ml-guide.html#example-pipeline
```
val sameModel = Pipeline.load("/tmp/spark-logistic-regression-model")
```
should be
```
val sameModel = PipelineModel.load("/tmp/spark-logistic-regression-model")
```
cc: jkbradley
Author: Jeff Lam <sha0lin@alumni.carnegiemellon.edu>
Closes #10769 from Agent007/SPARK-12722.
This patch adds a Hadoop 2.7 build profile in order to let us automate tests against that version.
/cc rxin srowen
Author: Josh Rosen <joshrosen@databricks.com>
Closes #10775 from JoshRosen/add-hadoop-2.7-profile.
…enodes
Author: Tom Graves <tgraves@yahoo-inc.com>
Closes #10699 from tgravescs/SPARK-2930.
guide example
Fixed the WSSSE (computeCost) in the Python MLlib KMeans user guide example by using the new computeCost method API in Python.
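For reference, the Scala equivalent of the corrected computation (a sketch; `parsedData` is an assumed RDD of mllib Vectors):
```scala
import org.apache.spark.mllib.clustering.KMeans

// WSSSE via the model's computeCost, mirroring the fixed Python example.
val model = KMeans.train(parsedData, 2, 20)   // k = 2, maxIterations = 20
val wssse = model.computeCost(parsedData)
println(s"Within Set Sum of Squared Errors = $wssse")
```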
Author: Joseph K. Bradley <joseph@databricks.com>
Closes #10707 from jkbradley/kmeans-doc-fix.
The default run has changed, but the documentation didn't fully reflect the change.
Author: Luc Bourlier <luc.bourlier@typesafe.com>
Closes #10740 from skyluc/issue/mesos-modes-doc.
Use a much smaller step size in LinearRegressionWithSGD MLlib examples to achieve a reasonable RMSE.
Our training folks hit this exact same issue when concocting an example and had the same solution.
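An illustrative sketch of the fix (the exact step size used in the docs may differ; `parsedData` is an assumed RDD[LabeledPoint]):
```scala
import org.apache.spark.mllib.regression.LinearRegressionWithSGD

// With the guide's dataset a large step size diverges; a much smaller
// step size yields a reasonable RMSE.
val numIterations = 100
val stepSize = 0.00000001
val model = LinearRegressionWithSGD.train(parsedData, numIterations, stepSize)
```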
Author: Sean Owen <sowen@cloudera.com>
Closes #10675 from srowen/SPARK-5273.
casting
Warning users about casting changes.
Author: Brandon Bradley <bradleytastic@gmail.com>
Closes #10708 from blbradley/spark-12758.
Author: Reynold Xin <rxin@databricks.com>
Closes #10673 from rxin/SPARK-12735.
Replace Guava `Optional` with (an API clone of) Java 8 `java.util.Optional` (edit: and a clone of Guava `Optional`)
See also https://github.com/apache/spark/pull/10512
Author: Sean Owen <sowen@cloudera.com>
Closes #10513 from srowen/SPARK-4819.
spark.shuffle.service.enabled is a Spark application-level configuration; it is not necessary to set it in yarn-site.xml.
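For clarity, a sketch of setting it per application instead (the NodeManager-side auxiliary shuffle service is configured separately):
```scala
import org.apache.spark.SparkConf

// Application-level switch; no need to put this key in yarn-site.xml.
val conf = new SparkConf()
  .set("spark.shuffle.service.enabled", "true")
```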
Author: Jeff Zhang <zjffdu@apache.org>
Closes #10657 from zjffdu/doc-fix.
allowBatching configurations for Streaming
/cc tdas brkyvz
Author: Shixiong Zhu <shixiong@databricks.com>
Closes #10453 from zsxwing/streaming-conf.
Author: Jacek Laskowski <jacek@japila.pl>
Closes #10603 from jaceklaskowski/streaming-actor-custom-receiver.
modify 'spark.memory.offHeap.enabled' default value to false
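For reference, a sketch of the related settings (illustrative values; the size is in bytes):
```scala
import org.apache.spark.SparkConf

// spark.memory.offHeap.enabled defaults to false; enabling it also requires
// a positive spark.memory.offHeap.size.
val conf = new SparkConf()
  .set("spark.memory.offHeap.enabled", "true")
  .set("spark.memory.offHeap.size", "1073741824")
```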
Author: zzcclp <xm_zzc@sina.com>
Closes #10633 from zzcclp/fix_spark.memory.offHeap.enabled_default_value.
This PR removes `spark.cleaner.ttl` and the associated TTL-based metadata cleaning code.
Now that we have the `ContextCleaner` and a timer to trigger periodic GCs, I don't think that `spark.cleaner.ttl` is necessary anymore. The TTL-based cleaning isn't enabled by default, isn't included in our end-to-end tests, and has been a source of user confusion when it is misconfigured. If the TTL is set too low, data which is still being used may be evicted / deleted, leading to hard to diagnose bugs.
For all of these reasons, I think that we should remove this functionality in Spark 2.0. Additional benefits of doing this include marginally reduced memory usage, since we no longer need to store timestamps in hashmaps, and a handful fewer threads.
Author: Josh Rosen <joshrosen@databricks.com>
Closes #10534 from JoshRosen/remove-ttl-based-cleaning.
metricName
For the BinaryClassificationEvaluator, the scaladoc doesn't mention that "areaUnderPR" is supported, only that the default is "areaUnderROC".
Also, in the documentation, it is said that:
"The default metric used to choose the best ParamMap can be overriden by the setMetric method in each of these evaluators."
However, the method is called setMetricName.
This PR aims to fix both issues.
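A minimal sketch of the corrected usage:
```scala
import org.apache.spark.ml.evaluation.BinaryClassificationEvaluator

// The setter is setMetricName (not setMetric); "areaUnderPR" is supported
// in addition to the default "areaUnderROC".
val evaluator = new BinaryClassificationEvaluator()
  .setMetricName("areaUnderPR")
// val areaUnderPR = evaluator.evaluate(predictions)   // predictions assumed
```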
Author: BenFradet <benjamin.fradet@gmail.com>
Closes #10328 from BenFradet/SPARK-12368.
prediction: user guide update
Update user guide doc for ```DecisionTreeRegressor``` providing variance of prediction.
cc jkbradley
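A hedged sketch of the feature the guide update describes (the variance-column setter name is assumed from this feature's description):
```scala
import org.apache.spark.ml.regression.DecisionTreeRegressor

// Ask the regressor to output a per-prediction variance column.
val dt = new DecisionTreeRegressor()
  .setLabelCol("label")
  .setFeaturesCol("features")
  .setVarianceCol("variance")
```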
Author: Yanbo Liang <ybliang8@gmail.com>
Closes #10594 from yanboliang/spark-12570.
checked that the change is in Spark 1.6.0.
shivaram
Author: felixcheung <felixcheung_m@hotmail.com>
Closes #10574 from felixcheung/rwritemodedoc.
Spark SQL's JDBC data source allows users to specify an explicit JDBC driver to load (using the `driver` argument), but in the current code it's possible that the user-specified driver will not be used when it comes time to actually create a JDBC connection.
In a nutshell, the problem is that you might have multiple JDBC drivers on the classpath that claim to be able to handle the same subprotocol, so simply registering the user-provided driver class with our `DriverRegistry` and JDBC's `DriverManager` is not sufficient to ensure that it's actually used when creating the JDBC connection.
This patch addresses this issue by first registering the user-specified driver with the DriverManager, then iterating over the driver manager's loaded drivers in order to obtain the correct driver and use it to create a connection (previously, we just called `DriverManager.getConnection()` directly).
If a user did not specify a JDBC driver to use, then we call `DriverManager.getDriver` to figure out the class of the driver to use, then pass that class's name to executors; this guards against corner-case bugs in situations where the driver and executor JVMs might have different sets of JDBC drivers on their classpaths (previously, there was the (rare) potential for `DriverManager.getConnection()` to use different drivers on the driver and executors if the user had not explicitly specified a JDBC driver class and the classpaths were different).
This patch is inspired by a similar patch that I made to the `spark-redshift` library (https://github.com/databricks/spark-redshift/pull/143), which contains its own modified fork of some of Spark's JDBC data source code (for cross-Spark-version compatibility reasons).
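For context, the `driver` argument in question is the data source option shown below (a sketch; the URL, table, and driver class are placeholders, and `sqlContext` is assumed):
```scala
// The fix ensures the driver named here is the one actually used to open
// connections, even if other drivers on the classpath claim the same subprotocol.
val df = sqlContext.read
  .format("jdbc")
  .option("url", "jdbc:postgresql://dbhost:5432/mydb")
  .option("dbtable", "public.my_table")
  .option("driver", "org.postgresql.Driver")
  .load()
```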
Author: Josh Rosen <joshrosen@databricks.com>
Closes #10519 from JoshRosen/jdbc-driver-precedence.
We switched to TorrentBroadcast in Spark 1.1, and HttpBroadcast has been undocumented since then. It's time to remove it in Spark 2.0.
Author: Reynold Xin <rxin@databricks.com>
Closes #10531 from rxin/SPARK-12588.
Streaming
This PR adds Scala, Java and Python examples to show how to use Accumulator and Broadcast in Spark Streaming to support checkpointing.
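The core of the pattern, condensed (a sketch of the lazily-initialized singleton approach the examples use, so the instance can be re-created after recovering from a checkpoint):
```scala
import org.apache.spark.{Accumulator, SparkContext}

// Lazily created singleton so the accumulator is re-created when the
// streaming application is restarted from checkpoint data.
object DroppedWordsCounter {
  @volatile private var instance: Accumulator[Long] = null

  def getInstance(sc: SparkContext): Accumulator[Long] = {
    if (instance == null) {
      synchronized {
        if (instance == null) {
          instance = sc.accumulator(0L, "DroppedWordsCounter")
        }
      }
    }
    instance
  }
}
```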
Author: Shixiong Zhu <shixiong@databricks.com>
Closes #10385 from zsxwing/accumulator-broadcast-example.
Author: Shixiong Zhu <shixiong@databricks.com>
Closes #10439 from zsxwing/kafka-message-handler-doc.
i.e. Hadoop 1 and Hadoop 2.0
Author: Reynold Xin <rxin@databricks.com>
Closes #10404 from rxin/SPARK-11807.
According to the benchmark [1], LZ4-java could be 80% (or 30%) faster than Snappy.
After changing the compressor to LZ4, I saw 20% improvement on end-to-end time for a TPCDS query (Q4).
[1] https://github.com/ning/jvm-compressor-benchmark/wiki
cc rxin
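For reference, the setting being changed (sketch; set it explicitly if the previous codec is preferred):
```scala
import org.apache.spark.SparkConf

// lz4 becomes the default for spark.io.compression.codec; "snappy" and "lzf"
// remain available.
val conf = new SparkConf()
  .set("spark.io.compression.codec", "lz4")
```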
Author: Davies Liu <davies@databricks.com>
Closes #10342 from davies/lz4.
Author: Reynold Xin <rxin@databricks.com>
Closes #10395 from rxin/SPARK-11808.
Author: Reynold Xin <rxin@databricks.com>
Closes #10387 from rxin/version-bump.
The current default storage level of Python persist API is MEMORY_ONLY_SER. This is different from the default level MEMORY_ONLY in the official document and RDD APIs.
davies Is this inconsistency intentional? Thanks!
Updates: Since the data is always serialized on the Python side, the Java-specific deserialized storage levels, such as MEMORY_ONLY, are not removed.
Updates: Based on the reviewers' feedback. In Python, stored objects will always be serialized with the [Pickle](https://docs.python.org/2/library/pickle.html) library, so it does not matter whether you choose a serialized level. The available storage levels in Python include `MEMORY_ONLY`, `MEMORY_ONLY_2`, `MEMORY_AND_DISK`, `MEMORY_AND_DISK_2`, `DISK_ONLY`, `DISK_ONLY_2` and `OFF_HEAP`.
Author: gatorsmile <gatorsmile@gmail.com>
Closes #10092 from gatorsmile/persistStorageLevel.