| Commit message | Author | Age | Files | Lines |
The current docs hint that Spark doesn't support accumulators of type `Long`, which is wrong.
JIRA: https://spark-project.atlassian.net/browse/SPARK-1117
Author: Xiangrui Meng <meng@databricks.com>
Closes #631 from mengxr/acc and squashes the following commits:
45ecd25 [Xiangrui Meng] update accumulator docs
https://spark-project.atlassian.net/browse/SPARK-1105
Fix the Scala version error on the docs site.
Author: CodingCat <zhunansjtu@gmail.com>
Closes #618 from CodingCat/doc_version and squashes the following commits:
39bb8aa [CodingCat] more fixes
65bedb0 [CodingCat] fix site scala version error in doc
Also, replace the last reference to it in the docs.
This fixes SPARK-1026.
Conflicts:
core/src/test/scala/org/apache/spark/DriverSuite.scala
docs/python-programming-guide.md
This reverts commit 79b20e4dbe3dcd8559ec8316784d3334bb55868b, reversing
changes made to 7375047d516c5aa69221611f5f7b0f1d367039af.
val => var
Also changed uses of "job" terminology to "application" when they
referred to an entire Spark program, to avoid confusion.
* RDD, *RDDFunctions -> org.apache.spark.rdd
* Utils, ClosureCleaner, SizeEstimator -> org.apache.spark.util
* JavaSerializer, KryoSerializer -> org.apache.spark.serializer
This commit makes Spark invocation saner by using an assembly JAR to
find all of Spark's dependencies instead of adding all the JARs in
lib_managed. It also packages the examples into an assembly and uses
that as SPARK_EXAMPLES_JAR. Finally, it replaces the old "run" script
with two better-named scripts: "run-examples" for examples, and
"spark-class" for Spark internal classes (e.g. REPL, master, etc). This
is also designed to minimize the confusion people have in trying to use
"run" to run their own classes; it's not meant to do that, but now at
least if they look at it, they can modify run-examples to do a decent
job for them.
As part of this, Bagel's examples are also now properly moved to the
examples package instead of bagel.
Also warns if spark.cleaner.ttl is not set in the version where you pass
your own SparkContext.
Conflicts:
core/src/main/scala/spark/RDD.scala
the Java API. Updates the Scala and Java programming guides.
ordering, so the reduce function must also be commutative.
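The point about commutativity can be sketched in plain Python (no Spark here; a local `functools.reduce` stands in for Spark's distributed reduce, which may combine per-partition results in any order):

```python
from functools import reduce

nums = [3, 1, 4, 1, 5, 9]

# Addition is commutative and associative: the result is the same
# no matter what order the values are combined in.
assert reduce(lambda a, b: a + b, nums) == 23
assert reduce(lambda a, b: a + b, nums[::-1]) == 23

# Subtraction is neither, so the result depends on evaluation order --
# unsafe as a reduce function over arbitrarily ordered partitions.
assert reduce(lambda a, b: a - b, nums) == -17
assert reduce(lambda a, b: a - b, nums[::-1]) == -5
```

The same discipline applies to any `reduce`-style aggregation: if the combining function isn't both commutative and associative, different partitionings can silently produce different answers.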
- Edited quick start and tuning guide to simplify them a little
- Simplified top menu bar
- Made private a SparkContext constructor parameter that had been left
  public
- Various small fixes
also adding the version to the navbar so it is easy to tell which
version of Spark these docs were compiled for.
throughout the docs: SPARK_VERSION, SCALA_VERSION, and MESOS_VERSION.
To use them, e.g. use {{site.SPARK_VERSION}}.
Also removes uses of {{HOME_PATH}} which were being resolved to ""
by the templating system anyway.
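As a hypothetical sketch of how such Jekyll site variables are wired up (the variable names come from the commit; the file layout and version values are illustrative, not from the source):

```yaml
# _config.yml -- Jekyll exposes these keys under the `site` namespace
SPARK_VERSION: 0.9.0
SCALA_VERSION: 2.10.3
MESOS_VERSION: 0.13.0
```

Any docs page can then interpolate them, e.g. `Spark {{site.SPARK_VERSION}} is built against Scala {{site.SCALA_VERSION}}`, so a version bump is a one-line edit to the config rather than a find-and-replace across the docs.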
- Reworked/expanded the nav bar with more of the docs site
- Removed parts of docs about EC2 and Mesos that differentiate between
running 0.5 and before
- Merged subheadings from running-on-amazon-ec2.html that are still relevant
(i.e., "Using a newer version of Spark" and "Accessing Data in S3") into
ec2-scripts.html and deleted running-on-amazon-ec2.html
- Added some TODO comments to a few docs
- Updated the blurb about AMP Camp
- Renamed programming-guide to spark-programming-guide
- Fixed typos/etc. in the Standalone Spark doc