Commit message | Author | Age | Files | Lines
Moved DStream and PairDStream to org.apache.spark.streaming.dstream
Similar to the package location of `org.apache.spark.rdd.RDD`, `DStream` has been moved from `org.apache.spark.streaming.DStream` to `org.apache.spark.streaming.dstream.DStream`. I know that the package name is a little long, but I think it's better to keep it consistent with Spark's structure.
Also fixed persistence of windowed DStreams. The RDDs generated by a windowed DStream are essentially unions of the underlying RDDs, and persisting these union RDDs would store numerous copies of the underlying data. Instead, setting the persistence level on the windowed DStream now sets the persistence level of the underlying DStream.
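The windowed-persistence fix can be illustrated with a small, Spark-free sketch (the class names here are hypothetical stand-ins, not Spark's actual API): caching the derived union stream would duplicate data, so `persist` is delegated to the parent instead.

```python
class Stream:
    """Minimal stand-in for a DStream: tracks its own storage level."""
    def __init__(self):
        self.storage_level = None

    def persist(self, level="MEMORY_ONLY"):
        self.storage_level = level
        return self


class WindowedStream(Stream):
    """Hypothetical windowed stream whose RDDs are unions of the parent's RDDs.

    Caching those unions would store multiple copies of the underlying data,
    so persist() is forwarded to the parent and the unions stay uncached.
    """
    def __init__(self, parent):
        super().__init__()
        self.parent = parent

    def persist(self, level="MEMORY_ONLY"):
        self.parent.persist(level)  # delegate to the underlying stream
        return self


parent = Stream()
windowed = WindowedStream(parent)
windowed.persist("MEMORY_ONLY_SER")
```

After the call, only the parent carries a storage level; the windowed view itself remains unpersisted.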
Conflicts:
streaming/src/main/scala/org/apache/spark/streaming/dstream/DStream.scala
org.apache.spark.streaming to org.apache.spark.streaming.dstream.
Disable shuffle file consolidation by default
After running various performance tests for the 0.9 release, this still seems to have performance issues even on XFS. So let's keep this off-by-default for 0.9 and users can experiment with it depending on their disk configurations.
`foreachRDD` makes it clear that the granularity of this operator is per-RDD.
As it stands, `foreach` is inconsistent with `map`, `filter`, and the other
DStream operators, which get pushed down to individual records within each RDD.
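The granularity difference can be sketched without Spark at all, modeling a DStream as a list of batches (each batch standing in for one RDD): per-record operators transform elements inside each batch, while a `foreachRDD`-style loop hands you one whole batch at a time.

```python
# Stand-in for a DStream's sequence of RDDs: one inner list per batch.
batches = [[1, 2], [3, 4, 5]]

# Per-record operators (map, filter, ...) apply inside each batch.
per_record = [[x * 2 for x in batch] for batch in batches]

# A foreachRDD-style operator sees one whole batch (RDD) per invocation.
batch_sizes = []
for batch in batches:
    batch_sizes.append(len(batch))
```

The renamed operator makes it obvious that the loop body runs once per RDD, not once per record.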
External Sorting for Aggregator and CoGroupedRDDs (Revisited)
(This pull request is re-opened from https://github.com/apache/incubator-spark/pull/303, which was closed because Jenkins / github was misbehaving)
The target issue for this patch is the out-of-memory exceptions triggered by aggregate operations such as reduce, groupBy, join, and cogroup. The existing AppendOnlyMap used by these operations resides purely in memory, and grows with the size of the input data until the amount of allocated memory is exceeded. Under large workloads, this problem is aggravated by the fact that OOM frequently occurs only after a very long (> 1 hour) map phase, in which case the entire job must be restarted.
The solution is to spill the contents of this map to disk once a certain memory threshold is exceeded. This functionality is provided by ExternalAppendOnlyMap, which additionally sorts this buffer before writing it out to disk, and later merges these buffers back in sorted order.
Under normal circumstances in which OOM is not triggered, ExternalAppendOnlyMap is simply a wrapper around AppendOnlyMap and incurs little overhead. Only when the memory usage is expected to exceed the given threshold does ExternalAppendOnlyMap spill to disk.
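The spill-and-merge idea can be sketched in a few dozen lines of plain Python (this is a conceptual illustration, not Spark's `ExternalAppendOnlyMap`; the class name and the count-based threshold are simplifications of the real byte-based memory tracking):

```python
import heapq
import os
import pickle
import tempfile


class ExternalMapSketch:
    """Sketch of the spill idea: aggregate in memory until a threshold,
    spill sorted runs to disk, then merge the runs back in key order."""

    def __init__(self, combine, max_in_memory=4):
        self.combine = combine              # (old, new) -> merged value
        self.max_in_memory = max_in_memory  # stand-in for a memory threshold
        self.current = {}
        self.spill_files = []

    def insert(self, key, value):
        if key in self.current:
            self.current[key] = self.combine(self.current[key], value)
        else:
            self.current[key] = value
        if len(self.current) >= self.max_in_memory:
            self._spill()

    def _spill(self):
        fd, path = tempfile.mkstemp()
        with os.fdopen(fd, "wb") as f:
            pickle.dump(sorted(self.current.items()), f)  # sort before writing
        self.spill_files.append(path)
        self.current = {}

    def items(self):
        runs = [sorted(self.current.items())]
        for path in self.spill_files:
            with open(path, "rb") as f:
                runs.append(pickle.load(f))
        # Runs are sorted, so equal keys come out adjacent and can be combined
        # without holding every run fully in memory at once.
        out = []
        for k, v in heapq.merge(*runs):
            if out and out[-1][0] == k:
                out[-1] = (k, self.combine(out[-1][1], v))
            else:
                out.append((k, v))
        return out


m = ExternalMapSketch(lambda a, b: a + b, max_in_memory=2)
for word in ["a", "b", "a", "c", "b", "a"]:
    m.insert(word, 1)
```

With the tiny threshold above, three sorted runs get spilled, and iterating merges them back into per-key totals.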
Aside from trivial formatting changes, use nulls instead of Options for
DiskMapIterator, and add documentation for spark.shuffle.externalSorting
and spark.shuffle.memoryFraction.
Also, set spark.shuffle.memoryFraction to 0.3, and spark.storage.memoryFraction = 0.6.
Yarn client addjar and misc fixes
Fix the addJar functionality in yarn-client mode, add support for the other options supported in yarn-standalone mode, set the application type on YARN in Hadoop 2.x, add documentation, and change the heartbeat interval to use the same code as yarn-standalone so it doesn't take so long to get containers and exit.
Enable shuffle consolidation by default.
Bump this to being enabled for 0.9.0.
Simplify and fix pyspark script.
This patch removes compatibility for IPython < 1.0 but fixes the launch
script and makes it much simpler.
I tested this using the three commands in the PySpark documentation page:
1. IPYTHON=1 ./pyspark
2. IPYTHON_OPTS="notebook" ./pyspark
3. IPYTHON_OPTS="notebook --pylab inline" ./pyspark
There are two changes:
- We rely on PYTHONSTARTUP env var to start PySpark
- Removed the quotes around $IPYTHON_OPTS... having quotes
gloms them together as a single argument passed to `exec` which
seemed to cause ipython to fail (it instead expects them as
multiple arguments).
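The quoting problem described above can be reproduced without IPython. The snippet below emulates what the shell does in each case: a quoted `"$IPYTHON_OPTS"` arrives as one glommed argv entry, while the unquoted form is word-split into separate arguments (the interpreter is just counting the arguments it receives).

```python
import subprocess
import sys

opts = "notebook --pylab inline"

# A tiny "program" that reports how many arguments it was given.
arg_counter = [sys.executable, "-c", "import sys; print(len(sys.argv) - 1)"]

# Quoted ("$IPYTHON_OPTS"): the whole string is passed as ONE argument.
quoted = subprocess.run(arg_counter + [opts], capture_output=True, text=True)

# Unquoted ($IPYTHON_OPTS): the shell word-splits it into THREE arguments.
unquoted = subprocess.run(arg_counter + opts.split(), capture_output=True, text=True)

print(quoted.stdout.strip(), unquoted.stdout.strip())
```

A program like ipython that expects `notebook`, `--pylab`, and `inline` as separate arguments fails when it instead receives them glommed into one.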
SPARK-998: Support Launching Driver Inside of Standalone Mode
[NOTE: I need to bring the tests up to date with new changes, so for now they will fail]
This patch provides support for launching driver programs inside of a standalone cluster manager. It also supports monitoring and re-launching of driver programs, which is useful for long-running, recoverable applications such as Spark Streaming jobs. For those jobs, this patch allows a deployment mode which is resilient to the failure of any worker node, failure of a master node (provided a multi-master setup), and even failures of the application itself, provided they are recoverable on a restart. Driver information, such as the status and logs from a driver, is displayed in the UI.
There are a few small TODOs here, but the code is generally feature-complete. They are:
- Bring tests up to date and add test coverage
- Restarting on failure should be optional and maybe off by default.
- See if we can re-use akka connections to facilitate clients behind a firewall
A sensible place to start for review would be to look at the `DriverClient` class, which presents users the ability to launch their driver program. I've also added an example program (`DriverSubmissionTest`) that allows you to test this locally and play around with killing workers, etc. Most of the code is devoted to persisting driver state in the cluster manager, exposing it in the UI, and dealing correctly with various types of failures.
Instructions to test locally:
- `sbt/sbt assembly/assembly examples/assembly`
- start a local version of the standalone cluster manager
```
./spark-class org.apache.spark.deploy.client.DriverClient \
-j -Dspark.test.property=something \
-e SPARK_TEST_KEY=SOMEVALUE \
launch spark://10.99.1.14:7077 \
../path-to-examples-assembly-jar \
org.apache.spark.examples.DriverSubmissionTest 1000 some extra options --some-option-here -X 13
```
- Go in the UI and make sure it started correctly, look at the output etc
- Kill workers, the driver program, masters, etc.
Conflicts:
core/src/test/scala/org/apache/spark/deploy/JsonProtocolSuite.scala
pom.xml
Conflicts:
core/src/main/scala/org/apache/spark/deploy/client/AppClient.scala
core/src/main/scala/org/apache/spark/deploy/client/TestClient.scala
core/src/main/scala/org/apache/spark/deploy/master/Master.scala
core/src/main/scala/org/apache/spark/deploy/worker/Worker.scala
core/src/main/scala/org/apache/spark/scheduler/cluster/SparkDeploySchedulerBackend.scala
This is a very esoteric option and it's out of sync with the style we use.
So it seems fitting to fix it for 0.9.0.
|
|\ \ \ \ \
| |_|_|/ /
|/| | | |
| | | | |
| | | | |
| | | | | |
support distributing extra files to worker for yarn client mode
So that the user doesn't need to package all dependencies into one assembly jar as the Spark app jar.
on yarn cluster
SPARK-1009 Updated MLlib docs to show how to use it in Python
In addition, added detailed examples for regression, clustering, and recommendation algorithms in a separate Scala section. Fixed a few minor issues with the existing documentation.
Also documents the spark.deploy.spreadOut option.
|
|\ \ \ \
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | |
| | | | | |
Conf improvements
There are two new features.
1. Allow users to set arbitrary akka configurations via spark conf.
2. Allow configuration to be printed in logs for diagnosis.
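The first feature can be sketched as a simple prefix pass-through (a Spark-free illustration; the helper name and the exact key names below are illustrative, not Spark's actual implementation): keys under a reserved `spark.akka.` prefix are collected from the application conf and handed to the underlying Akka configuration.

```python
def extract_akka_conf(spark_conf, prefix="spark.akka."):
    """Collect keys under the prefix and strip it, yielding Akka-style keys.

    Hypothetical helper: any key a user sets under the prefix is forwarded,
    so arbitrary Akka settings can be tuned through the application conf.
    """
    return {k[len(prefix):]: v
            for k, v in spark_conf.items()
            if k.startswith(prefix)}


conf = {
    "spark.master": "local[4]",
    "spark.akka.frameSize": "10",
    "spark.akka.timeout": "100",
}
akka = extract_akka_conf(conf)
```

Keys outside the prefix (like `spark.master`) are left untouched; only the prefixed ones reach the Akka layer.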
Add a script to download sbt if not present on the system
As per the discussion on the dev mailing list, this script will use the system sbt if present, or otherwise attempt to install the sbt launcher. The fallback error message in the event it fails instructs the user to install sbt. While the URLs it fetches from aren't controlled by the Spark project directly, they are stable and the current authoritative sources.
It controls the count of cores across the cluster, not on a per-machine basis.
|
|\ \ \
| | | |
| | | |
| | | |
| | | |
| | | | |
Conflicts:
core/src/test/scala/org/apache/spark/DriverSuite.scala
docs/python-programming-guide.md
SPARK-915: Segregate scripts
spark-915-segregate-scripts
Conflicts:
bin/spark-shell
core/pom.xml
core/src/main/scala/org/apache/spark/SparkContext.scala
core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/CoarseMesosSchedulerBackend.scala
core/src/main/scala/org/apache/spark/ui/UIWorkloadGenerator.scala
core/src/test/scala/org/apache/spark/DriverSuite.scala
python/run-tests
sbin/compute-classpath.sh
sbin/spark-class
sbin/stop-slaves.sh
Signed-off-by: shane-huang <shengsheng.huang@intel.com>
Signed-off-by: shane-huang <shengsheng.huang@intel.com>