path: root/docs
Commit message (Author, Date; files changed, lines -/+)
* SPARK-1173. Improve scala streaming docs. (Aaron Kimball, 2014-03-02; 1 file, -5/+33)
    Clarify imports to add implicit conversions to DStream and fix other small typos in the streaming intro documentation. Tested by inspecting output via a local jekyll server, c&p'ing the scala commands into a spark terminal.
    Author: Aaron Kimball <aaron@magnify.io>
    Closes #64 from kimballa/spark-1173-streaming-docs and squashes the following commits:
    6fbff0e [Aaron Kimball] SPARK-1173. Improve scala streaming docs.
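For readers following along, the import this commit clarifies is the one that brings the DStream implicit conversions into scope. A minimal sketch against the 0.9-era streaming API (the app name and socket source are illustrative, not taken from the commit):

    import org.apache.spark.streaming.{Seconds, StreamingContext}
    import org.apache.spark.streaming.StreamingContext._  // implicit conversions, e.g. DStream -> pair DStream operations

    val ssc = new StreamingContext("local[2]", "ImplicitsExample", Seconds(1))
    val lines = ssc.socketTextStream("localhost", 9999)
    // reduceByKey on a DStream of pairs only compiles because of the implicits imported above
    val counts = lines.map(word => (word, 1)).reduceByKey(_ + _)
    counts.print()
    ssc.start()
    ssc.awaitTermination()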
* Add Jekyll tag to isolate "production-only" doc components. (Patrick Wendell, 2014-03-02; 3 files, -5/+32)
    Author: Patrick Wendell <pwendell@gmail.com>
    Closes #56 from pwendell/jekyll-prod and squashes the following commits:
    1bdc3a8 [Patrick Wendell] Add Jekyll tag to isolate "production-only" doc components.
* SPARK-1121: Include avro for yarn-alpha builds (Patrick Wendell, 2014-03-02; 1 file, -4/+0)
    This lets us explicitly include Avro based on a profile for 0.23.X builds. It makes me sad how convoluted it is to express this logic in Maven. @tgraves and @sryza curious if this works for you. I'm also considering just reverting to how it was before. The only real problem was that Spark advertised a dependency on Avro even though it only really depends transitively on Avro through other deps.
    Author: Patrick Wendell <pwendell@gmail.com>
    Closes #49 from pwendell/avro-build-fix and squashes the following commits:
    8d6ee92 [Patrick Wendell] SPARK-1121: Add avro to yarn-alpha profile
* Revert "[SPARK-1150] fix repo location in create script" (Patrick Wendell, 2014-03-01; 1 file, -2/+2)
    This reverts commit 9aa095711858ce8670e51488f66a3d7c1a821c30.
* [SPARK-1150] fix repo location in create script (Mark Grover, 2014-03-01; 1 file, -2/+2)
    https://spark-project.atlassian.net/browse/SPARK-1150 fix the repo location in create_release script
    Author: Mark Grover <mark@apache.org>
    Closes #48 from CodingCat/script_fixes and squashes the following commits:
    01f4bf7 [Mark Grover] Fixing some nitpicks
    d2244d4 [Mark Grover] SPARK-676: Abbreviation in SPARK_MEM but not in SPARK_WORKER_MEMORY
* Removed reference to incubation in Spark user docs. (Reynold Xin, 2014-02-27; 8 files, -24/+14)
    Author: Reynold Xin <rxin@apache.org>
    Closes #2 from rxin/docs and squashes the following commits:
    08bbd5f [Reynold Xin] Removed reference to incubation in Spark user docs.
* SPARK-1121 Only add avro if the build is for Hadoop 0.23.X and SPARK_YARN is set (Prashant Sharma, 2014-02-26; 1 file, -0/+4)
    Author: Prashant Sharma <prashant.s@imaginea.com>
    Closes #6 from ScrapCodes/SPARK-1121/avro-dep-fix and squashes the following commits:
    9b29e34 [Prashant Sharma] Review feedback on PR
    46ed2ad [Prashant Sharma] SPARK-1121-Only add avro if the build is for Hadoop 0.23.X and SPARK_YARN is set
* Updated link for pyspark examples in docs (Jyotiska NK, 2014-02-26; 1 file, -1/+1)
    Author: Jyotiska NK <jyotiska123@gmail.com>
    Closes #22 from jyotiska/pyspark_docs and squashes the following commits:
    426136c [Jyotiska NK] Updated link for pyspark examples
* SPARK-1135: fix broken anchors in docs (Matei Zaharia, 2014-02-26; 1 file, -28/+1)
    A recent PR that added Java vs Scala tabs for streaming also inadvertently added some bad code to a document.ready handler, breaking our other handler that manages scrolling to anchors correctly with the floating top bar. As a result the section title ended up always being hidden below the top bar. This removes the unnecessary JavaScript code.
    Author: Matei Zaharia <matei@databricks.com>
    Closes #3 from mateiz/doc-links and squashes the following commits:
    e2a3488 [Matei Zaharia] SPARK-1135: fix broken anchors in docs
* SPARK-1053. Don't require SPARK_YARN_APP_JAR (Sandy Ryza, 2014-02-26; 1 file, -4/+2)
    It looks like this just requires taking out the checks. I verified that, with the patch, I was able to run spark-shell through yarn without setting the environment variable.
    Author: Sandy Ryza <sandy@cloudera.com>
    Closes #553 from sryza/sandy-spark-1053 and squashes the following commits:
    b037676 [Sandy Ryza] SPARK-1053. Don't require SPARK_YARN_APP_JAR
* Include reference to twitter/chill in tuning docs (Andrew Ash, 2014-02-24; 1 file, -3/+6)
    Author: Andrew Ash <andrew@andrewash.com>
    Closes #647 from ash211/doc-tuning and squashes the following commits:
    b87de0a [Andrew Ash] Include reference to twitter/chill in tuning docs
* SPARK-1117: update accumulator docs (Xiangrui Meng, 2014-02-21; 1 file, -1/+1)
    The current doc hints spark doesn't support accumulators of type `Long`, which is wrong. JIRA: https://spark-project.atlassian.net/browse/SPARK-1117
    Author: Xiangrui Meng <meng@databricks.com>
    Closes #631 from mengxr/acc and squashes the following commits:
    45ecd25 [Xiangrui Meng] update accumulator docs
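To illustrate the point of the fix above, accumulators over Long values do work. A minimal sketch assuming the 0.9-era API (the implicit LongAccumulatorParam comes in via SparkContext._):

    import org.apache.spark.SparkContext
    import org.apache.spark.SparkContext._   // brings the implicit LongAccumulatorParam into scope

    val sc = new SparkContext("local", "LongAccumulatorExample")
    val total = sc.accumulator(0L)                       // accumulator of type Long
    sc.parallelize(1L to 100L).foreach(x => total += x)  // tasks add to it
    println(total.value)                                 // driver reads the result: 5050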
* [SPARK-1105] fix site scala version error in docs (CodingCat, 2014-02-19; 8 files, -26/+27)
    https://spark-project.atlassian.net/browse/SPARK-1105 fix site scala version error
    Author: CodingCat <zhunansjtu@gmail.com>
    Closes #618 from CodingCat/doc_version and squashes the following commits:
    39bb8aa [CodingCat] more fixes
    65bedb0 [CodingCat] fix site scala version error in doc
* Revert "[SPARK-1105] fix site scala version error in doc" (Patrick Wendell, 2014-02-18; 1 file, -1/+1)
    This reverts commit d99773d5bba674cc1434c86435b6d9b3739314c8.
* [SPARK-1105] fix site scala version error in doc (CodingCat, 2014-02-18; 1 file, -1/+1)
    https://spark-project.atlassian.net/browse/SPARK-1105 fix site scala version error
    Author: CodingCat <zhunansjtu@gmail.com>
    Closes #616 from CodingCat/doc_version and squashes the following commits:
    eafd99a [CodingCat] fix site scala version error in doc
* Fix typos in Spark Streaming programming guide (Andrew Or, 2014-02-17; 1 file, -14/+13)
    Author: Andrew Or <andrewor14@gmail.com>
    Closes #536 from andrewor14/streaming-typos and squashes the following commits:
    a05faa6 [Andrew Or] Fix broken link and wording
    bc2e4bc [Andrew Or] Merge github.com:apache/incubator-spark into streaming-typos
    d5515b4 [Andrew Or] TD's comments
    767ef12 [Andrew Or] Fix broken links
    8f4c731 [Andrew Or] Fix typos in programming guide
* Typo: Standlone -> Standalone (Andrew Ash, 2014-02-14; 2 files, -4/+4)
    Author: Andrew Ash <andrew@andrewash.com>
    Closes #601 from ash211/typo and squashes the following commits:
    9cd43ac [Andrew Ash] Change docs references to metrics.properties, not metrics.conf
    3813ff1 [Andrew Ash] Typo: mulitcast -> multicast
    873bd2f [Andrew Ash] Typo: Standlone -> Standalone
* Merge pull request #577 from hsaputra/fix_simple_streaming_doc. (Henry Saputra, 2014-02-11; 1 file, -1/+1)
    SPARK-1075 Fix doc in the Spark Streaming custom receiver: closing bracket in the class constructor
    The closing parenthesis in the constructor in the first code block example is reversed:

    diff --git a/docs/streaming-custom-receivers.md b/docs/streaming-custom-receivers.md
    index 4e27d65..3fb540c 100644
    --- a/docs/streaming-custom-receivers.md
    +++ b/docs/streaming-custom-receivers.md
    @@ -14,7 +14,7 @@ This starts with implementing NetworkReceiver (api/streaming/index.html#org.apa
     The following is a simple socket text-stream receiver.
     {% highlight scala %}
    -  class SocketTextStreamReceiver(host: String, port: Int(
    +  class SocketTextStreamReceiver(host: String, port: Int)
         extends NetworkReceiver[String] {
         protected lazy val blocksGenerator: BlockGenerator =

    Author: Henry Saputra <henry@platfora.com>
    Closes #577 and squashes the following commits:
    6508341 [Henry Saputra] SPARK-1075 Fix doc in the Spark Streaming custom receiver.
* Merge pull request #579 from CrazyJvm/patch-1. (Chen Chao, 2014-02-10; 1 file, -1/+1)
    "in the source DStream" rather than "int the source DStream"
    "flatMap is a one-to-many DStream operation that creates a new DStream by generating multiple new records from each record int the source DStream."
    Author: Chen Chao <crazyjvm@gmail.com>
    Closes #579 and squashes the following commits:
    4abcae3 [Chen Chao] in the source DStream
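Since the sentence being corrected describes flatMap's one-to-many behaviour, a brief illustrative sketch (again assuming the 0.9-era streaming API; the socket source is just an example):

    import org.apache.spark.streaming.{Seconds, StreamingContext}

    val ssc = new StreamingContext("local[2]", "FlatMapExample", Seconds(1))
    // each line record in the source DStream yields multiple word records in the new DStream
    val words = ssc.socketTextStream("localhost", 9999).flatMap(line => line.split(" "))
    words.print()
    ssc.start()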
* Merge pull request #566 from martinjaggi/copy-MLlib-d. (Martin Jaggi, 2014-02-09; 3 files, -58/+405)
    new MLlib documentation for optimization, regression and classification
    new documentation with tex formulas, hopefully improving usability and reproducibility of the offered MLlib methods. also did some minor changes in the code for consistency. scala tests pass. this is the rebased branch, i deleted the old PR
    jira: https://spark-project.atlassian.net/browse/MLLIB-19
    Author: Martin Jaggi <m.jaggi@gmail.com>
    Closes #566 and squashes the following commits:
    5f0f31e [Martin Jaggi] line wrap at 100 chars
    4e094fb [Martin Jaggi] better description of GradientDescent
    1d6965d [Martin Jaggi] remove broken url
    ea569c3 [Martin Jaggi] telling what updater actually does
    964732b [Martin Jaggi] lambda R() in documentation
    a6c6228 [Martin Jaggi] better comments in SGD code for regression
    b32224a [Martin Jaggi] new optimization documentation
    d5dfef7 [Martin Jaggi] new classification and regression documentation
    b07ead6 [Martin Jaggi] correct scaling for MSE loss
    ba6158c [Martin Jaggi] use d for the number of features
    bab2ed2 [Martin Jaggi] renaming LeastSquaresGradient
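For orientation, the optimization pages added here present MLlib's methods as minimizers of a regularized loss. The objective is roughly of the following form; this is my paraphrase in LaTeX of what the squashed commits call the loss, the regularizer R(), and its weight lambda, and the exact notation in the rendered pages may differ:

    \min_{w \in \mathbb{R}^d} \; f(w), \qquad
    f(w) := \lambda\, R(w) + \frac{1}{n} \sum_{i=1}^{n} L(w; x_i, y_i)

Here d is the number of features (the squash list notes "use d for the number of features"), L is the per-example loss (for instance the MSE loss mentioned above), and R is the regularizer applied by the updater.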
* Merge pull request #542 from markhamstra/versionBump. Closes #542. (Mark Hamstra, 2014-02-08; 1 file, -2/+2)
    Version number to 1.0.0-SNAPSHOT
    Since 0.9.0-incubating is done and out the door, we shouldn't be building 0.9.0-incubating-SNAPSHOT anymore. @pwendell
    Author: Mark Hamstra <markhamstra@gmail.com>
    == Merge branch commits ==
    commit 1b00a8a7c1a7f251b4bb3774b84b9e64758eaa71
    Author: Mark Hamstra <markhamstra@gmail.com>
    Date: Wed Feb 5 09:30:32 2014 -0800
        Version number to 1.0.0-SNAPSHOT
* Merge pull request #552 from martinjaggi/master. Closes #552. (Martin Jaggi, 2014-02-08; 9 files, -472/+586)
    tex formulas in the documentation using mathjax. and spliting the MLlib documentation by techniques
    see jira https://spark-project.atlassian.net/browse/MLLIB-19 and https://github.com/shivaram/spark/compare/mathjax
    Author: Martin Jaggi <m.jaggi@gmail.com>
    == Merge branch commits ==
    commit 0364bfabbfc347f917216057a20c39b631842481
    Author: Martin Jaggi <m.jaggi@gmail.com>
    Date: Fri Feb 7 03:19:38 2014 +0100
        minor polishing, as suggested by @pwendell
    commit dcd2142c164b2f602bf472bb152ad55bae82d31a
    Author: Martin Jaggi <m.jaggi@gmail.com>
    Date: Thu Feb 6 18:04:26 2014 +0100
        enabling inline latex formulas with $.$ same mathjax configuration as used in math.stackexchange.com sample usage in the linear algebra (SVD) documentation
    commit bbafafd2b497a5acaa03a140bb9de1fbb7d67ffa
    Author: Martin Jaggi <m.jaggi@gmail.com>
    Date: Thu Feb 6 17:31:29 2014 +0100
        split MLlib documentation by techniques and linked from the main mllib-guide.md site
    commit d1c5212b93c67436543c2d8ddbbf610fdf0a26eb
    Author: Martin Jaggi <m.jaggi@gmail.com>
    Date: Thu Feb 6 16:59:43 2014 +0100
        enable mathjax formula in the .md documentation files code by @shivaram
    commit d73948db0d9bc36296054e79fec5b1a657b4eab4
    Author: Martin Jaggi <m.jaggi@gmail.com>
    Date: Thu Feb 6 16:57:23 2014 +0100
        minor update on how to compile the documentation
* Merge pull request #533 from andrewor14/master. Closes #533. (Andrew Or, 2014-02-06; 1 file, -3/+1)
    External spilling - generalize batching logic
    The existing implementation consists of a hack for Kryo specifically and only works for LZF compression. Introducing an intermediate batch-level stream takes care of pre-fetching and other arbitrary behavior of higher level streams in a more general way.
    Author: Andrew Or <andrewor14@gmail.com>
    == Merge branch commits ==
    commit 3ddeb7ef89a0af2b685fb5d071aa0f71c975cc82
    Author: Andrew Or <andrewor14@gmail.com>
    Date: Wed Feb 5 12:09:32 2014 -0800
        Also privatize fields
    commit 090544a87a0767effd0c835a53952f72fc8d24f0
    Author: Andrew Or <andrewor14@gmail.com>
    Date: Wed Feb 5 10:58:23 2014 -0800
        Privatize methods
    commit 13920c918efe22e66a1760b14beceb17a61fd8cc
    Author: Andrew Or <andrewor14@gmail.com>
    Date: Tue Feb 4 16:34:15 2014 -0800
        Update docs
    commit bd5a1d7350467ed3dc19c2de9b2c9f531f0e6aa3
    Author: Andrew Or <andrewor14@gmail.com>
    Date: Tue Feb 4 13:44:24 2014 -0800
        Typo: phyiscal -> physical
    commit 287ef44e593ad72f7434b759be3170d9ee2723d2
    Author: Andrew Or <andrewor14@gmail.com>
    Date: Tue Feb 4 13:38:32 2014 -0800
        Avoid reading the entire batch into memory; also simplify streaming logic
        Additionally, address formatting comments.
    commit 3df700509955f7074821e9aab1e74cb53c58b5a5
    Merge: a531d2e 164489d
    Author: Andrew Or <andrewor14@gmail.com>
    Date: Mon Feb 3 18:27:49 2014 -0800
        Merge branch 'master' of github.com:andrewor14/incubator-spark
    commit a531d2e347acdcecf2d0ab72cd4f965ab5e145d8
    Author: Andrew Or <andrewor14@gmail.com>
    Date: Mon Feb 3 18:18:04 2014 -0800
        Relax assumptions on compressors and serializers when batching
        This commit introduces an intermediate layer of an input stream on the batch level. This guards against interference from higher level streams (i.e. compression and deserialization streams), especially pre-fetching, without specifically targeting particular libraries (Kryo) and forcing shuffle spill compression to use LZF.
    commit 164489d6f176bdecfa9dabec2dfce5504d1ee8af
    Author: Andrew Or <andrewor14@gmail.com>
    Date: Mon Feb 3 18:18:04 2014 -0800
        Relax assumptions on compressors and serializers when batching
        This commit introduces an intermediate layer of an input stream on the batch level. This guards against interference from higher level streams (i.e. compression and deserialization streams), especially pre-fetching, without specifically targeting particular libraries (Kryo) and forcing shuffle spill compression to use LZF.
* Merge pull request #524 from rxin/doc (Reynold Xin, 2014-01-30; 1 file, -0/+8)
    Added spark.shuffle.file.buffer.kb to configuration doc.
    Author: Reynold Xin <rxin@apache.org>
    == Merge branch commits ==
    commit 0eea1d761ff772ff89be234e1e28035d54e5a7de
    Author: Reynold Xin <rxin@apache.org>
    Date: Wed Jan 29 14:40:48 2014 -0800
        Added spark.shuffle.file.buffer.kb to configuration doc.
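As context for the newly documented property, this is roughly how a 0.9-era job would set it. The value 100 kb is only an example (and, as far as I recall, close to the default); check configuration.md for the authoritative description:

    import org.apache.spark.{SparkConf, SparkContext}

    // size in kilobytes of the in-memory buffer for each shuffle file output stream;
    // larger buffers mean fewer disk seeks at the cost of more memory per open file
    val conf = new SparkConf()
      .setMaster("local")
      .setAppName("ShuffleBufferExample")
      .set("spark.shuffle.file.buffer.kb", "100")
    val sc = new SparkContext(conf)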
* Merge pull request #497 from tdas/docs-update (Tathagata Das, 2014-01-28; 19 files, -300/+1163)
    Updated Spark Streaming Programming Guide
    Here is the updated version of the Spark Streaming Programming Guide. This is still a work in progress, but the major changes are in place. So feedback is most welcome.
    In general, I have tried to make the guide easier to understand even if the reader does not know much about Spark. The updated website is hosted here - http://www.eecs.berkeley.edu/~tdas/spark_docs/streaming-programming-guide.html
    The major changes are:
    - Overview illustrates the usecases of Spark Streaming - various input sources and various output sources
    - An example right after overview to quickly give an idea of what a Spark Streaming program looks like
    - Made Java API and examples a first class citizen like Scala by using tabs to show both Scala and Java examples (similar to AMPCamp tutorial's code tabs)
    - Highlighted the DStream operations updateStateByKey and transform because of their powerful nature
    - Updated driver node failure recovery text to highlight automatic recovery in Spark standalone mode
    - Added information about linking and using the external input sources like Kafka and Flume
    - In general, reorganized the sections to better show the Basic section and the more advanced sections like Tuning and Recovery.
    Todos:
    - Links to the docs of external Kafka, Flume, etc
    - Illustrate window operation with figure as well as example.
    Author: Tathagata Das <tathagata.das1565@gmail.com>
    == Merge branch commits ==
    commit 18ff10556570b39d672beeb0a32075215cfcc944
    Author: Tathagata Das <tathagata.das1565@gmail.com>
    Date: Tue Jan 28 21:49:30 2014 -0800
        Fixed a lot of broken links.
    commit 34a5a6008dac2e107624c7ff0db0824ee5bae45f
    Author: Tathagata Das <tathagata.das1565@gmail.com>
    Date: Tue Jan 28 18:02:28 2014 -0800
        Updated github url to use SPARK_GITHUB_URL variable.
    commit f338a60ae8069e0a382d2cb170227e5757cc0b7a
    Author: Tathagata Das <tathagata.das1565@gmail.com>
    Date: Mon Jan 27 22:42:42 2014 -0800
        More updates based on Patrick and Harvey's comments.
    commit 89a81ff25726bf6d26163e0dd938290a79582c0f
    Author: Tathagata Das <tathagata.das1565@gmail.com>
    Date: Mon Jan 27 13:08:34 2014 -0800
        Updated docs based on Patricks PR comments.
    commit d5b6196b532b5746e019b959a79ea0cc013a8fc3
    Author: Tathagata Das <tathagata.das1565@gmail.com>
    Date: Sun Jan 26 20:15:58 2014 -0800
        Added spark.streaming.unpersist config and info on StreamingListener interface.
    commit e3dcb46ab83d7071f611d9b5008ba6bc16c9f951
    Author: Tathagata Das <tathagata.das1565@gmail.com>
    Date: Sun Jan 26 18:41:12 2014 -0800
        Fixed docs on StreamingContext.getOrCreate.
    commit 6c29524639463f11eec721e4d17a9d7159f2944b
    Author: Tathagata Das <tathagata.das1565@gmail.com>
    Date: Thu Jan 23 18:49:39 2014 -0800
        Added example and figure for window operations, and links to Kafka and Flume API docs.
    commit f06b964a51bb3b21cde2ff8bdea7d9785f6ce3a9
    Author: Tathagata Das <tathagata.das1565@gmail.com>
    Date: Wed Jan 22 22:49:12 2014 -0800
        Fixed missing endhighlight tag in the MLlib guide.
    commit 036a7d46187ea3f2a0fb8349ef78f10d6c0b43a9
    Merge: eab351d a1cd185
    Author: Tathagata Das <tathagata.das1565@gmail.com>
    Date: Wed Jan 22 22:17:42 2014 -0800
        Merge remote-tracking branch 'apache/master' into docs-update
    commit eab351d05c0baef1d4b549e1581310087158d78d
    Author: Tathagata Das <tathagata.das1565@gmail.com>
    Date: Wed Jan 22 22:17:15 2014 -0800
        Update Spark Streaming Programming Guide.
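The guide highlights updateStateByKey as one of the more powerful DStream operations; here is a compact sketch of how it is typically used (0.9-era API assumed; the checkpoint path and socket source are illustrative):

    import org.apache.spark.streaming.{Seconds, StreamingContext}
    import org.apache.spark.streaming.StreamingContext._

    val ssc = new StreamingContext("local[2]", "StatefulWordCount", Seconds(1))
    ssc.checkpoint("checkpoint")   // state tracking requires periodic checkpointing

    // merge this batch's counts for a key with the running count carried across batches
    val updateFunc = (newValues: Seq[Int], running: Option[Int]) =>
      Some(newValues.sum + running.getOrElse(0))

    val pairs = ssc.socketTextStream("localhost", 9999).flatMap(_.split(" ")).map(word => (word, 1))
    val runningCounts = pairs.updateStateByKey[Int](updateFunc)
    runningCounts.print()
    ssc.start()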
* Merge pull request #466 from liyinan926/file-overwrite-new (Reynold Xin, 2014-01-27; 1 file, -0/+7)
    Allow files added through SparkContext.addFile() to be overwritten
    This is useful for the cases when a file needs to be refreshed and downloaded by the executors periodically. For example, a possible use case is: the driver periodically renews a Hadoop delegation token and writes it to a token file. The token file needs to be downloaded by the executors whenever it gets renewed. However, the current implementation throws an exception when the target file exists and its contents do not match those of the new source. This PR adds an option to allow files to be overwritten to support use cases similar to the above.
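A rough sketch of the usage this PR enables. The property name spark.files.overwrite is my assumption about the option the PR adds (the log above only says "an option"), and the token path is purely illustrative:

    import org.apache.spark.SparkContext

    // assumed property name: opt in to overwriting previously fetched copies of added files
    System.setProperty("spark.files.overwrite", "true")

    val sc = new SparkContext("local", "AddFileExample")
    // the driver periodically rewrites this file; with overwriting allowed, executors can fetch
    // the refreshed copy instead of failing on the "contents do not match" check
    sc.addFile("hdfs:///tmp/delegation.token")   // illustrative path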
  * Addressed comments from Reynold (Yinan Li, 2014-01-18; 1 file, -1/+0)
      Signed-off-by: Yinan Li <liyinan926@gmail.com>
  * Allow files added through SparkContext.addFile() to be overwritten (Yinan Li, 2014-01-18; 1 file, -0/+8)
      This is useful for the cases when a file needs to be refreshed and downloaded by the executors periodically.
      Signed-off-by: Yinan Li <liyinan926@gmail.com>
* Deprecate mapPartitionsWithSplit in PySpark. (Josh Rosen, 2014-01-23; 1 file, -2/+2)
    Also, replace the last reference to it in the docs. This fixes SPARK-1026.
* Merge pull request #478 from sryza/sandy-spark-1033 (Patrick Wendell, 2014-01-22; 1 file, -1/+1)
    SPARK-1033. Ask for cores in Yarn container requests
    Tested on a pseudo-distributed cluster against the Fair Scheduler and observed a worker taking more than a single core.
  * Incorporate Tom's comments - update doc and code to reflect that core requests may not always be honored (Sandy Ryza, 2014-01-21; 1 file, -1/+1)
  * SPARK-1033. Ask for cores in Yarn container requests (Sandy Ryza, 2014-01-20; 1 file, -1/+1)
* Merge pull request #315 from rezazadeh/sparsesvd (Matei Zaharia, 2014-01-22; 1 file, -0/+51)
    Sparse SVD
    # Singular Value Decomposition
    Given an m x n matrix A, compute matrices U, S, V such that A = U * S * V^T. There is no restriction on m, but we require n^2 doubles to fit in memory. Further, n should be less than m.
    The decomposition is computed by first computing A^T A = V S^2 V^T, computing the SVD locally on that (since n x n is small), from which we recover S and V. Then we compute U via easy matrix multiplication as U = A * V * S^-1.
    Only singular vectors associated with the largest k singular values are returned. If there are k such values, then the dimensions of the return will be:
    * S is k x k and diagonal, holding the singular values on the diagonal.
    * U is m x k and satisfies U^T * U = eye(k).
    * V is n x k and satisfies V^T * V = eye(k).
    All input and output is expected in sparse matrix format, 0-indexed as tuples of the form ((i,j),value), all in RDDs.
    # Testing
    Tests included. They test:
    - Decomposition promise (A = U S V^T)
    - For small matrices, output is compared to that of jblas
    - Rank 1 matrix test included
    - Full rank matrix test included
    - Middle-rank matrix forced via k included
    # Example Usage
        import org.apache.spark.SparkContext
        import org.apache.spark.mllib.linalg.SVD
        import org.apache.spark.mllib.linalg.SparseMatrix
        import org.apache.spark.mllib.linalg.MatrixEntry

        // Load and parse the data file
        val data = sc.textFile("mllib/data/als/test.data").map { line =>
          val parts = line.split(',')
          MatrixEntry(parts(0).toInt, parts(1).toInt, parts(2).toDouble)
        }
        val m = 4
        val n = 4

        // recover top 1 singular vector
        val decomposed = SVD.sparseSVD(SparseMatrix(data, m, n), 1)

        println("singular values = " + decomposed.S.data.toArray.mkString)
    # Documentation
    Added to docs/mllib-guide.md
  * Merge remote-tracking branch 'upstream/master' into sparsesvd (Reza Zadeh, 2014-01-17; 7 files, -137/+228)
  * 0index docs (Reza Zadeh, 2014-01-17; 1 file, -1/+1)
  * use 0-indexing (Reza Zadeh, 2014-01-17; 1 file, -2/+2)
  * changes from PR (Reza Zadeh, 2014-01-17; 1 file, -2/+3)
  * Merge remote-tracking branch 'upstream/master' into sparsesvd (Reza Zadeh, 2014-01-13; 24 files, -22/+1054)
  * Merge remote-tracking branch 'upstream/master' into sparsesvd (Reza Zadeh, 2014-01-11; 2 files, -5/+35)
  * Merge remote-tracking branch 'upstream/master' into sparsesvd (Reza Zadeh, 2014-01-09; 15 files, -149/+433)
      Conflicts: docs/mllib-guide.md
  * documentation for sparsematrix (Reza Zadeh, 2014-01-07; 1 file, -3/+4)
  * fix docs to use SparseMatrix (Reza Zadeh, 2014-01-05; 1 file, -2/+5)
  * add k parameter (Reza Zadeh, 2014-01-04; 1 file, -2/+3)
  * Merge remote-tracking branch 'upstream/master' into sparsesvd (Reza Zadeh, 2014-01-02; 12 files, -75/+154)
  * doc tweaks (Reza Zadeh, 2014-01-01; 1 file, -1/+1)
  * doc tweak (Reza Zadeh, 2014-01-01; 1 file, -4/+5)
  * doc tweaks (Reza Zadeh, 2014-01-01; 1 file, -4/+4)
  * New documentation (Reza Zadeh, 2014-01-01; 1 file, -0/+52)
* Clarify spark.default.parallelism (Andrew Ash, 2014-01-21; 1 file, -1/+1)
    It's the task count across the cluster, not per worker, per machine, per core, or anything else.
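To make the clarification concrete, a small sketch against the 0.9-era API; the value 48 is only an example (for instance a 16-core cluster with about 3 tasks per core), and the master URL is illustrative:

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.SparkContext._   // pair RDD operations such as reduceByKey

    // default number of tasks used by shuffle operations when no partition count is given;
    // it applies cluster-wide, not per worker, per machine, or per core
    val conf = new SparkConf()
      .setMaster("spark://master:7077")
      .setAppName("ParallelismExample")
      .set("spark.default.parallelism", "48")
    val sc = new SparkContext(conf)

    val counts = sc.parallelize(1 to 1000000)
      .map(x => (x % 10, 1))
      .reduceByKey(_ + _)                    // uses the 48-task default since no count is passed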
* Force use of LZF when spilling data (Patrick Wendell, 2014-01-20; 1 file, -1/+3)