author     Manish Amde <manish9ue@gmail.com>      2014-05-07 17:08:38 -0700
committer  Patrick Wendell <pwendell@gmail.com>   2014-05-07 17:08:58 -0700
commit     c7b27043a4845c7a704aed185c708539e435e12c (patch)
tree       28d97fcf94ac11092b1f6fec991eb2492515e7aa /docs
parent     0972b625199671b786e2659f870e2b3ff2cdb957 (diff)
SPARK-1544 Add support for deep decision trees.
@etrain and I came up with a PR for arbitrarily deep decision trees at the cost of multiple passes over the data at deep tree levels. To summarize:

1) We take a parameter that indicates the amount of memory users want to reserve for computation on each worker (and 2x that at the driver).
2) Using that information, we calculate two things: the maximum depth to which we train as usual (which is, implicitly, the maximum number of nodes we want to train in parallel), and the size of the groups we should use in the case where we exceed this depth.

cc: @atalwalkar, @hirakendu, @mengxr

Author: Manish Amde <manish9ue@gmail.com>
Author: manishamde <manish9ue@gmail.com>
Author: Evan Sparks <sparks@cs.berkeley.edu>

Closes #475 from manishamde/deep_tree and squashes the following commits:

968ca9d [Manish Amde] merged master
7fc9545 [Manish Amde] added docs
ce004a1 [Manish Amde] minor formatting
b27ad2c [Manish Amde] formatting
426bb28 [Manish Amde] programming guide blurb
8053fed [Manish Amde] more formatting
5eca9e4 [Manish Amde] grammar
4731cda [Manish Amde] formatting
5e82202 [Manish Amde] added documentation, fixed off by 1 error in max level calculation
cbd9f14 [Manish Amde] modified scala.math to math
dad9652 [Manish Amde] removed unused imports
e0426ee [Manish Amde] renamed parameter
718506b [Manish Amde] added unit test
1517155 [Manish Amde] updated documentation
9dbdabe [Manish Amde] merge from master
719d009 [Manish Amde] updating user documentation
fecf89a [manishamde] Merge pull request #6 from etrain/deep_tree
0287772 [Evan Sparks] Fixing scalastyle issue.
2f1e093 [Manish Amde] minor: added doc for maxMemory parameter
2f6072c [manishamde] Merge pull request #5 from etrain/deep_tree
abc5a23 [Evan Sparks] Parameterizing max memory.
50b143a [Manish Amde] adding support for very deep trees

(cherry picked from commit f269b016acb17b24d106dc2b32a1be389489bb01)
Signed-off-by: Patrick Wendell <pwendell@gmail.com>
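To make the memory-driven depth calculation described above concrete, here is a minimal sketch of how the deepest "train everything in one pass" level might be derived from the memory budget. This is an illustration only, not the code in this patch: the function name `maxLevelForSinglePass` and the per-node histogram size (`numFeatures * numBins` doubles, following the *O(#features \* #splits \* 2^maxDepth)* bound quoted in the docs diff below) are assumptions.

```scala
// Illustrative sketch only; not the implementation in this patch.
// Assumption: each node's histogram takes numFeatures * numBins doubles
// (8 bytes each), and level L of the tree holds 2^L nodes, per the
// O(#features * #splits * 2^maxDepth) bound cited in the docs below.
def maxLevelForSinglePass(maxMemoryInMB: Int, numFeatures: Int, numBins: Int): Int = {
  val bytesPerNode = numFeatures.toLong * numBins * 8L
  val maxNodes = (maxMemoryInMB.toLong * 1024 * 1024) / bytesPerNode
  require(maxNodes >= 1, "memory budget too small for even one node's histogram")
  // Deepest level L such that all 2^L node histograms still fit in the budget.
  (math.log(maxNodes.toDouble) / math.log(2.0)).toInt
}
```

Past that level, the approach the commit message describes is to split each level's nodes into budget-sized groups and make one pass over the data per group.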
Diffstat (limited to 'docs')
-rw-r--r--  docs/mllib-decision-tree.md | 15
1 file changed, 6 insertions, 9 deletions
diff --git a/docs/mllib-decision-tree.md b/docs/mllib-decision-tree.md
index 296277e58b..acf0feff42 100644
--- a/docs/mllib-decision-tree.md
+++ b/docs/mllib-decision-tree.md
@@ -93,17 +93,14 @@ The recursive tree construction is stopped at a node when one of the two conditi
1. The node depth is equal to the `maxDepth` training parameter
2. No split candidate leads to an information gain at the node.
+### Max memory requirements
+
+For faster processing, the decision tree algorithm performs simultaneous histogram computations for all nodes at each level of the tree. This can lead to high memory requirements at deeper levels of the tree, causing memory overflow errors. To alleviate this problem, a `maxMemoryInMB` training parameter specifies the maximum amount of memory at the workers (twice as much at the master) to be allocated to the histogram computation. The default value is conservatively chosen to be 128 MB to allow the decision tree algorithm to work in most scenarios. Once the memory requirement for a level-wise computation crosses the `maxMemoryInMB` threshold, the node training tasks at each subsequent level are split into smaller tasks.
+
### Practical limitations
-1. The tree implementation stores an `Array[Double]` of size *O(#features \* #splits \* 2^maxDepth)*
- in memory for aggregating histograms over partitions. The current implementation might not scale
- to very deep trees since the memory requirement grows exponentially with tree depth.
-2. The implemented algorithm reads both sparse and dense data. However, it is not optimized for
- sparse input.
-3. Python is not supported in this release.
-
-We are planning to solve these problems in the near future. Please drop us a line if you encounter
-any issues.
+1. The implemented algorithm reads both sparse and dense data. However, it is not optimized for sparse input.
+2. Python is not supported in this release.
## Examples
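For readers landing on this commit, a hedged sketch of how the new knob might be set from the MLlib API of this release. The exact `Strategy` constructor shape is an assumption; only the `maxMemoryInMB` name and its 128 MB default come from this patch.

```scala
import org.apache.spark.mllib.tree.DecisionTree
import org.apache.spark.mllib.tree.configuration.Algo.Classification
import org.apache.spark.mllib.tree.configuration.Strategy
import org.apache.spark.mllib.tree.impurity.Gini

// trainingData: RDD[LabeledPoint], prepared elsewhere.
// Raising maxMemoryInMB above the 128 MB default lets more nodes per level be
// trained in a single pass; the driver is assumed to reserve twice this amount.
val strategy = new Strategy(
  algo = Classification,
  impurity = Gini,
  maxDepth = 15,        // deeper than the old exponential-memory limit comfortably allowed
  maxMemoryInMB = 256   // parameter added by this patch; constructor position assumed
)
val model = DecisionTree.train(trainingData, strategy)
```

With the default budget, shallow levels still train all nodes in one pass over the data; only levels whose histograms exceed the budget pay the extra passes, which is the trade-off the commit message describes.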