path: root/python
author    Sean Zhong <seanzhong@databricks.com>    2016-09-06 16:05:50 +0800
committer Wenchen Fan <wenchen@databricks.com>    2016-09-06 16:05:50 +0800
commit    6f13aa7dfee12b1b301bd10a1050549008ecc67e (patch)
tree      67f7324e327eabf40d8a0970cd0baaea7994d666 /python
parent    c0ae6bc6ea38909730fad36e653d3c7ab0a84b44 (diff)
[SPARK-17356][SQL] Fix out of memory issue when generating JSON for TreeNode
## What changes were proposed in this pull request?

The class `org.apache.spark.sql.types.Metadata` is widely used in MLlib to store ML attributes, and is commonly carried by `Alias` expressions:

```
case class Alias(child: Expression, name: String)(
    val exprId: ExprId = NamedExpression.newExprId,
    val qualifier: Option[String] = None,
    val explicitMetadata: Option[Metadata] = None,
    override val isGenerated: java.lang.Boolean = false)
```

A `Metadata` instance can have a large memory footprint, since the number of attributes can be large (on the order of millions). When `toJSON` is called on an `Alias` expression, the `Metadata` is also converted to a large JSON string. If a plan contains many such `Alias` expressions, calling `toJSON` may trigger an out-of-memory error, since converting every `Metadata` reference to JSON takes a huge amount of memory. With this PR, we skip scanning `Metadata` when doing JSON conversion. For a reproducer of the OOM and an analysis, see the JIRA ticket: https://issues.apache.org/jira/browse/SPARK-17356.

## How was this patch tested?

Existing tests.

Author: Sean Zhong <seanzhong@databricks.com>

Closes #14915 from clockfly/json_oom.
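The idea of the fix can be sketched as follows. This is a hypothetical, simplified illustration, not Spark's actual `TreeNode` implementation: the `jsonValue` helper and the two-field `Alias` are invented for the example. The point is that the serializer treats `Metadata` as opaque and emits a placeholder instead of recursively rendering its (potentially enormous) contents.

```scala
// Simplified stand-ins for the real classes (illustrative only).
case class Metadata(attrs: Map[String, String])
case class Alias(name: String, explicitMetadata: Option[Metadata])

object JsonSketch {
  // Render a field value as JSON, skipping Metadata entirely so its
  // contents are never materialized as a string.
  def jsonValue(v: Any): String = v match {
    case _: Metadata => "null"            // skip: would otherwise expand to a huge string
    case Some(x)     => jsonValue(x)
    case None        => "null"
    case s: String   => "\"" + s + "\""
    case other       => String.valueOf(other)
  }

  def aliasToJson(a: Alias): String =
    s"""{"name": ${jsonValue(a.name)}, "explicitMetadata": ${jsonValue(a.explicitMetadata)}}"""
}
```

With this shape, an `Alias` carrying a million-entry `Metadata` serializes in constant space for that field, which is the behavior the patch aims for.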
Diffstat (limited to 'python')
0 files changed, 0 insertions, 0 deletions