author     Sital Kedia <skedia@fb.com>          2016-08-19 11:27:30 -0700
committer  Davies Liu <davies.liu@gmail.com>    2016-08-19 11:27:30 -0700
commit     cf0cce90364d17afe780ff9a5426dfcefa298535 (patch)
tree       4826d14efc1b7c57242b9a62f8f7c097c6b514c7 /R/pkg/inst
parent     071eaaf9d2b63589f2e66e5279a16a5a484de6f5 (diff)
[SPARK-17113] [SHUFFLE] Job failure due to Executor OOM in offheap mode
## What changes were proposed in this pull request?

This PR fixes an executor OOM in off-heap mode caused by a bug in Cooperative Memory Management for UnsafeExternalSorter. UnsafeExternalSorter checked whether a memory page was still in use by the upstream operator by comparing the base object address of the current page with the base object address of the upstream. With off-heap memory allocation, however, the base object addresses are always null, so no spilling ever happened and the operator eventually ran out of memory. The following stack trace shows the failure this issue addresses:

    java.lang.OutOfMemoryError: Unable to acquire 1220 bytes of memory, got 0
        at org.apache.spark.memory.MemoryConsumer.allocatePage(MemoryConsumer.java:120)
        at org.apache.spark.util.collection.unsafe.sort.UnsafeExternalSorter.acquireNewPageIfNecessary(UnsafeExternalSorter.java:341)
        at org.apache.spark.util.collection.unsafe.sort.UnsafeExternalSorter.insertRecord(UnsafeExternalSorter.java:362)
        at org.apache.spark.sql.execution.UnsafeExternalRowSorter.insertRow(UnsafeExternalRowSorter.java:93)
        at org.apache.spark.sql.execution.UnsafeExternalRowSorter.sort(UnsafeExternalRowSorter.java:170)

## How was this patch tested?

Tested by running the previously failing job.

Author: Sital Kedia <skedia@fb.com>

Closes #14693 from sitalkedia/fix_offheap_oom.
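For context, here is a minimal, self-contained Java sketch of the failure mode described above. The class, field, and method names are hypothetical and do not come from the Spark source; the point is only that a check based solely on base objects cannot distinguish off-heap pages, because every off-heap page carries a null base object. The actual patch may resolve this differently (for example by comparing page numbers or record pointers); comparing raw addresses below is just one illustrative alternative.

```java
// Illustrative sketch only; not the actual Spark implementation.
final class MemoryPageCheckSketch {

    // Minimal stand-in for a memory page: on-heap pages carry a byte[] base
    // object, off-heap pages carry only a raw address and a null base object.
    static final class Page {
        final Object baseObject;  // null when allocated off-heap
        final long baseOffset;    // raw address (off-heap) or array offset (on-heap)

        Page(Object baseObject, long baseOffset) {
            this.baseObject = baseObject;
            this.baseOffset = baseOffset;
        }
    }

    // Buggy pattern described in the commit message: off-heap, both base
    // objects are null, so every page looks "in use by upstream" and is
    // never spilled or freed.
    static boolean inUseByUpstreamBuggy(Page current, Page upstream) {
        return current.baseObject == upstream.baseObject;
    }

    // One way to make the check robust off-heap (hypothetical): also compare
    // the raw offsets/addresses, which stay distinct even with null bases.
    static boolean inUseByUpstreamFixed(Page current, Page upstream) {
        return current.baseObject == upstream.baseObject
            && current.baseOffset == upstream.baseOffset;
    }

    public static void main(String[] args) {
        Page a = new Page(null, 0x7f000000L);  // off-heap page A
        Page b = new Page(null, 0x7f100000L);  // off-heap page B, different address

        System.out.println("buggy check says page in use: " + inUseByUpstreamBuggy(a, b)); // true  -> never spills
        System.out.println("fixed check says page in use: " + inUseByUpstreamFixed(a, b)); // false -> can spill/free
    }
}
```

The essential observation is the `null == null` comparison: once allocation is off-heap, the base-object equality test is always true, so the sorter believes it can never release any page and eventually fails with the OutOfMemoryError shown above.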
Diffstat (limited to 'R/pkg/inst')
0 files changed, 0 insertions, 0 deletions