author    Yuming Wang <wgyumg@gmail.com>  2017-02-28 10:13:42 +0000
committer Sean Owen <sowen@cloudera.com>  2017-02-28 10:13:42 +0000
commit    9b8eca65dcf68129470ead39362ce870ffb0bb1d (patch)
tree      282c7af7443b31416ff3f9821615f18635de916b /docs/hardware-provisioning.md
parent    a350bc16d36c58b48ac01f0258678ffcdb77e793 (diff)
[SPARK-19660][CORE][SQL] Replace the configuration property names that are deprecated as of Hadoop 2.6
## What changes were proposed in this pull request?

Replace all of the Hadoop configuration property names deprecated as of Hadoop 2.6, following [DeprecatedProperties](https://hadoop.apache.org/docs/r2.6.0/hadoop-project-dist/hadoop-common/DeprecatedProperties.html), with the following exceptions:

- https://github.com/apache/spark/blob/v2.1.0/python/pyspark/sql/tests.py#L1533
- https://github.com/apache/spark/blob/v2.1.0/sql/core/src/test/scala/org/apache/spark/sql/SQLQuerySuite.scala#L987
- https://github.com/apache/spark/blob/v2.1.0/sql/core/src/main/scala/org/apache/spark/sql/execution/command/SetCommand.scala#L45
- https://github.com/apache/spark/blob/v2.1.0/sql/core/src/main/scala/org/apache/spark/sql/internal/SQLConf.scala#L614

## How was this patch tested?

Existing tests.

Author: Yuming Wang <wgyumg@gmail.com>

Closes #16990 from wangyum/HadoopDeprecatedProperties.
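For context (not part of the commit): my understanding is that Hadoop 2.x keeps the old property names working by registering them as deprecated aliases of the new ones, which is why a rename like this can land without breaking existing configs. A minimal sketch of that behavior, assuming a Hadoop 2.6 client on the classpath (the object name `DeprecatedKeyDemo` is purely illustrative):

```scala
import org.apache.hadoop.mapred.JobConf

// Sketch: loading JobConf registers MapReduce's deprecated-key table, so a
// value written under the new property name should also resolve under the
// old (deprecated) one.
object DeprecatedKeyDemo {
  def main(args: Array[String]): Unit = {
    val conf = new JobConf()
    conf.set("mapreduce.tasktracker.map.tasks.maximum", "4")
    // Hadoop logs a deprecation warning but maps the old key to the new name.
    println(conf.get("mapred.tasktracker.map.tasks.maximum")) // expected: 4
  }
}
```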
Diffstat (limited to 'docs/hardware-provisioning.md')
-rw-r--r--  docs/hardware-provisioning.md | 4
1 file changed, 2 insertions(+), 2 deletions(-)
diff --git a/docs/hardware-provisioning.md b/docs/hardware-provisioning.md
index bb6f616b18..896f9302ef 100644
--- a/docs/hardware-provisioning.md
+++ b/docs/hardware-provisioning.md
@@ -15,8 +15,8 @@ possible**. We recommend the following:
* If at all possible, run Spark on the same nodes as HDFS. The simplest way is to set up a Spark
[standalone mode cluster](spark-standalone.html) on the same nodes, and configure Spark and
Hadoop's memory and CPU usage to avoid interference (for Hadoop, the relevant options are
-`mapred.child.java.opts` for the per-task memory and `mapred.tasktracker.map.tasks.maximum`
-and `mapred.tasktracker.reduce.tasks.maximum` for number of tasks). Alternatively, you can run
+`mapred.child.java.opts` for the per-task memory and `mapreduce.tasktracker.map.tasks.maximum`
+and `mapreduce.tasktracker.reduce.tasks.maximum` for number of tasks). Alternatively, you can run
Hadoop and Spark on a common cluster manager like [Mesos](running-on-mesos.html) or
[Hadoop YARN](running-on-yarn.html).
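As a hedged illustration of the co-location advice in this hunk (not part of the commit): the renamed properties would normally be set in `mapred-site.xml`, but setting them programmatically shows the same idea. All values and the object name `CoLocationTuning` below are hypothetical examples, not tuning recommendations.

```scala
import org.apache.hadoop.mapred.JobConf

// Sketch: cap Hadoop's per-task heap and TaskTracker slot counts so that
// MapReduce and a co-located Spark standalone cluster do not contend for
// memory and CPU on the same nodes.
object CoLocationTuning {
  def main(args: Array[String]): Unit = {
    val hadoopConf = new JobConf()
    hadoopConf.set("mapred.child.java.opts", "-Xmx1g")                // per-task heap
    hadoopConf.set("mapreduce.tasktracker.map.tasks.maximum", "4")    // map slots per node
    hadoopConf.set("mapreduce.tasktracker.reduce.tasks.maximum", "2") // reduce slots per node
    hadoopConf.writeXml(System.out) // emit the equivalent mapred-site.xml entries
  }
}
```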