author    Matei Zaharia <matei@databricks.com>  2013-12-29 20:15:07 -0500
committer Matei Zaharia <matei@databricks.com>  2013-12-29 20:15:07 -0500
commit    eaa8a68ff08304f713f4f75d39c61c020e0e691d
tree      5543260c25af21555673154c0305a07e46f4ff6c  /python/pyspark/conf.py
parent    11540b798d622f3883cb40b20cc30ea7d894790a
Fix some Python docs and make sure to unset SPARK_TESTING in Python
tests so we don't get the test spark.conf on the classpath.
Diffstat (limited to 'python/pyspark/conf.py')
 python/pyspark/conf.py | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)
diff --git a/python/pyspark/conf.py b/python/pyspark/conf.py
index a79f348b52..cf98b0e071 100644
--- a/python/pyspark/conf.py
+++ b/python/pyspark/conf.py
@@ -55,11 +55,11 @@ class SparkConf(object):
 parameters as key-value pairs.
 Most of the time, you would create a SparkConf object with
- C{SparkConf()}, which will load values from `spark.*` Java system
- properties and any `spark.conf` on your application's classpath.
- In this case, system properties take priority over `spark.conf`,
- and any parameters you set directly on the `SparkConf` object take
- priority over both of those.
+ C{SparkConf()}, which will load values from C{spark.*} Java system
+ properties and any C{spark.conf} on your Spark classpath. In this
+ case, system properties take priority over C{spark.conf}, and any
+ parameters you set directly on the C{SparkConf} object take priority
+ over both of those.
 For unit tests, you can also call C{SparkConf(false)} to skip
 loading external settings and get the same configuration no matter
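The docstring being corrected describes a three-level precedence: values set directly on the `SparkConf` object override `spark.*` Java system properties, which in turn override entries from a `spark.conf` file on the classpath. The sketch below is a hypothetical pure-Python illustration of that merge order (it is not the real `SparkConf` implementation; the `effective_conf` function and its inputs are invented for illustration):

```python
# Hypothetical sketch of the precedence described in the SparkConf
# docstring: direct set() calls > spark.* system properties > spark.conf.

def effective_conf(spark_conf_file, system_props, direct_sets):
    conf = {}
    # Lowest priority: entries from a spark.conf file on the classpath.
    conf.update(spark_conf_file)
    # Middle priority: only spark.* Java system properties are picked up.
    conf.update({k: v for k, v in system_props.items()
                 if k.startswith("spark.")})
    # Highest priority: parameters set directly on the SparkConf object.
    conf.update(direct_sets)
    return conf

merged = effective_conf(
    {"spark.master": "local", "spark.app.name": "from-file"},
    {"spark.app.name": "from-sysprops", "java.version": "1.7"},
    {"spark.app.name": "from-set"},
)
# "spark.app.name" resolves to "from-set"; non-spark.* system
# properties like "java.version" are ignored.
```

Calling the real `SparkConf(false)` (as the docstring notes for unit tests) corresponds to skipping the first two update steps entirely, so only directly-set parameters remain.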