path: root/python/pyspark/context.py
Commit message | Author | Age | Files | Lines
* [SPARK-4387][PySpark] Refactoring python profiling code to make it extensible | Yandu Oppacher | 2015-01-28 | 1 | -32/+14
* [SPARK-5063] More helpful error messages for several invalid operations | Josh Rosen | 2015-01-23 | 1 | -0/+8
* [SPARK-5224] [PySpark] improve performance of parallelize list/ndarray | Davies Liu | 2015-01-15 | 1 | -1/+1
* [SPARK-4822] Use sphinx tags for Python doc annotations | lewuathe | 2014-12-17 | 1 | -2/+2
* [SPARK-4548] []SPARK-4517] improve performance of python broadcast | Davies Liu | 2014-11-24 | 1 | -10/+2
* [SPARK-3721] [PySpark] broadcast objects larger than 2G | Davies Liu | 2014-11-18 | 1 | -2/+3
* [SPARK-4398][PySpark] specialize sc.parallelize(xrange) | Xiangrui Meng | 2014-11-14 | 1 | -4/+21
* [SPARK-4186] add binaryFiles and binaryRecords in Python | Davies Liu | 2014-11-06 | 1 | -1/+31
* [SPARK-3886] [PySpark] simplify serializer, use AutoBatchedSerializer by defa... | Davies Liu | 2014-11-03 | 1 | -38/+20
* [SPARK-2652] [PySpark] donot use KyroSerializer as default serializer | Davies Liu | 2014-10-23 | 1 | -1/+0
* [SPARK-3971] [MLLib] [PySpark] hotfix: Customized pickler should work in clus... | Davies Liu | 2014-10-16 | 1 | -2/+0
* [SPARK-2377] Python API for Streaming | giwa | 2014-10-12 | 1 | -4/+4
* [SPARK-3886] [PySpark] use AutoBatchedSerializer by default | Davies Liu | 2014-10-10 | 1 | -4/+7
* [SPARK-3412] [PySpark] Replace Epydoc with Sphinx to generate Python API docs | Davies Liu | 2014-10-07 | 1 | -46/+46
* [SPARK-3773][PySpark][Doc] Sphinx build warning | cocoatomo | 2014-10-06 | 1 | -0/+1
* [SPARK-3478] [PySpark] Profile the Python tasks | Davies Liu | 2014-09-30 | 1 | -1/+38
* Revert "[SPARK-3478] [PySpark] Profile the Python tasks" | Josh Rosen | 2014-09-26 | 1 | -38/+1
* [SPARK-3478] [PySpark] Profile the Python tasks | Davies Liu | 2014-09-26 | 1 | -1/+38
* [SPARK-3634] [PySpark] User's module should take precedence over system modules | Davies Liu | 2014-09-24 | 1 | -6/+5
* [SPARK-3491] [MLlib] [PySpark] use pickle to serialize data in MLlib | Davies Liu | 2014-09-19 | 1 | -0/+1
* [SPARK-3430] [PySpark] [Doc] generate PySpark API docs using Sphinx | Davies Liu | 2014-09-16 | 1 | -1/+1
* [SPARK-1087] Move python traceback utilities into new traceback_utils.py file. | Aaron Staple | 2014-09-15 | 1 | -6/+2
* [SPARK-2951] [PySpark] support unpickle array.array for Python 2.6 | Davies Liu | 2014-09-15 | 1 | -0/+1
* [SPARK-3047] [PySpark] add an option to use str in textFileRDD | Davies Liu | 2014-09-11 | 1 | -4/+12
* [SPARK-3458] enable python "with" statements for SparkContext | Matthew Farrellee | 2014-09-09 | 1 | -0/+14
* [SPARK-3309] [PySpark] Put all public API in __all__ | Davies Liu | 2014-09-03 | 1 | -0/+3
* SPARK-3318: Documentation update in addFile on how to use SparkFiles.get | Holden Karau | 2014-08-30 | 1 | -2/+2
* [SPARK-3307] [PySpark] Fix doc string of SparkContext.broadcast() | Davies Liu | 2014-08-29 | 1 | -2/+0
* [SPARK-1065] [PySpark] improve supporting for large broadcast | Davies Liu | 2014-08-16 | 1 | -7/+13
* [SPARK-3035] Wrong example with SparkContext.addFile | iAmGhost | 2014-08-16 | 1 | -1/+1
* [SPARK-2627] [PySpark] have the build enforce PEP 8 automatically | Nicholas Chammas | 2014-08-06 | 1 | -11/+14
* [SPARK-2454] Do not ship spark home to Workers | Andrew Or | 2014-08-02 | 1 | -1/+1
* [SPARK-2024] Add saveAsSequenceFile to PySpark | Kan Zhang | 2014-07-30 | 1 | -15/+36
* [SPARK-1550] [PySpark] Allow SparkContext creation after failed attempts | Josh Rosen | 2014-07-27 | 1 | -6/+12
* [SPARK-2652] [PySpark] Turning some default configs for PySpark | Davies Liu | 2014-07-26 | 1 | -1/+12
* [SPARK-1458] [PySpark] Expose sc.version in Java and PySpark | Josh Rosen | 2014-07-26 | 1 | -0/+7
* [SPARK-2014] Make PySpark store RDDs in MEMORY_ONLY_SER with compression by d... | Prashant Sharma | 2014-07-24 | 1 | -1/+1
* [SPARK-2470] PEP8 fixes to PySpark | Nicholas Chammas | 2014-07-21 | 1 | -20/+25
* [SPARK-2061] Made splits deprecated in JavaRDDLike | Anant | 2014-06-20 | 1 | -1/+1
* SPARK-1416: PySpark support for SequenceFile and Hadoop InputFormats | Nick Pentreath | 2014-06-09 | 1 | -0/+137
* [SPARK-1161] Add saveAsPickleFile and SparkContext.pickleFile in Python | Kan Zhang | 2014-06-03 | 1 | -0/+14
* SPARK-1839: PySpark RDD#take() shouldn't always read from driver | Aaron Davidson | 2014-05-31 | 1 | -0/+26
* Added doctest and method description in context.py | Jyotiska NK | 2014-05-28 | 1 | -1/+14
* [SPARK-1900 / 1918] PySpark on YARN is broken | Andrew Or | 2014-05-24 | 1 | -2/+6
* [SPARK-1519] Support minPartitions param of wholeTextFiles() in PySpark | Kan Zhang | 2014-05-21 | 1 | -2/+10
* SPARK-1579: Clean up PythonRDD and avoid swallowing IOExceptions | Aaron Davidson | 2014-05-07 | 1 | -1/+1
* [SPARK-1549] Add Python support to spark-submit | Matei Zaharia | 2014-05-06 | 1 | -0/+6
* [SPARK-986]: Job cancelation for PySpark | Ahir Reddy | 2014-04-24 | 1 | -3/+49
* SPARK-1483: Rename minSplits to minPartitions in public APIs | CodingCat | 2014-04-18 | 1 | -3/+3
* SPARK-1305: Support persisting RDD's directly to Tachyon | Haoyuan Li | 2014-04-04 | 1 | -2/+5
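Two of the user-facing changes listed above are easy to see in a short snippet: SPARK-3458 made SparkContext usable as a "with" context manager, and SPARK-1161 added saveAsPickleFile / SparkContext.pickleFile. A minimal sketch, assuming a local PySpark installation; the output path is illustrative (not taken from the log) and must not already exist:

    from pyspark import SparkContext

    # SPARK-3458: SparkContext works as a context manager, so sc.stop()
    # is called automatically when the block exits.
    with SparkContext("local", "context-log-example") as sc:
        rdd = sc.parallelize(range(10))
        # SPARK-1161: round-trip an RDD through pickle files
        # (path below is hypothetical, for illustration only).
        rdd.saveAsPickleFile("/tmp/context_py_example")
        restored = sc.pickleFile("/tmp/context_py_example")
        print(restored.count())  # 10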