path: root/python/run-tests

Commit history (most recent first; each entry lists author, date, files changed, and -deleted/+added lines):
* Re-enable Python MLlib tests (require Python 2.7 and NumPy 1.7+)
  Matei Zaharia, 2014-01-14 (1 file changed, -5/+5)
* Disable MLlib tests for now while Jenkins is still on Python 2.6
  Matei Zaharia, 2014-01-13 (1 file changed, -5/+5)
* Add Naive Bayes to Python MLlib, and some API fixes
  Matei Zaharia, 2014-01-11 (1 file changed, -0/+5)

  - Added a Python wrapper for Naive Bayes
  - Updated the Scala Naive Bayes to match the style of our other algorithms
    better, and in particular made it easier to call from Java (added a
    builder pattern, removed the default value in the train method)
  - Updated Python MLlib functions to not require a SparkContext; we can get
    that from the RDD the user gives
  - Added a toString method in LabeledPoint
  - Made the Python MLlib tests run as part of run-tests as well (before,
    they could only be run individually through each file)
* Merge branch 'scripts-reorg' of github.com:shane-huang/incubator-spark
  into spark-915-segregate-scripts
  Prashant Sharma, 2014-01-02 (1 file changed, -1/+1)

  Conflicts:
    bin/spark-shell
    core/pom.xml
    core/src/main/scala/org/apache/spark/SparkContext.scala
    core/src/main/scala/org/apache/spark/scheduler/cluster/mesos/CoarseMesosSchedulerBackend.scala
    core/src/main/scala/org/apache/spark/ui/UIWorkloadGenerator.scala
    core/src/test/scala/org/apache/spark/DriverSuite.scala
    python/run-tests
    sbin/compute-classpath.sh
    sbin/spark-class
    sbin/stop-slaves.sh
* add scripts in bin
  shane-huang, 2013-09-23 (1 file changed, -1/+1)

  Signed-off-by: shane-huang <shengsheng.huang@intel.com>
* Fix some Python docs and make sure to unset SPARK_TESTING in Python
  tests so we don't get the test spark.conf on the classpath
  Matei Zaharia, 2013-12-29 (1 file changed, -1/+1)
* Fix some other Python tests due to initializing JVM in a different way
  Matei Zaharia, 2013-12-29 (1 file changed, -0/+1)

  The test in context.py created two different instances of the SparkContext
  class by copying "globals", so that some tests can have a global "sc"
  object and others can try initializing their own contexts. This led to two
  JVM gateways being created, since SparkConf also looked at
  pyspark.context.SparkContext to get the JVM.
* Add custom serializer support to PySpark.
  Josh Rosen, 2013-11-10 (1 file changed, -0/+1)

  For now, this only adds MarshalSerializer, but it lays the groundwork for
  supporting other custom serializers. Many of these mechanisms can also be
  used to support deserialization of different data formats sent by Java,
  such as data encoded by MsgPack. This also fixes a bug in
  SparkContext.union().
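  As a rough illustration of what a marshal-based serializer does, here is a
  minimal standalone sketch built on Python's stdlib marshal module. The
  class and method names are illustrative only and are not claimed to match
  PySpark's exact internal API:

  ```python
  import marshal


  class MarshalSerializer:
      """Illustrative serializer wrapping Python's stdlib marshal module.

      marshal is fast but only handles built-in types (numbers, strings,
      lists, dicts, ...), which is the trade-off such a serializer makes
      compared with pickle.
      """

      def dumps(self, obj):
          # Encode a Python object to bytes.
          return marshal.dumps(obj)

      def loads(self, data):
          # Decode bytes back into a Python object.
          return marshal.loads(data)


  ser = MarshalSerializer()
  payload = ser.dumps({"counts": [1, 2, 3]})
  assert ser.loads(payload) == {"counts": [1, 2, 3]}
  ```

  The dumps/loads pair is the whole contract a framework needs from a
  pluggable serializer, which is why swapping in other formats is mostly a
  matter of providing another such pair.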
* Fix PySpark unit tests on Python 2.6.
  Josh Rosen, 2013-08-14 (1 file changed, -14/+12)
* Allow python/run-tests to run from any directory
  Matei Zaharia, 2013-07-29 (1 file changed, -0/+3)
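  The usual way to make a script location-independent, which is what this
  change is about, is to resolve the script's own directory and cd there
  before doing anything path-sensitive. A generic sketch of that idiom (not
  the exact run-tests code):

  ```shell
  #!/usr/bin/env bash
  # Resolve the directory containing this script, then run from there,
  # so relative paths inside the script work no matter where the caller
  # invoked it from.
  FWDIR="$(cd "$(dirname "$0")" && pwd)"
  cd "$FWDIR"
  echo "running tests from $FWDIR"
  ```

  `dirname "$0"` alone can be relative, so the `cd ... && pwd` wrapper is
  what turns it into an absolute path.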
* Add Apache license headers and LICENSE and NOTICE files
  Matei Zaharia, 2013-07-16 (1 file changed, -1/+19)
* Don't download files to master's working directory.
  Josh Rosen, 2013-01-21 (1 file changed, -0/+3)

  This should avoid exceptions caused by existing files with different
  contents. I also removed some unused code.
* Add RDD checkpointing to Python API.
  Josh Rosen, 2013-01-20 (1 file changed, -0/+3)
* Launch accumulator tests in run-tests
  Matei Zaharia, 2013-01-20 (1 file changed, -0/+3)
* Indicate success/failure in PySpark test script.
  Josh Rosen, 2013-01-09 (1 file changed, -0/+17)
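  A common shape for this kind of change is to fold each test command's exit
  status into a single flag and report the overall result at the end. A
  hedged sketch of that pattern (the function name and messages here are
  illustrative, not the exact run-tests code):

  ```shell
  #!/usr/bin/env bash
  # Track an overall pass/fail status across several test commands, so the
  # script can report success or failure at the end instead of silently
  # ignoring individual test results.
  FAILED=0

  run_test() {
      "$@"
      # OR each command's exit status into FAILED: once any test fails,
      # the flag stays nonzero for the rest of the run.
      FAILED=$((FAILED || $?))
  }

  run_test true     # stands in for e.g. "python pyspark/rdd.py"
  run_test false    # a deliberately failing "test" flips the flag

  if [ "$FAILED" -ne 0 ]; then
      echo "Had test failures; see logs above."
  else
      echo "Tests passed."
  fi
  ```

  The arithmetic OR means the flag records "did anything fail", which is
  exactly what a CI caller needs from the script's final status.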
* Add `pyspark` script to replace the other scripts.
  Josh Rosen, 2013-01-01 (1 file changed, -0/+9)

  Expand the PySpark programming guide.