Commit message (Author, Date; Files, Lines)
* Deleted py4j jar and added to assembly dependency (Prashant Sharma, 2014-01-02; 7 files, -50/+2)
|
* Merge pull request #312 from pwendell/log4j-fix-2 (Patrick Wendell, 2014-01-01; 19 files, -37/+52)
|\
| |     SPARK-1008: Logging improvements
| |     1. Adds a default log4j file that gets loaded if users haven't specified a log4j file.
| |     2. Isolates use of the tools assembly jar. I found this produced SLF4J warnings after building with SBT (and I've seen similar warnings on the mailing list).
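The fallback in point 1 can be sketched in a few lines. This is an illustrative Python sketch, not Spark's actual code (Spark's logic lives in its Logging trait and checks the classpath for a log4j configuration); the function and file names here are assumptions for illustration only.

```python
def resolve_log4j_config(user_config=None,
                         default_config="org/apache/spark/log4j-defaults.properties"):
    """Sketch of the fallback: use the user's log4j file when one was
    specified, otherwise load the bundled default properties file.
    (Names are illustrative, not Spark's API.)"""
    return user_config if user_config is not None else default_config
```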
| * Merge remote-tracking branch 'apache-github/master' into log4j-fix-2 (Patrick Wendell, 2014-01-01; 38 files, -465/+803)
| |\
| |/
|/|
| |     Conflicts:
| |         streaming/src/main/scala/org/apache/spark/streaming/scheduler/JobGenerator.scala
* | Merge pull request #314 from witgo/master (Reynold Xin, 2013-12-31; 2 files, -1356/+240)
|\ \
| | |   restore core/pom.xml file modification
| * | restore core/pom.xml file modification (liguoqiang, 2014-01-01; 2 files, -1356/+240)
|/ /
* | Merge pull request #73 from falaki/ApproximateDistinctCount (Reynold Xin, 2013-12-31; 12 files, -233/+1595)
|\ \
| | |   Approximate distinct count
| | |   Added countApproxDistinct() to RDD and countApproxDistinctByKey() to PairRDDFunctions to approximately count the number of distinct elements, and the number of distinct values per key, respectively. Both functions use HyperLogLog from stream-lib for counting. Both take a parameter that controls the trade-off between accuracy and memory consumption. Also added Scala docs and test suites for both methods.
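The technique behind these methods is HyperLogLog: hash each element, use the first p bits of the hash to pick one of 2^p registers, and track the longest run of leading zeros seen per register; the estimate is derived from the harmonic mean of the registers. The sketch below is a minimal self-contained Python illustration of that idea; the class and parameter names are assumptions for this example and are not Spark's or stream-lib's API (Spark delegates to stream-lib's implementation).

```python
import hashlib
import math

class HyperLogLog:
    """Minimal HyperLogLog sketch. p trades accuracy for memory:
    m = 2**p registers, standard error is roughly 1.04 / sqrt(m)."""

    def __init__(self, p=10):
        self.p = p
        self.m = 1 << p
        self.registers = [0] * self.m

    def add(self, item):
        # 64-bit hash; first p bits pick a register, the rest give the rank.
        h = int(hashlib.sha1(str(item).encode()).hexdigest(), 16) & ((1 << 64) - 1)
        idx = h >> (64 - self.p)
        rest = h & ((1 << (64 - self.p)) - 1)
        # rank = 1-based position of the leftmost set bit in the remaining bits
        rank = (64 - self.p) - rest.bit_length() + 1
        self.registers[idx] = max(self.registers[idx], rank)

    def count(self):
        alpha = 0.7213 / (1 + 1.079 / self.m)  # bias correction for m >= 128
        estimate = alpha * self.m * self.m / sum(2.0 ** -r for r in self.registers)
        zeros = self.registers.count(0)
        if estimate <= 2.5 * self.m and zeros:
            # small-range correction: fall back to linear counting
            estimate = self.m * math.log(self.m / zeros)
        return int(estimate)
```

Adding the same element twice leaves the registers unchanged, which is why duplicates do not inflate the estimate.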
| * | Made the code more compact and readable (Hossein Falaki, 2013-12-31; 3 files, -23/+8)
| | |
| * | minor improvements (Hossein Falaki, 2013-12-31; 2 files, -4/+5)
| | |
| * | Added Java unit tests for countApproxDistinct and countApproxDistinctByKey (Hossein Falaki, 2013-12-30; 1 file, -0/+32)
| | |
| * | Added Java API for countApproxDistinct (Hossein Falaki, 2013-12-30; 1 file, -0/+11)
| | |
| * | Added Java API for countApproxDistinctByKey (Hossein Falaki, 2013-12-30; 1 file, -0/+36)
| | |
| * | Added stream 2.5.1 jar dependency (Hossein Falaki, 2013-12-30; 1 file, -1/+2)
| | |
| * | Renamed countDistinct and countDistinctByKey methods to include Approx (Hossein Falaki, 2013-12-30; 5 files, -15/+15)
| | |
| * | Using origin version (Hossein Falaki, 2013-12-30; 374 files, -8424/+19051)
| |\ \
| * | | Removed superfluous abs call from test cases. (Hossein Falaki, 2013-12-10; 1 file, -2/+2)
| | | |
| * | | Made SerializableHyperLogLog Externalizable and added Kryo tests (Hossein Falaki, 2013-10-18; 2 files, -5/+10)
| | | |
| * | | Added stream-lib dependency to Maven build (Hossein Falaki, 2013-10-18; 2 files, -0/+9)
| | | |
| * | | Improved code style. (Hossein Falaki, 2013-10-17; 4 files, -15/+19)
| | | |
| * | | Fixed document typo (Hossein Falaki, 2013-10-17; 2 files, -4/+4)
| | | |
| * | | Added dependency on stream-lib version 2.4.0 for approximate distinct count support. (Hossein Falaki, 2013-10-17; 1 file, -1/+2)
| | | |
| * | | Added countDistinctByKey to PairRDDFunctions that counts the approximate number of unique values for each key in the RDD. (Hossein Falaki, 2013-10-17; 2 files, -0/+81)
| | | |
| * | | Added a countDistinct method to RDD that takes an accuracy parameter and returns the (approximate) number of distinct elements in the RDD. (Hossein Falaki, 2013-10-17; 2 files, -1/+38)
| | | |
| * | | Added a serializable wrapper for HyperLogLog (Hossein Falaki, 2013-10-17; 1 file, -0/+44)
| | | |
* | | | Merge pull request #238 from ngbinh/upgradeNetty (Patrick Wendell, 2013-12-31; 8 files, -44/+60)
|\ \ \ \
| | | | |   upgrade Netty from 4.0.0.Beta2 to 4.0.13.Final
| | | | |   the changes are listed at https://github.com/netty/netty/wiki/New-and-noteworthy
| * | | | Fix failed unit tests (Binh Nguyen, 2013-12-27; 3 files, -13/+24)
| | | | |   Also clean up a bit.
| * | | | Fix imports order (Binh Nguyen, 2013-12-24; 3 files, -5/+2)
| | | | |
| * | | | Remove import * and fix some formatting (Binh Nguyen, 2013-12-24; 2 files, -7/+4)
| | | | |
| * | | | upgrade Netty from 4.0.0.Beta2 to 4.0.13.Final (Binh Nguyen, 2013-12-24; 7 files, -31/+42)
| | | | |
* | | | | Merge pull request #289 from tdas/filestream-fix (Patrick Wendell, 2013-12-31; 14 files, -196/+269)
|\ \ \ \ \
| | | | | |   Bug fixes for file input stream and checkpointing
| | | | | |   - Fixed bugs in the file input stream that led the stream to fail due to transient HDFS errors (e.g., listing files while a background thread is deleting them caused errors).
| | | | | |   - Updated Spark's CheckpointRDD and Streaming's CheckpointWriter to use SparkContext.hadoopConfiguration, to allow checkpoints to be written to any HDFS-compatible store requiring special configuration.
| | | | | |   - Changed the API of SparkContext.setCheckpointDir(): eliminated the unnecessary 'useExisting' parameter. Now SparkContext will always create a unique subdirectory within the user-specified checkpoint directory, to ensure that previous checkpoint files are not accidentally overwritten.
| | | | | |   - Fixed a bug where setting the checkpoint directory to a relative local path caused checkpointing to fail.
| * | | | | Fixed comments and long lines based on comments on PR 289. (Tathagata Das, 2013-12-31; 4 files, -10/+19)
| | | | | |
| * | | | | Minor changes in comments and strings to address comments in PR 289. (Tathagata Das, 2013-12-27; 1 file, -8/+6)
| | | | | |
| * | | | | Added warning if filestream adds files with no data in them (file RDDs have 0 partitions). (Tathagata Das, 2013-12-26; 1 file, -0/+7)
| | | | | |
| * | | | | Changed file stream to not catch any exceptions related to finding new files (FileNotFound exception is still caught and ignored). (Tathagata Das, 2013-12-26; 1 file, -19/+11)
| | | | | |
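The narrowed exception handling can be sketched as follows. The function and parameter names are illustrative, not Spark's API; in Spark the logic lives in FileInputDStream's file-listing code and deals with Hadoop FileSystem exceptions rather than Python's FileNotFoundError.

```python
def list_new_files(list_fn):
    # Sketch: a vanished file or directory is treated as "no new files"
    # (a transient condition, e.g. another process deleted it between
    # listing and reading), but any other exception propagates instead
    # of being silently swallowed.
    try:
        return list_fn()
    except FileNotFoundError:
        return []
```

Catching only the one expected transient error keeps real bugs visible instead of masking them behind a blanket except clause.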
| * | | | | Removed slack time in file stream and added better handling of failures due to FileNotFound exceptions. (Tathagata Das, 2013-12-26; 3 files, -50/+21)
| | | | | |
| * | | | | Fixed Python API for sc.setCheckpointDir. Also other fixes based on Reynold's comments on PR 289. (Tathagata Das, 2013-12-24; 7 files, -22/+16)
| | | | | |
| * | | | | Merge branch 'apache-master' into filestream-fix (Tathagata Das, 2013-12-24; 37 files, -123/+465)
| |\ \ \ \ \
| | | |_|/ /
| | |/| | |
| * | | | | Minor formatting fixes. (Tathagata Das, 2013-12-23; 3 files, -9/+13)
| | | | | |
| * | | | | Updated testsuites to work with the slack time of file stream. (Tathagata Das, 2013-12-23; 3 files, -2/+22)
| | | | | |
| * | | | | Merge branch 'scheduler-update' into filestream-fix (Tathagata Das, 2013-12-23; 3 files, -4/+26)
| |\ \ \ \ \
| * | | | | | Fixed bug in file stream that prevented some files from being read correctly. (Tathagata Das, 2013-12-23; 1 file, -9/+12)
| | | | | | |
| * | | | | | Updated CheckpointWriter and FileInputDStream to be robust against failed FileSystem objects. Refactored JobGenerator to use an actor so that all updating of DStream's metadata is single-threaded. (Tathagata Das, 2013-12-22; 3 files, -35/+78)
| | | | | | |
| * | | | | | Merge branch 'scheduler-update' into filestream-fix (Tathagata Das, 2013-12-22; 2 files, -1/+6)
| |\ \ \ \ \ \
| * \ \ \ \ \ \ Merge branch 'scheduler-update' into filestream-fix (Tathagata Das, 2013-12-19; 224 files, -3164/+4050)
| |\ \ \ \ \ \ \
| | | | | | | | |   Conflicts:
| | | | | | | | |       core/src/main/scala/org/apache/spark/rdd/CheckpointRDD.scala
| | | | | | | | |       streaming/src/main/scala/org/apache/spark/streaming/StreamingContext.scala
| | | | | | | | |       streaming/src/main/scala/org/apache/spark/streaming/dstream/FileInputDStream.scala
| | | | | | | | |       streaming/src/main/scala/org/apache/spark/streaming/scheduler/JobGenerator.scala
| | | | | | | | |       streaming/src/test/scala/org/apache/spark/streaming/CheckpointSuite.scala
| * | | | | | | | Fixed multiple file stream and checkpointing bugs. (Tathagata Das, 2013-12-11; 10 files, -117/+159)
| | | | | | | | |   - Made file stream more robust to transient failures.
| | | | | | | | |   - Changed the Spark.setCheckpointDir API to drop the second 'useExisting' parameter. Spark will always create a unique directory for checkpointing underneath the directory provided to the function.
| | | | | | | | |   - Fixed a bug with respect to local relative paths as the checkpoint directory.
| | | | | | | | |   - Made DStream and RDD checkpointing use SparkContext.hadoopConfiguration, so that more HDFS-compatible filesystems are supported for checkpointing.
* | | | | | | | | Merge pull request #308 from kayousterhout/stage_naming (Patrick Wendell, 2013-12-30; 7 files, -14/+18)
|\ \ \ \ \ \ \ \ \
| | | | | | | | | |   Changed naming of StageCompleted event to be consistent
| | | | | | | | | |   The rest of the SparkListener events are named with "SparkListener" as a prefix; this commit renames the StageCompleted event to SparkListenerStageCompleted for consistency.
| * | | | | | | | | Updated code style according to Patrick's comments (Kay Ousterhout, 2013-12-29; 1 file, -4/+2)
| | | | | | | | | |
| * | | | | | | | | Changed naming of StageCompleted event to be consistent (Kay Ousterhout, 2013-12-27; 7 files, -14/+20)
| | | | | | | | | |   The rest of the SparkListener events are named with "SparkListener" as a prefix; this commit renames the StageCompleted event to SparkListenerStageCompleted for consistency.
| | | | | | | | | * Adding outer checkout when initializing logging (Patrick Wendell, 2013-12-31; 1 file, -3/+5)
| | | | | | | | | |
| | | | | | | | | * Tiny typo fix (Patrick Wendell, 2013-12-31; 1 file, -2/+2)
| | | | | | | | | |
| | | | | | | | | * Removing use in test (Patrick Wendell, 2013-12-31; 1 file, -2/+0)
| | | | | | | | | |