path: root/external
author	uncleGen <hustyugm@gmail.com>	2017-01-15 11:16:49 +0000
committer	Sean Owen <sowen@cloudera.com>	2017-01-15 11:16:49 +0000
commit	a5e651f4c6f243b59724fd46237407374017a035 (patch)
tree	b060cf0807de6d5440b155384ea7c3a261f89fa8 /external
parent	a8567e34dc77a32ddeb280f8f9f603f301722059 (diff)
[SPARK-19206][DOC][DSTREAM] Fix outdated parameter descriptions in kafka010
## What changes were proposed in this pull request?

Fix outdated parameter descriptions in kafka010.

## How was this patch tested?

cc koeninger zsxwing

Author: uncleGen <hustyugm@gmail.com>

Closes #16569 from uncleGen/SPARK-19206.
Diffstat (limited to 'external')
-rw-r--r--	external/kafka-0-10/src/main/scala/org/apache/spark/streaming/kafka010/DirectKafkaInputDStream.scala	| 11
-rw-r--r--	external/kafka-0-10/src/main/scala/org/apache/spark/streaming/kafka010/KafkaRDD.scala	| 4
-rw-r--r--	external/kafka-0-10/src/main/scala/org/apache/spark/streaming/kafka010/KafkaUtils.scala	| 26
3 files changed, 16 insertions(+), 25 deletions(-)
diff --git a/external/kafka-0-10/src/main/scala/org/apache/spark/streaming/kafka010/DirectKafkaInputDStream.scala b/external/kafka-0-10/src/main/scala/org/apache/spark/streaming/kafka010/DirectKafkaInputDStream.scala
index 794f53c5ab..6d6983c4bd 100644
--- a/external/kafka-0-10/src/main/scala/org/apache/spark/streaming/kafka010/DirectKafkaInputDStream.scala
+++ b/external/kafka-0-10/src/main/scala/org/apache/spark/streaming/kafka010/DirectKafkaInputDStream.scala
@@ -42,15 +42,12 @@ import org.apache.spark.streaming.scheduler.rate.RateEstimator
* The spark configuration spark.streaming.kafka.maxRatePerPartition gives the maximum number
* of messages
* per second that each '''partition''' will accept.
- * @param locationStrategy In most cases, pass in [[PreferConsistent]],
+ * @param locationStrategy In most cases, pass in [[LocationStrategies.PreferConsistent]],
* see [[LocationStrategy]] for more details.
- * @param executorKafkaParams Kafka
- * <a href="http://kafka.apache.org/documentation.html#newconsumerconfigs">
- * configuration parameters</a>.
- * Requires "bootstrap.servers" to be set with Kafka broker(s),
- * NOT zookeeper servers, specified in host1:port1,host2:port2 form.
- * @param consumerStrategy In most cases, pass in [[Subscribe]],
+ * @param consumerStrategy In most cases, pass in [[ConsumerStrategies.Subscribe]],
* see [[ConsumerStrategy]] for more details
+ * @param ppc configuration of settings such as max rate on a per-partition basis.
+ * see [[PerPartitionConfig]] for more details.
* @tparam K type of Kafka message key
* @tparam V type of Kafka message value
*/
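The corrected links above refer to concrete entry points. `DirectKafkaInputDStream` is normally constructed through `KafkaUtils`; a minimal sketch of wiring `LocationStrategies.PreferConsistent`, `ConsumerStrategies.Subscribe`, and a custom `PerPartitionConfig` together (broker addresses, topic name, group id, and the rate value are illustrative, and `ssc` is assumed to be an existing `StreamingContext`):

```scala
import org.apache.kafka.common.TopicPartition
import org.apache.kafka.common.serialization.StringDeserializer
import org.apache.spark.streaming.kafka010._

// Kafka consumer configuration; "bootstrap.servers" points at brokers, NOT ZooKeeper.
val kafkaParams = Map[String, Object](
  "bootstrap.servers" -> "host1:9092,host2:9092",
  "key.deserializer" -> classOf[StringDeserializer],
  "value.deserializer" -> classOf[StringDeserializer],
  "group.id" -> "example-group")

// The ppc parameter documented above: cap the per-partition ingest rate.
val ppc = new PerPartitionConfig {
  override def maxRatePerPartition(tp: TopicPartition): Long = 1000L
}

val stream = KafkaUtils.createDirectStream[String, String](
  ssc,                                  // existing StreamingContext (assumed in scope)
  LocationStrategies.PreferConsistent,  // spread partitions evenly across executors
  ConsumerStrategies.Subscribe[String, String](Seq("example-topic"), kafkaParams),
  ppc)
```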
diff --git a/external/kafka-0-10/src/main/scala/org/apache/spark/streaming/kafka010/KafkaRDD.scala b/external/kafka-0-10/src/main/scala/org/apache/spark/streaming/kafka010/KafkaRDD.scala
index 98394251bb..8f38095208 100644
--- a/external/kafka-0-10/src/main/scala/org/apache/spark/streaming/kafka010/KafkaRDD.scala
+++ b/external/kafka-0-10/src/main/scala/org/apache/spark/streaming/kafka010/KafkaRDD.scala
@@ -41,8 +41,8 @@ import org.apache.spark.storage.StorageLevel
* with Kafka broker(s) specified in host1:port1,host2:port2 form.
* @param offsetRanges offset ranges that define the Kafka data belonging to this RDD
* @param preferredHosts map from TopicPartition to preferred host for processing that partition.
- * In most cases, use [[DirectKafkaInputDStream.preferConsistent]]
- * Use [[DirectKafkaInputDStream.preferBrokers]] if your executors are on same nodes as brokers.
+ * In most cases, use [[LocationStrategies.PreferConsistent]]
+ * Use [[LocationStrategies.PreferBrokers]] if your executors are on same nodes as brokers.
* @param useConsumerCache whether to use a consumer from a per-jvm cache
* @tparam K type of Kafka message key
* @tparam V type of Kafka message value
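Since `KafkaRDD`'s constructor is internal, the `preferredHosts` behavior documented above is reached through `KafkaUtils.createRDD`; a hedged sketch (broker addresses, topic, group id, and offsets are invented, and `sc` is assumed to be an existing `SparkContext`):

```scala
import scala.collection.JavaConverters._
import org.apache.kafka.common.serialization.StringDeserializer
import org.apache.spark.streaming.kafka010._

// createRDD takes a java.util.Map of Kafka configuration parameters.
val kafkaParams = Map[String, Object](
  "bootstrap.servers" -> "host1:9092,host2:9092",
  "key.deserializer" -> classOf[StringDeserializer],
  "value.deserializer" -> classOf[StringDeserializer],
  "group.id" -> "example-group").asJava

// Exactly the Kafka data in these ranges becomes the RDD's partitions.
val offsetRanges = Array(
  OffsetRange("example-topic", partition = 0, fromOffset = 0L, untilOffset = 100L))

val rdd = KafkaUtils.createRDD[String, String](
  sc,                                   // existing SparkContext (assumed in scope)
  kafkaParams,
  offsetRanges,
  LocationStrategies.PreferConsistent)  // PreferBrokers only if executors share nodes with brokers
```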
diff --git a/external/kafka-0-10/src/main/scala/org/apache/spark/streaming/kafka010/KafkaUtils.scala b/external/kafka-0-10/src/main/scala/org/apache/spark/streaming/kafka010/KafkaUtils.scala
index c11917f59d..37046329e5 100644
--- a/external/kafka-0-10/src/main/scala/org/apache/spark/streaming/kafka010/KafkaUtils.scala
+++ b/external/kafka-0-10/src/main/scala/org/apache/spark/streaming/kafka010/KafkaUtils.scala
@@ -48,7 +48,7 @@ object KafkaUtils extends Logging {
* configuration parameters</a>. Requires "bootstrap.servers" to be set
* with Kafka broker(s) specified in host1:port1,host2:port2 form.
* @param offsetRanges offset ranges that define the Kafka data belonging to this RDD
- * @param locationStrategy In most cases, pass in LocationStrategies.preferConsistent,
+ * @param locationStrategy In most cases, pass in [[LocationStrategies.PreferConsistent]],
* see [[LocationStrategies]] for more details.
* @tparam K type of Kafka message key
* @tparam V type of Kafka message value
@@ -80,14 +80,12 @@ object KafkaUtils extends Logging {
* Java constructor for a batch-oriented interface for consuming from Kafka.
* Starting and ending offsets are specified in advance,
* so that you can control exactly-once semantics.
- * @param keyClass Class of the keys in the Kafka records
- * @param valueClass Class of the values in the Kafka records
* @param kafkaParams Kafka
* <a href="http://kafka.apache.org/documentation.html#newconsumerconfigs">
* configuration parameters</a>. Requires "bootstrap.servers" to be set
* with Kafka broker(s) specified in host1:port1,host2:port2 form.
* @param offsetRanges offset ranges that define the Kafka data belonging to this RDD
- * @param locationStrategy In most cases, pass in LocationStrategies.preferConsistent,
+ * @param locationStrategy In most cases, pass in [[LocationStrategies.PreferConsistent]],
* see [[LocationStrategies]] for more details.
* @tparam K type of Kafka message key
* @tparam V type of Kafka message value
@@ -110,9 +108,9 @@ object KafkaUtils extends Logging {
* The spark configuration spark.streaming.kafka.maxRatePerPartition gives the maximum number
* of messages
* per second that each '''partition''' will accept.
- * @param locationStrategy In most cases, pass in LocationStrategies.preferConsistent,
+ * @param locationStrategy In most cases, pass in [[LocationStrategies.PreferConsistent]],
* see [[LocationStrategies]] for more details.
- * @param consumerStrategy In most cases, pass in ConsumerStrategies.subscribe,
+ * @param consumerStrategy In most cases, pass in [[ConsumerStrategies.Subscribe]],
* see [[ConsumerStrategies]] for more details
* @tparam K type of Kafka message key
* @tparam V type of Kafka message value
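For the common case documented in this hunk (no custom per-partition config), the three-argument overload suffices; a sketch with illustrative broker addresses, topic, and group id, assuming `ssc` is an existing `StreamingContext`:

```scala
import org.apache.kafka.common.serialization.StringDeserializer
import org.apache.spark.streaming.kafka010._

val kafkaParams = Map[String, Object](
  "bootstrap.servers" -> "host1:9092,host2:9092",
  "key.deserializer" -> classOf[StringDeserializer],
  "value.deserializer" -> classOf[StringDeserializer],
  "group.id" -> "example-group")

val stream = KafkaUtils.createDirectStream[String, String](
  ssc,
  LocationStrategies.PreferConsistent,
  ConsumerStrategies.Subscribe[String, String](Seq("example-topic"), kafkaParams))

// Each RDD produced by the direct stream exposes the offset ranges it was built from.
stream.foreachRDD { rdd =>
  val ranges = rdd.asInstanceOf[HasOffsetRanges].offsetRanges
  ranges.foreach(r => println(s"${r.topic}-${r.partition}: ${r.fromOffset} -> ${r.untilOffset}"))
}
```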
@@ -131,9 +129,9 @@ object KafkaUtils extends Logging {
* :: Experimental ::
* Scala constructor for a DStream where
* each given Kafka topic/partition corresponds to an RDD partition.
- * @param locationStrategy In most cases, pass in LocationStrategies.preferConsistent,
+ * @param locationStrategy In most cases, pass in [[LocationStrategies.PreferConsistent]],
* see [[LocationStrategies]] for more details.
- * @param consumerStrategy In most cases, pass in ConsumerStrategies.subscribe,
+ * @param consumerStrategy In most cases, pass in [[ConsumerStrategies.Subscribe]],
* see [[ConsumerStrategies]] for more details.
* @param perPartitionConfig configuration of settings such as max rate on a per-partition basis.
* see [[PerPartitionConfig]] for more details.
@@ -154,11 +152,9 @@ object KafkaUtils extends Logging {
* :: Experimental ::
* Java constructor for a DStream where
* each given Kafka topic/partition corresponds to an RDD partition.
- * @param keyClass Class of the keys in the Kafka records
- * @param valueClass Class of the values in the Kafka records
- * @param locationStrategy In most cases, pass in LocationStrategies.preferConsistent,
+ * @param locationStrategy In most cases, pass in [[LocationStrategies.PreferConsistent]],
* see [[LocationStrategies]] for more details.
- * @param consumerStrategy In most cases, pass in ConsumerStrategies.subscribe,
+ * @param consumerStrategy In most cases, pass in [[ConsumerStrategies.Subscribe]],
* see [[ConsumerStrategies]] for more details
* @tparam K type of Kafka message key
* @tparam V type of Kafka message value
@@ -178,11 +174,9 @@ object KafkaUtils extends Logging {
* :: Experimental ::
* Java constructor for a DStream where
* each given Kafka topic/partition corresponds to an RDD partition.
- * @param keyClass Class of the keys in the Kafka records
- * @param valueClass Class of the values in the Kafka records
- * @param locationStrategy In most cases, pass in LocationStrategies.preferConsistent,
+ * @param locationStrategy In most cases, pass in [[LocationStrategies.PreferConsistent]],
* see [[LocationStrategies]] for more details.
- * @param consumerStrategy In most cases, pass in ConsumerStrategies.subscribe,
+ * @param consumerStrategy In most cases, pass in [[ConsumerStrategies.Subscribe]],
* see [[ConsumerStrategies]] for more details
* @param perPartitionConfig configuration of settings such as max rate on a per-partition basis.
* see [[PerPartitionConfig]] for more details.