author    | Dongjoon Hyun <dongjoon@apache.org>      | 2016-03-09 10:31:26 +0000
committer | Sean Owen <sowen@cloudera.com>           | 2016-03-09 10:31:26 +0000
commit    | c3689bc24e03a9471cd6e8169da61963c4528252 | (patch)
tree      | 5d1ee90afa2087ede8e4dbc4dd666d699578c230 | /docs
parent    | cbff2803ef117d7cffe6f05fc1bbd395a1e9c587 | (diff)
[SPARK-13702][CORE][SQL][MLLIB] Use diamond operator for generic instance creation in Java code.
## What changes were proposed in this pull request?
To make `docs/examples` (and other related code) simpler and more readable, this PR replaces code like the following with the `diamond` operator.
```
- final ArrayList<Product2<Object, Object>> dataToWrite =
- new ArrayList<Product2<Object, Object>>();
+ final ArrayList<Product2<Object, Object>> dataToWrite = new ArrayList<>();
```
Java 7 and higher support the **diamond** operator, which replaces the type arguments required to invoke the constructor of a generic class with an empty set of type parameters (`<>`); the compiler infers the arguments from the declaration. Currently, Spark's Java code mixes both styles.
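As a standalone illustration of the inference described above (the class name `DiamondDemo` is just for this sketch, not part of the patch):

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class DiamondDemo {
    public static void main(String[] args) {
        // Pre-Java 7 style: type arguments repeated on the right-hand side.
        Map<String, List<Integer>> verbose = new HashMap<String, List<Integer>>();

        // Java 7+ style: the diamond operator lets the compiler infer
        // <String, List<Integer>> from the variable's declared type.
        Map<String, List<Integer>> concise = new HashMap<>();

        // Both maps have exactly the same runtime type; only the source differs.
        concise.put("evens", new ArrayList<>());
        concise.get("evens").add(2);
        System.out.println(concise); // prints {evens=[2]}
    }
}
```

Both declarations compile to identical bytecode; the diamond operator is purely a source-level shorthand, which is why the patch can apply it mechanically without behavior changes.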
## How was this patch tested?
Manual; the existing tests pass.
Author: Dongjoon Hyun <dongjoon@apache.org>
Closes #11541 from dongjoon-hyun/SPARK-13702.
Diffstat (limited to 'docs')
-rw-r--r-- | docs/sql-programming-guide.md       | 4
-rw-r--r-- | docs/streaming-programming-guide.md | 4

2 files changed, 4 insertions, 4 deletions
```diff
diff --git a/docs/sql-programming-guide.md b/docs/sql-programming-guide.md
index c4d277f9bf..89fe873851 100644
--- a/docs/sql-programming-guide.md
+++ b/docs/sql-programming-guide.md
@@ -760,7 +760,7 @@ JavaRDD<String> people = sc.textFile("examples/src/main/resources/people.txt");
 String schemaString = "name age";

 // Generate the schema based on the string of schema
-List<StructField> fields = new ArrayList<StructField>();
+List<StructField> fields = new ArrayList<>();
 for (String fieldName: schemaString.split(" ")) {
   fields.add(DataTypes.createStructField(fieldName, DataTypes.StringType, true));
 }
@@ -1935,7 +1935,7 @@ val jdbcDF = sqlContext.read.format("jdbc").options(
 {% highlight java %}

-Map<String, String> options = new HashMap<String, String>();
+Map<String, String> options = new HashMap<>();
 options.put("url", "jdbc:postgresql:dbserver");
 options.put("dbtable", "schema.tablename");
diff --git a/docs/streaming-programming-guide.md b/docs/streaming-programming-guide.md
index e92b01aa77..998644f2e2 100644
--- a/docs/streaming-programming-guide.md
+++ b/docs/streaming-programming-guide.md
@@ -186,7 +186,7 @@ Next, we want to count these words.
 JavaPairDStream<String, Integer> pairs = words.mapToPair(
   new PairFunction<String, String, Integer>() {
     @Override public Tuple2<String, Integer> call(String s) {
-      return new Tuple2<String, Integer>(s, 1);
+      return new Tuple2<>(s, 1);
     }
   });
 JavaPairDStream<String, Integer> wordCounts = pairs.reduceByKey(
@@ -2095,7 +2095,7 @@ unifiedStream.print()
 <div data-lang="java" markdown="1">
 {% highlight java %}
 int numStreams = 5;
-List<JavaPairDStream<String, String>> kafkaStreams = new ArrayList<JavaPairDStream<String, String>>(numStreams);
+List<JavaPairDStream<String, String>> kafkaStreams = new ArrayList<>(numStreams);
 for (int i = 0; i < numStreams; i++) {
   kafkaStreams.add(KafkaUtils.createStream(...));
 }
```