Diffstat (limited to 'test')
26 files changed, 755 insertions, 4 deletions
diff --git a/test/benchmarks/.gitignore b/test/benchmarks/.gitignore new file mode 100644 index 0000000000..ce4d893417 --- /dev/null +++ b/test/benchmarks/.gitignore @@ -0,0 +1,14 @@ +/project/project/ +/project/target/ +/target/ + +# what appears to be a Scala IDE-generated file +.cache-main + +# standard Eclipse output directory +/bin/ + +# sbteclipse-generated Eclipse files +/.classpath +/.project +/.settings/ diff --git a/test/benchmarks/README.md b/test/benchmarks/README.md new file mode 100644 index 0000000000..370d610bc4 --- /dev/null +++ b/test/benchmarks/README.md @@ -0,0 +1,105 @@ +# Scala library benchmarks + +This directory is a standalone SBT project, within the Scala project, +that makes use of the [SBT plugin](https://github.com/ktoso/sbt-jmh) for [JMH](http://openjdk.java.net/projects/code-tools/jmh/). + +## Running a benchmark + +The benchmarks require first building Scala into `../../build/pack` with `ant`. +If you want to build with `sbt dist/mkPack` instead, +you'll need to change `scalaHome` in this project. + +You'll then need to know the fully-qualified name of the benchmark runner class. +The benchmarking classes are organized under `src/main/scala`, +in the same package hierarchy as the classes that they test. +Assuming that we're benchmarking `scala.collection.mutable.OpenHashMap`, +the benchmark runner would likely be named `scala.collection.mutable.OpenHashMapRunner`. +Using this example, one would simply run + + jmh:runMain scala.collection.mutable.OpenHashMapRunner + +in SBT. +SBT should be run _from this directory_. + +The JMH results can be found under `target/jmh-results/`. +`target` gets deleted on an SBT `clean`, +so you should copy these files out of `target` if you wish to preserve them. 
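The `target/jmh-results/` location mentioned above is subdivided by each runner's package name. A minimal sketch of that derivation, mirroring the `benchmark.JmhRunner` trait added later in this commit (the helper name `outputDirectoryFor` is ours, not part of the commit):

```scala
import java.io.File

// Sketch: derive a runner's output directory from its fully-qualified
// package name, as the `benchmark.JmhRunner` trait in this commit does.
// The helper name `outputDirectoryFor` is hypothetical.
def outputDirectoryFor(runnerPackage: String): File = {
  val subdir = runnerPackage.replace('.', File.separatorChar)
  new File(new File("target", "jmh-results"), subdir)
}

// scala.collection.mutable.OpenHashMapRunner would write under
// target/jmh-results/scala/collection/mutable
println(outputDirectoryFor("scala.collection.mutable").getPath)
```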
+ +## Creating a benchmark and runner + +The benchmarking classes use the same package hierarchy as the classes that they test +in order to make it easy to expose, in package scope, members of the class under test, +should that be necessary for benchmarking. + +There are two types of classes in the source directory: +those suffixed `Benchmark` and those suffixed `Runner`. +The former are benchmarks that can be run directly using `jmh:run`; +however, they are normally run from a corresponding class of the latter type, +which is run using `jmh:runMain` (as described above). +This …`Runner` class is useful for setting appropriate JMH command options, +and for processing the JMH results into files that can be read by other tools, such as Gnuplot. + +The `benchmark.JmhRunner` trait should be mixed into any runner class, for the standard behavior that it provides. +This includes creating output files in a subdirectory of `target/jmh-results` +derived from the fully-qualified package name of the `Runner` class. + +## Some useful HotSpot options +Adding these to the `jmh:run` or `jmh:runMain` command line may help if you're using the HotSpot (Oracle, OpenJDK) JVM. +They require prefixing with `-jvmArgs`. +See [the Java documentation](http://docs.oracle.com/javase/8/docs/technotes/tools/unix/java.html) for more options. + +### Viewing JIT compilation events +Adding `-XX:+PrintCompilation` shows when Java methods are being compiled or deoptimized. +At the most basic level, +these messages will tell you whether the code that you're measuring is still being tuned, +so that you know whether you're running enough warm-up iterations. +See [Kris Mok's notes](https://gist.github.com/rednaxelafx/1165804#file-notes-md) to interpret the output in detail. + +### Viewing GC events +If you're not explicitly performing `System.gc()` calls outside of your benchmarking code, +you should add the JVM option `-verbose:gc` to understand the effect that GCs may be having on your tests. 
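For example, to watch GC activity while running the `OpenHashMapRunner` benchmark described earlier, one might enter (a sketch; exact `-jvmArgs` quoting can vary with your shell and SBT version):

    jmh:runMain scala.collection.mutable.OpenHashMapRunner -jvmArgs -verbose:gc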
+ +### "Diagnostic" options +These require the `-XX:+UnlockDiagnosticVMOptions` JVM option. + +#### Viewing inlining events +Add `-XX:+PrintInlining`. + +#### Viewing the disassembled code +If you're running OpenJDK or Oracle JVM, +you may need to install the disassembler library (`hsdis-amd64.so` for the `amd64` architecture). +In Debian, this is available in +<a href="https://packages.debian.org/search?keywords=libhsdis0-fcml">the `libhsdis0-fcml` package</a>. +For an Oracle (or other compatible) JVM not set up by your distribution, +you may also need to copy or link the disassembler library +to the `jre/lib/`_`architecture`_ directory inside your JVM installation directory. + +To show the assembly code corresponding to the code generated by the JIT compiler for specific methods, +add `-XX:CompileCommand=print,scala.collection.mutable.OpenHashMap::*`, +for example, to show all of the methods in the `scala.collection.mutable.OpenHashMap` class. + +To show it for _all_ methods, add `-XX:+PrintAssembly`. +(This is usually excessive.) 
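Combining the diagnostic flags above into a single run might look like the following (a sketch using the `OpenHashMap` example from this commit; it assumes JMH's `-jvmArgs` option accepts one quoted, space-separated string of JVM arguments):

    jmh:runMain scala.collection.mutable.OpenHashMapRunner -jvmArgs "-XX:+UnlockDiagnosticVMOptions -XX:+PrintInlining -XX:CompileCommand=print,scala.collection.mutable.OpenHashMap::*"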
+ +## Useful reading +* [OpenJDK advice on microbenchmarks](https://wiki.openjdk.java.net/display/HotSpot/MicroBenchmarks) +* Brian Goetz's "Java theory and practice" articles: + * "[Dynamic compilation and performance measurement](http://www.ibm.com/developerworks/java/library/j-jtp12214/)" + * "[Anatomy of a flawed benchmark](http://www.ibm.com/developerworks/java/library/j-jtp02225/)" +* [Doug Lea's JSR 166 benchmarks](http://gee.cs.oswego.edu/cgi-bin/viewcvs.cgi/jsr166/src/test/loops/) +* "[Measuring performance](http://docs.scala-lang.org/overviews/parallel-collections/performance.html)" of Scala parallel collections + +## Legacy frameworks + +An older version of the benchmarking framework is still present in this directory, in the following locations: + +<dl> +<dt><code>bench</code></dt> +<dd>A script to run the old benchmarks.</dd> +<dt><code>source.list</code></dt> +<dd>A temporary file used by <code>bench</code>.</dd> +<dt><code>src/scala/</code></dt> +<dd>The older benchmarks, including the previous framework.</dd> +</dl> + +Another, older set of benchmarks is present in `../benchmarking/`. diff --git a/test/benchmarks/build.sbt b/test/benchmarks/build.sbt new file mode 100644 index 0000000000..31cee701ad --- /dev/null +++ b/test/benchmarks/build.sbt @@ -0,0 +1,11 @@ +scalaHome := Some(file("../../build/pack")) +scalaVersion := "2.12.0-dev" +scalacOptions ++= Seq("-feature", "-Yopt:l:classpath") + +lazy val root = (project in file(".")). + enablePlugins(JmhPlugin). 
+ settings( + name := "test-benchmarks", + version := "0.0.1", + libraryDependencies += "org.openjdk.jol" % "jol-core" % "0.4" + ) diff --git a/test/benchmarks/project/plugins.sbt b/test/benchmarks/project/plugins.sbt new file mode 100644 index 0000000000..e11aa29f3b --- /dev/null +++ b/test/benchmarks/project/plugins.sbt @@ -0,0 +1,2 @@ +addSbtPlugin("com.typesafe.sbteclipse" % "sbteclipse-plugin" % "4.0.0") +addSbtPlugin("pl.project13.scala" % "sbt-jmh" % "0.2.6") diff --git a/test/benchmarks/src/main/scala/benchmark/JmhRunner.scala b/test/benchmarks/src/main/scala/benchmark/JmhRunner.scala new file mode 100644 index 0000000000..cc75be529d --- /dev/null +++ b/test/benchmarks/src/main/scala/benchmark/JmhRunner.scala @@ -0,0 +1,16 @@ +package benchmark + +import java.io.File + +/** Common code for JMH runner objects. */ +trait JmhRunner { + private[this] val parentDirectory = new File("target", "jmh-results") + + /** Return the output directory for this class, creating the directory if necessary. */ + protected def outputDirectory: File = { + val subdir = getClass.getPackage.getName.replace('.', File.separatorChar) + val dir = new File(parentDirectory, subdir) + if (!dir.isDirectory) dir.mkdirs() + dir + } +} diff --git a/test/benchmarks/src/main/scala/benchmark/KeySeq.scala b/test/benchmarks/src/main/scala/benchmark/KeySeq.scala new file mode 100644 index 0000000000..126b92b3b6 --- /dev/null +++ b/test/benchmarks/src/main/scala/benchmark/KeySeq.scala @@ -0,0 +1,24 @@ +package benchmark + +/** A sequence of keys. + * + * Tests of maps and sets require a sequence of keys that can be used + * to add entries and possibly to find them again. + * This type provides such a sequence. + * + * Note that this needn't be a "sequence" in the full sense of [[collection.Seq]], + * particularly in that it needn't extend [[PartialFunction]]. + * + * @tparam K the type of the keys + */ +trait KeySeq[K] { + /** Selects a key by its index in the sequence. 
+ * Repeated calls with the same index return the same key (by reference equality). + * + * @param idx The index to select. Should be non-negative and less than `size`. + */ + def apply(idx: Int): K + + /** The size of this sequence. */ + def size: Int +} diff --git a/test/benchmarks/src/main/scala/benchmark/KeySeqBuilder.scala b/test/benchmarks/src/main/scala/benchmark/KeySeqBuilder.scala new file mode 100644 index 0000000000..95f6c7afd7 --- /dev/null +++ b/test/benchmarks/src/main/scala/benchmark/KeySeqBuilder.scala @@ -0,0 +1,33 @@ +package benchmark + +/** Builder of a [[KeySeq]] + * + * @tparam K the type of the keys + */ +trait KeySeqBuilder[K] { + /** Return a [[KeySeq]] having at least the given size. */ + def build(size: Int): KeySeq[K] +} + +object KeySeqBuilder { + /** Builder of a sequence of `Int` keys. + * Simply maps the sequence index to itself. + */ + implicit object IntKeySeqBuilder extends KeySeqBuilder[Int] { + def build(_size: Int) = new KeySeq[Int] { + def apply(idx: Int) = idx + def size = _size + } + } + + /** Builder of a sequence of `AnyRef` keys. 
*/ + implicit object AnyRefKeySeqBuilder extends KeySeqBuilder[AnyRef] { + def build(_size: Int) = new KeySeq[AnyRef] { + private[this] val arr = new Array[AnyRef](size) + for (i <- 0 until size) arr(i) = new AnyRef() + + def apply(idx: Int) = arr(idx) + def size = _size + } + } +} diff --git a/test/benchmarks/src/main/scala/scala/collection/mutable/OpenHashMapBenchmark.scala b/test/benchmarks/src/main/scala/scala/collection/mutable/OpenHashMapBenchmark.scala new file mode 100644 index 0000000000..64e2244499 --- /dev/null +++ b/test/benchmarks/src/main/scala/scala/collection/mutable/OpenHashMapBenchmark.scala @@ -0,0 +1,308 @@ +package scala.collection.mutable; + +import org.openjdk.jmh.annotations._ +import org.openjdk.jmh.infra._ +import org.openjdk.jmh.runner.IterationType +import org.openjdk.jol.info.GraphLayout + +import benchmark._ +import java.util.concurrent.TimeUnit + +/** Utilities for the [[OpenHashMapBenchmark]]. + * + * The method calls are tested by looping to the size desired for the map, + instead of using the JMH harness, which iterates for a fixed length of time. + */ +private object OpenHashMapBenchmark { + + /** Abstract state container for the `put()` bulk calling tests. + * + * Provides an array of adequately-sized, empty maps to each invocation, + * so that hash table allocation won't be done during measurement. + * Provides enough maps to make each invocation long enough to avoid timing artifacts. + * Performs a GC after re-creating the empty maps before every invocation, + * so that only the GCs caused by the invocation contribute to the measurement. + * + * Records the memory used by all the maps in the last invocation of each iteration. 
+ * + * @tparam K type of the map keys to be used in the test + */ + @State(Scope.Thread) + private[this] abstract class BulkPutState[K](implicit keyBuilder: KeySeqBuilder[K]) { + /** A lower-bound estimate of the number of nanoseconds per `put()` call */ + private[this] val nanosPerPut: Double = 5 + + /** Minimum number of nanoseconds per invocation, so as to avoid timing artifacts. */ + private[this] val minNanosPerInvocation = 1000000 // one millisecond + + /** Size of the maps created in this trial. */ + private[this] var size: Int = _ + + /** Total number of entries in all of the `maps` combined. */ + private[this] var _mapEntries: Int = _ + protected def mapEntries = _mapEntries + + /** Number of operations performed in the current invocation. */ + private[this] var _operations: Int = _ + protected def operations = _operations + + /** Bytes of memory used in the object graphs of all the maps. */ + private[this] var _memory: Long = _ + protected def memory = _memory + + /** The sequence of keys to store into a map. 
*/ + private[this] var _keys: KeySeq[K] = _ + def keys() = _keys + + var maps: Array[OpenHashMap[K,Int]] = null + + @Setup + def threadSetup(params: BenchmarkParams) { + size = params.getParam("size").toInt + val n = math.ceil(minNanosPerInvocation / (nanosPerPut * size)).toInt + _mapEntries = size * n + _keys = keyBuilder.build(size) + maps = new Array(n) + } + + @Setup(Level.Iteration) + def iterationSetup { + _operations = 0 + } + + @Setup(Level.Invocation) + def setup(params: IterationParams) { + for (i <- 0 until maps.length) maps(i) = new OpenHashMap[K,Int](size) + + if (params.getType == IterationType.MEASUREMENT) { + _operations += _mapEntries + System.gc() // clean up after last invocation + } + } + + @TearDown(Level.Iteration) + def iterationTeardown(params: IterationParams) { + if (params.getType == IterationType.MEASUREMENT) { + // limit to smaller cases to avoid OOM + _memory = + if (_mapEntries <= 1000000) GraphLayout.parseInstance(maps(0), maps.tail).totalSize + else 0 + } + } + } + + /** Abstract state container for the `get()` bulk calling tests. + * + * Provides a thread-scoped map of the expected size. + * Performs a GC after loading the map. + * + * @tparam K type of the map keys to be used in the test + */ + @State(Scope.Thread) + private[this] abstract class BulkGetState[K](implicit keyBuilder: KeySeqBuilder[K]) { + /** The sequence of keys to store into a map. */ + private[this] var _keys: KeySeq[K] = _ + def keys() = _keys + + val map = new OpenHashMap[K,Int].empty + + /** Load the map with keys from `1` to `size`. */ + @Setup + def setup(params: BenchmarkParams) { + val size = params.getParam("size").toInt + _keys = keyBuilder.build(size) + put(map, keys, 0, size) + System.gc() + } + } + + /** Abstract state container for the `get()` bulk calling tests with deleted entries. + * + * Provides a thread-scoped map of the expected size, from which entries have been removed. + * Performs a GC after loading the map. 
+ * + * @tparam K type of the map keys to be used in the test + */ + @State(Scope.Thread) + private[this] abstract class BulkRemovedGetState[K](implicit keyBuilder: KeySeqBuilder[K]) { + /** The sequence of keys to store into a map. */ + private[this] var _keys: KeySeq[K] = _ + def keys() = _keys + + val map = new OpenHashMap[K,Int].empty + + /** Load the map with keys from `1` to `size`, removing half of them. */ + @Setup + def setup(params: BenchmarkParams) { + val size = params.getParam("size").toInt + _keys = keyBuilder.build(size) + put_remove(map, keys) + System.gc() + } + } + + /* In order to use `@AuxCounters` on a class hierarchy (as of JMH 1.11.3), + * it's necessary to place it on the injected (sub)class, and to make the + * counters visible as explicit public members of the that class. JMH doesn't + * scan the ancestor classes for counters. + */ + + @AuxCounters + private class IntBulkPutState extends BulkPutState[Int] { + override def mapEntries = super.mapEntries + override def operations = super.operations + override def memory = super.memory + } + private class IntBulkGetState extends BulkGetState[Int] + private class IntBulkRemovedGetState extends BulkRemovedGetState[Int] + + @AuxCounters + private class AnyRefBulkPutState extends BulkPutState[AnyRef] { + override def mapEntries = super.mapEntries + override def operations = super.operations + override def memory = super.memory + } + private class AnyRefBulkGetState extends BulkGetState[AnyRef] + private class AnyRefBulkRemovedGetState extends BulkRemovedGetState[AnyRef] + + + /** Put entries into the given map. + * Adds entries using a range of keys from the given list. 
+ * + * @param from lowest index in the range of keys to add + * @param to highest index in the range of keys to add, plus one + */ + private[this] def put[K](map: OpenHashMap[K,Int], keys: KeySeq[K], from: Int, to: Int) { + var i = from + while (i < to) { // using a `for` expression instead adds significant overhead + map.put(keys(i), i) + i += 1 + } + } + + /** Put entries into the given map. + * Adds entries using all of the keys from the given list. + */ + private def put[K](map: OpenHashMap[K,Int], keys: KeySeq[K]): Unit = + put(map, keys, 0, keys.size) + + /** Put entries into the given map, removing half of them as they're added. + * + * @param keys list of keys to use + */ + private def put_remove[K](map: OpenHashMap[K,Int], keys: KeySeq[K]) { + val blocks = 25 // should be a non-trivial factor of `size` + val size = keys.size + val blockSize: Int = size / blocks + var base = 0 + while (base < size) { + put(map, keys, base, base + blockSize) + + // remove every other entry + var i = base + while (i < base + blockSize) { + map.remove(keys(i)) + i += 2 + } + + base += blockSize + } + } + + /** Get elements from the given map. */ + private def get[K](map: OpenHashMap[K,Int], keys: KeySeq[K]) = { + val size = keys.size + var i = 0 + var sum = 0 + while (i < size) { + sum += map.get(keys(i)).getOrElse(0) + i += 1 + } + sum + } +} + +/** Benchmark for the library's [[OpenHashMap]]. */ +@BenchmarkMode(Array(Mode.AverageTime)) +@Fork(5) +@Threads(1) +@Warmup(iterations = 20) +@Measurement(iterations = 5) +@OutputTimeUnit(TimeUnit.NANOSECONDS) +@State(Scope.Benchmark) +class OpenHashMapBenchmark { + import OpenHashMapBenchmark._ + + @Param(Array("50", "100", "1000", "10000", "100000", "1000000", "2500000", + "5000000", "7500000", "10000000", "25000000")) + var size: Int = _ + + // Tests with Int keys + + /** Test putting elements to a map of `Int` to `Int`. 
*/ + @Benchmark + def put_Int(state: IntBulkPutState) { + var i = 0 + while (i < state.maps.length) { + put(state.maps(i), state.keys) + i += 1 + } + } + + /** Test putting and removing elements to a growing map of `Int` to `Int`. */ + @Benchmark + def put_remove_Int(state: IntBulkPutState) { + var i = 0 + while (i < state.maps.length) { + put_remove(state.maps(i), state.keys) + i += 1 + } + } + + /** Test getting elements from a map of `Int` to `Int`. */ + @Benchmark + def get_Int_after_put(state: IntBulkGetState) = + get(state.map, state.keys) + + /** Test getting elements from a map of `Int` to `Int` from which elements have been removed. + * Note that half of these queries will fail to find their keys, which have been removed. + */ + @Benchmark + def get_Int_after_put_remove(state: IntBulkRemovedGetState) = + get(state.map, state.keys) + + + // Tests with AnyRef keys + + /** Test putting elements to a map of `AnyRef` to `Int`. */ + @Benchmark + def put_AnyRef(state: AnyRefBulkPutState) { + var i = 0 + while (i < state.maps.length) { + put(state.maps(i), state.keys) + i += 1 + } + } + + /** Test putting and removing elements to a growing map of `AnyRef` to `Int`. */ + @Benchmark + def put_remove_AnyRef(state: AnyRefBulkPutState) { + var i = 0 + while (i < state.maps.length) { + put_remove(state.maps(i), state.keys) + i += 1 + } + } + + /** Test getting elements from a map of `AnyRef` to `Int`. */ + @Benchmark + def get_AnyRef_after_put(state: AnyRefBulkGetState) = + get(state.map, state.keys) + + /** Test getting elements from a map of `AnyRef` to `Int` from which elements have been removed. + * Note that half of these queries will fail to find their keys, which have been removed. 
+ */ + @Benchmark + def get_AnyRef_after_put_remove(state: AnyRefBulkRemovedGetState) = + get(state.map, state.keys) +} diff --git a/test/benchmarks/src/main/scala/scala/collection/mutable/OpenHashMapRunner.scala b/test/benchmarks/src/main/scala/scala/collection/mutable/OpenHashMapRunner.scala new file mode 100644 index 0000000000..b14b733a81 --- /dev/null +++ b/test/benchmarks/src/main/scala/scala/collection/mutable/OpenHashMapRunner.scala @@ -0,0 +1,113 @@ +package scala.collection.mutable + +import java.io.File +import java.io.PrintWriter + +import scala.language.existentials + +import org.openjdk.jmh.results.Result +import org.openjdk.jmh.results.RunResult +import org.openjdk.jmh.runner.Runner +import org.openjdk.jmh.runner.options.CommandLineOptions +import org.openjdk.jmh.runner.options.OptionsBuilder +import org.openjdk.jmh.runner.options.VerboseMode + +import benchmark.JmhRunner + +/** Replacement JMH application that runs the [[OpenHashMap]] benchmark. + * + * Outputs the results in a form consumable by a Gnuplot script. + */ +object OpenHashMapRunner extends JmhRunner { + /** File that will be created for the output data set. */ + private[this] val outputFile = new File(outputDirectory, "OpenHashMap.dat") + + /** Qualifier to add to the name of a memory usage data set. */ + private[this] val memoryDatasetQualifier = "-memory" + + /** Adapter to the JMH result class that simplifies our method calls. */ + private[this] implicit class MyRunResult(r: RunResult) { + /** Return the dataset label. */ + def label = r.getPrimaryResult.getLabel + + /** Return the value of the JMH parameter for the number of map entries per invocation. */ + def size: String = r.getParams.getParam("size") + + /** Return the operation counts. Not every test tracks this. */ + def operations = Option(r.getSecondaryResults.get("operations")) + + /** Return the number of map entries. */ + def entries = r.getSecondaryResults.get("mapEntries") + + /** Return the memory usage. 
Only defined if memory usage was measured. */ + def memory = Option(r.getSecondaryResults.get("memory")) + } + + /** Return the statistics of the given result as a string. */ + private[this] def stats(r: Result[_]) = r.getScore + " " + r.getStatistics.getStandardDeviation + + + def main(args: Array[String]) { + import scala.collection.JavaConversions._ + + val opts = new CommandLineOptions(args: _*) + var builder = new OptionsBuilder().parent(opts).jvmArgsPrepend("-Xmx6000m") + if (!opts.verbosity.hasValue) builder = builder.verbosity(VerboseMode.SILENT) + + val results = new Runner(builder.build).run() + + /* Sort the JMH results into "data sets", each representing a complete test of one feature. + * Some results only measure CPU performance, while others also measure memory usage, and + * thus are split into two data sets. A data set is distinguished by its label, which is + * the label of the JMH result, for CPU performance, or that with an added suffix, for memory + * usage. + */ + + /** Map from data set name to data set. */ + val datasetByName = Map.empty[String, Set[RunResult]] + + /** Ordering for the results within a data set. Orders by increasing number of map entries. 
*/ + val ordering = Ordering.by[RunResult, Int](_.size.toInt) + + def addToDataset(key: String, result: RunResult): Unit = + datasetByName.getOrElseUpdate(key, SortedSet.empty(ordering)) += result + + results.foreach { result => + addToDataset(result.label, result) + + // Create another data set for trials that track memory usage + if (result.memory.isDefined) + addToDataset(result.label + memoryDatasetQualifier, result) + } + + //TODO Write out test parameters + // val jvm = params.getJvm + // val jvmArgs = params.getJvmArgs.mkString(" ") + + val f = new PrintWriter(outputFile, "UTF-8") + try { + datasetByName.foreach(_ match { + case (label: String, dataset: Iterable[RunResult]) => + outputDataset(f, label, dataset) + }) + } finally { + f.close() + } + } + + private[this] def outputDataset(f: PrintWriter, label: String, dataset: Iterable[RunResult]) { + f.println(s"# [$label]") + + val isMemoryUsageDataset = label.endsWith(memoryDatasetQualifier) + dataset.foreach { r => + f.println(r.size + " " + ( + if (isMemoryUsageDataset && !r.memory.get.getScore.isInfinite) + stats(r.entries) + " " + stats(r.memory.get) + else + stats(r.operations getOrElse r.getPrimaryResult) + )) + } + + f.println(); f.println() // data set separator + } +} diff --git a/test/files/t8449/Client.scala b/test/files/pos/t8449/Client.scala index 5d273f06b2..5d273f06b2 100644 --- a/test/files/t8449/Client.scala +++ b/test/files/pos/t8449/Client.scala diff --git a/test/files/t8449/Test.java b/test/files/pos/t8449/Test.java index ecb1711b24..ecb1711b24 100644 --- a/test/files/t8449/Test.java +++ b/test/files/pos/t8449/Test.java diff --git a/test/files/run/repl-paste-parse.check b/test/files/run/repl-paste-parse.check new file mode 100755 index 0000000000..7b2148dc74 --- /dev/null +++ b/test/files/run/repl-paste-parse.check @@ -0,0 +1,6 @@ +Type in expressions for evaluation. Or try :help. 
+ +scala> repl-paste-parse.script:1: error: illegal start of simple pattern +val case = 9 + ^ +:quit diff --git a/test/files/run/repl-paste-parse.scala b/test/files/run/repl-paste-parse.scala new file mode 100644 index 0000000000..e93ad4d02b --- /dev/null +++ b/test/files/run/repl-paste-parse.scala @@ -0,0 +1,27 @@ + +import java.io.{ BufferedReader, StringReader, StringWriter, PrintWriter } + +import scala.tools.partest.DirectTest +import scala.tools.nsc.interpreter.ILoop +import scala.tools.nsc.GenericRunnerSettings + +object Test extends DirectTest { + override def extraSettings = s"-usejavacp -i $scriptPath" + def scriptPath = testPath.changeExtension("script") + override def newSettings(args: List[String]) = { + val ss = new GenericRunnerSettings(Console.println) + ss.processArguments(args, true) + ss + } + def code = "" + def show() = { + val r = new BufferedReader(new StringReader("")) + val w = new StringWriter + val p = new PrintWriter(w, true) + new ILoop(r, p).process(settings) + w.toString.lines foreach { s => + if (!s.startsWith("Welcome to Scala")) println(s) + } + } +} + diff --git a/test/files/run/repl-paste-parse.script b/test/files/run/repl-paste-parse.script new file mode 100644 index 0000000000..903f6e7b0c --- /dev/null +++ b/test/files/run/repl-paste-parse.script @@ -0,0 +1 @@ +val case = 9 diff --git a/test/files/run/t4625.check b/test/files/run/t4625.check new file mode 100644 index 0000000000..e4a4d15b87 --- /dev/null +++ b/test/files/run/t4625.check @@ -0,0 +1 @@ +Test ran. 
diff --git a/test/files/run/t4625.scala b/test/files/run/t4625.scala new file mode 100644 index 0000000000..44f6225220 --- /dev/null +++ b/test/files/run/t4625.scala @@ -0,0 +1,7 @@ + +import scala.tools.partest.ScriptTest + +object Test extends ScriptTest { + // must be called Main to get probing treatment in parser + override def testmain = "Main" +} diff --git a/test/files/run/t4625.script b/test/files/run/t4625.script new file mode 100644 index 0000000000..600ceacbb6 --- /dev/null +++ b/test/files/run/t4625.script @@ -0,0 +1,5 @@ + +object Main extends Runnable with App { + def run() = println("Test ran.") + run() +} diff --git a/test/files/run/t4625b.check b/test/files/run/t4625b.check new file mode 100644 index 0000000000..e79539a5c4 --- /dev/null +++ b/test/files/run/t4625b.check @@ -0,0 +1 @@ +Misc top-level detritus diff --git a/test/files/run/t4625b.scala b/test/files/run/t4625b.scala new file mode 100644 index 0000000000..44f6225220 --- /dev/null +++ b/test/files/run/t4625b.scala @@ -0,0 +1,7 @@ + +import scala.tools.partest.ScriptTest + +object Test extends ScriptTest { + // must be called Main to get probing treatment in parser + override def testmain = "Main" +} diff --git a/test/files/run/t4625b.script b/test/files/run/t4625b.script new file mode 100644 index 0000000000..f21a553dd1 --- /dev/null +++ b/test/files/run/t4625b.script @@ -0,0 +1,8 @@ + +trait X { def x = "Misc top-level detritus" } + +object Bumpkus + +object Main extends X with App { + println(x) +} diff --git a/test/files/run/t4625c.check b/test/files/run/t4625c.check new file mode 100644 index 0000000000..6acb1710b9 --- /dev/null +++ b/test/files/run/t4625c.check @@ -0,0 +1,3 @@ +newSource1.scala:2: warning: Script has a main object but statement is disallowed +val x = "value x" + ^ diff --git a/test/files/run/t4625c.scala b/test/files/run/t4625c.scala new file mode 100644 index 0000000000..44f6225220 --- /dev/null +++ b/test/files/run/t4625c.scala @@ -0,0 +1,7 @@ + +import 
scala.tools.partest.ScriptTest + +object Test extends ScriptTest { + // must be called Main to get probing treatment in parser + override def testmain = "Main" +} diff --git a/test/files/run/t4625c.script b/test/files/run/t4625c.script new file mode 100644 index 0000000000..16159208e0 --- /dev/null +++ b/test/files/run/t4625c.script @@ -0,0 +1,7 @@ + +val x = "value x" +val y = "value y" + +object Main extends App { + println(s"Test ran with $x.") +} diff --git a/test/files/run/t7805-repl-i.check b/test/files/run/t7805-repl-i.check index 24512c0067..70f024605c 100644 --- a/test/files/run/t7805-repl-i.check +++ b/test/files/run/t7805-repl-i.check @@ -1,6 +1,3 @@ -Loading t7805-repl-i.script... -import util._ - Welcome to Scala Type in expressions for evaluation. Or try :help. diff --git a/test/files/run/t9170.scala b/test/files/run/t9170.scala index f39467bc25..87471fb129 100644 --- a/test/files/run/t9170.scala +++ b/test/files/run/t9170.scala @@ -44,7 +44,7 @@ object Y { // Exiting paste mode, now interpreting. 
-<console>:13: error: double definition: +<pastie>:13: error: double definition: def f[A](a: => A): Int at line 12 and def f[A](a: => Either[Exception,A]): Int at line 13 have same type after erasure: (a: Function0)Int diff --git a/test/junit/scala/collection/immutable/HashMapTest.scala b/test/junit/scala/collection/immutable/HashMapTest.scala new file mode 100644 index 0000000000..a970786455 --- /dev/null +++ b/test/junit/scala/collection/immutable/HashMapTest.scala @@ -0,0 +1,48 @@ +package scala.collection.immutable + +import org.junit.Assert.assertEquals +import org.junit.Test +import org.junit.runner.RunWith +import org.junit.runners.JUnit4 + +@RunWith(classOf[JUnit4]) +class HashMapTest { + + private val computeHashF = { + HashMap.empty.computeHash _ + } + + @Test + def canMergeIdenticalHashMap1sWithNullKvs() { + def m = new HashMap.HashMap1(1, computeHashF(1), 1, null) + val merged = m.merged(m)(null) + assertEquals(m, merged) + } + + @Test + def canMergeIdenticalHashMap1sWithNullKvsCustomMerge() { + def m = new HashMap.HashMap1(1, computeHashF(1), 1, null) + val merged = m.merged(m) { + case ((k1, v1), (k2, v2)) => + (k1, v1 + v2) + } + assertEquals(new HashMap.HashMap1(1, computeHashF(1), 2, null), merged) + } + + @Test + def canMergeHashMap1sWithNullKvsHashCollision() { + val key1 = 1000L * 1000 * 1000 * 10 + val key2 = key1.##.toLong + assert(key1.## == key2.##) + + val m1 = new HashMap.HashMap1(key1, computeHashF(key1.##), 1, null) + val m2 = new HashMap.HashMap1(key2, computeHashF(key2.##), 1, null) + val expected = HashMap(key1 -> 1, key2 -> 1) + val merged = m1.merged(m2)(null) + assertEquals(expected, merged) + val mergedWithMergeFunction = m1.merged(m2) { (kv1, kv2) => + throw new RuntimeException("Should not be reached.") + } + assertEquals(expected, mergedWithMergeFunction) + } +}
\ No newline at end of file