path: root/test
Each entry below: commit message (author, date; files changed, -lines deleted/+lines added)
* Fix many typos in docs and comments (mpociecha, 2014-12-14; 22 files, -30/+30)

This commit corrects many typos found in scaladocs, comments and documentation. It should somewhat reduce the number of PRs that fix a single typo. There are no changes in the 'real' code except one corrected name of a JUnit test method and some error messages in exceptions; typos in other method or field names were skipped. Obviously this commit doesn't fix all existing typos: the list of potential typos was generated in IntelliJ and only reviewed quickly.

* Merge pull request #4078 from gbasler/topic/fix-analysis-budget (Adriaan Moors, 2014-12-12; 4 files, -0/+73)

Avoid the `CNF budget exceeded` exception via smarter translation into CNF.

* Avoid the `CNF budget exceeded` exception via smarter translation into CNF (Gerard Basler, 2014-10-27; 4 files, -0/+73)

The exhaustivity checks in the pattern matcher build a propositional formula that must be converted into conjunctive normal form (CNF) in order to be amenable to the following DPLL decision procedure. However, the simple conversion into CNF via negation normal form and Shannon expansion that was used has exponential worst-case complexity, and thus even simple problems could become intractable.

A better approach is to use a transformation into an _equisatisfiable_ CNF formula (by generating auxiliary variables) that runs with linear complexity. The commonly known Tseitin transformation uses bi-implication. I chose an enhancement: the Plaisted transformation, which uses implication only, so the resulting CNF formula has (on average) only half the clauses of a Tseitin transformation.

The Plaisted transformation uses the polarities of sub-expressions to figure out which part of the bi-implication can be omitted. However, if all sub-expressions have positive polarity (e.g., after transformation into negation normal form) then the conversion is rather simple, and the pseudo-normalization via NNF increases the chances that only one side of the bi-implication is needed.

I implemented only the optimizations that had a substantial effect on formula size:
- formula simplification, extraction of multi-argument operands
- if a formula is already in CNF then the Tseitin/Plaisted transformation is omitted
- Plaisted via NNF
- omitted: sharing of sub-formulas (not implemented)
- omitted: clause subsumption

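To make the idea concrete, here is a compact, hedged sketch of a Plaisted/Greenbaum-style translation for formulas already in negation normal form. It is not the compiler's actual code; the names (EquisatCnf, Formula, toCnf) are invented for illustration. Each compound subformula gets a fresh auxiliary variable and only the "auxiliary implies subformula" direction is emitted, which keeps the result equisatisfiable and linear in size:

```scala
object EquisatCnf {
  sealed trait Formula
  final case class Var(name: String)      extends Formula
  final case class Not(v: Var)            extends Formula // NNF: negation appears only on variables
  final case class And(fs: List[Formula]) extends Formula
  final case class Or(fs: List[Formula])  extends Formula

  type Literal = (String, Boolean) // (variable name, positive?)
  type Clause  = List[Literal]

  def toCnf(formula: Formula): List[Clause] = {
    var fresh = 0
    val clauses = List.newBuilder[Clause]

    // Returns a literal constrained to be true whenever the subformula is true.
    def lit(g: Formula): Literal = g match {
      case Var(n)      => (n, true)
      case Not(Var(n)) => (n, false)
      case And(fs) =>
        fresh += 1
        val aux = s"aux$fresh"
        fs.foreach(sub => clauses += List((aux, false), lit(sub))) // aux -> each conjunct
        (aux, true)
      case Or(fs) =>
        fresh += 1
        val aux = s"aux$fresh"
        clauses += ((aux, false) :: fs.map(lit))                   // aux -> at least one disjunct
        (aux, true)
    }

    clauses += List(lit(formula)) // assert the root formula
    clauses.result()
  }

  def main(args: Array[String]): Unit =
    // (a && b) || c  ==>  equisatisfiable clauses over a, b, c plus fresh aux variables
    toCnf(Or(List(And(List(Var("a"), Var("b"))), Var("c")))).foreach(println)
}
```
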
* Merge pull request #4182 from som-snytt/issue/multizero (Adriaan Moors, 2014-12-09; 5 files, -83/+102)

SI-9015 Reject 0x and minor parser cleanup

* SI-9015 Reject 0x and minor parser cleanup (Som Snytt, 2014-12-05; 5 files, -83/+102)

Only print error results. Show deprecated forms.

Test for rejected literals and clean up the parser. There was no negative test for what constitutes a legal literal. The ultimate goal is for the test to report all errors in one compilation.

This commit follows up the removal of the "1." syntax to simplify number parsing. It removes previous paulp code that contained the erstwhile complexity. A leading zero is not immediately put into the buffer; instead, the empty buffer is handled on evaluation. In particular, an empty buffer due to `0x` is a syntax error. The message for obsolete octal syntax is nuanced and deferred until evaluation by the parser, which is slightly simpler to reason about.

Improve the comment on the usage of base: the slice-and-dicey usage of base deserves a better comment. The difference is that `intVal` sees an empty char buffer for input `"0"`.

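As a loose illustration of the rule being tested (a hypothetical standalone check, not the compiler's scanner), a hex literal needs at least one digit after the `0x` prefix:

```scala
object HexLiteralCheck {
  // Hypothetical helper: accepts "0x"/"0X" literals only when digits follow the prefix.
  def parseHex(s: String): Either[String, Long] =
    if (!(s.startsWith("0x") || s.startsWith("0X"))) Left(s"not a hex literal: $s")
    else {
      val digits = s.drop(2)
      if (digits.isEmpty) Left("invalid literal: no digits after 0x") // the case SI-9015 rejects
      else
        try Right(java.lang.Long.parseLong(digits, 16))
        catch { case _: NumberFormatException => Left(s"invalid hex digits: $digits") }
    }

  def main(args: Array[String]): Unit = {
    println(parseHex("0x1F")) // Right(31)
    println(parseHex("0x"))   // Left(invalid literal: no digits after 0x)
  }
}
```
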
* Merge pull request #4169 from retronym/ticket/9008 (Adriaan Moors, 2014-12-05; 5 files, -0/+19)

SI-9008 Fix regression with higher kinded existentials

* SI-9008 Fix regression with higher kinded existentials (Jason Zaugg, 2014-12-03; 5 files, -0/+19)

Allow a naked type constructor in an existential type if we are directly within a type application.

Recently, 84d4671 changed nested context creation to avoid passing down the `TypeConstructorAllowed`, which led to missing kind errors in code like `type T[({type M = List})#M]`. However, when typechecking `T forSome { quantifiers }`, we create a nested context to represent the nested scope introduced for the quantifiers. But we need to propagate the `TypeConstructorAllowed` bit to the nested context to allow for higher kinded existentials.

The enclosed tests show:
- pos/t9008: well kinded application of an hk existential
- neg/t9008: hk existential forbidden outside of type application
- neg/t9008b: kind error reported for hk existential

Regressed in 84d4671.

* Merge pull request #4176 from mpociecha/flat-classpath2 (Grzegorz Kossakowski, 2014-12-05; 10 files, -14/+796)

The alternative, flat representation of classpath elements

* Add benchmarks to compare recursive and flat cp representations (mpociecha, 2014-12-05; 1 file, -0/+143)

The goal of these changes is to make it possible to:
- compare the efficiency and the contents of both classpath implementations (ClassPathImplComparator)
- examine memory consumption by creating many globals using a specified classpath (ClassPathMemoryConsumptionTester); this can be seen as a rough approximation of ScalaPresentationCompilers in Scala IDE when working with many projects

ClassPathMemoryConsumptionTester is placed in the main (not test) sources, so it has a properly configured boot classpath out of the box and is easy to use, e.g.:

  scala scala.tools.nsc.ClassPathMemoryConsumptionTester -YclasspathImpl:<implementation_to_test> -cp <some_cp> -sourcepath <some_sp> -requiredInstances 50 SomeFileToCompile.scala

At the end it waits for the "exit" command, so a profiler like JProfiler can be used to see how the given implementation behaves.

The flat classpath implementation is also set as the default one so it can be tested on Jenkins. This particular change must be reverted once all tests pass, because for now it's not desirable to make it permanently the default representation.

* Cleanup and refactoring - semicolons, unused or commented out code (mpociecha, 2014-12-05; 3 files, -3/+8)

This commit contains some minor changes made along the way while implementing the flat classpath.

The sample JUnit test that shows that all pieces of the JUnit infrastructure work correctly now uses the assert method from JUnit, as it should have from the beginning. Commented-out lines that were obviously dead have been removed; for the less obvious ones, TODOs were added so that someone can look at those places some day and clean them up. Some unnecessary semicolons and unused imports were removed as well, and many string concatenations using + were changed to string interpolation.

The unused, private walkIterator method was removed from ZipArchive. It seems it has been unused since this commit: https://github.com/scala/scala/commit/9d4994b96c77d914687433586eb6d1f9e49c520f However, an exception had to be added for the compatibility checker because it complained about this change.

There are also some trivial corrections/optimisations, like using the 'findClassFile' method instead of 'findClass' in combination with 'binary' to find the class file.

* Integrate flat classpath with the compiler (mpociecha, 2014-12-05; 4 files, -9/+273)

This commit integrates the whole flat classpath representation, built next to the recursive one as an alternative, with the compiler. From now on the flat classpath really works and can be turned on. There's a new flag -YclasspathImpl with two options: recursive (the default) and flat. It is needed for the dynamic dispatch to the particular classpath representation according to the chosen type.

A PathResolverFactory is added and used instead of a concrete path resolver implementation. It turned out that only a small subset of a path resolver's methods is used outside this class in the Scala sources. Therefore, PathResolverFactory returns an instance of a base interface, PathResolverResult, providing only those methods. PathResolverFactory, in combination with matches in some other places, ensures that everywhere the classpath is used we create/get the proper representation.

The classPath method in Global is also modified to use the dynamic dispatch. This is a very important change, as the return type changed to the base ClassFileLookup, which provides a subset of the old ClassPath public methods. This can be problematic for projects that use the explicit ClassPath type or public methods not provided via ClassFileLookup. The flat classpath was tested with sbt and Scala IDE and there were no problems. The sources of some other projects, e.g. the Scala plugin for IntelliJ, were also checked and there shouldn't be problems, but it would be better to verify these changes using the community build.

Scalap's Main.scala is changed to be able to use both implementations, as well as the flags related to the classpath implementation.

Classpath invalidation is modified to work properly with the old (recursive) classpath representation after the changes made in Global. An attempt to use invalidation with the flat classpath just throws an exception saying that the flat one currently doesn't support invalidation. That's also why partest's test for invalidation has been changed to always use the old implementation; an adequate TODO comment has been added to that file.

There's also a new partest test generating various dependencies (directories, zips and jars with sources and class files) and testing whether compiling and then running an application works correctly when these various types of entries are specified as -classpath and -sourcepath. It should be a good approximation of real use cases.

* Refactor scalap's main (mpociecha, 2014-12-01; 1 file, -2/+5)

The structure of scalap's Main has been refactored. EmptyClasspath is deleted; it looks like it has been unused since this commit: https://github.com/scala/scala/commit/e594fe58ef8116a4bd2560ad0a856ad58ae9db33 Classpath logging is also changed and now uses the asClassPathString method. One test had to be modified because of that, but it no longer depends on a particular representation. There are no changes in the way scalap works.

* Create dedicated path resolver for the flat classpath representation (mpociecha, 2014-11-30; 1 file, -0/+159)

This commit adds a dedicated FlatClassPathResolver that loads classpath entries as FlatClassPath. Most of the common logic from PathResolver for the old classpath has been moved to a separate base class that isn't dependent on a particular classpath representation. Thanks to that, it was possible to reuse it when creating an adequate path resolver for the flat classpath representation.

This change doesn't modify the way the compiler works. It also doesn't change anything from the perspective of someone who already uses PathResolver in a project or even extends it - at least as long as they don't need to use the flat classpath. There are also new JUnit tests, among other things comparing entries created using the old and the new classpath representations (whether the flat classpath created using the new path resolver returns the same entries as the recursive one).

* Add the flat classpath type aggregating flat classpath instances (mpociecha, 2014-11-30; 1 file, -0/+208)

This adds AggregateFlatClassPath - an equivalent of MergedClassPath from the old implementation. It is supposed to group classpath instances handling different files, be they directories, zips or jars. Unlike the old (recursive) implementation, there won't be a deep, nested hierarchy of classpath instances - just one root (the aggregate) and a flat structure of its children. AggregateFlatClassPath keeps classpath entries distinct and merges corresponding entries for class and source files into one entry. This is required because the SymbolLoaders class makes use of this kind of ClassRepresentation.

There are also unit tests checking whether AggregateFlatClassPath obtains the correct entries from the classpath instances specified in its constructor and whether it preserves ordering in the case of repeated entries. A test type of flat classpath backed by VirtualFiles is added so it's easy to check the real behaviour.

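As a rough, hedged illustration of that merging behaviour (distinct names, first occurrence wins, ordering preserved), here is a toy sketch; Entry and mergeEntries are invented for the example and are not the compiler's API:

```scala
object AggregateSketch {
  final case class Entry(name: String, origin: String)

  // Merge several classpath "parts", keeping the first entry seen for each name
  // and preserving the order in which names were first encountered.
  def mergeEntries(parts: List[List[Entry]]): List[Entry] = {
    val seen = scala.collection.mutable.LinkedHashMap.empty[String, Entry]
    for (part <- parts; e <- part) if (!seen.contains(e.name)) seen(e.name) = e
    seen.values.toList
  }

  def main(args: Array[String]): Unit = {
    val dir = List(Entry("Foo", "classes/"), Entry("Bar", "classes/"))
    val jar = List(Entry("Bar", "lib.jar"), Entry("Baz", "lib.jar"))
    mergeEntries(List(dir, jar)).foreach(println) // Foo and Bar from classes/, Baz from lib.jar
  }
}
```
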
* Merge pull request #4141 from Ichoran/issue/8970 (Lukas Rytz, 2014-12-04; 1 file, -0/+6)

SI-8970 hashCode of BigDecimal and Double do not match

* SI-8970 hashCode of BigDecimal and Double do not match (Rex Kerr, 2014-11-21; 1 file, -0/+6)

Switched isValidDouble (binary equivalence) to isDecimalDouble (decimal expansion equivalence), as people generally do not care about the slew of extra decimal digits off the end of the binary approximation to a decimal fraction (decimal equivalence is the standard now). Added a minimal unit test to verify the behavior.

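The property being protected here is Scala's cooperative-equality contract: values that compare equal with == must agree on ##. A minimal, hedged check of that contract (not the unit test added by the commit) could look like:

```scala
object HashContractCheck {
  def main(args: Array[String]): Unit = {
    val doubles = List(0.5, 0.1, 3.14159, 1e10)
    for (d <- doubles) {
      val bd = BigDecimal(d)
      // Cooperative equality: whenever the two compare equal, their ## must match.
      if (bd == d) assert(bd.## == d.##, s"hash mismatch for $d")
    }
    println("hash contract holds for the sampled values")
  }
}
```
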
* Merge pull request #4146 from Ichoran/issue/6519 (Lukas Rytz, 2014-12-04; 1 file, -3/+15)

SI-6519 PagedSeq is not lazy enough

* SI-6519 PagedSeq is not lazy enough (Rex Kerr, 2014-11-21; 1 file, -3/+15)

This was actually an issue with `length`, of all things: it wasn't stopping the scan once it was past the end of what was requested, and would just gratuitously read everything. Also fixed an overflow bug in isDefinedAt along the way (start + index > Int.MaxValue would always return true despite never working).

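A small, hedged demonstration of the laziness being restored (assumes Scala 2.11's scala.collection.immutable.PagedSeq; the `consumed` counter is only illustrative):

```scala
import scala.collection.immutable.PagedSeq

object PagedSeqLaziness {
  def main(args: Array[String]): Unit = {
    var consumed = 0
    val source = Iterator.from(0).map { i => consumed += 1; i }.take(1000000)
    val seq = PagedSeq.fromIterator(source)
    val firstTen = seq.slice(0, 10).toList
    // With the fix, taking a small slice should read only about a page of the
    // underlying iterator rather than all million elements.
    println(s"first ten: $firstTen, elements consumed: $consumed")
  }
}
```
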
* Merge pull request #4180 from som-snytt/issue/7683 (Lukas Rytz, 2014-12-04; 4 files, -0/+43)

SI-7683 Enable -Ystop-before:typer

* SI-7683 Enable -Ystop-before:typer (Som Snytt, 2014-12-01; 4 files, -0/+43)

Allows a plugin to run before typer without incurring typechecking. The test is that a plugin doesn't run when stopping after parser, and that the truncated compilation run also succeeds, since updating check files for the output of -Xshow-phases is tedious.

* Merge pull request #4185 from som-snytt/issue/9027 (Grzegorz Kossakowski, 2014-12-04; 2 files, -0/+34)

SI-9027 Parser no longer consumes space after multi XML elements

* SI-9027 Parser eagerly consumes space on multi XML elements (Som Snytt, 2014-12-03; 2 files, -0/+34)

Once the parser starts looking for more <elements>, it should still look ahead speculatively, leaving any remaining whitespace, including newlines, after the last element.

* Merge pull request #4061 from maxcom/SI-8924-list-iterator-oom (Jason Zaugg, 2014-12-04; 1 file, -0/+49)

SI-8924 don't hold reference to list in iterator

* SI-8924 don't hold reference to list in iterator (Maxim Valyanskiy, 2014-11-27; 1 file, -0/+49)

The current implementation of LinearSeqLike.iterator holds a reference to the complete sequence. This prevents garbage collection of List elements while a reference to the iterator is kept somewhere. This commit removes that reference from the Iterator implementation, which allows the List to be garbage collected while we iterate over its elements (when there are no other references to the List in the program).

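A minimal sketch of the idea (not the actual LinearSeqLike code): the iterator keeps only a cursor to the not-yet-visited tail, so cells that have been returned can be collected even while the iterator itself stays reachable.

```scala
// Sketch only: an iterator over a List that does not retain the list's head.
final class ListCursorIterator[A](private[this] var rest: List[A]) extends Iterator[A] {
  def hasNext: Boolean = rest.nonEmpty
  def next(): A = {
    val elem = rest.head
    rest = rest.tail // drop the reference to the cell we just returned
    elem
  }
}

object ListCursorIterator {
  def main(args: Array[String]): Unit = {
    val it = new ListCursorIterator(List(1, 2, 3))
    println(it.toList) // List(1, 2, 3)
  }
}
```
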
* Merge pull request #4164 from retronym/ticket/9003 (Adriaan Moors, 2014-12-03; 2 files, -0/+72)

SI-9003 Eagerly capture more potentially mutable binders

* SI-9003 Eagerly capture more potentially mutable binders (Jason Zaugg, 2014-11-26; 2 files, -0/+72)

This is a re-run of SI-5158 in two different contexts. As reported, the result of `unapply[Seq]` under name based pattern matching might not guarantee stability of results of calls to `_1`, `apply(i)`, etc, so we call these eagerly, and call them once only.

I also found a case in regular pattern matching that we hadn't accounted for: extracting elements of sequences (either from a case class or from an `unapplySeq`) may also be unstable. This commit changes `ExtractorTreeMaker` to force storage of such binders, even under `-optimize`. This parallels the change to `ProductExtractorTreeMaker` in 8ebe8e3e8.

I have added a special case for traditional `unapply` methods returning `Option`. This avoids a change for:

```
% cat test/files/run/t9003b.scala
object Single {
  def unapply(a: Any) = Some("")
}

object Test {
  def main(args: Array[String]): Unit = {
    "" match {
      case Single(x) => (x, x)
    }
  }
}
% qscalac -optimize -Xprint:patmat test/files/run/t9003b.scala 2>&1 | grep --context=5 get
      case <synthetic> val x1: Any = "";
      case5(){
        <synthetic> val o7: Some[String] = Single.unapply(x1);
        if (o7.isEmpty.unary_!)
          matchEnd4({
            scala.Tuple2.apply[String, String](o7.get, o7.get);
            ()
          })
        else
          case6()
      };
% scalac-hash v2.11.4 -optimize -Xprint:patmat test/files/run/t9003b.scala 2>&1 | grep --context=5 get
      case <synthetic> val x1: Any = "";
      case5(){
        <synthetic> val o7: Some[String] = Single.unapply(x1);
        if (o7.isEmpty.unary_!)
          matchEnd4({
            scala.Tuple2.apply[String, String](o7.get, o7.get);
            ()
          })
        else
          case6()
      };
```

* Merge pull request #4178 from retronym/ticket/9018 (Jason Zaugg, 2014-12-03; 1 file, -0/+16)

SI-9018 Fix regression: cycle in LUBs

* SI-9018 Fix regression: cycle in LUBs (Jason Zaugg, 2014-12-03; 1 file, -0/+16)

Regressed in 4412a92d, which admirably sought to impose some structure on the domain of depths, but failed to preserve an important part of said structure.

When calculating LUBs and GLBs, the recursion depth is limited by propagating a decreasing depth parameter. Its initial value is the recursion limit, and is calculated from the maximum depth of the types fed into the calculation. Here are a few examples that give a flavour of this calculation:

```
scala> class M[A]
defined class M

scala> class N extends M[M[M[M[A]]]]
<console>:34: error: not found: type A
       class N extends M[M[M[M[A]]]]
                               ^

scala> class N extends M[M[M[M[Int]]]]
defined class N

scala> lubDepth(typeOf[N] :: Nil)
res5: scala.reflect.internal.Depth = Depth(4)

scala> type T = M[Int] with M[M[Int]]
defined type alias T

scala> lubDepth(typeOf[T] :: Nil)
res7: scala.reflect.internal.Depth = Depth(3)
```

One part of the LUB calculation, `lub0`, truncates the lub to `Any` when the depth dives below zero.

Before 4412a92d:

```
------------------
 value  decr  incr
------------------
    -3    -3    -2 (= AnyDepth)
    -2    -3    -1
    -1    -2     0
     0    -1     1
     1     0     2
   ...
```

After 4412a92d:

```
-----------------------
   value    decr    incr
-----------------------
 -MaxInt -MaxInt -MaxInt (= AnyDepth)
       0 -MaxInt       1
       1       0       2
     ...
```

The crucial difference that triggered the regression is that decrementing a depth of zero now goes to the sentinel value, `AnyDepth`, rather than to `-1`.

This commit modifies `Depth` to allow it to represent any negative depth. It also switches the sentinel value for `AnyDepth`. Even though I don't believe it is needed, I have also allowed for `Depth.Zero.decr.decr.decr == Depth.AnyVal`, which was historically the case in 2.10.4.

To better understand what was happening, I added tracing to the calculation and diffed the before and after: https://gist.github.com/retronym/ec59608eecc52bb497fa

Notice that when `elimSub(ts, depth = 0)` recursively calls `lub`, it does so with the variant that calculates the allowable depth from the shape of the given types. We can then infinitely recurse.

Before 4412a92d:

```
|-- elimSub(depth = 0, ts = List(Comparable[_ >: TestObject.E.Value with String <: Comparable[_ >: TestObject.E.Valu
|   |-- lub(depth = -1, ts = List(TestObject.E.Value with String, TestObject.C))
|   |   |-- lub0(depth = -1, ts0 = List(TestObject.E.Value with String, TestObject.C))
|   |   |   |-- elimSub(depth = -1, ts = List(TestObject.E.Value with String, TestObject.C))
|   |   |   |== List(TestObject.E.Value with String, TestObject.C)
|   |   |   |-- Truncating LUB to
|   |   |   |== Any
|   |   |== Any
|   |== Any
|== List(Comparable[_ >: TestObject.E.Value with String <: Comparable[_ >: TestObject.E.Value with String <: java.io
|-- lub(depth = 0, ts = List(java.lang.type, java.lang.type))
|   |-- lub0(depth = 0, ts0 = List(java.lang.type, java.lang.type))
|   |   |-- elimSub(depth = 0, ts = List(java.lang.type, java.lang.type))
|   |   |== List(java.lang.type)
|   |== java.lang.type
|== java.lang.type
|-- elimSub(depth = 0, ts = List(Object, Object))
|== List(Object)
|-- elimSub(depth = 0, ts = List(Any, Any))
|== List(Any)
```

After 4412a92d:

```
|-- elimSub(depth = 0, ts = List(Comparable[_ >: TestObject.E.Value with String <: Comparable[_ >: TestObject.E.Valu
|   |-- lub(depth = _, ts = List(TestObject.E.Value with String, TestObject.C))
|   |   |-- lub(depth = 3, ts = List(TestObject.E.Value with String, TestObject.C))
|   |   |   |-- lub0(depth = 3, ts0 = List(TestObject.E.Value with String, TestObject.C))
|   |   |   |   |-- elimSub(depth = 3, ts = List(TestObject.E.Value with String, TestObject.C))
|   |   |   |   |== List(TestObject.E.Value with String, TestObject.C)
|   |   |   |   |-- lub1(depth = 3, ts = List(TestObject.E.Value with String, TestObject.C))
|   |   |   |   |   |-- elimSub(depth = 3, ts = List(scala.math.Ordered[TestObject.E.Value], scala.math.Orde
|   |   |   |   |   |== List(scala.math.Ordered[TestObject.E.Value], scala.math.Ordered[TestObject.C])
```

* Remove completely commented-out test/run file (Guy Dickinson, 2014-12-01; 1 file, -373/+0)

* SI-8502 Improve resilience to absent packages (Jason Zaugg, 2014-11-28; 4 files, -7/+52)

When unpickling a class, we create stub symbols for references to classes absent from the current classpath. If these references only appear in method signatures that aren't called, we can proceed with compilation. This is in line with javac.

We're getting better at this, but there are still some gaps. This bug is about the behaviour when a package is completely missing, rather than just a single class within that package. To make this work we have to add two special cases to the unpickler:
- When unpickling a `ThisType`, convert a `StubTermSymbol` into a `StubTypeSymbol`. We hit this when unpickling `ThisType(missingPackage)`.
- When unpickling a reference to `<owner>.name` where `<owner>` is a stub symbol, don't call info on that owner, but rather allow the enclosing code in `readSymbol` to fall through and create a stub for the member.

The test case was distilled from a problem that a Spray user encountered when Akka was missing from the classpath. Two existing test cases have progressed, and the checkfiles are accordingly updated.

* SI-8946 Disable flaky test for reflection memory leak (Jason Zaugg, 2014-11-28; 1 file, -0/+0)

It passed in PR validation but failed in a later run. There are some clever ideas bouncing around to make it stable, but in the meantime I'll shunt it into disabled.

* Fixes memory leak when using reflection (Tim Harper, 2014-11-22; 1 file, -0/+29)

References to Threads would be retained long after their termination if reflection was used in them. This led to a steady, long-running memory leak in applications using reflection in thread pools.

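For context, a hedged sketch of the usage pattern that leaked (a hypothetical example, not the reporter's code): runtime reflection performed on short-lived worker threads, where per-thread reflection state used to keep the terminated Thread reachable.

```scala
import java.util.concurrent.{Executors, TimeUnit}
import scala.reflect.runtime.{universe => ru}

object ReflectionInPool {
  def main(args: Array[String]): Unit = {
    val pool = Executors.newFixedThreadPool(4)
    (1 to 100).foreach { _ =>
      pool.execute(new Runnable {
        def run(): Unit = {
          // Runtime reflection on a worker thread; before the fix, state keyed
          // on the current thread could outlive the thread's termination.
          val tpe = ru.typeOf[List[Int]]
          tpe.members // force some reflective work
          ()
        }
      })
    }
    pool.shutdown()
    pool.awaitTermination(1, TimeUnit.MINUTES)
  }
}
```
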
* Merge pull request #4121 from retronym/ticket/5938 (Grzegorz Kossakowski, 2014-11-21; 1 file, -0/+35)

SI-5938 Test for a FSC bug with mixin, duplicate static forwarders

* SI-5938 Test for a FSC bug with mixin, duplicate static forwarders (Jason Zaugg, 2014-11-09; 1 file, -0/+35)

Under resident compilation, we were getting multiple copies of static forwarders created for mixed in methods. It seems like the bug was fixed as a by-product of #4040. This commit adds a test to show this. I've confirmed that the test fails appropriately with 2.11.4.

For future reference, before I figured out how to write the test for this one (test/resident doesn't seem to let you run the code after compiling it), I was using bash to test as follows:

```
(export V=2.11.x; (scalac-hash $V sandbox/t5938_1.scala; (for i in 1 2; do echo sandbox/t5938.scala; done; printf '\n') | scalac-hash $V -Xresident); stty echo; scala-hash $V X ; echo ':javap -public X' | scala-hash $V);
```

* Merge pull request #4099 from retronym/ticket/7596 (Grzegorz Kossakowski, 2014-11-21; 6 files, -0/+65)

SI-7596 Curtail overloaded symbols during unpickling

* SI-7596 Curtail overloaded symbols during unpickling (Jason Zaugg, 2014-11-06; 6 files, -0/+65)

In code like:

```
object O {
  val x = A
  def x(a: Any) = ...
}

object P extends O.x.A
```

the unpickler was using an overloaded symbol for `x` in the parent type of `P`. This led to compilation failures under separate compilation.

The code that leads to this is in `Unpicklers`:

```
def fromName(name: Name) = name.toTermName match {
  case nme.ROOT    => loadingMirror.RootClass
  case nme.ROOTPKG => loadingMirror.RootPackage
  case _           => adjust(owner.info.decl(name))
}
```

This commit filters the overloaded symbol, based on its stability, when unpickling a singleton type. That seemed a slightly safer place than in `fromName`.

* Merge pull request #4115 from retronym/ticket/8597 (Grzegorz Kossakowski, 2014-11-21; 7 files, -1/+90)

SI-8597 Improved pattern unchecked warnings

* SI-8597 Improved pattern unchecked warnings (Jason Zaugg, 2014-11-09; 7 files, -1/+90)

The spec says that `case _: List[Int]` should always issue an unchecked warning:

> Types which are not of one of the forms described above are
> also accepted as type patterns. However, such type patterns
> will be translated to their erasure (§3.7). The Scala compiler
> will issue an "unchecked" warning for these patterns to flag
> the possible loss of type-safety.

But the implementation goes a little further to omit warnings based on the static type of the scrutinee. As a trivial example:

    def foo(s: Seq[Int]) = s match { case _: List[Int] => }

need not issue this warning.

These discriminating unchecked warnings are the domain of `CheckabilityChecker`. Let's deconstruct the reported bug:

    def nowarn[T] = (null: Any) match { case _: Some[T] => }

We used to determine that if the first case matched, the scrutinee type would be `Some[Any]` (`Some` is covariant). If this statically matches `Some[T]` in a pattern context, we don't need to issue an unchecked warning. But our blanket use of `existentialAbstraction` in `matchesPattern` loosened the pattern type to `Some[Any]`, and the scrutinee type was deemed compatible.

I've added a new method, `scrutConformsToPatternType`, which replaces pattern type variables by wildcards but leaves other abstract types intact in the pattern type. We have to use this inside `CheckabilityChecker` only. If we were to make `matchesPattern` stricter in the same way, tests like `pos/t2486.scala` would fail.

I have introduced a new symbol test to (try to) identify pattern type variables introduced by `typedBind`. It's not pretty, and it might be cleaner to reserve a new flag for these.

I've also included a test variation exercising nested matches. The pattern type of the inner case can't, syntactically, refer to the pattern type variable of the enclosing case. If it could, we would have to be more selective in our wildcarding in `ptMatchesPatternType` by restricting ourselves to type variables associated with the closest enclosing `CaseDef`.

As some further validation of the correctness of this patch, four stray warnings have been teased out of neg/unchecked-abstract.scala.

I also had to change `typeArgsInTopLevelType` to extract the type arguments of `Array[T]` if `T` is an abstract type. This avoids the "Checkability checker says 'Uncheckable', but uncheckable type cannot be found" warning and the consequent overly lenient analysis. Without this change, the warning was suppressed for:

    def warnArray[T] = (null: Any) match { case _: Array[T] => }

* Merge pull request #4123 from retronym/ticket/8253 (Jason Zaugg, 2014-11-20; 2 files, -0/+54)

SI-8253 Fix incorrect parsing of <elem xmlns={f("a")}/>

* SI-8253 Fix incorrect parsing of <elem xmlns={f("a")}/> (Jason Zaugg, 2014-11-10; 2 files, -0/+54)

The spliced application was placed in the `attrMap` in `SymbolicXMLBuilder` and later incorrectly matched by a pattern intended only to match:

    xml.Text(s)

That attribute value is generated by parsing:

    <elem xmlns='a'/>

So the net effect was that the two fragments of XML were identical!

This commit sharpens up the match to really look for a syntactic `_root_.scala.xml.Text("...")`. The test just prints the parse trees of a variety of cases, as we should not test the modularized XML library in scala/scala.

* Merge pull request #4118 from retronym/ticket/5639 (Jason Zaugg, 2014-11-19; 12 files, -14/+184)

SI-5639 Fix spurious discarding of implicit import

* SI-5639 Predicate bug fix on -Xsource:2.12 (Jason Zaugg, 2014-11-18; 4 files, -0/+33)

To be conservative, I've predicated this fix on `-Xsource:2.12`. This is done in a separate commit to show that the previous fix passes the test suite, rather than just the tests run with `-Xsource:2.12`.

* SI-5639 Fix spurious discarding of implicit import (Jason Zaugg, 2014-11-09; 8 files, -14/+151)

In Scala fa0cdc7b (just before 2.9.0), a regression in implicit search, SI-2866, was discovered by Lift and fixed. The nature of the regression was that an in-scope, non-implicit symbol failed to shadow an eponymous implicit import.

The fix for this introduced `isQualifyingImplicit`, which discards in-scope implicits when the current `Context`'s scope contains a name-clashing entry. Incidentally, this proved to be a shallow solution, and we later improved shadowing detection in SI-4270 / 9129cfe9. That picked up cases where a locally defined symbol in an intervening scope shadowed an implicit.

This commit includes the test case from the comments of SI-2866. Part of it is tested as a neg test (to test reporting of ambiguities), and the rest is tested in a run test (to test which implicits are picked).

However, in the test case of SI-5639, we see that the scope lookup performed by `isQualifyingImplicit` is fooled by a "ghost" module symbol. The apparition I'm referring to is entered when `initializeFromClassPath` / `enterClassAndModule` encounters a class file named 'Baz.class' and speculatively enters _both_ a class and a module symbol. AFAIK, this is done to defer parsing the class file to determine what is inside. If it happens to be a Java compiled class, the module symbol is needed to house the static members.

This commit adds a condition on `Symbol#exists`, which shines a torch (forces the info) in the direction of the ghost module symbol. In our test, this causes it to vanish, as we only need a class symbol for the Scala defined `class Baz`.

The existing `pos` test for this actually did not exercise the bug; separate compilation is required. It was originally checked in to `pending` with this error, and then later moved to `pos` when someone noticed it was not failing.

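For reference, a tiny hypothetical example of the SI-2866 shadowing rule that `isQualifyingImplicit` implements (not one of the test files): an in-scope, non-implicit `s` hides the implicit `s` brought in by the import.

```scala
object Imported {
  implicit val s: String = "from import"
}

object ShadowingExample {
  def lookup(implicit x: String): String = x

  def demo: Int = {
    import Imported._
    val s = 42 // non-implicit local `s` shadows the imported implicit `s`
    // lookup  // would now fail to find an implicit String, which is the intended behaviour
    s
  }
}
```
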
* Merge pull request #4051 from heathermiller/repl-cp-fix2 (Jason Zaugg, 2014-11-18; 2 files, -0/+109)

SI-6502 Reenables loading jars into the running REPL (regression in 2.10)

* Adds tests that programmatically generate jars (Heather Miller, 2014-11-05; 2 files, -0/+109)

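A hedged sketch of what programmatically generating a jar can look like (a hypothetical helper, not the added test code; uses only java.util.jar):

```scala
import java.io.{File, FileOutputStream}
import java.util.jar.{JarEntry, JarOutputStream, Manifest}

object MakeJar {
  // Hypothetical helper: write the given (path -> bytes) entries into a jar file.
  def writeJar(target: File, entries: Map[String, Array[Byte]]): Unit = {
    val out = new JarOutputStream(new FileOutputStream(target), new Manifest())
    try entries.foreach { case (name, bytes) =>
      out.putNextEntry(new JarEntry(name))
      out.write(bytes)
      out.closeEntry()
    } finally out.close()
  }

  def main(args: Array[String]): Unit =
    writeJar(new File("hello.jar"), Map("hello.txt" -> "hi".getBytes("UTF-8")))
}
```
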
* Merge pull request #4068 from lrytz/t8925 (Jason Zaugg, 2014-11-18; 4 files, -1/+35)

SI-8925, SI-7407 test cases, fixed in new backend

* SI-8925, SI-7407 test cases, fixed in new backend (Lukas Rytz, 2014-11-04; 4 files, -1/+35)

Given that the issues are old (2.10) and we move to the new backend in 2.12, I don't plan to fix them in GenASM. The test case for 7407 existed already; this commit just adds `-Yopt:l:none` to the flags to make sure no dead code is eliminated.

* Merge pull request #4075 from som-snytt/issue/8835-junit (Grzegorz Kossakowski, 2014-11-17; 9 files, -225/+150)

SI-8835 Iterator tests can be junit

* SI-8835 Update iterator tests (Som Snytt, 2014-11-13; 1 file, -50/+35)

Addressing feedback from the collections test czar, updated a few of the primitive tests. Avoided testing addition and just asserted. Updated the indexOf test to test Iterator.indexOf.

* SI-8835 Iterator tests can be junit (Som Snytt, 2014-11-11; 9 files, -225/+165)

This loses no generality or convenience, but helps reduce the number of files in test/files and may reduce compile times for the test suite. This commit includes the fix from #3963 and an extra test of that fix that ensures the stack doesn't grow on chained drops.

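For flavour, a hypothetical JUnit-style iterator test in the spirit of this move (assumes JUnit 4 on the test classpath; not the actual suite):

```scala
import org.junit.Test
import org.junit.Assert._

class IteratorExampleTest {
  @Test def indexOfFindsTheElement(): Unit =
    assertEquals(2, Iterator(1, 2, 3, 4).indexOf(3))

  @Test def chainedDropsAdvanceCorrectly(): Unit = {
    // Many chained drops should behave like one big drop.
    val it = (1 to 10).iterator
    val dropped = (1 to 5).foldLeft(it)((i, _) => i.drop(1))
    assertEquals(6, dropped.next())
  }
}
```
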