path: root/test/files/pos
Each entry: commit message (author, date, files changed, lines -removed/+added)
* SI-9135 Fix NPE, a regression in the pattern matcher (Jason Zaugg, 2015-02-05, 1 file, -0/+16)
  The community build discovered that #4252 introduced the possibility for a
  NullPointerException. The tree with a null type was a synthetic `Apply(<<matchEnd>>)`
  created by the pattern matcher. This commit adds a null check.
* Merge pull request #4252 from retronym/ticket/9050: SI-9050 Fix crasher with value classes, recursion (Lukas Rytz, 2015-02-03, 1 file, -0/+13)
* SI-9050 Fix crasher with value classes, recursion (Jason Zaugg, 2015-01-16, 1 file, -0/+13)
  From the "Substitution is hard to do" department. In 7babdab9a, TreeSymSubstitutor
  was modified to mutate the info of symbols defined in the tree, if that symbol's
  info referred to one of the `from` symbols in the substitution.

  It would have been more principled to create a cloned symbol with the updated info,
  and add that to the substitution. But I wasn't able to implement that correctly
  (let alone efficiently).

  The in-place mutation of the info of a symbol led to the crasher in this bug:
  a singleton type over that symbol ends up with a stale cached value of `underlying`.
  In the enclosed test case, this leads to a type error in the `SubstituteRecursion`
  of the extension methods phase.

  This commit performs a cleanup job at the end of `substituteSymbols` by invalidating
  the cache of any `SingleType`-s in the tree that refer to one of the mutated symbols.
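A self-contained toy sketch of the failure mode and the cleanup (the names `Sym`, `SingleTypeOf`, and `invalidateCache` are illustrative assumptions, not the compiler's actual substitution or `SingleType` machinery):

```scala
// Toy model: a "symbol" whose info can be mutated in place, and a singleton
// type that caches the symbol's underlying info on first access.
final class Sym(var info: String)

final class SingleTypeOf(val sym: Sym) {
  private var cached: String = null
  def underlying: String = {
    if (cached == null) cached = sym.info // populate the cache lazily
    cached
  }
  def invalidateCache(): Unit = cached = null
}

object StaleCacheDemo extends App {
  val sym = new Sym("C[A]")
  val tpe = new SingleTypeOf(sym)
  println(tpe.underlying)   // C[A]  -- the cache now holds the old info

  sym.info = "C[B]"         // in-place mutation, as during the substitution
  println(tpe.underlying)   // still C[A]: the stale cached value

  tpe.invalidateCache()     // the post-substitution cleanup the commit adds
  println(tpe.underlying)   // C[B]
}
```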
* Merge pull request #4214 from som-snytt/issue/5154: SI-5154 Parse leading literal brace in XML pattern (Jason Zaugg, 2015-01-22, 1 file, -0/+9)
* SI-5154 Parse leading literal brace in XML pattern (Som Snytt, 2014-12-16, 1 file, -0/+9)
  Don't consume a literal brace as a Scala pattern. Previously, a leading space
  would let the text parser `xText` handle it correctly instead.
* Merge pull request #4201 from mpociecha/fix-typos-in-docs-and-comments: Fix many typos in docs and comments (Grzegorz Kossakowski, 2015-01-14, 2 files, -2/+2)
* Fix many typos in docs and comments (mpociecha, 2014-12-14, 2 files, -2/+2)
  This commit corrects many typos found in scaladocs, comments and documentation.
  It should reduce a bit the number of PRs which fix a single typo.

  There are no changes in the 'real' code except one corrected name of a JUnit test
  method and some error messages in exceptions. In the case of typos in other method
  or field names etc., I just skipped them.

  Obviously this commit doesn't fix all existing typos. I just generated the list of
  potential typos in IntelliJ and looked through it quickly.
* Merge pull request #4199 from adriaanm/rebase-4193: SI-8999 Reduce memory usage in exhaustivity check (Adriaan Moors, 2014-12-18, 2 files, -0/+272)
* SI-8999 Reduce memory usage in exhaustivity check (Gerard Basler, 2014-12-12, 2 files, -0/+272)
  The OOM could occur when all models are forcibly expanded in the DPLL solver.
  The simplest solution would be to limit the number of returned models, but then
  we are back in non-determinism land (since the subset we get back depends on
  which models were found first). A better alternative is to create only the models
  that have corresponding counter examples.

  If this does not ring a bell, here's a longer explanation: the models we get from
  the DPLL solver need to be mapped back to counter examples. However, there's no
  precalculated mapping model -> counter example. Even worse, not every valid model
  corresponds to a valid counter example. The reason is that restricting the valid
  models further would, for example, require a quadratic number of additional clauses.
  So to keep the optimistic case fast (i.e., all cases are covered in a pattern match),
  the infeasible counter examples are filtered later.

  The DPLL procedure keeps the literals that do not contribute to the solution
  unassigned, e.g., for `(a \/ b)` only {a = true} or {b = true} is required and the
  other variable can have any value. This function does a smart expansion of the model
  and avoids models that have conflicting mappings. For example, in the case of the
  given set of symbols (taken from `t7020.scala`):

    "V2=2#16" "V2=6#19" "V2=5#18" "V2=4#17" "V2=7#20"

  One possibility would be to group the symbols by domain, but this would only work
  for equality tests and would not be compatible with type tests. Another observation
  leads to a much simpler algorithm: only one of these symbols can be set to true,
  since `V2` can at most be equal to one of {2,6,5,4,7}.

  TODO: How does this fare with mixed TypeConst/ValueConst assignments? When the
  assignments are e.g., V2=Int | V2=2 | V2=4, only the assignments to value constants
  are mutually exclusive.
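A toy sketch of the expansion idea (the `Model` representation and symbol names here are assumptions for illustration, not the solver's actual data structures): symbols left unassigned by the solver are expanded so that at most one of a mutually exclusive group, such as the `V2=...` symbols above, is true in any generated model.

```scala
object ModelExpansionSketch extends App {
  type Model = Map[String, Boolean]

  // Unassigned symbols that all constrain the same variable V2:
  // at most one of them can be true in a feasible counter example.
  val mutuallyExclusive = List("V2=2", "V2=4", "V2=5", "V2=6", "V2=7")

  // Expand a partial model into "none of them true" plus one model per
  // symbol set to true -- instead of all 2^5 naive combinations.
  def expand(partial: Model): List[Model] = {
    val noneTrue = partial ++ mutuallyExclusive.map(_ -> false)
    val oneTrue = mutuallyExclusive.map { chosen =>
      partial ++ mutuallyExclusive.map(s => s -> (s == chosen))
    }
    noneTrue :: oneTrue
  }

  expand(Map("V1=null" -> false)).foreach(println) // 6 models, not 32
}
```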
* Merge pull request #4122 from retronym/ticket/7459-2: SI-7459 Handle pattern binders used as prefixes in TypeTrees (Adriaan Moors, 2014-12-18, 5 files, -0/+66)
* SI-7459 Handle pattern binders used as prefixes in TypeTrees (Jason Zaugg, 2014-11-14, 5 files, -0/+66)
  Match translation was incorrect for:

    case t => new t.C
    case D(t) => new d.C

  We would end up with Types in TypeTrees referring to the wrong symbols, e.g.:

    // t7459a.scala
    ((x0$1: this.LM) => {
      case <synthetic> val x1: this.LM = x0$1;
      case4(){
        matchEnd3(new tttt.Node[Any]())
      };
      matchEnd3(x: Any){
        x
      }

  Or:

    // t7459b.scala
    ((x0$1: CC) => {
      case <synthetic> val x1: CC = x0$1;
      case4(){
        if (x1.ne(null))
          matchEnd3(new tttt.Node[Any]())
        else
          case5()
      };

  This commit:
  - Changes `bindSubPats` to traverse types, as well as terms, in search of
    references to bound symbols
  - Changes `Substitution` to reuse `Tree#substituteSymbols` rather than the
    home-brew substitution from `Tree`s to `Tree`s, if the `to` trees are all `Ident`s
  - extends `substIdentsForTrees` to substitute selections like `x1.caseField`
    into types.

  I had to dance around the awkward handling of "swatches" (exception handlers that
  can be implemented with JVM native type switches) by duplicating trees to avoid
  seeing the results of `substituteSymbols` in `caseDefs` after we abandon that
  approach if we detect the patterns are too complex late in the game.

  I also had to add an escape hatch for the "type selection from volatile type"
  check in the type checker. Without this, the translation of `pos/t7459c.scala`:

    case <synthetic> val x1: _$1 = (null: Test.Mirror[_]).universe;
    case5(){
      if (x1.isInstanceOf[Test.JavaUniverse])
        {
          <synthetic> val x2: _$1 with Test.JavaUniverse = (x1.asInstanceOf[_$1 with Test.JavaUniverse]: _$1 with Test.JavaUniverse);
          matchEnd4({
            val ju1: Test.JavaUniverse = x2;
            val f: () => x2.Type = (() => (null: x2.TypeTag[Nothing]).tpe);
            ..

  triggers that error at `x2.TypeTag`.
* Suppress match analysis under -Xno-patmat-analysis (Adriaan Moors, 2014-12-12, 2 files, -0/+160)
  NoSuppression doesn't suppress. FullSuppression does. Now using the latter when
  running under `-Xno-patmat-analysis`. I should really have tested. /me hides under
  a rock.
* Merge pull request #4078 from gbasler/topic/fix-analysis-budget: Avoid the `CNF budget exceeded` exception via smarter translation into CNF (Adriaan Moors, 2014-12-12, 1 file, -0/+33)
* Avoid the `CNF budget exceeded` exception via smarter translation into CNF (Gerard Basler, 2014-10-27, 1 file, -0/+33)
  The exhaustivity checks in the pattern matcher build a propositional formula that
  must be converted into conjunctive normal form (CNF) in order to be amenable to
  the following DPLL decision procedure. However, the simple conversion into CNF via
  negation normal form and Shannon expansion that was used has exponential worst-case
  complexity, and thus even simple problems could become intractable.

  A better approach is to use a transformation into an _equisatisfiable_ CNF formula
  (by generating auxiliary variables) that runs with linear complexity. The commonly
  known Tseitin transformation uses bi-implication. I have chosen an enhancement: the
  Plaisted transformation, which uses implication only; thus the resulting CNF formula
  has (on average) only half of the clauses of a Tseitin transformation.

  The Plaisted transformation uses the polarities of sub-expressions to figure out
  which part of the bi-implication can be omitted. However, if all sub-expressions
  have positive polarity (e.g., after transformation into negation normal form) then
  the conversion is rather simple, and the pseudo-normalization via NNF increases the
  chances that only one side of the bi-implication is needed.

  I implemented only optimizations that had a substantial effect on formula size:
  - formula simplification, extraction of multi argument operands
  - if a formula is already in CNF then the Tseitin/Plaisted transformation is omitted
  - Plaisted via NNF
  - omitted: (sharing of sub-formulas is also not implemented)
  - omitted: (clause subsumption)
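To make the auxiliary-variable idea concrete, here is a compact, self-contained sketch of a plain Tseitin-style conversion over a toy formula AST (not the pattern matcher's actual `Prop`/solver code, and using bi-implication rather than the Plaisted refinement described above): each compound sub-formula gets a fresh variable, and a constant number of clauses ties that variable to its operands, so the output grows linearly with the input.

```scala
import scala.collection.mutable

object TseitinSketch extends App {
  sealed trait Formula
  case class Var(name: String)           extends Formula
  case class Not(f: Formula)             extends Formula
  case class And(l: Formula, r: Formula) extends Formula
  case class Or(l: Formula, r: Formula)  extends Formula

  type Lit = (String, Boolean) // variable name and polarity
  type Clause = List[Lit]

  private var fresh = 0
  private def freshName(): String = { fresh += 1; s"aux$fresh" }
  private def neg(l: Lit): Lit = (l._1, !l._2)

  // Returns a literal equivalent to `f`, appending its defining clauses to `out`.
  def lit(f: Formula, out: mutable.Buffer[Clause]): Lit = f match {
    case Var(n) => (n, true)
    case Not(g) => neg(lit(g, out))
    case And(l, r) =>
      val (a, b, x) = (lit(l, out), lit(r, out), freshName())
      // x <-> (a /\ b), as three clauses
      out += List((x, false), a) += List((x, false), b) += List((x, true), neg(a), neg(b))
      (x, true)
    case Or(l, r) =>
      val (a, b, x) = (lit(l, out), lit(r, out), freshName())
      // x <-> (a \/ b), as three clauses
      out += List((x, false), a, b) += List((x, true), neg(a)) += List((x, true), neg(b))
      (x, true)
  }

  def toCNF(f: Formula): List[Clause] = {
    val out = mutable.Buffer[Clause]()
    val root = lit(f, out)
    List(root) :: out.toList // assert the root literal itself
  }

  toCNF(Or(And(Var("a"), Var("b")), Not(Var("c")))).foreach(println)
}
```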
* Merge pull request #4169 from retronym/ticket/9008: SI-9008 Fix regression with higher kinded existentials (Adriaan Moors, 2014-12-05, 1 file, -0/+5)
* SI-9008 Fix regression with higher kinded existentials (Jason Zaugg, 2014-12-03, 1 file, -0/+5)
  Allow a naked type constructor in an existential type if we are directly within
  a type application.

  Recently, 84d4671 changed nested context creation to avoid passing down the
  `TypeConstructorAllowed`, which led to missing kind errors in code like
  `type T[({type M = List})#M]`.

  However, when typechecking `T forSome { quantifiers }`, we create a nested context
  to represent the nested scope introduced for the quantifiers. But we need to
  propagate the `TypeConstructorAllowed` bit to the nested context to allow for
  higher kinded existentials.

  The enclosed tests show:
  - pos/t9008  well kinded application of an hk existential
  - neg/t9008  hk existential forbidden outside of type application
  - neg/t9008b kind error reported for hk existential

  Regressed in 84d4671.
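A minimal sketch of the shapes these tests describe (assumed from the descriptions above, not copied from the actual t9008 files):

```scala
// May need -language:existentials,higherKinds on 2.11 to silence feature warnings.
class Container[F[_]]

object HkExistentialShapes {
  // pos/t9008-like shape: the naked type constructor F is quantified and used
  // directly within a type application, so it is well kinded.
  type Ok = Container[F] forSome { type F[_] }

  // neg/t9008-like shape (kind error, kept commented out): the naked type
  // constructor escapes the type application.
  // type Bad = F forSome { type F[_] }
}
```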
* Merge pull request #4180 from som-snytt/issue/7683: SI-7683 Enable -Ystop-before:typer (Lukas Rytz, 2014-12-04, 4 files, -0/+43)
* SI-7683 Enable -Ystop-before:typer (Som Snytt, 2014-12-01, 4 files, -0/+43)
  Allows a plugin to run before typer without incurring typechecking. The test is
  that a plugin doesn't run when stopping after parser, and that the truncated
  compilation run also succeeds, since updating check files for the output of
  -Xshow-phases is tedious.
* SI-9018 Fix regression: cycle in LUBs (Jason Zaugg, 2014-12-03, 1 file, -0/+16)
  Regressed in 4412a92d, which admirably sought to impose some structure on the
  domain of depths, but failed to preserve an important part of said structure.

  When calculating LUBs and GLBs, the recursion depth is limited by propagating a
  decreasing depth parameter. Its initial value is the recursion limit, and is
  calculated from the maximum depth of the types fed into the calculation. Here are
  a few examples that give a flavour of this calculation:

  ```
  scala> class M[A]
  defined class M

  scala> class N extends M[M[M[M[A]]]]
  <console>:34: error: not found: type A
         class N extends M[M[M[M[A]]]]
                                 ^

  scala> class N extends M[M[M[M[Int]]]]
  defined class N

  scala> lubDepth(typeOf[N] :: Nil)
  res5: scala.reflect.internal.Depth = Depth(4)

  scala> type T = M[Int] with M[M[Int]]
  defined type alias T

  scala> lubDepth(typeOf[T] :: Nil)
  res7: scala.reflect.internal.Depth = Depth(3)
  ```

  One part of the LUB calculation, `lub0`, truncates the lub to `Any` when the depth
  dives below zero.

  Before 4412a92d:

      value  decr  incr
      -----  ----  ----
         -3    -3    -2   (= AnyDepth)
         -2    -3    -1
         -1    -2     0
          0    -1     1
          1     0     2
        ...

  After 4412a92d:

      value     decr      incr
      -------   -------   -------
      -MaxInt   -MaxInt   -MaxInt  (= AnyDepth)
            0   -MaxInt         1
            1         0         2
          ...

  The crucial difference that triggered the regression is that decrementing a depth
  of zero now goes to the sentinel value, `AnyDepth`, rather than to `-1`.

  This commit modifies `Depth` to allow it to represent any negative depth. It also
  switches the sentinel value for `AnyDepth`. Even though I don't believe it is
  needed, I have also allowed for `Depth.Zero.decr.decr.decr == Depth.AnyVal`, which
  was historically the case in 2.10.4.

  To better understand what was happening, I added tracing to the calculation and
  diffed the before and after: https://gist.github.com/retronym/ec59608eecc52bb497fa

  Notice that when `elimSub(ts, depth = 0)` recursively calls `lub`, it does so with
  the variant that calculates the allowable depth from the shape of the given types.
  We can then infinitely recurse.
  Before 4412a92d:

  ```
  |-- elimSub(depth = 0, ts = List(Comparable[_ >: TestObject.E.Value with String <: Comparable[_ >: TestObject.E.Valu
  |  |-- lub(depth = -1, ts = List(TestObject.E.Value with String, TestObject.C))
  |  |  |-- lub0(depth = -1, ts0 = List(TestObject.E.Value with String, TestObject.C))
  |  |  |  |-- elimSub(depth = -1, ts = List(TestObject.E.Value with String, TestObject.C))
  |  |  |  |== List(TestObject.E.Value with String, TestObject.C)
  |  |  |  |-- Truncating LUB to
  |  |  |  |== Any
  |  |  |== Any
  |  |== Any
  |== List(Comparable[_ >: TestObject.E.Value with String <: Comparable[_ >: TestObject.E.Value with String <: java.io
  |-- lub(depth = 0, ts = List(java.lang.type, java.lang.type))
  |  |-- lub0(depth = 0, ts0 = List(java.lang.type, java.lang.type))
  |  |  |-- elimSub(depth = 0, ts = List(java.lang.type, java.lang.type))
  |  |  |== List(java.lang.type)
  |  |== java.lang.type
  |== java.lang.type
  |-- elimSub(depth = 0, ts = List(Object, Object))
  |== List(Object)
  |-- elimSub(depth = 0, ts = List(Any, Any))
  |== List(Any)
  ```

  After 4412a92d:

  ```
  |-- elimSub(depth = 0, ts = List(Comparable[_ >: TestObject.E.Value with String <: Comparable[_ >: TestObject.E.Valu
  |  |-- lub(depth = _, ts = List(TestObject.E.Value with String, TestObject.C))
  |  |  |-- lub(depth = 3, ts = List(TestObject.E.Value with String, TestObject.C))
  |  |  |  |-- lub0(depth = 3, ts0 = List(TestObject.E.Value with String, TestObject.C))
  |  |  |  |  |-- elimSub(depth = 3, ts = List(TestObject.E.Value with String, TestObject.C))
  |  |  |  |  |== List(TestObject.E.Value with String, TestObject.C)
  |  |  |  |  |-- lub1(depth = 3, ts = List(TestObject.E.Value with String, TestObject.C))
  |  |  |  |  |  |-- elimSub(depth = 3, ts = List(scala.math.Ordered[TestObject.E.Value], scala.math.Orde
  |  |  |  |  |  |== List(scala.math.Ordered[TestObject.E.Value], scala.math.Ordered[TestObject.C])
  ```
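A toy sketch of the intended arithmetic after the fix (assumed from the description; the real `Depth` is a value class in `scala.reflect.internal`, and the concrete sentinel value chosen there may differ): only the sentinel absorbs increments and decrements, while ordinary depths may go negative instead of jumping to the sentinel.

```scala
object DepthArithmeticSketch extends App {
  // Sentinel meaning "unbounded": absorbs both incr and decr.
  val AnyDepth: Int = Int.MinValue

  def decr(d: Int): Int = if (d == AnyDepth) d else d - 1
  def incr(d: Int): Int = if (d == AnyDepth) d else d + 1

  // Decrementing zero yields -1 rather than the sentinel, so a lub at depth 0
  // cannot accidentally restart with a fresh, recalculated depth budget.
  assert(decr(0) == -1)
  assert(decr(AnyDepth) == AnyDepth)
  println("ok")
}
```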
* Merge pull request #4099 from retronym/ticket/7596: SI-7596 Curtail overloaded symbols during unpickling (Grzegorz Kossakowski, 2014-11-21, 6 files, -0/+65)
* SI-7596 Curtail overloaded symbols during unpickling (Jason Zaugg, 2014-11-06, 6 files, -0/+65)
  In code like:

    object O { val x = A; def x(a: Any) = ... }
    object P extends O.x.A

  the unpickler was using an overloaded symbol for `x` in the parent type of `P`.
  This led to compilation failures under separate compilation.

  The code that leads to this is in `Unpicklers`:

    def fromName(name: Name) = name.toTermName match {
      case nme.ROOT    => loadingMirror.RootClass
      case nme.ROOTPKG => loadingMirror.RootPackage
      case _           => adjust(owner.info.decl(name))
    }

  This commit filters the overloaded symbol based on its stability when unpickling
  a singleton type. That seemed a slightly safer place than in `fromName`.
* Merge pull request #4118 from retronym/ticket/5639: SI-5639 Fix spurious discarding of implicit import (Jason Zaugg, 2014-11-19, 5 files, -14/+29)
* SI-5639 Predicate bug fix on -Xsource:2.12 (Jason Zaugg, 2014-11-18, 1 file, -0/+1)
  To be conservative, I've predicated this fix on `-Xsource:2.12`. This is done in
  a separate commit to show that the previous fix passes the test suite, rather
  than just tests with `-Xsource:2.12`.
* SI-5639 Fix spurious discarding of implicit import (Jason Zaugg, 2014-11-09, 4 files, -14/+28)
  In Scala fa0cdc7b (just before 2.9.0), a regression in implicit search, SI-2866,
  was discovered by Lift and fixed. The nature of the regression was that an
  in-scope, non-implicit symbol failed to shadow an eponymous implicit import.

  The fix for this introduced `isQualifyingImplicit`, which discards in-scope
  implicits when the current `Context`'s scope contains a name-clashing entry.

  Incidentally, this proved to be a shallow solution, and we later improved
  shadowing detection in SI-4270 / 9129cfe9. That picked up cases where a locally
  defined symbol in an intervening scope shadowed an implicit.

  This commit includes the test case from the comments of SI-2866. Part of it is
  tested as a neg test (to test reporting of ambiguities), and the rest is tested
  in a run test (to test which implicits are picked).

  However, in the test case of SI-5639, we see that the scope lookup performed by
  `isQualifyingImplicit` is fooled by a "ghost" module symbol. The apparition I'm
  referring to is entered when `initializeFromClassPath` / `enterClassAndModule`
  encounters a class file named 'Baz.class', and speculatively enters _both_ a
  class and module symbol. AFAIK, this is done to defer parsing the class file to
  determine what is inside. If it happens to be a Java compiled class, the module
  symbol is needed to house the static members.

  This commit adds a check of `Symbol#exists`, which shines a torch (forces the
  info) in the direction of the ghost module symbol. In our test, this causes it to
  vanish, as we only need a class symbol for the Scala defined `class Baz`.

  The existing `pos` test for this actually did not exercise the bug; separate
  compilation is required. It was originally checked in to `pending` with this
  error, and then later moved to `pos` when someone noticed it was not failing.
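A minimal sketch of the shadowing rule that `isQualifyingImplicit` implements (an illustrative shape, assumed; it is not the SI-2866 or SI-5639 test case itself): an in-scope, non-implicit definition with the same name hides an implicit brought in by an import.

```scala
object Imported {
  implicit val conv: String = "from import"
}

object ShadowingDemo {
  def lookup(implicit s: String) = s

  def withImportOnly: String = {
    import Imported.conv
    lookup // implicit search finds the imported `conv`
  }

  def withShadowing: String = {
    import Imported.conv
    val conv = 42 // eponymous, non-implicit local definition
    // `Imported.conv` is now shadowed, so `lookup` with no argument would not
    // compile here; an explicit argument is required.
    lookup("explicit")
  }
}
```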
* Merge pull request #4116 from retronym/ticket/5413-2: SI-5413 Test for fixed owner-corruption bug with names/defaults (Lukas Rytz, 2014-11-10, 1 file, -0/+9)
* SI-5413 Test for fixed owner-corruption bug with names/defaults (Jason Zaugg, 2014-11-09, 1 file, -0/+9)
  Fixed in SI-8111 / 2.10.4.
* Merge pull request #4108 from retronym/ticket/7750: SI-7750 Test case for fixed spurious existential feature warning (Lukas Rytz, 2014-11-10, 2 files, -0/+9)
* SI-7750 Test case for fixed spurious existential feature warning (Jason Zaugg, 2014-11-07, 2 files, -0/+9)
  In 2.11.0-M8, the enclosed test issued the following feature warning:

    sandbox/t7750.scala:7: warning: the existential type
    LazyCombiner[_$1,_$2,_$3] forSome { type _$1; type _$2; type _$3 <: Growable[_$1] with Sizing },
    which cannot be expressed by wildcards,

  This went away in 2.11.0-RC1 after we turned off the problematic attempt to infer
  existential bounds based on the bounds of the corresponding type parameters.
* Merge pull request #4102 from retronym/ticket/8965: SI-8965 Account for corner case in "unrelated types" warning (Lukas Rytz, 2014-11-10, 2 files, -0/+8)
* SI-8965 Account for corner case in "unrelated types" warning (Jason Zaugg, 2014-11-07, 2 files, -0/+8)
  It's okay for the two types to LUB to something above `Object` if they both
  individually were its supertype.
* Merge pull request #4101 from adriaanm/sam-ex: [sammy] eta-expansion, overloading, existentials (Lukas Rytz, 2014-11-10, 8 files, -0/+60)
* [sammy] use correct type for method to override (Adriaan Moors, 2014-11-07, 3 files, -10/+12)
  Don't naively derive types for the single method's signature from the provided
  function's type, as it may be a subtype of the method's MethodType. Instead, once
  the sam class type is fully defined, determine the sam's info as seen from the
  class's type, and use those to generate the correct override.

  ```
  scala> Arrays.stream(Array(1, 2, 3)).map(n => 2 * n + 1).average.ifPresent(println)
  5.0

  scala> IntStream.range(1, 4).forEach(println)
  1
  2
  3
  ```

  Also, minimal error reporting. Can't figure out how to do it properly, but some
  reporting is better than crashing. Right? A test case that illustrates the
  necessity of the clumsy stop gap `if (block exists (_.isErroneous))` is enclosed
  as `sammy_error_exist_no_crash`. Added a TODO for repeated and by-name params.
* [sammy] eta-expansion, overloading (SI-8310) (Adriaan Moors, 2014-11-06, 6 files, -0/+58)
  Playing with Java 8 Streams from the repl showed we weren't eta-expanding, nor
  resolving overloading for SAMs. Also, the way Java uses wildcards to represent
  use-site variance stresses type inference past its bendiness point (due to
  excessive existentials).

  I introduce `wildcardExtrapolation` to simplify the resulting types (without
  losing precision): `wildcardExtrapolation(tp) =:= tp`. For example, the
  `MethodType` given by `def bla(x: (_ >: String)): (_ <: Int)` is both a subtype
  and a supertype of `def bla(x: String): Int`.

  Translating http://winterbe.com/posts/2014/07/31/java8-stream-tutorial-examples/
  into Scala shows most of this works, though we have some more work to do (see
  near the end).

  ```
  scala> import java.util.Arrays
  scala> import java.util.stream.Stream
  scala> import java.util.stream.IntStream

  scala> val myList = Arrays.asList("a1", "a2", "b1", "c2", "c1")
  myList: java.util.List[String] = [a1, a2, b1, c2, c1]

  scala> myList.stream.filter(_.startsWith("c")).map(_.toUpperCase).sorted.forEach(println)
  C1
  C2

  scala> myList.stream.filter(_.startsWith("c")).map(_.toUpperCase).sorted
  res8: java.util.stream.Stream[?0] = java.util.stream.SortedOps$OfRef@133e7789

  scala> Arrays.asList("a1", "a2", "a3").stream.findFirst.ifPresent(println)
  a1

  scala> Stream.of("a1", "a2", "a3").findFirst.ifPresent(println)
  a1

  scala> IntStream.range(1, 4).forEach(println)
  <console>:37: error: object creation impossible, since method accept in trait IntConsumer of type (x$1: Int)Unit is not defined
  (Note that Int does not match Any: class Int in package scala is a subclass of class Any in package scala, but method parameter types must match exactly.)
                IntStream.range(1, 4).forEach(println)
                                              ^

  scala> IntStream.range(1, 4).forEach(println(_: Int)) // TODO: can we avoid this annotation?
  1
  2
  3

  scala> Arrays.stream(Array(1, 2, 3)).map(n => 2 * n + 1).average.ifPresent(println(_: Double))
  5.0

  scala> Stream.of("a1", "a2", "a3").map(_.substring(1)).mapToInt(_.parseInt).max.ifPresent(println(_: Int)) // whoops!
  ReplGlobal.abort: Unknown type: <error>, <error> [class scala.reflect.internal.Types$ErrorType$, class scala.reflect.internal.Types$ErrorType$] TypeRef? false
  error: Unknown type: <error>, <error> [class scala.reflect.internal.Types$ErrorType$, class scala.reflect.internal.Types$ErrorType$] TypeRef? false
  scala.reflect.internal.FatalError: Unknown type: <error>, <error> [class scala.reflect.internal.Types$ErrorType$, class scala.reflect.internal.Types$ErrorType$] TypeRef? false
    at scala.reflect.internal.Reporting$class.abort(Reporting.scala:59)

  scala> IntStream.range(1, 4).mapToObj(i => "a" + i).forEach(println)
  a1
  a2
  a3
  ```
* Merge pull request #4100 from gourlaysama/wip/lint-build: SI-8954 Make @deprecated{Overriding,Inheritance} aware of @deprecated (Lukas Rytz, 2014-11-10, 3 files, -0/+53)
* SI-8954 Make @deprecated{Overriding,Inheritance} aware of @deprecated (Antoine Gourlay, 2014-11-06, 3 files, -0/+53)
  This makes sure that:
  - there is no warning for a @deprecated class inheriting a @deprecatedInheritance class
  - there is no warning for a @deprecated method overriding a @deprecatedOverriding method
  - there is no "deprecation won't work" warning when overriding a member of a
    @deprecatedInheritance class with a @deprecated member
  - the above works even if the classes/methods are indirectly deprecated
    (i.e. enclosed in something @deprecated)

  This should remove quite a few useless deprecation warnings from the library.
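A minimal sketch of the first case in that list (a shape assumed from the description, not the actual t8954 test files):

```scala
@deprecatedInheritance("extend NewBase instead", "2.11.0")
class OldBase

// Because the subclass is itself deprecated, the usual
// "inheritance from class OldBase is deprecated" warning should not be issued here.
@deprecated("use NewThing instead", "2.11.0")
class LegacyThing extends OldBase
```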
* Merge pull request #4083 from retronym/ticket/8947: SI-8947 Avoid cross talk between tag materializers and reify (Jason Zaugg, 2014-11-07, 2 files, -0/+42)
* SI-8947 Avoid cross talk between tag materializers and reify (Jason Zaugg, 2014-10-30, 2 files, -0/+42)
  After a macro has been expanded, the expandee and expansion are bidirectionally
  linked with tree attachments. Reify uses the back reference to replace the
  expansion with the expandee in the reified tree. It also has some special cases
  to replace calls to macros defined in scala-compiler.jar with
  `Predef.implicitly[XxxTag[T]]`. This logic lives in `Reshape`.

  However, the expansion of a macro may be `EmptyTree`. This is the case when a tag
  materializer macro fails. User-defined macros could also expand to `EmptyTree`.
  In the enclosed test case, the error message that the tag materializer issued
  ("cannot materialize class tag for unsplicable type") is not displayed, as the
  typechecker finds another means of making the surrounding expression typecheck.
  However, the macro engine attaches a backreference to the materializer macro on
  `EmptyTree`!

  Later, when `reify` reshapes a tree, every occurrence of `EmptyTree` will be
  replaced by a call to `implicitly`.

  This commit expands the domain of `CannotHaveAttrs`, which is mixed in to
  `EmptyTree`. It silently ignores all attempts to mutate attachments. Unlike
  similar code that discards mutations of its type and position, I have refrained
  from issuing a developer warning in this case, as silencing it would require
  adding a special case at every place that adds attachments.
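A toy model of the `CannotHaveAttrs` idea (the class names here are illustrative, not the compiler's actual `Tree`/attachment API): a single shared sentinel silently drops attempts to attach metadata, so later phases cannot mistake it for the expansion of one particular macro.

```scala
// Ordinary trees record attachments; the shared sentinel drops them.
class TreeLike {
  private var atts: List[Any] = Nil
  def attach(a: Any): this.type = { atts ::= a; this }
  def attachments: List[Any] = atts
}

object EmptyTreeLike extends TreeLike {
  // Many failed expansions share this one object, so recording a macro
  // back-reference on it would leak into unrelated parts of the tree.
  override def attach(a: Any): this.type = this
}

object AttachmentsDemo extends App {
  case class MacroExpandeeRef(id: Int)
  EmptyTreeLike.attach(MacroExpandeeRef(1))
  println(EmptyTreeLike.attachments) // List(): the mutation was silently ignored
}
```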
* SI-8962 Fix regression with skolems in pattern translation (Jason Zaugg, 2014-11-06, 1 file, -0/+31)
  During the refactoring of error reporting in 258d95c7b15, the result of
  `Context#reportError` was changed once we had switched to using a
  `ThrowingReporter`, which is still the case in post-typer phase typechecking.
  This was enough to provoke a type error in the enclosed test.

  Here's a diff of the typer log:

    % scalac-hash 258d95~1 -Ytyper-debug sandbox/test.scala 2>&1 | tee sandbox/good.log
    % scalac-hash 258d95 -Ytyper-debug sandbox/test.scala 2>&1 | tee sandbox/bad.log
    % diff -U100 sandbox/{good,bad}.log | gist --filename=SI-8962-typer-log.diff

  https://gist.github.com/retronym/3ccbe7e0791447d272a6

  `Context#reportError` happens to be used when deciding whether the type checker
  should be lenient when it hits a type error. In lenient mode (see
  `adaptMismatchedSkolems`), `adapt` retries after slackening the expected type by
  replacing GADT and existential skolems with wildcard types. This is to accommodate
  known type-incorrect trees generated by the pattern matcher.

  This commit restores the old semantics of `reportError`, which means it is `false`
  when a `ThrowingReporter` is being used.

  For those still reading, here are some more details. The trees of the `run2`
  example from the enclosed test. Notice that after typechecking, `expr` has a type
  containing GADT skolems. The pattern matcher assumes that calling the case field
  accessor `expr` on the temporary val `x2 : Let[A with A]` containing the scrutinee
  will result in a compatible expression, however it does not. Perhaps the pattern
  matcher should generate casts, rather than relying on the typer to turn a blind eye.

  ```
  [[syntax trees at end of typer]] // t8962b.scala
  ...
  def run2[A](nc: Outer2[A,A]): Outer2[Inner2[A],A] = nc match {
    case (expr: Outer2[Inner2[?A2 with ?A1],?A2 with ?A1])Let2[A with A]((expr @ _{Outer2[Inner2[?A2 with ?A1],?A2 with ?A1]}){Outer2[Inner2[?A2 with ?A1],?A2 with ?A1]}){Let2[A with A]} =>
      (expr{Outer2[Inner2[?A2 with ?A1],?A2 with ?A1]}: Outer2[Inner2[A],A]){Outer2[Inner2[A],A]}
  }{Outer2[Inner2[A],A]}

  [[syntax trees at end of patmat]] // t8962b.scala
  def run2[A](nc: Outer2[A,A]): Outer2[Inner2[A],A] = {
    case <synthetic> val x1: Outer2[A,A] = nc{Outer2[A,A]};
    case5(){
      if (x1.isInstanceOf[Let2[A with A]])
        {
          <synthetic> val x2: Let2[A with A] = (x1.asInstanceOf[Let2[A with A]]: Let2[A with A]){Let2[A with A]};
          {
            val expr: Outer2[Inner2[?A2 with ?A1],?A2 with ?A1] = x2.expr{<null>};
            matchEnd4{<null>}((expr{Outer2[Inner2[?A2 with ?A1],?A2 with ?A1]}: Outer2[Inner2[A],A]){Outer2[Inner2[A],A]}){<null>}
  ...
  ```
* Merge pull request #4092 from som-snytt/issue/5217: SI-5217 Companion privates in scope of class parms (Lukas Rytz, 2014-11-05, 1 file, -0/+17)
* SI-5217 Companion privates in scope of class parms (Som Snytt, 2014-11-04, 1 file, -0/+17)
  Test of the same. It progressed in 2.10.1.
* Merge pull request #4044 from retronym/ticket/5091: SI-5091 Move named-args cycle test from pending to neg (Lukas Rytz, 2014-11-05, 1 file, -0/+19)
* SI-6051 Test case, the issue seems to be fixed (Lukas Rytz, 2014-11-04, 1 file, -0/+19)
* Merge pull request #4036 from retronym/topic/opt-tail-calls: SI-8893 Restore linear perf in TailCalls with nested matches (Jason Zaugg, 2014-11-04, 1 file, -0/+129)
* SI-8893 Restore linear perf in TailCalls with nested matches (Jason Zaugg, 2014-10-10, 1 file, -0/+129)
  Compilation performance of the enclosed test regressed when the new pattern
  matcher was introduced, specifically when the tail call elimination phase was
  made aware of its tree shapes in cd3d34203.

  The code added in that commit detects an application to a tail label in order to
  treat recursive calls in the argument as in tail position. If the transform of
  that argument makes no change, it falls back to `rewriteApply`, which transforms
  the argument again (although this time in a non-tail-position context).

  This commit avoids the second transform by introducing a flag to `rewriteApply`
  to mark the arguments as pre-transformed.

  I don't yet see how that fixes the exponential performance, as on the surface it
  seems like a constant factor improvement. But the numbers don't lie, and we can
  now compile the test case in seconds, rather than before when it was running for
  more than 10 minutes.

  This test case was based on a code pattern generated by the Avro serializer macro
  in: https://github.com/paytronix/utils-open/tree/release/2014/ernststavrosgrouper
  The exponential performance in that context is visualized here:
  https://twitter.com/dridus/status/519544110173007872

  Thanks to @rmacleod2 for minimizing the problem.
* Merge pull request #4043 from retronym/ticket/3439-2: SI-3439 Fix use of implicit constructor params in super call (Jason Zaugg, 2014-11-02, 2 files, -0/+36)
* SI-5454 Test case for another ticket fixed by the previous commit (Jason Zaugg, 2014-10-10, 1 file, -0/+10)
* SI-3439 Fix use of implicit constructor params in super call (Jason Zaugg, 2014-10-10, 1 file, -0/+26)
  When typechecking the primary constructor body, the symbols of constructor
  parameters of a class are owned by the class's owner. This is done to make
  scoping work; you shouldn't be able to refer to class members in that position.

  However, other parts of the compiler weren't so happy about this arrangement.
  The enclosed test case shows that our checks for invalid, top-level implicits
  were spuriously triggered, and implicit search itself would fail. Furthermore,
  we had to hack `Run#compiles` to special case top-level early-initialized
  symbols. See SI-7264 / 86e6e9290.

  This commit:
  - introduces an intermediate local dummy term symbol which will act as the
    owner for constructor parameters and early initialized members
  - adds this to the `Run#symSource` map if it is top level
  - simplifies `Run#compiles` accordingly
  - tests this all in a top-level class, and one nested in another class.
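A minimal sketch of the kind of code involved (an assumed shape based on the summary, not the enclosed test itself): an implicit constructor parameter that must be found by implicit search while typechecking the superclass constructor call.

```scala
class Base(implicit val tag: String)

// Typechecking `extends Base` must be able to use Derived's own implicit
// constructor parameter to satisfy Base's implicit parameter, even though
// that parameter's symbol is not owned by Derived while the primary
// constructor is being checked.
class Derived(implicit tag: String) extends Base

object SuperCallDemo extends App {
  implicit val greeting: String = "hello"
  val d = new Derived
  println(d.tag) // hello
}
```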
* SI-8934 Fix whitebox extractor macros in the pres. compiler (Jason Zaugg, 2014-10-27, 3 files, -0/+31)
  The code that "aligns" patterns and extractors assumes that it can look at the
  type of the unapply method to figure out the arity of the extractor.

  However, the result type of a whitebox macro does not tell the whole story; only
  after expanding an application of that macro do we know the result type.

  During regular compilation, this isn't a problem, as the macro application is
  expanded to a call to a synthetic unapply:

    {
      class anon$1 {
        def unapply(tree: Any): Option[(Tree, List[Treed])]
      }
      new anon$1
    }.unapply(<unapply selector>)

  In the presentation compiler, however, we now use `-Ymacro-expand:discard`, which
  expands macros only to compute the type of the application (and to allow the
  macro to issue warnings/errors). The original application is retained in the
  typechecked tree, modified only by attributing the potentially-sharper type taken
  from the expanded macro. This was done to improve hyperlinking support in the IDE.

  This commit passes `sel.tpe` (which is the type computed by the macro expansion)
  to `unapplyMethodTypes`, rather than using the `finalResultType` of the unapply
  method.

  This is tested with a presentation compiler test (which closely mimics the
  reported bug), and with a pos test that also exercises `-Ymacro-expand:discard`.
  Prior to this patch, they used to fail with:

    too many patterns for trait api: expected 1, found 2
* SI-8900 Don't assert !isDelambdafyFunction, it may not be accurate (Lukas Rytz, 2014-10-15, 1 file, -0/+11)
  The implementations of isAnonymousClass, isAnonymousFunction, isDelambdafyFunction
  and isDefaultGetter check if a specific substring (e.g. "$lambda") exists in the
  symbol's name.

  SI-8900 shows an example where a class ends up with "$lambda" in its name even
  though it's not a delambdafy lambda class. In this case the conflict seems to be
  introduced by a macro. It is possible that the compiler itself never introduces
  such names, but in any case, the above methods should be implemented more robustly.

  This commit is a band-aid; it fixes one specific known issue, but there are many
  calls to the mentioned methods across the compiler which are potentially wrong.

  Thanks to Jason for the test case!
* SI-8894 dealias when looking at tuple components (Adriaan Moors, 2014-10-08, 1 file, -0/+12)
  Classic bait-and-switch: `isTupleType` dealiases, but `typeArgs` does not.
  When deciding with `isTupleType`, process using `tupleComponents`. Similar for
  other combos. We should really enforce this using extractors, and only decouple
  when performance is actually impacted.
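A small, self-contained illustration of the same bait-and-switch, observed from the outside through the public reflection API rather than the compiler internals the commit actually touches (assuming, as is normally the case, that the materialized `TypeTag` preserves the alias):

```scala
import scala.reflect.runtime.universe._

object Aliases {
  type Pair = (Int, String)
}

object DealiasDemo extends App {
  val tpe = typeOf[Aliases.Pair]
  println(tpe.typeArgs)         // List(): the alias itself carries no type args
  println(tpe.dealias.typeArgs) // List(Int, String): dealias first, then inspect
}
```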