Diffstat (limited to 'book/src/main/scalatex/indepth')
-rw-r--r--  book/src/main/scalatex/indepth/AdvancedTechniques.scalatex    304
-rw-r--r--  book/src/main/scalatex/indepth/CompilationPipeline.scalatex   210
-rw-r--r--  book/src/main/scalatex/indepth/DesignSpace.scalatex           241
-rw-r--r--  book/src/main/scalatex/indepth/JavaAPIs.scalatex               46
-rw-r--r--  book/src/main/scalatex/indepth/JavascriptInterop.scalatex       1
-rw-r--r--  book/src/main/scalatex/indepth/SemanticDifferences.scalatex   267
6 files changed, 1069 insertions, 0 deletions
diff --git a/book/src/main/scalatex/indepth/AdvancedTechniques.scalatex b/book/src/main/scalatex/indepth/AdvancedTechniques.scalatex
new file mode 100644
index 0000000..66b8b10
--- /dev/null
+++ b/book/src/main/scalatex/indepth/AdvancedTechniques.scalatex
@@ -0,0 +1,304 @@
+@import book.BookData._
+@p
+ @sect.ref{Getting Started} walks you through how to set up some basic Scala.js applications, but that only scratches the surface of the things you can do with Scala.js. Apart from being able to use the same techniques you're used to in Scala-JVM in the browser, Scala.js opens up a whole range of possibilities and novel techniques that are not found in typical Scala-JVM applications.
+
+@p
+ Although these techniques may technically be possible on the JVM, very few Scala-JVM applications are built in a way that can take advantage of them. Most Scala-JVM code runs on back-end servers which have a completely different structure from the client-side apps that Scala.js allows.
+@p
+ This client-side, user-interface-focused code lends itself to completely different design patterns from those used to develop server-side code. This section will explore a number of these techniques:
+
+@ul
+ @li
+ @sect.ref("Functional-Reactive UIs", "Functional-reactive user interfaces")
+ @li
+ @sect.ref("Asynchronous Workflows", "Asynchronous user-interation workflows")
+
+@p
+ One note is that these are "Techniques" rather than "Libraries" because they have not been packaged up nicely enough for you to use them out-of-the-box just by adding a dependency somewhere. Thus, each requires some small amount of boilerplate before use, though the amount of boilerplate is fixed: it does not grow with the size of your program, and it gives you a chance to tweak the technique to do exactly what you want.
+
+@val advanced = wd/'examples/'demos/'src/'main/'scala/'advanced
+
+@sect{Functional-Reactive UIs}
+ @p
+ @lnk("Functional-reactive Programming", "http://en.wikipedia.org/wiki/Functional_reactive_programming") (FRP) is a field with encompasses several things:
+
+ @ul
+ @li
+ @b{Discrete}: Handling of first-class event-streams like in @lnk("RxJS", "https://github.com/Reactive-Extensions/RxJS")
+ @li
+ @b{Continuous}: Handling of first-class signals, like in @lnk("Elm", "http://elm-lang.org/learn/What-is-FRP.elm")
+
+ @sect{Why FRP}
+ @p
+ The value proposition of FRP is that in a "traditional" program, when an event occurs, events and changes propagate throughout the program in an ad-hoc manner. An event-listener may trigger additional events, call some callbacks, or set some mutable variables that subsequent code will read and react to.
+
+ @p
+ This works, but the ad-hoc nature is both freeing and limiting. You are free to do whatever you want in response to any action, but in return the developer who maintains your code (e.g. yourself 6 months from now) has no idea what your code is doing in response to any action: the possible consequence of an action is basically "Anything"!
+
+ @p
+ Furthermore, because the propagation is ad-hoc, there is no way for the code to help ensure that you are propagating changes in a "valid" manner: it is thus easy for programmer errors to result in changes or events being incorrectly propagated. This most often results in data falling out of sync: a UI widget may forget to update when an action is taken, showing an inconsistent state to the user and ultimately leaving them confused.
+
+ @p
+ FRP basically structures these event- or change-propagations as first-class values within the program, either as an @hl.scala{EventSource[T]} type that represents a discrete source of individual @hl.scala{T} events, or as a @hl.scala{Signal[T]} type which represents a continuous time-varying value @hl.scala{T}. This comes at some cost within the program: you now have to program using these @hl.scala{EventSource}s or @hl.scala{Signal}s, rather than just ad-hoc running callbacks or listening-to/triggering events all over the place. In exchange, you get more powerful tools to work with these values, making it easy for the library to e.g. ensure that changes always propagate correctly throughout your program, and that all values are always kept in sync.
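+
+ @p
+ To make the distinction concrete, here is a purely illustrative sketch of what these two kinds of values might look like as Scala types; the names mirror the paragraph above and are not taken from any particular library:
+
+ @hl.scala
+   // Purely illustrative signatures, not from any particular library
+   trait EventSource[T]{
+     // register a callback to run for each discrete event
+     def foreach(f: T => Unit): Unit
+   }
+   trait Signal[T]{
+     // the current value of the continuously-varying signal
+     def now: T
+     // register a callback to run every time the value changes
+     def foreach(f: T => Unit): Unit
+   }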
+
+ @sect{FRP with Scala.Rx}
+ @p
+ @lnk("Scala.Rx", "https://github.com/lihaoyi/scala.rx") is a change-propagation library that implements the @b{Continuous} style of FRP. To begin with, we need to include it in our @code{build.sbt} dependencies:
+
+ @hl.ref(wd/'examples/'demos/"build.sbt", "scalarx", "")
+
+ @p
+ Scala.Rx provides you with smart variables that automatically track dependencies between each other, such that if one smart variable changes, any values that depend on it re-compute immediately and automatically. The main primitives in Scala.Rx are:
+
+ @ul
+ @li
+ @b{Var}s: Smart variables that can be set manually, and automatically notify their dependents that they need to recompute
+ @li
+ @b{Rx}s: Smart values which are set as some computation of other @b{Rx}s or @b{Var}s, which recompute automatically when their dependencies change, and notify their dependents
+ @li
+ @b{Obs}s: Observers on either an @b{Rx} or a @b{Var}, which perform some action when the value they are watching changes
+
+ @p
+ @hl.scala{Var}s and @hl.scala{Rx}s roughly correspond to the idea of a @hl.scala{Signal} described earlier. The documentation for Scala.Rx goes into this in much more detail, so if you're curious you should read it. This section will jump straight into how to use Scala.Rx with Scala.js.
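+
+ @p
+ As a minimal sketch of how these three primitives fit together (using the Scala.Rx API as of the time of writing; an assumption rather than code from this book's examples):
+
+ @hl.scala
+   import rx._
+   val a = Var(1)                    // a smart variable we can set manually
+   val b = Rx{ a() * 2 }             // recomputes whenever `a` changes
+   val o = Obs(b){ println(b()) }    // performs an action whenever `b` changes
+   a() = 5                           // now b() == 10, and the Obs prints it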
+
+ @p
+ To begin with, let's set up our imports:
+
+ @hl.ref(advanced/"BasicRx.scala", "package advanced", "@JSExport")
+
+ @p
+ Here we are seeing the same @hl.scala{dom} and @hl.scala{scalatags} imports we saw in the hands-on tutorial, as well as a new @hl.scala{import rx._}, which brings all the Scala.Rx names into the local namespace.
+
+ @p
+ Scala.Rx does not "natively" bind to Scalatags, but integrating them yourself is simple enough that it's not worth putting into a separate library. Here's a simple integration:
+
+ @hl.ref(advanced/"BasicRx.scala", "implicit def")
+
+ @p
+ Scalatags requires that anything you want to embed in a Scalatags fragment be implicitly convertible to @hl.scala{Frag}; here we are providing one for any Scala.Rx @hl.scala{Rx[T]}s, as long as the @hl.scala{T} provided is itself convertible to a @hl.scala{Frag}. We call @hl.scala{r().render} to extract the "current" value of the @hl.scala{Rx}, and then set up an @hl.scala{Obs} that watches the @hl.scala{Rx}, replacing the previous value with the current one every time its value changes.
+
+ @p
+ Now that the set-up is out of the way, let's consider a simple HTML widget that lets you enter text in a @hl.html{<textarea>}, and keeps track of the number of characters, the number of words, and the average word length.
+
+ @split
+ @more
+ @hl.ref(advanced/"BasicRx.scala", "val txt =")
+
+ @less
+ @example(div, "advanced.BasicRx().main")
+
+ @p
+ This snippet sets up a basic data-flow graph. We have our @hl.scala{txt} @hl.scala{Var}, and a bunch of @hl.scala{Rx}s (@hl.scala{numChars}, @hl.scala{numWords}, @hl.scala{avgWordLength}) that are computed based on @hl.scala{txt}.
+
+ @p
+ Next, we construct our Scalatags fragment: a @hl.scala{textarea} tag with a listener that updates @hl.scala{txt}, and a @hl.scala{div} containing the @hl.scala{textarea} and a list containing the bound values of our @hl.scala{Rx}s.
+
+ @p
+ That's all we need to end up with a live-updating widget, which re-renders the necessary bits of the page when the contents of the text box changes! Note how the code basically flows top-to-bottom, like a batch-rendering program, but at the end of it we get a live widget. The code is much simpler than a similar widget built up using jQuery or Backbone.
+
+ @p
+ Furthermore, there is no chance for the parts of the DOM which are "live" to fall out of sync. There is no visible logic that handles the individual re-calculations and re-renders: that is all done by Scala.Rx and by our @hl.scala{rxFrag} implicit. Because we do not need to write code at each site to keep each individual @hl.scala{Rx} and DOM fragment in sync, there is no chance of the developer getting it wrong and ending up with an out-of-sync page.
+
+ @sect{More Rx}
+ @p
+ That was a pretty simple example to get you started. Let's look at a meatier example to see how we can use Scala.Rx to help structure our interactive web application:
+
+
+ @split
+ @more
+ @hl.ref(advanced/"BasicRx.scala", "val fruits =")
+
+ @less
+ @example(div, "advanced.BasicRx().main2")
+
+ @p
+ This is a basic re-implementation of the autocomplete widget we created in the chapter @sect.ref{Interactive Web Pages}, except done using Scala.Rx. Note that unlike the original implementation, we don't need to manage the clearing of the output area via @hl.scala{innerHTML = ""} and the re-rendering via @hl.scala{appendChild(...)}. All this is handled by the same @hl.scala{rxFrag} code we wrote earlier.
+
+ @p
+ Furthermore, this implementation is more efficient than the original: in the original, everything is re-rendered every time, which can be a problem if the number of things being rendered is large. In this implementation, re-rendering only happens when a fruit appears in or disappears from the list, and only for that particular fruit. For the bulk of the fruits, which did not experience any change in appearance, the DOM is left entirely untouched.
+
+ @p
+ Again, there is no chance for the developer to make a mistake updating things, because all this rendering and re-rendering is hidden from view inside the library.
+
+ @hr
+
+ @p
+ Hopefully this has given you a sense of how you can use Scala.Rx to help build complex, interactive web applications. The implementation is tricky, but the basic value proposition is clear: you get to write your code top-to-bottom, like the most old-fashioned static pages, and have it transformed by Scala.Rx into an interactive, always-consistent web app. By abstracting away the whole event-propagation, manual-updating process inside the library, we have ensured that there is no place where the developer can screw it up, and the application's UI will forever be in sync with its data.
+
+@sect{Asynchronous Workflows}
+ @p
+ In a traditional setting, Scala applications tend to have a mix of concurrency models: some spawn multiple threads and use thread-blocking operations or libraries, others do things with Actors or Futures, trying hard to stay non-blocking throughout, while most are a mix of these two paradigms.
+
+ @p
+ On Scala.js, things are different: multi-threaded concurrency is a non-starter, since Javascript engines are all single-threaded. As a result, there are virtually no blocking APIs in Javascript: all operations need to be asynchronous if you don't want them to freeze the user interface of the browser while the operation is happening. Scala.js uses standard Javascript APIs and is no different.
+
+ @p
+ However, Scala.js has much more powerful tools to work with than your typical Javascript libraries. The Scala standard library comes with a rich API for @sect.ref{Futures & Promises}, which are thankfully 100% asynchronous. Though this design was chosen for performance on the JVM, it perfectly fits our 100% asynchronous Javascript APIs. We have tools like @sect.ref{Scala-Async}, which works perfectly with Scala.js, and lets you create asynchronous computations in a much less confusing manner.
+
+ @sect{Futures & Promises}
+ @p
+ A Future represents an in-progress computation that may or may not have completed. It may encapsulate a web request, or an RPC, or a task happening on another thread. Futures are not a novel concept, and Scala provides a good built-in implementation that works well with Scala.js.
+
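+ @p
+ As a quick illustration of the basic API (the execution-context import below is an assumption: it is the queue-based context Scala.js provides for scheduling callbacks on the Javascript event loop):
+
+ @hl.scala
+   import scala.concurrent.Future
+   import scala.scalajs.concurrent.JSExecutionContext.Implicits.queue
+
+   val f: Future[Int] = Future{ 1 + 1 }            // an in-progress computation
+   f.foreach(result => println("got " + result))   // runs once the Future completes
+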
+ @p
+ To motivate this, let's consider a simple example application that:
+
+ @ul
+ @li
+ Takes as user input a comma-separated list of city-names
+ @li
+ Fetches the temperature in each city from @code{api.openweathermap.org}
+ @li
+ Displays the results when they are all back
+
+ @p
+ We'll work through a few implementations of this.
+
+ @p
+ To begin with, let's write the scaffolding code, that will display the input box, deal with the listeners, and all that:
+
+ @hl.ref(advanced/"Futures.scala", "val myInput")
+
+ @p
+ So far so good. The only thing that's missing here is the mysterious @hl.scala{handle} function, which is given the list of names and the @hl.scala{output} div, and must handle the Ajax requests, aggregate the results, and display them in @hl.scala{output}. Let's also define a small number of helper functions that we'll use later:
+
+ @hl.ref(advanced/"Futures.scala", "def urlFor", "def parseTemp")
+
+ @p
+ @hl.scala{urlFor} encapsulates the messy URL-construction logic that we need to make the Ajax call to the right place.
+
+ @hl.ref(advanced/"Futures.scala", "parseTemp", "def formatResults")
+
+ @p
+ @hl.scala{parseTemp} encapsulates the messy result-extraction logic that we need to get the data we want (current temperature, in celsius) out of the structured JSON return blob.
+
+ @hl.ref(advanced/"Futures.scala", "def formatResults", "def main")
+
+ @p
+ @hl.scala{formatResults} encapsulates the conversion of the final @hl.scala{(name, celsius)} data back into readable HTML.
+
+ @p
+ Overall, these helper functions do nothing special, but we're defining them first to avoid having to copy-&-paste code throughout the subsequent examples. Now that we've defined all the relevant scaffolding, let's walk through a few ways that we can implement the all-important @hl.scala{handle} method.
+
+ @def exampleDiv = div(height:="200px")
+
+ @sect{Direct Use of XMLHttpRequest}
+ @example(exampleDiv, "advanced.Futures().main0")
+ @hl.ref(advanced/"Futures.scala", "def handle0", "main")
+
+ @p
+ This is a simple solution that directly uses the @hl.scala{XMLHttpRequest} class that is available in Javascript in order to perform the Ajax call. Each time an Ajax call returns, we aggregate the result in a @hl.scala{results} buffer, and when the buffer is full we append the formatted results to the output div.
+ @p
+ This is relatively straightforward, though maybe knottier than people would be used to. For example, we have to "construct" the Ajax call by calling mutating methods and setting properties on the @hl.scala{XMLHttpRequest} object, where it's easy to make a mistake. Furthermore, we need to manually aggregate the @hl.scala{results} and keep track ourselves of whether or not the calls have all completed, which again is messy and error-prone.
+
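+ @p
+ To give a feel for the shape of this approach, here is a rough sketch; this is not the referenced code, and the exact signatures of the @hl.scala{parseTemp}/@hl.scala{formatResults} helpers are assumed:
+
+ @hl.scala
+   val results = collection.mutable.Buffer.empty[(String, Double)]
+   for(name <- names){
+     val xhr = new dom.XMLHttpRequest
+     xhr.open("GET", urlFor(name))
+     xhr.onload = (e: dom.Event) => {
+       results.append(name -> parseTemp(js.JSON.parse(xhr.responseText)))
+       // only render once every request has come back
+       if (results.length == names.length)
+         output.appendChild(formatResults(results))
+     }
+     xhr.send()
+   }
+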
+ @p
+ This solution is basically equivalent to the initial code given in the @sect.ref{Raw Javascript} section of @sect.ref{Interactive Web Pages}, with the additional code necessary for aggregation. As described in @sect.ref{dom.extensions}, we can make use of the @hl.scala{Ajax} object to make it slightly tidier.
+
+ @sect{Using dom.extensions.Ajax}
+ @example(exampleDiv, "advanced.Futures().main1")
+ @hl.ref(advanced/"Futures.scala", "def handle1", "main")
+
+ @p
+ This solution uses the @hl.scala{dom.extensions.Ajax} object, as described in @sect.ref{dom.extensions}. This basically wraps the messy @hl.scala{XMLHttpRequest} interface in a single function that returns a @hl.scala{scala.concurrent.Future}, which you can then map/foreach over to perform the action when the @hl.scala{Future} is complete.
+ @p
+ However, we still have the messiness inherent in the result aggregation: we don't actually want to perform our action (writing to the @hl.scala{output} div) when one @hl.scala{Future} is complete, but only when @i{all} the @hl.scala{Future}s are complete. Thus we still need to do some amount of manual book-keeping in the @hl.scala{results} buffer.
+
+ @sect{Future Combinators}
+ @example(exampleDiv, "advanced.Futures().main2")
+ @hl.ref(advanced/"Futures.scala", "def handle2", "main")
+
+ @p
+ Since we're using Scala's @hl.scala{Future}s, we aren't limited to just map/foreach-ing over them. @hl.scala{scala.concurrent.Future} provides a @lnk("rich api", "http://www.scala-lang.org/files/archive/nightly/docs/library/scala/concurrent/Future.html") that can be used to deal with common tasks like working with lists of futures in parallel, or aggregating the result of futures together.
+ @p
+ Here, instead of manually counting until all the @hl.scala{Future}s are complete, we create the Futures which will contain what we want (name and temperature) and store them in a list. Then we can use the @hl.scala{Future.sequence} function to invert the @hl.scala{Seq[Future[T]]} into a @hl.scala{Future[Seq[T]]}, a single Future that will provide all the results in a single list when every Future is complete. We can then simply foreach over the single Future to get the data we need to feed to @hl.scala{formatResults}/@hl.scala{appendChild}.
+
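+ @p
+ In rough outline (a sketch rather than the referenced code, with @hl.scala{Ajax} coming from @hl.scala{dom.extensions}, an execution context in scope, and the helper signatures assumed as before), the combinator-based version looks like this:
+
+ @hl.scala
+   val futures: Seq[Future[(String, Double)]] = names.map{ name =>
+     Ajax.get(urlFor(name)).map(xhr => name -> parseTemp(js.JSON.parse(xhr.responseText)))
+   }
+   // invert the Seq[Future[T]] into a single Future[Seq[T]]
+   Future.sequence(futures).foreach{ results =>
+     output.appendChild(formatResults(results))
+   }
+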
+ @p
+ This approach is significantly neater than the previous two examples: we no longer have any mutation going on, and the logic is expressed in a very high-level, simple manner. "Make a bunch of Futures, join them, use the result" is much less error-prone than the imperative result-aggregation-and-counting logic used in the previous examples.
+
+ @hr
+
+ @p
+ @hl.scala{scala.concurrent.Future} isn't limited to just calling @hl.scala{.sequence} on lists. It provides the ability to @hl.scala{.zip} two Futures of different types together to get both their results, or to @hl.scala{.recover} in the case where a Future fails. Although these tools were originally built for Scala-JVM, all of them work unchanged on Scala.js, and serve their purpose well in simplifying messy asynchronous computations.
+
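+ @p
+ For example, with @hl.scala{futureInt} and @hl.scala{futureString} standing in for whatever Futures your own code produces:
+
+ @hl.scala
+   // zip: combine two Futures of different types into a Future of a tuple
+   val both: Future[(Int, String)] = futureInt.zip(futureString)
+
+   // recover: substitute a fallback value if the original Future fails
+   val safe: Future[Int] = futureInt.recover{ case _: ArithmeticException => -1 }
+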
+ @sect{Scala-Async}
+ @p
+ Let's look at how to use Scala-Async. To motivate us, let's consider a simple paint-like canvas application similar to the one we built in the section @sect.ref{Making a Sketchpad using Mouse Input}. This application will have a few properties:
+
+ @ul
+ @li
+ The user clicks and drags to begin drawing a line on the canvas
+ @li
+ When the user releases the mouse, we fill the shape that was formed by the dragging
+ @li
+ The user clicks again to clear the canvas; like most clicks, the action happens when the button is released
+ @li
+ The user can then repeat the process from the top, indefinitely
+
+ @p
+ This is a toy example, but is enough to bring out the difficulty of doing things the "traditional" way, and why using Scala-Async with Scala.js is superior. To begin with, let's set the stage:
+
+ @hl.ref(advanced/"Async.scala", "val renderer")
+
+ @p
+ This initializes the canvas with the part of the code which will remain the same, so we can look more closely at the code which differs.
+
+ @sect{Traditional Asynchrony}
+ @p
+ Let's look at a traditional implementation, using Scala.js but no special features. We'll just use the Javascript @hl.scala{canvas.onmouseXXX} operations directly.
+
+ @split
+ @more
+ @hl.ref(advanced/"Async.scala", "// traditional")
+
+ @less
+ @example(canvas, "advanced.Async().main0")
+ @p
+ This is a working implementation, and you can play with it on the right. We basically set the three listeners:
+
+ @ul
+ @li
+ @hl.scala{canvas.onmousemove}
+ @li
+ @hl.scala{canvas.onmousedown}
+ @li
+ @hl.scala{canvas.onmouseup}
+
+ @p
+ And each listener is in charge of deciding what to do when it is its turn to fire.
+
+ @p
+ This code is pretty tricky and hard to follow. It's not immediately clear what it is doing. One thing you may notice is the presence of this @hl.scala{dragState} variable, which seems to add a lot to the confusion with branches all over the place. At first you may think you can simplify the code to do without it, but attempts to do so will reveal why it is necessary.
+
+ @p
+ This variable is necessary because each mouse event could mean different things at different times. For example, @hl.scala{canvas.onmousemove} should do nothing unless it occurs between a @hl.scala{canvas.onmousedown} and a @hl.scala{canvas.onmouseup}. @hl.scala{canvas.onmouseup} itself has two tasks: it either ends the dragging phase (which necessitates the fill-current-shape call), or it clears the canvas if it happens after a completed drag. And @hl.scala{canvas.onmousedown} should not start a new drag if the previous drawing hasn't been cleared from the canvas.
+
+ @p
+ This is a pretty simple workflow for the user, and yet the code is already tricky enough that it's not obvious it is correct at first glance. More complex tools will have correspondingly more complex workflows, and it is easy to see how just another one or two states could get out of hand.
+
+ @sect{Using Scala-Async}
+ @p
+ Now that we've seen what a "traditional" approach looks like, let's look at how we would do this using Scala-Async.
+
+ @split
+ @more
+ @hl.ref(advanced/"Async.scala", "// async")
+
+ @less
+ @example(canvas, "advanced.Async().main")
+
+ @p
+ We have an @hl.scala{async} block containing a while loop. Each time round the loop, we wait for the @hl.scala{mousedown} channel to start the path, then wait for either @hl.scala{mousemove} or @hl.scala{mouseup} (which continue the path or end it, respectively), fill the shape, and then wait for another @hl.scala{mousedown} before clearing the canvas and going round again.
+
+ @p
+ Hopefully you'd agree that this code is much simpler to read and understand than the previous version. In particular, the control-flow of the code goes from top to bottom in a "natural" fashion, rather than jumping around ad-hoc like in the previous callback-based design.
+ @p
+ You may be wondering what these @hl.scala{Channel} things are, and where they are coming from. Although these are not provided by Scala, they are pretty straightforward to define ourselves:
+
+ @hl.ref(advanced/"Async.scala", "class Channel")
+
+ @p
+ The point of @hl.scala{Channel} is to allow us to turn event-callbacks (like those provided by the DOM's @hl.scala{onmouseXXX} properties) into some kind of event-stream, that we can listen to asynchronously (via @hl.scala{apply} that returns a @hl.scala{Future}) or merge via @hl.scala{|}. This is a minimal implementation for what we need now, but it would be easy to provide more functionality (filter, map, etc.) as necessary.
+
+ @hr
+
+ @p
+ Scala-Async is a macro; that means it is both more flexible and more limited than normal Scala code: for example, you cannot put an @hl.scala{await} call inside a lambda or a higher-order function like @hl.scala{.map}. Like Futures, it doesn't provide any fundamentally new capabilities, but it is a tool that can be used to simplify otherwise messy asynchronous workflows.
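+
+ @p
+ For example, with @hl.scala{fut1}, @hl.scala{fut2} and @hl.scala{futures} standing in for arbitrary Futures (and an @hl.scala{ExecutionContext} assumed to be in scope):
+
+ @hl.scala
+   import scala.async.Async.{async, await}
+
+   // Fine: await is used directly inside the async block
+   val sum = async{ await(fut1) + await(fut2) }
+
+   // Not allowed: await may not appear inside a lambda passed to a
+   // higher-order function such as .map; this does not compile
+   // val sums = async{ futures.map(f => await(f)) }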
diff --git a/book/src/main/scalatex/indepth/CompilationPipeline.scalatex b/book/src/main/scalatex/indepth/CompilationPipeline.scalatex
new file mode 100644
index 0000000..fe2adb9
--- /dev/null
+++ b/book/src/main/scalatex/indepth/CompilationPipeline.scalatex
@@ -0,0 +1,210 @@
+@import book.BookData._
+
+@p
+ Scala.js is implemented as a compiler plugin in the Scala compiler. Despite this, the overall process looks very different from that of a normal Scala application. This is because Scala.js optimizes for the size of the compiled executable, which is something that Scala-JVM does not usually do.
+
+@sect{Whole Program Optimization}
+ @p
+ At a first approximation, Scala.js achieves its tiny executables by using whole-program optimization. Scala-JVM, like Java, allows for separate compilation: this means that after compilation, you can combine your compiled code with code compiled separately, which can interact with the code you already compiled on an ad-hoc basis: code from both sides can call each other's methods, instantiate each other's classes, etc. without any limits.
+
+ @p
+ Even things like package-private do not help you: Java packages are separately compilable too, and multiple compilation runs can dump things in the same package! You may think that private members and methods offer some salvation, but the Java ecosystem typically relies heavily on reflection, which depends on the fact that these private things remain exactly as they are.
+
+ @p
+ Overall, this makes it difficult to do any meaningful optimization: you never know whether or not you can eliminate a class, method or field. Even if it's not used anywhere you can see, it could easily be used by some other code compiled separately, or accessed through reflection.
+
+ @p
+ With Scala.js, we have decided to forgo reflection, and forgo separate compilation, in exchange for smaller executables. This is made easier by the fact that the pure-Scala ecosystem makes little use of reflection overall. Thus, right before shipping your Scala.js app to your users, the Scala.js optimizer gathers up all your Scala.js code, determines which things are used and which are not, and eliminates all the unused classes/methods/variables. This allows us to achieve a much smaller code size than is possible with reflection/separate-compilation support. Furthermore, because we forgo these two things, we can perform much more aggressive inlining and other compile-time optimizations than is possible with Scala-JVM, further reducing code size and improving performance.
+
+ @p
+ It's worth noting that such optimizations exist as an option on the JVM as well: @lnk("Proguard", "http://proguard.sourceforge.net/") is a well known library for doing similar DCE/optimization for Java/Scala applications, and is extensively used in developing mobile applications, which face "minimize-code-size" constraints similar to those web apps do. However, the bulk of Scala code which runs on the server does not use these tools.
+
+@sect{How Compilation Works}
+ @p
+ The Scala.js compilation pipeline is roughly split into multiple stages:
+
+ @ul
+ @li
+ @b{Initial Compilation}: @code{.scala} files to @code{.class} and @code{.sjsir} files
+ @li
+ @b{Fast Optimization}: @code{.sjsir} files to one smallish/fast @code{.js} file, or
+ @li
+ @b{Full Optimization}: @code{.sjsir} files to one smaller/faster @code{.js} file
+
+ @p
+ @code{.scala} files are the source code you're familiar with. @code{.class} files are the JVM-targeted artifacts which aren't used for actually producing @code{.js} files, but are kept around for pretty much everything else: the compiler uses them for separate compilation and macros, and tools such as @lnk.misc.IntelliJ or @lnk.misc.Eclipse use these files to provide IDE support for Scala.js code. @code{.js} files are the output Javascript, which we can execute in a web browser.
+ @p
+ @code{.sjsir} files are worth calling out: the name stands for "ScalaJS Intermediate Representation", and these files contain compiled code half-way between Scala and Javascript: most Scala features have by this point been replaced by their Java/Javascript equivalents, but the code still contains types (which have all been inferred) that can aid in analysis. Many Scala.js-specific optimizations take place on this IR.
+
+ @p
+ Each stage has a purpose, and together the stages bring benefits that offset their cost in complexity. The original compilation pipeline was much simpler:
+
+ @ul
+ @li
+ @b{Compilation}: @code{.scala} files to @code{.js} files
+
+ @p
+ But it produced far larger (20mb) and slower executables. This section will explore each stage and what it does, starting with a small example program:
+
+ @hl.scala
+ def main() = {
+ var x = 0
+ while(x < 999){
+ x = x + "2".toInt
+ }
+ println(x)
+ }
+
+ @sect{Compilation}
+ @p
+ As described earlier, the Scala.js compiler is implemented as a Scala compiler plugin, and lives in the main repository in @lnk("compiler/", "https://github.com/scala-js/scala-js/tree/master/compiler"). The bulk of the plugin runs after the @code{mixin} phase in the @lnk("Scala compilation pipeline", "http://stackoverflow.com/a/4528092/871202"). By this point:
+
+ @ul
+ @li
+ Types and implicits have all been inferred
+ @li
+ Pattern-matches have been compiled to imperative code
+ @li
+ @hl.scala("@tailrec") functions have been translated to while-loops, @hl.scala{lazy val}s have been replaced by @hl.scala{var}s.
+ @li
+ @hl.scala{trait}s have been @lnk("replaced by interfaces and classes", "http://stackoverflow.com/a/2558317/871202")
+
+ @p
+ Overall, by the time the Scala.js compiler plugin takes action, most of the high-level features of the Scala language have already been removed. Compared to a hypothetical, alternative "from scratch" implementation, this approach has several advantages:
+
+ @ul
+ @li
+ It helps ensure that the semantics of these features always match those of Scala-JVM, 100%
+ @li
+ It reduces the amount of implementation work required by re-using the existing compilation phases
+
+ @p
+ This first phase is mostly a translation from the Scala compiler's internal AST to the Scala.js Intermediate Representation, and does not contain very many interesting optimizations. At the end of the initial compilation, the Scala compiler with the Scala.js plugin emits two sets of files:
+
+ @ul
+ @li
+ The original @code{.class} files, @i{almost} as if they were compiled on the JVM, but not quite. They are sufficiently valid that the compiler can execute macros defined in them, but they should not actually be used to run the program.
+ @li
+ The @code{.sjsir} files, destined for further compilation in the Scala.js pipeline.
+
+ @p
+ The ASTs defined in the @code{.sjsir} files are at about the same level of abstraction as the @hl.scala{Tree}s that the Scala compiler is working with at this stage. However, the @hl.scala{Tree}s within the Scala compiler contain a lot of cruft related to the compiler internals, and are also not easily serializable. This phase cleans them up into a "purer" format (defined in the @lnk("ir/", "https://github.com/scala-js/scala-js/blob/master/ir/src/main/scala/scala/scalajs/ir/Trees.scala") folder), which is also serializable.
+
+ @p
+ This is the only phase in the Scala.js compilation pipeline in which separate compilation is possible: you can compile many different sets of Scala.js @code{.scala} files separately, and only combine them later. This is used e.g. for distributing Scala.js libraries as Maven jars, which are compiled separately by library authors to be combined into a final executable later.
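+
+ @p
+ For example, depending on such a library is just another SBT dependency, using the @code{%%%} operator to pick up the Scala.js (rather than JVM) artifact; the library and version below are purely illustrative:
+
+ @hl.scala
+   // build.sbt: pulls in a jar of .sjsir files compiled separately by the library author
+   libraryDependencies += "com.lihaoyi" %%% "scalatags" % "0.4.5"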
+
+ @sect{Fast Optimization}
+ @p
+ Without optimizations, the actual JavaScript code emitted for the above snippet would look like this:
+ @hl.javascript
+ ScalaJS.c.Lexample_ScalaJSExample$.prototype.main__V = (function() {
+ var x = 0;
+ while ((x < 999)) {
+ x = ((x + new ScalaJS.c.sci_StringOps().init___T(
+ ScalaJS.m.s_Predef$().augmentString__T__T("2")).toInt__I()) | 0)
+ };
+ ScalaJS.m.s_Predef$().println__O__V(x)
+ });
+ @p
+ This is a pretty straightforward translation from the intermediate representation into vanilla JavaScript code:
+
+ @ul
+ @li
+ Scala-style method @hl.scala{def}s become Javascript-style prototype-function-assignment
+ @li
+ Scala @hl.scala{val}s and @hl.scala{var}s become Javascript @hl.scala{var}s
+ @li
+ Scala @hl.scala{while}s become Javascript @hl.scala{while}s
+ @li
+ Implicits are materialized, hence all the @hl.scala{StringOps} and @hl.scala{augmentString} extensions are present in the output
+ @li
+ Classes and methods are fully-qualified, e.g. @hl.scala{println} becomes @hl.scala{Predef().println}
+ @li
+ Method names are qualified by their types, e.g. @hl.scala{__O__V} means that @hl.scala{println} takes @hl.scala{Object} and returns @hl.scala{void}
+
+ @p
+ This is an incomplete description of the translation, but it should give a good sense of what the translation from Scala to Javascript looks like. In general, the output is verbose but straightforward.
+
+ @p
+ In addition to this superficial translation, the optimizer does a number of things which are more subtle and vary from case to case. Without diving into too much detail, here are a few optimizations that are performed:
+
+ @ul
+ @li
+ @b{Dead-code elimination}: entry-points to the program such as @hl.scala("@JSExport")ed methods/classes are kept, as are any methods/classes that these reference. All others are removed. This reduces the potentially 20mb of Javascript generated by a naive compilation to a more manageable 400kb-1mb for a typical application
+ @li
+ @b{Inlining}: under some circumstances, the optimizer inlines the implementation of methods at call sites. For example, it does so for all "small enough" methods. This typically reduces the code size by a small amount, but offers a several-times speedup of the generated code by inlining away much of the overhead from the abstractions (implicit-conversions, higher-order-functions, etc.) in Scala's standard library.
+ @li
+ @b{Constant-folding}: due to inlining and other optimizations, some variables that could have arbitrary values are known to contain a constant. These variables are replaced by their respective constants, which, in turn, can trigger more optimizations.
+ @li
+ @b{Closure elimination}: probably one of the most important optimizations. When inlining a higher-order method such as @code{map}, the optimizer can in turn inline the anonymous function inside the body of the loop, effectively turning polymorphic dispatch with closures into bare-metal loops.
+ @p
+ Applying these optimizations to our example results in the following JavaScript code instead, which is what you typically execute in the fastOpt stage:
+
+ @hl.javascript
+ ScalaJS.c.Lexample_ScalaJSExample$.prototype.main__V = (function() {
+ var x = 0;
+ while ((x < 999)) {
+ var jsx$1 = x;
+ var this$2 = new ScalaJS.c.sci_StringOps().init___T("2");
+ var this$4 = ScalaJS.m.jl_Integer$();
+ var s = this$2.repr$1;
+ x = ((jsx$1 + this$4.parseInt__T__I__I(s, 10)) | 0)
+ };
+ var x$1 = x;
+ var this$6 = ScalaJS.m.s_Console$();
+ var this$7 = this$6.outVar$2;
+ ScalaJS.as.Ljava_io_PrintStream(this$7.tl$1.get__O()).println__O__V(x$1)
+ });
+
+ @p
+ As a whole-program optimization, it tightly ties together the code it is compiling and does not let you e.g. inject additional classes later. This does not mean you cannot interact with external code at all: you can, but it has to go through explicitly @hl.scala{@@JSExport}ed methods and classes via Javascript Interop, and not on ad-hoc classes/methods within the module. Thus it's entirely possible to have multiple "whole-programs" running in the same browser; they just will likely have duplicate copies of e.g. standard library classes inside of them, since they cannot share the code as it's not exported.
+
+ @p
+ While the input for this phase is the aggregate @code{.sjsir} files from your project and all your dependencies, the output is executable Javascript. This phase usually runs in less than a second, outputs a Javascript blob in the 400kb-1mb range, and is suitable for repeated use during development. This corresponds to the @code{fastOptJS} command in SBT.
+
+ @sect{Full Optimization}
+ @hl.javascript
+ Fd.prototype.main = function() {
+ for(var a = 0;999 > a;) {
+ var b = (new D).j("2");
+ E();
+ a = a + Ja(0, b.R) | 0
+ }
+ b = Xa(ed().pc.Sb);
+ fd(b, gd(s(), a));
+ fd(b, "\n");
+ };
+
+ @p
+ The @lnk("Google Closure Compiler", "https://developers.google.com/closure/compiler/") (GCC) is a set of tools that work with Javascript. It has multiple @lnk("levels of optimization", "https://developers.google.com/closure/compiler/docs/compilation_levels"), doing everything from basic whitespace-removal to heavy optimization. It is an old, relatively mature project that is relied on both inside and outside Google to optimize the delivery of Javascript to the browser.
+
+ @p
+ Scala.js uses GCC in its most aggressive mode: @lnk("Advanced Optimization", "https://developers.google.com/closure/compiler/docs/api-tutorial3"). GCC produces a compressed, minified version of the Javascript (above) that @sect.ref{Fast Optimization} emits: e.g. in the example above, all identifiers have been renamed to short strings, the @hl.javascript{while}-loop has been replaced by a @hl.javascript{for}-loop, and the @hl.scala{println} function has been inlined.
+
+ @p
+ As described in the linked documentation, GCC performs optimizations such as:
+
+ @ul
+ @li
+ Whitespace removal
+ @li
+ Variable and property renaming
+ @li
+ Dead code elimination
+ @li
+ Inlining
+
+ @p
+ Notably, GCC @i{does not preserve the semantics of arbitrary Javascript}! In particular, it only works for a subset of Javascript that it understands and can properly analyze. This is an issue when hand-writing Javascript for GCC since it's very easy to step outside that subset and have GCC break your code, but is not a worry when using Scala.js: the Scala.js optimizer (the previous phase in the pipeline) automatically outputs Javascript which GCC understands and can work with.
+ @p
+ There is some overlap between the optimizations performed by the Scala.js optimizer and GCC. For example, both apply DCE and inlining in some form. However, there are also a lot of optimizations specific to each tool. In general, the Scala.js optimizer is more concerned about producing very efficient JavaScript code, while GCC shines at making that JavaScript as small as possible (in terms of the number of characters).
+ @p
+ The combination of both these tools produces small and fast output blobs: ~100-400kb. This takes 5-10 seconds to run, which makes it somewhat slow for iterative development, so it's typically only run right before final testing and deployment. This corresponds to the @code{fullOptJS} command in SBT.
+
+@hr
+
+@p
+ This hopefully has given a good overview of how the Scala.js compilation pipeline works. The pipeline and optimizer is a work-in-progress, and is changing all the time in an attempt to achieve ever-smaller executables and ever-faster code.
+
+@p
+ This whole chapter has been focused on the @i{what} but not the @i{why}. The chapter on @sect.ref{Scala.js' Design Space} contains a section which talks about @sect.ref("Small Executables", "why we care so much about small executables"). \ No newline at end of file
diff --git a/book/src/main/scalatex/indepth/DesignSpace.scalatex b/book/src/main/scalatex/indepth/DesignSpace.scalatex
new file mode 100644
index 0000000..6eec6fd
--- /dev/null
+++ b/book/src/main/scalatex/indepth/DesignSpace.scalatex
@@ -0,0 +1,241 @@
+@import book.BookData._
+
+@p
+ Scala.js is a relatively large project, and is the result of both an enormous amount of hard work as well as a number of decisions that craft what it's like to program in Scala.js today. Many of these decisions result in marked differences from the behavior of the same code running on the JVM. This chapter explores the reasoning and rationale behind these decisions.
+
+
+@sect{Why No Reflection?}
+ @p
+ Scala.js prohibits reflection as it makes dead-code elimination difficult, and the compiler relies heavily on dead-code elimination to generate reasonably-sized executables. The chapter on @sect.ref("The Compilation Pipeline") goes into more detail about why, but a rough estimate of the effect of various optimizations on a small application is:
+
+ @ul
+ @li
+ @b{Full Output} - ~20mb
+ @li
+ @b{Naive Dead-Code-Elimination} - ~800kb
+ @li
+ @b{Inlining Dead-Code-Elimination} - ~600kb
+ @li
+ @b{Minified by Google Closure Compiler} - ~200kb
+
+ @p
+ The default output size of 20mb makes the executables difficult to work with. Even though browsers can deal with 20mb Javascript blobs, it takes the browser several seconds to even load it, and up to a minute after that for the JIT to optimize the whole thing.
+
+ @sect{Dead Code Elimination}
+ @p
+ To illustrate why reflection makes things difficult, consider a tiny application:
+
+ @hl.scala
+ @@JSExport
+ object App extends js.JSApp{
+ @@JSExport
+ def main() = {
+ println(foo())
+ }
+ def foo() = 10
+ def bar = "i am a cow"
+ }
+ object Dead{
+ def complexFunction() = ...
+ }
+
+ @p
+ When the @sect.ref("Fast Optimization", "Scala.js optimizer"), looks at this application, it is able to deduce certain things immediately:
+
+ @ul
+ @li
+ @hl.scala{App} and @hl.scala{App.main} are exported via @hl.scala{@@JSExport}, and thus can't be considered dead code.
+ @li
+ @hl.scala{App.foo} is called from @hl.scala{App.main}, and so has to be kept around
+ @li
+ @hl.scala{App.bar} is never called from @hl.scala{App.main} or @hl.scala{App.foo}, and so can be eliminated
+ @li
+ @hl.scala{Dead}, including @hl.scala{Dead.complexFunction}, is not called from any live code, and can be eliminated.
+
+ @p
+ The actual process is a bit more involved than this, but this is a first approximation of how the dead-code elimination works: you start with a small set of live code (e.g. @hl.scala{@@JSExport}ed things), search outward to find the things which are recursively reachable from that set, and eliminate all the rest. This means that the Scala.js compiler can eliminate, e.g., parts of the Scala standard library that you are not using. The standard library is not small, and makes up the bulk of the 20mb uncompressed blob.
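+
+ @p
+ A minimal sketch of that reachability computation, just to illustrate the idea (this is not the actual optimizer's code):
+
+ @hl.scala
+   // Compute the set of methods reachable from the entry-points as a fixed point
+   def reachable(entryPoints: Set[String], callees: String => Set[String]): Set[String] = {
+     var live = entryPoints
+     var frontier = entryPoints
+     while(frontier.nonEmpty){
+       frontier = frontier.flatMap(callees) -- live   // newly-discovered methods
+       live = live ++ frontier
+     }
+     live
+   }
+   // everything *not* in reachable(exportedThings, callGraph) can be eliminated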
+
+ @sect{Whither Reflection?}
+ @p
+ To see why reflection makes this difficult, imagine a slightly modified program which includes some reflective calls in @hl.scala{App.main}:
+
+ @hl.scala
+ @@JSExport
+ object App extends js.JSApp{
+ @@JSExport
+ def main() = {
+ Class.forName(userInput()).getMethod(userInput()).invoke(null)
+ }
+ def foo() = 10
+ def bar = "i am a cow"
+ }
+ object Dead{
+ def complexFunction() = ...
+ }
+
+ @p
+ Here, we're assuming @hl.scala{userInput()} is some method which returns a @hl.scala{String} that was input by the user or otherwise somehow decided at runtime.
+ @p
+ We can start the same process: @hl.scala{App.main} is live since we @hl.scala{@@JSExport}ed it, but what objects or methods are reachable from @hl.scala{App.main}? The answer is: it depends on the values of @hl.scala{userInput()}, which we don't know. And hence we don't know which classes or methods are reachable! Depending on what @hl.scala{userInput()} returns, any or all methods and classes could be used by @hl.scala{App.main()}.
+ @p
+ This leaves us a few options:
+
+ @ul
+ @li
+ Keep every method or class around at runtime. This severely hampers the compiler's ability to optimize, and results in massive 20mb executables.
+ @li
+ Ignore reflection, and go ahead and eliminate/optimize things assuming reflection did not exist.
+ @li
+ Allow the user to annotate methods/classes that should be kept, and eliminate the rest.
+
+ @p
+ All three are possible options: Scala.js started off with #1. #3 is the approach used by @lnk("Proguard", "http://proguard.sourceforge.net/manual/examples.html#annotated"), which lets you annotate things with e.g. @hl.scala{@@KeepApplication} to preserve them for reflection, preventing Proguard from eliminating them as dead code.
+
+ @p
+ In the end, Scala.js chose #2. This is helped by the fact that, overall, Scala code tends not to use reflection as heavily as Java or as dynamic languages do. Scala uses techniques such as @lnk("lambdas", "http://docs.scala-lang.org/tutorials/tour/anonymous-function-syntax.html") or @lnk("implicits", "http://docs.scala-lang.org/tutorials/tour/implicit-parameters.html") to satisfy many use cases which Java has traditionally used reflection for, while remaining friendly to the optimizer.
+
+ @p
+ There are a range of use-cases for reflection where you want to inspect an object's structure or methods, where lambdas or implicits don't help. People use reflection to @lnk("serialize objects", "http://jackson.codehaus.org/DataBindingDeepDive"), or for @lnk("routing messages to methods", "https://access.redhat.com/documentation/en-US/Fuse_ESB_Enterprise/7.1/html/Implementing_Enterprise_Integration_Patterns/files/BasicPrinciples-BeanIntegration.html"). However, both these cases can be satisfied by...
+
+ @sect{Macros}
+
+ @p
+ The Scala programming language, since the 2.10.x series, has support for @lnk("Macros", "http://docs.scala-lang.org/overviews/macros/overview.html") in the language. Although experimental, these are heavily used in many projects such as Play, Slick and Akka, and allow a developer to perform compile-time computations and generate code wherever the macros are used.
+
+ @p
+ People typically think of macros as AST-transformers: you pass in an AST and get a modified AST out. However, in Scala, these ASTs are strongly-typed, and the macro is able to inspect the types involved in generating the output AST. This leads to a lot of @lnk("interesting techniques", "http://docs.scala-lang.org/overviews/macros/implicits.html") around macros where you synthesize ASTs based on the type (explicit or inferred) of the macro callsite, something that is impossible in dynamic languages.
+
+ @p
+ Practically, this means that you can use macros to do things such as inspecting the methods, fields and other type-level properties of a typed value. This allows us to do things like @lnk("serialize objects with no boilerplate", "https://github.com/lihaoyi/upickle"):
+
+ @hl.scala
+ import upickle._
+
+ case class Thing(a: Int, b: String)
+ write(Thing(1, "gg"))
+ // res23: String = {"a": 1, "b": "gg"}
+
+ @p
+ Or to @lnk("route messages to the appropiate methods", "https://github.com/lihaoyi/autowire") without boilerplate, and @i{without} using reflection!
+
+ @p
+ The fact that you can satisfy these use cases with macros is non-obvious: in dynamic languages, macros only get an AST, which is basically opaque when you're only passing a single value to it. With Scala, you get the value @i{together with its type}, which lets you inspect the type and generate the proper serialization/routing code, something that is impossible to do with macros in a dynamic language.
+
+ @p
+ Using macros here also plays well with the Scala.js optimizer: the macros are fully expanded before the optimizer is run, so by the time the optimizer sees the code, there is no more magic left: it is then free to do dead-code-elimination/inlining/other-optimizations without worrying about reflection causing the code to do weird things at runtime. Thus, we've managed to substitute most of the main use-cases of reflection, and so can do without it.
+
+@sect{Why does error behavior differ?}
+ @p
+ Scala.js deviates from the semantics of Scala-JVM in several ways. Many of these ways revolve around the edge-conditions of a program: what happens when something goes wrong? An array index is out of bounds? An integer is divided by zero? These differences cause some amount of annoyance when debugging, since when you mess up an array index, you expect an exception, not silently invalid data!
+
+ @p
+ In most of these cases, it was a trade-off between performance and correctness. These are situations where the default semantics of Scala deviate from that of Javascript, and Scala.js would have to perform extra work to emulate the desired behavior. For example, compare the division behavior of the JVM and Javascript.
+ @sect{Divide-by-zero: a case study}
+ @hl.scala
+ /*JVM*/
+ 15 / 4 // 3
+ @hl.javascript
+ /*JS*/
+ 15 / 4 // 3.25
+ @p
+ On the JVM, integer division is a primitive, and dividing @hl.scala{15 / 4} gives @hl.scala{3}. However, in Javascript, it gives @hl.javascript{3.25}, since in Javascript all numbers are double-precision floating point values.
+
+ @p
+ Scala.js works around this in the general case by adding a @hl.javascript{| 0} to the translation, e.g.
+
+ @hl.scala
+ /*JVM*/
+ 15 / 4 // 3
+ @hl.javascript
+ /*JS*/
+ (15 / 4) | 0 // 3
+
+ @p
+ This gives the correct result for most numbers, and is reasonably efficient (actually, it tends to be @i{more} efficient on modern VMs). However, what about dividing-by-zero?
+
+ @hl.scala
+ /*JVM*/
+ 15 / 0 // ArithmeticException
+ @hl.javascript
+ /*JS*/
+ 15 / 0 // Infinity
+ (15 / 0) | 0 // 0
+
+ @p
+ On the JVM, the runtime is kind enough to throw an exception for you. However, in Javascript the division simply produces @hl.javascript{Infinity}, which the @hl.javascript{| 0} then truncates down to zero.
+ @p
+ So that's the current behavior of integers in Scala.js. One may ask: can we fix it? And the answer is, we can:
+ @hl.scala
+ /*JVM*/
+ 1 / 0 // ArithmeticException
+ @hl.javascript
+ /*JS*/
+ function intDivide(x, y){
+   var z = x / y
+   if (z == Infinity) throw new ArithmeticException("Divide by Zero")
+   else return z | 0
+ }
+ intDivide(1, 0) // ArithmeticException
+ @p
+ This translation fixes the problem, and enforces that the @hl.scala{ArithmeticException} is thrown at the correct time. However, this approach causes some overhead: what was previously two primitive operations is now a function call, a local variable assignment, and a conditional. That is a lot more expensive than two primitive operations!
+
+ @sect{The Performance/Correctness Tradeoff}
+ @p
+ In the end, a lot of the semantic differences listed here come down to the same tradeoff: we could make the code behave more like Scala-JVM, but at the cost of adding overhead via function calls and other checks. Furthermore, the cost is paid regardless of whether the "exceptional case" is triggered or not: in the example above, every division in the program pays the cost!
+ @p
+ The decision to not support these exceptional cases comes down to a value judgement: how often do people actually depend on an exception being thrown as part of their program semantics, e.g. by catching it and performing actions? And how often are they just a way of indicating bugs? It turns out that very few @hl.scala{ArithmeticException}s, @hl.scala{ArrayIndexOutOfBoundsException}s, or similar are actually a necessary part of the program! They exist during debugging, but after that, these code paths are never relied upon "in production".
+ @p
+ Thus Scala.js goes for a compromise: in the Fast Optimization mode, we run the code with all these checks in place (this is work in progress; currently only @code{asInstanceOf}s are thus checked), so as to catch cases where these errors occur close to the source and make it easy for you to debug them. In Full Optimization mode, on the other hand, we remove these checks, assuming you've already run through these cases and found any bugs during development.
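+ @p
+ For example, a bad cast like the one below throws a @hl.scala{ClassCastException} in fast-optimized development mode, just as it would on the JVM, while in fully-optimized mode the check is removed and the failure may only surface later, in some less obvious way:
+ @hl.scala
+   val x: Any = 12345
+   x.asInstanceOf[String] // fastOpt: throws ClassCastException; fullOpt: unchecked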
+ @p
+ This is a common pattern in situations where there's a tradeoff between debuggability and speed. In Scala.js' case, it allows us to get good debuggability in development, as well as good performance in production. There's some loss in debuggability in production, sacrificed in exchange for greater performance.
+
+@sect{Small Executables}
+ Why do we care so much about how big our executables are in Scala.js? Why don't we care about how big they are on Scala-JVM? This is mostly due to three reasons:
+
+ @ul
+ @li
+ When cross-compiling Scala to Javascript, the end-result tends to be much more verbose than when cross-compiled to Java Bytecode.
+ @li
+ Scala.js typically is run in web browsers, which typically do not work well with large executables compared to e.g. the JVM
+ @li
+ Scala.js often is delivered to many users over the network, and long download times force users to wait, degrading the user experience
+
+ @p
+ These factors combined mean that Scala.js has to put in extra effort at compile-time to optimize the code and reduce its size.
+
+ @sect{Raw Verbosity}
+ @p
+ Scala.js compiles to Javascript source code, while Scala-JVM compiles to Java bytecode. Java bytecode is a binary format and thus somewhat optimized for size, while Javascript is textual and is designed to be easy to read and write by hand.
+ @p
+ What does this mean, concretely? It means that a symbol marking something, e.g. the start of a function, is often a single byte in Java bytecode. It may not even have any delimiter at all, with the meaning of the binary data instead being inferred from its position in the file! On the other hand, in Javascript, declaring a function takes a long-and-verbose @hl.javascript{function} keyword, which together with peripheral punctuation (@code{.}, @code{ = }, etc.) often adds up to tens of bytes to express a single idea.
+ @p
+ The upshot is that expressing the same meaning in Javascript usually takes more "raw code" than expressing the same meaning in Java bytecode. Even though Java bytecode is relatively verbose for a binary format, it is still significantly more concise than Javascript, and it shows: the Scala standard library weighs in at a cool 6mb on Scala-JVM, while it weighs 20mb on Scala.js.
+ @p
+ All things being equal, this would mean that Scala.js would have to work harder to keep down code-size than Scala-JVM would have to. Alas, not all other things are equal.
+
+ @sect{Browsers Performance}
+ @p
+ Without any optimization, a naive compilation to Scala.js results in an executable (including the standard library) weighing around 20mb. On the surface, this isn't a problem: runtimes like the JVM have no issue with loading 20mb of Java bytecode to execute; many large desktop applications weigh in the 100s of megabytes while still loading and executing fine.
+ @p
+ However, the web browser isn't a native execution environment; loading 20mb of Javascript is sufficient to heavily tax even the most modern web browsers such as Chrome and Firefox. Even though most of the code comprises class and method definitions that never have their contents executed, loading such a payload into e.g. Chrome makes it freeze for 5-10 seconds initially. Even after that, once the code has all been parsed and isn't being actively executed, having all this Javascript around makes the browser sluggish for up to a minute before the JIT compiler can speed things up.
+ @p
+ Overall, this means that you probably do not want to work with un-optimized Scala.js executables. Even for development, the slow load times and initial sluggishness make testing the results of your hard-work in the browser a frustrating experience. But that's not all...
+
+ @sect{Deployment Size}
+ @p
+ Scala.js applications often run in the browser. Not just any browser, but the browsers of your users, who have come to your website or web-app to try and accomplish some task. This is in stark contrast to Scala-JVM applications, which most often run on servers: servers that you own and control, and can deploy code to at your leisure.
+
+ @p
+ When running code on your own servers in some data center, you often do not care how big the compiled code is: the Scala standard library is several (6-7) megabytes, which added to your own code and any third-party libraries you're using, may add up to tens of megabytes, maybe a hundred or two if it's a relatively large application. Even that pales in comparison to the size of the JVM, which weighs in the 100s of megabytes.
+ @p
+ Even so, you are deploying your code on a machine (virtual or real) which has several gigabytes of memory and 100s of gigabytes of disk space. Even if the size of the code makes deployment slower, you only deploy fresh code a handful of times a day at most, and the size of your executable typically does not worry you.
+ @p
+ Scala.js is different: it runs in the browsers of your users. Before it can run in their browser, it first has to be downloaded, probably over a connection that is much slower than the one used to deploy your code to your servers or data-center. It probably is downloaded thousands of times per day, and every user who downloads it must pay the cost of waiting for it to finish downloading before they can take any actions on your website.
+
+ @p
+ A typical website loads ~100kb-1mb of Javascript, and 1mb is on the heavy side. Most Javascript libraries weigh in on the order of 50-100kb. For Scala.js to be useful in the browser, it has to be able to compare favorably with these numbers.
+
+ @hr
+
+ @p
+ Thus, while on Scala-JVM you typically have executables that (including dependencies) end up weighing 10s to 100s of megabytes, Scala.js has a much tighter budget. A hello world Scala.js application weighs in at around 100kb, and as you write more code and use more libraries (and parts of the standard library) this number rises to the 100s of kb. This isn't tiny, especially compared to the many small Javascript libraries out there, but it definitely is much smaller than what you'd be used to on the JVM.
diff --git a/book/src/main/scalatex/indepth/JavaAPIs.scalatex b/book/src/main/scalatex/indepth/JavaAPIs.scalatex
new file mode 100644
index 0000000..51ac71f
--- /dev/null
+++ b/book/src/main/scalatex/indepth/JavaAPIs.scalatex
@@ -0,0 +1,46 @@
+@import book.BookData._
+
+@p
+ Below is a list of classes from the Java Standard Library that are available from Scala.js. In general, much of @hl.scala{java.lang}, and parts of @hl.scala{java.io}, @hl.scala{java.util} and @hl.scala{java.net} have been ported over. This means that all these classes are available for use in Scala.js applications despite being part of the Java standard library.
+@p
+ There are many reasons you may want to port a Java class to Scala.js: you may want to use it directly, or you may be trying to port a library which uses it. In general, we haven't been porting things "for fun", and obscure classes like @hl.scala{org.omg.corba} will likely never be ported: we've been porting things as the need arises in order to support libraries (e.g. @lnk("Scala.Rx", "https://github.com/lihaoyi/scala.rx")) that need them.
+
+@sect{Available Java APIs}
+
+ @ul
+ @for(data <- BookData.javaAPIs)
+ @li
+ @a(data._1, href:=data._2)
+
+@sect{Porting Java APIs}
+ @p
+ The process for making Java library classes available in Scala.js is relatively straightforward:
+ @ul
+ @li
+ Find a class that you want to use in Scala.js, but is not implemented.
+ @li
+ Write a clean-room implementation in Scala, without looking at the source code of @lnk("OpenJDK", "http://openjdk.java.net/"). This is due to software-license incompatibility between OpenJDK and Scala.js. Reading the docs or specification is fine, as is looking at the source of alternate implementations such as @lnk("Harmony", "http://harmony.apache.org/").
+ @li
+ Submit a pull-request to the @lnk("Scala.js repository", "https://github.com/scala-js/scala-js"), including your implementation, together with tests. See the @lnk("existing tests", "https://github.com/scala-js/scala-js/tree/master/test-suite/src/test/scala/org/scalajs/testsuite/javalib") in the repository if you need examples of how to write your own.
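+
+ @p
+ As a purely illustrative sketch, a clean-room port of a simple class might look something like the following. The class picked here (@hl.scala{java.util.Objects}) and the handful of members shown are only an example; the actual implementation in the Scala.js repository may look quite different:
+
+ @hl.scala
+   // Hypothetical, heavily simplified clean-room port, written from the
+   // Javadoc rather than from the OpenJDK sources
+   package java.util
+
+   object Objects {
+     def requireNonNull[T <: AnyRef](obj: T): T = {
+       if (obj == null) throw new NullPointerException()
+       obj
+     }
+
+     def isNull(obj: AnyRef): Boolean = obj == null
+
+     def nonNull(obj: AnyRef): Boolean = obj != null
+   }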
+
+ @p
+ In general, this is a simple process for "pure-Java" classes which do not use any special JVM/Java-specific APIs. However, it is not so simple for classes which do! Classes that make use of Java-specific things like:
+
+ @ul
+ @li
+ Threads
+ @li
+ Filesystem APIs
+ @li
+ Network APIs
+ @li
+ @hl.scala{sun.misc.Unsafe}
+
+ @p
+ And other similar APIs will need to be rewritten to avoid using them. For example, @hl.scala{AtomicXXX}s can be written without threading/unsafe APIs because Javascript is single-threaded, making the implementation of e.g. an @hl.scala{AtomicBoolean} pretty trivial:
+
+ @hl.ref(cloneRoot/"scala-js"/'javalib/'src/'main/'scala/'java/'util/'concurrent/'atomic/"AtomicBoolean.scala")
+
+ @p
+ Others can't be ported at all (e.g. @code{java.io.File}) simply because the API capabilities they provide (blocking reads & writes to files) do not exist in the Javascript runtime.
+
diff --git a/book/src/main/scalatex/indepth/JavascriptInterop.scalatex b/book/src/main/scalatex/indepth/JavascriptInterop.scalatex
new file mode 100644
index 0000000..30404ce
--- /dev/null
+++ b/book/src/main/scalatex/indepth/JavascriptInterop.scalatex
@@ -0,0 +1 @@
+TODO \ No newline at end of file
diff --git a/book/src/main/scalatex/indepth/SemanticDifferences.scalatex b/book/src/main/scalatex/indepth/SemanticDifferences.scalatex
new file mode 100644
index 0000000..bcc0f6b
--- /dev/null
+++ b/book/src/main/scalatex/indepth/SemanticDifferences.scalatex
@@ -0,0 +1,267 @@
+@import book.BookData._
+@p
+ Although Scala.js tries very hard to maintain compatibility with Scala-JVM, there are some parts where the two platforms differ. These differences can be roughly grouped into two categories: differences in the libraries available, and differences in the language itself. This chapter will cover both of these facets.
+
+@sect{Language Differences}
+
+ @sect{Primitive data types}
+ @p
+ All primitive data types work exactly as on the JVM, with the three following
+ exceptions.
+
+ @sect{Floats can behave as Doubles by default}
+ @p
+ Scala.js underspecifies the behavior of @code{Float}s by default. Any @code{Float} value can be stored as a @code{Double} instead, and any operation on @code{Float}s can be computed with double precision. The choice of whether or not to behave as such, when and where, is left to the
+ implementation.
+ @p
+ If exact single precision operations are important to your application, you can enable strict-floats semantics in Scala.js, with the following sbt setting:
+ @hl.scala
+ scalaJSSemantics ~= { _.withStrictFloats(true) }
+ @p
+ Note that this can have a major impact on the performance of your application on JS interpreters that do not support @lnk("the Math.fround function", "https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Math/fround").
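+ @p
+ As a small, purely illustrative sketch (not taken from any real codebase), this is the kind of code that can observe the difference between the default and the strict-float semantics:
+ @hl.scala
+   def add(x: Float, y: Float): Float = x + y
+
+   // On the JVM, and on Scala.js with strict floats, the result of add is
+   // rounded back to single precision, so this comparison is true. With the
+   // default semantics the sum may be kept in double precision, in which
+   // case the comparison can evaluate to false.
+   add(0.1f, 0.2f) == 0.3f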
+
+ @sect{toString of Float, Double and Unit}
+ @p
+ @code{x.toString()} returns slightly different results for floating point numbers and @code{()} (@code{Unit}).
+
+ @split
+ @half
+ @hl.scala
+ // Scala-JVM
+ > println(())
+ ()
+ > println(1.0)
+ 1.0
+ > println(1.4f)
+ 1.4
+
+ @half
+ @hl.scala
+ // Scala.js
+ > println(())
+ undefined
+ > println(1.0)
+ 1
+ > println(1.4f)
+ 1.399999976158142
+
+ @p
+ In general, a trailing @code{.0} is omitted. Floats print in a weird way because they are printed as if they were Doubles, which means their lack of precision shows up.
+ @p
+ To get a sensible and portable string representation of floating point numbers, use @code{String.format()} or related methods.
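+ @p
+ For example (a small sketch; the exact formatting call you use may differ):
+ @hl.scala
+   val f = 1.4f
+   f.toString       // "1.4" on Scala-JVM, "1.399999976158142" on Scala.js
+   "%.2f".format(f) // explicit precision, so both platforms print the same number of digits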
+
+ @sect{Runtime type tests are based on values}
+ @p
+ Instance tests (and consequently pattern matching) on any of @code{Byte}, @code{Short}, @code{Int}, @code{Float}, @code{Double} are based on the value and not the type they were created with. The following are examples:
+ @ul
+ @li
+ 1 matches @code{Byte}, @code{Short}, @code{Int}, @code{Float}, @code{Double}
+ @li
+ 128 (@code{> Byte.MaxValue}) matches @code{Short}, @code{Int}, @code{Float}, @code{Double}
+ @li
+ 32768 (@code{> Short.MaxValue}) matches @code{Int}, @code{Float}, @code{Double}
+ @li
+ 2147483647 matches @code{Int}, @code{Double} if strict-floats are enabled, otherwise @code{Float} as well
+ @li
+ 2147483648 (@code{> Int.MaxValue}) matches @code{Float}, @code{Double}
+ @li
+ 1.5 matches @code{Float}, @code{Double}
+ @li
+ 1.4 matches @code{Double} only if strict-floats are enabled, otherwise @code{Float} and @code{Double}
+ @li
+ @code{NaN}, @code{Infinity}, @code{-Infinity} and @code{-0.0} match @code{Float}, @code{Double}
+ @p
+ As a consequence, the following apparent subtyping relationships hold:
+ @hl.scala
+ Byte <:< Short <:< Int <:< Double
+ Short <:< Float <:< Double
+ @p
+ if strict-floats are enabled, or
+ @hl.scala
+ Byte <:< Short <:< Int <:< Float =:= Double
+ @p
+ otherwise.
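+ @p
+ A small, purely illustrative sketch of how these value-based tests show up in pattern matching:
+ @hl.scala
+   val x: Any = 1
+
+   x match {
+     case _: Byte   => "Byte"   // taken on Scala.js: the value 1 fits in a Byte
+     case _: Int    => "Int"    // taken on Scala-JVM: the runtime class is java.lang.Integer
+     case _: Double => "Double"
+     case _         => "other"
+   }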
+
+ @sect{Undefined behaviors}
+ @p
+ The JVM is a very well specified environment, which even specifies how some bugs are reported as exceptions. Some examples are:
+ @ul
+ @li
+ @code{NullPointerException}
+ @li
+ @code{ArrayIndexOutOfBoundsException} and @code{StringIndexOutOfBoundsException}
+ @li
+ @code{ClassCastException}
+ @li
+ @code{ArithmeticException} (such as integer division by 0)
+ @li
+ @code{StackOverflowError} and other @code{VirtualMachineError}s
+ @p
+ Because Scala.js does not receive VM support to detect such erroneous conditions, checking them is typically too expensive.
+ @p
+ Therefore, all of these are considered @lnk("undefined behavior", "http://en.wikipedia.org/wiki/Undefined_behavior").
+ @p
+ Some of these, however, can be configured to be compliant with sbt settings. Currently, only @code{ClassCastException}s (thrown by invalid @code{asInstanceOf} calls) are configurable, but the list will probably expand in future versions.
+ @p
+ Every configurable undefined behavior has 3 possible modes:
+ @ul
+ @li
+ @b{Compliant}: behaves as specified on a JVM
+ @li
+ @b{Unchecked}: completely unchecked and undefined
+ @li
+ @b{Fatal}: checked, but throws @lnk("UndefinedBehaviorError", "http://www.scala-js.org/api/scalajs-library/0.6.0/#scala.scalajs.runtime.UndefinedBehaviorError")s instead of the specified exception.
+ @p
+ By default, undefined behaviors are in Fatal mode for fastOptJS and in Unchecked mode for fullOptJS. This is so that bugs can be detected more easily during development, with predictable exceptions and stack traces. In production code (fullOptJS), the checks are removed for maximum efficiency.
+ @p
+ @code{UndefinedBehaviorError}s are @i{fatal} in the sense that they are not matched by @code{case NonFatal(e)} handlers. This makes sure that they always crash your program as early as possible, so that you can detect and fix the bug. It is @i{never} OK to catch an @code{UndefinedBehaviorError} (other than in a testing framework), since that means your program will behave differently in fullOpt stage than in fastOpt.
+ @p
+ If you need a particular kind of exception to be thrown in compliance with the JVM semantics, you can do so with an sbt setting. For example, this setting enables compliant @code{asInstanceOf}s:
+ @hl.scala
+ scalaJSSemantics ~= { _.withAsInstanceOfs(
+ org.scalajs.core.tools.sem.CheckedBehavior.Compliant) }
+ @p
+ Note that this will have (potentially major) performance impacts.
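+ @p
+ To make the three modes concrete, here is a sketch of what happens with an invalid cast (by definition, the exact failure in Unchecked mode is not specified):
+ @hl.scala
+   val x: Any = "not a number"
+   val y = x.asInstanceOf[java.lang.Integer]
+   // Fatal (default for fastOptJS):     throws an UndefinedBehaviorError here
+   // Unchecked (default for fullOptJS): undefined; the cast may appear to
+   //                                    succeed, and the program fails later
+   // Compliant (sbt setting above):     throws a ClassCastException, as on the JVM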
+ @p
+ For a more detailed rationale, see the section @sect.ref{Why does error behavior differ?}.
+
+ @sect{Reflection}
+ @p
+ Java reflection and, a fortiori, Scala reflection, are not supported. There is limited support for @code{java.lang.Class}, e.g., @code{obj.getClass.getName} will work for any Scala.js object (not for objects that come from JavaScript interop). Reflection makes it difficult to perform the optimizations that Scala.js heavily relies on. For a more detailed discussion on this topic, take a look at the section @sect.ref{Why No Reflection?}.
+
+ @sect{Regular expressions}
+ @p
+ @lnk("JavaScript regular expressions", "http://developer.mozilla.org/en/docs/Core_JavaScript_1.5_Guide:Regular_Expressions") are slightly different from @lnk("Java regular expressions", "http://docs.oracle.com/javase/6/docs/api/java/util/regex/Pattern.html"). The support for regular expressions in Scala.js is implemented on top of JavaScript regexes.
+ @p
+ This sometimes has an impact on functions in the Scala library that use regular expressions themselves. A list of known functions that are
+ affected is given here:
+ @ul
+ @li
+ @code{StringLike.split(x: Array[Char])}
+
+ @sect{Symbols}
+ @p
+ @code{scala.Symbol} is supported, but is a potential source of memory leaks in applications that make heavy use of symbols. The main reason is that
+ JavaScript does not support weak references, causing all symbols created by Scala.js to remain in memory throughout the lifetime of the application.
+
+ @sect{Enumerations}
+ @p
+ The methods @code{Value()} and @code{Value(i: Int)} on @code{scala.Enumeration} use reflection to retrieve a string representation of the member name and are therefore -- in principle -- unsupported. However, since Enumerations are an integral part of the Scala library, Scala.js adds limited support for these two methods:
+ @p
+ Calls to either of these two methods of the forms:
+ @hl.scala
+ val <ident> = Value
+ val <ident> = Value(<num>)
+ @p
+ are statically rewritten to (a slightly more complicated version of):
+ @hl.scala
+ val <ident> = Value("<ident>")
+ val <ident> = Value(<num>, "<ident>")
+ @p
+ Note that this also includes calls like
+ @hl.scala
+ val A, B, C, D = Value
+ @p
+ since they are desugared into separate @code{val} definitions.
+ @p
+ Calls to either of these two methods which could not be rewritten, or calls to constructors of the protected @code{Val} class without an explicit name as parameter, will issue a warning.
+ @p
+ Note that the name rewriting honors the @code{nextName} iterator. Therefore, the full rewrite is:
+ @hl.scala
+ val <ident> = Value(
+ if (nextName != null && nextName.hasNext)
+ nextName.next()
+ else
+ "<ident>"
+ )
+ @p
+ We believe that this covers most use cases of @code{scala.Enumeration}. Please let us know if another (generalized) rewrite would make your life easier.
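+ @p
+ As a short, hypothetical sketch of the rules above:
+ @hl.scala
+   object Weekday extends Enumeration {
+     val Monday, Tuesday, Wednesday = Value   // rewritten to Value("Monday"), Value("Tuesday"), ...
+     val Sunday = Value(7)                    // rewritten to Value(7, "Sunday")
+   }
+
+   object Status extends Enumeration {
+     // this call to Value is not of a supported shape, so it cannot be
+     // rewritten and Scala.js issues a warning
+     private def mk() = Value
+     val Ok = mk()
+   }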
+
+@sect{Library Differences}
+ @val myTable = Seq(
+ ("Most of java.lang.*", "java.lang.Thread, java.lang.Runtime, ..."),
+ ("Almost all of scala.*", "scala.collection.parallel, scala.tools.nsc"),
+ ("Some of java.util.*", "org.omg.CORBA, sun.misc.*"),
+ ("Macros: uPickle, Scala-Async, Scalaxy, etc", "Reflection: Scala-Pickling, Scala-Reflect"),
+ ("Shapeless, Scalaz, Scalatags, uTest", "Scalatest, Scalate"),
+ ("XMLHttpRequest, Websockets. Localstorage", "Netty, Akka, Spray, File IO, JNI"),
+ ("HTML DOM, Canvas, WebGL", "AWT, Swing, SWT, OpenGL"),
+ ("Chipmunk.js, Hand.js, React.js, jQuery", "Guice, JUnit, Apache-Commons, log4j"),
+ ("IntelliJ, Eclipse, SBT, Chrome console, Firebug", "Scala REPL, Yourkit, VisualVM, JProfiler")
+ )
+
+ @p
+ Scala.js differs from Scala-JVM not just in the corner-cases of the language, but also in the libraries available. Scala-JVM has access to JVM APIs and the wealth of the Java libraries, while Scala.js has access to Javascript APIs and Javascript libraries. It's also possible to write pure-Scala libraries that run on both Scala.js and Scala-JVM, as detailed @a("here").
+ @p
+ This table gives a quick overview of the sorts of libraries you can and can't use when working on Scala.js:
+
+ @val tableHead = pureTable(th("Can Use"), th("Can't Use"))
+
+ @tableHead
+ @for(tuple <- myTable)
+ @tr
+ @td{@tuple._1}@td{@tuple._2}
+
+ @p
+ We'll go into each section bit by bit.
+
+ @sect{Standard Library}
+ @tableHead
+ @for(tuple <- myTable.slice(0, 3))
+ @tr
+ @td{@tuple._1}@td{@tuple._2}
+
+ @p
+ You can use more-or-less the whole Scala standard library in Scala.js, sans some more esoteric components like the parallel collections or the tools. Furthermore, we've ported some subset of the Java standard library that many common Scala libraries depend on, including most of @hl.scala{java.lang.*} and some of @hl.scala{java.util.*}.
+ @p
+ This isn't a full list of the standard library APIs which are available from Scala.js, but it should be enough to give you a rough idea of what is supported. The full list of Java classes that have been ported to Scala.js is available under @sect.ref{Available Java APIs}.
+
+ @sect{Macros v.s. Reflection}
+ @tableHead
+ @for(tuple <- myTable.slice(3, 4))
+ @tr
+ @td{@tuple._1}@td{@tuple._2}
+
+ @p
+ As described @sect.ref("Why No Reflection?", "here"), Reflection is not supported in Scala.js, due to the way it inhibits optimization. This doesn't just mean you can't use reflection yourself: many third-party libraries also use reflection, and you won't be able to use them either.
+
+ @p
+ On the other hand, Scala.js does support Macros, and macros can substitute for many of the use cases that people have traditionally used reflection for (see @sect.ref("Macros", "here")). For example, instead of using a reflection-based serialization library like @lnk.github.scalaPickling, you can use a macro-based library such as @lnk.github.uPickle.
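+ @p
+ As a sketch of what the macro-based approach looks like (the exact import path and derivation API depend on the version of uPickle you use):
+ @hl.scala
+   import upickle.default._
+
+   case class Person(name: String, age: Int)
+   object Person {
+     // Serializer derived by a macro at compile time: no runtime reflection,
+     // so the Scala.js optimizer can still see exactly which code is used
+     implicit val rw: ReadWriter[Person] = macroRW
+   }
+
+   val json   = write(Person("Alice", 30))   // {"name":"Alice","age":30}
+   val person = read[Person](json)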
+
+ @sect{Pure-Scala v.s. Java Libraries}
+ @tableHead
+ @for(tuple <- myTable.slice(4, 5))
+ @tr
+ @td{@tuple._1}@td{@tuple._2}
+ @p
+ Scala.js has access to any pure-Scala libraries that you have cross-compiled to Scala.js, and cross-compiling a pure-Scala library with no dependencies is straightforward. Many of them, such as the ones listed above, have already been cross-compiled and can be used via their maven coordinates.
+ @p
+ You cannot use any libraries which have a Java dependency. This means libraries like @lnk.misc.ScalaTest or @lnk.misc.Scalate, which depend on a number of external Java libraries or source files, cannot be used from Scala.js. You can only use libraries which have no dependency on Java libraries or sources.
+
+ @sect{Javascript APIs v.s. JVM APIs}
+ @tableHead
+ @for(tuple <- myTable.slice(5, 7))
+ @tr
+ @td{@tuple._1}@td{@tuple._2}
+
+ @p
+ Apart from depending on Java sources, the other thing that you can't use in Scala.js is JVM-specific APIs. This means that anything which goes down to the underlying operating system, filesystem, GUI or network is unavailable in Scala.js. This makes sense when you consider that these capabilities are not provided by the browser which Scala.js runs in, and it's impossible to re-implement them ourselves.
+ @p
+ In exchange for this, Scala.js provides you access to Browser APIs that do related things. Although you can't set up an HTTP server to take in-bound requests, you can make out-bound requests to other servers using @lnk.dom.XMLHttpRequest. You can't write to the filesystem or databases directly, but you can write to the @hl.scala{dom.localStorage} provided by the browser. You can't use Swing or AWT or OpenGL, but instead work with the DOM, Canvas and WebGL.
+ @p
+ Naturally, none of these are an exact replacement, as the browser environment is fundamentally different from that of a desktop application running on the JVM. Nonetheless, there are many analogues, and if so desired you can write code to abstract away these differences and run on both Scala.js and Scala-JVM.
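+ @p
+ For a flavor of what the browser-side equivalents look like, here is a small sketch using the scala-js-dom facade (the URL and storage key are made up for the example):
+ @hl.scala
+   import org.scalajs.dom
+
+   // Out-bound HTTP request through the browser, rather than a JVM HTTP client
+   val xhr = new dom.XMLHttpRequest()
+   xhr.open("GET", "/api/items")
+   xhr.onload = { (e: dom.Event) =>
+     if (xhr.status == 200) dom.console.log(xhr.responseText)
+   }
+   xhr.send()
+
+   // Small-scale persistence in the browser, rather than files or a database
+   dom.localStorage.setItem("lastVisited", "/api/items")
+   val last = dom.localStorage.getItem("lastVisited")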
+
+
+ @sect{Scala/Browser tooling v.s. Java tooling}
+ @tableHead
+ @for(tuple <- myTable.slice(7, 8))
+ @tr
+ @td{@tuple._1}@td{@tuple._2}
+
+
+ @p
+ Lastly, there is the matter of tools. Naturally, all the Scala tools which depend on the JVM are out. This means things like the @lnk("Yourkit", "http://www.yourkit.com/"), @lnk("VisualVM", "http://visualvm.java.net/") and @lnk("JProfiler", "https://www.ej-technologies.com/products/jprofiler/overview.html") profilers, as well as things like the Scala command-line REPL, which relies on classloaders and other such things to run on the JVM.
+ @p
+ On the other hand, you do get to keep and continue using many tools which are built for Scala but JVM-agnostic. For example, IDEs such as @lnk.misc.IntelliJ and @lnk.misc.Eclipse work great with Scala.js; from their point of view, it's just Scala, and things like code-navigation, refactoring and error-highlighting all work out of the box. SBT works with Scala.js too, and you see the same compile errors on the command line as you would in vanilla Scala, and even things like incremental compilation work unchanged.
+ @p
+ Lastly, you gain access to browser tools that don't work with normal Scala: you can use the Chrome or Firefox consoles to poke at your Scala.js application from the command line, or their profilers/debuggers. With source maps set up, you can even step-through debug your Scala.js application directly in Chrome.