author     Li Haoyi <haoyi.sg@gmail.com>   2019-11-03 20:00:23 +0800
committer  Li Haoyi <haoyi.sg@gmail.com>   2019-11-03 22:24:44 +0800
commit     3eefc92d20d1b705374fdd2cce46b2176c65b6eb (patch)
tree       860485f4b8f41f483beafb13f63a9fcb88a79cef
parent     57c550990c66845307f77c9cdb44c13d06a7a5c1 (diff)
tweak docs
-rw-r--r--  docs/pages/3 - About Cask.md   | 12
-rw-r--r--  docs/pages/4 - Cask Actors.md  | 30
2 files changed, 27 insertions, 15 deletions
diff --git a/docs/pages/3 - About Cask.md b/docs/pages/3 - About Cask.md
index 8e5e5f3..42f01b6 100644
--- a/docs/pages/3 - About Cask.md
+++ b/docs/pages/3 - About Cask.md
@@ -34,8 +34,8 @@ logic they need to function. This has several benefits:
- You can jump to the definition of an annotation and see what it does
- It is trivial to implement your own annotations as
- [decorators](/using-cask#extending-endpoints-with-decorators) or
- [endpoints](/using-cask#custom-endpoints).
+ [decorators](/cask#extending-endpoints-with-decorators) or
+ [endpoints](/cask#custom-endpoints).
- Stacking multiple annotations on a single function has a well-defined contract
and semantics
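Since routes are plain annotated methods, a minimal sketch of what this looks like in practice (the object name and route paths below are made up for illustration):

```scala
object MinimalRoutes extends cask.MainRoutes {
  // Each route is an ordinary method; @cask.get is a plain Scala class you
  // can jump to, read, or swap out for your own decorator or endpoint.
  @cask.get("/")
  def hello() = "Hello World!"

  // Path segments like :userName are bound directly to method parameters
  @cask.get("/user/:userName")
  def showUserProfile(userName: String) = s"User $userName"

  initialize()
}
```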
@@ -60,7 +60,7 @@ While these features all are valuable in specific cases, Cask aims for the 99%
of code for which simple, boring code is perfectly fine. Cask's endpoints are
synchronous by default, do not tie you to any underlying concurrency model, and
should "just work" without any advanced knowledge apart from basic Scala and
-HTTP. Cask's [websockets](/using-cask#websockets) API is intentionally low-level, making it
+HTTP. Cask's [websockets](/cask#websockets) API is intentionally low-level, making it
both simple to use and also simple to build on top of if you want to wrap it in
your own concurrency-library-of-choice.
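As a rough sketch of that low-level websocket API, following the shape of the example in the Cask docs (the route path here is made up), an echo endpoint might look like:

```scala
object EchoWebsockets extends cask.MainRoutes {
  // Low-level websocket endpoint: you receive frames as events and push
  // replies on the channel yourself, with no concurrency framework imposed.
  @cask.websocket("/echo")
  def echo(): cask.WebsocketResult = cask.WsHandler { channel =>
    cask.WsActor {
      case cask.Ws.Text("")   => channel.send(cask.Ws.Close())
      case cask.Ws.Text(data) => channel.send(cask.Ws.Text(data.toUpperCase))
    }
  }
  initialize()
}
```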
@@ -69,9 +69,9 @@ your own concurrency-library-of-choice.
Cask is implemented as a thin wrapper around the excellent Undertow HTTP server.
If you need more advanced functionality, Cask lets you ask for the `exchange:
HttpServerExchange` in your endpoint, override
-[defaultHandler](/using-cask#def-defaulthandler) and add your own Undertow handlers next to
+[defaultHandler](/cask#def-defaulthandler) and add your own Undertow handlers next to
Cask's and avoid Cask's routing/endpoint system altogether, or override
-[main](/using-cask#def-main) if you want to change how the server is initialized.
+[main](/cask#def-main) if you want to change how the server is initialized.
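For example, one way to get at the raw exchange is through the `cask.Request` wrapper; a hedged sketch (route name made up, and assuming `cask.Request` exposes the underlying exchange as the text above describes):

```scala
import io.undertow.server.HttpServerExchange

object LowLevelRoutes extends cask.MainRoutes {
  @cask.get("/raw")
  def raw(request: cask.Request) = {
    // Drop down to the raw Undertow exchange when Cask's high-level API
    // doesn't expose what you need
    val exchange: HttpServerExchange = request.exchange
    Option(exchange.getRequestHeaders.getFirst("User-Agent"))
      .getOrElse("no user agent")
  }
  initialize()
}
```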
Rather than trying to provide APIs for all conceivable functionality, Cask
simply provides what it does best - simple routing for simple endpoints - and
@@ -91,4 +91,4 @@ trivial to pull in libraries like
Each of these are stable, well-known, well-documented libraries you may already
be familiar with, and Cask simply provides the HTTP/routing layer with the hooks
necessary to tie everything together (e.g. into a
-[TodoMVC](/using-cask#todomvc-full-stack-web) webapp)
\ No newline at end of file
+[TodoMVC](/cask#todomvc-full-stack-web) webapp)
\ No newline at end of file
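One such library is ScalaTags for HTML templating; a minimal sketch, assuming Cask's built-in support for returning ScalaTags fragments:

```scala
import scalatags.Text.all._

object HtmlRoutes extends cask.MainRoutes {
  // Returning a ScalaTags fragment; Cask renders it as an HTML response
  @cask.get("/")
  def index() = doctype("html")(
    html(body(h1("Todo"), p("rendered with ScalaTags")))
  )
  initialize()
}
```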
diff --git a/docs/pages/4 - Cask Actors.md b/docs/pages/4 - Cask Actors.md
index 440200b..abf2b90 100644
--- a/docs/pages/4 - Cask Actors.md
+++ b/docs/pages/4 - Cask Actors.md
@@ -13,6 +13,11 @@ ivy"com.lihaoyi::cask-actor:0.2.9"
"com.lihaoyi" %% "cask-actor" % "0.2.9"
```
+Cask Actors are a much more lightweight solution than a full-fledged framework
+like Akka: Cask Actors do not support any sort of distribution or clustering,
+and run entirely within a single process. Cask Actors are garbage collectible,
+and you do not need to manually terminate them or manage their lifecycle.
+
## A Logger Actor
Here is a small demonstration of using a `cask.actor.SimpleActor` to perform
@@ -58,11 +63,6 @@ os.read.lines(oldPath) ==> Seq("Comes from liquids from my udder")
os.read.lines(logPath) ==> Seq("I am cow, I am cow", "Hear me moo, moooo")
```
-All cask actors require a `cask.actor.Context`, which is an extended
-`scala.concurrent.ExecutionContext`. Here we are using `Context.Test`, which
-also provides the handy `waitForInactivity()` method which blocks until all
-asynchronous actor processing has completed.
-
In the above example, we are defining a single `Logger` actor class, which we
are instantiating once as `val logger`. We can now send as many messages as we
want via `logger.send`: while the processing of a message may take some time
@@ -70,14 +70,22 @@ want via `logger.send`: while the processing of a message may take some time
[log-rotation](https://en.wikipedia.org/wiki/Log_rotation) to avoid the logfile
growing in size forever) the fact that it's in a separate actor means the
processing happens in the background without slowing down the main logic of your
-program. This is ideal for scenarios where the dataflow is one way: e.g. when
+program. Cask Actors process messages one at a time, so by putting the file
+write-and-rotate logic inside an Actor we can be sure to avoid race conditions
+that may arise due to multiple threads mangling the same file at once.
+
+Using Actors is ideal for scenarios where the dataflow is one way: e.g. when
logging, you only write logs, and never need to wait for the results of
processing them.
+All cask actors require a `cask.actor.Context`, which is an extended
+`scala.concurrent.ExecutionContext`. Here we are using `Context.Test`, which
+also provides the handy `waitForInactivity()` method which blocks until all
+asynchronous actor processing has completed.
+
Note that `logger.send` is thread-safe: multiple threads can be sending logging
messages to the `logger` at once, and the `.send` method will make sure the
-messages are properly queued up and executed. At no point will a thread calling
-`.send` end up blocking another thread from executing.
+messages are properly queued up and executed one at a time.
## Strawman: Synchronized Logging
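Putting the hunks above together, a rough sketch of such a logger actor, assuming the `cask.actor` API named in the text (`SimpleActor`, `Context.Test`, `waitForInactivity()`; the log path is made up for illustration):

```scala
import cask.actor.{Context, SimpleActor}

object LoggerSketch {
  def main(args: Array[String]): Unit = {
    implicit val ctx = new Context.Test()

    // Processes one message at a time, so file writes never race each other
    class Logger(logPath: os.Path) extends SimpleActor[String] {
      def run(msg: String): Unit = os.write.append(logPath, msg + "\n")
    }

    val logger = new Logger(os.pwd / "log.txt")

    // send is thread-safe: messages are queued and handled in the background
    logger.send("I am cow")
    logger.send("hear me moo")

    ctx.waitForInactivity() // block until all queued messages are processed
  }
}
```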
@@ -197,4 +205,8 @@ both of these can run in parallel as well as in parallel with the main logic. By
constructing our data processing flows using Actors, we can take advantage of
pipeline parallelism to distribute the processing over multiple threads and CPU
cores, so adding steps to the pipeline neither slows it down nor does it slow
-down the execution of the main program.
\ No newline at end of file
+down the execution of the main program.
+
+You can imagine adding additional stages to this actor pipeline, to perform
+other sorts of processing, and have those additional stages running in parallel
+as well.
\ No newline at end of file
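A rough sketch of such a pipeline, again assuming the `cask.actor` API above: an upstream actor does its own work and forwards the result to a downstream actor, so both stages run in parallel with each other and with the main program (the stage names below are made up for illustration):

```scala
import cask.actor.{Context, SimpleActor}

// Stage 2: writes already-encoded lines to disk
class DiskWriter(logPath: os.Path)(implicit ac: Context) extends SimpleActor[String] {
  def run(msg: String): Unit = os.write.append(logPath, msg + "\n")
}

// Stage 1: encodes each message, then hands it off to stage 2
class Base64Encoder(downstream: DiskWriter)(implicit ac: Context) extends SimpleActor[String] {
  def run(msg: String): Unit = {
    val encoded = java.util.Base64.getEncoder.encodeToString(msg.getBytes)
    downstream.send(encoded)
  }
}
```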