Diffstat (limited to 'site/src/main/jekyll/core')
-rw-r--r-- | site/src/main/jekyll/core/index.md | 39
-rw-r--r-- | site/src/main/jekyll/core/metrics/basics.md | 93
-rw-r--r-- | site/src/main/jekyll/core/tracing/basics.md | 101
-rw-r--r-- | site/src/main/jekyll/core/tracing/logging.md | 43
4 files changed, 0 insertions, 276 deletions
diff --git a/site/src/main/jekyll/core/index.md b/site/src/main/jekyll/core/index.md
deleted file mode 100644
index 5d19387c..00000000
--- a/site/src/main/jekyll/core/index.md
+++ /dev/null
@@ -1,39 +0,0 @@
----
-title: Kamon | Core | Documentation
-layout: default
----
-
-At its core, Kamon aims to provide three basic functionalities: metrics, message tracing and a subscription protocol
-to periodically get that valuable information out of Kamon and do whatever you want with it. Typically you would push
-your data into a metrics repository that may provide you a dashboard and historical metrics analytics, but the door
-is open for you to be creative; it is your data anyway.
-
-
-Metrics
--------
-
-
-
-
-Traces
-------
-
-In Kamon, a trace is a group of events, related to each other, which together form a meaningful piece of functionality
-for a given application. For example, if in order to fulfill a `GET` request to the `/users/kamon` resource, an application
-sends a message to an actor, which reads the user data from a database and sends a message back with the user response to
-finish the request, all those interactions would be considered part of the same `TraceContext`.
-
-Back in the day tracing used to be simpler: if you create a thread per request and manage everything related to that request
-in that single thread, then using a `ThreadLocal` you can store all the valuable information you want about that request and
-flush it all when the request is fulfilled. Sounds easy, right? Hold on to that thought; we will disprove it soon.
-
-When developing reactive applications on top of Akka, the perspective of a trace changes from thread-local to event-local.
-If the system described above were to handle a hundred clients requesting user details, you might have a handful
-of database access actors handling those requests.
The load might be distributed across those actors, and within each actor
-some messages will be processed in the same thread; then the dispatcher might schedule the actor to run in a different thread.
-But even when many messages are processed in the same thread, they are likely to be completely unrelated.
-
-In order to cope with this situation, Kamon provides the notion of a `TraceContext` to group all related events and
-collect the information we need about them. Once a `TraceContext` is created, Kamon will propagate it when new events are
-generated within the context, and once the `TraceContext` is finished, all the gathered information is flushed.
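
The thread-per-request approach described above can be sketched as follows. This is a minimal, self-contained illustration with hypothetical names, not Kamon's API: a `ThreadLocal` accumulates everything recorded for the request and is flushed when the request finishes.

```scala
// Hypothetical sketch of thread-per-request tracing, NOT Kamon's API:
// a ThreadLocal accumulates events recorded anywhere on the request's
// thread, and everything is flushed when the request completes.
object ThreadLocalTracing {
  private val events = new ThreadLocal[List[String]] {
    override def initialValue(): List[String] = Nil
  }

  def record(event: String): Unit =
    events.set(event :: events.get())

  // Run the request body on the current thread, then flush what was recorded.
  def traceRequest[T](body: => T): (T, List[String]) = {
    events.set(Nil)
    val result = body
    val flushed = events.get().reverse
    events.remove() // avoid leaking state into the next request on this thread
    (result, flushed)
  }
}

val (_, flushedEvents) = ThreadLocalTracing.traceRequest {
  ThreadLocalTracing.record("read user from database")
  ThreadLocalTracing.record("render response")
}
// flushedEvents: List("read user from database", "render response")
```

This works only while one request owns one thread. With Akka's shared dispatcher threads, events from unrelated messages would be mixed into the same `ThreadLocal`, which is exactly why Kamon scopes the `TraceContext` to individual events instead.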
\ No newline at end of file
diff --git a/site/src/main/jekyll/core/metrics/basics.md b/site/src/main/jekyll/core/metrics/basics.md
deleted file mode 100644
index 991ddf26..00000000
--- a/site/src/main/jekyll/core/metrics/basics.md
+++ /dev/null
@@ -1,93 +0,0 @@
----
-title: Kamon | Core | Documentation
-layout: documentation
----
-
-Metrics
-=======
-
-Some intro about metrics
-
-Philosophy
-----------
-
-Back in the day, the most common approach to getting metrics out of an Akka/Spray application for production monitoring was
-manual instrumentation: select your favorite metrics collection library, wrap your messages with some useful
-metadata, wrap your actor's receive function with some metrics-measuring code and, finally, push that metrics data out
-to somewhere you can keep it, graph it and analyze it whenever you want.
-
-Each metrics collection library has its own strengths and weaknesses, and each developer has to choose wisely according
-to the requirements at hand, leading them down different paths as they progress with their applications. Each
-path has different implications with regard to introduced overhead and latency, metrics data accuracy and memory
-consumption. Kamon takes this responsibility away from the developer and tries to make the best choice, providing high
-performance metrics collection instruments while keeping the inherent overhead as low as possible.
-
-Kamon tries to select the best possible approach, so you don't have to.
-
-
-Metrics Collection and Flushing
--------------------------------
-
-All the metrics infrastructure in Kamon was designed around two concepts: collection and flushing. Metrics collection
-happens in real time, as soon as the information is available for recording. Let's see a simple example: as soon as
-an actor finishes processing a message, Kamon knows the elapsed time for processing that specific message and it is
-recorded right away.
If you have millions of messages passing through your system, then millions of measurements will be
-taken.
-
-Flushing happens recurrently, after a fixed amount of time has passed: a tick. Upon each tick, Kamon will collect all
-measurements recorded since the last tick, flush the collected data and reset all the instruments to zero. Let's explore
-a little bit more how these two concepts are modeled inside Kamon.
-
-<img class="img-responsive" src="/assets/img/diagrams/metric-collection-concepts.png">
-
-A metric group contains various individual metrics that are related to the same entity. For example, if the entity we
-are talking about is an actor, the metrics related to processing time, mailbox size and time in mailbox for that
-specific actor are grouped inside a single metric group, and each actor gets its own metric group. As you might gather
-from the diagram above, on the left we have the mutable side of the process that is constantly recording measurements as
-the events flow through your application, and on the right we have the immutable side, containing snapshots representing
-all the measurements taken during a specific period of time for a metric group.
-
-
-Filtering Entities
-------------------
-
-By default Kamon will not include any entity for metrics collection, and you will need to explicitly include all the
-entities you are interested in, be it an actor, a trace, a dispatcher or any other entity monitored by Kamon. The
-`kamon.metrics.filters` key in your application's configuration controls which entities must be included in or excluded from
-the metrics collection infrastructure. Includes and excludes are provided as lists of strings containing the
-corresponding glob patterns for each group, and the logic behind them is simple: include everything that matches at least one
-`includes` pattern and does not match any of the `excludes` patterns.
The following configuration sample includes
-the `user/job-manager` actor and all the worker actors, but leaves out all system actors and the `user/worker-helper`
-actor.
-
-```
-kamon {
-  metrics {
-    filters = [
-      {
-        actor {
-          includes = [ "user/job-manager", "user/worker-*" ]
-          excludes = [ "system/*", "user/worker-helper" ]
-        }
-      },
-      {
-        trace {
-          includes = [ "*" ]
-          excludes = []
-        }
-      }
-    ]
-  }
-}
-```
-
-Instruments
------------
-
-Talk about how HDR Histogram works and how we use it.
-
-
-Subscription protocol
----------------------
-
-Explain how to subscribe for metrics data and provide a simple example.
diff --git a/site/src/main/jekyll/core/tracing/basics.md b/site/src/main/jekyll/core/tracing/basics.md
deleted file mode 100644
index c5df29a9..00000000
--- a/site/src/main/jekyll/core/tracing/basics.md
+++ /dev/null
@@ -1,101 +0,0 @@
----
-title: Kamon | Core | Documentation
-layout: documentation
----
-
-Traces
-======
-
-A trace is a story, told by some events in your application, that explains how the execution of a particular portion of
-functionality went during a single invocation. For example, if in order to fulfill a `GET` request to the `/users/kamon`
-resource, an application sends a message to an actor, which reads the user data from a database and sends a message back
-with the user information to finish the request, all those interactions would be considered part of the same trace.
-
-If the application described above were to handle a hundred clients requesting user details, there might be a
-handful of database access actors handling those requests.
When the dispatcher gives an actor some time to execute, it
-will process as many messages as possible (as per the dispatcher configuration) before the thread is taken away from it, but
-during that time it is incorrect to say that either the actor or the thread is tied to a trace, because each message
-might come from a different source which is probably waiting for a different response.
-
-Back in the day tracing used to be simpler: if you use a single dedicated `Thread` during the execution of a request and
-everything related to that request happens in that single `Thread`, then you can use a `ThreadLocal` to store all the
-valuable information you want about the processing of that request from anywhere in the codebase and flush it all when
-the request is finished. Sounds easy, right? Hold on to that thought; it isn't that easy in the reactive world!
-
-When developing reactive applications on top of Akka, the perspective of a trace changes from "thread local" to "event
-local", and in order to cope with this Kamon provides the notion of a `TraceContext` to group all related events and
-collect the information we need about them. Once a `TraceContext` is created, Kamon will propagate it automatically
-under specific conditions, and once the `TraceContext` is finished, all the gathered information is flushed. The
-`TraceContext` is effectively stored in a `ThreadLocal`, but only during the processing of certain specific events, and
-then it is cleared out to avoid propagating it to unrelated events.
-
-
-Starting a `TraceContext`
--------------------------
-
-The `TraceRecorder` companion object provides a simple API to create, propagate and finish a `TraceContext`. To start a
-new context, use the `TraceRecorder.withNewTraceContext(..)` method. Let's dig into this with a simple example:
-
-Suppose you want to trace a process that involves a couple of actors, and you want to make sure all related events become
-part of the same `TraceContext`.
Our actors might look like this:
-
-{% include_code kamon/docs/trace/SimpleContextPropagation.scala start:47 end:63 linenos:false %}
-
-This code should feel familiar; there is nothing new there. Let's spawn an `UpperCaser` actor, send it five
-string messages and see the output in the log file:
-
-```
-22:24:07.197 INFO [undefined][akka://simple-context-propagation/user/upper-caser] Upper casing [Hello without context]
-22:24:07.198 INFO [undefined][akka://simple-context-propagation/user/upper-caser] Upper casing [Hello without context]
-22:24:07.198 INFO [undefined][akka://simple-context-propagation/user/upper-caser/length-calculator] Calculating the length of: [HELLO WITHOUT CONTEXT]
-22:24:07.199 INFO [undefined][akka://simple-context-propagation/user/upper-caser] Upper casing [Hello without context]
-22:24:07.200 INFO [undefined][akka://simple-context-propagation/user/upper-caser] Upper casing [Hello without context]
-22:24:07.200 INFO [undefined][akka://simple-context-propagation/user/upper-caser] Upper casing [Hello without context]
-22:24:07.204 INFO [undefined][akka://simple-context-propagation/user/upper-caser/length-calculator] Calculating the length of: [HELLO WITHOUT CONTEXT]
-22:24:07.205 INFO [undefined][akka://simple-context-propagation/user/upper-caser/length-calculator] Calculating the length of: [HELLO WITHOUT CONTEXT]
-22:24:07.205 INFO [undefined][akka://simple-context-propagation/user/upper-caser/length-calculator] Calculating the length of: [HELLO WITHOUT CONTEXT]
-22:24:07.206 INFO [undefined][akka://simple-context-propagation/user/upper-caser/length-calculator] Calculating the length of: [HELLO WITHOUT CONTEXT]
-```
-
-Can you tell which log statement from `LengthCalculator` corresponds to each log statement from `UpperCaser`? It seems
-easy to figure out manually in this case, but as the number of events happening concurrently in your app grows, it
-becomes harder to answer that question without some extra help.
Let's see how Kamon can help us in this situation:
-
-{% include_code kamon/docs/trace/SimpleContextPropagation.scala start:38 end:40 linenos:false %}
-
-When using the `TraceRecorder.withNewTraceContext(..)` method, Kamon will create a new `TraceContext` and make it
-available during the execution of the piece of code passed as an argument. This time, we are sending a message to an actor,
-which happens to be one of the situations under which Kamon will automatically propagate a `TraceContext`, so we can
-expect the current context to be available to the actor when processing the message we just sent, and
-<strong>only</strong> when processing that message. Let's repeat the exercise of sending five messages to this actor,
-now doing it with a new `TraceContext` each time, and look at the log:
-
-```
-22:24:07.223 INFO [Ivan-Topolnjaks-MacBook-Pro.local-1][akka://simple-context-propagation/user/upper-caser] Upper casing [Hello World with TraceContext]
-22:24:07.224 INFO [Ivan-Topolnjaks-MacBook-Pro.local-2][akka://simple-context-propagation/user/upper-caser] Upper casing [Hello World with TraceContext]
-22:24:07.225 INFO [Ivan-Topolnjaks-MacBook-Pro.local-1][akka://simple-context-propagation/user/upper-caser/length-calculator] Calculating the length of: [HELLO WORLD WITH TRACECONTEXT]
-22:24:07.225 INFO [Ivan-Topolnjaks-MacBook-Pro.local-3][akka://simple-context-propagation/user/upper-caser] Upper casing [Hello World with TraceContext]
-22:24:07.226 INFO [Ivan-Topolnjaks-MacBook-Pro.local-4][akka://simple-context-propagation/user/upper-caser] Upper casing [Hello World with TraceContext]
-22:24:07.227 INFO [Ivan-Topolnjaks-MacBook-Pro.local-5][akka://simple-context-propagation/user/upper-caser] Upper casing [Hello World with TraceContext]
-22:24:07.227 INFO [Ivan-Topolnjaks-MacBook-Pro.local-2][akka://simple-context-propagation/user/upper-caser/length-calculator] Calculating the length of: [HELLO WORLD WITH TRACECONTEXT]
-22:24:07.228 INFO
[Ivan-Topolnjaks-MacBook-Pro.local-3][akka://simple-context-propagation/user/upper-caser/length-calculator] Calculating the length of: [HELLO WORLD WITH TRACECONTEXT]
-22:24:07.228 INFO [Ivan-Topolnjaks-MacBook-Pro.local-4][akka://simple-context-propagation/user/upper-caser/length-calculator] Calculating the length of: [HELLO WORLD WITH TRACECONTEXT]
-22:24:07.229 INFO [Ivan-Topolnjaks-MacBook-Pro.local-5][akka://simple-context-propagation/user/upper-caser/length-calculator] Calculating the length of: [HELLO WORLD WITH TRACECONTEXT]
-```
-
-Can you tell which log statement from `LengthCalculator` corresponds to each log statement from `UpperCaser` now? It
-has become a no-brainer: each `TraceContext` created by Kamon gets a unique token that we are including in the log
-pattern (the first value between square brackets), and with that small but important piece of information the relation
-between the log lines is clear.
-
-Just by logging the trace token you can get a lot of visibility and coherence in the information available in your logs;
-please refer to the [logging](../logging/) section to learn how to include the trace token in your logs.
-
-
-Rules for `TraceContext` Propagation
-------------------------------------
-
-* Actor creation.
-* Sending messages to an actor.
-* Using `ActorLogging`.
-* Creating and transforming a `Future`.
diff --git a/site/src/main/jekyll/core/tracing/logging.md b/site/src/main/jekyll/core/tracing/logging.md
deleted file mode 100644
index 7d2251ef..00000000
--- a/site/src/main/jekyll/core/tracing/logging.md
+++ /dev/null
@@ -1,43 +0,0 @@
----
-title: Kamon | Core | Documentation
-layout: documentation
----
-
-Logging
-=======
-
-Kamon provides a very simple way to make sure that the trace token available when a log statement was executed is
-included in your logs, no matter whether you are logging synchronously or asynchronously. Kamon provides built-in support
Kamon provides built in support -for logging with Logback, but extending the support to any other logging framework should be a trivial task. - -When using `ActorLogging` all logging events are sent to your actor system's event stream and then picked up by your -registered listeners for actual logging. Akka captures the actor, thread and timestamp from the instant in which the -event was generated and makes that info available when performing the actual logging. As an addition to this, Kamon -attaches the `TraceContext` that is present when creating the log events and makes it available when the actual logging -is performed. If you are using the loggers directly then the `TraceContext` should be already available. - -`TraceRecorder.currentContext` gives you access to the currently `TraceContext`, so the following expression gives you -the trace token for the currently available context: - -```scala -TraceRecorder.currentContext.map(_.token) -``` - -Kamon already packs a Logback converter that you can register in your `logback.xml` file and use in your logging -patterns as show bellow: - -```xml -<configuration scan="true"> - <conversionRule conversionWord="traceToken" converterClass="kamon.trace.logging.LogbackTraceTokenConverter" /> - <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender"> - <encoder> - <pattern>%date{HH:mm:ss.SSS} %-5level [%traceToken][%X{akkaSource}] %msg%n</pattern> - </encoder> - </appender> - - <root level="debug"> - <appender-ref ref="STDOUT" /> - </root> - -</configuration> -``` |