\ No newline at end of file
diff --git a/site/src/main/jekyll/_includes/navigation-bar.html b/site/src/main/jekyll/_includes/navigation-bar.html
index a2ffb41e..cdc2a2d6 100644
--- a/site/src/main/jekyll/_includes/navigation-bar.html
+++ b/site/src/main/jekyll/_includes/navigation-bar.html
@@ -1,51 +1,73 @@
+
+
+
+
+
+
+
+
+ {% include google-analytics.html %}
+
+
+
diff --git a/site/src/main/jekyll/_posts/2014-03-17-kamon-meets-the-world.md b/site/src/main/jekyll/_posts/2014-03-17-kamon-meets-the-world.md
index e8d76651..68b315b3 100644
--- a/site/src/main/jekyll/_posts/2014-03-17-kamon-meets-the-world.md
+++ b/site/src/main/jekyll/_posts/2014-03-17-kamon-meets-the-world.md
@@ -27,5 +27,7 @@ to be happy with it, we hope now that you can try Kamon and let us know what you
We are currently short on documentation, but feel free to ask anything you need through the mailing list! more docs are
in the oven.
-So, what are you waiting for? go and learn about [tracing in Kamon](/core/tracing/) and [get started](/get-started/)
-right now!
+So, what are you waiting for? Go and learn about [tracing in Kamon] and [get started] right now!
+
+[tracing in Kamon]: /core/tracing/basics/
+[get started]: /introduction/get-started/
\ No newline at end of file
diff --git a/site/src/main/jekyll/_posts/2014-04-24-kamon-for-akka-2-3-is-now-available.md b/site/src/main/jekyll/_posts/2014-04-24-kamon-for-akka-2-3-is-now-available.md
index 29911f92..08b8638a 100644
--- a/site/src/main/jekyll/_posts/2014-04-24-kamon-for-akka-2-3-is-now-available.md
+++ b/site/src/main/jekyll/_posts/2014-04-24-kamon-for-akka-2-3-is-now-available.md
@@ -15,5 +15,6 @@ our releases will come in pairs and aligned with the following Akka versions:
* 0.2.x releases are compatible with Akka 2.2, Spray 1.2 and Play 2.2.
The 0.3.0/0.2.0 releases contain exactly the same feature set as our 0.0.15 release, we just made the necessary changes
-to make it compatible with Akka 2.3. If you were waiting for this release, then go and [get started](/get-started) right
-away!
\ No newline at end of file
+to make it compatible with Akka 2.3. If you were waiting for this release, then go and [get started] right away!
+
+[get started]: /introduction/get-started/
\ No newline at end of file
diff --git a/site/src/main/jekyll/_posts/2014-04-27-get-started-quicker-with-our-docker-image.md b/site/src/main/jekyll/_posts/2014-04-27-get-started-quicker-with-our-docker-image.md
index f75628c6..853eb75a 100644
--- a/site/src/main/jekyll/_posts/2014-04-27-get-started-quicker-with-our-docker-image.md
+++ b/site/src/main/jekyll/_posts/2014-04-27-get-started-quicker-with-our-docker-image.md
@@ -8,7 +8,7 @@ tags: announcement
We are very excited to see people adopting Kamon as their monitoring tool for reactive applications and, of course, we
want to keep growing both in users base and features. According to our site metrics, the most visited section is the one
-describing our [StatsD module](/statsd/), that made us think, what can we do to make it easier for people to get started
+describing our [StatsD module]. That made us think: what can we do to make it easier for people to get started
with Kamon and StatsD?, well, that's an easy question to answer: build a package containing all the required
infrastructure and plumbing, and let the users just focus on what matters to them: their apps and their metrics. That's
why today we are publishing a Docker image with all that you need to get started in a few minutes!
@@ -54,3 +54,6 @@ purpose of making your life easier. This should give you an idea of how the dash
from one of our toy applications:
+
+
+[StatsD module]: /backends/statsd/
\ No newline at end of file
diff --git a/site/src/main/jekyll/acknowledgments.md b/site/src/main/jekyll/acknowledgments.md
deleted file mode 100644
index f75f4e24..00000000
--- a/site/src/main/jekyll/acknowledgments.md
+++ /dev/null
@@ -1,36 +0,0 @@
----
-title: Kamon | Acknowledgments
-layout: default
----
-
-Acknowledgments
-===============
-
-We, the Kamon team, would like to express our gratitude to all the people and companies that help us make Kamon the best
-solution in the metrics collection space for Akka, Spray and Play!. Let's give names and regards to this wonderful
-fellows:
-
-Our contributors
-----------------
-
-Everything starts with an idea, and [this](https://github.com/kamon-io/Kamon/graphs/contributors) guys are helping us
-take that idea and make it a reality. A reality that is helping developers around the world to measure and monitor their
-success with reactive technologies. Kudos to all of you!
-
-
-Our users
----------
-
-It is absolutely rewarding to know that Kamon is useful for people around the world, and it is even better when these
-people come to us looking for help, reporting issues, giving feedback or telling us how smoothly Kamon is monitoring
-their production systems, thanks for using Kamon! keep coming and spread the word :).
-
-
-
-[YourKit, LLC](http://www.yourkit.com)
---------------------------------------
-
-We care a lot about performance and we try hard to keep Kamon's overhead as low as possible, but we couldn't succeed on
-this matter without [YourKit's Java Profiler](http://www.yourkit.com/java/profiler/index.jsp). It is well known to be
-one of the best profilers out there and they have been so kind to support us by providing a open source use license to
-Kamon developers. Thanks YourKit! We highly appreciate your support and commitment to the open source community.
diff --git a/site/src/main/jekyll/akka/index.md b/site/src/main/jekyll/akka/index.md
deleted file mode 100644
index dca297ad..00000000
--- a/site/src/main/jekyll/akka/index.md
+++ /dev/null
@@ -1,97 +0,0 @@
----
-title: kamon | Akka Toolkit | Documentation
-layout: default
----
-
-Akka Module
-===
-
----
-Dependencies
----
-
-Apart from scala library kamon depends on:
-
-- aspectj
-- spray-io
-- akka-actor
-
-
-Installation
----
-Kamon works with SBT, so you need to add Kamon.io repository to your resolvers.
-
-Configuration
----
-Just like other products in the scala ecosystem, it relies on the typesafe configuration library.
-
-Since kamon uses the same configuration technique as [Spray](http://spray.io/documentation "Spray") / [Akka](http://akka.io/docs "Akka") you might want to check out the [Akka-Documentation-configuration](http://doc.akka.io/docs/akka/2.1.4/general/configuration.html "Akka Documentation on configuration")
-.
-
-In order to see Kamon in action you need first to set up your sbt project.
-
-1) Add Kamon repository to resolvers
-
-```scala
-"Kamon Repository" at "http://repo.kamon.io"
-```
-
-2) Add libraryDepenency
-
-```scala
- "kamon" %% "kamon-spray" % "0.0.11",
-```
-
-In addition we suggest to create aspectj.sbt file and add this content
-
-```scala
- import com.typesafe.sbt.SbtAspectj._
-
- aspectjSettings
-
- javaOptions <++= AspectjKeys.weaverOptions in Aspectj
-```
-
-3) Add to your plugins.sbt in project folder (if you don't have one yet, create the file) and add the Kamon release to the resolver and the aspecj.
-
-```scala
- resolvers += Resolver.url("Kamon Releases", url("http://repo.kamon.io"))(Resolver.ivyStylePatterns)
-
- addSbtPlugin("com.typesafe.sbt" % "sbt-aspectj" % "0.9.2")
-```
-**application.conf**
-
-```scala
- akka {
- loggers = ["akka.event.slf4j.Slf4jLogger"]
-
- actor {
- debug {
- unhandled = on
- }
- }
- }
-```
-
-Examples
----
-
-TODO: (to be published) The example will start a spray server with akka and logback configuration. Adjust it to your needs.
-
-Follow the steps in order to clone the repository
-
-1. git clone git://github.com/kamon/kamon.git
-
-2. cd kamon
-
-For the first example run
-
-```bash
- sbt "project kamon-uow-example"
-```
-
-In order to see how it works, you need to send a message to the rest service
-
-```bash
- curl -v --header 'X-UOW:YOUR_TRACER_ID' -X GET 'http://0.0.0.0:6666/fibonacci'
-```
diff --git a/site/src/main/jekyll/assets/css/kamon.css b/site/src/main/jekyll/assets/css/kamon.css
index 7a5577da..cc0da027 100644
--- a/site/src/main/jekyll/assets/css/kamon.css
+++ b/site/src/main/jekyll/assets/css/kamon.css
@@ -73,4 +73,69 @@ img[alt=statsD] { width: 100%; }
width: 700px ;
margin-left: auto ;
margin-right: auto ;
-}
\ No newline at end of file
+}
+
+
+#doc-tree ul {
+ list-style: none outside none;
+ margin-left: 0px;
+ padding-left: 6px;
+}
+
+.divider {
+ *width: 100%;
+ height: 1px;
+ margin: 9px 1px;
+ *margin: -5px 0 5px;
+ overflow: hidden;
+ background-color: #e5e5e5;
+ border-bottom:1px solid #e5e5e5;
+ }
+
+.dropdown-submenu {
+ position: relative;
+}
+
+.dropdown-submenu>.dropdown-menu {
+ top: 0;
+ left: 100%;
+ margin-top: -6px;
+ margin-left: -1px;
+ -webkit-border-radius: 0 6px 6px 6px;
+ -moz-border-radius: 0 6px 6px;
+ border-radius: 0 6px 6px 6px;
+}
+
+.dropdown-submenu:hover>.dropdown-menu {
+ display: block;
+}
+
+.dropdown-submenu>a:after {
+ display: block;
+ content: " ";
+ float: right;
+ width: 0;
+ height: 0;
+ border-color: transparent;
+ border-style: solid;
+ border-width: 5px 0 5px 5px;
+ border-left-color: #ccc;
+ margin-top: 5px;
+ margin-right: -10px;
+}
+
+.dropdown-submenu:hover>a:after {
+ border-left-color: #fff;
+}
+
+.dropdown-submenu.pull-left {
+ float: none;
+}
+
+.dropdown-submenu.pull-left>.dropdown-menu {
+ left: -100%;
+ margin-left: 10px;
+ -webkit-border-radius: 6px 0 6px 6px;
+ -moz-border-radius: 6px 0 6px 6px;
+ border-radius: 6px 0 6px 6px;
+}
diff --git a/site/src/main/jekyll/assets/js/kamon.js b/site/src/main/jekyll/assets/js/kamon.js
new file mode 100644
index 00000000..14a81e36
--- /dev/null
+++ b/site/src/main/jekyll/assets/js/kamon.js
@@ -0,0 +1,5 @@
+$(document).ready(function () {
+ $('label.tree-toggler').click(function () {
+ $(this).parent().children('ul.tree').toggle(300);
+ });
+});
\ No newline at end of file
diff --git a/site/src/main/jekyll/backends/datadog.md b/site/src/main/jekyll/backends/datadog.md
new file mode 100644
index 00000000..f14ec23e
--- /dev/null
+++ b/site/src/main/jekyll/backends/datadog.md
@@ -0,0 +1,88 @@
+---
+title: Kamon | Datadog | Documentation
+layout: documentation
+---
+
+Reporting Metrics to Datadog
+===========================
+
+
+[Datadog] is a monitoring service for IT, Operations and Development teams who write and run applications at scale, and
+want to turn the massive amounts of data produced by their apps, tools and services into actionable insight.
+
+Installation
+------------
+
+To use the Datadog module, just add the `kamon-datadog` dependency to your project and start your application using the
+AspectJ Weaver agent. Please refer to our [get started] page for more info on how to add dependencies to your project
+and start your application with the AspectJ Weaver.
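+
+For reference, the dependency setup can be sketched in sbt form. This is a minimal sketch: the resolver is the Kamon
+repository mentioned elsewhere on this site, and the version (0.3.0) is taken from the changelog, so adjust both to the
+release you actually use:
+
+```scala
+// Kamon repository, as listed elsewhere on this site
+resolvers += "Kamon Repository" at "http://repo.kamon.io"
+
+// version taken from the 0.3.0/0.2.0 changelog entry; adjust as needed
+libraryDependencies += "kamon" %% "kamon-datadog" % "0.3.0"
+```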
+
+
+Configuration
+-------------
+
+First, include the Kamon(Datadog) extension under the `akka.extensions` key of your configuration files as shown here:
+
+```scala
+akka {
+  extensions = ["kamon.datadog.Datadog"]
+}
+```
+
+Then, tune the configuration settings according to your needs. Here is the `reference.conf` that ships with kamon-datadog
+which includes a brief explanation of each setting:
+
+```
+kamon {
+ datadog {
+    # Hostname and port on which your Datadog agent is running. Remember that Datadog packets are sent using UDP;
+    # Kamon won't warn you about unreachable hosts or closed ports, your data just won't go anywhere.
+ hostname = "127.0.0.1"
+ port = 8125
+
+    # Interval between metrics data flushes to Datadog. Its value must be equal to or greater than the
+    # kamon.metrics.tick-interval setting.
+ flush-interval = 1 second
+
+ # Max packet size for UDP metrics data sent to Datadog.
+ max-packet-size = 1024 bytes
+
+ # Subscription patterns used to select which metrics will be pushed to Datadog. Note that first, metrics
+ # collection for your desired entities must be activated under the kamon.metrics.filters settings.
+ includes {
+ actor = [ "*" ]
+ trace = [ "*" ]
+ }
+
+ simple-metric-key-generator {
+ # Application prefix for all metrics pushed to Datadog. The default namespacing scheme for metrics follows
+ # this pattern:
+ # application.host.entity.entity-name.metric-name
+ application = "kamon"
+ }
+ }
+}
+```
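+
+You don't need to copy the whole `reference.conf`; just override the settings you care about in your `application.conf`.
+A small sketch (the hostname shown here is hypothetical):
+
+```
+kamon {
+  datadog {
+    # hypothetical address of the host running your Datadog agent
+    hostname = "192.168.1.50"
+
+    # must still be equal to or greater than kamon.metrics.tick-interval
+    flush-interval = 10 seconds
+  }
+}
+```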
+
+
+Integration Notes
+-----------------
+
+* Contrary to many Datadog client implementations, we don't flush the metrics data as soon as the measurements are taken;
+  instead, all metrics data is buffered by the `Kamon(Datadog)` extension and flushed periodically using the
+  configured `kamon.datadog.flush-interval` and `kamon.datadog.max-packet-size` settings.
+* Currently only Actor and Trace metrics are being sent to Datadog.
+* All timing measurements are sent in nanoseconds; make sure you correctly set the scale when plotting or using the
+  metrics data.
+* It is advisable to experiment with the `kamon.datadog.flush-interval` and `kamon.datadog.max-packet-size` settings to
+  find the right balance between network bandwidth utilization and granularity of your metrics data.
+
+
+
+Visualization and Fun
+---------------------
+
+
+
+[Datadog]: http://www.datadoghq.com/
+[get started]: /introduction/get-started/
\ No newline at end of file
diff --git a/site/src/main/jekyll/backends/kamon-dashboard.md b/site/src/main/jekyll/backends/kamon-dashboard.md
new file mode 100644
index 00000000..357bf904
--- /dev/null
+++ b/site/src/main/jekyll/backends/kamon-dashboard.md
@@ -0,0 +1,6 @@
+---
+title: kamon | Dashboard | Documentation
+layout: documentation
+---
+Coming soon
+-----------
\ No newline at end of file
diff --git a/site/src/main/jekyll/backends/newrelic.md b/site/src/main/jekyll/backends/newrelic.md
new file mode 100644
index 00000000..4ba14dd5
--- /dev/null
+++ b/site/src/main/jekyll/backends/newrelic.md
@@ -0,0 +1,117 @@
+---
+title: kamon | NewRelic Module | Documentation
+layout: documentation
+---
+
+NewRelic Module
+===============
+
+If you are a NewRelic user and tried to start your app using the NewRelic agent, you probably noticed a crude reality:
+nothing is shown in your dashboard, no web transactions are recognized and errors are not reported for your Spray applications.
+Don't even think about detailed traces for the slowest transactions.
+
+We love Spray, and we love NewRelic; we couldn't let this keep happening!
+
+Currently the NewRelic module works together with the Spray module to get information about your web transactions and sends
+that information to NewRelic servers as an aggregate to the data already collected by NewRelic's agent. The data
+currently being reported is:
+
+- Time spent for Web Transactions: Also known as `HttpDispatcher` time, represents the total time taken to process a web
+transaction, from the moment the `HttpRequest` is received by spray-can, to the moment the answer is sent to the IO layer.
+- Apdex
+- Errors
+
+Differentiation between JVM and External Services is coming soon, as well as actor metrics and detailed traces.
+
+
+
+Installation
+-------------
+
+To use the NewRelic module, just make sure you put the `kamon-newrelic` and `kamon-spray` libraries in your classpath and
+start your application with both the AspectJ Weaver and NewRelic agents. Please refer to our [get started](/introduction/get-started/) page
+for more info on how to add the AspectJ Weaver, and to the [NewRelic Agent Installation Instructions](https://docs.newrelic.com/docs/java/new-relic-for-java#h2-installation).
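+
+The classpath setup described above can be sketched in sbt form. This is a minimal sketch: the version (0.3.0) is taken
+from the changelog on this site, so adjust it to the release you actually use:
+
+```scala
+// both modules are needed: kamon-spray captures the web transactions,
+// kamon-newrelic reports them to NewRelic
+libraryDependencies ++= Seq(
+  "kamon" %% "kamon-newrelic" % "0.3.0",
+  "kamon" %% "kamon-spray"    % "0.3.0"
+)
+```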
+
+
+Configuration
+-------------
+
+Currently you will need to add a few settings to your `application.conf` file for the module to work:
+
+```scala
+akka {
+  // Custom logger for NewRelic that takes all the `Error` events from the event stream and publishes them to NewRelic
+ loggers = ["akka.event.slf4j.Slf4jLogger", "kamon.newrelic.NewRelicErrorLogger"]
+ // Make sure the NewRelic extension is loaded with the ActorSystem
+ extensions = ["kamon.newrelic.NewRelic"]
+}
+
+kamon {
+ newrelic {
+ // These values must match the values present in your newrelic.yml file.
+ app-name = "KamonNewRelicExample[Development]"
+ license-key = 0123456789012345678901234567890123456789
+ }
+}
+```
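+
+As the comments above note, these values must match your `newrelic.yml`. A minimal fragment, reusing the placeholder
+license key from the configuration above (the surrounding structure follows the standard NewRelic agent config layout),
+might look like this:
+
+```
+common: &default_settings
+  # must match kamon.newrelic.app-name
+  app_name: KamonNewRelicExample[Development]
+  # must match kamon.newrelic.license-key
+  license_key: '0123456789012345678901234567890123456789'
+```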
+
+
+Let's see it in Action!
+-----------------------
+
+Let's create a very simple Spray application to show what you should expect from this module. The entire application code
+is at [Github](https://github.com/kamon-io/Kamon/tree/master/kamon-examples/kamon-newrelic-example).
+
+```scala
+import akka.actor.ActorSystem
+import spray.routing.SimpleRoutingApp
+
+object NewRelicExample extends App with SimpleRoutingApp {
+
+ implicit val system = ActorSystem("kamon-system")
+
+  startServer(interface = "localhost", port = 8080) {
+    path("helloKamon") {
+      get {
+        complete {
+          "hello Kamon"
+        }
+      }
+    } ~
+    path("helloNewRelic") {
+      get {
+        complete {
+          "hello NewRelic"
+        }
+      }
+    }
+  }
+}
+```
+
+As you can see, this is a dead simple application: two paths, different responses for each of them. Now let's hit it hard
+with Apache Bench:
+
+```bash
+ab -k -n 200000 http://localhost:8080/helloKamon
+ab -k -n 200000 http://localhost:8080/helloNewRelic
+```
+
+After a couple of minutes running, you should start seeing something similar to this in your dashboard:
+
+![newrelic](/assets/img/newrelic.png "NewRelic Screenshot")
+
+
+Note: Don't think those numbers are wrong; Spray really is that fast!
+
+
+
+Limitations
+-----------
+* The first implementation only supports a subset of NewRelic metrics
+
+
+Licensing
+---------
+NewRelic has [its own, separate licensing](http://newrelic.com/terms).
+
diff --git a/site/src/main/jekyll/backends/statsd.md b/site/src/main/jekyll/backends/statsd.md
new file mode 100644
index 00000000..677552e5
--- /dev/null
+++ b/site/src/main/jekyll/backends/statsd.md
@@ -0,0 +1,94 @@
+---
+title: Kamon | StatsD | Documentation
+layout: documentation
+---
+
+Reporting Metrics to StatsD
+===========================
+
+
+[StatsD](https://github.com/etsy/statsd/) is a simple network daemon that continuously receives metrics over UDP and
+periodically sends aggregate metrics to upstream services like (but not limited to) Graphite. Because it uses UDP,
+sending metrics data to StatsD is very fast with little to no overhead.
+
+
+Installation
+------------
+
+To use the StatsD module, just add the `kamon-statsd` dependency to your project and start your application using the
+AspectJ Weaver agent. Please refer to our [get started](/introduction/get-started/) page for more info on how to add dependencies to
+your project and start your application with the AspectJ Weaver.
+
+
+Configuration
+-------------
+
+First, include the Kamon(StatsD) extension under the `akka.extensions` key of your configuration files as shown here:
+
+```scala
+akka {
+ extensions = ["kamon.statsd.StatsD"]
+}
+```
+
+Then, tune the configuration settings according to your needs. Here is the `reference.conf` that ships with kamon-statsd
+which includes a brief explanation of each setting:
+
+```
+kamon {
+ statsd {
+    # Hostname and port on which your StatsD server is running. Remember that StatsD packets are sent using UDP;
+    # Kamon won't warn you about unreachable hosts or closed ports, your data just won't go anywhere.
+ hostname = "127.0.0.1"
+ port = 8125
+
+    # Interval between metrics data flushes to StatsD. Its value must be equal to or greater than the
+    # kamon.metrics.tick-interval setting.
+ flush-interval = 1 second
+
+ # Max packet size for UDP metrics data sent to StatsD.
+ max-packet-size = 1024 bytes
+
+ # Subscription patterns used to select which metrics will be pushed to StatsD. Note that first, metrics
+ # collection for your desired entities must be activated under the kamon.metrics.filters settings.
+ includes {
+ actor = [ "*" ]
+ trace = [ "*" ]
+ }
+
+ simple-metric-key-generator {
+ # Application prefix for all metrics pushed to StatsD. The default namespacing scheme for metrics follows
+ # this pattern:
+ # application.host.entity.entity-name.metric-name
+ application = "kamon"
+ }
+ }
+}
+```
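+
+To make the namespacing scheme concrete: with the default settings above, running on a host called `my-host`, the
+processing-time metric for a hypothetical `user/job-manager` actor would arrive in StatsD under a key along these lines
+(the exact escaping of the actor path is an assumption):
+
+```
+kamon.my-host.actor.user-job-manager.processing-time
+```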
+
+
+Integration Notes
+-----------------
+
+* Contrary to many StatsD client implementations, we don't flush the metrics data as soon as the measurements are taken;
+  instead, all metrics data is buffered by the `Kamon(StatsD)` extension and flushed periodically using the
+  configured `kamon.statsd.flush-interval` and `kamon.statsd.max-packet-size` settings.
+* Currently only Actor and Trace metrics are being sent to StatsD.
+* All timing measurements are sent in nanoseconds; make sure you correctly set the scale when plotting or using the
+  metrics data.
+* It is advisable to experiment with the `kamon.statsd.flush-interval` and `kamon.statsd.max-packet-size` settings to
+  find the right balance between network bandwidth utilization and granularity of your metrics data.
+
+
+
+Visualization and Fun
+---------------------
+
+StatsD is widely used and there are many integrations available, even alternative implementations that can receive UDP
+messages with the StatsD protocol; you just have to pick the option that best suits you. For our internal testing we
+chose to use [Graphite](http://graphite.wikidot.com/) as the StatsD backend and [Grafana](http://grafana.org) to create
+beautiful dashboards with very useful metrics. Get an idea of how your metrics data might look in Grafana from the
+screenshot below, or use our [docker image](https://github.com/kamon-io/docker-grafana-graphite) to get up and running
+in a few minutes and see it with your own metrics!
+
+![statsD](/assets/img/kamon-statsd-grafana.png "Grafana Screenshot")
diff --git a/site/src/main/jekyll/changelog.md b/site/src/main/jekyll/changelog.md
deleted file mode 100644
index 5c0ee4ff..00000000
--- a/site/src/main/jekyll/changelog.md
+++ /dev/null
@@ -1,51 +0,0 @@
----
-title: Kamon | Changelog
-layout: default
----
-
-Changelog
-=========
-
-
-Version 0.3.0/0.2.0 (2014-04-24)
---------------------------------
-
-* Same feature set as 0.0.15 but now available for Akka 2.2 and Akka 2.3:
- * 0.3.0 is compatible with Akka 2.3, Spray 1.3 and Play 2.3-M1.
- * 0.2.0 is compatible with Akka 2.2, Spray 1.2 and Play 2.2.
-
-
-Version 0.0.15 (2014-04-10)
----------------------------
-
-* kamon
- * Now publishing to Sonatype and Maven Central
- * `reference.conf` files are now "sbt-assembly merge friendly"
-
-* kamon-core
- * Control of AspectJ weaving messages through Kamon configuration
- * Avoid the possible performance issues when calling `MessageQueue.numberOfMessages` by keeping a external counter.
-
-* kamon-statsd
- * Now you can send Actor and Trace metrics to StatsD! Check out our [StatsD documentation](/statsd/) for more
- details.
-
-* kamon-play (Experimental)
- * Experimental support to trace metrics collection, automatic trace token propagation and HTTP Client request
- metrics is now available for Play! applications.
-
-
-
-Version 0.0.14 (2014-03-17)
----------------------------
-* kamon-core
- * Improved startup times
- * Remake of trace metrics collection
- * Support for custom metrics collection (Experimental)
-
-* kamon-play
- * Initial support (Experimental)
-
-* site
- * [logging](/core/logging/) (WIP)
- * [tracing](/core/tracing/) (WIP)
diff --git a/site/src/main/jekyll/core/logging.md b/site/src/main/jekyll/core/logging.md
deleted file mode 100644
index d8324f82..00000000
--- a/site/src/main/jekyll/core/logging.md
+++ /dev/null
@@ -1,43 +0,0 @@
----
-title: Kamon | Core | Documentation
-layout: default
----
-
-Logging
-=======
-
-Kamon provides a very simple way to make sure that the trace token available when the log statement was executed is
-included in your logs, no matter if you are logging synchronously or asynchronously. Kamon provides built in support
-for logging with Logback, but extending the support to any other logging framework should be a trivial task.
-
-When using `ActorLogging` all logging events are sent to your actor system's event stream and then picked up by your
-registered listeners for actual logging. Akka captures the actor, thread and timestamp from the instant in which the
-event was generated and makes that info available when performing the actual logging. As an addition to this, Kamon
-attaches the `TraceContext` that is present when creating the log events and makes it available when the actual logging
-is performed. If you are using the loggers directly then the `TraceContext` should be already available.
-
-`TraceRecorder.currentContext` gives you access to the currently `TraceContext`, so the following expression gives you
-the trace token for the currently available context:
-
-```scala
-TraceRecorder.currentContext.map(_.token)
-```
-
-Kamon already packs a Logback converter that you can register in your `logback.xml` file and use in your logging
-patterns as show bellow:
-
-```xml
-
-
-
-
- %date{HH:mm:ss.SSS} %-5level [%traceToken][%X{akkaSource}] %msg%n
-
-
-
-
-
-
-
-
-```
diff --git a/site/src/main/jekyll/core/metrics.md b/site/src/main/jekyll/core/metrics.md
deleted file mode 100644
index dcef8304..00000000
--- a/site/src/main/jekyll/core/metrics.md
+++ /dev/null
@@ -1,93 +0,0 @@
----
-title: Kamon | Core | Documentation
-layout: default
----
-
-Metrics
-=======
-
-Some intro about metrics
-
-Philosophy
-----------
-
-Back in the day, the most common approach to get metrics out of an Akka/Spray application for production monitoring was
-doing manual instrumentation: select your favorite metrics collection library, wrap you messages with some useful
-metadata, wrap your actor's receive function with some metrics measuring code and, finally, push that metrics data out
-to somewhere you can keep it, graph it and analyse it whenever you want.
-
-Each metrics collection library has it's own strengths and weaknesses, and each developer has to choose wisely according
-to the requirements they have in hand, leading them in different paths as they progress with their applications. Each
-path has different implications with regards to introduced overhead and latency, metrics data accuracy and memory
-consumption. Kamon takes this responsibility out of the developer and tries to make the best choice to provide high
-performance metrics collection instruments while keeping the inherent overhead as low as possible.
-
-Kamon tries to select the best possible approach, so you don't have to.
-
-
-Metrics Collection and Flushing
--------------------------------
-
-All the metrics infrastructure in Kamon was designed around two concepts: collection and flushing. Metrics collection
-happens in real time, as soon as the information is available for being recorded. Let's see a simple example: as soon as
-a actor finishes processing a message, Kamon knows the elapsed time for processing that specific message and it is
-recorded right away. If you have millions of messages passing through your system, then millions of measurements will be
-taken.
-
-Flushing happens recurrently after a fixed amount of time has passed, a tick. Upon each tick, Kamon will collect all
-measurements recorded since the last tick, flush the collected data and reset all the instruments to zero. Let's explore
-a little bit more on how this two concepts are modeled inside Kamon.
-
-
-
-A metric group contains various individual metrics that are related to the same entity, for example, if the entity we
-are talking about is an actor, the metrics related to processing time, mailbox size and time in mailbox for that
-specific actor are grouped inside a single metric group, and each actor gets its own metric group. As you might disguise
-from the diagram above, on the left we have the mutable side of the process that is constantly recoding measurements as
-the events flow through your application and on the right we have the immutable side, containing snapshots representing
-all the measurements taken during a specific period on time for a metric group.
-
-
-Filtering Entities
-------------------
-
-By default Kamon will not include any entity for metrics collection and you will need to explicitly include all the
-entities you are interested in, be it a actor, a trace, a dispatcher or any other entity monitored by Kamon. The
-`kamon.metrics.filters` key on your application's configuration controls which entities must be included/excluded from
-the metrics collection infrastructure. Includes and excludes are provided as lists of strings containing the
-corresponding GLOB patterns for each group, and the logic behind is simple: include everything that matches at least one
-`includes` pattern and does not match any of the `excludes` patterns. The following configuration file sample includes
-the `user/job-manager` actor and all the worker actors, but leaves out all system actors and the `user/worker-helper`
-actor.
-
-```
-kamon {
- metrics {
- filters = [
- {
- actor {
- includes = [ "user/job-manager", "user/worker-*" ]
- excludes = [ "system/*", "user/worker-helper" ]
- }
- },
- {
- trace {
- includes = [ "*" ]
- excludes = []
- }
- }
- ]
- }
-}
-```
-
-Instruments
------------
-
-Talk about how HDR Histogram works and how we use it.
-
-
-Subscription protocol
----------------------
-
-Explain how to subscribe for metrics data and provide a simple example.
diff --git a/site/src/main/jekyll/core/metrics/basics.md b/site/src/main/jekyll/core/metrics/basics.md
new file mode 100644
index 00000000..991ddf26
--- /dev/null
+++ b/site/src/main/jekyll/core/metrics/basics.md
@@ -0,0 +1,93 @@
+---
+title: Kamon | Core | Documentation
+layout: documentation
+---
+
+Metrics
+=======
+
+Some intro about metrics
+
+Philosophy
+----------
+
+Back in the day, the most common approach to get metrics out of an Akka/Spray application for production monitoring was
+doing manual instrumentation: select your favorite metrics collection library, wrap your messages with some useful
+metadata, wrap your actor's receive function with some metrics measuring code and, finally, push that metrics data out
+to somewhere you can keep it, graph it and analyse it whenever you want.
+
+Each metrics collection library has its own strengths and weaknesses, and each developer has to choose wisely according
+to the requirements they have at hand, leading them down different paths as they progress with their applications. Each
+path has different implications with regard to introduced overhead and latency, metrics data accuracy and memory
+consumption. Kamon takes this responsibility away from the developer and tries to make the best choice to provide
+high-performance metrics collection instruments while keeping the inherent overhead as low as possible.
+
+Kamon tries to select the best possible approach, so you don't have to.
+
+
+Metrics Collection and Flushing
+-------------------------------
+
+All the metrics infrastructure in Kamon was designed around two concepts: collection and flushing. Metrics collection
+happens in real time, as soon as the information is available to be recorded. Let's see a simple example: as soon as
+an actor finishes processing a message, Kamon knows the elapsed time for processing that specific message and it is
+recorded right away. If you have millions of messages passing through your system, then millions of measurements will be
+taken.
+
+Flushing happens recurrently after a fixed amount of time has passed: a tick. Upon each tick, Kamon will collect all
+measurements recorded since the last tick, flush the collected data and reset all the instruments to zero. Let's explore
+a little bit more how these two concepts are modeled inside Kamon.
+
+
+
+A metric group contains various individual metrics that are related to the same entity. For example, if the entity we
+are talking about is an actor, the metrics related to processing time, mailbox size and time in mailbox for that
+specific actor are grouped inside a single metric group, and each actor gets its own metric group. As you might discern
+from the diagram above, on the left we have the mutable side of the process that is constantly recording measurements as
+the events flow through your application, and on the right we have the immutable side, containing snapshots representing
+all the measurements taken during a specific period of time for a metric group.
+
+
+Filtering Entities
+------------------
+
+By default Kamon will not include any entity for metrics collection and you will need to explicitly include all the
+entities you are interested in, be it an actor, a trace, a dispatcher or any other entity monitored by Kamon. The
+`kamon.metrics.filters` key in your application's configuration controls which entities must be included in or excluded
+from the metrics collection infrastructure. Includes and excludes are provided as lists of strings containing the
+corresponding GLOB patterns for each group, and the logic behind them is simple: include everything that matches at
+least one `includes` pattern and does not match any of the `excludes` patterns. The following configuration sample
+includes the `user/job-manager` actor and all the worker actors, but leaves out all system actors and the
+`user/worker-helper` actor.
+
+```
+kamon {
+ metrics {
+ filters = [
+ {
+ actor {
+ includes = [ "user/job-manager", "user/worker-*" ]
+ excludes = [ "system/*", "user/worker-helper" ]
+ }
+ },
+ {
+ trace {
+ includes = [ "*" ]
+ excludes = []
+ }
+ }
+ ]
+ }
+}
+```
+
+Instruments
+-----------
+
+Talk about how HDR Histogram works and how we use it.
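+
+While this section is still being written, the gist can be sketched with the HdrHistogram library directly. The snippet
+below only illustrates the recording model, it is not Kamon's actual internals, and the dependency and numbers used are
+assumptions for the example:
+
+```scala
+import org.HdrHistogram.Histogram
+
+// Track latencies of up to one hour (in nanoseconds), keeping 2 significant
+// digits of precision for every recorded value.
+val histogram = new Histogram(3600L * 1000 * 1000 * 1000, 2)
+
+// Record a few processing-time measurements, in nanoseconds. Recording is a
+// cheap, fixed-cost operation no matter how many values have been stored.
+histogram.recordValue(250000L)
+histogram.recordValue(480000L)
+histogram.recordValue(1200000L)
+
+// Percentiles can be queried without having kept every individual measurement.
+val p99 = histogram.getValueAtPercentile(99.0)
+```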
+
+
+Subscription protocol
+---------------------
+
+Explain how to subscribe for metrics data and provide a simple example.
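+
+As a rough illustration while this section is completed, subscribing looks something like the sketch below. The exact
+class and method names are assumptions based on the current sources and may change, so check kamon-core before relying
+on them:
+
+```scala
+import akka.actor.{ Actor, ActorSystem, Props }
+import kamon.Kamon
+import kamon.metrics.{ Metrics, TraceMetrics }
+import kamon.metrics.Subscriptions.TickMetricSnapshot
+
+// A subscriber is just a regular actor that receives a snapshot on every tick.
+class MetricsPrinter extends Actor {
+  def receive = {
+    case TickMetricSnapshot(from, to, metrics) =>
+      println(s"Tick [$from, $to] carried ${metrics.size} metric group snapshots")
+  }
+}
+
+val system = ActorSystem("metrics-subscriber")
+val printer = system.actorOf(Props[MetricsPrinter], "metrics-printer")
+
+// Subscribe the printer to all trace metrics, on every tick, permanently.
+Kamon(Metrics)(system).subscribe(TraceMetrics, "*", printer, permanently = true)
+```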
diff --git a/site/src/main/jekyll/core/tracing.md b/site/src/main/jekyll/core/tracing.md
deleted file mode 100644
index bf79bd47..00000000
--- a/site/src/main/jekyll/core/tracing.md
+++ /dev/null
@@ -1,101 +0,0 @@
----
-title: Kamon | Core | Documentation
-layout: default
----
-
-Traces
-======
-
-A trace is a story, told by some events in your application that explain how the execution of a particular portion of
-functionality went during a single invocation. For example, if in order to fulfil a `GET` request to the `/users/kamon`
-resource, a application sends a message to an actor, which reads the user data from a database and sends a message back
-with the user information to finish the request, all those interactions would be considered as part of the same trace.
-
-If the application described above were to handle a hundred clients requesting for user's details, there might be a
-handful of database access actors handling those requests. When the dispatcher gives an actor some time to execute, it
-will process as many messages as possible (as per dispatcher configuration) before the Thread is taken away from it, but
-during that time it is incorrect to say that either the actor or the Thread are tied to a trace, because each message
-might come from a different source which is probably waiting for a different response.
-
-Back in the day tracing used to be simpler: if you use single dedicated `Thread` during the execution of a request and
-everything related to that request happens in that single `Thread`, then you could use a `ThreadLocal` and store all the
-valuable information you want about the processing of that request from anywhere in the codebase and flush it all when
-the request is finished. Sounds easy, right?, hold on that thought, it isn't that easy in the reactive world!
-
-When developing reactive applications on top of Akka the perspective of a trace changes from "thread local" to "event
-local" and in order to cope with this Kamon provides the notion of a `TraceContext` to group all related events and
-collect the information we need about them. Once a `TraceContext` is created, Kamon will propagate it automatically
-under specific conditions and once the `TraceContext` is finished, all the gathered information is flushed. The
-`TraceContext` is effectively stored in a ThreadLocal, but only during the processing of certain specific events and
-then it is cleared out to avoid propagating it to unrelated events.
-
-
-Starting a `TraceContext`
--------------------------
-
-The `TraceRecorder` companion object provides a simple API to create, propagate and finish a `TraceContext`. To start a
-new context use the `TraceRecorder.withNewTraceContext(..)` method. Let's dig into this with a simple example:
-
-Suppose you want to trace a process that involves a couple actors, and you want to make sure all related events become
-part of the same `TraceContext`. Our actors might look like this:
-
-{% include_code kamon/docs/trace/SimpleContextPropagation.scala start:47 end:63 linenos:false %}
-
-You should feel familiar with this code, there is nothing new there. Let's spawn an `UpperCaser` actor and send it five
-string messages and see the output on the log file:
-
-```
-22:24:07.197 INFO [undefined][akka://simple-context-propagation/user/upper-caser] Upper casing [Hello without context]
-22:24:07.198 INFO [undefined][akka://simple-context-propagation/user/upper-caser] Upper casing [Hello without context]
-22:24:07.198 INFO [undefined][akka://simple-context-propagation/user/upper-caser/length-calculator] Calculating the length of: [HELLO WITHOUT CONTEXT]
-22:24:07.199 INFO [undefined][akka://simple-context-propagation/user/upper-caser] Upper casing [Hello without context]
-22:24:07.200 INFO [undefined][akka://simple-context-propagation/user/upper-caser] Upper casing [Hello without context]
-22:24:07.200 INFO [undefined][akka://simple-context-propagation/user/upper-caser] Upper casing [Hello without context]
-22:24:07.204 INFO [undefined][akka://simple-context-propagation/user/upper-caser/length-calculator] Calculating the length of: [HELLO WITHOUT CONTEXT]
-22:24:07.205 INFO [undefined][akka://simple-context-propagation/user/upper-caser/length-calculator] Calculating the length of: [HELLO WITHOUT CONTEXT]
-22:24:07.205 INFO [undefined][akka://simple-context-propagation/user/upper-caser/length-calculator] Calculating the length of: [HELLO WITHOUT CONTEXT]
-22:24:07.206 INFO [undefined][akka://simple-context-propagation/user/upper-caser/length-calculator] Calculating the length of: [HELLO WITHOUT CONTEXT]
-```
-
-Can you tell which log statement from `LengthCalculator` corresponds to each log statement from `UpperCaser`?, seems
-easy to figure it out manually in this case, but as the number of events happening concurrently in your app grows it
-becomes harder to answer that question without some extra help. Let's see how Kamon can help us in this situation:
-
-{% include_code kamon/docs/trace/SimpleContextPropagation.scala start:38 end:40 linenos:false %}
-
-When using the `TraceRecorder.withNewTraceContext(..)` method, Kamon will create a new `TraceContext` and make it
-available during the execution of the piece of code passed as argument. This time, we are sending a message to an actor
-which happens to be one of the situations under which Kamon will automatically propagate a `TraceContext`, so we can
-expect the current context to be available to the actor when processing the message we just sent, and
-only when processing that message. Let's repeat the exercise of sending five messages to this actor,
-now doing it with a new `TraceContext` each time and look at the log:
-
-```
-22:24:07.223 INFO [Ivan-Topolnjaks-MacBook-Pro.local-1][akka://simple-context-propagation/user/upper-caser] Upper casing [Hello World with TraceContext]
-22:24:07.224 INFO [Ivan-Topolnjaks-MacBook-Pro.local-2][akka://simple-context-propagation/user/upper-caser] Upper casing [Hello World with TraceContext]
-22:24:07.225 INFO [Ivan-Topolnjaks-MacBook-Pro.local-1][akka://simple-context-propagation/user/upper-caser/length-calculator] Calculating the length of: [HELLO WORLD WITH TRACECONTEXT]
-22:24:07.225 INFO [Ivan-Topolnjaks-MacBook-Pro.local-3][akka://simple-context-propagation/user/upper-caser] Upper casing [Hello World with TraceContext]
-22:24:07.226 INFO [Ivan-Topolnjaks-MacBook-Pro.local-4][akka://simple-context-propagation/user/upper-caser] Upper casing [Hello World with TraceContext]
-22:24:07.227 INFO [Ivan-Topolnjaks-MacBook-Pro.local-5][akka://simple-context-propagation/user/upper-caser] Upper casing [Hello World with TraceContext]
-22:24:07.227 INFO [Ivan-Topolnjaks-MacBook-Pro.local-2][akka://simple-context-propagation/user/upper-caser/length-calculator] Calculating the length of: [HELLO WORLD WITH TRACECONTEXT]
-22:24:07.228 INFO [Ivan-Topolnjaks-MacBook-Pro.local-3][akka://simple-context-propagation/user/upper-caser/length-calculator] Calculating the length of: [HELLO WORLD WITH TRACECONTEXT]
-22:24:07.228 INFO [Ivan-Topolnjaks-MacBook-Pro.local-4][akka://simple-context-propagation/user/upper-caser/length-calculator] Calculating the length of: [HELLO WORLD WITH TRACECONTEXT]
-22:24:07.229 INFO [Ivan-Topolnjaks-MacBook-Pro.local-5][akka://simple-context-propagation/user/upper-caser/length-calculator] Calculating the length of: [HELLO WORLD WITH TRACECONTEXT]
-```
-
-Can you tell which log statement from `LengthCalculator` corresponds to each log statement from `UpperCaser` now?, it
-has become a no brainer: each `TraceContext` created by Kamon gets a unique token that we are including in the log
-patterns (the first value between square brackets) and with that small but important piece of information the relation
-between each log line is clear.
-
-Just by logging the trace token you can get a lot of visibility and coherence in the information available on your logs,
-please refer to the [logging](../logging/) section to learn how to include the trace token in your logs.
-
-
-Rules for `TraceContext` Propagation
-------------------------------------
-
-* Actor creation.
-* Sending messages to a Actor.
-* Using ActorLogging.
-* Creating and transforming a Future.
diff --git a/site/src/main/jekyll/core/tracing/basics.md b/site/src/main/jekyll/core/tracing/basics.md
new file mode 100644
index 00000000..c5df29a9
--- /dev/null
+++ b/site/src/main/jekyll/core/tracing/basics.md
@@ -0,0 +1,101 @@
+---
+title: Kamon | Core | Documentation
+layout: documentation
+---
+
+Traces
+======
+
+A trace is a story, told by some events in your application that explain how the execution of a particular portion of
+functionality went during a single invocation. For example, if in order to fulfil a `GET` request to the `/users/kamon`
+resource, an application sends a message to an actor, which reads the user data from a database and sends a message back
+with the user information to finish the request, all those interactions would be considered as part of the same trace.
+
+If the application described above were to handle a hundred clients requesting user details, there might be a
+handful of database access actors handling those requests. When the dispatcher gives an actor some time to execute, it
+will process as many messages as possible (as per dispatcher configuration) before the Thread is taken away from it, but
+during that time it is incorrect to say that either the actor or the Thread are tied to a trace, because each message
+might come from a different source which is probably waiting for a different response.
+
+Back in the day tracing used to be simpler: if you used a single dedicated `Thread` during the execution of a request and
+everything related to that request happens in that single `Thread`, then you could use a `ThreadLocal` and store all the
+valuable information you want about the processing of that request from anywhere in the codebase and flush it all when
+the request is finished. Sounds easy, right? Hold on to that thought, it isn't that easy in the reactive world!
+
+When developing reactive applications on top of Akka the perspective of a trace changes from "thread local" to "event
+local" and in order to cope with this Kamon provides the notion of a `TraceContext` to group all related events and
+collect the information we need about them. Once a `TraceContext` is created, Kamon will propagate it automatically
+under specific conditions and once the `TraceContext` is finished, all the gathered information is flushed. The
+`TraceContext` is effectively stored in a ThreadLocal, but only during the processing of certain specific events and
+then it is cleared out to avoid propagating it to unrelated events.
+
+
+Starting a `TraceContext`
+-------------------------
+
+The `TraceRecorder` companion object provides a simple API to create, propagate and finish a `TraceContext`. To start a
+new context use the `TraceRecorder.withNewTraceContext(..)` method. Let's dig into this with a simple example:
+
+Suppose you want to trace a process that involves a couple of actors, and you want to make sure all related events become
+part of the same `TraceContext`. Our actors might look like this:
+
+{% include_code kamon/docs/trace/SimpleContextPropagation.scala start:47 end:63 linenos:false %}
+
+You should feel familiar with this code, there is nothing new there. Let's spawn an `UpperCaser` actor and send it five
+string messages and see the output on the log file:
+
+```
+22:24:07.197 INFO [undefined][akka://simple-context-propagation/user/upper-caser] Upper casing [Hello without context]
+22:24:07.198 INFO [undefined][akka://simple-context-propagation/user/upper-caser] Upper casing [Hello without context]
+22:24:07.198 INFO [undefined][akka://simple-context-propagation/user/upper-caser/length-calculator] Calculating the length of: [HELLO WITHOUT CONTEXT]
+22:24:07.199 INFO [undefined][akka://simple-context-propagation/user/upper-caser] Upper casing [Hello without context]
+22:24:07.200 INFO [undefined][akka://simple-context-propagation/user/upper-caser] Upper casing [Hello without context]
+22:24:07.200 INFO [undefined][akka://simple-context-propagation/user/upper-caser] Upper casing [Hello without context]
+22:24:07.204 INFO [undefined][akka://simple-context-propagation/user/upper-caser/length-calculator] Calculating the length of: [HELLO WITHOUT CONTEXT]
+22:24:07.205 INFO [undefined][akka://simple-context-propagation/user/upper-caser/length-calculator] Calculating the length of: [HELLO WITHOUT CONTEXT]
+22:24:07.205 INFO [undefined][akka://simple-context-propagation/user/upper-caser/length-calculator] Calculating the length of: [HELLO WITHOUT CONTEXT]
+22:24:07.206 INFO [undefined][akka://simple-context-propagation/user/upper-caser/length-calculator] Calculating the length of: [HELLO WITHOUT CONTEXT]
+```
+
+Can you tell which log statement from `LengthCalculator` corresponds to each log statement from `UpperCaser`? It seems
+easy to figure it out manually in this case, but as the number of events happening concurrently in your app grows it
+becomes harder to answer that question without some extra help. Let's see how Kamon can help us in this situation:
+
+{% include_code kamon/docs/trace/SimpleContextPropagation.scala start:38 end:40 linenos:false %}
+
+When using the `TraceRecorder.withNewTraceContext(..)` method, Kamon will create a new `TraceContext` and make it
+available during the execution of the piece of code passed as argument. This time, we are sending a message to an actor
+which happens to be one of the situations under which Kamon will automatically propagate a `TraceContext`, so we can
+expect the current context to be available to the actor when processing the message we just sent, and
+only when processing that message. Let's repeat the exercise of sending five messages to this actor,
+now doing it with a new `TraceContext` each time and look at the log:
+
+```
+22:24:07.223 INFO [Ivan-Topolnjaks-MacBook-Pro.local-1][akka://simple-context-propagation/user/upper-caser] Upper casing [Hello World with TraceContext]
+22:24:07.224 INFO [Ivan-Topolnjaks-MacBook-Pro.local-2][akka://simple-context-propagation/user/upper-caser] Upper casing [Hello World with TraceContext]
+22:24:07.225 INFO [Ivan-Topolnjaks-MacBook-Pro.local-1][akka://simple-context-propagation/user/upper-caser/length-calculator] Calculating the length of: [HELLO WORLD WITH TRACECONTEXT]
+22:24:07.225 INFO [Ivan-Topolnjaks-MacBook-Pro.local-3][akka://simple-context-propagation/user/upper-caser] Upper casing [Hello World with TraceContext]
+22:24:07.226 INFO [Ivan-Topolnjaks-MacBook-Pro.local-4][akka://simple-context-propagation/user/upper-caser] Upper casing [Hello World with TraceContext]
+22:24:07.227 INFO [Ivan-Topolnjaks-MacBook-Pro.local-5][akka://simple-context-propagation/user/upper-caser] Upper casing [Hello World with TraceContext]
+22:24:07.227 INFO [Ivan-Topolnjaks-MacBook-Pro.local-2][akka://simple-context-propagation/user/upper-caser/length-calculator] Calculating the length of: [HELLO WORLD WITH TRACECONTEXT]
+22:24:07.228 INFO [Ivan-Topolnjaks-MacBook-Pro.local-3][akka://simple-context-propagation/user/upper-caser/length-calculator] Calculating the length of: [HELLO WORLD WITH TRACECONTEXT]
+22:24:07.228 INFO [Ivan-Topolnjaks-MacBook-Pro.local-4][akka://simple-context-propagation/user/upper-caser/length-calculator] Calculating the length of: [HELLO WORLD WITH TRACECONTEXT]
+22:24:07.229 INFO [Ivan-Topolnjaks-MacBook-Pro.local-5][akka://simple-context-propagation/user/upper-caser/length-calculator] Calculating the length of: [HELLO WORLD WITH TRACECONTEXT]
+```
+
+Can you tell which log statement from `LengthCalculator` corresponds to each log statement from `UpperCaser` now? It
+has become a no-brainer: each `TraceContext` created by Kamon gets a unique token that we are including in the log
+patterns (the first value between square brackets) and with that small but important piece of information the relation
+between each log line is clear.
+
+Just by logging the trace token you can get a lot of visibility and coherence in the information available in your logs.
+Please refer to the [logging](../logging/) section to learn how to include the trace token in your logs.
+
+
+Rules for `TraceContext` Propagation
+------------------------------------
+
+* Actor creation.
+* Sending messages to an Actor.
+* Using ActorLogging.
+* Creating and transforming a Future.
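+
+The last rule deserves a tiny example: a context started with `TraceRecorder.withNewTraceContext(..)` travels with a
+`Future` and its transformations, even when they run on different threads. This is only a sketch, the trace name and
+the printing are illustrative:
+
+```scala
+import scala.concurrent.Future
+import scala.concurrent.ExecutionContext.Implicits.global
+import kamon.trace.TraceRecorder
+
+TraceRecorder.withNewTraceContext("future-example") {
+  Future {
+    // The TraceContext created above is available in the Future's body...
+    TraceRecorder.currentContext.map(_.token)
+  } map { token =>
+    // ...and also in this transformation, possibly running on another thread.
+    println(s"Current trace token: $token")
+  }
+}
+```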
diff --git a/site/src/main/jekyll/core/tracing/logging.md b/site/src/main/jekyll/core/tracing/logging.md
new file mode 100644
index 00000000..7d2251ef
--- /dev/null
+++ b/site/src/main/jekyll/core/tracing/logging.md
@@ -0,0 +1,43 @@
+---
+title: Kamon | Core | Documentation
+layout: documentation
+---
+
+Logging
+=======
+
+Kamon provides a very simple way to make sure that the trace token available when the log statement was executed is
+included in your logs, no matter if you are logging synchronously or asynchronously. Kamon provides built in support
+for logging with Logback, but extending the support to any other logging framework should be a trivial task.
+
+When using `ActorLogging` all logging events are sent to your actor system's event stream and then picked up by your
+registered listeners for actual logging. Akka captures the actor, thread and timestamp from the instant in which the
+event was generated and makes that info available when performing the actual logging. In addition to this, Kamon
+attaches the `TraceContext` that is present when creating the log events and makes it available when the actual logging
+is performed. If you are using the loggers directly then the `TraceContext` should be already available.
+
+`TraceRecorder.currentContext` gives you access to the current `TraceContext`, so the following expression gives you
+the trace token for the currently available context:
+
+```scala
+TraceRecorder.currentContext.map(_.token)
+```
+
+Kamon already packs a Logback converter that you can register in your `logback.xml` file and use in your logging
+patterns as shown below:
+
+```xml
+<configuration>
+
+  <conversionRule conversionWord="traceToken"
+                  converterClass="kamon.trace.logging.LogbackTraceTokenConverter" />
+
+  <appender name="STDOUT" class="ch.qos.logback.core.ConsoleAppender">
+    <encoder>
+      <pattern>%date{HH:mm:ss.SSS} %-5level [%traceToken][%X{akkaSource}] %msg%n</pattern>
+    </encoder>
+  </appender>
+
+  <root level="info">
+    <appender-ref ref="STDOUT" />
+  </root>
+
+</configuration>
+```
diff --git a/site/src/main/jekyll/dashboard/index.md b/site/src/main/jekyll/dashboard/index.md
deleted file mode 100644
index 8a31899e..00000000
--- a/site/src/main/jekyll/dashboard/index.md
+++ /dev/null
@@ -1,7 +0,0 @@
----
-title: kamon | Dashboard | Documentation
-layout: default
----
-Coming soon
---------
-
diff --git a/site/src/main/jekyll/datadog/index.md b/site/src/main/jekyll/datadog/index.md
deleted file mode 100644
index 0d840951..00000000
--- a/site/src/main/jekyll/datadog/index.md
+++ /dev/null
@@ -1,85 +0,0 @@
----
-title: Kamon | Datadog | Documentation
-layout: default
----
-
-Reporting Metrics to Datadog
-===========================
-
-
-[Datadog](http://www.datadoghq.com/) Datadog is a monitoring service for IT, Operations and Development teams who write
-and run applications at scale, and want to turn the massive amounts of data produced by their apps,
-tools and services into actionable insight.
-
-Installation
-------------
-
-To use the Datadog module just add the `kamon-datadog` dependency to your project and start your application using the
-Aspectj Weaver agent. Please refer to our [get started](/get-started) page for more info on how to add dependencies to
-your project and starting your application with the AspectJ Weaver.
-
-
-Configuration
--------------
-
-First, include the Kamon(Datadog) extension under the `akka.extensions` key of your configuration files as shown here:
-
-```scala
-akka {
- extensions = ["kamon.statsd.Datadog"]
-}
-```
-
-Then, tune the configuration settings according to your needs. Here is the `reference.conf` that ships with kamon-datadog
-which includes a brief explanation of each setting:
-
-```
-kamon {
- datadog {
- # Hostname and port in which your StatsD is running. Remember that Datadog packets are sent using UDP and
- # setting unreachable hosts and/or not open ports wont be warned by the Kamon, your data wont go anywhere.
- hostname = "127.0.0.1"
- port = 8125
-
- # Interval between metrics data flushes to Datadog. It's value must be equal or greater than the
- # kamon.metrics.tick-interval setting.
- flush-interval = 1 second
-
- # Max packet size for UDP metrics data sent to Datadog.
- max-packet-size = 1024 bytes
-
- # Subscription patterns used to select which metrics will be pushed to Datadog. Note that first, metrics
- # collection for your desired entities must be activated under the kamon.metrics.filters settings.
- includes {
- actor = [ "*" ]
- trace = [ "*" ]
- }
-
- simple-metric-key-generator {
- # Application prefix for all metrics pushed to Datadog. The default namespacing scheme for metrics follows
- # this pattern:
- # application.host.entity.entity-name.metric-name
- application = "kamon"
- }
- }
-}
-```
-
-
-Integration Notes
------------------
-
-* Contrary to many Datadog client implementations, we don't flush the metrics data as soon as the measurements are taken
- but instead, all metrics data is buffered by the `Kamon(Datadog)` extension and flushed periodically using the
- configured `kamon.statsd.flush-interval` and `kamon.statsd.max-packet-size` settings.
-* Currently only Actor and Trace metrics are being sent to Datadog.
-* All timing measurements are sent in nanoseconds, make sure you correctly set the scale when plotting or using the
- metrics data.
-* It is advisable to experiment with the `kamon.statsd.flush-interval` and `kamon.statsd.max-packet-size` settings to
- find the right balance between network bandwidth utilization and granularity on your metrics data.
-
-
-
-Visualization and Fun
----------------------
-
diff --git a/site/src/main/jekyll/get-started.md b/site/src/main/jekyll/get-started.md
deleted file mode 100644
index 07c59654..00000000
--- a/site/src/main/jekyll/get-started.md
+++ /dev/null
@@ -1,58 +0,0 @@
----
-title: Kamon | Get Started
-layout: default
----
-
-Get Started with Kamon
-======================
-
-Kamon is distributed as a core and a set of modules that you include in your application classpath. This modules contain
-all the required pointcuts and advices (yeap, Kamon uses Aspectj!) for instrumenting Akka actors message passing,
-dispatchers, futures, Spray components and much more.
-
-To get started just follow this steps:
-
-
-First: Include the modules you want in your project.
-----------------------------------------------------
-
-All Kamon components are available through Sonatype and Maven Central and no special repositories need to be configured.
-If you are using SBT, you will need to add something like this to your build definition:
-
-```scala
-libraryDependencies += "io.kamon" % "kamon-core" % "0.3.0"
-```
-
-Then, add any additional module you need:
-
-* kamon-core
-* kamon-spray
-* kamon-statsd
-* kamon-newrelic
-
-### Compatibility Notes: ###
-
-* 0.3.x releases are compatible with Akka 2.3, Spray 1.3 and Play 2.3-M1.
-* 0.2.x releases are compatible with Akka 2.2, Spray 1.2 and Play 2.2.
-
-
-Second: Start your app with the AspectJ Weaver
-----------------------------------------------
-
-Starting your app with the AspectJ weaver is dead simple, just add the `-javaagent` JVM startup parameter pointing to
-the weaver's file location and you are done:
-
-```
--javaagent:/path-to-aspectj-weaver.jar
-```
-
-In case you want to keep the AspectJ related settings in your build and enjoy using `run` from the console, take a look
-at the [sbt-aspectj](https://github.com/sbt/sbt-aspectj/) plugin.
-
-
-Third: Enjoy!
--------------
-
-Refer to module's documentation to find out more about core concepts like [tracing](/core/tracing/),
-[metrics](/core/metrics/) and [logging](/core/logging/), and learn how to report your metrics data to external services
-like [StatsD](/statsd/) and [New Relic](/newrelic/).
diff --git a/site/src/main/jekyll/index.html b/site/src/main/jekyll/index.html
index eba6146b..26730da8 100644
--- a/site/src/main/jekyll/index.html
+++ b/site/src/main/jekyll/index.html
@@ -13,7 +13,7 @@ title: Kamon - Tools for Reactive Applications Monitoring
Kamon
Kamon is a set of tools that helps you to get metrics out of your reactive applications.
diff --git a/site/src/main/jekyll/integrations/akka/index.md b/site/src/main/jekyll/integrations/akka/index.md
new file mode 100644
index 00000000..6ca89409
--- /dev/null
+++ b/site/src/main/jekyll/integrations/akka/index.md
@@ -0,0 +1,97 @@
+---
+title: kamon | Akka Toolkit | Documentation
+layout: documentation
+---
+
+Akka Module
+===
+
+---
+Dependencies
+---
+
+Apart from the Scala library, Kamon depends on:
+
+- aspectj
+- spray-io
+- akka-actor
+
+
+Installation
+---
+Kamon works with SBT, so you need to add the Kamon.io repository to your resolvers.
+
+Configuration
+---
+Just like other products in the Scala ecosystem, Kamon relies on the Typesafe configuration library.
+
+Since Kamon uses the same configuration technique as [Spray](http://spray.io/documentation "Spray") / [Akka](http://akka.io/docs "Akka"), you might want to check out the [Akka documentation on configuration](http://doc.akka.io/docs/akka/2.1.4/general/configuration.html "Akka Documentation on configuration").
+
+In order to see Kamon in action you need first to set up your sbt project.
+
+1) Add Kamon repository to resolvers
+
+```scala
+"Kamon Repository" at "http://repo.kamon.io"
+```
+
+2) Add the library dependency
+
+```scala
+ "kamon" %% "kamon-spray" % "0.0.11",
+```
+
+In addition, we suggest creating an `aspectj.sbt` file with this content:
+
+```scala
+ import com.typesafe.sbt.SbtAspectj._
+
+ aspectjSettings
+
+ javaOptions <++= AspectjKeys.weaverOptions in Aspectj
+```
+
+3) In your `plugins.sbt` file in the project folder (if you don't have one yet, create the file), add the Kamon releases repository to the resolvers and the sbt-aspectj plugin:
+
+```scala
+ resolvers += Resolver.url("Kamon Releases", url("http://repo.kamon.io"))(Resolver.ivyStylePatterns)
+
+ addSbtPlugin("com.typesafe.sbt" % "sbt-aspectj" % "0.9.2")
+```
+**application.conf**
+
+```scala
+ akka {
+ loggers = ["akka.event.slf4j.Slf4jLogger"]
+
+ actor {
+ debug {
+ unhandled = on
+ }
+ }
+ }
+```
+
+Examples
+---
+
+TODO: (to be published) The example will start a spray server with akka and logback configuration. Adjust it to your needs.
+
+Follow these steps to clone the repository:
+
+1. git clone git://github.com/kamon/kamon.git
+
+2. cd kamon
+
+For the first example run
+
+```bash
+ sbt "project kamon-uow-example"
+```
+
+In order to see how it works, you need to send a message to the REST service:
+
+```bash
+ curl -v --header 'X-UOW:YOUR_TRACER_ID' -X GET 'http://0.0.0.0:6666/fibonacci'
+```
diff --git a/site/src/main/jekyll/integrations/play/applications.md b/site/src/main/jekyll/integrations/play/applications.md
new file mode 100644
index 00000000..97918dc5
--- /dev/null
+++ b/site/src/main/jekyll/integrations/play/applications.md
@@ -0,0 +1,4 @@
+---
+title: kamon | Play | Documentation
+layout: documentation
+---
\ No newline at end of file
diff --git a/site/src/main/jekyll/integrations/play/ws-library.md b/site/src/main/jekyll/integrations/play/ws-library.md
new file mode 100644
index 00000000..97918dc5
--- /dev/null
+++ b/site/src/main/jekyll/integrations/play/ws-library.md
@@ -0,0 +1,4 @@
+---
+title: kamon | Play | Documentation
+layout: documentation
+---
\ No newline at end of file
diff --git a/site/src/main/jekyll/integrations/spray/client-side.md b/site/src/main/jekyll/integrations/spray/client-side.md
new file mode 100644
index 00000000..15b41c72
--- /dev/null
+++ b/site/src/main/jekyll/integrations/spray/client-side.md
@@ -0,0 +1,8 @@
+---
+title: kamon | Spray | Documentation
+layout: documentation
+---
+
+Spray Module(TODO)
+===
+
diff --git a/site/src/main/jekyll/integrations/spray/server-side.md b/site/src/main/jekyll/integrations/spray/server-side.md
new file mode 100644
index 00000000..15b41c72
--- /dev/null
+++ b/site/src/main/jekyll/integrations/spray/server-side.md
@@ -0,0 +1,8 @@
+---
+title: kamon | Spray | Documentation
+layout: documentation
+---
+
+Spray Module(TODO)
+===
+
diff --git a/site/src/main/jekyll/introduction/get-started.md b/site/src/main/jekyll/introduction/get-started.md
new file mode 100644
index 00000000..84c53314
--- /dev/null
+++ b/site/src/main/jekyll/introduction/get-started.md
@@ -0,0 +1,66 @@
+---
+title: Kamon | Get Started
+layout: documentation
+---
+
+Get Started with Kamon
+======================
+
+Kamon is distributed as a core and a set of modules that you include in your application classpath. These modules contain
+all the required pointcuts and advices (yep, Kamon uses AspectJ!) for instrumenting Akka actor message passing,
+dispatchers, futures, Spray components and much more.
+
+To get started just follow these steps:
+
+
+First: Include the modules you want in your project.
+----------------------------------------------------
+
+All Kamon components are available through Sonatype and Maven Central and no special repositories need to be configured.
+If you are using SBT, you will need to add something like this to your build definition:
+
+```scala
+libraryDependencies += "io.kamon" % "kamon-core" % "0.3.0"
+```
+
+Then, add any additional module you need:
+
+* kamon-core
+* kamon-spray
+* kamon-statsd
+* kamon-newrelic
+
+### Compatibility Notes: ###
+
+* 0.3.x releases are compatible with Akka 2.3, Spray 1.3 and Play 2.3-M1.
+* 0.2.x releases are compatible with Akka 2.2, Spray 1.2 and Play 2.2.
+
+
+Second: Start your app with the AspectJ Weaver
+----------------------------------------------
+
+Starting your app with the AspectJ weaver is dead simple, just add the `-javaagent` JVM startup parameter pointing to
+the weaver's file location and you are done:
+
+```
+-javaagent:/path-to-aspectj-weaver.jar
+```
+
+In case you want to keep the AspectJ related settings in your build and enjoy using `run` from the console, take a look
+at the [sbt-aspectj] plugin.
+
+
+Third: Enjoy!
+-------------
+
+Refer to each module's documentation to find out more about core concepts like [tracing], [metrics] and [logging], and learn
+how to report your metrics data to external services like [StatsD], [Datadog] and [New Relic].
+
+
+[sbt-aspectj]: https://github.com/sbt/sbt-aspectj/
+[tracing]: /core/tracing/basics/
+[metrics]: /core/metrics/basics/
+[logging]: /core/tracing/logging/
+[StatsD]: /backends/statsd/
+[Datadog]: /backends/datadog/
+[New Relic]: /backends/newrelic/
\ No newline at end of file
diff --git a/site/src/main/jekyll/introduction/what-is-kamon.md b/site/src/main/jekyll/introduction/what-is-kamon.md
new file mode 100644
index 00000000..3bc3ae8e
--- /dev/null
+++ b/site/src/main/jekyll/introduction/what-is-kamon.md
@@ -0,0 +1,6 @@
+---
+title: Kamon | What is Kamon
+layout: documentation
+---
+
+An introduction to what Kamon is.
\ No newline at end of file
diff --git a/site/src/main/jekyll/license.md b/site/src/main/jekyll/license.md
deleted file mode 100644
index 8bbc65b1..00000000
--- a/site/src/main/jekyll/license.md
+++ /dev/null
@@ -1,29 +0,0 @@
----
-title: Kamon | License
-layout: default
----
-
-License
-=================
-
-
- }
- }
- }
- }
-}
-```
-
-As you can see, this is a dead simple application: two paths, different responses for each of them. Now let's hit it hard
-with Apache Bench:
-
-```bash
-ab -k -n 200000 http://localhost:8080/helloKamon
-ab -k -n 200000 http://localhost:8080/helloNewRelic
-```
-
-After a couple minutes running you should start seeing something similar to this in your dashboard:
-
-![newrelic](/assets/img/newrelic.png "NewRelic Screenshot")
-
-
-Note: Don't think that those numbers are wrong, Spray is that fast!
-
-
-
-Limitations
------------
-* The first implementation only supports a subset of NewRelic metrics
-
-
-Licensing
----------
-NewRelic has [its own, separate licensing](http://newrelic.com/terms).
-
diff --git a/site/src/main/jekyll/project-info/acknowledgments.md b/site/src/main/jekyll/project-info/acknowledgments.md
new file mode 100644
index 00000000..f75f4e24
--- /dev/null
+++ b/site/src/main/jekyll/project-info/acknowledgments.md
@@ -0,0 +1,36 @@
+---
+title: Kamon | Acknowledgments
+layout: default
+---
+
+Acknowledgments
+===============
+
+We, the Kamon team, would like to express our gratitude to all the people and companies that help us make Kamon the best
+solution in the metrics collection space for Akka, Spray and Play! Let's give names and regards to these wonderful
+fellows:
+
+Our contributors
+----------------
+
+Everything starts with an idea, and [these](https://github.com/kamon-io/Kamon/graphs/contributors) guys are helping us
+take that idea and make it a reality, one that is helping developers around the world measure and monitor their
+success with reactive technologies. Kudos to all of you!
+
+
+Our users
+---------
+
+It is absolutely rewarding to know that Kamon is useful for people around the world, and it is even better when these
+people come to us looking for help, reporting issues, giving feedback or telling us how smoothly Kamon is monitoring
+their production systems. Thanks for using Kamon! Keep coming and spread the word :).
+
+
+
+[YourKit, LLC](http://www.yourkit.com)
+--------------------------------------
+
+We care a lot about performance and we try hard to keep Kamon's overhead as low as possible, but we couldn't succeed
+in this matter without [YourKit's Java Profiler](http://www.yourkit.com/java/profiler/index.jsp). It is well known to be
+one of the best profilers out there and they have been kind enough to support us by providing an open source license to
+Kamon developers. Thanks, YourKit! We highly appreciate your support and commitment to the open source community.
diff --git a/site/src/main/jekyll/project-info/changelog.md b/site/src/main/jekyll/project-info/changelog.md
new file mode 100644
index 00000000..5c0ee4ff
--- /dev/null
+++ b/site/src/main/jekyll/project-info/changelog.md
@@ -0,0 +1,51 @@
+---
+title: Kamon | Changelog
+layout: default
+---
+
+Changelog
+=========
+
+
+Version 0.3.0/0.2.0 (2014-04-24)
+--------------------------------
+
+* Same feature set as 0.0.15 but now available for Akka 2.2 and Akka 2.3:
+ * 0.3.0 is compatible with Akka 2.3, Spray 1.3 and Play 2.3-M1.
+ * 0.2.0 is compatible with Akka 2.2, Spray 1.2 and Play 2.2.
+
+
+Version 0.0.15 (2014-04-10)
+---------------------------
+
+* kamon
+ * Now publishing to Sonatype and Maven Central
+ * `reference.conf` files are now "sbt-assembly merge friendly"
+
+* kamon-core
+ * Control of AspectJ weaving messages through Kamon configuration
+ * Avoid possible performance issues when calling `MessageQueue.numberOfMessages` by keeping an external counter.
+
+* kamon-statsd
+ * Now you can send Actor and Trace metrics to StatsD! Check out our [StatsD documentation](/statsd/) for more
+ details.
+
+* kamon-play (Experimental)
+ * Experimental support to trace metrics collection, automatic trace token propagation and HTTP Client request
+ metrics is now available for Play! applications.
+
+
+
+Version 0.0.14 (2014-03-17)
+---------------------------
+* kamon-core
+ * Improved startup times
+ * Remake of trace metrics collection
+ * Support for custom metrics collection (Experimental)
+
+* kamon-play
+ * Initial support (Experimental)
+
+* site
+ * [logging](/core/logging/) (WIP)
+ * [tracing](/core/tracing/) (WIP)
diff --git a/site/src/main/jekyll/project-info/license.md b/site/src/main/jekyll/project-info/license.md
new file mode 100644
index 00000000..8bbc65b1
--- /dev/null
+++ b/site/src/main/jekyll/project-info/license.md
@@ -0,0 +1,29 @@
+---
+title: Kamon | License
+layout: default
+---
+
+License
+=================
+
+