Serving metrics with titan is only part of the story. Once those metrics are served, one must set up and deploy Prometheus so it can scrape them.
There are a few ways in which Prometheus can be deployed. The easiest, I find, is simply to install the binary and run it as a service; this way it automatically restarts when the server reboots, among other benefits.
Below we download the zipped latest release (v2.23.0 at the time of writing).
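A sketch of those commands, assuming the v2.23.0 Linux amd64 release; adjust the version and platform to the release you are downloading:

```shell
# Download the Prometheus release archive (v2.23.0 at the time of writing)
wget https://github.com/prometheus/prometheus/releases/download/v2.23.0/prometheus-2.23.0.linux-amd64.tar.gz

# Extract the files and rename the directory to something shorter
tar -xzf prometheus-2.23.0.linux-amd64.tar.gz
mv prometheus-2.23.0.linux-amd64 prometheus
```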
This downloads and unzips the files necessary to run Prometheus, then renames the directory to
prometheus. Next you can move into that directory and run Prometheus manually to make sure all works well.
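A minimal sketch, assuming the directory name used above:

```shell
# Move into the directory and launch Prometheus with its default configuration
cd prometheus
./prometheus --config.file=prometheus.yml
```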
This should run Prometheus and make it available on port 9090 by default; you should be able to see it running at <server-ip>:9090. If you do not, make sure that port 9090 is open on your server.
The prometheus.yml file contains the “targets” to scrape, that is, the plumber APIs and shiny applications from which to collect the metrics.
We can create a new service to easily have Prometheus run in the background, restart when needed, etc.
In that service file, place the following.
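A sketch of such a unit file (e.g. /etc/systemd/system/prometheus.service); the user and paths below are assumptions, adjust them to point at where you placed the Prometheus files:

```
[Unit]
Description=Prometheus
Wants=network-online.target
After=network-online.target

[Service]
# User and paths are assumptions; point them at your Prometheus directory
User=prometheus
Restart=on-failure
ExecStart=/opt/prometheus/prometheus \
  --config.file=/opt/prometheus/prometheus.yml \
  --web.enable-lifecycle

[Install]
WantedBy=multi-user.target
```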
This essentially creates a new service, very similar to shiny-server. Note the use of
--web.enable-lifecycle, which allows the configuration file to be reloaded by making an HTTP request to Prometheus (more on this below).
This creates the service; it can then be run after reloading the daemon.
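A sketch of those commands (the service name matches the unit file created above):

```shell
# Reload systemd so it picks up the new unit file, then start the service
sudo systemctl daemon-reload
sudo systemctl start prometheus
sudo systemctl enable prometheus  # start on boot
```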
sudo systemctl status prometheus will show whether the service is running correctly.
We have yet to explore the configuration file. Below is an example of a job to scrape a shiny application.
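A sketch of such a configuration; the job name, port, and metrics path below are assumptions, adjust them to wherever your application serves its metrics:

```yaml
global:
  scrape_interval: 15s

scrape_configs:
  # Scrape a shiny application serving titan metrics
  - job_name: "shiny-app"
    metrics_path: "/metrics"
    static_configs:
      - targets: ["localhost:3838"]
```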
Once changes have been made to the configuration, we must tell Prometheus to reload it. Since we set the
--web.enable-lifecycle flag when launching the Prometheus service, we can simply make a
POST request to the
/-/reload endpoint of Prometheus to reload the configuration file.
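For instance, with curl (assuming Prometheus runs locally on the default port):

```shell
# Ask Prometheus to reload its configuration file
curl -X POST http://localhost:9090/-/reload
```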
There is a convenience function in titan to do so from R.
All functions that pertain to the management of the Prometheus server start with a capital letter.
From there onwards it’s just a matter of adding jobs to the configuration file and reloading it.