diff --git a/.travis/README.md b/.travis/README.md
index 3b314fa186..6c6ca3f66c 100644
--- a/.travis/README.md
+++ b/.travis/README.md
@@ -54,7 +54,7 @@ At this stage, basically, we build :-)
We do a baseline check of our build artifacts to guarantee they are not broken.
Briefly our activities include:
- Verify docker builds successfully
-- Run the standard netdata installer, to make sure we build & run properly
+- Run the standard Netdata installer, to make sure we build & run properly
- Do the same through 'make dist', as this is our stable channel for our kickstart files
## Artifacts validation
@@ -66,7 +66,7 @@ Briefly we currently evaluate the following activities:
- Basic software unit testing
- Non containerized build and install on ubuntu 14.04
- Non containerized build and install on ubuntu 18.04
-- Running the full netdata lifecycle (install, update, uninstall) on ubuntu 18.04
+- Running the full Netdata lifecycle (install, update, uninstall) on ubuntu 18.04
- Build and install on CentOS 6
- Build and install on CentOS 7
(More to come)
diff --git a/CONTRIBUTING.md b/CONTRIBUTING.md
index caa11a984a..e09030eb09 100644
--- a/CONTRIBUTING.md
+++ b/CONTRIBUTING.md
@@ -15,15 +15,15 @@ This is the minimum open-source users should contribute back to the projects the
### Spread the word
-Community growth allows the project to attract new talent willing to contribute. This talent is then developing new features and improves the project. These new features and improvements attract more users and so on. It is a loop. So, post about netdata, present it to local meetups you attend, let your online social network or twitter, facebook, reddit, etc. know you are using it. **The more people involved, the faster the project evolves**.
+Community growth allows the project to attract new talent willing to contribute. This talent then develops new features and improves the project. These new features and improvements attract more users, and so on: it is a loop. So, post about Netdata, present it at local meetups you attend, and let your online social networks (Twitter, Facebook, Reddit, etc.) know you are using it. **The more people involved, the faster the project evolves**.
### Provide feedback
-Is there anything that bothers you about netdata? Did you experience an issue while installing it or using it? Would you like to see it evolve to you need? Let us know. [Open a github issue](https://github.com/netdata/netdata/issues) to discuss it. Feedback is very important for open-source projects. We can't commit we will do everything, but your feedback influences our road-map significantly. **We rely on your feedback to make Netdata better**.
+Is there anything that bothers you about Netdata? Did you experience an issue while installing it or using it? Would you like to see it evolve to your needs? Let us know. [Open a GitHub issue](https://github.com/netdata/netdata/issues) to discuss it. Feedback is very important for open-source projects. We can't commit to doing everything, but your feedback significantly influences our road-map. **We rely on your feedback to make Netdata better**.
### Translate some documentation
-The [netdata localization project](https://github.com/netdata/localization) contains instructions on how to provide translations for parts of our documentation. Translating the entire documentation is a daunting task, but you can contribute as much as you like, even a single file. The Chinese translation effort has already begun and we are looking forward to more contributions.
+The [Netdata localization project](https://github.com/netdata/localization) contains instructions on how to provide translations for parts of our documentation. Translating the entire documentation is a daunting task, but you can contribute as much as you like, even a single file. The Chinese translation effort has already begun and we are looking forward to more contributions.
### Sponsor a part of Netdata
@@ -57,7 +57,7 @@ Netdata delivers alarms via various [notification methods](health/notifications)
### Help other users
-As the project grows, an increasing share of our time is spent on supporting this community of users in terms of answering questions, of helping users understand how netdata works and find their way with it. Helping other users is crucial. It allows the developers and maintainers of the project to focus on improving it.
+As the project grows, an increasing share of our time is spent supporting this community of users: answering questions and helping users understand how Netdata works and find their way with it. Helping other users is crucial. It allows the developers and maintainers of the project to focus on improving it.
### Improve documentation
@@ -80,11 +80,11 @@ Of course we appreciate contributions for any other part of the NetData agent, i
#### Code of Conduct and CLA
-We expect all contributors to abide by the [Contributor Covenant Code of Conduct](CODE_OF_CONDUCT.md). For a pull request to be accepted, you will also need to accept the [netdata contributors license agreement](CONTRIBUTORS.md), as part of the PR process.
+We expect all contributors to abide by the [Contributor Covenant Code of Conduct](CODE_OF_CONDUCT.md). For a pull request to be accepted, you will also need to accept the [Netdata contributors license agreement](CONTRIBUTORS.md), as part of the PR process.
#### Performance and efficiency
-Everything on Netdata is about efficiency. We need netdata to always be the most lightweight monitoring solution available. We will reject to merge PRs that are not optimal in resource utilization and efficiency.
+Everything in Netdata is about efficiency. We need Netdata to always be the most lightweight monitoring solution available. We will not merge PRs that are not optimal in resource utilization and efficiency.
Of course there are cases where such technical excellence is either not reasonable or not feasible. In these cases, we may require the feature or code submitted to be disabled by default.
@@ -92,9 +92,9 @@ Of course there are cases that such technical excellence is either not reasonabl
Unlike other monitoring solutions, Netdata requires all metrics collected to have some structure attached to them. So, Netdata metrics have a name, units, belong to a chart that has a title, a family, a context, belong to an application, etc.
-This structure is what makes netdata different. Most other monitoring solution collect bulk metrics in terms of name-value pairs and then expect their users to give meaning to these metrics during visualization. This does not work. It is neither practical nor reasonable to give to someone 2000 metrics and let him/her visualize them in a meaningful way.
+This structure is what makes Netdata different. Most other monitoring solutions collect bulk metrics as name-value pairs and then expect their users to give meaning to these metrics during visualization. This does not work. It is neither practical nor reasonable to give someone 2000 metrics and expect them to visualize the metrics in a meaningful way.
-So, netdata requires all metrics to have a meaning at the time they are collected. We will reject to merge PRs that loosely collect just a "bunch of metrics", but we are very keen to help you fix this.
+So, Netdata requires all metrics to have a meaning at the time they are collected. We will not merge PRs that loosely collect just a "bunch of metrics", but we are very keen to help you fix this.
#### Automated Testing
@@ -106,7 +106,7 @@ Of course, manual testing is always required.
#### Netdata is a distributed application
-Netdata is a distributed monitoring application. A few basic features can become quite complicated for such applications. We may reject features that alter or influence the nature of netdata, though we usually discuss the requirements with contributors and help them adapt their code to be better suited for Netdata.
+Netdata is a distributed monitoring application. A few basic features can become quite complicated for such applications. We may reject features that alter or influence the nature of Netdata, though we usually discuss the requirements with contributors and help them adapt their code to be better suited for Netdata.
#### Operating systems supported
diff --git a/CONTRIBUTORS.md b/CONTRIBUTORS.md
index 1cb02caa5b..c6f51348b0 100644
--- a/CONTRIBUTORS.md
+++ b/CONTRIBUTORS.md
@@ -2,9 +2,9 @@
SPDX-License-Identifier: GPL-3.0-or-later
-->
-# netdata contributors license agreement
+# Netdata contributors license agreement
-**Thank you for contributing to netdata!**
+**Thank you for contributing to Netdata!**
This agreement is part of the legal framework of the open-source ecosystem
that adds some red tape, but protects both the contributor and the project.
@@ -17,22 +17,22 @@ contributions for any other purpose.
## copyright license
-The Contributor (*you*) grants netdata Inc. a perpetual, worldwide, non-exclusive,
+The Contributor (*you*) grants Netdata Inc. a perpetual, worldwide, non-exclusive,
no-charge, royalty-free, irrevocable copyright license to reproduce,
prepare derivative works of, publicly display, publicly perform, sublicense,
and distribute his contributions and such derivative works.
## copyright transfer
-The Contributor (*you*) hereby assigns netdata Inc. copyright in his
+The Contributor (*you*) hereby assigns Netdata Inc. copyright in his
contributions, to be licensed under the same terms as the rest of the code.
-> *Note: this means we may re-license netdata (your contributions included)
+> *Note: this means we may re-license Netdata (your contributions included)
> any way we see fit, without asking your permission.
-> We intend to keep the netdata agent forever FOSS.
+> We intend to keep the Netdata agent forever FOSS.
> But open-source licenses have significant differences and in our attempt to
-> help netdata grow we may have to distribute it under a different license.
-> For example, CNCF, the Cloud Native Computing Foundation, requires netdata
+> help Netdata grow we may have to distribute it under a different license.
+> For example, CNCF, the Cloud Native Computing Foundation, requires Netdata
> to be licensed under Apache-2.0 for it to be accepted as a member of the
> Foundation. We want to be free to do it.*
@@ -43,9 +43,9 @@ original creation and that he is legally entitled to grant the above license.
> *Note: if you are committing third party code, please make sure the third party
> license or any other restrictions are also included with your commits.
-> netdata includes many third party libraries and tools and this is not a
+> Netdata includes many third party libraries and tools and this is not a
> problem, provided that the license of the third party code is compatible with
-> the one we use for netdata.*
+> the one we use for Netdata.*
## signature
@@ -66,7 +66,7 @@ are subject to this agreement.
> 1. add your github username and name in this file
> 2. commit it to the repo with a PR, using the same github username, or include this change in your first PR.
-# netdata contributors
+# Netdata contributors
This is the list of contributors that have signed this agreement:
diff --git a/README.md b/README.md
index 1a935386b3..0780b73d8b 100644
--- a/README.md
+++ b/README.md
@@ -154,17 +154,17 @@ not just visualize metrics.
Release v1.16.0 contains 40 bug fixes, 31 improvements and 20 documentation updates
-**Binary distributions.** To improve the security, speed and reliability of new netdata installations, we are delivering our own, industry standard installation method, with binary package distributions. The RPM binaries for the most common OSs are already available on packagecloud and we’ll have the DEB ones available very soon. All distributions are considered in Beta and, as always, we depend on our amazing community for feedback on improvements.
+**Binary distributions.** To improve the security, speed and reliability of new Netdata installations, we are delivering our own, industry standard installation method, with binary package distributions. The RPM binaries for the most common OSs are already available on packagecloud and we’ll have the DEB ones available very soon. All distributions are considered in Beta and, as always, we depend on our amazing community for feedback on improvements.
- Our stable distributions are at [netdata/netdata @ packagecloud.io](https://packagecloud.io/netdata/netdata)
- The nightly builds are at [netdata/netdata-edge @ packagecloud.io](https://packagecloud.io/netdata/netdata-edge)
**Netdata now supports TLS encryption!** You can secure the communication to the [web server](https://docs.netdata.cloud/web/server/#enabling-tls-support), the [streaming connections from slaves to the master](https://docs.netdata.cloud/streaming/#securing-the-communication) and the connection to an [openTSDB backend](https://docs.netdata.cloud/backends/opentsdb/#https).
-**This version also brings two long-awaited features to netdata’s health monitoring:**
+**This version also brings two long-awaited features to Netdata’s health monitoring:**
- - The [health management API](https://docs.netdata.cloud/web/api/health/#health-management-api) introduced in v1.12 allowed you to easily disable alarms and/or notifications while netdata was running. However, those changes were not persisted across netdata restarts. Since part of routine maintenance activities may involve completely restarting a monitoring node, netdata now saves these configurations to disk, every time you issue a command to change the silencer settings. The new [LIST command](https://docs.netdata.cloud/web/api/health/#list-silencers) of the API allows you to view at any time which alarms are currently disabled or silenced.
- - A way for netdata to [repeatedly send alarm notifications](https://docs.netdata.cloud/health/#alarm-line-repeat) for some, or all active alarms, at a frequency of your choosing. As a result, you will no longer have to worry about missing a notification, forgetting about a raised alarm. The default is still to only send a single notification, so that existing users are not surprised by a different behavior.
+ - The [health management API](https://docs.netdata.cloud/web/api/health/#health-management-api) introduced in v1.12 allowed you to easily disable alarms and/or notifications while Netdata was running. However, those changes were not persisted across Netdata restarts. Since part of routine maintenance activities may involve completely restarting a monitoring node, Netdata now saves these configurations to disk every time you issue a command to change the silencer settings. The new [LIST command](https://docs.netdata.cloud/web/api/health/#list-silencers) of the API allows you to view at any time which alarms are currently disabled or silenced.
+ - A way for Netdata to [repeatedly send alarm notifications](https://docs.netdata.cloud/health/#alarm-line-repeat) for some, or all, active alarms, at a frequency of your choosing. As a result, you will no longer have to worry about missing a notification and forgetting about a raised alarm. The default is still to send only a single notification, so that existing users are not surprised by a different behavior.
As always, we’ve introduced new collectors, 5 of them this time:
diff --git a/REDISTRIBUTED.md b/REDISTRIBUTED.md
index b0fac2e75f..7be569ae2c 100644
--- a/REDISTRIBUTED.md
+++ b/REDISTRIBUTED.md
@@ -1,16 +1,16 @@
# Redistributed software
-netdata copyright info:
+Netdata copyright info:
Copyright 2016-2018, Costa Tsaousis.
Copyright 2018, Netdata Inc.
Released under [GPL v3 or later](LICENSE).
-netdata uses SPDX license tags to identify the license for its files.
+Netdata uses SPDX license tags to identify the license for its files.
Individual licenses referenced in the tags are available on the [SPDX project site](http://spdx.org/licenses/).
-netdata redistributes the following third-party software.
+Netdata redistributes the following third-party software.
We have decided to redistribute all these, instead of using them
-through a CDN, to allow netdata to work in cases where Internet
+through a CDN, to allow Netdata to work in cases where Internet
connectivity is not available.
- [Dygraphs](http://dygraphs.com/)
diff --git a/backends/README.md b/backends/README.md
index ef5baa1b6b..86f08325ff 100644
--- a/backends/README.md
+++ b/backends/README.md
@@ -1,15 +1,15 @@
# Metrics long term archiving
-netdata supports backends for archiving the metrics, or providing long term dashboards,
+Netdata supports backends for archiving the metrics, or providing long term dashboards,
using Grafana or other tools, like this:

-Since netdata collects thousands of metrics per server per second, which would easily congest any backend
-server when several netdata servers are sending data to it, netdata allows sending metrics at a lower
+Since Netdata collects thousands of metrics per server per second, which would easily congest any backend
+server when several Netdata servers are sending data to it, Netdata allows sending metrics at a lower
frequency, by resampling them.
-So, although netdata collects metrics every second, it can send to the backend servers averages or sums every
+So, although Netdata collects metrics every second, it can send to the backend servers averages or sums every
X seconds (though, it can send them per second if you need it to).
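+For example, a minimal sketch of the two settings involved (the values are illustrative; both options are documented in the configuration section below):
+
+```
+[backend]
+    # send normalized averages instead of raw collected values
+    data source = average
+    # resample to one data point every 10 seconds
+    update every = 10
+```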
## features
@@ -30,7 +30,7 @@ X seconds (though, it can send them per second if you need it to).
metrics are sent to a document db, `JSON` formatted.
- - **prometheus** is described at [prometheus page](prometheus/) since it pulls data from netdata.
+ - **prometheus** is described at [prometheus page](prometheus/) since it pulls data from Netdata.
- **prometheus remote write** (a binary snappy-compressed protocol buffer encoding over HTTP used by
**Elasticsearch**, **Gnocchi**, **Graphite**, **InfluxDB**, **Kafka**, **OpenTSDB**,
@@ -54,26 +54,26 @@ X seconds (though, it can send them per second if you need it to).
So, counters are sent as counters and gauges are sent as gauges, much like all data collectors do.
For example, to calculate CPU utilization in this format, you need to know how to convert kernel ticks to percentage.
- - `average` sends to backends normalized metrics from the netdata database.
- In this mode, all metrics are sent as gauges, in the units netdata uses. This abstracts data collection
+ - `average` sends to backends normalized metrics from the Netdata database.
+ In this mode, all metrics are sent as gauges, in the units Netdata uses. This abstracts data collection
and simplifies visualization, but you will not be able to copy and paste queries from other sources to convert units.
- For example, CPU utilization percentage is calculated by netdata, so netdata will convert ticks to percentage and
+ For example, CPU utilization percentage is calculated by Netdata, so Netdata will convert ticks to percentage and
send the average percentage to the backend.
- - `sum` or `volume`: the sum of the interpolated values shown on the netdata graphs is sent to the backend.
- So, if netdata is configured to send data to the backend every 10 seconds, the sum of the 10 values shown on the
- netdata charts will be used.
+ - `sum` or `volume`: the sum of the interpolated values shown on the Netdata graphs is sent to the backend.
+ So, if Netdata is configured to send data to the backend every 10 seconds, the sum of the 10 values shown on the
+ Netdata charts will be used.
Time-series databases suggest collecting the raw values (`as-collected`). If you plan to invest in building your monitoring around a time-series database and you already know (or will invest in learning) how to convert units and normalize the metrics in Grafana or other visualization tools, we suggest using `as-collected`.
-If, on the other hand, you just need long term archiving of netdata metrics and you plan to mainly work with netdata, we suggest to use `average`. It decouples visualization from data collection, so it will generally be a lot simpler. Furthermore, if you use `average`, the charts shown in the back-end will match exactly what you see in Netdata, which is not necessarily true for the other modes of operation.
+If, on the other hand, you just need long-term archiving of Netdata metrics and you plan to mainly work with Netdata, we suggest using `average`. It decouples visualization from data collection, so it will generally be a lot simpler. Furthermore, if you use `average`, the charts shown in the back-end will match exactly what you see in Netdata, which is not necessarily true for the other modes of operation.
-5. This code is smart enough, not to slow down netdata, independently of the speed of the backend server.
+5. This code is smart enough not to slow down Netdata, regardless of the speed of the backend server.
## configuration
In `/etc/netdata/netdata.conf` you should have something like this (if not, download the latest version
-of `netdata.conf` from your netdata):
+of `netdata.conf` from your Netdata):
```
[backend]
@@ -82,7 +82,7 @@ of `netdata.conf` from your netdata):
host tags = list of TAG=VALUE
destination = space separated list of [PROTOCOL:]HOST[:PORT] - the first working will be used, or a region for kinesis
data source = average | sum | as collected
   prefix = netdata
hostname = my-name
update every = 10
buffer on failures = 10
@@ -122,13 +122,13 @@ of `netdata.conf` from your netdata):
destination = [ffff:...:0001]:2003 10.11.12.1:2003
```
- When multiple servers are defined, netdata will try the next one when the first one fails. This allows
- you to load-balance different servers: give your backend servers in different order on each netdata.
+ When multiple servers are defined, Netdata will try the next one when the first one fails. This allows
+   you to load-balance different servers: give your backend servers in a different order on each Netdata.
- netdata also ships [`nc-backend.sh`](nc-backend.sh),
+ Netdata also ships [`nc-backend.sh`](nc-backend.sh),
a script that can be used as a fallback backend to save the metrics to disk and push them to the
time-series database when it becomes available again. It can also be used to monitor / trace / debug
- the metrics netdata generates.
+ the metrics Netdata generates.
For kinesis backend `destination` should be set to an AWS region (for example, `us-east-1`).
@@ -138,16 +138,16 @@ of `netdata.conf` from your netdata):
- `hostname = my-name`, is the hostname to be used for sending data to the backend server. By default
this is `[global].hostname`.
- `prefix = netdata`, is the prefix to add to all metrics.
-- `update every = 10`, is the number of seconds between sending data to the backend. netdata will add
- some randomness to this number, to prevent stressing the backend server when many netdata servers send
+- `update every = 10`, is the number of seconds between sending data to the backend. Netdata will add
+ some randomness to this number, to prevent stressing the backend server when many Netdata servers send
data to the same backend. This randomness does not affect the quality of the data, only the time they
are sent.
- `buffer on failures = 10`, is the number of iterations (each iteration is `[backend].update every` seconds)
to buffer data, when the backend is not available. If the backend fails to receive the data after that
- many failures, data loss on the backend is expected (netdata will also log it).
+ many failures, data loss on the backend is expected (Netdata will also log it).
- `timeout ms = 20000`, is the timeout in milliseconds to wait for the backend server to process the data.
By default this is `2 * update_every * 1000`.
@@ -155,7 +155,7 @@ of `netdata.conf` from your netdata):
- `send hosts matching = localhost *` includes one or more space separated patterns, using ` * ` as wildcard
(any number of times within each pattern). The patterns are checked against the hostname (the localhost
is always checked as `localhost`), allowing us to filter which hosts will be sent to the backend when
- this netdata is a central netdata aggregating multiple hosts. A pattern starting with ` ! ` gives a
+ this Netdata is a central Netdata aggregating multiple hosts. A pattern starting with ` ! ` gives a
negative match. So to match all hosts named `*db*` except hosts containing `*slave*`, use
`!*slave* *db*` (so, the order is important: the first pattern matching the hostname will be used - positive
or negative).
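+For example, a hypothetical setting for a central Netdata that should send the local host and all `*db*` hosts except the `*slave*` ones:
+
+```
+[backend]
+    send hosts matching = localhost !*slave* *db*
+```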
@@ -166,8 +166,8 @@ of `netdata.conf` from your netdata):
except charts ending in `*reads`, use `!*reads apps.*` (so, the order is important: the first pattern
matching the chart id or the chart name will be used - positive or negative).
-- `send names instead of ids = yes | no` controls the metric names netdata should send to backend.
- netdata supports names and IDs for charts and dimensions. Usually IDs are unique identifiers as read
+- `send names instead of ids = yes | no` controls the metric names Netdata should send to the backend.
+ Netdata supports names and IDs for charts and dimensions. Usually IDs are unique identifiers as read
by the system and names are human friendly labels (also unique). Most charts and metrics have the same
ID and name, but in several cases they are different: disks with device-mapper, interrupts, QoS classes,
statsd synthetic charts, etc.
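+For example, a sketch that switches the backend to human-friendly names:
+
+```
+[backend]
+    send names instead of ids = yes
+```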
@@ -176,26 +176,26 @@ of `netdata.conf` from your netdata):
These are currently only sent to opentsdb and prometheus. Please use the appropriate format for each
time-series db. For example opentsdb likes them like `TAG1=VALUE1 TAG2=VALUE2`, but prometheus like
`tag1="value1",tag2="value2"`. Host tags are mirrored with database replication (streaming of metrics
- between netdata servers).
+ between Netdata servers).
## monitoring operation
-netdata provides 5 charts:
+Netdata provides 5 charts:
-1. **Buffered metrics**, the number of metrics netdata added to the buffer for dispatching them to the
+1. **Buffered metrics**, the number of metrics Netdata added to the buffer for dispatching them to the
backend server.
-2. **Buffered data size**, the amount of data (in KB) netdata added the buffer.
+2. **Buffered data size**, the amount of data (in KB) Netdata added to the buffer.
-3. ~~**Backend latency**, the time the backend server needed to process the data netdata sent.
+3. ~~**Backend latency**, the time the backend server needed to process the data Netdata sent.
If there was a re-connection involved, this includes the connection time.~~
- (this chart has been removed, because it only measures the time netdata needs to give the data
- to the O/S - since the backend servers do not ack the reception, netdata does not have any means
+ (this chart has been removed, because it only measures the time Netdata needs to give the data
+ to the O/S - since the backend servers do not ack the reception, Netdata does not have any means
to measure this properly).
-4. **Backend operations**, the number of operations performed by netdata.
+4. **Backend operations**, the number of operations performed by Netdata.
-5. **Backend thread CPU usage**, the CPU resources consumed by the netdata thread, that is responsible
+5. **Backend thread CPU usage**, the CPU resources consumed by the Netdata thread that is responsible
for sending the metrics to the backend server.

@@ -204,12 +204,12 @@ netdata provides 5 charts:
The latest version of the alarms configuration for monitoring the backend is [here](../health/health.d/backend.conf)
-netdata adds 4 alarms:
+Netdata adds 4 alarms:
1. `backend_last_buffering`, number of seconds since the last successful buffering of backend data
2. `backend_metrics_sent`, percentage of metrics sent to the backend server
3. `backend_metrics_lost`, number of metrics lost due to repeating failures to contact the backend server
-4. ~~`backend_slow`, the percentage of time between iterations needed by the backend time to process the data sent by netdata~~ (this was misleading and has been removed).
+4. ~~`backend_slow`, the percentage of time between iterations needed by the backend to process the data sent by Netdata~~ (this was misleading and has been removed).

diff --git a/backends/WALKTHROUGH.md b/backends/WALKTHROUGH.md
index d3666ef5db..7f1dda66f4 100644
--- a/backends/WALKTHROUGH.md
+++ b/backends/WALKTHROUGH.md
@@ -41,7 +41,7 @@ visibility into your application and systems performance.
## Getting Started - Netdata
To begin let’s create our container which we will install Netdata on. We need
-to run a container, forward the necessary port that netdata listens on, and
+to run a container, forward the necessary port that Netdata listens on, and
attach a tty so we can interact with the bash shell on the container. But
before we do this we want name resolution between the two containers to work.
In order to accomplish this we will create a user-defined network and attach
@@ -68,7 +68,7 @@ be sitting inside the shell of the container.
After we have entered the shell we can install Netdata. This process could not
be easier. If you take a look at [this link](../packaging/installer/#installation), the Netdata devs give us
-several one-liners to install netdata. I have not had any issues with these one
+several one-liners to install Netdata. I have not had any issues with these one
liners and their bootstrapping scripts so far (If you guys run into anything do
share). Run the following command in your container.
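+For example, the kickstart one-liner (the same command documented in the installation instructions):
+
+```
+bash <(curl -Ss https://my-netdata.io/kickstart.sh)
+```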
@@ -97,7 +97,7 @@ Netdata dashboard.

This CHART is called ‘system.cpu’, The FAMILY is cpu, and the DIMENSION we are
-observing is “system”. You can begin to draw links between the charts in netdata
+observing is “system”. You can begin to draw links between the charts in Netdata
to the prometheus metrics format in this manner.
## Prometheus
diff --git a/backends/aws_kinesis/README.md b/backends/aws_kinesis/README.md
index 4249790975..b5726d2dbd 100644
--- a/backends/aws_kinesis/README.md
+++ b/backends/aws_kinesis/README.md
@@ -1,8 +1,8 @@
-# Using netdata with AWS Kinesis Data Streams
+# Using Netdata with AWS Kinesis Data Streams
## Prerequisites
-To use AWS Kinesis as a backend AWS SDK for C++ should be [installed](https://docs.aws.amazon.com/en_us/sdk-for-cpp/v1/developer-guide/setup.html) first. `libcrypto`, `libssl`, and `libcurl` are also required to compile netdata with Kinesis support enabled. Next, netdata should be re-installed from the source. The installer will detect that the required libraries are now available.
+To use AWS Kinesis as a backend, the AWS SDK for C++ should be [installed](https://docs.aws.amazon.com/en_us/sdk-for-cpp/v1/developer-guide/setup.html) first. `libcrypto`, `libssl`, and `libcurl` are also required to compile Netdata with Kinesis support enabled. Next, Netdata should be re-installed from source. The installer will detect that the required libraries are now available.
If the AWS SDK for C++ is being installed from source, it is useful to set `-DBUILD_ONLY="kinesis"`. Otherwise, the build process could take a very long time. Note that the default installation path for the libraries is `/usr/local/lib64`. Many Linux distributions don't include this path as a default one for a library search, so it is advisable to use the following options to `cmake` while building the AWS SDK:
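+A sketch of such an invocation (the `/usr` prefix and the source path are assumptions; adjust them for your system):
+
+```
+# assumption: install under /usr so the system linker finds the libraries
+cmake -DCMAKE_INSTALL_PREFIX=/usr -DBUILD_ONLY="kinesis" <path-to-aws-sdk-source>
+```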
@@ -21,7 +21,7 @@ To enable data sending to the kinesis backend set the following options in `netd
```
Set the `destination` option to an AWS region.
-In the netdata configuration directory run `./edit-config aws_kinesis.conf` and set AWS credentials and stream name:
+In the Netdata configuration directory run `./edit-config aws_kinesis.conf` and set AWS credentials and stream name:
```
# AWS credentials
aws_access_key_id = your_access_key_id
@@ -32,7 +32,7 @@ stream name = your_stream_name
```
Alternatively, AWS credentials can be set for the *netdata* user using AWS SDK for C++ [standard methods](https://docs.aws.amazon.com/sdk-for-cpp/v1/developer-guide/credentials.html).
-A partition key for every record is computed automatically by the netdata with the purpose to distribute records across available shards evenly.
+A partition key for every record is computed automatically by Netdata in order to distribute records evenly across available shards.
diff --git a/backends/prometheus/README.md b/backends/prometheus/README.md
index 6b070dea8d..f879ba35a3 100644
--- a/backends/prometheus/README.md
+++ b/backends/prometheus/README.md
@@ -1,32 +1,32 @@
-# Using netdata with Prometheus
+# Using Netdata with Prometheus
-> IMPORTANT: the format netdata sends metrics to prometheus has changed since netdata v1.7. The new prometheus backend for netdata supports a lot more features and is aligned to the development of the rest of the netdata backends.
+> IMPORTANT: the format in which Netdata sends metrics to prometheus has changed since Netdata v1.7. The new prometheus backend for Netdata supports a lot more features and is aligned with the development of the rest of the Netdata backends.
-Prometheus is a distributed monitoring system which offers a very simple setup along with a robust data model. Recently netdata added support for Prometheus. I'm going to quickly show you how to install both netdata and prometheus on the same server. We can then use grafana pointed at Prometheus to obtain long term metrics netdata offers. I'm assuming we are starting at a fresh ubuntu shell (whether you'd like to follow along in a VM or a cloud instance is up to you).
+Prometheus is a distributed monitoring system which offers a very simple setup along with a robust data model. Recently Netdata added support for Prometheus. I'm going to quickly show you how to install both Netdata and Prometheus on the same server. We can then use Grafana pointed at Prometheus to obtain the long-term metrics Netdata offers. I'm assuming we are starting at a fresh Ubuntu shell (whether you'd like to follow along in a VM or a cloud instance is up to you).
-## Installing netdata and prometheus
+## Installing Netdata and prometheus
-### Installing netdata
+### Installing Netdata
-There are number of ways to install netdata according to [Installation](../../packaging/installer/#installation)
-The suggested way of installing the latest netdata and keep it upgrade automatically. Using one line installation:
+There are a number of ways to install Netdata, described in [Installation](../../packaging/installer/#installation).
+The suggested way is to install the latest Netdata and keep it upgraded automatically, using the one-line installation:
```
bash <(curl -Ss https://my-netdata.io/kickstart.sh)
```
-At this point we should have netdata listening on port 19999. Attempt to take your browser here:
+At this point we should have Netdata listening on port 19999. Point your browser to:
```
http://your.netdata.ip:19999
```
-*(replace `your.netdata.ip` with the IP or hostname of the server running netdata)*
+*(replace `your.netdata.ip` with the IP or hostname of the server running Netdata)*
### Installing Prometheus
-In order to install prometheus we are going to introduce our own systemd startup script along with an example of prometheus.yaml configuration. Prometheus needs to be pointed to your server at a specific target url for it to scrape netdata's api. Prometheus is always a pull model meaning netdata is the passive client within this architecture. Prometheus always initiates the connection with netdata.
+In order to install prometheus we are going to introduce our own systemd startup script along with an example `prometheus.yml` configuration. Prometheus needs to be pointed to your server at a specific target URL for it to scrape Netdata's API. Prometheus is always a pull model, meaning Netdata is the passive client within this architecture. Prometheus always initiates the connection with Netdata.
#### Download Prometheus
@@ -57,7 +57,7 @@ sudo tar -xvf /tmp/prometheus-2.3.2.linux-amd64.tar.gz -C /opt/prometheus --stri
We will use the following `prometheus.yml` file. Save it at `/opt/prometheus/prometheus.yml`.
-Make sure to replace `your.netdata.ip` with the IP or hostname of the host running netdata.
+Make sure to replace `your.netdata.ip` with the IP or hostname of the host running Netdata.
```yaml
# my global config
@@ -101,7 +101,7 @@ scrape_configs:
#source: [as-collected]
#
# server name for this prometheus - the default is the client IP
- # for netdata to uniquely identify it
+ # for Netdata to uniquely identify it
#server: ['prometheus1']
honor_labels: true
@@ -180,21 +180,21 @@ sudo systemctl enable prometheus
Prometheus should now start and listen on port 9090. Attempt to head there with your browser.
-If everything is working correctly when you fetch `http://your.prometheus.ip:9090` you will see a 'Status' tab. Click this and click on 'targets' We should see the netdata host as a scraped target.
+If everything is working correctly when you fetch `http://your.prometheus.ip:9090` you will see a 'Status' tab. Click this and then click on 'targets'. We should see the Netdata host as a scraped target.
---
## Netdata support for prometheus
-> IMPORTANT: the format netdata sends metrics to prometheus has changed since netdata v1.6. The new format allows easier queries for metrics and supports both `as collected` and normalized metrics.
+> IMPORTANT: the format in which Netdata sends metrics to prometheus has changed since Netdata v1.6. The new format allows easier queries for metrics and supports both `as collected` and normalized metrics.
-Before explaining the changes, we have to understand the key differences between netdata and prometheus.
+Before explaining the changes, we have to understand the key differences between Netdata and prometheus.
-### understanding netdata metrics
+### understanding Netdata metrics
##### charts
-Each chart in netdata has several properties (common to all its metrics):
+Each chart in Netdata has several properties (common to all its metrics):
- `chart_id` - uniquely identifies a chart.
@@ -208,32 +208,32 @@ Each chart in netdata has several properties (common to all its metrics):
##### dimensions
-Then each netdata chart contains metrics called `dimensions`. All the dimensions of a chart have the same units of measurement, and are contextually in the same category (ie. the metrics for disk bandwidth are `read` and `write` and they are both in the same chart).
+Then each Netdata chart contains metrics called `dimensions`. All the dimensions of a chart have the same units of measurement, and are contextually in the same category (i.e. the metrics for disk bandwidth are `read` and `write` and they are both in the same chart).
-### netdata data source
+### Netdata data source
Netdata can send metrics to prometheus from 3 data sources:
-- `as collected` or `raw` - this data source sends the metrics to prometheus as they are collected. No conversion is done by netdata. The latest value for each metric is just given to prometheus. This is the most preferred method by prometheus, but it is also the harder to work with. To work with this data source, you will need to understand how to get meaningful values out of them.
+- `as collected` or `raw` - this data source sends the metrics to prometheus as they are collected. No conversion is done by Netdata. The latest value for each metric is just given to prometheus. This is the method most preferred by prometheus, but it is also the hardest to work with. To work with this data source, you will need to understand how to get meaningful values out of the raw metrics.
The format of the metrics is: `CONTEXT{chart="CHART",family="FAMILY",dimension="DIMENSION"}`.
- If the metric is a counter (`incremental` in netdata lingo), `_total` is appended the context.
+  If the metric is a counter (`incremental` in Netdata lingo), `_total` is appended to the context.
- Unlike prometheus, netdata allows each dimension of a chart to have a different algorithm and conversion constants (`multiplier` and `divisor`). In this case, that the dimensions of a charts are heterogeneous, netdata will use this format: `CONTEXT_DIMENSION{chart="CHART",family="FAMILY"}`
+  Unlike prometheus, Netdata allows each dimension of a chart to have a different algorithm and conversion constants (`multiplier` and `divisor`). In this case, when the dimensions of a chart are heterogeneous, Netdata will use this format: `CONTEXT_DIMENSION{chart="CHART",family="FAMILY"}`
-- `average` - this data source uses the netdata database to send the metrics to prometheus as they are presented on the netdata dashboard. So, all the metrics are sent as gauges, at the units they are presented in the netdata dashboard charts. This is the easiest to work with.
+- `average` - this data source uses the Netdata database to send the metrics to prometheus as they are presented on the Netdata dashboard. So, all the metrics are sent as gauges, at the units they are presented in the Netdata dashboard charts. This is the easiest to work with.
The format of the metrics is: `CONTEXT_UNITS_average{chart="CHART",family="FAMILY",dimension="DIMENSION"}`.
- When this source is used, netdata keeps track of the last access time for each prometheus server fetching the metrics. This last access time is used at the subsequent queries of the same prometheus server to identify the time-frame the `average` will be calculated. So, no matter how frequently prometheus scrapes netdata, it will get all the database data. To identify each prometheus server, netdata uses by default the IP of the client fetching the metrics. If there are multiple prometheus servers fetching data from the same netdata, using the same IP, each prometheus server can append `server=NAME` to the URL. Netdata will use this `NAME` to uniquely identify the prometheus server.
+  When this source is used, Netdata keeps track of the last access time for each prometheus server fetching the metrics. This last access time is used on subsequent queries by the same prometheus server to identify the time-frame over which the `average` will be calculated. So, no matter how frequently prometheus scrapes Netdata, it will get all the database data. To identify each prometheus server, Netdata by default uses the IP of the client fetching the metrics. If there are multiple prometheus servers fetching data from the same Netdata, using the same IP, each prometheus server can append `server=NAME` to the URL. Netdata will use this `NAME` to uniquely identify the prometheus server.
- `sum` or `volume`, is like `average` but instead of averaging the values, it sums them.
The format of the metrics is: `CONTEXT_UNITS_sum{chart="CHART",family="FAMILY",dimension="DIMENSION"}`.
All the other operations are the same with `average`.
-Keep in mind that early versions of netdata were sending the metrics as: `CHART_DIMENSION{}`.
+Keep in mind that early versions of Netdata were sending the metrics as: `CHART_DIMENSION{}`.
### Querying Metrics
@@ -241,11 +241,11 @@ Fetch with your web browser this URL:
`http://your.netdata.ip:19999/api/v1/allmetrics?format=prometheus&help=yes`
-*(replace `your.netdata.ip` with the ip or hostname of your netdata server)*
+*(replace `your.netdata.ip` with the ip or hostname of your Netdata server)*
-netdata will respond with all the metrics it sends to prometheus.
+Netdata will respond with all the metrics it sends to prometheus.
-If you search that page for `"system.cpu"` you will find all the metrics netdata is exporting to prometheus for this chart. `system.cpu` is the chart name on the netdata dashboard (on the netdata dashboard all charts have a text heading such as : `Total CPU utilization (system.cpu)`. What we are interested here in the chart name: `system.cpu`).
+If you search that page for `"system.cpu"` you will find all the metrics Netdata is exporting to prometheus for this chart. `system.cpu` is the chart name on the Netdata dashboard (on the Netdata dashboard all charts have a text heading such as: `Total CPU utilization (system.cpu)`. What we are interested in here is the chart name: `system.cpu`).
Searching for `"system.cpu"` reveals:
@@ -272,7 +272,7 @@ netdata_system_cpu_percentage_average{chart="system.cpu",family="cpu",dimension=
# COMMENT netdata_system_cpu_percentage_average: dimension "idle", value is percentage, gauge, dt 1500066653 to 1500066662 inclusive
netdata_system_cpu_percentage_average{chart="system.cpu",family="cpu",dimension="idle"} 92.3630770 1500066662000
```
-*(netdata response for `system.cpu` with source=`average`)*
+*(Netdata response for `system.cpu` with source=`average`)*
In `average` or `sum` data sources, all values are normalized and are reported to prometheus as gauges. Now, use the 'expression' text form in prometheus. Begin to type the metrics we are looking for: `netdata_system_cpu`. You should see that the text form begins to auto-fill as prometheus knows about this metric.
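+For example, a hypothetical query that graphs all non-idle CPU dimensions of this chart:
+
+```
+netdata_system_cpu_percentage_average{chart="system.cpu",dimension!="idle"}
+```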
@@ -302,13 +302,13 @@ netdata_system_cpu_total{chart="system.cpu",family="cpu",dimension="iowait"} 233
netdata_system_cpu_total{chart="system.cpu",family="cpu",dimension="idle"} 918470 1500066716438
```
-*(netdata response for `system.cpu` with source=`as-collected`)*
+*(Netdata response for `system.cpu` with source=`as-collected`)*
For more information check prometheus documentation.
### Streaming data from upstream hosts
-The `format=prometheus` parameter only exports the host's netdata metrics. If you are using the master/slave functionality of netdata this ignores any upstream hosts - so you should consider using the below in your **prometheus.yml**:
+The `format=prometheus` parameter only exports the host's Netdata metrics. If you are using the master/slave functionality of Netdata, this ignores any upstream hosts, so you should consider using the configuration below in your **prometheus.yml**:
```
metrics_path: '/api/v1/allmetrics'
@@ -321,13 +321,13 @@ This will report all upstream host data, and `honor_labels` will make Prometheus
### Timestamps
-To pass the metrics through prometheus pushgateway, netdata supports the option `&timestamps=no` to send the metrics without timestamps.
+To pass the metrics through prometheus pushgateway, Netdata supports the option `&timestamps=no` to send the metrics without timestamps.
## Netdata host variables
-netdata collects various system configuration metrics, like the max number of TCP sockets supported, the max number of files allowed system-wide, various IPC sizes, etc. These metrics are not exposed to prometheus by default.
+Netdata collects various system configuration metrics, like the max number of TCP sockets supported, the max number of files allowed system-wide, various IPC sizes, etc. These metrics are not exposed to prometheus by default.
-To expose them, append `variables=yes` to the netdata URL.
+To expose them, append `variables=yes` to the Netdata URL.
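+For example (replace `your.netdata.ip` as before):
+
+```
+http://your.netdata.ip:19999/api/v1/allmetrics?format=prometheus&variables=yes
+```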
### TYPE and HELP
@@ -335,7 +335,7 @@ To save bandwidth, and because prometheus does not use them anyway, `# TYPE` and
### Names and IDs
-netdata supports names and IDs for charts and dimensions. Usually IDs are unique identifiers as read by the system and names are human friendly labels (also unique).
+Netdata supports names and IDs for charts and dimensions. Usually IDs are unique identifiers as read by the system and names are human friendly labels (also unique).
Most charts and metrics have the same ID and name, but in several cases they are different: disks with device-mapper, interrupts, QoS classes, statsd synthetic charts, etc.
@@ -353,7 +353,7 @@ You can overwrite it from prometheus, by appending to the URL:
### Filtering metrics sent to prometheus
-netdata can filter the metrics it sends to prometheus with this setting:
+Netdata can filter the metrics it sends to prometheus with this setting:
```
[backend]
@@ -362,9 +362,9 @@ netdata can filter the metrics it sends to prometheus with this setting:
This setting accepts a space-separated list of patterns to match the **charts** to be sent to prometheus. Each pattern can use ` * ` as a wildcard, any number of times (e.g. `*a*b*c*` is valid). Patterns starting with ` ! ` give a negative match (e.g. `!*.bad users.* groups.*` will send all the users and groups except the `bad` user and `bad` group). The order is important: the first match (positive or negative), left to right, is used.
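+For example, a hypothetical filter that sends all `apps.*` charts except those ending in `reads`, plus all `system.*` charts:
+
+```
+[backend]
+    send charts matching = !*reads apps.* system.*
+```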
-### Changing the prefix of netdata metrics
+### Changing the prefix of Netdata metrics
-netdata sends all metrics prefixed with `netdata_`. You can change this in `netdata.conf`, like this:
+Netdata sends all metrics prefixed with `netdata_`. You can change this in `netdata.conf`, like this:
```
[backend]
@@ -383,8 +383,8 @@ To get the metric names as they were before v1.12, append to the URL `&oldunits=
### Accuracy of `average` and `sum` data sources
-When the data source is set to `average` or `sum`, netdata remembers the last access of each client accessing prometheus metrics and uses this last access time to respond with the `average` or `sum` of all the entries in the database since that. This means that prometheus servers are not losing data when they access netdata with data source = `average` or `sum`.
+When the data source is set to `average` or `sum`, Netdata remembers the last access of each client accessing prometheus metrics and uses this last access time to respond with the `average` or `sum` of all the entries in the database since that access. This means that prometheus servers do not lose data when they access Netdata with data source = `average` or `sum`.
-To uniquely identify each prometheus server, netdata uses the IP of the client accessing the metrics. If however the IP is not good enough for identifying a single prometheus server (e.g. when prometheus servers are accessing netdata through a web proxy, or when multiple prometheus servers are NATed to a single IP), each prometheus may append `&server=NAME` to the URL. This `NAME` is used by netdata to uniquely identify each prometheus server and keep track of its last access time.
+To uniquely identify each prometheus server, Netdata uses the IP of the client accessing the metrics. If, however, the IP is not good enough for identifying a single prometheus server (e.g. when prometheus servers access Netdata through a web proxy, or when multiple prometheus servers are NATed to a single IP), each prometheus server may append `&server=NAME` to the URL. This `NAME` is used by Netdata to uniquely identify each prometheus server and keep track of its last access time.
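+For example, a hypothetical scrape URL for a prometheus server that identifies itself as `prometheus1`:
+
+```
+http://your.netdata.ip:19999/api/v1/allmetrics?format=prometheus&server=prometheus1
+```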
diff --git a/backends/prometheus/remote_write/README.md b/backends/prometheus/remote_write/README.md
index 73cb1daf5a..2baa00fa09 100644
--- a/backends/prometheus/remote_write/README.md
+++ b/backends/prometheus/remote_write/README.md
@@ -2,7 +2,7 @@
## Prerequisites
-To use the prometheus remote write API with [storage providers](https://prometheus.io/docs/operating/integrations/#remote-endpoints-and-storage) [protobuf](https://developers.google.com/protocol-buffers/) and [snappy](https://github.com/google/snappy) libraries should be installed first. Next, netdata should be re-installed from the source. The installer will detect that the required libraries and utilities are now available.
+To use the prometheus remote write API with [storage providers](https://prometheus.io/docs/operating/integrations/#remote-endpoints-and-storage), the [protobuf](https://developers.google.com/protocol-buffers/) and [snappy](https://github.com/google/snappy) libraries should be installed first. Next, Netdata should be re-installed from source. The installer will detect that the required libraries and utilities are now available.
## Configuration
diff --git a/collectors/README.md b/collectors/README.md
index 7252138893..1407cb16cb 100644
--- a/collectors/README.md
+++ b/collectors/README.md
@@ -1,20 +1,20 @@
# Data collection plugins
-netdata supports **internal** and **external** data collection plugins:
+Netdata supports **internal** and **external** data collection plugins:
-- **internal** plugins are written in `C` and run as threads inside the netdata daemon.
+- **internal** plugins are written in `C` and run as threads inside the `netdata` daemon.
-- **external** plugins may be written in any computer language and are spawn as independent long-running processes by the netdata daemon.
- They communicate with the netdata daemon via `pipes` (`stdout` communication).
+- **external** plugins may be written in any computer language and are spawned as independent long-running processes by the `netdata` daemon.
+ They communicate with the `netdata` daemon via `pipes` (`stdout` communication).
-To minimize the number of processes spawn for data collection, netdata also supports **plugin orchestrators**.
+To minimize the number of processes spawned for data collection, Netdata also supports **plugin orchestrators**.
- **plugin orchestrators** are external plugins that do not collect any data by themselves.
Instead they support data collection **modules** written in the language of the orchestrator.
Usually the orchestrator provides a higher level abstraction, making it ideal for writing new
data collection modules with the minimum of code.
- Currently netdata provides plugin orchestrators
+   Currently Netdata provides the following plugin orchestrators:
BASH v4+ [charts.d.plugin](charts.d.plugin/),
node.js [node.d.plugin](node.d.plugin/) and
python v2+ (including v3) [python.d.plugin](python.d.plugin/).
@@ -42,7 +42,7 @@ plugin|lang|O/S|runs as|modular|description
[plugins.d](plugins.d/)|`C`|any|internal|-|implements the **external plugins** API and serves external plugins
[proc.plugin](proc.plugin/)|`C`|linux|internal|yes|collects resource usage and performance data on Linux systems
[python.d.plugin](python.d.plugin/)|`python` v2+|any|external|yes|a **plugin orchestrator** for data collection modules written in `python` v2 or v3 (both are supported).
-[statsd.plugin](statsd.plugin/)|`C`|any|internal|-|implements a high performance **statsd** server for netdata
+[statsd.plugin](statsd.plugin/)|`C`|any|internal|-|implements a high performance **statsd** server for Netdata
[tc.plugin](tc.plugin/)|`C`|linux|internal|-|collects traffic QoS metrics (`tc`) of Linux network interfaces
## Enabling and Disabling plugins
@@ -59,7 +59,7 @@ All **external plugins** are managed by [plugins.d](plugins.d/), which provides
### Internal Plugins
-Each of the internal plugins runs as a thread inside the netdata daemon.
+Each of the internal plugins runs as a thread inside the `netdata` daemon.
Once this thread has started, the plugin may spawn additional threads according to its design.
#### Internal Plugins API
@@ -72,7 +72,7 @@ collect_data() {
collected_number collected_value = collect_a_value();
- // give the metrics to netdata
+ // give the metrics to Netdata
static RRDSET *st = NULL; // the chart
static RRDDIM *rd = NULL; // a dimension attached to this chart
@@ -100,20 +100,19 @@ collect_data() {
}
else {
// this chart is already created
- // let netdata know we start a new iteration on it
+ // let Netdata know we start a new iteration on it
rrdset_next(st);
}
// give the collected value(s) to the chart
rrddim_set_by_pointer(st, rd, collected_value);
- // signal netdata we are done with this iteration
+ // signal Netdata we are done with this iteration
rrdset_done(st);
}
```
-Of course netdata has a lot of libraries to help you also in collecting the metrics.
-The best way to find your way through this, is to examine what other similar plugins do.
+Of course, Netdata also has a lot of libraries to help you collect the metrics. The best way to find your way through this is to examine what other similar plugins do.
### External Plugins
diff --git a/collectors/apps.plugin/README.md b/collectors/apps.plugin/README.md
index ee5c6971ab..bf57ea648f 100644
--- a/collectors/apps.plugin/README.md
+++ b/collectors/apps.plugin/README.md
@@ -5,9 +5,9 @@
To achieve this task, it iterates through the whole process tree, collecting resource usage information
for every process found running.
-Since netdata needs to present this information in charts and track them through time,
+Since Netdata needs to present this information in charts and track them through time,
instead of presenting a `top` like list, `apps.plugin` uses a pre-defined list of **process groups**
-to which it assigns all running processes. This list is [customizable](apps_groups.conf) and netdata
+to which it assigns all running processes. This list is [customizable](apps_groups.conf) and Netdata
ships with a good default for most cases (to edit it on your system run `/etc/netdata/edit-config apps_groups.conf`).
So, `apps.plugin` builds a process tree (much like `ps fax` does in Linux), and groups
@@ -15,7 +15,7 @@ processes together (evaluating both child and parent processes) so that the resu
a predefined set of members (of course, only process groups found running are reported).
> If you find that `apps.plugin` categorizes standard applications as `other`, we would be
-> glad to accept pull requests improving the [defaults](apps_groups.conf) shipped with netdata.
+> glad to accept pull requests improving the [defaults](apps_groups.conf) shipped with Netdata.
Unlike traditional process monitoring tools (like `top`), `apps.plugin` is able to account for the resource
utilization of exited processes. Their utilization is accounted to their currently running parents.
@@ -26,9 +26,9 @@ that fork/spawn other short lived processes hundreds of times per second.
`apps.plugin` provides charts for 3 sections:
-1. Per application charts as **Applications** at netdata dashboards
-2. Per user charts as **Users** at netdata dashboards
-3. Per user group charts as **User Groups** at netdata dashboards
+1. Per application charts as **Applications** at Netdata dashboards
+2. Per user charts as **Users** at Netdata dashboards
+3. Per user group charts as **User Groups** at Netdata dashboards
Each of these sections provides the same number of charts:
@@ -64,7 +64,7 @@ The above are reported:
`apps.plugin` is a complex piece of software and has a lot of work to do.
We are proud that `apps.plugin` is a lot faster than any other similar tool,
while collecting a lot more information about the processes. However, the fact is that
-this plugin requires more CPU resources than the netdata daemon itself.
+this plugin requires more CPU resources than the `netdata` daemon itself.
Under Linux, for each process running, `apps.plugin` reads several `/proc` files
per process. Doing this work per-second, especially on hosts with several thousands
@@ -135,14 +135,14 @@ The order of the entries in this list is important: the first that matches a pro
ones at the top. Processes not matched by any row will inherit their group from their parents or children.
The order also controls the order of the dimensions on the generated charts (although applications started
-after apps.plugin is started, will be appended to the existing list of dimensions the netdata daemon maintains).
+after `apps.plugin` is started will be appended to the existing list of dimensions the `netdata` daemon maintains).
## Permissions
`apps.plugin` requires additional privileges to collect all the information it needs.
The problem is described in issue #157.
-When netdata is installed, `apps.plugin` is given the capabilities `cap_dac_read_search,cap_sys_ptrace+ep`.
+When Netdata is installed, `apps.plugin` is given the capabilities `cap_dac_read_search,cap_sys_ptrace+ep`.
If this fails (i.e. `setcap` fails), `apps.plugin` is setuid to `root`.
#### linux capabilities in containers
@@ -158,15 +158,15 @@ chown root:netdata /usr/libexec/netdata/plugins.d/apps.plugin
chmod 4750 /usr/libexec/netdata/plugins.d/apps.plugin
```
-You will have to run these, every time you update netdata.
+You will have to run these every time you update Netdata.
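Alternatively, on systems where file capabilities work, a rough equivalent of what the installer attempts (using the capabilities named above) would be:

```sh
# grant apps.plugin the file capabilities the installer sets
sudo setcap cap_dac_read_search,cap_sys_ptrace+ep /usr/libexec/netdata/plugins.d/apps.plugin
```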
## Security
`apps.plugin` performs a hard-coded function of building the process tree in memory,
-iterating forever, collecting metrics for each running process and sending them to netdata.
-This is a one-way communication, from `apps.plugin` to netdata.
+iterating forever, collecting metrics for each running process and sending them to Netdata.
+This is a one-way communication, from `apps.plugin` to Netdata.
-So, since `apps.plugin` cannot be instructed by netdata for the actions it performs,
+So, since `apps.plugin` cannot be instructed by Netdata for the actions it performs,
we think it is pretty safe to allow it to have these increased privileges.
Keep in mind that `apps.plugin` will still run without escalated permissions,
@@ -210,7 +210,7 @@ For more information about badges check [Generating Badges](../../web/api/badges
## Comparison with console tools
-Ssh to a server running netdata and execute this:
+SSH to a server running Netdata and execute this:
```sh
while true; do ls -l /var/run >/dev/null; done
@@ -318,24 +318,24 @@ FILE SYS Used Total 0.3 2.1 7009 netdata 0 S /usr/sbin/netdata
/ (vda1) 1.56G 29.5G 0.0 0.0 17 root 0 S oom_reaper
```
-#### why this happens?
+#### why does this happen?
All the console tools report usage based on the processes found running *at the moment they
examine the process tree*. So, they see just one `ls` command, which is actually very quick
with minor CPU utilization. But the shell is spawning hundreds of them, one after another
(much like shell scripts do).
-#### what netdata reports?
+#### what does Netdata report?
The total CPU utilization of the system:

-
_**Figure 1**: The system overview section at netdata, just a few seconds after the command was run_
+
_**Figure 1**: The system overview section at Netdata, just a few seconds after the command was run_
And at the applications `apps.plugin` breaks down CPU usage per application:

-
_**Figure 2**: The Applications section at netdata, just a few seconds after the command was run_
+
_**Figure 2**: The Applications section at Netdata, just a few seconds after the command was run_
So, the `ssh` session is using 95% CPU time.
@@ -344,7 +344,7 @@ Why `ssh`?
`apps.plugin` groups all processes based on its configuration file
[`/etc/netdata/apps_groups.conf`](apps_groups.conf)
(to edit it on your system run `/etc/netdata/edit-config apps_groups.conf`).
-The default configuration has nothing for `bash`, but it has for `sshd`, so netdata accumulates
+The default configuration has nothing for `bash`, but it has for `sshd`, so Netdata accumulates
all ssh sessions to a dimension on the charts, called `ssh`. This includes all the processes in
the process tree of `sshd`, **including the exited children**.
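For illustration, a hypothetical `apps_groups.conf` fragment (the group names and patterns below are examples, not the shipped defaults):

```
# format: group_name: space-separated process patterns (* is a wildcard)
ssh: sshd
backup: rsync
myapp: myapp-server myapp-worker*
```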
@@ -353,9 +353,9 @@ the process tree of `sshd`, **including the exited children**.
> `apps.plugin` does not use these mechanisms. The process grouping made by `apps.plugin` works
> on any Linux, `systemd` based or not.
-#### a more technical description of how netdata works
+#### a more technical description of how Netdata works
-netdata reads `/proc/
requests|warning|critical
The column `minimum requests` states the minimum number of requests required for the alarm to be evaluated. We found that when the site is receiving requests above this rate, these alarms are pretty accurate (i.e. no false-positives).
-[**netdata**](https://my-netdata.io/) alarms are user configurable. Sample config files can be found under directory `health/health.d` of the netdata github repository. So, even [`web_log` alarms can be adapted to your needs](../../../health/health.d/web_log.conf).
+[**Netdata**](https://my-netdata.io/) alarms are user-configurable. Sample config files can be found under the `health/health.d` directory of the [Netdata GitHub repository](https://github.com/netdata/netdata/). So, even [`web_log` alarms can be adapted to your needs](../../../health/health.d/web_log.conf).
diff --git a/collectors/statsd.plugin/README.md b/collectors/statsd.plugin/README.md
index 399918dc94..4df9860048 100644
--- a/collectors/statsd.plugin/README.md
+++ b/collectors/statsd.plugin/README.md
@@ -4,7 +4,7 @@ statsd is a system to collect data from any application. Applications are sendin
There is a [plethora of client libraries](https://github.com/etsy/statsd/wiki#client-implementations) for embedding statsd metrics into any application framework. This makes statsd quite popular for custom application metrics.
-netdata is a fully featured statsd server. It can collect statsd formatted metrics, visualize them on its dashboards, stream them to other netdata servers or archive them to backend time-series databases.
+Netdata is a fully featured statsd server. It can collect statsd formatted metrics, visualize them on its dashboards, stream them to other Netdata servers or archive them to backend time-series databases.
Netdata statsd is inside Netdata (an internal plugin, running inside the Netdata daemon). It is configured via `netdata.conf` and, by default, listens on the standard statsd ports (tcp and udp 8125 - yes, the Netdata statsd server supports both tcp and udp at the same time).
@@ -62,19 +62,19 @@ The application may append `|@sampling_rate`, where `sampling_rate` is a number
#### Overlapping metrics
-netdata statsd maintains different indexes for each of the types supported. This means the same metric `name` may exist under different types concurrently.
+Netdata's statsd server maintains different indexes for each of the types supported. This means the same metric `name` may exist under different types concurrently.
#### Multiple metrics per packet
-netdata accepts multiple metrics per packet if each is terminated with `\n`.
+Netdata accepts multiple metrics per packet if each is terminated with `\n`.
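As a quick sketch (assuming a statsd server listening on `localhost:8125`; the metric names are made up), several metrics can be packed into a single UDP datagram from the shell:

```sh
# send two metrics in one UDP packet, each terminated with \n
printf 'myapp.requests:1|c\nmyapp.latency:250|ms\n' | nc -u -w 1 localhost 8125
```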
#### TCP packets
-netdata listens for both TCP and UDP packets. For TCP though, is it important to always append `\n` on each metric. netdata uses this to detect if a metric is split into multiple TCP packets. On disconnect, even the remaining (non terminated with `\n`) buffer, is processed.
+Netdata listens for both TCP and UDP packets. For TCP, though, it is important to always append `\n` to each metric. Netdata uses this to detect if a metric is split into multiple TCP packets. On disconnect, any remaining buffer (not terminated with `\n`) is also processed.
#### UDP packets
-When sending multiple packets over UDP, it is important not to exceed the network MTU (usually 1500 bytes minus a few bytes for the headers). netdata will accept UDP packets up to 9000 bytes, but the underlying network will not exceed MTU.
+When sending multiple packets over UDP, it is important not to exceed the network MTU (usually 1500 bytes minus a few bytes for the headers). Netdata will accept UDP packets up to 9000 bytes, but the underlying network will not deliver packets larger than the MTU.
## configuration
@@ -107,7 +107,7 @@ This is the statsd configuration at `/etc/netdata/netdata.conf`:
### statsd main config options
- `enabled = yes|no`
- controls if statsd will be enabled for this netdata. The default is enabled.
+   controls if statsd will be enabled for this Netdata instance. The default is enabled.
- `default port = 8125`
@@ -117,15 +117,15 @@ This is the statsd configuration at `/etc/netdata/netdata.conf`:
is a space separated list of IPs and ports to listen to. The format is `PROTOCOL:IP:PORT` - if `PORT` is omitted, the `default port` will be used. If `IP` is IPv6, it needs to be enclosed in `[]`. `IP` can also be ` * ` (to listen on all IPs) or even a hostname.
-- `update every (flushInterval) = 1` seconds, controls the frequency statsd will push the collected metrics to netdata charts.
+- `update every (flushInterval) = 1` seconds, controls the frequency statsd will push the collected metrics to Netdata charts.
-- `decimal detail = 1000` controls the number of fractional digits in gauges and histograms. netdata collects metrics using signed 64 bit integers and their fractional detail is controlled using multipliers and divisors. This setting is used to multiply all collected values to convert them to integers and is also set as the divisors, so that the final data will be a floating point number with this fractional detail (1000 = X.0 - X.999, 10000 = X.0 - X.9999, etc).
+- `decimal detail = 1000` controls the number of fractional digits in gauges and histograms. Netdata collects metrics using signed 64 bit integers and their fractional detail is controlled using multipliers and divisors. This setting is used to multiply all collected values to convert them to integers, and is also used as the divisor, so that the final data will be a floating point number with this fractional detail (1000 = X.0 - X.999, 10000 = X.0 - X.9999, etc).
The rest of the settings are discussed below.
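Putting the options above together, a minimal sketch of the statsd section in `netdata.conf` (the values shown are the documented defaults):

```
[statsd]
    enabled = yes
    default port = 8125
    update every (flushInterval) = 1
    decimal detail = 1000
```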
## statsd charts
-netdata can visualize statsd collected metrics in 2 ways:
+Netdata can visualize statsd collected metrics in 2 ways:
1. Each metric gets its own **private chart**. This is the default and does not require any configuration (although there are a few options to tweak).
@@ -143,11 +143,11 @@ create private charts for metrics matching = !myapp.*.badmetric myapp.*
The default is to render private charts for all metrics.
-The `memory mode` of the round robin database and the `history` of private metric charts are controlled with `private charts memory mode` and `private charts history`. The defaults for both settings is to use the global netdata settings. So, you need to edit them only when you want statsd to use different settings compared to the global ones.
+The `memory mode` of the round robin database and the `history` of private metric charts are controlled with `private charts memory mode` and `private charts history`. The default for both settings is to use the global Netdata settings. So, you need to edit them only when you want statsd to use different settings compared to the global ones.
-If you have thousands of metrics, each with its own private chart, you may notice that your web browser becomes slow when you view the netdata dashboard (this is a web browser issue we need to address at the netdata UI). So, netdata has a protection to stop creating charts when `max private charts allowed = 200` (soft limit) is reached.
+If you have thousands of metrics, each with its own private chart, you may notice that your web browser becomes slow when you view the Netdata dashboard (this is a web browser issue we need to address at the Netdata UI). So, Netdata protects itself by stopping the creation of private charts when `max private charts allowed = 200` (soft limit) is reached.
-The metrics above this soft limit are still processed by netdata and will be available to be sent to backend time-series databases, up to `max private charts hard limit = 1000`. So, between 200 and 1000 charts, netdata will still generate charts, but they will automatically be created with `memory mode = none` (netdata will not maintain a database for them). These metrics will be sent to backend time series databases, if the backend configuration is set to `as collected`.
+The metrics above this soft limit are still processed by Netdata and will be available to be sent to backend time-series databases, up to `max private charts hard limit = 1000`. So, between 200 and 1000 charts, Netdata will still generate charts, but they will automatically be created with `memory mode = none` (Netdata will not maintain a database for them). These metrics will be sent to backend time series databases, if the backend configuration is set to `as collected`.
Metrics above the hard limit are still collected, but they can only be used in synthetic charts (once a metric is added to a chart, it will be sent to backend servers too).
@@ -217,7 +217,7 @@ Using synthetic charts, you can create dedicated sections on the dashboard to re
Synthetic charts are organized in
-- **applications** (i.e. entries at the main menu of the netdata dashboard)
+- **applications** (i.e. entries at the main menu of the Netdata dashboard)
- **charts for each application** (grouped in families - i.e. submenus at the dashboard menu)
- **statsd metrics for each chart** (i.e. dimensions of the charts)
@@ -257,11 +257,11 @@ Using the above configuration `myapp` should get its own section on the dashboar
`[app]` starts a new application definition. The supported settings in this section are:
- `name` defines the name of the app.
-- `metrics` is a netdata simple pattern (space separated patterns, using `*` for wildcard, possibly starting with `!` for negative match). This pattern should match all the possible statsd metrics that will be participating in the application `myapp`.
+- `metrics` is a Netdata simple pattern (space separated patterns, using `*` for wildcard, possibly starting with `!` for negative match). This pattern should match all the possible statsd metrics that will participate in the application `myapp`.
- `private charts = yes|no`, enables or disables private charts for the metrics matched.
- `gaps when not collected = yes|no`, enables or disables gaps on the charts of the application, when metrics are not collected.
-- `memory mode` sets the memory mode for all charts of the application. The default is the global default for netdata (not the global default for statsd private charts).
-- `history` sets the size of the round robin database for this application. The default is the global default for netdata (not the global default for statsd private charts).
+- `memory mode` sets the memory mode for all charts of the application. The default is the global default for Netdata (not the global default for statsd private charts).
+- `history` sets the size of the round robin database for this application. The default is the global default for Netdata (not the global default for statsd private charts).
`[dictionary]` defines name-value associations. These are used to rename metrics when they are added to synthetic charts. Metric names are also defined at each `dimension` line. However, using the dictionary, dimension names can be declared globally, for each app, and it is the only way to rename dimensions when using patterns. Of course, the dictionary can be empty or missing.
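To make the layout concrete, here is a hypothetical application definition in the spirit of the settings above (all metric, chart and dictionary names are illustrative, and optional settings are omitted):

```
[app]
    name = myapp
    metrics = myapp.*
    private charts = no
    gaps when not collected = no

[dictionary]
    myapp.requests = requests

[requests]
    name = requests
    title = myapp requests
    dimension = myapp.requests '' last 1 1
```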
@@ -281,7 +281,7 @@ So, the format is this:
dimension = [pattern] METRIC NAME TYPE MULTIPLIER DIVIDER OPTIONS
```
-`pattern` is a keyword. When set, `METRIC` is expected to be a netdata simple pattern that will be used to match all the statsd metrics to be added to the chart. So, `pattern` automatically matches any number of statsd metrics, all of which will be added as separate chart dimensions.
+`pattern` is a keyword. When set, `METRIC` is expected to be a Netdata simple pattern that will be used to match all the statsd metrics to be added to the chart. So, `pattern` automatically matches any number of statsd metrics, all of which will be added as separate chart dimensions.
`TYPE`, `MULTIPLIER`, `DIVIDER` and `OPTIONS` are optional.
@@ -336,13 +336,13 @@ and this synthetic chart:
The `[dictionary]` section accepts any number of `name = value` pairs.
-netdata uses this dictionary as follows:
+Netdata uses this dictionary as follows:
1. When a `dimension` has a non-empty `NAME`, that name is looked up at the dictionary.
2. If the above lookup gives nothing, or the `dimension` has an empty `NAME`, the original statsd metric name is looked up at the dictionary.
-3. If any of the above succeeds, netdata uses the `value` of the dictionary, to set the name of the dimension. The dimensions will have as ID the original statsd metric name, and as name, the dictionary value.
+3. If any of the above succeeds, Netdata uses the `value` of the dictionary to set the name of the dimension. The dimensions will have the original statsd metric name as ID, and the dictionary value as name.
So, you can use the dictionary in 2 ways:
@@ -351,11 +351,11 @@ So, you can use the dictionary in 2 ways:
In both cases, the dimension will be added with ID `myapp.metric1` and will be named `metric1 name`. So, in alarms you can use either of the 2 as `${myapp.metric1}` or `${metric1 name}`.
-> keep in mind that if you add multiple times the same statsd metric to a chart, netdata will append `TYPE` to the dimension ID, so `myapp.metric1` will be added as `myapp.metric1_last` or `myapp.metric1_events`, etc. If you add multiple times the same metric with the same `TYPE` to a chart, netdata will also append an incremental counter to the dimension ID, i.e. `myapp.metric1_last1`, `myapp.metric1_last2`, etc.
+> keep in mind that if you add the same statsd metric to a chart multiple times, Netdata will append `TYPE` to the dimension ID, so `myapp.metric1` will be added as `myapp.metric1_last` or `myapp.metric1_events`, etc. If you add the same metric with the same `TYPE` to a chart multiple times, Netdata will also append an incremental counter to the dimension ID, i.e. `myapp.metric1_last1`, `myapp.metric1_last2`, etc.
#### dimension patterns
-netdata allows adding multiple dimensions to a chart, by matching the statsd metrics with a netdata simple pattern.
+Netdata allows adding multiple dimensions to a chart, by matching the statsd metrics with a Netdata simple pattern.
Assume we have an API that provides statsd metrics for each response code per method it supports, like these:
@@ -382,7 +382,7 @@ To add all response codes of `myapp.api.get` to a chart use this:
dimension = pattern 'myapp.api.get.*' '' last 1 1
```
-The above will add dimension named `200`, `400` and `500` (yes, netdata extracts the wildcarded part of the metric name - so the dimensions will be named with whatever the `*` matched). You can rename the dimensions with this:
+The above will add dimensions named `200`, `400` and `500` (yes, Netdata extracts the wildcarded part of the metric name - so the dimensions will be named with whatever the `*` matched). You can rename the dimensions with this:
```
[dictionary]
@@ -435,17 +435,17 @@ Using the above, the dimensions will be added as `GET`, `ADD` and `DELETE`.
## interpolation
-~~If you send just one value to statsd, you will notice that the chart is created but no value is shown. The reason is that netdata interpolates all values at second boundaries. For incremental values (`counters` and `meters` in statsd terminology), if you send 10 at 00:00:00.500, 20 at 00:00:01.500 and 30 at 00:00:02.500, netdata will show 15 at 00:00:01 and 25 at 00:00:02.~~
+~~If you send just one value to statsd, you will notice that the chart is created but no value is shown. The reason is that Netdata interpolates all values at second boundaries. For incremental values (`counters` and `meters` in statsd terminology), if you send 10 at 00:00:00.500, 20 at 00:00:01.500 and 30 at 00:00:02.500, Netdata will show 15 at 00:00:01 and 25 at 00:00:02.~~
-~~This interpolation is automatic and global in netdata for all charts, for incremental values. This means that for the chart to start showing values you need to send 2 values across 2 flush intervals.~~
+~~This interpolation is automatic and global in Netdata for all charts, for incremental values. This means that for the chart to start showing values you need to send 2 values across 2 flush intervals.~~
-~~(although this is required for incremental values, netdata allows mixing incremental and absolute values on the same charts, so this little limitation [i.e. 2 values to start visualization], is applied on all netdata dimensions).~~
+~~(although this is required for incremental values, Netdata allows mixing incremental and absolute values on the same charts, so this little limitation [i.e. 2 values to start visualization], is applied on all Netdata dimensions).~~
(statsd metrics do not lose their first data collection due to interpolation anymore - fixed with [PR #2411](https://github.com/netdata/netdata/pull/2411))
## sending statsd metrics from shell scripts
-You can send/update statsd metrics from shell scripts. You can use this feature, to visualize in netdata automated jobs you run on your servers.
+You can send/update statsd metrics from shell scripts. You can use this feature to visualize, in Netdata, automated jobs you run on your servers.
The command you need to run is:
diff --git a/collectors/tc.plugin/README.md b/collectors/tc.plugin/README.md
index 4133b4f8db..170c5e9526 100644
--- a/collectors/tc.plugin/README.md
+++ b/collectors/tc.plugin/README.md
@@ -42,7 +42,7 @@ QoS is about 2 features:
1. **Monitoring the bandwidth used by services**
- netdata provides wonderful real-time charts, like this one (wait to see the orange `rsync` part):
+ Netdata provides wonderful real-time charts, like this one (wait to see the orange `rsync` part):

@@ -62,7 +62,7 @@ QoS is about 2 features:
   When your system is under a DDoS attack, it will receive far more traffic than it can handle and your applications will probably crash. Setting a limit on the inbound traffic using QoS will protect your servers (throttle the requests) and, depending on the size of the attack, may allow your legitimate users to access the server while the attack is taking place.
- Using QoS together with a [SYNPROXY](../proc.plugin/README.md#linux-anti-ddos) will provide a great degree of protection against most DDoS attacks. Actually when I wrote that article, a few folks tried to DDoS the netdata demo site to see in real-time the SYNPROXY operation. They did not do it right, but anyway a great deal of requests reached the netdata server. What saved netdata was QoS. The netdata demo server has QoS installed, so the requests were throttled and the server did not even reach the point of resource starvation. Read about it [here](../proc.plugin/README.md#linux-anti-ddos).
+   Using QoS together with a [SYNPROXY](../proc.plugin/README.md#linux-anti-ddos) will provide a great degree of protection against most DDoS attacks. Actually when I wrote that article, a few folks tried to DDoS the Netdata demo site to see the SYNPROXY operation in real-time. They did not do it right, but anyway a great number of requests reached the Netdata server. What saved Netdata was QoS. The Netdata demo server has QoS installed, so the requests were throttled and the server did not even reach the point of resource starvation. Read about it [here](../proc.plugin/README.md#linux-anti-ddos).
On top of all this, QoS is extremely light. You will configure it once, and this is it. It will not bother you again and it will not use any noticeable CPU resources, especially on application and database servers.
@@ -72,7 +72,7 @@ On top of all these, QoS is extremely light. You will configure it once, and thi
- ensure each end-user connection will get a fair cut of the available bandwidth.
-Once **traffic classification** is applied, we can use **[netdata](https://github.com/netdata/netdata)** to visualize the bandwidth consumption per class in real-time (no configuration is needed for netdata - it will figure it out).
+Once **traffic classification** is applied, we can use **[Netdata](https://github.com/netdata/netdata)** to visualize the bandwidth consumption per class in real-time (no configuration is needed for Netdata - it will figure it out).
QoS is extremely light. You will configure it once, and this is it. It will not bother you again and it will not use any noticeable CPU resources, especially on application and database servers.
@@ -115,10 +115,10 @@ To do it the hard way, you can go through the [tc configuration steps](#qos-conf
The **[FireHOL](https://firehol.org/)** package already distributes **[FireQOS](https://firehol.org/tutorial/fireqos-new-user/)**. Check the **[FireQOS tutorial](https://firehol.org/tutorial/fireqos-new-user/)** to learn how to write your own QoS configuration.
-With **[FireQOS](https://firehol.org/tutorial/fireqos-new-user/)**, it is **really simple for everyone to use QoS in Linux**. Just install the package `firehol`. It should already be available for your distribution. If not, check the **[FireHOL Installation Guide](https://firehol.org/installing/)**. After that, you will have the `fireqos` command which uses a configuration like the following `/etc/firehol/fireqos.conf`, used at the netdata demo site:
+With **[FireQOS](https://firehol.org/tutorial/fireqos-new-user/)**, it is **really simple for everyone to use QoS in Linux**. Just install the package `firehol`. It should already be available for your distribution. If not, check the **[FireHOL Installation Guide](https://firehol.org/installing/)**. After that, you will have the `fireqos` command which uses a configuration like the following `/etc/firehol/fireqos.conf`, used at the Netdata demo site:
```sh
- # configure the netdata ports
+ # configure the Netdata ports
server_netdata_ports="tcp/19999"
interface eth0 world bidirectional ethernet balanced rate 50Mbit
@@ -155,7 +155,7 @@ With **[FireQOS](https://firehol.org/tutorial/fireqos-new-user/)**, it is **real
match input src 10.2.3.5
```
-Nothing more is needed. You just run `fireqos start` to apply this configuration, restart netdata and you have real-time visualization of the bandwidth consumption of your applications. FireQOS is not a daemon. It will just convert the configuration to `tc` commands. It will run them and it will exit.
+Nothing more is needed. You just run `fireqos start` to apply this configuration, restart Netdata and you have real-time visualization of the bandwidth consumption of your applications. FireQOS is not a daemon. It will just convert the configuration to `tc` commands. It will run them and it will exit.
**IMPORTANT**: If you copy this configuration to apply it to your system, please adapt the speeds - experiment in non-production environments to learn the tool, before applying it on your servers.
@@ -191,7 +191,7 @@ Add the following configuration option in `/etc/netdata.conf`:
Finally, create `/etc/netdata/tc-qos-helper.conf` with this content:
```
tc_show="class"
```
-Please note, that by default Netdata will enable monitoring metrics only when they are not zero. If they are constantly zero they are ignored. Metrics that will start having values, after netdata is started, will be detected and charts will be automatically added to the dashboard (a refresh of the dashboard is needed for them to appear though). Set `yes` for a chart instead of `auto` to enable it permanently. You can also set the `enable zero metrics` option to `yes` in the `[global]` section which enables charts with zero metrics for all internal Netdata plugins.
+Please note that, by default, Netdata will enable monitoring metrics only when they are not zero. If they are constantly zero they are ignored. Metrics that start having values after Netdata is started will be detected and charts will be automatically added to the dashboard (a refresh of the dashboard is needed for them to appear though). Set `yes` for a chart instead of `auto` to enable it permanently. You can also set the `enable zero metrics` option to `yes` in the `[global]` section which enables charts with zero metrics for all internal Netdata plugins.
diff --git a/collectors/xenstat.plugin/README.md b/collectors/xenstat.plugin/README.md
index f803dbf36c..b52757a6b8 100644
--- a/collectors/xenstat.plugin/README.md
+++ b/collectors/xenstat.plugin/README.md
@@ -6,7 +6,7 @@
1. install `xen-dom0-libs-devel` and `yajl-devel` using the package manager of your system.
-2. re-install netdata from source. The installer will detect that the required libraries are now available and will also build xenstat.plugin.
+2. re-install Netdata from source. The installer will detect that the required libraries are now available and will also build xenstat.plugin.
Keep in mind that `libxenstat` requires root access, so the plugin is setuid to root.
@@ -25,7 +25,7 @@ Domain:
## Configuration
-If you need to disable xenstat for netdata, edit /etc/netdata/netdata.conf and set:
+If you need to disable xenstat for Netdata, edit `/etc/netdata/netdata.conf` and set:
```
[plugins]
diff --git a/contrib/README.md b/contrib/README.md
index c5ce873a77..a5dafa01bc 100644
--- a/contrib/README.md
+++ b/contrib/README.md
@@ -1,4 +1,4 @@
-# netdata contrib
+# Netdata contrib
## Building .deb packages
@@ -7,8 +7,8 @@ Debian package. It has been tested on Debian Jessie and Wheezy,
but should work, possibly with minor changes, if you have other
dpkg-based systems such as Ubuntu or Mint.
-To build netdata for a Debian Jessie system, the debian directory
-has to be available in the root of the netdata source. The easiest
+To build Netdata for a Debian Jessie system, the debian directory
+has to be available in the root of the Netdata source. The easiest
way to do this is with a symlink:
~/netdata$ ln -s contrib/debian
@@ -50,9 +50,9 @@ updates first.
Then proceed as in the main instructions above.
-### Reinstalling netdata
+### Reinstalling Netdata
-The recommended way to upgrade netdata packages built from this
+The recommended way to upgrade Netdata packages built from this
source is to remove the current package from your system, then
install the new package. Upgrading on wheezy is known to not
work cleanly; Jessie may behave as expected.
diff --git a/contrib/sles11/README.md b/contrib/sles11/README.md
index d052b9454a..fb584bbec3 100644
--- a/contrib/sles11/README.md
+++ b/contrib/sles11/README.md
@@ -1,10 +1,10 @@
-# spec to build netdata RPM for sles 11
+# Spec to build Netdata RPM for sles 11
Based on [opensuse rpm spec](https://build.opensuse.org/package/show/network/netdata) with some
changes and additions for the sles 11 backport, namely:
- init.d script
- run-time dependency on python ordereddict backport
-- patch for netdata python.d plugin to work with older python
+- patch for Netdata python.d plugin to work with older python
- crude hack of notification script to work with bash 3 (email and syslog only, one destination,
see comments at the top)
diff --git a/daemon/README.md b/daemon/README.md
index 62cc8c3b2b..26d19b66de 100644
--- a/daemon/README.md
+++ b/daemon/README.md
@@ -2,10 +2,10 @@
## Starting netdata
-- You can start netdata by executing it with `/usr/sbin/netdata` (the installer will also start it).
+- You can start Netdata by executing it with `/usr/sbin/netdata` (the installer will also start it).
-- You can stop netdata by killing it with `killall netdata`.
- You can stop and start netdata at any point. Netdata saves on exit its round robbin
+- You can stop Netdata by killing it with `killall netdata`.
+  You can stop and start Netdata at any point. On exit, Netdata saves its round robin
database to `/var/cache/netdata` so that it will continue from where it stopped the last time.
Access to the web site, for all graphs, is by default on port `19999`, so go to:
@@ -16,7 +16,7 @@ Access to the web site, for all graphs, is by default on port `19999`, so go to:
You can get the running config file at any time, by accessing `http://127.0.0.1:19999/netdata.conf`.
-### Starting netdata at boot
+### Starting Netdata at boot
In the `system` directory you can find scripts and configurations for the various distros.
@@ -27,7 +27,7 @@ The installer already installs `netdata.service` if it detects a systemd system.
To install `netdata.service` by hand, run:
```sh
-# stop netdata
+# stop Netdata
killall netdata
# copy netdata.service to systemd
@@ -36,10 +36,10 @@ cp system/netdata.service /etc/systemd/system/
# let systemd know there is a new service
systemctl daemon-reload
-# enable netdata at boot
+# enable Netdata at boot
systemctl enable netdata
-# start netdata
+# start Netdata
systemctl start netdata
```
@@ -48,7 +48,7 @@ systemctl start netdata
In the system directory you can find `netdata-lsb`. Copy it to the proper place according to your distribution's documentation. For Ubuntu, this can be done by running the following commands as root.
```sh
-# copy the netdata startup file to /etc/init.d
+# copy the Netdata startup file to /etc/init.d
cp system/netdata-lsb /etc/init.d/netdata
# make sure it is executable
@@ -67,7 +67,7 @@ In the `system` directory you can find `netdata-openrc`. Copy it to the proper p
For older versions of RHEL/CentOS that don't have systemd, an init script is included in the system directory. This can be installed by running the following commands as root.
```sh
-# copy the netdata startup file to /etc/init.d
+# copy the Netdata startup file to /etc/init.d
cp system/netdata-init-d /etc/init.d/netdata
# make sure it is executable
@@ -81,7 +81,7 @@ _There have been some recent work on the init script, see PR https://github.com/
#### other systems
-You can start netdata by running it from `/etc/rc.local` or equivalent.
+You can start Netdata by running it from `/etc/rc.local` or equivalent.
## Command line options
@@ -97,7 +97,7 @@ netdata -h
The program will print the supported command line parameters.
-The command line options of the netdata 1.10.0 version are the following:
+The command line options of the Netdata 1.10.0 version are the following:
```
^
@@ -182,7 +182,7 @@ The command line options of the netdata 1.10.0 version are the following:
## Log files
-netdata uses 3 log files:
+Netdata uses 3 log files:
1. `error.log`
2. `access.log`
@@ -190,18 +190,18 @@ netdata uses 3 log files:
Each of them can be disabled by setting it to `/dev/null` or `none` in `netdata.conf`.
By default `error.log` and `access.log` are enabled. `debug.log` is only enabled if
-debugging/tracing is also enabled (netdata needs to be compiled with debugging enabled).
+debugging/tracing is also enabled (Netdata needs to be compiled with debugging enabled).
Log files are stored in `/var/log/netdata/` by default.
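Assuming the `[global]` option names follow the log file names (as they do in `netdata.conf`), the three logs can be pointed elsewhere or disabled like this:

```
[global]
    access log = /var/log/netdata/access.log
    error log = /var/log/netdata/error.log
    debug log = none
```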
#### error.log
-The `error.log` is the `stderr` of the netdata daemon and all external plugins run by netdata.
+The `error.log` is the `stderr` of the `netdata` daemon and all external plugins run by Netdata.
-So if any process, in the netdata process tree, writes anything to its standard error,
+So if any process, in the Netdata process tree, writes anything to its standard error,
it will appear in `error.log`.
-For most netdata programs (including standard external plugins shipped by netdata), the
+For most Netdata programs (including standard external plugins shipped by Netdata), the
following lines may appear:
tag|description
@@ -213,7 +213,7 @@ tag|description
So, when auto-detection of data collection fails, `ERROR` lines are logged and the relevant modules
are disabled, but the program continues to run.
-When a netdata program cannot run at all, a `FATAL` line is logged.
+When a Netdata program cannot run at all, a `FATAL` line is logged.
#### access.log
@@ -231,7 +231,7 @@ where:
- `PERCENT_COMPRESSION` is the percentage of traffic saved due to compression.
- `PREP_TIME` is the time in milliseconds needed to prepare the response.
- `SENT_TIME` is the time in milliseconds needed to send the response to the client.
- - `TOTAL_TIME` is the total time the request was inside netdata (from the first byte of the request to the last byte of the response).
+ - `TOTAL_TIME` is the total time the request was inside Netdata (from the first byte of the request to the last byte of the response).
- `ACTION` can be `filecopy`, `options` (used in CORS), `data` (API call).
@@ -242,17 +242,17 @@ See [debugging](#debugging).
## OOM Score
-netdata runs with `OOMScore = 1000`. This means netdata will be the first to be killed when your
+Netdata runs with `OOMScore = 1000`. This means Netdata will be the first to be killed when your
server runs out of memory.
-You can set netdata OOMScore in `netdata.conf`, like this:
+You can set the Netdata OOM score in `netdata.conf`, like this:
```
[global]
OOM score = 1000
```
-netdata logs its OOM score when it starts:
+Netdata logs its OOM score when it starts:
```sh
# grep OOM /var/log/netdata/error.log
@@ -261,16 +261,16 @@ netdata logs its OOM score when it starts:
#### OOM score and systemd
-netdata will not be able to lower its OOM Score below zero, when it is started as the `netdata`
+Netdata will not be able to lower its OOM Score below zero, when it is started as the `netdata`
user (systemd case).
-To allow netdata control its OOM Score in such cases, you will need to edit
+To allow Netdata to control its OOM score in such cases, you will need to edit
`netdata.service` and set:
```
[Service]
-# The minimum netdata Out-Of-Memory (OOM) score.
-# netdata (via [global].OOM score in netdata.conf) can only increase the value set here.
+# The minimum Netdata Out-Of-Memory (OOM) score.
+# Netdata (via [global].OOM score in netdata.conf) can only increase the value set here.
# To decrease it, set the minimum here and set the same or a higher value in netdata.conf.
# Valid values: -1000 (never kill netdata) to 1000 (always kill netdata).
OOMScoreAdjust=-1000
@@ -278,7 +278,7 @@ OOMScoreAdjust=-1000
Run `systemctl daemon-reload` to reload these changes.
-The above, sets and OOMScore for netdata to `-1000`, so that netdata can increase it via
+The above sets the OOM score for Netdata to `-1000`, so that Netdata can increase it via
`netdata.conf`.
If you want to control it entirely via systemd, you can set in `netdata.conf`:
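The snippet itself falls outside this hunk; presumably (mirroring the `keep` value of the scheduling policy option documented below) it is:

```
[global]
    OOM score = keep
```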
@@ -293,9 +293,9 @@ Using the above, whatever OOM Score you have set at `netdata.service` will be ma
## Netdata process scheduling policy
-By default netdata runs with the `idle` process scheduling policy, so that it uses CPU resources, only when there is idle CPU to spare. On very busy servers (or weak servers), this can lead to gaps on the charts.
+By default Netdata runs with the `idle` process scheduling policy, so that it uses CPU resources only when there is idle CPU to spare. On very busy servers (or weak servers), this can lead to gaps on the charts.
-You can set netdata scheduling policy in `netdata.conf`, like this:
+You can set the Netdata scheduling policy in `netdata.conf`, like this:
```
[global]
@@ -306,7 +306,7 @@ You can use the following:
policy|description
:-----:|:--------
-`idle`|use CPU only when there is spare - this is lower than nice 19 - it is the default for netdata and it is so low that netdata will run in "slow motion" under extreme system load, resulting in short (1-2 seconds) gaps at the charts.
+`idle`|use CPU only when there is spare - this is lower than nice 19 - it is the default for Netdata and it is so low that Netdata will run in "slow motion" under extreme system load, resulting in short (1-2 seconds) gaps at the charts.
`other` or `nice`|this is the default policy for all processes under Linux. It provides dynamic priorities based on the `nice` level of each process. Check below for setting this `nice` level for netdata.
`batch`|This policy is similar to `other` in that it schedules the thread according to its dynamic priority (based on the `nice` value). The difference is that this policy will cause the scheduler to always assume that the thread is CPU-intensive. Consequently, the scheduler will apply a small scheduling penalty with respect to wake-up behavior, so that this thread is mildly disfavored in scheduling decisions.
`fifo`|`fifo` can be used only with static priorities higher than 0, which means that when a `fifo` thread becomes runnable, it will always immediately preempt any currently running `other`, `batch`, or `idle` thread. `fifo` is a simple scheduling algorithm without time slicing.
@@ -337,30 +337,30 @@ When the policy is set to `other`, `nice`, or `batch`, the following will appear
## scheduling settings and systemd
-netdata will not be able to set its scheduling policy and priority to more important values when it is started as the `netdata` user (systemd case).
+Netdata will not be able to set its scheduling policy and priority to more important values when it is started as the `netdata` user (systemd case).
You can set these settings at `/etc/systemd/system/netdata.service`:
```
[Service]
-# By default netdata switches to scheduling policy idle, which makes it use CPU, only
+# By default Netdata switches to scheduling policy idle, which makes it use CPU, only
# when there is spare available.
# Valid policies: other (the system default) | batch | idle | fifo | rr
#CPUSchedulingPolicy=other
-# This sets the maximum scheduling priority netdata can set (for policies: rr and fifo).
-# netdata (via [global].process scheduling priority in netdata.conf) can only lower this value.
+# This sets the maximum scheduling priority Netdata can set (for policies: rr and fifo).
+# Netdata (via [global].process scheduling priority in netdata.conf) can only lower this value.
# Priority gets values 1 (lowest) to 99 (highest).
#CPUSchedulingPriority=1
# For scheduling policy 'other' and 'batch', this sets the lowest niceness of netdata.
-# netdata (via [global].process nice level in netdata.conf) can only increase the value set here.
+# Netdata (via [global].process nice level in netdata.conf) can only increase the value set here.
#Nice=0
```
Run `systemctl daemon-reload` to reload these changes.
-Now, tell netdata to keep these settings, as set by systemd, by editing `netdata.conf` and setting:
+Now, tell Netdata to keep these settings, as set by systemd, by editing `netdata.conf` and setting:
```
[global]
@@ -370,9 +370,9 @@ Now, tell netdata to keep these settings, as set by systemd, by editing `netdata
Using the above, whatever scheduling settings you have set at `netdata.service` will be maintained by netdata.
-#### Example 1: netdata with nice -1 on non-systemd systems
+#### Example 1: Netdata with nice -1 on non-systemd systems
-On a system that is not based on systemd, to make netdata run with nice level -1 (a little bit higher to the default for all programs), edit netdata.conf and set:
+On a system that is not based on systemd, to make Netdata run with nice level -1 (a little bit higher than the default for all programs), edit `netdata.conf` and set:
```
[global]
@@ -387,9 +387,9 @@ sudo service netdata restart
```
-#### Example 2: netdata with nice -1 on systemd systems
+#### Example 2: Netdata with nice -1 on systemd systems
-On a system that is based on systemd, to make netdata run with nice level -1 (a little bit higher to the default for all programs), edit netdata.conf and set:
+On a system that is based on systemd, to make Netdata run with nice level -1 (a little bit higher than the default for all programs), edit `netdata.conf` and set:
```
[global]
@@ -415,9 +415,9 @@ sudo systemctl restart netdata
You may notice that netdata's virtual memory size, as reported by `ps` or `/proc/pid/status` (or even netdata's applications virtual memory chart) is unrealistically high.
-For example, it may be reported to be 150+MB, even if the resident memory size is just 25MB. Similar values may be reported for netdata plugins too.
+For example, it may be reported to be 150+MB, even if the resident memory size is just 25MB. Similar values may be reported for Netdata plugins too.
-Check this for example: A netdata installation with default settings on Ubuntu 16.04LTS. The top chart is **real memory used**, while the bottom one is **virtual memory**:
+Check this for example: A Netdata installation with default settings on Ubuntu 16.04LTS. The top chart is **real memory used**, while the bottom one is **virtual memory**:

@@ -431,19 +431,18 @@ number of threads running.
The system does this for speed. Having a separate memory arena for each thread allows the
threads to run in parallel on multi-core systems, without any locks between them.
-This behaviour is system specific. For example, the chart above when running netdata on alpine
-linux (that uses **musl** instead of **glibc**) is this:
+This behaviour is system specific. For example, the chart above when running Netdata on Alpine Linux (that uses **musl** instead of **glibc**) is this:

**Can we do anything to lower it?**
-Since netdata already uses minimal memory allocations while it runs (i.e. it adapts its memory on start, so that while repeatedly collects data it does not do memory allocations), it already instructs the system memory allocator to minimize the memory arenas for each thread. We have also added [2 configuration options](https://github.com/netdata/netdata/blob/5645b1ee35248d94e6931b64a8688f7f0d865ec6/src/main.c#L410-L418)
+Since Netdata already uses minimal memory allocations while it runs (i.e. it adapts its memory on start, so that while it repeatedly collects data it does not do memory allocations), it already instructs the system memory allocator to minimize the memory arenas for each thread. We have also added [2 configuration options](https://github.com/netdata/netdata/blob/5645b1ee35248d94e6931b64a8688f7f0d865ec6/src/main.c#L410-L418)
to allow you to tweak these settings: `glibc malloc arena max for plugins` and `glibc malloc arena max for netdata`.
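Both options live in the `[global]` section of `netdata.conf`; per the configuration table in `daemon/config/README.md`, the defaults are:

```
[global]
    glibc malloc arena max for plugins = 1
    glibc malloc arena max for netdata = 1
```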
However, even if we instructed the memory allocator to use just one arena, it seems it allocates an arena per thread.
-netdata also supports `jemalloc` and `tcmalloc`, however both behave exactly the same to the glibc memory allocator in this aspect.
+Netdata also supports `jemalloc` and `tcmalloc`, however both behave exactly the same as the glibc memory allocator in this aspect.
**Is this a problem?**
@@ -452,53 +451,53 @@ No, it is not.
Linux reserves real memory (physical RAM) in pages (on x86 machines pages are 4KB each).
So even if the system memory allocator is allocating huge amounts of virtual memory,
only the 4KB pages that are actually used are reserving physical RAM. The **real memory** chart
-on netdata application section, shows the amount of physical memory these pages occupy(it
+on the Netdata applications section, shows the amount of physical memory these pages occupy (it
accounts for the whole pages, even if only parts of them are actually used).
## Debugging
-When you compile netdata with debugging:
+When you compile Netdata with debugging:
-1. compiler optimizations for your CPU are disabled (netdata will run somewhat slower)
+1. compiler optimizations for your CPU are disabled (Netdata will run somewhat slower)
-2. a lot of code is added all over netdata, to log debug messages to `/var/log/netdata/debug.log`. However, nothing is printed by default. netdata allows you to select which sections of netdata you want to trace. Tracing is activated via the config option `debug flags`. It accepts a hex number, to enable or disable specific sections. You can find the options supported at [log.h](../libnetdata/log/log.h). They are the `D_*` defines. The value `0xffffffffffffffff` will enable all possible debug flags.
+2. a lot of code is added all over Netdata, to log debug messages to `/var/log/netdata/debug.log`. However, nothing is printed by default. Netdata allows you to select which sections of Netdata you want to trace. Tracing is activated via the config option `debug flags`. It accepts a hex number, to enable or disable specific sections. You can find the options supported at [log.h](../libnetdata/log/log.h). They are the `D_*` defines. The value `0xffffffffffffffff` will enable all possible debug flags.
-Once netdata is compiled with debugging and tracing is enabled for a few sections, the file `/var/log/netdata/debug.log` will contain the messages.
+Once Netdata is compiled with debugging and tracing is enabled for a few sections, the file `/var/log/netdata/debug.log` will contain the messages.
> Do not forget to disable tracing (`debug flags = 0`) when you are done tracing. The file `debug.log` can grow too fast.
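For example, assuming the option sits in the `[global]` section like the other daemon settings here, enabling every debug section looks like this (reset it to `0` when done):

```
[global]
    debug flags = 0xffffffffffffffff
```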
-#### compiling netdata with debugging
+#### compiling Netdata with debugging
-To compile netdata with debugging, use this:
+To compile Netdata with debugging, use this:
```sh
-# step into the netdata source directory
+# step into the Netdata source directory
cd /usr/src/netdata.git
# run the installer with debugging enabled
CFLAGS="-O1 -ggdb -DNETDATA_INTERNAL_CHECKS=1" ./netdata-installer.sh
```
-The above will compile and install netdata with debugging info embedded. You can now use `debug flags` to set the section(s) you need to trace.
+The above will compile and install Netdata with debugging info embedded. You can now use `debug flags` to set the section(s) you need to trace.
#### debugging crashes
-We have made the most to make netdata crash free. If however, netdata crashes on your system, it would be very helpful to provide stack traces of the crash. Without them, is will be almost impossible to find the issue (the code base is quite large to find such an issue by just objerving it).
+We have done our best to make Netdata crash-free. If, however, Netdata crashes on your system, it would be very helpful to provide stack traces of the crash. Without them, it will be almost impossible to find the issue (the code base is too large to find such an issue by just observing it).
-To provide stack traces, **you need to have netdata compiled with debugging**. There is no need to enable any tracing (`debug flags`).
+To provide stack traces, **you need to have Netdata compiled with debugging**. There is no need to enable any tracing (`debug flags`).
Then you need to be in one of the following 2 cases:
-1. netdata crashes and you have a core dump
+1. Netdata crashes and you have a core dump
2. you can reproduce the crash
If you are not in one of these cases, you need to find a way to be (i.e. if your system does not produce core dumps, check your distro documentation to enable them).
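On many Linux distros, a quick (session-only) way to allow core dumps is raising the core file size limit before starting Netdata from that shell; your distro documentation may prescribe a more permanent method:

```sh
# allow core dumps of unlimited size in the current shell
ulimit -c unlimited
```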
-#### netdata crashes and you have a core dump
+#### Netdata crashes and you have a core dump
-> you need to have netdata compiled with debugging info for this to work (check above)
+> you need to have Netdata compiled with debugging info for this to work (check above)
Run the following command and post the output on a GitHub issue.
@@ -506,9 +505,9 @@ Run the following command and post the output on a github issue.
gdb $(which netdata) /path/to/core/dump
```
-#### you can reproduce a netdata crash on your system
+#### you can reproduce a Netdata crash on your system
-> you need to have netdata compiled with debugging info for this to work (check above)
+> you need to have Netdata compiled with debugging info for this to work (check above)
Install the package `valgrind` and run:
@@ -516,7 +515,7 @@ Install the package `valgrind` and run:
valgrind $(which netdata) -D
```
-netdata will start and it will be a lot slower. Now reproduce the crash and `valgrind` will dump on your console the stack trace. Open a new github issue and post the output.
+Netdata will start and it will be a lot slower. Now reproduce the crash and `valgrind` will dump the stack trace on your console. Open a new GitHub issue and post the output.
diff --git a/daemon/config/README.md b/daemon/config/README.md
index c36a5b6db2..207602a4c7 100644
--- a/daemon/config/README.md
+++ b/daemon/config/README.md
@@ -8,21 +8,21 @@ This config file **is not needed by default**. Netdata works fine out of the box
`netdata.conf` has sections stated with `[section]`. You will see the following sections:
-1. `[global]` to [configure](#global-section-options) the [netdata daemon](../).
+1. `[global]` to [configure](#global-section-options) the [Netdata daemon](../).
2. `[web]` to [configure the web server](../../web/server).
3. `[plugins]` to [configure](#plugins-section-options) which [collectors](../../collectors) to use and PATH settings.
4. `[health]` to [configure](#health-section-options) general settings for [health monitoring](../../health)
-5. `[registry]` for the [netdata registry](../../registry).
+5. `[registry]` for the [Netdata registry](../../registry).
6. `[backend]` to set up [streaming and replication](../../streaming) options.
7. `[statsd]` for the general settings of the [stats.d.plugin](../../collectors/statsd.plugin).
8. `[plugin:NAME]` sections for each collector plugin, under the comment [Per plugin configuration](#per-plugin-configuration).
9. `[CHART_NAME]` sections for each chart defined, under the comment [Per chart configuration](#per-chart-configuration).
-The configuration file is a `name = value` dictionary. Netdata will not complain if you set options unknown to it. When you check the running configuration by accessing the URL `/netdata.conf` on your netdata server, netdata will add a comment on settings it does not currently use.
+The configuration file is a `name = value` dictionary. Netdata will not complain if you set options unknown to it. When you check the running configuration by accessing the URL `/netdata.conf` on your Netdata server, Netdata will add a comment on settings it does not currently use.
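For instance, a minimal `netdata.conf` overriding two of the `[global]` options documented below (the values here are arbitrary examples):

```
[global]
    update every = 2
    history = 7200
```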
## Applying changes
-After `netdata.conf` has been modified, netdata needs to be restarted for changes to apply:
+After `netdata.conf` has been modified, Netdata needs to be restarted for changes to apply:
```bash
sudo service netdata restart
@@ -42,36 +42,36 @@ Please note that your data history will be lost if you have modified `history` p
setting | default | info
:------:|:-------:|:----
-process scheduling policy | `keep` | See [netdata process scheduling policy](../#netdata-process-scheduling-policy)
+process scheduling policy | `keep` | See [Netdata process scheduling policy](../#netdata-process-scheduling-policy)
OOM score | `1000` | See [OOM score](../#oom-score)
glibc malloc arena max for plugins | `1` | See [Virtual memory](../#virtual-memory).
-glibc malloc arena max for netdata | `1` | See [Virtual memory](../#virtual-memory).
-hostname | auto-detected | The hostname of the computer running netdata.
-history | `3996` | The number of entries the netdata daemon will by default keep in memory for each chart dimension. This setting can also be configured per chart. Check [Memory Requirements](../../database/#database) for more information.
+glibc malloc arena max for netdata | `1` | See [Virtual memory](../#virtual-memory).
+hostname | auto-detected | The hostname of the computer running Netdata.
+history | `3996` | The number of entries the `netdata` daemon will by default keep in memory for each chart dimension. This setting can also be configured per chart. Check [Memory Requirements](../../database/#database) for more information.
update every | `1` | The frequency in seconds, for data collection. For more information see [Performance](../../docs/Performance.md#performance).
config directory | `/etc/netdata` | The directory configuration files are kept.
stock config directory | `/usr/lib/netdata/conf.d` |
log directory | `/var/log/netdata` | The directory in which the [log files](../#log-files) are kept.
web files directory | `/usr/share/netdata/web` | The directory the web static files are kept.
-cache directory | `/var/cache/netdata` | The directory the memory database will be stored if and when netdata exits. Netdata will re-read the database when it will start again, to continue from the same point.
-lib directory | `/var/lib/netdata` | Contains the alarm log and the netdata instance guid.
+cache directory | `/var/cache/netdata` | The directory in which the memory database will be stored, if and when Netdata exits. Netdata will re-read the database when it starts again, to continue from the same point.
+lib directory | `/var/lib/netdata` | Contains the alarm log and the Netdata instance guid.
home directory | `/var/cache/netdata` | Contains the db files for the collected metrics
plugins directory | `"/usr/libexec/netdata/plugins.d" "/etc/netdata/custom-plugins.d"` | The directory plugin programs are kept. This setting supports multiple directories, space separated. If any directory path contains spaces, enclose it in single or double quotes.
-memory mode | `save` | When set to `save` netdata will save its round robin database on exit and load it on startup. When set to `map` the cache files will be updated in real time (check `man mmap` - do not set this on systems with heavy load or slow disks - the disks will continuously sync the in-memory database of netdata). When set to `dbengine` it behaves similarly to `map` but with much better disk and memory efficiency, however, with higher overhead. When set to `ram` the round robin database will be temporary and it will be lost when netdata exits. `none` disables the database at this host. This also disables health monitoring (there cannot be health monitoring without a database).
-host access prefix | | This is used in docker environments where /proc, /sys, etc have to be accessed via another path. You may also have to set SYS_PTRACE capability on the docker for this work. Check [issue 43](https://github.com/netdata/netdata/issues/43).
-memory deduplication (ksm) | `yes` | When set to `yes`, netdata will offer its in-memory round robin database to kernel same page merging (KSM) for deduplication. For more information check [Memory Deduplication - Kernel Same Page Merging - KSM](../../database/#ksm)
+memory mode | `save` | When set to `save` Netdata will save its round robin database on exit and load it on startup. When set to `map` the cache files will be updated in real time (check `man mmap` - do not set this on systems with heavy load or slow disks - the disks will continuously sync the in-memory database of Netdata). When set to `dbengine` it behaves similarly to `map`, but with much better disk and memory efficiency at the cost of some extra overhead. When set to `ram` the round robin database will be temporary and it will be lost when Netdata exits. `none` disables the database at this host. This also disables health monitoring (there cannot be health monitoring without a database).
+host access prefix | | This is used in docker environments where /proc, /sys, etc have to be accessed via another path. You may also have to set the SYS_PTRACE capability on the docker container for this to work. Check [issue 43](https://github.com/netdata/netdata/issues/43).
+memory deduplication (ksm) | `yes` | When set to `yes`, Netdata will offer its in-memory round robin database to kernel same page merging (KSM) for deduplication. For more information check [Memory Deduplication - Kernel Same Page Merging - KSM](../../database/#ksm)
TZ environment variable | `:/etc/localtime` | Where to find the timezone
timezone | auto-detected | The timezone retrieved from the environment variable
debug flags | `0x0000000000000000` | Bitmap of debug options to enable. For more information check [Tracing Options](../#debugging).
debug log | `/var/log/netdata/debug.log` | The filename to save debug information. This file will not be created if debugging is not enabled. You can also set it to `syslog` to send the debug messages to syslog, or `none` to disable this log. For more information check [Tracing Options](../#debugging).
-error log | `/var/log/netdata/error.log` | The filename to save error messages for netdata daemon and all plugins (`stderr` is sent here for all netdata programs, including the plugins). You can also set it to `syslog` to send the errors to syslog, or `none` to disable this log.
-access log | `/var/log/netdata/access.log` | The filename to save the log of web clients accessing netdata charts. You can also set it to `syslog` to send the access log to syslog, or `none` to disable this log.
+error log | `/var/log/netdata/error.log` | The filename to save error messages for the Netdata daemon and all plugins (`stderr` is sent here for all Netdata programs, including the plugins). You can also set it to `syslog` to send the errors to syslog, or `none` to disable this log.
+access log | `/var/log/netdata/access.log` | The filename to save the log of web clients accessing Netdata charts. You can also set it to `syslog` to send the access log to syslog, or `none` to disable this log.
errors flood protection period | `1200` | UNUSED - Length of period (in sec) during which the number of errors should not exceed the `errors to trigger flood protection`.
errors to trigger flood protection | `200` | UNUSED - Number of errors written to the log in `errors flood protection period` sec before flood protection is activated.
-run as user | `netdata` | The user netdata will run as.
+run as user | `netdata` | The user Netdata will run as.
pthread stack size | auto-detected |
cleanup obsolete charts after seconds | `3600` | See [monitoring ephemeral containers](../../collectors/cgroups.plugin/#monitoring-ephemeral-containers), also sets the timeout for cleaning up obsolete dimensions
gap when lost iterations above | `1` |
-cleanup orphan hosts after seconds | `3600` | How long to wait until automatically removing from the DB a remote netdata host (slave) that is no longer sending data.
+cleanup orphan hosts after seconds | `3600` | How long to wait until automatically removing from the DB a remote Netdata host (slave) that is no longer sending data.
delete obsolete charts files | `yes` | See [monitoring ephemeral containers](../../collectors/cgroups.plugin/#monitoring-ephemeral-containers), also affects the deletion of files for obsolete dimensions
delete orphan hosts files | `yes` | Set to `no` to disable non-responsive host removal.
enable zero metrics | `no` | Set to `yes` to show charts when all their metrics are zero.
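Taken together, a small `[global]` tuning sketch using options from this table (the values are illustrative, not recommendations):

```
[global]
    history = 3600
    update every = 1
    memory mode = save
```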
@@ -90,8 +90,8 @@ setting | default | info
:------:|:-------:|:----
PATH environment variable | `auto-detected` |
PYTHONPATH environment variable | | Used to set a custom python path
-enable running new plugins | `yes` | When set to `yes`, netdata will enable detected plugins, even if they are not configured explicitly. Setting this to `no` will only enable plugins explicitly configirued in this file with a `yes`
-check for new plugins every | 60 | The time in seconds to check for new plugins in the plugins directory. This allows having other applications dynamically creating plugins for netdata.
+enable running new plugins | `yes` | When set to `yes`, Netdata will enable detected plugins, even if they are not configured explicitly. Setting this to `no` will only enable plugins explicitly configured in this file with a `yes`.
+check for new plugins every | 60 | The time in seconds to check for new plugins in the plugins directory. This allows having other applications dynamically creating plugins for Netdata.
checks | `no` | This is a debugging plugin for the internal latency
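For example, a minimal sketch that stops Netdata from auto-enabling newly detected plugins (option names are taken from the table above):

```
[plugins]
    enable running new plugins = no
    check for new plugins every = 60
```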
### [health] section options
@@ -129,7 +129,7 @@ The configuration options for plugins appear in sections following the pattern `
Most internal plugins will provide additional options. Check [Internal Plugins](../../collectors/) for more information.
-Please note, that by default Netdata will enable monitoring metrics for disks, memory, and network only when they are not zero. If they are constantly zero they are ignored. Metrics that will start having values, after netdata is started, will be detected and charts will be automatically added to the dashboard (a refresh of the dashboard is needed for them to appear though). Use `yes` instead of `auto` in plugin configuration sections to enable these charts permanently. You can also set the `enable zero metrics` option to `yes` in the `[global]` section which enables charts with zero metrics for all internal Netdata plugins.
+Please note that, by default, Netdata will enable monitoring metrics for disks, memory, and network only when they are not zero. If they are constantly zero, they are ignored. Metrics that start having values after Netdata is started will be detected, and charts will be automatically added to the dashboard (a refresh of the dashboard is needed for them to appear, though). Use `yes` instead of `auto` in plugin configuration sections to enable these charts permanently. You can also set the `enable zero metrics` option to `yes` in the `[global]` section, which enables charts with zero metrics for all internal Netdata plugins.
#### External plugins
diff --git a/database/README.md b/database/README.md
index de0aa9b53b..c7f5463ad3 100644
--- a/database/README.md
+++ b/database/README.md
@@ -3,7 +3,7 @@
Although `netdata` does all its calculations using `long double`, it stores all values using
a [custom-made 32-bit number](../libnetdata/storage_number/).
-So, for each dimension of a chart, netdata will need: `4 bytes for the value * the entries
+So, for each dimension of a chart, Netdata will need: `4 bytes for the value * the entries
of its history`. It will not store any other data for each value in the time series database.
Since all its values are stored in a time series with a fixed step, the time to which each value
corresponds can be calculated at run time, using the position of the value in the round robin database.
@@ -23,22 +23,22 @@ use the **[Database Engine](engine/)**.
## Memory modes
-Currently netdata supports 6 memory modes:
+Currently Netdata supports 6 memory modes:
1. `ram`, data are purely in memory. Data are never saved on disk. This mode uses `mmap()` and
supports [KSM](#ksm).
-2. `save`, (the default) data are only in RAM while netdata runs and are saved to / loaded from
- disk on netdata restart. It also uses `mmap()` and supports [KSM](#ksm).
+2. `save`, (the default) data are only in RAM while Netdata runs and are saved to / loaded from
+ disk on Netdata restart. It also uses `mmap()` and supports [KSM](#ksm).
3. `map`, data are in memory mapped files. This works like the swap. Keep in mind though, this
- will have a constant write on your disk. When netdata writes data on its memory, the Linux kernel
+ will have a constant write on your disk. When Netdata writes data on its memory, the Linux kernel
marks the related memory pages as dirty and automatically starts updating them on disk.
Unfortunately we cannot control how frequently this works. The Linux kernel uses exactly the
same algorithm it uses for its swap memory. Check below for additional information on running a
- dedicated central netdata server. This mode uses `mmap()` but does not support [KSM](#ksm).
+ dedicated central Netdata server. This mode uses `mmap()` but does not support [KSM](#ksm).
-4. `none`, without a database (collected metrics can only be streamed to another netdata).
+4. `none`, without a database (collected metrics can only be streamed to another Netdata).
5. `alloc`, like `ram` but it uses `calloc()` and does not support [KSM](#ksm). This mode is the
fallback for all others except `none`.
@@ -49,7 +49,7 @@ Currently netdata supports 6 memory modes:
but depends on the configured disk space and the effective compression ratio of the data stored.
For more details see [here](engine/).
-You can select the memory mode by editing netdata.conf and setting:
+You can select the memory mode by editing `netdata.conf` and setting:
```
[global]
@@ -60,7 +60,7 @@ You can select the memory mode by editing netdata.conf and setting:
cache directory = /var/cache/netdata
```
-## Running netdata in embedded devices
+## Running Netdata on embedded devices
Embedded devices usually have very limited RAM resources available.
@@ -74,36 +74,36 @@ second updates.
If you set `update every = 2` and `history = 1800`, you will still have an hour of data, but
collected once every 2 seconds. This will **cut in half** both CPU and RAM resources consumed
-by netdata. Of course experiment a bit. On very weak devices you might have to use
+by Netdata. Of course experiment a bit. On very weak devices you might have to use
`update every = 5` and `history = 720` (still 1 hour of data, but 1/5 of the CPU and RAM resources).
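As a sketch, the weak-device tuning described above translates to:

```
[global]
    update every = 5
    history = 720
```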
You can also disable [data collection plugins](../collectors) you don't need.
Disabling such plugins will also free both CPU and RAM resources.
-## Running a dedicated central netdata server
+## Running a dedicated central Netdata server
-Netdata allows streaming data between netdata nodes. This allows us to have a central netdata
+Netdata allows streaming data between Netdata nodes. This allows us to have a central Netdata
server that will maintain the entire database for all nodes, and will also run health checks/alarms
for all nodes.
-For this central netdata, memory size can be a problem. Fortunately, netdata supports several
+For this central Netdata, memory size can be a problem. Fortunately, Netdata supports several
memory modes. **One interesting option** for this setup is `memory mode = map`.
### map
-In this mode, the database of netdata is stored in memory mapped files. netdata continues to read
+In this mode, the database of Netdata is stored in memory mapped files. Netdata continues to read
and write the database in memory, but the kernel automatically loads and saves memory pages from/to
disk.
**We suggest _not_ to use this mode on nodes that run other applications.** There will always be
dirty memory to be synced and this syncing process may influence the way other applications work.
-This mode however is useful when we need a central netdata server that would normally need huge
+This mode however is useful when we need a central Netdata server that would normally need huge
amounts of memory. Using memory mode `map` we can overcome all memory restrictions.
There are a few kernel options that provide finer control on the way this syncing works. But before
-explaining them, a brief introduction of how netdata database works is needed.
+explaining them, a brief introduction to how the Netdata database works is needed.
-For each chart, netdata maps the following files:
+For each chart, Netdata maps the following files:
1. `chart/main.db`, this is the file that maintains chart information. Every time data are collected
for a chart, this is updated.
@@ -111,7 +111,7 @@ For each chart, netdata maps the following files:
2. `chart/dimension_name.db`, this is the file for each dimension. At its beginning there is a
header, followed by the round robin database where metrics are stored.
-So, every time netdata collects data, the following pages will become dirty:
+So, every time Netdata collects data, the following pages will become dirty:
1. the chart file
2. the header part of all dimension files
@@ -147,8 +147,8 @@ There are 2 more options to tweak:
2. `dirty_ratio`, by default `20`.
These control the amount of memory that should be dirty for disk syncing to be triggered.
-On dedicated netdata servers, you can use: `80` and `90` respectively, so that all RAM is given
-to netdata.
+On dedicated Netdata servers, you can use: `80` and `90` respectively, so that all RAM is given
+to Netdata.
With these settings, you can expect a little `iowait` spike once every 10 minutes and in case
of system crash, data on disk will be up to 10 minutes old.
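A hedged way to apply such values, assuming the two knobs are the kernel's `vm.dirty_background_ratio` and `vm.dirty_ratio` sysctls:

```sh
# illustrative only - on a dedicated Netdata server, let most RAM hold dirty pages
sudo sysctl vm.dirty_background_ratio=80
sudo sysctl vm.dirty_ratio=90
```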
@@ -169,7 +169,7 @@ for this setup** is `memory mode = dbengine`.
### dbengine
-In this mode, the database of netdata is stored in database files. The [Database Engine](engine/)
+In this mode, the database of Netdata is stored in database files. The [Database Engine](engine/)
works like a traditional database. There is some amount of RAM dedicated to data caching and
indexing and the rest of the data reside compressed on disk. The number of history entries is not
fixed in this case, but depends on the configured disk space and the effective compression ratio
@@ -187,10 +187,10 @@ Netdata offers all its round robin database to kernel for deduplication
In the past KSM has been criticized for consuming a lot of CPU resources.
Although this is true when KSM is used for deduplicating certain applications, it is not true with
-netdata, since the netdata memory is written very infrequently (if you have 24 hours of metrics in
+Netdata, since the Netdata memory is written very infrequently (if you have 24 hours of metrics in
netdata, each byte at the in-memory database will be updated just once per day).
-KSM is a solution that will provide 60+% memory savings to netdata.
+KSM is a solution that will provide 60+% memory savings to Netdata.
### Enable KSM in kernel
diff --git a/database/engine/README.md b/database/engine/README.md
index adc69ffd72..441a3eea05 100644
--- a/database/engine/README.md
+++ b/database/engine/README.md
@@ -23,13 +23,13 @@ journalfile-1-0000000003.njf
They are located under their host's cache directory in the directory `./dbengine`
(e.g. for localhost the default location is `/var/cache/netdata/dbengine/*`). The higher
numbered filenames contain more recent metric data. The user can safely delete some pairs
-of files when netdata is stopped to manually free up some space.
+of files when Netdata is stopped to manually free up some space.
*Users should* **back up** *their `./dbengine` folders if they consider this data to be important.*
## Configuration
-There is one DB engine instance per netdata host/node. That is, there is one `./dbengine` folder
+There is one DB engine instance per Netdata host/node. That is, there is one `./dbengine` folder
per node, and all charts of `dbengine` memory mode in such a host share the same storage space
and DB engine instance memory state. You can select the memory mode for localhost by editing
netdata.conf and setting:
@@ -59,10 +59,10 @@ quota. Both numbers are in **MiB**. All DB engine instances will allocate the co
separately.
The `page cache size` option determines the amount of RAM in **MiB** that is dedicated to caching
-netdata metric values themselves.
+Netdata metric values themselves.
The `dbengine disk space` option determines the amount of disk space in **MiB** that is dedicated
-to storing netdata metric values and all related metadata describing them.
+to storing Netdata metric values and all related metadata describing them.
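Putting the two options together, a minimal `netdata.conf` sketch for a small dbengine setup might look like this (the MiB figures are illustrative):

```
[global]
    memory mode = dbengine
    page cache size = 32
    dbengine disk space = 256
```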
## Operation
@@ -72,7 +72,7 @@ the **Page Cache**.
When those pages fill up they are slowly compressed and flushed to disk.
It can take `4096 / 4 = 1024 seconds = 17 minutes`, for a chart dimension that is being collected
-every 1 second, to fill a page. Pages can be cut short when we stop netdata or the DB engine
+every 1 second, to fill a page. Pages can be cut short when we stop Netdata or the DB engine
instance so as to not lose the data. When we query the DB engine for data we trigger disk read
I/O requests that fill the Page Cache with the requested pages and potentially evict cold
(not recently used) pages.
@@ -91,7 +91,7 @@ applications.
Using memory mode `dbengine` we can overcome most memory restrictions and store a dataset that
is much larger than the available memory.
-There are explicit memory requirements **per** DB engine **instance**, meaning **per** netdata
+There are explicit memory requirements **per** DB engine **instance**, meaning **per** Netdata
**node** (e.g. localhost and streaming recipient nodes):
- `page cache size` must be at least `#dimensions-being-collected x 4096 x 2` bytes.
@@ -115,11 +115,11 @@ file descriptors available per `dbengine` instance.
Netdata allocates 25% of the available file descriptors to its Database Engine instances. This means that only 25%
of the file descriptors that are available to the Netdata service are accessible by dbengine instances.
You should take that into account when configuring your service
-or system-wide file descriptor limits. You can roughly estimate that the netdata service needs 2048 file
+or system-wide file descriptor limits. You can roughly estimate that the Netdata service needs 2048 file
descriptors for every 10 streaming slave hosts when streaming is configured to use `memory mode = dbengine`.
-If for example one wants to allocate 65536 file descriptors to the netdata service on a systemd system
-one needs to override the netdata service by running `sudo systemctl edit netdata` and creating a
+If, for example, one wants to allocate 65536 file descriptors to the Netdata service on a systemd system,
+one needs to override the Netdata service by running `sudo systemctl edit netdata` and creating a
file with contents:
```
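# Illustrative override matching the 65536 figure above; verify against your
# distribution's systemd documentation before relying on it.
[Service]
LimitNOFILE=65536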
diff --git a/docs/Add-more-charts-to-netdata.md b/docs/Add-more-charts-to-netdata.md
index 285713b028..4d62cd64a9 100644
--- a/docs/Add-more-charts-to-netdata.md
+++ b/docs/Add-more-charts-to-netdata.md
@@ -187,7 +187,7 @@ rethinkdb|python<br/>v2 or v3|Connects to multiple rethinkdb servers (local or r
application|language|notes|
:---------:|:------:|:----|
-retroshare|python<br/>v2 or v3|Connects to multiple retroshare servers (local or remote) to collect real-time performance metrics.<br/>netdata plugin: [python.d.plugin](../collectors/python.d.plugin)<br/>plugin module: [retroshare.chart.py](../collectors/python.d.plugin/retroshare)<br/>configuration file: [python.d/retroshare.conf](../collectors/python.d.plugin/retroshare)|
+retroshare|python<br/>v2 or v3|Connects to multiple retroshare servers (local or remote) to collect real-time performance metrics.<br/>Netdata plugin: [python.d.plugin](../collectors/python.d.plugin)<br/>plugin module: [retroshare.chart.py](../collectors/python.d.plugin/retroshare)<br/>configuration file: [python.d/retroshare.conf](../collectors/python.d.plugin/retroshare)|
---
@@ -196,7 +196,7 @@ retroshare|python<br/>v2 or v3|Connects to multiple retroshare servers (local or
application|language|notes|
:---------:|:------:|:----|
-squid|python<br/>v2 or v3|Connects to multiple squid servers (local or remote) to collect real-time performance metrics.<br/>netdata plugin: [python.d.plugin](../collectors/python.d.plugin)<br/>plugin module: [squid.chart.py](../collectors/python.d.plugin/squid)<br/>configuration file: [python.d/squid.conf](../collectors/python.d.plugin/squid)|
+squid|python<br/>v2 or v3|Connects to multiple squid servers (local or remote) to collect real-time performance metrics.<br/>Netdata plugin: [python.d.plugin](../collectors/python.d.plugin)<br/>plugin module: [squid.chart.py](../collectors/python.d.plugin/squid)<br/>configuration file: [python.d/squid.conf](../collectors/python.d.plugin/squid)|
squid|BASH<br/>Shell Script|Connects to a squid server (local or remote) to collect real-time performance metrics.<br/>DEPRECATED IN FAVOR OF THE PYTHON ONE. It is still supplied only as an example module to shell scripting plugins.<br/>Netdata plugin: [charts.d.plugin](../collectors/charts.d.plugin#chartsdplugin)<br/>plugin module: [squid.chart.sh](../collectors/charts.d.plugin/squid)<br/>configuration file: [charts.d/squid.conf](../collectors/charts.d.plugin/squid)|
@@ -298,8 +298,8 @@ postfix|BASH<br/>Shell Script|Charts the postfix queue size.<br/>DEPRECATED
application|language|notes|
:---------:|:------:|:----|
NFS Client|`C`|This is handled entirely by the Netdata daemon.<br/>Configuration: `netdata.conf`, section `[plugin:proc:/proc/net/rpc/nfs]`.
-NFS Server|`C`|This is handled entirely by the netdata daemon.<br/>Configuration: `netdata.conf`, section `[plugin:proc:/proc/net/rpc/nfsd]`.
-samba|python<br/>v2 or v3|Performance metrics of Samba SMB2 file sharing.<br/>documentation page: [python.d.plugin module samba](../collectors/python.d.plugin/samba)<br/>netdata plugin: [python.d.plugin](../collectors/python.d.plugin)<br/>plugin module: [samba.chart.py](../collectors/python.d.plugin/samba)<br/>configuration file: [python.d/samba.conf](../collectors/python.d.plugin/samba)|
+NFS Server|`C`|This is handled entirely by the `netdata` daemon.<br/>Configuration: `netdata.conf`, section `[plugin:proc:/proc/net/rpc/nfsd]`.
+samba|python<br/>v2 or v3|Performance metrics of Samba SMB2 file sharing.<br/>documentation page: [python.d.plugin module samba](../collectors/python.d.plugin/samba)<br/>Netdata plugin: [python.d.plugin](../collectors/python.d.plugin)<br/>plugin module: [samba.chart.py](../collectors/python.d.plugin/samba)<br/>configuration file: [python.d/samba.conf](../collectors/python.d.plugin/samba)|
---
@@ -307,7 +307,7 @@ samba|python<br/>v2 or v3|Performance metrics of Samba SMB2 file sharing.
application|language|notes|
:---------:|:------:|:----|
-CUPS|C|Charts metrics of printers, jobs and other cups destinations.<br/>netdata plugin: [cups.plugin](../collectors/cups.plugin)
+CUPS|C|Charts metrics of printers, jobs and other cups destinations.<br/>Netdata plugin: [cups.plugin](../collectors/cups.plugin)
---
@@ -315,7 +315,7 @@ CUPS|C|Charts metrics of printers, jobs and other cups destinations.
application|language|notes|
:---------:|:------:|:----|
-xenstat|C|Collects host and domain statistics for XenServer or XCP-ng hypervisors.<br/>netdata plugin: [xenstat.plugin](../collectors/xenstat.plugin)
+xenstat|C|Collects host and domain statistics for XenServer or XCP-ng hypervisors.<br/>Netdata plugin: [xenstat.plugin](../collectors/xenstat.plugin)
---
diff --git a/docs/GettingStarted.md b/docs/GettingStarted.md
index 3ddf4c388d..792eb1f298 100644
--- a/docs/GettingStarted.md
+++ b/docs/GettingStarted.md
@@ -32,9 +32,9 @@ If still Netdata does not receive the requests, something is blocking them. A fi
-When you install multiple Netdata servers, all your servers will appear at the node menu at the top left of the dashboard. For this to work, you have to manually access just once, the dashboard of each of your netdata servers.
+When you install multiple Netdata servers, all your servers will appear in the node menu at the top left of the dashboard. For this to work, you have to manually access, just once, the dashboard of each of your Netdata servers.
-The node menu is more than just browser bookmarks. When switching Netdata servers from that menu, any settings of the current view are propagated to the other netdata server:
+The node menu is more than just browser bookmarks. When switching Netdata servers from that menu, any settings of the current view are propagated to the other Netdata server:
- the current charts panning (drag the charts left or right),
- the current charts zooming (`SHIFT` + mouse wheel over a chart),
diff --git a/docs/Running-behind-nginx.md b/docs/Running-behind-nginx.md
index 81ebc1a756..5479118cb3 100644
--- a/docs/Running-behind-nginx.md
+++ b/docs/Running-behind-nginx.md
@@ -14,7 +14,7 @@ The software is known for its low impact on memory resources, high scalability,
- Password-protect access to Netdata, until distributed authentication is implemented via the Netdata cloud Sign In mechanism.
-- A proxy was necessary to encrypt the communication to netdata, until v1.16.0, which provided TLS (HTTPS) support.
+- A proxy was necessary to encrypt the communication to Netdata until v1.16.0, which introduced TLS (HTTPS) support.
## Nginx configuration file
diff --git a/docs/anonymous-statistics.md b/docs/anonymous-statistics.md
index 376a2c4aaa..689b692f14 100644
--- a/docs/anonymous-statistics.md
+++ b/docs/anonymous-statistics.md
@@ -26,7 +26,7 @@ To ensure anonymity of the stored information, we have configured GTM's GA varia
|page|netdata-dashboard
|hostname|dashboard.my-netdata.io
|anonymizeIp|true
-|title|netdata dashboard
+|title|Netdata dashboard
|campaignSource|{{machine_guid}}
|campaignMedium|web
|referrer|http://dashboard.my-netdata.io
@@ -35,7 +35,7 @@ To ensure anonymity of the stored information, we have configured GTM's GA varia
|Page Path|/netdata-dashboard
|location|http://dashboard.my-netdata.io
-In addition, the netdata-generated unique machine guid is sent to GA via a custom dimension.
+In addition, the Netdata-generated unique machine guid is sent to GA via a custom dimension.
You can verify the effect of these settings by examining the GA `collect` request parameters.
The only thing that's impossible for us to prevent from being **sent** is the URL in the "Referrer" Header of the browser request to GA. However, the settings above ensure that all **stored** URLs and host names are anonymized.
diff --git a/docs/configuration-guide.md b/docs/configuration-guide.md
index 1c79e02769..26ebb9119e 100644
--- a/docs/configuration-guide.md
+++ b/docs/configuration-guide.md
@@ -59,7 +59,7 @@ Entire plugins can be turned off from the [netdata.conf [plugins]](../daemon/con
##### Show charts with zero metrics
-By default, Netdata will enable monitoring metrics for disks, memory, and network only when they are not zero. If they are constantly zero they are ignored. Metrics that will start having values, after netdata is started, will be detected and charts will be automatically added to the dashboard (a refresh of the dashboard is needed for them to appear though). Use `yes` instead of `auto` in plugin configuration sections to enable these charts permanently. You can also set the `enable zero metrics` option to `yes` in the `[global]` section which enables charts with zero metrics for all internal Netdata plugins.
+By default, Netdata will enable monitoring metrics for disks, memory, and network only when they are not zero. If they are constantly zero, they are ignored. Metrics that start having values after Netdata is started will be detected, and charts will be automatically added to the dashboard (a refresh of the dashboard is needed for them to appear, though). Use `yes` instead of `auto` in plugin configuration sections to enable these charts permanently. You can also set the `enable zero metrics` option to `yes` in the `[global]` section, which enables charts with zero metrics for all internal Netdata plugins.
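A sketch of the `netdata.conf` change described above:

```
[global]
    enable zero metrics = yes
```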
### Modify alarms and notifications
@@ -92,11 +92,11 @@ You have several options under the [netdata.conf [web]](../web/server/#access-li
##### Stop sending info to registry.my-netdata.io
-You will need to configure the [registry] section in netdata.conf. First read the [registry documentation](../registry/). In it, are instructions on how to [run your own registry](../registry/#run-your-own-registry).
+You will need to configure the `[registry]` section in `netdata.conf`. First read the [registry documentation](../registry/). It includes instructions on how to [run your own registry](../registry/#run-your-own-registry).
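A sketch of pointing an agent at a self-hosted registry (the URL is a placeholder; see the linked registry documentation for the exact options):

```
[registry]
    registry to announce = http://registry.example.com:19999
```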
##### Change the IP address/port Netdata listens to
-The settings are under netdata.conf [web]. Look at the [web server documentation](../web/server/#binding-netdata-to-multiple-ports) for more info.
+The settings are under the `[web]` section of `netdata.conf`. Look at the [web server documentation](../web/server/#binding-netdata-to-multiple-ports) for more info.
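For instance, a sketch that restricts Netdata to the loopback interface (see the linked web server documentation for the full syntax):

```
[web]
    bind to = 127.0.0.1:19999
```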
### System resource usage
@@ -110,7 +110,7 @@ The page on [Netdata performance](Performance.md) has an excellent guide on how
##### Prevent Netdata from getting immediately killed when my server runs out of memory
-You can change the Netdata [OOM score](../daemon/#oom-score) in netdata.conf [global].
+You can change the Netdata [OOM score](../daemon/#oom-score) in the `[global]` section of `netdata.conf`.
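A sketch of the change (the value is illustrative; negative scores tell the kernel to avoid killing the process, and may require elevated privileges):

```
[global]
    OOM score = -1000
```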
### Other
diff --git a/docs/netdata-security.md b/docs/netdata-security.md
index a905717d93..afbb32775e 100644
--- a/docs/netdata-security.md
+++ b/docs/netdata-security.md
@@ -132,7 +132,7 @@ iptables -t filter -A netdata -j DROP
iptables -t filter -D INPUT -p tcp --dport ${NETDATA_PORT} -m conntrack --ctstate NEW -j netdata 2>/dev/null
# add the input chain hook (again)
-# to send all new netdata connections to our filtering chain
+# to send all new Netdata connections to our filtering chain
iptables -t filter -I INPUT -p tcp --dport ${NETDATA_PORT} -m conntrack --ctstate NEW -j netdata
```
_script to allow access to Netdata only from a number of hosts_
diff --git a/docs/privacy-policy.md b/docs/privacy-policy.md
index e46d783ed2..4d7a7e526e 100644
--- a/docs/privacy-policy.md
+++ b/docs/privacy-policy.md
@@ -37,22 +37,22 @@ The menu lists the Netdata servers you have visited. For example, when you jump
(like the currently viewed charts, the current zoom and pan operations on the charts, etc.) are propagated to the new server, so that the new dashboard will come with exactly the
same view. The global registry keeps track of 4 entities:
-1. **machines**: i.e. the netdata installations (a random GUID generated by each netdata the first time it starts; we call this **machine_guid**)
+1. **machines**: i.e. the Netdata installations (a random GUID generated by each Netdata installation the first time it starts; we call this **machine_guid**)
- For each netdata installation (each `machine_guid`) the registry keeps track of the different URLs it is accessed.
+   For each Netdata installation (each `machine_guid`) the registry keeps track of the different URLs through which it is accessed.
-2. **persons**: i.e. the web browsers accessing the netdata installations (a random GUID generated by the registry the first time it sees a new web browser; we call this **person_guid**)
+2. **persons**: i.e. the web browsers accessing the Netdata installations (a random GUID generated by the registry the first time it sees a new web browser; we call this **person_guid**)
- For each person, the registry keeps track of the netdata installations it has accessed and their URLs.
+ For each person, the registry keeps track of the Netdata installations it has accessed and their URLs.
-3. **URLs** of netdata installations (as seen by the web browsers)
+3. **URLs** of Netdata installations (as seen by the web browsers)
For each URL, the registry keeps the URL and nothing more. Each URL is linked to *persons* and *machines*. The only way to find a URL is to know its **machine_guid** or to have a **person_guid** it is linked to.
4. **accounts**: i.e. the information used to sign-in via one of the available sign-in methods. Depending on the method, this may include an email, an email and a profile picture.
For *persons*/*accounts* and *machines*, the registry keeps links to *URLs*, each link with 2 timestamps (first time seen, last time seen) and a counter (number of times it has been seen).
-*machines*, *persons*, and timestamps are stored in the netdata registry regardless of whether you sign in or not.
+*machines*, *persons*, and timestamps are stored in the Netdata registry regardless of whether you sign in or not.
If sending this information is against your policies, you can [run your own registry](../registry/#run-your-own-registry).
Note that ND versions with the 'Sign in' feature of the ND Cloud do not use the global registry.
diff --git a/health/README.md b/health/README.md
index 345f7fc70d..a03c7eec7e 100644
--- a/health/README.md
+++ b/health/README.md
@@ -1,9 +1,9 @@
# Health monitoring
-Each netdata node runs an independent thread evaluating health monitoring checks.
+Each Netdata node runs an independent thread evaluating health monitoring checks.
This thread has lock free access to the database, so that it can operate as a watchdog.
-Health checks (alarms) are attached to netdata charts, allowing netdata to automatically
+Health checks (alarms) are attached to Netdata charts, allowing Netdata to automatically
activate an alarm as soon as a chart is created. This is very important for
netdata, since many charts are dynamically created during runtime (for example, the
chart tracking network interface packet drops, is automatically created on the first
@@ -20,15 +20,15 @@ use expressions combining the latest value of any number of metrics.
## Health configuration reference
-Stock netdata health configuration is in `/usr/lib/netdata/conf.d/health.d`.
+Stock Netdata health configuration is in `/usr/lib/netdata/conf.d/health.d`.
These files can be overwritten by copying them and editing them in `/etc/netdata/health.d`
(run `/etc/netdata/edit-config` to edit them).
In `/etc/netdata/health.d` you can also put any number of files (in any number of sub-directories)
-with a suffix `.conf` to have them processed by netdata.
+with a suffix `.conf` to have them processed by Netdata.
-Health configuration can be reloaded at any time, without restarting netdata.
-Just send netdata the SIGUSR2 signal, like this:
+Health configuration can be reloaded at any time, without restarting Netdata.
+Just send Netdata the SIGUSR2 signal, like this:
```sh
killall -USR2 netdata
@@ -50,7 +50,7 @@ The only difference is the label `alarm` or `template`.
Netdata supports overriding **templates** with **alarms**.
For example, when a template is defined for a set of charts, an alarm with exactly the
same name attached to the same chart the template matches, will have higher precedence
-(i.e. netdata will use the alarm on this chart and prevent the template from being applied
+(i.e. Netdata will use the alarm on this chart and prevent the template from being applied
to it).
### The format
@@ -135,7 +135,7 @@ hosts: server1 server2 database* !redis3 redis*
The above says: use this alarm on all hosts named `server1`, `server2`, `database*`, and
all `redis*` except `redis3`.
-This is useful when you centralize metrics from multiple hosts, to one netdata.
+This is useful when you centralize metrics from multiple hosts to one Netdata server.
---
@@ -187,7 +187,7 @@ Everything is the same with [badges](../web/api/badges/). In short:
- `of DIMENSIONS` is optional and has to be the last parameter. Dimensions have to be separated
by `,` or `|`. The space characters found in dimensions will be kept as-is (a few dimensions
- have spaces in their names). This accepts netdata simple patterns and the `match-ids` and
+ have spaces in their names). This accepts Netdata simple patterns and the `match-ids` and
`match-names` options affect the searches for dimensions.
The result of the lookup will be available as `$this` and `$NAME` in expressions.
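A hypothetical `lookup` line combining these parameters (the method, window, and dimension names are placeholders):

```
lookup: average -10m unaligned of user,system
```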
@@ -289,8 +289,8 @@ Format:
exec: SCRIPT
```
-The default `SCRIPT` is netdata's `alarm-notify.sh`, which supports all the notifications
-methods netdata supports, including custom hooks.
+The default `SCRIPT` is Netdata's `alarm-notify.sh`, which supports all the notification
+methods Netdata supports, including custom hooks.
---
@@ -373,19 +373,17 @@ For some alarms we need compare two time-frames, to detect anomalies. For exampl
### Expressions
-netdata has an internal [infix expression parser](../libnetdata/eval).
+Netdata has an internal [infix expression parser](../libnetdata/eval).
This parses expressions and creates an internal structure that allows fast execution of them.
These operators are supported `+`, `-`, `*`, `/`, `<`, `<=`, `<>`, `!=`, `>`, `>=`, `&&`, `||`,
`!`, `AND`, `OR`, `NOT`. Boolean operators result in either `1` (true) or `0` (false).
-The conditional evaluation operator `?` is supported too. Using this operator IF-THEN-ELSE
-conditional statements can be specified. The format is: `(condition) ? (true expression) :
-(false expression)`. So, netdata will first evaluate the `condition` and based on the result
-will either evaluate `true expression` or `false expression`.
+The conditional evaluation operator `?` is supported too. Using this operator, IF-THEN-ELSE conditional statements can be specified. The format is: `(condition) ? (true expression) : (false expression)`. So, Netdata will first evaluate the `condition` and, based on the result, will evaluate either the `true expression` or the `false expression`.
+
Example: `($this > 0) ? ($avail * 2) : ($used / 2)`.
-Nested such expressions are also supported (i.e. `true expression` and `false expression` can
-contain conditional evaluations).
+
+Such expressions can also be nested (i.e. `true expression` and `false expression` can themselves contain conditional evaluations).
Expressions also support the `abs()` function.
@@ -407,7 +405,7 @@ or warning thresholds. This usage helps to avoid bogus messages resulting from
variations in the value when it is varying regularly but staying close to the threshold
value, without needing to delay sending messages at all.
-An example of such usage from the default CPU usage alarms bundled with netdata is:
+An example of such usage from the default CPU usage alarms bundled with Netdata is:
```
warn: $this > (($status >= $WARNING) ? (75) : (85))
@@ -491,7 +489,7 @@ Although the `alarm_variables` link shows you variables for a particular chart,
Alarms can have the following statuses:
- - `REMOVED` - the alarm has been deleted (this happens when a SIGUSR2 is sent to netdata
+ - `REMOVED` - the alarm has been deleted (this happens when a SIGUSR2 is sent to Netdata
to reload health configuration)
- `UNINITIALIZED` - the alarm is not initialized yet
@@ -509,7 +507,7 @@ The external script will be called for all status changes.
## Examples
-Check the `health/health.d/` directory for all alarms shipped with netdata.
+Check the `health/health.d/` directory for all alarms shipped with Netdata.
Here are a few examples:
@@ -526,7 +524,7 @@ template: apache_last_collected_secs
crit: $this > (10 * $update_every)
```
-The above checks that netdata is able to collect data from apache. In detail:
+The above checks that Netdata is able to collect data from Apache. In detail:
```
template: apache_last_collected_secs
@@ -653,12 +651,12 @@ The `lookup` line will calculate the sum of the all dropped packets in the last
The `crit` line will issue a critical alarm if even a single packet has been dropped.
Note that the drops chart does not exist if a network interface has never dropped a single packet.
-When netdata detects a dropped packet, it will add the chart and it will automatically attach this
+When Netdata detects a dropped packet, it will add the chart and it will automatically attach this
alarm to it.
## Troubleshooting
-You can compile netdata with [debugging](../daemon#debugging) and then set in `netdata.conf`:
+You can compile Netdata with [debugging](../daemon#debugging) and then set in `netdata.conf`:
```
[global]
@@ -671,7 +669,7 @@ Important: this will generate a lot of output in debug.log.
You can find the context of charts by looking up the chart in either
`http://your.netdata:19999/netdata.conf` or `http://your.netdata:19999/api/v1/charts`.
-You can find how netdata interpreted the expressions by examining the alarm at `http://your.netdata:19999/api/v1/alarms?all`. For each expression, netdata will return the expression as given in its config file, and the same expression with additional parentheses added to indicate the evaluation flow of the expression.
+You can find how Netdata interpreted the expressions by examining the alarm at `http://your.netdata:19999/api/v1/alarms?all`. For each expression, Netdata will return the expression as given in its config file, and the same expression with additional parentheses added to indicate the evaluation flow of the expression.
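For example, from a shell (the host name is a placeholder):

```sh
curl -s 'http://your.netdata:19999/api/v1/alarms?all'
```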
## Disabling health checks or silencing notifications at runtime
diff --git a/health/notifications/awssns/README.md b/health/notifications/awssns/README.md
index 82c7ef7a0a..7bb3487143 100644
--- a/health/notifications/awssns/README.md
+++ b/health/notifications/awssns/README.md
@@ -14,9 +14,9 @@ To get this working, you will need:
* The Amazon Web Services CLI tools. Most distributions provide these with the package name `awscli`.
* An actual home directory for the user you run Netdata as, instead of just using `/` as a home directory. Setup of this is distribution specific. `/var/lib/netdata` is the recommended directory (because the permissions will already be correct) if you are using a dedicated user (which is how most distributions work).
* An Amazon SNS topic to send notifications to with one or more subscribers. The [Getting Started](https://docs.aws.amazon.com/sns/latest/dg/GettingStarted.html) section of the Amazon SNS documentation covers the basics of how to set this up. Make note of the Topic ARN when you create the topic.
-* While not mandatory, it is highly recommended to create a dedicated IAM user on your account for netdata to send notifications. This user needs to have programmatic access, and should only allow access to SNS. If you're really paranoid, you can create one for each system or group of systems.
+* While not mandatory, it is highly recommended to create a dedicated IAM user on your account for Netdata to send notifications. This user needs programmatic access, and should be granted access only to SNS. If you're really paranoid, you can create one for each system or group of systems.
-Once you have all the above, run the following command as the user netdata runs under:
+Once you have all the above, run the following command as the user Netdata runs under:
aws configure
@@ -28,6 +28,6 @@ Notes:
* Netdata's native email notification support is far better in almost all respects than its support through Amazon SNS. If you want email notifications, use the native support, not SNS.
 * If you need to change the notification format for SNS notifications, you can do so by specifying the format in `AWSSNS_MESSAGE_FORMAT` in the configuration. This variable supports all the same variables you can use in custom notifications.
- * While Amazon SNS supports sending differently formatted messages for different delivery methods, netdata does not currently support this functionality.
+ * While Amazon SNS supports sending differently formatted messages for different delivery methods, Netdata does not currently support this functionality.
diff --git a/health/notifications/custom/README.md b/health/notifications/custom/README.md
index eeaad8a606..80210572b2 100644
--- a/health/notifications/custom/README.md
+++ b/health/notifications/custom/README.md
@@ -46,7 +46,7 @@ Variables available to the custom_sender:
- `${alarm_id}` the unique id of the alarm that generated this event
- `${event_id}` the incremental id of the event, for this alarm id
- `${when}` the timestamp this event occurred
- - `${name}` the name of the alarm, as given in netdata health.d entries
+ - `${name}` the name of the alarm, as given in Netdata health.d entries
- `${url_name}` same as `${name}` but URL encoded
- `${chart}` the name of the chart (type.id)
- `${url_chart}` same as `${chart}` but URL encoded
@@ -67,7 +67,7 @@ Variables available to the custom_sender:
- `${old_value_string}` friendly old value (with units)
- `${image}` the URL of an image to represent the status of the alarm
- `${color}` a color in #AABBCC format for the alarm
- - `${goto_url}` the URL the user can click to see the netdata dashboard
+ - `${goto_url}` the URL the user can click to see the Netdata dashboard
- `${calc_expression}` the expression evaluated to provide the value for the alarm
- `${calc_param_values}` the value of the variables in the evaluated expression
- `${total_warnings}` the total number of alarms in WARNING state on the host
diff --git a/health/notifications/discord/README.md b/health/notifications/discord/README.md
index 7694fef4b9..7b43f4e229 100644
--- a/health/notifications/discord/README.md
+++ b/health/notifications/discord/README.md
@@ -6,7 +6,7 @@ This is what you will get:
You need:
-1. The **incoming webhook URL** as given by Discord. Create a webhook by following the official [Discord documentation](https://support.discordapp.com/hc/en-us/articles/228383668-Intro-to-Webhooks). You can use the same on all your netdata servers (or you can have multiple if you like - your decision).
+1. The **incoming webhook URL** as given by Discord. Create a webhook by following the official [Discord documentation](https://support.discordapp.com/hc/en-us/articles/228383668-Intro-to-Webhooks). You can use the same webhook on all your Netdata servers (or you can have multiple if you like - your decision).
2. One or more Discord channels to post the messages to.
Set them in `/etc/netdata/health_alarm_notify.conf` (to edit it on your system run `/etc/netdata/edit-config health_alarm_notify.conf`), like this:
diff --git a/health/notifications/email/README.md b/health/notifications/email/README.md
index 84a9e0ce71..ebe72f6d86 100644
--- a/health/notifications/email/README.md
+++ b/health/notifications/email/README.md
@@ -2,7 +2,7 @@
You need a working `sendmail` command for email alerts to work. Almost all MTAs provide a `sendmail` interface.
-netdata sends all emails as user `netdata`, so make sure your `sendmail` works for local users.
+Netdata sends all emails as user `netdata`, so make sure your `sendmail` works for local users.
Email notifications look like this:
@@ -16,7 +16,7 @@ You can configure recipients in [`/etc/netdata/health_alarm_notify.conf`](https:
You can also configure per role recipients [in the same file, a few lines below](https://github.com/netdata/netdata/blob/99d44b7d0c4e006b11318a28ba4a7e7d3f9b3bae/conf.d/health_alarm_notify.conf#L313).
-Changes to this file do not require netdata restart.
+Changes to this file do not require a Netdata restart.
You can test your configuration by issuing the commands:
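A commonly used sketch, assuming the stock plugin path (adjust to your install):

```sh
# become the netdata user, then send a test alarm through alarm-notify.sh
sudo su -s /bin/bash netdata
/usr/libexec/netdata/plugins.d/alarm-notify.sh test
```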
diff --git a/health/notifications/flock/README.md b/health/notifications/flock/README.md
index 0d679ce6b3..70a850376e 100644
--- a/health/notifications/flock/README.md
+++ b/health/notifications/flock/README.md
@@ -7,7 +7,7 @@ This is what you will get:
You need:
-The **incoming webhook URL** as given by flock.com. You can use the same on all your netdata servers (or you can have multiple if you like - your decision).
+The **incoming webhook URL** as given by flock.com. You can use the same webhook on all your Netdata servers (or you can have multiple if you like - your decision).
Get them here: https://admin.flock.com/webhooks
@@ -21,8 +21,8 @@ Set them in `/etc/netdata/health_alarm_notify.conf` (to edit it on your system r
SEND_FLOCK="YES"
# Login to flock.com and create an incoming webhook.
-# You need only one for all your netdata servers.
-# Without it, netdata cannot send flock notifications.
+# You need only one for all your Netdata servers.
+# Without it, Netdata cannot send flock notifications.
FLOCK_WEBHOOK_URL="https://api.flock.com/hooks/sendMessage/XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX"
# if a role recipient is not configured, no notification will be sent
diff --git a/health/notifications/irc/README.md b/health/notifications/irc/README.md
index 9ea86e92d0..804ff6041b 100644
--- a/health/notifications/irc/README.md
+++ b/health/notifications/irc/README.md
@@ -10,7 +10,7 @@ Irssi terminal client:
You need:
-1. The `nc` utility. If you do not set the path, netdata will search for it in your system `$PATH`.
+1. The `nc` utility. If you do not set the path, Netdata will search for it in your system `$PATH`.
Set the path for `nc` in `/etc/netdata/health_alarm_notify.conf` (to edit it on your system run `/etc/netdata/edit-config health_alarm_notify.conf`), like this:
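A sketch of the relevant setting (the variable name is an assumption; check the comments in your `health_alarm_notify.conf`):

```
# path to the nc (netcat) binary
nc="/usr/bin/nc"
```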
diff --git a/health/notifications/kavenegar/README.md b/health/notifications/kavenegar/README.md
index d833eef82e..b8026b89a4 100644
--- a/health/notifications/kavenegar/README.md
+++ b/health/notifications/kavenegar/README.md
@@ -32,7 +32,7 @@ SEND_KAVENEGAR="YES"
# copy your API key. You can generate a new API key too.
# You can find and select your Kavenegar sender number from this place.
-# Without an API key, netdata cannot send KAVENEGAR text messages.
+# Without an API key, Netdata cannot send KAVENEGAR text messages.
KAVENEGAR_API_KEY=""
KAVENEGAR_SENDER=""
DEFAULT_RECIPIENT_KAVENEGAR=""
diff --git a/health/notifications/messagebird/README.md b/health/notifications/messagebird/README.md
index cdb3e8dc11..62b8b2eaa3 100644
--- a/health/notifications/messagebird/README.md
+++ b/health/notifications/messagebird/README.md
@@ -31,7 +31,7 @@ SEND_MESSAGEBIRD="YES"
# to get the API key, click on 'API' in the sidebar, then 'API Access (REST)'
# click 'Add access key' and fill in data (you want a live key to send SMS)
-# Without an access key, netdata cannot send Messagebird text messages.
+# Without an access key, Netdata cannot send Messagebird text messages.
MESSAGEBIRD_ACCESS_KEY="XXXXXXXX"
MESSAGEBIRD_NUMBER="XXXXXXX"
DEFAULT_RECIPIENT_MESSAGEBIRD="XXXXXXX"
diff --git a/health/notifications/pagerduty/README.md b/health/notifications/pagerduty/README.md
index 884b979235..8f03a0695e 100644
--- a/health/notifications/pagerduty/README.md
+++ b/health/notifications/pagerduty/README.md
@@ -2,11 +2,11 @@
[PagerDuty](https://www.pagerduty.com/company/) is the enterprise incident resolution service that integrates with ITOps and DevOps monitoring stacks to improve operational reliability and agility. From enriching and aggregating events to correlating them into incidents, PagerDuty streamlines the incident management process by reducing alert noise and resolution times.
-Here is an example of a PagerDuty dashboard with netdata notifications:
+Here is an example of a PagerDuty dashboard with Netdata notifications:
-
+
-To have netdata send notifications to PagerDuty, you'll first need to set up a PagerDuty `Generic API` service and install the PagerDuty agent on the host running netdata. See the following guide for details:
+To have Netdata send notifications to PagerDuty, you'll first need to set up a PagerDuty `Generic API` service and install the PagerDuty agent on the host running Netdata. See the following guide for details:
https://www.pagerduty.com/docs/guides/agent-install-guide/
diff --git a/health/notifications/pushbullet/README.md b/health/notifications/pushbullet/README.md
index 42b343e457..0c0b9a3224 100644
--- a/health/notifications/pushbullet/README.md
+++ b/health/notifications/pushbullet/README.md
@@ -36,7 +36,7 @@ SEND_PUSHBULLET="YES"
# not have a pushbullet account, the pushbullet service will send an email
# to that address instead
-# Without an access token, netdata cannot send pushbullet notifications.
+# Without an access token, Netdata cannot send pushbullet notifications.
PUSHBULLET_ACCESS_TOKEN="o.Sometokenhere"
DEFAULT_RECIPIENT_PUSHBULLET="admin1@example.com admin3@somemail.com"
```
diff --git a/health/notifications/pushover/README.md b/health/notifications/pushover/README.md
index 1debf5dcd4..90e2646a9b 100644
--- a/health/notifications/pushover/README.md
+++ b/health/notifications/pushover/README.md
@@ -2,11 +2,11 @@
pushover.net allows you to receive push notifications on your mobile phone. The service seems free for up to 7,500 messages per month.
-netdata will send warning messages with priority `0` and critical messages with priority `1`. pushover.net allows you to select do-not-disturb hours. The way this is configured, critical notifications will ring and vibrate your phone, even during the do-not-disturb-hours. All other notifications will be delivered silently.
+Netdata will send warning messages with priority `0` and critical messages with priority `1`. pushover.net allows you to select do-not-disturb hours. The way this is configured, critical notifications will ring and vibrate your phone even during do-not-disturb hours. All other notifications will be delivered silently.
You need:
-1. APP TOKEN. You can use the same on all your netdata servers.
+1. APP TOKEN. You can use the same token on all your Netdata servers.
2. USER TOKEN for each user you are going to send notifications to. This is the actual recipient of the notification.
The configuration is like above (slack messages).
diff --git a/health/notifications/rocketchat/README.md b/health/notifications/rocketchat/README.md
index f05e73f08b..f2650aedbd 100644
--- a/health/notifications/rocketchat/README.md
+++ b/health/notifications/rocketchat/README.md
@@ -4,7 +4,7 @@ This is what you will get:

You need:
-1. The **incoming webhook URL** as given by RocketChat. You can use the same on all your netdata servers (or you can have multiple if you like - your decision).
+1. The **incoming webhook URL** as given by RocketChat. You can use the same webhook on all your Netdata servers (or you can have multiple if you like - your decision).
2. One or more channels to post the messages to.
Get them here: https://rocket.chat/docs/administrator-guides/integrations/index.html#how-to-create-a-new-incoming-webhook
@@ -22,8 +22,8 @@ Set them in `/etc/netdata/health_alarm_notify.conf` (to edit it on your system r
SEND_ROCKETCHAT="YES"
# Login to rocket.chat and create an incoming webhook. You need only one for all
-# your netdata servers (or you can have one for each of your netdata).
-# Without it, netdata cannot send rocketchat notifications.
+# your Netdata servers (or you can have one for each of your Netdata servers).
+# Without it, Netdata cannot send rocketchat notifications.
ROCKETCHAT_WEBHOOK_URL="