Sometimes you may have two or more nodes that are replicas of each other, or two applications you want to monitor on the same machine where each has its own user account. This guide covers how to register such nodes correctly and how to assign policies to them.

Overview

Under most deployments, nodes are self-contained, with an individual name, hostname/IP and a set of credentials. They are scanned daily, checked for change, and have any assigned policies applied. However, sometimes you might have a slightly unusual setup where two nodes, or two applications, share some commonality. The two examples we discuss in this article are:

  • monitoring two or more nodes that are near-identical replicas of each other (for example, a live/standby replica pair or EC2 instances in an Auto Scaling Group), and
  • monitoring two applications as separate nodes that happen to exist on the same host, but authenticate with different username and password credentials.

This guide offers some advice on how to add, arrange and write policies for nodes that fall into either of these two categories.

Adding Nodes

Before we walk through worked examples of these two scenarios, we need to review some background: what the different node properties are, what their functions are, and what limitations apply.

Node Name

Firstly, the name of a node should generally be unique so that you can identify the node in the UI. The UpGuard engine enforces node name uniqueness by default, but there are situations where you may want to define multiple nodes with the same name:

  • You have a group of EC2 instances in an AWS Auto Scaling Group, which may well share exactly the same name.
  • You have two applications that live on the same VM.
  • You have two replica nodes, potentially one live and one standby.

Many users name their nodes to match the hostname of the node, as this makes the node easy to identify, but remember that the node name property has no functional bearing on whether or how a node is scanned. For server, network and device-based nodes, scanning is controlled by the Hostname / IP Address property.

Here, you have two options: you can either introduce a more extensive node naming convention, or you can make use of the External ID field. The UpGuard engine will allow two nodes to be named identically if both of their External ID fields have a value set and those values differ. For example, if you have two EC2 instances in the same AWS Auto Scaling Group, they can share the same node name as long as each has a distinct external ID value.
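If you register nodes programmatically, the duplicate-name rule looks like this in practice. The following is a minimal sketch using Python and the requests library; the endpoint path, payload shape, token format and instance IDs are assumptions based on a typical appliance API setup, so check your own appliance's API documentation for the exact details.

    import requests

    APPLIANCE = "https://appliance.bigcorp-bank.com"  # hypothetical appliance URL
    HEADERS = {
        "Authorization": 'Token token="YOUR_API_KEY"',  # placeholder credentials
        "Content-Type": "application/json",
    }

    # Two Auto Scaling Group instances sharing one node name: the differing
    # external_id values are what make the duplicate names acceptable.
    for instance_id in ["i-0abc123def45678", "i-0fed654cba87654"]:  # hypothetical IDs
        payload = {"node": {"name": "asg-web", "external_id": instance_id}}
        response = requests.post(f"{APPLIANCE}/api/v2/nodes.json",
                                 json=payload, headers=HEADERS)
        response.raise_for_status()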

Node Hostname / IP Address

Unlike the Name property, the Hostname/IP property defines the exact method by which UpGuard connects to the node. Along with the Port, Username and Password/Key fields, these properties determine how a scan is actually run. Note again that because the connection method is governed by these fields, the node name has no functional bearing here. The hostname/IP, username and password properties do not need to be unique across your nodes.

Example: Two Applications Existing on the Same VM

In this example, we have two applications called banking and promotions living on the same VM, prod01, and we would like to scan each application as a separate node. Each application has its own username and password. The VM has a hostname of prod01.bigcorp-bank.com. The first thing to decide is how to name the nodes.

If we decide to use a more extensive naming convention, we could name the two nodes:

  • prod01 - banking, and
  • prod01 - promotions.

If we decide to use the External ID field, then we could set the following properties on each node:

  • Node Name: prod01, External ID: banking
  • Node Name: prod01, External ID: promotions

For both nodes, we would set the Hostname / IP Address field to prod01.bigcorp-bank.com. We would then make sure the Username and Password fields on each node hold the corresponding credentials for that application. Since these nodes are conceptually different, you will probably also assign them to different node groups.
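As a sketch, the two node definitions from this example might look like the following when expressed as API payloads. The field names (medium_hostname, medium_username and so on) and the service account names are assumptions for illustration; adjust them to match your appliance's actual node schema.

    banking_node = {
        "node": {
            "name": "prod01",
            "external_id": "banking",
            "medium_hostname": "prod01.bigcorp-bank.com",  # shared Hostname / IP Address
            "medium_username": "banking_svc",              # hypothetical service account
            "medium_password": "********",                 # store securely in practice
        }
    }

    promotions_node = {
        "node": {
            "name": "prod01",
            "external_id": "promotions",
            "medium_hostname": "prod01.bigcorp-bank.com",  # same VM, different credentials
            "medium_username": "promotions_svc",           # hypothetical service account
            "medium_password": "********",
        }
    }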

Example: Replica Nodes

In this example, we have two machines that are meant to be near-identical replicas of each other. They may be named slightly differently at the VM level, but they both sit behind a load-balancing DNS record that only points to the live machine at any one time. Let's call the two VMs prod01-a and prod01-b; they both sit behind the DNS entry prod01.bigcorp-bank.com.

Here, if we wanted to scan both nodes to make sure they are identical and in good health, we could add them in the normal way as two separate nodes, each with its own hostname (prod01-a and prod01-b) and common username/password credentials. However, if we wanted to add them as a single node and only scan the live machine each time, we could use the following properties:

  • Node Name: prod01
  • Node Hostname: prod01.bigcorp-bank.com (using the hostname attached to the DNS record ensures only the live VM is scanned as the node).

This example also assumes that the authentication method on each of the load-balanced VMs is identical, so that the same credentials work for whichever VM the DNS record is actually pointing to at scan time.
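The single-node setup might look like this as an API payload (field names are assumptions, as in the previous sketch), and you can always confirm which machine the DNS record currently resolves to:

    import socket

    replica_node = {
        "node": {
            "name": "prod01",
            "medium_hostname": "prod01.bigcorp-bank.com",  # the DNS record, not a VM
            "medium_username": "scan_svc",   # hypothetical credentials, identical
            "medium_password": "********",   # on both prod01-a and prod01-b
        }
    }

    # Check which VM the DNS record currently points to:
    print(socket.gethostbyname("prod01.bigcorp-bank.com"))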

Change Detection and Alerting

If two applications are registered as two separate nodes with different credentials, then they will be scanned, and have changes detected, as two independent nodes. If you are monitoring applications in this way, you may not be interested in detecting change for system-level items such as packages, users or services. In this case you can add these categories to an ignore list. For more information, please see our guide on Ignore Lists.

If you have two replica machines set up as a single node, there may be subtle differences between the machines that you don't care about. Some common examples here are the node's hostname or IP address fields. In this case you can also add these items to an ignore list, since you know they will continue to change as the two VMs take over from one another.
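Conceptually, an ignore list acts as a filter applied to detected changes before they are reported. The following is an illustrative sketch in plain Python of that idea, not UpGuard's implementation:

    # Changes detected on an application node during a scan (illustrative data).
    detected_changes = [
        {"category": "files",    "item": "/opt/banking/app.war", "change": "modified"},
        {"category": "packages", "item": "openssl",              "change": "upgraded"},
        {"category": "users",    "item": "deploy",               "change": "added"},
    ]

    # System-level categories we don't care about on an application node.
    ignore_list = {"packages", "users", "services"}

    reportable = [c for c in detected_changes if c["category"] not in ignore_list]
    for change in reportable:
        print(f'{change["item"]}: {change["change"]}')  # only the file change remains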

Policy Design - Files

This section covers policy design for deployed files. If you have two nodes that represent two different applications that happen to exist on the same VM, then each should have its own separate policy defined. In fact, the nodes will probably be in different node groups, as they are conceptually different and only share a hostname by chance.

For replica nodes, you should generally be able to define one policy, as the files deployed on each replica should be identical and exist in identical locations. However, if each replica happens to deploy files slightly differently, you can design a policy that is the union of the checks to perform on both machines, and set the absent pass flag on every file check. This way, files that actually exist on a particular machine are checked as normal, and any file missing on one VM but present on the other is ignored.

For more information on how to set the Absent Pass field via the UI, please visit our guide on Absent Pass. If you are creating policies via the API, please see the absent_pass parameter for the Add File Check endpoint.
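As a sketch, a union-of-checks policy for the replica pair could be built with calls like the following. Only the absent_pass parameter is taken from the API guide mentioned above; the endpoint path, policy ID and file paths are assumptions for illustration.

    import requests

    APPLIANCE = "https://appliance.bigcorp-bank.com"  # hypothetical appliance URL
    HEADERS = {"Authorization": 'Token token="YOUR_API_KEY"'}  # placeholder credentials
    POLICY_ID = 42  # hypothetical policy shared by prod01-a and prod01-b

    # The union of files deployed across both replicas. absent_pass means a
    # file missing from one machine passes instead of failing the policy.
    for path in ["/opt/app/releases/current.war", "/opt/app/conf/override.yml"]:
        response = requests.post(
            f"{APPLIANCE}/api/v2/policies/{POLICY_ID}/file_checks.json",  # assumed path
            json={"file_check": {"path": path, "absent_pass": True}},
            headers=HEADERS,
        )
        response.raise_for_status()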

What Next?

For more information on monitoring CI/CD deployed files, please visit our guide on Monitoring CI/CD Deployments.

For more information on AWS Auto Scaling Groups, please visit our guide on How to Manage Nodes in an AWS Auto Scaling Group.

Tags: nodes policies