Masters and Slaves

Each appliance has three local databases:

  • An (encrypted) key store containing identity and encryption key information. The master appliance has a read + write version of this database and each slave has a read-only copy.
  • A security information event database that contains events generated by the Agents attached to it.
  • A temporary database used for publishing security events to the master.

The Appliance uses a single-master/multiple-slave database model. The majority of the time, agents are only reading updates to the state of the world they live in. These updates include agent configuration updates, new policy distributions, changes to existing policies, new encryption-key distributions, and changes to the access control lists on existing encryption keys. An example of a write operation is an endpoint client creating an encryption key. Once the client creates a key and its corresponding access control list (ACL), the client must send that information to the master database. Once the master is aware of the new key and ACL, it updates its own key store and synchronizes the change with the slave instances. Once each slave has learned about the new key, the agents connected to that slave will learn about it.
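The key-distribution flow above can be sketched as follows. This is a minimal illustration of the single-master write path, not the product's actual API; all class and method names are hypothetical.

```python
# Hypothetical sketch of the single-master key distribution flow.
# Class and method names are illustrative, not actual product APIs.

class Slave:
    def __init__(self):
        self.keystore = {}          # read-only copy of the master key store

    def replicate(self, key_id, entry):
        # Slaves accept key store updates only from the master.
        self.keystore[key_id] = entry


class Master:
    def __init__(self, slaves):
        self.keystore = {}          # read/write key store
        self.slaves = slaves

    def publish_key(self, key_id, key_material, acl):
        # 1. The master records the new key and its ACL.
        entry = {"key": key_material, "acl": acl}
        self.keystore[key_id] = entry
        # 2. The master synchronizes the change to every slave.
        for slave in self.slaves:
            slave.replicate(key_id, entry)


# An endpoint client creates a key and sends it to the master; agents
# attached to each slave then see it once replication completes.
slaves = [Slave(), Slave()]
master = Master(slaves)
master.publish_key("k-42", b"secret-bytes", acl=["alice", "bob"])
print(all("k-42" in s.keystore for s in slaves))  # True
```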

Non-Clustered Data

Each Appliance has its own configuration information that is not replicated to other members of the cluster. This information includes the hostname, IP address, DNS settings, and so on.

Local host mapping

If no "Current Addresses" appear in the Terminal/Shell (that is, none were assigned via DHCP), you need to add trusted IP addresses to the HOSTS file.
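A HOSTS file entry maps an address to a hostname, one mapping per line. The address and hostname below are placeholders, not values from this product:

```
# Example HOSTS entry (placeholder values)
10.0.0.5    pem01.example.com
```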

Agent Communication

Only online/offline status and home data center are used to determine which server an agent prefers. If that server is unavailable, the agent tries every other server until it finds one that works. If the agent is not already talking to its preferred server, it periodically checks whether it can return to it. If no server in the agent's home data center is available, the expected behavior is that the agent talks to one of the servers in the other data center.
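The selection behavior above can be sketched as a simple preference-then-fallback rule. This is an illustrative sketch only; the function name and data shapes are assumptions, not product APIs.

```python
# Illustrative sketch of agent server selection: prefer an online server
# in the home data center, otherwise fall back to any online server.

def pick_server(servers, home_dc):
    """Return the preferred server, a fallback server, or None."""
    preferred = [s for s in servers if s["dc"] == home_dc and s["online"]]
    if preferred:
        return preferred[0]
    # Preferred server unavailable: try every other server until one works.
    for s in servers:
        if s["online"]:
            return s
    return None


servers = [
    {"name": "pem-us-1", "dc": "US", "online": False},
    {"name": "pem-uk-1", "dc": "UK", "online": True},
]
# The home data center is down, so the agent fails over to the other one.
print(pick_server(servers, home_dc="US")["name"])  # pem-uk-1
```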


Clustering two or more PK Endpoint Manager (PEM) Appliances provides centralized management and fault tolerance. Clusters are required for failover, high availability, and performance. Smartcrypt data (identities, encryption keys, policies, and configuration data) is replicated between nodes to ensure information cannot be lost. Clustered systems can be geographically dispersed and do not need to be located on the same network.

Note: Clustering does not provide network load-balancing functionality. If you want to load balance traffic among appliances, you can use a third-party load balancer that supports sticky sessions or use DNS round-robin. However, appliances can be assigned to data centers to ensure that users in a specific Active Directory OU are always directed to the same appliance or group of appliances.

Creating a new Cluster

To create a cluster, navigate to the Advanced tab and select Cluster. If the Appliance is not a current Cluster member, you will see a message indicating that the server is currently in standalone mode. Click Start a Cluster to put the Appliance in cluster mode. You should see a single member in the cluster list with State set to Up and the Database Source set to Master. To add systems to this cluster, see "Adding a new system to an existing Cluster."

Adding a new system to an existing Cluster

To add a new Appliance to the Cluster:

  1. Navigate to Advanced > Cluster. If the Appliance is not a current Cluster member, you will see a message indicating that the server is currently in standalone mode.
  2. Click Start Pairing Mode to allow the node to accept a new Appliance.
  3. Navigate to the current Master node in the cluster and choose Pair with Another.
  4. Type the URL of the node you put into pairing mode in the previous step and click Pair. Once the pairing process has finished, you will see a "Complete" message. The clustering page will then list all nodes and their current states. New nodes appear in a "Down" state until they finish their initial data replication from the Master; for large databases over slower network links, this can take several minutes.

Removing a system from a cluster

To remove an Appliance from a Cluster:

  1. Navigate to Advanced > Cluster.
  2. Select the node you wish to remove and choose Remove on the right-hand side. This resets the removed node to its original factory state. If you do not plan to re-add the removed node to the cluster, choose Delete from the link on the right. Note: If the node you wish to remove is currently the master, you must choose a new master first. See "Promoting a slave to master" below.

Promoting a slave to master

To make a slave node the master node:

  1. Login to the node you wish to promote.
  2. Navigate to Advanced > Cluster.
  3. Click Make Database Master. This causes a reassignment of the master role that, upon completion, will allow you to cleanly remove the old master from the Cluster.

Data Centers

Data Centers map Active Directory (AD) users, by Organizational Unit (OU), to a PK Endpoint Manager or Cluster. A Data Center requires a defined AD Organizational Unit; when a user agent from that OU authenticates, it is directed to communicate with the PK Endpoint Manager or Cluster responsible for serving that OU.

Creating a New Data Center

Navigate to Advanced > Data Centers. Click Add. Choose a descriptive name and enter the Active Directory Organizational Unit paths you wish to assign to this Data Center. For example:

  Data Center Name: PKWARE UK
  Data Center OUs:  ou=users,ou=lhr,dc=pkware,dc=com

  Data Center Name: PKWARE US
  Data Center OUs:  ou=users,ou=mke,dc=pkware,dc=com | ou=users,ou=lga,dc=pkware,dc=com
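The mapping above can be sketched as a suffix match of a user's AD distinguished name against each Data Center's OU paths. This is an assumed illustration of the matching idea, not the product's actual lookup logic; the function and dictionary names are hypothetical.

```python
# Hypothetical sketch: map a user's AD distinguished name to a Data
# Center by matching the DN's OU suffix. Names and matching rules are
# assumptions for illustration only.

DATA_CENTERS = {
    "PKWARE UK": ["ou=users,ou=lhr,dc=pkware,dc=com"],
    "PKWARE US": ["ou=users,ou=mke,dc=pkware,dc=com",
                  "ou=users,ou=lga,dc=pkware,dc=com"],
}


def data_center_for(user_dn):
    # A DN such as "cn=jdoe,ou=users,ou=mke,dc=pkware,dc=com" matches a
    # Data Center when it ends with one of that Data Center's OU paths.
    dn = user_dn.lower()
    for name, ous in DATA_CENTERS.items():
        if any(dn.endswith(ou) for ou in ous):
            return name
    return None


print(data_center_for("cn=jdoe,ou=users,ou=mke,dc=pkware,dc=com"))  # PKWARE US
```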

Assigning a Data Center to a Node

Once you have created a new Data Center, it must be assigned to a node in the Cluster. Return to the Cluster tab and click Edit Data Centers. From here, you may assign specific PK Endpoint Manager nodes to the data centers you wish them to service.