On-Prem Deployment using Filesystem

Note

Chef Automate 4.10.1, released on 6 September 2023, includes improvements to the deployment and installation experience of Automate HA. Read the blog to learn more about the key improvements. Refer to the prerequisites pages (On-Premises, AWS) and plan your usage with your customer success manager or account manager.

Note

  • If backup_config is set to file_system in config.toml, backup is already configured during deployment and the steps below are not required. If backup_config is left blank, the backup must be configured manually.

Overview

A shared file system is always required to create OpenSearch snapshots. To register the snapshot repository with OpenSearch, mount the same shared filesystem to the same location on all master and data nodes, and register that location (or one of its parent directories) in the path.repo setting on all master and data nodes.

Setting up the backup configuration

Configuration in OpenSearch Node

  • Mount the shared file system to the base mount path specified in backup_mount on all OpenSearch and frontend servers (see the mount sketch after the note below).

Note

  • /mnt/automate_backups is the default value for backup_mount, and it is used as the reference backup path throughout this page.
  • When using file_system as the backup type, the uid of the hab user must be the same across all remote nodes. This is validated by the verify check before deployment.
  • Do not manually modify or delete any file inside the backup_mount directory.
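
A minimal sketch of mounting the shared filesystem, assuming a hypothetical NFS server at nfs.example.com exporting /export/automate_backups (substitute the details of your own shared storage); run this on every OpenSearch and frontend server:

    # Requires NFS client utilities (e.g. nfs-utils) on the node
    sudo mkdir -p /mnt/automate_backups
    sudo mount -t nfs nfs.example.com:/export/automate_backups /mnt/automate_backups

    # Optionally persist the mount across reboots
    echo 'nfs.example.com:/export/automate_backups /mnt/automate_backups nfs defaults 0 0' | sudo tee -a /etc/fstab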

Apply the following steps on all of the OpenSearch server nodes:

  • Create an OpenSearch sub-directory and set its permissions (only after the network mount is correctly mounted).

    sudo mkdir /mnt/automate_backups/opensearch
    sudo chown hab:hab /mnt/automate_backups/opensearch/
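
To confirm the prerequisites from the note above, a quick sanity check you can run on each remote node:

    # Print the uid of the hab user; it must be identical on all remote nodes
    id -u hab

    # Verify the OpenSearch backup sub-directory is owned by hab:hab
    ls -ld /mnt/automate_backups/opensearch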
    

Configuration for OpenSearch Node from Bastion Host

Configure the OpenSearch path.repo setting by following the steps given below:

  • Create a .toml file (say os_config.toml) on the bastion host and copy the following template with the path to the repo.

      [path]
      # Replace /mnt/automate_backups with the backup_mount config found on the Bastion host in /hab/a2_deploy_workspace/a2ha.rb
      repo = "/mnt/automate_backups/opensearch"
    
  • The following command will add the configuration to the OpenSearch nodes.

      chef-automate config patch --opensearch <PATH TO OS_CONFIG.TOML>
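
To verify that the patch was applied, one option is to inspect the rendered OpenSearch configuration directly on an OpenSearch node. The path below assumes the default Habitat render location for the automate-ha-opensearch service; adjust it if your layout differs.

      # On an OpenSearch node: confirm path.repo points at the shared mount
      grep 'repo' /hab/svc/automate-ha-opensearch/config/opensearch.yml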
    
Healthcheck commands

  • The following command can be run on the bastion node.

    chef-automate status --opensearch
    
  • The following commands can be run on the OpenSearch nodes.

    # Check whether the OpenSearch service is up or not
    hab svc status

    # Another way to check is whether all the indices are green or not
    curl -k -X GET "https://localhost:9200/_cat/indices/*?v=true&s=index&pretty" -u admin:admin

    # Watch for a message about OpenSearch going from RED to GREEN
    journalctl -u hab-sup -f | grep 'automate-ha-opensearch'
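
Another quick check, assuming the default admin credentials shown above, is the cluster health endpoint; the status field should report green before you take backups:

    # Overall cluster health; "status" should be green
    curl -k -X GET "https://localhost:9200/_cluster/health?pretty" -u admin:admin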
    

Configuration for Automate Node from Provision Host

  • Configure Automate to handle External OpenSearch Backups.

  • Create an automate.toml file on the provisioning server using the following command:

    touch automate.toml
    

    Add the following configuration to automate.toml on the provisioning host:

    [global.v1.external.opensearch.backup]
    enable = true
    location = "fs"
    
    [global.v1.external.opensearch.backup.fs]
    # The `path.repo` setting you've configured on your OpenSearch nodes must be a parent directory of the setting you configure here:
    path = "/mnt/automate_backups/opensearch"
    
    [global.v1.backups.filesystem]
    path = "/mnt/automate_backups/backups"
    
  • Patch the automate.toml config from the provision host to apply it to the frontend nodes.

    chef-automate config patch --fe automate.toml
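
To confirm the patch was applied, you can inspect the running configuration on an Automate node with chef-automate config show; the exact output will vary with your deployment, but the backup settings above should be present.

    # On an Automate node: confirm the backup settings were applied
    chef-automate config show | grep -A 3 'backup'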
    

Backup and Restore

Backup

To create a backup, run the backup command from the bastion host:

chef-automate backup create
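
After the backup completes, you can list the available backups from the bastion host to note the backup ID needed for a restore:

chef-automate backup list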

Restore

To restore backed-up data of Chef Automate High Availability (HA) using the external file system (EFS), follow the steps given below:

  • Check the status of Automate HA Cluster from the bastion nodes by executing the chef-automate status command.

  • Execute the restore command from the bastion host:

    chef-automate backup restore <BACKUP-ID> --yes -b /mnt/automate_backups/backups --airgap-bundle </path/to/bundle>

Note

  • If you are restoring the backup from an older version, then you need to provide the --airgap-bundle </path/to/current/bundle>.
  • Large Compliance Report is not supported in Automate HA.
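
As an illustration only, with a hypothetical backup ID and a hypothetical bundle path, a restore invocation might look like this:

    chef-automate backup restore 20230911123456 --yes -b /mnt/automate_backups/backups --airgap-bundle /var/tmp/automate.aib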

Troubleshooting

Try these steps if Chef Automate returns an error while restoring data.

  1. Check the Chef Automate status.

    chef-automate status
    
  2. Check the status of your Habitat service on the Automate node.

    hab svc status
    
  3. If the deployment services are not healthy, reload them.

    hab svc load chef/deployment-service
    

Now check the status of the Automate node and then try running the restore command from the bastion host.

  1. How to change the base_path or path for a file system backup:

    • At the time of deployment, the default value of backup_mount is /mnt/automate_backups.

    • If you modify backup_mount in config.toml before deployment, the deployment process configures the updated value automatically.

    • If you change the backup_mount value post-deployment, you need to patch the configuration manually on all the frontend and backend nodes. For example, if you change backup_mount to /bkp/backps:

    • Update the FE nodes with the below template, using the command chef-automate config patch fe.toml --fe:

         [global.v1.backups]
            [global.v1.backups.filesystem]
               path = "/bkp/backps"
         [global.v1.external.opensearch.backup]
            [global.v1.external.opensearch.backup.fs]
               path = "/bkp/backps"
      
    • Update the OpenSearch node with the below template, using the command chef-automate config patch os.toml --os:

      [path]
         repo = "/bkp/backps"
      
      • Run the curl request on one of the Automate frontend nodes:

        curl localhost:10144/_snapshot?pretty
        
        • If the response is an empty {}, no further action is needed.

        • If the response has JSON output, check the location value of each repository; it should start with the updated backup_mount, i.e. /bkp/backps.

        {
          "chef-automate-es6-event-feed-service" : {
            "type" : "fs",
            "settings" : {
              "location" : "/mnt/automate_backups/opensearch/automate-elasticsearch-data/chef-automate-es6-event-feed-service"
            }
          },
          "chef-automate-es6-compliance-service" : {
            "type" : "fs",
            "settings" : {
              "location" : "/mnt/automate_backups/opensearch/automate-elasticsearch-data/chef-automate-es6-compliance-service"
            }
          },
          "chef-automate-es6-ingest-service" : {
            "type" : "fs",
            "settings" : {
              "location" : "/mnt/automate_backups/opensearch/automate-elasticsearch-data/chef-automate-es6-ingest-service"
            }
          },
          "chef-automate-es6-automate-cs-oc-erchef" : {
            "type" : "fs",
            "settings" : {
              "location" : "/mnt/automate_backups/opensearch/automate-elasticsearch-data/chef-automate-es6-automate-cs-oc-erchef"
            }
          }
        }
        
        • If the prefix of the location value does not match the backup_mount, you need to delete the existing snapshots. Use the below script to delete them from one of the Automate frontend nodes:

           # Collect all snapshot repository names and delete each one
           snapshot=$(curl -XGET http://localhost:10144/_snapshot?pretty | jq 'keys[]')
           for name in $snapshot; do
               key=$(echo $name | tr -d '"')
               curl -XDELETE localhost:10144/_snapshot/$key?pretty
           done
        
        • The above script requires jq to be installed. You can install it from the airgap bundle; use the following command on one of the Automate frontend nodes to locate the jq package:
        ls -ltrh /hab/cache/artifacts/ | grep jq
        
        -rw-r--r--. 1 ec2-user ec2-user  730K Dec  8 08:53 core-jq-static-1.6-20220312062012-x86_64-linux.hart
        -rw-r--r--. 1 ec2-user ec2-user  730K Dec  8 08:55 core-jq-static-1.6-20190703002933-x86_64-linux.hart
        
        • If multiple jq versions are present, install the latest one. Use the below command to install the jq package on the Automate frontend node:
        hab pkg install /hab/cache/artifacts/core-jq-static-1.6-20190703002933-x86_64-linux.hart -bf
        
  2. Below are the steps when object storage is used as the backup option:

    • At the time of deployment, backup_config will be object_storage.
    • To use object_storage, the below template is used at the time of deployment:
       [object_storage.config]
        google_service_account_file = ""
        location = ""
        bucket_name = ""
        access_key = ""
        secret_key = ""
        endpoint = ""
        region = ""
    
    • If this was configured before deployment, no further action is needed.
    • If you want to change the bucket or base_path, use the below template for the frontend nodes:
    [global.v1]
      [global.v1.external.opensearch.backup.s3]
          bucket = "<BUCKET_NAME>"
          base_path = "opensearch"
       [global.v1.backups.s3.bucket]
          name = "<BUCKET_NAME>"
          base_path = "automate"
    
    • You can choose any value for base_path. The base_path patch is only required for the frontend nodes.

    • Apply the above template with the command chef-automate config patch frontend.toml --fe.

    • After the configuration patch, use the curl request to validate:

      curl localhost:10144/_snapshot?pretty
      
    • If the response is an empty {}, no further action is needed.

    • If the response has JSON output, it should have the correct value for the base_path:

      {
          "chef-automate-es6-event-feed-service" : {
            "type" : "s3",
            "settings" : {
              "bucket" : "MY-BUCKET",
              "base_path" : "opensearch/automate-elasticsearch-data/chef-automate-es6-event-feed-service",
              "readonly" : "false",
              "compress" : "false"
            }
          },
          "chef-automate-es6-compliance-service" : {
            "type" : "s3",
            "settings" : {
              "bucket" : "MY-BUCKET",
              "base_path" : "opensearch/automate-elasticsearch-data/chef-automate-es6-compliance-service",
              "readonly" : "false",
              "compress" : "false"
            }
          },
          "chef-automate-es6-ingest-service" : {
            "type" : "s3",
            "settings" : {
              "bucket" : "MY-BUCKET",
              "base_path" : "opensearch/automate-elasticsearch-data/chef-automate-es6-ingest-service",
              "readonly" : "false",
              "compress" : "false"
            }
          },
          "chef-automate-es6-automate-cs-oc-erchef" : {
            "type" : "s3",
            "settings" : {
              "bucket" : "MY-BUCKET",
              "base_path" : "opensearch/automate-elasticsearch-data/chef-automate-es6-automate-cs-oc-erchef",
              "readonly" : "false",
              "compress" : "false"
            }
          }
      }
      
      • If the base_path value does not match, delete the existing snapshots. Refer to the snapshot deletion steps in the file system section above.
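
As a convenience, assuming jq is installed as described in the file system section above, a one-liner sketch that summarizes each snapshot repository and its configured location (file system) or base_path (object storage):

        curl -s http://localhost:10144/_snapshot | jq 'to_entries[] | {repo: .key, path: (.value.settings.location // .value.settings.base_path)}'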