Migrate Chef Infra Server/Chef Backend to Chef Automate HA
Warning

Customers using only a standalone Chef Infra Server or Chef Backend are advised to follow this migration guidance. Customers using Chef Manage with Chef Backend should not migrate with this procedure.

Also, for customers using a standalone Chef Infra Server, cookbooks must be stored in the database or in S3, not on the filesystem.
This page explains the procedure to migrate existing standalone Chef Infra Server or Chef Backend data to a newly deployed Chef Automate HA cluster. The migration involves two steps:

- Back up the data from the existing Chef Infra Server or Chef Backend with `knife ec backup`.
- Restore the backed-up data to the newly deployed Chef Automate HA environment with `knife ec restore`.
Take a backup using the `knife-ec-backup` utility and move the backup folder to the newly deployed Chef Server, then restore it using the same utility. The backup migrates all the cookbooks, users, data bags, policies, and organizations. The `knife-ec-backup` utility backs up and restores the data of an Enterprise Chef Server installation, preserving the data in an intermediate, editable text format.
Note
- The migration procedure is tested on Chef Infra Server versions 14 and 15.
- The migration procedure is tested on, and possible with, Chef Backend versions above 2.1.0.
Prerequisites
Check the AWS Deployment Prerequisites and On-premises Deployment Prerequisites pages before migrating.
Backup the Existing Chef Infra Server or Chef Backend Data
Execute the command below to install Habitat:

curl https://raw.githubusercontent.com/habitat-sh/habitat/master/components/hab/install.sh | sudo bash
Execute the command below to install the Habitat package for `knife-ec-backup`:

hab pkg install chef/knife-ec-backup
Execute the command below to generate a knife tidy server report to examine stale nodes, data, etc.:

hab pkg exec chef/knife-ec-backup knife tidy server report --node-threshold 60 -s <chef server URL> -u <pivotal> -k <path of pivotal>

Where:

- `pivotal` is the name of the user.
- `path of pivotal` is the path where the user's pem file is stored.
- `--node-threshold NUM_DAYS` is the maximum number of days since the last check-in before a node is considered stale.

For example:

hab pkg exec chef/knife-ec-backup knife tidy server report --node-threshold 60 -s https://chef.io -u pivotal -k /etc/opscode/pivotal.pem
Execute the command below to initiate a backup of your Chef Server data:

hab pkg exec chef/knife-ec-backup knife ec backup backup_$(date '+%Y%m%d%H%M%S') --webui-key /etc/opscode/webui_priv.pem -s <chef server url>

In this command:

- `--with-user-sql` is required to handle user passwords and to ensure user-specific association groups are not duplicated.
- `--with-key-sql` handles cases where customers have users with multiple pem keys associated with their user or clients. The current chef-server API only dumps the default key. Sometimes users will generate and assign additional keys to give additional users access to an account, while still being able to lock them out later without removing everyone's access.

For example:

hab pkg exec chef/knife-ec-backup knife ec backup backup_$(date '+%Y%m%d%H%M%S') --webui-key /etc/opscode/webui_priv.pem -s https://chef.io
Optionally, execute the command below to clean unused data from reports:

hab pkg exec chef/knife-ec-backup knife tidy server clean --backup-path /path/to/an-ec-backup
Execute the command below to copy the backup directory to the Automate HA Chef Server:

scp -i /path/to/key /path/to/backup-file user@host:/home/user

If your HA Chef Server is in a private subnet, scp the backup file to the bastion host first, and then from the bastion to the Chef Server.
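For the private-subnet case, the two-hop copy can be collapsed into a single command with OpenSSH's ProxyJump option. A minimal sketch, assuming hypothetical host names (`bastion.example.com`, `chef-server.internal`) and a placeholder key path:

```shell
# Copy the backup through the bastion in one hop using ProxyJump.
# All host names and paths below are placeholders - substitute your own.
copy_backup() {
  local backup_dir=$1
  # SCP can be overridden (e.g. SCP=echo) for a dry run.
  ${SCP:-scp} -r -i /path/to/key \
    -o ProxyJump=user@bastion.example.com \
    "$backup_dir" user@chef-server.internal:/home/user
}

# Real copy: copy_backup backup_20240101120000
```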
Add S3 Configurations for Cookbook Storage
Before restoring the backup on the Automate HA Chef Server, configure S3 storage for cookbooks. Cookbooks that are stored in S3 on the existing Chef Server can be stored in either S3 or PostgreSQL in Automate HA.
Note
- If you reuse the same bucket for Automate HA, the old files are not affected when new cookbook files are uploaded.
- If you use a new bucket for Automate HA, new cookbook files are uploaded to it.
Restore Data to Chef Automate HA
Execute the command below to install the Habitat package for `knife-ec-backup`:

hab pkg install chef/knife-ec-backup
Execute the commands below to restore the backup:

export LC_ALL=en_US.UTF-8
export LANG=en_US.UTF-8
hab pkg exec chef/knife-ec-backup knife ec restore <path/to/directory/backup_file> --yes --concurrency 1 --webui-key /hab/svc/automate-cs-oc-erchef/data/webui_priv.pem --purge -c /hab/pkgs/chef/chef-server-ctl/*/*/omnibus-ctl/spec/fixtures/pivotal.rb
Steps to Validate if Migration is Successful
Download the validation script using the command below:

curl https://raw.githubusercontent.com/chef/automate/main/dev/infra_server_objects_count_collector.sh -o infra_server_objects_count_collector.sh

Execute the command below to get the counts of objects:

bash infra_server_objects_count_collector.sh -S <chef-server-url> -K /path/to/key -F Filename

Where:

- `-S` is the Chef Server URL.
- `-K` is the path of the pivotal.pem file.
- `-F` is the path to store the output file.
Repeat the above command against the new server to get its counts.

Now run the command below to check the differences between the old and new data. Ideally, there should be no differences if the migration was successful:

diff old_server_file new_server_file
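The comparison in the last step can be wrapped in a small helper that prints a clear verdict. A minimal sketch; the two file names are whatever you passed to `-F` on each server:

```shell
# Compare the object-count files collected from the old and new Chef Servers.
# Prints the unified diff and fails when the counts disagree.
compare_counts() {
  local old_file=$1 new_file=$2
  if diff -u "$old_file" "$new_file"; then
    echo "Migration counts match."
  else
    echo "Counts differ - investigate before switching over."
    return 1
  fi
}
```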
In-place Migration (Chef Backend to Automate HA)
In this scenario, the customer migrates from Chef Backend (five machines) to Automate HA in place; that is, Automate HA is deployed on the same five machines where Chef Backend is running. One extra bastion node is required to manage the deployment of Automate HA on the Chef Backend infrastructure.
Note

- Set up your workstation against the newly created Automate HA Chef Server. This is only needed if you had set up a workstation earlier.
- This in-place migration works when cookbooks are stored in the database or in S3. It does not support the use case where cookbooks are stored on the filesystem.
- To validate the in-place migration, run the validation script before starting the backup and restore.
curl https://raw.githubusercontent.com/chef/automate/main/dev/infra_server_objects_count_collector.sh -o infra_server_objects_count_collector.sh
bash infra_server_objects_count_collector.sh -S <chef-server-url> -K /path/to/key -F Filename

Where:

- `-S` is the Chef Server URL.
- `-K` is the path of the pivotal.pem file.
- `-F` is the path to store the output file.
Steps for In-place Migration
ssh to all the backend nodes of Chef Backend and run:

chef-backend-ctl stop

ssh to all the frontend nodes of Chef Backend and run:

chef-server-ctl stop
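The two stop steps above can also be scripted from a single host once you can reach every node. A minimal sketch, assuming hypothetical node IPs, ssh user, and key path:

```shell
# Stop Chef Backend services on every node before deploying Automate HA.
# IPs, user, and key path are placeholders - replace with your inventory.
SSH=${SSH:-ssh}   # overridable (e.g. SSH=echo) for a dry run

stop_chef_backend() {
  local backends="10.0.3.0 10.0.4.0 10.0.5.0"   # hypothetical backend IPs
  local frontends="10.0.1.0 10.0.2.0"           # hypothetical frontend IPs
  for node in $backends; do
    $SSH -i ~/.ssh/mykey.pem "myusername@$node" 'sudo chef-backend-ctl stop'
  done
  for node in $frontends; do
    $SSH -i ~/.ssh/mykey.pem "myusername@$node" 'sudo chef-server-ctl stop'
  done
}

# Real run: stop_chef_backend
```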
Create one bastion machine under the same network space.
ssh to the bastion machine, then download the chef-automate CLI and extract the downloaded zip file:

curl https://packages.chef.io/files/current/latest/chef-automate-cli/chef-automate_linux_amd64.zip | gunzip - > chef-automate && chmod +x chef-automate && cp -f chef-automate /usr/bin/chef-automate
Create an airgap bundle using the command:

./chef-automate airgap bundle create

Generate the config.toml file using the following command:

./chef-automate init-config-ha existing_infra
Edit config.toml and update the following:

- Update the `instance_count`.
- `fqdn`: the load balancer URL, which points to the frontend nodes.
- `keys`: the ssh username and private keys.
- Ensure you provide the Chef Backend frontend server IPs for the Automate HA Chef Automate and Chef Server nodes.
- Ensure you provide the Chef Backend backend server IPs for the Automate HA PostgreSQL and OpenSearch nodes.

A sample configuration follows; modify it according to your needs.
```toml
[architecture.existing_infra]
secrets_key_file = "/hab/a2_deploy_workspace/secrets.key"
secrets_store_file = "/hab/a2_deploy_workspace/secrets.json"
architecture = "existing_nodes"
workspace_path = "/hab/a2_deploy_workspace"
ssh_user = "myusername"
ssh_port = "22"
ssh_key_file = "~/.ssh/mykey.pem"
sudo_password = ""
# DON'T MODIFY THE BELOW LINE (backup_mount)
backup_mount = "/mnt/automate_backups"
# Eg.: backup_config = "object_storage" or "file_system"
backup_config = "file_system"
# If backup_config = "object_storage" fill out [object_storage.config] as well
## Object storage similar to AWS S3 Bucket
[object_storage.config]
bucket_name = ""
access_key = ""
secret_key = ""
# For the S3 bucket, the default endpoint value is "https://s3.amazonaws.com"
# Include protocol to the endpoint value. Eg: https://customdns1.com or http://customdns2.com
endpoint = ""
# [Optional] Mention object_storage region if applicable
# Eg: region = "us-west-1"
region = ""
## === ===

[automate.config]
# admin_password = ""
# automate load balancer fqdn IP or path
fqdn = "chef.example.com"
instance_count = "2"
# teams_port = ""
config_file = "configs/automate.toml"

[chef_server.config]
instance_count = "2"

[opensearch.config]
instance_count = "3"

[postgresql.config]
instance_count = "3"

[existing_infra.config]
automate_private_ips = ["10.0.1.0","10.0.2.0"]
chef_server_private_ips = ["10.0.1.0","10.0.2.0"]
opensearch_private_ips = ["10.0.3.0","10.0.4.0","10.0.5.0"]
postgresql_private_ips = ["10.0.3.0","10.0.4.0","10.0.5.0"]
```
Deploy using the following command:
./chef-automate deploy config.toml --airgap-bundle <airgapped bundle name>
Clean up the old packages from Chef Backend (such as Elasticsearch and PostgreSQL).

Once done, restore the data to Chef Automate HA.
Connect Workstation/Nodes to Automate HA
Open the ~/.chef/config.rb file on your workstation and update the `chef_server_url` with the Chef Server load balancer FQDN. Example:

chef_server_url "https://<chef-server-lb-fqdn>/organizations/new_org"

Now run `knife user list`, `knife node list`, or `knife cookbook list`. Each command should return valid output.
Updating on Nodes
- ssh into every node and open the /etc/chef/client.rb file. On Windows machines, the default location for this file is C:\chef\client.rb; on all other systems it is /etc/chef/client.rb.
- Update the `chef_server_url` with the Chef Server load balancer FQDN.
- Run the `chef-client` command. It will connect with the new setup, perform the scan, and generate the report in Chef Automate.
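Rewriting `chef_server_url` on each node can be done non-interactively with sed. A minimal sketch; the file path, FQDN, and organization name (`new_org`, from the earlier example) are assumptions:

```shell
# Point a node's client.rb at the new Chef Server load balancer by
# replacing the whole chef_server_url line in place.
update_chef_server_url() {
  local client_rb=$1 new_fqdn=$2
  sed -i "s|^chef_server_url .*|chef_server_url \"https://${new_fqdn}/organizations/new_org\"|" "$client_rb"
}

# Example: update_chef_server_url /etc/chef/client.rb chef-server-lb.example.com
```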
Updating Nodes via Workstation
Bootstrap the nodes to update the `chef_server_url` using the following steps:

- Go to your workstation and open the ~/.chef/config.rb file.
- Update the `chef_server_url` with the Chef Server load balancer FQDN.
- Bootstrap the node. Bootstrapping updates the `chef_server_url` on that node. Refer: Node Bootstrapping
Use Automate HA for Chef-Backend User
Download and install Chef Workstation on the bastion host or a local machine. To set up Chef Workstation, see the Workstation Set Up documentation.
Use Existing Private Supermarket with Automate HA
If you are using a private instance of Supermarket with Chef Backend, refer to Supermarket with Automate HA to ensure that your existing private Supermarket instance works with the Automate HA cluster.