Configuration and Change Management

2021.1

Tuna standardizes and automates configuration management through automation scripts and through documentation of all changes to production systems and networks. Automation tools such as Ansible, Packer, and Terraform configure all Tuna systems according to established and tested policies, and are also used as part of our Disaster Recovery plan and process.

Policy Statements

Tuna policy requires that:

(a) All production changes, including but not limited to software deployment, feature toggle enablement, network infrastructure changes, and access control authorization updates, must be invoked through the approved change management process.

(b) Each production change must maintain complete traceability to fully document the request, including the requestor, date/time of the change, actions taken, and results.

(c) Each production change must be fully tested prior to implementation.

(d) Each production change must include a rollback plan to back out the change in the event of failure.

(e) Each production change must include proper approval.

  • The approvers are determined based on the type of change.
  • Approvers must be someone other than the author/executor of the change.
  • Approvals may be automatically granted if certain criteria are met. The auto-approval criteria must be pre-approved by the Security Officer and fully documented and validated for each request.

Controls and Procedures

Configuration Management Processes

  1. Configuration management is automated using industry-recognized tools like Ansible, Packer, and Terraform to enforce secure configuration standards.

  2. All changes to production systems, network devices, and firewalls are reviewed and approved by the Security team before they are implemented, to ensure they comply with business and security requirements.

  3. All changes to production systems are tested before they are implemented in production.

  4. Implementation of approved changes is performed only by authorized personnel.

  5. Tooling is used to generate an up-to-date system inventory.

    • All systems are categorized and labeled by their corresponding environment, such as dev, qa, sandbox, and prod.
    • All systems are classified and labeled based on the data they store or process, according to the Tuna data classification model.
    • The Security team maintains automation that monitors all changes to IT assets and generates inventory lists, using automatic IT asset discovery and the services provided by each cloud provider (a minimal sketch of this automation appears after this list).
    • The IT asset database is used to generate the diagrams and asset lists required by the Risk Assessment phase of Tuna's Risk Management procedures.
    • The Tuna Change Management process ensures that all asset inventory created by automation is reconciled against real changes to production systems. This process includes periodic manual audits and approvals.
    • During each change implementation, the change is reviewed and verified by the target asset owner as needed.
  6. Tuna uses the Security Technical Implementation Guides (STIGs) published by the Defense Information Systems Agency (DISA) as a baseline for hardening systems.

    • Windows-based systems use a baseline Active Directory group policy configuration in conjunction with the DISA STIGs.
    • Linux-based systems use the Red Hat Enterprise Linux STIG as a guideline for implementation.
    • EC2 instances in AWS are provisioned using only hardened and approved Amazon Machine Images (AMIs).
    • Docker containers are launched using only approved Docker images that have been through security testing.
  7. All IT assets at Tuna have their time synchronized to a single authoritative source.

    • Cloud systems are configured to use AWS's internal NTP server pool (the Amazon Time Sync Service), which is synchronized with the global NTP network.
  8. All frontend functionality (e.g. user dashboards and portals) is separated from backend systems (e.g. databases and app servers) by being deployed on separate servers or containers.

  9. All software and systems are required to complete full-scale testing before being promoted to production.

  10. All code changes are reviewed while in development, using pull requests and static code analysis tools, to assure software code quality and proactively detect potential security issues. More details can be found in the Software Release / Code Promotion section.
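
To illustrate the inventory automation described in item 5, here is a minimal sketch using boto3. The tag names ("environment" and "data-classification") are illustrative assumptions, not Tuna's actual tag schema, and a real implementation would also cover non-EC2 assets.

```python
# Minimal sketch of automated asset inventory generation (item 5).
# Assumes instances are tagged with "environment" and "data-classification";
# these tag names are illustrative, not Tuna's actual schema.
import boto3

def build_inventory(region="us-east-1"):
    ec2 = boto3.client("ec2", region_name=region)
    inventory = []
    paginator = ec2.get_paginator("describe_instances")
    for page in paginator.paginate():
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                tags = {t["Key"]: t["Value"] for t in instance.get("Tags", [])}
                inventory.append({
                    "instance_id": instance["InstanceId"],
                    "environment": tags.get("environment", "UNLABELED"),
                    "classification": tags.get("data-classification", "UNLABELED"),
                })
    # Unlabeled assets surface here and are flagged for the manual
    # reconciliation and audit step described above.
    return inventory

if __name__ == "__main__":
    for asset in build_inventory():
        print(asset)
```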

Configuration Monitoring and Auditing

All infrastructure and system configurations, including all software-defined sources, are centrally aggregated into a configuration management database (CMDB), implemented in GitLab.

Configuration auditing rules are created according to the established baselines, approved configuration standards, and control policies. Deviations, misconfigurations, and configuration drift are detected by these rules, which alert the Security team.
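
As an illustration, the following minimal sketch shows the shape of one such auditing rule: it compares a live configuration snapshot against the approved baseline and reports any deviation. The file names and key layout are assumptions for the example.

```python
# Minimal drift-detection sketch: compare a live configuration snapshot
# against the approved baseline and report deviations for alerting.
# File names and keys are illustrative assumptions.
import json

def detect_drift(baseline_path, live_path):
    with open(baseline_path) as f:
        baseline = json.load(f)
    with open(live_path) as f:
        live = json.load(f)
    drifts = []
    for key, expected in baseline.items():
        actual = live.get(key)
        if actual != expected:
            drifts.append({"setting": key, "expected": expected, "actual": actual})
    return drifts

if __name__ == "__main__":
    for d in detect_drift("baseline.json", "live.json"):
        print(f"DRIFT: {d['setting']}: expected {d['expected']!r}, got {d['actual']!r}")
```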

Production Systems Provisioning

  1. Before provisioning any systems, a request must be created and approved in the Jira INFRA DEPLOY (DPL) project.

  2. The security team must approve the provisioning request before any new system can be provisioned, unless a pre-approved automation process is followed.

  3. Once provisioning has been approved, the implementer must configure the new system according to the standard baseline chosen for the system's role.

  4. If the system will be used to store sensitive information, the implementer must ensure the volume containing this sensitive data is encrypted.

  5. Sensitive data in motion must always be encrypted.

  6. A security analysis is conducted once the system has been provisioned. This can be achieved either via automated configuration/vulnerability scans or manual inspection by the security team. Verifications include, but are not limited to, the following (a minimal encryption-check sketch appears after this list):

    • Removal of default users used during provisioning.
    • Network configuration for the system.
    • Data volume encryption settings.
    • Intrusion detection and virus scanning software installed.
  7. The new system is fully promoted into production upon successful verification against corresponding Tuna standards and change request approvals.
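
As an illustration of the encryption verification in steps 4 and 6, here is a minimal sketch using boto3 that flags unencrypted EBS volumes attached to a newly provisioned instance. The instance ID shown is a placeholder.

```python
# Minimal sketch of the post-provisioning encryption check (steps 4 and 6).
# Flags any EBS volume attached to the new instance that is not encrypted.
import boto3

def unencrypted_volumes(instance_id, region="us-east-1"):
    ec2 = boto3.client("ec2", region_name=region)
    resp = ec2.describe_volumes(
        Filters=[{"Name": "attachment.instance-id", "Values": [instance_id]}]
    )
    return [v["VolumeId"] for v in resp["Volumes"] if not v["Encrypted"]]

if __name__ == "__main__":
    bad = unencrypted_volumes("i-0123456789abcdef0")  # placeholder instance ID
    if bad:
        print("FAIL: unencrypted volumes:", ", ".join(bad))
    else:
        print("PASS: all attached volumes are encrypted")
```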

User Endpoint Security Controls and Configuration

  1. Employee laptops, including Windows, Mac, and Linux systems, are configured either

    • Manually by IT or the device owner; or
    • Automatically using a configuration management tool or equivalent scripts.
  2. The following security controls are applied at a minimum:

    • Disk encryption
    • Unique user accounts and strong passwords
    • Approved NTP servers
    • Approved security agents
    • Screen locking after 2 minutes of inactivity
    • Auto-update of security patches
  3. The security configurations on all end-user systems are inspected by Security through either a manual periodic review or an automated compliance auditing tool (a minimal sketch of one such check follows).
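
As one example of such an inspection, here is a minimal sketch of a disk-encryption check for macOS and Linux endpoints. It uses standard OS tools rather than any particular compliance agent; the Windows/BitLocker branch is omitted for brevity.

```python
# Minimal sketch of an endpoint disk-encryption check (one of the minimum
# controls in item 2). Covers macOS (FileVault) and Linux (dm-crypt/LUKS).
import platform
import subprocess

def disk_encrypted():
    system = platform.system()
    if system == "Darwin":
        # fdesetup reports "FileVault is On." when FileVault is enabled.
        out = subprocess.run(["fdesetup", "status"], capture_output=True, text=True)
        return "FileVault is On" in out.stdout
    if system == "Linux":
        # A "crypt" device type indicates a dm-crypt/LUKS mapping.
        out = subprocess.run(["lsblk", "-o", "TYPE", "-n"], capture_output=True, text=True)
        return "crypt" in out.stdout
    return False  # Windows (BitLocker) check omitted in this sketch

if __name__ == "__main__":
    print("PASS" if disk_encrypted() else "FAIL: disk encryption not detected")
```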

Server Hardening Guidelines and Processes

  1. Linux System Hardening: Linux systems have their baseline security configuration applied via automation tools. These tools cover:

    • Ensuring that the machine is up-to-date with security patches and is configured to apply patches in accordance with our policies.
    • Stopping and disabling any unnecessary OS services.
    • Applying applicable DISA STIGs to the OS and applications.
    • Configuring 15-minute session inactivity timeouts for SSH sessions (see the verification sketch after this list).
    • Installing and configuring the virus scanner.
    • Installing and configuring the NTP daemon, including ensuring that modifying system time cannot be performed by unprivileged users.
    • Configuring encryption for disk volumes on providers that do not have native support for encrypted data volumes, including ensuring that encryption keys are protected from unauthorized access.
    • Configuring authentication to the centralized Directory Services servers.
    • Configuring audit logging as described in the Auditing Policy section.
  2. Windows System Hardening: Windows systems have their baseline security configuration applied via a combination of Group Policy settings and automation scripts. These baseline settings cover:

    • Joining the Windows Domain Controller and applying the Active Directory Group Policy configuration (for AD-managed systems only).
    • Ensuring that the machine is up-to-date with security patches and is configured to apply patches in accordance with our policies.
    • Applying applicable DISA STIGs to the OS and applications.
    • Stopping and disabling any unnecessary OS services.
    • Configuring session inactivity timeouts.
    • Installing and configuring security protection agents, such as the anti-virus scanner.
    • Configuring transport encryption according to the requirements described in the Mobile Device Security and Disposable Media Management section.
    • Configuring the system clock to point to approved NTP servers and ensuring that modifying system time cannot be performed by unprivileged users.
    • Configuring audit logging as described in the Auditing Policy section.
  3. Any additional configuration changes applied to hardened Windows systems must be clearly documented by the implementer and reviewed by the Security team.
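
To illustrate how the hardening baseline can be verified, here is a minimal sketch checking the 15-minute SSH inactivity timeout from item 1. It assumes the timeout is implemented with sshd's ClientAliveInterval and ClientAliveCountMax options; a different mechanism would need a different check.

```python
# Minimal sketch verifying the 15-minute SSH inactivity timeout (item 1).
# Assumes the timeout is enforced via sshd's ClientAliveInterval /
# ClientAliveCountMax options; Match blocks are ignored in this sketch.
def check_ssh_timeout(path="/etc/ssh/sshd_config", max_seconds=900):
    settings = {}
    with open(path) as f:
        for line in f:
            line = line.strip()
            if not line or line.startswith("#"):
                continue
            parts = line.split(None, 1)
            if len(parts) == 2:
                settings[parts[0].lower()] = parts[1]
    # sshd defaults: ClientAliveInterval 0 (disabled), ClientAliveCountMax 3.
    interval = int(settings.get("clientaliveinterval", 0))
    count = int(settings.get("clientalivecountmax", 3))
    # Effective idle timeout is interval * max(count, 1); 0 disables the check.
    return 0 < interval * max(count, 1) <= max_seconds

if __name__ == "__main__":
    print("PASS" if check_ssh_timeout() else "FAIL: SSH inactivity timeout not enforced")
```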

Configuration and Provisioning of Management Systems

  1. Provisioning management systems such as configuration management servers, remote access infrastructure, directory services, or monitoring systems follows the same procedure as provisioning a production system.

  2. Critical infrastructure roles applied to new systems must be clearly documented by the implementer in the change request.

Configuration and Management of Network Controls

All network devices and controls on a sensitive network are configured such that:

  • Vendor-provided default configurations are modified securely, including:

    • default encryption keys,
    • default SNMP community strings, if applicable,
    • default passwords/passphrases, and
    • other security-related vendor defaults, if applicable.
  • Encryption keys and passwords are changed anytime anyone with knowledge of the keys or passwords leaves the company or changes positions.

  • Traffic filtering (e.g. firewall rules) and inspection (e.g. Network IDS/IPS or AWS VPC flow logs) are enabled.

  • An up-to-date network diagram is maintained.

In AWS, network controls are implemented using Virtual Private Clouds (VPCs) and Security Groups. The configurations are managed as code and stored in approved repos. All changes to the configuration follow the defined code review, change management and production deployment approval process.
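
As an illustration of auditing these network controls as code, here is a minimal boto3 sketch that flags Security Groups allowing ingress from 0.0.0.0/0 for manual review:

```python
# Minimal sketch auditing AWS Security Groups for overly permissive ingress
# (0.0.0.0/0), one example of the traffic-filtering review described above.
import boto3

def open_ingress_groups(region="us-east-1"):
    ec2 = boto3.client("ec2", region_name=region)
    findings = []
    paginator = ec2.get_paginator("describe_security_groups")
    for page in paginator.paginate():
        for sg in page["SecurityGroups"]:
            for rule in sg["IpPermissions"]:
                for ip_range in rule.get("IpRanges", []):
                    if ip_range.get("CidrIp") == "0.0.0.0/0":
                        # FromPort is absent for all-traffic rules.
                        findings.append((sg["GroupId"], rule.get("FromPort", "all")))
    return findings

if __name__ == "__main__":
    for group_id, port in open_ingress_groups():
        print(f"REVIEW: {group_id} allows 0.0.0.0/0 ingress on port {port}")
```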

Provisioning AWS Accounts

AWS Account Structure / Organization

Tuna maintains a single Organization in AWS, rooted in a top-level AWS account (master). Connected sub-accounts each host separate workloads and resources in their own sandboxed environments. The master account itself handles aggregated billing for all connected sub-accounts but does not host any workload, service, or resource, with the exception of the DNS records for the Tuna root domain, which are managed using the AWS Route53 service. DNS records for subdomains are maintained in the corresponding sub-accounts.

Access to each account is funneled through our designated SSO provider, which establishes a trust relationship with a set of predefined roles in the master account. Once authenticated, a user then leverages the AWS IAM AssumeRole capability to switch into a sub-account to access its services and resources.
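
For illustration, the role switch might look like the following boto3 sketch. The role name and account ID are placeholders, not Tuna's actual values.

```python
# Minimal sketch of the SSO-to-sub-account flow: after authenticating to the
# master account, assume a predefined role in a sub-account.
import boto3

def sub_account_session(account_id, role_name="OrganizationAccountAccessRole"):
    # Role name and account ID are placeholder assumptions.
    sts = boto3.client("sts")
    resp = sts.assume_role(
        RoleArn=f"arn:aws:iam::{account_id}:role/{role_name}",
        RoleSessionName="tuna-cross-account",
    )
    creds = resp["Credentials"]
    return boto3.Session(
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
    )

if __name__ == "__main__":
    session = sub_account_session("111111111111")  # placeholder sub-account ID
    print(session.client("sts").get_caller_identity()["Arn"])
```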

The account and network structure looks like the following:

SSO/IdP ── Tuna-master
    ├── (billing and root DNS records only)
    ├── Tuna-dev
    │   └── VPC
    │       ├── Subnets
    │       ├── Security-Groups
    │       └── EC2 instances
    ├── Tuna-qa
    │   └── VPC
    │       ├── Subnets
    │       ├── Security-Groups
    │       └── EC2 instances
    ├── Tuna-sandbox
    │   └── VPC
    │       ├── Subnets
    │       ├── Security-Groups
    │       └── EC2 instances
    ├── Tuna-prod
    │   └── VPC
    │       ├── Subnets
    │       ├── Security-Groups
    │       └── EC2 instances
    └── ...

Infrastructure-as-Code

Tuna AWS environments and infrastructure are managed as code. Provisioning is accomplished using a set of automation scripts and Terraform code. Each new environment is created as a sub-account connected to Tuna-master. The creation and provisioning of a new account follows the instructions documented in the Bootstrap a new AWS environment page of the development wiki.
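
As an illustration of one step of that bootstrap, here is a minimal boto3 sketch that requests creation of a new sub-account under the Organization. The email address convention is a placeholder assumption; the actual procedure is defined in the wiki page referenced above.

```python
# Minimal sketch of programmatic sub-account creation under the Organization.
# Run from the master account; account creation is asynchronous in AWS.
import boto3

def create_sub_account(name):
    org = boto3.client("organizations")
    resp = org.create_account(
        Email=f"aws+{name}@tuna.example",  # placeholder address convention
        AccountName=name,
    )
    # Poll this request ID (organizations.describe_create_account_status)
    # to learn when the account is ready.
    return resp["CreateAccountStatus"]["Id"]

if __name__ == "__main__":
    request_id = create_sub_account("Tuna-staging")  # placeholder account name
    print("Account creation requested:", request_id)
```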

Automated change management for deploys to AWS

The Tuna Continuous Delivery Pipeline automates creation and validation of change requests. This is done in a 3-step process:

  1. Create/Validate Change Request Ticket

    GitLab CI is used for continuous delivery (build and deploy), and we employ GitLab CI-Jira automation such that:

    • Whenever deployment to a controlled environment (e.g. production accounts and the infrastructure account) is requested, the GitLab CI job will check for an approved DPL or CORE ticket, or, if none is found and an emergency is detected, create a new MAYDAY ticket.
    • The automation code will attempt to automatically populate the required data for the MAYDAY ticket (see the required ticket fields in the Production Deploy / Code Promotion Processes section).
    • If the data cannot be automatically populated, GitLab CI may pause the job and prompt the user for manual input.
    • The job will be paused until the request is approved or canceled (rejected). Before continuing to deployment, GitLab CI will validate the change request's build job identifier, build number, and source code branch.
    • After passing all checks and tests, the CI job will generate the artifacts and commit them to the deploy GitLab CI instance.
  2. Store Production Artifacts and Obtain Approval

    A separate GitLab CI instance is implemented to provide additional security and to assist/automate the deploy approval process:

    • This GitLab CI instance holds the production artifacts.

    • The instance is responsible for the last step of the deploy procedure:

      • Upon a commit, the CD job gets triggered and generates a deploy job

      • Satellite systems may be automatically deployed.

      • Systems that live inside the CDE always require manual input to get deployed to production.

      • At the CDE, a final manual check is required, after which the deploy is executed by the job itself.

        !!! attention

        The following practices will fail this validation and result in
        manual processing, and should therefore be avoided:
        - squashing commits on PR merges
        - commits after PR approval without re-approval
    • If needed, details of the analysis are posted to the DPL or CORE Jira ticket.

    • Random inspections of automatically approved tickets are performed by the security team monthly to ensure the automation functions properly.

    !!! important

    1. Note that the above flow does not catch weaknesses in design, and
    therefore does not replace the need for threat modeling and security
    review in the design phase.
    2. Additional requirements may be added later as the process continues
    to mature.
  3. Detect Risky Changes, Deploy and Close

    The GitLab CI job proceeds only with an approved and validated DPL or CORE ticket.

    • During production infra deploys, a terraform plan is always performed first to detect risky changes (a minimal detection sketch follows this list).

    • Examples of security-related or risky changes include:

      • Change to "policy" attribute of resource (aws_s3_bucket.policy, aws_kms_key.policy)
      • Change to IAM policy, role, user or group
      • Attach/detach policy
      • Change/delete to security group
      • Anything is deleted (in prod, deletes should be unusual so they should be manually reviewed)
    • If risky changes are detected, the deploy is paused and the DPL ticket is updated to require manual review before continuing.

    • Once a deploy is completed, the DPL ticket is automatically resolved and closed.
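
To illustrate the risky-change detection in step 3, here is a minimal sketch that parses the JSON form of a Terraform plan (produced by `terraform show -json plan.out`) and flags the categories listed above. The risky-type prefixes are illustrative and not exhaustive.

```python
# Minimal sketch of risky-change detection over a Terraform plan.
# Input: the JSON produced by `terraform show -json plan.out`.
import json
import sys

RISKY_TYPE_PREFIXES = ("aws_iam_", "aws_security_group")

def risky_changes(plan_json):
    plan = json.loads(plan_json)
    findings = []
    for rc in plan.get("resource_changes", []):
        actions = rc["change"]["actions"]
        if actions == ["no-op"]:
            continue  # nothing actually changes for this resource
        if "delete" in actions:
            # Catches plain deletes and delete-and-recreate replacements.
            findings.append((rc["address"], "deletion"))
        elif rc["type"].startswith(RISKY_TYPE_PREFIXES):
            findings.append((rc["address"], "IAM/security-group change"))
    return findings

if __name__ == "__main__":
    for address, reason in risky_changes(sys.stdin.read()):
        print(f"RISKY: {address} ({reason}): pausing for manual review")
```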

Patch Management Procedures

Local Systems

Tuna uses automated tooling to ensure systems are up-to-date with the latest security patches.

  • On local Linux systems, the unattended-upgrades tool is used to apply security patches in phases (a minimal verification sketch follows this list).

    • High Risk security patches are automatically applied as they are released
    • Regular application patches are applied as needed through monthly system patching.
    • A snapshot of the system is taken before an update is applied.
    • Once the update is deemed stable, the snapshot is removed.
    • If the update fails, the system is rolled back to the snapshot.
    • If the staging systems function properly after the two-week testing period, the security team will promote that snapshot into the mirror used by all production systems. These patches will be applied to all production systems during the next nightly patch run.
    • The patching process may be expedited by the Security team as needed.

  • On Windows systems, the baseline Group Policy setting configures Windows Update to implement the patching policy.
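
As an illustration of verifying this configuration on a Debian/Ubuntu system, here is a minimal sketch that reads the standard APT periodic settings file:

```python
# Minimal sketch verifying that unattended-upgrades is enabled on a
# Debian/Ubuntu system via the standard APT periodic configuration file.
def unattended_upgrades_enabled(path="/etc/apt/apt.conf.d/20auto-upgrades"):
    wanted = {
        "APT::Periodic::Update-Package-Lists": "1",
        "APT::Periodic::Unattended-Upgrade": "1",
    }
    found = {}
    try:
        with open(path) as f:
            for line in f:
                for key in wanted:
                    if line.strip().startswith(key):
                        found[key] = line.split('"')[1]  # value between quotes
    except FileNotFoundError:
        return False
    return found == wanted

if __name__ == "__main__":
    print("PASS" if unattended_upgrades_enabled() else "FAIL: unattended-upgrades not enabled")
```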

Cloud Resources

Tuna follows a "cattle-vs-pets" methodology to keep the resources in the cloud environments immutable and up-to-date with security patches.

  • The Engineering team builds security-approved AMIs from the latest AWS-optimized Amazon Machine Images (AMIs) to include the required security agents.

  • The security agents installed on the security-approved AMIs scan for and report new vulnerabilities every time the image is rebuilt.

  • Other security agents installed on the security-approved AMIs scan for and report changes on the servers' filesystem daily.

  • The custom AMIs are automatically rebuilt from the latest AWS AMIs weekly to include the latest security patches.
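
To illustrate how the weekly rebuild cadence can be verified, here is a minimal boto3 sketch that flags account-owned AMIs older than seven days. The ownership filter and threshold follow the policy above; any tag-based filtering is left out for brevity.

```python
# Minimal sketch verifying that custom AMIs are no older than the weekly
# rebuild cadence described above.
from datetime import datetime, timedelta, timezone
import boto3

def stale_amis(max_age_days=7, region="us-east-1"):
    ec2 = boto3.client("ec2", region_name=region)
    cutoff = datetime.now(timezone.utc) - timedelta(days=max_age_days)
    stale = []
    for image in ec2.describe_images(Owners=["self"])["Images"]:
        # CreationDate looks like "2021-03-05T22:23:35.000Z".
        created = datetime.fromisoformat(image["CreationDate"].replace("Z", "+00:00"))
        if created < cutoff:
            stale.append((image["ImageId"], image["CreationDate"]))
    return stale

if __name__ == "__main__":
    for image_id, created in stale_amis():
        print(f"STALE: {image_id} created {created}: rebuild required")
```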

User Endpoints

Tuna requires auto-update for security patches to be enabled for all user endpoints, including laptops and workstations.

  • The auto-update configuration and update status on all end-user systems are inspected by Security through either manual periodic audits or automated compliance auditing agents installed on the endpoints.

Production Deploy / Code Promotion Processes

In order to promote changes into Production, a valid and approved Change Request (CR) is required. It can be created in the Change Management System/Portal, which implements the Tuna Change Management workflow using the INFRA DEPLOY (DPL) Jira project to manage changes and approvals.

  • At least two approvals are required for each DPL ticket. By default, the approvers are

    • Security Lead and
    • Engineering Lead.
  • Additional approver(s) may be added depending on the impacted component(s). For example,

    • the IT Manager is added as an approver for IT/network changes; and
    • the DevOps Lead is added as an approver for changes to aws-Tuna-infra account in AWS.
  • Each DPL ticket requires the following information at a minimum:

    • Summary of the change
    • Component(s) impacted
    • Justification
    • Rollback plan
  • Additional details are required for a code deploy (a minimal field-validation sketch follows this list), including:

    • Build job name
    • Build ID and/or number
    • Deploy action (e.g. plan, apply)
    • Deploy branch (e.g. master)
    • Target environment (e.g. aws-Tuna-infra, aws-Tuna-prod-us, datacenter-hq)
    • Links to pull requests and/or Jira issues
    • Security scan status and results
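
To illustrate the ticket validation, here is a minimal sketch that checks a change request for the required fields before a pipeline proceeds. The field names are illustrative; the actual Jira custom-field mapping will differ.

```python
# Minimal sketch validating that a code-deploy change request carries the
# required fields listed above. Field names are illustrative assumptions.
REQUIRED_FIELDS = [
    "summary", "components", "justification", "rollback_plan",
    "build_job", "build_id", "deploy_action", "deploy_branch",
    "target_environment", "links", "security_scan_status",
]

def missing_fields(ticket):
    return [f for f in REQUIRED_FIELDS if not ticket.get(f)]

if __name__ == "__main__":
    ticket = {"summary": "Deploy v1.2.3", "deploy_action": "apply"}  # example
    gaps = missing_fields(ticket)
    if gaps:
        print("BLOCKED: missing fields:", ", ".join(gaps))
    else:
        print("OK: change request complete")
```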

Emergency Change

In the event of an emergency, the person or team on call is notified. This may include a combination of Development, IT, and Security.

If an emergency change must be made, such as patching a zero-day security vulnerability or recovering from system downtime, and the standard change management process cannot be followed due to time constraints, personnel availability, or other unforeseen issues, the change can be made as follows:

  • Notification: The Engineering Lead, Security Lead, and/or IT Lead must be notified by email, Slack, or phone call prior to the change. Depending on the nature of the emergency, the leads may choose to inform members of the executive team.

  • Access and Execution: Manual access to the production system or manual deployment of software, using one of the following access mechanisms as defined in the Access Control policy and procedures:

    1. Support/Troubleshooting access
    2. Root account or root user access
  • Post-emergency Documentation: A MAYDAY ticket should be created within 24 hours following the emergency change. The ticket should contain all details related to the change, including:

    • Reason for emergency change
    • Method of emergency access used
    • Steps and details of the change that was made
    • Sign-off/approvals must be obtained per the type of change as defined by the standard CM process
  • Prevention and Improvement: The change must be fully reviewed by Security and Engineering together with the person/team responsible for the change. Any process improvement and/or preventative measures should be documented and an implementation plan should be developed.