Designing toward a Zero Trust strategy

Earlier this year, DISA released the Zero Trust Reference Architecture for the DoD. Per President Biden's Executive Order on Improving the Nation's Cybersecurity, also released this year, "the Federal Government must advance toward Zero Trust Architecture." With this momentum within the Federal Government and DoD to adopt zero trust, several of our clients have asked how zero trust might impact their product portfolios and future certification efforts on the DoDIN Approved Products List (APL).

Zero trust is a drastic shift in strategic network defense: previous assumptions of inherent security based on traditional physical and network security measures, such as secured datacenter installations and trusted internal networks behind firewalls, are no longer considered adequate. Devices and applications operating on internal networks, and the internal organization personnel who operate and maintain them, have long been a soft target for intrusion given their inherent trust and often widely granted privileged access. Zero trust looks to address the issues associated with flawed implied-trust models leveraged broadly throughout any IT enterprise, datacenter, or campus network. While many of the concepts supporting zero trust are not new to security practitioners, they represent a definitive change to standard IT management practices on traditionally "trusted" networks. Zero trust guiding principles serve as the foundation of an overall strategy to rethink how access to IT resources is managed:

  • Never trust, always verify – All users, devices, applications, workloads, and data flows should be treated as untrusted. Trust should never be assumed; instead, require authentication and explicit authorization for each of these categories as they operate. Least privilege, a concept that is not new to the U.S. Government, should be enforced through dynamic security policies that take into account not only identity and Role-Based Access Control (RBAC), but also trend and expected behavioral analytics as part of rights management.
  • Assume breach – Treat the environment as if it has already been compromised, and implement security practices that limit lateral attack vectors. Implement default deny policies on access control lists, both within the network and on endpoints. Perform logging and inspection of all activities within the architecture, to include user and device actions, data flow both across the network and within a system, and all use of resources or requests to access resources – and implement methods to continually monitor these activities for suspicious or unexpected behaviors.
  • Verify explicitly – Perform access management for all resources using secure, consistent methods, and use multiple decision points to determine both the context of and the need for each requested access. A minimal sketch of such a policy decision point follows this list.
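
To make the "verify explicitly" principle concrete, here is a minimal sketch in Go of a policy decision point that denies by default and permits access only when strong authentication, healthy device posture, an acceptable behavioral risk score, and an explicit role-to-resource grant all check out. The structures, role store, and risk threshold are illustrative assumptions, not anything prescribed by the DoD reference architecture.

```go
package main

import (
	"errors"
	"fmt"
)

// AccessRequest captures the context a policy decision point evaluates.
// Field names are illustrative, not drawn from any specific product.
type AccessRequest struct {
	UserID        string
	Role          string
	Resource      string
	Action        string
	MFAVerified   bool
	DeviceHealthy bool
	RiskScore     float64 // 0.0 (expected behavior) to 1.0 (highly anomalous)
}

// rolePermissions stands in for an externally managed RBAC store.
var rolePermissions = map[string]map[string][]string{
	"db-admin": {"orders-db": {"read", "backup"}},
	"auditor":  {"orders-db": {"read"}},
}

// Authorize denies by default and permits only when every check passes:
// strong authentication, healthy device posture, an acceptable behavioral
// risk score, and an explicit role-to-resource grant for the action.
func Authorize(req AccessRequest) error {
	if !req.MFAVerified {
		return errors.New("deny: multi-factor authentication not verified")
	}
	if !req.DeviceHealthy {
		return errors.New("deny: device posture check failed")
	}
	if req.RiskScore > 0.7 {
		return errors.New("deny: behavioral risk score above threshold")
	}
	for _, allowed := range rolePermissions[req.Role][req.Resource] {
		if allowed == req.Action {
			return nil // explicit permit
		}
	}
	return errors.New("deny: no explicit grant for this role, resource, and action")
}

func main() {
	err := Authorize(AccessRequest{
		UserID: "jdoe", Role: "auditor", Resource: "orders-db", Action: "read",
		MFAVerified: true, DeviceHealthy: true, RiskScore: 0.1,
	})
	fmt.Println("decision error (nil means permit):", err)
}
```

The essential design choice is that every code path returns a deny unless an explicit permit is found.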

Here are some practical examples of how zero trust thinking differs from traditional thinking, and what developers can do to align their products.

Traditional thinking: Sysadmins all have security clearance and have gone through a background check. They are trusted to have full, unfettered access to my network systems.

Zero trust thinking: Sysadmins could present an insider threat, either inadvertently or with malicious intent, so even though they have been vetted for risk factors to national security, they should not be implicitly trusted to operate freely within the network. Instead, their access should always be tied to strong authentication, with explicit authorization for the least amount of access required to perform their duties. Each attempt to access resources should be validated uniquely and based on the context of the access request. Further, their access should be continually monitored and evaluated for unexpected behaviors that may indicate unauthorized activities.

What developers can do to align products:
  • Design systems to be used only with strong multi-factor authentication, tied to external authentication stores.
  • Develop granular, explicit, and separated role-based access controls, and design interaction around a least privilege model with separation between consumers, administrators, and security management.
  • Require each request for resource access to be validated uniquely, and based on the context of the request. Align access requests to context and known workflow attributes to prevent undesirable use of credentials within the system.
  • Implement granular audit logging of every activity or request for resource access (a minimal sketch of one such approach follows this list).
  • Do not allow accounts that hold the “keys to the kingdom”.
  • Do not forget to address command-line interfaces (CLIs), application programming interfaces (APIs), and device-to-device interactions with the same due diligence and security rigor as client-facing user interfaces.
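
As an illustration of the audit logging bullet above, here is a minimal sketch in Go of HTTP middleware that writes a structured audit record for every request, including the caller's identity attributes and the resulting status code. The header names, port, and endpoint are hypothetical placeholders for whatever the product's authentication layer and API actually provide.

```go
package main

import (
	"log"
	"log/slog"
	"net/http"
	"os"
)

// statusRecorder captures the response status code so the audit record
// reflects whether the request was ultimately permitted or denied.
type statusRecorder struct {
	http.ResponseWriter
	status int
}

func (s *statusRecorder) WriteHeader(code int) {
	s.status = code
	s.ResponseWriter.WriteHeader(code)
}

// auditMiddleware emits a structured (JSON) audit record for every request.
// The X-User-Id and X-User-Role headers are placeholders for identity
// attributes supplied by the real authentication layer.
func auditMiddleware(next http.Handler) http.Handler {
	logger := slog.New(slog.NewJSONHandler(os.Stdout, nil))
	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		rec := &statusRecorder{ResponseWriter: w, status: http.StatusOK}
		next.ServeHTTP(rec, r)
		logger.Info("access",
			"user", r.Header.Get("X-User-Id"),
			"role", r.Header.Get("X-User-Role"),
			"method", r.Method,
			"path", r.URL.Path,
			"remote", r.RemoteAddr,
			"status", rec.status,
		)
	})
}

func main() {
	mux := http.NewServeMux()
	mux.HandleFunc("/admin/config", func(w http.ResponseWriter, r *http.Request) {
		// Default deny until an explicit authorization decision permits access.
		http.Error(w, "forbidden", http.StatusForbidden)
	})
	log.Fatal(http.ListenAndServe(":8080", auditMiddleware(mux)))
}
```
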
Traditional thinking: Systems behind my firewall are on a trusted DoD network, so I don't need to be concerned with my devices communicating between one another on the internal LAN.

Zero trust thinking: Even internal networks are subject to compromise and should not be considered implicitly trusted. Instead, communications across any network plane, regardless of logical location within a security boundary, should be protected with encryption and strong multi-factor authentication wherever technically feasible. Access controls should be enforced even for internal communication, providing a least privilege access model through specific endpoint, port, and protocol based network access controls. Internal networks should not be left as a soft target for easy access by an insider threat or a breached internal device.

What developers can do to align products:
  • Treat each endpoint as its own bastion, with default deny ACLs and explicit inbound and outbound permit rules.
  • Implement encryption and strong authentication for device-to-device communications using DoD PKI with mutual TLS (mTLS) authentication (a minimal sketch follows this list).
  • Log and continually monitor both internal and external network traffic for suspicious or abnormal activities.
  • Limit the use of broad subnet/supernet based ACLs, in favor of specific endpoint ACLs where possible.
  • Apply the same security approach to both IPv4 and IPv6 network segments.
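
The sketch below, in Go, illustrates the mutual TLS bullet above: a service that refuses any connection whose client certificate does not chain to a trusted CA pool, so device-to-device communication is both encrypted and strongly authenticated. The file paths are placeholders; in a DoD deployment the CA bundle would be the DoD PKI chain and the server certificate would be issued by an approved CA.

```go
package main

import (
	"crypto/tls"
	"crypto/x509"
	"log"
	"net/http"
	"os"
)

func main() {
	// Load the trusted CA bundle; "dod-ca-bundle.pem" is a placeholder path.
	caPEM, err := os.ReadFile("dod-ca-bundle.pem")
	if err != nil {
		log.Fatal(err)
	}
	caPool := x509.NewCertPool()
	if !caPool.AppendCertsFromPEM(caPEM) {
		log.Fatal("failed to parse CA bundle")
	}

	srv := &http.Server{
		Addr: ":8443",
		TLSConfig: &tls.Config{
			MinVersion: tls.VersionTLS12,
			ClientCAs:  caPool,
			// Reject any peer that cannot present a certificate chaining to
			// the trusted pool: connections are denied by default.
			ClientAuth: tls.RequireAndVerifyClientCert,
		},
		Handler: http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
			// The verified peer identity is available for per-request
			// authorization decisions and audit logging.
			peer := r.TLS.PeerCertificates[0].Subject.CommonName
			log.Printf("mTLS peer=%q %s %s", peer, r.Method, r.URL.Path)
			w.Write([]byte("ok\n"))
		}),
	}
	// Server certificate and key paths are placeholders.
	log.Fatal(srv.ListenAndServeTLS("server.crt", "server.key"))
}
```
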
Traditional thinking: The application is installed on a STIG-compliant server within my secure datacenter, on an internal network, and doesn't have any active CVEs showing up in Nessus-based ACAS scans. It's safe to assume the application is secure and not a threat.

Zero trust thinking: Both physical and network security are subject to compromise. In addition, IP vulnerability scanners such as Nessus-based ACAS can only inspect for known CVEs and vulnerabilities. There is a persistent risk of unknown vulnerabilities in applications, introduced either unintentionally by developers or by malicious actors who have compromised the vendor's supply chain. As such, it should be assumed that even with no active indicators of open vulnerabilities, software may have unknown vulnerabilities or compromises that have yet to be discovered by security researchers but are already known to adversaries. Applications should always be configured to operate with a least privilege model, using a range of both discretionary and mandatory access controls to enforce access restrictions. Applications that interoperate with other components of the system architecture should always use encryption and strong multi-factor authentication wherever technically feasible, and their associated system and service accounts should also be tied to a least privilege model. The application should be continually monitored and evaluated for unexpected behaviors that may indicate unauthorized activities.

What developers can do to align products:
  • Implement security measures, auditing, multi-factor authentication, access controls and monitoring across the entire supply chain.
  • Perform regular code scans on organization-maintained software.
  • Ensure software configuration management is performed with changes to code being thoroughly reviewed by independent validators outside of the developer organizational structure, before production release.
  • Cryptographically sign application components using a reputable CA-issued certificate that supports revocation checking. Limit access to, and require non-repudiable authentication for, the private keys used to sign software, and audit and monitor access to those keys continuously (a minimal signature-verification sketch follows this list).
  • Actively maintain and release security patches for products in a timely fashion.
  • Proactively notify the CVE program when vulnerabilities are discovered.
  • Ensure software is developed to operate with a least privilege model, enforced with use of both discretionary and mandatory access controls.
  • Implement first-hop security measures for client and datacenter systems, such as 802.1X, IPv6 Router Advertisement Guard (RA Guard), Dynamic Host Configuration Protocol (DHCP) Snooping, Unicast Reverse Path Forwarding (uRPF), and IP Source Guard.
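
To illustrate the code-signing bullet above, the following Go sketch verifies a detached signature over a release artifact before it is installed. It uses a raw Ed25519 key purely to keep the example self-contained; as noted in the bullet, a production pipeline would typically rely on CA-issued code-signing certificates with revocation checking, and the file names here are hypothetical.

```go
package main

import (
	"crypto/ed25519"
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"os"
)

// verifyArtifact checks a release artifact against a detached Ed25519
// signature before it is installed or loaded.
func verifyArtifact(artifactPath, sigPath string, publisherKey ed25519.PublicKey) error {
	artifact, err := os.ReadFile(artifactPath)
	if err != nil {
		return err
	}
	sig, err := os.ReadFile(sigPath)
	if err != nil {
		return err
	}
	// The signature is computed over a SHA-256 digest of the artifact.
	digest := sha256.Sum256(artifact)
	if !ed25519.Verify(publisherKey, digest[:], sig) {
		return fmt.Errorf("signature verification failed for %s", artifactPath)
	}
	fmt.Printf("verified %s (sha256 %s)\n", artifactPath, hex.EncodeToString(digest[:]))
	return nil
}

func main() {
	// Placeholder key and file names for illustration only.
	pub, _, _ := ed25519.GenerateKey(nil)
	if err := verifyArtifact("component.tar.gz", "component.tar.gz.sig", pub); err != nil {
		fmt.Println("refusing to install:", err)
		os.Exit(1)
	}
}
```
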
Traditional thinking: The service account sometimes needs privileged access to a resource, so that privileged access should always be enabled for the service account.

Zero trust thinking: Instead, identify the specific workflows requiring resource access, then tailor dynamic security policies that combine multiple attributes into a confidence-driven policy for access management. Granting access on a case-by-case, conditional basis provides access only when explicitly required and prevents arbitrary use of resources that falls outside the intended workflow. Using a confidence metric also provides an additional way to detect and prevent abnormal attempts to access resources.

What developers can do to align products:
  • Document detailed product workflows, and implement dynamic access controls based on the expected workflow (a minimal sketch of workflow-scoped service account access follows this list).
  • Ensure service accounts are granted only the minimum amount of access to system resources which is required to perform the function.
  • Implement auditing for all service account actions, and continuously monitor for suspicious or abnormal activities.
  • Utilize encryption and strong authentication for any service account that is utilized over the network.
  • Prevent service accounts from being used interactively by normal user accounts.
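
Below is a minimal Go sketch of workflow-scoped service account access, as referenced in the first bullet above: a broker issues short-lived, narrowly scoped tokens only for workflows documented in policy, rather than leaving standing privileged access on the account. The workflow names, resources, and time-to-live values are illustrative assumptions.

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// WorkflowGrant describes a documented workflow for which a service account
// may receive temporary, scoped access. Names are illustrative.
type WorkflowGrant struct {
	Workflow string
	Resource string
	Actions  []string
	TTL      time.Duration
}

// approvedWorkflows stands in for a policy store derived from the
// documented product workflow.
var approvedWorkflows = map[string]WorkflowGrant{
	"nightly-backup": {
		Workflow: "nightly-backup",
		Resource: "orders-db",
		Actions:  []string{"read", "snapshot"},
		TTL:      15 * time.Minute,
	},
}

// ScopedToken is a short-lived credential bound to one workflow.
type ScopedToken struct {
	ServiceAccount string
	Grant          WorkflowGrant
	ExpiresAt      time.Time
}

// RequestAccess issues a scoped token only for an approved workflow and only
// for the duration that workflow needs; privileged access is never standing.
// Further conditions (e.g., only during the backup window) could be added.
func RequestAccess(serviceAccount, workflow string) (*ScopedToken, error) {
	grant, ok := approvedWorkflows[workflow]
	if !ok {
		return nil, errors.New("deny: workflow not in approved policy")
	}
	tok := &ScopedToken{
		ServiceAccount: serviceAccount,
		Grant:          grant,
		ExpiresAt:      time.Now().Add(grant.TTL),
	}
	// Every issuance should be written to the audit log and fed to monitoring.
	fmt.Printf("audit: issued %s token to %s until %s\n",
		workflow, serviceAccount, tok.ExpiresAt.Format(time.RFC3339))
	return tok, nil
}

func main() {
	if _, err := RequestAccess("svc-backup", "ad-hoc-export"); err != nil {
		fmt.Println(err) // deny: workflow not in approved policy
	}
	tok, _ := RequestAccess("svc-backup", "nightly-backup")
	fmt.Println("granted actions:", tok.Grant.Actions)
}
```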

Are you a product vendor or DoD agency confused about where to get started? Get in touch with us today; we can help!
