diff --git a/docs/categorize.md b/docs/categorize.md index fa071be..26d7db6 100644 --- a/docs/categorize.md +++ b/docs/categorize.md @@ -3,6 +3,7 @@ The Categorize step remains largely the same, but is the first opportunity to show that RMF tasks (C-1, C-2, & C-3) can be done more quickly with a cross-functional team (people aspect). As mentioned in previous sections, it’s essential to have technical assessors in place, along with highly competent infrastructure, platform, and pilot application teams. Security categorization is the most important step in the Risk Management Framework (RMF) since it ties the information system’s security activities to the organization’s mission/business priorities. [FIPS 199](https://csrc.nist.gov/pubs/fips/199/final), *Standards for Security Categorization of Federal Information and Information Systems*, defines requirements for categorizing information and information systems. [FIPS 200](https://csrc.nist.gov/pubs/fips/200/final), *Minimum Security Requirements for Federal Information and Information Systems*, specifies a risk-based process for selecting the security controls necessary to satisfy the minimum requirements. [NIST SP 800-60](https://csrc.nist.gov/pubs/sp/800/60/v1/r1/final), *Guide for Mapping Types of Information and Information Systems to Security Categories*, describes a four-step process for categorizing the risk level of information and information systems: + 1. Identify information types 2. Select provisional impact levels for the information types 3. Review provisional impact levels and adjust/finalize information impact levels for the information types @@ -17,6 +18,7 @@ The system’s impact level is used to select a baseline set of security control ## System Security Plan (SSP) Start SSP development utilizing guidance from [NIST Special Publication 800-18, Revision 1](https://csrc.nist.gov/pubs/sp/800/18/r1/final). See note in Implementation & Assessment about SSP digitization and automation.
Typical SSP templates will include the following: + 1. Information System Name 2. Risk Categorization (following FIPS 199 & 200 guidance) 3. Information System Owner @@ -29,8 +31,3 @@ Start SSP development utilizing guidance from [NIST Special Publication 800-18, 10. In-scope security and privacy controls 11. Date of completion/update 12. Date of approval with evidence - -
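The SP 800-60 steps described earlier culminate in a system-level security category via the FIPS 200 "high-water mark": the system inherits the highest impact level found across all of its information types and all three security objectives. Here is a minimal sketch of that rollup; the information types and impact levels are invented examples, not an authoritative SP 800-60 mapping:

```python
# Hedged sketch of FIPS 199 security categorization aggregation.
# The information-type names and provisional impact levels below are
# illustrative examples only.

LEVELS = {"low": 1, "moderate": 2, "high": 3}

def high_water_mark(info_types):
    """Roll up per-type (C, I, A) impact levels to a system-level
    category using the FIPS 200 high-water-mark rule: the system
    takes the highest level across all types and all three
    security objectives."""
    system = {"confidentiality": "low", "integrity": "low", "availability": "low"}
    for _name, impacts in info_types.items():
        for objective, level in impacts.items():
            if LEVELS[level] > LEVELS[system[objective]]:
                system[objective] = level
    overall = max(system.values(), key=LEVELS.get)
    return system, overall

# Example: two hypothetical information types on one system.
info_types = {
    "routine-admin": {"confidentiality": "low", "integrity": "low", "availability": "low"},
    "personnel-records": {"confidentiality": "moderate", "integrity": "moderate", "availability": "low"},
}
per_objective, overall = high_water_mark(info_types)
print(per_objective, overall)  # overall impact level is "moderate"
```

A real categorization would also apply the step-3 adjustments from SP 800-60 before finalizing levels; this sketch only shows the mechanical rollup.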

- -> [!NOTE] -> *We are proposing the term “cATO” no longer be used, see Manifesto* diff --git a/docs/implement-assess.md index d8f3318..f06e15c 100644 --- a/docs/implement-assess.md +++ b/docs/implement-assess.md @@ -13,6 +13,7 @@ The ideal situation is that control selection is tailored based upon the system In the graphic above, let’s assume that the platform has a common controls authorization and we are deploying an application to it. After accounting for inheritance, SD Elements would run an application survey to determine what security requirements the application development team is uniquely responsible for, and any additional tailoring could be performed. These can be added to the team’s backlog via native integrations or API, creating a traceable identifier for each. To start, we recommend a kickoff meeting with security (including privacy if the system is impacted): + - A one-hour meeting - Assign the team an assessor - Perform product architectural analysis @@ -21,6 +22,7 @@ To start, we recommend a kickoff meeting with security (including privacy if the - Assigned assessor will help team prioritize backlog Assuming a code-level implementation, the engineers would pick up the story, complete implementation and documentation, and assessor acceptance would be required, much like PM acceptance for user stories.
It helps to standardize the way teams respond to these tasks: + - Describe the team’s technical decisions and task implementation details - Provide a link to the code and/or regularly maintained artifacts to make reviewing easier - Provide a technical point of contact (name and email) who signed off on the task completion @@ -49,6 +51,7 @@ Here are examples of how we used Snyk and Aqua in one implementation: To ensure adequate progress throughout the course of development, we recommend periodic control and scan reviews such as: A weekly meeting will be set between the assessor and the team + - Assigned assessor must have access to the team's backlog - Help prioritize security controls - Determine product security progress @@ -63,6 +66,7 @@ The SSP and POAM should be digitized, and automation can be applied as your matu The POAM describes the actions that are planned to correct deficiencies in a given control identified during the assessment of controls as well as during continuous monitoring. A POAM typically includes tasks to be accomplished with a recommendation for completion before or after system authorization; resources required to accomplish the tasks; milestones established to meet the tasks; and the scheduled completion dates for the milestones and tasks. POAMs are reviewed by the Authorizing Official (AO) to ensure there is agreement with the remediation actions planned to correct the identified deficiencies. POAMs are not needed when deficiencies are accepted by the AO as residual risk, or are remediated during an assessment and before a release. Residual risk is often covered by other controls that were fully and successfully addressed. + - Residual risk is defined as risk that remains after efforts to identify and mitigate said risk have been taken. - Information System Security Officers (ISSO) or Application Security Assessors will monitor for new POAM items submitted for review, and report them to the AO as needed.
@@ -89,8 +93,3 @@ This process of assessing risks and verifying that requirements have been met oc We recommend starting simple and adding automation based on your largest bottlenecks. This looks very different in an organization with 5 apps versus 100, or with 100 users versus 100,000, or with 1,000 monthly API calls versus 10M. Trying to automate all the things on day 1 is a terrible and unnecessary strategy (you will fail). The goal is risk management, and then continuous improvement. Sure, think big… but then start small and scale appropriately. [OSCAL](https://pages.nist.gov/OSCAL/) looks promising when it comes to generating authorization packages dynamically, saving a great deal of time and money as [demonstrated by AWS](https://aws.amazon.com/blogs/security/aws-achieves-the-first-oscal-format-system-security-plan-submission-to-fedramp/) last year. We have built an OSCAL-based RMF platform and are beginning to implement it with our customers. In the future we may add implementations to the playbook or as an external reference. - -
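For a sense of what OSCAL digitization looks like, here is a pared-down, OSCAL-shaped SSP skeleton built in Python. The field names follow the top level of the NIST OSCAL SSP JSON model, but the values (system name, profile path) are hypothetical, and the sketch omits fields a schema-valid SSP would require:

```python
import json
import uuid
from datetime import datetime, timezone

# Illustrative OSCAL-shaped SSP skeleton. Field names mirror the top
# level of the OSCAL SSP JSON model; this is NOT a schema-complete or
# validated document, and all values are placeholders.
ssp = {
    "system-security-plan": {
        "uuid": str(uuid.uuid4()),
        "metadata": {
            "title": "Example System SSP",  # hypothetical system
            "last-modified": datetime.now(timezone.utc).isoformat(),
            "version": "0.1",
            "oscal-version": "1.1.2",
        },
        "import-profile": {
            # Points at the control baseline (profile) the SSP implements.
            "href": "profiles/example-moderate-baseline.json"  # hypothetical path
        },
        "system-characteristics": {
            "system-name": "Example System",
            "security-sensitivity-level": "moderate",
        },
        "control-implementation": {
            "description": "Per-control implementation statements go here.",
            "implemented-requirements": [],
        },
    }
}
# Serialize for hand-off to downstream tooling or review.
print(json.dumps(ssp, indent=2)[:120])
```

The payoff of this shape is that per-control statements become machine-readable records rather than prose in a Word template, which is what makes dynamic package generation possible.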

- -> [!NOTE] -> *We are proposing the term “cATO” no longer be used, see Manifesto* diff --git a/docs/laws.md b/docs/laws.md index 00cb4d7..527474c 100644 --- a/docs/laws.md +++ b/docs/laws.md @@ -1,8 +1,6 @@ # FISMA -*(Copied from the NIST RMF website [FISMA background](https://csrc.nist.gov/Projects/risk-management/fisma-background))* - -
+*Copied from the NIST RMF website [FISMA background](https://csrc.nist.gov/Projects/risk-management/fisma-background)* ## What is FISMA? @@ -11,6 +9,7 @@ The Federal Information Security Management Act (FISMA) [FISMA 2002], part of th [The Federal Information Security Modernization Act of 2014](https://www.congress.gov/113/plaws/publ283/PLAW-113publ283.pdf) amends FISMA 2002 by providing several modifications that modernize federal security practices to address evolving security concerns. These changes result in less overall reporting, strengthen the use of continuous monitoring in systems, and increase agencies’ focus on compliance, with reporting that is more focused on the issues caused by security incidents. FISMA 2014 also required the Office of Management and Budget (OMB) to amend/revise OMB Circular A-130 to eliminate inefficient and wasteful reporting and reflect changes in law and advances in technology. FISMA, along with the Paperwork Reduction Act of 1995 and the Information Technology Management Reform Act of 1996 (Clinger-Cohen Act), explicitly emphasizes a risk-based policy for cost-effective security. In support of and reinforcing FISMA, the Office of Management and Budget (OMB) through [Circular A-130](https://www.whitehouse.gov/omb/information-for-agencies/circulars/), “Managing Federal Information as a Strategic Resource,” requires executive agencies within the federal government to: + - Plan for security - Ensure that appropriate officials are assigned security responsibility - Periodically review the security controls in their systems @@ -21,6 +20,7 @@ FISMA, along with the Paperwork Reduction Act of 1995 and the Information Techno ## What does FISMA require?
Federal agencies need to provide information security protections commensurate with the risk and magnitude of the harm resulting from unauthorized access, use, disclosure, disruption, modification, or destruction of: + - Information collected/maintained by or on behalf of an agency - Information systems used or operated by an agency or by a contractor of an agency or other organization on behalf of an agency. @@ -42,11 +42,12 @@ As defined in FISMA 2002, "[t]he term ‘Federal information system’ means an # NIST Risk Management Framework -*(Copied from the NIST RMF website [FISMA background](https://csrc.nist.gov/Projects/risk-management/fisma-background))* +*Copied from the NIST RMF website [FISMA background](https://csrc.nist.gov/Projects/risk-management/fisma-background)* The NIST Risk Management Framework (RMF), outlined in NIST Special Publication 800-37, provides a flexible, holistic, and repeatable 7-step process to manage security and privacy risk and links to a suite of NIST standards and guidelines to support implementation of risk management programs to meet the requirements of the Federal Information Security Modernization Act (FISMA). **The risk-based approach of the NIST RMF helps an organization:** + 1. **Prepare** for risk management through essential activities critical to design and implementation of a risk management program. 2. **Categorize** systems and information based on an impact analysis. 3. **Select** a set of the NIST SP 800-53 controls to protect the system based on risk assessments.
diff --git a/docs/misconceptions.md b/docs/misconceptions.md index eca8aae..91dd313 100644 --- a/docs/misconceptions.md +++ b/docs/misconceptions.md @@ -1,6 +1,7 @@ # Common Misconceptions Here are some common misconceptions we’ve heard about cATO: + - It is a way to avoid having to do RMF - ***you have to do RMF better than most to achieve initial and then ongoing authorization*** - It is authorizing the people and/or the process - ***FISMA is a law that requires us to authorize systems, which gives consideration but not primacy to people and process*** - It is a way to push whatever you want, whenever you want - ***you have to meet all security and privacy requirements to deploy to production*** @@ -12,6 +13,7 @@ Here are some common misconceptions we’ve heard about cATO:
There are also common misconceptions about FedRAMP and DISA Provisional Authorizations. Here is what you need to know: + - FedRAMP does not directly apply to DoD. DISA does, however, use FedRAMP authorization packages to formally grant a sort of Provisional Authorization reciprocity. - Provisional Authorization is not an ATO. Agency Mission Owner Authorizing Officials must review the Provisional Authorization along with agency specific implementation assessments, then grant an ATO for the system to be used. The goal is to maximize the reuse of existing evidence. - You do not have to wait for a FedRAMP or DISA Provisional Authorization before your agency can use a system. Agencies are allowed to perform an initial authorization to operate and send their evidence to the JAB or DISA AO for review to sponsor the system for FedRAMP or DISA PA, respectively. This will likely be the fastest route to ATO. Check local policy with your agency. diff --git a/docs/monitor.md b/docs/monitor.md index 31c8b2b..ff8a6f1 100644 --- a/docs/monitor.md +++ b/docs/monitor.md @@ -3,6 +3,7 @@ We recommend laying out an initial observation period with your AO of 6-12 months. During this time, meet regularly with assessors and your AO to demonstrate the results of your greatly improved implementation, assessment, and continuous monitoring processes. Set conditions for gaining an ongoing authorization and report on these metrics during these performance reviews. This is also your opportunity to outline recommendations for continuous improvement initiatives, and obtain feedback from assessors and AOs to achieve the virtuous cycle we laid out in our ‘why’. 
Aside from monitoring via automation, embedded Assessors and Privacy Officers should be staffed organically to: + - Review security scan results when developers mark findings as false positives, or decide to suppress for future sprints - Provide feedback to developers if disagreements arise - Assist developers with mitigations @@ -15,11 +16,7 @@ Aside from monitoring via automation, embedded Assessors and Privacy Officers s Perform independent spot checks and penetration tests during this time. There will be findings. Explicitly reject unrealistic standards like ‘zero findings’. Instead, focus on time to discovery and time to remediation as key metrics–your continuous delivery capability applied to remediation will impress even the most risk-averse of stakeholders. In summary, just like building software, managing risk is a continuous process... it is never done. A successful risk management program requires us to treat risk as a first class citizen. Continuously monitoring our security and privacy controls is how we will retain confidence in our security posture and program. Here is a good starter strategy that will ensure we remain honest with ourselves: + - Product teams are responsible for continuously managing security and compliance risks that are surfaced by security vulnerability scanning solutions (e.g. SAST, SCA, DAST, Image/Container, etc.). - Product teams and Security Control Assessors are expected to meet at least weekly to discuss upcoming release plans for the product, changes in security vulnerability posture, and any product context that should be updated within security vulnerability scanning tool project surveys. - Security Control Assessors and Privacy Officers are responsible for ensuring product teams are complying with policy expectations. - -

- -> [!NOTE] -> *We are proposing the term “cATO” no longer be used, see Manifesto* diff --git a/docs/ongoing-auth.md index b64c80d..b638176 100644 --- a/docs/ongoing-auth.md +++ b/docs/ongoing-auth.md @@ -13,6 +13,7 @@ Leverage a memorandum signed by the AO to grant ongoing authorization. Per NIST ## Authorization Frequency We recommend quarterly authorization frequency as a starting point, with a meeting for risk reporting to stakeholders (AO, SCA, etc.) + - This again emphasizes more, not less - Manual reporting to start–slides are OK but move towards automation and a dashboard as you mature - Identify and accept risk @@ -20,6 +21,7 @@ We recommend quarterly authorization frequency as a starting point, with a meeti The quarterly reporting should include things like: + - New applications shipped onto the platform - % security requirements (e.g. SD Elements) approved by assessor - Compliance with ongoing authorization and cATO playbook policies @@ -35,9 +37,3 @@ The quarterly reporting should include things like: - Staffing Congratulations! You’ve got an ongoing authorization that allows you to continuously deliver applications and services, provided that teams shift left on security and privacy risk. - -

- -> [!NOTE] -> *We are proposing the term “cATO” no longer be used, see Manifesto* - diff --git a/docs/organize.md index 107ffe6..b149a21 100644 --- a/docs/organize.md +++ b/docs/organize.md @@ -31,6 +31,7 @@ Don’t build when you can buy, don’t buy when you can rent. We have a forthcoming video on this topic. In the meantime, we want to explain why building your own platform services, including those associated with implementation security and privacy risk management, will likely keep you from ever having a well-paved path to production. According to the CNCF Platforms White Paper, these are capability domains to consider when building platforms for cloud-native computing: + 1. Web portals for observing and provisioning products and capabilities 2. APIs (and CLIs) for automatically provisioning products and capabilities 3. "Golden path" templates and docs enabling optimal use of capabilities in products @@ -47,11 +48,11 @@ According to the CNCF Platforms White Paper, these are capability domains to c Here is what that looks like: -![THis is an image!](images/CNCF.png) +![CNCF platform capability domains](images/CNCF.png) That is a lot of capability. Developing, deploying, operating, and maintaining all that capability would realistically take more resources (people and money) than you want to spend for what is below the mission value line. Here is a very conservative look at solely the labor costs for each area: -![THis is an image!](images/Assumptions.png) +![Labor cost assumptions for building platform capabilities](images/Assumptions.png) What’s the value in that? Since we are part of the federal acquisition system, I really want to emphasize that cost, schedule, and performance are one side of a value equation. Value equals performance divided by the product of schedule and cost. It’s important because no matter how much you minimize cost and schedule, if performance is low, value will be low.
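That value relationship can be written out explicitly. Treat it as the simple illustrative model the text describes, not an official acquisition formula: halving both schedule and cost quadruples value, but no reduction in schedule or cost can rescue low performance.

```latex
V = \frac{P}{S \cdot C},
\qquad
\frac{P}{(S/2)\,(C/2)} = 4 \cdot \frac{P}{S \cdot C},
\qquad
\lim_{P \to 0} V = 0
```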
It’s also worth noting that performance isn’t measured against a specification, but against what the mission and the user really needed–in this case a platform that enables applications to continuously deliver with reduced cognitive load. @@ -61,4 +62,4 @@ But how likely are we to achieve that? Can we get the talent? Will they be produ Here is a buy example, where many FTEs are avoided by buying platform licenses, conservatively reducing total cost by 50% while improving likelihood, drastically lowering TTV, and lowering effort and stress: -![THis is an image!](images/Buy.png) +![Cost comparison when buying platform licenses](images/Buy.png) diff --git a/docs/outcomes.md index ccf59c2..a73d134 100644 --- a/docs/outcomes.md +++ b/docs/outcomes.md @@ -3,12 +3,14 @@ Every experiment we introduce must include a way to test and validate a hypothesis for desired outcome(s). Introducing the changes we have outlined in this playbook for people, process and technology is no different. Because your cATO experiment will be touching all three factors at any given time, you’ll need a balanced set of leading and lagging indicators to validate your desired outcomes. We recommend starting with the following combination of outcome types and metrics, adjusting where appropriate for your local context, and ideally confirming a baseline for each: **Mission Outcomes** + - Expect product teams leveraging your cATO to capture their desired user outcome(s) and business impact(s) metrics for their mission(s). - User outcomes represent what their intended users (i.e. warfighters, operators and civilians) will do with the software. - Business impacts represent the results we expect to generate for the organization or agency. - These demonstrate the actual value proposition(s) of your cATO. **cATO Outcomes** + - Security/Privacy Incidents in Prod - Time to value (e.g.
Time-to-ATO and Time-to-Assessed-Task) - Security vulnerabilities in Prod @@ -16,15 +18,12 @@ Every experiment we introduce must include a way to test and validate a hypothes - POAM count and aging **Workforce Happiness Outcomes** + - Short surveys representing interaction points between product, security, privacy and authorizing officials measuring engagement, psychological safety and satisfaction **DevOps Performance Outcomes (Learn more at [https://dora.dev/](https://dora.dev/))** + - Lead time for changes - Deployment Frequency - Change Failure Rate - Mean Time to Restore (MTTR) after incident, outage or service degradation - -
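The DORA metrics above are straightforward to compute once deployment and incident events are recorded. A minimal sketch with invented timestamps follows; real pipelines would pull these records from CI/CD and incident-management tooling:

```python
from datetime import datetime, timedelta

# Hedged sketch: computing two DORA-style metrics from hypothetical
# deployment and incident records. All data below is invented for
# illustration.

deployments = [datetime(2024, 1, d) for d in (2, 5, 9, 12, 16, 19, 23, 26, 30)]

incidents = [
    # (detected, restored)
    (datetime(2024, 1, 6, 9, 0), datetime(2024, 1, 6, 11, 30)),
    (datetime(2024, 1, 20, 14, 0), datetime(2024, 1, 20, 15, 0)),
]

def deployment_frequency(deploys, window_days=30):
    """Average deployments per week over the observation window."""
    return len(deploys) / (window_days / 7)

def mean_time_to_restore(incs):
    """Mean hours between detection and restoration across incidents."""
    total = sum((restored - detected for detected, restored in incs), timedelta())
    return total / len(incs) / timedelta(hours=1)

print(f"deploys/week: {deployment_frequency(deployments):.1f}")  # 2.1
print(f"MTTR (hours): {mean_time_to_restore(incidents):.2f}")    # 1.75
```

Lead time for changes and change failure rate follow the same pattern once commit timestamps and deployment outcomes are captured alongside these events.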

- -> [!NOTE] -> *We are proposing the term “cATO” no longer be used, see Manifesto* diff --git a/docs/people.md b/docs/people.md index 4fc485c..e65e544 100644 --- a/docs/people.md +++ b/docs/people.md @@ -27,6 +27,7 @@ We also recommend fencing off program dollars to be sent to the departments that Whether it's the actual teams implementing and supporting cATO, or the product teams who are delivering outcomes to warfighters, operators and civilians, we want to observe how security, privacy and operations become first class citizens throughout the system life cycle. There are actually multiple recommendations that should be considered here. Start with a principle of learning by doing, by creating learning opportunities directly within, and throughout, your SDLC processes. Similar to the pairing model described earlier, and to support our rationale for having dedicated assessors, there is an opportunity for product teams to pair with assessors beyond just the assessment step of RMF. Think of it like embedding another subject matter expert into the product team, where the product team can now learn in near real-time about threats, weaknesses and general security concepts. This helps: + - Create a bi-directional learning environment for teams to learn about cybersecurity and complex privacy laws and regulations, and assessors to learn deeper context about the products they will be assessing. - Build better quality into your products, earlier. - Reduce the feedback loop process between control implementation, assessment and authorization. diff --git a/docs/policy.md b/docs/policy.md index de015fc..e948a3f 100644 --- a/docs/policy.md +++ b/docs/policy.md @@ -9,6 +9,7 @@ Before we dive into integrated process and technology, readers need a baseline u ## Authorization Boundary The authorization boundary for a system is established during the RMF Prepare Task. Organizations have flexibility in determining what constitutes the authorization boundary for a system. 
The set of system elements included within an authorization boundary defines the system (i.e., the scope of the authorization). When a set of system elements is identified as an authorization boundary for a system, the elements are generally under the same direct management. Other considerations for determining the authorization boundary include identifying system elements that: + - Support the same mission or business functions; - Have similar operating characteristics and security and privacy requirements; - Process, store, and transmit similar types of information (e.g., categorized at the same impact level); or @@ -25,6 +26,7 @@ Authorization boundaries include all system elements, including hardware, firmwa ## Authorization Types System and common control authorization occurs as part of the RMF Authorize step. A system authorization or a common control authorization can be an initial authorization, an ongoing authorization, or a reauthorization as defined below: + - ***Initial*** authorization is defined as the initial (start-up) risk determination and risk acceptance decision based on a complete, zero-based review of the system or of common controls. The zero-based review of the system includes an assessment of all implemented system-level controls (including the system-level portion of the hybrid controls) and a review of the security status of inherited common controls as specified in security and privacy plans. The zero-based review of common controls (other than common controls that are system-based) includes an assessment of applicable controls (e.g. policies, operating procedures, implementation information) that contribute to the provision of a common control or set of common controls. - ***Ongoing authorization*** is defined as the subsequent (follow-on) risk determinations and risk acceptance decisions taken at agreed-upon and documented frequencies in accordance with the organization’s mission/business requirements and organizational risk tolerance. 
Ongoing authorization is a time-driven or event-driven authorization process. The authorizing official is provided with the necessary information regarding the near real-time security and privacy posture of the system to determine whether the mission/business risk of continued system operation or the provision of common controls is acceptable. Ongoing authorization is fundamentally related to the ongoing understanding and ongoing acceptance of security and privacy risk and is dependent on a robust continuous monitoring program. - ***Reauthorization*** is defined as the static, single point-in-time risk determination and risk acceptance decision that occurs after initial authorization. In general, reauthorization actions may be time-driven or event-driven. However, under ongoing authorization, reauthorization is, in most instances, an event-driven action initiated by the authorizing official or directed by the senior accountable official for risk management or risk executive (function) in response to an event that results in security and privacy risk above the level of risk previously accepted by the authorizing official. Reauthorization consists of a review of the system or the common controls similar to the review carried out during the initial authorization. The reauthorization differs from the initial authorization because the authorizing official can choose to initiate a complete zero-based review of the system or of the common controls or to initiate a targeted review based on the type of event that triggered the reauthorization. Reauthorization is a separate activity from the ongoing authorization process. However, security and privacy information generated from the continuous monitoring program may be leveraged to support reauthorization. The reauthorization actions may necessitate a review of and changes to the organization’s information security and privacy continuous monitoring strategies, which may in turn affect ongoing authorization.
@@ -34,6 +36,7 @@ System and common control authorization occurs as part of the RMF Authorize step ## Authorization Decisions Authorization decisions are based on the content of the authorization package. There are four types of authorization decisions that can be rendered by authorizing officials: + - Authorization to operate (ATO) - Common control authorization - Authorization to use @@ -62,6 +65,7 @@ Continuous monitoring helps to achieve a state of ongoing authorization where th ## Conditions for Implementation of Ongoing Authorization When the RMF has been effectively applied across the organization and the organization has implemented a robust continuous monitoring program, systems may transition from a static, point-in-time authorization process to a dynamic, near real-time ongoing authorization process. To do so, the following conditions must be satisfied: + - The system or common control being considered for ongoing authorization has received an initial authorization based on a complete, zero-based review of the system or the common controls. - An organizational continuous monitoring program is in place that monitors implemented controls with the appropriate degree of rigor and at the required frequencies specified by the organization in accordance with the continuous monitoring strategy and NIST standards and guidelines. diff --git a/docs/prepare.md index 165bdcf..5d96820 100644 --- a/docs/prepare.md +++ b/docs/prepare.md @@ -3,6 +3,7 @@ Use the Prepare step to align all stakeholders on a journey towards an ongoing authorization strategy, using people, process, and technology to achieve near real-time continuous monitoring of controls and cybersecurity. It is important to develop a communications strategy with your team and relevant stakeholders.
Key points to emphasize in your communications strategy are: + - RMF is our common denominator, start there - Discuss real concerns, don’t generalize - Compare outcomes, not intentions vs. outcomes @@ -62,8 +63,3 @@ To help with this, we recommend formalizing a Shared Responsibility Model using ## Tools and Automation This is also a good time to present any tools and automation to be used for both digitization of documentation and workflows and their subsequent automation. This is especially important if you elect not to use the enterprise’s preferred GRC platform, such as eMASS or XACTA. FISMA and RMF do not mandate any tools, though an exception to policy may be required at some level of your organization if these solutions have been mandated. - -
- -> [!NOTE] -> *We are proposing the term “cATO” no longer be used, see Manifesto* diff --git a/docs/sdlc.md index bc6d4a0..b2f4a82 100644 --- a/docs/sdlc.md +++ b/docs/sdlc.md @@ -3,16 +3,19 @@ We covered this throughout the document, but here is a quick summary: **People** + - Integrated cybersecurity culture (cross-functional teams) - Technical assessors (from your performer, or from your AO’s contract(s)) **Process** + - Perform all RMF steps - Create living documentation by way of your SDLC toolsuite - Follow NIST Guidance + create an ongoing authorization playbook - Establish continuous delivery, with metrics for high quality and reduced risk **Technology / Automation** + - Implement high common controls inheritance via an opinionated cloud platform - Modern Security Requirements Management (e.g. Tracer or SD Elements) - Static Application & Dependency Vulnerability Scanning (e.g. Snyk) diff --git a/docs/why.md index 137d6a2..b892b31 100644 --- a/docs/why.md +++ b/docs/why.md @@ -21,15 +21,18 @@ In their book Continuous Delivery, Dave Farley and Jez Humble define continuous ## The Benefits **Improve security posture and lower risk** + - Reduce the number of security defects through threat analysis and secure coding practices - Continuously detect and remediate application vulnerabilities quickly via the Secure Release Pipeline - Cybersecurity and vulnerability education is available to application development teams simply by utilizing the secure release pipeline **Increase transparency and trust** + - Default access to all body of evidence artifacts throughout the software development life cycle (i.e. source code, documents, diagrams) for security control assessors and cybersecurity personnel to support continuous monitoring (e.g.
assessment and evaluation) - Incrementally automating risk assessment via secure release pipelines **Reduce costs & increase delivery of value to organizations and end-users** + - Reducing the number of security defects and risks - Leveraging a cloud environment - Shipping software can be accomplished in hours or days, instead of weeks, months or even years @@ -39,6 +42,7 @@ In their book Continuous Delivery, Dave Farley and Jez Humble define continuous ## What's Really at Stake In the digital era, both the warfighting domain and policy domain are digital. Both demand the early and continuous delivery of valuable software: + - We cannot afford to be disrupted on the battlefield–our democracy will be toppled from without. - We cannot afford to fail to deliver on promises to our citizens–our democracy will be toppled from within.