Merge pull request #8 from rise8-us/bullet-formatting-bug
Bullet formatting bug
rmonroe-va authored Nov 14, 2023
2 parents 19b774b + a361928 commit d7b5ec0
Showing 13 changed files with 38 additions and 38 deletions.
7 changes: 2 additions & 5 deletions docs/categorize.md
@@ -3,6 +3,7 @@
The Categorize step remains largely the same, but is the first opportunity to show that RMF tasks (C-1, C-2, & C-3) can be done more quickly with a cross-functional team (people aspect). As mentioned in previous sections, it’s essential to have technical assessors in place, along with highly competent infrastructure, platform, and pilot application teams.

Security categorization is the most important step in the Risk Management Framework (RMF) since it ties the information system’s security activities to the organization’s mission/business priorities. [FIPS 199](https://csrc.nist.gov/pubs/fips/199/final), *Standards for Security Categorization of Federal Information and Information Systems*, defines requirements for categorizing information and information systems. [FIPS 200](https://csrc.nist.gov/pubs/fips/200/final), *Minimum Security Requirements for Federal Information and Information Systems*, specifies a risk-based process for selecting the security controls necessary to satisfy the minimum requirements. [NIST SP 800-60](https://csrc.nist.gov/pubs/sp/800/60/v1/r1/final), *Guide for Mapping Types of Information and Information Systems to Security Categories*, describes a four-step process for categorizing the information and information system level of risk:
+
1. Identify information types
2. Select provisional impact levels for the information types
3. Review provisional impact levels and adjust/finalize information impact levels for the information types
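As a hedged illustration of where these steps lead, the roll-up from information-type impact levels to a system-level category follows the FIPS 200 "high water mark" idea; a rough sketch (the dict shape is our own, not an official implementation):

```python
# Informal sketch of the FIPS 199/200 "high water mark": the system's
# security category is the highest impact level found across all of its
# information types and security objectives (C, I, A).
LEVELS = {"low": 0, "moderate": 1, "high": 2}

def system_impact(info_types):
    """info_types: dicts with 'confidentiality'/'integrity'/'availability' levels."""
    high_water = "low"
    for info_type in info_types:
        for objective in ("confidentiality", "integrity", "availability"):
            level = info_type[objective]
            if LEVELS[level] > LEVELS[high_water]:
                high_water = level
    return high_water

# A single moderate-integrity information type drives the whole system to moderate.
info_types = [
    {"confidentiality": "low", "integrity": "moderate", "availability": "low"},
    {"confidentiality": "low", "integrity": "low", "availability": "low"},
]
print(system_impact(info_types))  # moderate
```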
@@ -17,6 +18,7 @@ The system’s impact level is used to select a baseline set of security control
## System Security Plan (SSP)

Start SSP development utilizing guidance from [NIST Special Publication 800-18, Revision 1](https://csrc.nist.gov/pubs/sp/800/18/r1/final). See note in Implementation & Assessment about SSP digitization and automation. Typical SSP templates will include the following:
+
1. Information System Name
2. Risk Categorization (following FIPS 199 & 200 guidance)
3. Information System Owner
@@ -29,8 +31,3 @@ Start SSP development utilizing guidance from [NIST Special Publication 800-18,
10. In-scope security and privacy controls
11. Date of completion/update
12. Date of approval with evidence
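As a rough illustration (field names are ours, not prescribed by SP 800-18), a subset of the template items above could be tracked as a simple record, which also eases the SSP digitization mentioned later:

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class SystemSecurityPlan:
    """Illustrative subset of the SSP template items listed above."""
    system_name: str
    risk_categorization: str                  # per FIPS 199 & 200 guidance
    system_owner: str
    in_scope_controls: list = field(default_factory=list)  # security & privacy controls
    completed_on: Optional[date] = None       # date of completion/update
    approved_on: Optional[date] = None        # date of approval
    approval_evidence: Optional[str] = None   # e.g. link to the signed memo

ssp = SystemSecurityPlan(
    system_name="Example System",
    risk_categorization="moderate",
    system_owner="Jane Doe",
)
```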
-
-<br/><br/>
-
-> [!NOTE]
-> *We are proposing the term “cATO” no longer be used, see Manifesto*
9 changes: 4 additions & 5 deletions docs/implement-assess.md
@@ -13,6 +13,7 @@ The ideal situation is that control selection is tailored based upon the system
In the graphic above, let’s assume that the platform has a common controls authorization and we are deploying an application to it. After accounting for inheritance, SD Elements would run an application survey to determine what security requirements the application development team is uniquely responsible for, and any additional tailoring could be performed. These can be added to the team’s backlog via native integrations or API, creating a traceable identifier for each.

To start, we recommend a kickoff meeting with security (including privacy if the system is impacted):
+
- An hour meeting
- Assign the team an assessor
- Perform product architectural analysis
@@ -21,6 +22,7 @@ To start, we recommend a kickoff meeting with security (including privacy if the
- Assigned assessor will help team prioritize backlog

Assuming a code-level implementation, the engineers would pick up the story, complete implementation and documentation, and assessor acceptance would be required, much like PM acceptance for user stories. It helps to standardize the way teams respond to these tasks:
+
- Describe the team’s technical decisions and task implementation details
- Provide a link to the code and/or regularly maintained artifacts to make reviewing easier
- Provide a technical point of contact with name and email, who signed for the task completion
@@ -49,6 +51,7 @@ Here are examples of how we used Snyk and Aqua in one implementation:

To ensure adequate progress throughout the course of development, we recommend periodic control and scan reviews, such as a weekly meeting between the assessor and the team:
+
- Assigned assessor must have access to the team's backlog
- Help prioritize security controls
- Determine product security progress
@@ -63,6 +66,7 @@ The SSP and POAM should be digitized, and automation can be applied as your matu
The POAM describes the actions that are planned to correct deficiencies in a given control identified during the assessment of controls as well as during continuous monitoring. A POAM typically includes tasks to be accomplished with a recommendation for completion before or after system authorization; resources required to accomplish the tasks; milestones established to meet the tasks; and the scheduled completion dates for the milestones and tasks. POAMs are reviewed by the Authorizing Official (AO) to ensure there is agreement with the remediation actions planned to correct the identified deficiencies.

POAMs are not needed when deficiencies are accepted by the AO as residual risk, or are remediated during an assessment and before a release. Residual risk is often covered by other controls that were fully and successfully addressed.
+
- Residual risk is defined as risk that remains after efforts to identify and mitigate said risk have been taken.
- Information System Security Officers (ISSO) or Application Security Assessors will monitor for new POAM items submitted for review, and report them to the AO as needed.
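To make the POAM elements described above concrete, here is a hedged sketch of one way to represent a POAM item; the names and fields are illustrative, not taken from a NIST template:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class Milestone:
    description: str
    scheduled_completion: date

@dataclass
class PoamItem:
    """Illustrative POAM item: tasks, resources, milestones, and AO review state."""
    deficiency: str                 # weakness found during assessment or monitoring
    tasks: list
    resources_required: str
    milestones: list = field(default_factory=list)
    ao_concurrence: bool = False    # set once the AO agrees with the planned remediation

item = PoamItem(
    deficiency="TLS 1.0 still enabled on a legacy endpoint",
    tasks=["Disable TLS 1.0", "Re-scan the endpoint"],
    resources_required="1 platform engineer, 2 days",
    milestones=[Milestone("Configuration change deployed", date(2024, 1, 15))],
)
```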

@@ -89,8 +93,3 @@ This process of assessing risks and verifying that requirements have been met oc
We recommend starting simple and adding automation based on your largest bottlenecks. This looks very different in an organization with 5 apps versus 100, or with 100 users versus 100,000, or with 1,000 monthly API calls versus 10M. Trying to automate all the things on day 1 is a terrible strategy (you will fail) and unnecessary. The goal is risk management, and then continuous improvement. Sure, think big… but then start small and scale appropriately.

[OSCAL](https://pages.nist.gov/OSCAL/) looks promising when it comes to generating authorization packages dynamically, saving a great deal of time and money as [demonstrated by AWS](https://aws.amazon.com/blogs/security/aws-achieves-the-first-oscal-format-system-security-plan-submission-to-fedramp/) last year. We have built an OSCAL-based RMF platform and are beginning to implement it with our customers. In the future we may add implementations to the playbook or as an external reference.
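For a feel of what OSCAL looks like, here is a deliberately minimal, schema-incomplete sketch of an OSCAL-style SSP skeleton in JSON; real OSCAL SSPs require many more fields, so consult the NIST OSCAL models for the authoritative structure (the version string and titles here are placeholders):

```python
import json
import uuid

# Minimal OSCAL-flavored SSP skeleton; a real OSCAL SSP also needs
# import-profile, system-characteristics, control-implementation, etc.
ssp = {
    "system-security-plan": {
        "uuid": str(uuid.uuid4()),
        "metadata": {
            "title": "Example System SSP",
            "version": "0.1.0",
            "oscal-version": "1.1.2",  # assumed version string for illustration
        },
    }
}

print(json.dumps(ssp, indent=2))
```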
-
-<br/><br/>
-
-> [!NOTE]
-> *We are proposing the term “cATO” no longer be used, see Manifesto*
9 changes: 5 additions & 4 deletions docs/laws.md
@@ -1,8 +1,6 @@
# FISMA

-*(Copied from the NIST RMF website [FISMA background](https://csrc.nist.gov/Projects/risk-management/fisma-background))*
-
-<br/>
+*Copied from the NIST RMF website [FISMA background](https://csrc.nist.gov/Projects/risk-management/fisma-background)*

## What is FISMA?

@@ -11,6 +9,7 @@ The Federal Information Security Management Act (FISMA) [FISMA 2002], part of the
[The Federal Information Security Modernization Act of 2014](https://www.congress.gov/113/plaws/publ283/PLAW-113publ283.pdf) amends FISMA 2002 by providing several modifications that modernize federal security practices to address evolving security concerns. These changes result in less overall reporting, strengthen the use of continuous monitoring in systems, increase the focus on agency compliance, and make reporting more focused on the issues caused by security incidents. FISMA 2014 also required the Office of Management and Budget (OMB) to amend/revise OMB Circular A-130 to eliminate inefficient and wasteful reporting and reflect changes in law and advances in technology.

FISMA, along with the Paperwork Reduction Act of 1995 and the Information Technology Management Reform Act of 1996 (Clinger-Cohen Act), explicitly emphasizes a risk-based policy for cost-effective security. In support of and reinforcing FISMA, the Office of Management and Budget (OMB) through [Circular A-130](https://www.whitehouse.gov/omb/information-for-agencies/circulars/), “Managing Federal Information as a Strategic Resource,” requires executive agencies within the federal government to:
+
- Plan for security
- Ensure that appropriate officials are assigned security responsibility
- Periodically review the security controls in their systems
@@ -21,6 +20,7 @@ FISMA, along with the Paperwork Reduction Act of 1995 and the Information Techno
## What does FISMA require?

Federal agencies need to provide information security protections commensurate with the risk and magnitude of the harm resulting from unauthorized access, use, disclosure, disruption, modification, or destruction of:
+
- Information collected or maintained by or on behalf of an agency
- Information systems used or operated by an agency, or by a contractor of an agency or other organization on behalf of an agency

@@ -42,11 +42,12 @@ As defined in FISMA 2002, "[t]he term ‘Federal information system’ means an

# NIST Risk Management Framework

-*(Copied from the NIST RMF website [FISMA background](https://csrc.nist.gov/Projects/risk-management/fisma-background))*
+*Copied from the NIST RMF website [FISMA background](https://csrc.nist.gov/Projects/risk-management/fisma-background)*

The NIST Risk Management Framework (RMF), outlined in NIST Special Publication 800-37, provides a flexible, holistic, and repeatable 7-step process to manage security and privacy risk and links to a suite of NIST standards and guidelines to support implementation of risk management programs to meet the requirements of the Federal Information Security Modernization Act (FISMA).

**The risk-based approach of the NIST RMF helps an organization:**
+
1. **Prepare** for risk management through essential activities critical to design and implementation of a risk management program.
2. **Categorize** systems and information based on an impact analysis.
3. **Select** a set of the NIST SP 800-53 controls to protect the system based on risk assessments.
2 changes: 2 additions & 0 deletions docs/misconceptions.md
@@ -1,6 +1,7 @@
# Common Misconceptions

Here are some common misconceptions we’ve heard about cATO:
+
- It is a way to avoid having to do RMF - ***you have to do RMF better than most to achieve initial and then ongoing authorization***
- It is authorizing the people and/or the process - ***FISMA is a law that requires us to authorize systems, which gives consideration but not primacy to people and process***
- It is a way to push whatever you want, whenever you want - ***you have to meet all security and privacy requirements to deploy to production***
@@ -12,6 +13,7 @@ Here are some common misconceptions we’ve heard about cATO:
<br/>

There are also common misconceptions about FedRAMP and DISA Provisional Authorizations. Here is what you need to know:
+
- FedRAMP does not directly apply to DoD. DISA does, however, use FedRAMP authorization packages to formally grant a sort of Provisional Authorization reciprocity.
- Provisional Authorization is not an ATO. Agency Mission Owner Authorizing Officials must review the Provisional Authorization along with agency specific implementation assessments, then grant an ATO for the system to be used. The goal is to maximize the reuse of existing evidence.
- You do not have to wait for a FedRAMP or DISA Provisional Authorization before your agency can use a system. Agencies are allowed to perform an initial authorization to operate and send their evidence to the JAB or DISA AO for review to sponsor the system for FedRAMP or DISA PA, respectively. This will likely be the fastest route to ATO. Check local policy with your agency.
7 changes: 2 additions & 5 deletions docs/monitor.md
@@ -3,6 +3,7 @@
We recommend laying out an initial observation period with your AO of 6-12 months. During this time, meet regularly with assessors and your AO to demonstrate the results of your greatly improved implementation, assessment, and continuous monitoring processes. Set conditions for gaining an ongoing authorization and report on these metrics during these performance reviews. This is also your opportunity to outline recommendations for continuous improvement initiatives, and obtain feedback from assessors and AOs to achieve the virtuous cycle we laid out in our ‘why’.

Aside from monitoring via automation, embedded Assessors and Privacy Officers should be staffed organically to:
+
- Review security scan results when developers mark findings as false positives, or decide to suppress for future sprints
- Provide feedback to developers if disagreements arise
- Assist developers with mitigations
@@ -15,11 +16,7 @@ Aside from monitoring via automation, embedding Assessors and Privacy Officers s
Perform independent spot checks and penetration tests during this time. There will be findings. Explicitly reject unrealistic standards like ‘zero findings’. Instead, focus on time to discovery and time to remediation as key metrics; your continuous delivery capability applied to remediation will impress even the most risk-averse of stakeholders.

In summary, just like building software, managing risk is a continuous process... it is never done. A successful risk management program requires us to treat risk as a first class citizen. Continuously monitoring our security and privacy controls is how we will retain confidence in our security posture and program. Here is a good starter strategy that will ensure we remain honest with ourselves:
+
- Product teams are responsible for continuously managing security and compliance risks that are surfaced by security vulnerability scanning solutions (e.g. SAST, SCA, DAST, Image/Container, etc.).
- Product teams and Security Control Assessors are expected to meet at least weekly to discuss upcoming release plans for the product, changes in security vulnerability posture, and any product context that should be updated within security vulnerability scanning tool project surveys.
- Security Control Assessors and Privacy Officers are responsible for ensuring product teams are complying with policy expectations.
-
-<br/><br/>
-
-> [!NOTE]
-> *We are proposing the term “cATO” no longer be used, see Manifesto*
8 changes: 2 additions & 6 deletions docs/ongoing-auth.md
@@ -13,13 +13,15 @@ Leverage a memorandum signed by the AO to grant ongoing authorization. Per NIST
## Authorization Frequency

We recommend a quarterly authorization frequency as a starting point, with a meeting for risk reporting to stakeholders (AO, SCA, etc.).
+
- This again emphasizes more, not less
- Manual reporting to start–slides are ok but move towards automation and a dashboard as you mature
- Identify and accept risk
- Make necessary corrections
- Formally document renewal

The quarterly reporting should include things like:
+
- New applications shipped onto the platform
- % security requirements (e.g. SD Elements) approved by assessor
- Compliance with ongoing authorization and cATO playbook policies
@@ -35,9 +37,3 @@ The quarterly reporting should include things like:
- Staffing

Congratulations! You’ve got an ongoing authorization that allows you to continuously deliver applications and services, provided that teams shift left on security and privacy risk.
-
-<br/><br/>
-
-> [!NOTE]
-> *We are proposing the term “cATO” no longer be used, see Manifesto*
7 changes: 4 additions & 3 deletions docs/organize.md
@@ -31,6 +31,7 @@ Don’t build when you can buy, don’t buy when you can rent.
We have a forthcoming video on this topic. In the meantime, we want to explain why building your own platform services, including those associated with implementing security and privacy risk management, will likely keep you from ever having a well-paved path to production.

According to the CNCF Platforms White Paper, “here are capability domains to consider when building platforms for cloud-native computing”:
+
1. Web portals for observing and provisioning products and capabilities
2. APIs (and CLIs) for automatically provisioning products and capabilities
3. "Golden path" templates and docs enabling optimal use of capabilities in products
@@ -47,11 +48,11 @@

Here is what that looks like:

-![THis is an image!](images/CNCF.png)
+![This is an image!](images/CNCF.png)

That is a lot of capability. Developing, deploying, operating, and maintaining all that capability would realistically take more resources (people and money) than you want to spend for what is below the mission value line. Here is a very conservative look at solely the labor costs for each area:

-![THis is an image!](images/Assumptions.png)
+![This is an image!](images/Assumptions.png)

What’s the value in that? Since we are part of the federal acquisition system, I really want to emphasize that cost, schedule, and performance are one side of a value equation. Value equals performance divided by the product of schedule and cost. It’s important because no matter how much you minimize cost and schedule, if performance is low, value will be low. It’s also worth noting that performance isn’t measured against a specification, but against what the mission and the user really needed–in this case a platform that enables applications to continuously deliver with reduced cognitive load.
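Written out, the value relationship described above is:

```latex
\text{Value} \;=\; \frac{\text{Performance}}{\text{Schedule} \times \text{Cost}}
```

So even as schedule and cost approach their minimums, low performance pins value low; the denominator cannot compensate for a near-zero numerator.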

Expand All @@ -61,4 +62,4 @@ But how likely are we to achieve that? Can we get the talent? Will they be produ

Here is a buy example, where many FTEs are avoided by buying platform licenses, conservatively reducing total cost by 50% while improving likelihood, drastically lowering TTV, and lowering effort and stress:

-![THis is an image!](images/Buy.png)
+![This is an image!](images/Buy.png)
9 changes: 4 additions & 5 deletions docs/outcomes.md
@@ -3,28 +3,27 @@
Every experiment we introduce must include a way to test and validate a hypothesis for desired outcome(s). Introducing the changes we have outlined in this playbook for people, process and technology are no different. Because your cATO experiment will be touching all three factors at any given time, you’ll need a balanced set of leading and lagging indicators to validate your desired outcomes. We recommend starting with the following combination of outcome types and metrics, adjusting where appropriate for your local context, and ideally confirming a baseline for each:

**Mission Outcomes**
+
- Expect product teams leveraging your cATO to capture their desired user outcome(s) and business impact(s) metrics for their mission(s).
- User outcomes represent what their intended users (e.g. warfighters, operators, and civilians) will do with the software.
- Business impacts represent the results we expect to generate for the organization or agency.
- These demonstrate the actual value proposition(s) of your cATO.

**cATO Outcomes**
+
- Security/Privacy Incidents in Prod
- Time to value (e.g. Time-to-ATO and Time-to-Assessed-Task)
- Security vulnerabilities in Prod
- Security vulnerability Mean Time to Remediation (MTTR)
- POAM count and aging

**Workforce Happiness Outcomes**
+
- Short surveys representing interaction points between product, security, privacy and authorizing officials measuring engagement, psychological safety and satisfaction

**DevOps Performance Outcomes (Learn more at [https://dora.dev/](https://dora.dev/))**
+
- Lead time for changes
- Deployment Frequency
- Change Failure Rate
- Mean Time to Restore (MTTR) after incident, outage or service degradation
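As one hedged example of instrumenting the metrics above, vulnerability MTTR can be computed from discovery and remediation timestamps; a sketch only, since the export format of your scanner will differ:

```python
from datetime import datetime

def mttr_days(pairs):
    """Mean time to remediation in days, from (discovered, remediated) pairs."""
    total_seconds = sum((fixed - found).total_seconds() for found, fixed in pairs)
    return total_seconds / len(pairs) / 86400

# Example timestamps, e.g. exported from a vulnerability scanner.
findings = [
    (datetime(2023, 11, 1), datetime(2023, 11, 3)),   # remediated in 2 days
    (datetime(2023, 11, 2), datetime(2023, 11, 8)),   # remediated in 6 days
]
print(mttr_days(findings))  # 4.0
```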
-
-<br/><br/>
-
-> [!NOTE]
-> *We are proposing the term “cATO” no longer be used, see Manifesto*