Security Engineering

Overview

Security Engineering is the process by which security activities are integrated into each phase of the software development lifecycle, from envisioning requirements through deployment and maintenance. Each discrete security engineering step provides value by reducing the cost of risk assessment and mitigation.

A good security engineering strategy includes security as a proactive and integrated practice. This is in contrast to a reactive security strategy, which considers and mitigates risks only when they are discovered by third parties during or after deployment. A good security strategy reduces cost and disruption while allowing conscious and deliberate decisions about risk. A reactive security strategy increases cost and disruption by assuming risk unconsciously and pushing risk decisions to the point in the process where they are most expensive.

Security Engineering Activities

Security engineering activities are added to the development lifecycle based upon the measured risk of the application or feature being developed. For a high-risk application, you will apply every activity deeply. For a medium-risk application, you will apply fewer activities or apply some activities with less depth. For a low-risk application, you may perform few or no activities at all.

The security engineering process includes the following activities:

  • Security objectives and requirements

  • Security design

  • Threat model

  • Security architecture and design review

  • Security code review

Security Objectives and Requirements

Security objectives and requirements should be defined early in the application development process. Security objectives are goals and constraints that affect the confidentiality, integrity, and availability of your data and application. 

Security objectives are used to:

  • Guide threat modeling activities. 

  • Flow into security design activities.

  • Determine the scope and guide the process of security architecture and design reviews. 

  • Help set code review objectives. 

  • Guide security test planning and execution.

The primary goal of security objectives is to identify the assets you need to protect and articulate why they need to be protected. Assets can include:

  • Accounts, passwords, tokens, session identifiers.

  • PII or other sensitive or protected data, including unpublished articles, editorial comments, and any other non-public data.

  • Access to other internal systems, services, servers, or accounts.

  • Corporate or brand reputation.

Another goal of security objectives is to ensure compliance. Requirements should take into consideration:

  • Policy and standards compliance (for example, ISO 27001 certification, PCI, or GDPR). 

  • SLAs or Quality of Service commitments (for example, availability or performance).

When thinking about why an asset needs protection, you can use the CIA triad:

  • Confidentiality. Does the asset need to be protected from being accessed or stolen?

  • Integrity. Does the asset need to be protected from modification or corruption?

  • Availability. Does the asset need to be protected from denial of service?

Security Design

Designing for security is an active approach to addressing the security objectives for the application or feature in the architecture and design. 

Security Design should always consider the following design principles:

  • Principle of Open Design – assume the system's design is known to the adversary. This should not compromise the security of the system. Colloquially known as “no security by obscurity.”

  • Principle of Economy of Mechanism – keep the design and code as simple as possible.

  • Principle of Separation of Privilege – ensure that no single entity has the keys to the kingdom. For example, limit root account usage.

  • Principle of Least Privilege – enforce need-to-know and need-to-access rules. Users and systems should only be granted access to a resource if they absolutely require it.

  • Principle of Complete Mediation – make sure that all access to any resource is always checked for authorization. This principle also echoes the concept of Zero Trust.

  • Principle of Fail-Safe Defaults – by default, all access should be denied and only granted when proper authorization is in place (a short sketch after this list illustrates this together with complete mediation).

  • Principle of Least Astonishment – make sure that the security controls are easy to use and will not cause your users to seek insecure alternatives.
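
A minimal sketch of how the fail-safe defaults and complete mediation principles can look in code. The permission table, role names, and Request shape below are hypothetical and exist only for illustration: access is denied unless a rule explicitly grants it, and every request is checked.

    from dataclasses import dataclass

    # Hypothetical permission table: action -> roles allowed to perform it.
    # Anything not listed is implicitly denied (fail-safe defaults).
    PERMISSIONS = {
        "articles:read": {"reader", "editor", "admin"},
        "articles:publish": {"editor", "admin"},
        "users:manage": {"admin"},
    }

    @dataclass
    class Request:
        user: str
        roles: set
        action: str  # e.g. "articles:publish"

    def is_authorized(request: Request) -> bool:
        """Check every request against the permission table (complete mediation).

        Access is granted only when the action is known and the caller holds
        at least one allowed role; every other case is denied by default.
        """
        allowed_roles = PERMISSIONS.get(request.action)
        if allowed_roles is None:  # unknown action -> deny
            return False
        return bool(request.roles & allowed_roles)

    # A reader may read articles but may not publish them.
    assert is_authorized(Request("alice", {"reader"}, "articles:read"))
    assert not is_authorized(Request("alice", {"reader"}, "articles:publish"))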

It can be useful to review a set of categories where vulnerabilities are most likely, to focus the security design activities.

  • Input and Data Validation. Does it make sense to use a centralized approach to input validation? What are the sources of untrusted input and how should they be validated? What is the role of client-side vs. server-side validation? Are the data formats well understood so that it will be easy to validate for type, format, length, and range? (A minimal sketch of centralized server-side validation appears after this list.)

  • Authentication. What can be done to protect credentials, tokens, and sessions? Is there expiration for each? What is the storage mechanism? How are they protected in transit and at rest?

  • Authorization. What will be done to support least privilege? Is authorization granular enough? Are authorization decisions pushed as close to resource access as possible? Are access permissions checked every time a request is made?

  • Sensitive Data. Are secrets being stored unnecessarily? Is all sensitive data encrypted in transit and at rest? Is all sensitive data flagged so that developers can take necessary precautions when writing code that accesses or processes it? Are access controls strong enough to protect sensitive data?

  • Cryptography. Are industry-standard encryption methods and key sizes being used? Is platform-provided key management being used? Are keys rotated regularly and stored securely? (A brief sketch using a vetted library appears after this list.)

  • Exception and Error Management. Are error messages and accessible log information generic enough to not help an attacker? Is sensitive data kept out of logs? Is error handling consistent and structured?

  • Auditing and Logging. Is potentially malicious behavior logged? Are all security decisions logged?

  • Attack Surface Reduction. What can be done to reduce avenues of attack or reduce the blast radius of a successful attack?
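
To make the input and data validation questions concrete, here is a minimal sketch of centralized, server-side validation. The field names and limits are hypothetical and exist only to show type, format, length, and range checks living in one place; client-side validation may duplicate these rules for usability, but the server-side checks remain authoritative.

    import re

    # One central definition of the constraints for untrusted input fields.
    USERNAME_RE = re.compile(r"^[A-Za-z0-9_]{3,32}$")

    def validate_comment_submission(form: dict) -> list:
        """Validate an untrusted form submission on the server.

        Returns a list of validation errors; an empty list means the input
        passed every type, format, length, and range check.
        """
        errors = []

        username = form.get("username", "")
        if not isinstance(username, str) or not USERNAME_RE.match(username):
            errors.append("username must be 3-32 letters, digits, or underscores")

        body = form.get("body", "")
        if not isinstance(body, str) or not (1 <= len(body) <= 5000):
            errors.append("body must be between 1 and 5000 characters")

        rating = form.get("rating")
        if not isinstance(rating, int) or not (1 <= rating <= 5):
            errors.append("rating must be an integer between 1 and 5")

        return errors

    # A well-formed submission produces no errors.
    assert validate_comment_submission(
        {"username": "jane_doe", "body": "Nice article.", "rating": 5}
    ) == []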
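
For the cryptography questions, the safest approach is usually a vetted library plus platform-managed keys rather than hand-written cipher code. As a rough sketch, the widely used Python cryptography package provides authenticated symmetric encryption; the package choice is only one example of an industry-standard building block, and in a real system the key would come from your platform's managed key service and be rotated on a schedule rather than generated inline.

    from cryptography.fernet import Fernet

    # Illustration only: in production the key comes from a managed key
    # service and is rotated regularly, not generated in application code.
    key = Fernet.generate_key()
    fernet = Fernet(key)

    # Authenticated encryption from a vetted library, not hand-rolled crypto.
    token = fernet.encrypt(b"unpublished draft: embargoed until Friday")
    assert fernet.decrypt(token) == b"unpublished draft: embargoed until Friday"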

Threat Model

Threat modeling is an engineering technique that you can use to help identify threats, attacks, vulnerabilities, and countermeasures that may be relevant to your application. 

Threat modeling helps identify (or confirm) the assets you need to protect, threats to those assets, attacks that can be leveraged to realize those threats, vulnerabilities that could allow those attacks, and mitigations that could be put into place to reduce the likelihood of the threats being realized. 

The key value of threat modeling is to identify risks and prioritize how to address them. A threat model can inform design, development, and testing strategies. Think of a threat model as a catalog of all the possible threats and vulnerabilities for your application; because it is tied to a prioritized set of assets, you can prioritize your approach to those threats and vulnerabilities as well. No organization has the resources necessary to treat every threat equally. Use a threat model to identify where it makes sense to apply effort.

Definitions:

  • Asset. An asset is a resource that has value; what counts as an asset varies by perspective. An asset might be the availability of information or the information itself, such as customer data, PII, or unpublished articles. It might be intangible, such as the organization’s reputation, or tangible, such as information systems. It might even be your employees and contractors. To an attacker, the ability to misuse your application to gain an advantage is also an asset.

  • Threat. A threat is an undesired event or potential occurrence, often best described as an effect that could damage or compromise an asset or objective. It may or may not be malicious in nature. A threat can be a natural disaster, a nation-state with unlimited offensive cyber capabilities, a criminal organization, or mischievous users.

  • Vulnerability. A vulnerability is a weakness in some aspect or feature of a system that allows a threat to be realized. Vulnerabilities can exist at the network, host, or application level and include operational practices. 

  • Attack. An attack is an action taken that uses one or more vulnerabilities to realize a threat. This could be someone following through on a threat or exploiting a vulnerability. 

  • Countermeasure. A countermeasure addresses a vulnerability to reduce the probability of attack or the impact of a threat. A countermeasure does not directly address a threat. Instead, it addresses the factors that define the threat. Countermeasures range from improving application design or improving code, to improving an operational practice. 

Steps:

  1. Review the Architecture. This can be as simple as reviewing a Theme document or an epic, or as in-depth as a full architecture review; it depends on the scope of what’s being reviewed as well as how familiar the team members are with the system under review. The goal is to dive deeply enough into the details that any missed assets are found and the existing mechanisms for protecting those assets are uncovered. Review:

    1. Technologies used

    2. Mitigations already in place

    3. Roles

    4. Use cases

    5. Functional specifications

    6. Data flow, including entry and exit points

    7. Trust boundaries. 

  2. Identify Assets. This often looks like a brainstorming exercise to think through all the types of data and services that are of value to the business or to an attacker. Developers can lead the process of enumerating all the assets in the system from the business point of view while the Security team can lead the process of enumerating assets from an attacker’s point of view. Prioritize these assets in terms of their importance to the business and the necessity of protecting them from an attacker. 

  3. Identify Threats. The fastest way to identify threats is to apply confidentiality, integrity, and availability (CIA) requirements to each asset. A threat is anything that negatively impacts those requirements. It is important to review roles and validate access restrictions for each role. Finally, consider any important business logic that could be subverted to an attacker’s goals. 

  4. Review Vulnerabilities and Attacks. This is a brainstorming exercise in which the team will look at the threats and think about how an attacker would try to achieve each of them. Security experience and expertise helps immensely in this step, so be sure to include a member of the security team. You can also use the vulnerabilities list in the Common Web Application Vulnerabilities document.

  5. Iterate Preconditions. For each vulnerability and attack, ask under what conditions the attack would succeed. For instance, to steal credentials off the wire, they must be transmitted in the clear over a line that an attacker can access.

  6. Identify Countermeasures. For each precondition, identify what can be done to block the related attack from occurring. This could include design mitigations, coding standards to apply to critical code, and testing that needs to be done to verify success. 

When this process is done, you will have documented your assets, threats, attacks, preconditions, and countermeasures. Because your assets are prioritized, you know which threats and countermeasures to focus on first.
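
A threat model can be recorded in whatever format the team prefers; as a rough sketch, the resulting catalog might look like the structure below. The specific assets, threats, preconditions, and countermeasures shown are illustrative examples, not a required template.

    from dataclasses import dataclass, field

    @dataclass
    class ThreatEntry:
        """One row of the threat model catalog produced by the steps above."""
        asset: str           # what is being protected
        asset_priority: int  # 1 = most important to the business
        threat: str          # undesired effect, framed in CIA terms
        attack: str          # how an attacker could realize the threat
        preconditions: list = field(default_factory=list)
        countermeasures: list = field(default_factory=list)

    catalog = [
        ThreatEntry(
            asset="User session tokens",
            asset_priority=1,
            threat="Confidentiality: tokens are stolen and replayed",
            attack="Interception of traffic between client and service",
            preconditions=["Tokens sent in the clear",
                           "Attacker can observe the network path"],
            countermeasures=["Enforce TLS everywhere",
                             "Short token lifetimes"],
        ),
        ThreatEntry(
            asset="Unpublished articles",
            asset_priority=2,
            threat="Integrity: draft content is modified before publication",
            attack="Authorization bypass on an editorial endpoint",
            preconditions=["Role checks missing on write endpoints"],
            countermeasures=["Check roles on every request",
                             "Audit log for all edits"],
        ),
    ]

    # Work through countermeasures in order of asset importance.
    for entry in sorted(catalog, key=lambda e: e.asset_priority):
        print(entry.asset, "->", entry.countermeasures)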

Security Architecture and Design Review

The purpose of a security architecture and design review is to expose the high-risk design decisions that have been made, review the effectiveness of asset protection strategies, and review the mitigations in place for known threats and vulnerabilities. It is important to use the existing security requirements and threat models to guide the discussion.

A thorough security architecture review will include the following areas:

  • Deployment and Infrastructure. How does this design interact with other network controls, such as a CDN or a WAF? How is it deployed in relation to other system components? What cloud infrastructure services and assets does it rely upon? Is there anything unique that this design requires compared to other similar system components? Is a secure and approved method used to store secrets? Does it integrate with third-party APIs or applications?

  • Security Concerns. Review the design against the areas where security mistakes are most often made, including authentication, authorization, input and data validation, error handling, sensitive data, cryptography, and logging. 

  • Tier by Tier. Review each tier of the solution: presentation, business, and data. Understand the control and data flow that is suggested by the design and how this will unfold for common use cases. What are the trust boundaries? What are the entry points?

Security Code Review

A properly conducted code review can achieve multiple security objectives.  To have an effective code review, you must first understand the security objectives of your design as well as patterns of insecure code and then review the code with a clear idea of what you are looking for. Use the security objectives and threat model to focus the code review on the types of problems that are most important. 

Security code reviews are most effective when the primary focus of the review is on security. Start by reviewing the code with the following focus areas in mind:

  • Common Vulnerabilities. Review a checklist of common vulnerabilities for your technology stack, development language, and application scenario.

  • Data Access. Look for improper storage of database connection strings and verify proper use of authentication to the database. Validate that the database restricts the application to only the data it needs.

  • Input and Data Validation. Is all user input considered untrusted? Is it validated server-side? Look for client-side validation that is not backed by server-side validation, poor validation techniques, and reliance on file names or other insecure mechanisms to make security decisions.

  • Authentication. Look for overlong sessions, exposure of tokens, or non-standard use of platform authentication mechanisms. In Arc, authentication should be performed through Okta or a bearer token. New authentication mechanisms or non-standard credential stores should be avoided.

  • Authorization. Look for failure to check roles and privileges when accessing back-end resources or endpoints, and other common authorization problems. Roles and privileges should be checked with every request to every service and any time a trust boundary is crossed.

  • Sensitive Data. Look for mismanagement of sensitive data, such as disclosing secrets in error messages, code, memory, unnecessary environment variables, files, or network traffic.

  • Hard-coded Secrets. Look for hard-coded secrets by examining the code for variable names such as “key”, “password”, “pwd”, “secret”, “hash”, and “salt”. Additionally, look for high-entropy strings such as "9P)Z=yD4+spq=", which may be hard-coded access tokens. (A rough sketch of such a scan appears after this list.)

  • Use of Cryptography. Check to ensure that approved managed services are used whenever possible. Otherwise, check for failure to clear secrets as well as improper use of the cryptography APIs themselves. Make sure that platform-provided cryptography routines are used; never write your own crypto.

  • Logging. Check to make sure security sensitive operations and logic are logged to a central location without writing sensitive information to log files.
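
As a rough sketch of what an automated first pass for hard-coded secrets might look like, the snippet below flags assignments to suspicious variable names and high-entropy string literals. The regular expressions and entropy threshold are assumptions chosen for illustration; dedicated secret-scanning tools are far more thorough.

    import math
    import re

    # Assignments to variable names that commonly hold secrets.
    SUSPICIOUS_ASSIGNMENT = re.compile(
        r'(key|password|pwd|secret|hash|salt)\s*=\s*["\']([^"\']+)["\']',
        re.IGNORECASE,
    )
    STRING_LITERAL = re.compile(r'["\']([^"\']{12,})["\']')

    def shannon_entropy(value: str) -> float:
        """Bits of entropy per character; random tokens score noticeably
        higher than ordinary words, which is why high-entropy literals
        deserve a closer look."""
        if not value:
            return 0.0
        counts = [value.count(ch) for ch in set(value)]
        return -sum((c / len(value)) * math.log2(c / len(value)) for c in counts)

    def find_candidate_secrets(source: str, threshold: float = 3.5) -> list:
        """Flag literals assigned to suspicious names or with high entropy."""
        findings = []
        for name, literal in SUSPICIOUS_ASSIGNMENT.findall(source):
            findings.append("suspicious assignment to %r: %r" % (name, literal))
        for literal in STRING_LITERAL.findall(source):
            if shannon_entropy(literal) >= threshold:
                findings.append("high-entropy literal: %r" % literal)
        return findings

    # Both checks fire on an obviously hard-coded credential.
    sample = 'api_key = "9P)Z=yD4+spqX7wLq2mRt8vN"\n'
    for finding in find_candidate_secrets(sample):
        print(finding)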