This guide was created by me, using many online resources, over a three-month period in order to pass my CISSP.
The content in this document was accurate for the CISSP exam as of November 2020.
Let’s go through the objectives as of Q1 2021:
Objectives for each CISSP domain
Security and Risk Management – 15%
Asset Security – 10%
Security Architecture and Engineering – 13%
Communication and Network security – 14%
Identity and Access Management – 13%
Security Assessment and Testing – 12%
Security Operations – 13%
Software Development Security – 10%
Security and Risk Management – 15%
Security Governance
Align security with the business and business strategy. A security leader serves as the subject matter expert on issues of the CIA triad but can also act as a business leader, understanding the business's needs, goals and objectives, both short-term and long-term.
Information security must align itself with the governance processes of the specific organisation, e.g. a committee of senior leaders may have oversight of information security and data governance functions, or there may be a dedicated risk management committee.
In publicly traded companies this is typically the board of directors. Security leaders must determine the best way to integrate information security into governance processes. Governing groups should be informed of any security incidents that take place and should review the results of audits.
Company acquisitions can be tricky; it may be difficult to ensure the company being bought is up to the same security standard as the acquiring organisation.
The chief information security officer, or CISO, usually leads a team of information security professionals. All members of the security team must follow the principle of due care, which says that security professionals must fulfil the legal responsibilities of the organisation as well as the professional standards of information security. They must exercise the reasonable level of care that would be expected of any security professional in their situation.
COBIT, the control framework published by ISACA, is built on five principles:
- Meeting stakeholder needs
- Covering the enterprise end to end
- Applying a single integrated framework
- Enabling a holistic approach
- Separating governance from management
The International Organisation for Standardisation also publishes a control framework, referred to as ISO 27001. This is a very commonly used standard.
NIST, the National Institute of Standards and Technology, publishes a document called Security and Privacy Controls for Federal Information Systems and Organizations, known as NIST Special Publication 800-53 or NIST 800-53. It is over 400 pages on how to build a security program for US government agencies and organisations, and it covers:
- the fundamentals of information security, talking about multitiered risk management, security control structures, baselines, and designations, the use of external service providers, and how to assess assurance and trustworthiness for information systems.
- the process of implementing security and privacy controls, talking about selecting an appropriate security control baseline, and then tailoring that baseline to the specific needs of an organisation, creating overlays and documenting the control selection process for both new development, and legacy systems.
Although most organisations don't follow them letter for letter, frameworks like ISO 27001 and NIST 800-53 provide useful tools for designing security controls for any organisation.
Compliance and Ethics
Security professionals are increasingly finding themselves becoming legal and regulatory compliance experts.
The main types of compliance obligations are:
- Criminal law - designed to deter people from taking actions that would be detrimental to society and to punish those who do take such actions.
- Civil law - designed to resolve disputes among individuals, organisations, and/or government agencies.
- Administrative law - allows for the effective operation of government by allowing executive branch agencies to promulgate regulations that facilitate carrying out their duties.
- Private regulations - these don't have the force of law on their own, but compliance is often required by contract, e.g. PCI DSS.
- Computer Fraud and Abuse Act (CFAA) - US criminal law covering unauthorised access to computer systems
- Electronic Communications Privacy Act (ECPA) - governs the interception and disclosure of electronic communications
For example, the Health Insurance Portability and Accountability Act, HIPAA, provides criminal and civil law governing the use of health information but doesn't go into great detail. The Centers for Medicare & Medicaid Services publishes security and privacy regulations that provide the specific requirements that covered entities must follow. Those security and privacy regulations are an example of administrative law.
Most of the laws related to information security fall into the categories of civil and administrative law; occasionally we cross paths with criminal law, as in cases of information and data theft and system intrusion.
Software License Agreements describe the terms of use for software that an organisation acquires. These agreements contain many different provisions that cover the circumstances of acceptable use. These may include:
- number and types of individuals who may use the software
- the amount of information that may be processed
- the locations where data may be processed
- the numbers of servers supporting the software
- whatever restrictions the software publisher chooses to include in the agreement
There are a series of legal mechanisms available to protect intellectual property. These include copyrights, trademarks, patents and trade secrets.
Copyrights protect creative works against theft. Information protected by copyright includes books, web content, magazines and other written works, as well as art, music, and even computer software.
Patents protect inventions providing the inventor with exclusive use of their invention for a period of time. The purpose of patents is to stimulate invention by ensuring inventors that others will not simply copy their ideas in the marketplace.
In the United States, the government uses a category of regulations known as export controls to restrict the flow of goods and information considered sensitive for military and scientific purposes, such as the International Traffic in Arms Regulations, or ITAR. ITAR covers defence articles such as firearms and munitions.
The Export Administration Regulations, or EAR apply to technology and information that's considered dual use - technology or information that has both military and commercial applications.
Security professionals have a range of responsibilities dictated by laws and regulations in the unfortunate event of a known or suspected data breach. Commonly, these include notifying law enforcement and government agencies, as well as notifying the individuals affected.
Encryption is an easy way to protect your organisation against data breaches; many breach notification laws include exemptions for encrypted data.
Security Policy
Most security professionals recognise a framework consisting of four different types of documents. Policies, standards, guidelines, and procedures
Business Continuity
- Core responsibility of the info sec professional – supports the security objective of availability.
- Business continuity efforts are a collection of activities designed to keep a business running in the event of an incident.
- What business activities will be covered by the plan?
- What types of systems will it cover?
- What types of controls will it consider?
Continuity planners use a tool known as a business impact assessment, or BIA, to help make these decisions.
- The BIA is a risk assessment that follows either a quantitative or qualitative process. It identifies the organisation's critical business processes and then traces those back to the critical IT systems that support them.
- Once the IT systems are identified, planners can identify the risks to those systems and conduct a risk assessment.
- The organisation must make decisions about control implementations that factor in cost.
Redundancy – systems are designed in such a way that the failure of a single component doesn't bring the entire system down, i.e. business can continue in the event of a single predictable failure because that single point of failure has been removed from the system.
Single point of failure analysis is an important part of an organisation's continuity of operations planning efforts. Other situations that might jeopardise business continuity are bankruptcy of key vendors.
Personnel succession planning identifies successors for key positions so that the business can continue when an employee in a key position leaves.
High availability – HA uses multiple systems to protect against failures (having redundancy)
Fault tolerance – FT helps protect a single system from failing in the first place by making it RESILIENT.
RAID – redundant array of inexpensive disks – is a way to increase fault tolerance. NOT A BACKUP STRATEGY.
Disk mirroring, RAID 1, stores the same data on two different disks so if one hard disk fails the other can be used with no downtime.
Personnel Security
People are the weakest link. Strong security policies that clearly outline expectations for individual behaviour, and the consequences of violating them, will help deal with this.
Personnel security programs should be built upon educating employees about these policies and each employee's role in protecting the enterprise.
Explicit procedures that describe how you will handle violations
Usually requires coordination between the cybersecurity team, managers throughout the organisation, the legal team, and HR.
Arm your staff with the knowledge that they need to protect themselves against both technical and non-technical risks, including social engineering attempts.
Background checks and monitoring, as well as data loss prevention technology can minimise the risk of an insider threat.
NDAs, where the employee agrees not to disclose any confidential information learned during the course of employment even after the employee leaves
Revocation of an employee's access rights and recovery of company assets needs to be timed well.
Employers can preserve their employees' privacy in many ways:
- Organisations should collect only the information that they need in the legitimate course of employment, and they should store that information only as long as it remains necessary for a valid business reason.
- organisations should limit access to sensitive information to those with a valid need to know.
- organisations should use encryption and data masking whenever possible
The use of social media can be a valuable business tool, but organisations must ensure that they consider and address the associated security risks. Some organisations have strict policies on their employees' social media presence.
Risk Assessment
Information security professionals need to prioritise their risk lists in order to spend resources where they will have the greatest security effect
Risk assessment is the process of identifying and triaging the risks facing an organisation based upon the likelihood of their occurrence and the expected impact they will have on the organisation's operations.
A threat is some external force that jeopardises the security of your information and systems.
A risk assessment ranks those risks by two factors: likelihood and impact
The likelihood of a risk is the probability that it will actually occur.
The impact of a risk is the amount of damage that will occur if a risk happens.
Qualitative techniques use subjective judgements to assess risks, typically categorising them as low, medium, or high on both the likelihood and impact scales.
Quantitative techniques use objective numeric ratings to assess likelihood and impact, usually in terms of dollars.
If we had a data centre valued at 20 million USD and expect that a flood would cause 50% damage to the facility, we compute the single loss expectancy (SLE) as asset value × exposure factor: 20 million USD × 0.5 = 10 million USD in damage, which is the impact of the risk.
- Annualised rate of occurrence (ARO) is the number of times each year that we expect the risk to occur. Using the flood example, planners could use a flood map to determine that the risk occurs once every 50 years, or 2% a year. This is the same as 0.02.
- Annualised loss expectancy, or ALE, incorporates both the likelihood and impact values: SLE x ARO = ALE. In the example we've used, the SLE was 10 million USD and the ARO was 0.02, meaning the ALE is 200,000 USD a year. This means that we should expect to lose this amount per year, but it is important to remember that in reality this cost won't occur each year. It will really cost 10 million USD every time a flood occurs, but since it's only expected to happen every 50 years, the average is 200,000 USD a year.
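The SLE/ARO/ALE arithmetic above can be sketched in a few lines; the figures are the hypothetical flood example from these notes:

```python
# Quantitative risk assessment: SLE, ARO and ALE.
asset_value = 20_000_000   # data centre valued at 20 million USD
exposure_factor = 0.5      # flood expected to cause 50% damage
aro = 0.02                 # once every 50 years = 2% chance per year

sle = asset_value * exposure_factor   # single loss expectancy
ale = sle * aro                       # annualised loss expectancy

print(f"SLE: ${sle:,.0f}")   # SLE: $10,000,000
print(f"ALE: ${ale:,.0f}")   # ALE: $200,000
```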
Quantitative techniques also help us assess our ability to restore IT services. Assets are either repairable or non-repairable; a non-repairable asset must be replaced when it fails.
Mean time to failure (MTTF) is the amount of time that we expect will pass before a non-repairable asset fails.
For repairable assets, mean time between failures (MTBF) is used instead: the expected time between one failure and the next, with repairs in between. Mean time to repair (MTTR) is the amount of time that an asset will be out of service for repair each time it fails. Together, MTBF and MTTR give a good idea of the expected downtime for an IT service or component.
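A minimal sketch of how MTBF and MTTR combine into an availability estimate; the hour figures are assumptions for illustration:

```python
# Availability from MTBF and MTTR (hypothetical figures).
mtbf_hours = 1000   # mean time between failures
mttr_hours = 8      # mean time to repair each failure

# Fraction of time the service is up: working time over total cycle time
availability = mtbf_hours / (mtbf_hours + mttr_hours)

# Expected downtime over a year of continuous operation
annual_downtime_hours = (1 - availability) * 365 * 24

print(f"Availability: {availability:.2%}")
print(f"Expected downtime per year: {annual_downtime_hours:.1f} hours")
```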
You can perform risk avoidance, risk transference, risk mitigation, risk acceptance, or risk deterrence.
- Avoid = change your business practice so that the risk cannot affect your business
- Transfer = shifts the impact of the risk to another organisation (insurance)
- Mitigation = reduce likelihood or impact of the risk
- Acceptance = Cost of performing another risk management action outweighs the benefit of controlling the risk
- Deterrence = take actions that dissuade a threat from exploiting a vulnerability (barbed wire fence for potential burglars)
Security controls are the procedures and mechanisms that an organisation puts in place to address security risks in some manner.
- Technical controls are carried out by technology, e.g. firewalls and encryption.
- Operational controls manage technology and are always carried out by individuals.
- Management controls are focused on the mechanics of the risk management process.
Controls can fail through a false positive or a false negative. Control assessments are designed to test the correct functioning and effectiveness of controls. Risk management frameworks provide proven, time-tested techniques for performing enterprise risk management.
Before beginning the process, the organisation should gather information from two categories. Architectural Description and Organisational Inputs.
STEP ONE of the risk management framework, categorises the information system being assessed, as well as the information that will be stored, processed, and transmitted by the system. This is normally done by performing an impact assessment.
STEP FOUR, the organisation performs a control assessment to determine whether the controls were correctly implemented and if they’re operating correctly.
Risk visibility and reporting techniques ensure that the results of these risk management processes are clearly documented and tracked over time. The risk register is a centralised document that tracks information about the nature and status of each risk facing the organisation. Usually it includes a description of each risk, a categorisation used to group the risks, the results of a risk assessment including the probability and impact of each risk, and a risk rating calculated by multiplying the probability and impact scores. It may also include risk management actions that are still in progress.
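A minimal sketch of a risk register as a data structure; the field names and example risks are assumptions, but the rating is probability × impact as described above:

```python
# Toy risk register: rating = probability x impact (both on a 1-5 scale).
risks = [
    {"description": "Data centre flood", "category": "environmental",
     "probability": 2, "impact": 5},
    {"description": "Phishing of staff", "category": "personnel",
     "probability": 4, "impact": 3},
]

# Compute the risk rating for each entry
for risk in risks:
    risk["rating"] = risk["probability"] * risk["impact"]

# Prioritised view: highest-rated risks first
register = sorted(risks, key=lambda r: r["rating"], reverse=True)

for risk in register:
    print(f'{risk["rating"]:>3}  {risk["description"]}')
```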
Threat Modelling
Threat modelling should prompt periodic reviews of the security infrastructure.
Asset Focus - Use asset inventory as the basis for analysis
Threat Focus – Identify how specific threats may affect each information system
Service Focus – Identify the impact of various threats on a specific service
Attack diagramming is useful for determining potential attacks.
Vendor Management
Manage vendor relationships throughout the supply chain in a way that protects confidentiality, integrity, and availability (CIA).
Vendor Management Life Cycle
Service-level requirements (SLRs) are proposed and turned into service-level agreements (SLAs).
Memorandum of understanding (MOU) documents the aspects of the relationship.
Business partnership agreements (BPAs) are used when two organisations agree to do business together.
Interconnection security agreements (ISAs) include details on the way that two organisations will interconnect their network systems and/or data.
Agreements put in place prior to beginning a new vendor relationship should contain clear language around data ownership and sharing, as well as protection.
In most cases a customer will want to ensure that the customer retains ownership of the information and that the vendor's right to use the information is carefully limited.
In addition, customers should ensure that the contract includes language that requires the vendor to securely delete all customer information within an acceptable period of time after the relationship ends.
Managed security service providers (MSSPs) provide security services for other organisations.
Awareness and Training
Security training programs help protect organisations against intentional or accidental missteps.
Security awareness is meant to remind employees about the security lessons that they've already learned.
While all users should receive some degree of security education, organisations should also customise training to meet specific role-based requirements.
One approach used by many organisations is to conduct initial training whenever an employee joins the organisation or assumes new job responsibilities, and then use annual refresher training to cover the same material and update users on new threats and controls.
Compliance programs ensure that an organisation's security controls are consistent with a variety of laws, regulations, and standards that govern organisations; for example, a university in the US would have very different requirements than a business in Europe.
There are three different types of compliance obligations that should be covered in an organisation’s security awareness and training program.
Laws are requirements passed by a governmental authority at the national or local level.
Regulations are mandatory requirements that an organisation must follow but are not embodied in law.
Standards are detailed, technical specifications for security and other controls. Organisations may be required to comply with standards by contract or regulation.
- These three compliance obligations just formalise the practices a business/ organisation should already be doing.
Security awareness measuring efforts don't need to be complicated. One easy way to measure the effectiveness of your program is simply to ask users how they feel about security education in a survey.
Questions like “how well do you think our organisation prepares you to deal with information security threats?” Or, “do you know your information security responsibilities?”
You should use the results of security awareness surveys to help select new training and awareness tools, answers to these questions over time can give a good perspective on the effectiveness.
Security awareness programs must continuously evolve as both business and security requirements change and new threats emerge. They should include:
- In-person and online classes
- Provide advanced, formal education for security practitioners.
- Remind employees on a routine basis.
Asset Security – 10%
How to protect your organisations data:
- Have clear policies and procedures surrounding the appropriate use of data and security controls that must be in place.
- Use encryption to protect sensitive information at rest and in transit.
- Use access controls to restrict access to information while it is stored on devices.
Big data is the use of datasets that are much larger than those handled by conventional data processing techniques.
Policies/ procedures should always meet the following key criteria:
- Policies provide the foundational authority for data security efforts, adding legitimacy to your work and providing a hammer, if needed, to ensure compliance.
- They offer clear expectations to everyone involved in data security by explaining what data must be protected and the controls that should be used to protect that data.
- They provide guidance on the appropriate paths to follow when requesting access to data for business purposes, and they offer an exception process for formally requesting policy exceptions when necessary to meet business requirements.
A few examples of these policies are data storage, data transmission, data classification, data lifecycle, and data retention policies.
- The data owner is a senior-level official who bears overall responsibility for that data. They set policies and guidelines around the use and security.
- Data stewards handle implementation of the high-level policies set by the data owner, and usually have a reporting relationship with the owner.
- Custodians or processors are the individuals who store and process the information. Usually IT staff due to their roles as administrators. They ensure appropriate data protections are in place, so they meet requirements set by the owner.
A system owner or asset owner is different from a data owner. Someone might physically own a system, but that does not mean they also own all the information stored or processed on that system.
The Generally Accepted Privacy Principles (GAPP)
Limited data collection is the most important way that an organisation can protect personal privacy.
Data Security Controls
Security baselines provide enterprises with an effective way to specify the minimum standards for computing systems and efficiently apply those standards across deployed devices regardless of their purpose, OS, or the data they handle. The baseline is deliberately generic so it can encompass all types of devices.
Something along the lines of: “the device does not jeopardise the confidentiality, integrity, or availability of other systems or the data those systems contain, and all activities on the device comply with the organisation's data security requirements.”
In addition to baseline requirements the organisation can often create specific standards for different OS, devices etc.
The Centre for Internet Security publishes a series of security benchmarks that represent the consensus opinions of a large number of subject matter experts.
An organisation will likely use a benchmark but modify it further for added security.
For example, an organisation might say that it will adopt the Centre for Internet Security benchmark for Windows Server 2012 R2, dated April 28, 2016, but change requirement 1.1.2 to set the password expiration period to 180 days instead of the standard's default 60-day expiration.
NTFS – Microsoft's file system – supports the following permissions:
Full control grants all permissions on a file or folder.
Read permission allows a user to read the contents of a file or list the contents of a folder
Read and Execute grants the same as Read but also allows users to traverse directories and execute application files.
Write permission allows a user to create files and folders and write data to those files and folders.
Modify is a combination of read, execute and write with the additional ability to delete files.
LINUX - Each file or folder belongs to both an individual user and a group.
Read permission - r
Write permission - w
Execute permission - x
User owner - u
Group owner - g
All other users – o
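The Linux permission letters above map onto permission bits that can be inspected with the standard library's `stat` module; this sketch decodes a hypothetical `rw-r-----` mode:

```python
# Decoding Linux permission bits with the stdlib stat module.
import stat

mode = 0o640  # rw-r----- : owner read/write, group read, others nothing

assert mode & stat.S_IRUSR          # user (u) can read
assert mode & stat.S_IWUSR          # user can write
assert mode & stat.S_IRGRP          # group (g) can read
assert not (mode & stat.S_IWGRP)    # group cannot write
assert not (mode & stat.S_IROTH)    # others (o) have no access

# Render the familiar ls-style string (S_IFREG marks a regular file)
print(stat.filemode(stat.S_IFREG | mode))  # -rw-r-----
```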
Hardware security modules, or HSMs, use dedicated hardware to perform encryption and decryption operations, and safely store encryption keys. The Trusted Platform Module, or TPM, is a specialised HSM found in many computer systems.
You should apply the same security controls to data stored in the cloud as you would to data stored in your own data centre.
Classification schemes vary, but all basically try to group information into high, medium, and low sensitivity levels, and differentiate between public and private information. The military uses the familiar top secret, secret, confidential, and unclassified scheme, while businesses use friendlier terms to accomplish the same goal. An organisation may designate security classification levels for systems and then only allow systems to process information at their security level or lower.
Darik's Boot and Nuke (DBAN) is a tool for securely wiping disks as part of a media disposal procedure.
Security Architecture and Engineering – 13%
Security Engineering
Different components should not communicate with each other unless absolutely necessary. For example, the segmentation of a network.
The Bell-LaPadula Model is designed to ensure that users of multilevel systems don't get access to information higher than their security clearance level. This model says no subject should be able to read an object at a level higher than the subject’s security clearance. (No read up). The model also says that a subject at one security level should not be able to write information to an object with a lower security level. This is to prevent security leaks. (No write down).
The Biba integrity model covers integrity instead of confidentiality. The subject should not be able to read an object at a security level lower than the subject’s clearance. (No read down). Also, the subject should not be able to write information to an object at a higher security level than his or her clearance. (No write up).
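The four rules above (no read up / no write down for Bell-LaPadula, no read down / no write up for Biba) can be sketched as simple comparisons on numeric clearance levels; the level names are illustrative:

```python
# Bell-LaPadula (confidentiality) vs Biba (integrity) access rules.
LEVELS = {"unclassified": 0, "confidential": 1, "secret": 2, "top secret": 3}

def blp_can_read(subject, obj):
    # Bell-LaPadula: no read up
    return LEVELS[subject] >= LEVELS[obj]

def blp_can_write(subject, obj):
    # Bell-LaPadula: no write down
    return LEVELS[subject] <= LEVELS[obj]

def biba_can_read(subject, obj):
    # Biba: no read down
    return LEVELS[subject] <= LEVELS[obj]

def biba_can_write(subject, obj):
    # Biba: no write up
    return LEVELS[subject] >= LEVELS[obj]

assert blp_can_read("secret", "confidential")        # reading down is fine
assert not blp_can_read("confidential", "secret")    # no read up
assert not blp_can_write("secret", "confidential")   # no write down
assert not biba_can_read("secret", "confidential")   # no read down
assert not biba_can_write("confidential", "secret")  # no write up
```

Note how the Biba rules are exactly the Bell-LaPadula rules inverted, which is a handy way to remember them for the exam.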
The National Security Agency (NSA) issued the Trusted Computer System Evaluation Criteria, abbreviated TCSEC, a book to help understand the security requirements and capabilities of different products. This book came to be known as the Orange Book.
The Orange Book is no longer used as it was replaced in 2005 by an international standard known as the Common Criteria.
Certification is the process of evaluating a technology product to determine that it meets a defined set of security requirements.
Accreditation, on the other hand, is a decision made after certification, and it is a specific decision as to whether a technology system may be used in a specific environment.
Virtualisation
In a type 1 hypervisor, also known as a bare-metal hypervisor, the hypervisor runs directly on top of the hardware and hosts guest operating systems on top of that. This is the most common form of virtualisation found in data centres. In a type 2 hypervisor, the physical machine runs an operating system of its own, and the hypervisor runs as a program on top of that operating system. In a VM escape attack, an attacker breaks out of a guest VM to attack the host system.
Public cloud computing uses a shared responsibility model, where the customer is responsible for some parts of security while the provider is responsible for others. In a Software as a Service (SaaS) model, the public cloud provider delivers an entire application to its customers. Customers of Infrastructure as a Service (IaaS) vendors purchase basic computing resources from vendors and piece them together to create customised IT solutions. In Platform as a Service (PaaS), vendors provide customers with a platform where they can run their own application code without worrying about server configuration; this is a middle ground between IaaS and SaaS.
Hardware Security
There are two major categories of memory: read-only memory, or ROM, and random-access memory, or RAM.
ROM has contents that are written permanently or semi-permanently to the physical memory chip.
RAM is shared memory used by all of the applications on a computer system; its contents are typically lost when power is removed. The OS must perform an important function called memory management and, more importantly, memory protection: it must enforce access rules, making sure that processes don't access portions of memory that don't belong to them.
Unauthorised requests may be innocent in nature, resulting from bugs in applications, or they may be more malicious attempts to undermine memory security.
Memory leaks occur when applications request memory from the operating system and don't fully release that memory when it is no longer needed (can be fixed through a reboot, not malicious, just annoying)
Sometimes systems allow inadvertent interfaces that weren't planned by software developers but may be exploited by malicious users to exfiltrate information from a sensitive system to the outside world. These unintended interfaces are known as covert channels. Covert channels provide a backdoor for communications into or out of a system.
- Covert storage channels work by placing data in an unexpected location where it may be read by another individual or system.
- Covert timing channels work by modifying some system resource in a pattern that may be detected by remote users.
Client and Server Vulnerabilities
Applets allow developers to shift the computational burden to the client, but they introduce security risks because the client must execute code from a potentially untrusted source.
Local caches can create security issues if a malicious attacker is able to create incorrect cache records. In an attack known as cache poisoning, an attacker inserts fake records in the DNS cache on a local computer which then redirects unsuspecting users of that computer to illegitimate websites.
Data flow control manages the transfer of information to and from servers
- Administrators must take steps to ensure that data flow does not become high enough in volume that it overwhelms the available bandwidth of either the server or the network. Failure to enforce data flow control in this manner can lead to a denial of service attack.
- System architects should carefully map out and understand how data flows within their systems, paying particular attention to sensitive information. By mapping out data flows, they can apply these controls with confidence that they are applying them to all of the systems that store, process, and transmit sensitive information.
There are two specific types of attack that database administrators should pay careful attention to, aggregation and inference.
Aggregation - occurs when an individual with a low-level security clearance is able to piece together facts available at that low level to determine a very sensitive piece of information that he or she should not have access to.
Inference - occurs when an individual can figure out sensitive information from the facts available to him or her.
Aggregation is pulling the data together, inference is when you use the information to come to a conclusion.
Most databases use SQL to store, retrieve and modify data.
NoSQL is a relatively new type of database. Many NoSQL databases are key-value stores, meaning the database has only two elements: a key that is used to identify and locate data in the database, and a value that is associated with that key.
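The key-value idea is simple enough to sketch in a few lines; this toy class (names are illustrative, not any real NoSQL API) shows the two elements described above:

```python
# Minimal key-value store sketch: the core idea behind many NoSQL databases.
class KeyValueStore:
    def __init__(self):
        self._data = {}  # backing dict: key -> value

    def put(self, key, value):
        # Store any value under a key that identifies and locates it
        self._data[key] = value

    def get(self, key, default=None):
        # Retrieve by key; no joins, no query language
        return self._data.get(key, default)

store = KeyValueStore()
store.put("user:1001", {"name": "Alice", "role": "analyst"})
print(store.get("user:1001"))
```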
Grid computing builds a virtual supercomputer by assembling the unused resources of many individual computers at different locations to work on a single problem.
Peer-to-peer computing, or P2P computing, is another example of distributed computing where many different computers band together to provide a service. No centralised controller.
Web Security
- injection attacks (SQL injection) – insert unwanted transaction code to access databases
- broken authentication – exploits broken authentication method making it possible to hijack a session
- sensitive data exposure – insecure web app discloses confidential information
- XML external entities attacks – poorly configured XML processor may allow remote code execution
- broken access control – allows unauthorised access
- security misconfiguration – an error in any setting on a router, server, or in a data centre can allow malicious activity to occur
- cross-site scripting (XSS) – inserts malicious scripts into websites
- insecure deserialization – Allows API exploitation
- using components with known vulnerabilities
- insufficient logging and monitoring
Cross-Site Scripting (XSS) – an attacker embeds malicious scripts in a third-party website that are later run by visitors. This can be stopped by using input validation to remove script code from any input.
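Alongside input validation, output encoding neutralises script content before it reaches the browser. A minimal sketch using the standard library's `html.escape`:

```python
# Neutralising a script payload with stdlib output encoding.
from html import escape

user_comment = '<script>alert("stolen cookie")</script>'

# escape() converts <, >, & and quotes into HTML entities, so the
# browser renders the text instead of executing it as a script.
safe_comment = escape(user_comment)

print(safe_comment)
# &lt;script&gt;alert(&quot;stolen cookie&quot;)&lt;/script&gt;
```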
Cross-site request forgery – some people call it CSRF, others use XSRF. Similar to XSS, but these attacks go a step further and prey upon the fact that users often have multiple sites open at the same time and may be logged into many different sites in different browser tabs.
SQL:
- Semicolons Separate SQL Statements
- Two Dashes Indicate Comments
- The single quote (') is important: it terminates string literals, so an unescaped quote in user input can break out of a query
- Input validation protects against unsafe user input by checking it on the server before executing commands
- Parameterised SQL precompiles SQL code on the database server to prevent user input from altering query structure
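The parameterisation point can be sketched with Python's built-in sqlite3 module; the users table and credentials here are illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def login(username: str, password: str):
    # The ? placeholders are filled in by the driver AFTER the query
    # is parsed, so user input can never change the query structure.
    cur = conn.execute(
        "SELECT username FROM users WHERE username = ? AND password = ?",
        (username, password),
    )
    return cur.fetchone()

row = login("alice", "s3cret")          # returns ('alice',)
attack = login("alice", "' OR '1'='1")  # injection string fails: returns None
```

Had the query been built by string concatenation instead, the `' OR '1'='1` input would have rewritten the WHERE clause and bypassed the password check.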
Defending against cross-site request forgery is difficult and often requires rearchitecting web applications to use cryptographically strong tokens in each exchange between authenticated users and a website. Other measures include preventing the use of HTTP GET or automatically logging users out after a short idle period.
Fuzz testing, or fuzzing, is a very important software security testing technique. It provides many different types of valid and invalid input to software in an attempt to make it enter an unpredictable state or disclose confidential information. Zed Attack Proxy, or ZAP, is an application that can be used for fuzz testing. zzuf is another program used.
Session hijacking attacks pick up users' session cookies and compare values to work out the similarities between them. The attacker can then manipulate their request header to make the website think the attacker is someone else who has previously logged in, and it will log them in accordingly.
Mobile Security
- Applications that allow access to data or resources should require authentication. Credential management practices around these should be the same as any other sensitive resource access i.e. strong passwords.
- Transitive trusts = Ties application authentication back to the organisation’s central authentication services.
- Administrators must be sure that each app’s data use meets the organisations security policies.
- Every organisation, whether it intends to allow BYOD, require BYOD, or ban personally owned devices entirely, should have a clear, written policy on the matter that is known to all employees.
Smart Device Security
Industrial control systems (ICS), are the devices and systems that control industrial production and operation. They include systems that monitor electrical, gas, water, and other utility infrastructure and production operations. These can be a great target for hackers, especially as they’re traditionally not as well protected as other computing infrastructure.
ICS types
- Supervisory control and data acquisition – SCADA
Usually report back to control system. Remote monitoring, remote telemetry.
- Distributed control systems – DCS
Focused on controlling processes like water and waste. Uses sensors and feedback systems to change its controls.
- Programmable logic controllers – PLCs
Handle specialised input and output like temperature or vibration. Ensure uninterrupted processing. They connect to human machine interfaces so humans can monitor them.
The most important thing you can do to secure a smart device is to keep it updated.
Organisations that must run vulnerable systems may turn to security wrappers as an alternative approach. In this approach, the device is not directly accessible over the network, but instead is reached through a wrapper system that monitors input and output for security issues and only passes through vetted requests from network systems.
Network segmentation places untrusted devices on a network of their own, where they have no access to trusted systems. Same concept as a DMZ.
Encryption
Cryptography is the use of mathematical algorithms to transform information into a form that is unreadable by unauthorised individuals.
Ciphertext is the unreadable version of plaintext that has already been encrypted.
Decryption algorithms are used to transform ciphertext back into plaintext format.
Symmetric encryption algorithms are also known as shared secret encryption algorithms; the encryption and decryption operations use the same key.
Asymmetric encryption algorithms use different keys for encryption and decryption. They're also known as public key encryption algorithms, and they use the concept of a keypair
In asymmetric cryptography, anything that is encrypted with one key from the pair can be decrypted with the other key from that pair.
Asymmetric cryptography is slower than symmetric cryptography.
Goals
Digital signatures provide nonrepudiation.
A code is a system that substitutes one meaningful word or phrase for another.
Ciphers are systems that use mathematical algorithms to encrypt and decrypt messages. All cryptographic algorithms are examples of ciphers
Ciphers have two different ways of processing a message.
- Stream ciphers work on one character of the message at a time.
- Block ciphers work on chunks of the message, known as blocks, at the same time.
- Substitution ciphers actually change the characters in a message, for example by shifting all of the letters one place down the alphabet.
- Transposition ciphers don't change the characters in a message, but instead rearrange them, i.e. scrambling.
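The two approaches can be sketched in a few lines of Python; both functions are illustrative toys, not real ciphers:

```python
def caesar(text: str, shift: int) -> str:
    # Substitution: each letter is replaced by the letter `shift`
    # positions down the alphabet (wrapping around at Z).
    return "".join(
        chr((ord(c) - ord("A") + shift) % 26 + ord("A")) if c.isalpha() else c
        for c in text.upper()
    )

def simple_transpose(text: str) -> str:
    # Transposition: the same characters, just rearranged
    # (even-indexed positions first, then odd-indexed ones).
    return text[::2] + text[1::2]

caesar("HELLO", 1)        # "IFMMP" -- characters changed
simple_transpose("SECRET")  # "SCEERT" -- characters scrambled, not changed
```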
Proprietary encryption algorithms are a red flag, details on new encryption algorithms are normally published and open for inspection by the community.
Longer key length, the more secure your information will be but encrypting and decrypting will be slower.
When you choose your encryption approach, you'll need to perform your own cost-benefit analysis and select a key length that balances your security goals with the speed of encryption and decryption.
One-Time Pad is an example of an unbreakable cipher. Sender and receiver have identical pads. The pad must be as long as the total of the characters of all of the messages that they will exchange. They are unbreakable because they are totally random. The issue with this is that the distribution of the pad is difficult.
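A one-time pad is just XOR with a truly random pad; a minimal sketch using Python's secrets module:

```python
import secrets

def otp(message: bytes, pad: bytes) -> bytes:
    # The pad must be truly random, kept secret, used only once,
    # and at least as long as the message.
    assert len(pad) >= len(message)
    return bytes(m ^ p for m, p in zip(message, pad))

# XOR is its own inverse, so the same function decrypts.
pad = secrets.token_bytes(32)
ciphertext = otp(b"ATTACK AT DAWN", pad)
plaintext = otp(ciphertext, pad)
```

The unbreakability comes from the randomness: for any candidate plaintext of the same length, some pad exists that would produce the observed ciphertext, so the attacker learns nothing.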
The Cryptographic Life Cycle: as algorithms age they often become insecure as flaws are discovered; a life cycle approach ensures they are phased out when they become insecure.
NIST offers a five-stage lifecycle.
- Phase one is initiation - the organisation realises that it needs a new cryptographic system and gathers the requirements for that system. An example of these objectives might be to protect the integrity of keys, and use digital signatures etc.
- During phase two, the organisation develops, or more likely acquires, the cryptographic system and finds an appropriate combination of software, hardware, algorithms, and keys that meets their security objectives.
- Phase three is implementation and assessment, where the organisation configures the system for use and assesses whether it properly meets its security objectives.
- Phase four is operations and maintenance. During this phase, the organisation ensures the continued secure operation of the cryptosystem.
- During the final phase, when the system is no longer viable for continued long-term use, the organisation stops use of the system and destroys or archives sensitive material, such as the keys used with the system.
Digital rights management, or DRM software mechanisms, provide content owners with the technical ability to prevent the unauthorised use of their content. Spotify uses this.
Symmetric Cryptography
Data Encryption Standard (DES), designed by IBM for the federal government, is no longer safe.
- It takes 64 bits of plain text as input and runs it through an encryption operation known as the Feistel function.
- It uses the function 16 times to produce its cipher text.
- Each of these F boxes that implements the Feistel function takes half a block of input, or 32 bits, and combines it with a piece of the 56-bit encryption key.
- Then the output of that function is broken up into eight segments and fed into eight different functions called S boxes, s stands for substitution, and each one of these boxes contains a different substitution cipher.
- The results of all of those substitutions are then combined back together again and fed into a P box, P stands for permutation, which is just another term for transposition. So the output of all of those S boxes is scrambled up to produce the cipher text.
It is a block cipher that works on 64-bit blocks using a 56-bit key.
3DES, triple DES, literally the plaintext is sent through three rounds of DES encryption to produce a much stronger cipher text.
There are three options when using 3DES, using 3 different keys for each round of encryption, or having the first and third key the same while the second key is different, or having all the keys the same.
3 different keys are equivalent to a key strength of 112 bits.
If keys 1 and 3 are the same the key strength is only 80 bits.
If all keys are the same, it is the same strength as DES.
Using DES twice, not three times, is subject to a meet-in-the-middle attack, making it equivalent to single DES.
Triple DES (3DES) is still used today and considered secure when using different keys for each cycle.
Advanced Encryption Standard
- Created to replace DES
- Uses a combination of substitution and transposition functions to achieve strong encryption, much like DES
- AES is a symmetric cipher, and it is a block cipher that works on 128-bit blocks.
- AES allows for three different key lengths. The user can choose between a 128-bit key, a 192-bit key, or a 256-bit key.
- All three of these options are considered secure today.
🐡Blowfish
- Symmetric algorithm
- Developed by cryptography expert Bruce Schneier in 1993 as a potential replacement for the DES
- Works on blocks of 64 bits, using any key length you choose between 32 and 448 bits.
- Blowfish is no longer considered secure because there are known attacks against some weak encryption keys when used with Blowfish.
🐟🐟Two fish
- Twofish is a symmetric encryption algorithm that works on blocks of 128 bits, using key lengths of 128, 192, or 256 bits.
- Twofish is still considered secure for use today.
- Relies on a Feistel network for secrecy that combines substitution and transposition. Much like DES.
RC4 is a symmetric stream cipher that was widely used to encrypt network communications.
RC4 stream cipher works by creating a stream of bits to use as the encryption key. This stream has many of the qualities of a random string, but it is not quite random because it is initialised using a selected encryption key. This makes it possible for both the sender and recipient of a message to use the same key to generate the same key stream.
- RC4 is a symmetric encryption algorithm.
- It is a stream cipher and allows a variable length key between 40 and 2,048 bits.
- RC4 is no longer considered secure for use on modern networks.
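Since RC4's keystream generation is simple enough to show in a few lines, here is a minimal educational implementation of the key-scheduling and pseudo-random generation steps described above (for study only, never for real traffic):

```python
def rc4(key: bytes, data: bytes) -> bytes:
    # Key-scheduling algorithm (KSA): initialise S, then scramble
    # it using the bytes of the chosen encryption key.
    S = list(range(256))
    j = 0
    for i in range(256):
        j = (j + S[i] + key[i % len(key)]) % 256
        S[i], S[j] = S[j], S[i]
    # Pseudo-random generation algorithm (PRGA): emit one keystream
    # byte per data byte and XOR them together.
    out = bytearray()
    i = j = 0
    for byte in data:
        i = (i + 1) % 256
        j = (j + S[i]) % 256
        S[i], S[j] = S[j], S[i]
        out.append(byte ^ S[(S[i] + S[j]) % 256])
    return bytes(out)
```

Because sender and recipient derive the same keystream from the same key, calling the function a second time on the ciphertext decrypts it.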
Steganography is the process of hiding information within another file, so that it is not visible to the naked eye.
Asymmetric Cryptography
Rivest-Shamir-Adleman (RSA) – created in 1977 - Asymmetric cryptography solves issues of scalability, by giving each user a pair of keys for use in encryption and decryption operations.
RSA is fairly slow, which means it's not usually used for the full communication, just the initial key exchange.
Keys can be variable length, normally between 1024 and 4096 bits. These size keys are still considered secure.
- When a new user wants to use RSA cryptography to communicate with others, he or she creates a new key pair. There's a lot of complex math involved in creating that key pair, but the underlying principle that you need to understand, is that the user selects two very large prime numbers that are used to create the encryption keys.
- As with any asymmetric algorithm, the user is then responsible for keeping the private key secure and distributing the public key to other people with whom he or she wishes to communicate.
- When a user wants to send an encrypted message to another user with the RSA algorithm, the sender encrypts the message with the recipient’s public key.
- When someone receives an RSA encrypted message, the recipient decrypts that message with his or her own private key.
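The encrypt-with-public, decrypt-with-private relationship can be sketched with textbook RSA and deliberately tiny primes (the classic worked example with p=61, q=53); real keys are 1,024 to 4,096 bits and use padding schemes on top of this:

```python
# Toy RSA with deliberately tiny primes -- illustration only.
p, q = 61, 53
n = p * q            # modulus, part of both keys
phi = (p - 1) * (q - 1)
e = 17               # public exponent, coprime with phi
d = pow(e, -1, phi)  # private exponent, modular inverse of e mod phi

def rsa_encrypt(m: int) -> int:
    return pow(m, e, n)   # sender uses the recipient's public key (e, n)

def rsa_decrypt(c: int) -> int:
    return pow(c, d, n)   # recipient uses their own private key (d, n)
```

The security rests on the difficulty of factoring n back into p and q; with numbers this small that is trivial, which is why real keys are so much larger.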
PGP and GnuPG
Pretty Good Privacy (PGP) is a commercial product and GnuPG is freely available.
- The sender of a message has the original plain text and then generates a random, symmetric encryption key.
- Next, the sender encrypts the message using that random symmetric key, and then encrypts the random key using the recipient’s public key.
- The sender then transmits the encrypted message, which is a combination of the encrypted data, and the encrypted random key.
- When the recipient receives the encrypted message, he or she performs the decryption process.
- First, the recipient decrypts the encrypted random key using the recipient’s private key. This produces the random key created by the sender.
- Next, the recipient uses that random key to decrypt the encrypted message and retrieve the original message.
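The sender/recipient steps above can be sketched with standard-library Python. The XOR keystream here is a toy stand-in for the real symmetric cipher, and the asymmetric key-wrapping step is only indicated in a comment:

```python
import hashlib
import secrets

def keystream_xor(key: bytes, data: bytes) -> bytes:
    # Toy symmetric cipher: XOR with a SHA-256-derived keystream.
    # A real implementation would use AES; this just shows the flow.
    stream = b""
    counter = 0
    while len(stream) < len(data):
        stream += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(d ^ s for d, s in zip(data, stream))

# Sender: generate a random session key, encrypt the message with it.
session_key = secrets.token_bytes(16)
encrypted_message = keystream_xor(session_key, b"Meet at noon")
# ...the session key would then be encrypted with the recipient's
# public key and sent alongside the encrypted message...

# Recipient: unwrap the session key with their private key (omitted),
# then use it to decrypt the message.
decrypted = keystream_xor(session_key, encrypted_message)
```

This hybrid approach gets the speed of symmetric encryption for the message body while only the short session key needs the slow asymmetric operation.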
Elliptic Curve Cryptography
ECC does not depend upon the prime factorisation problem that RSA and many other asymmetric algorithms use. It uses a completely different problem known as the elliptic curve discrete logarithm problem.
Quantum computing may one day be able to defeat these cryptographic algorithms. Elliptic curve cryptography is even more susceptible to quantum attacks.
Key Management
- Out-of-Band Key Exchange – exchange the key in some way that they both trust that uses a different communications channel.
- The Diffie-Hellman key exchange algorithm solves the problem of key exchange for symmetric algorithms.
- The real algorithm uses numbers, represented by the variables P and G.
P must be a prime number.
ECC uses a similar approach.
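The exchange can be sketched with toy numbers; the P and G values here are illustrative stand-ins for the large published primes real deployments use:

```python
import secrets

# Toy parameters: real deployments use 2048-bit-plus primes.
P = 23  # shared prime (public)
G = 5   # shared generator (public)

# Each party picks a private value and publishes G^private mod P.
alice_private = secrets.randbelow(P - 2) + 1
bob_private = secrets.randbelow(P - 2) + 1
alice_public = pow(G, alice_private, P)
bob_public = pow(G, bob_private, P)

# Each side combines its own private value with the other side's
# public value, and both arrive at the same shared secret.
alice_secret = pow(bob_public, alice_private, P)
bob_secret = pow(alice_public, bob_private, P)
```

An eavesdropper sees only P, G, and the two public values; recovering either private value from them is the discrete logarithm problem.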
The idea behind key stretching is that an algorithm takes a relatively insecure value, such as a password, and manipulates it in a way that makes it stronger and more resilient to threats like dictionary attacks. (Salting and hashing)
- PBKDF2, password-based key derivation function v2 is one algorithm used to perform key stretching.
- BCRYPT is a similar algorithm but based upon the Blowfish cipher.
- Most security professionals recommend that anyone using this function repeat the salt/hash process at least 4,000 times, if not more.
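Python's standard library exposes PBKDF2 directly; this sketch assumes SHA-256 and an illustrative iteration count of 100,000:

```python
import hashlib
import os

def stretch(password, salt=None):
    # A random salt defeats precomputed (rainbow table) attacks;
    # the iteration count slows down each guess in a dictionary attack.
    salt = salt or os.urandom(16)
    key = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)
    return salt, key
```

Storing the salt alongside the derived key lets you repeat the same computation at login time and compare the results.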
Public Key Infrastructure
The Web of Trust uses digital signatures to vouch for the public keys of individuals. Every participant signs the public keys of everyone they know once they verify that the public key belongs to that person.
Everyone in the system can then build a list of the people they trust to vouch for others. If this web becomes large enough, there is a reasonable expectation that indirect trust relationships will allow most people to communicate with most other people.
The web of trust is decentralised, which never made it popular with businesses and people outside of the technical community: it requires technical knowledge to manage and there is a high barrier to entry for newcomers.
Public Key Infrastructure, or PKI, solves many issues associated with the web of trust. It relies on the trust participants have in highly trusted centralised service providers.
The providers are known as certificate authorities.
Certificate Authorities verify the identity of individuals and organisations and then issue those individuals and organisations digital certificates vouching that the public key associated with them actually belongs to them.
- When you wish to obtain a digital certificate, you approach a certificate authority. The CA will ask you to prove your identity following different standards for individuals and organizations.
- This may simply involve verifying ownership of a domain name or may be more rigorous and may involve physical proof of identity, depending on the type of certificate that you are trying to obtain.
- The CA will then create a digital certificate for you, signed by them, and provided to you with your public encryption key through a secure channel.
Hash is a one-way function that transforms a variable length input into a unique, fixed-length output // Message Digest is another term for Hash.
Message Digest 5 / MD5 – replaced MD4 after it was found to be insecure. MD5 produces a 128-bit hash. MD5 is no longer considered secure and should not be used.
Secure Hash Algorithm / SHA or SHA-1 – produces a 160-bit hash and is also no longer considered secure for digital signatures, although HMAC-SHA1 is still considered acceptable because HMAC does not rely on collision resistance.
SHA-2 – produces 224-, 256-, 384-, and 512-bit hashes. It is still secure, but some people do not trust SHA-2 because it was developed by the US government.
The Keccak algorithm was standardised by NIST as SHA-3 in 2015.
RIPEMD / RACE Integrity Primitives Evaluation Message Digest – 128, 160, 256, 320-bit outputs. The 128-bit version is no longer secure, but the other bit outputs are and are still used today.
Hash-based message authentication code / HMAC combines symmetric cryptography with hashes to provide authentication and integrity for messages. When using HMAC, the sender of a message provides a secret key that is used in conjunction with the hash function to create a message authentication code. The recipient of the message can then repeat the process with the same secret key to verify the authenticity and integrity of the message.
Hash functions are also used in conjunction with asymmetric cryptography for both digital signatures and technologies that depend upon digital signatures, such as digital certificates.
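Both hashing and HMAC are available in Python's standard library; a minimal sketch, with an illustrative shared key:

```python
import hashlib
import hmac

message = b"transfer 100 to account 42"
digest = hashlib.sha256(message).hexdigest()  # fixed-length fingerprint

# HMAC: anyone holding the shared secret key can recompute the tag,
# verifying both integrity and authenticity of the message.
key = b"shared-secret"
tag = hmac.new(key, message, hashlib.sha256).hexdigest()

def verify(key, message, tag):
    expected = hmac.new(key, message, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)  # constant-time compare
```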
- Authentication - the person owning the public key used to sign the message is actually the person who created the message.
- Integrity - the message was not altered after it was digitally signed by the creator.
- Non-repudiation - the recipient could prove these facts to a third party if necessary.
Digital signatures depend upon hash functions that are collision-resistant, and that anything encrypted with one key from an asymmetric key pair can only be decrypted with the other key from the same pair (asymmetric).
Digitally signing a message does not provide confidentiality for the message.
The process for creating digital certificates follows the X.509 standard created by the International Telecommunications Union, or ITU.
HOW TO CREATE A DIGITAL CERTIFICATE
The security of digital certificates depends upon the security of the private key associated with that certificate.
If the certificate owner’s private key is compromised, they will need to revoke the digital certificate.
The original approach is the certificate revocation list, or CRL: before trusting a certificate, a user verifies that its serial number is not on the CA's CRL. The issue with this was the bandwidth needed to download and update the CRLs of every CA.
The online certificate status protocol, or OCSP – anyone about to use a certificate sends a request to the CA to verify that the cert is still valid.
Google Chrome uses its own proprietary approach for verifying certificates.
Cryptanalytic Attacks
Brute-force attacks guess at the encryption key until they stumble across the correct one. This is not feasible against modern encryption unless the algorithm has a limited keyspace (a weak algorithm).
Frequency analysis attack - the person trying to break the cipher does statistical analysis of the cipher text to try to detect patterns.
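A minimal sketch of frequency analysis against a Caesar-style cipher, assuming the plaintext is ordinary English in which E dominates:

```python
from collections import Counter

def crack_caesar(ciphertext: str) -> int:
    # Assume the most frequent ciphertext letter corresponds to E,
    # the most common letter in English text; the distance between
    # them reveals the shift.
    letters = [c for c in ciphertext.upper() if c.isalpha()]
    most_common = Counter(letters).most_common(1)[0][0]
    return (ord(most_common) - ord("E")) % 26

# "THREE GEESE FLEE THE KEEPER" shifted by 3 becomes:
shift = crack_caesar("WKUHH JHHVH IOHH WKH NHHSHU")  # recovers 3
```

The attack needs no key at all, which is why simple substitution ciphers are unusable for real security.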
A known-plaintext attack is one where the attacker has both the plaintext and the corresponding ciphertext of a message, and uses that knowledge to try to deduce the decryption key used for other messages.
In a chosen-plaintext attack, the attacker can encrypt plaintexts of their own choosing and study the resulting ciphertext to learn how the algorithm works in greater detail and attempt to deduce the key being used.
Physical Security
Intermediate distribution facilities help extend the network throughout an organisation's physical plant.
If evidence handled during a cyber-attack investigation may be used in court, investigators must document and preserve the chain of custody, ensuring that evidence is not tampered with while in their hands.
Security professionals should perform an inventory of all sensitive locations under their control and conduct physical security assessments of those facilities on a regular basis.
The current standards for data centre cooling come from the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE).
Humidity is as important as temperature.
Every piece of electronic equipment generates electromagnetic radiation, and this poses two risks: EMI can interfere with a system, causing it to malfunction, and if an attacker can capture the EMI emanations from a facility they may be able to reconstruct the keystrokes or other activity that generated the signals. This may contain sensitive data.
Deterrent controls are designed to deter unauthorized activity. They’re designed to show someone that they will likely get caught and there will be a consequence.
Deter/prevent/detect (DPD) is one way to group controls; another way is technical and administrative.
Preventive controls are designed to actually block an intruder from successfully penetrating the physical security of a facility. Example: biometric reader at a door.
Detective controls are designed to pick up where preventive controls leave off. They alert security staff to any successful violation of physical security, or an attempt to violate security that is in progress. Example: security cameras.
Technical controls, uses technology of some kind to deter, prevent or detect physical security violations. Example: alarm system (uses motion detection and sensors).
Administrative controls do not use technology. Rather, they are business processes designed to bolster physical security. Example: background checks on the guards who monitor access to secured areas to ensure that the guards are trustworthy.
Good approaches to physical security combine both technical and administrative controls.
Compensating controls are designed to fill a known gap in a security environment. Example: barbed wire fence surrounding a facility but has one gate in the fence with a turnstile that allows authorized individuals access. Since the turnstile is the weak spot an organisation may want to place a guard at the gate as a compensating control.
Communication and Network security – 14%
TCP/IP Networking
TCP Flags
- SYN: opens a connection
- FIN: closes a connection
- ACK: acknowledges a SYN or FIN
UDP has no handshake as it's connectionless and lightweight.
OSI Model. I remember the layers, from top to bottom, with: All People Seem To Need Data Processing (Application, Presentation, Session, Transport, Network, Data Link, Physical).
Like the OSI model, the TCP model uses layers to describe different parts of a network communication, but it does so using fewer layers. The physical layer and data link layer of the OSI model are replaced by a single network interface layer in the TCP model. The OSI's network layer is simply renamed as the internet layer in the TCP model, while the OSI's transport layer retains the same name in the TCP model.
At the top of the stack, the OSI model's session layer, presentation layer, and application layer are combined into a single application layer in the TCP model.
From top to bottom, the TCP model's four layers therefore combine 3, 1, 1, and 2 of the OSI model's layers respectively.
Application, transport, internet, network interface are the names, from top to bottom.
IPv4 addresses are 32 bits (4 groups of 8 bits; 2^8 = 256 possible values per group).
IPV6 uses 128 bits. 8 groups of 4 hexadecimal numbers.
Ports start from 0 and go all the way up to 65,535.
0 – 1,023 are the well-known ports. - 1,024 – 49,151 are registered ports - 49,152 + are dynamic ports.
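The address and port facts above can be checked with Python's standard ipaddress module; the port-range names here are just illustrative constants:

```python
import ipaddress

# IPv4: 32 bits, four 8-bit groups; 2**8 = 256 values per group.
addr = ipaddress.ip_address("192.168.1.10")
net = ipaddress.ip_network("192.168.1.0/24")  # /24 leaves 8 host bits

# IPv6: 128 bits, eight groups of four hexadecimal digits.
v6 = ipaddress.ip_address("2001:db8::1")

# The three port ranges (names are illustrative constants):
WELL_KNOWN = range(0, 1024)       # assigned to common services
REGISTERED = range(1024, 49152)   # registered by vendors
DYNAMIC = range(49152, 65536)     # ephemeral client-side ports
```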
Supervisory control and data acquisition (SCADA) is a control system architecture.
DNP3 facilitates communications between devices using three distinct components.
The purpose of these systems is to allow the collection of data from intelligent electronic devices that are located at a series of remote substations, and to transmit control commands to those devices
Each one of these remote substations has a remote terminal unit, or RTU, that provides connectivity for all of the intelligent electronic devices at that substation. This data then needs to travel over communications links. These communication links are sometimes very low-speed connections that are used to reach remote sites.
They could be wired connections, even using dial-up modems, copper wires, or fibre optics. They also might make use of radio communications over RF frequencies, microwaves, or using spread spectrum technology.
Data travels over those communication links to and from the final component of the DNP3 network, the SCADA master station. This is the centralized control point that collects data from the intelligent electronic devices and transmit control commands back to the remote sites. The centralised control point may serve as the point where administrators actually control the system, or those administrators may use a set of external control points to manipulate the SCADA system.
ICMP functions include destination unreachable, redirects, time exceeded, and address mask requests and replies.
If you compare DNP3 to the OSI model, you'll find that it covers the entire range of the OSI stack, from the physical communication links of layer one, all the way up to the application interface instructions of layer seven.
Network Security Devices
Firewalls: Stateful Inspection tracks open connections
Firewalls always implicitly deny
A basic firewall rule only looks at:
- source system address
- destination system address
- destination port and protocol
- action (allow or deny)
Web application firewalls – specifically protect web applications by using application awareness to look into the application layer and block web attacks.
Proxies provide performance boosting through caching.
IPSec commonly used for site-to-site VPNs
SSL/TLS commonly used for remote access VPNs
Anomaly detection, behaviour-based detection and heuristic detection are the same thing.
UTMs or Unified Threat Management combine multiple security functions in a single appliance.
Content Distribution Networks, or CDNs, allow organisations that experience high volumes of web traffic from around the world to serve it without building a massive web infrastructure. Example: a smaller website runs a great sale and experiences more traffic than it can handle. A CDN can retrieve content from the origin web server and cache it for local users around the world through its global infrastructure.
CDNs can also provide protection against DDoS attacks.
Modem stands for modulator-demodulator; this is because it converts digital signals to analogue form and back.
ICANN distributes large blocks of addresses to regional authorities for distribution.
Port Address Translation, or PAT, allows multiple systems to share the same public address.
Users on the same VLAN will be able to directly contact each other as if they were connected to the same switch. All of this happens at layer two of the network stack without involving routers or firewalls.
NAC/ Network Access Control is technology that intercepts network traffic coming from devices that connect to a wired or wireless network and verifies that the system and user are authorised to connect to the network before allowing them to communicate with other systems.
NAC commonly uses an authentication protocol called 802.1X. 802.1X transaction steps:
- The device that wishes to connect to the NAC protected network must be running software known as a supplicant which performs all of the NAC related tasks.
- The switch or wireless controller that the device connects to is known as the authenticator.
- The back-end authentication server is a centralised server that performs authentication for many services, including NAC.
The defence in depth principle states that organisations should use multiple, overlapping security controls to achieve the same control objective.
This is a layered approach to security and protects against the failure of any single security control. If one control fails, there is still another control designed to achieve the same security objective standing in its place.
Example: Encrypted VPN connection between sites, but also HTTPs implementation on sensitive communications between those two sites as well as VLANS.
The classic security control of the network perimeter is a stateful inspection firewall that keeps out any traffic that isn't explicitly authorised by a firewall rule. You can build a defence in depth approach by adding router access control lists that filter traffic before it even reaches the firewall.
NAC can also provide Role-Based Access.
NAC also performs posture checking to check whether a device is sufficiently secured before granting it access to the network, e.g. verifying antivirus signatures, firewall config, and security patches. It can then send failed devices into a quarantine VLAN.
Older Linux system used telnet for remote access, but telnet does not provide any encryption and should not be used. SSH provides a secure alternative.
RDP provides encrypted desktop access to Windows servers/machines.
Application virtualisation allows users to “stream” applications to their own system that are running on a different environment. Citrix XenApp, VMWare ThinApp, and Microsoft App-V are all examples of application virtualisation technologies.
Screen scraping is a technique used primarily to interface antiquated mainframe systems with the internet. Screen scraping software interacts directly with a mainframe and then presents data to users through a web server. It is rarely used today; users have a limited ability to enter commands into a website, which transmits them to the screen scraping system before they are entered into the mainframe.
Specialised Networking
When encryption isn't viable due to the lower quality calls, VoIP administrators may achieve some protection against eavesdropping through the use of a separate VLAN for voice communications.
The open source community developed a protocol called the extensible messaging and presence protocol, XMPP, to allow a standard technology for private messaging hosted by IT departments.
Network Attached Storage, or NAS, are storage devices that connect to a network and provide storage services to other devices on that network.
Devices accessing NAS storage use standard storage protocols, such as the CIFS protocol used to access file shares on Windows systems, or NFS, which is used for similar purposes on Linux systems.
Larger storage needs require the use of Storage Area Networks, or SANs. These are massive arrays of devices that serve networks with very large storage requirements. SANs use the SCSI or iSCSI protocol.
VSANs are the VLAN equivalent of SANs: they segregate sensitive storage traffic onto its own virtual network.
MPLS or Multiprotocol Label Switching provides an efficient way to route traffic along fixed network paths.
The vast majority of networks use IP address-based routing, sending packets on their way by looking at the destination IP address and comparing it to internal routing tables at each hop along the way. MPLS uses a fixed-path approach, meaning it is faster.
MPLS performs functions found in both the data link and network layers, so it is referred to as being in layer 2.5 of the OSI model. It's basically a tunnel where the first router is the entry point and the last is the exit.
The MPLS network is referred to as a Label Switched Path or LSP
The first router is known as the label edge router or LER
Middle routers are known as label switching routers or LSR
The end router is known as the Egress Node
The Label Distribution Protocol or LDP, tells all of the routers in the label switched path the label chosen by label edge routers, and the label switched router's role in carrying out that path.
The MPLS routing protocol is the Resource Reservation Protocol with Traffic Engineering, or RSVP-TE. This protocol performs the same function as LDP, but it adds traffic engineering capabilities that allow network engineers to reserve bandwidth for MPLS traffic.
Secure Network Management
Standard entry = Access list (number 1-99), permit or deny, and the source IP, and the mask
Extended entry = Can block traffic based upon source and dest IP and ports, as well as protocol.
VLAN hopping – disable automatic trunk negotiation to prevent VLAN hopping attacks.
VLAN Pruning – limiting the exposure of VLANs by limiting the number of switches where they are trunked, especially for sensitive VLANs.
Port Security – limit the devices that may connect to a network switch port by MAC address, this can either be done in static mode or dynamic mode where the switch memorises the first MAC address that it sees on any given port.
Shadowed rules occur when a rule base contains a rule that will never be executed because of its placement in a rule base. The firewall will check its rule base in top down order.
Promiscuous rules violate the principle of least privilege and jeopardise system security. This is when rules allow too much access.
Orphaned rules are another type of firewall configuration error. They occur when a system or service is decommissioned, but the rules are never removed from the firewall.
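Because the firewall evaluates its rule base top down, shadowed rules can be found mechanically. The sketch below is a hypothetical illustration (not any vendor's API): a rule is shadowed when some earlier rule already matches every packet the later rule would match.

```python
# Illustrative sketch of shadowed-rule detection in a top-down rule base.
# Rules and field names here are invented for illustration.

def covers(general, specific):
    """True if every field of `general` matches the corresponding
    field of `specific` ('*' acts as a wildcard)."""
    return all(g == "*" or g == s for g, s in zip(general, specific))

def find_shadowed(rules):
    """Each rule is (source, destination, port, action).
    Returns the indices of rules that can never fire."""
    shadowed = []
    for i, later in enumerate(rules):
        fields = later[:3]
        if any(covers(earlier[:3], fields) for earlier in rules[:i]):
            shadowed.append(i)
    return shadowed

rule_base = [
    ("*", "10.0.0.5", "*", "deny"),                 # broad deny to the server
    ("192.168.1.10", "10.0.0.5", "443", "permit"),  # shadowed: never reached
]
```

Real firewalls match on richer criteria, but the top-down first-match principle is the same.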
Cisco routers support the concept of access control lists.
Cisco devices support two types of access control lists. Standard and extended.
MAC flooding occurs when attackers send large numbers of different MAC addresses to a switch, hoping to overflow the switch's MAC address table and cause it to forget where devices are.
Using flood guards and loop prevention helps administrators maintain secure, highly available networks.
Simple Network Management Protocol or SNMP provides network administrators with a means to centrally configure and monitor network devices.
There are three components involved in SNMP network administration.
Managed devices are the network devices themselves. An SNMP agent is a piece of software that runs on the managed device and communicates with the SNMP management service, reporting metrics such as device performance and network activity in response to SNMP GET requests. SNMP can also reconfigure devices remotely using an SNMP SET request; the agent then sends back an SNMP response. Managed devices may also initiate communication with the network management system when they have unusual news to report, in which case the agent sends an SNMP TRAP to the network management system.
SNMP is currently on version three.
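The GET/SET/response/TRAP flow above can be sketched as a toy in-memory simulation. The class, OID, and values are invented for illustration; real deployments use an SNMP library rather than anything like this.

```python
# Toy simulation of the SNMP message flow: GET, SET, response, and an
# unsolicited TRAP sent back to the network management system.

class SnmpAgent:
    def __init__(self, mib, trap_handler):
        self.mib = mib                    # managed object values, keyed by OID
        self.trap_handler = trap_handler  # callback standing in for the NMS

    def get(self, oid):
        """Answer an SNMP GET request with a response message."""
        return {"oid": oid, "value": self.mib.get(oid)}

    def set(self, oid, value):
        """Apply an SNMP SET request (remote reconfiguration)."""
        self.mib[oid] = value
        return {"oid": oid, "value": value, "status": "ok"}

    def report_event(self, message):
        """Send an agent-initiated TRAP to the management system."""
        self.trap_handler({"trap": message})

traps = []
agent = SnmpAgent({"1.3.6.1.2.1.1.5.0": "core-switch-1"}, traps.append)
agent.set("1.3.6.1.2.1.1.5.0", "core-switch-2")  # reconfigure remotely
agent.report_event("link down on port 12")       # device has unusual news
```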
Virtualised Networks
Software defined networking (SDN) separates two different functions of a network: the control plane and the data plane.
The SDN controller is where network administrators and algorithms make decisions about network routing, and then the controller reaches out to each device on the network and programs it to carry out those instructions properly.
A major benefit to SDN is that it makes networking programmable.
SDNs also allow strong network segmentation because VLANs can be configured easily, and programmability enables quick responses to attacks.
Port isolation is a network security technique that is particularly useful when users of the network do not trust each other and are not trusted themselves. Another word for this is private VLANs.
Network attacks
In an attack known as the Smurf attack, the attacker sends echo requests to the broadcast addresses of third-party servers using a forged source address.
In an amplification attack, the attacker carefully chooses requests that have very large responses. The attacker can then send very small requests over his or her network connection that generate very large replies over the third party's network connection.
A typical packet has only one or two flags set, but a Christmas tree packet has them all enabled, which might cause some systems to crash. This is called the Christmas tree attack.
DNS uses a hierarchical lookup system: the initial request goes to a server on the client's network, and subsequent requests go on to ask other servers until an answer is found. ARP poisoning is a spoofing technique that provides false information in response to ARP requests and can only work on a local network. In this case the malicious user makes a victim's system think the attacker is the gateway.
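ARP poisoning works because ARP is stateless: replies are cached without any verification. A toy sketch of a victim's ARP cache (addresses invented for illustration):

```python
# Toy illustration of ARP poisoning: the attacker's unsolicited reply
# for the gateway's IP simply overwrites the legitimate cache entry.
arp_cache = {}

def handle_arp_reply(cache, ip, mac):
    """ARP caches replies without verifying who sent them."""
    cache[ip] = mac

handle_arp_reply(arp_cache, "192.168.1.1", "aa:aa:aa:aa:aa:aa")  # real gateway
handle_arp_reply(arp_cache, "192.168.1.1", "ee:ee:ee:ee:ee:ee")  # attacker wins
```

The victim now sends gateway-bound traffic to the attacker's MAC address.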
Transport Encryption
TLS is a protocol that uses other cryptographic algorithms; TLS is not a cryptographic algorithm itself.
During a session, once the client is satisfied about the server's identity, the client creates a random encryption key called the session key.
Session keys are also known as ephemeral keys.
SSL was the predecessor to TLS and works in a very similar way. However, there are known security flaws in SSL, and it should no longer be used.
The Authentication Header, or AH, protocol uses an integrity check value to provide tamper-proofing for IP packets.
Security Associations or SAs are used to describe the cryptographic technologies that a system supports.
TLS uses a concept known as cipher suites to allow systems to communicate the encryption and hashing algorithms that they support to other systems.
Wireless Network
MAC Filtering allows you to supply a list of the MAC addresses that may use your wireless network, and then limit access to the network to only those devices.
Wired equivalent privacy or WEP uses very weak encryption
Wi-fi protected access or WPA uses the Temporal Key Integrity Protocol or TKIP (encryption key changes all the time) to add security that WEP didn't have and is still used but not guaranteed to be secure.
WPA2 is better as it uses AES. It applies the encryption using Counter Mode Cipher Block Chaining Message Authentication Code Protocol, or CCMP.
Preshared keys, enterprise authentication, and captive portals are the main mechanisms to authenticate users of a wireless network. Preshared keys are commonly used by home Wi-Fi networks.
- Not good for big networks as a change of the key means all users must reauthenticate.
- No way to revoke access.
Enterprise authentication uses RADIUS and an authentication server.
- Uses username and password set up by a business
- Enterprise authentication takes place on these networks using versions of the extensible authentication protocol or EAP
- The lightweight extensible authentication protocol or LEAP was a version of EAP created by Cisco that should not be used (not secure).
- EAP TLS variant uses transport layer security to protect EAP communications and is considered highly secure.
- Protected EAP or PEAP protocol takes standard EAP variants and protects them inside a TLS tunnel.
Captive Portal
- the user is redirected to a webpage that requires them to authenticate before gaining access to the network.
802.11ac networks use beamforming, this is when the access point uses multiple antennas to detect the location of a device connecting to the access point and then steer the signal in the direction of the device.
The best way to place wireless access points is to use specialised hardware and software to measure signal strength. This is called a wireless site survey.
Wi-Fi Protected Setup, or WPS, is insecure because the algorithm it uses has a flaw that makes the PIN easy to calculate. It should be disabled.
Rogue access points occur when someone connects an unauthorised wireless access point to an enterprise network.
Bluejacking is spam via Bluetooth – old tech
BlueSnarfing is forced pairing - also rarely seen
Host Security
Trusted operating systems is a formal term used to describe operating systems that have gone through an accreditation process against a government-backed evaluation standard known as the Common Criteria.
Antispyware and antimalware are often used to refer to the same thing.
Host software baselining uses a standard list of the software that you expect to see on systems in your environment and then reports deviations from that baseline. You'll be able to identify unwanted software running on computers on your network.
Both network and host firewalls need to be configured for best security practices.
Most modern laptops come with built-in slots for inserting a special locking cable.
Security tags deter theft and provide an easy return mechanism for lost devices.
Virtual Machines are nothing more than files.
Virtualisation allows sandboxing of untrusted software and testing of security controls.
Virtualisation platform must also be patched, just like OSes and applications.
Identity and Access Management – 13%
Identity and access management is the practice of ensuring that computer systems have a clear picture of the identity of each individual or resource authorised to access the system. Also, that the system can control access in a way that prevents unauthorised individuals from accessing resources, while permitting authorised individuals to perform legitimate actions.
An entity is typically a physical person but can be a physical or virtual object as well.
Identity corresponds to the roles that an individual plays within an organisation – one entity can have multiple identities.
Identification > Authentication > Authorisation
Biometrics – fingerprint scanner
- Easy enrolment
- Low false acceptance
- Low false rejection
- Low intrusiveness (not creepy apparently)
Magnetic stripes are easily duplicated
Smart cards make it more difficult to forge, they include a microchip in the card
FAR – false acceptance rate
FRR – false rejection rate
CER – Crossover error rate (will tell you the relationship between FAR and FRR)
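The relationship between FAR, FRR, and the CER can be shown numerically. The match scores below are invented example data (higher score = better biometric match); the CER is the threshold where the two error rates meet.

```python
# Hedged sketch of FAR, FRR, and the crossover error rate (CER).

def far(impostor_scores, threshold):
    """False acceptance rate: impostors whose score passes the threshold."""
    return sum(s >= threshold for s in impostor_scores) / len(impostor_scores)

def frr(genuine_scores, threshold):
    """False rejection rate: genuine users whose score fails the threshold."""
    return sum(s < threshold for s in genuine_scores) / len(genuine_scores)

genuine = [0.9, 0.8, 0.75, 0.6, 0.55]    # invented genuine-user scores
impostor = [0.5, 0.45, 0.4, 0.65, 0.3]   # invented impostor scores

# Raising the threshold lowers FAR but raises FRR; the CER sits where
# the two rates are equal (or closest).
cer_threshold = min(
    (t / 100 for t in range(0, 101)),
    key=lambda t: abs(far(impostor, t) - frr(genuine, t)),
)
```

A strict threshold (here 0.7) accepts no impostors but rejects some genuine users, which is the trade-off the CER summarises.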
In order for Multi-factor authentication to take place, the authentication techniques need to be different. I.e. something you know and something you are (password and fingerprint)
Smart phones led to the adoption of soft token technology
The HMAC-based (hash-based message authentication code) one-time password algorithm, HOTP, uses a shared secret and an incrementing counter to generate the code displayed on the token.
Eye scans are seen as more intrusive. Facial recognition is also creepy but less than eyes.
The code changes whenever the button is pushed, and the code is valid until it is used.
The time-based one-time password algorithm, TOTP, doesn't use a counter. Instead, it uses the time of day in conjunction with a shared secret. This means that the code changes constantly and is only valid until the token generates the next code.
The token and the authentication system must have synchronized clocks for TOTP to function correctly.
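The HOTP and TOTP algorithms described above are simple enough to sketch with only the standard library. The shared secret below is the published RFC 4226 test secret, so the first counter values produce the documented test codes.

```python
# Minimal HOTP (RFC 4226) and TOTP (RFC 6238) sketch.
import hashlib
import hmac
import struct
import time

def hotp(secret: bytes, counter: int, digits: int = 6) -> str:
    """Counter-based OTP: HMAC-SHA1 over the counter, then dynamic truncation."""
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                       # low nibble picks a 4-byte window
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def totp(secret: bytes, step: int = 30, digits: int = 6) -> str:
    """Time-based variant: the counter is the number of 30-second steps,
    which is why both sides need synchronized clocks."""
    return hotp(secret, int(time.time()) // step, digits)

secret = b"12345678901234567890"  # RFC 4226 test secret
print(hotp(secret, 0))            # RFC 4226 test vector: 755224
```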
The Common Access Card, or CAC, serves as a "something you have" factor in smart card authentication.
Password Authentication Protocol (PAP) - used to authenticate with a server. No encryption. Don’t use.
The Challenge Handshake Authentication Protocol (CHAP) improves on this with a challenge-response handshake:
- Once they establish the link, the server sends a random value to the client. This is known as the challenge value.
- When the client receives the challenge, it combines the challenge with the secret and computes a hash of both values.
- The client transmits the hash value to the server. Response value.
- The server receives the response and stores it in memory while computing its own hash value by using the same hash function on the challenge it sent to the client and the shared secret that they both know.
- The server compares the response it computed and if the two values match the server knows that the clients secret is the same.
- The server can then authenticate the client without ever having to send the actual secret password over the network.
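The challenge-response steps above can be sketched in a few lines. This is an illustration of the flow, not the real protocol: actual CHAP (RFC 1994) uses MD5 and a link-level message format, whereas this sketch uses SHA-256 for clarity.

```python
# Sketch of a CHAP-style challenge-response exchange: the shared secret
# never crosses the network, only the challenge and the hash do.
import hashlib
import hmac
import os

shared_secret = b"correct horse battery staple"  # known to both sides

# Server: generate a random challenge value and send it to the client.
challenge = os.urandom(16)

# Client: combine the challenge with the secret, hash both, send the hash.
response = hashlib.sha256(challenge + shared_secret).hexdigest()

# Server: compute its own hash from the same challenge and shared secret,
# then compare the two values in constant time.
expected = hashlib.sha256(challenge + shared_secret).hexdigest()
authenticated = hmac.compare_digest(response, expected)
```

Because a fresh random challenge is used each time, a captured response cannot be replayed later.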
RADIUS Authentication:
The RADIUS client is usually an application server, like an access point in a wireless network. RADIUS uses UDP and does not encrypt the full session. TACACS – Terminal Access Controller Access Control System – is rarely used today. TACACS+ functions similarly to RADIUS but uses TCP and encrypts. TACACS+ is Cisco proprietary.
Kerberos is a ticket-based authentication system that allows users to authenticate to a centralised service and then use tickets from that authentication process to gain access to distributed systems that support Kerberos authentication. It is used in Microsoft Active Directory.
Service that supports Kerberos = Kerberized service.
Kerberos – user uses Kerberos client to provide username and password. The client then sends a CLEAR TEXT auth request to an authentication server.
The server then looks up the user, retrieves the password, and sends back a session key encrypted with the client's password. It also provides a ticket granting ticket that only the ticket granting server can decrypt.
When the client receives these messages, it decrypts the session key with its password.
When the client wants to access a service, it contacts the ticket granting server (TGS), sending the request for the service as well as the ticket granting ticket. It ALSO sends an authenticator, which contains the client's own ID and the current time, encrypted using the client's session key.
The ticket granting server first decrypts the ticket granting ticket to retrieve the session key. The TGS then uses the key to decrypt the authenticator's client ID and timestamp. The server then randomly generates a client/server session key that the client will use to communicate with the service it desires.
The TGS then sends back to the client a client/server ticket, which is encrypted with the service's secret key and contains the client/server session key. The second message is a copy of the client/server session key encrypted with the client's session key.
The client then sends two messages to the service. The first is the client/server ticket and a new authenticator encrypted with the client/server session key.
The server uses the client/server session key after decryption. It then uses this key to decrypt the ticket and give access to the service.
Lightweight Directory access protocol (LDAP) provides the means to query a directory such as active directory.
Port 88 – Kerberos
Port 389 – ldap
Port 636 – secure ldap
NTLM was the standard for Microsoft before Kerberos. Avoid.
The Security Assertion Markup Language (SAML) allows browser-based single sign-on across a variety of web systems.
the end user is known as the principal
the organisation that provides the proof of identity is known as the identity provider (employer/school/account provider)
the web-based service that the end user wishes to access is known as the service provider.
Benefits are SSO, also no credential access for service providers. Secret stays with User and Identity provider.
IDaaS or Identity as a service use directory integration and also integrate with many different applications at once simplifying the logging in experience.
OAuth and OpenID Connect protocols provide a federated SSO experience for the web. This is when you are able to sign into different websites using your google account for example.
OAuth is an authorisation protocol not authentication.
OpenID works with OAuth to identify and authenticate.
Certificate-based authentication has the public key signed by a trusted certificate authority, which provides additional assurance. This type of authentication can be used with the IEEE 802.1X standard for network authentication.
Accountability
- All actions can be traced back to a user in a way that is undeniable. For this, each user must be unique, and strong authentication prevents unauthorised users from gaining access.
- The system must also track user activity carefully through auditing mechanisms. Logs must also be secure.
logging + monitoring + identification + authentication = accountability
- Session management / timeouts can not only disconnect users after a certain amount of time or inactive time has passed but can also perform soft timeouts where the user is prevented from performing sensitive actions until they reauthenticate.
Credential Management
Account management = principle of least privilege + separation of duties + job rotation + management of the account lifecycle.
- GPOs (group policy objects) which are groups of configurations and apply them to domains or smaller groups called OUs (Organisational units)
- Best practice is that passwords should be at least eight characters. Obviously longer the better...
- Roles group permissions together and can be assigned to multiple users at once.
- Security groups are used to manage roles and permissions of users.
- When a new user joins a team, they can assign that user to the team’s role and the user will gain all permissions associated with their new job.
Authorisation
- Once an individual successfully authenticates to a system, authorization determines the privileges that individual has to access resources and information.
- Mandatory access control, or MAC systems, the operating system itself restricts the permissions that may be granted to users and processes on system resources. Users themselves cannot modify permissions.
- Discretionary access control systems let the owners of files, computers, and resources configure permissions at their discretion.
- The NTFS file system used by Windows implements access control lists and allows users to assign a variety of permissions.
- Full control. Read. Read and execute, so a user can run executable programs. Write lets a user create and add data to files. Modify lets users delete as well as read, write, and execute.
- Data base access control systems can use SQL server authentication, windows authentication (underlying server user accounts) or a mixture of both authentication methods.
- Role-based authorisation can be given to users by administrators.
- Account-based authorisation is another method, granted to individual accounts.
Access Control Attacks
- The shadow password file is hidden from users so the hashes of passwords are not available for everyone to see; if they were, people could brute force the hashes.
- The Birthday Problem – hash collisions become common with large samples. Hashing Algorithms need to be careful of this.
- Rainbow table attacks work by precomputing common password hashes therefore saving a computational step during a password attack.
- Watering hole attacks compromise websites that targeted users are known to visit and use them to infect those users' systems.
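The birthday problem behind hash collisions is easy to compute directly: the chance that k values drawn from n possibilities are all distinct shrinks surprisingly fast.

```python
# The birthday problem: probability of at least one collision when
# drawing k values uniformly at random from n possibilities.

def collision_probability(k: int, n: int) -> float:
    p_unique = 1.0
    for i in range(k):
        p_unique *= (n - i) / n    # i-th draw avoids the i values seen so far
    return 1.0 - p_unique

# Classic result: 23 people and 365 birthdays gives just over a 50% chance
# of a shared birthday - the same maths that makes hash collisions common
# with large samples.
print(round(collision_probability(23, 365), 3))
```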
Security Assessment and Testing – 12%
Threat Assessment
Well-designed security programs include a variety of assessment techniques that overlap and complement each other.
Using baseline reporting, attack surface reviews, code reviews, and architecture reviews provides the organization with good insight into its current security status.
Penetration testing process has two phases outlined by NIST. Discovery phase and attack phase, once in the attack phase they go from gaining access, to escalating privileges, system browsing, and then installing additional tools to gain additional information or access and then repeating the cycle.
Intrusive/ dangerous vulnerability scans can disrupt systems, while non-intrusive/ safe scans do not.
The issue with false positives is that people can become desensitised to vulnerability reports if too many are false alarms.
False negatives are obviously more dangerous as the scan fails to report a vulnerability that actually exists.
Log Monitoring
SIEM (Sec info and event management) systems using “artificial intelligence” or more likely machine learning can assist with the problem of security data overload.
The SIEM has all of the puzzle pieces and performs an activity known as log correlation to recognise combinations of different activities from different log sources that together may indicate a potential security incident.
Software Testing
Application code is the most common source of vulnerabilities. Code reviews are important when testing software.
During a code review, developers have their work reviewed by other developers who examine the code to ensure that it does not contain obvious or subtle security issues.
The code review may be totally informal, completely formal or a combination. The most formal code review process is called the Fagan inspection.
- Planning - developers perform the pre-work required to get the code review underway. This includes preparing the materials required for the review, identifying the review participants, and scheduling the actual review.
- Overview - the leader of the review assigns roles to different participants and provides the team with an overview of the software that is being reviewed.
- Preparation - the participants review the code and any supporting materials on their own to get ready for the review meeting. They look for any potential issues.
- Meeting - developers raise any issues that they discover during the preparation phase and discuss them with the team. The meeting is where the review team formally identifies any defects in the software that require correction.
- Rework – Correct any defects identified during the review, if significant enough, the process returns to the planning stage to cycle again.
- Follow-up – Leader confirm all defects were corrected.
Because this inspection is so formal, it is more common to see modified versions of the Fagan inspection process.
Code tests verify that software is functioning properly. They go beyond reviews and use technology to assist in the code inspection process. It's common to use both the test and review techniques when testing software.
P O P M R – F (Planning, Overview, Preparation, Meeting, Rework – Follow-up)
Synthetic transactions are an important part of dynamic code testing. Synthetic transactions are scripted sets of inputs and instructions given to code when the testers know what output the code should produce for each input.
Testing software can automatically cycle through these synthetic transactions to verify that the code is functioning properly across a wide variety of tests.
Fuzzing or fuzz testing provides many different types of valid and invalid input to software in an attempt to make it enter an unpredictable state or disclose confidential information.
The fuzz testing software can generate input values randomly or from a specification. This is known as generation fuzzing.
The fuzz tester can analyse real input and then modify those real values. This is known as mutation fuzzing.
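A minimal mutation fuzzer can be sketched in a few lines: take real sample input, flip random bytes, and feed each variant to the code under test. The `parse_record` target below is an invented stand-in for whatever is being fuzzed.

```python
# Minimal mutation-fuzzing sketch: mutate valid sample input and count
# how many variants the target rejects.
import random

def parse_record(data: bytes) -> int:
    """Toy target: expects b'name:age' and returns the age."""
    name, age = data.split(b":")
    return int(age)

def mutate(sample: bytes, rng: random.Random) -> bytes:
    """Flip one to three random bytes of the real sample input."""
    out = bytearray(sample)
    for _ in range(rng.randint(1, 3)):
        out[rng.randrange(len(out))] = rng.randrange(256)
    return bytes(out)

rng = random.Random(1)        # fixed seed so runs are repeatable
sample = b"alice:42"          # a real, valid input to mutate
crashes = 0
for _ in range(200):
    try:
        parse_record(mutate(sample, rng))
    except ValueError:
        crashes += 1          # fuzzer found input the target cannot handle
```

Generation fuzzing differs only in where the inputs come from: random or specification-driven values rather than mutated real ones.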
Complex software systems often rely on multiple independent components. This is why we use Application Programming Interfaces (APIs).
Misuse case testing evaluates software from the attacker's perspective by testing the results of unexpected behaviour.
Test coverage analysis seeks to give developers a sense of how much of the code was evaluated during a set of tests. Test coverage is defined as the percentage of the software exercised by the tests, and it can be measured against different units, such as lines of code or functions. Software testing packages can automatically compute the test coverage.
Disaster Recovery
Disaster recovery is a subset of business continuity activities designed to restore a business to normal operations as quickly as possible following a disruption. May include temporary measures to get operations back up but DR is not finished until everything is back to normal.
During this the focus of the organisation will change from normal business activity to restoring operations as quickly as possible.
After the immediate danger to the organisation clears, the disaster recovery team shifts from immediate response mode into assessment mode. This step triages the damage to the organisation and develops a plan to recover on a permanent basis.
RTO Recovery Time Objective – Maximum amount of time that it should take to recover a service after a disaster.
RPO Recovery Point Objective – Maximum time period from which data may be lost in the wake of a disaster.
Full backups as the name implies, include everything on the media being backed up. They make a complete copy of the data.
Differential backups supplement full backups and create a copy of only the data that has changed since the last full backup.
Incremental backups are similar to differential backups but with a small twist. Incremental backups include only those files that have changed since the most recent full or incremental backup. Use less space but require greater recovery time.
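The difference between the three backup types comes down to which timestamp each one compares against. A sketch with invented file names and change times:

```python
# Sketch contrasting full, differential, and incremental backup selection.
# File names and change times are invented example data.

files = {  # file -> time it last changed
    "ledger.db": 5, "notes.txt": 12, "config.ini": 2, "report.doc": 9,
}
last_full = 4          # time of the last full backup
last_incremental = 10  # time of the most recent incremental backup

full = set(files)  # everything, regardless of change time
differential = {f for f, t in files.items() if t > last_full}
incremental = {f for f, t in files.items() if t > last_incremental}
```

The differential set keeps growing until the next full backup, while each incremental only captures changes since the previous incremental, which is why incrementals use less space but take longer to restore.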
Backup rotation strategies describe how an organization reuses backup media over time. The point is to keep more copies of the most recent backups and fewer copies of older backups. A common approach is called the grandfather-father-son or GFS approach.
In GFS you must maintain 12 different sets of backup media. 4 are designated son, 4 father, and 4 grandfather, then as the organisation backs up they always follow these rules:
- Monday – Thursday use the son set, with the first being on a Monday.
- Every Friday use the father set. The first Friday goes to father set 1 then 2 the next week etc.
- The last day of the month use the grandfather set.
This allows an organisation to only use 12 sets of backup media over 4 months instead of 120 (one a day). Another common twist on GFS is to use 12 grandfather sets for a year of backups.
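The GFS rules above can be sketched as a simple date-to-media-set mapping. This is a hedged illustration: the exact set numbering and how weekend or month-end overlaps are handled vary between organisations.

```python
# Sketch of grandfather-father-son media selection for a given date.
import calendar
from datetime import date

def gfs_media_set(day: date) -> str:
    last_day = calendar.monthrange(day.year, day.month)[1]
    if day.day == last_day:
        return "grandfather"                 # last day of the month
    if day.weekday() == 4:                   # Friday
        week_of_month = (day.day - 1) // 7 + 1
        return f"father-{(week_of_month - 1) % 4 + 1}"
    if day.weekday() <= 3:                   # Monday-Thursday
        return f"son-{day.weekday() + 1}"
    return "no backup scheduled"             # weekends, in this sketch
```

For example, Monday 2 November 2020 uses son set 1, the first Friday of that month uses father set 1, and 30 November uses a grandfather set.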
All modern backup software has built-in verification mechanisms
You should configure alerts from backup software to notify system administrators when a backup fails.
Regular tests of your backup restoration capabilities should be performed.
Hot sites are the premier form of disaster recovery facility. They are fully operational data centres that have all of the equipment and data required to handle operations, ready to run. Expensive.
Cold sites may be used to restore but with a significant investment of time.
Warm sites are a compromise in between.
Disaster recovery sites not only provide a facility for technology operations, but they also serve as an offsite storage location for business data.
Each test of a disaster recovery plan has two goals.
- it validates that the plan functions correctly and that disaster recovery technology will work in the event of an actual disaster.
- the disaster recovery test provides an opportunity to identify necessary updates to the plan due to technology or business process changes.
There are five types of disaster recovery testing
- Read-throughs – checklist reviews, DR staff read procedures and provide feedback to the current plan.
- Walk-throughs – staff get together and walk through the plan, giving them the opportunity to discuss it with each other. More effective.
- Simulations – same as walk-through but the team discusses specific disasters and how they would react to them.
- Parallel tests – activates the DR plan, bringing up warm or hot sites but does not actually switch operations to the spare site.
- Full interruption tests – The organisation actually switches sites and interrupts normal business while switching. If something goes wrong, it will actually affect the business.
When any of these tests conclude, a written report of the procedures and results needs to be written up by the team-lead.
Assessing Security processes
Process data includes the electronic and/or paper records supporting each of the security processes that an organisation puts in place to ensure the confidentiality, integrity, and availability of information and resources.
Technical data includes the logs generated by servers, network devices, firewalls, IDS, IPS, access control systems, and other tools. This information comes in almost overwhelming quantities and is normally processed by a SIEM.
Management reviews provide an important double check on the work performed by employees. They also reduce fraud and malfeasance by creating a culture of oversight. I.e. the employee is less likely to do something malicious if they know a manager will catch them and they will get in trouble.
An organisation might require that any policy exception be documented in a formal request for change or RFC and have the approval of a senior level official.
Security programs use two primary types of metrics to demonstrate their effectiveness and the state of the organization's security controls.
Key performance indicators or KPIs are metrics that demonstrate the success of the security program. KPIs look at historical performance.
Key risk indicators or KRIs are measures that quantify the security risk facing an organisation in the future. They attempt to show future security risks.
KRI will also need to be customised to the organisation (as well as KPIs)
- Impact - the likelihood that the indicator will identify potential risks that are significant to the business.
- The effort to implement, measure, and support the indicator on an ongoing basis.
- Reliability - the extent to which the indicator is a good predictor of risk.
- Sensitivity - The indicator must be able to accurately capture variances in the risk.
Assessments are generally performed by or requested by an organisation. Audits are usually performed at the request of someone else such as a regulator.
Audits may be performed by two different types - internal auditors and external auditors.
Every audit should have a clearly defined scope. This scope may be very broad, or focused, like the PCI DSS.
User access review assessments - during this review, assessors obtain a listing of all the rights and permissions granted to a user and verify that those permissions are implemented correctly, and that there is an appropriate chain of approval for each permission setting.
Every security program should include control testing procedures, a process for managing exceptions to controls, the building of control remediation plans, and the use of compensating controls.
Control testing should be regular i.e. check that there are no new ports opened on the firewall.
Compensating controls are alternative security controls used to mitigate the risk introduced when an organisation makes an exception to a security requirement.
Security Operations – 13%
Investigations and Forensics
Four main types of investigation.
- Administrative - investigate operational issues related to the organisation's technology infrastructure. Low standards of evidence because there is no legal action involved.
- Criminal - conducted by government law enforcement agencies with the objective of investigating violations of criminal law. High standard of evidence. (Highest)
- Civil - also investigate the violation of a law, but they are non-criminal offenses involving a dispute between two parties. Civil cases may be initiated by the government, businesses, or private citizen. Lower evidence standards.
- Regulatory - conducted by government agencies looking into potential violations of administrative law. Regulatory investigations may be either civil or criminal in nature and use the standard of evidence appropriate to the type of case that the agency plans to bring. Regulatory investigations may also be undertaken by non-governmental authorities to enforce compliance with industry standard.
Three main types of evidence
Real – tangible objects that may be brought to a courtroom and examined
Testimonial – a witness provides information to the court that is accepted into evidence. This may be directly observed evidence; a witness with credentials may also give an expert opinion.
Documentary – information brought into court in written or digital form. The parol evidence rule states that when two parties enter into a written agreement, the court will assume that the written contract between the parties is the entire agreement, and that agreement may not be modified verbally.
The goal of digital forensics is to collect, preserve, analyse, and interpret digital evidence in support of an investigation.
The order of volatility influences how investigators should gather evidence. Usually in the order below:
- Network traffic
- Memory contents
- System config & process information, files, including temporary
- Logs and archived records
When conducting any forensic data capture, investigators should take note of the current time from a reliable source and compare it to the time on the device. This process is known as recording the time offset.
Forensic analysts use special devices known as write blockers, or forensic disk controllers, to prevent data being overwritten accidentally.
Network Flow/ NetFlow data captures high-level information about all communications on a network. Doesn’t include the packets, but useful to see “who-talked-to-whom” information.
Software forensics experts may analyse the code for the two products and draw conclusions about whether one company used the other company's source code to add functionality.
SF can also be used to analyse the origins of malware by comparing it to other malware written by a same author.
The chain of custody, also known as the chain of evidence, provides a paper trail that tracks each time someone handles a piece of physical evidence.
When collecting physical evidence, the evidence should always be placed in an evidence storage bag or other container that is labelled with the date, time, and location of collection, the name of the person collecting the evidence, and the contents of the storage bag. It should then be sealed with a tamper-resistant seal that would show if someone opened the container. This is the beginning of the chain of custody. Each piece of evidence should then be accompanied by an evidence log that records important events that happen in the life cycle of the evidence.
Important components to incident reporting:
- Notification – everyone who needs to know about an incident is aware that a response is underway
- Real-time updates – those who need to be familiar with the response efforts are kept informed throughout the process
- Documentation – permanent records are kept of the incident details and the response effort.
Organisations should have a specific list of individuals to contact in the event of an incident, depending on the incident.
Using an automated system allows the leader of the incident response team to send a single message without worrying about the recipient list. The system can then track down individuals, confirm that they acknowledge receipt of the message, and contact back-up responders, as needed.
The organisation's incident notification procedures should describe any external notification requirements and outline the timeline and process for those notifications.
Organisations may also want to report the incident to share intelligence with others in the industry.
An incident response team should create a formal report at the conclusion of each incident response.
Electronic Discovery
- Preservation – individuals and departments should be informed of litigation and told to preserve reports related to the dispute. Preservation includes more than just not intentionally destroying information.
- Collection – It's up to the attorneys to decide when collection is warranted. Organisations must have processes in place to collect this information and will normally use an electronic discovery management system to assist with the collection and organization of those records.
- Production – records are provided. Attorneys will look over all collected records and put together a file containing all relevant records.
Logging and Monitoring
Information security continuous monitoring is maintaining ongoing awareness of information security, vulnerabilities, and threats to support organisational risk management decisions.
Six steps of the continuous monitoring process given to us by NIST:
- Define a continuous monitoring strategy based upon risk tolerance that maintains clear visibility into assets, vulnerabilities, threats, and business impact.
- Establish a monitoring program by outlining the metrics we're going to use and the frequency at which we're going to monitor and assess our security.
- Implement the program by collecting the metrics, performing the assessments, and building reports. These tasks should be as automated as possible.
- Analyse and report findings from collected data.
- Respond to those findings by mitigating, avoiding, transferring, or accepting the risk.
- Review and update the monitoring program, adjusting our monitoring strategy and maturing our measurement capabilities.
- DLP definition – Data loss prevention (DLP) systems help an organisation enforce information handling policies and procedures in order to prevent data loss. The definition is not specifically network-related, but the only type of DLP I’ve used is network-based.
Resource Security
Change management processes ensure that organisations follow a standardised process for requesting, reviewing, approving, and implementing changes to information systems.
Baselining is an important component of configuration management. A baseline is a snapshot of a system or application at a given point in time. Can be used to assess a change to the system.
Version control is also a critical component of change management programs.
Change and config management allows technology professionals to track the status of hardware, software and firmware.
In a type one hypervisor, also known as a bare metal hypervisor, the hypervisor runs directly on top of the hardware and then hosts guest operating systems on top of that. This is the most common form of virtualisation found in data centres.
In a type two hypervisor, the physical machine actually runs an operating system of its own and the hypervisor runs as a program on top of that operating system. This type of virtualisation is commonly used on personal computers.
Virtualisation also makes host elasticity easy. Elasticity means that a system can expand, and contract as needed to meet changing business requirements.
Virtualisation can be used to sandbox untrusted software and testing security controls.
No one cloud model is inherently superior to the others (public, private, hybrid) it depends entirely on the organisation.
IaaS – (infrastructure) Customer purchases servers/storage. IaaS saves businesses the cost and effort of installing in-house hardware and an on-site virtual environment, which makes it ideal for businesses with temporary or fluctuating workloads. To put it simply, it provides more flexibility than traditional IT models, as it eliminates the need for the on-site data centres traditional IT infrastructure demands.
PaaS – (platform) In between IaaS and SaaS. The vendor handles the hardware, databases (where all your data is stored), and the environment required to run your web application. You simply provide them with your developed web application and data, and they deploy the application for you. PaaS is just an environment to run an application.
SaaS – (software) Customer purchases an entire app.
Security principles
Privileged account management solutions put special controls in place to secure these privileged accounts and monitor the activities of privileged users.
A solution to this is password vaults: secure repositories that store the passwords to sensitive accounts. The idea is that no one knows the password, and a new one is created whenever access is needed. Another solution is enhanced monitoring of admin accounts. Command proxying is another method, where the privileged account management solution proxies all privileged requests so each one can be verified.
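A minimal sketch of the password-vault idea (all names here are hypothetical, and a real solution would also rotate the credential on the target system): each checkout issues a fresh random password, so no person ever holds a long-lived privileged credential.

```python
import secrets
import string

class PasswordVault:
    """Toy privileged-password vault: nobody "knows" the current password,
    because a fresh cryptographically random one is issued on each checkout."""

    ALPHABET = string.ascii_letters + string.digits

    def __init__(self):
        self._passwords = {}  # vault's record of each account's current password

    def checkout(self, account: str) -> str:
        # secrets (not random) is the right module for security-sensitive values.
        new_password = "".join(secrets.choice(self.ALPHABET) for _ in range(24))
        self._passwords[account] = new_password
        return new_password

vault = PasswordVault()
first = vault.checkout("root")
second = vault.checkout("root")  # a new password replaces the old one
```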
Incident Management
Incident response program
- IR policy should provide foundational authority for the program.
- IR policy should define incidents that fall under the policy.
- IR policy should include an incident prioritisation scheme.
Law enforcement brings resources to an investigation but also introduces new risk and regulations.
It is likely the incident will become public.
Staffing and building an IR team is also a challenging task as they will likely need to be available 24-7 and have backup personnel.
A first responder's highest priority should be containing the damage by isolating affected systems.
Potential incident > First responder mode (isolate damage) > Incident escalation and notification > Mitigation > Recovery and reconstitution > lessons learned and reporting
Personnel Safety
Rules:
- Employers should always place the safety of their employees above all other concerns.
- It's always preferable to have employees working in teams wherever possible.
- Employees should also have the ability to indicate that they are in a dangerous situation or under duress (panic buttons, code words, or alarms that employees can secretly trigger to indicate that an attacker is forcing them to comply or that they are in danger).
- Risk assessments should cover major risks to employee safety and develop response plans. Kind of like how everyone has a fire evacuation plan, but for all risks to employees.
- Every company should have a plan for lockdown in the event of workplace violence. This plan should instruct employees on how they can find safe refuge and seek assistance from law enforcement.
- Every organisation should also have developed its own, unique, emergency management plan that reflects the geographic, structural, and operating characteristics of its business environment.
Software Development Security – 10%
Some of the key principles of application hardening are:
- ensuring that applications use proper authentication to validate the identity of users
- that applications encrypt any sensitive data so that attackers can't read it by accessing the underlying storage directly
- ensuring that applications validate any user input to ensure that it does not contain dangerous code that might jeopardise the security of the software or underlying computing infrastructure
- ensuring that applications are not vulnerable to any known exploits and when exploits are discovered that they are promptly corrected.
Organisations usually have control over how they configure ERP (enterprise resource planning) systems: they can make configuration choices such as which type of encryption to use and who will have access to the system, as well as authentication techniques and the scope of access for groups or individual users. Once chosen, baselines are a good way to manage this.
Software Development Lifecycle
Requirements definition - Every software project should begin with a solid set of requirements. Developers should work hand in hand with their customers to outline the specific purpose of the software and the details of the business goals that it will achieve.
The classic approach to software development is a methodology known as the Waterfall approach.
It follows a fairly rigid series of steps that begin with developing system requirements, move on to developing software requirements, then produce a preliminary design from those requirements that is used as the basis for a detailed design.
Once that design is complete, developers begin the coding and debugging process where they create software. When they finish coding, the software is tested rigorously and, if it passes those tests, it's moved into operations and maintenance mode.
This approach does allow for movement back to an earlier step, but only one phase at a time. For example, if software fails the testing process, it moves back into coding and debugging before being submitted for additional testing. This process is very rigid and doesn't allow for many changes to the software while development is in progress.
The Spiral model is designed to mitigate some of the disadvantages associated with the waterfall model. The major difference is how developers move through the model.
In the first phase, developers determine objectives, alternatives and constraints. Then they move on to evaluating alternatives and identifying and resolving risks. From there, they develop and test the code, and then they begin the planning phase for future development work, repeating the spiral as many times as needed.
Agile: This approach values rapidly moving to the creation of software.
SW-CMM
The Software Capability Maturity Model
This model helps organisations identify where they are in the maturation process. Five levels: Initial, Repeatable, Defined, Managed, and Optimising.
The IDEAL Model
- has five phases.
Initiating, Diagnosing, Establishing, Action and Learning. This model is more focused on the process that an organization follows to improve itself.
Once the development process concludes, the organisation is still responsible for maintaining and operating that code until it's eventually decommissioned.
Any code changes must take place in an orderly fashion with appropriate testing and approvals. The change management program should consist of three key elements:
Request control – allows customers to request modifications to the software that is currently deployed. Benefits and costs of implementation are estimated.
Change control - When devs do make changes they have to request their change through an RFC document which is then reviewed by a board.
Release control – the QA team test and verify the code. After approval the manager moves the code into production. Typically, developers do not have permission to update production code.
DevOps seeks to build collaborative relationships between developers and operators with open communication.
DevOps practitioners seek to create the environment where developers can rapidly release new code while operations staff can provide a stable operating environment.
IaaS useful for DevOps.
Software Security Issues
Cross-Site Scripting (XSS) – an attacker embeds malicious scripts in a third-party website that are later run by visitors. This can be stopped using input validation to remove script code from any input.
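As a sketch of the input-validation defence (the function and pattern names here are illustrative, not from any particular framework), suspicious script content is rejected and everything else is HTML-encoded so the browser renders it as text rather than executing it:

```python
import html
import re

# Blocklist for the most obvious injection attempt; encoding below is the
# real safety net for anything that slips past a pattern like this.
SCRIPT_PATTERN = re.compile(r"<\s*script", re.IGNORECASE)

def sanitise_comment(user_input: str) -> str:
    """Reject obvious script injection, then HTML-encode the remainder."""
    if SCRIPT_PATTERN.search(user_input):
        raise ValueError("script content is not allowed")
    return html.escape(user_input)

safe = sanitise_comment("Nice post! 1 < 2 & 3 > 2")
# <, > and & are now entities (&lt; &gt; &amp;) that display literally.
```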
SQL injection – attackers insert unwanted SQL code into application input to access or modify databases. Protection solutions are input validation and parametrised SQL (which prevents user input from altering the query structure).
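A minimal sketch of parametrised SQL, using Python's built-in sqlite3 as a stand-in for a production database: the `?` placeholders bind user input as data, so classic payloads like `' OR '1'='1` can never change the query's structure.

```python
import sqlite3

# In-memory database standing in for a real application database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

def login(username: str, password: str) -> bool:
    # The ? placeholders pass user input as bound parameters, not as SQL,
    # so injected quotes and keywords are treated as literal text.
    row = conn.execute(
        "SELECT 1 FROM users WHERE username = ? AND password = ?",
        (username, password),
    ).fetchone()
    return row is not None
```

(Real applications would also hash passwords rather than store them in plain text; that is outside the point being illustrated here.)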
Privilege escalation vulnerabilities often arise as the result of buffer overflow issues or other security vulnerabilities in code that allow an end user to execute arbitrary instructions on the server. Input validation is one solution, along with security patches and the principle of least privilege.
The Directory Traversal Attack allows an attacker to manipulate the file system structure on a web server.
In Unix, “.” references the current directory and “..” references one level up. Directory traversal attacks use these references to try to gain access to and exploit insecure files. Input validation = solution.
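A sketch of that validation (the web-root location is hypothetical; here a temporary directory stands in for it): resolve the requested path, which collapses any `..` components, and refuse anything that lands outside the document root.

```python
import os
import tempfile

# Hypothetical document root; a temp directory stands in for /var/www/html.
WEB_ROOT = os.path.realpath(tempfile.mkdtemp())

def resolve_request_path(requested: str) -> str:
    """Resolve a user-supplied path, refusing anything outside the web root.

    os.path.realpath collapses ".." components and symlinks, so a request
    like "../../etc/passwd" resolves to its true target before the check.
    """
    candidate = os.path.realpath(os.path.join(WEB_ROOT, requested))
    if not candidate.startswith(WEB_ROOT + os.sep):
        raise PermissionError("path escapes the web root")
    return candidate
```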
Buffer overflow attacks also pose a danger to the security of web applications. When software engineers develop applications, they often set aside specific portions of memory to contain variable content. Users often provide answers to questions that are critical to the application's functioning and fill those memory buffers. If the developer fails to check that the input provided by the user is short enough to fit in the buffer, a buffer overflow occurs. The user content may overflow from the area reserved for input into an area used for other purposes and unexpected results may occur, may give away valuable data to the attacker.
Cookie risk = deanonymisation
Session Hijacking attacks capture or guess users’ session cookies. The attacker then places a stolen or forged cookie in their own request headers to make the website believe they are a user who has previously logged in, and the site will treat them accordingly.
Permissions granted to an extension may be overly broad, giving third parties access to your personal information.
Extensions could also be trojans.
When code execution attacks take place within an application running on a server, the code executes with the permissions of that application process. You should limit that access as much as possible, running application services with restricted accounts that follow the principle of least privilege. Patches also resolve the vulnerabilities in systems that attackers use to execute this attack.
Secure Coding Practices
Software should be able to handle errors appropriately. Error / exception handling.
Different programming languages implement exception handling in different ways.
Explicit instructions need to be given to tell code how to handle errors safely.
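An illustrative sketch of those explicit instructions in Python (the function and setting names are invented for the example): fail safe by logging the problem and falling back to a known-good default, rather than crashing or leaking an internal stack trace to the user.

```python
import logging

logger = logging.getLogger("app")

def read_config_value(settings: dict, key: str, default: int) -> int:
    """Return an integer setting, falling back to a safe default on bad input.

    Each failure mode gets its own handler so the error is recorded
    precisely, but the caller always receives a usable value.
    """
    try:
        return int(settings[key])
    except KeyError:
        logger.warning("missing setting %r, using default", key)
        return default
    except (TypeError, ValueError):
        logger.warning("malformed setting %r, using default", key)
        return default
```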
Code repository = store the source files used in software development in a centralised location that allows for secure storage and the coordination of changes.
Code repositories also perform version control, allowing the tracking of changes and rollback.
Code repositories promote code re-use, as developers can search the repo for existing code. They also help avoid orphaned code that nobody is responsible for maintaining.
When developers release code publicly, they must be careful to remove sensitive information from that code before publishing it.
Software development kit (SDK) provides programming resources.
It's fairly common for security flaws to arise in shared code, making it extremely important to know these dependencies and remain vigilant about security updates.
Code signing provides a way for developers to demonstrate to end users that applications are from a legitimate source. Digital signatures.
Software Security Assessment
Risk management should be performed on software too, risk analysis, mitigation, likelihood and impact.
Bolt-on security is security added after the fact; it is usually poorly designed and affects not only the security but also the effectiveness of the code.
There are two main activities that occur during software testing: validation and verification.
Software validation ensures that the software produced by a development effort meets the original business requirements – are we building the right software?
Software verification answers the question: are we building the software right?
Load tests are used to replicate maximum expected load. Developers use automated scripts either internally or through a third-party load testing service to simulate real-world activity on the system under test.
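As a toy sketch of the automated-script idea (a real load test would fire HTTP requests at a staging endpoint; here a local function stands in for the system under test), a thread pool drives many concurrent requests and the elapsed time is measured:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def handle_request(payload: str) -> str:
    """Stand-in for the system under test."""
    time.sleep(0.01)  # simulated per-request processing time
    return payload.upper()

def run_load_test(n_requests: int, concurrency: int):
    """Issue n_requests with the given concurrency; return (count, seconds)."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=concurrency) as pool:
        results = list(
            pool.map(handle_request, [f"req-{i}" for i in range(n_requests)])
        )
    elapsed = time.perf_counter() - start
    return len(results), elapsed

completed, seconds = run_load_test(n_requests=50, concurrency=10)
```

With 10 workers the 50 requests overlap, so the run finishes far faster than 50 sequential 10 ms calls would.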
User acceptance testing or UAT is usually the final phase in software testing. Once developers are confident that the software is correct, they turn it over to end users for their evaluation.
This is usually done in a testing environment where users are asked to simulate real-world transactions without actually altering production data. Also referred to as beta testing.
After releasing code, developers often make minor and major changes to that code to fix bugs discovered post-launch and to add new functionality to the system.
Before releasing these modifications, they conduct regression testing to verify that the changes do not have unintended side effects.
The process of regression testing uses sets of inputs and provides them to both the original system and the modified code. Test packages then verify that the software behaves the same way both before and after the modification
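The idea can be sketched in a few lines (both pricing functions are invented for the example): the same inputs are fed to the original and the modified code, and any disagreement is flagged.

```python
def discount_v1(price_cents: int) -> int:
    """Original pricing logic: 10% off, working in integer cents."""
    return price_cents * 90 // 100

def discount_v2(price_cents: int) -> int:
    """Refactored logic; regression testing must prove behaviour is unchanged."""
    return price_cents * 9 // 10

# The regression input set exercises edge cases as well as typical values.
REGRESSION_INPUTS = [0, 1, 19, 999, 12345]

def regressions():
    """Return the inputs on which the new version disagrees with the old."""
    return [p for p in REGRESSION_INPUTS if discount_v1(p) != discount_v2(p)]
```

An empty result means the modification introduced no behavioural change on the tested inputs.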
Vendor-supplied software – purchased from third-party vendors but run and managed by the customer.
SaaS runs on vendor-managed servers and is also developed by the vendor.
Even when an organization purchases software as a service, the application administration team likely still retains some security responsibilities that are not covered by the vendor. Such as secure configuration.
Weird tips and tricks:
A switch in modern networking is a network node that forwards packets toward a destination depending on a locally significant connection identifier over a fixed path. This fixed path is called a virtual circuit and is set up by a signalling protocol (a switched virtual circuit, or SVC) or by manual configuration (a permanent virtual circuit, or PVC).
Apparently NIST has a standard pertaining to perimeter protection that states critical areas should be illuminated eight feet high with two foot-candles of power. Who makes this shit up? It is recommended that the disaster recovery plan and business continuity plan (DRP + BCP) be tested at minimum once a year.
Apparently, an emergency plan should be implemented immediately after a disaster to secure the area in order to prevent looting, fraud or vandalism?
Halon fire extinguishers suppress combustion through a chemical reaction that “kills” the fire, although halon is no longer commonly used and is banned commercially in the EU over ozone depletion issues.
A transponder responds with an access code to signals transmitted by a reader.
A voltage spike is a momentary high voltage; a voltage surge is a prolonged high voltage.
A momentary power outage is referred to as a fault; a prolonged outage is a blackout.
Bromine is not EPA approved as a replacement for Halon.
Apparently fax machines are more secure than fax servers.
Point-to-Point Tunnelling Protocol (PPTP, rarely used now) – the GRE tunnel is used to carry encapsulated PPP packets, allowing the tunnelling of any protocols that can be carried within PPP, including IP, NetBEUI and IPX.
Apparently session hijacking cannot be safeguarded against, not even through mutual authentication using protocols such as IPsec; it can only be safeguarded against by logging out or by editing browser architecture.
Clipper chip = bad NSA-developed chip that promoted backdoors through government key escrow; it used the Skipjack algorithm with an 80-bit key.
DES is 56 bit key.
Maintenance hooks get around the system's or application's security and access control checks by allowing whoever knows the key sequence to access the application and most likely its code. Maintenance hooks should be removed from any code before it gets into production.
RSA can be used for data encryption, key exchange, and digital signatures. DSA can only be used for digital signatures.
Circuit-based firewalls make decisions based on header information, not the protocol's command structure.
Application-based proxies are the only ones that understand this level of granularity about the individual protocols.
Secure Multipurpose Internet Mail Extensions (S/MIME) is a standard for encrypting and digitally signing e-mail and for providing secure data transmissions using public key infrastructure (PKI).
Message authentication code (MAC) is a cryptographic function.
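Python's standard library shows the idea directly: an HMAC tag computed over a message with a shared secret proves both integrity and authenticity, since any change to the message or the tag makes verification fail (the key value below is just an example).

```python
import hmac
import hashlib

# Shared secret known to both sender and receiver (hypothetical value).
SECRET_KEY = b"shared-secret-key"

def tag_message(message: bytes) -> str:
    """Compute an HMAC-SHA256 tag over the message."""
    return hmac.new(SECRET_KEY, message, hashlib.sha256).hexdigest()

def verify_message(message: bytes, tag: str) -> bool:
    """Recompute the tag and compare in constant time to resist timing attacks."""
    return hmac.compare_digest(tag_message(message), tag)

tag = tag_message(b"transfer 100 to account 42")
```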
Kerberos is made up of a KDC, a realm of principals (users, services, applications, and devices), an authentication service, tickets, and a ticket granting service.
Instantiation is what happens when an object is created from a class. Polyinstantiation is when more than one instance of the same object is created, with each copy given different attributes.
The Graham-Denning model is a computer security model that shows how subjects and objects should be securely created and deleted.
A potential vulnerability of the Kerberos authentication server is single point of failure
Prudent man rule – Senior executives must take all reasonable steps/precautions exercising the same due care that an ordinary prudent man would in the same situation.
Strategic plans are long term (5+ years) defining long terms goals whereas operational tend to be around 1 year and tactical are reviewed monthly/quarterly.
Processes run in isolation, which prevents them from sharing memory with other processes and so prevents conflicts.
Five elements of AAA services = Identification, authentication, authorisation, auditing, accounting.
Bounds are the limits set on memory within which an isolated process is confined.
The definition of Secret material is that its unauthorised disclosure would have significant effects, causing critical damage to national security (Top Secret is ‘drastic effects’ and ‘grave damage’).
States that a port can be in are open, closed, or filtered.
DREAD is part of a system for risk-assessing computer security threats previously used at Microsoft; although currently used by OpenStack and other corporations, it was abandoned by its creators. It provides a mnemonic for risk-rating security threats using five categories: Damage, Reproducibility, Exploitability, Affected users, and Discoverability.
PAP credentials are transferred as ‘clear text’ and are not encrypted
Port 1521 is used for connections to Oracle database servers and should never be exposed to a public network.
The ITIL framework provides 9 KPIs that security programs may choose to leverage.
- The % decrease in security breaches reported to the service desk.
- The % decrease in the impact of security breaches.
- The % increase in SLAs that have appropriate security clauses.
- The number of preventive security measures the organization implemented in response to security threats.
- The time elapsed between the identification of a security threat and the implementation of an appropriate control.
- The number of major security incidents.
- The number of security incidents that created service outages or impairments.
- The number of security test, training, and awareness events that took place.
- The number of shortcomings identified during security tests.