An Introduction to Zero-Knowledge and ZK SNARKS

16th December 2020 by Dr Mark Blunden @id-3

An Introduction to Zero-Knowledge and ZK SNARKS
Suppose that you wish to prove to another party (who we will call the Verifier) that you know a secret password. What typically happens is that you tell the password to the Verifier, who checks it against a previously stored version (in practice, the Verifier will not store your actual password, but rather a value derived from it using a cryptographic hash function or similar one-way function).
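As an aside, this conventional check can be sketched in a few lines of Python using the standard library (the password, salt handling and iteration count here are illustrative only):

```python
import hashlib
import os

# The Verifier stores a salted, slow hash of the password, not the password
# itself. To authenticate, the Prover must reveal the password.
salt = os.urandom(16)
stored = hashlib.pbkdf2_hmac("sha256", b"hunter2", salt, 100_000)

def verify(candidate: bytes) -> bool:
    # Recompute the derived value and compare it with the stored one.
    return hashlib.pbkdf2_hmac("sha256", candidate, salt, 100_000) == stored

assert verify(b"hunter2")       # correct password accepted
assert not verify(b"letmein")   # wrong password rejected
```

Note that the Verifier never learns the password from `stored` alone, but the Prover still has to send the password itself during the check, which is precisely the problem zero-knowledge addresses.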


A clear downside to this is that you (the Prover) are revealing your password to the Verifier. What Zero-Knowledge (ZK) does is enable you to prove something (such as knowledge or possession of a password or a cryptographic secret) to another party but do so without revealing any information about your actual secret. That is, the Verifier gains no (or zero) knowledge about the secret, other than learning that the Prover does indeed know the secret. If the Prover knows the secret then the ZK protocol should always succeed in convincing the Verifier of this, but if a Prover does not know the secret then the proof should fail with overwhelming probability.


The concept of Zero-Knowledge dates back to the 1980s. Zero-Knowledge proofs can be interactive or non-interactive, and protocols have evolved over the years. Early schemes include proving knowledge of a cryptographic secret (such as a Discrete Log), which may be used for identification purposes. More recent developments include ZK SNARKS, and new areas of application include cryptocurrencies with both Zcash and Ethereum using ZK protocols to provide privacy.


The acronym ZK SNARK stands for ‘Zero Knowledge Succinct Non-Interactive Argument of Knowledge’. The property of succinctness comes from the proof being relatively small in both size and the amount of computation required to verify it. SNARKs are a class of proof, and examples include Bulletproofs, Plonk, Sonic, and one commonly referred to as Groth16 (denoting the author and year). Schemes differ in the size of the proof, the efficiency of proof creation and verification, and trusted setup requirements.


So, returning to our earlier password example, a SNARK could be used to prove knowledge of a secret (such as a password) that hashes to a given value. However, using a SNARK in such a way for identification purposes does not really make sense – for one thing, it is non-interactive, and user authentication typically requires interaction between the Prover and Verifier to show the presence or ‘aliveness’ of the Prover. However, this example can help illustrate the difference between SNARKs and other more traditional ZK proofs.


An example of an interactive ZK proof for identification is where the Prover has a Public Key and proves to the Verifier, in zero knowledge, knowledge of the corresponding Private Key. To do this, the Verifier asks one or more questions. If the Prover knows the secret (the Private Key), then they can always answer correctly. If they do not know the secret then, depending on the protocol being used, they will either not be able (in any practical sense) to answer correctly, or may only be able to answer correctly some of the time. In the latter case, the process is repeated sufficiently many times that the likelihood of a Prover who does not know the secret answering correctly every time becomes negligible. In either case, the verification process used by the Verifier is to check that the Public Key and the answer provided by the Prover are mathematically consistent, and the protocol ensures that in doing so the Verifier does not learn any information about the value of the Prover’s Private Key.
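One concrete instance of this pattern is Schnorr identification, in which the Prover demonstrates knowledge of a discrete logarithm. The sketch below runs a single round in Python with toy parameters (far too small for real security) purely to illustrate the commitment, challenge and response flow:

```python
import secrets

# Toy Schnorr identification. p is prime and g generates a subgroup of
# prime order q modulo p. These parameters are illustrative only; real
# deployments use groups of at least 256-bit order.
p, q, g = 23, 11, 2

x = secrets.randbelow(q - 1) + 1   # Prover's Private Key
y = pow(g, x, p)                   # Prover's Public Key

# One round of the interactive protocol:
r = secrets.randbelow(q - 1) + 1   # Prover's random nonce
t = pow(g, r, p)                   # commitment sent to the Verifier
c = secrets.randbelow(q)           # Verifier's random challenge
s = (r + c * x) % q                # Prover's response

# The Verifier checks mathematical consistency with the Public Key,
# without learning anything about x itself:
assert pow(g, s, p) == (t * pow(y, c, p)) % p
```

The check works because g^s = g^(r + cx) = t · y^c (mod p); a Prover who does not know x can only satisfy it by guessing the challenge in advance.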


SNARKs work in a different way. A proof consists of public and secret inputs and a circuit (or series of mathematical steps or operations) that performs a computation on the inputs. In the example of proving knowledge of an input to a hash function, the public input will be a hash (output) value, and the secret input (such as a password, in the above example) will be an input to the hash function. The circuit computes the hash of the secret input and compares this with the public hash value that formed a part of the proof input, giving a true or false output depending on whether the comparison results in a match. The proof shows that a Prover has knowledge of secret input(s) that for the given public inputs cause the circuit to output a value of true. In a more general sense, the proof also demonstrates that a computation – as defined by the circuit – has been done correctly (in the sense that the circuit evaluates to true). The objective is that once created the proof can be verified at any later time and by any verifying party without requiring any interaction with the Prover.
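To make the roles concrete, here is the hash-preimage relation from the example expressed as a plain Python function. This is only the statement the circuit encodes; an actual SNARK would let the Verifier check that the output is true without ever seeing the secret input:

```python
import hashlib

def circuit(public_hash: str, secret_input: str) -> bool:
    """The statement a SNARK circuit would encode: does the secret
    input hash to the given public value?"""
    return hashlib.sha256(secret_input.encode()).hexdigest() == public_hash

# The public input is known to everyone; only the Prover knows the secret.
public_hash = hashlib.sha256(b"correct horse battery staple").hexdigest()

assert circuit(public_hash, "correct horse battery staple")  # evaluates to true
assert not circuit(public_hash, "wrong guess")               # evaluates to false
```

A SNARK proof convinces the Verifier that the Prover knows *some* `secret_input` making `circuit` return true for the agreed `public_hash`, without disclosing that input.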


Non-Interactive Zero Knowledge (NIZK) proofs such as SNARKs require that both the Prover and Verifier have access to some common knowledge defined by a Structured Reference String (SRS), sometimes referred to as a Common Reference String (CRS). The SRS includes information about the relation or statement (which defines the circuit) being proved. For succinctness, some SNARKs use a summary of the statement being proved. The SRS also includes public Evaluation and Verification Keys, where the former is the set of values required by the Prover to create the proof, and the latter is the set of values required by the Verifier to verify the proof.


It is the creation of the SRS in a pre-processing phase that allows for the succinctness and performance advantages of some SNARKS, with alternative approaches currently having NIZK proofs that are larger or take longer to verify.
The SRS used in many NIZK proofs are dependent on random and secret information, and the security is dependent on this information remaining secret. Knowledge of such information allows the protocol to be subverted, which is why the secret information should be discarded and ‘forgotten’ once the SRS has been generated. The secure creation of the SRS is of vital importance, and is also a distinguishing feature between different SNARK schemes. Furthermore, an SRS is typically specific to a relation (or circuit), and so a new SRS is required whenever the relation changes.
One option is to use a trusted party. However, this can lead to issues such as to how to select a party that is widely trusted and accepted. An alternative option is to use a form of multi-party computation to collaboratively construct the Evaluation and Verification Keys. The aim is to involve a large number of parties, and the process should ensure that the resulting SRS is sound provided at least one of the participants is honest and erases their own secrets. This can result in trusted setup ceremonies that are open to all to participate, a by-product of which may be to engender an affinity for the scheme amongst the participants. An example of the use of trusted setup ceremonies is Zcash which established in such a way two distinct sets of parameters, one in 2016 for Zcash Sprout, and one in 2018 for Sapling.
Another approach is to create a Universal Reference String that can be used for any circuit up to a given size, thus removing the need to generate a new SRS every time the circuit (or relation) changes. Sonic is an example of a scheme that is universal and also updatable. Although Sonic does still require an initial trusted setup, participants can continue to add randomness by updating the SRS. As the reference string is updated, it is strengthened.


ZK SNARKs are a very important tool for providing privacy in blockchain cryptocurrencies such as Zcash, where they enable digital transactions to remain private. Address and transaction information can be selectively shared for auditing or regulatory compliance. However, comprehensive protection requires more than just zero-knowledge proofs. The wider adoption of such currencies, including the entry of institutional capital, also requires extremely secure processing, for example for custody and transfers. Hardware Security Modules (HSMs), such as the Secure Vault HSM from HUB Security, provide customisable handling of digital assets in an extremely secure environment. Bringing all these pieces together can ensure the value and protection required.

We are interested to know what data you are planning to protect. Complete the ID-3 Data Protection Questionnaire to find out how we can help you.

 

Are encryption keys more important than your data?

19th May 2020 by Brad Beutlich @nCipher

Today, more than ever, protecting data and systems is extremely important – corporate reputations, and in turn their business, can depend on it. There are many layers to data security and, for companies who rightly don’t trust perimeter security, data encryption is the most important layer. Even at this layer however, there’s an even more important level that many companies don’t protect: the encryption key. With this in mind, I ask this question: Are your encryption keys more important than your data?

When faced with this question, the immediate response will most likely be “The data, of course.” Upon further contemplation, however, most people might change their answer. It’s not as black and white as you might think.

Let’s begin by asking two more questions: Would people disclose private information if they knew that it would be compromised? Could modern commerce be conducted without the promise of privacy? The answer to both of these questions is of course “No”.

Something that people might not completely understand, because it’s not often discussed, is that all modern encryption processes are publicly known. Gone are the days of security by obscurity. The processes for today’s popular cryptographic algorithms like ECC, AES, DES and RSA are well documented and understood. For thousands of years however, encryption processes themselves were considered secret. The problem with this practice is that the processes couldn’t be sufficiently tested. These old solutions were secure until someone cracked them, and then they were obsolete. All it took was one person to crack the system – and no one knew who that one person was or when the process was cracked. This was of course what happened in WWII with the Enigma machine used by the Nazi regime. Both the process and the key were secret until the Allies cracked them – and the Nazis never knew when their process was compromised.

Modern cryptographers realized the folly of securing something by using a secret process AND a secret key. Some might argue that doubling the complexity is a good thing, but it’s the lack of vetting by peer review that makes a secret process so vulnerable. The other thing to consider is that in a secret process there is always at least one person who knows the secret. What is stopping that person, and therefore the secret, from being compromised or having the perception of compromise? A secure encryption solution can have only one secret in order to be secure. This is why a new process needed to be developed, and the process needed to be tested, with only one secret: the key. The only secret is the cryptographic key generated by the known process and subsequently used in a complex mathematical equation to make the data unreadable to anyone other than those who know the secret key. Another way to look at this is that the data may or may not be confidential, but in all cases the key must be secret. If the secret key is disclosed, then all of the data is disclosed.
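The point that the algorithm can be fully public while only the key is secret can be illustrated with the simplest possible cipher, a one-time pad. This is an illustrative sketch only; real systems use vetted public algorithms such as AES:

```python
import secrets

# The "algorithm" here (XOR with a random key of equal length) is
# completely public knowledge. The only secret is the key.
def xor_cipher(data: bytes, key: bytes) -> bytes:
    return bytes(d ^ k for d, k in zip(data, key))

message = b"attack at dawn"
key = secrets.token_bytes(len(message))   # the single secret

ciphertext = xor_cipher(message, key)
assert ciphertext != message
assert xor_cipher(ciphertext, key) == message   # key holder can decrypt
```

Publishing `xor_cipher` costs nothing; disclosing `key` discloses everything, which is exactly the asymmetry modern cryptography is built on.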

The English language is full of idioms for different situations. There is a reason why the term “weakest link” has so many alternatives: Achilles’ heel, house of cards, kryptonite, single point of failure, fatal flaw, soft underbelly, gaping hole, etc. It is primarily because this situation occurs so often in real life that one expression alone would end up being redundant and boring. In the case of a cryptographic key, there aren’t enough idiomatic expressions to sufficiently express the modern-world devastation resulting from the compromise of a cryptographic key.

One of today’s data protection challenges is that while most security professionals understand the strength of standardized encryption through peer vetting, they are not so aware of the singular importance of keeping the key protected. There’s a very old little ditty that ends with “And all for the want of a horseshoe nail.” It’s a cautionary tale about how something as simple as a horseshoe nail can take down an entire kingdom. Cryptographic secrets are just as underappreciated and much more important. I’d like to offer a modern equivalent to make my point:

For the want of a crypto key the data was lost,
For the want of the data a reputation was lost,
For the want of a reputation the sales were lost,
For the want of the sales the revenue was lost,
For the want of the revenue the business was lost,
And all for the want of a crypto key.

Unlike a horseshoe nail, a cryptographic key can be protected with something as simple as a hardware security module (HSM), but unfortunately, in most companies, they are not. Organizations who wish to protect their data or systems using cryptography must begin to realize that the key used to protect the data or systems is more important than the data itself. Until they do, our personal data, corporate data and systems are open to compromise. Is the key more important than the data? It’s not black and white after all, is it?

nCipher shows how nShield BYOK can strengthen your cloud key management practices.
 

 

IoT security failures as product defect: the coming wave of strict liability

26th April 2019 by Robert Carolina
Executive Director @Institute for Cyber Security Innovation, Senior Visiting Fellow, Information Security Group,
Royal Holloway University of London

 

Victims of defective products are not required to demonstrate the “fault” of a product manufacturer. It’s enough to demonstrate the existence of a defect in the product that causes harm. Under European laws, “a product is defective when it does not provide the safety which a person is entitled to expect taking all circumstances into account…” [1] at Art.6; [2] at s.3.

 

Product strict liability has always been a source of concern for manufacturers (and importers, who are subject to the same liability). They are obviously concerned about liability in the absence of fault. Unlike many other forms of liability (like warranty), manufacturers are practically unable to limit this liability to victims who sue alone or collectively in a class action.

Two important conditions must exist before a victim can succeed on a strict liability claim:

  1. There must be a “product” which is defective
  2. A victim harmed by a defective product can only use this legal theory to claim compensation for death or personal injury (or damage to non-commercial property under the laws of the EU). Economic harm, business interruption, loss of business revenue, etc., are not recoverable under this theory.

These two conditions made strict liability a niche topic or an intellectual curiosity for most lawyers working in the fields of software development and cyber security, and meant that it was traditionally overlooked in these fields. For decades we have taken comfort in the widely shared legal opinion that software, as such, does not fit within the definition of “product” under European or American laws. Even if software was to be viewed as a product, we reasoned, opportunities for defective software design to cause death or personal injury seemed exceedingly rare.

One long-understood risk of strict liability concerns defective software control systems as a component in safety-critical hardware. The manufacturer of the resulting defective hardware is subject to strict liability claims, irrespective of the source of the defect. This risk can be illustrated with the example of the Therac-25 radiation therapy machine. Between 1985 and 1987, six patients treated using the Therac-25 were exposed to massive radiation overdoses (100x the intended dose). Three of these patients died as a result of the overdoses. The design of the machine’s system control software is widely cited as a cause of the overdose incidents, which were thankfully rare. [3]

Under a strict liability analysis, the Therac-25 device as a whole is a “product”. If the machine failed to provide the “safety which a person is entitled to expect,” such a product would be defective and the manufacturer strictly liable for personal injury or death. The fact that the flaw originated in control software would be irrelevant.
For decades, my legal colleagues and I rested comfortably in the belief that software errors (including software security flaws) rarely killed anyone. Today, by contrast, the IoT presents a rapidly growing set of opportunities for “death by software”. A net-connected software-controlled product (e.g., an autonomous vehicle, an industrial control system, a pacemaker, a vehicle using fly-by-wire) that fails to deliver appropriate safety is defective whether the safety is compromised through the design of electrical, mechanical, software, or security systems. Thus strict liability applies whether safety is compromised through errors in algorithmic decision-making (e.g., an autonomous vehicle decides to swerve into oncoming traffic after misreading road markings) or security errors (e.g., a broken authentication scheme permits a remote hacker to divert the same vehicle into oncoming traffic).

While the hardware product manufacturer (or importer) is clearly subject to the risk of strict liability, what about those in the upstream supply chain? What if, for example, the manufacturer of the Therac-25 had purchased their control software from a third party as a component, or the autonomous vehicle manufacturer adopts and installs a defective authentication package embodied in third-party software?

Under current law, defective component “product” manufacturers face strict liability. A manufacturer of defective brakes, for example, is strictly liable for personal injury caused by automobiles which become defective because the defective brakes are installed. Software (on its own) is not currently thought to be a product in this area of law. The author of a defective software component probably cannot face a strict liability claim from an injured victim, even if the software caused the hardware product to harm the victim. This may be about to change.
More than three decades have passed since the 1985 adoption of the European Directive on product strict liability [1]. The reliance society places on software and online services has become a central feature of everyday life. European policy makers have noticed, and the tide of product liability policy appears to be shifting.

The European Commission completed a comprehensive evaluation of European product liability law in 2018. The term “software” features prominently, and repeatedly, in the 108-page report [4]. The Commission openly questions the extent to which “digital products” (e.g., software as a product, SaaS, PaaS, IaaS, data services, etc.) should be redefined as “products” and thus subjected to strict liability analysis when defects cause death or personal injury [5]. A Commission Expert Group on liability and new technologies is currently examining possible changes to the law. Expanding the definition of “product” is central to this review.

We seem to be accelerating towards a world in which cyber security failures in the IoT will create increasing risk to life and limb. Manufacturers of tangible IoT products already face strict liability if their product is unsafe, including cases where safety is compromised by poor cyber security. It appears that software developers, SaaS providers, and other cloud service providers may soon be required to step up to this same stringent standard of responsibility throughout Europe. We hope they’ll be prepared for the challenge.

Works Cited:

[1] European Economic Community, Council Directive of 25 July 1985 on the approximation of the laws, regulations and administrative provisions of the Member States concerning liability for defective products (85/374/EEC), vol. L210, 1985, p. 29.
[2] Consumer Protection Act 1987.
[3] N. Leveson, “Medical Devices: The Therac-25,” in Safeware: System Safety and Computers, Addison-Wesley, 1995.
[4] European Commission, Evaluation of Council Directive 85/374/EEC of 25 July 1985 on the approximation of the liability for defective products, Brussels, 2018.
[5] European Commission, Liability for emerging digital technologies, Brussels, 2018.

The spectre of poor HSM procedures

26th April 2019 by Elton Jones @ID-3

 

In this article, we briefly outline the challenges faced by regulated organisations that need procedural control over their HSM estate, and provide an insight into how limited awareness, resources and expertise pose significant challenges to production system adoption.

The need for procedures

Regulation dominates the payment systems behind global financial institutions, primarily because most financial service providers are regulated to a standard by a local payments network provider such as LINK UK or an international provider such as VISA or Mastercard. In both cases, regulatory controls such as the Payment Card Industry (PCI) PIN standard apply, either because PCI PIN is mandated directly by the international providers’ regulator (the PCI Council) or because the local network provider adopts, legislates and regulates the relevant controls mandated in the PCI standard.

Regulatory compliance is at the forefront of every payment service provider’s service enablement strategy. It is the primary reason why the technical and procedural controls are in place. Failure to comply with the regulatory requirements through poor procedures could lead to steep penalties levied by the card brands and suspension of network use or card issuance for the issuer. Such a threat means that card issuers and switch service providers are highly sensitive to any aspect of the regulation not being adhered to, and poor procedures are a factor. Furthermore, poor procedures could lead to a costly security breach followed by the risk of reputational damage.

A significant aspect of regulatory control is the development and demonstrable use of procedures. PCI PIN Security mandates that virtually every control objective related to keys, components or HSMs has a policy and procedure. Furthermore, top-down business ownership of the procedures must be in place, and all affected parties (key custodians, supervisory staff, technical management, etc.) must be aware of those procedures. Without procedures, compliance is simply not possible. The fact is that most business owners don’t really know where they stand with the state of their procedures until it is too late, and procedure custodians feel vulnerable about their liabilities at audit time.

HSM-related procedures enable control and consistency over the way a company achieves an objective, as well as demonstrating retrospective control over its device, component and key management activities, sometimes long after the activity has taken place. Crucially, signed procedures are the only way to attest to an event, after its performance, to an auditor, and so attested performances must be meticulously planned and coordinated to correctly capture the event. 61% of respondents in the 2019 Ponemon Global Encryption Trends Survey state that key management is painful, and the right procedures can help address the most commonly cited reasons.

The industry faces a common problem: it is difficult to create and maintain a reasonably reliable quality programme for the creation of procedures. This issue needs to be addressed.

In my experience, procedures often do not get enough visibility until very late in a project, and little time, budget and resource allocation is assigned to them. The lack of resources given to the creation of procedures means that they are typically:

  • Not demonstrative, because they do not sufficiently detail the process
  • Missing the right sign-off at the relevant points, and therefore unable to attest to the performance

This lack of detail and attestation leaves little for an auditor to validate and without procedures there is a strong chance that the platform simply won’t make production.

The irony is that businesses all over the world, using the same HSM products to satisfy the same regulation for interaction with similar card schemes, require near carbon copies of the same procedures, yet without help they face varying problems. This help is not available because vendors are interested in product delivery and businesses are interested in service delivery; very little support is available to solve the problems that exist in the gaps.

Procedure creation is typically left to custodians within the business with little understanding of the intricacies of HSM implementation. In some cases an HSM was a reluctantly inherited item of hardware, considered as simply an encryption router and treated as such.

Having worked for a vendor for many years as an HSM SME, my consulting role was typically to focus on the implementation of the HSM into its environment and any associated “near box” activities. I could be involved with pre-production or test, but it was unlikely that I would ever be part of the wider needs of standards such as ITIL service delivery. Vendor consultants typically do not assist with high- or low-level designs, implementation planning, creation of logs for component access, device inventories, physical inspections or procedure development.

Consideration of the wider service delivery requirement provides a better experience in the handover of an HSM to the business from project to support; anything less is a problem that needs addressing.

Conclusions

Procedure development requires expertise. Certifying on a two-day HSM capabilities training course and passing the exam is a great start to gaining that expertise, but procedure-based training on how to complete daily operations is better. Access to a costly spare HSM is also fantastic for refreshing skills and getting familiar with various controls, but access to the wisdom required to make informed decisions on procedure development, or to have procedures developed for you, is better.

To summarise, some reasons for poor procedures in an organisation are listed:

  • They are not backed by a Security Policy or sufficiently business owned
  • They are not given enough time or budget for success
  • Lack of hands-on HSM experience means that they are created without correct workflow and therefore not accurate
  • They don’t reflect new product updates
  • They can be inconsistent and disjointed due to multiple authors and no peer review
  • They are created without a reasonable knowledge of the regulatory requirements
  • They are created without any consideration of the vendors security recommendations
  • The organisation is unclear of how to change a procedure after an auditor’s non-compliance notice
  • Issues of versioning mean that the procedure is based on legacy features
  • The procedures typically contain missing or incomplete signatories and attestation
  • Paper based procedures become disorganised, lost, incomplete as well as hard to locate when requested
  • On multiple occasions the procedures are not available for the performance

All of the above make HSM procedure management problematic for businesses globally. The business is accountable for the regulation, and it will be the business that receives the non-compliance notice when a procedure does not satisfy its regulatory requirements. Put simply, HSM procedure mismanagement is a ticking time bomb for the unaware and should be considered a threat that could give rise to significant risk that must be managed. Not to take control over procedure management is naive at best and negligent at worst. Organisations need to regularly review how they are managing the risk of poor procedures.

Do you need assistance with your HSM procedure management? Click here.

Cryptographic Key Management – the Risks and Mitigation

24th April 2019 by Guest Blogger Rob Stubbs @ Cryptomathic

With the increasing dependence on cryptography to protect digital assets and communications, the ever-present vulnerabilities in modern computing systems, and the growing sophistication of cyber attacks, it has never been more important, nor more challenging, to keep your cryptographic keys safe and secure. A single compromised key could lead to a massive data breach with the consequential reputational damage, punitive regulatory fines and loss of investor and customer confidence.

In this article we look at why cryptographic keys are one of your company’s most precious assets, how these keys can be compromised, and what you can do to better protect them – thereby reducing corporate risk and enhancing your company’s cyber-security posture.

Introduction

Cryptography lies at the heart of the modern business – protecting electronic communications and financial transactions, maintaining the privacy of sensitive data and enabling secure authentication and authorization. New regulations like GDPR and PSD2, the commercial pressure for digital transformation, the adoption of cloud technology and the latest trends in IoT and blockchain/DLT all help drive the need to embed cryptography into virtually every application – from toasters to core banking systems!

The good news is that modern cryptographic algorithms, when implemented correctly, are highly resistant to attack – their only weak point is their keys. However, if a key is compromised, then it’s game over! This makes cryptographic keys one of your company’s most precious assets, and they should be treated as such. The value of any key is equivalent to the value of all the data and/or assets it is used to protect.

There are three primary types of keys that need to be kept safe and secure:

  1. Symmetric keys – typically used to encrypt bulk data with symmetric algorithms like 3DES or AES; anyone with the secret key can decrypt the data
  2. Private keys – the secret half of public/private key pairs used in public-key cryptography with asymmetric algorithms like RSA or ECDSA; anyone with the private key can impersonate the owner of the private key to decrypt private data, gain unauthorized access to systems or generate a fraudulent digital signature that appears authentic
  3. Hash keys – used to safeguard the integrity and authenticity of data and transactions with algorithms like HMAC-SHA256; anyone with the secret key can impersonate the originator of the data/transactions and thus modify the original data/transactions or create entirely false data/transactions that any recipient will believe are authentic
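As a small illustration of the third category, the following sketch uses Python’s standard library to compute and verify an HMAC-SHA256 tag (the message and key are, of course, made up):

```python
import hashlib
import hmac
import secrets

# Anyone holding this secret key can both create and verify tags, which is
# why the key must be protected as carefully as the data it authenticates.
key = secrets.token_bytes(32)
message = b"transfer 100 to account 42"

tag = hmac.new(key, message, hashlib.sha256).digest()

# A recipient with the same key verifies authenticity (constant-time compare):
assert hmac.compare_digest(
    tag, hmac.new(key, message, hashlib.sha256).digest())

# A tampered message fails verification:
forged = hmac.new(key, b"transfer 9999 to account 42", hashlib.sha256).digest()
assert not hmac.compare_digest(tag, forged)
```

If the key leaks, an attacker can forge tags that verify perfectly, so integrity protection collapses along with confidentiality of the key.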

With an ever-increasing number of keys to protect, and an ever-increasing value of data being protected by those keys, not to mention the demands of PCI-DSS or GDPR, this is a challenge that nearly every business needs to face and address as a matter of urgency.

What dangers await?

There are many threats that can result in a key being compromised – typically, you won’t even know the key has been compromised until it has been exploited by the attacker, which makes the threats all the more dangerous. Here are some of the major threats that should be considered:

Weak keys

A key is essentially just a random number – the longer and more random it is, the more difficult it is to crack. The strength of the key should be appropriate for the value of the data it is protecting and the period of time for which it needs to be protected. The key should be long enough for its intended purpose and generated using a high-quality (ideally certified) random number generator (RNG), ideally collecting entropy from a suitable hardware noise source.

There are many instances where poor RNG implementation has resulted in key vulnerabilities.
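As a small illustration of the RNG point, Python’s `secrets` module draws from the operating system’s cryptographically secure RNG and is an appropriate source of key material, whereas the general-purpose `random` module is predictable and must never be used for keys:

```python
import random
import secrets

# Suitable: a 256-bit key drawn from the OS cryptographically secure RNG.
aes_key = secrets.token_bytes(32)
assert len(aes_key) == 32

# Unsuitable: the Mersenne Twister behind `random` is fully predictable
# once an attacker observes enough of its output - never use it for keys.
weak_key = random.getrandbits(256).to_bytes(32, "big")
assert len(weak_key) == 32
```

On servers and dedicated hardware, the OS RNG is in turn typically seeded from hardware entropy sources, which is what a certified RNG formalizes.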

Incorrect use of keys

Each key should be generated for a single, specific purpose (i.e. the intended application and algorithm) – if it is used for something else, it may not provide the expected or required level of protection.

Re-use of keys

Improper re-use of keys in certain circumstances can make it easier for an attacker to crack the key.
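A classic illustration of why re-use is dangerous (a deliberately simplified scenario, modelling a stream cipher as a one-time pad): if the same keystream encrypts two messages, an attacker can XOR the two ciphertexts together and recover the XOR of the plaintexts, leaking structure without ever touching the key:

```python
import secrets

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

p1 = b"attack at dawn!!"
p2 = b"retreat at noon!"

keystream = secrets.token_bytes(len(p1))  # mistakenly reused for both messages

c1 = xor(p1, keystream)
c2 = xor(p2, keystream)

# The attacker never sees the key, yet the keystream cancels out:
assert xor(c1, c2) == xor(p1, p2)
```

The same failure mode appears in practice as nonce or IV re-use with stream ciphers and counter-mode block ciphers.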

Non-rotation of keys

If a key is over-used (e.g. used to encrypt too much data), then it makes the key more vulnerable to cracking, especially when using older symmetric algorithms; it also means that a high volume of data could be exposed in the event of key compromise. To avoid this, keys should be rotated (i.e. updated / renewed) at appropriate intervals.
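One simple way to enforce rotation in software is to track how much data each key has protected and retire it once a configured cap is reached. The cap, class name and helper below are illustrative assumptions for a sketch, not taken from any standard:

```python
import secrets

class RotatingKey:
    """Tracks how much data a key has protected and rotates at a byte cap."""

    def __init__(self, max_bytes: int = 1 << 30):  # e.g. 1 GiB per key
        self.max_bytes = max_bytes
        self._rotate()

    def _rotate(self) -> None:
        self.key = secrets.token_bytes(32)
        self.used = 0

    def key_for(self, payload_len: int) -> bytes:
        """Return the current key, rotating first if the cap would be exceeded."""
        if self.used + payload_len > self.max_bytes:
            self._rotate()  # retire the old key, issue a fresh one
        self.used += payload_len
        return self.key

rk = RotatingKey(max_bytes=100)
k1 = rk.key_for(60)
k2 = rk.key_for(60)  # exceeds the cap, so a fresh key is issued
assert k1 != k2
```

A real system would also archive retired keys (to decrypt old data) rather than discard them immediately, and would rotate on elapsed time as well as volume.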

Inappropriate storage of keys

Keys should never be stored alongside the data that they protect (e.g. on a server, database, etc.), as any exfiltration of the protected data is likely to compromise the key also.

Inadequate protection of keys

Even keys stored only in server memory could be vulnerable to compromise. Where the value of the data demands it, keys should be encrypted whenever stored and only be made available in unencrypted form within a secure, tamper-protected environment and even (in extreme cases) kept offline.

There have been a number of vulnerabilities that could expose cryptographic keys in server memory, including Heartbleed, Flip Feng Shui and Meltdown/Spectre.

Insecure movement of keys

It is often necessary to move a key between systems. This should be accomplished by encrypting (“wrapping”) the key under a pre-shared transport key (a key encryption key, or KEK), which may be either symmetric or asymmetric. Where this is not possible (e.g. when sharing symmetric transport keys to bootstrap the system), the key should be split into multiple components that must then be kept separate until being re-entered into the target system (and then the components are destroyed).
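The component scheme mentioned above is commonly implemented as an XOR split: generate n-1 random shares and XOR them with the key to form the final share, so that all n components are required to reconstruct it and any subset of n-1 reveals nothing. A minimal sketch:

```python
import secrets
from functools import reduce

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def split_key(key: bytes, n: int = 3) -> list[bytes]:
    """Split a key into n XOR components; all n are needed to rebuild it."""
    shares = [secrets.token_bytes(len(key)) for _ in range(n - 1)]
    shares.append(reduce(xor, shares, key))  # final share = key XOR all others
    return shares

def combine(shares: list[bytes]) -> bytes:
    return reduce(xor, shares)

key = secrets.token_bytes(16)
shares = split_key(key)
assert combine(shares) == key
```

Because each random share is uniformly distributed, any n-1 components together are statistically independent of the key, which is why custodians can each carry one component without ever learning the key itself.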

Non-destruction of keys

Keys should be destroyed (i.e. securely deleted, leaving no trace) once they have expired, unless explicitly required for later use (e.g. to decrypt data). This removes the risk of accidental compromise at some future date.

Insider threats (user authentication, dual control, segregation of roles)

One of the biggest classes of threat that a key faces is insider threats. If a rogue employee has unfettered access to a key, they might use it for a malicious purpose or pass it on to someone else to the same end.

Lack of resilience

Not only must the confidentiality and integrity of keys be protected, but also their availability. If a key is not available when required, or worse still lost due to some fault, accident or disaster with no backup available, then the data it is protecting may also be inaccessible / lost.

Lack of audit logging

If the key lifecycle is not fully recorded or logged, it will be more difficult to identify when a compromise has happened and any subsequent forensic investigation will be hampered.

Manual key management processes

The use of manual key management processes, using paper or inappropriate tools such as spreadsheets and accompanied by manual key ceremonies, can easily result in human errors that often go unnoticed and may leave keys highly vulnerable.

Mitigating the threats

So, what can be done to counter these threats and keep your keys (and your company) safe?

The only effective way to mitigate these threats is to use a dedicated electronic key management system, ideally a mature, proven solution from a reputable provider with good customer references. Any such key management system should utilize a hardware security module (HSM) to generate and protect keys, and to underpin the security of the whole system. If well-designed, such a system will offer the following benefits:

  • Full lifecycle management of keys
  • Generation of strong keys using a FIPS-certified RNG and hardware entropy source
  • Protection of keys using a tamper-resistant HSM
  • Strict policy-based controls to prevent the misuse/reuse of keys
  • Automatic key rotation
  • Automatic secure key distribution
  • The ability to securely import/export keys in components or under a transport key
  • The ability to securely destroy keys at the end of their lifecycle
  • Strong user authentication, segregation of duties, and dual control over critical operations
  • Intuitive user interface and secure workflow management to minimize the risk of human error
  • Support for high-availability and business continuity
  • Tamper-evident audit log, usage log and key histories for demonstrating compliance
  • Ability to respond quickly to any detected compromise

Not only will such a system help protect your keys, it will also boost efficiency, reduce reliance on highly-skilled personnel, and simplify achieving, maintaining and demonstrating compliance with a multitude of standards and regulations such as GDPR, PCI-DSS, HIPAA, SOX and ISO 27001.

The biggest danger of all …

… is inaction! The impact of a key compromise can be substantial:

  • Forensic investigation costs
  • Remediation costs
  • Loss of sensitive information (e.g. industry secrets)
  • Loss of competitive advantage
  • Direct financial losses (e.g. illegitimate financial transactions)
  • Litigation
  • Compensation to customers
  • Fines
  • Loss of reputation
  • Loss of business
  • Reduction in share price
  • Dismissed executives
  • Business closing down (as has been the result of some other data breaches)

Duty of reasonable care

An interesting court case in the USA as long ago as 1932, T.J. Hooper v. Northern Barge Corp., established that a company has a duty of reasonable care to adopt available technology, even where such technology may not yet be regarded as industry standard.

A company operates two tugs, each towing three barges full of coal for delivery. En route, the tugs encountered a storm which sank the last barge of each tug’s tow. The evidence suggests that there was a weather report broadcast over radio which would have warned the tug-captains of the weather and persuaded them to put into harbor. However, the tug-captains only had private radio receiving sets which were broken and their employer did not furnish them with sets for work. At the time of the incident, there was no industry standard or custom of furnishing all boats with radio receivers. [source]

The ruling concluded that “There are precautions so imperative that even their universal disregard will not excuse their omission … We hold the tugs therefore because had they been properly equipped, they would have got the Arlington reports. The injury was a direct consequence of this unseaworthiness.”

If we translate that into today’s world of key management: in the event of a legal case resulting from a key being compromised, a court may well find that a defendant who was not using a key management system (a readily available technology that could have prevented the incident) failed to exercise a reasonable duty of care, even though the use of such systems may not yet be considered industry standard. The moral is that it is better to be seaworthy than to capsize through a lack of reasonable care!

Bring Your Own Key: What Is It and What Are Its Benefits?

29th March 2019 by Elton Jones @ID-3

Ever since cloud computing first emerged, security has been a prime concern of end users. The idea of handing over control of IT systems to third party operators by running hosted applications and infrastructure on remote servers has always sat uncomfortably with a significant number of business owners and CTOs.

These concerns are doubled when it comes to migrating the most sensitive data and the most mission critical applications into the cloud.

Over the years, cloud service providers have been able to win over most doubters on security with cutting edge privacy and anti-malware protections combined with state-of-the-art redundancy protocols. But on data, the disquiet persists. Out of your control, hosted on a remote multi-tenant server, who’s to say who really has access to it?

All data stored on a cloud platform, and indeed all data traffic communicated back and forth between the client and the host, is protected by encryption. But encryption is only as secure as the encryption keys used to decipher it. The question of ownership of the keys then raises its head. If they are also hosted in the same cloud system, managed by the same third-party operator, you are back to square one with the security concerns. If an external party can see or get hold of your encryption keys, they can get access to your data.

Bring Your Own Key (BYOK) is a protocol that aims to resolve this problem by maintaining a fundamental separation of encrypted data and encryption key. While your data might be entrusted into the hands of a cloud service provider, the key is not – at least, not in a way that forfeits the end user’s control over it, or makes it in any way accessible to external parties.

What does a fundamental separation of encrypted data and encryption key mean in practice? In the simplest terms, it means that your cloud host does not encrypt your data for you, or have anything to do with generating the key. BYOK means that encryption keys are generated, stored and applied completely independently by the client – they literally ‘bring their own key’ to enable encryption.

Taking HSM into the Cloud

To achieve secure encryption of data assets in an on-premise system, enterprises have long relied on hardware security modules (HSMs). An HSM is a cryptographic device which generates, stores and safeguards strong keys for the management of encryption across an IT system. Sophisticated and tamper resistant, an HSM can manage multiple digital keys for different use cases.

While HSMs remain a powerful cryptographic tool for on-premise systems, they were not designed for encrypting data held externally. Once you start introducing the public and private cloud, multiclouds and hybrid infrastructures into the equation, HSMs cease to be as effective. Until recently, there was no accepted standard by which cloud providers would accept HSM-generated keys to work on their systems.

The main alternative to HSMs for cloud encryption to date has been Key Management Services (KMS). KMS is an encryption service offered and managed by a cloud provider for use on their own platforms. It offers all the key functionality you would get from an on-premise HSM, but solves the issue of compatibility. But the big drawback goes right to the heart of concerns over ownership and control of encryption – it hands the keys to the same people hosting your data services, which requires a considerable level of trust and removes any claim of ‘sole control’.

BYOK can be seen as offering a middle ground between HSM and KMS, some of the control but less of the overhead. Indeed, HSMs are integral to the BYOK concept. The approach has been pioneered by nCipher, which has developed an HSM product – nShield – that is compatible with the three biggest global cloud platforms, Microsoft Azure, Amazon Web Services (AWS) and Google Cloud.

nShield operates like any other HSM. Encryption keys are generated and stored within the on-premise device, meaning you are not handing over control to a third party provider – you are in charge of how and when the keys are used. The main difference is, nShield keys can be exported to the cloud service provider allowing the provider to encrypt data for applications hosted in the cloud as well as those run on premise.

With AWS and Google Cloud, keys are ‘leased’ to the host temporarily to encrypt digital assets on their servers. After an allocated period, the keys are automatically destroyed so there is no risk of them sitting on remote servers indefinitely waiting to fall into the wrong hands. Decryption is handled on the client side. Because the client retains the master key on premise, they can simply export a copy to the cloud provider as and when needed to update or change their encryption.

Azure Key Vault

With Azure, things work differently. nCipher has worked with Microsoft, which hosts a highly available platform of nShield HSMs in the Azure cloud, the Azure Key Vault, and provides a protocol for securely storing encryption keys in it. The key is used to manage data encryption in the cloud environment, without the provider (Microsoft in this case) having any access to the data. In this formulation, BYOK completely integrates the robust security and control offered by HSM with the flexibility of the cloud.

The Azure Key Vault essentially works by encrypting the encryption key. Using the Transparent Data Encryption (TDE) approach, it encrypts the Database Encryption Key (DEK) stored on the boot page of a database using an asymmetric key known as a TDE Protector.

The TDE Protector can be generated in the client’s own on-premise HSM and then exported to the Azure Key Vault via a secure bridge. But in what is arguably the biggest innovation of the nCipher and Microsoft BYOK approach (the system leverages nShield Security World controls end-to-end), the Azure Key Vault can also generate the TDE Protector itself. This is not like a KMS where the service provider generates and manages the key – this is all still controlled by the customer, but using a secure cloud environment rather than their own HSM.

The key point here is that even when the TDE Protector is generated within the Azure Vault, the cloud provider does not see and cannot extract the key. This maintains the confidence of full control for the client without the need for a highly available on-site HSM. Moreover, the HSM is not required once the tenant key is in the Azure Vault. Data services might be managed in the Azure cloud, but there is still effective and complete separation from responsibility for encryption and security, with the client managing keys and permissions for the Azure Vault migrated keys themselves.

BYOK is therefore all about establishing and maintaining trust in cloud data security by keeping all control of the encryption in the hands of the data owner – as they would have if they were running their databases on site. What the Azure approach demonstrates is that this confidence can still be maintained even when the security protocols themselves are migrated to the cloud.

nCipher shows how nShield BYOK can strengthen your cloud key management practices.
https://youtu.be/lOpaD4vShsU
 
We are interested to know what data you are planning to protect. Complete the ID-3 Data Protection Questionnaire to find out how we can help you.