The GDPR Fever: Engineering Information Privacy
This is the fifth post in the GDPR Fever series, which explains whether and how DLP technologies can be used to achieve GDPR compliance.
Undeniably, the most significant innovation for corporate IT departments dealing with the GDPR is the “data protection by design” principle introduced in Article 25. Together with the “data protection impact assessment” prescribed in Article 35, it essentially mandates the use of privacy engineering by organizations that develop or overhaul IT systems processing personal data in order to make them GDPR-compliant.
Specifically, Article 25(1) of the GDPR provides that:
“The controller shall, both at the time of the determination of the means for processing and at the time of the processing itself, implement appropriate technical and organisational measures, … which are designed to implement data-protection principles … in an effective manner and to integrate the necessary safeguards into the processing in order to meet the requirements of this Regulation and protect the rights of data subjects.”
This legally binding norm of implementing data protection principles alongside core business functions at the design stage of system development categorically requires that organizations engineer personal data protection capabilities – which is exactly the mission of privacy engineering, defined by NIST as “a specialty discipline of systems engineering focused on achieving freedom from conditions that can create problems for individuals with unacceptable consequences that arise from the system as it processes PII.”
The core feature of privacy engineering is that its risk modeling is based on a “privacy impact assessment” process or, in the GDPR’s lexicon, a “data protection impact assessment” (DPIA). The Article 29 Data Protection Working Party, an advisory body to the European Commission, defines a DPIA as:
“A process designed to describe the processing, assess its necessity and proportionality and help manage the risks to the rights and freedoms of natural persons resulting from the processing of personal data by assessing them and determining the measures to address them. … In other words, a DPIA is a process for building and demonstrating compliance.” The GDPR allows organizations to use any privacy engineering framework that includes privacy risk assessment, such as those from MITRE and NIST, as well as the forthcoming international standard ISO/IEC 27550 (2nd Working Draft).
Data controllers and processors can also choose any systematic DPIA process or methodology “provided it takes account of the components described in Article 35(7)”. In the “Guidelines on DPIA”, these requirements are interpreted into a list of criteria for checking whether a DPIA is sufficient to comply with the GDPR. In addition, the guidelines provide a list of acceptable existing DPIA frameworks developed in different EU countries, and refer to the ISO/IEC 29134:2017 standard, which provides guidelines for privacy impact assessment.
Shifting the focus from the concepts to a more technical level raises three questions important for understanding the practical aspects of “data protection by design” implementation.
The first question is what does this principle actually mean for system architects and designers?
It means that, in addition to the system’s core business functionality, data protection becomes another core processing function. Hence, it must be designed in the main system design process together with the system’s business applications. Crucially, the entire processing system will be ready for release into production only after its data protection features have been implemented and tested at the same level of quality assurance as the system’s core business functions.
The second practical question is: for which processing operations should a data protection impact assessment be performed?
As Article 35(1) provides, a DPIA should be performed for every processing operation that “is likely to result in a high risk to the rights and freedoms of natural persons” – that is, when the disclosure or misuse of personal data in the operation would lead to significant damage to data subjects. In addition, Article 35(3) directly specifies three cases of processing activities for which a DPIA must be performed regardless of the level of associated risk.
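The Article 35(1) general test and the three Article 35(3) cases can be sketched as a simple decision check. The `ProcessingOperation` record and its field names below are hypothetical illustrations, not terms from the Regulation:

```python
from dataclasses import dataclass

@dataclass
class ProcessingOperation:
    # Hypothetical record describing a processing operation;
    # field names are illustrative only.
    automated_profiling_with_legal_effects: bool  # Art. 35(3)(a)
    large_scale_special_categories: bool          # Art. 35(3)(b)
    systematic_public_monitoring: bool            # Art. 35(3)(c)
    likely_high_risk: bool                        # general test, Art. 35(1)

def dpia_required(op: ProcessingOperation) -> bool:
    """A DPIA is mandatory in the three Art. 35(3) cases regardless of
    the assessed risk level, and otherwise whenever the processing is
    likely to result in a high risk (Art. 35(1))."""
    return (op.automated_profiling_with_legal_effects
            or op.large_scale_special_categories
            or op.systematic_public_monitoring
            or op.likely_high_risk)
```

For example, large-scale processing of special categories of data triggers a DPIA even when the operation is not otherwise assessed as high-risk.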
And the third important question is: what will the DPIA-based process of integrating data protection into the system design generally look like? A simplified sequence of steps in this process is outlined below:
- Steps in the scope of the DPIA phase:
- A processing operation, action, or activity is evaluated with respect to the types and level of its associated risks, including both data security and data misuse risks.
- For every identified high risk of an operation, the risk is classified by the GDPR requirements that would be violated if the risk event occurred (e.g., data protection principles, security of processing, etc.).
- Modeling tools available in the DPIA framework chosen for the project are used to define which fundamental data protection goals (confidentiality, integrity, availability, unlinkability, transparency, and intervenability) correspond to the GDPR requirements affected by the risk assessed.
- Finally, the DPIA framework’s guidelines are used to choose the appropriate technical safeguards and organizational measures it recommends for eliminating or reducing the assessed risk to an acceptable level.
- Two other engineering steps follow the DPIA phase in the cycle of implementing data protection in the system:
- A solution is architected and designed to implement the chosen safeguards and neutralize the risk.
- A set of tests is conducted to verify that the implemented solution (a) addresses the risk(s) to a sufficient degree, (b) does not negatively impact the system’s business application functions, and (c) has an acceptable implementation cost given the severity and probability of the addressed risk. If the verification fails, the next iteration starts from the fourth step of the DPIA phase (safeguard selection), followed again by the two engineering steps, cycling until either a satisfactory solution is designed or the risk is finally recognized as irremovable in the system’s context. In the latter case, this must be reported to the project’s management and then raised with the relevant Data Protection Authority (DPA) for prior consultation.
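The iterative DPIA-and-engineering cycle above can be sketched as a loop. The helper callables (`choose_safeguards`, `design`, `verify`) and the iteration cap are hypothetical stand-ins for whatever DPIA framework and engineering process the project actually adopts:

```python
MAX_ITERATIONS = 5  # arbitrary cap for illustration

def integrate_data_protection(high_risks, choose_safeguards, design, verify):
    """For each identified high risk, cycle through safeguard selection,
    solution design, and verification until a satisfactory solution is
    found or the risk is recognized as irremovable.

    Returns (accepted, irremovable): a dict mapping each risk to its
    accepted solution, and a list of risks that must be escalated to
    management and the DPA for prior consultation.
    """
    accepted, irremovable = {}, []
    for risk in high_risks:
        solution = None
        for attempt in range(MAX_ITERATIONS):
            safeguards = choose_safeguards(risk, attempt)  # DPIA step: pick measures
            candidate = design(safeguards)                 # engineering step: design
            if verify(candidate, risk):                    # engineering step: test
                solution = candidate
                break
        if solution is None:
            irremovable.append(risk)    # escalate: consult the DPA
        else:
            accepted[risk] = solution
    return accepted, irremovable
```

The point of the sketch is the control flow: verification failure does not abort the process but re-enters the safeguard-selection step with a new attempt, and only exhaustion of options turns the risk into a management and DPA escalation.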
Indeed, extending the system development culture toward privacy engineering will require organizations to invest significantly more competence, effort, and time to succeed in future development or overhaul projects. However, these investments will pay off many times over through savings on avoided data breach costs and fines for noncompliance with the GDPR – all because of the following crucial benefits that privacy engineering delivers to organizations:
- It enables them to implement personal data protection through a proven, controllable, predictable, and repeatable method.
- By practicing privacy engineering, they can confidently achieve and maintain GDPR compliance.
- Privacy engineering makes the processes of implementing data protection and achieving compliance fully coherent – in fact, they both are accomplished as a single integrated methodical process.
- Being an imperative for achieving GDPR compliance, privacy engineering elevates the significance and priority of data protection to the level of core business application development and, as a result, helps secure the budget and resources necessary for this task.
Posts in the GDPR Fever series:
- There Is No Privacy Without Security
- Deciphering the "Integrity & Confidentiality" Principle
- From Legal to Technical – Landing the GDPR at the IT Field
- DLP Is Necessary for GDPR Compliance
- Engineering Information Privacy
- DLP by Design