GDPR (General Data Protection Regulation): Best Practices for Compliance

GDPR replaces the earlier Data Protection Directive, and has applied across the EU and associated states since 25th May 2018. This article won't attempt to discuss the subtleties of the entire regulation, but will instead assist in understanding how the Application works and suggest actions to help you maintain compliance with GDPR. Care should be taken to ensure that your entire system and any advice taken from this article form a compliant whole.
 
This article is written for the most recent versions of both Synergy and CallScripter 4.5; unless a distinction is specifically stated, the advice and processes discussed apply to both products. In either case, it is recommended that you maintain a system that is properly updated in line with our Best Practices.
 

Topics

  1. Definitions
  2. Simplifying Data Management: Workflow Design
  3. Handling of Data Subject Requests
  4. Data Storage Best Practices
  5. Application Data Structure
  6. Scheduled Data Deletion
  7. SQL Data Access Restriction
  8. Database Encryption
 

Definitions

For clarity, this article will use terminology in line with the GDPR definitions where available.
 
  • Synergy or CallScripter 4.5 shall be the Application,
  • Any contact between a person and your company via the Application shall be the Interaction,
  • The representative at your company that undertakes an Interaction shall be the Agent,
  • The person engaging with your Agent shall be the Data Subject,
  • The identifying information you gather regarding the Data Subject shall be Personal Data,
  • Your company shall be the Data Processor,
  • The company that the Data Subject is attempting to contact, and on whose behalf you are acting, shall be the Data Controller.
Note: the Data Controller may be the same company as the Data Processor; for instance, if you run a contact centre handling Interactions exclusively regarding your own company.
 
Note: in recent versions of Synergy, "Scripts" have been renamed to "Workflows" to better align with industry norms. This document will exclusively refer to Workflows for brevity, but it should be understood that a Workflow is synonymous with a Script in earlier versions of Synergy or CallScripter 4.5, and is not the same as the legacy CallScripter 4.5 functionality known as Workflows.
 

Simplifying Data Management: Workflow Design

Some design choices made in Workflows can reduce the amount of Personal Data that is captured in the first place.
 

Only Capturing Necessary Personal Data

Processing only "necessary" Personal Data is one of the core principles of GDPR, and at its most basic this can be served by not capturing Personal Data in the Application unless it is actually required.
 
It might be habitual to collect a Data Subject's name, telephone number, email address, and postal address at the start of an Interaction, but is that actually required for the process in the current Workflow? In many cases these details will be required, but if the Workflow simply serves to take overflow messages for a Data Controller (a "virtual receptionist"), then it may only be necessary to collect a name and preferred response detail.
 
It may be that certain Personal Data are required as part of the contract with the Data Controller, but these Personal Data should be reviewed to ensure that they're appropriate and actually necessary for the Data Controller as part of your GDPR review.
 

Temporarily Using Unnecessary Personal Data in a Workflow

Following on from the above point about only collecting necessary Personal Data, there may be instances where Personal Data is needed for the normal process of completing a Workflow, but isn't required by the Data Controller. For example, a Workflow that allows a Data Subject to amend a shipping order might only require their Customer ID or Shipping ID; however, it is common to ask for the Data Subject's name to allow a normal dialogue during the Interaction.
 
In a case like this it may be desirable to capture Personal Data only for the duration of the Workflow, with no intention to retain it or pass it along to the Data Controller. There are a few different methods to remove the unnecessary Personal Data prior to the completion of a Workflow:
 
  1. Use the Sensitive Data option (Synergy only) to prevent the data being saved back to tbl_Data. Fields with this setting will lose their value upon page transition, although as of version 4.6.32 the value is still stored in tbl_activities.
  2. Use a Clear Data control (Synergy, CallScripter 4.5), with all of the fields containing unnecessary Personal Data placed together as Used Controls on a single page that is cleared prior to completing the Workflow.
  3. Clear the values of individual fields that contain Personal Data prior to completing the Workflow, for example by having a visible JavaScript - Button control (Synergy, CallScripter 4.5) clear the required fields' data before using .click() on a hidden page-navigation control such as a Button control (Synergy, CallScripter 4.5).
  4. As soon as the Personal Data has been collected, assign its value to a Variable (Synergy, CallScripter 4.5), and blank the value of the input field it was collected in prior to leaving the Workflow Page.
In the second and third instances, the Personal Data will exist in tbl_Data from when the Agent leaves the Workflow Page on which it was captured until the Agent leaves the Workflow Page on which it is deleted. In the fourth instance, the Personal Data will never exist in tbl_Data, but will instead appear in tbl_variable_state (which is automatically cleared after completing a Workflow).
 
Note that in all of these cases, should the Workflow be abandoned prior to completion for any reason, the Personal Data may not be removed from the database; as such, it's worth also utilising measures such as Scheduled Data Deletion to remove any unmanaged data.
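The field-clearing logic behind the third and fourth methods above could be sketched along the following lines. This is an illustrative sketch only: the field names are assumptions, and in a real Workflow the blanked values would be written back to the page's input fields before using .click() on the hidden page-navigation control.

```javascript
// Illustrative sketch only: the field names below are assumptions, not part
// of the Application's documented API.
function clearPersonalData(fields, fieldsToClear) {
  // Return a copy of the field map with the listed Personal Data blanked,
  // so only blank values would be saved back to tbl_Data on page transition.
  const cleared = Object.assign({}, fields);
  for (const name of fieldsToClear) {
    if (name in cleared) {
      cleared[name] = '';
    }
  }
  return cleared;
}

// Example: blank the caller's name while keeping the identifiers that the
// Data Controller actually needs.
const pageFields = { customerId: 'C-1001', shippingId: 'S-77', callerName: 'Jane Doe' };
const safeFields = clearPersonalData(pageFields, ['callerName']);
// safeFields.callerName is now '', while customerId and shippingId are untouched.
```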
 

Preventing Personal Data Being Stored

For certain business models, it may be possible to avoid Personal Data ever being stored at all. Extending the methods described in the above section regarding temporary use of Personal Data, should the Personal Data be captured, processed, and then deleted all within the same Workflow Page, then the Personal Data would never be saved back to tbl_Data or tbl_variable_state.
 
For example, a Workflow used for a simple Virtual Receptionist-type process could capture some Personal Data and the requested contact reason or message, perform some simple processing on the data to determine the action, pass the data to the client's database or web endpoint via an External Data Source control (Synergy, CallScripter 4.5), and then clear the relevant fields prior to leaving the Workflow Page and completing the Workflow.
 
In scenarios like these, a more common approach would be to send an email containing the requisite data to the Data Controller. It should be noted, however, that although this does prevent Personal Data from being stored in tbl_Data or tbl_variable_state, depending on the methods used to process or forward the information, data may be stored elsewhere in the database. For instance, if an Email is sent from Synergy then its details are stored in tbl_message. For this reason, consideration still needs to be given to the particular processes involved, the Application Data Structure, and Scheduled Data Deletion.
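A minimal sketch of such a same-page flow is shown below. The field names and the injected send function are assumptions standing in for the real Workflow fields and the External Data Source control or email step.

```javascript
// Illustrative sketch only: callerName/contactReason and sendToClient are
// assumptions. Because capture, forwarding, and clearing all happen within
// one Workflow Page, the values never reach tbl_Data or tbl_variable_state.
function handleReceptionistMessage(fields, sendToClient) {
  // Forward the message to the client's endpoint or database...
  sendToClient({ name: fields.callerName, reason: fields.contactReason });
  // ...then blank the captured fields before leaving the Workflow Page.
  return { callerName: '', contactReason: '' };
}

// Example usage with a stand-in send function:
const sent = [];
const remaining = handleReceptionistMessage(
  { callerName: 'Jane Doe', contactReason: 'Delivery query' },
  payload => sent.push(payload)
);
// sent now holds the forwarded message; remaining holds only blank fields.
```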
 

Handling of Data Subject Requests

Under GDPR, Data Subjects have broad rights to request access to, or modification of, their stored Personal Data. How these rights apply to your own business, and how requests are acted upon, will vary, but it is likely that a system will be required to handle a Data Subject making contact to exercise one of these rights. While acting upon these requests doesn't need to be completed during the Interaction, sufficient information will need to be captured to allow the Data Controller to determine whether to grant the request, and to either action the request if granted or notify the Data Subject if not.
 
Should you be following our Data Storage Best Practices and moving all relevant data out into an external database or CRM, then the process of granting a request should be possible from within that external store. In such instances, the Application database effectively becomes a long-lived cache, and any requests to have information amended or removed should not need to be duplicated in the Application database as long as a regular and prompt Scheduled Data Deletion process is observed. The requirement within the Application becomes simply one of capturing the request and passing it on to the relevant Data Controller.
 
The easiest method to handle this is probably to include a standalone Workflow to capture the required information from the Data Subject, with some method of reaching this standalone Workflow being available to the Agent or included in each Workflow. In this way, if any alterations are needed to this process then they can be made in a single location rather than manually copied between numerous Workflows. This standalone Workflow could be queued via the JavaScript API (Synergy; CallScripter 4.5 has offline documentation in \\CallScripter Data\Documentation\CallScripterAPIs.chm), and then the original Workflow could be closed with an appropriate outcome to allow the completion of the standalone Workflow. Care would need to be taken to ensure that an Agent could always invoke the standalone Workflow and close the original Workflow properly.
 
To support this process, some optional code can be introduced into the custom.js file to produce a button on the Toolbar that will trigger moving to a GDPR Workflow as defined above, or other desired process. Please contact the Helpdesk if you wish to include functionality like this.
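As a rough illustration of what such custom.js code might look like: the button wiring, the queueWorkflow callback, and the Workflow name below are all illustrative assumptions, not documented Application APIs; the real JavaScript API calls are covered in the Application's API documentation.

```javascript
// Hypothetical sketch for custom.js: the button wiring, the queueWorkflow
// callback, and the Workflow name are illustrative assumptions only.
function addGdprToolbarButton(doc, toolbar, queueWorkflow, workflowName) {
  const button = doc.createElement('button');
  button.id = 'gdprRequestButton';
  button.textContent = 'GDPR Request';
  button.addEventListener('click', function () {
    // Queue the standalone GDPR Workflow; the Agent then closes the current
    // Workflow with an appropriate outcome so the queued Workflow can pop.
    queueWorkflow(workflowName);
  });
  toolbar.appendChild(button);
  return button;
}

// In custom.js this might be wired up as, for example:
// addGdprToolbarButton(document, document.getElementById('toolbar'),
//                      queueWorkflowApiCall, 'GDPR Data Subject Request');
```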
 

Data Storage Best Practices

The Application is designed to process Interactions rather than act as a CRM, with the data structure optimised for Application performance. As such, it is recommended that any data that will be used in any fashion after the completion of an Interaction be transferred to an external location such as a custom database or CRM, so that any data within the Application database is either associated with a currently in-progress Interaction or can be freely deleted. This allows the best performance from the Application, as other systems won't be querying the Application database for purposes such as reporting. It also allows the construction of a coherent record, with all data relating to a particular Data Subject stored in an easily readable and manipulable format. This supports the requirement to serve a Data Subject's request should it be granted, and makes any further processing of the data easier both to execute and to attribute.
 
It is possible to make use of External Links (Synergy, CallScripter 4.5) to automate the passing of record information into the Workflow at pop, as well as the passing of any changes to that record back into a data source at the completion of the Workflow. It is generally recommended that no more than 30 fields are linked using External Links to prevent slow Workflow pops, but this should be tuned to the realities of your environment and tolerances. Should larger datasets be required, a possible solution is to additionally make use of a CRM or similar system, with the External Link only passing across core details and a record reference which can then be used to acquire the relevant record information from the CRM once the Workflow has launched.
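The split between core details and a record reference could be sketched as follows; the field names and the recordRef property are assumptions for illustration only.

```javascript
// Illustrative sketch only: the field names and recordRef property are
// assumptions. The External Link passes just the core fields plus a record
// reference; the full record is fetched from the CRM after the Workflow pops.
function buildPopPayload(record, coreFields) {
  const payload = {};
  for (const field of coreFields) {
    if (field in record) {
      payload[field] = record[field];
    }
  }
  // The reference lets the Workflow look up the full record in the CRM later.
  payload.recordRef = record.recordRef;
  return payload;
}

// Example: a large CRM record reduced to a small pop payload.
const crmRecord = { recordRef: 'CRM-42', customerId: 'C-1001', name: 'Jane Doe', history: ['...'] };
const popPayload = buildPopPayload(crmRecord, ['customerId', 'name']);
// popPayload contains only customerId, name, and recordRef.
```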
 

Application Data Structure

Captured Personal Data might be stored by the Application in a number of locations, but it can be broken down to the following broad categories:
 
  1. (Synergy Only) Data is stored temporarily in Redis (the caching layer),
  2. Data is stored in a Microsoft SQL database as a mix of permanent and transient data,
  3. Should certain Workflow errors occur, data may be stored as part of the error message written out to log files (if enabled).
 
The Redis caching layer won't be discussed further in this article, as this is only used to temporarily store data as a cache. The inclusion of data into error logs will also not be discussed further in this article, as filesystem access to the Application's install location would typically be restricted, and awareness of the logs and their contents should already exist as part of maintaining your system.
 
As there are numerous free-text fields throughout the management side of the Application that could potentially have Personal Data entered into them, this section will only discuss locations where Personal Data will be stored if processed during an Interaction or might reasonably be entered as part of standard management operations. Should any Personal Data be added to Application fields not specifically intended to contain it, then it is your responsibility to track and manage this data in the database.
 
Processes to manage and remove old or orphaned data from these SQL tables are documented below in the Scheduled Data Deletion section.
 

Synergy

There are four SQL tables that may contain Personal Data in normal operation:
 
  • tbl_activities is populated with a JSON string containing various data relating to the Interaction, including any data passed in with the initial pop or captured on previous pages of the Workflow. It can be thought of as a package of all of the data required to bring the popped Workflow to the current state, or to allow the transfer of the Workflow to a different Agent. As such, any Personal Data captured during a Workflow will be included - with the exceptions discussed in the Workflow Design section above. At the completion of the Interaction, any entries relating to that Workflow will be deleted. It is linked into the data structure of the Interaction via the Session ID.
  • tbl_Data is populated with the values of any fields that have been populated as part of the Workflow. Each field in a Workflow will get its own entry should it be populated. As such, any Personal Data captured during a Workflow will be included - with the exceptions discussed in the Workflow Design section above. It is linked into the data structure of the Interaction via the Session ID.
  • tbl_message is populated with a JSON string of the data required to send any Emails from within the Workflow, including the recipients and email content. It is linked to the data structure of the Interaction via the Outbound History ID.
  • tbl_variable_state is populated with the values of any System or Workflow variables that exist in the popped Workflow. Each variable in a Workflow will get its own entry regardless of whether it's populated or not. Any Personal Data captured and assigned to a variable during a Workflow will be included, and at the completion of the Interaction any entries relating to that Workflow will be deleted. It is linked into the data structure of the Interaction via the Session ID.
 
Aside from these SQL tables that are expected to contain Personal Data, there are a few additional SQL tables that may contain Personal Data or data regarding your own employees or customers:
 
  • tbl_amMessages is populated with the contents of any User Messages that have been sent. If this feature is enabled, consideration should be given to whether historic data should be regularly removed from this table.
  • tbl_AppConfig and tbl_appConfig_Instances are populated with various Application settings, and might contain information relating to your Data Controller or other Data Processors. If there are any changes in which Data Controllers or Data Processors you are associated with, or their details, then these tables should be reviewed to find any outdated data.
  • tbl_Users is populated with the credentials and names of any users of the Application. Should any users be deleted from the GUI, their details will still remain in this SQL table, and consideration should be given to fully removing any user data that is no longer required.
 

CallScripter 4.5

Note: this section will not cover legacy features or modules associated with CallScripter 4.5.
 
There are nine SQL tables that may contain Personal Data in normal operation:
 
  • tbl_activities is populated with a JSON string containing various data relating to the Interaction, including any data passed in with the initial pop or captured on previous pages of the Workflow. It can be thought of as a package of all of the data required to bring the popped Workflow to the current state, or to allow the transfer of the Workflow to a different Agent. As such, any Personal Data captured during a Workflow will be included - with the exceptions discussed in the Workflow Design section above. At the completion of the Interaction, any entries relating to that Workflow will be deleted. It is linked into the data structure of the Interaction via the Session ID.
  • tbl_call_history is populated with various data regarding individual Interactions, and includes any information that the Agent may have recorded as further information. It is linked into the data structure of the Interaction via the Session ID.
  • tbl_Data is populated with the values of any fields that have been populated as part of the Workflow. Each field in a Workflow will get its own entry should it be populated. As such, any Personal Data captured during a Workflow will be included - with the exceptions discussed in the Workflow Design section above. It is linked into the data structure of the Interaction via the Session ID.
  • tbl_email_history is populated with the data required to send any Reports from within the Workflow, including the recipients and content. It is linked to the data structure of the Interaction via the Session ID.
  • tbl_outbound is populated with the data relating to an outbound call list, and includes the Data Subject's name and telephone numbers. It is linked into the data structure of the Interaction via the Session ID.
  • tbl_outbound_history is populated with the data relating to outbound calls, and includes an Agent comments field. It is linked into the data structure of the Interaction via the Outbound ID.
  • tbl_Session is populated with the data relating to individual Interactions, and includes any information that the Agent may have recorded as further information. It is linked into the data structure of the Interaction via the Session ID.
  • tbl_sms is populated with the data required to send any SMS messages from within the Workflow, including the recipients and SMS content. It is linked to the data structure of the Interaction via the Session ID.
  • tbl_variable_state is populated with the values of any System or Workflow variables that exist in the popped Workflow. Each variable in a Workflow will get its own entry regardless of whether it's populated or not. Any Personal Data captured and assigned to a variable during a Workflow will be included, and at the completion of the Interaction any entries relating to that Workflow will be deleted. It is linked into the data structure of the Interaction via the Session ID.
 
Aside from these SQL tables that are expected to contain Personal Data, there are a few additional SQL tables that may contain Personal Data or data regarding your own employees or customers:
 
  • tbl_amMessages is populated with the contents of any Agent Messages that have been sent. If this feature is enabled, consideration should be given to whether historic data should be regularly removed from this table.
  • tbl_AppConfig and tbl_appConfig_Instances are populated with various Application settings, and might contain information relating to your Data Controller or other Data Processors. If there are any changes in which Data Controllers or Data Processors you are associated with, or their details, then these tables should be reviewed to find any outdated data.
  • tbl_Customers is populated with the details for all Customers configured within the Application, and might contain information relating to your Data Controller or other Data Processors. If there are any changes in which Data Controllers or Data Processors you are associated with, or their details, then this table should be reviewed to find any outdated data.
  • tbl_new_reports and tbl_new_report_schedule are populated with the templates and configuration for any Reports configured within the Application, including recipients and the report bodytext.
  • tbl_Numbers and tbl_numbersHistory are populated with the DDI Administrator settings, and might contain information relating to your Data Controller or other Data Processors. If there are any changes in which Data Controllers or Data Processors you are associated with, or their details, then these tables should be reviewed to find any outdated data.
  • tbl_outbound_schedules and tbl_outbound_templates are populated with data related to outbound call lists and schedules, and may contain data relating to your Data Controller, other Data Processors, or your own employees.
  • tbl_Users is populated with the credentials and names of any users of the Application. Should any users be deleted from the GUI, their details will still remain in this SQL table, and consideration should be given to fully removing any user data that is no longer required.
  • tblMQs is populated with information relating to Task Management, and may contain data related to your Data Controller, other Data Processors, or your own employees.
 

Scheduled Data Deletion

An important principle of GDPR is not keeping data for longer than necessary. One way to achieve this is to have a scheduled data deletion job that archives and/or deletes the records for any Interaction that has been inactive for longer than a certain threshold. This section only discusses the management of data held within the Application's SQL database; any data that has been transferred out of the Application will need to be handled appropriately.
 
As discussed above in Data Storage Best Practices, it is recommended that any data that is required beyond the actual execution of a Workflow is stored in an external database or CRM to reduce the operational impact on the Application when undertaking regular tasks such as reporting or other data analysis. Should this practice be followed, then all of the Personal Data within the Application can be treated as disposable as soon as the Interaction has completed and the required information passed to this external database or CRM. If this is the case, then any requests to remove or update Personal Data would only need to be served against the structured external data store, with the knowledge that the information stored in the Application SQL database will be automatically removed at a regular interval.
 
A point to consider is that should data deletion be undertaken against the Application's SQL database, reindexing will also need to be performed to avoid the performance degradation caused by the index fragmentation that deletion inevitably produces. A common approach is to schedule a number of different data deletion configurations at different times, with a separate reindexing task taking place after their completion.
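As a simple illustration of the retention rule itself: the record shape and the threshold below are assumptions, and the actual deletion is performed by the Stored Procedures discussed in the following sections.

```javascript
// Illustrative sketch only: the record shape and threshold are assumptions.
// Select the Session IDs of Interactions inactive for longer than the
// retention threshold, which a scheduled job would then archive and/or delete.
function interactionsToDelete(interactions, thresholdDays, now) {
  const msPerDay = 24 * 60 * 60 * 1000;
  return interactions
    .filter(i => (now - i.lastActivity) > thresholdDays * msPerDay)
    .map(i => i.sessionId);
}

// Example: with a 30-day threshold, only the stale Interaction is selected.
const now = Date.UTC(2018, 4, 25); // months are zero-based, so this is 25 May 2018
const stale = interactionsToDelete(
  [
    { sessionId: 101, lastActivity: Date.UTC(2018, 0, 1) },  // inactive since January
    { sessionId: 102, lastActivity: Date.UTC(2018, 4, 20) }, // active five days ago
  ],
  30,
  now
);
// stale is [101]
```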
 

Synergy

Synergy doesn't currently include a Stored Procedure for the regular deletion of data, but one is currently being produced and will be included in future versions of the Application. Please contact the Helpdesk if you use a version of the Application that doesn't include it and wish to access this Stored Procedure.
 

CallScripter 4.5

CallScripter 4.5 comes with a Stored Procedure intended to support the regular deletion of data for the purposes of SQL database maintenance, but to support GDPR a slightly more rigorous version covering a greater number of tables is required. Please contact the Helpdesk to be provided with an up-to-date copy appropriate for your system.
 

SQL Data Access Restriction

While Agents have their database access restricted by the bundling of pre-formed queries into various database controls inside a Workflow, Workflow Editors have the capability to write free-form SQL queries into these same database controls. As such, a Workflow Editor could potentially create a Workflow to harvest particular data from the Application database or other databases when it was run. Some measures to mitigate this risk are discussed below.
 

Restricting Access to the Application Database

By default, Workflow Editors have the ability to write queries against the Application database with no further credentials required. In Synergy this default access can be revoked by disabling Synergy Database Querying, but in CallScripter 4.5 it is irrevocable. As such, other mitigating actions will need to be considered, such as controlling who has access to the Workflow Editor (as discussed below); this should also be combined with reduced Personal Data capture and regular data deletion to limit the amount of Personal Data that could be acquired by any malicious user.
 

Restricting Access to the Workflow Editor

As a side-effect of the way that Workflows are built, anyone with a Workflow Editor licence will have the ability to inspect the configuration of any control in any Workflow. This means that any credentials or connection details for any SQL database (or other endpoint) utilised in a Workflow will be accessible to all Workflow Editors. As such, users with Workflow Editor access should be chosen carefully, password and access best practices should be enforced, and access should be revoked when it is no longer required by a user as part of their job.
 
It should also be noted that these credentials could be used directly in other applications (such as SQL Server Management Studio) to connect in a potentially unmanaged fashion.
 

Restricting Access via Permissions and Stored Procedures

Probably the most secure method available in this context is to exclusively make use of Stored Procedures within Workflows, and then restrict the utilised SQL user roles to only have Execute permissions. In this case, Workflow Editors would still be able to access the database, but only in predefined, limited ways, provided the Stored Procedures are written to carefully restrict the scope of data that they can inspect and return.
 

Database Encryption

One aspect of your GDPR-compliant solution may be to encrypt the Application database to reduce the risk of a Data Breach. While this can be achieved for the database specifically, only SQL Server's Transparent Data Encryption (TDE) method is supported, with no support for the Always Encrypted (AE) method. While use of TDE is a valid method to manage specific risks, its potential limitations require consideration, such as:
 
  • Only available in SQL Server Enterprise edition,
  • Only covers data while at-rest - specifically the actual database files and logs on the disc,
  • Encrypts all data within target databases on a server, rather than only specific columns,
  • Backups can't be compressed as much due to being encrypted (randomised),
  • There is a minor performance penalty, and this will apply to all databases located on a server that has any encrypted databases,
  • If the database encryption details are lost, then backups will be unusable.
 
It should be noted that while there is an Encrypt Field option for some controls in CallScripter 4.5, this isn't appropriate in a GDPR compliance context and shouldn't be viewed as a usable solution.