Bring-your-own-device (BYOD) is a common policy for startups. It can save costs, reduce waste and duplication, and give people more flexibility in their preferred devices. But it also brings its own challenges from a security and compliance standpoint.
Standards like SOC 2 are relatively flexible and forgiving when it comes to end user device policies, especially if you can prove that limited sensitive data is stored on those devices because it's all kept within secured cloud systems. In theory, the Consumer Data Right (CDR) and the Cloud Security Alliance's CSA STAR allow for some auditor judgement and scope limitation in the same way, but they carry a higher burden of proof, with more specific requirements around endpoint devices if that limitation can't be demonstrated. For CDR, you also need to help the ACCC or your CDR Principal understand how you effectively limit data transfer to devices in order for them to sign off on your compliance.
Why is BYOD challenging?
The starting point of BYOD is that the devices you need to secure and bring into compliance are owned by the employee. That raises questions about what's appropriate in relation to those devices: is it fair to restrict their ability to install software? To track the devices? To remote wipe them upon termination? Even enforcing basic security policies like passwords, screen lock, firewalls and encryption may conflict with the employee's preferences and needs for using their own device.
End user devices are a rising focus for information security and compliance standards, because people are generally the weakest link in an organisation's security. Endpoint devices are where data can "leak" from the established system boundaries where rigorous security is applied. Different standards imply or require varying degrees of endpoint security. SOC 2, for example, is flexible enough to take a risk-based view, and a BYOD policy and acceptable use policy signed by employees is often adequate for low-risk cloud environments where minimal data is stored on devices. ISO 27001 and CSA STAR are more prescriptive but generally allow de-scoping of some endpoint device controls or control objectives if it can be proven the associated risks are not applicable or are otherwise effectively managed. The Consumer Data Right (CDR) is probably the hardest to satisfy without endpoint device controls, with several prescribed control objectives related to managing devices in CDR Data Environments.
Defining/reducing the scope of devices
Especially for standards like CDR, it's important to define the scope of devices and reduce it to the extent possible. That reduces the compliance and audit burden, and may be the difference between compliance being achievable or not. For example, you might have your engineering, security and operations teams use company devices, or enforce stricter security policies on their own devices. For CDR, the scope is limited to your CDR Data Environment: once you map out the key systems in that environment, limit access to them to the fewest people possible, as that reduces the devices that fall into scope for your CDR compliance.
Removing endpoints from the equation
It's rarely worth taking this path, but it's worth noting that you can remove endpoints from your compliance scope completely. That's where you can prove the endpoints carry no material information security risk. Of course, that's hard to do: you need to show not only that they shouldn't hold sensitive data, but that they can't hold sensitive data. That's an important difference. If someone wanted to, could they export sensitive data from your systems and put it on their device? If yes, then arguably the risk remains unless there is strong monitoring and enforcement in place to prevent it, or to detect and rectify it. The way to prove employees can't do that is by showing strong access restrictions and segregation of duties. It's impossible to categorically prevent anyone from exporting sensitive data from the systems to their devices, but security and compliance are driven by risk, so the goal is proving the risk is remote enough. That might mean nobody has access to the production database by design, there are strong change management controls, and any access to the production database requires temporary access with independent approval. That assumes all sensitive data is contained in the production environment; if you have other critical data locations, the same principles apply, but it may be harder to prove, for example if you have sensitive data in the organisational Dropbox or Google Drive.
What BYOD/endpoint controls are typically expected?
Here's a checklist, roughly ordered by how widely standards expect, require or generally cover each control.
- Acceptable use policy outlining boundaries and appropriate use of devices
- BYOD policy (if applicable) outlining responsibilities for own devices
- Strong device password settings
- Screen timeout and lock
- Hard disk encryption
- Anti-virus software
- Device logging
- Device policy enforcement through an MDM (mobile device management) solution
- Multi-factor authentication (e.g. biometrics)
- Device firewalls
- Restricted software installation/application whitelisting
- Restricted removable media
- Restricted file sharing (e.g. AirDrop)
- Email monitoring and blocking
- Device tracking and remote wipe
- Restricted local administrator rights
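To make the baseline concrete, the first few controls in the checklist could be modelled as a simple device posture check, as many MDM and compliance tools do internally. This is only an illustrative sketch; the control names and the `Device` structure are hypothetical, not tied to any real MDM product or API.

```python
# Hypothetical sketch of a device posture check against a baseline of
# endpoint controls. Control names are illustrative only.
from dataclasses import dataclass, field

BASELINE_CONTROLS = {
    "strong_password",
    "screen_lock",
    "disk_encryption",
    "antivirus",
    "firewall",
}

@dataclass
class Device:
    owner: str
    enabled_controls: set = field(default_factory=set)

def compliance_gaps(device: Device, required=BASELINE_CONTROLS) -> set:
    """Return the required controls this device is missing."""
    return required - device.enabled_controls

laptop = Device(owner="alice",
                enabled_controls={"strong_password", "screen_lock"})
print(sorted(compliance_gaps(laptop)))
# → ['antivirus', 'disk_encryption', 'firewall']
```

In practice an MDM solution would gather `enabled_controls` automatically from each enrolled device and enforce the baseline, but the same gap-analysis idea applies whether the check is automated or part of a manual BYOD attestation.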
About AssuranceLab
AssuranceLab is a modern cybersecurity audit firm that provides assurance reports (ASAE 3150, SOC 1/2, and more!). Our award-winning, free software has helped over 500 companies prepare for their compliance goals. We're experts in the latest software and cloud providers. We guide your team through the compliance practices in a way that fits your environment and culture. We work closely with clients through our agile and collaborative approach; saving time, costs, and headaches along the way.