Working with public cloud services increases the attack surface and requires us to adapt our security controls and policies.

Public cloud environments reside outside the organizational network; as a result, they are exposed to the internet and shared with other customers, even when we take multi-tenancy into account and limit access to cloud environments using VPN tunnels. The traditional approach, whereby organizational assets are kept inside a secure perimeter, does not work for cloud environments.

In this post, I will go over some of the common policies and procedures we need to review and adapt in order to mitigate the risks of working with public clouds.

Identity management

  • In our local (on-premises) environments, we typically manage user accounts on a local Active Directory or LDAP server. Moving to public clouds (and sometimes even to multi-cloud environments), we need to think about managing user identities through federation (using protocols such as SAML or OAuth). The goal is to be able to disable user privileges, or even revoke permissions, from a central location, instead of creating and managing a separate identity for each user account on each and every cloud service.
  • All the major cloud providers enable us to increase the protection of user accounts (especially privileged accounts) using MFA (multi-factor authentication), with mobile applications such as Google Authenticator, the Microsoft Authenticator app, etc.
  • In local (on-premises) environments, the most privileged account (in Microsoft environments) is typically a member of the Domain Admins group (in Active Directory), with privileges limited to the local domain boundaries. Moving to public clouds (and sometimes even to multi-cloud environments), an account with the “Global Administrator” role is able to control the entire Azure AD tenant, the subscriptions we manage, and even access to 3rd party SaaS services that use the same user accounts across the entire public cloud environment. Our policy needs to address how to control members of privileged groups (for example, by setting a strict password policy, enforcing the use of MFA, enabling logging for privileged accounts, etc.).
  • The same recommendation applies to the AWS root account – avoid creating access keys for it, configure a strong password, enforce the use of MFA, and minimize the use of the root account on a daily basis.
  • Working with public cloud environments also brings new types of identities with access to APIs (application programming interfaces). Policies need to instruct staff to use temporary credentials, rotate keys, and avoid storing credentials of any kind in scripts or hard-coded inside applications – and under no circumstances store keys or secrets in public repositories such as GitHub or Docker registries.
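The last point can be partially automated by scanning code before it is committed. A minimal sketch is shown below; the AWS access key ID prefix is well known, but the generic rule is a loose assumption – real scanners (such as git-secrets or trufflehog) use far larger rule sets:

```python
import re

# Toy patterns for spotting hard-coded credentials in code before commit.
SECRET_PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "generic_secret": re.compile(r"(?i)\b(?:api[_-]?key|secret)\s*[:=]\s*['\"][^'\"]{16,}['\"]"),
}

def find_secrets(text: str) -> list[tuple[str, str]]:
    """Return (rule_name, matched_text) pairs for suspected hard-coded secrets."""
    findings = []
    for name, pattern in SECRET_PATTERNS.items():
        for match in pattern.finditer(text):
            findings.append((name, match.group(0)))
    return findings
```

Running such a check as a pre-commit hook or CI step gives an early warning before a secret ever reaches a public repository.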

Data protection

  • In our local (on-premises) environments, we typically allow access to our data for internal employees only. Moving to public clouds, we need to think about storing our data in external environments, sometimes accessible from the internet.
  • We need to enforce strong authentication and authorization for anybody with access to our data.
  • If we are planning to store sensitive information in cloud services (such as personal, healthcare, or financial data, or trade secrets), we need to perform automatic data classification and enforce data leak prevention mechanisms, using automated tools.
  • If we are planning to store sensitive information in cloud services, we need to enforce encryption in transit (all traffic must go through TLS tunnels) and encryption at rest – it is recommended to use our own encryption keys (“customer-managed keys” or “bring your own key”) for any cloud service that supports this capability, and to configure key rotation at scheduled intervals to avoid key re-use.
  • We need to ensure that our policies and technical implementation meet any standards, compliance, or regulatory requirements, e.g. the GDPR (General Data Protection Regulation).
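To illustrate the data-classification step, here is a toy sketch. Real DLP and classification tools use far richer detectors (checksums, context, machine learning); the patterns below are illustrative assumptions only:

```python
import re

# Toy detectors for two common kinds of sensitive data.
PII_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){12,15}\d\b"),
}

def classify(record: str) -> set[str]:
    """Return the set of sensitive-data labels detected in a record."""
    return {label for label, pattern in PII_PATTERNS.items() if pattern.search(record)}
```

A classifier like this would run over records before they are uploaded, so that anything labeled sensitive is routed to encrypted, access-controlled storage.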

Network access management

  • In our local (on-premises) environments, we typically protect our assets (usually servers and workstations) behind a firewall, and in most cases, access to our servers is allowed for our organization’s internal employees only.
  • Moving to public clouds puts our servers outside our local perimeter, and in many cases, any new server we deploy in the public cloud (as well as services such as object storage – AWS S3, Azure Blob storage, etc.) receives a public IP address, which makes it accessible (unless we configure otherwise) to the entire internet.
  • Our policies and procedures need to enforce access rights according to need. We need to think in terms of zero trust – authenticate every access request, enforce least-privilege access rights, audit every access attempt, etc.
  • We need to enforce network segmentation according to role (such as web, app, DB) and type (Prod, Dev, Test, etc.) – if there is no business reason for public access to a database, put the database in a private segment, accessible only to a limited set of IP addresses, segments, and traffic types.
  • We need to control both inbound and outbound network traffic from/to our assets in the public cloud – limit access according to business needs.
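The inbound-traffic rule above is easy to check automatically. The sketch below uses a simplified rule format of my own (not any specific provider's API schema), and assumes only web ports may be public:

```python
ALLOWED_PUBLIC_PORTS = {80, 443}  # assumption: only web traffic may face the internet

def violations(rules: list[dict]) -> list[dict]:
    """Flag inbound rules exposing a non-web port to the entire internet."""
    return [
        rule for rule in rules
        if rule.get("source") in ("0.0.0.0/0", "::/0")
        and rule.get("port") not in ALLOWED_PUBLIC_PORTS
    ]

rules = [
    {"protocol": "tcp", "port": 443, "source": "0.0.0.0/0"},     # public web: expected
    {"protocol": "tcp", "port": 5432, "source": "0.0.0.0/0"},    # database open to the world
    {"protocol": "tcp", "port": 5432, "source": "10.0.1.0/24"},  # private access only: fine
]
flagged = violations(rules)  # only the world-open database rule is flagged
```

A periodic job running a check like this against exported firewall / security-group rules turns the policy into something enforceable rather than aspirational.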

Change management

  • Moving to the public cloud decreases the time it takes to deploy new environments (from virtual machines to containers). Moreover, many environments (usually test/development, but sometimes even production) only exist for a short period of time and are then decommissioned.
  • We need to make sure that all changes are made by authorized identities.
  • Changes should be made using automated tools/scripting languages, in all environments (Prod, Dev, Test, etc.), in order to enforce compliance with organizational standards and industry best practices.
  • Deviations from the configuration standard must be audited, must raise alerts, and must be automatically remediated.

Patch management and vulnerability assessment

  • As noted above, environments in the public cloud are deployed quickly and are often short-lived, which changes how patching must be handled.
  • Policies and procedures need to address topics such as patch management and vulnerability management for temporary or permanent environments.
  • As part of our daily routines, we need to embed automation to discover vulnerabilities in large and constantly changing server farms, with automatic remediation of any threats discovered.
  • Cloud-native environments often make use of containers. Deployment processes need to embed scanning for vulnerable open source libraries and packages inside containers, with automatic patching – or, at a minimum, failing the deployment phase when critical vulnerabilities are discovered.
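The "fail the deployment" rule from the last bullet amounts to a severity gate over scanner findings. A sketch, assuming findings arrive as dicts with a severity field (the exact format depends on the scanner in use):

```python
SEVERITY_RANK = {"low": 0, "medium": 1, "high": 2, "critical": 3}

def may_deploy(findings: list[dict], fail_at: str = "critical") -> bool:
    """Allow deployment only if no finding reaches the failing severity."""
    threshold = SEVERITY_RANK[fail_at]
    return all(SEVERITY_RANK[f["severity"]] < threshold for f in findings)
```

Wired into a CI/CD pipeline, a `False` result would stop the container image from being promoted; the `fail_at` threshold can be tightened per environment (e.g. stricter for production).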

Asset management

  • As noted above, environments in the public cloud are deployed quickly and are often short-lived, which makes it harder to keep an accurate inventory.
  • To locate all of our assets in the public cloud (from virtual servers, to containers, and even serverless environments), the easiest way is to enforce tagging on all assets across all IaaS / PaaS environments, so that each asset can be identified by its environment (such as Test/Dev, Prod, etc.).
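Enforcing a tag policy can be as simple as listing every asset that lacks a required tag. The sketch below assumes a made-up tag policy and asset format for illustration:

```python
REQUIRED_TAGS = {"environment", "owner"}  # assumption: org-defined tag policy

def untagged(assets: list[dict]) -> list[str]:
    """Return IDs of assets missing one or more required tags."""
    return [a["id"] for a in assets if not REQUIRED_TAGS.issubset(a.get("tags", {}))]

inventory = [
    {"id": "vm-1", "tags": {"environment": "prod", "owner": "team-a"}},
    {"id": "vm-2", "tags": {"environment": "dev"}},   # missing "owner"
    {"id": "bucket-1", "tags": {}},                   # missing everything
]
offenders = untagged(inventory)
```

Cloud providers can also block creation of untagged resources at the policy layer, which is preferable; a report like this catches whatever slips through.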

Vendor Management

  • In our local (on-premises) environments, we typically manage servers ourselves. Moving to IaaS / PaaS cloud environments means deploying servers (and services) outside our perimeter, and sometimes (as with SaaS solutions) we rely on 3rd parties to maintain the servers that store and process our data.
  • We need to enforce strong authentication and authorization through secured channels (such as VPN tunnels) to personnel, as well as to 3rd party suppliers and business partners.
  • We need to audit every access attempt and be able to identify anomalous behavior (such as unusual time of day, source location, privileges requested, etc.).
  • If we are using virtual machines or conducting backups of virtual machines in the cloud, we need to consider data at rest / full disk encryption of our virtual machines.
  • As part of using cloud services, we need to evaluate our partners (such as cloud vendors) and their security and privacy controls, through penetration testing of PaaS / SaaS environments or by using external audit reports (such as SOC 2 Type 2).
  • Ensure that the vendor identifies any third-party contractors that it uses.
  • Ensure that we have reviewed, and have in place, an acceptable contract and Service Level Agreement (SLA) with the supplier (for many cloud vendors there may be no opportunity to change it), defining the vendor’s responsibilities for ensuring the security of the cloud environment and our data, as well as our responsibilities as a client.
  • Ensure there is an exit mechanism whereby we can recover our organization’s data when the contract ends (or is terminated), with any costs clearly defined.

Monitoring

  • Moving to public clouds (and sometimes even to multi-cloud environments), we need to think about monitoring large and changing environments – Who has access to what?
  • Every major cloud provider offers the ability to enable audit trails, which generate thousands of log files – we need automated solutions for reviewing the logs and raising alerts on critical security incidents.
  • We need to find a way to send logs to a SIEM solution and conduct investigations; in most cases, the most suitable option will be a cloud-native SIEM solution.
  • We need to control access to log files (read, write, delete) according to business needs.
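As a toy illustration of automated log review, the sketch below flags audit events that happen outside business hours or come from an unknown network. The hours, network prefixes, and event format are all assumptions; a real SIEM applies far more sophisticated correlation:

```python
from datetime import datetime

BUSINESS_HOURS = range(7, 20)         # assumption: 07:00-19:59 local time
KNOWN_PREFIXES = ("10.", "192.168.")  # assumption: simplistic internal-range check

def suspicious(event: dict) -> bool:
    """Flag audit events outside business hours or from an unknown network."""
    timestamp = datetime.fromisoformat(event["time"])
    off_hours = timestamp.hour not in BUSINESS_HOURS
    unknown_source = not event["source_ip"].startswith(KNOWN_PREFIXES)
    return off_hours or unknown_source
```

Each flagged event would feed an alert queue for a human analyst, rather than blocking anything automatically.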

Incident response

  • Moving to public clouds increases our attack surface. Current best practices for conducting forensics on-premises may not be relevant to cloud environments, because we do not have the same access to the servers or the underlying infrastructure, such as network equipment, storage, or virtualization layers.
  • When possible, we should enable network traffic monitoring (using services such as VPC Flow Logs or Azure Network Watcher).
  • During the initial assessment, before signing the contract, we should check with our cloud vendors what the options are for conducting our own security investigations, and what the vendors’ contractual obligations are for notifying us of a data breach related to the systems that store or process our data.
  • When possible, we should deploy anti-malware solutions on our IaaS environments (according to the operating system support)
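Flow logs are one of the few network artifacts we do retain in the cloud, so it helps to be able to parse them during an investigation. The sketch below handles the default (version 2) AWS VPC Flow Logs record layout; the filter for denied traffic is just one example question an investigator might ask:

```python
# Field order follows the default (version 2) AWS VPC Flow Logs record format.
FIELDS = [
    "version", "account_id", "interface_id", "srcaddr", "dstaddr",
    "srcport", "dstport", "protocol", "packets", "bytes",
    "start", "end", "action", "log_status",
]

def parse_record(line: str) -> dict:
    """Split one space-separated flow log line into named fields."""
    return dict(zip(FIELDS, line.split()))

def rejected(lines: list[str]) -> list[dict]:
    """Keep only records where the traffic was denied."""
    return [r for r in map(parse_record, lines) if r.get("action") == "REJECT"]
```

For example, a spike of REJECT records against port 22 from a single source address is a quick indicator of an SSH brute-force attempt.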

Application security

  • Moving to cloud services increases the use of APIs (application programming interfaces). We need to make sure that all access to our APIs (whether we wrote them ourselves or we use cloud service APIs) is authenticated, authorized, and audited.
  • We need to make sure all API traffic is encrypted via TLS.
  • Since APIs are publicly exposed by nature, we need to consider solutions such as API gateways that allow us to enforce throttling on the number of requests from the same source (IP, user identity, service, etc.), in order to minimize the chance of application-level denial of service.
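The throttling that API gateways perform is commonly implemented as a token bucket per source: each client earns tokens at a steady rate, may burst up to the bucket capacity, and is rejected once the bucket is empty. A minimal sketch of the idea (one bucket; a gateway keeps one per client key):

```python
import time

class TokenBucket:
    """Minimal token bucket: `rate` tokens per second, bursts up to `capacity`."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate
        self.capacity = capacity
        self.tokens = float(capacity)
        self.updated = time.monotonic()

    def allow(self) -> bool:
        """Spend one token if available; refill based on elapsed time."""
        now = time.monotonic()
        self.tokens = min(self.capacity, self.tokens + (now - self.updated) * self.rate)
        self.updated = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False
```

Requests that return `False` would receive an HTTP 429 response, which limits how much load a single misbehaving (or malicious) source can generate.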

About the author

Eyal Estrin is a cloud architect, working in the Inter-University Computation Center (IUCC) in Israel. He has more than 20 years of experience in infrastructure, information security and public cloud services. He is a public columnist and shares knowledge about cloud services. You can follow him on Twitter at @eyalestrin