Before we get started, let’s take a look at this definition: “A language-neutral way of implementing objects that can be used in environments different from the one they were created in.”
And this one: “A platform-independent, distributed, object-oriented system for creating binary software components that can interact.”
Do you think these are describing a service mesh from the 21st century? Well, they are actually definitions of Microsoft DDE (1987) and COM (1993) technologies, respectively. It’s worth noting that a lot of concepts were invented long ago and have simply been reused under new names to solve new problems.
The real world is merging desktops and servers, along with desktop applications, web applications and APIs, into one unified environment. It seems clear to me, and to many other security experts, that we have already built plenty of technologies to solve similar problems such as interprocess communication, data sharing and data analysis. That makes it an obvious assumption that the security layer should be unified as well: there is no fundamental difference between the attack surface on endpoints, servers and clouds.
It doesn’t matter what the subject or the object of the data transmission is. It could be a Win32 application, a legacy web application, a microservice, a serverless application or something else entirely; the security requirements will be the same. That’s because all security controls ultimately exist to protect data. This is true of other resources too, of course, but let me keep folks who have dealt with unexpected crypto miners and DDoS attacks out of this talk. Let’s focus on data as the main thing.
Whatever we are building in security — from old-fashioned VLANs and role models to modern pod security policies, east-west communications security, or container isolation strategy — we always use data as a starting point. And the data nowadays is everywhere inside a company — in clouds, servers, desktops, laptops and mobile devices. To make business faster, we need to give widespread access to this data, and that’s not a trend but a survival requirement.
For CISOs and risk management folks, it also makes little difference where user data was stolen from. Whether it’s a developer machine, a QA environment or a database, it’s all the same from a data perspective.
This idea is fairly similar to the zero trust concept that was invented by Forrester analyst John Kindervag back in 2010. It requires verifying every single source (object or subject) in any communication processes at every single stage. According to a 2018 CSO article, “Zero Trust is a security concept centered on the belief that organizations should not automatically trust anything inside or outside its perimeters and instead must verify anything and everything trying to connect to its systems before granting access.”
So the data is similar everywhere, as we discussed above. The last thing typically cited as a difference between desktops and servers is end user behavior. People with that point of view claim that desktops should be protected in a completely different way simply because they are machines for human operators, unlike servers, which host services for external use.
So at this point, we have figured out that desktops, laptops, servers and serverless applications are all similar from the data sensitivity and usage perspectives. That’s why similar application security controls and policies are relevant in the modern world.
Application security always starts with the following points:
- Data profiling, categorization, prioritization and risk mapping (data authentication).
- Attack surface definition and entry/input points inventory.
- Application to data and input mapping.
As a result, we will have a map of data types: which applications use which types of data and which inputs those applications accept. This map allows us to implement basic controls, policies and other restrictions. Again, this is all completely unified and agnostic to the platform, application type and runtime.
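As a loose illustration of what such a map might look like in practice, here is a minimal sketch. All data categories, application names and input names below are hypothetical, not prescribed by any standard:

```python
# Hypothetical data-type inventory: category -> priority and regulations.
DATA_TYPES = {
    "pii": {"priority": "high", "regulations": ["GDPR"]},
    "payment": {"priority": "high", "regulations": ["PCI DSS"]},
    "telemetry": {"priority": "low", "regulations": []},
}

# Which applications use which data types, and through which entry
# points (inputs) they accept data.
APPLICATIONS = {
    "billing-service": {
        "data": ["payment", "pii"],
        "inputs": ["https-api", "message-queue"],
    },
    "desktop-client": {
        "data": ["pii"],
        "inputs": ["gui", "local-ipc"],
    },
}

def riskiest_apps():
    """Return applications that touch at least one high-priority data type."""
    return sorted(
        app for app, meta in APPLICATIONS.items()
        if any(DATA_TYPES[d]["priority"] == "high" for d in meta["data"])
    )

print(riskiest_apps())  # both sample apps touch high-priority data
```

Note that the same structure describes a desktop client and a server-side service equally well, which is exactly the platform-agnostic point above.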
Once we have a map, the first goal is to build an authentication and authorization strategy and implement it. Again, according to the zero trust concept mentioned above, we should not trust anything by default, and to prove they are not just “anyone,” applications must authenticate themselves.
As you can see, we have to authenticate data before authenticating applications, since applications are defined by the data they work with. Data authentication is unique to each organization and depends on business and compliance requirements. For example, PII, IP, PCI and other data categories can differ across businesses, countries and regulations.
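The ordering above, data authentication first, application authentication second, deny everything else, can be sketched as a tiny “deny by default” check. This is only an illustration under assumed names; real implementations would use mTLS, tokens or platform identity rather than an in-memory set:

```python
# Data that has been through data authentication (classification),
# and applications that have proved their identity. Both hypothetical.
CLASSIFIED_DATA = {"pii": "high", "logs": "low"}
AUTHENTICATED_APPS = {"billing-service"}

def allow(app: str, data_type: str) -> bool:
    """Zero trust style check: deny anything we cannot positively verify."""
    if app not in AUTHENTICATED_APPS:
        return False  # the application never authenticated itself
    if data_type not in CLASSIFIED_DATA:
        return False  # the data was never classified, so no policy applies
    return True

print(allow("billing-service", "pii"))      # True
print(allow("unknown-tool", "pii"))         # False: unauthenticated app
print(allow("billing-service", "scratch"))  # False: unclassified data
```

The key property is that both failure branches return False: an unknown application and unclassified data are treated identically, which is the “never trust, always verify” posture in miniature.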
To sum up, data authentication is the starting point for building an application security strategy. It’s similar across all applications, regardless of where they run: on a laptop, a desktop, a server, in the cloud or in a serverless environment. This ideology also aligns with the zero trust security concept, which is standard in cybersecurity nowadays.
Originally from Forbes.