Is It Dead? What’s The Deal With Actual Privacy With Stephen Cavey And Gary LaFever

Let's talk Data!

December 2, 2022

Data breaches are happening more than ever, and it’s up to companies to start taking data privacy seriously. The eyes have it! And by “it,” we mean visibility into what you’re doing. These days, the security of all your data is always at risk, whether in transit or at rest. So, is privacy dead? The short answer is not really, but the longer answer is… sort of? Kind of? Tune into today’s episode to discover what that means for consumers and businesses!

Join Eric Kavanagh as he interviews two experts: Stephen Cavey, CEO at Ground Labs and Gary LaFever, Co-CEO & General Counsel at Anonos. Learn why you should get serious, not just about data security but governance as well. Start protecting your data today!

Transcript

[00:00:39] Eric: Yours truly here on the road, dialing in from the Reuters NEXT conference in New York City, which has been fascinating. I saw James Gorman, the CEO of Morgan Stanley, speak. He talked about Twitter, China, finances, the future, and the consequences of having a decade of free money, which we’re now seeing. He did not talk much about privacy, so that’s what we will talk about on this show. We’re going to talk about privacy, what it means, and how you can enact policies that enable privacy around data. It is a big deal.

Someone asked, tongue in cheek, “Is privacy dead?” No, but it is hanging on by a thread in certain situations. Third-party cookies are apparently going to go away. That’s an interesting development, and it is going to be a very significant challenge for marketers everywhere and, quite frankly, for everyone. That’s going to be a big deal. It’s going to change how you market. Privacy is important. It’s certainly important in highly regulated environments.

If you look at the EU, for example, the rules and regulations they have are pretty strict about where data can go and who can see it, personally identifiable information in particular. But let’s be honest: you are being watched all the time. Your phone is a spy. Your phone is tracking you wherever you go. There was a story about how a bunch of apps were still tracking you even though you told them not to track you.

Truth be told, these things are difficult to achieve. From an engineering perspective, it’s difficult to design truly bulletproof systems that deeply and enduringly protect your data. It’s hard to do. We’re going to talk to two guests. Gary LaFever is here from Anonos. Gary, that’s an interesting company you’ve got. Tell us about what you’re doing and how you fit into the privacy space.

[00:02:26] Gary: Eric, thank you. I appreciate the opportunity. Anonos is a company I’m a Co-Founder of. It combines two of my backgrounds and two of my passions. Believe it or not, one is the law and the other is technology. I started with an Accenture background in computer science and then practiced law for ten years with a big firm, Hogan Lovells. I saw that while these two silos try to help one another, they rarely can communicate. It’s all about that intersection between law and technology, so people can make use of their data.

[00:02:55] Eric: That’s pretty interesting stuff. I’m fascinated by the law. I’ve always wondered about the term: you practice law. It’s an interesting term. You practice medicine too, which means you’re always getting better at it. When you look at the nexus of laws and things like privacy, you have governance and policies. What’s interesting and exciting about the world of data and information systems these days is that we can have real policies baked into information systems, policies that have teeth.

Years ago, you could either give access or not give access. That was about the only string you had to pull, whereas now we can have policies tied to LDAP, Active Directory, or something like that, which sense your role and then give you role-based access to things. We have come a long way in the last few years. What do you think about that?
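
To make that role-based idea concrete, here is a minimal sketch of field-level, role-based access in Python. The roles, resources, and policy table are hypothetical illustrations, not anything either guest describes:

```python
# Minimal role-based access control (RBAC) sketch.
# Roles, resources, and fields are invented for illustration.

ROLE_POLICIES = {
    "analyst": {"customers": {"region", "age_bracket"}},
    "support": {"customers": {"name", "email", "open_tickets"}},
    "auditor": {"customers": {"name", "email", "region", "age_bracket"}},
}

def authorized_fields(role: str, resource: str) -> set[str]:
    """Return the set of fields this role may see on a resource."""
    return ROLE_POLICIES.get(role, {}).get(resource, set())

def read_record(role: str, resource: str, record: dict) -> dict:
    """Filter a record down to the fields the caller's role permits."""
    allowed = authorized_fields(role, resource)
    return {key: value for key, value in record.items() if key in allowed}

record = {"name": "Ada", "email": "ada@example.com",
          "region": "EU", "age_bracket": "30-39", "open_tickets": 2}
print(read_record("analyst", "customers", record))
# {'region': 'EU', 'age_bracket': '30-39'}
```

In a real deployment, the policy table would live in a directory service such as LDAP or Active Directory rather than in code.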

[00:03:46] Gary: What you’re focusing on and highlighting there is the move from analog controls to digital controls. The province of lawyers is words: policies, procedures, and treaties. But they’re only words. They don’t prevent bad things from happening; digital controls do. And where we are is at the beginning of the next generation, because the different controls you identified all have to do with where you’re establishing the perimeter: who gets to the data, when they get to see it, and for what purpose.

Gartner says 60% of data breaches are internal: people who have the right to access the data but not for the use to which they put it. Access controls are incredibly important, but by themselves they’re not enough. You also have to have controls that flow with the data and can limit misuse by the intended recipients. Let’s be honest. Everyone is going to be breached, and data is going to be used by people who weren’t intended to get it. We went from analog to digital. Digital controls have to flow with the data, not just act as gatekeepers.

[00:04:43] Eric: That’s very interesting. I’m frankly dying to understand how you can make that happen because data can be at rest. It can be in motion. I can understand how you could limit which applications get access to that data but once it’s in an application, then how can you govern what it gets used for and what it doesn’t? Is that in a wrapper around the data? How does that work?

[00:05:10] Gary: It goes to the whole concept of information surplus. Typically, people protect data at rest and in transit, as you indicated, with encryption. They try to keep out people who shouldn’t get access with access controls, but once someone gets access to the data, they are provided clear text. That’s where the problem starts. Based on my experience, most applications don’t require clear text. That information surplus, which is all the information you’re seeing that you don’t need for your intended purpose, is a liability.

There have been evolutions and improvements in this through the years. It’s creating a transformed version of the data that gives you what you need for your authorized process and no more. Do you need to know everything about this person? Do you just need to know what nationality they’re from, what region of the country they’re from, whether they’re a male or a female, or how tall they are? The ability to transform data and reveal only that version of the data that is needed for a given use is where a lot of the digitization comes from. The next move is dynamic digitization.
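
As one hypothetical illustration of that transform-and-reveal idea, the sketch below returns a purpose-specific, generalized view of a person record. The field names and purposes are invented for the example and are not Anonos’s actual method:

```python
# Sketch of reducing "information surplus": each authorized purpose
# sees only a generalized version of the data it needs.

from datetime import date

def generalize(record: dict, purpose: str) -> dict:
    """Return a purpose-specific, generalized view of a person record."""
    if purpose == "demographics":
        age = date.today().year - record["date_of_birth"].year
        return {
            "region": record["postal_code"][:2] + "***",  # coarse area only
            "age_bracket": f"{age // 10 * 10}s",          # e.g. "30s"
            "gender": record["gender"],
        }
    if purpose == "shipping":
        return {"name": record["name"], "postal_code": record["postal_code"]}
    raise PermissionError(f"No transformation defined for purpose {purpose!r}")

person = {"name": "Ada Lovelace", "date_of_birth": date(1990, 5, 1),
          "postal_code": "10115", "gender": "F"}
print(generalize(person, "demographics"))
# {'region': '10***', 'age_bracket': '30s', 'gender': 'F'}
```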

[00:06:15] Eric: I have another client that we work with doing some cool work in the blockchain space. He uses an example as you did. What he says is when you’re buying something from somebody, you give them your credit card. Maybe in certain situations, you give them your identification. Why are you giving all this information to people? All they need is one piece of information. Why can’t you give them what they need? It sounds like you’re taking a programmatic approach in the information system world to address that very situation, which you call interestingly information surplus. I like that.

[00:06:47] Gary: It’s interesting because it may sound like an asset, but it’s a liability. Every piece of information you get that you don’t need is either unfound wealth, because you didn’t pay for it, or a liability. Way too often, things that you don’t pay for end up being a liability. I love what you raised, which is that you don’t even need to know the credit card number. What you need to know is whether the issuer of the credit card will stand behind the transaction.


[00:07:12] Eric: Let’s get into some of the use cases here. You have to be careful about personally identifiable information. People steal it. I remember in the early days of the interwebs, when people would get worried, with some valid concern, about putting their credit card number online. Folks would say, “You give your credit card to the waiter at the restaurant all the time. They take it back somewhere else. They might be writing it down. You never know. They could be taking it.” That’s probably how some of those details are stolen. You’re addressing, in this information-system world, who gets access to which pieces of data. What are some of the broader use cases? Why do clients come to you and use your technology?

[00:07:57] Gary: What it is about is removing obstacles. I say that out of respect for all the advances in technology that have occurred. It’s not that hard to perform the processing that is desired. Whether it’s internal data sharing to determine the next best action with a customer or it’s internal data sharing within a pharmaceutical company to compare the results of two different clinical trials, both of those are still internal within the organization.

Technologically, it should be very easy, but oftentimes you get told no, whether it’s by your privacy team or your security team, because of concerns about liability. If you can show that, even internally, the information you’re sharing gives your colleagues what they need to process what’s authorized but no more, you get more yeses more often. When you need the identifying data, that’s a special request, which is fulfilled if there’s a reason for it. No one loses. It’s a different approach.

I want to comment on something because, to me, it helps encapsulate a huge difference between the EU and US perspectives on privacy. You said it yourself: it’s personally identifiable information. In the EU, though, they’re concerned with any data that can be combined to identify someone. That’s much more than your credit card number. The fact that you’ve worked for sixteen years at a data show in and of itself probably identifies you, but neither of those two facts by itself is personally identifiable information. To continue to benefit from the data, you need to reduce the information surplus to protect both PII and personal data, meaning those data elements that, when combined, reveal you in a mosaic.

[00:09:36] Eric: For the benefit of our audience, there are lots of things you can do to protect data. When it’s in motion, you encrypt it. You can encrypt it at rest too. You can encrypt access to it. You can also mask data. That’s a common technique. What’s interesting is format-preserving masking, where the masked value looks like a phone number or a Social Security number, but it’s not the real number. It’s a different number that hides the real one. These are also effective techniques, but they can be a bit difficult to manage and set up. How is what you’re doing different from traditional data masking or format-preserving masking?

[00:10:12] Gary: Those were great evolutions toward the dynamic digitization and protection of data. The problem is they suffer from a number of issues. Static tokenization has been used for years: “I’m going to replace your email. Don’t worry about it. But I’m replacing your email with the same static token every time it appears.” It’s child’s play to put together a couple of different transactions and figure out what you want. You have to have dynamism. You have to use different tokens at different times for different purposes, but you can keep track of that.

Information surplus is not only how much detail or identification is included within the data revealed but also how much of the correlations and relationships between different data uses are revealed. All of that can be managed. With the ongoing decreases in the cost of storage and processing, the back-room work needed to make this happen is infinitely doable. The users of the data see no difference in speed, fidelity, or accuracy.
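
A toy sketch of that static-versus-dynamic point, assuming dynamic tokens are generated fresh per use with a separately protected lookup table for authorized re-linking (the function names are invented for the illustration):

```python
# Static vs. dynamic tokenization sketch.

import secrets

STATIC_TOKENS: dict[str, str] = {}

def static_token(email: str) -> str:
    """Same input always yields the same token: trivially linkable."""
    return STATIC_TOKENS.setdefault(email, secrets.token_hex(8))

# token -> (original value, use case), kept under separate control
LOOKUP: dict[str, tuple[str, str]] = {}

def dynamic_token(email: str, use_case: str) -> str:
    """A fresh token per use; correlation requires the protected lookup."""
    token = secrets.token_hex(8)
    LOOKUP[token] = (email, use_case)
    return token

email = "ada@example.com"
print(static_token(email) == static_token(email))                    # True
print(dynamic_token(email, "marketing") == dynamic_token(email, "fraud"))  # False
```

With static tokens, anyone holding two tokenized data sets can join them on the token. With dynamic tokens, that correlation requires the lookup table, which stays under separate control.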

Data Privacy: Information surplus is not only how much identification is included within the data revealed. It’s also how much of the correlations between different data uses are revealed.

[00:11:05] Eric: Do you wind up focusing heavily on financial services and healthcare? Is there a stronger industry than others in terms of your prospects and your clients?

[00:11:14] Gary: You nailed it generally: highly regulated organizations. If you are not getting a no to the desired data use, you don’t need our technology. The reason you’re getting the no is not that, technologically, it’s difficult to do. The reason you’re getting the no is that someone is telling you that you could get in trouble. That happens in highly regulated industries like financial services, telco, and healthcare.

Healthcare is a big one because you get the flip side. Reducing information surplus should not degrade the accuracy, relevancy, and fidelity of the data. If you pull back on the information you’re giving someone and, in doing so, the data is no longer as real, you haven’t accomplished a lot. You have to hold those two competing goals together and reconcile the tension.

[00:11:53] Eric: That’s a good point. Stephen Cavey is back in. We will throw it over to him for a few minutes before our first break. Stephen Cavey dialing in from Ground Labs. Tell us real quick about your role in the privacy world, what you’re working on, and how you’re helping companies with discovery.

[00:12:09] Stephen: Eric, it’s a real pleasure to be here. What Ground Labs does is help companies identify and discover the critical data that is sitting across their entire organization. A lot of organizations embark on data security and data privacy initiatives to protect that data and comply with the different laws coming out around the world. Ultimately, it’s all about avoiding data breaches as well.

Critical decisions get made about how to secure data and what data needs to be secured, but very often, those decisions are made on the basis of assumption. It’s going out and asking the business, “What data do we have? Where is it? How are we processing it? Why do we need it?” If you do it on that basis, which organizations have done for the longest time, you will in many cases end up with a very false sense of security.

You’ve invested a lot of time and money in some great security solutions that do a fantastic job of securing data, but you’ve overlooked significant amounts of data that were outside of your visibility. There are many different reasons for that, which we can dive into and unpack if necessary, but at the end of the day, the first job in securing data is to understand the data you have.


That’s where Ground Labs comes in. We help companies discover data across their on-premises and cloud-based data sources, scanning unstructured and structured data and identifying personal data types from over 50 countries. Most organizations, certainly the ones we work with, are very often selling to an international customer base. They’re collecting data types from all over the world, but some of those organizations focus only on their domestic requirements.

If we can help them identify all of the different types of data that they hold, they can start to go down the path of securing the types of data that they have, not leave anything out of scope, and not take anything for granted because unfortunately, when data breaches continue to happen, very often the data that’s being stolen or taken from them is data that they weren’t even aware they had in the first place.
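
As a highly simplified sketch of what pattern-based discovery can look like, the Python below walks a directory tree and counts matches for a few personal-data patterns. Real discovery tools handle far more data types, countries, and file formats; the paths and patterns here are illustrative assumptions, not Ground Labs’s implementation:

```python
# Toy data-discovery scan: walk a tree, flag candidate personal data.

import re
from pathlib import Path

PATTERNS = {
    "email":       re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "us_ssn":      re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def scan_tree(root: str) -> dict[str, dict[str, int]]:
    """Count candidate personal-data matches per file under root."""
    findings: dict[str, dict[str, int]] = {}
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        hits = {name: len(rx.findall(text)) for name, rx in PATTERNS.items()}
        hits = {name: count for name, count in hits.items() if count}
        if hits:
            findings[str(path)] = hits
    return findings

print(scan_tree("/tmp/sample-data"))   # hypothetical directory
```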

Data Privacy: When data breaches happen, the data being stolen is often data the owner wasn’t even aware of.

[00:14:26] Eric: That’s a good point. I’m glad you brought that up. We could talk about this a bit in the next segment because the other big storyline here is that you’re not just protecting data from people seeing it. You’re protecting data from machines seeing it. When we talk about artificial intelligence and unleashing the power of this technology, to Elon Musk’s point, it’s tremendous. It’s game-changing. You had better be careful that you’re not unleashing the Kraken by allowing the AI to point at data types that, to your point, Stephen, you don’t even know you have. If you don’t know what you’ve got, it’s very hard to take inventory, and it’s impossible to come up with a plan for how to protect it.

[00:15:12] Stephen: The biggest problem is that organizations don’t fully understand all the data that they have. We run surveys all the time. We have groups of professionals in a room. The general simple question we put is, “Do you think your organization knows where all the data is that it’s holding?” About 70% will respond, “We don’t think our organization truly knows where all of the data is.”

I would very much like to ask the remaining 30%, who do think they know where all the data is, “Could you explain in more detail how you know where all that data is?” When you dig deep enough, you will very often find it is an assumptions-based approach. They’ve got some wonderful data maps. They can articulate clearly what data they have, but when you dig into the basis on which they formed those conclusions, you will find it is an assumptions-based approach.

[00:17:32] Eric: We heard lots of great commentary. Privacy didn’t come up too much, but it’s always lurking in the background. One thing that did come up was artificial intelligence, or AI. For those who aren’t too familiar, what it boils down to is algorithms that are trained on data and then make predictions, make recommendations, or do various other things.

AI typically does one of two things. It either classifies, so it will say, “You should be in this segment or that segment,” or it optimizes a decision point such as pricing, for example, or, “Do I alert someone? Do I not alert someone?” in fraud detection, network security, or things of that nature. You see some of that stuff, but AI, as Elon Musk said, has tremendous potential. The world’s biggest bad guy, Vladimir Putin, said that whoever wins the AI race is going to rule the world. He thinks it will be him. Hopefully, it’s not. The point is, it is powerful, but you have to be careful because every model you build is trained on data.

Sometimes you have these adversarial models, where you can train one on the other. That’s an interesting space as well. The privacy issue is that you have to be careful about what you allow these algorithms to see. To Stephen’s point earlier, if you don’t know what data you have in your environment, you had better do some heavy-duty discovery before you start flipping on AI engines to optimize anything like personalization, for example.

People are saying, “Personalization is the future.” I know a fair amount about it. It’s a real challenge. You have to have pretty darn clean data if you’re going to start doing that sort of thing. You have to be very careful about what you automate. Gary LaFever, I’ll bring you back in from Anonos. This is a significant concern for organizations looking to leverage AI. They don’t know what data they have. They had better figure that out before flipping the switch on AI. What do you think?

[00:19:20] Gary: Without question, Stephen is correct. The very first step is you have to know what data you have and where it is, because if you don’t know that, all the other approaches you take are, if anything, only going to be somewhat helpful. Once you know what data you have and where it is, when you turn the AI spotlight on that data, it’s going to see all the data in the light, not just some of it.

There are some things you can do there. One is synthetic data. It also goes into some of the issues with AI. When you’re building models, how do you know that the data that you’re building the model on is reflective of reality? Your experience base, customer base, and transaction base may be accurate for you but is that accurate for the new markets or new directions you want to go?

Synthetic data does help there, because if the AI processes synthetic data incorrectly, and it’s truly synthetic, it’s not revealing anything identifying. But at some point you want to run production data through the model. That’s what this is about. Building a model is not why you do it. You do it for those insights, inferences, correlations, and innovations. That’s got to be real data.

That’s where it becomes critical that you don’t turn the AI spotlight on all the data. If you’re trying to figure out the next best action for customers, do you need to know where they live? Maybe so. Maybe not. Do you need to know their gender? You need to cut back on the full exposure of the data that the AI has, because that’s when it gets in trouble: when it misuses the data it gets access to.
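
On the synthetic-data point above, here is a deliberately tiny illustration: it samples new records matching each column’s observed distribution without copying any real row. Real synthetic-data generators also preserve cross-column correlations, which this toy intentionally does not; all names and values are made up:

```python
# Toy synthetic-data generator: per-column sampling only.

import random
from collections import Counter

real_rows = [
    {"region": "EU", "gender": "F", "age_bracket": "30s"},
    {"region": "EU", "gender": "M", "age_bracket": "40s"},
    {"region": "US", "gender": "F", "age_bracket": "30s"},
]

def synthesize(rows: list[dict], n: int) -> list[dict]:
    """Draw each column independently from its observed distribution."""
    columns = {key: Counter(row[key] for row in rows) for key in rows[0]}
    return [
        {key: random.choices(list(counts), weights=list(counts.values()))[0]
         for key, counts in columns.items()}
        for _ in range(n)
    ]

print(synthesize(real_rows, 2))
```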

[00:20:50] Eric: I like your concept here of information surplus. When you have too much, it’s a liability. That’s going to be a liability times X in the world of AI because you don’t know what’s going to happen. There is this explainability storyline in AI. We want to have explainability. I’ve talked to quite a few people in this space.

There’s a guy from a company called Monitor who had an interesting point. He thinks explainability is going to be table stakes in the future. You’re going to have to have it. Everyone is going to have it. Even still, it gets a little dicey to dig under the covers and figure out, “Why did the engine recommend that I not give this loan to this person? Why did it recommend we give it to that person?” You can look at log files and try to break that down, but it is a bit of a challenge.

[00:21:38] Gary: It’s explainability and defensibility because explaining why the way you did it was wrong doesn’t help but being able to defend the steps that you took to limit the misuse of the information goes a lot further.

[00:21:51] Eric: Stephen, I want to bring you back in. I love your focus on discovery. Frankly, we have seen a lot of activity in the last couple of years around data catalogs, for example, some of which are very dynamic and will scan your entire environment. There are other, older technologies too. One of my favorite companies is a company called ExtraHop. It’s in the security space, but what they do is siphon off all your network traffic and use it to build a digital twin of your entire network.

It shows all the databases, applications, and the flow of data between all these things. That’s a form of discovery, but what you’re talking about is also very important and compelling: scanning your information systems and figuring out what’s where. I’m guessing you’re looking at metadata like column names. You’re also looking at which piece of the architecture touches which other piece. Can you walk us through how you do discovery? What are the actual algorithms that you employ doing?

[00:22:48] Stephen: It’s an interesting topic you bring up because not all data discovery is the same. There are different forms of discovery generally: you alluded to application discovery, device discovery, network fingerprinting, and so on. If you go back ten years or more, data discovery was an assumed piece of functionality. You could push a button or tick a box, and it would all work wonderfully.

If all you did was invest in a Data Loss Prevention solution, a DLP solution, you would get that box ticked. It would all work, show you exactly what you had, and you would move on. At the same time, it would stop data from being exfiltrated from your network. The harsh reality, for anyone who has been through that, comes when you turn on blocking mode and say, “I want to stop any data that shouldn’t be going out the door.”

They watched their networks come to a grinding halt, quickly turned off that blocking mode, and realized that this is difficult to do. It’s easy to do an average job of discovery. It’s hard to do it well. It has taken the years we have been doing this, and being highly specialized in it, to understand the nuances and the challenges, particularly as the amount of data out in the world explodes.

We’re no longer looking at repositories of a few terabytes. We’re looking at repositories of hundreds of terabytes, and collectively petabytes, of information in an average decent-sized organization. That’s a real challenge when you think about it from a discovery standpoint. I alluded before to the fact that not all data discovery is the same. What I mean is that these challenges compound: there’s so much more data, and so many different forms of data, that it’s difficult to understand.

What we find within the industry is that some data discovery solutions may skip a lot of the data in your environment as a way of simply getting through it and reducing the number of false positives that are likely to come up when you do a basic job of data discovery. What we have learned at Ground Labs is that, first, you do need to scan every byte of data that exists in your organization.

Data Privacy: Some data discovery solutions skip a lot of data in your environment to simply get through it and reduce the number of false positives.

There are so many different formats and structures of data out there, and you need a way of dealing with those nuances and scenarios. I mentioned earlier that there is unstructured and structured data. Eric, you referred to databases with column names and other things. You need to be able to scan through the entire breadth of a database. That in itself is challenging, but the reason it’s important is that it’s not the column names that are going to give away where data is located.

You will have these free text comment fields, for example. This often happens in CRM where you have customer service operators and so on. They can type any free text information about a customer. Sometimes, those fields are being used to log very sensitive information like their driver’s license, their credit card, or other forms. You won’t pick that up unless you’ve gone through and scanned every single row of a database.

We work with organizations whose databases are terabytes in size. They’re adding 50 or 100 gigabytes of new data every single day. In the past, they have truly struggled to scan every part of those databases because of their size. They have tried to take an assumptions-based approach, saying, “Let’s rely on the column names to infer whether we think there is or isn’t something sensitive.” Making those assumptions is a very dangerous game to play.
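
To show why scanning every row beats inferring from column names, here is a small sketch that pulls card-like numbers out of a free-text comment field and validates them with the Luhn checksum, which is one way to cut down the false positives mentioned earlier; the field contents are hypothetical:

```python
# Find Luhn-valid card-like numbers inside free text.

import re

CANDIDATE = re.compile(r"\b(?:\d[ -]?){13,16}\b")

def luhn_ok(number: str) -> bool:
    """Standard Luhn checksum used on payment card numbers."""
    digits = [int(d) for d in reversed(number)]
    total = sum(digits[0::2])
    total += sum(d * 2 - 9 if d * 2 > 9 else d * 2 for d in digits[1::2])
    return total % 10 == 0

def find_cards(free_text: str) -> list[str]:
    """Return Luhn-valid card-like numbers found in a free-text field."""
    hits = []
    for match in CANDIDATE.finditer(free_text):
        digits = re.sub(r"[ -]", "", match.group())
        if luhn_ok(digits):
            hits.append(digits)
    return hits

comment = "Cust paid with 4111 1111 1111 1111, order ref 1234567890123."
print(find_cards(comment))   # ['4111111111111111']
```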

[00:26:28] Eric: When I think about this discovery stuff, it’s fascinating. This is an interesting topic to dive into a bit. Think about the layoffs happening at big companies. Amazon is laying off 10,000 people; I thought they would never lay off anybody, the way they were growing. Facebook and Twitter laid off thousands of people. Lots of these companies are laying people off. Those are a lot of jobs that were getting done that are now either not getting done or getting done by someone else. That’s a very interesting challenge.

What I hope comes from this is a more orchestrated approach to critical business processes like data exploration and data discovery because you have different teams. You have your security team, compliance team, risk team, and marketing team. All these folks typically are in silos. They’re doing many of the same things. You think about security and governance, for example. If there’s anything that goes hand in glove, it’s security and governance. If it’s not governed, it’s not secure. If it’s not secure, it’s not governed.

You have different people working on these things and then you have different tools used to get these things done. I’m hoping that maybe as we struggle through this recessionary period, what we’re going to see is more of an awareness of where there is overlap and how we need to readjust organizational hierarchies to be leaner and more efficient. I’ll throw it over to Stephen first, and then maybe Gary can comment on it. What do you think about that, Stephen?

[00:27:51] Stephen: I couldn’t agree more, Eric. In the world of security, and certainly with my background, we very often say, “Security and privacy are not an IT problem. They truly are a business problem.” That’s where a lot of organizations fail. They will embark on these big initiatives to try to understand data, but they will put the responsibility back on the IT security team and expect them to sort it all out. The members of the business working day in and day out make this big assumption: “There’s that team over there. They will make sure that we’re secure. It’s their responsibility to ensure that we are secure. That allows us to do whatever we like.”

This is where you’ve got to turn the conversation around. We talked briefly about AI before. Only a few years ago, you would walk the halls of the RSA Conference, held every year in San Francisco at the Moscone Center, and the amount of AI-led marketing headlines and terms that vendors were using to try to draw customers in was incredible, because we are going through a cybersecurity skills shortage. It is a real challenge to get skilled people into our businesses.

There was this perception: “If I invest in a solution that headlines with AI, that will be the savior to all my problems. I no longer need to hire staff to take care of that.” It’s funny that the number of vendors leading with the AI message has now reduced. It doesn’t mean they’re not using AI or ML in some form or capacity, but they’re trying to be a lot more realistic. In our space, the best weapon you have is your people. You need to get the people who are creating this data, working with this data, and understanding this data day in and day out involved in the process of keeping it secure, and make them responsible for cleaning up any mess that’s identified.


In going through that process, you will see the overall awareness and culture within the organization become far more aligned with data security and data privacy. If there’s a platform overseeing the data they handle, discovering when they don’t do the right things with people’s data, and alerting on it, people start to be more conscious of how they’re handling that data and do the right things with it.

There’s a basic attitude now: if the members of your organization are seeing the personal information of an individual, that’s a big deal. It requires an incredible amount of care and caution compared to the past, which was the Wild West. We were so desensitized to seeing lots of records of individuals. We had no qualms about emailing those Excel files and everything else to each other, whether it was employees’ or customers’ data. It’s a different world we live in. That very much explains why there are now over 130 countries with modern data privacy laws coming into place. The penalties are going up significantly when you do the wrong thing with that data.

[00:30:46] Eric: You made a good point too about the trajectory of where these things are going, and also your comment about awareness: when you see the misuse, you do a much better job of appreciating your role in security. Let’s face it. Security, to your point, is not the job of five people on the other side of the organization. Security involves everyone.

Phishing attempts are everywhere these days. These guys are getting pretty darn clever. Luckily, I’m a Mac guy, so it didn’t cause me too much trouble, but I once clicked on one because it was so realistic looking. That was a couple of years ago, but you have to be careful about that stuff. It’s everywhere. They’re flooding your inboxes, especially old email addresses that have been around.

We have launched the editorial calendar for DM Radio 2023, our sixteenth year. Send me an email at Info@DMRadio.biz if you want a copy of that. We want to know what you want to know. Let us know if you want to be on the show. This is a fantastic conversation. We’ve got Gary LaFever from a company called Anonos and Stephen Cavey from a company called Ground Labs. We’re talking about privacy. Is it dead? It’s not dead.

The encouraging storyline is that companies are getting very serious now about governance, not just security. We’re finally seeing an awareness that governance and security are interwoven. It’s the same stuff with different applications. I was saying before that the layoffs might spur more of a synthesis of roles within organizations. I’ll throw it over to you, Stephen, and then I want to get Gary’s comment on that. Go ahead, Stephen.

[00:33:48] Stephen: Privacy is taking center stage now, and I use that phrase very cautiously. Thanks to the number of data breaches being publicly reported, it’s becoming common knowledge. It’s almost becoming dinner-party conversation. You’re seeing organizations saying, “We’re doing everything that we should be doing.” We’re moving beyond an era where adequate security meant having a firewall, some antivirus, maybe automatically locking your screen, and a few other simple controls.

We’re now entering an era where companies are starting to realize that if they don’t get a handle on understanding all the data they have about individuals and start to manage that data appropriately, the consequences are significant. In the US, we are still early in this journey, as only a small handful of states have enacted data privacy laws. There’s a whole conversation around a national privacy law, but we will end up in a place where the whole country is governed in some consistent way, shape, or form when it comes to people’s data.

Data Privacy: Companies are starting to realize that if they don’t understand and manage their data appropriately, the consequences will be significant.

Businesses will be getting asked harder questions: “Are you handling data in the right way? Do you have evidence to prove that?” Unfortunately, in a data breach scenario, you’re going to have to front up to the data commissioner, or whoever you’re responsible for reporting to, and say, “Here’s what we were doing. Here’s where we failed.” That’s a tough conversation to have. The attitude has changed in terms of privacy and people’s data.

[00:35:19] Eric: Let’s bring Gary back in from Anonos. What do you think about this theory that the layoffs will lead to a greater synthesis of roles and break down some of these traditional silos, which probably shouldn’t be there in the first place? What do you think?

[00:35:32] Gary: That’s possible. What’s going to cause it even more is that these issues are now board-level issues. The best way to produce convergence: as you go up a triangle, it hits a peak. A similar thing happens within organizations. If you’ve had a lot of mid-level managers in security, privacy, and governance, they can all do their jobs and feel like they have done what they’re supposed to, but the SEC has now proposed new disclosure requirements asking, in effect, “Who on the board is responsible?”

There’s one very simple question you can ask to cause almost immediate convergence: how are we securing our data when in use? Upon a breach, if the data was encrypted at rest and in transit, you would have no issue. It’s when you go to make use of the data. You used the term digital twin. A digital twin is a digital representation of a person, place, or thing, with great value, danger, and risk. How are you, as a member of an organization, securing your data when in use?

I introduced earlier the concept, which I did not invent, of information surplus: information you don’t need for a given purpose. What we refer to as the output of our system is a variant twin, a transform-protected output that is use case-specific and reveals only that element of a digital twin needed for a given process. You need variant twins, or some version of that, because if you’re not securing the data when in use, then even if you do know where it all is, you’re incredibly exposed to either internal misuse or external breaches and hacks.


[00:37:05] Eric: Let’s face it. Hackers are very clever. They find ways to exploit loopholes whether in operating systems, applications, or over the network. The attack surface is so vast these days. We’re always changing things too. You talk about the modern data stack, which adds complexity to the situation. No matter which way you turn, and I’ll throw this back over to you, Gary, you have to have a data protection strategy in place. It better be robust and defensible. You better be able to articulate that if and when a regulator comes knocking.

[00:37:42] Gary: It’s not just the regulators. What we’re finding is it has more to do with the data supply chain. We have one client who represents German auto manufacturers through leasing. The German auto manufacturers were asked by the supervisory authorities, “What do you do to comply with Schrems II and limitations on the transfer of data?” They sent that question down the chain. The thing is, any company that could not provide demonstrable evidence that it protects data when in use was going to be cut out of the data supply chain.

You can cross the street with everyone else and hope you don’t get a ticket, but if your pivotal supplier is going to cut you off, you’re going to pay much more attention to that data supply chain than to a regulator. Under laws like the GDPR, there is joint and several liability: if someone else up or down the chain is doing it wrong, you can be held liable. So the strongest driver is not regulatory infractions and enforcement. It’s going to be boards’ concerns about their obligations as fiduciaries, and companies’ concerns about their ability to continue doing business with partners, that drive these advantageous changes more rapidly.

[00:38:46] Eric: That’s an interesting term you threw out there. I’ll throw it over to Stephen to comment on advantageous changes. I remember when the Enron scandal broke, I thought to myself, “This is a real opportunity.” You go in and have your governance team or your exploratory team find ways to improve the business. If you’re going to take a look at all your business processes, that’s a good opportunity to figure out, “Maybe we can optimize these. Why do we have all these choke points? Why do we have all this risk in our setup?” The same goes for moving to the cloud.

Stephen, I’ll throw it over to you. Moving to the cloud is an excellent opportunity to reexamine your business processes, your data, who you’re interacting with, what your data supply chain is, and all of the facets of the organization. Let’s face it. People are busy. They’re doing stuff. They’re going to be busier because a lot of people are getting laid off. You’ve got fewer people doing more work. It does behoove organizations to take a very hard look at the discovery side, not just the data discovery and the systems discovery but the process discovery.

[00:39:51] Stephen: There are a lot of data migration projects going on out there. Forward-thinking organizations are seeing them as an opportunity to say, “If we’re going to move all this data up to the cloud, let’s review what this data is and see whether we need to keep all of it and what needs to go up.” To touch on the earlier point about being advantageous, I see certain scenarios where you suddenly see an immediate change of attitude. Companies go from the old way of thinking to, suddenly, “We need to think about this differently.”

The three scenarios we most commonly see are, number one, they have suffered a data breach. That’s an immediate attitude changer in any organization. Number two, they are being pushed by regulatory compliance deadlines. They have to show evidence of doing the right thing, and they know they have to change what they’re doing. Number three is interesting. It’s a change of guard. You get a new CISO, Head of Privacy, or Head of Security coming in. They’ve got no legacy reputation to hold onto, no past decisions they’re trying to defend.

Data Privacy: Three scenarios can change a company’s way of thinking: it suffered a data breach, it is being pushed by a regulatory compliance deadline, or it encountered a change of guard.

They’ve got a clean slate to say, “I’ve got the right to question everything that you’ve all been doing because I’m the new guy or new girl. I’ll go through and start to ask hard questions. If I don’t like the answers that I’m getting from the team about what data we have, what we’re doing with it, and where’s the evidence to back those statements up, then that’s the chance to go back to square one and say, ‘Let’s re-baseline everything.’” Ultimately, if this person has come in and they’re now responsible for the security and integrity of all that data, they’re going to want to understand what it is first before they start making decisions about how to secure it.

[00:41:28] Eric: That’s a good point. It’s a big issue. It’s something that has to be addressed. I’ve thought a lot about this general issue and why it’s so complicated but I love the advice you give. If you get a new person that comes in, that’s your opportunity. You’ve got probably 3 to 6 months. You better be asking lots of questions to lots of people. Document the conversations. It’s all about documentation.

It’s all about knowing that you’ve at least asked the right questions, and done so in a fashion that will reasonably give you the answers you’re looking for. Then you have to combine that with discovery, getting perspectives on things, and looking at reports. All this goes into the amalgam of what is required to ensure privacy. Customer experience is going to demand privacy.

It’s time for the bonus segment here. What a fantastic show. I’m talking to Gary LaFever from Anonos and Stephen Cavey from Ground Labs; look those folks up. We’re talking all about privacy. We left off with Stephen, so I’ll throw it over to you, Gary. I love this concept of bringing a new person in. I’ll tell a quick little joke. My business partner, Dr. Robin Bloor, is a British gentleman who always says very clever things. He said, “Every company has one person responsible for security. That person’s name is Scapegoat.”

Meaning, when it goes wrong, you’re out. Next. That doesn’t solve the problem. You could fire someone; that’s a redress of grievance. Maybe that person did drop the ball, but that doesn’t mean you solved the problem. Then you get a new person who has to come in and learn this entire matrix of whatever is going on in your organization. To Stephen’s point, that’s an important time for a CISO to ask lots of questions, document everything, and then start developing a roadmap. What do you think?

[00:44:27] Gary: A challenge there is that a new CISO doesn’t necessarily mean a new budget. When the board or the CFO approved a budget for security, how much of it did the person who came before you, who’s now gone, already spend? One of the things that makes a lot of sense here is that if you can control that misuse of information surplus and show an increased revenue stream from revealing only selective elements of data, you reduce your exposure and also increase your opportunities for data sharing and revenue production. The issue you have upon a change of the guard is how much budget is left to do what needed to be done in the first place, the right way. It always helps to have plus signs and not just minus signs.

[00:45:10] Eric: Revenue streams tend to light up the eyes of board members and senior executives because they’re looking at money coming in and money going out. Money is always on the move. That’s why cash flow is so important. Big repositories can be dangerous because they can be hacked; witness FTX and the collapse we have seen in the crypto space, for example. That’s a very unpleasant experience. My heart goes out to everyone who lost money on that puppy. Data is like cash too. Data has intrinsic value, but it can have negative value if it’s used in the wrong context.

[00:45:47] Gary: Cash has minimal value when it just sits in the bank. Cash flow is what increases the overall value, and the same is true of data. This goes back to something we have mentioned several times, which is data flows. How do you protect data in a way that the protections flow with it, while enabling you to reduce downside exposure and increase revenue opportunities? It’s a win-win. When there’s a change of guard, part of the story is, “We can fix what wasn’t done correctly before, and do it in a way that puts more data flows on the table rather than taking them away.”


[00:46:22] Eric: That’s very clever. Stephen Cavey, I’ll throw it over to you for final comments from Ground Labs. What are you working on next? Where are things going? Are you hiring? I hear there are a lot of people looking for a job. What are your thoughts?

[00:46:33] Stephen: We are hiring. We are in a space that continues to grow rapidly in interest and budget on the security front. We are still seeing the money that organizations need to spend on data privacy and data security holding up. Those budgets are certainly not being cut back. To Gary’s point earlier about budget cutbacks when a new person comes in: being able to quantify the data security risks you have goes a long way toward justifying any subsequent budget discussions.

In the past, it was always a qualitative analysis: “We’re high-risk or medium-risk based on several different inputs.” Now you can be very quantitative about it and have hard evidence to show, “This is exactly how much exposed data is sitting in our organization that people could come in and steal.” Here’s a quick story. We worked with a big telco years ago. They got all of the executives in a room, sat them at a table, put a piece of paper in front of each of them, and said, “This is your personal information that is sitting in our billing database. Ten thousand employees can access it.” You wouldn’t believe how quickly they got the budget and support to start fixing those problems.

[00:47:50] Eric: That’s funny. Folks, this has been a fantastic show. Look these gentlemen up online, Gary LaFever from Anonos and Stephen Cavey from Ground Labs. We will be talking to you next episode.

 
