Reality On The Ground – And In The Cloud! With Dr. Stefan Sigg And Dion Hinchcliffe
The cloud continues to change everything, but the rumors of on-prem’s demise have been exaggerated. Traditional data centers will have a very long tail, even as cloud computing giants like Amazon, Microsoft, and Google bolster their offerings. Cloud Native is the new call to arms. Organizations look to optimize their mission-critical workflows across an increasingly heterogeneous technology topography.
But what, exactly, is Cloud Native? How are today’s innovators spanning multiple cloud environments while protecting their on-prem investments? Check out this episode of DM Radio to hear Host @eric_kavanagh interview Dr. Stefan Sigg, Software AG Chief Product Officer, and renowned IT Analyst Dion Hinchcliffe of Constellation Research. They will discuss strategies for protecting data sovereignty, avoiding vendor lock-in, and optimizing information architecture across the cloud, on-prem, and the edge.
[00:00:14] Eric: Welcome to the show. Reality on the ground and in the cloud is the topic of this episode. First, we are going to hear from Dion Hinchcliffe, and then Dr. Stefan Sigg from Software AG will join us. I want to hand it over really quickly. I will stop sharing and hand it over to Dion Hinchcliffe. Dion, take it away.
[00:00:32] Dion: What I wanted to talk about for 4 or 5 minutes was where we see our research showing where the cloud is going. Cloud is the number one topic in IT, in how the world is delivering technology. This is the whole view from when this all started. I was working with AWS back in 2006 when they had EC2 and S3, the very first cloud services that were public cloud. That was amazing. We have gone through a whole bunch of evolution since then.
SaaS has become a big thing. Edge and IoT have been topics for years now, where the cloud touches all of our organizations, our cities, and everything. We have hybrid cloud. People are realizing that some workloads are better kept private, on-prem, and some are better out there in public, and we get to fuse them all together.
Things like serverless become ever more important because we realize we don’t want to have to deal with any knowledge about our compute infrastructure. We want that layer optimized for us. We want to be able to use resources and not care where they are or how they work. We want somebody behind the scenes to make them work optimally.
Where we are now, starting right about 2020, when the pandemic began, we began to see multi-cloud becoming a big topic. Multi-cloud and cross-cloud are what we want. We want to be able to pull all the cloud services and data we have across all of our SaaS clouds and all of our public clouds and be able to create new offerings, new solutions, and new businesses out of all of our cloud assets.
That requires going across our clouds, using all of them together, not just sourcing from a bunch of clouds in silos. Cross-cloud and multi-cloud are hot now. Cloud-native, which means containerizing, using microservices architectures, and creating very composable cloud architectures that have real legs in terms of how long they are going to last, is basically a future-proofing approach: an architecture designed from the ground up for the way the cloud works best.
We are also seeing the rise of industry clouds. Some of the hyperscalers and some of the SaaS players like Salesforce are creating special clouds around healthcare or financial services and have special capabilities. You don’t have to sit with a generic compute cloud and go, “How do I transform my organization because I’m a hospital, healthcare network, bank or educational institution? I need something that’s already designed that has a lot of the features I need.”
Industry clouds have started to become a thing. It is still early days, but that’s what this evolution shows. We are also seeing Edge and IoT, and regional clouds, becoming a big deal. Finding these cloud resources is difficult. Cloud and edge brokers are helping you find organizations that have local data centers or big edge networks that you can tap into. The big monolithic hyperscaler approach is giving way to a more intelligent use of the cloud. It’s great.
On the leading edge, you have things like cloud coalitions coming together around certain topics or initiatives. We have the green cloud. A lot of the existing hyperscalers are doing that already, but sustainability in general is being expected of the cloud. We have metaverses, and then we also have the return of the geographic clouds.
We already have one in China, but there’s this real risk, and maybe an opportunity, for optimization of the cloud. That’s coming. With all the changes in the world, we are going to have a few more geographic regions. Europe is doing its own thing, with its own regulation, and it’s working. It’s good to see, but that’s where we are going to end up. You’ve now seen everything that has happened and will happen in the cloud across roughly a twenty-year time span. That’s my five-minute overview. I hope that was good.
[00:04:17] Eric: You are right that what goes around comes around. You have decentralization and then centralization. Things are changing, and they are changing quickly, but to your point, it is an opportunity, not a risk. It is a challenge, I suppose, to navigate these environments, but it’s a reality. Cloud is real. It’s here to stay. I don’t think there’s any doubt about that, but on-prem is going to have a very long tail, folks. Don’t look for the on-prem data center to collapse anytime soon. It’s going to be around for at least seven years. Isn’t that the amortization cycle for some of these technologies? That’s part of it, but you are going to see interesting use cases around that.
I’m so pleased to have two veterans of the industry on the line. We have got my good buddy, Dion Hinchcliffe of Constellation Research, and Dr. Stefan Sigg, Chief Product Officer from Software AG, dialing in from all the way across the globe. What a great metaphor for the discussion of the decentralization of the cloud: trying to understand how the enterprise operates in this new, topographically challenging environment where you’ve got public cloud, private cloud, and Edge. We are going to talk to Dion about that for sure. What is the Edge? It’s a very interesting concept.
You have a whole slew of data centers that are popping up all around the country now to serve this edge functionality because the availability zones of big cloud providers tend to be pretty large. They are not out there on the fringe in the middle of nowhere in North Carolina, for example. They are in some pretty hotbed areas.
If you are going to do Edge computing right, you need very low latency, and that means you need a provider nearby. It’s interesting watching this all transpire. We have privacy concerns. We have governance concerns. It’s funny that, in the recent past, all of a sudden, data governance became hot. I can tell you that data governance was not hot or interesting several years ago, and no one wanted to talk about it.
You would get run out of the room if you started talking about data governance several years ago, but now people get it. GDPR is a big reason for that. The General Data Protection Regulation comes out of the EU, and that’s a driver. Maybe I will start with Dr. Sigg from Software AG. Let’s bring you in and talk about your vision and how Software AG focuses on being the neutral party, the technology stack that plays well with others. How does that all dovetail with cloud-native and being responsible with data?
[00:07:30] Stefan: Thank you for having me, and please call me Stefan. Don’t call me Dr. Sigg. It feels like being a dentist, which I’m not. We at Software AG, are all about connectivity and integration. Connectivity and integration of applications, data, devices, and processes. This is the notion where it all comes together.
We have moved from our on-premise heritage onto the center stage of the cloud, transforming and rebuilding our software cloud-native, which means it is designed for the cloud: designed for container-based Kubernetes cluster deployments, so that it scales, performs, and leverages the scalable infrastructure, scaling horizontally and sharing resources.
All that has been done, but this is just the prerequisite to play there. As I like to say, the job to be done for customers and enterprises is still very much hybrid. There are the private cloud and the public cloud. Cloud-to-cloud integration is there, but still, for the foreseeable future, all kinds of private cloud, public cloud, on-prem, and Edge deployments need to be integrated. Across all of these different layers, the integration points are still alive and kicking. This is what we make sure of: when an integration breaks because customers move things into the cloud, we fix it and integrate again across all the different variations of deployments.
[00:09:10] Eric: This term cloud-native is taking over for obvious reasons, and cloud-native, in a nutshell, means a piece of functionality that can work wherever you put it, essentially. I can take it out of my Amazon environment and put it somewhere else, and it works. Theoretically, I should be able to take it out of the Google Cloud platform and put it on-prem. A lot of the principles around standardization are coming to the fore here and coming together somewhat neatly, it seems to me. It’s still going to be a little bit of a challenge, but it will always be a little bit of a challenge.
[00:09:44] Stefan: The concepts are there to be cloud-agnostic, but still, you have to do things right. You must design your software from the get-go so that switching costs are low, yet you still want to use the infrastructure. If you want to live with the life cycle of the underlying cloud stack, you have to strike a good balance: you don’t want to settle for only the least common denominator of the services the hyperscalers provide. You want the benefits of new versions and developments there, but you also want to make sure you are decoupled, that you can move from one cloud to another, and that you can integrate from one cloud to the other. What I think is coming on very strongly is that companies who run their private cloud with one of those cloud providers also no longer want traditional installer-based software. They want it deployed as if it were a public cloud. The IT departments also participate in the cloud philosophy and the new virtual data centers.
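Stefan's decoupling point is, in practice, often realized with a thin abstraction layer over provider services, so application code never calls a specific cloud API directly. A minimal sketch, with invented names and an in-memory stand-in where a real S3, GCS, or Azure backend would plug in:

```python
class ObjectStore:
    """Minimal storage interface; concrete backends wrap a specific cloud."""
    def put(self, key, data):
        raise NotImplementedError
    def get(self, key):
        raise NotImplementedError

class InMemoryStore(ObjectStore):
    # Stand-in for an S3-, GCS-, or Azure-backed implementation.
    def __init__(self):
        self._blobs = {}
    def put(self, key, data):
        self._blobs[key] = data
    def get(self, key):
        return self._blobs[key]

def archive_report(store: ObjectStore, name, body):
    """Application code depends only on the interface, so switching
    clouds means swapping the backend, not rewriting the callers."""
    store.put(f"reports/{name}", body)
    return store.get(f"reports/{name}")
```

Keeping provider-specific code behind one interface is what keeps the switching cost Stefan mentions low, at the price of forgoing some provider-unique features.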
Making sure that your software is flexible and truly natively deployable is very important, and one more sentence: it’s not only deployability. You need to make sure that your internal development, your software engineers, also code in the same environments that you deploy into. That’s not always the case. There are a lot of teams and companies that still work internally in an on-prem mode and then use a container to be shipped into the cloud. That is okay. You can do that, but what I would call cloud-native is when the cloud starts with the developer coding in a container-based, Kubernetes environment, and the deployment into production is another click, not a big transformation.
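One common way to get the dev/prod parity Stefan describes is twelve-factor-style configuration: the container image stays byte-identical across environments, and only injected environment variables differ. A hedged sketch, with illustrative variable names not taken from the show:

```python
import os

def load_config(env=os.environ):
    """Build runtime settings from environment variables so the same
    container image can run unchanged in dev, staging, and production."""
    flags = env.get("APP_FLAGS", "")
    return {
        # Illustrative names; a real deployment would define its own.
        "db_url": env.get("APP_DB_URL", "sqlite:///dev.db"),
        "log_level": env.get("APP_LOG_LEVEL", "DEBUG"),
        "feature_flags": flags.split(",") if flags else [],
    }

# In local development nothing is set, so the defaults apply:
dev = load_config(env={})
# In production the orchestrator (e.g. Kubernetes) injects real values:
prod = load_config(env={
    "APP_DB_URL": "postgres://db:5432/app",
    "APP_LOG_LEVEL": "INFO",
})
```

Because the image never changes between environments, Stefan's "deployment into production is another click" holds: promotion is a configuration change, not a rebuild.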
[00:11:57] Eric: That’s good advice. I’m wondering, from your perspective: I’m throwing out this theory that on-prem data centers are going to have a very long tail, and I’m talking decades. Do you see an alignment around what on-prem will be doing in the future? I will throw this little curveball at you too. We just got back from the Confluent conference in Austin, Texas. A lot of interesting things are happening there.
One of the best case studies was a company called CORE Financial, a net new company. They have no technical debt. They could look at the world and start, and they are doing financial reconciliation of crypto and a bunch of other stuff. In that environment where you don’t have technical debt, you can leverage the power of what’s out there.
You can leverage the very latest technologies. What I see with streaming data, for example, with streaming analytics or streaming apps, is that it tends to be net new use cases that are the bread and butter for those technologies, as opposed to transforming what we were already doing. Is there a chance that we are going to be able to transform these on-prem environments, or is all the new stuff going to be net new use cases using the cool technology? What do you think?
[00:13:08] Stefan: Companies are very pragmatic, if not to say opportunistic, in what they do. There are even mainframes out there, and we can talk about that, because we have very loyal customers who have a perfectly running application on a mainframe. Companies say, “Why should I touch it? Why should I do it?” Still, we can rehost it on a Linux server. That’s also good.
If I think back a couple of years, it was possible to create hype in IT, and people were jumping on the hype for the sake of the hype. I don’t see that anymore. I see companies sit down and think about, “What is my benefit?” That’s why I have almost never seen a 100% movement of a data center into the cloud. What I see is 70%, 80%, maybe 90%.
There is always, for a very good reason, a remainder of services and applications on-prem, and why not? You can go out into the cloud, and while you go into the cloud, you might want to discontinue one or the other application that you then consume as SaaS. Perfect. That’s all good. It creates a little bit more diversity in your IT landscape: the classical on-prem, maybe a new private cloud, the SaaS cloud, and then, more and more, the edge comes in, maybe from a factory or shop floor, maybe from other remote locations. This is the diversity of where applications and systems are deployed. It’s our job to connect and integrate this landscape as if it were one coherent set of IT.
[00:15:06] Eric: I will bring in Dion Hinchcliffe from Constellation Research to comment on that. Organizations must be practical. If you are not pragmatic, you are going to have some real difficulties on your hands very quickly, but there is such a desire to leverage the latest technologies, to do things in a new and different way as you come to understand the cloud and look at the edge in particular, and that creates a tremendous number of opportunities to do things in different ways. To Stefan’s point, complexity is a killer, so you want an environment your organization is going to be able to manage over the long haul. How do you balance all those things, Dion?
[00:15:47] Dion: I do think organizations are pragmatic, but they can overdo it as well and fail to meet the needs of customers who are demanding these new services. We saw a big change over the last few years where companies now want to hold their customers much closer, because those customers are no longer in a predictable location. Edge devices and edge services allow them to create connected products that sit in their customers’ businesses or homes, service those customers, and collect and share data. That value proposition is clear.
The real risk is that a lot of organizations are trying to build it all, or a lot of it, themselves, or even create a lot of the architecture themselves. In this world, if you haven’t been investing in security first, if you haven’t been building hardened, mature services, and you try to create these new edge-friendly, cloud-native architectures that cross all these different boundaries, you are going into a very difficult operating environment, with cyber threats and all sorts of challenges, and you still have to make all of that perform.
What we often see is organizations trying to hedge all those risks by saying, “Let’s work with something that already has a basic foundation, that has these things worked out, that is highly secure.” Something that has already been thought through and made resistant to all of those threat actors out there, where a lot of the performance issues are handled and governance and management are built in. You can pour the specifics of your business into that mold, on top of that foundation.
That’s a bit of a long pattern, but a lot of organizations think they can get that from Amazon, Microsoft, or Google, who are mostly working on their own clouds, not that entire universe of data, not all those SaaS clouds and APIs. There are 18,000 commercial APIs out there and counting. How do you weave that all together into a coherent service that is secure, performant, governable, and manageable, all by yourself? That is a tall order most organizations aren’t ready for.
[00:17:43] Eric: Stefan, I will bring you back into this. There is an impression that if you are a business person and you look at the Google Cloud platform, you look at Amazon Web Services, there’s a default presumption that they can do whatever needs to be done. The short answer there is yes, that they can. One of the most interesting insights I heard about Amazon Web Services is the reason why it looks like a Home Depot of functionality where it’s aisles and aisles of different things you could possibly do is that Amazon works very closely with each company to develop a very particular bespoke solution for them.
You have all these different services that get baked into something, but it’s like a Home Depot because there are all these different ways of doing things. The short answer is yes, they can do it all for you. The beauty of packaged software in the old days, when you would get your CDs and load them and be off to the races, was that it would just work for you. In the cloud, you have to dot all these I’s and cross all these T’s, and it ends up being a bit more complex than you think. What do you think, Stefan?
[00:18:46] Stefan: Yes, that’s true. The number of services now is overwhelming. There is a high risk that people, developers at companies, get lost there and start to do things that create hard dependencies that are not decoupled from each other. It’s very important that companies like Software AG, on the one hand, leverage the cloud and give customers the benefits of a cloud, but then also provide reuse and decoupling.
At the end of the day, it is also about sovereignty: keeping in-house developers from creating too many hard-coded dependencies. Packaged services are not necessarily only applications; an API management stack, an integration stack, and a data integration stack are there as bigger services, ready to use by the appropriate users, without stitching together mini services that carry a high risk of becoming very complex.
We all know how it goes when people leave an IT organization. There is hardly any handover. You are better off down the road with what we call build and buy: you buy the standardization, platforms, and stacks, and then you build only those parts which are differentiating for you. You want to avoid rebuilding that data pipeline from devices over the edge into the cloud.
Anything goes with software, and smart software developers can do that, but what is the point of rebuilding what is already there? That is true for integrating and connecting, and for so many other things, maybe even outside our portfolio. While there is virtually no escape from an overlap with Amazon or Microsoft, there are still a lot of good things that companies like Software AG can provide to customers in terms of helping them not get lost in the jungle of services of a cloud.
[00:21:15] Eric: That’s a very good way to put it. You brought up a number of key points here, Stefan. One of which is that you need visibility. The key with custom code is that wherever it does exist, you are going to want visibility into what it is and what it does. Custom code is what can separate you from the pack, but it’s also what can separate you from a functioning enterprise if the person who wrote it leaves and there’s no breadcrumb trail to figure out what that person did. This is a very real challenge. I saw it with a major airline that I used to book with all the time, where they had a major glitch in their system. I called in, and they finally told me, “It’s because you are using your membership information, and we haven’t updated that system.”
I’m here talking with Dr. Stefan Sigg, Chief Product Officer for Software AG, and Dion Hinchcliffe of Constellation Research. We saw both these gentlemen in the summer across the pond going over to Circuit Zone, which was absolutely fantastic. Software AG is doing a lot in the world of sustainability. It’s a big issue for energy purposes and lots of other purposes these days but we do want to talk about cloud-native and how it maps into digital transformation.
This is a term that has been around for a good long while now. Stefan, there is one technology suite I came across a while ago, and it’s what started my whole conversation with Software AG: process mining and process engineering. The average business person might think, “Process mining, what on Earth are you talking about?”
What we are talking about is being able to ping the systems that you use for your enterprise. Let’s say you are a manufacturer or a retailer, and you’ve got all these different products. What does it look like when I want to load a new product into the system? What does my order to cash look like? It’s not always as clear as you might think it is. A lot of times, there are hops between systems to get something done.
With process mining, you can see the de facto process that’s under the hood. Maybe it’s in a big ERP system like SAP, for example. It could be anywhere, but the point is being able to track it. This is something that comes from years and years of site reliability engineering and basic IT troubleshooting. “Why is our system slow?” “I don’t know. Let’s take a look at it.”
CPU usage is up, storage is hot, and different things are happening. A smart person still has to figure it out: “Maybe that’s because this node went down, or that happened.” There’s a lot of grunt work that goes into troubleshooting. Process mining comes along and offers a remarkable window into actual processes. Once you see how things are being done, that’s when you can get an idea about how to change things. Stefan, tell us about ARIS and how process mining comes into the picture when you are trying to do a digital transformation.
[00:25:32] Stefan: It’s huge. Process mining is a powerful concept, and there are two key parts. The first is being able to reconstruct process structures out of data where the process is inherently hidden. This is where you mine the process out of the data, and the data can be any data. The typical examples people talk about are order to cash or hire to retire.
What is even more interesting is to reconstruct processes out of production data, supply chain data, or logistics data. That is the way to filter a process structure out of the data. The next part is the visualization of the processes, not the process as a static thing but each and every process instance: every production of a metal part, every truck going from A to B, every little parcel going across the ocean. Each of these is one process instance that can follow a nicely modeled process, or it can just as well deviate from it.
That is exactly where the interesting part comes in. You can see, “There are these exceptions, things that go a different route, and that costs me money, time, and energy,” and you can immediately see why. That is a big difference from classical BI dashboards, where the usual question is, “What do I have to do to make this KPI 10% better?” The answer is usually very difficult because the data has been transformed and copied many times. Tracing back to the original system is very difficult, and the notion of a process flow is not given. In process mining, you can see the outliers visually.
I have seen, many times, people saying, “I haven’t seen that at all. This is the first time I’m seeing that, in this plant, when this machine is involved, things go a totally different way.” You can see the energy, time, and money lying on the street. You have to pick it up and align the processes again, and then comes the interplay with the to-be model. The key in ARIS is the equally strong interplay between well-documented process models and data-driven process analytics, with process mining going hand in hand.
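The core mechanic Stefan describes, reconstructing process variants from raw event data and spotting the outliers, can be illustrated in a few lines. This is a toy sketch, not ARIS; the order-to-cash event log and field layout are invented:

```python
from collections import Counter, defaultdict

def mine_variants(event_log):
    """Group events by case ID, order each case by timestamp, and count
    how often each activity sequence (process variant) occurs."""
    cases = defaultdict(list)
    for case_id, timestamp, activity in event_log:
        cases[case_id].append((timestamp, activity))
    variants = Counter()
    for steps in cases.values():
        trace = tuple(activity for _, activity in sorted(steps))
        variants[trace] += 1
    return variants

# Toy order-to-cash log: (case, time, activity).
log = [
    ("o1", 1, "order"), ("o1", 2, "ship"), ("o1", 3, "invoice"),
    ("o2", 1, "order"), ("o2", 2, "ship"), ("o2", 3, "invoice"),
    ("o3", 1, "order"), ("o3", 2, "credit check"),
    ("o3", 3, "ship"), ("o3", 4, "invoice"),
]
variants = mine_variants(log)
# The rare variant with the extra "credit check" hop stands out immediately.
outliers = [v for v, n in variants.items() if n < max(variants.values())]
```

Real process mining tools add visualization, timing, and cost on top, but this frequency-of-variants view is the part that makes the exceptions Stefan mentions visible.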
[00:28:16] Eric: What I find so exciting here, Stefan, is that there are so many ways to do things these days. Look at what Kubernetes did to the world of enterprise technology. It opened a huge door to a new way of doing things. It’s overkill sometimes; you don’t always want to rely on Kubernetes, but if you are a large organization, you certainly want to have it somewhere in the mix.
When I think about how the Edge changes everything, how private cloud and public cloud are coming together, and how on-prem is going to have this long tail, you have a very broad and rich tapestry from which to design your future state. You bring up a good point that in process mining and that discovery process, you are going to learn things about how your company works that you probably never knew before. Until you know how something works, you are going to have a hard time optimizing it, let alone automating it.
[00:29:13] Stefan: One of the necessary preconditions is the availability of good, clean data, especially when it comes to the non-ERP type of data. It’s quite hard to assemble or build up a good system of record, because production and logistics, those non-financial, non-accounting environments, are quite diverse.
Therefore, the combination of a strong device integration and IoT stack, where all this new type of data is cleaned up and stored in a good way, with process mining on top, is a fantastic opportunity, and very much untapped potential in many companies. The complicated step of data provisioning is solved because the IoT stack is already there. That is a big difference: having the data already there, versus running a one-off project to assemble and fix the data. Therefore, we believe in that fantastic interplay of process mining analytics on top of IoT data that is there anyway.
[00:30:34] Eric: Let me bring Dion Hinchcliffe back into this again, and we will use this ERA, the Electronic Racing Association, as an example. We saw these wonderful young people racing these cars, and it’s a very interesting model. Everyone gets the same car, so there’s no tinkering with your car versus someone else’s car. It’s the same car and the same model. Now it’s all about driving efficiency and use of data.
You have this real-time communication between the pit crews and the drivers saying, “We noticed on turn three you are losing traction, and that is affecting your ability to accelerate down that long stretch.” That’s real-time communication that is data-driven and exciting stuff as well. I thought that was such a beautiful example of how you can marry this IoT data into things with real challenges, like driving a race car around a track in real-time. That’s some pretty cool stuff.
[00:31:31] Dion: It was amazing to see. The way you can use data plays at two levels. One, you can build a dashboard and a view that allows you to respond in real time to the data. We heard that they would be able to tell the driver, based on how the car is behaving, “You’ve got to be careful in this upcoming turn.” They would use it almost in real time, which is striking.
That’s not something that you can do unless you have this entire technology stack sitting inside the car, transmitting data to another place that it recognizes should be used in a real-time fashion. You can also sit back and then look systematically across all those data sets over time and do strategic analysis saying, “What do we want to do about how these cars perform? Do we see patterns across them that we can use?” There are so many ways you can use that data. The key is to be able to collect it and have a platform that can do that and then be able to transmit that to solutions that let you operate on it.
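Dion's two levels, real-time alerting on each sample versus strategic analysis across all laps, can be sketched over the same telemetry. The traction metric, threshold, and data here are invented for illustration:

```python
def check_sample(sample, min_traction=0.6):
    """Real-time path: flag a single telemetry sample as it arrives."""
    if sample["traction"] < min_traction:
        return f"Careful in turn {sample['turn']}: traction {sample['traction']:.2f}"
    return None

def worst_turns(samples, top=1):
    """Batch path: look across all laps for the turns losing the most traction."""
    by_turn = {}
    for s in samples:
        by_turn.setdefault(s["turn"], []).append(s["traction"])
    averages = {turn: sum(v) / len(v) for turn, v in by_turn.items()}
    return sorted(averages, key=averages.get)[:top]

# Two laps of invented samples; turn 3 is consistently weak.
laps = [
    {"turn": 1, "traction": 0.90}, {"turn": 3, "traction": 0.50},
    {"turn": 1, "traction": 0.88}, {"turn": 3, "traction": 0.55},
]
alerts = [a for s in laps if (a := check_sample(s))]
```

The point of the sketch is that the same collected data feeds both paths: the per-sample check drives the pit-to-driver radio call, while the aggregate view drives strategy.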
[00:32:26] Eric: It was fun to watch that in action. I also wanted to dive a bit more into this IT and OT data. Operational Technology: think machines. The machines are doing things. In a manufacturing environment, you have a whole heck of a lot of OT data that has been there for a long time, but now we are finally starting to see the nexus of the two, and that can come in very handy for lots of different reasons.
Again, think about manufacturers. Look at the forcing functions of tariffs, COVID, and then the Ukraine war. These are very serious world events that are fundamentally changing what is available to you. If you build cars, for example, there is this chip shortage, although, half-joking, there is a bit of chip hoarding going on for fear of the shortage that’s coming. You have to separate the truth from the narrative. The IT and OT nexus represents a fascinating point in time in the evolution of industry and manufacturing. Stefan, do you want to talk about that really quickly? What can you get from that?
[00:33:30] Stefan: We started, already before the energy crisis, a research project with the renowned University of Darmstadt in Germany on energy efficiency on the shop floor. The topic was energy efficiency and carbon-neutral production. That is the vision of the institute and the whole research team, and they have built a whole sample factory that looks like a true production factory but is built for research purposes. It is a big physical building, and we have installed all of our software there. We are working with the professor there to find out how a modern IoT stack, with analytics on top, can help companies optimize energy efficiency and define a roadmap to carbon-neutral production.
That was started years ago, and I never expected that this could become so hot, in a tragic sense, as it is now, when everybody is switching off lights and turning down heaters, and energy efficiency is not only a good thing to do but a mission-critical thing. We are at the center of attention for many companies, showing that it is doable. The technology is there. You need to equip your machines, maybe retrofit them with some sensors, but then it is possible to see where the outliers are in the processes that cost energy and CO2.
[00:35:23] Eric: You are building efficiency from multiple drivers. One of which is financial, and one of which is the desire to do good things and to be a responsible steward of energy. When push comes to shove, it’s all about either the top line or the bottom line. Let’s face it. There are some very heavy expenses and costs when energy goes up. That affects everybody.
It’s going to affect the entire supply chain. It is a forcing function. Necessity is the mother of invention. Dion, I will throw it over to you. We have seen some very creative ways to get around things. When human beings need to get creative, they tend to get creative and do some interesting things. What do you think, Dion?
[00:36:01] Dion: The best phrase that describes that is innovation often comes from where you least expect it. What we have seen is that the bar keeps lowering for who can create solutions. Who can take and build on top of all this data, services, and everything that we have and then produce business value? We have seen this sharp increase. It’s the combinatorics.
Being able to integrate what we already have into new solutions is where most of the new value comes from: creating synergies out of all of the IT that we have invested in over 20 or 30 years. If we are going to hand it over to a small group in the IT department to try to transform our organizations and automate everything, we are going to fall behind.
What we see is that we can unleash innovation across organizations. More and more companies are saying, “We have to compose what we have much faster than we have before. We have to integrate and create value for our customers, deliver all the data that we have been collecting for them, and do more for them if we want to continue to survive.”
We need to unleash the floodgates of innovation, and often good ideas come from the corners of your organization, not the center. You have to unleash that. I have seen citizen development and even pro-coders using these much more nimble and agile tools to knit things together, taking the smart person down the hall and saying, “You have all these great ideas, but you would have to get in line in IT and wait behind 50 other applications in the pipeline to get anything done. Go do it.” We do see this shift happening where we are seeing innovation being unleashed on the ground, and the data in the cloud is where that’s happening because that’s the most composable and integrable place that has ever been created.
[00:37:46] Eric: What a great way to end that segment. Cloud is the most composable environment ever. In the back of my mind, as you were talking, I was thinking about Apache Airflow and how many people are using Airflow now to stitch together these workflows across different environments. The beauty is that we can track everything now, and what gets measured gets managed. We have learned that over many years. We are in a very special place for innovation.
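To make the orchestration idea concrete: tools like Airflow let you declare which steps depend on which, and the scheduler derives a valid run order. This is an illustrative sketch of that core idea in plain Python (not the actual Airflow API), with hypothetical task names for a pipeline that spans on-prem and cloud:

```python
# Illustrative sketch of DAG-style orchestration, the core idea behind
# tools like Apache Airflow: declare dependencies, derive the run order.
from graphlib import TopologicalSorter

# Hypothetical tasks: each key lists the tasks it depends on.
dependencies = {
    "transform_in_cloud": {"extract_on_prem"},
    "load_to_warehouse": {"transform_in_cloud"},
}

# A topological sort yields an order in which every task runs only
# after all of its upstream dependencies have completed.
order = list(TopologicalSorter(dependencies).static_order())
print(order)  # → ['extract_on_prem', 'transform_in_cloud', 'load_to_warehouse']
```

In real Airflow, the same dependency graph would be declared with operators and the `>>` syntax, and the scheduler, rather than your script, decides when each task runs.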
What a fantastic show, talking to Dr. Stefan Sigg, Chief Product Officer for Software AG, and Dion Hinchcliffe of Constellation Research. As I think about all the topics we have discussed, there is one cool discipline that leverages all of them, and that’s this whole concept of a digital twin, which is what the metaverse boils down to.
I was trying to describe to a layperson that a digital twin environment is like a simulation. It’s a sim. Kids play Sims all the time. When you are playing your video game, it’s a simulation of the real world. You have different characters. With digital twins, what you want to do is basically have a representation of your key components, whether that’s your infrastructure, your technology, your people or your processes. The more of that you can bake into your digital twin environment, the better off you are, and then you can use that to do your scenario modeling.
“What happens if we stop production of this car and roll up production of this other car based upon the fact that we can’t get this part anymore from Taiwan because of things going on over there?” These are very real questions that business people have. Traditionally, you make your best guess and go with it but when you can do these complex Monte Carlo-type assessments, that’s very good.
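As a toy illustration of that kind of Monte Carlo assessment, here is a minimal sketch with made-up numbers: comparing the expected output of two production plans when a critical part's availability is uncertain. The volumes and probabilities are hypothetical, not real supply-chain data:

```python
# Minimal Monte Carlo sketch with illustrative (made-up) numbers:
# which production plan yields more output when part supply is uncertain?
import random

random.seed(42)  # fixed seed so runs are reproducible

def simulate_plan(units_per_week, part_availability, trials=10_000):
    """Average weekly units built, given a chance the critical part arrives."""
    built = 0
    for _ in range(trials):
        if random.random() < part_availability:
            built += units_per_week
    return built / trials

# Plan A: high-volume car, but only a 70% chance its chip ships.
# Plan B: lower volume, with a reliable 95% part supply.
plan_a = simulate_plan(1000, 0.70)
plan_b = simulate_plan(800, 0.95)
print(f"Plan A expected output: {plan_a:.0f}, Plan B: {plan_b:.0f}")
```

Even this crude model shows the point Eric is making: the lower-volume plan with a dependable supplier can beat the nominally bigger plan, and the simulation quantifies by how much instead of relying on a best guess.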
What was the great line by Eisenhower? “I have always found that plans are useless but planning is indispensable.” You learn from the process about your business and about what’s possible. Even if you don’t execute what you played around with in this environment, you learn something from that process, and that helps your business. What do you think, Stefan?
[00:41:03] Stefan: That’s perfectly put. In the digital twin world, you can do things that otherwise would be dangerous or expensive. You can try it out, and you can test it. The digital world nowadays gives you a very good feeling about what could happen and what will happen in the real project. It is fantastic that you can do that, whether it is an exchange of machines on the shop floor or making fixes on a complicated engine.
Think also about a digital twin of an enterprise or a business, which then is a digital business process, where you play around a little bit with changing a business process and do that in a data-driven way before you do the physical part of a business process transformation. That’s huge, and we are at the beginning of the interplay between the physical world and the image of the physical world in a business system. It’s a digital twin in a modeling environment or a process mining environment.
That is the interplay between IT and OT data, where the connectivity and the data integration of both are opening up a whole new world for a new breed of applications and solutions that could, in an ideal world, even lead to a new software-driven business model or idea, augmenting maybe a usual business model that is driven by a physical good that is produced.
[00:42:56] Eric: That’s very well put. Think about how useful a large digital twin environment could get in warfare. Not to bring up an unpleasant subject, but they have been doing that for many years. The cool thing I will throw over to you, Dion, is that now we have so much real-world data at scale. There are lots of different sources you can tap into to understand what’s happening and where people are going, whether it’s cameras counting cars on the highway or counting foot traffic at a shopping mall. The closer you get to now, the closer you get to being able to do interesting things. It’s a renaissance for that sort of thing. What do you think, Dion?
[00:43:39] Dion: We have reached a point where our organizations don’t have a lot of dark areas inside them. To be digitally dark means that we don’t have any data about that part of the organization in a digital format, or it is trapped in a silo we can’t get to. We can go out there. Our organizations are so quantified and connected that everything generates data, which means we can quickly construct digital twins, verify that they are correct, and then do what-if scenario predictions. As Stefan said, we can do it when it might be too dangerous or way too expensive to do in the real world. It was used to model and create digital twins of nuclear weapons because they were so destructive and expensive. It is much easier to do it inside a supercomputer.
Now that power has come down, virtually every single business can get some GPUs or go out to the cloud and get some compute power. They can run very accurate, high-fidelity digital twins that they can ask questions of about their business, to make decisions faster, get insights quicker, predict the future, or spot patterns that they can then look for in their actual data, so they can do predictive maintenance or watch for other failure modes that can happen.
There are all sorts of things you can do both tactically and strategically if you have digital twins. You can use a metaverse to visualize them, interact with them, and do a lot of those scenarios. You get this very high-resolution interface to your digital twin that allows you to exist with your business and manipulate it in a way you could never do before. We are at the beginning of all this, so it’s an exciting time.
[00:45:12] Eric: I’m going to tease our upcoming webinar with Stefan and Software AG, in which we are going to talk about achieving a truly connected enterprise. There’s a fun slide in there, Stefan, where you say, “What if the elevator waited for you instead of you waiting for the elevator?” You get right down to brass tacks. That’s not all that complicated.
We see patterns of when people come to work and when they leave work. If there’s only one person left in the office, and he is on the ninth floor walking down the hallway, he’s probably going to the elevator. You can get it there. The point is that the more we analyze this data, the more we process, consume, and understand what it all means, the better ideas we are going to get for how to optimize things.
A lot of times, it’s not an either/or. It could be both. It could be a multitude of different answers for different use cases. We are at the beginning. It’s because COVID was such a forcing function, forcing us all to work from home and to think through business processes. We are now on the front end of a big movement of redefining and redesigning our world. The prospects are very good. What do you think, Stefan?
[00:46:23] Stefan: That’s exactly what it is. We are listening and talking to customers and understanding the job to be done as much as we can with our technology. We then make sure that they hire us in the best possible way.
[00:46:39] Eric: I will throw it over to you, Dion. There’s a great quote I heard, in fact, from the guy from Core Financial. He said, “What happened doesn’t ever matter. What matters is how you respond to the environment.” That was very interesting and compelling. We talked about a streaming-first architecture and how much information technology has been designed with this database paradigm in mind of capturing and persisting the data and then using it to do something. Streaming data is a very different paradigm, where you are capturing it and, in some cases, using it before it ever gets persisted anywhere.
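A minimal sketch of that streaming-first idea: compute a running aggregate on events as they arrive and act on it in flight, before anything touches a database. The event values and the alert threshold here are hypothetical:

```python
# Streaming-first sketch: act on each event as it arrives, before
# anything is persisted. Readings and threshold are made-up values.

def rolling_average(events):
    """Yield the running average after each event, decision-ready in flight."""
    total, count = 0.0, 0
    for value in events:
        total += value
        count += 1
        yield total / count

sensor_readings = [20.0, 22.0, 21.0, 40.0]  # the last reading is an outlier
alerts = []
for avg in rolling_average(sensor_readings):
    if avg > 25.0:  # react immediately, no database round-trip required
        alerts.append(avg)
print(alerts)  # → [25.75]
```

Contrast this with the store-then-query paradigm Eric describes: there, the outlier would only surface after a batch load and a later report, while here the reaction happens inside the event stream itself.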
The opportunities are borderline endless for how to change things but the key is how do you set up your own skunk works in your organization? What’s some advice for companies or people in big companies who are like, “I know I have to change something. How do I get people to listen really quick?” What do you think?
[00:47:32] Dion: We call this the next best action. Your business systems can look at all the options that you have and, inside the decision windows of your competitors, tell you the right action to take, the best action based on all the available data and the models you have run against it. That’s what you are looking for. That is the nirvana. We have to figure out where to place it in our business.
We see the word ops being added to everything: AnalyticsOps, AIOps, and DevOps, because we want to action all of these and make these decisions. That’s where we are going to see most of this appearing, as we start operationalizing and actioning all these things. It’s an exciting time, as I said.
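At its core, "next best action" is a scoring problem: evaluate each candidate action against the current context and pick the highest-value one. This toy sketch uses invented action names and scoring rules purely for illustration:

```python
# Toy "next best action" sketch: score each candidate action against
# the current context and take the best one. All names and scoring
# rules here are hypothetical, for illustration only.

def next_best_action(context, actions):
    """Return the action whose score function rates the context highest."""
    return max(actions, key=lambda act: act["score"](context))

# Hypothetical customer context and candidate actions.
context = {"churn_risk": 0.8, "open_tickets": 2}
actions = [
    {"name": "send_discount", "score": lambda c: c["churn_risk"] * 10},
    {"name": "escalate_support", "score": lambda c: c["open_tickets"] * 3},
    {"name": "do_nothing", "score": lambda c: 1.0},
]

best = next_best_action(context, actions)
print(best["name"])  # → send_discount
```

In a production system the score functions would be trained models run against live data, and the loop would sit inside the operational pipeline, which is exactly the "Ops" framing Dion describes.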
[00:48:14] Eric: We did a show on the March of Ops, and it was all about that. It started with DevOps, when developers started working with operations people, and amazing things happened. One of the coolest things is that the so-called IT-business divide dissipated in one way. That divide has been around ever since there was IT, for some good reasons and some not-so-good reasons, but DevOps came along and solved that conundrum, and then you had DataOps, and now we have Marketing Ops.
What does Ops mean? It means the work. It means the op-like stuff that you do when you are doing things like writing emails or calling people on the telephone. How’s that for a crazy idea? Going to a conference and having an in-person discussion with someone. We have gotten to appreciate all these things again because we got through a very strange and difficult time there. The bottom line is that the opportunities are endless.
The tapestry is as wide as you want it to be. The fun part is going to be the new business models that spin out of this because new business models are fun. If you look around, that’s why things are changing. That’s where all the exciting stuff is happening. People are coming up with new ways of doing things. We have had Stefan Sigg and Dion Hinchcliffe online with us.
About Stefan Sigg
CPO/CTO senior executive in the software industry at management board level, with various advisory board assignments. 25+ years of experience leading large, global software engineering, product management, and cloud operations teams (US, Canada, Europe, India, China, South Korea). Strong track record in developing and bringing large-scale software products to market: SAP Business Warehouse, SAP HANA, SAP Analytics Cloud, webMethods.io, ARIS Process Mining, Cumulocity IoT.
About Dion Hinchcliffe
Dion Hinchcliffe is an internationally recognized thought leader, IT expert, enterprise architect, bestselling book author, frequent keynote speaker, analyst, and transformation consultant. Dion works with the leadership teams of Fortune 500 and Global 2000 firms to drive successful change with emerging digital methods including employee experience, online community, cloud computing, data centers, digital business models, Internet ecosystems, Internet of Things, workforce collaboration, and the future of work.