#120 An Argument for Global Data Networks

on Wed, Jan 25, 2023

with Darren W Pulsipher and Alan Evans

On this episode, Darren interviews Alan Evans, principal technologist at MacroMeta, about distributed data management and the impact of the global distribution of data in the cloud-to-edge ecosystem.


Keywords

#datamanagement #globaldatanetwork #macrometa #dataarchitecture #data #edge #compute #technology

Listen Here


Alan Evans is the principal technologist at Macrometa and focuses on bringing Global Data Networks to customers worldwide. As Darren finds out in this interview, his insight into data management and the complexities of data management in global organizations is invaluable. The focus of this interview is to understand the Laws of Edge computing and the characteristics of data that drive these new data architectures.

Laws of Edge Computing

To effectively deploy edge computing architectures, three laws must be considered: the law of physics, the law of economics, and the law of the land.

Law of Physics

The law of physics refers to the distance between edge devices and their corresponding on-premises and cloud data centers. Data cannot travel faster than the speed of light, so moving data from edge devices into the cloud or data center always introduces latency. Sometimes this latency does not affect the analytics and insight an organization's use case demands. However, there are use cases where real-time data analytics and understanding are critical.
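
To make the law of physics concrete, here is a minimal sketch that estimates the best-case round-trip time between an edge device and a data center from distance alone; the distances and the fiber slowdown factor are illustrative assumptions, not figures from the episode.

```python
# Rough, best-case latency estimate: data cannot travel faster than light,
# and in optical fiber it travels at roughly 2/3 the speed of light in vacuum.
SPEED_OF_LIGHT_KM_S = 299_792   # km per second, in vacuum
FIBER_FACTOR = 0.67             # typical slowdown in optical fiber (assumption)

def best_case_rtt_ms(distance_km: float) -> float:
    """Round-trip propagation delay in milliseconds, ignoring routing,
    queuing, and processing delays (real latency is always higher)."""
    one_way_s = distance_km / (SPEED_OF_LIGHT_KM_S * FIBER_FACTOR)
    return 2 * one_way_s * 1000

# Illustrative distances from an edge site to possible processing locations.
for label, km in [("on-prem edge node", 1), ("regional cloud", 1_500), ("remote cloud region", 9_000)]:
    print(f"{label:>20}: ~{best_case_rtt_ms(km):6.2f} ms round trip (physics floor)")
```

Even this physics floor of tens of milliseconds to a remote region can rule out some real-time use cases before any processing time is added.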

Law of Economics

The second law is the law of economics: not all network, storage, and compute devices are created equal. Better-performing devices typically cost more, and how much to spend is determined by the value of moving the data and of the insight gained from collective analytics. Some organizations are also finding that costs depend on the direction data moves in cloud technology. While ingress is typically free, egress (moving data out of the cloud or from one region to another) is costly. Understanding the economics behind edge computing is critical when developing distributed data architectures.
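
As a rough illustration of the law of economics, the sketch below compares the monthly transfer cost of backhauling all raw edge data against filtering at the edge and shipping only a summary; the egress price, device count, and reduction ratio are hypothetical placeholders rather than published rates.

```python
# Hypothetical comparison: backhaul everything vs. reduce at the edge.
# All prices and volumes below are illustrative assumptions.
EGRESS_PRICE_PER_GB = 0.09        # $/GB moved out of a region (placeholder)
RAW_GB_PER_DEVICE_PER_DAY = 5.0   # raw data generated at each edge device
DEVICES = 10_000
EDGE_REDUCTION_RATIO = 0.02       # only 2% of raw data survives edge filtering

raw_gb_month = RAW_GB_PER_DEVICE_PER_DAY * DEVICES * 30
backhaul_cost = raw_gb_month * EGRESS_PRICE_PER_GB
edge_cost = raw_gb_month * EDGE_REDUCTION_RATIO * EGRESS_PRICE_PER_GB

print(f"Backhaul all raw data : ${backhaul_cost:,.0f}/month")
print(f"Filter at the edge    : ${edge_cost:,.0f}/month")
print(f"Transfer savings      : ${backhaul_cost - edge_cost:,.0f}/month")
```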

Law of the Land

The last law to consider, the law of the land, is driven primarily by local, regional, and national governments that want to protect the privacy of their citizens, industries, or government operations. The regulations around data generated at the edge, and the governance around its accessibility, distribution, and storage, must be considered. Ignoring the law of the land can be costly through fines, re-architecture, and overly complex solutions.
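
One way the law of the land shows up in an architecture is as a residency check applied before any data movement. The sketch below shows the general shape of such a check; the data categories, regions, and rules are hypothetical examples, not a real regulatory mapping.

```python
# Minimal sketch of a data-residency check. The rules are hypothetical:
# real policies come from counsel and the regulations that apply to you.
RESIDENCY_RULES = {
    "health":    {"allowed_regions": {"eu-west"}, "must_anonymize": True},
    "financial": {"allowed_regions": {"us-east", "eu-west"}, "must_anonymize": False},
    "telemetry": {"allowed_regions": {"us-east", "eu-west", "ap-south"}, "must_anonymize": False},
}

def may_transfer(category: str, destination_region: str, anonymized: bool) -> bool:
    """Return True if moving this data to the destination respects the (hypothetical) rules."""
    rule = RESIDENCY_RULES.get(category)
    if rule is None:
        return False  # unknown data categories stay where they are
    if destination_region not in rule["allowed_regions"]:
        return False
    if rule["must_anonymize"] and not anonymized:
        return False
    return True

print(may_transfer("health", "us-east", anonymized=True))   # False: region not allowed
print(may_transfer("health", "eu-west", anonymized=True))   # True
```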

Data Characteristics

Understanding these laws is the first aspect to consider. However, understanding the characteristics of the data is just as important as understanding the operating rules it must adhere to when building business insight. These characteristics include data size, frequency, source location, type, privacy and access regulations, and spoilage.

Data Size

Traditional data warehouses require the data to be in the same location, meaning all data must be moved or copied to the data center or cloud location. The data is also normalized for the analysis being performed. Because the raw data can be used to solve multiple business problems, a copy of the information is required, and duplication multiplies when organizations create a separate data warehouse for each business problem they are trying to solve. This increases data size, driving up storage and administration costs.
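
A quick back-of-the-envelope calculation shows how copying raw data into several purpose-built warehouses multiplies storage; all figures below are illustrative assumptions.

```python
# Illustrative estimate of how duplication multiplies storage.
RAW_DATA_TB = 200             # raw data collected from the edge (assumption)
WAREHOUSE_COPIES = 4          # separate warehouses, one per business problem (assumption)
NORMALIZATION_OVERHEAD = 1.3  # normalized/indexed copies are ~30% larger (assumption)
PRICE_PER_TB_MONTH = 23.0     # placeholder storage price, $/TB-month

total_tb = RAW_DATA_TB + RAW_DATA_TB * WAREHOUSE_COPIES * NORMALIZATION_OVERHEAD
print(f"Stored volume with copies : {total_tb:,.0f} TB (vs {RAW_DATA_TB} TB of raw data)")
print(f"Monthly storage cost      : ${total_tb * PRICE_PER_TB_MONTH:,.0f}")
```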

Data Frequency

Because data is constantly being generated, it is critical to understand the generation rate. With an increasing number of data sources generating increasing volumes of data, it is essential to catalog each source's frequency and volume, because this impacts how the data is collected, stored, and processed.
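
A lightweight catalog that records each source's generation rate and event size is one way to keep frequency and volume visible when planning collection and processing; the sources and rates below are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class DataSource:
    """One entry in a (hypothetical) catalog of edge data sources."""
    name: str
    events_per_second: float
    avg_event_kb: float

    def gb_per_day(self) -> float:
        # events/s * KB/event * seconds/day, converted from KB to GB
        return self.events_per_second * self.avg_event_kb * 86_400 / 1_000_000

catalog = [
    DataSource("factory-camera-01", events_per_second=30, avg_event_kb=250),   # video frames
    DataSource("pos-terminals",     events_per_second=12, avg_event_kb=2),     # transactions
    DataSource("hvac-sensors",      events_per_second=500, avg_event_kb=0.5),  # telemetry
]

for src in sorted(catalog, key=lambda s: s.gb_per_day(), reverse=True):
    print(f"{src.name:>18}: ~{src.gb_per_day():8.1f} GB/day")
```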

Data Source Location

The location of data generation (machine, human, or software) is another critical driving factor for data analysis architectures. As the number of source locations increases, the architecture becomes more complex. Additionally, the location and connectivity of each data source, combined with the volume and frequency of data generation, drive architectural decisions.

Type of Data

Data sources generate databases, video, audio, emails, texts, and reports. This data can be grouped into three categories: structured, unstructured, and semi-structured. These characterizations aid in processing the data and influence the type of data architecture used. Additional groupings within these categories can increase reusability, understanding, and, ultimately, insight into the data. Developing a data taxonomy is critical to building a robust data architecture that generates real business value.
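
A data taxonomy can start as simply as tagging each source type with one of the three categories so that downstream storage and processing choices follow from the tag; the mapping below is a hypothetical example, not a prescribed taxonomy.

```python
from enum import Enum

class DataCategory(Enum):
    STRUCTURED = "structured"            # rows and columns, e.g. database tables
    SEMI_STRUCTURED = "semi-structured"  # self-describing, e.g. JSON, XML, logs
    UNSTRUCTURED = "unstructured"        # video, audio, images, free text

# Hypothetical mapping of source types to taxonomy categories.
TAXONOMY = {
    "database table": DataCategory.STRUCTURED,
    "report":         DataCategory.SEMI_STRUCTURED,
    "email":          DataCategory.UNSTRUCTURED,
    "text message":   DataCategory.UNSTRUCTURED,
    "video":          DataCategory.UNSTRUCTURED,
    "audio":          DataCategory.UNSTRUCTURED,
}

for source, category in TAXONOMY.items():
    print(f"{source:>15} -> {category.value}")
```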

Privacy and Access

Governments and industries are increasingly regulating data to protect the privacy of their citizens, patients, and customers. Adhering to these regulations requires understanding who has access to the data, what can be done with it, and how long it must be stored. Several regulations focus on the location of data, healthcare patient data, financial data, and payment information. The National Institute of Standards and Technology (NIST) documents key privacy and access controls used to comply with these regulations.
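
In practice, privacy and access regulations translate into checks on who may read a data class and how long it is retained. The sketch below shows the shape of such a check with hypothetical roles and retention periods; catalogs such as NIST SP 800-53 describe the kinds of controls these checks implement.

```python
from datetime import datetime, timedelta, timezone

# Hypothetical access policy: which roles may read a data class and how long it is retained.
POLICY = {
    "patient_record":  {"roles": {"clinician"},           "retention_days": 365 * 7},
    "payment_record":  {"roles": {"billing", "auditor"},  "retention_days": 365 * 5},
    "usage_telemetry": {"roles": {"analyst", "engineer"}, "retention_days": 90},
}

def can_read(data_class: str, role: str, created_at: datetime) -> bool:
    """Allow access only to permitted roles and only while the record is within retention."""
    rule = POLICY.get(data_class)
    if rule is None or role not in rule["roles"]:
        return False
    age = datetime.now(timezone.utc) - created_at
    return age <= timedelta(days=rule["retention_days"])

created = datetime.now(timezone.utc) - timedelta(days=120)
print(can_read("usage_telemetry", "analyst", created))   # False: past 90-day retention
print(can_read("patient_record", "clinician", created))  # True
```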

Data Spoilage

Three factors increase data's time to value and slow down business decisions: the time to identify the data sources, the time to collect the data, and the time to normalize and clean it. In the past, all of an organization's data resided in the data center. Adopting IoT, cloud, and remote work technologies has scattered data across many locations, including workers' homes, the cloud, and the edge. Gathering that data into one place for analysis takes time and increases the time to produce value for an organization.
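
Data spoilage can be pictured as value that decays with the age of the data. The sketch below models it with a simple exponential half-life, which is an illustrative assumption rather than a measured curve.

```python
def data_value(initial_value: float, age_hours: float, half_life_hours: float) -> float:
    """Exponential decay: after each half-life the remaining actionable value halves."""
    return initial_value * 0.5 ** (age_hours / half_life_hours)

# Illustrative: an insight worth 100 units when fresh, with a 6-hour half-life.
for age in [0, 1, 6, 24, 72]:
    print(f"age {age:>3} h -> remaining value ~{data_value(100, age, 6):6.1f}")
```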

Call to action

Now that you are armed with the laws of edge computing and the characteristics of data, explore different data architectures that meet your needs and help you build genuine business and mission insight from your data. For more information and a white paper, check out the links on our blog at embracingdigital.org.

Podcast Transcript

Hello, this is Darren Pulsipher, chief solution architect of public sector at Intel.

And welcome to Embracing Digital Transformation, where we investigate effective change, leveraging people, process, and technology.

On today's episode, an argument for global data networks with special guest Alan Evans, principal technologist at MacroMeta.

Alan, welcome to the show.

Cheers, mate.

Nice to be here.

Hey, Alan, we've been talking for, what, months?

Has it been that long?

It could be longer.

Yeah, it could be longer.

Feels like. Could be longer.

Yeah. Yeah.

Oh, thanks.

Yeah.

I have been told I'm long winded, but not that long winded.

I think it's great.

We've. We've been, actually, now about six months while you and I wrote a white paper together specifically on this.

And it was actually pretty cool because we met every Wednesday, my morning, your afternoon, for a good portion of that six months and talked about things.

It was a little bit crazy, was a great time, right?

Very, very early.

It was very.

Convenient. Yeah.

Yes. Next.

Next one, we'll have to reverse.

Why don't I get up?

Yeah. There you go.

You get up at three in the morning.

Yeah, exactly.

Thank you.

All right.

Hey, let. Let's dive right, right into this.

What?

What do you feel is the most important aspect as we move forward into 2023 and beyond?

We've got data spread all over the place.

What do you feel is kind of the key aspect of managing this data all over the place?

I think first, that's a great question, by the way, but I think the first point when I think about it, it's, you know,

I hate to use the term paradigm shift.

People always use it, it's a paradigm shift.

But the big shift for me,

I think when I think about enterprise architectures and applications is, you know, at some point you always get to the data, right?

There's a data problem or a data use case behind it all.

That's what drives these applications.

And I think up until very recently, you know, what we call legacy approaches to data processing and data management have been largely okay.

And when I think about that,

I think I'm looking at, you know, big data batch based processing, you know, generating insights and then, you know, data scientists looking at, you know, that and querying it and, and exploring it and then, you know, trying to produce some sort of actionable insight that then they will, you know, they will then use and, and feed into the rest of whatever business that they are running.

Right.

I think with the advent of, you know, modern high performance networks, connected things, you know, so Internet of Things, you know, we're seeing an exponential increase in data.

And the challenge with that is, you know, you no longer really can afford to have, or the only way to look at insight, having a human in the loop, it can't be your only solution for these modern applications and solutions.

You can't have a human there waiting for data to be pulled from wherever it's being generated, or have a copy put into a big data lake and then, you know, start to turn the handle on it and try to generate some insights.

And then, and then some time in the, you know, later on, you know, try to do something about it.

And I think that the big shift here is that from the legacy big data, slow data approach to how do I deal with, you know, billions of connected devices, you know, where the value of data is fleeting and I need to be able to action that data in a very short order.

I like how you put a temporal aspect to data and its value.

I mean, we talked about this several times and we'll talk about it more today.

That data, as it ages, does not become more valuable.

It becomes less valuable for actionable insight, which I think is fascinating.

A perception, as you said, it's kind of today when we look at data science and insight, most of it is this big, methodical,

I got to do this, I got to do that.

It's just slow, right? Mm hmm.

And I will get insight on what happened weeks or months or even years later.

And there's no sense of urgency.

But I think there is now, there's a sense of urgency.

But we haven't quite caught up with it.

Would you agree?

Yeah. Yeah, totally.

I mean, I think we've, you know, there's various, there's, there's no canonical view of this, in my opinion.

There's lots of contributing views that I think if you expose yourself to the, to the, to the trends and initiatives that are going on across multiple industries, you know, you start to see a lot of synergies between them.

Right.

And I think the one that stands out to me is, you know, when we first started talking about, you know, artificial intelligence, for example, it was, well, you know, we collect all this data.

It's got to be good data, so we have to clean it.

We then want to use it to train our models and, you know, that's great.

We've now got a model that is trained.

We can now ask questions and get an insight out of it effectively, you know, removing the human from the loop, albeit after, you know, the human's role is now no longer providing the insight.

They're now training a model to provide the insight.

So it's not, you know, not going to get into that kind of thing. But you know, even once you've trained the model, you know, so to your point of data value,

I think, you know, yeah, real time data has immense value, historical value, historical data has value.

When you put it in the context of training a model so that it can act on new real time, in the moment data.

Right.

And the challenge there is, is that, you know, you build these models and then they're sitting out in a central location somewhere, when the data that you want it to act upon is typically being generated, you know, in the human world.

Right, right at the edge, you know, and.

Right, right on the edge, right?

Yeah.

So right on the edge,

The real edge where we all are, you know, where our devices are.

And you know that, that, you know, having a 300 millisecond or longer round trip time to my AI inference to get insight kind of defeats the object of building the model in the first place, because I'm looking for real time actuation and insight into what is going on in the moment.

I can't afford to have that pushed out centrally, right?

I need it close to where the data's being generated.

Well, that brings up something you came up with, I thought was brilliant in our paper, which was the three laws of edge computing.

You called it the Laws of Physics, the Law of, the Law of Economics.

And the third one, what was it?

Oh, the Law of the Land.

Of law, of the land. Fascinating insight.

Yeah.

So explain the, explain those three laws real quick.

What? Why?

Why would you put laws on edge edges?

The wild West?

We can do whatever we feel like, right?

Well, we should definitely, we should definitely go in with that mindset.

It's not about putting laws on it.

It is about thinking.

You know, it's about reality, right?

Yeah, it's it's it's about sort of

So when I think about the laws, that, the more that I think, the laws is probably a bit marketing sort of term, but more guidelines as to how I.

Like it. About what kind of characteristics, requirements does your application have and how, how do you kind of classify them.

Right?

And the first way to think about it is you, you know, the laws of physics.

So you can think of an edge application in the context of the laws of physics, usually from like the speed of light and the, you know, the connectivity between endpoints.

So, you know, an edge application by definition is one that, you know, reduces the distance between endpoints.

Okay.

So having a low latency kind of connection, you know, in back to the sense we, you know, we have a sub 50 P90 round trip zone, okay, you know, much lower in other places and, you know, but then you've got the concept of, you know, propagation delay as well.

So when I think about the laws of physics,

I'm thinking about it in a couple of different dimensions as well.

So not just the distance between endpoints, but also how long does it take to process data and what technologies should I use to actually handle the processing of data.

So, you know, I've worked with enterprises in the past who, you know, have attempted to build an edge solution and they've used, you know, some of these legacy big data kind of approaches and, you know, the project at the edge and completely negated the, the benefit from the laws of physics perspective.

So there's, yes, you've got a low round trip time, but the tech you put in place is still processing data like, you know, you know, ten years ago.

So therefore there's no benefit of doing it, right.

So you have to, when you think of the laws of physics, it's, as I say, it's a combination of, you know, round trip time latency but also the propagation delay of the data through the system itself.

So glad you brought that up, because in some, in some cases, right, we have edge devices that are so far out at the edge that the connectivity and those, the laws of physics, like you said, just make it that latency is, is there and you've got to deal with it.

You can't just say, well, I'm just going to ignore the speed of light.

I'm going to ignore that my device is on a satellite that's, you know, 1500 miles away or even further, where on the other side of the world.

And I'm trying to connect these to endpoints that are, you know,

You're going to have some latency with that.

Well, yeah, you brought that point up.

Well, yeah.

And to, I think to that point as well, you know, when you talk about the laws of physics, it's like you think about, put yourself in the perspective of the, you know, the enterprise in their application and the user experience they want to have.

So I can't think of an application where, you know, an enterprise where they don't want to have a good experience for their, for their, you know, their clients or their consumers or their devices or whatever.

Right?

The challenge is, you know, and you can solve that quite simply by reducing, you know, the round trip time to the, of the endpoints, for sure.

But it's like, yes, we mentioned the propagation delay.

I'm not going to go into that again.

But then, then the next challenge is how do you handle that on a global scale?

How do you provide a consistent experience for whatever your endpoints are, your devices, your humans, whatever, on a global scale, you know, rather than what you get today with a centralized approach where, you know, somebody in New York is getting a great experience and somebody, you know, on the other side of the planet is getting a really poor experience.

It's like, well, how do I enable those devices, those users, to interact with the application, with high performance, given the laws of physics, at a global scale, okay.

And the edge needs to take those things into account.

Well, and this brings up the next point, because most people would say, well, just have more substations or endpoints connected in and just make short hops between things.

But then you have the law of economics.

Right now, I can't deploy a million devices out there.

It's too expensive.

So explain a little bit on your law of economics.

We've got to make this reasonable, right?

You can't just. Yeah, yeah.

Well, the law of economics, you know, when I, when I thought about this, you know, it's as much about what you said, but also includes really the value of that data itself.

You know, and this comes back to the, the value of data is fleeting.

Okay.

So it's, say, as we go forward and, you know, with, you know, into 2023 and beyond, you know, the amount of data that's being generated from the edges, it's just growing exponentially.

I forget the exact numbers, we can probably look that up, but it's zettabytes, I think, of data that's predicted to be generated from the edge.

And it's like, how are you going to process that?

You know, at the top of our chat, we talked about, you know, the old sort of architectural process of, of backhauling all that data to a central location and then, and a human trawling over it.

It's like, is that really going to be cost effective?

I mean, from a, from just from a data processing perspective, is that the most cost effective way to do it?

There's a lot of noise in the signal as well when you do that. Is all the data you're backhauling, is that, is that, you know, valuable to you, or is it specific insights now?

Yeah.

Is it the insights that have value or is it the data that has value?

Did you really want to pull all that data or just a subset of it, or did you not want to pull a subset, that you actually want to process it and get to the point where you're converting raw data into valuable insight, you know, as it's being generated so you can actuate it.

So the law of economics is, it's not just about moving data from one location to, there, to another.

It's it's about monetizing data, right?

It's about whether that's, that's saving, you know, money, whether or not it's generating new revenue.

Any of those things.

It's like the economics around the data itself. And if you process the data at the edge as opposed to at a central location, does this, all this, does this open up new economic models or business models for you as a business that, that will, that will mean the difference between going out of business or being successful, having happy customers or unhappy customers.

Right. Okay.

Let's, let's talk about the third law, right?

Because we've got physics, economics, and now the law of the land idea. Are you applying the privacy laws here?

Is that what you're talking about here?

Totally.

So the law of the land is is,is is a fun one, isn't it?

I mean, one of the, you know, we're also looking at a model here where, you know, a lot of enterprises have been, you know, yeah, sure.

A lot of enterprises have done the whole lift and shift.

And they've put their applications in a central location, but there's an awful lot of enterprises out there that can't do that.

And you know, they have a heavy investment in on prem data centers and, you know, these can be for a number of different reasons, right?

It could be the privacy aspect of it.

It could.

And those, those could be, depending on the industry that you're talking about, that could be a legislative or a regulative requirement for them to do data processing.

So it's financial data.

Maybe it's close to a stock exchange, you know, maybe it's a manufacturing company.

Maybe they have policies in place that they've imposed on themselves.

This is our policy of how we want our data to be managed.

But they're going through a transformation where they really want to take advantage of, you know, the, the kind of modern sort of OpEx driven business model of cloud computing whilst maintaining the performance you get from having an on prem data center and the security that comes with that.

And the edge really, when we talk about the law of the land, is really considering those things.

So whether or not these are, you know, health care data that needs to be handled very securely, perhaps captured at the edge, processed at the edge, but anonymized in some way, or it could be, as I say, could be, you know, financial data or, or, you know, government data that needs to be kept in a secure location and processed, even down to the zip code or postcode level.

I love, I love these three things kind of merge together because to me, we call them laws, but really they're, what's the right word.

They're not even impediments.

They're just the operating environment that you're in.

I mean, particularly

I can't get away from it.

Guiding principles of how to think, what, what, what do we mean when we talk about edge, right.

From a, from a first principles perspective, you know, why are we doing this?

What's the what are the driving factors?

And, you know, as I say,

I think these three laws, they're not really laws, but, you know, they, they do tend to guide your thinking when considering the why and how and what that's going to.

So, so if we tie those now to what we call data characteristics.

Right.

We've got data has characteristics in this environment that has these three laws.

And we talked a lot about this.

A lot of people, when they think about data characteristics, the common things come up, data size, frequency, how frequent am I generating data, where is the data located, where it's being generated,

And then you get into types of data.

Is it like video data?

Is it audio, is it text, is it, you know, encrypted, all those sorts of things, and you have the privacy access.

But the one thing I want us to delve into even more, and we've been hinting on it, is data spoilage.

This was a new one for me.

You guys introduced this concept to me that data spoils over time like the rotten bananas in my, in my kitchen.

Right.

You buy them green?

No one will see them.

And then, you know, later on they're brown and no one wants to eat them again.

There's that sweet spot right where? Yeah, yeah.

You can actually do something with those bananas.

Yeah, they are totally. Yeah.

I think a lot about the, that the perishable spoilage of data, you know, and I think it's, you can, again, there's, and there's a number of ways to think about it. There's the, one of the use cases, or, you know, it's not even a hypothetical use case, but certainly, you know, a use case.

I think of, you know, one of my, years ago, I used to work in the broadcast and TV media industry, you know, not, not, not on the producing, but on the, you know, the architecture and the business development, coming at it as a vendor.

And we did a lot of targeted advertising, you know. So, and I remember the shift between, you know, just carte blanche, you know, blanket advertising where you just, everyone gets the same.

Add to that, once the connectivity was introduced and you could start to understand a little bit more about who was viewing, you could actually put the viewer or the user into categories of advertising.

So campaigns, so you'd have various campaigns that would have a profile and you would, you know, you'd match them up.

And when the ads came on, you'd be able to target them with a specific advert, right?

And when I think about that in the context of, you know, data, you know, it's, I think there's a, there's a lot of parallels to it.

So, so imagine you were, maybe you, you know, you've got your phone on you.

It's connected.

You've subscribed to an app that basically gives you offers around, you know, wherever you are in the world.

Now, say it's, say it's based in the inner city and you're, you know, you're walking along the street and you're, the app.

You subscribe to it.

So it's not, you know, it's not imposing on your, you know, in your time.

And if you want it to notify you of stuff.

Right.

So say you're walking down the street and, you know, you know, there's, there's definitely, there's a lot of things fighting for your attention of what happens.

And this, there's a retail store just ahead of you that, you know, has a number of offers on.

They would really like you to walk into the store.

How, when is the best time to ping me on the app?

Notify me on the app of, you know, when I should pop in the shop?

Is it when I'm on the walk, when I'm walking to, when I'm right outside, or when I'm already down the street thinking about what's on?

Right.

Obviously, you know, it's about as I'm approaching the store, you can get away with doing it if I'm stood outside.

But in an ideal world, you want to be, you know, letting me know that, hey, you know, we notice you're about, you know, 50 yards away from our shop.

We've got a great offer.

If you come in today, we'll sweeten it even further or something like that.

But if you try to use that data, you know, this guy was walking past my shop the other day.

It's like you guys, you know.

And that's I think that is.

The epitome of the perishable data.

It's like you had this, you know, Alan was walking past, you know, whatever shop the other day and they're like, well, that's great enough.

Maybe you will get him next time, you know.

But by then, you know, the shopping experience, that, the opportunity to bring me into the store has gone, right.

I mean, lots of people probably would answer that question with the, the old automotive use cases, you know, about, you know, braking, deadlock, between, you know, in potential collision environments.

And of course, those, those are of super high risk.

And I think that's another part of edge computing that, and not the building of applications around real time data, that, you know, there's, there's the fun stuff like I just talked about, like, you know, a shopping experience, you know, the ability to do advertising, which is really important in a retail environment, but it's not life critical, right?

It's not like.

It's not, it's not critical infrastructure, right?

Yeah, Yeah.

But you can, but the same rules apply, even though there's a potential for accidents avoidance.

Like if you were in a, in a, in a, in the Air Force, the Royal Air Force, you know, you would have, you know, you have two different types of air traffic control that you have, you know. As well, you have air traffic control, which is just dealing with people flying around in a non-combat environment.

And then you have fighter control, which is dealing with, you know, we really need to be keeping track of everything that's going on right now.

We need to be able to, and whilst the pilots are going to make their own decisions in a deadlock situation, we need to be able to answer and react and provide them definitive, life saving, you know, information from data in real time.

Yeah.

And it's like, so you've got those two ends of the spectrum, you've got the fun and you've got the, the extreme like that.

Well, also it's interesting.

Let's go to the, the fighter control, for example.

Yeah, I need that real time information, but I also need to do,

I hate to use the word post, postmortem, but after the fact I want to then go and say, what could we have done better that goes into that training, right?

Whether I'm training an AI model or I'm training people, there's still some additional value that comes out of data, which is wonderful.

Unlike bananas, right?

Once they're spoiled, the only thing I can really do is banana nut bread, right?

Or banana bread.

And that's all I get out of it. Right?

Right.

So there's, there's still, even though there's data spoilage, there's still some intrinsic value out of data, even, even as it spoils.

Yeah, I'm laughing because we're big fans in this family of banana bread.

It's it's like we. Go.

We let our bananas spoil because we know we're getting banana bread.

You're getting banana bread. It's, that's also,

All right, so we've talked about the, yeah, we talked about all these characteristics.

We've talked about the operating environment, the laws.

Let's talk briefly about the architectures that help us overtake it, take advantage of the environment that we're in, because I can't just say one architecture is going to solve all my problems.

We already know that.

But there are some distinct architectural approaches to solving these, these problems that we're talking about, especially with edge computing and data, with all these different characteristics.

Yeah, yeah, totally.

And I think that this is, you know, this is what I've seen over the, over the years of working in the, the edge space.

And I think it does relate to, to some, you know, obviously the laws as well.

It would have to, wouldn't it.

But the, you know, you could compute pretty much anywhere you want and, you know, within reason, and you know, if you are, part of the value may be derived from unique points of presence itself, you know, that you can take advantage of.

Otherwise it's, you know, it's multi, logical and neutral.

Right?

So, you know, anyone can put some hardware in there and that's a, that's a great model.

The next step up.

So you're talking about infrastructure as a service at this point, and that gets you so far.

But at some point you're going to want to build an application to run on that compute.

And you know, if you're just running it in a single location, you know, then maybe the, the proximity in the location is, is sufficient for you to be able to almost, if I can use the term, lift and shift from wherever it is to that edge.

But as soon as you want to get what I said earlier about the, you know, that consistency of experience, whether it's human or a device or whatever, on a distributed stage.

Right.

And a distributed system, you know, that becomes more challenging. And it's not so easy just to pick off the shelf a few components and just say, that's my tech stack and it's going to work globally, because these, this, these are, this is complicated stuff, to build a distributed system.

It's like, how do you handle the consistency and reconciliation of data, you know, in a distributed system whilst letting you put APIs on it?

How do you deal with, you know, data at rest and data in flight?

We're talking about real time data here, right?

So it's not predominantly, you know, you can be amiss in thinking that we're just focusing on data that's being generated in the moment.

But to your point a few moments ago, you know, the banana bread, right?

The part of the, say you're doing, you know, a little process, part of that process could be complex joins and then data enrichment as you, as you're processing, the extracting, transforming and lifting the data.

Right.

So you need to have that.

You start to get into this very complex environment where you realize that your application needs a, well, almost a smorgasbord of technologies to actually realize the kind of things you want to do.

And all of a sudden you're getting a lot of complexity.

And, and I guess, you know, as is often said, you know, computer science is always about, you know, abstracting complexity.

Right? And and that's what we've done.

You know, we have taken, we're a very opinionated platform, and I say opinionated because we know through our experience the kind of technologies that, you know, you need to have pre integrated and, and customized to be able to build these kind of high performance, real time applications that can take advantage of, you know, data that has been collected and is available for recall, but then also allow you to combine it with, with real time data to provide real time actionable insights on a global scale.

And I think that's, to your question, that, that, that is the real challenge of, of, of edge computing.

It's like, how do I go from a desire for a performance improvement for my application, whether or not it's the law of physics, whether it's an economic or whether it's a law of the land, to then actually taking into account, how does this work on a global scale?

Okay.

And that's the real challenges, right?

And it, yeah, and I, and I love that, that approach, because that also says, because I'm taking into consideration the three laws, it says that I'm not necessarily processing all the data on the edge.

I'm processing it in the ecosystem,which gives me flexibility.

Right.

And I need that flexibility because, as we mentioned, sometimes I need that real time insight and sometimes I need the data to spoil to produce great banana bread. Yes.

Is that, where I'm, when I'm combining stuff from other things and it takes more time.

So I can't just say no, everything's out on the edge or everything's centralized.

It's got to be, I have to be able to support multiple modes, and that's what I really love about your guys's approach to this.

Yeah, I mean, this, this is, this is the thing.

I mean, I'm sorry,

I was going to say something else, but the distributed systems are not just, it would be wrong to assume that distributed system simply means, you know, I'm replicating the exact same parts of my application across every point of presence that I have available to me.

You know, back when I was at university, the, you know, we, I, on my placement.

Yeah,

I used to work with transputers, if you remember those, and, you know, we'd have one. So yeah.

One transputer that did one thing.

I don't remember the model numbers.

It's too long ago. We had one transputer that had, one was good at one thing, and another transputer that was good at another thing.

And the trick was to do parallel processing, you know, written in Occam, across these, these transputers, and use them for, for the, you know, to the, to their, you know, for the appropriate value that they bring.

Right.

And you can, you can draw a parallel, no pun intended, back to, you know, distributed systems in cloud computing.

Right.

So you could have, you've got the edge that's perfect for doing real time fast data processing to generate insights and all kinds of other cool things at the edge.

And then you can use some of that centralized approach for, if you like, your machine learning.

So you have that.

The idea of a reinforcement learning is a great example.

So federated or distributed reinforcement learning, where you are running your inference at the edge, but your, your model training and update is, is handled centrally, and then you're doing a, an update of the edge inferences dynamically.

So you're combining the best of both, you know, historical data and real time data.

And not only that, the insights, right?

So you, it's not just, yes, I got my real time data, my historical data, but you want to see, what did you do with that data, those insights, and then how do you feed all that back and then retrain and improve the model itself?

So you're talking about building distributed systems, not just moving stuff from central locations to the edge.

You're talking about building high performance applications that, that, that, that use the capabilities that are in the industry today for being the best possible way for your application.

So this is great. Alan.

I think we

I think we've kind of shown everyone, hey, this is the problem space that we're in.

You guys have,

I think, a unique and fascinating architectural approach to this.

By, by handling all the complexity that's in a global data mesh, handling function as a service.

On top of that, in the data governance part of that, we should spend a whole nother podcast just going more in depth into your guys's architecture.

But we don't have time today because we're already out of time. You should, you can read the white paper, though.

Yeah, read the white paper.

That's a good primer.

And you know, we can always redirect people to the website and, you know, they can, they can read more about it there.

So we got lots of Yeah.

In fact, check out, check out our website, you can find on embracing digital dot org.

You can find a link to the white paper.

Check out MacroMeta.com.

MacroMeta.com Yeah.

All right. And that from MacroMeta.com.

And I'm sure you guys have a link up there to this wonderful white paper that Alan and I wrote.

So, Alan, it's been a pleasure having you on the show today.

Yeah, thanks very much.

My pleasure as well.

Thank you.

Thank you for listening to Embracing Digital Transformation today.

If you enjoyed our podcast, give it five stars on your favorite podcasting site or YouTube channel. You can find out more information about Embracing Digital Transformation at embracingdigital.org.

Until next time, go out and do something wonderful.