#157 Operationalizing GenAI


September 6, 2023

with Jeffrey Lancaster and Darren W. Pulsipher

In this podcast episode, host Darren Pulsipher, Chief Solution Architect of Public Sector at Intel, discusses the operationalization of generative AI with returning guest Dr. Jeffrey Lancaster. They explore the different sharing models of generative AI, including public, private, and community models. The podcast covers topics such as open-source models, infrastructure management, and considerations for deploying and maintaining AI systems. It also delves into the importance of creativity, personalization, and getting started with AI models.


#ai #generativeai #infrastructuremanagement #aisystems #aimodels #operationalization #datainput #modeltraining #finetuning #digitaltransformation #opensourcemodels #privateclouds #edgecomputing #aitools #creativeoutput #responsibleusage #reinforcementlearning #monitoring #optimization #sandboxenvironment #cloudbasedinfrastructure #onpremisesinfrastructure #hybridinfrastructure #customerservice #brainstormingapplications #embracingdigital

Listen Here

Exploring Different Sharing Models of Generative AI

The podcast highlights the range of sharing models for generative AI. At one end of the spectrum, there are open models where anyone can interact with and contribute to the model’s training. These models employ reinforcement learning, allowing users to input data and receive relevant responses. Conversely, some private models are more locked down and limited in accessibility. These models are suitable for corporate scenarios where control and constraint are crucial.

However, there is a blended approach that combines the linguistic foundation of open models with additional constraints and customization. This approach allows organizations to benefit from pre-trained models while adding their layer of control and tailoring. By adjusting the weights and words used in the model, organizations can customize the responses to meet their specific needs without starting from scratch.
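To make the "adjusting the weights and words" idea concrete, here is a minimal, purely illustrative sketch: it takes a hypothetical pre-trained model's raw next-word scores, removes banned words, boosts preferred ones, and converts the result into probabilities. The function name, word lists, and scores are invented for illustration; real systems would achieve this through fine-tuning or API-level logit-bias controls rather than a hand-rolled layer like this.

```python
import math

def apply_word_controls(scores, boost=None, ban=None, bias=2.0):
    """Toy control layer over a model's raw next-word scores:
    banned words are dropped, preferred words get a score bonus,
    then the adjusted scores are softmaxed into probabilities."""
    boost = boost or set()
    ban = ban or set()
    adjusted = {w: s + (bias if w in boost else 0.0)
                for w, s in scores.items() if w not in ban}
    total = sum(math.exp(s) for s in adjusted.values())
    return {w: math.exp(s) / total for w, s in adjusted.items()}

# Hypothetical raw scores from a pre-trained model for the next word
raw = {"our-product": 1.0, "competitor": 1.2, "upgrade": 0.8}
probs = apply_word_controls(raw, boost={"our-product"}, ban={"competitor"})
```

The point of the sketch is that the expensive pre-training is untouched; only a thin layer of constraints sits on top, which is why the blended approach avoids retraining from scratch.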

Operationalizing Gen AI in Infrastructure Management

The podcast delves into the operationalization of generative AI in infrastructure management. It highlights the advantages of using open-source models to develop specialized systems that efficiently manage private clouds. For example, one of the mentioned partners implemented generative AI to monitor and optimize their infrastructure’s performance in real time, enabling proactive troubleshooting. By leveraging the power of AI, organizations can enhance their operational efficiency and ensure the smooth functioning of their infrastructure.

The hosts emphasize the importance of considering the type and quality of data input into the model and the desired output. It is not always necessary to train a model with billions of indicators; a smaller dataset tailored to specific needs can be more effective. By understanding the nuances of the data and the particular goals of the system, organizations can optimize the training process and improve the overall performance of the AI model.

Managing and Fine-Tuning AI Systems

Managing AI systems requires thoughtful decision-making and ongoing monitoring. The hosts discuss the importance of selecting the proper infrastructure, whether cloud-based, on-premises, or hybrid. Additionally, edge computing is gaining popularity, allowing AI models to run directly on devices and reducing data roundtrips.

The podcast emphasizes the need for expertise in setting up and maintaining AI systems. Skilled talent is required to architect and fine-tune AI models to achieve desired outcomes. Depending on the use case, specific functionalities may be necessary, such as empathy in customer service or creativity in brainstorming applications. It is crucial to have a proficient team that understands the intricacies of AI systems and can ensure their optimal functioning.

Furthermore, AI models need constant monitoring and adjustment. Models can exhibit undesirable behavior, and it is essential to intervene when necessary to ensure appropriate outcomes. The podcast differentiates between reinforcement issues, where user feedback can steer the model in potentially harmful directions, and hallucination, which can intentionally be applied for creative purposes.
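The idea of hallucination as a tunable feature maps loosely onto sampling temperature: a single knob controlling how adventurous a model's word choices are. The sketch below is a standard temperature-scaled softmax, not any specific product's implementation; the scores are invented. Low temperature concentrates probability on the safest choice (customer service), while high temperature spreads probability across creative alternatives (brainstorming).

```python
import math

def softmax_with_temperature(scores, temperature=1.0):
    """Temperature-scaled softmax: low temperature concentrates
    probability on the top-scoring option (conservative output);
    high temperature flattens the distribution (more creative)."""
    scaled = [s / temperature for s in scores]
    m = max(scaled)                      # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

scores = [2.0, 1.0, 0.5]                 # toy scores for three candidate words
conservative = softmax_with_temperature(scores, temperature=0.2)
creative = softmax_with_temperature(scores, temperature=2.0)
```

With the toy scores above, the low-temperature distribution puts nearly all probability on the top candidate, while the high-temperature one leaves real probability on the alternatives, which is the "give me crazy ideas" mode described in the episode.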

Getting Started with AI Models

The podcast offers practical advice for getting started with AI models. The hosts suggest playing around with available tools and becoming familiar with their capabilities. Signing up for accounts and exploring how the tools can be used is a great way to gain hands-on experience. They also recommend creating a sandbox environment within companies, allowing employees to test and interact with AI models before implementing them into production.

The podcast highlights the importance of giving AI models enough creativity while maintaining control and setting boundaries. Organizations can strike a balance between creative output and responsible usage by defining guardrails and making decisions about what the model should or shouldn’t learn from interactions.
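One way to picture such guardrails is a filter that decides which user interactions are allowed to feed back into the model's training, as discussed in the episode (keeping curse words and off-limits topics from infiltrating the model). The topic and word lists below are placeholders invented for illustration; a production system would rely on much more robust content classifiers.

```python
# Placeholder lists, invented for illustration only
BLOCKED_TOPICS = {"politics", "religion"}
BLOCKED_WORDS = {"darn"}

def should_learn_from(prompt: str) -> bool:
    """Guardrail sketch: decide whether a user interaction may be
    fed back into the model's training. Real systems would use
    trained classifiers rather than simple word matching."""
    words = set(prompt.lower().split())
    return not (words & BLOCKED_TOPICS or words & BLOCKED_WORDS)
```

A benign support question would pass this filter and be eligible for reinforcement, while a prompt touching a blocked topic would be logged but excluded from training.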

In conclusion, the podcast episode provides valuable insights into the operationalization of generative AI, infrastructure management, and considerations for managing and fine-tuning AI systems. It also offers practical tips for getting started with AI models in personal and professional settings. By understanding the different sharing models, infrastructure needs, and the importance of creativity and boundaries, organizations can leverage the power of AI to support digital transformation.

Podcast Transcript


Hello, this is Darren

Pulsipher, Chief Solution Architect of Public Sector at Intel.

And welcome to Embracing

Digital Transformation, where we investigate effective change, leveraging people, process, and technology.

On today's episode, operationalizing generative

AI with returning guest Dr.

Jeffrey Lancaster.

Jeffrey, welcome back to the show.

Back for the fourth time.

These are so much fun.

Darren, we could do tons of these and I'd be perfectly happy.

You know, I think this would be almost the eighth episode in a series on generative AI.

I love having you come in in between some of these others because you and I are both generalists.

The other ones got very specific on very specific things, but today I actually want to talk about operationalizing Gen AI, because I think there's this misnomer out there.

It's just there; it's this thing that we just use, and that could be kind of dangerous.

Yeah, when something is so magical, it's easy to oversimplify it.


And I think part of the decisions that leaders are going to have to make, you know, requires a bit of understanding about what's actually going on under the hood.

Now, you don't have to understand it at the level of, you know, what code is actually running, but you do have to make some decisions about where you want this to run.

Do you want it to run within your own infrastructure?

Do you want it to run within somebody else's infrastructure?

Do you want to have control over how many inputs are being brought into kind of updating the model?

Or do you want to put guardrails onto the system that's being used?

So, you know, maybe what we could do is talk through some of those decisions, so that when people are kind of making a case for leveraging some of these tools, they can already have thought out, okay, well, this is what we're going to need to do it both responsibly but also securely.

And, you know, ultimately to meet the objective of whatever use case somebody is actually trying to accomplish.

I told you.

So let's start with the three different categories.

No, there are really kind of three different sharing models of Gen AI, right? We've got public, which is your ChatGPT, Bard, Claude; then we've got community; and then we have private Gen AI.

Sounds a lot like what we did with cloud.

We have private clouds, we have public clouds.

There was this concept of community clouds that kind of disappeared.

But you could say GovCloud is a community cloud.

I guess. Yeah. Yeah.

So we're going to see the same thing in Gen AI.

Makes sense.

So I'll be honest,

I'm not familiar with the community model.

I'd love to hear more about that, but, you know, the way that I think about it, you've got two ends of the spectrum and then something in the middle.

The one end of the spectrum is, like you said, there's open models where anybody can interact with the model and anybody is also kind of tuning the model, because those models work through reinforcement.

So they're taking what people are asking, they're taking what people are generating, they're taking the responses that are being produced.

All of that is going back into training the large language model itself to deliver relevant answers.

And so that's one end of the spectrum.

And the other end of the spectrum is a private model, and that's where you might really lock down and limit maybe who has access to it, what data the model has access to, you know, in terms of when it's being trained, what it's being trained on.

And you really are, you know, constraining the art of the possible in that case.

And so that might be really useful in a lot of industry or corporate scenarios where you don't want it to engage in a lot of lateral thinking.

You don't want it to engage in a lot of creativity, or, you know, you just want that conversational user interface.

Yeah, a great point popped into my head when you were talking.

If I put a Gen AI on my chatbot for my customer service and I use a public one, that could be very dangerous, because it could say, well, you should just buy this other product because it's more reliable, which is your competitor's product.

But if I do a private Gen AI, I could put the guardrails in there saying, push my product always, right?

Yeah. And you could even script it.

You can get to the point where you're almost giving it the transcripts that you want it to produce and saying, try and keep to, you know, this realm.


So I can see the benefit of the private.

Yeah, for sure.

Well, the challenge with the private one, and I'll tell you this, is that the computational work that's gone into the open models has already been done.

So OpenAI, Google, to train them, that's important.

To train the model. That's right.

So they've already done the heavy lifting to train those models.

If you go to a completely private model where you might say, you know what, we want to build our own, you're then going to have to take on that computational overhead in order to get to the point where you have a large language model that you can use.

Now, luckily, there's that middle ground, and this is kind of what I was talking about.

You know, if this is a spectrum where you've kind of got a blended approach, what that blended approach says is, okay, well, let me take the kind of linguistic underpinnings that I get from those pre-trained open models, but then layer on top of it some constraints, and by layering on top of it those constraints, you say, okay,

I'm not going to have to retrain my own large language model, you know, which is going to take hours and hours of compute time and generate a lot of greenhouse gas. Right.


I'm not going to have to reproduce what somebody else has already done.

But by giving an additional layer that says these are the ways that I kind of want to start to tweak some of the weights in the model, and I want to start to tweak some of the words that can be used.

And maybe I don't want you to use these words, but I do want you to really prioritize these words.

Then you can get to a point where you kind of tune that open model to something that actually meets your needs.

So like you said, you're not getting your competitor's responses, but you're also not having to go and totally train one yourself. Okay?

So that's really the only reasonable way you can move forward: leverage models already out there.

There are several open source models available.

That's right.

LLaMA. Or, as I was corrected by my team down in South America, "Yama." LLaMA 2.

So that's an open source model.

Very, very open source.

You can do whatever you want with that model as long as you're following the ethics involved in LLMs.

That's right.

So I'm seeing people starting to use those models to develop and put guardrails or specialize the model on certain things.

Like on a previous podcast where I talked about Gen AI in infrastructure management, we have a partner of ours that is putting Gen AI on the front end of managing their private clouds.

Yeah, super cool, because now I can ask my infrastructure, how are you doing today?

Okay, well, hey, I'm doing pretty good, except this one area is a little slow. Or I can say, well, what parts are slow and why?

Instead of having to learn a bunch of commands and go through them. What an incredible use of what I would call a private Gen AI, right?

Trained it for the type of work

I want it to do.

And it can.

And it's getting live feeds from my data.

And that's, you know, one of those considerations that people need to think through. You and I have talked before about kind of what data you want to input into the model.

What data do you want to get out?

So if the type of data that you want to input, which might be infrastructure sensor data, if that's not been part of the text models or the image models or the video models or the music, you know, the ones that have already been done, you might have to do it yourself, and that's okay.

I think where, you know, where that gets really interesting is then when you can start to say, okay, well, if I can't adapt to the types of models that have already been produced, maybe I do have to go and train my own, but I don't need to train it with as many signals or as many features as what exists in.


The ones we've been talking about.

Right. Exactly.

So you end up finding some middle ground, and, you know, even OpenAI had said there's not going to be a GPT-5, that the way forward is not to just add billions and billions more indicators within the model.

What they're really going to do is they have to start thinking through, okay, how can we better either adapt to the incoming data to get something that is useful?

How can we better adapt the outgoing, you know, whatever is produced, whatever is generated?

So the current thinking, at least from a lot of the field, is the way forward isn't more and more and more expansion.

And so people shouldn't necessarily be put off to say, okay, I'm going to have to train a model that has, you know, 100 billion.

Maybe my model only needs, you know, a few million or tens of millions.

It depends on the data that's coming in. Okay.

So let's move on. We talked about private; I think private Gen AI has lots of opportunities, right? Yes.

What about public? Does it really have a place moving forward? Because you've got a lot of people. You talked about reinforcement learning that's happening when people interact with ChatGPT or Bard, the models changing and learning.

Do I want that too?

Is there room for a public Gen AI in the future? What do you think?

I think it's a really good question, and, you know, for instance, we talked about some of the different use cases.

I'm writing emails.

I don't want a sandboxed version. Now, you know, writing email, maybe I want that broad model. If I'm writing, you know, a book, for instance,

I don't want something that's locked down.

I want something that's going to bring in more, maybe perspectives, more points of view, maybe more data.

So ultimately,

I keep coming back to the same thing, which is, yeah, there's a time and a place where you're going to want to constrain things, which might be brand identity, customer service, you know, competitive. And then other times where you want it really broad.


And that's one of those early decisions that leaders are going to have to make: what amount of flexibility do I want the user to have in their system?

Do I want them to be able to tap into a broad kind of brain trust, or do I want them to have kind of a constrained knowledge base to pull from?

So to me, this is where community clouds fit in, and this concept goes way back to community grids.

Back in the early 2000s, when I was in the Global Grid Forum, there was this concept that communities would share, and I could totally see, especially in areas like medical, where doctors would share a community Gen AI. The doctors throughout an organization, or even better throughout the country, where they're sharing a Gen AI to help do diagnosis.

Wouldn't that be incredible?

And that makes total sense.

Yeah. And I think, when you start, because I don't want, you know, some punk like me putting in stuff and asking stupid questions that change a medical Gen AI. But qualified doctors that are interacting with the Gen AI and sharing information?

What an incredible way to share information with lots of doctors.

So that community model, I think, is going to be a really cool thing.

Well, and I think what you're getting at is almost a role-based approach to who gets to retrain the AI.

So, you know, if I think about what's out there, if I think about the knowledge bases that are out there, that might be the foundation for a community model. You brought up medical, right?

So there's going to be a lot of doctors who might be interacting with it in a way to help with a diagnosis, to help with a treatment plan, to help generate things, which they then might decide to tweak, because they went to med school and did a residency and are qualified to actually do that.

Whereas I, when I'm looking at drug interactions or I'm looking at something else? You don't want the model to maybe adjust because of my dumb question.

That's exactly right.

Can I eat cheese while I'm on this medicine, or whatever it is?

Like, that shouldn't influence the model.

And so I think what we're going to see is, in those community situations, that certain users are going to have different privileges than maybe a general public user along the way.

Well, maybe they won't even have access to it at all.

That's right. Because ask any doctor: they wish that WebMD was not around.

And yet I've got all these symptoms.

I'm going to die in three.

No, you're not going to die.

You know, you have allergies.

But I mean, you know, on the flip side of that, I do wonder. The problem with the way that we interact with, like, a WebMD is that when I get results,

I'm not also presented with, well, there's also all these other conditions that have the exact same effect.

So I think he was right.


It's like you immediately go to the red alert, which is: go to your doctor immediately.

But if I am getting some context that says, okay, well, hold on a second, only, you know, 0.05 percent of the population has this, whereas 30% of the population has this, it might be more likely that you don't need to have a large concern about this.

So go see your doctor.

Let me make an appointment for you.

You know, I do think tying together some of those different functions and the different context, it still has a place here.


We've talked about understanding the scope of public, community, and private.

Let's talk about now as an organization.

How do I manage these AIs?

Obviously, the public one,

I can't really manage, right?

It's public one.

I can have policy around it.

I can do that.

That's different.

Let's talk about managing my own Gen AI, or a community AI that maybe I'm managing.

What operational things do

I need to worry about?

Well, there's lots there to worry about.

But yeah, let's talk about the decisionsthat you're gonna want to make first.

So that first decision is probably: where do I want this thing to live?

Do I want it to live in the cloud?

Do I want it to live on prem?

Do I want it to live in some hybrid combination of the two of those?

What I think is really interesting is do

I want it to live on the edge?

And this is, you know, you and I haven't really talked about what the future of some of this stuff is going to be yet, but it won't be long until the models and the compute necessary to run some of these things is going to be substantially smaller than it is today.

So is there a case where I might want it to run out on a device, where maybe it's not retraining the model?

There, but it's sending something back.

To all the information out there?

Exactly. Exactly.

And you don't need that roundtrip maybe to take place for the data, but you're getting the benefit of that on a device, you know, maybe wherever you're located.

So first decision is, you know, in the cloud, on prem, some hybrid infrastructure.

Second decision that I would say that people have to make.

And when you say worry, this is really what I would worry about.

Do I have the people and do I have the talent that can both set this thing up and also manage it?

So to expect this to be, you know, a thing where it's out of the box, where you plug it in and it just goes, that's not going to be, at least today, the way that many of these models work.

There is still going to be a matter of, am I stitching together different models, because there might be different components that go into it.

There might be, you know, a socio-emotional component, a model that's specifically trained for empathy.

There might be a model that's specificallytrained for generating imagery.

There might be a model specifically trained for the conversation.

Okay. So, yeah, so a total solution is going to be a combination of models stitched together, probably with an overarching input model, an output model.

So it's not this big, huge Goliath thing that handles everything.

In fact, we know with ChatGPT there's multiple models behind it.

It's parsing those things out based off of the input.

So, okay, so there's some architecture work that has to happen.


Without a doubt.

You know, and I think that architecture work has got to be based on what is the outcome that you want,

You know, what is the use case that you're trying to solve for?

If you're trying to solve for every possible use case, I think you're really going to miss out.

I think if instead you're saying, you know, this is going to be a customer service interface, this is going to be a tool that my development teams can use to think broadly and to brainstorm.

Well, those two are very different in terms of how you might set them up, and you want to think through again those guardrails that you want to put on something.

In one case, you might want it to have a lot of empathy, because you might want to say, I'm so sorry you're having that problem.

You know, let me see what other people have encountered in the past. Whereas in a brainstorming app, you might want it to be much more kind of a cheerleader.

You say that's a great idea.

What if we also did this and this and this?

Let's kind of expand the conversation that way. So there's not going to be, I don't think, a one-size-fits-all "this is the way to do it," because everybody's going to try and use it in a different way.

Gotcha. Gotcha. Okay.

So we talked about location, architecture.

Let's talk about a dirty little secret.

And that is my model getting sick.

There's some great examples out there where AIs were released and the trolls went crazy on them, and they became misogynistic, bigoted, foul-mouthed, rude AIs.

I mean, yeah, we're not going to name the two companies that happened to, but it happened, right, when they released them out in the wild.

So what that tells me is ChatGPT, or OpenAI, and Google, they actually have someone kind of keeping the generative AI healthy, right?

Because they can get sick, they can start making mistakes.

Hallucinations are a real thing, so you can't just leave them alone, it sounds like.

Does that sound right?

I think we have to be careful.

I would separate out those two as two different issues.

One is an issue with reinforcement.

So depending on how much authority you're giving to the user to then build that reinforcement into the model, you might then get it steered in a direction that you've seen a lot in the news.

That's going to be really different from the level of hallucination, which in some cases you might want a lot of.

So I don't want people to think of hallucination as a bad thing.

We don't want to think of that creativity that's brought to it necessarily as something which is a flaw in the model.

It's not a flaw.

It's actually a feature, but it's a feature that you can tune, and you have to be really intentional about how you tune that feature.

Because if you ignore how much hallucination you want it to have, then you're going to get answers that you're not expecting.

But again, those two different scenarios, customer service versus, let's say, brainstorming. Brainstorming, you might want a lot of hallucination, because you're like, give me crazy ideas.

I'm going to be able to figure out what's what. Customer service?

And this is maybe counterintuitive.

You wouldn't want it to have zero hallucination, because how often does somebody call a customer service line and know exactly what the problem is?


More often than not, somebody says, you know, this is what's going on.

Like it's not working.

The way that it's supposed to.

It's doing this and this, and you want it to still have a little bit of creative freedom to start to address and get to a point where you can actually diagnose what the issue is that's going on.

So you're not maybe ramping up the creativity all the way, because you don't want them to say, oh, you know, you need to take your cat outside.

And that's the problem that will fix it.

But you do still want to give it enough where it's not going to require exactly saying something in a certain way to be able to get to the answer.


So I love how you differentiate the two. Hallucination is the creativity part of the AI. I want that, but I want to be able to adjust that as I need to.

Right? Yeah.

The other part we talked about is the AI model learning from interacting with people.

That's where it can get sick, right?

It can get sick based off of the interactions it's having with people and how much weight you give that interaction in adjusting the model.

So there's guardrails that you want to put in place.

You might say, you know what, I don't think our customer's, you know, initial prompt should be rolled into the model.

I'm going to wait for, you know, some log data.

I want to go back and look at it, and I want to see how people are actually using it.

Or exclude curse words from infiltrating the model, because you really don't want your AI cursing back at people.

Exactly. Curse words, but also topics.

So you can say, okay, you know, you might plug into a knowledge base that has a very, very broad knowledge graph, but within that knowledge graph you might cut off certain connections.

Say, you know, I don't want you to bring up anything having to do with religion.

I don't want you to bring up anything having to do with politics or anything like that. Or you might say,

You know what,

I want to limit this to religion.

Maybe you're building something that's a, you know, a modern confessional for the church or something like that.

Like there, you know, again, it's a decision that has to be made.

But you want to know, going into it, these are some of the decisions we're going to have to agree on before we just dive in and start building and, you know, limiting even what can be broken.

So it's really interesting to me, because the AI model itself is changing, right?

So does it make sense at all to snapshot these models or version control them, saying, I really like this model the way it is?

I want to take a snapshot of it, put it over here, and have it keep learning.

But really, if I want to, I can go back.

What I think you're getting at, and what the next logical step of that might be, is the personalization of the model.

Okay, so, you know, it won't be long.

And this is where some of the dialog tracking comes into play. With something like ChatGPT, for instance, you can get it to remember all of the things that you've already talked about, which might include settings or tone or ways that you want it to interact, and you can actually encapsulate that into a personality.

Now you and I might each want a different personality to interact with.

Now, in reality, what that is, is that's a different kind of snapshot of the model, but really it's kind of a different implementation of the way that it's going to interact with your account.

Almost like pre-configuring the model to interact and engage with you in a certain way.

So could it not be long until each of us has a different model that we're interacting with?

Absolutely. Yeah. No, no, I can see that.

And it can keep state right on that.

I give it a name of a persona.

Hey, this is this is Darren.


And hey, anytime you interact with Darren, I want you to respond in this tone, right?


And today I want to talk to this personality; maybe later today I want to talk to a different one, and I can.

Yeah, I can totally see a new branch of psychology coming out of this.

Well, and it's, you know, it still does require the user to do a little bit of that thinking.


Because yeah, maybe I want a personality.

You know, on the one hand, that's really good at writing scientific articles, you know, that has that language, that knows the kind of way to speak for a scientific journal.

But then later maybe I'm doing something where I want somebody who's creative and supportive, and, you know, maybe that's a different tone.

And so that might be a different setting that I could use when I then go to engage with the tool.

Okay, so this is interesting, because I could do that at a company level too.

That's right.

So there is operational work. I have to interact with this, I have to operationalize it, I have to use it.

It's just, I'm going to be using it differently than I've done other things. Like, I fine-tune my databases for the type of work they need to do.

I can fine-tune my AI to do the same, by creating personas or creating my own models, either way based off of a public model. Let's tie this whole thing together in a nice tight bow.

How do I get started?

Where do I go?

I mean, where do I go to get started for my company or for me personally?

Yeah. Great question.

So those are going to be two different answers, right?

So if I'm going to start using this for myself personally, my recommendation is go start playing with the tools.

You know, again, depending on what you want to do, get some of that usage under your belt. Go out, sign up for a ChatGPT account; go out, sign up for Bard, or start using it in your search. Go out and sign up for Midjourney or DALL-E, you know, see what the tools can do.

Because once you understand what the tools can do, you're going to better be able to formulate the question that you want.

When you go to do this for your company.

You know, you don't want to start dumping your company data into those open models.

We've seen that not go well.

So you want to have a little bit more discretion and maybe thought put into it before you set that up.

What a lot of companies are doing is they're building a sandbox where people can go and play within kind of a safe space.

That's my recommendation for a first step.

Before you jump to implementing a production environment, test it out, see how people use it.

Let people interact with whatever you built, so you can see: are there other considerations that we might need before we get to the point of releasing this into the wild?

Great, great advice.

Jeffrey, as always, it's fun to talk.

It's fun to go through these things.

We most definitely are going to have you back, so keep listening to the show on Gen AI. It's a hot topic.

We cover hot topics, so thanks again, Jeffrey.

Thank you, Darren.

Thank you for listening to Embracing Digital Transformation today.

If you enjoyed our podcast, give it five stars on your favorite podcasting site or YouTube channel. You can find out more information about Embracing Digital Transformation at embracingdigital.org. Until next time, go out and do something wonderful.