#151 Understanding Generative AI


August 17, 2023

with Jeffrey Lancaster and Darren W. Pulsipher

In this episode, host Darren Pulsipher interviewed Dr. Jeffrey Lancaster from Dell Technologies. Their discussion centered on generative AI and its potential impact.


Keywords

#genai #ai #datamanagement #people



What is Generative AI?

Artificial intelligence systems that have the ability to generate new content are known as generative AI. These systems can produce various types of output, such as text, images, audio, and video. This is different from most AI currently in use, which is primarily analytical and focused on tasks like classification, predictions, and recommendations. Generative AI offers a more creative and open-ended approach to artificial intelligence applications.
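To make the contrast concrete, here is a rough illustration (not from the episode) using the Hugging Face transformers library and its default models: an analytical model labels content that already exists, while a generative model produces new content from a prompt.

```python
# Analytical vs. generative AI, sketched with Hugging Face pipelines.
# The specific models are whatever the library picks by default; the point is
# the shape of the task, not the particular checkpoints.
from transformers import pipeline

# Analytical AI: classify existing content (a label plus a confidence score).
classifier = pipeline("sentiment-analysis")
print(classifier("The new data governance policy is a welcome change."))

# Generative AI: create new content from an open-ended prompt.
generator = pipeline("text-generation")
result = generator("A short welcome email to new employees:", max_new_tokens=40)
print(result[0]["generated_text"])
```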

Revolutionary Potential

The host and guest agreed that generative AI is a technological breakthrough with the potential to be a game-changer. It can amplify human creativity and produce impressive content from even rudimentary prompts, and it could reshape industries such as writing, design, and music. However, the societal impact of this technology is not yet fully understood.

Concerns in Academia

In the context of higher education, there has been a growing concern over the prevalence of plagiarism and the exploitation of generative AI by students who seek to cheat. This issue has prompted discussions around the ethical considerations of utilizing AI in academic settings. However, it has been suggested by experts, such as Lancaster, that academia can play a pivotal role in advising on these ethical considerations. By doing so, educators can empower students with the necessary skills to responsibly evaluate and critically analyze AI-generated content, which will undoubtedly be a recurring theme throughout their future careers. By taking a proactive approach towards addressing these concerns, the academic community can ensure that the integration of AI in education is not only effective but also ethical and responsible.

Benefits for Efficiency

Generative AI has the potential to revolutionize how we approach time-consuming tasks such as writing reports, emails, articles, and code. With the assistance of AI, the process could be greatly accelerated, saving precious time and resources. However, it is important to note that human oversight is still crucial. Even with the advancements in AI technology, it cannot be fully trusted to produce flawless work. As such, careful review and editing by humans remains an essential step in ensuring the accuracy and quality of the final product.

Customization and Implementation

To implement a successful generative AI solution, organizations must carefully consider their unique data needs and security requirements. While readily available options like ChatGPT can be helpful, a truly customized solution requires significant resources and expertise. This may involve collecting and analyzing large amounts of data, as well as investing in powerful computing resources. Before fully deploying generative AI, it’s crucial to establish a comprehensive framework that takes into account all aspects of the organization’s operations, including data privacy and security protocols. With the right approach and resources, generative AI can be a powerful tool for organizations seeking to enhance their data-driven decision-making capabilities.
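One customization path the paragraph above alludes to, sketched here under stated assumptions (the Hugging Face transformers library and an illustrative open-weights model name), is to host the model on infrastructure the organization controls so prompts and data never leave its environment.

```python
# A minimal sketch of running an open-weights model locally so sensitive
# prompts stay inside the organization's own environment.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "meta-llama/Llama-2-7b-chat-hf"  # illustrative; any local causal LM works

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)

def generate(prompt: str, max_new_tokens: int = 200) -> str:
    """Generate a completion entirely on hardware the organization controls."""
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=max_new_tokens)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

print(generate("Draft a short internal status update on the data governance review:"))
```

A hosted service can be faster to adopt, but the trade-off is exactly the one described above: data leaves your environment, and the comprehensive framework around privacy and security has to account for that.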

Podcast Transcript


Hello, this is Darren Pulsipher, chief solution architect of public sector at Intel.

And welcome to Embracing Digital Transformation, where we investigate effective change, leveraging people, process, and technology.

On today's episode, Understanding Generative AI, with special guest Dr. Jeffrey Lancaster from Dell Technologies.

Jeffrey, welcome to the show.

Thanks for having me here.

Hey, we had an opportunity to talk, and I just, I have a bro crush on you, as my wife would say, because I kept talking about it afterwards. Jeffrey, he understands all this really cool stuff about generative AI and all this. We've got to talk some more.

She was like, oh, you've got to hurry and get this out of your system.

But before we go there, Jeffrey, tell us a little bit about your background and where you're coming from and all that.

I know enough to be dangerous,as I would say.

So, you know,

I have kind of a weird background, and I think that's why

I'm so interested in emerging technology.

So my background is actually as a chemist.

I did my Ph.D. in chemistry, actually. As an undergrad, I also studied art and art history.

So I have a background in sculpture, history, chemistry, kind of all over the place.

When I finished my Ph.D., I still wanted to be sort of involved in academia, but I didn't want the pressure of being an academic with a...

So I became a librarian at Columbia University.

I was supporting science and engineering disciplines, overseeing something called the Digital Science Center.

So I was still involved in research. I was still helping faculty kind of do what they were doing, but in a way that started to take me a little bit out of the day-to-day of the scientific enterprise.

I got a little fed up, admittedly, with how slowly higher ed moves, and so I left higher education and went to a startup for a little while, where I was teaching businesses about technology.

So I would fly around the world.

I would teach CEOs how to code, CMOs how to do data science, how to hack, things like that.

And it was a really interesting time for me to both learn, but also quickly take my learning and kind of what was happening in current events and be able to translate that for an audience, and specifically a business audience.

And so a lot of that revolved around the question of why they should care, who at their organization is doing these things, how do you go and have a good conversation with these people, and how do you know enough to ask the right questions?

After doing that for a couple of years,

I then joined Dell Technologies. I've been with Dell for a little over three and a half years, and joined Dell right before the pandemic.

And so, you know, it was an interesting transition.

Very, very interesting, you know, to see kind of in real time how people were adapting, how we adapted as a company, but also how our customers were adapting.

And so in my role now,

I work with higher education institutions,

I work with colleges and universities in about 13 states that I cover, from Virginia all the way up to Maine.

And we talk about anything from, at the time, their pandemic response or continuity planning.

We'll talk about e-sports, the classroom of the future, DIY initiatives, sustainability initiatives, research, which of course is a big focus area of mine.

How new technologies might be changing the way that they operate, and thinking about how they can begin to build a culture where you don't necessarily butt heads between innovation and security.

And so how do you do that in a way which is still compliant within the organization but allows you to move more quickly than kind of the higher ed institution that I had been at, where I ultimately felt like, you know, it just took forever to make a decision and to try something new.

And so how do you build that culture of innovation?

So I have a question around that, because I interviewed someone from UVA in the business school, and we talked about COVID and the effects that COVID had on IT organizations, and organizations in general, to innovate faster all of a sudden. Because we found in the first three weeks of COVID, everyone could move fast all of a sudden, where before it was like my five-year plan to move people to Office 365, these big, huge, long, drawn-out, big, huge budget things that all of a sudden happened in three weeks at a fraction of the price.

So we know that we can innovate, we know we can move fast.

So why do you think, or how can we keep that? That sustainability of speed of innovation, or should we even?

Yeah, I think "should we" is a better question, because you don't want to constantly be changing things, because people get whiplash.

Right.

And so the shift to remote work, remote teaching, and remote learning was significant enough that people would tolerate doing something in a different way.

You don't normally have that tolerance from people, and especially in academia.

And I know we'll talk about state government, we'll talk about K-12 education if you want to.

More often than not, people don't have a tolerance for that change.

And so you really have to balance, in higher ed at least, these ideas around shared governance, these ideas around making sure that people are involved and included in the decision making, making sure that people's voices are heard, which is really critically important when you're dealing with public initiatives.

It's really important that you hear from whomever your customer is, whether that's the citizenry, whether that's students, whether that's faculty, staff.

And more often than not, the reason why these things take so long is not a technical reason. It's a cultural reason, because, yeah, of course, you have to build up that momentum.

And so with the pandemic, it was different for everybody.

Everybody was like, yeah, absolutely, we need these things.

And at the same time, you also had technology innovation moving really, really quickly.

So you had Zoom, you know, really coming of age, which had been a tool that a lot of people were using, but it became the kind of de facto term for "we're having a teleconference."

Maybe some people said WebEx, maybe some people said Teams a little bit.

But Zoom became a verb.

We were going to Zoom, just like, you know, Kleenex is a tissue and Q-tip is a cotton swab.

Like, it became the name for the thing that we're doing now.

And so, you know, paired with that massive uptake, yeah, things moved really, really quickly.

Now, as people move back to in-person education, there's no longer that need to move quickly.

And so people have dialed it back a little bit.

They've gotten back into committee work, they've gotten back into slower decision making, which is frustrating for people on the industry side because they want to keep moving fast, selling big things, big change, and that's not always how things work.

Yeah, I'm feeling that myself, even inside our organization at Intel. We moved so fast, and now I think the bureaucracy has set back in.

And that's right.

Now things are slowing down again.

And I think the CEOs are feeling some of that pain on getting people to come back to the office.

Well, and another thing that happened along those lines was a lot of the CIOs that I talked to, they got a seat at the table of decision making during the pandemic.

Yes, they did. Yeah.

There was the recognition that the role that they were playing was critically important to the success of the organization, to the success of the institution.

They needed to know, do we have the capability?

Can we do this in a secure way?

You know, are we going to be able to handle the load of shifting everybody to this new way of doing things?

Now, what has happened is that many of those CIOs maybe no longer have that same urgent seat at the table.

And so there's some additional layers of bureaucracy, like you said, between the kind of organizational mission setting and now what is essentially a transactional function for a lot of people in IT, where it's keeping the lights on, maintenance, which has become deferred maintenance, it's doing more with less, it's having less budget, it's all of these factors which are no longer that critically urgent thing that it was. It's now back to reality.

But I'm going to throw a kink in here.

Yes, we have another potential black swan moment happening now, which is generative AI.

I think it's a huge, big deal.

Some people think it's progressive.

I think it's way more than that.

I think it's revolutionary.

So what do you feel like?

You're really overexaggerating it.

I mean, let me pick at that for a second, too.

Why do you think it's revolutionary?

It just, it feels to me, and I know that's a weird thing, it feels to me like it was in the nineties.

I was in Silicon Valley in the nineties.

I graduated from college in '94, arrived in Silicon Valley in '94, '95, and it just feels like if you're not part of it, you're going to get rolled over.

Sure.

So, and it's not a company adopting it.

Individuals are adopting it.

So to me, if an organization doesn't pick up on it, especially, it's almost like a convergence, the perfect storm, a convergence of ecosystems.

I have people working at home, a lot of people still working at home.

I now have generative AI out there that appears it's going to help me do my job more effectively or more efficiently, and I can get a lot more done in a smaller amount of time if I take advantage of it.

And companies, they don't have the visibility that maybe they had before on their employees, because they're not in the office.

I just think all that happening together is going to cause this thing to just kind of spiral.

I don't know.

It just feels that way in my gut.

Yeah, I mean, future prediction is always hard, right?

And so the way that I've been thinking about it lately is that generative AI itself is a class of tools, and it depends on how you decide you want to use those tools. And do you want to use other people's version of the tool?

Do you want to build your own version of the tool? That's where I think things are going.

And again, this is why, when I was giving you a bit of my background, I was talking about the decision-making process around new technologies.

I think the reason that a lot of businesses and a lot of public entities and a lot of other folks initially had a little bit of a knee-jerk reaction to it was: this is something new that has the potential to change everything. And does it?

It may very well, but it's also something that a lot of people don't understand: what it can do, what it can't do, what it should do, and what it shouldn't do, really, you know, if you think about it.

And so where I find the interesting space right now is, okay, we know this tool is out there, but organizations need to decide how they want to use a tool like that.

Do we want to use it for our internal processes?

Do we want to find operational efficiencies?

Maybe. Do we want to use it to better engage with our customers, with our citizenry, with our students?

Maybe. Do we want to make it so that people can find information more easily, or so that they can do the work that they were doing more efficiently?

Maybe.

But then the counter question to that is always, well, what are they going to do with all that extra time?

And so this is where it becomes a culture question again. If we say, okay, you know, you can use some of these tools and you get a 2x, 3x, 5x efficiency gain on writing emails or putting together documents or, you know, generating imagery, whatever that thing is, are you going to, as an organization, have fewer people employed?

Well, that's really scary to people.

Are you going to give people more time for creative pursuits?

Maybe, you know. Are you going to expect more out of people?

Are you going to change the metrics by which you're judging people?

So there's a lot of things still at play and there's a lot of levers where organizations are having to make the decision: how do we want to bring this tool in, and what impact is that going to have on the people that we have there now?

I like how you said "what," not "if," it's going to have an impact.

Yeah, absolutely.

I mean, this sounds kind of strange, but, and you probably don't remember these days because you're younger than I am.

Older or younger? Yeah, yeah, yeah.

I'm an old man.

As my kids will tell me. There used to be, before my time, there were typing pools where a whole bunch of people sat there and typed memos.

They listened and typed memos or handwritten shorthand.

I don't know anyone that knows shorthand anymore, but that used to be a business class that you took.

Court reporters, so, shorthand. Yeah. Yeah. Okay.

The court reporters, right. There used to be stenographers, there used to be all these jobs.

A lot of them have been replaced. And there used to be a mail room.

Yeah, in large corporations there still are in some, but most don't have it anymore.

Yeah.

So the jobs shift and change. That happened over a long period of time, except when we hit certain things.

So it is going to impact.

Absolutely going to impact.

So to what degree and how it's going to impact, I guess, is your choice as an organization?

Well, the examples that you mentioned are all about information transfer and knowledge transfer.

And that's where this gets really interesting, because, you know, you and I talked about this today, people try to use these tools in the same way.

So if you try to use ChatGPT in the same way that you use Google, you are going to be profoundly disappointed.

Yes, I know that myself, and a lot of people do.

You know, they start off and they say, okay, I'm going to use this text interface and I'm going to ask it a question, and I'm going to verify that it knows what it's talking about.

So I'm going to ask it a question where I already know the answer.

Now, that's a Google type of question, where there is a definitive, sort of factual answer to something.

Why generative AI is different, and why it's exciting, is that that question isn't really meant for generative AI.

You know, "what's the capital of insert name of country" is not really the question that you want to ask. The question that you want to ask generative AI is: I'm going to be taking a vacation to that country and I want a five-day itinerary where I make sure that I see museums and I eat local, authentic foods. Can you put together an itinerary for me?

That's a question you can't Google.

You can maybe find examples that other people... well, people that are really good at Google could Google that.

But it takes a half hour, an hour, two hours to do, because I'm Googling all these different places and reading reviews, and so it's not a single query.

Right, Right, right.

Where Google is great is where I ask a single query, I get something back, then I use my brain, right, to then process some of the information and ask more questions.

So what you're saying is generative AI can take a more generalized concept, right, and use its augmented reality or its augmented brain, whatever we want to call it.

It's a large language model. You and I always struggle, because on one hand it's the next-word predictor.

All it's doing is taking my question as an input.

It's trying to understand what I'm asking it for, and then it's going to predict what the next word in a response ought to be.
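To make the "next word predictor" idea concrete, here is a toy sketch added for readers (assuming GPT-2 via the Hugging Face transformers library) of the loop Jeffrey describes: the model scores every possible next token, the most likely one is appended, and the process repeats.

```python
# A toy greedy decoding loop: predict one token at a time and append it.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

ids = tokenizer("The capital of France is", return_tensors="pt").input_ids

with torch.no_grad():
    for _ in range(10):                      # generate ten tokens, one at a time
        logits = model(ids).logits           # scores for every token in the vocabulary
        next_id = logits[0, -1].argmax()     # greedy: take the single most likely token
        ids = torch.cat([ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(ids[0]))
```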

And that's great.

You know, and where I think this gets interesting for people is that, you know, that example that I just gave about making a travel itinerary, I might want the full knowledge of the World Wide Web to help me answer that question.

You know, I want all the information that's out there synthesized, brought together in a way that I can now tap into. Where institutions and organizations will want something a little bit different is that they might want that answer to be contextualized to their local environment.

And this is where I think the power of it is, because now, you know, these large language models are out there.

Everybody doesn't have to go and do that.

Again, that tool already exists.

But what we may want to do is we may want to put our own skin onto that tool.

Maybe we want to paint it our own color, or whatever we want.

But in the context of this, it would be: how can I ask that itinerary question in higher education? It might be for your course schedule.

Well, if I'm going to ask it to help me put together a course schedule, I don't want it pulling courses from Harvard or from other institutions if I'm going to Columbia.

Exactly.

I want to know this is my institution.

The information that you're getting is only from this institution.

And so it's a bit of a walled garden in that sense.
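A minimal sketch of that walled-garden idea (an illustration, not a description of any vendor's product): retrieval is limited to the institution's own documents, and only that retrieved context is placed in the prompt sent to whichever model you use. A real deployment would use vector search rather than the toy keyword match below.

```python
# Ground answers only in this institution's own catalog ("walled garden").
COURSE_CATALOG = {
    "CHEM-101": "Introduction to Chemistry. Offered Fall. No prerequisites.",
    "ART-210": "History of Sculpture. Offered Spring. Prerequisite: ART-100.",
}

def retrieve(question: str) -> list[str]:
    """Toy keyword retriever: return only local documents that match the query."""
    terms = question.lower().split()
    return [text for text in COURSE_CATALOG.values()
            if any(term in text.lower() for term in terms)]

def build_prompt(question: str) -> str:
    """Build a prompt that instructs the model to answer only from local context."""
    context = "\n".join(retrieve(question))
    return ("Answer using ONLY the course catalog below.\n"
            f"Catalog:\n{context}\n\nQuestion: {question}\nAnswer:")

# The resulting prompt is what would be sent to a hosted or locally run LLM.
print(build_prompt("Which chemistry courses are offered?"))
```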

But what's great about it for technology companies and other people is that each individual context needs its own individual environment in order to operate.

And that's very exciting for people like Dell and Intel, because it means we can sell more stuff. And the cloud companies love it too, because some of these are going to be living in the cloud.

But it means that contextualizing that information really increases the value of it for the people who are searching for that information.

And so that's why it's a game changer, because you can now layer this ability to do sort of natural language generation with contextualization.

So the contextualization, is that easy to do?

Is that going to be easy for me?

A Darren Pulsipher, to go and say, I'm going to create the same thing that ChatGPT or Google Bard have done, on my computers that I have in my data center.

Can I do that easily?

Easy is relative, you know, and easy is hard, because it depends on what Darren knows.

It depends on what Darren's done before.

I think that for people who are versed in kind of stitching together a constellation of different tools, it's not going to be any different.

So tapping into a large language model is not going to be substantially different than tapping into deep learning models or TensorFlow models or other things that they may have done before.

The skill set is going to be about the same.

Now, if you're an institution that's never done that before and you're not...

So the barrier to entry on this is not super high, especially with, like, Llama 2 coming out.

Yeah, I'd say it depends.

So Llama 2, great example. You know, Meta essentially saying, hey, everybody can take our large language model and use it for free, and that's great.

The language model itself doesn't really get you to the end goal, which is being able to spit back useful information to somebody.

So what we're seeing a lot of, really, is that companies are now injecting this stuff into the tools that already exist.

So you see Microsoft injecting it into Office 365.

You see Turnitin injecting it into their tool.

You see it in all of these different companies that people may already be doing business with.

Now saying, well, we've got generative AI built in now.

And on the one hand you say, great, I don't need to go and implement anything, because it's already there and somebody who knows more about it than me put it in there.

But the question that I always ask is, does an organization have a framework where they can start to say, well, this is how we want people to interact with it, this is where we want our data to live?

Are we contributing things back to this vendor's model, or is everything I say staying safe and secure in our own instance?

If you're not asking that question, you're missing a huge security hole, because you're not going to be able to control how people use it.

So you're not going to necessarily be able to say, hey, don't copy and paste PII data or HIPAA data into this tool, and have some detector that says, oh, you shouldn't be sending this out to the large language model.

Maybe you could do that. But still.
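Something like the detector Darren and Jeffrey are speculating about can be sketched as a simple pre-flight check (an illustration with made-up patterns, not a production data-loss-prevention tool): scan each prompt for obvious PII before it is ever sent to an external model.

```python
# A rough pre-flight PII check before a prompt leaves for an external LLM.
import re

PII_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def check_prompt(prompt: str) -> list[str]:
    """Return the kinds of likely PII found in a prompt (empty list if none)."""
    return [kind for kind, pattern in PII_PATTERNS.items() if pattern.search(prompt)]

prompt = "Summarize this note about Jane Doe, SSN 123-45-6789."
violations = check_prompt(prompt)
if violations:
    print(f"Blocked: prompt appears to contain {', '.join(violations)}.")
else:
    print("OK to send to the external model.")
```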

So you've got to have some thought, I think, put into how people are going to use it and where that data is living, before you get to the question of whether it's going to be easy or not.

Because depending on what data you're using, it might get harder and harder and harder, depending on what you want to do.

Okay. So to me, this sounds like the CDO, your chief data officer, and the strategic data governance plan have to be in place in order to really let this thing fly in your organization, right?

Because if not, you're starting to lose data.

Your intellectual property will start going out the door.

Well, and it's a hard thing to do, because it's not any single person's responsibility.

So you do see some job postings for a chief AI officer and things like that.

Yeah, yeah, yeah, yeah.

But you know, when I think about where that person lives, they're living at the intersection of a couple of different jobs and sort of functional areas.

They're living at the center of data, like you mentioned. They're living at the center of security, they're living at the center of user interface and user experience.

They're living at the center of infrastructure and, you know, things that might historically fall under the IT organization, the chief information officer. They might also be, you know, having as a customer, and again,

I think about higher education.

So, you know, maybe your customer service people, maybe your alumni engagement people, maybe your teaching and learning people.

There's a lot of different parts of an organization that could potentially leverage these technologies.

And so it's not just build it and people are going to figure it out; it is probably a mediated process to get to whatever that ultimate conversational interface is.

It's clear.

Do you think that generative AI is scary for a lot of higher ed?

I mean,because there's a lot of unknown, right?

And there's a whole thing around, and I'll have someone come on next week and talk about this, an English professor at the university level: how do I deal with generative AI, and who owns the work?

I mean, there's a lot of questions around this, right?

Is it okay for me to use generative AI to make my emails look better, or to make my presentation more presentable?

Is it like a ghostwriter?

You know, if I'm writing a book, yeah.

There's all these weird things that are now popping up.

Well, thank goodness for academia, you know, to find the answers to these questions, because the initial reaction of academics was, no, don't use it.

Kids are going to use it to cheat. Right.

And you saw this in New York City.

You saw this in other countries. It's like, oh, this is a tool for people to cheat.

What has happened since then is that wave kind of crashed and subsided.

And in higher education, at least, there's a lot of people that are saying, well, how can we now use this to help prepare students for the world that they're going to be going out into?

Because, you know, you can't ignore it.

And I think that's why, earlier, when you said this is a big watershed moment, it's because you can't ignore what the potential for something like this is.

So academia is really good about understanding citation, understanding information management.

That's my librarian hat on.

Again, it's really good at understanding what the social impacts of things are going to be.

It's really good at understanding kind of workforce trends as industries are starting to evolve and change.

And so if we operate under the assumption that students of today are going to need to use these tools as part of their jobs tomorrow, then they should be getting trained in not just how to use them, but how do I judge whether something that's presented as a fact is actually a fact? How do I, you know, investigate something to make sure that it's believable?

Do I just accept a citation, or do I actually go and look at that citation and see if I critically understand it the same way?

And this has gotten a lot of people in trouble, because today generative AI can fabricate citations.

You know, you saw this with some legal cases and sort of some fake legal citations.

And so not until a certain point did anybody actually go and look up the case law or look up the references or look up whatever it is.

And so I think what it's causing is it's causing people to start to say, okay, I can use this tool to help me write in the style of something authoritative.

And that's very powerful, and that helps get you to authoritative much more quickly.

So I can say, write this in the style of a chemistry article, write this in the style of, you know, a preparation, a science dissertation, whatever you want.

But when it gets to that point, now my critical brain has to pop in, and I have to say, okay, I have to do the human bit of work, which is: is this what I actually want it to be?

I have to go in and edit.

I have to go and make it my own, because the technology is not at the point where you can 100% trust it yet. It's not.

Do you think it will ever get to the point where I can 100% trust it?

I think... because I've been taking some classes, because I'm working on my dissertation right now, and in the classes I have to become a certified researcher.

Okay.

So I've taken some classes on the ethics of research, and some fascinating things have popped up.

Several case studies on people fabricating stuff and falsifying records and all this stuff.

Humans, we should already be using our critical minds to question whether something is real or not, right?

Yeah.

And just because it comes from an AI doesn't mean we can trust it.

Just like, just because it comes from my professor at the university doesn't mean I necessarily should trust it.

I should check. Right.

So I think that critical thought was always supposed to be there.

But now it sounds like you have to ramp it up even more.

I mean, let's agree that humans are inherently lazy, right?

And we're always kind of looking for the easiest path toward something.

And the reason that generative AI is so interesting is because it presents an easy path toward things that are very difficult or time consuming.

Writing articles, you know, that, to be honest, not a lot of people find joy in, and not a lot of people spend their free time doing these things.

There are some people that do, but yeah, yeah, yeah, yeah.

It's things that I think, as humans, we find cumbersome.

And so is there an additional layer of responsibility put on an author who uses these tools to verify?

Absolutely.

Is there an additional layer of responsibility put on a reader who's looking at these tools? Absolutely.

And there are some things which are coming out which are going to help with that, which is that there's going to be a citation, let's say, for text, maybe a watermark in an image or, you know, metadata embedded into things, so that you can start to see. There's some interesting tools that I've seen at Columbia, coming out of the School of Journalism, which have to do with a whole project about how Wikipedia pages are edited.

And so you can start to see what the history of the pages is and who's making the changes, what the change was, whether things were added or removed.

And I don't think it's going to be that long until you'll be able to essentially toggle the text document to see, okay, which of these bits were generated by, let's say, ChatGPT or Bard or something else, and which bits did the human go back in and actually craft themselves.

That's like a deepfake.

Intel's got some great technology around deepfakes for videos and images.

We're going to start seeing the same thing for text, because there's certain patterns.

I've already noticed ChatGPT has a certain way of talking.

That's right.

It's got a voice, and I think that's going to value the human voice even more. Because could you feed in all of Darren's writing and have ChatGPT write in the style of Darren? You actually could. But still, you know, you need that initial kind of training corpus to get there.

But I do think we're going to find that instead of devaluing the human as part of the process, I think what this is going to do is make it more clear which part the humans should be spending time on and which part, you know, can be kind of shortcut.

This is totally fascinating.

Obviously, Jeffrey, we need to have you come back on the show anytime you want.

Absolutely.

In fact, we're going to do what we talked about.

We're going to do a series.

Yeah, you're right.

We're going to do a series on generative AI. We're right at the cusp of this thing.

So, Jeffrey, thanks for coming on the show today.

You got it.

Thank you for listening to Embracing Digital Transformation today.

If you enjoyed our podcast, give it five stars on your favorite podcasting site or YouTube channel. You can find out more information about Embracing Digital Transformation at embracingdigital.org.

Until next time, go out and do something wonderful.