#135 Trustworthy and Ethical AI

on Wed Apr 26 2023 17:00:00 GMT-0700 (Pacific Daylight Time)

with Darren W Pulsipher and Gretchen Stewart

In this episode, Darren interviews Gretchen Stewart, Chief Data Scientist of Public Sector at Intel, where they discuss the trustworthiness and ethics of artificial intelligence.


#ai #ethics #trustworthiness #deepfake #aicontent #aidetection

Listen Here

In 2016, Microsoft released Tay, an AI chatbot designed to learn from its conversations with users on Twitter. However, things quickly went wrong when Tay began spewing racist and offensive comments, causing a public relations nightmare for Microsoft. Despite this, data scientist Gretchen Stewart believes that AI chatbots like Tay can still be useful tools, as long as they are developed by diverse teams that consider ethics and trust. Stewart argues that critical thinking is essential when using AI chatbots like ChatGPT, which are based on biased data and algorithms. AI developers must build diversity and ethics into the development process rather than bolting them on at the end.


AI has immense capabilities but still lacks the human senses and experiences that can affect decision-making. There are ethical considerations in AI development and the need for skepticism when dealing with new technologies. Change is inevitable as the world moves towards the fourth Industrial Revolution, and people must adapt to keep up. However, raising concerns and posing ethical questions is crucial to ensure that AI is used for the greater good.

There are potential dangers in relying on artificial intelligence (AI) as a source of information. AI can be helpful; however, it should not be blindly trusted, because it is only as good as the data it is fed, which can be flawed and outdated. Critical thinking and questioning are essential when evaluating AI-generated content, and teaching these skills should become part of the curriculum in schools and universities. In addition to questioning AI's veracity, diversity in AI responses should be considered when utilizing AI to make decisions.


As technology continues to advance, new ethical concerns arise. This is true of artificial intelligence as well as AI-generated content. Recently, an AI-generated collaboration between two artists received 15 million downloads in 24 hours, setting new records, yet the artists involved were never made aware of the collaboration. Should technologists be allowed to produce such technology without considering the ethical implications? Policies need to catch up with technological advancements, and designers need to create tools that can help ensure the trustworthiness and ethics of AI-generated content. Intel's FakeCatcher, which helps detect fake videos, is an example of such a tool and one step toward ensuring the ethical use of AI technology.

Combatting fake AI-generated content has become an industry all its own, and it requires transparency in AI development. This has started an arms race, with bad actors using AI for malicious purposes and white hats building technology to detect and expose AI-generated content. It is important to educate individuals, especially younger generations, about the ethics and potential risks associated with AI.

Podcast Transcript


Hello, this is Darren Pulsipher, chief solution architect of public sector at Intel.

And welcome to Embracing Digital Transformation, where we investigate effective change, leveraging people, process and technology.

On today's episode, Trustworthy and Ethical A.I. with special guest Gretchen Stewart, Chief Data Scientist of the Public Sector at Intel.

Gretchen, welcome to the show.

Thank you, Darren.

I am so excited to be back, and we are definitely going to have a conversation on something that's very timely today.

So I'm really looking forward to it.

Yeah, normally

I would say Gretchen, introduce yourself.

Everyone should know you, and if you don't, you've got to go back and listen to Gretchen's previous podcast.

Very well done.

She is our A.I. expert on the CTO office team, and we're glad Gretchen's with us on that team, because we do need someone, especially now.

ChatGPT has just taken the world by storm and caused so many ethical issues that we've got to deal with.

So Gretchen, please straighten us out.

Do I still have a job, or did it just take my job?

No, you know, I'm excited about ChatGPT. It offers us the ability to really have that combination of human and machine, and it's going to take away some of the things that we do, but they are the boring kinds of things that I hate to do, some administrative stuff. And even in some cases, I have to admit, I haven't been putting fingers to keyboard as often as I used to.

You and I joke about that periodically. Now I can go to ChatGPT and say, All right, I'd like Python code to do A, B, and C. And it's pretty good.

And so I don't have to think,

Oh gosh, do I remember how to do that?

It's been a while.

Or we go find it on Stack Overflow or find a book.

Yeah exactly.

Or did I already do that, and it's in my GitHub or something like that.

So yeah, it's one of those things that I think is going to be really exciting.

I honestly was at an event about a week ago at a museum, and people were talking about leveraging ChatGPT as a way to expand ideas around art. When you start to think about it, you know, there are people who have absolutely brilliant capabilities, but sometimes they might be stuck, and leveraging something like ChatGPT to say, I'm thinking about this medium and these are kind of my ideas, might spur on some even better and better ideas about designing and developing some really interesting art.

Yeah, I never thought about that.

I mean, just recently

I took my family for spring break.

My daughter's graduating from high school.

So last family trip, right, with the younger kids.

And we went to Italy.

And I thought it was very fascinating, and this is in the context of ChatGPT: it was banned in Italy because of privacy concerns.

And then as we were going through several different museums, art museums, it was fascinating to listen to the tour guides talking, because each tour guide had a different story for the same piece of art.

And I was like, Wait, what's the truth here?


It was funny.

I was like, Whoa, what's going on?

It was the Map Room in the Vatican Museum, beautiful maps all along the walls of Italy, and our tour guide said it took ten years to do this. And the tour guide next to us, as I was listening to her talking, said,

Oh, it took two years and 100 artists to do it.

And I'm like,

What's the truth? Right?

And us as tourists were like, Sure, my tour guide knows everything.

But then you step in and you say, Well, ChatGPT could really tell me, because it has consumed all that data.

And I think it'd be interesting.

Maybe it would say,

Well, it's controversial how much time they really took.

I don't know.

Yeah. So, think about it.

If you're in research, or, you know, if you're a lawyer and you are looking for information, you know, you normally go into Westlaw and find it.

Now all of this plus more is in ChatGPT.

So there could be a way for a lot of people to get better information faster, because you could never research through all the information that's in it.

So that brings up a good point.

But I think we also really need to remember that, you know,

AI has gone wrong in the past.

And so it's really critical for us to think about, you know, the algorithm challenges around perpetuating discrimination, or when Microsoft released Tay as the AI chatbot and how that crashed and burned.



So I think, you know, some say let's put this on hold, but, you know, truthfully, the cat's already out of the bag, so to speak.

But I think there's ways to use this.

And being a data scientist, to me it just means there'll be more people who are thinking about it from a very diverse and a trust and an ethics perspective.

And that's really important.

And this is going to force more people to think critically and have those kinds of conversations to ensure, to your earlier point, how accurate is this, and is this a good bit of information that needs to be connected with the expertise of the people that are all part of the team you're working on?

It reminds me, Gretchen, of the Internet in the nineties, the late nineties, right.

Because, yes, this was the same conversation we were having then.

The internet is full of all this information.

And I remember

I did some seminars on the Internet at some local universities, because I was an early adopter and they were asking me about it. And people said, Well, the internet, how do I know the information on the Internet is correct?


And I think we have to ask that same question today.

How do I know the information I'm getting out of ChatGPT is correct?

Exactly. Exactly.

And can you correlate and corroborate, and again, use critical thinking.

I mean, even at Intel we have a Responsible AI Council, and this is a group of people, and I'm lucky enough to sit on it.

It really has a global review and scope, and there's lots of data that comes to us, but we also have folks who have a standards lens or a legal lens or an H.R. lens, who are really looking at not only what are we doing internally, but how are we working with our external partners.

And I think what's most critical for people is to build these into the process of any kind of development, even if it's just using ChatGPT. That data is part of a process that you're working on.

So make sure that when you're thinking about this, you don't bolt on the idea around ethics and having a diverse team.

I think if there's one thing that I learned, you know, in school as a math major, it's that you either got the right answer or the wrong answer.

And then when I started to spend more time and look at those, you know, push-the-envelope kinds of math designs in linear algebra and finite math and things like that, it really became clear to me that you need a full group of very diverse people who are coming at it so that you end up with the best answer. And think about not bolting that on to kind of the end of what you're doing; really, it's a journey and there's not an endpoint. ChatGPT-3 or -4 or -28 will be ones that we'll be able to leverage and use.

But I don't think we're ever going to turn into cyborgs, or, you know, that someone's going to replace me physically.

Well, that was the same fear when the Internet was going wild.


And it does change economies.

Absolutely. Yes.

Changes. Yes.

I want to

I want to touch on this diversity aspect.

Should we have diverse A.I.s as well? Because we all know A.I.s are biased, period.

Yes, they are biased.

Yes. Right.

So should I have ChatGPT and Google's Bard, right, even if they were trained with the same data sets but with different biases kind of built in? Because there are biases, absolutely.

Well, think about it. Would it make sense to have two, you know, give me answers back?

Well, yes.

And I think that that's part of the reason why, when people are designing their models, lots of times they are looking at it and thinking, OK, I think linear regression would be a good thing for this.

But I also know that I should be thinking about leveraging maybe gradient boost or some other algorithm, and then weighting them differently to come up with better, more accurate, and potentially less biased results.
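A minimal sketch of the weighted-ensemble idea Gretchen describes here: combine a linear regression with a gradient-boosting model and blend their predictions with different weights, rather than trusting either model alone. The dataset, the weights, and the model choices below are illustrative assumptions, not anything specified in the episode.

```python
# Sketch: weight two different algorithms (linear regression and
# gradient boosting) so neither model's individual bias dominates.
from sklearn.datasets import make_regression
from sklearn.ensemble import GradientBoostingRegressor, VotingRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

# Synthetic data stands in for whatever problem you're really solving.
X, y = make_regression(n_samples=500, n_features=8, noise=10.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

ensemble = VotingRegressor(
    estimators=[
        ("linear", LinearRegression()),
        ("gboost", GradientBoostingRegressor(random_state=0)),
    ],
    weights=[0.4, 0.6],  # hypothetical weighting of the two models
)
ensemble.fit(X_train, y_train)
print(f"ensemble R^2 on held-out data: {ensemble.score(X_test, y_test):.3f}")
```

In practice the weights would be tuned against what you are really trying to do, and the held-out score is one place to check whether the blend actually helps.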

But the truth is, as you said, I mean.

ChatGPT is based on information from around the world that's fed in, whether it be social media or things that are coming from the Library of Congress or wherever.

All this information it's coming from is, by default, biased, because it's designed by biased people and robots.

Well,and it's also a filter, too, right? Right.

Because if you remember, the first ChatGPT, or it was even GPT-2, was filthy, right?

It was, right, because they just scoured everything on the Internet.

Well, there's a lot of real garbage.

Yeah, right.

So they had people, and I can't remember which country it was in, the Philippines or Nigeria.

They had large numbers of people filtering data.


They had criteria and said, go label this data as, you know, stuff we don't want.

So obviously, not even in the algorithms but in the data that we feed it, the data we decide to feed it presents some level of trustworthiness, right?

Whether good or bad, there is a level of trustworthiness there.

Yeah. Yeah.

And we're looking at it from all of these different senses: a visual sense, an auditory sense, a kinetic sense. We look at things from a whole bunch of different senses that the computer doesn't have, despite the fact that everyone says, oh, it's human, or it's as intelligent as a human is.

And it's not.

I mean, these are machines that have huge capability, and we are able to help design the systems to get to better answers.

But it still is a garbage-in, garbage-out kind of thing; it's not the same.

Oh, okay.

So I am also comparing, I'm connecting that with a different information loop that I might have had, or a different sense that I might have had because I went to Africa or whatever it might be; you just have different things that you bring into it.

And that's why I think it's not an either/or, it's a both/and, you know.

And I think that it's also really important for us to realize that we're not finished.

You know, this is the beginning of the conversation, and it's going to continue to go on.

And there will be more and more things that we will not have to do.

I mean, case in point: in that session I was telling you about, there was an artist, and his wife happens to be a Ph.D. in psychology.

The two of them were on the panel, and they talked about how their son, who I think is probably in the third or fourth grade, doesn't even know how to sign his own name.

And everyone says, how horrible.

And it's like he doesn't need to do that.

You know, you use Venmo, you do all of these things where it's digital signing, etc., so you just scribble whatever it is.

And he doesn't really know cursive.

That's, you know, does he really need to?

I mean, it makes you start to think about those assumptions that you've been brought up with, assumptions that, truthfully, might not be as relevant these days.

Do you really need to know how to write something in cursive?

So what you're telling me is, people that have a hard time with change can have a really hard time over the next couple of years, because this is fundamentally going to change a lot of things.

Oh, yeah, yeah, yeah, absolutely.

And as people have talked for years, you know, about the fourth Industrial Revolution, we're in the middle of it.

And I think there's a lot we can learn from history in terms of how to better move people through this.

But this is moving so quickly.

It's sort of like that funny T-shirt I saw one day.

You know: if you want to be on the porch, you've got to play with the big dogs.

It's sort of like, you've got to jump on and pay attention to this.

But at the same time, with a certain amount of skepticism, and the thought process of: am I working with other people to really think about this? And you have the obligation to raise ethical questions and concerns coming from that information.

So is that why the pause?

Is that why all the...

Well, it wasn't all the leaders. But no, I was going to say, this is where I'm going to show my feminist side.

But it was interesting that all the pause came from people of one sex.

And for the most part, one color.

I didn't know.

I didn't, I didn't even... that didn't even hit my radar.

Yeah, it was, you know, Elon Musk and Wozniak and others.

And then, granted, I hear what they were saying, but literally, not just Elon Musk, but two or three days after, it came out how much money he had just invested in a ChatGPT-like company.

Yeah, I did.

I thought it had to be suspect.

Yeah, I was thinking the same thing a little bit, Gretchen, but not with Elon, with the CEO of OpenAI.

Yes. When he said let's put a pause on things, now that I've released ChatGPT-4 and nothing should go beyond GPT-4, I'm like, you know what?

You sound very insincere, right?

And it might be: I'm afraid that some of my competitors are going to catch up.

That's what it sounded like to me.

There is a real concern though, right? Is there not? Otherwise, why would a thousand people sign it?

Yes. Yeah, I mean, there were a lot of people that signed it.

And I think part of it is just that they're wanting us not to think it through.

I think that people are smarter than that, and that they really should not assume that that is a 100% answer.

You know, that it's not completely accurate. And the truth is, we owe it to society to really think about: what are those ethical questions?

Are we respecting human rights based on this information?

Have we really had the right human team oversight with all of that data?

Are we able to explain it?

And if you can't explain it, then you have to be suspect, too. Like, where does all this data really come from?

From ChatGPT?

So, what you're saying...

Yeah, I get it.

We need to teach the world that.

Hey, ChatGPT is an aggregator of data and a distributor of ideas, right?

Yeah, but it is fed by data that is two years old.

First off. Yes. Yes.

And because of what I put in there, it was...

And flawed. But let's talk about our upcoming generation, because I've got three teenagers at home right now: 16, 17, 18.

You know, if you were to ask them, is ChatGPT accurate, they would say 100% true.

And frankly, I think that's the sentiment of most people, not just of the younger generation, but of a lot of people.

It's an A.I., It's intelligent.

Right. Right.

So I think we need to get out the word just like we did with the Internet, saying, hey, not everything you read on the Internet is true.

We need to say the same thing about A.I..

Not everything you hear from an AI is true, because the basis of its data is the internet.

Yes, and the basis of the data is flawed and biased in its own right.

And also, to your point, a couple of years old.

I've learned a lot and changed my opinions on a lot of things in the last two years.

So, you know what I mean?

So there's so many things that are out there that can do that.

I think what this really forces, which is something that I've always tried to figure out how I can do better, is to think critically.

I really think that what this is going to force is that critical thinking becomes part of the curriculum for grade school and for college, medical school.

And, you know, we work with some of the brightest people on the planet at Intel.

I mean, it's scary how smart they are.

But at the same time,we all have flat sides.

We all, you know, have our own bias and come into things from a different perspective.

And I have found, as I'm sure you have, too, that when we pull people together who come at it from that different perspective, we end up with something much better, much, much better decisions.

And that criticality of asking those questions... like, I know I was annoying when I was a kid because I would always ask why. But you know what I mean?

I could see you as that kid.

I told you, like, I would raise my hand in class all the time.

You would be like, Shut her up.

But you know what I mean.

So I'm like, Why?

Why are we thinking that that's the best way to do this? Or, based on the work that you've done, have you thought about: is there a different way to do this, or do we have all the data?

Are there other places that we need to go to?

So, ChatGPT? Absolutely.

Great place to get some info, but we also should be looking at other places. Again, like I was talking about: creating those models where you use several different algorithms and then weight them based on what you're really trying to do.

But it all starts from what's the problem?

What are you trying to solve?

Are you asking the right questions?

And again, are you really thinkingand coming at this critically?

And I think it also brings us to the point where it's not... you know, again, when I was in my math class, if I got the answer, good. If I didn't, all right, you didn't get it right.

But in our world today,nobody works by themselves.

They can't.

Yeah, with all of the information and all that we need to do, it has to be a blended, diverse team, you know: different ages, different sizes, different sexualities, you name it, so that you just have people...

Get different perspectives.

That's... yeah, it's critical.

And I think having those kinds of discussions and pulling in this different data will allow all of us to think more critically, because I think we have gotten lazy. Like you said, oh, ChatGPT is 100% right.

No, it's not.

I mean, no, there's a lot of things.

Plenty of errors.

Yeah, exactly.

Like, I asked, who wrote this book? The articulate case deployment.

I wrote that book, and it didn't have me in there.

It had some other person in there. Like, where did it get that from?

And so it was fascinating.

Oh, it was fundamentally wrong on a basic fact, which I thought was interesting.

I want to shift gears from trustworthiness.

Sure. Into ethics.

And I'm going to pose this because I heard it on the news this morning and I was like, wow.

An AI-generated song, which was a collaboration between Drake and, I think it was The Weeknd, the other artist, was released on Spotify, Amazon Music and all that, got 15 million downloads, and the artists were not involved in the collaboration at all.

And it was taken down immediately, but it was the most popular song for the month of April, you know, in one day.

And they were talking on the radio this morning, when I was listening to it, about the ethics behind it, and were people just downloading it because it was AI-generated.

So my question to you is: how do we control the ethics around A.I.-generated content?

And do we attribute that?

I mean, what are the other ethical issues we have around A.I.-generated content?


I think you bring up a really good point.

And I think this is where, you know, for folks like you and me who work in the government space, this is really where the technology is way far advanced from the policy.

And we are going to have to think about some of those policy questions.

But I think it goes backto thinking about it critically.

I mean, I'm assuming somebody said, hey, take a Drake song that sounds like X, or this artist's song that sounds like Y, and mash it together and come up with, you know, okay.

But you're right, the responsibility is that you say, Hey, that's what I did.

And then if there are certain things that you need to, you know, copyright and all that, again, the policy isn't set up for that yet.

But I think that the capabilities are there.

And so I think that when you're building it, you need to describe what that is.

And if you can't describe it, I mean, to me that just feels wrong; you shouldn't have it out there.

But again, you know, I'm one person.

Yeah, but should we even allow technologists to produce technology like this?

That's where the big question is, right?

This is another example.

And we're starting to see more of this.

I just read an article, and it's actually in our weekly podcast on Embracing Digital this week, which is a news podcast.

A.I. voice cloning is an issue, and bad people are using it to virtually kidnap children.

It's a huge problem, I guess.

And the FBI is all over this. They've captured a little bit of your child's voice.

They then call you on the phone, and your kid is talking to you: Mom, I'm in danger.

Someone has kidnapped me.

And then the kidnapper gets on the phone.

Yeah. Yeah.

So, yeah, I think... should we even allow A.I. to go in this direction? Because, as you said in the beginning, the genie is out of the bottle.

So how do we pull it back in? I don't know.

Well, you know, I think a great example of us thinking about that is our FakeCatcher product that we have.

And again, maybe we can create that same kind of thing that people have, and they can just add it into their phone, and it becomes an app or a model card, so to speak.

But in the case of FakeCatcher, the idea is that you and I are human.

We are not A.I. generated.

And, you know, we have different color in our faces and different ways that our hearts are beating that are different from a fake video.

And so what we have done is we have a tool that is over 90, I think 93%, accurate, where you run a video through it and it will show you, hey, it's a fake.

And I think that part of the technology now needs to create those things, like truth.

We need to create modules and tools that can help A.I. in terms of: is it ethical or not? Are there questions that should be asked before something goes out?

And as a designer of technology, you know, we need to be thinking through some of that, and then have almost that chain of custody or that detail that would say: here's what we went through, here's how we did this, here's the data set that we used. And that almost has to be something that, every time you release something like a next version of the chatbot, is attached to it, with all of this.

So we know what actually went into it.

Exactly, exactly.

So, Gretchen, this is really interesting, because it sounds to me like this is an A.I. arms race in some respects, right?

Because you've got counter-A.I., or fake detection, to detect A.I.-generated content.

But like you said, the genie is out of the bottle.

So there are bad actors out there that are going to use A.I. for bad things, just like they did with the Internet. Yep.

And just like they've done with crypto, and now they're going to do with A.I.

So this is an education and technology combat, right?

Going back and forth.

But if we compare this a little bit to the nuclear arms race, it's a little bit different, because there's some fundamental knowledge you have to have, and some physical material you have to have, to build a nuclear weapon.

Right? Right.

But to build an A.I. that can do some crazy things?

This is all out in the wild.

It is.

And I think we also need to think about how even things like Facebook and others have created people very dug in and not having real critical conversations on things.

And that's, to me, the thing I worry about the most: that people will really believe this and therefore make some decisions based on it.

And the decisions could be, you know, detrimental, potentially.

So you're talking, politicians could be making decisions, right, in policy and laws and things like that. Right. Right.

Yeah. Based off of it, too.

And I think you and I and others need to be not only educating the current politicians, but working hard to get people who are a bit younger, especially in the US, because, you know,

I remember, I think it was the Alaska Senator Ted Stevens, talking about the Internet, who said it was just a bunch of tubes.

Oh, that was also Al Gore, too.

But I was like, yeah.

So, I mean, some of it is, you know, we have to deal with it from all fronts.

And I think it's impossible to put a hold on that.

And I also really believe most of the people who want to put holds on it, like you said, want to do it so that they can make more money.

You know, it came across that way, frankly.

But I agree with what they're saying in general, which is, we've got to figure something out here, because we have to educate people.

There's a lot that has to be done in this space, and we have to understand the ethics around using ChatGPT.

I can't imagine. Teachers must be pulling their hair out in colleges and in high schools, because who needs to write a report?

You know, I mean, my kids have been playing around with it, and they said:

Help me write a script for a new play where the antagonist and the protagonist are these characters based off of superheroes in the early 1900s.

And bam, it's like, holy cow.

So is it, well, whoever understands technology the best wins?

It has been, you know. Maybe.

But you know, I'm going to go back to one of my favorite women in technology, which was Admiral Grace Hopper.

And one of her quotes, which I really like, is: no computer is ever going to ask a new, reasonable question. It takes trained people to do that.

And I think that we all need to think about that when we're using ChatGPT, because some of it is: are we asking the questions in the right way so that it really is explainable, you know? And are we, with some of the things that we're using, potentially creating new security risks?

You know, you and I are talking about a number of things that, to me, really are security risks.

And how do we... I won't say go backwards, but how do we start to look at, you know, those kinds of questions that a teacher needs to ask? In terms of: okay, I assume you used ChatGPT.

How much of it did you use? What were the questions that you asked it?

How did you formulate your outline?

You know, as a teacher, ask it in a different way; make the assumption that they're probably using it.

But how did you use it?

What are the ways that you came to the paper that you have?

You know what I mean? Yeah.

Yeah, I, I get it.

It's an interesting dilemma.

And the truth is, we're at the beginning of all of this.

And as I said, I wish I had all the right answers, but, you know, this is a process.

It's a journey.

And really, there is no endpoint.

There will be more things attached to ChatGPT.

There will be more autonomous manufacturing.

There will be a lot more autopilot capability.

I mean, there's just so much more that's going to happen.

And I think that anybody who thinks they can stop it is crazy.

Yeah. Yeah.

I mean, or they think they're more powerful than they really are.

Yeah. Yeah. Well, okay.

Well, yeah, there you go.

But that's true, too.

So, hey, Gretchen, it's been great talking to you.

Obviously, we're going to have to talk to you again in six months for sure, because the landscape is changing so quickly.

Oh, perhaps even shorter than that.

So. Absolutely.

And thank you.

I appreciate it.

Again, I don't have all the answers, but I'm definitely willing to, you know, ask more people and try to improve my critical thinking, because to me that's really the fun part.

Yeah, no, that is the fun part.

I totally agree with you.

Thanks again,

Gretchen, for coming on. You're welcome.

All right.

Thank you.

Thank you for listening to Embracing Digital Transformation today.

If you enjoyed our podcast, give it five stars on your favorite podcasting site or YouTube channel. You can find out more information about Embracing Digital Transformation at embracingdigital.org.

Until next time, go out and do something wonderful.