#159 GenAI Policies


September 14, 2023

with Jeremy Harris and Darren W Pulsipher

In this episode, host Darren interviews Jeremy Harris, and the two delve into the importance of establishing policies and guidelines for successful digital transformation. With the increasing prevalence of digital technologies across industries, organizations need to adapt and embrace this transformation to stay competitive and meet evolving customer expectations.


Keywords

#policies #ai #generativeai #guidelines #jeremyharris #darrenpulsipher #roadmap #challenges #efficiencies #dataprotection #privacy #compliance #ethicalconsiderations #feedback #engagement #ratings #reviews #customersatisfaction #customerengagement #embracingdigital #edt159

Listen Here


The Need for Clear Policies and Guidelines

Jeremy and Darren stress the significance of having a clear policy and a well-defined roadmap for digital transformation. Rushing into digitalization without proper planning can lead to challenges and inefficiencies. By establishing policies and guidelines, organizations can outline their objectives, set a strategic direction, and ensure that everyone is on the same page.

They emphasize that digital transformation is more than just adopting new technologies - it requires a shift in organizational culture and mindset. Policies can help facilitate this change by setting expectations for employees, defining digital best practices, and providing a framework for decision-making in the digital realm.

Digital transformation brings forth a complex set of challenges, such as data security, privacy, and compliance. Organizations need to address these challenges by incorporating them into their policies and guidelines. This includes implementing data protection measures, conducting regular security audits, and ensuring compliance with relevant regulations.

Policies should also address the ethical considerations that come with digital transformation. The hosts emphasize the importance of organizations being responsible stewards of data and ensuring that the use of digital technologies aligns with ethical standards. Clear guidelines can help employees understand their responsibilities and promote responsible digital practices across the organization.

The Role of Feedback and Engagement

The hosts highlight the importance of feedback and engagement in the digital world. Adopting a policy that encourages and values feedback can help organizations continuously improve and adapt to changing circumstances. By welcoming suggestions and input from employees and customers, organizations can refine their digital strategies and ensure that they are meeting the needs of all stakeholders.

They also mention the significance of ratings and reviews in the digital era. Feedback through ratings and reviews not only provides valuable insights to organizations but also serves as a measure of customer satisfaction and engagement. Policies can outline how organizations collect and respond to feedback and establish guidelines for capturing customer sentiment in the digital space.

Conclusion

Digital transformation is a journey that requires careful planning, clear policies, and ongoing adjustments. By establishing policies and guidelines, organizations can navigate the complexities of digitization, address challenges, and ensure responsible and effective use of digital technologies. Embracing digital transformation is not just about adopting new tools, but also about creating a digital culture that fosters innovation and meets the evolving needs of customers and stakeholders.

Podcast Transcript


Hello, this is Darren Pulsipher, chief solutions architect of public sector at Intel. And welcome to Embracing Digital Transformation, where we investigate effective change, leveraging people, process, and technology.

On today's episode, creating a generative AI policy, with returning guest Jeremy Harris.

Hi, Jeremy.

Welcome back to the show.

Thanks. Thanks for having me.

It's been a couple of years since you've been on, right?

Yeah, it has been a little while.

Yeah.

Last time you were sitting at our dining room table.

That's right.

Yeah.

This time, even though you could have come down.

Jeremy is my neighbor, by the way.

We decided to do it this way.

We'll see if it works.

Well, I figured this would be more along the lines of all of the other people who you get to interview.

So I feel like I'm, like, actually, like, qualified to be here on your show instead of just hanging out at your house.

Yeah, that's true.

Last time we had to, you know, put our snacks away and the card game we were playing right before we did it, so.

Right. Yeah, that makes it a little easier.

A little more professional, I guess, if you want to call it that.

Just so people know, Jeremy, you're a lawyer, specifically practicing privacy law in the health care industry.

And this crazy thing happened about, what, nine months ago?

No, longer than that. Now we're in October, almost ten months ago.

Generative AI was kind of born.

Yeah.

I will say this because I am a lawyer and because I work for the company I work for: I have to give you that standard disclaimer that all of the opinions are mine and nothing reflects on my company, etc.

But a lot of people took note of generative AI.

A lot of companies are cashing in, especially in the health care industry.

I mean, it's just blossomed.

I mean, Google has a whole AI large language model actually built for health care.

I mean, it's a big industry and it's a big issue.

So that's really interesting, because it's not been around long, but it's moving super fast.

Yeah.

And I think that's one of the reasons we've been chatting over the last months about this. I think, you know, when I look at gen AI, I think it's a little different, and I know a lot of your audience will just sit here and go, oh, the lawyer's going to explain this to us. But from the layperson standpoint, well, I guess from my, not really a layperson anymore, from the legal side, when I look at it, I think it's a little bit different than the other technology we've used before.

I mean, the other technologies we use... I mean, you have a pen or pencil, and then you're going to go to a typewriter, which is a lot faster. You have a typewriter, then you get to go to a computer, which is a little faster. So each of these tools allowed for some efficiency, right, in the use, in the method.

But the generation, right, the creation process was really fundamentally the same behind it all. It just made it a little bit more efficient to get it out.

So I think what we're seeing here, and why I think generative AI, and I might slip up and just call it AI at some point, is a little fundamentally different, is because it takes the patterns and the processes and puts them together.

So I see it as actually not really a tool that we use to make things more efficient. I mean, it can be, but I think it's really how we interact with a set of systems.

So it's a little bit different, I think, in how we utilize the tool, rather than just, oh, this is going to make this monotonous project a little bit easier to manage.

It actually can create and kind of start developing some things.

So I think that's why it creates all sorts of anxiety for a lot of people, including me.

Well, I bet, because you're right, it's in the name: generative.

It needs to be integrated.

Right, Right.

I mean, just the basics.

And I know what I see is pretty fundamental, but I think that's the point where we start, right there, you know. We keep talking about, you know, how all of your guests, and I've watched a lot of your episodes, I've seen people come in and talk about what AI can do and what it can't, and all of these cool things.

And so I've followed it over the past little while, and while I'm not an expert on the technology side of it, what I do look at is what the C-suite is looking at, what the operators are looking at, how they are utilizing it, or how they are thinking about it.

Even.

And I think that's our fundamental starting point: we really need to get a literacy across the entity, right, whatever entity you're in.

So if it's Intel, if it's, you know, my company, Sutter Health, if it's whatever, I mean, you really do have to start getting it, and people have to be somewhat conversant with what generative AI is and what it's not.

And I think until you get to that point... I mean, with using email, it's more intuitive to know what it is, because again, it's just a simple tool.

But I think generative AI creates a little bit of a barrier without common understanding.

So that's where I'd say, look, we've got to start there.

Well, all right.

And this is a big problem, though, because people are already using chatbots or Bing search.

I mean, it's readily available.

Yeah.

And, you know, I talked to Laura Torres Nooyi about this. In her classes she's teaching how to use it, and how to use it effectively, and all this because everyone's already using it.

As an English professor, she told us in a previous episode that she can't put the genie back in the bottle.

It's out. Right.

So you need some kind of policy, quick.

Right.

Otherwise, what do you do?

Yeah, no, I agree.

And I think what I see is there are a couple of policies that I've seen, and I'll use a nameless child, for example, one of my own, over the summer.

They may or may not have had to write a paper on a book. So they read the book, and during the process of creating the report, a consultant was used. I don't know; I think it was ChatGPT.

What happened in the case of this particular school was they said, yeah, we've detected some anomalies in how it's written, and we think you might have been using generative AI. And, you know, my child, being the stressed case that he can be, you know, he fessed up: yeah, I did, because I knew I was in a time crunch. And, you know, he took responsibility for it. But it's an interesting policy, right? A no-tolerance policy.

We don't use it.

We don't agree with it.

We don't accept it.

That's one way to approach it.

And a lot of schools are approaching it that way.

So I use that as kind of the one extreme. I don't agree, by the way. I don't agree with that process.

And I mean, obviously it's my opinion, but even in a school setting, I agree with what Laura would say.

You've got to figure out a way to teach it. You've got to figure out a way to embrace it, just like we do any innovation. You know, when companies embraced email, they were just more efficient. When they embraced the Internet, they were more efficient. Those who caught on early are the ones who had a lot of success.

So what I see here from a policy perspective overall is, you can ignore it. Or not just ignore it; the ignore part is a different subject, but you can actually say, no, we don't use it. We're not going to use it.

But I think you're right.

You can't put that genie back in the bottle.

So you're going to have a problem from the very beginning, because it already became a consumer issue.

Right?

It's already out there, and it's hard to say, hey, you can't ever use it or see it or look at it. It's just there.

Yeah.

So what are the concerns, right, that organizations have with using it?

Because that's a big question.

I think in the medical field, why not use it? And, you know, what are the ramifications of using it that organizations are worried about? What are those?

You know, I can say right now that over all of the industries you're going to have overlapping concerns.

Right.

Not just specific to health care. I mean, health care has its own little nuances as well.

But, so, you see the laws in health care vary, right. I mean, and how about autonomous driving, right? Auto manufacturers will have a similar, hey, what privacy things do we have here?

How much data do we want to collect?

I mean, there's all sorts of interesting applications across the industry.

But I think in general, if you're looking at some generalities, what I've been seeing, and what the colleagues I talk to have been worried about, ranges anywhere from the privacy regulations, and there's a proliferation of state-specific privacy laws.

What you can and can't do. They're usually very consumer focused. The consumer can request that the data be deleted from wherever it is, you know, whoever you are.

Yeah, you can. Yeah.

And so those are some concerns, right?

How can I comply with a regulation that says I have to delete data when I really can't delete the data?

It's in the model. Right. And I've already used it to train.

And I think there are some privacy concerns as well with that data, whether it's health information, you know, defined by HIPAA as PHI, or electronic health information, or whatever acronym you want to use today, or personally identifiable information, which is normally how the state laws identify it, either personal information or personally identifiable information.

If it's something like that, how are you going to input it and not expect that input to be further disclosed or used?

So there are some privacy concerns just in the overall approach to how it's done or how it's managed.

So I think that gives rise to... that's why you have to be very clear, with everybody on board, at least to a degree, I would say.

I mean, they don't have to be 100% up to speed on it, but they've got to be at like 30%.

Right?

I mean, they've got to have a fundamental idea of what it is, you know. And when they say, you know, an LL.M. in the law, that's like a master's of law, that's another year's degree. But when I say LLM to any, you know, AI-centric person, that becomes a whole different discussion. And that's like, oh, well, which language model are you using, and how does it learn? How has it been trained? What biases are implied because of its training?

So there's another concern that we have: how does the AI model even work in the first place?

What's the input?

What's the algorithm?

What's it trained on? What data does it even read?

Right, right.

Those things, right. Because all the data that gets in is, you know, being used to create and predict future outputs. I mean, that's basically it.

There was a striking article that I reported on in my Embracing Digital This Week newscast. It was that generative AI in health care could cause underrepresented groups, underrepresented individuals, to be misdiagnosed, because their data hasn't been in the training models.

Right.

Which, you know, that's so interesting to me. That bias is real, right?

So the health equity type of movements that are going on right now are to assure that you have different races, ethnicities, socioeconomic backgrounds, all of those considerations, all those factors input into the health care model, to get to some sort of data set that represents how health care is being delivered and how effective that delivery is to outcomes.
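To make the representation point above concrete, here is a toy sketch, in Python, of the kind of demographic-coverage audit a team might run on training data before it feeds a clinical model. The column names and records are hypothetical, purely for illustration.

```python
# A toy audit of demographic coverage in a hypothetical training set.
# Groups whose share here falls far below their real-world population
# share are the ones at risk of being misdiagnosed by the trained model.
import pandas as pd

records = pd.DataFrame({
    "patient_id": range(8),
    "ethnicity": ["White", "White", "White", "White",
                  "White", "South Asian", "White", "Hispanic"],
})

# Share of each group in the training data.
coverage = records["ethnicity"].value_counts(normalize=True)
print(coverage)
```

A check like this only surfaces the gap; closing it means deliberately collecting and including the underrepresented data, which is the harder, human part of the work the hosts describe.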

It's tricky, because you also have autonomy, right? I mean, take every patient. Let's say, and this is a random category, let's take me, I'll use me: I'm a white, fairly affluent male, you know, in California.

Okay, well, what does that mean for me?

Well, my data is much, much different than my neighbor's, who happens to be from India originally.

And their data is input into the health system a little bit differently and not tracked the same. It's not ever been collected in that way.

So if I go to the doctor, I'm getting the benefit of a model that perhaps is trained on me.

But my neighbor, with different proclivities or a different background, is not going to have that.

The same, because they're getting a model based on me, and they might be...

Yeah, yeah, they might be from Bombay or Bangladesh or wherever it is. And they have different markers, they have different inputs.

So it's really interesting to see. From a bias standpoint, that's definitely true.

And I know Google came out with their Med-PaLM, I believe is what they call their AI, their LLM for health care. And I know that Mayo Clinic, for instance, is using that in terms of how they're looking at creating notes, creating diagnoses, and looking at medication interactions, all of these things.

And that's being built into the system, which I embrace.

I mean, I'm with you.

I like the title of your series: embracing it. You have to. There's no way that the health care system is going to be isolated from AI or generative AI.

Yeah, but with all these concerns, though, I could see where a health care organization, or not even just health care, we could look at the military or government, they're like going, there's too much risk involved here, too much risk and liability and all those things. So why would you even go there?

Well, I think that's a crucial point of beginning to go down the route of, do we or don't we use it. Everyone, once they're up to speed and have some sort of baseline and know what it is, then they've got to say, okay, how is it effective or helpful to our product or our business line?

Right.

You have to go through it, because, let's say in health care, you can actually go into subsets, right? So you have the actual provision of health care, you know, that doctor-patient-nurse interaction where you're actually in person with somebody, or like this on a video chat or whatever it is, and you have that communication. And based on the knowledge and training of a health care professional, you're getting a diagnosis or treatment plan, referrals, etc.

Well, how about claims, right?

You get billed for those visits.

How about claims, and which ones are approved by insurance and which ones aren't?

I mean, there are a lot of other administrative functions where the health care industry could use it without having a lot of concerns about the bias that goes into it.

There really are. I can tell you right now, I mean, we haven't even started to talk about the risks I could list out. You have regulatory requirements on either the privacy or information security side, cybersecurity issues with who's actually hosting this.

All right.

And where does the data come from? Where is it? Where's it going? Who's using it? Who has access to it?

You have a lot of contracts.

What happens if you actually don't want to have a vendor of yours, you know, using AI without at least telling you or having written approval? You have to throw that in your contract.

Yeah. Yeah.

You have to start changing a lot of how you do business, just on a fundamental level; your interactions with your contractors become different.

You have risk with liability, like you were mentioning: if the AI generates something, is the AI liable, or are we as the company liable?

I know the Fed Circuit... no, that's the Federal Circuit, or the D.C. Circuit Court. So a U.S. federal circuit court has now said that an AI, whatever generative AI it is, whoever owns it or runs it, cannot be deemed to be an author or an inventor.

Yeah, I saw that.

In fact, I reported on that as well on my weekly newscast.

So what does that mean for you? Right.

What does that mean for anybody in any industry? You have to start looking at all of your processes, going down the list and saying, hey, let's look at having a policy about AI.

And we've talked, and I know that your entity has had several different policies. My entity is creating policy and trying to figure out what to do with it. I've talked to others who have something very generic. I have others who have decided to ignore it and not have a policy at all.

And they've told me that they're like, we just don't do it. I'm like, well, you are doing it.

You know, you do it whether you have a policy or not; it's being used. I guarantee you, do a little search on your own networks. I guarantee your employees have done it during work, with work in mind.

Right.

So how do you deal with that as a risk? And how do you recognize it?

Well, one, you have to recognize the risks that are out there, right? So all of these different things: you have, you know, can we use it?

Is it beneficial?

Which business lines are we going to use it in more versus less?

How do we integrate it into the systems that we have and build it with privacy and security in mind? Right. Using those privacy-by-design or security-by-design principles, and including this as one of those systems that you can utilize.

So you've got...

One trend I'm starting to see: there are the really super large LLMs, like ChatGPT and, you know, Bard and all those. But I'm also seeing the emergence of what I call community gen AI and private gen AI, where I can take models that were generalized and trained and then train them on my own data. And that's not getting loose; that's not getting outside of my own walls. Does that help alleviate a lot of the risk around privacy if I run one myself?

Yeah, it does.

I mean, LLMs come with risk no matter what, whether they're private or not.

I mean, they may perpetuate that bias.

They may even spread misinformation. Right.

I mean, that level of creativity, or I think they call it hallucination.

Yeah, right. Yeah.

That hallucination, that fog, whatever you want to call it. I've heard a couple of different terms. I mean, that level of creativity, I'll just kind of identify it that way, right?

With the generative part of this, you have to be very careful, because it could actually just create something that's completely false.

That is true.

And it's not malicious.

It just is what it is. It's part of the model, right, that you're generating something.

So I think you have control over that, right? Once you have a private, especially in that private outlet, a localized or industry-specific or company-specific or product-specific type of LLM, right, depending on how broad or how narrow you tailor it, I think you do counter a lot of the concerns with the bias, with misinformation, with the privacy or security breach potentiality.

I actually heard one, I don't know who it was, one presenter talk about the potential harm to the environment, even, because of the processing power that it requires.

They do consume a lot. Yeah.

I mean, maybe that's a consideration that some companies will have more than others.

You know, it just depends on the culture and where you are.

So I think having that tailored data for appropriate outputs is completely a strategy that I would embrace wholeheartedly.
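For readers who want to picture what this private gen AI pattern can look like in practice, here is a minimal sketch, assuming Python, the Hugging Face transformers library, and an open-weights model small enough to run locally. The model name is illustrative; a real deployment would pick a stronger open model and likely fine-tune it on in-house data.

```python
# A minimal sketch of "private gen AI": an open-weights model downloaded
# once and then run entirely on local hardware, so prompts containing
# internal data never leave the organization's own infrastructure.
from transformers import pipeline

# Illustrative model choice; any open-weights model your hardware
# supports could stand in here.
generator = pipeline("text-generation", model="gpt2")

# Nothing here is sent to a third-party API, which narrows the
# disclosure risk discussed above -- though bias and hallucination
# risks remain and still need policy guardrails.
prompt = "Draft an opening paragraph for our acceptable-use policy on generative AI:"
result = generator(prompt, max_new_tokens=80, do_sample=True)
print(result[0]["generated_text"])
```

Keeping the model inside your own walls addresses the data-leaving-the-building problem; it does not, by itself, address the bias and hallucination concerns raised earlier in the conversation.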

And that's what, like, the Google PaLM, right, the PaLM, or Med-PaLM, is. To me, that's a community gen AI.

It's a community, right? It's a community, and it's a broad community, right?

It's anyone who wants to jump in on an electronic-health-record-like system, and they want to create. And what it does is it pairs with your existing system, runs data through there, and it creates patterns and identifies things.

Hey, what are the prescribing patterns? Well, you could use it for that, which would be great. You could use it to identify, hey, is there a diversion problem with this medication, with the opioid crisis? How are we doing with all of these prescriptions? Are we prescribing more or less? Are some physicians prescribing over the norm? I mean, you can actually, you know, do a lot of that today, but it can automate that. Yeah.

So think about the benefits of a community, especially in health care: a health care community gen AI where all the doctors in the nation have all the records to everything. Could you imagine what that would do? That would improve health care across the nation.

Well, and right now, I mean, you have the Office of the National Coordinator for Health Information Technology, right, ONC. You have the Department of Health and Human Services. You have a lot of entities on the federal side who are actually creating information sharing.

I mean, they call it information blocking; well, it's really anti-information-blocking in these regulations. Breaking down the barriers to information sharing and its openness.

You have California coming out with a law, a year or two years ago now, and it's coming into effect in another year or two, where you're going to have a direct exchange of health care information. So almost every provider, with some few exceptions, but certainly every hospital and every big provider practice, is going to have to share information through this process.

Well, how are you going to generate a report or anything from that? You're going to have to automate something.

So already, with the platforms being envisioned, and exactly what you're saying, you can actually utilize AI in a tailored way, right? You can actually have a thoughtful process where you decide what information is put in and how you're going to train it, and aim solely for that. Right, aim for that. So you can weed out a lot of the unintelligible or dangerous type of output that you get.

Well, if you create something, again, there are limits, right? Obviously you can't be ultra-restrictive. I mean, you lose some of the generative part of it as you lock it down more. But I do think you can take some, quote, bad outputs or bad data or bad algorithms and get to a result where you realize, oh, there's a lot of benefit here that we can actually control, and it's going to be fantastic. It's going to be way more efficient in how you share data across the board.

There's still the need for a doctor. That's one thing I want to get across to people: these augment and help professionals do their jobs more effectively, and they give them a way of sharing their knowledge with other people in their same profession in these community gen AIs, which I think can't be understated, because we've been doing this sort of thing in the software development world for decades. We share code all the time, and now with AI it's going to be easier for us to share code. Doctors, though, haven't really had a great platform to do that in the past, and with a gen AI in the middle helping coordinate and collaborate on all that data, I think ultimately you're going to get better care and push better care all the way out to the edge, where a country doctor can now have all of the help of doctors at the top medical facilities in the world.

Which, yeah, is incredible when you think about it.

Yeah, your integration of digital medicine is going to expand. And I think, you know, for example, my CEO will tell anyone who listens that, yeah, that's where we're going, because that's where the future actually is: in digital medicine.

The problems that we're going to faceare the regulatory frameworks.

And the legal side of it is always far behind the actual technology side.

And this is not new.

This has been true since the beginning of time with any type of technology: the law is very slow to come up with a solution or a regulation for it.

And that's where I think each company really does need to act. Even if you don't think you're going to use it a lot, you should actually go and clearly define the purpose and scope of your AI or generative AI policy, just so you have a starting point.

I would recommend, and again, I've recommended it to my own client, I recommend it to anybody: you've got to go through and define, okay, how are we going to use it?

You've got to be willing to realize that this is so fast-moving that the policy may change every month.

It's not one of those one and done.

It's probably changed a couple of times just since we've been sitting here.

And as the data sets change, as the ethics around AI evolve, you know, what can you use it for? What accountability is there? What privacy guidelines or guardrails do you have? How about attributing AI when you do something that's using AI?

What's the transparency like? Do you have to identify, hey, this report was helped by AI? Maybe. I mean, I don't know, but I think regulators are looking at that, though.

They are very cautious, and they want to make sure of that even if you use the AI. A couple of the law review articles that I have been reading... actually, I'll give you one example. It was, you know, just something on LinkedIn.

You know, you just start looking around, and one of them had a headline that said AI policy.

So I'm looking at this going, oh, this is interesting.

I've been reading up on this.

And they actually ran a prompt through two different AI models, two different language models, and said, hey, write a generative AI use policy for our law firm.

And they ran it, and then they posted both policies on there. And they clearly said, look, both of these policies were generated solely by AI, actually, in that case.

But then I've seen some others who have an article, and it said AI was consulted or used in the generation of this article. But, to get back to your point.

You always have that person, at least now.

And I don't know that it's ever really going to change.

You're going to have somebody to review and look at that, right?

You're going to have somebody adopt that generative AI product and its ideas as they go through it and say, hey, this is how it applies to us as an entity, as a company, or this is what we want out there, this is the diagnosis and we agree with the data that went behind it.

So, yes, they're going to adopt and kind of sign off on that type of thing, which I think is the safest way to approach AI at this point.

Well, and some of my other guests have said the same thing.

Right.

You can't fully trust a generative AI, right? Because of that creativity aspect. And I've even played around a lot with generative AI.

And I said, well, where did you get your facts from?

Give me some quotes. Right?

Some references.

It made them up.

They didn't exist. Right?

So we are going to have to use it as a tool, not as something replacing me, but as a tool whose output I have to validate and check to make sure that it's right before I use anything it's generating.

Yeah, no, clearly. And I think that comes a lot with the culture that you engender in your corporate environment, right, in your company, or whether it's a school: if you say, no, we can't use it, okay, everyone knows the rules. I mean, that's a pretty simple rule, if that's the way they want to go.

I mean, I understand it. I'm not going to fight against it.

I get it.

But I do think you can harness that technology in a way where, okay, let's teach the ethics behind it. Let's teach how we utilize it. Okay, now you've generated something; you still are the author.

You still have to go in and say, no, no, I actually don't like this. This isn't my voice, right? You have to actually start thinking about it in a critical way.

And I think you can actually get a better outcome in some instances, like with the school example.

But there's also a whole piece, I think, that leads into the accountability, and kind of that responsibility model, where even in a corporate environment you're going to have to ask: how are we going to be accountable? Whose role is it in the company to be responsible for the use, or the parameters around AI use?

Right.

If you want to come up with an acceptable use policy, if you want to call it that in your company, that sounds great to me. Because if you're looking at intellectual property issues: AI output on its own can't be copyrighted. But what happens if you create something using AI and you modify it to a point where you can adopt it to be yours? Then it's probably fine, right? So you have... there are a lot of things to think about.

Yeah, yeah.

And there are all sorts of unquestioned or untested legal theories behind any of that, right? I mean, just because AI can't be deemed an inventor, well, you can still use it in an invention. So how does that work, and where is that line? And the answer is, that will still be determined in future cases.

I'm sure. Oh, yeah.

But even in Northern California, there was a case against OpenAI just filed in June, and it's a class action suit, because they're alleging a privacy violation: they were scraping social media posts, they were scraping locations and all sorts of data on these individuals, these users, or California consumers, whatever you want to call them. And so there's this case out there about how OpenAI is now violating just fundamental privacy rights.

And in California, there happens to be a constitutional right in the state constitution, a right to privacy.

And so that creates a little bit of an opening, whereas at the federal level it's not quite so clear that there is a constitutional right to privacy. So there's some room in there for a lot of litigation risk that comes with using AI, and not having parameters and rules set up in your company just opens you up to risks that you may not otherwise have been open to, you know, even like six months ago.

Yeah, I find it interesting as well that you may not even know you're using AI.

A great example just happened. I use a tool called Grammarly that checks my grammar. I've been working on my dissertation for a long time, and I've been using Grammarly, and about two months ago they added Grammarly Go, which is an AI on the back end.

Now, I knew that, but a lot of people don't. So all of a sudden they're using an AI and don't even know they're using an AI. Because, sure, Grammarly sent an email out to say, hey, Grammarly Go uses a generative AI on the back end, but no one knows, right? Because it's in the hundreds of emails I get every day, and I happened to research it some. Not everyone's going to.

So there are going to be more and more tools that have generative AI on the back end that are doing things.

Right, that we didn't even know about.

Well, I think that lends to the whole discussion about when you institute a new tool, when you go out and contract with some vendor for data processing or, you know, whatever it is. You know, in my world, we contract with a lot of different data companies, right?

They work through data in different ways.

They analyze data and report back to us and give us things that we can then turn in to CMS or whoever it is to get paid.

And there's a lot of things we do.

So there's a lot of data flowing in and out.

Well, we wouldn't know whether that entity, right, that vendor of yours, is using AI right now.

And that's one of the cautions, I think, going around right now: you know, my reading, or my reading of other people's readings, because, you know, the draft legislation is very tricky to get a hold of in really good form.

But with the federal legislation, there's all sorts of discussion about who's going to be liable for things.

Right.

Is the AI company going to be liable for things? Well, how does that really pass down the line? That's a very high-level strict liability, right?

So OpenAI is used by a vendor, which then, you know, passes that information on to your company, and you send it to the government. Are you now defrauding the government? I mean, there's all sorts of chain reactions that I can think of, and there'd be novel legal ideas here, where people are going to say, hey, who can we blame for something? And what generative AI has opened up is going to keep lawyers busy.

Oh, it definitely will.

And you know, as long as it keeps you guys paid, that's all right.

Yeah.

I mean, I guess the good thing is I don't bill. Right? I'm sorry, my colleagues would say, well, bill!

Oh, I'm sure there is a market out there. It's like the new, you know, the new plaintiff's counsel, right? When they have a new law that goes into effect and you can sue people for more money, they're going to go after it, and you'll have all of those plaintiffs suing companies.

Well, this is a similar type of thing.

You know, I'm sure that class action here in Northern California...

Yes. Well, it's a class action, meaning as soon as they certify a class, those attorneys are going to be sending out mailers saying, hey, you were affected by this, come and join our lawsuit.

And they're going to try to run that number up.

And, you know, I'm not saying that they're doing anything nefarious.

I mean, if they, you know, have a legitimate claim, it's a legitimate claim.

But yeah, it's certainly going to be a big business, I think, in the legal industry for a long time.

Well, if there's anything we got from today's episode, it's this: have a policy. And there's still a lot more to come in this space, because there are a lot of unknowns still.

Yeah.

No, definitely. I mean, I would say establish a policy and walk through things. Make it clear. All right.

Jeremy, it's always fun talking to you. Thanks for coming on.

Oh, no, no problem.

Thank you for listening to Embracing Digital Transformation today.

If you enjoyed our podcast, give it five stars on your favorite podcasting site or YouTube channel. You can find out more information about Embracing Digital Transformation at embracingdigital.org. Until next time, go out and do something wonderful.