Technology Partner Richard Harris Interviewed on AI on WTIC Radio
Day Pitney Technology Partner Richard Harris was interviewed by Brian Shactman of WTIC Radio's Brian & Company, along with Adam Chiara, Applied Associate Professor of Communication at the University of Hartford, on artificial intelligence (AI). The conversation centered on how AI affects education, media literacy, and the news media. The discussion covered patterns for spotting deepfakes, how quickly AI can adapt and improve (making them even harder to identify), the ethics surrounding the use of AI, and where legal regulation is beginning to form.
WTICAM Rick Harris 11/10/2023: This mp3 audio file was automatically transcribed by Sonix. This transcript may contain errors.
Brian Shactman:
You know, a topic that's consumed a lot of my attention, and we've talked about it a fair amount on the air, and I kind of want to make it an ongoing thing, is artificial intelligence: sort of, what are we doing with it? How are we reacting to it? And we're lucky right now to have two people. Rick Harris is on the phone from Day Pitney, and Professor Adam Chiara from the University of Hartford is with us in studio. And Rick, how you doing this morning?
Rick Harris:
Really well. How are you? Good.
Brian Shactman:
I'm doing very well. And. Professor. Good morning.
Professor Adam Chiara:
Hi. Good morning Brian.
Brian Shactman:
Okay. And so I guess I want to start with some sort of overarching questions that are sort of serving our listeners. And I'll start with you, professor: whether it's you or your students, how is AI, or for you, maybe ChatGPT, being used right now?
Professor Adam Chiara:
Well, I mean, you know, this is new and evolving. We have to think about this; it's really only hit the mainstream consciousness in the last year, right? So the way many students, for example, know it is: oh, I could put in a question and get an answer, I can give it a prompt, I can get something written.
Brian Shactman:
I can get an essay.
Professor Adam Chiara:
I can get an essay. Right. So, you know, there's that side of it. On the other side, for us as faculty members, we know that that could happen. We also know, hey, this could be a tool. This will be used in the workforce. So are there ways that we can use it where it's not just writing an essay, you know, but maybe using it for prompts, as part of research, as a way to edit? You know, so it's this back-and-forth wrestling. So when you ask how we are using it, well, we're still trying to really figure out what is the best way moving forward. And, you know, I don't have an answer yet. We're still kind of going through what the best practices are.
Brian Shactman:
Have you been presented with essays that, you know were not fully created from scratch from your students?
Professor Adam Chiara:
Yes.
Brian Shactman:
And how do you deal with that?
Professor Adam Chiara:
So far I've had conversations and I've, you know, talked to the student afterwards and I've said, you know, why did you do it?
Brian Shactman:
And they copped to it. They, you.
Professor Adam Chiara:
You know, because, you know, without going into the weeds, I can ask them a follow-up question or two: oh, this is really interesting about this, or this essay. And if they can't answer it, right, you know, then we get into it. And so then we say, well, you know, why did you feel confident when you wrote it? Did you feel confident that you actually knew the material, or did you just write it in, you know? And so we have this conversation. I think by having them understand that, you know, even if they think that it's helpful, is it really helping them learn? Because then when I ask a follow-up question, so far the answer has been no.
Brian Shactman:
Well, some, some don't want to learn as much as they want to get the assignment done, or what have you. I was in college, I remember. You know, Rick, are paralegals going the way of the dodo?
Rick Harris:
You know, I, I don't believe that AI is going to replace humans. I do think that humans who know how to use AI efficiently will replace humans who don't know how to use AI efficiently.
Brian Shactman:
So how how do you use it?
Rick Harris:
Well, you know, there are a lot of tools out there and a growing number of tools, especially in the legal industry, but in all industries. And I think that we use it for idea generation, for grammar checking, things like that. We also use it currently in discovery: if you have large documents and you're trying to find things in those large documents, AI tools can be very helpful in doing that. I think there is a downside, and Adam, you know, pointed that out earlier, which is, you know, if young associates try to use AI instead of doing the thinking themselves, I think they're going to shortchange themselves in their development and growth. And it's kind of cheating them out of the education they should be getting.
Brian Shactman:
Now, it only goes through the information that's sort of in the world. I mean, we've seen the stories of people who, you know, found precedent that actually didn't exist. Right? I mean, that's happened in legal research already.
Rick Harris:
Yeah. You know, every lawyer who's been paying attention to the news knows about that case. And it was a situation where the lawyers asked the AI to draft the brief, and then they didn't thoroughly check to see whether the cases that were being cited were real. And, you know, the AI tools aren't really good at giving you true facts. They are really good at writing, simply writing. And you should always assume that that writing is all made up, hallucinated, and check every fact before you rely on anything generated by an AI.
Professor Adam Chiara:
And Brian, if I can add on to that too: you know, you said that it's used off of previous information, which is the original data set, you know. But part of what makes it generative AI is that it's also thinking and trying to create itself. It's making predictions. It's taking this information and trying to make sense of it, and then coming up with a prediction: what do you think the user, the human, wants as the answer? What makes sense? So sometimes, as Rick was just saying, those predictions can be wrong. It's not like it's just taking Google and copying and pasting different parts of what it finds on the internet. It's taking that information and trying to create its own response. And so, to Rick's point, you know, a lot of that can be inaccurate because it's making a prediction that was just wrong. You know, it doesn't make sense. It's nonsensical to us; to the program, it makes sense, right? But to us, on the outcome here, no, it was just wrong. And so that's where I think we have to understand: it's not just finding answers.
Brian Shactman:
And it's going to, I mean, they say it's already, you know, a multiplier in its capacity. It's changing all the time. And we'll talk. We're going to take a little pause here, and I want to talk with Professor Chiara about being ahead of the curve, trying to stay ahead of the curve, and also recognizing real from fake. This is something you now need to teach as a skill, which I think is really important in media and in communications. And then with Rick, this is like a double-sided thing: there's how it can maybe contribute to your job, but then there's the legal stuff that you're going to have to do work-wise as we start to establish precedents and all this other stuff. Like, it's a huge business opportunity too, on the legal side. So I want to get to those two issues. We're going to talk with Sandy Terenzi for a second, then we'll do a quick traffic and weather, and then we'll get back with the professor.
Brian Shactman:
Bye Mark, thank you very much. We're back here with Rick Harris from Day Pitney, as well as Professor Adam Chiara from the University of Hartford. You know, professor, I'll start with you. Because for me, even I have trouble. And, you know, Rick, you can chime in, of course. I have trouble sometimes, you know, especially with this thing in Gaza; there's been a lot of deepfake stuff. There's been a lot of, like, we've even seen videos where they line up dead bodies and you see someone scratching themselves under the blanket, and all this stuff. It's so difficult to know what's real and not real, and AI has a lot to do with that, too. I mean, it's a two-parter: how aware are the kids that they have to vet? Because sometimes they'll see something on TikTok and they'll take it as gospel, and it's not even true. How do you deal with that side of it? And, you know, how does AI factor into all that, staying ahead of the curve?
Professor Adam Chiara:
Yeah, so, you know, it's a great question, because what happens is, I think students are aware that these deepfakes exist, that things can be manipulated, you know. But just because we're aware of it doesn't mean we're always thinking about it. And so you're scrolling, and this could be, it's not just students, this could be all of us, right? We're just scrolling, right? You see something, and it might seem a little suspicious, but guess what? We've moved on to the next post, but it's still kind of lingering in our head, right? And so of course, there's always the hey, verify, check other sources, make sure. You know, we can say that to students. But again, it's difficult for us to do that on every single piece of content that we're getting. And my big concern, Brian, is that we're going to be very apathetic, right? We're going to get to the point where we just don't believe anything. And we're just, you know, oh, it's got to be fake, right? Because everything is fake. And, you know, with less trust in the actual media itself, the gatekeepers who are verifying, you know, now that's where I'm really getting concerned that we're all going to be in our own information silos here, and nobody's believing anything. And then, you know, who do we trust?
Brian Shactman:
So we're already there, though. The algorithms react to what you look at, and they go from there. But there's no systemic way to fact-check.
Professor Adam Chiara:
No, you know, people are coming up with programs that are going to try to catch deepfakes. There's a lot of pressure on platforms like, let's say, Facebook to say, hey, anytime there's a video that has been manipulated, that has used artificial intelligence, you need to tag it, you know, have like a watermark on it, so everybody knows. But, you know, there's no regulation on that yet. No one's making them do that. So right now we are still in this wild west of artificial intelligence, deepfakes, you know, everything that's coming at us.
Brian Shactman:
You know, Rick, before I go into some legal stuff, does any of this conversation resonate with the work you guys do?
Rick Harris:
Yeah, it absolutely does. And, you know, as we think about things like election security going into the 2024 election cycle, you know, people are very concerned about this. The deepfakes and, you know, false rumors have been circulating on the internet probably since the internet first became publicly available, right? But what changes now is that AI can generate it so quickly. And the text coming out of this, and the pictures and things, are so realistic that it's almost scary, and it's going to be a lot harder to catch. You know, Adam and I, who have been doing this for a long time and playing around with AI, we can kind of recognize text and graphics that are generated via these tools.
Brian Shactman:
Is that just by experience and repetition, or is there an actual skill to that? Like, can you teach that to other people?
Rick Harris:
No, you know, it really is experience and repetition. There's a stylistic manner of writing that, particularly, you know, ChatGPT, which I play around with a lot, seems to have. And so I can now see when somebody's using ChatGPT. I don't always catch it. But even, you know, I think the professor, who, you know, gets lots of papers and looks at them and says this looks suspicious, he tags them because it has some stylistic difference that he senses can't have been written by the individual.
Professor Adam Chiara:
So keep in mind with that, though, Brian, that, you know, this is evolving so quickly and getting better every day. And so what we think we understand now, you know, for example, with a deepfake, I can look at blinking; sometimes the blinking doesn't line up, right? But who knows, a month from now that will probably be fixed by these programs. And so every time you think that, hey, I recognize the style, I can recognize certain traits, well, soon that will be indistinguishable, you know, that will be gone, and then it'll be even better and better. And I do think it will get to the point one day where any person, even somebody who's looking at this like Rick and me all the time, will not be able to tell the difference, you know, and then you will need programs to identify it. Because the technology is in its infancy; wait till it gets so good that, you know, no one's going to be able to recognize it.
Brian Shactman:
So, you know, I want to ask you both, because this is like sort of my question of the day with people: even though you do different things, information is what you deal with. In fact, where do you go for information? I'll start with you, professor; then, Rick, I want to get your answer. Like, Twitter used to be, Twitter was my everything, right? Because I could just follow, I could sort of pick and choose from all the news sources, and it's just a way for them to be collated for me, right? Or, you know, it's an aggregator for me, right? The New York Times, The Wall Street Journal, whatever. And now, like, it's just not consistent, and I can't trust a lot of stuff. So I'm a little lost for where to go for good information. Like, how do you gather information? Where do you go for information?
Professor Adam Chiara:
You know, I'm still in between. You know, I'm an older millennial, right? But I still grew up in the time of traditional media. So for me, my habits still remain traditional media. I still read the Wall Street Journal, I still get the New York Times, I still get the Hartford Courant. You know, yes, I use social media, but social media is a complement. I'll go on Twitter, you know, X now.
Brian Shactman:
I refuse to say X.
Professor Adam Chiara:
So I always have to. But, you know, it complements the information that I already have; it supports the foundation that I already have. But you know what? That's me. Younger generations are not like that. I have these conversations every day. They get it from, what they'll claim, social media. Yeah. And while social media is a mix of traditional sources and their peers, you know, they don't.
Brian Shactman:
Believe me, I've had "Dad, do you know that blah blah blah and blah blah blah and blah blah blah?" Where did you see that? Oh, Snapchat. I was like, well, find me a journalist, find me a second source that's credible to confirm that. And like eight out of ten times it'll be wrong. And so what about you, Rick? I mean, I just, I'm fascinated by. That's not an encouraging answer, by the way. And it's not a negative on you. It's like, I'm just a little lost myself, and.
Rick Harris:
Yeah.
Brian Shactman:
What about you, Rick?
Rick Harris:
Well, you know, first, it warms my heart to hear a millennial say, you know, the New York Times and Wall Street Journal and, you know, many of the legacy media. You know, I'm a baby boomer, and, you know, that's where I try to get my news from. But it's not wrong to look at Twitter to see what the trends are, because that's really what you were doing, Brian, right? You look at Twitter, see what the trends are, but then you would go back to the primary sources. In fact, you'd go to secondary sources like the New York Times and then look to the primary source, right? And what we're trying to teach people over time is: go to the primary source. If somebody's talking about something that's happening in Congress, a piece of legislation, go and look at it yourself. Make sure that you're getting that information from a reliable source. One of the things that, you know, we value in this country about legacy media is that there are rules of conduct for them, and they do use multiple sources before they report on something, whereas somebody who's, you know, sticking something out on Instagram, you have no idea whether they checked the facts before they put it out there.
Brian Shactman:
Right. And there's a knock-on effect, because if they do that and then someone consumes it without double-checking, there's no second source on two levels of the authenticating process. Just real quick, because, you know, we're always up against the clock: Rick, in terms of the legal opportunity for business, like, have you launched an AI department? I mean, there's got to be, like, a huge opportunity for representation on this thing, too.
Rick Harris:
Yeah. So Day Pitney put together a team of lawyers from across all disciplines. So we have litigation folks on it, we have intellectual property folks on it, we have corporate lawyers on it, and, you know, employment lawyers on it. So you name the specialty, and we combine those and have a very active team. We actually put this together sometime in February, I guess, when this really began to blossom in the public view.
Brian Shactman:
That's smart. It's good business. Before I let you go, professor, are you encouraged by the level of work your students are doing and the integrity? Because, like, if they don't have a baseline of integrity, we're screwed.
Professor Adam Chiara:
Yeah. Brian, I'm glad you asked that, because I did want to end on an optimistic note. It sounds like I'm very pessimistic here; I'm actually very optimistic. And here's why: we are at least all very aware of the power of artificial intelligence. I'm having conversations in my classes. My colleagues and I are figuring out how we address this with students. And so I think there's an awareness, and there's an awareness now that there wasn't when, let's say, social media really became prominent. Social media was like, oh, this is this fun, fun technology; we didn't think of all the negative consequences that could come. We understand a lot of the negative consequences now, and so we are at least addressing them. And as long as we're having these kinds of conversations and discussing it, I'm hopeful that we'll figure it all out together, because that's what we're trying to do right now. We're all just trying to figure this out together.
Brian Shactman:
That's exactly right. Hey, Rick, thanks for the time. I really appreciate it.
Rick Harris:
All right. And, Brian, you don't really know whether I'm an AI or not, do you?
Professor Adam Chiara:
Well, you reacted to me with a specificity that I would have, let, I think I.
Brian Shactman:
And if you are AI, then I'm totally. I'm in trouble. Thanks, Rick. Have a good one, Rick.
Rick Harris:
Have a good one.
Brian Shactman:
Rick Harris from Day Pitney and Professor Adam Chiara from the University of Hartford. It's great to meet you. We should definitely talk more; I'd love to. You know, education is big in my world. And I think, you know, the students that are coming out now are going to dictate a lot of what's going to happen in the future with this stuff. So it's really important. Thanks for the time.
Professor Adam Chiara:
I appreciate it, Brian. Thank you.