How Generative AI Wastes Water and Kills Your Imagination

Every time you use an LLM, you're wasting water, fueling fascism, and shrinking your imagination. So what can you do about it? Our hosts Rosie Tran and Amanda King explain.

Photo by Giorgio Parravicini / Unsplash

Well, we did it. We've reached the end of the month.


I am about halfway done with the rewrite of "How to Protect Yourself from Fascists & Weirdos." With the murders of Alex Pretti and Renee Good, and the recently confirmed murder of a third U.S. citizen, Ruben Ray Martinez, what we had didn't really meet the moment.


What we had wasn't bad. In fact, you're going to see the original draft in a few different forms here on Stupid Sexy Privacy (for example, here). But we thought something a bit more tactical would be useful.


For that reason, I'm going to keep my usual address here kind of short for the next few weeks while we finish up the book.


What I will tell you before I go, however, is that every time you use Generative AI, you're wasting water and empowering the fascists. We explain that in today’s episode.


And yes, any time you do something on the Internet, there's some data center out there using up someone's fresh water. That's a fact. But in the case of Large Language Models, you still have the ability to NOT use them and conserve that water.


If you care at all about the climate emergency and the world's dwindling supply of fresh water, I hope you'll remember that the next time you pull up ChatGPT or any of the other options out there.


The other thing I want to tell you is that Rosie lays out my argument for this in today's episode. She also explains that fascists HATE art because art shows the world how things can be without them.


It's probably overly simplistic to say, but I'll say it anyway: my belief is that every time you use ChatGPT or another LLM to generate something, you're making yourself dumber, killing the planet, and shrinking your imagination. It's a net negative.

That's also why fascists LOVE AI slop.

They LOVE generative AI because it requires no thought, no soul, and no love.

If you can't think, the fascists win.

If you can't "see" what the world can look like without them, then their presentation of being invincible and indomitable works. This is a reason why, if you ever see a bunch of white supremacists get together to protest, you should absolutely play this song. Not only will they lose their shit, but you'll also effectively burst their presentation's bubble.


The solution to fascism is to make more art, not less, and to do so using your hands, your heart, and your brain.

-BJ

Hello. Farewell. Hello. Farewell.

You can follow me here on Bluesky

Show Notes

Stupid Sexy Privacy Show Notes For Season 1, Episode 27

Episode Title: How Generative AI Wastes Water and Kills Your Imagination

Guest: Dr. Uri Gal, a professor of business information systems at the University of Sydney Business School.

Episode Summary: Every time you use an LLM, you're wasting water, fueling fascism, and shrinking your imagination. So what can you do about it? Our hosts Rosie Tran and Amanda King explain.

Highlights From Our Interview with Dr. Gal

-Everything you enter into a Large Language Model is used to train that model. So, as your Mom or Dad may have told you a long time ago about the Internet: if you don't want someone to know something, don't post it online, and definitely don't enter your medical information or other sensitive data into any of these chat models.

-This interview was recorded in 2023, and somehow, the world is STILL slow to properly regulate Large Language Models. In fact, as we write this, here in the United States the Secretary of Defense is threatening Anthropic over access to their dataset and LLM, all while his boss is trying his best to make sure no state can properly regulate what we're calling AI. So we wanted to run this interview from '23 to help establish that we're not dealing with a "new" problem here, and that it's time to act and properly regulate AI, both federally in the United States and globally. Over a million ChatGPT users, for example, have expressed some form of suicidal ideation, and a number of lawsuits over suicides linked to the usage of these unregulated Large Language Models are working their way through the courts.

-If you don't understand the privacy concerns surrounding Large Language Models, or just how they work, this is a good interview to start with before going through the other episodes we've recorded on the topic, like how using Generative AI rots your brain, and how to hide from bad state actors in the future (like Hegseth).

Our Sponsor: DuckDuckGo <--Our Recommended Browser and VPN

Get Your Privacy Notebook: Get your Leuchtturm1917 notebook here.

-BitWarden.com (Password Manager: easier to use, costs money)

-KeePassXC (Password Manager: free, harder to use, but more secure)

-Slnt Privacy Stickers for Phones and Laptops

-Slnt Faraday bag for your Stranger Danger phone.

-Mic-Lock Microphone Blockers

-Mic-Lock Camera Finder Pro

-BitDefender (best anti-virus for most people across most devices)

-Stop using SMS and WhatsApp, start using Signal.

-Use Element instead of Slack for group coordination

-Use StopGenAI's Guide to getting Generative AI out of your life.

-Use cash whenever possible. If you have to buy something online, try to use Privacy.com to shield your actual credit or debit card.

Get In Touch: You can contact us here

Want the full transcript for this week's episode?

Easy. All you gotta do is sign up for our newsletter. If you do, you'll also get an .mp3 and .pdf of our new book, "How to Protect Yourself From Fascists & Weirdos," as soon as it's ready. It's free.

But if you'd like to comment on StupidSexyPrivacy.com posts, or if you just want to support our work, you can support us by becoming a paid subscriber. It's $2 per month or $24 for the year.

Stupid Sexy Privacy Season 1, Episode 27

DuckDuckGo Commercial #2

Announcer: Hey, here's a joke. Knock knock.

Announcer 2: It's Google Chrome, and I don't need to ask who's there. I already know it's you. I know your search history, your email address, location, device settings, even your financial and medical data.

Announcer: Wow, that's not funny. Now I'm definitely switching to DuckDuckGo.

Announcer 2: That's smart. If you use Google Search or Chrome, your personal information is probably exposed. And that's no laughing matter. The free DuckDuckGo browser protects your personal information from hackers, scammers, and data-hungry companies.

DuckDuckGo has a search engine built in, but unlike Google, it never tracks your searches. And you can browse like on Chrome, but it blocks most cookies and ads that follow you around.

DuckDuckGo is built for data protection, not data collection. That's why it's used by millions to search and browse online. Don't wait. Download the free DuckDuckGo browser today. Visit DuckDuckGo.com or wherever you get your apps.

Stupid Sexy Privacy Intro

Rosie: Welcome to another edition of Stupid Sexy Privacy. 

Andrew: A podcast miniseries sponsored by our friends at DuckDuckGo. 

Rosie: I’m your host, Rosie Tran. 

You may have seen me on Rosie Tran Presents, which is now available on Amazon Prime.

Andrew: And I’m your co-producer, Andrew VanVoorhis. With us, as always, is Bonzo the Snow Monkey.

Bonzo: Monkey sound!

Rosie: I’m pretty sure that’s not what a Japanese Macaque sounds like.

Andrew: Oh it’s not. Not even close.

Rosie: Let’s hope there aren’t any zoologists listening.

Bonzo: Monkey Sound!

Rosie: Ok. I’m ALSO pretty sure that’s not what a Snow Monkey sounds like.

*Clears her throat*

Rosie: Over the course of this miniseries, we’re going to offer you short, actionable tips to protect your data, your privacy, and yourself from fascists and weirdos.

These tips were sourced by our fearless leader — he really hates when we call him that — BJ Mendelson. 

Episodes 1 through 33 were written a couple of years ago. 

But since a lot of that advice is still relevant, we thought it would be worth sharing again for those who missed it.

Andrew: And if you have heard these episodes before, you should know we’ve gone back and updated a bunch of them.

Even adding some brand new interviews and privacy tips along the way.

Rosie: That’s right. So before we get into today’s episode, make sure you visit StupidSexyPrivacy.com and subscribe to our newsletter.

Andrew: This way you can get updates on the show, and be the first to know when new episodes are released in 2026.

Rosie: And if you sign up for the newsletter, you’ll also get a free PDF and MP3 copy of BJ and Amanda King’s new book, “How to Protect Yourself From Fascists & Weirdos.” All you have to do is visit StupidSexyPrivacy.com

Andrew: StupidSexyPrivacy.com

Rosie: That’s what I just said. StupidSexyPrivacy.com

Andrew: I know, but repetition is the key to success. You know what else is?

Rosie: What?

Bonzo: Another, different, monkey sound!

Rosie: I’m really glad this show isn’t on YouTube, because they’d pull it down like, immediately.

Andrew: I know. Google sucks.

Rosie: And on that note, let’s get to today’s privacy tip!

This Week's Privacy Tip

Rosie: In the same way there's nothing Christian about Christian Nationalism, there's no intelligence in Artificial Intelligence.

That said, BJ's been looking for an opportunity to write a short Privacy Tip, concerning what's called "Generative AI," and how it's an imagination killer.

So, at this point in our season one rewind, you know how we feel: "Artificial Intelligence" is like the word "innovation." If you replace either one with the word "magic," as many critics have suggested, you'll notice there's no difference.

What that means is that what we're being told is the future is actually someone's vision of it, and not reality. In this case, those someones are the Tech Billionaires.

You might already know this.

But here are two things you're not hearing enough about:

First: Every time you go and use one of these large language models, you're pissing away fresh water. In fact, we want you to go and fill up a glass of water right now.

Fill that glass of water, all the way to the top, and then say, "LOL. I don't need this." Then pour that water out. That's what you're doing every time you use ChatGPT.

If that doesn't piss you off, it should. Because wasting water is a sin.

Especially in a world that's increasingly running short of it, a problem exacerbated by tech companies that need fresh water to cool their data centers.

Much like how we think there's a moral obligation not to use platforms like Twitter, we feel there's a moral obligation not to waste any more fresh water than we already do, and to preserve as much of it as we can.

Second, just because tech billionaires want our world to look a certain way doesn't mean it has to. Elon Musk can spend $290 million to support all the fascists he wants, like he did in 2024, but that doesn't mean the fascists are in charge of your mind.

Your imagination is the most powerful weapon you have. It's critical you take time every day, whenever possible, to feed it. Whether that's listening to music on your commute, going to an art gallery, or drawing in your privacy notebook.

Hey, we told you to get that thing for a reason, you know? It's not just for writing down your most important passwords and passphrases.

*Monkey sounds*

Fascists like generative AI because no imagination is required. You tell the chat model what you want, and it uses statistics to give you a close enough approximation. There's nothing special happening under the hood. It's not some God-like intelligence.

It's just math, baby!

And although we can't yet prove it, early research suggests that frequent use of generative AI is bad for your brain. That's because your brain is like any other part of your body: the more you use it, the better it performs. Ditto with your imagination. The more you use it, the stronger it becomes.

While people have tried to assert that sitting is the new smoking — and they're not entirely wrong — it's more accurate to say that using generative AI is the new smoking. It's bad for your brain, it's bad for the people around you, and it's bad for the planet.

So here's the thing we want you to know: Generative AI is an imagination killer, and the more you use it, the stronger the fascists become.

That's because the kind of fascists tech billionaires eagerly fund don't want you to use your imagination.

Using your imagination allows you to visualize a better world: one where climate change has been reversed, and one where the fascists have been defeated. So, if you can and when you can, read more, hang out with creative people, and explore your world.

Remember: Fascists hate art, because art shows people what a world without Fascism looks like.

So if you want to help end the worldwide rise in authoritarianism, the simplest and most fun thing you can do is make some art.

Any kind will do.

Because not only does art inspire and motivate, it also provides us with a road map to a better future.

This Week's Interview with Dr. Uri Gal

Andrew: Just a heads up that the following interview was recorded in 2023. It features my co-host Amanda King and Dr. Uri Gal, a professor of business information systems at the University of Sydney Business School. If you're interested in large language models and the privacy issues surrounding them, you're not gonna wanna miss it.

Amanda King, co-host of Stupid Sexy Privacy: Thank you for joining us. I have with us today, Uri Gal, a professor of business information systems at University of Sydney. Thank you for joining us, professor. If you could give everyone a brief introduction to what you do, what your background is, that would be fantastic.

Dr. Uri Gal: Thank you. Yeah, I'm a professor of business information systems, which basically means that I look into the way in which digital technologies are used by organizations. And I'm particularly interested in looking at the social and ethical consequences that new technologies have. My background might help to explain this interest of mine. I actually have an undergrad in sociology and anthropology, so I'm not really a technologist per se, or an engineer. I have a master's in organizational psychology, and even my PhD was in organizational behavior. So my keen interest is really in understanding human and social processes as they relate to new technologies.

Amanda: No, that's really interesting. And I feel like it's a bit ahead of the curve as well, right? Because it's really only recently that ethics has been coming into the conversation around technology, in terms of at what point is it still ethical to do this thing. And I feel like nothing exemplifies that more than the recent surge in AI and ChatGPT. And I know you've written some articles on that, but take us through a bit of large language models and why this is such an ethical conundrum at the moment.

Dr. Gal: So large language models are based on huge amounts of data, as the name suggests, large. And AI traditionally has been trained on data sets. That in itself is not a new concept. Many times these data sets were specifically designed for unique purposes, and data was collected from narrower sources. With the advent of the internet, and specifically with the proliferation of social media, both of which have contributed to an explosion of information, much of it very easily accessible by anybody, we've seen the expansion of very large data sets that are used by various types of models, not just large language models necessarily. And like you indicated, this brings with it various ethical issues, specifically around privacy: where the data is gathered from, what kind of information it reveals about the people who generate this data, whether or not they're compensated for the use of this data in training data sets, and a variety of other ethical issues that I imagine we'll get to.

[Editor's Note: Here in 2026, our co-producer, BJ Mendelson, is supposed to receive a settlement check, along with many other authors, from Anthropic, since the company used his book and other authors' books to train their Claude model without telling them. The case is Bartz v. Anthropic.]

Amanda: Yeah, absolutely. Now with ChatGPT and large language models, do we know what data from the internet they're using, or what the most likely sources for their data are? Do we have that information yet?

Dr. Gal: We have general information that companies like OpenAI, which is the company behind ChatGPT, released to the public. So I believe that the number they provided was 300 billion characters that the model was trained on. By the way, overnight I received an email from OpenAI that they're releasing GPT-4, which, based on what I've heard, is trained on an even larger data set. I don't know what the number will be there. But yeah, so we're talking about massive amounts of information that are basically harvested from all over the internet.

Amanda: So you would have to assume then that anything you write and publish online or anything from your social media could effectively be used to inform this, yeah?

Dr. Gal: Yeah, so it would include things like blog posts and reviews that people write on restaurants or hotels or what have you. We do know that OpenAI had access to Twitter data up until, I believe, December of 2020, when Musk decided to stop access to Twitter data. But up until that point, OpenAI and ChatGPT had access to Twitter data. So it's just an example of another source of data that they would use for training.

Amanda: Yeah. Now, is there any way at the moment to opt out of this?

Dr. Gal: Nope. *Laughs.* To the extent that you're an active participant on the internet, if you have a Facebook account, or you're involved in any other social media platform, or you write any type of text anywhere on the internet, you need to take into consideration that it might be consumed by ChatGPT, and other language models for that matter, right? They're not the only player in the market.

Amanda: So if you were someone who wanted to be kind of a bit more privacy aware on the internet, how would you reconcile large language models and their training data sets? What kind of mindset would you need to have with that? Is it kind of like you said, where it's just once you publish something on the Internet, you have to acknowledge that it may be part of that data set? Or is there another angle that you could potentially take?

Dr. Gal: One angle that one could take is just to be careful with the kind of information that they post online, which I know is not particularly helpful because people use various websites and social media platforms for a variety of purposes. Many of them tend to be for personal purposes, with family members and things along those lines.

But I think as a basic principle, it's always better to err on the side of caution. And if you don't need to write something, don't write it. And we have other types of generative AI applications that are based on images. Along the same lines, I would say don't post photos that you don't have to post, because we don't know how they might be used to train generative AI applications like Midjourney and others that are probably coming. So that's one thing that I would say.

Secondly, I think probably the most reasonable way to deal with this is through regulation and legislation, which is not really directly in the hands of individual consumers; but that's something that should be coming, because I think it's another example of technological advancements moving way faster than the legal frameworks that are meant to govern how they're used across society. Which is another really important thing to keep in mind, right? The scale of these things. ChatGPT and other LLMs sit somewhere in between what people call narrow AI, which are applications that are quite specific to achieving a unique task, like playing chess or Go, and AGI, artificial general intelligence, which is meant to be a do-it-all AI machine that doesn't really exist yet. And many people doubt that it ever will exist, but I guess that's a different conversation. So the LLMs sit somewhere in between, and we need to keep in mind how quickly these models have become so popular all over the world. I believe it's the fastest growing consumer product ever released. I forget the specific time frame, but it took them a couple of weeks to go from zero to a million users. So the scale here presents a real ethical challenge as well.

Amanda: Yeah, absolutely. And two questions off the back of that. First one, just jumping off of that growth in particular, with your background and your knowledge around kind of AI and information systems and things like that. Why do you think it got so popular so fast? What's your working hypothesis?

Dr. Gal: I think this technology, and again, we're not just talking about ChatGPT, it's also the image-generating AI machines and other large language models that are being rolled out now. I think they do present a novel functionality that's kind of fun to play around with. If you leave aside for a minute all the possible risks that it presents to people and to society at large, which I think are real and severe. But leaving that aside for a minute, I think it's pretty cool to interact with an interface that can write things back to you based on simple prompts and generate extremely plausible, high-level text in ways that seem to be specifically tailored to your requests. So there's something there that's captivating, that's kind of alluring. It's a new shiny toy. I think the novelty is one big aspect of why so many people are excited about this, and hopefully it will wear off soon.

Amanda: Hopefully. Hopefully.

Dr. Gal: Yeah.

Amanda: What are those ethical risks? What is that risk there? Because there are a couple of different levels of it. Can you break it down for us what those are?

Dr. Gal: Yeah. So one of the risks is something that we started talking about before, which is the privacy aspect. And within the privacy aspect, are various branches to that. So there's the issue of where the data comes from and whether anybody was asked for the data to be used to train the model. And the answer is no. And we need to keep in mind that OpenAI is a for-profit business. It used not to be, but now it is. At least part of it is a for-profit business. I think they still have a research lab adjacent to the main business, but the main business is for-profit. And they have access to this massive raw resource, which is our data, that they pay nothing for or close to nothing if we think about the infrastructural effort involved in scraping the data.

But I don't know many other companies that are able to have access to the raw resource without which they have nothing to offer really for free. So there's that element of it. And then another privacy aspect is that anything that we put in our prompts when we interact with ChatGPT or ChatGPT 4, I guess, which is coming out now, can be used by OpenAI and become part of the training data for the model, which may sound innocuous, but it's not really when you consider the kind of information that people might be putting into the prompts. One of the things that people use ChatGPT for is to edit documents. Documents can contain all sorts of information about private affairs in addition to other things. To the extent that there's private information in these documents, then it might be consumed by ChatGPT and become part of the public domain when other people prompt it to generate whatever text they're interested in. So we need to consider that everything that we put in there might be used for other purposes in the future.

BJ's Book Commercial:

Hello Everyone, this is BJ Mendelson, and I am the writer and co-producer of Stupid Sexy Privacy.

When I’m not working on the show, I’m usually yelling at my television because of the New York Mets.

I want to take a moment to tell you about a book I co-authored with Amanda King.

It’s called “How to Protect Yourself From Fascists & Weirdos,” and the title tells you everything you need to know about what’s inside.

Thanks to our friends at DuckDuckGo, Amanda and I are releasing this book, for free, in early 2026.

If you want a DRM free .pdf copy? You can have one.

If you want a DRM free .mp3 of the audiobook? You can have that too.

All you need to do is visit StupidSexyPrivacy.com and subscribe to our newsletter.

That website again is StupidSexyPrivacy.com, and we’ll send you both the PDF and the MP3 as soon as they’re ready.

Now, I gotta get out of here before Bonzo shows up. 

He’s been trying to sell me tickets to see the White Sox play the Rockies.

And I don’t have time to explain to him how Interleague Baseball is a sin against God.

I’ve got a book to finish.

This Week's Interview with Dr. Uri Gal (Continued)

Amanda: One thing that you speak about in the articles you've released in the last couple of months is GDPR, the privacy policy, and the lack of clarity and visibility around that. Given that this is a very fast-moving space at the moment, what kind of response or change, if any, have you seen in the privacy policy, or the language within it, or the conversations around GDPR in relation to large language models?

Dr. Gal: Let me preface my response by saying that I'm not a legal expert.

Amanda: Yeah. Absolutely.

Dr. Gal: And what I know is just based on my own observations. And my observations have indicated that nothing has changed. But I think this is not for a lack of concern. I think it's because the pace of the legal process is much slower than the pace of technological development. I think people are aware this presents a new challenge. But even with the GDPR as it stands currently, there are real questions about whether language models like ChatGPT are compliant with it. So for instance, there's a stipulation in GDPR regarding the right to be forgotten.

Which means that you can ask, for instance, if there's, for whatever reason, a website out there that mentions your name in a negative context, and this page is indexed by Google, you have a right to ask Google to deindex that page. And it's not clear how a large language model like ChatGPT allows for something like this to happen. So there are real debates happening, in Europe and I imagine elsewhere as well, about the legal aspects of these new technologies.

Amanda: Yeah, and this is a very big question and if you don't have an answer that is entirely fine. From your perspective, is there a way that large language models and privacy can actually coexist? And you can have something like ChatGPT that actually does strike that balance and find a way to still respect people's privacy and have that kind of, or are they entirely incompatible?

Dr. Gal: That's a very good question. Given the sheer amounts of data that are required to train these language models, to the point that they become as ... I don't want to say reliable, because they're highly unreliable. As convincing as they are, such that they have the capacity to produce text that seems probable. I don't have a good idea as to what would be a good alternative to the internet as a whole as a source for this data. And maybe I'm missing something there, but if I am, I'm not sure what that is. And if I'm correct, and there's no other alternative to the internet, then I think we need to have, as a society, a sincere conversation about the nature of the data that sits on the internet. And of course, this is a very overgeneralized statement, because there are different types of data that reside on the internet. And some of them are, I guess, more public in nature than others. And some of them were written with the intention of becoming as public as possible, in fact, to become viral. But not everything is like that. And even if something was written to become as public as possible, that doesn't mean that whoever wrote that piece of text necessarily consents to it being used in this other way, and for this use to be financially profitable for another entity. That's a different proposition. So I don't know that there's a necessary clash between large language models and privacy, but if there is, I'm not sure exactly how to resolve it.

Amanda: Yeah. I mean, it's ... it's a big question. It's kind of ... the chicken and egg problem to a certain extent. But in terms of privacy, particularly in the United States, one of the big concerns is reproductive health information. And I know that's something that you've touched on as well. Do you see kind of an intersection at all between privacy around reproductive health and ChatGPT further than the sense that it could be information that's pulled into the training data? Are there other crossover points that people should be aware of with this?

Dr. Gal: ChatGPT does have a tendency of providing responses to prompts that can include information about people. And the information that it provides can contain actual details about individuals and their lives, in ways that are completely unpredictable to the user, or to anybody else for that matter. So it may be correct, it may be incorrect, it may be partly correct, it may be misleading, and there's no way to ascertain what the quality of the data is. When we think about the medical or health-related aspects of large language models and these interactive, text-based applications like ChatGPT, I think there are real concerns there that largely emanate from the tendency of these technologies to hallucinate. People refer to this as AI hallucinations, which is the tendency they have to produce seemingly plausible but manifestly incorrect information. And I would be highly surprised if no one uses ChatGPT and other similar interfaces to ask medical questions. Like, what do these symptoms mean? What do I need to do if I'm feeling this after I've done that, or whatever it is. And again, think about the scale, which we talked about before, when you extrapolate this across the whole population. How many lives are going to be lost because of this?

Amanda: Yeah

Dr Gal: And I don't know.

Amanda: Could we maybe give a quick refresher for people who know about ChatGPT but don't necessarily understand how it works? Because it doesn't necessarily work in the way that you expect it to because at least from my understanding, there's not a filter for 'is this true', right? It's just kind of a pattern matching thing, right?

Dr. Gal: For the most part, yes. It just consumes data from the internet, and whatever was written on the internet can be given back to the user. My understanding is that they did use another layer of reinforcement learning done by humans. And I'm not sure if you've heard about the critiques that were written about this, about the people they used to do that. I believe they were based in ... I want to say Kenya, or maybe another African country, but I think it was Kenya. And I think they got paid like $2 an hour or something. And their job was to examine the output produced by ChatGPT and confirm whether it was reasonable. And given the sheer volume of data that flows through the system, and clearly given the amount of incorrect information that we know the system produces, they can't catch everything. And I've read somewhere that up to 20% of the output of ChatGPT is complete bullshit.

[Editor note: Some studies conducted in 2025 suggest the amount of bullshit ChatGPT and other LLMs produce is much higher, nearly half of all responses, though the numbers vary based on the study and its methodology.]
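[Editor note: For readers who want to see what "pattern matching with no filter for truth" means concretely, here is a deliberately tiny sketch in Python. This is a toy bigram model, not how ChatGPT actually works (real models are neural networks with billions of parameters), but it illustrates the same core idea Dr. Gal and Amanda are describing: the next word is chosen because it frequently follows the previous word in the training text, not because the resulting sentence is true. The corpus, place names, and "facts" below are entirely made up for illustration.]

```python
import random
from collections import defaultdict

# Toy training text. The model never stores facts, only which word
# tends to follow which other word. (All place names are fictional.)
corpus = (
    "the capital of freedonia is fredville . "
    "the capital of sylvania is sylvtown . "
).split()

# Bigram table: word -> list of words observed to follow it.
follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

def generate(seed, n=10, rng=random):
    """Continue `seed` by repeatedly sampling a statistically plausible
    next word. Nothing here checks whether the output is true."""
    out = seed.split()
    for _ in range(n):
        options = follows.get(out[-1])
        if not options:
            break
        out.append(rng.choice(options))
    return " ".join(out)
```

Given the seed "the capital of freedonia is", this model is just as likely to continue with "sylvtown" as with "fredville," fluently splicing the two memorized sentences into a confident falsehood, because the sampling loop only consults word-sequence frequency, never facts.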

Amanda: Yeah, okay. That's quite a bit. So, not only...

Dr. Gal: In fact, can I share something with you? The article that I published on ChatGPT has an example, a screenshot of the beginning of a book that was generated by the system, to demonstrate how it can use copyrighted material. The book that I have there is Catch-22. But the original version of the article contained the first few passages, or so I thought, from the book 'True History of the Kelly Gang' by Peter Carey, who's an Australian author. I just prompted ChatGPT to give me the first few passages from the book, which it happily did, and I put them in the article. And after the article got published, literally within hours, less than a day, I get an email from Peter Carey saying: this is not my book. I'm not sure who wrote this, but this is not something I would ever write.

So obviously we very quickly took it down and replaced it with an actual excerpt from a different book. But the point I want to make here is that the system will very confidently produce manifestly wrong information. There's no way to verify it. I mean, in this case there was a way to validate the information, because I could have actually opened the book and done my homework, which I didn't do. I apologize, but that's what happened. So like I said, it's not a very flattering story.

Amanda: It's not a very flattering story, maybe, but it is quite illustrative, right? Of how easy it is to be convinced that incorrect data is correct, because ChatGPT and other large language models present it in a way that is very convincing, because it follows our natural writing patterns, right?

Dr. Gal: Yes. And this links to another ethical concern, a really significant one, around the use of these language models: their capacity to write convincing and yet crappy, incorrect information at scale, very cheaply, in a way that can be very simply weaponized for all sorts of purposes, you know, in order to spread misinformation. And today we have not just the ability to generate text extremely quickly, but also to generate extremely high-quality deepfakes of people's faces. When you combine these two things together, and they can be combined, the consequences can be scary. Right? You can get anybody to say anything and post it anywhere. And I would think most unsuspecting observers or consumers of that would buy into it, because it looks like the real person saying real things.

Amanda: No, absolutely. And one of the things you've mentioned, right, is that the best way to properly manage this is probably legislation. And we know legislation moves quite slowly. What, in your mind, could trigger legislation, or this regulation, or this understanding in the legal world about what's happening, to come a bit quicker, a bit sooner?

Dr. Gal: So in the US, and I've heard people talk about this, an important date is 2024, because you have an election coming up. We've all experienced the last election and realized what a slippery slope it can be, and how the things we used to take for granted, like having a reliable and for the most part problem-free process, we shouldn't take for granted anymore.

And that was without ChatGPT and deepfakes being widely and commercially available. And I know people have a sense of urgency around this, because there's a real concern that with these two technologies combined, it's going to be extremely difficult to ensure that the next election actually goes through properly. I don't know if this sense of urgency is going to translate into policy and legal work before then, but I really hope something happens, because otherwise, God help us.

Amanda: Yeah, no, fair enough. That was a point that had completely passed me by there. But no, very true. 2024, with deepfakes and large language models and the kind of behavioral targeting you can still do on a lot of social media platforms, would be hectic.

Dr. Gal: In fact, I heard something yesterday. I listened to another podcast, and someone mentioned a deepfake that was done of Biden. Apparently, in that video, he claimed that he would recruit into the army everybody between the ages of 20 and 22. So, you know, it's just a little teaser of what might be coming in that direction.

Amanda: And again, kind of pulling on your education and your degrees: until there is legislation, until there is proper regulation of the training data or whatever it may be, are there any particular hallmarks, or that kind of uncanny-valley sense, that the average person can look out for to understand whether or not something may be generated by a large language model? Or is it really that seamless at this point?

Dr. Gal: Even before the emergence of these large language models, we had problems with misinformation and disinformation, and my recommendation is for people to always look at the source of the information. But I guess the source can be forged pretty easily as well, and therefore it's also important to cross-reference, look for multiple sources, and try to triangulate that way. And at any rate, always be cautious. As an individual user of ChatGPT, like we said before, I don't know that there's much that can be done about whether or not they can access the information that you put on the internet. But one thing you can control is what information you provide them in your prompts. So if you can help it at all, don't provide personal information about you, your family, your hobbies, your preferences, your love life, or your health status. Be very cautious.

Amanda: Is there anything that we haven't talked about, or that you haven't had much of a chance to talk about in other articles, that you feel is important or necessary or critical for people to understand, or just interesting to know? Anything along those lines?

Dr. Gal: I mean, we can have a slightly more philosophical debate about the importance of privacy. Many people tend to think, "I have nothing to hide, so why should I care if they use my data?" And that always strikes me as a bizarre and somewhat naive statement. Even if you have nothing to hide today, whatever you think is innocuous today might come back and bite you in the ass tomorrow, because things might be different. Your country might be under a different kind of government. Legislation might be different. Social norms might be different. So even if something appears to be inconsequential today, it might not be tomorrow.

We also need to keep in mind that there's a whole economy, an entire global infrastructure of data brokers: thousands of different companies of different sizes, different stripes, different colors, different flavors, but their business model is to get their hands on as much data as they possibly can, in some cases analyze it, provide some insights, and sell it to other companies. For the most part, they have no regard for people's privacy. So even if you're posting something that might appear completely innocent in one context, keep in mind that it might be used for all sorts of purposes in very different contexts that you hadn't considered at all.

So I would urge people to be more cautious about the kind of information they leave behind them.

Amanda: No, fair enough. It is one of those things where we are very in-the-moment focused on what we're sharing and what we're saying, not necessarily considering that everything could potentially change tomorrow.

Dr. Gal: Yeah. And I think we still lack awareness around that. I think many people still prefer having personalized services online, built on the data they provide, at the expense of their privacy. And I think that's concerning.

Amanda: Well, thank you so much for your time today. It was lovely chatting with you. And if people wanted to find further information or further articles about things that you've written and published about this, what would be the best way to find it?

Dr. Gal: The best way is to Google my name. *laughs*

Amanda: *laughs* Fair enough. Fair enough. Well, thank you again and have a lovely day.

DDG Browser Live Read - Browser Highlights

Rosie: Today I’d like to highlight a couple of features offered in DuckDuckGo’s browser.

Both are really important to know about as it relates to Artificial Intelligence.

Now, as you know, DuckDuckGo’s search engine does not track what you search for.

It also offers helpful AI summaries. Similar to what Google has, but here’s the key difference:

DuckDuckGo’s AI Summaries are more concise than what Google offers, and are more private.

I can’t stress that last point enough.

Because a lot of information we enter online is anything but.

Now let’s talk about AI Chat models for a second, like ChatGPT.

Although we prefer you not use AI Chat models, if you choose to do so, Duck.ai allows you to privately access them within the DuckDuckGo Browser.

DuckDuckGo anonymizes chats, so AI companies don't know who the queries are coming from.

Your data is never used to train these chat models.

And your conversations with these chat models are completely private.

Duck.ai costs you nothing to use, and there’s no account required to do so.

And if you’re like us at Stupid Sexy Privacy, and you’re anti-AI, you can turn off both Duck.ai and the AI search summaries right within the browser.

No harm. No foul.

Bonzo The Monkey: *Monkey Sound*

Rosie: Oh. Thanks for reminding me, Bonzo. I meant to include this sentence:

Do you think AI Slop is ruining the Internet? We do too.

That’s why DuckDuckGo’s search engine also lets you filter AI images out of your search results.

Bonzo The Monkey: *irate Monkey Sounds!*

Rosie: I know. Those images you saw of Clint Eastwood were very upsetting.

Bonzo The Monkey: *Monkey Sound*

Rosie: What? I didn’t make those!

Bonzo The Monkey: *Monkey Sound*

Rosie: No, I didn’t!

Rosie: Andrew, can you please come get Bonzo. He’s accusing me of creating synthetic media again, and that’s really offensive.

*Andrew comes and gets Bonzo. SFX of a chase à la Hanna-Barbera. There’s some banging.*

*Rosie Clears her Throat*

Rosie: (to herself) Where was I?

Rosie: So, do you want to explore these AI tools without having them creep on you?

Well, there’s a browser designed for data protection, not data collection, and that’s DuckDuckGo.

Make sure you visit DuckDuckGo.com, and check out today’s show notes for a link to download the DuckDuckGo Browser for your laptop and mobile device.

Stupid Sexy Privacy Outro

Rosie: This episode of Stupid Sexy Privacy was recorded in Hollywood, California.

It was written by BJ Mendelson, produced by Andrew VanVoorhis, and hosted by me, Rosie Tran.

And of course, our program is sponsored by our friends at DuckDuckGo.

If you enjoy the show, I hope you’ll take a moment to leave us a review on Apple Podcasts, or wherever you may be listening.

This won’t take more than two minutes of your time, and leaving us a review will help other people find us.

We have a crazy goal of helping five percent of Americans get one percent better at protecting themselves from Fascists and Weirdos.

Your reviews can help us reach that goal, since leaving one makes our show easier to find.

So, please take a moment to leave us a review, and I’ll see you right back here next Thursday at midnight. 

After you watch Rosie Tran Presents on Amazon Prime, right?