Are Baby Boomers Really to Blame For The Disinformation Plague?

In this special Thanksgiving episode of Stupid Sexy Privacy, BJ Mendelson speaks with members of Clemson University's Human and Technology Lab about baby boomers spreading disinformation, how children interact with artificial intelligence, and what the right level of privacy is for all of us.


Well Howdy,

This episode ran as scheduled on Thursday, but as my fellow Americans know, Thursday is Thanksgiving.

So ...

We decided to run the show notes today, since we've been teasing for a while that you should expect emails from us on Tuesdays in the future.

Around Christmas and New Year (I believe Christmas also falls on a Thursday), you'll get another Tuesday email with show notes for that episode.

We're at least three weeks ahead of you, production-wise. So whenever there's a holiday on a Thursday in 2026, you can expect a new episode from us.

That's because we're already taking our break for the holiday. Or are about to do so when you hear the episode.

On Tuesdays, as soon as I am done catching up on recordings for the original batch of Privacy Tips — our sister podcast — you'll get a short privacy tips post. And then on Thursdays, the show notes for Stupid Sexy Privacy.

We won't ever email you more than twice a week. But we do want all of our new friends signing up for this newsletter to know what to expect.

Short show notes this week, but I really liked this episode, and I hope you'll sign up (it's free) to check out its transcript in full.

-BJ

Show Notes

Stupid Sexy Privacy Show Notes For Season 1, Episode 14

Episode Title: Are Baby Boomers Really to Blame For The Disinformation Plague?

Guests:

Dr. Bart Knijnenburg, Professor at Clemson University

Sushmita Khan, Ph.D. candidate at Clemson University

Dr. Reza Anaraky, Assistant Professor at Louisiana State University

Episode Summary: In this special Thanksgiving episode of Stupid Sexy Privacy, BJ Mendelson speaks with members of Clemson University's Human and Technology Lab about baby boomers spreading disinformation, how children interact with artificial intelligence, and what the right level of privacy is for all of us.

Our Sponsor:

-DuckDuckGo (Recommended Browser and VPN)

Get Your Privacy Notebook: Get your Leuchtturm1917 notebook here.

-BitWarden.com (Password Manager: easier to use, costs money)

-KeePassXC (Password Manager: free, harder to use, but more secure)

-Slnt Privacy Stickers for Phones and Laptops

-Mic-Lock Microphone Blockers

-Mic-Lock Camera Finder Pro

-BitDefender (Anti-Virus)

-Stop using SMS and WhatsApp, start using Signal.

-Use Element instead of Slack for group coordination

Get In Touch: You can contact us here


Want the full transcript for this week's episode? Easy. All you gotta do is sign up for our free newsletter. If you do, you'll also get a .mp3 and .pdf of our new book, "How to Protect Yourself From Fascists & Weirdos," as soon as it's ready.

Stupid Sexy Privacy Season 1, Episode 14 Full Transcript

DuckDuckGo Spot

Announcer: Welcome back to the DuckDuckGo Privacy Challenge, where contestants get a chance to learn why millions use DuckDuckGo's free browser to search and browse online. Now for our first contestant, Julie. True or false? Google's Chrome protects your personal information from being tracked.

Julie: Hmm,  I'm going to say true.

Announcer: Incorrect, Julie. If you use Google Search or their Chrome browser, your personal information has probably been exposed. Not just your searches, but things like your email, location, and even financial or medical information.

Julie: Wow, I had no idea.

Announcer: Second question. What browser can you switch to for better privacy protection?

Julie: Is it DuckDuckGo? 

Announcer: That's correct. The DuckDuckGo browser keeps your personal information protected. Say goodbye to hackers, scammers, and data-hungry companies. Download from DuckDuckGo.com or wherever you get your apps.

Show Intro

Rosie: Welcome to another edition of Stupid Sexy Privacy. 

Andrew: A podcast miniseries sponsored by our friends at DuckDuckGo. 

Rosie: I’m your host, Rosie Tran. 

You may have seen me on Rosie Tran Presents, which is now available on Amazon Prime.

Andrew: And I’m your co-producer, Andrew VanVoorhis. With us, as always, is Bonzo the Snow Monkey.

Bonzo: Monkey sound!

Rosie: I’m pretty sure that’s not what a Japanese Macaque sounds like.

Andrew: Oh it’s not. Not even close.

Rosie: Let’s hope there aren’t any zoologists listening.

Bonzo: (Mystery Sound)

Rosie: Ok. I’m ALSO pretty sure that’s not what a Snow Monkey sounds like.

*Clears her throat*

Rosie: Over the course of this miniseries, we’re going to offer you short, actionable tips to protect your data, your privacy, and yourself from fascists and weirdos.

These tips were sourced by our fearless leader — he really hates when we call him that — BJ Mendelson. 

Episodes 1 through 31 were written a couple of years ago. 

But since a lot of that advice is still relevant, we thought it would be worth sharing again for those who missed it.

Andrew: And if you have heard these episodes before, you should know we’ve gone back and updated a bunch of them.

Even adding some brand new interviews and privacy tips along the way.

Rosie: That’s right. So before we get into today’s episode, make sure you visit StupidSexyPrivacy.com and subscribe to our newsletter.

Andrew: This way you can get updates on the show, and be the first to know when new episodes are released in 2026.

Rosie: And if you sign-up for the newsletter, you’ll also get a free pdf and mp3 copy of BJ and Amanda King’s new book, “How to Protect Yourself From Fascists & Weirdos.” All you have to do is visit StupidSexyPrivacy.com

Andrew: StupidSexyPrivacy.com

Rosie: That’s what I just said. StupidSexyPrivacy.com.

Andrew: I know, but repetition is the key to success. You know what else is?

Rosie: What?

Bonzo: (Mystery Sound)

Rosie: I’m really glad this show isn’t on YouTube, because they’d pull it down like, immediately.

Andrew: I know. Google sucks.

Rosie: And on that note, let’s get to today’s privacy tip!

Privacy Tip - Buyers Beware

Rosie: Happy Thanksgiving everyone. 

Now, we know this holiday can be complicated to celebrate. 

Both because of the genocide, disrespect, and biological warfare America has practiced toward our Native brothers and sisters, dating back to the First Thanksgiving in 1621.

And also because Thanksgiving — in modern America — means having to sit at a dinner table with your racist, Facebook-loving Aunt.

But we want you to hear us out.

Yes. We should give our Native brothers and sisters their land back. 

But we should also recognize that Thanksgiving, like a lot of things, was also hijacked from the Wampanoag and other Tribes based in what we now call New England. 

The practice of giving thanks is part of a long tradition among indigenous communities. A practice dating back thousands of years.

So, we at Stupid Sexy Privacy like to focus on the true origins of Thanksgiving, and we encourage you to do the same.

We also encourage you to support The John R. Lewis Voting Rights Advancement Act. This bill would update, modernize, and strengthen the Voting Rights Act of 1965. A law that has been under constant attack by the Supreme Court’s Chief Justice, John Roberts.

The actions of the Roberts Court have allowed white supremacists to increasingly limit the right to vote. Especially to anyone who isn’t white, male, or Christian.

This includes our Native brothers and sisters, who were only granted the right to vote in 1924.

And even today, they, like many of our friends and neighbors, routinely face obstacles meant to prevent them from exercising that right.

We will talk more about how to support the John R. Lewis Voting Rights Advancement Act in future episodes. 

For now, we want to turn your attention to a wide-ranging discussion our co-producer, BJ Mendelson, had with Dr. Bart Knijnenburg, who is the co-director of the Human and Technology Laboratory at Clemson University.

BJ and Dr. Bart were joined by Dr. Reza Anaraky, an assistant professor at Louisiana State University, and Sushmita Khan, a Ph.D. candidate at Clemson.

While you’re hopefully enjoying a full stomach, or a safe flight, this conversation is the perfect companion for those interested in how the elderly and children interact with Artificial Intelligence.

What the best level of privacy is for most people.

And whether or not we should be mad at baby boomers for spreading disinformation.

So let’s get to it.

Interview Transcript Part 1

(Note to readers: The following transcript has been lightly edited for brevity and clarity.)

BJ Mendelson, Co-Producer, Stupid Sexy Privacy: All right, before we begin, I'd like to have, since we have a number of guests and I think this is the biggest interview I've done for Stupid Sexy Privacy, I'd like to have everyone just take a moment to introduce themselves.

Dr. Bart Knijnenburg, Co-director of the Human and Technology Laboratory at Clemson University: Go ahead Sushmita, I'll let you go first.

Sushmita Khan, a Ph.D. candidate at Clemson: Hi BJ, thank you so much for having us here. I am Sushmita. I am a PhD candidate at Clemson University. Bart is my advisor and my research is primarily on AI and privacy for children.

Dr. Reza Anaraky, Assistant Professor at Louisiana State, formerly of the HAT Lab at Clemson University: I’ll go next. It's a pleasure to be here. I'm Reza Anaraky. I'm an assistant professor at Louisiana State University, and I study HCI, privacy, and older adults.

Dr. Knijnenburg: I'm Bart, Bart Knijnenburg. My students usually just call me Bart or Dr. Bart. I'm a professor at Clemson University. I've been here for about 10 years. My research is on adaptive systems, privacy, and all the possible combinations between those two things.

BJ Mendelson: Great, and I thank you, the three of you, for agreeing to join me. We have a lot to cover, so I'm probably gonna go right into it. 

So, Sushmita, I'd like to start with you, because at the time we recorded this interview, my niece had just started middle school. She's starting just as New York begins to implement a ban on smartphone use during school hours. I have to imagine that, like a lot of kids her age (she's about 10) who have access to the Internet, she's had some conversations about how to stay safe online. Is that what you encounter? And, just as a follow-up, how prepared are middle-school-aged students when you talk with them about these issues?

Sushmita Khan: Okay, so thanks for the question. Middle school students are actually quite prepared for online safety, but primarily as it applies to concepts of stranger danger: do not share your password, your home location, or any other personally identifiable information with strangers on the Internet. This also includes things like credit card numbers, Social Security numbers, etc.

But it's important to kind of recognize that not all middle schoolers have cell phones or social media accounts. Like, fifth graders are relatively younger, so they tend not to have a social media presence. But a lot of them do play multiplayer games online. So the concept of not sharing personally identifiable information is particularly something that you see more amongst the gaming community. 

But that doesn't mean they're immune to sharing personally identifiable information. They can be very easily convinced. It could be simply because they trust the person they're gaming with, or because they've been persuaded. So even in instances when they know it's against their best interest to share the information, they may just end up sharing it. For example,

We talked to this aspiring influencer on TikTok. She was in sixth or seventh grade at the time, and she would often host open Zoom meet-and-greets in her own personal Zoom room with a bunch of strangers she had never met. And the funny story is, it's not like her parents or her family didn't know about it. Her teachers knew, and sometimes siblings know as well, but often the advice is just to de-escalate in cases of intense situations.

So I guess to summarize my answer, kids do know about online privacy and they have some skills but they don't really conceptualize the consequences of the danger in that situation.

BJ: And that seems like a problem with most people, right? We know there's a problem, but we can't seem to conceptualize some of the issues surrounding it. So, Reza, that brings me to you. In our experience at Stupid Sexy Privacy, we've found that, generally speaking, older adults don't trust technology such as AI. AI here refers to things like chatbots, large language models, and machine learning. But they're also not entirely clear on what is and what is not AI. And to complicate things, some of our older friends and neighbors may be struggling with other issues that can impair their judgment. How do you adjust for that when speaking to older populations about some of these privacy concerns?

Dr. Reza Anaraky: The general argument that older adults don't trust technology is partly true. But to complicate things even more, with older adults it's not just about them not being clear on things. It's more about misconceptions, and about technology not being clear and not being designed with older adults in mind. Technology products are usually designed by young adults, so older adults' ways of thinking, their wants, and their needs are not accounted for in the technology. The output, naturally, is low adoption rates among older adults, hesitation, and not full engagement. And as a result, learning materials are also not as effective and productive. In education, we have the elaboration likelihood model framework. It originally comes from the persuasion literature, but we use it in broader domains. It argues that when we want an audience to engage with a topic, they should have the motivation to engage.

So that builds the cognitive engagement, and the ability, like the literacy, for example. And I can give you some examples of how there is a connection. So older adults don't have a lot of motivation, or the technology is not tailored to them.

In a recent study, we talked about asocial technology. From a young adult's perspective, if you want to help older adults, you think of older adults as a population who have, for example, a lot of physical problems, so it's hard for them to go do their groceries. A natural solution is: let's design an online platform for them, so they can sit in the comfort of their chair, don't have to navigate traffic, don't need to wait in long queues, and don't have to carry the load of what they buy. They can order products online and get them delivered to their homes. In the study, we showed that the more older adults use these asocial technologies (we call them "asocial" because they digitalize traditional in-person experiences), the more depressed they are the next year, and the more anxious they feel.

So these asocial technologies are killing some aspects of the experience that are consequential and important for older adults, mainly the in-person interactions. So you see, the technology is missing some things that are valuable to older adults, and therefore older adults do not have a very positive attitude toward these technologies.

In another study, we thought about privacy. It's important to note that older adults have a broader definition of privacy. They often view privacy not just as wanting to secure their data, but also as the right to exercise autonomy over what they want, or over their personal territories. To give you another example: watching sports online is a hobby for older adults, and for many people. What do you do if you want to enhance that experience? You will probably simulate the stadium, simulate a very fancy environment, maybe a big screen showing the game, good audio. And you expect older adults to fully engage and love that. But that's not true for everyone, because...

From my interviews with some older adults, for example, they say they prefer a small screen rather than a big TV, because they can lie down more easily, get away from it, and exercise autonomy over their life. So you see, there are assumptions that we as young adults, young developers, can make, and they are reasonable, but they don't necessarily resonate with older adults. So I want to question the assumptions that we pose in technology design, point the finger toward the designers of the technology, and argue that they really don't understand older adults. That is why there are challenges and setbacks in technology design, and in convincing and persuading older adults to use this technology.

BJ: That's something we talk about a lot: the design of a lot of apps and software is done for a specific group of people, right? And it's often not done for people outside of what you would typically find in, let's say, Silicon Valley. It's generally upwardly mobile white people designing the technology, and they're not thinking about children or the elderly.


Reza: Right, now if you want to enhance this technology for older adults, there are also ways to, or kind of optimize your communication. I'm sure Bart has experiences with the AI versus human communication and faces, and you can probably explain.

Dr. Bart Knijnenburg: Yeah. I was going to say, to some extent it's a good thing that older adults don't... you know, it's okay for them to be a little bit more hesitant, right? This is something we see with kids: they sometimes jump headfirst into a new technology and don't really think. (laughs)

Older adults are the opposite. They are really careful. And we can actually see that as a positive trait when it comes to privacy. With college students, like freshmen, we did some studies on cybercrime victimization. We saw that college students who rate themselves as more familiar with technology are actually more likely to be victimized, because they are more daring in what they do. Older adults are much more conservative in what they do. They have this loss aversion. They don't want to venture outside of what they're comfortable with.

For good reasons, right? Like they prefer to stick with what they're comfortable with. It helps for them to have a personal connection and to be able to take things at their own pace, right? When we're doing privacy education with older adults, we actually find, kind of going back to your question about AI, that AI is actually really helpful when it comes to teaching older adults about privacy, because AI has a more natural interaction than what you get from a piece of text or these interactive privacy tutorials. And it also is very patient, right? 

You can keep asking questions and questions and questions, and it will always answer. But older adults don't really trust AI. Now, to some extent, that's a good thing, right? We don't want them to put all their trust in a system they may not fully understand. But if you then want to use AI to help older adults with privacy education, you're left with a conundrum, because they're supposed to trust the system that helps them, that teaches them something. What we have found is that trust transfer can actually really help. One of my students who just graduated did a study where she had an older adult, someone from the older adults' peer group, saying, "Hey, we have this privacy education, and there's this chatbot that's going to help walk you through it." That very short introduction, just saying "I vouch for this chatbot that can help you," took away a lot of the hesitation older adults had about using AI for privacy education. And importantly, we didn't find that it resulted in over-trust. That was a big concern for us, that they would now trust everything the AI would do. That's not true.

So again, I think the carefulness is a positive trait. And if you can find a way for them to engage with privacy education in a personable and personalized way, for example through AI, that can really help them understand how to navigate the online world.

BJ:  Yeah, and Sushmita, I'm curious about how children are interacting with AI. What have you found on that end of things in trying to educate them about a lot of these different topics?

Sushmita: So children tend to be inherently curious. If you put something new, interesting, and fun in front of them, they will engage with it and interact with it. And given their age and their brain development, they're not always thinking about worst-case scenarios or the consequences. At the same time, children don't really have the quote-unquote advanced tools, or the skill set and knowledge required, to really understand these otherwise abstract concepts, like AI.

It's not something you can touch or feel; it's just something happening behind the scenes. So that really brings up the question: how do you talk to kids about these concepts and get them to really conceptualize them? Like I was saying, they don't necessarily conceptualize well. And we did this study, a super fun study, where we found that the simple concept of counting is a very effective mechanism for teaching them about things like personalized recommendations and how they happen behind the scenes.

So what we did was give them the context of a souvenir store. Because of the region we're in, football is very popular and kids are very much into it. So in that context, there was a sale of football-game-related products (merchandise, souvenirs, etc.), and we gave them a bunch of data and said, why don't you look through the data?

And this was between fifth and eighth grade, so you have to understand these are really young people we're talking about. They don't even have advanced math skills yet. We said: go through the data, count the data, and make an advertisement for this business so they can increase their sales and attract more customers.

Then they presented their analysis, and I thought it was very interesting, because not only did they count and sort by frequency, they also contextualized it with the problem at hand. It's like: "In this region, most people are fans of the sport and of the colors. Given that football games are during the fall season, people would probably want to wear something warm, like a hoodie. So probably a fleece-lined hoodie, in the school's color, because that's the fan base."

I particularly remember one presentation where a student designed a small, transparent purse. Her explanation was that the data had a lot of purses (the frequency of purses was quite high), but if you're going to a football game, you can't take a big purse, or a purse that's not transparent. That's why it had those specific dimensions and was transparent.

So I thought it was very interesting that you can actually capitalize on whatever skills kids have. Given the correct nudges and effective tools, they can connect the dots that are already there in their minds. They just needed the skills, or the equipment, to connect them.

But what was even more interesting is that when you contextualize math problems in real-life experiences, for example coming across a personalized recommendation, it also makes students more interested in math.

And, you know, there's always a group of people who are more into math, and a group who are more into the social sciences. We had students who said they were more into social sciences start getting more intrigued by math, and vice versa. Instead of giving an obscure math or computer science example, grounding it in real-life experience seemed to work both ways, which I thought was very fun.


BJ: That's awesome. I'm always happy to hear things like that. One of the reasons I was so excited to speak with the three of you is that I'm always excited to hear how people respond to these concepts, especially across different age groups. And it's interesting for me to hear some of the similarities. So, Reza, I have a question for you. We tend to pick on baby boomers a little bit at Stupid Sexy Privacy. For better or worse, it's a whole other thing we can get into some other time. But I imagine you have your hands full with this particular group. So I'm a little curious about, statistically speaking, this generation. How do I describe this? They're not known for their empathy, let's say, as you can see from their voting patterns, or just from a lot of the research that's been done. So I'm curious, when it comes to teaching them as a group, what steps or tactics might you use to stress the importance of privacy not only as an individual matter, but also as a group matter? It's a communal effort. It's not something we can solve individually. And I'm a little curious how they respond to that, and about some of the tactics you might use to convey it.

Reza: Yeah, yeah, absolutely. So as we age, our views shift and change. And from my experience, I can tell you that older adult populations want to connect more. They have more of a tendency, or urge, to connect with other people. So I would say older adults have a fair amount of empathy. So we can look into other things that change. My research in psychology, my collaboration with Dr. Kaileigh Byrne, showed that older adults are more likely to spend effort to hold on to what they have, whereas younger adults are more likely to spend effort to acquire new gains and collect something new. So the loss aversion tendency is stronger for older adults, and the gain-seeking tendency is stronger for young adults.

This notion may apply to ideologies and ideas too. For example, if you are older, you are more likely to want to stick with your opinion. That's a state you have in your mind, and you've spent most of your life with it, so you're less likely to let it go. We studied something similar in the privacy domain.

So let's say you have a technology product that can protect your privacy. I can tell you it's a privacy-enhancing technology, or I can tell you it's a privacy-preserving technology. We're just playing with words, right? It's the same product, and it promotes your privacy either way. But in one version I tell you it enhances your privacy, and in the other I tell you it preserves your privacy. It turns out that older adults are a little more inclined to take action, for example by adopting the tool, if I tell them it's a privacy-preserving technology.

So there are some things that change with age, and we can leverage those to communicate privacy. But the collaborative aspect and the sense of community are definitely also important, and they make older adults feel more empowered. It's like a feedback loop: as their sense of community belonging improves, their self-efficacy improves and they feel more empowered, and that self-efficacy in turn promotes the sense of community belonging. So the collaborative aspect is definitely helpful. Caregivers also play a role here. If there is someone acting as a technology caregiver, they can strengthen these associations. And the person who provides technology caregiving will also make older adults feel like part of a community.

Advertising Break at 32:44

Hello Everyone, this is Amanda King, and I am one of the co-hosts of Stupid Sexy Privacy.

These days, I spend most of my time speaking to businesses and audiences about search engine optimization. 

But I do want to take a moment to tell you about a book I co-authored with B.J. Mendelson.

It’s called “How to Protect Yourself From Fascists & Weirdos,” and the title tells you everything you need to know about what’s inside.

Thanks to our friends at DuckDuckGo, BJ and I are releasing this book, for free, in 2026.

If you want a DRM free .pdf copy? You can have one.

If you want a DRM free .mp3 of the new audiobook? You can have that too.

All you need to do is visit StupidSexyPrivacy.com and subscribe to our newsletter.

That website again is StupidSexyPrivacy.com, and we’ll send you both the PDF and the MP3 as soon as they’re ready.

Now, I gotta get out of here before Bonzo shows up. 

He doesn’t think SEO is still a thing. And I don’t have the time to argue with him.

I got a book to finish.

Interview Transcript Part 2


BJ: Yeah, and I think that's particularly important given the age range we're now talking about with the baby boomers, where the people taking care of them might be from my generation, and might be more tech-inclined and able to show them. But I just wanted to touch on this point. There's this old study, right, where it's the same bottle of wine, but people are told each cup they're given comes from a different price range, and people attribute the most expensive cup to being the fancier wine. So it sounds a little bit like that, right? You're saying the presentation is what matters.

Reza: Yeah, yeah, that's very important. I mean, we anchor everything. We have an anchor point in our comparisons and decision-making, and then we compare our options against it. So if you tell someone something positive about the bottle of wine in your example, they may attribute some positive traits to it and decide it's the better bet. And so...

Now, it gets tricky when these presentations interact with individual characteristics like age: if I'm older, a certain type of framing is going to work better for me. And you can use that creatively. Let's say you're developing a fitness app, and your goal is... what is your goal? Your goal is to get everyone to exercise. So you should personalize the message you communicate to your audience to maximize your output, your goal, whatever you're aiming at. For example, what were younger adults? Younger adults were gain seekers. They wanted to acquire new gains. You can tell them: in order to get in shape and be fit, go out and exercise, because that's a gain they acquire. What about older adults? They were loss-averse. So we can tell them: in order to not lose your ability to be physically active, go out and do the exercise. So we can leverage these different presentation styles, considering the individual's characteristics, to their benefit, and to achieve a better goal for everyone.

BJ: That’s a great point. Bart, I have a couple of questions that I'm going to kind of smush together. This interview is going to air after our conversation with Rebecca Williams at the ACLU. We were talking to her about how, if you go to a protest with a completely locked-down phone, there's a risk you could stand out, because you'd appear as an anomaly compared to the other phones around you. So one of the things I wanted to ask you about was …

And when we talk about more privacy … Is more privacy always the answer to a lot of the privacy and safety issues that are discussed today?

Dr. Bart: Yeah, that's a good question. So it's not always the best answer. Let me first address the security part of this, which is fascinating to me, because you're not just more suspicious-looking if you apply more security; people are also suspicious of security, right? One thing that I notice, and people who do research on security are kind of struggling with this, is that the moment you tout the security practices of your tools, people actually get more suspicious, right? I've been calling this the hundred percent not poisonous milk principle. You go to the supermarket and you pick up a carton of milk and it says, this is 100% not poisonous. (laughs) And you're like, okay, I have questions, right? So how do you get around that? Well, I think the best way around that is to normalize security, right? Make that the endowed option, make that the thing that people come to expect, so that the absence of security actually is more suspicious than the presence of security. That would be the solution there. Now, as for the balance between, you know, what level of privacy do you need? It really does depend on the person, right? I often follow the perspective of Sandra Petronio, who had this communication privacy management theory. Her theory is about how people have privacy boundaries that they can expand and contract, right? So it's not just that privacy is something that I always have to keep for myself. Sometimes it's beneficial to put someone in the circle, right? You can gain trust, you have more efficient communication. It can be another person, it can also be a company, right? I am gladly giving up some of my movie and series watching behavior to Netflix so that it can actually recommend new series to me, right? Same thing with Spotify, although my Spotify currently has a lot of Elmo (laughs) in it because of my kids.
So the idea that disclosing data is bad is just not true, right? Disclosing can actually lead to personal and social benefits. Now, that makes things really difficult, right? Because you can't take a do-this-and-do-that approach to privacy education, because it really differs per person, right? You also can't just give people all the knobs and things to control everything by themselves, because people get super overwhelmed, right? If you think about it, on a day-to-day basis we see so many different privacy-related situations. We have so many privacy transactions. I kind of hate that term, but for the purpose of this discussion, it's kind of useful to think about it that way.

You quickly realize that giving people control over all those decisions is actually not going to work, right? So we need something to make it manageable, right? And what we as people do is we just take shortcuts, right? We take some rules of thumb that make the process quicker. Like with the bottle of wine, we look at the expensiveness of the glass, right? That's the shortcut we take to decide this one is better, right? And very often that actually works really well, but sometimes it leads to inconsistent behavior, right? With Reza, we have shown in several studies that whether you ask people to opt into something or opt out of something does change how people behave, even though it should be the same, right? Same thing whether you frame it as a gain or a loss; it actually makes a difference.

And there's also this thing called a compromise effect that I've shown in a couple of situations, where when you give people a whole range of privacy options, if people don't know, they kind of just pick one of the middle options. So some companies exploit this by having outrageous options when it comes to privacy, right? So that you kind of push people toward the higher end, right? And so we fall prey to these kinds of situations, right? But it doesn't mean that we're completely mindless, right? I remember one study I did with Reza where we literally asked people: do you want to tag everyone in every single photo that you have on Facebook? Right? And we implemented a default effect and a framing effect to see if we could find any differences. Turned out in that study, everyone said no. (laughs) Nobody wanted to do that, right? With good reason. You don't want to tag every single photo on Facebook ever, are you crazy? So that does mean that people do think when they're pushed to the limit. But in many, many cases, we're not thinking very carefully, and that's really the problem, right? So that's a difficult problem to solve, and we have some ideas on how to solve it. Again, it kind of goes back to Reza's point about elaboration likelihood, right? If people have motivation and self-efficacy, that tends to help. So build an interface that shows people that they can do it, right? That really does help. We had a situation with these form auto-completion tools, right? You fill in your name and it kind of fills out the entire form, regardless of whether the fields are required or not.

People tend to just leave everything in that form, right? But just adding a button to the end of each field that allows you to remove the information from that field, turning going into the field and pressing backspace into a single click, made people much more purpose-specific in what information they would disclose, compared to just a filled-out form.

So that's a really interesting way of increasing people's perception of, hey, I can do this. And that really helps them make these decisions in a better way.

BJ: So let's go around just one more time. Sushmita, I think everyone agrees that media literacy and teaching kids how to spot disinformation is critically important. I'd love to hear a little bit more about the misinformation module that you offer. I understand it involves free ice cream, which, I'm not going to lie, had my attention as soon as I saw it. I was like, OK, I need to hear more about this.

Sushmita: Yeah, so that was a really fun one, and the results were really great. Obviously it sort of worked. So what we had is this education module on misinformation, and we wrapped it up with this email, supposedly coming from a teacher to the students, saying 'hey, on your field trip day there'll be this ice cream truck where we'll be giving out free ice cream,' or maybe discounted ice cream for a dollar. I don't really remember the details; we used whichever was more believable and seemed more likely to attract people, the kind of language that makes you think, yeah, I really do want free ice cream, so I would fall for it.

And the email was obviously fake. It was riddled with spelling mistakes and weird grammar, it came from a very peculiar email address, and all of that stuff. So we really staged it. And …

That email went out about two weeks ahead of the trip, and then students were talking amongst themselves: yeah, there would be free ice cream, we wonder what it'd be like, and all the stuff that kids talk about. Obviously the teachers were in on it with us, so the day before the field trip … the teacher was talking to a few students: 'What's this thing you all are talking about, free ice cream? Where did you hear it?'

So they managed to get some of their students to talk and be like, yeah, 'we got this email from a teacher' and all of that stuff. So, let's pull out that email and take a look at it. The teacher was like, what email address is this coming from? And the students were like, well, this looks a little odd to be a teacher's email address, right? Because they know what their teachers' email addresses look like. And then it's like, well, ice cream is not spelled with an exclamation point, it's spelled with an I. Things like that, right? (laughs)

So that's when the teachers were pointing out the red flags and the errors, and they tied it up with the fact that, yeah, we all really want free ice cream, which is why we are more likely to overlook the obvious red flags. But if something looks like it's too good to be true, it often is. So look for the red flags first. Because they fell for the misinformation, we did reward them with free ice cream at the end. But yeah, that was a really interesting exercise, because on the day of the field trip slash free ice cream event, the kids were really reflecting on it. One of the PIs who was there was telling us how kids were saying, yeah, there were all of these obvious issues in the email, but we really hoped for free ice cream, so we looked past them. And there were a couple of students who said, 'yeah, we did think it might be too good to be true, and we were not sure it would be there, but most of our friends thought it would be there, so we went with the judgment of the majority.'

And that sentiment, not in those exact terms, resurfaced in the second iteration, where we found that even when kids knew about misinformation and its consequences, given the age group and the stage of their cognitive development, humor tends to override that knowledge. If it's humorous, if it's novel, they're more likely to share it even when they know that it is fake. There is really not a shark on the freeway. But it's funny, you know? So it'll make me look cooler to my friends. It'll increase my social status and all of that stuff. This is something that really surfaced through that same iteration, and it's so obvious that it's always there: if it's funny, then I would probably share it with others. Whatever the intention is, it's not malicious misinformation or disinformation; it's more to get some attention on the internet, and it's more prevalent among younger kids. So that's something we really identified and pinpointed with that study.

Well, I believe that's something we could really focus on: yeah, it might be humorous, but that does not mean that it's not bad or that it doesn't hurt people. So that's the next angle we'd probably want to look at when teaching media literacy.

BJ: Yeah, I mean, my generation grew up in meme culture, right? So we're kind of conditioned to spread something without thinking about it, without stopping, especially if it's funny, right? Because it says something about us. So that's really interesting. Especially because I'm sure kids are most attuned to their social status, right? And they want their peers to think that they're funny.


Sushmita: Yeah, it's a combination of that and age. It's like, well, there's obviously not an octopus climbing the tree. Why would you even think something like that? Nobody would believe it. But in extremely emotional situations, like natural disasters, you're already vulnerable. So if I tell you that there is a shark on the freeway, it's like: the world around me has fallen apart, what is a little shark on the freeway? So yeah.

BJ: Right. And Reza, I sort of have the same question. Again, I don't want to bury baby boomers, but there's certainly some anger among the younger generations concerning the spread and acceptance of disinformation. That's not to say that younger generations aren't responsible, because we certainly are. But I think there's a perception among the younger generations that everyone has that stereotypical uncle, right? The one they dread meeting at Thanksgiving dinner or a family dinner, who maybe is older and got sucked into a Facebook vortex of disinformation. I'm just curious what you've encountered and, as you interact with an elderly population, what steps are usually taken to address that?

Reza: Yeah, well, that's also an interesting question. About disinformation: it's probably a tendency for everyone, regardless of age, that we are more likely to believe disinformation or misinformation that is aligned with our previous mindset, and to dismiss it if it isn't. So that's a fairly universal cognitive tendency we have. And what makes older adults somewhat distinct? Maybe I should have been clear from the beginning that when we talk about older adults, we're talking about people who are 55 or 65 years and older, depending on what definition you follow. But what is distinct about older adults, and what complicates the situation, is what we've been talking about: they don't trust the technology as much as other people do. In our studies, and again, I want to point this back to the developers as a side note, 'hey, you designed the technology such that it's hurting older adults more,' we consistently come across this theme that, for example, dark design patterns, the nudges and the tricks that people use to maximize data disclosure, hurt older adults more. They make older adults overly concerned, even on a subconscious level. So when older adults see, for example, a positive default, disclosure turned on by default, they may feel higher concern. And so probably what's happening behind the scenes is that the excessive use of such mechanisms made older adults so concerned and so skeptical about technology that it shaped their views and mindsets.

Now, coming back to the misinformation story. When information comes through the technology and it contradicts older adults' beliefs, many older adults are quicker to reject it because of the negative experiences they have had with the technology. And at the same time, if there is misinformation that aligns with their opinions, it's also harder to counter it with technology-based interventions, because again, the medium here is not trusted, so they don't trust technology-based interventions either.

So maybe Bart can think of a trust transfer study that turns this problem around: there is a mediator that is trusted, and then others come to trust the technology-based intervention more. But yeah, we have this problem of skepticism that's caused by the technology, and we need to find ways to alleviate it.

BJ: And Bart, there's a whole different constellation of questions I want to ask, but in terms of addressing these problems, and maybe finding a trusted intermediary, I did want to ask you about a tool that we wrote about recently called Block Party. You had said that you can't fully automate the privacy process, and we agree. And it sounds like what Reza is talking about is that you've got a generation of people who don't trust the platforms. And maybe on Sushmita's end, you've got a generation who trust them, but don't necessarily understand the implications that come with that trust. And so, in thinking about a tool like Block Party, where

it serves …  I want to see if it meets your criteria of what you defined as a recommender system for privacy settings. I was hoping you could expand a little bit on that, and maybe talk about other tools that might exist that could help both the younger generation and the older generation deal with some of the problems we've discussed.

Bart: Yeah. I mean, the best tool out there is to talk to other people, right? Before I get into Block Party, going back to the problems with misinformation, I think there's this interesting dichotomy that people have created between disinformation and misinformation, right? Where disinformation is the people who are sharing things maliciously, and misinformation is the poor, defenseless people who are sharing it because they don't know. And I think that's a false dichotomy. In many ways there is, across generations, a problem that I heard described on another podcast, where Olivier Zhudetai (Sp?) was talking about perverse media enjoyment, right? Like, I want to get the likes. I want to create that community. I want to be in my little bubble. My media consumption patterns kind of escalate around that.

I think that's something where going out into the world and talking to real people can sometimes lead to chance encounters and a more neutral understanding of the world that breaks some of these spirals. It's not a coincidence that a lot of this spiraling happened during the COVID pandemic, right? People were kind of locked up and trying to figure this out and navigate it on their own.

Now, going back to Block Party: I think what Block Party does well is that it helps you translate, right? One of the great people in human-computer interaction, Don Norman, posited that there are two gulfs between a human and a system. The Gulf of Execution: I'm trying to do something. How do I get the system to do it?

And the Gulf of Evaluation: I pressed a button, now I need to figure out what it did. Right? And that's an issue in privacy too, right? We're trying to translate my privacy preferences into the settings of the box that I'm using or the website that I'm using. And that's the difficulty. So I think this is something that Block Party does well, right? It brings together a bunch of things in a more user-friendly way, so that you don't have to navigate the 200-plus privacy settings that Facebook has, or read a privacy policy that's longer than the US Constitution, right? I want this to be simplified, and Block Party is a way of doing that. I think in principle, automation can help with that as well, right? There's this idea that I can sometimes offload some of my privacy decisions to an algorithm, because my privacy decisions preferably are somewhat consistent, right? The system can take some of my privacy settings on my phone and, the next time I'm installing an app, already know that I generally don't give this type of app permission to use my camera or my microphone, or that I do give it permission, right? So it can actually automate some of these things. But we have to realize that the input to such an algorithm is our preferences. And the moment I'm no longer making the decision myself, how do I express my preferences, right? That's a problem in recommender systems in general. It's also, paradoxically, going back to the misinformation, what causes that filter bubble. It's the idea that the algorithm gives me X, the algorithm gives me X, the algorithm gives me X. As long as I like X and I never encounter anything beyond X, it keeps feeding me that thing. The solution is to build systems that can help us understand what our preferences are.
Systems that can sometimes say, 'hey, now might be a good time to actually ask you a question about your privacy,' to not just automate it away, but to actually help you figure out what your values are. Why are you using Facebook? I sometimes ask myself that. (laughs) Why am I still using this? And then I can use that critical reflection to think about, okay, what does that mean for my privacy settings? And again, for some people that means everything I post needs to be completely public, because that's how I gain value. For other people, that means I'm only going to follow three people, and that's all I need to do to gain a benefit out of this. Right?

So we need systems that help us reflect on these things, right? Systems that overcome this thing I've been working on with Sushmita called Digital Mindlessness. A lot of the systems that we use, especially in this AI era, are trying to take away so much of the decision complexity that they encourage mindlessness, right? I think taking away decision complexity is a really good idea, but what needs to come in its place is higher-level thinking about what I want to accomplish, rather than 'I don't have to think about it anymore,' right?

That's really the solution. So are there tools out there that do that? Not yet. I think we need to go there, and I think that's an interesting combination of careful interface design and education. And luckily, I see increasing understanding from the big players, a Facebook, a Google, an Apple, that this is in their best interest in the long run as well, right? They don't want customers who are completely uninformed about privacy. They actually want their customers to feel comfortable using their tools so that they keep using them even longer, right? So luckily, over time, it seems like these companies are trying to help their users understand these things. It's an uphill battle and a very difficult process, but we're trying to work on that, yeah.
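Bart's idea of offloading consistent privacy decisions to an algorithm, while still asking the user when preferences are unclear, could be sketched like this. This is a toy illustration, not any real system's API; the class, thresholds, and category names are all hypothetical:

```python
# Toy sketch: suggest a default permission answer for a new app from the
# user's past decisions in the same app category, falling back to asking.
from collections import defaultdict

class PermissionRecommender:
    def __init__(self):
        # (category, permission) -> list of past grant/deny decisions
        self.history = defaultdict(list)

    def record(self, category: str, permission: str, granted: bool):
        self.history[(category, permission)].append(granted)

    def suggest(self, category: str, permission: str, min_votes: int = 3):
        """Return True/False when past behavior is consistent enough,
        or None to signal that the user should be asked directly,
        preserving the moment of reflection Bart argues for."""
        votes = self.history[(category, permission)]
        if len(votes) < min_votes:
            return None  # not enough signal: ask the user
        rate = sum(votes) / len(votes)
        if rate >= 0.8:
            return True   # consistently granted: pre-select "allow"
        if rate <= 0.2:
            return False  # consistently denied: pre-select "deny"
        return None       # inconsistent preferences: ask the user

rec = PermissionRecommender()
for granted in (False, False, False):
    rec.record("game", "microphone", granted)
print(rec.suggest("game", "microphone"))  # False: deny by default
print(rec.suggest("game", "camera"))      # None: no history, ask the user
```

The design choice worth noting is the `None` branch: instead of automating every decision away, the sketch deliberately hands ambiguous or novel cases back to the user, which is exactly the anti-mindlessness behavior discussed above.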

BJ:  This was great. And thank you all for staying about 10 minutes over what we had scheduled. Where can we find each of you real quick? What resources should we check out?

Bart: One thing that is, I think, really useful: I recently co-edited an open access book titled Modern Sociotechnical Perspectives on Privacy. It's written for an academic audience, but it has a lot of introductory articles. Reza is, I think, a co-author on two or three of the articles in the book. So if you're interested in research on privacy and the academic side of things, this is a really nice book, and it's open access, so you can download it and read it. I know a lot of my colleagues are using it when they have students who are starting to get interested in privacy, or in their undergraduate courses on privacy. So it's higher-level education, but I think some people in the audience of this podcast would be really interested in using it.

Live Read

Rosie: There are two economies in America. 

One for the wealthy, and one for you and me. 

And the one for you and me resembles what used to be called a “third-world country.” 

These days, we more politely call them “developing countries.”

And we should!

Because with the way things are going, those developing countries are soon going to kick our ass. 

And offer things like universal healthcare coverage.

So, we know managing your privacy, data security, and anonymity can get expensive.

Which leads to the question: What’s the least you can do, to get the most in protecting yourself from fascists and weirdos?

We’d like to recommend the following:

-Use Signal for all text messaging with your friends, family, co-workers, and fellow protestors. Do NOT use WhatsApp.

-Use Bitwarden to manage your passwords.

And last but not least, get the DuckDuckGo Subscription Plan.

For about $10 a month, or around $100 a year, DuckDuckGo offers a solid VPN, identity theft restoration, private access to advanced AI chat models, and a data removal service.

These four items are often sold separately for way more than $100 a year.

And $100 a year is way less than what you spend on virtually every streaming service.

You can sign up for the DuckDuckGo subscription via the Settings menu in the DuckDuckGo browser, available on iOS, Android, Mac, and Windows, or via the DuckDuckGo subscription website: duckduckgo.com slash subscriptions

The DuckDuckGo subscription is currently available to residents of the U.S., U.K., E.U., and Canada. Feature availability varies by region.

But your peace of mind will not. Because supporting companies like DuckDuckGo is one of the key ways we can defeat fascists and weirdos.

Don’t support companies that support the fascists and weirdos.

Support DuckDuckGo instead.

Stupid Sexy Privacy Outro

Rosie: This episode of Stupid Sexy Privacy was recorded in Hollywood, California.

It was written by BJ Mendelson, produced by Andrew VanVoorhis, and hosted by me, Rosie Tran.

And of course, our program is sponsored by our friends at DuckDuckGo.

If you enjoy the show, I hope you’ll take a moment to leave us a review on Spotify, Apple Podcasts, or wherever you may be listening.

This won’t take more than two minutes of your time, and leaving us a review will help other people find us.

We have a crazy goal of helping five percent of Americans get one percent better at protecting themselves from fascists and weirdos.

Your reviews can help us reach that goal, since leaving one makes our show easier to find.

So, please take a moment to leave us a review, and I’ll see you right back here next Thursday at midnight. 

After you watch Rosie Tran Presents on Amazon Prime, right?