Listen to the Signal Speaks podcast
This podcast was organized by the Office of Equity, Diversity, Inclusion and Anti-Racism at the University of Waterloo, and hosted by Dr. Karim Wissa, Director of Research, Innovation & Knowledge Mobilization at EDI-R, and Dr. Alex Pershai, Associate Director, Equity at EDI-R.
Read the transcript
KW: Hi everybody, I am Karim Wissa, Director of Research, Innovation and Knowledge Mobilization here in the Office of Equity, Diversity, Inclusion and Anti-Racism at the University of Waterloo. And I'm hosting this episode with Alex Pershai.
AP: Hi, I'm Alex Pershai. I'm the Associate Director Equity at the Office of Equity, Diversity, Inclusion and Anti-Racism.
KW: And today we have with us Dr. Brianna Wiens, Assistant Professor in the Department of English Language and Literature at the University of Waterloo and co-founder and co-director, along with Shana MacDonald, of the Feminist Think Tank and SIGNAL. Her current work investigates the circulation and weaponization of tropes of gendered violence across AI and digital media platforms, foregrounding feminist responses to them.
AP: And we also have Dr. Shana MacDonald, who is the O'Donovan Chair in Communication Across Disciplines at the University of Waterloo. Her research explores feminist, queer and anti-racist digital media, as well as the rise of online hate, technology facilitated gender-based violence and disinformation online. She authored a book called The Art of Memes in Feminist Digital Culture, which came out in 2025.
Hello and welcome! And let's start with the exciting news: the SIGNAL Summit is coming up soon! The Summit will take place at the University of Waterloo between the 4th and the 6th of March. So could you please tell us more about this exciting event?
BW: Yeah, absolutely. We're really excited to share details about the upcoming symposium, the March Summit taking place from March 4th to 6th. It is supported by the Global Futures Fund, and this year's gathering is bringing together students, researchers, and community practitioners. We're really trying to explore issues of community building, gender justice, digital media, but also extremism, and, along with that, care-driven research.
SM: Yeah, so the Summit comes out of our work with SIGNAL, which is a new and emerging network on campus. SIGNAL stands for Strategies for Intersectional Gender Justice, Networked Action and Liberation. And the summit was designed with intention. It is around International Women's Day. And so there was a nod to that in the programming. We're balancing plenary talks with interactive workshops, data jams, panel conversations, and keynotes. So…
BW: We've got three days of really exciting workshops!
AP: Cool.
BW: So on the first day, we've got plenaries from leading scholars and practitioners.
And those are all exploring intersectionality in preventing violent extremism, thinking about solidarity and care and activism, but then also the impacts of digital misinformation and toxic media. So we are really lucky to have Jaigris Hodson from Royal Roads University, Canada Research Chair in Digital Misinformation, Polarization, and Antisocial Media. We also have Jillian Hunchak, who analyzes right-wing extremism in the Canadian, American, and British context. We also have panels featuring faculty, postdocs. We've got students who are sharing research on community building, thinking about tech-facilitated gender-based violence, radicalization, and online harm.
And then also what we're really excited about is that there's a Waterloo-specific initiative for student outreach and community building from our colleagues in physics, so Brenda Lee, and then Yu-Ru Liu in Math. And then Rebecca McAlpine in the Integrated Teaching Support Unit. So we'll talk about their work in building kind of community spaces across disciplines.
We also are really excited that we've got creative and skills-based workshops, including erasure poetry as activism, gender justice strategies, and then hands-on data engagement with our open access tools.
So we're really lucky to have Adra Raine, who's an EDI-R fellow, a writer and educator. We have Nick Ruest from York University. And we have Alex and Pamela facilitating some of these. So thank you.
KW: Thank you.
AP: No, thank you. And I can hardly wait for the event, honestly, because it sounds so amazing.
But the Summit is actually a part of the bigger initiative. So would you mind sharing a couple of words about SIGNAL and how it came to be and what the goals are and how the whole project started?
SM: Yeah, absolutely. So, you know, Bri and I have been collaborating for probably over eight years now. And we have a really grassroots initiative on campus called Feminist Think Tank, which is an open-entry kind of social and community space that runs every week throughout winter and fall terms. And within that research and [the] kind of collaborating we've been doing for a long time, we were imagining ways in which we could be responsive to the Hagey Hall attacks in ways that were really productive for the community, but also kind of more broadly nationally. And so SIGNAL comes out of that.
We've long done research on the Internet, and so we're familiar with the ways in which ecosystems have been existing and evolving over the past 10 years. And we've noticed an incredibly troubling trend, probably in the last five years, where there's a complete amplification of some of the more extreme and hate-filled parts of the Internet that are kind of becoming quite mainstream. And so the network is very attentive to that and has brought together scholars, policy analysts, activists, people in a lot of different spaces with a lot of different skill sets to try and figure out how we can approach the problem of online hate, especially as it's related to gender-based violence. But also polarization and radicalization and then how we can counter that.
BW: So this upcoming summit is one of the many events offered by SIGNAL Network. SIGNAL stands for Strategies for Intersectional Gender Justice, Networked Action, and Liberation.
And we started this work with a SSHRC Partnership Development Grant, which was named the Digital Feminist Network of Canada.
But this morphed into SIGNAL because we were trying to change the range of what we were actually engaging in. So SIGNAL was really created in response to a growing and urgent reality that we are seeing, that digital spaces are increasingly sites of harm, particularly for women, queer, trans, two-spirit, and racialized communities. And so globally, we've seen the surge of technology-facilitated violence. And research is showing that online hate, radicalization, and harassment are really not confined to those spaces. They're shaping our classrooms, our relationships, community safety, and democratic participation. And so this project started because of those forms of harassment that we were seeing. What SIGNAL is trying to do is broken down into three different streams that reflect its name.
So in Intersectional Gender Justice, we are focused on mapping technology-facilitated violence and algorithmic misogyny. We're studying how AI systems and digital platforms are reproducing discrimination and then analyzing extremist media ecosystems.
In our second stream, Networked Action, we are really thinking about how to build tools for change. So this includes content moderation approaches, thinking about safety over corporate profits. We are creating and have created misogyny and hate detection tools for podcasts that our colleague Nick Ruest has worked on. And we're thinking about practical resources for educators and institutions, like LEARN shells.
And then in Liberation, we are really committed to preserving and studying digital activism.
So we are archiving online resistance movements, we're looking at counter-speech campaigns so that future organizers can learn from today's strategies.
In large part, the project also started, unfortunately, because of the harassment that many of us receive just for doing anything related to feminism, gender, sexuality, race, or racism on campus. And you may well have received some of these sorts of online threats too. This project that Shana was talking about really grew out of years of watching those same patterns repeat.
And not only for us; you know, I was a PhD student when Shana and I started working together.
So not even just PhD students and faculty, but also undergrad students, really brilliant students, particularly women, queer and trans students, being harassed out of online spaces. And then sometimes out of their programs entirely. We've heard people say, “I'm not staying in this program anymore. I'm not going to go to grad school because it's actually so horrible to be here.” And so it started out as these sorts of individual incidents, like misogynist comments in group chats, coordinated racist harassment on social media. Those revealed, of course, much larger and systematic ecosystems that Shana was talking about as well. I think the real catalyst for some of this work came when our research team began mapping Manosphere podcasts and saw just how widespread and organized that ecosystem was.
So we weren't just seeing these isolated trolls. We were watching this really massive infrastructure that was designed to radicalize young men in particular and then normalize violence against women and marginalized genders. And then we saw this bleeding into the classroom, into our campus culture, and then also our students' lives.
SM: And so one of the lucky things with different forms of funding that have come together to help support SIGNAL is that we're really well prepared, say, for what's coming out with AI right now.
So the way that we've understood ecosystems and radicalization within podcasts is making us well-equipped to start tracking and naming and documenting AI harms, especially around violence against women, rape culture, misogyny, racism. Because it's really prevalent, but it's not being talked about as much as it should be.
AP: Oh, absolutely. But then it actually tackles a bigger problem, because when we talk about intersectional gender justice, especially when it comes to gender-based violence, we do not have proper definitions. Every time it comes to understanding a specific case, the interpretation could be contextual, it could depend on a specific investigator, on the specific situation, on who's involved and who's not involved.
And therefore, even when we're already trying to understand what's happening, unfortunately, when it comes to any kind of misconduct, injustice or other kinds of oppression, it's usually the system that supports those who are privileged. And so when we talk about these things, it's very important to acknowledge what's happening, but also to understand what kind of mechanisms we can use to address those issues. But now it's actually even more complicated with artificial intelligence because, again, there are no mechanisms, not necessarily to understand, but mechanisms to identify these harms and then attach them to the existing policy.
So this is why your work is so important, because it not only creates spaces to have those kinds of conversations, but also probably will bring some protocols or some kind of initiatives that will allow this kind of identification to happen, that will help all sorts of services on campus and beyond.
BW: Yeah, we hope so. Excitingly, I will phrase it as: I think there's a lot of room for dialogues around AI. I think there is this great sort of techno-utopian push to use AI uncritically. But I mean, even this week, we saw one of the key players at Anthropic say that he was stepping down because we are in “mortal peril” with AI, in his words. And so I hope that that kind of prompts us to think: what are the ways that AI is being used for really nefarious means, in addition to how might we think about it in more sustainable, collaborative ways? Because it is quite an individualist pursuit in many ways.
SM: Yeah, and I think that's why I like the way that this work has built organically over an eight-year process, where it's always been centered on community building and supports, but also being really aware of the environment within which we're working and being able to speak to both of those things at the same time. And with the kinds of funding we have right now, we're able to expand the work we do to have dialogue with policymakers. And I think this is where new possibilities are opening up for us. And so, you know, one of the things that SIGNAL will be doing this coming week is going to the UK to talk with Responsible AI. There's a grant that's funding some of the work that we're doing that will produce policy reports around regulation for AI harms. And so I think that it's starting to be part of our mandate and purview. And so, yeah, we are really excited about that.
AP: It is great because – as a practitioner who's involved in gender equality work, I can tell that there's a significant – I'm hesitant to say a ‘lack’ of proper definitions and policies and documentation, but we probably can call it that, just, you know, for the sake of it. The more we know how to understand things, the better we can address them. And not only in terms of understanding the scale of the event, but also in terms of considering the harm done. Because usually it just focuses on response work or on prevention work, but we kind of underestimate the harm that is done in the process. And acknowledging that could be a big step for, you know, this kind of work that we're doing. It's very exciting to know, yeah.
KW: In your research, what are you both seeing in terms of – you say AI harm. Are we looking at deep fakes? Is it more AI-generated responses that are harassing people? What is coming up?
BW: Great question. All of the above.
KW: Right.
BW: So Shana and I are taking this on from almost – not two different angles, but two different almost case studies.
So I've been kind of conceptualizing what I'm calling machine learning misogyny. So what are the ways that the kind of scraping of data, and then the implementation of that data to create new LLMs or AI, are re-embedding old kinds of sexist, racist tropes? So we're looking at four case studies, like bot harassment campaigns, and then pro-natalist surveillance techniques, the weaponization of femininity and gender in general right now. And for some reason, I've totally forgotten what the last case study is.
Anyway, so those four case studies are coming together to think: how are these different areas, which are seemingly disconnected, actually all fuelling each other in really concerning ways to recreate old gender tropes and old racist tropes?
So none of these are new.
KW: Right.
BW: They are kind of deeply embedded in these, like, infrastructures of technology.
KW: Right.
BW: (to Shana) And then do you want to talk about yours?
SM: Yeah, so almost as a complement, I'm coming at it from the visual culture side and how this is kind of being mainstreamed in everyday practices of image-generating content online. And so looking specifically at what we would call ‘AI slop image generation’ and the kinds of genres that are emerging and the patterns and playbooks within those. And again, to no surprise, they're misogynist. There's a lot of rape culture. There's a lot of violence against women. And there is a lot of racism. And so it's not – you cannot disconnect them.
One of the things that I'm mostly concerned with in tracking this stuff is how it relates to new forms of nihilistic violent extremism, where some of our colleagues have done really good research showing how misogyny is at the core of nihilistic violent extremism, and how this becomes a bit of a mainstreaming operation in order to desensitize everyday users to the kinds of ideologies that are on a more extreme level, but kind of baking them into our everyday culture, which I think is cause for alarm.
BW: One of the conversations we were just having the other day was about how image generators like Grok very much, you know, normalize moments of sexual violence. They normalize a lot of sexualized imagery. And then in some ways it makes us immune to a lot of the news around sexual violence that we're actually seeing right now in the news with, for example, the Epstein files, that become so numbing to us because we're seeing it over and over and over again. And image generators like Grok, I think are very much impacting how normalized this all gets.
SM: Yeah. And one of the things I will add, we should actually dialogue more about this, is that the stuff that I'm documenting right now is based on cartoon cat soap operas, which are a very popular new AI-generated genre.
KW: Yeah.
SM: And they're pronatalist. So every single one of them deals with heteronormative cat couples. Sometimes there's interspecies couples, but that has its own racial tones to it, or racist tones to it. And the narrative will always follow pregnancy, sometimes from rape, sometimes not. But it's about pregnant cats. And so there's a pronatalist bent with a very specific vision of an upper class existence that is coded white.
And it's slop, but it is never-ending. And they're getting millions of views. So it's being actively consumed.
KW: And are these, I'm trying to think now, are they hosted on YouTube, these videos?
SM: TikTok.
KW: TikTok.
SM: Yeah. And Instagram.
KW: And Instagram. Okay. So it's pretty mainstream.
SM: Oh, yeah. And it's not hard to find.
KW: Right.
SM: It's very quick on the algorithm.
KW: Right.
AP: Basically, we're talking about the new type of cyberbullying.
KW: Yeah.
AP: And that kind of poses all sorts of new challenges to how to do the cyberbullying prevention work. Because, like, for one, the old school, quote unquote, cyberbullying is not addressed. Because in many cases, people think that whatever you do online is just…you do online.
It doesn't really matter. However, this has significant consequences for people who experience cyberbullying to begin with, but also for teaching AI. Because now, I'm afraid to think how many platforms are actually harvesting your data and your click activity and whatnot. But, like, literally every single step teaches artificial intelligence. And then we come up to this new level of cyberbullying that is not recognized as cyberbullying. But how can we change that? And what also bothers me when we discuss these things is how we homogenize groups, in a way. So when we talk about gender-based violence, if I say gender-based violence, people usually assume that this is violence against women. However, it is a much bigger problem because it also dramatically affects gender minorities. Transgender people, non-binary people, gender-non-conforming people, and each group will experience a different type of cyberbullying. And so I wonder how AI actually contributes to that, on the one hand, by erasing identities, but then, on the other hand, how it exposes trans and non-binary people to be even more vulnerable when it comes to cyberbullying and other kinds of gender-based violence. Because, like, if it's not identifiable, how can we possibly address that?
BW: Yeah, great question. I think there's a lot to unpack there.
(light laughter)
AP: Sorry.
BW: No, it's good! And even before AI, I think the labeling of acts online [is] already doing framing work. So the academic in me can't help but think about, like, Butler's implicit censorship or Foucault's discursive power, to be thinking about: how are actual events being labeled right now in online spaces to frame certain groups of people as ‘harmers’ and certain groups of people as always the ‘victim’? And I think that AI then very much is going to, like, smooth it out. I think about how all of the language gets really, like, slippery and it's not very nuanced. And much of the content that is now being circulated online is AI-generated.
KW: Right.
BW: “It is easier to write with AI”, apparently. “It is quicker to come up with ideas. Good for brainstorming.” People are trying to convince me of [this], and I'm obviously not convinced. None of us here are convinced.
(group laughter)
BW: But that labeling happened before AI and is now being exacerbated by AI in many ways.
And so, I mean, I don't know how intense we want to get about it, but if I think about ICE right now and we think about the labeling of Renée Good as a ‘domestic terrorist’.
KW: Right.
BW: That's a labeling kind of device that's being used to say “let's turn inwards on ourselves.
Let's kind of normalize and then have justification for the violence that's happening.”
SM: I'm going to hop in on that. So, yeah, one of the things that I'm interested in with new technologies right now is the harm that's being done on a social and cultural scale. And so, I love what you're saying here, Bri, because for me it's almost like the principles of algorithms, of binary code, this or that, hierarchies, literal binaries, are producing binaristic thinking once again in us as populations. And so, you're either a domestic terrorist or you support the President of the United States.
BW: Yeah.
SM: That's kind of where it's going, right? And so, I think that when we're going back to the conversation about gender, it's really useful, I believe, and we've both written on this recently in separate papers on thinking about femmephobia because it allows us to think about it beyond just cis women. It allows us to think about anybody with a femme-presenting or non-hyper-masculine performance to be a threat and that that gets to be an object of hate or erasure. And so, I think AI does a flattening, I agree, or a smoothing that both erases and then implies places of threat.
So, if I think back to some of the videos I'm seeing, there is always going to be a gender hierarchy. They are using femme – women cats. Femme-presenting figures. But also, they're implying that the worst thing that can happen, and I'm going to be explicit for a second, is the raping of males. So, the rape of women is not of consequence, but some of the punchlines go to the raping of males. And so, I think that it's completely erased trans and non-binary experience, and yet it's in there, but it's flattened. And I really insist that we slow down all of the media we're consuming and look at all of the levels of that hidden binaristic thinking. Again, these videos only exist because algorithms are taking in information to output that video. Yes, there are prompts, but this image generation is also an amalgamation of discourse that's slotted into categories that simplify everything.
KW: Right. So, it's kind of like a binary determinism? Like, because the code is structured this way, the output becomes….
SM: I think so.
KW: Right.
SM: But that's one that's deeply socially informed.
BW: Yeah.
KW: Yeah, exactly.
SM: By terrible data.
KW: Right. Right.
AP: It's interesting you said that, because I teach a course called Critical Masculinities, which is basically the critical revision of different types of masculinities from the perspective of different types of hegemonies. And one thing that repeatedly comes up in the analysis of every single social institution that we touch, be it race, be it family, fatherhood, queer identities, gender-non-conforming identities, and so on and so forth, is that being femme is being targeted. So it's basically not just about who has access to power, which would be the more common narrative in gender studies; there's an underlying motive there that, no matter what we do, we still reestablish the fact that being femme will face all sorts of social consequences. And that actually now manifests through AI, and that needs to be addressed, both in terms of preventing gender-based violence and in terms of gender equality education. So I think that those things are actually connected, and it's interesting that it's only these kinds of conversations that allow us to connect them, because we usually discuss them separately.
BW: I would say unless you're doing femme the, in quotes, right way, according to kind of conservative, even alt-right spaces, right? If we think about trad wife, that's an incredibly femme, not in the political sense, but in the aesthetic sense, identity. And that is being very much pushed in pronatalist or in kind of women's spaces. Saying like, “this is the way to be the right woman” right now.
But then at the same time, kind of thinking about what's being said here, this reminds us of why literacy is such an important, you know, historically, powerful tool for gaining freedom, for gaining agency, and the ability to be heard.
So in my course, The Discourse of Dissent, we recently read Freedom is a Constant Struggle by Angela Davis, where education is framed as essential to resisting oppression. And we read On Tyranny by Timothy Snyder, which is really emphasizing the importance of informed, critical citizens in protecting democracy. And so thinking then about AI, I really worry that this reliance on AI in education is contributing to not only the decline in literacy and critical thinking skills among students, but the ability to actually recognize the symptoms of fascism, the symptoms of conservatism. And so when students aren't actively engaging in learning, they risk [not] learning the very tools that actually make education really transformative, that make it so empowering.
And so one of the very first lessons from Snyder is “do not obey authoritarian dictates in advance”. And so in much the same way, I think what we're saying here is like we cannot obey the AI dictates in advance either. We have to push back against this ‘utopianism’.
AP: Absolutely.
SM: I think that adding to that, another concern to build on that, is that AI, like all technologies, especially in the 21st century, and we have tons of excellent scholars who have written all the important books on this so far, these are not neutral technologies. And so baked into AI's design would be implicitly sexist, racist kind of principles that don't think about difference expansively. And so I agree with you, Bri, that like if we're taking these at face value and using them for education, these are tools that already have those binaristic thinking principles in them, and they're not going to have a lot of capaciousness or capacity outside of that. And I think it's a way in which technology is reinforcing a gender hierarchy that we are also seeing replicated in popular discourse and in policy. So it's coming from a lot of different fronts right now, and I think that's what SIGNAL is pretty concerned about, is trying to show these multimodal ways in which gender conservatism or a patriarchal gender hierarchy is really rearing its head again in ways that are distressing.
AP: So basically we are talking about the importance of equity work in SIGNAL’s work.
(silence)
SM: We're nodding our heads. We're nodding our heads.
(group laughter)
KW: (humorously) The silence.
(more group laughter)
AP: The reason I'm asking is because usually when it comes to intersectional gender justice, again, there's a tendency to smoothen the term and kind of like look at one population,
kind of generalize the whole thing and say like, “oh, we need to prevent everything that's related to gender”. But then we lose those moments of vulnerability, in some cases, double vulnerability or triple vulnerability, when it comes to specific identities that are maybe not even on those diversity privilege charts and stuff like that, because a lot of things, they fall between the cracks.
And in many cases, sometimes there isn't a language that can describe them. And therefore, I do believe that understanding what we're dealing with, and also being more specific, helps to identify what we're working with, and also to recognize that equity work is not just a beautiful word or (laughter) a fashionable thing to do these days.
No, it's actually a very practical tool in terms of responding to different kinds of injustices. And in many cases, as you say, those injustices may not have either a clear explanation or a clear definition. Or, on the other hand, they could be caused by a non-human entity.
SM: Can I add one thing that I'd love to hear your thoughts on this too?
AP: Sure.
SM: So I always really enjoy hearing Alex speak on this because I find it's a good reminder that things that are so taken for granted for me and Bri, like our feminism is always going to be intersectional. And for us, that means that it is trans-inclusive, it is gender non-conforming inclusive, and it is inclusive outside of whiteness. And we're very hyper-conscious of those things. But we maybe just assume that everybody understands what's inside our heads.
And so I always really like that Alex is reminding us about, like, practical tools. And that one of the things facing us right now that I feel really strongly about is that we have to move into “Big Tent” thinking and think about how to produce coalitions and solidarities where we can really support each other, even if trust is something we have to build as we're in coalition. Because we have really, really big things to address. One of them being the rise of authoritarianism globally, and the other one being its relationship, I would say, to the advancements in technology. And so how do we make sure that we are always being deeply inclusive when, you know, I keep talking about it as a tsunami. I feel like there's a tsunami coming at us, and we can see it. And then sometimes I ask myself, what are the steps? Is it that we're just about to go into triage mode? And that we try and fix whatever we can that's right in front of us, like post-tsunami or in the middle of it? What do we do? And I think we have to do it together.
So for me, practically, it's about as much as we can be together as possible, even though I know that there are harm possibilities within that.
BW: Shana and I love Adrienne Maree Brown's Emergent Strategy. And one of the key tenets for Adrienne Maree Brown around organizing is to “move at the speed of trust”. But because of geopolitical powers, trust is fractured in so many more ways than perhaps it has been, at least within my lifetime, that I've been able to recognize. And so if trust is the thing that needs to be built, and we need to be moving at the speed of building that trust, how can we be grappling with all of the onslaught of issues that we're currently dealing with?
I think one of the things that ultra-conservative spaces are really good at is simplification. And because of that, that trust can more easily be built and then they can move quite quickly. Whereas in other spaces and organizing spaces, there's so much nuance and complexity that matters so deeply, but that can make trust slower to be built. And in the past, I would have said, yes, we must move slowly so that we can actually be working together and fully developing these bonds. And I don't know what that looks like right now when things are so fractured and they're so fraught.
So it wasn't a good answer to your question. It was just these complicated things that I've been kind of grappling with.
SM: Yeah.
AP: Actually, it made me think about, like why I say things that I say. I'm kind of a, I hate the word hybrid, but I am this weird creature that combines three different perspectives.
So I am a researcher/an academic. I am an equity, diversity, inclusion specialist, and I am also a community activist. And I always connect the three pieces.
So like all research for me, I don't imagine it without having a community consultation, the right one, like being community informed is important. Then the other thing is considering how things are done. So like we're going into policymaking, revision, response work, prevention work, everything that is related to EDI work. And then research, we kind of like, go deeper into understanding the meanings, the numbers, the categories.
My favorite example is when I come and teach workshops on EDI, I usually take some kind of statistics in the organization, like put it on the slide and say like, okay, what do you mean by class? What do you mean by gender? What do you mean by age? What do you mean by seniority? And in many cases, people just cannot really respond to that. And for me, those three perspectives are inseparable, which makes me difficult in some situations.
Let's say academics, a lot of them tell me that my work and what I write sometimes is too grounded in community work. And then things that I do, let's say for EDI, sometimes it's way too complicated because I actually question things, like why we do what we do, because the outcome depends on the meaning that we attach to the work. And then for activists, sometimes it also could be like, way too much because there are people who are in pain here and now and they need to be helped immediately. So it kind of puts all sorts of different perspectives, but it also makes me think that maybe the problem is that we cannot find some kind of common ground for this because we keep separating those different perspectives. And if we kind of try to understand again from the trust perspective, like how do we gain it? Because my sad realization is, like I made it many years ago, that I always start from the negative. And this is something you have to make your peace with when you do any kind of EDI or community-based work.
Because you will meet people who were deeply hurt in the process. And before you can actually change something, you can at least recognize the damage done and then maybe reconcile. And then maybe you reach a place where you can work together. But I don't know if we have this luxury of time anymore. And again, evolving AI kind of puts extra pressure on us.
SM: (to Karim) I feel like I want to know from your perspective, actually. I’m putting Karim on the spot. Because you did such a great job working through, with your team, the Manosphere Infographic.
KW: Right. Which is coming soon, by the way!
SM: Yeah, we're excited about that! And that was taking our work, and our perspective, but also finding a way to make it speak to young men.
KW: Yeah.
SM: In ways that I don't know if I was equipped to do.
KW: Yeah. I mean, it's a question that when I was hearing the three of you speak, I was thinking, yeah, what are the strategies by which we build trust? And one is a shared problem. The other is a common enemy, for lack of a better word. So not necessarily a shared problem, but having a common obstacle.
And so…but specifically in terms of, like, I'll say my approach or our approach to the Manosphere Infographic was primarily like, thinking of two audiences. One is the people who want to intervene and help those who might be slipping down the path towards radicalism. And the other one was those themselves who are in the midst of the Manosphere. And so for those who are in the midst of it, it was, well, what are the actual psychological or social issues that they might be struggling with and trying to speculate on and say, how do you reach them? And so, yeah, it's not going to be denigrating them, obviously, for their failures and their misgivings, but trying to address: okay, if you feel insecure, if you feel like a failure, how do we get you to see another opportunity? Because that's oftentimes the failure that I've seen with people: the lack of an alternative pathway leads them down whatever the easiest path is, which is either, yeah, trad wife, the Manosphere, that kind of thing.
So those are kind of the approaches that I usually take: asking, almost in cynical ways, what speaks to people's self-interest? But I'd love to hear what your strategies are for approaching or building that trust amongst different communities, or how to…(laughs) (to Bri or Shana, most likely) I see that. I'm sorry, I saw your face change.
BW: So…
KW: Because you've done such an awesome job with SIGNAL and the feminist network.
BW: Well, thanks. That’s kind of you. I think one of the things I'm really noticing that I'm sure we've all noticed in some way is this collapse of language that we're dealing with that I think very much is contributing to the fracture of trust. So you can say one word and that word will immediately trigger a stress response in someone or immediately put people on two sides of a fence. And so I have, I mean, I've always been careful with my language, but I'm noticing, especially in class, I'm having to be especially careful with how I'm framing things so as not to isolate particular people. So we had a conversation a few weeks ago about the differences between patriotism, nationalism, and white supremacy, because those three terms are very much being conflated. And in contemporary politics, in many ways, they are conflated when we see people who are engaging in acts of violence, calling themselves “patriots”. But that's not the technical definition of the word.
And so there are people who see themselves as patriots, but who see it as like, I am here to actually build community with people around me and reach out across bounds. And that for me is patriotism. And there was quite a rift in the conversation because that immediately made some backs go up because they had been very much hurt by people in the name of patriotism.
And so rather than actually going on with, it was an hour and a half for class – we didn't even get to class content. We spent an hour and a half talking about those three terms, what they did at one point mean, but how they are totally conflated in language right now. And why it's important to actually understand how media is very much influencing how we understand identity and words and moving very slowly. And I could tell that some people were kind of frustrated with how slowly we were moving. They're like, ‘we're stuck in the semantics!’. And I'm like, well, right now the semantics are hurting people. And so if they're hurting people, we have to pause and be like, what's going on? And so in that moment, it was like, we must move at the speed of trust and we have to slow it down.
And I left that class being like, wow, that was a disaster. (laughs) But [the] next week, someone's like, “we had a really great conversation on patriotism!” And I thought, “phew! OK, really good that's how you experienced that!” Because I didn't know what we were going to do about that after.
KW: Right.
SM: I like the idea of being really careful with language. It's not a skill of mine.
BW: I don’t think that’s true!
SM: I tend to be a little more…hot off the collar, we might say, and say things. But I do think that when I think a lot about organizing, community building, I think that we all need to know what our skill set is that we bring to the table. And to really do that thing really well and know that we can't do everything. But then to ask for help where we have gaps.
And so (to Karim) you helped in a way that, you know, I am building skills to talk to young men about their online use through my own family networks. But it's not immediately useful for me because my mind will always go to, I want to protect gender minorities and people who are deeply vulnerable in those ways to gender-based violence. And so I think that pairing and collaborating with people is really important. And that we should just really know what we do well and work with that. But also consistently creating the spaces and opportunities to connect. And make those very low stakes, very low entry. And just letting people feel seen and part of something, really, it goes a long way.
AP: I just wanted to follow up quickly on something you just said. We kind of, again, like, put together all ages who experience different kinds of gender-based violence, be it identified this way or not. But could you please maybe say a bit more about children and people who are underage, and what kinds of risks they are exposed to with AI or cyberbullying? As people who have children, does it bother you? For one, the exposure and vulnerability of your children to that unknown, I don't know if they're anonymous or not, but all sorts of people or AI who will just tell them that they don't look good enough. That they don't behave like a man or a woman, or completely exclude trans and non-binary identities, for instance. Or if they say that, like, okay, if you want to be better in your community, you need to do ABC or worse. They can just push people to do things that they may regret later, be it of a sexualized nature or not. So how do you respond, combining those two perspectives? Because we rarely speak about, like, parenthood and gender-based violence. So how are those things connected?
SM: I think it would be really great to have concentrated resources for parent digital literacy and how to have conversations. And I think that is beginning to emerge in quite a few spaces.
You know, I follow the designers of these technologies, and they do not let their children have social media. So I'm the same: I don't let my children have social media. And that's how I've avoided that problem.
But it does seep into the culture. So very much both of my children have confronted pressures of hyper-masculinity within their cohorts because all of their friends are consuming negative, targeted social media towards men, young men, and identity. And so, you know, obviously we're having very active conversations about that. I think that there is a huge danger, but I do also have, you know, nephews and family friends who are teenage boys who have unlimited access to social media. And so with them, it's about, again, low stakes, low bar-to-entry conversations.
‘What are you looking at?’ ‘What's on your algorithms right now?’ Letting them know what algorithms are, what they do, how they work, so that they can have a bit of an informed understanding of the way in which we could talk about what we would call media manipulation.
And then that way they can make choices. But with one of them, I've built enough trust that I get, like, pretty amazing videos sent to me. And then we unpack them in group chat together.
And I feel kind of privileged and lucky to have that chat with this one young man. And so I think that, you know, not shutting it down immediately and just being like, “okay, what do we mean by this?” You know, my oldest son got called a ‘simp’ because he had a girlfriend. And so we had a very good conversation where I was like, why are your male friends mad at you and calling you basically an unmasculine male because you managed to talk to a girl? Like, that's the level of upside-downness within gender ideology right now. Which, because I study this, I know where it comes from. That is a mainstreaming of an incel belief system that's showing up in 12-year-old boys' conversations.
And so I think that we need to have parent literacy. But something I see that we need to deal with in the next 5 to 10 years is the way in which the kind of messaging happening from incel to mainstream culture for young boys, via Andrew Tate, via all of these influencers, is going to deeply impact heteronormative gender relations in ways that are going to really harm young women. And we need to equip young women with the skills to be able to stand up to what will be, I would say, tantamount to abusive behaviors.
AP: Yeah.
SM: And so I would love to see resources for that.
AP: To kind of try to go back to the SIGNAL work and stuff that you do because, for one, it's a very important part of having this conversation going. But it's also, you're doing amazing work in terms of kind of partnering up with different units, for the lack of a better term, at the university, but also outside of the university. Maybe you can share some more about that?
BW: Sure. Yeah. So we've been really lucky to have some really great partners. And so we've got a few academic partners, of course, University of Waterloo. We've also got York University, University of Ottawa, Carleton University, the University of Lethbridge…
SM: Royal Roads.
BW: Royal Roads, Guelph. But then also…
SM: McMaster.
BW: McMaster. But then we also have the Canadian Anti-Hate Network, the University of Ottawa's Archives and Special Collections, Wikimedia, and Inter-Arts Matrix. And so what's really important about all of these different partners coming together is that we actually are able to kind of attack the issue from many different sectors, which is really important to us. I mean, I was thinking about that last question you asked, and I've been putting it off for so long because my child is in utero, and I thought, I don't have to think about this for a while. (laughter) But it does require probably, one, curbing my own social media right away, and it's probably going to come so much quicker because of the way so many of these jokes are – well, so many of these attacks are framed as jokes in mainstream media and then also in schools, I'm hearing.
But that's, I think, why it's so important to have these partners that we're working with. And so…
SM: One of the reasons why this group has come together is because there's so many people in Canada doing amazing work, and there's no reason for us not to be resource sharing, or for us to be doing the exact same thing and reinventing the wheel. There's so many good skill sets in Canada and internationally. It's not a huge group of people, but it is a significant group, and we can all learn from each other and share. So we're doing a lot of sharing of data sets, sharing of how we code our data, bringing people in, like we're doing for the Summit, people with expertise, to bring to our community, to let the conversation start here. Because we're really committed to making Waterloo a real hub of this activity in the next five to seven years, like a go-to place for these conversations in Canada. And one thing is that this is a really great group of people, and we all really enjoy sharing time with each other, and we're all also people who understand how hard this work is. And this work is very, very difficult. And so you need a network of care, which is built into our mandate.
BW: It was one of the reasons that we started the first iteration of this network to begin with. How do we actually create communities of care and solidarity in the face of such intensified backlash? Because we are in a moment of backlash. And so having this group of people, this group of partners, has been really important for kind of spreading it out in a way, but also for determining what are the resources that you are all using when this inevitably happens. And I think what's been really important, too, is that talking across these different spaces helps us to see kind of where the gaps are.
So, you know, school-age children being a gap, but also, interestingly, in terms of misogyny and radicalization, university spaces and classrooms are actually also a gap. Most people are looking outside of what happens in the university. And so one of the spaces we're really invested in is, like, what happens in our classrooms? How can we begin to actually notice misogyny or radicalization so that we can have the conversations before it slips too far? And how we show care in those moments is really important, too, both for ourselves but for the person who might be slipping into those spaces. And so we're really, like, how do we build and design these safer futures?
SM: And university students are an excellent point of entry for that because if you've left a home and you didn't have digital oversight in your home, this is a great place for learning but also for community support. And so, you know, I'm teaching classes right now on toxic media and having students show me what's on their feeds and they don't have the digital literacy skills to make sense of it. But they go, “yeah, what's up with this, miss?” And I'm like, I'm happy to now give you a three-hour lecture on why this is entirely problematic. (group laughter) And these are really, really good students. These are lovely humans. And so they deserve, I believe, to have those conversations and to be given the skills so that they can consume their media in ways that isn't harming them or others.
AP: Yeah, that's great. So the upcoming Summit will be, like, a great introduction to a much bigger project.
SM: Yeah, we're really excited. In some ways it is the launch of SIGNAL at the University of Waterloo. And it is open to absolutely everybody on campus. And so we're hoping to make it a very robust and enjoyable three days.
AP: So it's March 4th through 6th. And it will be here on campus.
BW: Yeah, at the University of Waterloo. Yes.
SM: And sign up on Eventbrite.
(group laughter)
SM: And for students, there will be food.
KW: (laughing) There we go.
BW: And [food] for staff and faculty.
SM: And for staff and faculty.
(more group laughter)
AP: Thank you very much for being here and sharing all this.
KW: Yeah, thank you.
AP: Again, thank you so much for your amazing work. And we do hope that this is also a beginning of a series of conversations that we can go deeper into gender equality, gender justice, and the work of SIGNAL.
BW: Absolutely. Thanks for your work!
SM: Thanks for having us! Thank you.