Rhetoric, Responsibility, and the Platform: An Interview with Jessica Reyman

Jessica Reyman

Jessica Reyman is an Associate Professor of English and Director of Graduate Studies at Northern Illinois University. She is the author of the book The Rhetoric of Intellectual Property: Copyright Law and the Regulation of Digital Culture, and she has published several articles in venues such as College English and Technical Communication Quarterly as well as chapters in edited collections such as Cultures of Copyright (a chapter co-authored with Tim Amidon) and Theorizing Digital Rhetoric. With Erika Sparby (Illinois State University), Reyman is currently at work on an edited collection titled Digital Ethics: Rhetoric and Responsibility in Online Aggression, Hate Speech, and Harassment.

As editors of this special issue, we immediately thought of Professor Reyman’s work as being central to understanding and theorizing the rhetorics of platforms. In particular, her 2013 College English essay, “User Data on the Social Web: Authorship, Agency, and Appropriation,” serves as an early critique of platforms in rhetoric and writing studies, demonstrating the ways in which corporate platforms have transformed issues and concepts that rhetoric and writing theorists have been researching for decades (agency, ownership, property, control, etc.). Across all of her work, including this interview, we see that Reyman refuses simplistic answers to complex issues; she urges us to see corporate writing systems as social and technological actors that are not devoid of agency or responsibility. Yet, avoiding deterministic traps, Professor Reyman offers insights and strategies that help us to see how we can enact positive change in our scholarship, classrooms, and communities. She reminds us that the work of unpacking the politics and rhetorics of platforms, while still very much in progress, is necessary to confront the great challenges corporate platforms pose to both public and private life.

Dustin Edwards and Bridget Gelms: You’ve been working on issues related to digital regulation for quite some time now, whether it be with critical studies of intellectual property, user data, or algorithmic processes. We’re curious: what got you started with this line of research? And how has the evolution of your research progressed?

Jessica Reyman: My research interests developed as I studied writing and rhetoric in college and then in graduate school at a time when we, as a culture, were experiencing the rise of personal computers, networked communication, and the internet. I first used a personal computer and word processor in high school, and by college I was surfing the web and using email in a campus computer lab. As I moved through graduate school, I was teaching writing in a computer classroom, and within the next several years came peer-to-peer file sharing and Napster, then social media, and now the algorithmic web. These rapid technological developments occurred concurrently with my development of scholarly expertise in the field of writing and rhetoric. I witnessed first-hand the ways in which technology, beginning with electronic composing tools and extending to the World Wide Web and social media, revolutionized writing, literacy, and participation in public and civic discourse.

During the formative years for me as a scholar, I saw great shifts in practices of and environments for reading, writing, researching, teaching, communicating, sharing, and participating online. With these new opportunities came proclamations of revolution from scholars and pundits such as John Perry Barlow, who issued his 1996 “A Declaration of the Independence of Cyberspace.” What soon followed, however, was a steady trajectory toward regulation of cyberspace, through proprietary software agreements, through developments in copyright law, through filter bubbles, through terms of use policies. My research agenda developed and evolved during a time characterized by the promise of opportunities in digital rhetoric dampened by regulatory systems. This tension between open authorship/open participation and the legal, cultural, and technological systems that shape them is ongoing today and will no doubt continue as we further blur the lines between physical life and online life. There will be plenty more to research and write about!

DE and BG: Yes, it seems like that tension will continue to play out—especially as platforms infiltrate many of our everyday practices. On that note, your research often reminds us that new digital literacies and methods of resistance are needed to confront legal and often corporate regimes of control. For example, in your essay, “User Data on the Social Web: Authorship, Agency, and Appropriation,” you note, “The generation, collection, and use of data occur with a surprising lack of transparency. Terms-of-use policies that describe data collection and use are required by law, but these are lengthy and difficult to understand when read at all. Even more problematic is the fact that everyday users are often led to believe that the data they contribute is advantageous to them” (518). What do you think students and everyday citizens can do to begin to challenge or resist the issues present in data generation, collection, and use? Where can they reasonably start in order to learn more about how their data is generated, collected, and used?

JR: Becoming familiar with terms of use policies, managing your privacy and data collection settings, and making informed choices about which services and platforms to use are good starting points for students. But I wouldn’t place the responsibility to challenge and resist issues present in data generation, collection, and use with internet users alone. I find it problematic to locate responsibility with everyday citizens, according to a “buyer beware” model of internet usage. At times the media, software companies, and social media service providers tout options for modifying privacy settings, the development of Do Not Track add-ons, and choices among platforms and services. I find these suggestions to be misguided, conveniently shifting responsibility from software developer or service provider to user.

I think we, as a culture, can do more to hold software developers and companies responsible and accountable for designing systems that enact a different ethic, that consider users’ privacy and ownership rights. In an age when participation in so many life activities—including commerce, education, civic discourse, personal communication—requires users to relinquish rights to their own data and content, norms regarding responsible and ethical collection, management, circulation, and use of content and data need to change. While our students (and internet users in general) have the capacity to choose among settings and services, these choices are quite limited, ultimately controlled by the technology developers and providers. Therefore, the most pressing change that needs to happen is with our shared expectations for how such systems are designed, and what accountability we expect from software companies and service providers to offer ethical systems.

DE and BG: Thank you for pinpointing the issue of responsibility and for shifting the frame to include the processes and decisions of platforms and their developers. This brings us to a question about methods and methodologies. Because so many of the platforms that have such a large bearing on our everyday lives are black-boxed (operating via backstage algorithmic procedures and data practices), what kinds of methodological approaches do you find useful for researching those procedures and practices and their effects on users and cultures?

JR: One method for further interrogating these effects is to gather information from expert users themselves about their experiences. Ethnographic and auto-ethnographic studies, observations, and interviews allow us to gain the perspectives of those who inhabit various online spaces, use platforms, manage practices and policies, and have developed work-arounds or approaches that we can learn from.

DE and BG: We wonder if we can talk a bit more about responsibility in classroom practices. In that same 2013 College English essay, you write:

In fact, in some cases we teach with select social and participatory Web technologies in the classroom, requiring that students join social networks or write to class blogs or wikis that are hosted by corporate entities and over which instructors and university officials have little control. These technologies offer much in the way of free or inexpensive tools for communicating, composing, and learning, but students, professors, and university officials often have limited understanding of the hidden practices surrounding the management of user data on the social Web. (514)

As teachers and scholars of rhetoric and writing, what is our responsibility regarding teaching critical understandings of data? How has that responsibility evolved since the publication of your article, and how might it continue to evolve? We’re also curious if you might comment on the politics of data use in online learning management systems (e.g., Canvas and Blackboard).

JR: Problems arise when students are compelled to use certain technological systems, like the online learning management systems Blackboard and Canvas, as a requirement for a course. Each system comes with terms of use under which the provider typically collects data from users and applies it, in aggregate, to improve its services and technologies. The issue in these situations is not necessarily what data is collected or even how it is used, but that the data is collected without permission and without opt-out options.

Another example of data collection within university settings is with SafeAssign and other plagiarism detection services. Plagiarism detection services concern me even more because they are used as tools to evaluate student writing. Students are asked to submit their own intellectual property, and then this information is stored and used to “detect” academic misconduct among users, typically by identifying matching material in an originality report. Not all faculty members across campuses are trained in pedagogy surrounding plagiarism, and many come to rely on plagiarism detection services to guide their decision-making, grading, and even determination of academic misconduct. These tools are problematic because institutions back them, professors require them, and students must use them, often without critical thinking about or open acknowledgement of 1) the intellectual property and data students relinquish and 2) the problematic reliance on machine scoring of writing.

I won’t go so far as to say faculty shouldn’t use learning management systems or even plagiarism detection services. With both, there are trade-offs: convenience versus control. Best practices involve campus experts educating faculty about the terms of use of institution-sanctioned services, offering these systems as supports for, not replacements of, pedagogy, and maintaining a range of technology options for faculty and students. The best use of these technological systems, of course, comes with awareness of how they work and after careful consideration of the trade-offs.

DE and BG: We imagine lots of readers of this journal teach first-year writing courses or courses that have significant research components. What do you think students—some of whom may be very new to academic research practices—need to know about social platforms, algorithms, and user data?

JR: Using technology on a daily basis does not necessarily result in technological literacy. Literacy is developed through careful reflection and critical thinking about technology use. I would encourage students to reflect on the algorithms and platforms they interact with daily in their content generation and social media participation, asking themselves questions such as:

  • What are the affordances and constraints of the design of this system? What does it allow me to do? What does it limit me from doing?
  • What are the default settings of this system? What modifications are possible? How easy are they to manage?
  • What are the terms of using this system? What am I agreeing to when I use it? How are these terms (and any changes to them) communicated to me?

While students may have little choice in which services to adopt, they can exert some agency over how to use them. Based on their answers to the questions above, they can take actions such as changing settings, using some features and not others, sharing certain content and data (or not), and engaging in debates and discussions with others about the policies and practices of popular services and platforms. The goal is to become responsible users and active participants rather than passive consumers of platforms and services.

DE and BG: As researchers and scholars in rhetoric and composition, we often turn to university classrooms as sites for intervention. Yet, we wonder if there are other sites for developing critical perspectives on platform practices—e.g., public or counterpublic intellectual work? In other words, what can rhetoric scholars offer apart from a kind of classroom-based pedagogy to resist or challenge problematic platform practices/biases?

JR: Many everyday interactions across the various online spaces we inhabit can help to challenge or resist problematic platform practices and biases. So much of our lives is now online, and so many of the online spaces we participate in blur our identities as teachers, scholars, friends, employees, parents, children, supervisors, and citizens. Our audiences across these spaces are broad and varied, and they know us as inhabitants of these different roles. In this online ecology, then, we have the opportunity to contribute to ongoing conversations about technology and platform practices every day. Through our Facebook and Twitter posts, in our discussion forums, in online comment spaces, anonymous or not, we can continue to pose questions and offer perspectives that contribute to the ongoing public discussion. The key is to make the comment, to have the discussion. We have more opportunities than ever to share our experiences and expertise.

DE and BG: Yes, we agree. Sometimes, though, it’s a paradoxical situation we find ourselves in—should we stay on platforms and engage in public work that critiques or brings awareness about platforms? Or, fully recognizing the problematic tradeoffs of platforms, should we actively resist using these systems? It seems to us that these types of questions can’t easily be answered. And part of the problem is the “we”—platforms affect people differently. For example, studies show that women, communities of color, and LGBTQ+ communities are at higher risk of experiencing harassment and abuse on social platforms (Citron; Duggan; Jane; Mantilla), bringing about questions of agency and responsibility, which we know you’ll tackle in your forthcoming edited collection with Erika Sparby. Certainly, algorithms play a large role in mediating these kinds of vitriolic exchanges and other interactions that happen on platforms. As you describe in your recent chapter on algorithmic agency, rhetorical theory needs to come to grips with how much algorithms matter in domains such as finance, education, criminal justice, and media circulation. In particular, as you diagnose them, algorithms seem to pose problems for understandings of agency. In addition to issues of agency, we’re wondering if you might comment on other ways rhetoric scholars might interrogate algorithms from a rhetorical perspective.

JR: I would like to see even more attention given to concepts of rhetorical ethics, to responsibility and accountability. Jim Porter defines “rhetorical ethics” not as a moral code or a set of laws but rather as a “set of implicit understandings between writer and audience about their relationship” (68). While Porter’s work appeared before the rise of social media and other contemporary web contexts, we have now seen how these implicit agreements extend beyond writer and reader (who often occupy both roles) to also include the individuals, communities, and institutions that build and manage technological spaces for discourse and engagement. Further, as Jim Brown argues, digital platforms, networks, and technologies themselves carry ethical programs with rhetorical implications. We have been reluctant to apply “morality” to technology, and for good reason, but I think there are ways we can carefully interrogate how technological design, use, and regulation have far-reaching and important ethical implications. I hope to see growing interest in examining (un)ethical interface and platform design and data practices, exploring informed responses and actions challenging unethical practices, and theorizing about what frameworks and approaches for ethical human-machine collaborations might look like.

Works Cited

  • Barlow, John Perry. “A Declaration of the Independence of Cyberspace.” Electronic Frontier Foundation, 8 February 1996, www.eff.org/cyberspace-independence.
  • Brown, James J., Jr. Ethical Programs: Hospitality and the Rhetorics of Software. University of Michigan Press, 2015.
  • Citron, Danielle Keats. Hate Crimes in Cyberspace. Harvard University Press, 2014.
  • Duggan, Maeve. “Online Harassment 2017.” Pew Research Center: Internet and Technology, 11 July 2017, www.pewinternet.org/2017/07/11/online-harassment-2017/.
  • Jane, Emma A. “‘Your a Ugly, Whorish, Slut’: Understanding E-bile.” Feminist Media Studies, vol. 14, no. 4, 2014, pp. 531-46.
  • Mantilla, Karla. Gendertrolling: How Misogyny Went Viral. ABC-CLIO, 2015.
  • Porter, James E. Rhetorical Ethics and Internetworked Writing. Greenwood Publishing Group, 1998.
  • Reyman, Jessica. “The Rhetorical Agency of Algorithms.” Theorizing Digital Rhetoric, edited by Aaron Hess and Amber Davisson, Routledge, 2018, pp. 112-25.
  • ---. “User Data on the Social Web: Authorship, Agency, and Appropriation.” College English, vol. 75, no. 5, 2013, pp. 513-33.