In Part I, we discussed our process of program development as one of digital invention. We covered the following sites of invention:
- program learning objectives;
- specific course outcomes, assignments, and activities;
- tools for assessment integrated into every course; and
- a reflective and responsive process for course design, instruction, and learning.
Because the minor is a fully integrated program, we acknowledge that discussing each component in isolation cannot fully represent the interplay among the components or the order in which we constructed them. For example, although we started with our program learning objectives, we were simultaneously constructing a comprehensive assessment structure designed to operate at the program, course, project, and activity levels. More importantly, our assessment structure provides the professional writing minor with the tools, processes, and policies to improve student learning, teacher effectiveness, project and course design, and program development. Part II will focus on our approach to program assessment and teacher support, including the following subsections:
- Role of Assessment,
- Multipurpose Rubrics and Spreadsheet Thinking,
- Incorporation of Analytics, and
- Role of Teacher Development.
Role of Assessment in the Design of Our Professional Writing Minor
While many on our campus still see assessment as a means of surveillance imposed from above, we identify assessment as a generative set of procedures (see, for example, the CCCC position statement on writing assessment). Assessment, from our perspective, should help us understand what we are doing and improve the ways that we are doing it. In general, our approach to assessment in the professional writing minor begins with one primary question:
How can we best understand student learning and pedagogical effectiveness in our program?
To answer this question, we need to gather and analyze a range of data, review and reflect on our findings to plan for revisions, and, finally, incorporate those revisions seamlessly into the program—thereby making assessment a natural part of our daily workload. We also take seriously Redecker and Johannessen’s claim: “Lately, however, there has been a growing awareness that curricula—and with them assessment strategies—need to be revised to more adequately reflect the skills needed for life in the 21st century” (80).
We begin with an understanding that assessment does not happen in a vacuum; it must develop locally and respond to the needs of local stakeholders. We want assessment to be contextual: meaningful, manageable, and sustainable. Like those features of the program discussed in Part I, our long-term assessment plans build from our program objectives—using multipurpose rubrics, learning analytics, and adaptive technologies to create a variety of corpora that can be examined within/across sections and courses in real time and over time. Program components, blended together or assessed separately, offer a comprehensive set of lenses for reviewing a program and provide relevant insights that allow all stakeholders to make informed choices in the present and in the future. To do this work, we use (or will use) a variety of assessment tools.
Multipurpose Rubrics and Spreadsheet Thinking in the Design of Our Professional Writing Minor
Our earliest use of rubrics and spreadsheet thinking occurred in our business writing program, and we want to enhance that practice in our professional writing minor. For us, rubrics are quantitative, provide different ways into the data, and offer multiple lenses to analyze the program. In our professional writing minor, we are building rubrics based on an interrelationship among our program objectives, specific course outcomes, and particular project goals. While the primary purpose of the rubric for each project is evaluation based on course outcomes, we also want to have criteria that grow out of our program objectives and operate across courses.
Our rubrics are designed not only to help students understand the application of particular concepts more fully (both through in-class discussions that define the criteria and as tools for peer review and evaluation) but also to help us track the development of student skill sets across the program. In other words, we consider our rubrics as both conduits and lenses. For example, we might start with an evaluation sheet like the following for a Poster Project in our Document Design course:
For this project, we have a criterion based on visual acuity (“Visual is Primary”). This is tied to a course outcome for Document Design (“Analyze and describe the visual design of documents”). We can discuss with students the ways that different readers might define “Visual is Primary,” how we define this criterion as part of the evaluation of the project, how it relates to examples, and how it promotes a complex understanding of the concept. We can also draw a correlation to our program objectives (“Exhibit visual design awareness and acuity”). While the rubric can be tied to the goals of a particular project, we can also make connections to other rubrics in the same course or in different courses in the minor that use some aspect of “visual” as a criterion for evaluating a project.
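Conceptually, each rubric criterion can be treated as a small data record that links a project criterion to a course outcome and a program objective. The Python sketch below is only an illustration of that idea: the field names and the 1-5 scale are our assumptions, while the criterion, outcome, and objective text come from the Document Design example above.

```python
# Sketch of a rubric criterion as a data record. Field names and the
# 1-5 scale are assumptions for illustration; the criterion, outcome,
# and objective text come from the Document Design example.
criterion = {
    "name": "Visual is Primary",
    "course": "Document Design",
    "course_outcome": "Analyze and describe the visual design of documents",
    "program_objective": "Exhibit visual design awareness and acuity",
    "scale": (1, 5),
}

def related_criteria(criteria, keyword):
    """Find criteria across courses whose name or program objective
    shares a keyword (e.g., 'visual')."""
    keyword = keyword.lower()
    return [c for c in criteria
            if keyword in c["name"].lower()
            or keyword in c["program_objective"].lower()]
```

Structured this way, a query for "visual" would surface related criteria from other courses in the minor, which is the cross-rubric connection described above.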
With the evaluated projects, we can create a spreadsheet from the rubrics of all of the student evaluations in a particular course, such as the following example from Document Design:
These simple spreadsheets can be combined with those from other courses to provide us with data to analyze and develop the program in multiple ways. We can analyze by a single criterion in a single course (e.g., “How did students perform on ‘Visual is Primary’ in Document Design?”), by a single criterion across sections and over time, by related criteria across courses, and/or by criteria related to a specific course outcome. We can also compare students in hybrid sections with those in wholly online sections; to us, the possibilities seem endless. While some might find this data imprecise, we see it as a starting point for conversations about student learning, course effectiveness, pedagogical strategies, teacher support, and program objectives. These conversations are the impetus for improvements in all areas of the program.
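As a rough illustration of this spreadsheet thinking, the following Python sketch aggregates rubric scores by criterion and grouping column. The rows, scores, and the hybrid/online comparison are invented for illustration; a real spreadsheet would hold every student evaluation from a course.

```python
# A minimal sketch of "spreadsheet thinking": rubric scores collected
# as rows, then averaged by criterion and grouped by any column
# (course, section, semester). All data here is hypothetical.
from statistics import mean
from collections import defaultdict

rows = [
    {"course": "Document Design", "section": "hybrid", "criterion": "Visual is Primary", "score": 4},
    {"course": "Document Design", "section": "online", "criterion": "Visual is Primary", "score": 3},
    {"course": "Document Design", "section": "hybrid", "criterion": "Visual is Primary", "score": 5},
]

def average_by(rows, key, criterion):
    """Average scores for one criterion, grouped by a column such as
    'course', 'section', or 'semester'."""
    groups = defaultdict(list)
    for r in rows:
        if r["criterion"] == criterion:
            groups[r[key]].append(r["score"])
    return {k: mean(v) for k, v in groups.items()}

print(average_by(rows, "section", "Visual is Primary"))
# e.g. {'hybrid': 4.5, 'online': 3}
```

The same function answers several of the questions above simply by changing the grouping column, which is what makes even a simple spreadsheet a flexible analytical lens.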
We have plans for a more open evaluative system in the future, in which cohorts of teachers will offer snapshot evaluations of all student work to the teacher of record for a particular course. This strategy gives all teachers in the program a greater awareness of the kinds of work occurring across our courses, and we can also use these snapshot evaluations as jumping-off points for broader discussions based on the spreadsheet analyses, for promoting new and ongoing teacher conversations, and for creating teacher development workshops. Similarly, we plan to incorporate a range of adaptive learning analytics into the program to help enhance student learning and pedagogical effectiveness.
Incorporation of Analytics in the Design of Our Professional Writing Minor
We are hoping to use learning analytics to offer another perspective on our central assessment question: how can we best understand student learning and pedagogical effectiveness in our program?
While the original inspiration for incorporating adaptive learning analytics in our courses came from “The Data-Driven Classroom,” which described the Open Learning Initiative at Carnegie Mellon University (see also Brown, ELI, JISC-CETIS, and Long and Siemens for additional strategies we plan to use), our starting point comes from Buckingham-Shum’s introduction: “Learning analytics has emerged as one of the most common terms for the community seeking to understand the implications of these developments for how we analyse learning data and improve learning systems through evidence-based adaptation” (2). While there are many strategies for data gathering that we might consider, our primary goal is to capture the data in the context of a particular course, a range of courses, and the entire program. We cannot understand the data unless we compile it and ask the right questions. This will take a commitment to our assessment practices as well as a constant refinement of our methods.
We have begun to explore the use of both discourse analytics and social network analytics to better understand how students (and teachers) interact in a particular course. Since language is a primary tool for knowledge negotiation and construction, analyzing discourse in context can help us understand how and why students are learning effectively. These measures, to us, are a bit like studies in computational linguistics but work primarily in real time. For example, we might apply analytics to social platforms in a course to understand how students use key terms, key concepts, metaphors, or examples. These could be student-student interactions but also student-teacher and teacher-teacher interactions within the program. By moving much of our course content and student interactions online, we create a pool of data to analyze; as a result, we can compare across students as well as within and across semesters. Again, the key to successful data analysis (and assessment) for us is asking the right questions.
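The key-term tracking described above might be sketched as follows. The posts, term list, and simple tokenization are invented for illustration; a working system would pull discussion text from the course platform and use a more careful text-analysis pipeline.

```python
# Rough sketch of discourse analytics: count how often key course
# terms appear in students' discussion posts. Posts and terms are
# hypothetical examples, not real program data.
import re
from collections import Counter

KEY_TERMS = ["audience", "genre", "visual"]

posts = {
    "student_a": "The visual hierarchy guides the audience through the poster.",
    "student_b": "I revised for a different audience and a new genre.",
}

def term_counts(text, terms):
    """Count occurrences of each key term in one post."""
    words = re.findall(r"[a-z]+", text.lower())
    counts = Counter(words)
    return {t: counts[t] for t in terms}

usage = {student: term_counts(text, KEY_TERMS)
         for student, text in posts.items()}
```

Run over a semester of posts, tallies like these would show whether (and when) course vocabulary starts appearing in students' own writing, which is one small way of asking the right questions of the data.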
The Role of Teacher Development in the Design of Our Professional Writing Minor
Because we rely on a large number of part-time instructors, often without disciplinary training in professional writing, a primary goal of program design for us must be to provide robust support structures for all teachers in the program. We share course materials (e.g., syllabus, schedule, and project evaluation sheets), supplemental classroom materials (e.g., project handouts, PowerPoint slides, and project notes), sample materials (e.g., student drafts for each of the projects, mini-lectures, and samples of daily messages), suggestions for course preparation, additional readings, and discussion of the theoretical assumptions of the course designers. We use shared cloud drives and master templates to create shared workspaces and keep all our materials centralized, current, easy to access, and easy to revise.
To enable these support structures, our program includes a mentoring program for anyone teaching a particular course for the first time—including providing materials such as the syllabus, course readings, assignment options, and other support materials well before the beginning of the semester—along with regular (weekly) meetings during that first semester. In addition, we offer five professional development meetings every semester to support our teachers’ effectiveness and efficiency. These meetings are part of the standard workload (10 hours per week), held during lulls in course schedules, and cover such topics as reviewing program-specific course and classroom materials, norming assessment and evaluation materials, introducing program-specific processes or procedures, and providing an open forum for program governance.
To meet staffing needs, many writing programs seek “trained teachers”—usually defined as people who have the appropriate degree credentials and/or have taught for the program in the past—but this definition too often allows programs to avoid faculty development and support. Our program, however, wants teachers willing to participate in program development and to contribute to our shared “training.” This means, of course, that our teacher support must address workload and time commitment to help teachers meet the high (yet manageable) expectations established by the program; however, since faculty development can too often become an overload, our program begins by defining standard teacher workload expectations, as follows:
Time in class means those three hours per week that teachers are physically in the classroom. For online teachers, this means designating three hours per week when a teacher will be working on the course site: interacting with students, offering “lectures,” and dealing with questions or concerns.
Time preparing for class means those hours that teachers use for organizing their classroom materials and handouts, arranging possible discussion topics, and constructing any lectures they might present.
Time responding to and evaluating student writing means those hours that teachers spend with student drafts and final products. For our program, these are two different activities. Responding to student writing comprises those times when a teacher offers feedback in-process. Evaluating student writing comprises those times at the end of the process when a teacher grades a final product. This is an important distinction, and one we make explicitly with both teachers and students. They need to understand the purposes and values of each.
Time in professional development means those hours that teachers spend improving the quality of their teaching and the quality of instruction in the program. We believe that professional development should be a working condition, rather than overtime; should be proactive, rather than reactive; should be a long-term consideration, rather than a short-term response; and must improve all aspects of the program environment.
Preparing class materials and activities should be part of the normal, expected workload. In other words, teachers should not have to work “overtime” to be successful in our program. As we stated in Part I, teacher training should be part of, and should help teachers adhere to, the ten-hour work week.
Still, faculty (most particularly contingent faculty) devote many more hours to their profession than those for which they are actually paid. Redefining writing teacher support is imperative in light of these conditions. As Nagelhout has argued elsewhere:
[M]any claim that if you want to have successful faculty development, you either “feed them” or “pay them.” While I don’t completely disagree with these “hints,” I do believe they arise from a perspective of “working conditions affecting faculty development” rather than “faculty development affecting working conditions,” and if we accept this position, then faculty development will always be a burden, always more work piled on already full plates. (A15)
In our professional writing minor, we believe that working conditions shape and are shaped by the framework of our program, and to improve program quality we are committed to improving working conditions by clearly articulating an acceptable workload. We examine best practices for our program and come to some agreements on the best ways to teach and assess the course(s) based on a ten-hour work week. Rather than taking a top-down approach, we treat this process as an ongoing conversation, a collaborative effort that provides a voice for all stakeholders. Support should be about negotiating ideals for the program, for the teachers, and for the students, and about creating a “culture of support” (Marek). Without these program-based conversations, we really are just a bunch of individuals teaching the courses.
These really are just first thoughts on our desire to take a more comprehensive, digital approach to the work that we do in building a program. We really are just in the planning stage, the first semester before the five-year plan. But we have a commitment to increase self-directed and nonformal learning activities/opportunities (i.e., independent and lifelong learning). Students should be responsible for their learning and, more importantly, that learning should be neither prescriptive nor limited to a single classroom. We believe in learning over gatekeeping, exploration over cramming, collaboration over isolation, engagement over regurgitation, process over product, knowing how (or why) over knowing what, and lifelong applications over short-term demonstrations. Our program and course designs always begin from these ideals. Ultimately, we want to develop an inclusive process for improving each course and the program as a whole. This process is part and parcel of student learning and teacher support. More importantly, this places assessment at the forefront of our thinking, naturalizing the process as a normal part of our daily work—rather than as an add-on.
- Brown, Malcolm. Learning Analytics: Moving from Concept to Practice. EDUCAUSE Learning Initiative Briefing. 2012. Web. http://www.educause.edu/library/resources/learning-analytics-moving-concept-practice
- Buckingham-Shum, Simon. Learning Analytics. UNESCO Institute for Information Technologies in Education Policy Brief. 2012. Web. http://iite.unesco.org/pics/publications/en/files/3214711.pdf
- CCCC Committee on Assessment. “Writing Assessment: A Position Statement.” 2009. Web. http://www.ncte.org/cccc/resources/positions/writingassessment
- ELI. Seven Things You Should Know About First Generation Learning Analytics. EDUCAUSE Learning Initiative Briefing. 2011. Web. http://www.educause.edu/library/resources/7-things-you-should-know-about-first-generation-learning-analytics
- JISC-CETIS. Analytics: What Is Changing and Why Does It Matter? UK Joint Information Systems Committee, Centre for Educational Technology and Interoperability Standards: CETIS Analytics Series 1.1. 2012. Web. http://publications.cetis.ac.uk/wp-content/uploads/2012/11/Analytics-Vol1-No1-Briefing-Paper-online.pdf
- Long, P., and G. Siemens. “Penetrating the Fog: Analytics in Learning and Education.” EDUCAUSE Review 46.5 (2011): 31-40. Web. http://www.educause.edu/ero/article/penetrating-fog-analytics-learning-and-education
- Marek, K. “Learning to Teach Online: Creating a Culture of Support for Faculty.” Journal of Education for Library and Information Science 50.4 (2009): 275-292. Print.
- Nagelhout, E. “Faculty Development as Working Condition.” College Composition and Communication FORUM 59.1 (2007): A14-A16. Print.
- Redecker, Christine, and Øystein Johannessen. “Changing Assessment—Towards a New Assessment Paradigm Using ICT.” European Journal of Education 48.1 (2013): 79-96. Print.
- Smith, Steven. “The Data-Driven Classroom.” 2012. Web. http://americanradioworks.publicradio.org/features/tomorrows-college/keyboard-college/data-driven-classroom.html