Abstract
Since the 1980s, research on employment conditions in post-secondary institutions has focused on the growth of contingent academic workers, or what the Higher Education Quality Council of Ontario (HEQCO) has labelled “non-full-time instructors” (Field, Jones, Stephenson, & Khoyetsyan, 2014). Very little attention, however, has been paid to administrative, physical plant, and other operational staff employed within universities and colleges. Using data from a study of University of Regina students and employees, both academic and support staff, this paper confronts the broader conditions of labour around the ivory tower. Employment at a post-secondary institution is analyzed through the lens of living wage research advanced by the Canadian Centre for Policy Alternatives (CCPA) (Ivanova & Klein, 2015). The study reframes the notion of a living wage in a post-secondary institution to include work-life balance, job security, and the realities of dignity and respect in the university workplace.
Abstract
Researchers are under increasing pressure to disseminate research more widely to non-academic audiences (efforts we call knowledge mobilization, KMb) and to articulate the value of their research beyond academia to broader society. This study surveyed SSHRC-funded education researchers to explore how universities are supporting researchers with these new demands. Overall, the study found that there are few supports available to researchers to assist them in KMb efforts. Even where supports do exist, they are not heavily accessed by researchers. Researchers spend less than 10% of their time on
non-academic outreach. Researchers who do the highest levels of academic publishing also report the highest levels of non-academic dissemination. These findings suggest many opportunities to make improvements at individual and institutional levels. We recommend (a) leveraging intermediaries to improve KMb, (b) creating institutionally embedded KMb capacity, and (c) having funders take a leadership role in training and capacity-building.
I wish Woody Allen’s aphorism that 80 percent of success is showing up applied to the persistent problem of college remediation. More than half of incoming community-college students, and approximately 20 percent of incoming students at four-year institutions, are academically unprepared when they arrive on campus. Fewer than one in 10 students who enroll in remedial coursework in community college will attain a credential within three years. "Showing up" isn’t enough, because those who enter developmental education in college struggle to complete. This is particularly troubling given that community colleges and regional public universities are the points of entry for a large number of traditionally underrepresented students.
Faculty dread the grade appeal; anxiety prevails until the whole process is complete. Much has been written about how to avoid such instances, but the potentially subjective assessments of written essays or clinical skills can be especially troublesome. One common cause of grade appeals is grading ambiguity, in which the student and faculty member disagree on the interpretation of required content. Another cause is inequity, whereby the student feels others may have gotten more credit for very similar work or content (Hummel 2010). In the health-care field especially, these disagreements over clinical-skills assessments can actually result in student dismissal from the program and may lead to lawsuits.
Whether it’s talking to colleagues, reading the latest research or visiting a teaching and learning center, professors have places to turn to learn about best pedagogical practices. Yet faculty members in general still aren’t known for their instructional acumen. Subject matter expertise? Yes. Teaching? Not so much.
As dean, I travelled to San Francisco a few years ago with most of my college’s faculty members and doctoral students for a national conference in our field. I didn’t rent a car, because everything on the agenda — leadership meetings and donor visits — was within walking distance of our hotel. Then a major donor from a faraway suburb called and wanted to meet near his home.
Unfortunately, the local rental dealerships were sold out of standard vehicles, but — "good news" — a luxury convertible was available for the same price. I pondered for a moment and declined. Why? I was worried about the optics. That is: how it would look if people from my campus saw me driving away from the hotel like some movie star, thereby confirming prejudices about rich, privileged deans.
Was I being silly, even paranoid?
Faculty members juggle teaching, grading assignments, and conducting research. They write grants, run labs, and serve on the committees that keep their academic departments and institutions going.
One aspect of their jobs that stands out in both its rewards and its challenges is working with students. Here are key findings from a Chronicle survey of nearly 1,000 faculty members: Most faculty members find teaching students to be satisfying work.
The prevailing statistics on cheating are disheartening. Some put the rate at 75%. That means three out of every four students admit to some kind of academic dishonesty at some point during their higher education.
We all know that this is not a new phenomenon. Cheating is as old as higher education itself. Older, really, if you look outside the classroom. Classicists tell us that cheating scandals occurred even during the ancient Olympic Games.
So is there really a way to solve a problem with such ancient roots?
In the world of college composition, we spend a lot of time talking about how to teach writing — with as many opinions on that as there are instructors — but very little time talking about why we teach it.
Many professors take a philosophical approach, asserting that the purpose of teaching writing is to enrich students’ lives, promote self-exploration, or encourage political activism. Certainly all of those can be byproducts of a college writing course, but I would argue that none qualifies as its main purpose. The reason institutions offer — and often require — first-year composition is quite simple: so students learn how to communicate their expertise.
The most famous dictum of the science fiction writer and futurist Arthur C. Clarke may be his Third Law: “Any sufficiently advanced technology is indistinguishable from magic.” And for most of us, the results returned by 21st-century search engines — Google, Bing, Yahoo and others — can be uncannily accurate. But when it comes to learning, instant gratification can be as much a bug as a feature.
Take high school students today. They have grown up using search engines and other web resources; they don’t need to understand how these tools work in order to use them. In fact, thanks to what’s called machine learning, search engines and other software can become more accurate — and even those who write the code for them may not be able to explain why.
If social movements are best conceived as temporary public spaces, as moments of collective creation that provide societies with ideas, identities, and even ideals, as Eyerman and Jamison (1991, p. 4) have argued, then educational researchers have much to learn from movements. Educational processes and contexts are crucial to the ways in which social movements’ ideas, identities, and ideals are generated and promoted, taught and learned, contested and transformed. Indeed, movements themselves are educators, engaging participants in informal education (through participation in movement activity),
non-formal education (through the educational initiatives of the movement), and even, sometimes, quasi-formal education (through special schools within movements). Moreover, movements are producers of knowledge that, when successful, educate not only their adherents but also broader publics (Crowther & Shaw, 1997; Dykstra & Law, 1994; Eyerman & Jamison, 1991; Hall, 2006; Martin, 1988; Stromquist, 1998).
Asked to offer advice to new hires in her department, a senior professor replied, "There is no way not to have a first year." Her remark seemed odd, and a bit ominous, but it turned out to be an accurate indicator of the harried life of a first-year faculty member.
Do you really believe that watching a lecturer read hundreds of PowerPoint slides is making you smarter?
I asked this of a class of 105 computer science and software engineering students last semester.
If you’re a faculty member, you’ve spent the last few weeks preparing your syllabus for the spring semester. You’ve updated the document and added a little to it. This latest round of edits may have pushed your syllabus another page longer — most now run about five pages, though nearly every campus has lore of some that exceed 20.
Let’s start by acknowledging the truth: Course evaluations are incredibly biased and aren’t an accurate measure of an instructor’s
effectiveness in the classroom. Too often, students’ perceptions of your appearance, demeanor, or pedigree prevent them from writing a fair and relevant review of your actual teaching. Yet despite dozens of studies demonstrating their unreliability, course evaluations continue to be used in hiring, tenure, and promotion decisions by most colleges and universities.
What makes a good introduction for a dissertation? Graduate students practice critiquing one another’s thesis chapters, but they rarely read the introductions — usually because those are written to meet a defense deadline. Which is why when you need to write one, you can find yourself with neither experience nor models.
One of the oldest — and most tired — debates in the education world is about skills versus content. For years, especially in K-12 circles, teachers, administrators, and education researchers have debated whether skills or content are more important for students to learn.
The apparent dichotomy has proven surprisingly sturdy. In an April 2016 report on skills as “the new canon,” The Chronicle detailed an effort at Emory University to shift faculty focus toward teaching the skill of using and evaluating evidence. The story quoted Emory lecturer Robert Goddard, who worried that the move to skills-focused courses was “doing a disservice to the students by not having a more coherent, uniform body of content to deliver.” Such a conception suggests a zero-sum game: More time spent on skills necessarily means less time spent on content.
But if a consensus has emerged in this long-standing debate, it’s one that pushes against an either/or approach.
Douglas Mulford worried when his lab course moved to remote instruction this past spring. Mulford, a senior lecturer of chemistry at Emory University, had worked out a system for giving in-person exams in large classes. But with his 440 students taking their final online, he feared, it would be much easier for them to cheat.
So Mulford set out to protect his test. He looked into lockdown browsers, which limit what students can do on their computers during a test, but concluded they were pointless: Most of his students had a smartphone, too, he figured, and could simply consult it instead. He thought about using a proctoring service, but wasn’t convinced it could handle this volume
of tests on such short notice. So he settled on what he calls “Zoom proctoring,” having students take their final in a Zoom room, with videos turned on, while a TA watched them and recorded the session.
When it comes to skills development, sometimes you have to make advantage before you can take advantage.
I’m sitting at my desk in the Research Institute at SickKids, putting the finishing touches on our skills and career development curriculum for the upcoming academic year. Our office has an open-door policy, so one of the institute’s PhD students pops in to talk about internships. They’re interested in participating in our administrative internship program, which places grad students and postdocs in departments like grant development, knowledge translation and tech transfer. What they really want though is to work in the project management unit. They’re seriously interested in moving into a project management role after they graduate, but they want to get some practical experience first to find out if they really enjoy the work and to build their network.
In higher education, the concept of good is elusive. Do we know good when we see it? For example, while there is general agreement that community college graduation rates are too low, there is not yet consensus about what would constitute a good, or an outstanding, graduation rate.
At community colleges, benchmarking and benchmarks are about understanding the facts and using them to assess performance, make appropriate comparisons, establish baselines, set goals, and monitor progress — all in the service of improving practice so more students succeed.
As part of this practice, the Center for Community College Student Engagement encourages colleges to use data that can support reasonable comparisons both within and across institutions and to have broad, campuswide conversations to address key follow-up questions: What are our priorities here, in this college? In what areas do we need and wish to excel? And how good is good enough — for our students, our college, our community?