Last week, in my final rhetoric class of the semester, we did an end-of-term exercise that I’ve assigned for the past few years. I write a series of prompts on notecards, meant to encourage students to reflect on the semester and what they’ve learned. Each student comes to the front of the classroom, takes a notecard, and responds to the prompt in front of the class. There are also doughnuts.
Among the prompts is this one: "Before this class, I thought rhetoric was [fill-in-the-blank]. Now I think rhetoric is [fill-in-the-blank]." I got the format from Kimberley Tanner, who calls such prompts "retrospective post-assessments."
At this year’s freshman orientation at Morehouse College, David Thomas, president of the historically black men’s institution, was one of the new arrivals in Graves Hall. “I had a pretty rough night the first night,” he says. Students later told him: “None of us sleep on the mattress. Didn’t your mother come and make your bed?”
There’s mounting evidence suggesting that student evaluations of teaching are unreliable. But are these evaluations, commonly referred to as SET, so bad that they’re actually better at gauging students’ gender bias and grade expectations than they are at measuring teaching effectiveness? A new paper argues that’s the case, and that evaluations are biased against female instructors in particular in so many ways that adjusting them for that bias is impossible.
The “talent economy,” consisting of highly skilled personnel from the science, technology, engineering and mathematics (STEM) fields, is the linchpin of a productive society and economy. Maintaining knowledge-sharing in these fields relies on training, retaining and attracting global talent. It also requires encouraging international and intersectoral experiences (i.e., within academia, governments, industry and NGOs) for domestic and foreign researchers – otherwise known as “brain circulation” [PDF]. Indeed, international and intersectoral mobility should be part of career development for scientists to become leaders in increasingly multi- and interdisciplinary professional environments.
UBC’s “Moments that Matter” course mines departmental expertise to transform a second-year history course into a team performance.
The dull roar of plastic computer keys clicking in the lecture hall at the University of British Columbia stills for a moment as Canadian history professor Bradley Miller flashes a picture onto the screen behind him.
It’s former prime minister Pierre Elliott Trudeau, flamboyantly decked out in a cape, white jacket with a rose pinned to the lapel and a 19th-century dandy’s hat – an incongruous sight at that most high-testosterone of events, the Canadian Football League’s Grey Cup championship of 1970.
In the past few years, the business world has increasingly embraced failure. Entrepreneurs, once coy about past losses and missteps, now flaunt their failures like badges of honour. The idea of “failing upward” has become a recurring motif in blog posts, TED Talks, business conferences and self-help books – and this fetishization of failure has started to infiltrate the world of higher education.
What would happen if you were to arrive at your classroom, unplug the devices, turn off the projector, and step away from the PowerPoint slides … just for the day?
What would you and your students do in class?
This was the challenge I presented to 100 faculty members who attended my session at the Teaching Professor Conference in St. Louis this past June. The title of the session was, “Using ‘Unplugged’ Flipped Learning Activities to Engage Students.” Our mission was to get “back to the basics” and share strategies to engage students without using technology.
Regardless of our subject area, we’ve all had moments where some students appear to hang on every word, gobbling up our messages, images, graphs, and visuals with robust engagement. Within those very same classes, however, there will be a degree of confusion, perplexed looks, or at worst, the blank stare! In my field of anatomical education, like many other STEMM* disciplines, the almost ubiquitous use of multimedia and other increasingly complex computer visualizations is an important piece of our pedagogic tool kit for the classroom, small group, or even the one-on-one graduate-level chalk talk. Although a picture does indeed say a thousand words, the words that each person hears, or more importantly, comprehends, will vary widely.
Recruiting and hiring are duties that face almost all academic leaders, and they take a large bite out of their time and resources. It makes sense, then, to make every attempt to retain these new professionals. At the 2016 Leadership in Higher Education Conference, Kenneth Alford led a preconference workshop about the development and use of a mentoring program to help develop and retain new faculty.
Scholarly reading is a craft — one that we academics are expected to figure out on our own. After all, it’s just reading. We all know how to do that, right?
Yes and no. Scholarly reading remains an obscure, self-taught process of assembling, absorbing, and strategically deploying the writing of others.
Digital technology has transformed the research process, making it faster and easier to find sources and to record and retrieve information. Like it or not, we’ve moved beyond card catalogs, stacks of annotated books and articles, and piles of 3x5 cards. What hasn’t changed, however, is the basic way we go about reading scholarly work.
Do you really believe that watching a lecturer read hundreds of PowerPoint slides is making you smarter?
I asked this of a class of 105 computer science and software engineering students last semester.
For the past five years, the Community College Survey of Student Engagement (CCSSE) has been at the cutting edge of measuring aspects of the student experience that are linked to student success. The validation studies summarized in this report show the link between CCSSE results and improved student success. CCSSE’s reach and influence — it has collected information from almost 700,000 students at 548 different colleges in 48 states, British Columbia, and the Marshall Islands — is nothing short of remarkable in such a short period of time.
As a Biomedical Sciences major, I completed the two required “Physics for the Life Sciences” courses during the first year of my undergrad, and never considered those concepts again. Until now. I’m doing my doctorate in cardiovascular science, and the physics of blood flow has become an important element of my experiments. The little I remember from those two courses is far from sufficient for my current project. I’m now trying to teach myself the basics of fluid dynamics so I can properly understand and explain my own project.
Student evaluations of teaching, or SET, aren’t short on critics. Many professors and other experts say they’re unreliable -- they may hurt female and minority professors, for example. One recent metastudy also suggested that past analyses linking student achievement to strong evaluation scores are flawed, a mere “artifact of small-sample-sized studies and publication bias.”
Now one of the authors of that metastudy is back for more, with a new analysis suggesting that SET ratings vary by course subject, with professors of math-related fields bearing the brunt of the effect.
Here's an unsettling fact. One of Canada's most-renowned universities, with a student population the size of a small city, is chronically reliant on philanthropic donations to meet the demand for on-campus mental-health programs.
Let's think about that for a second.
Imagine having to scramble every year for donations simply to meet a minimum service standard. Now imagine being an institution without the luxury of a large Rolodex of donors – relying only on tuition fees or internal funding.
“Watching a (nearly) finished student receive that coveted job offer, whether it’s a faculty position she’s worked so hard for, a position at that top research lab, or a lucrative offer from that hot startup everyone wants to join.”
“Watching one of your students deliver a fantastic talk at a premier conference in front of a packed room of attendees from all over the world.”
“Getting an unexpected thank you note in the mail or an email from a former student, thanking you for that class you taught her six years ago and detailing how it’s changed the trajectory of her life and career.”
“Meeting up with a former student at an academic conference and being introduced to his or her current students getting ready to present their work.”
When you first joined the faculty, chances are your orientation included an overview of your responsibilities as a member of your new academic community. You were probably informed that you had an obligation to support the success of your students and colleagues, were expected to be an exemplar in terms of your scholarship and contributions to your discipline, and were required to devote a percentage of your time to departmental, college, or university service.
Interviews for campus-leadership positions have shifted entirely to video, in our Covid-19 era of travel bans and social distancing. Many of the clients I work with as a campus search consultant expect that shift to remain a trend, even after our shelter-in-place era passes. Video interviewing has its advantages — it saves money, for one — but it also creates a unique set of stresses for candidates.
In more than 100 administrative searches, I’ve seen an array of video snafus: cameras angled to focus on shiny foreheads, cameos by pets and naked toddlers, unmade beds clearly visible in the background. I’ve seen candidates — thinking they were on mute — shout at a spouse to be quiet and tell a child to "go pee." I’ve seen committee members — thinking they were on mute — talk about a candidate. I’ve watched candidates put on their eye makeup, sneeze into the screen, and bring in their kids to help manage the technology.
In the online class environment, students enjoy many advantages, such as increased scheduling flexibility, ability to balance work and school, classroom portability, and convenience. But there are potential shortcomings as well, including the lack of student-instructor interaction and a student not understanding the instructor’s expectations. A key mechanism to convey expectations while increasing student-instructor communication is relevant, timely, constructive, and balanced instructor feedback.
The prevailing statistics on cheating are disheartening. Some put the rate at 75%. That means three out of every four students admit to some kind of academic dishonesty at some point during their higher education.
We all know that this is not a new phenomenon. Cheating is as old as higher education itself. Older, really, if you look outside the classroom. Classicists tell us that cheating scandals occurred even during the ancient Olympic Games.
So is there really a way to solve a problem with such ancient roots?