While much literature has considered feedback and professional growth in formative peer reviews of teaching, there has been little empirical research conducted on these issues in the context of summative peer reviews. This article explores faculty members’ perceptions of feedback practices in the summative peer review of teaching and reports on their understandings of why constructive feedback is typically non-existent or unspecific in summative reviews. Drawing from interview data
with 30 tenure-track professors in a research-intensive Canadian university, the findings indicated that reviewers rarely gave feedback to the candidates, and when they did, comments were typically vague and/or focused on the positive. Feedback, therefore, did not contribute to professional growth in teaching. Faculty members suggested that feedback was limited because of the following: the high-stakes nature of tenure, the demands for research productivity, lack of pedagogical expertise among academics, non-existent criteria for evaluating teaching, and the artificiality of peer reviews. In this article I argue that when it comes to summative reviews, elements of academic culture, especially the value placed on collegiality, shape feedback practices in important ways.
Administrators at many colleges and universities have had online courses at their institutions for many years now. One of the hidden challenges of online courses is that they tend to be observed and evaluated far less frequently than their face-to-face counterparts. This is partly because many of us administrators today never taught online courses ourselves when we were teaching. This article provides six "secrets" to performing meaningful observations and evaluations of online teaching, including how to use data analytics, avoid biases, and produce useful results even if observers have never taught online themselves.
The educational benefits of embedding hands-on experience in higher education curriculum are widely recognized (Beard & Wilson, 2013). However, to optimize the learning from these opportunities, they need to be grounded in empirical learning theory. The purpose of this study was to examine the characteristics of internships in Ontario colleges and universities, and to assess
the congruence between the components of these internships and Kolb’s (1984) experiential learning framework. Information from 44 Ontario universities and colleges, including 369 internship program webpages and 77 internship course outlines, was analyzed. The findings indicated that internship programs overemphasize the practical aspect of the experience at the expense
of linking theory and practice. To optimize experiential education opportunities, recommendations include establishing explicit learning activities consistent with each experiential learning mode: practicing, reflecting, connecting coursework with practical experience, and implementing creative ideas in practice.
There’s a lot of talk these days about evidence-based instructional practices, so much that I’ve gotten worried we aren’t thinking enough about what that means. Let me see if I can explain with an example.
Recently I’ve been trying to locate the evidence that supports quizzing, wondering if it merits the evidence-based label. Tracking down this evidence in our discipline-based research is challenging because although quizzing has been studied across our disciplines, it’s not easily searchable. My collection of studies is good, but I know it’s not complete. As you might suspect, the results are mixed; they are more positive than negative, but still, a significant number of researchers don’t
find that quizzes affect learning outcomes.
I’ve been following, with something like exasperation, the discussion over Harvard University’s new study on teaching. Not surprisingly, the study found that physics students performed better on multiple-choice tests if they were taught via active learning strategies than by lecture alone. Yet it also found that students tended to feel they learned more from listening to a polished lecture.
“Stereotype threat” is a well-known social psychological construct in which people live down or up to the expectations others have of them based on their gender, race, age, or other such characteristics. As professors we are careful — or we should be — not to translate our personal beliefs about students’ capabilities into our expectations of how they will perform academically, but we rarely think about how students’ expectations of us affect our performance.
In particular, faculty who are women and/or members of racial minority groups run the risk of becoming stereotype threatened: feeling anxiety about whether they will either confirm or disprove students’ stereotypical beliefs.
If you don’t think students — or all people — have ideas about what a professor looks and sounds like, try this exercise: Ask a few people who don’t know you’re an academic to describe the “average” professor. Undoubtedly they will paint a picture of an older white male who may or may not be wearing a tweed jacket.
The paucity of women in science has been documented over and over again. A 2012 report from the President’s Council of Advisors on Science and Technology warned that the United States will face a deficit of one million engineers and scientists if current rates of training in science, technology, engineering, and math (STEM) persist (President’s Council of Advisors on Science and Technology, 2012). It’s not hard to see how this hurts the United States’ competitive position—particularly if women in STEM meet more gender bias in the U.S. than do women elsewhere, notably in India and China.
The philosophical halls are ringing lately with an argument over the virtue of graduate-student publication. J. David Velleman, a professor of philosophy at New York University, started the clamor in July when he posted "The Publication Emergency" on a philosophers’ blog, "The Daily Nous."
Velleman makes a simple but radical two-part proposal:
First, philosophy journals "should adopt a policy of refusing to publish work by graduate students."
Second, to give teeth to the ban, Velleman suggests that philosophy departments "adopt a policy of discounting graduate-student work in tenure-and-promotion reviews."
We’ve all read the startling stories about lax standards in higher education. As faculty members, we’ve struggled with the growing expectation among undergraduates that a minor amount of work should be the norm for college-level courses. In their 2011 book, Academically Adrift: Limited Learning on College Campuses, Richard Arum and Josipa Roksa found that half of the students in the study’s sample "had not taken a single course during the prior semester that required more than 20 pages of writing, and one-third hadn’t taken one that required even 40 pages of reading per week."
She sat in the front row of my classroom, quiet but engaged. She didn’t raise her hand, but when I invited her into the conversation or asked students to speak to one another, she showed she had done the reading and had thought about it. I learned from an informal writing exercise that she was a first-generation college student, paving the way to higher education for her family.
Tioga High School senior Emily Kennedy studies a child development college course online in Groveland, as part of a collaboration with Columbia College.
The Early Childhood Education Report 2017 is the third assessment of provincial and territorial frameworks for early childhood education in Canada. Nineteen benchmarks, organized into five equally weighted categories, evaluate governance structures, funding levels, access, quality in early learning environments and the rigour of accountability mechanisms.
Results are drawn from detailed provincial and territorial profiles developed by the researchers and reviewed by provincial and territorial officials. Researchers and officials jointly determine the benchmarks assigned. We are pleased to welcome Nunavut and Yukon as new participants in this edition. ECEReport.ca includes the profiles for each jurisdiction, including the federal government, plus the methodology that shapes the report, references, charts and figures and materials from past reports.
Innovation cannot be taught like math, writing or even entrepreneurship, writes Deba Dutta. But it can be inculcated with the right skills, experiences and environments.
The past few years have ushered in more strident calls for accountability across institutions of higher learning. Various internal and external stakeholders are asking questions like "Are students learning what we want them to learn?" and "How do the students' scores from one institution compare to its peers?" As a result, more institutions are looking for new, more far-reaching ways to assess student learning and then use assessment findings to improve students' educational experiences.
However, as Trudy Banta notes in her article An Accountability Program Primer for Administrators, “just as simply weighing a pig will not make it fatter, spending millions simply to test college students is not likely to help them learn more” (p. 6).
While assessing institutional effectiveness is a noble pursuit, measuring student learning is not always easy, and like so many things we try to quantify, there’s much more to learning than a number in a datasheet. As Roxanne Cullen and Michael Harris note in their article The Dash to Dashboards, “The difficulty we have in higher education in defining and measuring our outcomes lies in the complexity of our business: the business of learning. A widget company or a fast-food chain has clearly defined goals and can usually pinpoint with fine accuracy where and how to address loss in sales or glitches in production or service. Higher education is being called on to be able to perform similar feats, but creating a graduate for the 21st century workforce is a very different kind of operation” (p. 10).
This special report, Educational Assessment: Designing a System for More Meaningful Results, features articles from Academic Leader and looks at the assessment issue from a variety of angles. Articles in the report include:
• The Faculty and Program-Wide Learning Outcome Assessment
• Assessing the Degree of Learner-Centeredness in a Department or Unit
• Keys to Effective Program-Level Assessment
• Counting Something Leads to Change in an Office or in a Classroom
• An Accountability Program Primer for Administrators
Whether you’re looking to completely change your approach to assessment or simply improve the efficacy of your current assessment processes, we hope this report will help guide your discussions and eventual decisions.
Rob Kelly
Editor
Academic Leader
When the path is clear and given, when a certain knowledge opens up the way in advance, the decision is already made, it might as well be said there is none to make: irresponsibly, and in good conscience, one simply applies or implements a program. Perhaps, and this would be the objection, one never escapes the program. In that case, one must acknowledge this and stop talking with authority about moral or political responsibility. The condition of possibility of this thing called responsibility is a certain experience and experiment of the possibility of the impossible; the testing of the aporia from which one may invent the only possible invention, the impossible invention (Jacques Derrida, 1992b, p. 41, italics in original).
Effective classroom management is much more than just administering corrective measures when a student misbehaves; it's about developing proactive ways to prevent problems from occurring in the first place while creating a positive learning environment. Establishing that climate for learning is one of the most challenging aspects of teaching, and one of the most difficult skills to master. For those new to the profession, failure to set the right tone will greatly hinder your effectiveness as a teacher. Indeed, even experienced faculty may sometimes feel frustrated by classroom management issues. Strategies that worked for years suddenly become ineffective in the face of some of the challenges today’s students bring with them to the classroom.
Brought to you by The Teaching Professor, this special report features 10 proven classroom management techniques from those on the front lines who’ve met the challenges head-on and developed creative responses that work with today's students. This report will teach you practical ways to create favourable conditions for learning, including how to:
• Get the semester off on the right foot
• Prevent cheating
• Incorporate classroom management principles into the syllabus
• Handle students who participate too much
• Establish relationships with students
• Use a contract to help get students to accept responsibility
• Employ humour to create conditions conducive to learning
The goal of 10 Effective Classroom Management Techniques Every Faculty Member Should Know is to provide actionable strategies and no-nonsense solutions for creating a positive learning environment – whether you’re a seasoned educator or someone who's just starting out.
Love it or hate it, group work can create powerful learning experiences for students. From understanding course content to developing problem-solving, teamwork, and communication skills, group work is an effective teaching strategy whose lessons may endure well beyond the end of a course. So why is it that so many students (and some faculty) hate it?
At a time when the Excellence Gap highlights that underserved populations are not achieving at advanced levels, Effective Program Models for Gifted Students from Underserved Populations is a valuable resource for examining ways to remedy this undesirable situation. This book describes eight models that represent various curricular emphases and applies them across grades. Consequently, it is a handy resource for any educators who want to teach in ways that allow students from poverty, as well as children who are African American or Hispanic, to achieve at advanced levels. These are the children who are often underrepresented in programs or services for advanced and gifted learners.