Syllabi: best practices or just best guesses?

The syllabus is a frequent topic of educational research. Over the past decade, researchers have measured and evaluated student responses to the tone, voice, length, design, and presentation format of the syllabus. The expected product of this research would be an identified set of best practices in syllabus construction. Certainly, there has been no reluctance to endorse various syllabus models as being grounded in “best practice.” However, what emerges from a review of the existing literature on syllabus design is a winding path of contradictions. Absent are large-scale studies that span the curriculum or cross educational disciplines. The lack of consensus ultimately suggests inconsistent conclusions about best practice. The best practices for any syllabus may be highly dependent on the instructor, the course, and the institution; alternatively, it may be that best practices cannot, and should not, be identified at this time.

In September 2020, we launched a literature search to determine whether best practices in syllabus design could be identified. After a comprehensive review of 68 articles meeting the inclusion criteria, minimal consensus could be reached on which elements of a syllabus improve the student experience. Findings across the studies frequently conflicted or compared inconsistent variables, often relying on small sample sizes within individual courses. For example, while Anderson (2010) found that students perceived more warmth in syllabi believed to be written by women, Denton and Veloso (2017) found that the perceived gender of the instructor had no impact; instead, a friendly written tone was associated with positive ratings from students. In extended communications related to course policies, Bailey et al. (2015) found that the gender of the instructor did not influence students’ perceptions of fairness in responses to student requests for exceptions to a policy. Jenkins et al. (2014) found that the gender of the instructor did not influence student expectations regarding the enforcement of strict course policies in the syllabus.

Researchers have also explored the use of audio and visual media to complement the written syllabus and shape students’ perceptions of the instructor. Harnish and Bridges (2011) found a “warm” tone in the syllabus helpful, as 172 Pennsylvania State University students reported that friendly tones are motivating; however, pairing the syllabus with an introductory video did not enhance perceptions of the instructor’s warmth. Jones (2018) reported that graphics and illustrations were not considered useful in a study of 103 freshmen. In a study of 56 students, Overman et al. (2019) found that students remembered more details from a traditional text syllabus than from one built on an infographic template. Yet Mocek (2017) found that syllabus infographics improved information retention among at-risk students. Mikhailova (2018) argued that students at Clemson University preferred a graphic syllabus and found it easier to understand. What conclusion can we draw from this scattered collection of positive and negative findings?

Another nagging question: is less more? While acknowledging that student expectations shape syllabus preferences, Lightner and Benander (2018) found that a simple syllabus was easier to understand, based on the views of students expressed in a focus group (eight students) and a survey (83 students). Conversely, Martin and Scheetz (2011) advocated longer, more detailed syllabi for online courses, and a 2010 survey of 97 psychology students indicated a preference for detailed syllabi (Saville et al., 2010). Likewise, in a study of 149 community college students, Harrington and Gabert-Quillen (2015) found that adding extra details to a syllabus, such as study tips, had a positive impact on students’ perceptions of both the course and the instructor. Typically, the preferred syllabus length appears to fall in the range of six to 12 pages; that is, assuming the instructor’s institution does not require a prescribed template, which most institutions do.

Yet more choices over syllabus design permeate the literature. Additional studies have explored the effectiveness of “promising,” negotiated, contractual, engaging, democratic, and learner-centered syllabi. There may be consensus on the importance of conveying relevance, but no clear direction on a particular course of action that effectively conveys it.

There may be a path to consistency in the body of literature. One could identify limited scenarios in which gender, graphics, or detail make a difference. Best practices could be identified for certain types of instructors and certain types of students. But ad hoc exceptionalism is of limited use, and it raises questions as to whether generalization is ever possible. If a practice motivates students but results in decreased information retention, which value prevails? In an ideal world, there would be an approach to syllabus design that makes sense for every instructor, in every discipline, at every institution. No such approach has yet seen the light of day.

The failure to find meaningful consensus should come as no surprise. Makel and Plucker first sounded a broader alarm about educational research in 2014. Their landmark study of 100 educational journals found that only 0.13% of published education studies were replications (Makel and Plucker 2014). Additionally, they found that, among the small percentage of replicated studies, the likelihood of successful replication was strongly affected by author overlap between the original study and the replication attempt. Makel et al. (2021) broadened the range of concerns in a recent study identifying tendencies among educational researchers to omit and/or massage data in order to confirm hypotheses. To address concerns about the reliability of published research, educational journals have issued calls for manuscripts devoted to replicating previous studies (Educational Research & Evaluation, 2021).

For now, however, the best practice in syllabus design is to be cautious in promoting “best practices.”

Dr. Lindsey Luther, DNP, is a faculty member at Mount Carmel College of Nursing. Professor Miriam Abbott, MA, is a faculty member at Mount Carmel College of Nursing. Dr. Roxanne Oliver, DNP, is Director of Graduate Programs at Mount Carmel College of Nursing.

References

Anderson, Kristin. 2010. “Students’ Stereotypes of Professors: An Exploration of the Double Violations of Ethnicity and Gender.” Social Psychology of Education 13: 459–472.

Bailey, Sarah, Jade Jenkins, and Larissa Barber. 2015. “Student Responses to Course Policy Decisions: An Empirical Inquiry.” Psychology Education 43 (1): 22–31.

Denton, Ashley Waggoner, and James Veloso. 2017. “Changes in Syllabus Tone Affect Warmth (but Not Competence) Ratings of Both Male and Female Instructors.” Social Psychology of Education 21: 173–187.

Educational Research & Evaluation. 2021. “A Call for Replication Studies in Education.”

Harnish, Richard, and Robert Bridges. 2011. “Effect of Syllabus Tone: Students’ Perceptions of Instructor and Course.” Social Psychology of Education 14: 319–330.

Harrington, CM, and CA Gabert-Quillen. 2015. “Syllabus Length and the Use of Images: An Empirical Investigation of Student Perceptions.” Scholarship of Teaching and Learning in Psychology 1 (3): 235–243.

Jenkins, Jade, Ashley D. Bugeja, and Larissa K. Barber. 2014. “More Content or More Policy? A Closer Look at Syllabus Detail, Instructor Gender, and Perceived Instructor Effectiveness.” College Teaching 62 (4): 129–135. doi:10.1080/87567555.2014.935700.

Jones, Natasha. 2018. “Human-Centered Syllabus Design: Positioning Our Students as Expert End Users.” Computers and Composition 49: 25–35.

Lightner, Robin, and Ruth Benander. 2018. “First Impressions: Student and Faculty Feedback on Four Styles of Syllabi.” International Journal of Teaching and Learning in Higher Education 30 (3): 443–453.

Makel, Matthew, and Jonathan Plucker. 2014. “Facts Are More Important Than Novelty: Replication in the Education Sciences.” Educational Researcher.

Makel, Matthew, Jaret Hodges, Bryan Cook, and Jonathan Plucker. 2021. “Both Questionable and Open Research Practices Are Prevalent in Education Research.” Educational Researcher.

Martin, Peter, and Laura Temple Scheetz. 2011. “Teaching and Learning Experiences in a Collaborative Distance Learning Environment.” Gerontology & Geriatrics Education 32 (3): 215–224.

Mikhailova, EA. 2018. “Enhancing Soil Science Education with a Graphic Syllabus.” Natural Sciences Education 47: 1–6.

Mocek, Evelyn A. 2017. “The Effects of Syllabus Design on Information Retention by At-Risk First-Semester Students.” Syllabus 6 (2).

Overman, Amy, Qian Xu, and Deandra Little. 2019. “What Do Students Really Pay Attention to and Remember from a Syllabus? An Eye-Tracking Study of Visually Rich and Text-Based Syllabi.” Scholarship of Teaching and Learning in Psychology. doi:10.1037/stl0000157.

Saville, Bryan, Tracy Zinn, Allison Yost, and Kimberly Marchuk. 2010. “Syllabus Detail and Students’ Perceptions of Teacher Effectiveness.” Teaching of Psychology 37: 186–189.
