

Chapter 7

Ongoing and Systematic
Academic Program Review
Neal Shambaugh
West Virginia University, USA

ABSTRACT
Systematic program review is proposed as a semi-formal means to proactively involve higher education
faculty, staff, students and administrators in analyzing and making decisions about the future of their
programs. The chapter first examines issues facing higher education, issues that provide a rationale
for annual program reviews. The chapter positions program review as a form of participant-oriented
program evaluation, and describes features of annual program reviews. A case study illustrates how
a program review was conducted. Summary benefits and implementation guidelines are provided for
administrators and faculty.

INTRODUCTION
Current accountability approaches in higher education focus on the accreditation of the overall institution, with specialized accreditations for many curricular programs. Academic administrators, particularly department chairs and program coordinators, are tasked with these reporting requirements, instead of using data for program improvement and for better understanding who their students are and the impact of academic programs on them.
Within the culture of academia, the prevailing stance is the status quo, a stance that resists any attempt to add to the current responsibilities of faculty and any change in familiar habits and practices. Administrators who have attempted to instill innovative practices, particularly strategic planning or program evaluation efforts, have faced resistance. While higher education remains fixed in its view of curriculum, outcomes, and faculty work, continual "disruptions" are occurring in demography, economics, and culture (McGee, 2015). How can higher education administrators, faculty, and staff jointly face these ever-changing realities, re-think how to serve the needs of a learning population, and, overall, re-frame how they look at their work in academic programs?
DOI: 10.4018/978-1-5225-0672-0.ch007


Copyright © 2017, IGI Global. Copying or distributing in print or electronic forms without written permission of IGI Global is prohibited.




Rather than formal program evaluation or strategic planning, this chapter proposes a less formal approach: program review. The purpose of program review is to periodically discuss how academic programs have provided good value to students and to answer important questions, such as "Who are our students?" and "What changes need to be made, or what gaps exist in our programs that students need filled?" The focus of annual program reviews is to better understand students and their needs, how the program meets those needs, and how this understanding can direct future work. Program review is an annual activity conducted by faculty, staff, and students. In this way, program review becomes a regular fixture and responsibility of faculty members.
The chapter first summarizes four issues facing higher education, issues which suggest the need for
a means to understand students, societal changes, enrollment changes, and accountability requirements.
The second section describes program review as a form of participant-oriented program evaluation, while
a third section identifies features of program review. A case study of program review is provided so the
reader can see how the process might work and what questions guide the annual review. A final section
summarizes major benefits from annual program reviews, as well as implementation guidelines for
faculty members, program coordinators, and dean-level administrators.

ISSUES FACING HIGHER EDUCATION INSTITUTIONS
Four specific issues facing higher education provide a rationale for why ongoing and systematic program
review may be helpful and necessary.
First, the complex physical and psychological nature of the learner is always changing. The brain's neurological system undergoes constant change, and whatever we put in front of people impacts their thinking and behavior (Restak, 2003). Ultimately, what we are motivated to pursue is based on what we choose to pay attention to (Gallagher, 2009). Schools and institutions were organized around stable features: a well-defined management-worker structure, known workplace needs, clear roles, and technology that was less an influence than it is today. In his "then" versus "now" perspective, Chester (2005) identified values as one major category of differences in young people. These value shifts,

according to Chester, involve digital choices, self-expression, immediate involvement, and a free-agent
work ethic. Digital thinking embraces a re-boot approach to behavior rather than a traditional analog
view that behavior has consequences. Self-expression differences in “then versus now” can be seen with
the notion of personal respect, which was traditionally viewed as earned over time as opposed to now
where respect by young people is expected immediately. Rather than working “up the ladder” today’s
young people want to make a difference. Their motivation to stay in the classroom or on the job may be
keyed to the opportunities they have at becoming immediately engaged in real world issues. Traditional
views saw life as an unfolding sequence, but life is viewed by young people as immediate involvement in
games of reaction. Today’s young workers see themselves in numerous occupations over their lifetime,
as opposed to one career. One can see this value shift in higher education in what enrollment managers
label as "swirl," in which students enroll in multiple institutions and attend classes intermittently (Adelman, 2006). Parents see their identity influenced by their work, while younger people see their identity as a network of relationships and work as a changing reality over their lifetime.
A second issue is that educational structures have not adapted to these individual and social changes.
Digital media production and learning, for example, occurs by young people in public network “hang
outs” outside of adult control. The skills and literacies learned are not configured as learning outcomes




in public schools or higher education (Ito, 2010). Educational reform lags behind fast-moving technological change, such as social media, so understanding the individual in school and the workplace may increasingly require collaborative structures for learning (Shirky, 2009, 2010). One feature of program review advocated in this chapter is to build in ways to continually learn from human feedback in
academic and workplace settings.
Given the first two issues of evolving personal characteristics and workplace needs, traditional higher
education institutions are facing a third issue, that of smaller enrollments. Changing demographics, such
as dropping numbers of high school graduates and tighter family budgets, have resulted in higher education institutions tapping international students and out-of-state students (Selingo, 2013) to pay the bills.
Meanwhile, higher education institutions are also facing lower statewide support and competing service
providers who are attempting to reach these audiences (NCES, 2011). Students are partially drawn to
for-profit educational services, which can give young people the personalized treatment they expect (i.e., the customer orientation) and the specialized, personalized, and flexible educational programs desired
by older students (Selingo, 2013). Ongoing program review, the focus of this chapter, becomes a means
to help answer questions such as "Who is the student?" and "What gaps exist in our programs?"
A related pressure and the fourth issue facing higher education is the call for increased accountability
to show evidence that higher education is a good value and worthy of investment. Formal accountability,
one means to demonstrate this value, is addressed by accreditation of the entire institution, a process
that is required for students to qualify for Federal aid, but also specialized accreditation of programs, a
process that is not required but can be very expensive for institutions (Selingo, 2013). Despite the cost,
administrators use this accreditation to enhance their reputation, while there are efforts to transform
the process into a means for program improvement (e.g., Driscoll & Noriega, 2006). One related set of
accountability measures for institutions is known as persistence (term-to-term return), retention rates
(fall-to-fall student numbers) and graduation or completion (time to graduation) (Hundrieser, 2012). Such
data may be traced to societal and curricular issues, which can be openly acknowledged and discussed
in ongoing program reviews by not only administrators but by faculty and staff who remain closest to
students. In addition to understanding why enrollment in some programs is flat or declining, program
administrators, parents and even donors would find of interest any intent to understand why students do
not continue to graduation. Citizens, too, value increased accountability, owing to the growing expense
of higher education and its value for future employment.
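The three accountability measures named above have simple operational definitions: persistence is a term-to-term return rate, retention is a fall-to-fall return rate, and completion concerns time to graduation. As a minimal, hypothetical sketch (the record structure and term labels are invented for illustration, not taken from the chapter), they might be computed like this:

```python
# Hypothetical sketch: computing persistence (term-to-term) and retention
# (fall-to-fall) rates from a minimal, invented student-record structure.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class StudentRecord:
    terms_enrolled: List[str] = field(default_factory=list)  # e.g., ["F2014", "S2015"]
    graduated_term: Optional[str] = None

def persistence_rate(records, term, next_term):
    """Share of students enrolled in `term` who return the following term."""
    enrolled = [r for r in records if term in r.terms_enrolled]
    returned = [r for r in enrolled if next_term in r.terms_enrolled]
    return len(returned) / len(enrolled) if enrolled else 0.0

def retention_rate(records, fall, next_fall):
    """Fall-to-fall return rate, the usual retention measure."""
    cohort = [r for r in records if fall in r.terms_enrolled]
    retained = [r for r in cohort if next_fall in r.terms_enrolled]
    return len(retained) / len(cohort) if cohort else 0.0

records = [
    StudentRecord(["F2014", "S2015", "F2015"]),
    StudentRecord(["F2014", "S2015"]),
    StudentRecord(["F2014"]),
]
print(persistence_rate(records, "F2014", "S2015"))  # 2 of 3 persisted
print(retention_rate(records, "F2014", "F2015"))    # 1 of 3 retained
```

Even toy calculations like these make the later program-review discussion concrete: the same counts that feed accreditation reports can be reframed as questions about why students do not continue.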
In summary, higher education faces changes in the student and worker, changes in the workplace,
increased competition for student enrollments, and accountability pressures. These four issues provide a
rationale for why institutions could begin ongoing program review activities in order to learn more about
how academic programs provide value and what changes need to be made. Program review provides
a means for everyone involved in the educational process to discuss the changing student, workplace
needs, dropping enrollment, and program success data. One of the challenges to considering program
review is to educate faculty and staff on the purposes for review as opposed to more formal program
evaluation approaches.

PROGRAM REVIEW AS PARTICIPANT-FOCUSED EVALUATION
Program evaluation consists of a broad set of approaches to make judgments about programs, depending
on their primary focus. Some program evaluation approaches focus on the program itself, such as program





objectives, information needs, consumer ratings, and expert judgments, and are based on a rationalistic
approach to evaluation (Fitzpatrick, Sanders, & Worthen, 2004). Program evaluation also includes a
continuum of informal and formal approaches (Schwandt, 2001). Program review as discussed in this chapter can be considered a type of program evaluation known as participant-focused evaluation, which keeps students and faculty at the center of the review process. Faculty, staff, and students are
considered as first-hand sources of experience and naturalistic inquiry approaches (e.g., case studies) are
typically used. Historically, Stake’s (1967, 1995) work on responsive evaluation provided early guidance
for participant-oriented program evaluation, while Guba and Lincoln's (1989) focus on stakeholders
served to bring people into the evaluation process.
Specific approaches to evaluating academic programs in which all stakeholders participate in appraising student learning plus faculty and staff work (Allen, 2004) share several features. First, as with any
evaluation activity, it is necessary to obtain consensus on the purpose for any review or evaluation. The
purpose for evaluation in participant-oriented approaches is to understand how students, faculty and staff
contribute to program success. A second feature has data linked to institutional goals, as they exist in unit
and institutional mission statements and strategic plans. A third feature of participant-oriented program
evaluation in the review of academic programs is determining that ongoing program improvement is a priority and key to meeting accreditation and certification requirements (Ramaley, 2006). Such a focus re-frames program review as part of faculty work and takes a broader view of evaluation as more than the often-heard statement that "this is the year we have to do that."
A fourth feature of human-centered reviews is that all constituents are involved. Middaugh (2009)
advocates that all members of the university community must participate in systematic and sustainable
assessment, which is tied to strategic planning. Faculty members, in their discussions about curriculum
and the work they do, must continually ask and answer the question “How can our academic programs
and experiences help to prepare the citizens responsible for a globally-connected world?” and more
fundamentally ask this question: “How can faculty work processes evolve for ongoing re-examination?”

Underlying these questions is the pragmatic question: "How do we change from 'what we are' to 'what we need to be'?"

FEATURES OF ONGOING AND SYSTEMATIC PROGRAM REVIEW
Ongoing Review Provides a Strategic Approach
People tend to avoid strategic conversations or activity because the common practice is to develop a strategic plan and then do nothing more with it. A major failure is not connecting the plan to actual work and not having a process for evaluating the plan itself. As strategic plans take time for coordination, discussion, and decisions, department chairs and faculty members are reluctant to repeat the process. Any strategic decisions must specify how faculty work supports the plan, a problematic connection, as the customary practice in academic settings is the primacy of the faculty member's agenda and assigned workload for teaching, research, and service responsibilities. One problem with strategic plans in academia is that faculty evaluation is not connected to the strategic plan. A second failure in many strategic plan documents is not specifying how the plan will be monitored on a continual basis and evaluated periodically. Thus, strategic plans are cut off from academic work, and without an evaluation procedure the plan is guaranteed to "sit on the shelf."




The label "program review" makes the process more accessible to faculty and administrators, because the activity addresses the work that faculty do: the teaching, advising, research, and service activities within curricular programs. Program review, given the framework and questions suggested in this chapter, examines what could be viewed as strategic priorities and overall program directions, as well as making judgments about priorities given available resources. Program review questions prompt faculty to comment on how their programs serve constituent needs and to what extent these programs maintain a presence in their communities or state, as well as their national and international contributions. Program review asks the broad questions of "What are we known for?" "What do we want to be known for?" and "How shall we spend our day?" These are pragmatic questions of interest to faculty members. Program review, once its benefits are experienced through actual use, is then understood as an ongoing activity, as opposed to a periodic challenge of re-writing a strategic plan when the present plan was never attended to.

Program Review Develops a Systematic Process
The case study that follows describes a systematic and doable process to conduct yearly program reviews. The program review document summarizes current programs, displays enrollment data, and uses
analysis questions to lead to synthesis decisions. As a result of this discussion, lasting 60-90 minutes,
a program area group collectively becomes informed as to how programs are meeting student needs,
where the gaps are, what changes need to be made, and what resources are needed if programs are to be
changed or added to in some way. Several issues may be identified that warrant future meetings. Thus, a record of discussion can be archived and retrieved as needed to document progress towards identified priorities. A key question asks faculty, students, and staff to suggest improvements to the process of annual program reviews.
A tool that can be used to map the benchmark progress of projects that have been identified by
the group is logic modeling. Logic modeling has been used by grant funding agencies and non-profit
organizations to map progress toward a goal. Logic models prompt users to identify the outcomes and
the specific activities to reach those outcomes, and display progress within those activities. Thus, logic
models provide a communication tool that is not a strategic plan but rather a working map of progress
continually updated along benchmark activities. The components of a logic model include resources,
activities, outputs, outcomes, and goals (W. K. Kellogg Foundation, 2004). An attribute of logic modeling is first
determining the goal or long-term result of the project or program (Knowlton & Phillips, 2013). The
model specifies resources needed by the program and activities that tap these resources, including financial and social capital. Outputs prompt faculty to identify short-term, intermediate, and long-term
results, as well as impacts and benefits from these results (Knowlton & Phillips, 2013). Logic models
are sometimes viewed as tools for program evaluation, but they are viewed here as tools for mapping
progress on faculty-identified priorities, rather than as an evaluation mechanism.
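The logic-model components described above lend themselves to a very small data structure. The following is a hypothetical sketch, not an implementation from the chapter; the goal, activity names, and status labels are invented examples of faculty-identified priorities:

```python
# Hypothetical sketch of a logic model as a simple data structure, following
# the resources -> activities -> outputs -> outcomes -> goal components
# described above. All names and statuses are invented for illustration.
logic_model = {
    "goal": "Launch an online B.S. program",
    "resources": ["faculty time", "instructional design support"],
    "activities": [
        {"name": "Convert remaining course to online format", "status": "in progress"},
        {"name": "Submit courses for in-house review", "status": "not started"},
    ],
    "outputs": ["all courses approved for online delivery"],
    "outcomes": {"short_term": "first online cohort enrolled",
                 "long_term": "steady statewide enrollment"},
}

def progress(model):
    """Working map of progress: fraction of benchmark activities completed."""
    acts = model["activities"]
    done = sum(1 for a in acts if a["status"] == "complete")
    return done / len(acts)

print(f"{progress(logic_model):.0%} of benchmark activities complete")
```

Updating an activity's status and re-reporting `progress` is what makes the model "continually updated along benchmark activities" rather than a one-time planning document.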

Program Reviews Examine Who the Learners Are
Program reviews prompt faculty members to re-examine assumptions about students. One of the realities of today's world is that the learner and the demographic characteristics of the population are always changing. Higher education institutions have focused on high school graduates, the adolescents who make up the traditional student population, but with dropping enrollments, administrators are beginning to pay attention to attracting adult learners who have a job and need to complete a program for career advancement. These potential students may have started a program but did not finish because of family or
military responsibilities. Another group of potential students are those who need ongoing professional
development to build knowledge and skills as well as those who are motivated as life-long learners.
Program reviews question the assumptions that higher education faculty and administrators have of
who the students are. They help to identify gaps in program offerings or re-examine primary programs
as well as niche program offerings. In the program review document introduced in the case study, an initial set of analysis questions asks "Who do we serve?" Asking this question helps faculty members to
match existing programs to current students. Asking the question “Who are we missing?” attempts to
match potential students to existing programs. Answers to both questions prompt program area group
faculty to identify gaps in program offerings and to make prioritized decisions about addressing those
gaps. Here discussions of resource needs will arise. Program reviews serve to initiate dialogue between
faculty members about program changes in light of available resources and what might be needed. A
strategic benefit occurs out of these discussions as decisions about program changes can result in resource
adjustments and project scheduling.

PROGRAM REVIEWS DISCOVER THE BASIS FOR PROGRAM PERFORMANCE
Program review helps to establish the criteria for program performance, such as student knowledge
and skill competency, as well as enrollment and graduation rates. An example worth elaborating on is
competency-based education, an approach to curriculum which identifies what we want graduates to
know and be able to do upon graduation from a program. Rather than seeing education as earning a degree or certificate organized around completed student credit hours, a competency model assesses student learning as opposed to degrees granted. Such an approach will gain significant traction if financial-aid restrictions allow this form of education. Such competency programs may still be tied to the credit system, but typically enable credits to be earned for life experience.
Critical issues in such programs are the evaluation of courses (e.g., adequate time with instructors)
and assessment of student competence. Competency assessment may involve demonstration of knowledge

and skills in an actual workplace setting. The Lumina Foundation published a report on what college graduates should know at the associate, bachelor's, and master's degree levels (Adelman, Ewell, Gaston, & Schneider, 2014). In the study, learning outcomes were organized around five categories of proficiency,
including specialized knowledge, broad and integrative knowledge, intellectual skills, applied and collaborative learning, and civic and global learning. In its report, the Lumina Foundation acknowledged
the value of affective goals, such as integrity, personal initiative, and professionalism, to be considered
by individual program groups.
Program reviews would provide a mechanism for ongoing discussions of program goals, including
quality of curricular delivery and determining what quality means. The identification of competencies,
for example, might already be in place from accrediting bodies for individual programs. Student competencies may take the form of competency-based education but could also follow traditional summative
portfolio or capstone approaches to documenting what students can do.
Another opportunity for the discussion of program goals, in particular those that address student
learning, is to involve employer feedback and to consider their viewpoints on what constitutes students’
capabilities upon graduation. While there may be resistance from faculty who see their programs as workforce development, such discussions of what constitutes expertise open the door for input from the workplace sector in tangible ways.




CASE STUDY OF ONGOING PROGRAM REVIEW
An example of systematic program review of undergraduate and graduate programs in Child Development is used to illustrate one way to implement this participant-oriented approach. Child Development
curricular programs included one undergraduate program with four areas of emphasis attached to the
program, three undergraduate minors available across the university, two undergraduate certificates,
one master's program, and a child development specialization within a college-wide PhD program.

Front-End Data and Program Review Document
An associate dean for Academic Affairs, a former member of the department, briefed Child Development faculty on the merits of the review and the process, which would include a 90-minute meeting where faculty members responded to questions. The department chair and program area faculty approved the idea for the review and scheduled a meeting.
Prior to the meeting the associate dean provided the group with summary information on program
offerings and historical enrollment numbers. Five-year enrollment numbers for both undergraduate and
graduate programs were summarized in tables and visualized in graphs to depict five-year trends. Summer enrollments for five years were also supplied. Also distributed in advance was a program review
document template, which was used to record the responses to the questions and ensuing discussion.
The review document included the following elements and discussion topics:

UNDERGRADUATE PROGRAMS (REPEAT FOR GRADUATE PROGRAMS)
Enrollment and Explanations

• Program name, Classification of Instructional Programs (CIP) number, and the institution's curriculum code.
• Five-year enrollment tables and graphs.
• Discuss reasons for program enrollment trends/numbers (up/down/flat).
• Discuss societal/cultural, legislative, and state/local context, as well as university and program issues, underlying the enrollment numbers.

Analysis: Program Students – Who Do We Serve?
1. Who are your students in these programs?
2. Who are we missing for existing programs?
3. Who are we not serving?
4. What is needed to start any new programs?
5. What is our statewide and national presence? How might we serve our constituents?

Synthesis – Decisions: Program Goals with Resources
1. What summer courses are needed?
2. What improvements need to be made to existing programs?
3. For any new program or initiative, provide the following information: rationale/students, resources, time frame, and enrollment targets.
4. What program features might attract donor support?

The above questions were repeated for the graduate program offerings. A final question asked how the program review process could be improved.
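The enrollment portion of the review document asks faculty to label five-year trends as up, down, or flat. A minimal, hypothetical sketch of how such labels might be produced for the pre-meeting tables follows; the program names, enrollment numbers, and the 5% tolerance threshold are invented for illustration:

```python
# Hypothetical sketch: classifying five-year enrollment series as
# up/down/flat, as prompted by the "Enrollment and Explanations" items.
def classify_trend(counts, tolerance=0.05):
    """Label a series by comparing the last year to the first,
    treating changes within `tolerance` (5% by default) as flat."""
    first, last = counts[0], counts[-1]
    change = (last - first) / first
    if change > tolerance:
        return "up"
    if change < -tolerance:
        return "down"
    return "flat"

# Invented example data: five years of headcounts per program.
five_year_enrollment = {
    "B.S. Child Development": [88, 92, 97, 105, 112],
    "Minor: Family Studies": [40, 41, 39, 40, 41],
    "M.S. Child Development": [30, 28, 25, 24, 22],
}

for program, counts in five_year_enrollment.items():
    print(f"{program}: {counts} -> {classify_trend(counts)}")
```

The point of the labels is not the arithmetic but the discussion they trigger: each up/down/flat judgment feeds the "discuss reasons" and context questions above.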

Results of Program Review Meeting
Reasons for Increased Program Enrollment

• The undergraduate program in Child Development was organized by four Areas of Emphasis (AOE). Increased enrollment numbers for each AOE were based on changes in state policy and specific job placement in the workforce. The Preschool AOE provided students with Preschool-Kindergarten certification. The Child Development AOE was a Plan B certificate program for students who opted out of teacher education testing. A third AOE, Family and Youth Studies, was not a certification track, but a program for older students with families. Finally, a fourth AOE, Special Needs for Preschool-Kindergarten, was developed in conjunction with the undergraduate Child Development degree.
• A staff person hired to coordinate programs was cited as key to addressing student issues.
• Admitting students with a GPA of 2.5 attracted students who had dropped below higher GPA requirements in other programs.
• A general education course for students across the university was cited as an effective introduction to the Child Development program. For example, students were required to conduct an observation in the program's nursery school.
• Certificate enrollment was now being tracked by the university registrar instead of the college advising office.

Context for Enrollment

• Program coordinators in Child Development are responsible for staying current with changes in state policy requirements for graduates to teach in the state's public schools. This policy awareness prompted program design changes to enable graduates to obtain higher-paying jobs.
• Two certificate programs were added because of the U.S. Head Start Reauthorization Act, which required more focus on infant/toddler development.

Analysis

• Who do we serve: Four Areas of Emphasis (AOE) matched four job placement categories in state public schools.
• Who are we missing in existing programs: Three minors, which were added to support student interests, increased enrollment. Courses for an Early Childhood Education Director's credential are fully enrolled each summer. Program certificates, certifications, and minors improved students' employability.
• Who are we not serving: As certified teachers were needed for pre-school teaching and Head Start programs, the program review reported on progress toward offering an online undergraduate Bachelor of Science program in Child Development. It was reported in the meeting that all courses except one were already being taught online.
• What is needed to start the online program: Faculty reported that these online courses were being readied for in-house review, as required by state policy for any online course. Faculty requested support to assist with this detailed requirement. A suggestion was made to initially cap enrollment. Faculty suggested that another staff person may be needed to coordinate the program in the long term if higher enrollments are achieved.
• Statewide presence: The group cited their accomplishment of being responsive to state students by providing programs aligned with existing job needs.

Synthesis – Decisions

• During the meeting it was decided to begin the online Bachelor of Science program for Pre-School Certification, as preschool teachers and Head Start staff would be needing this degree. Faculty set up a future meeting to agree on a program development timeline and discuss marketing support from the Dean's office.
• A specific 200-level course was reported to be required in the new online program. The group agreed that this course needed to be taught in the summer to enable students who may need it to graduate.
• A future meeting was scheduled to identify other courses needed for the summer.

Program Features for Donor Support

• The Associate Dean who was facilitating the meeting recommended that the college's Director for Development and Director of Recruitment be invited to a program meeting to discuss program offerings that might support statewide employment.

Summary of Program Review Meeting
After a 90-minute meeting, the program review accomplished the following:

• Understanding enrollment increases is just as important as explaining flat or declining enrollments.
• Connecting program decisions to potential student needs (i.e., employment) means students have a purpose for enrolling in a program, as opposed to offering programs and expecting students to apply.
• Discussing additional program offerings is conducted in the context of available and needed resources. Faculty members have a say in what programs can be offered given existing resources.
• Scanning the horizon for changes in state policy, career interests, and workplace priorities gives faculty information early enough to schedule program development and put resources in place.
• Producing information for department and college recruiting, as well as for the college advisory group made up of individuals from the private sector.
• Involving staff responsible for development and recruiting serves to increase both donor support and student numbers.
• Addressing program "presence" across the state and how faculty, students, and staff contribute to the wellbeing of local, statewide, national, and international audiences.

BENEFITS OF ONGOING PROGRAM REVIEW

• Dedicating time for faculty, staff, and students to discuss how one's academic program helps to prepare students to enter the workforce and develop habits of ongoing learning and improvement.
• Establishing a long-term objective that may be broad in scope (e.g., establish a Learning Sciences research and teaching program) but captures the direction a program wants to follow in reaching that long-term objective. Faculty-determined decisions provide direction and a timetable for future work in light of resources.
• Challenging assumptions held about a program's current learner profile.
• Documenting the performance of students, faculty, and staff within the program.
• Providing enrollment and matriculation data for short-term decisions, such as curriculum changes (courses, practicums, labs, studios, etc.) and instructor assignments.
• Ensuring that data is used for program improvement as well as program accountability for certification and accreditation bodies (Driscoll & Noriega, 2006).
• Producing timely information to inform various constituents, including advisory groups, donors, students, and community leaders.
• Developing a data collection system, which should be ongoing, routine, and invisible if possible. Data needs to be transformed for understanding (e.g., visuals, graphs) by users and made available (e.g., dashboards).

GUIDELINES FOR DEANS AND DEPARTMENT CHAIRS
An ongoing and systematic program review will not work unless the activity is supported by the academic unit's dean or director along with the department chairs, who may resist and view the activity as "one more thing for faculty to do." Consequently, what is needed is discussion of the values and purposes for program review, trials by volunteer program areas, and input from faculty members during and after the reviews. Department chairs and faculty may initially view program review as an artifact of strategic planning that never goes anywhere, or as "hunting for programs to cut." Department chairs, in particular, will need answers to "Why do this?" Major reasons to conduct program reviews can be pragmatically characterized to deans and department chairs in the following ways:



•	Enrollment in the unit continues to drop over an x-year period. This trend will continue without some form of intervention.
•	Improving enrollment is not just about recruitment, but also program improvement: keeping the students we have, giving them a quality experience, and attracting new students.



•	Program reviews can become a required duty of program coordinators, with workload adjusted as needed.
•	Enrollment review and program improvement are essential to giving the recruiter and development officers “something to work with.”
•	Given an institutional priority of service to a community or state, a program review explicitly identifies program presence.
•	We demonstrate that as a college we have our own process for program improvement.

GUIDELINES FOR FACULTY
Angelo (2002) reported that faculty members will engage in assessment as a valued work activity “only if they find it intellectually compelling, professionally rewarding, and relatively unburdensome” (p. 186). Faculty may be attracted to discussions that center on what they do, namely teaching, research, and service activities, as opposed to administrative issues. Faculty may also warm up to program review as a way to document one’s efforts, a typical requirement for faculty members.
With top-down support of deans and department chairs, faculty can be briefed by an associate dean or department chair that the program review process gives faculty members an opportunity to have a role in making decisions about the work they do. The review process is not merely about justifying programs; it begins with answering key questions on paper that explain enrollment trends, critique current programs, and suggest the need for changes.
The process becomes a collegial mechanism to invest all program educators and staff in identifying program and presence priorities. Faculty recommend program improvements, needed resources, and recruitment targets. An annual review, suggested for the fall semester at the beginning of the academic year, also prompts decisions for summer offerings the following year. In addition, faculty can become more invested in, and aware of, the potential of their programs for attracting donor support.
Sharing a program review question template provides structure to guide people through this review.
It is important, too, that faculty, students, and staff suggest changes in the questions, format, and the
overall process. Overall, the value to faculty is that this structured process gives them information to
make informed decisions.

IMPLEMENTING PROGRAM REVIEW
Implementing ongoing program review for program improvement and accountability could come through a top-down directive from a provost to deans of academic and support units. A potentially more effective approach is to identify programs whose members collectively decide to try out a program review and directly experience its benefits.
Suggested steps for a department chair or program facilitator are listed below:
1. Gather data on program enrollments over a five-year period and graph the numbers. Break out
enrollment data by program degree/option if possible.
2. Brief the program faculty on benefits and process.
3. Identify a program where faculty members agree to participate.

4. Distribute in advance a program review form showing a list of program offerings, 5-year enrollment trends, and prompting questions.
5. Schedule a 90-minute meeting of all program faculty early in the fall semester, so as to also discuss course offerings for the following summer.
6. Invite college recruitment and development staff to attend.
7. Use and revise the following review format and questions as needed.

Program Name and Offerings
Program Enrollment History (Data Table and Graph)
•	Discuss and suggest reasons for program enrollment trends/numbers:

Analysis: Program Students – Who Do We Serve?
•	Who are your students (matching program to students):
•	Who are we missing (matching students to programs):
•	Statewide presence (who/how/when):

Decisions: Program Goals with Resources
•	Program improvements: rationale, resources needed, timeframe, enrollment target
•	New program/initiative: rationale, resources, timeframe, enrollment target
•	Summer 2015: courses, PD programs (possible student numbers):
•	Program feature to attract donor support:
•	How to improve process:

1. Designate a person to act as facilitator and recorder.
2. Record notes from the discussion in the form, using the format order to organize the session.
3. Note any needs for subsequent meetings as determined by the faculty members.
4. Revise notes and add a summary decision list for action steps.
5. Send the program review form to faculty members and the department chair. Archive on online sites for reference and use for program reports and communication with different constituent groups as needed.
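Step 1 of the implementation sequence, gathering five-year enrollment data by degree/option and graphing the numbers, can be sketched with a short script. The degree options and counts here are hypothetical placeholders; a real review would pull them from institutional records:

```python
# Sketch of step 1: tabulate five-year enrollment by degree/option and
# render a quick text bar graph. All names and counts are hypothetical.

history = {
    "M.A. (campus)": [52, 49, 45, 40, 38],
    "M.A. (online)": [12, 18, 24, 31, 39],
}
years = [2010, 2011, 2012, 2013, 2014]

def bar_rows(history, years, scale=1):
    """Yield one text-graph row per option per year: label, year, bar, count."""
    for option, counts in history.items():
        for year, n in zip(years, counts):
            yield f"{option:>14} {year}: {'#' * (n // scale)} {n}"

for row in bar_rows(history, years, scale=2):
    print(row)
```

Even this crude picture makes the point of the step: breaking enrollment out by degree/option can reveal an overall decline masking a campus-to-online shift that a single total would hide.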

FUTURE DIRECTIONS

Social Change Features

•	Academic institutions will continue to face pressures from society, competitive institutions, and potential students to offer learning programs that result in improved employment or lifelong learning. Failure to do so will result in continual drops in enrollment.



•	Academic institutions will resist but will have to re-examine their attitudes and policies towards the credit-hour model and look to options such as competency-based education, where education is based on developing proficiencies versus credit-hour accumulation.
•	Academic units will continue to resist changes to the status quo. Change in how units function, including faculty workload and promotion criteria, will require both top-down and bottom-up leadership and faculty buy-in.
•	Formative program evaluation models, such as participant-oriented approaches, may be developed that are central to faculty member work, as opposed to summative approaches (Fitzpatrick, Sanders, & Worthen, 2004).
•	Program development will need to include processes to facilitate faculty dialogue. One approach could be the long-standing research using activity theory to better understand the roles of people and the contexts in which people work, and to improve work across boundaries (Daniels et al., 2009).

Technological Features

•	Program evaluation tools, such as logic modeling, may become routine in faculty work, but will function behind the scenes to depict progress towards program goals.
•	Faculty responsibilities will include automatic data collection, which is used for performance reviews as well as program improvement.
•	Collected data will be automatically transformed into visual representations, as is currently accomplished with dashboards, and will be available at all times.

CONCLUSION
Annual program review provides a working process for faculty to make informed decisions about their work in academic programs. A systematic review process can be organized by analysis questions, which seek to explain enrollment patterns and learn more about potential students and any gaps in program offerings to meet the needs of those students. Synthesis questions prompt faculty members to make decisions about program changes and improvements, given a discussion of the resources needed for those changes or new programs. Annual program review enables data-based program improvement as well as contributing to program accountability.

REFERENCES
Adelman, C. (2006). The toolbox revisited: Paths to degree completion from high school through college. Washington, DC: U.S. Department of Education.
Adelman, C., Ewell, P., Gaston, P., & Schneider, C. G. (2014). The degree qualifications profile: A learning centered framework for what college graduates should know and be able to do to earn the associate or bachelor’s or master’s degree. Retrieved from />
Allen, M. J. (2004). Assessing academic programs in higher education. Boston, MA: Anker Publishing.


Angelo, T. A. (2002). Engaging and supporting faculty in the scholarship of assessment: Guidelines
from research and best practice. In T. W. Banta et al. (Eds.), Building a scholarship of assessment (pp.
185–200). San Francisco, CA: Jossey-Bass.
Chester, E. (2005). Getting them to give a damn: How to get your front line to care about your bottom
line. Chicago, IL: Dearborn Trade Publishing.
Daniels, H., Edwards, A., Engestrom, Y., Gallagher, T., & Ludvigsen, S. R. (Eds.). (2009). Activity theory
in practice: Promoting learning across boundaries and agencies. New York: Routledge.
Driscoll, A., & de Noriega, D. C. (2006). Taking ownership of accreditation: Assessment processes that
promote institutional improvement and faculty engagement. Sterling, VA: Stylus.
Fitzpatrick, J. L., Sanders, J. R., & Worthen, B. R. (2004). Program evaluation: Alternative approaches
and practical guidelines (3rd ed.). Boston, MA: Pearson.
Gallagher, W. (2009). Rapt: Attention and the focused life. New York: Penguin.
Guba, E.G., & Lincoln, Y.S. (1989). Fourth generation evaluation. Thousand Oaks, CA: Sage.
Hundrieser, J. (2013). Strategic enrollment planning: A dynamic collaboration. Coralville, IA: Noel-Levitz.
Ito, M. (2010). Hanging out, messing around, and geeking out: Kids living and learning with new media.
Cambridge, MA: MIT Press.
Knowlton, L. W., & Phillips, C. C. (2013). The logic model guidebook: Better strategies for great results.
Los Angeles: Sage.
McGee, J. (2015). Breakpoint: The changing marketplace for higher education. Baltimore, MD: Johns
Hopkins University Press.

Middaugh, M. F. (2009). Planning and assessment in higher education: Demonstrating institutional
effectiveness. Hoboken, NJ: Jossey-Bass. doi:10.1002/9781118269572
National Center for Education Statistics (NCES). (2011). Projections of education statistics to 2020. U.S. Department of Education. Retrieved from />
Ramaley, J. A. (2006). Using accreditation to improve practice. In A. Driscoll & D. C. de Noriega (Eds.), Taking ownership of accreditation: Assessment processes that promote institutional improvement and faculty engagement (pp. xi–xvi). Sterling, VA: Stylus.
Restak, R. (2003). The new brain: How the modern age is rewiring your mind. Emmaus, PA: Rodale
Books.
Schwandt, T. A. (2001). Responsiveness and everyday life. In J. C. Greene & T. A. Abma (Eds.), Responsive evaluation. New Directions for Evaluation, 92, 73–88. doi:10.1002/ev.36
Selingo, J. (2013). College (Un)bound: The future of higher education and what it means for students.
Las Vegas, NV: Amazon Publishing.
Shirky, C. (2009). Here comes everybody: The power to organize without organizations. New York:
Penguin.


Shirky, C. (2010). Cognitive surplus: How technology makes consumers into collaborators. New York:
Penguin.
Stake, R. E. (1967). The countenance of educational evaluation. Teachers College Record, 68, 523–540.
Stake, R. E. (1995). The art of case study research. Thousand Oaks, CA: Sage.
W. K. Kellogg Foundation. (2004). Logic model development guide: Using logic models to bring together
planning, evaluation, & action. Retrieved from />wk-kellogg-foundation-logic-model-development-guide

ADDITIONAL READING
Bennet, A., & Bennet, D. (2004). Organizational survival in the new world: The intelligent complex
adaptive system. Boston, MA: Elsevier.

Buller, J. L. (2007). The essential academic dean: A practical guide to college leadership. San Francisco,
CA: Jossey-Bass.
Buller, J. L. (2013). Positive academic leadership: How to stop putting out fires and start making a difference. San Francisco, CA: Jossey-Bass.
Cockell, J., & McArthur-Blair, J. (2012). Appreciative inquiry in higher education: A transformative
force. San Francisco, CA: Jossey-Bass.
Crookston, R. K. (2012). Working with problem faculty: A 6-step guide for department chairs. San
Francisco, CA: Jossey-Bass.
Crow, M. M., & Dabars, W. B. (2015). Designing the new American university. Baltimore, MD: Johns
Hopkins University Press.
Culmsee, P., & Awati, K. (2011). The heretic’s guide to best practices: The reality of managing complex
problems in organizations. Bloomington, IN: iUniverse.
Gardner, H., Csikszentmihalyi, M., & Damon, W. (2001). Good work: When excellence and ethics meet.
New York: Basic Books.
Glassick, C. E., Huber, M. T., & Maeroff, G. I. (1997). Scholarship assessed: Evaluation of the professoriate. San Francisco, CA: Jossey-Bass.
Gmelch, W. H., & Miskin, V. D. (2004). Chairing an academic department. Madison, WI: Atwood
Publishing.
Palomba, C. A., & Banta, T. W. (1999). Assessment essentials: Planning, implementing, and improving
assessment in higher education. San Francisco, CA: Jossey-Bass.
Schon, D. A., & Rein, M. (1994). Frame reflection: Toward the resolution of intractable policy controversies. New York: Basic Books.


Stone, T., & Coussons-Read, M. (2011). Leading from the middle: A case-study approach to academic
leadership for associate deans. Lanham, MD: Rowman & Littlefield.
Suskie, L. (2015). Five dimensions of quality: A common sense guide to accreditation and accountability.
San Francisco, CA: Jossey-Bass.

Tuchman, G. (2009). Wannabe U: Inside the corporate university. Chicago, IL: The University of Chicago Press. doi:10.7208/chicago/9780226815282.001.0001

KEY TERMS AND DEFINITIONS
Accreditation: A certification process that evaluates whether an educational institution is fulfilling its mission.
Dashboard: At-a-glance representation of key performance indicators.
Logic Modeling: A process to identify a goal and supporting resources and activities to reach that
goal and visually represent progress.
Participant-Oriented Program Evaluation: A category of program evaluation that focuses on people, as opposed to program goals.
Persistence: The measure of students in higher education who return term-to-term and are more
likely to complete an academic program.
Program Review: A semi-formal approach, a form of participant-oriented program evaluation, used to determine the performance of people within academic programs, including faculty, staff, and students.
Program Evaluation: A family of models used to determine the “success” of a program and determine what success means.
Strategic Planning: A general term applied to any effort to determine major directions for a group
or institution.
