Proposed Four-Year Assessment Cycle for Academic Programs

Henderson State University is moving to a four-year assessment cycle. Beginning in fall 2016, all academic programs will enter the first year of the cycle. For each program, assessment will be conducted and the associated data collected for three consecutive years; year four will be used to analyze the data, determine future actions, adjustments, and directions for program improvement, and create a new assessment plan for the next four-year cycle. Department-wide collaboration is essential: discussion, participation, and reflection throughout the assessment process generate effective and useful assessment plans and provide clear direction for future action and improvement.

Outline for the Four-Year Assessment Cycle

Years One through Three

Goals and Outcomes
The most effective and workable program assessments focus on two to three goals (objectives). Each goal should have one to two outcomes (student learning outcomes – SLOs). Programs with external accreditation requirements may have additional goals and/or outcomes as required by the guidelines of their particular accrediting body.

Goals are typically carried over for up to three consecutive years during the assessment cycle, and the associated outcomes are typically carried over for consecutive years as well. Each subsequent four-year assessment cycle, however, should contain new goals and outcomes that differ from those of the previous cycle.

Measures and Criteria
Each outcome (SLO) should have at least two measures (methods), unless one measure is a nationally normed examination, in which case a single measure is sufficient for that outcome. If a nationally normed examination is used as a measure, a summary document outlining what, specifically, the examination measures must be attached. Programs are encouraged to use subscores of standardized examinations to inform objectives.

At least one measure for each outcome must be direct; however, two direct measures per outcome are best. In addition to one or two direct measures, one or more indirect measures may also be used, if desired (see glossary for definitions of direct and indirect measures). Indirect measures are generally considered supplementary.

All measures must have related documents attached. A related document is the rubric or instrument that outlines what, specifically, is being measured, and is used to provide data to determine whether or not the associated criterion is being met.

Each measure should have an associated criterion. Criteria must be appropriate for assessing the outcome and should be established before data are collected.

Observations and Reflection (Year-End Results)
For each of the three years, data for that year should be compiled, analyzed, and entered into TracDat. Conclusions and inferences based on the data should also be developed and entered each year. Compiled data should be compared with data from previous years; this comparison may reveal important trends. Each year, time should be allotted to consider any trends, conclusions, or inferences drawn from the accumulated assessment data, and minor adjustments to the following year's assessment plan may be made on the basis of this reflection, if desired.

Year Four

No data collection will occur during year four.

During the fall semester (August through December): the data analysis and inferences/conclusions generated over the previous three-year period, along with any proposed future actions for program adjustment (action and follow up), should be used to produce a cumulative, comprehensive, yet concise summary report of three to five pages. All assessment data, the associated conclusions/inferences from data analysis, and the proposed future action(s) for program adjustment and improvement must also be entered into TracDat. The summary report should outline and briefly describe the results of program assessment over the previous three years and describe the changes that will be implemented in the next assessment cycle. At a minimum, the report should include:

1. What your program accomplished
2. What your program did not accomplish
3. What future changes/adjustments will be implemented in the program based on the information obtained from the previous three years of assessment

During the spring semester (January through May): develop and enter into TracDat the new assessment plan for the next four-year cycle. The new plan should be implemented the following fall semester and continued as outlined above for the next four years.

Glossary

Action: what you are going to do to address the problem or limitation with an area or aspect of a program, determined on the basis of program-level assessment and the associated analysis of assessment data.

Action and follow up: consists of precise program adjustments based on analysis of the collected assessment data and associated conclusions/inferences drawn from data analysis and observations conducted during the assessment period.

Assessment: a continuous process instituted to understand and improve student learning. While academic programs/units may find different pathways to this goal, the process must begin with the articulation of educational goals for all programs and courses. These goals should be expressed as measurable outcomes, followed by the selection of reliable and valid measures. Findings are then collected, interpreted, and shared, and used to better understand how and what students learn and how well students are meeting expected program goals, as well as to develop strategies to improve teaching and learning and the program's overall quality and effectiveness in accomplishing its stated mission.

Assessment cycle: a continuous cycle that identifies and documents strengths, weaknesses, needs, improvements, and future plans.

Benchmark: the actual measurement of group performance against an established standard of performance; the established performance standard is often external.

Criterion: the standard of performance established as the passing score for an examination, recital, writing assignment, or other measurement. The performance is compared to an expected level of mastery in the area rather than to other students’ scores.

Cross-sectional studies: provide information about a group of students at one point in time.

Evaluate and evaluation: often considered synonymous with the terms assess and assessment, respectively. Sometimes a distinction is made between evaluation and assessment: assessment is a process predicated on knowledge of intended goals or objectives, whereas evaluation is a process concerned with outcomes without prior concern for or knowledge of goals.

Follow-up: a review, based on an action from a previous observation, to determine whether an identified problem, discrepancy, limitation, or concern has been resolved (a follow-up is a review to determine whether the proposed action is working).

Goals (known as objectives in some areas): statements about the general academic aims or ideals to which an educational program/unit aspires. Goal statements allow us to state clearly our expectations for the learning achievements of our students. Further, goals at the program/unit level should align with the mission of the university. Goal statements, as written, are not amenable to measurement; goals are too broad and general to be measured directly.

Longitudinal studies: provide information from the same group of students at several different points in time.

Measures (methods): the specific instruments or performances used to provide data about learning. They are the tools that provide information about the level of achieved results or outcomes. A baseline measure indicates where a department is currently performing; a target measure indicates where the department wishes to perform. To avoid systematic bias in findings, multiple measures are required. There are two types of measures: direct and indirect. Direct measures provide direct, tangible, visible, measurable, or quantifiable evidence of student learning, whereas indirect measures provide anecdotal or proxy evidence based on perceptions. See below for definitions:

Direct measure: the assessment is based on an analysis of student behaviors or products in which students demonstrate how well they have mastered learning outcomes. Some examples of direct measures are: nationally normed or standardized examinations (licensure or major field tests); written, oral, or performance work, so long as it is accompanied by an appropriate evaluation rubric/instrument; capstone experiences (research projects, theses, presentations, exhibitions, performances); and direct ratings of student skills and abilities by qualified supervisors.

Indirect measure: the assessment is based on an analysis of reported perceptions about student mastery of learning outcomes. Some examples of indirect measures are: surveys; student, alumni, or employer ratings and/or perceptions; graduation rates; and acceptance rates into graduate programs or placement into discipline-specific/related jobs/careers.

Outcomes (objectives; see also the definition of SLO): specific, measurable statements of what students will know, think, or do upon successful completion of an element or portion of a program, or upon completion of the program.

Process: a method, generally involving ordered and/or interdependent steps or operations.

Qualitative and Quantitative Research: two research methods, both valuable as means to assess student learning outcomes. In a practical and somewhat philosophical sense, the difference is that quantitative research uses objective measures to test hypotheses and to allow for controlling and predicting learning, whereas qualitative research relies on more subjective observations of learning.

Related document: the rubric or instrument that outlines what is being measured via the measure.

Reliability: the extent to which studies or findings can be replicated.

Sampling: obtaining information from a portion of a larger group or population. When a sample is randomly selected, the findings from the sample are more likely to be representative of the larger group.

Student Learning Outcomes (SLO): the knowledge, skills, attitudes, habits, and philosophies that students take with them from a learning experience. Learning outcomes are statements that describe significant and essential learning that students have achieved and can reliably demonstrate at the end of a course or program. They identify what the student will know and be able to do by the end of a course or program: the essential and enduring knowledge, abilities (skills), and attitudes (values, dispositions) that constitute the integrated learning needed by a graduate. For an academic program, SLOs are the knowledge, skills, or behaviors that the program's students should be able to demonstrate upon program completion.

TracDat: an assessment management tool designed to meet the assessment and planning needs necessary to overcome common assessment obstacles. TracDat also allows for an institution-wide view of assessment plans and uniform reporting across departments. Each program/unit is responsible for entering and maintaining its assessment plan in TracDat.

Validity: the ability to demonstrate that a given measure actually measures what it is purported to measure.

Four-Year Assessment Cycle Timeline (2016-2020)

Year One (2016/2017):
- Collect, compile, and enter data into TracDat for that year (year 1 – January-May). Year one data will be analyzed and discussed in year two

Year Two (2017/2018):
- Collect, compile, and enter data for that year (year 2 – August-May). Year two data will be analyzed and discussed in year three
- Analyze data for year one – by second Friday in September
- Discuss the year one data – consider trends, make conclusions and inferences (close the loop on all assessment processes), and enter those into TracDat. Make any necessary adjustments/action plans to the plan (to commence in January or the following August, if applicable) – by second Friday in October

Year Three (2018/2019):
- Collect, compile, and enter data for that year (year 3 – August-May). Year three data will be analyzed and discussed in year four
- Analyze data for year two – by second Friday in September
- Discuss the year two data – consider trends, make conclusions and inferences (close the loop on all assessment processes), and enter those into TracDat. Make any necessary adjustments/action plans to the plan (to commence in January or the following August, if applicable) – by second Friday in October

Year Four (2019/2020):
- No data collection in year four
- Analyze data for year three – by second Friday in September
- Discuss the year three data – consider trends, make conclusions and inferences (close the loop on all assessment processes), and enter those into TracDat – by second Friday in October
- Fall semester of year four: prior to the end of the fall semester (August-December), the data analysis and inferences/conclusions generated over the previous three-year period, along with any proposed future actions for program adjustment, are used to produce the program summary assessment report described above.
- Spring semester of year four: prior to the end of the spring semester (January-May), develop and enter into TracDat the new assessment plan for the next four-year cycle, which will commence the following fall semester.