Student Learning Assessment Team
Jessica Wise
Faculty Assessment Chair
Co-Chair of the Academic Assessment Committee
Kate Evans
Co-Curricular Chair
Co-Chair of the Academic Assessment Committee
School Leads
Cynthia Fletcher
School of Math, Sciences, and Allied Health
Mindy Hodges
School of Technical and Professional Studies
Deena Martin
School of Fine Arts, Humanities and Social Sciences
A
Alignment: The process of analyzing how explicit criteria line up with or build upon one another within a particular learning path; the thoughtful mapping of outcomes from one curricular level to the next. When developing student learning outcomes, course outcomes must align with program learning outcomes, which in turn align with institutional learning outcomes that directly align with the college mission and vision.
Artifact: Used in both student learning assessment and program evaluation to denote a student-produced product or performance used as evidence of learning. For example, an artifact in student services might be a realistic and achievable student educational plan, whereas an artifact in science might be a lab notebook showcasing the design and execution of a lab experiment to answer a scientific question. In communication, an artifact could be an audio/video recording of a speech or a written copy of the speech itself.
Assessment: Narrowly defined, it is the systematic collection, review, and use of information about educational programs undertaken for the purpose of improving student learning and development. Broadly defined, it includes all activities that teachers and students undertake to get information that can be used diagnostically to alter teaching and learning. Under this definition, assessment encompasses teacher observation, classroom discussion, and analysis of student work, including homework and tests (Black & Wiliam, 1998). The emphasis is on activity, so any process that produces data that can be analyzed to improve student achievement and learning qualifies as assessment. This broad definition allows for a variety of approaches and methods, including processes that gather data throughout a course as well as those that evaluate student learning at its conclusion.
Assessment Cycle: A systematic cycle of collecting and reviewing information about student learning. The complete cycle involves the following steps: clearly stating expected goals for student learning, offering learning experiences, measuring the extent to which students have achieved the expected goals, and using the evidence collected to improve teaching and learning (Suskie, 2009; Walvoord, 2010). See also Student Learning Assessment.
Assessment for Accountability: The application of accountability data for educational improvement. The primary drivers of assessment for accountability are external, such as legislators, accreditation agencies, or the public. It usually entails the analysis of indirect or secondary data.
Assessment Plan: A document that demonstrates alignment of a program’s student learning outcomes to the institutional learning outcomes (ILO) while explaining each of the following: program learning outcomes (PLO), course learning outcomes (CLO), and student learning outcomes (SLO); the direct and/or indirect assessment methods to be used to demonstrate the attainment of each outcome/objective; a brief description of the assessment methods; an indication of which learning outcomes are addressed by each method; the intervals at which student learning data (evidence) are collected and reviewed; and the individual(s) responsible for the collection, review, and reporting of that evidence.
Assessment Results: The data or evidence produced by the assessment process. These data are not always quantifiable or measurable in numerical terms; they may also include qualitative evidence such as portfolios, narratives, performances, or other data that depend more on observation than computation. Bear in mind, however, that observations are often scored using a rubric designed specifically for assessing observable actions. Any information produced by assessment processes that can be used for analysis and improvement of student achievement and learning falls under the definition of assessment results.
Authentic Assessment: The practice of simulating “a real world experience by evaluating the student’s ability to apply critical thinking and knowledge or to perform tasks that may approximate those found in the work place or other venues outside of the classroom setting” (Wiggins, 1990). For an assessment to be authentic, it must be meaningful and must demonstrate students’ ability to apply their knowledge rather than simply reproduce decontextualized information.
B
Benchmark: A specific standard against which an outcome or product is measured. A benchmark determines the acceptable level of achievement for stated outcome(s) or learning objective(s).
Bloom’s Taxonomy: One of several classification methodologies used to describe increasing complexity or sophistication in the affective, cognitive, and psychomotor domains. The cognitive domain consists of the following six levels: Knowledge, Comprehension, Application, Analysis, Synthesis, and Evaluation. See the Bloom’s Primer PowerPoint on this website for a concise overview of the cognitive domain as categorized in Bloom’s Taxonomy.
C
Classroom Assessment Techniques (CATs): Brief, flexible classroom techniques that provide rapid, informative feedback to improve classroom dynamics by monitoring learning from the student’s perspective throughout the semester. These techniques are often not designed to capture and document the types of complex thinking abilities generally included in Student Learning Assessment, but they can still yield useful information that both provides immediate feedback for students and informs the classroom practices or planning of the individual instructor. See also Formative Assessment.
Classroom-Based Assessment: The formative and summative evaluation of student learning, typically administered within a classroom.
Competencies: See Student Learning Outcomes.
Content Validity: Indicates that an assessment is consistent with all stated outcomes and measures the content it is intended to measure.
Continuous Improvement: A cyclical process used to identify and collect evidence of learning through which instructional changes are implemented with the intent to improve student learning.
Core Competencies: The intended results of student learning experiences across courses, programs, and degrees. These typically describe important, measurable life abilities while providing a unifying, overarching purpose for a broad spectrum of individual learning experiences.
Course Learning Outcomes (CLO): The result of a student’s learning upon completion of a course. These outcomes are aligned with any stated course objectives.
Course-Level Assessment: Method(s) of assessing student learning within the classroom environment, using course goals, outcomes, and content to gauge the extent of learning that is taking place.
Culture of Evidence: In post-secondary education, this term often refers to an institutional culture that supports and integrates research, data analysis, evaluation, and planned change as a result of assessment to inform decision-making (Pacheco, 1999). Such a culture is characterized by the generation, analysis, and valuing of quantitative and qualitative data in the decision-making process.
Curriculum Map: A matrix representation of a program's learning outcomes that depicts where in the sequence of courses or units each outcome is taught.
D
Direct Measures: Assessment or testing processes used to directly evaluate student work. These measures provide tangible and clear evidence of student learning. Examples of these types of measures include: exam questions, portfolios, performances, projects, reflective essays, computer programs, and observations.
E
Education Standards: A set of expectations typically established by a governing body for learning and/or instructional excellence.
Embedded Assessment: Classroom assignments or tests that occur within the regular class or curricular activity and are directly linked to stated student learning outcomes (perhaps through primary trait analysis). Specific questions or demonstration activities can be employed routinely on exams in classes across courses, departments, programs, or the institution. These assessments can provide formative information for pedagogical improvement and student learning needs.
Evaluation: Broader in scope than assessment, evaluation typically refers to a product-oriented, comparative, or prescriptive process that seeks to make an informed judgment about the extent to which a program is achieving its intended outcomes and/or the quality or worth of a program.
F
Formative Assessment: A brief check of student learning intended to provide ongoing feedback that instructors can use to improve their teaching and students can use to improve their learning. Formative assessments are often used to help students identify their strengths and weaknesses, gaps in their knowledge, and target areas that need additional practice. For instructors, formative assessment helps identify where students are struggling throughout the instructional process. These assessments are generally low stakes, meaning they have little or no point value. Examples of formative assessments include asking students to: draw a concept map in class to represent their understanding of a topic; submit one or two sentences identifying the main point of a lecture; or turn in a research proposal for early feedback.
G
General Education Assessment: Student assessment that measures a post-secondary institution’s general education competencies. Typically, these core competencies are measured across disciplines.
Goals: Broad, overarching statements describing the end result of learning or level of achievement within a learning context.
Grades: Evaluation of a student’s performance in a course over time. Grades represent an overall assessment of student class work, homework, and/or special projects. They are intended to reflect the overall level at which the student acquired the knowledge, skills, abilities, and attitudes identified in the course’s stated learning objectives.
Grading: A process of evaluating student performance. This could be one basis for student learning assessment if it follows a rubric containing explicitly defined levels of student achievement.
I
Indirect Measures: Forms of assessment that provide evidence of learning indirectly, suggesting that students are probably attaining a learning goal. These measures use perceptions, reflections, or secondary evidence to make inferences about student learning; they require inference between the student's action and a direct evaluation of that action. Because these forms of assessment do not directly measure learning, they are not typically relied upon for assessing student learning or for assigning a grade. Indirect measures include course grades, student self-rating surveys, student satisfaction surveys, placement rates, retention and graduation rates, and honors and awards earned by students.
Institutional Learning Outcomes (ILO): The knowledge, skills, abilities, and attitudes a student is expected to demonstrate following completion of a program of courses. These are sometimes referred to as core competencies. At UA-PTC, these are the seven delineated outcomes to which all program learning outcomes, course learning outcomes, student learning outcomes, and course objectives are aligned.
L
Learning Objectives: Typically, these are a list of tasks to be accomplished in order to achieve a stated goal. These are usually found within a course syllabus.
Learning Outcomes: Measurable statements regarding changes, improvement, or growth in the knowledge, skills, attitudes, and habits of mind that students acquire as a result of the learning experience. See also Learning Objectives.
M
Metacognition: The act of purposefully thinking about one’s own thinking and regulating one’s own learning. It involves critical analysis of how decisions are made or how processes are delineated.
O
Objectives: The sequential, incremental steps leading toward a goal. Objectives create a framework for the overarching student learning outcomes.
Outcomes: Used in both student assessment and program evaluation, these are the results of instruction, a process, or a series of activities (either program or learning activities).
P
Pedagogy: The art and science of instruction: how something is taught and how students learn it. Pedagogy includes how the teaching occurs, the approach to teaching and learning, how content is delivered, and what the students learn as a result of the process.
Primary Trait Analysis: The process used to identify major characteristics that are expected in student work. After the primary traits are identified, specific criteria with performance standards are defined for each trait. This process is often used in the development of rubrics. PTA is a way to evaluate and provide reliable feedback on important components of student work, thereby providing more information than a single, holistic grade (Walvoord & Anderson, 1998).
Program: A cohesive set of aligned courses that sequentially build a set/series of knowledge, skills, abilities, and attitudes, resulting in a certificate or degree. A program may also refer to a set of co-curricular activities offered by a post-secondary institution’s student services and administrative units.
Program Learning Outcomes (PLO): The result of a student’s learning upon completion of a degree program, certificate program, or other program of study.
R
Rubric: A scoring guide usually in the form of a matrix that explicitly states the criteria and standards for student work. The traits of student work are separately and specifically named, and each trait is evaluated from high to low.
S
Standards: Clear definitions of expectations, or targets for student performance against which success in achieving an outcome is measured. See also Education Standards.
Student Learning Assessment: Synonymous with the term measurement, it is commonly used in the higher education context to refer to a systematic cycle of collecting and reviewing information about student learning. This assessment evaluates the curriculum as designed, taught, and learned. It involves the collection of data aimed at measuring successful learning in the individual course and improving instruction, with the ultimate goal of improving learning and pedagogical practice. The complete cycle involves clearly stating expected goals for student learning, offering learning experiences, measuring the extent to which students have achieved the expected goals, and using the evidence collected to improve teaching and learning (Suskie, 2009; Walvoord, 2010).
Student Learning Outcomes (SLOs): The specific observable and measurable results expected after a learning experience. These outcomes may involve concepts, knowledge, skills, processes, and/or attitudes providing evidence that learning has occurred as a result of a specified course or program activity. An SLO refers to an overarching outcome for a course, program, degree or certificate, or student services area (such as the library). SLOs describe a student’s ability to apply a collection of discrete skills and knowledge through critical thinking processes to produce a learning artifact.
Summative Assessment: An assessment designed to determine a student’s academic development after a set unit of material, at a given benchmark, or after learning activities leading to a Student Learning Outcome (Dunn & Mulvenon, 2009). The intent of summative assessment is not to provide specific direction for improvement to students but rather to arrive at a final determination of a student’s performance. Examples of summative assessments include, but are not limited to, final exams, course portfolios, performances, and capstone projects.
T
Triangulation: The collection of data from multiple sources or measures in order to show consistency of results.
V
Validity: In assessment, an indication that a test or method accurately measures what it is designed to measure, with limited effect from extraneous data or variables. See also Content Validity.
Variable: A discrete factor that impacts an outcome.
Glossary Reference List