Academic Assessment Policy: All major degree programs engage in the assessment process in an ongoing manner as stated in BOR policy 2.9 Institutional Effectiveness: Planning and Assessment (https://www.usg.edu/policymanual/section2/C357) and as consistent with SACSCOC Standard 8.2.a: Student Outcomes: Educational Programs (https://sacscoc.org/accrediting-standards/). The process includes 1) identifying measurable student learning outcomes appropriate to the degree level, 2) determining where in the curriculum the outcomes are (or should be) fostered, 3) using appropriate and effective tools to measure progress toward outcome achievement and the degree to which these outcomes are achieved, 4) collectively reviewing student learning data generated from those measures to identify strengths and weaknesses, and 5) planning and implementing strategies for improving student learning. The success of this process depends upon the development and implementation of clear procedural guidelines, the uniform presentation of assessment documents, and an environment that supports transparency and accuracy in reporting.
The University’s assessment and reporting processes are independent of professional accreditation requirements and are undertaken regardless of whether external accrediting bodies exist. Whenever possible, assessment of learning undertaken for external accreditors and for the University should be complementary.
Academic Assessment Steering Committee: The Academic Assessment Steering Committee (AASC) is composed of faculty representatives from each college. The AASC reviews assessment reports and provides feedback to programs. Programs are expected to respond to AASC feedback during the next cycle.
Action plan: A well-defined plan for addressing deficiencies identified through program review data analysis, or for ensuring continued growth and improvement. An action plan typically includes the following elements: (1) a broad, aspirational goal; (2) a specific, measurable objective; (3) implementation strategies for the objective (who will do what and when); (4) an indication of how the objective will be measured; and (5) a target for the objective.
Alignment: The connection among course and program student learning outcomes, the activities that courses and programs use to foster student learning, and the methods used to assess that learning.
Analytic Rubric: A scoring key, typically a grid, that identifies the criteria for completing a task or assignment. An analytic rubric establishes differing levels of meeting each criterion or objective and a corresponding score for each level.
Artifact: A sample of student work chosen for evaluation that demonstrates specific learning outcomes.
Assessment: The systematic evaluation of student learning based on direct and indirect evidence for the purpose of identifying strengths and weaknesses and creating initiatives for improvement over time.
Assessment Cycle: A six-step process involving instructors, students, and the assessment office.
Bloom’s Taxonomy: A classification of the different objectives and skills that educators set for their students (learning objectives). The taxonomy has six levels of learning: remember, understand, apply, analyze, evaluate, and create. These six levels can be used to structure the learning objectives, lessons, and assessments of a course.
Course Student Learning Outcome: A statement of a measurable skill that students will be able to demonstrate mastery of at the end of a course.
Core Curriculum Student Learning Outcomes:
Area A1—Communication Skills (6 Hours Required)
Students will use research and analysis to produce written communication adapted appropriately for specific audiences, purposes, and rhetorical situations.
Area A2—Quantitative Skills (3 Hours Required)
Students will apply mathematical knowledge using analytical, graphical, written, or numerical approaches to interpret information or to solve problems.
Area B—Institutional Options: Global Engagement (7 Hours Required)
Students will recognize and articulate global perspectives across diverse societies in historical and cultural contexts.
Area C—Humanities, Fine Arts, and Ethics (6 Hours Required)
Students will identify and critically examine human values expressed in ideas and cultural products.
Area D—Natural Sciences, Math, and Technology (11 Hours Required and at least 8 of these hours must be in a lab science course)
Students will use scientific reasoning and methods, mathematical principles, or appropriate technologies to investigate natural phenomena.
Area E—Social Sciences (9 Hours Required)
Students will articulate and analyze how political, historical, social, or economic forces have shaped and continue to shape human behaviors and experiences.
Curriculum Maps: A visual method for aligning instruction with student learning outcomes, revealing gaps in the curriculum, and guiding the design of instruction and assessment cycles.
Data Collection: The process of gathering and measuring information on variables of interest, in an established, systematic fashion that enables one to answer research questions, test hypotheses, and evaluate outcomes.
Direct Measurement: A measure that requires students to demonstrate their knowledge and skills in response to the instrument. Achievement tests, student academic work, observations, and case studies are all examples of direct measurement.
Formative Assessment: Assessment that is carried out throughout the course, project, or time-frame to provide feedback regarding whether the objective is being met.
General Education and Core Curriculum Committee (GECC): Voting members of the General Education & Core Curriculum Committee are senators or senate alternates representing each college and the library. The committee is responsible for planning, facilitating, reporting, and recommending improvements in the assessment of general education and core curriculum outcomes.
Indirect Measurement: Evaluation based on students’ reflections on their learning rather than on demonstration of a specific skill; usually collected by means of surveys, reflection papers, interviews, observations, or focus groups.
Inter-rater Reliability: The degree of agreement or consensus among raters. It can be calculated in different ways (joint probability of agreement, Cohen’s kappa, Scott’s pi, Krippendorff’s alpha, etc.), as in the sketch below.
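The following is a minimal sketch of one such statistic, Cohen’s kappa, for two raters; the rubric levels and rater scores are invented for illustration.

```python
# Cohen's kappa for two raters, assuming each rater assigns one categorical
# rubric level per artifact. Example data below is invented.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Agreement between two raters, corrected for chance agreement."""
    n = len(rater_a)
    # Observed agreement: proportion of artifacts rated identically.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    # Expected agreement: chance that both raters pick the same level at random.
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

a = ["high", "high", "mid", "low", "mid", "high", "low", "mid", "mid", "high"]
b = ["high", "mid", "mid", "low", "mid", "high", "low", "low", "mid", "high"]
print(round(cohens_kappa(a, b), 2))  # 0.7: agreement well above chance
```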
Item Analysis: The act of analyzing student responses to individual exam questions with the intention of evaluating exam quality.
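As an illustration only, the sketch below computes two classical item statistics, difficulty (the proportion of students answering correctly) and an upper-lower discrimination index; the 0/1 response format and function name are assumptions, not a prescribed procedure.

```python
# Classical item analysis, assuming each student's responses are scored
# 1 (correct) or 0 (incorrect) for each exam question.
def item_analysis(responses, group_frac=0.27):
    """responses: list of per-student lists of 0/1 item scores."""
    totals = [sum(student) for student in responses]
    ranked = sorted(range(len(responses)), key=totals.__getitem__, reverse=True)
    k = max(1, round(group_frac * len(ranked)))  # size of upper/lower groups
    stats = []
    for i in range(len(responses[0])):
        # Difficulty: proportion of all students answering item i correctly.
        difficulty = sum(s[i] for s in responses) / len(responses)
        # Discrimination: correct rate among top scorers minus bottom scorers;
        # low or negative values flag items worth reviewing.
        upper = sum(responses[s][i] for s in ranked[:k]) / k
        lower = sum(responses[s][i] for s in ranked[-k:]) / k
        stats.append((i + 1, difficulty, upper - lower))
    return stats
```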
Measurement Tool: Instruments used by researchers and practitioners to aid in assessment or evaluation. These tools can include direct, indirect, and summative assessments; qualitative and quantitative measurements; analytic rubrics; direct and indirect tests; surveys; interviews; etc.
Mission Statement: A formal summary of the aims and values of a company, organization, or individual.
Motivation Factors: The nature of an assignment affects student motivation. For example, if an assignment is extra credit in one course and the same assignment is a final project in another section of that same course, students’ motivation can be expected to differ between the two conditions. Student motivation should be an explicitly noted factor when assessing student work over time and across sections.
Objective Test: A test consisting of factual questions requiring extremely short answers that can be quickly and unambiguously scored by anyone with an answer key, thus minimizing subjective judgments by both the person taking the test and the person scoring it.
Peer-review: An evaluation of a student’s work by fellow students or peers for either formative or summative purposes.
Portfolio: An organized collection of student work that shows evidence of a student’s cumulative learning.
Program Student Learning Outcome: A statement of a measurable skill that students will be able to demonstrate mastery of at the end of an academic program.
Qualitative Measurement: Evaluation based on descriptive data rather than numerical data, with results that are reported in narrative form.
Quantitative Measurement: Evaluation based on numerical data such as test scores, grade distributions, grade point averages, and scores from standardized tests or surveys.
Reliability: Consistency of a measure over time (test-retest reliability), across items (internal consistency), and across different raters (inter-rater reliability).
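As a hedged illustration, the sketch below estimates two of these: test-retest reliability via a Pearson correlation and internal consistency via Cronbach’s alpha; all scores are invented.

```python
# Two common reliability estimates; example scores are invented.
from statistics import correlation, pvariance  # correlation needs Python 3.10+

def cronbach_alpha(responses):
    """Internal consistency; responses: per-student lists of item scores."""
    k = len(responses[0])
    item_vars = sum(pvariance([s[i] for s in responses]) for i in range(k))
    total_var = pvariance([sum(s) for s in responses])
    return k / (k - 1) * (1 - item_vars / total_var)

# Test-retest: correlate the same students' scores on two occasions.
time1 = [78, 85, 62, 90, 71]
time2 = [80, 83, 65, 92, 70]
print(round(correlation(time1, time2), 2))  # values near 1.0 indicate stability

items = [[4, 5, 4], [2, 3, 2], [5, 5, 4], [3, 3, 3], [4, 4, 5]]
print(round(cronbach_alpha(items), 2))  # consistency across the three items
```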
Sampling: Taking a portion of a whole to use for analysis. Probability sampling requires a randomly selected sample, whereas non-probability sampling does not involve random selection.
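A minimal sketch of the contrast, using an invented roster of student artifacts:

```python
# Probability vs. non-probability sampling; the roster names are invented.
import random

roster = [f"artifact_{i:03d}" for i in range(1, 101)]

# Probability sampling: every artifact has a known, equal chance of selection,
# so findings can be generalized to the full roster.
probability_sample = random.sample(roster, 20)

# Non-probability (convenience) sampling: take whatever is easiest to reach,
# e.g., the first 20 submissions; selection chances are unknown and unequal.
convenience_sample = roster[:20]
```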
Student Learning Outcome (SLO): A statement that clearly outlines the knowledge, skills, attributes, competencies, and habits of mind that a student is expected to acquire during the specified course or program.
Summative Assessment: Assessment that is carried out at the end of a course, project, or time-frame to evaluate whether the objective was achieved (i.e., the overall performance).
Teaching Strategy: A specific method used by instructors to aid students in mastery of course and program learning outcomes.
Test Blueprint: A document that maps the content areas of an assessment to the course-objective skills, knowledge, and abilities the assessment is intended to test.
Transparency Assignments: An approach to assignment design aimed at making the learning process more explicit to students (how and why they are learning specific content).
Validity: The ability of a measure to measure what it is intended to measure and not another construct.