Background: For convenience, we have classified our program outcomes, which are essentially identical to the ABET EAC Criterion 3 outcomes, into three groups. The first group consists of outcomes dealing with technical skills; the second group consists of outcomes dealing with "soft skills"; the third group consists of the two outcomes dealing with societal issues. This page concerns our approach to evaluating the outcomes in the first group.
Outcomes in the group: The outcomes in the first group are:
a. an ability to apply knowledge of mathematics, science, and engineering;
b. an ability to design and conduct experiments, as well as to analyze and interpret data;
c. an ability to design a system, component, or process to meet desired needs within realistic constraints such as economic, environmental, social, political, ethical, health and safety, manufacturability, and sustainability;
e. an ability to identify, formulate, and solve engineering problems;
f. an understanding of professional and ethical responsibility;
k. an ability to use the techniques, skills, and modern engineering tools necessary for engineering practice.
Evaluation of outcomes: Students acquire the knowledge and skills that help them achieve these outcomes throughout the curriculum. We created POCAT (Program OutComes Achievement Test) to help assess the degree to which specific program outcomes are achieved by Computer Science students as they near completion of the program. The questions on POCAT are based on topics from various upper-level courses (such as CSE 2331, 2431, 2501, 3241, 3341, 3421, and 390X). However, they are not the typical questions one might find in, say, the final exams of these courses. Instead, they are more conceptual and are intended to gauge how well students understand key concepts from across the curriculum and, more specifically, the degree to which they have achieved the outcomes listed above.
All BSCSE, BSCIS, and BACIS majors will be required to take the POCAT. Performance on the test will not affect a student's grade in any course, nor will any records be retained of how individual students performed on the test.

Initially, we had considered doing some simple statistical analysis to see, for example, whether factors such as which particular section of a course a student takes, or the order in which a student takes two specific courses, have any effect on the student's performance on questions dealing with particular areas. Hence, in the AU05 pilot test, we asked students taking the test to write their names on the test papers; however, the students who took the pilot test overwhelmingly preferred that we not ask for names. Students are therefore no longer asked to put their names on the test.

Instead, each test paper has a preprinted letter code on it; a student taking the test can, if he/she chooses, make a note of the code that appears on his/her particular test paper, and no one else knows which codes correspond to which students. Thus, when summary results are posted by code, each individual student will be able to determine how well he/she performed on the test, but no one else will have this information.

To reiterate, the goal is to assess the program, in particular its effectiveness in ensuring that students achieve the outcomes listed above by the time of their graduation. The test is offered near the end of the term, with multiple sittings available to fit students' schedules.
Here are a sample test and a solution guide for it: