Review of the computer science program




Course outcome direct measure values for core courses in 062 (Cont.)

Course     PO6    PO7    PO8    PO9    PO10   PO11   PO12
ICS 102
ICS 201
ICS 202
ICS 233
ICS 251
ICS 252
ICS 253
ICS 309
ICS 324                          0.00
ICS 343    2.00
ICS 351    3.00   3.00          3.00   3.00   3.00   3.00
ICS 353
ICS 381
ICS 399
ICS 410
ICS 411    4.00   4.00   3.00   3.00   4.00   4.00   4.00
ICS 431
SWE 311                          2.00

The program outcome direct measure value results for 061 and 062 are shown in the following figure:




Figure: Direct measure values for each program learning outcome based on grade-based assessment

One can clearly see that, with the exception of Program Outcome 2, performance improved from 061 to 062. However, Program Outcomes 1 and 9 are below 2.5, whereas Program Outcomes 6, 7, 8, 10, 11, and 12 are at or above 3. According to these values, the program may need to give more attention to the "Knowledge in Major" and "Team Work" outcomes. One reason Team Work ranked low may be that measuring it objectively is difficult. Otherwise, we appear to be doing well on the rest of the program outcomes.


Direct Assessment II: Rubrics-based Assessment

Rubrics-based assessment was introduced late in 062 as an alternative to grade-based assessment. The department, as well as the University, felt that using grade-based assessment may not be the best option for the following reasons:



  1. It places considerable overhead on the faculty member, who must record more than one mark for a single piece of work that assesses more than one course learning outcome. This overhead is why some faculty gave up on it and used the "exception" method of direct assessment, as defined by our department. Such complexity hinders the continuity and sustainability of grade-based assessment in subsequent semesters; assessment should be simple enough to be carried out consistently.

  2. Many faculty complained that grade-based assessment kept them from formulating questions as they used to; for example, they would hesitate before writing questions with subparts that assess different course learning outcomes.

  3. The administrative overhead of tracking changes in course learning outcomes and their mapping to program outcomes proved too complicated. A good example is the difference between the 061 and 062 course-to-program-outcome mappings.

  4. Grade-based assessment cannot be considered an "independent" assessment of outcomes, since it is carried out by the same faculty members who teach the students. An independent evaluation provides more insight, even if it is less accurate and more qualitative.

  5. The CS program consists of two options: a Summer Training option and a Cooperative Work option. Grade-based assessment could not clearly evaluate the outcomes of each option separately.

  6. Grade-based assessment provides only a "local" indicator of how well program outcomes are achieved. By concentrating on individual core courses and mapping their outcomes to the program, one may lose the "big picture" of students' overall performance in the program.

By the end of 062, the department had developed its first set of rubrics to evaluate the program outcomes. Since the CS program consists of two options, the following courses were chosen for the evaluation:

  1. CS Program with Summer Training option:

    1. ICS 411: Senior Project

    2. ICS 399: Summer Training

  2. CS Program with Cooperative Work option:

    1. ICS 351: Cooperative Work

In addition, the final exams of six core courses in 061 were chosen to evaluate the first program outcome, "Knowledge in Major". These courses are ICS 201: Introduction to Computer Science, ICS 251: Foundations of Computer Science, ICS 313: Fundamentals of Programming Languages, ICS 381: Introduction to Artificial Intelligence, ICS 431: Operating Systems, and ICS 432: Computer Network Systems.
The same set of rubrics was used to evaluate most program outcomes for both options. For students under the Summer Training option, the final report and final oral presentations of the senior project course (ICS 411) and the summer training course were used. For students under the Cooperative Work option, the final report of the Cooperative Work course (ICS 351) was used. We could not include the final oral presentations for ICS 351, as they were conducted early in 062, before the rubrics were finalized.

This will enable us to compare and contrast the performance of students in each option across most program outcomes. Such data will be very useful in evaluating each option separately, identifying its strengths and weaknesses so that each option can be developed in a more focused manner.


Rubrics Description and Method of Evaluation

Program Outcome 1: Knowledge in Major

This outcome was evaluated based on the final exam results for the six core courses mentioned earlier. We are aware that this may not be the best way to evaluate this outcome "independently"; however, we cannot do much about it at present. There are currently no standardized national tests that our students can take whose scores could serve as an independent evaluation of this outcome. We have heard, however, that the University may be working on exit exams; until those materialize, we will continue using this method. Since all program outcomes are evaluated on a scale of 1 to 4, Program Outcome 1 is no exception. We adopted the following scale, based on the average score of the final exam out of 100:




Avg Score in Final Exam    80 - 100    70 - 80    50 - 70    0 - 50
Value                      4           3          2          1
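To illustrate how this scale is applied, the following minimal sketch (for illustration only) converts an average final exam score into the 1-to-4 value; the function name is our own, and boundary scores (50, 70, 80), whose ranges overlap in the table, are assumed to fall into the higher band.

```python
# Minimal sketch: map an average final exam score (out of 100) to the 1-4
# outcome value according to the table above. Assumption: boundary scores
# (50, 70, 80) fall into the higher band, since the listed ranges overlap.
def exam_score_to_value(avg_score: float) -> int:
    if avg_score >= 80:
        return 4
    if avg_score >= 70:
        return 3
    if avg_score >= 50:
        return 2
    return 1

# Example: an average final exam score of 63 maps to a value of 2.
print(exam_score_to_value(63.0))
```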



Program Outcomes 6, 9, and 10: Professional Responsibility, Teamwork, and Self Management, respectively

The final employer survey, carried out for students in both options (ICS 351: Cooperative Work and ICS 399: Summer Training), is used to evaluate these outcomes. The survey form is identical for both courses, which makes comparison easy. Below is a sample form, in which we have numbered each question so that it can be referred to easily in the discussion that follows:



First, the average score for each question was calculated, and that average was mapped to a number between 1 and 4 according to the following table:



Avg Score for a question    9 - 10    7 - 8    5 - 6    0 - 5
Value                       4         3        2        1

Each program outcome was then calculated as follows. For Program Outcome 6, we first compute what we call the "Relative Professional Satisfaction". The idea behind this measure is to identify students who were rated higher technically than "professionally". In particular, students whose average over Questions 1 through 9 exceeded their average over Questions 10 and 11 were considered "professionally not satisfying to their employers". Our premise is that students who were punctual and had a good attendance record should score at least as well on those questions as on the others; a lower score, according to our interpretation, is an alarming indication of a lack of "Professional Responsibility", which is Program Outcome 6. Once the averages are calculated, the Relative Professional Satisfaction is computed, on a scale from 1 to 4, as follows:

Relative Professional Satisfaction (RPS) = [(# students with Avg(Q10, Q11) < Avg(Q1..Q9)) × 1 + (# students with Avg(Q10, Q11) >= Avg(Q1..Q9)) × 4] / (Total # of students)

The value for Program Outcome 6 is then calculated as the average of (Q4 value, Q5 value, RPS).

Program Outcome #9, Teamwork, is calculated as the average of (Q7 value, Q8 value).

Program Outcome #10, Self Management, is calculated as the average of (Q1 value, Q2 value, Q6 value).
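To make these calculations concrete, here is a minimal sketch of the whole computation; the input layout (a list of per-student responses keyed by question number 1-11, each scored 0-10) and the treatment of averages that fall between the table's bands are illustrative assumptions, not part of the official procedure.

```python
# Minimal sketch of the survey-based computation of Program Outcomes 6, 9, and 10.
# Assumed input: a list of per-student responses, each a dict keyed by question
# number (1-11) with scores from 0 to 10.
from statistics import mean

def question_value(avg_score: float) -> int:
    """Map a question's average score (0-10) to a 1-4 value per the table above.
    Assumption: averages falling between the listed bands take the lower value."""
    if avg_score >= 9:
        return 4
    if avg_score >= 7:
        return 3
    if avg_score >= 5:
        return 2
    return 1

def outcome_values(responses):
    # Average score per question across all students, mapped to a 1-4 value.
    q_val = {q: question_value(mean(r[q] for r in responses)) for q in range(1, 12)}

    # Relative Professional Satisfaction (RPS): students whose Avg(Q10, Q11) is
    # below their Avg(Q1..Q9) contribute a 1; all other students contribute a 4.
    low = sum(1 for r in responses
              if mean([r[10], r[11]]) < mean(r[q] for q in range(1, 10)))
    rps = (low * 1 + (len(responses) - low) * 4) / len(responses)

    po6 = mean([q_val[4], q_val[5], rps])        # Professional responsibility
    po9 = mean([q_val[7], q_val[8]])             # Teamwork
    po10 = mean([q_val[1], q_val[2], q_val[6]])  # Self management
    return po6, po9, po10
```
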
Rest of the Program Outcomes:

A set of rubrics was developed for the rest of the program outcomes, as these can be observed more "technically" than the preceding "soft" skills. The rubrics were developed by the ICS Programs Assessment Standing Committee and are currently being reviewed and refined to better reflect the program outcomes and to make the evaluation more objective and accurate.

Once the rubrics were in place, a random selection of final reports for the senior project course (ICS 411) and the Cooperative Work course (ICS 351) was chosen from the two semesters. The sample represents approximately half of the total number of reports for each course, which we consider a reasonable sample size. For future evaluations, we plan to carry out the assessment on all reports; we resorted to half of the reports because of a lack of time and faculty, as this exercise was carried out during the summer, when the number of available faculty members was low. Each report was given for evaluation to two faculty members who were not involved in teaching or coordinating the corresponding course. Each criterion is given a score from 1 (poor) to 4 (excellent). If the two evaluations differed by at most 1, their average was taken; otherwise, the report was given to a third evaluator. If the third evaluation was closer to one of the first two evaluations, the average of the two closer evaluations was taken; otherwise, the average of all three evaluations was taken.
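As a sketch only, the reconciliation rule just described can be expressed as follows for a single rubric criterion; the function name and signature are illustrative and not part of the department's documented procedure.

```python
# Minimal sketch of the evaluator-reconciliation rule for one rubric criterion;
# scores are integers from 1 (poor) to 4 (excellent).
from typing import Optional

def reconcile(first: int, second: int, third: Optional[int] = None) -> float:
    if abs(first - second) <= 1:
        # The two evaluations differ by at most 1: take their average.
        return (first + second) / 2
    # Otherwise a third evaluation is required.
    if third is None:
        raise ValueError("evaluations differ by more than 1; a third is needed")
    if abs(third - first) < abs(third - second):
        return (third + first) / 2    # third evaluation is closer to the first
    if abs(third - second) < abs(third - first):
        return (third + second) / 2   # third evaluation is closer to the second
    return (first + second + third) / 3  # equidistant from both: average all three
```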

Below is the list of rubrics used for evaluating each corresponding program outcome. All of them are based on the final report of the corresponding course, except for oral communication skills, which were assessed during the final presentations of all senior project students in 062. The Oral Communication Skills value was computed as the average of the responses received from faculty attending those presentations, since the procedure followed for evaluating the final reports could not be applied to it.


Program Outcome 2: Modeling

Score 4:
  • Overall architecture present and well defined
  • The design of the system, broken into modules, is present and consistent
  • Details of each module are present and seem accurate

Score 3:
  • Overall architecture present and well defined
  • The design of the system, broken into modules, is present and mostly consistent
  • Details are missing for about 50% of the modules, but those present seem accurate

Score 2:
  • Overall architecture present and well defined
  • The design of the system, broken into modules, is present but incomplete or has inconsistencies
  • Details of the modules do not seem accurate or complete

Score 1:
  • The overall architecture may be present, but the design of the system is either missing or lacking major components, with little or no detail


Program Outcome 3: Problem solving

Score 4:
  • Requirements are present, well defined, and classified into functional and non-functional
  • All actors of the system have been identified, along with their responsibilities
  • The software process and the phases of the project are clearly identified
  • The design, implementation, testing, and deployment phases are well presented

Score 3:
  • Most requirements are present, well defined, and classified into functional and non-functional
  • Most actors of the system have been identified, and the responsibilities of the identified actors are explicitly mentioned
  • The software process and the phases of the project are clearly identified
  • The design, implementation, testing, and deployment phases are presented

Score 2:
  • Most requirements are present, but some are not well defined or not classified into functional and non-functional
  • Some actors of the system have been identified; detailed responsibilities of the identified actors may be missing
  • The software process and the phases of the project are identified
  • The design, implementation, testing, and deployment phases are vaguely presented

Score 1:
  • Some major requirements are missing, and some are either not classified into functional and non-functional or are classified incorrectly
  • Most actors of the system are missing, or the responsibilities of the actors are missing
  • The software process and the phases of the project are not identified
  • The solution to the design, implementation, testing, and deployment is not clearly presented, or parts are missing


