CSE 221 Project – Winter 2003


System Performance Measurements

on

RedHat Linux 8.0 and Windows XP 5.1

Project Members:

Urvashi Rao Venkata

Reena Mathew

Table of Contents


1 Introduction
2 Project Overview
3 Memory
3.1 Cache Vs RAM Vs Swap Disk
3.2 Code/Memory Optimizations
3.3 Looping and Sub-Looping
4 Disk IO
4.1 Disk access (read-write-read)
4.2 Sequential Vs Random Reads
4.3 Read Vs mmap
4.4 Remote File Access
5 Processes
5.1 System Calls
5.2 Function performance with different implementation styles
6 Arithmetic Operations
6.1 Integer and Float Operations
7 Linux Vs Windows
7.1 Memory
7.2 Disk IO
7.3 Process
8 Conclusion
Appendix 1
References


Table of Figures

Figure 1: Code Segment for Memory Access
Figure 2: Graphs for Memory Access
Figure 3: Graph for Memory Access for Code plus Data
Figure 4: Code Segment for Looping and Sub-Looping
Figure 5: Graphs for Sub-Looping with 8192KB on Linux
Figure 6: Graphs for Sub-Looping with 8192KB on Windows
Figure 7: Code Segment for Disk Access using 'fread' and 'fwrite'
Figure 8: Graphs for Disk Read and Write Access on Linux
Figure 9: Graphs for Disk Read and Write Access on Windows
Figure 10: Code Segment for Disk Access using 'open' and 'close'
Figure 11: Graphs for Disk Read and Write Access on Linux
Figure 12: Graphs for Disk Read and Write Access on Windows
Figure 13: Code Segment for Disk Access using 'mmap' and 'munmap'
Figure 14: Graph for Disk Access using 'mmap' and 'munmap'
Figure 15: Code Segment for Sequential and Random Disk Access
Figure 16: Graphs for Sequential and Random Disk Access on Linux and Windows
Figure 17: Graphs for Sequential and Random Disk Access on Linux to determine Page Read Ahead Size
Figure 18: Graph for Sequential and Random Disk Access on Windows to determine Page Read Ahead Size
Figure 19: Code Segment for 'read' Vs 'mmap'
Figure 20: Graphs for Disk Access using 'read' and 'mmap'
Figure 21: Individual graphs for 'mmap' and normal 'read-write'
Figure 22: Table for read and mmap timings
Figure 23: Code Segment for Local and Remote Disk Access
Figure 24: Graphs for Local and Remote Disk Access
Figure 25: Code Segment for System Calls
Figure 26: Table of performance of system calls
Figure 27: Code Segment for different call types
Figure 28: Graphs for Inline Vs Function Vs Recursion calls on Linux
Figure 29: Graph for Inline Vs Function Vs Recursion on Windows
Figure 30: Graphs for Forks
Figure 31: Code Segment for Arithmetic Operations
Figure 32: Graphs for Arithmetic Operations
Figure 33: Graphs comparing Linux and Windows memory access bandwidths
Figure 34: Graphs for Code plus data with different levels of optimization on Linux
Figure 35: Graphs for Code plus data with different levels of optimization on Windows
Figure 36: Graphs comparing Linux and Windows with O3 optimization level
Figure 37: Graphs comparing the Sub-Looping between Linux and Windows
Figure 38: Graphs comparing disk read bandwidths for Linux and Windows
Figure 39: Graphs comparing disk write bandwidths for Linux and Windows
Figure 40: Graphs comparing Sequential and Random disk access between Linux and Windows
Figure 41: Graphs comparing the disk accesses for Page Read Ahead between Linux and Windows
Figure 42: Graphs comparing the times for different call types on Linux and Windows

1 Introduction


This project measures the performance of an Operating System (OS) on a given hardware architecture. It presents a set of metrics that can be used to evaluate the performance of selected features of an operating system.
Measurements were made on two operating systems, RedHat Linux 8.0 and Windows XP 5.1, running on the same hardware. The measurements for each operating system were evaluated and compared between the two systems, and observations and conclusions are discussed for each type of measurement.

2 Project Overview


The goal of the project has been to evaluate the performance of two different operating systems on the same hardware. The metrics chosen fall into the categories of memory access, disk access, processes and arithmetic operations.
Architecture

Hardware – Dell Inspiron 2650

Processor – Intel(R) Pentium(R) 4 CPU 1.70 GHz

Cache Size – 512 KB

RAM Size – 256 MB

Hard Disk – 30 GB


Operating Systems

(i) RedHat Linux Version 8.0

(ii) Microsoft Windows XP Version 5.1.2600 Service Pack 1.0

- Cygwin 1.3.20-1 (UNIX environment for Windows)


Compiler

gcc 3.2 – C compiler (on Linux and Cygwin)


Benchmarking and performance measurements correspond to sampling a multidimensional space spanned by non-orthogonal axes. Measurements are often not completely isolated from the effects of related factors. Wherever possible, we have attempted to design our metrics to minimize this interference, and to emphasize/isolate trends corresponding to the aspect being measured. In other places we have attempted analyses that take into account major factors affecting the quantity being measured.
We have measured performance parameters and have tried to verify our results with documented features of the OS (for example – kernel buffer sizes) to see if these OS specifications are actually 'visible' from the application level.

Rather than aiming for absolute numbers, our tests attempt to highlight how the OS manipulates the hardware when different code implementations are used. These observations can inform efficient software design: algorithms can be written to exploit the behaviour of the OS to their advantage, or equivalently, to take measures to avoid common pitfalls.


We ran all our tests with compiler optimization (O3). Wherever we observed anomalies, we re-ran the tests without this optimization to check whether the anomalies were caused by it. Our rationale is that we wanted to base these performance numbers on the best possible performance (i.e. with optimization). We are aware that compiler optimization affects our test code differently from any application that may be designed with our results in mind, and we have tried to reduce this effect by measuring relative performance. Absolute measurements were made only for computations such as memory and disk accesses and atomic arithmetic operations, which were performed with compiler optimization to obtain figures for the best possible performance.
For all tests, the same code was used on Linux and Windows. The Cygwin interface was used to translate the Unix kernel calls into nearly equivalent functionality on Windows. This does not guarantee an exact functional emulation of the Linux test code, but it is the closest approximation we could get.
Note: In the rest of the report, the terms ‘Linux’ and ‘Windows’ correspond to the specifications mentioned above.
