Beautification




Linear Trends

The significant linear trends found in the total changes, quality changes and expected changes made in designs across levels of formality suggest that as the formality of a design increases (i.e. as a design appears more formal, pretty and tidy): 1) the number of changes made in an attempt to improve the design (i.e. total changes) decreases; 2) the number of quality changes decreases; and 3) the number of expected changes (i.e. 'planned errors' corrected) decreases. Conversely, as the formality of a design decreases (as a design appears rougher, less tidied-up and sketchy), total changes, quality changes and expected changes all increase. Moreover, the significant linear trend in expected changes made across levels of formality demonstrated a more robust effect of formality on design performance: even though each design presented contained the same number of 'planned errors' to correct, the number of corrections made to those errors (i.e. expected changes) still differed significantly in a linear manner as the design looked more or less formal. Such findings support the underlying concepts in design education (e.g. in design classes) and the design process prescribed in design handbooks (e.g. Fowler & Stanwick, 2004; Brinck, Gergle, & Woods, 2002): low formality designs (e.g. rough, hand-drawn, unfinished-looking designs) should be used in the early stages of design to facilitate exploration and to catch early errors, whereas higher-fidelity, higher-formality prototypes (computer-rendered designs) should be created at later stages for refinement.
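The linear-trend analysis described above can be sketched as a within-subject polynomial contrast. The change counts below are hypothetical, not the study's data, and the contrast weights assume five equally spaced formality levels; this is an illustration of the technique, not the study's exact analysis.

```python
# Sketch: testing for a linear trend in changes made across five
# formality levels using a linear (polynomial) contrast.
# The data below are hypothetical, not the study's actual counts.
import numpy as np
from scipy import stats

# rows = subjects, columns = formality levels (low -> high)
changes = np.array([
    [30, 24, 20, 17, 12],
    [28, 25, 19, 15, 11],
    [33, 27, 22, 16, 13],
    [29, 23, 18, 14, 10],
    [31, 26, 21, 18, 12],
])

weights = np.array([-2, -1, 0, 1, 2])   # linear contrast coefficients
scores = changes @ weights              # one contrast score per subject

# A negative mean contrast score means fewer changes at higher formality;
# a one-sample t-test asks whether the trend differs from zero.
t, p = stats.ttest_1samp(scores, 0.0)
print(f"mean contrast = {scores.mean():.1f}, t = {t:.2f}, p = {p:.4f}")
```

With real data, the same contrast scores would feed the trend component of the repeated measures ANOVA reported in the results chapter.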
The effects of formality found in this study suggest that although it may be helpful for designers to use beautification functions in sketching-based design tools to get a quick glimpse of the (near) finished product in just a few clicks, designers should nevertheless not beautify a design until the very last stage of the process, as the results showed that subjects' performance (in terms of improving the design) worsened as the level of formality of the presented design increased. In addition, the linear trend in design performance, in terms of the number of changes made, further extends previous findings (Black, 1990; Plimmer and Apperley, 2004) that reviewers/designers interact differently with low formality designs (rough, hand-drawn) and high formality designs (tidied-up, computer-rendered), suggesting that the relationship (i.e. the effect of formality on design performance) can also be found at other levels of formality.


High Formality versus Low Formality Designs on the Tablet PC

Total, quality and expected changes made were significantly lower when subjects were presented with the high formality design on the tablet PC, compared to the other four designs with lower levels of formality. Conversely, total, quality and expected changes made were significantly higher when subjects were presented with the low formality design on the tablet PC, compared to the other designs with higher levels of formality on the tablet PC.

The same trend was found in previous studies (e.g. Black, 1990; Plimmer and Apperley, 2004): subjects interacted differently with hand-drawn designs versus computer-rendered designs, which suggests that different problem-solving mechanisms and strategies are used in different design media, depending on whether a design appears rough and sketchy (e.g. on pen and paper, or in a sketch-based interface) or computer-rendered and formal (e.g. created on a computer using various software). Moreover, Black (1990) found that subjects (students) were more satisfied with their design on the computer (high formality design) than with the design on paper (low formality design). Interestingly, however, after a review with their tutor, subjects indicated in the questionnaire that they were more satisfied with the design on paper than on the computer (screen). This suggests that after the class review, students noticed that their design on paper (low formality design) was in fact better than their design on the computer. Hence, in the context of this study, Black's results suggest that design performance (in terms of decision making and error detection) may be better when working with low formality designs than with high formality designs. However, it must be noted that Black's results were only a subjective and indirect indication of design performance. Also, in her study the participants created the designs from scratch, whereas in the present study participants were given already-designed forms to improve on (i.e. reviewing/editing the designs); different cognitive processes may have played a role depending on the nature of the design task (designing from scratch versus reviewing).

In another study, Plimmer and Apperley (2004) also found that subjects interacted with hand-drawn, sketchy designs differently from computer-rendered, formal-looking designs. Their results showed that subjects made fewer changes to the high formality design (a computer-rendered, formal and tidy design created in VB.Net) than to the low formality design (a hand-drawn, sketchy design created in FreeForm, an informal sketch-based tool), in which most changes were quality changes and the two deliberate errors in the design were mostly detected and corrected. However, one must bear in mind that in Plimmer and Apperley's study, no distinctions were made between the different types of changes made to a design (any changes, i.e. total changes; quality changes that improved the design; or expected changes, i.e. corrections of planned errors) for controlled comparisons between designs with different levels of formality. Furthermore, Plimmer and Apperley used a different application for each design task, and both tasks involved a desktop computer with standard input devices (mouse and keyboard), as opposed to the present study, which used a Tablet PC with pen input to present four designs ranging from low to high formality, plus one low formality design presented on paper for comparison.

Previous studies (i.e. Black, 1990; Plimmer & Apperley, 2004) only looked at two levels of formality, low and high, whereas the present study examined and compared several levels of formality systematically, from low to high, on the same tool (plus one low formality design on paper for comparison). Therefore, comparisons of results from the current study with previous studies must be interpreted with extreme caution.
Low Formality versus Low Formality Designs: Paper versus Tablet PC

Furthermore, total, quality and expected changes made were (much) higher in the low formality design on paper than at all levels of formality on the tablet PC. Hence, there was a significant difference in the number of changes made between the two low formality designs, one presented on paper and one presented on the Tablet PC: the number of changes made on the design presented on paper was much higher than on the design presented on the tablet PC, even though both designs had the same formality according to the same beautification criteria, namely a non-beautified, hand-drawn design (refer to Tables 2, 3 and 4). This raises curiosity about how much difference there is between seeing and perceiving a design on paper compared with seeing and perceiving the same design on the tablet PC, and about whether the 'gap' between pen and paper and the computer has really been 'bridged' in an effective way. Alternatively, the difference in performance may simply be due to the difference in medium, as using the tablet PC introduces formality in itself, and perhaps even a different dimension of formality. In addition, research has shown that information processing on screen differs from information processing on paper, for example in handling and manipulation, display size, angle of view, and cognitive requirements such as the short-term memory needed to remember information outside of the screen, compared to the wide perceptual field supported by paper (for a fuller review of the differences, see Dillon, 2003). All of these factors may have contributed to the differences in design performance between the two designs with the same level of formality but on different media.

Moreover, there is an implication for sketch-based design tools: sketching on paper is still not modeled and supported on current computer interfaces in a faithful manner. Hence, it may be useful for future research on sketch-based design tools to focus on improving such aspects, to enable users to experience better and more natural human-computer interaction.

There is no previous research comparing different levels of formality of designs in one study; therefore, no conclusive statements could be made. However, there is a promising future for research concerning beautification and formality in conjunction, towards a better understanding of beautification approaches (e.g. to what level of formality a hand-drawn diagram should be beautified) and how they will affect designers in an unobtrusive way.


What happens in the middle, when a design is more or less formal?

Total changes

Along with the significant linear trend, subjects' performance in terms of total changes made differed significantly between the low formality designs (on paper and on the tablet PC) and the high formality design; however, performance did not differ significantly between the medium-high and medium-low formality designs, or between the medium-low and low formality designs on the Tablet PC. Nevertheless, subjects' performance was visibly different on the graphs (see Figure 11).

The non-significant differences between the three levels of formality could be attributed to the (weaker) significant cubic trend found, p < .01, partial η2 = .23, in addition to the linear trend. Moreover, the low formality design on paper and the low formality design on the tablet PC may not belong on the same continuum of levels of formality (they may lie along a different dimension); even if they do, it is likely that the low formality design on paper lies further towards the low formality end of the continuum. In comparison, the low formality design on the tablet PC was systematically created with respect to the other levels of formality, so that each level of formality lies next to the others at approximately equal intervals within the same continuum. This was one of the main limitations of the present study: although including the low formality design presented on paper provided a useful comparison, it may have affected the statistical analysis, which took into account the changes made at every level of formality. The number of changes made in the low formality design on paper was much higher than in the designs with other levels of formality, which may be why no significant differences were found when the number of changes made at each level of formality was compared. Optimistically, however, the high number of changes made in the low formality design on paper may have done little to distort the overall results. With this in mind, the repeated measures ANOVA of the performance data was conducted again, this time including only the four levels on the tablet PC (from low to high formality). As predicted, however, the same results were found.
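The re-analysis described above, dropping the paper condition and keeping only the four tablet-PC levels, amounts to a one-way repeated measures ANOVA. A minimal by-hand sketch follows; the data are hypothetical, and the hand computation is shown only to make the sum-of-squares partition explicit.

```python
# Sketch: one-way repeated-measures ANOVA computed by hand, mirroring
# the re-analysis restricted to the four tablet-PC formality levels.
# Data are hypothetical, not the study's actual counts.
import numpy as np
from scipy import stats

# rows = subjects, columns = the four tablet-PC levels (low -> high)
data = np.array([
    [24., 20., 17., 12.],
    [25., 19., 15., 11.],
    [27., 22., 16., 13.],
    [23., 18., 14., 10.],
    [26., 21., 18., 12.],
])
n, k = data.shape
grand = data.mean()

# Partition total variability into conditions, subjects and error.
ss_cond = n * ((data.mean(axis=0) - grand) ** 2).sum()
ss_subj = k * ((data.mean(axis=1) - grand) ** 2).sum()
ss_total = ((data - grand) ** 2).sum()
ss_error = ss_total - ss_cond - ss_subj

df_cond, df_error = k - 1, (k - 1) * (n - 1)
F = (ss_cond / df_cond) / (ss_error / df_error)
p = stats.f.sf(F, df_cond, df_error)
print(f"F({df_cond},{df_error}) = {F:.2f}, p = {p:.4f}")
```

In practice the same analysis can be run with a statistics package's repeated measures ANOVA routine; the by-hand version simply makes clear that subject variability is removed from the error term.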

Another explanation for the non-significant differences between the middle levels of formality is that it was impossible to predict every change that would be made; hence 'total changes' (i.e. any changes made) were measured, allowing interesting comparisons with quality changes and expected changes made. Furthermore, as participants were free to make any changes, total (as well as quality) changes made across levels of formality could not be measured in a systematic and controlled way, unlike expected changes.

The non-significant differences found between the medium-high, medium-low and low formality designs may also indicate a small effect, or a small significant difference, that was not detected statistically. As this study is the first known in the literature to explore different levels of formality of designs and their effects on design performance, more research is needed to improve such methodological issues. Most importantly, future replications and/or variations of the current study are needed to be more conclusive and to further interpret the findings and implications of the effects of different levels of formality, particularly at the less extreme levels of formality, on which there is a lack of research: whether there is a true difference in performance when a design appears more or less formal, and whether such findings have major implications for beautification techniques.
Quality changes

Unlike the results for total changes, subjects' performance in terms of quality changes made was significantly different at each level of formality, except on one occasion where no significant difference was found between quality changes made in the medium-low formality design and those made in the low formality design (on the tablet PC). One explanation is the familiarity of the design content in the medium-low formality design (i.e. a University Graduation Application form), which may have increased the number of quality changes made relative to the other designs. In contrast, subjects may have been less familiar with the content of the low formality design on the tablet PC (i.e. a Bank Loan Application form); hence, fewer quality changes were made than expected. This also relates to the results on individual preference and overall enjoyment ratings for the designs (see later sections). However, content was impossible to control for, because different individuals will have had different exposure to and personal experience with a variety of fill-in forms, and their reactions to different forms will differ (see the later section on "overall enjoyment").

Apart from the higher-than-anticipated number of quality changes made in the medium-low formality design, it can be said that even for the medium levels of formality, the linear trend still holds in a significant manner: design performance, in terms of quality changes made, decreases significantly as the level of formality increases.
Expected changes

Similar to total changes made, subjects' performance in terms of expected changes made differed significantly between the low formality designs (on paper and on the tablet PC) and the high formality design; however, performance did not differ significantly between the high formality and medium-high formality designs, between the medium-high and medium-low formality designs, or between the medium-low and low formality designs on the Tablet PC. Nevertheless, subjects' performance was visibly different on the graphs (see Figure 19). Unlike total changes (an uncontrolled measurement of any changes made), expected changes were planned, deliberate errors for subjects to correct, with each design containing the same number of errors (twenty-three correctable errors); however, unexpectedly, there were no significant incremental differences between adjacent levels of formality. This suggests that subjects' performance (in terms of expected changes made) was comparable when they were presented with designs of higher formality (high and medium-high), and similarly for designs of lower formality on the Tablet PC (low and medium-low; and medium-low and medium-high).

Despite the significant linear trend found, the relationship between the middle levels of formality is still unclear. It may be that the differences in the designs' levels of formality were too small to affect subjects' performance in making expected changes; or it could be that the differences are significant but were not detected in this particular study due to the simplistic nature of the design tasks conducted in the laboratory, which were far from a real design situation and left little room for subjects to do real design work.
It must be noted that during the analysis of results, it was ambiguous whether parametric or non-parametric analysis should be employed. Although the independent variable, formality, was mathematically and systematically manipulated under three assumptions (refer to methods section 2.5.2) while controlling for other variables, it was debatable whether the level of measurement of formality was interval. In other words, it was unclear whether there was a real, equal difference in appearance (formality) between designs across the levels of formality (an interval level of measurement, based on the assumption that the variable was meticulously manipulated), or whether the designs' appearance could only be ranked from low to high formality, with one design higher in formality than another but the units between levels not exactly equal (an ordinal level of measurement). On the other hand, with the dependent variable being the number of changes made in the designs across levels of formality, an interval level of measurement was achieved. There was slight non-normality in the distribution (skewness and kurtosis), but the data were statistically justified (homogeneity of variance, distribution normality, skewness and kurtosis statistics compared against their errors, and sphericity) as reasonable for a parametric test involving a one-way repeated measures ANOVA. Such ambiguities were addressed by first choosing the appropriate approach guided by these justifications (the parametric approach), then double-checking by comparing the parametric results with the outcomes of non-parametric analyses. Friedman's rank test (the non-parametric equivalent of the one-way repeated measures ANOVA) was also conducted for total, quality and expected changes made. However, the non-parametric results are not reported, as they yielded the same outcomes as the parametric tests, including the same significant differences and significant trends.
The results from the post hoc Wilcoxon signed-rank tests were also the same as the parametric post hoc comparisons, i.e. differences were found in the same pairs of variables.
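The non-parametric cross-check described above can be sketched as follows. The change counts are hypothetical, and the Wilcoxon tests below compare adjacent formality levels only, as an illustration rather than a reproduction of the study's exact post hoc pairs.

```python
# Sketch: Friedman's rank test across formality levels, followed by
# pairwise Wilcoxon signed-rank tests on adjacent levels.
# Data are hypothetical, not the study's actual counts.
import numpy as np
from scipy import stats

# rows = subjects, columns = formality levels (low -> high)
changes = np.array([
    [30, 24, 20, 17, 12],
    [28, 25, 19, 15, 11],
    [33, 27, 22, 16, 13],
    [29, 23, 18, 14, 10],
    [31, 26, 21, 18, 12],
])

# Friedman's test takes one sample per condition (column).
chi2, p = stats.friedmanchisquare(*changes.T)
print(f"Friedman chi2 = {chi2:.2f}, p = {p:.4f}")

# Post hoc: Wilcoxon signed-rank test on each adjacent pair of levels.
# With real data, a correction for multiple comparisons would apply.
for a in range(changes.shape[1] - 1):
    w, pw = stats.wilcoxon(changes[:, a], changes[:, a + 1])
    print(f"levels {a} vs {a + 1}: W = {w}, p = {pw:.4f}")
```

Note that with only five hypothetical subjects the pairwise exact p-values cannot fall below .0625, which is itself a reminder of why sample size matters for the post hoc tests.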

Overall, a strong effect of formality on design performance was found, along with significant linear trends in total, quality and expected changes made across levels of formality; and the results of previous studies (i.e. Black, 1990; Plimmer and Apperley, 2004), that design performance differs when working with low formality designs compared to high formality designs, were replicated in the present study. Nevertheless, this study is the first known in the literature to explore multiple levels of formality of designs and their effects on design performance. Therefore, future replications and/or variations of the current study are needed in order to improve methodological issues, to further interpret the findings, and to be more conclusive about the effects of levels of formality of designs, particularly the less extreme levels on which there is a lack of research. For example, one could examine whether there is a true difference in design performance when a design appears more or less formal, and whether such findings have major implications for beautification techniques in sketching-based tools.


4.2. Between-subject effects: Expertise

Overall, results from between-subject analyses of expertise suggest that as formality increases, the differences in performance between groups (i.e. experts and novices) tend to decrease (i.e. performance converges to a similar level); and vice versa, the differences in performance between groups increase as formality decreases (i.e. more variability in performance). This finding suggests that no matter what level of design experience, major/specialization or study level one has (i.e. expertise), formality still has an impact on design performance, and more specifically on the number of changes made to improve the design (total, quality and expected changes).


4.2.1. Design experience

As hypothesized, design experience was found to affect design performance across levels of formality. The significant effect of design experience (the between-subject factor) on the number of changes made (total, quality and expected changes) across levels of formality indicated that subjects with Computer Science (CS)/ Software Engineering (SE) design experience performed better (i.e. made more changes) across levels of formality compared to subjects with no experience or some non-CS/SE experience.

Significant linear trends were found which further indicated that regardless of magnitude differences (the level of performance), the effects of formality on design performance (total, quality and expected changes made) still existed when design experience was taken into account.

The significant formality-by-design experience interaction showed that, in addition to the effect of formality, design experience also affected design performance (i.e. total changes and quality changes made) in a non-parallel manner. Although no statistically significant formality-by-design experience interaction was found for expected changes made across levels of formality, it was observable that the difference in performance decreased as the level of formality increased and, vice versa, increased as the level of formality decreased; hence, there was some formality-by-design experience interaction. This further suggests that individuals with different design experience are affected by formality to different extents (magnitudes), which in turn influences design performance.

Moreover, as the level of formality decreased, the larger differences in performance (total, quality and expected changes made) were particularly noticeable when the two low formality designs were presented; more specifically: 1) the between-subject difference in performance on each of the two low formality designs; and 2) the within-subject difference in performance between working on the two low formality designs presented on different media. Such large differences suggest that the change of medium had a greater impact on those with CS/SE design experience. Hence, the slope representing the number of changes made was steeper (i.e. more changes) for subjects with CS/SE design experience than for those with no experience or some non-CS/SE design experience.
4.2.2. Study major/specialization

For comparing the between-subjects effect of design experience, subjects were categorized into two even groups of fifteen, and the data were analyzed statistically to test Hypothesis 2a. However, study major/specialization and study level were explored mainly through visual inspection of graphical output, for the various reasons discussed in the results section (Chapter 3), for example the unbalanced number of subjects in each group, problems with the grouping of subjects, and the relatedness of the between-subject factors.

No significant effect of study major (the between-subject factor) was found on the number of changes made (total, quality and expected changes) across levels of formality, indicating that the level of performance was similar in the two groups. Linear and quadratic trends were found in total changes and quality changes, but not in expected changes made across levels of formality. Furthermore, a significant formality-by-study major interaction was found for total changes and quality changes, but again not for expected changes. Such results suggest that the effect of formality on design performance still holds for subjects with different majors/specializations (i.e. as the level of formality increases, the number of changes made decreases); however, the extent to which they are affected by formality may differ. The results of the statistical analyses and the information conveyed through the graphical output appeared to conflict. In addition to the unbalanced number of subjects in each group (n=10 and n=20), which may have contributed to the conflicting results, the performance of subjects with a non-CS/SE major in the medium-low formality condition was somewhat unexpected: subjects in both groups seemed to have performed at the same level, with similar total, quality and expected changes made in the CS/SE major group and the non-CS/SE major group. This could also help explain the non-significant results of the between-subject effects tests. Furthermore, subjects who majored in information systems and other engineering specializations such as computer systems and electrical engineering may have taken some CS papers (e.g. most commonly CS101 and 105), and/or may already have had some knowledge of website design and HTML components, yet those subjects were also grouped into the non-CS/SE major group. Such factors may have played a role in producing the trend in the non-CS/SE major group, i.e. a similar level of performance to the CS/SE major group. Hence, one must be extra cautious when interpreting these results.
4.2.3. Study Level

On the whole, the between-subjects results found in this study showed that, in the context of levels of formality of designs, there is a strong relationship between expertise and design performance. This supports findings from previous studies comparing experts and novices in the design process, for example Christiaans and Dorst (1992), who compared junior and senior industrial design students, and Atman et al. (1999), who compared first-year and fourth-year engineering students; both studies showed differences in design performance and behaviour between the 'expert' and 'novice' groups. Furthermore, observations during the experiment revealed that some subjects with CS/SE design experience ('experts') first went through the whole design to fix the 'errors', for example changing an element to the appropriate element, and then moved on to the detailed work; whereas 'novices', such as bioscience and psychology students with no design experience, searched for problems one by one. This further supports research findings that novices and experts use different problem-solving strategies (e.g. Ho, 2001, who found that the expert designer used explicit problem-solving strategies that the novice appeared not to possess, although both expert and novice used similar bottom-up, or working-backward, problem-solving strategies).


Overall, the results from comparing subjects with different design experience seemed the most reliable for interpretation, in comparison to study major/specialization and study level; in addition, the number of subjects in each group was equal (fifteen subjects) and adequate for achieving reasonable statistical power according to Cohen (1988). However, one limitation of the current study was that the subjects had varying design experience and came from a range of different (design) disciplines. Hence, the results could be improved by recruiting participants with similar design backgrounds (i.e. design experience, major/specialization and study level) to reduce variability and yield stronger results. It may also be useful to include a larger sample in each group; however, one must assess the practicality of testing a greater number of subjects. Moreover, the definition of expertise remains a limitation in many studies: although expertise was operationalized in the present study through design experience, domain-specific knowledge and education level, there was a considerable amount of overlap between the three between-subjects factors, as they are highly correlated.
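A rough power check in the spirit of Cohen (1988) can be sketched via the noncentral F distribution. The effect size and group sizes below are illustrative assumptions, not values taken from the study.

```python
# Sketch: approximate power for a one-way between-groups comparison,
# computed from the noncentral F distribution. Cohen's f, the number
# of groups and the group size are assumed values for illustration.
from scipy import stats

f = 0.40          # Cohen's f, conventionally a "large" effect
k = 2             # groups (e.g. with vs without CS/SE experience)
n_per_group = 15
N = k * n_per_group

df1, df2 = k - 1, N - k
nc = f ** 2 * N                         # noncentrality parameter
f_crit = stats.f.ppf(0.95, df1, df2)    # alpha = .05 critical value
power = 1 - stats.ncf.cdf(f_crit, df1, df2, nc)
print(f"approximate power = {power:.2f}")
```

A check of this kind makes the trade-off explicit: with two groups of fifteen, even a large assumed effect leaves power well short of the conventional .80 target, which supports the suggestion above that larger samples would strengthen the between-subjects comparisons.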
4.3. Multiple Regression Analysis

As there was overlap between the between-subject factors, the contribution of each variable to the overall effect was explored: how much of the variance do formality and each between-subjects factor account for, individually and overall? Results from the multiple regression analysis showed that design experience explained as much of the variance in the total number of changes made across levels of formality as formality did, which suggests that the more design experience a participant has, the more changes he/she attempts to make to improve the design. For quality changes made across levels of formality, design experience and study level explained variance in addition to the main effect of formality. In the context of this study, participants may have design experience but differ in study level; if a participant is at a higher study level, their design experience and related knowledge is likely to be greater than that of a participant at a lower level of study. However, it was also observed in the experiment that software engineering graduates with design experience may have made more changes to a design, yet those changes did not necessarily improve the design. A classic example is the radio button: according to web design guidelines, a radio button should be placed to the left of the label (text) it is associated with. However, some participants followed the original deliberate "error" presented in the design where, for example, a radio button was deliberately located to the right of the associated text, and would draw a radio button to the right of the label when adding an item; some participants even explicitly moved a radio button to the right of its associated label. Formality correlated more highly with, and explained more of the variance in, expected changes made.
One of the reasons that the variance explained by formality was much higher here is most likely that each design had the same number of "planned errors" for participants to change; therefore, the changes made could be compared between different levels of formality in a more controlled manner than total and quality changes made.
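The multiple regression described above can be sketched with ordinary least squares. The data are simulated with assumed coefficients, so the output illustrates the method only, not the study's findings; the predictor coding (formality level plus a design-experience indicator) is an assumption for the sketch.

```python
# Sketch: regressing the number of changes made on formality level and
# a design-experience indicator to compare variance explained.
# Data and coefficients are simulated, purely for illustration.
import numpy as np

rng = np.random.default_rng(0)
n = 150  # e.g. 30 subjects x 5 designs, treated here as flat rows

formality = rng.integers(0, 5, size=n)     # 0 = low ... 4 = high
experience = rng.integers(0, 2, size=n)    # 0 = none, 1 = CS/SE
# Assumed "true" model: more formality -> fewer changes,
# more experience -> more changes, plus noise.
changes = 25 - 3 * formality + 5 * experience + rng.normal(0, 2, n)

# Ordinary least squares via numpy's least-squares solver.
X = np.column_stack([np.ones(n), formality, experience])
beta, *_ = np.linalg.lstsq(X, changes, rcond=None)

fitted = X @ beta
r2 = 1 - ((changes - fitted) ** 2).sum() / ((changes - changes.mean()) ** 2).sum()
print(f"intercept={beta[0]:.1f}, formality={beta[1]:.1f}, "
      f"experience={beta[2]:.1f}, R^2={r2:.2f}")
```

Comparing the R^2 of nested models (formality alone versus formality plus each between-subjects factor) is one simple way to attribute variance, as the paragraph above describes; with real repeated-measures data, the non-independence of rows from the same subject would also need to be handled.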

As the multiple regression analysis showed that the level of formality correlates fairly strongly with the number of changes made (total and quality changes, and especially expected changes), future research should explore more dimensions of beautification (in terms of HTML designs) such as colour (a multi-dimensional and huge area of study in itself), texture, shading, and the (2.5-dimensional) 'look-and-feel' (e.g. buttons that one could "click" on and textboxes that one could "type" into), and their effects on the design process. How adding or removing a dimension may affect the design process would also be of interest to examine, e.g. the number of changes made, the perception of attractiveness, and overall enjoyment. From the current study, it may be hypothesized that as the number of dimensions increases, formality will increase, and thus the number of changes people make will differ (decrease); the effects would be greater as more dimensions are added at a higher level, for example more colours, controls (e.g. buttons, textboxes, drop-down menus) with shading, and a 3-D 'look-and-feel'.
4.4. Additional Findings

4.4.1. Relationship between total, quality and expected changes

Quality changes were derived from total changes, and expected changes were derived from quality changes. Guidelines are useful in many ways, for example for standardization and learning, and in this case they helped in designing the five HTML forms presented to the participants. Guidelines from HCI handbooks and design references were used to produce the designs containing deliberate design 'errors'. Furthermore, whether a change counted as a quality change (i.e. whether a change was of quality) was also decided based on design guidelines.

However, one must point out the problem with the large number of assumptions made by guidelines and handbooks: they are sometimes arbitrary, but they are rules that help standardize designs, much like platforms for different operating systems. Because of them, websites behave similarly; for example, clicking a blue underlined ‘link’ leads you to the site it names, and only one item can be chosen within a drop-down menu. People have learned to use the internet and have become used to the conventions used to build websites and forms containing HTML controls (e.g. textboxes, radio buttons, tick boxes, buttons). The assumptions used in guidelines, in turn, form the basis for evaluating whether a design is good. In this study, finding criteria for good HTML form design based on experimental studies about errors and usability, and on whether a form had been filled in correctly, was difficult. It was thus even more difficult to decide whether a change made by a participant was really a quality change (one that would improve the design).

The counting and analysis of the extra changes that participants made (total minus quality changes; and quality minus expected changes) further questioned the usefulness of (web) design guidelines. Why did people make those extra changes? Why were those changes not included in the criteria for good designs? Even software engineers (fresh graduates and employed) made some changes to the designs that did not conform to the guidelines and standards. Deciding whether a change should be counted as a quality change, or just a change, was the most difficult judgement.

It was also noticed from re-counting and re-analysis of the different changes that the ‘extra changes’ (total minus quality) were mostly changes in the flow of information, i.e. relocating items to another area, and that relocation of elements and items seemed to increase as the formality level decreased. Flow of information and logic is subjective to each individual; therefore, only relocations of items that made a difference (i.e. an improvement) to the design were counted as quality changes. The exact changes made were also highly individualized: some changes were similar, but most often they were not the same. This reflects individual differences in what is considered important to change, and in what counts as an improvement to a design. Perhaps most guidelines are subjective in the end. On the whole, though, the “extra changes” that did not qualify as quality changes could be interpreted in a light-hearted manner, since subjects had varying levels of design experience, domain-based knowledge (major/specialization) and study level, which may have contributed to the high number of extra changes.

Looking at this from another angle, why did people not make some of the changes (total and quality changes) that other participants made? Motivation, boredom and concentration may have played a role here. Software engineers (graduates) made significantly more changes across all levels of formality than undergraduates (with CS design experience and majors) – it appeared that the software engineers may have put in more effort. It was also noticeable that some subjects had lower motivation, for example making only a few changes before stopping and indicating that they had finished. Near the fourth and fifth conditions, it was sometimes noticeable that some subjects’ concentration decreased – stopping at 8–9 minutes during the 11-minute condition.
4.5. “Overall enjoyment” rankings of the five designs

Participants were asked to rank the five designs in order of overall enjoyment when working on each design, from the most-liked to the least-liked design. Due to the nature of ranked data, no magnitude of difference in overall enjoyment could be inferred from such responses.

Overall, participants indicated that they enjoyed working on the higher formality designs more than on the low formality designs, and the rank scores increased (i.e. designs were liked less) as the level of formality decreased. In other words, participants liked working on lower formality designs less. Interestingly, subjects enjoyed working on the low formality design on paper more than on the low formality design on the tablet PC, as indicated by the difference in rankings; participants’ preference for design media may have affected their enjoyment rankings. The underlying reasons for the rankings were also examined.
4.5.1. Rankings according to aesthetic aspects of designs

Analysis of the ranked data showed that the subjects’ rankings of the five designs according to aesthetics were significantly different (p < .0001), and the Kendall coefficient of concordance of .57 indicated fairly strong agreement among the subjects’ rankings. Post-hoc tests were also conducted.
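The Kendall coefficient of concordance (W) reported above measures agreement among raters' rankings, from 0 (no agreement) to 1 (perfect agreement). A minimal sketch of the standard no-ties formula, W = 12S / (m^2 (n^3 − n)) for m raters and n items, applied to made-up rank data (not the study's data):

```python
def kendalls_w(rankings):
    """Kendall's W for m raters each ranking the same n items (no ties).

    rankings: list of m lists, each a permutation of the ranks 1..n.
    """
    m = len(rankings)                 # number of raters (participants)
    n = len(rankings[0])              # number of items (designs)
    # Sum of the ranks each item received across raters.
    rank_sums = [sum(r[i] for r in rankings) for i in range(n)]
    mean_sum = m * (n + 1) / 2        # expected rank sum under no agreement
    s = sum((rs - mean_sum) ** 2 for rs in rank_sums)
    return 12 * s / (m ** 2 * (n ** 3 - n))

# Hypothetical rankings of five designs by four participants
# (1 = most liked .. 5 = least liked) -- illustrative values only.
ranks = [
    [1, 2, 3, 4, 5],
    [1, 2, 3, 5, 4],
    [2, 1, 3, 4, 5],
    [1, 3, 2, 4, 5],
]
w = kendalls_w(ranks)  # 0.875 here: strong agreement among the raters
```

In practice, significance of the associated rank test would be assessed alongside W (e.g. via Friedman's test); this sketch shows only the concordance statistic itself.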

Most participants (twenty-one subjects) ranked according to the appearance of the designs. These subjects indicated in the questionnaire that a design was easier to follow and comprehend when it looked “nice and tidy”, with all the “elements aligned” and “tidy lines and fonts”, compared to designs that were “untidy”; hence, subjects enjoyed working on designs that appeared aesthetically pleasing more than on designs that looked rough and “sketchy”.

Importantly, such results also helped validate the levels of formality of the designs: the high formality design appeared more formal and aesthetically pleasing than the medium-high formality design; the medium-high formality design appeared more formal and aesthetically pleasing than the medium-low formality design; and the low formality design (on the tablet PC) looked the least formal, with untidy and unaligned elements compared to the other designs.

Moreover, the aim of the present study was not to show which design was liked most or least, and why. Rather, of additional interest was whether there were individual differences, or a common pattern of perception, when working on designs with different levels of formality that influenced whether a person liked working on a design (overall enjoyment), which may in turn have affected the number of changes made across levels of formality. From the results, where subjects’ responses were grouped into three main groups, it could be seen that the pattern of overall enjoyment rankings, with design aesthetics as the underlying ranking factor, is somewhat correlated with the effects of formality: as formality decreases, participants enjoyed working on a design less, and the number of changes made increases. At the low formalities, however, the patterns did not match exactly: the number of changes made increased from the low formality design on the tablet PC to the low formality design on paper, whereas the rank pattern was the opposite – the low formality design on the tablet PC was liked less (least) than the low formality design on paper, often scoring a higher (the highest) rank.

From the design perspective, the findings of the current study suggest that it is better to work with informal designs with sketchy properties (non-beautified), as changes to a design are likely to be greater, which may improve its quality – shown in the results, where all three types of changes (total, quality and expected) increased as a design appeared less formal and more sketchy. However, despite this advantage of working with rough and sketchy (low formality) designs, the findings also suggest that people still like working on designs that appear more formal; humans like beautiful products (e.g. Hassenzahl, 2004; Overbeeke & Wensveen, 2004; Tractinsky, Katz & Ikar, 2000), just as a (web) designer likes formality in their designs. Participants liked working on the design that appeared most aesthetically pleasing, i.e. the high formality design that looked tidy, with elements aligned, lines straightened, and text in a clear, legible font (Times New Roman). Thus two conflicting mechanisms occur: people’s preferences do not match their minds’ workings, as if the two work against each other. On one side, one prefers working on a design that appears formal (beautified); on the other, the mind works best when a design looks rough and sketchy. Even though, overall, participants did not enjoy working on the lower formality designs, they still performed better on them than on the high formality design, where fewer changes were made.

Finally, as research has shown that aesthetics affect consumers’ (users’) perception of the usability of a product, this raises the question of whether aesthetics also affect designers (and their clients) during the design process – are designers affected by aesthetics, just like end-users? Will they make fewer changes to a design as it appears more attractive and aesthetically pleasing, and therefore regard it as a “good [enough] design” that requires few changes? Or are designers immune to the effects of aesthetics because they are designing the product, given the requirements?
4.5.2. Rankings according to effort required

A rank test for several related samples found an effect of formality on participants’ perceived effort required when working on the designs, which in turn affected the overall enjoyment rankings – the rankings were significantly different, with a Kendall coefficient of concordance of .37 indicating moderate agreement among the rankings.

The trend of participants’ overall enjoyment rankings according to perceived effort required was comparable with the rankings according to aesthetics: the high formality design had the lowest score (i.e. participants enjoyed working on it most) and, as formality decreased, the scores (ranks) increased (i.e. participants enjoyed working on the designs less and less). In addition, similar to the rankings according to aesthetics, the low formality design on the tablet PC had a higher (the highest) score than the low formality design on paper (which was comparable with the medium-low formality design), indicating that participants enjoyed working on the low formality design on paper more than on the tablet PC. These results suggest that when a design appears more formal (beautified), it may affect participants’ perception of working on the design, and that when the medium of presentation differs, perception may differ. The rankings also reflect the relationship between levels of formality and perceived effort required – the more formal a design appears, the less (perceived) effort is required; however, no causal direction could be concluded, as the two elements affect each other.
4.5.3. Rankings according to stimulation/fun level

No differences were found in the effect of formality on participants’ perceived level of fun/stimulation when working on the designs, which in turn was reflected in the overall enjoyment rankings. This suggests that participants found the designs similar in terms of fun/stimulation. On the other hand, such results also suggest great individual variability, which was noticeable when participants’ rankings were examined individually; perception of fun/stimulation was subjective and varied across individuals. It may have been a confounding factor that individuals found it more interesting to work on a particular design and therefore put more effort into improving it. One could also argue that it was the level of formality that affected participants’ interest in working on a design (i.e. whether a design was fun/stimulating in comparison with the others), or perhaps just the content of the design alone – e.g. America’s Next Top Model, a dog registration form, a university graduation form, a bank loan application, and a subscription to an online magazine. For example, the design on paper being rated the least interesting may have been due to its similarity to the online subscription forms that internet users frequently encounter, so (some) participants may have felt it was the least interesting of the five designs to work on. Motivation may thus have been affected: when one is bored, one is likely to make fewer changes than when one is motivated and interested in the design; and when more attention and effort is put into improving a design – i.e. more changes are made – it is likely (but not necessary) that more quality changes are made (compared to when few changes are made). This suggests that, although every effort had been made to control for such effects (see discussion in methods section 2.5.2.) so as to obtain results that reflect the effects of formality (and not others), subjects reacted differently to the content of different designs – each individual has different preferences, exposure, background and experience.

On the other hand, it may also be that for those participants who ranked according to the level of stimulation of the designs, the perceived level of fun/stimulation – rather than formality – was the main factor affecting the number of changes made in different designs. However, even with varying perceptions of whether a design was fun to work on, this made no difference to the effect of formality: the number of changes made still decreased (increased) as formality increased (decreased). As only a few participants (N = 7) ranked according to this factor, the results are preliminary rather than conclusive; they are only an indication of the underlying reasons that affected the overall enjoyment rankings.
Overall, the different enjoyment rankings suggest that whether or not one likes working on a design, the effects of formality still exist – as formality increases, the number of changes made (design performance) decreases.
4.6. Design tool preference

Participants indicated their design tool preference during the experiment and their design tool preference in real-world design situations. Because statistical analysis was impractical, visual inspection of graphs was more useful for the purposes of examining design tool preference.


4.6.1. Design tool preference during the experiment

Design medium preference during the experiment was important to at least touch on, as it may act as a mediator/moderator of the effects of formality on the number of changes made in a design when the design is presented on a different medium. However, as subjects were presented with four designs on the tablet PC and only one design on paper, design medium preference as a mediator/moderator could not be statistically examined in a satisfactory manner. In future it would be more feasible to include two sets of designs, from low to high formality, one set presented on the tablet PC and the other on paper; design medium preference could then be examined and compared as a mediator/moderator.

The null hypothesis in Hypothesis 4 was retained, as no significant difference was found in design tool preference during the experiment – the number of subjects who preferred using paper and pen (thirteen participants, 43.3%) and the number who preferred using the tablet PC (fifteen participants, 50%) were very similar. This further supports the view that InkKit (on the tablet PC) bridges the gap between paper and computer by providing a sketching-based interface (as well as other additional functionality), and thus that subjects’ interaction with InkKit was comparable with pen and paper.

Factors relating to expertise that were likely to affect design tool preference were examined briefly, including study major/specialization and design experience. The results suggest that those who majored in CS/SE (who are likely to be more computer-oriented) were more likely to prefer the tablet PC (InkKit) – a medium with conventional computer functions such as selection, resizing, deletion, copy and paste, drag and drop, undo and redo – over the traditional design medium of pen and paper. The results also indicate that those who majored in non-CS/SE subjects were more likely to prefer pen and paper over the tablet PC (InkKit).

However, as seen from the reasons given for preference, subjects preferred one tool over the other either because they liked using it better (mentioning the positive features of the preferred tool) or because they disliked the ‘other’ tool or found using it unsatisfactory (mentioning the negative features of the non-preferred tool); the preferred tool was thus only an indication of the better tool of the two. It is therefore difficult to judge objectively which design tool a participant really preferred – did a participant prefer a particular tool because the other was bad, or because it had more advantages than the other tool?

Regarding no preference for either design tool used in the experiment, the results can be interpreted as showing that subjects with a CS/SE major and experience tended to notice the advantages and disadvantages of each tool (and their suitability for different tasks, as one of the subjects expressed – see Appendix 21); some subjects therefore indicated no particular preference for one tool over the other, as both tools had their advantages.

Furthermore, regarding design tool preference according to a subject’s study major, the results must be interpreted with caution, as the number of subjects in each group was unbalanced; only the proportion of subjects (within each group) preferring a particular tool could be shown. Additionally, in groups with few subjects, a slight difference in the number of subjects preferring one tool over another (for example, a difference of one subject) may have produced a more noticeable difference in percentage. In comparison, the results showing design tool preference according to design experience were less ambiguous to interpret, as the number of subjects in each group was balanced. Hence, the overall results are not conclusive as to whether there is a difference in tool preference between subjects with different study majors and design experience. To be more conclusive about design tool preferences (overall as well as between groups), a more balanced number of subjects in each group and a greater number of subjects overall would be needed.
On the whole, no direct relationship could be established between design tool preference in the experiment and the number of changes made across levels of formality, as there was only one condition with paper and pen, as opposed to four conditions that required participants to use the tablet PC with the tablet pen; explicit comparisons could therefore not be made, and this is left for future research. The greater number of changes made using paper and pen compared to the tablet PC and tablet pen suggests that the two design tools do not belong on the same continuum in terms of interaction with a design – they are two different dimensions of design interaction.

Furthermore, even with some subjects preferring one tool over another, the main trend (in the number of changes made across levels of formality) still held regardless of design tool preference: as formality increased, the number of changes made decreased. This can be interpreted as fewer design “errors” corrected (expected changes), fewer improvements made (quality changes), and generally fewer attempts made to improve the design (total changes) – with respect to one’s design expertise, as suggested by the between-subjects effects. From another angle, as the formality of a design decreased, the number of changes made increased (and was much greater in the low formality design on paper) regardless of design tool preference, which further suggests that as designs appeared less formal, more design errors were corrected (expected changes), more improvements were made (quality changes), and more attempts were made to improve the design (total changes).

