Model Fits of Experiment 3


The model outlined in the Introduction and applied to Experiment 1 was applied to Experiment 3 without the special assumptions made for Experiment 2.

Parameters. The four parameters used to generate predictions for this experiment were set at c = 0.3, n = 0.35, b = 5, and  = 3.

Recognition and Similarity Judgments. Figure 9 shows the predicted recognition and similarity results. The model made several correct predictions, but some observed effects were not handled well. First, the difference between target conditions A and B was correctly predicted. The model predicted this difference on the basis of word frequency: words from condition A had lower word frequency than words from condition B. The difference between the unrelated low and high frequency distractors was also correctly predicted by word frequency differences. For the recognition ratings, the model predicted that related distractors from conditions F, G, and H would receive higher old ratings than the unrelated distractor conditions with similar word frequencies, because related distractors overlap more with the memory contents than unrelated distractors do. However, as was pointed out in the previous section, the results showed a trend for the condition G words to receive lower old ratings than the unrelated low frequency distractors.

Another mismatch between observed and predicted results concerns the condition C words. They were incorrectly predicted to have higher old ratings than condition B words, even though the word frequency of condition C words was higher than that of condition B words. The model also incorrectly predicted that condition C words would receive the highest similarity ratings. This suggests that condition C words are not placed correctly, relative to the other study words (conditions A and B), in the semantic space formed by WAS.



General Discussion

The memory model presented in this paper combines an explicit representation of orthographic and semantic features with a process model operating on those features. Words are represented by vectors of feature values that are based on an analysis of the semantic and orthographic features of words. The vectors of feature values representing the semantic aspects of words came from the Word Association Space. This space was developed by analyzing the associative relationships in a large database of free association norms and representing words with similar associative patterns by similar feature vectors. To represent orthography, the letters of the words were encoded. These representations were coupled with a process model for recognition memory, based on the REM model, which uses Bayesian principles to decide whether a memory probe is old or new.
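To make the decision process concrete, the sketch below implements a generic REM-style likelihood-ratio calculation in Python. The geometric feature distribution and the parameters g, c, and u follow the published REM formulation rather than the specific model fitted above, and all numerical values are illustrative assumptions only.

    import numpy as np

    rng = np.random.default_rng(0)

    G = 0.40  # geometric base rate for feature values (assumed)
    C = 0.70  # probability a feature is copied correctly (assumed)
    U = 0.30  # per-attempt probability of storing a feature (assumed)

    def make_word(n_features=20):
        """Draw a word as a vector of geometric feature values 1, 2, 3, ..."""
        return rng.geometric(G, size=n_features)

    def encode(word, n_attempts=10):
        """Store a noisy, incomplete copy of a word; 0 marks an unstored feature."""
        stored = rng.random(word.shape) < 1 - (1 - U) ** n_attempts
        correct = rng.random(word.shape) < C
        values = np.where(correct, word, rng.geometric(G, size=word.shape))
        return np.where(stored, values, 0)

    def likelihood_ratio(probe, trace):
        """Odds that this trace is a copy of the probe rather than of another word."""
        stored = trace > 0
        match = stored & (trace == probe)
        n_mismatch = (stored & ~match).sum()
        v = trace[match].astype(float)
        lr = (1 - C) ** n_mismatch
        lr *= np.prod((C + (1 - C) * G * (1 - G) ** (v - 1))
                      / (G * (1 - G) ** (v - 1)))
        return lr

    def odds_old(probe, traces):
        """Average the likelihood ratios over traces; odds > 1 favors 'old'."""
        return np.mean([likelihood_ratio(probe, t) for t in traces])

    study = [make_word() for _ in range(20)]
    memory = [encode(w) for w in study]
    print("target odds:", odds_old(study[0], memory))
    print("foil odds:  ", odds_old(make_word(), memory))

Run repeatedly, targets produce average odds well above 1 while foils hover near or below 1, which is the basic signal the Bayesian decision rule exploits.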

One novel aspect of this model was the distinction between recognition and similarity judgments. The ability of participants to differentiate between the two judgments was apparent in all experiments: participants could distinguish distractors that preserved the meaning of one of the themes on the study list from distractors that were not similar to any words on the study list. In the model, recognition judgments were assumed to rely on both the semantic and orthographic overlap of probe and memory contents, while (semantic) similarity judgments were assumed to rely only on the semantic overlap. In Experiment 2, the category length of orthographically related distractor categories had no effect on (semantic) similarity judgments but increased false alarms for recognition judgments, consistent with the assumption that orthographic features are not involved in the calculation of similarity judgments.
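The distinction can be illustrated with a small sketch. It assumes each trace is a concatenation of semantic and orthographic feature vectors and uses a simple cosine-style overlap; the feature counts and the overlap measure are illustrative assumptions, not the exact familiarity calculation fitted in this paper.

    import numpy as np

    rng = np.random.default_rng(1)
    N_SEM, N_ORTH = 50, 10  # assumed numbers of semantic/orthographic features

    def overlap(probe, traces):
        """Mean normalized dot product between a probe and a set of traces."""
        probe = probe / np.linalg.norm(probe)
        traces = traces / np.linalg.norm(traces, axis=1, keepdims=True)
        return float((traces @ probe).mean())

    def recognition_judgment(probe, memory):
        """Recognition: semantic and orthographic features both contribute."""
        return overlap(probe, memory)

    def similarity_judgment(probe, memory):
        """Similarity: only the semantic slice of probe and traces is compared."""
        return overlap(probe[:N_SEM], memory[:, :N_SEM])

    memory = rng.normal(size=(20, N_SEM + N_ORTH))  # 20 stored word traces
    probe = memory[0]                               # an "old" probe
    print(recognition_judgment(probe, memory))
    print(similarity_judgment(probe, memory))

Because similarity_judgment never touches the orthographic slice, manipulating orthographic category length in the memory set moves the recognition score but leaves the similarity score untouched, matching the Experiment 2 pattern.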

The three experiments in this paper explored various predictions of the model, with a focus on the interplay of semantic and orthographic similarity between probe and memory contents. The predictions were tested at two levels: condition means and individual word performance. In all three experiments, the model successfully predicted most of the qualitative differences in condition means, suggesting that the similarity relationships in the semantic space and in the orthographic representation are useful for predicting memory performance.

Even stronger evidence for the idea that similarity relations among words explain recognition and similarity judgment data comes from the within-condition correlations. The correlational analyses showed that a small but significant part of the variance in performance was due to similarity relations arising from differences among words within conditions, even though these words were generally chosen so that such differences would be small.

An undesirable aspect of the present approach is the rather ad hoc fashion in which a word frequency mechanism had to be appended to the basic model. It may well be that a feature frequency approach would provide a more principled account, but this would only be possible in conjunction with a different word space, one in which high frequency words were clumped together and had high frequency features lying near the center of each featural dimension. WAS represented similarity by inner products, with the result that high frequency words were pushed to the outside of the space. This problem was solved by normalizing the vector lengths, but at the cost of removing word frequency differences, which in turn required the model to be augmented with a rather ad hoc word frequency mechanism. There is obviously room here for further research and improvement of the models.
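The effect of the normalization can be seen in a toy example with synthetic vectors, where a large norm stands in for high word frequency; WAS itself is derived from free association norms, so the numbers below are purely illustrative.

    import numpy as np

    rng = np.random.default_rng(2)

    base = rng.normal(size=50)
    hi_freq = 5.0 * base                       # large norm stands in for high frequency
    lo_freq = 1.0 * base
    probe = base + 0.1 * rng.normal(size=50)

    # Raw inner products scale with vector length: the long vector dominates.
    print(hi_freq @ probe, lo_freq @ probe)

    # After normalizing to unit length, only direction matters; the two
    # similarities are identical and the frequency information is gone.
    unit = lambda v: v / np.linalg.norm(v)
    print(unit(hi_freq) @ unit(probe), unit(lo_freq) @ unit(probe))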

There are several ways in which the model can be extended, and several new assumptions that can be tested. For example, one major assumption in the REM model and in this memory model is that the features representing different aspects of words are stored in a single trace. Instead, it could be assumed that separate attributes, such as semantic and physical features, are stored in separate traces. This would lead to a system in which familiarity is calculated for the semantic and physical contents of memory separately as opposed to integrally; a small numerical sketch of the contrast is given below. Preliminary simulations have suggested that there is not much difference between these two recognition memory models.
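The contrast between the two schemes can be sketched with hypothetical per-trace likelihood ratios: a single composite trace multiplies semantic and orthographic evidence within each trace before averaging, whereas separate stores average within each store and then combine the two odds. Both the numbers and the multiplicative combination rule are assumptions for illustration, not the paper's simulations.

    import numpy as np

    # Hypothetical per-trace likelihood ratios for four stored words.
    lr_sem  = np.array([9.0, 0.5, 0.8, 1.2])  # semantic evidence per trace
    lr_orth = np.array([4.0, 0.7, 1.1, 0.9])  # orthographic evidence per trace

    # Single composite trace: multiply within a trace, then average.
    odds_single = np.mean(lr_sem * lr_orth)

    # Separate stores: average within each store, then multiply the odds.
    odds_separate = np.mean(lr_sem) * np.mean(lr_orth)

    print(odds_single, odds_separate)  # different magnitudes, but the target
                                       # stands out under both schemes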


