The Rescorla-Wagner (RW) model has seen widespread success in modelling not only its original target of animal learning, but also several areas of human learning. Despite this success, however, a number of studies with humans have found effects that the model does not predict, inspiring proposals for modifications to it. One such proposal, by Van Hamme and Wasserman (1994; henceforth VHW), is that humans not only learn from present cues to all outcomes (present and absent), as in the original model, but also learn from the absence of cues. They set out to test this hypothesis with a causal rating experiment. However, behaviour in learning studies may depend on the nature of the task. We propose that error-driven learning should be considered a form of implicit learning, and that the results of VHW’s contingency judgement task might instead stem from explicit strategies involving logic and reasoning. The present study investigates this question by a) running simulations with both the original and modified versions of the model; b) replicating the VHW experiment (Experiment 1); and c) extending the experiment with new stimuli and by including unseen stimuli following the learning phases (Experiment 2). Simulations show that the VHW-modified model predicts that cues learnt at the beginning will be unlearnt when absent over the following blocks, so that they become negative predictors over time. In contrast, the original RW model predicts that the absent cues remain steady (positive) predictors over the blocks. Results showed no significant difference in cue assignment between training and test, in line with the original RW model. Moreover, predictive cues in the training phase showed significantly higher ratings than a new cue introduced in the test phase, at least in some cases, also partially supporting the original RW model.
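The contrast between the two models described above can be sketched in code. The following is a minimal, illustrative implementation (parameter values and function names are our own, not taken from the original papers): the standard RW rule updates only cues present on a trial, whereas the VHW modification additionally updates absent cues with a negative learning rate, so that an initially predictive cue loses strength over blocks in which it is absent.

```python
def rw_update(V, present, lam, alpha=0.3, beta=0.5, alpha_absent=-0.1, vhw=False):
    """One trial of error-driven learning.

    V       -- dict mapping cue name -> associative strength
    present -- set of cues present on this trial
    lam     -- outcome on this trial (1.0 = present, 0.0 = absent)
    vhw     -- if True, apply the VHW modification: absent cues are
               updated with a negative learning rate (alpha_absent)

    Parameter values are arbitrary illustrations, not fitted values.
    """
    # Prediction error: outcome minus summed strength of present cues
    error = lam - sum(V[c] for c in present)
    for c in V:
        if c in present:
            V[c] += alpha * beta * error          # standard RW update
        elif vhw:
            V[c] += alpha_absent * beta * error   # VHW: absent cues unlearn
    return V


# Train cue A as a predictor, then present only cue B with the outcome.
V = {"A": 0.0, "B": 0.0}
for _ in range(20):
    rw_update(V, {"A"}, 1.0)

V_rw, V_vhw = dict(V), dict(V)
for _ in range(10):
    rw_update(V_rw, {"B"}, 1.0, vhw=False)
    rw_update(V_vhw, {"B"}, 1.0, vhw=True)
```

After the second phase, `V_rw["A"]` is unchanged (the original RW model: absent cues are steady predictors), while `V_vhw["A"]` has decreased (the VHW model: absent cues are unlearnt), mirroring the simulation contrast reported above.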
We propose that in the development of human learning theory, attention should be paid to whether the behaviour (or other learning data) to be modelled results from implicit learning or involves higher-level cognitive processes. We suggest that the RW model may best capture implicit error-driven learning.