The sensory signals that drive movement planning arrive in a variety of 'reference frames', and integrating or comparing them requires sensory transformations. We propose a model in which the statistical properties of sensory signals and of their transformations determine how these signals are used. This model reproduces the patterns of gaze-dependent errors that we found in a human psychophysics experiment in which we varied the sensory signals available for reach planning. These results challenge two widely held ideas: that error patterns directly reflect the reference frame of the underlying neural representation, and that it is preferable to use a single common reference frame for movement planning. We found that gaze-dependent error patterns, often cited as evidence for retinotopic reach planning, can instead be explained by a bias introduced during sensory transformation and are not exclusively linked to retinotopic representations. Furthermore, maintaining multiple reference frames allows for optimal use of the available sensory information and explains task-dependent reweighting of sensory signals.
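For concreteness, the statistical logic invoked here can be sketched with standard minimum-variance cue combination; the notation below is illustrative and not taken from the model itself. Given two unbiased estimates $x_1$ and $x_2$ of a target's location, with variances $\sigma_1^2$ and $\sigma_2^2$, the optimal linear combination weights each estimate by its inverse variance:
\[
\hat{x} = w_1 x_1 + w_2 x_2, \qquad w_i = \frac{1/\sigma_i^2}{1/\sigma_1^2 + 1/\sigma_2^2}.
\]
If a signal must first be transformed into another reference frame, the transformation can add both noise and a systematic offset, e.g.
\[
x_2' = x_2 + b_T, \qquad \sigma_{2'}^2 = \sigma_2^2 + \sigma_T^2,
\]
so that a nonzero transformation bias $b_T$ produces systematic (here, gaze-dependent) errors even for non-retinotopic representations, while the added variance $\sigma_T^2$ shifts the optimal weights toward signals that need not be transformed, yielding task-dependent reweighting.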