Dialogue State Tracking (DST), a key component of task-oriented dialogue systems, tracks user intentions by predicting the values of pre-defined slots in a dialogue. Existing DST methods treat all slots indiscriminately and independently, which ignores the relationships across slots and limits the learning of hard slots (slots that are difficult to predict correctly), eventually hurting overall performance. In this paper, we propose an iterative learning framework that iteratively updates the dialogue state with confident slots to alleviate this problem. Specifically, we first employ a scorer to estimate the confidence of each slot. Slots with high confidence are then used to update the previous dialogue state, and the updated state is fed back into the scorer so that confidence can be recalculated. In the final iteration, we apply an objective with a confidence penalty to focus learning on the hard slots. Experiments show that our approach outperforms existing methods on popular datasets.
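The iterative update loop described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: `toy_scorer`, the per-slot base confidences, and the threshold are all hypothetical stand-ins for the learned scorer, chosen only to show how confident slots can raise the confidence of harder slots across iterations.

```python
def toy_scorer(state, turn):
    """Hypothetical scorer: a slot's confidence grows as more slots are
    already resolved in the state, mimicking cross-slot dependencies.
    (Toy stand-in for a learned confidence scorer.)"""
    base = {"hotel-price": 0.75, "hotel-stars": 0.65}  # toy per-slot difficulty
    return {slot: (value, base.get(slot, 0.7) + 0.1 * len(state))
            for slot, value in turn.items()}

def iterative_dst_update(prev_state, turn, scorer, threshold=0.8, max_iters=3):
    """Iteratively commit only confident slot predictions to the state,
    then re-score the remaining slots given the updated state."""
    state = dict(prev_state)
    for _ in range(max_iters):
        scores = scorer(state, turn)
        # Keep only confident, not-yet-committed slot predictions.
        confident = {s: v for s, (v, c) in scores.items()
                     if c >= threshold and s not in state}
        if not confident:
            break  # no new confident slots; stop iterating
        state.update(confident)  # update state, then re-score the rest
    return state

state = iterative_dst_update(
    {"hotel-area": "centre"},                      # previous dialogue state
    {"hotel-price": "cheap", "hotel-stars": "4"},  # current-turn candidates
    toy_scorer,
)
```

In this toy run, `hotel-price` clears the threshold in the first iteration; once it is committed, the recomputed confidence of the harder slot `hotel-stars` also clears the threshold in the second iteration, illustrating how confident slots can help resolve hard ones.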