Word Sense Disambiguation (WSD) is a fundamental task in natural language understanding. A word can take on different meanings depending on its context, and disambiguating rarely seen word senses is especially challenging when only limited examples are available. Meta-learning, a widely adopted machine learning paradigm for few-shot learning, addresses this by extracting metacognitive knowledge from training data, helping models "learn to learn". The effectiveness of meta-learning therefore hinges on acquiring high-quality metacognitive knowledge. In light of this, we propose a Bi-Branch Meta-Learning method for WSD that enriches and accumulates metacognitive insights through two branches used in both training and testing. During training, a bi-branch loss combines the original data with data augmented by large language models to compensate for data scarcity. During testing, information from base classes is used to generate bi-branch scores that refine predictions. Experiments show that our method achieves a 74.3 F1 score in few-shot scenarios, demonstrating its potential for few-shot WSD.