This paper proposes a conditional version of the Divergence Triangle [1] as a framework for jointly training a generator model, an energy-based model, and an inference model with label information, where the learning of these three models is integrated in a unified probabilistic formulation. Experiments demonstrate that, within this single framework, we can (1) generate realistic images conditioned on fine-grained category labels, (2) obtain meaningful low-dimensional representations of observed data, and (3) classify the labels of unobserved data. Additionally, we discuss a possible extension of the Conditional Divergence Triangle model at the end of this paper as future work.
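To make the unified formulation concrete, the conditional objective can be sketched by extending the unconditional Divergence Triangle of [1]; the notation below is an assumption for illustration, with $y$ the label, $z$ the latent code, $q_\phi(z \mid x, y)$ the inference model, $p_\theta(x \mid z, y)$ the generator, and $\pi_\alpha(x \mid y)$ the energy-based model:

```latex
\begin{aligned}
Q(x, z \mid y) &= p_{\mathrm{data}}(x \mid y)\, q_\phi(z \mid x, y), \\
P(x, z \mid y) &= p(z)\, p_\theta(x \mid z, y), \\
\Pi(x, z \mid y) &= \pi_\alpha(x \mid y)\, q_\phi(z \mid x, y), \\
\min_{\theta, \phi}\, \max_{\alpha}\ \Delta(\theta, \alpha, \phi)
  &= \mathrm{KL}(Q \,\|\, P) + \mathrm{KL}(P \,\|\, \Pi) - \mathrm{KL}(Q \,\|\, \Pi),
\end{aligned}
```

where each KL divergence is taken in expectation over labels $y$. Under this reading, maximizing over $\alpha$ trains the energy-based model contrastively on real versus generated samples, while minimizing over $\theta$ and $\phi$ recovers VAE-style joint training of the generator and inference model, all conditioned on the label.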