Modern intelligent tutoring systems, exploiting technological advances in augmented and virtual reality and in large language models, offer fluent natural-language interaction between a virtual character and a student, complemented by a multimodal interface that includes recognition and synthesis of affects and intentions expressed in speech tonality, facial expression, gaze, and body language. Focused on consumer satisfaction, however, developers of such systems often overlook educational needs. Here we present a Virtual Tutor that uses these technologies to help students self-regulate their learning. This is made possible by integrating self-regulated learning theory into an emotional cognitive architecture. The Virtual Tutor uses its emotional intelligence to model, guide, and motivate students to engage in self-regulation, in parallel with performing its basic tutoring functions. Results of our preliminary study provide initial evidence in support of the Virtual Tutor. This work was supported by the Russian Science Foundation Grant #22-11-00213, https://rscf.ru/en/project/22-11-00213/.