Brain-computer interface (BCI) technology has made significant progress in intelligent human-computer interaction. Within this field, electroencephalography (EEG)-based emotion recognition, an important research direction in affective brain-computer interaction, has received widespread attention. However, most previous studies have been limited to extracting features from global brain networks and local brain regions in the EEG spatial domain, ignoring channel-level dynamic features of the EEG signal. To address this limitation, we propose a Channel-Adaptive Graph Convolutional Network with Temporal Encoder (CAG-TEN). In CAG-TEN, the channel-adaptive graph convolutional module assigns a unique parameter space to each channel, capturing channel-level dynamic features, while the temporal encoder module, inspired by the encoder concept, models long-term temporal dependencies in EEG sequences. Rigorous comparative experiments against several representative baseline models on the SEED dataset show that CAG-TEN achieves the best performance.
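To make the channel-adaptive idea concrete, the following is a minimal NumPy sketch of a graph convolution in which every EEG channel owns its own weight matrix, rather than all channels sharing one projection. This is an illustrative assumption based only on the abstract's description ("assigns a unique parameter space to each channel"); the function name, shapes, and the identity adjacency used in the demo are hypothetical and are not the authors' implementation.

```python
import numpy as np

def channel_adaptive_gconv(X, A, W):
    """Sketch of a channel-adaptive graph convolution.

    X : (C, F)       per-channel EEG features (C channels, F features)
    A : (C, C)       adjacency matrix over channels
    W : (C, F, Fout) one weight matrix PER channel, i.e. each channel
                     gets its own parameter space (the key difference
                     from a standard GCN layer, which shares one W)
    """
    H = A @ X  # aggregate features from neighboring channels
    # Project each channel with its own weights, then apply ReLU.
    out = np.stack([H[c] @ W[c] for c in range(X.shape[0])])
    return np.maximum(out, 0.0)

# Toy usage: 4 channels, 3 input features, 2 output features.
rng = np.random.default_rng(0)
X = rng.standard_normal((4, 3))
A = np.eye(4)  # placeholder adjacency for the demo
W = rng.standard_normal((4, 3, 2))
Y = channel_adaptive_gconv(X, A, W)
print(Y.shape)  # (4, 2): one 2-d feature vector per channel
```

In a full model, the output of such a layer would then be fed, per time step, into a temporal encoder to capture long-range dependencies across the EEG sequence.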