Large language models (LLMs) such as OpenAI's GPT-4 (OpenAI et al., 2023) and Google's PaLM (Chowdhery et al., 2022) generate text responses to user-provided text prompts. In contrast to work that evaluates the extent to which model-generated text coheres with linguistic rules (i.e., formal competence) (Chomsky et al., 2023; Piantadosi, 2023), the present symposium discusses the work of cognitive scientists aimed at assessing the extent to which, and the manner in which, LLMs exhibit effective understanding, reasoning, and decision making — capacities associated with human higher cognition (i.e., functional competence) (Binz & Schulz, 2023; Mahowald et al., 2023; Webb et al., 2023). Given both their expertise and their interest in clarifying the nature of human thinking, cognitive scientists are uniquely positioned both to carefully evaluate LLMs' capacity for thought (Bhatia, 2023; Han et al., 2024; Mitchell, 2023) and to benefit from LLMs as methodological and theoretical tools. This symposium will thus be of interest not only to cognitive scientists concerned with machine intelligence, but also to those looking to integrate advances in artificial intelligence into their study of human intelligence.