This study explores whether Large Language Models (LLMs) can mimic human cognitive processes, particularly pragmatic reasoning in language processing. Focusing on how humans tend to offer semantically similar alternatives in response to negated statements, the research examines whether LLMs, both base and fine-tuned, exhibit this behavior. The experiment involves a cloze task in which the models provide completions to negative sentences. Findings reveal that fine-tuned chat models closely resemble human behavior, while base completion models align less well with human responses. This indicates that the statistics of linguistic input alone may be insufficient for LLMs to develop behaviors consistent with pragmatic reasoning; instead, conversational fine-tuning appears to enable these models to adopt behaviors akin to human pragmatic reasoning. This research not only sheds light on LLMs' capabilities but also prompts further inquiry into language acquisition, especially the role of conversational interaction in developing pragmatic reasoning.