Humans tend to privilege an intermediate level of categorization, known as the basic level, when categorizing objects that exist in a conceptual hierarchy (e.g., calling a Labrador a 'dog' rather than a 'Labrador' or an 'animal'). Domain experts demonstrate a downward shift in their object categorization behaviour, recruiting subordinate levels of a conceptual hierarchy as readily as conventionally basic categories (Tanaka & Philibert, 2022; Tanaka & Taylor, 1991). Do multimodal large language models show similar behavioural changes when prompted to behave in an expert-like way? We test whether GPT-4 with Vision (GPT-4V; OpenAI, 2023a) and LLaVA (Liu, Li, Wu, & Lee, 2023; Liu, Li, Li, & Lee, 2023) demonstrate downward shifts in an object naming task, eliciting expert-like personas by altering each model's system prompt. We find evidence of downward shifts in GPT-4V when expert system prompts are used, suggesting that expert-like human behaviour can be elicited from GPT-4V through prompting, but we find no evidence of a downward shift in LLaVA. We also find, in some cases, an unpredicted upward shift in areas outside the prompted expertise. These findings suggest that GPT-4V is not a novice by default: instead, it behaves with a median level of expertise, while further expertise can be primed, or existing expertise forgotten, through textual prompts. These results open the door to using GPT-4V and similar models as tools for studying differences between the behaviour of experts and novices, and even for comparing contrasting levels of expertise within the same large language model.
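
As a concrete illustration of the persona manipulation described above, the following is a minimal sketch, assuming the OpenAI Python SDK; the model name, system prompt wording, and image URL are illustrative assumptions, not the paper's actual materials. It shows how an expert persona might be set via the system prompt before posing an object naming query to GPT-4V:

```python
# Sketch of eliciting an expert persona from GPT-4V via the system prompt.
# Assumptions (not from the paper): the model identifier, the exact prompt
# wording, and the image URL. Requires OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

# Hypothetical expert persona; a novice or default persona would swap this text.
EXPERT_SYSTEM_PROMPT = (
    "You are an expert ornithologist with decades of birdwatching experience."
)

response = client.chat.completions.create(
    model="gpt-4-vision-preview",  # assumed GPT-4V endpoint name
    messages=[
        {"role": "system", "content": EXPERT_SYSTEM_PROMPT},
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Name the object in this image in a single word."},
                {"type": "image_url", "image_url": {"url": "https://example.com/sparrow.jpg"}},
            ],
        },
    ],
)

# Under a downward shift, the expert persona would favour a subordinate-level
# label (e.g. "sparrow") over the basic-level "bird".
print(response.choices[0].message.content)
```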