This paper examined how labelling an argument as AI-generated rather than human-authored affects its persuasiveness, and how factors such as portrayals of expertise and the type of argument presented (narrative versus statistical) moderate this effect. Three domains were explored: health, finance, and politics. We show that participants rated arguments with AI source labels, both non-expert and expert, as less persuasive than the same arguments with the corresponding human-authored source labels. Moreover, although statistical arguments were found to be more persuasive than narrative arguments, argument type did not alter the impact of an AI source label: the only significant interaction effect appeared in the politics domain for the expert AI source. An exploratory analysis of the role of attitude towards AI in the impact of source labels found no significant interaction effect across the three domains.