Human-level natural language understanding (NLU) of open text is far beyond the current state of the art. In practice, if deep NLU is attempted at all, it is within narrow domains. We report on a program of R&D on cognitively modeled NLU that works toward depth and breadth of processing simultaneously. The current contribution describes lessons learned – scientifically and methodologically – from an exercise in applying deep NLU to open-domain texts. An overarching lesson was that although learning to compute sentence-level semantics seems like a natural step toward computing full, context-sensitive, semantic and pragmatic meaning, corpus evidence underscores just how infrequently semantics can be cleanly separated from pragmatics. We conclude that a more comprehensive methodology for automatic example selection and result validation is a prerequisite for success in developing NLU applications that operate on open text.