Rather than relying on the standard, narrowly focused datasets used to teach robots new tasks, the method goes big, mimicking the massive troves of information used to train large language models (LLMs).