LLMs Need "Mark as Answer"
Introduction
Today’s LLMs have already ingested essentially all of the publicly available information they can for training. To improve further, they will need to seek out additional sources of data. One obvious source is the countless interactions they have with their users, and while privacy concerns are certainly relevant here, for the purposes of this article I want to focus on a different issue: quality signals. How is an LLM to know whether a given exchange actually led to a solution (by whatever definition the user has in mind)? Without that knowledge, there is no way for LLMs to give more weight to answers that ultimately proved fruitful than to answers that were useless, where the user simply gave up and ended the session.
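To make the idea concrete, here is a minimal sketch of how an explicit "mark as answer" signal could translate into per-example training weights, so that exchanges a user confirmed as solved count for more than sessions that were silently abandoned. All names and weight values here are hypothetical illustrations, not any vendor's actual pipeline:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class Exchange:
    """One user/assistant exchange plus an optional end-of-session signal."""
    prompt: str
    response: str
    # True = user clicked "mark as answer", False = explicitly rejected,
    # None = session ended with no signal (the common, ambiguous case).
    marked_as_answer: Optional[bool] = None


def sample_weight(ex: Exchange,
                  accepted: float = 2.0,
                  rejected: float = 0.0,
                  unknown: float = 0.5) -> float:
    """Map the (possibly missing) quality signal to a training weight.

    The specific values are illustrative; the point is that confirmed
    solutions outweigh abandoned sessions, which carry no signal at all.
    """
    if ex.marked_as_answer is True:
        return accepted
    if ex.marked_as_answer is False:
        return rejected
    return unknown


if __name__ == "__main__":
    exchanges = [
        Exchange("How do I parse JSON in Python?", "Use json.loads(...).", True),
        Exchange("Why is my build failing?", "Try clearing the cache.", None),
        Exchange("Fix this regex for me.", "Here is a pattern...", False),
    ]
    for ex in exchanges:
        print(f"{ex.prompt!r}: weight={sample_weight(ex)}")
```

In an actual fine-tuning loop, weights like these would multiply each example's loss term; the sketch only shows where the missing signal would plug in if users could supply it.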