An LLM is trained on large amounts of text and predicts the next word (token) from the input (prompt) and the previous turns in the session. Older RNN-based language models fed each output back into the network to generate the next token, but they retain far less context ("memory") than an LLM.
Given an input, the LLM computes a probability distribution over its entire vocabulary for the next token. Repeating this step token by token extends the prediction to the sentence level, producing coherent, contextually appropriate output.
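A minimal sketch of that last step: the model's final layer assigns one raw score (logit) per vocabulary word, and a softmax turns those scores into a probability distribution over the next token. The vocabulary and logit values below are made-up illustrations, not output from any real model.

```python
import math

# Hypothetical vocabulary and raw scores ("logits") the model might
# produce for the next word after some prompt.
vocab = ["cat", "sat", "mat", "ran"]
logits = [2.0, 0.5, 1.0, -1.0]

def softmax(scores):
    # Subtract the max for numerical stability before exponentiating.
    exps = [math.exp(s - max(scores)) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

probs = softmax(logits)                      # probabilities sum to 1
next_word = vocab[probs.index(max(probs))]   # greedy decoding: pick the argmax
print(next_word)
```

In practice the distribution is usually sampled (with temperature, top-k, or top-p) rather than always taking the argmax, which is what makes generated text varied rather than deterministic.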