I meant that the only mechanism for verifying LLM output makes the LLM pointless, so if they’re using LLMs they won’t really be verifying it. If your use case has a risk appetite for being wrong 10% of the time, then go ahead. I’d rather get my news from a source that aims for 0%, even if it makes an occasional miss.