They’re reasonable in terms of covering their butts. They do not ensure in any way, shape or form that they are actually followed. If they were followed (all source material read and understood by a human), then an LLM wouldn’t add anything of value, because a human would already have summarised it. This is a lot of text about their dreams and aspirations, not an actual policy.
There is no mechanism in all of human nature that automatically and definitively binds us to our word. Whether we’re writing code, or laws, or wedding vows, it is a choice at all times to adhere to them, and we are prone to error.
All Ars can do is write the rules and choose to enforce them. Whether they choose to or not is a trust exercise, and that trust is only built on a long history of consistent good choices.
I meant that the only mechanism for verifying LLM output makes the LLM pointless; hence, if they’re using LLMs, they won’t really be verifying it. If your use case has a risk appetite for being wrong 10% of the time, then go ahead. I’d rather get my news from a source that aims for 0%, even if they make the occasional miss.