• MediumSizedSnack@piefed.social · 11 hours ago

    As far as policy guidelines go, I think they’re quite reasonable. I read the entire policy and the staff comments on the page. It lays out expectations: humans must write the articles and verify everything an AI output claims against the source, such as when the AI is used to transcribe interviews or summarize documents. It also states that not following the policy is a violation, putting the authors or editors on the hook for those failures, likely through Ars or Condé Nast’s disciplinary procedures. Those would be the reactive controls: accountability for failures, which may yet happen.
    Per the document and the subsequent comments, they confirm that editors and reporters both have to verify that the reporting is accurate. That is a reasonable set of proactive controls. If a pattern of failures emerges, either in their number or in a lack of accountability, then it would be fair to assume their output is AI slop, but I think it’s too early to claim that now. You’re under no obligation to assume they’ll be successful or that they’re sincere, but it is a clearly written, reader-facing policy.

    • misk@piefed.socialOP · 10 hours ago

      They’re reasonable in terms of covering their butts. They do not ensure in any way, shape or form that they are actually followed. If they were followed (all source material read and understood by a human), then an LLM wouldn’t add anything of value, because a human would already have summarised it. This is a lot of text about their dreams and aspirations, not an actual policy.

      • HarkMahlberg@kbin.earth · 7 hours ago

        They do not ensure in any way, shape or form that they are actually followed.

        There is no mechanism in all of human nature that automatically and definitively binds us to our word. Whether we’re writing code, laws, or wedding vows, it is a choice at all times to adhere to them, and we are all prone to error.

        All Ars can do is write the rules and choose to enforce them. Whether they choose to or not is a trust exercise, and that trust is only built on a long history of consistent good choices.

        • misk@piefed.socialOP · 7 hours ago (edited)

          I meant that the only mechanism for verifying LLM output (a human reading all the source material) makes the LLM pointless, so if they’re using LLMs they won’t really be verifying it. If your use case has a risk appetite for being wrong 10% of the time, then go ahead. I’d rather get my news from a source that aims for 0%, even if they make the occasional miss.