• misk@piefed.socialOP · 9 hours ago

    Reporters may use AI tools vetted and approved for our workflow to assist with research, including navigating large volumes of material, summarizing background documents, and searching datasets. Even then, AI output is never treated as an authoritative source. Everything must be verified.

    When we attribute a statement, a position, or a quote to a named source, that material comes from direct engagement with interviews, transcripts, published statements, or documents reviewed by the reporter. AI tools must not be used to generate, extract, or summarize material that is then attributed to a named source, whether as a direct quote, a paraphrase, or a characterization of someone’s views.

    How do you enforce “Everything must be verified” specifically? I’m pretty sure similar rules were in place when they published a hallucinated interview.

    tl;dr - don’t quote or link to Arse Technica because it could be made up entirely

    • XLE@piefed.social · 3 hours ago

      So basically their journalists are cleared to use a tool that gives off radioactive levels of mutated LinkedIn-speak (not X but Y!), and in the best-case scenario they think the proximity to these tools won’t leach into the style of their own writing?

      And that’s at the very best of circumstances. Just a little toxicity in the article contents.

      They already lost an editor who used fake AI-generated quotes, before this new explicit allowance.

    • MediumSizedSnack@piefed.social · 6 hours ago

      As far as policy guidelines go, I think they’re quite reasonable. I read the entire policy and the staff comments on the page. They lay out expectations: humans must write the articles and verify every claim an AI output attributes to a source, such as transcribed interviews or summarized documents. They state that not following the policy is a violation, which puts the authors or editors on the hook for those failures, likely tied to Ars or Condé Nast’s disciplinary procedures. Those are the reactive controls: accountability for failures, of which there may yet be some.

      Per the document and the subsequent comments, they confirm that editors and reporters both have to verify that reporting is accurate. That is a reasonable amount of proactive control. If a pattern of failures emerges, either in their number or in a lack of accountability, then it would be fair to assume their output is AI slop, but I think it’s too early to claim that now. You’re under no obligation to assume they’ll be successful or that they are sincere, but it’s a clearly written, reader-facing policy.

      • misk@piefed.socialOP · 6 hours ago

        They’re reasonable in terms of covering their butts. They do not ensure in any way, shape or form that they are actually followed. If they were followed (all source material read and understood by a human), then an LLM wouldn’t add anything of value, because a human would already have summarised it. This is a lot of text about their dreams and aspirations, not an actual policy.

        • HarkMahlberg@kbin.earth · 3 hours ago

          They do not ensure in any way, shape or form that they are actually followed.

          There is no mechanism in all of human nature that automatically and definitively binds us to our word. Whether we’re writing code, or laws, or wedding vows, it is a choice at all times to adhere to them - and they are prone to error.

          All Ars can do is write the rules and choose to enforce them. Whether they choose to or not is a trust exercise, and that trust is only built on a long history of consistent good choices.

          • misk@piefed.socialOP · 2 hours ago (edited)

            I meant that the only mechanism for verifying LLM output (a human reading all the source material) makes the LLM pointless, hence if they’re using LLMs, they won’t really be verifying the output. If your use case has a risk appetite for being wrong 10% of the time, then go ahead. I’d rather get my news from a source that aims for 0%, even if they occasionally miss.

  • CerebralHawks@lemmy.dbzer0.com · 6 hours ago

    I’m still good with Ars (though I have used the term “Arse Technica” in the past; I mean, it’s right there), and I’ve always respected their journalism.

    IMO, using AI in the way they’re talking about is like using Wikipedia. Whenever I look something up on DDG and it uses Duck.ai to summarise search results (the same thing Google gets shit on for doing!), I check the sources. Sometimes they’re right, sometimes they’re not. I approach Wikipedia the same way: it’s a good collection of knowledge, but Wikipedia is subject to vandalism, so I check behind it. Wikipedia is more reliable than the AIs, but sometimes it gets things wrong, too.

    You have to remember and realise, Ars Technica is a tech site. In fact, the name is Latin for “the art of technology” (I think — something like that). If they don’t embrace AI to some extent, how can they reasonably call themselves a tech site? They have to embrace it. They write about it, they cover it, they use it, but the line has to be drawn somewhere, and they still fall on the side of responsible AI use. If you draw the line at any AI use, you’re outside their target audience. You might also be outside the target audience of a comm called “technology.” There are comms for retro tech that would be glad to have you.

    That is not to say that AI is the only way forward. It’s just where technology is now. I’m with those hoping it’s a fad that blows over and a bubble that bursts. As a technologist, there are plenty of things about AI that I don’t like, but I don’t avoid it entirely.

    • XLE@piefed.social · 2 hours ago

      In general, search-result summaries are killing a whole ecosystem and making people who do research complicit in that, so yes, DDG should not get a free pass just because they’re Not Google.

    • misk@piefed.socialOP · 6 hours ago

      You have to remember and realise, Ars Technica is a tech site. In fact, the name is Latin for “the art of technology” (I think — something like that). If they don’t embrace AI to some extent, how can they reasonably call themselves a tech site? They have to embrace it.

      Let’s say there’s a magazine about constructing buildings. Do they have to embrace using asbestos themselves in order to report on it?