Cabinet Minister Judith Collins wants the government to expand the use of artificial intelligence (AI), starting with the health and education sectors where it could be used to assess mammogram results and provide AI tutors for children.

“It doesn’t do the work for them. It says some things like ‘go back, rethink that one, look at that number,’ those sorts of things. What an exciting way to do your homework if you’re a child.”

Deploying AI in education and health would be considered high-risk use under new legislation regulating AI passed by the European Union.

Using AI in those settings in EU countries requires high levels of transparency, accuracy and human oversight.

But New Zealand has no specific AI regulation and Collins is keen to get productivity gains from extending its use across government, including using it to process Official Information Act requests.

An OIA request by RNZ for a government Cabinet paper on AI was turned down (by a human) on the grounds that the policy is under live consideration.

  • RegalPotoo@lemmy.world · 4 days ago

    I don’t think it’s all grift - there are absolutely places where LLMs are the best tech out there, but it’s probably not going to take everyone’s jobs any time soon (at least not on merit - I’m sure there are plenty of places that’d accept a 50% drop in quality for a 90% drop in price)

    I’ve seen a pretty compelling case study of a company using an LLM as a “tier zero” support tech. Instead of getting a tier 1 tech to classify a case, decide if they had the tools to address the issue or if it needed to go to tier 2, work out if it was an instance of a known issue etc. before they actually started working on the problem, they gave the LLM some examples and got it to do the triage so the humans could do the more complicated stuff. It does about as well as a human, for a fraction of the price.
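The triage flow described in that comment could be sketched roughly like this. This is a minimal, hypothetical sketch: the comment names no specific tools, so `call_llm` is a keyword-based stub standing in for a real few-shot model call, and the categories and known-issue list are invented for illustration.

```python
from dataclasses import dataclass


@dataclass
class Triage:
    category: str      # label assigned by the model, e.g. "password-reset"
    known_issue: bool  # whether the category matches a known-issue entry
    escalate: bool     # route to tier 2 instead of tier 1


# Illustrative known-issue list; a real system would maintain this elsewhere.
KNOWN_ISSUES = {"password-reset", "vpn-drop"}


def call_llm(ticket_text: str) -> str:
    """Stand-in for an LLM classification call.

    A production version would prompt a model with a handful of worked
    examples (the "give the LLM some examples" step) and parse its answer.
    Here a trivial keyword match plays that role so the sketch is runnable.
    """
    text = ticket_text.lower()
    if "password" in text:
        return "password-reset"
    if "vpn" in text:
        return "vpn-drop"
    return "other"


def triage(ticket_text: str) -> Triage:
    """Classify a ticket and decide whether it needs human escalation."""
    category = call_llm(ticket_text)
    known = category in KNOWN_ISSUES
    # Known issues stay at tier 1 with a scripted fix; anything the model
    # can't confidently match goes straight to a human at tier 2.
    return Triage(category=category, known_issue=known, escalate=not known)
```

The design point is that the LLM only routes: the scripted fixes and the escalation path stay deterministic, which limits the blast radius of a bad classification.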

    • TagMeInSkipIGotThis@lemmy.nz · 3 days ago

      I’d have to see that in action before I pass judgement, but given LLMs’ predilection for hallucination and the vagaries of how humans report tech faults, I’d be surprised if it was significantly more accurate or effective than a human. After all, if it’s working out whether there’s a known issue, then essentially it’s not much beyond a script at that point - and in that case, do you want to trade the unpredictability of what an LLM might recommend against something (human or otherwise) that will follow the script?

      Even if an LLM were an effective level 0 helpdesk, it would still need to overcome the user’s cultural expectation (in many places) that they can pick up the phone and speak to somebody about their problem. Having done that job a long, long time ago, diagnosing tech problems for people who don’t understand tech can be a fairly complex process. You have to work through their lack of understanding and lack of technical language. You sometimes have to pick up on cues in their hesitations, frustrated tone of voice, etc.

      I’m sure an LLM could synthesize that experience 80% of the time, but depending on the tech you’re dealing with, you could be missing some pretty major stuff in the other 20% - especially if the LLM gives bad instructions, or closes a case without escalating it, etc. So you then need to pay someone to monitor the LLM and watch what it’s doing - at which point you’ve hired your level 1 tech again anyway.