• v_pp@lemmygrad.ml

    I’m highly skeptical of this at first glance. Replacing self-attention with gated recurrent units seems like a decisive step back in natural language processing capabilities. The advance that gave rise to LLMs in the first place was the realization that building networks out of stacked self-attention blocks, rather than recurrent units like GRUs or LSTMs, was extremely effective.
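    For anyone who hasn’t poked at both: here’s a rough sketch of the structural difference (PyTorch, toy dimensions I made up, not anything from the paper). A recurrent unit has to thread a hidden state through the sequence one step at a time, while self-attention relates every token to every other token in a single parallel pass, which is exactly what made it so effective on GPUs.

    ```python
    # Toy contrast between a recurrent unit and self-attention.
    # Dimensions are arbitrary; this is illustrative, not the paper's model.
    import torch
    import torch.nn as nn

    seq_len, batch, d_model = 128, 1, 64
    x = torch.randn(seq_len, batch, d_model)  # a toy token sequence

    # Recurrent route (GRU): the hidden state is threaded through time,
    # so step t cannot be computed before step t-1 finishes.
    gru = nn.GRU(input_size=d_model, hidden_size=d_model)
    gru_out, _ = gru(x)  # O(seq_len) sequential steps

    # Attention route: every token attends to every other token at once;
    # fully parallel on GPUs, but O(seq_len^2) compute and memory.
    attn = nn.MultiheadAttention(embed_dim=d_model, num_heads=4)
    attn_out, _ = attn(x, x, x)
    ```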

    In short, they are proposing an older type of model, one that is generally outclassed by the attention-based transformers powering all the LLMs we see today. I doubt it will achieve results nearly as good as existing LLMs. I foresee this type of research being used to deflect criticism of the ungodly amounts of energy LLMs use: “See, people are working on making them way more efficient! Any day now…” Meanwhile, they will never come to fruition.

    • ☆ Yσɠƚԋσʂ ☆@lemmygrad.mlOP

      Thing is, each approach has its own advantages, and in certain cases a simpler approach may make more sense. At the end of the day, people will benchmark this and we’ll see how it compares. The initial benchmarks suggest the approach works.

      • v_pp@lemmygrad.ml

        AI/ML research has long been notorious for choosing bullshit benchmarks that make a new approach look good, and then nobody ever uses it because it’s not actually that good in practice.

        It’s totally possible that there will be legitimate NLP use cases where this approach makes sense, but that is almost entirely separate from the current LLM craze. Also, transformer-based LLMs pretty much entirely supplanted recurrent networks as early as 2018 in basically every NLP task. So even if the semiconductor industry massively reoriented toward chips that support “MatMul-free” models like this one just to realize the energy reduction, the model outputs would be even more garbage than they already are.
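        For context on where the claimed savings come from, here’s my (hedged) understanding of the “MatMul-free” trick, with names I invented, not code from the paper: if weights are constrained to {-1, 0, +1}, a matrix multiply degenerates into additions and subtractions, which is far cheaper in silicon. A NumPy sketch:

        ```python
        # Hedged sketch of a ternary "matmul": weights in {-1, 0, +1} turn
        # w @ x into pure additions/subtractions. Names are mine, not the paper's.
        import numpy as np

        def ternary_linear(x, w_ternary):
            """x: (d_in,) activations; w_ternary: (d_out, d_in), entries in {-1, 0, 1}.
            Computes the same result as w_ternary @ x, with no multiplications."""
            out = np.zeros(w_ternary.shape[0])
            for i, row in enumerate(w_ternary):
                out[i] = x[row == 1].sum() - x[row == -1].sum()  # adds/subtracts only
            return out

        rng = np.random.default_rng(0)
        x = rng.standard_normal(16)
        w = rng.integers(-1, 2, size=(8, 16))  # random ternary weight matrix
        assert np.allclose(ternary_linear(x, w), w @ x)
        ```

        The energy argument only pays off if hardware actually exploits that structure, which is exactly the reorientation of the semiconductor industry being talked about here.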

        • ☆ Yσɠƚԋσʂ ☆@lemmygrad.mlOP

          Sure, and that’s why I said other people will benchmark it at some point and we’ll know definitively. Based on my reading, the idea here is to combine both approaches as an optimization technique. Using GPT as a hammer for every problem was the hype phase; now people are starting to realize that other approaches have value too, and combining different approaches will likely produce interesting results.
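          To make “combine both approaches” concrete, here’s one purely hypothetical shape it could take (PyTorch, my own layer mix, not anything from the paper): mostly cheap recurrent layers, with attention interleaved only every few layers.

          ```python
          # Illustrative hybrid stack: recurrent mixing most of the time,
          # self-attention every fourth layer. Layer mix is invented.
          import torch
          import torch.nn as nn

          class HybridBlock(nn.Module):
              def __init__(self, d_model=64, use_attention=False):
                  super().__init__()
                  self.use_attention = use_attention
                  if use_attention:
                      self.mixer = nn.MultiheadAttention(d_model, num_heads=4)
                  else:
                      self.mixer = nn.GRU(d_model, d_model)
                  self.norm = nn.LayerNorm(d_model)

              def forward(self, x):  # x: (seq, batch, d_model)
                  if self.use_attention:
                      mixed, _ = self.mixer(x, x, x)
                  else:
                      mixed, _ = self.mixer(x)
                  return self.norm(x + mixed)  # residual connection

          # Attention only in every fourth layer; recurrence elsewhere.
          layers = nn.Sequential(*[HybridBlock(use_attention=(i % 4 == 3))
                                   for i in range(8)])
          out = layers(torch.randn(16, 1, 64))
          ```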