

I said literally this in my reply, and the lemmy hivemind downvoted me. Beware of sharing information here I guess.
That’s not what I said. It’s absolutely dystopian how Musk is trying to tailor his own reality.
What I did say (and I’ve been doing AI research since the AlexNet days…) is that LLMs aren’t old-school ML systems, and we’re at the point where simply scaling up to insane levels has yielded results that no one expected, but it was the lowest-hanging fruit at the time. Few-shot learning -> generalization to novel spaces is very hard, so the easiest method was to take what already worked and make it bigger (a la ResNet back in the day).
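
For anyone unfamiliar with the jargon: “few-shot” just means the model picks up a task from a handful of inline examples at inference time, with no retraining. A minimal sketch of what that looks like (the task and examples are made up for illustration; the resulting string would be sent to whatever LLM completion endpoint you use):

```python
# Minimal sketch of few-shot prompting: the model is never trained on
# this task; it infers the pattern from a few labeled examples placed
# directly in the prompt. Task and examples are invented for illustration.

def build_few_shot_prompt(examples, query):
    """Concatenate labeled examples, then the unlabeled query."""
    lines = ["Classify the sentiment of each review as positive or negative.", ""]
    for text, label in examples:
        lines.append(f"Review: {text}")
        lines.append(f"Sentiment: {label}")
        lines.append("")
    lines.append(f"Review: {query}")
    lines.append("Sentiment:")  # the model completes from here
    return "\n".join(lines)

examples = [
    ("Absolutely loved it, would buy again.", "positive"),
    ("Broke after two days. Waste of money.", "negative"),
]
prompt = build_few_shot_prompt(examples, "Shipping was slow but the product is great.")
print(prompt)
```

The surprise of the scaling era was that this works at all: nothing task-specific is trained, yet large enough models complete the pattern correctly.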
Lemmy is almost as bad as Reddit when it comes to hiveminds.
Generally, yes. However, there have been some incredible (borderline “magic”) emergent generalization capabilities that I don’t think anyone was expecting.
Modern AI is more than just “pattern matching” at this point. Yes, at the lowest level that’s what it’s doing, but by that standard you could also say human brains are just pattern matching at the same low level.
That’s a massive claim, so it’ll require some substantial evidence. However, there are many statistical irregularities in swing states and other anomalies that haven’t occurred for 100+ years.
There are already several SMART and ETA lawsuits in a few states to demand recounts/audits.
little spider bros help everybody out
It’s clear you don’t really understand the wider context or how historically hard these tasks have been. I’ve been doing this for a decade, and the fact that these foundation models can be pretrained on unrelated data and then jump that generalization gap so easily (within reason) is amazing. You only see the corporate end results in the news, but this technology is used in every corner of science and of life in general (source: I do this for many important applications).
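
To make that concrete, here’s one publicly available example of the kind of gap-jumping I mean: a model fine-tuned for natural-language inference (entailment) reused, unmodified, to classify text into arbitrary labels it never saw during training. This is just a minimal sketch assuming the Hugging Face `transformers` library; the model choice and input sentence are illustrative picks, not the applications I actually work on:

```python
# A pretrained NLI (entailment) model repurposed for zero-shot
# classification: it scores how well each candidate label "entails"
# the input text, even though it was never trained on these labels.
from transformers import pipeline

classifier = pipeline(
    "zero-shot-classification",
    model="facebook/bart-large-mnli",
)

result = classifier(
    "The spectrometer drifted out of calibration overnight.",
    candidate_labels=["lab equipment", "astronomy", "cooking"],
)
print(result["labels"][0])  # best-matching label for the input text
```

Ten years ago, getting a model to handle labels it had never been trained on meant collecting a new dataset and retraining from scratch. Now it’s a three-line reuse of something pretrained for a completely different task.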