Unrepentant Techno-Hermit, forever trying to make less do more.

  • 0 Posts
  • 14 Comments
Joined 2 months ago
Cake day: March 8th, 2025

  • Almost certainly not, no. Evolution may work faster than once thought, but not that fast. The problem is that societal - and in particular technological - development is now vastly outstripping our ability to adapt. It’s not that people are getting dumber per se - it’s that they’re having to deal with vastly more stuff. All. The. Time. For example, consider the world as it was a scant century ago - virtually nothing in evolutionary terms. A person did not have to cope with what was going on on the other side of the planet, and probably wouldn’t even know about it for months, if ever. Now? If an earthquake hits Paraguay, you’ll be aware of it within minutes.

    And you’ll be expected to care.

    Edit: Apologies. I wrote this comment as you were editing yours. It’s quite different now, but you know what you wrote previously, so I trust you’ll be able to interpret my response correctly.


  • Thank you. I appreciate you saying so.

    The thing about LLMs in particular is that - when used like this - they constitute one such grave positive feedback loop. I have no problem in principle with machine learning. It can be a great tool for illuminating otherwise completely opaque relationships in large scientific datasets, for example. But an LLM - at bottom a binary space partitioning of a hyper-dimensional phase space - is just a statistical knowledge model. It does not have opinions. All it can do is codify what appears to be the consensus of the input it’s given. Even assuming - which may well be far too generous - that the input is truly unbiased, at best it’ll tell you what a bunch of morons think is the truth. At worst, it’ll just tell you what you expect to hear. It’s what everybody else is already saying, after all.
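
    As a toy illustration of that “codifies the consensus” point - a hypothetical sketch, nothing like a real LLM’s internals - picture a “model” that can only complete a prompt with the most frequent continuation found in its corpus:

    ```python
    from collections import Counter

    # Toy corpus standing in for training data. The "model" below has no
    # opinions; it can only reproduce whatever this input agrees on.
    corpus = [
        "the earth is round",
        "the earth is round",
        "the earth is flat",
    ]

    def consensus_completion(prompt: str, corpus: list[str]) -> str:
        """Return the most frequent continuation of `prompt` in the corpus."""
        continuations = Counter(
            line[len(prompt):].strip()
            for line in corpus
            if line.startswith(prompt)
        )
        # The "answer" is simply a majority vote over the input.
        return continuations.most_common(1)[0][0]

    print(consensus_completion("the earth is", corpus))  # -> round
    ```

    Feed it a corpus where the nuts outnumber the sane and it will faithfully report the nuttery as the consensus truth - the feedback loop above in miniature.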

    And when what people think is the truth and what they want to hear are both nuts, this kind of LLM echo chamber suddenly becomes unfathomably dangerous.



  • To paraquote H. L. Mencken: For every problem, there is a solution that’s cheap, fast, easy to implement – and wrong.

    Silver bullets and magic wands don’t really exist, I’m afraid. There are ample reasons why DBAs are well-paid people.

    There are basically three options: increase the hardware capabilities so they can handle the amount of data you want to deal with, decrease the amount of data so that the hardware you’ve got can handle it at the level of performance you want, or… live with the status quo.

    If throwing more hardware at the issue was an option, I presume you would just have done so. As for how to viably decrease the amount of data in your active set - well, that’s hard to say without knowing the data and what you want to do with it. Is it a historical dataset or a time series? If so, do you need to integrate over the entire series back to the dawn of time, or can you narrow the focus to a recent time window and shunt the old data off to cold storage? Is all the data per sample required at all times, or can seldom-needed details be split off into separate detail tables that can at least be stored on separate physical drives?
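
    Purely as a hypothetical sketch of those last two ideas - every name in the schema (samples, taken_at, raw_payload) is invented for illustration, and a real migration would look different on your engine - a time-window archive plus a vertical split might look like this in SQLite:

    ```python
    import sqlite3

    # Self-contained toy schema; every name here is made up for illustration.
    conn = sqlite3.connect(":memory:")
    cur = conn.cursor()
    cur.execute("""
        CREATE TABLE samples (
            id INTEGER PRIMARY KEY,
            taken_at TEXT,      -- ISO-8601 timestamp of the sample
            value REAL,         -- the hot, frequently queried measurement
            raw_payload BLOB    -- bulky detail that is only seldom needed
        )
    """)
    cur.execute("INSERT INTO samples (taken_at, value, raw_payload) "
                "VALUES (date('now', '-5 years'), 1.0, ?)", (b"ancient",))
    cur.execute("INSERT INTO samples (taken_at, value, raw_payload) "
                "VALUES (date('now'), 2.0, ?)", (b"recent",))

    # Idea 1: narrow the active set to a recent time window, shunting
    # everything older off to a cold archive table (which could live on
    # slower, cheaper storage).
    cur.execute("CREATE TABLE samples_archive AS SELECT * FROM samples WHERE 0")
    cur.execute("""
        INSERT INTO samples_archive
        SELECT * FROM samples WHERE taken_at < date('now', '-90 days')
    """)
    cur.execute("DELETE FROM samples WHERE taken_at < date('now', '-90 days')")

    # Idea 2: vertical split - move the bulky, seldom-read column into a
    # detail table keyed on id, so the hot table stays narrow. (A real
    # migration would then drop raw_payload from samples; SQLite has
    # supported ALTER TABLE ... DROP COLUMN since 3.35.)
    cur.execute("CREATE TABLE sample_details AS "
                "SELECT id, raw_payload FROM samples")

    conn.commit()
    print(cur.execute("SELECT COUNT(*) FROM samples").fetchone())          # (1,)
    print(cur.execute("SELECT COUNT(*) FROM samples_archive").fetchone())  # (1,)
    ```

    On a big-iron RDBMS the same two ideas usually go by the names range partitioning and vertical partitioning, and the engine can do much of the shuffling for you.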