For background, I am a programmer, but have largely ignored everything having to do with AI (i.e., LLMs) for the past few years.
I just got to wondering, though. Why are these LLMs generating high-level programming language code instead of skipping the middleman and spitting out raw 1s and 0s for x86 to execute?
Is it that they aren’t trained on this sort of thing? Is it for the human code reviewers to be able to make their own edits on top of the AI-generated code? Are there AIs doing this that I’m just not aware of?
I just feel like there might be some level of optimization that could be made by something that understands the code and the machine at this level.
Umm… AI has been used to improve compilers dating all the way back to 2004:
https://github.com/shrutisaxena51/Artificial-Intelligence-in-Compiler-Optimization
Sorry that I had to prove you wrong so overwhelmingly, so quickly 🤷
Yeah, as @uranibaba@lemmy.world says, I was using the narrow meaning of AI=ML (as the OP was). Certainly not surprised that other ML techniques have been used.
That Cummins paper looks pretty interesting. I only skimmed the first page, but it looks like they’re using LLMs to estimate optimal compiler parameters? That’s pretty cool. But they also say something about it having a 91% compliant-code hit rate, and I wonder what’s happening in the other 9%. Noncompliance seems like a big problem? But I only have surface-level compiler knowledge, probably not enough to follow the whole paper properly…
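For anyone unfamiliar with what "estimating optimal compiler parameters" even means as a problem: here's a toy sketch, not the paper's method. The flag names are real GCC flags but the size numbers are completely made up — the point is just that flag selection is a search over combinations with a measurable cost, and the LLM approach tries to predict good settings instead of searching blindly.

```python
import itertools

# Hypothetical per-flag effect on binary size (negative = shrinks it).
# These numbers are invented for illustration only.
FLAG_EFFECTS = {
    "-Os": -120,
    "-funroll-loops": +40,
    "-finline-functions": -15,
    "-fomit-frame-pointer": -10,
}

BASE_SIZE = 1000  # hypothetical unoptimized binary size in bytes


def binary_size(flags):
    """Mock cost model: base size plus the effect of each enabled flag."""
    return BASE_SIZE + sum(FLAG_EFFECTS[f] for f in flags)


def best_flags():
    """Exhaustively try every subset of flags; keep the smallest binary."""
    best = (binary_size(()), ())
    for r in range(1, len(FLAG_EFFECTS) + 1):
        for combo in itertools.combinations(FLAG_EFFECTS, r):
            best = min(best, (binary_size(combo), combo))
    return best


size, flags = best_flags()
print(size, sorted(flags))
```

In a real autotuner the cost model is an actual compile-and-measure step, and the search space is far too large to enumerate — which is exactly why learned predictors are attractive.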
Looking at the tags, I only found one with the LLM tag, which I assume is the one naught101 meant. I think people here tend to forget that there is more than one type of AI, and that they have been around for longer than GPT-3.5.