For background, I am a programmer, but I have largely ignored everything having to do with AI (i.e., LLMs) for the past few years.
I just got to wondering, though: why are these LLMs generating high-level programming language code instead of skipping the middleman and spitting out raw 1s and 0s for x86 to execute?
Is it that they aren’t trained on this sort of thing? Is it for the human code reviewers to be able to make their own edits on top of the AI-generated code? Are there AIs doing this that I’m just not aware of?
I just feel like there might be some optimization to be gained by something that understands both the code and the machine at this level.
They would not be able to.
AIs only mix and match what they have copied from human work, and most of the code out there is written in high-level languages, not machine code.
In other words, AIs don't know what they are doing; they just maximize a probability to give you an answer, that's all.
But really, the objective is to give a human more or less correct boilerplate code, and humans would not read machine code.
Besides, even if LLMs could produce machine code directly, a compiler would likely do a better job of it.
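For a toy illustration of how deterministic this already is: even CPython's bytecode compiler constant-folds expressions at compile time, no statistics involved (Python here is just a stand-in; an optimizing C compiler does far more aggressive versions of this on actual machine code):

```python
import dis

# CPython's peephole optimizer folds 2 * 3600 at compile time -- the
# disassembly shows the folded constant 7200, not a multiply instruction.
dis.dis(lambda: 2 * 3600)
```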
To add to this: It’s much more likely that AI will be used to improve compilers—not replace them.
Aside: AI is so damned slow already. Imagine AI compiler times… Yeesh!
Strong doubt that AI would be useful for producing improved compilers. That's a task that requires an extremely detailed understanding of the logical edge cases of a given language-to-machine-code translation. By definition, no content exists that can be useful for training in that context. AIs will certainly try to help, because they are people-pleasing machines. But I can't see them being actually useful.
Umm… AI has been used to improve compilers dating all the way back to 2004:
https://github.com/shrutisaxena51/Artificial-Intelligence-in-Compiler-Optimization
Sorry that I had to prove you wrong so overwhelmingly, so quickly 🤷
Yeah, as @uranibaba@lemmy.world says, I was using the narrow meaning of AI=LLM (as the OP was). Certainly not surprised that other ML techniques have been used.
That Cummins paper looks pretty interesting. I only skimmed the first page, but it looks like they're using LLMs to estimate optimal compiler parameters? That's pretty cool. But they also say something about a 91% compliant-code hit rate, and I wonder what's happening in the other 9%. Noncompliance seems like a big problem? But I only have surface-level compiler knowledge, probably not enough to follow the whole paper properly…
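If I had to guess, that 9% is why you'd wrap the model in a verification loop: take its suggested flags, but only keep them if the result still compiles and passes tests. A rough sketch of what I imagine (hypothetical helper names, not from the paper):

```python
import subprocess

BASELINE_FLAGS = ["-O2"]  # safe fallback when the model's suggestion fails

def compiles_and_passes(source: str, flags: list[str]) -> bool:
    """Compile `source` with `flags`, then run its built-in test mode."""
    build = subprocess.run(["cc", source, "-o", "a.out", *flags])
    if build.returncode != 0:
        return False  # the model suggested flags the compiler rejects outright
    tests = subprocess.run(["./a.out", "--run-tests"])
    return tests.returncode == 0

def choose_flags(source: str, suggested: list[str]) -> list[str]:
    """Keep the model's suggested flags only if they survive verification."""
    if compiles_and_passes(source, suggested):
        return suggested
    return BASELINE_FLAGS  # the "other 9%" lands here
```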
Looking at the tags, I only found one with the LLM tag, which I assume naught101 meant. I think people here tend to forget that there is more than one type of AI, and that AI has been around for much longer than GPT-3.5.
I agree, but I would clarify that this is only true for the current generation of LLMs. AI is a much broader subject.
Yeah, good catch. I know that, but was forgetting it in the moment.
This is not necessarily true. Many models have been trained on assembly code, and you can ask them to produce it. Some mad lad created scripts a while ago to let an AI “compile” straight to assembly and create an executable. It sometimes worked for simple “Hello, world” type stuff, which is hilarious.
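Not the actual scripts, but the idea is roughly this (a sketch assuming the OpenAI Python client; any chat API would do, and a real script would also need to strip markdown fences etc. from the reply):

```python
import subprocess
from openai import OpenAI  # assumption: the OpenAI Python client is installed

client = OpenAI()

def llm_compile(task: str) -> None:
    """Ask the model for x86-64 assembly, assemble it, and run the result."""
    resp = client.chat.completions.create(
        model="gpt-4o",  # hypothetical model choice
        messages=[{
            "role": "user",
            "content": "Emit only x86-64 GNU assembly for a Linux program "
                       "that does the following, with no prose: " + task,
        }],
    )
    with open("out.s", "w") as f:
        f.write(resp.choices[0].message.content)
    # cc happily assembles and links a .s file for us
    subprocess.run(["cc", "out.s", "-o", "out"], check=True)
    subprocess.run(["./out"])

llm_compile("print 'Hello, world' and exit with status 0")
```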
But I guess it is easier for a large language model to produce working code in a higher-level programming language, where the concepts and functions are far better represented in the corpus it was trained on.