Well like I said, tasks, not reasoning or understanding.
Example: I have 10GB of memes broadly separated into folders but otherwise unorganized - the file names are random. A multi-modal LLM - i.e., one that can "see" (really, describe) images - is quite capable of going through each image and renaming it to a descriptive file name based on its content, given nothing more than a prompt telling it to do that.
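A minimal sketch of that workflow, with the model call stubbed out (the actual API and model are deployment-specific, and `describe_image` here is a hypothetical placeholder - in practice it sends the image to a multimodal LLM with a prompt like "describe this meme in a short phrase suitable for a filename"):

```python
import re
from pathlib import Path

def describe_image(path: Path) -> str:
    # Placeholder for the multimodal LLM call; returns a free-text
    # description of the image. Stubbed so the sketch is runnable.
    return "distracted boyfriend looking at new framework"

def to_filename(description: str, max_len: int = 60) -> str:
    # Turn a free-text description into a safe, descriptive slug.
    slug = re.sub(r"[^a-z0-9]+", "-", description.lower()).strip("-")
    return slug[:max_len].rstrip("-")

def rename_memes(root: Path, dry_run: bool = True) -> list[tuple[Path, Path]]:
    # Walk the folder tree, describe each image, and rename it in place.
    renames = []
    for img in root.rglob("*"):
        if img.suffix.lower() not in {".jpg", ".jpeg", ".png", ".gif", ".webp"}:
            continue
        target = img.with_name(to_filename(describe_image(img)) + img.suffix)
        if target != img and not target.exists():
            renames.append((img, target))
            if not dry_run:
                img.rename(target)
    return renames
```

The dry-run default is deliberate: with a model in the loop you want to eyeball the proposed names before committing 10GB of renames.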
Same for some of the UI interaction models, where you can say "Click the button that says 'OK' in this program" and it still works even if the text on the button is "Okay", "Ok" or even "Accept" etc.
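Sketching the UI case in the same spirit - the model call is again stubbed, here with a tiny synonym table so the example runs; a real UI agent would send the instruction plus the visible labels (or a screenshot) to the model, which resolves the *intent* rather than the literal string:

```python
import re

def pick_element(labels: list[str], instruction: str) -> int:
    # Placeholder for the LLM call: given on-screen labels and an
    # instruction like "Click the button that says 'OK'", return the
    # index of the matching element even when the label is "Okay",
    # "Ok", or "Accept". Stubbed with a hand-written synonym set.
    synonyms = {"ok": {"ok", "okay", "accept", "confirm", "yes"}}
    m = re.search(r"'([^']+)'", instruction)
    want = (m.group(1) if m else instruction).lower()
    targets = synonyms.get(want, {want})
    for i, label in enumerate(labels):
        if label.lower() in targets:
            return i
    return -1  # no match found
```

The point of using an LLM is precisely that you don't have to hand-maintain that synonym table - the fuzzy matching comes for free.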
Yes, you can do a lot of those things manually as well, but it's more cumbersome than with an LLM.
Reasoning and "intelligence" aren't actually required for them to be useful for NLP tasks.
I'm actually doing a lot of this stuff in my day job, and I can say the ability to do fuzzy searching, instruction following, etc. is light years ahead of where it was just a couple of years ago. That doesn't mean the models are intelligent or are "AI", just that they're good at attending to the input context in a way that makes generating the right (or at least, usually good enough) response much easier.