Yep, came to post much the same.

I was reading https://github.com/neurallambda/neurallambda yesterday, and it has a really good example of what these LLMs actually do:

https://github.com/neurallambda/neurallambda/raw/master/doc/socrates.png

Text for the lazy:

User: All men are mortal. Socrates is a man. What else do we know about Socrates?

ChatGPT: We can conclude that Socrates is mortal.

This looks like reasoning. It isn't. The model is picking the next word to output based on "attention" to the input prompt and its trained word probabilities. So it emits "We can conclude that Socrates is", and at that point it could have continued with "n't mortal", "mortal", "a dog", etc., but "mortal" has the highest score, so that's what gets picked. It's also a somewhat weak example of reasoning, because the context (the user's input and the bot's previous replies) already weights "is mortal" very heavily for "Socrates" when the attention mechanism reads it.
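To make the "it just picks the highest-scoring next word" point concrete, here's a minimal sketch using GPT-2 through the Hugging Face transformers library (ChatGPT's actual weights aren't public, so this only illustrates the mechanism, not its exact scores):

```python
# Minimal sketch: score candidate next tokens for the Socrates prompt.
# GPT-2 is used purely for illustration; any causal LM works the same way.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = ("All men are mortal. Socrates is a man. "
          "We can conclude that Socrates is")
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    logits = model(input_ids).logits  # a score for every vocabulary token at every position

next_token_scores = logits[0, -1]          # scores for whatever follows "...Socrates is"
top = torch.topk(next_token_scores, k=5)   # the five highest-scoring candidates
for score, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode(token_id.item())!r}: {score.item():.2f}")
# Greedy decoding just takes the argmax -- there's no logic engine, only the top score.
```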

Regardless, there was no thought process or reasoning happening. Even "chain of thought" and the other methods used to make LLMs "reason better" are basically a trick: the model first outputs text describing some plausible reasoning process (the most likely one given its training weights and the context), and that text is then attended to in later steps, which makes a weighting that gives a better result more likely.
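Here's a rough sketch of that framing: the "reasoning" trace is just more text that gets fed back into the context before the final answer. (GPT-2 via the transformers pipeline is only there to make the mechanics concrete; it's far too small to actually benefit from chain of thought.)

```python
# Sketch of the chain-of-thought mechanics only (GPT-2 won't produce good traces).
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

def complete(prompt: str, n: int = 60) -> str:
    out = generator(prompt, max_new_tokens=n, do_sample=False)[0]["generated_text"]
    return out[len(prompt):]  # keep only the newly generated continuation

question = "All men are mortal. Socrates is a man. What else do we know about Socrates?"

# Direct answer: conditioned on the question alone.
direct = complete(question + "\nAnswer:")

# Chain of thought: first sample a plausible-looking "reasoning" trace...
trace = complete(question + "\nLet's think step by step.")

# ...then condition the final answer on that trace as well. Nothing "thought" here;
# the trace is just extra context that shifts the next-word probabilities, usually
# toward the kind of answer that followed such traces in the training data.
with_cot = complete(question + "\nLet's think step by step." + trace + "\nTherefore,")

print(direct)
print(with_cot)
```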

That said, it can be used "like reasoning" in practice, for automating things that need a little bit of fuzziness: for example, asking a question about some text when you don't have the exact word to search for. It's decent at that.
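For that kind of fuzzy lookup, one common approach is embedding similarity rather than asking the chat model directly; a small sketch with the sentence-transformers library (the model name and the snippets are just placeholders):

```python
# Fuzzy matching without the exact keyword, via embedding similarity.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # any small embedding model will do

paragraphs = [
    "The invoice must be paid within 30 days of receipt.",
    "Returns are accepted if the item is unused and in its original packaging.",
    "Shipping outside the EU may incur customs fees.",
]

query = "Can I get a refund?"  # no paragraph contains the word "refund"

scores = util.cos_sim(model.encode(query), model.encode(paragraphs))[0]
best = scores.argmax().item()
print(paragraphs[best], f"(similarity {scores[best].item():.2f})")
```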

It fails at planning, hard.

It's also (as of this moment) impossible to get these models to actually learn new things dynamically, because the weights are fixed after training (training is slow and takes a lot of memory). Naive attention implementations scale quadratically with the number of tokens, so the super long context you'd need for long-term learning is currently out of reach. There are techniques to extend it, but even then we're talking about roughly 1M tokens, which still requires hundreds of GB of very fast memory (VRAM or similar) and is still inadequate as a long-term memory.
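Back-of-the-envelope numbers for that, with assumed (purely illustrative) model dimensions in the ballpark of current large models:

```python
# Rough memory arithmetic for a ~1M-token context (all sizes are assumptions).
tokens    = 1_000_000
layers    = 80     # assumed transformer depth
kv_heads  = 8      # assumed key/value heads (grouped-query attention)
head_dim  = 128    # assumed dimension per head
bytes_per = 2      # fp16/bf16

# The KV cache alone grows linearly with context length...
kv_cache_gb = tokens * layers * kv_heads * head_dim * 2 * bytes_per / 1e9
print(f"KV cache: ~{kv_cache_gb:.0f} GB")  # ~330 GB with these assumptions

# ...while naive attention also materialises a (tokens x tokens) score matrix
# per head, which is where the quadratic cost comes from.
naive_scores_gb = tokens * tokens * bytes_per / 1e9
print(f"One naive attention score matrix: ~{naive_scores_gb:.0f} GB")  # ~2000 GB
```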

What we will see instead is narrow expert models that are good at a specific task or job and can handle that one thing adequately (automating basic computer tasks like sorting files, drafting boilerplate emails, etc.).
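A rough sketch of what that kind of narrow automation could look like for file sorting; ask_model below is a stand-in for whatever small local model you'd plug in, not a real API:

```python
# Hypothetical sketch: sort a downloads folder with a narrow model's fuzzy judgement.
from pathlib import Path
import shutil

FOLDERS = ["invoices", "photos", "code", "misc"]

def ask_model(prompt: str) -> str:
    """Stand-in for a small local model's completion call (hypothetical)."""
    raise NotImplementedError

def sort_downloads(downloads: Path) -> None:
    for f in downloads.iterdir():
        if not f.is_file():
            continue
        # The model only makes a fuzzy judgement call -- no planning or reasoning needed.
        folder = ask_model(
            f"Pick exactly one folder from {FOLDERS} for a file named '{f.name}'. "
            "Reply with the folder name only."
        ).strip()
        if folder not in FOLDERS:
            folder = "misc"  # model output drifts; always validate it
        dest = downloads / folder
        dest.mkdir(exist_ok=True)
        shutil.move(str(f), str(dest / f.name))
```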

The vision-enabled ones are useful for recognition tasks and the like.

They're going to be very useful, and already are, for natural language tasks, but "AGI" this is not.

51 days ago
1 score