Only partway through the article but it was a small shock when the word 'rodent' turned up unexpectedly:
"...later, if the rodent reenters that place, the cell will fire"
Totally fair and normal, of course; I had just been imagining human or generic neurons/dendrites up to that point. The test species wasn't mentioned earlier, as far as I can see!
in-silico 1 day ago [-]
Earlier in the article:
> In 2014, when Magee attached electrodes to rodents to record their neural activity,
danwills 3 hours ago [-]
Ah thanks for correcting me! Guess I glossed-over it earlier on in the article, apologies!
mobeets 23 hours ago [-]
Good point: this is probably the author assuming a little more context on the reader's part. The cases where you can record from neurons in humans are very rare (basically only in treatment-resistant epilepsy), and most of the work on the hippocampus uses rodents.
largbae 1 day ago [-]
It seems obvious that a humanoid robot system or other truly general-purpose AI will need a stack of model types that work in concert. An LLM could be analogous to the conscious part of our brains, while many smaller and possibly frequently updateable models might provide "muscle memory" and reflexes.
If that becomes the case, then similarly built humanoid robots might have differentiated capabilities depending on their experience, just like us.
jaggederest 21 hours ago [-]
I think ultimately we're going to see structures that start to approximate the Type 1 and Type 2 thinking systems in humans - fast, deterministic models for microsecond and millisecond scale, and something in the current LLM ballpark for tactical and short term. We probably don't have a model that is out of the box good enough for the medium-term and long term planning, I think that's the most obvious gap in this kind of tower-of-hanoi style model stack.
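The layered idea in these two comments can be sketched in code. Below is a minimal, purely illustrative toy (all names and numbers are my own assumptions, not anything from the article): a fast, deterministic "reflex" layer runs every tick, while a slow "planner" layer, standing in for an LLM, only revises the goal occasionally.

```python
class ReflexLayer:
    """Fast, deterministic control: runs every tick (ms-scale analogue)."""

    def act(self, reading, goal):
        # Trivial proportional controller, clipped to a unit actuator range.
        return max(-1.0, min(1.0, goal - reading))


class PlannerLayer:
    """Slow, deliberative layer: revises the goal every `period` ticks."""

    def __init__(self, period=100):
        self.period = period

    def plan(self, tick, reading, goal):
        if tick % self.period == 0:
            # In a real stack this slot might be an LLM call; here it just
            # nudges the goal forward as a placeholder.
            return reading + 1.0
        return goal


def run(ticks=300):
    reflex, planner = ReflexLayer(), PlannerLayer(period=100)
    position, goal = 0.0, 0.0
    for t in range(ticks):
        goal = planner.plan(t, position, goal)      # slow loop (rarely fires)
        position += 0.1 * reflex.act(position, goal)  # fast loop (every tick)
    return position
```

The point of the sketch is only the timescale separation: the reflex layer is cheap and always on, while the expensive planner is consulted two orders of magnitude less often.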
harrall 1 day ago [-]
An LLM is more like the unconscious part of my brain. It’s my gut. It shits out answers using an ungodly amount of parallel processing and it’s often right.
But it hallucinates thoughts and beliefs too, and that's where the conscious parts have to intervene.
But the conscious parts are expensive to run and I can’t multi-task that.
The conscious parts also degrade first when I don’t get enough sleep.
idiotsecant 23 hours ago [-]
I think the LLM is more like the 'internal monologue'. I am quite unqualified to claim this, since I don't have one as far as I can tell, but I understand it's constantly observing and providing 'first draft' thinking of roughly LLM quality.
Zababa 1 day ago [-]
>It seems obvious that a humanoid robot system or other truly general-purpose AI will need a stack of model types that work in concert.
I don't think that much of AI today is obvious, so I'm suspicious of anything that is "obvious" about the future.
balamatom 1 day ago [-]
OK AI user.
Did it truly take someone else to externalize the mechanics of cognition into a machine for you, for you to become able to notice them and become interested in them?
And then to remain focused on the machine that you see, rather than the machine that you are.
Pitiful.
yieldcrv 23 hours ago [-]
> Today, neuroplasticity is taken as fact, but for much of the 150-year history of neuroscience, the adult brain was thought to be static
Ha, that's dismissive! Even the most enlightened among us still quote this static-adult-brain phrenology as fact, usually because an old person in their life is slow to pick up new concepts, or because they want the excuse of being old for themselves.
mfgadv99 1 day ago [-]
[dead]