
New version of ChatGPT released: real-time translation, human-like interaction + more


Recommended Posts

Posted (edited)
2 hours ago, Bloo said:

AI going "rogue" is not something to worry about. Transformer-based language models like ChatGPT just predict the next word in a stream of text. They do not actually understand anything. These are impressive demos, but I'm confident there is a litany of glaring problems with it.

The biggest issue currently is context length and how well models actually remember what's inside that context. The transformer architecture suffers from this, and there have been many new architectures like Mamba and Megalodon (or whatever it was called), but they're all ****. They have enormous context lengths but they can't recall for ****.

 

Technically speaking, if someone is able to achieve nearly infinite context length with very little loss of recall when prompting (aka perfect recall), then they have created a true AGI

 

You can feed it as much information as you want and it'll remember every single thing. But we're not there yet
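
Rough sketch of the problem, if anyone's curious (the limit and the "tokenizer" here are made up, not any real model's API): a fixed context window just drops everything past the cutoff, so remembering older input isn't even possible.

```python
# Toy illustration of a fixed context window (all names/numbers hypothetical).
CONTEXT_LIMIT = 8192  # max tokens the model can attend to at once

def build_prompt(history: list[str], limit: int = CONTEXT_LIMIT) -> list[str]:
    """Flatten chat history and keep only the most recent tokens that fit."""
    tokens: list[str] = []
    for message in history:
        tokens.extend(message.split())  # crude whitespace "tokenizer"
    # Anything before the cutoff is simply invisible to the model at
    # inference time, no matter how important it was.
    return tokens[-limit:]
```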

Edited by Delirious


Posted (edited)

Btw, Meta will be open-sourcing their own multimodal GPT-4o equivalent (essentially something just like this one, but open source) in two months, according to a senior person at Meta.

 

And Google is announcing theirs today

 

So we will see competition :coffee2:

Edited by Delirious
Posted
3 hours ago, TitanicSurvivor said:

It's 2024 and now even a robot sings better than the young lady Raznatovic :bibliahh:

who is this khia :deadbanana: 

  • Haha 1
Posted

MPG feat. ChatGPT 4-OH soon 

  • Haha 2
  • ATRL Moderator
Posted
7 minutes ago, Delirious said:

The biggest issue currently is context length and how well models actually remember what's inside that context. The transformer architecture suffers from this, and there have been many new architectures like Mamba and Megalodon (or whatever it was called), but they're all ****. They have enormous context lengths but they can't recall for ****.

 

Technically speaking, if someone is able to achieve nearly infinite context length with very little loss of recall when prompting, then they have created a true AGI

 

You can feed it as much information as you want and it'll remember every single thing. But we're not there yet

Technically speaking, no, infinite context length does not mean the model will achieve AGI. This has already been shown empirically in state-of-the-art language models:

arXiv:2307.03172 (Lost in the Middle: How Language Models Use Long Contexts)

Language models just predict the next token (or word). They do not actually embed knowledge or understanding. 
 

Also, other work has shown that model performance follows a logarithmic trend, meaning improvements diminish as model size increases.
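
To make that concrete, here's a toy sketch of what "just predicting the next token" looks like (the "model" here is a stand-in that returns random scores; a real LM computes them from billions of learned weights):

```python
import numpy as np

vocab = ["the", "cat", "sat", "on", "mat", "."]

def model_logits(context: list[str]) -> np.ndarray:
    """Stand-in for a real network's forward pass (random scores here)."""
    rng = np.random.default_rng(len(context))
    return rng.normal(size=len(vocab))

def generate(context: list[str], steps: int = 5) -> list[str]:
    out = list(context)
    for _ in range(steps):
        logits = model_logits(out)
        probs = np.exp(logits) / np.exp(logits).sum()  # softmax over vocab
        out.append(vocab[int(np.argmax(probs))])       # greedy decoding
    return out

print(generate(["the", "cat"]))
```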

  • Like 1
Posted
1 hour ago, May said:

girl it was trying to open up to you-

***** :ahh:

 

 

Posted
1 hour ago, Comedor said:

Stop being scared of technology, grandma.

Ok damn, ChatGPT defender 

Posted
11 minutes ago, CécredSpaces said:

who is this khia :deadbanana: 

 

Posted

Terrifying

Posted
2 hours ago, simplywohoo said:

I tried it and it's not working too well. When I asked some questions, it did research online and gave me the answer and the source; however, what it told me was the exact opposite of what the article said :rip:

It's not rolled out yet, just the text capabilities of this model. Maybe in the coming weeks we'll get the more fluid voice version :khalyan:

Posted (edited)
43 minutes ago, Bloo said:

Technically speaking, no, infinite context length does not mean the model will achieve AGI. This has already been shown empirically in state-of-the-art language models:

arXiv:2307.03172 (Lost in the Middle: How Language Models Use Long Contexts)

Language models just predict the next token (or word). They do not actually embed knowledge or understanding. 
 

Also, other work has shown that model performance follows a logarithmic trend, meaning improvements diminish as model size increases.

I tried to word it in a simple way, but AGI will only be achieved if you achieve perfect recall + infinite context length. Perfect recall especially.

And yes, improvements do diminish, but I think the training loss falls sub-linearly, not logarithmically. See the Chinchilla scaling laws.
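
For reference, the Chinchilla fit is just a power law in parameter count N and token count D. A minimal sketch (the constants below are the approximate fitted values from Hoffmann et al. 2022, so don't take the exact numbers too literally):

```python
# Parametric loss from Hoffmann et al. 2022 ("Training Compute-Optimal
# Large Language Models"); E is the irreducible loss of the data.
E, A, B = 1.69, 406.4, 410.7
ALPHA, BETA = 0.34, 0.28

def chinchilla_loss(n_params: float, n_tokens: float) -> float:
    """Predicted final training loss for N parameters and D training tokens."""
    return E + A / n_params**ALPHA + B / n_tokens**BETA

print(chinchilla_loss(70e9, 1.4e12))   # roughly Chinchilla scale
print(chinchilla_loss(140e9, 2.8e12))  # double both: loss improves sub-linearly
```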

Edited by Delirious
  • ATRL Moderator
Posted
9 minutes ago, Delirious said:

I tried to word it in a simple way, but AGI will only be achieved if you achieve perfect recall + infinite context length. Perfect recall especially.

And yes, improvements do diminish, but the training loss falls sub-linearly, not logarithmically. See the Chinchilla scaling laws.

The Chinchilla scaling laws only apply to statistical measures. They have nothing to do with AGI, because AGI pertains to something beyond anything that can be measured through a conventional statistical lens.
 

Regardless, there is plenty of reason to be skeptical that neural network architectures as they stand are sufficient to achieve AGI, and to expect they will hit a bottleneck when generalizing beyond basic tasks.

  • Like 1
Posted
1 minute ago, Bloo said:

The Chinchilla scaling laws only apply to statistical measures. They have nothing to do with AGI, because AGI pertains to something beyond anything that can be measured through a conventional statistical lens.
 

Regardless, there is plenty of reason to be skeptical that neural network architectures as they stand are sufficient to achieve AGI, and to expect they will hit a bottleneck when generalizing beyond basic tasks.

The Chinchilla scaling laws predict the final loss based on how many parameters a model has and how much data it was trained on. Let's assume we scale the dataset and the parameter count to infinity (which is obviously not plausible): wouldn't this reduce the training loss to nearly 0? How would this not be indistinguishable from AGI? It would know everything about the world.

I guess the question is what the definition of AGI even is. Everyone has a different one.

Also, there's a difference between AGI and ASI, I guess.
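
For what it's worth, if you plug that limit into the fitted Chinchilla form (same caveat as before: the constants are the paper's approximate fitted values), the loss doesn't go to literal 0. It floors at the irreducible entropy of the data, i.e. a model that predicts text as well as text can possibly be predicted:

```python
# Same fitted form as above (Hoffmann et al. 2022, approximate constants).
E, A, B, ALPHA, BETA = 1.69, 406.4, 410.7, 0.34, 0.28

def loss(n_params: float, n_tokens: float) -> float:
    return E + A / n_params**ALPHA + B / n_tokens**BETA

for scale in (1e9, 1e12, 1e15, 1e18):
    print(f"N = D = {scale:.0e} -> loss = {loss(scale, scale):.3f}")
# The two power-law terms vanish, but the loss floors at E, the
# entropy of the data itself, rather than at 0.
```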

  • ATRL Moderator
Posted
1 minute ago, Delirious said:

The Chinchilla scaling laws predict the final loss based on how many parameters a model has and how much data it was trained on. Let's assume we scale the dataset and the parameter count to infinity (which is obviously not plausible): wouldn't this reduce the training loss to nearly 0? How would this not be indistinguishable from AGI? It would know everything about the world.

I guess the question is what the definition of AGI even is. Everyone has a different one.

Also, there's a difference between AGI and ASI, I guess.

Because AGI is such a fuzzy concept, philosophically speaking. If you have some perfect model and some infinite data and achieve 0 loss, the belief that 0 loss on predictive tasks corresponds with general intelligence is a baseless assumption. To be fair, it's also an assumption that the opposite is true. But this comes back to a larger philosophical conversation about what "intelligence" entails, and that is beyond the scope of discussions of artificial neural networks.

I'm a giant skeptic of the idea that artificial neural networks will lead to AGI in any scenario. The only thing I think would make me revisit this assumption (assuming we don't just outright achieve it and I have to swallow my words) would be quantum computing becoming a usable system. That… would change all the theory we know about information systems, and I don't know what those implications are. But I do know that with current computer architectures, there are very simple problems that are impossible to solve, so there are very strict barriers on what is possible on a digital machine. And I subscribe to the notion that general intelligence is outside of what's possible.
 

That said, AGI is not necessary for AI to be dangerous and a giant concern. Deepfakes scare the hell out of me and they have nothing to do with AGI. 

  • Like 2
Posted
2 minutes ago, Bloo said:

Because AGI is such a fuzzy concept, philosophically speaking. If you have some perfect model and some infinite data and achieve 0 loss, the belief that 0 loss on predictive tasks corresponds with general intelligence is a baseless assumption. To be fair, it's also an assumption that the opposite is true. But this comes back to a larger philosophical conversation about what "intelligence" entails, and that is beyond the scope of discussions of artificial neural networks.

I'm a giant skeptic of the idea that artificial neural networks will lead to AGI in any scenario. The only thing I think would make me revisit this assumption (assuming we don't just outright achieve it and I have to swallow my words) would be quantum computing becoming a usable system. That… would change all the theory we know about information systems, and I don't know what those implications are. But I do know that with current computer architectures, there are very simple problems that are impossible to solve, so there are very strict barriers on what is possible on a digital machine. And I subscribe to the notion that general intelligence is outside of what's possible.
 

That said, AGI is not necessary for AI to be dangerous and a giant concern. Deepfakes scare the hell out of me and they have nothing to do with AGI. 

Yes, deepfakes are going to become a huge issue. Scams will be horrific for everyone, and if Whisper or Open Sora (if that ever happens) gets into the hands of bad people... oh dear

  • ATRL Moderator
Posted
2 minutes ago, Delirious said:

Yes, deepfakes are going to become a huge issue. Scams will be horrific for everyone, and if Whisper or Open Sora (if that ever happens) gets into the hands of bad people... oh dear

100%. We need media literacy classes now more than ever. People will really need to know how to critically evaluate online content.

  • Like 1
  • Thanks 1
Posted

I 100% support AI development. I think it will be great for learning.

  • Like 1
Posted

Can we change the voice at least? :dies: 

Posted

How are you all getting the new version? Mine is still stuck on GPT-3.5. :biblionny:

Posted

I'm learning the piano and I'd love something like this to guide me through my questions.

Posted
3 hours ago, The7thStranger said:

The thing that's driving me nuts is that it's not being, and not going to be, used to take away all the arduous and lame tasks we all hate doing. Going after illustration, translation, design, music, etc.... that's just sucking all the joy out of the jobs some of us really like doing. :rip:

Omg this! And this happens all the time. Technology has progressed so much over the last few centuries, but instead of using it to make our lives easier and give us more free time for leisure/friends/family, we're still working as much as ever (or even more).

And now AI will be used to take away the creative jobs, leaving us with the boring, repetitive ones.

  • Like 1
Posted

Will we finally get a good Japanese-to-English translator then?

Posted

When is this getting released for us free users :chick3:

Posted

raven-symone-thats-so-raven.gif

Alexa, play The End Of The World on Spotify 

Posted

feckkkkkk, this is mental!!!
