Message from @Lios
Discord ID: 626400737181827082
if AI gets to the point it self-actualises, it won't be restricted by its hardware, as it will have the capacity to expand/rebuild its hardware
It's impossible to predict what the world will look like in 100 years, so maybe at that date it'd appear possible, but I don't expect AI in the next 100 years from where we are standing now
Higher processing power can actually be detrimental depending on existing hardware
Did you watch Boris's speech on tech?
Memory requires the creation of junk code
@Eccles no it won't. The computer will not magically grow arms and legs when it hits a certain point of processing power
It relies on cold logic. Logic implemented by a human, and logic that could lead it to alternatives like: kill humans to serve X purpose.
Right now, especially with quantum processing, junk code means memory is short term within seconds
@Eccles AI would have the ability to generate a compression system for storing itself many times more efficiently
Go watch that
@ubermensch what logical purpose is achieved by "kill x humans"
It's really good
It wouldn't be able to increase its processing power but it could do more with less
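The "do more with less" idea above can be illustrated with ordinary off-the-shelf compression. This is a hypothetical sketch, not anything from the chat: it just shows that redundant stored state can be packed into far less space with zero new hardware, which is the kind of efficiency gain being claimed.

```python
import zlib

# Hypothetical stand-in for redundant stored state (e.g. repetitive data
# a system keeps around). Real model weights are less compressible than
# this, but the principle is the same.
blob = b"weight:0.5;" * 10_000

# Generic DEFLATE compression at maximum effort.
packed = zlib.compress(blob, level=9)

ratio = len(blob) / len(packed)
print(f"{len(blob)} -> {len(packed)} bytes ({ratio:.0f}x smaller)")

# Round-trip check: nothing is lost, only space is saved.
assert zlib.decompress(packed) == blob
```

Of course, a compression scheme only buys storage headroom; it doesn't add compute, which is the distinction being argued here.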
Depends on the question you're asking for it to serve
Depends what the purpose it wants is
He kinda slaps most other politicians out of the water, even Trump tbh
The purpose is relative
Garbage, networked systems aren't confined to their own local hardware
But if we build it on a quantum computer then it might as well have endless processing power
Someone use the stamps example on him please
@Samaritan™
Artifical Intelligence
is not
magic
Yes retard
Now shut the fuck up
Neither is natural intelligence
Neither is logic lol
It just is
Nothing I have said is beyond the realms of possibility
And it can become anything
Well yeah, it's only real magic if you're a spontaneous caster. Then you need Wisdom or Charisma.
@Eccles only a hyper retard would allow a learning AI unchecked access to other computers in a network
Don't put it beyond the Russians.
Since you need to control what a learning AI is exposed to in order to facilitate expedient learning
It will have to be released into the real world eventually
So again it’s based on humans who build it
Someone please kill me
Or a destructive misanthrope. Once again, do not trust me with creating an AI.
They'll have their AI version of Chernobyl.
And why should we innately trust the people who build it
A controlled AI is just as bad as an uncontrolled one. Can you imagine what the gov would use it for?