Message from @negatic
Discord ID: 648275691661557788
Google's quantum supremacy was super fast at literally the one thing it was designed for
i just copied your text @negatic
either way it still doesn't make sense
it's easier to build an AI without safety measures than one with them
there is nothing called a friendly AI or unfriendly AI
Depends on your definition of friendly
AI is supposed to be neutral
Like literally friendly or hostile
unbiased
As in “hi have a nice day” or “go F yourself”
still no
it's a term for whether an AI has humanity's goals included in its own goals or not
You can program something to appear friendly
That’s the artificial part
if an AI hasn't, then it might kill us off as unneeded
an AI can't be described with those characteristics
its not a human
if it's telling you to fuck yourself, it likely picked that up from a dataset
if our goals are included it at least won't just get rid of us humans
Exactly
that's why i explained it
those seem like highly unspecific goals
yes they are because the damn problem is what are the goals to set
that's the whole damn problem
The more you know about AI the less you fear that it could take over the world
how do you make an AI care for humans and not let us end in a way we don't desire
again a very unspecific goal so good luck training it
i already said that's the whole problem
AI can't be trained
and there are lots of people working on this
They are only programmed to do what the human allows it to do
Movies aren't like real AI
i told google alexa to destroy humanity, HELP
Machine learning can “train” it
today you basically train AIs
It’s spooky because it seems real but it’s all artificial
they're neural networks and at the end you've got no idea why they do what they do
OK boomer
Yeah we program these neural networks and don't really know how they work
no we let it learn from datasets
and then we don't know how it comes to its conclusions
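a minimal sketch of what "learning from a dataset" means, assuming nothing about any particular system: a single artificial neuron nudging its weights toward a toy dataset (the AND function) via gradient descent. after training it reproduces the rule, but the learned weights are just numbers, with nothing human-readable in them explaining *why* it answers the way it does.

```python
import math
import random

# Toy dataset: inputs and targets for the AND function.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

random.seed(0)
w = [random.uniform(-1, 1) for _ in range(2)]  # weights start random
b = 0.0                                        # bias

def predict(x):
    """Weighted sum squashed by a sigmoid into a 0..1 score."""
    z = w[0] * x[0] + w[1] * x[1] + b
    return 1 / (1 + math.exp(-z))

# The "training" loop: repeatedly adjust weights to reduce error on the data.
for _ in range(5000):
    for x, target in data:
        p = predict(x)
        grad = (p - target) * p * (1 - p)  # chain rule through the sigmoid
        w[0] -= 0.5 * grad * x[0]
        w[1] -= 0.5 * grad * x[1]
        b -= 0.5 * grad

# The network now answers AND correctly, but w and b are opaque numbers,
# not a rule anyone programmed in.
print([round(predict(x)) for x, _ in data])
```

this is the point being argued above: nobody typed in "output 1 only when both inputs are 1", the behavior emerged from the dataset, and the resulting weights don't explain themselves.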