Message from @mineyful

Discord ID: 648275312467247117


2019-11-24 21:32:28 UTC  

Who knows

2019-11-24 21:32:34 UTC  

yea i dont think we are that close

2019-11-24 21:32:40 UTC  

We literally invented computers a few generations ago

2019-11-24 21:32:45 UTC  

It’s not that old

2019-11-24 21:32:53 UTC  

`friendly ai is harder than building an unfriendly ai`

none of these make sense

2019-11-24 21:33:17 UTC  

but u dont need an ai built by a single person. the first strong ai / seed ai built might be enough to lead us to destruction

2019-11-24 21:33:18 UTC  

yeah but things are now exponentially faster at quantum level speeds

2019-11-24 21:33:33 UTC  

And those quantum level speeds are only for incredibly specific tasks

2019-11-24 21:33:36 UTC  

friendly ai is harder than building an unfriendly ai

2019-11-24 21:33:40 UTC  

u wrote it wrong

2019-11-24 21:33:45 UTC  

@negatic the government/richest corporations would be in control of those not your average joe lol

2019-11-24 21:33:54 UTC  

Googles quantum supremacy was super fast at literally the one thing it was designed for

2019-11-24 21:33:58 UTC  

i just copied your text @negatic
either way it still doesnt make sense

2019-11-24 21:34:05 UTC  

its easier to build an ai without safety measures than one with them

2019-11-24 21:34:13 UTC  

there is nothing called a friendly AI or unfriendly AI

2019-11-24 21:34:22 UTC  

Depends on your definition of friendly

2019-11-24 21:34:27 UTC  

AI is supposed to be neutral

2019-11-24 21:34:28 UTC  

Like literally friendly or hostile

2019-11-24 21:34:32 UTC  

unbiased

2019-11-24 21:34:38 UTC  

As in “hi have a nice day” or “go F yourself”

2019-11-24 21:34:47 UTC  

still no

2019-11-24 21:35:06 UTC  

its terms for whether an ai has humanitys goals included in its own goals or not

2019-11-24 21:35:07 UTC  

You can program something to appear friendly

2019-11-24 21:35:15 UTC  

That’s the artificial part

2019-11-24 21:35:19 UTC  

if an ai doesnt, then it might kill us off as unneeded

2019-11-24 21:35:33 UTC  

an AI cant be described with those characteristics
its not a human
if its telling you to fuck yourself, it likely picked that up from some dataset

2019-11-24 21:35:38 UTC  

if our goals are included it at least wont just get rid of us humans

2019-11-24 21:35:40 UTC  

Exactly

2019-11-24 21:35:52 UTC  

thats why i explained it

2019-11-24 21:36:09 UTC  

the terms just mean whether the goal of the ai contains the goals of humanity, like freedom etc

2019-11-24 21:36:26 UTC  

those seem like highly unspecific goals

2019-11-24 21:36:42 UTC  

yes they are, cause the damn problem is what goals to set

2019-11-24 21:36:48 UTC  

thats the whole damn problem

2019-11-24 21:37:03 UTC  

The more you know about AI the less you fear that it could take over the world

2019-11-24 21:37:14 UTC  

how do you make an ai care for humans and not let us end in a way we dont desire?

2019-11-24 21:37:33 UTC  

again a very unspecific goal so good luck training it

2019-11-24 21:37:53 UTC  

i already said thats the whole problem

2019-11-24 21:38:02 UTC  

AI can't be trained

2019-11-24 21:38:02 UTC  

and there are lots of people working on this