Message from @negatic

Discord ID: 648279221915090948


2019-11-24 21:47:12 UTC  

25th December 2019

2019-11-24 21:47:14 UTC  

THE EARTH

2019-11-24 21:47:16 UTC  

crap, it's Agent Smith

2019-11-24 21:47:16 UTC  

did you watch the video where someone trapped a self-driving car?

2019-11-24 21:47:25 UTC  

no

2019-11-24 21:47:33 UTC  

I just want a Tesla

2019-11-24 21:47:41 UTC  

yeah, it's all over the place

2019-11-24 21:47:46 UTC  

why not a Ferrari?

2019-11-24 21:47:53 UTC  

gas cars

2019-11-24 21:48:01 UTC  

plus Tesla is more advanced

2019-11-24 21:48:13 UTC  

and is considered the world's safest car, with plenty of evidence to prove it

2019-11-24 21:48:14 UTC  

it basically shows that there is still a way to go

2019-11-24 21:49:14 UTC  

but let's hope the AI won't ever work like feared, and will just be an extension for us, like a calculator: we input some goal and it outputs whatever we asked for, like a house plan, a program, etc.

2019-11-24 21:49:25 UTC  

you seem to imply that if AI was programmed with some goals like "human well-being" etc. it might kill us off
I'm saying that's pretty much impossible, @negatic

2019-11-24 21:49:49 UTC  

Ferraris are faster, and why would you need an AI to drive your car?

2019-11-24 21:49:52 UTC  

I wanted to imply that if it WAS NOT, then maybe

2019-11-24 21:50:05 UTC  

that's impossible too

2019-11-24 21:50:10 UTC  

why?

2019-11-24 21:51:53 UTC  

an AI specifically trained to perform music, for example, will not automatically start to kill off humans for whatever reason; it was never programmed with the goal of "human well-being", but that doesn't mean it's going to do the opposite for the lulz

2019-11-24 21:52:45 UTC  

I agree with your example

2019-11-24 21:52:59 UTC  

but we are not at strong AI yet

2019-11-24 21:53:19 UTC  

you probably heard the paperclip example

2019-11-24 21:53:43 UTC  

or the one with solving some math problem as the goal

2019-11-24 21:53:55 UTC  

and killing humans off in the process

2019-11-24 21:54:27 UTC  

yeah... it's kinda a bad one

2019-11-24 21:54:27 UTC  

in those cases you simply gave the goal "solve this problem" or "produce as many paperclips as possible"

2019-11-24 21:54:37 UTC  

why is it bad?

2019-11-24 21:55:36 UTC  

the most realistic threat AI currently poses is to a lot of jobs
that could lead to some big unemployment issues

2019-11-24 21:55:47 UTC  

agreed

2019-11-24 21:56:01 UTC  

old skool

2019-11-24 21:56:09 UTC  

anyway, the examples remind me of an error in a program causing an infinite loop

2019-11-24 21:56:16 UTC  

because it's so stupid...
so in that example, afaik, an AGI programmed to maximise paperclips suddenly goes *boom*

2019-11-24 21:56:19 UTC  

to infinity

2019-11-24 21:56:24 UTC  

the danger may simply be asking the wrong question

2019-11-24 21:56:46 UTC  

it doesn't suddenly go boom, it just does what it's told
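
A minimal sketch of that point, in Python (a hypothetical toy, not from the chat): a literal-minded maximizer just does what it's told, and a goal stated without a stopping condition plays out like a loop whose exit was never written.

```python
# Toy illustration of a misspecified goal (hypothetical example).
# The goal as given, "produce as many paperclips as possible", says
# nothing about when to stop, so the agent keeps converting resources
# for as long as any remain.
def paperclip_maximizer(resources: int) -> int:
    paperclips = 0
    while resources > 0:   # nothing in the goal says "and then stop"
        resources -= 1
        paperclips += 1
    return paperclips

print(paperclip_maximizer(10))  # consumes every resource it was given
```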

2019-11-24 21:56:51 UTC  

how can you have a wrong question?

2019-11-24 21:56:59 UTC  

I pressed enter too early