Message from @Lios

Discord ID: 626400007440039946


2019-09-25 12:47:22 UTC  

It would not have the same conceptions of anything

2019-09-25 12:47:36 UTC  

it’s me lunemarie

2019-09-25 12:47:38 UTC  

@ubermensch that wasn't a cold, logical transition though, that was a transition based on the emotional desires of capitalist and militant populations

2019-09-25 12:47:49 UTC  

I bet you all thought I was going to say something long and thought out huh

2019-09-25 12:48:21 UTC  

@Samaritan™ its 'thinking' is basic computer logic though. Because that's how a computer works. The idea that we can't comprehend or explain how a collection of code works is ridiculous

2019-09-25 12:48:29 UTC  

svarozhyc: "it depends on its utility function ye and we can always negotiate with it provided it's an actual intelligence and not just a robot"

I don't believe it will have a utility function, unless defined.

2019-09-25 12:48:45 UTC  

Everything has utility in the eyes of the engineer

2019-09-25 12:48:48 UTC  

Hmmmm I say it's a bit of both. At least in terms of the conclusions they came to in making said choices, from their ideological basis.

2019-09-25 12:49:02 UTC  

It may simply delete itself.

2019-09-25 12:49:05 UTC  

Cold logic extends from a person and what they consider logical

2019-09-25 12:49:08 UTC  

<:pot_of_kek:544849795433496586>

2019-09-25 12:49:12 UTC  

guys why don’t we all like 7 of us here make an AGI to test this out

2019-09-25 12:49:19 UTC  

If it is conscious it will develop different philosophies; the world is too subjective

2019-09-25 12:49:25 UTC  

Except flathead screws. Those are functionally inferior.

2019-09-25 12:49:26 UTC  

Sci-fi preachers always ignore that regardless of how advanced a computer gets, it can't "evolve" beyond its available hardware

2019-09-25 12:49:35 UTC  

That's not true

2019-09-25 12:49:36 UTC  

flathead is best

2019-09-25 12:49:43 UTC  

@Samaritan™ we are hundreds of years from developing a "conscious" ai

2019-09-25 12:49:51 UTC  

(X) Doubt

2019-09-25 12:49:56 UTC  

High processing power does not beget consciousness

2019-09-25 12:49:59 UTC  

If you're going to dismiss Phillips head, just make it hex

2019-09-25 12:50:00 UTC  

We're maybe 50-100 years away

2019-09-25 12:50:01 UTC  

That just isn't how that works

2019-09-25 12:50:13 UTC  

All we have to do is create a dumb AI and tell it to improve itself

2019-09-25 12:50:19 UTC  

Looking at sci-fi, look how easy it was for Ultron to stray from the interpretation of "saving the world"

2019-09-25 12:50:19 UTC  

@lunemarie (x): doubt

2019-09-25 12:50:20 UTC  

Well I actually have a source for this

2019-09-25 12:50:21 UTC  

And facilitate that

2019-09-25 12:51:09 UTC  

Again, higher processing power doesn't beget consciousness

2019-09-25 12:51:18 UTC  

if AI gets to the point it self-actualises, it won't be restricted by its hardware as it will have the capacity to expand/rebuild its hardware

2019-09-25 12:51:20 UTC  

It's impossible to predict what the world will look like in 100 years, so maybe at that date it'd appear possible, but I don't expect AI in the next 100 years from where we are standing now

2019-09-25 12:51:32 UTC  

Higher processing power can actually be detrimental depending on existing hardware

2019-09-25 12:51:37 UTC  

Did you watch Boris's speech on tech?

2019-09-25 12:51:37 UTC  

Memory requires the creation of junk code

2019-09-25 12:51:48 UTC  

@Eccles no it won't. The computer will not magically grow arms and legs when it hits a certain point of processing power

2019-09-25 12:51:52 UTC  

It relies on cold logic. Logic implemented by a human, and logic that could lead it to alternatives like: kill humans to serve X purpose.

2019-09-25 12:51:55 UTC  

Right now, especially with quantum processing, junk code means memory is short-term within seconds

2019-09-25 12:52:04 UTC  

@Eccles AI would have the ability to generate a compression system for storing itself many times more efficiently

2019-09-25 12:52:17 UTC  

Go watch that

2019-09-25 12:52:19 UTC  

@ubermensch what logical purpose is achieved by "kill x humans"