Message from @Spookaswa

Discord ID: 529158351561556009


2018-12-31 01:38:33 UTC  

This is what literally happened in Italy during the 20s

2018-12-31 01:39:13 UTC  

If accelerationism fails then what can I say

2018-12-31 01:39:24 UTC  

The people would've chosen their fate

2018-12-31 01:41:59 UTC  

The will of the people is a double edged sword

2018-12-31 01:44:22 UTC  

I plan to one day live on my own somewhere in Montana or elsewhere in the interior of the US

2018-12-31 01:44:37 UTC  

Just in general try and be ready to provide for yourself

2018-12-31 01:59:36 UTC  

I've been thinking about leaving my current area

2018-12-31 01:59:49 UTC  

Can't decide where to move though

2018-12-31 02:01:42 UTC  

My thoughts are either somewhere out in Eastern Washington State, Eastern Oregon, Idaho, or Eastern BC

2018-12-31 03:00:34 UTC  

Пigger left

2018-12-31 04:20:49 UTC  

all AI work that has been done in the past 20 years should be burnt to the ground

2018-12-31 04:24:19 UTC  

BOOMER GANG

2018-12-31 04:41:33 UTC  

Shabbat shalom

2018-12-31 04:44:36 UTC  

Shalom

2018-12-31 04:45:22 UTC  

> AI is too dangerous to be developed or used en masse

Well, it depends. It depends on the kind of AI in question. Narrow AI? It's probably *never* really going to be a super dangerous thing, as long as it's kept to a small scale and a specific task, and always used in conjunction with a human task giver/overseer.

General AI? Actually dangerous shit, yes; it should not get looked into.

2018-12-31 04:45:47 UTC  

Don't develop it at all

2018-12-31 04:45:48 UTC  

Ban it

2018-12-31 04:45:58 UTC  

there should be a ban on General AI, yes, I'd agree

2018-12-31 04:46:22 UTC  

We can't trust ourselves to just have *a little* AI; it'll eventually become a slippery slope

2018-12-31 04:46:49 UTC  

nooo, no no no, not quite. The kind of tech we use in narrow AI is totally different from anything that'd be capable of being human-level (let alone beyond)

2018-12-31 04:47:12 UTC  

we don't even really understand what that would look like

2018-12-31 04:47:26 UTC  

there's not even a blueprint for how to go about creating such a vague concept

2018-12-31 04:47:54 UTC  

the narrow AIs themselves however will be dangerous *in certain contexts*

2018-12-31 04:47:56 UTC  

for example,

2018-12-31 04:48:06 UTC  

in situations where you'd replace all humans in the military and police with them

2018-12-31 04:48:13 UTC  

dystopia level right there

2018-12-31 04:48:42 UTC  

*Looks at Xi Jinping*

2018-12-31 04:48:50 UTC  

Though the west will probably come around eventually

2018-12-31 04:49:36 UTC  

they'd be able to carry out any task, any order, any command with zero remorse or consideration for such, if they'd even possess anything resembling sentience, which they indeed probably would not. You could order a narrow-AI-based army to nuke Texas **for no reason** and it would obey because, well, *of course it would.* Human soldiers might revolt when faced with such a command.

2018-12-31 04:51:07 UTC  

So there are some golden rules I have for going forward with this kind of tech:

1) limit it to narrow AI
2) ban general AI
3) keep all narrow AI outside the police, the state, the military, the general chain of command, and the offices of civil servants

2018-12-31 04:51:59 UTC  

but you can't possibly stop all AI development; there's far too much investment in it

2018-12-31 04:52:04 UTC  

by far too many people

2018-12-31 04:53:46 UTC  

@Xinyue the obvious solution is to replace humans with AI

2018-12-31 04:53:52 UTC  

so we are all on the same playing field

2018-12-31 04:53:57 UTC  

evolution waits for no one

2018-12-31 04:54:06 UTC  

Narrow AI is too relative a category, and machine learning in general is dangerous

2018-12-31 04:54:23 UTC  

christ

2018-12-31 04:54:31 UTC  

are you guys really going to go Mechanicum of Mars on us

2018-12-31 04:54:33 UTC  

@Ten-Speed_Bicycle except that is not the solution at all 🤔 that's a non-solution