Message from @Ten-Speed_Bicycle
Discord ID: 529160239761850369
Shalom
> AI is too dangerous to be developed or used en masse
Well, it depends. It depends on the kind of AI in question. Narrow AI? It's probably *never* really going to be a super dangerous thing, as long as it's kept to a small scale and a specific task, and always in conjunction with a human task giver/overseer.
General AI? Actually dangerous shit, yes, that should not get looked into.
Don't develop it at all
Ban it
should be ban on General AI, yes, I'd agree
We can't trust ourselves to just have *a little* AI; it'll eventually become a slippery slope
nooo, no no no not quite. The kind of tech we use in narrow AI is totally different from anything that'd be capable of being human-level (let alone beyond)
we don't even really understand what that would look like
there's not even a blueprint for how to go about creating it, just a vague concept
the narrow AIs themselves however will be dangerous *in certain contexts*
for example,
in situations where you'd replace all humans in military and police with them
dystopia level right there
*Looks at Xi Jinping*
Though the west will probably come around eventually
they'd be able to carry out any task, any order, any command with zero remorse or consideration, if they'd even possess anything resembling sentience, which they indeed probably would not. You could order a narrow-AI-based army to nuke Texas **for no reason** and it would obey because, well, *of course it would.* Human soldiers might revolt when faced with such a command.
So there are some golden rules I have for going forward with this kind of tech:
1) limit it to narrow AI
2) ban general AI
3) keep all narrow AI outside the police, the state, the military, the general chain of command, and the offices of civil servants
but you can't possibly stop all AI development, there's far too much investment in it
by far too many people
@Xinyue the obvious solution is to replace humans with AI
evolution waits for no one
Narrow AI is too relative a category, and machine learning in general is dangerous
christ
are you guys really going to go Mechanicum of Mars on us
@Ten-Speed_Bicycle except that is not the solution at all 🤔 that's a non-solution
HURR TECHNOLOGY BAD CUZ IT CHANGES THINGGGSSS
>When you make a robot and then kill yourself so that the robot can live your life <:ancom:520002567988838401>
Why would you cuck your entire species, though? That goes against pretty much the whole reason why you, and your entire species line, exist.
Also there's no gain in it
the mind uploads aren't gonna work
>cuck your species
are you cucking Cro-Magnon man by having evolved into Homo sapiens?
Ok this is epic
Funny, man survived wars that killed millions and massive plagues that nearly wiped out his own people, yet, if technological development is to continue, his own creation will be his undoing
That was a remarkably bad point, man. First of all, the genes were still present there, albeit in a different form. There was still a continuum between the one that came before and the one that came after. Second of all, that wasn't a choice on anyone's part, and therefore not *cucking.*
You, though, seem to advocate for the abrupt phasing out of biological life in favor of becoming AI, which, besides being probably impossible from a scientific standpoint, is the most insane concept there is.
But let's dwell on that little point of scientific untenability for a moment
see, it might be that the brain is non-computable. And if it is non-computable, then it quite literally can never be transferred over to a digital form. And to make matters worse, it might further be that *no computer can ever be conscious,* specifically because the mind has that element of non-computability, a semantics in addition to syntax, which a computer lacks.
My point is that banning technological advancement is kind of a shitty idea too.
>the mind has an element of non computability
<:lol:521377935672737792>
Well yes