the-temple-of-veethena-nike_general
Discord ID: 633966934622208031
547,842 total messages. Viewing 100 per page.
Prev |
Page 1493/5479
| Next
an artificial intelligence will run on different hardware
no shit
but if the algorithm is a FUNCTION of specific hardware
it can't self-seed
NNs will also not have the empathy mechanism like we do
bio-neurons have a workaround for that to achieve prioritization, but in an atomized sense yeah.
NNs lack the gut bacteria influencing their decisions with serotonin
that is a meta detail, way out of scope
please define self-seed
of course it won't have empathy
what is nns
nanodes?
Neural Networks
o
What if the next step in AI is to make it run on human bodies and brains?
@Irreversal Neural Networks. A simulated network of neurons modeling an actual brain function
neural networks could have empathy, it depends on how we seed them
heard of it before i think
self-seeding means that a mechanism or algorithm for evolving intelligence lacks the boundary elements and convention to evolve that algorithm
to split hairs on that and say it is illusory empathy is an odd choice, because you could do the exact same thing with humans
@Kealor the underlying reason for that empathy though will be a set of rules in a file, not the complicated instinct that needs an interplay between a limbic system, several other glands and gut bacteria giving you extra chemicals...
the boundary elements and convention for intelligence are simply an extension of the brain's need to organize and compress stimulus information without the processing cost growing exponentially with every new input
do you mean "a lack of self-seeding --"?
Damnit, I'm trying to hold on, but need to shut down for at least a few hours. >.<; Cheers.
@ManAnimal you can make a NN that is self-correcting and/or self-evolving
@Uksio yes, but it could still be manifested without chemical imbalances. the mechanism would be different but it would result in a behaviour pattern that is rather similar to empathy
self-seeding involves two factors: 1) optimization criteria and their selection and 2) the ability of any algorithm to self-organize new input which lacks an initial schema
Poor lauci. I feel bad for a person that is left out because of ignorance on the topic
in the same way that other animals engage in hierarchies but lack a system for serotonin
True
and it is conceptually possible for machines to achieve that phenomena
But that is not an impossibility; the systems merely are not there yet
Hahaha. No. But even though you brought up a good counterpoint, MMI has the risk of overtaxing the biological components.
it is far easier to optimize a specific task; it is another thing entirely to select a goal and then decide which task to pursue
Anyhow, now I actually sleep.
Sweet dreams.
it's a mathematical limitation though
Let your brain sort your memories in peace
but do you acknowledge that you are essentially claiming that because we don't know how to make true AI now, true AI is impossible
do you know Big-O notation in regards to relating number of inputs to processing time?
It is a mathematical limit for your *current* hardware
that is not what i am implying at all
yes
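The Big-O point above can be made concrete with a small sketch (my own example, not from the chat): counting how the number of basic operations grows with input size n for a linear versus a quadratic algorithm. The function names are mine, chosen for illustration.

```python
def linear_scan(items):
    """O(n): touches each input exactly once."""
    ops = 0
    for _ in items:
        ops += 1
    return ops

def all_pairs(items):
    """O(n^2): compares every input with every other input."""
    ops = 0
    for _ in items:
        for _ in items:
            ops += 1
    return ops

for n in (100, 200, 400):
    data = list(range(n))
    print(n, linear_scan(data), all_pairs(data))
```

Doubling n doubles the O(n) count but quadruples the O(n^2) count, which is why input growth, not raw hardware speed, dominates processing time.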
it isn't a limitation
New and much more capable hardware will not be affected by said limit
it's a design decision
evolution suffers from the phrase, "you cannot get there from here"
Google recently built and demonstrated a freaking quantum chip
a NN models an intermediate step
It did 10 000 computational years of work in a few minutes
yes computational self-development will be much more powerful when it is achieved
preselected optimized criteria
i haven't actually looked into that yet
has google actually proven it sufficiently?
10 000 years. That chip can crack your high-security encryption by brute force
we can ALREADY process discrete info in superior fashion to the human brain
further pursuing the route won't arrive at intelligence; only faster computation of pre-selected criteria
It is a very specialised set of tasks for the computers
Our brains are the opposite
intelligence is the SELECTION of criteria
Very generalised
not application
the human mind "can" process discrete information in an efficient manner, we just don't have nearly as direct an access to those mechanisms as a computer does
sure
it manifests in how autists "visualize" mathematical phenomena
but a computer can't come CLOSE to the parallel processing ability of the human brain
no contest
a neural network is the result of intelligence
the parts of the brain wired for optical calculations are extremely sophisticated, and it's suspected that their brains bypass the normal mental methods to work through mathematics using those same processes
not the source
that is a massive assertion
and that's the disagreement
my fucking god this _name in winders kills me
most see the brain as being like a computer, each portion having specific hardware
The machine just does not do searches with normal spaces and blaNAMEbla formats
rather than the brain being like a memory register
winders is dumb this way
where the hardware is the same, but the parts of memory which handle certain functions are because of a running kernel
that is an empirically defined system though
there is no way to tell these apart empirically
bridging that with the constructed system of computational processing is likely what will lead to the development of ai, rather than sophisticated illusions
agreed
singularity
just because it hasn't been bridged yet doesn't mean a bridge is impossible, as you seem to be claiming
why can't NN be the source and a result of intelligence?
That's literally a description of an evolutionary lööp
which is one of the key reasons i think they are greatly exaggerating
because a NN doesn't self-seed, nor can it handle selection of optimization criteria
or atleast by virtue of them being different, to assert that they fundamentally cannot be merged requires more than just the assertion
how does it not self-seed?
There are NNs that create other NNs and evaluate them
oh 100% everytime any company uses the term "AI" they are full of shit
Neural networking is a phenomenon relatively detached from its namesake
Constantly improving NNs that do a better version of self is literally what Amazon spends billions on
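The "NNs that create other NNs and evaluate them" idea can be sketched as a toy search loop (entirely hypothetical, not Amazon's or Google's actual system): one process proposes candidate network architectures, a stand-in fitness function "evaluates" each, and the loop keeps the best. All names and the scoring rule are my own illustrative assumptions.

```python
import random

random.seed(0)

def propose_architecture():
    """Generator process: emit a candidate network as a list of layer widths."""
    return [random.choice([8, 16, 32, 64]) for _ in range(random.randint(1, 4))]

def evaluate(arch):
    """Stand-in for training + validation: reward depth, penalize width."""
    return len(arch) * 10 - sum(arch) / 16

best, best_score = None, float("-inf")
for _ in range(50):  # the outer loop "breeds" and scores candidates
    arch = propose_architecture()
    score = evaluate(arch)
    if score > best_score:
        best, best_score = arch, score

print("best architecture:", best, "score:", best_score)
```

Real neural-architecture-search systems replace `evaluate` with actual training runs, which is where the billions go; the loop structure itself is this simple.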
because these are two separate processes and the input schema is always pre-defined
the brain doesn't require you to know what you are looking for when you first encounter an instance
btw Amazon throws out algorithms that are starting to make too many decisions
lol
you create a representation in its mind, and until you associate that with a definition, you only have its identity
Imagine a poor algorithm being "I am self-sentient"
Amazon: delet
machines are capable of the same behaviour
behaviour?