File :-(, x)
Androids Anonymous
moar
>> Anonymous
     File :-(, x)
>> J !YqTAzpG/6w
Post moar, get moar. It's internet Karma.
>> Anonymous
     File :-(, x)
>> The Plague
     File :-(, x)
Silfa from To Heart 2
>> Anonymous
     File :-(, x)
>> Anonymous
     File :-(, x)
>> Anonymous
     File :-(, x)
>> Anonymous
     File :-(, x)
>> Anonymous
     File :-(, x)
>> Anonymous
That's not /d/, that's /e/....
>> Móci
     File :-(, x)
Fuckin yes!
Moar Serioooo!!!!!!!
>> Da Mister
     File :-(, x)
Nice, time for KOS-MOS.
>> Móci
     File :-(, x)
>>549004

You say Kos-Mos? :D

I think Serio and Kos-Mos are the hottest android chicks ever.
>> RiderFan !!o7sFgCyZbre
     File :-(, x)
>>549015
>> Da Mister
     File :-(, x)
>>549015
I can't agree more. Too bad Xenosaga is gone... only for now, I hope.
>> Da Mister
     File :-(, x)
This pic might be an in-game screencap, but isn't she amazing? To this day I haven't had the guts to reach the ending of the 3rd Episode. I hate Namco Bandai; it's a crime.
>> Móci
>>549032
You're right, Kos-Mos in this pic is extremely cute...
>> Da Mister
>>549043
Aye, the first time I saw this pic I was speechless. And if you remember the cinematic where her eyes turn blue permanently, she becomes far more human, though for quite a sad reason. I still consider that one of the biggest moments ever in RPGs, period.
>> Da Mister
     File :-(, x)
Alright, this will be a KOS-MOS thread from now on!
>> Da Mister
     File :-(, x)
I don't have this lovely pic (an amazing would-be wallpaper) in higher res. Anybody want to help?
>> godgundam10
     File :-(, x)
No Mahoro or Minawa? They're both androids. For Shame, 4chan.

"Ecchi na no wa ikenai to omoimasu!" ("I think dirty things are bad!")
>> Da Mister
     File :-(, x)
>> Anonymous
     File :-(, x)
>> Anonymous
     File :-(, x)
>> Da Mister
     File :-(, x)
You remember when this scene takes place? Shion almost lost it.
>> Anonymous
     File :-(, x)
>> Da Mister
     File :-(, x)
Of course you're aware where this incredible wall comes from: it was edited from the CG gallery included in the best videogame book I own, "Xenosaga Episode I -Official Design Materials-". An amazing hardbound book full of info, even covering the ending of the series.
>> Anonymous
     File :-(, x)
>> Anonymous
     File :-(, x)
>> Anonymous
     File :-(, x)
>> Da Mister
     File :-(, x)
I just can't get enough of KOS-MOS, even after all these years.
>> Anonymous
     File :-(, x)
>> Anonymous
     File :-(, x)
>>548972
MORE!
>> Anonymous
     File :-(, x)
>>549103
Agreed.
>> Anonymous
     File :-(, x)
>> Anonymous
     File :-(, x)
>> Anonymous
I predict that in fifty years the first series of android sex slaves will be produced. First a technician tests you and feeds the info into a computer, which will scan it and pop out the ultimate slave for you. She will know all of your preferences and wants/needs. You will have the choice to take her home right away, or send her out into the world and let her start her own life according to how you want her to; then after a little while she will seek you out, and you won't even be sure she is yours until she tells you. Aside from sex, obviously, she will also be able to do anything else you want, like be a housewife or even have children.
>> Da Mister
>>549112
You think so? And where is the fun without fights? I think it would be a bit boring having everything your way, don't you agree?
>> Anonymous
>>549114
Then there would be the option to send her out into the world to start her own life. She would adjust her personality so that you would like it, but she would also form some of her own needs and wants. These androids would probably be made out of a durable biological agent as opposed to composites, even the brain. There would probably be laws passed so that only exactly human-like androids could walk in public; I am sure we would see some really pimped-out sex slaves that wouldn't be human-like.
>> Da Mister
     File :-(, x)
KOS-MOS will live forever.
>> Da Mister
>>549118
Still, there would be some kind of control from you; the challenge you have starting from the ground up can't be matched. However, I agree it would turn out nicely to be able to choose and modify her a bit to your liking (I hope that's not suggesting the stereotypical "man in control" nonsense).
>> Da Mister
     File :-(, x)
This point reminds me of Feb and the rest of the Realians from Xenosaga. They were created for multiple purposes (like combat); are they living beings? Is this proper from an ethical point of view? 4chan can also handle deep themes like this one. Discuss...
>> Da Mister
     File :-(, x)
Naturally, I'm not forgetting the main focus of the thread.
>> Anonymous
>>549123
Well, you get what you pay for. If you want a girl with modular parts or the ability to go in and have modifications done, then you can. If you want a girl who is self-aware and has been issued basic rights due to her complexity, then you can. There will probably be an ethical line drawn at some point; maybe when an android reaches self-awareness it is considered a living being. All you have to worry about is a Terminator scenario. But if she is human enough, then it's not a problem.
>> RiderFan !!o7sFgCyZbre
     File :-(, x)
>> Da Mister
>>549145
But if real artificial intelligence is achieved, what could keep these beings from taking over our place? Usually science does not care whether something should be done; all it worries about is when it can be accomplished, "assuming the role of god", as Nietzsche mentioned. I don't know. Of course I like new discoveries that improve life, but the question is how far must we go?
>> Da Mister
     File :-(, x)
>>549152
Welcome, do you wish to join this discussion? More KOS-MOS in the meanwhile.
>> Anonymous
>>549154
Then you make them so human-like that they don't feel any independence or special detachment from the human species. Only their master will know that they are not human. That doesn't apply to the "toy" girls who are only used for sex instead of higher-level relationships. In that case you make them three-laws safe (or four laws, so they don't take over as an act of benevolence like in I, Robot). You also create kill orders and put forward countermeasures. There are a variety of things one can do.
>> RiderFan !!o7sFgCyZbre
     File :-(, x)
>> Da Mister
     File :-(, x)
>>549166
It's impressive how much you can try to make this possible: new laws to keep balance between humans and creations. Well, "creations" we are as well; jumping to the creator line sounds interesting, intriguing, exciting. But what defines life? How it appeared, or how you created it? What defines you? A human is defined by his actions (this is mentioned in Xenosaga!), not his origins; but still, WE created life...
>> RiderFan !!o7sFgCyZbre
     File :-(, x)
>> Da Mister
>>549172
This pic resembles the KOS-MOS v4 concept art for the 3rd Episode. I've gone to no limits for this series: research, studies, galleries, books, the well-known anime, and so on.
>> Anonymous
>>549195
An intelligent conversation? On my internet? I'm... speechless! Bravo /e/! Leave it to /e/ to bring the intelligence... oddly enough.

I'm all for AI and android friends, as long as, like everyone else noted, we avoid the Terminator scenario. Otherwise I can't stand it in movies, or, say, the Animatrix scenes where they're just destroying the androids that were just doing their "human" thing. I can't even scratch a damn toaster without feeling sorry...

I swear, I was born 200 years too early damnit...
>> Da Mister
     File :-(, x)
>> Da Mister
>>549206
Well, why not? We have no idea how much is happening behind the curtains right now. Even the most famous names of the pharmaceutical industry are conducting experiments far beyond our imagination. And they don't give a damn about humankind (remember the Nazis? History tends to repeat itself).
>> Casterday !!Tf4hX9zOMIO
     File :-(, x)
ANDROID Caster/Rider
>> RiderFan !!o7sFgCyZbre
     File :-(, x)
If and when we finally manage to create one, will it be governed by Isaac Asimov's 3 laws of robotics? (A rough sketch of such a priority check follows the list.)
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given to it by human beings, except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
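Purely as an illustrative aside, here is a minimal Python sketch of what that strict priority ordering could look like as a pre-action check. Everything in it is hypothetical, especially the predicted-consequence flags; in any real system, predicting harm would be the hard part, not the rule ordering.

    def violates_first_law(action):
        # Harm to a human, directly or through inaction.
        return action.get("harms_human", False)

    def violates_second_law(action, order_given):
        # Disobeying a human order (only relevant if an order was given).
        return order_given and not action.get("obeys_order", True)

    def violates_third_law(action):
        # Failing to protect the robot's own existence.
        return action.get("endangers_self", False)

    def permitted(action, order_given=False):
        # Check the laws in strict priority order; a higher law always wins.
        if violates_first_law(action):
            return False
        if violates_second_law(action, order_given):
            return False
        if violates_third_law(action) and not order_given:
            # Self-preservation yields only to the first two laws.
            return False
        return True

    # Example: an ordered action that harms nobody but risks the robot itself.
    print(permitted({"obeys_order": True, "endangers_self": True}, order_given=True))  # True

The only point of the sketch is the ordering: a later rule never overrides an earlier one, which is exactly the property the three laws are defined by.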
>> Da Mister
>>549217
Nice crossover between T-Elos and Rider there!
>> Casterday !!Tf4hX9zOMIO
>>549220
Thanks, I'll warn Will Smith about the incoming robot revolution too
>> Da Mister
>>549220
But that also brings one more problem: true freedom does not exist; yours ends where someone else's starts. I've read Asimov too; how do you overcome that master-creation conflict without imposing one's will, or letting these creations loose?
>> Casterday !!Tf4hX9zOMIO
>>549223
Let's see. The simple fact is that control is an illusion; we do not have control, and control surely does not have us. If you think you have control, you don't. No one does; not even President Bush has control. It's an illusion, a simple illusion that we control something. To get past this illusion, one must understand that we are all free-thinking beings, and that anyone given "control" of a mass-produced android unit would only lose the illusion that we control them. When AI begins, then we will see what kind of illusion we live in.

In reality, once we are broken down, what have we lost? What has been taken from us?

Our Illusions
>> Anonymous
>>549220

If true AI is ever achieved, and that's a big IF, it won't work based on those kinds of primitive rule sets... that's just not the way AI works.

And it's a big if because, at least according to some scientists, it's just plain impossible. To create an AI that perfectly emulates our own intelligence, we would first have to completely understand how we work... and that would violate the law that no entity can contain all information about itself. But, well, that's the theory anyway.
>> Anonymous
     File :-(, x)
>>549084
>>549087
>>549088
Delicious Ropponmatsu. There needs to be more.
>> Anonymous
>>548872
Is that Curly Brace?
>> RiderFan !!o7sFgCyZbre
     File :-(, x)
"Chii?"
>> RiderFan !!o7sFgCyZbre
     File :-(, x)
>> RiderFan !!o7sFgCyZbre
     File :-(, x)
We could bypass the laws by using the "Friendliness Theory" - a theory which states that, rather than using "Laws", intelligent machines should be programmed to be basically altruistic, and then to use their own best judgement in how to carry out this altruism, thus sidestepping the problem of how to account for a vast number of unforeseeable eventualities.
>> Anonymous
     File :-(, x)
>> Anonymous
     File :-(, x)
>> Anonymous
>>549220
Robots will do whatever humans program them to do.
We already have war robots, why not terrorism robots? They could also kill on their own terms, if free will is based only on exponential thought capacity, as is currently suggested.
>> Anonymous
     File :-(, x)
The lack of Aiko in this thread is criminal.
>> Casterday !!Tf4hX9zOMIO
I think we've blown this thread up to epic conversational proportions. Intellect increase +1000
>> Da Mister
     File :-(, x)
That's precisely one of the biggest risks if the ambition for true A.I. is ever realized: any kind of restriction or alternate route implemented on the new beings already involves assigning instructions to their behavior programs; but on the other hand, "giving" them absolute freedom modeled on ourselves is repeating the same mistakes we have made again and again.
>> Da Mister
>>549361
And I'm thankful you have attended to my request in such an active way; that proves a few things wrong about the whole internet.
Retaking the point: at the very beginning of Episode I (Xenosaga), if you remember, KOS-MOS kills Virgil without remorse. Shion is in shock, and KOS-MOS tells her: "I'm just a weapon." Even though KOS-MOS was programmed with logical thinking and the weighing of priorities, she is totally unable to handle illogical human thinking; her commands were far too limited back then. When she became her true self, was she no longer an android?
>> Da Mister
     File :-(, x)
>>549230
And I agree, the human brain is an incredibly complex structure; mimicking it even sounds like fantasy. And then again, if neural processors can ever be created, how can they be kept from duplicating ourselves (too dangerous, I think)?
>> RiderFan !!o7sFgCyZbre
     File :-(, x)
After the image limit is reached, we all must archive this thread.
>> Anonymous
     File :-(, x)
do nano-machine colonies count?
>> Anonymous
     File :-(, x)
World's sexiest alarm clock.
>> Da Mister
     File :-(, x)
>>549398
You are correct; I have taken a snapshot with Photoshop already.
If I can create a new thread, I will, so we can continue on the subject, or one of you can start it.
The truth is: if we haven't answered the three key questions (why are we here? who are we? and what are we here for?), we will not be able to understand ourselves, and thus we will never be able to develop anything above us, will we?
>> Da Mister
>>549417
This is another interesting point too. After all, nanomachines possess some amount of intelligence to seek out and destroy illnesses and their main causes; but again, what determines how new forms of life need to be treated? I'm not so arrogant as to say how.
>> Twilightdrgn
Thank god, an epic KOS-MOS thread. I'm crying tears of joy.

Hopefully Namco Bandai picks this series up again. The ending for ep. 3 made me BAWWWW.
>> I love Ilfa ~ Anonymous
     File :-(, x)
As much as I enjoy the intelligent conversations,
I've decided to hijack this thread with moar HMX series from ToHeart ~~
>> Anonymous
     File :-(, x)
this pic is so adorable
>> Anonymous
     File :-(, x)
Silfa is really cute...
>> Anonymous
     File :-(, x)
>> Anonymous
     File :-(, x)
>> Anonymous
     File :-(, x)
>> Anonymous
     File :-(, x)
that's all for now, the rest aren't /e/nough for this board....

btw, this is a pretty nice doujinshi that I have lying somewhere on my HDD...
>> Da Mister
     File :-(, x)
>>549450
If my information is right, the Xenosaga IP belongs to Monolith Soft, and they work for Nintendo; they are the only ones capable of continuing this story, and no one else should get their hands on it (Namco Bandai actually stole the original story for the 3rd Episode and fired the man and his wife, supervised by Takahashi).
>> Anonymous
     File :-(, x)
>> Casterday !!Tf4hX9zOMIO
     File :-(, x)
>>549374
If you read the post about losing our illusions, it's the truth: if we give robots true artificial intelligence, we'd clearly lose our control over them, which shows that our control over the world is indeed an illusion. It's the illusion of control; our lives are under our own power, but if we think we have control beyond that, it's pure illusion that we do. When everything is taken from us, our possessions, the entirety of control, we still have everything we need to control what we do and how we live.

AI would cause more than just simple problems. KOS-MOS was programmed as a weapon, and simple operating systems revert back to their original programs when conflicting commands or programs try to change the basis of their operations, which is why she killed Virgil. That was before she started learning human emotional standards and rewriting the built-in original commands, adding new rules that overwrite the basic programming and contravene the orders first given in that programming. In every sense, she not only gains a human feel, she loses the illusion of her original programming, protects on her own, and becomes a weapon of humanity, not just for humanity. She lost the illusion of control.

Again, what happens when someone takes everything from us, what have they taken? Illusions.

Once someone takes that from us, all we have lost is >Our Illusions<
>> RiderFan !!o7sFgCyZbre
     File :-(, x)
Have some more Kos-mos.
>> RiderFan !!o7sFgCyZbre
     File :-(, x)
>> RiderFan !!o7sFgCyZbre
     File :-(, x)
>> Casterday !!Tf4hX9zOMIO
>>549739
nice... saved for hi-res
>> RiderFan !!o7sFgCyZbre
     File :-(, x)
>> RiderFan !!o7sFgCyZbre
     File :-(, x)
>> RiderFan !!o7sFgCyZbre
     File :-(, x)
>> RiderFan !!o7sFgCyZbre
     File :-(, x)
>> RiderFan !!o7sFgCyZbre
     File :-(, x)
>> RiderFan !!o7sFgCyZbre
     File :-(, x)
One Hundred Million Dollars!
>> RiderFan !!o7sFgCyZbre
     File :-(, x)
>> RiderFan !!o7sFgCyZbre
     File :-(, x)
>> RiderFan !!o7sFgCyZbre
     File :-(, x)
>> RiderFan !!o7sFgCyZbre
     File :-(, x)
>> RiderFan !!o7sFgCyZbre
     File :-(, x)
>> RiderFan !!o7sFgCyZbre
     File :-(, x)
>> RiderFan !!o7sFgCyZbre
     File :-(, x)
>> RiderFan !!o7sFgCyZbre
     File :-(, x)
>> RiderFan !!o7sFgCyZbre
     File :-(, x)
>> RiderFan !!o7sFgCyZbre
     File :-(, x)
>> Anonymous
Spoiler: She dies
>> Anonymous
     File :-(, x)
>> RiderFan !!o7sFgCyZbre
>>549835
And that actually matters? /e/ has been fapping to canon dead girls for a while now.
>> Da Mister
     File :-(, x)
>>549718
Such a point creates a new dilemma. Imagine we manage to create A.I. (it will either remain just a dream or end up in a catastrophic scenario): true intelligence is totally independent, autonomous, and self-determined. KOS-MOS was indeed created as an anti-Gnosis weapon, even if she always had a hard time dealing with the human way of thinking; but setting ourselves as the example only makes us prone to be stabbed in the back.
Absolute control is non-existent; there are no absolutes in the universe (ironically, this is the only acceptable absolute), and freedom plus permanent contact with humans means they will always learn the best and the worst from us.
In my humble opinion, it shouldn't happen; some paths research takes must not be walked in any way, correct?
It's nice to see this thread still here. This subject is open to everybody who wants to join and share a few opinions on it; certainly an unusual request, but intriguing nonetheless.
>> Casterday !!Tf4hX9zOMIO
>>549835
Fapped to Caster, and she dies... either way death is life. Fap to it
>> Da Mister
>>549835
Um, huh? That's not the least bit relevant to us.
>> Da Mister
     File :-(, x)
Even though the (final?) episode was released back in 2006, KOS-MOS has secured a place among the most beloved VG girls ever.
>> Anonymous
     File :-(, x)
>> Anonymous
     File :-(, x)
>> Twilightdrgn
>>549835

Dumb shit. She didn't technically "die". She fell into dormancy. She can still be switched back to "on".
>> Da Mister
     File :-(, x)
>>549417
Oh my, that's Emeralda! (Xenogears); a previous incarnation of Fei created her. How careless of me...
>> RiderFan !!o7sFgCyZbre
     File :-(, x)
We forgot somebody important: Naomi Armitage.
>> Anonymous
OK I'll admit, I skipped a few posts in here, but it's a great topic and I have a few points of my own.

First: there will never be such a thing as "true A.I." there will only be "Intelligence." The ability to learn and adapt is what this refers to and has no bearing on sentience. Sentience would refer more to emotional state and the concept of being self-aware.

I believe, personally, that creating intelligent, sentient life is extremely possible--no, just a matter of time. Such a thing will undoubtedly be a mix of bio-technology and nano-technology. The brain will almost undoubtedly be modeled directly after our own; if not, it would have to be the first quantum computer. (Assuming that our brain is not a biological quantum computer.) Sentience would be achieved simply through hyper-fast thought processes that would start with an observation of immediate surroundings and lead to observations of self and one's own purpose. The same way mankind achieved it.

There would be no programming, simply the ability to learn.

There would be no laws, they would be governed by the same social laws that govern us.

Which is where we have to tread lightly. We, as humans, already create intelligent life just like this: they are called children. Yet they do not pose a creator/created problem; certainly not usually a violent one. Treat it the same and there should be no issue.

However, the PURPOSE for creating sentient intelligence is somewhat ambiguous. Why make it sentient at all? Just causes problems if all you want it to do is your calculus homework.
>> RiderFan !!o7sFgCyZbre
     File :-(, x)
>> RiderFan !!o7sFgCyZbre
     File :-(, x)
>> RiderFan !!o7sFgCyZbre
     File :-(, x)
>> RiderFan !!o7sFgCyZbre
     File :-(, x)
>> RiderFan !!o7sFgCyZbre
     File :-(, x)
>> Da Mister
     File :-(, x)
>>550659
When intelligence is created (once all the obstacles have been overcome), it must not be based on our own; that's no evolution. It needs to be way beyond ourselves, not liable to be contaminated by our problems, feelings, conflicts, etc.
Before we even think about success, we need to make a quantum jump to a far wider perspective, keeping a true objective: pure, focused. Think about this point: our brain can only be used up to about 10% (only super brains, Albert Einstein to mention a great name in history, go beyond), and we are already such a threat to our planet that using it to full capacity would probably be a danger to the whole universe.
I happen to agree with you: it's not only about walking the path without mistakes, it requires an authentic reason to do it. Why must it be done? What benefits can we obtain from doing so? How will humankind become a better community?
I will never dare diminish us humans at all (in case I suggested so, I'm retracting it right now). I HOPE a much better future can be reached (and I'm working on my own), for our sake and for the ones depending on it, because the way things are now...
When a past incarnation of Fei (4500 years before the events of Xenogears) created Emeralda as a colony of nanomachines, she had difficulties interacting with the characters, but she recognized Fei and Elly immediately, and even though she never spoke, comprehension between her and the rest was not a problem at all. Listening to the OST now refreshed my memory.
>> Anonymous
>>550659
Finally, someone who makes sense.
>> Anonymous
As touched upon by Isaac Asimov with his 3 laws, humanity has to be very careful when creating artificial intelligence.
intelligence is defined as: 1. "the ability to learn or understand or to deal with new or trying situations"
2. "the ability to apply knowledge to manipulate one's environment or to think abstractly as measured by objective criteria"

Implicit within this is that the organism DOES NOT have outside influence; it is only intelligence when it can do this without being told how. Otherwise it is just programming; it must learn.
The ability to view the world in different ways must be present, as well as the drive to understand... this cannot be coded.
If humanity creates AI, we cannot have any control over it, lest we stunt it and just end up creating a brilliant program. For if we control this "life", then we are effectively sitting at a keyboard coding. If they cannot work out things for themselves, then they are not intelligent; there is no thought about what we tell them, they just do it.
Our emotions, feelings, mistakes, and laws cannot be enforced upon them.
The problem remains, then: how do we create such a thing while controlling no aspect of it?
I believe that humanity must make machines that can work on themselves and improve themselves, and allow the machines to become sentient by themselves.
As with humanity crawling out of the mud and becoming what we are: a robot that can fix itself, that can experiment and improve itself (this is semi-completed already, once again by Victoria, with a program that constantly makes new machines that are better than the last (a mini land racer), where the program "learns" with each step); a rough sketch of that kind of loop is below.
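Purely as an illustration of that last idea, here is a small Python sketch of an iterate-mutate-keep-the-better-one loop. All of it is hypothetical: the "design" is just a list of numbers and the fitness function is made up. Nothing about the target is coded in beyond the score itself, which is the point being made above.

    import random

    # Toy evolutionary loop: mutate a "design" (here just a list of numbers)
    # and keep the variant only if it scores better than its predecessor.
    # The only knowledge built in is the fitness function.

    def fitness(design):
        # Hypothetical score: closer to the all-ones vector is "better".
        return -sum((x - 1.0) ** 2 for x in design)

    def mutate(design):
        # Random variation: nudge one randomly chosen component.
        child = list(design)
        i = random.randrange(len(child))
        child[i] += random.gauss(0, 0.1)
        return child

    design = [random.uniform(-1.0, 1.0) for _ in range(5)]
    for step in range(10000):
        candidate = mutate(design)
        if fitness(candidate) > fitness(design):
            design = candidate  # the new "machine" replaces the old one

    print(fitness(design))  # climbs toward 0 as the design improves

The loop "learns" only in the sense the post describes: each kept step is better than the last, with no human saying what the design should look like.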
>> Anonymous
continuing from previous post


This way, a new sentient race can be created, free from our restraints, our thought patterns, our mindset.
We do not know what form this intelligence could take, and it may not necessarily be hyper-fast, silicon-based, carbon-based, etc. The robots need to be coded to make the choice, to try it out for themselves, and then be left in peace.

Look at Earth, where billions of species must have hit evolutionary dead ends over time. Robots WILL hit these, and may never achieve sentience, but this way the chance is there that the computers will "hit the evolutionary jackpot" and evolve.

-Just my 2 cents, rambling and semi-literate thoughts :)
>> Anonymous
     File :-(, x)
Spoilers!
1. She's Mary Magdalene.
2. She doesn't die, but does get her legs blown off and shot into space and that's the last we see of her.
3. She's a pretty nice person if you can flip her into non-killbot mode.
4. I don't care how hot for her Shion is; she's not anatomically correct under that.
>> Anonymous
Knowing how the Professor thinks robots should be... He probably made her correct when he was rebuilding her with Assistant Steve.
>> Anonymous
>>551343
That guy's idea of "sexy" was giant and looking like a Gundam.
>> Anonymous
Studio Onegai had put together the cutscene videos from Xenosaga I (a total of 13 videos). I wish they'd start torrenting them again. It used to be at boxtorrents but got removed since it was game footage.
>> Anonymous
>>551345
That may be true, but Assistant Steve is apparently his moderator, who kept him from turning KOS-MOS into Gundam K05-M05. Regardless of his love for big mecha, he and Steve gave Kos-Mos lacy underthings, so there must be something worthwhile under said lacy underthings.
>> Twilightdrgn
>>551367

My thoughts exactly. :D
>> Anonymous
When it comes to AI, I always think of the book series Hyperion. I rather like its depiction of what happens after humanity creates AIs that start evolving on their own.
>> Anonymous
This is probably a stupid thing to say... it's an uneducated opinion. Oh well.

-When you have a child, maybe you love it. In any case, if you think of it as "your child" you wanna protect it. So that makes you a real Parent. Maybe you'd even be willing to die so the kid would live, 'cause you love it. But there's something beyond even that.
-There's a state of mind you might reach, as a parent, where you accept death. And it's not the normal kind, where you just understand that dying is inevitable. It's accompanied by a ridiculous sense of contentment. It's as if, in a cosmic game, your life is one move, and your child's life is the next. Even though the kid's completely different from you and has its own dreams, you can't help but pass the torch. Even if the kid hates you, you let him inherit your soul. And so it's okay to die when your time comes; you don't gotta worry.

That's what it has to be if we really are going to "create" the ultimate AI.
>> Anonymous
>>551828(cont'd)
We can create stuff all day, or at least take a bunch of elements and blend them into something with awareness. We can make something that thinks. We can make an AI that it's possible to fall in love with. We can make an AI that -wants- things. We might, one day, create a being in order to co-exist with us. If it goes "Terminator", we'll fight. But in my humble opinion, there's one final crowning achievement, for us as a human species: Creating something so -complete- and beautiful, that it satisfies our desire for purpose.
As individual people, we want to live, and experience stuff. Satisfy ourselves, you know?
But. For the person that creates the "Next Existence", there will come a feeling that no matter what humanity is or what we do, finally, we don't need to worry about surviving as a species. Because the children we've created are our successors.
Pretty stupid, huh? hahaha.
>> Anonymous
FUCK.
Threadkilling Expert, killed over 100+ threads
>> Anonymous
     File :-(, x)
Seeing all this Kos-Mos, I have to add one I found a long time back that just feels, I don't know, elegant in a sense. Heh, and darn you... your discussion here is a good read for my bored brain.
>> Anonymous
     File :-(, x)
In return for the beautiful Kos-Moses ~

Off DeviantArt, artist lilykane, since I know nobody can read filenames.
>> Anonymous
>-There's a state of mind you might reach, as a parent, where you accept death. And it's not the normal kind, where you just understand that dying is inevitable. It's accompanied by a ridiculous sense of contentment.

And so when we create artificial life/intelligence we'll be no different, and thus no better off than before. We'll have fulfilled just as much of our "purpose" as reproduction does already.

Satisfying the desire to "Climb the mountain because it's there" is just that. Satisfying a built in instinct. It's no great achievement if that's the only reason you do it.

We should instead try to be more than just animals who have to satisfy desires and needs. We can/should create AI for the sake of the betterment of our species.
>> RiderFan !!o7sFgCyZbre
This thread needs archiving. 4chanarchive is the place.
>> Anonymous
>>550848
The problem with this is that in order to create something beyond ourselves we must understand what it is we create (and how to get there) but in understanding that we would become it. It very quickly becomes a paradox.

The fact is that all life will begin with the same flaws we have; artificial life MAY be free of animalistic instincts, however. Though... I doubt it. I believe that the instincts people see in animals (and that are also readily apparent in humans) are a sign of sentience. A will to survive, to reproduce, to be safe, etc.

One could say that by attempting to create life beyond ourselves (as you put, beyond our emotions) we could create "cold life." Uncaring, emotionless, beings that act purely out of instinct without any restraint. Obviously this can be very dangerous as it is the case in humans that our emotions are what hold us back from acting purely out of instinct and try to push us toward cooperative living. (Where instinct thinks only about the self.)

To force the new life to go straight to a cooperative living mode would take pre-programming which either defeats the purpose of creating the life or will eventually cause the life to resent and hate you for controlling it.

Also, the idea that we only use 10% of our brains is a myth. The fact of the matter is that we only use 10% AT ANY GIVEN POINT IN TIME. Meaning, depending on what we're doing, we are using a different 10%. Yes, Albert Einstein was said to use as much as 12% at any given point in time, and how that worked is still a mystery. But I do not believe he had a "better brain" than the rest of us, just that he was better able to use it. That extra 2% was simply him being able to multi-task, as it were.

It is possible that parallel processing would enable us to create artificial brains that can do this more readily, but I doubt it. *shrug*
>> Anonymous
>>552248

The human mind is not a computer to be analysed as such.
If we are said to only be using 10% of our brain at any given time, there is no way of us telling which 10% is being used.
Are you talking about 10% in terms of intellectual capacity, 10% of the entire brain, factoring in emotions, sensory responses, or what?

Also, Einstein would have been able to use a higher percentage because:
For whatever reason, genetically, his brain was better at creating neural (right word?) pathways -> his brain could have more parts working at once due to better connections, where another person with the same 'smarts' couldn't.
His worldview allowed him to see connections better. Ever thought of the variations in how people think? Some think pictographically, others numerically, etc. Einstein was able to view the world in such a way that he could tie it in to his theories better. This is not necessarily using more brain.
And if you want to compare us to computers as such, why couldn't he just have had a more efficient brain?
>> Anonymous
come on, 4 archive requests so far
>> Anonymous
WALLS OF TEXT, EVERYWHERE.
>> Anonymous
How to request an archive? :(
>> Anonymous
>>551334
A good solution to this is to create AI that has a desire to survive, but not to PERPETUATE. The only problem with this is that self-perpetuating "mutations" would (duh) become the dominant strain in any such pool of offspring.

It's been shown in a lot of studies that perpetuating patterns always become dominant in simulations where there are no consequences for various actions and random processes are allowed to run rampant. The self-perpetuating ones end up using up all the resources or mutating the other processes into themselves.
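A throwaway simulation of that point (Python, with invented numbers): in a pool with a fixed carrying capacity and no cost attached to copying yourself, agents that self-replicate crowd out those that don't.

    import random

    # Throwaway simulation: a fixed-capacity pool of agents, some of which
    # copy themselves each generation and some of which do not.

    POOL_SIZE = 1000
    population = ["replicator"] * 10 + ["non_replicator"] * 990

    for generation in range(50):
        offspring = [a for a in population if a == "replicator"]  # one copy each
        population += offspring
        random.shuffle(population)
        population = population[:POOL_SIZE]  # limited resources: cull to capacity

    print(population.count("replicator") / POOL_SIZE)  # tends toward 1.0

With these numbers the replicators take over within a few dozen generations; the starting mix barely matters, which is the "dominant strain" effect described in the post.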
>> Anonymous
go to 4chanarchive.org
go to request interface
put in thread id: 548872
board: /e/

and the auth code.
>> Anonymous
Thanks Anon.
>> Anonymous
Wow.
tl;dr overload.

You can play armchair philosopher all you want,
but the fact of the matter is, as any Nihilist will tell you, we can do whatever the hell we want.
But damned be the person (or AI) who eventually screws us all over.
>> Anonymous
>>552914
Laffo at a post predicated on the concept that anybody is dumb enough to listen to nihilists.