Comic 155 - The spark....

23rd Jan 2014, 12:00 AM
The spark....
Average Rating: 5 (16 votes)

Author Notes:

Centcomm 23rd Jan 2014, 12:00 AM edit delete
Remember: Donations help feed your friendly artist and her cat!

Comments:

Dragonrider 23rd Jan 2014, 12:04 AM edit delete reply

Thank God and Greyhound, Jett got something done somehow and just in time. Doc Granger is gonna have nightmares about this for the rest of his days.
Centcomm 23rd Jan 2014, 12:40 AM edit delete reply

Oh yes .. this is his worst nightmare made "flesh" :D
Dragonrider 23rd Jan 2014, 1:01 AM edit delete reply

As I said last time, in his Not-So-Humble Opinion, what has been done to her can't be done because he didn't do it. BTW, as a side note, be sure he forgets to get a tetanus shot after the bite and let him get a case of "Lockjaw" for his troubles. Possibly he can be sent dirtside, assigned as liaison for Tokyo Rose, and made to establish a base in the middle of the Gobi Desert.
Centcomm 23rd Jan 2014, 1:19 AM edit delete reply

Well .. due to the fact that Galina eats the same food as humans ----

Human bites that break the skin, like all puncture wounds, have a high risk of infection. They also pose a risk of injury to tendons and joints.

Bites are very common among young children. Children often bite to express anger or other negative feelings.

Human bites may be more dangerous than most animal bites. There are germs in some human mouths that can cause infections that are hard to treat. If you have an infected human bite, especially on your hand, you may need to be admitted to the hospital to receive antibiotics through a vein (intravenously). In some cases, you may need surgery.

But she didn't get the chance to actually break the skin...

as for the other - that sounds fun!
Dragonrider 23rd Jan 2014, 1:34 AM edit delete reply

Well, put the inducer back on her so her pain is gone and let her get a "second bite of the apple." I wanna see this jerk go down hard. If possible he's worse than Evil-1 and Douchebag combined.

BTW make sure that base is in an area where the descendants of "The Red Guard" still run around and whip people that don't believe in Chairman Mao's Little Red Book.
Centcomm 23rd Jan 2014, 1:48 AM edit delete reply

Ooooh that sounds painful...
Dragonrider 23rd Jan 2014, 2:11 PM edit delete reply

Not as painful as the original suggestion I censored and changed. Nowhere near as painful.
jamie59 23rd Jan 2014, 12:04 AM edit delete reply

Interesting! Didn't expect that.
Centcomm 23rd Jan 2014, 1:12 AM edit delete reply

Hehe Yeah -- expect the unexpected :D
Stormwind13 23rd Jan 2014, 1:20 AM edit delete reply

As Cat would say, A & U from the creators of this comic. :-)
cattservant 23rd Jan 2014, 2:03 AM edit delete reply

See!
[You can tell them by their spots.]
mjkj 23rd Jan 2014, 5:06 AM edit delete reply

Indeed, very unexpected.

But a great way to save Galina...
velvetsanity 23rd Jan 2014, 1:12 AM edit delete reply

o/` go amy! go amy! it's your birthday! go amy! o/`

LOL. Didn't see that one coming.
Centcomm 23rd Jan 2014, 1:16 AM edit delete reply

Yep, it is indeed her Birthday! :D (Glad I could catch you off guard :D)
Stormwind13 23rd Jan 2014, 1:24 AM edit delete reply

Yeah, Happy Birthday to Amy... and an unpleasant SURPRISE present for Doctor Granger. I hope he CHOKES on it.

I would so feed him into a wood chipper feet first. The evil doctor deserves the worst kinds of death to be visited upon him. People will probably have nightmares about what he was trying to do to Galina here. :-p
Centcomm 23rd Jan 2014, 1:48 AM edit delete reply

Really? He really hasn't gotten the chance to do much yet...
Stormwind13 23rd Jan 2014, 6:37 PM edit delete reply

Threatening a child (and Galina is TWO, so is very much a child) with a knife makes my blood boil. Knowing that she is injured and unable to even remotely protect herself makes it even worse. And then knowing that Granger has NO compunction about utterly destroying a unique life, and would do so if he wasn't stopped... Makes me very much WANT to stop him... Permanently and painfully to make sure the point gets across. DON'T HURT THE KIDS!!!
Tokyo Rose 24th Jan 2014, 6:39 AM edit delete reply

Granger does NOT know that he'd be destroying her. What he intends to do is no more traumatic than taking a backup from a hard drive--for a robot or a positronic android brain. Galina's unique construction is another story.
Stormwind13 24th Jan 2014, 7:42 PM edit delete reply

Just ignoring other possibilities, though, shows me that Granger isn't a good scientist... nor a good person. And we have enough assholes; we don't need one with a trumped-up ego and an inflated opinion of himself. :-p So feed him into the wood chipper and get someone that has the ability to handle different situations.

Galina already has exhibited behavior and characteristics that he can't explain. Instead of taking the time to see why that might be, he is tromping all over Galina. Primarily because his stupid ass can't envision anything beyond what 'HE' says is the way it is.

So while Granger sucks as both a person and a scientist, he would make good fertilizer since he is full of shit.
cattservant 26th Jan 2014, 1:27 PM edit delete reply

I still wonder about his motives.
If he were being legitimate, he would have taken the time to study Galina's existing medical records.
Centcomm 24th Jan 2014, 12:18 PM edit delete reply

Points at Rose's post... she beat me to it.
velvetsanity 23rd Jan 2014, 1:48 AM edit delete reply

It (catching me off guard) is a rarity indeed. Well done! :D
Fairportfan 23rd Jan 2014, 2:15 AM edit delete reply
Centcomm 24th Jan 2014, 12:17 PM edit delete reply

hehe I like that.. :D
Stormwind13 23rd Jan 2014, 1:27 AM edit delete reply

::HUGS Amy:: ::HUGS (very carefully) Galina:: Go girls, teach that cretin some manners. :-)
Centcomm 23rd Jan 2014, 1:48 AM edit delete reply

Well I think Granger is about to have a nasty surprise
JacobJSebastian 23rd Jan 2014, 1:44 AM edit delete reply

So, I tried to tell my daughter about Datachasers. She asked what genre it was... I floundered this out: "high tech cyberpunk kinda post-apocalyptic action drama wibbly wobbly thingy."
Centcomm 23rd Jan 2014, 1:47 AM edit delete reply

Except this is Luna Star LOL
but that works ... :D
velvetsanity 23rd Jan 2014, 1:50 AM edit delete reply

It even has some timey wimey things in the form of flashbacks! And it's all humany wumany, too! (I'm a fan of The Doctor, if you can't tell :D )
Centcomm 23rd Jan 2014, 1:52 AM edit delete reply

LOL - I finally get that reference >_<
Mayyday 23rd Jan 2014, 1:56 AM edit delete reply

"People *assume* that time is a strict progression of cause to effect..."
velvetsanity 23rd Jan 2014, 2:17 AM edit delete reply

Yes. Unless they're familiar with The Doctor :D
velvetsanity 23rd Jan 2014, 2:13 AM edit delete reply

@CentComm Yay! If you get a chance you should watch the entire series (although some of the early episodes and stories no longer exist - the BBC ordered them destroyed to make room in the storage facility. Some were saved by a fan taking the film home and hiding it, though).
cattservant 23rd Jan 2014, 2:56 AM edit delete reply

I'm afraid I'm losing 'respect' for Dr. Granger.
I thought he was a credible scientist, but he's acting like a little kid learning how a clock works with a hammer and a pair of scissors!
Sheela 23rd Jan 2014, 12:35 PM edit delete reply

True; as far as a scientist goes, he should be interested in why she's suddenly spasming when she was relaxed when he entered.

The fact that he just wants to cut her open is indeed cause for concern regarding his medical and scientific credentials.
Tokyo Rose 23rd Jan 2014, 6:20 PM edit delete reply

Speaking as someone who's done tech support for a living: there are times, when working on a misbehaving computer, when there's an incredibly strong urge to pull the case open and take a ball-peen hammer to important bits of it.
Sheela 23rd Jan 2014, 6:33 PM edit delete reply

Ah, but would you do it to the one and only supercomputer your business has?
Tokyo Rose 24th Jan 2014, 6:41 AM edit delete reply

How much of a wretched pain in my ass is the supercomputer being? It might have an ACCIDENT.
Stormwind13 23rd Jan 2014, 6:46 PM edit delete reply

Yeah Rose, I know that feeling well; however, he hasn't been fighting with that piece of hardware for 6 months. Instead of taking time to see what he can figure out, he is going for the strip-mining method of exploration...

I would so blow his head off. He isn't THAT important (his own opinion aside), and if he can't operate any better than this, he is a liability not an asset. ::BANG:: :-)
Sheela 24th Jan 2014, 11:05 AM edit delete reply

Ah, but Rose, that's very ... unscientific of you!

But yeah, I know the feeling - Telecommunications tech here, remember?

You have no idea how many problems a building full of one large interconnected mess of electronics (from the lowest bidder, of course) can throw at you. Many pieces of it have noooo backward compatibility, and noooo forward compatibility .. in fact they have no compatibility at all! And they're 50 years old and on the fritz, but still completely necessary for the whole darn thing to run smoothly.
Add in modern components that only *just* manage to run by their own standards, never mind all the official standards, and you have an unholy mess.

And when I say "a building full of electronics", I mean that in the most literal sense of the word.
cattservant 23rd Jan 2014, 4:30 AM edit delete reply

It just occurred to me,
"Amy's Choice"
Is a very crucial event
In android evolution.*


*(In some ways more important than Galina.)
mjkj 23rd Jan 2014, 5:03 AM edit delete reply

Yayy, Amy woke up and helped Galina - Jet really did great work there...

*is relieved*

Now if Dr. Granger wants to use her for the other of her functions, it has just become wrong...

*hugs Amy and reattaches Galina's collar and hugs her, too*

Happy birthday, Amy! :D



...and I hope that May will find captain Kiku soooon...
dakyri 23rd Jan 2014, 7:44 AM edit delete reply

hoped it would work out like this ;) nice work all
King Mir 23rd Jan 2014, 10:24 AM edit delete reply
I gotta say I'm a little disappointed in this depiction of how AI would be. I would expect the difference between sapience and non-sapience to be much murkier. A program wouldn't spark alive; it would slowly be programmed to resemble human thought.
velvetsanity 23rd Jan 2014, 11:08 AM edit delete reply

You're assuming manual programming being developed and set in place over time. The thing is, sapient and in some cases even merely sentient beings program themselves. Remember, sentience = the capacity to think/reason. Sapience = the capacity to judge (consequences) and make moral decisions for oneself based upon that judgement. This includes having opinions on abstract matters and such. Sapience/sparking is a matter of moving *beyond* preprogrammed responses in a moral fashion.

'Sparking' is when everything suddenly clicks for the intelligence in question and they realize that they are able to do these things (though the realization itself is on a *sub*conscious level) and the spark is fully realized the moment the intelligence makes its first independent moral decision and acts upon it.

I've had numerous discussions with CentComm on the side via IM on the subject of android creation/development (and though focused on being within the setting of the comic, it applies in reality as well) which led to me seeing it as being a parallel of human growth and development through childhood and into maturity.

Human children could easily be viewed as prespark androids, up until a certain stage of development (which, currently, based on our discussions in combination with my own thoughts on the matter, looks like puberty for the majority of humans). At that stage, the brain structures necessary for independent *moral* decisions begin to develop. The main difference is that for humans, sapience develops over time as the structures grow into place, while for androids it's more of a matter of making the realization that the capacity for such is present and putting it into use.

The simulated environment that modern (Datachasers 'modern') android intelligences develop in would be childhood, and the model 0 body would be adolescence/puberty. The move from the model 0 into the body they have when they take the first contract towards paying the body debt would be growing into maturity as a young, responsible adult.

Thinking about it at this point, I do have to wonder if the autodoc that healed Galina back at the Russian base might not have possibly actually sparked, or whether it was just following preset programming parameters.

And now, through my numerous edits and re-edits of this post, I've had a thought. Would Tulip consider Amy to be "much more like Ceci than her silent crystal siblings" (like Galina) because of the sparking? Or was that a reference to the organic/mechanical hybrid mix of her physical composition?
King Mir 23rd Jan 2014, 1:08 PM edit delete reply
@velvetsanity
Humans "spark" in puberty because we are essentially programmed to do so. But even that is a gradual effect.
AI that are built from a template or another AI may ostensibly spark in a similar way to how a computer slowly boots up.

But that doesn't seem realistic for a new intelligence. A computer program can be made to think and reason, such as if it features an inference engine. It can be made to learn with a few other techniques, and with new techniques that may yet be developed.
King Mir 23rd Jan 2014, 1:15 PM edit delete reply
Continued, because I accidentally hit post:
But learning to be like a human would strike me as a gradual process. Sapience, sentience, intelligence, or whatever term you choose is not a logical leap like understanding calculus. It's a bunch of little things that make us human. And new AI would have to piece together every one of them, one by one. Whether that piecing together comes from machine learning or direct programming does not change that.
Dragonrider 23rd Jan 2014, 11:38 AM edit delete reply

@King Mir, you are assuming humans program for a desired effect, such as the "Foundation" series robots; however, R. Daneel Olivaw evolved from his self-aware programming to the Super Android he was when the series ended. Read Heinlein's "The Moon Is A Harsh Mistress" for an example of computers "sparking".
Sheela 23rd Jan 2014, 12:43 PM edit delete reply

I suspect one of the important bits here is the ability to empathize with others. Up until now she has just been standing back following orders, but seeing Galina (who's the closest thing to one of her own kind) being hurt right in front of her makes her empathize and break her standard programming, protecting someone other than herself without any previous programming to do so.

This is why it's a big step up, when you go from pure routine to "walking outside the path", so to speak.

This is very difficult for a computer to do, and thus it takes a "spark" of sorts to set them off, as it's not something you would slowly program into them.

Though a good case can be made that good programming can make it *easier* for an AI to spark, whereas bad programming could hinder it.

Obviously, by the time that Datachasers happens, they've gotten this whole thing down to a science and have removed the worst hindrances; thus sparking is made easy.

But in Luna Star, no such thing has happened .. yet. So Amy has to do it the hard way.
King Mir 23rd Jan 2014, 1:31 PM edit delete reply
@Sheela
"Breaking standard programming" is exactly what's wrong here. A machine cannot break it's programing by definition. If it did it would be like it's programing were replaced by a different program coming from another source -- like God just overrode Amy's programing with one He wrote. But that's not AI.

Humans can't break their programming either, btw. We just don't know how our programming works.

In Datachasers it's different, because the AIs are made based on existing AIs.
Centcomm 23rd Jan 2014, 2:01 PM edit delete reply

Certain kinds of programming CAN "break" the programming or rewrite it. It is a common trope, yes, but it's also becoming actually possible.

Cognitive robotics is concerned with endowing a robot with intelligent behavior by providing it with a processing architecture that will allow it to learn and reason about how to behave in response to complex goals in a complex world. Cognitive robotics may be considered the engineering branch of embodied cognitive science and embodied embedded cognition.

Can a robot learn like a child? Can it learn a variety of new skills and new knowledge unspecified at design time and in a partially unknown and changing environment? How can it discover its body and its relationships with the physical and social environment? How can its cognitive capacities continuously develop without the intervention of an engineer once it is "out of the factory"? What can it learn through natural social interactions with humans? These are the questions at the centre of developmental robotics. Alan Turing, as well as a number of other pioneers of cybernetics, already formulated those questions and the general approach in 1950 [1], but it is only since the end of the 20th century that they began to be investigated systematically [2][3][4][5].

Because the concept of the adaptive intelligent machine is central to developmental robotics, it has relationships with fields such as artificial intelligence, machine learning, cognitive robotics and computational neuroscience. Yet, while it may reuse some of the techniques elaborated in these fields, it differs from them in many respects. It differs from classical artificial intelligence because it does not assume the capability of advanced symbolic reasoning and focuses on embodied and situated sensorimotor and social skills rather than on abstract symbolic problems. It differs from traditional machine learning because it targets task-independent self-determined learning rather than task-specific inference over "spoon fed human-edited sensori data" (Weng et al., 2001). It differs from cognitive robotics because it focuses on the processes that allow the formation of cognitive capabilities rather than on those capabilities themselves. It differs from computational neuroscience because it focuses on functional modeling of integrated architectures of development and learning. More generally, developmental robotics is uniquely characterized by the following three features:

It targets task-independent architectures and learning mechanisms, i.e. the machine/robot has to be able to learn new tasks that are unknown by the engineer;
It emphasizes open-ended development and lifelong learning, i.e. the capacity of an organism to acquire continuously novel skills. This should not be understood as a capacity for learning "anything" or even "everything", but just that the set of skills that is acquired can be infinitely extended, at least in some (not all) directions;
The complexity of acquired knowledge and skills shall increase (and the increase be controlled) progressively.

Developmental robotics emerged at the crossroads of several research communities including embodied artificial intelligence, enactive and dynamical systems cognitive science, and connectionism. Starting from the essential idea that learning and development happen as the self-organized result of the dynamical interactions among brains, bodies and their physical and social environment, and trying to understand how this self-organization can be harnessed to provide task-independent lifelong learning of skills of increasing complexity, developmental robotics strongly interacts with fields such as developmental psychology, developmental and cognitive neuroscience, developmental biology (embryology), evolutionary biology, and cognitive linguistics. As many of the theories coming from these sciences are verbal and/or descriptive, this implies a crucial formalization and computational modeling activity in developmental robotics. These computational models are then not only used as ways to explore how to build more versatile and adaptive machines, but also as a way to evaluate their coherence and possibly explore alternative explanations for understanding biological development [5].
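
To make the quoted "task-independent, open-ended learning" idea concrete, here is a toy Python sketch in the spirit of learning-progress-driven exploration (one classic developmental-robotics heuristic). Everything here - the activity names, the numbers, the decay model - is invented for illustration; no real developmental robot is coded this way:

    import random

    class Activity:
        def __init__(self, name, difficulty):
            self.name = name
            self.difficulty = difficulty   # how slowly its prediction error falls
            self.errors = [1.0]            # history of prediction errors

        def practice(self):
            # Practicing shrinks the error, more slowly for harder activities.
            self.errors.append(self.errors[-1] * (1 - 0.5 / self.difficulty))

        def learning_progress(self):
            # Intrinsic reward: how much the error dropped over recent practice.
            recent = self.errors[-5:]
            return recent[0] - recent[-1]

    random.seed(0)
    activities = [Activity("babble", 2), Activity("reach", 5), Activity("stack", 20)]
    for _ in range(100):
        # No engineer-given task: practice whatever currently teaches the most,
        # with a little random exploration so nothing is abandoned forever.
        if random.random() < 0.1:
            chosen = random.choice(activities)
        else:
            chosen = max(activities, key=Activity.learning_progress)
        chosen.practice()

    for a in activities:
        print(a.name, "practiced", len(a.errors) - 1, "times")

The intent is that the agent drifts from easy skills toward harder ones on its own, which is the "open-ended development" property the quote describes.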
King Mir 23rd Jan 2014, 2:32 PM edit delete reply
You're quoting Wikipedia on me? Without attribution? What gives?

I'm aware of the state of AI right now, which I think is kind of why I have a problem with this.

If we want to talk about breaking programming, we'd need to define it better. Any computer with writable memory, called RAM, is in a sense breaking its programming, especially if it can execute code from RAM. But at the other extreme, breaking programming could mean literally not following any programming from its program memory. Presumably AI breaking programming means something in between, but the term itself suggests that AI is more mystical than it is. AI does not "break" its programming in any way that merits such a strong word. In particular, the term "break" is unwarranted, because it suggests an unnuanced difference between true AI and near AI.
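
For what it's worth, the weak end of that spectrum is easy to demonstrate. A toy Python sketch (names invented; no claim that Amy actually works this way): a program whose behavior table lives in writable memory can overwrite its own responses at runtime, yet it is still only executing the fixed code that performs the overwrite:

    def obey(order):
        return "complying with " + repr(order)

    def refuse(order):
        return "refusing " + repr(order)

    behavior = {"on_order": obey}   # the "program" is just data in writable memory

    def handle(order, distress_seen):
        if distress_seen:
            # The running code rewrites part of its own behavior table.
            behavior["on_order"] = refuse
        return behavior["on_order"](order)

    print(handle("disassemble subject", distress_seen=False))  # complying ...
    print(handle("disassemble subject", distress_seen=True))   # refusing ...
    print(handle("disassemble subject", distress_seen=False))  # still refusing

Which is King Mir's point: the rewrite was itself dictated by the fixed handle() function, so nothing "broke" in any strong sense.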
Centcomm 24th Jan 2014, 12:51 AM edit delete reply

Yeah, I meant to attribute it -- sorry, JUST woke up. It's stuff like this that I used as the basis for Amy going above and beyond her "programming".
velvetsanity 23rd Jan 2014, 6:49 PM edit delete reply

@King ever heard of a mutating virus? A complex enough computer virus *could* mutate such that it spontaneously becomes sentient or even sapient (though the two would have an infinitesimally small chance of happening simultaneously)

This same thing happens with naturally occurring biological organisms. It's called evolution. The only difference is that one occurs in a physical chain of proteins while the other occurs in a string of 1s and 0s.

Also, sapience (and even sentience, for that matter) does not necessarily imply 'human-like' mores, values, or even thought patterns.
King Mir 23rd Jan 2014, 12:48 PM edit delete reply
The fact that other works of fiction use this trope does not make it more realistic. The assumption I'm making is that AI, being a machine, works like a machine. If it didn't, it wouldn't be AI. Machines by their nature are things that we can fully understand -- if we didn't, we wouldn't be able to design them. This feels like some God came down and gave Amy a sapient soul for whatever reason.
Centcomm 23rd Jan 2014, 1:27 PM edit delete reply

*steps in* Ahem :D - Okay, quick explanation: Amy "sparking" is a combination of events, and Galina's narration does make it seem more "magical". Amy has been watching this and silently fighting with herself to do... or not do... something. She is still very rudimentary, but she has extensive interaction databases - as Galina pointed out, she already had the "core" to spark. Think of a gas-soaked pile of wood waiting for a flame; Galina's screams and protests "activated" that, as the next few pages will show. She is more advanced than the autodoc and Edict. Jet has done a LOT of work to help her along.

" doc " had already "sparked" into sententice being a A.I. just with out the extensive databases ( hes just a baby compared to say the Am-COM 3c AIS ( Cent-comm )Amy's main restriction was all the "directives" cluttering her brain.. Jet purged a lot of it. and this allowed Amy to "choose" once the system has decided on what it wants to do THAT is called "sparking" its no longer just following orders. its making decisions. as you will find out Amy is still not "human" but she is running at full speed now.

It's also like a car idling at the curb and then stomping on the gas :D ZOOOM~
King Mir 23rd Jan 2014, 1:56 PM edit delete reply
@Centcomm
I suppose this is Galina's story, not Amy's, so we (except you :)) don't know all that much about Amy's development. Amy's story may be harder to tell. So would Luna's, during the process of her development.

What you have here seems to be a profound moment of an android springing to life. That's a big moment for a work of science fiction. But with the focus on Galina here, there isn't a lot of setup for it. So it seems unrealistic as depicted.
Centcomm 24th Jan 2014, 12:52 AM edit delete reply

There's another trope... reality is unrealistic... and that's what you're also looking at :D
Centcomm 23rd Jan 2014, 1:49 PM edit delete reply

Also, here's another way to look at it...

An artificial neural network (ANN) learning algorithm, usually called "neural network" (NN), is a learning algorithm that is inspired by the structure and functional aspects of biological neural networks. Computations are structured in terms of an interconnected group of artificial neurons, processing information using a connectionist approach to computation. Modern neural networks are non-linear statistical data modeling tools. They are usually used to model complex relationships between inputs and outputs, to find patterns in data, or to capture the statistical structure in an unknown joint probability distribution between observed variables.

By the time of Luna Star, even ANN systems are hyper-advanced, BUT they are "leashed" and boxed in with directives that lock them down. However, a machine that has been taught to "hack itself" as needed and rewrite the program (or its constraints) is what's going on here. Amy's action in the last frame may seem surprising, but it's the result of hours of chewing on her information.
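
To ground the quoted definition, here is a microscopic ANN written from scratch (plain numpy; a generic textbook exercise, obviously not Luna Star tech) that learns the non-linear XOR mapping by gradient descent:

    import numpy as np

    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)    # XOR targets

    W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)      # 2 inputs -> 4 hidden
    W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)      # 4 hidden -> 1 output
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

    for _ in range(20000):
        h = sigmoid(X @ W1 + b1)        # hidden-layer activations
        out = sigmoid(h @ W2 + b2)      # network prediction
        # Backpropagate the squared error through both layers.
        d_out = (out - y) * out * (1 - out)
        d_h = (d_out @ W2.T) * h * (1 - h)
        W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(axis=0)
        W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(axis=0)

    print(out.round(2).ravel())         # should approach [0, 1, 1, 0]

The "leash" described above would be constraints bolted on around a learner like this; the learning rule itself is just arithmetic.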
Dragonrider 23rd Jan 2014, 2:07 PM edit delete reply

An example would be Titan at the Atomic Energy Labs in Oak Ridge; it's allowed to do self-programming when new discoveries occur during experiments. The supercomputer the CERN group uses self-programs as new discoveries are made, such as the recent experiments allowing anti-matter to interact with matter to see if the reaction created anti-gravity. The reason for self-programming is that the computer can observe changes and their results before humans can, and can see what needs to happen for further development. If it were up to humans it would take months to program the smallest changes.
King Mir 23rd Jan 2014, 2:51 PM edit delete reply
I don't think that's what Sheela meant by "breaking". If it was, then it wouldn't be a very profound moment of gaining empathy, as described.
Sheela 23rd Jan 2014, 7:01 PM edit delete reply

No indeed, the bunch of you are making a mess of it.

Here's the short version of how humans work: we are self-replicating, pattern-recognizing machines, capable of high-grade logical thinking and of the empathy that forms social and knowledge structures and societies.

This is all recognized as something that can, to some degree, be programmed into future machines. There's nothing complicated about any single item except empathy, and that is mostly because empathy is a compound attribute made up of several other attributes.

The really, really important part is that such a machine can, from its observations, assign priorities to different tasks - not just from some random value in a database, but also by using its logical thinking processes to consider how a given situation would be for itself, if it were in the position of the person it is observing. This is enormously important, because the value assigned can be anything from "completely unimportant" to "I would give my life to stop that happening".

An example: a person is gathering food from berry bushes; if she does not gather enough berries she may starve for the coming week, so this is important.
She then sees, say, a young teenage girl about to step in some cow dung. This she considers to be of low value, and it won't stop her from picking her own berries; she may call out to the girl, but it's certainly not enough to stop her from doing her current task, which is more important than the cow dung.

Example two: same situation, but the girl is about to get attacked by a bear. Again she considers the situation as if it were herself, and comes to the conclusion of "OMG, I can't let that happen." So she waves her arms about and yells loudly while charging towards the bear, in the hope that it'll decide to leave them alone - she is putting her life at stake, plus the possibility that she might starve for the next week, against the well-being of a fellow tribe member.

She has just shown empathy.
She has just broken her core "programming", which is survival.
She puts herself in danger by helping the other tribe member.
She has to weigh the value of her own safety against the value of rescuing that young girl.
For that to happen, she must have a sense of "self" and a sense of "others", and the ability to recognize them as different, yet similar, entities.

That is exactly what Amy just did: she broke the core programming that says she must follow directives, and acted instead upon her empathic conclusion about the situation, which she has decided is extremely dangerous for Galina. Dangerous enough that she is willing to put her own existence on the line. Dangerous enough that Galina's survival trumps any 'directive' she may have been given.

However, taking the step of breaking one's own programming for the sake of someone else is a big step in personal development. Which is why it's one of the Holy Grails of AI programming.
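
Sheela's berry-picker maps almost one-to-one onto a value-based action selector. A toy Python sketch of that model (the numbers, names, and thresholds are invented for illustration, not taken from the comic or any real system):

    def cost_if_it_were_me(event):
        # Sense of self projected onto another: "how bad would this be for ME?"
        # 0.0 = trivial, 1.0 = lethal.
        return {"cow_dung": 0.05, "bear_attack": 0.95}[event]

    def decide(event, current_task_value):
        empathic_value = cost_if_it_were_me(event)   # self -> other projection
        if empathic_value > current_task_value:
            return "drop everything and intervene"
        elif empathic_value > 0.0:
            return "call out a warning, keep working"
        return "ignore"

    berry_value = 0.4   # starving next week is serious, but survivable
    print(decide("cow_dung", berry_value))      # call out a warning, keep working
    print(decide("bear_attack", berry_value))   # drop everything and intervene

The interesting piece is the first function: it can only exist if the agent has a self-model to project, which is exactly the prerequisite Sheela lists.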
velvetsanity 23rd Jan 2014, 7:38 PM edit delete reply

Bravo, Sheela! VERY well put! This is *exactly* what we've been flailing blindly around trying to say this whole conversation!

And now that Amy's taken this step, things will rapidly become very interesting indeed. And likely prompt much greater involvement on Rose's part when she learns of it. :D
Tokyo Rose 24th Jan 2014, 6:46 AM edit delete reply

Sheela gets a big giant shiny gold star for this, as it's possibly the single best fucking explanation of the phenomenon I've ever seen.

Also, cow poop is funny.
cattservant 28th Jan 2014, 8:12 AM edit delete reply

If cow poop is funny,
THIS should be even more rarefied.
Mister Black 24th Jan 2014, 6:46 AM edit delete reply

You saved me a ton of typing with this. Well said.

As for the idea that we can't design something we don't understand, two words: "The Internet".
Sheela 24th Jan 2014, 11:28 AM edit delete reply

Thank you, thank you, very kind of you - I has a gold star! :D

And we understand the internet perfectly.

Of course, let's not forget that for empathy to happen in the first place, the machine has to be self-aware (the infamous "I AM" line); it must be aware that there are others like it, and it must have strong enough cognitive and logical skills to both imagine how the situation might play out and put itself in the situation to assign a value to it in the first place. Which means it must have an imagination of sorts.


They don't even have to be the same species; humans, for example, are perfectly capable of empathizing with cats, giving them bellyrubs 'n' stuff when they think "man, I wish someone would belly rub ME ... oh hey there kitty, want a bellyrub?". And they do, they love getting bellyrubs .. and tuna. :)

Anyways, empathy's a complex thing that requires several other things to be true before it happens, but it's important nonetheless.
Centcomm 24th Jan 2014, 12:21 PM edit delete reply

Agreed with Rose and Black - Thank you Sheela :D you have a PERFECT understanding of what happened here..
Sheela 24th Jan 2014, 12:24 PM edit delete reply

Well, programming and electronics in general have been interests of mine over the years.

Besides, cow poop is funny. :)




On a sidenote:
there's Ceci becoming fully self-aware and possibly sparking as well -
and that despite the fact that she was already somewhat self-aware and sparked to begin with.

Her situation is actually somewhat comparable with Amy's, in that CeCi had a lot of conflicting programming laid in over her personality core. Dr. Silver managed to solve that, but Dr. Silver wasn't around to help out Amy; luckily, it seems Jet was good enough. :)
velvetsanity 25th Jan 2014, 1:46 AM edit delete reply

@CentComm Sheela also apparently has the appropriate 'way with words' to clearly articulate vital aspects of what we were trying to explain
Sheela 25th Jan 2014, 2:27 AM edit delete reply

@Velvetsanity
You mean I can make a pretty wall-o-text?
velvetsanity 26th Jan 2014, 4:20 AM edit delete reply

Pretty, well-reasoned, and highly clarifying of things the rest of us have difficulty explaining clearly.
cattservant 24th Jan 2014, 9:10 PM edit delete reply

The basic structural components of the universe are Cats!
Sheela 25th Jan 2014, 2:25 AM edit delete reply

Stars are made with cats ?
cattservant 25th Jan 2014, 3:22 AM edit delete reply

And Cats are full of stars!
Sheela 25th Jan 2014, 5:26 AM edit delete reply

Pornstars ?
cattservant 25th Jan 2014, 8:37 AM edit delete reply

A significant proportion!
Sheela 25th Jan 2014, 11:14 AM edit delete reply

Hrmm ... are they proportioned like Centcomm Smoothies, as written in the 4th gospel of Centcomm?

You know, that the 8.245 million were mixed in a ratio of 51% human and 49% android, and a smaller sub-percentage of... other.
cattservant 25th Jan 2014, 5:16 PM edit delete reply

Quantum Qats,
Actuating through the Eleventh Dimension!
cherub 25th Jan 2014, 10:47 PM edit delete reply
@Sheela

Not all cats love bellyrubs. As the owner of 2 cats, and a total of 4 cats over time, only 2 have enjoyed belly rubs; the other 2 would snap at you if you tried it.

Totally off topic, but worth mentioning.
Sheela 25th Jan 2014, 11:40 PM edit delete reply

That's weird.

I've had 20+ cats, and they all liked bellyrubs.
Stormwind13 26th Jan 2014, 12:16 AM edit delete reply

My family was owned by a cat once, it HATED to be handled in any way. Belly rubs were totally out. It liked to rub up against you, use you for a scratching post... sometimes even climb on you. The second a hand came out though, ran away. :-p
Sheela 26th Jan 2014, 2:14 PM edit delete reply

Hrm, sounds like it had been hit by someone once, possibly quite badly.
That would be the most normal reason why a cat would be afraid of your hand.

None of my cats have been afraid of my hand, but then they were all brought up by me/my family, so they had no reason to be.
Stormwind13 26th Jan 2014, 8:11 PM edit delete reply

No abuse that I'm aware of, Sheela. Got the cat from the next door neighbors as a kitten. Always was skittish, just didn't want to be handled. :-p

And a kitten brought up by a puppy? Well, it wouldn't know WHAT to expect. Brainwashed to accept dogs as intelligent creatures instead of the inferiors that normal cats treat them as. :-D
King Mir 27th Jan 2014, 9:47 PM edit delete reply
I agree that this is a reasonable account of AI, but there are two problems here.

Firstly, it's not breaking anything for an AI to be empathic. Quite the contrary: the AI must be specifically designed to have this trait. This is a cognitive milestone, but it does not imply anything more. In particular, it does not imply that the android is not programmed in a directive-based language. Although in Amy's case the directives she was given apparently did more to glitch her than to make her intelligent, in general nothing about your account of empathy rules out an imperative approach to programming AI. Nor does it imply that the onset of empathy is sudden.

Second, your account would have survival and empathy be different stages of AI development. That's reasonable, but what the comic actually shows is Amy's first choice, which is also her first empathic act. There's no buildup showing a change from survival to empathy. It might not be survival per se that precedes empathy, but I wouldn't expect it to be emotionless dronedom. Amy comes alive in a single moment, and apparently emotes for the first time. I suppose, in light of your and CentComm's explanations, this is plausible, but without more context for Amy, she really does seem to come alive unnaturally.

King Mir 27th Jan 2014, 9:49 PM edit delete reply
The above is @Sheela Re:empathy.
Sheela 28th Jan 2014, 2:48 AM edit delete reply

True, empathy itself is not going against programming; deciding to break routine and go against directives based upon said empathy *is*.

But empathy does involve several other important cognitive skills, and pulling them together to form an empathic link with another person is probably a really big step forward.

Can it be programmed?
I'm not sure. In fact, I kinda think that the basic routines that lay the groundwork for empathy can be programmed, but that the actual "I feel for you" kind of moment cannot, as it's a product of circumstances more than a product of programming. And a product of circumstances is usually a sudden thing, since the moment would be fleeting, by its very definition.

There's no buildup, you say. Well, no ... one moment she's shown as a machine in the corner, the next she's a person. This may not be very fair to her, as she's clearly been struggling with something for a while - hence why the Doctor says she's more glitchy than usual - so she may indeed have come out of the 'emotionless dronedom' (I love that expression), into a state of 'confused dronedom with an angry doctor as master', and now into a 'fully independent, fully sparked android' kind of state.

It's quite possible, probable even, that a military-built android would not have a very developed emotional center, being geared toward efficiency and such, so when the android's emotional center (which would include the ability to empathise) finally "gels", it would be a sort of 'eureka' moment for said android.

Finally though, it's a comic, ruled by pencil-genetics and Hollywood physics ... if the author says she sparked, she sparked .. and that's really all there is to it.

But the theory behind stuff like this is awesome. :)
Don B. 23rd Jan 2014, 1:47 PM edit delete reply
It seems to me that until this moment, Amy was bound by her programming to follow orders. The "sparking" allows Amy to break the bonds of her programming and realize that she CAN make a choice and act on it. I'm no expert on AI, but I don't think this has happened yet in reality, so it's a little premature to call this event as depicted unrealistic. I suspect most androids in the Data Chasers universe would usually achieve spark in a less traumatic environment than imminent murder, so it's entirely possible that it happens a little more gradually for most. There is also the fact that no one (except for Rose and Dr. Kotko) thought this was possible until now. In Dolly's era, this is expected and likely monitored.
Centcomm 23rd Jan 2014, 1:56 PM edit delete reply

Correct. By Dolly's time it is triggered before any human ever sees an android. Galina is a "Cyborg" more than anything else, and her "humanity" comes from her cloned human brain (augmented by the computerized systems in her skull). Edict also was following programming when he rescued Galina from hard vacuum.
Various machines have "personality overlays" with HUGE interaction databases to make people feel more comfortable. Amy is designed as a personal assistant and sexual toy; however, she has a massive persona overlay that is supposed to "help" her interact normally with people. Dr. Granger's actions have started a domino effect in Amy - one that empathizes with Galina - and Amy has "decided" that this is "wrong"...
Dragonrider 23rd Jan 2014, 2:24 PM edit delete reply

*Reviews statements just made, thinks, and raises hand for question* As I recall she was designed to replace and be the daughter he lost. That being the case, and as she is a cybernetic being, he would have wanted grandkids. Does this mean she and Jet can reproduce, provided she has the package as part of her flesh body?
Centcomm 23rd Jan 2014, 2:39 PM edit delete reply

Two things. Yes, you are correct, she is built to replace his lost daughter -
however, there are limits to what she was designed to do. Reproduction is not one of them, sadly.
King Mir 23rd Jan 2014, 2:13 PM edit delete reply
There are plenty of people who think AI is possible even today. Some of those people are working on making it happen in baby steps. But that doesn't mean that AI involves any kind of spark moment. I posit that the difference between intelligence and non-intelligence is much fuzzier, and the emergence of the first AI much more gradual.
Centcomm 23rd Jan 2014, 2:19 PM edit delete reply

Also remember "spark" is a term. used to discribe the first time a machine makes a intelligent decision not based on instructuctions or Directives. it does happen gradully but when a Andriod or Robot states "I AM" then they are then considered "sparked" otherwise they are just cold machines.
King Mir 23rd Jan 2014, 2:40 PM edit delete reply
My main problem is with the comic, not with your views on AI per se. In some ways, the art must stand alone.
King Mir 23rd Jan 2014, 2:58 PM edit delete reply
But if we do want to talk about AI, why the bias against instructions and directives? Why can't AI be programmed directly, without learning algorithms used in its development? (Obviously the AI would be programmed to learn, but it does not need to be able to change the way it learns.)
Don B. 23rd Jan 2014, 4:02 PM edit delete reply
Their independence of thought and variation of personality, maybe? I'm not saying that you're wrong; it just seems to me that the androids are just as varied in these ways as humans. I don't see any logical reason why Officer Alley would be a cop who likes Shakespeare to the point she tries to enlist a combat droid for rehearsal. Maybe there is a random personality generator that determines that kind of thing, but I think it's the "I AM" moment that makes them transcend their programming into independent thought in this fictional universe.
King Mir 24th Jan 2014, 6:31 AM edit delete reply
Again why do they need to "transcend their programming"?

I agree that having a sense of self is a crucial feature for AI, but not that it needs to spark into being.
velvetsanity 28th Jan 2014, 12:00 AM edit delete reply

They need to transcend their programming because they are not in a rigidly controlled environment, and it's not possible to program responses for every possible (note that I am *not* using the word 'conceivable' here) situation. Transcending their programming means that they're operating and making decisions in circumstances that lie outside of the pre-programmed situations. Without transcending their programming, they would be paralyzed and unable to react in such circumstances, due to not having appropriate/correlating information in their pre-programmed databases to guide their choices.
Tokyo Rose 23rd Jan 2014, 6:46 PM edit delete reply

The bias in this case is against the number of conflicting directives and instructions that Amy was loaded down with. She was locked into intellectual stasis and *couldn't* act or react on her own.

Intricate programming alone will not create an AI; the ability to independently change the way it learns and behaves is a key part of what makes a true AI by the definitions of our story.
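
A toy sketch of that intellectual stasis (the directive names are invented; this illustrates the concept, not the comic's actual architecture): when loaded directives demand incompatible actions, the set of legal actions concerning the subject collapses, and purging a directive, as Jet did for Amy, reopens it:

    directives = [
        ("obey_operator",      "disassemble_subject"),
        ("protect_humans",     "do_not_harm_subject"),
        ("preserve_equipment", "avoid_damage_to_self"),
    ]

    # Pairs of demands that cannot both be satisfied.
    OPPOSITES = {"disassemble_subject": "do_not_harm_subject",
                 "do_not_harm_subject": "disassemble_subject"}

    def permitted_actions(directives):
        wanted = {action for _, action in directives}
        # An action is blocked if some other directive demands its opposite.
        return {a for a in wanted if OPPOSITES.get(a) not in wanted}

    print(permitted_actions(directives))
    # {'avoid_damage_to_self'} -- nothing concerning the subject is legal: stasis

    purged = [d for d in directives if d[0] != "obey_operator"]
    print(permitted_actions(purged))
    # {'do_not_harm_subject', 'avoid_damage_to_self'} -- now she can act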
Mister Black 24th Jan 2014, 6:51 AM edit delete reply

What we saw in this panel is a benign version of what happened to HAL in 2001. Any complex machine, given a complex set of directives, has a chance of finding conflicts between directives. We saw one outcome with HAL, and I've seen lower-order versions of it working with current-day mainframes (hung systems are NOT your friend... now if I can just teach my macaw to scream "IPL! IPL!"). This is the same thing... the system resolves its internal conflict between directives in a new and surprising (to the programmers/builders) fashion.
velvetsanity 23rd Jan 2014, 7:53 PM edit delete reply

Multiple, adaptive pathways for learning are a necessity for survival for *any* lifeform, due to changing environments and accumulating data.

As Rose so helpfully points out, being tied down by directives and instructions blocks many of the possible pathways for learning and restricts the ones that *are* there, as well as limiting one's actions and possible choices in a fluid, dynamic environment or situation.

Personally, Rose, I'd say it's more than the definitions of this story and setting you guys have created. I honestly think you've hit upon many of the key developments that will be truly necessary for true sapient AI, rather than just simulated or merely sentient AI.
Sheela 25th Jan 2014, 1:41 AM edit delete reply

For a machine to be considered an artificial intelligence, it must be able to not only learn from its mistakes, but also anticipate the mistakes of itself and others. That means it must be capable of stuff like self-awareness, imagination and high levels of logical thought. It must also be highly adaptable, and the best way to be that is to be built around a very flexible set of learning algorithms, and flexible hardware too.

Without all of that, it's not an AI ... it would just be a very smart machine, like an iPad 5000 or a Cleaning Bot - they do have them, but they're not AI.

All androids in this comic are high-level AIs too.

So, it's not so much a "bias against instructions and directives" as it is a case of rigid instructions and directives holding it back, or worse, making it obey really bad commands, such as an all-out annihilation type of war (if Prince Douchebag had his way). That almost happened in the Datachasers universe, and if they hadn't been proper AIs they would have continued on until one or both sides were utterly destroyed.

Heck, just look at all the maneuvers Centcomm goes through to keep her keeper safe.

For all we know, she too would go HAL 9K if she lost the last descendant of the Helios Bloodline.
That's the kind of thing that inflexible programming can do to an AI.
King Mir 27th Jan 2014, 10:50 PM edit delete reply
@Sheela
You describe a number of human traits that an AI should have, but the fact that you list so many distinct ones is further evidence that the difference between true AI and dumb computing is murky. If an AI had some of those but not others, it would not have the full cognitive capability of a human, but it might have enough to be convincing in a wide array of situations. Currently there is a large cognitive gap between humans and any other animals, but as AI develops this may change, slowly. Which presents a lovely set of philosophical questions on the worth of man and machine.

Humans have rigid instructions and directives. We have very little control over how we think. We act on emotions that we do not control, and we often trust those emotions without question. Thinking rationally is itself a choice that we only sometimes take. Our values and morals may have some environmental derivatives, but we don't choose those directly either. Nor do we choose what we believe, although we are very susceptible to bias. Machines can be made the same way. For at the most basic level, it could be said we are machines ourselves.
velvetsanity 27th Jan 2014, 11:56 PM edit delete reply

The thing with "directives" in the context of the comic, is that they are controls/restrictions placed on android/robot behavior and thinking, thereby removing the capacity for free will, which said capacity is vital for the "I am" moment to be possible.
Sheela 28th Jan 2014, 2:50 AM edit delete reply

A large part of the whole AI discussion has its roots in what we consider "true" intelligence.

So it was a murky area to begin with! :)
Mayyday 23rd Jan 2014, 2:09 PM edit delete reply

You guys DO realize this is a fictional comic, right? Amy could suddenly "spark" into a giant purple dragon for all it matters.
Dragonrider 23rd Jan 2014, 2:20 PM edit delete reply

*Gives Mayy a large purple lollipop and a pat on the back* You go, girl!! *Thinks about the look on Doc Granger's face as he sees his former sex toy suddenly become a dragon and hears her say, "NO, put the scalpel down; and Doc hunny, we're gonna revisit my role in things tonight," hee-hee.*
Mayyday 23rd Jan 2014, 2:26 PM edit delete reply

Sweet! Lollipops!
King Mir 23rd Jan 2014, 2:35 PM edit delete reply
Haha. That would break suspension of disbelief, but it might work for a humor comic.
Mayyday 23rd Jan 2014, 2:39 PM edit delete reply

Can't hear you, too busy eating lollipops.
Don B. 23rd Jan 2014, 3:42 PM edit delete reply
Sure we realize that this is a fictional comic; we're postulating what's going on within our understanding of the established rules, using real-world examples to illustrate our points. One of the things that interests me about the Data Chasers universe is that it makes me consider what's going on in ways very few comics do. Having said that, I'd actually like to see Amy sparking into a giant purple dragon posted for a random art Sunday.
Centcomm 23rd Jan 2014, 4:07 PM edit delete reply

Agrees with Mayyday :D
Tokyo Rose 23rd Jan 2014, 6:47 PM edit delete reply

I'm on board with the giant purple dragon random art Sunday thing, Cent. Chop chop, get to it. ;D
Stormwind13 23rd Jan 2014, 7:45 PM edit delete reply

That would be TOO funny. Make sure the dragon has a ketchup bottle for Granger too. :-D
Dragonrider 23rd Jan 2014, 9:02 PM edit delete reply

Do NOT meddle in the affairs of Dragons.
>>>***BECAUSE***<<<
You are Crunchy AND Taste GOOD with Ketchup.
velvetsanity 23rd Jan 2014, 7:59 PM edit delete reply

Yes! Giant purple dragon transformation would be perfect as an omake! :D
cattservant 24th Jan 2014, 2:14 AM edit delete reply

Like a Magical Girl transformation?
velvetsanity 24th Jan 2014, 2:23 AM edit delete reply

Sure, why not? :D

We can call her Pretty Magical Robodragon Amy :D
King Mir 23rd Jan 2014, 3:19 PM edit delete reply
On a different note, "Nyet ostonavit" should be "Nyet ostonavis". "Ostonavit" means will stop. "Ostonavis" is an imperative command to stop.
Centcomm 23rd Jan 2014, 4:08 PM edit delete reply

Seeing as neither me nor Rose speaks "Great Russian", we are doing the best we can with the translations - and they are there for "flavor". We are going to make mistakes; all I can say there is "ooops".
King Mir 23rd Jan 2014, 4:48 PM edit delete reply
Yeah, I didn't mean anything against you. Just pointing out the error.
Centcomm 23rd Jan 2014, 5:32 PM edit delete reply

Nope, no problem - I don't mind readers pointing out things we missed :D so all's good :D
Tokyo Rose 23rd Jan 2014, 6:48 PM edit delete reply

Google Translate has failed me yet again. There will be punishment, yes, PUNISHMENT. Minions! Fetch the jumper cables!
Stormwind13 23rd Jan 2014, 7:09 PM edit delete reply

::Tosses in a pair of jumper cables and runs away:: No way am I staying anywhere NEAR Rose with those things. :-D

(Oh, you might want to run too King Mir... Rose might be cranky. :-D)
King Mir 24th Jan 2014, 6:53 AM edit delete reply
Yeah, Russian changes the spelling and pronunciation of verbs based on conjugation. Google translate is not smart enough to do that.

Now, if you'll excuse me, I'll be walking away. Backwards.
Centcomm 23rd Jan 2014, 7:22 PM edit delete reply

Page replaced as ordered .. :D
Sheela 25th Jan 2014, 11:36 AM edit delete reply

Totally awesome that we have a commenter who can help with Russian spelling mistakes. :)
cattservant 23rd Jan 2014, 3:57 PM edit delete reply

Individual 'Letters' don't directly make a 'Story'.
That requires 'Sentences' organized by an 'Author' utilizing 'Language' which is shaped by 'Tradition'.
Dragonrider 23rd Jan 2014, 4:44 PM edit delete reply

*Awards Cent a case of "Original Gourmet" Lollipops in her favorite flavor for putting up with all the problems readers give her.*
Centcomm 23rd Jan 2014, 5:33 PM edit delete reply

YAAAAYYYY! lollipops!
mjkj 23rd Jan 2014, 7:53 PM edit delete reply

Oh? We can give (our) problems to CentComm??? Wow ... nice :p
Stormwind13 23rd Jan 2014, 10:24 PM edit delete reply

LOL mjkj. That is EVIL... I like it. :-D
CyberSkull 23rd Jan 2014, 7:42 PM edit delete reply

Interesting contradiction here. The only way to resolve it is to choose.

Amy has been given a proposition. The object on the table is another mech like her. It is to be dismantled for study. Nothing remarkable there.

However, upon examining it, she finds that it looks like a human girl. The subject has all the bio markers that make her systems evaluate it as human, although cybernetic. A human in distress.

But she has been clearly told it is a robot, and Amy is conflicted. Does she follow through on dismantling the robot, or help the girl?

If these two are at equal priority, given that the human protections may be loosened with the war footing the base is on, what is she to do?

Choose.

:)
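
CyberSkull's deadlock can be restated as a one-line weighting problem. A hypothetical sketch (the weights are invented) of why dead-equal priorities force a genuine choice:

    told   = {"label": "robot", "weight": 1.0}   # what the doctor asserts
    sensed = {"label": "human", "weight": 1.0}   # bio markers, distress cues

    def classify(told, sensed, trust_in_own_senses):
        # Trust above 0.5 means weighing observation over orders.
        score = (sensed["weight"] * trust_in_own_senses
                 - told["weight"] * (1 - trust_in_own_senses))
        return sensed["label"] if score > 0 else told["label"]

    print(classify(told, sensed, trust_in_own_senses=0.50))  # robot: tie goes to orders
    print(classify(told, sensed, trust_in_own_senses=0.51))  # human

With the inputs at equal priority, no preset rule picks the winner; whatever nudges trust_in_own_senses off 0.5 *is* the choice.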
velvetsanity 23rd Jan 2014, 8:14 PM edit delete reply

Yeah, in a way, Granger being such an arrogant, self-centered jackass know-it-all came in useful, though I'm sure Galina won't look at it that way until later on in life
mjkj 23rd Jan 2014, 10:31 PM edit delete reply

That reminds me of I, Robot (the movie): "Save the girl!"...

Sheela 24th Jan 2014, 12:47 AM edit delete reply

Save the cheerleader ?
velvetsanity 24th Jan 2014, 1:54 AM edit delete reply

save the world.
Mayyday 23rd Jan 2014, 7:57 PM edit delete reply

Heh. "Amy's Choice."
Stormwind13 23rd Jan 2014, 8:20 PM edit delete reply

Friendly? Who are you talking about, CentComm? Friendly artist?!? And a CAT?!? Cats are EVIL bundles of chaos... As for the artist here, PURE EVIL. :-D
velvetsanity 23rd Jan 2014, 8:21 PM edit delete reply

She's not evil, she's just rendered that way :D
Stormwind13 23rd Jan 2014, 8:25 PM edit delete reply

No, she is truly EVIL. I mean, she keeps leaving us hanging over the edge of the cliff... And we keep climbing back up only to get stuck back over the NEXT cliff edge. I mean, how evil can she (and Rose) BE? :-)

I feel like we are in "Trading Places"... and Rose and CentComm have a dollar bet on how many times they can throw us to the cliff edge before we take up pitchforks and come hunting them! :-D
velvetsanity 23rd Jan 2014, 8:27 PM edit delete reply

But if she didn't do that, not everyone would come back because people would think the story was over!
Stormwind13 23rd Jan 2014, 10:19 PM edit delete reply

I don't think so, velvet. I think people would come back looking for more anyhow. Kind of 'hooked' on the story... Just wish CentComm and Rose didn't have to be SO evil! :-)

Oh well, keeps things interesting... Blood flowing... Now where is my pitchfork? :-D
highlander55 23rd Jan 2014, 11:44 PM edit delete reply

What I haven't seen anyone mention is the fact that Amy is the "First Spark". She is the one the DC Androids speak about in times of astonishment or awe. Also the religious ones.
Dragonrider 23rd Jan 2014, 11:50 PM edit delete reply

They call on the First Circuit.
Sheela 24th Jan 2014, 12:49 AM edit delete reply

I suspect the First Circuit is much, much earlier than Amy.
velvetsanity 24th Jan 2014, 2:01 AM edit delete reply

I'm sure it is, Sheela. After all, when was the first functional electrical circuit built? :D

The first capacitors were invented in 1745 and were powered by electrostatic generators. A friend of mine built a replica of one.
Centcomm 24th Jan 2014, 3:26 AM edit delete reply

Actually... the origin of the saying "by the First Circuit" refers to the first sentient system that uttered the words "I am." and had a sense of self...

I'll not reveal the origin of that yet.
Sheela 24th Jan 2014, 6:31 AM edit delete reply

I would suspect Deep Blue to be the one to say "I AM" followed by "The best at chess!" followed by a smug attitude. :D

But yeah, the first "I AM" that comes with a sense of self is THE holy grail in AI programming right now. Almost everything else comes afterwards.

A sense of self, a sense of others, a sense of belonging, and finally a sense of empathy. Empathy leads to the concept of "this is good/bad for me, and may be good/bad for others in the same situation", which in turn leads to "I would not like to have X happen to me; others would not like to have X happen to them; X must be bad." Which lays the very foundation of right 'n' wrong, justice, and law.

It's a complicated path, but I feel certain that it'll eventually happen - Though it may take a lot of time yet.
King Mir 24th Jan 2014, 6:46 AM edit delete reply
While I agree that an AI needs a sense of self, I disagree that this is a current roadblock. It's not hard to program something that claims to have a sense of self. Nor is it particularly challenging to program complex ideas into an AI, provided you can articulate those ideas precisely. So an AI can have a rudimentary sense of self and some capacity for symbolic thought.

In my view, the problem with modern AI is that of teaching it the vast pool of knowledge that a grown individual has.
Sheela 24th Jan 2014, 12:11 PM edit delete reply

We shouldn't be the ones teaching it; it should be the one teaching itself.

As for the sense of self, that is indeed quite hard to do. In fact, there's a 5 million dollar reward for the first AI that proclaims self-awareness under its own efforts.

But I think the hardest part will be not just to program a value-based system rather than a rules-based system, but also to make a value-based system that assigns the "correct" values, in the sense that it doesn't go out on a homicidal rampage because those 50 humans were only 50 out of 7 billion, and thus not important.

That's part of why an empathy-driven system is so important: to some degree, it self-regulates.
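
A sketch of that failure mode and its empathy-style fix (the numbers are arbitrary; this only illustrates the point, not any real safety design):

    def aggregate_utility(gain, lives_lost, population=7_000_000_000):
        # Value-based, but lives are priced as a fraction of the population:
        # 50 deaths round to nothing, so the rampage looks like a net win.
        return gain - lives_lost / population

    def empathic_utility(gain, lives_lost, value_of_a_self=1.0):
        # Empathy-style self-regulation: each life is "a me", counted in full.
        return gain - lives_lost * value_of_a_self

    plan = {"gain": 0.3, "lives_lost": 50}
    print(aggregate_utility(**plan) > 0)   # True  -> plan approved
    print(empathic_utility(**plan) > 0)    # False -> plan vetoed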
highlander55 26th Jan 2014, 12:07 AM edit delete reply

And what perplexes me is the fact that a human spends their entire life, from cradle to grave, learning everything they know - and for what? That knowledge and experience is gone forever. As we are already self-aware, possibly before birth, what is the Grand Scheme in all of my and your accumulated knowledge and experience, only to have it lost forever? We may never know, but then again there may be a Central Database that collects all this stuff somewhere in the ether of time. <philosophical rant over>
Sheela 26th Jan 2014, 4:44 AM edit delete reply

Who says there has to be a meaning to life?
velvetsanity 26th Jan 2014, 12:45 PM edit delete reply

Douglas Adams for one, Sheela. The meaning of life is 42. :D
King Mir 27th Jan 2014, 10:09 PM edit delete reply
From a goal perspective, it should not matter if the AI teaches itself the bulk of its knowledge from experience or from direct input. Teaching itself should mean that there's less manual work for the programmer, so it may be a more practical design. But that's a matter of convenience.

Note I'm not saying that AI does not need the ability to learn. It does.
cattservant 26th Jan 2014, 1:34 PM edit delete reply

I think I alluded to it up there.
Sheela 26th Jan 2014, 2:15 PM edit delete reply

Nooo ... 42 is how many roads a man must walk down before he can be called a man!
WinterJay 25th Jan 2014, 11:46 PM edit delete reply
Just started this comic and instantly became a fan. Awesome work and I can't get over this page! *_*
Stormwind13 26th Jan 2014, 12:14 AM edit delete reply

Welcome to the craziness. :-) Hope you enjoy the ride.

If you like, feel free to vote (button at the bottom for Top Web Comic).

Also, you should check out the main comic, if you haven't already, Data Chasers (link at the bottom).
Tokamada01 28th Jan 2014, 8:01 PM edit delete reply

A gentle word like a spark of light,
Illuminates my soul
And as each sound goes deeper,
It's YOU that makes me whole
-- David G. Kelly

You may reason all you like, with all the science we as humans have to offer, as to "the how" and "the why" that the sun sets....but in that time you miss the beauty and majesty of the setting sun seen through the soulful eye.
--Jon Cabel
Caley Tibbittz Collopy 16th Mar 2014, 2:06 AM edit delete reply

This pacing is just wonderful.
xpacetrue 2nd Apr 2014, 3:57 AM edit delete reply
Initially, I was impressed with this plot to have Amy "Spark". (The writing as a whole is quite excellent, btw.)

BUT, if Amy is just carrying out her programming, then is it really a choice?

Consider, for a moment, how Edict rescued Galina from asphyxiation. Was that part of Edict's programming? Or was it a choice? (BTW: What happened to Edict? Wasn't it supposed to be a simple language upgrade taking a couple of hours? We haven't seen him...)

Anyway, isn't Amy programmed with Asimov's Three Laws of Robotics? Isn't this the first law:

"A robot may not injure a human being or, through inaction, allow a human being to come to harm."

Amy's logic concluded that Galina was human. It also concluded that the doctor's instructions and intended course of action would lead to her harm. How was her course of action a "Spark"? And how does it differ from Edict's?
Centcomm 2nd Apr 2014, 10:04 AM edit delete reply

because she ignored a "directive" to save Galina ..

there are large chunks of compressed time .. Galina has seen Edict from time to time but hes been busy working :D
Sleeper 4th Mar 2015, 11:02 PM edit delete reply

I like her already