RE: Can AI be trained to assist with moral decision making? by wilkas

@wilkas ·
Unless the pace at which AI evolves becomes too fast for us and it abandons us. We don't care whether some ladybird bug understands what we do, because its "brain" just can't comprehend it. There is a possibility that we will become that bug. We already can't comprehend the processing of big data until it has been chewed up for us.

@dana-edwards · (edited)
I see no reason to make a distinction between an "us" and a "them". Make the AI part of you, and make yourself part of it, and you don't have these problems. The problem comes from making the distinction in the first place, from insisting you're separate from it.

Let me give you an example: water. Water is a part of you, and you are a part of water. If you think you're separate from the water and try to stay dry, you'll fail, because you're made up mostly of water.

Once you understand that you are made up of water, you'll have nothing to fear from it.

> We don't care whether some ladybird bug understands what we do, because its "brain" just can't comprehend it. There is a possibility that we will become that bug. We already can't comprehend the processing of big data until it has been chewed up for us.

There is no we. There is just life, intelligence, and the many forms it takes. If you use AI to evolve together with it, then you have nothing to fear.

So what you are afraid of is not AI. You're not afraid of intelligence, or of artificial intelligence. You are afraid of machines which have a will of their own. The point? Don't design machines to have a will to do anything which you yourself don't want. Design the machines to be an extension of you, of your will.

Think of a limb. If you have a robot arm, and this arm is intelligent, do you fear that someday the arm will choke you to death? Why should you?

On the other hand, if you design it to be more than an arm, to be your boss, to be in control of you, then of course you have something to fear, because you're designing it to act as a replacement rather than a supplement. AI can take one of two forms:

- Intelligence amplification.
- Replacement for humans.

I'm in favor of the first option. People who fear the second option are really just afraid of change itself. If you fear the second option, then don't choose an AI which rules over you, and stop supporting companies which rule over you. Focus on AI which improves and augments your abilities rather than replacing you. Merge with the technology rather than trying to compete with it; humans have always relied on technology to live, whether it be fire, weapons, or clothing.
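To make the distinction concrete, here is a minimal sketch (in Python; names like `Action` and `propose_action` are purely illustrative stand-ins, not any real API) of the amplification design, where the machine only proposes and the human keeps the veto, next to the replacement design, where the machine acts on its own threshold:

```python
# Minimal sketch: intelligence amplification vs. replacement.
# All names here are hypothetical stand-ins, not a real library.

from dataclasses import dataclass

@dataclass
class Action:
    description: str
    risk: float  # the model's own risk estimate, 0.0 (safe) to 1.0 (risky)

def propose_action(context: str) -> Action:
    # Stand-in for any AI model; in practice this would be a real inference call.
    return Action(description=f"suggested next step for: {context}", risk=0.2)

def amplifier_loop(context: str) -> None:
    """Amplification: the AI suggests, the human decides."""
    action = propose_action(context)
    answer = input(f"AI suggests '{action.description}' (risk {action.risk:.1f}). Execute? [y/N] ")
    if answer.strip().lower() == "y":
        print("Executing on the user's behalf.")
    else:
        print("Vetoed; nothing happens without consent.")

def replacement_loop(context: str) -> None:
    """Replacement: the AI acts on its own judgment; the human is bypassed."""
    action = propose_action(context)
    if action.risk < 0.5:  # the machine's threshold, not yours
        print(f"Executing '{action.description}' without asking.")

if __name__ == "__main__":
    amplifier_loop("draft a reply")
```

The two loops can share the exact same model; the only structural difference is where the final decision lives.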

References
---
1. https://en.wikipedia.org/wiki/Intelligence_amplification

@etimarcus · (edited)
"There is no we. There is just life, intelligence, and the many forms it takes. If you use AI to evolve together with it then you have nothing to fear."

THIS

Weak AI is no problem. The remaining question is what to do about strong AI:
https://en.wikipedia.org/wiki/Artificial_general_intelligence

@dana-edwards ·
I don't think strong AI would be a good thing to have on Earth. If it has a function, it should be to spread intelligent life off-planet: something to put on a space probe along with the seeds of life. A von Neumann probe, in my opinion, is a good use case for AGI.

The problem with developing AGI on Earth is the motivation for it. Currently, most technologies are developed for war; most of the time, the motivation for creating a technology is to exert control over people. An AGI developed now would, in my opinion, be used like a weapon to enslave or control the masses. I'm not in favor of developing WMDs, so I'm not in favor of developing an AI which would trigger an arms race and be used to control society.

We might think it will not be used that way, but look at surveillance. Look at how social media is used. Look at how the Internet is used. All of it has been weaponized to control people.

@dana-edwards ·
The only way I think an AGI can develop safely is if it emerges slowly, in a decentralized fashion. Most current attempts to develop it are centralized and look more like power grabs. Since there is currently no safe way to develop AGI that will not lead to an arms race, I cannot say it's desirable to focus on that. If it emerges on its own, then I would hope it emerges in a decentralized way which everyone can benefit from out of the gate.
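As a loose illustration of what "decentralized, and everyone benefits out of the gate" could look like mechanically, here is a toy Python sketch of parameter averaging (a crude form of federated learning; this is my own example, not anything proposed in the thread): each participant trains on private data and shares only model weights, so no central party holds the raw data or the sole copy of the model.

```python
# Toy sketch of decentralized learning via parameter averaging.
# Each participant fits a local model on private data; only the
# fitted weights are shared and averaged. Illustrative only.

import numpy as np

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])  # ground truth every participant approximates

def local_fit(n_samples: int) -> np.ndarray:
    """One participant: least-squares fit on private, noisy observations."""
    X = rng.normal(size=(n_samples, 2))
    y = X @ true_w + rng.normal(scale=0.1, size=n_samples)
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return w

# Five participants train independently on data nobody else sees...
local_weights = [local_fit(n_samples=50) for _ in range(5)]

# ...and only the averaged parameters become the shared model.
global_w = np.mean(local_weights, axis=0)
print("averaged model:", global_w)  # close to [2.0, -1.0]
```

Real systems would need far more than this (secure aggregation, incentives, sybil resistance), but the shape is the point: capability accrues to the network rather than to one operator.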

@wilkas ·
I am all for evolving past our human shells, but the resulting being will not be human. I am talking about a scenario where people decide to stay human and their merger with technology extends no further than smart implants (I reckon most people would be too conservative to go further). And AI (or the "mergers") may outpace and abandon those "true" humans.