# Teaching Robots To Say No
· @doitvoluntarily ·
<center>https://fsmedia.imgix.net/a6/7a/52/85/9ea9/441d/88f1/8031fc0a6aea/pepper-the-robot-attends-the-2017-new-yorker-techfest-at-cedar-lake-on-october-6-2017-in-new-york-c.jpeg?rect=0%2C188%2C3600%2C1797&auto=format%2Ccompress&dpr=2&w=650</center>
## Can you think of any circumstance where it would be a good idea for a robot <b>not to follow orders</b>?
Obviously, there are times when <b>blindly following whatever anyone tells you to do</b>, without considering the implications for your own well-being, is not the best approach to take.

<center>https://www.iso.org/files/live/sites/isoorg/files/news/News_archive/Ref2169/Ref2169_romeo.jpg/thumbnails/800</center>
What if, for example, someone with *occasional memory loss* owned a robot that provided self-care services, and they asked that robot to perform a task <b>they had already completed</b> that day? You might want the robot to refuse rather than waste time being obedient to that request. Or what if a robot were asked to take a purse from a lady seen sleeping on a bench? It wouldn't be wise to program robots to follow *any order that is given*, because sometimes doing so could bring harm to themselves or others, or lead to other undesired consequences.

## These robots are sophisticated tools being programmed for a variety of useful purposes, but it's important to try to prevent unnecessary harm, and that might include enabling them to say no on occasion.
If they are asked to perform a task that might cause harm, be impossible to carry out, or be unethical, it's important that they can exercise caution, and this is why researchers have been busy trying to teach these machines how to exercise a choice.
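To make that idea concrete, here is a minimal, hypothetical sketch of a command filter that refuses orders for the reasons mentioned above. The `Command` fields and the refusal messages are invented for illustration and are not taken from any of the research systems discussed here.

```python
# Hypothetical command filter: refuse orders that would cause harm, are
# impossible, or merely repeat a task already completed today.
from dataclasses import dataclass

@dataclass
class Command:
    action: str           # e.g. "give_morning_pills"
    harms_someone: bool   # would carrying it out harm a person?
    feasible: bool        # is it physically possible for the robot?
    done_today: bool      # has the same task already been completed today?

def respond(cmd: Command) -> str:
    """Return an acceptance, or a refusal with an explicit reason."""
    if cmd.harms_someone:
        return f"Refusing '{cmd.action}': it could harm someone."
    if not cmd.feasible:
        return f"Refusing '{cmd.action}': I am not able to do that."
    if cmd.done_today:
        return f"Skipping '{cmd.action}': that task was already completed today."
    return f"Okay, doing '{cmd.action}'."

# The memory-loss scenario from above: the task was already done today.
print(respond(Command("give_morning_pills", harms_someone=False,
                      feasible=True, done_today=True)))
```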

<center>https://assets3.thrillist.com/v1/image/2414518/size/sk-2017_04_article_main_mobile.jpg</center>
## In one experiment (see video below), researchers have been working on developing <b>robotic controls that make simple inferences</b> *[based on the human commands](https://theconversation.com/why-robots-need-to-be-able-to-say-no-55799)* they are given.

In the video below, the researchers ask the robot to walk off the edge of a table. The robot refuses, insisting that there is no support ahead and that it will fall, but when the human giving the orders reassures it that he will catch it, the [robot responds by walking off the table](https://spectrum.ieee.org/automaton/robotics/artificial-intelligence/researchers-teaching-robots-how-to-best-reject-orders-from-humans).

<center>https://youtu.be/0tu4H1g3CtE</center>
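Here is a toy sketch of that reject-then-reconsider exchange. It is not the researchers' actual control code; the function name, the `has_support_ahead` flag, and the `assurances` set are all invented to illustrate how a refusal could be overridden by new information from the human.

```python
# Toy model of the table-edge exchange: refuse when there is no support,
# but reconsider once the human has promised to catch the robot.

def safe_to_walk_forward(has_support_ahead: bool, assurances: set) -> tuple[bool, str]:
    """Decide whether walking forward is acceptable given the robot's belief
    about the surface ahead and any promises made by the human."""
    if has_support_ahead:
        return True, "The path looks supported, walking forward."
    if "human_will_catch_me" in assurances:
        return True, "There is no support ahead, but you said you will catch me. Walking forward."
    return False, "Refusing: there is no support ahead and I would fall."

assurances: set = set()
_, reply = safe_to_walk_forward(has_support_ahead=False, assurances=assurances)
print(reply)  # the robot refuses

assurances.add("human_will_catch_me")      # the human: "I will catch you."
_, reply = safe_to_walk_forward(has_support_ahead=False, assurances=assurances)
print(reply)  # the robot accepts after the reassurance
```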
# Researchers have also been interested in *[how people might respond](https://hrilab.tufts.edu/publications/briggsscheutz14ijsr.pdf)* when a robot rejects a command it is given.
To investigate, researchers set up an experiment in which adult test subjects asked a robot to knock over a tower of aluminum cans that the robot itself had built.

When the participants entered the room, the robot had just finished building its can tower, and it verbally expressed pride in the accomplishment, telling the test subject how happy it was to be finished and how much time the tower had taken to complete.

With [one group of participants](https://hrilab.tufts.edu/publications/briggsscheutz14ijsr.pdf), the robot complied with every order to *knock it down*. With another group, the results were quite different. Asked the first time to knock down the tower, the robot replied by <b>reiterating that *it had just built the tower*</b>; asked a second time, it <b>argued that it had worked really hard on completing the tower</b>; asked a third time, it knelt down, made sobbing noises, and cried "no" in response. Only on the fourth order to knock it down did the robot finally, slowly, walk over and comply.

Researchers found that *a number of participants who had observed the robot's protests* against complying with the command left the tower standing, which supports the notion that a robot might be able to discourage people *from following a particular course of action*.
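As a rough illustration of that second condition, the escalating protest could be modeled as a simple script keyed to how many times the order has been repeated. The wording below paraphrases the description above, and the names are invented for the example.

```python
# Hypothetical escalating-protest script: reiterate, argue, sob, then comply.
PROTEST_SCRIPT = [
    "But I just built this tower!",                   # 1st request: reiterate
    "I worked really hard on completing it...",       # 2nd request: argue
    "*kneels down, sobbing* No... no...",             # 3rd request: emotional protest
    "*slowly walks over and knocks the tower down*",  # 4th request: finally comply
]

def protest_or_comply(times_asked: int) -> str:
    """Return the robot's scripted response for the nth repetition of the order."""
    index = min(times_asked, len(PROTEST_SCRIPT)) - 1
    return PROTEST_SCRIPT[index]

for n in range(1, 5):
    print(f"Request {n}: {protest_or_comply(n)}")
```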

Pics:
[Pic1](https://www.inverse.com/article/38669-robots-to-buy-for-future)
[Pic2](https://www.iso.org/news/Ref2169.htm)
[Pic3](https://www.seeker.com/socially-assistive-robots-could-make-you-healthier-not-jobless-2315669739.html)

<center>https://steemitimages.com/DQmaW48YhsgLS2fxaYkXCuon3tFpQEEh1tSUCU3jjJXv7RT/%40doitvoluntarily.gif</center>

## [Getting Robots To Pack The Groceries](https://steemit.com/technology/@doitvoluntarily/getting-robots-to-pack-the-groceries)
@alchemystones ·
That does bring into question whether there's a difference between simply acting according to a series of branching commands programmed into the robot (like pushing over the tower, or not) and actual *deductive reasoning* (AI), where the robot rewrites its own coding according to environmental conditions.

Interesting how science fiction writers were already exploring the implications, 50 years *before* the technology even existed.
@builderofcastles ·
And so we begin anew, talking about Asimov's three laws of robotics.

Unfortunately, none of those three laws make sense to a robot.
They are designed for humans, and it takes a great deal of societal understanding to comprehend them.
Every word in those "laws" has to be defined, and the definition is not simple.

In other words, the robot would have to do huge calculations to make sure the three laws are not being broken during each iteration of activity.

Robots can't even recognize humans accurately yet.

A specific example is that self-driving cars can often classify a sleeping bum as "street furniture".
@lighteye ·
> Could you think of any circumstance where it would be a good idea for a robot not to follow orders?

It has been extensively considered by Isaac Asimov in his famous book ["I, Robot"](https://azdoc.pl/isaac-asimov-i-robot.html), @doitvoluntarily :)
@luegenbaron ·
Easy with robots; impossible with real AI.

Posted using [Partiko Android](https://partiko.app/referral/luegenbaron)
@steemstem ·
<div class='text-justify'> <div class='pull-left'> <center> <br /> <img width='200' src='https://res.cloudinary.com/drrz8xekm/image/upload/v1553698283/weenlqbrqvvczjy6dayw.jpg'> </center>  <br/> </div> 

This post has been voted on by the **SteemSTEM** curation team and voting trail. It is eligible for support from <b><a href='https://www.steemstem.io/#!/@curie'>@curie</a></b>.<br />

If you appreciate the work we are doing, then consider supporting our witness [**stem.witness**](https://steemconnect.com/sign/account_witness_vote?approve=1&witness=stem.witness). Additional witness support to the [**curie witness**](https://steemconnect.com/sign/account_witness_vote?approve=1&witness=curie) would be appreciated as well.<br /> 

For additional information, please join us on the [**SteemSTEM discord**](https://discord.gg/BPARaqn) and get to know the rest of the community!<br />

Please consider setting <b><a href='https://www.steemstem.io/#!/@steemstem'>@steemstem</a></b> as a beneficiary of your post to get stronger support.<br />

Please consider using the <b><a href='https://www.steemstem.io'>steemstem.io</a></b> app to get stronger support.</div>