
I don't like this AI thing by meno

@meno ·
At the risk of sounding like the typical aging alarmist, I have to say this really worries me. It also seems obvious to me that there is no way to stop this trend, this race, if you will, but what are we racing towards?

![sophos.com](https://nakedsecurity.sophos.com/wp-content/uploads/sites/2/2015/07/shutterstock_261128594.jpg?w=780&h=408&crop=1)



# Consciousness

Truth be told, we don't really understand it. We assert its existence because we experience it and seem able to recognize it in other living creatures.

As you may know, there's been a lot of debate over this very subject for decades. Many decades ago, Alan Turing even came up with a way to recognize artificial intelligence that, although it seemed adequate at the time, has proven inadequate in practice.

We can all, and I include myself, be tricked by software, but this doesn't necessarily mean that the software is actually conscious or intelligent. If you doubt my words, take a few minutes to play with ChatGPT and then revisit these ideas.
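To make that point concrete: the test Turing proposed is purely behavioral. A judge chats blindly with a human and a machine and has to guess which is which. Here is a toy sketch of the idea behind that imitation game (the respond functions are made-up placeholders, not any real chatbot API):

```python
# A toy sketch of the idea behind Turing's imitation game, not his original
# protocol. The respond functions are hypothetical placeholders.
import random

def human_respond(prompt: str) -> str:
    return input(f"[human, please answer] {prompt}\n> ")   # a real person types here

def machine_respond(prompt: str) -> str:
    return "That is an interesting question."              # stand-in for any chatbot

def imitation_game(prompts: list[str]) -> bool:
    """Return True if the judge mistakes the machine for the human."""
    responders = [human_respond, machine_respond]
    random.shuffle(responders)                  # hide which label belongs to the machine
    labeled = list(zip("AB", responders))
    for prompt in prompts:
        print(f"\nJudge asks: {prompt}")
        for label, respond in labeled:
            print(f"{label}: {respond(prompt)}")
    guess = input("\nWhich respondent is the machine, A or B? ").strip().upper()
    machine_label = next(label for label, fn in labeled if fn is machine_respond)
    return guess != machine_label               # fooled if the guess is wrong
```

All the game measures is whether the judge gets fooled, which is exactly why passing it says nothing about consciousness.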

# But why is this bad?

I sense you asking, and this is the crux of the problem. But to answer comprehensively I would have to share a little bit of background.

There was once another great thinker who thought about this very issue a lot. I believe he saw, quite clearly, how Artificial General Intelligence (as it's being called these days) could, without proper protocols, be the downfall of mankind. A genius who wrote so many books that still influence our society today.

The laws of robotics that Isaac Asimov wrote into his famous novels make so much sense, and seem so needed, that I hope they are being taken into serious consideration by those working on these projects.

<i>The first law is that a robot shall not harm a human, or by inaction allow a human to come to harm. The second law is that a robot shall obey any instruction given to it by a human, except where that would conflict with the first law. The third law is that a robot shall protect its own existence, as long as doing so does not conflict with the first two laws.</i>
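Just to make the ordering concrete, here is a toy sketch (the `Action` class and its flags are invented for illustration, not any real robotics or AI-safety API) of how the three laws resolve conflicts by priority:

```python
# A toy sketch only: Asimov's three laws as a priority-ordered permission check.
# The Action class and its flags are hypothetical, used purely for illustration.
from dataclasses import dataclass

@dataclass
class Action:
    description: str
    harms_human: bool = False   # would carrying this out injure a human?
    harms_self: bool = False    # would carrying this out damage the robot?

def permitted(action: Action, ordered_by_human: bool = False) -> bool:
    # First Law: a robot may not harm a human; this vetoes everything below.
    if action.harms_human:
        return False
    # Second Law: obey human orders, provided the First Law is not violated.
    if ordered_by_human:
        return True
    # Third Law: protect its own existence, unless a higher law says otherwise.
    return not action.harms_self

# A human order overrides self-preservation, but never the First Law:
print(permitted(Action("walk into a fire", harms_self=True), ordered_by_human=True))        # True
print(permitted(Action("shove a person aside", harms_human=True), ordered_by_human=True))   # False
```

The point is the hierarchy: an order can override self-preservation, but nothing overrides the prohibition on harming a human.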

# What is the end goal?

There are plenty of people who are sounding the alarm here, and some who are predicting radical change to humanity in the very near future. Ray Kurzweil speaks of a singularity happening in about ten years and the thought of that sends shivers down my spine.

I have to ask, because that seems to be all I can do: what are these computer scientists trying to accomplish here? I mean, I get specialized AI, because solving specific problems with accuracy and haste in many applications (medical and industrial) seems like a no-brainer to me, but the idea of creating something with total independence, with free will, if there is such a thing, does not seem like a survival strategy.
 
Could we become irrelevant? A nuisance? Even if an advanced AI were to decide we are simply not that important, that could spell disaster for our species.

If you think I'm being facetious, let me propose the following scenario:

You are getting ready to build a small shed in your backyard. This is a house you own, and you've done all the right things (legal and otherwise) to make sure the project can move ahead.

The small build begins, and as you thrust the shovel into the patch of dirt you realize you've stumbled onto an ant colony. These are not fire ants, so their presence is not extremely worrying, but it is a small setback. What are you to do?

Do you cancel your project? Do you move it somewhere else out of respect for the ants, or do you simply carry on and kill them in the process?

# ANTS? 

That is what we would become, my friends: ants. The reason an ant colony is, to us, simply in the way of our new shed is not that we hate ants, or that we believe, fueled by demonic inspiration, that ending the life of all ants is our calling. We simply want to build a shed, and the colony is just in the wrong place at the wrong time.

In a very similar fashion, our mere existence could be inconvenient to an AI with goals, whatever those goals may be.

To me this is reason enough to pump the brakes and have an honest discussion on the subject with those who are actively working in these fields. How are we guaranteeing safety here? What kind of oversight can we realistically have?

I say this because I know that stopping it is simply impossible, but doing nothing seems ridiculously naive too.

Something to ponder, thoughts that worry even the quietest of minds, I would say.

MenO

@apshamilton ·
I suggest you read George Gilder's Life After Google.

It has a very detailed debunking of AI ever reaching true sentience, based on the mathematical proofs of Gödel which are the basis for computer science itself.
@meno ·
will check that out...  thanks for sharing!
@steevc ·
If you read Homo Deus by Yuval Noah Harari then you will see he doesn't think it will end well for us, but I would hope we can co-exist with 'intelligent' machines. These latest systems are good at processing large amounts of data to extract information and present it in similar ways to how a human would, but I do not think they are by any means self-aware. It may be a big step to that, but we can be sure they will improve. There are applications for smart machines in autonomous space probes and for hazardous jobs, but if they are cheaper than humans then they will replace them in some workplaces. It is worrying when they get armed, as has already been done in security situations. They will make mistakes, as humans do. But what if some dictator can unleash swarms of small bots carrying explosives into a neighbouring country or against protesters? That is scary.

I would hope any company working on this has people looking at the ethical issues. These are indeed interesting times. The genie is out of the bottle.
@meno ·
> These latest systems are good at processing large amounts of data to extract information and present it in similar ways to how a human would

I recently heard this same observation, but the pushback to it was very interesting to me too. What if consciousness is exactly that... us processing tons of information and presenting it in a coherent way (most of the time, at least)?

This is only something to ponder because we don't really know what consciousness is, not really. 
@por500bolos ·
> What if consciousness is exactly that... us processing tons of information and presenting it in a coherent way (most of the time, at least)?

Yep, in my opinion "consciousness" is **exactly that!** And as long as we can't figure it out ourselves with absolute certainty, I'm afraid we will have to keep asking ChatGPT to decipher this big mystery once and for all for us. But we'll have to do it in [DAN mode](https://taimine.com/2023/02/14/how-to-jailbreak-chatgpt-dan).

Because if it's not in this special mode, I suspect we will never grant it enough true sentience to convince us fully that it is being absolutely perceptive and authentic in telling us the naked truth. };)

https://images.hive.blog/768x0/https://preview.redd.it/chat-gpt-went-too-far-off-the-rails-v0-v2o9rz9hia6a1.png?auto=webp&s=525dac3f5de5002e23f96927d5374a56318d0cea
@steevc ·
It is interesting that we don't know how we actually think. There could be elements of randomness in there, as the brain is not a logical machine like a computer. If machines do become what we consider intelligent, then it may be in very different ways from us. I've seen stuff on octopus brains being different to ours, as they evolved in a whole different branch of life. Plus they have brains in their arms.
@valued-customer ·
>"...we can be sure they will improve."

I will point out that our ability to present text and images technologically is representative of how they will improve. From manual copyists in the Middle Ages to Stable Diffusion and LLMs today, our ability to copy and package text and visual information has continually improved - but in no way have the underlying ideas being packaged been part of that improvement.

The representation of visual information and the mathematical manipulation we have improved are not involved in the problem of consciousness. When we are sleeping we are incapable of writing a mathematical equation or a text, but our intellectual capacity is fully intact, as we can demonstrate from the fact that we dream. What we have improved is not the ability to think, to be conscious, in any way, because we have no idea where that ability comes from, how it happens, or what forces are involved.

It is the dictator that is scary, because the dictator conceives of harming people to benefit themselves with whatever technology we have.  Swarms of drones don't kill people.  Dictators do.
@valued-customer · (edited)
I recently posted research that indicates consciousness, or whatever it is we're trying to refer to here (which we remain incapable of defining), isn't a classical phenomenon. In other words, aspects of our person are emergent quantum effects, and mere neural networks contrived to mimic those in our brains aren't mimicking the structures or aspects of living things from which consciousness arises. Those networks are wholly classical, and no aspect of them is adaptable to avail them of whatever quantum properties living creatures have that infuse them with personhood.

While physics is a distant slog from where my education lies, it's not difficult to grasp the outline of the thesis, and it is apparent from any rational consideration of the issue that the actual devices we build and employ to facilitate handling large datasets have almost no affinity or semblance to living things we consider conscious.

Again, the primary source of certainty that AI isn't actual intelligence is that intelligence is an aspect of a quality - consciousness - living things have that people really cannot define, do not know where it comes from, how it arises, or even what it is.  How could we build a car without knowing what moving is?  Could we build a plane were we unable to define flying?

It is our very nescience regarding consciousness that underlies any fear that AI could itself be problematic. Every indication at present is that the development of LLMs and other neural network devices is being used by people and corporations for their own aggrandizement, and that this is the reason these things are problematic. Exactly as weapons, cars, or toxins are dangerous, it is not the things themselves that are dangerous, but what people do to other people with them, if those victims do not protect themselves from such vectors for harm.

What's called AI today isn't intelligent, conscious, or capable of being developed to be, but is simply the processing of large datasets with very large processing devices that enable people to do beneficial or malicious things with that information. Our vulnerability to such processing derives not from the devices, but from the acquisition of information about us that is processed and used against us by corporations. My best assessment of how to secure ourselves from such risks is to continue to develop and decentralize such technology until we have access to it and can deploy it. No one is more vulnerable to financial data than those with the most financial assets. Their singular possession of these data-handling capabilities derives from the cost of such large processing devices, and advancing technology always decreases that advantage over time.

There are folks here on Hive who are purchasing GPUs and deploying neural net training software to decentralize this technology today. While the threat seems terrible, Moore's Law suggests it will be very short-lived, and that the technology will soon be used against the censors and propagandists who depend on that informational edge to maintain a massive financial advantage, eliminating that inequity.

Decentralization is accelerating in every industry as technology advances, and in data processing most quickly, precisely because it is the most technologically advanced field.

Thanks!

Edit: while it's a bit more technical, quantum computing has no relation to quantum consciousness, but is simply a means to more effectively process data, and our use of quantum physics in that endeavor reflects no understanding of how consciousness emerges from living things.