At the risk of sounding like the typical aging alarmist, I have to say this really worries me. It also seems obvious to me that there is no way to stop this trend, this race, if you will, but what are we racing towards?

![sophos.com](https://nakedsecurity.sophos.com/wp-content/uploads/sites/2/2015/07/shutterstock_261128594.jpg?w=780&h=408&crop=1)

# Consciousness

Truth be told, we don't really understand it. We assert its existence because we experience it and seem to be able to recognize it in other living creatures. As you may know, there's been a lot of debate over this very subject for decades. Many decades ago, Alan Turing even came up with a way to recognize artificial intelligence that, although it seemed adequate at the time, has become insufficient in practice. We can, and I include myself, be tricked by software, but this doesn't necessarily mean that the software is actually conscious or intelligent. If you doubt my words, take a few minutes to play with ChatGPT and then revisit these ideas.

# But why is this bad?

I sense you asking, and this is the crux of the problem. But to answer comprehensively, I would have to share a little background. There was once another great thinker who thought about this very issue a lot. I believe he saw, quite clearly, how Artificial General Intelligence (as it's being defined these days) could, without proper protocols, be the downfall of mankind. A genius who wrote so many books that still influence our society today. The laws of robotics that Isaac Asimov wrote into his famous novels make so much sense, and seem so needed, that I hope they are being taken into serious consideration by those working on these projects.

<i> The first law is that a robot shall not harm a human, or by inaction allow a human to come to harm. 
The second law is that a robot shall obey any instruction given to it by a human, and the third law is that a robot shall avoid actions or situations that could cause it to come to harm. </i>

# What is the end goal?

There are plenty of people sounding the alarm here, and some are predicting radical change to humanity in the very near future. Ray Kurzweil speaks of a singularity happening in about ten years, and the thought of that sends shivers down my spine. I have to ask, because that seems to be all I can do: what are these computer scientists trying to accomplish? I get specialized AI; solving specific problems with accuracy and haste in many applications (medical and industrial) seems like a no-brainer to me. But the idea of creating something with total independence, with free will, if there is such a thing, does not seem like a survival strategy. Could we become irrelevant? A nuisance? Even if an advanced AI were to decide we are simply not that important, that could spell disaster for our species.

If you think I'm being facetious, let me propose the following scenario: you are getting ready to build a small shed in your backyard. This is a house you own, and you've done all the right things, legal and otherwise, to make sure the project can move ahead. The small build begins, and as you thrust the shovel into the patch of dirt, you realize you've stumbled onto an ant colony. These are not fire ants, so their presence isn't extremely worrying, but it is a small setback. What are you to do? Do you cancel your project? Do you move it somewhere else out of respect for the ants, or do you simply continue on and kill them in the process?

# ANTS?

That is what we would become, my friends: ants. The reason an ant colony is, to us, just in the way of our new shed is not that we hate ants, or that we believe, fueled by demonic inspiration, that ending the life of all ants is our calling. 
We simply want to build a shed, and the colony is just in the wrong place at the wrong time. In a very similar fashion, our mere existence could be inconvenient to an AI with goals, whatever those goals may be. To me, this is reason enough to pump the brakes and have an honest discussion with those actively working in these fields. How are we guaranteeing safety here? What kind of oversight can we realistically have? I say this because I know that stopping it is simply impossible, but doing nothing seems ridiculously naive too.

Something to ponder; thoughts that worry even the quietest of minds, I would say.

MenO
author | meno |
---|---|
permlink | i-dont-like-this-ai-thing |
category | philosophy |
json_metadata | {"app":"peakd/2023.3.4","format":"markdown","tags":["philosophy","chatgpt","ai","pob","culture"],"users":[],"image":["https://nakedsecurity.sophos.com/wp-content/uploads/sites/2/2015/07/shutterstock_261128594.jpg?w=780&h=408&crop=1"]} |
created | 2023-03-27 16:11:27 |
last_update | 2023-03-27 16:11:27 |
depth | 0 |
children | 8 |
last_payout | 2023-04-03 16:11:27 |
cashout_time | 1969-12-31 23:59:59 |
total_payout_value | 8.508 HBD |
curator_payout_value | 8.483 HBD |
pending_payout_value | 0.000 HBD |
promoted | 0.000 HBD |
body_length | 4,468 |
author_reputation | 298,127,314,084,926 |
root_title | "I don't like this AI thing" |
beneficiaries | [] |
max_accepted_payout | 1,000,000.000 HBD |
percent_hbd | 10,000 |
post_id | 122,011,845 |
net_rshares | 30,871,050,365,974 |
author_curate_reward | "" |
voter | weight | wgt% | rshares | pct | time |
---|---|---|---|---|---|
pfunk | 0 | 5,452,095,263,449 | 100% | ||
cryptogee | 0 | 13,009,552,062 | 100% | ||
acidyo | 0 | 5,324,539,836,410 | 50% | ||
steevc | 0 | 1,222,043,680,436 | 48% | ||
lk666 | 0 | 13,782,190,928 | 20% | ||
abh12345 | 0 | 426,846,746,464 | 15% | ||
clayboyn | 0 | 13,413,377,143 | 10% | ||
r0nd0n | 0 | 32,315,742,967 | 15% | ||
vannour | 0 | 146,914,231,928 | 100% | ||
steemcultures | 0 | 1,112,975,839 | 100% | ||
steemworld | 0 | 4,592,443,028 | 100% | ||
creat | 0 | 1,238,751,190 | 100% | ||
mapesa | 0 | 2,062,001,646 | 50% | ||
cisah | 0 | 1,222,410,023 | 100% | ||
tarotbyfergus | 0 | 523,274,322,647 | 100% | ||
v4vapid | 0 | 5,305,178,013,552 | 33% | ||
pedir-museum | 0 | 2,911,101,501 | 100% | ||
aceh | 0 | 1,378,987,120 | 100% | ||
omitaylor | 0 | 2,194,336,721 | 25% | ||
alexis555 | 0 | 2,943,259,301,019 | 28% | ||
lizanomadsoul | 0 | 5,812,963,255 | 3% | ||
munzir | 0 | 1,348,563,400 | 100% | ||
deirdyweirdy | 0 | 53,278,793,379 | 20% | ||
amberyooper | 0 | 30,938,480,155 | 100% | ||
valued-customer | 0 | 32,688,324,497 | 25% | ||
newsflash | 0 | 65,015,235,345 | 8.25% | ||
fredrikaa | 0 | 2,203,334,873,457 | 100% | ||
galberto | 0 | 317,007,600,816 | 100% | ||
paulag | 0 | 31,859,254,871 | 25% | ||
jayna | 0 | 18,640,621,062 | 3% | ||
joeyarnoldvn | 0 | 561,905,079 | 1.68% | ||
gniksivart | 0 | 34,982,269,068 | 10% | ||
kaminchan | 0 | 15,651,991,893 | 39% | ||
sannur | 0 | 2,325,727,208 | 100% | ||
aquaculture | 0 | 536,100,244 | 100% | ||
modernpastor | 0 | 40,879,456,611 | 50% | ||
futurethinker | 0 | 1,379,230,101 | 100% | ||
petrolinivideo | 0 | 5,605,350,581 | 50% | ||
paintingangels | 0 | 2,821,114,198 | 100% | ||
karinxxl | 0 | 24,036,155,108 | 25% | ||
g10a | 0 | 3,170,896,945 | 50% | ||
eonwarped | 0 | 330,129,881,696 | 25% | ||
evecab | 0 | 2,861,379,299 | 80% | ||
kernelillo | 0 | 54,311,164,882 | 100% | ||
shai-hulud | 0 | 660,074,164 | 5% | ||
sneakyninja | 0 | 790,718,888 | 1.3% | ||
markaustin | 0 | 1,641,153,391 | 10% | ||
fourfourfun | 0 | 1,226,131,804 | 4.12% | ||
abitcoinskeptic | 0 | 2,134,415,560 | 7.5% | ||
insideoutlet | 0 | 1,035,579,315 | 5% | ||
saintchristopher | 0 | 4,627,456,463 | 100% | ||
tryskele | 0 | 757,921,716 | 3% | ||
tanishqyeverma | 0 | 1,355,537,069 | 100% | ||
luisfe | 0 | 3,859,904,562 | 100% | ||
jnmarteau | 0 | 1,245,578,558 | 3% | ||
philnewton | 0 | 565,251,290 | 7.5% | ||
newageinv | 0 | 107,781,549,646 | 20% | ||
superstarxtala | 0 | 1,577,624,421 | 100% | ||
captainbob | 0 | 344,865,028,038 | 84.5% | ||
liberviarum | 0 | 1,128,706,827 | 33% | ||
asapers | 0 | 2,122,586,271 | 50% | ||
acesontop | 0 | 87,405,268,690 | 100% | ||
sbi3 | 0 | 141,269,721,893 | 12.39% | ||
leomolina | 0 | 2,453,698,878 | 9% | ||
aliriera | 0 | 9,973,927,179 | 100% | ||
frejafri | 0 | 1,928,584,624 | 5% | ||
apshamilton | 0 | 1,940,814,751,856 | 100% | ||
k0wsk1 | 0 | 3,608,987,054 | 100% | ||
xves | 0 | 8,852,843,495 | 25% | ||
veteransoffgrid | 0 | 507,765,116 | 100% | ||
thedailysneak | 0 | 1,084,237,481 | 1.3% | ||
yestermorrow | 0 | 12,273,359,214 | 31% | ||
preventsuicide | 0 | 1,976,694,033 | 45% | ||
hamismsf | 0 | 615,568,381,606 | 100% | ||
yaelg | 0 | 60,711,735,439 | 90% | ||
bengiles | 0 | 705,817,233,944 | 100% | ||
thelittlebank | 0 | 802,670,640,008 | 100% | ||
karinpics | 0 | 1,000,720,363 | 100% | ||
hornetmusic | 0 | 1,341,032,274 | 100% | ||
mister-meeseeks | 0 | 22,126,303,548 | 25% | ||
sbi-booster | 0 | 67,838,475,368 | 100% | ||
cwow2 | 0 | 7,188,211,057 | 3% | ||
misterengagement | 0 | 1,133,062,059 | 15% | ||
gudnius.comics | 0 | 13,685,307,874 | 100% | ||
jpbliberty | 0 | 560,420,300,149 | 100% | ||
primeradue | 0 | 526,953,579 | 33% | ||
princessamber | 0 | 994,292,631 | 50% | ||
i-c-e | 0 | 5,683,918,716 | 35% | ||
ghostdylan | 0 | 998,444,810 | 50% | ||
todayslight | 0 | 2,775,436,216 | 24% | ||
leighscotford | 0 | 5,532,305,059 | 7.2% | ||
pal-isaria | 0 | 934,123,899 | 25% | ||
hyborian-strain | 0 | 2,435,944,017 | 30% | ||
sbi-tokens | 0 | 974,046,745 | 2.61% | ||
baltai | 0 | 31,089,058,613 | 12.5% | ||
gloriaolar | 0 | 4,198,274,505 | 4.5% | ||
kgsupport | 0 | 1,723,165,591 | 36% | ||
thehockeyfan-at | 0 | 66,112,342,581 | 15% | ||
hivebuzz | 0 | 15,872,554,084 | 3% | ||
ninnu | 0 | 1,643,188,920 | 5.85% | ||
ghaazi | 0 | 2,559,974,484 | 50% | ||
lufg | 0 | 1,647,001,162 | 50% | ||
leveluplifestyle | 0 | 12,411,089,241 | 25% | ||
nfttunz | 0 | 395,892,821,232 | 25% | ||
mattbrown.art | 0 | 612,495,280 | 7.5% | ||
meltysquid | 0 | 43,551,090,031 | 100% | ||
hoffmeister84 | 0 | 822,542,085 | 7% | ||
isabel-vihu | 0 | 4,357,154,318 | 15% | ||
adiiba | 0 | 1,134,030,440 | 20% | ||
bluenix | 0 | 4,979,057,327 | 100% |
I suggest you read George Gilder's Life After Google. It has a very detailed debunking of AI ever reaching true sentience, based on the mathematical proofs of Gödel, which are the basis of computer science itself.
author | apshamilton |
---|---|
permlink | re-meno-rs6vcc |
category | philosophy |
json_metadata | {"tags":["philosophy"],"app":"peakd/2023.3.4"} |
created | 2023-03-27 17:02:36 |
last_update | 2023-03-27 17:02:36 |
depth | 1 |
children | 1 |
last_payout | 2023-04-03 17:02:36 |
cashout_time | 1969-12-31 23:59:59 |
total_payout_value | 0.032 HBD |
curator_payout_value | 0.033 HBD |
pending_payout_value | 0.000 HBD |
promoted | 0.000 HBD |
body_length | 214 |
author_reputation | 186,516,695,188,555 |
root_title | "I don't like this AI thing" |
beneficiaries | [] |
max_accepted_payout | 1,000,000.000 HBD |
percent_hbd | 10,000 |
post_id | 122,013,111 |
net_rshares | 121,612,403,290 |
author_curate_reward | "" |
voter | weight | wgt% | rshares | pct | time |
---|---|---|---|---|---|
valued-customer | 0 | 32,365,107,304 | 25% | ||
meno | 0 | 89,247,295,986 | 11% |
will check that out... thanks for sharing!
author | meno |
---|---|
permlink | re-apshamilton-rs6xmy |
category | philosophy |
json_metadata | {"tags":["philosophy"],"app":"peakd/2023.3.4"} |
created | 2023-03-27 17:52:09 |
last_update | 2023-03-27 17:52:09 |
depth | 2 |
children | 0 |
last_payout | 2023-04-03 17:52:09 |
cashout_time | 1969-12-31 23:59:59 |
total_payout_value | 0.000 HBD |
curator_payout_value | 0.000 HBD |
pending_payout_value | 0.000 HBD |
promoted | 0.000 HBD |
body_length | 43 |
author_reputation | 298,127,314,084,926 |
root_title | "I don't like this AI thing" |
beneficiaries | [] |
max_accepted_payout | 1,000,000.000 HBD |
percent_hbd | 10,000 |
post_id | 122,014,457 |
net_rshares | 0 |
If you read Homo Deus by Yuval Noah Harari, you will see he doesn't think it will end well for us, but I would hope we can co-exist with 'intelligent' machines. These latest systems are good at processing large amounts of data to extract information and present it in ways similar to how a human would, but I do not think they are by any means self-aware. It may be a big step to that, but we can be sure they will improve.

There are applications for smart machines in autonomous space probes and for hazardous jobs, but if they are cheaper than humans, they will replace them in some workplaces. It is worrying when they get armed, as has already been done in security situations. They will make mistakes, as humans do. But what if some dictator can unleash swarms of small bots carrying explosives into a neighbouring country or against protesters? That is scary. I would hope any company working on this has people looking at the ethical issues.

These are indeed interesting times. The genie is out of the bottle.
author | steevc |
---|---|
permlink | re-meno-rs6ymq |
category | philosophy |
json_metadata | {"tags":["philosophy"],"app":"peakd/2023.3.4"} |
created | 2023-03-27 18:13:39 |
last_update | 2023-03-27 18:13:39 |
depth | 1 |
children | 4 |
last_payout | 2023-04-03 18:13:39 |
cashout_time | 1969-12-31 23:59:59 |
total_payout_value | 0.032 HBD |
curator_payout_value | 0.032 HBD |
pending_payout_value | 0.000 HBD |
promoted | 0.000 HBD |
body_length | 1,025 |
author_reputation | 1,046,428,034,775,746 |
root_title | "I don't like this AI thing" |
beneficiaries | [] |
max_accepted_payout | 1,000,000.000 HBD |
percent_hbd | 10,000 |
post_id | 122,015,045 |
net_rshares | 121,779,260,313 |
author_curate_reward | "" |
voter | weight | wgt% | rshares | pct | time |
---|---|---|---|---|---|
valued-customer | 0 | 32,527,514,591 | 25% | ||
meno | 0 | 89,251,745,722 | 11% |
> These latest systems are good at processing large amounts of data to extract information and present it in similar ways to how a human would

I recently heard this same observation, but the pushback to it was very interesting to me too. What if consciousness is exactly that: us processing tons of information and presenting it in a coherent way (most of the time, at least)? This is only something to ponder because we don't really know what consciousness is, not really.
author | meno |
---|---|
permlink | re-steevc-rs73hl |
category | philosophy |
json_metadata | {"tags":["philosophy"],"app":"peakd/2023.3.4"} |
created | 2023-03-27 19:58:33 |
last_update | 2023-03-27 19:58:33 |
depth | 2 |
children | 2 |
last_payout | 2023-04-03 19:58:33 |
cashout_time | 1969-12-31 23:59:59 |
total_payout_value | 0.044 HBD |
curator_payout_value | 0.045 HBD |
pending_payout_value | 0.000 HBD |
promoted | 0.000 HBD |
body_length | 479 |
author_reputation | 298,127,314,084,926 |
root_title | "I don't like this AI thing" |
beneficiaries | [] |
max_accepted_payout | 1,000,000.000 HBD |
percent_hbd | 10,000 |
post_id | 122,018,201 |
net_rshares | 166,526,429,148 |
author_curate_reward | "" |
voter | weight | wgt% | rshares | pct | time |
---|---|---|---|---|---|
steevc | 0 | 148,143,407,934 | 6% | ||
onealfa.leo | 0 | 18,383,021,214 | 4% |
> What if consciousness is exactly that... us processing tons of information and presenting it in a coherent way. (most of the time at least)

Yep, in my opinion "consciousness" is **exactly that!** And as long as we don't figure it out ourselves with absolute certainty, I'm afraid we will have to keep asking ChatGPT to decipher this big mystery once and for all for us. But we'll have to do it in [DAN mode.](https://taimine.com/2023/02/14/how-to-jailbreak-chatgpt-dan) Because if it's not in this special mode, I suspect we will never grant it true sentience enough to convince us fully that it is being absolutely perceptive and authentic in telling us the naked truth. };)

https://images.hive.blog/768x0/https://preview.redd.it/chat-gpt-went-too-far-off-the-rails-v0-v2o9rz9hia6a1.png?auto=webp&s=525dac3f5de5002e23f96927d5374a56318d0cea
author | por500bolos |
---|---|
permlink | rs7gb4 |
category | philosophy |
json_metadata | {"image":["https://images.hive.blog/768x0/https://preview.redd.it/chat-gpt-went-too-far-off-the-rails-v0-v2o9rz9hia6a1.png?auto=webp&s=525dac3f5de5002e23f96927d5374a56318d0cea"],"links":["https://taimine.com/2023/02/14/how-to-jailbreak-chatgpt-dan"],"app":"hiveblog/0.1"} |
created | 2023-03-28 00:32:36 |
last_update | 2023-03-28 00:32:36 |
depth | 3 |
children | 0 |
last_payout | 2023-04-04 00:32:36 |
cashout_time | 1969-12-31 23:59:59 |
total_payout_value | 0.000 HBD |
curator_payout_value | 0.000 HBD |
pending_payout_value | 0.000 HBD |
promoted | 0.000 HBD |
body_length | 855 |
author_reputation | 112,191,834,565,846 |
root_title | "I don't like this AI thing" |
beneficiaries | [] |
max_accepted_payout | 1,000,000.000 HBD |
percent_hbd | 10,000 |
post_id | 122,024,551 |
net_rshares | -82,410,716,790 |
author_curate_reward | "" |
voter | weight | wgt% | rshares | pct | time |
---|---|---|---|---|---|
adm | 0 | -75,413,729,622 | -0.4% | ||
spaminator | 0 | -6,996,987,168 | -0.25% |
It is interesting that we don't know how we actually think. There could be elements of randomness in there, as the brain is not a logical machine like a computer. If machines do become what we consider intelligent, it may be in very different ways from us. I've seen material on octopus brains being different from ours, as they evolved in a whole different branch of life. Plus, they have brains in their legs.
author | steevc |
---|---|
permlink | re-meno-rs73nl |
category | philosophy |
json_metadata | {"tags":["philosophy"],"app":"peakd/2023.3.4"} |
created | 2023-03-27 20:02:09 |
last_update | 2023-03-27 20:02:09 |
depth | 3 |
children | 0 |
last_payout | 2023-04-03 20:02:09 |
cashout_time | 1969-12-31 23:59:59 |
total_payout_value | 0.000 HBD |
curator_payout_value | 0.000 HBD |
pending_payout_value | 0.000 HBD |
promoted | 0.000 HBD |
body_length | 392 |
author_reputation | 1,046,428,034,775,746 |
root_title | "I don't like this AI thing" |
beneficiaries | [] |
max_accepted_payout | 1,000,000.000 HBD |
percent_hbd | 10,000 |
post_id | 122,018,357 |
net_rshares | 0 |
>"...we can be sure they will improve."

I will point out that our ability to present text and images technologically is representative of how they will improve. From manual copyists in the Middle Ages to Stable Diffusion and LLMs today, our ability to copy and package text and visual information has continually improved, but in no way have the underlying ideas that are packaged been part of that improvement. The representation of visual information and mathematical manipulation we have improved is not involved in the problem of consciousness.

When we are sleeping, we are incapable of writing a mathematical equation or a text, but our intellectual capacity is fully intact, as we can demonstrate from the fact that we dream. What we have improved is not the ability to think, to be conscious, in any way, because we have no idea where that ability comes from, how it happens, or what forces are involved.

It is the dictator that is scary, because the dictator conceives of harming people to benefit themselves with whatever technology we have. Swarms of drones don't kill people. Dictators do.
author | valued-customer |
---|---|
permlink | re-steevc-rs7bqu |
category | philosophy |
json_metadata | {"tags":["philosophy"],"app":"peakd/2023.3.4"} |
created | 2023-03-27 22:54:57 |
last_update | 2023-03-27 22:54:57 |
depth | 2 |
children | 0 |
last_payout | 2023-04-03 22:54:57 |
cashout_time | 1969-12-31 23:59:59 |
total_payout_value | 0.040 HBD |
curator_payout_value | 0.040 HBD |
pending_payout_value | 0.000 HBD |
promoted | 0.000 HBD |
body_length | 1,109 |
author_reputation | 252,206,123,401,967 |
root_title | "I don't like this AI thing" |
beneficiaries | [] |
max_accepted_payout | 1,000,000.000 HBD |
percent_hbd | 10,000 |
post_id | 122,022,599 |
net_rshares | 148,577,523,500 |
author_curate_reward | "" |
voter | weight | wgt% | rshares | pct | time |
---|---|---|---|---|---|
steevc | 0 | 130,514,424,105 | 5% | ||
onealfa.leo | 0 | 18,063,099,395 | 4% |
I recently posted research that indicates consciousness, or whatever it is we're trying to refer to here (which we remain incapable of defining), isn't a classical phenomenon. In other words, aspects of our persons are emergent quantum effects, and mere neural networks contrived to mimic those in our brains aren't mimicking the structures or aspects of living things from which consciousness arises. Those networks are wholly classical, and no aspect of them is adaptable to avail them of whatever quantum properties living creatures have that infuse them with personhood.

While physics is a distant slog from where my education lies, it's not difficult to grasp the outline of the thesis, and it is apparent from any rational consideration of the issue that the actual devices we build and employ to facilitate handling large datasets have almost no affinity or semblance to the living things we consider conscious. Again, the primary source of certainty that AI isn't actual intelligence is that intelligence is an aspect of a quality, consciousness, that living things have and that people really cannot define: we do not know where it comes from, how it arises, or even what it is. How could we build a car without knowing what moving is? Could we build a plane were we unable to define flying? It is our very nescience regarding consciousness that underlies any fear that AI could itself be problematic.

Every present indication is that the extant development of LLMs and other neural network devices is being used by people and corporations for their aggrandizement, and that this is the reason these things are problematic. Exactly as weapons, cars, or toxins are dangerous, it is not the things themselves that are dangerous, but what people do to other people with them, if those victims do not protect themselves from such vectors for harm. 
What's called AI today isn't intelligent, conscious, or capable of being developed to be, but is simply the processing of large datasets with very large processing devices that enable people to do beneficial or malicious things with that information. Our vulnerability to such processing derives not from the devices, but from the acquisition of information about us that is processed and used against us by corporations.

My best assessment of how to secure ourselves from such risks is to continue to develop and decentralize such technology until we have access to it and can deploy it. No one is more vulnerable to financial data than those with the most financial assets. Their singular possession of these data handling capabilities derives from the cost of such large processing devices, and advancing technology always decreases that advantage over time. There are folks here on Hive who are purchasing GPUs and deploying neural net training software to decentralize this technology today. While the threat seems terrible, Moore's Law suggests it will be very short-lived, and that it will soon be used against the censors and propagandists who depend on that informational edge to maintain massive financial advantage, eliminating that inequity. Decentralization is accelerating in every industry as technology advances, and in data processing more quickly, precisely because it is the most advanced technologically.

Thanks!

Edit: while it's a bit more technical, quantum computing has no relation to quantum consciousness; it is simply a means to more effectively process data, and our use of quantum physics in that endeavor reflects no understanding of how consciousness emerges from living things.
author | valued-customer |
---|---|
permlink | re-meno-rs7akb |
category | philosophy |
json_metadata | {"tags":"philosophy"} |
created | 2023-03-27 22:29:27 |
last_update | 2023-03-27 22:43:15 |
depth | 1 |
children | 0 |
last_payout | 2023-04-03 22:29:27 |
cashout_time | 1969-12-31 23:59:59 |
total_payout_value | 0.000 HBD |
curator_payout_value | 0.000 HBD |
pending_payout_value | 0.000 HBD |
promoted | 0.000 HBD |
body_length | 3,563 |
author_reputation | 252,206,123,401,967 |
root_title | "I don't like this AI thing" |
beneficiaries | [] |
max_accepted_payout | 1,000,000.000 HBD |
percent_hbd | 10,000 |
post_id | 122,022,028 |
net_rshares | 0 |