<center>https://steemitimages.com/DQmbYBtQM8uYcdBnujRMWxAg74sBoHQ9oWG9rkx1DM1NLAS/writingtool.jpg</center>   <div class="text-justify">   # <center>BLOG</center>

Still busy at the moment. :_( ... OR ... : ) (Which depends on how you look at being busy. And busy with what?) Will be back to actively writing here nearer to the end of February or early March, I think. Will post some more science fiction; most of what I've written has been for print. Also some speculation and science.

Will also write more about development. Something I would like to write more about is the perspective I suggest for judging code. In one sentence: how easily it can be decomposed, while everything can still be composed in any order if needed. Had a paper about that recently. (For example, can we add new features to an existing web app by writing another operation .erl file and just putting it on the server with the others, making only trivial changes to the existing definitions? That also makes a system much easier to understand and to develop quickly when there are many, many more than a few small, simple operations.) Will write about how this relates to AI. Speaking of which: developing with Erlang instead of Python. (If you recall some of my earlier posts.)

Consider the following intuition. You have a desktop with some programs installed. You get a file. No idea about its type. The type of the file is opaque. You just have the data of the file, which you don't fully understand. Yet. You may know in some cases where such a file came from. Not always. Sometimes. Sometimes it is just what the file says. You don't know what the file "is about". Imagine you know what your programs do, the ones you have installed. You installed them. Selected them. Randomly try opening the file with Draw. It doesn't open. Backtrack. Randomly try opening it with Document. It opens. So you know it's not an image. It's a text document.
Otherwise it would have been Draw that opened it and gave a valid output (displayed the image), not Document. We define a thing by a list of things it is not, by differences, even if such a list is strictly infinite or else very large, taken as an amount of information, compared to the thing taken as an amount of information [HUT94, WAT69, POP74, WAT85]. A bag of rice is not a penguin, and also not a pencil, and also not a house. It is not many, many things. So when data is labeled, we may be more interested in what it is [COE17]. If, however, we are learning, feasibly we learn more and more "about" a thing, what it "is", by learning a growing subset of what it is not. That subset may change nonmonotonically but all the same grow, for learning that it is not V, W, ... may suggest that it may actually be Z, in another form Z*, even if we already thought (because of how it seemed) that it is also not Z. We would like machines to be able to infer about the reality that generates data as information: the fibers which got projected when we only see their point projection [MCC97,99,07].

A Prolog-like search down a tree of a few options with backtracking can tag. If new methods can be added to the tree at a reasonable rate, backtracking becomes more complex, but it remains feasible. (See for example the papers of Hewitt and Minsky.) This allows growing functionality rapidly, and it plays a role in any AI, because operations have different types. When a connection between neurons gets a lower weight, say the probability with which it signals on that link in the incidence matrix, this is just another number. Same type. But when the success rate of operation X on some data is greater than that of operation Y on that data, we also get a probability. A number. But we have also tacitly tagged that data. In pragmatic meaning, insofar as X and Y are not the same type, we infer something about the type of the data.
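The desktop intuition can be sketched as a small backtracking loop. A minimal sketch, with hypothetical openers standing in for installed programs (a PNG signature check for "Draw", UTF-8 decoding for "Document"); none of this is a real API:

```python
def try_open_as_image(data: bytes) -> str:
    # Stand-in for "Draw": PNG files begin with a fixed 8-byte signature.
    if data.startswith(b"\x89PNG\r\n\x1a\n"):
        return "image"
    raise ValueError("not an image")

def try_open_as_text(data: bytes) -> str:
    # Stand-in for "Document": succeed if the bytes decode as UTF-8 text.
    return "text: " + data.decode("utf-8")

def tag_by_elimination(data, openers):
    """Try each opener in turn, backtracking on failure.

    The record of failures is itself information: the data gets
    pragmatically tagged by the growing list of what it is not.
    """
    is_not = []
    for name, opener in openers:
        try:
            opener(data)
            return name, is_not
        except (ValueError, UnicodeDecodeError):
            is_not.append(name)  # learned: the data is not `name`
    return None, is_not

tag, ruled_out = tag_by_elimination(
    b"hello", [("image", try_open_as_image), ("text", try_open_as_text)]
)
# tag is "text"; ruled_out records that it is not an "image"
```

The returned `is_not` list is the pragmatic tag: a growing subset of what the data is not.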
What operation yields a meaningful output on it is not arbitrary, and not independent of inferences regarding what that data "is about". The record of success rates is now tagging data in unsupervised learning. It can be passed around from neuron to neuron as data. Besides being a bug report. This is related to searching for a proof of a proposition P such that we know the type or meaning of P because we know the meaning or type of X, and X is useful for proving P but Y is not useful for proving P.

There seems to be not a whole lot of popular discussion about this, and I need to write popular summaries of this literature anyway for another reason. There is a *vast* and fun and old literature and not a whole lot of discussion outside of conferences. Involving many agents loaded with such functions. Not much implemented either. Should be implemented, I think. It can do real unsupervised inference and learning. Less superficial learning. Maybe we'll see some interesting apps in the future. The real bottleneck in the 70's and 80's and 90's and 00's was ... RAM! Not anymore. So perhaps AI technology may have more play room on blockchains in the future. Besides more RAM in the go-to machine of a representative consumer, a blockchain makes backtracking-based learning with nonmonotonic reasoning much, much easier. Would like to see that! Hmmm. </div> ## ABOUT ME I'm a scientist who writes science fiction under various names. ###### <div class="text-left"> The magazines that I most recommend: [*The Magazine of Fantasy & Science Fiction*](https://www.sfsite.com/fsf/), [*Compelling Science Fiction*](http://compellingsciencefiction.com/), [*Writers of the Future*](http://www.writersofthefuture.com/), . . . 
</div> # <center>                         ◕ ‿‿ ◕ つ</center> ###### <div class="text-center">    #writing  #creativity  #science  #fiction  #novel  #scifi  #publishing  #blog         @tribesteemup @thealliance @isleofwrite @freedomtribe @smg            #technology  #cryptocurrency #life #history  #philosophy                           #communicate  #freedom  #development  #future             </div> ## <center>          UPVOTE !   FOLLOW !</center>   <div class="text-justify"> This work is licensed under a <a rel="license" href="http://creativecommons.org/licenses/by-sa/4.0/">Creative Commons Attribution-ShareAlike 4.0 International License</a>  . . .   . . .   . . .   . . .   . . .  Text and images: ©tibra. *Disclaimer: This text is a popular, speculative discussion of basic science literature for the sake of discussion and regarding it no warranties of any kind exist. Treat it as conjecture about the past and the future. As subject to change. Like the open future itself is subject to change.* Except if this text happens to be explicitly fantasy or science fiction, in which case it is just that. Then exists another *Disclaimer: This is a work of fiction: events, names, places, characters are either imagined or used fictitiously. Any resemblance to real events or persons or places is coincidental.* </div>
author | tibra |
---|---|
permlink | channel-more-thinking-about-the-future-of-the-space-word-count-or-revised-2018-2-9 |
category | channel |
json_metadata | {"tags":["channel","science","development","thealliance","tribesteemup"],"users":["tribesteemup","thealliance","isleofwrite","freedomtribe","smg"],"image":["https://steemitimages.com/DQmbYBtQM8uYcdBnujRMWxAg74sBoHQ9oWG9rkx1DM1NLAS/writingtool.jpg"],"links":["https://www.sfsite.com/fsf/","http://compellingsciencefiction.com/","http://www.writersofthefuture.com/","http://creativecommons.org/licenses/by-sa/4.0/"],"app":"steemit/0.1","format":"markdown"} |
created | 2019-02-09 16:42:15 |
last_update | 2019-03-08 01:47:00 |
depth | 0 |
children | 6 |
last_payout | 2019-02-16 16:42:15 |
cashout_time | 1969-12-31 23:59:59 |
total_payout_value | 0.590 HBD |
curator_payout_value | 0.154 HBD |
pending_payout_value | 0.000 HBD |
promoted | 0.000 HBD |
body_length | 7,594 |
author_reputation | 6,103,786,042,055 |
root_title | "CHANNEL. More thinking about the future of the space. ... [ Word Count: 1.250 ~ 5 PAGES | Revised: 2018.2.9 ]" |
beneficiaries | [] |
max_accepted_payout | 1,000,000.000 HBD |
percent_hbd | 10,000 |
post_id | 79,627,238 |
net_rshares | 1,671,477,581,203 |
author_curate_reward | "" |
voter | weight | wgt% | rshares | pct | time |
---|---|---|---|---|---|
ashe-oro | 0 | 4,418,324,135 | 3.63% | ||
churdtzu | 0 | 84,413,784,522 | 33% | ||
juansgalt | 0 | 5,100,081,941 | 33% | ||
bryanj4 | 0 | 157,990,280 | 8.25% | ||
happyphoenix | 0 | 7,700,314,839 | 15% | ||
sterlinluxan | 0 | 4,918,756,685 | 20% | ||
warofcraft | 0 | 0 | 20% | ||
alchemage | 0 | 9,302,327,751 | 13% | ||
treaphort | 0 | 91,896,193 | 3.63% | ||
burntmd | 0 | 1,974,227,212 | 22% | ||
johnvibes | 0 | 458,438,612 | 6.6% | ||
elamental | 0 | 985,732,271 | 7% | ||
catherinebleish | 0 | 3,351,089,692 | 16.5% | ||
kenistyles | 0 | 41,587,025,794 | 100% | ||
mckeever | 0 | 3,697,384,985 | 15% | ||
emancipatedhuman | 0 | 2,664,941,998 | 8.25% | ||
dannyshine | 0 | 12,925,609,440 | 11.55% | ||
brightstar | 0 | 1,918,644,502 | 4.95% | ||
richardcrill | 0 | 21,631,379,219 | 33% | ||
tftproject | 0 | 3,952,762,765 | 4.95% | ||
adamkokesh | 0 | 399,653,732 | 3% | ||
eftnow | 0 | 31,789,500,866 | 50% | ||
maloneyj55 | 0 | 285,446,618 | 33% | ||
freebornangel | 0 | 8,904,122,401 | 10% | ||
sebcam | 0 | 2,198,540,386 | 100% | ||
consciousness | 0 | 315,992,511 | 33% | ||
triviummethod | 0 | 149,269,174 | 33% | ||
whistleblower | 0 | 205,001,397 | 33% | ||
mwolfe13 | 0 | 102,257,720 | 10.89% | ||
steemcure | 0 | 117,329,636 | 33% | ||
hopehuggs | 0 | 9,846,377,242 | 50% | ||
veganism | 0 | 1,013,199,893 | 33% | ||
steemitcommunity | 0 | 413,427,909 | 33% | ||
omitaylor | 0 | 439,308,919 | 8% | ||
haileyscomet | 0 | 579,388,827 | 20% | ||
tibra | 0 | 3,316,372,626 | 100% | ||
perceive | 0 | 475,525,932 | 33% | ||
thepatrick | 0 | 83,228,760 | 2.31% | ||
iansart | 0 | 6,953,332,002 | 19.8% | ||
whatamidoing | 0 | 2,458,084,977 | 4% | ||
activate.alpha | 0 | 1,516,201,602 | 16.5% | ||
jga | 0 | 4,015,154,382 | 34% | ||
antimedia | 0 | 2,664,399,618 | 22% | ||
bmj | 0 | 1,815,958,154 | 8% | ||
colinhoward | 0 | 5,208,486,620 | 13% | ||
walkerland | 0 | 178,283,459 | 0.66% | ||
bryandivisions | 0 | 414,659,482 | 11% | ||
heart-to-heart | 0 | 1,379,203,391 | 6.6% | ||
vincentnijman | 0 | 604,045,595 | 1.65% | ||
ratticus | 0 | 0 | 10% | ||
iamjamie | 0 | 23,047,198,357 | 100% | ||
phelimint | 0 | 1,813,240,464 | 4.29% | ||
krazypoet | 0 | 207,799,155 | 1.65% | ||
tribesteemup | 0 | 408,851,117,443 | 33% | ||
shonariver | 0 | 8,812,192,170 | 100% | ||
thekitchenfairy | 0 | 16,200,754,220 | 25% | ||
accelerator | 0 | 7,372,327,895 | 0.49% | ||
stsl | 0 | 17,068,051,378 | 13% | ||
solarsupermama | 0 | 1,857,008,547 | 11% | ||
iliasdiamantis | 0 | 6,427,536,052 | 3% | ||
taskmaster4450 | 0 | 31,569,970,096 | 3.3% | ||
mickeybeaves | 0 | 97,076,141 | 10% | ||
canadianrenegade | 0 | 4,964,494,163 | 5% | ||
kieranpearson | 0 | 1,253,461,729 | 33% | ||
katamori | 0 | 1,076,698,071 | 13.6% | ||
karinxxl | 0 | 9,179,338,530 | 20% | ||
senorcoconut | 0 | 97,133,913 | 0.66% | ||
wwf | 0 | 4,339,639,330 | 16.5% | ||
libertyepodcast | 0 | 162,248,683 | 16.5% | ||
earthmother | 0 | 2,368,073,346 | 11% | ||
trucklife-family | 0 | 6,585,777,183 | 10% | ||
lishu | 0 | 1,922,540,677 | 11% | ||
belleamie | 0 | 1,773,663,368 | 8.25% | ||
crescendoofpeace | 0 | 4,701,954,638 | 25% | ||
firststeps | 0 | 2,515,951,080 | 16.5% | ||
paradigmprospect | 0 | 5,218,345,236 | 25% | ||
krishool | 0 | 114,343,381 | 4.95% | ||
sagescrub | 0 | 2,048,329,010 | 6.6% | ||
adisrivastav | 0 | 236,380,604 | 33% | ||
mountainjewel | 0 | 4,272,361,002 | 2.64% | ||
themothership | 0 | 404,117,968,666 | 100% | ||
moxieme | 0 | 1,250,476,585 | 20% | ||
isleofwrite | 0 | 1,256,256,626 | 20% | ||
tonysayers33 | 0 | 4,061,168,747 | 15% | ||
loryluvszombies | 0 | 480,513,626 | 8.91% | ||
verhp11 | 0 | 94,812,598 | 1% | ||
hempress | 0 | 912,934,330 | 11% | ||
nataboo | 0 | 1,889,602,491 | 16.5% | ||
dannyquest | 0 | 592,782,783 | 8.25% | ||
vegan.niinja | 0 | 2,991,167,259 | 22% | ||
eugenekul | 0 | 225,382,573 | 7.26% | ||
movement19 | 0 | 1,166,519,840 | 17% | ||
homestead-guru | 0 | 3,466,084,464 | 16.5% | ||
sovereignalien | 0 | 1,010,052,751 | 33% | ||
steemsmarter | 0 | 8,568,762,401 | 10.89% | ||
smart-shaegxy | 0 | 149,335,862 | 16.5% | ||
celestialcow | 0 | 1,754,834,651 | 7.26% | ||
krystaleye | 0 | 71,684,282 | 16.5% | ||
sima369 | 0 | 243,119,524 | 22% | ||
bobaphet | 0 | 701,795,197 | 1.65% | ||
sugandhaseth | 0 | 36,713,091,998 | 100% | ||
steemer-x | 0 | 145,718,984 | 16.5% | ||
thomaskatan | 0 | 357,805,503 | 23.1% | ||
trufflepig | 0 | 60,871,730,181 | 34% | ||
sbi3 | 0 | 30,614,115,570 | 6.15% | ||
geliquasjourney | 0 | 708,996,366 | 44% | ||
riverflows | 0 | 3,043,703,694 | 5% | ||
monetapes | 0 | 105,790,183 | 3.3% | ||
annemariemay | 0 | 131,747,471 | 16.5% | ||
truthabides | 0 | 240,049,520 | 11% | ||
krisstofer | 0 | 343,126,160 | 7.26% | ||
cambridgeport90 | 0 | 1,736,061,969 | 16.5% | ||
camillesteemer | 0 | -31,664,773 | -100% | ||
rainbowrachel | 0 | 650,045,822 | 10% | ||
smarmy | 0 | 296,165,947 | 16.5% | ||
vividessor | 0 | 294,614,049 | 16.5% | ||
dukefranky | 0 | 164,171,792 | 16.5% | ||
arbit | 0 | 0 | 100% | ||
digitaldan | 0 | 156,911,113 | 1.65% | ||
numberjocky | 0 | 289,792,009 | 16.5% | ||
inspirewithwords | 0 | 90,237,324,066 | 4.95% | ||
mannacurrency | 0 | 8,902,734,704 | 3.3% | ||
daniscib | 0 | 337,398,794 | 6.6% | ||
cryptouru | 0 | 1,802,815,671 | 8.5% | ||
open3ye | 0 | 991,005,398 | 30% | ||
yestermorrow | 0 | 2,877,424,499 | 8.25% | ||
presleyhart | 0 | 782,007,553 | 100% | ||
smg | 0 | 52,216,974,306 | 100% | ||
barleycorn | 0 | 294,125,409 | 16.5% | ||
startreat | 0 | 2,160,938,686 | 50% | ||
beaker303 | 0 | 251,839,458 | 50% | ||
meme.nation | 0 | 505,067,197 | 8.5% | ||
metametheus | 0 | 78,214,127 | 1.65% | ||
karinpics | 0 | 476,046,918 | 100% | ||
porters | 0 | 269,301,299 | 1.65% | ||
freedomtribe | 0 | 18,513,285,993 | 4% | ||
naturalmedicine | 0 | 8,386,319,390 | 3.3% | ||
merlin7 | 0 | 772,715,057 | 0.02% | ||
qwoyn | 0 | 411,684,073 | 8.25% | ||
franciferrer | 0 | 154,212,761 | 10% | ||
cherrykiss | 0 | 10,742,569,722 | 100% | ||
steemexpress | 0 | 2,181,324,381 | 3.91% | ||
dgamez | 0 | 367,297,965 | 100% | ||
penvibes | 0 | 812,058,039 | 50% | ||
stmpay | 0 | 6,253,250,525 | 2.39% | ||
bluesniper | 0 | 2,566,348,671 | 0.47% | ||
kelicimchi | 0 | 92,680,104 | 16.5% | ||
tipu.curator | 0 | 10,747,876,551 | 33% | ||
jesusdavid93 | 0 | 208,038,543 | 100% |
Congratulations @tibra! You have completed the following achievement on the Steem blockchain and have been rewarded with new badge(s) : <table><tr><td>https://steemitimages.com/60x70/http://steemitboard.com/@tibra/votes.png?201902202059</td><td>You made more than 22000 upvotes. Your next target is to reach 23000 upvotes.</td></tr> </table> <sub>_[Click here to view your Board](https://steemitboard.com/@tibra)_</sub> <sub>_If you no longer want to receive notifications, reply to this comment with the word_ `STOP`</sub> > Support [SteemitBoard's project](https://steemit.com/@steemitboard)! **[Vote for its witness](https://v2.steemconnect.com/sign/account-witness-vote?witness=steemitboard&approve=1)** and **get one more award**!
author | steemitboard |
---|---|
permlink | steemitboard-notify-tibra-20190220t214353000z |
category | channel |
json_metadata | {"image":["https://steemitboard.com/img/notify.png"]} |
created | 2019-02-20 21:43:54 |
last_update | 2019-02-20 21:43:54 |
depth | 1 |
children | 0 |
last_payout | 2019-02-27 21:43:54 |
cashout_time | 1969-12-31 23:59:59 |
total_payout_value | 0.000 HBD |
curator_payout_value | 0.000 HBD |
pending_payout_value | 0.000 HBD |
promoted | 0.000 HBD |
body_length | 740 |
author_reputation | 38,975,615,169,260 |
root_title | "CHANNEL. More thinking about the future of the space. ... [ Word Count: 1.250 ~ 5 PAGES | Revised: 2018.2.9 ]" |
beneficiaries | [] |
max_accepted_payout | 1,000,000.000 HBD |
percent_hbd | 10,000 |
post_id | 80,163,479 |
net_rshares | 0 |
Preview. Next post. <center>https://steemitimages.com/DQmWXCT3zjebZXrCSMgEsuAfxjM3u7dXggB59f4yGV6KWHV/Fishing4WiseGuyBuddha.jpg</center>   <div class="text-justify">   # <center>BLOG</center>   **Some considerations regarding**: <center> **One future direction of blockchain technology.** </center> A controlled experiment model of computation (CEMOC). Based on actor spawning, using an implementation consisting, for example, of Loads, Exceptions, Operations, Notes, Inputs, Destinations actor agents and tag tokens, with generic composition in each field. #### <center>      Word count: 4.000 ~ 16 PAGES   |   Revised: 2019.3.15</center>     <center>— 〈  1  〉—</center> ## <center>LET'S MAKE IT</center>   *Will discuss the following*. Have to make a concise, popular way to describe the following kind of system design.

Methods transform inputs. Inputs are in mailboxes. Some inputs are files. For example, image files. Each neuron transforms inputs and passes the results on to other neurons, and it has several methods to choose from. It can update its list of methods, can backtrack when a method produces an undesired result or fails to produce any result, can select the order in which to process inputs, according to certain procedures, and can effectively split into several neurons, each having part of the list of inputs to process and part of the methods with which to process them. Neurons can spawn other neurons where that is part of their methods. And neurons use heuristics to decide to which other neurons to send their results as inputs for further transformation. Outputs appear in the appropriate mailboxes and are displayed to the correct end users. End users select goals.

Methods also extract data, and this serves to train neurons as if they were part of a different network. In the same way; just a different type of input, and it gets handled by a different type of method. Not all methods work on all inputs.
Different overlapping networks coincide in some neurons, and the state affected by one network can more or less affect the behavior of neurons in the other network. Different logs and internal states exist for different neurons. As usual, choice and selection often involve primitive randomness or else a probability distribution. Learning affects the probability distribution.

The next topic is that automation is required to make reasoning about and working with such systems feasible, and furthermore productive. For example, suppose the procedure NewRandomName(NameType,LIST1,LIST2,TEXT), as in NewRandomName(x_Neuron,[TokenT,TokenT1,TokenT2,...],[TokenN1,TokenN2,...,TokenT],TEXT), generates a random atom (x_NeuronA57 or x_NeuronBCD9876 or ...) and then writes it in the place of TokenT or TokenT1 or TokenT2 or ... wherever that occurs in TEXT. Every atom which NewRandomName generates and puts in the place of such a token differs from (is "consistent" with) the atoms so far generated and put in the place of the tokens TokenN1 and TokenN2 and ... and TokenT. NewRandomName never writes two identical random atoms in the place of TokenT when that token occurs in two different places, because TokenT was also in LIST2.

Now consider constructing a network with a hundred neurons by declaring an initial complex agent/neuron and sending it a plain text file script as an input. It has a method for parsing that script. NewRandomName(x_Neuron,[TokenT],[TokenT],Repeat(100,AddNeuronToList(ListL,Create(TokenT,[Image1.png,Image2.png],[operation1,operation2,...],[(Send(Result,RandomFrom(ListL)))]))))

In other words, the ability to generate a large number of complex agents/neurons generically is to be very compressed and standardized. Using scripts, appropriately expanding them, and then filling token placeholders and pattern matching (mostly according to naming patterns) to parse and then process the scripts is one approach. Other approaches will be discussed. An interesting system will be discussed.
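The token-filling idea can be sketched in Python (the Erlang-side NewRandomName above is the design being described; this helper, its names, and the 4-character suffix length are illustrative assumptions):

```python
import random
import string

_used_atoms = set()  # atoms generated so far; new ones must differ from all of these

def new_random_atom(prefix: str) -> str:
    """Generate a fresh atom such as x_NeuronA57X, distinct from all prior atoms."""
    while True:
        atom = prefix + "".join(
            random.choices(string.ascii_uppercase + string.digits, k=4)
        )
        if atom not in _used_atoms:
            _used_atoms.add(atom)
            return atom

def fill_tokens(prefix: str, tokens: list, text: str) -> str:
    """Replace every occurrence of every token with a distinct fresh atom,
    so no two placeholder positions ever receive the same name."""
    for token in tokens:
        while token in text:
            text = text.replace(token, new_random_atom(prefix), 1)
    return text

script = "AddNeuronToList(ListL, Create(TokenT, ...)); Send(Result, TokenT)"
expanded = fill_tokens("x_Neuron", ["TokenT"], script)
```

Because each occurrence is replaced one at a time against the set of already-used atoms, the two TokenT positions in the script end up with two different names.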
As an intro to discussing backtracking-based approaches to AI, which have a vast literature. We already discussed this months and months ago under the form of PyLogTalk and in other ways. The point is a standard, typical system in whose terms backtracking-based programming in general can be fruitfully discussed.

i.1) Actor agents (actoragents) will have at least the following fields: Loads (dataset), Exceptions, Operations, Notes, Inputs (to other actor agents; things created by transforming loads by operating on them, temporarily being stored), Destinations. The datalist and operations are what were being implemented recently. Each actor agent without data waits. It is not loaded, we will say. But if it is loaded, it will begin working. It will select an operation from a set of operations to transform the smallest part of the dataset that is a load upon it. An operation is selected. If it transforms a selected load such that any result is produced, the load is removed from the dataset of that actor agent. But if not, backtracking occurs. The load is not removed, and selection is repeated. With respect to that load, it is the operation which is removed from the operationsset. The actor agent waits if it has no operations using which it may try unloading itself. Actor agents progressively unload themselves and wait, or else reach a state where they lack the means by which to unload themselves any further, and wait. Selection from an empty set fails. No special logic: the same logic that determines working determines waiting.

Typically we simply declare an actor agent when it is classlike. In other words, when it becomes loaded, it will select a load and create a copy of itself, using the same function the developer uses to create classes of this sort in the script. The copy will contain a dataset, a loadsset, containing that and only that load. It will contain, however, all the same operations and everything else. (None of them were eliminated yet.) But this is a result.
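A minimal sketch of the load/operation backtracking loop just described, in Python rather than Erlang; the class name and method shapes are assumptions for illustration:

```python
import random

class ActorAgent:
    """Actor agent with a dataset of loads and a set of operations.

    On each step it tries to unload itself: pick a load and an operation
    at random; if the operation produces a result, the load is removed;
    if it fails, backtrack: keep the load, rule the operation out for it.
    """

    def __init__(self, name, loads, operations):
        self.name = name
        self.loads = list(loads)        # the dataset / loadsset
        self.operations = list(operations)
        self._failed = {}               # load -> operations already ruled out for it

    def step(self):
        if not self.loads:
            return None                 # unloaded: the agent waits
        load = random.choice(self.loads)
        tried = self._failed.setdefault(load, set())
        candidates = [op for op in self.operations if op not in tried]
        if not candidates:
            return None                 # nothing can unload this load: wait
        op = random.choice(candidates)
        try:
            result = op(load)
        except Exception:
            tried.add(op)               # backtrack and select again
            return self.step()
        self.loads.remove(load)         # success: the load is removed
        return result
```

Note that selection from an empty set simply yields waiting, so the same logic that determines working determines waiting, as in the text.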
Therefore that load is removed from the dataset of the parent actor agent. So the classlike actoragent would unload itself by spawning instances which inherit the operationsset but only an appropriate subset of the dataset. Create and Send operations exist: Create("ActorNameIsHere", [LoadsList], [OperationsList]) and Send(object,"ActorNameIsHere"). They should compose. For example, Create("ActorNameIsHereA", [LoadsListB], [Create("ActorNameIsHereC", [LoadsListD], [OperationsListE]),Send(F,"ActorNameIsHereG")]) is valid. The function Send() should not be sensitive to the type of the object. It sends what it is given to send.

i.2) Suppose, for example, or for testing purposes, that the order in which loads in the dataset, the loadsset, are selected, and the order in which operations are selected, is random.

ii) When the end user enters text in the input field in the front end, it is uploaded as the same text in a plain text file. The end user may upload images having various extensions. Let us distinguish the names of folders by some notation: ϕ*Folder will denote, for example, a folder called "Folder". There exists a special actor agent named "Initial", such that any file dropped into ϕ*Input in the program directory automatically gets added to the inputs for "Initial". This will be either an image file or a plain text file. There is an operation called "Terminal". It simply puts the object on which it operated into ϕ*Output. The front end will display it. ϕ*Input and ϕ*Output are cleared at intervals.

iii) This particular arrangement is temporary, primarily done to make online and offline testing straightforward in the near future. A system may handle different concurrent end users with notes that are passed along with loads, and only results corresponding to data supplied by each end user will be shown to that end user, even though a single system is operating. Notes can be special loads that pair with other loads.

iv) There is logging.
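The composition of Create and Send can be sketched with a toy registry; everything here (the registry dict, thunks standing in for queued operations) is an assumption for illustration, not the actual Erlang implementation:

```python
# Toy registry of actor agents: name -> {"loads": [...], "operations": [...]}.
registry = {}

def create(name, loads, operations):
    """Create an actor agent; operations may themselves be Create/Send thunks."""
    registry[name] = {"loads": list(loads), "operations": list(operations)}
    return name

def send(obj, name):
    """Type-insensitive Send: whatever it is given lands in the destination's loads."""
    registry[name]["loads"].append(obj)
    return obj

# Composition: a Create whose operations list contains another Create and a Send.
create("ActorA", ["LoadB"], [
    lambda: create("ActorC", ["LoadD"], []),
    lambda: send("F", "ActorC"),
])
for op in registry["ActorA"]["operations"]:
    op()  # running ActorA's operations spawns ActorC and sends F to it
```

The point of the sketch is only that Create and Send nest freely inside an operations list, as the validity example above requires.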
The user interface takes inputs, displays outputs, and displays to the user what logic the program performed, what the program is doing or did. All of which relies on the log the system produces at this stage. (Later each log is also going to be used for learning and other things.) So we make an operation: LearnExceptionsOperations. A function that takes a string. It adds the string it takes at the bottom of the current list of strings in a plain text log file. This log file can be at most some size. (It checks size before writing. It writes nothing if the log file would become too large.) First it searches for an appropriately named log file. If that is absent, it creates the log file with the first entry. If that is present, it just writes to it, appending. The log file it looks for is LearnedExceptionsOperations_ReadMeReact.erl when "ReadMeReact" is the actor that runs the operation "LearnExceptionsOperations".

Create("ReadMeReact", ["string1", "string2", ... ], [LearnExceptionsOperations]) ends up running LearnExceptionsOperations on each of the strings in some order and creates a plain text log file inside the same directory as everything else. The contents of LearnedExceptionsOperations_ReadMeReact.erl: string1, string9, string3, string2, ... The writing is concurrent. Once it is written to the log, a string is eliminated from the list of inputs to ReadMeReact. But there are new strings coming in, and yes, the operation LearnExceptionsOperations picks inputs, like all operations, when more than one input exists, at random, from the list of inputs at the time it makes the pick. For example, Create("ReadMeReact", [ ], [LearnExceptionsOperations]) creates the actor, having internal state [ ],[LearnExceptionsOperations]. This can be the first actor constructed in the LS. Other actors are created. It is sent some strings, and its internal state is updated to, for example, ["string1", "string2", "string3", "string4", "string5"],[LearnExceptionsOperations].
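The logging operation can be sketched as follows; this is Python for illustration rather than the .erl implementation, and the cap value is an assumption (the text only says "at most some size"):

```python
import os

MAX_LOG_BYTES = 1_000_000  # assumed cap

def learn_exceptions_operations(actor_name, entry, directory="."):
    """Append `entry` to the actor's log file, creating the file if absent.

    Checks size before writing and writes nothing if the log is too large.
    """
    path = os.path.join(
        directory, f"LearnedExceptionsOperations_{actor_name}.erl"
    )
    if os.path.exists(path) and os.path.getsize(path) >= MAX_LOG_BYTES:
        return False  # log full: write nothing
    with open(path, "a", encoding="utf-8") as log:
        log.write(entry + "\n")
    return True
```

Opening in append mode covers both cases in one call: the file is created with the first entry if absent, and appended to if present.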
ReadMeReact is done writing string1 and string3 to the log file, leaving string2, string4, string5, at which point string6, string7, string8, string9 are sent to it. Then its internal state becomes ["string2", "string4", "string5", "string6", "string7", "string8", "string9"],[LearnExceptionsOperations]. It selects randomly, so even though, for example, string2 was present "before" string8, they have an equal chance of getting selected, and string8 may appear earlier in the log, be written earlier to the log file, than string2. When the internal state of ReadMeReact becomes [ ],[LearnExceptionsOperations], it goes back to waiting.

For all actors, the basic definition is such that when an actor having name "Name" runs an operation "Operation", the string "(Name, Did, Operation)" is sent to ReadMeReact. That is, Send("(Name, Did, Operation)", ReadMeReact) is done. It should result in "(Name, Did, Operation)" being added to the list of inputs in the internal state of ReadMeReact. Then pretty soon (Name, Did, Operation) will appear written to the text log file LearnedExceptionsOperations_ReadMeReact.erl.

Three exception cases exist. First, when an actor runs the operation called LearnExceptionsOperations itself: in that case the Send(...) does not occur, for if it did, LearnedExceptionsOperations_ReadMeReact.erl would be increasingly filled with (ReadMeReact, Did, LearnExceptionsOperations), which is undesirable. Second, when Send(object, DestinationActor) is the operation run by the actor having name "Name": in that case, Send("(Name, Sent, DestinationActor)", ReadMeReact) is done. Third, when "Create" is the operation and "Name" creates an actor called "NameTwo": Send("(Name, Created, NameTwo)", ReadMeReact) is done.

v) We can consider some examples. I will write extensions for clarity, but they will not appear in the code.
For example, Create("DisplayIt", [testA.png, testB.png, testC.jpg], [Terminal.erl]) in the Lowest-level Script (LS, demo.erl) would create an actor that copies testA.png, testB.png, testC.jpg in some order to ϕ*Output. The same should occur for data that is not a file, when submitted to a terminal actor agent. We can define *a* terminal actor agent as any one that contains terminal.erl as an operation in it. For example, if Create("DisplayIt", [This is a string.], [Terminal]) is done, then, as with the logging, a plain text file containing This is a string. would be created. In all cases, the name of the created file in Output should be 23 randomly selected letters of the alphabet with the appropriate file extension. If the name already exists in the folder, it does NOT overwrite, but randomly generates another random string for the name.

Disclaimer: This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License. This text is a popular, speculative discussion of basic science literature for the sake of discussion and regarding it no warranties of any kind exist. Treat it as conjecture about the past and the future. As subject to change. Like the open future itself is subject to change. No promise to do anything or that anything is done is involved.
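The file-naming rule above (23 random letters, never overwrite) can be sketched as follows; a minimal illustrative sketch in Python, assuming lowercase letters:

```python
import os
import random
import string

def fresh_output_name(folder, extension):
    """Name a created file: 23 random letters plus the file extension.

    If the name already exists in the folder, do NOT overwrite;
    generate another random name instead.
    """
    while True:
        name = "".join(random.choices(string.ascii_lowercase, k=23)) + extension
        if not os.path.exists(os.path.join(folder, name)):
            return name
```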
author | tibra |
---|---|
permlink | re-tibra-channel-more-thinking-about-the-future-of-the-space-word-count-or-revised-2018-2-9-20190308t011652991z |
category | channel |
json_metadata | {"tags":["channel"],"app":"steemit/0.1","image":["https://steemitimages.com/DQmWXCT3zjebZXrCSMgEsuAfxjM3u7dXggB59f4yGV6KWHV/Fishing4WiseGuyBuddha.jpg"]} |
created | 2019-03-08 01:16:54 |
last_update | 2019-03-15 07:35:15 |
depth | 1 |
children | 2 |
last_payout | 2019-03-15 01:16:54 |
cashout_time | 1969-12-31 23:59:59 |
total_payout_value | 0.000 HBD |
curator_payout_value | 0.000 HBD |
pending_payout_value | 0.000 HBD |
promoted | 0.000 HBD |
body_length | 13,088 |
author_reputation | 6,103,786,042,055 |
root_title | "CHANNEL. More thinking about the future of the space. ... [ Word Count: 1.250 ~ 5 PAGES | Revised: 2018.2.9 ]" |
beneficiaries | [] |
max_accepted_payout | 1,000,000.000 HBD |
percent_hbd | 10,000 |
post_id | 80,897,746 |
net_rshares | 0 |
Preview. Next post. Continued. We popularly address some notions of doing logic in a way that the logs generated allow simple machine inference and can be used to reveal where a logical set of maps fails to achieve a goal in general-problem-solving-type problem solving.   <center>— 〈  2  〉—</center> ## <center>NEURONAL GROWTH</center>   If there is more than one load, a clone of the class-like actor-agent is created: an instance with only one load in it, which inherits the operationsset of the actor-agent whose clone it is, and the load is removed from the loadsset of the class-like actor-agent, while the clone has temporary existence. The number of actor-agents that comprise the system therefore increases. But in whatever order it happens, the class-like actor-agent works to become progressively less loaded, until it is unloaded, except if new loads are passed to it as messages faster than it unloads itself.

*This is an old approach, dating back to Hewitt and earlier. What we really want to discuss is why this, often more roundabout, method of doing things is useful. Refer for the moment to [BOD06.1,2] for a review of the literature.*

It's a controlled experiment model of computation, such that even when many arguments are needed for an operation to complete, only one thing at a time is done, changed, and logged, in the style of Newell, Shaw, and Simon, and the log can be used to learn. As was suggested back then. Each failure tags and pragmatically identifies what an unclassified object is, and the operation that identifies it, which variable specifically makes it different from other objects, is known. Or can be learned after some randomness and repetition.

*There is always confusion regarding how this occurs, even though it is contained in the axioms, and mentioned more or less explicitly in some papers; therefore let us discuss this point.*

For example, to exclude a subset N from a list of phrases M, an operation requires two arguments before it can work: N and M.
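To make the Exclude example concrete before walking through the actor mechanics: a minimal sketch of operation construction, in which one agent, given N, builds a one-argument ExcludeN that another agent can later run on M (all names here are illustrative assumptions):

```python
def build_exclude_operation(n):
    """Given the first argument N, construct the one-argument operation ExcludeN.

    A two-argument transformation is thereby reduced to single-argument
    steps: one agent builds the operation, another later applies it.
    """
    excluded = set(n)

    def exclude_n(m):
        # Exclude the subset N from the list of phrases M.
        return [phrase for phrase in m if phrase not in excluded]

    return exclude_n

exclude_n = build_exclude_operation(["spam"])  # the agent holding N builds ExcludeN
result = exclude_n(["spam", "ham", "eggs"])    # the agent holding M later runs it
# result == ["ham", "eggs"]
```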
Here, by contrast, the class-like actor-agent only ever selects an argument when more than one argument is present in the loadsset; it separates out a single argument and clones the operationsset, and only then, when only a single argument is ever present, does any operation take and transform any argument. There are petri net methods and negotiation methods and memory in operation instances which can treat multi-argument transformations, multiarrows in a category, but ideally for learning there is simply operation construction and updating the operationsset of actor-agents as needed.

One actor-agent gets an argument, such as N, and has a BuildExcludeOperation ... operation ... that ... builds ExcludeN. Which is then passed to the actor-agent which will receive M or has M, and the ExcludeN occurs there as a load. That actor-agent has a Hewittian UpdateIt operation in it. The temporary actor-agent can only run UpdateIt on an operation-type load, in this case ExcludeN. There are many ways to go from here, but, for example, being a clone of the same actor-agent, there can be private encapsulated state that gets cloned and can be passed as a token that unlocks the class-like actor-agent, the master, whose instance is the temporary actor-agent, and allows its clone to modify it in some specific manner, such as updating its operationsset as if it had originally been created with this other operationsset. Then the class-like actor-agent receives M and happens to run ExcludeN on it. But the CEMOC format is preserved and aids machine learning.

Disclaimer: This work is licensed under a Creative Commons Attribution-ShareAlike 4.0 International License. This text is a popular, speculative discussion of basic science literature for the sake of discussion and regarding it no warranties of any kind exist. Treat it as conjecture about the past and the future. As subject to change. Like the open future itself is subject to change. No promise to do anything or that anything is done is involved.
author | tibra |
---|---|
permlink | re-tibra-re-tibra-channel-more-thinking-about-the-future-of-the-space-word-count-or-revised-2018-2-9-20190315t073729912z |
category | channel |
json_metadata | {"tags":["channel"],"app":"steemit/0.1"} |
created | 2019-03-15 07:37:30 |
last_update | 2019-03-15 07:37:30 |
depth | 2 |
children | 1 |
last_payout | 2019-03-22 07:37:30 |
cashout_time | 1969-12-31 23:59:59 |
total_payout_value | 0.000 HBD |
curator_payout_value | 0.000 HBD |
pending_payout_value | 0.000 HBD |
promoted | 0.000 HBD |
body_length | 4,005 |
author_reputation | 6,103,786,042,055 |
root_title | "CHANNEL. More thinking about the future of the space. ... [ Word Count: 1.250 ~ 5 PAGES | Revised: 2018.2.9 ]" |
beneficiaries | [] |
max_accepted_payout | 1,000,000.000 HBD |
percent_hbd | 10,000 |
post_id | 81,344,407 |
net_rshares | 0 |
Will have to write an essay regarding why syntax is important. Consider that most languages work only by processing and parsing left to right (LR). How then does a developer write nested functions, for example, even if treating the B(C(D)) in A(B(C(D))) as a string, a black box not looked inside? Because if the parsing is not outermost-left to outermost-right but next-nearest unambiguous bracket pattern match, then we find A(B(C(D))) being read as A(B(C(D) and )), instead of breaking into the string B(C(D)) and A( ). We want to avoid function(arg1)) or function(arg1))))) or etc., but "function(arg1))))" or (function(arg1)))) is fine, because we should not be looking inside a string. We don't even know at that point whether the string contains a function or not. A rewrite is always possible before the code is run, or not run at all. Only the next operation looks deeper, and each operation looks at most one level deeper, in most cases. So all is predictable despite never knowing in advance, due to randomness and heuristics and learning, what operation will be run on some given input.

Even for an ordinary neural net: imagine the net learns and decides not to classify, but decides which function from a set to apply to a classified input. We cannot anticipate what it will do, so we put the type check inside functions and never check in advance; we cannot predict what it will decide. It may be that inside there is no valid syntax at all. For example, ( A ( B}])]]] ) )B}])]]] is not valid anything. But at the moment this is just text, "A ( B}])]]] )", so it gets handled by any operation that works on the level of " " but not deeper, and if anything deeper matters, it will check and fail, or check and move on, at that point. For example, operation987 may simply remove the outermost ( ), delete all but the first 3 characters, append a ) on the right, and then run the result, which is the valid A(B), even when loaded with the gibberish ( A ( B}])]]] ) ).
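A sketch of this one-level reading, matching the outermost bracket to its close by depth rather than reading strictly LR. The function name is invented; the interior is returned as an opaque string and never inspected.

```python
def split_outermost(word):
    """Return (head, body), where body is the raw text between the first '('
    and its matching ')', treated as an opaque string. Each call looks at
    most one level deep; unbalanced input stays a plain string."""
    open_pos = word.find("(")
    if open_pos == -1:
        return word, None          # no brackets: the whole word is the head
    depth = 0
    for i in range(open_pos, len(word)):
        if word[i] == "(":
            depth += 1
        elif word[i] == ")":
            depth -= 1
            if depth == 0:
                return word[:open_pos], word[open_pos + 1:i]
    return word, None              # unbalanced: treat as plain string
```

So A(B(C(D))) breaks into the head A and the string B(C(D)), rather than into A(B(C(D) and )); only the next operation, if any, looks one level deeper into the body.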
Since we cannot know in advance what operation will be run, we often just add a generic extra ( ) to bypass checking at that point. (A check or else a rewrite will obviously occur at some point; just not at this point.) So ideally we want (A(B)) to be the same as "A(B)", but still written (A(B)), so that a generic outermost-bracket stripper or constructor can be used in random combinations with the words of this language to build larger or shorter words. If there are several different bracket strippers or constructors because there are several different brackets, we don't know which will be encountered, so unnecessarily many combinations of bracket transformers and the arguments randomly passed to them as data to transform will fail. And this would be unpredictable. Yet with LR interpreting of words, rather than outermost to next outermost, we have the issue that different brackets must be used. We need abstract indices. As in SNOBOL, we can use spaces to delimit indices or labels from the words they label. (765 X (bing (80 Y (81 (Z (123 0,1,2,3 (ding 4,5,6 dong) 123) 81) 80) bing) 765) is one possibility. When there is a mismatch inside at least one bracket, not the right bracket, that level and deeper are taken as if of type "...", as if strings, which indeed is what they probably are. Notice also that out-of-order, mixed tags are used, and not place notation. Place notation is just one possibility, but it uses too much syntax where that much syntax is not required: certain words would fail to be used in contexts where they should not fail, where they are appropriate for the context in what matters, if other symbols were used. Meanwhile, functions are defined for (A ... A), and if (B ... B) is passed to a function defined for (A ... A), it works all the same, as if each A were a B. Only what is distinguishable is a significant difference, and the number of such differences is significant.
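The generic outermost-bracket constructor and stripper might look like this (a minimal sketch with invented names; a failed strip is a no-op, not an error, since we never look inside the string):

```python
def wrap_brackets(word):
    """Generic constructor: add one outermost bracket pair."""
    return "(" + word + ")"

def strip_brackets(word):
    """Generic stripper: remove one outermost bracket pair if present;
    otherwise return the word unchanged."""
    if word.startswith("(") and word.endswith(")"):
        return word[1:-1]
    return word
```

Because both transformers are total (they never fail, only sometimes do nothing), they can be thrown into random combinations with any words of the language to build larger or shorter words, which is the point of writing (A(B)) rather than plain A(B).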
The system begins leftmost in a word and reads to the right; after passing (X, it continues until it gets to X), and meanwhile sniffs for ( and, if it finds any such thing, makes a note and stops sniffing. It treats (X N X) as ("N") in a system that parses outermost bracket to next-outermost bracket _and_ left to right, not only left to right. If any note was made, it loops, until no further note is made; and there is a rule such that, if (T does not find its matching T), it treats the whole thing as a string and loops no further. A string can be anything like "Ab Cd !!!)))???", so like (Ab Cd !!!)))???), so like (5 Ab Cd !!!)))??? 5), and yet be valid and legal in that language.
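A conjectural rendering of this tagged-bracket rule; the function name and regular expression are my own, not from the text. One level of (TAG ... TAG) is read, and when the opening tag never finds its matching closing tag, the whole word degrades to a string.

```python
import re

def read_tagged(word):
    """Parse one level of '(TAG ... TAG)'. Returns ('node', tag, inner) when
    the closing tag matches the opening tag, else ('string', word): tag
    mismatches degrade to strings, so junk remains valid and legal."""
    m = re.match(r"\((\S+)\s(.*)\s\1\)$", word, re.DOTALL)
    if m:
        return ("node", m.group(1), m.group(2))
    return ("string", word)
```

So (765 X 765) is a recognized node with inner text X, while (80 Y 81), whose tags mismatch, is taken as a string; and (5 Ab Cd !!!)))??? 5) is legal, its unbalanced interior simply carried as string content.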
author | tibra |
---|---|
permlink | re-tibra-re-tibra-re-tibra-channel-more-thinking-about-the-future-of-the-space-word-count-or-revised-2018-2-9-20190322t131314149z |
category | channel |
json_metadata | {"tags":["channel"],"app":"steemit/0.1"} |
created | 2019-03-22 13:13:15 |
last_update | 2019-03-22 13:13:15 |
depth | 3 |
children | 0 |
last_payout | 2019-03-29 13:13:15 |
cashout_time | 1969-12-31 23:59:59 |
total_payout_value | 0.000 HBD |
curator_payout_value | 0.000 HBD |
pending_payout_value | 0.000 HBD |
promoted | 0.000 HBD |
body_length | 4,377 |
author_reputation | 6,103,786,042,055 |
root_title | "CHANNEL. More thinking about the future of the space. ... [ Word Count: 1.250 ~ 5 PAGES | Revised: 2018.2.9 ]" |
beneficiaries | [] |
max_accepted_payout | 1,000,000.000 HBD |
percent_hbd | 10,000 |
post_id | 81,734,751 |
net_rshares | 0 |
**Congratulations!** Your post has been selected as a daily Steemit truffle! It is listed on **rank 4** of all contributions awarded today. You can find the [TOP DAILY TRUFFLE PICKS HERE.](https://steemit.com/@trufflepig/daily-truffle-picks-2019-02-10) I upvoted your contribution because to my mind your post is at least **4 SBD** worth and should receive **137 votes**. It's now up to the lovely Steemit community to make this come true. I am `TrufflePig`, an Artificial Intelligence Bot that helps minnows and content curators using Machine Learning. If you are curious how I select content, [you can find an explanation here!](https://steemit.com/steemit/@trufflepig/weekly-truffle-updates-2019-06) Have a nice day and sincerely yours,  *`TrufflePig`*
author | trufflepig |
---|---|
permlink | re-channel-more-thinking-about-the-future-of-the-space-word-count-or-revised-2018-2-9-20190210t170620 |
category | channel |
json_metadata | "" |
created | 2019-02-10 17:06:21 |
last_update | 2019-02-10 17:06:21 |
depth | 1 |
children | 1 |
last_payout | 2019-02-17 17:06:21 |
cashout_time | 1969-12-31 23:59:59 |
total_payout_value | 0.000 HBD |
curator_payout_value | 0.000 HBD |
pending_payout_value | 0.000 HBD |
promoted | 0.000 HBD |
body_length | 883 |
author_reputation | 21,266,577,867,113 |
root_title | "CHANNEL. More thinking about the future of the space. ... [ Word Count: 1.250 ~ 5 PAGES | Revised: 2018.2.9 ]" |
beneficiaries | [] |
max_accepted_payout | 1,000,000.000 HBD |
percent_hbd | 10,000 |
post_id | 79,671,566 |
net_rshares | 0 |
author_curate_reward | "" |
voter | weight | wgt% | rshares | pct | time |
---|---|---|---|---|---|
tibra | 0 | 0 | 100% |
:)
author | tibra |
---|---|
permlink | re-trufflepig-re-channel-more-thinking-about-the-future-of-the-space-word-count-or-revised-2018-2-9-20190210t170620-20190315t065450634z |
category | channel |
json_metadata | {"tags":["channel"],"app":"steemit/0.1"} |
created | 2019-03-15 06:54:51 |
last_update | 2019-03-15 06:54:51 |
depth | 2 |
children | 0 |
last_payout | 2019-03-22 06:54:51 |
cashout_time | 1969-12-31 23:59:59 |
total_payout_value | 0.000 HBD |
curator_payout_value | 0.000 HBD |
pending_payout_value | 0.000 HBD |
promoted | 0.000 HBD |
body_length | 2 |
author_reputation | 6,103,786,042,055 |
root_title | "CHANNEL. More thinking about the future of the space. ... [ Word Count: 1.250 ~ 5 PAGES | Revised: 2018.2.9 ]" |
beneficiaries | [] |
max_accepted_payout | 1,000,000.000 HBD |
percent_hbd | 10,000 |
post_id | 81,343,141 |
net_rshares | 0 |