Viewing a response to: @dantheman/re-krnel-subchains-and-multi-chain-matrices-for-massive-blockchain-data-propagation-20161201t165337732z
Better than manual splitting would be automatic splitting. The demand-driven model he refers to is mine: it uses frequent associations to find data that should be pooled, so it can be appended to without waiting for the data to arrive from other nodes. This matters when nodes do not cache all the data, because it increases the number of domains where transaction authorisations can be immediately certified to their provenance.

To implement it, I suggest something like the hierarchy of nodes we now have with Witnesses and runners-up, but a second hierarchy based on capacity, both storage and processing. These nodes don't keep the whole blockchain, only a related subset, clustered according to transaction history and the frequency with which leaves are added to them, such as user accounts, tokens, and contracts. Some data has to be spread more broadly, but if these subnodes are sufficient in number, they in effect break up the blockchain in a more temporary and specific way, driven by use. Tracking client request frequency, and correlating the other entities that associate with those requests, is not about increasing read speed but about decreasing latency, by having the necessary data already in cache.

I am fairly sure Graphene already does some of this within a node's live buffer caches. I am mainly talking about expanding the caching to various kinds of caching nodes that specialise in aggregating associated data on disk, so each stores only a little instead of everything, and other nodes know to propagate transactions to them. I think we are talking about much the same thing with breaking into subchains, but my idea is derived from the very solution Witnesses and Masternodes enable: reducing the cost of convergence by delaying replication, so that the data is canonical within a cluster of nodes overlapping in their focal areas, and thus confirmed quickly. In the background the node propagates first to near neighbours, and, much later than currently, the network converges.
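To make the association idea concrete, here is a minimal sketch of how a caching node might track which entities are requested together and derive pools of data worth caching on the same node. Everything here (class name, threshold, union-find grouping) is a hypothetical illustration of the technique, not anything from Graphene itself.

```python
from collections import Counter, defaultdict
from itertools import combinations

class AssociationCache:
    """Tracks which entities (accounts, tokens, contracts) co-occur in
    client requests, so a caching node can decide which subsets to pool.
    Hypothetical sketch; the threshold is illustrative only."""

    def __init__(self, min_cooccurrence=3):
        self.pair_counts = Counter()
        self.min_cooccurrence = min_cooccurrence

    def observe_request(self, entities):
        # Each client request touches a set of entities; count every pair.
        for a, b in combinations(sorted(set(entities)), 2):
            self.pair_counts[(a, b)] += 1

    def clusters(self):
        # Union-find over pairs seen often enough: entities that frequently
        # co-occur end up in the same pool, to be cached together on disk.
        parent = {}
        def find(x):
            parent.setdefault(x, x)
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x
        for (a, b), n in self.pair_counts.items():
            if n >= self.min_cooccurrence:
                parent[find(a)] = find(b)
        pools = defaultdict(set)
        for x in list(parent):
            pools[find(x)].add(x)
        return [sorted(p) for p in pools.values()]
```

A node running this would periodically read off `clusters()` and pre-fetch those entity groups, which is the latency-reduction effect described above: the associated data is already local when a transaction arrives.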
But where the data is actually used, confirmation is nearly instant. Well, I am just trying to help here anyway. Parallelisation is key, so knowledge from routing and caching systems is very pertinent, as is graph analysis to find divisible regions. From what I understand, Graphene is well adapted to graph manipulation. Also, this reminds me of how 3D graphics systems work extensively with graphs, and GPUs have special processing hardware for dealing with them (matrix transforms).
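The matrix-transform parallel can be made concrete. A graph's adjacency matrix A, multiplied by itself, counts two-hop paths, and repeated multiplication propagates reachability, which is exactly the kind of dense linear algebra GPUs accelerate. A pure-Python sketch, purely illustrative:

```python
def matmul(A, B):
    # Plain matrix multiply; on a GPU this is the hardware-accelerated step.
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Tiny transaction graph: node 0 -> node 1 -> node 2
A = [[0, 1, 0],
     [0, 0, 1],
     [0, 0, 0]]

# A2[i][j] = number of two-hop paths from i to j
A2 = matmul(A, A)
```

Here `A2[0][2]` is 1 because the only two-hop path is 0 -> 1 -> 2; the same machinery, run on sparse transaction graphs, is one way graph analysis for finding divisible regions can be parallelised.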
author | l0k1 |
---|---|
permlink | re-dantheman-re-krnel-subchains-and-multi-chain-matrices-for-massive-blockchain-data-propagation-20161201t165337732z-2016122t223033355z |
category | blockchain |
json_metadata | {"tags":"blockchain","app":"esteem/1.3.2","format":"markdown+html"} |
created | 2016-12-02 21:30:36 |
last_update | 2016-12-02 21:30:36 |
depth | 2 |
children | 0 |
last_payout | 2017-01-01 16:38:27 |
cashout_time | 1969-12-31 23:59:59 |
total_payout_value | 0.000 HBD |
curator_payout_value | 0.000 HBD |
pending_payout_value | 0.000 HBD |
promoted | 0.000 HBD |
body_length | 2,448 |
author_reputation | 94,800,257,230,993 |
root_title | "Subchains and Multi-Chain Matrices for Massive Blockchain Data Propagation" |
beneficiaries | [] |
max_accepted_payout | 1,000,000.000 HBD |
percent_hbd | 10,000 |
post_id | 1,901,466 |
net_rshares | 0 |