So basically, any company or group could in theory increase its overall download speed by giving you tiny chunks of data that correspond to multiple possible downloads. It may take artificial intelligence to determine what you are likely to download.

Compression is where computers find patterns in data and replace them with smaller, abridged segments. If you send a text file, the compressor might go through and replace every common word with a single symbol.

The reason quantum computing makes a difference is that it could take thousands of formulas and data segments the user keeps locally and find an optimal combination to create a coded message that converts back into the data being downloaded. An extreme example: Netflix might send you a movie you're likely to want to see ahead of time, so that when you "download" it, it appears instantaneous. Obviously this is of no use in terms of data size, but what if 10 videos you might watch all share the same introduction? Netflix could send you that intro once and save 90% of the downloading.

So this is quite roughly written, but here is a vision for better compression. (Now, in honesty, I doubt I know enough to make a dent in work already done, but whatever.) The idea is that senders of data, or third parties that companies can hire, could significantly decrease sending time/cost and internet traffic by using an improved form of compression. Better yet, a protocol anyone can use could be created. The idea is that one can download a package of data and formulas up to various sizes, and those sending downloads can tell your computer how to use all this data in clever combinations to recreate the desired files locally. I'm not sure of the math, but I believe with 100 gigs of data and good processing power you... ok, out of time.
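The shared-intro idea above can be sketched as a tiny dictionary scheme: sender and receiver both hold a set of common chunks, and the sender transmits short references instead of the chunks themselves. This is a minimal illustration, not a real protocol; the chunk contents and IDs are made up for the example.

```python
# Minimal sketch of shared-chunk compression: both sides hold the same
# dictionary of common chunks (e.g. a video intro already downloaded).
# The sender replaces any known chunk with a short reference token; the
# receiver expands references back into the full data.

SHARED_CHUNKS = {  # hypothetical pre-downloaded data, keyed by short IDs
    0: b"NETFLIX-STANDARD-INTRO",
    1: b"COMMON-CREDITS-BLOCK",
}

def compress(data: bytes) -> list:
    """Replace known chunks with ('ref', id) tokens; pass other bytes through."""
    out = []
    i = 0
    while i < len(data):
        for cid, chunk in SHARED_CHUNKS.items():
            if data.startswith(chunk, i):
                out.append(("ref", cid))
                i += len(chunk)
                break
        else:
            out.append(("raw", data[i:i + 1]))
            i += 1
    return out

def decompress(tokens: list) -> bytes:
    """Expand reference tokens using the shared dictionary."""
    return b"".join(SHARED_CHUNKS[v] if kind == "ref" else v
                    for kind, v in tokens)

movie = b"NETFLIX-STANDARD-INTRO" + b"unique-scene-data"
tokens = compress(movie)
assert decompress(tokens) == movie
```

The saving comes entirely from data the receiver already has: the 22-byte intro travels as a single token, and only the unique scene data is sent in full.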
author | treeleaves |
---|---|
permlink | quantum-or-better-computing-may-decrease-network-traffic-through-optimal-compression |
category | invent |
json_metadata | {"tags":["invent","idea"],"app":"steemit/0.1","format":"markdown"} |
created | 2017-08-26 22:00:00 |
last_update | 2017-08-28 16:20:42 |
depth | 0 |
children | 1 |
last_payout | 2017-09-02 22:00:00 |
cashout_time | 1969-12-31 23:59:59 |
total_payout_value | 0.072 HBD |
curator_payout_value | 0.000 HBD |
pending_payout_value | 0.000 HBD |
promoted | 0.000 HBD |
body_length | 1,798 |
author_reputation | 3,845,241,917,953 |
root_title | "Quantum or Super Computing May Decrease Network Traffic Through Optimal Compression" |
beneficiaries | [] |
max_accepted_payout | 1,000,000.000 HBD |
percent_hbd | 10,000 |
post_id | 12,972,200 |
net_rshares | 18,294,840,249 |
author_curate_reward | "" |
voter | weight | wgt% | rshares | pct | time |
---|---|---|---|---|---|
tonikub | 0 | | 165,635,704 | 100% | |
kimerwder | 0 | | 156,548,325 | 100% | |
shakamuroterr | 0 | | 154,509,248 | 100% | |
kotofey | 0 | | 245,128,998 | 100% | |
zeeps | 0 | | 450,803,668 | 100% | |
dero | 0 | | 451,191,729 | 100% | |
dragonhunter | 0 | | 461,746,803 | 100% | |
karlosteves | 0 | | 456,828,680 | 100% | |
bormann | 0 | | 458,926,574 | 100% | |
anelka | 0 | | 458,911,965 | 100% | |
nrg | 0 | | 373,585,553 | 1.11% | |
bochkasa | 0 | | 526,627,077 | 100% | |
zajtse | 0 | | 508,107,945 | 100% | |
mariiala | 0 | | 532,787,200 | 100% | |
grimo | 0 | | 980,761,537 | 100% | |
enace | 0 | | 495,616,000 | 100% | |
nuikinsh | 0 | | 526,697,244 | 100% | |
tkachevav | 0 | | 532,787,200 | 100% | |
asednez | 0 | | 526,627,081 | 100% | |
deffeni | 0 | | 535,884,800 | 100% | |
nedoxlebov | 0 | | 523,668,785 | 100% | |
lenyaanoxin | 0 | | 529,689,600 | 100% | |
lelya | 0 | | 523,564,150 | 100% | |
okae | 0 | | 52,218,306 | 8% | |
sevantavr | 0 | | 529,830,752 | 100% | |
skolyshena | 0 | | 992,568,110 | 100% | |
zakapoev | 0 | | 986,692,951 | 100% | |
nalyb | 0 | | 928,514,614 | 100% | |
park7 | 0 | | 1,210,718,513 | 100% | |
evushki | 0 | | 998,151,989 | 100% | |
turist | 0 | | 1,009,755,275 | 100% | |
annaspotapch | 0 | | 1,009,753,873 | 100% | |
Just to clarify: supercomputing on the server side can benefit download speed by finding the optimal compressed form, while on the client side only typical processing power is needed to decode. The more times people download the same thing, the more useful a good compressed form becomes.

- - -

This is just a mini example. By default this string of data reserves an introduction, even if encapsulated by typical internet jargon. The first bit says "yes, this is encoded" or "no, this is pure data." The next 7 bits signify what encoding to use. Then, based on this, various groups of formulas, functions, and data forms can be selected to encode broken-up pieces of the data in question.

Here is a sample data string:

10101110001111010011010101011100010010011100011111001000101111010110010010100101110001

Here is the encoded form:

1 1001 01101 1011 0010 0110 / 110 100 111 101 100 101 (11000111) 001 010 110

(Ok, I'm doing this by hand, and I have not actually changed the original data string. I'm also trying to consider likely algorithms when only 512 are available. So far I have not had time to hand-create formulas that will work, but with some ingenuity I bet I could condense the above data string to this shorter one, or shorter.)
Decoding the header by hand:

- `1`: yes, encode
- `1001`: super method 9
- `01101`: use break-method 13 (other codes may have other schemes)
- `1011`: variable, 4 bits = 11 (total of 10 bits so far)
- `0010`: m is 2; a = func(var), where var is the next 3 bits
- `0110`: n is 6; b = (pos) mod 4

- - -

Don't even bother reading through this, but it shows my thinking. We choose (scheme 13, of super 9):

- 3 bits at a time (8 for pure data)
- l = 126-sized sub-section
- positions x*x - 8x - 4 are pure data; the rest use variables
- use functions/schemes based on section n, where n = 11 (four bits)
- Section 11 has 16 formulas
- Section 11 assumes two are wanted (next, 8 bits)
- function m(n(var)); if var is 111, then use the last value plus 1, do again + 2, again + 3

What functions m and n might look like:

- Function m = var * 7 + (var^2 + 2*var mod 3) + b, where b is a secondary function and var is the compressed data
- Function n = x mod 4, where the position is x

- - -

This process has taught me about the math involved. Basically, at the beginning one chooses between millions of algorithm groups (having more, smaller groups), possibly more than once, then takes each group and combines them indexed, then chooses, within smaller sections of the original data, which groups and then inner-groups to use. The more data one begins with, the more likely one is to find a somewhat random combination of compression algorithms that will match cascading, ever-smaller sections.
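The header fields in the hand-worked example can be parsed mechanically. Here is a minimal sketch using the field widths from the example (1 flag bit, a 4-bit super method, a 5-bit break method, a 4-bit variable); the field names, and the operator precedence I assume for function m, are my guesses rather than a defined protocol.

```python
# Parse the example header "1 1001 01101 1011": 1 flag bit, then a
# 4-bit "super method", a 5-bit "break method", and a 4-bit variable.
# Field widths follow the hand-decoded example; names are hypothetical.

def parse_header(bits: str) -> dict:
    return {
        "encoded": bits[0] == "1",           # 1 = encoded, 0 = pure data
        "super_method": int(bits[1:5], 2),   # "1001" -> 9
        "break_method": int(bits[5:10], 2),  # "01101" -> 13
        "variable": int(bits[10:14], 2),     # "1011" -> 11
    }

# The sketched helper functions. I assume "var^2 + 2*var mod 3" means
# the whole sum is taken mod 3; the original wording is ambiguous.
def func_n(x: int) -> int:
    """Function n = x mod 4, where x is the position."""
    return x % 4

def func_m(var: int, b: int) -> int:
    """Function m = var*7 + ((var^2 + 2*var) mod 3) + b (precedence assumed)."""
    return var * 7 + (var ** 2 + 2 * var) % 3 + b

header = parse_header("1" + "1001" + "01101" + "1011")
```

Parsing the example string this way recovers the same values decoded by hand above: method 9, break-method 13, variable 11.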
author | treeleaves |
---|---|
permlink | re-treeleaves-quantum-or-better-computing-may-decrease-network-traffic-through-optimal-compression-20170901t210528956z |
category | invent |
json_metadata | {"tags":["invent"],"app":"steemit/0.1"} |
created | 2017-09-01 21:05:27 |
last_update | 2017-09-01 21:07:57 |
depth | 1 |
children | 0 |
last_payout | 2017-09-08 21:05:27 |
cashout_time | 1969-12-31 23:59:59 |
total_payout_value | 0.000 HBD |
curator_payout_value | 0.000 HBD |
pending_payout_value | 0.000 HBD |
promoted | 0.000 HBD |
body_length | 2,603 |
author_reputation | 3,845,241,917,953 |
root_title | "Quantum or Super Computing May Decrease Network Traffic Through Optimal Compression" |
beneficiaries | [] |
max_accepted_payout | 1,000,000.000 HBD |
percent_hbd | 10,000 |
post_id | 13,589,251 |
net_rshares | 0 |