
View this thread on: hive.blog | peakd.com | ecency.com

Viewing a response to: @taskmaster4450le/re-taskmaster4450le-2e6ykpseq

· @taskmaster4450le ·
@taskmaster4450le "The ever-shrinking model According to a study from..."
The ever-shrinking model
According to a study from researchers at Harvard, Stanford, MIT, Databricks, and Carnegie Mellon, quantized models perform worse if the original, unquantized version of the model was trained over a long period on lots of data. In other words, at a certain point, it may actually be better to just train a smaller model rather than cook down a big one.

That could spell bad news for AI companies training extremely large models (known to improve answer quality) and then quantizing them in an effort to make them less expensive to serve.
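For readers unfamiliar with the mechanics, here is a minimal sketch of what "quantizing" a model means, assuming simple symmetric int8 post-training quantization (an illustration only, not the specific method studied in the paper): weights are rescaled and rounded to 8-bit integers, trading precision for memory and serving cost.

```python
# Minimal sketch: symmetric per-tensor int8 quantization of a weight matrix.
# An illustrative toy example, not the paper's experimental setup.
import numpy as np

def quantize_int8(w: np.ndarray):
    """Map float weights to int8 using a single per-tensor scale."""
    scale = np.abs(w).max() / 127.0            # largest weight maps to +/-127
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float weights from the int8 representation."""
    return q.astype(np.float32) * scale

rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.02, size=(1024, 1024)).astype(np.float32)  # toy layer
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
print("mean abs round-trip error:", float(np.abs(w - w_hat).mean()))
```

The rounding error shown here is the cost of quantization; the study's claim is that models trained longer on more data are more sensitive to this loss of precision.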
properties (22)
author: taskmaster4450le
permlink: re-taskmaster4450le-9cvctje3
category: hive-167922
json_metadata: {"app":"leothreads/0.3","format":"markdown","tags":["leofinance"],"canonical_url":"https://inleo.io/threads/view/taskmaster4450le/re-taskmaster4450le-9cvctje3","isPoll":false,"pollOptions":{},"dimensions":[]}
created: 2024-11-17 18:52:57
last_update: 2024-11-17 18:52:57
depth: 3
children: 0
last_payout: 2024-11-24 18:52:57
cashout_time: 1969-12-31 23:59:59
total_payout_value: 0.000 HBD
curator_payout_value: 0.000 HBD
pending_payout_value: 0.000 HBD
promoted: 0.000 HBD
body_length: 562
author_reputation: 2,193,078,876,199,132
root_title: "LeoThread 2024-11-17 10:12"
beneficiaries: []
max_accepted_payout: 1,000,000.000 HBD
percent_hbd: 10,000
post_id: 138,514,300
net_rshares: 0