· @taskmaster4450le ·
## [A popular technique to make AI more efficient has drawbacks](https://techcrunch.com/2024/11/17/a-popular-technique-to-make-ai-more-efficient-has-drawbacks/)

One of the most widely used techniques to make AI models more efficient, quantization, has limits — and the industry could be fast approaching them.

In the context of AI, quantization refers to lowering the number of bits — the smallest units a computer can process — needed to represent information. Consider this analogy: When someone asks the time, you’d probably say “noon” — not “oh twelve hundred, one second, and four milliseconds.” That’s quantizing; both answers are correct, but one is slightly more precise. How much precision you actually need depends on the context.
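The "noon vs. 12:00:01.004" analogy can be made concrete with a minimal sketch (hypothetical values, not from the article): storing numbers in 8 bits instead of 32 keeps them roughly right while giving up fine precision.

```python
import numpy as np

# A minimal sketch of quantization: representing values with fewer bits.
# The example numbers are arbitrary, chosen only for illustration.
values = np.array([0.1234567, 3.1415926, 2.7182818], dtype=np.float32)

# Map the float range onto 256 levels (8 bits) -- like answering "noon"
# instead of "oh twelve hundred, one second, and four milliseconds".
scale = values.max() / 255.0
quantized = np.round(values / scale).astype(np.uint8)   # 1 byte per value
dequantized = quantized.astype(np.float32) * scale      # approximate recovery

print(quantized)      # small integers, 8 bits each instead of 32
print(dequantized)    # close to the originals, but not exact
```

Both representations are "correct" in the article's sense; the 8-bit one simply discards precision the use case may not need.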

#ai #technology 
Posted by @taskmaster4450le, 2024-11-17 18:52:33
@taskmaster4450le ·
AI models consist of several components that can be quantized — in particular parameters, the internal variables models use to make predictions or decisions. This is convenient, considering models perform millions of calculations when run. Quantized models with fewer bits representing their parameters are less demanding mathematically, and therefore computationally. (To be clear, this is a different process from “distilling,” which is a more involved and selective pruning of parameters.)

But quantization may have more trade-offs than previously assumed.
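The parameter-quantization idea described above can be sketched as follows. This is a rough illustration (not any particular library's API) of symmetric int8 weight quantization, using made-up random weights; it shows both the storage savings and the small numerical error that creates the trade-off.

```python
import numpy as np

# Sketch of symmetric int8 quantization of model parameters (weights).
# All names and values here are illustrative, not from a real model.
rng = np.random.default_rng(0)
weights = rng.normal(size=(4, 4)).astype(np.float32)   # stand-in parameters

scale = np.abs(weights).max() / 127.0                  # map range onto int8
w_int8 = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)

x = rng.normal(size=(4,)).astype(np.float32)           # an input vector
full = weights @ x                                     # full-precision result
approx = (w_int8.astype(np.float32) * scale) @ x       # int8-backed result

print(np.abs(full - approx).max())  # small but nonzero: the trade-off
```

Each weight now occupies 1 byte instead of 4, and integer arithmetic is cheaper on most hardware, but every prediction carries a little quantization error, which is the drawback the article examines.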
Posted by @taskmaster4450le, 2024-11-17 18:52:48