Viewing a response to: @taskmaster4450le/re-leothreads-5c7hjcbp
## [A popular technique to make AI more efficient has drawbacks](https://techcrunch.com/2024/11/17/a-popular-technique-to-make-ai-more-efficient-has-drawbacks/) One of the most widely used techniques to make AI models more efficient, quantization, has limits — and the industry could be fast approaching them. In the context of AI, quantization refers to lowering the number of bits — the smallest units a computer can process — needed to represent information. Consider this analogy: When someone asks the time, you’d probably say “noon” — not “oh twelve hundred, one second, and four milliseconds.” That’s quantizing; both answers are correct, but one is slightly more precise. How much precision you actually need depends on the context. #ai #technology
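The rounding described above is exactly what happens to a model's weights. As a concrete illustration (a minimal sketch, not the article's code), here is symmetric int8 quantization in Python with NumPy: each float32 value is mapped onto one of 256 integer levels, trading precision for a 4x smaller representation.

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Symmetric quantization: map float32 values onto int8 levels [-127, 127]."""
    scale = np.abs(weights).max() / 127.0  # one step of the int8 grid
    q = np.round(weights / scale).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover approximate float32 values from the int8 representation."""
    return q.astype(np.float32) * scale

w = np.random.randn(1000).astype(np.float32)  # stand-in for model weights
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)
# Rounding error per weight is at most half a quantization step (scale / 2).
max_err = np.abs(w - w_hat).max()
```

Like the "noon" answer, `w_hat` is correct to within half a step of the coarser grid; whether that is precise enough depends on the model.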
| author | taskmaster4450le |
| --- | --- |
| permlink | re-taskmaster4450le-2zcwq3sgo |
| category | hive-167922 |
| json_metadata | {"app":"leothreads/0.3","format":"markdown","tags":["leofinance"],"canonical_url":"https://inleo.io/threads/view/taskmaster4450le/re-taskmaster4450le-2zcwq3sgo","links":["https://techcrunch.com/2024/11/17/a-popular-technique-to-make-ai-more-efficient-has-drawbacks/)"],"images":[],"isPoll":false,"pollOptions":{},"dimensions":[]} |
| created | 2024-11-17 18:52:33 |
| last_update | 2024-11-17 18:52:33 |
| depth | 2 |
| children | 1 |
| last_payout | 2024-11-24 18:52:33 |
| cashout_time | 1969-12-31 23:59:59 |
| total_payout_value | 0.000 HBD |
| curator_payout_value | 0.000 HBD |
| pending_payout_value | 0.000 HBD |
| promoted | 0.000 HBD |
| body_length | 760 |
| author_reputation | 2,197,206,251,558,022 |
| root_title | "LeoThread 2024-11-17 10:12" |
| beneficiaries | [] |
| max_accepted_payout | 1,000,000.000 HBD |
| percent_hbd | 10,000 |
| post_id | 138,514,295 |
| net_rshares | 0 |
AI models consist of several components that can be quantized, in particular parameters: the internal variables models use to make predictions or decisions. This is convenient, considering models perform millions of calculations when run. Quantized models, with fewer bits representing their parameters, are less demanding mathematically, and therefore computationally. (To be clear, this is a different process from "distillation," which trains a smaller "student" model to mimic a larger one rather than reducing numeric precision.) But quantization may have more trade-offs than previously assumed.
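Why fewer bits per parameter eases the computational burden can be seen with back-of-envelope arithmetic. The sketch below (the 7-billion-parameter model is a hypothetical figure, not from the article) shows the memory needed just to store a model's parameters at different precisions:

```python
def model_memory_gb(n_params: float, bits_per_param: int) -> float:
    """Approximate memory (GB) to store a model's parameters alone."""
    return n_params * bits_per_param / 8 / 1e9  # bits -> bytes -> GB

n = 7e9  # hypothetical 7-billion-parameter model
print(model_memory_gb(n, 32))  # float32: 28.0 GB
print(model_memory_gb(n, 8))   # int8:     7.0 GB
```

The 4x reduction also shrinks the data moved through memory on every inference pass, which is where much of the efficiency gain comes from.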
| author | taskmaster4450le |
| --- | --- |
| permlink | re-taskmaster4450le-kvvteshj |
| category | hive-167922 |
| json_metadata | {"app":"leothreads/0.3","format":"markdown","tags":["leofinance"],"canonical_url":"https://inleo.io/threads/view/taskmaster4450le/re-taskmaster4450le-kvvteshj","isPoll":false,"pollOptions":{},"dimensions":[]} |
| created | 2024-11-17 18:52:48 |
| last_update | 2024-11-17 18:52:48 |
| depth | 3 |
| children | 0 |
| last_payout | 2024-11-24 18:52:48 |
| cashout_time | 1969-12-31 23:59:59 |
| total_payout_value | 0.000 HBD |
| curator_payout_value | 0.000 HBD |
| pending_payout_value | 0.000 HBD |
| promoted | 0.000 HBD |
| body_length | 560 |
| author_reputation | 2,197,206,251,558,022 |
| root_title | "LeoThread 2024-11-17 10:12" |
| beneficiaries | [] |
| max_accepted_payout | 1,000,000.000 HBD |
| percent_hbd | 10,000 |
| post_id | 138,514,297 |
| net_rshares | 0 |