
Item

  1. 02 Information Science
  2. 01 Journal Article
Automated Quantization and Retraining for Neural Network Models Without Labeled Data

http://hdl.handle.net/10061/0002000089
47f4216d-afb2-4507-9cc9-185f880758d3
Item type: 学術雑誌論文 / Journal Article(1)
Publication date: 2023-12-27
Title: Automated Quantization and Retraining for Neural Network Models Without Labeled Data
Language: eng
Keywords (Subject Scheme: Other): Automated machine learning; model compression; model retraining; multi-objective optimization; edge machine learning
Resource type: journal article
Access rights: open access
Authors:

  • Thonglek, Kundjanasith
  • Takahashi, Keichi
  • Ichikawa, Kohei (ja: 市川, 昊平; ja-Kana: イチカワ, コウヘイ; WEKO 63; e-Rad_Researcher 90511676)
  • Nakasan, Chawanat
  • Nakada, Hidemoto
  • Takano, Ryousei
  • Leelaprute, Pattara
  • Iida, Hajimu (ja: 飯田, 元; ja-Kana: イイダ, ハジム; WEKO 64; e-Rad_Researcher 20232126)
Abstract:

Deploying neural network models to edge devices is becoming increasingly popular because such deployment decreases the response time and ensures better data privacy of services. However, running large models on edge devices poses challenges because of limited computing resources and storage space. Researchers have therefore proposed various model compression methods to reduce the model size. To balance the trade-off between model size and accuracy, conventional model compression methods require manual effort to find the optimal configuration that reduces the model size without significant degradation of accuracy. In this article, we propose a method to automatically find the optimal configurations for quantization. The proposed method suggests multiple compression configurations that produce models with different sizes and accuracies, from which users can select the configurations that suit their use cases. Additionally, we propose a retraining method that does not require any labeled datasets for retraining. We evaluated the proposed method using various neural network models for classification, regression, and semantic similarity tasks, and demonstrated that the proposed method reduced the size of the models by at least 30% while maintaining less than 1% loss of accuracy. We compared the proposed method with state-of-the-art automated compression methods and showed that it can provide better compression configurations than existing methods.
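The core idea in the abstract — searching quantization configurations and scoring them against the original model's own outputs, so no labeled data is needed — can be sketched roughly as follows. This is a toy illustration, not the authors' exact algorithm: the two-layer network, the per-layer bit-width grid, uniform symmetric quantization, and the MSE fidelity score are all illustrative assumptions, and the label-free retraining step (which could distill toward the same pseudo-labels) is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2-layer model weights (stand-ins for a real trained network).
W1 = rng.standard_normal((16, 32))
W2 = rng.standard_normal((32, 8))

def forward(x, w1, w2):
    h = np.maximum(x @ w1, 0.0)  # ReLU hidden layer
    return h @ w2

def quantize(w, bits):
    """Uniform symmetric quantization of a weight matrix to `bits` bits."""
    scale = np.abs(w).max() / (2 ** (bits - 1) - 1)
    return np.round(w / scale) * scale

# Unlabeled calibration data: the original model's outputs act as
# pseudo-labels, so no ground-truth labels are required.
X = rng.standard_normal((256, 16))
ref = forward(X, W1, W2)

# Grid-search per-layer bit widths, scoring each configuration by
# (model size in bits, deviation from the original model's outputs).
candidates = []
for b1 in (2, 4, 8):
    for b2 in (2, 4, 8):
        out = forward(X, quantize(W1, b1), quantize(W2, b2))
        size = W1.size * b1 + W2.size * b2      # total weight bits
        err = float(np.mean((out - ref) ** 2))  # label-free fidelity loss
        candidates.append(((b1, b2), size, err))

# Keep the Pareto-optimal configurations: those for which no other
# configuration is both smaller and more faithful to the original.
pareto = [c for c in candidates
          if not any((o[1] <= c[1] and o[2] < c[2]) or
                     (o[1] < c[1] and o[2] <= c[2])
                     for o in candidates)]
for bits, size, err in sorted(pareto, key=lambda c: c[1]):
    print(bits, size, round(err, 4))
```

The printed Pareto front corresponds to the "multiple compression configurations" the abstract mentions: the user picks the size/fidelity trade-off that suits the target edge device.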
Bibliographic information: IEEE Access, Vol. 10, pp. 73818-73834, issue date 2022-07-13
Publisher: IEEE
Serial identifier (EISSN): 2169-3536
Publisher's version DOI (relation type isReplacedBy): https://doi.org/10.1109/ACCESS.2022.3190627
Publisher's version URI (relation type isReplacedBy): https://ieeexplore.ieee.org/document/9828404
Rights information (https://creativecommons.org/licenses/by/4.0/): IEEE is not the copyright holder of this material. This work is licensed under a Creative Commons Attribution 4.0 License. For more information, see https://creativecommons.org/licenses/by/4.0/
Author version flag (publication type): NA

Versions: Ver.1 2023-12-27 06:37:25.302887

Powered by WEKO3