This monograph addresses the issue of Machine Learning (ML) attacks on Integrated Circuits (ICs) that employ Physical Unclonable Functions (PUFs). It provides mathematical proofs of the vulnerability of several PUF families, including Arbiter, XOR Arbiter, ring-oscillator, and bistable ring PUFs, to ML attacks. To this end, a generic framework for the assessment of these PUFs is developed, comprising two main approaches. First, with regard to the inherent physical characteristics of the PUFs mentioned above, fit-for-purpose mathematical representations are established that adequately reflect the physical behavior of these primitives. Notions and formalizations already familiar in ML theory are reintroduced in order to give a better understanding of why, how, and to what extent ML attacks against PUFs can be feasible in practice. Second, the book explores polynomial-time ML algorithms that can learn the PUFs under the appropriate representations. More importantly, in contrast to previous ML approaches, the framework guarantees not only the accuracy of the model mimicking the behavior of the PUF but also the delivery of such a model. Besides off-the-shelf ML algorithms, the author applies a set of algorithms originating in the field of property testing that can support the evaluation of the security of PUFs. These serve as a "toolbox" from which PUF designers and manufacturers can choose the indicators relevant to their requirements. Last but not least, on the basis of learning-theory concepts, this monograph explicitly states that these PUF families cannot be considered an ultimate solution to the problem of insecure ICs. The book provides insight into both academic research and the design and manufacturing of PUFs.
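To illustrate the kind of attack the monograph formalizes, here is a minimal, self-contained sketch (not taken from the book) of the well-known linear additive delay model of an n-stage Arbiter PUF: the response is the sign of a linear function of a parity feature vector derived from the challenge, so the PUF is learnable from challenge-response pairs by any linear classifier. The stage count, sample count, and the use of a plain perceptron (rather than the logistic-regression or PAC-style learners discussed in such work) are assumptions made for the demo.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 16  # number of Arbiter PUF stages (assumption for this demo)

def parity_features(challenges):
    """Map 0/1 challenge bits to the (n+1)-dim parity feature vector.

    In the additive delay model, phi_i = product of (1 - 2*c_j) for
    j = i..n-1, with a constant bias feature appended.
    """
    signs = 1 - 2 * challenges                       # 0/1 -> +1/-1
    feats = np.cumprod(signs[:, ::-1], axis=1)[:, ::-1]
    return np.hstack([feats, np.ones((challenges.shape[0], 1))])

# Simulated "secret" delay vector of the PUF and observed CRPs.
w_true = rng.normal(size=n + 1)
C = rng.integers(0, 2, size=(4000, n))
X = parity_features(C)
y = np.sign(X @ w_true)                              # PUF responses

# Attacker: fit a linear model to the observed challenge-response pairs.
# A simple perceptron suffices because the data is linearly separable
# under the parity representation.
w = np.zeros(n + 1)
for _ in range(50):
    for xi, yi in zip(X, y):
        if np.sign(xi @ w) != yi:
            w += yi * xi

acc = np.mean(np.sign(X @ w) == y)
print(f"model accuracy on observed CRPs: {acc:.3f}")
```

The point of the sketch is the representation, not the learner: once the PUF's physics is captured by a linear threshold function, any standard polynomial-time linear classifier recovers a functional clone from a modest number of challenge-response pairs.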