Entropy: Special Issue
Dear Colleagues,
The renewal of research interest in machine learning came with the emergence of big data in the late 2000s.
Schematically, families of deep learning networks (DLNs) emerged with industrial ambitions, taking advantage of the development of graphics processing units (GPUs) to build prediction models from massive amounts of collected and stored data, using substantial computing resources. It is illusory to hope to train a deep network involving millions of parameters without very large databases. We tend to think that more data lead to more information.
In addition, the core of learning is essentially a problem of data representation, though not in the ‘data compression’ sense. For instance, in a DLN, one representation (the input layer) is replaced by a cascade of many representations (the hidden layers), which amounts to an increase in information (entropy). However, some questions remain:
How does information spread through these inflationary networks? Is the information transformation conservative across the layers of a DLN? Can information theory quantify the learning capacity of such networks?
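As a point of reference for the second question, write $X$ for the network input and $T_\ell$ for the $\ell$-th hidden representation (our notation here, not tied to any particular architecture). The classical data processing inequality then states that information about the input cannot increase along a feed-forward chain of layers:

```latex
% Data processing inequality along a feed-forward chain of representations:
% each hidden representation T_l depends on the input X only through T_{l-1}.
\[
  X \;\to\; T_1 \;\to\; T_2 \;\to\; \cdots \;\to\; T_L
  \quad\Longrightarrow\quad
  I(X;T_1) \;\ge\; I(X;T_2) \;\ge\; \cdots \;\ge\; I(X;T_L).
\]
% Equality at every layer would correspond to a conservative (lossless) information
% transform; each strict inequality measures the information discarded by that layer.
```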
This Special Issue aims to collect responses to these questions.
Prof. Vincent Vigneron
Prof. Hichem Maaref
Guest Editors
- Hichem MAAREF (Professor, IUT Evry)
- Vincent VIGNERON (Associate Professor, HDR, Univ. Evry Paris-Saclay)