Transfer Learning Applications for Real-World Imaging Problems 2nd Edition

Machine learning (ML) has been recognized as central to artificial intelligence (AI) for many decades. The question of how knowledge learned in one context can be reused and adapted in other, related contexts, however, has only drawn the attention of the wider ML research community in the past few years. In parallel (and sometimes preceding this), transfer learning has received increasing attention in other research areas, e.g., psychology.

In the deep learning context, problems are abstract concepts observed through data, which consist of instances and associated labels to learn from, while solutions are the parameters of the model learned to solve the problem. Transfer learning and domain adaptation refer to the situation where a model is learned in one setting and exploited to improve generalization in another setting. The transfer process involves a) a target task to be learned in a target context; b) a set of solutions to source tasks (already learned in source contexts); and c) the transfer of knowledge based on the similarity between the target and source tasks. This is commonly understood in a supervised learning context, where the input is of the same nature but the target may differ. If there is significantly more data in the source setting, this can help to learn representations that generalize quickly, because many visual categories share low-level notions such as edges, visual shapes, and changes in lighting.

Recent works have focused on incorporating transfer learning into deep visual representations to combat the problem of insufficient training data. Pre-training CNNs on ImageNet or Places has become standard practice for other vision problems. However, the features learned by pre-trained models are not perfectly suited to the target learning task. Using the pre-trained network as a feature extractor, or fine-tuning it, has therefore become a frequently used way to learn task-specific features, while extensive efforts have been made to better understand transfer learning itself.
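
Concretely, the two reuse strategies mentioned above (treating a pre-trained network as a fixed feature extractor versus fine-tuning it) can be sketched as follows. This is a minimal illustration assuming PyTorch and torchvision are available; the ResNet-18 backbone and the 10-class target task are hypothetical choices for illustration, not anything prescribed by the special issue.

```python
# Minimal sketch of two ways to reuse an ImageNet-pretrained backbone,
# assuming PyTorch/torchvision; backbone and class count are illustrative.
import torch
import torch.nn as nn
from torchvision import models

NUM_TARGET_CLASSES = 10  # hypothetical target task

# Strategy 1: feature extraction -- freeze the pretrained weights and
# train only a new task-specific classification head.
extractor = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
for param in extractor.parameters():
    param.requires_grad = False
extractor.fc = nn.Linear(extractor.fc.in_features, NUM_TARGET_CLASSES)  # new head is trainable by default

# Strategy 2: fine-tuning -- start from the same pretrained weights but
# update all layers, typically with a smaller learning rate for the backbone.
finetuned = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
finetuned.fc = nn.Linear(finetuned.fc.in_features, NUM_TARGET_CLASSES)
optimizer = torch.optim.SGD(
    [
        {"params": finetuned.fc.parameters(), "lr": 1e-2},  # new head
        {"params": (p for n, p in finetuned.named_parameters()
                    if not n.startswith("fc")), "lr": 1e-4},  # pretrained layers
    ],
    momentum=0.9,
)
```

Freezing the backbone trains far fewer parameters and tends to work well when the target dataset is small, whereas fine-tuning all layers with a reduced learning rate for the pretrained weights typically helps once enough target data is available.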

Therefore, this Special Issue welcomes new research contributions proposing novel (federated) transfer learning and domain adaptation approaches to real imaging-related problems, such as (but not limited to):

These topics address one (or more) machine learning tasks, such as classification, regression, segmentation, and detection.

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Journal of Imaging is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI’s English editing service prior to publication or during author revisions.
