Learning sparse dictionaries by separating components of low intrinsic dimensions

András Lörincz short bio: He is a professor and senior researcher who has been teaching at the Faculty of Informatics at Eötvös University, Budapest, since 1998. His research focuses on distributed intelligent systems and their applications in neurobiological and cognitive modeling, as well as in medicine. He founded the Neural Information Processing Group of Eötvös University, and he directs a multidisciplinary team of mathematicians, programmers, computer scientists, and physicists. He has acted as the PI of several successful international projects in collaboration with Panasonic, Honda Future Technology Research, and the Information Directorate of the US Air Force in the fields of hardware-software co-synthesis, image processing, and human-computer collaboration. He has authored about 200 peer-reviewed scientific publications. He received the Széchenyi Professor Award, the Master Professor Award, and the Széchenyi István Award in 2000, 2001, and 2004, respectively. In 2004, he was also awarded the Kalmár Prize of the John von Neumann Computer Society of Hungary. In 2006, he became an elected Fellow of the European Coordinating Committee for Artificial Intelligence for his pioneering work in the field of artificial intelligence.

Abstract: In recent years, a number of novel applications have emerged through "L1 Magic", the intriguing property that cost functions using the l0 norm (i.e., minimizing the number of elements of a basis set representing a given input) and cost functions using the l1 norm are equivalent under certain conditions. This feature turns seemingly NP-hard problems into polynomial ones. Based on this and related recent advances in signal processing, we study a novel model in which the signal is decomposed into a dense signal of low intrinsic dimension and a sparse signal. In contrast to other approaches, this preprocessing, in conjunction with efficient sparse coding, can achieve structural sparseness, thus allowing for the formation of highly overcomplete and highly sparse, but combinatorial, dictionaries. We shall present some results for natural images and discuss the advantages of separating the two types of representations for other data, including movies and texts.
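The l0/l1 equivalence the abstract refers to can be illustrated with a minimal basis-pursuit sketch: under suitable conditions (e.g., a random Gaussian measurement matrix and sufficient sparsity), minimizing the l1 norm subject to linear constraints, a polynomial-time linear program, recovers the sparsest solution exactly. The dimensions and SciPy-based solver below are illustrative assumptions, not the talk's actual model.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
m, n, k = 20, 50, 3  # measurements, ambient dimension, sparsity (illustrative)

# Random Gaussian measurement matrix and a k-sparse ground-truth signal.
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
support = rng.choice(n, size=k, replace=False)
x_true[support] = rng.standard_normal(k)
b = A @ x_true

# Basis pursuit:  min ||x||_1  s.t.  A x = b.
# Cast as a linear program via the split x = u - v with u, v >= 0,
# minimizing sum(u + v), which equals ||x||_1 at the optimum.
c = np.ones(2 * n)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=b, bounds=[(0, None)] * (2 * n))
x_hat = res.x[:n] - res.x[n:]

# With high probability, the l1 solution matches the sparsest (l0) one.
print("recovery error:", np.linalg.norm(x_hat - x_true))
```

Solving the l0 problem directly would require a combinatorial search over supports; the LP above reaches the same solution in polynomial time whenever the equivalence conditions hold.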

Posted in TEWI-Kolloquium