Learning from and for Heterogeneous and Ambiguous Data

Wednesday, June 1st, 2022 | 10:00 am (CET) | Room: V.1.07

Univ.-Prof. DI Dr. Peter M. Roth | Prof. at Vetmeduni Wien

Abstract: When talking about new developments in Machine Learning, we typically think about new algorithms, better optimization techniques, or optimized hyperparameters. However, one important aspect is often neglected: the quality and structure of the training data, which may suffer from measurement noise, label noise, or correct but ambiguous labels. In this talk, we address the latter problem, dealing with high intra-class and small inter-class variability in the data, following two different strategies. First, we consider the problem of metric learning, showing that by selecting or learning a better metric for a specific problem, better results can be obtained using the same learning method and the same data. Second, focusing on neural networks, we analyze the influence of specific hyperparameters, namely the activation functions. For both directions, we show that the quality of the finally learned model depends strongly on the data. To illustrate these aspects, we will further discuss a visualization technique, information planes, which provides better insight into the current state of the learning system.
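
To make the metric-learning point concrete, here is a minimal sketch that learns a linear metric with a triplet loss on toy two-class data. It is only a generic illustration of the technique, with made-up data and a plain linear transform, not the specific method presented in the talk.

```python
# Minimal sketch: learn a linear metric d(a, b) = ||L a - L b|| with a triplet loss.
# Toy data and model for illustration only; not the method from the talk.
import torch

torch.manual_seed(0)
dim, n = 16, 200
X = torch.randn(n, dim)
y = (X[:, 0] + 0.1 * torch.randn(n) > 0).long()      # two classes with noisy, overlapping features

L = torch.nn.Linear(dim, dim, bias=False)            # the learnable linear metric
opt = torch.optim.Adam(L.parameters(), lr=1e-2)

idx0 = torch.nonzero(y == 0).squeeze(1)
idx1 = torch.nonzero(y == 1).squeeze(1)

def sample_triplets(k=128):
    # Anchors/positives from class 0, negatives from class 1 (a full version would alternate classes).
    a = idx0[torch.randint(0, len(idx0), (k,))]
    p = idx0[torch.randint(0, len(idx0), (k,))]
    neg = idx1[torch.randint(0, len(idx1), (k,))]
    return a, p, neg

for step in range(200):
    a, p, neg = sample_triplets()
    za, zp, zn = L(X[a]), L(X[p]), L(X[neg])
    # Hinge loss: positives should be closer to the anchor than negatives by a margin of 1.
    loss = torch.relu((za - zp).norm(dim=1) - (za - zn).norm(dim=1) + 1.0).mean()
    opt.zero_grad(); loss.backward(); opt.step()

print("final triplet loss:", round(loss.item(), 3))
```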

Bio: Peter M. Roth has been a professor at Vetmeduni Vienna since January 2022. His research interests include Data Science and Machine Learning.


Machine Learning in Finance via Randomization

Friday, June 10th 2022 | 10:00 am (CET) | Room: N.2.35 |

Josef Teichmann | Prof. at ETH Zürich

Abstract:

Randomized signatures and random feature selection are two instances of machine learning where randomly chosen structures appear to be highly expressive. We analyze several aspects of the theory behind them, show that these structures have theoretically attractive properties, and introduce two classes of examples from finance (joint works with Christa Cuchiero, Lukas Gonon, Lyudmila Grigoryeva, Martin Larsson, and Juan-Pablo Ortega).
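
As a minimal, generic illustration of how expressive fixed random structures can be, the sketch below fits a nonlinear function with random Fourier-style features where only the linear readout is trained. It is not the randomized-signature construction discussed in the talk, merely the simplest member of the same family of ideas.

```python
# Random-feature regression: the nonlinear features are fixed and random,
# only the linear readout is fitted. Illustrative toy example.
import numpy as np

rng = np.random.default_rng(0)
n, d, width = 400, 1, 300

X = rng.uniform(-3, 3, size=(n, d))
y = np.sin(2 * X[:, 0]) + 0.1 * rng.normal(size=n)       # nonlinear target with noise

W = rng.normal(size=(d, width))                           # random, untrained projection
b = rng.uniform(0, 2 * np.pi, size=width)
Phi = np.cos(X @ W + b)                                   # random Fourier-style features

ridge = 1e-3
theta = np.linalg.solve(Phi.T @ Phi + ridge * np.eye(width), Phi.T @ y)   # fit readout only

mse = np.mean((Phi @ theta - y) ** 2)
print(f"train MSE with {width} random features: {mse:.4f}")
```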

Bio:

Josef Teichmann has been a professor at ETH Zurich since 2009. His research interests include Mathematical Finance, Machine Learning in Finance, and Stochastic Analysis. He is Executive Secretary of the Bachelier Finance Society.


Advances in Visual Quality Restoration with Generative Adversarial Networks

Thursday, May 12th 2022 | 2 pm (CET) | Room: HS 6 |

Leonardo Galteri, PhD | University of Florence

Abstract: In recent years, we have witnessed a growing amount of media transmitted and stored on computers and mobile devices. For this reason, there is a real need for smart compression algorithms that reduce the size of our media files. However, such techniques are often responsible for a severe reduction in the quality perceived by users. In this talk we present several approaches we have developed to restore degraded images and videos towards their original quality, making use of Generative Adversarial Networks. The aim of the talk is to highlight the main features of our research work, including the advantages of our solution, the current challenges, and possible directions for future improvements.
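
For readers new to the setup, the sketch below shows the generic loss structure behind GAN-based restoration: a generator is trained with a pixel-reconstruction term plus an adversarial term from a discriminator that judges whether an image looks clean. It runs one training step on random tensors and is not the architecture developed by the speakers.

```python
# Generic GAN-based restoration objective (reconstruction + adversarial loss).
# Tiny toy networks and random tensors stand in for real data; illustrative only.
import torch
import torch.nn as nn

G = nn.Sequential(                                   # restores a degraded image
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 3, 3, padding=1),
)
D = nn.Sequential(                                   # judges whether an image looks "clean"
    nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(32, 1, 3, stride=2, padding=1),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
)
opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

clean = torch.rand(4, 3, 64, 64)                     # stand-in for original frames
degraded = clean + 0.1 * torch.randn_like(clean)     # stand-in for compressed frames

# Discriminator step: real clean images vs. restored ones.
restored = G(degraded).detach()
d_loss = bce(D(clean), torch.ones(4, 1)) + bce(D(restored), torch.zeros(4, 1))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Generator step: pixel fidelity plus a small "fool the discriminator" term.
restored = G(degraded)
g_loss = nn.functional.mse_loss(restored, clean) + 1e-3 * bce(D(restored), torch.ones(4, 1))
opt_g.zero_grad(); g_loss.backward(); opt_g.step()

print(f"d_loss={d_loss.item():.3f}  g_loss={g_loss.item():.3f}")
```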

Bio: Leonardo Galteri is a Postdoctoral Researcher and Adjunct Professor at the University of Florence.

His research activity is focused on computer vision and pattern recognition techniques. Most of his work involves image and video reconstruction, compression artifact removal and noise removal.

In 2018 he obtained his PhD with a thesis on the detection of objects in compressed images and videos using deep learning techniques. Throughout his research activity, he has participated in various European, national, and technology-transfer projects with different responsibilities. He is co-founder and Head of Engineering at Small Pixels s.r.l., a startup company that offers technological solutions for real-time video restoration and enhancement.


Trends in Recommendation Systems – A Netflix Perspective

Thursday, April 7th 2022 | 5:30 pm (CET) | via Zoom

Anuj Shah, Ph.D. | Senior Machine Learning Research Practitioner at Netflix

Abstract:

Recommendation systems today are widely used across many applications, such as multimedia content platforms, social networks, and e-commerce, to provide suggestions that are most likely to fulfill users' needs, thereby improving the user experience. Academic research, to date, largely focuses on the performance of recommendation models in terms of ranking quality or accuracy measures, which often don't translate directly into improvements in the real world. In this talk, we present some of the most interesting challenges that we face in the personalization efforts at Netflix. The goal of this talk is to highlight challenging research problems in industrial recommendation systems and start a conversation about exciting areas of future research.
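
To make "ranking quality measures" concrete, the snippet below computes one widely used offline metric, NDCG@k, for a single ranked list. It is a generic textbook example and does not reflect Netflix's evaluation methodology.

```python
# NDCG@k for one user's ranked list; a standard offline ranking-quality measure.
import math

def dcg(relevances):
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances))

def ndcg_at_k(ranked_relevances, k):
    ideal = dcg(sorted(ranked_relevances, reverse=True)[:k])
    return dcg(ranked_relevances[:k]) / ideal if ideal > 0 else 0.0

# Relevance of the items a model ranked 1st, 2nd, 3rd, ... for one user.
print(round(ndcg_at_k([3, 0, 2, 1, 0], k=5), 3))   # ~0.93: good but not ideal ordering
```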

Bio:

Anuj Shah is a Senior Machine Learning Research Practitioner at Netflix. For the past 10+ years, he has been working on an applied research team focused on developing the next generation of algorithms used to generate the Netflix homepage through machine learning, ranking, recommendation, and large-scale software engineering. He is extremely passionate about algorithms and technologies that help improve the Netflix customer experience with highly personalized consumer-facing products such as the Continue Watching and Top 10 rows, among many others. Prior to Netflix, he worked for 8 years on machine learning in the Computational Sciences Division at the Pacific Northwest National Laboratory, focusing on technologies at the intersection of proteomics, bioinformatics, and computer science. He has a Ph.D. from the Computer Science department at Washington State University and a Master's in Computer Science from Virginia Tech.


Introduction to 5G from a radio perspective

Ms. Thura Hatim Al-Juboori | Ericsson Poland | Friday December 3, 2021 | 10:00 (CET, 09:00 UTC) |

Join the meeting: https://teams.microsoft.com/l/meetup-join/19%3ameeting_NGM4MGJkZTMtMTZkMy00ZmIxLTg1OWUtOGZkZTM4NGZlZTg4%40thread.v2/0?context=%7b%22Tid%22%3a%2292e84ceb-fbfd-47ab-be52-080c6b87953f%22%2c%22Oid%22%3a%227f26e805-f313-4426-92db-570f49724efb%22%7d

Abstract:

5G is the fifth generation of cellular networks. Up to 100 times faster than 4G, 5G is creating never-before-seen opportunities for people and businesses. Faster connection speeds, ultra-low latency, and greater bandwidth are advancing societies, transforming industries, and dramatically enhancing day-to-day experiences. Services that we used to see as futuristic, such as e-health, connected vehicles and traffic systems, and advanced mobile cloud gaming, have arrived. With 5G technology, we can help create a smarter, safer, and more sustainable future.

Bio:

* Network performance and evolution lead for all of Europe and Latin America, Ericsson (Poland office)

* Professional consultant for network performance and 5G evolution with more than 15 years of experience across different telco topics

* Responsibilities covering all of Europe and Latin America:

Spectrum & Regulatory Advisory (spectrum and bandwidth acquisition advisory to operators, also spectrum interference topics)

NSA to SA Evolutions (5G spectrum architecture and deployment strategy)

5G Evolution Proof Points (NSA/SA coverage extension, NR mid-band link budget, ESS spectrum sharing system simulator, and SA strategy)

Performance Benchmarking (OOKLA speed test and crowdsourced data analytics)

Network Performance Assessment (app coverage, VoLTE, recommended actions)

LTE/NR Product Segmentation (capacity improvement with NR introduction, DC /NR CA coverage extension gain)

LTE Capacity Expansion & Planning (LTE densification, transport capacity, NR introduction new use cases eMBB, FWA)

MBB Coverage & Device Analysis (smart network investment, improve coverage and CA strategy based on UE cap)

NR TDD Build with Precision (360 analysis based on site build, EMF, capacity, app coverage and TCO)

Site Evolution (network inventory & dimensioning)

Hot Topics (Cloud RAN, Open RAN, private network, and NW sharing)

Planning Network Evolution with 5G Predictions for the Next 3 Years

Awards:

   – Top 15 Women in 5G (Perspektywy Women in Tech, Dec. 2020)

   – Speaker and Mentor for the 'IT for SHE' Women in Tech Camp 2021


Applied Data Science – Use Cases and Challenges in the Semiconductor Industry

Dr. Anja Zernig | KAI Kompetenzzentrum Automobil- und Industrieelektronik GmbH Villach |
Friday, November 26, 2021 | 10:00 (CET, 09:00 UTC) | Online:
https://classroom.aau.at/b/sch-xte-ijl-jdg

Abstract: AI has infected the world. Today, there is huge hype around Data Science activities all over the world, and one of the biggest challenges for industry is to deliver financial value quickly but also sustainably. In her talk, she will show some examples of recent use cases in the area of Data Science within the semiconductor industry, including technical approaches and practical challenges. Further, she will give some personal insights into important enabling factors that make a Data Science project successful.

Bio: Anja Zernig coordinates Data Science projects at KAI Kompetenzzentrum Automobil- und Industrieelektronik GmbH in Villach, a 100% subsidiary of Infineon Technologies Austria AG. Dr. Zernig studied Technical Mathematics at the University of Klagenfurt and received her PhD in 2016. Afterwards, she worked as a researcher at KAI, focusing on topics such as outlier and anomaly detection, pattern recognition, applied statistical methods, and Machine Learning techniques. Since 2019 she has been coordinating a team of Data Scientists, is involved in various national and international funding projects, and acts as a link between industry and academic collaboration partners. She supervises researchers and students working on innovative data-analytical concepts in semiconductor production, testing, and optimization, and publishes the latest scientific insights in conference and journal papers. Besides this, Dr. Zernig participates in and supports local Data Science activities; for example, she is part of the organizing team of Women in Data Science Villach. More recently, she has been focusing on deployment strategies to guarantee sustainable Machine Learning lifecycles.


RL-Cache: Learning-Based Cache Admission for Content Delivery

Sergey Gorinsky | IMDEA Networks Institute, Madrid |
Friday, November 12, 2021 | 14:00 (CET, 13:00 UTC) | S.0.05

Abstract:
Content delivery networks (CDNs) distribute much of the Internet content by caching and serving the objects requested by users. A major goal of a CDN is to maximize the hit rates of its caches, thereby enabling faster content downloads to the users. Content caching involves two components: an admission algorithm to decide whether to cache an object and an eviction algorithm to decide which object to evict from the cache when it is full. In this work, we focus on cache admission and propose an algorithm called RL-Cache that uses model-free reinforcement learning (RL) to decide whether or not to admit a requested object into the CDN’s cache. Unlike prior approaches that use a small set of criteria for decision making, RL-Cache weights a large set of features that include the object size, recency, and frequency of access. We develop a publicly available implementation of RL-Cache and perform an evaluation using production traces for the image, video, and web traffic classes from Akamai’s CDN. The evaluation shows that RL-Cache improves the hit rate in comparison with the state of the art and imposes only a modest resource overhead on the CDN servers. Further, RL-Cache is robust enough that it can be trained in one location and executed on request traces of the same or different traffic classes in other locations of the same geographic region.
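
As a rough sketch of what a feature-based admission policy looks like in practice, the code below scores (size, recency, frequency) features with a small network and admits an object when the score exceeds a threshold. The feature scaling, network, and threshold are illustrative placeholders; the actual RL-Cache feature set and its model-free RL training on production traces are not reproduced here, and eviction is omitted entirely.

```python
# Sketch of feature-based cache admission: a small scoring network decides whether
# to admit a requested object. Placeholder features and untrained network; not RL-Cache itself.
import time
import torch
import torch.nn as nn

policy = nn.Sequential(nn.Linear(3, 16), nn.ReLU(), nn.Linear(16, 1), nn.Sigmoid())

class AdmissionCache:
    def __init__(self, capacity_bytes):
        self.capacity = capacity_bytes
        self.used = 0
        self.store = {}          # object_id -> size of cached objects
        self.last_seen = {}      # object_id -> time of previous request
        self.frequency = {}      # object_id -> number of requests seen

    def features(self, obj_id, size):
        recency = time.time() - self.last_seen.get(obj_id, time.time())
        freq = self.frequency.get(obj_id, 0)
        return torch.tensor([[size / 1e6, recency, float(freq)]])

    def request(self, obj_id, size):
        hit = obj_id in self.store
        if not hit:
            score = policy(self.features(obj_id, size)).item()
            if score > 0.5 and self.used + size <= self.capacity:   # admission decision
                self.store[obj_id] = size                            # (eviction omitted)
                self.used += size
        self.frequency[obj_id] = self.frequency.get(obj_id, 0) + 1
        self.last_seen[obj_id] = time.time()
        return hit

cache = AdmissionCache(capacity_bytes=10_000_000)
for obj_id, size in [("a", 2_000_000), ("b", 9_000_000), ("a", 2_000_000)]:
    print(obj_id, "hit" if cache.request(obj_id, size) else "miss")
```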

Bio:
Sergey Gorinsky is a tenured Research Associate Professor at IMDEA Networks Institute in Madrid, Spain. He joined the institute in 2009 and leads the NetEcon (Network Economics) research group there. Dr. Gorinsky received his Ph.D. and M.S. degrees from the University of Texas at Austin, USA in 2003 and 1999, respectively, and an Engineer degree from the Moscow Institute of Electronic Technology, Zelenograd, Russia in 1994. From 2003 to 2009, he served on the tenure-track faculty at Washington University in St. Louis, USA. From 2010 to 2014, Dr. Gorinsky was a Ramón y Cajal Fellow funded by the Spanish Government. He has graduated four Ph.D. students. The areas of his primary research interest are computer networking, distributed systems, and network economics. His work has appeared at top conferences and in journals such as SIGCOMM, CoNEXT, INFOCOM, Transactions on Networking, and the Journal on Selected Areas in Communications. He served as TPC chair of ICNP 2017 and other conferences, as well as a TPC member for a much broader set of conferences. Sergey Gorinsky has contributed to conference organization in many roles, for example as general chair of SIGCOMM 2018 and ICNP 2020. He also served as an evaluator of research proposals and projects for the European Research Council (ERC StG), the European Commission (Horizon 2020, FP7), the COST Association, and numerous other funding agencies.



Standardising the compressed representation of neural networks

Werner Bailer | Joanneum Research, Graz | Friday, June 25, 2021 | 10:00 (CET, 08:00 UTC) | online

Abstract:

Artificial neural networks have been adopted for a broad range of tasks in multimedia analysis and processing, such as visual and acoustic classification, extraction of multimedia descriptors, and image and video coding. The trained neural networks for these applications contain a large number of parameters (weights), resulting in considerable size. Thus, transferring them to the many clients that use them in applications (e.g., mobile phones, smart cameras) benefits from a compressed representation of neural networks.

MPEG Neural Network Coding and Representation (NNR) is the first international standard for efficient compression of neural networks (NNs). The standard is designed as a toolbox of compression methods, which can be used to create coding pipelines. It can either be used as an independent coding framework (with its own bitstream format) or together with external neural network formats and frameworks. To provide the highest degree of flexibility, the network compression methods operate per parameter tensor in order to always ensure proper decoding, even if no structure information is provided. The standard contains compression-efficient quantization and an arithmetic coding scheme (DeepCABAC) as core encoding and decoding technologies, as well as neural network parameter pre-processing methods such as sparsification, pruning, low-rank decomposition, unification, local scaling, and batch-norm folding. NNR achieves a compression efficiency of more than 97% for transparent coding cases, i.e., without degrading classification quality measures such as top-1 or top-5 accuracy.
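
As a toy illustration of where such gains come from, the sketch below applies two of the listed tools, sparsification and uniform 8-bit quantization, to a single weight tensor and estimates the size reduction with a crude bit count. It is not the NNR bitstream or DeepCABAC, just a back-of-the-envelope example.

```python
# Toy sparsification + uniform quantization of one parameter tensor with a rough bit-count
# estimate; a real entropy coder such as DeepCABAC would compress considerably better.
import numpy as np

rng = np.random.default_rng(0)
weights = rng.normal(scale=0.05, size=(256, 256)).astype(np.float32)   # toy parameter tensor

# Sparsification: zero out small-magnitude weights (keep the largest 20%).
threshold = np.quantile(np.abs(weights), 0.8)
sparse = np.where(np.abs(weights) >= threshold, weights, 0.0)

# Uniform quantization of the surviving weights to 8-bit integers.
step = np.abs(sparse).max() / 127
quantized = np.round(sparse / step).astype(np.int8)

raw_bits = weights.size * 32
coded_bits = int((quantized != 0).sum()) * 8 + quantized.size           # 8 bits/value + 1-bit mask
print(f"rough compression ratio: {raw_bits / coded_bits:.1f}x")
```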

This talk presents an overview of the context, technical features, and characteristics of the NN coding standard, and discusses ongoing topics such as incremental neural network representation.

Bio:

Werner Bailer is a Key Researcher at DIGITAL – Institute for Information and Communication Technologies at JOANNEUM RESEARCH in Graz, Austria. He received a degree in Media Technology and Design in 2002 for his diploma thesis on motion estimation and segmentation for film/video standards conversion. His research interests include audiovisual content analysis, multimedia retrieval and machine learning. He regularly contributes to standardization, among others in MPEG, where he co-chairs the ad-hoc group on neural network compression.


Edge computing in 5G networks

Benedek Kovács, PhD | Senior Specialist, Ericsson R&D Hungary | Friday, May 28, 2021 |

10:30 (CET, 08:30 UTC) |

Join on your computer or mobile app: Click here to join the meeting

Join with a video conferencing device: teams@video.meet.ericsson.net | Video Conference ID: 128 174 110 8

Abstract: We give an overview of the status of edge computing from a telecommunication-network perspective, provide a definition, and introduce an example. We discuss the different driving forces in the telecommunications and cloud industries and go through the different solution proposals. We cover the networking, cloud, and management aspects and show the different options using the example.


Cloud, Fog, or Edge: Where and When to Compute?

Dragi Kimovski | Alpen-Adria-Universität Klagenfurt | Friday, December 18, 2020 | 11:00 (CET, 10:00 UTC) | online

Abstract: The computing continuum extends the high-performance cloud data centers with energy-efficient and low-latency devices close to the data sources located at the edge of the network. However, the heterogeneity of the computing continuum raises multiple challenges related to application and data management. These include (i) how to efficiently provision compute and storage resources across multiple control domains in the computing continuum, (ii) how to decompose and schedule an application, and (iii) where to store an application source and the related data. To support these decisions, we explore in this thesis novel approaches for (i) resource characterization and provisioning with detailed performance, mobility, and carbon footprint analysis, (ii) application and data decomposition with increased reliability, and (iii) optimization of application storage repositories. We validate our approaches on a selection of use-case applications with complementary resource requirements across the computing continuum, using a real-life evaluation testbed.
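
As a toy illustration of the kind of placement decision involved, the sketch below scores cloud, fog, and edge candidates by a weighted sum of latency, energy, and carbon estimates and picks the cheapest. The numbers and weights are invented for illustration; the multi-objective optimization developed in the thesis is not reproduced here.

```python
# Toy cost-based placement across the computing continuum; illustrative numbers only.
from dataclasses import dataclass

@dataclass
class Resource:
    name: str
    latency_ms: float     # estimated round-trip to the data source
    energy_j: float       # estimated energy per task
    carbon_g: float       # estimated CO2 per task

candidates = [
    Resource("cloud", latency_ms=80.0, energy_j=5.0,  carbon_g=2.0),
    Resource("fog",   latency_ms=20.0, energy_j=8.0,  carbon_g=3.0),
    Resource("edge",  latency_ms=5.0,  energy_j=15.0, carbon_g=6.0),
]

def placement_cost(r: Resource, w_latency=1.0, w_energy=0.5, w_carbon=0.5):
    return w_latency * r.latency_ms + w_energy * r.energy_j + w_carbon * r.carbon_g

# A latency-critical task weights latency heavily and lands on the edge ...
print(min(candidates, key=lambda r: placement_cost(r, w_latency=2.0)).name)            # edge
# ... while a delay-tolerant, energy/carbon-sensitive task prefers the cloud.
print(min(candidates, key=lambda r: placement_cost(r, w_latency=0.05,
                                                   w_energy=1.0, w_carbon=1.0)).name)  # cloud
```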

Bio: Dragi Kimovski is a postdoctoral researcher with a qualification agreement ("Zielvereinbarung") at the Institute of Information Technology (ITEC), University of Klagenfurt. He earned his doctoral degree in 2013 from the Technical University of Sofia. He was an assistant professor at the University for Information Science and Technology in Ohrid, and a senior researcher and lecturer at the University of Innsbruck. During his career, he conducted multiple research stays at the University of Michigan, University of Bologna, and University of Granada. He was a work package leader and scientific coordinator in two Horizon 2020 projects (ENTICE and ASPIDE), and coordinated the OeAD AtomicFog project. He co-authored more than 40 articles in international conferences and journals. His research interests include parallel and distributed computing, fog and edge computing, multi-objective optimization, and distributed processing for bioengineering applications.
