
Optimization Problems in Answer Set Programming

Dr. Mario Alviano, University of Calabria

29. 05. 2015,   10:00,   E.1.42



Abstract:
The goal of the lecture is to present the latest achievements in Answer Set Programming (ASP). In particular, the focus of the lecture is on algorithms for solving optimization problems in ASP, that is, problems encoded by ASP programs with weak constraints. As usual in ASP, solutions of a problem instance are represented by its stable models, or answer sets. If the input program also comprises weak constraints, each of its stable models is associated with a cost determined by the unsatisfied weak constraints. Hence, weak constraints define a cost function, and stable models of smaller cost are preferred.
The lecture surveys several algorithms for computing the most preferred, or optimal, stable models, and provides some details on core-guided algorithms, which have proved to be effective on industrial instances of MaxSAT, the optimization variant of the satisfiability problem for propositional formulas. These algorithms work by iteratively checking the satisfiability of a formula that is relaxed at each step using the information provided by unsatisfiable cores, i.e., sets of weak constraints that cannot be jointly satisfied by any stable model of the input program.
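The cost semantics of weak constraints can be illustrated with a small brute-force sketch in Python. This is a toy, hypothetical instance (two atoms, one hard constraint, two weak constraints mimicking `:~ a. [1]` and `:~ b. [2]`); real ASP solvers, including the core-guided ones discussed in the lecture, work very differently:

```python
from itertools import product

# Toy instance: "stable models" are simply truth assignments over two
# atoms that satisfy a hard constraint; each weak constraint adds its
# weight to the cost of any model that violates it.
atoms = ["a", "b"]
hard = lambda m: m["a"] or m["b"]   # hard constraint: at least one atom holds

# Weak constraints as (violation-test, weight) pairs.
weak = [
    (lambda m: m["a"], 1),          # mimics  :~ a. [1]
    (lambda m: m["b"], 2),          # mimics  :~ b. [2]
]

def cost(m):
    return sum(w for violated, w in weak if violated(m))

models = [dict(zip(atoms, bits))
          for bits in product([False, True], repeat=len(atoms))]
stable = [m for m in models if hard(m)]
optimal = min(stable, key=cost)     # most preferred = smallest cost
# optimal is {"a": True, "b": False} with cost 1
```

The brute-force enumeration stands in for stable-model search; the point is only that the weak constraints induce a cost function over models and the optimal model minimizes it.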
The lecture is of interest both to students attending the Logic Programming course and to researchers of the technical faculty working on declarative solving of hard problems.

Bio:
Dr. Mario Alviano received his master's degree from the University of Calabria in 2007 and his PhD from the same university in 2010. Both theses were distinguished by awards: the master's thesis won the “Italian best thesis in Artificial Intelligence” prize awarded by AI*IA, the Italian Association for Artificial Intelligence, and the PhD thesis was one of three dissertations awarded an honorable mention by the European Coordinating Committee for Artificial Intelligence (ECCAI). Since 2011 he has worked first as a postdoc and then as an Assistant Professor at the Department of Mathematics and Computer Science, University of Calabria. Dr. Alviano's research interests span the field of knowledge representation and reasoning, with a main focus on the theoretical background and applications of answer set programming.

Controllable Face Privacy

Dr. Terence Sim, National University of Singapore

06. 05. 2015,   11:00 s.t.,   Room L4.1.114 Lakeside Labs



Abstract:

We present the novel concept of Controllable Face Privacy. Existing methods that alter face images to conceal identity inadvertently also destroy other facial attributes such as gender, race, or age. This all-or-nothing approach is too harsh. Instead, we propose a flexible method that can independently control the amount of identity alteration while keeping other facial attributes unchanged. To achieve this flexibility, we apply a subspace decomposition to our face encoding scheme, effectively decoupling facial attributes such as gender, race, age, and identity into mutually orthogonal subspaces, which in turn enables independent control of these attributes. Our method is thus useful for nuanced face de-identification, in which only facial identity is altered, while others, such as gender, race, and age, are retained. These altered face images protect identity privacy, yet allow other computer vision analyses, such as gender detection, to proceed unimpeded. Controllable Face Privacy is therefore useful for reaping the benefits of surveillance cameras while preventing privacy abuse. Our proposal also permits privacy to be applied not just to identity but to other facial attributes as well. Furthermore, privacy-protection mechanisms, such as k-anonymity, l-diversity, and t-closeness, may be readily incorporated into our method. Extensive experiments with commercial facial analysis software show that our alteration method is indeed effective.
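The idea of decoupling attributes into orthogonal subspaces can be sketched in a few lines of Python. This is a toy 2-D illustration under invented assumptions (a two-component face code whose axes happen to align with "identity" and "other attributes"), not the speaker's actual encoding scheme:

```python
# Toy sketch: decompose a face code into orthogonal "identity" and
# "attribute" components, then scale only the identity part.
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def project(f, basis):
    """Project vector f onto the subspace spanned by orthonormal basis."""
    out = [0.0] * len(f)
    for b in basis:
        c = dot(f, b)
        out = [o + c * bi for o, bi in zip(out, b)]
    return out

identity_basis = [[1.0, 0.0]]    # hypothetical: axis 0 encodes identity
attribute_basis = [[0.0, 1.0]]   # hypothetical: axis 1 encodes e.g. gender/age

f = [0.8, 0.3]                   # a face code (made-up numbers)
f_id = project(f, identity_basis)
f_attr = project(f, attribute_basis)

alpha = 0.0                      # 0 = fully de-identified, 1 = unchanged
f_private = [alpha * i + a for i, a in zip(f_id, f_attr)]
# f_private == [0.0, 0.3]: identity component removed, attribute kept
```

Because the subspaces are orthogonal, scaling the identity component by `alpha` leaves the attribute component untouched, which is exactly the independence the abstract describes.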

Biography:

Dr. Terence Sim is an Associate Professor at the School of Computing, National University of Singapore. He teaches an undergraduate course in digital special effects, as well as a graduate course in multimedia. For research, Terence works primarily in these areas: face recognition, biometrics, and computational photography. He is also interested in computer vision problems in general, such as shape-from-shading, photometric stereo, and object recognition. On the side, he dabbles in some aspects of music processing, such as polyphonic music transcription. Terence also serves as President of the Pattern Recognition and Machine Intelligence Association (PREMIA), a national professional body for pattern recognition, affiliated with the International Association for Pattern Recognition (IAPR). Terence counts it a blessing and a joy to have graduated from three great schools: Carnegie Mellon University, Stanford, and MIT.
Personal Website: https://www.comp.nus.edu.sg/~tsim/

On the Importance of Methodological Diversity for Research and Teaching in Computer Science

Dr. Andreas Harrer, TU Clausthal

10. 04. 2015,   16:00-17:45,   E.2.42



In this talk we present our contributions to the selection and application of computer science methods in research and teaching. Alongside current research in computer-supported collaborative learning (CSCL) using the triangulation of complementary research methods, we also introduce the subject-didactic concept of methods training at TU Clausthal for the transition between bachelor's and master's studies. Given the international character of the department and the high fluctuation between the two study phases, this concept is highly important for the quality of teaching.

Computer Science – From Profession to “Child's Play”

Assoc. Prof. DI Dr. Andreas Bollin, Institut für Informatiksysteme

10. 04. 2015,   09:00-10:45,   E.2.42



Computer science is a scientific discipline that, within only a few decades, has seen major developments driven by key individuals, and will presumably continue to see them. This rapid evolution of the field, together with the change in its public image, suggests that its contents and principles cannot simply be packaged into classroom-ready doses. The talk therefore looks at different areas of computer science and uses several concrete examples to show how knowledge and computational concepts that were initially reserved for a few experts have found, and continue to find, their way into the classroom in a form appropriate to age and developmental stage.

Educational Data Mining in the Context of Computer Science Education

Dr. Andreas Mühling, TU München

10. 04. 2015,   14:00-15:45,   E.2.42



Educational data mining offers a new, evidence-based contribution to educational research.
Using methods from computer science and statistics, patterns can be detected in data sets that would otherwise remain undiscovered. Applied in the context of computer science education, these patterns in turn yield important information for teaching the subject. The talk presents a research method based on the automatic analysis of a large collection of concept maps, covering both results from completed studies and further research questions.
A second central aspect of my research concerns the development and validation of measurement instruments for computer science education. The talk reports the current state of research on an instrument for control structures.

Inter- and Transdisciplinary Teaching Concepts in Computer Science

Prof. (FH) Ing. DI Dr. Harald Burgsteiner, FH Graz

10. 04. 2015,   11:00-12:45,   E.2.42



The school subject of computer science lacks a standardized competence orientation and the fundamental ideas behind it, or these change frequently with paradigm shifts. As a cultural technique, computer science also cannot be considered in isolation.
Future research challenges therefore include its effects on, and its application in, other subjects. Teaching concepts and tools need to be developed that combine modern information technologies with creative and playful approaches. It should also be investigated what effects the trend away from science-oriented schools towards schools with a linguistic-creative focus has on the subject of computer science, and whether this development actually meets students' needs.

Empirical Results on Cloning and Clone Detection

ATTENTION: Cancelled. The talk is expected to take place on 1 June! Prof. Stefan Wagner, Universität Stuttgart

23 April 2015,   14:30,   E.2.69



Abstract: Cloning means the use of copy-and-paste as a method for developing software artefacts. This practice has several problems, such as the unnecessary growth of these artefacts, and thereby increased comprehension and change effort, as well as potential inconsistencies. The automatic detection of clones has been a research topic for several years now, and we have made huge progress in terms of precision and recall. This has led to a series of empirical analyses we have performed on the effects and the amount of cloning in code, models, and requirements. We continue to investigate the effects of cloning and work on extending clone detection to functionally similar code. This talk will give insights into how clone detection works and the empirical results we have gathered.
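One common family of clone detectors works on normalized token streams, so that copy-pasted code still matches after identifiers are renamed. The following Python sketch shows this core idea on toy inputs; it is an illustrative simplification, not the specific detector discussed in the talk:

```python
import re

# Sketch of token-based clone detection: identifiers and number literals
# are replaced by placeholders, so rename-only copies normalize to the
# same token sequence and can be flagged as clone candidates.
KEYWORDS = {"def", "return", "if", "else", "for", "while"}

def normalize(code):
    tokens = re.findall(r"[A-Za-z_]\w*|\d+|\S", code)
    return tuple(
        "ID" if t[0].isalpha() and t not in KEYWORDS
        else "NUM" if t.isdigit()
        else t
        for t in tokens
    )

a = "def area(w, h): return w * h"
b = "def size(x, y): return x * y"   # rename-only copy of `a`
c = "def area(w, h): return w + h"   # differs in the operator

# normalize(a) == normalize(b): a clone candidate despite renaming
# normalize(a) != normalize(c): the operator change breaks the match
```

Real detectors additionally use suffix trees or hashing over sliding windows to find matching subsequences efficiently in large codebases; the normalization step above is what gives them robustness against renaming.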

Short CV: Stefan Wagner is full professor for software engineering at the University of Stuttgart. He holds a PhD in computer science from TU Munich, where he also worked as a post-doc. His main research interests are quality engineering, requirements engineering, agile software development and safety engineering; all tackled using empirical research.

Image and Video Retargeting for Mobile Devices

Dr. Stephan Kopf, Universität Mannheim

24.03.2015,   14:00,   E.2.42



The talk addresses the challenge of image and video visualization on mobile devices. Nowadays, mobile devices like tablets or smartphones are widely used for capturing and visualizing multimedia data. The resolution of the display and that of the captured pictures typically do not match, so the image content is scaled down when presented. This may cause a significant loss in picture quality, where details are no longer recognizable. Scaling also works poorly when the aspect ratios of the picture and the screen differ, resulting in unnaturally stretched objects. In contrast, image and video retargeting techniques like seam carving or warping modify only non-relevant image areas and preserve the most important visual content.
The talk presents state-of-the-art algorithms for image and video retargeting. These methods identify and preserve important objects in images and videos, combine different retargeting operators, and – in the case of video retargeting – avoid temporal inconsistencies. Extensions for stereoscopic images and videos are discussed as well.
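The core step of seam carving, one of the retargeting techniques mentioned above, is a dynamic program over a pixel-energy map that finds the connected vertical seam of least total energy, which is then removed to shrink the image by one column. A minimal Python sketch on a made-up 3×3 energy grid:

```python
# Dynamic programming for seam carving: accumulate minimal seam cost
# top to bottom, then backtrack the cheapest connected path.
def min_seam(energy):
    rows, cols = len(energy), len(energy[0])
    cost = [energy[0][:]]                       # first row: energy itself
    for r in range(1, rows):
        row = []
        for c in range(cols):
            lo, hi = max(0, c - 1), min(cols - 1, c + 1)
            row.append(energy[r][c] + min(cost[r - 1][lo:hi + 1]))
        cost.append(row)
    # backtrack from the cheapest bottom cell, staying connected
    seam = [min(range(cols), key=lambda c: cost[-1][c])]
    for r in range(rows - 2, -1, -1):
        c = seam[-1]
        lo, hi = max(0, c - 1), min(cols - 1, c + 1)
        seam.append(min(range(lo, hi + 1), key=lambda cc: cost[r][cc]))
    return list(reversed(seam))                 # column index per row

energy = [
    [3, 1, 4],
    [1, 5, 9],
    [2, 6, 5],
]
# min_seam(energy) -> [1, 0, 0], total energy 1 + 1 + 2 = 4
```

In a full retargeter, the energy map is derived from the image (e.g. gradient magnitude plus an importance measure for detected objects), and the video variant additionally constrains seams across frames to avoid the temporal inconsistencies mentioned above.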

Short CV: Dr. Stephan Kopf received his diploma in business informatics (2000) and his Ph.D. in computer science (2007) both from the University of Mannheim (Germany). He completed his habilitation (postdoctoral lecture qualification) in 2012 and is currently working as senior researcher and lecturer at the Department of Networks and Multimedia at the University of Mannheim. His research focuses on multimedia content analysis, media retargeting, high dynamic range video, shape-based object recognition, and digital video watermarking. He has published over 80 refereed journal and conference papers in these fields. Dr. Kopf received the best paper award at the ACM Multimedia Systems conference in 2014. He served as technical program chair of ACM ICIMCS, as guest editor of MTAP, and on the program committee of several conferences and workshops. He is a member of IEEE, ACM and ACM SIGMM.


Review: “BigMedia”: Multimedia goes Big Data [Slides][Video]



The review of the TEWI-Kolloquium talk by Max Mühlhäuser on 26.01.2015 includes the video recording and the slides:

Video

Slides

Big Media: Multimedia goes Big Data from Förderverein Technische Fakultät

Abstract:

Ever more multimedia data gets produced, stored, and shared. This is a well-known phenomenon and quite common for information technology, one might say, but multimedia as a field of computing has always been aimed at humans rather than computers as the ultimate consumers: computing was mostly an auxiliary on the path from media creation to human consumption. Despite increasing automation, human consumption is likely to remain the dominating multimedia use case. Since humans have rather fixed sensing and processing capabilities, the dramatic increase in multimedia data production and online availability poses particular “multimedia big data” challenges, the more so since the characteristics of multimedia make the well-known “four Vs” of big data particularly virulent.

In light of the aforementioned development, the talk will look at big data challenges for multimedia and at upcoming approaches to meeting these challenges. The problem space will be structured along an imaginary “processing pipeline” that runs from media capturing via networking and storage/processing to presentation/consumption. Some non-functional aspects, such as privacy, will be addressed as well.

Bio: Max Mühlhäuser is a Full Professor of Computer Science at Technische Universität Darmstadt, Germany, and head of the Telecooperation Lab. In 1986, he received his doctorate from the University of Karlsruhe and soon afterwards founded the first European research center of Digital Equipment Corp. (DEC). Since 1989, he has worked as a professor or visiting professor at universities in Germany, Austria, France, Canada, and the US. Max has published more than 450 articles and has co-authored and edited books on ubiquitous computing, e-learning, and distributed & multimedia software engineering. Max is deputy speaker of a nationally funded cooperative research center on the Future Internet and a directorate member of the Center for Advanced SEcurity research Darmstadt (CASED).

UW-OFDM: Non-linear Receivers

Dr. Alexander Onic, Infineon Technologies Linz

26. 01. 2015,   10:00,   Room L.4.101, B04 Lakeside Park



Abstract:

The engineering world is mostly made up of the reuse, reinvention, or reapplication of old knowledge, rather than of single groundbreaking new findings. Each small advance contributes to human knowledge with an impact that usually cannot be seen until many years later.

As an example, a research summary of non-linear receivers for Unique Word OFDM (orthogonal frequency division multiplexing) is presented, in which several data detection techniques for MIMO (multi-antenna) communication systems are reused in a single-antenna setup. This is enabled by the UW-OFDM concept, which opens up the range from Bayesian linear data estimation to maximum likelihood detection, in contrast to the single, rather dull data detection method of classical OFDM.

As an industry example, the limiting factors of signal sensitivity in FMCW (frequency modulated continuous wave) radar transceivers are addressed. On-chip leakage is an inherent problem of signal processing in MMICs (monolithic microwave integrated circuits). How it has been handled in communication ICs might offer a clue to how FMCW radar could be remarkably enhanced in the future.

Biography:

Alexander Onic is currently a Concept Engineer for automotive radar at Infineon Technologies in Linz, Austria. He received his doctoral degree from Alpen-Adria-Universität Klagenfurt in 2013, where he was part of the research team that invented Unique Word OFDM, a novel signaling scheme for digital communication. In 2007, he graduated with the Dipl.-Ing. degree from Friedrich-Alexander-Universität Erlangen-Nürnberg, after studying electrical engineering with an emphasis on information technology and signal processing. Alex's research interests in signal processing, communication engineering, and estimation theory are consequently complemented by the research cooperation between Infineon and Johannes-Kepler-Universität Linz on radar signal processing topics.