Evaluating Recommender Systems for Software Engineers – Lessons Learned

Massimiliano Di Penta | April 4, 2018 | 13:30 | E 2.42

Abstract: The availability of a wide variety of software repositories, ranging from question-and-answer forums to mailing lists, forges, and issue trackers, opens the road for building recommender systems aimed at supporting developers in their activities. When evaluating such recommenders, researchers in most cases focus on the underlying approach's capability of providing accurate and complete results.
In this seminar I will report our experience in evaluating recommenders, showing that an offline evaluation of an approach's precision and recall is only a very preliminary starting point. Importantly, different kinds of evaluations, with different sizes and levels of control, and above all involving humans, are required to achieve results able to convince practitioners of the actual usefulness and applicability of a tool. Moreover, I will discuss how context plays a paramount role in the empirical evaluation of recommender systems.

Bio: Massimiliano Di Penta is an associate professor at the University of Sannio, Italy. His research interests include software maintenance and evolution, mining software repositories, empirical software engineering, search-based software engineering, and testing. He is the author of over 250 papers that have appeared in international journals, conferences, and workshops, and he has received various awards for his research and reviewing activity, including two most influential paper awards (SANER 2017 and GECCO 2015) and three ACM SIGSOFT Distinguished Paper Awards (ICSE, FSE, and ASE). He serves and has served on the organizing and program committees of over 100 conferences, such as ICSE, FSE, ASE, ICSME, ICST, MSR, SANER, ICPC, GECCO, WCRE, and others. He is currently a member of the steering committees of ICSME, MSR, and PROMISE. Previously, he was a steering committee member of other conferences, including ICPC, SSBSE, CSMR, SCAM, and WCRE. He is on the editorial boards of ACM Transactions on Software Engineering and Methodology, the Empirical Software Engineering journal published by Springer, and the Journal of Software: Evolution and Process published by Wiley. He has served on the editorial board of IEEE Transactions on Software Engineering.


Posted in TEWI-Kolloquium