The JAACS is the Alumni Association of the three Computer Science Institutes INF, IIUN and DIUF of the Universities of Bern, Neuchâtel, and Fribourg. More information about the association and how to register can be found at https://www.jointalumni.ch/admission.
The JAACS annually awards a prize for the best Bachelor's, Master's, and doctoral theses at the INF, IIUN and DIUF defended during the academic year. The winners are selected by the respective institutes and receive the award at each institute's end-of-year event.
necessity of removing motion blur computationally, a problem called motion deblurring. This problem is challenging because the solution is not unique. Mathematically, a blurry image caused by uniform motion is formed by the convolution of a blur kernel with a latent sharp image. Infinitely many pairs of blur kernel and latent sharp image can result in the same blurry image, so some prior knowledge or regularization is required. Even when the blur kernel is known, restoring the latent sharp image is still difficult, as the high-frequency information has been removed. Although the uniform motion deblurring problem can be modelled mathematically, this model only covers cases where the camera translates in the x-y plane. In practice, however, motion is more complex and can be non-uniform. Non-uniform motion blur can come from many sources: camera out-of-plane rotation, scene depth changes, object motion, and so on. In this thesis, we present several methods for motion blur removal, aiming to address four challenging motion deblurring problems. We start with the noise-blind image deblurring scenario, where the blur kernel is known but the noise level is unknown, and introduce an efficient and robust solution based on a Bayesian framework using a smooth generalization of the 0-1 loss. Then we study the blind uniform motion deblurring scenario, where both the blur kernel and the latent sharp image are unknown, and exploit the relative scale ambiguity between the latent sharp image and the blur kernel. Moreover, we study the face deblurring problem and introduce a novel deep learning network architecture to solve it. Finally, we address the general motion deblurring problem, aiming in particular to recover a sequence of 7 frames, each depicting some instantaneous motion of the objects in the scene.
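The uniform blur model described in the abstract can be sketched as a 2-D convolution. The kernel, toy image, and library choices below are illustrative assumptions, not the thesis's actual code:

```python
import numpy as np
from scipy.signal import convolve2d

# Hypothetical 1x5 horizontal motion-blur kernel (uniform translation along x)
kernel = np.ones((1, 5)) / 5.0

# Toy "latent sharp image": a single bright column on a dark background
sharp = np.zeros((8, 8))
sharp[:, 4] = 1.0

# A blurry image is the convolution of the blur kernel with the sharp image
blurry = convolve2d(sharp, kernel, mode="same", boundary="wrap")

# The relative scale ambiguity mentioned in the abstract: scaling the image
# by c and the kernel by 1/c produces exactly the same blurry image, which is
# one reason blind deblurring is ill-posed.
same = convolve2d(2.0 * sharp, kernel / 2.0, mode="same", boundary="wrap")
```

The bright column is spread over five pixels at one fifth of its intensity, illustrating how high-frequency detail is lost even when the kernel is known.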
Mr. Meiguang Jin has made significant contributions to the problem of removing blur in images due to moving objects, a problem whose solution is a prerequisite for understanding an image. Mr. Jin's methodology is solid and thorough: it begins with the mathematical description of blurred images, uses a probabilistic approach to formalize the estimation of the sharp image as an optimization problem, and creates efficient algorithms to solve that problem. The experiments show state-of-the-art results, demonstrating how a solid theory can lead to better performance. All of Mr. Jin's work has been published at high-level international peer-reviewed conferences.
that the same person has written them. Finally, after checking every text tuple, if we can link them together, we build the final clusters based on a strategy using a distance between probability distributions. Employing a dynamic threshold, we can choose the smallest relative distance values to detect a common origin of the texts. While in our study we mostly focus on the creation of simple methods, investigating more complex schemes leads to interesting findings. We evaluate distributed language representations and compare them to several state-of-the-art methods for authorship attribution. This comparison allows us to demonstrate that not every approach excels in every situation and that the deep learning methods can be sensitive to parameter settings. The most similar observations (or the category with the smallest distance) to the sample in question usually determine the proposed answers. We test multiple inter-textual distance functions in theoretical and empirical tests and show that the Tanimoto and Matusita distances respect all theoretical properties. Both of them perform well in empirical tests, but the Canberra and Clark measures are even better suited, even though they do not fulfill all the requirements. Overall, we note that the popular Cosine function neither satisfies all the conditions nor works notably well. Furthermore, we see that reducing the text representation not only decreases the runtime but can also increase the performance by ignoring spurious features. Our model can choose the characteristics that are the most relevant to the text in question and can analyze the author adequately. We apply our systems to various natural languages belonging to a variety of language families and to multiple text genres. With the flexible feature selection, our systems achieve reliable results in all of the tested settings.
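Two of the inter-textual distances named above can be sketched on toy word-frequency profiles. The profiles are hypothetical, the Canberra distance uses SciPy's definition, and the Matusita function follows one common formulation, which may differ in detail from the thesis's:

```python
import numpy as np
from scipy.spatial.distance import canberra

# Toy relative word-frequency profiles for two texts (hypothetical values)
text_a = np.array([0.4, 0.3, 0.2, 0.1])
text_b = np.array([0.35, 0.35, 0.2, 0.1])

# Canberra distance (SciPy): sum over features of |a - b| / (|a| + |b|)
d_canberra = canberra(text_a, text_b)

def matusita(a, b):
    """Matusita distance, per one common formulation:
    sqrt( sum( (sqrt(a) - sqrt(b))^2 ) )."""
    return float(np.sqrt(np.sum((np.sqrt(a) - np.sqrt(b)) ** 2)))

d_matusita = matusita(text_a, text_b)
```

In an attribution setting, the candidate author whose profile lies at the smallest distance from the disputed text would be proposed as the answer.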
Laudatio by Prof. Jacques Savoy
During this research, Mr. Kocher and I published two papers in the journal Digital Scholarship in the Humanities, one in Information Processing & Management, and another in the Journal of the American Society for Information Science and Technology. Mr. Kocher published one paper at the IEEE-ACM Joint Conference on Digital Libraries in 2017 (and two others at more minor conferences). Finally, Mr. Kocher and I participated for three years in the CLEF PAN evaluation campaigns. Owing to our good results, we asked Mr. Kocher three times to present his results at the CLEF conference.
model that uses a tree to keep track of an inspection session, and a recording infrastructure that allows each widget to decide how user interactions should be recorded. To validate this model, we identify several types of problems that can arise in object inspectors and show how they can be addressed if developer interactions are recorded by the inspector. For example, the new model allows developers to replay inspection sessions, restore partial navigations, and generate code from an inspection session.
An “object inspector” is a tool commonly found in programming language debuggers, used to examine and explore the state of objects in a snapshot of the memory of a running program. The object inspector of the Pharo Smalltalk development environment is a state-of-the-art tool that supports sliding windows for navigation through object state, and “moldable” views of objects that are adapted to a given application domain, offering tailored visualizations and interactions.
Common shortcomings of object inspectors are that (i) the navigation path is lost when the focus moves to a new object, forcing the developer to manually replay the path, (ii) there is no way to record a navigation for later playback, and (iii) the code corresponding to a useful navigation must be manually reverse-engineered by the developer.
Mr. Kaufmann addresses these issues by extending the Pharo object inspector with “reproducible moldable interactions”. He has modified the underlying model of navigations to capture a tree of interactions rather than simply a list. This makes it possible to reapply a sequence of interactions from one object to another of the same type.
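The idea of recording navigations as a tree and replaying a path on another object of the same type can be sketched in a few lines. This is a language-neutral illustration with invented names, not Mr. Kaufmann's Pharo implementation:

```python
class InteractionNode:
    """One recorded interaction; children form the tree of navigations."""
    def __init__(self, action, parent=None):
        self.action = action      # a callable mapping an object to the next object
        self.parent = parent
        self.children = []

    def record(self, action):
        """Record a follow-up interaction as a child node and return it."""
        child = InteractionNode(action, parent=self)
        self.children.append(child)
        return child

def replay(node, target):
    """Reapply the chain of actions from the root down to `node` on `target`."""
    path = []
    while node.parent is not None:
        path.append(node.action)
        node = node.parent
    for action in reversed(path):
        target = action(target)
    return target
```

Because the recorded path is independent of the object it was recorded on, the same navigation can be replayed on any structurally similar object, which is what enables session replay and code generation.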
In his MSc thesis, Mr. Kaufmann has carefully surveyed and reported on the relevant related work, analyzed the shortcomings of existing object inspectors in detail, described the new tree-based model of interaction along with its implementation, and convincingly conveyed how the extension simplifies and enhances interaction with an object inspector.
to thousands of concurrent tasks. An overview of its features is given, as well as a more in-depth look into the framework’s architecture. Finally, an example experiment is demonstrated, showcasing LSDSuite’s capabilities.
Testing large-scale distributed systems is a difficult task, and existing frameworks are often lacking in scalability, flexibility or ease-of-use. For his Master thesis, Ismaïl Senhaji has designed and developed LSDSuite, an evaluation framework for large-scale distributed systems that addresses some of the aforementioned shortcomings. Throughout his Master work, Ismaïl has proved his wide knowledge of distributed systems, his resourcefulness, and his capacity to solve complex problems. We congratulate him for this well-deserved award.
based on selected criteria and then an aggregation of these POIs to obtain the ZOIs. Furthermore, a new Markov-based model to predict long-distance trajectories of pedestrians is introduced. The model is capable of choosing between a first- and second-order Markov chain in order to accommodate the different movement behaviours of individual pedestrians and the available trace data. To quantify the movement behaviour of users, an existing periodicity detection algorithm is modified to achieve better task-specific computational performance. In addition, the adaptive Markov model is evaluated and compared to other current trajectory prediction methods using the real-life Mobile Data Challenge (MDC) dataset. The proposed model achieves a precision of up to 86% and a recall of up to 84% for predicting future trajectories of users in the MDC dataset. The thesis further presents a mechanism to predict the number of pedestrians in urban areas travelling on the same trajectory at a future point in time. This mechanism combines the adaptive Markov model with an existing next-place prediction algorithm and a means of storing and aggregating predicted trajectories for multiple users.
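The first-order case of such a Markov model can be sketched by counting place-to-place transitions in a trace and predicting the most frequent successor; a second-order variant would condition on the previous two places instead. The trace and names below are hypothetical:

```python
from collections import Counter, defaultdict

# Hypothetical trace of places visited by one pedestrian
trace = ["home", "work", "gym", "home", "work", "cafe", "home", "work", "gym"]

# First-order Markov model: count transitions current -> next
transitions = defaultdict(Counter)
for cur, nxt in zip(trace, trace[1:]):
    transitions[cur][nxt] += 1

def predict_next(place):
    """Return the most frequently observed next place, or None if unseen."""
    if place not in transitions:
        return None
    return transitions[place].most_common(1)[0][0]
```

From this trace, "work" is followed by "gym" twice and "cafe" once, so the model predicts "gym" after "work"; aggregating such predictions over many users gives an estimate of how many pedestrians will share a trajectory.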
The Bachelor's thesis of Florian Gerber focuses on the problem of understanding human mobility. He analysed the spatial and temporal parameters of pedestrians' daily mobility to discover movement patterns, and proposed advanced machine learning models to predict pedestrians' future movement trajectories and locations. This work has great potential for future smart transportation applications, such as route planning and mobile network performance optimisation. His work has been published in 4 scientific papers.
of this thesis was to develop a web-based system that would analyze the measurements for critical deviations, alert staff if necessary, and automatically transfer them to the existing customer website. Besides automating this workflow, the system should also be easy to extend, reuse, and integrate into other applications. To accomplish this, the microservices architectural style was chosen for the implementation.
In July 2017, the candidate had the opportunity to work for the firm GEOINFO in Gossau, which continuously verifies that no ground movements, however minute, occur around critical infrastructure such as railway tracks.
Before Andreas arrived, the company collected all the data; engineers performed a manual analysis; and only then were the data uploaded to a web server and the clients notified in case of a problem.
In 9 weeks of work, and thus on schedule, Andreas was able to understand the complex workings of data collection, analysis, and transfer at GEOINFO, to learn a set of technologies that were new to him (for example, RESTful web services), and to produce a complete software solution for managing the business processes at the heart of GEOINFO's business. Moreover, the proposed solution is based on reusable microservices and a scheduler to compose them. It is a high-quality solution that appears to give complete satisfaction. It is extremely rare for a student to reach such a professional solution of such scientific quality in a Bachelor's thesis. The report is also of excellent quality.
Photo: IIUN, Mirco Kocher