Basic Information

Student: Prateek Gaur

Advisors: Christoph Boden, Max Heimel

Degree: Master

Abstract

Many parallel data-driven systems have been successful at storing and processing large volumes of data, which has led to increased interest in performing large-scale analytics on this data. Much acclaimed for its ability to scale to petabytes of data, the MapReduce framework has nevertheless proven limiting for iterative algorithms, which form the basis of many domains of data analysis. To address these challenges, various new techniques have been proposed, usually revolving around either extensions to existing systems or specialized domain-specific systems. Tackling this problem at the algorithmic level, we propose a set of optimization techniques that train either locally, producing a sub-optimal but fast solution, or globally, producing a slower yet optimal solution. We evaluate the trade-offs between these training approaches along the dimensions of quality and performance. Further, we suggest and investigate hybrid training techniques as a possible "middle ground" that aim to produce a better solution than local training while still taking substantially less time than the global approaches. Initial experiments have shown that the proposed architecture yields accurate predictions in a shorter training time within an easy-to-use framework. Our study aims to provide data scientists with guidelines for choosing the most effective combination for the performance and cost requirements of a given learning task.
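
To make the local/global/hybrid distinction concrete, the following minimal sketch contrasts the three strategies on a toy linear-regression problem. It is illustrative only and not the implementation evaluated in the thesis: per-partition training with model averaging stands in for local training, full-data gradient descent for global training, and a short global run warm-started from the locally averaged model for the hybrid. All names and parameters here are hypothetical.

import numpy as np

def gd(X, y, w, steps, lr=0.1):
    # Plain batch gradient descent on mean squared error (mutates w in place).
    for _ in range(steps):
        w -= lr * X.T @ (X @ w - y) / len(y)
    return w

rng = np.random.default_rng(0)
w_true = rng.normal(size=5)
X = rng.normal(size=(4000, 5))
y = X @ w_true + 0.1 * rng.normal(size=4000)
parts = np.array_split(np.arange(4000), 8)  # eight data partitions

# Local: train independently on each partition, then average the models
# (fast, touches only local data, but generally sub-optimal).
w_local = np.mean([gd(X[p], y[p], np.zeros(5), steps=50) for p in parts], axis=0)

# Global: many passes over the full data set (slower, converges to the optimum).
w_global = gd(X, y, np.zeros(5), steps=400)

# Hybrid: warm-start a short global run from the cheap local solution.
w_hybrid = gd(X, y, w_local.copy(), steps=50)

for name, w in [("local", w_local), ("global", w_global), ("hybrid", w_hybrid)]:
    print(f"{name:6s} distance to true model: {np.linalg.norm(w - w_true):.4f}")

The hybrid variant aims to recover most of the global run's accuracy at a fraction of its cost, which is the "middle ground" the abstract describes.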
