More and more, we as members of society are becoming subject to socio-economic and political decisions made using statistical models trained on enormous amounts of cross-referenced data. This data may originate from many different sources, including governments (e.g. census data), industry (e.g. telephone or credit card transactions) and even ourselves (e.g. our use of online social networks).
However, even the cleanest of datasets, those generated with the utmost care, using careful phrasing of survey questions and careful sampling, may contain bias. Datasets often reflect historical biases related to gender, age or ethnicity that can be extremely subtle and deep-rooted. In addition, these subtle biases can be further amplified algorithmically into full-blown discriminatory profiling of certain groups. It is therefore imperative to study scientifically the causes and effects of bias in the era of big data and to propose mitigating measures.
The aim of this workshop is to gather researchers in industry and academia working on algorithmic and data bias in all areas of society, including health care, finance, education and others, with the goal of designing discrimination-free algorithms and fairness-aware data mining methods.
November 6, 2017 – November 10, 2017
Registration deadline