CS 60036 Intelligent Systems

(Spring Semester 2018)

Theory
Niloy Ganguly (NG) niloy {AT} cse.iitkgp.ernet.in


Teaching Assistants

Abhijnan Chakraborty -- chakraborty [DOT] abhijnan {AT} gmail.com

Bidisha Samanta -- bidisha [DOT] samanta {AT} gmail.com



Notices

 

Theory

       Class Room/Hours
       Course Overview
       Evaluation
       Lectures
       Assignments
       Students List

Class Room/Hours

Lectures : Wed - 3, Thu - 2, Fri - 4, 5 (the fourth weekly slot will be used as needed)
Room : 302
Units : 3-0-0
Credits : 3
Contact : Room #313 (CSE), Phone 83460

Course Overview

Intelligent systems have traditionally been designed to solve problems where manual solutions were inefficient and consumed many man-hours. The tasks were mostly repetitive and could be specified by humans either through well-formed rules or through feature engineering on labelled data. Decision making, however, remained an exclusive domain of human intelligence.

Recent years have seen major changes in this scenario. Many hitherto human decision domains (such as evaluating creditworthiness, rendering judicial decisions, and choosing electoral winners) are increasingly becoming algorithmic. Though algorithmic automation may, in many cases, reduce the complexity of a problem, it also evokes mistrust by ‘hiding the machine'. Moreover, human biases against people of a particular race, gender or economic status are often unknowingly transferred into machine intelligence during system design. Hence, with the booming popularity of artificial intelligence, there is a need to address the very real problems with AI today: discrimination, and lack of fairness and trust. In this course, we will study how these issues creep into the design of AI systems, and learn mechanisms with which we can mitigate discrimination and design systems that are fair and trustworthy.
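
To make these fairness notions concrete, here is a minimal Python sketch (an illustration of ours, not part of the course materials) of one metric we will encounter: the disparate impact ratio of Feldman et al. (paper 11 in the Papers list below). A decision rule satisfies the "80% rule" if the positive-outcome rate of the protected group is at least 80% of that of the unprotected group. The function name and toy data below are hypothetical.

# Minimal sketch, assuming binary decisions and a binary protected attribute.
def disparate_impact(outcomes, protected):
    # outcomes: list of 0/1 decisions; protected: parallel True/False flags
    prot = [y for y, p in zip(outcomes, protected) if p]
    unprot = [y for y, p in zip(outcomes, protected) if not p]
    return (sum(prot) / len(prot)) / (sum(unprot) / len(unprot))

# Toy data: 2/4 positives in the protected group vs 3/4 outside it
ratio = disparate_impact([1, 0, 1, 0, 1, 1, 1, 0],
                         [True, True, True, True, False, False, False, False])
print("disparate impact ratio = %.2f" % ratio)  # 0.67, below the 0.8 threshold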

Topics:

Introduction to intelligent systems
Algorithmic bias and discrimination
Discovering discrimination in historical decision records
Preventing discrimination
Fairness aware data mining
Rethinking fairness as social choice
Intelligent electoral systems
Impossibility of fairness axioms
Strategic manipulation in elections
Coping with strategic manipulation
Information and communication in voting
Multiwinner voting rules
Introduction to fair division
Fairness and efficiency criteria
Divisible goods: cake-cutting procedures (see the sketch after this list)
Indivisible goods: combinatorial optimisation
Fair allocation
Introduction to Blockchain
Blockchain architecture
Blockchain use-case - Bitcoin and its architecture
Weaknesses of blockchain technology
Future directions in trustworthy computations
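
As a taste of the fair-division topics above, the following Python sketch implements the classic two-agent cut-and-choose protocol for a divisible good, with the cake modelled as the interval [0, 1]. The valuation functions and the bisection search are our own illustrative choices, not a prescribed course implementation.

def cut_and_choose(v1, v2, eps=1e-9):
    # v1, v2: valuations mapping an interval (a, b) of the cake [0, 1]
    # to a non-negative value; assumed continuous and additive.
    # Returns (agent 1's piece, agent 2's piece).
    lo, hi = 0.0, 1.0
    while hi - lo > eps:              # bisect for agent 1's halfway cut
        mid = (lo + hi) / 2
        if v1(0.0, mid) < v1(mid, 1.0):
            lo = mid
        else:
            hi = mid
    x = (lo + hi) / 2
    left, right = (0.0, x), (x, 1.0)
    # Agent 2 takes the piece it values more; agent 1 keeps the other.
    # Each agent is guaranteed a piece worth at least half by its own measure.
    return (right, left) if v2(*left) >= v2(*right) else (left, right)

# Agent 1 values the cake uniformly; agent 2 only values [0.5, 1]
p1, p2 = cut_and_choose(lambda a, b: b - a,
                        lambda a, b: max(0.0, b - max(a, 0.5)))
print("agent 1 gets", p1, "agent 2 gets", p2)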
 



Books:
1. Blockchain for Dummies (IBM Limited Edition)
https://www-01.ibm.com/common/ssi/cgi-bin/ssialias?htmlfid=XIM12354USEN

2. Handbook of Computational Social Choice - F. Brandt, V. Conitzer, U. Endriss, J. Lang and A. D. Procaccia (eds.), Cambridge University Press, 2016
http://procaccia.info/papers/comsoc.pdf
 


Reading from the Web:

1. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing

2. https://medium.com/@mrtz/how-big-data-is-unfair-9aa544d739de

3. https://obamawhitehouse.archives.gov/sites/default/files/microsites/ostp/2016_0504_data_discrimination.pdf


Papers:
1. I. Zliobaitė, 2015. A survey on measuring indirect discrimination in machine learning.
2. Richard S. Zemel, Yu Wu, Kevin Swersky, Toniann Pitassi, and Cynthia Dwork. 2013: Learning Fair Representations. In Proc. of the 30th Int. Conf. on Machine Learning. 325–333.
3. D. Pedreschi, S. Ruggieri, F. Turini: A Study of Top-K Measures for Discrimination Discovery. SAC 2012.
4. Romei, A. and Ruggieri, S., 2014. A multidisciplinary survey on discrimination analysis. The Knowledge Engineering Review, 29(5), pp.582-638.

5. Ruggieri, S., Pedreschi, D. and Turini, F., 2010. Data mining for discrimination discovery. ACM Transactions on Knowledge Discovery from Data (TKDD), 4(2), p.9.

6. Pedreschi, D., Ruggieri, S. and Turini, F., 2009, June. Integrating induction and deduction for finding evidence of discrimination. In Proceedings of the 12th International Conference on Artificial Intelligence and Law (pp. 157-166). ACM.

7. Luong, B.T., Ruggieri, S. and Turini, F., 2011, August. k-NN as an implementation of situation testing for discrimination discovery and prevention. In Proceedings of the 17th ACM SIGKDD international conference on Knowledge discovery and data mining (pp. 502-510). ACM.

8. Mancuhan, K. and Clifton, C., 2014. Combating discrimination using Bayesian networks. Artificial Intelligence and Law, 22(2), pp.211-238.

9. Francesco Bonchi, Sara Hajian, Bud Mishra and Daniele Ramazzotti, Exposing the Probabilistic Causal Structure of Discrimination. International Journal of Data Science and Analytics, Issue 1, 2017.

10. Salvatore Ruggieri, Sara Hajian, Faisal Kamiran, and Xiangliang Zhang, Anti-discrimination Analysis Using Privacy Attack Strategies, ECML-PKDD 2014

11. M. Feldman, S.A. Friedler, J. Moeller, C. Scheidegger, and S. Venkatasubramanian, Certifying and removing disparate impact. In KDD, pp. 259-268, 2015.


12. F. Kamiran and T. Calders. Data preprocessing techniques for classification without discrimination. In Knowledge and Information Systems (KAIS), 33(1), 2012.

13. A methodology for direct and indirect discrimination prevention in data mining
S. Hajian and J. Domingo-Ferrer. In IEEE Transactions on Knowledge and Data Engineering (TKDE), 25(7), 2013.
http://ieeexplore.ieee.org/document/6175897/

14. Fairness Constraints: Mechanisms for Fair Classification
M. B. Zafar, I. Valera, M. Gomez Rodriguez and K. P. Gummadi
AISTATS 2017, Fort Lauderdale, FL, April 2017.

15. Fairness Beyond Disparate Treatment & Disparate Impact: Learning Classification without Disparate Mistreatment
M. B. Zafar, I. Valera, M. Gomez Rodriguez and K. P. Gummadi
WWW 2017, Perth, Australia, April 2017.

16. From Parity to Preference-based Notions of Fairness in Classification
M. B. Zafar, I. Valera, M. Gomez Rodriguez, K. P. Gummadi and A. Weller
NIPS 2017, Long Beach, CA, December 2017.

17. Beyond Distributive Fairness in Algorithmic Decision Making: Feature Selection for Procedurally Fair Learning
N. Grgić-Hlača, M. B. Zafar, K. P. Gummadi and A. Weller
AAAI 2018, New Orleans, LA, February 2018.

18. I. Zliobaite, F. Kamiran and T. Calders. Handling conditional discrimination. In ICDM, pp. 992-1001, 2011.

19. Ribeiro, M.T., Singh, S. and Guestrin, C., 2016, August. "Why should I trust you?": Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (pp. 1135-1144). ACM.

20. C. Dwork, M. Hardt, T. Pitassi, O. Reingold and R. S. Zemel. Fairness through awareness. In ITCS 2012, pp. 214-226, 2012.

21. Bolukbasi, T., Chang, K.W., Zou, J.Y., Saligrama, V. and Kalai, A.T., 2016. Man is to computer programmer as woman is to homemaker? Debiasing word embeddings. In Advances in Neural Information Processing Systems (pp. 4349-4357).

22. Chierichetti, F., Kumar, R., Lattanzi, S. and Vassilvitskii, S., 2017. Fair Clustering Through Fairlets. In Advances in Neural Information Processing Systems (pp. 5036-5044).


Evaluation

Teacher's Assessment : 35
Mid-sem : 25
End-sem : 40


Lectures

1. Discrimination Discovery

2. Fairness Aware ML

3. Why Should I Trust You?

4. Fair Clustering Through Fairlets

5. Fairness Constraints

6. Preference Based Fairness

7. Procedural Fairness

8. Fairness Through Awareness


 

Assignments