Dylan Slack

UC Irvine
Verified email at uci.edu
Cited by 1410

Fooling lime and shap: Adversarial attacks on post hoc explanation methods

D Slack, S Hilgard, E Jia, S Singh… - Proceedings of the AAAI …, 2020 - dl.acm.org
As machine learning black boxes are increasingly being deployed in domains such as
healthcare and criminal justice, there is growing emphasis on building tools and techniques for …

Rethinking explainability as a dialogue: A practitioner's perspective

H Lakkaraju, D Slack, Y Chen, C Tan… - arXiv preprint arXiv …, 2022 - arxiv.org
As practitioners increasingly deploy machine learning models in critical domains such as
health care, finance, and policy, it becomes vital to ensure that domain experts function …

Assessing the local interpretability of machine learning models

D Slack, SA Friedler, C Scheidegger… - arXiv preprint arXiv …, 2019 - arxiv.org
The increasing adoption of machine learning tools has led to calls for accountability via model
interpretability. But what does it mean for a machine learning model to be interpretable by …

Explaining machine learning models with interactive natural language conversations using TalkToModel

D Slack, S Krishna, H Lakkaraju, S Singh - Nature Machine Intelligence, 2023 - nature.com
Practitioners increasingly use machine learning (ML) models, yet models have become
more complex and harder to understand. To understand complex models, researchers have …

Counterfactual explanations can be manipulated

D Slack, A Hilgard, H Lakkaraju… - Advances in neural …, 2021 - proceedings.neurips.cc
Counterfactual explanations are emerging as an attractive option for providing recourse to
individuals adversely impacted by algorithmic decisions. As they are deployed in critical …

Reliable post hoc explanations: Modeling uncertainty in explainability

D Slack, A Hilgard, S Singh… - Advances in neural …, 2021 - proceedings.neurips.cc
As black box explanations are increasingly being employed to establish model credibility in
high stakes settings, it is important to ensure that these explanations are accurate and …

Post hoc explanations of language models can improve language models

S Krishna, J Ma, D Slack… - Advances in …, 2024 - proceedings.neurips.cc
Large Language Models (LLMs) have demonstrated remarkable capabilities in performing
complex tasks. Moreover, recent research has shown that incorporating human-annotated …

Fairness warnings and Fair-MAML: learning fairly with minimal data

D Slack, SA Friedler, E Givental - … of the 2020 Conference on Fairness …, 2020 - dl.acm.org
Motivated by concerns surrounding the fairness effects of sharing and transferring fair machine
learning tools, we propose two algorithms: Fairness Warnings and Fair-MAML. The first is …

Differentially private language models benefit from public pre-training

G Kerrigan, D Slack, J Tuyls - arXiv preprint arXiv:2009.05886, 2020 - arxiv.org
Language modeling is a keystone task in natural language processing. When training a
language model on sensitive information, differential privacy (DP) allows us to quantify the …

Active meta-learning for predicting and selecting perovskite crystallization experiments

…, MA Najeeb, M Zeile, V Yu, X Wang, D Slack… - The Journal of …, 2022 - pubs.aip.org
Autonomous experimentation systems use algorithms and data from prior experiments to
select and perform new experiments in order to meet a specified objective. In most …