Dylan Slack — UC Irvine. Verified email at uci.edu. Cited by 1410.
Fooling LIME and SHAP: Adversarial attacks on post hoc explanation methods
As machine learning black boxes are increasingly being deployed in domains such as
healthcare and criminal justice, there is growing emphasis on building tools and techniques for …
Rethinking explainability as a dialogue: A practitioner's perspective
As practitioners increasingly deploy machine learning models in critical domains such as
health care, finance, and policy, it becomes vital to ensure that domain experts function …
Assessing the local interpretability of machine learning models
The increasing adoption of machine learning tools has led to calls for accountability via model
interpretability. But what does it mean for a machine learning model to be interpretable by …
Explaining machine learning models with interactive natural language conversations using TalkToModel
Practitioners increasingly use machine learning (ML) models, yet models have become
more complex and harder to understand. To understand complex models, researchers have …
Counterfactual explanations can be manipulated
D Slack, A Hilgard, H Lakkaraju… - Advances in neural …, 2021 - proceedings.neurips.cc
Counterfactual explanations are emerging as an attractive option for providing recourse to
individuals adversely impacted by algorithmic decisions. As they are deployed in critical …
Reliable post hoc explanations: Modeling uncertainty in explainability
As black box explanations are increasingly being employed to establish model credibility in
high stakes settings, it is important to ensure that these explanations are accurate and …
Post hoc explanations of language models can improve language models
Large Language Models (LLMs) have demonstrated remarkable capabilities in performing
complex tasks. Moreover, recent research has shown that incorporating human-annotated …
Fairness warnings and Fair-MAML: learning fairly with minimal data
D Slack, SA Friedler, E Givental - … of the 2020 Conference on Fairness …, 2020 - dl.acm.org
Motivated by concerns surrounding the fairness effects of sharing and transferring fair machine
learning tools, we propose two algorithms: Fairness Warnings and Fair-MAML. The first is …
Differentially private language models benefit from public pre-training
Language modeling is a keystone task in natural language processing. When training a
language model on sensitive information, differential privacy (DP) allows us to quantify the …
Active meta-learning for predicting and selecting perovskite crystallization experiments
Autonomous experimentation systems use algorithms and data from prior experiments to
select and perform new experiments in order to meet a specified objective. In most …