
Human First AI Roadmap


The H1st framework plans to provide and support an array of capabilities and tools that enable Data Scientists to conveniently integrate human knowledge into their Data Science projects, and to enhance the trustworthiness of their AI solutions to human stakeholders.

H1st Data Science Workbench (3Q-2020)

Data Scientists shall have a structured, interactive interface to work on their H1st projects. The Workbench is integrated with JupyterLab and enables Data Scientists to navigate and manage their h1st.Models and h1st.Graphs easily and transparently during development.

The H1st Data Science Workbench shall be runnable both on local computers and in the cloud.

Encoding of Human Knowledge

Integration of Rule-Based Logic (3Q-2020)

Data Scientists shall have the ability to wrap rule-based logic into h1st.Models to be used alongside ML models in a h1st.Graph.
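For illustration only, such a rule-based model might look like the sketch below. The class and method names used here (h1.Model, h1.Graph, predict, start/add/end) are assumptions about the intended API and may differ from the released framework.

```python
import h1st as h1


class GasLeakRuleModel(h1.Model):
    """Wraps a human engineering rule as a model (assumed h1st API)."""

    def predict(self, input_data):
        readings = input_data["readings"]
        # Human rule: a large temperature overshoot plus low pressure suggests a leak.
        leak_suspected = (
            readings["temperature"] > readings["setpoint"] + 10
            and readings["pressure"] < 2.0
        )
        return {"leak_suspected": leak_suspected}


class AnomalyMLModel(h1.Model):
    """Placeholder for a trained ML model; in practice, train or load a real one."""

    def predict(self, input_data):
        return {"anomaly_score": 0.0}


class LeakDetectionGraph(h1.Graph):
    """Combines the rule-based model with an ML model in one execution graph."""

    def __init__(self):
        super().__init__()
        self.start()
        self.add(GasLeakRuleModel())
        self.add(AnomalyMLModel())
        self.end()
```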

Integration of Fuzzy Logic Models (4Q-2020)

Much useful human knowledge cannot be expressed precisely. For example, for a commercial cooling system: "when the output temperature is much higher than the setting, and the pressure is very low, there is a moderately high chance that the system has a gas leak". Fuzzy Logic makes it easy to encode such imprecise controls and judgements, by working with statements whose truth values are not binary (0 or 1) but lie on a spectrum from "very likely false" to "very likely true".

Overall, Fuzzy Logic enables users to make natural statements about data phenomena and lets the system infer the degree of truth of those statements. It is very useful because: (i) much of human expertise can be captured in such statements, as opposed to statements with fixed numbers and absolute binary truth values; and (ii) a Fuzzy Logic system can deal well with uncertainty and a certain degree of mutual contradiction among the statements.
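As a minimal sketch, the cooling-system rule above could be encoded with the scikit-fuzzy library as shown below; the H1st fuzzy-logic integration itself is still on the roadmap, so this only illustrates the underlying idea, and the variable names, universes, and membership shapes are assumptions.

```python
import numpy as np
import skfuzzy as fuzz
from skfuzzy import control as ctrl

# Fuzzy variables: inputs (antecedents) and output (consequent)
temp_excess = ctrl.Antecedent(np.arange(0, 51, 1), "temp_excess")   # degrees above the setting
pressure = ctrl.Antecedent(np.arange(0, 11, 0.1), "pressure")       # bar
leak_risk = ctrl.Consequent(np.arange(0, 101, 1), "leak_risk")      # percent

# Membership functions for the terms used in the rule (illustrative shapes)
temp_excess["much_higher"] = fuzz.trimf(temp_excess.universe, [10, 30, 50])
pressure["very_low"] = fuzz.trimf(pressure.universe, [0, 0, 3])
leak_risk["moderately_high"] = fuzz.trimf(leak_risk.universe, [50, 70, 90])

# "When the temperature is much higher than the setting and the pressure is
#  very low, there is a moderately high chance of a gas leak."
rule = ctrl.Rule(temp_excess["much_higher"] & pressure["very_low"],
                 leak_risk["moderately_high"])

simulation = ctrl.ControlSystemSimulation(ctrl.ControlSystem([rule]))
simulation.input["temp_excess"] = 25   # 25 degrees above the setting
simulation.input["pressure"] = 1.2     # 1.2 bar
simulation.compute()
print(simulation.output["leak_risk"])  # defuzzified degree of leak risk, roughly 70
```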

Trustworthy AI

SHAP- and LIME-based Model Explainability (3Q-2020)

Data Scientists shall be able to conveniently obtain global and local explanations of their Models using SHAP and LIME through built-in explainers in the h1st.core.trust.explainers module.
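Until those built-in explainers land, a minimal sketch with the underlying shap library shows the kind of global and local explanations intended; the toy data, model, and plotting calls below are illustrative and are not the h1st.core.trust.explainers API.

```python
import pandas as pd
import shap
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor

# Toy data and model standing in for the estimator inside an h1st Model.
X, y = make_regression(n_samples=200, n_features=5, random_state=0)
X = pd.DataFrame(X, columns=[f"feature_{i}" for i in range(5)])
model = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)

# Global explanation: aggregate feature impact over the whole dataset.
shap.summary_plot(shap_values, X)

# Local explanation: per-feature contributions to a single prediction.
shap.force_plot(explainer.expected_value, shap_values[0], X.iloc[0], matplotlib=True)
```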

Human in the Loop (2021)

Human-in-the-Loop Decision Review during Scoring

Data Scientists shall be able to conveniently obtain human experts' approval of machine-generated decisions, or collect human-revised decisions, through an interactive display of the relevant data and machine reasoning steps.