As tech industry giants like Amazon, Facebook and Google lead the development of advanced systems in e-commerce, adtech and personalization, similar algorithms are also being built and deployed across verticals like healthcare, finance, enterprise and government. These systems are being used to guide key decisions, from determining who gets the best inpatient treatment, to choosing who is approved for a loan or singling out the best applicant for a job. In other words, AI-driven conclusions can, in many ways, alter the course of our lives.
With the growing complexity of AI models and their far-reaching societal impacts, the ability of people to understand and assert control over how these systems function has become critical. Can we detect bias in these models? Can we better analyze and understand their predictions? Can we trust the systems that shape the way we interface with the world?
Many applications that generate massive amounts of data, such as credit cards and retailers’ loyalty cards, would benefit tremendously from sophisticated models like deep neural networks (aka deep learning). These models are data hungry and have achieved unprecedented predictive power in fields like image classification and advertising. Interviews with data scientists, as well as non-technical stakeholders at financial firms and retailers, show that these sectors already want to reap the benefits of deploying deep learning systems. However, model audit and risk departments hold back such projects because understanding these models remains a challenge: they can’t fully grasp how the models make decisions, which makes them difficult to trust.
DeepViz is working on a solution to help non-technical professionals in the financial sector interpret machine learning models and explain how AI techniques work using interactive visualization. It will help non-experts, including those who use AI-powered devices and consumer applications, to become more aware of the decisions made by AI systems. If we can ensure that machine learning models are explainable, it will help address some of the most critical issues related to interpretability and transparency, including:
Fairness: Were the decisions made by the AI system fair and ethical? Ensure predictions are unbiased and do not discriminate against minorities and protected classes.
Safety: Gain confidence in the reliability of AI systems that can justify their decisions.
Privacy: Ensure user data is protected and not misappropriated for unethical reasons.
Trust: Humans are more likely to trust a system that explains its decisions than a black box.
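To make concrete what "explaining a model's decisions" can mean in practice, here is a minimal sketch (not DeepViz itself) of one widely used interpretability technique, permutation feature importance, using scikit-learn. The dataset and model choices here are illustrative assumptions: the same procedure applies to a credit-scoring or loyalty-card model.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Illustrative stand-in dataset; in a financial setting this would be
# loan-application or transaction features.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A black-box classifier we want to explain.
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure the drop in held-out accuracy;
# large drops mark features the model actually relies on for its decisions.
result = permutation_importance(model, X_test, y_test, n_repeats=5,
                                random_state=0)
ranked = sorted(zip(X.columns, result.importances_mean),
                key=lambda t: -t[1])
for name, score in ranked[:5]:
    print(f"{name}: {score:.3f}")
```

Rankings like this are the raw material an interactive visualization can present to auditors and business stakeholders, turning "the model decided" into "the model leaned most heavily on these inputs."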
Storyboard for validating stakeholder workflow: bridging the gap between techies and business decision makers.
The widespread adoption of AI-based services has just begun. And as new areas of exploration in AI and machine learning open up, there is ample opportunity for industries and governments to implement a clearer chain of accountability—not only to promote transparency, but to help win the trust and loyalty of affected stakeholders, from the auditors who have to approve such models to the consumers who ultimately benefit from increasingly personalized products and services.
Non-technical audiences and the general public, informed by a clear picture of how algorithms affect them, are more likely to demand equitable conduct from the platforms putting these systems into use. In turn, industry leaders and policy-makers may be more willing to adopt a transparency-first agenda. Whether it’s understanding advanced models, discovering and communicating model bias, or promoting AI safety, DeepViz can help set this future in motion by developing smart visualizations and reporting standards that make these models more accountable.
DeepViz helps build explainable and interpretable machine learning products and services.
From University: NYU Tandon School of Engineering.
Team Members: Parvez Kose, Alberto Chierici.