By Parinaz Sobhani

Operationalizing Responsible AI

I recently spoke on a panel at the TWIML AI Platforms Conference discussing how to operationalize responsible AI with Rachel Thomas (Center for Applied Data Ethics), Guillaume Saint-Jacques (LinkedIn) and Khari Johnson (VentureBeat). It was a great discussion, and we touched on several topics, including:

  • Creating a company vision and values around responsible AI
  • The biggest challenges and techniques to address them
  • Why team diversity is so important

Creating a culture of trust

We started our conversation by discussing the steps that every organization should take to lay the groundwork for the responsible development of machine learning and AI systems.

At Georgian Partners, we believe companies should begin by creating a clear vision where trust is your guiding light. This means developing a business model that seeks to optimize both the value of your offering and the comfort level of customers or end-users of your products. In our view, the main drivers of comfort are:

  • Customer control of data and influence on product direction
  • Responsible and secure data handling
  • Reliability, stability and safety of AI systems
  • Transparency and explainability
  • Privacy
  • Fairness and freedom from bias

Organizational culture can also play a major role in responsible AI. Show that you stand by your vision by building your culture around trust and by holding your team accountable to the standards you have set.

We encourage our portfolio companies to appoint a Chief Trust Officer who understands the issues around responsible AI and can prioritize them as part of the organization’s trust-building initiative. It’s particularly important to have someone in a leadership role to drive initiatives and partnerships across the organization, as trust touches multiple operational areas such as security, privacy and product. Their role should include creating organizational data values so that every employee understands the responsibility that comes with handling data.

Challenges in building responsible AI

A strong culture of trust and data values certainly help, but some issues specific to AI require more technical solutions. It’s important to remember that while machine learning models produce probabilistic predictions, we normally use them for deterministic decision making. As a result, errors are unavoidable, and every effort should be made to guard against unintended consequences.
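To make that gap concrete, here is a minimal sketch of a decision guardrail that only turns a probabilistic score into an automatic decision when the model is confident enough, escalating borderline cases to human review. The thresholds and the escalation policy are illustrative assumptions, not a prescription:

```python
# Minimal sketch: act automatically only when the model is confident;
# route borderline cases to a human. Thresholds are illustrative.
def decide(probability, approve_above=0.9, reject_below=0.1):
    """Map a model's probability to an action with a safety margin."""
    if probability >= approve_above:
        return "approve"
    if probability <= reject_below:
        return "reject"
    return "escalate to human review"

for p in (0.95, 0.50, 0.03):
    print(f"p={p:.2f} -> {decide(p)}")
```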

Especially when the decisions made by machine learning models are critical, it’s essential to build such guardrails and fault-tolerance processes in case of unexpected model behavior. Moreover, before launching a product, you should define machine learning-specific quality assurance practices to increase its reliability and resilience. You might use tools like TensorFuzz to debug machine learning models and find errors in trained neural networks. Explainable AI techniques such as LIME can also help to find bugs in a machine learning model, as in the sketch below.
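For example, here is a minimal sketch of using LIME to inspect a single prediction of a tabular classifier. The dataset and model are placeholders chosen for illustration, not part of any particular product:

```python
# Minimal sketch of debugging one prediction with LIME.
# Dataset and model are illustrative placeholders.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

data = load_breast_cancer()
X, y = data.data, data.target
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

explainer = LimeTabularExplainer(
    X,
    feature_names=data.feature_names,
    class_names=data.target_names,
    mode="classification",
)

# Explain a single prediction; surprising feature weights can point to
# bugs or spurious correlations the model has learned.
explanation = explainer.explain_instance(X[0], model.predict_proba, num_features=5)
for feature, weight in explanation.as_list():
    print(f"{feature}: {weight:+.3f}")
```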

Two other major concerns related to responsible AI are privacy and bias in machine learning systems.

AI represents a new type of risk to personal information. Machine learning models can memorize individual data points, and adversaries can reverse engineer a model to recover the data used to train it, thereby exposing personal information.
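A simple way to see this risk is a confidence-based membership inference test: if a model is noticeably more confident on records it was trained on than on unseen records, an attacker can guess which records were in the training set. The sketch below deliberately overfits a placeholder model to make the effect visible; the dataset and model are illustrative assumptions:

```python
# Minimal sketch of a confidence-based membership inference test.
# Dataset and model are illustrative placeholders.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Fully grown trees tend to memorize the training set.
model = RandomForestClassifier(n_estimators=200).fit(X_train, y_train)

def mean_max_confidence(model, X):
    """Average of the model's highest class probability per example."""
    return model.predict_proba(X).max(axis=1).mean()

# A large gap between the two numbers signals membership leakage.
print("confidence on training data:", mean_max_confidence(model, X_train))
print("confidence on unseen data:  ", mean_max_confidence(model, X_test))
```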

Simple de-identification or anonymization by masking personally identifiable information might not be sufficient: it provides no formal privacy guarantees, since additional sources of data can be used to re-identify the individuals whose data was masked. Techniques such as homomorphic encryption and differential privacy provide more formal guarantees grounded in mathematics. Federated learning can also allow you to train machine learning models without sending data from local devices to the cloud, thereby reducing the privacy risk.
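To give a flavor of what a formal guarantee looks like, here is a minimal sketch of the Laplace mechanism for a differentially private count. The data, query and epsilon value are illustrative assumptions:

```python
# Minimal sketch of the Laplace mechanism for an epsilon-differentially
# private count. Data, query and epsilon are illustrative.
import numpy as np

def dp_count(values, predicate, epsilon):
    """A counting query has sensitivity 1 (adding or removing one person
    changes the count by at most 1), so Laplace noise with scale
    1/epsilon suffices for epsilon-differential privacy."""
    true_count = sum(1 for v in values if predicate(v))
    return true_count + np.random.laplace(loc=0.0, scale=1.0 / epsilon)

ages = [34, 29, 41, 55, 62, 23, 47]
print(dp_count(ages, lambda a: a >= 40, epsilon=0.5))
```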

While technology has the power to greatly improve people’s lives, it can also reinforce existing societal biases and unintentionally create new ones. Fairness and objectivity in AI only exist if data and models are free of bias. If your machine learning model is trained on biased data sets, your product or service will perpetuate unfairness and discrimination.

To build fair machine learning systems, the first step is detecting potential biases and their roots. Tools such as FairTest and Google’s What-If Tool can help to identify unwanted associations between model predictions and sensitive attributes; a bare-bones version of such a check appears below. The next step is mitigating bias with a thorough plan and transparent communication. Note that simply removing sensitive attributes from a model does not solve the fairness problem: the remaining attributes can act as proxies because they are correlated with the sensitive ones, and removing attributes can sacrifice the model’s predictive power.
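As a bare-bones illustration of this kind of audit, here is a minimal sketch that measures the demographic parity gap, the difference in positive-prediction rates between groups, on held-out predictions. The predictions and group labels are made up for illustration:

```python
# Minimal sketch of a demographic parity check on held-out predictions.
# Predictions and group labels are made up for illustration.
import numpy as np

def demographic_parity_gap(y_pred, sensitive):
    """Return the gap in positive-prediction rates across groups."""
    rates = {g: y_pred[sensitive == g].mean() for g in np.unique(sensitive)}
    return max(rates.values()) - min(rates.values()), rates

y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])                     # model decisions
sensitive = np.array(["a", "a", "a", "b", "b", "b", "b", "a"])  # group membership

gap, rates = demographic_parity_gap(y_pred, sensitive)
print(f"positive rate per group: {rates}, gap: {gap:.2f}")
```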

Reducing bias with diversity

To avoid unfair treatment of individuals and to improve awareness of the sources of bias, you should come full circle to your culture: champion diversity and transparency, not only in gender and ethnicity but also in background and skills. As the potential impact of AI systems grows, it’s critical to bring more diverse voices into your design process so that different points of view are heard. Consider adding these roles to your data science team to enable responsible AI:

  • Sociologist / social technologist
  • AI privacy and legal expert
  • Human-centered and computer-centered design experts

Getting started

Proactively prioritizing responsible and trustworthy AI is a challenging endeavor for startups. That’s why our team’s mission is to educate our portfolio companies and help them operationalize responsible AI through our principles of trust, our applied research and our software offerings.

You can read our 11 Principles of Trust white paper to assess your own company’s maturity and put a plan into place for operationalizing responsible AI.