Two weeks ago, Turing Award winner and Facebook Chief AI Scientist Yann LeCun announced his "last substantial post" on Twitter after a series of often bitter exchanges on racial bias and fairness in machine learning systems. LeCun's comment on the perceived skin-tone failure of a depixelization model, "ML systems are biased when data is biased," was the catalyst for the fracas, which drew in both social activists and members of the machine learning community.
The debate surrounding LeCun’s Twitter exit shed light on a pressing topic in machine learning system development. In the recent paper Extending the Machine Learning Abstraction Boundary: A Complex Systems Approach to Incorporate Societal Context, a team of researchers from Google, System Stars and DeepMind argue that “machine learning (ML) fairness research tends to focus primarily on mathematically-based interventions on often opaque algorithms or models and/or their immediate inputs and outputs. Such oversimplified mathematical models abstract away the underlying societal context where ML models are conceived, developed, and ultimately deployed.” The paper’s first author Donald Martin, Jr., Technical Program Manager at Google, tweeted, “understanding societal systems is more important than ever.”
There is, however, no blueprint for how to effectively include societal context when designing ML systems. In a bid to expand the abstraction boundary of ML fairness work to include societal context, the researchers propose a CAS (Complex Adaptive Systems)-based taxonomic model of the core interacting elements of societal context for ML system designers and fair-ML researchers. In complex adaptive systems, components interact with each other directly or indirectly in a causal network, and it is difficult to predict system behavior based solely on the behavior of the components, since the components adapt to changes in their environment.
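The core CAS property described above can be illustrated with a toy simulation. The sketch below is not from the paper; it is a hypothetical majority-rule model in which agents on a ring repeatedly adapt their binary state to the local majority. The global outcome (which regions of agreement emerge) depends on the pattern of interactions, not on any single agent's simple rule.

```python
import random

# Illustrative sketch of a complex adaptive system (not from the paper):
# agents on a ring adapt their binary state to the local majority each step.
# Global behavior emerges from local interactions and cannot be read off
# any individual agent's rule.

def step(states):
    """One synchronous update: each agent adopts its local majority
    (itself plus its two ring neighbors)."""
    n = len(states)
    new = []
    for i in range(n):
        votes = states[i - 1] + states[i] + states[(i + 1) % n]
        new.append(1 if votes >= 2 else 0)
    return new

def run(states, max_steps=50):
    """Iterate until the system reaches a fixed point or the step limit."""
    for _ in range(max_steps):
        nxt = step(states)
        if nxt == states:  # no agent wants to change: a stable configuration
            break
        states = nxt
    return states

random.seed(0)
initial = [random.randint(0, 1) for _ in range(20)]
final = run(initial)
print("initial:", initial)
print("final:  ", final)
```

Note that a lone dissenting agent is absorbed by its neighborhood (e.g. `run([0, 0, 1, 0, 0])` settles to all zeros), while larger clusters can persist: the system's trajectory depends on the whole configuration, which is the sense in which component-level rules underdetermine system-level behavior.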
To promote fairer ML systems, the team introduced a collaborative causal theory formation that incorporates diverse stakeholder perspectives for discovering and considering societal context.
The researchers propose that the traditional Product Development Process (PDP), which involves funders (product leaders and potential customers), product owners (such as ML system designers), target stakeholders, and peripheral stakeholders, be placed under the lens of collaborative causal theory formation: "In other words, the PDP must incorporate the capability to collaboratively discover, understand and synthesize the causal theories of key stakeholders into new, more complete causal theories that more accurately reflect the dynamic complexity of the societal context in which the ML-based product (intervention) will ultimately be deployed."
The paper Extending the Machine Learning Abstraction Boundary: A Complex Systems Approach to Incorporate Societal Context is available on arXiv, and is also the foundational work for the paper Participatory Problem Formulation for Fairer Machine Learning Through Community Based System Dynamics.
Journalist: Fangyu Cai | Editor: Michael Sarazen