
Humans Don’t Realize How Biased They Are Until AI Reproduces the Same Bias, Says UNESCO AI Chair


While machine learning today is dominated by deep neural network research, in the 1990s neural approaches were not recognized as reliable for real-world applications. Back then, researchers put their efforts into kernel methods and support vector machines (SVM).

One of the most notable and respected contributors to kernel methods and SVM is John Shawe-Taylor, a professor at University College London (UK) and Director of the Centre for Computational Statistics and Machine Learning (CSML). His main research area is Statistical Learning Theory, but his contributions span neural networks, machine learning, and graph theory.

Shawe-Taylor has published over 300 papers that have drawn more than 42,000 citations. Two books he co-authored with Nello Cristianini, An Introduction to Support Vector Machines (2000) and Kernel Methods for Pattern Analysis (2004), have become standard monographs for the study of kernel methods and SVM.

Last year Shawe-Taylor was appointed to the new UNESCO (United Nations Educational, Scientific, and Cultural Organization) position of Chair in Artificial Intelligence. In a recent interview with Synced, the 66-year-old British AI veteran spoke on his role at UNESCO, emerging AI trends and the tech’s unseemly side, human-centered AI research, and the progress of AI in Europe. The interview has been edited for brevity and clarity.

Tell us about your new role at UNESCO.

I have been appointed the UNESCO Chair in Artificial Intelligence. This is actually a role that gives you access to support from UNESCO, an organisation set up by the United Nations to enhance people's ability to access education, science and culture. The role is quite open-ended.

In addition to enhancing access to education worldwide through AI, there is also the possibility of using AI to address the Sustainable Development Goals (SDGs) identified by the UN. These include climate change, water supply and, of course, education. One of the interesting things about the African network is that we are trying to apply AI technology to solve problems they have on the ground in Africa: giving them the training they need, the resources they need, potentially computers, but also the discipline of how to collect the data and set up the problem in a way that can really make an impact.

What are some core problems or research areas you want to approach?

People are now solving problems just by throwing an enormous amount of computation and data at them and trying every possible way. You can afford to do that if you are a big company and have a lot of resources, but people in developing countries cannot afford the data or the computational resources. So the theoretical challenge, or the fundamental challenge, is how to develop methods that are better understood and therefore don’t need experiments with hundreds of variants to get things to work.

Another problem with current datasets, especially in terms of the usefulness of these systems for different cultures, is that there is a cultural bias in the data that has been collected. It is Western data informed by the Western way of seeing and doing things, so to some extent having data from different cultures and different environments is going to help make things more useful. You need to learn from data that is more relevant to the task.
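The skew Shawe-Taylor describes can be surfaced with a simple audit of where a dataset's samples come from. The sketch below is purely illustrative (the `region` attribute and the toy corpus are assumptions, not from the interview); it computes each group's share of a labelled corpus so that large imbalances stand out before training begins:

```python
from collections import Counter

def representation_shares(samples, group_key):
    """Return each group's fraction of the dataset.

    `samples` is a list of dicts; `group_key` names the attribute
    (e.g. the region a sample was collected in) whose balance we
    want to check. Large gaps between shares flag cultural skew.
    """
    counts = Counter(sample[group_key] for sample in samples)
    total = sum(counts.values())
    return {group: n / total for group, n in counts.items()}

# Toy corpus: 80 samples collected in Europe, 20 in Africa.
corpus = [{"region": "europe"}] * 80 + [{"region": "africa"}] * 20
shares = representation_shares(corpus, "region")
```

An audit like this is only a first step; equal representation does not guarantee the annotations themselves are culturally neutral, which is closer to the point Shawe-Taylor is making.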

Are there certain geographical areas where you believe UNESCO efforts should be focused?

UNESCO has a big focus on Africa; it's part of how their efforts are targeted at the moment, and I am enthusiastic about that. But I don't want our project to be restricted to Africa. Our program is funded by the Canadian International Development Research Centre (IDRC), which is also doing similar things in South America, which is great. We have also applied for UK funding to build an education system that is more cross-cultural, working with South American and Indian partners as well as African ones.

Can you tell us a bit more about UNESCO efforts in AI to help poor countries in Africa, or to give African people access to an AI education?

In Africa, we have the Deep Learning Indaba network, and there is also Data Science Africa, among others. They are developing a broad base of researchers and practitioners interested in how to develop AI systems, and I think this is also the right way to tackle the application of AI to solving developing countries' problems.

You are best known as co-author of An Introduction to Support Vector Machines in 2000, and Kernel Methods for Pattern Analysis in 2004. What are the implications of these two books in the era of deep learning?

Those books actually came out in the winter of deep learning. When I started out I was researching neural networks, what is now known as deep learning, but at that time there seemed to be a kind of limitation, in that the training and understanding of these systems was not sufficient for neural networks to become a reliable technology in real world applications. At that time it seemed the right way to go was to develop better tools for analysing machine learning systems and more principled methods of both training and designing such systems.

Kernel methods were an important contribution to that development. They were perhaps more grounded in statistical analysis: they had a very clear framework in which they operated, and they could be applied to very general situations. I think they set the scene for the reemergence of deep learning, in the sense that with that understanding and framework it was possible to push forward again into the deep architectures, to try to advance the richer models that those deep learning systems provide. And indeed, that has proven to be very successful in the application domain.

Again, we are in a period where theory needs to catch up, to reach a better understanding of when and why deep learning systems work so well, and when and why they work less well or are less robust, and that dynamic is what's driving research forward. So maybe having a big push in one direction, towards more practical systems, while also pushing for deeper understanding and statistical analysis, will actually promote the advancement of science. It's very good that we had that competitive dynamic of different research communities tackling essentially the same problem.

Could you tell us about particularly satisfying projects you have worked on over the past few years, whether with the University, Knowledge4All, or European projects?

I think there are two main projects I should mention. One is a Europe-wide project, “X5gon,” which is about using AI to improve the accessibility of global education resources by finding educational videos or documents that would be the most useful for learners for the particular educational goal that they are trying to achieve.

The other project to be mentioned is the AI network for researchers and practitioners in sub-Saharan Africa, for which we have so far organised a workshop in Nairobi, Kenya. It is just very enjoyable for me to see the amazing things that are going on there and their excitement for doing this kind of work.

What are some emerging machine learning trends you have seen in European academia?

I think the interest in ethics is very high in Europe, perhaps higher when compared to China or America, so that is something that is of interest to social scientists and AI practitioners in terms of developing systems that in some way address bias issues and are explainable. This humane AI initiative that we have been discussing is very much a European focus, and it also combines the idea of symbolic and machine learning, how you reason with logical statements, and this new trend in machine learning which is to learn from data as well as through logical inference.

Many are concerned that Europe’s AI regulations and the GDPR could hurt innovation. What is your opinion?

I don’t think so. Of course there are pluses and minuses, partly because AI research is being driven by data analysis, but that is perhaps more of a business problem than a research problem. Some businesses may be seen to be restricted by the GDPR regulations, but equally GDPR is driving a lot of research in terms of how to handle privacy and to build systems that can respect those constraints.

There is a lot of discussion today about AI downsides such as bias, interpretability issues, and potential inappropriate application. Do you think the advance of deep learning research can overcome such challenges, or are they part of the nature of machine learning models, which we will have to come up with new approaches to solve?

I think there will be new approaches to solve these problems, but I would see them as part of the machine learning machinery. "Solving" is probably too strong; rather, addressing those problems. As I've said, the problem is that we don't realise they are reflections of our own problems. We don't realise how biased we are until we see an AI reproduce the same bias, and we see that it's biased.

How would you define human-centered AI?

The key there for me is that we definitely want AI to be empowering humans, not disempowering them. If you think of AI systems as somehow replacing humans, making decisions and allowing humans to relax, this isn’t giving humans a fulfilling life, it’s giving them boredom. What we need to do is design an AI system that gives humans excitement, more enjoyment, more interest, and empowers them to do the things they want to do. So, AI in service to humans rather than humans being in the service of AI is the key difference.

How does human-centered AI research differ from general machine learning research?

With the humane AI aspect, in the sense of interactions with the world and particularly with people, we want interactions to enhance the human experience rather than undermining or detracting from the human experience. What we don’t want to have is a situation where people have been turned into robots, being run by other robots who are real robots.

How can human-centered AI research deal with the inevitable negative consequences of AI, for example job replacement?

In some sense, if someone says "we are going to replace your job with a more interesting job, and we are going to upskill you, and provide you with the resources to do that job," that has to be a win-win situation. The question obviously is how quickly we can upskill people, how quickly we can create new jobs, and I think that's really the key in terms of this revolution that is happening: to think ahead, not only about what's currently done by humans that we might also do with AI systems, but also about what we can now do by putting AI together with humans to do things that had otherwise not been possible.

You’ve said a focus is on AI transforming education. We see many AI education companies in China for example developing adaptive learning systems or digital assistants to impart knowledge. Where do you see AI’s biggest impact on education and how long will it take to transform the whole industry?

For me, the key is whether we can get AI systems to transform the way people learn. AI has the potential to better analyse and understand what it takes for somebody to understand something and take on new knowledge. I don’t think even the most advanced education systems have really got to the point of understanding how to create that excitement and interest in students. I think AI systems have the potential to understand better, and to enhance the learning experience for a huge portion of the population. They can inspire excitement in learning that is more available and widespread, so that learning becomes an ongoing part of life rather than people stopping their education after getting a certificate.

Journalist: Tony Peng | Editor: Michael Sarazen

Credit to Laurence Zhang for interview/transcript
