B.D.C. was at W.IN. Forum (Women in Innovation Forum)

On Monday, May 21st, B.D.C. attended and supported the 3rd edition of the Women in Innovation Forum, organized in New York City by Catherine Barba, a digital retail pioneer with a French and American background. Its manifesto: diversity powers innovation.

Diversity remains a challenge in 2018, even though studies from renowned institutions such as HBR have already demonstrated the benefits of a diversity-powered corporate culture. The future should be inclusive, but the present is not there yet.

With more than 700 attendees from 12 countries (predominantly France and the United States), 100 startups, and 60 CEOs, including key players of the retail and innovation scenes, the event offered a strong program built around a shared foundation (how innovation and diversity work together) and 5 verticals: Politics, Beauty & Fashion, Advertising & The Media, Food Chain, and Finance.

Our favourite session? Diversity in Tech, or: Can we build an unbiased AI?

While Laura Sherbin (Co-President @ Center for Talent Innovation, Professor @ Columbia University) and Emile Bruneau (Ph.D. neuroscientist, Research Scientist @ MIT) clearly demonstrated that we all carry automatic, implicit biases, the Diversity in Tech panel highlighted two categories of bias in AI:

  • Explicit: Which problem an algorithm is supposed to solve, how it solves it, and which audience it addresses are choices made by decision-makers. Selecting criteria and ensuring compliance with the law and the organization's policies while creating algorithms directly involve explicit bias.
  • Implicit and unconscious: The data we use to train an algorithm carries the biases of the past 5, 10, or even more years (the legacy is huge).
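
The implicit category can be illustrated with a minimal sketch (the data, groups, and hire rates below are entirely hypothetical): a naive model "trained" on historically skewed records simply reproduces the skew it was shown.

```python
# Minimal sketch with hypothetical data: a naive model learns the hire rate
# per group from past decisions, and so inherits whatever bias those
# decisions contained.
from collections import Counter

# Hypothetical legacy data: past hiring decisions, skewed against group "B".
historical_hires = ([("A", "hired")] * 80 + [("A", "rejected")] * 20
                    + [("B", "hired")] * 30 + [("B", "rejected")] * 70)

def train(records):
    """'Learn' the historical hire rate for each group."""
    counts = Counter(records)
    groups = {group for group, _ in records}
    return {group: counts[(group, "hired")]
            / (counts[(group, "hired")] + counts[(group, "rejected")])
            for group in groups}

model = train(historical_hires)
for group in sorted(model):
    # The past bias becomes the model's "prediction".
    print(f"{group}: {model[group]:.2f}")  # A: 0.80, B: 0.30
```

Nothing in the code is malicious; the bias lives entirely in the training data, which is exactly the panel's point.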

Throughout history, our civilisations have never been inclusive. The true challenge is not only to train algorithms to be unbiased, but also to train humans to shed their own biases (including unconscious ones) while building and working on algorithms.

How can we reduce bias in AI?

Data science contains the word science, which we can define as follows: the intellectual and practical activity encompassing the systematic study of the structure and behavior of the physical and natural world through observation and experiment. Any science, AI included, should not escape the rigor of a scientific experiment.

Another way to reduce bias in AI is to support open data initiatives and policies, the current choice of both New York City and France (although open data, too, contains its own patterns and biases).
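
One concrete form this scientific diligence can take, sketched below with an entirely hypothetical dataset and threshold, is auditing a dataset's group representation before training anything on it.

```python
# Minimal sketch with hypothetical data: audit group representation in a
# dataset before training, as an ordinary "observation" step.
from collections import Counter

# Hypothetical open dataset: records tagged with a demographic group.
records = [{"group": "A"}] * 900 + [{"group": "B"}] * 100

counts = Counter(r["group"] for r in records)
total = sum(counts.values())
shares = {group: n / total for group, n in counts.items()}

for group in sorted(shares):
    print(f"{group}: {shares[group]:.0%}")
    if shares[group] < 0.2:  # hypothetical threshold for flagging
        print(f"  warning: group {group} is under-represented")
```

A check like this does not remove bias by itself, but it makes the dataset's skew visible and documentable before it is baked into a model.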

Last but not least, our society increasingly values science and scientists, but we should not forget that scientists have long also been philosophers, and vice versa. Facing so many controversies and emerging ethical questions raised by the rise of AI, we should value the humanities more than ever, as they lead to enlightened, inclusion-driven decision-making.

Indeed, you would be a fool today to think you can drive change and create algorithms and products that satisfy the needs of so many diverse users, customers, and citizens without integrating a wide variety of backgrounds, genders, and perspectives into the decision process.

As a conclusion…

The best way forward for organizations that want to innovate and leverage AI is to be inclusive. They should hire computer scientists and learn how to work with them on reducing AI bias: computer scientists should explain to other decision-makers, in plain terms, what they do, so that decision-makers can in turn feed them with their knowledge, industry experience, and feedback.

Besides, giving employees a voice helps reduce bias within the organization, which in turn helps unbias the models: the more diverse the backgrounds around the table, the less bias is to be expected.

Moreover, innovation and AI should be efficient means to reach a goal or solve a problem; they are not goals in themselves. If AI is considered a means, a tool, or even a commodity, then soft skills such as empathy, along with the ability to manage uncertainty and understand relationships within ever more complex organizations and worlds, remain highly valuable assets for organizations. This also raises the question of liability in case of failures and scandals: no leader should hold technology and algorithms liable for their own (human) failures.

Misc key learnings and inspiring thoughts:

  • Diversity and inclusion do not only refer to women and people of color, and go far beyond the gender gap: disabled people, young (or old) people, and people from lower socio-professional categories are more often excluded than their able-bodied, mid-forties, upper-class counterparts.
  • Women's inclusion implies finding new ways to combine personal and professional lives.
  • The responsibility of the media (especially social media) was questioned, from fake news to the biased representation of diversity in the media, which reinforces bias (the myth of the successful white male entrepreneur, models in fashion magazines and advertising campaigns…).
  • Decision-makers should deep-dive into technology so they can be enlightened, not dazzled, by technology and Artificial Intelligence.
  • The future of education is key to anticipating and preparing for the labor-market shifts the AI revolution will bring.
  • Trust and transparency as new values: not all the startups claiming to do AI really do AI.
  • Connecting the dots to overcome complexity: creating links between people within and outside the organization so they understand the product and business, engage with the company culture and values, and innovate (e.g. Nespresso employees spend 3 days meeting farmers in South America).

We are eager to learn. More interesting readings below: