Noon van der Silk

“AI is going to increasingly impact all of our lives in significant ways, yet it is currently in the hands of very few people, a very homogenous group of people. We want to address this imbalance.”

After completing a master's in pure mathematics at the University of Melbourne, Noon founded Braneshop, a platform dedicated to making Artificial Intelligence (AI) a more accessible, representative and ethical space. With AI increasingly affecting us all, Noon shared with us some insights into the field, including his thoughts on selling data, the automation of jobs, and why diversity is critical in AI development.

Photo of Noon teaching

Why did you choose to go into AI, and why did you think it was important to create Braneshop?

I entered AI after completing my Master of Science (Mathematics & Statistics), specialising in Pure Mathematics with a focus on Quantum Computing. Before starting my master's I had a career as a programmer, so AI was a nice way to combine my interest in maths and science with programming.

Braneshop was created as a way to address the fact that AI is going to increasingly impact all of our lives in significant ways, yet it is currently in the hands of very few people, a very homogenous group of people. We want to address this imbalance.

Our aim is to bring more people into the field, in general, but with a strong focus on people who have typically faced barriers in the tech industry, who are under-represented, and/or whose communities are being most affected.

AI is full of statistics, calculus, and linear algebra. A strong mathematical foundation, exposure to these concepts in an academic context, and experience doing technical research have given me a highly intuitive understanding of AI. I feel comfortable explaining AI from the most tedious technical detail all the way through to a strategic level. This has enabled Braneshop to offer everything from practical workshops for existing programmers through to ‘AI for Leadership’ workshops aimed at leadership teams wanting to understand what it means to bring AI into their business.

I love teaching and supporting people to learn new and challenging concepts - technical or otherwise. I love seeing the light of discovery on someone’s face when they first program their own neural network, train a model to generate cool pictures, or otherwise suddenly “get” a concept or idea that they’ve been struggling with.

I also really enjoy that I get to work with a diverse team of people, as well as the community and engagement we are building around Braneshop.

As a new company, I think your first success sits with you for a long time. The whole team put a lot of work into getting our first Technical Deep Learning Workshop running smoothly, and making everyone feel comfortable, safe, and engaged. We have spent a lot of time thinking about our pedagogy to make sure the content and exercises are adapted to people’s skill levels!

What are the biggest challenges in AI, and what do you think should be happening in response?

There are many and they are varied. One that is very important to me, and which we have opted to include as part of our focus at Braneshop, is AI bias and fairness. How do we build AI systems that treat people fairly and justly, and which do not create a negative feedback loop? How do we know what factors cause a model to make certain decisions? How can we interact with AI systems as decision-making tools?

I hope that AI is increasingly developed by a properly diverse group of people, and that it is developed and used within ethical guidelines. If it will affect us all, it should be developed by a set of people that is at least representative of the people who are using it. I am committed to this change!

Another is the issue around job changes and redundancy. How are people going to work alongside AI systems? Will AI be a “tool” that we use to do our jobs better, or will it eliminate entire streams of work?

I believe the best approach is organisationally-supported career-change and education programs. When new automation/AI technology is adopted, it should be accompanied by a change management plan for the affected employees. Managing the change with empathy should be as important to the company as the adoption of AI itself. Either the affected employees should become part of the process, with their roles adapting as the process does, or they should be supported by the organisation - financially, or otherwise - to transition into a new role.

The changing landscape of privacy, along with the related questions of data ownership, is a complex problem. AI requires significant amounts of data. Who should own this data? What rights do people have around it? What rights do people have around data that is inferred about them? What should companies do to protect consumers? What should consumers demand from companies?

There are many people interested in the idea of “data markets”; namely that we should control, own and ultimately profit directly from data about us and data we make. My concern is that this could have negative repercussions for marginalised communities. They could be compelled to sell their data, earn very little from it given the sheer quantity of data available in these markets, and then be negatively impacted by the AI derived from it, for example through adjusted insurance rates or targeted attention from authorities. In many ways, the data market already exists: through systems like Amazon’s Mechanical Turk, people can be paid a small hourly rate to provide answers to questions.

I think the answer ultimately lies in regulation and social pressure. Governments need to gain the confidence and ability to make decisions and to regulate in this arena, protecting consumers and putting the most disadvantaged people front of mind. As an industry, we need to take an empathetic and long-term approach to how we consume and manage consumers’ data, and enforce this through our business relationships and discussions within the industry. It is important that we remember to put people and local communities first in our decisions and our impact.

Finally, there are also environmental issues at play that often don’t get much attention in discussions of future challenges. AI requires significant computing power, and this places a real strain on the environment. Should we be building more and more data centres to collect data and do our computations? How can we minimise the environmental footprint of these calculations?

If any of these challenges seem interesting to you, now is the perfect time to be getting into AI!

So what advice would you give to someone wanting to learn more about AI?

Well, come to one of our workshops!

There are a lot of options for learning AI out there to suit your learning style, your time, and your situation. If you learn well from online material, there are plenty of courses. If you are a hands-on learner, there are lots of tutorials. If you enjoy human connection in your learning, in a group setting, then come see us!

I would strongly encourage everyone to attend free meetups and events if they are able. This is the single most important thing you can do to build connections and learn about the industry. If time is a constraint, join some Slack channels in the space (ML/AI, WiMLDS, to name a few).