Raluca has more than years of experience in data science, working with a variety of clients (UK retailers, banks, and telco companies). Her experience spans managing teams to hands-on data product development. She is Co-founder & CTO of Etiq, an observability platform that supports the entire AI/ML lifecycle with Etiq Test, Monitor, Optimise and Explain. Prior to Etiq she was Director - Data Science for Merkle Aquila. She also built the global analytics team for a mobile marketing company as it was successfully listed on NASDAQ, and previously co-founded an ML start-up. Raluca has a BA from Amherst College.
Augmenting human minds: artificial intelligence and big data in risk assessment
22/06 - 09:00
Title of talk
Watching the watchmen: testing AI systems intended for risk assessment
22/06 - 09:30
Abstract of talk
In food safety, and as part of building sustainable food systems, machine learning and AI can play a significant role in operationalising risk assessment. The potential for efficiencies is high, and ML and AI can make possible widespread implementations of food safety best practices that were not feasible before. However, as in every other sector where they are deployed, ML and AI also pose risks of their own that need to be addressed. From data collection to data pre-processing, ML/AI application build, and through to production, potential issues can appear at every stage.

It is hard to generalise issue types across the range of potential applications. NLP, computer vision, supervised and unsupervised tabular ML, reinforcement learning: each has different potential risks and issues that need to be identified and mitigated. However, methodology-specific trends do emerge. For instance, in a system that incorporates end-user actions in online food safety risk assessments, such as public consultations and crowdsourcing, the potential for data collection issues is high. Additionally, in some instances the sample the system scores in a production setting can easily include edge cases and distributions not reflected in the sample the system was initially trained on. Even if the samples were similar enough at first, over time, as more users start using the system, the differences between the initial sample and the more recent population can degrade the model's performance.

ML testing is emerging as a discipline in its own right, with approaches tailored to the different types of ML and AI methodologies. In this presentation we will cover strategies and approaches to identify and mitigate potential issues in ML and AI applications relevant to the food safety sector.
We will cover a few different applications, then conclude with higher-level recommendations and links to further resources to point the audience towards further exploration.
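The training/serving drift the abstract describes can be checked numerically. The sketch below is not from the talk or from Etiq's product; it is a minimal illustration, assuming the widely used Population Stability Index (PSI) as the drift metric and synthetic Gaussian data in place of real food safety features:

```python
import math
import random

def psi(expected, actual, bins=10):
    """Population Stability Index between two numeric samples.
    A common rule of thumb: < 0.1 no significant shift,
    0.1-0.25 moderate shift, > 0.25 major shift."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0
    def bin_fracs(sample):
        counts = [0] * bins
        for x in sample:
            i = min(int((x - lo) / width), bins - 1)
            counts[i] += 1
        # small floor avoids log(0) for empty bins
        return [max(c / len(sample), 1e-6) for c in counts]
    p, q = bin_fracs(expected), bin_fracs(actual)
    return sum((pi - qi) * math.log(pi / qi) for pi, qi in zip(p, q))

random.seed(0)
train = [random.gauss(0, 1) for _ in range(5000)]  # stand-in for the training sample
same  = [random.gauss(0, 1) for _ in range(5000)]  # production sample, same population
drift = [random.gauss(1, 1) for _ in range(5000)]  # production sample, shifted population

print(round(psi(train, same), 3))   # small value: distributions still similar
print(round(psi(train, drift), 3))  # large value: drift worth investigating
```

In practice a check like this would run per feature on each scoring batch, with the thresholds tuned to the application rather than taken from the rule of thumb above.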