Presenter: Srishti Gautam, PhD student at the UiT Machine Learning Group
Artificial intelligence (AI) has seen significant advances with the development of sophisticated deep learning models that excel across many domains. Nonetheless, these advances come with pressing challenges: deep learning models may reflect and amplify the biases present in their training data. Moreover, the complexity of these models leads to a lack of transparency, which can conceal these biases, erode trust, and hinder wider adoption. It is therefore crucial to foster the creation of AI systems that are inherently transparent, trustworthy, and fair. This talk presents a universal method capable of converting any existing pre-trained black-box model into a self-explainable one, thereby addressing the lack of transparency. The discussion will also examine widely used Large Language Models, exposing their embedded unfairness and perpetuation of social biases.
In compliance with GDPR consent requirements, presentations given in a Visual Intelligence context may be recorded with the consent of the speaker. All recordings are edited to remove the faces, names, and voices of other participants. Questions and comments from the audience will therefore be removed and will not appear in the recording. With the freely given consent of the speaker, the recorded presentation may be posted on the Visual Intelligence YouTube channel.
This seminar is open to members of the consortium. If you want to participate as a guest, please sign up.