Video by Phaedra Boinodiris for IBM on YouTube
In the video above, a business leader at IBM explains the intricacies of AI ethics in cybersecurity. The rapid emergence of this new technology, and how quickly it has shifted the landscape, has led us to collectively ask ethical questions that previously seemed irrelevant. I chose this video because I believe Phaedra touches on several of the most relevant ethical points in discussing her five pillars of AI ethics.
This is an ongoing debate in the community given how AI is used today. Because AI learns from past data, it has been observed to exhibit unfair bias with respect to race and ethnicity. Organizations must also manually monitor their AI systems in order to prevent them from going astray.
Every person who uses your system has the right to know that their information is being used in some form by an AI model. Every user also has the right to refuse to share that information if they so choose. That is why it is crucial to let users know which system will handle their information and how it will be processed.
This is the aspect that I, among many others, believe to be the most important ethical consideration regarding AI use. AI-powered monitoring can lead to excessive surveillance of employees. AI also requires analyzing vast amounts of data, which opens the door to privacy violations and data poisoning.