The Ethics of AI in Monitoring and Surveillance
January 1st, 2024
AI is set to advance every aspect of our society. But with great power comes great responsibility. AI is an imperfect technology that has the potential to cause both intentional and unintentional harm. As we integrate AI into our monitoring and surveillance systems, we must understand the ethical challenges that can arise with the technology, as well as how to ensure its proper usage and outcomes.
Ethical Challenges in AI
AI reflects the motives and opinions of the person or people using the technology. Bad actors can use the technology to intentionally deceive, manipulate, or harm others. Users with good intentions can unknowingly do the same, because we all have unconscious bias, and AI absorbs those biases during its training. Following are a handful of situations that show how AI can be harmful and affirm the need for governance.
- Deepfake Technology: AI can be used to create convincing deepfakes, which are doctored images, audio, or video that look and sound real. This fabricated media is then used to spread misinformation, typically to manipulate or trick people into certain beliefs or behaviors.
- AI-Based Cyber Attacks: Cyber attackers are increasingly using AI tools to make their attacks more sophisticated and more automated. Techniques such as data poisoning and reverse engineering have been used to carry out targeted cyber attacks. For example, an attacker might use a chatbot to write fake messages that look authentic, free of the telltale spelling errors that would otherwise give them away.
- Discriminatory Outcomes: AI models have generated discriminatory outcomes against certain people and groups in the past. A common example from the lending domain is an AI-based credit scoring model used to approve or reject loans. In one such scenario, African American and Hispanic applicants received fewer loans than white applicants because the data used to train the lending model carried unfair biases. Mitigating this requires more diverse data and the effective use of fairness techniques, such as the disparate-impact check sketched after this list.
- Financial Fraud: Bad actors are increasingly using AI-based techniques to carry out financial fraud, resulting in financial losses for investors and loss of trust in financial institutions. One common example is the use of fake news and manufactured sentiment (both positive and negative) to move the prices of specific stocks, creating artificial demand or supply for a quick profit. This emphasizes the need for adequate financial market surveillance tools that can monitor employees’ trading and communication data to detect unusual patterns and deviations before fraud happens.
- Security/Privacy Concerns Due to Increased Surveillance: Governments and organizations are increasingly using AI-powered surveillance tools and techniques to monitor people, raising questions about data privacy and security. People are worried about how their personal information is used and how much of it is being collected.
These examples emphasize the clear need for AI governance in day-to-day business processes. Governance should focus on tackling data privacy issues and data bias, as well as reducing misuse by bad actors.
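To make the lending example above concrete, here is a minimal sketch of the kind of disparate-impact audit a lender might run on its credit-scoring decisions. The column names, sample data, and the 0.8 threshold are illustrative assumptions, not a regulatory standard or any particular firm’s method.

```python
# A minimal disparate-impact check for a credit-scoring model's decisions.
# Column names, data, and the 0.8 threshold are illustrative assumptions.
import pandas as pd

def approval_rates(df: pd.DataFrame, group_col: str, decision_col: str) -> pd.Series:
    """Approval rate per demographic group."""
    return df.groupby(group_col)[decision_col].mean()

def disparate_impact_ratio(rates: pd.Series) -> float:
    """Lowest group approval rate divided by the highest.
    Values well below 1.0 suggest groups are treated unequally."""
    return float(rates.min() / rates.max())

if __name__ == "__main__":
    decisions = pd.DataFrame({
        "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
        "approved": [1,   1,   1,   0,   1,   0,   0,   0],
    })
    rates = approval_rates(decisions, "group", "approved")
    ratio = disparate_impact_ratio(rates)
    print(rates)
    print(f"Disparate impact ratio: {ratio:.2f}")
    # A common (but jurisdiction-dependent) heuristic flags ratios below 0.8.
    if ratio < 0.8:
        print("Warning: approval rates differ substantially across groups.")
```

An audit like this is only a first check: a ratio near 1.0 does not prove a model is fair, but a low ratio is a strong signal that the training data or model needs attention.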
Best Practices for AI Governance
Businesses have a responsibility to consider the ethics of AI and govern any AI systems they operate. In financial services, it’s also important to address the regulatory challenges AI can raise as it is used to understand, generate, and manipulate human language.
Here are some best practices to ensure your responsible use of AI:
- Consider the ethical implications before developing and deploying AI models and algorithms. For example, could your system be used to discriminate against certain individuals or groups? Could it be used to spread misinformation or propaganda?
- Use fair and unbiased data. The data that you use to train your AI/ML system can have a substantial impact on its performance. If your data is biased, your system is likely to learn to be biased as well. To avoid this, make sure that your data is fair and unbiased. This means that it should represent a diverse range of viewpoints and experiences.
- Be transparent with users about how your model or algorithm works. This means providing information about the data used to train your system, the algorithms that you used, and the limitations of your system. By being transparent, you can help users understand how your system works so they can make informed decisions about how to use it.
- Respect the privacy of individuals. When you are using AI to collect or process personal data, it is important to respect the privacy of individuals. This means obtaining consent from individuals before collecting their data and taking steps to protect their data from unauthorized access or disclosure.
- Hold yourself accountable. As a developer or user of AI systems, you have a responsibility to use these technologies responsibly and ethically. If you become aware of any potential harms that your system could cause, take steps to mitigate those harms. Hold individuals and teams accountable for ethical lapses. By holding yourself accountable, you can help to ensure that AI is used for good.
- Provide education and training on AI/ML. Train your team on the ethical considerations in AI/ML development to foster a culture of ethical awareness and responsibility.
- Maintain human oversight and intervention. This is especially important in critical decision-making processes. Human experts should be available to review and intervene in complex or sensitive situations.
- Regularly monitor and maintain/update models and algorithms. Continuously monitor the performance of your AI models in real-world scenarios, and update and retrain models as needed to maintain their accuracy and effectiveness over time. A minimal drift-monitoring sketch follows this list.
- Obtain “informed consent” if your AI model is used to interact with users. Make sure users are informed about how their data will be used and obtain their consent. Provide clear explanations about the purpose, scope, and potential implications of using the technology.
- Perform regular “ethical reviews.” Establish an ethics review process for AI/ML initiatives, especially those that have the potential to impact society or customers. Consider forming ethics committees or seeking external input to evaluate the ethical implications of your work.
- Stay current with relevant AI/ML-related regulations and standards in your industry and region. Ensure that your systems comply with these regulations.
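As a concrete illustration of the monitoring practice above, here is a minimal sketch that uses the Population Stability Index (PSI), one common drift metric, to compare a model’s score distribution at training time against what it sees in production. The 0.2 alert threshold is a widely used rule of thumb rather than a universal standard, and the data below is synthetic.

```python
# A minimal drift-monitoring sketch using the Population Stability Index.
# The threshold and data are illustrative assumptions, not a universal standard.
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index between a baseline sample and a live sample."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    expected_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    actual_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Clip to avoid division by zero and log(0) in sparsely populated bins.
    expected_pct = np.clip(expected_pct, 1e-6, None)
    actual_pct = np.clip(actual_pct, 1e-6, None)
    return float(np.sum((actual_pct - expected_pct) * np.log(actual_pct / expected_pct)))

if __name__ == "__main__":
    rng = np.random.default_rng(seed=0)
    training_scores = rng.normal(0.0, 1.0, 10_000)  # distribution at training time
    live_scores = rng.normal(0.4, 1.2, 10_000)      # shifted distribution in production
    drift = psi(training_scores, live_scores)
    print(f"PSI = {drift:.3f}")
    if drift > 0.2:  # common rule of thumb for "significant" drift
        print("Significant drift detected: schedule model review and retraining.")
```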
These practices will help ensure your firm is both compliant with regulations and acting in good faith. Our systems at NICE Actimize automatically incorporate many of these steps, enabling our users to incorporate AI technologies into their workflows without worrying about the ethical implications.
How NICE Actimize Governs AI
Regulations governing the use of AI technology can vary significantly from one jurisdiction to another and are subject to change over time. Here are some of the best practices that we follow at NICE Actimize to ensure our AI technologies are always compliant, as well as fair, ethical, and equitable:
- We focus on data privacy and security: Regulations in compliance areas typically require companies to obtain consent from individuals or clients before collecting their data. At NICE Actimize, we follow strict data security and protection standards when using clients’ data for model training. Access to the data is restricted to specific personnel, and it is protected from unauthorized access through encryption, access controls, and other security measures.
- We use diverse and representative data: We ensure that the training data used to build AI models is diverse and representative of the real-world population. This is one of the biggest challenges the financial compliance industry faces today: real examples of financial fraud and compliance breaches are rare, which often leads to biased data, and in turn to poorly trained models and unreliable outcomes. We use techniques such as resampling, re-weighting, or data augmentation to reduce bias in training data. For example, in our supervised machine learning-based “Alert Prediction” capability, which predicts the outcome of alerts to reduce false positives, we balance the data set to ensure equal representation of different groups, i.e., real versus false alerts. We also integrate fairness constraints into the training process by adjusting the loss function to penalize predictions that exhibit bias or fairness violations. (A generic re-weighting sketch follows this list.)
- We’re transparent about our modeling. We at NICE Actimize know that AI/ML algorithms can be complex and opaque, making it difficult for business users to understand how they work and how data is being used. Therefore, we aim to let users understand how a model arrived at its decisions, supported by explanations. This is achieved through documentation that covers the data sources, preprocessing steps, and algorithms used to develop the AI models, which helps ensure accountability and makes it easier for our clients to understand the models.
For example, if a communications surveillance alert is predicted as “Attention,” the explanation could highlight phrases such as:
- Let’s connect over WhatsApp.
- Can you give me some information on your company’s management decision?
- Hey Mate, call me on my number, I have some tips for you.
- We continuously monitor and evolve our models. An enormous amount of data is generated in today’s world, and AI/ML models need new data to improve their accuracy. Therefore, we regularly monitor and re-evaluate our models in real-world applications to identify and address emerging biases and to act on corporate clients’ feedback. We also partner with clients during the data analysis and model training process to make the models fair and transparent. In the compliance domain, compliance analysts play an essential role in providing feedback on alerts, which in turn improves alerting accuracy. Alerts need to be properly dispositioned; if they’re not, the surveillance system will make future predictions based on bad data.
- We make the user experience a top priority: At the end of the day, the human beings responsible for implementing the technology need to understand what’s working and what isn’t. Predictions made by unsupervised models can and should be reviewed regularly to ensure the models are making accurate predictions. For this reason, we focus on presenting outcomes via real-time dashboards that provide a seamless experience for users and allow them to easily monitor their machine learning models’ accuracy. This helps users see which models are accurate (based on dispositions of alerts) and which are not, or how a particular model is performing compared to others, and use that information to make the necessary tweaks and adjustments. (A small disposition-based accuracy example follows this list.)
- We stay abreast of relevant NLP and AI/ML-related regulations. We keep ourselves and our clients updated with the latest regulatory requirements, innovations, and effective uses of AI/ML techniques in the financial crime, risk, and compliance domains. This is a key consideration whenever we build solutions, ensuring they comply with both internal compliance policies and external regulations.
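To illustrate the re-weighting technique described in the point on diverse and representative data, here is a generic sketch of training a classifier on heavily imbalanced alert data with balanced class weights, so the rare “real alert” class is not drowned out by the majority. This uses scikit-learn and synthetic data to show the general technique; it is not NICE Actimize’s actual implementation.

```python
# A generic sketch of class re-weighting for imbalanced alert data.
# The synthetic data and model choice are illustrative assumptions only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(seed=1)
n = 5_000
X = rng.normal(size=(n, 4))
# Roughly 10% "real" alerts: a heavily imbalanced label, as in practice.
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=2.0, size=n) > 3.0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

# class_weight="balanced" re-weights the loss inversely to class frequency,
# one standard way to keep the rare class from being ignored during training.
model = LogisticRegression(class_weight="balanced", max_iter=1000)
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test), digits=3))
```

Adjusting the loss function with explicit fairness penalties works similarly in spirit: extra terms are added to the training objective so that biased predictions become more costly for the model.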
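And to ground the user-experience point above, here is a tiny sketch of how per-model accuracy might be computed from analyst dispositions of alerts. The model names, data structure, and numbers are all hypothetical.

```python
# A tiny sketch: per-model precision computed from analyst alert dispositions.
# Model names and dispositions below are hypothetical examples.
from collections import defaultdict

# (model_name, disposition) pairs recorded as analysts close alerts;
# "true_positive" means the analyst confirmed the alert was warranted.
dispositions = [
    ("comms_surveillance_v2", "true_positive"),
    ("comms_surveillance_v2", "false_positive"),
    ("comms_surveillance_v2", "true_positive"),
    ("trade_surveillance_v1", "false_positive"),
    ("trade_surveillance_v1", "false_positive"),
    ("trade_surveillance_v1", "true_positive"),
]

counts = defaultdict(lambda: defaultdict(int))
for model, outcome in dispositions:
    counts[model][outcome] += 1

for model, c in counts.items():
    total = c["true_positive"] + c["false_positive"]
    precision = c["true_positive"] / total if total else 0.0
    print(f"{model}: {precision:.0%} of alerts confirmed ({total} dispositioned)")
```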
Our goal at NICE Actimize is to improve our clients’ monitoring and surveillance, and we have incorporated several AI technologies to do so. We will always, however, make sure the use of these technologies complies with regulations in every jurisdiction and does not pose ethical risks for our users.
Improve Surveillance without Sacrificing Integrity
AI technologies will continue to improve the way we monitor and surveil our business operations. But firms have to take great care when implementing the technology to avoid instances of non-compliance or sacrificed integrity. As financial firms implement AI technologies, they should consider the ethical implications of new tools and processes and build strong governance around these areas.
Learn more about how NICE Actimize is doing the heavy lifting for clients to both incorporate AI into monitoring and surveillance and govern its usage. Download our new White Paper and learn how new technology can enable your firm to improve its surveillance tools and reduce risk.