US bank reports itself to SEC after sending customer data to unauthorized AI app

A US commercial bank has reported itself to federal regulators after employees fed sensitive customer data into an unauthorized AI application. The self-disclosure highlights growing concern across the financial sector about how AI tools are being used with confidential information.

Community Bank, which operates in southwestern Pennsylvania, Ohio, and West Virginia, filed a Form 8-K with the Securities and Exchange Commission on Monday. The bank said it has launched an investigation into the internal incident; that investigation remains ongoing.

The exposed data included customer names, dates of birth, and Social Security numbers. Community Bank said it felt compelled to submit the filing “due to the volume and sensitive nature of the non-public information.” However, the bank provided no further details about exactly what happened or how many customers were affected.

The incident reflects a broader challenge facing organizations as AI tools become more widespread. Many companies are struggling to balance employee productivity with data security as workers increasingly turn to AI applications for various tasks. Social Security numbers are among the most sensitive types of personal data that US organizations handle, protected under multiple federal and state privacy laws.

Community Bank did not specify what the “unauthorized AI-based software application” was or how employees used it. One likely scenario is that staff entered customer information into a generative AI tool outside the bank’s approved systems. This could raise serious questions about:

  • Whether the information was transmitted to third-party AI providers
  • How the data may have been stored or processed by external systems
  • What safeguards existed to prevent such unauthorized usage

The disclosure comes as financial regulators are paying closer attention to how banks handle AI adoption. The incident also highlights the need for clearer policies around employee use of AI tools, especially when dealing with sensitive customer information.

Community Bank confirmed that its operations continued normally and customers maintained access to their accounts and payment services throughout the incident. The bank is now evaluating which customer data was affected and conducting required notifications under federal and state laws.

“The company has been, and continues to be, in communication with relevant banking and financial regulators regarding the incident,” Community Bank stated in its cybersecurity disclosure. The bank also promised to continue remediation efforts and implement measures to prevent future incidents.

This case serves as a warning for other financial institutions about the importance of establishing clear AI usage policies. As AI tools become more accessible and employees increasingly use them for daily tasks, organizations need robust controls to prevent sensitive data from ending up in unauthorized systems.
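As one illustration of what such a control might look like, the sketch below shows a minimal outbound check that flags or redacts SSN-formatted values before text leaves an approved system. This is purely hypothetical: the function names, the single regex pattern, and the redaction approach are assumptions for illustration, not anything Community Bank describes, and production data-loss-prevention tools use far richer detection than a pattern match.

```python
import re

# Illustrative only: match the common NNN-NN-NNNN Social Security number format.
# Real DLP systems combine many detectors (context, checksums, ML classifiers).
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def contains_ssn(text: str) -> bool:
    """Return True if the text appears to contain an SSN-formatted value."""
    return bool(SSN_PATTERN.search(text))

def guard_outbound(text: str) -> str:
    """Redact SSN-like tokens before the text is sent to an external service."""
    return SSN_PATTERN.sub("[REDACTED-SSN]", text)

print(contains_ssn("Customer: Jane Doe, SSN 123-45-6789"))  # True
print(guard_outbound("SSN 123-45-6789 on file"))            # SSN [REDACTED-SSN] on file
```

A check like this would sit at the boundary between internal systems and any external AI endpoint, blocking or scrubbing requests rather than relying on employees to remember the policy.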