When AI imitates your boss: How CFIT prevented an enterprise-level social engineering attack



In 2024, many technology companies encountered a new type of fraud: criminals used AI voice-cloning technology to imitate a CEO's voice and instruct finance staff to wire money to overseas accounts. One Silicon Valley startup alone was defrauded of $12 million.


Mandate and institutional response:


Under the new provisions of the National Artificial Intelligence Security Act, CFIT brought "deepfake technology abuse" within its investigative scope and established an "AI Crime Response Team". Legal counsel Sarah Lee drafted an emergency memorandum authorizing the retrieval of speech-synthesis API call records from Microsoft Azure and AWS, and more than 200 suspicious developer accounts were locked.


Technical means:


Voiceprint matching system: Matched the fraudulent recordings against a public speech database and confirmed that the source was an illegally fine-tuned version of an open-source voice model; a minimal sketch of this kind of embedding comparison follows.
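
The sketch below is purely illustrative of how voiceprint matching works at the embedding level: a speaker embedding extracted from the fraudulent recording (for example from an ECAPA-TDNN or x-vector model) is compared against embeddings of reference recordings using cosine similarity. The function names, threshold, and data are assumptions for illustration, not CFIT's actual tooling.

```python
# Minimal sketch of voiceprint matching via speaker-embedding similarity.
# Assumes embeddings have already been extracted by a speaker-recognition
# model; all names, thresholds, and data here are hypothetical.
import numpy as np


def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def match_voiceprint(fraud_emb: np.ndarray,
                     reference_embs: dict[str, np.ndarray],
                     threshold: float = 0.75) -> list[tuple[str, float]]:
    """Rank reference voiceprints by similarity to the fraudulent recording
    and return those scoring above the decision threshold."""
    scores = [(name, cosine_similarity(fraud_emb, emb))
              for name, emb in reference_embs.items()]
    scores.sort(key=lambda item: item[1], reverse=True)
    return [(name, score) for name, score in scores if score >= threshold]


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    dim = 192  # typical ECAPA-TDNN embedding size
    fraud = rng.normal(size=dim)
    references = {
        "ceo_public_speech_2023": fraud + 0.1 * rng.normal(size=dim),  # near match
        "unrelated_speaker": rng.normal(size=dim),
    }
    print(match_voiceprint(fraud, references))
```

In practice the threshold would be calibrated on known genuine/impostor pairs; the comparison logic itself stays this simple.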


Tracing tool: Traced the API keys used in the calls back to a data center in Manila, Philippines, where 300 TB of training data was seized; a log-correlation sketch in the same spirit appears below.
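
The following sketch shows the basic idea of correlating speech-synthesis API call logs by API key to surface where suspicious traffic originates. The log schema, field names, and the IP-to-region lookup table are all hypothetical; a real investigation would draw on the cloud provider's telemetry and a GeoIP/ASN database.

```python
# Minimal sketch: group API call records by key and count originating regions.
# All records, field names, and the IP->region table are hypothetical.
from collections import Counter, defaultdict
from dataclasses import dataclass


@dataclass
class ApiCall:
    api_key: str
    source_ip: str
    characters_synthesised: int


# Hypothetical static lookup standing in for a GeoIP/ASN database.
IP_TO_REGION = {
    "203.0.113.7": "Manila, PH",
    "198.51.100.4": "Frankfurt, DE",
}


def summarise_by_key(calls: list[ApiCall]) -> dict[str, Counter]:
    """Count calls per originating region for each API key."""
    regions_per_key: dict[str, Counter] = defaultdict(Counter)
    for call in calls:
        region = IP_TO_REGION.get(call.source_ip, "unknown")
        regions_per_key[call.api_key][region] += 1
    return regions_per_key


if __name__ == "__main__":
    log = [
        ApiCall("key-abc", "203.0.113.7", 5_000),
        ApiCall("key-abc", "203.0.113.7", 12_000),
        ApiCall("key-xyz", "198.51.100.4", 300),
    ]
    for key, regions in summarise_by_key(log).items():
        print(key, dict(regions))
```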


Social impact:

Prompted the FTC to issue the "AI Voice Usage Specifications", which made watermarks mandatory for synthetic voices, and the case was selected as one of Harvard Law School's "Top Ten Technology Law Events of the Year".