The field of artificial intelligence (AI) has witnessed remarkable growth in recent years, with projections indicating that, by 2030, the industry could add as much as $15.7 trillion to the global economy — a figure exceeding the current combined economic output of China and India.
One development from this space that has garnered significant traction recently is “deepfakes” — highly realistic video or audio recordings created with AI that mimic real human appearances or voices, often indistinguishably from their genuine counterparts.
To this point, a recent viral post on X demonstrated how readily available open-source and commercial generative AI tools can be exploited to alter a person’s selfie, creating counterfeit ID images capable of deceiving many of today’s security checks.
AI will rapidly accelerate broad use of private key cryptography and decentralized ID.
Check out this Reddit “verification post” and ID made with Stable Diffusion. When we can no longer trust our eyes to ascertain whether content is genuine we’ll rely on applied cryptography. pic.twitter.com/6IjybWRhRa
— Justin Leroux (@0xMidnight) January 5, 2024
The deepfake conundrum
As the digital landscape has evolved, the rise of AI-generated deepfakes poses significant challenges to the existing Know Your Customer (KYC) paradigm. Toufi Saliba, CEO of HyperCycle — a company building the components needed for AI microservices to communicate with one another — told Cointelegraph that the crux of this challenge lies in the existing security processes themselves, adding:
“Perhaps KYC itself is the attack vector on self-sovereignty, and these [deepfake] tools are proving how today’s KYC systems could be rendered obsolete in the near future. A resilient solution would involve using certain cryptography properly to service the claimed intent of KYC proponents, thereby also protecting future generations.”
Saliba further noted the implications of AI and deepfakes on the cryptocurrency sector, emphasizing the urgency for rapid adaptation. “This issue of fake image creation is likely to disrupt entire centralized systems from the inside out, thus presenting one of the best opportunities for well-intentioned regulators and centralized entities to realize that cryptography can come to the rescue when needed,” he asserted.
On the subject of deepfake content detection, Dimitry Mihaylov, an AI research expert for the United Nations, told Cointelegraph that criminals are now using sophisticated tools to generate realistic fake identification documents, presenting challenges the industry has not previously faced. As a result, he believes industries across the board must evolve rapidly, adding:
“The market for AI-generated image detection is evolving with projects like FakeCatcher, a project that has been developed in partnership with Umur Ciftci. The technology showcases the potential of real-time deepfake detection with a 96% accuracy rate.”
Looking ahead, Mihaylov also anticipates a significant shift in regulatory approaches to KYC, suggesting a future where dynamic and interactive verification processes stand to become the norm. “We may see the introduction of more dynamic KYC procedures, like video KYC, as regulatory frameworks adapt to these technological advancements,” he suggested.
Impact on cryptocurrency exchanges
The impact of deepfakes is being felt across a host of industries, including crypto. Notably, a platform called OnlyFake recently made headlines for allegedly helping users bypass the KYC protocols of several well-known cryptocurrency trading platforms.
For a mere $15 each, the platform claims to produce counterfeit driver’s licenses and passports for 26 nations, including but not limited to the United States, Canada, the United Kingdom, Australia and several European Union member states.
On Feb. 5, the news outlet 404 Media revealed that it had used OnlyFake’s services to successfully bypass the KYC verification process of the popular crypto exchange OKX.
Similarly, leaked discussions have revealed OnlyFake’s clientele celebrating their ability to skirt verification processes at numerous other cryptocurrency exchanges and financial institutions, including Kraken, Bybit, Bitget, Huobi and PayPal.
The process of generating a counterfeit document on the website is remarkably swift: the platform is reportedly able to generate up to 100 fake IDs at once using just Excel spreadsheet data. Moreover, users can incorporate their own photo or select one from a curated “personal library of drops” rather than relying on a neural network.
The fake driver’s licenses and passports appear arranged on various domestic surfaces like kitchen counters, bedsheets, carpets and desks, mimicking the typical presentation for online verifications. One post even showcased a fabricated Australian passport bearing the details of a former U.S. president laid out on a piece of fabric.
AI’s impact is being felt far and wide
During the latter part of 2022, blockchain security company CertiK revealed the existence of an underground marketplace where people offered their identities for sale for as little as $8. These individuals consented to serve as the legitimate face for deceitful cryptocurrency initiatives and to establish banking and exchange accounts for users who would otherwise be barred from certain platforms.
Additionally, the widespread and easy availability of AI deepfake technology has alarmed leaders in the cryptocurrency sector, especially when it comes to ascertaining the reliability of the video verification processes employed in certain identity validations.
Binance chief security officer Jimmy Su told Cointelegraph in May 2023 that he was concerned about the rise in fraudsters using deepfake technology to bypass exchange KYC procedures. He cautioned that these video forgeries were nearing a level of realism capable of deceiving human evaluators.
Moreover, a study released by Netherlands-based Sensity AI indicated that the liveness tests used for identity verification are significantly susceptible to deepfake attacks, enabling scammers to substitute their own faces with those of other individuals.
In one such case, a man from Kerala, India, recently fell victim to a ruse in which a scammer, pretending to be his friend, stole 40,000 rupees (approximately $500). Similarly, a deepfake video of Elon Musk sharing misleading crypto investment advice made the rounds on Twitter last year.
Beware of deepfake videos of Elon Musk. These are being used in various crypto scams on Youtube. ⚠️
pic.twitter.com/nVsZPJipoT
— DogeDesigner (@cb_doge) May 13, 2023
What lies ahead?
As we head toward a future driven by artificial intelligence, there is ample evidence to suggest that the threat of “face swap” deepfake attacks will continue to increase. In fact, a recent study noted that attacks against remote identity verification systems increased by 704% between 2022 and 2023, with this surge being attributed to the accessibility of free and low-cost face swap tools, virtual cameras and mobile emulators.
Furthermore, hackers and scammers appear to grow more sophisticated with each passing month, as seen in the emergence of digital injection attack vectors and emulators that allow miscreants to create and deploy deepfakes in real time, posing a serious challenge to both mobile and video authentication systems. Looking ahead, it will be interesting to see how this nascent security paradigm continues to evolve, especially given humanity’s growing reliance on these advanced technologies.