AI Data Poisoning: A New Threat to Crypto Security

New research reveals that AI models, increasingly used in crypto for trading, fraud detection, and auditing, are vulnerable to data poisoning with as few as 250 malicious documents. Such sabotage can lead to compromised trading bots, faulty smart contract audits, manipulated DeFi oracles, and ineffective security systems. Experts call mitigating this threat an "unsolved problem," emphasizing the urgent need for robust security across the entire AI development pipeline. Ensuring AI integrity is now critical for the future of digital assets.