The social engineering and physical security sides of breaches have always interested me: how the physical or human aspects of an attack interact with the cyber elements to enable a more comprehensive compromise.
For me it probably started with the Metcalf sniper attack on power transformers in California: a co-ordinated attack on a number of facilities that showed domain knowledge (the attackers hit the oil-cooling parts of the transformers, causing them to eventually overheat), but whose purpose was never clearly understood. Power was quickly routed around the damaged sections of the grid, so why the attack?
This more recent report concerns the social engineering involved in a financial scam: modern voice-impersonation technology was used to convince a particular target that a money-transfer request came from the CEO, bypassing most of the security training even experienced people have typically undertaken. Stats from the article suggest that 1 in 638 calls is now created in this way.
As is often the case, the attackers' undoing was greed: they called back soon after the first successful scam, requesting that more money be moved. It was at this point that the victim became suspicious.
The interesting point for me is that we keep hearing how advanced techniques and machine-learning/"AI" will help us on the defending side. Yet as applications of the same technology are used to seamlessly create audio and visual content indistinguishable from reality, I can't help thinking that the 350% increase in these types of attacks seen between 2013 and 2017 is only the smallest leading indicator of what's to come.
"Cybercrooks successfully fooled a company into a large wire transfer using an AI-powered deep fake of a chief executive's voice, according to a report."