BBC interview with President Yuval Wollman: The risks of GenAI

The following interview with CyberProof President Yuval Wollman by the BBC’s presenter Sally Bundock took place on July 18, 2023, in advance of the first-ever session by the UN Security Council about the risks of AI.  

Interview with BBC presenter Sally Bundock 

Sally Bundock: Members of the European Parliament recently voted in favor of the EU's proposed Artificial Intelligence Act, which will put in place a strict legal framework that companies will need to follow. So, what do you think will be achieved at this first meeting that the UN Security Council is holding on AI? 

Yuval Wollman: I hope that this Council will take into account the cyber risks that are in place, because it is not a question of “if” but rather a matter of “when”. I'm not sure how many of your viewers heard about BlackMamba, a generative AI cyber attack simulation conducted just a few months ago by a group of researchers. It showed how easy it can be to bypass defense tools. Many of these defense products are signature-based, but AI-generated malicious code can change itself and be polymorphic, so it is harder to detect. This is exactly what GenAI is all about. 

Sally Bundock: Are the right players at this event? Are those who are really “in the know” about what the risks are – are they at the table, when it comes to this discussion today? 

Yuval Wollman: Yes, I think they are. It's a good start. Take a look at China, for example, of all countries. You know that they just announced a new set of regulations, effective in August. On the one hand, the Chinese regime has identified the GenAI revolution as an economic engine. On the other hand, it has started to enforce ethical restrictions. It's rather old-fashioned: you need to register new algorithms or be subject to security reviews. But now the Chinese have called on Western governments to work with them and take global action. So, I think we have good alignment, better even than what we saw just a few years ago with COVID. We have the will and the recognition that governments need to work together and build policies across the board.  

But if I may, I would say it's not only between governments. It is also between sectors, the public and the private. We've seen OpenAI’s CEO ask the US Congress to initiate regulatory action, and the White House responded immediately after that. As we speak, they are crafting policies. I think the right players are there, and the timing is perfect. 

Sally Bundock: Are you hopeful, then, that AI will actually prove to be more of a positive than a threat in the future? 

Yuval Wollman: Well, I think we're seeing an arms race, whether it's in the workforce, as you mentioned earlier, or in cyberspace. Let's take phishing attacks, phishing emails, for example. I'm sorry to take the discussion back to cybersecurity, but I think it reflects the complexity of the situation. Just a few months ago, it was easier to ask ChatGPT or Bard to fabricate phishing emails. When OpenAI and Google started to restrict these kinds of uses, a new generative AI engine showed up: WormGPT. So now the rogue groups on the Dark Web have a new toy, and it's very, very risky. In the old world of cyber, this is an asymmetric war in which the offenders act faster than the defenders, and I believe that GenAI might accelerate that. However, if we take steps toward global regulation, we might turn the wheel in the right direction. We can then be optimistic, as you suggested. 

Sally Bundock: Let’s hope so! I know we'll talk about this again in the future. For now, thank you. 

Further Resources 

To read more about the UN Security Council session, see: