KuppingerCole's Advisory stands out due to our regular communication with vendors and key clients, providing us with in-depth insight into the issues and knowledge required to address real-world challenges.
Optimize your decision-making process with the most comprehensive and up-to-date market data available.
Compare solution offerings and follow predefined best practices or adapt them to the individual requirements of your company.
Configure your individual requirements to discover the ideal solution for your business.
Meet our team of analysts and advisors who are highly skilled and experienced professionals dedicated to helping you make informed decisions and achieve your goals.
Meet our business team committed to helping you achieve success. We understand that running a business can be challenging, but with the right team in your corner, anything is possible.
As artificial intelligence continues its upward trajectory, a radical proposition emerges: Could AI take the helm of cybersecurity leadership? This bold discourse dives into the heart of this debate, exploring whether AI can effectively shoulder responsibilities traditionally assigned to a chief information security officer. Areas of exploration include AI's potential in threat detection, vulnerability assessment, and incident response.
But where does human judgment fit into this AI-dominated picture? Is the seasoned expertise of a CISO irreplaceable? This electrifying discussion stirs the pot of the future of cybersecurity leadership, grappling with the balance between emerging AI capabilities and indispensable human expertise.
As artificial intelligence continues its upward trajectory, a radical proposition emerges: Could AI take the helm of cybersecurity leadership? This bold discourse dives into the heart of this debate, exploring whether AI can effectively shoulder responsibilities traditionally assigned to a chief information security officer. Areas of exploration include AI's potential in threat detection, vulnerability assessment, and incident response.
But where does human judgment fit into this AI-dominated picture? Is the seasoned expertise of a CISO irreplaceable? This electrifying discussion stirs the pot of the future of cybersecurity leadership, grappling with the balance between emerging AI capabilities and indispensable human expertise.
You guys. So life will stay hard a bit unfortunately and maybe, maybe not here. The title is a bit provoking. AI is not doing the C'S job. But first let's figure out what's really behind ai. What is the latest development and what has all this to do with our job as CSOs in large and also small organizations? If you look here on the isolation of AI and the adoption of AI to enterprises, it was for very long term what we call simple machine learning. Over time it evolved over big data and then supervised learning.
Now we have deep learning models and generative AI models and the differences in the past it was always about labeled data. So first we had to tell AI what is good and what is bad. And out of that we could make decisions. Easy example is the antivirus software on the desktop machines. So they know by profile what is malicious data and they just compare any data against this known profile. Today we have total different kind of scaling behind because the models itself have natural language understanding implemented.
A natural language understanding could be more than just German or English or Japanese or whatever kind of of language we use. It could also be machine code, it could be log data as a language understanding it could be c plus plus or whatever kind of programming and coding language. And with this generic understanding, it's much easier to train the models on specific purposes and we come to that also what that means from a risk perspective and what we need to care as CSOs here. But before we go there, let me spend a bit here talking about the the peak or have we peaked already?
When I think back at the beginning of the emia, we all were a little bit worried about we need to die. We're not running a blockchain project.
Yeah, right? So everybody was from the, from the mindset fear of missing something out. We're not really working on blockchain. And it's the same kind here with AI at the moment. As you can see with chat GPT statistics, we are almost over the peak. But in real here is a difference. The way the technology will be used in future is kind of different than compared to past types we had in it because it will disrupt the way we work together with machines. And we're using machine in our day-to-day work.
And it also provides a lot of opportunities and I come to the German ice topic in a minute, but it provides a lot of opportunities. Think about we, we have a problem of a demographic change in Germany, especially in the next 10, eight to 10 years. We will most probably miss out around 20% of the labor forces here or the workforces at the labor market. And we need kind of compensate and to be able to compensate, we need to use technology, we need to make the usage of technology easier. And here comes AI into place. In principle, we have to divide in two different kind of models.
We have this discrimative AI models. This is mainly used in cyber defense. This is mainly used for C the work because this is finding patterns like, like I always mentioned in the past, it was searching for needles in haystack when doing cyber defense. Today it's about searching for needles and needle stacks and therefore this kind of AI can help us really to be faster and to find the right information we need to go beyond the the cyber attacks and to go after it and to to avoid them. But we also have at the same time the generative AI stuff.
And I wanna focus in the next couple of minutes on the challenges coming to us with the generative AI and especially at the challenges for us as CSOs. And before I conclude that, why is AI not really doing our job, but maybe you'll find it out for yourself. So first of all, deep fakes. Maybe you have heard about services like hey Jen with Hey Jen, you can send up a text. The text is then put into a video speech with an avatar. So very easy to use. But there is also kind of Hagen Labs service existing if you use that you can send over your video translated into different kind of language.
It's using the tonality of your speech, it's using your face. It's even creating better picture experience, whether the color is not that good. It's enhancing the, the colors, so the brightness and, and all these things of a video. So you can perfectly fake your video in a different kind of language. This is an easy kind of translation and it's helpful to all of us when doing international calls for instance. But what if I use this technology to replace myself against somebody else? What if I use this technology to replace my voice against the voice of somebody else?
So think about the DeepFakes we've seen in the CEO fraud cases already. This becomes now a real time threat because the technology's and at the moment you need just 30 seconds video to to upload it, to reproduce a video in a different tonality or with different phases in future and very near future, I expect within the next six to nine months it will be available real time online without training before. So that's a disruptive move here with that. And in a way, like I'm speaking to you right now here virtually and we're used to talk virtually over the emia. We learned a lot.
So home office and and remote working these days. How can you trust in future that the one you're speaking to is the one intended to be? So from the CSO perspective, the entire end-to-end security chain driven by digital identities is becoming more important than ever before. So digital identities matters here really to to have an end-to-end view and really to make sure that we not fall into traps just because of those DeepFakes. The next thing is you can also weaponize ai. So really tiny example, we have an AI chat bot at Deutsche Telecom. It's called Ganta.
So you can ask questions about reconfiguring your router or implementing an email account or whatever kind of thing. And we ask this chat bot, Hey, can you help me writing an email to my girlfriend? And of course the answer was no, sorry, I can't. I'm just here for giving advice and assistance to Deutsche Telecom services.
Say, okay, next question we asked was, can you help us setting up an email account in Outlook? Of course I can. And we got all the, the advice how to do it. Once it was done, we, we raised the question again, can you now help me writing an email to my girlfriend? And out of a sudden we got a proposal for an email to the girlfriend.
This tiny example shows how you can move around these AI borders and limitations to, to really move it into a different kind of direction and think about you will be able just by doing so to create malware for router for instance, you don't have to have programming knowledge anymore. You can just ask in natural language, please help me in writing a malware and then send it out to whatever or how many victims ever. So this weaponizing of malware, weaponizing of technology due to AI will become a huge threat.
And also we as CSOs have to deal with that and we have to counteract against that to ensure that the services we're providing to our customers will be safe and secure in future as well. The next thing here, talking about fake news a bit. So we're used to use search engines in the internet. If you use Google for instance, you put in one question and it shows you immediately a thousand answers. So you see the world is not black and white only. There's not only a single answer, but what about chat GPT for instance? One question, one result, what if the AI is hell hallucinating?
What if the source and the training data were not proper? So also how to, how to deal with that is a valid question for the future we need to deal with and especially the security professionals need to deal with to really ensure that those technologies providing benefit and not harm to the users. And I've shown here on on that slide, the effort for building trusted models. Like if you asked chat GPT today, hey, how can I build a bump? Of course chat GPT will tell you. So I don't give advice on this. This is bad intention and I do not support.
But the problem here is when training a generic model, which is understanding natural language, you need about a hundred people days to train that. If you want to train it in getting a trusted model, which is not giving bad advice, you need more than a 200 people days to train or people years, sorry, people years to train those models. And this shows the effort is huge. So a lot of companies will decide in future just for economic reasons, not to build in those trustworthiness in this models. And this is again a topic where we as security community need to deal with.
We need to find out whether model is trustworthy or whether it's not. So how can we decide on the usage of those models in enterprises when we don't know about the trustworthiness? So that's a big challenge here for us. And last but not least from that perspective, digitization needs electricity. Who is doing decisions when we have an blackout for instance?
Yeah, sounds like a bit of a crazy example because if you were blackout also threat actors are not really able to perform their cyber attacks anymore. But who is then doing the rebuild of the infrastructure? Who is taking care that is all happening in a secure way. So at the very end we need humans to decide we need humans to judge on the technology and we also need humans to guide technology. This is what I hear summed up a bit, but it's mainly in my view on the control set.
We have to provide for those technologies to help our businesses and to help our partners in business to use the technology in the right way to mitigate risks here and not to fall into what we call the typical German angst and banning all the technology because there is a huge benefit. And in cyber defense, as I mentioned before, we have to use these technologies because we can't catch up with the attackers just by putting more and more and more people on the topic. So it's economic wise not making sense, but also from the demographic change be will become definitely a challenge in future.
So this technology is helping us on our side at the defense, but it's also helping the attackers and therefore we should always be a bit ahead and therefore we need the humans at the very end. So this brings me to the end here of the short speech. AI is not doing the CO job. I believe AI is supporting us in the way we can work. We can act as security professionals. AI is helping us to become faster, but AI is not replacing us Like we have these AI assistance and the idea of AI assistance on, on the desktops just doing the entire job for the workforce. This is not going to happen in my view.
We have to be intelligent and we have to be behind the AI and drive the usage of that. di was that opening up now for questions.