Imagine a world where artificial intelligence is not just a tool for progress but a weapon for exploitation. Are we prepared for this ironic twist? Join me as I unpack observations from the recent InfoSec conference in Orlando. The discussion delves into the burgeoning trend of utilizing AI technologies, such as ChatGPT, across multiple industries, with a keen emphasis on cybersecurity. We discuss how vendors employ AI to enhance their products, sometimes as a hands-on guide through processes, or as entirely AI-driven solutions for tasks such as log sorting and risk analysis.
However, the path forward is not without thorns. The conversation takes a sobering turn as I discuss how AI is being weaponized. Yes, the same AI we're leveraging for advancements is being harnessed by hackers to create streamlined workflows, write exploit code, and more. Tune in for an insightful journey into the future implications of AI in cybersecurity and join the conversation. If you attended InfoSec, I'd love to hear your thoughts. Don't forget to like, subscribe, and follow for more enlightening discussions!
Hey, welcome to episode two. It's been a very, very long time since I've done this particular podcast, and if you're watching this on YouTube right now, there's some irony in the fact that I'm wearing the same shirt that's actually in my logo. Anyways, I just got back from speaking at InfoSec in Orlando, and I wanted to talk about a couple of observations. So everyone's using ChatGPT for everything now, including myself. I have some AI stuff you may have been watching on some of my other podcast channels, where I'm using it to create content and to take existing content, cut it up, and recirculate it onto other platforms, which has been very beneficial. But one of my biggest observations at InfoSec was that everything is AI. Some vendors were using it to supplement products, like to answer a question: you're in the product trying to figure something out, and you can ask the assistant. If you haven't used that in any kind of security product, think of your banking app, where the chat button notices you're asking for help and is a little more AI-driven, something like that. But I also noticed that some of them were even more interesting, where they would walk you through things like a companion. Trend Micro has Companion, I believe, and CrowdStrike has one too, I think it's called Charlotte, where you can ask it questions and it'll actually help you with your configurations, define the help, and set you up with the right best practices for the product. So that's a very interesting way to use AI from a product standpoint. But others were completely AI-driven products, like log sorting and risk analysis.
There were a couple of mail solutions I saw that use AI to determine if mail is malicious or phishing, which is great, because behavior analysis and techniques like that have existed for years. Look at the old spam filters, like SpamAssassin, which is still baked into a lot of spam filter products today and used Bayesian analysis. That's kind of an earlier form of machine learning, if you think about it. Those things have progressed to the point now where it's AI reading the email before it determines whether it's right for you or not. I also noticed that many of the speakers spoke about AI. ChatGPT was a subject for a lot of them: how it's changing all the industries, including, you know, our great industry of cybersecurity. But just like it's being used for good and being helpful, it's also being used for bad, as things usually are. AI is being used to write exploit code; there was a lot of talk about that. The irony there is that you can ask ChatGPT to write some Python code for you, and it's usually pretty clean, but one of the speakers mentioned studies suggesting the code is only about 60% right most of the time. So does it make great exploit code? Probably not, unless it's just going out there and grabbing things, because remember, folks, it can only present what it knows from things it could scrape, so it has to determine whether the information it's looking at is good or bad before it pushes it forward to you. Who knows how great it's going to be all the time, right? I even saw some examples, not at InfoSec but around this same reasoning: think about how in the past you might use some code to write a phishing email and attach a coupon to it or something like that. Well, you can do that with some AI tools now too.
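To make the Bayesian analysis idea concrete, here's a minimal naive Bayes spam filter sketch in Python. This is a simplified illustration of the general technique, not SpamAssassin's actual implementation; the training messages and function names are made up for the example.

```python
import math
from collections import Counter

def train(messages):
    """messages: list of (text, is_spam) pairs.
    Returns per-class word counts and per-class message totals."""
    counts = {True: Counter(), False: Counter()}
    totals = {True: 0, False: 0}
    for text, is_spam in messages:
        for word in text.lower().split():
            counts[is_spam][word] += 1
        totals[is_spam] += 1
    return counts, totals

def spam_probability(text, counts, totals):
    """Naive Bayes with Laplace smoothing; returns P(spam | text)."""
    vocab = set(counts[True]) | set(counts[False])
    # Start from the (smoothed) prior odds of spam vs. ham.
    log_odds = math.log((totals[True] + 1) / (totals[False] + 1))
    for word in text.lower().split():
        # How likely is this word in spam vs. ham messages?
        p_spam = (counts[True][word] + 1) / (sum(counts[True].values()) + len(vocab))
        p_ham = (counts[False][word] + 1) / (sum(counts[False].values()) + len(vocab))
        log_odds += math.log(p_spam / p_ham)
    return 1 / (1 + math.exp(-log_odds))  # convert log-odds to probability

# Toy training set (hypothetical examples).
training = [
    ("claim your free coupon now", True),
    ("free prize click now", True),
    ("meeting agenda for tomorrow", False),
    ("lunch tomorrow with the team", False),
]
counts, totals = train(training)
print(spam_probability("free coupon now", counts, totals))       # high
print(spam_probability("team meeting tomorrow", counts, totals)) # low
```

Each word shifts the log-odds toward spam or ham depending on which class it appeared in more often during training, which is the core of the Bayesian approach the old filters used.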
So it's not just the bots and the botnets we have to worry about anymore. The hackers are actually using the same AI tools we're using to protect ourselves from them, and they're using them to come after us. They're looking at AI to be more efficient and create better, faster workflows for themselves. In the future, I think it might be fun just to see what we end up with, and I hope, in the end, that it's not AI versus AI. So if you're on YouTube, please like and subscribe, and leave me a couple of comments about this. Or if you were at InfoSec, and especially if you happened to catch my talk, that'd be great: give me a little comment and tell me what you thought about InfoSec. There are several of these events; I think we just had the Atlanta one last week, and this one was Orlando. Follow and subscribe. Appreciate you guys listening. Thanks so much.