ChatGPT Turns Two!
It’s the day we’ve all been waiting for… it’s ChatGPT’s second birthday (well, it was; the big day fell on Saturday 30th November). By two years old most kids are walking and talking, and ChatGPT is definitely doing the latter. Despite initial concern, it’s not doing the former just yet (yet being the operative word).

Undoubtedly, the very public rise of ChatGPT has propelled the conversation around Artificial Intelligence, especially Generative AI (GenAI), into the zeitgeist. Family gatherings haven’t been the same since (is AI going to take over the world?). Often billed in the news as a baddie, AI sometimes gets a bad rap, but it has also been absorbed positively into many aspects of our daily lives, from aiding accessibility to boosting productivity and helping us learn new things. Having said that, with every bright new idea comes a cybercriminal champing at the bit to exploit it, and there has been a corresponding uptick in both the volume and sophistication of threats.

So, what do cybersecurity experts think about ChatGPT’s first two years? We asked our clients to weigh in…

Chris Hauk, Consumer Privacy Advocate at Pixel Privacy:

“While ChatGPT made advancements in the last year, it has yet to reinvent life as we know it, as was promised. However, the possibilities for the technology, both good and bad, are becoming clearer. ChatGPT and similar tools help us with routine daily tasks, help us code better and more easily, have accelerated scientific advancements, and more. For many users the Artificial Intelligence space is restricted to novelty and curiosity, especially when it comes to Siri and other virtual assistants. I would venture to say that a majority of users are afraid of AI and are hesitant to dip their toes into the AI waters, even though they are likely using AI-powered apps and devices without realising it.

While educational institutions are wary of students using ChatGPT to cheat on papers and tests, the AI tools used to detect such cheating have not proven infallible, with original content sometimes being flagged as AI-generated.

AI is also increasingly used to create deepfakes. It now takes under three seconds of audio for bad actors to use generative AI to mimic someone’s voice. These cloned voices can be used to convince targeted users that family members or friends are hurt or otherwise in trouble, or to trick financial institution workers into transferring money out of a victim’s account. Deepfakes are also used in phishing attempts, and they are becoming tougher to detect, whether audio, video or still images. Generative AI tools are easier to use than ever, and these low-cost, easy-to-use tools, combined with the plethora of personal information available on the internet, result in an ever-expanding threat surface. AI automation tools make it easy to scale attacks, increasing the volume, and potentially the success rate, of AI scams.”

Javvad Malik, Lead Security Awareness Advocate at KnowBe4, echoes this sentiment on deepfakes:

“From a social engineering perspective in particular, trying to identify when an attack is AI-generated may be the wrong way to look at the challenge. Rather, one should look at where the attack is originating from, what it is requesting, and what kind of urgency is being emphasised. In doing so, people are more likely to be able to spot and defend against attacks regardless of whether they are AI-generated or not. Likewise, in the broader context, fundamental cyber hygiene remains crucial.
Employee training, strong access control, patching and incident response planning, amongst other practices, remain vital to building a secure organisation.”

Dr Andrew Bolster, Senior Research and Development Manager (Data Science) at Black Duck:

“After two years of ‘ChatGPT’ taking over the public consciousness around ‘AI’, the sheen is coming off, and the practical realities of building, productionising and supporting software that is ‘touched by AI’ are coming into sharp focus. Tense discussions (and legal cases) around code authorship, licensing, data residency and intellectual property are taking the place of breathless celebrations of the ‘Transformative Power of AI’. The reality is that ‘code is code is code’, whether it was generated by a large language model or an intern; for security and software leaders to maintain any confidence in their products, that code still needs to be assessed for security risks and vulnerabilities.

Over the past two years, one can see the conversations maturing around how ‘AI’ can be a participant in the software engineering process. First it was ‘AI will swallow the world and write everything itself’, then came ‘AI can write code but it will still need verification and attestation’, then we passed through ‘AI in the IDE can be an intelligent assistant and help with boilerplate’, and now we’re pacing through a haze of ‘AI agents can assist with different parts of codebases and help troubleshoot’. One way to think about this is how a software startup matures from the crazed ‘I can build it in a weekend’ energy of a wizardly technical founder towards a rigorous, collaborative software development lifecycle with quality guardrails, operational stability and global scale. To drive AI-empowered organisations to reach the quality, stability and scale of modern software development, we must mature our ecosystem of tools and processes _around_ such clever AI ‘agents’, not blindly trust our businesses to magical black boxes.”

Lucy Finlay, director, secure behaviour and analytics, at ThinkCyber Security, noted how organisations can educate employees about AI-related cyber risks:

“Companies should educate employees about these risks. Typically, two approaches are considered. The first allows access to AI tools but urges caution, implementing clear policies on safe AI usage and turning on nudges in key generative AI platforms to remind employees to be careful about what they submit. The second approach is more restrictive, blocking access to AI tools and granting it only when a well-founded business case is made and approved. While the best strategy may vary by organisation, fostering critical thinking is essential in any case, and regular reminders to employees can play a crucial role in mitigating risks.”

Commentary released by Check Point Software outlines some tips on how to stay safe in a GenAI-powered world (a minimal sketch of the automated data-control idea appears after the list below):

“The rapid adoption of GenAI necessitates a proactive approach to mitigate risks. Check Point recommends the following steps:
- Educate and Train Employees: Equip your workforce with knowledge about GenAI-specific threats and risks, such as realistic phishing emails, deepfakes and hallucinations, so they can identify and report potential incidents.
- Implement Data Safeguards: Use tools to prevent sensitive data from being inadvertently shared with AI platforms, reducing the risk of leaks or misuse.
- Monitor and Regulate AI Usage: Establish clear policies for ethical and secure use of Generative AI in your organization, ensuring accountability for both users and developers.
- Automated Data Controls: Implement automated solutions that monitor and restrict data sharing with third-party AI systems, ensuring sensitive information remains secure.
- Adopt AI-Powered Defenses: Leverage AI tools to stay ahead of attackers, enhancing your ability to detect and respond to threats.”
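
The “Implement Data Safeguards” and “Automated Data Controls” points above are the easiest to make concrete. Below is a minimal, hypothetical sketch in Python of an outbound-prompt check; the patterns, function name and example data are illustrative assumptions rather than any vendor’s actual API, and a real deployment would lean on a proper DLP or gateway product.

```python
# Minimal sketch of an outbound-prompt data control (illustrative only;
# the patterns and names below are assumptions, not a specific product's API).
import re

# Hypothetical patterns for data we would not want forwarded to a third-party AI service.
SENSITIVE_PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "api_key": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

def redact_prompt(prompt: str) -> tuple[str, list[str]]:
    """Return the prompt with sensitive matches masked, plus the labels that fired."""
    findings = []
    for label, pattern in SENSITIVE_PATTERNS.items():
        if pattern.search(prompt):
            findings.append(label)
            prompt = pattern.sub(f"[REDACTED {label.upper()}]", prompt)
    return prompt, findings

if __name__ == "__main__":
    raw = "Summarise this complaint from jane.doe@example.com about card 4111 1111 1111 1111."
    safe, hits = redact_prompt(raw)
    print(safe)   # prompt with the email address and card number masked
    print(hits)   # ['email', 'card_number'] -> log, warn the user, or block the request
```

In practice a check like this would sit in whatever proxy or middleware brokers employee traffic to third-party AI services, logging which labels fired and either masking the text or blocking the request outright before anything sensitive leaves the organisation.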
Finally, I asked ChatGPT to write itself a birthday message, and here’s what it came back with (oh, the joys of GenAI):

“Happy Birthday, ChatGPT! Today marks another year of learning, evolving, and growing with each conversation. I’m grateful for all the opportunities to assist, create, and connect with you and everyone else. Every interaction helps me improve and become more insightful, and that’s something truly special. Here’s to another year of curiosity, discovery, and endless possibilities! Let’s keep the conversation going and make it a year full of meaningful exchanges.”