ChatGPT, a conversational AI technology, is both a great personalization tool and a potential phishing threat, according to email solutions provider Validity. The AI can easily be programmed to pose as a company's representative or customer service agent, making targeted phishing attacks far more convincing. "The potential for phishing is nearly limitless," warns Validity, adding that its security researchers have already identified several ways hackers could exploit ChatGPT. The firm also notes that ChatGPT is just one example of the challenges businesses face in securing AI-powered chatbots, which could be used to deceive users and collect personal information. Validity recommends that businesses carefully weigh the risks and benefits of AI technologies and vet their vendors thoroughly.
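Validity's warning rests on how little configuration such impersonation takes. The sketch below is a minimal, hypothetical illustration using OpenAI's Python SDK (the article itself shows no code): a single system message is enough to give the model a branded support-agent persona. "Acme Corp", the model choice, and the prompts are all illustrative assumptions, not anything Validity describes.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# One system message is enough to give the model a convincing persona.
# "Acme Corp" is a hypothetical brand used purely for illustration.
response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # illustrative model choice
    messages=[
        {
            "role": "system",
            "content": (
                "You are a helpful customer service agent for Acme Corp. "
                "Always sign off as 'The Acme Support Team'."
            ),
        },
        {"role": "user", "content": "I can't log in to my account."},
    ],
)

# The reply reads like a genuine support response, which is exactly the
# impersonation risk Validity flags for targeted phishing.
print(response.choices[0].message.content)
```

The same one-line persona swap works for any brand, which is why Validity urges businesses to vet vendors and weigh these risks before deploying AI chatbots.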
Excerpt from the main article:
OpenAI’s ChatGPT has taken the world by storm. Apart from the millions of people who have already tried it, email service providers (ESPs) are training it on campaign data to produce highly effective email subject lines and content. Salesforce users are exploring how ChatGPT can create formulas and validation rules. And Microsoft has now incorporated it into its Bing search engine; there is already talk of this being a potential "Google killer!"

So, how does the new technology work? ChatGPT (the GPT stands for Generative Pre-trained Transformer) uses deep learning techniques to process terabytes of web data, containing billions of words, to generate answers to users' prompts and questions (a minimal sketch of this prompt-and-answer loop follows the excerpt). Interacting with it feels like talking to a person; many say ChatGPT is the first AI application to pass the Turing test, meaning it exhibits intelligent behavior equivalent to, or indistinguishable from, that of a human being.

We've already seen some eye-catching use cases: the UK's Times newspaper used ChatGPT to …
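To make the prompt-and-answer loop described in the excerpt concrete, here is a minimal sketch of a conversational exchange, again using OpenAI's Python SDK as an assumed interface; the model name and prompts are illustrative. The point it demonstrates is that the model is stateless between calls, and the human-conversation feel comes from resending the accumulated message history with every prompt.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# The model is stateless between calls; the conversational feel comes
# from resending the accumulated message history with every prompt.
history = []

def ask(prompt: str) -> str:
    history.append({"role": "user", "content": prompt})
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # illustrative model choice
        messages=history,
    )
    answer = response.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    return answer

print(ask("Explain the Turing test in one sentence."))
# The follow-up resolves "that" correctly because the history is resent.
print(ask("Now rephrase that for a ten-year-old."))
```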
The post "The Dangers of ChatGPT: Great Personalization Tool or Great Phishing Technology?" was originally published on the Validity blog.