The internet is already filled with AI-powered chatbots such as ChatGPT and Meta AI that can be used for many things. However, according to cybersecurity expert Jamie Akhtar, speaking to *The Sun*, these chatbots can also expose users to significant cyber dangers.
AI Chatbots: The Hidden Gateway to Identity Theft and Fraud
The first risk is the leakage of private information. Users share personal details such as their name, address, or financial information with a chatbot, and if the platform is not well protected, hackers can breach it and steal that data.
AI chatbots can also be exploited to facilitate phishing. Attackers can use them to craft more elaborate, genuine-looking phishing emails that trick targets into divulging their login credentials or other sensitive information.
Identity theft is another danger worth mentioning. If hackers collect enough data from a chatbot, they can reconstruct a person's identity, hijack their accounts, or commit fraud in their name.
To address these risks, Akhtar advises users not to disclose sensitive information to chatbots, and urges the developers of AI platforms to strengthen their security.
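As a rough illustration of the first piece of advice, a client application could strip obviously sensitive patterns from a message before it is ever sent to a chatbot. The patterns and the `redact` helper below are a hypothetical sketch, not part of any real chatbot SDK, and real deployments would need far more robust PII detection:

```python
import re

# Hypothetical redaction rules: each regex maps to a placeholder.
PATTERNS = {
    r"\b\d{16}\b": "[CARD_NUMBER]",         # bare 16-digit card numbers
    r"\b\d{3}-\d{2}-\d{4}\b": "[SSN]",      # US social security format
    r"[\w.+-]+@[\w-]+\.[\w.]+": "[EMAIL]",  # email addresses
}

def redact(message: str) -> str:
    """Replace sensitive-looking substrings before a message leaves the device."""
    for pattern, placeholder in PATTERNS.items():
        message = re.sub(pattern, placeholder, message)
    return message

print(redact("My card is 4111111111111111, email me at jo@example.com"))
```

A filter like this only catches well-formed patterns; it is a last line of defence, not a substitute for simply leaving sensitive details out of the conversation.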
Do chatbots pose a threat to your financial privacy?
It is almost impossible to imagine a modern daily routine without interacting with a chatbot at least a few times a day, whether for everyday tasks, information, or even financial management. But cybersecurity analyst Jamie Akhtar warns that these AI-powered tools pose a clear threat to financial privacy: sensitive information shared in conversations may be leaked or misused by unauthorised parties.
Akhtar explains that every exchange with a chatbot, whether a simple question or a transaction request, is recorded and stored by the organisations that own these tools. Although this is done to improve AI performance, it raises questions about how well the information is protected.
As users disclose more and more specific financial information to chatbots, the likelihood that these conversations are intercepted or misused by cybercriminals increases. This can lead to identity theft or unauthorised access to important financial records.
Even though companies insist they have security policies in place, accumulating the information of an entire user base carries risk: a single breach could put users' financial details in the wrong hands.
Akhtar therefore recommends sharing only limited kinds of information with chatbots, and appeals to developers to better protect users' personal details.
Chatbot Data: A Goldmine for Cybercriminals—Are You at Risk?
Jamie Akhtar, the CEO of CyberSmart, has expressed alarm at the data chatbots gather, saying it acts as a magnet for hackers. This data, he added, is regarded as valuable and can be used in many cybercrimes, including identity theft and fraud.
Akhtar also noted that the data collected by chatbots is commonly stored on servers that cybercriminals can target. Whatever protective measures are in place, a sufficiently skilled and determined attacker could still gain access to this data.
Because the information gathered includes names, login credentials, and even financial details, it remains a draw for cybercriminals. Once the data is compromised, attackers can impersonate victims or execute financial scams.
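One common mitigation for the stored-data risk described above is pseudonymisation: keeping a keyed hash of each identifier instead of the raw value, so that a database breach alone does not reveal who said what. The sketch below uses Python's standard `hmac` module; the key name and helper are illustrative assumptions, not any vendor's actual practice:

```python
import hashlib
import hmac

# Hypothetical server-side key; in practice this would live in a
# secrets manager, never alongside the stored chat logs.
SERVER_KEY = b"example-secret-key"

def pseudonymise(identifier: str) -> str:
    """Return a stable keyed hash of an identifier for storage,
    so the raw email or username never sits in the chat-log database."""
    return hmac.new(SERVER_KEY, identifier.encode(), hashlib.sha256).hexdigest()

token = pseudonymise("alice@example.com")
print(token)  # 64-char hex token; the raw email is never stored
```

The keyed hash is stable, so the same user maps to the same token across sessions, but without the key an attacker who steals the logs cannot link tokens back to real identities.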
Given these risks, Akhtar encouraged users, whether private individuals or businesses, to approach chatbots with caution: share as little information as possible and handle any data they do collect with care.
In response, Akhtar has also urged chatbot developers to ensure data security and privacy, so as to reduce the chance of cybercriminals targeting these tools.