Can an automated system detect malicious links, threats, or inappropriate messages?
Can it also detect fraud, or behavior that may indicate stalking?
We are still far from perfection, but to date the AI we have written, called Spam Assassin AI, is based on
a couple of approaches similar to those used by antivirus software: each link is scanned by an antivirus
engine that classifies it as valid or potentially dangerous.
If a link is dangerous, its sender is flagged and the link is deleted transparently; in this way the AI
blocks links that lead, for example, to fake login pages of banks or social networks.
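As a rough sketch of this flow (the scanner, the blocklist, and the flagged-sender store below are hypothetical stand-ins, not the actual Spam Assassin AI internals):

```python
# Minimal sketch of the link-handling flow described above.
# `scan_url` stands in for a real antivirus/URL-reputation engine;
# here it only checks a toy blocklist, purely for illustration.
from urllib.parse import urlparse

BLOCKLIST = {"fake-bank-login.example", "phishy-social.example"}

flagged_senders = set()  # hypothetical store of flagged accounts


def scan_url(url: str) -> str:
    """Classify a URL as 'valid' or 'dangerous' (toy heuristic)."""
    host = urlparse(url).hostname or ""
    return "dangerous" if host in BLOCKLIST else "valid"


def filter_message(sender: str, text: str, urls: list) -> str:
    """Delete dangerous links transparently and flag their sender."""
    kept = []
    for url in urls:
        if scan_url(url) == "dangerous":
            flagged_senders.add(sender)  # sender is flagged
            continue                     # link removed transparently
        kept.append(url)
    return text + (" " + " ".join(kept) if kept else "")
```

In a real deployment the blocklist would be replaced by an actual URL-reputation service, but the flag-and-delete logic would follow the same shape.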
We are also enhancing interaction with the community by allowing users to give us their opinion on
"typical messages", which increases the accuracy of the algorithm and supports its growth and
effectiveness, since the algorithm learns from the suggestions it receives.
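In its simplest form, such a community feedback loop could look like the sketch below (the vote counter and the agreement threshold are assumptions for illustration, not the system's actual mechanism):

```python
# Toy sketch of a community feedback loop: users vote on "typical
# messages", and a template is treated as spam once enough users agree.
from collections import Counter

votes = Counter()  # message template -> net votes (spam minus not-spam)


def record_opinion(template: str, is_spam: bool) -> None:
    """Record one user's opinion on a 'typical message'."""
    votes[template] += 1 if is_spam else -1


def community_says_spam(template: str, threshold: int = 3) -> bool:
    """Consider a template spam once net agreement reaches the threshold."""
    return votes[template] >= threshold
```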
SPAM reduced to a minimum
For messages, a per-country dictionary detects the vast majority of unpleasant or offensive
messages. The system also tracks message frequency, helping to prevent possible acts of
stalking, such as one user repeatedly messaging another.
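A sketch of how dictionary matching and a frequency check might combine (the word lists, the time window, and the in-memory log are all hypothetical; the real system's data sources are not described here):

```python
# Illustration of per-country dictionary matching plus a message-frequency
# check that can signal possible harassment or stalking.
from collections import defaultdict

DICTIONARIES = {  # per-country offensive-word lists (toy examples)
    "US": {"badword", "insult"},
    "IT": {"parolaccia"},
}

message_log = defaultdict(list)  # (sender, recipient) -> timestamps


def is_offensive(text: str, country: str) -> bool:
    """True if the message contains a word from the country's dictionary."""
    words = set(text.lower().split())
    return bool(words & DICTIONARIES.get(country, set()))


def looks_like_stalking(sender, recipient, now, max_msgs=5, window=60.0):
    """Flag a sender who messages the same recipient too often
    (more than max_msgs messages within `window` seconds)."""
    log = message_log[(sender, recipient)]
    log.append(now)
    recent = [t for t in log if now - t <= window]
    message_log[(sender, recipient)] = recent
    return len(recent) > max_msgs
```

The thresholds here are arbitrary; a production system would tune them per platform and combine the frequency signal with the content signal rather than use either alone.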