Five Common Privacy Blind Spots in Messaging Tools

We like to think that our private conversations are, well, private. No one listens in, right? It might not be so simple when it comes to our messaging tools. I chatted with Navroop Mitter, CEO of ArmorText, about the privacy of business messaging tools.

ArmorText is an end-to-end encrypted collaboration platform, targeting the most sensitive communications in an enterprise.

This post looks at five areas where your existing tools might expose you to risk, and what you should consider when assessing your messaging tools.

1. Rules and Laws of Private Communications

When it comes to digital systems, you may be required by law to turn over the credentials to those systems.

As individuals, we expect private communications. But companies (especially those that are regulated) have a responsibility to review who said what, when, and to whom. In other words, they are required to provide an audit trail that includes communications and messages upon request. Governance requirements mean they cannot adopt truly private communications. Think about whether your tool provides the audit trail you need.
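As an illustration only, here is a minimal sketch in Python of the kind of record an auditable messaging tool might keep for each message: who said it, when, to whom, and a reference to what was said. The field names are hypothetical, not any vendor's actual schema.

# Minimal sketch of an audit-trail record: who said what, when, and to whom.
# Field names are hypothetical; a real platform would persist these records
# in tamper-evident, append-only storage.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class MessageAuditRecord:
    sender: str           # who said it
    recipients: tuple     # to whom
    sent_at: datetime     # when (UTC)
    content_hash: str     # reference to what was said, for integrity checks

def record_message(log: list, sender: str, recipients: tuple, content_hash: str) -> None:
    log.append(MessageAuditRecord(sender, recipients, datetime.now(timezone.utc), content_hash))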

2. Get the Security-Privacy-UX Balance Right

When you’re selecting tools for your employees to use, you need to balance security, privacy and user experience. It’s easy for employees to download and start using messaging tools of their choice, but this could have serious implications.

When employees are left to choose their own apps, you can’t be sure that access rights are properly maintained. This could leave your company exposed.

Say, for example, an executive leaves company A for company B, a competitor. What would happen if they were not removed from group chats – like WhatsApp groups – and could still view confidential intellectual property? This shows the risk of having no centralized user management and revocation controls.
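Consumer apps leave removal up to each group's admin. With centralized user management, IT can revoke a departing employee everywhere in one step. The sketch below illustrates the idea against a hypothetical admin API – offboard_user, list_groups, remove_member and the rest are invented names, not a real product's interface.

# Hypothetical admin API: one off-boarding routine removes the departing
# employee from every group chat and revokes their access centrally.
def offboard_user(admin_client, user_id: str) -> None:
    for group in admin_client.list_groups(member=user_id):   # every chat they belong to
        admin_client.remove_member(group_id=group.id, user_id=user_id)
    admin_client.revoke_sessions(user_id=user_id)             # sign them out of all devices
    admin_client.disable_account(user_id=user_id)             # block future logins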

Security and privacy require a fine balancing act where compliance is a factor. For example, if doctors were to use an encrypted channel to communicate with their nurses, it would be hard for the hospital to review those encrypted messages when things go wrong. The communication channel needs to be secure but reviewable.
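One way to square that circle, sketched below purely as an illustration (not a description of how ArmorText or any specific product works), is envelope encryption: each message is encrypted with a random content key, and that key is then wrapped separately for the recipient and for a designated compliance reviewer. The channel stays encrypted end to end, yet authorized review remains possible. The example uses Python's third-party cryptography package and assumes the two public keys were exchanged beforehand.

# Sketch: wrap the per-message content key for both the recipient and a
# designated compliance reviewer, so messages are encrypted yet reviewable.
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import padding
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

def encrypt_for_recipient_and_reviewer(plaintext: bytes, recipient_public_key, reviewer_public_key):
    content_key = AESGCM.generate_key(bit_length=256)         # random per-message key
    nonce = os.urandom(12)
    ciphertext = AESGCM(content_key).encrypt(nonce, plaintext, None)
    oaep = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                        algorithm=hashes.SHA256(), label=None)
    return {
        "nonce": nonce,
        "ciphertext": ciphertext,
        # the same content key, wrapped once per party allowed to read the message
        "key_for_recipient": recipient_public_key.encrypt(content_key, oaep),
        "key_for_reviewer": reviewer_public_key.encrypt(content_key, oaep),
    }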

However, if you ignore user experience, you can be sure that employees will start to use their own tools. Make sure you’re not encouraging this by choosing a tool that nobody wants to use.

3. From On-Premise to Bulk-Hack

Enterprise messaging originally ran as on-premise technology, like desktop email. There were weaknesses we had to accept – for example, IT staff had eyes on that data – but that level of risk was acceptable because it allowed for internal review.

Once these communications moved to the cloud, more people could view those conversations. Cloud providers also mine data as part of their offering. A ‘bulk-hack’ is a serious possibility.

Slack, for example, has been open about the risk from nation states, organized crime, and hacktivists. A bulk-hack could expose data from thousands of organizations at once. What’s the risk of exposure for your business?

4. Communicating with Trusted Third Parties

Messaging tools are often set up primarily for internal use, but much collaboration happens between colleagues in different organizations. Look for a tool that allows you to set up “trust relationships” so that you can easily start a chat with people outside your organization while controlling the degree of access they have to your systems.
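As a sketch of what a trust relationship might look like in practice – the structure and field names below are hypothetical, not any vendor's actual configuration – external organizations can be allow-listed with an explicit, limited scope of access.

# Hypothetical trust-relationship configuration: which outside organizations
# may chat with yours, and how far their access extends.
TRUSTED_ORGANIZATIONS = {
    "partner-law-firm.example": {
        "channels": ["incident-response", "legal-hold"],   # only these rooms
        "can_download_files": False,
        "expires": "2025-12-31",
    },
    "forensics-vendor.example": {
        "channels": ["incident-response"],
        "can_download_files": True,
        "expires": "2025-06-30",
    },
}

def external_user_allowed(org_domain: str, channel: str) -> bool:
    trust = TRUSTED_ORGANIZATIONS.get(org_domain)
    return bool(trust) and channel in trust["channels"]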

5. Your Adversary is Already There…

Your adversary is already on the network and already listening and gathering intelligence. You need to make sure that you can secure communications, no matter what, even if credentials are compromised.

Listen to the full podcast episode to find out more, including:

  • What secure communications actually means to the modern regulated enterprise
  • The evolution of secure, yet private communications
  • More examples of where the fine balance between security and control pays off
