Lessons learned from a Man-in-the-Middle attack

It’s become a widely accepted mantra that experiencing a cyber breach is a question of ‘when’ and not ‘if’. For Fox-IT, ‘if’ became ‘when’ on Tuesday, September 19 2017, when we fell victim to a “Man-in-the-Middle” attack. As a result of the multi-layered security protection, detection and response mechanisms we had in place, the incident was both small and contained, but as cyber security specialists it has made us look long and hard at ourselves. While the police investigation is still ongoing, we are sharing details of this incident with the public now that we are confident the facts are sufficiently clear. This is about who we are and what we do. We believe that ultimately in these cases, transparency builds more trust than secrecy, and there are lessons to be learned, both good and bad, that we want to share.

So what happened?

In the early morning of September 19 2017, an attacker accessed the DNS records for the Fox-IT.com domain at our third party domain registrar. The attacker initially modified a DNS record for one particular server so that it pointed to a server in their possession, which intercepted the traffic and forwarded it to the original server that belongs to Fox-IT. This type of attack is called a Man-in-the-Middle (MitM) attack. The attack was specifically aimed at ClientPortal, Fox-IT’s document exchange web application, which we use for secure exchange of files with customers, suppliers and other organizations. We believe that the attacker’s goal was to carry out a sustained MitM attack.
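
A change of this kind can be caught by routinely comparing the answers a domain actually returns against a known-good baseline. The sketch below illustrates that idea, assuming the third party dnspython library and placeholder values for the expected ClientPortal address and name servers; it shows the type of check, not the monitoring we actually run.

    # Minimal DNS baseline check (illustrative sketch; all expected values are placeholders).
    import dns.resolver  # third party package "dnspython"

    EXPECTED = {
        ("clientportal.fox-it.com", "A"): {"198.51.100.10"},                     # placeholder address
        ("fox-it.com", "NS"): {"ns1.example-dns.net.", "ns2.example-dns.net."},  # placeholder name servers
    }

    def check_records():
        alerts = []
        for (name, rtype), expected in EXPECTED.items():
            observed = {rr.to_text() for rr in dns.resolver.resolve(name, rtype)}
            if observed != expected:
                alerts.append(f"{name} {rtype}: expected {expected}, observed {observed}")
        return alerts

    if __name__ == "__main__":
        for alert in check_records():
            print("DNS mismatch:", alert)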

Because we detected and addressed the breach quickly, we limited the total effective MitM time to 10 hours and 24 minutes. Compared with the industry average detection time of weeks, this was a short exposure, but we couldn’t prevent the attacker from intercepting a small number of files and information that they should not have had access to. An important first step in our response was to contact Law Enforcement and share the necessary information with them to enable them to start a criminal investigation.

The incident unfolded as follows (all times are CEST):

Sept 16 2017 First reconnaissance activities against our infrastructure that we believe are attributable to the attacker. These included regular port scans, vulnerability scans and other scanning activities.
Sept 19 2017, 00:38 The attacker changed DNS records for fox-it.com domain at a third party provider.
Sept 19 2017, 02:02 Latest moment in time that we have been able to determine that clientportal.fox-it.com still pointed to our legitimate ClientPortal server. This means that traffic destined for the ClientPortal was not being intercepted yet.
Sept 19 2017, 02:05-02:15 Maximum 10-minute time window during which the attacker temporarily rerouted and intercepted Fox-IT email, specifically to pass the domain ownership validation needed to fraudulently register an SSL certificate for our ClientPortal.
Sept 19 2017, 02:21 The actual MitM against our ClientPortal starts. At this point, the fraudulent SSL certificate for ClientPortal was in place and the DNS record for clientportal.fox-it.com was changed to point to a VPS provider abroad.
Sept 19 2017, 07:25 We determined that our name servers for the fox-it.com domain had been redirected and that this change was not authorized. We changed the DNS settings back to our own name servers and changed the password to the account at our domain registrar. This change will have taken time to have full effect, due to caching and the distributed nature of the domain name system.
Sept 19 2017, 12:45 We disabled the second factor authentication for our ClientPortal login authentication system (text messages), effectively preventing users of ClientPortal from successfully logging in and having their traffic intercepted. Other than that, we kept ClientPortal functional in order not to disclose to the attacker that we knew what they were doing, and to give ourselves more time to investigate. At this point, the MitM against ClientPortal was technically still active, but it no longer received traffic to intercept, as users were unable to complete two factor authentication and log in.
Sept 19 – Sept 20 2017 A full investigation into the incident was undertaken, along with notification of all clients that had files intercepted and the relevant authorities, including the Dutch Data Protection Authority. A police investigation was launched and is still ongoing. Based on the outcome of our investigation, we understood the scope of the incident, we knew that the attack was fully countered and we were prepared to re-enable two factor authentication on ClientPortal in order to make it fully functional again.
Sept 20 2017, 15:38 ClientPortal fully functional again. Our internal investigation into the incident continued.

What kind of information did the attacker intercept?

As mentioned, the attacker was able to redirect inbound traffic to ClientPortal and emails going to the fox-it.com domain for a short period of time. At no stage did they have access to any external or internal Fox-IT system, or indeed system level access to our ClientPortal.

When investigating the attack, we made heavy use of our own technology. For all Fox-IT traffic, we employ CTMp network sensors to detect anomalies and alert us to attacks. All our sensors do full packet capture, and this is what allowed us to determine precisely, beyond any doubt, which information was intercepted by the attacker, the timeline of the event and who was affected by the attack.
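
As an illustration of why raw packet captures matter for scoping, the sketch below pulls out every flow involving a given rogue address within the attack window, which is the starting point for working out exactly what was exchanged. The scapy library, the capture file name and the rogue address are assumptions for illustration; our own tooling differs, but the principle is the same.

    # Illustrative scoping query over a packet capture (sketch; placeholder values).
    from collections import Counter
    from datetime import datetime, timedelta, timezone
    from scapy.all import rdpcap, IP  # third party package "scapy"

    CEST = timezone(timedelta(hours=2))
    ROGUE_IP = "203.0.113.7"  # placeholder for the attacker's server
    WINDOW_START = datetime(2017, 9, 19, 2, 21, tzinfo=CEST).timestamp()   # MitM starts
    WINDOW_END   = datetime(2017, 9, 19, 12, 45, tzinfo=CEST).timestamp()  # 2FA disabled

    packets = rdpcap("border_gateway.pcap")  # hypothetical full packet capture file

    flows = Counter()
    for pkt in packets:
        if IP in pkt and WINDOW_START <= float(pkt.time) <= WINDOW_END:
            if ROGUE_IP in (pkt[IP].src, pkt[IP].dst):
                flows[(pkt[IP].src, pkt[IP].dst)] += 1

    for (src, dst), count in flows.most_common():
        print(f"{src} -> {dst}: {count} packets during the attack window")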

Because of this we know the following about the information that the attacker was able to intercept:

  • ClientPortal: we know the following facts about the information that the attacker intercepted:
    1. Nine individual users logged in and had their credentials intercepted; all of these users were contacted immediately. Because we use two factor authentication, these credentials alone were not enough for the attacker to log in to ClientPortal.
    2. Twelve files (of which ten were unique) were transferred and intercepted.
      • Of these, three files were client confidential in nature; none were classified as state secret. Files classified as state secret are never transferred through our ClientPortal.
      • Other files were in the public domain and not confidential.
      • All affected clients in respect of these files were contacted immediately.
    3. One mobile phone number was intercepted; the owner of that number was informed.
    4. A subset of names and email addresses of ClientPortal users was intercepted.
      • We determined that this information alone would not be considered sensitive, but we have nevertheless informed those people affected.
    5. The names of accounts in ClientPortal were intercepted.
      • This is a list of the names of organizations that we have exchanged files with in the past. This includes organizations such as customers, partners and suppliers.
      • After review, we determined that the names alone would not be considered sensitive.
  • Email: we know that the attacker intercepted Fox-IT email for a maximum of 10 minutes, and that during those 10 minutes, emails destined for Fox-IT were redirected to an external email provider. Since email distribution is decentralized on the internet, we have no way of determining which emails were intercepted by the attacker during that time.

The start of the incident response process

We have determined that the attacker successfully logged in to the DNS control panel of our third party domain registrar provider using valid credentials. This raises two key questions:

  • How were valid credentials obtained?
  • Why was access to the domain registrar possible with single factor rather than multi factor authentication?

On the question of how the credentials were obtained, we have conducted extensive internal investigations alongside those conducted by the police investigating the incident. We interviewed people that had access to the password vault, carried out extensive forensic investigation of all the systems that the password could have been used on, and performed additional network forensics. We currently have no evidence that the password was stolen from any of our own internal systems. Right now, based on our own and the police investigation, we have strong evidence that supports our hypothesis that the adversary gained access to our credentials through the compromise of a third party provider. Investigations are ongoing.

A factor which possibly helped the attacker was that the password had not been changed since 2013. While our password is considered strong and can withstand brute force attacks, it was not changed because it was hardly ever used: DNS settings in general change very rarely. Nevertheless, this was something we could clearly have improved in our process.

The infrequency of our engagement with our domain registrar also explains the answer to the second question: how the attacker was able to gain access to the DNS records solely using a username/password combination. Two factor authentication (2FA) – whereby the initial attempt to log on can only be completed once a code sent to an authorized device is entered as a second factor – should be standard practice here. We chose our domain name registrar provider 18 years ago, when 2FA was neither a consideration nor a possibility. We were surprised to find that the registrar still does not support 2FA. It is always worth asking: does your DNS registrar support 2FA? The answer may surprise you.
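
For context, a second factor does not have to be delivered by text message; time-based one-time passwords (TOTP) are a common alternative that a provider can support. Below is a minimal sketch of a TOTP check using the pyotp package; it is purely illustrative and not how our registrar or ClientPortal implements authentication.

    # Minimal TOTP second-factor check (illustrative sketch).
    import pyotp  # third party package "pyotp"

    # At enrolment: generate a per-user secret, store it server-side and share it
    # with the user's authenticator app (typically via a QR code).
    secret = pyotp.random_base32()
    totp = pyotp.TOTP(secret)
    print("provisioning URI:", totp.provisioning_uri(name="user@example.com", issuer_name="ExamplePortal"))

    # At login: only after the password check succeeds, require the current code.
    submitted_code = input("second factor code: ")
    if totp.verify(submitted_code, valid_window=1):  # tolerate one 30-second step of clock drift
        print("second factor accepted")
    else:
        print("second factor rejected")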

Reasonably quick detection but room for improvement

We hold our hands up to a number of errors in prevention, but we did decidedly better in the areas of detection and response. Once the attack was ongoing, we detected it reasonably quickly: within a few hours, compared to the typical industry average of weeks before an attack is detected.

Our Security Operations Center (SOC) had noticed that a number of scans for weaknesses in our infrastructure were made in the days leading up to the attack. These were treated as regular “background noise on the internet” and not followed up on. While paying attention to those scans would probably not have helped us predict the attack that followed, it would certainly have raised our level of alertness, and we have now established mechanisms to improve early warning of suspicious security probing of this nature.
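
One simple form of such early warning is flagging sources that probe an unusually large number of distinct ports in a given period. The sketch below shows that idea over a generic connection log; the log format, column names and threshold are assumptions for illustration, not a description of our SOC tooling.

    # Toy port-scan early-warning check over a connection log (illustrative sketch).
    # Assumes a CSV log with columns: timestamp, src_ip, dst_ip, dst_port (hypothetical format).
    import csv
    from collections import defaultdict

    PORT_THRESHOLD = 100  # distinct destination ports per source before alerting (assumed value)

    def scan_candidates(log_path):
        ports_per_source = defaultdict(set)
        with open(log_path, newline="") as log_file:
            for row in csv.DictReader(log_file):
                ports_per_source[row["src_ip"]].add(row["dst_port"])
        return {src: len(ports) for src, ports in ports_per_source.items()
                if len(ports) >= PORT_THRESHOLD}

    if __name__ == "__main__":
        for src, count in scan_candidates("connections.csv").items():
            print(f"possible scan: {src} touched {count} distinct ports")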

Effective response, containment and scope determination

In general, once an incident is detected, the most important goal becomes to contain it and determine the impact of the attack. Once we knew about the incident, we acted quickly and were able to limit its impact effectively and in a timely manner.

The most important action that we took in order to limit the impact of the attack was to disable 2FA, rather than disabling login functionality altogether. We did this specifically to prevent people from successfully logging in, while not alerting the attacker to the fact that we knew of the attack. This allowed us to better understand the modus operandi and scope of the attack before taking specific actions to mitigate it, an approach which is standard operating procedure for our CERT team. From that moment on, nobody could log in, effectively preventing traffic to our ClientPortal from being intercepted. Note that this did not directly stop the attack, but it did stop its effectiveness.

The use of full packet capture and CTMp network sensors was crucial in determining the scope of the attack. Within a few hours of finding out about the attack, we could determine exactly who was affected and what the scope of the attack was. This helped us to understand the incident with confidence and to quickly notify those directly affected and the Dutch Data Protection Authority.

Lessons learned and recommendations

Looking back on this incident, we learned the following lessons and have the following recommendations:

  • Choose a DNS provider that doesn’t allow changes through a control panel but requires a more manual process, considering that name servers are very stable and hardly ever change. If you do require more frequent changes, use 2FA.
  • Ensure that all system access passwords are reviewed regularly and changed, even those which are used rarely.
  • Deploy certificate transparency monitoring in order to detect, track and respond to fraudulent certificates (see the sketch after this list).
  • Deploy full packet capture capabilities with good retention in crucial points of your infrastructure, such as the DMZ and border gateways.
  • Always inform Law Enforcement at an early stage, so that they can help you with your investigations, as we did in our case.
  • Make it a management decision to first understand an attack before taking specific actions to mitigate it. This may include letting an attack continue for a short period of time. We consciously made that decision.
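
As an example of the certificate transparency point above, the sketch below polls the public crt.sh log aggregator for certificates issued for a domain, so that an unexpected certificate (such as the fraudulent ClientPortal one) surfaces shortly after issuance. It assumes the requests package and crt.sh's JSON output; the issuer allow list is a placeholder, and a production setup would monitor the CT logs continuously rather than poll.

    # Minimal certificate transparency check against crt.sh (illustrative sketch).
    import requests  # third party package "requests"

    DOMAIN = "fox-it.com"
    KNOWN_ISSUERS = {"C=US, O=Let's Encrypt, CN=R3"}  # placeholder allow list of expected issuers

    def certificates_for(domain):
        resp = requests.get("https://crt.sh/",
                            params={"q": f"%.{domain}", "output": "json"},
                            timeout=30)
        resp.raise_for_status()
        return resp.json()

    for cert in certificates_for(DOMAIN):
        issuer = cert.get("issuer_name", "")
        names = cert.get("name_value", "")
        if issuer not in KNOWN_ISSUERS:
            print(f"unexpected certificate for {names}: issued by {issuer}")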

Looking back in conclusion

While we deeply regret the incident and the shortcomings on our part which contributed to it, we also acknowledge that a number of the measures we had in place enabled us to detect the attack, respond quickly and confidently, and thereby limit the scale and length of the incident. And that’s why the twin mantras in security should always be followed:

  • layered security; and
  • prevention, detection and response.

When ‘if’ becomes ‘when’, it is the combination of these that ultimately determines your overall resilience and cyber security stance.