Aisuru Botnet
Dec 03, 13:03
For several days, parts of our network have experienced occasional packet loss and increased latency, usually lasting a few seconds and sometimes up to a minute. Some days it happens two or three times; other days it does not happen at all. The cause is a series of ongoing DDoS attacks driven by the “Aisuru” botnet, which is pushing many networks worldwide to their capacity limits. Alongside us, many other large cloud and hosting providers are affected, including some with significantly larger networks. Trade press reports we link below estimate the botnet’s capacity at 20-30 Tbps. Because the attacks target constantly changing customers and regularly hit our entire network (“carpet bombing”) at record bandwidths, effective filtering is highly complex.
Your data is not at risk: the attacks aim solely to overload servers and network infrastructure.
Since the first attacks, we have been working almost 24/7 on various countermeasures. We have improved our filtering and detection mechanisms, expanded capacity, and used traffic engineering to spread attack traffic across our upstreams so that no individual uplink is overloaded. We are also working closely with our transit providers on pre-filtering and early detection.
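For those curious why carpet bombing is so hard to filter: a per-destination threshold never fires when the attack is spread thinly across every address in a prefix. The following is a purely illustrative sketch of prefix-level aggregation; the names and thresholds are hypothetical and this is not our actual tooling.

```python
# Illustrative sketch only: why per-host thresholds miss "carpet bombing".
# All names and thresholds here are hypothetical, not dataforest's tooling.
from collections import defaultdict
from ipaddress import ip_network

PER_HOST_LIMIT_MBPS = 500        # hypothetical per-destination alarm threshold
PER_PREFIX_LIMIT_MBPS = 2_000    # hypothetical aggregate threshold per /24

def analyze(flows):
    """flows: iterable of (dst_ip, mbps) samples, e.g. from flow export."""
    per_host = defaultdict(float)
    per_prefix = defaultdict(float)
    for dst, mbps in flows:
        per_host[dst] += mbps
        per_prefix[ip_network(f"{dst}/24", strict=False)] += mbps
    host_alarms = [h for h, v in per_host.items() if v > PER_HOST_LIMIT_MBPS]
    prefix_alarms = [p for p, v in per_prefix.items() if v > PER_PREFIX_LIMIT_MBPS]
    return host_alarms, prefix_alarms

# 256 hosts in one /24 each receive ~100 Mbps: no single host trips the
# per-host limit, yet the prefix as a whole carries ~25 Gbps.
flows = [(f"192.0.2.{i}", 100.0) for i in range(256)]
hosts, prefixes = analyze(flows)
print("per-host alarms:  ", hosts)     # []
print("per-prefix alarms:", prefixes)  # [IPv4Network('192.0.2.0/24')]
```

In practice, filtering also has to avoid dropping the legitimate traffic mixed into such a prefix, which is what makes mitigation at these bandwidths so complex.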
These measures are now taking effect: the vast majority of attacks are fully filtered and no longer cause damage. However, new measures will occasionally have to be deployed, and while they are being rolled out, latency-sensitive applications such as game servers may experience issues. TCP-based applications such as web and mail servers usually see no outages, because they are not sensitive to brief packet loss or latency spikes.
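If you want to check whether your own services are affected, a simple client-side probe is enough to spot the short episodes described above. The sketch below is a hypothetical example, not something we provide; the host, port, and thresholds are placeholders for a server you control.

```python
# Illustrative sketch only: time TCP connects to your own server and report
# stretches of failures or slow connects lasting longer than a few seconds.
import socket
import time

HOST, PORT = "example.com", 443   # placeholder: replace with your own server
SLOW_MS = 500                     # hypothetical "degraded" threshold
EPISODE_S = 5                     # report episodes longer than a few seconds

episode_start = None
while True:                       # run until interrupted
    t0 = time.monotonic()
    try:
        with socket.create_connection((HOST, PORT), timeout=2):
            rtt_ms = (time.monotonic() - t0) * 1000
            degraded = rtt_ms > SLOW_MS
    except OSError:
        degraded = True           # timeout or connection failure

    now = time.monotonic()
    if degraded and episode_start is None:
        episode_start = now
    elif not degraded and episode_start is not None:
        length = now - episode_start
        if length >= EPISODE_S:
            print(f"degraded episode of {length:.1f}s ended at {time.ctime()}")
        episode_start = None

    time.sleep(1)
```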
We expect this situation to continue for at least a few more days until all measures and network expansions are in place. Unfortunately, upgrades in the terabit range and the provisioning of cross-connects and fiber routes come with lead times we cannot circumvent. We are using this “waiting period” to implement other measures, prepare the planned upgrades, and bring new network hardware online ahead of schedule.
Rest assured: we are working continuously to restore the usual dataforest quality, and we thank you for your understanding, your encouraging words, and your loyalty. A detailed blog post about our network expansion was already planned and will now be published a little earlier.
In the coming days, urgent network maintenance may become necessary at short notice. We will keep you updated in this status post, which will remain open until further notice, even for work where we do not expect any (negative) impact. Please contact us early if you notice unexpected issues that last longer than a few seconds.