New report from Barracuda explores emerging traffic trends and live examples of bot behavior and detection.
Barracuda, a trusted partner and leading provider of cloud-enabled security solutions, has released key findings about bad bots and the ways these automated attacks are evolving. The report, titled Bot Attacks: Top Threats and Trends – Insights into the Growing Number of Automated Attacks, explores emerging traffic patterns, live examples of bot behavior and detection, and the steps IT teams should take to protect their businesses.
Over the past few years, automated bot traffic has grown rapidly. Once used primarily by search engines, bots now have a variety of uses – both good and bad. The good bots are primarily search engine crawlers, social network bots, aggregator crawlers, and monitoring bots. These bots obey the website owner’s rules as specified in the robots.txt file, publish methods of validating that they are who they claim to be, and work in a way that avoids overwhelming the websites and applications they visit.
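To make the good-bot contract concrete, here is a minimal sketch of how a well-behaved crawler consults robots.txt before fetching a page, using Python's standard-library parser. The robots.txt content and bot names below are hypothetical examples, not taken from the report.

```python
from urllib.robotparser import RobotFileParser

# A hypothetical robots.txt for an example site.
robots_txt = """
User-agent: *
Disallow: /private/
Crawl-delay: 10

User-agent: BadBot
Disallow: /
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# A well-behaved crawler checks these rules before every fetch.
print(parser.can_fetch("GoodBot", "/public/page.html"))   # allowed by the * rules
print(parser.can_fetch("GoodBot", "/private/data.html"))  # disallowed path
print(parser.can_fetch("BadBot", "/public/page.html"))    # banned user agent
```

Of course, honoring robots.txt is voluntary – the bad bots described below simply ignore it, which is why server-side detection is still needed.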
Bad bots are built to perform various malicious activities. They range from basic scrapers that try to get some data off an application (and are easily blocked) to advanced persistent bots that behave almost like human beings and look to evade detection as much as possible. These bots attempt attacks such as web and price scraping, inventory hoarding, account takeover attacks, distributed denial of service (DDoS) attacks and much more. Bad bots make up a significant part of website traffic today and detecting and blocking them is of critical importance to businesses.
The report looks at current trends, such as the volume of traffic from these bad bots, where bot attacks are originating from, and the time of day attacks are most likely to happen. It also breaks down live examples and covers the steps IT teams can take and technology they should be using to stop these types of attacks.
An in-depth look at bot traffic
Barracuda researchers analyzed traffic patterns over the first six months of 2021. Here are some of the key takeaways from their analysis:
- Bots make up nearly two-thirds of Internet traffic, with bad bots making up nearly 40% of all traffic. These bad bots include both basic web scrapers and attack scripts, as well as advanced persistent bots. These advanced bots try their best to evade standard defenses and perform their malicious activities under the radar. In this Barracuda dataset, the most common of these persistent bots were ones that went after e-commerce applications and login portals.
- E-commerce applications and login portals are the most common targets of advanced persistent bots.
- North America accounts for 67% of bad bot traffic, and most of it originates from public data centers.
- Most bot traffic comes in from the two large public clouds, AWS and Microsoft Azure, in roughly equal measure.
- Just over 22% of bad bot traffic comes from Europe, with European bad bot traffic more likely to come from hosting services or residential IPs.
- Bad bots follow a standard workday, and with good reason: the attackers running these bad bots prefer to hide within the normal human traffic stream to avoid raising alarm bells. The common stereotype of a ‘hacker’ performing attacks late into the night in a dark room with green fonts on a black screen has been replaced by people who set up their bots to carry out automated attacks while they go about their day.
“While some bots like search engine crawlers are good, our research shows that over 60% of bots are dedicated to carrying out malicious activities at scale,” said Nitzan Miron, VP of Product Management, Application Security, Barracuda. “When left unchecked, these bad bots can steal data, affect site performance and even lead to a breach. That’s why it’s critically important to detect and effectively block bot traffic.”
Best practices to protect against bot attacks
When it comes to protecting against newer attacks, such as bots, defenders can be overwhelmed at times due to the number of solutions required. The good news is that solutions are consolidating into WAF/WAF-as-a-Service offerings, also known as Web Application and API Protection (WAAP) services. This consolidation improves both user experience and overall security. A few key steps include:
• Put proper application security in place. Install a web application firewall or WAF-as-a-Service solution, and make sure it is properly configured. This is an important first step to make sure your application security solution is working as intended.
• Invest in bot protection. Make sure the application security solution you choose includes anti-bot protection so it can effectively detect and stop advanced automated attacks.
• Take advantage of machine learning. With a solution that uses the power of machine learning, you can effectively detect and block hidden, almost-human bot attacks. Be sure to turn on credential stuffing protection to prevent account takeover as well.
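To illustrate why simple rules are only a starting point, here is a toy rate-based heuristic of the kind basic bot defenses use. This is a hedged sketch with made-up thresholds, not Barracuda's detection logic; advanced persistent bots deliberately stay under limits like these, which is exactly why the report recommends machine-learning-based detection on top of simple rules.

```python
import time
from collections import defaultdict, deque

# Hypothetical thresholds -- real WAF/WAAP products tune these dynamically.
WINDOW_SECONDS = 10
MAX_REQUESTS = 20  # sustained >2 requests/sec looks automated

# Per-client sliding window of recent request timestamps.
request_log = defaultdict(deque)

def looks_like_bot(client_ip, now=None):
    """Flag a client whose request rate exceeds a human-plausible threshold."""
    now = time.time() if now is None else now
    window = request_log[client_ip]
    window.append(now)
    # Drop timestamps that have fallen out of the sliding window.
    while window and now - window[0] > WINDOW_SECONDS:
        window.popleft()
    return len(window) > MAX_REQUESTS
```

A slow, human-paced scraper never trips this check, while a naive attack script does almost immediately; closing that gap for the near-human bots is where behavioral and ML-based detection comes in.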