Leverage web scraping to protect your brand during the golden quarter


Counterfeit products are a multi-billion-pound online industry. Footwear and clothing carry brand emblems and logos more prominently than almost any other products, making them some of the most heavily copied items in the world.

According to the Organisation for Economic Co-operation and Development (OECD), footwear accounted for 22% of the total value of counterfeit goods seized by customs in 2016, with clothing coming in second at 16%. While many of us have seen these goods sold at street markets all over the world, much of the trade has now shifted online and, according to the OECD, is valued at US$590 billion per year.

Counterfeit goods lower the value of their original counterparts. They are notorious for being made from cheaper materials under weaker quality controls, and they often contribute to wider criminal activity. Brands attempt to combat counterfeits internally by targeting unauthorized traders one by one; besides being difficult, this is a time-consuming and expensive process.


Web scraping is a more efficient solution: it combines sophisticated data-gathering techniques with automation to continuously monitor a brand's presence online.

With the golden quarter now upon us and seasonal shopping events like Black Friday and Christmas fast approaching, it is critical that businesses take action to protect the integrity of their brands. Here is what they can do:

Web scraping: a powerful solution

Web scraping uses “robots”: scripts that crawl the web and extract data from hundreds of websites in seconds. It’s a powerful way to obtain large amounts of raw public information, which experts can then clean up and analyze to extract insights. In the case of counterfeit goods, web scraping can scan the internet to find sites selling fake versions of branded products. Traditionally, this was a very labor-intensive process of physically visiting the markets and warehouses where these products were sold; web scraping retrieves the same information in seconds.
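
As a rough illustration, the sketch below shows what such a “robot” might look like in Python, using the widely available requests and BeautifulSoup libraries. The marketplace URL and the CSS selectors are placeholders invented for this example; a real target site would need its own selectors, rate limits and terms-of-service checks.

```python
# A minimal scraping sketch. The marketplace domain and the
# .listing/.title/.price selectors are assumptions for illustration only.
import requests
from bs4 import BeautifulSoup

def scrape_listings(url: str) -> list[dict]:
    """Fetch one search-results page and return the product listings found on it."""
    response = requests.get(url, timeout=10)
    response.raise_for_status()
    soup = BeautifulSoup(response.text, "html.parser")

    listings = []
    for item in soup.select(".listing"):          # placeholder selector
        listings.append({
            "title": item.select_one(".title").get_text(strip=True),
            "price": item.select_one(".price").get_text(strip=True),
            "url": item.select_one("a")["href"],
        })
    return listings

if __name__ == "__main__":
    for listing in scrape_listings("https://example-marketplace.com/search?q=designer+bags"):
        print(listing)
```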

Here’s a brief summary that outlines how it’s done:

Define website targets and employ proxy solutions

Identifying target websites that hold valuable public intelligence is the essential first step in web scraping. Scraping logic is then built, taking into account each website's handling of HTTP headers and proxies, as well as its HTML layout.

Proxies that leverage AI and machine learning modify data requests to a given website in an “intelligent” way, reducing the complexity of this process. Every website is coded differently, and intelligent tools adapt to changes in site structure, extracting data more efficiently and with a higher success rate.
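
In practice this often means rotating requests through a pool of proxy endpoints and sending browser-like HTTP headers. The sketch below assumes placeholder proxy addresses and a generic header set; a commercial proxy service would supply its own endpoints and credentials.

```python
# A sketch of proxy rotation and header handling. The proxy endpoints and
# header values are placeholders, not real infrastructure.
import itertools
import requests

PROXY_POOL = itertools.cycle([
    "http://proxy1.example.net:8080",   # placeholder endpoints
    "http://proxy2.example.net:8080",
])

HEADERS = {
    # A realistic browser User-Agent and language header reduce the chance
    # of requests being rejected outright.
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64)",
    "Accept-Language": "en-GB,en;q=0.9",
}

def fetch_with_rotation(url: str) -> requests.Response:
    """Send the request through the next proxy in the pool."""
    proxy = next(PROXY_POOL)
    return requests.get(
        url,
        headers=HEADERS,
        proxies={"http": proxy, "https": proxy},
        timeout=15,
    )
```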

Create keywords/search terms

Before scraping begins, it is essential to define the keywords the scraping mechanism will use to find the data for extraction. Since finding out who sells designer replicas is fairly easy, this is mostly a straightforward process.

Keywords that include brand names, item types and specific models are essential to scraping the data required for the following steps. Examples include “Rolex Dive watch”, “Gucci Dionysus bag” or “Ray-Ban Wayfarer”.
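
A scraper typically expands such a keyword list into the search queries it submits to each target site. The short sketch below assumes a placeholder search-URL pattern; real sites each have their own query format.

```python
# A keyword-expansion sketch: brand and model names are combined into search
# URLs. The base URL pattern is a placeholder.
from urllib.parse import quote_plus

BRANDS_AND_MODELS = {
    "Rolex": ["Dive watch"],
    "Gucci": ["Dionysus bag"],
    "Ray-Ban": ["Wayfarer"],
}

def build_search_urls(base_url: str) -> list[str]:
    """Return one search URL per brand/model keyword combination."""
    urls = []
    for brand, models in BRANDS_AND_MODELS.items():
        for model in models:
            keyword = f"{brand} {model}"
            urls.append(f"{base_url}?q={quote_plus(keyword)}")
    return urls

print(build_search_urls("https://example-marketplace.com/search"))
```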

Compile website and counterfeit product information

After the data is scraped, it needs to be sorted, analyzed and organized to determine which counterfeit products are being sold and by whom.
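
One simple, illustrative version of this analysis step: flag listings that come from sellers outside an authorised-retailer list and are priced far below the official retail price. The seller list, prices and 50% threshold below are assumptions; real triage combines many more signals, such as images, descriptions and seller history.

```python
# A triage sketch over already-scraped listings. The authorised-seller list,
# official prices and the 50%-of-retail threshold are illustrative assumptions.
AUTHORISED_SELLERS = {"brand-official-store", "approved-retailer"}
OFFICIAL_PRICES = {"Gucci Dionysus bag": 2100.00}

def flag_suspect_listings(listings: list[dict]) -> list[dict]:
    """Return listings from unauthorised sellers priced far below retail."""
    suspects = []
    for listing in listings:
        rrp = OFFICIAL_PRICES.get(listing["product"])
        unauthorised = listing["seller"] not in AUTHORISED_SELLERS
        too_cheap = rrp is not None and listing["price"] < 0.5 * rrp
        if unauthorised and too_cheap:
            suspects.append(listing)
    return suspects

sample = [{"product": "Gucci Dionysus bag", "seller": "cheap-bags-4u",
           "price": 149.99, "url": "https://example.com/item/123"}]
print(flag_suspect_listings(sample))
```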

Once a list of unauthorized traders is compiled, the next step is to file a DMCA takedown notice against each website selling the counterfeit products, or to use any other applicable legal remedies, to respond effectively to the identified misuse.

Report identified websites to search engines

The final step is to submit removal requests to the largest search engines, such as Google, Bing and Yahoo, asking them to drop the offending sites from their indexes. Once de-indexed, prospective customers are unlikely to find the items unless they have a direct link.

Successful web scraping is easier said than done

While web scraping may seem simple, the actual procedure is not. It is a complex process that requires detailed technical knowledge to perform successfully. Fortunately, the industry has come a long way, and modern tools can crawl the web in real time and extract data efficiently.

In addition to detecting counterfeit items, web scraping is used to obtain other publicly available information: pricing data, competitors' stock and shipping details, consumer sentiment, brand mentions on social media, and much more.

Scrape now for bigger profits during the golden quarter and beyond

The golden quarter is synonymous with retail events such as Black Friday and the build-up to Christmas, as consumers capitalize on potentially big discounts on must-have items.

The high-quality workmanship of branded goods is a big draw for many customers; however, knockoffs are increasingly taking a bite out of this market share.

Manual attempts to sniff out these vendors are costly and ineffective. Web scraping is the most technologically advanced way to root out unauthorized traders, and it can help get their sites removed, ensuring that retailers are able to compete fairly during the busiest trading periods of the year.

Andrius Palionis, VP Enterprise Solutions at Oxylabs