To thoroughly evaluate each antivirus app, we put it through a series of tests at three levels: performance, effectiveness and ease of use. 

Performance testing and system impact

A good antivirus program should be able to run without slowing your computer down. On Windows systems, we first run our proprietary Excel-based benchmark, which sorts through a database of 20,000 names and addresses. We run the test five times and average the results to get a baseline performance level on our Lenovo ThinkPad T470 test system, which runs Windows 10 on a 2.5GHz Core i5-7200U processor with 8GB of RAM and 256GB of solid-state storage.

Next, we time how long it takes to download and install the antivirus program. Before any scanning takes place, we run the benchmark routine again to measure the passive performance hit the program imposes on the computer simply by being installed. The longer the benchmark takes to complete, the greater the performance impact; we use the percentage change from the baseline to compare programs.

While the app runs a full malware scan, we repeat the benchmark to see how much of the system's resources the program consumes during this vital operation, and we do the same for a quick scan if the app offers one.

Separately, we time how long full and quick scans take to complete. This can entail three or more runs, because after a couple of scans many programs learn what to examine and what to ignore, shortening scan times considerably.
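To make the math concrete, here is a minimal sketch (not our actual tooling) of how averaged benchmark times are turned into a comparable slowdown percentage. All timing values below are hypothetical.

```python
def average(times):
    """Average several benchmark runs to smooth out run-to-run noise."""
    return sum(times) / len(times)

def percent_slowdown(baseline, with_av):
    """Percentage increase in benchmark completion time versus baseline.
    A longer time means a larger performance hit."""
    return (with_av - baseline) / baseline * 100

# Five baseline runs on a clean system (seconds, hypothetical)
baseline = average([4.12, 4.08, 4.15, 4.10, 4.11])
# Five runs with the antivirus installed but idle (passive impact)
passive = average([4.40, 4.38, 4.42, 4.39, 4.41])
# Five runs while a full scan is in progress (active impact)
full_scan = average([6.90, 6.85, 6.95, 6.88, 6.92])

print(f"Passive slowdown:   {percent_slowdown(baseline, passive):.1f}%")
print(f"Full-scan slowdown: {percent_slowdown(baseline, full_scan):.1f}%")
```

Because the result is a percentage of the machine's own baseline, it lets us compare programs even if the test hardware changes between review cycles.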

Effectiveness in detecting and removing viruses

With the quantitative testing out of the way, we look at how effective the software is at finding and eradicating infections, using results from the most recent evaluations by AV-Test, AV-Comparatives and SE Labs. By looking at each program’s detection rates and false positives, we get an idea of how protective and reliable each one is. 

Evaluation of additional features

We also try out the app’s major features, including its traditional scanner, behavioral monitoring and cloud analysis. Moving on to the interface design, we check whether the program can run full screen, show notifications and roll back changes made by ransomware. If the program offers detailed settings options, we dig into them.

After trying out included tools and utilities, such as a firewall, password manager, parental controls, file encryption and file shredding, we try to start a scan from Windows Explorer and to schedule scans, if the program allows it.

If the program includes a virtual private network (VPN), we look at which countries it’s available in and any limitations, such as a maximum amount of VPN data per day. We connect to the VPN five times and average how long it takes to get online, as well as the upload and download speeds.

When we’re done, we examine the uninstallation process and get ready for the next antivirus program. For more details on how we rate and review other products, check out the Tom’s Guide How We Test page.
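The VPN figures are summarized the same way as the benchmark runs: five attempts, averaged. A hypothetical sketch (the measurements below are made up for illustration):

```python
from statistics import mean

# One tuple per connection attempt:
# (connect time in seconds, download Mbps, upload Mbps) -- hypothetical data
attempts = [
    (3.2, 88.1, 21.4),
    (2.9, 90.3, 22.0),
    (3.5, 86.7, 20.9),
    (3.1, 89.5, 21.8),
    (3.0, 87.9, 21.2),
]

# zip(*attempts) transposes the rows into three columns, one per metric
connect, down, up = (mean(col) for col in zip(*attempts))
print(f"Avg connect: {connect:.2f}s, down: {down:.1f} Mbps, up: {up:.1f} Mbps")
```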


title: "How We Test Antivirus Software And Apps"
ShowToc: true
date: "2022-12-12"
author: "Michael Ackley"

