Solved

AV-Comparatives and Our Unique Approach



59 replies

  • 1122 replies
  • November 6, 2012
Fascinating, and highly informative thread. Throwing a sharper light onto the inner workings of WSA... 😛

  • New Member
  • 9 replies
  • November 7, 2012
Hi Cat
 
Thank you for the answer.
 
As I understand your answer, Webroot had a low score because the files were not executed, and therefore you could not perform behavior analysis.
 
The 210 FPs were found because these files were new to you.
 
That leads to the question: the 20% of bad files you did not find must also have been new to you, so why were they not flagged as malicious?
 
Best regards
 
Steffen Ernst

  • Community Guide
  • 142 replies
  • November 7, 2012
@ wrote:
That leads to the question: the 20% of bad files you did not find must also have been new to you, so why were they not flagged as malicious?
Hi Steffen.
 
The reason is, unfortunately, due to how the test was conducted. To better understand how this should work, I'll quote from a blog post by Webroot regarding the June real-world test where there were apparently 68 misses:
After looking at the 68 misses from June’s AV-Comparatives test, we found 65 of the samples had been classified within a few hours of their test, with the remaining three being classified minutes after receiving the samples. Of the 68 misses, 34 of the files were seen for the very first time during the test; none of our users were ever affected by them and we had never encountered those components across our entire user base. The other 932 samples were blocked automatically during the test.
Some people might then worry that they're still not protected until those files are classified. The same blog post goes on to say:
So this begs the question, how did WSA protect these infected endpoints while the infections were still unknown to the cloud user base? There are two pieces to this puzzle. The first piece focuses on ensuring WSA is able to reverse all system changes made by a new unknown file and to prevent any irreversible changes from taking place.
For example, if a newly discovered program makes file system, disk, registry, or memory changes, these are recorded and analyzed in real time. WSA then checks frequently with the cloud while the program runs to see if an updated classification is available for the unknown files on a system. During this period, the program is able to change the system, but it is under a transparent sandbox where all of the changes taking place are not only being analyzed for behavior correlation, but are also being recorded to see the before-and-after view of every modification to the system.
If at any point the cloud comes back and indicates a file is malicious, WSA will automatically remove the infection and restore the system perfectly to a pre-infection state.
Source for the quotes: http://blog.webroot.com/2012/07/19/webroot-bulletin-regarding-av-comparatives-results/
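To make the journaling-and-rollback idea above concrete, here is a minimal sketch in Python. This is not Webroot's actual implementation; all names (`Journal`, `record_write`, `rollback`) are invented for illustration, and a real system would also journal registry, memory, and disk changes, not just files:

```python
import os
import shutil
import tempfile

class Journal:
    """Record every file change an unknown program makes so it can be undone."""

    def __init__(self):
        self.entries = []  # (path, backup) in the order the changes happened

    def record_write(self, path):
        # Called just before the monitored program writes to `path`.
        # Snapshot the old contents; None means the file is being created.
        backup = None
        if os.path.exists(path):
            fd, backup = tempfile.mkstemp()
            os.close(fd)
            shutil.copy2(path, backup)
        self.entries.append((path, backup))

    def rollback(self):
        # Undo the changes newest-first to restore the pre-infection state.
        for path, backup in reversed(self.entries):
            if backup is None:
                if os.path.exists(path):
                    os.remove(path)          # the program created it: delete it
            else:
                shutil.copy2(backup, path)   # the program modified it: restore it
                os.remove(backup)
        self.entries.clear()

# Demo: an "unknown program" overwrites one file and drops another.
journal = Journal()
with open("config.txt", "w") as f:
    f.write("original settings")

journal.record_write("config.txt")   # about to be modified
journal.record_write("dropper.exe")  # about to be created
with open("config.txt", "w") as f:
    f.write("hijacked settings")
with open("dropper.exe", "w") as f:
    f.write("payload")

# The cloud later classifies the program as malicious:
journal.rollback()
print(open("config.txt").read())      # -> original settings
print(os.path.exists("dropper.exe"))  # -> False
```

The key property is the one the blog post names: every recorded change can be replayed in reverse, so a malicious verdict arriving minutes later still restores the pre-infection state.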

Kit
  • Retired Webrooter
  • 371 replies
  • November 7, 2012
That also touches on what I posted in this thread directly. 
 
Current AVs and the tests work 100% on a basis of "detect immediately or fail forever".  It's a very distinct line.  If the current AVs don't detect the threat immediately, they don't even have the possibility of doing so until definitions are created for it, which is a time-consuming and involved process.  Therefore the tests count it as a fail if it's not detected immediately, because when it isn't, the computer becomes infected and the infection has full control.  For example, it can clear a cookie, bring up a website, and capture your password when you log in.
 
Webroot works based on expanding a safe zone beyond that line.   Anything that is not detected immediately does NOT have full control and can't do anything malicious without being detected as doing so.  If it tried to bring up a website and capture the password when you logged in, it would see absolutely nothing when you type your password and be unable to capture it. 
 
The test asks "Did the threat get detected immediately?", and if the answer is "No", it applies the old AV assumption that bad things could not have been prevented from happening in that case.  The test is testing for whether bad things will happen, so that threat got by and the AV fails on it.
 
Since Webroot keeps bad things from happening while the threat is undetected, suddenly the assumption that bad things will happen if it's not detected immediately is no longer accurate.  A threat that is not detected immediately will not do bad things at all, yet Webroot is failed on the threat because it was not detected immediately.  Since the test wants to figure out if bad things will happen or not, and it claimed they would when they won't, suddenly the test is wrong and needs to be revisited.
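As a rough illustration of that "safe zone", here is a toy Python sketch. The trust states, verdict strings, and function names are purely illustrative, not WSA's actual code: the unknown process keeps running, but it reads nothing sensitive, and everything it did stays reversible until the cloud rules on it.

```python
import time

def keystrokes_for(pid, trust, real_keys):
    """An untrusted process sees absolutely nothing when you type."""
    return real_keys if trust.get(pid) == "good" else ""

def run_until_classified(pid, trust, ask_cloud, rollback, poll_seconds=1.0):
    """Let an unknown process run inside the safe zone until the cloud decides."""
    while trust.get(pid) == "unknown":
        time.sleep(poll_seconds)
        trust[pid] = ask_cloud(pid)  # frequent check for an updated verdict
    if trust[pid] == "bad":
        rollback()                   # undo every recorded change it made

# Demo with a stub cloud that returns a verdict on the second poll.
verdicts = iter(["unknown", "bad"])
trust = {1234: "unknown"}
run_until_classified(
    pid=1234,
    trust=trust,
    ask_cloud=lambda pid: next(verdicts),
    rollback=lambda: print("system restored to pre-infection state"),
    poll_seconds=0.01,
)
print(repr(keystrokes_for(1234, trust, "hunter2")))  # -> '' (untrusted sees nothing)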
 
The 210 FPs were found because these files were new to you.
That leads to the question: the 20% of bad files you did not find must also have been new to you, so why were they not flagged as malicious?
New does not mean bad.  The FPs were not because they were new.  The FPs on things that were new were because they were modified versions of otherwise-legitimate software which we recognized as "This is NOT what this file should look like, so that has to be wrong."

The threats that were new, by comparison, don't look like legitimate software that has been changed to not be itself, so they have to be evaluated from scratch.  That generally takes a few minutes, but as stated above, the threat can't do anything during that time frame.  In the tests, the threat definitely doesn't do anything, because frequently it's not even run.  The fact that we can determine something is a threat within minutes or hours, with nothing but scan data from the endpoint computer, is pretty good stuff.
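A rough Python sketch of the two different signals described above. The fingerprints, prevalence counts, and threshold are invented for illustration; a real system would use cryptographic hashes and far richer cloud data:

```python
# Hypothetical cloud data: community prevalence per file fingerprint, and the
# known-good fingerprints of popular legitimate programs.
SEEN_COUNT = {"fp-notepad": 2_400_000}
KNOWN_GOOD = {"notepad.exe": "fp-notepad"}

def classify(filename, fingerprint, popularity_threshold=100):
    if KNOWN_GOOD.get(filename) == fingerprint:
        return "good"  # byte-for-byte the legitimate copy
    if filename in KNOWN_GOOD:
        # "This is NOT what this file should look like, so that has to be wrong."
        return "bad: modified copy of legitimate software"
    if SEEN_COUNT.get(fingerprint, 0) < popularity_threshold:
        # Genuinely new and unlike anything known: evaluate it from scratch,
        # keeping it in the safe zone meanwhile.
        return "unknown: sandbox and evaluate"
    return "good"

print(classify("notepad.exe", "fp-notepad"))   # good
print(classify("notepad.exe", "fp-tampered"))  # bad: modified copy ...
print(classify("dropper.exe", "fp-new"))       # unknown: sandbox and evaluate
```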
 

  • Community Guide
  • 142 replies
  • November 7, 2012
@ wrote:
If the current AVs don't detect the threat immediately, they don't even have the possibility of doing so until definitions are created for it, which is a time-consuming and involved process.
The one thing I will add here is that while testing organisations are checking whether samples are detected by the current definitions at the time of the test, it has to be remembered that many of today's AVs, including Webroot, employ other techniques to recognise unknown, suspicious files, such as heuristics, behaviour analysis, sandboxing et al. Put together, it's possible some samples will be detected in a generic way without the need for signatures for them. Obviously some products do it better than others. 🙂

  • New Member
  • 9 replies
  • November 8, 2012
Hi Tony
How can we be sure that a bad program is not able to disable one or more functions of WSA?
Best regards
Steffen

  • New Member
  • 9 replies
  • November 8, 2012
Hi Kit
What happened with the 210 FPs? Were they isolated so that they were not able to function? If that happens every time FPs are found, you will end up with a computer which cannot execute legitimate programs. Best regards, Steffen

  • Community Guide
  • 142 replies
  • November 8, 2012
@ wrote:
How can we be sure that a bad program is not able to disable one or more functions of WSA?
WSA has very strong self-protection mechanisms in place which help prevent malicious software from modifying its program settings and processes. See: https://detail.webrootanywhere.com/agenthelp.asp?n=Setting_self_protection
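Conceptually (this is a made-up sketch, not how WSA is actually coded), self-protection is a gate in front of the product's own files, settings, and processes that only trusted, signed code can pass:

```python
PROTECTED = {"wsa_settings.db", "wsa_shield_service"}

def allow_modification(requester_signature, target, trusted_signatures):
    """Toy self-protection gate: deny untrusted writes to the product itself."""
    if target in PROTECTED and requester_signature not in trusted_signatures:
        return False  # malware cannot switch the shields off
    return True

trusted = {"vendor-signed"}
print(allow_modification("malware", "wsa_settings.db", trusted))        # False
print(allow_modification("vendor-signed", "wsa_settings.db", trusted))  # True
print(allow_modification("malware", "user_document.txt", trusted))      # True
```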
 

Kit
  • Retired Webrooter
  • 371 replies
  • November 8, 2012
@ wrote:
Hi Kit
What happened with the 210 FPs? Were they isolated so that they were not able to function? If that happens every time FPs are found, you will end up with a computer which cannot execute legitimate programs. Best regards, Steffen
Yes, of course it will isolate the files it finds to be threats, but...
 
1: If a program is unique to a given computer, has never been seen before on anything else at all, and is not even a legitimate copy of a program, how can removing it disable the computer?  Legitimate programs are neither unique to the computer nor modified into illegitimate copies.
2: Take a look at all of the testing reports, as well as general information across the web about WSA, and you'll note that massive sets of FPs have only occurred in tests, and even then not in all or even most tests, which once again brings into question whether the tests match reality.  Trust me, if we quarantine something and it disables the computer, we will hear from the customer immediately, and it results in an internal email to a group that I am a part of.  That list is silent.

  • New Member
  • 9 replies
  • November 8, 2012
Hi Kit
 
I must assume that a legitimate file on a computer is needed somehow.
 
So if WSA isolates a legitimate file so that it cannot function anymore, the computer cannot perform as well as before.
 
Many registry cleaners delete files that are needed, and you get problems with programs that do not function well or do not function at all.
 
How can you be sure that a file "has never been seen before on anything else at all"?
 
If isolating legitimate files did not do any harm to the computer, it would not make any sense to do this testing.

  • Community Guide
  • 142 replies
  • November 8, 2012
@ wrote:
How can you be sure that a file "has never been seen before on anything else at all"?
This has to be in relation to others using Webroot's cloud as well as you. If no one else has that file, and it's not yet classified on Webroot's servers, it hasn't been seen before. By analysis using a combination of techniques, such as behaviours which are being monitored all the time, that 'unknown' file gets classified, and then everyone else connected to Webroot's cloud benefits. If someone else then gets the same file, it's already been determined for them. This can take place within minutes, without the user realising.
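In code, the "classify once, everyone benefits" idea might look like this toy Python model. The class and method names are invented for illustration; the real service is vastly more sophisticated:

```python
class Cloud:
    """Toy model of a shared classification service."""

    def __init__(self):
        self.verdicts = {}  # file hash -> "good" / "bad"
        self.seen_on = {}   # file hash -> set of endpoint ids that have it

    def lookup(self, endpoint_id, file_hash, analyze):
        self.seen_on.setdefault(file_hash, set()).add(endpoint_id)
        if file_hash not in self.verdicts:
            # First sighting anywhere: classify once from submitted behaviour
            # and scan data (this is the step that can take minutes).
            self.verdicts[file_hash] = analyze(file_hash)
        # Everyone connected afterwards gets the answer immediately.
        return self.verdicts[file_hash]

cloud = Cloud()
print(cloud.lookup("pc-1", "h123", lambda h: "bad"))  # first sighting: analysed
print(cloud.lookup("pc-2", "h123", lambda h: "bad"))  # instant, no re-analysis
print(len(cloud.seen_on["h123"]))                     # 2 machines have seen it
```

This is also why the seen-count exists as a byproduct: the cloud necessarily knows exactly how many machines have ever presented a given file.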
 

Kit
  • Retired Webrooter
  • 371 replies
  • November 8, 2012
@ wrote:
Hi Kit
 
I must assume that a legitimate file on a computer is needed somehow.
 
So if WSA isolates a legitimate file so that it cannot function anymore, the computer cannot perform as well as before.
 
Many registry cleaners delete files that are needed, and you get problems with programs that do not function well or do not function at all.
 
How can you be sure that a file "has never been seen before on anything else at all"?
 
If isolating legitimate files did not do any harm to the computer, it would not make any sense to do this testing.
Good news (or bad news, depending on how you look at it): 
The assumption is incorrect. 
 
There are tens of thousands of legitimate files on a computer that can be removed without causing any harm to the system.  Will there be zero impact?  No, because technically if you needed the function of that file, you would not have it, but there are lots of files that are exceptionally rarely used.
 
The next bit of fun is that they are technically not legitimate files if they are modified and individual.  They are illegitimately-modified versions of legitimate files.  If you downloaded a program to do something on your computer for a purpose, and a virus modified it to do something else, you wouldn't want to run it.
 
I just looked slightly deeper into the test (since I hadn't before and was going on generic information).  After deeper examination, chances are that the FPs were not quarantined.  See below.
 
The testing documents indicate that all AV programs are set to maximum settings.  This means maximum heuristics, maximum shields, maximum everything.  So our FPs were almost exclusively malware.gen, trojan.gen, and other heuristic detections on things that were "Under 100 users estimated" and highly transient files.  That sits in line with the popularity heuristics, which at maximum only allow programs that have been seen by a very large percentage of the SecureAnywhere Community.  So every single thing that has NOT been seen by a lot of people is going to be warned about.
 
This is important too.  "Warned about" will fail both an FP test and a detection test.  In a detection test, the threat is not explicitly blocked, just warned about, so it fails.  In an FP test, it is considered a fail because the product said anything at all, even if it doesn't take the file off the system.
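Spelled out as a tiny (hypothetical) scoring function, a warn-only response loses both ways under that methodology:

```python
def test_outcome(sample_is_malicious, av_response):
    """How 'block' / 'warn' / 'allow' responses score under the methodology."""
    if sample_is_malicious:
        # Detection test: only an explicit block passes.
        return "pass" if av_response == "block" else "fail"
    # FP test: saying anything at all about a clean file counts as an FP,
    # even if the file is never removed from the system.
    return "pass" if av_response == "allow" else "fail"

print(test_outcome(True, "warn"))   # fail: warned, not explicitly blocked
print(test_outcome(False, "warn"))  # fail: counted as a false positive
```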
 
And Tony is correct.  Our cloud system keeps track of exactly how many computers a given file has been seen on.
 
It's more of the wonderful part about how testing doesn't reflect the real world.  In the real world, the number of users who have every single setting maxed out is a fraction of a fraction of a percent.  The number of that group who then run a manual scan on files like the test does is also a fraction of a fraction of a percent.
 
So technically, yes, out of millions of Webroot users, about THIRTEEN might have a major rash of FPs like that, but twelve of those are testing organizations who don't use normal settings.  And our program does warn the user about such things as well.
 
The lesson:  If you do something that the program advises against and that maybe one in five million people do, and you then take the warnings as "definitely bad" for good programs and "not bad" for bad programs, there will be problems just like the test shows.
 
---
Bonus Tidbit:
I was sitting here for a while trying to figure out how the heck they managed to get the results they did on the FP test before I started writing this.  No real user has ever seen anything even vaguely close to that kind of result.  So I went onto a VM and set everything to the maximum, 100% highest settings possible everywhere, using the OLD version of the program from the test period.  Bingo: warnings about the extremely obscure items.  Which explains why we have never had a user encounter this kind of thing.  Users -can- set it that way, but only the testing people are silly enough to not read the warnings and do it anyway.
 
Edit:  Clarified that I had to use the older version of the program, as this issue was fixed as well about a week after the tests.

  • New Member
  • 9 replies
  • November 8, 2012
Kit wrote:
 
"There are tens of thousands of legitimate files on a computer that can be removed without causing any harm to the system.  Will there be zero impact?  No, because technically if you needed the function of that file, you would not have it, but there are lots of files that are exceptionally rarely used."
 
That is a very poor excuse for isolating legitimate files. No one can guarantee that the isolated file was not needed for a program to function well. Many computer owners have had problems because their anti-malware program isolated legitimate files.
 
If the setting of 100% maximum security is a very silly thing to do, then WSA should remove that setting.

  • New Member
  • 9 replies
  • November 8, 2012
Thank you for a good answer.

Kit
  • Retired Webrooter
  • 371 replies
  • November 9, 2012
motzmotz wrote:
No one can guarantee that the isolated file was not needed for a program to function well. Many computer owners have had problems because their anti-malware program isolated legitimate files.
If the setting of 100% maximum security is a very silly thing to do, then WSA should remove that setting.
Not an excuse at all.  That's pointing out the flaw in your assumption.
 
No security program can guarantee a complete lack of false positives either.  It's not physically possible on an ongoing basis.  The idea is to reduce the number of them as far as possible.  That being said, again, the high rate of FPs was caused explicitly by the testing.  No commonly-used, legitimate file will ever trigger that in reality, even with the super-high settings.  The tests had to quite explicitly have both super-high settings -and- find files that no Webroot user has ever had on their computer before -and- that have weird code in the entry point.
 
Also, "very silly" for the average user is not necessarily very silly for a power user who wants to evaluate everything and anything that may be found, thus the option to use it still exists.  Please remember that in the test, simply popping up a message saying effectively "Caution, this file has not been seen too much before, so it might pose a threat of being a generic malware, but you may allow it if you trust it." (In different words that I don't have access to at the moment) will cause it to fail the test.
 
The summary, though, again is that what happens in the test doesn't happen in real life.  We have a tremendous number of power users, and we'd have a huge stink and a flood of people contacting support if the average user found 210 FPs.

  • New Member
  • 9 replies
  • November 9, 2012
If the high rate of FPs was caused explicitly by the testing, then the other anti-malware programs in the test should have had very many FPs too. But WSA was the only one with such a high number of FPs?

Kit
  • Retired Webrooter
  • 371 replies
  • November 9, 2012
Not correct, but for a sad reason.  As even Malwarebytes has observed, almost all AV products these days focus more on getting good scores in the testing than on working better in real-life situations.  The methodology also differs.  Other AV programs don't even have the capability to determine that something is odd based on how "popular" the file being scanned is, so they could never even give the warning alerts that resulted in Webroot being accused of FPs.  The very few that do have that capability will generally have "testbed failsafes" installed, so that if they see more than, say, five of those types of alerts in one scan, they will assume they are on a test machine and stop giving alerts altogether.
 
Webroot is focused on the end user.  We will not compromise the security of a real, live user just to get better scores on a test.  If the test fails to take our mode of operation into account, then we will definitely open contact with them to discuss the issue.  But we will not make changes to pass a fake test that would negatively impact real users.
 
We already have made changes that help in tests without negatively impacting normal use by real users, but those changes do not help real users very much unfortunately.
 
Now the question for you:
Why are you so focused on the test, rather than reality?

superssjdan
Community Leader
  • Community Leader
  • 348 replies
  • November 9, 2012
Congrats to Webroot on the latest Real-World Protection Test from AV-Comparatives. The chart is HERE, revised yesterday. An effective block rate (including user dependence) of 98.5%! Way to go!

  • New Member
  • 9 replies
  • November 10, 2012
Kit wrote: Why are you so focused on the test, rather than reality?
 
I am just an average consumer. I can't test these anti-malware programs myself. If I drive a car, I have a feeling for how the car is functioning. When running an anti-malware program, you can only feel how much it is slowing down the computer. I can't see how the program is doing its job. So I have to rely on these tests of anti-malware programs. And in this test it is very significant that WSA had a very high FP count and a very low malware detection rate. So I am curious to find out why. For me it is not a valid argument that xxx thousands of users are very happy with the program.
You have told me how it could happen that WSA had a low score, but again I just have to take your word for it. I can't measure or verify your statements.

  • New Voice
  • 20 replies
  • November 10, 2012
Hi motzmotz
 
I'm just an average consumer like you and I understand your point of view. I would like to share with you my thoughts about Webroot and the AV test labs. They are just personal opinions:
 
- When I saw the first test of Webroot from AV-Comparatives I was really concerned, so I started to look for other independent lab tests like this http://www.av-test.org/en/home/ and this http://www.westcoastlabs.org/. In both, Webroot passed the tests and got the certification. If we focus on AV-Test, it shows a nice overall performance during the last year; the only consistency with AV-Comparatives is regarding the FP issue, BUT the rate is lower and, at the same time, the FP rate changes every month. For instance, it was zero during the August test.
 
- Personally I like AV-Test because it also shows the industry average.
 
- From my point of view AV products must be tested; it's the only way to get a standard comparison. BUT AV test labs MUST also be compared, in order to get a standard comparison too.
 
- Since I'm seeing a kind of discrepancy between the labs' results, I have to rely on my own judgment, while considering the facts stated in the results from the testing labs.
 
- From my personal experience, I only got 5 FPs while performing a custom scan of my entire machine. This only happened with the first release of Webroot 8, one year ago. These files were not quarantined, because I told the program not to when it asked me. My machine never became useless or frozen.
 
- I have only had one major problem with Webroot 7 (not Webroot 8). My PC took ages to boot up. A technician from Webroot used a CITRIX connection to test my machine from Colorado (I live in Spain) FOR FREE. In the end it was just a bug in that particular version of Webroot 7, and it was fixed with a quick release of a new version.
 
This has been my personal consumer experience. As I mentioned before, AV testing labs are not consistent about Webroot, so if I were a potential new customer for Webroot I would use a trial version of the program (and of programs from other vendors) to make up my mind.
 
So far I'm happy with Webroot and appreciate that they don't hide these AV test results. As long as they keep explaining the situation and I don't see any inconsistency, I'll keep Webroot on my machine.
 
Best regards

remixedcat
Community Leader
  • Community Leader
  • 627 replies
  • November 20, 2012
Thanks for the clarification CatB! Most appreciated. 

Cat
  • Author
  • Retired Webrooter
  • 241 replies
  • December 3, 2012
Hi everyone,
 
We have an update regarding our future with AV-Comparatives. 
 
See this thread by Mike Malloy to view and comment on the joint statement from both companies.
 
We'll be sure to update the Webroot Community with any new tests that are developed. Stay tuned!

RetiredTripleHelix
Gold VIP
Thanks, Cat. I could see this coming, and I hope AV-C does come up with a testing procedure to properly test WSA in the near future! ;)
 
TH

remixedcat
Community Leader
  • Community Leader
  • 627 replies
  • December 3, 2012
Thank you for re-opening this thread, and also thank you for working with AV-Comparatives on this. However, this is going to take a LOT of damage control to build confidence again. All the best.

RetiredTripleHelix
Gold VIP
@ wrote:
Thank you for re-opening this thread, and also thank you for working with AV-Comparatives on this. However, this is going to take a LOT of damage control to build confidence again. All the best.
I don't think so. We are well protected and we know it; you just have to understand how Webroot SecureAnywhere works, and there are many posts in this forum to read on how it does.
 
TH
