Solved

AV-Comparatives and Our Unique Approach



59 replies

RetiredTripleHelix
Gold VIP
AV-C has a great report: the Whole Product Dynamic "Real-World" Protection Test results bar graph for November. Great job, Webroot team! ;)
 
TH

remixedcat
Community Leader
  • 627 replies
  • December 7, 2012
 
What about the other "user dependent" ones?

RetiredTripleHelix
Gold VIP
As a user I can compare the two: I tested Panda Cloud, and it does download a large definition file locally, so they are not the same; it's still like comparing apples to oranges. The link I posted above shows that Panda doesn't need any user intervention (no yellow portion of the graph), and that's all I can go by. Also, as this is the Webroot Community Support Forum, I don't expect a Webroot staff member to reply and put down the competition, as that would be unprofessional IMHO.
 
TH

RetiredTripleHelix
Gold VIP
@ wrote:
 
What about the other "user dependent" ones?
Since you changed your question from Panda to this: the "user dependent" pop-ups default to Block, with Allow Once & Allow as the other options. Neither I nor the users I know have ever seen this pop-up window, and the last I heard (but don't quote me) future builds will be improved with fewer "user dependent" pop-ups. ;)
 
TH

remixedcat
Community Leader
  • 627 replies
  • December 10, 2012
great. thank you!!!!!

  • New Voice
  • 9 replies
  • December 31, 2012
I read the whole thread. Still puzzled. I understand that a lower test detection rate does not imply that a product offers low protection in the case of Webroot.
 
What I don't understand is why Webroot does not infect (they are not detected at first, right?) a test PC with ALL of the samples missed in a test, one by one, and report how many of those infections were detected, at what time, and what damage could be repaired.
 
I saw the interesting screen-capture movie of what happens if Webroot misses a virus. That certainly points in the right (convincing) direction and gives great insight. I would be more convinced, though, by seeing, just for once, a report on the follow-up/runtime detection of the 20% 'missed' at first.

JimM
  • Retired Webrooter
  • 1581 replies
  • December 31, 2012
The 20% being referenced is from a zoo test: a kind of test that dumps a bunch of malware into a directory, which is then scanned. Old definitions-based software is good at this kind of scan because that was the primary method by which such software worked: one big, long scan. That kind of test is different from running each of those files individually as a real-world test, which is what WSA is best at. WSA's detection routines key on the imprint of a real infection, i.e., what a file actually does when run. For this reason, Webroot did very well on the real-world portion of the test.
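The distinction above can be sketched in a few lines of Python. This is a deliberately toy contrast, not Webroot's (or anyone's) actual engine: the hashes, file contents, and behaviour names are all invented for illustration.

```python
# Toy contrast: "zoo" scan (match files against known definitions) versus
# real-world behavioural detection (judge a file by what it does when run).
# All data and action names below are invented for illustration only.
import hashlib

KNOWN_BAD_HASHES = {hashlib.sha256(b"dropper payload").hexdigest()}

def zoo_scan(files: dict) -> list:
    """Definitions-style test: hash every file in a directory dump."""
    return [name for name, data in files.items()
            if hashlib.sha256(data).hexdigest() in KNOWN_BAD_HASHES]

SUSPICIOUS_ACTIONS = {"write_autorun_key", "inject_into_process"}

def behavioural_verdict(journal: list) -> bool:
    """Real-world-style test: flag a file by the actions it performs."""
    return any(action in SUSPICIOUS_ACTIONS for action in journal)

files = {"dropper.exe": b"dropper payload",
         "repacked.exe": b"same payload, new packer"}
print(zoo_scan(files))            # the repacked sample evades the hash list
print(behavioural_verdict(["write_autorun_key", "connect_c2"]))
```

The point of the sketch: a repacked sample slips past a pure definitions scan because its hash changes, while a behavioural check still fires because the actions at runtime stay the same.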

When testing samples are brought to our attention, those files are classified within minutes. The question you may be getting at, though, is "How long does it take for a missed file to be rolled back?" That would be an interesting thing for a third-party tester to monitor in order to test efficacy and responsiveness.

It can vary, but generally speaking, it's very quick. Our threat researchers have many ways to seek out and classify malware. In the worst-case scenario, if a WSA user were to be infected and contacted support, we would then have the threat right in front of us. Our threat researchers can then deal with that threat globally on every WSA-protected system on which it is present and trigger the rollback. Historically, this happens very, very fast.
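The journal-and-roll-back idea described above can be modelled very roughly: while a file is unclassified, record every change it makes so those changes can be undone if the verdict later comes back bad. This is a toy model with invented names, using a dict as a stand-in for the filesystem; it is not Webroot's actual implementation.

```python
# Toy journal-and-rollback model: record each change an unclassified
# program makes, then undo them all if it is later classified as bad.
# Class and field names are invented for illustration only.

class Journal:
    def __init__(self):
        self._changes = []  # list of (path, previous_contents)

    def record(self, fs: dict, path: str, new_contents: str) -> None:
        # Remember the old state (None if the file did not exist yet),
        # then apply the change.
        self._changes.append((path, fs.get(path)))
        fs[path] = new_contents

    def rollback(self, fs: dict) -> None:
        # Undo in reverse order; None means "delete the created file".
        for path, old in reversed(self._changes):
            if old is None:
                fs.pop(path, None)
            else:
                fs[path] = old
        self._changes.clear()

fs = {"hosts": "127.0.0.1 localhost"}
journal = Journal()
journal.record(fs, "hosts", "127.0.0.1 evil-redirect")  # monitored change
journal.record(fs, "autorun.inf", "run=malware.exe")    # monitored change

# Cloud verdict arrives ("bad"): undo everything the program did.
journal.rollback(fs)
print(fs)  # {'hosts': '127.0.0.1 localhost'}
```

Undoing in reverse order matters: if the same path were changed twice, replaying the oldest state last restores the true pre-infection contents.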

  • New Voice
  • 9 replies
  • January 10, 2013
One other plea for proving the different approach by testing at the user level.

Once WR's different approach discovers malware, apparently it can roll back the possible damage. I hope that after that discovery the malware is identified and added to a DB.
So some of the advantages/differences are a) the way of discovery and b) the rollback. However, one of the similarities is that DB. If that DB scores lower in tests, is it because it is somewhat lagging behind?

About a week ago many commented on the Imperva study. What surprised me most is that the 'old definitions-based software' industry responded very similarly to WR's different approach. Somewhat like: definitions are fine, but we do much more, and that 'more' has not been tested by Imperva.

So protection by discovery seems to be changing over there too. And roughly, isn't rollback then the main remaining difference? If so, I'd still like more proof about the rollback. And perhaps I will test that on an old PC, now that my new subscription is valid for 5 licenses. Great!

JimM
  • Retired Webrooter
  • 1581 replies
  • January 10, 2013
@ wrote:
Once WR's different approach discovers malware, apparently it can roll back the possible damage. I hope that after that discovery the malware is identified and added to a DB.
 
About a week ago many commented on the Imperva study. What surprised me most is that the 'old definitions-based software' industry responded very similarly to WR's different approach. Somewhat like: definitions are fine, but we do much more, and that 'more' has not been tested by Imperva.
 
And perhaps I will test that on an old PC, now that my new subscription is valid for 5 licenses. Great!
Unless the discovery was due purely to local heuristics, the cloud database would already have the malware entry prior to the rollback. The rollback is triggered by the classification in the database once that occurs.
 
The study in this case looks like it utilized VirusTotal exclusively. I'd say the complaints are justly made, because testing in that manner only tests part of the software instead of the whole thing. As the article notes, VirusTotal themselves say their service should not be used to perform antivirus comparative analyses. If standard testing has a long way to go before it accurately mirrors the real world, testing purely through VirusTotal is like turning around and running the other direction.
 
Regarding the mention of private end-user testing, please see the Community Guidelines section "No Private Testing Discussions."
The whole point of antivirus software is to not get infected, and unfortunately, when somebody sets a bad example, there will always be others who are influenced into following the same path. It's not something we want to encourage.