Update April 28, 11:45 a.m. MDT: 

 

Please click here to see the most recent update.

 

 

UPDATE 4/28/17 11:45 a.m. MDT: We have no calls in queue on our phone line and are working through about 80 tickets related to the False Positive repair utility. A good portion of those are simply awaiting customer verification.

 

Please note, the utility was built to address only this specific false positive issue. It will be deactivated in the future. 

 

If applications are operating normally on your systems, you do not need to implement the utility. 

 

If you haven't yet submitted a support ticket and you need the repair utility, please do so here, and include your phone number with the ticket.

 

Thank you.

 
Apologies for the frustration @ - we do want to continue the discussion here; we just want to have a place where only official updates are posted, for those who do not want to see all comments. Please continue posting your comments and questions here, as our team is continuing to monitor this thread. Thanks!
Thank you for responding to point 1.

 

Can you comment on point 2 for me and all of the users/partners who are affected?

 

Has the fix been pushed out? If not, when will the global fix be pushed out?

 

I would ask other important questions, but at the risk of questions being selectively answered, I'll limit it to one at a time.
@ - please know that all hands are on deck with this issue, and the team is working through responses to your questions. I do not have the answers to your other questions yet, but they will be addressed as soon as possible.
@

 

 [edited by community moderator] Regardless, all those unanswered questions are quite valid, and for Webroot to not have an answer after 24 hours is quite disheartening.  [edited by community moderator]

Anyway, this issue has done great damage at our clients, and your fix (automatic or manual) has still left some applications in an inoperable state, forcing us to either repair or reinstall the software.

As MSPs, we ask Webroot to be more up front and to communicate with its partners.

 

Lastly, Webroot's update about not deleting files from quarantine is quite exasperating. Do you believe having to wait over 24 hours for a resolution is an acceptable path? We had to do whatever we could to get our clients back in business.  [edited by community moderator]

 
A few notes for people who are working to resolve the issue on a local level.

 

We are still seeing that attempts to restore quarantined files from the cloud are not working. We are using the 'Unmanaged' profile to access locally quarantined files.

 

When applying the 'Unmanaged' profile, you may use WRSA.exe -poll to immediately enforce the change from the local machine. For our cloud instance, that is working quickly. I suspect that the cloud instances for some of the larger MSPs here are under greater load (at the risk of understating the issue).
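
In case it helps anyone pushing this change to several machines at once, here is a minimal PowerShell sketch of that local refresh step. The WRSA.exe locations below are assumptions based on a default install; adjust them for your environment.

# Minimal sketch: force the agent to poll the console so the 'Unmanaged'
# policy change is picked up immediately. Paths are assumed defaults.
$wrsaCandidates = @(
    "$env:ProgramFiles\Webroot\WRSA.exe",
    "${env:ProgramFiles(x86)}\Webroot\WRSA.exe"
)
$wrsa = $wrsaCandidates | Where-Object { Test-Path $_ } | Select-Object -First 1
if ($wrsa) {
    # Same as running "WRSA.exe -poll" from an elevated prompt.
    Start-Process -FilePath $wrsa -ArgumentList '-poll'
} else {
    Write-Warning 'WRSA.exe not found in the expected locations.'
}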

 

If, for some reason, you cannot access quarantined files, they will sometimes be in C:\Quarantine, as the restore command issued earlier this week was unable to restore them to the prior location. You MAY have the option to use 'previous versions' of a folder (i.e. Windows Shadow Storage) to pull your files out of the nether.
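
For anyone who prefers to do the 'previous versions' check from a prompt rather than the Explorer UI, a rough PowerShell sketch along these lines will list the available shadow copies and expose the newest one as a browsable folder. The C:\ShadowView link name is just an illustrative placeholder, and this needs an elevated session.

# Rough sketch: list Volume Shadow Copies and link the newest one to a folder
# so files (e.g. under C:\Quarantine or the original application path) can be
# copied back out. Run from an elevated PowerShell session.
$shadows = Get-CimInstance -ClassName Win32_ShadowCopy | Sort-Object InstallDate -Descending
$shadows | Select-Object InstallDate, DeviceObject | Format-Table -AutoSize
if ($shadows) {
    # mklink needs the trailing backslash on the shadow copy device path.
    $device = $shadows[0].DeviceObject + '\'
    cmd /c mklink /d C:\ShadowView "$device"
    # Browse C:\ShadowView, copy what you need, then remove the link with:
    #   cmd /c rmdir C:\ShadowView
}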

 

I thought we were all caught up last night, but found that a fair number of customers were affected and not flagged, and that I did not receive email alerts for all endpoints with issues. I would recommend that anyone with multiple organizations run a report showing all detections in the last 24 hours to make sure your bases are covered.

 

I hope everyone has enough coffee to get through the day.

 

 
Is it possible to restore the latest working state (applications, files, system images) and fall back to the most recent working Webroot version?
@

 

Thanks for that; we will try it now. I'm on the third pot this morning but got no sleep last night. Thanks for the insight, I am willing to try just about anything at this point...

 

Can a Webroot employee or support rep please let us know an ETA? Lie to me, I don't care, but at least give us some hope of sleep today.

 

:mansad:
@ we are fully focused on resolving this issue and do not want to spend time addressing false rumors. Please continue posting your questions, but kindly refrain from adding rumors and false information. Your post has been edited to remove the offending comments. You may also review the Community Guidelines here.
All,

 

I've been paying very close attention to this failure and have been up most of the night monitoring to see whether we were affected in any way. We have seemingly dodged a bullet here, but that's not to say we are out of the woods by any means. We can't pinpoint how we avoided the failure, other than the fact that our scan times are set to 11pm-3am every day, as opposed to the daytime defaults that Webroot uses. I can't say for certain that this helped us avoid the disaster, but we can't find any other reason why we got lucky here. Our guess is that because the update went out yesterday morning, and deep scans occurred after that for endpoints, that's when it flagged .exe files as false positives and quarantined them. Since we didn't scan during that period, and Webroot released the first fix during the afternoon, we missed that window.

 

Not sure if you will find this helpful, but if Webroot is issuing these updates during the day, perhaps think about moving your deep scans to after hours to avoid that update window; it lets you see whether there are any issues and gives Webroot time to fix the problems. (Not that this should have happened in the first place.)
@ I almost appreciate the "all hands on deck" statement, BUT inasmuch as this is making the national news, affecting business for so many companies, detrimentally affecting client relationships for MSPs all over, not to mention the huge financial impact to both clients and MSPs, I would have expected Webroot to be much more transparent and responsive. As it is, the response from Webroot appears to me to be extraordinarily apathetic.



Where is our official statement? Where is our comprehensive solution? Are we going to have to deal with the effects of this tomorrow as well?

 

Regards,

Mark G
" We have rolled back the false positives. Once the fix is deployed, the agent should pick up the re-determinations and perform as normal."

 

Has the fix been deployed yet? It is not clear. What about files still in quarantine? Do we have to restore from quarantine (again)? We already did that yesterday, and it clearly did not work.

 

Will our files automatically be restored?

 

Please provide us with more details on this fix and how and when we will get it.

 

Thanks

 
UPDATE: We have updated the initial post in this thread with further guidance on addressing the issue manually. We are conducting a thorough technical review to ensure we have a complete understanding of the root cause. I wanted to make sure that all of our subscribers got the message. Please continue the discussion in this forum. Thanks.
Any word on a mass-scale fix? This is painful.
I'd really like to know if anyone has any idea about machines bluescreening after they get to the login screen in Windows 7/10 after this. They're just boot-looping over and over.

 

Has anyone found an effective fix other than wiping the machines? Removing the WRkrn.sys file (which works for a botched Webroot install) does not work in this instance due to the mangled files from this issue.

 

Any help would be awesome!
Had one this morning, but it was a machine we were in the process of deploying Webroot on while removing AVG Cloudcare. AVG had not been removed yet, so we manually removed AVG and then all was fine. The file causing the BSOD error was netio.sys.
@ wrote:

Boy, there were a lot of failures and weaknesses shown by this event: Authenticode catalogs not honored, C&C overwhelmed, no notification to users, and QC inadequacies. I hope Webroot takes this wakeup call VERY seriously and makes these shortcomings their top priority. No more features until this is addressed.

Webroot ignoring customer-created whitelists would be a weakness too.

 

We have multiple whitelist exceptions for all files in specific file paths.

 

Webroot steamrolled right over those exceptions this time around, ignoring them and marking files infected anyway.

 

Since when would this be okay behavior by an update?  I've been told by others they experienced this as well.
 

Any update as of yet?

 
For those who have not yet seen this email from Mike Malloy, Executive VP of Product & Strategy, I wanted to share it with you. We sent it out to all registered MSP admins earlier today.

 

 



 

Yesterday morning at 11:52 am MT, some good applications were mistakenly categorized as malware. This has created many false positives across the affected systems and has resulted in those applications being quarantined and unable to function. We recognize that we have not met the expectations of our customers, and are committed to resolving this complex issue as quickly as possible.

 

Webroot is making progress on a resolution, and our entire organization is dedicated to addressing this issue. We will update you with the latest information on our Community and Blog. In the meantime:

  • Affected customers should not uninstall the product or delete quarantine, as this will make quarantined files unrecoverable.

  • We have corrected the false positives in our backend systems, and we are working on an automated fix to reverse the false positives on endpoints. 

  • Customers should ensure that endpoints are on and connected to the Internet to receive a resolution.  Once files have been removed from quarantine, some endpoints may require rebooting.
Those who wish to address the issue manually should follow the instructions posted on Webroot Support. We are conducting a thorough technical review to ensure we have a complete understanding of the root cause. Once our analysis is complete, your Webroot account representatives will discuss the findings in greater detail with you.

We apologize for the pain this has caused you and your customers. Webroot appreciates your business, and our entire team is dedicated to being your most trusted partner. We did not live up to that in this situation, but we are taking action to earn your trust going forward.

Mike Malloy
Executive VP, Product & Strategy
As an MSP, this has been overly frustrating. I have several clients who have been affected by this. Their production has halted due to Webroot flagging their main applications. Going through all of the machines and figuring out how to get them up and running has been an ordeal, and it cost multiple clients an entire day of production. Of the companies that got hammered by this, I've only verified that a single one has been able to get back up and running.
We got that e-mail 2.5 hours ago.

It's the same information you posted on the "Webroot False Positive" thread 4 hours ago.

 

When do we get new information?

 

 
I am personally disheartened by the "Do You Know Who I Am" attitude.

 

Shortly after some of our custom-written programs started to be flagged as bad, Webroot came out and stated it was their fault. This enabled me to stop chasing ghosts and get our workstations cleaned using a combination of the methods described in this forum.

 

Unfortunately, most of the posts were not helpful; rather, they were whining, complaining, and demanding action. This only cluttered the minority of responses that contained legitimate content. I understand many of you are upset or frustrated, but please be mindful of all of our time. Sifting through your rants to get to the actual info has been a waste of "my" time.

 

I appreciate the response to date from the Webroot employees and volunteers. Obviously, this is a difficult time, yet they have remained professional throughout. I have no plans of leaving Webroot over this incident.

 

My company is private, but we have over 700 devices across 12 sites. Even so, I don't consider myself any more important than Webroot's other paying customers.

 

 
@, it's true there are several people posting to complain. I wasn't trying to do a "don't you know who I am" post. It's more that we've been working on this since 4 pm yesterday and an official email wasn't sent out until earlier today. The reason I mentioned that I work for an MSP is that the current steps to get programs up and running are better suited to a small scale. The only quarantine restore method that seems to be working is grabbing the MD5 hashes, because files are being quarantined but not showing up in the web portal.



My current method is following the steps they have in today's update:

1. Reverify.

2. Rescan.

3. Review quarantine on the machines - this means changing permissions for the workstations, since the agent is currently locked down.

4. Check the quarantine and grab the MD5 hashes for all the files (see the sketch after this list).

5. Manually add each program a page at a time, because it errors out when you try to do an entire site in one go.

Then test to see if it's working yet and repeat the process.
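
In case it saves someone time on step 4, here is a rough PowerShell sketch of the hash-gathering part. The folder and output paths are hypothetical placeholders; point it at wherever you have known-good copies of the affected executables and paste the resulting MD5s into the console overrides.

# Rough sketch: collect MD5 hashes for executables under a folder and write
# them to a CSV for pasting into the console as overrides.
$sourceFolder = 'C:\Program Files\AffectedApp'   # hypothetical example path
$outputCsv    = 'C:\Temp\md5-overrides.csv'      # hypothetical example path
Get-ChildItem -Path $sourceFolder -Recurse -File -Include *.exe, *.dll |
    Get-FileHash -Algorithm MD5 |
    Select-Object Hash, Path |
    Export-Csv -Path $outputCsv -NoTypeInformation
Write-Host "Wrote MD5 hashes to $outputCsv"
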
I've been having the same experience where restoring quarantine from the cloud portal does not seem to work. I've had to use the unmanaged policy and the local unquarantine option. I also notice there are many more things listed in the local quarantine than in the cloud version; I wonder why that is. So far I've been fortunate to only have to deal with a few machines here and there. I sympathise with those of you having to deal with hundreds or thousands.
Things have died down on our end. This morning was a little rough, but not as bad as it could have been. I spent the greater part of my evening last night adding exceptions and then running the quarantine release one by one (beer and ice cream helped). I woke up this morning at 5 AM to check on the progress, and most of the backlog had caught up. This gave me hope that the world wasn't ending; it was just going to take time.

 

As someone pointed out, it wasn't so much of a "DO YOU KNOW WHO I AM?" as a reaction that the initial fix wasn't a feasible option for an MSP who deals with thousands of endpoints. And when we pointed out that it would not work at that scale, it felt like the ball was dropped and we were left in the dark. Don't get me wrong, I like Webroot's product. I've worked with them for several years and will most likely continue working with them. We can't fix the past, but I would like to know how Webroot plans to make sure this doesn't occur again, and what tools they can provide us so that we can put a stop to something like this (if possible) instead of waiting more than a day. I commend all the Webroot techs and the sales reps I've spoken with. They've been understanding, and I know my day hasn't been as bad as theirs.

 

Anyway, one thing I did notice is that even though the command for releasing the quarantine had executed, we still had to manually remote into the computers. And there was one odd computer where, even though we ran the release, the quarantine was never actually released. We then discovered that even though it was in unmanaged mode, all of the .exe files were set to "blocked". Once we unblocked them, everything released and the workstation began working normally. Hope that helps anyone who encounters that issue like we did. :)

 

 
I know most of you know this, but this is what has been working very well for us.

 

-Switch the policy to Unmanaged in the Dashboard

-On the local device, right-click the WR tray icon and Refresh Dashboard

-On the local device, release the quarantined files

-On the Dashboard, run the "Re-verify all file...." agent command

-On the local device, right-click the WR tray icon and Refresh Dashboard

-Switch the policy back to the actual policy in the Dashboard

 

 

 

 

 
