Update April 28, 11:45 a.m. MDT:
Please click here to see the most recent update.
UPDATE 4/28/17 11:45 a.m. MDT: We have 0 calls in queue on our phone line, and are working through about 80 tickets related to the False Positive repair utility. A good portion of those are simply awaiting customer verification.
Please note, the utility was built to address only this specific false positive issue; it will be deactivated in the future.
If applications are operating normally on your systems, you do not need to run the utility.
If you haven't yet submitted a support ticket and you need the repair utility, please do so here, and include your phone number with the ticket.
Thank you.
Hello Drew,
Respectfully, circuit breakers are not adequate safeguards.
A safeguard is having a client that can authenticode verify files as being from Microsoft and whitelisting them. It's completely, utterly unacceptable that in 2017 you could quarantine a file signed by Microsoft's root authority. This is a glaring issue.
In fact, I submitted product feedback about this omission last year, warning that this would happen without enforced system file whitelisting. I was seemingly ignored.
You can read that feedback here: https:///t5/Webroot-SecureAnywhere-Antivirus/Product-defect-Critical-oversight-in-file-signing-via-catalog/m-p/259299#M26248
(EDIT for clarification: Note this specific recommendation would have only prevented the issues with Windows 10 Insider Preview)
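To make the point concrete, here is a minimal sketch (my own illustration, not Webroot code) of the kind of check I mean: asking Windows whether a file carries a valid Authenticode signature before treating it as suspect. It shells out to PowerShell's Get-AuthenticodeSignature, so it is Windows-only, and the target path is just an example.

```
import subprocess

def authenticode_status(path: str) -> str:
    """Ask PowerShell for the Authenticode signature status of a file.

    Returns a status string such as "Valid", "NotSigned", or "HashMismatch".
    Windows-only; requires PowerShell.
    """
    cmd = [
        "powershell", "-NoProfile", "-Command",
        f"(Get-AuthenticodeSignature -FilePath '{path}').Status.ToString()",
    ]
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    return result.stdout.strip()

if __name__ == "__main__":
    # Example target only; substitute any system file you care about.
    target = r"C:\Windows\System32\notepad.exe"
    print(target, "->", authenticode_status(target))
    # The point: a sane rule would refuse to auto-quarantine a file whose
    # signature chain is valid and terminates in a trusted Microsoft root.
    # Note: on some Windows/PowerShell combinations, catalog-signed OS files
    # can still report "NotSigned" here, which is exactly the catalog-signing
    # gap my earlier feedback was about.
```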
Webroot's product team needs to go back to the basics of the product and review the fundamental protections other vendors implement.
I tell people in my professional circles that I mostly approve of your product and quality control. This is not okay. And this isn't one person's fault, or just human error. Humans make mistakes. I make stupid mistakes and have bad judgement. But systems are supposed to be designed so those mistakes are mitigated.
Regards,
explanoit
This is not a fix when you're an MSP....
How am I supposed to do this across 3 GSMs with over 3,000 client sites?
NOT GOOD ENOUGH.
This is not a fix. This is a "hey, restore your files manually from the quarantine on all your endpoints".
A fix is an automatic rollback.
Figure it out.
I think I speak for all MSPs on here in wanting more communication from Webroot. I am pretty sure you could easily set up a mailing list we can all sign up to for critical issue alerts.
If you don't know how to set one of those up, I bet I can find a few IT guys on here who would gladly help with this.
We know mistakes happen, and although most of us are very pissed off, we still remain loyal because overall it's a good product. But the lack of communication is unbelievable!
I hope you are coming up with a solution PDQ.
Are you serious, Webroot? As if we didn't wait for hours Monday trying to get through on your support phone line and get a ticket submitted via the completely non-functional GSM portal before finally giving up. Now that you finally have a fix, you make us come and grovel for it?
I'm sure you will have some excuse, like needing to track the number of people who need it. Just put a **bleep** download counter on it and post it publicly. The world already knows about your screw-up; being shady about it now is not helping your case to keep your existing clients. I'm sure what this is really about is making sure we contact you so you can send us to the retention department, because you know just about every MSP out there is scrambling to test other options right now.
Want to mitigate the customer loss? Own your screw-up. Don't minimize it with this "13 minutes" BS you are trying to spread as if that makes it better; it's been an ongoing issue for us and our clients for days now.
Tell us what you are going to do to be better in the future. This is at least the 3rd major screw-up from Webroot in the past year, and some of them, like the terminal server issues, are still ongoing, but Webroot lost interest in fixing them. No updates have been released for the agents in the past 6 months, oh, except that one that broke everything and had to be re-released with rolled-back code.
We want to know what the heck is going on at Webroot and to be convinced why we shouldn't change vendors, because we are losing clients because of you. And no, I don't want to call you and ask for the privilege of lip service; it needs to be public, and it needs to include an actual apology from the people in charge and a plan for turning the ship around.
Didn't opt in for beta fix:
Agent refused to check in to the cloud console.
- Booted workstation into Safe Mode
- Ran WRSA -uninstall
- Reinstalled the agent
Agent now checks in; no new false positives yet.
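For what it's worth, here is a rough sketch of how the same sequence could be wrapped for repeat use (the Safe Mode boot still has to happen by hand). The WRSA.exe path and the installer filename are assumptions about a default layout, so adjust them for your environment.

```
import subprocess
from pathlib import Path

# Assumed default agent location; adjust for 32-bit Windows or custom installs.
WRSA = Path(r"C:\Program Files\Webroot\WRSA.exe")
# Hypothetical local copy of the agent installer you deploy from.
INSTALLER = Path(r"C:\Temp\wsainstall.exe")

def uninstall_agent() -> None:
    """Run the agent's own uninstall switch (the 'WRSA -uninstall' step above).

    We ran this while the workstation was booted into Safe Mode.
    """
    if WRSA.exists():
        subprocess.run([str(WRSA), "-uninstall"], check=True)
    else:
        print("WRSA.exe not found; the agent may already be removed.")

def reinstall_agent() -> None:
    """Re-run the installer interactively.

    Your installer may support silent/keycode switches for unattended
    reinstalls; confirm those against your own deployment notes rather
    than trusting this sketch.
    """
    subprocess.run([str(INSTALLER)], check=True)

if __name__ == "__main__":
    uninstall_agent()
    # Reboot back into normal mode, then call reinstall_agent().
```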
Where should my company send the bill for all the time we've lost fixing this issue for our clients? Multiple core processors taken down across multiple banks and financial institutions is a very big deal.
Hi everyone,
Our team (Webroot development) has been working through the night on a safe process for moving affected files out of quarantine. We needed to ensure it would not create further issues. We will provide a more detailed message with the current status in a little while. This will be followed by a report you can use in your discussions with your users and/or clients. I speak for Webroot when I say we are very sorry for the aggravation this has caused you. Once things have settled down a bit, I would be happy to speak with each of you; we can set that up with your rep. More info in a bit.
Mike Malloy
EVP Products
This is a PR nightmare, and the lack of communication is absurd. My technicians, project managers, and developers have been up all night on this and they still have not slept. We are an MSP, and I am the people side of our company. When I received the call from our techs yesterday evening, they said they were on it and would send me an email when they knew more. I got an email around midnight telling us what had happened. When I started getting calls from directors and owners this morning asking if something had happened, we were very transparent with our clients. The situation was company-wide, but we have the best techs, who were able to resolve the issues overnight for most places. We did replace some hardware, however ($$$$$), and we will be filing for compensation for this. From the business side this is unacceptable; this cannot happen. I called our owner this morning. We have been very happy with WR up until now, but this will most likely affect our bottom line, and that cannot be remedied with "we are sorry." We are going to need more!
Dear @ ,
We've found that if you set the affected computers to an Unmanaged policy and then have the user "Refresh Configuration" (by right-clicking the green W icon), the endpoint will pull the policy from the console. Once that takes effect (usually about ten seconds), you can then enter the UI and restore the items from quarantine.
FWIW, we were still seeing detections this morning, and we have had to shut down protection completely at various companies in order for them to get their production orders out.
Hi Folks
Just to recap, here is where we are at this point. The best current manual approach to fixing affected machines is the set of options posted on Webroot Support. There are several specific techniques on that page that can be used for your critical situations. It's manual, but effective.
We are also continuing work on a "from the cloud," en masse approach: sending agent commands to restore from quarantine for all affected customers. It has proven trickier than we anticipated to get it right without downstream issues and to test it with both live customers (using volunteers) and internal accounts. More on this as we make progress.
We are also working on a different approach in which you would download a small app that can be pushed to your endpoints and kicked off with a command, and the app would do the restoration. This approach has not been completed or tested, so there is no timetable yet.
Lastly, thank you to everyone on the forum, especially those who have generously shared your own successful approaches with your colleagues here. Our support people are available to help, of course, but it's great to see the community sharing.
More news later.
Mike
Only the affected machines will benefit from the tool, because it's specifically designed to move the affected files back into the right folders.
Mike
This event certainly uncovered some big issues.
Now that it's behind us I would like to know what your plans are for making sure the trifecta of bad (detection, backlog, no kill switch) does not happen again.
I have no doubt you are taking this very seriously. Just looking for more information.
Thanks.
Hi dsm55 and others
We are sending an email and posting a letter from our CEO, Dick Williams, which outlines some of the many steps we have taken already and are actively working on to 1) prevent similar issues; 2) communicate more rapidly and with better coverage; and 3) improve our systems so that you can take remediation steps yourself with better information. That note and others in the weeks ahead will hopefully provide you the assurance you need to depend on Webroot as a solid partner. We know this event was a big one and have neither dismissed it nor ignored its many lessons. Thanks for your note.
Mike
We spent an hour on hold and spoke to an agent. Same answer - follow this process. The console is getting hammered, thus restore commands are not processing. There is no local restore option if the agent is cloud managed.
The agent suggested uninstalling Webroot and then restoring or reinstalling the affected program. This was a laughable suggestion to be sure - except we didn't find it very humorous.
Seems like we found a major flaw in the underlying program: if the cloud console is having issues, then nothing can be done on the local agent in an emergency. This is definitely something that will need to be reviewed and addressed moving forward.
We have found that sometimes you can refresh the agent, reboot the endpoint, and it will get the restore done.
I'll add another vote for GETTING THE CONSOLE AGENT COMMANDS FIXED NOW!
As an MSP, this is killing us. It's bad enough to have the false positive. It's even worse to be told "Just restore the file from the quarantine." But it's the WORST to have the console Agent commands not work.
FIX IT! And, while you're at it, write a GLOBAL script that takes ALL the false positive files found in the quarantine AND RESTORE THEM TO THE ORIGINAL LOCATION.
Oh, and next time something like this happens, WHY DIDN'T YOU NOTIFY YOUR DISTRIBUTORS? I'm already pissed that my distributor didn't notify me. Talk about dumping on your resellers. Shame on you!
Like all the other MSPs I see listed here, you have absolutely crippled us and many of our clients. Backup restores are simply not the right "solution" or "workaround." This needs to be resolved, and MSPs need a solution ASAP; many critical systems are affected here, and more are affected each hour as they update.
What is the status for a solution for MSPs?!?!
@ - The largest problem here is that it took 12 hours to get a response from someone other than a forum moderator. We still have not seen any communication from our Customer Engagement teams or any management.
Webroot ignoring customer-created whitelists would be a weakness too.
@ wrote:
Boy, there were a lot of failures and weaknesses shown by this event: Authenticode catalogs not honored, C&C overwhelmed, no notification to users, and QC inadequacies. I hope Webroot takes this wake-up call VERY seriously and makes these shortcomings their top priority. No more features until this is addressed.
We have multiple whitelist exceptions for all files in specific file paths.
Webroot steamrolled right over those exceptions this time around, ignoring them and marking files infected anyway.
Since when would this be okay behavior by an update? I've been told by others they experienced this as well.
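To make concrete what we expected those exceptions to do, here is a minimal illustrative sketch of path-based override logic (not how Webroot implements overrides internally, and the folders are made-up examples): a detection on any file under a whitelisted folder should be suppressed rather than quarantined.

```
from pathlib import PureWindowsPath

# Illustrative folder overrides an admin might have configured.
WHITELISTED_FOLDERS = [
    PureWindowsPath(r"C:\LOB\CustomApps"),
    PureWindowsPath(r"D:\Shared\Tools"),
]

def is_whitelisted(file_path: str) -> bool:
    """True if the file sits under any whitelisted folder."""
    p = PureWindowsPath(file_path)
    for folder in WHITELISTED_FOLDERS:
        try:
            p.relative_to(folder)
            return True
        except ValueError:
            continue
    return False

def handle_detection(file_path: str) -> str:
    """The behavior we expected: path overrides are honored before quarantine."""
    return "allow (path override)" if is_whitelisted(file_path) else "quarantine"

print(handle_detection(r"C:\LOB\CustomApps\billing.exe"))  # allow (path override)
print(handle_detection(r"C:\Users\Public\unknown.exe"))    # quarantine
```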
I am personally disheartened by the "do you know who I am" attitude.
Shortly after some of our custom-written programs started to be flagged as bad, Webroot came out and stated it was their fault. This enabled me to avoid chasing ghosts and get our workstations cleaned using a combination of the methods described in this forum.
Unfortunately, most of the posts were not helpful; rather, they were whining, complaining, and demanding action. This only cluttered the minority of responses that contained legitimate content. I understand many of you are upset or frustrated, but please be mindful of all of our time. Sifting through your rants to get to the actual info has been a waste of "my" time.
I appreciate the response to date from the Webroot employees and volunteers. Obviously, this is a difficult time, yet they have remained professional throughout. I have no plans of leaving Webroot over this incident.
My company is private, but we have over 700 devices across 12 sites. Even so, I don't consider myself any more important than Webroot's other paying customers.
UPDATE: We will be sending an email out to all our MSP partners for them to forward to their customers. It clearly states that Webroot was responsible for this issue, not our Partner MSPs, and reiterates the fact we are working on a comprehensive solution. Please be on the lookout. As a backup, we will post that email here on this thread.
1.) Confirmed that the endpoint was set to "unmanaged" in the admin console.
2.) Refreshed the workstation's endpoint.
3.) Confirmed that the user account I was in was added to the local administrator group (I had a lot of files in C:\Windows\System) and that UAC was disabled. (I don't know if this matters, but it's worth a shot.)
4.) Confirmed that the files were still in the Webroot quarantine. (You should be able to see them in C:\Quarantine, or through the endpoint's quarantine view.)
5.) Right-clicked the workstation's Webroot icon > PC Security (cog) > Block / Allow
6.) Allowed ALL currently blocked files and saved.
7.) Went back to PC Security > Quarantine > Release
It took a few minutes; I checked the folders the .exe files were in and could confirm that the programs were back in place. This was the only computer I ran into this on out of the 200+ that I worked on. Let me know if this works for you. :)
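In case it helps anyone else doing this across a pile of machines, here is a small sketch of the sanity check I did by hand at the end: confirming the released executables actually landed back in their original folders. The file list is a hypothetical example; substitute whatever your quarantine report shows.

```
from pathlib import Path

# Hypothetical list of originally quarantined executables, e.g. copied out of
# the console's quarantine view or an endpoint's scan log before releasing.
EXPECTED_FILES = [
    r"C:\Program Files (x86)\LOBApp\lobapp.exe",
    r"C:\CustomTools\reporting.exe",
]

def missing_after_restore(paths):
    """Return the expected files that still are not back on disk."""
    return [p for p in paths if not Path(p).exists()]

if __name__ == "__main__":
    missing = missing_after_restore(EXPECTED_FILES)
    if missing:
        print("Still missing after release from quarantine:")
        for p in missing:
            print("  " + p)
    else:
        print("All expected files are back in place.")
```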