Anti‐Virus Comparative ‐ Retrospective test – March 2012             www.av‐comparatives.org 




Anti-Virus Comparative




Retrospective/Proactive test
(Heuristic detection and behavioural protection against
new/unknown malicious software)



Language: English
March 2012
Last revision: 20th July 2012

www.av-comparatives.org







  Contents

   1. Introduction
   2. Description
   3. False alarm test
   4. Test results
   5. Summary results
   6. Awards reached in this test
   7. Copyright and Disclaimer








1. Introduction

This test report is the second part of the March 2012 test1. The report is delivered in late July because of the large amount of work required: deeper analysis, and the preparation and dynamic execution of the retrospective test-set. This year the test is performed only once, but it also includes a behavioural protection element.

New in this test
There are two major changes in this test relative to our previous proactive tests. Firstly, because of
the frequency of updates now provided by the vendors, the window between malware appearing and a
signature being provided by the vendor is much shorter. Consequently we collected malware over a
shorter period (~1 day), and the test scores are correspondingly higher than in earlier tests. Secondly,
we have introduced a second (optional) element to the test: behavioural protection. In this, any mal-
ware samples not detected in the scan test are executed, and the results observed. A participating
product has the opportunity to increase its overall score by blocking the malware on/after execution,
using behavioural monitoring.
The following vendors asked to be included in the new behavioural test: Avast, AVG, AVIRA, BitDe-
fender, ESET, F-Secure, G DATA, GFI, Kaspersky, Panda and PC Tools. The results published in this re-
port show results for all programs for the scan test, plus any additional protection by those products
participating in the behavioural test. Although it was a lot of work, we received good feedback from
various vendors, as they were able to find bugs and areas for improvement in the behavioural rou-
tines.

The products used the same updates and signatures that they had on 1st March 2012, and the heuristic detection part used the same detection settings as in March (see the settings notes below). In the behavioural test we used the default settings. This test shows the proactive detection and
protection capabilities that the products had at that time. We used 4,138 new malware variants which
appeared around the 2nd March 2012. The following products were tested:

 AhnLab V3 Internet Security 8.0                            Fortinet FortiClient Lite 4.3
 avast! Free Antivirus 7.0                                  G DATA AntiVirus 2012
 AVG Anti-Virus 2012                                        GFI Vipre Antivirus 2012
 AVIRA Antivirus Premium 2012                               Kaspersky Anti-Virus 2012
 BitDefender Anti-Virus Plus 2012                           Microsoft Security Essentials 2.1
 BullGuard Antivirus 12                                     Panda Cloud Antivirus 1.5.2
 eScan Anti-Virus 11.0                                      PC Tools Spyware Doctor with AV 9.0
 ESET NOD32 Antivirus 5.0                                   Qihoo 360 Antivirus 2.0
 F-Secure Anti-Virus 2012                                   Tencent QQ PC Manager 5.3

At the beginning of the year, we gave the vendors the opportunity to opt out of this test. McAfee,
Sophos, Trend Micro and Webroot decided not to take part in this type of test, as their products rely
very heavily on the cloud.


1
    http://www.av-comparatives.org/images/docs/avc_fdt_201203_en.pdf






2. Description

Many new viruses and other types of malware appear every day, which is why it is important that antivirus products not only provide new updates as frequently and as quickly as possible, but are also able to detect such threats in advance (preferably without having to execute them or contact the cloud) using generic/heuristic techniques, or, failing that, with behavioural protection measures. Even though most antivirus products nowadays provide daily, hourly or cloud updates, without proactive methods there is always a time-frame in which the user is not reliably protected.

The data shows how good the proactive heuristic/generic detection capabilities of the scanners were at detecting the new threats (sometimes also called zero-hour threats) used in this test. By the design and scope of the test, only the heuristic/generic detection capability and the behavioural protection capabilities (on-execution) were tested (offline). Additional protection technologies (which are
dependent on cloud-connectivity) are considered by AV-Comparatives in e.g. whole-product dynamic
(“real-world”) protection tests and other tests, but are outside the scope of retrospective tests.

This time we included in the retrospective test-set only new malware which had been seen in the field and was prevalent in the few days after the last update in March. Additionally, we took care to include malware samples which belong to different clusters and which appeared in the field only after the freezing date. Due to the use of only one sample per malware variant and the shortened period (~1 day) of new samples, the detection rates are higher than in previous tests. We adapted the award system accordingly. Samples which were not detected by the heuristic/generic on-demand/on-access detection of the products were then executed, in order to see if they would be blocked by behaviour-analysis features. As can be seen in the results, in at least half of the products the behaviour analyser (if present at all) did not provide much additional protection. Good heuristic/generic detection still remains one of the core components of protection against new malware. In several cases we observed behaviour analysers only warning about detected threats without taking any action, or alerting to some dropped malware components or system changes without protecting against all the malicious actions performed by the malware. If only some dropped files or system changes were detected/blocked, but not the main file which exhibited the behaviour, it was not counted as a block. As behaviour analysis only comes into play after the malware is executed, a certain risk of being compromised remains (even when the security product claims to have blocked/removed the threat). It is therefore preferable that malware is detected before it is executed, e.g. by the on-access scanner using heuristics (this is also one of the reasons for the different thresholds in the award table below). Behaviour analysers/blockers should be considered as a complement to the other features of a security product (multi-layer protection), not as a replacement.
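The two-stage procedure described above (scan first, then execute only the misses) can be sketched as follows. This is purely our illustration of the flow: `scan_detects` and `execution_blocked` are hypothetical stand-ins for the product under test, not a real API, and the "ask_user" / partial-block handling mirrors the counting rules stated in the text.

```python
def run_retrospective_test(samples, scan_detects, execution_blocked):
    """Two-stage retrospective test: offline heuristic scan, then
    behavioural protection on execution for samples the scan missed."""
    detected = blocked = user_dependent = missed = 0
    for sample in samples:
        if scan_detects(sample):               # heuristic/generic, offline
            detected += 1
        else:
            verdict = execution_blocked(sample)  # behavioural, on execution
            if verdict == "blocked":
                blocked += 1
            elif verdict == "ask_user":        # warning only -> user dependent
                user_dependent += 1
            else:                              # e.g. only dropped files caught:
                missed += 1                    # counted as not blocked
    return detected, blocked, user_dependent, missed
```

A product can thus improve on its scan-only score, but anything it merely warns about, or blocks only partially, does not count as a full block.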

What about the cloud? Even in June (months later), many of the malware samples used were still not
detected by certain products which rely heavily on the cloud. Consequently, we consider it to be a
marketing excuse if retrospective tests - which test the proactive detection against new malware - are
criticized for not being allowed to use cloud resources. This is especially true considering that in
many corporate environments the cloud connection is disabled by the company policy, and the detec-
tion of new malware coming into the company often has to be provided (or is supposed to be provid-
ed) by other product features. Clouds are very (economically) convenient for security software vendors
and allow the collection and processing of large amounts of data. However, in most cases (not all)
they still rely on blacklisting known malware, i.e. if a file is completely new/unknown, the cloud will
usually not be able to determine if it is good or malicious.






AV-Comparatives prefers to test with default settings. Nowadays, almost all products run by default with the highest protection settings, or switch automatically to the highest settings in the event of a detected infection. Because of this, in order to get comparable results for the heuristic detection part, we also set the few remaining products to the highest settings (or left them at their default settings), in accordance with the respective vendor's wishes. In the behavioural protection part, we tested ALL products with DEFAULT settings. Below are notes about the settings used for some products (scanning of all files etc. is always enabled):

F-Secure: asked to be tested and awarded based on their default settings (i.e. without using their
advanced heuristics).
AVG, AVIRA: asked us not to enable/consider the informational warnings of packers as detections.
Because of this, we did not count them as such.
Avast, AVIRA, Kaspersky: the heuristic detection test was done with heuristics set to high/advanced.

This time we distributed the awards by using the following award system / thresholds:

                                             Proactive Detection/Protection Rates
     Heuristic Detection:                   0-25%     25-50%      50-75%      75-100%
     Heuristic + Behavioural Protection:    0-30%     30-60%      60-90%      90-100%

     Very few FP                            tested    STANDARD    ADVANCED    ADVANCED+
     Few FP                                 tested    STANDARD    ADVANCED    ADVANCED+
     Many FP                                tested    tested      STANDARD    ADVANCED
     Very many FP                           tested    tested      tested      STANDARD
     Crazy many FP                          tested    tested      tested      tested

NB: To qualify for a particular award level, a program needs to reach the relevant score EITHER on heuristic detection alone, OR on heuristic plus behavioural protection; it does not need both. Thus a program that scores 85% on heuristic detection receives an Advanced+ award, even if it fails to improve its score at all in the behavioural test.
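The threshold-and-FP logic above can be sketched as a small function. This is purely our illustration (the function names and the "one band per FP level" structure are our assumptions, chosen to reproduce the table), not AV-Comparatives' actual tooling.

```python
# Award thresholds from the table above (percentages).
DETECTION_BANDS = [25, 50, 75]    # heuristic detection alone
PROTECTION_BANDS = [30, 60, 90]   # heuristic + behavioural protection
AWARDS = ["TESTED", "STANDARD", "ADVANCED", "ADVANCED+"]

def band(score, thresholds):
    """Return 0-3: how many thresholds the score meets or exceeds."""
    return sum(score >= t for t in thresholds)

def award(heuristic, protection, fp_level):
    """A product qualifies via EITHER score, so take the better band;
    false alarms then lower the award, one level per FP band above 'few'."""
    best = max(band(heuristic, DETECTION_BANDS),
               band(protection, PROTECTION_BANDS))
    penalty = {"very few": 0, "few": 0, "many": 1,
               "very many": 2, "crazy many": 3}[fp_level]
    return AWARDS[max(0, best - penalty)]
```

Checked against the table: for example, 85% heuristic detection with few FPs lands in the 75-100% column and yields ADVANCED+, while the same score with many FPs drops one level to ADVANCED.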

3. False alarm test

To better evaluate the quality of the detection capabilities, the false alarm rate has to be taken into account too. A false alarm (or false positive, FP)2 is when an antivirus product flags an innocent file as infected. False alarms can sometimes cause as much trouble as real infections.
The false alarm test results (with active cloud connection) were already included in the March test
report.

    Very few false alarms (0-3):                    Microsoft, ESET
    Few false alarms (4-15):                        BitDefender, F-Secure, BullGuard, Kaspersky, Panda, eScan,
                                                    G DATA, Avast, AVIRA
    Many false alarms (over 15):                    Tencent, PC Tools, Fortinet, AVG, AhnLab, GFI
    Very many false alarms (over 100):              Qihoo



2
  All discovered false alarms were already reported to the vendors in March and are now already fixed. For de-
tails, please read the report available at http://www.av-comparatives.org/images/docs/avc_fps_201203_en.pdf








4. Test Results

The table below shows the proactive detection and protection capabilities of the various products. The awards given (see the awards section below) consider not only the detection/protection rates against new malware, but also the false alarm rates.




[The per-product results chart is not reproduced in this text version.]

Key:
Dark green = detected on scan
Light green = blocked on/after execution
Yellow = user dependent
Red = not blocked



Some observations:

Behavioural detection was used successfully mainly by Avast, AVG, BitDefender, F-Secure, Kaspersky, Panda and PC Tools.

BitDefender and Kaspersky scored very high, and even detected a few more samples, but failed to block or remove them (so these were counted as missed/not blocked).

Qihoo detected a lot of malware using heuristics, but also had a high rate of false positives.

PC Tools was quite dependent on user decisions, i.e. gave a lot of warnings.








5. Summary results

The results show the proactive (generic/heuristic) detection and protection capabilities of the security products against new malware. The percentages are rounded to the nearest whole number. Do not take the results as an absolute assessment of quality; they just give an idea of who detected/blocked more and who less in this specific test. To find out how these antivirus products perform with current signatures and a cloud connection, please have a look at our File Detection Tests of March and September. To find out about the real-life online protection rates provided by the various products, please have a look at our ongoing Whole-Product Dynamic “Real-World” Protection tests.
Readers should look at the results and decide on the best product for them based on their individual needs. For example, laptop users who are worried about infection from e.g. infected flash drives whilst offline should pay particular attention to this proactive test.

Below you can see the proactive heuristic detection and behavioural protection results over our set of new/prevalent malware that appeared in the field within ~1 day in March (4,138 different samples):

                                 Heuristic         Heuristic + Behavioural      False
                                 Detection            Protection Rate3         Alarms
1. Kaspersky                       90%                       97%                 few
2. BitDefender                     82%                       97%                 few
3. Qihoo                           95%                                       very many
4. F-Secure                        82%                      91%                  few
5. G DATA                          90%                      90%                  few
6. ESET                            87%                      87%               very few
7. Avast                           77%                      87%                  few
8. Panda                           75%                      85%                  few
9. AVIRA                           84%                      84%                  few
10. AVG                            77%                      83%                 many
11. BullGuard, eScan               82%                                           few
12. PC Tools                       53%                      82%                 many
13. Microsoft                      77%                                        very few
14. Tencent                        75%                                          many
15. Fortinet                       64%                                          many
16. GFI                            51%                      51%                 many
17. AhnLab                         47%                                          many




3
 User-dependent cases were given a half credit. Example: if a program blocks 80% of malware by itself, plus
another 20% user-dependent, we give it 90% altogether, i.e. 80% + (20% x 0.5).
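The half-credit rule in footnote 3 can be written out as a one-line calculation; the function name and signature here are our own, for illustration only.

```python
def protection_rate(blocked, user_dependent, total):
    """Overall protection rate with half credit for user-dependent cases,
    as described in footnote 3, rounded to the nearest whole number."""
    return round(100 * (blocked + 0.5 * user_dependent) / total)

# Footnote example: 80% blocked outright plus 20% user-dependent -> 90%.
protection_rate(80, 20, 100)  # -> 90
```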







6. Awards reached in this test

The following awards are for the results reached in the proactive/retrospective test:
                                   AWARDS                             PRODUCTS
                                ADVANCED+                             Kaspersky
                                                                     BitDefender
                                                                      F-Secure
                                                                       G DATA
                                                                         ESET
                                                                        Avast
                                                                        Panda
                                                                        AVIRA
                                                                      BullGuard
                                                                        eScan
                                                                      Microsoft

                                 ADVANCED                                AVG*
                                                                       Tencent*

                                 STANDARD                              Qihoo*
                                                                      PC Tools*
                                                                      Fortinet*
                                                                        GFI*

                                   TESTED                              AhnLab

                             NOT INCLUDED4                             McAfee
                                                                       Sophos
                                                                     Trend Micro
                                                                       Webroot


*: these products got lower awards due to false alarms5




4
  As those products are included in our yearly public test-series, they are listed even though those vendors decided not to take part in retrospective tests, as their products rely heavily on cloud-connectivity.
5
  Considering that certain vendors did not take part, it makes sense to set and use fixed thresholds instead of the cluster method (as the non-inclusion of low-scoring products could cause clusters to be built “unfairly”).






7. Copyright and Disclaimer

This publication is Copyright © 2012 by AV-Comparatives e.V. ®. Any use of the results, etc. in whole
or in part, is ONLY permitted after the explicit written agreement of the management board of AV-
Comparatives e.V., prior to any publication. AV-Comparatives e.V. and its testers cannot be held liable
for any damage or loss, which might occur as result of, or in connection with, the use of the infor-
mation provided in this paper. We take every possible care to ensure the correctness of the basic data,
but no representative of AV-Comparatives e.V. can be held liable for the accuracy of the test results.
We do not give any guarantee of the correctness, completeness, or suitability for a specific purpose of
any of the information/content provided at any given time. No one else involved in creating, produc-
ing or delivering test results shall be liable for any indirect, special or consequential damage, or loss
of profits, arising out of, or related to, the use or inability to use, the services provided by the web-
site, test documents or any related data. AV-Comparatives e.V. is an Austrian Non-Profit Organization.

                                                                      AV-Comparatives e.V. (July 2012)





Passkey Providers and Enabling Portability: FIDO Paris Seminar.pptxPasskey Providers and Enabling Portability: FIDO Paris Seminar.pptx
Passkey Providers and Enabling Portability: FIDO Paris Seminar.pptxLoriGlavin3
 
What's New in Teams Calling, Meetings and Devices March 2024
What's New in Teams Calling, Meetings and Devices March 2024What's New in Teams Calling, Meetings and Devices March 2024
What's New in Teams Calling, Meetings and Devices March 2024Stephanie Beckett
 
What is Artificial Intelligence?????????
What is Artificial Intelligence?????????What is Artificial Intelligence?????????
What is Artificial Intelligence?????????blackmambaettijean
 
Merck Moving Beyond Passwords: FIDO Paris Seminar.pptx
Merck Moving Beyond Passwords: FIDO Paris Seminar.pptxMerck Moving Beyond Passwords: FIDO Paris Seminar.pptx
Merck Moving Beyond Passwords: FIDO Paris Seminar.pptxLoriGlavin3
 
DevoxxFR 2024 Reproducible Builds with Apache Maven
DevoxxFR 2024 Reproducible Builds with Apache MavenDevoxxFR 2024 Reproducible Builds with Apache Maven
DevoxxFR 2024 Reproducible Builds with Apache MavenHervé Boutemy
 
Rise of the Machines: Known As Drones...
Rise of the Machines: Known As Drones...Rise of the Machines: Known As Drones...
Rise of the Machines: Known As Drones...Rick Flair
 
SIP trunking in Janus @ Kamailio World 2024
SIP trunking in Janus @ Kamailio World 2024SIP trunking in Janus @ Kamailio World 2024
SIP trunking in Janus @ Kamailio World 2024Lorenzo Miniero
 
How AI, OpenAI, and ChatGPT impact business and software.
How AI, OpenAI, and ChatGPT impact business and software.How AI, OpenAI, and ChatGPT impact business and software.
How AI, OpenAI, and ChatGPT impact business and software.Curtis Poe
 
SALESFORCE EDUCATION CLOUD | FEXLE SERVICES
SALESFORCE EDUCATION CLOUD | FEXLE SERVICESSALESFORCE EDUCATION CLOUD | FEXLE SERVICES
SALESFORCE EDUCATION CLOUD | FEXLE SERVICESmohitsingh558521
 
"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek Schlawack
"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek Schlawack"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek Schlawack
"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek SchlawackFwdays
 
The Role of FIDO in a Cyber Secure Netherlands: FIDO Paris Seminar.pptx
The Role of FIDO in a Cyber Secure Netherlands: FIDO Paris Seminar.pptxThe Role of FIDO in a Cyber Secure Netherlands: FIDO Paris Seminar.pptx
The Role of FIDO in a Cyber Secure Netherlands: FIDO Paris Seminar.pptxLoriGlavin3
 
unit 4 immunoblotting technique complete.pptx
unit 4 immunoblotting technique complete.pptxunit 4 immunoblotting technique complete.pptx
unit 4 immunoblotting technique complete.pptxBkGupta21
 
Gen AI in Business - Global Trends Report 2024.pdf
Gen AI in Business - Global Trends Report 2024.pdfGen AI in Business - Global Trends Report 2024.pdf
Gen AI in Business - Global Trends Report 2024.pdfAddepto
 
DSPy a system for AI to Write Prompts and Do Fine Tuning
DSPy a system for AI to Write Prompts and Do Fine TuningDSPy a system for AI to Write Prompts and Do Fine Tuning
DSPy a system for AI to Write Prompts and Do Fine TuningLars Bell
 

Kürzlich hochgeladen (20)

The Fit for Passkeys for Employee and Consumer Sign-ins: FIDO Paris Seminar.pptx
The Fit for Passkeys for Employee and Consumer Sign-ins: FIDO Paris Seminar.pptxThe Fit for Passkeys for Employee and Consumer Sign-ins: FIDO Paris Seminar.pptx
The Fit for Passkeys for Employee and Consumer Sign-ins: FIDO Paris Seminar.pptx
 
A Deep Dive on Passkeys: FIDO Paris Seminar.pptx
A Deep Dive on Passkeys: FIDO Paris Seminar.pptxA Deep Dive on Passkeys: FIDO Paris Seminar.pptx
A Deep Dive on Passkeys: FIDO Paris Seminar.pptx
 
Nell’iperspazio con Rocket: il Framework Web di Rust!
Nell’iperspazio con Rocket: il Framework Web di Rust!Nell’iperspazio con Rocket: il Framework Web di Rust!
Nell’iperspazio con Rocket: il Framework Web di Rust!
 
Advanced Computer Architecture – An Introduction
Advanced Computer Architecture – An IntroductionAdvanced Computer Architecture – An Introduction
Advanced Computer Architecture – An Introduction
 
Digital Identity is Under Attack: FIDO Paris Seminar.pptx
Digital Identity is Under Attack: FIDO Paris Seminar.pptxDigital Identity is Under Attack: FIDO Paris Seminar.pptx
Digital Identity is Under Attack: FIDO Paris Seminar.pptx
 
Sample pptx for embedding into website for demo
Sample pptx for embedding into website for demoSample pptx for embedding into website for demo
Sample pptx for embedding into website for demo
 
Passkey Providers and Enabling Portability: FIDO Paris Seminar.pptx
Passkey Providers and Enabling Portability: FIDO Paris Seminar.pptxPasskey Providers and Enabling Portability: FIDO Paris Seminar.pptx
Passkey Providers and Enabling Portability: FIDO Paris Seminar.pptx
 
What's New in Teams Calling, Meetings and Devices March 2024
What's New in Teams Calling, Meetings and Devices March 2024What's New in Teams Calling, Meetings and Devices March 2024
What's New in Teams Calling, Meetings and Devices March 2024
 
What is Artificial Intelligence?????????
What is Artificial Intelligence?????????What is Artificial Intelligence?????????
What is Artificial Intelligence?????????
 
Merck Moving Beyond Passwords: FIDO Paris Seminar.pptx
Merck Moving Beyond Passwords: FIDO Paris Seminar.pptxMerck Moving Beyond Passwords: FIDO Paris Seminar.pptx
Merck Moving Beyond Passwords: FIDO Paris Seminar.pptx
 
DevoxxFR 2024 Reproducible Builds with Apache Maven
DevoxxFR 2024 Reproducible Builds with Apache MavenDevoxxFR 2024 Reproducible Builds with Apache Maven
DevoxxFR 2024 Reproducible Builds with Apache Maven
 
Rise of the Machines: Known As Drones...
Rise of the Machines: Known As Drones...Rise of the Machines: Known As Drones...
Rise of the Machines: Known As Drones...
 
SIP trunking in Janus @ Kamailio World 2024
SIP trunking in Janus @ Kamailio World 2024SIP trunking in Janus @ Kamailio World 2024
SIP trunking in Janus @ Kamailio World 2024
 
How AI, OpenAI, and ChatGPT impact business and software.
How AI, OpenAI, and ChatGPT impact business and software.How AI, OpenAI, and ChatGPT impact business and software.
How AI, OpenAI, and ChatGPT impact business and software.
 
SALESFORCE EDUCATION CLOUD | FEXLE SERVICES
SALESFORCE EDUCATION CLOUD | FEXLE SERVICESSALESFORCE EDUCATION CLOUD | FEXLE SERVICES
SALESFORCE EDUCATION CLOUD | FEXLE SERVICES
 
"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek Schlawack
"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek Schlawack"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek Schlawack
"Subclassing and Composition – A Pythonic Tour of Trade-Offs", Hynek Schlawack
 
The Role of FIDO in a Cyber Secure Netherlands: FIDO Paris Seminar.pptx
The Role of FIDO in a Cyber Secure Netherlands: FIDO Paris Seminar.pptxThe Role of FIDO in a Cyber Secure Netherlands: FIDO Paris Seminar.pptx
The Role of FIDO in a Cyber Secure Netherlands: FIDO Paris Seminar.pptx
 
unit 4 immunoblotting technique complete.pptx
unit 4 immunoblotting technique complete.pptxunit 4 immunoblotting technique complete.pptx
unit 4 immunoblotting technique complete.pptx
 
Gen AI in Business - Global Trends Report 2024.pdf
Gen AI in Business - Global Trends Report 2024.pdfGen AI in Business - Global Trends Report 2024.pdf
Gen AI in Business - Global Trends Report 2024.pdf
 
DSPy a system for AI to Write Prompts and Do Fine Tuning
DSPy a system for AI to Write Prompts and Do Fine TuningDSPy a system for AI to Write Prompts and Do Fine Tuning
DSPy a system for AI to Write Prompts and Do Fine Tuning
 

Avc beh 201207_en

we have introduced a second (optional) element to the test: behavioural protection. In this, any malware samples not detected in the scan test are executed, and the results observed. A participating product has the opportunity to increase its overall score by blocking the malware on/after execution, using behavioural monitoring.

The following vendors asked to be included in the new behavioural test: Avast, AVG, AVIRA, BitDefender, ESET, F-Secure, G DATA, GFI, Kaspersky, Panda and PC Tools. This report shows results for all programs in the scan test, plus any additional protection provided by those products participating in the behavioural test. Although it was a lot of work, we received good feedback from various vendors, as they were able to find bugs and areas for improvement in their behavioural routines.

The products used the same updates and signatures they had on the 1st March 2012. For the heuristic detection part, the same detection settings as used in March were applied (see page 5 of this report). In the behavioural test we used the default settings.
This test shows the proactive detection and protection capabilities that the products had at that time. We used 4,138 new malware variants which appeared around the 2nd March 2012. The following products were tested:

- AhnLab V3 Internet Security 8.0
- avast! Free Antivirus 7.0
- AVG Anti-Virus 2012
- AVIRA Antivirus Premium 2012
- BitDefender Anti-Virus Plus 2012
- BullGuard Antivirus 12
- eScan Anti-Virus 11.0
- ESET NOD32 Antivirus 5.0
- F-Secure Anti-Virus 2012
- Fortinet FortiClient Lite 4.3
- G DATA AntiVirus 2012
- GFI Vipre Antivirus 2012
- Kaspersky Anti-Virus 2012
- Microsoft Security Essentials 2.1
- Panda Cloud Antivirus 1.5.2
- PC Tools Spyware Doctor with AV 9.0
- Qihoo 360 Antivirus 2.0
- Tencent QQ PC Manager 5.3

At the beginning of the year, we gave the vendors the opportunity to opt out of this test. McAfee, Sophos, Trend Micro and Webroot decided not to take part in this type of test, as their products rely very heavily on the cloud.

1 http://www.av-comparatives.org/images/docs/avc_fdt_201203_en.pdf
2. Description

Many new viruses and other types of malware appear every day. It is therefore important that antivirus products not only provide new updates as frequently and as quickly as possible, but are also able to detect such threats in advance (preferably without having to execute them or contact the cloud) using generic/heuristic techniques, or, failing that, behavioural protection measures. Even though most antivirus products nowadays provide daily, hourly or cloud updates, without proactive methods there is always a time-frame in which the user is not reliably protected.

The data shows how good the proactive heuristic/generic detection capabilities of the scanners were at detecting the new threats (sometimes also called zero-hour threats) used in this test. By the design and scope of the test, only heuristic/generic detection and behavioural protection capabilities (on execution) were tested, offline. Additional protection technologies which depend on cloud connectivity are considered by AV-Comparatives in e.g. the whole-product dynamic ("real-world") protection tests and other tests, but are outside the scope of retrospective tests.

This time we included in the retrospective test-set only new malware which had been seen in the field and was prevalent in the few days after the last update in March. Additionally, we took care to include malware samples which belong to different clusters and which appeared in the field only after the freezing date. Due to the use of only one sample per malware variant and the shortened period (~1 day) of new samples, the detection rates are higher than in previous tests. We adapted the award system accordingly.
Samples which were not detected by the heuristic/generic on-demand/on-access detection of the products were then executed, in order to see whether they would be blocked by behaviour-analysis features. As can be seen in the results, in at least half of the products the behaviour analyser (where present at all) did not provide much additional protection; good heuristic/generic detection remains one of the core components of protection against new malware. In several cases we observed behaviour analysers merely warning about detected threats without taking any action, or alerting to some dropped malware components or system changes without protecting against all malicious actions performed by the malware. If only some dropped files or system changes were detected/blocked, but not the main file which showed the behaviour, it was not counted as a block. As behaviour analysis only comes into play after the malware is executed, a certain risk of compromise remains (even when the security product claims to have blocked/removed the threat). It is therefore preferable that malware is detected before it is executed, e.g. by the on-access scanner using heuristics (this is also one of the reasons for the different thresholds on the next page). Behaviour analysers/blockers should be considered a complement to the other features of a security product (multi-layer protection), not a replacement.

What about the cloud?

Even in June (months later), many of the malware samples used were still not detected by certain products which rely heavily on the cloud. Consequently, we consider it a marketing excuse when retrospective tests - which measure proactive detection of new malware - are criticized for not allowing the use of cloud resources.
This is especially true considering that in many corporate environments the cloud connection is disabled by company policy, and the detection of new malware coming into the company often has to be provided (or is supposed to be provided) by other product features. Clouds are very convenient (economically) for security software vendors and allow the collection and processing of large amounts of data. However, in most cases (not all) they still rely on blacklisting known malware; if a file is completely new/unknown, the cloud will usually not be able to determine whether it is good or malicious.
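The two-stage procedure described above (an offline heuristic/generic scan, followed by execution of any missed samples) can be sketched in Python. The function and flag names below are our own illustration, not part of the actual test harness; they simply encode the counting rules stated in the text, including the rule that blocking only dropped components does not count as a block.

```python
from enum import Enum

class Outcome(Enum):
    DETECTED_ON_SCAN = "detected by the offline heuristic/generic scan"
    BLOCKED_ON_EXECUTION = "blocked by behavioural analysis on/after execution"
    USER_DEPENDENT = "blocking depended on a user decision"
    NOT_BLOCKED = "not blocked"

def classify_sample(detected_by_scan: bool,
                    main_file_blocked: bool = False,
                    requires_user_decision: bool = False) -> Outcome:
    """Classify one sample per the two-stage procedure: samples missed by
    the scan are executed, and a behavioural block only counts if the main
    malicious file is stopped (blocking only dropped components or system
    changes does not count)."""
    if detected_by_scan:
        return Outcome.DETECTED_ON_SCAN
    if main_file_blocked:
        return (Outcome.USER_DEPENDENT if requires_user_decision
                else Outcome.BLOCKED_ON_EXECUTION)
    return Outcome.NOT_BLOCKED
```

The four outcomes correspond to the dark green, light green, yellow and red cells in the results chart of section 4.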
AV-Comparatives prefers to test with default settings. Almost all products nowadays run by default with the highest protection settings, or switch automatically to the highest settings when an infection is detected. Because of this, in order to get comparable results for the heuristic detection part, we set the few remaining products to highest settings (or left them at their defaults) in accordance with the respective vendor's wishes. In the behavioural protection part, we tested ALL products with DEFAULT settings. Below are notes about the settings used for some products (scanning of all files etc. is always enabled):

F-Secure: asked to be tested and awarded based on their default settings (i.e. without using their advanced heuristics).
AVG, AVIRA: asked us not to enable/consider the informational warnings of packers as detections. Because of this, we did not count them as such.
Avast, AVIRA, Kaspersky: the heuristic detection test was done with heuristics set to high/advanced.

This time we distributed the awards using the following award system / thresholds:

Proactive Detection/Protection Rates
Heuristic Detection:                  0-25%    25-50%    50-75%    75-100%
Heuristic + Behavioural Protection:   0-30%    30-60%    60-90%    90-100%

Very few FP      tested    STANDARD   ADVANCED   ADVANCED+
Few FP           tested    STANDARD   ADVANCED   ADVANCED+
Many FP          tested    tested     STANDARD   ADVANCED
Very many FP     tested    tested     tested     STANDARD
Crazy many FP    tested    tested     tested     tested

NB: To qualify for a particular award level, a program needs to reach EITHER the relevant score on heuristic detection alone, OR the relevant score for heuristic and behavioural protection combined - not necessarily both. Thus a program that scores 85% on heuristic detection receives an Advanced+ award, even if it fails to improve its score at all in the behavioural test.

3. False alarm test

To better evaluate the quality of the detection capabilities, the false alarm rate has to be taken into account too. A false alarm (or false positive, FP)2 occurs when an antivirus product flags an innocent file as infected although it is not. False alarms can sometimes cause as much trouble as real infections. The false alarm test results (with active cloud connection) were already included in the March test report.

Very few false alarms (0-3): Microsoft, ESET
Few false alarms (4-15): BitDefender, F-Secure, BullGuard, Kaspersky, Panda, eScan, G DATA, Avast, AVIRA
Many false alarms (over 15): Tencent, PC Tools, Fortinet, AVG, AhnLab, GFI
Very many false alarms (over 100): Qihoo

2 All discovered false alarms were reported to the vendors in March and have since been fixed. For details, please read the report available at http://www.av-comparatives.org/images/docs/avc_fps_201203_en.pdf
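One possible reading of the award table and the "EITHER/OR" rule is sketched below. The function names, the band labels, and in particular the interpretation of false-alarm bands as successive one-level demotions are our own assumptions drawn from the table; they are not code used by AV-Comparatives.

```python
# Thresholds from the award table: (minimum rate, award), best first.
HEURISTIC_THRESHOLDS = [(75, "ADVANCED+"), (50, "ADVANCED"), (25, "STANDARD")]
COMBINED_THRESHOLDS = [(90, "ADVANCED+"), (60, "ADVANCED"), (30, "STANDARD")]
AWARD_ORDER = ["tested", "STANDARD", "ADVANCED", "ADVANCED+"]

def rate_award(rate, thresholds):
    """Award level earned by a rate, ignoring false alarms."""
    for minimum, award in thresholds:
        if rate >= minimum:
            return award
    return "tested"

def final_award(heuristic_rate, combined_rate, fp_band):
    """A product qualifies via EITHER rate, whichever yields the better
    award; false alarms then demote it per the table ('many' costs one
    level, 'very many' two, 'crazy many' drops to 'tested')."""
    best = max(rate_award(heuristic_rate, HEURISTIC_THRESHOLDS),
               rate_award(combined_rate, COMBINED_THRESHOLDS),
               key=AWARD_ORDER.index)
    penalty = {"very few": 0, "few": 0, "many": 1,
               "very many": 2, "crazy many": 3}[fp_band]
    return AWARD_ORDER[max(0, AWARD_ORDER.index(best) - penalty)]
```

For products that did not take part in the behavioural test, a combined rate of 0 can be passed so that only the heuristic score counts.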
4. Test results

The table below shows the proactive detection and protection capabilities of the various products. The awards given (see page 9 of this report) consider not only the detection/protection rates against new malware, but also the false alarm rates.

Key:
Dark green = detected on scan
Light green = blocked on/after execution
Yellow = user dependent
Red = not blocked

Some observations:
- Behavioural detection was used successfully mainly by Avast, AVG, BitDefender, F-Secure, Kaspersky, Panda and PC Tools.
- BitDefender and Kaspersky scored very high; they even detected a few more samples, but failed to block or remove them (so these were counted as missed/not blocked).
- Qihoo detected a lot of malware using heuristics, but also had a high rate of false positives.
- PC Tools was quite dependent on user decisions, i.e. gave a lot of warnings.
5. Summary results

The results show the proactive (generic/heuristic) detection and protection capabilities of the security products against new malware. The percentages are rounded to the nearest whole number. Do not take the results as an absolute assessment of quality - they just give an idea of who detected/blocked more and who less in this specific test. To see how these antivirus products perform with current signatures and cloud connection, please have a look at our File Detection Tests of March and September. To find out about real-life online protection rates provided by the various products, please have a look at our ongoing Whole-Product Dynamic "Real-World" Protection tests. Readers should look at the results and decide on the best product for them based on their individual needs. For example, laptop users who are worried about infection from e.g. infected flash drives whilst offline should pay particular attention to this proactive test.

Below you can see the proactive heuristic detection and behavioural protection results over our set of new/prevalent malware which appeared in the field within ~1 day in March (4,138 different samples); a dash means the product did not take part in the behavioural test:

                        Heuristic    Heuristic + Behavioural    False
                        Detection    Protection Rate3           Alarms
 1. Kaspersky           90%          97%                        few
 2. BitDefender         82%          97%                        few
 3. Qihoo               95%          -                          very many
 4. F-Secure            82%          91%                        few
 5. G DATA              90%          90%                        few
 6. ESET                87%          87%                        very few
 7. Avast               77%          87%                        few
 8. Panda               75%          85%                        few
 9. AVIRA               84%          84%                        few
10. AVG                 77%          83%                        many
11. BullGuard, eScan    82%          -                          few
12. PC Tools            53%          82%                        many
13. Microsoft           77%          -                          very few
14. Tencent             75%          -                          many
15. Fortinet            64%          -                          many
16. GFI                 51%          51%                        many
17. AhnLab              47%          -                          many

3 User-dependent cases were given half credit. Example: if a program blocks 80% of the malware by itself, plus another 20% user-dependent, we give it 90% altogether, i.e. 80% + (20% x 0.5).
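The half-credit rule in footnote 3 is simple arithmetic; a minimal helper (our own illustration, using the report's rounding to whole numbers) would be:

```python
def protection_rate(blocked_pct, user_dependent_pct):
    """Overall protection rate with user-dependent cases given half
    credit, rounded to the nearest whole number as in the report."""
    return round(blocked_pct + 0.5 * user_dependent_pct)

# The report's own example: 80% blocked + 20% user-dependent -> 90%
```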
6. Awards reached in this test

The following awards are given for the results reached in the proactive/retrospective test:

ADVANCED+:      Kaspersky, BitDefender, F-Secure, G DATA, ESET, Avast, Panda, AVIRA, BullGuard, eScan, Microsoft
ADVANCED:       AVG*, Tencent*
STANDARD:       Qihoo*, PC Tools*, Fortinet*, GFI*
TESTED:         AhnLab
NOT INCLUDED4:  McAfee, Sophos, Trend Micro, Webroot

*: these products received lower awards due to false alarms5

4 As these products are included in our yearly public test series, they are listed even though the vendors decided not to be included in retrospective tests, as their products rely heavily on cloud connectivity.
5 Considering that certain vendors did not take part, it makes sense to use fixed thresholds instead of the cluster method (as the non-inclusion of low-scoring products could cause clusters to be built "unfairly").
7. Copyright and Disclaimer

This publication is Copyright © 2012 by AV-Comparatives e.V.®. Any use of the results, etc. in whole or in part, is ONLY permitted after the explicit written agreement of the management board of AV-Comparatives e.V., prior to any publication. AV-Comparatives e.V. and its testers cannot be held liable for any damage or loss which might occur as a result of, or in connection with, the use of the information provided in this paper. We take every possible care to ensure the correctness of the basic data, but no representative of AV-Comparatives e.V. can be held liable for the accuracy of the test results. We do not give any guarantee of the correctness, completeness, or suitability for a specific purpose of any of the information/content provided at any given time. No one else involved in creating, producing or delivering the test results shall be liable for any indirect, special or consequential damage, or loss of profits, arising out of, or related to, the use of or inability to use the services provided by the website, test documents or any related data. AV-Comparatives e.V. is an Austrian non-profit organization.

AV-Comparatives e.V. (July 2012)