International Journal of Medicinal Chemistry


Research Article | Open Access

Volume 2018 | Article ID 3829307 | 10 pages | https://doi.org/10.1155/2018/3829307

Correlation between Virtual Screening Performance and Binding Site Descriptors of Protein Targets

Jamal Shamsara

Academic Editor: Patrick J. Bednarski

Received: 08 Aug 2017
Revised: 06 Nov 2017
Accepted: 29 Nov 2017
Published: 11 Jan 2018

Abstract

Rescoring is a simple approach that could, in principle, improve the original docking results. In this study, AutoDock Vina was used as the docking engine, and three scoring functions besides the original Vina scoring function, as well as their combinations as consensus scoring functions, were employed to explore the effect of rescoring on virtual screenings performed on diverse targets. Rescoring by DrugScore produced the largest number of cases with significant changes in screening power. Thus, the DrugScore results were used to build a simple model, based on two binding site descriptors, that could predict possible improvement by DrugScore rescoring. Furthermore, the screening power of all rescoring approaches, as well as of the original AutoDock Vina docking results, generally correlated with the Maximum Theoretical Shape Complementarity (MTSC) and the Maximum Distance from Center of Mass and all Alpha spheres (MDCMA). It is therefore suggested that, with a more complete set of binding site descriptors, it could be possible to find robust relationships between binding site descriptors and the response to particular molecular docking programs and scoring functions. The results could be helpful for future studies aiming to perform a virtual screening with AutoDock Vina and/or rescoring with DrugScore.

1. Introduction

Molecular docking attempts to find the most probable pose of a ligand in the active site of a receptor and to estimate the binding energy. It is a computational approach whose applicability in virtual screening has been established. Compared with experimental high-throughput screening (HTS), it can save time and cost in a drug discovery project. However, it suffers from some drawbacks, such as a high rate of false positives [1, 2]. It has been shown that docking programs have a reasonable power to predict the correct binding pose of ligands. However, their scoring powers are not the same for different protein families, and there is only a weak correlation between docking scores and the binding affinities of the ligands [3, 4].

One of the most cited open source docking engines is AutoDock Vina [5]. It uses a stochastic global optimization algorithm (iterated local search) to find the most energetically favorable pose of a flexible small molecule in either a rigid or a flexible binding site of a protein, and it was employed here as the docking engine. Generally, docking engines use scoring functions to discriminate between favorable and unfavorable binding poses of the same molecule [6]. Furthermore, scoring functions rank the best binding poses of different small molecules to find the strong binders among them. Scoring functions deal with a trade-off between speed and accuracy. Thus, rescoring and consensus scoring approaches have been investigated to discover a stable method that could combine the accuracy of various scoring functions and outperform single scoring functions [7–11]. It has been suggested, however, that scoring function performance is target dependent. The present study differs from earlier work in several respects. The data set was retrieved from the DUD-E data set [12] to avoid bias in the design of the active and decoy sets for each protein target. In addition, the protein target set is diverse, and we attempted to find possible relationships between scoring function performance and binding site descriptors.

One proposed solution that could improve virtual screening results is rescoring. Scoring functions fall into three categories [6, 13]: (1) empirical scoring functions, including ChemScore [14]; (2) knowledge-based potentials, including DrugScore [15]; and (3) force-field-based approaches, including AutoDock Vina [5] and AutoDock 4.2 [16]. Four metrics can be employed to assess the performance of a scoring function: scoring power, ranking power, docking power, and screening power [6, 17]. Thus, rescoring can be done to find the best conformation of a single molecule (improvement of docking power), to improve the estimation of binding energy and the ranking of ligands (scoring and ranking power), or to rerank the hits of a virtual screening to discriminate between decoys and true binders (improvement of screening power). The latter is the main focus of this research. A consensus scoring method, the so-called rank-by-number method, that had shown promising results [9] was also tested in this study. Several reports [1, 7–11] have investigated the possible effects of rescoring on the different metrics of scoring performance. The main conclusion of the more recent studies performed on larger data sets is that scoring function performance is highly target dependent [1]. In other words, current scoring functions are not universal.

In this study, rescoring performance was evaluated in virtual screenings conducted on a large set of predefined ligands and decoys for 32 receptors. In addition, the aim of this study was to find a way to predict the performance of a scoring function on a specific target. This study seeks to address two questions. (1) Can the employed rescoring strategies consistently improve the discrimination of binders from decoys? (2) Can the performance of docking and/or scoring be predicted from characteristics of the receptor binding sites?

2. Methods

2.1. Receptors and Ligand Preparation

Thirty-two diverse targets were selected from the DUD-E database [12] (Table 1). The selection was based on the diversity and size of the set, to keep the computational cost as low as possible. The same 3D structures that had been used in DUD-E for each of the selected targets were retrieved from the Protein Data Bank (PDB) (Table 1). The PDB files were then prepared for AutoDock Vina docking: cocrystal ligands and water molecules were removed, hydrogens and partial charges (Gasteiger) were added, and the coordinates of the 3D structures were saved in pdbqt format. The ligands from the DUD-E data set were used after the following modifications. The ligands in the DUD-E set are divided into active compounds and decoy compounds for each target, with approximately 50 decoys per active compound in the whole DUD-E set. The active group contained some duplicate structures that differ only in their protonation states. As this would generate an analog bias, the duplicate forms were omitted and only a single structure, in its physiological protonation state, was kept; the corresponding decoy structures were also omitted from the study (see the sketch after Table 1). All the ligands were converted to pdbqt files. The numbers of active compounds and decoys for each target are reported in Table 1.


Table 1: Protein targets selected from DUD-E, their PDB codes, and the numbers of active ligands and decoys used.

Abbreviation used in DUD-E | Target name | PDB code | Number of ligands | Number of decoys
ADA | Adenosine deaminase | 2E1W | 93 | 5444
AKT2 | Serine/threonine-protein kinase AKT2 | 3D0E | 116 | 6891
COMT | Catechol O-methyltransferase | 3BWM | 41 | 3846
CP2C9 | Cytochrome P450 2C9 | 1R9O | 120 | 7435
CXCR4 | C-X-C chemokine receptor type 4 | 3ODU | 40 | 3406
DEF | E. coli peptide deformylase complexed with antibiotic actinonin | 1LRU | 102 | 5686
FA7 | Coagulation factor VII | 1W7X | 114 | 6239
FKB1A | FK506-binding protein 1A | 1J4H | 111 | 5797
GLCM | Beta-glucocerebrosidase | 2V3F | 54 | 3799
GRIK1 | Glutamate receptor ionotropic kainate 1 | 1VSO | 101 | 6540
HS90A | Heat shock protein HSP 90-alpha | 1UYG | 88 | 4848
HXK4 | Hexokinase type IV (human pancreatic glucokinase in complex with glucose and activator) | 3F9M | 91 | 4692
INHA | Enoyl-[acyl-carrier-protein] reductase (Mycobacterium tuberculosis enoyl reductase) | 2H7L | 43 | 2297
KIF11 | Kinesin-like protein 1 | 3CJO | 116 | 6844
KITH | Stem cell growth factor receptor (KIT kinase domain in complex with sunitinib) | 2B8T | 57 | 2850
MAPK2 | MAP kinase-activated protein kinase 2 | 3M2W | 101 | 6144
MCR | Mineralocorticoid receptor | 2AA2 | 90 | 4835
MK01 | MAP kinase ERK2 | 2OJG | 79 | 4548
MK10 | c-Jun N-terminal kinase 3 (mitogen-activated protein kinase 10) | 2ZDT | 104 | 6593
NOS1 | Nitric-oxide synthase, brain | 1QW6 | 100 | 8037
NRAM | Neuraminidase (influenza virus neuraminidase) | 1B9V | 98 | 6196
PA2GA | Phospholipase A2 group IIA | 1KVO | 99 | 5143
PLK1 | Serine/threonine-protein kinase PLK1 | 2OWB | 107 | 6794
PUR2 | GAR transformylase | 1NJS | 50 | 2694
PYGM | Muscle glycogen phosphorylase | 1C8K | 77 | 3940
PYRD | Dihydroorotate dehydrogenase | 1D3G | 111 | 6443
RENI | Renin | 3G6Z | 104 | 6954
ROCK1 | Rho-associated protein kinase 1 | 2ETR | 100 | 6293
SAHH | Adenosylhomocysteinase | 1LI4 | 62 | 3438
THB | Thyroid hormone receptor beta-1 | 1Q4X | 103 | 7349
TYSY | Thymidylate synthase | 1SYN | 109 | 6732
WEE1 | Serine/threonine-protein kinase WEE1 | 3BIZ | 102 | 6135
XIAP | Inhibitor of apoptosis protein 3 | 3HL5 | 100 | 5143
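One illustration of the duplicate-removal step described above is sketched below. It assumes RDKit and the DUD-E SMILES files (e.g., actives_final.ism); comparing the connectivity block of the InChIKey is an illustrative heuristic for grouping protonation-state variants, not necessarily the exact procedure used in this work.

```python
# Sketch: collapse actives that differ only in protonation state (illustrative).
from rdkit import Chem

def deduplicate_protomers(smiles_file):
    kept, seen = [], set()
    with open(smiles_file) as handle:
        for line in handle:
            parts = line.split()
            if not parts:
                continue
            mol = Chem.MolFromSmiles(parts[0])
            if mol is None:
                continue
            # The first 14 characters of the InChIKey hash only the skeleton
            # (connectivity), so protomers of the same ligand share this prefix.
            skeleton = Chem.MolToInchiKey(mol)[:14]
            if skeleton not in seen:
                seen.add(skeleton)
                kept.append(line.strip())
    return kept

if __name__ == "__main__":
    unique_actives = deduplicate_protomers("actives_final.ism")  # hypothetical path
    print(len(unique_actives), "unique actives kept")
```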

2.2. Virtual Screening

AutoDock Vina was employed for the molecular docking [5]. For each target, a box was defined so that the ligands were docked properly in the active site. In all the docking runs, the exhaustiveness was set to 8. The cocrystal ligand of each target was redocked into the binding site of that target, and the results are available in Supplementary Materials.
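As a minimal sketch of how such a run can be driven, the snippet below writes an AutoDock Vina configuration file and launches the docking; the box center and size are placeholders, exhaustiveness = 8 matches the setting above, and the vina executable is assumed to be on the PATH.

```python
# Sketch: write a Vina configuration file and dock one ligand (placeholder values).
import subprocess
from pathlib import Path

def dock_ligand(receptor_pdbqt, ligand_pdbqt, center, size, out_dir="docked"):
    Path(out_dir).mkdir(exist_ok=True)
    stem = Path(ligand_pdbqt).stem
    config = Path(out_dir) / f"{stem}.conf"
    config.write_text(
        f"receptor = {receptor_pdbqt}\n"
        f"ligand = {ligand_pdbqt}\n"
        f"center_x = {center[0]}\ncenter_y = {center[1]}\ncenter_z = {center[2]}\n"
        f"size_x = {size[0]}\nsize_y = {size[1]}\nsize_z = {size[2]}\n"
        "exhaustiveness = 8\n"
    )
    out_pose = Path(out_dir) / f"{stem}_out.pdbqt"
    subprocess.run(["vina", "--config", str(config), "--out", str(out_pose)], check=True)
    return out_pose

# Example call with placeholder box coordinates for one receptor/ligand pair:
# dock_ligand("ada.pdbqt", "ligand1.pdbqt", center=(14.5, -3.2, 22.1), size=(22, 22, 22))
```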

2.3. Rescoring

Four scoring functions and their combinations were evaluated in this study. These four scoring methods come from three different categories. The Vina scoring function (built into AutoDock Vina) and the AutoDock 4.2 scoring function are force-field based. ChemScore, a SYBYL built-in scoring function, is an empirical scoring function. DrugScore is a knowledge-based scoring function and is available as a standalone program. The best docked pose of each ligand according to the Vina scoring function was rescored by the other three scoring functions and also by all possible combinations of the four; thus, 11 consensus scorings were also applied (Tables 2 and 3).
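The 11 consensus schemes correspond to every combination of two, three, or four of the scoring functions; the short sketch below simply enumerates them to make the count explicit.

```python
# Sketch: enumerate all consensus combinations of the four scoring functions.
from itertools import combinations

scoring_functions = ["Vina", "AutoDock4.2", "ChemScore", "DrugScore"]

consensus_schemes = [
    combo
    for r in range(2, len(scoring_functions) + 1)
    for combo in combinations(scoring_functions, r)
]

for scheme in consensus_schemes:
    print(" + ".join(scheme))
print("Total consensus schemes:", len(consensus_schemes))  # 6 + 4 + 1 = 11
```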


Table 2: Average AUC of the ROC curve and enrichment factors (EF) for each of the 15 scoring strategies.

Scoring | AUC | EF20% | EF10% | EF2% | EF1% | EF0.2% | EF0.1%
– | 0.671 | 2.137 | 2.93 | 6.394 | 8.576 | 11.17 | 12.74
– | 0.61 | 1.855 | 2.33 | 3.567 | 4.007 | 4.513 | 3.54
– | 0.65 | 1.942 | 2.72 | 5.253 | 6.275 | 8.766 | 9.242
– | 0.623 | 1.831 | 2.5 | 4.441 | 4.949 | 7.301 | 8.866
– | 0.668 | 2.162 | 3.08 | 6.714 | 8.173 | 8.753 | 10.01
– | 0.667 | 2.174 | 3.01 | 6.537 | 8.768 | 9.746 | 9.515
– | 0.677 | 2.2 | 3.21 | 7.096 | 9.169 | 11.72 | 14
– | 0.661 | 2.088 | 3.01 | 5.989 | 7.25 | 8.371 | 7.76
– | 0.652 | 2.068 | 2.84 | 5.158 | 6.086 | 7.41 | 7.47
– | 0.656 | 2.087 | 2.93 | 6.233 | 7.28 | 7.564 | 7.415
– | 0.679 | 2.14 | 3.08 | 7.026 | 9.292 | 13.21 | 14.76
– | 0.671 | 2.192 | 3.05 | 6.074 | 8.562 | 11.72 | 15.66
– | 0.646 | 2.012 | 2.77 | 4.916 | 5.986 | 6.896 | 6.06
– | 0.631 | 1.895 | 2.53 | 4.322 | 5.057 | 7.324 | 6.865
– | 0.658 | 2.026 | 2.85 | 5.579 | 7.182 | 8.688 | 8.51


Table 3: Difference in average AUC and EF of each scoring strategy relative to the original Vina scoring (first row).

Scoring | AUC | EF20% | EF10% | EF2% | EF1% | EF0.2% | EF0.1%
– | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000 | 0.000
– | −0.061 | −0.282 | −0.600 | −2.827 | −4.569 | −6.662 | −9.201
– | −0.021 | −0.195 | −0.211 | −1.140 | −2.301 | −2.409 | −3.499
– | −0.048 | −0.306 | −0.432 | −1.953 | −3.627 | −3.874 | −3.875
– | −0.003 | 0.026 | 0.144 | 0.321 | −0.403 | −2.421 | −2.736
– | −0.004 | 0.037 | 0.074 | 0.143 | 0.192 | −1.429 | −3.226
– | 0.006 | 0.063 | 0.281 | 0.702 | 0.593 | 0.541 | 1.263
– | −0.010 | −0.049 | 0.072 | −0.405 | −1.325 | −2.804 | −4.982
– | −0.019 | −0.068 | −0.091 | −1.236 | −2.490 | −3.765 | −5.271
– | −0.015 | −0.050 | −0.003 | −0.161 | −1.296 | −3.611 | −5.326
– | 0.008 | 0.004 | 0.147 | 0.632 | 0.716 | 2.033 | 2.014
– | 0.000 | 0.055 | 0.119 | −0.320 | −0.014 | 0.543 | 2.920
– | −0.025 | −0.124 | −0.167 | −1.478 | −2.590 | −4.279 | −6.681
– | −0.040 | −0.242 | −0.399 | −2.072 | −3.519 | −3.850 | −5.877
– | −0.013 | −0.111 | −0.084 | −0.814 | −1.394 | −2.487 | −4.231

A previously described consensus scoring approach (the rank-by-number method [9]) was employed to summarize the results of multiple scoring functions. The rank-by-number consensus score is the average of the z-scaled scores calculated by each of the individual scoring functions. The individual z-scaled scoring function values (z-scores) are computed by

$$z = \frac{x - \mu}{\sigma}$$

where $x$ is the score given by an individual scoring function, $\mu$ is the mean value, and $\sigma$ is the standard deviation of that scoring function over the entire set.
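A minimal sketch of this rank-by-number scheme is given below; it assumes the scores of the screened compounds are collected in a table with one column per scoring function (the column names and demo values are illustrative).

```python
# Sketch: rank-by-number consensus = mean of the z-scaled scores per compound.
import pandas as pd

def rank_by_number(scores: pd.DataFrame) -> pd.Series:
    """scores: one row per docked compound, one column per scoring function."""
    z_scaled = (scores - scores.mean()) / scores.std(ddof=0)
    return z_scaled.mean(axis=1)

# Demo with made-up values for three compounds and two scoring functions.
demo = pd.DataFrame(
    {"Vina": [-9.1, -7.4, -6.0], "DrugScore": [-120.0, -95.0, -60.0]},
    index=["cpd1", "cpd2", "cpd3"],
)
print(rank_by_number(demo))
```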

2.4. Calculation of Binding Site Descriptors

Binding site environment properties were retrieved from the PLIC database [18], which provides clusters of binding sites. It uses Fpocket [19] and LPC [20] to generate the following binding site descriptors: pocket volume, number of alpha spheres, mean alpha sphere radius, proportion of apolar alpha spheres, mean local hydrophobic density, hydrophobicity score, volume score, charge score, proportion of polar atoms, alpha sphere density, maximum distance between the center of mass and all alpha spheres, Maximum Theoretical Shape Complementarity, observed shape complementarity, and normalized shape complementarity.

2.5. Statistical Analysis

To assess the performance of each scoring function and of the consensus scorings, two parameters were used: the area under the curve (AUC) of the ROC (receiver operating characteristic) curve and the enrichment factor (EF) at different levels. The scoring functions were applied to the docked active and decoy compounds, and the ROC curve and EF were used to quantify how well each scoring function discriminates active compounds from decoys. An increase in the AUC of the ROC curve can be taken as an indicator of improved discrimination of true ligands from decoys. The AUC takes a value between 0 and 1: AUC = 0.5 means that the method of interest performs, on average, like random selection, whereas AUC = 1 means complete discrimination between true and false cases (actives and decoys). EF is defined as the fraction of active compounds found divided by the fraction of the screened library:

$$\mathrm{EF}_{x\%} = \frac{n_{\text{actives in top } x\%}/N_{\text{actives}}}{x/100}$$

EF1% and EF2% show the ability of a particular scoring method to retrieve true ligands at high ranks among the virtual screening results.
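The sketch below shows how both metrics can be computed for one ranked screening deck; it assumes that lower (more negative) docking scores are better, as for Vina binding energies, and uses scikit-learn only for the ROC AUC.

```python
# Sketch: ROC AUC and enrichment factor for a list of scores and activity labels.
import numpy as np
from sklearn.metrics import roc_auc_score

def screening_metrics(scores, is_active, fraction=0.01):
    scores = np.asarray(scores, dtype=float)
    is_active = np.asarray(is_active, dtype=bool)
    order = np.argsort(scores)                 # best (most negative) first
    n_top = max(1, int(round(fraction * len(scores))))
    actives_in_top = int(is_active[order][:n_top].sum())
    # EF = (fraction of actives recovered) / (fraction of the library screened)
    ef = (actives_in_top / is_active.sum()) / fraction
    auc = roc_auc_score(is_active, -scores)    # negate: higher = more active-like
    return auc, ef

# Example: auc, ef1 = screening_metrics(vina_scores, labels, fraction=0.01)
```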

The significance of the difference between the AUCs of two ROC curves was assessed using the online tool at http://vassarstats.net/roc_comp.html. Other statistical tests and plotting were done using R (R: a language and environment for statistical computing; R Foundation for Statistical Computing, Vienna, Austria; http://www.R-project.org/), including the enrichvs and ROCR packages.

3. Results

The average AUC of the ROC curve for each scoring method and its difference after rescoring are presented in Tables 2 and 3, respectively; they show the overall performance of each scoring method. The individual AUCs of the ROC curve are shown in Table 4, and the details for each receptor as well as the AutoDock Vina configuration files are presented in Supplementary Materials. The correlation between the different scoring strategies and the binding site descriptors is shown in Table 5. The screening power of the original AutoDock Vina scoring and of DrugScore demonstrated a good correlation with the values of both the Maximum Theoretical Shape Complementarity (MTSC) and the Maximum Distance from Center of Mass and all Alpha spheres (MDCMA). Figure 1 demonstrates this fair correlation between DrugScore performance and the binding site descriptor MTSC. In Table 6, the protein targets whose AUC of the ROC curve significantly increased or decreased after rescoring by DrugScore are highlighted (Figure 2). According to the various classification plots (data not shown), these two groups can be separated on the basis of two descriptors, volume score and MTSC (Figure 3).
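For illustration, a correlation table in the spirit of Table 5 can be assembled as sketched below, pairing the per-target AUC of one scoring strategy with the PLIC/Fpocket descriptors; the data frames and column names are placeholders, not the files used in this study.

```python
# Sketch: Pearson correlation between per-target AUC and each binding site descriptor.
import pandas as pd
from scipy.stats import pearsonr

def descriptor_correlations(auc_by_target: pd.Series, descriptors: pd.DataFrame) -> pd.Series:
    """auc_by_target: AUC per receptor; descriptors: one row per receptor."""
    common = descriptors.index.intersection(auc_by_target.index)
    corr = {
        name: pearsonr(auc_by_target.loc[common], descriptors.loc[common, name])[0]
        for name in descriptors.columns
    }
    return pd.Series(corr).sort_values(ascending=False)

# correlations = descriptor_correlations(drugscore_auc, plic_descriptors)
# print(correlations.head())  # e.g. MTSC and MDCMA near the top for DrugScore
```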



Table 4: AUC of the ROC curve for each receptor under each of the 15 scoring strategies (columns).

WEE1 | 0.949 | 0.828 | 0.841 | 0.555 | 0.917 | 0.909 | 0.927 | 0.915 | 0.853 | 0.916 | 0.930 | 0.910 | 0.852 | 0.776 | 0.800
FA7 | 0.917 | 0.890 | 0.876 | 0.878 | 0.929 | 0.936 | 0.926 | 0.927 | 0.909 | 0.935 | 0.929 | 0.920 | 0.908 | 0.897 | 0.898
MAPK2 | 0.886 | 0.775 | 0.776 | 0.717 | 0.877 | 0.850 | 0.877 | 0.888 | 0.848 | 0.861 | 0.849 | 0.891 | 0.809 | 0.826 | 0.823
KIF11 | 0.858 | 0.845 | 0.806 | 0.840 | 0.860 | 0.860 | 0.856 | 0.865 | 0.846 | 0.867 | 0.852 | 0.864 | 0.842 | 0.849 | 0.835
TYSY | 0.847 | 0.607 | 0.698 | 0.770 | 0.781 | 0.768 | 0.820 | 0.778 | 0.726 | 0.760 | 0.822 | 0.829 | 0.675 | 0.710 | 0.762
PYRD | 0.826 | 0.749 | 0.768 | 0.730 | 0.791 | 0.807 | 0.795 | 0.784 | 0.767 | 0.803 | 0.817 | 0.789 | 0.778 | 0.747 | 0.763
PUR2 | 0.819 | 0.393 | 0.856 | 0.691 | 0.749 | 0.762 | 0.827 | 0.667 | 0.702 | 0.641 | 0.869 | 0.777 | 0.696 | 0.557 | 0.801
MK01 | 0.806 | 0.767 | 0.632 | 0.629 | 0.748 | 0.777 | 0.719 | 0.774 | 0.702 | 0.816 | 0.747 | 0.748 | 0.726 | 0.721 | 0.639
AKT2 | 0.778 | 0.744 | 0.699 | 0.803 | 0.801 | 0.788 | 0.810 | 0.794 | 0.795 | 0.776 | 0.786 | 0.806 | 0.765 | 0.785 | 0.799
THB | 0.777 | 0.484 | 0.490 | 0.578 | 0.632 | 0.630 | 0.665 | 0.700 | 0.504 | 0.693 | 0.670 | 0.777 | 0.480 | 0.523 | 0.510
MK10 | 0.746 | 0.701 | 0.653 | 0.598 | 0.694 | 0.721 | 0.682 | 0.697 | 0.666 | 0.737 | 0.716 | 0.684 | 0.692 | 0.659 | 0.633
FKB1A | 0.693 | 0.755 | 0.657 | 0.668 | 0.730 | 0.745 | 0.697 | 0.734 | 0.724 | 0.755 | 0.702 | 0.690 | 0.738 | 0.736 | 0.676
INHA | 0.688 | 0.680 | 0.715 | 0.693 | 0.719 | 0.722 | 0.719 | 0.708 | 0.712 | 0.705 | 0.723 | 0.702 | 0.712 | 0.696 | 0.714
KITH | 0.688 | 0.532 | 0.699 | 0.621 | 0.646 | 0.655 | 0.667 | 0.628 | 0.632 | 0.631 | 0.692 | 0.654 | 0.636 | 0.592 | 0.658
SAHH | 0.677 | 0.290 | 0.708 | 0.615 | 0.590 | 0.575 | 0.719 | 0.516 | 0.539 | 0.478 | 0.726 | 0.685 | 0.512 | 0.391 | 0.694
ROCK1 | 0.666 | 0.660 | 0.594 | 0.654 | 0.668 | 0.662 | 0.659 | 0.678 | 0.657 | 0.680 | 0.642 | 0.674 | 0.645 | 0.666 | 0.641
CXCR4 | 0.661 | 0.726 | 0.604 | 0.723 | 0.706 | 0.687 | 0.685 | 0.729 | 0.706 | 0.718 | 0.640 | 0.712 | 0.682 | 0.735 | 0.681
XIAP | 0.632 | 0.676 | 0.789 | 0.678 | 0.724 | 0.739 | 0.722 | 0.681 | 0.741 | 0.669 | 0.741 | 0.668 | 0.772 | 0.694 | 0.742
RENI | 0.620 | 0.686 | 0.781 | 0.588 | 0.694 | 0.733 | 0.688 | 0.638 | 0.707 | 0.664 | 0.742 | 0.605 | 0.759 | 0.642 | 0.703
PLK1 | 0.619 | 0.628 | 0.668 | 0.548 | 0.628 | 0.653 | 0.625 | 0.605 | 0.625 | 0.629 | 0.659 | 0.588 | 0.659 | 0.592 | 0.620
CP2C9 | 0.613 | 0.604 | 0.552 | 0.563 | 0.597 | 0.607 | 0.588 | 0.605 | 0.582 | 0.622 | 0.593 | 0.597 | 0.588 | 0.587 | 0.564
PA2GA | 0.607 | 0.795 | 0.692 | 0.814 | 0.791 | 0.760 | 0.768 | 0.783 | 0.812 | 0.746 | 0.696 | 0.744 | 0.771 | 0.821 | 0.801
PYGM | 0.594 | 0.597 | 0.561 | 0.446 | 0.561 | 0.597 | 0.543 | 0.555 | 0.540 | 0.608 | 0.583 | 0.530 | 0.588 | 0.522 | 0.502
NOS1 | 0.570 | 0.551 | 0.506 | 0.492 | 0.545 | 0.545 | 0.533 | 0.570 | 0.533 | 0.570 | 0.533 | 0.569 | 0.533 | 0.551 | 0.506
DEF | 0.541 | 0.262 | 0.632 | 0.578 | 0.502 | 0.465 | 0.602 | 0.456 | 0.485 | 0.384 | 0.602 | 0.569 | 0.427 | 0.415 | 0.621
GRIK1 | 0.538 | 0.464 | 0.492 | 0.442 | 0.483 | 0.500 | 0.493 | 0.480 | 0.460 | 0.503 | 0.518 | 0.495 | 0.480 | 0.439 | 0.467
NRAM | 0.526 | 0.522 | 0.608 | 0.443 | 0.537 | 0.574 | 0.537 | 0.496 | 0.537 | 0.530 | 0.581 | 0.478 | 0.589 | 0.481 | 0.536
COMT | 0.525 | 0.371 | 0.363 | 0.736 | 0.575 | 0.398 | 0.645 | 0.646 | 0.593 | 0.431 | 0.439 | 0.750 | 0.340 | 0.688 | 0.690
ADA | 0.520 | 0.377 | 0.500 | 0.435 | 0.438 | 0.444 | 0.479 | 0.428 | 0.417 | 0.430 | 0.509 | 0.474 | 0.416 | 0.395 | 0.459
HXK4 | 0.515 | 0.552 | 0.590 | 0.534 | 0.550 | 0.554 | 0.549 | 0.533 | 0.563 | 0.532 | 0.555 | 0.524 | 0.573 | 0.545 | 0.566
MCR | 0.498 | 0.656 | 0.639 | 0.563 | 0.628 | 0.634 | 0.571 | 0.589 | 0.691 | 0.584 | 0.571 | 0.495 | 0.699 | 0.665 | 0.634
GLCM | 0.486 | 0.471 | 0.548 | 0.541 | 0.520 | 0.506 | 0.536 | 0.506 | 0.528 | 0.484 | 0.517 | 0.520 | 0.515 | 0.507 | 0.559
HS90A | 0.250 | 0.321 | 0.393 | 0.369 | 0.308 | 0.290 | 0.295 | 0.316 | 0.346 | 0.270 | 0.296 | 0.294 | 0.338 | 0.310 | 0.378


Table 5: Correlation between the screening power of each of the 15 scoring strategies (rows) and the binding site descriptors (columns).

Pocket volume | Number of alpha spheres | Mean alpha sphere radius | Proportion of apolar alpha spheres | Mean local hydrophobic density | Hydrophobicity score | Volume score | Charge score | Proportion of polar atoms | Alpha sphere density | Max Dist. from Center of Mass and all Alpha spheres | Maximum Theoretical Shape Complementarity | Observed shape complementarity | Normalized shape complementarity
0.415 | 0.418 | −0.055 | 0.001 | 0.249 | −0.108 | −0.190 | 0.170 | 0.098 | 0.458 | 0.594 | 0.532 | 0.439 | 0.085
0.509 | 0.400 | 0.068 | 0.126 | 0.266 | 0.131 | −0.153 | 0.320 | −0.204 | 0.445 | 0.459 | 0.462 | 0.304 | −0.042
0.510 | 0.387 | 0.111 | −0.066 | 0.161 | −0.011 | 0.014 | 0.171 | 0.141 | 0.366 | 0.555 | 0.719 | 0.474 | −0.022
0.234 | 0.355 | 0.034 | 0.095 | 0.217 | 0.103 | 0.135 | 0.268 | −0.118 | 0.207 | 0.378 | 0.413 | 0.303 | 0.030
0.483 | 0.424 | 0.061 | 0.058 | 0.274 | 0.001 | −0.057 | 0.307 | −0.041 | 0.419 | 0.586 | 0.607 | 0.415 | −0.005
0.541 | 0.471 | 0.064 | 0.031 | 0.276 | 0.017 | −0.142 | 0.274 | −0.011 | 0.483 | 0.620 | 0.665 | 0.464 | 0.000
0.417 | 0.387 | 0.043 | 0.010 | 0.234 | −0.081 | −0.011 | 0.243 | 0.051 | 0.363 | 0.573 | 0.588 | 0.425 | 0.035
0.443 | 0.410 | 0.028 | 0.104 | 0.295 | 0.011 | −0.083 | 0.328 | −0.105 | 0.425 | 0.557 | 0.515 | 0.365 | 0.005
0.476 | 0.378 | 0.085 | 0.080 | 0.244 | 0.068 | 0.028 | 0.334 | −0.095 | 0.361 | 0.519 | 0.582 | 0.356 | −0.056
0.515 | 0.468 | 0.031 | 0.073 | 0.302 | 0.022 | −0.202 | 0.292 | −0.073 | 0.507 | 0.596 | 0.562 | 0.413 | 0.015
0.495 | 0.454 | 0.031 | −0.025 | 0.241 | −0.074 | −0.112 | 0.187 | 0.112 | 0.453 | 0.631 | 0.684 | 0.506 | 0.045
0.331 | 0.354 | −0.012 | 0.043 | 0.241 | −0.089 | −0.036 | 0.247 | 0.011 | 0.349 | 0.532 | 0.459 | 0.365 | 0.064
0.554 | 0.426 | 0.100 | 0.033 | 0.234 | 0.087 | −0.086 | 0.292 | −0.048 | 0.439 | 0.553 | 0.651 | 0.409 | −0.058
0.412 | 0.339 | 0.061 | 0.148 | 0.250 | 0.116 | 0.022 | 0.346 | −0.212 | 0.337 | 0.436 | 0.439 | 0.272 | −0.047
0.379 | 0.336 | 0.075 | 0.023 | 0.184 | −0.002 | 0.143 | 0.269 | 0.009 | 0.260 | 0.479 | 0.567 | 0.366 | −0.017


Table 6: Change in AUC of the ROC curve after rescoring by DrugScore for each receptor (DrugScore AUC minus original Vina AUC).

Receptor | Change in AUC
THB | −0.2876
MK01 | −0.1738
COMT | −0.1625
TYSY | −0.1491
WEE1 | −0.1083
MK10 | −0.0929
AKT2 | −0.0787
ROCK1 | −0.0715
NOS1 | −0.0641
CP2C9 | −0.0605
PYRD | −0.0582
CXCR4 | −0.0569
KIF11 | −0.0518
GRIK1 | −0.0467
FA7 | −0.0403
FKB1A | −0.036
PYGM | −0.0327
ADA | −0.02
KITH | 0.0107
INHA | 0.0262
SAHH | 0.0303
PUR2 | 0.0364
PLK1 | 0.0492
GLCM | 0.062
HXK4 | 0.0753
NRAM | 0.0812
PA2GA | 0.0854
DEF | 0.0906
MCR | 0.1411
HS90A | 0.1439
XIAP | 0.1577
RENI | 0.1605

4. Discussion

The calculated performance of AutoDock Vina on individual targets can be used to decide whether to select this docking engine for virtual screening on a specific target. Furthermore, the results showed a slight general improvement in discrimination between decoys and ligands when using the consensus rescoring method consisting of the Vina and DrugScore scoring functions. Binding site analysis showed that DrugScore significantly improved the discrimination power of AutoDock Vina for receptors that had both a high volume score and a high MTSC. In addition, the screening powers of AutoDock Vina and DrugScore showed significant correlations with MTSC and MDCMA.

AutoDock Vina is free for academic use and showed good scoring power in a recent study on a large and diverse data set [4]. Thus, it was selected as the docking engine for pose prediction in the present study. The screening power of AutoDock Vina was correlated with MTSC and MDCMA. The reported AUC of the ROC curve and enrichment factors can be used to anticipate AutoDock Vina performance on each target. Furthermore, the MTSC and MDCMA values could be used as possible indicators of the success of AutoDock Vina in a virtual screening on a specific target protein. It has been suggested [21] that AutoDock Vina had a better average performance than DOCK [22] in virtual screening against 31 protein targets. As AutoDock Vina is open source and performs well compared with other docking engines, improvements of the AutoDock Vina code in different respects, such as parallel execution [23], have been made in recent years.

It has been suggested that the performance of docking programs and scoring functions is target dependent [1, 4]. The nature of the active site of the protein, the choice of scoring functions, and the set of ligands used for the comparison all affect the performance in scoring and ranking compounds [11]. Some studies concluded that consensus scoring (rank-by-number, consisting of three or four scoring functions) outperformed individual scoring functions [9]. In most studies conducted on more diverse and larger data sets, there is no strong correlation between affinity and scoring function predictions [4, 10]. In this study, only the screening power of the scoring functions, that is, their ability to rank actives against decoys, was assessed. Overall, the consensus of the Vina and DrugScore scoring functions (the original Vina scores combined with DrugScore rescoring) slightly improved the screening metrics (AUC of the ROC curve and EF), but the improvement was not statistically significant.

Rescoring by DrugScore produced the largest number of cases with significantly increased or decreased screening power (assessed by the change in the AUC of the ROC curve) with respect to the original Vina scoring. Therefore, these data were used to search for binding site descriptors that could predict whether DrugScore rescoring would improve the original virtual screening results. After exploring different descriptors, it was found that a simple model based on two descriptors (volume score and MTSC) could fairly predict the improvement of virtual screening results after rescoring by DrugScore for a given target protein. DrugScore has also been successful in other rescoring campaigns [8, 24] and was one of the best performers in a ranking power assessment of 16 scoring functions [7].
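The separation reported here was made visually on classification plots; purely as an illustration (not the author's actual model), a simple two-descriptor classifier on volume score and MTSC could be set up as below, where the training values are hypothetical placeholders.

```python
# Illustrative sketch only: logistic regression on two binding site descriptors.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Columns: [volume score, MTSC]; label 1 = AUC significantly improved by
# DrugScore rescoring, 0 = not improved. Values below are hypothetical.
X = np.array([[4.2, 0.81], [5.0, 0.77], [3.1, 0.55], [2.8, 0.49]])
y = np.array([1, 1, 0, 0])

model = LogisticRegression().fit(X, y)
print(model.predict([[4.5, 0.70]]))        # predicted class for a new target
print(model.predict_proba([[4.5, 0.70]]))  # class probabilities
```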

MTSC indicates the shape complementarity of a binding site with its specific cocrystallized ligand. Here, it was shown that the performance of DrugScore, as well as of AutoDock Vina docking and subsequent scoring, correlates with the value of MTSC. This could be due to a better performance of the AutoDock Vina docking algorithm in finding near-native poses of active compounds in binding sites with high MTSC. The values of the volume score descriptor were correlated with the improvement of virtual screening results by DrugScore rescoring. This could be explained by a better performance of DrugScore when the number of ligand-protein interactions is higher, as in larger binding sites.

5. Conclusion

The results, consistent with previous studies, suggest that the performance of docking and scoring functions is target specific. Working on new scoring functions that include terms for aromatic-aromatic, π-cation, or halogen-protein interactions has been suggested. A correlation was found between the screening power of AutoDock Vina and DrugScore and two binding site descriptors, MTSC and MDCMA. The improvement after rescoring with DrugScore was predicted by two descriptors: volume score and MTSC. The ultimate goal of this study was to determine which of the scoring functions, or which combination of them, would yield the best results in terms of enrichment when used in a virtual screening study. The results could provide useful information for selecting the most appropriate targets for the use of AutoDock Vina and/or DrugScore in future studies.

Conflicts of Interest

The author declares that there are no conflicts of interest regarding the publication of this article.

Acknowledgments

This work was supported in part by MUMS. The author gratefully acknowledges the Sheikh Bahaei National High Performance Computing Center (SBNHPCC) for providing computing facilities. SBNHPCC is supported by the scientific and technological department of the presidential office and Isfahan University of Technology (IUT).

Supplementary Materials

Supplementary Materials contain a folder (configuration files) with the AutoDock Vina configuration files, a Microsoft Excel file (RMSD-redocking) with all of the RMSD values obtained after redocking the cocrystal ligands into the corresponding target active sites, and another Microsoft Excel file (rescoring-details) with the detailed results of the rescoring study for each protein target.

References

  1. E. Yuriev, J. Holien, and P. A. Ramsland, "Improvements, trends, and new ideas in molecular docking: 2012-2013 in review," Journal of Molecular Recognition, vol. 28, no. 10, pp. 581–604, 2015.
  2. M. Danishuddin and A. U. Khan, "Structure based virtual screening to discover putative drug candidates: necessary considerations and successful case studies," Methods, vol. 71, pp. 135–145, 2015.
  3. D. Plewczynski, M. Łaźniewski, R. Augustyniak, and K. Ginalski, "Can we trust docking results? Evaluation of seven commonly used programs on PDBbind database," Journal of Computational Chemistry, vol. 32, no. 4, pp. 742–755, 2011.
  4. Z. Wang, H. Sun, X. Yao et al., "Comprehensive evaluation of ten docking programs on a diverse set of protein-ligand complexes: the prediction accuracy of sampling power and scoring power," Physical Chemistry Chemical Physics, vol. 18, no. 18, pp. 12964–12975, 2016.
  5. O. Trott and A. J. Olson, "AutoDock Vina: improving the speed and accuracy of docking with a new scoring function, efficient optimization and multithreading," Journal of Computational Chemistry, vol. 31, no. 2, pp. 455–461, 2010.
  6. S.-Y. Huang, S. Z. Grinter, and X. Zou, "Scoring functions and their evaluation methods for protein-ligand docking: recent advances and future directions," Physical Chemistry Chemical Physics, vol. 12, no. 40, pp. 12899–12908, 2010.
  7. T. Cheng, X. Li, Y. Li, Z. Liu, and R. Wang, "Comparative assessment of scoring functions on a diverse test set," Journal of Chemical Information and Modeling, vol. 49, no. 4, pp. 1079–1093, 2009.
  8. R. Wang, Y. Lu, X. Fang, and S. Wang, "An extensive test of 14 scoring functions using the PDBbind refined set of 800 protein-ligand complexes," Journal of Chemical Information and Computer Sciences, vol. 44, no. 6, pp. 2114–2125, 2004.
  9. R. Wang and S. Wang, "How does consensus scoring work for virtual library screening? An idealized computer experiment," Journal of Chemical Information and Computer Sciences, vol. 41, no. 3–6, pp. 1422–1426, 2001.
  10. G. L. Warren, C. W. Andrews, A.-M. Capelli et al., "A critical assessment of docking programs and scoring functions," Journal of Medicinal Chemistry, vol. 49, no. 20, pp. 5912–5931, 2006.
  11. W. Xu, A. J. Lucke, and D. P. Fairlie, "Comparing sixteen scoring functions for predicting biological activities of ligands for protein targets," Journal of Molecular Graphics and Modelling, vol. 57, pp. 76–88, 2015.
  12. M. M. Mysinger, M. Carchia, J. J. Irwin, and B. K. Shoichet, "Directory of useful decoys, enhanced (DUD-E): better ligands and decoys for better benchmarking," Journal of Medicinal Chemistry, vol. 55, no. 14, pp. 6582–6594, 2012.
  13. P. F. W. Stouten and R. T. Kroemer, "Docking and scoring," in Comprehensive Medicinal Chemistry II, pp. 255–281, Elsevier Ltd., 2007.
  14. M. D. Eldridge, C. W. Murray, T. R. Auton, G. V. Paolini, and R. P. Mee, "Empirical scoring functions: I. The development of a fast empirical scoring function to estimate the binding affinity of ligands in receptor complexes," Journal of Computer-Aided Molecular Design, vol. 11, no. 5, pp. 425–445, 1997.
  15. G. Neudert and G. Klebe, "DSX: a knowledge-based scoring function for the assessment of protein-ligand complexes," Journal of Chemical Information and Modeling, vol. 51, no. 10, pp. 2731–2745, 2011.
  16. G. M. Morris, R. Huey, W. Lindstrom et al., "AutoDock4 and AutoDockTools4: automated docking with selective receptor flexibility," Journal of Computational Chemistry, vol. 30, no. 16, pp. 2785–2791, 2009.
  17. M. A. Khamis, W. Gomaa, and W. F. Ahmed, "Machine learning in computational docking," Artificial Intelligence in Medicine, vol. 63, no. 3, pp. 135–152, 2015.
  18. P. Anand, D. Nagarajan, S. Mukherjee, and N. Chandra, "PLIC: protein-ligand interaction clusters," Database: The Journal of Biological Databases and Curation, vol. 2014, Article ID bau029, 2014.
  19. V. Le Guilloux, P. Schmidtke, and P. Tuffery, "Fpocket: an open source platform for ligand pocket detection," BMC Bioinformatics, vol. 10, article 168, 2009.
  20. V. Sobolev, E. Eyal, S. Gerzon et al., "SPACE: a suite of tools for protein structure prediction and analysis based on complementarity and environment," Nucleic Acids Research, vol. 33, no. 2, pp. W39–W43, 2005.
  21. A. P. Carregal, F. V. Maciel, and J. B. Carregal, "Docking-based virtual screening of Brazilian natural compounds using the OOMT as the pharmacological target database," Journal of Molecular Modeling, vol. 23, no. 111, pp. 1–9, 2017.
  22. T. J. A. Ewing, S. Makino, A. G. Skillman, and I. D. Kuntz, "DOCK 4.0: search strategies for automated molecular docking of flexible molecule databases," Journal of Computer-Aided Molecular Design, vol. 15, no. 5, pp. 411–428, 2001.
  23. M. M. Jaghoori, B. Bleijlevens, and S. D. Olabarriaga, "1001 ways to run AutoDock Vina for virtual screening," Journal of Computer-Aided Molecular Design, vol. 30, no. 3, pp. 237–249, 2016.
  24. J. Shamsara, "Evaluation of 11 scoring functions performance on matrix metalloproteinases," International Journal of Medicinal Chemistry and Analysis, vol. 2014, Article ID 162150, 9 pages, 2014.

Copyright © 2018 Jamal Shamsara. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
