Mathematical Problems in Engineering
Volume 2017, Article ID 5452396, 13 pages
https://doi.org/10.1155/2017/5452396
Research Article

Assisting in Auditing of Buffer Overflow Vulnerabilities via Machine Learning

School of Electronic Science and Engineering, National University of Defense Technology (NUDT), Changsha, Hunan, China

Correspondence should be addressed to Chao Feng; chaofeng@nudt.edu.cn

Received 1 July 2017; Revised 10 October 2017; Accepted 27 November 2017; Published 21 December 2017

Academic Editor: Nazrul Islam

Copyright © 2017 Qingkun Meng et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

References

  1. W. Le and M. L. Soffa, “Generating analyses for detecting faults in path segments,” in Proceedings of the 20th International Symposium on Software Testing and Analysis, ISSTA 2011, pp. 320–330, Canada, July 2011.
  2. C. Cadar and K. Sen, “Symbolic execution for software testing: Three decades later,” Communications of the ACM, vol. 56, no. 2, pp. 82–90, 2013.
  3. K. J. Kratkiewicz, Evaluating Static Analysis Tools for Detecting Buffer Overflows in C Code, Harvard University, Cambridge, MA, USA, 2005.
  4. F. Yamaguchi, N. Golde, D. Arp, and K. Rieck, “Modeling and discovering vulnerabilities with code property graphs,” in Proceedings of the 35th IEEE Symposium on Security and Privacy, SP 2014, pp. 590–604, San Jose, CA, USA, May 2014.
  5. L. Moonen, “Generating robust parsers using island grammars,” in Proceedings of the Eighth Working Conference on Reverse Engineering, Stuttgart, Germany, October 2001.
  6. F. Yamaguchi, M. Lottmann, and K. Rieck, “Generalized vulnerability extrapolation using abstract syntax trees,” in Proceedings of the 28th Annual Computer Security Applications Conference (ACSAC '12), pp. 359–368, ACM, Orlando, FL, USA, December 2012.
  7. CVE-2016-9537, https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2016-9537.
  8. libtiff-4.0.6, https://github.com/vadz/libtiff/releases/tag/Release-v4-0-6.
  9. M. Zitser, R. Lippmann, and T. Leek, “Testing static analysis tools using exploitable buffer overflows from open source code,” in Proceedings of the 12th ACM SIGSOFT International Symposium on Foundations of Software Engineering, p. 97, Newport Beach, CA, USA, October 2004.
  10. M. A. Rodriguez and P. Neubauer, “The Graph Traversal Pattern,” in Graph Data Management: Techniques and Applications, S. Sakr and E. Pardede, Eds., pp. 29–46, IGI Global, Hershey, PA, USA, 1st edition, 2012.
  11. scikit-learn, http://scikit-learn.org/stable/.
  12. S. V. Stehman, “Selecting and interpreting measures of thematic classification accuracy,” Remote Sensing of Environment, vol. 62, no. 1, pp. 77–89, 1997.
  13. B. M. Padmanabhuni and H. B. K. Tan, “Predicting buffer overflow vulnerabilities through mining light-weight static code attributes,” in Proceedings of the 25th IEEE International Symposium on Software Reliability Engineering Workshops, ISSREW 2014, pp. 317–322, Naples, Italy, November 2014.
  14. Flawfinder, https://www.dwheeler.com/flawfinder/.
  15. Rats, https://code.google.com/archive/p/rough-auditing-tool-for-security/.
  16. Splint, http://splint.org.
  17. Y. Xie, A. Chou, and D. Engler, “ARCHER: Using Symbolic, Path-Sensitive Analysis to Detect Memory Access Errors,” in Proceedings of the 11th ACM SIGSOFT International Symposium on Foundations of Software Engineering, Helsinki, Finland, September 2003.
  18. L. Wang, Q. Zhang, and P. Zhao, “Automated detection of code vulnerabilities based on program analysis and model checking,” in Proceedings of the 8th IEEE International Working Conference on Source Code Analysis and Manipulation, SCAM 2008, pp. 165–173, Beijing, China, September 2008.
  19. Coverity, https://scan.coverity.com/.
  20. Fortify, http://www.fortify.net/.
  21. CodeSonar, https://www.grammatech.com/products/codesonar.
  22. A. Rebert, S. K. Cha, T. Avgerinos et al., “Optimizing Seed Selection for Fuzzing,” in Proceedings of the 23rd USENIX Security Symposium, San Diego, CA, USA, August 2014.
  23. M. Woo, S. K. Cha, S. Gottlieb, and D. Brumley, “Scheduling Black-box Mutational Fuzzing,” in Proceedings of the 2013 ACM SIGSAC Conference on Computer and Communications Security, Berlin, Germany, November 2013.
  24. W. Wang, H. Sun, and Q. Zeng, “SeededFuzz: Selecting and Generating Seeds for Directed Fuzzing,” in Proceedings of the 10th International Symposium on Theoretical Aspects of Software Engineering, TASE 2016, Shanghai, China, July 2016.
  25. L. A. Clarke, “A program testing system,” in Proceedings of the 1976 ACM Annual Conference, pp. 488–491, Houston, TX, USA, October 1976.
  26. P. Godefroid, N. Klarlund, and K. Sen, “DART: Directed Automated Random Testing,” in Proceedings of the 2005 ACM SIGPLAN Conference on Programming Language Design and Implementation, PLDI 2005, pp. 213–223, Chicago, IL, USA, June 2005.
  27. K. Sen and G. Agha, “CUTE and jCUTE: Concolic Unit Testing and Explicit Path Model-Checking Tools,” in Proceedings of the 18th International Conference on Computer-Aided Verification, Seattle, WA, USA, August 2006.
  28. J. Burnim and K. Sen, “Heuristics for scalable dynamic test generation,” in Proceedings of the 23rd IEEE/ACM International Conference on Automated Software Engineering, ASE 2008, pp. 443–446, L'Aquila, Italy, September 2008.
  29. C. Cadar, D. Dunbar, and D. Engler, “KLEE: Unassisted and automatic generation of high-coverage tests for complex systems programs,” in Proceedings of the 8th USENIX Conference on Operating Systems Design and Implementation, OSDI 2008, San Diego, CA, USA, December 2008.
  30. B. Elkarablieh, P. Godefroid, and M. Y. Levin, “Precise pointer reasoning for dynamic test generation,” in Proceedings of the 18th International Symposium on Software Testing and Analysis, ISSTA 2009, Chicago, IL, USA, July 2009.
  31. V. Chipounov, V. Kuznetsov, and G. Candea, “S2E: A platform for in-vivo multi-path analysis of software systems,” in Proceedings of the 16th International Conference on Architectural Support for Programming Languages and Operating Systems, ASPLOS 2011, pp. 265–278, Newport Beach, CA, USA, March 2011.
  32. S. Rawat and L. Mounier, “An evolutionary computing approach for hunting buffer overflow vulnerabilities: A case of aiming in dim light,” in Proceedings of the 6th European Conference on Computer Network Defense, EC2ND 2010, pp. 37–45, Berlin, Germany, October 2010.
  33. S. Rawat and L. Mounier, “Finding buffer overflow inducing loops in binary executables,” in Proceedings of the 2012 IEEE 6th International Conference on Software Security and Reliability, SERE 2012, pp. 177–186, Waikiki, HI, USA, June 2012.
  34. L. Li, C. Cifuentes, and N. Keynes, “Practical and effective symbolic analysis for buffer overflow detection,” in Proceedings of the 18th ACM SIGSOFT International Symposium on the Foundations of Software Engineering, FSE-18, pp. 317–326, Santa Fe, NM, USA, November 2010.
  35. I. Haller, A. Slowinska, M. Neugschwandtner, and H. Bos, “Dowsing for Overflows: A Guided Fuzzer to Find Buffer Boundary Violations,” in Proceedings of the 22nd USENIX Security Symposium, Washington, DC, USA, August 2013.
  36. F. Yamaguchi, C. Wressnegger, and H. Gascon, “Chucky: Exposing missing checks in source code for vulnerability discovery,” in Proceedings of the 2013 ACM SIGSAC Conference on Computer and Communications Security, CCS 2013, pp. 499–510, Berlin, Germany, November 2013.
  37. F. Yamaguchi, A. Maier, H. Gascon, and K. Rieck, “Automatic inference of search patterns for taint-style vulnerabilities,” in Proceedings of the 36th IEEE Symposium on Security and Privacy, SP 2015, pp. 797–812, San Jose, CA, USA, May 2015.
  38. B. M. Padmanabhuni and H. B. K. Tan, “Auditing buffer overflow vulnerabilities using hybrid static-dynamic analysis,” in Proceedings of the 38th Annual IEEE Computer Software and Applications Conference, COMPSAC 2014, pp. 394–399, Vasteras, Sweden, July 2014.