ABSTRACT
I. INTRODUCTION
II. BACKGROUND
III. DESIGN
IV. MODELING
V. DATA COLLECTION
VI. CHARACTERIZATION
VII. RESULTS
VIII. DISCUSSION
IX. RELATED WORK
CONCLUSION AND REFERENCES
This article presented a practical, preventive security testing approach based on accurate, online prediction of potentially vulnerable code changes at pre-submit time. We presented the three new feature data types that are effective for vulnerability prediction and evaluated their recall and precision using N-fold validation on data from the large and important Android open source project.
We also evaluated the online deployment mode and identified the subset of feature data types that are not specific to the target project from which the training data are collected and can therefore be used for other projects (e.g., in a multi-project setting). The evaluation results showed that our VP framework identifies ~80% of the evaluated vulnerability-inducing changes at pre-submit time with 98% precision and a false positive rate below 1.7%.
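To make the evaluation protocol concrete, the following is a minimal, self-contained sketch of N-fold validation reporting recall, precision, and false positive rate. The per-change scores and the threshold "model" (trained as the mean score of the training folds) are hypothetical stand-ins for the VP framework's real classifier and feature data types, which the paper describes elsewhere.

```python
import random

def n_fold_metrics(samples, n_folds=10):
    """Aggregate recall, precision, and FPR over n folds.

    `samples` is a list of (score, is_vulnerable) pairs, one per code
    change. For each fold, a toy classifier is "trained" by taking the
    mean score of the remaining folds as the flagging threshold; this
    stands in for the real VP model.
    """
    random.seed(0)
    shuffled = samples[:]
    random.shuffle(shuffled)
    tp = fp = fn = tn = 0
    for i in range(n_folds):
        test = shuffled[i::n_folds]  # held-out fold
        train = [s for j, s in enumerate(shuffled) if j % n_folds != i]
        threshold = sum(score for score, _ in train) / len(train)
        for score, vulnerable in test:
            flagged = score > threshold
            if flagged and vulnerable:
                tp += 1
            elif flagged and not vulnerable:
                fp += 1
            elif not flagged and vulnerable:
                fn += 1
            else:
                tn += 1
    recall = tp / (tp + fn) if tp + fn else 0.0
    precision = tp / (tp + fp) if tp + fp else 0.0
    fpr = fp / (fp + tn) if fp + tn else 0.0
    return recall, precision, fpr
```

Because every sample is held out exactly once across the N folds, the aggregated counts cover the whole dataset, matching how fold-level results are typically combined into a single recall/precision/FPR figure.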
These positive results call for future research (e.g., using advanced ML and GenAI techniques) to leverage the VP approach or framework for upstream open source projects that are managed by communities and are, at the same time, critical to the many software and computing products used daily by several billion users.
The urgency of this article stems from its potential societal benefits. Widespread adoption of ML-based approaches such as the VP framework could greatly improve our ability to share credibility data on open source contributors and projects. Such shared data would enable open source communities to combat threats such as fake accounts (as seen in the XZ Utils backdoor attack on Linux).
Moreover, this ML-based approach can facilitate rapid responses across open source projects when long-planned attacks emerge. Sharing information among similar or derived projects improves preparedness and reduces response time against similar attacks.
We therefore call for an open source community initiative to establish a practice of sharing a developer and project credibility database, in order to strengthen the open source software supply chains on which numerous software and computing products depend.
[1] T. Menzies, J. Greenwald, and A. Frank, "Data Mining Static Code Attributes to Learn Defect Predictors," IEEE Transactions on Software Engineering, 33(1):2-13, 2007.
[2] M. Halstead, Elements of Software Science, Elsevier, 1977.
[3] T. McCabe, "A Complexity Measure," IEEE Transactions on Software Engineering, 2(4):308-320, 1976.
[4] R. Quinlan, C4.5: Programs for Machine Learning, Morgan Kaufmann, 1993.
[5] T. Zimmermann, N. Nagappan, H. Gall, E. Giger, and B. Murphy, "Cross-project defect prediction: a large scale experiment on data vs. domain vs. process," in Proceedings of the Joint Meeting of the European Software Engineering Conference and the ACM SIGSOFT Symposium on the Foundations of Software Engineering (ESEC/FSE), pp. 91-100, 2009.
[6] A. Chou, J. Yang, B. Chelf, S. Hallem, and D. Engler, "An Empirical Study of Operating Systems Errors," in Proceedings of the ACM Symposium on Operating Systems Principles (SOSP), pp. 73-88, 2001.
[7] S. Kim, T. Zimmermann, E. J. Whitehead Jr., and A. Zeller, "Predicting Faults from Cached History," in Proceedings of the ACM International Conference on Software Engineering (ICSE), pp. 489-498, 2007.
[8] F. Rahman, D. Posnett, A. Hindle, E. Barr, and P. Devanbu, "BugCache for inspections," in Proceedings of the ACM SIGSOFT Symposium and the European Conference on Foundations of Software Engineering (SIGSOFT/FSE), p. 322, 2011.
[9] C. Lewis, Z. Lin, C. Sadowski, X. Zhu, R. Ou, and E. J. Whitehead Jr., "Does bug prediction support human developers? findings from a google case study," in Proceedings of the International Conference on Software Engineering (ICSE), pp. 372-381, 2013.
[10] J. Walden, J. Stuckman, and R. Scandariato, "Predicting Vulnerable Components: Software Metrics vs Text Mining," in Proceedings of the IEEE International Symposium on Software Reliability Engineering, pp. 23-33, 2014.
[11] A. Chou, J. Yang, B. Chelf, S. Hallem, and D. Engler, "An empirical study of operating systems errors," in Proceedings of the ACM Symposium on Operating Systems Principles (SOSP), pp. 73-88, 2001.
[12] S. R. Chidamber and C. F. Kemerer, "A Metrics Suite for Object Oriented Design," IEEE Transactions on Software Engineering, 20(6):476-493, 1994.
[13] R. Quinlan, C4.5: Programs for Machine Learning, Morgan Kaufmann Publishers, San Mateo, CA, 1993.
[14] A. Chou, J. Yang, B. Chelf, S. Hallem, and D. Engler, "An empirical study of operating systems errors," in Proceedings of the ACM Symposium on Operating Systems Principles (SOSP), pp. 73-88, 2001.
[15] R. Chillarege, I. S. Bhandari, J. K. Chaar, M. J. Halliday, D. S. Moebus, B. K. Ray, and M-Y. Wong, "Orthogonal defect classification-a concept for in-process measurements", IEEE Transactions on Software Engineering, 18(11):943-956, 1992.
[16] R. Natella, D. Cotroneo, and H. Madeira, "Assessing Dependability with Software Fault Injection: A Survey", ACM Computing Surveys, 48(3), 2016.
[17] K. S. Yim, "Norming to Performing: Failure Analysis and Deployment Automation of Big Data Software Developed by Highly Iterative Models," in Proceedings of the IEEE International Symposium on Software Reliability Engineering (ISSRE), pp. 144-155, 2014.
[18] S. R. Chidamber and C. F. Kemerer, "A Metrics Suite for Object Oriented Design," IEEE Transactions on Software Engineering, 20(6):476-493, 1994.
[19] K. S. Yim, "Assessment of Security Defense of Native Programs Against Software Faults," System Dependability and Analytics, Springer Series in Reliability Engineering, Springer, Cham., 2023.
[20] M. Fourné, D. Wermke, S. Fahl and Y. Acar, "A Viewpoint on Human Factors in Software Supply Chain Security: A Research Agenda," IEEE Security & Privacy, vol. 21, no. 6, pp. 59-63, Nov.-Dec. 2023.
[21] P. Ladisa, H. Plate, M. Martinez and O. Barais, "SoK: Taxonomy of Attacks on Open-Source Software Supply Chains," in Proceedings of the IEEE Symposium on Security and Privacy (SP), pp. 1509-1526, 2023.
[22] D. Wermke et al., ""Always Contribute Back": A Qualitative Study on Security Challenges of the Open Source Supply Chain," in Proceedings of the IEEE Symposium on Security and Privacy (SP), pp. 1545-1560, 2023.
[23] A. Dann, H. Plate, B. Hermann, S. E. Ponta and E. Bodden, "Identifying Challenges for OSS Vulnerability Scanners - A Study & Test Suite," IEEE Transactions on Software Engineering, vol. 48, no. 9, pp. 3613-3625, 1 Sept. 2022.
[24] S. Torres-Arias, A. K. Ammula, R. Curtmola, and J. Cappos, "On omitting commits and committing omissions: Preventing git metadata tampering that (re)introduces software vulnerabilities," in Proceedings of the 25th USENIX Security Symposium, pp. 379-395, 2016.
[25] R. Goyal, G. Ferreira, C. Kastner, and J. Herbsleb, "Identifying unusual commits on github," Journal of Software: Evolution and Process, vol. 30, no. 1, p. e1893, 2018.
[26] C. Soto-Valero, N. Harrand, M. Monperrus, and B. Baudry, "A comprehensive study of bloated dependencies in the maven ecosystem," Empirical Software Engineering, vol. 26, Mar 2021.
[27] R. Duan, O. Alrawi, R. P. Kasturi, R. Elder, B. Saltaformaggio, and W. Lee, "Towards measuring supply chain attacks on package managers for interpreted languages," arXiv preprint arXiv:2002.01139, 2020.
[28] Enduring Security Framework, "Securing the software supply chain: Recommended practices guide for developers," Cybersecurity and Infrastructure Security Agency, Washington, DC, USA, August 2022.
[29] Z. Durumeric et al., "The matter of heartbleed," in Proceedings of the ACM Internet Measurement Conference, pp. 475-488, 2014.
[30] D. Everson, L. Cheng, and Z. Zhang, "Log4shell: Redefining the web attack surface," in Proceedings of the Workshop on Measurements, Attacks, and Defenses for the Web (MADWeb), pp. 1-8, 2022.
[31] "Highly evasive attacker leverages SolarWinds supply chain to compromise multiple global victims with SUNBURST backdoor," Mandiant, available at https://www.mandiant.com/resources/blog/evasive-attacker-leverages-solarwinds-supply-chain-compromises-with-sunburst-backdoor
[32] W. Enck and L. Williams, "Top five challenges in software supply chain security: Observations from 30 industry and government organizations," IEEE Security Privacy, vol. 20, no. 2, pp. 96-100, 2022.
[33] "CircleCI incident report for January 4, 2023 security incident," CircleCI, available at https://circleci.com/blog/jan-4-2023-incident-report/
[34] K. Toubba, "Security incident update and recommended actions," LastPass, available at https://blog.lastpass.com/2023/03/security-incident-update-recommended-actions/
[35] D. Wermke, N. Wöhler, J. H. Klemmer, M. Fourné, Y. Acar, and S. Fahl, "Committed to trust: A qualitative study on security & trust in open source software projects," in Proceedings of the IEEE Symposium on Security and Privacy (S&P), pp. 1880-1896, 2022.
[36] D. A. Wheeler, "Countering trusting trust through diverse doublecompiling," in Proceedings of the IEEE Annual Computer Security Applications Conference (ACSAC), pp. 13-48, 2005.
[37] K. S. Yim, I. Malchev, A. Hsieh, and D. Burke, "Treble: Fast Software Updates by Creating an Equilibrium in an Active Software Ecosystem of Globally Distributed Stakeholders," ACM Transactions on Embedded Computing Systems, 18(5s):104, 2019.
[38] G. Holmes, A. Donkin, and I. H. Witten, "WEKA: a machine learning workbench," in Proceedings of the Australian New Zealand Intelligent Information Systems Conference (ANZIIS), pp. 357-361, 1994.
[39] D. Yeke, M. Ibrahim, G. S. Tuncay, H. Farrukh, A. Imran, A. Bianchi, and Z. B. Celik, "Wear's my Data? Understanding the Cross-Device Runtime Permission Model in Wearables," in Proceedings of the 2024 IEEE Symposium on Security and Privacy (SP), pp. 76-76, 2024.
[40] G. S. Tuncay, J. Qian, C. A. Gunter, "See No Evil: Phishing for Permissions with False Transparency," in Proceedings of the 29th USENIX Security Symposium, pp. 415-432, 2020.
[41] K. R. Jones, T.-F. Yen, S. C. Sundaramurthy, and A. G. Bardas, "Deploying Android Security Updates: an Extensive
