TRUST AND TRANSPARENCY STANDARDS FOR ARTIFICIAL INTELLIGENCE SYSTEMS

Authors

  • Ahmedova Sitora Asqar qizi

Keywords:

Artificial intelligence, trustworthiness, transparency, standards, algorithmic fairness, data security, ethical AI, governance, regulation, technological responsibility.

Abstract

With the rapid advance of artificial intelligence technologies, the trustworthiness and transparency of these systems have become issues of growing importance. This study examines trust and transparency standards for artificial intelligence systems and analyzes existing approaches and their effectiveness. Expert surveys and statistical analysis methods were employed. The results indicate that building trustworthy AI systems requires a combination of technical and governance measures. The findings further suggest that standardization and regulatory approaches support the safe development of AI technologies.


Published

2025-06-24

Issue

Vol. 2 No. 6 (2025)

Section

Technical Sciences

How to Cite

TRUST AND TRANSPARENCY STANDARDS FOR ARTIFICIAL INTELLIGENCE SYSTEMS. (2025). Innovations in Science and Technologies, 2(6), 285-289. https://innoist.uz/index.php/ist/article/view/1093
