References:
[1] Bischoff, M. (2023). New tools reveal how AI makes decisions. Scientific American, 15 June 2023.
[2] van Dis, E. A. M., Bollen, J., Zuidema, W., van Rooij, R., Bockting, C. L. (2023). ChatGPT: five priorities for research. Nature, 614, 224–226, 9 February 2023.
[3] Gill, S. S., et al. (2023). Transformative effects of ChatGPT on modern education: emerging era of AI chatbots. Preprint, June 2023. DOI: 10.1016/j.iotcps.2023.06.002. Available from: https://www.researchgate.net/publication/371347113_Transformative_Effects_of_ChatGPT_on_Modern_Education_Emerging_Era_of_AI_Chatbots [accessed 5 July 2023].
[4] Lin, J. C., et al. (2023). Comparison of GPT-3.5, GPT-4, and human user performance on a practice ophthalmology written examination. Eye (Nature).
[5] Dash, B., Sharma, P. (2023). Are ChatGPT and deepfake algorithms endangering the cybersecurity industry? A review. International Journal of Engineering and Applied Sciences, 10(1).
[6] Seghier, M. L. (2023). ChatGPT: not all languages are equal. Nature, 615(7951), 216.
[7] Zawacki-Richter, O., et al. (2019). Systematic review of research on artificial intelligence applications in higher education – where are the educators? International Journal of Educational Technology in Higher Education, 16(1), 1–27.
[8] Blum, J., Menta, A. K., Zhao, X., Yang, V. B., Gouda, M. A., Subbiah, V. (2023). Pearls and pitfalls of ChatGPT in medical oncology. Trends in Cancer, available online 4 July 2023. DOI: 10.1016/j.trecan.2023.06.007.
[9] Dai, J., Chen, C. (2020, August). Text classification system of academic papers based on hybrid BERT-BiGRU model. In 2020 12th International Conference on Intelligent Human-Machine Systems and Cybernetics (IHMSC), Vol. 2, pp. 40–44. IEEE.
[10] Devlin, J., Chang, M. W., Lee, K., Toutanova, K. (2018). BERT: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.
[11] Liu, Y., Ott, M., Goyal, N., Du, J., Joshi, M., Chen, D., Stoyanov, V. (2019). RoBERTa: A robustly optimized BERT pretraining approach. arXiv preprint arXiv:1907.11692.
[12] Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., Sutskever, I. (2019). Language models are unsupervised multitask learners. OpenAI Blog, 1(8), 9.
[13] Yang, Z., Dai, Z., Yang, Y., Carbonell, J., Salakhutdinov, R. R., Le, Q. V. (2019). XLNet: Generalized autoregressive pretraining for language understanding. Advances in Neural Information Processing Systems, 32.
[14] Thelwall, M. (2004). Methods for reporting on the targets of links from national systems of university Web sites. Information Processing & Management, 40(1), 125–144. DOI: 10.1016/S0306-4573(02)00083-3.
[15] Cronin, B., Snyder, H. W., Rosenbaum, H., Martinson, A., Callahan, E. (1998). Invoked on the Web. Journal of the American Society for Information Science, 49(14), 1319–1328. DOI: 10.1002/(SICI)1097-4571(1998)49:14.