Artificial intelligence: transforming the writing and publication of scientific articles
DOI: https://doi.org/10.47196/da.v30i3.2600
Keywords: artificial intelligence, scientific articles, AI tools, ethics, regulations
Abstract
Artificial intelligence (AI) has the potential to revolutionize research and the production of scientific articles. AI tools have been developed that can affect manuscript preparation at every stage, from searching the scientific literature to drafting the article itself. Understanding how these tools work, and approaching them critically, requires familiarity with some basic AI concepts. It is also important to weigh the strengths and limitations of these technologies and to discuss the ethical and regulatory issues they raise. AI will undoubtedly affect the peer review process as well, and the editorial policies of scientific journals will need to adapt to these technological advances.
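As a purely illustrative sketch, not part of the original article, the snippet below shows how an AI writing-assistance tool of the kind the abstract describes might be queried programmatically. It assumes the OpenAI Python client, a valid OPENAI_API_KEY in the environment, and a hypothetical list of abstracts gathered beforehand by the authors; any model output would still require critical verification against the original sources, in line with the ethical caveats the article raises.

```python
# Illustrative sketch only (not from the article): asking a large language
# model to draft a structured summary of abstracts collected during a
# literature search. Assumes the OpenAI Python client (`pip install openai`)
# and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Placeholder abstracts; in practice these would come from the authors' own search.
abstracts = [
    "Abstract 1: ...",
    "Abstract 2: ...",
]

prompt = (
    "Summarize the following abstracts for a narrative review on AI in "
    "scientific writing. List the main findings and explicitly flag any "
    "claims that must be verified against the full text:\n\n"
    + "\n\n".join(abstracts)
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": prompt}],
)

# The generated draft is a starting point, not a finished section:
# authors remain responsible for checking every claim and citation.
print(response.choices[0].message.content)
```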
License
Copyright 2024 Dermatología Argentina
This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.
The author(s) transfer all copyright in the above-mentioned manuscript to Dermatología Argentina in the event that the work is published. The author(s) declare that the article is original, that it does not infringe any intellectual property or other third-party rights, that it is not under consideration by another journal, and that it has not been previously published.