Policy on the use of artificial intelligence (AI)

The journal Tecnología, Ciencia y Educación, in order to maintain the highest standards of quality and scientific integrity in the articles it publishes, establishes the following policy on the use of AI:

  • The policy promotes transparency, scientific quality, and the acknowledgement of all AI tools used in the research process and in the communication of the results of published articles.
  • It is addressed to all persons involved in the publication of Research Studies, Projects, Academic Contributions, and Bibliographic Reviews, that is, the journal's authors, reviewers, and editors.

1. Author responsibilities

  • Any author who uses AI tools to prepare their article must include a «Statement of AI Use» at the end of the manuscript, immediately after the «Author Contributions» section. In this statement, the authors must provide the following information:
    • Name of the AI tool(s) used, as well as their version(s).
    • Description of how and in which parts of the text AI was used (for example, to prepare a paragraph or section of the article, in the literature review, in editing and revising drafts, in data analysis, in the translation of text, etc.).
  • Unsupervised use of generative AI is prohibited: AI tools must not be used for the automated generation of full articles or for the fabrication of nonexistent bibliographic citations.
  • Authors must properly cite the AI tool(s) used in their articles, following the APA (7th ed.) citation format.
  • Authors are ultimately responsible for the content of their articles; therefore, the use of AI does not relieve them of any liability.
  • Any AI-generated content must be carefully reviewed and edited by the authors to ensure accuracy and relevance.
  • AI should not be used to generate or modify research data.

2. Reviewers' responsibilities

  • They will immediately inform the journal editors if they detect undeclared use of AI in the preparation of the manuscript.
  • They will notify the journal editors of any text they suspect to be AI generated, including suspicions based on analysis and detection tools.
  • They will specifically assess the declared use of AI in the study.
  • They will not use AI to conduct their reviews without proper oversight and without first informing the journal editors.

3. Editors' responsibilities

  • The editorial team will run manuscripts through AI detection systems (iThenticate) to ensure the ethical and transparent use of AI.
  • The editorial team will evaluate the «Statement of AI Use» provided by the authors of the articles and reserves the right to request additional information on the use of AI.
  • It will periodically update the editorial policies regarding AI and its implications for scientific publishing.
  • It will promote good practices in scholarly publishing and ensure that the journal complies with international standards of research ethics.

4. Consequences of non-compliance

If the journal's editors observe non-compliance with the AI policy by the authors, they may:

  • Reject the manuscript.
  • Request the retraction of published articles.
  • Prohibit the authors from publishing in the journal temporarily or permanently.
  • Notify the authors' academic institutions in cases of fraud or malpractice.

If the journal's editors observe a breach of the AI policy by reviewers, they may:

  • Request additional review of the manuscript due to suspected misuse of AI.
  • Prohibit reviewers from reviewing for the journal, temporarily or permanently.

5. Updating the AI policy

The guidelines comprising this policy will be reviewed periodically to adapt to regulatory changes and advances in AI, ensuring the integrity of academic publishing.
