
Artificial Intelligence

Positive Impact on the Learning Experience (and more)

 

How can learners use AI effectively and appropriately? What tasks can AI fulfil?

Generative AI can be an excellent writing aid; it can assist with project planning and time management, help you understand literature, break down complex topics, and explain unfamiliar terms. The possibilities are diverse! For more on this topic, read this article and explore the list below:

  1. For personalised learning - interact with AI in your own way, using prompts that suit your needs - try requesting AI to produce additional exercises on a particular topic or to provide advanced versions of materials for further practice.
  2. As a "virtual tutor" for brainstorming/getting started on project work - try requesting AI to assist you with getting started on a particular topic or project.
  3. For exam preparation material (using appropriate examples) - MCQs and discussion points - try requesting AI to develop exam practice material based on past papers (that you can provide).
  4. Overcoming language barriers - try requesting AI to act as a language tutor.
  5. Writing assistance (grammar, spelling, style) - in fact, apps like Grammarly have been around for some time - try requesting AI to act as a writing tutor and suggest improvements in areas like structure and tone.
  6. Assistive technology to promote access and inclusivity - contact comppas@rcsi.ie for more information.   

Learn more: Training Resources - How can I use AI?

What are some Concerns?

AI platforms like ChatGPT or Microsoft Copilot generate content based on data retrieved from the world wide web - a massive amount of data from myriad sources. Thus, any output could contain mistakes or incorrect responses, and there is no way to hold AI accountable. We must evaluate and verify its output, and be wary of its capacity to make mistakes and "hallucinate" (read more about AI hallucinations). See the below example, which was crafted with AI as an assistant: the prompt is clear and the response appears factual, yet it is wholly incorrect.


In relation to trustworthiness, it is possible that ChatGPT may reproduce biases it has "picked up" from the vast body of text on the internet. Again, a user must be wary of this and carefully review, and even fact-check, any content produced by generative AI. Much of this bias stems from the fact that these tools are "trained" on data from Western countries, or from countries that produce more online content. This means their perspective can be skewed.

See the below example, which was crafted with AI as an assistant to specifically highlight some potential issues.    

We know that AI is trained on material available on the world wide web, and currently there is a lack of transparency, or any standardised agreement, on the use of copyrighted material to train Gen AI.

Indeed, there is no consensus on the ownership of material produced by AI. You can read more about that topic here, or see how a group of artists are responding to a specific case.


📚 Scenario: Who Owns the Output?

You’re working on a group presentation about emerging health technologies. To save time, you use an AI tool to generate:

  • A slide deck with visuals
  • A summary of recent journal articles
  • A catchy title and conclusion

Your group is impressed—but one member asks:

“Can we actually use this? Who owns it? If we choose to use it, who or what do we cite?”

You realize:

  • The AI didn’t cite its sources.
  • You’re not sure if the images are original or pulled from the web.
  • You didn’t check the tool’s terms of use and you don't know what you are allowed to do with the content it generates.

 

🧠 Reflect

  • Is AI-generated content automatically free to use?
  • Are you responsible for checking the originality of AI output?
  • Should you credit the AI tool?
  • What about the data it was trained on?
  • How does this affect academic integrity and copyright compliance?

 

The ethics surrounding the use and management of AI is a fast-developing field in its own right. We all have unique perspectives, and the same is true of AI!

As a society, we need to work towards responsible and controlled use, and consider things like equity of access and the digital divide, the ethics of AI use and implementation in education and healthcare, its environmental impact, and other dilemmas that users might face, including transparency around its use. You can find out more about the ethics and principles for AI use in education here.

Consider the below ethical dilemma (authored with the help of Microsoft Copilot!).


🤔 Ethical Dilemma: Who Did the Thinking?

A medical student is working on a reflective essay about patient empathy. They ask an AI tool to generate a first draft based on personal notes. The result is well-written, insightful, and even includes a powerful anecdote that the learner hadn’t thought of—but it’s fictional.

Being short on time, the learner considers submitting the AI-generated version with minor edits.

 

Reflect:

  • Is this still the learner's reflection?
  • Should they disclose the use of AI?
  • Would the tutor or future patients expect this to be the learner's own voice?
  • What are the risks if others do the same?

 

Sustainability & Environmental Impact

As with any technology, there is an environmental impact, and the true scale of AI's environmental impact is still being measured and understood. These systems rely on vast amounts of data and data storage, and this means data centres (for more on data centres, read HERE). AI is energy-hungry and can place demand on a country's energy infrastructure: consider that data centres in Ireland account for 20% of national energy consumption (Ryan-Christensen, 2024)!

The central sustainability question is: How can governments and institutions develop AI to meet societal needs without harming the environment — now and for future generations? (Litvinets and Pijselman, 2024). 

As users, we also have a role to play. Responsible AI use means being aware of its environmental cost and asking ourselves:

  • Do I need AI for this specific task?
  • Could a simpler tool (like a search engine or a manual method) suffice?
  • Am I using AI efficiently and intentionally? Have I planned how to use it?
  • Can I avoid excessive and repetitive prompts? Read more about efficient prompting here.

 

Limitations based on datasets: Gen AI operates on large pools of data and is thus limited or constrained by the information it is "trained" upon. It is not always up to date, and more recent events do not form part of its training. There are also questions around inherent human bias affecting Gen AI output and its propensity to repeat and spread harmful stereotypes and misinformation.


Limited knowledge and lack of perception: ChatGPT is limited to the knowledge it absorbs from its input and lacks the ability to 'weigh up' scenarios. It cannot apply common sense to a scenario and lacks a nuanced understanding of key human attributes, e.g., humour, irony, and cultural sensitivity.

Critical thinking and real learning: While it can generate logical, confident responses to most scenarios, it lacks any real ability to analyse information and context or to produce original thought. Critical awareness is a skill unique to humans, and one that RCSI seeks to help learners develop.

Gen AI can be used to easily create authentic-looking disinformation. We are probably all aware of the idea of fake news, and this will only become more common in the era of Gen AI. Do you know how to spot fake news/disinformation? (Infographic from IFLA, 2017).


Users may rely too heavily on the content produced by AI tools. Although they are fantastic writing aids, critical awareness and evaluation still play an essential role. Can you fully trust content produced by AI tools?

Additionally, over-reliance on AI tools can stifle creativity and problem solving and disrupt critical-thinking skills. This is called cognitive offloading: we stop doing things for ourselves and lose some independent function.

Consider the below scenario (authored with the help of Microsoft Copilot!).


🧠 Scenario: Outsourcing Your Thinking?

A learner is preparing for a pharmacology exam and decides to use an AI tool to explain complex drug interactions. It gives clear, simplified summaries—so helpful that the learner stops reading the textbook altogether.

A few days later, during a clinical simulation, the learner is asked to explain why a certain drug combination is contraindicated. They can remember that AI gave an answer… but they can’t recall the reasoning behind it.

 

 Reflect:

  • Has the learner engaged in deep learning, or just memorised the AI's output?
  • What happens when we rely on AI to think for us, rather than with us?
  • How can you balance convenience with deep understanding?

There are concerns surrounding privacy and security that are simply too numerous to list here. However, consider that AI technologies collect vast amounts of data (some of it personal) and are designed to learn and improve through the analysis of this data.

Quite rightly, as the collection and storage of personal data continues, we can ask: Is it secure? How is it being stored? How will it be used? Here, security refers to the protection of data from external threats or misuse.

✅ Safe Practices for Using AI Tools

  1. Avoid sharing sensitive personal, academic, or health information.
  2. Use anonymous or generic examples when testing prompts.
  3. Check the tool’s privacy policy—does it store or reuse your data?
  4. Log out or use incognito mode when experimenting with new tools.
  5. Ask: Would I be okay if this input were made public?

AI Safety

As they continue to develop, AI systems will have wide-ranging impacts on practically all aspects of society, from education to healthcare to the labour market.

 

AI Safety includes the safe development of AI systems, with protection of humans and human agency in mind: "crucial for harnessing the potential of AI technologies for economic growth, social welfare, and environmental sustainability while protecting individuals and societal values" (OECD, 2024).

 

The OECD AI Principles (OECD, 2024) are the first intergovernmental standard on AI; you can read more about them here.