Artificial Intelligence and Use: For Students

How A.I. can be used as a tool for research and organizing information.

Use in the Classroom

Understanding Artificial Intelligence

Despite their broad potential, generative AI models also have several important limitations. Understanding these limitations is critical for using these technologies ethically and effectively.

Ethical Concerns

  • Bias and Fairness: Generative AI models can learn biases present in their training data, producing outputs that reflect, reinforce, or amplify social prejudices and stereotypes. Models can also reinforce beliefs or opinions prevalent in the training data, yielding outputs that align with particular ideologies.
  • Misinformation and Manipulation: AI-generated content can be used to create convincing fake news, deepfakes, and other forms of misinformation, leading to potential manipulation and harm.
  • Plagiarism and Copyright: The use of AI-generated content raises significant questions about authorship, intellectual property, and attribution, potentially leading to plagiarism and copyright infringement. Using someone else's work without permission or attribution undermines the trust and credibility of the individual or organization responsible.
  • Attribution and Accountability: Determining responsibility for AI-generated content can be challenging, raising questions about who is accountable for errors, biases, or malicious outputs.
  • Inequality: As AI providers move from free to fee-based service models, unequal access to these tools could exacerbate existing global inequalities.

Quality and Reliability

  • Quality: AI outputs may contain false, misleading, or inaccurate information.
  • Consistency: Generative AI models can produce irrelevant or inconsistent results, even in response to the same prompt.
  • Superficiality: While AI can generate content, it might lack true creativity, originality, and deep understanding of complex concepts.
  • Degeneration: As AI-generated content fills the internet and becomes the source data on which future generations of AI are trained, the quality of AI output may degrade over time, a phenomenon known as "model collapse".

Data Privacy and Security

  • Data Exposure: The training of generative AI models requires large datasets, which could contain sensitive or private information that might be inadvertently revealed in generated outputs.
  • User Privacy: AI platforms may collect and retain personal data that could be used for purposes other than what was originally intended or disclosed to the user.

--adapted from https://libguides.rutgers.edu/artificial-intelligence

 

AI Guidance

Content like this can be generated efficiently with a large language model, but the output must still be reviewed carefully for accuracy, and the source should be acknowledged.

 

Artificial Intelligence (AI)

Artificial intelligence refers to the development of computer systems that can perform tasks that typically require human intelligence, such as visual perception, speech recognition, decision-making, and natural language understanding.

 

Types of Artificial Intelligence

  • Chatbot

A chatbot is a software application that uses natural language processing (NLP) and machine learning to simulate conversation with humans, either via text or voice interfaces.
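The request-and-response pattern behind a chatbot can be sketched in a few lines. This is a deliberately simplified keyword lookup, not how modern chatbots work (they use NLP and machine learning, as described above), and the canned answers here are invented for illustration.

```python
# Minimal rule-based sketch of a chatbot. The keywords and answers
# below are hypothetical examples, not from any real service.
RESPONSES = {
    "hours": "The library is open 8am to 10pm on weekdays.",
    "renew": "You can renew items from your library account page.",
}

def reply(message):
    """Return a canned answer for the first known keyword, else a fallback."""
    for keyword, answer in RESPONSES.items():
        if keyword in message.lower():
            return answer
    return "Sorry, I don't understand. Try asking about hours or renewals."

print(reply("What are your hours?"))
```

A real chatbot replaces the keyword table with a language model that interprets the user's intent, but the conversational loop is the same.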

  • Generative AI

Generative artificial intelligence refers to algorithms and models that can generate new content or data, such as images, videos, music, or text, based on patterns learned from existing information.

  • Machine Learning (ML)

Machine learning is a subset of artificial intelligence that involves training computer systems to learn from data and improve their performance over time through experience.

  • Natural Language Processing (NLP)

NLP is a subfield of artificial intelligence that deals with the interaction between computers and human language, including text and speech processing, sentiment analysis, machine translation, and dialogue systems.

  • Large Language Model (LLM)

A large language model is a type of machine learning model that is trained on vast amounts of text data to generate language outputs that are coherent and contextually appropriate.
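The core idea, predicting the next word from patterns in training text, can be illustrated with a toy sketch. Real LLMs use neural networks with billions of parameters; this simple word-pair counter is only meant to make the "learn from text, then predict" idea concrete.

```python
from collections import Counter, defaultdict

# A vastly simplified stand-in for LLM training: count which word
# follows which in a tiny "training" corpus.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def predict_next(word):
    """Return the word most often seen after `word`, or None if unseen."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("slept"))  # the only word ever seen after "slept" is "on"
```

An actual LLM predicts over tens of thousands of tokens at once and uses far richer context than the single previous word, which is what makes its output coherent across whole paragraphs.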

 

Large Language Models (LLMs)

  • Hallucination

In the context of AI, hallucination refers to a model generating output that sounds plausible but is false, fabricated, or unsupported by its training data. Because hallucinated content can appear authoritative, it can spread false information and lead to incorrect, potentially harmful outcomes for individuals and society.

  • Prompt

A prompt is a specific task or question that is given to an AI system to elicit a response or output.

  • Prompt Engineering

Prompt engineering is the process of designing and refining prompts to elicit desired responses or behaviors from AI systems, in order to improve their performance and versatility.
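One common prompt-engineering practice is replacing a vague request with a structured template that spells out topic, audience, length, and format. The function and template below are hypothetical examples of that practice, not part of any particular AI platform.

```python
# Hypothetical prompt template: the wording and parameters are
# illustrative, not from any specific AI tool.
def build_prompt(topic, audience, length, fmt):
    """Assemble a specific, structured prompt from reusable parts."""
    return (
        f"Summarize {topic} for {audience}. "
        f"Limit the answer to {length} and format it as {fmt}."
    )

vague_prompt = "Tell me about photosynthesis."
refined_prompt = build_prompt(
    topic="photosynthesis",
    audience="first-year biology students",
    length="150 words",
    fmt="a bulleted list",
)
print(refined_prompt)
```

The refined prompt gives the model explicit constraints to satisfy, which generally produces more predictable and useful output than the vague version.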

 

Understanding Large Language Models (LLMs)

  • Parameters

Parameters are the internal values (such as the weights of a neural network) that a model adjusts automatically during training to improve its performance; large language models are often described by their parameter count. Parameters are distinct from hyperparameters, such as the learning rate, regularization strength, or number of hidden layers, which are chosen by developers before training begins.
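The difference between a parameter and a hyperparameter can be shown with a toy training loop that fits the line y = 2x. This is a sketch of gradient descent on made-up data, not how LLMs are actually trained, but the roles of the two kinds of values are the same.

```python
# Toy training loop: fit y = w * x by gradient descent.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # made-up training data where y = 2x

learning_rate = 0.01  # hyperparameter: chosen by the developer before training
w = 0.0               # parameter: adjusted automatically during training

for _ in range(1000):                       # repeated passes over the data
    for x, y in data:
        error = w * x - y                   # how wrong the current prediction is
        w -= learning_rate * 2 * error * x  # nudge the parameter to reduce error

print(round(w, 3))  # w converges toward 2.0, the slope hidden in the data
```

An LLM does the same thing at enormous scale: billions of parameters like `w`, each nudged repeatedly as the model processes its training data.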

  • Tokens

In natural language processing and machine learning, tokens are the units of text a model reads and produces: words, subwords, or individual characters. Text is split into tokens before a model processes it, and model limits (such as context length) are usually measured in tokens.
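Tokenization can be sketched with a simple splitter. Note this is an assumption-laden simplification: real LLM tokenizers use subword schemes such as byte-pair encoding, so a single word may become several tokens; this version only shows the basic idea of turning text into units.

```python
import re

# Simplified tokenizer: splits text into lowercase words and punctuation.
# Real LLM tokenizers split into subword units instead.
def tokenize(text):
    """Return a list of word and punctuation tokens from `text`."""
    return re.findall(r"[a-z']+|[.,!?;]", text.lower())

print(tokenize("Tokens are the units a model actually reads."))
```

Counting the tokens this produces also hints at why AI services meter usage and context windows in tokens rather than words or characters.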

  • Training Data

Training data is the set of examples or inputs used to train an AI system, which helps the model learn patterns and relationships in the data and make predictions or decisions.
