
EXSC 200: Introductory Research Methods

What is Generative Artificial Intelligence?

According to the Center for Teaching Innovation at Cornell University (2023), generative artificial intelligence (AI) is "a subset of AI that utilizes learning models to create new, original content, such as images, text, or music, based on patterns and structures learned from existing data." Many AI web tools use large language models (LLMs), which combine algorithms, training data, and statistical models to infer what content should be generated in response to prompts provided by end users. Generative AI predicts tokens (pieces of data) based on patterns in the data it was trained on. For example, when an end user provides a prompt to an AI tool such as ChatGPT or Perplexity AI, the tool generates a text response based on what its underlying LLM has learned from its training data.
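To make the idea of token prediction concrete, here is a minimal, illustrative Python sketch. It is a toy, not how any production LLM actually works: it "learns" which word tends to follow another from a tiny made-up sample of text (the training_text and predict_next names below are hypothetical) and then picks the most frequent next word for a given word. Real LLMs use subword tokens and billions of learned parameters, but the core idea of predicting the next token from patterns in training data is the same.

```python
from collections import Counter, defaultdict

# Toy "training data" standing in for the vast text corpus an LLM learns from.
training_text = "research methods require careful planning and careful reading and careful citation"

# Count which word follows which: a crude stand-in for learned patterns.
next_word_counts = defaultdict(Counter)
words = training_text.split()
for current, following in zip(words, words[1:]):
    next_word_counts[current][following] += 1

def predict_next(word):
    """Return the most frequently seen follower of `word` in the training text, if any."""
    followers = next_word_counts.get(word)
    return followers.most_common(1)[0][0] if followers else None

# "Prompting" the toy model: it returns the follower it counted first,
# since 'planning', 'reading', and 'citation' all follow 'careful' equally often here.
print(predict_next("careful"))
```

Notice that the sketch can only ever reproduce patterns present in its training text; that limitation is, at a much larger scale, why generative AI output reflects the data the model was trained on.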

The generative AI landscape is rapidly changing. Private and public sectors, including state and local governments, are beginning to probe and test ways that generative AI can either improve or replace human-created work. Generative AI tools can help users create content, gather background information on a topic, and synthesize information. However, because of how generative AI tools "learn" and the lack of regulation, they can "hallucinate" (make up sources, facts, and information) and create biased and harmful content. There is some debate about whether the hallucination problem is fixable. While there are some ways to spot AI-generated content and differentiate it from human-generated content, as generative AI becomes more sophisticated it may become harder to distinguish real content from fabricated content, such as deepfakes.


Text-Generating AI Examples

Below are some examples of generative AI tools. Generative AI is a rapidly growing industry, so this list is not exhaustive; there may be additional tools you know of that work to varying degrees. Some features may also be paywalled and require a subscription.

When using these tools, there are things you should be mindful of:

  • You don't know what data these AIs were trained on. If they were trained on biased or harmful data, their output will reflect that.
  • You don't know how recently these AIs have updated their data, so they may not be appropriate when searching for information on current events.
  • AIs tend to hallucinate and make up responses; they are not necessarily trying to produce true and accurate responses, but rather believable ones. This unfortunately means they are skilled at unintentionally lying to you.
  • Appropriating words and ideas from artificial intelligence and representing them as your own without proper citation is considered plagiarism by the University's Academic Integrity policy.

What about AI for research?

You should not use any of the above tools for finding peer-reviewed scholarly journal articles because they have not been trained on a dataset of scholarly papers, and the responses they generate will not be based on scholarly sources.

However, there are some AI tools that claim to have been trained on scholarly literature and that link back to these papers in their responses. In their current form, these search tools can supplement, but not replace, traditional search techniques:

  • They may not have been trained on all the relevant papers surrounding your topic.
  • They may only analyze the abstracts of papers, not the full text, which is often locked behind a paywall.
  • They can still hallucinate, providing false or misleading answers.

If you want to use AI-based research tools, you should always navigate to and read the original articles they link to. That being said, here are some examples of AI-based research tools: