Syracuse University Libraries

Artificial Intelligence

This guide offers an introduction to generative AI, guidance on using AI tools, and additional resources for learning more and getting help.

Glossary

Artificial general intelligence, or AGI: Artificial general intelligence (AGI) is a hypothetical type of AI that mimics human-like intelligence. Unlike regular AI, which is designed for specific tasks (such as playing chess, grammar correction, or speech translation), AGI is characterized by its general cognitive abilities, which means it can perform any intellectual task that a human can, adapt to new situations, and improve its performance over time.

AI ethics: A field of applied ethics focused on the development and use of AI in a way that aligns with moral principles, particularly fairness, transparency, accountability, and respect for human values.

AI safety: An interdisciplinary field aiming to mitigate risks from AI systems. It encompasses technical solutions to ensure reliable AI function, aligning AI goals with human values, and developing safeguards against misuse and unintended consequences.

Algorithm: A finite set of well-defined instructions for performing a specific task. It operates on defined inputs and produces a corresponding output through a series of steps, ensuring a solution exists and can be reached efficiently.
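
As a minimal illustration, the Python sketch below implements one classic algorithm, binary search; the function name and test values are invented for this example.

```python
def binary_search(sorted_items, target):
    """Return the index of target in sorted_items, or -1 if it is absent."""
    low, high = 0, len(sorted_items) - 1
    while low <= high:
        mid = (low + high) // 2          # examine the middle element
        if sorted_items[mid] == target:
            return mid
        elif sorted_items[mid] < target:
            low = mid + 1                # discard the lower half
        else:
            high = mid - 1               # discard the upper half
    return -1

print(binary_search([2, 5, 8, 12, 23], 12))  # prints 3
```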

Alignment: Refers to ensuring an AI system's goals and actions match those of its creators or human values. The primary objective of AI alignment is to prevent scenarios where AI systems, especially highly autonomous and intelligent ones, might act in ways that are harmful or contrary to human interests.

Artificial intelligence, or AI: The endeavor of creating intelligent agents, which are systems that reason, learn, and act autonomously in pursuit of goals. This field encompasses diverse approaches like machine learning, symbolic reasoning, and optimization to simulate human-like cognitive abilities in machines.

Bias: Refers to systematic prejudice within an algorithm or model. This can arise from imbalanced training data reflecting societal biases, or limitations in the algorithm's design. Biased AI can lead to unfair or discriminatory outcomes.

Chatbot: A program simulating conversation with human users through text or voice commands.

Credits: A unit of access that controls usage of compute-intensive features. The cost of a credit depends on the complexity of the generated output and the specific AI function employed. Similar to prepaid phone plans, credits typically reset periodically, allowing for a measured amount of generative AI interaction. This system helps manage resource allocation and potentially monetize access to advanced functionalities.

Data augmentation: An artificial data manipulation technique in machine learning. It involves creating modified versions of existing data points to artificially expand training datasets.
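
A minimal sketch, assuming NumPy, of three common augmentations applied to a toy image array; the array values are arbitrary.

```python
import numpy as np

# A toy 2x3 "image"; real augmentation pipelines work the same way on larger arrays.
image = np.array([[1, 2, 3],
                  [4, 5, 6]])

flipped_lr = np.fliplr(image)                               # mirror left-right
flipped_ud = np.flipud(image)                               # mirror top-bottom
noisy = image + np.random.normal(0, 0.1, image.shape)       # add small random noise

augmented_dataset = [image, flipped_lr, flipped_ud, noisy]  # four samples from one original
```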

Deep learning: A subfield of machine learning inspired by the structure and function of the human brain. It utilizes artificial neural networks with multiple hidden layers of interconnected nodes to process information. These layers extract increasingly complex features from data, enabling deep learning models to handle intricate tasks like image recognition, speech understanding, and natural language processing.

Diffusion: A process that progressively adds noise to data, transforming it from a clean state towards a state of random noise. Training involves learning to reverse this diffusion process, essentially denoising the data to recover the original distribution. This allows the model to generate new, realistic data (like images or text) by starting from pure noise and iteratively removing it.
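
A minimal sketch of the forward (noising) direction, assuming NumPy; the data, step count, and noise-schedule value are toy choices for illustration.

```python
import numpy as np

def add_noise(x, beta):
    """One forward diffusion step: mix the data with a little Gaussian noise."""
    noise = np.random.normal(size=x.shape)
    return np.sqrt(1.0 - beta) * x + np.sqrt(beta) * noise

x = np.ones(4)            # a toy "clean" data point
for step in range(1000):  # after many steps, x is close to pure random noise
    x = add_noise(x, beta=0.02)
```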

End-to-end learning, or E2E: Training a single model in machine learning to map raw input data directly to the desired output, bypassing the need for manually designed feature extraction steps. This approach leverages deep learning architectures like convolutional neural networks to automatically learn informative features from the data itself, simplifying model design and potentially improving performance in complex tasks.

FOMU (Fast Takeoff Oversight and Mitigation Underestimation): A concept in AI safety highlighting the potential for underestimating the difficulty of controlling or mitigating risks associated with extremely rapid AI development. It emphasizes the urgency of developing safeguards before AI capabilities surpass our ability to manage them.

Generative adversarial networks, or GANs: A class of deep learning models for generating new data. They consist of two neural networks: a generator that creates new data points, and a discriminator that tries to distinguish real data from the generator's creations. Through an adversarial training process, the generator learns to mimic the real data distribution, while the discriminator becomes adept at spotting forgeries. This competition drives both networks to improve, ultimately enabling the generator to produce high-fidelity, realistic data.
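
This competition is often summarized by the standard minimax objective below, shown in notation only: G is the generator, D the discriminator, and z the random noise fed to the generator.

```latex
\min_G \max_D \;
  \mathbb{E}_{x \sim p_{\text{data}}}\!\left[\log D(x)\right]
+ \mathbb{E}_{z \sim p_z}\!\left[\log\left(1 - D(G(z))\right)\right]
```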

Generative AI: Artificial intelligence techniques that create new data, like images, text, or music. These models learn the underlying patterns and structure of existing data and use that knowledge to generate novel content that resembles the training data. Generative AI leverages deep learning architectures and techniques like generative adversarial networks (GANs) to achieve this.

Hallucination: Refers to AI-generated outputs that deviate significantly from real data. These can be nonsensical creations, factual errors, or content biased by the training data. Hallucinations arise from limitations in the model's understanding of the underlying data distribution.

Large language model, or LLM: A complex AI system trained on massive amounts of text data. These models leverage deep learning architectures like transformers to analyze and process information, enabling them to perform diverse tasks in natural language processing.

Machine learning, or ML: A subfield of AI where algorithms improve their performance on a specific task through experience. They learn from data, identifying patterns and relationships without explicit programming. This allows them to make predictions, classifications, or decisions on new data, constantly refining their abilities.
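
As a minimal illustration, assuming scikit-learn is available, the sketch below fits a linear model to toy data and predicts on a new input; the numbers are invented for this example.

```python
from sklearn.linear_model import LinearRegression

# Toy data: hours studied vs. exam score. The model infers the relationship
# from examples rather than from hand-written rules.
hours = [[1], [2], [3], [4], [5]]
scores = [52, 58, 65, 71, 78]

model = LinearRegression()
model.fit(hours, scores)       # the "experience": learning from data
print(model.predict([[6]]))    # predict the score for 6 hours of study
```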

Multimodal AI: Refers to AI systems that process and learn from multiple data types, like text, images, audio, and sensor data. It employs data fusion techniques to combine information from these modalities, leading to a richer understanding of the data compared to single-modality approaches. This enables multimodal AI to perform tasks like image captioning, video question answering, and robot perception in the real world.

Natural language processing (NLP): A subfield of AI concerned with enabling computers to understand and manipulate human language. It employs techniques from computer science, linguistics, and statistics to analyze and process written or spoken language. NLP tasks include extracting meaning from text, generating human-like text, and enabling communication between humans and machines.

Neural network: Computational models inspired by the structure and function of the brain. They consist of interconnected nodes (artificial neurons) arranged in layers. These nodes process information and transmit signals to other nodes, mimicking how neurons fire in the brain. By adjusting the connections between nodes (learning), neural networks can perform complex tasks like image recognition, speech understanding, and natural language processing.
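
A minimal sketch of a forward pass through a tiny two-layer network, assuming NumPy; the layer sizes and input values are arbitrary toy choices.

```python
import numpy as np

def relu(x):
    return np.maximum(0, x)

# A tiny two-layer network: 3 inputs -> 4 hidden nodes -> 1 output.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(3, 4)), np.zeros(4)   # layer 1 weights and biases
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # layer 2 weights and biases

x = np.array([0.5, -1.0, 2.0])                  # one input example
hidden = relu(x @ W1 + b1)                      # signals pass through the hidden layer
output = hidden @ W2 + b2                       # and on to the output node
print(output)
```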

Parameters: Adjustable elements within a model, like weights and biases in neural networks. These values determine how the model transforms input data into generated outputs. Tweaking these parameters allows fine-tuning the model's behavior, influencing the style, creativity, and accuracy of the generated content.

Prompt: A user-provided input that guides the model's generation process. It can be text instructions, descriptions, or even existing data.

Prompt chaining: Feeding the output of one generative model as the prompt for a subsequent model. This creates a sequence of prompts, where each step refines or builds upon the previous generation.
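
A minimal sketch of the idea; `generate` here is a hypothetical placeholder for whatever text-generation call a real pipeline would use.

```python
def generate(prompt: str) -> str:
    """Hypothetical stand-in for a call to a text-generation model or API."""
    return "<model output for: " + prompt[:40] + "...>"

# Each step's output becomes part of the next step's prompt.
outline = generate("Write a three-point outline for a blog post about solar energy.")
draft = generate("Expand this outline into a full draft:\n" + outline)
summary = generate("Summarize this draft in two sentences:\n" + draft)
print(summary)
```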

Prompt engineering: The art of crafting instructions for artificial intelligence models, specifically generative AI models. Imagine it as giving a detailed recipe to a cook instead of just throwing a bunch of ingredients at them. By carefully wording prompts and choosing the right format, you can guide the AI to create exactly what you want, like a specific kind of text, computer code, or even creative writing.

Style transfer: A technique for applying the visual style of one image (e.g., a painting) to another (e.g., a photograph). Deep learning models analyze the "style" (texture, brushstrokes, colors) and "content" (objects, shapes) of both images. The model then generates a new image that preserves the content of the target image but renders it with the artistic style of the reference image.

Temperature: A parameter that controls how random a language model's output is. A higher temperature means the model takes more risks, producing more varied output; a lower temperature makes the output more predictable.
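
A minimal sketch, assuming NumPy, of how temperature rescales a model's raw scores before they are turned into probabilities; the scores are toy values.

```python
import numpy as np

def softmax_with_temperature(logits, temperature):
    """Convert raw model scores (logits) into a probability distribution."""
    scaled = np.asarray(logits) / temperature
    probs = np.exp(scaled - np.max(scaled))   # subtract the max for numerical stability
    return probs / probs.sum()

logits = [2.0, 1.0, 0.1]
print(softmax_with_temperature(logits, 0.5))  # low temperature: sharply favors the top score
print(softmax_with_temperature(logits, 2.0))  # high temperature: flatter, more random choices
```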

Text-to-3D model generation: Creating digital representations of three-dimensional objects based on textual descriptions.

Text-to-audio generation: Creating audio based on textual descriptions.

Text-to-code generation: Creating programming code based on textual descriptions.

Text-to-design generation: Guiding a design tool to create a visual element based on textual descriptions.

Text-to-face generation: Creating realistic facial images based on textual descriptions.

Text-to-game content generation: Creating content for video games based on textual descriptions.

Text-to-image generation: Creating images based on textual descriptions.

Text-to-video generation: Creating video based on textual descriptions.

Tokens: The fundamental units of text processed by the model. The tokenization process breaks down text into these units, which can be words, characters, or even phrases depending on the model's configuration.
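
A minimal Python illustration of two naive tokenization schemes (word-level and character-level); production models typically use subword tokenizers such as byte-pair encoding.

```python
text = "Generative AI creates new content."

word_tokens = text.split()   # naive word-level tokenization
char_tokens = list(text)     # character-level tokenization

print(word_tokens)           # ['Generative', 'AI', 'creates', 'new', 'content.']
print(len(word_tokens))      # 5 word-level tokens
print(len(char_tokens))      # 34 character-level tokens
```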

Training data: The datasets used to train AI models, which may include text, images, code, or other data.

Transformer model: A neural network architecture and deep learning model that learns context by tracking relationships in data, such as the words in a sentence or the parts of an image. Instead of analyzing a sentence one word at a time, it can attend to the whole sentence and understand each word in context.
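
A minimal sketch, assuming NumPy, of the scaled dot-product self-attention at the heart of transformer models; the token count and embedding size are toy values.

```python
import numpy as np

def attention(Q, K, V):
    """Scaled dot-product attention: every position attends to every other position."""
    scores = Q @ K.T / np.sqrt(K.shape[-1])                    # relevance of each token to each other
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)    # softmax over the whole sequence
    return weights @ V                                         # blend token values by relevance

rng = np.random.default_rng(0)
tokens = rng.normal(size=(4, 8))           # 4 tokens, 8-dimensional embeddings
out = attention(tokens, tokens, tokens)    # self-attention: the sequence attends to itself
print(out.shape)                           # (4, 8)
```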

Turing test: Named after famed mathematician and computer scientist Alan Turing, this test gauges a machine's ability to exhibit human-like intelligence through conversation. A human judge interacts with a hidden entity, human or machine, and judges intelligence based solely on the responses. If the judge can't reliably distinguish between the two, the machine is deemed to have passed the test. While influential, the Turing test is criticized for conflating conversational ability with true intelligence.

Zero-shot learning: A type of machine learning technique where a model is able to recognize and classify objects or perform tasks it has never seen before, based on the knowledge it has learned from other related tasks or objects.