Master AI Math with 500 free flashcards. Study using spaced repetition and focus mode for effective learning in AI.
\[\mathbf{x} = \begin{bmatrix}x_1 \\ x_2 \\ \vdots \\ x_n\end{bmatrix} \in \mathbb{R}^n\]
Symbols: \(x_i\) = i-th component; \(\mathbb{R}^n\) = n-dimensional real space.
Intuition: A vector is an ordered list of n numbers. In ML it represents a data point, feature embedding, or parameter set as a point in n-dimensional space.
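A minimal NumPy sketch (the values are illustrative): a vector is just a 1-D array.

```python
import numpy as np

# An ordered list of n = 3 real numbers, i.e. a vector in R^3.
x = np.array([1.0, 2.0, 3.0])

print(x.shape)  # (3,) -> n = 3 components
print(x[0])     # x_1 = 1.0 (NumPy indexes from 0)
```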
\[\mathbf{a} + \mathbf{b} = \begin{bmatrix}a_1+b_1 \\ a_2+b_2 \\ \vdots \\ a_n+b_n\end{bmatrix}\]
Symbols: \(a_i, b_i\) = corresponding components of vectors a and b.
Intuition: Add element-wise. Geometrically, place the tail of b at the head of a; the sum points to the new head.
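In NumPy, `+` on arrays of the same shape is exactly this element-wise sum (illustrative values):

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, -1.0, 0.5])

# Element-wise addition: (a + b)_i = a_i + b_i.
s = a + b
print(s)  # [5.  1.  3.5]
```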
\[c\mathbf{x} = \begin{bmatrix}cx_1 \\ cx_2 \\ \vdots \\ cx_n\end{bmatrix}\]
Symbols: \(c \in \mathbb{R}\) = scalar; \(x_i\) = i-th component.
Intuition: Scales every component by c. Stretches (|c|>1), shrinks (|c|<1), and reverses direction when c<0.
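A quick sketch of the three cases (stretch, shrink, flip) with illustrative values:

```python
import numpy as np

x = np.array([1.0, -2.0])

print(2.0 * x)   # [ 2. -4.]  stretched (|c| > 1)
print(0.5 * x)   # [ 0.5 -1.]  shrunk (|c| < 1)
print(-1.0 * x)  # [-1.  2.]  direction reversed (c < 0)
```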
\[\|\mathbf{x}\|_2 = \sqrt{\sum_{i=1}^n x_i^2}\]
Symbols: \(\|\cdot\|_2\) = L2 norm; \(x_i\) = i-th component; \(n\) = dimension.
Intuition: The straight-line (Euclidean) distance from the origin to the tip of the vector. Most common norm in ML for measuring distances.
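NumPy's `np.linalg.norm` computes the L2 norm by default; the classic 3-4-5 triangle makes an easy check:

```python
import numpy as np

x = np.array([3.0, 4.0])

# Default norm is L2: sqrt(3^2 + 4^2) = 5.
print(np.linalg.norm(x))  # 5.0
```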
\[\|\mathbf{x}\|_1 = \sum_{i=1}^n |x_i|\]
Symbols: \(|x_i|\) = absolute value of i-th component.
Intuition: Sum of absolute values. Also called the 'Manhattan' or 'taxicab' distance. Encourages sparsity in optimization (L1 regularization).
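The same function gives the L1 norm via the `ord` argument:

```python
import numpy as np

x = np.array([3.0, -4.0])

# ord=1 gives the L1 (Manhattan) norm: |3| + |-4| = 7.
print(np.linalg.norm(x, ord=1))  # 7.0
```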
\[\|\mathbf{x}\|_p = \left(\sum_{i=1}^n |x_i|^p\right)^{1/p}\]
Symbols: \(p \geq 1\) = norm order; \(x_i\) = i-th component.
Intuition: Generalizes L1 (p=1) and L2 (p=2). As \(p \to \infty\) it becomes the max norm \(\|\mathbf{x}\|_\infty = \max_i |x_i|\).
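One vector, three values of p, showing the max norm as the limiting case:

```python
import numpy as np

x = np.array([3.0, -4.0])

print(np.linalg.norm(x, ord=1))       # 7.0  (L1)
print(np.linalg.norm(x, ord=2))       # 5.0  (L2)
print(np.linalg.norm(x, ord=np.inf))  # 4.0  (max norm: max_i |x_i|)
```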
\[\|A\|_F = \sqrt{\sum_{i,j} A_{ij}^2}\]
Symbols: \(A_{ij}\) = element at row i, column j.
Intuition: The L2 norm applied to a matrix by treating it as a flattened vector. Equivalent to \(\sqrt{\mathrm{tr}(A^\top A)}\).
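A sketch verifying both forms of the Frobenius norm on a small illustrative matrix:

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])

fro = np.linalg.norm(A, ord='fro')      # sqrt(1 + 4 + 9 + 16) = sqrt(30)
via_trace = np.sqrt(np.trace(A.T @ A))  # equivalent trace formula

print(fro, via_trace)
```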
\[\mathbf{a} \cdot \mathbf{b} = \sum_{i=1}^n a_i b_i = \|\mathbf{a}\|_2\|\mathbf{b}\|_2\cos\theta\]
Symbols: \(a_i, b_i\) = components; \(\theta\) = angle between vectors.
Intuition: Measures how aligned two vectors are. Zero = orthogonal; positive = same direction; negative = opposite direction.
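In NumPy the dot product is `np.dot` or the `@` operator (illustrative values):

```python
import numpy as np

a = np.array([1.0, 2.0])
b = np.array([3.0, 4.0])

print(np.dot(a, b))  # 1*3 + 2*4 = 11.0
print(a @ b)         # same result via the @ operator
```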
\[\mathbf{a} \perp \mathbf{b} \iff \mathbf{a} \cdot \mathbf{b} = 0\]
Symbols: \(\perp\) = perpendicular symbol.
Intuition: Two vectors are orthogonal (perpendicular) iff their dot product is zero — cos 90° = 0.
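A one-line check using the standard basis vectors, which are orthogonal by construction:

```python
import numpy as np

a = np.array([1.0, 0.0])
b = np.array([0.0, 1.0])

# Orthogonal vectors have zero dot product.
print(np.dot(a, b))  # 0.0
```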
\[\cos\theta = \frac{\mathbf{a} \cdot \mathbf{b}}{\|\mathbf{a}\|_2\|\mathbf{b}\|_2}\]
Symbols: \(\theta\) = angle between a and b.
Intuition: Normalizes the dot product by the product of norms to get a similarity score in [-1, 1]. Widely used in NLP to compare embedding vectors.
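A minimal sketch of cosine similarity as it is typically written with NumPy (the helper name `cosine_similarity` is ours, not a library function):

```python
import numpy as np

def cosine_similarity(a, b):
    # Dot product normalized by the product of L2 norms.
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

a = np.array([1.0, 0.0])
b = np.array([1.0, 1.0])

print(cosine_similarity(a, b))  # cos(45 degrees) = 1/sqrt(2) ~ 0.7071
```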
\[|\mathbf{a} \cdot \mathbf{b}| \leq \|\mathbf{a}\|_2 \|\mathbf{b}\|_2\]
Symbols: \(|\cdot|\) = absolute value; \(\|\cdot\|_2\) = L2 norm.
Intuition: The magnitude of the dot product can never exceed the product of the norms (Cauchy-Schwarz inequality). Equality holds iff the vectors are linearly dependent, i.e. one is a scalar multiple of the other.
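A numerical spot-check on random vectors, plus the equality case with a scalar multiple:

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.standard_normal(5)
b = rng.standard_normal(5)

# |a . b| <= ||a||_2 * ||b||_2 for any pair of vectors.
lhs = abs(np.dot(a, b))
rhs = np.linalg.norm(a) * np.linalg.norm(b)
print(lhs <= rhs)  # True

# Equality when one vector is a scalar multiple of the other.
c = 3.0 * a
print(np.isclose(abs(np.dot(a, c)), np.linalg.norm(a) * np.linalg.norm(c)))  # True
```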
\[A \in \mathbb{R}^{m \times n},\quad A = \begin{bmatrix}a_{11} & \cdots & a_{1n} \\ \vdots & \ddots & \vdots \\ a_{m1} & \cdots & a_{mn}\end{bmatrix}\]
Symbols: \(m\) = rows; \(n\) = columns; \(a_{ij}\) = element at row i, col j.
Intuition: A rectangular array of numbers. In ML, represents linear maps, datasets (rows = samples, cols = features), or weight tensors.
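A 2x3 matrix as a 2-D NumPy array, with rows as samples and columns as features (illustrative data):

```python
import numpy as np

# m = 2 rows (samples), n = 3 columns (features).
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])

print(A.shape)  # (2, 3)
print(A[0, 1])  # a_12 = 2.0 (NumPy indexes from 0)
```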