Pipeworx in Practice: Parsing the Official Examples for Enhancing Docker Swarm Storage and Orchestration
2026/5/17 3:41:06
Linear algebra is the mathematical foundation of deep learning:
Core concepts of linear algebra:

- Vectors: data representation
- Matrices: transformation operations
- Tensors: multi-dimensional data
- Matrix multiplication: transformations between layers

| Concept | Role | Typical application |
|---|---|---|
| Vector space | Data representation | Feature vectors |
| Matrix multiplication | Linear transformation | Neural network layers |
| Eigendecomposition | Dimensionality reduction | PCA |
| Singular value decomposition (SVD) | Matrix approximation | Recommender systems |
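The eigendecomposition → PCA row above can be made concrete with a short sketch. This is an illustrative example, not part of the original article; the data is synthetic:

```python
import numpy as np

# Toy data: 100 samples in 3 dimensions, with most variance along one axis
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3)) * np.array([5.0, 1.0, 0.1])

# PCA via eigendecomposition of the covariance matrix
Xc = X - X.mean(axis=0)                  # center the data
cov = np.cov(Xc, rowvar=False)           # 3x3 covariance matrix
eigenvalues, eigenvectors = np.linalg.eigh(cov)  # eigh: symmetric input

# Sort components by explained variance (largest first)
order = np.argsort(eigenvalues)[::-1]
components = eigenvectors[:, order]

# Project onto the top-2 principal components (dimensionality reduction)
X_reduced = Xc @ components[:, :2]
print(X_reduced.shape)  # (100, 2)
```

The projection keeps the directions of largest variance, which is exactly the "matrix decomposition as dimensionality reduction" use case in the table.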
Tensor dimensionality:

- 0-D: scalar
- 1-D: vector
- 2-D: matrix
- 3-D: time series / image channels
- 4-D: batches of images

```python
import numpy as np

class VectorOperations:
    @staticmethod
    def dot_product(v1, v2):
        return np.dot(v1, v2)

    @staticmethod
    def cross_product(v1, v2):
        return np.cross(v1, v2)

    @staticmethod
    def norm(v):
        return np.linalg.norm(v)

    @staticmethod
    def normalize(v):
        return v / np.linalg.norm(v)

    @staticmethod
    def cosine_similarity(v1, v2):
        return np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))


class VectorSpace:
    def __init__(self, vectors):
        self.vectors = np.array(vectors)

    def rank(self):
        # Dimension of the space spanned by the stored vectors
        return np.linalg.matrix_rank(self.vectors)

    def orthogonalize(self):
        # QR decomposition: the columns of Q form an orthonormal basis
        q, r = np.linalg.qr(self.vectors)
        return q

    def project(self, v):
        basis = self.orthogonalize()
        return np.dot(basis, np.dot(basis.T, v))


class MatrixOperations:
    @staticmethod
    def multiply(A, B):
        return np.dot(A, B)

    @staticmethod
    def inverse(A):
        return np.linalg.inv(A)

    @staticmethod
    def transpose(A):
        return A.T

    @staticmethod
    def trace(A):
        return np.trace(A)

    @staticmethod
    def determinant(A):
        return np.linalg.det(A)

    @staticmethod
    def eigen_decomposition(A):
        eigenvalues, eigenvectors = np.linalg.eig(A)
        return eigenvalues, eigenvectors

    @staticmethod
    def svd(A):
        u, s, vh = np.linalg.svd(A)
        return u, s, vh


class LinearTransform:
    def __init__(self, matrix):
        self.matrix = np.array(matrix)

    def apply(self, vector):
        return np.dot(self.matrix, vector)

    def compose(self, other):
        # Composition: applying `other` first, then `self`
        return LinearTransform(np.dot(self.matrix, other.matrix))

    def inverse(self):
        return LinearTransform(np.linalg.inv(self.matrix))


class TensorOperations:
    @staticmethod
    def reshape(tensor, shape):
        return np.reshape(tensor, shape)

    @staticmethod
    def transpose(tensor, axes):
        return np.transpose(tensor, axes)

    @staticmethod
    def matmul(tensor1, tensor2):
        return np.matmul(tensor1, tensor2)

    @staticmethod
    def einsum(subscripts, *operands):
        return np.einsum(subscripts, *operands)

    @staticmethod
    def broadcast(tensor1, tensor2):
        # Addition relies on NumPy's broadcasting rules
        return tensor1 + tensor2


class TensorNetwork:
    def __init__(self):
        self.tensors = {}

    def add_tensor(self, name, tensor):
        self.tensors[name] = tensor

    def contract(self, name1, name2, axis1, axis2):
        # Contract the given axes of two stored tensors and cache the result
        tensor1 = self.tensors[name1]
        tensor2 = self.tensors[name2]
        result = np.tensordot(tensor1, tensor2, axes=((axis1,), (axis2,)))
        new_name = f"{name1}_{name2}"
        self.tensors[new_name] = result
        return result
```

| Method | 100×100 matrix | 1000×1000 matrix | Optimization level |
|---|---|---|---|
| Python loops | 1000 ms | 1,000,000 ms | Baseline |
| NumPy | 1 ms | 100 ms | Highly optimized |
| BLAS | 0.5 ms | 50 ms | Hardware-optimized |
| cuBLAS | 0.1 ms | 5 ms | GPU-accelerated |
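The gap between the first two rows is easy to reproduce locally. A rough timing sketch — absolute numbers depend heavily on hardware, so treat only the ratio as meaningful:

```python
import time
import numpy as np

n = 100
A = np.random.rand(n, n)
B = np.random.rand(n, n)

# Pure-Python triple loop
start = time.perf_counter()
C_loop = [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
          for i in range(n)]
loop_time = time.perf_counter() - start

# NumPy (delegates the multiplication to BLAS)
start = time.perf_counter()
C_np = A @ B
numpy_time = time.perf_counter() - start

print(f"loop: {loop_time * 1000:.1f} ms, numpy: {numpy_time * 1000:.3f} ms")
```

Both paths compute the same product; the speedup comes entirely from moving the inner loops out of the Python interpreter.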

| Operation | CPU time | GPU time | Speedup |
|---|---|---|---|
| Matrix multiplication (1000×1000) | 100 ms | 5 ms | 20× |
| Convolution (3×3, 64 channels) | 50 ms | 2 ms | 25× |
| Matrix inversion (1000×1000) | 500 ms | 10 ms | 50× |

| Precision | Memory | Relative speed | Typical use case |
|---|---|---|---|
| FP64 | 8 bytes | 1× | Scientific computing |
| FP32 | 4 bytes | 2× | Deep learning |
| FP16 | 2 bytes | 4× | Inference |
| INT8 | 1 byte | 8× | Deployment |
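The memory column follows directly from the element sizes NumPy reports. A quick check, plus the common pattern of casting FP32 weights down to FP16:

```python
import numpy as np

# Bytes per element and total footprint for a 1000x1000 array
for dtype in (np.float64, np.float32, np.float16, np.int8):
    a = np.ones((1000, 1000), dtype=dtype)
    print(f"{np.dtype(dtype).name:>8}: {a.itemsize} bytes/element, "
          f"{a.nbytes / 1e6:.0f} MB total")

# Casting FP32 weights to FP16 halves memory at reduced precision
weights = np.random.rand(4, 4).astype(np.float32)
half = weights.astype(np.float16)
```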
```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax; NumPy itself has no built-in softmax
    e = np.exp(x - np.max(x, axis=axis, keepdims=True))
    return e / np.sum(e, axis=axis, keepdims=True)


def optimize_matrix_operations(A, B):
    if A.shape[1] != B.shape[0]:
        raise ValueError("Matrix dimensions do not match")
    if A.size > 10000:
        # optimized_matmul is assumed to be provided elsewhere
        return optimized_matmul(A, B)
    return np.dot(A, B)


class MatrixOptimizer:
    @staticmethod
    def optimize(A):
        # A rank-deficient matrix can be replaced by a low-rank approximation
        if np.linalg.matrix_rank(A) < min(A.shape):
            return MatrixOptimizer._low_rank_approx(A)
        return A

    @staticmethod
    def _low_rank_approx(A, rank=10):
        # Truncated SVD: keep only the `rank` largest singular values
        u, s, vh = np.linalg.svd(A)
        return u[:, :rank] @ np.diag(s[:rank]) @ vh[:rank, :]


class TensorOperationPatterns:
    @staticmethod
    def batch_matrix_multiply(batch_A, batch_B):
        # b: batch index; i, j, k: matrix indices
        return np.einsum('bij,bjk->bik', batch_A, batch_B)

    @staticmethod
    def attention_computation(Q, K, V):
        # Dot-product attention (no 1/sqrt(d) scaling in the original)
        scores = np.einsum('bih,bjh->bij', Q, K)
        weights = softmax(scores, axis=-1)
        return np.einsum('bij,bjh->bih', weights, V)
```
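A quick shape check for the einsum-based attention pattern, written as a standalone sketch with a stand-in softmax (NumPy has none built in); the batch size, sequence length, and hidden size here are arbitrary:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - np.max(x, axis=axis, keepdims=True))
    return e / np.sum(e, axis=axis, keepdims=True)

# batch=2, sequence length=5, hidden size=8
Q = np.random.rand(2, 5, 8)
K = np.random.rand(2, 5, 8)
V = np.random.rand(2, 5, 8)

scores = np.einsum('bih,bjh->bij', Q, K)        # (2, 5, 5) similarity scores
weights = softmax(scores, axis=-1)              # each row sums to 1
output = np.einsum('bij,bjh->bih', weights, V)  # (2, 5, 8) weighted values

print(output.shape)  # (2, 5, 8)
```

The output keeps the input shape: each position becomes a convex combination of the value vectors, weighted by query-key similarity.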