AI Basic Notes

Multilayer Perceptron

A multilayer perceptron (MLP) is a feedforward neural network. It loosely mimics how the brain processes information: through multiple layers of processing, it extracts features from raw data and produces predictions or classifications:

  • Linear transformation: $H = wx + b$.
  • Activation function: $y = \sigma(H)$, e.g. ReLU (Rectified Linear Unit) or Sigmoid, which introduces non-linearity so the network can learn and approximate complex functions.
  • The network learns and improves its predictions by adjusting the weights of its internal connections (see the sketch after this list).
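
A minimal PyTorch sketch of the two operations above (the layer sizes and the ReLU choice here are illustrative assumptions, not from the notes):

import torch
from torch import nn

# A two-layer MLP: linear transformation -> activation -> linear transformation.
mlp = nn.Sequential(
    nn.Linear(4, 8),  # H = wx + b
    nn.ReLU(),        # y = sigma(H): introduces non-linearity
    nn.Linear(8, 2),  # output layer, e.g. 2-class logits
)

x = torch.randn(1, 4)  # one sample with 4 features
print(mlp(x).shape)    # torch.Size([1, 2])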

Residual Architecture

ResNet solves the degradation problem of deep networks (a training problem) through residual learning. The shortest path determines how easy the network is to optimize: a residual connection (Residual Connection) can be regarded as having zero layers. The longest path determines the network's capacity: once the training problem is solved, much deeper networks (100+ layers) become feasible.

By introducing residual connections, ResNet lets the network learn a residual mapping instead of the original mapping. The residual mapping is the difference between a target layer and some earlier layer, $\mathcal{F}(\mathrm{x}):=\mathcal{H}(\mathrm{x})-\mathrm{x}$. If the difference between input and output is small, the residual network only needs to learn that small difference, which is usually much simpler than learning the original complex mapping.

\begin{equation}
\begin{split}
\mathrm{y}&=\mathcal{F}(\mathrm{x}, W_i)+W_s\mathrm{x} \\
&=\mathcal{F}(\mathrm{x})+\mathrm{x} \\
&=\mathcal{H}(\mathrm{x})-\mathrm{x}+\mathrm{x} \\
&=\mathcal{H}(\mathrm{x})
\end{split}
\end{equation}
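
A minimal residual block sketch in PyTorch (the layer sizes and the choice of two linear layers for $\mathcal{F}$ are illustrative assumptions):

import torch
from torch import nn

class ResidualBlock(nn.Module):
    """y = F(x) + x: the block only has to learn the residual F(x) = H(x) - x."""

    def __init__(self, dim):
        super().__init__()
        self.f = nn.Sequential(
            nn.Linear(dim, dim),
            nn.ReLU(),
            nn.Linear(dim, dim),
        )

    def forward(self, x):
        return self.f(x) + x  # identity shortcut (residual connection)

x = torch.randn(2, 16)
print(ResidualBlock(16)(x).shape)  # torch.Size([2, 16])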

(Figure: Residual Architecture)

In a deep network, different layers learn at very different speeds: layers near the output learn well, while layers near the input learn very slowly. Sometimes, even after long training, the weights of the first few layers remain close to their random initialization (they fail to converge). The root cause of vanishing or exploding gradients is therefore the back-propagation training rule itself; it is fundamentally a problem of the method. ResNet preserves gradient propagation through residual connections, which makes the network easier to train:

\begin{equation}
\frac{\partial\left[f(g(x))+g(x)\right]}{\partial x}=
\frac{\partial f(g(x))}{\partial g(x)}\cdot\frac{\partial g(x)}{\partial x}+\frac{\partial g(x)}{\partial x}
\end{equation}
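
A quick autograd check of this identity (a sketch; the choices f = sin and g(x) = x^2 are arbitrary assumptions):

import torch

x = torch.tensor(2.0, requires_grad=True)
g = x ** 2            # g(x) = x^2, so dg/dx = 4 at x = 2
y = torch.sin(g) + g  # f(g(x)) + g(x), with f = sin

y.backward()
# dy/dx = cos(g) * dg/dx + dg/dx: the trailing "+ dg/dx" comes from the
# shortcut, so the gradient survives even when f'(g) is tiny.
print(x.grad)  # cos(4) * 4 + 4 ≈ 1.386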

Transformer Architecture

Illustrated Transformer

  • Self-attention only: compared to Recurrent Neural Networks (RNNs), there are no recurrent layers, which allows for more parallelization.
  • Multi-headed attention: analogous to the multiple output channels of Convolutional Neural Networks (CNNs).

Self-Attention Mechanism

In layman's terms, a self-attention module takes in n inputs and returns n outputs:

  • Self: allows the inputs to interact with each other.
  • Attention: finds out who they should pay more attention to.
  • The outputs are aggregates of these interactions and attention scores.

\begin{equation}
\text{Attention}(Q, K, V)=\text{softmax}\left(\frac{QK^T}{\sqrt{d_k}}\right)V
\end{equation}

(Figure: Self-Attention Mechanism)

The illustrations are divided into the following steps:

  • Prepare inputs.
  • Weights initialization (Constant/Random/Xavier/Kaiming Initialization).
  • Derive query, key and value.
  • Calculate attention scores for Input 1.
  • Calculate softmax.
  • Multiply scores with values.
  • Sum weighted values to get Output 1.
  • Repeat steps 4–7 for Input 2 & Input 3.
Input 1: [1, 0, 1, 0]
Input 2: [0, 2, 0, 2]
Input 3: [1, 1, 1, 1]

Weights for query, key and value (these weights are usually small numbers, initialized randomly with an appropriate scheme such as Gaussian, Xavier, or Kaiming initialization):

w_query:         w_key:           w_value:
[[1, 0, 1],      [[0, 0, 1],      [[0, 2, 0],
 [1, 0, 0],       [1, 1, 0],       [0, 3, 0],
 [0, 0, 1],       [0, 1, 0],       [1, 0, 3],
 [0, 1, 1]]       [1, 1, 0]]       [1, 1, 0]]

Derive query, key and value:

Query representations for every input:
               [1, 0, 1]
[1, 0, 1, 0]   [1, 0, 0]   [1, 0, 2]
[0, 2, 0, 2] x [0, 0, 1] = [2, 2, 2]
[1, 1, 1, 1]   [0, 1, 1]   [2, 1, 3]

Key representations for every input:
               [0, 0, 1]
[1, 0, 1, 0]   [1, 1, 0]   [0, 1, 1]
[0, 2, 0, 2] x [0, 1, 0] = [4, 4, 0]
[1, 1, 1, 1]   [1, 1, 0]   [2, 3, 1]

Value representations for every input:
               [0, 2, 0]
[1, 0, 1, 0]   [0, 3, 0]   [1, 2, 3]
[0, 2, 0, 2] x [1, 0, 3] = [2, 8, 0]
[1, 1, 1, 1]   [1, 1, 0]   [2, 6, 3]

$QK^T$ for Input 1:

            [0, 4, 2]
[1, 0, 2] x [1, 4, 3] = [2, 4, 4]
            [1, 0, 1]
$XX^T$

$XX^T$ takes the inner (dot) product of each row vector with itself and with the other row vectors. The inner product characterizes the angle between two vectors ($\cos\theta=\frac{a\cdot{b}}{|a||b|}$), i.e. the projection of one vector onto another: a large projection value means the two vectors are highly related (relevance/similarity), as the small demo below shows.
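
A tiny numeric illustration (a sketch reusing the three inputs above):

import torch

x = torch.tensor([
    [1., 0., 1., 0.],
    [0., 2., 0., 2.],
    [1., 1., 1., 1.],
])

# Entry (i, j) is the dot product of row i with row j:
# larger values indicate higher relevance/similarity.
print(x @ x.T)
# tensor([[2., 0., 2.],
#         [0., 8., 4.],
#         [2., 4., 4.]])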

Softmaxed ($\sigma(z_i)=\frac{e^{z_i}}{\sum_{j=1}^K{e^{z_j}}}$) attention scores, $\text{softmax}\left(\frac{QK^T}{\sqrt{d_k}}\right)$:

softmax([2, 4, 4]) = [0.0, 0.5, 0.5]
$\sqrt{d_k}$

Dividing every element of the matrix $A$ by $\sqrt{d_k}$ brings the variance back to 1. This decouples how "sharp" the $\text{softmax}(A)$ distribution is from $d_k$, keeping gradient values stable during training.
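
A quick numeric check of this effect (a sketch; the dimensions are arbitrary assumptions):

import torch

d_k = 512
q = torch.randn(10000, d_k)  # entries ~ N(0, 1)
k = torch.randn(10000, d_k)

scores = (q * k).sum(dim=-1)        # dot products: variance ~ d_k
print(scores.var())                 # ~ 512
print((scores / d_k ** 0.5).var())  # ~ 1 after scaling by sqrt(d_k)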

Alignment vectors (yellow vectors in the illustration) are summed to produce Output 1, $\text{softmax}\left(\frac{QK^T}{\sqrt{d_k}}\right)V$:

1: 0.0 * [1, 2, 3] = [0.0, 0.0, 0.0]
2: 0.5 * [2, 8, 0] = [1.0, 4.0, 0.0]
3: 0.5 * [2, 6, 3] = [1.0, 3.0, 1.5]

[0.0, 0.0, 0.0] + [1.0, 4.0, 0.0] + [1.0, 3.0, 1.5] = [2.0, 7.0, 1.5]

Repeat for Input 2 & Input 3:

Output 2: [2.0, 8.0, 0.0]
Output 3: [2.0, 7.8, 0.3]
$QK^TV$

On another level, the $QKV$ idea in self-attention is about building a database with global semantic-integration capability.

import torch
from torch.nn.functional import softmax

# 1. Prepare inputs
x = [
    [1, 0, 1, 0],  # Input 1
    [0, 2, 0, 2],  # Input 2
    [1, 1, 1, 1],  # Input 3
]
x = torch.tensor(x, dtype=torch.float32)

# 2. Weights initialization
w_query = [
    [1, 0, 1],
    [1, 0, 0],
    [0, 0, 1],
    [0, 1, 1],
]
w_key = [
    [0, 0, 1],
    [1, 1, 0],
    [0, 1, 0],
    [1, 1, 0],
]
w_value = [
    [0, 2, 0],
    [0, 3, 0],
    [1, 0, 3],
    [1, 1, 0],
]
w_query = torch.tensor(w_query, dtype=torch.float32)
w_key = torch.tensor(w_key, dtype=torch.float32)
w_value = torch.tensor(w_value, dtype=torch.float32)

# 3. Derive query, key and value
queries = x @ w_query
keys = x @ w_key
values = x @ w_value

# 4. Calculate attention scores
attn_scores = queries @ keys.T

# 5. Calculate softmax
attn_scores_softmax = softmax(attn_scores, dim=-1)
# tensor([[6.3379e-02, 4.6831e-01, 4.6831e-01],
#         [6.0337e-06, 9.8201e-01, 1.7986e-02],
#         [2.9539e-04, 8.8054e-01, 1.1917e-01]])
# For readability, approximate the above as follows
attn_scores_softmax = [
    [0.0, 0.5, 0.5],
    [0.0, 1.0, 0.0],
    [0.0, 0.9, 0.1],
]
attn_scores_softmax = torch.tensor(attn_scores_softmax)

# 6. Multiply scores with values
# values[:, None]: (3, 1, 3); attn_scores_softmax.T[:, :, None]: (3, 3, 1).
# Broadcasting yields one weighted value vector per (input, output) pair.
weighted_values = values[:, None] * attn_scores_softmax.T[:, :, None]

# 7. Sum weighted values
outputs = weighted_values.sum(dim=0)

print(outputs)
# With the rounded scores above:
# tensor([[2.0000, 7.0000, 1.5000],
#         [2.0000, 8.0000, 0.0000],
#         [2.0000, 7.8000, 0.3000]])
# With the exact softmax scores:
# tensor([[1.9366, 6.6831, 1.5951],
#         [2.0000, 7.9640, 0.0540],
#         [1.9997, 7.7599, 0.3584]])

Multi-Head Attention Mechanism

Multiple output channels:

\begin{equation}
\begin{split}
\text{MultiHead}(Q,K,V)&=\text{Concat}(\text{head}_1,\ldots,\text{head}_h)W^O \\
\text{where}\ \text{head}_i&=\text{Attention}(QW_i^Q,KW_i^K,VW_i^V)
\end{split}
\end{equation}

from math import sqrt

import torch
from torch import nn

class Self_Attention(nn.Module):
    # input : batch_size * seq_len * input_dim
    # Q : batch_size * seq_len * dim_k
    # K : batch_size * seq_len * dim_k
    # V : batch_size * seq_len * dim_v
    def __init__(self, input_dim, dim_k, dim_v):
        super().__init__()
        self.q = nn.Linear(input_dim, dim_k)
        self.k = nn.Linear(input_dim, dim_k)
        self.v = nn.Linear(input_dim, dim_v)
        self._norm_fact = 1 / sqrt(dim_k)

    def forward(self, x):
        Q = self.q(x)  # Q: batch_size * seq_len * dim_k
        K = self.k(x)  # K: batch_size * seq_len * dim_k
        V = self.v(x)  # V: batch_size * seq_len * dim_v

        # softmax(Q @ K^T / sqrt(d_k)): batch_size * seq_len * seq_len
        # (scale before the softmax, not after)
        atten = nn.Softmax(dim=-1)(
            torch.bmm(Q, K.permute(0, 2, 1)) * self._norm_fact
        )

        # atten @ V: batch_size * seq_len * dim_v
        output = torch.bmm(atten, V)

        return output

class Self_Attention_Multi_Head(nn.Module):
    # input : batch_size * seq_len * input_dim
    # Q, K : batch_size * nums_head * seq_len * (dim_k // nums_head)
    # V : batch_size * nums_head * seq_len * (dim_v // nums_head)
    def __init__(self, input_dim, dim_k, dim_v, nums_head):
        super().__init__()
        assert dim_k % nums_head == 0
        assert dim_v % nums_head == 0
        self.q = nn.Linear(input_dim, dim_k)
        self.k = nn.Linear(input_dim, dim_k)
        self.v = nn.Linear(input_dim, dim_v)

        self.nums_head = nums_head
        self.dim_k = dim_k
        self.dim_v = dim_v
        self._norm_fact = 1 / sqrt(dim_k // nums_head)  # per-head scale

    def forward(self, x):
        batch, seq_len, _ = x.shape

        # Split projections into heads: batch * nums_head * seq_len * head_dim.
        Q = self.q(x).reshape(batch, seq_len, self.nums_head, -1).permute(0, 2, 1, 3)
        K = self.k(x).reshape(batch, seq_len, self.nums_head, -1).permute(0, 2, 1, 3)
        V = self.v(x).reshape(batch, seq_len, self.nums_head, -1).permute(0, 2, 1, 3)

        # softmax(Q @ K^T / sqrt(d_head)): batch * nums_head * seq_len * seq_len
        atten = nn.Softmax(dim=-1)(
            torch.matmul(Q, K.permute(0, 1, 3, 2)) * self._norm_fact
        )

        # Concatenate heads back: batch * seq_len * dim_v
        output = torch.matmul(atten, V).permute(0, 2, 1, 3).reshape(batch, seq_len, -1)

        return output

Positional Encoding Mechanism

Positional encoding uses a d-dimensional vector of sine and cosine functions to represent the position of each word in the input sequence. It enriches the model's input by providing position information (the word-order signal is added to the word embeddings to help the model learn it):

  • Unique: it outputs a unique encoding for each time step.
  • Consistent: the distance between any two time steps stays consistent across sentences of different lengths.
  • Generalizable: the model can generalize effortlessly to longer sentences, since the positional encoding values are bounded.
  • Deterministic: the positional encoding values are deterministic.

The encoding function uses sine and cosine functions whose frequencies decrease along the vector dimension. The encoding vector contains a sine-cosine pair for each frequency, so that $\sin(x+k)$ and $\cos(x+k)$ can be expressed as linear transformations of $\sin(x)$ and $\cos(x)$, which effectively represents relative positions.

For $\vec{p_t}\in\mathbb{R}^d$ (where $d$ is even), define $f:\mathbb{N}\to\mathbb{R}^d$ as

\begin{align}
\vec{p_t}^{(i)}=f(t)^{(i)}&:=
\begin{cases}
\sin({\omega_k}\cdot{t}), &\text{if}\ i=2k \\
\cos({\omega_k}\cdot{t}), &\text{if}\ i=2k+1
\end{cases}
\end{align}

where

$\omega_k=\frac{1}{10000^{2k/d}}$

which yields

\begin{equation}
\vec{p_t} = \begin{bmatrix}
\sin({\omega_1}\cdot{t}) \\
\cos({\omega_1}\cdot{t}) \\
\sin({\omega_2}\cdot{t}) \\
\cos({\omega_2}\cdot{t}) \\
\vdots \\
\sin({\omega_{d/2}}\cdot{t}) \\
\cos({\omega_{d/2}}\cdot{t})
\end{bmatrix}_{d\times{1}}
\end{equation}
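
A direct implementation of these formulas (a sketch; the sequence length and dimension below are arbitrary assumptions):

import torch

def positional_encoding(seq_len, d):
    """Return a (seq_len, d) matrix whose row t is the vector p_t above."""
    t = torch.arange(seq_len, dtype=torch.float32)[:, None]  # positions
    k = torch.arange(d // 2, dtype=torch.float32)[None, :]   # frequency index
    omega = 1.0 / 10000 ** (2 * k / d)                       # omega_k

    pe = torch.zeros(seq_len, d)
    pe[:, 0::2] = torch.sin(omega * t)  # even dimensions: sin(omega_k * t)
    pe[:, 1::2] = torch.cos(omega * t)  # odd dimensions: cos(omega_k * t)
    return pe

print(positional_encoding(50, 128).shape)  # torch.Size([50, 128])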

LLM

LangChain

LangChain aims to make programming with LLMs easier.

LangChain Modules

The Model I/O module normalizes LLM inputs (e.g. prompts), APIs, and outputs (e.g. completions):

(Figure: LangChain Model I/O Module)

import { PromptTemplate } from '@langchain/core/prompts'
import { OpenAI } from '@langchain/openai'
import { CommaSeparatedListOutputParser } from '@langchain/core/output_parsers'

const template = PromptTemplate.fromTemplate(
  'List 10 {subject}.\n{format_instructions}'
)
const model = new OpenAI({ temperature: 0 })
const listParser = new CommaSeparatedListOutputParser()

const prompt = await template.format({
  subject: 'countries',
  format_instructions: listParser.getFormatInstructions(),
})

const result = await model.invoke(prompt)

const listResult = await listParser.parse(result)

The Retrieval module helps process data alongside the user inputs, making it easier to retrieve relevant information:

(Figure: LangChain Retrieval Module)

Quality in, quality out.

import { CSVLoader } from 'langchain/document_loaders/fs/csv'
import { RecursiveCharacterTextSplitter } from 'langchain/text_splitter'
import { OpenAIEmbeddings } from '@langchain/openai'
import { UpstashVectorStore } from '@langchain/community/vectorstores/upstash'
import { ScoreThresholdRetriever } from 'langchain/retrievers/score_threshold'

// CSV data.
const loader = new CSVLoader('path/to/example.csv')
const docs = await loader.load()

// Text splitter.
const splitter = new RecursiveCharacterTextSplitter({
  chunkSize: 10,
  chunkOverlap: 1,
})
const splitDocs = await splitter.splitDocuments(docs)

// Embeddings and vector store.
const vectorStore = new UpstashVectorStore(new OpenAIEmbeddings())
await vectorStore.addDocuments(splitDocs)
const retriever = ScoreThresholdRetriever.fromVectorStore(vectorStore, {
  minSimilarityScore: 0.9,
})
const result = await retriever.getRelevantDocuments('...?')

The Chains module links tasks together:

(Figure: LangChain Chains Module)

import { PromptTemplate } from '@langchain/core/prompts'
import { RunnableSequence } from '@langchain/core/runnables'
import { OpenAI } from '@langchain/openai'
import { CommaSeparatedListOutputParser } from '@langchain/core/output_parsers'

const template = PromptTemplate.fromTemplate(
  'List 10 {subject}.\n{format_instructions}'
)
const model = new OpenAI({ temperature: 0 })
const listParser = new CommaSeparatedListOutputParser()

const chain = RunnableSequence.from([template, model, listParser])

const result = await chain.invoke({
  subject: 'countries',
  format_instructions: listParser.getFormatInstructions(),
})

The Agents module is a chain with a list of functions (called tools) it can execute; while chains are hardcoded, agents choose their actions with the help of an LLM:

(Figure: LangChain Agents Module)

import { VectorStoreToolkit, createVectorStoreAgent } from 'langchain/agents'

const toolkit = new VectorStoreToolkit(
  { name: 'Demo Data', vectorStore },
  model
)
const agent = createVectorStoreAgent(model, toolkit)

const result = await agent.invoke({ input: '...' })

AI Platform

  • OpenAI GPT API.
  • Google Gemini API.