Chunking Strategies
Chunking strategies are critical for dividing large texts into manageable parts, enabling effective content processing and extraction. These strategies are foundational in cosine similarity-based extraction techniques, which allow users to retrieve only the most relevant chunks of content for a given query. Additionally, they facilitate direct integration into RAG (Retrieval-Augmented Generation) systems for structured and scalable workflows.
Why Use Chunking?
1. Cosine Similarity and Query Relevance: Prepares chunks for semantic similarity analysis.
2. RAG System Integration: Seamlessly processes and stores chunks for retrieval.
3. Structured Processing: Allows for diverse segmentation methods, such as sentence-based, topic-based, or windowed approaches.
Methods of Chunking
1. Regex-Based Chunking
Splits text based on regular expression patterns, useful for coarse segmentation.
Code Example:
import re

class RegexChunking:
    def __init__(self, patterns=None):
        # Default pattern splits on blank lines (paragraph boundaries)
        self.patterns = patterns or [r'\n\n']

    def chunk(self, text):
        paragraphs = [text]
        # Apply each pattern in turn, re-splitting every segment produced so far
        for pattern in self.patterns:
            paragraphs = [seg for p in paragraphs for seg in re.split(pattern, p)]
        return paragraphs

# Example Usage
text = """This is the first paragraph.

This is the second paragraph."""
chunker = RegexChunking()
print(chunker.chunk(text))
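The patterns argument makes the splitting behavior configurable. As a brief illustration (the sentence-boundary pattern here is a hypothetical addition, not part of the example above), text can be split first on blank lines and then on sentence-ending punctuation:

# Hypothetical custom patterns: paragraph breaks, then sentence boundaries
chunker = RegexChunking(patterns=[r'\n\n', r'(?<=[.!?])\s+'])
print(chunker.chunk(text))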
2. Sentence-Based Chunking
Divides text into sentences using NLP tools, ideal for extracting meaningful statements.
Code Example:
from nltk.tokenize import sent_tokenize

class NlpSentenceChunking:
    def chunk(self, text):
        # Requires the NLTK Punkt models; run nltk.download('punkt') once if missing
        sentences = sent_tokenize(text)
        return [sentence.strip() for sentence in sentences]

# Example Usage
text = "This is sentence one. This is sentence two."
chunker = NlpSentenceChunking()
print(chunker.chunk(text))
3. Topic-Based Segmentation
Uses algorithms like TextTiling to create topic-coherent chunks.
Code Example:
from nltk.tokenize import TextTilingTokenizer

class TopicSegmentationChunking:
    def __init__(self):
        # TextTiling detects topic shifts; it needs the NLTK stopwords corpus
        # (run nltk.download('stopwords') once if missing)
        self.tokenizer = TextTilingTokenizer()

    def chunk(self, text):
        return self.tokenizer.tokenize(text)

# Example Usage
# Note: TextTiling expects fairly long, multi-paragraph input; very short
# texts like this may raise an error or come back as a single chunk.
text = """This is an introduction.

This is a detailed discussion on the topic."""
chunker = TopicSegmentationChunking()
print(chunker.chunk(text))
4. Fixed-Length Word Chunking
Segments text into chunks of a fixed word count.
Code Example:
class FixedLengthWordChunking:
    def __init__(self, chunk_size=100):
        self.chunk_size = chunk_size

    def chunk(self, text):
        # Split on whitespace and group words into fixed-size chunks
        words = text.split()
        return [' '.join(words[i:i + self.chunk_size])
                for i in range(0, len(words), self.chunk_size)]

# Example Usage
text = "This is a long text with many words to be chunked into fixed sizes."
chunker = FixedLengthWordChunking(chunk_size=5)
print(chunker.chunk(text))
5. Sliding Window Chunking
Generates overlapping chunks for better contextual coherence; with a step smaller than the window size, consecutive chunks share window_size - step words.
Code Example:
class SlidingWindowChunking:
    def __init__(self, window_size=100, step=50):
        self.window_size = window_size
        self.step = step

    def chunk(self, text):
        words = text.split()
        # Texts shorter than one window are returned as a single chunk
        if len(words) <= self.window_size:
            return [' '.join(words)] if words else []
        chunks = []
        # Slide the window forward `step` words at a time
        for i in range(0, len(words) - self.window_size + 1, self.step):
            chunks.append(' '.join(words[i:i + self.window_size]))
        return chunks

# Example Usage
text = "This is a long text to demonstrate sliding window chunking."
chunker = SlidingWindowChunking(window_size=5, step=2)
print(chunker.chunk(text))
Combining Chunking with Cosine Similarity
To enhance the relevance of extracted content, chunking strategies can be paired with cosine similarity techniques. Here's an example workflow:
Code Example:
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

class CosineSimilarityExtractor:
    def __init__(self, query):
        self.query = query
        self.vectorizer = TfidfVectorizer()

    def find_relevant_chunks(self, chunks):
        # Fit TF-IDF on the query plus the chunks so they share one vocabulary
        vectors = self.vectorizer.fit_transform([self.query] + chunks)
        # Compare the query vector (row 0) against every chunk vector
        similarities = cosine_similarity(vectors[0:1], vectors[1:]).flatten()
        return [(chunks[i], similarities[i]) for i in range(len(chunks))]

# Example Workflow
text = """This is a sample document. It has multiple sentences.
We are testing chunking and similarity."""
chunker = SlidingWindowChunking(window_size=5, step=3)
chunks = chunker.chunk(text)
query = "testing chunking"
extractor = CosineSimilarityExtractor(query)
relevant_chunks = extractor.find_relevant_chunks(chunks)
print(relevant_chunks)
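find_relevant_chunks returns every (chunk, score) pair. To retrieve only the most relevant chunks for a query, as described above, the pairs can be sorted by score and truncated. A minimal sketch (top_k = 2 is an arbitrary value for illustration):

# Rank chunks by similarity, highest first, and keep the top matches
top_k = 2  # illustrative value; tune for your use case
ranked = sorted(relevant_chunks, key=lambda pair: pair[1], reverse=True)
print(ranked[:top_k])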