
Recursively split by character


This text splitter is the recommended one for generic text. It is parameterized by a list of characters, and it tries to split on them in order until the chunks are small enough. The default list is ["\n\n", "\n", " ", ""]. This has the effect of keeping all paragraphs (and then sentences, and then words) together as long as possible, since those are generally the most strongly semantically related pieces of text.

  1. How the text is split: by a list of characters.
  2. How the chunk size is measured: by number of characters.
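
The fallback order can be seen with a small, self-contained sketch (not part of the original example; demo_text and the tiny chunk_size are illustrative). The splitter first tries the "\n\n" separator, so the short paragraph stays whole, and only the long paragraph is broken further on spaces.

from langchain.text_splitter import RecursiveCharacterTextSplitter

demo_text = "A short paragraph that fits in one chunk.\n\nA second, much longer paragraph that has to be broken up on spaces because it will not fit."

# chunk_size is deliberately tiny so the fallback from "\n\n" to " " is visible.
demo_splitter = RecursiveCharacterTextSplitter(chunk_size=50, chunk_overlap=0)
for chunk in demo_splitter.split_text(demo_text):
    print(repr(chunk))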
# This is a long document we can split up.
with open('../../../state_of_the_union.txt') as f:
    state_of_the_union = f.read()
from langchain.text_splitter import RecursiveCharacterTextSplitter

text_splitter = RecursiveCharacterTextSplitter(
    # Set a really small chunk size, just to show.
    chunk_size=100,
    chunk_overlap=20,
    length_function=len,
)
texts = text_splitter.create_documents([state_of_the_union])
print(texts[0])
print(texts[1])
    page_content='Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and' lookup_str='' metadata={} lookup_index=0
    page_content='of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans.' lookup_str='' metadata={} lookup_index=0
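
Note that the two documents overlap: the phrase "of Congress and" ends the first chunk and starts the second, which is the chunk_overlap=20 setting carrying a little trailing context into the next chunk.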
text_splitter.split_text(state_of_the_union)[:2]
    ['Madam Speaker, Madam Vice President, our First Lady and Second Gentleman. Members of Congress and',
     'of Congress and the Cabinet. Justices of the Supreme Court. My fellow Americans.']
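
The separator list itself can be overridden through the separators keyword argument. The sketch below is a hedged illustration rather than part of the original example; the particular list (trying sentence boundaries before single spaces) is an assumption chosen for demonstration.

custom_splitter = RecursiveCharacterTextSplitter(
    # Try blank lines, then newlines, then sentence ends, then spaces, then single characters.
    separators=["\n\n", "\n", ". ", " ", ""],
    chunk_size=100,
    chunk_overlap=20,
    length_function=len,
)
custom_splitter.split_text(state_of_the_union)[:2]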