Creating a WordPiece Vocabulary

Contents

    • 1. Overview
    • 2. Workflow
      • 2.1 Preprocessing
      • 2.2 Counting
      • 2.3 Splitting
      • 2.4 Adding subwords
    • 3. Code implementation

This article explains how to build a WordPiece vocabulary from a given body of text; the code comes from Google.

1. Overview

The goal of WordPiece is to exploit the internal structure of words and make full use of subwords, striking a balance between two things: breaking long words into shorter pieces to make the text representation more flexible, and converting words into tokens efficiently.

The former tends to increase the vocabulary size, while the latter tends to decrease it.

2. Workflow

2.1 Preprocessing

After reading in all of the text, the first step is to preprocess it:

  • For English, we can lowercase every character, strip accents (á becomes a), and then split on whitespace and punctuation;
  • For Chinese, we can convert traditional characters to simplified ones, but the only way to split is character by character. The only real improvement is to bring in an external tokenizer to segment the text into words first and then continue with the later steps, since a single Chinese character cannot be decomposed any further. One might think of using radicals, but how would the order of the radicals be determined? The rest of this article focuses on English.

Once the text has been cut into word-sized chunks, we move on to the next step. A minimal preprocessing sketch is shown below.
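
The sketch below illustrates the English preprocessing described above; the helper name preprocess_text and the use of unicodedata for accent stripping are my own choices for illustration, not part of the original code.

import re
import unicodedata
from typing import List


def preprocess_text(text: str) -> List[str]:
    """Lowercases, strips accents, and splits on whitespace/punctuation."""
    text = text.lower()
    # Strip accents: decompose characters and drop combining marks (á -> a).
    text = ''.join(c for c in unicodedata.normalize('NFD', text)
                   if unicodedata.category(c) != 'Mn')
    # Split on anything that is not a letter or digit (whitespace, punctuation).
    return [tok for tok in re.split(r'[^a-z0-9]+', text) if tok]


# Example: preprocess_text('Héllo, world!') returns ['hello', 'world']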

2.2 Counting

The preprocessing stage leaves us with word-sized chunks. To get a picture of the words overall, we count each word and sort the counts in descending order. If the corpus is large, we can optimize here by filtering out words whose counts are too high or too low, as well as words that are too long (see the sketch below).
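
A simplified sketch of this counting-and-filtering step, assuming hypothetical cutoffs min_count, max_count, and max_len (the full versions of these ideas appear later as count_words, get_input_words, and filter_input_words):

import collections
from typing import Iterable, List, Tuple


def count_and_filter(words: Iterable[str],
                     min_count: int = 2,
                     max_count: int = 10**7,
                     max_len: int = 50) -> List[Tuple[str, int]]:
    """Counts words, then drops ones that are too rare, too common, or too long."""
    counts = collections.Counter(words)
    kept = [(w, c) for w, c in counts.items()
            if min_count <= c <= max_count and len(w) <= max_len]
    # Sort by count, descending.
    return sorted(kept, key=lambda wc: wc[1], reverse=True)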

Since wordpieces are essentially subwords, to convert words into subwords sensibly we have to consider the basic units that make up a word: if the vocabulary is missing one of a word's basic units, the word either cannot be represented at all, or its representation is incomplete and gets confused with other words.

So here we count how many times each individual character appears across all words. As before, we can optimize by deleting characters that occur rarely; once those rare characters are removed, any word containing them can no longer be represented, so we must delete those words as well. A small sketch of this character-level filtering follows.
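
A minimal sketch of the character counting and filtering just described; the helper names count_chars and drop_words_with_rare_chars are illustrative, and the full implementations appear later as get_allowed_chars and filter_input_words:

import collections
from typing import List, Set, Tuple


def count_chars(word_counts: List[Tuple[str, int]]) -> collections.Counter:
    """Counts character occurrences, weighted by each word's count."""
    char_counts = collections.Counter()
    for word, count in word_counts:
        for char in word:
            char_counts[char] += count
    return char_counts


def drop_words_with_rare_chars(word_counts: List[Tuple[str, int]],
                               allowed_chars: Set[str]) -> List[Tuple[str, int]]:
    """Keeps only words whose characters are all in the allowed set."""
    return [(w, c) for w, c in word_counts
            if all(ch in allowed_chars for ch in w)]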

2.3 Splitting

In this step we split each word in the count dictionary. The procedure is as follows:

Place a start pointer and an end pointer on the word, and try to match the substring between them against the union of the count dictionary and the character dictionary. On a match, move the start pointer up to the end pointer and reset the end pointer to the end of the word; on a miss, move the end pointer one step toward the start pointer. This repeats until the two pointers meet: if they meet at the end of the word, the collected split positions are returned; if they meet anywhere else, the word cannot be segmented and None is returned.

The implementation is as follows:

def get_split_indices(word, curr_tokens, include_joiner_token, joiner):  
    indices = []  
    start = 0  
    while start < len(word):  
        end = len(word)  
        while end > start:  
            subtoken = word[start:end]  
            # Subtoken includes the joiner token.  
            if include_joiner_token and start > 0:  
                subtoken = joiner + subtoken  
            # If subtoken is part of vocab, 'end' is a valid start index.  
            if subtoken in curr_tokens:  
                indices.append(end)  
                break  
            end -= 1  
        if end == start:
            # No subtoken matched: the word cannot be segmented with curr_tokens.
            return None
        start = end  
    return indices  
  
  
if __name__ == '__main__':  
    res = get_split_indices('hello', ['h', '##e', '##llo', '##o'], True, '##')  
    print(res)  # [1, 2, 5]

2.4 Adding subwords

The splitting in the previous step effectively looks for the largest matching pieces, but it is a greedy algorithm, so the result is not necessarily optimal. Once we have the indices of the largest split pieces of a word, we can find strings that frequently occur together more quickly. The procedure: starting from each split index, take subwords of increasing length, and build a dictionary of subword counts, incrementing each subword's count by the word's count.

This enumeration produces a very large number of subwords, so if necessary we can optimize it, for example by dropping subwords that are too long or whose counts are too small. With that, the subword-adding step is complete.

Note, however, that these subword counts contain double counting: whenever a long string is counted, all of its shorter prefixes are counted as well. We therefore iterate from long strings to short ones, and once a long string has enough occurrences to be kept as a vocabulary element, we subtract its count from all shorter strings that share the same prefix, so that they are not inflated.

At the same time, the vocabulary does not necessarily contain the full character dictionary, so we merge the two; the result is the WordPiece vocabulary. The toy example below illustrates the prefix-count correction.
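
A toy example of the prefix-count correction, with the joiner handling omitted for brevity; it mirrors the selection loop in learn_with_thresh in the next section. The single word 'hello' with count 3 is counted starting from index 0, so every prefix initially receives the same count:

# subtokens[k] maps a length-k string to its count.
subtokens = [
    {},               # length 0 (unused)
    {'h': 3},
    {'he': 3},
    {'hel': 3},
    {'hell': 3},
    {'hello': 3},
]

thresh = 3
selected = {}
for length in range(5, 0, -1):
    for token, count in subtokens[length].items():
        if count >= thresh:
            selected[token] = count
        # Remove this token's contribution from its shorter prefixes so they
        # are not double-counted.
        for i in range(1, length):
            prefix = token[:i]
            if prefix in subtokens[i]:
                subtokens[i][prefix] -= count

print(selected)  # {'hello': 3}; the prefixes 'h'..'hell' drop to 0 and are not selected.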

3. Code implementation

First we preprocess the words; that code is omitted here.

Here we pass in an iterable and use a Counter from the collections library to count every word:

import collections

import numpy as np


def count_words(iterable) -> collections.Counter:
    """Converts an iterable of arrays of words into a `Counter` of word counts."""
    counts = collections.Counter()  
    for words in iterable:  
        # Convert a RaggedTensor to a flat/dense Tensor.  
        words = getattr(words, 'flat_values', words)  
        # Flatten any dense tensor  
        words = np.reshape(words, [-1])  
        counts.update(words)  
  
    # Decode the words if necessary.  
    example_word = next(iter(counts.keys()))  
    if isinstance(example_word, bytes):  
        counts = collections.Counter(  
            {word.decode('utf-8'): count for word, count in counts.items()})  
  
    return counts
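
For example, feeding the function two small batches of words (a minimal illustration; real inputs would typically be batches of tokenized text such as RaggedTensors):

word_counts = count_words([['hello', 'world'], ['hello']])
print(word_counts['hello'], word_counts['world'])  # 2 1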

Next, determine the bounds for the frequency search from the current word counts together with upper_thresh and lower_thresh:

def get_search_threshs(word_counts, upper_thresh, lower_thresh):  
    """Clips the thresholds for binary search based on current word counts.  
  
    The upper threshold parameter typically has a large default value that can
    result in many iterations of unnecessary search. Thus we clip the upper and
    lower bounds of search to the maximum and the minimum wordcount values.

    Args:
      word_counts: list of (string, int) tuples
      upper_thresh: int, upper threshold for binary search
      lower_thresh: int, lower threshold for binary search

    Returns:
      upper_search: int, clipped upper threshold for binary search
      lower_search: int, clipped lower threshold for binary search
    """
    counts = [count for _, count in word_counts]  
    max_count = max(counts)  
    min_count = min(counts)  
  
    if upper_thresh is None:  
        upper_search = max_count  
    else:  
        upper_search = max_count if max_count < upper_thresh else upper_thresh  
  
    if lower_thresh is None:  
        lower_search = min_count  
    else:  
        lower_search = min_count if min_count > lower_thresh else lower_thresh  
  
    return upper_search, lower_search

Set a cap on the number of distinct single characters:

def get_allowed_chars(all_counts, max_unique_chars):  
    """Get the top max_unique_chars characters within our wordcounts.  
  
    We want each character to be in the vocabulary so that we can keep splitting
    down to the character level if necessary. However, in order not to inflate
    our vocabulary with rare characters, we only keep the top max_unique_chars
    characters.

    Args:
      all_counts: list of (string, int) tuples
      max_unique_chars: int, maximum number of unique single-character tokens

    Returns:
      set of strings containing top max_unique_chars characters in all_counts
    """
    char_counts = collections.defaultdict(int)  
  
    for word, count in all_counts:  
        for char in word:  
            char_counts[char] += count  
  
    # Sort by count, then alphabetically.  
    sorted_counts = sorted(sorted(char_counts.items(), key=lambda x: x[0]),  
                           key=lambda x: x[1], reverse=True)  
  
    allowed_chars = set()  
    for i in range(min(len(sorted_counts), max_unique_chars)):  
        allowed_chars.add(sorted_counts[i][0])  
    return allowed_chars

Combining all_counts and allowed_chars, we drop the words that contain characters outside allowed_chars, and keep at most max_input_tokens of the most frequent words:

def filter_input_words(all_counts, allowed_chars, max_input_tokens):  
    """Filters out words with unallowed chars and limits words to max_input_tokens.  
  
    Args:
      all_counts: list of (string, int) tuples
      allowed_chars: list of single-character strings
      max_input_tokens: int, maximum number of tokens accepted as input

    Returns:
      list of (string, int) tuples of filtered wordcounts
    """
    # Ensure that the input is sorted so that if `max_input_tokens` is reached
    # the least common tokens are dropped.
    all_counts = sorted(
        all_counts, key=lambda word_and_count: word_and_count[1], reverse=True)  
    filtered_counts = []  
    for word, count in all_counts:  
        if (max_input_tokens != -1 and  
                len(filtered_counts) >= max_input_tokens):  
            break  
        has_unallowed_chars = False  
        for char in word:  
            if char not in allowed_chars:  
                has_unallowed_chars = True  
                break
        if has_unallowed_chars:
            continue  
        filtered_counts.append((word, count))  
  
    return filtered_counts

Get the split indices, making sure curr_tokens can actually split the word:

def get_split_indices(word, curr_tokens, include_joiner_token, joiner):  
    """Gets indices for valid substrings of word, for iterations > 0.  
  
    For iterations > 0, rather than considering every possible substring, we only
    want to consider starting points corresponding to the start of wordpieces in
    the current vocabulary.

    Args:
      word: string we want to split into substrings
      curr_tokens: string to int dict of tokens in vocab (from previous iteration)
      include_joiner_token: bool whether to include joiner token
      joiner: string used to indicate suffixes

    Returns:
      list of ints containing valid starting indices for word
    """
    indices = []  
    start = 0  
    while start < len(word):  
        end = len(word)  
        while end > start:  
            subtoken = word[start:end]  
            # Subtoken includes the joiner token.  
            if include_joiner_token and start > 0:  
                subtoken = joiner + subtoken  
            # If subtoken is part of vocab, 'end' is a valid start index.  
            if subtoken in curr_tokens:  
                indices.append(end)  
                break  
            end -= 1  
  
        if end == start:  
            return None  
        start = end  
  
    return indices

Now for the final steps:

import collections  
from typing import List, Optional  
  
  
Params = collections.namedtuple('Params', [  
    'upper_thresh', 'lower_thresh', 'num_iterations', 'max_input_tokens',  
    'max_token_length', 'max_unique_chars', 'vocab_size', 'slack_ratio',  
    'include_joiner_token', 'joiner', 'reserved_tokens'  
])  
  
  
def extract_char_tokens(word_counts):  
    """Extracts all single-character tokens from word_counts.  
  
    Args:
      word_counts: list of (string, int) tuples

    Returns:
      set of single-character strings contained within word_counts
    """
    seen_chars = set()  
    for word, _ in word_counts:  
        for char in word:  
            seen_chars.add(char)  
    return seen_chars  
  
  
def ensure_all_tokens_exist(input_tokens, output_tokens, include_joiner_token,  
                            joiner):  
    """Adds all tokens in input_tokens to output_tokens if not already present.  
  
    Args:
      input_tokens: set of strings (tokens) we want to include
      output_tokens: string to int dictionary mapping token to count
      include_joiner_token: bool whether to include joiner token
      joiner: string used to indicate suffixes

    Returns:
      string to int dictionary with all tokens in input_tokens included
    """
    for token in input_tokens:  
        if token not in output_tokens:  
            output_tokens[token] = 1  
  
        if include_joiner_token:  
            joined_token = joiner + token  
            if joined_token not in output_tokens:  
                output_tokens[joined_token] = 1  
  
    return output_tokens  
  
  
def get_search_threshs(word_counts, upper_thresh, lower_thresh):  
    """Clips the thresholds for binary search based on current word counts.  
  
    The upper threshold parameter typically has a large default value that can
    result in many iterations of unnecessary search. Thus we clip the upper and
    lower bounds of search to the maximum and the minimum wordcount values.

    Args:
      word_counts: list of (string, int) tuples
      upper_thresh: int, upper threshold for binary search
      lower_thresh: int, lower threshold for binary search

    Returns:
      upper_search: int, clipped upper threshold for binary search
      lower_search: int, clipped lower threshold for binary search
    """
    counts = [count for _, count in word_counts]  
    max_count = max(counts)  
    min_count = min(counts)  
  
    if upper_thresh is None:  
        upper_search = max_count  
    else:  
        upper_search = max_count if max_count < upper_thresh else upper_thresh  
  
    if lower_thresh is None:  
        lower_search = min_count  
    else:  
        lower_search = min_count if min_count > lower_thresh else lower_thresh  
  
    return upper_search, lower_search  
  
  
def get_input_words(word_counts, reserved_tokens, max_token_length):  
    """Filters out words that are longer than max_token_length or are reserved.  
  
    Args:
      word_counts: list of (string, int) tuples
      reserved_tokens: list of strings
      max_token_length: int, maximum length of a token

    Returns:
      list of (string, int) tuples of filtered wordcounts
    """
    all_counts = []  
  
    for word, count in word_counts:  
        if len(word) > max_token_length or word in reserved_tokens:  
            continue  
        all_counts.append((word, count))  
  
    return all_counts  
  
  
def generate_final_vocabulary(reserved_tokens, char_tokens, curr_tokens):  
    """Generates final vocab given reserved, single-character, and current tokens.  
  
    Args:
      reserved_tokens: list of strings (tokens) that must be included in vocab
      char_tokens: set of single-character strings
      curr_tokens: string to int dict mapping token to count

    Returns:
      list of strings representing final vocabulary
    """
    sorted_char_tokens = sorted(list(char_tokens))  
    vocab_char_arrays = []  
    vocab_char_arrays.extend(reserved_tokens)  
    vocab_char_arrays.extend(sorted_char_tokens)  
  
    # Sort by count, then alphabetically.  
    sorted_tokens = sorted(sorted(curr_tokens.items(), key=lambda x: x[0]),  
                           key=lambda x: x[1], reverse=True)  
    for token, _ in sorted_tokens:  
        vocab_char_arrays.append(token)  
  
    seen_tokens = set()  
    # Adding unique tokens to list to maintain sorted order.  
    vocab_words = []  
    for word in vocab_char_arrays:  
        if word in seen_tokens:  
            continue  
        seen_tokens.add(word)  
        vocab_words.append(word)  
  
    return vocab_words  
  
  
def learn_with_thresh(word_counts, thresh, params):  
    """Wordpiece learning algorithm to produce a vocab given frequency threshold.  
  
    Args:
      word_counts: list of (string, int) tuples
      thresh: int, frequency threshold for a token to be included in the vocab
      params: Params namedtuple, parameters for learning

    Returns:
      list of strings, vocabulary generated for the given thresh
    """
    # Set of single-character tokens.  
    char_tokens = extract_char_tokens(word_counts)  
    curr_tokens = ensure_all_tokens_exist(char_tokens, {},  
                                          params.include_joiner_token,  
                                          params.joiner)  
  
    for iteration in range(params.num_iterations):  
        subtokens = [dict() for _ in range(params.max_token_length + 1)]  
        # Populate array with counts of each subtoken.  
        for word, count in word_counts:  
            if iteration == 0:  
                split_indices = range(1, len(word) + 1)  
            else:  
                split_indices = get_split_indices(word, curr_tokens,  
                                                  params.include_joiner_token,  
                                                  params.joiner)  
                if not split_indices:  
                    continue  
  
            start = 0  
            for index in split_indices:  
                for end in range(start + 1, len(word) + 1):  
                    subtoken = word[start:end]  
                    length = len(subtoken)  
                    if params.include_joiner_token and start > 0:  
                        subtoken = params.joiner + subtoken  
                    if subtoken in subtokens[length]:  
                        # Subtoken exists, increment count.  
                        subtokens[length][subtoken] += count  
                    else:  
                        # New subtoken, add to dict.  
                        subtokens[length][subtoken] = count  
                start = index  
  
        next_tokens = {}  
        # Get all tokens that have a count above the threshold.  
        for length in range(params.max_token_length, 0, -1):  
            for token, count in subtokens[length].items():  
                if count >= thresh:  
                    next_tokens[token] = count  
                # Decrement the count of all prefixes.  
                if len(token) > length:  # This token includes the joiner.  
                    joiner_len = len(params.joiner)  
                    for i in range(1 + joiner_len, length + joiner_len):  
                        prefix = token[0:i]  
                        if prefix in subtokens[i - joiner_len]:  
                            subtokens[i - joiner_len][prefix] -= count  
                else:  
                    for i in range(1, length):  
                        prefix = token[0:i]  
                        if prefix in subtokens[i]:  
                            subtokens[i][prefix] -= count  
  
        # Add back single-character tokens.  
        curr_tokens = ensure_all_tokens_exist(char_tokens, next_tokens,  
                                              params.include_joiner_token,  
                                              params.joiner)  
  
    vocab_words = generate_final_vocabulary(params.reserved_tokens, char_tokens,  
                                            curr_tokens)  
  
    return vocab_words  
  
  
def learn_binary_search(word_counts, lower, upper, params):  
    """Performs binary search to find wordcount frequency threshold.  
  
    Given upper and lower bounds and a list of (word, count) tuples, performs
    binary search to find the threshold closest to producing a vocabulary
    of size vocab_size.

    Args:
      word_counts: list of (string, int) tuples
      lower: int, lower bound for binary search
      upper: int, upper bound for binary search
      params: Params namedtuple, parameters for learning

    Returns:
      list of strings, vocab that is closest to target vocab_size
    """
    thresh = (upper + lower) // 2
    current_vocab = learn_with_thresh(word_counts, thresh, params)  
    current_vocab_size = len(current_vocab)  
  
    # Allow count to be within k% of the target count, where k is slack ratio.  
    slack_count = params.slack_ratio * params.vocab_size  
    if slack_count < 0:  
        slack_count = 0  
  
    is_within_slack = (current_vocab_size <= params.vocab_size) and (  
            params.vocab_size - current_vocab_size <= slack_count)  
  
    # We've created a vocab within our goal range (or, ran out of search space).  
    if is_within_slack or lower >= upper or thresh <= 1:  
        return current_vocab  
  
    current_vocab = None  
  
    if current_vocab_size > params.vocab_size:  
        return learn_binary_search(word_counts, thresh + 1, upper, params)  
  
    else:  
        return learn_binary_search(word_counts, lower, thresh - 1, params)  

Putting it all together:

def learn(word_counts,  
          vocab_size: int,  
          reserved_tokens: List[str],  
          upper_thresh: Optional[int] = int(1e7),  
          lower_thresh: Optional[int] = 10,  
          num_iterations: int = 4,  
          max_input_tokens: Optional[int] = int(5e6),  
          max_token_length: int = 50,  
          max_unique_chars: int = 1000,  
          slack_ratio: float = 0.05,  
          include_joiner_token: bool = True,  
          joiner: str = '##') -> List[str]:  
    """Takes in wordcounts and returns wordpiece vocabulary.  
  
    Args:
      word_counts: (word, count) pairs as a dictionary, or list of tuples.
      vocab_size: The target vocabulary size. This is the maximum size.
      reserved_tokens: A list of tokens that must be included in the vocabulary.
      upper_thresh: Initial upper bound on the token frequency threshold.
      lower_thresh: Initial lower bound on the token frequency threshold.
      num_iterations: Number of iterations to run.
      max_input_tokens: The maximum number of words in the initial vocabulary. The
        words with the lowest counts are discarded. Use `None` or `-1` for "no
        maximum".
      max_token_length: The maximum token length. Counts for longer words are
        discarded.
      max_unique_chars: The maximum alphabet size. This prevents rare characters
        from inflating the vocabulary. Counts for words containing characters
        outside of the selected alphabet are discarded.
      slack_ratio: The maximum deviation acceptable from `vocab_size` for an
        acceptable vocabulary. The acceptable range of vocabulary sizes is from
        `vocab_size*(1-slack_ratio)` to `vocab_size`.
      include_joiner_token: If true, include the `joiner` token in the output
        vocabulary.
      joiner: The prefix to include on suffix tokens in the output vocabulary.
        Usually "##". For example 'places' could be tokenized as `['place',
        '##s']`.

    Returns:
      list of strings, the final vocabulary.
    """
    if isinstance(word_counts, dict):
        word_counts = word_counts.items()  
  
    params = Params(upper_thresh, lower_thresh, num_iterations, max_input_tokens,  
                    max_token_length, max_unique_chars, vocab_size, slack_ratio,  
                    include_joiner_token, joiner, reserved_tokens)  
  
    upper_search, lower_search = get_search_threshs(word_counts,  
                                                    params.upper_thresh,  
                                                    params.lower_thresh)  
  
    all_counts = get_input_words(word_counts, params.reserved_tokens,  
                                 params.max_token_length)  
  
    allowed_chars = get_allowed_chars(all_counts, params.max_unique_chars)  
  
    filtered_counts = filter_input_words(all_counts, allowed_chars,  
                                         params.max_input_tokens)  
  
    vocab = learn_binary_search(filtered_counts, lower_search, upper_search,  
                                params)  
  
    return vocab
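
Assuming we already have word counts (for example from count_words above), a minimal usage sketch might look like the following; the reserved token names are just the conventional BERT ones and are purely illustrative:

word_counts = count_words([['hello', 'world'], ['hello', 'there']])
reserved = ['[PAD]', '[UNK]', '[CLS]', '[SEP]', '[MASK]']
vocab = learn(word_counts, vocab_size=100, reserved_tokens=reserved,
              lower_thresh=1)
print(vocab[:10])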
