{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"id": "Tce3stUlHN0L"
},
"source": [
"##### Copyright 2020 The TensorFlow Authors."
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {
"cellView": "form",
"execution": {
"iopub.execute_input": "2024-07-19T12:52:52.426158Z",
"iopub.status.busy": "2024-07-19T12:52:52.425657Z",
"iopub.status.idle": "2024-07-19T12:52:52.429261Z",
"shell.execute_reply": "2024-07-19T12:52:52.428707Z"
},
"id": "tuOe1ymfHZPu"
},
"outputs": [],
"source": [
"#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n",
"# you may not use this file except in compliance with the License.\n",
"# You may obtain a copy of the License at\n",
"#\n",
"# https://www.apache.org/licenses/LICENSE-2.0\n",
"#\n",
"# Unless required by applicable law or agreed to in writing, software\n",
"# distributed under the License is distributed on an \"AS IS\" BASIS,\n",
"# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n",
"# See the License for the specific language governing permissions and\n",
"# limitations under the License."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "qFdPvlXBOdUN"
},
"source": [
"# Tokenizing with TF Text"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "xHxb-dlhMIzW"
},
"source": [
"## Overview\n",
"\n",
"Tokenization is the process of breaking up a string into tokens. Commonly, these tokens are words, numbers, and/or punctuation. The `tensorflow_text` package provides a number of tokenizers available for preprocessing text required by your text-based models. By performing the tokenization in the TensorFlow graph, you will not need to worry about differences between the training and inference workflows and managing preprocessing scripts.\n",
"\n",
"This guide discusses the many tokenization options provided by TensorFlow Text, when you might want to use one option over another, and how these tokenizers are called from within your model."
]
},
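{
"cell_type": "markdown",
"metadata": {},
"source": [
"Because the tokenizers run as TensorFlow ops, they can be traced into a `tf.function` and exported as part of a SavedModel, so training and serving share the same preprocessing. The snippet below is a minimal sketch of this idea (it is not part of the original guide and assumes the packages installed in the Setup section that follows).\n",
"\n",
"```python\n",
"import tensorflow as tf\n",
"import tensorflow_text as tf_text\n",
"\n",
"tokenizer = tf_text.WhitespaceTokenizer()\n",
"\n",
"@tf.function(input_signature=[tf.TensorSpec([None], tf.string)])\n",
"def preprocess(texts):\n",
"  # Tokenization happens inside the graph; pad the ragged result so the\n",
"  # function returns a dense tensor.\n",
"  return tokenizer.tokenize(texts).to_tensor(default_value=b'')\n",
"\n",
"print(preprocess(tf.constant([\"never tell me the odds\"])))\n",
"```"
]
},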
{
"cell_type": "markdown",
"metadata": {
"id": "MUXex9ctTuDB"
},
"source": [
"## Setup"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {
"execution": {
"iopub.execute_input": "2024-07-19T12:52:52.433115Z",
"iopub.status.busy": "2024-07-19T12:52:52.432889Z",
"iopub.status.idle": "2024-07-19T12:53:17.169559Z",
"shell.execute_reply": "2024-07-19T12:53:17.168711Z"
},
"id": "z0oj4HS26x05"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"\u001b[31mERROR: pip's dependency resolver does not currently take into account all the packages that are installed. This behaviour is the source of the following dependency conflicts.\r\n",
"tensorflow-datasets 4.9.3 requires protobuf>=3.20, but you have protobuf 3.19.6 which is incompatible.\r\n",
"tensorflow-metadata 1.15.0 requires protobuf<4.21,>=3.20.3; python_version < \"3.11\", but you have protobuf 3.19.6 which is incompatible.\u001b[0m\u001b[31m\r\n",
"\u001b[0m"
]
}
],
"source": [
"!pip install -q \"tensorflow-text==2.11.*\""
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {
"execution": {
"iopub.execute_input": "2024-07-19T12:53:17.174113Z",
"iopub.status.busy": "2024-07-19T12:53:17.173424Z",
"iopub.status.idle": "2024-07-19T12:53:19.289765Z",
"shell.execute_reply": "2024-07-19T12:53:19.289023Z"
},
"id": "alf2kDHJ60rO"
},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"2024-07-19 12:53:17.407541: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory\n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"2024-07-19 12:53:18.206227: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer.so.7'; dlerror: libnvinfer.so.7: cannot open shared object file: No such file or directory\n",
"2024-07-19 12:53:18.206313: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libnvinfer_plugin.so.7'; dlerror: libnvinfer_plugin.so.7: cannot open shared object file: No such file or directory\n",
"2024-07-19 12:53:18.206330: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Cannot dlopen some TensorRT libraries. If you would like to use Nvidia GPU with TensorRT, please make sure the missing libraries mentioned above are installed properly.\n"
]
}
],
"source": [
"import requests\n",
"import tensorflow as tf\n",
"import tensorflow_text as tf_text"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "i4rfKxVvBBu0"
},
"source": [
"## Splitter API\n",
"\n",
"The main interfaces are `Splitter` and `SplitterWithOffsets` which have single methods `split` and `split_with_offsets`. The `SplitterWithOffsets` variant (which extends `Splitter`) includes an option for getting byte offsets. This allows the caller to know which bytes in the original string the created token was created from.\n",
"\n",
"The `Tokenizer` and `TokenizerWithOffsets` are specialized versions of the `Splitter` that provide the convenience methods `tokenize` and `tokenize_with_offsets` respectively.\n",
"\n",
"Generally, for any N-dimensional input, the returned tokens are in a N+1-dimensional [RaggedTensor](https://www.tensorflow.org/guide/ragged_tensor) with the inner-most dimension of tokens mapping to the original individual strings.\n",
"\n",
"```python\n",
"class Splitter {\n",
" @abstractmethod\n",
" def split(self, input)\n",
"}\n",
"\n",
"class SplitterWithOffsets(Splitter) {\n",
" @abstractmethod\n",
" def split_with_offsets(self, input)\n",
"}\n",
"```\n",
"\n",
"There is also a `Detokenizer` interface. Any tokenizer implementing this interface can accept a N-dimensional ragged tensor of tokens, and normally returns a N-1-dimensional tensor or ragged tensor that has the given tokens assembled together.\n",
"\n",
"```python\n",
"class Detokenizer {\n",
" @abstractmethod\n",
" def detokenize(self, input)\n",
"}\n",
"```"
]
},
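{
"cell_type": "markdown",
"metadata": {},
"source": [
"As a small illustration of the shapes involved (a sketch, not part of the original guide): a 1-dimensional batch of strings produces a 2-dimensional `RaggedTensor` of tokens, and since `Tokenizer` extends `Splitter`, calling `split` on a tokenizer is expected to give the same result as `tokenize`.\n",
"\n",
"```python\n",
"tokenizer = tf_text.WhitespaceTokenizer()\n",
"\n",
"# A 1-D input of two strings yields a 2-D RaggedTensor of tokens.\n",
"tokens = tokenizer.tokenize([\"greetings friends\", \"hello world\"])\n",
"print(tokens.shape)      # (2, None)\n",
"print(tokens.to_list())  # [[b'greetings', b'friends'], [b'hello', b'world']]\n",
"\n",
"# Tokenizer extends Splitter, so split() should match tokenize().\n",
"print(tokenizer.split([\"greetings friends\"]).to_list())\n",
"```"
]
},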
{
"cell_type": "markdown",
"metadata": {
"id": "BhviJXy0BDoa"
},
"source": [
"## Tokenizers\n",
"\n",
"Below is the suite of tokenizers provided by TensorFlow Text. String inputs are assumed to be UTF-8. Please review the [Unicode guide](https://www.tensorflow.org/text/guide/unicode) for converting strings to UTF-8."
]
},
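{
"cell_type": "markdown",
"metadata": {},
"source": [
"If your text arrives in another encoding, transcode it to UTF-8 before tokenizing. The snippet below is a minimal sketch (not from the original guide) using `tf.strings.unicode_transcode` with an illustrative UTF-16-BE input.\n",
"\n",
"```python\n",
"# Illustrative only: transcode UTF-16-BE bytes into the UTF-8 the tokenizers expect.\n",
"utf16_docs = tf.constant([\"What you know\".encode(\"utf-16-be\")])\n",
"utf8_docs = tf.strings.unicode_transcode(utf16_docs,\n",
"                                         input_encoding=\"UTF-16-BE\",\n",
"                                         output_encoding=\"UTF-8\")\n",
"print(utf8_docs.numpy())  # [b'What you know']\n",
"```"
]
},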
{
"cell_type": "markdown",
"metadata": {
"id": "eWFisXk-68BQ"
},
"source": [
"### Whole word tokenizers\n",
"\n",
"These tokenizers attempt to split a string by words, and is the most intuitive way to split text.\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "-CxjAs5wOYKh"
},
"source": [
"#### WhitespaceTokenizer\n",
"\n",
"The `text.WhitespaceTokenizer` is the most basic tokenizer which splits strings on ICU defined whitespace characters (eg. space, tab, new line). This is often good for quickly building out prototype models."
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {
"execution": {
"iopub.execute_input": "2024-07-19T12:53:19.294494Z",
"iopub.status.busy": "2024-07-19T12:53:19.294093Z",
"iopub.status.idle": "2024-07-19T12:53:19.938013Z",
"shell.execute_reply": "2024-07-19T12:53:19.937352Z"
},
"id": "k4a11Hlm7C4k"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[[b'What', b'you', b'know', b'you', b\"can't\", b'explain,', b'but', b'you', b'feel', b'it.']]\n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"2024-07-19 12:53:19.830808: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcudart.so.11.0'; dlerror: libcudart.so.11.0: cannot open shared object file: No such file or directory\n",
"2024-07-19 12:53:19.830906: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcublas.so.11'; dlerror: libcublas.so.11: cannot open shared object file: No such file or directory\n",
"2024-07-19 12:53:19.830967: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcublasLt.so.11'; dlerror: libcublasLt.so.11: cannot open shared object file: No such file or directory\n",
"2024-07-19 12:53:19.831026: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcufft.so.10'; dlerror: libcufft.so.10: cannot open shared object file: No such file or directory\n",
"2024-07-19 12:53:19.888404: W tensorflow/compiler/xla/stream_executor/platform/default/dso_loader.cc:64] Could not load dynamic library 'libcusparse.so.11'; dlerror: libcusparse.so.11: cannot open shared object file: No such file or directory\n",
"2024-07-19 12:53:19.888619: W tensorflow/core/common_runtime/gpu/gpu_device.cc:1934] Cannot dlopen some GPU libraries. Please make sure the missing libraries mentioned above are installed properly if you would like to use GPU. Follow the guide at https://www.tensorflow.org/install/gpu for how to download and setup the required libraries for your platform.\n",
"Skipping registering GPU devices...\n"
]
}
],
"source": [
"tokenizer = tf_text.WhitespaceTokenizer()\n",
"tokens = tokenizer.tokenize([\"What you know you can't explain, but you feel it.\"])\n",
"print(tokens.to_list())"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "VHS6dEQ7cR9E"
},
"source": [
"You may notice a shortcome of this tokenizer is that punctuation is included with the word to make up a token. To split the words and punctuation into separate tokens, the `UnicodeScriptTokenizer` should be used."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "-xohhm0Q7AmN"
},
"source": [
"#### UnicodeScriptTokenizer\n",
"\n",
"The `UnicodeScriptTokenizer` splits strings based on Unicode script boundaries. The script codes used correspond to International Components for Unicode (ICU) UScriptCode values. See: https://icu-project.org/apiref/icu4c/uscript_8h.html\n",
"\n",
"In practice, this is similar to the `WhitespaceTokenizer` with the most apparent difference being that it will split punctuation (USCRIPT_COMMON) from language texts (eg. USCRIPT_LATIN, USCRIPT_CYRILLIC, etc) while also separating language texts from each other. Note that this will also split contraction words into separate tokens."
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {
"execution": {
"iopub.execute_input": "2024-07-19T12:53:19.941742Z",
"iopub.status.busy": "2024-07-19T12:53:19.941493Z",
"iopub.status.idle": "2024-07-19T12:53:20.026991Z",
"shell.execute_reply": "2024-07-19T12:53:20.026235Z"
},
"id": "68u0XF3L6-ay"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[[b'What', b'you', b'know', b'you', b'can', b\"'\", b't', b'explain', b',', b'but', b'you', b'feel', b'it', b'.']]\n"
]
}
],
"source": [
"tokenizer = tf_text.UnicodeScriptTokenizer()\n",
"tokens = tokenizer.tokenize([\"What you know you can't explain, but you feel it.\"])\n",
"print(tokens.to_list())"
]
},
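{
"cell_type": "markdown",
"metadata": {},
"source": [
"The script-based splitting also separates runs of different scripts from each other, as in this short sketch (the mixed-script text is illustrative and not part of the original guide).\n",
"\n",
"```python\n",
"tokenizer = tf_text.UnicodeScriptTokenizer()\n",
"# The Latin and Cyrillic runs are split apart, along with the punctuation.\n",
"print(tokenizer.tokenize([\"hello мир!\"]).to_list())\n",
"```"
]
},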
{
"cell_type": "markdown",
"metadata": {
"id": "J0Ja_h1qO7P0"
},
"source": [
"### Subword tokenizers\n",
"\n",
"Subword tokenizers can be used with a smaller vocabulary, and allow the model to have some information about novel words from the subwords that make create it.\n",
"\n",
"We briefly discuss the Subword tokenization options below, but the [Subword Tokenization tutorial](https://www.tensorflow.org/text/guide/subwords_tokenizer) goes more in depth and also explains how to generate the vocab files."
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "BLif2owYPBos"
},
"source": [
"#### WordpieceTokenizer\n",
"\n",
"WordPiece tokenization is a data-driven tokenization scheme which generates a set of sub-tokens. These sub tokens may correspond to linguistic morphemes, but this is often not the case.\n",
"\n",
"The WordpieceTokenizer expects the input to already be split into tokens. Because of this prerequisite, you will often want to split using the `WhitespaceTokenizer` or `UnicodeScriptTokenizer` beforehand."
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {
"execution": {
"iopub.execute_input": "2024-07-19T12:53:20.030362Z",
"iopub.status.busy": "2024-07-19T12:53:20.030109Z",
"iopub.status.idle": "2024-07-19T12:53:20.050497Z",
"shell.execute_reply": "2024-07-19T12:53:20.049881Z"
},
"id": "srIHtzU2fxCi"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[[b'What', b'you', b'know', b'you', b\"can't\", b'explain,', b'but', b'you', b'feel', b'it.']]\n"
]
}
],
"source": [
"tokenizer = tf_text.WhitespaceTokenizer()\n",
"tokens = tokenizer.tokenize([\"What you know you can't explain, but you feel it.\"])\n",
"print(tokens.to_list())"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "uUZe66RngCGU"
},
"source": [
"After the string is split into tokens, the `WordpieceTokenizer` can be used to split into subtokens."
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {
"execution": {
"iopub.execute_input": "2024-07-19T12:53:20.054012Z",
"iopub.status.busy": "2024-07-19T12:53:20.053466Z",
"iopub.status.idle": "2024-07-19T12:53:20.566818Z",
"shell.execute_reply": "2024-07-19T12:53:20.566165Z"
},
"id": "ISEUjIsYAl2S"
},
"outputs": [
{
"data": {
"text/plain": [
"52382"
]
},
"execution_count": 7,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"url = \"https://github.com/tensorflow/text/blob/master/tensorflow_text/python/ops/test_data/test_wp_en_vocab.txt?raw=true\"\n",
"r = requests.get(url)\n",
"filepath = \"vocab.txt\"\n",
"open(filepath, 'wb').write(r.content)"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {
"execution": {
"iopub.execute_input": "2024-07-19T12:53:20.570015Z",
"iopub.status.busy": "2024-07-19T12:53:20.569774Z",
"iopub.status.idle": "2024-07-19T12:53:20.590371Z",
"shell.execute_reply": "2024-07-19T12:53:20.589691Z"
},
"id": "uU8wJlUfsskU"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[[[b'What'], [b'you'], [b'know'], [b'you'], [b\"can't\"], [b'explain,'], [b'but'], [b'you'], [b'feel'], [b'it.']]]\n"
]
}
],
"source": [
"subtokenizer = tf_text.UnicodeScriptTokenizer(filepath)\n",
"subtokens = tokenizer.tokenize(tokens)\n",
"print(subtokens.to_list())"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "ncBcigHAPFBd"
},
"source": [
"#### BertTokenizer\n",
"\n",
"The BertTokenizer mirrors the original implementation of tokenization from the BERT paper. This is backed by the WordpieceTokenizer, but also performs additional tasks such as normalization and tokenizing to words first."
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {
"execution": {
"iopub.execute_input": "2024-07-19T12:53:20.593482Z",
"iopub.status.busy": "2024-07-19T12:53:20.593108Z",
"iopub.status.idle": "2024-07-19T12:53:20.618971Z",
"shell.execute_reply": "2024-07-19T12:53:20.618365Z"
},
"id": "2tOz1hNhtdV2"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[[[b'what'], [b'you'], [b'know'], [b'you'], [b'can'], [b\"'\"], [b't'], [b'explain'], [b','], [b'but'], [b'you'], [b'feel'], [b'it'], [b'.']]]\n"
]
}
],
"source": [
"tokenizer = tf_text.BertTokenizer(filepath, token_out_type=tf.string, lower_case=True)\n",
"tokens = tokenizer.tokenize([\"What you know you can't explain, but you feel it.\"])\n",
"print(tokens.to_list())"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "-rb_dORMO-3t"
},
"source": [
"#### SentencepieceTokenizer\n",
"\n",
"The SentencepieceTokenizer is a sub-token tokenizer that is highly configurable. This is backed by the Sentencepiece library. Like the BertTokenizer, it can include normalization and token splitting before splitting into sub-tokens.\n"
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {
"execution": {
"iopub.execute_input": "2024-07-19T12:53:20.621778Z",
"iopub.status.busy": "2024-07-19T12:53:20.621543Z",
"iopub.status.idle": "2024-07-19T12:53:21.291773Z",
"shell.execute_reply": "2024-07-19T12:53:21.291050Z"
},
"id": "0dUbFCfDCojr"
},
"outputs": [],
"source": [
"url = \"https://github.com/tensorflow/text/blob/master/tensorflow_text/python/ops/test_data/test_oss_model.model?raw=true\"\n",
"sp_model = requests.get(url).content"
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {
"execution": {
"iopub.execute_input": "2024-07-19T12:53:21.295814Z",
"iopub.status.busy": "2024-07-19T12:53:21.295214Z",
"iopub.status.idle": "2024-07-19T12:53:21.306565Z",
"shell.execute_reply": "2024-07-19T12:53:21.305929Z"
},
"id": "uvsm6iuNupEZ"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[[b'\\xe2\\x96\\x81What', b'\\xe2\\x96\\x81you', b'\\xe2\\x96\\x81know', b'\\xe2\\x96\\x81you', b'\\xe2\\x96\\x81can', b\"'\", b't', b'\\xe2\\x96\\x81explain', b',', b'\\xe2\\x96\\x81but', b'\\xe2\\x96\\x81you', b'\\xe2\\x96\\x81feel', b'\\xe2\\x96\\x81it', b'.']]\n"
]
}
],
"source": [
"tokenizer = tf_text.SentencepieceTokenizer(sp_model, out_type=tf.string)\n",
"tokens = tokenizer.tokenize([\"What you know you can't explain, but you feel it.\"])\n",
"print(tokens.to_list())"
]
},
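{
"cell_type": "markdown",
"metadata": {},
"source": [
"As one illustration of that configurability, the sketch below (an assumption-labeled example, not from the original guide) enables the `add_bos` and `add_eos` constructor options so the model's begin- and end-of-sentence markers are included in the output.\n",
"\n",
"```python\n",
"# Hedged sketch: reuse the `sp_model` downloaded above with extra options.\n",
"tokenizer = tf_text.SentencepieceTokenizer(\n",
"    sp_model, out_type=tf.string, add_bos=True, add_eos=True)\n",
"tokens = tokenizer.tokenize([\"What you know you can't explain.\"])\n",
"print(tokens.to_list())\n",
"```"
]
},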
{
"cell_type": "markdown",
"metadata": {
"id": "1TatehW0Q0qV"
},
"source": [
"### Other splitters\n"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "wqNgtoFPQ1sG"
},
"source": [
"#### UnicodeCharTokenizer\n",
"\n",
"This splits a string into UTF-8 characters. It is useful for CJK languages that do not have spaces between words."
]
},
{
"cell_type": "code",
"execution_count": 12,
"metadata": {
"execution": {
"iopub.execute_input": "2024-07-19T12:53:21.309967Z",
"iopub.status.busy": "2024-07-19T12:53:21.309417Z",
"iopub.status.idle": "2024-07-19T12:53:21.329635Z",
"shell.execute_reply": "2024-07-19T12:53:21.329018Z"
},
"id": "4GjiAnQFvIhW"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[[87, 104, 97, 116, 32, 121, 111, 117, 32, 107, 110, 111, 119, 32, 121, 111, 117, 32, 99, 97, 110, 39, 116, 32, 101, 120, 112, 108, 97, 105, 110, 44, 32, 98, 117, 116, 32, 121, 111, 117, 32, 102, 101, 101, 108, 32, 105, 116, 46]]\n"
]
}
],
"source": [
"tokenizer = tf_text.UnicodeCharTokenizer()\n",
"tokens = tokenizer.tokenize([\"What you know you can't explain, but you feel it.\"])\n",
"print(tokens.to_list())"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "XHyWQcJZGOwL"
},
"source": [
"The output is Unicode codepoints. This can be also useful for creating character ngrams, such as bigrams. To convert back into UTF-8 characters."
]
},
{
"cell_type": "code",
"execution_count": 13,
"metadata": {
"execution": {
"iopub.execute_input": "2024-07-19T12:53:21.332541Z",
"iopub.status.busy": "2024-07-19T12:53:21.332282Z",
"iopub.status.idle": "2024-07-19T12:53:21.348155Z",
"shell.execute_reply": "2024-07-19T12:53:21.347554Z"
},
"id": "_uuyz3XC0NdU"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[[b'Wh', b'ha', b'at', b't ', b' y', b'yo', b'ou', b'u ', b' k', b'kn', b'no', b'ow', b'w ', b' y', b'yo', b'ou', b'u ', b' c', b'ca', b'an', b\"n'\", b\"'t\", b't ', b' e', b'ex', b'xp', b'pl', b'la', b'ai', b'in', b'n,', b', ', b' b', b'bu', b'ut', b't ', b' y', b'yo', b'ou', b'u ', b' f', b'fe', b'ee', b'el', b'l ', b' i', b'it', b't.']]\n"
]
}
],
"source": [
"characters = tf.strings.unicode_encode(tf.expand_dims(tokens, -1), \"UTF-8\")\n",
"bigrams = tf_text.ngrams(characters, 2, reduction_type=tf_text.Reduction.STRING_JOIN, string_separator='')\n",
"print(bigrams.to_list())"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "oCmTbCnkQ4At"
},
"source": [
"#### HubModuleTokenizer\n",
"\n",
"This is a wrapper around models deployed to TF Hub to make the calls easier since TF Hub currently does not support ragged tensors. Having a model perform tokenization is particularly useful for CJK languages when you want to split into words, but do not have spaces to provide a heuristic guide. At this time, we have a single segmentation model for Chinese."
]
},
{
"cell_type": "code",
"execution_count": 14,
"metadata": {
"execution": {
"iopub.execute_input": "2024-07-19T12:53:21.351071Z",
"iopub.status.busy": "2024-07-19T12:53:21.350847Z",
"iopub.status.idle": "2024-07-19T12:53:22.879464Z",
"shell.execute_reply": "2024-07-19T12:53:22.878666Z"
},
"id": "R8rWv3rAv_cb"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[[b'\\xe6\\x96\\xb0\\xe5\\x8d\\x8e\\xe7\\xa4\\xbe', b'\\xe5\\x8c\\x97\\xe4\\xba\\xac']]\n"
]
}
],
"source": [
"MODEL_HANDLE = \"carview.php?tsp=https://tfhub.dev/google/zh_segmentation/1\"\n",
"segmenter = tf_text.HubModuleTokenizer(MODEL_HANDLE)\n",
"tokens = segmenter.tokenize([\"新华社北京\"])\n",
"print(tokens.to_list())"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "cRXOToXTVCep"
},
"source": [
"It may be difficult to view the results of the UTF-8 encoded byte strings. Decode the list values to make viewing easier."
]
},
{
"cell_type": "code",
"execution_count": 15,
"metadata": {
"execution": {
"iopub.execute_input": "2024-07-19T12:53:22.883074Z",
"iopub.status.busy": "2024-07-19T12:53:22.882804Z",
"iopub.status.idle": "2024-07-19T12:53:22.887546Z",
"shell.execute_reply": "2024-07-19T12:53:22.886963Z"
},
"id": "XeJHbr8XVctR"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[['新华社', '北京']]\n"
]
}
],
"source": [
"def decode_list(x):\n",
" if type(x) is list:\n",
" return list(map(decode_list, x))\n",
" return x.decode(\"UTF-8\")\n",
"\n",
"def decode_utf8_tensor(x):\n",
" return list(map(decode_list, x.to_list()))\n",
"\n",
"print(decode_utf8_tensor(tokens))"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "eCnKgtjYRhOK"
},
"source": [
"#### SplitMergeTokenizer\n",
"\n",
"The `SplitMergeTokenizer` & `SplitMergeFromLogitsTokenizer` have a targeted purpose of splitting a string based on provided values that indicate where the string should be split. This is useful when building your own segmentation models like the previous Segmentation example.\n",
"\n",
"For the `SplitMergeTokenizer`, a value of 0 is used to indicate the start of a new string, and the value of 1 indicates the character is part of the current string."
]
},
{
"cell_type": "code",
"execution_count": 16,
"metadata": {
"execution": {
"iopub.execute_input": "2024-07-19T12:53:22.890525Z",
"iopub.status.busy": "2024-07-19T12:53:22.890290Z",
"iopub.status.idle": "2024-07-19T12:53:22.897918Z",
"shell.execute_reply": "2024-07-19T12:53:22.897340Z"
},
"id": "3c-2iBiuWgjP"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[['新华社', '北京']]\n"
]
}
],
"source": [
"strings = [\"新华社北京\"]\n",
"labels = [[0, 1, 1, 0, 1]]\n",
"tokenizer = tf_text.SplitMergeTokenizer()\n",
"tokens = tokenizer.tokenize(strings, labels)\n",
"print(decode_utf8_tensor(tokens))"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "l5F0zPFDwmcb"
},
"source": [
"The `SplitMergeFromLogitsTokenizer` is similar, but it instead accepts logit value pairs from a neural network that predict if each character should be split into a new string or merged into the current one."
]
},
{
"cell_type": "code",
"execution_count": 17,
"metadata": {
"execution": {
"iopub.execute_input": "2024-07-19T12:53:22.900686Z",
"iopub.status.busy": "2024-07-19T12:53:22.900461Z",
"iopub.status.idle": "2024-07-19T12:53:22.905893Z",
"shell.execute_reply": "2024-07-19T12:53:22.905290Z"
},
"id": "JRWtRYMxw3oc"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[['新华社', '北京']]\n"
]
}
],
"source": [
"strings = [[\"新华社北京\"]]\n",
"labels = [[[5.0, -3.2], [0.2, 12.0], [0.0, 11.0], [2.2, -1.0], [-3.0, 3.0]]]\n",
"tokenizer = tf_text.SplitMergeFromLogitsTokenizer()\n",
"tokenizer.tokenize(strings, labels)\n",
"print(decode_utf8_tensor(tokens))"
]
},
{
"cell_type": "markdown",
"metadata": {
"id": "mWrGTOzbVb8U"
},
"source": [
"#### RegexSplitter\n",
"\n",
"The `RegexSplitter` is able to segment strings at arbitrary breakpoints defined by a provided regular expression."
]
},
{
"cell_type": "code",
"execution_count": 18,
"metadata": {
"execution": {
"iopub.execute_input": "2024-07-19T12:53:22.908632Z",
"iopub.status.busy": "2024-07-19T12:53:22.908413Z",
"iopub.status.idle": "2024-07-19T12:53:22.920174Z",
"shell.execute_reply": "2024-07-19T12:53:22.919600Z"
},
"id": "Szw0QZ6ecExC"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[[b'What', b'you', b'know', b'you', b\"can't\", b'explain,', b'but', b'you', b'feel', b'it.']]\n"
]
}
],
"source": [
"splitter = tf_text.RegexSplitter(\"\\s\")\n",
"tokens = splitter.split([\"What you know you can't explain, but you feel it.\"], )\n",
"print(tokens.to_list())"
]
},
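{
"cell_type": "markdown",
"metadata": {},
"source": [
"Any regular expression can define the delimiter. For example, the sketch below (an illustrative pattern, not from the original guide) splits on commas and periods followed by optional whitespace instead of on whitespace alone.\n",
"\n",
"```python\n",
"# Illustrative: use punctuation (plus any trailing whitespace) as the delimiter.\n",
"splitter = tf_text.RegexSplitter(r\"[.,]\\s*\")\n",
"print(splitter.split([\"One, two. Three.\"]).to_list())\n",
"```"
]
},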
{
"cell_type": "markdown",
"metadata": {
"id": "uPIMvyot7GFv"
},
"source": [
"## Offsets\n",
"\n",
"When tokenizing strings, it is often desired to know where in the original string the token originated from. For this reason, each tokenizer which implements `TokenizerWithOffsets` has a *tokenize_with_offsets* method that will return the byte offsets along with the tokens. The start_offsets lists the bytes in the original string each token starts at, and the end_offsets lists the bytes immediately after the point where each token ends. To refrase, the start offsets are inclusive and the end offsets are exclusive."
]
},
{
"cell_type": "code",
"execution_count": 19,
"metadata": {
"execution": {
"iopub.execute_input": "2024-07-19T12:53:22.923167Z",
"iopub.status.busy": "2024-07-19T12:53:22.922920Z",
"iopub.status.idle": "2024-07-19T12:53:22.976009Z",
"shell.execute_reply": "2024-07-19T12:53:22.975399Z"
},
"id": "UmZ91zl87J7y"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[[b'Everything', b'not', b'saved', b'will', b'be', b'lost', b'.']]\n",
"[[0, 11, 15, 21, 26, 29, 33]]\n",
"[[10, 14, 20, 25, 28, 33, 34]]\n"
]
}
],
"source": [
"tokenizer = tf_text.UnicodeScriptTokenizer()\n",
"(tokens, start_offsets, end_offsets) = tokenizer.tokenize_with_offsets(['Everything not saved will be lost.'])\n",
"print(tokens.to_list())\n",
"print(start_offsets.to_list())\n",
"print(end_offsets.to_list())"
]
},
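{
"cell_type": "markdown",
"metadata": {},
"source": [
"Because the offsets index into the original byte string (start inclusive, end exclusive), each token can be recovered by slicing, as in this small sketch (not part of the original guide).\n",
"\n",
"```python\n",
"sentence = b'Everything not saved will be lost.'\n",
"tokenizer = tf_text.UnicodeScriptTokenizer()\n",
"(tokens, starts, ends) = tokenizer.tokenize_with_offsets([sentence])\n",
"for token, start, end in zip(tokens.to_list()[0], starts.to_list()[0], ends.to_list()[0]):\n",
"  # Slicing the original bytes with the offsets reproduces the token.\n",
"  print(sentence[start:end], token)\n",
"```"
]
},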
{
"cell_type": "markdown",
"metadata": {
"id": "mVGbkB-80819"
},
"source": [
"## Detokenization\n",
"\n",
"Tokenizers which implement the `Detokenizer` provide a `detokenize` method which attempts to combine the strings. This has the chance of being lossy, so the detokenized string may not always match exactly the original, pre-tokenized string."
]
},
{
"cell_type": "code",
"execution_count": 20,
"metadata": {
"execution": {
"iopub.execute_input": "2024-07-19T12:53:22.979415Z",
"iopub.status.busy": "2024-07-19T12:53:22.978792Z",
"iopub.status.idle": "2024-07-19T12:53:22.990091Z",
"shell.execute_reply": "2024-07-19T12:53:22.989517Z"
},
"id": "iyThnPPQ0_6Q"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[[87, 104, 97, 116, 32, 121, 111, 117, 32, 107, 110, 111, 119, 32, 121, 111, 117, 32, 99, 97, 110, 39, 116, 32, 101, 120, 112, 108, 97, 105, 110, 44, 32, 98, 117, 116, 32, 121, 111, 117, 32, 102, 101, 101, 108, 32, 105, 116, 46]]\n",
"[b\"What you know you can't explain, but you feel it.\"]\n"
]
}
],
"source": [
"tokenizer = tf_text.UnicodeCharTokenizer()\n",
"tokens = tokenizer.tokenize([\"What you know you can't explain, but you feel it.\"])\n",
"print(tokens.to_list())\n",
"strings = tokenizer.detokenize(tokens)\n",
"print(strings.numpy())"
]
},
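{
"cell_type": "markdown",
"metadata": {},
"source": [
"The `UnicodeCharTokenizer` round-trips exactly, but detokenization can be lossy when the tokenizer discards information. The sketch below assumes `WhitespaceTokenizer` implements `Detokenizer` (true in recent TF Text releases); the run of repeated spaces in the input is not recovered.\n",
"\n",
"```python\n",
"# Hedged sketch: detokenize joins tokens with single spaces, so the\n",
"# original spacing is lost.\n",
"tokenizer = tf_text.WhitespaceTokenizer()\n",
"tokens = tokenizer.tokenize([\"hello    world\"])\n",
"print(tokenizer.detokenize(tokens).numpy())  # expected: [b'hello world']\n",
"```"
]
},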
{
"cell_type": "markdown",
"metadata": {
"id": "iVNFPYSZ7sf1"
},
"source": [
"## TF Data\n",
"\n",
"TF Data is a powerful API for creating an input pipeline for training models. Tokenizers work as expected with the API."
]
},
{
"cell_type": "code",
"execution_count": 21,
"metadata": {
"execution": {
"iopub.execute_input": "2024-07-19T12:53:22.992959Z",
"iopub.status.busy": "2024-07-19T12:53:22.992712Z",
"iopub.status.idle": "2024-07-19T12:53:23.669352Z",
"shell.execute_reply": "2024-07-19T12:53:23.668643Z"
},
"id": "YSykDr1d7uxr"
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"WARNING:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.9/site-packages/tensorflow/python/autograph/pyct/static_analysis/liveness.py:83: Analyzer.lamba_check (from tensorflow.python.autograph.pyct.static_analysis.liveness) is deprecated and will be removed after 2023-09-23.\n",
"Instructions for updating:\n",
"Lambda fuctions will be no more assumed to be used in the statement where they are used, or at least in the same block. https://github.com/tensorflow/tensorflow/issues/56089\n"
]
},
{
"name": "stderr",
"output_type": "stream",
"text": [
"WARNING:tensorflow:From /tmpfs/src/tf_docs_env/lib/python3.9/site-packages/tensorflow/python/autograph/pyct/static_analysis/liveness.py:83: Analyzer.lamba_check (from tensorflow.python.autograph.pyct.static_analysis.liveness) is deprecated and will be removed after 2023-09-23.\n",
"Instructions for updating:\n",
"Lambda fuctions will be no more assumed to be used in the statement where they are used, or at least in the same block. https://github.com/tensorflow/tensorflow/issues/56089\n"
]
},
{
"name": "stdout",
"output_type": "stream",
"text": [
"[[b'Never', b'tell', b'me', b'the', b'odds.']]\n",
"[[b\"It's\", b'a', b'trap!']]\n"
]
}
],
"source": [
"docs = tf.data.Dataset.from_tensor_slices([['Never tell me the odds.'], [\"It's a trap!\"]])\n",
"tokenizer = tf_text.WhitespaceTokenizer()\n",
"tokenized_docs = docs.map(lambda x: tokenizer.tokenize(x))\n",
"iterator = iter(tokenized_docs)\n",
"print(next(iterator).to_list())\n",
"print(next(iterator).to_list())"
]
}
],
"metadata": {
"colab": {
"collapsed_sections": [],
"name": "tokenizers.ipynb",
"toc_visible": true
},
"kernelspec": {
"display_name": "Python 3",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.19"
}
},
"nbformat": 4,
"nbformat_minor": 0
}