Tokenization Models

Overview

Xyxyx's tokenization models are framework standards that define how text records are converted into on-chain, text-based tokens. Xyxyx introduces two foundational tokenization models (1x1 and A4), each intended for a different use case.

| Model | Description |
| --- | --- |
| 1x1 | A model for tokenizing short text records |
| A4 | A model for tokenizing long text records |


Key Concepts

Text-Based Tokens

Xyxyx's tokenization models are founded upon text-based tokens.

Text-based tokens are digital representations of text records that include an embedded visual output, similar to NFTs. This makes the text records easily accessible to token holders. Text-based tokens form the backbone of Xyxyx's tokenization models, providing a seamless way to digitize text records and store them entirely on blockchain rails.
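For illustration, a text-based token's metadata can be pictured as an ERC-721-style metadata object whose image field carries the embedded visual output. The field contents below are hypothetical placeholders, not Xyxyx's exact schema.

```typescript
// Hypothetical shape of a text-based token's metadata, following the
// common ERC-721 metadata convention (name / description / image).
interface TextTokenMetadata {
  name: string;        // human-readable token name
  description: string; // the tokenized text record (or a summary of it)
  image: string;       // data URI embedding the Base64-encoded SVG output
}

const example: TextTokenMetadata = {
  name: "Text Record #1",
  description: "A short text record tokenized as an on-chain token.",
  image: "data:image/svg+xml;base64,...", // SVG rendered from the text record
};
```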

On-Chain Data Storage

To provide the highest level of integrity and provenance (i.e., immutability and permanence), text-based tokens built upon Xyxyx's tokenization models are rendered and stored fully on-chain.

Every text-based token takes the form of a Base64-encoded SVG output encapsulated in the token metadata, which is stored entirely on blockchain rails.

As a result, through text-based tokens that run fully on-chain, Xyxyx proposes a fully blockchain-based method for data storage with no dependency on off-chain hosting services (such as IPFS, Arweave, or Swarm), leveraging the Ethereum blockchain itself as the data storage layer.
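As a rough sketch of this encoding scheme, written here off-chain in TypeScript/Node.js rather than inside a token contract, a text record can be rendered into an SVG, Base64-encoded, and wrapped in a metadata data URI. The SVG layout and metadata fields are illustrative assumptions, not Xyxyx's actual output.

```typescript
// Minimal sketch: render a text record as an SVG, Base64-encode it, and
// embed it in metadata that is itself returned as a Base64 data URI,
// so nothing depends on off-chain hosting.

function svgForTextRecord(text: string): string {
  // Trivial single-line SVG; a real renderer would handle wrapping,
  // fonts, and XML-escaping of the text.
  return `<svg xmlns="http://www.w3.org/2000/svg" width="400" height="400">` +
         `<rect width="100%" height="100%" fill="white"/>` +
         `<text x="20" y="40" font-family="monospace" font-size="14">${text}</text>` +
         `</svg>`;
}

function tokenUriForTextRecord(name: string, text: string): string {
  const svgBase64 = Buffer.from(svgForTextRecord(text), "utf8").toString("base64");
  const metadata = {
    name,
    description: text,
    image: `data:image/svg+xml;base64,${svgBase64}`, // SVG lives inside the metadata
  };
  const metadataBase64 = Buffer.from(JSON.stringify(metadata), "utf8").toString("base64");
  return `data:application/json;base64,${metadataBase64}`;
}

console.log(tokenUriForTextRecord("Text Record #1", "Hello, on-chain world"));
```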

Portability

Portability is a core feature of Xyxyx's tokenization models.

Each text-based token, encoded as a Base64 string that renders a human-readable SVG output, is cryptographically etched on the blockchain and can also be saved and exported as an SVG file. This allows text-based tokens to function both as immutable blockchain records and as portable data files for analysis and management.
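The export path can be sketched in the same spirit: given a fully on-chain token URI of the form data:application/json;base64,... (for example, the value returned by an ERC-721 tokenURI() call), the embedded SVG can be recovered locally and written to disk. The helper below is hypothetical, not part of Xyxyx's tooling.

```typescript
import { writeFileSync } from "node:fs";

// Sketch of the export path: decode the on-chain metadata data URI,
// extract the embedded SVG data URI, and save it as a portable SVG file.
function exportSvg(tokenUri: string, outPath: string): void {
  const jsonBase64 = tokenUri.replace("data:application/json;base64,", "");
  const metadata = JSON.parse(Buffer.from(jsonBase64, "base64").toString("utf8"));

  const svgBase64 = metadata.image.replace("data:image/svg+xml;base64,", "");
  const svg = Buffer.from(svgBase64, "base64").toString("utf8");

  writeFileSync(outPath, svg); // the token now exists as a local SVG file
}
```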

Multilingual

Xyxyx's tokenization models support text from all of the world's digitized writing systems, whether Japanese, Arabic, Russian, Korean, or Chinese.
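This works because text is handled as UTF-8 before Base64 encoding, so any script round-trips without loss. A quick sanity check, assuming the Node.js encoding sketch above:

```typescript
// The encoding pipeline is script-agnostic: UTF-8 text of any writing
// system survives the Base64 round trip intact.
const samples = ["こんにちは", "مرحبا", "Привет", "안녕하세요", "你好"];

for (const text of samples) {
  const encoded = Buffer.from(text, "utf8").toString("base64");
  const decoded = Buffer.from(encoded, "base64").toString("utf8");
  console.log(decoded === text); // true for every script
}
```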
