Testing and Results

Test Results

Extensive testing has been conducted to validate two key properties of a strong encryption system:

  1. Statistical Randomness – Ensuring ciphertext is indistinguishable from true randomness.
  2. Uniqueness – Confirming no collisions across large-scale encryptions.

This encryption method is novel and does not conform to conventional cryptographic structures, which makes direct theoretical analysis non-trivial. It is a 5-layer transformer that leverages referential transformations, controlled entropy cascades, and layered state evolution to eliminate statistical patterns and resist cryptanalysis. Because of this unconventional architecture, empirical testing has served as the primary validation method, with specialized modeling in progress to establish additional theoretical proofs.

Test Categories (Conducted So Far):

  • NIST STS (Statistical Test Suite)
  • ENT (Entropy & Compression Analysis)
  • PRACTRAND (Randomness & Bias Detection)
  • Hash Collision Analysis (SHA-256, SHA-512, BLAKE2-256, BLAKE2-512)

Statistical Randomness Disclaimer

Disclaimer: The randomness tests were conducted to determine whether encrypted outputs are statistically consistent with random data.
These results alone do not prove security, nor were they intended to. The objective is to establish whether the encryption produces ciphertext free of detectable patterns or structure.

Theoretical Perspective

An approach toward perfect encryption security may be possible: ciphertext that provides no foothold for analysis, pattern recognition, or key inference is secure both theoretically and practically. The results presented here support this hypothesis via a 4-pillar rule:

  1. Random: encrypted output should be as close to true random data as possible.
  2. Featureless: encrypted output should not leak any information about the cleartext, the key, or the algorithm used.
  3. Unique: every encrypted output should be universally unique, never repeating even for the same cleartext and the same key.
  4. Key protection: encrypted output should not leak or provide footholds for determining the key size, structure, or usage, or any other direct or indirect information about the key.

When applied together and equally, these principles approach the theoretical and practical limits of encryption security by removing all known routes of attack via ciphertext analysis, cleartext manipulation, pattern recognition, bias detection, and key inference.

Test Methodology

The testing approach and methods were designed to provide both broad coverage across test types and deep analysis within each type.

Encryption Keys

Encryption keys in this algorithm are a randomized permutation of values; they are not large numeric values.
Key sizes are not fixed and follow this rule: a key contains 2^N elements, where N is also the number of bits per element.
Key size is an inherent secret as a byproduct of the key itself.

A 2^8 key (256 bytes) was used for all tests.
A total of 4 randomly generated keys were used.
Each key was used once per analyzed file.
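
As an illustration only, a key of this shape could be generated as a uniformly random permutation of all 2^N element values. This is a hypothetical sketch in Python, not the algorithm's actual key-generation routine:

    import secrets

    def generate_key(n: int = 8) -> list[int]:
        """Hypothetical key: a random permutation of all 2^N values,
        each element N bits wide (N=8 gives 256 one-byte elements)."""
        elements = list(range(2 ** n))
        # Fisher-Yates shuffle driven by a CSPRNG
        for i in range(len(elements) - 1, 0, -1):
            j = secrets.randbelow(i + 1)
            elements[i], elements[j] = elements[j], elements[i]
        return elements

    key = generate_key(8)  # 2^8 elements of 8 bits each = 256 bytes

Because the element width N scales with the element count 2^N, the key's total size follows from N alone, and (per the rule above) nothing in the ciphertext is meant to reveal which N was used.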

Cleartext Selections

A total of 803 unique cleartext selections of varying sizes were created across 7 groups to provide a diverse selection of transformations. (A generation sketch follows the group list below.)

Cleartext Groups and Detail
  • No-Entropy
    • 275 cleartext values of increasing lengths from 7 bytes to over 4MB
    • Values that are repeated single byte values like 0x00, 0xFF, 0x41, and 0x20
    • Values that are repeated double, triple, quadruple byte sequences
    • Values that are ordered short sequences like 0-9, A-Z, a-z, etc.
    • NIST Alternating bits
    • NIST Sliding Ones
    • NIST Incrementing values
    • Single Bit Flips
    • And many more of a similar style
  • Low Entropy
    • 42 cleartext values of increasing lengths from 7 bytes to over 3MB
    • Values that have very few bytes of choice such as:
      • 2 different bytes of random distribution like: 0x4444234423234423444444442323
      • 3 different bytes of random distribution like: 0x26265151513838262638
      • 5 different bytes of random distribution like: 0x22196692921919A219A2
    • Values that have very few byte choices but where those bytes are gapped by random lengths of another byte
      • Example: 0x41000021000000000041000000330000000000000021
    • Values that have very few byte choices but where bytes of the same value are placed in random clusters
      • Example: 0x33333333a8a8777777777777a8a8a8a8a83333337777
  • Natural Language (English)
    • 108 cleartext values generated from randomly constructed sentences.
    • Includes common punctuation such as: ? " . !
    • Values are between 229 bytes and over 10KB.
    • Values contain random quantities of sentences per paragraph, with extra line breaks between paragraphs.
  • Random Bytes
    • 126 cleartext values generated from a cryptographic random number generator
    • Values are between 7 bytes and over 3MB
    • Each value is unique
  • Pre-Compressed
    • 108 cleartext values generated in the same way as the Natural Language data, then compressed using Brotli
    • Values range from 161 bytes to over 4KB
  • Binary Data
    • 72 cleartext values generated by reading random EXE and DLL files from a Windows host.
    • Files were selected with sizes ranging from 3KB to 6MB
  • Structured Data
    • 72 cleartext values were generated by reading random structured files from a Windows host of these types: XML, JSON, JS, CSV, and HTML
    • Values range in size from 3KB to 6MB
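
As referenced above, here is a hypothetical sketch of how a few of these cleartext classes could be generated. The function names and size choices are illustrative; this is not the actual test harness:

    import secrets

    def no_entropy(byte: int, length: int) -> bytes:
        """Repeated single-byte value, e.g. 0x00, 0xFF, or 0x41."""
        return bytes([byte]) * length

    def low_entropy(choices: bytes, length: int) -> bytes:
        """Random distribution drawn from a tiny byte alphabet."""
        return bytes(choices[secrets.randbelow(len(choices))] for _ in range(length))

    def random_cleartext(length: int) -> bytes:
        """Cryptographically random cleartext."""
        return secrets.token_bytes(length)

    samples = [
        no_entropy(0x41, 4096),
        low_entropy(bytes([0x23, 0x44]), 4096),  # 2-byte alphabet
        random_cleartext(4096),
    ]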

Randomness Tests

NIST STS

Purpose: To check short- and mid-range randomness for quality and closeness to statistically random data.

Methodology

NIST STS (Assess) was configured and executed using the following parameters:

  • BitStream Length: 2,000,000 bits
  • BitStream Count: 100
  • Input File Size: 25MB
  • Mode: Binary
  • Test Types: All available

Each file analyzed by STS was created by sequentially appending encrypted outputs end-to-end, without padding, until the file reached or exceeded 25MB. Every encrypted output appended to the file was created from the same cleartext input and the same encryption key.

Because 4 keys were created for testing, each cleartext value was tested 4 times, meaning 4 input files were created for STS and STS was executed 4 times.

STS executes 188 distinct tests and subtests across 15 groups; all of them were allowed to execute.
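
A minimal sketch of the file-construction step described above (the encrypt() callable stands in for the algorithm under test, which is not shown here; the 25MB target comes from the parameters listed above):

    TARGET = 25 * 1024 * 1024  # 25MB input file for STS

    def build_sts_input(path, cleartext, key, encrypt):
        """Append encrypted outputs end-to-end, without padding, until
        the file reaches or exceeds 25MB; returns the encrypt count."""
        count, written = 0, 0
        with open(path, "wb") as f:
            while written < TARGET:
                ct = encrypt(cleartext, key)  # same cleartext and key every time
                f.write(ct)
                written += len(ct)
                count += 1
        return count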

Results Summary

  • Number of STS Executions: 3,212
  • Number of Encrypts Tested: 18,867,676
  • Total Data Tested: 80.3GB
  • Pass Rate (Arithmetic Mean): 98.9897%
  • Pass Rate (Geometric Mean): 98.9843%
  • P-Values
    • Mean: 0.489545225126579
    • Standard Deviation: 0.28799267454915145
    • Median: 0.491599
    • Q1 (25th percentile): 0.23681
    • Q3 (75th percentile): 0.739918
    • P95 (95th percentile): 0.946308

STS Results by Test Group

STS Full Results

The full results of all 3,212 executions, including the actual STS textual reports and the 603,856 individual results, are stored in a MariaDB database and are available for download upon request here: Contact.

ENT Tests

Purpose: ENT was used to provide early sanity checks on encrypted outputs, looking for gross failures of entropy and randomness. While ENT is not as stringent as NIST STS or PractRand, it is a quick check for deciding whether data should continue on to more comprehensive tests.

Nonetheless, ENT provides another perspective on randomness validation, and its results, taken in combination with the other results (not in isolation), help paint a comprehensive picture of the quality of the encryption output.

Methodology

ENT was configured and executed using the following parameters:

  • -b parameter: Causes ENT to evaluate at the bit level, aligning with STS
  • -c parameter: Causes ENT to print character occurrence counts
  • -t parameter: Causes ENT to emit terse, machine-readable (CSV) output; the Monte Carlo (π estimation) test is part of ENT's default battery

Each file analyzed by ENT was created by sequentially appending encrypted outputs end-to-end, without padding, until the file reached or exceeded 25MB. Every encrypted output appended to the file was created from the same cleartext input and the same encryption key.

Because 4 keys were created for testing, each cleartext value was tested 4 times, meaning 4 input files were created for ENT and ENT was executed 4 times.
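
As a sketch, each 25MB file could be driven through the ent binary like this (assumes ent is on the PATH; parsing of the terse output is omitted):

    import subprocess

    def run_ent(path):
        """Run ENT in bit mode (-b) with occurrence counts (-c) and
        terse CSV output (-t); returns raw stdout for aggregation."""
        result = subprocess.run(
            ["ent", "-b", "-c", "-t", path],
            capture_output=True, text=True, check=True,
        )
        return result.stdout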

Results Summary

  • Number of ENT Executions: 3,212
  • Number of Encrypts Tested: 18,867,676
  • Total Data Tested: 80.3GB
  • ENT Averages
    • Entropy: 1.0 bits per bit (this is not a mistake: in -b mode the theoretical maximum is 1.0, and all 3,212 tests reported this value)
    • ChiSquare: 1.6597985616438327
    • Serial Correlation: 0.000009448007472
    • Mean: 0.500008135740971
    • Pi Approximation: 3.1415191690535527

PractRand Tests

Purpose: PractRand was used to provide more sensitive tests for randomness and for testing across mid-range and longer-range views.

Methodology

PractRand was run with the following settings:

  • -a parameter: Causes PractRand to perform all possible tests
  • -tlmin 10: Causes PractRand to start with test segments of 2^10 bytes in length (1,024 bytes)
  • -tlmax 29: Caps PractRand's maximum segment size at 2^29 bytes (512MB)

Each file analyzed by PractRand was created by sequentially appending encrypted outputs end-to-end, without padding, until the file reached or exceeded 512MB. Every encrypted output appended to the file was created from the same cleartext input and the same encryption key.

The same cleartext input was used to generate five (5) input files per key, resulting in 2.5GB of test data per cleartext sample for a single encryption key.

Because 4 keys were created for testing, the same cleartext value was tested 4×5 (20) times. This results in a total of 10GB of data tested per cleartext sample.
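
A sketch of one such execution, streaming a pre-built file into PractRand's RNG_test via its stdin input (the -a flag is carried over from the parameter list above; the path and result handling are illustrative):

    import subprocess

    def run_practrand(path):
        """Stream one pre-built 512MB file into RNG_test via stdin,
        using the test-length bounds described above."""
        with open(path, "rb") as f:
            result = subprocess.run(
                ["RNG_test", "stdin", "-a", "-tlmin", "10", "-tlmax", "29"],
                stdin=f, capture_output=True, text=True,
            )
        return result.stdout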

Results Summary

  • Number of PractRand Executions: 16,060
  • Number of Encrypts Tested: ~1.608 billion
  • Total PractRand Tests (including variants): 31,557,167
  • Total Data Tested: 80.30 TB
  • PractRand Averages
    • BCFN: 99.80% passing
    • BRank: 100% passing
    • DC6: 99.88% passing
    • FPF: 99.79% passing
    • Gap-16: 99.99% passing
    • Mod3n: 99.99% passing
    • TMFn: 100% passing

PractRand Summary By Test Group

Some notes about the summary table's columns:
PractRand scores its tests with textual results (seen in the summary table below). If a test passes as expected, it shows as "normal" or "normalish". If a test fails, it shows "Fail" or "Fail !" and so on, depending on the severity of the failure. While PractRand does produce p-values and R values, each test has different pass/fail criteria, so the textual results are used for clarity across tests.

  • Percent Normal: the "Normal" count divided by "Total Tests"
  • Percent Normal+: the combined "Normal" + "Normalish" counts divided by "Total Tests"
  • Percent Normal++: the combined "Normal" + "Normalish" + "Unusual" counts divided by "Total Tests" (see the arithmetic sketch below)
  • Total Tests: the total number of tests PractRand performed for that group
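
Expressed as arithmetic, the three percentages are cumulative ratios over the same denominator. A minimal illustration (hypothetical helper, not part of PractRand):

    def summary_percents(normal, normalish, unusual, total):
        """Cumulative pass percentages used in the summary table."""
        def percent(count):
            return 100.0 * count / total
        return (
            percent(normal),                        # Percent Normal
            percent(normal + normalish),            # Percent Normal+
            percent(normal + normalish + unusual),  # Percent Normal++
        )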

Additional notes: Normal++ is the most relevant value for scoring. PractRand is designed for testing random number generators, and small percentages of oddities and failures are expected when testing encryption outputs. The counts seen in the suspicious and failure columns are artifacts of the encryption process; no exploitable patterns were found in the data. Most of the suspicious and failure counts occurred at the maximum segment sizes (256MB and 512MB) and are not correlated across the other tests.

The entirety of the PractRand results is available upon request; use the contact form to make such a request: Contact. Note that the full results are very large, at 31+ million rows.

Hash Collision Analysis

Purpose: To check for any chance of the encryption producing a duplicated output. This encryption algorithm is designed to create wildly unique outputs every time, even for the same cleartext and the same key.

Methodology

During the STS executions, each encrypted output was hashed using 4 different hashing algorithms: SHA2-256, SHA2-512, BLAKE2-256, and BLAKE2-512. Each hash value was then checked for prior existence before being added to a database table. If a collision were to occur, the count for that hash would be incremented.
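
A minimal sketch of this check, with an in-memory set per algorithm standing in for the database table used in the actual runs. BLAKE2-256 is modeled here as BLAKE2b with a 32-byte digest; the exact BLAKE2 variant used by the original toolchain is an assumption:

    import hashlib

    # One set of previously seen digests per hash algorithm; the real
    # test stored these in a database table with collision counters.
    seen = {name: set() for name in ("sha2-256", "sha2-512", "blake2-256", "blake2-512")}

    def digests(ciphertext: bytes) -> dict:
        """Hash one encrypted output with all four algorithms."""
        return {
            "sha2-256":   hashlib.sha256(ciphertext).digest(),
            "sha2-512":   hashlib.sha512(ciphertext).digest(),
            "blake2-256": hashlib.blake2b(ciphertext, digest_size=32).digest(),
            "blake2-512": hashlib.blake2b(ciphertext, digest_size=64).digest(),
        }

    def record(ciphertext: bytes) -> int:
        """Record an output's digests; return how many collided."""
        collisions = 0
        for name, d in digests(ciphertext).items():
            if d in seen[name]:
                collisions += 1
            else:
                seen[name].add(d)
        return collisions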

Across the 18+ million encrypts performed during STS testing, there were zero collisions, as expected. This provides empirical support for the algorithm's uniqueness requirement and expectation.