Billion Bit Software
Extensive testing has been conducted to validate key properties of a strong encryption system.
This encryption method is novel and does not conform to conventional cryptographic structures, making direct theoretical analysis non-trivial. It is a 5-layer transformer that leverages referential transformations, controlled entropy cascades, and layered state evolution to eliminate statistical patterns and resist cryptanalysis. Due to its unconventional architecture, empirical testing has been used as the primary validation method, with specialized modeling in progress to establish additional theoretical proofs.
⚠ Disclaimer: The randomness tests were conducted to determine whether encrypted outputs are statistically consistent with random data.
These results alone do not prove security, nor were they intended to. The objective is to establish whether the encryption produces ciphertext free of detectable patterns or structure.
Our hypothesis is that an approach to perfect encryption security is possible, wherein ciphertext that provides no foothold for analysis, pattern recognition, or key inference yields security both theoretically and practically. The results presented here support this hypothesis via a 4-pillar rule:
When applied together and equally, these principles approach the theoretical and practical limits of encryption security by removing all known routes of attack via ciphertext analysis, cleartext manipulation, pattern recognition, bias detection, and key inference.
The approach and methods were carefully chosen to provide both broad coverage of test types and deep analysis within each type.
Encryption keys in this algorithm are a randomized permutation of values – they are not large numeric values.
Key sizes in this algorithm are not fixed and can be any size by this rule: 2^N elements where N also equals the number of bits per element.
Key size is an inherent secret as a byproduct of the key itself.
A 2^8 key (256 bytes) was used for all tests.
A total of 4 randomly generated keys were used.
Each key was used once per analyzed file.
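Under the stated rule, a 2^N key is a permutation of all N-bit element values; for N = 8 that is a permutation of 0..255 (256 bytes). A minimal sketch of generating such a key is below. The function name and the use of a Fisher-Yates shuffle are illustrative assumptions, not the actual implementation:

```python
import secrets

def generate_key(n_bits: int) -> list[int]:
    """Return a random permutation of the 2^n_bits element values,
    each element being an n_bits-wide value (rule: 2^N elements of N bits)."""
    key = list(range(2 ** n_bits))
    # Fisher-Yates shuffle driven by a cryptographically strong RNG
    for i in range(len(key) - 1, 0, -1):
        j = secrets.randbelow(i + 1)
        key[i], key[j] = key[j], key[i]
    return key

# A 2^8 key, as used in all tests: 256 one-byte elements.
key = generate_key(8)
```

Because the element count and element width both follow from N, the key's size never needs to be transmitted or stored separately, which is consistent with the key size itself being a secret.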
A total of 803 unique cleartext selections of varying sizes were created across 7 groups to provide a diverse selection of transformations.
Purpose: To check short and mid-range randomness for quality and closeness to statistically random data.
NIST STS (Assess) was configured and executed using the following parameters:
Each file analyzed by STS was created by sequentially appending encrypted outputs to the file end-on-end and without padding until it reached or exceeded 25MB. Each encrypted output appended to the file was created from the same cleartext input and the same encryption key.
Because 4 keys were created for testing, the same cleartext value was tested 4 times: 4 input files were created for STS, and STS was executed 4 times.
STS executes 188 different tests across 15 groups; all tests and subtests were allowed to execute.
The full results of all 3,212 executions – including the actual STS textual reports – and the 603,856 individual results are stored in a MariaDB database and are available for download upon request here: Contact.
Purpose: ENT was used to provide early sanity checks on encrypted outputs, looking for gross failures of entropy and randomness. While ENT is not as aggressive as NIST STS, PractRand, or others, it is a quick check for determining whether data should proceed to more comprehensive tests.
Nonetheless, ENT provides another perspective on randomness validation, and its results, taken in combination with other results (not in isolation), help paint a comprehensive picture of the quality of the encryption output.
ENT was configured and executed using the following parameters:
Each file analyzed by ENT was created by sequentially appending encrypted outputs to the file end-on-end and without padding until it reached or exceeded 25MB. Each encrypted output appended to the file was created from the same cleartext input and the same encryption key.
Because 4 keys were created for testing, the same cleartext value was tested 4 times: 4 input files were created for ENT, and ENT was executed 4 times.
Purpose: PractRand was used to provide more sensitive tests for randomness and for testing across mid-range and longer-range views.
PractRand was run with the following settings:
Each file analyzed by PractRand was created by sequentially appending encrypted outputs to the file end-on-end and without padding until it reached or exceeded 512MB. Each encrypted output appended to the file was created from the same cleartext input and the same encryption key.
The same cleartext input was used to generate five (5) input files for each key, resulting in a total of 2.5GB of test data per cleartext sample for each encryption key.
Because 4 keys were created for testing, the same cleartext value was tested 4×5 (20) times. This results in a total of 10GB of data tested per cleartext sample.
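The data volumes above follow directly from the file counts; a quick arithmetic check:

```python
# Volume arithmetic for the PractRand runs described above.
file_size_gb = 512 / 1024      # each input file is 512 MB = 0.5 GB
files_per_key = 5              # 5 input files per key per cleartext sample
keys = 4                       # 4 randomly generated keys

per_key_gb = files_per_key * file_size_gb  # 2.5 GB per key per cleartext sample
total_files = keys * files_per_key         # 20 files per cleartext sample
total_gb = keys * per_key_gb               # 10.0 GB per cleartext sample
```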
Some notes about some of the summary table’s columns:
PractRand scores its various tests across several textual results (seen in the summary table below). If a test passes as expected, it will show as “normal” or “normalish”. If a test fails, it will show “Fail” or “Fail !” and so on, depending on the severity of the failure. While PractRand does report p-values and R values, each test has different pass/fail criteria, so the textual results are used for clarity across tests.
Additional notes: Normal++ is the most relevant value for scoring. PractRand is designed for testing random number generators, and small percentages of oddities and failures are expected when testing encryption outputs. The counts seen in the suspicious and failure columns are artifacts of the encryption, and no exploitable patterns were found in the data. Most of the suspicious and failure counts occurred at the maximum segment sizes (256 and 512 megabytes) and are not correlated across the other tests.
The complete PractRand results are available upon request. Use the contact form to make such a request: Contact. Note that the full results are very large – 31+ million rows.
Purpose: To check whether the encryption can ever produce a duplicated output. This encryption algorithm is designed to create a unique output every time, even for the same cleartext and the same key.
During the STS executions, each encrypted output was hashed using 4 different hashing algorithms: SHA2-256, SHA2-512, BLAKE2-256, and BLAKE2-512. Each hash value was then checked for prior existence before being added to a database table. If a collision were to occur, the count for that hash would be incremented.
Across the 18+ million encryption operations that occurred during STS testing, there were zero collisions – as expected. This provides empirical support for that requirement and expectation of the algorithm.
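The collision check can be sketched as follows. This sketch assumes BLAKE2-256 and BLAKE2-512 refer to BLAKE2b with 32- and 64-byte digests (the variant is not specified above), and uses in-memory sets in place of the database table:

```python
import hashlib

ALGORITHMS = {
    "sha2-256":   lambda d: hashlib.sha256(d).hexdigest(),
    "sha2-512":   lambda d: hashlib.sha512(d).hexdigest(),
    # Assumption: BLAKE2-256/512 taken as BLAKE2b with 32/64-byte digests
    "blake2-256": lambda d: hashlib.blake2b(d, digest_size=32).hexdigest(),
    "blake2-512": lambda d: hashlib.blake2b(d, digest_size=64).hexdigest(),
}

seen = {name: set() for name in ALGORITHMS}        # stand-in for the DB table
collisions = {name: 0 for name in ALGORITHMS}

def record(output: bytes) -> None:
    """Hash one encrypted output with all four algorithms and increment
    the count for any hash value already observed (a collision)."""
    for name, fn in ALGORITHMS.items():
        h = fn(output)
        if h in seen[name]:
            collisions[name] += 1
        else:
            seen[name].add(h)
```

Because each output is checked under four independent hash functions, a spurious collision in any single function would not be mistaken for a genuine duplicated encryption output.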