The Dot Theory
The full and complete paper, in 11 sections, is available in blog posts for comment; please navigate by selecting "older posts" for further sections.
1. Prologue, 2. Abstract, 3. Introduction, 4. Method, 5. Structure, 6. Discussion, 7. Conclusions,
8. Addenda A–D, 9. Addendum E, 10. Reinterpreting Spinors, 11. Addenda F–K & references
5. Structure
Please note: owing to the site's editing options, derivations of formulas and pictures do not display in this blog-post format; they are available within the document on Kindle.
The Patient X “label”
i.e. the perspective from which we will observe, for relevance, the offered data's relationships to:
all data accessible across the data lake(s), and
the cluster of data representing all known data points of, or relating to, the traits and behaviours most similar to those of Patient X.
Patient X is truly anonymous. On the one hand, the data are accessible only with the agreed permission of the individual they represent; on the other (simultaneously and inevitably), that individual has left a trail of data that modifies the total meaning of Patient X's "having been" something at some point in space and time, relative to the rest of the archetype-constructed neural network located in the data lake(s). This is because the data were given meaning by an individual in real-world, life-experience terms, thereby changing the neural network's state.
Mimicking the known data in a fake wallet (an imposter) also offers no insight: it only lets the searcher find out what would be best for another person, and then only if what they knew was correct and complete.
Current structure
What this means, in practical terms, is that we today have data lakes of data points that are already widely associated with individual traits (hair colour, height, diagnoses, postcodes, etc.) and are seen as safely and anonymously representative of the impact of an event, or group of events, on an individual with those traits. With the emergence of new computational technology (LLM, qubit and AI facilities), these are now being assessed at scale, and they reliably produce cost-saving archetypes and formulaic expressions, as represented in their optimisation paths.

This is the current situation, and only this arbitrary, avatar-based referencing is required for higher-order calculation and prediction. The idea behind this paper was to expedite this inevitable process without incurring delays in delivering the computational equivalent of real-time prediction. This is why we have not pursued patents ahead of this paper.
Derivatives
Using the BSM equation (for elaboration, refer to the Black-Scholes paper, "The Pricing of Options and Corporate Liabilities"), we complete the meaning of the value of a call option in a fairly priced market as:
$$b_1 = \frac{\ln(kx/c) + \tfrac{1}{2}v^{2}(t^{*}-t)}{v\sqrt{t^{*}-t}}$$
Option value b1 is built as follows. Take the natural logarithm of the objectively evaluated qualities of the individual's assets (stock price x), times k, the reasoned expectation of improvement from the current position divided by the perception of the current status of quality of life (expected stock price / current stock price), over c (the exercise price), the degree of sense of being better off than one would have been without intervention (the realistic expectation): ln(kx/c).

PLUS half v², the current quality of health (the variance rate of return on stock x), times the internal sense of expected improvement in the health problem: ½v²(t*−t).

DIVIDED BY the quality of robustness (past clinical history) times the current subjective sense of "victimisation": v√(t*−t).
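The b1 term above can be checked numerically. A minimal Python sketch, assuming the reading ln(kx/c) given earlier; the function name and example inputs are illustrative, not from the paper:

```python
import math

def b1(x, c, k, v, t_star, t):
    """b1 as described above: the Black-Scholes d1 term with the paper's
    k factor folded into the log-moneyness.
    x:       objectively evaluated asset quality (stock price analogue)
    c:       exercise-price analogue (realistic expectation)
    k:       expected improvement / perceived current quality of life
    v:       volatility analogue (robustness vs. sense of victimisation)
    t_star - t: time remaining to expiry."""
    tau = t_star - t
    return (math.log(k * x / c) + 0.5 * v**2 * tau) / (v * math.sqrt(tau))

# Illustrative values: at-the-money (kx = c), v = 0.2, one time unit.
print(round(b1(x=100.0, c=100.0, k=1.0, v=0.2, t_star=1.0, t=0.0), 4))  # prints 0.1
```

With kx = c the logarithm vanishes and b1 reduces to ½v√(t*−t), which the example confirms.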
This can further be rationalised as:
$$B_1 = \frac{\dfrac{\text{Objective life Value (lV)}}{\text{Subjective lV Perception}} + \text{Current QOH} \times \text{Internal expectation}}{\text{Past Clinical History (Robustness)} \times \text{Current perception of suffering (victimisation)}}$$
This can in turn further be rationalised to:
$$B_1 = \frac{\text{Experiential Reality} + \text{current access to placebo}}{\text{Technical Clinical Reality} \times \text{Nocebic input factors}}$$
Or:
This can further be translated as the ratios Strength/Sensitivity or Power/Control.
For analytical purposes we recycle
$$kx\,N(b_1) - k^{*}c\,N(b_2)$$
and position that:
The individual under evaluation (the patient) is exposed to data representative of the "most-like" avatar that can be derived from the data given by that individual, rather than to the data of another individual.
(b1) and (b2) provide the fairest possible footing on which to provide healthcare, and the one most likely to result in the provision of beneficial information.
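The recycled valuation above can be evaluated end to end. This sketch assumes the standard Black-Scholes relation b2 = b1 − v√(t*−t) (the paper does not define b2 explicitly here); all parameter values are illustrative:

```python
import math

def norm_cdf(z):
    # Standard normal CDF via the error function (no external dependencies).
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def call_value(x, c, k, k_star, v, t_star, t):
    """Evaluate kx*N(b1) - k*c*N(b2), with b1 as defined earlier and
    b2 = b1 - v*sqrt(t*-t), as in the original Black-Scholes derivation."""
    tau = t_star - t
    b1 = (math.log(k * x / c) + 0.5 * v**2 * tau) / (v * math.sqrt(tau))
    b2 = b1 - v * math.sqrt(tau)
    return k * x * norm_cdf(b1) - k_star * c * norm_cdf(b2)

# Illustrative at-the-money case with k = k* = 1: value is about 7.97.
print(round(call_value(x=100.0, c=100.0, k=1.0, k_star=1.0,
                       v=0.2, t_star=1.0, t=0.0), 2))
```

With k = k* = 1 this collapses to the classic Black-Scholes call price with zero interest rate; the k and k* weightings are where the paper's subjective quality-of-life factors enter.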
What this process does is "quantise" the subjective concept of "meaning" by creating a new set of sub-types of that meaning from which to analyse the data. It produces this unique, hyper-specific computational perspective by being constructed from unique combinations of unique sub-defining traits (metadata), and it divides its statistical distribution of presentations into one of five arbitrary subcategories (referred to in this document as "Platonics", a term suggested by the irreducible Platonic shapes), unless a new defining term is added. The system thinks about itself: it decides not on traits or behaviours alone, but with knowledge of both. This ultimately seems to enable the crossing of the computational chasm of healthcare data.

Doing this enables system prediction analogous to successful hedge-fund management equations and systems. Healthcare systems should be able to outmatch this level of financial predictive efficiency, thanks to the ability to directly influence the system user (both clinician and patient) via apps and wearable technology, as well as possible VR and AR applications within the healthcare setting.
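The division of a statistical distribution of presentations into five subcategories can be illustrated with a simple quintile-bucketing sketch. The quintile rule and the function name are illustrative assumptions, not the paper's definition of Platonics:

```python
def platonic_bucket(value, population):
    """Assign `value` to one of five quintile buckets (0..4) of `population`.
    Illustrative stand-in for the five 'Platonic' subcategories."""
    ranked = sorted(population)
    n = len(ranked)
    # Count how many population members fall at or below `value`.
    below = sum(1 for p in ranked if p <= value)
    # Map the empirical rank onto one of five equal-width quintiles.
    return min(4, (below * 5) // n) if n else 0

population = list(range(100))  # illustrative trait scores 0..99
print(platonic_bucket(10, population), platonic_bucket(95, population))  # prints 0 4
```

A real implementation would bucket on the archetype's full metadata distribution rather than a single scalar trait, but the quantisation step is the same: each presentation lands in exactly one of five subcategories.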
Further discussion on Platonics in Addendum C