Spatial Computing vs the Object Network
Techie article! Scoring Spatial Computing against my architectural vision
Here I’m going to compare Spatial Computing against The Inversion - my architectural vision behind the Object Network - so please do read that first. I don’t go into lots of detail about my vision, its justification or why it’s better; this is just about how there’s nothing out there that does what *I* want.
Spatial Computing
Although the phrase “Spatial Computing” has a long history, it’s been made popular recently by Apple with their Vision Pro XR headset and corresponding VisionOS operating system.
Even though this is all XR (AR plus VR), the purpose or application isn’t primarily to create augmenting or immersive virtual worlds or scenes. Rather, its main selling point is that you no longer need a fixed, small screen on your computer: the screen is now the whole space around you. So you still have applications and the familiar “desktop”, but now floating in front of you.
Spatial Computing of course gets full 3D points, and we’ll give VisionOS the OS points. There’s nothing inherently Decentralised about it, any more than a normal computer OS. As for Deconstructed and Declarative, Spatial Computing as used for desktop applications would get two more red zero blobs.
However, since it can also be used in normal XR mode to project scenes or immersive virtual worlds, it would then inherit the Inversion inherent in all virtual worlds: in a virtual world you see the states of each world object around you first (Deconstructed), and those world objects can be internally-animated (Declarative: just like in 2D in spreadsheets).
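To make the spreadsheet analogy concrete, here’s a minimal sketch (all names and the rule itself are my illustrative assumptions, not Object Network code): a world object whose state is re-derived by a declarative rule from what it observes around it, the way a spreadsheet cell is re-derived from the cells it references.

```python
# Illustrative sketch: a self-animated "door" object whose state is a pure
# function of nearby objects, rather than something imperatively commanded.

def door_rule(door, nearby):
    # Declarative re-derivation: open exactly when an avatar is nearby.
    return {**door, "open": any(o["type"] == "avatar" for o in nearby)}

avatar = {"type": "avatar", "position": (1.0, 0.0, 2.0)}
door   = {"type": "door", "open": False}

door = door_rule(door, [avatar])
print(door["open"])   # True: the door derives its own "open" state
```

The point is that nothing outside the object pushes state into it; the object animates itself from what it can see, just as a cell recalculates from its formula.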
This is the visible aspect of an internal scenegraph of one sort or another. But the Inversion of all virtual worlds is only implicit and is still locked away in the Imperative code that implements that scenegraph and its behaviour. Further, scenes, avatars and other more transient properties are usually all handled and rendered in different ways, with some parts still modelled in aggregates.
|Deconstructed
| |Declarative
| | |Decentralised
| | | |OS
| | | | |3D
|8 |8 |8 |4 |2 |
--+--+--+--+--+--+
SC|🟠|🟠|🔴|🟢|🟢| 14
--+--+--+--+--+--+
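For the curious, here’s how I read the SC row’s total of 14 (a sketch of my interpretation: I’m assuming a green blob scores the full column weight, orange scores half, and red scores zero):

```python
# Column weights from the table header, blob colours from the SC row.
weights = {"Deconstructed": 8, "Declarative": 8, "Decentralised": 8,
           "OS": 4, "3D": 2}
blobs   = {"Deconstructed": "orange", "Declarative": "orange",
           "Decentralised": "red", "OS": "green", "3D": "green"}

# Assumed scoring rule: green = full credit, orange = half, red = none.
credit = {"green": 1.0, "orange": 0.5, "red": 0.0}

score = sum(weights[f] * credit[blobs[f]] for f in weights)
print(int(score))   # 14: 4 + 4 + 0 + 4 + 2
```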
Differences to the Object Network
The first thing to do is to get rid of the desktop and the applications, of course! Although it makes sense to go in small steps with commercial products, this is a research project, so we can break the boundaries!
Now that we have a “3D OS without apps”, we can just follow the steps in my recent article on the Meta-Web, where I showed how it’s possible to fully bring out the implicit Inversion of virtual worlds (VR and AR), through links to and between all world objects.
These links have three facets corresponding to the first three blobs in the score: seamless (Deconstructed), self-animated (Declarative) and symmetric (Decentralised).
So these links can be used in this operating system to seamlessly sew together little self-animated world objects to form an explicit scenegraph.
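A minimal sketch of what an explicit, link-sewn scenegraph might look like (the UIDs, field names and walker are all my illustrative assumptions): objects address their children by link rather than by in-memory pointer, so the graph is plain data that could in principle span hosts.

```python
# Illustrative object store: each world object refers to its children by
# UID link, making the scenegraph explicit rather than internal.
objects = {
    "uid-room":  {"is": "scene",  "children": ["uid-table", "uid-lamp"]},
    "uid-table": {"is": "object", "children": ["uid-lamp"]},  # shared via link
    "uid-lamp":  {"is": "object", "children": []},
}

def walk(uid, depth=0, seen=None):
    # Resolve links to lay out the scenegraph; in the distributed case a
    # remote UID would be fetched over the network instead of looked up here.
    seen = set() if seen is None else seen
    out = ["  " * depth + uid]
    if uid not in seen:
        seen.add(uid)
        for child in objects[uid]["children"]:
            out += walk(child, depth + 1, seen)
    return out

for line in walk("uid-room"):
    print(line)
```

Because the lamp is reached by link from both the room and the table, it can appear in two places without being copied - the seamless facet - while each object still animates itself, and any link could just as well point at an object on someone else’s instance.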
This scenegraph would not be internal to a single operating system instance. It’s a single, global, distributed scenegraph that would stretch symmetrically across and between all instances. You could explore a global network of world objects owned by different people all over the planet.
Hit the subscribe button to learn more, and feel free to leave a comment or question!