Development strategy FAQs
Although we are focused on delivering a way to remotely assess the home environment, with all the improvement that can bring, Conja has a clear development strategy with the goal of significantly disrupting the way training is delivered.
We fully intend to ride the crest of the wave about to sweep over the information sector, as increasingly mobile workforces demand upskilling on demand, anywhere.
-
Mobile is our strength. We focus on mobile hardware instead of goggles because we want to support usability, accessibility and scale for workers on the job.
-
We design for XR first because delivering rich information directly into the Mixed Reality stage is the nearest thing to having a trainer demonstrate in front of you. It also allows generative design and the manipulation of USDX twins and objects without needing 3D design skills.
-
Applying visual spatial tools with 3D measurement lets workers create evidence of competency and judgment.
-
As noted above, health and field workers prefer trainers to show them rather than tell them. Physical movements and demonstrations are better understood when people can see them from all angles.
-
Frontline workers are on site and in the field, and aged care workers are in people’s homes. We need tools that enable them to visualise on site.
-
Computer Vision AI can be tied to a trainer’s vision and judgment, guiding a trainee with “trainer-like” co-pilot visual judgment.
Most health data is visual, and computer vision provides the visual context AI needs to guide workers within the specific physical environments they are in.
-
In training, instructional designers can use Generative AI to accelerate the creation of images and materials. However, we believe Generative AI is more useful when it is chained after Computer Vision in the process.
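The “vision first, then generation” chain can be sketched as below. This is a minimal illustration, not Conja’s actual pipeline: every function, class and label here is a hypothetical stand-in, and the detection step is stubbed so the chaining logic runs end to end.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    """One object found by the computer-vision stage (hypothetical schema)."""
    label: str
    confidence: float

def detect_objects(image_path: str) -> list[Detection]:
    # Stub standing in for a real computer-vision model; returns fixed
    # detections so the generative step below has grounded input.
    return [Detection("wet floor", 0.92), Detection("handrail", 0.80)]

def build_training_prompt(detections: list[Detection], min_conf: float = 0.5) -> str:
    """Chain step: turn vision output into a grounded Generative AI prompt."""
    seen = [d.label for d in detections if d.confidence >= min_conf]
    return ("Create a training overlay for a scene containing: "
            + ", ".join(seen)
            + ". Highlight hazards and safe actions.")

prompt = build_training_prompt(detect_objects("home_visit.jpg"))
```

The point of the chain is that the generative step never invents the scene: it only describes what the vision step actually observed.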
-
Co-pilot XR digital twins can demonstrate physical body movements and multi-step processes in a way that is closer to a physical trainer than a photo or video. When trainees want the trainer to “demonstrate”, XR digital twins come closest to reproducing the trainer’s movements and processes.
Photos dislocate workers from their context, but XR keeps workers in context. If we use AI to detect danger in the environment, we can mark up and highlight its actual location in the XR context. XR can overlay information on the worker’s physical environment.
Built-in conversational interactivity will allow even greater realism, and therefore a quicker path to understanding and better retention of knowledge.
-
In an assessment and training context, Conja can apply Computer Vision with AI and built-in mobile sensors such as LiDAR to visually recognise and label a significant or dangerous item in the worker’s environment, then use AI to generate visual spatial guidance in the form of digital twins or XR markup.
This intelligent guidance is overlaid directly on the physical environment.
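The detect-label-markup flow described above might look like the following sketch. The hazard classes, coordinates and field names are all invented for illustration; in a real implementation the labels would come from a vision model and the 3D positions from the device’s LiDAR depth data.

```python
from dataclasses import dataclass

@dataclass
class LabeledItem:
    label: str                             # class name from a vision model
    position: tuple[float, float, float]   # metres, e.g. from LiDAR depth

@dataclass
class XRMarkup:
    anchor: tuple[float, float, float]     # where the overlay is pinned
    message: str

# Hypothetical hazard classes for a home-visit scenario.
HAZARDS = {"loose rug", "exposed cable", "wet floor"}

def generate_markup(items: list[LabeledItem]) -> list[XRMarkup]:
    """Turn labelled detections into XR annotations anchored in the room."""
    return [XRMarkup(item.position, f"Hazard: {item.label}")
            for item in items if item.label in HAZARDS]

scene = [LabeledItem("loose rug", (1.2, 0.0, 3.4)),
         LabeledItem("armchair", (0.5, 0.0, 2.0))]
marks = generate_markup(scene)
```

Because each annotation carries the item’s measured position, the XR layer can highlight the actual location of the hazard rather than a generic warning.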
-
SCORM was critical in standardising training technologies over the past decades.
We believe OpenUSD with Omniverse will change how training and assessment are done over the next few decades. Rather than stream Omniverse into Conja, we are building Conja on mobile-first principles for OpenUSD, so that when we connect to Omniverse we bring native offline, edge and remote capabilities that multiply what Conja and Omniverse can do together.