
By Sylvie Ouziel,
International President, Envision Digital.
Today, many appliances, from simple devices to complex equipment, come with a direct connection to their manufacturer's cloud.
Any light switch, pump, chiller, battery or elevator now communicates in quasi real time with its OEM's cloud. Commercial and passenger vehicles, from cranes, trucks and forklifts to vans and cars, also transmit a wealth of information about position, speed, trajectory, state of health and utilization to their manufacturers' systems. This data feeds digital twins of the physical devices, which allow fault prediction and what-if scenario simulation. Does it make the manufacturers better informed about their products, how they are used, how they age and how to improve their design? Probably so. But does it make clients and users better off? Not necessarily…
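To make the flow concrete, here is a minimal sketch of how a telemetry stream can feed a digital twin that flags an emerging fault. The field names, thresholds and prediction rule are illustrative assumptions, not any OEM's actual schema.

```python
# A minimal sketch of the telemetry-to-twin flow described above.
# All field names and thresholds are illustrative, not any OEM's real schema.
from dataclasses import dataclass, field


@dataclass
class Telemetry:
    device_id: str
    timestamp: float          # Unix epoch seconds
    state_of_health: float    # 0.0 (failed) .. 1.0 (new)
    utilization: float        # duty cycle over the reporting window


@dataclass
class DigitalTwin:
    """Mirrors one physical device from its telemetry stream."""
    device_id: str
    history: list = field(default_factory=list)

    def ingest(self, reading: Telemetry) -> None:
        self.history.append(reading)

    def predicted_fault(self, horizon_readings: int = 10) -> bool:
        """Naive fault prediction: flag a steady decline in state of health."""
        recent = [r.state_of_health for r in self.history[-horizon_readings:]]
        if len(recent) < 2:
            return False
        return recent[-1] < 0.5 and recent[-1] < recent[0]
```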
An industrial company recently counted fifty different pump brands across its factories, and was thus receiving reports from fifty different clouds!
Too much smartness was killing smartness! This information dispersion and lack of standardization inhibited comparison, analysis, learning and prediction regarding performance, durability, energy consumption and maintenance patterns across the pump category. Silos exist not only between brands within one category (like pumps) but also across categories, limiting cross-cutting automation and optimization… Minimizing the overall energy consumption of a building, across appliances, would require crunching data from the different sources of consumption. Automating fault detection and diagnosis would maximize users' comfort, as a problem would be detected, or even anticipated, before a user had to suffer from it and report it. Devices could report their status in a transparent and coordinated way, triggering synchronized interventions instead of spot, siloed ones. The consequences of a power surge or a water leak would be reported by the various impacted devices in real time, and one coordinated intervention would be scheduled. Such automatic fault detection and diagnosis would eliminate the need for field workers checking the status of devices and for redundant intervention schedules. Similar challenges arise in residential and commercial buildings, in infrastructure like airports or ports, in districts and cities, and even in individual homes.
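To illustrate what the fifty-cloud problem demands in practice, here is a hedged sketch of the normalization layer that any cross-brand comparison presupposes: one adapter per vendor mapping its payload to a canonical record. The vendor payload formats and field names are invented for the example.

```python
# A sketch of the normalization step implied above: fifty pump clouds,
# fifty payload formats, one canonical record enabling cross-brand comparison.
# The vendor payloads and field names here are invented for illustration.

CANONICAL_FIELDS = ("pump_id", "brand", "energy_kwh", "running_hours")


def from_brand_a(payload: dict) -> dict:
    # Brand A reports energy in kWh and runtime in hours (hypothetical format).
    return {"pump_id": payload["serial"], "brand": "A",
            "energy_kwh": payload["energy_kwh"],
            "running_hours": payload["run_hours"]}


def from_brand_b(payload: dict) -> dict:
    # Brand B reports energy in joules and runtime in minutes (hypothetical).
    return {"pump_id": payload["id"], "brand": "B",
            "energy_kwh": payload["energy_j"] / 3.6e6,
            "running_hours": payload["runtime_min"] / 60.0}


ADAPTERS = {"A": from_brand_a, "B": from_brand_b}  # ... one adapter per brand


def normalize(brand: str, payload: dict) -> dict:
    """Map any vendor payload to the canonical schema for fleet-wide analysis."""
    record = ADAPTERS[brand](payload)
    assert set(record) == set(CANONICAL_FIELDS)
    return record
```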
The next benefit to users would come from actual "no human touch" automation. This could be: orchestrating the storage of energy into a battery, and consumption from it, depending on the solar panels' energy production, the house's needs and market prices, without human intervention. Logistics needs, energy costs and equipment health information could be fed into a decision-making algorithm controlling the picking tours, maintenance downtime and charging of picking vehicles. For those use cases, the difficulty is two-fold… First, one needs to gather information from various cloud sources, make this information understandable and exploitable, draw insight from it and come up with control actions. Second, one must trigger and execute those control actions in a timely manner, with suitable reactivity. The data journey is now: from multiple devices on the ground into various clouds siloed by device type and by brand, then into one "brain" which processes the data, transforming it into actionable insights, and eventually back into a set of control instructions that must be propagated down to the devices for timely execution. Obviously, such chains of command struggle to ensure speed and reactivity, but also reliability and resilience, for instance in case of incidents on the telecommunication network.
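As a concrete, hedged sketch of the battery orchestration case above, the decision step of such a "brain" could look like the following. The rule, price thresholds and interface are illustrative assumptions, not a production energy-management algorithm.

```python
# A minimal sketch of the "no human touch" battery orchestration described
# above. The decision rule, price thresholds and interface are illustrative
# assumptions, not a production energy-management algorithm.

def decide_battery_action(solar_kw: float, house_load_kw: float,
                          price_eur_kwh: float, soc: float,
                          cheap: float = 0.10, expensive: float = 0.30) -> str:
    """Return one control instruction: 'charge', 'discharge' or 'idle'."""
    surplus = solar_kw - house_load_kw
    if surplus > 0 and soc < 1.0:
        return "charge"            # store free solar energy
    if surplus < 0 and soc > 0.1 and price_eur_kwh >= expensive:
        return "discharge"         # cover the deficit when grid power is dear
    if soc < 1.0 and price_eur_kwh <= cheap:
        return "charge"            # stock up on cheap grid energy
    return "idle"


# Example tick of the control loop: evening peak, no sun, battery at 80%.
print(decide_battery_action(solar_kw=0.0, house_load_kw=2.5,
                            price_eur_kwh=0.35, soc=0.8))  # -> 'discharge'
```

The hard part the article points to is not this decision logic but the round trip around it: pulling the inputs from several siloed clouds and propagating the returned instruction back down to the device quickly and reliably.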
Regarding vehicles, the rich information collected by cars' embedded software is only partly available to the driver via the car's dashboard and the OEM's smartphone applications. Failure and incident codes, wear rates and the battery's state of health are not readable by third parties; the physical diagnostic "plug" made available to third parties in the vehicle itself exposes OEM-coded information which cannot easily be interpreted without the collaboration of the car manufacturers, who would have to disclose their "dictionaries". Some car manufacturers offer selected categories of data for sale, typically in an anonymized format. However, such practices are not standardized across car brands and do not really enable clients and drivers to see the benefits of optimization at their individual level.
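The "dictionary" problem reduces to a lookup a third party cannot perform. A small sketch, with codes and meanings invented for illustration:

```python
# A sketch of why third parties need the OEM "dictionaries" mentioned above:
# the diagnostic plug exposes opaque codes whose meaning only the manufacturer
# holds. The codes and meanings below are invented for illustration.

OEM_DICTIONARY = {
    "P1A23": "battery cell imbalance detected",
    "P0F07": "brake pad wear above threshold",
}


def interpret(raw_code: str) -> str:
    """Decode an OEM-specific code, if the manufacturer shared its mapping."""
    return OEM_DICTIONARY.get(raw_code, f"unknown code {raw_code}: "
                              "uninterpretable without the OEM dictionary")


print(interpret("P1A23"))  # known, thanks to a (hypothetical) disclosed mapping
print(interpret("P9XZ1"))  # opaque: the silo in action
```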
When it comes to automated guided vehicles, and even more so to truly autonomous cars, vehicle-to-vehicle information exchange, with a closed loop of communication and action, becomes indispensable to guarantee safety of operations. Two cars sharing the same urban space probably cannot wait for information to go back and forth between their respective clouds in order to make decisions, even in an era of 5G… Each car would then make safety and emergency decisions locally, at its own level, missing out on the collective intelligence it could gain by simply dialoguing fluidly with its environment via local protocols.
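Back-of-the-envelope arithmetic makes the point; the latency figures below are rough assumptions, not measurements.

```python
# Rough arithmetic behind the claim above: even a fast cloud round trip is
# long at urban speeds. Both latency figures are assumptions for illustration.

speed_kmh = 50.0                 # typical urban speed limit
speed_ms = speed_kmh / 3.6       # ~13.9 m/s

cloud_round_trip_s = 0.200       # device -> cloud A -> cloud B -> device (assumed)
local_v2v_s = 0.010              # direct vehicle-to-vehicle exchange (assumed)

print(f"distance covered during cloud round trip: {speed_ms * cloud_round_trip_s:.1f} m")
print(f"distance covered during local V2V exchange: {speed_ms * local_v2v_s:.2f} m")
# ~2.8 m versus ~0.14 m: at braking distances, that gap decides safety.
```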
Envision Digital is focused on solving this Tower of Babel challenge of siloed, unactionable IoT data and digital twins.
EnOS, the operating system of Envision, brings overarching orchestration as a "system of systems", providing both the "brain" and the "arms". EnOS use cases create insight across the silos of brands and device types (e.g. between pumps, bearings, elevators, photovoltaic panels and batteries), identify business benefits across dimensions and domains (e.g. between logistics operations, maintenance and energy consumption) and, finally, make sure this insight is converted into optimization actions, sometimes via human intervention, but preferably via direct automatic "machine to machine" triggers. In technical terms, this starts with gathering data from the OEMs' various clouds, but for real added-value use cases it usually also means connecting sensors and devices directly to EnOS's cloud, or deploying new edge boxes to benefit from first-hand, fresh and comprehensive open data. This not only ensures fully open, transparent and interoperable data gathering but also provides a direct control interface for full machine-to-machine, end-to-end automation. This is an investment and an on-site effort (and so was the deployment of a corporate ERP twenty years ago), but it is the way to free up locked-down data and unlock truly rich use cases for the optimization of plants, infrastructure and buildings, and finally for users' benefit. Actionable insight derived from multi-dimensional digital twins can be put back into the hands of asset owners, operators and users.
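In generic terms, the "system of systems" pattern can be sketched as below. This is emphatically not the EnOS API; all class and method names are illustrative assumptions about the pattern, not the product.

```python
# A generic sketch of the brain-and-arms "system of systems" pattern described
# above. This is not the EnOS API; names here are illustrative only.
from typing import Callable


class SystemOfSystems:
    """One brain over many siloed sources, with arms back to the devices."""

    def __init__(self):
        self.sources: dict[str, Callable[[], dict]] = {}        # silo -> data pull
        self.actuators: dict[str, Callable[[str], None]] = {}   # device -> control

    def register_source(self, name: str, pull: Callable[[], dict]) -> None:
        self.sources[name] = pull

    def register_actuator(self, device: str, act: Callable[[str], None]) -> None:
        self.actuators[device] = act

    def tick(self, decide: Callable[[dict], dict]) -> None:
        """Gather across silos, derive insight, push M2M control actions."""
        snapshot = {name: pull() for name, pull in self.sources.items()}
        for device, command in decide(snapshot).items():
            self.actuators[device](command)
```

The design point is that the decision function sees one cross-silo snapshot rather than fifty vendor-specific reports, and its output goes straight to actuators rather than to a human inbox.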
In the world of IT, cross-function, cross-country and even cross-company synergies did not happen by spontaneous serendipity. CIOs formulated, together with business stakeholders, their corporate IT strategy; they defined the target system architecture and drove urbanism programs to implement it throughout the company. Common services and protocols were established and adopted, standing on harmonized master data, to allow ERPs, MES, CRM, PLM, HRIS, BI and MIS… to interoperate. The world of IoT is typically not architected by CIOs but more closely administered by chief engineering officers, who inherit IoT features with the "things" they procure on behalf of the company. In our view, company-wide standardization and urbanism exercises are required in today's world of smart connected machines to unlock the power of "things" data and to address the modern version of the "spaghetti plate" challenge, the inextricable net of interfaces burdening legacy systems, that CIOs successfully addressed years ago… Whatever shape the organizational, governance and process solutions to this challenge may take, Chief IoT Officers and IoT strategies are urgently needed to make sure IoT data does not expand exponentially and expensively in a duplicated, overlapping, unactionable fashion.