It is summer and holiday season, so we figured we would write a short article on the topics behind the headline, since a lot of technology is flowing together to create digitalization and, with it, the potential for transformation.
We are in mid-stream convergence: technology today is about more than buying a digital tool. It is about creating new business models, improving sustainability and restructuring the construction value chain so that we can break the productivity paradigm and catch up with advanced manufacturing, which by some estimates is 60% ahead of construction.
This is not only a business imperative; it is also a need for mankind. In Germany we have a deficit of one million affordable homes suited for mid- and low-income families, in urbanized areas where jobs are available. In the US alone, this number is 5 million units a year, and on a global scale the figure is staggering, as the same needs are found in MENA, Asia and throughout Europe.
However, this often translates into “cheap” homes, which do not offer the same qualities, such as infrastructure and recreational areas, and in some parts of the world not even running water and electricity. The result is the same: cheap homes create ghettos without social mobility or a stimulating environment for the next generation.
To break out of this, we need to find ways to make it profitable for developers and contractors to produce quality homes in quality communities, in a sustainable way and without waste. Waste is a significant cost driver through re-work, oscillations and the general waste of material and resources.
In the late nineties and the early years of this century I worked with the digitalization of advanced manufacturing and witnessed first-hand how technology from vendors such as SAP supported the transformation of producers into advanced manufacturers. They broke out of spaghetti software architectures, with hundreds or even thousands of applications serving as data silos, into much more scalable and transparent end-to-end architectures. This allowed more collaboration through the supply chain, turning it into a value chain: building virtually before building physically, driving out risk by elimination rather than transfer, and ending up in a much more scalable and sustainable paradigm. This is what technology and Industry 4.0 thinking can do for the engineering and construction industry, in all modesty led among others by RIB and our partners.
In the digital age of 2018, information technology is exponential: the computing power one dollar can purchase doubles roughly every two years. This has held true historically, the growth shows no sign of stopping, and it is beginning to give us the resources to create a highly automated society with artificial intelligence in the lead.
It is important to realize that much of the new technology that seems to be driving the disruption is mostly NOT new invention. These are inventions from previous decades that have now become cheap and therefore scalable. This leads different technologies to converge and create new, innovative technologies. That convergence is the main driver of what we call Industry 4.0, and the potential value creation can follow the exponential curve generated by digitalization.
The effect of digitalization comes in different stages and as such must be presented individually. Many parts of our lives are being digitalized to a further degree, even in areas already highly saturated with technology. There are many obvious areas where digitalization is showing its impact, but also others that are less known. Agriculture, for instance, has been vastly digitalized in recent years with IoT, machine control and labs where science and technology increase the output and quality of crops, trying to feed a growing population in a more productive and sustainable way. In an extension, the brewery Carlsberg is using AI in its attempt to predict the taste of different barleys in the final products of the brewing process, combining craftsmanship and technology.
Peter Diamandis, founder and chairman of the X Prize Foundation, has a concept he calls the 6 D’s of technological disruption. The concept is largely based on Ray Kurzweil’s observation, the Law of Accelerating Returns, which explains the exponential growth of information technology and why we humans struggle to grasp the technological potential of Industry 4.0:
- Digitization: Anything that becomes digitized enters the same exponential growth we see in computing. Digital information is easy to access, share and distribute; it can spread at the speed of the internet. Once something can be represented in zeros and ones – from music to biotechnology – it becomes an information-based technology and enters exponential growth.
- Deception: When something starts being digitized, its initial period of growth is deceptive, because exponential trends do not seem to grow very fast at first. Doubling 0.01 only gets you to 0.02, then 0.04, and so on. Exponential growth really takes off after it breaks the whole-number barrier: 2 quickly becomes 32, which becomes 32,000 before you know it.
- Disruption: The existing market for a product or service is disrupted by the new market the exponential technology creates, because digital technologies outperform in effectiveness and cost. Once you can stream music on your phone, why buy CDs? If you can snap, store and share photographs, why buy a camera and film?
- Demonetization: Money is increasingly removed from the equation as the technology becomes cheaper, often to the point of being free. Software is less expensive to produce than hardware, and copies are virtually free. You can now download any number of apps on your phone to access terabytes of information and enjoy a multitude of services at costs approaching zero.
- Dematerialization: Separate physical products are removed from the equation. Technologies that were once bulky or expensive – radio, camera, GPS, phone, maps – now all fit in a smartphone in your pocket.
- Democratization: Once something is digitized, more people can have access to it. Powerful technologies are no longer only for governments, large organizations or the wealthy.
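The deceptive early phase described in the list above can be sketched in a few lines of Python; the starting value of 0.01 is the illustrative number from the list, not real data:

```python
# Illustrative sketch: why exponential growth looks deceptive at first.
# Starting from 0.01 and doubling each step, early values barely move,
# but once past the whole-number barrier the numbers explode.
def doublings(start, steps):
    """Return the series of values produced by repeatedly doubling `start`."""
    value = start
    series = [value]
    for _ in range(steps):
        value *= 2
        series.append(value)
    return series

series = doublings(0.01, 21)
print(series[1:4])   # early steps look flat: [0.02, 0.04, 0.08]
print(series[-1])    # after 21 doublings: 20971.52
```

Twenty-one doublings take 0.01 past twenty thousand, which is the whole point: the first handful of steps are invisible, the last handful dominate.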
Computing Power: Moore’s Law
Moore’s Law describes the exponential growth that underpins digitalization, based on an observation by Intel co-founder Gordon Moore. At RIB we have experienced this exponential effect in full over the past ten years: initially, hardware and processing power limited our ability to run further services on our advanced technology. With processing power doubling every couple of years, we quickly crossed the inflection point where on-prem hardware no longer posed limitations. We therefore continued a journey of integrated concurrent engineering in our software teams, hedging that Moore’s Law would allow hardware and compute power to catch up so that they would not limit our R&D programs. We made a similar hedge with MTWO and iTWO 4.0, preparing our offering for full cloud enablement before GPUs were available in any public, multi-tenant cloud offering.
The CPU is the computer’s brain. It is an integrated circuit consisting of many transistors, and the transistor count determines the efficiency of the CPU. The transistor count has historically doubled every two years, thereby doubling computer performance; this observation is what we call Moore’s Law. This method of observing the past to foresee the future ran into trouble from 2011 to 2015, a period in which the gap between actual CPU performance and the performance Moore’s Law predicted continued to widen.
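The doubling observation can be written down as a one-line projection. A minimal sketch, assuming a fixed two-year doubling period; the base figure is loosely based on the roughly 2,300 transistors of the Intel 4004 from 1971 and is for illustration only:

```python
# Back-of-the-envelope Moore's Law projection: one doubling of the
# transistor count per fixed period. Figures are illustrative, not
# vendor-accurate.
def projected_transistors(base_count, base_year, target_year, doubling_period=2):
    """Project a transistor count forward assuming exponential doubling."""
    n_doublings = (target_year - base_year) / doubling_period
    return base_count * 2 ** n_doublings

# From ~2,300 transistors in 1971, projected to 2018:
print(projected_transistors(2300, 1971, 2018))  # on the order of tens of billions
```

The same formula also shows why the 2011-2015 gap mattered: even a small shortfall per period compounds into a large deviation from the predicted curve.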
GPU Accelerated Computing: NVIDIA
The GPU is essential to reflect on, since it allowed RIB iTWO, like other object-oriented and graphical data models, to run in the cloud and be accessed seamlessly on multiple devices via apps and browsers, making accessibility much greater and the infrastructure for running the software more scalable and affordable. This was also the kick-start for MTWO, when Microsoft included GPU capability in Azure.
The American company Nvidia invented the world’s first GPU in 1999, the GeForce 256: a specialized electronic circuit designed to rapidly manipulate and alter memory to accelerate the creation of images in a frame buffer intended for output to a display device.
In 2012 Nvidia introduced the world’s first virtualized GPU, GRID, bringing graphics to cloud computing. In 2015 Nvidia introduced the Titan X, a GPU built around its parallel computing platform CUDA. CUDA is Nvidia’s approach to GPU-accelerated computing, or GPGPU (general-purpose computing on graphics processing units). This is what put Moore’s Law back on track and what makes deep learning possible at scale.
In 2017 Nvidia introduced Volta, currently its fastest GPU, which powers the world’s largest supercomputers. It is also being adopted by all leading cloud service providers and every major data-centre system manufacturer. Nvidia’s next-generation GPU is called Turing, rumoured for release later this year and named after the great mathematician Alan Turing, widely considered the father of theoretical computer science and artificial intelligence.
At GTC 2017, Nvidia CEO Jen-Hsun Huang claimed that GPU-accelerated computing will keep Moore’s Law on track, projecting a 1,000x growth in computer performance from 2015 to 2025.
There are a lot of misconceptions when it comes to artificial intelligence. Some claim that artificial intelligence is only capable of doing repetitive jobs, while others are under the delusion that it can form its own opinions and become hostile to humans. Both are wrong; in the real world, things are more nuanced. However, Microsoft CEO Satya Nadella suggested very strongly during his keynote at Inspire 2018 that AI and data could be “weaponized” and that we therefore need a digital Geneva Convention.
Artificial intelligence is a subset of computer science: imitating intelligence using algorithms. An algorithm is an unambiguous specification of how to solve a class of problems; algorithms can perform calculation, data processing and automated reasoning tasks.
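As a small illustration of an “unambiguous specification of how to solve a class of problems”, here is a classic algorithm, binary search, sketched in Python:

```python
# Binary search: an unambiguous, step-by-step recipe that solves a whole
# class of problems (finding any target in any sorted list).
def binary_search(items, target):
    """Return the index of `target` in sorted `items`, or -1 if absent."""
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2          # probe the middle element
        if items[mid] == target:
            return mid
        elif items[mid] < target:
            lo = mid + 1              # discard the lower half
        else:
            hi = mid - 1              # discard the upper half
    return -1

print(binary_search([1, 3, 5, 7, 9], 7))  # → 3
```

Nothing here is learned or statistical; every step is fully specified in advance, which is precisely what separates a plain algorithm from the machine learning discussed next.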
Machine learning is a subset of artificial intelligence, often referred to as artificial narrow intelligence (ANI), where the artificial intelligence is limited to expertise within ONE specified task. It can be defined as algorithm-based pattern-recognition techniques for solving practical business and technological problems. ML can come as supervised learning, where the computer is prompted with suggestions and ranges to form the hypotheses it tests, or unsupervised learning, where none are given and the ML algorithms create these hypotheses themselves.
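The supervised/unsupervised distinction can be made concrete with a minimal pure-Python sketch; the data points below are invented for illustration:

```python
# Minimal sketch of the supervised vs unsupervised distinction.
# - Supervised: labelled pairs (x, y) guide the model.
# - Unsupervised: the algorithm groups unlabelled values on its own.

def fit_slope(xs, ys):
    """Supervised: least-squares slope of y = a*x through the origin,
    learned from labelled example pairs."""
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

def two_means(values, iterations=10):
    """Unsupervised: split 1-D values into two clusters (a tiny k-means)."""
    a, b = min(values), max(values)          # initial cluster centres
    for _ in range(iterations):
        left = [v for v in values if abs(v - a) <= abs(v - b)]
        right = [v for v in values if abs(v - a) > abs(v - b)]
        a, b = sum(left) / len(left), sum(right) / len(right)
    return a, b

print(fit_slope([1, 2, 3], [2.1, 3.9, 6.0]))       # slope close to 2
print(two_means([1.0, 1.2, 0.9, 8.0, 8.3, 7.9]))   # centres near 1 and 8
```

In the first function the “labels” (the y-values) steer the result; in the second, the grouping emerges from the data alone, which is the essence of the unsupervised case described above.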
Deep learning is a subset of machine learning and can be defined as software agents that, given some resources and a well-defined set of tasks, can solve those tasks in the minimum amount of time available. Even though a “well-defined set of tasks” is not the same as general intelligence, it is often discussed alongside artificial general intelligence (AGI), or general-purpose AI. AGI also uses algorithms.
The difference is that, in this case, the artificial intelligence learns using something called neural networks, or neural nets: many parallel layers stacked on top of each other that (oversimplified) imitate the layers of neurons in the human brain, where learning is determined by the relations between neurons in different layers. This method became widely viable around 2015 with the release of Nvidia’s GeForce GTX Titan X.
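The layered structure just described can be sketched, heavily oversimplified, as a tiny feed-forward pass in Python; the weights below are arbitrary illustrative numbers, not trained values:

```python
import math

# Oversimplified sketch of a layered neural net: each layer's outputs
# feed the next layer, and behaviour is determined by the weights
# connecting the layers. Weights here are arbitrary, not trained.

def sigmoid(x):
    """Squash any number into the range (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def layer(inputs, weights):
    """One layer: each neuron weighs all inputs and applies an activation."""
    return [sigmoid(sum(w * i for w, i in zip(neuron, inputs)))
            for neuron in weights]

hidden_weights = [[0.5, -0.2], [0.3, 0.8]]   # 2 inputs -> 2 hidden neurons
output_weights = [[1.0, -1.0]]               # 2 hidden -> 1 output neuron

hidden = layer([1.0, 0.5], hidden_weights)   # first layer
output = layer(hidden, output_weights)       # second layer
print(output)  # a single number between 0 and 1
```

Training (which this sketch omits) consists of nudging those weight numbers until the final output matches known examples, and the massively parallel arithmetic involved is exactly what GPUs accelerate.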
Artificial intelligence generally performs better the more data you feed it, which is why it is imperative that companies today think about data as an asset rather than a utility. For software companies like RIB, it is equally important to reflect not only on software and product capabilities, but also on the data model and the ability to capture, transact, analyse and farm data, as we do in iTWO 4.0 and MTWO. The more diversified the information, the better, which is why data is captured top-down and bottom-up; in construction this means everything from financial data and simulations to actual progress and field data. Artificial intelligence is predicted to beat the Turing Test in 2029, according to Ray Kurzweil. As such, we are all in a data-capture race, since the quantity and quality of data is what will feed real AI for uses such as artificial engineering, as showcased below with our AI assistant McTWO:
Computing power is available whenever and wherever you are, with the likes of Microsoft Azure, AWS, Alibaba Cloud and Tencent, but also in hybrid formats such as Ingram’s Blue Cloud. Cloud computing is the new cost-efficient way of delivering software services: a much more reliable, scalable and cost-effective model for managing, storing and processing data over the internet. Cloud computing is a paradigm that allows access to shared computing resources. Typically, cloud is divided into public cloud, where many share the same resources, and private cloud, where one “client” has a private cloud environment; hybrids of the two are also offered.
There are three delivery models of cloud computing: SaaS, PaaS and IaaS. SaaS is a service delivering application software to users. The application software runs on an independent platform, such as a web browser on a PC; other devices usually use applications that run on the mobile or tablet’s operating system as the platform. Microsoft Office 365 and the Google app ecosystem are good examples of SaaS. It is available to multiple end users, and the computing resources are managed by the MSP providing the platform and infrastructure. A managed service provider (MSP) is a company that remotely manages a customer’s IT infrastructure and/or end-user systems, typically on a proactive basis and under a subscription model.
PaaS, or Platform as a Service, is a service made up of a programming-language execution environment, an operating system, a web server and a database. It encapsulates the environment where users (developers) can build, compile and run their programs, while the infrastructure is managed by the providing MSP.
IaaS, or Infrastructure as a Service, is a service that offers the computing architecture and infrastructure: all computing resources, such as data storage, virtualization, servers and networks, but in a virtual environment so that multiple users can access them.
IoT, or the Internet of Things, is a network of physical devices embedded with electronics, software, sensors, actuators and connectivity, which enables these things to connect and exchange data, creating opportunities for more direct integration of the physical world into computer-based systems and enabling insights never available before.
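A minimal sketch of such an exchange in Python: a sensor encodes a reading as a JSON message and a monitoring system acts on it. The device ID, field names and threshold are invented for illustration:

```python
import json

# Hypothetical IoT exchange: a sensor publishes a reading as JSON, and a
# monitoring system flags values outside an acceptable range. All names
# and limits here are made up for illustration.

def encode_reading(device_id, temperature_c):
    """Sensor side: package a reading as a JSON message."""
    return json.dumps({"device": device_id, "temperature_c": temperature_c})

def needs_attention(message, limit_c=28.0):
    """Monitoring side: decode the message and check it against a limit."""
    reading = json.loads(message)
    return reading["temperature_c"] > limit_c

msg = encode_reading("meeting-room-4", 31.5)
print(needs_attention(msg))  # → True
```

Real deployments add transport (e.g. MQTT or HTTP), authentication and storage on top, but the core pattern of devices emitting structured data for computer-based systems to act on is this simple.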
This is mainly possible because sensors have become cheap. However, it also opens doors and makes systems vulnerable to hackers, which requires better cybersecurity.
As more and more IoT devices get connected, it has been estimated that we will reach almost 80 billion connected devices by 2025, generating massive amounts of data.
IoT is making assets smart, and with the fast decrease in sensor prices, existing assets are now being converted into smart assets. In design and construction, many supplies are already smart at their inception, meaning they are able to send information about their usage and surroundings, so engineers and designers, just like clients, have to take this into consideration early on.
Great IoT examples can be found in many places, but ISS has gone all in and fully digitalised its headquarters in Copenhagen with sensors. This means that the canteen knows how many people are in the building to avoid wasting food; that assets are monitored for wear, with maintenance plans prescribed accordingly; that noise, temperature and oxygen levels are monitored to secure an optimal working environment; and that areas of the building not in use are not scheduled for cleaning at night. As such, IoT will also offer consumption-based or as-a-service business models for asset owners, which leads to a new paradigm of facility management and asset management based on data rather than intentional plans. Companies are emerging in this space to convert existing buildings into smart assets, like the consulting firm Glaze. Don’t miss the BLC View with Glaze (translation available):
Sensor technology has dropped dramatically in price over the last 10 years; combine it with artificial intelligence and you have the robotics industry. Recent advances from the likes of Boston Dynamics have shown robots that are able to problem-solve and are sent out as field surveyor robots.
The drop in sensor prices makes robotics scalable. Soon we will have autonomous cars for transportation, autonomous drones for logistics, and androids and 3D printers for manufacturing. In the future, the transportation industry, the manufacturing industry AND the construction industry are going to be part of the robotics industry.
At RIB we work significantly in the robotics space, developing software that supports running prefab factories outputting concrete elements and volumetric as well as panelized objects. Our software controls the robots and the factory floor, allowing for smart analysis to identify bottlenecks and simulate constant optimizations.
MTWO / iTWO 4.0
Bringing all of the above together is our MTWO offering with Microsoft, a subscription-based offering of our core product iTWO including Azure infrastructure, while iTWO 4.0 is for private-cloud customers, often found in our key-account segment of the top 1,000 companies globally, as these have their own data centres and complex ERP structures.
In the application layer we offer SaaS with different infrastructure and platform options, as mentioned above. Furthermore, a platform layer (PaaS) offers the possibility to script connections to other solutions or to embed software directly in the platform, as we have done with Revit from Autodesk. The beauty here is that we can jointly develop tools to improve the scope and functionality, e.g. our “fast modelling” suite, which is unique because of the embedded nature of Revit in the platform: it allows one-click operations to be performed automatically in the model, preparing it for the engineering processes rather than engineers or BIM managers having to spend hours or days.