31 Most Disruptive Technologies of the Modern Age
The word disruption gets thrown around a lot. There are technical definitions of disruption, like Clay Christensen's, and looser definitions that have led the label to be applied to pretty much any product or startup. But over the last 20 years, a specific set of technologies and product categories has truly changed how we live and work, and those are the ones worth singling out here.
Cloud Computing
Before the rise of cloud computing, businesses were
responsible for all aspects of their networking hardware and
maintenance, regardless of how much they were using these resources at
any given time. They were also without options when their servers were
maxed out. Cloud computing companies like Amazon Web Services offered an
alternative: rent storage space and processing power on their massive
servers as needed. When clients use a provider’s servers, they’re
charged. As soon as they stop, charges stop. This allowed companies of
all sizes to access cost-effective computing and storage solutions and
greatly lowered the cost of entry for tech startups. It’s proven so
game-changing that even competitors like Netflix rely on AWS to keep
their streaming services up and running. For individuals as well, cloud storage from services like Dropbox and Google Drive allows users to store files of all kinds and sizes on remote servers and
access them at any time.
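To make the pay-as-you-go model concrete, here is a minimal illustrative sketch in Python using boto3, the AWS SDK for Python. It assumes AWS credentials are already configured, and the bucket name is a made-up example; storing or retrieving a file is a couple of API calls, billed only for what you actually use.

```python
import boto3

# A minimal cloud-storage sketch, assuming AWS credentials are configured
# and using a hypothetical bucket name. There are no servers of your own to
# buy or maintain -- you pay only for the bytes stored and requests made.
s3 = boto3.client("s3")

# Upload a local file to the remote bucket...
s3.upload_file("report.pdf", "my-example-bucket", "backups/report.pdf")

# ...and pull it back down whenever it's needed.
s3.download_file("my-example-bucket", "backups/report.pdf", "report-copy.pdf")
```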
MP3s
Few innovations have disrupted the music industry as much as
the advent of digital music files, specifically the MP3. These compact
files could fit easily on CDs and hard drives as data, allowing
enterprising early adopters to cram more music onto older storage media
and opening the way for compact MP3 players. The format also enabled
widespread and unstoppable file sharing and music theft, via early
services like Napster and Kazaa and later via torrenting systems. More
above-board services that use MP3 technology include iTunes, direct
online music sales, and current streaming options like Spotify and
SoundCloud.
Touchscreens
The first touch screen may have been invented in 1965,
but the technology didn’t make its way into widespread commercial usage
until 2007, when Apple released its first iPhone. That opened the
floodgates: future iPods and iPads relied on touch interfaces, as well
as most smartphones following Apple’s lead. Now touch screen technology
appears seemingly anywhere engineers can put it: self-service kiosks at
museums and on subway platforms, video game and gambling machines in
bars, and on restaurant order and checkout tablets.
Wi-Fi
As disruptive technologies go, Wi-Fi may have set a record
for going from “not existing” to “ubiquitous” the fastest. The unlicensed radio spectrum that most Wi-Fi networks use was opened up in the US in 1985, and the Wi-Fi Alliance, which certifies and promotes the technology, was formed in 1999. Though the first 802.11 standard arrived in the 1990s, Wi-Fi really took off in the 2000s, when just about every new laptop
and electronic device was outfitted with it. Around the same time,
advances in wireless broadcasting and routers allowed networks to
support more devices at faster speeds. Wi-Fi made the internet fast and
essential and paved the way for its dominance in our everyday lives.
Bluetooth
In 1994, Swedish telecom company Ericsson hit upon the idea
to wirelessly connect peripheral devices to computers. With Intel,
Nokia, and others also interested in this type of technology, the tech
giants formed a Special Interest Group to ensure universal
compatibility. The code name Bluetooth was suggested by an Intel engineer and refers to the nickname of Harald “Bluetooth” Gormsson, the tenth-century Danish king who united Denmark and Norway under a single crown. The technology is now widespread and connects wireless headsets,
wearables, sensors, peripherals, and a widening array of devices.
Mobile Internet, 4G/LTE
The names 4G and LTE have deceptively simple definitions: 4G
is the fourth generation of the wireless mobile infrastructure that is
used by many of the most recently released smartphones. LTE stands for
“Long-Term Evolution” and is a standard designed “to
bridge the functional data exchange gap between very high data rate
fixed wireless Local Area Networks (LAN) and very high mobility cellular
networks.” These two technologies together helped enable the rise of
smartphones and other mobile devices, thus creating an entirely new
platform for computing, e-commerce, communications, and more.
Deep Learning
Artificial intelligence has been slowly developing for many years, but deep learning
represented a leap forward in machines learning to learn. Deep learning
systems attempt to mimic the human brain via layers and layers of
artificial neurons arranged in a “neural network.”
That original conception has been around for decades, but recent
advances in processing power allow modern deep learning systems to
simulate more neurons than ever. These powerful computers use layers of
artificial neurons to analyze massive volumes of data and learn to
recognize patterns. The systems can then learn from their mistakes and
successes and get better and better at recognizing images, speech, and
even facial expressions, with applications across healthcare, fintech,
customer service, and many other categories.
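To make "layers of artificial neurons" a bit more concrete, here is a toy sketch in Python using only NumPy, with made-up training data: a two-layer network that learns the XOR function by nudging its weights after every mistake. Real deep learning systems work on the same principle, just with vastly more neurons, layers, and data.

```python
import numpy as np

# A tiny two-layer neural network trained on XOR with plain NumPy.
# Each layer is a set of artificial neurons: a weighted sum followed by
# a nonlinear activation, the layered structure described above.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)   # hidden layer: 8 neurons
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)   # output layer: 1 neuron

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(10_000):
    # Forward pass: data flows through the layers.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: learn from mistakes (gradient of squared error).
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= 1.0 * h.T @ d_out; b2 -= 1.0 * d_out.sum(axis=0)
    W1 -= 1.0 * X.T @ d_h;   b1 -= 1.0 * d_h.sum(axis=0)

print(out.round(3))  # should approach [[0], [1], [1], [0]]
```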
Virtual Reality
Long the promise of science fiction and still far from a
widely used consumer technology, VR has nonetheless taken massive
strides of late. The latest generation of smartphones is better equipped to slot into low-cost VR headsets such as Google Cardboard, giving large
swaths of consumers the essential computing device to power VR.
Higher-end options like Samsung Gear, HTC Vive, and Oculus Rift have
also made the VR experience more immersive, for everything from watching
live events to playing video games. Startups like Lucid VR are also
supplying consumers with devices to more easily create VR content,
including a handheld VR camera.
It’s by no means a dominant medium yet, but its use cases seem to be
growing quickly, with interest far beyond entertainment to B2B
applications such as job training, retail floor planning, and healthcare
rehabilitation/training.
Augmented Reality
Augmented reality tech blends digital images and information with the “real world.” Google’s less-than-successful Glass
tried, years ago, to augment users’ realities but fell short. Yelp’s
Monocle feature allows users to peer through their smartphones and see
information about businesses around them, which is cool, but not
life-changing. A watershed moment in augmented reality came in 2016,
when Pokemon Go took the world by storm, filling it with AR monsters
that sent eager fans out on the hunt. High-profile startups like Magic
Leap are betting big on VR and AR taking on larger and larger roles in
an increasingly connected, digitally-backed world.
Low-Priced Sensors
The heart of the Internet of Things is connecting
devices to each other, users, and the world via sensors, allowing users
to access, control, and learn about the performance of these devices
remotely. Low-priced sensors led to a boom in what could suddenly be
connected: HVAC equipment, appliances, and vehicles are just a few of
the earliest places low-cost sensors were employed. Corporates and
startups alike are using information from sensors to help create smarter
electrical grids, develop connected cars, and secure homes, among many
other uses.
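As a rough sketch of what "connecting a device" actually involves, here is a toy Python example that simulates a cheap temperature sensor and packages each reading as JSON, the way a connected thermostat might report to its vendor's cloud. The device name is hypothetical, and a real device would POST each payload to an API rather than print it.

```python
import json, random, time

# A toy "connected sensor": sample a (simulated) temperature reading and
# package it as JSON for a remote service. A real device would read actual
# hardware and send each payload over the network.
def read_temperature_c():
    # Stand-in for reading a real low-cost sensor over I2C or GPIO.
    return round(20 + random.random() * 5, 2)

for _ in range(3):
    payload = {
        "device_id": "thermostat-42",        # hypothetical device name
        "temperature_c": read_temperature_c(),
        "timestamp": time.time(),
    }
    print(json.dumps(payload))               # a real device would POST this
    time.sleep(1)                            # report roughly once per second
```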
Natural Language Processing
Natural language processing,
related to deep learning, is the science of teaching computers to
recognize human language (speech and writing). Early search engines like
Ask Jeeves encouraged users to type their questions using natural
speech as opposed to just searching for key terms related to their
request and hoping for the best. Now, every major tech company’s voice assistant,
including Amazon’s Alexa, Google’s Assistant, Apple’s Siri, and
Microsoft’s Cortana, relies on natural language processing to understand
user requests and return relevant results. And as more and more devices
become connected, the number of things the average consumer can talk to
will only rise.
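As a rough illustration of the first step any of these assistants must take, here is a toy Python sketch (with made-up intents and keywords) that maps a free-form request onto a machine-usable intent. Production systems use statistical and deep learning models rather than keyword overlap, but the basic job of turning language into structure is the same.

```python
import re

# A toy intent matcher: tokenize the request, then pick the intent whose
# keyword set overlaps it the most. The intents and keywords are made up.
INTENTS = {
    "weather": {"weather", "rain", "sunny", "forecast", "temperature"},
    "music":   {"play", "song", "music", "album"},
    "timer":   {"timer", "remind", "alarm", "minutes"},
}

def tokenize(text):
    return set(re.findall(r"[a-z']+", text.lower()))

def classify(text):
    words = tokenize(text)
    best = max(INTENTS, key=lambda name: len(INTENTS[name] & words))
    return best if INTENTS[best] & words else "unknown"

print(classify("Will it rain tomorrow?"))         # weather
print(classify("Play some music from that album"))  # music
```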
Lithium-Ion Batteries
The modern technological revolution is built on the backs
of lithium ion batteries. Consumers are awash in digital devices of all
shapes and sizes and they all need power: from laptops, cell phones,
digital cameras, tablets, and smart watches all the way up to Tesla cars and at-home energy storage systems.
Lithium-ion batteries were disruptive because they enabled the spread of power-hungry devices that could be recharged. Successive
improvements in the technology have increased their storage capacity and
decreased their costs, leading to innovations in electric vehicles and solar energy systems.
Accelerometer Motion Sensor
The accelerometer is the tiny chip inside of a cell phone
or tablet that monitors its position to keep the screen properly
oriented, but this disruptive technology has a wide array of
applications. For starters, it connects the device with the outside
world, telling it what angle it’s at and which way it’s facing.
Accelerometers inside laptops detect sudden falls and help protect their hard drives
by deactivating them before the drop can impact the storage medium.
They are also used in conjunction with GPS systems to figure out if a
user is moving in the right direction and help orient them. Gaming
companies also make use of these tiny devices by allowing players to
shift the phone, tablet, or controller from side to side to control
their digital avatar, especially in racing games. One sector that makes
special use of accelerometers is wearables. Quantified self, fitness,
and health monitoring wearables use accelerometers to measure how many steps users take, how quickly they move, and even if they have an accidental fall.
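For a feel of how wearables turn raw accelerometer readings into step counts, here is a toy Python sketch with made-up sample data: it computes the acceleration magnitude of each reading and counts the peaks that cross a threshold. Real devices apply much more sophisticated filtering, but the idea is the same.

```python
import math

# A toy step counter: wearables sample 3-axis acceleration and count peaks
# in the overall magnitude. The sample readings below are invented.
samples = [  # (x, y, z) in units of g
    (0.0, 0.0, 1.0), (0.1, 0.0, 1.4), (0.0, 0.1, 1.9), (0.0, 0.0, 1.2),
    (0.1, 0.0, 1.0), (0.0, 0.2, 1.8), (0.1, 0.0, 2.0), (0.0, 0.0, 1.1),
]

THRESHOLD = 1.5  # magnitudes above this (in g) count as a step peak
steps, above = 0, False
for x, y, z in samples:
    magnitude = math.sqrt(x * x + y * y + z * z)
    if magnitude > THRESHOLD and not above:
        steps += 1          # rising edge: count one step
        above = True
    elif magnitude <= THRESHOLD:
        above = False       # wait for the next peak

print(steps)  # 2 with the sample data above
```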
3D Printing
At its debut in the 1980s, 3D printing was hailed as an
amazing new way for designers to create models and prototypes quickly
and easily without using larger-scale production methods. The technology
has progressed in leaps and bounds since then, with new printers using
materials of all kinds including precious metals and concrete, and
coming in a variety of sizes. Emerging startups make fabricating custom
metal parts or even printing entire buildings possible. Replacement jawbones have been 3D printed and successfully implanted into people and bioprinting technology may soon allow scientists to create custom-printed organs. 3D-printed vehicles include a car and a robotic plane.
Computer Vision
Computer vision is one of the linchpins making autonomous
cars a reality. Advances in computing power and neural networks (like
the ones used in deep learning systems) have led to computer vision
systems that can process visual data far more accurately by training on large collections of labeled images and matching what they see against those learned categories. More sophisticated cameras also give
computers even more information to use when making their assessments.
Robust systems are now able to recognize faces and even facial expressions
and early self-driving car tech can recognize where obstacles are and
help avoid them. As this technology continues to advance, computer
vision systems will become more sophisticated and see more like humans
do.
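For a feel of what a computer vision pipeline looks like in practice, here is a minimal sketch using OpenCV's bundled Haar-cascade face detector; the image path is a placeholder and the opencv-python package is assumed. This classical detector predates today's deep learning models, but it shows the basic steps: load an image, scan it for a learned pattern, and mark what was found.

```python
import cv2

# A minimal face-detection sketch using OpenCV's bundled Haar cascade.
# "photo.jpg" is a placeholder path; opencv-python must be installed.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

image = cv2.imread("photo.jpg")
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# Scan the image at multiple scales for regions matching the learned pattern.
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
for (x, y, w, h) in faces:
    cv2.rectangle(image, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite("photo_with_faces.jpg", image)
print(f"Found {len(faces)} face(s)")
```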
Multi-Factor Authentication
In an increasingly online and connected world, security is
more important than ever. It's not enough to just have a strong password; multi-factor authentication (MFA) asks the user for multiple pieces of evidence before granting access to an account, making it more difficult for malefactors to gain entry. While not impervious, MFA is offered by many high-profile companies (Facebook, Google, Twitter, Apple) to help make their systems more secure. One of
the most common forms is to have a successful password login trigger an
automated text message to the user’s registered cell phone. The user
then enters the code they’ve been given and can log in. Even if a hacker
was able to beat the user’s password, they’d still have difficulty
accessing the account without also having access to the user’s cell
phone. Now many startups also include MFA on their apps and as biometric
security becomes more widespread, MFA may become the norm, with a
biometric element being one of the factors in the login process.
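The article above describes SMS codes, but the authenticator apps often used as a second factor work on a related idea: both the server and the user's device derive a short-lived code from a shared secret and the current time. Here is a minimal Python sketch of that time-based one-time password scheme; the secret below is a made-up example.

```python
import base64, hashlib, hmac, struct, time

# A minimal time-based one-time code (RFC 6238 style). The same secret is
# shared with the user's device at enrollment; both sides can then compute
# the same 6-digit code for the current 30-second window.
SECRET = base64.b32decode("JBSWY3DPEHPK3PXP")  # example secret, not a real one

def totp(secret, interval=30, digits=6):
    counter = int(time.time()) // interval          # current time window
    msg = struct.pack(">Q", counter)
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

print(totp(SECRET))  # the server compares this against what the user types
```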
Biometric Cybersecurity, Fingerprint Scanner
Here’s another disruptive technology that has long been the
province of science fiction: just press your finger on a surface and
have the system recognize you and grant access. Cheaper sensors and more
sophisticated phones and computers mean that more and more of these
devices are coming standard with the ability to scan, store, and
recognize a user’s unique fingerprints or other biometric data; in
Google’s Pixel, a fingerprint scanner operates as a one-touch feature
for unlocking the phone and specific apps and making payments.
Cybersecurity experts warn that fingerprint scanners are far from foolproof, though as part of an MFA system they can be helpful. Additionally, some courts have ruled that users can be compelled to unlock their phones
with a fingerprint, since biometric data isn’t protected by the
right against self-incrimination the way a PIN is. Other recent biometric security features
with disruptive potential include behavioral systems designed to
recognize the way users typically do things online (browsing speed,
typing cadence, etc.) and detect when these behaviors differ, indicating
that the account has been compromised.
Blockchain
Blockchain is a powerful force with the potential to
minimize fraud because its distributed ledger system and consensus
requirement create a record of a given transaction that’s stored in
multiple places and cannot be easily tampered with. This system, with no
third party needed to verify a transaction (because it’s secure and
replicated across numerous computers) has been referred to as “trustless.”
This is emerging as a major disruptive force in financial services and
other asset-related industries where a third party like a bank or real
estate company is typically called upon to verify a transaction between
two parties (like a stock trade or real estate sale); blockchain makes
these third parties less necessary while keeping transactions secure.
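The tamper-resistance comes from a simple structure: every block includes a cryptographic hash of the block before it, so changing any past transaction invalidates everything that follows. Here is a toy Python sketch of that chain, with made-up transactions and no mining, networking, or consensus.

```python
import hashlib, json, time

# A toy blockchain: each block stores the hash of the previous block, so
# tampering with any past transaction breaks every later link in the chain.
def make_block(transactions, prev_hash):
    block = {
        "timestamp": time.time(),
        "transactions": transactions,
        "prev_hash": prev_hash,
    }
    body = json.dumps(block, sort_keys=True).encode()
    block["hash"] = hashlib.sha256(body).hexdigest()
    return block

genesis = make_block([], prev_hash="0" * 64)
block1 = make_block([{"from": "alice", "to": "bob", "amount": 5}], genesis["hash"])
block2 = make_block([{"from": "bob", "to": "carol", "amount": 2}], block1["hash"])

def verify(chain):
    # Recompute each block's hash and check it matches the next block's link.
    for prev, curr in zip(chain, chain[1:]):
        body = {k: v for k, v in curr.items() if k != "hash"}
        recomputed = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if curr["hash"] != recomputed or curr["prev_hash"] != prev["hash"]:
            return False
    return True

print(verify([genesis, block1, block2]))  # True, until someone edits a block
```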
Mobile Messaging
When texting first came to phones (during the era of keypad
phones), it was a chore. Early adopters’ thumbs learned the keypads of
these primitive phones by feel and mastered multi-tap and T9
(“Text on 9 Keys”) input to send texts rapidly. Successive phones added
full keyboards and eventually the fully featured touchscreen. Once texting hit the smartphone age, it really changed the way many people use their phones. The length and frequency of voice calls dropped way off
and even email changed, becoming seen as more the province of business.
Social media messaging made simple non-voice communication even more
powerful, with Facebook’s Messenger and WhatsApp taking the texting
framework and creating a faster and more flexible user experience. Then,
when Snapchat created its self-destructing messages, it added a layer
of urgency to instant communication. Now messaging in its many forms —
via social media, encrypted apps, or through text — is a primary means
people use to communicate, emojis and all.
Search Engines, Google Pagerank
The early days of online searching were a crapshoot: you waited for the dial-up process to finish, poked around Lycos or Yahoo! or AltaVista, and hoped for the best. Users had no clear idea how powerful a search engine could be until Google and its PageRank system came along. Named after co-founder Larry Page, PageRank was designed to
determine how important a site is,
based on the number of links to it (the idea being that more important
websites have more links to them), among other factors. The system powered what became a best-in-class search engine, one so widely used that “to google” something has become a verb.
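The core idea can be shown in a few lines: each page's score is repeatedly shared out across the pages it links to, so pages with many (and important) inbound links end up with higher scores. Here is a toy Python sketch with a made-up four-page link graph; the real algorithm layers many refinements on top of this.

```python
# A toy PageRank: spread each page's score across its outbound links,
# then iterate until the scores settle. The link graph is invented.
links = {
    "home":  ["about", "blog"],
    "about": ["home"],
    "blog":  ["home", "about"],
    "press": ["home"],
}

DAMPING = 0.85                      # classic damping factor
pages = list(links)
rank = {p: 1.0 / len(pages) for p in pages}

for _ in range(50):
    new_rank = {p: (1 - DAMPING) / len(pages) for p in pages}
    for page, outgoing in links.items():
        share = rank[page] / len(outgoing)
        for target in outgoing:
            new_rank[target] += DAMPING * share
    rank = new_rank

for page, score in sorted(rank.items(), key=lambda kv: -kv[1]):
    print(f"{page}: {score:.3f}")   # pages with more inbound links rank higher
```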
Genomics, CRISPR
Maybe one of the least well-known tech revolutions on this
list is the CRISPR gene-editing system. In the 1980s, the invention of a
gene-editing process allowed researchers to make changes in the DNA of
certain organisms. The process was long, slow, and costly; CRISPR
represented a massive jump forward in gene editing, making it faster,
easier, and less expensive. According to researcher James Haber,
“[It] effectively democratized the technology so that everyone is using
it. It’s a huge revolution.” Researchers are editing crops to make them hardier and more productive, engineering treatments for dangerous diseases, and more. Patents mentioning the technology and funding to
companies working with it have skyrocketed.
Big Data, Hadoop
A couple of decades of online
activity and vast new streams of data coming from IoT and mobile devices
created a massive pool of data that companies knew had value, if only
they could manage and analyze it. Enter Apache Hadoop:
this open-source framework, first released in 2006, helps organizations manage and process massive data sets across clusters of computers in parallel. Hadoop has allowed large companies like Danske Bank to analyze the massive amounts of transactional data that the
150-year-old bank has acquired. Companies and consultants are using big
data and in-depth analysis to spot market trends and customer
preferences. Hospitals are even using analysis
of their data stores to improve care by spotting risk patterns and
analyzing relationships between the care provided and outcomes recorded
and adjusting care accordingly.
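Hadoop's programming model, MapReduce, boils down to two functions: a mapper that turns each record into key/value pairs and a reducer that combines all the values for a key. Here is a toy, single-process Python sketch of that shape, using made-up transaction records; Hadoop's contribution is running the same pattern across thousands of machines at once.

```python
from collections import defaultdict

# MapReduce in miniature: map each record to key/value pairs, shuffle the
# pairs by key, then reduce each key's values. A cluster does the same thing
# in parallel; this single-process version just shows the shape.
records = [
    "checking deposit 120",
    "savings withdrawal 40",
    "checking withdrawal 75",
    "checking deposit 10",
]

def map_phase(record):
    account, _kind, amount = record.split()
    yield account, int(amount)      # one key/value pair per transaction

def reduce_phase(key, values):
    return key, sum(values)         # total activity per account

shuffled = defaultdict(list)
for record in records:
    for key, value in map_phase(record):
        shuffled[key].append(value)

results = [reduce_phase(k, v) for k, v in shuffled.items()]
print(results)  # [('checking', 205), ('savings', 40)]
```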
Robots, Amazon Kiva
Robotics research has been chugging along steadily for
decades, bringing automation to factories and other industrial
locations. The Jetsons’ dream of a personal robot may still be far off
(though maybe not as far as we think, based on the work some startups are doing),
but one noteworthy advancement that recently hit warehouse floors has
been Amazon’s Kiva robot system. Founded in 2003 by an alum of Webvan (a failed online grocery delivery startup),
Kiva was designed from the outset to change the way warehouses worked.
Apparently it has, because Amazon acquired the company for $775M in 2012
and immediately put the robots to work in their facilities. These
stout, wheeled robots are only about 16 inches tall, but weigh over 300
pounds. They work by hauling shelving pods of inventory weighing up to 700 pounds along computer-optimized routes and have dramatically improved efficiency
in the facilities where they’ve been deployed. They’re an amazing
example of a disruptive technology taking an unexpected form. Instead of
human-shaped robots doing activities the way a human would (with hands,
etc.) these bots scoot around narrow warehouse corridors with more
speed and efficiency than a human could achieve and have helped drive
the e-commerce giant’s delivery costs and times down even further.
Mobile Apps, App Stores
In an era where there’s literally an app for everything, it
can be hard to think back to a time when there just…wasn’t. But as soon
as there were apps, there were app stores: clearinghouses for the
menagerie of brightly colored icons that now occupy countless phone and
tablet screens. The first one in existence, Handango,
actually predates the rise of the iPhone and smartphones in general.
App stores didn’t really become disruptive until Apple’s App Store hit the market in 2008 to serve its game-changing iPhone and, later, the iPad. They
gave app developers a place to host their creations and get access to
an audience, to the point where reviews and advertising on app stores
can be instrumental to an app’s success. Now all major platforms have
their own native app store.
RFID Tags
RFID tags, like the Internet of Things, bring the digital
and the real worlds together. These tiny devices can be almost as small
as a grain of rice and allow electronic systems to passively track the
location of the tag. While the first patent referring to the term “RFID”
was filed in 1983, it’s only recently that these devices have become
widespread. E-ZPass and other automated toll-payment devices use RFID
tech to allow cars to move through toll plazas without stopping. It’s
become increasingly common to have pets that might escape to the outside
world tagged with RFID chips that can identify the animal even when a
collar or physical tag has become lost. These tiny chips also appear in
countless security badges and allow people through access points without
keys or ID checks. One of the primary places where RFID tags have become popular is
retail. From pallet- to item-level tracking, RFID tags allow retailers
to know what they have in stock and where it’s located, as well as
adding a layer of security since taking a tagged item out of a store can
trigger an alarm.
JavaScript
Recent changes to JavaScript have made it one of the most disruptive ways to code. The Node.js runtime allowed developers to use the language on more platforms than before, notably for server-side development. One reason for this sudden rise to prominence is that “Node.js and the browser are both fluent JavaScript speakers.” JavaScript has also proven instrumental in cloud-computing solutions
and can even be used to make mobile and desktop apps, build 3D-rendered
and VR games, program the underlying systems for bots, and interact
with IoT devices. It makes sense, then, that a recent Stack Overflow survey found that roughly 85% of full-stack developers use JavaScript; it was the most popular language for both front- and back-end programmers.
Virtualization
Virtualization offers users increased flexibility by
untethering the software that can be run on a machine from its physical
hardware. Though in use since the 1960s, virtualization is another technology that has recently become even more disruptive. The company VMware was originally founded to bring virtualization to PCs, then servers, finally launching VMware Server for full server virtualization in 2006. These programs
and others like them allow IT departments the freedom to run more apps
and environments on fewer machines.
Containerization
Containerization and virtualization are very much cousins.
While virtualization laid the groundwork for telling hardware what
software system to emulate, containerization creates an even smaller
encapsulated operating environment. It’s a less resource-intensive
alternative to full virtualization. Containerization really took off with the release of the open-source tool Docker, which helps users
implement applications as “portable, self-sufficient containers that
can run virtually anywhere on any type of server.” Containerized
instances of a given operating system can occur on remote cloud servers
and then be rapidly shut down and the resources reallocated. This is a
big step forward for cloud computing, making it more efficient for users and potentially more cost-effective as well, since providers can “provision resources that are much more closely matched to demand on a minute-by-minute basis.”
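As a small illustration of how lightweight containers are to spin up, here is a sketch using the Docker SDK for Python; it assumes a Docker Engine is running locally and the docker package is installed. One call pulls a tiny Linux image, runs a command in an isolated container, and cleans up afterward.

```python
import docker

# A minimal containerization sketch using the Docker SDK for Python
# (assumes a local Docker Engine and the `docker` package). Each run()
# starts an isolated container, executes a command, and tears it down.
client = docker.from_env()

output = client.containers.run(
    "alpine:latest",                   # a tiny Linux image from the registry
    "echo hello from a container",
    remove=True,                       # delete the container once it exits
)
print(output.decode().strip())
```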
LIDAR
While LIDAR speed guns have been in use by law enforcement for many years, the technology's disruptive potential has spread to countless other industries. LIDAR determines the distance to objects by sending out a laser pulse and recording how long it takes to bounce back. LIDAR systems have given surveyors a much more efficient way to map large areas quickly, handling a single intersection in minutes rather than hours and returning thousands of detailed data points.
This, combined with sophisticated GPS systems, has led to a sudden surge
in the availability of high-quality maps and navigation programs like
Google Maps. It could also be instrumental in making truly autonomous
vehicles a reality. MIT has even developed a “LIDAR on a chip” that is set to cause even more disruptions in 3D scanning.
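The underlying measurement is simple enough to fit in a few lines: a pulse travels to the target and back at the speed of light, so distance is half the round-trip time multiplied by that speed. A quick Python illustration with a made-up timing value:

```python
# The core LIDAR calculation: a laser pulse travels out and back, so
# distance = (speed of light x round-trip time) / 2.
SPEED_OF_LIGHT = 299_792_458  # meters per second

def lidar_distance(round_trip_seconds):
    return SPEED_OF_LIGHT * round_trip_seconds / 2

# A return pulse detected 200 nanoseconds after firing:
print(f"{lidar_distance(200e-9):.2f} m")  # ~29.98 m
```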
World Wide Web
So many disruptions on this list are reliant on the
internet in some form or another that it can be easy to forget how
disruptive the world wide web actually was when it first debuted.
Massive amounts of data, instantly available to schools, institutions,
companies, and most importantly, in people’s homes. Though the concept of the “internet” is older than the WWW, the web itself was conceived in 1989 by Sir Tim Berners-Lee, a British computer researcher then
working at CERN in Switzerland. He conceived of three technologies that
still underlie much of the WWW today: HTML, URI (also called a URL), and
HTTP. Berners-Lee’s original proposal to develop the Web wasn’t
immediately picked up; his manager actually put the note “Vague but exciting” on the cover of the proposal.
The technology to support the Web was constructed by 1990,
but it wasn’t until January of 1991 that servers outside CERN were
activated. From there, adoption moved slowly and steadily throughout the
industrialized world, with users purchasing clunky desktops wired with
dial-up modems. Once it started catching on and home ownership of
WWW-capable machines became the norm, the technology spread rapidly, reaching a quarter of American consumers in just 7 years.
Faster download speeds were demanded by consumers, telecoms struggled
to keep up, mobile internet became a thing, and the interconnectedness
of the whole world as revealed by the WWW has fundamentally changed how
we live and communicate. In 2011, a UN report declared access to the internet a human right.
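Those three building blocks are still visible in every request a browser makes: a URL names the resource, HTTP fetches it, and HTML comes back for rendering. Here is a minimal Python illustration using only the standard library; example.com is a domain reserved for exactly this kind of demonstration.

```python
from urllib.request import urlopen

# One web request, showing all three of Berners-Lee's building blocks:
# a URL names the resource, HTTP fetches it, and the body is HTML.
with urlopen("https://example.com/") as response:
    print(response.status)                 # e.g. 200 (OK)
    html = response.read().decode("utf-8")
    print(html[:80])                       # the start of the HTML document
```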
Linux
Long the darling of a certain breed of tech devotee, Linux is disruptive on a couple of levels. From its humble beginnings as a fairly unwieldy operating system, it has grown into countless free Linux desktop distributions that make handling day-to-day tasks like email and office suite work (making docs, decks, spreadsheets, etc.) just as simple as on Windows and Mac machines. Linux distributions also pioneered their own version of the app store phenomenon, with centralized software repositories that put all downloadable packages in one spot. This way of adding new functions to a Linux system has also helped keep it free of much of the malware that plagues PCs.
Meanwhile, Linux has long been the leading operating system on servers and has recently been creeping into more and more applications at major corporations. IBM’s Watson makes use of the Apache Hadoop framework, which was developed on Linux, and runs on the SUSE Linux Enterprise Server 11 operating system; IBM has also been contributing code to Linux projects for years. And no mention of Linux’s disruption in the modern tech world would be complete without noting that it’s the code that underlies the Android phone OS, which currently holds about 90% of the smartphone market. While we may never see Linux dominating personal computing, it’s all around us nonetheless, quietly disrupting the world.