Birth of IBM - 100 years ago
Originally established in 1911 under the name Computing-Tabulating-Recording Company (CTR), IBM decided to shake things up in 1924 and rebranded to the powerhouse we know today. With a name that boldly declares its vision, IBM, short for International Business Machines, spent its formative years dominating the market, with products ranging from electric typewriters to electromechanical calculators (IBM created the first subtracting calculator) and personal computers. IBM also played a pivotal role in the development of many tech innovations, including the automated teller machine (ATM), the SQL programming language, the floppy disk, and the hard disk drive.
At its peak, the IBM mainframe stood as the unrivalled computing powerhouse, and the brand boasted an impressive 80% market share of computers in the U.S. Yet, by 1992, IBM's market share had plummeted to a concerning 20%.
The root cause of IBM's decline can be traced back to the intense competition it faced in the personal computer market. Despite creating an impressive PC, with features like 16 KB of RAM, a 16-bit CPU, and two floppy drives, IBM failed to foresee the soaring demand for personal computers. This miscalculation allowed rivals such as Apple and Epson to enter the scene, offering cheaper alternatives and creating a market where IBM was no longer in control.
As more competitors flooded the market, prices began to drop, and IBM found itself struggling to dictate market terms. Although IBM maintained high-quality machines, their premium pricing strategy contributed to diminished profits and a decline in market share.
A change in leadership, coupled with strategic cost-cutting measures and a keen focus on consumer feedback, revitalized IBM. Although it may not be leading the pack as it did in its glory years, it continues to innovate and now stands as a major advocate for the adoption and responsible governance of Artificial Intelligence (AI).
The oldest surviving programming language - 70 years ago
Where would we be today if John Backus, along with his team of programmers at IBM, had not invented the first high-level programming language back in 1954?
FORTRAN, short for Formula Translation, was a clear departure from the assembly languages of the day, but it shared their purpose and some of their conventions, as it was largely developed as an (arguably elevated) alternative for programming mainframe computers.
The first FORTRAN program ran on September 20, 1954, but the language wasn't commercially released until April 1957, because customers elected to wait until its first optimizing compiler was made available.
Now, why did the lack of a compiler seem like a deal breaker to users at the time? A compiler is a special program that translates source code, written in a human-readable language, into machine code or another lower-level language, making it possible to actually run the program on a given platform.
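Since a FORTRAN toolchain isn't something most readers have at hand, here's a minimal Python illustration of the same idea: the built-in compile() function translates human-readable source into bytecode, a lower-level representation that the interpreter's virtual machine executes.

```python
import dis

# A tiny arithmetic formula, written as human-readable source code.
source = "result = (3 + 4) * 2"

# compile() translates the source into bytecode -- the same
# source-to-target translation step a compiler performs.
code = compile(source, filename="<formula>", mode="exec")

# Run the compiled code and inspect the result.
namespace = {}
exec(code, namespace)
print(namespace["result"])  # 14

# dis shows the low-level instructions the source was translated into.
dis.dis(code)
```

Without that translation step, the formula above is just text; with it, the machine can execute the computation, which is exactly the convenience early FORTRAN customers were waiting for.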
And so, within three years of its inception, FORTRAN gained traction as the dominant language for applications in science and engineering. It produced fast code and was far easier to use, reducing the number of programming statements needed by a factor of 20 compared with writing machine operations by hand.
FORTRAN is not the only major programming language available today, but it is the oldest surviving one. Not only that, it remains popular among users after many updates and iterations over the years. What makes it a great language is that, despite the new features included in each update, it preserves compatibility with older versions. Its 70 years of trusted performance keep it competitive with contemporary languages.
As a matter of fact, FORTRAN 2023 was released just last year, replacing the 2018 standard, with the next standard rumoured to already be in development - so whoever said you can't teach an old dog new tricks has clearly never met FORTRAN and the team behind it.
Lo (and Behold!), the first message on the Internet - 55 years ago
Over five decades ago, within the walls of UCLA's Boelter Hall, a cadre of computer scientists achieved a historic milestone by establishing the world's first network connection as part of the Advanced Research Projects Agency Network (ARPANET) initiative.
Conceived in the late 1960s, ARPANET aimed to improve communication and resource sharing among researchers and scientists who were spread across different locations. Under the guidance of the esteemed computer science professor, Leonard Kleinrock, the team successfully transmitted the first message over the ARPANET from UCLA to the Stanford Research Institute, situated hundreds of miles to the north.
And if, by now, you’re still unsure why this is so cool, it’s thanks to this team that we can now enjoy the perks of working from home.
The ARPANET initiative, spearheaded by the United States Department of Defense's Advanced Research Projects Agency (ARPA, now DARPA), aimed to create a resilient and decentralized network capable of withstanding partial outages. This initiative laid the groundwork for the rather dependable (on good days!) modern communication systems we enjoy today.
Amidst the astronomic graveyard of unsuccessful messages, which lone envoy emerged victorious, you might ask?
The first host-to-host message took flight at 10:30 p.m. on October 29, 1969, as one of the programmers, Charley Kline, tried to "login" to the SRI host from the UCLA host. According to Kleinrock, the grand plan was to beam the mighty "LOGIN" into the digital ether, but a cheeky little system crash cut off the dispatch, and the message received was "LO" as if the cosmos itself whispered, "Lo and behold!"
“We hadn’t prepared a special message[...] but our ‘lo’ could not have been a more succinct, a more powerful or a more prophetic message,” Kleinrock said.
The birth of the Internet - 55 years ago
Let's time-travel to November 21, 1969, when the Internet took its baby steps. Just a smidge after the first test message was sent, the first permanent link on the ARPANET was established between the Interface Message Processor (IMP) at UCLA and the IMP at the Stanford Research Institute, marking the genesis of what we now recognize as the Internet.
According to the Computer History Museum, the initial four-node network was finally operational by December 5, 1969, and it employed three techniques developed by ARPANET:
Packet-switching: fundamentally, packet-switching is how we send information in computer networks. Rather than sending data as a whole, it is fragmented into smaller components known as "packets". Each packet has some data and details like where it's from, where it's going, and how to put it back together. This helps make data transmission more efficient and adaptable.
Flow-control: flow control manages how fast data travels between devices to keep communication efficient and reliable. It prevents a fast sender from overwhelming a slower receiver, preventing data loss or system congestion.
Fault-tolerance: fault tolerance refers to how well a system can tolerate problems like hardware failures or software errors and ensure that they don't cause the whole system to break down.
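To make the first of those techniques concrete, here is a toy Python sketch of packet-switching. The function names and packet fields are illustrative, not ARPANET's actual IMP software: a message is fragmented into addressed, sequenced packets that can be reassembled correctly even if the network delivers them out of order.

```python
# A toy sketch of packet-switching: fragment a message into packets,
# deliver them in any order, then reassemble from sequence numbers.

def to_packets(message: str, size: int, src: str, dst: str):
    """Fragment a message into packets carrying addressing and sequencing."""
    chunks = [message[i:i + size] for i in range(0, len(message), size)]
    return [
        {"src": src, "dst": dst, "seq": n, "total": len(chunks), "data": chunk}
        for n, chunk in enumerate(chunks)
    ]

def reassemble(packets):
    """Put the message back together using each packet's sequence number."""
    ordered = sorted(packets, key=lambda p: p["seq"])
    assert len(ordered) == ordered[0]["total"], "packet loss detected"
    return "".join(p["data"] for p in ordered)

packets = to_packets("LOGIN", size=2, src="UCLA", dst="SRI")
# The network may deliver packets in any order...
shuffled = list(reversed(packets))
print(reassemble(shuffled))  # LOGIN
```

Because each packet carries its own routing and ordering information, no single path or fixed circuit is needed, which is what makes the scheme both efficient and resilient.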
However, the Internet didn't claim its official birthday until January 1, 1983. Before this date, the digital realm existed in a state of linguistic chaos, with various computer networks lacking a standardized means of communication. It was not until the introduction of the Transmission Control Protocol/Internet Protocol (TCP/IP), which served as a universal communication protocol, that these disparate networks finally interconnected seamlessly and birthed the global Internet we now know and love.
The term "internetted," meaning interconnected or interwoven, was in use as far back as 1849. In 1945, the United States War Department referenced "Internet" in a radio operator's manual, and by 1974, it became the shorthand form for "Internetwork."
The laser printer remains one of the most popular types of printers even 55 years later
The written word has been essential for information dissemination and integral to societal development since civilization moved from oral tradition. It has helped make knowledge permanent and has documented ideas, discoveries, and changes throughout history. Our ancestors may have started with manual writing, but we have long since passed that era of making copies of documents by hand with the advent of the printer.
Printers have helped make human lives easier since 1450, when Johannes Gutenberg invented the first printer, the Gutenberg press. As the name suggests, pressure is applied to an inked surface to transfer the ink to the print medium. Printing presses are now rarely used outside traditional prints and art pieces, and printing itself has long since moved from black and white only to vibrant color.
If you’ve ever wondered how “xerox” became a generic trademark in some parts of the world, here’s why:
The Xerox Palo Alto Research Center made its mark in history because this is where Gary Starkweather, a product development engineer at the time, had the brilliant idea of using light, particularly a laser beam, to “draw” copies or make duplicates. After developing the prototype, he collaborated with two other colleagues, Butler Lampson and Ronald Rider, to move past making copies to creating new outputs. They called it EARS, short for Ethernet, Alto Research character generator, Scanned laser output terminal, and it became the Xerox 9700, the first laser printer. The work began in 1969 and the prototype was completed by November 1971, but it wasn't until 1978 that the printer was released to the market.
The laser printer has evolved from its first version, but more than that, other types of printers also came to fruition. Now, when you talk about printers, most people probably have two prevailing types coming to mind: the aforementioned laser printer and the inkjet printer, which came later and was introduced in 1984. There may be several differences between these two, but there is no doubt that they not only remain popular options despite being 55 and 40 years old, respectively, but also continue to improve upon their previous versions to accommodate evolving consumer needs in documentation and information dissemination.
The first barcode and its scanner - 75 years ago
Have you ever wondered how checkout counters came to be? Or how our generation achieved the ease of scanning QR codes on our phones to access, well, pretty much anything on the internet?
This all dates back to October 20, 1949, when Norman Joseph Woodland and Bernard Silver first invented the barcode and, eventually, its associated scanner. The barcode is a series of different shapes and forms containing data that is not readily readable by the naked eye.
It may have begun as a series of lines and striations of varying thickness forming a round symbol, but it would later give rise to George Laurer's barcode, what we now refer to as the universal product code, or UPC. The UPC is still widely used to store product information, especially for supermarkets and retailers, which need to catalog thousands of goods for tracking and inventory. On top of these features, it is said that the barcode's greatest value to business and industry lies in how it has shaped market research by presenting hard statistical evidence for what's marketable: what sells and what does not.
That said, it is important to note that the scale at which the UPC and other codes are used now would not have been possible without the means to access the data. This is where the scanner comes in as a device that uses fixed light and a photosensor to transfer input into an external application capable of decoding said data.
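As a concrete taste of how a UPC encodes machine-verifiable data, the twelfth digit of a UPC-A code is a check digit computed from the first eleven, so a scanner can catch misreads. A minimal Python sketch (the function name is ours):

```python
def upc_check_digit(first11: str) -> int:
    """Compute the 12th (check) digit of a UPC-A code from its first 11 digits."""
    digits = [int(d) for d in first11]
    # Digits in odd positions (1st, 3rd, ...) are weighted 3;
    # digits in even positions are weighted 1.
    total = sum(d * 3 for d in digits[0::2]) + sum(digits[1::2])
    return (10 - total % 10) % 10

# Standard worked example: the full code becomes 036000291452.
print(upc_check_digit("03600029145"))  # 2
```

If a smudged bar makes one digit scan incorrectly, the recomputed check digit almost certainly won't match the printed one, and the scanner rejects the read instead of ringing up the wrong product.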
On June 26, 1974, the Marsh Supermarket in Troy, Ohio made history when a pack of Wrigley’s Juicy Fruit chewing gum became the very first UPC scanned. Imagine testing out a piece of equipment that’s pivotal in modern American history and the first thing you reach for is a pack of gum. Iconic.
To this day, the town continues to celebrate this historic event every few years. With its 50th anniversary approaching, you can check out (heh) the supermarket scanner among the collections at the Smithsonian.
50th anniversary of Altair 8800
The Altair 8800 was a groundbreaking milestone that shaped the landscape of personal computing and laid the foundation for tech innovations benefiting B2B enterprises today.
For those who didn’t take Computer History 101, the Altair 8800 was the first commercially successful personal computer and served as a powerful driver for the microcomputer revolution in the 1970s.
The Altair 8800 is a microcomputer created in 1974 by the American electronics company MITS or Micro Instrumentation and Telemetry Systems. It was powered by the Intel 8080 CPU and gained rapid popularity after being featured on the January 1975 cover of Popular Electronics. It introduced the widely adopted S-100 bus standard, and its first programming language, Altair BASIC, marked the beginning of Microsoft's journey.
It was developed by Henry Edward Roberts while he was serving at the Air Force Weapons Laboratory. After founding MITS with his colleagues in 1969, the team started selling radio transmitters and instruments for model rockets with limited success. From that point onward, MITS diversified its product line to target electronics hobbyists, introducing kits such as one for voice transmission over an LED light beam, an IC test equipment kit, and a mildly successful calculator kit, initially priced at $175, or $275 when assembled.
Their big break finally came with the Altair when they marketed it as “The World’s Most Inexpensive BASIC language system” on the front cover of Popular Electronics in the August 1975 issue. At that time, the kit was priced at $439, while the pre-assembled computer was available for $621.
Originally named PE-8, the Altair earned its more captivating moniker, "Altair," courtesy of Les Solomon of Popular Electronics. The suggestion came from his daughter, inspired by the Star Trek crew's destination that week. A great reference, considering the then sci-fi-esque Altair "minicomputer" was devoid of a keyboard and monitor, and relied on switches for input and flashing lights for output.
Microsoft BASIC was released 45 years ago
By the late 1970s, Microsoft, a budding tech giant, was poised to propel itself to new heights. It started 1979 by bidding farewell to its Albuquerque, New Mexico headquarters and establishing its new home in Bellevue, Washington on January 1.
Three months later, Microsoft unveiled the M6800 version of Microsoft Beginner's All-purpose Symbolic Instruction Code (BASIC). This programming language, developed by Microsoft, had already gained notable recognition for its role in the Altair 8800, one of the first personal computers. Over the years, Microsoft BASIC solidified its role as a cornerstone in the success of many personal computers, playing a vital part in the development of early models from renowned companies such as Apple and IBM.
But the year was just getting started for Microsoft. On April 4, 1979, the 8080 version of Microsoft BASIC became the first microprocessor software product to receive the esteemed ICP Million Dollar Award, catapulting Microsoft into the limelight. Later that year, Microsoft extended its global footprint by adding Vector Microsoft, located in Haasrode, Belgium, as a new representative, signalling Microsoft’s entry into the broader international market.
Founded by Bill Gates and Paul Allen on April 4, 1975, Microsoft's initial mission was to create and distribute BASIC interpreters for the Altair 8800.
First Killer App, VisiCalc, was released on October 17, 1979
On October 17, 1979, the first killer app was released. VisiCalc, or "visible calculator," was the first spreadsheet computer program for personal computers. It’s the precursor to modern spreadsheet software, including the app we love to hate, Excel.
While the term "killer app" may sound like a phrase straight out of the Gen Z lexicon, its origins trace back to the 1980s during the rise of personal computers and the burgeoning software market. Coined during the era of emerging technology, the term "killer app" refers to a software application so ground breaking and indispensable that it becomes the reason for consumers to buy the specific technology platform or device that hosts it.
Due to VisiCalc’s exclusive debut on the Apple II for the first 12 months, users initially forked out $100 (equivalent to $400 in 2022) for the software, followed by an additional commitment ranging from $2,000 to $10,000 (equivalent to $8,000 to $40,000) for the Apple II with 32K of random-access memory (RAM) needed to run it. Talk about a killer sales plan.
What made the app a killer, you wonder? VisiCalc was the first spreadsheet with the ability to instantly recalculate rows and columns. Sha-zam! It gained immense popularity among CPAs and accountants, and according to VisiCalc developer Dan Bricklin, it was able to reduce the workload for some individuals from a staggering 20 hours per week to a mere 15 minutes.
By 1982, the cost of VisiCalc had surged from $100 to $250 (equivalent to $760 in 2022). Following its initial release, a wave of spreadsheet clones flooded the market, with notable contenders like SuperCalc and Multiplan emerging. Its decline started in 1983 due to slow updates and lack of innovation, and its reign was eventually eclipsed by Lotus 1-2-3, only to be later surpassed by the undisputed world dominator, Microsoft Excel.
First wireless mouse - 40 years ago
In 1984, Logitech threw down the gauntlet in the tech peripherals arena by dropping the mic – or rather, the cord – with the world's first wireless mouse. No more cable chaos, Logitech was set to give the world of computing freedom.
The breakthrough came with Logitech's use of infrared (IR) light, connecting the mouse to the Metaphor Computer Systems workstation, a machine created by former Xerox PARC engineers David Liddle and Donald Massaro. This wireless setup initially required a clear line of sight, limiting its effectiveness on cluttered desks.
However, Logitech continued to push boundaries by introducing the first radio frequency-based mouse, the Cordless MouseMan, in 1991. This innovation eliminated the line-of-sight constraint of the earlier version, setting the stage for efficient, clutter-free, and Pinterest-worthy desk setups.
Beyond snipping cords, the Cordless MouseMan was Logitech's adieu to the brick-like vestiges of the past. It was the first mouse to look just like the modern versions we now effortlessly glide across our desks. Plus, it even came with an ergonomic thumb rest, a first of its kind.
Logitech began selling mice in 1982, and its commitment to catering to a wide range of user preferences solidified its position as a leader in the mouse market today. Originally established to design word processing software for a major Swiss company, it quickly and fortuitously pivoted to developing the computer mouse in response to a request from a Japanese company. It is known as Logitech globally, derived from logiciel, the French word for software, except in Japan, where it is known as Logicool.
Logitech remains a powerhouse in the mouse market, showcasing its prowess with a diverse lineup. Among its offerings, the MX lineup stands as a crowd favorite, renowned for its ergonomic design and functionality. On the minimalist end, there’s the chic and compact Pebble Mouse 2, and for gamers seeking top-tier performance, there’s the Logitech G Pro X Superlight 2 Lightspeed.
40 years of GNU and software freedom, yes or yes?
If you’ve ever heard of the operating system called GNU, there’s a high chance you’ve heard it called Linux or Unix. It is neither.
The GNU Project started on the 5th of January 1984. Interestingly, GNU, pronounced g’noo in a single syllable, much like how one would say “grew,” stands for “GNU’s Not Unix,” yet it is often likened to Unix. Why? While the project had its own collection of programs covering the essential toolkit, the system was unfinished and so relied on Unix components to fill in the gaps.
In particular, GNU was missing a kernel, so it is typically used with the Linux kernel, leading to the GNU/Linux confusion. The Linux kernel remains in use, but not because GNU never built its own: development of its kernel, the GNU Hurd, began in 1990, before Linux appeared, yet it is unfinished and continues to be worked on as a technical project courtesy of a group of volunteers.
It proudly remains 100% free software, in which software freedom functions similarly to freedom of speech and freedom of expression, rather than merely meaning free of cost. Its open-source nature means users are free to run, copy, distribute, change, and improve upon the existing software.
At a more extreme level, some supporters argue that the GNU made it possible for anyone to freely use a computer. Whether you agree with or believe that statement is up to you. Regardless, the GNU maintains this stance of freedom as it advocates for education through free software by ensuring that the system is not proprietary and therefore can be studied by anyone willing to learn from it. It also puts users in control of their own computing and, in theory, makes them responsible for their own software decisions.
That said, GNU’s software freedom can incite one of two perspectives from users: potential or pressure (or both). Which one is it for you?