When the Internet Was Scary Too
Everything people fear about AI today, they feared about the internet in 1995 — and understanding that pattern might be the most useful thing you can do right now.
On February 27, 1995, Newsweek published an article by astronomer Clifford Stoll titled "The Internet? Bah!" It opened with this sentence: "After two decades online, I'm perplexed." Stoll proceeded to explain why the internet was overhyped, why online databases would never replace daily newspapers, why e-commerce was a fantasy, and why no one would ever want to read a book on a screen.
"The truth is," he wrote, "no online database will replace your daily newspaper, no CD-ROM can take the place of a competent teacher and no computer network will change the way government works."
Every single prediction was wrong. Spectacularly, comprehensively, laughably wrong. And Stoll wasn't a technophobe — he was one of the internet's early users, a systems administrator at Lawrence Berkeley Laboratory who had famously tracked down a KGB-backed hacker in 1986. He understood the technology better than most. He just couldn't see where it was going.
This article has become a punchline, regularly shared on social media as an example of how badly even smart people can misjudge technological transformation. But I want to use it differently. I want to use it as a window into something important: the pattern that repeats every time a transformative technology arrives, and what that pattern can tell us about AI fear in 2026.
The Pattern
Every transformative technology goes through the same emotional arc in public consciousness. First: curiosity. Then: hype. Then: fear. Then: backlash. Then: normalization. Then: invisibility. The telephone went through it. Radio went through it. Television went through it. The internet went through it. And AI is going through it right now.
Let's look at where AI is in 2026 and compare it directly to where the internet was in 1995-1998.
In 1995, roughly 14% of American adults had ever used the internet. By 1998, it was around 40%. The technology was spreading fast but hadn't yet reached saturation. Most people had heard of it. Many had tried it. But it wasn't yet woven into daily life in the way it would become.
Sound familiar? In 2026, surveys suggest around 45-50% of American adults have used an AI tool at least once. The technology is spreading fast but hasn't reached saturation. Most people have heard of it. Many have tried it. But for the majority, it isn't yet woven into daily life.
The parallels extend to how institutions responded. Let's look at a few.
"It Will Destroy Our Children"
In 1995, Senator James Exon introduced the Communications Decency Act, declaring that the internet was "the most pervasive pornography outlet in America" and warning that children would be exposed to "indecency" without government intervention. Time magazine ran a cover story titled "CYBERPORN" with lurid claims about the prevalence of sexual content online. The source? A deeply flawed study by a Carnegie Mellon undergraduate that was later widely debunked.
The panic was real. Parents were terrified. Schools banned internet access or required such heavy filtering that the technology became useless. News segments showed predatory chat rooms and dangerous "online strangers" who could reach into your home through the phone line.
Now compare: in 2024 and 2025, headlines warned that AI would generate child sexual abuse material at scale, that AI chatbots were grooming teenagers, that deepfakes would destroy truth. Some of these concerns have legitimate basis. Some are overblown. The pattern is identical: take a real but manageable risk, inflate it to existential proportions, and use it to justify broad restrictions that may or may not address the actual problem.
This isn't to say the concerns are baseless — it's to say that panic is a poor foundation for policy. The Communications Decency Act was struck down by the Supreme Court in 1997. The internet's actual harms (and there were real ones) were eventually addressed through a combination of industry self-regulation, parental tools, education, and more targeted legislation. The process was messy and imperfect. But the "ban it to protect children" approach was neither effective nor proportionate.
"It Will Kill Jobs"
In 1997, Business Week ran a special report asking whether the internet would eliminate the middle class. The concern: e-commerce would destroy retail jobs, email would eliminate postal workers, online news would kill journalism, and automated services would replace customer service workers.
Two decades later, we can assess: some of these predictions were correct. The internet did devastate certain industries. Retail employment shifted dramatically. Newspapers lost advertising revenue and laid off thousands. But the net effect on employment was complex. The internet created entire categories of jobs that didn't exist before — social media managers, SEO specialists, app developers, YouTube creators, podcast producers, e-commerce logistics workers.
The current AI employment panic follows the same template. "AI will replace writers." "AI will eliminate customer service jobs." "AI will make programmers obsolete." These predictions may be partially correct. Some roles will change or disappear. But the history of technology suggests that the net effect will be a reshuffling rather than a mass extinction — painful for some, beneficial for others, and ultimately a source of new categories of work that we can't yet imagine.
The key insight from the internet era: the people who thrived weren't necessarily the most technically skilled. They were the most adaptable. The journalists who learned digital tools. The retailers who embraced e-commerce early. The musicians who figured out streaming before their labels did. Adaptability beat expertise almost every time.
"You Can't Trust Anything Online"
Perhaps the most resonant parallel between internet fear and AI fear is the question of truth.
In the late 1990s, the concern was simple: anyone could publish anything online. There were no editors, no fact-checkers, no gatekeepers. "How will people know what's true?" experts asked. The answer, it turned out, was: imperfectly. The internet did create an information crisis. Misinformation did spread. But people also developed new literacy skills — checking sources, recognizing propaganda, understanding that not everything on a screen is true.
Today, the concern is that AI can generate convincing falsehoods at scale. Deepfake videos. Fabricated articles. Hallucinated citations. And yes — this is a real problem. But the trajectory is likely similar. People will develop new literacy skills. Tools will emerge to detect AI-generated content. Institutions will adapt. The process will be messy and imperfect, just as it was with the internet.
What's worth noting is that the internet's truth crisis wasn't solved by technology alone. It was partially solved by education, partially by social norms, and partially by people simply getting better at navigating an information-rich environment. The AI truth crisis will likely follow a similar path.
"Nobody Actually Needs This"
Stoll's 1995 Newsweek article captures something that's easy to forget: many smart people genuinely believed the internet was a solution in search of a problem. Why would you buy a book online when there's a bookstore down the street? Why would you send email when you could make a phone call? Why would you read news on a screen when the newspaper was perfectly fine?
These questions seem absurd now, but they were asked in good faith by people who couldn't yet see the adjacent possibilities. They couldn't imagine Amazon because they were thinking of bookstores. They couldn't imagine social media because they were thinking of phone calls. They couldn't imagine Wikipedia because they were thinking of encyclopedias.
The same failure of imagination is happening with AI. "Why would I use AI to write an email when I can write it myself?" "Why would I ask a chatbot for information when I can Google it?" These questions will seem similarly quaint within a decade. Not because the questioners are stupid, but because they're evaluating a new technology by the standards of the old one. The transformative uses of AI won't be better versions of things we already do. They'll be entirely new categories of activity that are currently invisible to us.
What Was Actually Dangerous
Here's where the comparison gets uncomfortable: the internet was dangerous, just not in the ways most people feared.
The biggest actual harms of the internet — mass surveillance, attention economy manipulation, algorithmic radicalization, mental health impacts of social media on teenagers — were almost entirely unpredicted by the 1990s fear discourse. People were worried about pornography and job loss while the real threats — data exploitation, monopoly platform power, the weaponization of engagement — developed largely unnoticed until they were deeply entrenched.
This is perhaps the most important lesson for the AI era. The things we're panicking about today — job displacement, deepfakes, AI-generated spam — are probably manageable. The actual dangers of AI are likely things we're not yet discussing, or discussing only at the margins. The concentration of AI power in a few companies. The use of AI for invisible manipulation. The erosion of human skill through over-reliance on automated systems. The potential for AI to make existing inequalities more efficient.
These are subtler threats. They don't make good newspaper headlines. They develop slowly. And by the time they're obvious, they may be difficult to reverse. Just as it took fifteen years for society to recognize the attention economy as a crisis, we may be blind to AI's actual dangers until they're ambient and normalized.
What We Can Actually Learn
If the internet parallel teaches us anything, it's this:
The technology will be more transformative than skeptics think. Stoll was wrong. The internet changed everything. AI will change everything too. The people dismissing AI as a fad or a toy are making the same mistake.
The technology will be less utopian than boosters promise. The internet didn't create a global democratic paradise. It created a complicated, messy, wonderful, terrible thing that amplified both the best and worst of human nature. AI will do the same.
The biggest risks are the ones we're not panicking about. Fear and hype are equally poor guides to reality. The actual impact of AI will be largely orthogonal to current debates.
Adaptation beats prediction. Nobody in 1995 could have predicted what the internet would look like in 2025. Nobody in 2026 can predict what AI will look like in 2056. The best strategy isn't to predict correctly — it's to remain adaptable, curious, and engaged.
Digital literacy is the great equalizer. In the internet era, the people who thrived were the ones who engaged early and developed fluency. The same will be true of AI. Sites like a-gnt.com exist because navigating the AI landscape requires the same kind of discovery and evaluation that navigating the early web did — finding tools that work, understanding what they do, building workflows that suit your needs.
The Courage to Be Early
There's one more thing the internet parallel teaches us, and it's perhaps the most personal. In 1995, the people who dismissed the internet missed out on years of head start. The people who embraced it — even imperfectly, even without fully understanding it — positioned themselves for a world that was coming whether they participated or not.
The internet didn't wait for skeptics to finish being skeptical. It just kept growing, kept becoming more central to daily life, kept rewarding early adopters with skills and opportunities and connections that latecomers had to scramble to acquire.
AI isn't waiting either. The fear is understandable. The caution is reasonable. But the pattern is clear. This technology is going to become as fundamental as the internet itself — probably more so. The question isn't whether to engage with it. The question is whether to engage now, while there's still room to shape your relationship with it, or later, when you're playing catch-up.
The internet was scary too. And then it was just... life. AI is following the same path. The sooner we recognize the pattern, the better equipped we'll be for what comes next.