js;dr = JavaScript required; Didn’t Read. (tantek.com)
230 points by ezist 9 months ago | 215 comments



Some people don't seem to understand what the whole JS SPA thing is about, and it's quite strange to me.

It's not popular because it's a fad, it's not about replacing good old static websites with fancy over-engineered JS code.

It's about making desktop-class applications more accessible via the web. Desktop-class apps have latency requirements lower than what server-rendered frameworks can deliver, plain and simple. You could certainly build Facebook as a fully server-rendered PHP app, but that would hurt Facebook's business because its servers would need to do more work and its users would have to wait longer for content.

Fully server-rendered frameworks are not capable of delivering low-latency desktop-class applications. If your app doesn't require low-latency updates, then you can certainly use a classic PHP or Ruby on Rails stack with no problem.

Sure, you can use jQuery-style code to make your PHP app more interactive, but without some sort of low-latency, client-side declarative templating framework you're probably just going to end up with a messy, hard-to-understand JS codebase eventually.
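
To make that contrast concrete, here's a toy sketch (element IDs and variable names are invented, not from any real codebase): the imperative jQuery style has to update every affected piece of the DOM by hand, while a declarative component just re-renders from state.

  // Imperative jQuery style: each state change is mirrored into the DOM by hand.
  $('#add-item').on('click', function () {
    var item = $('#new-item').val();
    items.push(item);
    $('#list').append($('<li>').text(item));
    $('#count').text(items.length);
    $('#new-item').val('');
  });

  // Declarative style (React here, but any templating framework works):
  // describe the UI once as a function of state; the framework updates the DOM.
  function ItemList({ items }) {
    return <ul>{items.map(item => <li key={item}>{item}</li>)}</ul>;
  }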

There are certainly some companies caught up in the hype that build an SPA when they would probably be better served by a simple PHP site. But on every project I've worked on that used React, across a few companies (interactive apps, e.g. cropping and manipulating images, building visualizations and reporting on data), not using an SPA framework would have slowed down development dramatically or left us with a really poor product.


> It's about making desktop-class applications more accessible via the web.

Everyone understands this is what devs are trying to do. The complaint is that my local newspaper doesn’t need a desktop class application. Nor does my bank, nor does Reddit for that matter.

There are vanishingly few websites that need a “desktop application” performance profile. Most websites are just viewing documents. The size of the SPA frameworks is frequently higher than the actual content being viewed.

Finally, if matching desktop app performance is truly the goal, the majority of SPAs fail horribly. Poor request patterns and transitions make the pages just as slow as server side rendered html. I would rather wait one second to get a JS-free response than look at another damn spinning circle for 5 seconds as the page sputters new elements in.


Are there really that many local newspapers using SPAs though? I feel like advertisers delivering boatloads of JS with their ads sometimes gets conflated in this discussion.

Another point, it's true Reddit doesn't need a SPA. However, while I hate their new design, at the scale of Reddit an SPA can have real effects on server costs as well as user engagement. And, if you look at their growth numbers it seems to be working out for them, despite their design churning me as a user.

Third, the ecosystem of well-built components means that even if you don't really need a SPA, using React could save you money in launching your MVP, which is a big consideration for startups.

In the end, widgets with lower-latency interactivity, even when not absolutely necessary, make for a better experience (for example, a form field that tells you what's wrong with your password as you're typing). Also, because the SPA crowd is building richer applications, you're going to have access to richer open-source widgets when building with that technology. So it's a hard thing to ask people to avoid React when it could be the difference between a great user experience and a mediocre one down the line.
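
As a rough illustration of that kind of widget (a sketch only; the component and validation rules are invented, not from any real product), a React field giving instant password feedback with no server round trip:

  import React, { useState } from 'react';

  // List which rules the current value fails, so the user sees feedback per keystroke.
  function passwordProblems(pw) {
    const problems = [];
    if (pw.length < 8) problems.push('at least 8 characters');
    if (!/[0-9]/.test(pw)) problems.push('a digit');
    if (!/[A-Z]/.test(pw)) problems.push('an uppercase letter');
    return problems;
  }

  function PasswordField() {
    const [pw, setPw] = useState('');
    const problems = passwordProblems(pw);
    return (
      <label>
        Password:
        <input type="password" value={pw}
               onChange={e => setPw(e.target.value)} />
        {pw.length > 0 && problems.length > 0 &&
          <small> still missing: {problems.join(', ')}</small>}
      </label>
    );
  }

The same check could of course run server-side, but only at the cost of a round trip per keystroke.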

Let's just continue to build better SSR tools so we can have the best of both worlds. We can build tools to help developers cut JS from their apps, because it's true: eventually the latency bottleneck isn't the speed of a round-trip request with PHP but the speed of downloading and executing all that JS in your SPA.

I believe this is a losing battle, not because of hype, but because of the business reasons for using frameworks that are capable of delivering richer widgets. We can have our cake and eat it too, instead of fighting the shift in technology.


Our government's COVID statistics site[0] is a SPA. It doesn't need to be, it just shows tables and graphs, but these days any new page seems to be needlessly built as a sluggish SPA.

If you want to check hospital bed occupancy by state, it's 3 clicks and a 5-second spinner while it loads the results and shows the first 20 rows. That doesn't sound too bad, unless you refresh, in which case it throws you back to the landing page.

This should be a static site, but even if not, this is actually slower than a server-side rendered page. SPAs are not desktop-like when they constantly request server data, even if they hide it with slow animations and spinners.

[0] https://www.gits.igg.unam.mx/red-irag-dashboard/reviewHome


I've found that the new SPA-ified PayPal is also unbelievably sluggish. I'm sure this is because they're doing something dumb rather than some intrinsic quality of SPAs, but SPAs sure do make it easy to do something dumb without noticing, especially if you're developing in zero-latency environments.


Yes, SPAs can easily get horribly sluggish. The Johns Hopkins University dashboard[0] has been the de facto international source on this pandemic, and man, while I'm grateful for it, it is horribly sluggish. How much power has been wasted by clients rendering the exact same map and data over and over, all over the world?

Compare their approach to Worldometers' COVID statistics page[1], which, while less functional, is very lightweight and responsive.

[0]. https://coronavirus.jhu.edu/map.html

[1]. https://www.worldometers.info/coronavirus/


Yeah our county covid stats are the same. Static pages would work great here, instead of having to hit the API and DB for the data, since the data is updated once a day.


> Third, the ecosystem of well-built components means that even if you don't really need a SPA, using React could save you money in launching your MVP, which is a big consideration for startups

In my experience, that ecosystem of components does not compensate for the cost of having to make a thousand decisions and have an equal number of discussions, considerations and trade-off evaluations within your team (how to do state management, what router to use, class or function components, do we do SSR or not, should we use TypeScript, etc.) vs. just using Ruby on Rails / Symfony / etc.

> Let's just continue to build better SSR tools so we can have the best of both worlds

Yes, just adding more and more complexity to compensate for what you get by default with traditional MVC seems to be the right approach (irony).

Almost none of the SPAs I've been involved in over the last 5 years required "desktop-like" interactivity. All of them would have been served a lot better by more traditional approaches.

I think SPAs, and frontend-heavy frameworks in general, are an amazing technology, but they are certainly overused. Business-wise, it makes no sense. Most of us building SPAs shouldn't be. Problem is, it is not trendy or hip to use RoR, and everyone wants to have fun too.


Don't know about local ones, but The New York Times is server-side-rendered React, ditto The Intercept and the Latvia-based Russian outlet Meduza. The Times seems to be using GraphQL/Apollo as well.

In 2014-2015, React was eating the world, and newspapers didn't want to get left behind.


>Are there really that many local newspapers using SPAs though? I feel like advertisers delivering boatloads of JS with their ads sometimes gets conflated in this discussion.

Yes, I encounter it frequently. Some slow fucking main page loads for 5 seconds with no article in sight (presumably chewing on all of the trackers, assets, whatever, but no story nonetheless). Then a spinner starts to load the article and something stutters in after another 5 seconds, and I get false hope that I can start reading. Another two seconds and a GDPR cookie notice pops over, then a subscribe widget, then maybe a local weather widget. Close all this shit and now read the article on half of my screen, because the top half is occupied by a banner with a breaking news ticker.

> Another point, it's true Reddit doesn't need a SPA. However, while I hate their new design, at the scale of Reddit an SPA can have real effects on server costs as well as user engagement. And, if you look at their growth numbers it seems to be working out for them

I’m sure their developers smoke some pot too, but we don’t attribute that to their growth. Why would you think an SPA helps when it’s one of the most widely hated features of the site? Has it occurred to you that the growth might be happening for a different reason?

> Third, the ecosystem of well-built components means that even if you don't really need a SPA, using React could save you money in launching your MVP, which is a big consideration for startups.

The ecosystem of server side stuff is far deeper and more mature. I don’t buy this argument.

> I believe this is a losing battle, not because of hype, but because of the business reasons for using frameworks that are capable of delivering richer widgets. We can have our cake and eat it too, instead of fighting the shift in technology.

Unlikely. Once the hype dies down people will realize SPAs are like mobile apps. You don’t need them for the vast majority of use cases and their instability and general shit performance will result in punishment by the search engines to the point where people with html5+css sites will rank higher and suffocate the bloated turds.


I use 3 different banks quite regularly and there is a huge difference between them. My latest bank has great interest rates but also the worst UI experience mostly due to every interaction causing a page reload. I’m very much hoping for more JS usage in the banking world to make the experience a bit smoother!


Between my wife and me, we actively use 3 different banks (until recently 4), and we've also been through a bunch of redesigns, modernizations and acquisitions. I must have used some 10 banking UIs by now, not counting mobile apps.

One thing I can tell is: they get worse over time, and the more JS they involve, the worse they end up being. I can talk a lot about the UI design itself and it won't be nice (suffice to say: it's getting worse, and it's most likely because the goal of the bank isn't to provide a productive financial management UI, but to confuse you and upsell you financial products), but beyond that, performance goes down with each iteration.

(My least favorite piece of garbage among user interfaces is the offering of IDEA Bank in Poland - an Angular.js monstrosity where every operation - like opening account history, or downloading a PDF with transaction details - seems to take 30 seconds to a minute on a good day. The interface itself lags a lot and starts to visibly slow down if listing more than a couple dozen items - say, transaction history for the past 6 months. But showing such long lists isn't a good idea anyway, because if you scroll to the end, some random XHR will fire and reset the list to "last month" or something like that, because of course that's a reasonable thing to do.)

Banking pages absolutely do not have anything in them that would require "desktop like functionality". They're the poster child of the document model - their job is literally to be digitized bureaucracy. They present you with forms and respond to queries. Every interaction you want to have on a banking interface boils down to that. Request a list of this. Request details about that. Send that much money there.


> My latest bank has great interest rates but also the worst UI experience mostly due to every interaction causing a page reload.

I think this is heavily dependent on how fast the screen refresh is. Page reloads can be quick enough to appear pretty responsive - but if the website takes 3 seconds to serve the page up that’s obviously no good.


Did I forget to mention that they store the application state on the server end, so the back button in your browser doesn't work?


> I’m very much hoping for more JS usage in the banking world to make the experience a bit smoother!

As a React developer this makes me anxious, lol.


Totally agree. One of the banks I bank with has a fully interactive JS application with no reloads. It's super smooth to interact with and makes the experience amazing.

I also bank with another bank that does full-page reloads and it's a huge pain. And it's frustrating when I have to do something because of how slow it is comparatively.


This is pretty spot on. The problem isn't JS, it's people turning small blogs, brochure sites, static pages, etc into SPAs


Amusingly, my local newspaper's paywall only works with JS enabled. Disable it, and you can read everything unencumbered.


HN gets frothing mad about javascript, and to some extent I agree: sites that are fundamentally about content (news websites, blogs, etc.) should not be locked behind JS gates. The amount of code bloat on those pages tends to be related to tracking and advertising, so not running JS on those pages makes sense.

On the other hand, web applications that provide full-featured experiences are only possible because of the full spectrum of web technologies. Choosing not to run JS and claiming that Google Docs should work without JS is ridiculous.


It’s almost as if you should pick the right tool for the job. ;-)


We all agree on that. I just suspect that Hacker News only knows about two jobs: massively distributed applications and their static blog.

In the middle, you have countless SPAs, but also slightly interactive websites that are 90% content, and 10% calculators/maps/widgets.


It's not like those widgets need a full routing system with SSR, etc. Even with Vue, you can just drop that component on the page.
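
For what it's worth, a rough sketch of that drop-in approach with Vue 3 (the widget, element id and data are all made up, and it assumes the global Vue build with the template compiler is loaded on the page):

  // Assumes the static HTML already contains <div id="interest-calculator"></div>
  // and the global Vue 3 build is loaded; everything else on the page stays static.
  Vue.createApp({
    data() {
      return { amount: 1000, rate: 0.05 };
    },
    computed: {
      interest() { return (this.amount * this.rate).toFixed(2); }
    },
    template: `
      <label>Amount <input v-model.number="amount"></label>
      <label>Rate <input v-model.number="rate"></label>
      <p>Yearly interest: {{ interest }}</p>
    `
  }).mount('#interest-calculator');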


But add enough of those widgets to a page, and now you need to coordinate local state, sync it with the URL, etc, which is way easier to do when the whole page is a SPA.
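
For reference, "sync it with the URL" hand-rolled per widget looks roughly like this (parameter names invented); multiply it by a handful of widgets and it's the kind of plumbing an SPA router centralizes for you:

  // Write one widget's state into the query string without a full router.
  function setFilter(value) {
    const params = new URLSearchParams(window.location.search);
    params.set('filter', value);
    history.replaceState(null, '', `${location.pathname}?${params}`);
  }

  // Read it back on page load so the widget restores its state.
  const initialFilter =
    new URLSearchParams(window.location.search).get('filter') || 'all';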


But at that point your site is more an application than content


It's not always so cut and dry; some content websites are very interactive.


Feels so true.


Aside: outline.com is useful for news websites which require JavaScript.


Many but by no means all.

I increasingly rely on either the Internet Archive's Wayback Machine (which also fails remarkably often to fully, or even partially, present SPAs) or Archive.today (archive.is, archive.fo, and friends), which is painfully slow to acquire content but does manage to render most of what it attempts.


> There are certainly some companies caught up in the hype that build an SPA when they would probably be better served by a simple PHP site

Yeah, like your example, Facebook. Ever since their redesign, I've switched to only using the mobile+noscript site (on desktop), because the SPA version is resource-hungry to the point that it regularly DoSes whatever browser thread it gets assigned to and has UX that, ironically, seems to be terrible as anything other than a mobile app (they've replaced text with abstract square "touchable" buttons and introduced airy spacing everywhere that allows you to see maybe about half a post per screen).

It's as if its designers have been trying to ram their mobile app down my throat for years (by nagging screens at first, and then by outright removing the ability to view private messages from the mobile page - except sometimes by refreshing sufficiently many times you could trigger a bug and still drop through to the old messenger view, adding insult to injury), and when I still didn't bite, they decided to replace the whole service (which up until then had been one of the last remaining decent mainstream websites) with a facsimile of one.


> Yeah, like your example, Facebook. Ever since their redesign, I've switched to only using the mobile+noscript site (on desktop), because the SPA version is resource-hungry to the point that it regularly DoSes whatever browser thread it gets assigned to and has UX that, ironically, seems to be terrible as anything other than a mobile app (they've replaced text with abstract square "touchable" buttons and introduced airy spacing everywhere that allows you to see maybe about half a post per screen).

I have literally not heard of anyone complaining about this (not that your argument is invalid). A whole lot of people are just not gonna bother or check how resource intensive it is.


> I have literally not heard of anyone complaining about this

EVERYONE complains about the speed of "new" (2020?) Facebook.


I will add my voice to the list of complainers. I stopped using Facebook after they forced me to upgrade to the new site.


Well you can count a second complaint here. Of course it's much smoother on Chrome.


I rarely use Facebook, but can confirm too. The older version wasn't great either.


> It's about making desktop-class applications more accessible via the web.

I am not sure what "web" you're using, but as someone who uses noscript and has been enabling scripts for over a year now, I can firmly say JS is NOT used for making "desktop-class applications more accessible".

It's used for ads. And spying. Lots and lots of spying.

Seriously.

Googletagmanager/analytics is everywhere, it doesn't deserve to be, it doesn't need to be. That domain needs to die a painful, horrible death.

facebook, twitter, sessioncam, and many others are used to bloat pages, increase my energy usage, drain my battery quicker and contribute to the wastage of energy on an unprecedented scale.

Just ask yourself how much money, battery life and bandwidth is spent every year on downloading useless scripts that, as far as I can see, offer no value whatsoever. By selectively deciding which scripts to enable I get the following results:

1: pages are lighter, less bloated, and STILL WORK

2: I download fewer scripts, use less battery and save bandwidth and energy for myself and all humanity.

3: There is less spying, as fetch() requests are blocked, and there can be hundreds of them across a single web-page session (watch a really bloated page for 10 minutes and there can easily be hundreds of requests).

Test: load the following two pages, study their usage, test each with JS enabled/disabled:

1: https://old.reddit.com (with JS: 2.69 MB, without: 2.34 MB)

2: https://reddit.com (with JS: 8.58 MB, without: 8.10 kB, but the page is broken...)

Bearing in mind old.reddit works fine with JS disabled, showing JS is not needed at all for a site like Reddit to work.

Yet, "web developers" use them as if it's nothing. Pages that execute over seconds, possibly MB's of data, all the additional requests that are made to enrich the likes of FB/Google et al.

So no, SPAs are a scam, and the web is worse today than I remember it back in the 2000s; at least back then we didn't have cookie pop-up boxes because people can't help but abuse JS.

Most of the GDPR violations I've found thus far are from scripts that have no place or purpose, that slurp up user data without remorse, that, if disabled, don't impact the functionality of the page, and that enable the great surveillance capitalism and data-raping we are seeing today.


> I can firmly say JS is NOT used for making "desktop-class applications more accessible"

Yes it is. It is used for ads and spying too, but to say it is "NOT" used for desktop-class apps is wrong.


I agree with all except for one thing:

> at least back then we didn't have cookie pop-up boxes because people can't help but abuse JS

pop-up and pop-under banners with IE6 were BAD.


Ok, so you want to disable tracking and ads.

Why does that necessitate disabling all JS? You can enjoy the performance of an SPA while selectively disabling tracking.


Sadly, it's not often that simple. Giving amazon.com permission to run JS should be enough, but it requires the media. or ssl-images domains to get basic functions working.

Then there's the issue of [randomstring].cloudfront.net - what does this script do? Do I need it?

Often, finding the right combination of scripts to enable to get one piece of functionality working makes the whole experience painful and frustrating; often I just forget what I was trying to do and go elsewhere.


> Googletagmanager/analytics is everywhere, it doesn't deserve to be, it doesn't need to be

You might not care, but most people publishing any sort of content online probably want to know who visits it (from where, mobile or web, what numbers, when, etc.). The easiest way to achieve that (considering there's shared hosting, and that most people who care about those numbers aren't the people who installed or developed whatever software runs the site; the average website runs WordPress) is via Google Analytics.


Web apps are more accessible because more people can build web apps, and thus there are more web apps for people to access. There are still many web apps that aren't accessible because they're too hard to build; for example, I might say that modern frameworks don't make games easy enough, and that's why we've seen far fewer Flash-style games. You don't get to access the games that don't get created.


> It's not popular because it's a fad, it's not about replacing good old static websites with fancy over-engineered JS code.

Yet, in practice, this is what happens.

When turning JS off is an option, that's usually one of the biggest improvements I can make to my page surfing experience. Things load faster, ads don't expand over content, the page reflows less often as things are injected, shit doesn't autoplay, and spinners and animations don't distract. I can just read the content.

It's possible to use JS well, but I don't see it happen very often. And the unpleasantness from abuse usually outweighs the benefits from good use.


> its servers would need to do more work

So far I have seen no evidence that this would be the case (rather, I have reasons to believe the contrary).

> and its users would have to wait longer for content.

They would actually have to wait less. Recently, two of the sites that I used to use switched to React. Before, I was able to have literally hundreds of tabs open with them, each loading almost instantly (excluding media). Now I can barely hold 2 open, and they load slowly (even if we ignore the time it takes to load the media).


> It's about making desktop-class applications more accessible via the web.

...

Go and take a look at a "desktop class application" from 20 years ago - say Photoshop 6. Compare the speed, the UI, the native look & feel.

Go to a SPA today.

Cry.


Yup. And if you compare desktop-class applications from the time when desktop apps weren't just embedded webviews with what a typical web SPA does (how complex it is, how much actual content), the first thing to remember is that this scale of desktop software had its executable size measured in dozens to hundreds of kilobytes.


I think a better comparison would be a networked 'enterprise' app. Think SAP or Sage or something.


No, it’s a fad. Most corporate JavaScript developers cannot actually write JavaScript of their own. They need giant frameworks to do the heavy lifting and the output of such is a SPA. If you really wanted accessibility you wouldn’t complicate your life with screen reader compliance via a SPA.

To be clear, a SPA is generally a front end for user input, such as a form (or series of forms) cobbled together so that a page is loaded fewer times. This is a traditional web path that exchanges page traversal for interaction, often to maintain state locally. Conversely, a browser application is an application that executes in the browser without regard for data and state, such as Photoshop or a spreadsheet in the browser, which aren't concerned with any server application.


> It's about making desktop-class applications more accessible via the web.

The problem I see is folks unnecessarily turning their websites into desktop-class applications. It's especially popular on ecommerce sites -- today I tried to look at some lumber prices and the website had input latency measured in seconds before my phone's browser just crashed.


> It's about making desktop-class applications more accessible via the web.

If we, for a second, stick to the meaning accessibility used to have, namely usability by people with disabilities, quite the contrary is the reality. The SPA trend, "let's just move every app onto the web", fuels the digital divide like nothing else. It has become harder and harder to actually use the modern web, and a lot of why that is comes down to SPAs and JS.


The current FB/Messenger interface is a disaster. Transitions are slow, and they still haven't gotten the local state right or coordinated the different parts of the UI. I've stopped using Messenger because of this, having managed to send messages to the wrong recipient more than once because of latency in the UI.


> Some people don't seem to understand what the whole JS SPA thing is about, and it's quite strange to me.

Most people understand the promises of SPAs, but there are several forces that play against everything you said:

- Some websites don't need a desktop experience yet they go the SPA route.

- SPAs are -in my experience- significantly harder to get right than server-rendered apps. For sites that are a good fit for "the regular old web" we are speaking about at least an order of magnitude here.

- Oftentimes it is companies/products with significant resources that embark on the SPA ordeal. This usually means that they also have several teams demanding product analytics, A/B testing and what not, and hence their sites end up loaded with random shit (gtm, analytics, optimizely, facebook pixel and the kitchen-sink).

For all these reasons, it takes an extraordinary (i.e: significantly better than average) team, from the developers all the way to management, to deliver on the SPA promise.

As a result, most SPAs suck, and hence a lot of people cultivated an aversion to them. It really is that simple.


I don’t want to run your proprietary JavaScript on my computer.


All the popular SPAs I use have awful latency. It may have been a design goal, but huge failures all around.


> You could certainly build Facebook as a fully server-rendered PHP app, but that would hurt Facebook's business because its servers would need to do more work and its users would have to wait longer for content.

I think they do that mostly for the users, who would otherwise have to re-download the header 100 times, once for each action they take. I'm not really sure that all the benefits of an SPA come down to hidden company costs; mostly they come from the modern tech approach.

Writing this, I still agree with the article: an SPA is needed when you are behind authentication, but publicly you can live with a progressive web approach.


> I think they do that mostly for the users, who would otherwise have to re-download the header 100 times, once for each action they take.

If only they weren't deploying some changes every other day, pretty much eliminating any benefit from caching...


Also, SPA and server side rendering are not mutually exclusive. Both make sense as performance optimizations depending on the context.


IMO you can have your cake and eat it too - just offer a fallback of some sort; it's good for SEO anyway. I haven't done anything in React, but it's my understanding that this is actually possible: have React deliver an initial pre-rendered page and then still have your fancy SPA on top of it.
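
That understanding is right; it's usually called server-side rendering with client-side hydration. A minimal sketch, assuming React 17 with Express and a JSX build step (the file layout and App component are made up): the server sends real HTML so the first paint and crawlers get content, then the client bundle attaches event handlers to the same markup.

  // server.js - render real HTML for the first request (sketch).
  import express from 'express';
  import React from 'react';
  import { renderToString } from 'react-dom/server';
  import App from './App';

  const app = express();
  app.get('/', (req, res) => {
    const html = renderToString(<App />);
    res.send(`<!doctype html>
      <div id="root">${html}</div>
      <script src="/bundle.js"></script>`);
  });
  app.listen(3000);

  // client.js - attach the SPA to the server-rendered markup instead of rebuilding it.
  import ReactDOM from 'react-dom';
  ReactDOM.hydrate(<App />, document.getElementById('root'));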


To a first approximation, you can do all your heavy-cpu computation on the server and have the client be a close-to-dumb terminal or you can have the server do some preprocessing and the client do some postprocessing.

One spreads the load over CPUs better.


> One spreads the load over CPUs better.

Better for companies running high-powered servers with economic benefits of scale, or for users running low-end devices? This is just dumping an externality.

Also, arguably, most performance issues of SPAs don't come from business-relevant calculations being slow. They come from scaffolding, gratuitous extras, and a bunch of other overhead that would not happen at all server-side.


The end user is also often paying for bandwidth, and doing local computation locally saves that cost.


Yet somehow all these locally-computed sites end up being much more bandwidth-demanding than old-school sites. Shouldn't be like this, but that's how it ended up working in practice.


> Fully server-rendered frameworks are not capable of delivering low-latency desktop-class applications

Evidently, neither is JavaScript.


People don't care about 10 years from now or edge case users. It matters how it looks today and that it drives engagement and profit now. People aren't making their websites for eternity. It's like lamenting that the billboards on the street are not archived or the menu card of the kebab shop is not available as it was 10 years ago. The web is there to provide functionality and satisfy business needs.

It's usually not the hobbyists who pack their sites with all the modern fancy js.


> The web is there to provide functionality and satisfy business needs.

lol

Remember when the web was for people to communicate? Not just to 'satisfy business needs'?


I actually don’t, my family got on the web circa 1996 and within a year or two we were buying books on Barnes & Noble and having them shipped to Spain.

What are you referring to?


The reason for purchasing an internet connection was not to shop back then. You may have purchased some books but the majority of online activity had no apparent business purpose.


E-commerce felt like a side project of the web; it was nascent, potentially cool, but, at least to me, never the reason I was connecting. I wanted to read, chat, play.

I really think that the business-first mindset made the internet rot faster than necessary. No monetization means no tracking, means no cookie warnings... and so on and so forth.


Pre-ecommerce Internet. (Aka: the good ol' days)


The original intention of the Web - as devised by Tim Berners-Lee - was to share documents in a research context. The idea of a URI or URL was to uniquely identify documents, but also to link them as hypertext, so you could jump from document to document.

Then Netscape came. They were the first to have even a primitive idea of the potential of the Web beyond simply linking pages.

The dotcom bubble of 1999 and the first boom of e-commerce happened exactly because, in the few years preceding, everyone was scrambling to dominate this Web thing. This included Microsoft, Sun and so on.

Now, none of this is new.

The entire idea of "Rich Web Applications" is as old as the Web. Silverlight, Java Web Applets, Flash,... They were all about this idea of building applications that could be loaded - and more importantly: controlled - via the Web.

Over the past 15 years, Google succeeded in dominating the browser market and building a set of APIs that made it easy to build such Rich Web Applications without needing extra plugins or 3rd-party sandboxes. You can just do it using JavaScript, HTML and CSS.

And while that's not a bad thing in itself, the problem is that an entire generation of engineers has created an entire layer of abstractions on top of the browser engine in order to reinvent the exact same things which already existed back in the '90s. Only now, instead of running separate native programs and applications, everything is now corralled into a single browser environment... which tends to be entirely controlled by Google, if you use anything powered by Chromium.

And this often includes how simple text documents are consumed. The text you get doesn't arrive as an HTML body in an HTTP response; it arrives as a chunk - or chunks - of JSON and needs to get parsed and assembled by layers of JavaScript until the assembled code can be fed to a browser engine... which will parse it once again in order to paint it on the browser canvas.

None of that is truly necessary when it comes to plain text. Heck, none of that is even necessary to render a single image on the browser canvas.

Plenty of major, highly visible, high traffic outlets like newspapers or media use these layers of complexity to publish text. Why? Because it gives them control over your experience and what you can do with the content e.g. paywalls, DRM, advertising, intricate metrics,...

Now, plenty of the Web still offers plain HTML and CSS, just like 20 years ago. And that's awesome. But that's a long tail of websites which largely remains in the dark since the vast majority of Internet users have been corralled into a set of centralized services that tend to promote links to these highly visible outlets through their recommendation engines.


That situation you describe never actually happened, except maybe during the very first few hours, days or weeks after the Web's birth.

Remember bandwidth, machines and access used to cost way more back then than they do now, so resource owners wanted to get the best bang for their buck from the get go. Note that resource owners means those who actually paid for network resources (companies, administrations, schools, labs), not people who merely benefitted from them for free because of their positions (teachers, students, lab rats etc)

You seem to be the unfortunate victim of a false memory, a similar phenomenon to that experienced by the "things used to be better" crowd.


My first web pages were hosted by the same ISP I was paying for a modem pool, shell account, and Usenet feed. Hardware was more expensive but not prohibitively so.


I don’t remember any such time, and I had a 14.4k modem.


The idea that business isn’t about people communicating is strange


The first and only rule of business is this: you are there to make money. Everything, and I mean everything, is secondary. Employee wages, whatever 'solution' you're aiming to market - those are all secondary to the fact that you are there to make money and do what works to get money.

It's a harsh reality, but it's very much the case and it's the first thing you learn on any business course.


If you want, that doesn’t change the fact that business is almost entirely based on communicating between humans.

Communication with coworkers, customers, investors, regulators, partners, etc. One of the main reasons people interact and communicate with humans who aren't in their close circle is business.


So are charities like hobbies under your definition?


> So are charities like hobbies under your definition?

A charity is not a business, it's a different type of organization. The fact/reason that most of them decide to operate like businesses is outside of the scope of what I stated.


Business, as I believe parent meant it, incorporates parts where people communicate; I suspect though that the parent would mostly say that business is about money.


I remember when it lacked enough abstractions to satisfy business needs.

That's not a proud age; it was less than what we have now.


I think this is a bit of a false analogy. It's not just billboards and menus that need to be archived but also newspapers.

Maybe a right-leaning news site gets archived (for lack of JS) and a left-leaning one doesn't. I would certainly want to see what both sides were writing about at the same time.


Corporate user-facing websites are created for a 3-5 year time horizon, with sunset at 8 years. The primary reason is that most developers in the corporate space cannot write original code. They are dependent upon large frameworks that have a limited lifespan.

Maintenance is hard when your site is littered with code you never wrote and never understood beyond the API.


Fair enough. How about documents? Would anyone be able to access ebooks in 50 years from now? How about 1000? We have very old books that are still 'functioning', and the problem posed is that technology is becoming more complex and more fleeting at the same time. When it comes to billboards and menus, absolutely nobody gives a damn, but when it comes to the internet, a lot of the knowledge of the time gets lost because it can't be archived properly, and while it's not the biggest problem, it still sucks.


> Would anyone be able to access ebooks in 50 years from now?

Access as in the original manuscript of the book is still around somewhere in a readable format so new ebooks can be published in whatever format is in demand then? I certainly hope so.

But should ebooks sold now be readable by anyone in 50 years? I don't think many people worry about that, nor should they. Archiving should be for persistence, so done in some format we can be absolutely sure we can read (such as plain text).

Distribution should be for convenience, i.e. likely whatever proprietary format users want.


> People don't care about 10 years from now

But they really effing should. 10 years is not a long time at all.

> it's usually not the hobbyists who pack their sites with all the modern fancy js.

BS. News outlets, Google Anything, Facebook itself, etc., are usually way worse.


10 years, at least in the industries related to internet, is an eternity.


The billboards on the street or the menu card of the kebab shop are actually great examples of things that would be worth archiving. The stuff that isn't worth archiving is things like the current ordering of items in my facebook feed, the current search results for a given term, etc.


I think we worry too much about hoarding information. It's ephemeral, mostly garbage. Yeah, it's kinda cool to see old items from hundreds of years ago, but we will leave plenty of stuff behind either way. Forgetting is part of moving forward. The next generations will have enough to deal with in their own times without diving into the minute details of the past.

Same with worrying about personal photos. I don't have many photos of my grandparents when they were young, and none of my great-grandparents. But that's okay. I don't want to be defined by them, and I don't want my great-grandkids to know tiny details of my life. Legends and stories are just the right amount to know. It's a feature that the past decays, not a bug.


Precisely.

In 10 years, most of the data on the web is outdated and useless to everyone but historians.


Yes, most of it. But there's definitely content that is still talked about and referenced many years later. Joel Spolsky's Things You Should Never Do was written in the year 2000 and sometimes comes up in discussions on HN. Imagine if Brooks' No Silver Bullet had been published on some now-defunct blog site.


> Because in 10 years nothing you built today that depends on JS for the content will be available, visible, or archived anywhere on the web.

What does that even mean? Saying browsers won't support JS in 10 years is an idiotic claim. Even more so considering it was written 5 years ago.

Beyond that, I don't know why people think that the web should just be either simple pages for serving content or massive desktop-class applications. There is an entire world between these two that is perfectly valid. Yes I want to host a blog but I also want it full of widgets and other fancy JS. I want to use new frameworks and rewrite it every few months. It may be messy, it may be sometimes inaccessible, and yes it may not be available a few years from now. But the web is and always has been about creativity and expressing yourself in any way you want. Heck my personal site ran on Flash once upon a time.

People here are the kind who would complain about geocities or myspace pages back in the day ("why is it all so flashy? Why can't it just be a simple page of text?")


If your "web page" is essentially just a script tag and relies on content through first and third party APIs it's going to be pretty difficult to keep that running long term.

If an API shuts down, an incompatible change is pushed, or a JS CDN goes defunct, that JavaScript-only site breaks. JavaScript-only sites are pretty fragile.

If the same content was just a normal HTML document it could at least be easily archived. It's also trivial to keep online. Even if some CDN dies and CSS or JS doesn't load it's still readable.

JavaScript developers have never been great about progressive enhancement but it seems of late they've gotten worse. If the sole purpose of a site is to be an app then tons of JavaScript is necessary. But that's a minority of sites. A blog post or news article doesn't need to dynamically build the DOM with JavaScript and short circuit all of a browser's loading and rendering logic.


Not everyone builds a site to immortalize content. Some are building businesses and trying to move fast. If an API can serve a website, mobile apps and desktop apps, then maybe that's the only server I want; I move the rest to the client side. If in 10 years I'm still going, then I'll keep the websites/apps available. If not, I'll likely be too busy failing to care about content archiving. If keeping content on the internet were my goal, I'd build my site that way. But it's not. Providing useful services is.

I wonder if brick and mortar stores are worried about their place of business if they fail? Not likely.

The internet is diverse, some things will be archived, some won’t.


You're conflating web site and web app which I specifically called out. If you're building the new great gig delivery app, using the same API for a mobile app and web app isn't a big deal.

My issue, and I think a serious issue with the web today, is all of the essentially static content treated like it has the same needs as some web app. Your gig delivery app needs to represent the instantaneous state of the database back end. Your blog post does not.

JavaScript-based "sites" are often little better than the Flash sites they've replaced. They're often inaccessible, opaque to search engines, break linking, and are also opaque to microbrowsers [0]. For every site that relies on JavaScript but doesn't have those problems there's dozens that do.

[0] https://24ways.org/2019/microbrowsers-are-everywhere/


> Not everyone builds a site to immortalize content.

True, and that is fine. However, by using JS + APIs, we risk that others who want to immortalize content for purposes of historical or cultural research fail to capture quite a bit of our current culture and content. I would be sad if my grandchildren are unable to get a feel for the world I live in like I could get a feel for the world my grandparents lived in by means of archives, magazines, newspapers from their day.


Google already archives pages for search in spite of their JS requirements.

If archive.org isn't doing the same, that's a bridgeable tech gap.


archive.org probably doesn't have the same computing power as Googlebot.


If you write real HTML, then an archiver only has to retrieve the HTML in order to save a future-proof representation of the content. (Edit: I'm talking about text-first documents, not interactive webapps here.)

But if the content doesn't render at all without JavaScript, and the archiver wants to index the page or preserve it for posterity, it needs to retrieve all the external assets and then execute them (because they may asynchronously load further scripts which are required to load the content). And then wait for it all to execute.

This means that archiving services need to do a lot more expensive work. It raises the bar and excludes smaller, more diverse web crawlers and indexers.

And regardless of that, it's a big gamble to hope that the third-party CDNs that you rely on will continue to host exactly the same JS libraries ad infinitum.
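
To make the cost difference concrete, here's a rough Node sketch (URLs hypothetical; assumes Node 18+ for global fetch and Puppeteer for the headless-browser case): archiving a real HTML page is one cheap request, while archiving a JS-rendered page means booting an entire browser and waiting for its scripts to settle.

  import puppeteer from 'puppeteer';

  // Plain HTML document: one request, the content is right there in the response body.
  const html = await (await fetch('https://example.com/article')).text();

  // JS-rendered page: launch a headless browser, execute the scripts,
  // wait for the network to go quiet, then serialize whatever the DOM became.
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  await page.goto('https://example.com/spa-article', { waitUntil: 'networkidle0' });
  const rendered = await page.content();
  await browser.close();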


Lots of different things being mixed up here.

To archive a web page you need the source code, all assets, a layout engine and a JavaScript engine. Search engines and archival tools can handle all this very well today.

Yes, if a site loads a ton of external content it has the potential to break. But that doesn't apply to just JS but also images and all other static assets. If a JS-heavy site is hosted completely from its own domain there's nothing stopping easy archival of it.


We're talking about mostly-text-pages which have no sensible reason to rely on JavaScript to render, not rich webapps. In these cases I consider the text to be a meaningful representation for the purposes of indexing or archive.

I don't agree with this:

> Search engines and archival tools can handle all this very well today.

I think most developers have been in a situation where they had to explain to users that a moon on a stick isn't easy just because Google can do it. It may be commonplace, but it's not easy or cheap.

The difference between real HTML and JS execution is orders of magnitude in time, expense, maintenance, and stuff to break. That really can't be overstated. It's the difference between a 500 millisecond call to curl and a whole headless browser and a 5 second wait.

I hope you don't think I'm putting this too strongly, but this raised bar is a force in the monopolisation and de-diversification of the web.


> and then execute them (because they may asynchronously load further scripts which are required to load the content). And then wait for it all to execute.

not to mention potentially rewrite the executable code to still work on the archive copy, or potentially end up with a copy where all clickable links are broken depending on the specifics (and sometimes also content further down the page if scrolling triggers dynamic loading). And if the site doesn't update the URLs to be unique for each "page", well good luck finding that dynamically loaded content in the archive at all to begin with.


I read it as: Your stack is so complex that:

- you will not have it running 10 years from now

- I am not able to make a snapshot of the content using tools like curl


Weird how the conclusion is that the sites are wrong, not that curl is an insufficient tool for indexing the modern web.

Google search hasn't died because sites use JS.


> Google search hasn't died because sites use JS.

I believe it's the other way around. Websites use just as much JS as they can get away with, without sinking in Google results. If Google never ran JS in the first place, most websites wouldn't depend on it.

And now every crawler has to keep up with Googlebot's computing power if they want to archive the web.


But it shouldn’t require such a complex js execution engine when the end result is something that is equivalent to what used to be retrievable with a simple curl request.


I assume crawlers have improved since this was written, but:

If the content is only visible when executing JS, then an archive crawler which does not execute JS will not archive it, and a search engine crawler will not index it.

Another thing I may be out of date with: do people still link to 3rd party copies of JavaScript libraries? Because the left-pad npm package being deleted caused a lot of problems, and that kind of drift is inevitable over a decade.


> do people still link to 3rd party copies of JavaScript libraries?

Sadly yes. Occasionally, sites use Subresource Integrity (SRI, https://caniuse.com/subresource-integrity) hashes to ensure the scripts are identical to what they expect. But it still surprises me to see secure, sensitive sites loading third-party scripts.
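
For what it's worth, an SRI value is just a base64-encoded hash of the exact file you expect; a small Node sketch for generating one (the file name is hypothetical):

  // Compute a Subresource Integrity value for a script you intend to reference.
  import { createHash } from 'crypto';
  import { readFileSync } from 'fs';

  const file = readFileSync('vendor/some-library.min.js');
  const digest = createHash('sha384').update(file).digest('base64');
  console.log(`integrity="sha384-${digest}"`);
  // The resulting value goes on the script tag, e.g.
  // <script src="..." integrity="sha384-..." crossorigin="anonymous"></script>
  // so the browser refuses to run the file if the CDN serves something different.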


Webpack is supposed to solve the inevitable "left-pad 2.0" problem. The downside is that each website serves its own copy of the libraries, defeating cross-domain caching (which browsers are starting to prevent anyway). It also requires setting up a build system. The upside of Webpack is that it does "tree shaking", which removes code that is never touched, so the browser has less to download.

Tradeoffs...


Actually Webpack has nothing to do with the left-pad issue, and it predates it. The original left-pad problem was due to it being deleted from NPM itself. It wasn't a package served by a CDN like jQuery. Its deletion from NPM prevented people from starting new projects or re-fetching dependencies, not websites from working.

And the issue of CDNs being offline or deleting packages can be solved by vendoring dependencies, which is possible without Webpack and was already the common practice for more than a decade before using CDNs for dependencies became super popular.


Right, my bad! I forgot that left-pad wasn’t a CDN issue, but a removal from npm... I think the big problem with JavaScript isn’t necessarily npm, but JavaScript’s abysmally small standard library that leads to situations like left-pad. Including left-pad into your project is easier than searching how to do something that JS’ stdlib doesn’t have.

That doesn’t mean there aren’t dumb packages like `is-array`[0] that are nothing more than `Array.isArray` with a polyfill[1] for those working with Chrome 4 or earlier. Even IE9 has `Array.isArray` according to MDN![2] And that package somehow had 59,000 downloads this past week?!

[0]: https://www.npmjs.com/package/is-array

[1]: https://github.com/retrofox/is-array/blob/master/index.js

[2]: https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...
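
For anyone curious, roughly everything a micro-package like that provides is the snippet below: the built-in, plus the classic fallback for ancient engines (a sketch, not the package's exact source).

  // Use the built-in if it exists, otherwise fall back to the old toString check.
  const isArray = Array.isArray || function (value) {
    return Object.prototype.toString.call(value) === '[object Array]';
  };

  console.log(isArray([1, 2, 3])); // true
  console.log(isArray('nope'));    // false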


With all the deprecations currently happening in browsers (often driven by Google), I think it's not unrealistic to assume that we will see a whole bunch of JavaScript becoming deprecated and removed in browser engines in the next ten years.


Every third party library you link to is going to be down in 10 years.


So just bundle them into your own source? Depending on a third party server for your site to load is bad practice even today.


Perhaps it was simpler to not import all those problems in the first place. A website that shows content is not an app, and shouldn't pay all the complexity cost.


Websites don't show content. Websites are content; the browser shows them.


Good thing we have open source, then :)


Open source doesn't really help. Like the sibling post said, you can just host those libraries, but people don't. Why? No idea.


So what is the problem? People can host those libraries and they don't because it's not a problem yet. If/when it does become one, they will.


People under deadline, boss, client, investor, competition, resume-driven-development, etc, pressure, or just plain normally, suck at abstract risk prediction.

The adjacent profitable is ever so much more compelling.



What does "open source" mean in this context? In 50 years time I seriously doubt github.com will still be the same place it is today doing the same things. Maybe older protocols will die out, making accessing content over HTTP(S) impossible, maybe P2P will just break due to advances or newer stacks taking over. Maybe the "internet" will change in some fundamental way, and breakage is part of the "innovation" process.

The way you write a comment on HN is likely to change over the course of decades, I doubt it'll still be the same HTML, HTTP, networking stack etc.


This kind of doomposting is completely irrelevant and unproductive. Sure, in 50 years computers will be different. Big shocker. What are we supposed to do? Do away with scripting for browsers and distribute markup files?


> Saying browsers won't support JS in 10 years is an idiotic claim.

I think his reductionist take on it has resulted in you missing his point. Lemme take a shot at expanding on that for you.

1. Yes, browser will run JS in 10 years. However, in 10 years, the market will have changed. Firefox might be dead, and everything might standardize on a Chromium-derived monoculture. Backwards compatibility will be kept to a modest extent but we may see a new 'quirks mode' shift in the ecosystem or other attempts to carve off backwards compatibility.

Look at how Apple uses this to their advantage to have internal agility, whereas MS uses the opposite approach and sacrifices it for stability. The former can eat the latter's lunch when it comes to technology on the cutting edge, even if the developer ecosystem and end users pay the price in subtle ways. If Google follows this lead with their monopolistic control of Chromium, we could easily have a world in 10 years where JS is slightly broken on a lot of sites that use it deeply - sites that explore the limits of the APIs, that already experience cross-browser compat issues, and so on.

The problem may only get worse over time; after all, we already have many use cases within IT to run a VM of Windows XP and IE 6 to access older devices that simply don't work any more with anything newer. If they'd been designed with simple, server-side frameworks and vanilla Web 1.0 stuff they would have aged a lot better. Isn't it possible that we're on the wrong evolutionary path here if that's the case?

Think also of "applications" written in Node that you might want to run on your own server - it might not be possible to build them anymore, the dependencies might have dried up, making them difficult to rehydrate without doing extensive surgery.

2. Archival is another layer of the problem. If your single-page app doesn't present some mechanism where a scraper can meaningfully put in a URL and get back some amount of HTML reflecting the desired article / content / etc, it may not end up in the Wayback Machine or other archive sites, which are absolutely critical civilizational infrastructure if we want to have any hope at giving future historians anything even approaching a balanced and truthful perspective on this pivotal age.

This is not an overstatement, and it's something people with all political beliefs should be able to agree with for their own reasons. Things that are published need to be retained, in ways that give us confidence their contents haven't been altered. If your site requires I execute a bunch of Javascript just to be able to see the content, then that is not guaranteed; even with a headless browser rendering the output, it's hard to know that what comes out the other end is actually a useful copy or not.

Face it: Javascript is a third-tier language that occupies its position in the landscape due to first-mover advantage and inertia. Javascript frameworks paper over the fact that browsers weren't designed as application platforms by inserting gobs of difficult-to-understand code between developers and the actual underlying system. GUI things that worked trivially 20+ years ago are difficult and end up being omitted from most systems (you know, things like sortable tables, keyboard-accessible forms with complex controls, drag-and-drop, and other 90s gold). It's a big pile of trash.


I'm sure they said the same about Flash.


I don't think anyone said that about Flash. Its closed nature was always known to be a significant drawback.


Flash and Java got stabbed in the back due to stricter security properties than JS. You can't do spying as easily with Java applets etc., which the user needs to manually approve before they run.


Heh. Tell that to Flash Cookies. https://www.cookiepro.com/knowledge/what-is-a-flash-cookie/

> Many end users are unaware that Flash cookies exist and have no idea that when they delete their browser’s HTTP cookies, Flash cookies could remain unaffected and be used to recreate deleted HTTP cookies. The recreation process, which is called respawning, is extremely controversial because it facilitates cross-browser tracking and poses privacy concerns when the use of Flash cookies is not disclosed in a website’s privacy policy.


I was a kid at the time, but I remember I had to enable Flash videos and games on a per-website basis. Did Flash or Java applets just run without the user's explicit permission? Were Flash cookies added later on?


Flash did, not sure about Java, because I never had the runtime installed. Maybe you used NoScript?

edit: If I remember correctly, at first Flash wasn't blocked by browsers, but it was abused just like JavaScript is today and because of that NoScript came along. Then I think browsers started blocking Flash by default once JavaScript took over.


Flash was a major malware injection vector, so its "stricter security properties" seem to have been misguided.


Yeah, they had their own set of problems. But I remember them as something that was "click to run".


Flash would generally run without asking; it was only click-to-run if you had a browser plugin like Flashblock.


They said the same about PHP.


Still using PHP to this day, nothing wrong with it. Maybe the most painful thing is package management.


> Maybe the most painful thing is package management.

Is Composer inadequate? Or maybe just not widely adopted by the community? I only used it for a short period of time several years ago.


Well, given all the quirks in browser compatibility I suppose it certainly can be painful to restore the software needed to run a JS-driven website that was written 5 or 10 years ago.

It would help if software was more self-contained, or at least offered the option to be that. This has nothing to do with JS specifically, of course.


From what I've seen, most of the popular JS libraries (like React, or even old-school jQuery) are well aware of the quirks in browser implementations and use polyfills to paper over them.
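
The basic shape of a polyfill, as a rough sketch (illustrative only, not the code any particular library actually ships):

    // Feature-detect first, and only patch the gap if the browser lacks
    // the standard method. Real polyfills also handle edge cases like NaN
    // and sparse arrays; this is just the idea.
    if (!Array.prototype.includes) {
      Array.prototype.includes = function (value) {
        return this.indexOf(value) !== -1;
      };
    }

    // Calling code works the same whether the native method or the
    // fallback ends up being used.
    console.log(['a', 'b'].includes('b')); // true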


What quirks? There was a time when you had to essentially make different websites for IE, Chrome, Firefox etc. Even without JS, CSS layout and HTML tags were interpreted very differently by each one. Browsers have become a lot better at adhering to standards as time goes on.


Yes, but there is absolutely no guarantee that the browser will run a JS file from X years ago flawlessly.


Don't confuse browser vendors with the people creating and abandoning frameworks left and right. I like the part of the HN guidelines that refers to something in particular being "a semi-noob illusion". The fragility of the web is like that: a semi-noob illusion that people come away with because they don't understand the layers underneath and who's responsible for what. The web has robustness and forward compatibility baked in. TC-39 in particular, which is responsible for defining "JS", even operates with the unofficial motto "don't break the Web".

I'm also fond of this comment from Roy Fielding:

> As architectural styles go, REST is very simple.

> REST is software design on the scale of decades: every detail is intended to promote software longevity and independent evolution. Many of the constraints are directly opposed to short-term efficiency. Unfortunately, people are fairly good at short-term design, and usually awful at long-term design. Most don't think they need to design past the current release.

https://roy.gbiv.com/untangled/2008/rest-apis-must-be-hypert...


Javascript is designed to be backwards compatible, always. You can visit a website made 20 years ago and all the APIs it uses will still be there.


Unless it uses some IE-, Firefox-, or Chrome-specific API, or standard APIs that have been deprecated. APIs are constantly being added and removed.


Which APIs that were part of the ECMA standard were removed? As far as I know, the only APIs that get removed are the ones that get implemented outside of the standard and are later deprecated. That's the fault of the developer, not of the technology.


Searching for site:developer.mozilla.org "deprecated" should reveal some deprecated browser APIs and JavaScript features. I think the Chrome team waits until fewer than two million sites use something before removing it. For example, sync XHR is currently on the deprecation list and will be removed once usage is low enough. If sync XHR is removed, you can no longer synchronously load modules on the web.
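
For reference, this is the kind of call that's on the deprecation path (the URL is just a placeholder, and the eval is roughly how old loaders did synchronous loading):

    // The third argument (async = false) makes the request block the main
    // thread until the response arrives, which is why browsers want it gone.
    var xhr = new XMLHttpRequest();
    xhr.open('GET', '/some-module.js', false); // false = synchronous
    xhr.send();
    if (xhr.status === 200) {
      eval(xhr.responseText); // load-and-run the module, synchronously
    }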


There's also no guarantee that browsers will render HTML and CSS as it exists today in 10 years. Maybe all of the internet should just be ASCII text then (if there even is a guarantee of that)?


We should distribute software in a way such that its dependencies are completely described. So when we pull something from the internet (be it a JS file or some HTML), the system will know what other software to pull and it will just run. The current way of doing things works most of the time, but it is fundamentally flawed, which becomes apparent when viewing stuff from a decade ago.
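
One building block for this already exists in the module system: ES modules can be imported from exact, versioned URLs, so a page at least records what it depends on. A rough sketch, where the host, library name, version, and exported function are all placeholders:

    // Inside a <script type="module">: the dependency and its exact version
    // are spelled out in the page itself, rather than implied by whatever
    // the build machine happened to have installed.
    import { render } from "https://cdn.example.com/some-ui-lib@2.3.1/index.js";
    render(document.getElementById("app"));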


Are there any HTML 2.0 documents that aren't usable 25 years later?


It has become ridiculous how many pages simply render blank without JavaScript, and many are just simple articles of text and images without any interactive functionality at all. I agree 100% with the author of this article. The whole concept of throwing Angular/Vue/React at just a blog doesn't make any sense in the first place.


Some people seem unable to embrace the fact that the web is as much an app delivery platform as it is a content delivery platform.

You simply can't get away from this fact, even if you don't like it.


And plenty of people build apps to deliver content they could just deliver using the content-delivery bits. (EDIT: or they just add random JS to content; e.g. the number of blogs that deliver an HTML page with some JS to show a loading animation, despite the content also being in the same HTML file, is surprisingly high. Although that at least doesn't interfere with archiving etc., since the content is there.)


The web, as it is today, already let us escape the walled gardens of yore... and jump straight into new ones, built on the web. And these new walled gardens are operated by much more ethically challenged companies. So all in all, the old devil was better than the new one.


Also the web as an app delivery platform seems the best / most available route to escape the walled gardens of the world...

Sometimes I wonder what exactly folks want: desktop apps only? The web as purely basic HTML?


really just a baseline of functionality, IMO. I don't expect a spreadsheet app, a game, or what have you to actually work without JavaScript, but if the purpose of the page is displaying data, I would expect it to at least offer that, even if in a "lesser" form. These days, pages that are nothing but images and text just throw blank screens without JavaScript.


What staggers me is when sites like Twitter decide that JS is now required to read a tweet... Like, it's literally some text like what you read on HN, but it needs damn JS... Why?! The entire website could work as a simple .txt file for all I care; no reason why it should be so bloated.

All sites like FB and Twitter should by law be required to work without JS at a minimum, given how important they are in our society today.


Twitter's goal is explicitly to have a single codebase that works everywhere: desktop website, mobile website, and mobile apps. (Sorry I can't find a source for this, any search with the "twitter" keyword is flooded with irrelevant results...)

It makes sense in that context; but I agree it's a shame they won't keep a small alternative frontend just to view tweets.


I think the point is: you don’t need a “codebase” to display text and images. Which is what the vast majority of web content is: text and images.


Why? Why should you be in a position to dictate to Twitter how to operate? It's a multi-million-dollar company with many smart people working away on the product. Who are you? If you just want walls of text and a completely unengaging product, stick to Hacker News.


This. When I get a blank page instead of content, I usually ignore the site completely. I do have first-party JS enabled, so if the site is actually a JS-built SPA and it serves its own JS, then it renders normally without me noticing anything. But when I have to figure out which of the fifty third-party script-serving servers I need to enable, I gladly skip the whole site. Happy to see I'm not alone here.


Why would you browse the internet this way? Why not just enable the scripts and close the site if it's annoying?


Strong opinions follow

Flatly, this is my observation, and as of the time of this post I hadn't really seen it mentioned:

JS is overused not just because of tracking and ads, though that is a big part of it on every popular website I've visited in the last 30 days (side note: thanks, uBlock Origin!).

It's also in large part because, I strongly believe, frontend developers are reinforced to think this way. I see a lot of blogs, community meetups, and conferences organized around leveraging JS and specifically the large frameworks, which is okay! However, it only reinforces the idea that JavaScript-first solutions are implicitly better, rather than emphasizing how to leverage the whole tech stack correctly. I have friends I respect very much who have largely gotten by with just a passing knowledge of CSS and HTML, and who haven't yet gained a deep understanding of when it's more appropriate to leverage those technologies over JS, let alone the trend of pushing so much work to the client (such as not even bothering to scope APIs correctly: how many times have you had to sort the results of an API request because it isn't sent over the wire sorted for its use case, even though your team controls the API?). The industry does not enforce holistic thinking when it comes to this. That is the real problem to me.
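
To make the API example concrete, here's roughly the pattern I mean (the endpoint and field names are made up):

    // Inside some async UI code: the client re-sorts data it just received,
    // even though the same team controls the API.
    const res = await fetch('/api/reports?user=123');
    const reports = await res.json();
    reports.sort((a, b) => new Date(b.createdAt) - new Date(a.createdAt));

    // The whole-stack alternative is to have the server (and its database's
    // ORDER BY) return the data already sorted, e.g.:
    //   GET /api/reports?user=123&sort=-createdAt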

Web components are somewhat of an exception to this: as API considerations go, they do attempt to strike a balance between server-side rendered content and dynamic client-side content. Our industry just isn't heading in a direction where that balance gets struck.


I mean, all it takes is one script failing to load or render, or a script not doing what it is supposed to, and your page is dead. Take a regex that doesn't accept the .xxx gTLD: that one line of code is now broken as the web changes.
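
Something like this, for example (the pattern is illustrative, not lifted from a real site):

    // An email check that hard-codes a TLD whitelist from before new gTLDs
    // existed; the line was "correct" when written and broke as the web moved.
    var oldCheck = /^[^@\s]+@[^@\s]+\.(com|net|org|edu|gov|mil)$/i;
    oldCheck.test('me@example.xxx'); // false - rejected as "invalid"
    oldCheck.test('me@example.com'); // true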

I am not sure websites are designed to be anything more than "it's here today, in this society, in this context, and may break in a few hours to a few years". Much of the "early" internet is now gone, just as FB, YT, and others will also one day be "gone".

It's entirely possible the concept of "data storage" may also one day be gone; we can't be sure the technology of today will be there in 5 years or 500 years. We'll all be long dead by that point anyway.

This message will likely be read by humans alive today, right now, and never seen again for the rest of time. Not everything needs to be archived and remembered.


"My name is Ozymandias, King of the Kings/Look on my posts, ye mighty, and despair!"


But...

We need our app to be accessible on essentially 4 different platforms. Even if we had the resources to code and maintain four native applications, getting users to update is like pulling teeth. Full page loads are out, because we don't want to re-render the whole page for things like reordering a list. There's a whole bunch of UX quality-of-life things that go away if you remove JavaScript...

Honestly, I'd prefer that our internal business app DIDN'T appear in google...


I wish there were a better alternative to Electron's method of giving each app its own copy of the browser. It makes sense for compatibility reasons (your code may only be tested on one version of Chrome, and later ones may break it), but most apps don't need that. Using the built-in Chrome copy in a "frameless" mode would be nice.

JavaScript (through TypeScript) is a very nice programming language, but it gets a horrible rap from bad or lazy programmers. It’s kind of like how PHP 5 was. Everyone still rags on JS for things that ES6 fixed just like how people rag on PHP for things that have been fixed since PHP 7.


People also blame JS (the language) for things that are the fault of NodeJS (a runtime by one group that implements an incompatible fork of JS) and NPM (which is a community at the very least, and arguably a design and development methodology). Neither of these are "JS", but loads of people (both those who spend their days steeped in GitHub issues related to NodeJS development and those looking to heap scorn upon it) will argue as if they're inseparable or synonymous, even though that's not the case at all.

Imagine if people conflated Android development with programming in Java. (This might be a more appropriate metaphor than intended, since most of the time the people who strike me as being out for blood regarding JS tend to exhibit the arrogance and nuance of someone coming straight from their university's second year Java course.)


I usually navigate the web with JavaScript disabled and encounter this a lot with articles that require scripts just to display words on a page. A really weird example of this is Engadget articles: they display an empty page if you have JavaScript disabled. The only reason is that their CSS sets the html tag to display:none. If you uncheck that CSS property in the dev tools, the content appears normally. Weird choice on their part.
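
For the curious, the failure mode boils down to something like this (a guess at the shape of the pattern, not Engadget's actual code):

    // In the stylesheet, the whole document starts out hidden:
    //   html { display: none; }
    // A script is then expected to reveal it once it has run:
    document.documentElement.style.display = 'block';
    // With scripts blocked, that line never executes, so the page stays
    // blank even though the article text is already in the HTML.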


Some are helpfully pointing out here that the web is now used for both A) content delivery and B) app delivery, and the B parts are hard to do w/o JS.

I tend to browse with JS disabled as a default (images too, that's another conversation maybe), and for sites that are really important to me, I enable it in browser settings, just for that site (sometimes temporarily, depending). I leave a couple of tabs open that I can get to very quickly for that purpose, but fortunately most of the time I can ignore sites that require JS.

Reasons include speed, convenience, and security. (I also do most browsing in separate user accounts, depending on the security level of what I am doing, and what other data that account handles.)

Edit: Sometimes for those same reasons, or for automation purposes of some tasks (like checking a particular page for a certain fact that I want to act on when it changes, such as some security updates), it's nice to be able to use text browsers like links (or wget & curl) too, and have the info available w/o requiring JS for it.


I might not be able to read this 10 years from now so I'm not going to read it now.

Bizarre argument. Seems like something you don't need to worry about but whatever floats your boat


It's the 'voting with your feet' argument.

It's a presponse to "my site is js; if you don't like it, leave it!"


The problem is there aren't enough feet to make this vote count.


The problem is that if the original blogger wants to archive a page, they need to use a different tool than curl. Honestly, such a weird hill to die on.

Curl isn't even that great to archive with anyway, is it? wget --mirror used to be the hotness.


I do understand his point. If you want content to be long-lived, requiring the execution of code to render it introduces a risk that it will become much more difficult to correctly render in future, if it's not maintained and updated.

I think it's a valid point to raise, but how much you care about it is going to depend hugely on what you're doing.


Why would most JS-heavy companies want content to be long-lived? What's the benefit to them? If they're concerned about making data accessible, then they'll open an API that they hold the keys to.


The point is that js has become a de facto way of rendering content, regardless of its nature, and in some cases this impacts usability and other non-functional requirements in ways that people don't always consider.

Obviously if a company wants tight control over their content then this may work to their benefit - although they may still find that their ability to render their own content requires non-trivial maintenance over time.

But that's a pretty narrow view of the web, and not everyone publishing on the web fits that model.


I went out of my way to exclude JS from my blog, partly for the fun of it, to see how far I can go before I need it, and partly for reasons mentioned here + optimizing for speed.

One of the things I enjoyed doing without JS was using a comma-separated tag system for posts and then filtering the post list by tag.


Did you use checkbox labels with it? I like the technique, but am concerned about accessibility.


No, I use a Python-based static site generator so I essentially generate a new page for each tag, populated with links to the posts containing that tag.


I see! This would be the standard server side / pre-rendered approach. I somehow thought you did a client side solution w/o JS.


That would be interesting, if it was possible to do it in a minimal way!


What is your automated deployment pipeline for this kind of website?


I write posts in an editor (presently nvim), generate the site, and have a git hook that basically publishes it to neocities along with pushing it to my repository. It's very much a tiny hobby blog so I haven't put much effort into the whole process.


> Because in 10 years nothing you built today that depends on JS for the content will be available, visible, or archived anywhere on the web.

What? That doesn't even make sense. Javascript and browser vendors are going out of their way to be backwards compatible (to the point of rejecting the perfect Array.prototype.flatten in favor of the ungrammatical Array.prototype.flat so as not to break websites that depend on MooTools). Is the author implying that in 10 years the javascript of today will no longer be executable by the browsers? Why?
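
For reference, the method that shipped under the compatibility-driven name:

    [1, [2, [3, [4]]]].flat(2);        // [1, 2, 3, [4]]
    [1, [2, [3, [4]]]].flat(Infinity); // [1, 2, 3, 4]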


> Because in 10 years nothing you built today that depends on JS for the content will be available, visible, or archived anywhere on the web.

> All your fancy front-end-JS-required frameworks are dead to history,

Pretending to know anything 10 years in advance is foolish.

There’s a degree of contempt here. That makes me evaluate this as a rant.

If my target audience is developers, I’ll consider using server side rendering. For normal people I doubt it matters either way.


> For normal people I doubt it matters either way.

Did you ask? Like, honest to god talk to people about how they feel about the website - not just rely on deep telemetry that mostly serves to let people lie to themselves and whitewash doing whatever they want to do as being "data-driven".

A big problem with modern technology is that the feedback from users does not reach the vendors.


It should be pretty easy to tell what ratio of hits had JS disabled. If it’s more than a tiny minority, I’d do something.

I have plenty of things to talk to users about without getting them into whatever this is (still deciding if I class this as paranoia or being cautious or some kind of elitism). Based on the level of vitriol, looks like one of the outer 2 so far.

If you want to persuade developers to do things differently, bad mouthing their tech choices is a really bad strategy.


> It should be pretty easy to tell what ratio of hits had JS disabled. If it’s more than a tiny minority, I’d do something.

You can't expect normal people to browse with JS disabled, because by definition of "normal" they don't have enough technical knowledge to do it, and to deal with the issues it causes on the modern web.

> I have plenty of things to talk to users about without getting them into whatever this is

This is the real answer to the question, "why is my computer so slow"? Regular people still mostly think "it's viruses", but it's not viruses - just web developers making websites with no consideration of end-user performance.

> If you want to persuade developers to do things differently, bad mouthing their tech choices is a really bad strategy.

I think this is a viable (even if not most efficient) strategy, because the current standard practice came from praising these choices. Counterbalancing the cargo cult through pointing out the trade-offs being implicitly made is a good idea, IMO.


> You can't expect normal people to browse with JS disabled

> This is the real answer to the question, "why is my computer so slow"?

Yes I can. That's one of my criteria to start taking this seriously. Though I could be persuaded other ways.

Do you have any data to back this up?

Do any internet security suites (whose job it is to "protect people from viruses") advocate disabling JS?

Is Apple Safari switching off JS?

Are there popular Chrome extensions that do this?

If it's really that important to you, you can probably go help make one of the above happen.

Actually I care about front-end performance a great deal. I advocate for it, as well as server side rendering.

I'm just not convinced about the evils of JS. I routinely have plenty of tabs open and if my fan starts running I close the one that's using the most CPU. Half the time it's Slack.

So far, "I've disabled JS" translates in my mind to "tech extremist". Not seeing a lot of people doing it and not believing the reasons. At least not yet.

My technical decisions are driven by efficacy, expediency, skill marketability and what I enjoy working with. If you want to change my mind you'll need to speak to those factors.


This makes sense to me for articles and the like. If the point of the page is information, it makes sense to me that the page should load content first.

OTOH, Twitter, Slack, and package tracking are consistently changing. It makes more sense for something like that to load content via JS, since there's no guarantee the page looks the same between two loads.


I agree with this, especially for online documentation. My recent pet peeve is Deno, which I love, but cannot read via my CLI browser.

- https://deno.land/manual

- https://doc.deno.land/builtin/stable

- https://deno.land/std

If the search feature fails due to me not having JS enabled, fine. But the entire documentation walled off from non-js browsers? For what purpose!?


Consider using the JustRead extension[0]

Yes, it requires JavaScript, but consider it "good" JavaScript usage to combat "bad" JavaScript usage. You can then hit Ctrl+Shift+L on Chrome, and it will get rid of all the noise on the page and display the article in a 70-ish-character-wide column. The experience without the crap is really cool.

- [0]: https://chrome.google.com/webstore/detail/just-read/dgmanlpm...


2015


has only gotten more relevant


lr;dr = Login required. Didn't Read.


plr;dr = Page load required. Didn't Read.


"Page loading" is a lie. The truth is artifical delay has been intentionally inserted. You are waititng for ad auctions, e.g., header bidding, to conclude. About 1s.

aa;dr = Ad auction. Didn't Read.


OK, sometimes it's longer than 1s


This is uncanny - I was just now going through some historic code snippets for the holy grail layout, and was thinking of Tantek and his box model hack[1]. I come here, and see this.

Coincidence? Or, is the world of technology really that small? :)

[1] http://tantek.com/CSS/Examples/boxmodelhack.html


I have no problem if it is for a SPA or a site that needs it to function, but it starts to get annoying when sites use Disqus or whatever and I have to enable JS to see the solution for something I searched for, or even worse, a blog site that won't load unless JS is enabled... to show text... I don't understand that one at all.


If you're presenting Web 1.0 server-side apps as an alternative to JavaScript, you're trading accessibility for archivability. With a modern SPA, I can have the user download the app once and cache it in a service worker. All further requests can be made as minimal as possible using JSON or GraphQL requests. This matters, a lot, to users connecting over slow cell connections in metered bandwidth. I care a _lot_ more about making my sites accessible to users in developing countries than I do about y'all HN folks that refuse to run JavaScript on principle. Figure it out.
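
Concretely, the caching half of that is only a few lines of service worker (the file names are placeholders):

    // Cache the app shell once, at install time...
    self.addEventListener('install', (event) => {
      event.waitUntil(
        caches.open('app-shell-v1').then((cache) =>
          cache.addAll(['/', '/app.js', '/app.css'])
        )
      );
    });

    // ...then serve it from the local cache on every visit, so only the
    // small JSON/GraphQL data requests still touch the network.
    self.addEventListener('fetch', (event) => {
      event.respondWith(
        caches.match(event.request).then((hit) => hit || fetch(event.request))
      );
    });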

Sites with JavaScript are somewhat more difficult to archive, sure. We've done a pretty good job though. I'd be curious to see if anyone here can actually present a site that doesn't archive in the Wayback Machine or archive.today. We've thankfully developed better tools for this sort of thing than curl, that are entirely capable of archiving script tags (yes, even from CDNs!) and pertinent async HTTP requests.

Finally, server-side rendering obviates this problem entirely. Write your same React app, but also have the server return some coherent HTML content for `curl`. Next.js and others do this automatically. Turns out the solution was more JavaScript the whole time.
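
Hand-rolled, the idea looks roughly like this (Express and the toy component are stand-ins for whatever your framework actually generates):

    const express = require('express');
    const React = require('react');
    const { renderToString } = require('react-dom/server');

    // A plain React component, written without JSX so it runs as-is in Node.
    const Article = (props) =>
      React.createElement('article', null,
        React.createElement('h1', null, props.title),
        React.createElement('p', null, props.body));

    const app = express();
    app.get('/post/:id', (req, res) => {
      // In a real app this would come from a database; hard-coded here.
      const post = { title: 'Hello', body: 'Readable without JS.' };
      const html = renderToString(React.createElement(Article, post));
      // curl and archive crawlers get real markup; the client bundle can
      // hydrate the same component afterwards for interactivity.
      res.send('<!doctype html><div id="root">' + html + '</div>' +
               '<script src="/client.js"></script>');
    });
    app.listen(3000);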


> I'd be curious to see if anyone here can actually present a site that doesn't archive in the Wayback Machine or archive.today.

* Reddit: https://web.archive.org/web/20210106020050/https://www.reddi... and https://archive.vn/https://www.reddit.com/

* Youtube videos are understandably not archived because of their size; but the rest of the website (comments, collections, ...) is barely usable because of the consent popup https://web.archive.org/web/20201112024051if_/https://www.yo...

* Microsoft support: https://web.archive.org/web/20210109125722/https://support.m...

* Instagram, even before it was all login-walled

* Facebook

* Amazon product pages are partially broken (no images, the rest loads without JS)


> Because in 10 years nothing you built today that depends on JS for the content will be available, visible, or archived anywhere on the web.

That's what web archiving technologies are for. See for example: https://github.com/webrecorder.


KaTeX is a pretty popular way to display math on a website. It's written in js. Sure, you can read the raw LaTeX, but images of math are not only strange, but also unreadable.

Does the author discount this kind of use as well?

https://katex.org
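
Worth noting: KaTeX can also be run ahead of time on the server, so the math arrives as plain HTML and CSS and no client-side JS is needed just to read it (the page still needs KaTeX's stylesheet and fonts). A rough sketch in Node, assuming katex is installed from npm:

    const katex = require('katex');

    // Render a TeX expression to an HTML string at build time; the result
    // can be embedded straight into a static page.
    const html = katex.renderToString('c = \\pm\\sqrt{a^2 + b^2}', {
      throwOnError: false
    });
    console.log(html); // <span class="katex">...</span>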


I really like Firefox's reader mode, and Pocket. If you run a content-based website, this is what you're competing against. There are very few ways to improve upon that reading experience. It's just nicely formatted text, adjusted to your preferences.


By his logic, all the games and animations originally built in Flash should be lost to the sands of time now that Flash support has ended and Flash content is blocked within the Flash player. But that didn't happen. Ports were made to WebGL, and emulators were built to make it possible to continue to play these games and animations outside of a browser. People thought they were so important that they made efforts to preserve them. They were anything but ignorable.

I think JS is quite fine by comparison.


This guy sells a book. But it comes with a DVD so I can't read it.


Probably what will actually happen is that archive.org will evolve to handle SPAs by recording and replaying server requests.


jb;dr = javascript bashing; didn't read.


best kind of post. short, to the point, and true, backed with even shorter links.


You did notice the article is from 2015?


It aged well... for some reason.


Author is conflating bad design with use of JavaScript.


What happened to good old accessibility? If a blind person can't access content on a website using a text browser, then the site goes into the trash.


My understanding is that text readers are actually quite good at reading from the assembled webpage


They are, and nearly all people with disabilities browse with JavaScript enabled.

https://webaim.org/techniques/javascript/

"Just because JavaScript is used on a page does not mean that the page is inaccessible. In many cases, JavaScript can be used to greatly improve accessibility and optimize the user experience. Some aspects of accessibility compliance would be difficult without JavaScript, especially for complex web applications."

"It is a misconception that people with disabilities don't have or enable JavaScript, so it is thus acceptable to present an inaccessible scripted interfaces, so long as there is an accessible, non-JavaScripted version available. A survey by WebAIM of screen reader users found that 99.3% of respondents had JavaScript enabled. The numbers are even higher for users with low vision or motor disabilities."


Yes, modern screen readers work off the actual active page DOM, so they don't mind if the page is generated by JS per se.

At the same time, JS-heavy sites often have dynamic elements that are difficult for screen readers to handle (e.g. an element pops up somewhere: easy to see for a sighted user, tricky for a screen reader to decide if it's important, and if it moves focus there the user loses their position on the page, which is confusing), and they are equally as likely as non-JS pages to not have been built with accessibility in mind, while often being more complex, which makes the structure harder for both the screen reader software and the user to understand.


Why would it go to the trash? Unless they get sued in a jurisdiction with relevant laws, the site will be fine, because most visitors are not blind.



