An Open Letter to Google on Introducing ITTI to Core Web Vitals: The Uncanny Valley Between LCP and Interactivity

A constructive note to Google, the Chrome Web Performance team, and anyone else who has felt the gap between “the page looks ready” and “the page actually works.” The gap already has a name. It does not yet have a measurement.
On Puma’s mobile homepage, under Google’s own 1.6 Mbps / 150 ms throttling profile, my tool recorded 100 ignored taps on the hamburger menu before it finally responded – 2.67 seconds after LCP. The Core Web Vitals report for that page is green.
This open letter is not about inventing a metric – it is about joining a conversation that has been happening quietly for years and pushing it toward something measurable. The UX community has a name for this gap already: the uncanny valley of web performance. What we do not have is an official signal, a pair of concrete sub-metrics, or a reason for developers to optimize for it at industry scale – and I want to argue, with data, that those things are overdue.
The Valley Already Has a Name
The term uncanny valley, applied to web performance, has been circulating in framework and patterns literature for years. Patterns.dev describes it as “the period of time between the first render of the app, and the moment your app becomes interactive.” Ankit Sharma’s February 2026 Syncfusion post on incremental hydration is more direct: “This gap, the ‘Uncanny Valley’ of web performance, occurs between LCP (Largest Contentful Paint) and TTI (Time to Interactive)… the UI appears complete, but it feels dead.” Other writers – Shamim Bin Nur, various voices in the Angular and Astro communities – have arrived at the same language independently.
So the concept is not mine, and I am not going to pretend it is.
Many UX-aware engineers already see the problem. What is missing is not awareness. What is missing is a metric – a shared number that turns a known phenomenon into something developers and monitoring tools can actually optimize against.
Other names for the same territory are starting to appear too. Some industry writers have floated “Engagement Reliability” (ER) – how consistently interactive elements respond across devices, sessions, and wildly different network conditions, which is a perfectly legitimate question in its own right. ER points at a larger problem, session-wide reliability; the uncanny valley is narrower, specifically the opening seconds after LCP. Both deserve attention. This letter is about the opening seconds, because that is where first impressions form.
What Happened Between Puma and Nike
To put numbers on the valley, I built a small open-source tool (github.com/selimkoc/itti) that reproduces the user experience directly. It opens a page under Google’s standard mobile throttling profile – 1.6 Mbps downlink and 150 ms latency, the same conditions Lighthouse’s mobile audit uses – and then, starting at LCP, it taps a target element every 100 ms until something actually happens.
I ran it against two well-resourced brand sites. The results surprised me in opposite directions.
| Site | LCP | Menu TTI | Rage-Click Window | Ignored Taps |
|---|---|---|---|---|
| Nike.com | 12.66s | 6.98s | -5.68s | 15 |
| Puma.com | 9.82s | 12.49s | +2.67s | 100 |
Source: itti test tool, github.com/selimkoc/itti
Read that table carefully. Nike’s hamburger menu is usable roughly 5.7 seconds before the page’s LCP completes. Impressive hydration discipline – except that the site is interactive before it has finished telling the user what it is. Puma’s LCP arrives faster, but the menu does not respond for another 2.67 seconds and swallows 100 synthetic taps before it does. A real user tapping once or twice a second would see nothing happen, nothing happen, nothing happen – and by then, their impression of the brand is already formed.
Two top-tier sites. Two green Core Web Vitals reports. Two radically different first seconds – one interactive before it is dressed, the other dressed before it is interactive. Our current metrics cannot distinguish them.
I am not picking on either brand. These are competent sites built by capable teams. The point is structural: both of these pages pass their Core Web Vitals audit, and the common assumption that passing CWV means a site feels fast in its first second is simply not true. Without a standard for what first-second interactivity looks like, both outcomes are considered equally good.
They are not equally good.
Why TTI Got Retired, and Why That Might Have Been Premature
Google’s original Time to Interactive (TTI) was trying to measure something adjacent to the uncanny valley. Lighthouse 10 retired it in early 2023. The stated reasons were fair: TTI was too sensitive to outlier long tasks, too easily skewed by late-loading third-party code, and too volatile for field data – all real problems, all defensible grounds for retirement. The recommended replacement is INP.
INP is excellent work and I do not want to understate that. But there is a gap INP structurally cannot close:
- INP requires a successful interaction to fire. If the user taps and no listener is attached yet, the event may not register as an interaction at all. The rage-click window is, by definition, the window where INP has nothing to measure.
- INP reports a tail value across the whole session. A site that is sluggish on the first three taps and fast afterward can still post a respectable INP. The first-interaction experience is averaged into invisibility.
- INP has no anchor to LCP. It cannot express that "the element the user can see is not yet the element the user can use" – that relation is simply not part of its model.
Retiring TTI was a reasonable call. Declining to replace what it was trying to measure is the gap I am asking about. Either bring back a narrower, more disciplined successor, or extend INP with a first-interaction dimension.
The form matters less than the fact.
Two Sub-Metrics, and a Tool to Measure Them: ITTI or IINP
Here is where I want to make a concrete, measurable contribution, because “there is a gap” is not an argument a standards body can act on. The gap needs sub-metrics.
Menu TTI – the time from navigation start until a specific, user-facing element inside the initial viewport first responds to input. Not page-wide quiet. Not a tail aggregate across the session. Just: when does the button actually start working?
Rage-Click Window – the delta between LCP and Menu TTI. Negative values: the element is interactive before the page finishes painting (Nike, above). Positive values: the seconds during which users tap, tap, and nothing happens (Puma, above). I picked the name deliberately. It describes the user’s emotional state, not the engineer’s clock – and that naming choice is half the point.
A supporting count – ignored taps – makes the severity legible. “100 taps ignored” reads differently than “2.67 seconds of hydration delay.” The former is what the user feels. The latter is what the dashboard sees. We need both.
LCP taught developers to ask: “is the largest element painted yet?” The Rage-Click Window asks: “is the largest element alive yet?” Same discipline, one level deeper.
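The arithmetic behind the two sub-metrics is small enough to write down. A minimal sketch follows; the 100 ms cadence matches the tap interval described above, but the real itti counter can legitimately differ (for example, if the tool keeps tapping until the menu visibly responds rather than until a listener first fires):

```javascript
// Sketch of the sub-metric arithmetic, not the itti tool's exact logic.
// All inputs are milliseconds from navigation start.
function valleyMetrics({ lcp, menuTti, tapIntervalMs = 100 }) {
  const rageClickWindow = menuTti - lcp; // negative: interactive before LCP
  const ignoredTaps =
    rageClickWindow > 0 ? Math.floor(rageClickWindow / tapIntervalMs) : 0;
  return { rageClickWindow, ignoredTaps };
}

// Puma-like numbers from the table above:
console.log(valleyMetrics({ lcp: 9820, menuTti: 12490 }));
// → { rageClickWindow: 2670, ignoredTaps: 26 }
```

A negative window, as in the Nike row, simply yields zero ignored taps: there is nothing to ignore once the element already works.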
Whether these end up inside a revived TTI, a new field metric – call it IINP, ITTI, or whatever the standards process produces – or as a new dimension of INP, is a decision for the Chrome team and the Web Performance Working Group, not for me.
I just want the sub-metrics to exist somewhere.
To make those sub-metrics concrete rather than theoretical, the tool at github.com/selimkoc/itti implements exactly this measurement. The workflow is direct. A Node script opens a page under 1.6 Mbps / 150 ms throttling, captures FCP and LCP via the PerformanceObserver API, then hammers a target element every 100 ms starting at LCP until the element finally responds. Output: FCP and LCP per Chrome’s own definitions, Menu TTI as the first successful response, Rage-Click Window as Menu TTI minus LCP, and an ignored-taps counter to make hydration delay viscerally legible.
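The loop itself can be sketched without a browser. In this deterministic simulation, `pageRespondsAt` is a stand-in for the moment the element's listener starts working; the real tool has to discover that moment by tapping a live page:

```javascript
// Deterministic simulation of the tap loop described above.
// The real tool taps a live DOM element under throttling; here
// `pageRespondsAt` (ms from navigation start) replaces the browser.
function simulateTapLoop({ lcp, pageRespondsAt, tapIntervalMs = 100 }) {
  let ignoredTaps = 0;
  let t = lcp; // tapping starts at LCP
  while (t < pageRespondsAt) {
    ignoredTaps += 1;     // tap lands, nothing happens
    t += tapIntervalMs;   // wait and tap again
  }
  // t is now the first tap at or after the element became responsive.
  return { menuTti: t, ignoredTaps };
}

console.log(simulateTapLoop({ lcp: 9820, pageRespondsAt: 12490 }));
// → { menuTti: 12520, ignoredTaps: 27 }
```

If the element responds before LCP, the loop never runs: zero ignored taps, and the first tap at LCP succeeds immediately.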
It is not a standard. It is not trying to be one. Run it against a single URL or a batch file. Disagree with the methodology. Fork it. File issues telling me the measurement is wrong or incomplete – the entire reason I published it in the open is so the conversation about how to measure the valley can happen in public, with actual code to argue over.
The First Second Belongs to the User, Not the Designer
One quiet assumption in performance optimization is that we already know what the user will do first. The hero slider. The big CTA. The brand video. That is what marketing designed the page around, so that is what we prioritize.
It is also very often wrong. Real users arrive with their own plans:
- Some type into the search bar and hit submit before the slider has reached its second image.
- Some open the hamburger because they already know which category they want.
- Some tap the logo, just to confirm they landed on the right site.
- Some hit the language switcher or the cart icon.
- And a fair share never look at the hero at all – they scroll straight past the thing the marketing team built the page around.
Menu TTI has to account for this. It cannot assume a single “correct” first interaction. A useful version of the metric would report the worst case across every interactive element inside the initial viewport – menu, search, logo, cart, language switcher, hero controls. All of them. Anything narrower measures the designer’s intent, not the user’s behavior.
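A worst-case report of that kind is a one-liner over per-element readiness times. The element names and millisecond values below are illustrative, not measurements:

```javascript
// Given per-element readiness times (ms from navigation start),
// report the slowest interactive element in the initial viewport.
// Names and numbers here are hypothetical, not measured data.
function worstCaseMenuTti(elementTtis) {
  let worst = { element: null, tti: -Infinity };
  for (const [element, tti] of Object.entries(elementTtis)) {
    if (tti > worst.tti) worst = { element, tti };
  }
  return worst;
}

console.log(worstCaseMenuTti({
  hamburger: 12490, search: 8100, logo: 3900, cart: 9700,
}));
// → { element: 'hamburger', tti: 12490 }
```

The metric's headline number would be that worst element, so one lazy-loaded drawer cannot hide behind a fast logo.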
Fixing the Valley Is a Build-Time Decision
What surprised me most is how unglamorous the fix is. It does not require a new framework. No hydration rewrite. No demand to ship less JavaScript overall.
It requires one build-time decision: the CSS and JavaScript that power the interactive elements inside the initial viewport should load alongside – or slightly before – the LCP element. Not after. Not deferred into a late chunk. Not lazy-loaded on first interaction.
Developers already make this decision for LCP. The hero image gets a preload hint, the hero font gets priority, the critical CSS for the above-the-fold layout is inlined. The tooling exists. The community consensus exists. We simply have not extended the same discipline to the event listeners, the menu drawer logic, and the hydration path for the interactive components that live inside the same viewport.
Addy Osmani, who leads web performance work on the Chrome team, has been making a version of this argument for years: the JavaScript most responsible for perceived slowness is usually the code blocking the first moments of interactivity, not the total bundle size. It is the same idea from a different angle. Prioritizing the listeners, menu logic, and hero-component hydration that sit inside the initial viewport is the discipline he has been describing from the platform side – and the metric I am asking for would give the rest of us a way to check whether we are actually practicing it.
Selective hydration, partial hydration, and islands architecture already implement this pattern in React, Vue, Svelte, Astro, and Qwik. Wix has publicly reported roughly 40% INP improvements from selective hydration alone. The techniques are mature. What is missing is a metric that makes using them a priority rather than a preference.
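One reason these architectures degrade gracefully inside the valley is event buffering and replay: a tiny early script records input that arrives before listeners attach, then replays it once hydration completes, so early taps are deferred rather than dropped. A DOM-free sketch of the pattern – the names here are illustrative, not any framework's real API:

```javascript
// Sketch of the "buffer now, replay after hydration" pattern.
// createReplayBuffer and attach are hypothetical names, not a
// real framework API; real implementations work against the DOM.
function createReplayBuffer() {
  const queue = [];
  let handler = null;
  return {
    dispatch(event) {
      if (handler) handler(event); // hydrated: handle immediately
      else queue.push(event);      // not yet: buffer instead of dropping
    },
    attach(realHandler) {
      handler = realHandler;
      for (const event of queue.splice(0)) handler(event); // replay in order
    },
  };
}

const taps = [];
const menu = createReplayBuffer();
menu.dispatch({ t: 100 });          // before hydration: buffered
menu.dispatch({ t: 250 });
menu.attach((e) => taps.push(e.t)); // hydration completes: replay
menu.dispatch({ t: 900 });          // after: handled immediately
console.log(taps); // → [100, 250, 900]
```

Replay narrows the felt cost of the valley; it does not eliminate it, because a tap buffered for two seconds still feels like a dead button until the replay fires.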
The directional evidence is not subtle either. The annual HTTP Archive Web Almanac has tracked, year over year, that the median weight of JavaScript shipped to mobile pages keeps growing and that the share of mobile origins passing every Core Web Vital remains a minority. The industry is shipping more interactive code than it is successfully making interactive in time. Honestly, that trend has been visible for a decade now, and we are still arguing about the same tradeoffs. A metric for the opening seconds would shift the curve in a way no blog post or framework launch has managed to.
What I Am Actually Asking For
I am asking for a conversation, not a mandate. Specifically:
- Name the valley officially. The UX community already calls it the uncanny valley. The Chrome Web Performance team, together with the W3C Web Performance Working Group, naming it – or endorsing an existing name – is the step that turns a concept into a target developers optimize for.
- Standardize a sub-metric or two. Menu TTI and Rage-Click Window are a starting point, not a demand. If the standards process produces something cleaner, better named, or better grounded in the existing Event Timing API – I will happily retire my names in favor of theirs.
- Start as a diagnostic signal, not a ranking signal. That is where TTI started. Let field data decide whether it is stable and meaningful before it gets attached to search consequences.
- Invite the frameworks to the table. Hydration order is owned more by React, Next.js, Vue, Angular, Svelte, and Astro than by individual site authors. Any honest fix has to involve them directly.
If the answer is “INP already covers this if you analyze it a certain way,” I want to hear that argument laid out. If instead it is “we considered this and decided not to measure it, here’s why,” that counts too – I learn something either way. Best case: the answer is “interesting – let’s talk.” That is the outcome I am hoping for.
I am writing this publicly, with a tool attached and numbers from real sites, because I would rather be wrong in the open than quietly right. Users who close a tab after 100 ignored taps do not file bug reports. Someone has to.
Frequently Asked Questions
Are you claiming to have invented the “uncanny valley” of web performance?
No. The term has been in circulation in hydration and SSR literature for years – at Patterns.dev, at Syncfusion, across the Angular, React, and Astro communities. My contribution is not the name. My contribution is measurement: two concrete sub-metrics (Menu TTI [or ITTI, IINP] and Rage-Click Window) and an open-source tool that produces reproducible numbers for real sites under Google's own testing conditions.
How is this different from the TTI that Lighthouse retired?
Deliberately narrower. The original TTI measured main-thread quiet across the entire page, which made it sensitive to any slow third-party script and too volatile to be reliable. The Menu TTI I am proposing is scoped to a single user-facing element inside the initial viewport and anchored to LCP. It shares an abbreviation family with TTI and nothing else.
Why not just extend INP to cover first-interaction readiness?
That is a perfectly reasonable path. If the Chrome team concludes that a first-interaction INP sub-metric is the cleanest way to expose the valley, I would support it. The technical constraint is that INP requires a successful interaction event to fire – and the rage-click window is the window where events often do not fire at all because listeners are not attached yet. That may require a different primitive than INP currently uses. The standards process is the right place to resolve that.
Why 1.6 Mbps and 150 ms latency?
Those are Lighthouse’s mobile audit defaults – the Slow 4G profile every CWV tool already uses. Deviating from them just makes cross-site numbers noisier.
Doesn’t Google already track “Engagement Reliability” (ER)?
The term is appearing in industry writing, and ER seems to describe something real: session-wide reliability of interactions. I see ER as a larger envelope than the uncanny valley – it asks whether interactions work every time across a session, whereas the valley is specifically about the opening seconds. Both are worth measuring. This letter is about the opening seconds because that is where first impressions are formed.
Will this just push developers to ship less JavaScript?
In many cases, yes – and that is a welcome side effect. But the metric itself does not demand less JavaScript. It demands better-ordered JavaScript: interactive code for the initial viewport prioritized over everything else. A build-time decision. A framework-level one too.
What if Google doesn’t pick this up?
Then the uncanny valley stays what it is today: a known phenomenon with no official metric. Practitioners can still measure their own sites using the tool, and they should. But without an official signal from the Core Web Vitals program, the industry pattern of “optimize LCP, defer everything else” will not shift at scale, because the incentive structure will not shift. I could be wrong about how decisive that official signal needs to be – maybe the frameworks get there first through incremental hydration adoption, and the metric follows the practice rather than leading it – but my bet would still be on the metric. The CWV program is the only mechanism in web performance that moves a million developers at once. That is why this letter is addressed to Google specifically – not because the observation is theirs to own, but because the consequence of endorsing or ignoring it will be felt across the entire web.
References
- Progressive Hydration – patterns.dev. Describes the period between first render and interactivity as the “uncanny valley” and outlines hydration strategies to escape it.
- Ankit Sharma, “Incremental Hydration in Angular” (Syncfusion, Feb 2026). Frames the gap between LCP and TTI explicitly as the uncanny valley of web performance.
- Shamim Bin Nur, “Server-Side Rendering, Uncanny Valley, and Hydration”. Independent use of the same framing.
- Time to Interactive (TTI) – web.dev. The original lab metric and the reasoning for its retirement from Lighthouse 10.
- Interaction to Next Paint (INP) – web.dev. The current Core Web Vital for responsiveness and its methodology.
- INP Becomes a Core Web Vital (March 2024). Google’s announcement of INP replacing FID.
- Wix Engineering – 40% Faster Interaction via Selective Hydration. Real-world production evidence that the valley is addressable.
- Addy Osmani – Google Chrome web performance team; long-running work on JavaScript cost, hydration discipline, and prioritizing above-the-fold interactivity. addyosmani.com
- HTTP Archive (annual). Web Almanac. State-of-the-web report covering JavaScript weight, framework adoption, and CWV pass rates by device and industry. almanac.httparchive.org
- W3C Web Performance Working Group. The standards body responsible for Navigation Timing, Resource Timing, Event Timing, and related specs that any ITTI/IINP-style metric would build on.
- itti test tool – github.com/selimkoc/itti. Open-source measurement tool for Menu TTI and Rage-Click Window.
- Speed Matters. Earlier writing on why web performance is a business decision.
Written in good faith, with respect for the people and teams who built Core Web Vitals, and genuine optimism about what this community can still add to it.