System Design

Design a News Feed UI

medium · architecture
Step 1 — Clarify the scope

The first thing you do in a frontend system design interview is ask questions. Never jump straight to components.

Candidate: Is this a mobile web or desktop app, or both?
Interviewer: Both. Mobile is the priority.
Candidate: What kind of items appear in the feed — text, images, videos?
Interviewer: All three. Assume a mixed feed like Twitter or LinkedIn.
Candidate: Is the feed infinite scroll or paginated?
Interviewer: Infinite scroll.
Candidate: Does the feed update in real-time while the user is reading, or only on refresh?
Interviewer: New posts can appear at the top — show a "N new posts" banner, don't auto-scroll.
Candidate: Should posts support interactions — likes, comments, shares?
Interviewer: Yes, at least likes and comments.
Candidate: How many posts are visible above the fold?
Interviewer: About 2–3 on mobile.
Simple Analogy

Think of a news feed like a stack of printed newspapers being loaded onto a truck. You don't load all of them at once — you load the first few, drive to the next stop, pick up more. The truck (your browser) only carries what it needs right now. Carrying the full warehouse (rendering 1000 posts at once) makes the truck too slow to move.


Step 2 — High-level component architecture

Start with a box diagram of the UI before talking about any implementation detail.

News Feed — Component Tree
Visual component flow from page shell to reusable UI blocks

App
└─ FeedPage
   ├─ FeedHeader — sticky top bar (logo, notifications, avatar)
   ├─ NewPostsBanner — "5 new posts" pill (appears on new data)
   └─ FeedList — the virtualized scroll container
      ├─ FeedItem (×N) — individual post card
      │  └─ PostHeader · PostBody · PostActions
      ├─ FeedSkeleton — shown while loading next page
      └─ ScrollSentinel — invisible div at the bottom; triggers load

Why this structure?

  • FeedList owns the data and scroll state — children just receive props
  • FeedItem is a pure presentational component — easy to test and reuse
  • ScrollSentinel decouples scroll detection from rendering logic

Step 3 — Data fetching strategy

Cursor-based pagination (not offset)

Never use ?page=2&limit=20 offset pagination for a live feed. Here is why:

TEXT
Timeline:
  User opens feed → posts [1..20] loaded (page 1)
  Post #3 gets deleted
  User scrolls → loads page 2 with offset=20 → posts [22..41]
  The offset shifted: former post #21 slid into page 1's range
  → User misses post #21 (or sees a duplicate if a post was inserted instead)
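The failure is easy to reproduce in miniature. A sketch with in-memory data (names are illustrative):

```typescript
// Simulate offset pagination over a feed that changes between requests.
const feed = Array.from({ length: 40 }, (_, i) => i + 1); // post IDs 1..40

const pageByOffset = (posts: number[], page: number, limit: number) =>
  posts.slice(page * limit, (page + 1) * limit);

const page1 = pageByOffset(feed, 0, 20);          // posts 1..20
const afterDelete = feed.filter(id => id !== 3);  // post #3 deleted server-side
const page2 = pageByOffset(afterDelete, 1, 20);   // starts at former post #22

// Post #21 appears in neither fetched page — silently skipped.
```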

Use a cursor instead:

TEXT
GET /api/feed?cursor=eyJpZCI6MTIzfQ==&limit=20
Response: { items: [...], next_cursor: "eyJpZCI6MTAzfQ==" }

The cursor encodes the last-seen post ID (usually base64). The server returns posts strictly before that ID. No gaps, no duplicates, even as new posts arrive.
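As a sketch of what that opaque string might contain (the base64-of-JSON shape is an assumption — clients should treat real cursors as opaque and never parse them):

```typescript
// Encode the last-seen post ID into an opaque cursor (server-side concern).
// The exact format is a server implementation detail.
function encodeCursor(lastSeenId: number): string {
  return Buffer.from(JSON.stringify({ id: lastSeenId })).toString('base64');
}

function decodeCursor(cursor: string): number {
  return JSON.parse(Buffer.from(cursor, 'base64').toString('utf8')).id;
}

// encodeCursor(123) → "eyJpZCI6MTIzfQ==" — the cursor from the example above
```

The client's only job is to echo `next_cursor` back on the next request.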

React Query implementation

TSX
import { useInfiniteQuery } from '@tanstack/react-query';

function useFeed() {
  return useInfiniteQuery({
    queryKey: ['feed'],
    queryFn: ({ pageParam }) =>
      fetch(`/api/feed?cursor=${pageParam ?? ''}&limit=20`).then(r => r.json()),
    initialPageParam: '', // required in React Query v5 — empty cursor = first page
    getNextPageParam: (lastPage) => lastPage.next_cursor ?? undefined,
    staleTime: 30_000, // treat data as fresh for 30s — don't refetch on tab focus
  });
}

useInfiniteQuery accumulates all pages in data.pages[]. You flatten them for rendering:

TSX
const allPosts = data?.pages.flatMap(page => page.items) ?? [];

Scroll sentinel with IntersectionObserver

TSX
import { useEffect, useRef } from 'react';

function ScrollSentinel({ onIntersect }: { onIntersect: () => void }) {
  const ref = useRef<HTMLDivElement>(null);

  useEffect(() => {
    const observer = new IntersectionObserver(
      ([entry]) => { if (entry.isIntersecting) onIntersect(); },
      { rootMargin: '200px' } // trigger 200px before the user hits the bottom
    );
    if (ref.current) observer.observe(ref.current);
    return () => observer.disconnect();
  }, [onIntersect]);

  return <div ref={ref} />;
}

// Usage in FeedList:
<ScrollSentinel onIntersect={() => {
  if (hasNextPage && !isFetchingNextPage) fetchNextPage();
}} />

rootMargin: '200px' gives you a 200px look-ahead, so posts load before the user reaches the bottom — they never see a spinner mid-scroll.


Step 4 — Virtual scrolling (performance)

Rendering 500 DOM nodes simultaneously destroys scroll performance. A 60fps scroll requires each frame to complete in under 16ms. With 500 complex post cards, that's impossible.

Senior Perspective

Virtual scrolling (also called "windowing") keeps only the visible rows in the DOM — usually 5–10 rows above and below the viewport. As you scroll down, elements at the top are recycled and re-populated with new data. The DOM node count stays constant at ~20–30 regardless of list size. Libraries: react-window (lightweight) and @tanstack/react-virtual (formerly react-virtual; headless, hooks-based, more flexible).

When to apply virtual scrolling

List size        Approach
< 100 items      Plain map — no virtualization needed
100–500 items    Consider it if items are complex (media, interactions)
500+ items       Always virtualize
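Under the hood, windowing reduces to index arithmetic. A fixed-row-height sketch (illustrative — libraries also handle variable heights, caching, and scroll anchoring):

```typescript
// Compute which list indices should be mounted for a given scroll offset.
// Overscan rows above/below the viewport smooth out fast scrolling.
function visibleRange(
  scrollTop: number,
  viewportHeight: number,
  rowHeight: number,
  itemCount: number,
  overscan = 5,
): { start: number; end: number } {
  const first = Math.floor(scrollTop / rowHeight);
  const last = Math.ceil((scrollTop + viewportHeight) / rowHeight);
  return {
    start: Math.max(0, first - overscan),
    end: Math.min(itemCount - 1, last + overscan),
  };
}
```

Only rows in `[start, end]` are rendered, each absolutely positioned at `index * rowHeight` inside a spacer of total height `itemCount * rowHeight`.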

Basic react-window setup

TSX
import { useRef } from 'react';
import { VariableSizeList } from 'react-window';

// Posts have variable heights (text-only vs image vs video)
// so we use VariableSizeList, not FixedSizeList
function VirtualFeed({ posts }: { posts: Post[] }) {
  const listRef = useRef<VariableSizeList>(null);
  const heightCache = useRef<Record<string, number>>({});

  const getHeight = (index: number) =>
    heightCache.current[posts[index].id] ?? 300; // default estimate

  const onItemRendered = (id: string, height: number) => {
    if (heightCache.current[id] !== height) {
      heightCache.current[id] = height;
      listRef.current?.resetAfterIndex(0); // tell react-window to recalculate
    }
  };

  return (
    <VariableSizeList
      ref={listRef}
      height={window.innerHeight}
      itemCount={posts.length}
      itemSize={getHeight}
      width="100%"
    >
      {({ index, style }) => (
        <div style={style}>
          <FeedItem
            post={posts[index]}
            onHeightChange={(h) => onItemRendered(posts[index].id, h)}
          />
        </div>
      )}
    </VariableSizeList>
  );
}

The tricky part with variable-height virtualization: you need to measure each item after it renders (use ResizeObserver inside FeedItem) and report back.
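The cache half of that dance is plain logic that can live outside React and be tested on its own. A sketch (the `HeightCache` name and shape are illustrative):

```typescript
// Cache measured heights by post ID; unmeasured rows fall back to an estimate.
// offsetOf() is the prefix-sum that windowing libraries derive internally.
class HeightCache {
  private heights = new Map<string, number>();
  constructor(private estimate = 300) {}

  // Returns true when the stored height changed — the caller only needs
  // to reset the virtual list (resetAfterIndex) in that case.
  set(id: string, height: number): boolean {
    const changed = this.heights.get(id) !== height;
    if (changed) this.heights.set(id, height);
    return changed;
  }

  get(id: string): number {
    return this.heights.get(id) ?? this.estimate;
  }

  // Pixel offset of the row at `index`, given the ordered list of IDs.
  offsetOf(ids: string[], index: number): number {
    let offset = 0;
    for (let i = 0; i < index; i++) offset += this.get(ids[i]);
    return offset;
  }
}
```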


Step 5 — Real-time new posts banner

You don't auto-inject new posts into the feed — that would jump the user's scroll position. Instead, show a banner.

TSX
function useNewPostsPolling(latestKnownId: string) {
  const [newCount, setNewCount] = useState(0);

  useEffect(() => {
    const interval = setInterval(async () => {
      const res = await fetch(`/api/feed/new-count?since=${latestKnownId}`);
      const { count } = await res.json();
      setNewCount(count);
    }, 15_000); // poll every 15s — cheap, no WebSocket needed

    return () => clearInterval(interval);
  }, [latestKnownId]);

  return newCount;
}

// In FeedPage:
{newCount > 0 && (
  <button
    onClick={() => { window.scrollTo({ top: 0, behavior: 'smooth' }); refetch(); }}
    className="fixed top-16 left-1/2 -translate-x-1/2 z-50 px-4 py-2 bg-teal-500 rounded-full text-white text-sm font-semibold shadow-lg"
  >
    ↑ {newCount} new posts
  </button>
)}
Say This in Your Interview

"For real-time new post detection, I'd use polling over WebSocket here because polling at 15-second intervals is cheap — we're not doing full refetch, just fetching a count. WebSocket would add connection overhead for something the user checks rarely. If the product needed sub-second real-time (like a live trading feed), I'd switch to SSE or WebSocket."


Step 6 — Optimistic updates for likes

Never wait for the server to confirm a like before updating the UI. Users expect instant feedback.

TSX
const queryClient = useQueryClient();

const likeMutation = useMutation({
  mutationFn: (postId: string) => fetch(`/api/posts/${postId}/like`, { method: 'POST' }),

  onMutate: async (postId) => {
    await queryClient.cancelQueries({ queryKey: ['feed'] });
    const snapshot = queryClient.getQueryData(['feed']);

    // Optimistically update the cache
    queryClient.setQueryData(['feed'], (old: InfiniteData<FeedPage>) => ({
      ...old,
      pages: old.pages.map(page => ({
        ...page,
        items: page.items.map(post =>
          post.id === postId
            ? { ...post, liked: !post.liked, likes_count: post.liked ? post.likes_count - 1 : post.likes_count + 1 }
            : post
        ),
      })),
    }));

    return { snapshot }; // return snapshot for rollback
  },

  onError: (_err, _postId, context) => {
    // Roll back on failure
    queryClient.setQueryData(['feed'], context?.snapshot);
  },
});

The pattern: snapshot → optimistic update → on error rollback. No spinner, no delay.
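Since the optimistic update is a pure data transform, it can be factored into a helper and unit-tested without React Query. A sketch (shapes trimmed to the fields involved):

```typescript
interface Post { id: string; liked: boolean; likes_count: number }
interface FeedPage { items: Post[]; next_cursor: string | null }
interface InfinitePages { pages: FeedPage[] }

// Toggle a like across all cached pages without mutating the old cache —
// the old object must survive untouched so rollback can restore it.
function toggleLike(old: InfinitePages, postId: string): InfinitePages {
  return {
    ...old,
    pages: old.pages.map(page => ({
      ...page,
      items: page.items.map(post =>
        post.id === postId
          ? {
              ...post,
              liked: !post.liked,
              likes_count: post.likes_count + (post.liked ? -1 : 1),
            }
          : post,
      ),
    })),
  };
}
```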


Step 7 — Image and video loading

Lazy load below-the-fold media

TSX
function PostImage({ src, alt }: { src: string; alt: string }) {
  return (
    <img
      src={src}
      alt={alt}
      loading="lazy" // native browser lazy loading
      decoding="async" // non-blocking decode
      className="w-full rounded-lg object-cover"
    />
  );
}

Skeleton while loading

TSX
function PostMedia({ src, alt }: { src: string; alt: string }) {
  const [loaded, setLoaded] = useState(false);
  return (
    <div className="relative aspect-video bg-slate-800 rounded-lg overflow-hidden">
      {!loaded && <div className="absolute inset-0 animate-pulse bg-slate-700" />}
      <img src={src} alt={alt} onLoad={() => setLoaded(true)} className={loaded ? 'opacity-100' : 'opacity-0'} />
    </div>
  );
}

Always reserve space for media with aspect-video or a fixed height before load — this prevents CLS (Cumulative Layout Shift), which hurts Core Web Vitals and causes scroll position to jump.


Step 8 — State management breakdown

Senior Perspective

For a news feed, avoid putting everything in global state. Follow this ownership model:

  • Server state (posts, likes, comments): React Query — it handles caching, background refetch, deduplication
  • UI state (is this post expanded? is comment box open?): local useState inside FeedItem
  • User state (logged-in user, preferences): Zustand or Context — changes rarely, needs to be globally accessible
  • Transient state (hover, focus): CSS :hover / :focus — never React state

The mistake junior devs make is putting everything in a global Redux store. That means a like button toggle triggers a re-render of every component subscribed to the store.


Step 9 — Accessibility checklist

  • Feed list: a <section role="feed" aria-busy={isFetching}> container inside the <main> landmark
  • Each post should be an <article> element
  • Like button: <button aria-label="Like post" aria-pressed={post.liked}>
  • Images: meaningful alt text — never empty for content images
  • "N new posts" banner: role="status" aria-live="polite" so screen readers announce it
  • Keyboard: Spacebar should scroll the feed (not activate buttons) — test this

Step 10 — Data model and normalized store

Before writing any component code, define your entities. Interviewers notice when candidates jump straight to JSX without thinking about data shape.

Core entities

TS
// Post — what the server returns
interface Post {
  id: string;
  author_id: string;
  body: string; // raw text — may contain @mentions and #hashtags
  media?: { type: 'image' | 'video'; url: string; width: number; height: number }[];
  likes_count: number;
  comments_count: number;
  liked_by_me: boolean;
  created_at: string; // ISO 8601
}

// User — minimal shape for feed display
interface FeedUser {
  id: string;
  username: string;
  display_name: string;
  avatar_url: string;
  is_verified: boolean;
}

// Feed page — cursor-paginated response
interface FeedPage {
  items: Post[];
  next_cursor: string | null;
  user_map: Record<string, FeedUser>; // embed authors inline
}

Normalized store structure

A flat, normalized store prevents duplication and makes cache updates (likes, edits) cheap — you update one place instead of scanning every page.

Normalized Client Cache Structure

QueryCache (React Query)
├─ ['feed'] — InfiniteData<FeedPage>
│  ├─ pages[0].items — post IDs only after normalization
│  ├─ pages[1].items
│  └─ next_cursor
├─ ['posts', postId] — individual Post by ID
│  ├─ id, body, media
│  ├─ likes_count, liked_by_me
│  └─ author_id — ref to users cache
└─ ['users', userId] — FeedUser by ID
   └─ display_name, avatar_url, is_verified
Senior Perspective

React Query doesn't normalize automatically — you need to do it manually in select. In the queryFn, store individual posts with queryClient.setQueryData(['posts', post.id], post) for each post returned. This way if a user opens a post detail page, it's already cached. Likes update the single ['posts', postId] entry and React Query invalidates all queries that derive from it.
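The normalization step itself is a pure reshaping that can be sketched (and tested) independently of the cache writes. Field names follow the `FeedPage` interface above; the helper name is illustrative:

```typescript
interface Post { id: string; author_id: string; body: string }
interface FeedUser { id: string; display_name: string }
interface FeedPage {
  items: Post[];
  user_map: Record<string, FeedUser>;
  next_cursor: string | null;
}

// Split a raw page into flat entity maps plus an ordered ID list — the
// shape you would then write into ['posts', id] / ['users', id] entries.
function normalizePage(page: FeedPage) {
  const posts: Record<string, Post> = {};
  for (const post of page.items) posts[post.id] = post;
  return {
    postIds: page.items.map(p => p.id), // feed order lives here
    posts,
    users: page.user_map,
    nextCursor: page.next_cursor,
  };
}
```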


Step 11 — HTTP caching, deduplication, and idempotency

HTTP cache headers

The feed API response should be uncacheable (always fresh), but static assets and user avatars should be cached aggressively.

TEXT
# Feed endpoint — no browser caching
Cache-Control: no-store

# Avatar images — cache for 1 year (CDN-served with content hash URL)
Cache-Control: public, max-age=31536000, immutable

# Post images — cache for 7 days
Cache-Control: public, max-age=604800

Request deduplication

React Query deduplicates identical in-flight requests automatically. If two components both call useQuery(['posts', id]), only one network request fires. The second subscriber shares the same promise.
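The underlying mechanism is simple enough to sketch: keyed in-flight promises that concurrent callers share (illustrative — not React Query's actual internals):

```typescript
// Share one in-flight promise per key; concurrent callers get the same
// promise and the entry is cleared once the request settles.
const inFlight = new Map<string, Promise<unknown>>();

function dedupedFetch<T>(key: string, fn: () => Promise<T>): Promise<T> {
  const existing = inFlight.get(key);
  if (existing) return existing as Promise<T>;
  const promise = fn().finally(() => inFlight.delete(key));
  inFlight.set(key, promise);
  return promise;
}
```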

Idempotent like/unlike

A like must be safe to retry. If the network fails mid-request, retrying should not double-like. The server should treat a like as an upsert, not an append:

TEXT
POST /api/posts/:id/like
→ Server: INSERT INTO likes (user_id, post_id) ON CONFLICT DO NOTHING

On the frontend, scope the mutation per post. Note that React Query does not deduplicate mutations by key the way it deduplicates queries, so guard against double-fires in the UI:

TSX
const likeMutation = useMutation({
  mutationKey: ['like', postId], // identifies this mutation (devtools, filters)
  mutationFn: () => fetch(`/api/posts/${postId}/like`, { method: 'POST' }),
});

// Disable the button while a like is in flight:
<button disabled={likeMutation.isPending} onClick={() => likeMutation.mutate()}>

Step 12 — Rendering rich text safely

Feed posts can contain @mentions and #hashtags. Rendering them naively as innerHTML is an XSS vulnerability.

Safe mention/hashtag rendering

TSX
function RichPostBody({ body }: { body: string }) {
  // Parse into segments — never use dangerouslySetInnerHTML
  const segments = parseRichText(body);

  return (
    <p>
      {segments.map((seg, i) => {
        if (seg.type === 'mention') return (
          <a key={i} href={`/u/${seg.value}`} className="text-teal-400 hover:underline">
            @{seg.value}
          </a>
        );
        if (seg.type === 'hashtag') return (
          <a key={i} href={`/explore/${seg.value}`} className="text-indigo-400 hover:underline">
            #{seg.value}
          </a>
        );
        return <span key={i}>{seg.value}</span>;
      })}
    </p>
  );
}

function parseRichText(text: string) {
  // Split on @mention and #hashtag patterns, preserve tokens
  const parts = text.split(/(@\w+|#\w+)/g);
  return parts.map(part => {
    if (part.startsWith('@')) return { type: 'mention' as const, value: part.slice(1) };
    if (part.startsWith('#')) return { type: 'hashtag' as const, value: part.slice(1) };
    return { type: 'text' as const, value: part };
  });
}

Post truncation

Long posts should be truncated with a "See more" toggle — show 3 lines by default.

TSX
function TruncatedBody({ body }: { body: string }) {
  const [expanded, setExpanded] = useState(false);
  const isLong = body.length > 280;

  return (
    <div>
      <div className={!expanded && isLong ? 'line-clamp-3' : ''}>
        <RichPostBody body={body} />
      </div>
      {isLong && (
        <button onClick={() => setExpanded(e => !e)} className="text-teal-400 text-sm mt-1">
          {expanded ? 'See less' : 'See more'}
        </button>
      )}
    </div>
  );
}
Say This in Your Interview

"I never use dangerouslySetInnerHTML for user-generated content. Instead I parse the post body into typed segments (text, mention, hashtag) and render each as a React element. This gives full XSS protection and lets me attach click handlers or prefetch user profiles on mention hover."


Step 13 — Scroll position and stale feeds

Preserving scroll position on remounting

When a user navigates to a post detail page and comes back, the feed should restore scroll position — not start from the top.

TSX
// Store scroll position in sessionStorage before navigating away
function FeedList() {
  const scrollRef = useRef<HTMLDivElement>(null);

  useEffect(() => {
    const saved = sessionStorage.getItem('feed-scroll');
    if (saved && scrollRef.current) {
      scrollRef.current.scrollTop = parseInt(saved, 10);
    }
    const el = scrollRef.current; // capture for the unmount cleanup
    return () => {
      if (el) {
        sessionStorage.setItem('feed-scroll', String(el.scrollTop));
      }
    };
  }, []);

  return <div ref={scrollRef} className="overflow-y-auto h-screen">...</div>;
}

With React Query's staleTime, the data is still cached when the user comes back — no refetch, instant render, scroll restored.

Stale feed detection

If the user leaves the tab for 10+ minutes and comes back, silently refetch in the background and show the "N new posts" banner — don't yank the feed out from under them.

TSX
// In React Query config — refetch on window focus only if data is older than 10 min
useInfiniteQuery({
  queryKey: ['feed'],
  staleTime: 10 * 60 * 1000, // 10 minutes
  refetchOnWindowFocus: true, // triggers when tab becomes active
  refetchOnMount: false, // don't refetch if data is still fresh
})

Step 14 — Offline support

Reading offline (Service Worker cache)

Use a service worker to serve cached feed data when the network is unavailable:

JS
// sw.js — network-first strategy for feed API
self.addEventListener('fetch', event => {
  if (event.request.url.includes('/api/feed')) {
    event.respondWith(
      fetch(event.request)
        .then(response => {
          const clone = response.clone();
          caches.open('feed-cache-v1').then(cache => cache.put(event.request, clone));
          return response;
        })
        .catch(() => caches.match(event.request)) // serve stale on network failure
    );
  }
});

Show an offline banner so users know the feed is stale:

TSX
function useOnlineStatus() {
  const [online, setOnline] = useState(navigator.onLine);
  useEffect(() => {
    const on = () => setOnline(true);
    const off = () => setOnline(false);
    window.addEventListener('online', on);
    window.addEventListener('offline', off);
    return () => { window.removeEventListener('online', on); window.removeEventListener('offline', off); };
  }, []);
  return online;
}

Writing offline — the outbox pattern

If a user likes a post while offline, don't silently drop the action.

Offline Outbox — Request Flow

1. User taps Like (offline)
   • Optimistic UI update (immediate)
   • OutboxQueue persists the action to IndexedDB: { action: 'like', postId, timestamp }

2. Network restored (online event)
   • OutboxQueue.flush() dequeues the oldest action
   • POST /api/posts/:id/like
   • On success: remove from IndexedDB

3. On failure
   • Retry with exponential backoff (up to 3 attempts)
TSX
// localStorage keeps this sketch short — production code would use IndexedDB
// (as in the flow above) for durability and larger queues.
const OUTBOX_KEY = 'feed-outbox';

function enqueueAction(action: { type: string; postId: string }) {
  const queue = JSON.parse(localStorage.getItem(OUTBOX_KEY) ?? '[]');
  queue.push({ ...action, id: crypto.randomUUID(), createdAt: Date.now() });
  localStorage.setItem(OUTBOX_KEY, JSON.stringify(queue));
}

async function flushOutbox() {
  let queue = JSON.parse(localStorage.getItem(OUTBOX_KEY) ?? '[]');
  for (const item of [...queue]) {
    try {
      await fetch(`/api/posts/${item.postId}/like`, { method: 'POST' });
      // Remove from the *current* queue on success — filtering a stale
      // snapshot would resurrect items removed in earlier iterations
      queue = queue.filter((q: { id: string }) => q.id !== item.id);
      localStorage.setItem(OUTBOX_KEY, JSON.stringify(queue));
    } catch {
      break; // stop on failure — retry on next flush
    }
  }
}

// Flush when network comes back
window.addEventListener('online', flushOutbox);

Step 15 — Retry strategy

Not every failure is worth retrying. Classify errors before deciding.

TSX
useMutation({
  mutationFn: submitPost,
  retry: (failureCount, error) => {
    // assumes submitPost throws an error object carrying the HTTP status
    // Don't retry client errors (400, 401, 403, 422)
    if (error.status >= 400 && error.status < 500) return false;
    // Retry server errors up to 3 times
    return failureCount < 3;
  },
  retryDelay: (attempt) => Math.min(1000 * 2 ** attempt, 30_000), // exponential backoff, max 30s
})
Status                   Retry?            Reason
429 Too Many Requests    Yes, after delay  Rate limited, back off
500 Server Error         Yes, up to 3×     Transient failure
503 Unavailable          Yes               Server overloaded
400 Bad Request          No                Fix the payload, not retryable
401 Unauthorized         No                Refresh token first, then resubmit
422 Validation           No                User must correct input
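The table collapses to a small predicate. A sketch (429's backoff delay and 401's token refresh are left to the caller):

```typescript
// Decide whether a failed request is worth retrying, based on
// HTTP status and how many attempts have already been made.
function shouldRetry(status: number, failureCount: number): boolean {
  if (status === 429) return true;                  // rate limited — retry after backoff
  if (status >= 400 && status < 500) return false;  // client errors are not retryable
  return failureCount < 3;                          // 5xx: transient, retry up to 3 times
}
```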

Show a user-facing error state with a retry button for persistent failures — never fail silently.


Step 16 — Comments and live updates

Comment thread architecture

TSX
function CommentSection({ postId, expanded }: { postId: string; expanded: boolean }) {
  const { data: comments } = useQuery({
    queryKey: ['comments', postId],
    queryFn: () => fetch(`/api/posts/${postId}/comments`).then(r => r.json()),
    enabled: expanded, // fetch only once the user opens the section
    staleTime: 10_000, // comments can be slightly stale
  });

  return (
    <div>
      {comments?.map(c => <CommentRow key={c.id} comment={c} />)}
      <CommentComposer postId={postId} />
    </div>
  );
}

Load comments lazily — only fetch when the user expands the comment section. This avoids an extra network request per visible post on initial page load.

Live comment updates via SSE

For real-time comments, Server-Sent Events (SSE) are better than WebSocket — they're unidirectional (server → client), work over plain HTTP/2, and don't need a persistent bidirectional connection.

TSX
function useLiveComments(postId: string) {
  const queryClient = useQueryClient();

  useEffect(() => {
    const es = new EventSource(`/api/posts/${postId}/comments/stream`);

    es.addEventListener('new_comment', (event) => {
      const newComment = JSON.parse(event.data);
      queryClient.setQueryData(['comments', postId], (old: Comment[] = []) => [
        ...old,
        newComment,
      ]);
    });

    return () => es.close();
  }, [postId, queryClient]);
}
Real-time Comment Flow

1. User opens comment section
   • useLiveComments(postId) creates new EventSource('/api/posts/:id/comments/stream')
   • HTTP/2 long-lived connection

2. Another user posts a comment
   • Server pushes SSE event: { type: 'new_comment', data: {...} }
   • Client receives it — queryClient.setQueryData update
   • React re-renders the comment list (no refetch needed)

3. User closes comment section
   • es.close() — connection terminated
Say This in Your Interview

"For live comments I'd use SSE over WebSocket. SSE is simpler — it works over a standard HTTP connection, auto-reconnects, and is supported in all modern browsers. WebSocket makes sense if we needed bidirectional streaming, like a cursor-collaboration feature. For comment counts on the feed itself, polling every 30s is sufficient — no need for a persistent connection just for a number."


Step 17 — JavaScript loading strategy

The feed should be interactive immediately. Every kilobyte of JS delays that.

Code split by route

TSX
// Next.js does this automatically per page — each route is a separate JS chunk
// For heavy components inside the feed, use dynamic imports:
import dynamic from 'next/dynamic';

const VideoPlayer = dynamic(() => import('./VideoPlayer'), {
  loading: () => <div className="aspect-video bg-slate-800 animate-pulse rounded-lg" />,
  ssr: false, // video player doesn't need SSR
});

// Only load video player code when a video post is visible
// (media is an array on Post — render the first attachment here)
function PostBody({ post }: { post: Post }) {
  const media = post.media?.[0];
  if (media?.type === 'video') return <VideoPlayer src={media.url} />;
  if (media) return <PostImage src={media.url} alt="" />;
  return null;
}

What to lazy-load in a feed

Component                          Strategy                   Why
Video player                       dynamic() on demand        Heavy library, rarely needed
Emoji picker (comment composer)    dynamic() on click         Only needed when user opens composer
Share modal                        dynamic() on click         Infrequent, don't pay upfront
Feed item                          Do NOT lazy-load           Visible immediately on paint
Comment section                    Lazy fetch data, not code  Code is small, data is expensive

Core Web Vitals targets

Metric                                      Target    What breaks it in a feed
LCP (Largest Contentful Paint)              < 2.5s    First post image loading slowly
CLS (Cumulative Layout Shift)               < 0.1     Images without reserved dimensions
INP (Interaction to Next Paint)             < 200ms   Like button handler blocking main thread
FID (First Input Delay, superseded by INP)  < 100ms   Heavy JS parsing on load
TSX
// Prevent CLS: always reserve image dimensions
// (media is an array on Post — take the first attachment)
const media = post.media[0];

<img
  src={media.url}
  width={media.width}
  height={media.height}
  style={{ aspectRatio: `${media.width}/${media.height}` }}
  loading="lazy"
/>

Step 18 — Accessibility deep dive

Feed list semantics

TSX
<main>
  <section role="feed" aria-label="News feed" aria-busy={isFetchingNextPage}>
    {posts.map(post => (
      {/* post.author: the FeedUser resolved from user_map via author_id */}
      <article key={post.id} aria-label={`Post by ${post.author.display_name}`}>
        <PostHeader post={post} />
        <PostBody post={post} />
        <PostActions post={post} />
      </article>
    ))}
  </section>
</main>

Keyboard navigation

The role="feed" pattern comes with specific keyboard expectations:

  • Page Down / Page Up — scroll the feed one page
  • Tab — move focus between interactive elements (like buttons, links)
  • Users should be able to navigate through posts without a mouse

Motion and reduced motion

If your feed has animated transitions (post appearing, like animation), respect prefers-reduced-motion:

TSX
const shouldAnimate = !window.matchMedia('(prefers-reduced-motion: reduce)').matches;

// In Framer Motion:
<motion.div
  initial={shouldAnimate ? { opacity: 0, y: 8 } : false}
  animate={{ opacity: 1, y: 0 }}
>

Dynamic content announcements

When new posts load at the top (after the user clicks the "N new posts" banner), announce it to screen readers:

TSX
<div role="status" aria-live="polite" aria-atomic="true" className="sr-only">
  {newCount > 0 ? `${newCount} new posts loaded` : ''}
</div>

Step 19 — Internationalization (i18n)

Timestamps

Never display raw ISO strings. Use Intl.RelativeTimeFormat for human-readable relative times, and Intl.DateTimeFormat for absolute dates:

TSX
function formatTimestamp(isoString: string, locale: string): string {
  const date = new Date(isoString);
  const secondsAgo = (Date.now() - date.getTime()) / 1000;

  if (secondsAgo < 60) return 'just now'; // pull this string from your i18n catalog in production
  if (secondsAgo < 3600) {
    const mins = Math.floor(secondsAgo / 60);
    return new Intl.RelativeTimeFormat(locale, { numeric: 'auto' }).format(-mins, 'minute');
  }
  if (secondsAgo < 86400) {
    const hrs = Math.floor(secondsAgo / 3600);
    return new Intl.RelativeTimeFormat(locale, { numeric: 'auto' }).format(-hrs, 'hour');
  }
  return new Intl.DateTimeFormat(locale, { month: 'short', day: 'numeric' }).format(date);
}

RTL (right-to-left) layout

Arabic and Hebrew users read RTL. Use logical CSS properties that flip automatically:

TSX
// ✅ Correct — flips for RTL
<div className="ms-4">  {/* margin-inline-start (= margin-left in LTR, margin-right in RTL) */}
<div className="ps-3">  {/* padding-inline-start */}

// ❌ Wrong — hardcoded direction
<div className="ml-4">  {/* always left, breaks RTL */}

Set dir="rtl" on <html> based on the user's locale — Tailwind's rtl: variant handles the rest.


Step 20 — Telemetry and observability

Shipping a feed without metrics means you're flying blind. Define what to measure.

Key metrics to instrument

TSX
// 1. Feed load time (time from page mount to first post visible)
performance.mark('feed-start');
// ... after first posts render:
performance.measure('feed-load', 'feed-start');

// 2. Scroll depth — how far users read (fire each milestone once per session)
const fired = new Set<number>();
function trackScrollDepth(scrollPct: number) {
  for (const milestone of [25, 50, 75, 90]) {
    if (scrollPct >= milestone && !fired.has(milestone)) {
      fired.add(milestone);
      analytics.track('feed_scroll_depth', { percent: milestone });
    }
  }
}

// 3. Interaction rate — likes and comments per session
analytics.track('post_liked', { post_id: postId, position_in_feed: index });

// 4. Error rate
window.addEventListener('unhandledrejection', event => {
  analytics.track('feed_error', { message: event.reason?.message });
});

What to alert on

SignalThresholdAction
Feed load P95> 3sInvestigate API or client bundle
Error rate> 1% of sessionsPage on-call
Like success rate< 99%Mutation or server issue
Scroll depth median< 25%Content quality or pagination bug
INP regression> 200msJavaScript main-thread contention
Say This in Your Interview

"I'd instrument three things from day one: feed load time (P50/P95), scroll depth distribution, and error rate. Load time tells me if the pagination strategy is working. Scroll depth tells me if the content ranking is good — if median depth drops from 50% to 20% overnight, something broke. Error rate catches silent failures in mutations like likes."


Step 21 — Final performance checklist

Concern                 Technique
Large list render       Virtual scrolling (react-window / @tanstack/virtual)
Images below fold       loading="lazy" + explicit width/height (prevent CLS)
Layout shift            Reserve media dimensions before load
Excessive re-renders    React.memo on FeedItem, stable callback refs
Too many fetches        staleTime: 30s in React Query, cursor pagination
New post detection      15s polling of count endpoint (not full refetch)
Like latency            Optimistic update with rollback on error
Offline likes           Outbox pattern — persist locally, flush on reconnect
Video/emoji JS cost     dynamic() imports — load only when needed
Timestamp format        Intl.RelativeTimeFormat — locale-aware, no library needed
RTL support             Tailwind logical properties (ms-, ps-, rtl:)
Screen reader UX        role="feed", <article> per post, aria-live for banners
Say This in Your Interview

"The three biggest frontend challenges in a news feed are: first, performance with a large DOM — I'd solve with virtual scrolling and lazy media loading. Second, real-time without breaking scroll position — I'd use a 'N new posts' banner instead of auto-injecting. Third, data consistency — I'd use optimistic updates for interactions like likes so the UI feels instant, with rollback on error. For resilience I'd add an offline outbox so likes aren't silently dropped on flaky connections. The architecture: React Query for server state, cursor pagination, SSE for live comments, service worker for offline reads, and Intl APIs for i18n."
