W11-The Evolution of Twitter’s Algorithm and Iterations in Source Management
This year has had a consistent theme: iterating on the efficiency of how I curate my information network.
Last week I took a quick look at Twitter’s recommendation algorithm, and I used Claude’s Cowork to set up two scheduled briefings.
Twitter’s recommendation algorithm has undergone a complete overhaul in recent years. It is also one of the few social platforms with its core codebase open-sourced (Musk claims it is the only one), making it a great reference for studying production-scale search, recommendation, and ranking systems.
By comparing the earlier open-source codebase (twitter/the-algorithm) with the newer, large-model-based one (xai-org/x-algorithm), you can clearly see how the recommendation architecture evolved: from a complex microservice system that relied heavily on manual feature engineering and heuristic rules to a minimal, end-to-end deep learning architecture built on a Transformer (Grok).
Below is a summary comparing the structure and technical characteristics of the two repositories, which makes the direction of the evolution easy to see.
| Comparison dimension | 2023 version (the-algorithm) | 2025 version (x-algorithm) |
| --- | --- | --- |
| Tech stack | Scala / Java (JVM ecosystem) | Rust / Python (AI ecosystem) |
| Feature engineering | Relies on thousands of manually designed statistical features | No manual features; learns from engagement sequences |
| Core model | Heavy Ranker (48M-parameter neural network) | Grok-based Transformer |
| In-network storage | Timeline Cache / Fanout Service | Thunder (high-performance Rust in-memory store) |
| Pipeline framework | Product Mixer (Scala) | Candidate Pipeline crate (Rust) |
| Retrieval logic | Search index (Lucene) + graph traversal | Vectorized two-tower model |
| Development efficiency | Complex feature pipelines, high maintenance cost | Composable modular design, clean logic |
After looking at Twitter’s recommendation algorithm, I still couldn’t really apply what I learned in practice; I was only at the level of writing prompts. Using Cowork, I built two schedules according to my own thinking framework: one for tracking the market and one for tracking technology. After several iterations, the current version basically matches the taste I was aiming for. Below are the specific instructions and recent examples for reference.
Tracking global markets and major asset classes: Daily Market Briefing

Tracking mainstream tech institutions, communities, and media: Weekly Tech Top 20

Briefings or assistant-style tools can never fully replace wandering through different communities on your own, because they lack randomness and a bit of fun.

For example, a blog post I came across last week that I really liked: Temporal: The 9-Year Journey to Fix Time in JavaScript. It came from Bloomberg’s JS team, which only started publishing technical blog posts this month, but it seems to be a team with deep industry expertise.
The article discusses Temporal, the most significant language-level extension to JavaScript since ES2015, and retraces the long standardization process behind it. Temporal systematically fixes the historical problems that have plagued the Date API for 30 years, redesigns a complete time type system, and lets multiple JS engines reduce implementation cost and improve consistency by sharing a Rust implementation library (temporal_rs). This may be the first time JavaScript has adopted a Rust-based, multi-engine infrastructure solution at the language-standard level.
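As a quick illustration of the "historical problems of the Date API" the article refers to, here is a minimal sketch contrasting two well-known Date quirks with their Temporal counterparts. The Temporal branch is guarded because the API only exists in runtimes that already ship the proposal (or via a polyfill such as @js-temporal/polyfill); the Date lines run anywhere.

```javascript
// 1. Date months are 0-indexed: the "1" here means February, not January.
const feb = new Date(2024, 1, 15);
console.log(feb.getMonth()); // 1 (February)

// 2. Invalid dates silently roll over instead of throwing:
// "February 30, 2024" quietly becomes March 1.
const rolled = new Date(2024, 1, 30);
console.log(rolled.getMonth()); // 2 (March)

// Temporal fixes both: months are 1-indexed, and impossible dates
// can be rejected instead of being silently constrained.
if (typeof Temporal !== 'undefined') {
  const d = Temporal.PlainDate.from('2024-02-15');
  console.log(d.month); // 2 (February, 1-indexed)

  try {
    // overflow: 'reject' makes an impossible date throw a RangeError.
    Temporal.PlainDate.from({ year: 2024, month: 2, day: 30 },
                            { overflow: 'reject' });
  } catch (e) {
    console.log(e instanceof RangeError); // true
  }
}
```

The rollover behavior in particular is a classic source of silent bugs: arithmetic that lands on a nonexistent calendar day produces a valid-looking but wrong Date, whereas Temporal forces the caller to choose a policy.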