Put-call parity is exact as a terminal-payoff identity, yet its market enforcement is path-dependent and capital-using. This paper examines whether physical-measure drift is reflected in the carry gap, defined as the annualized wedge between option-implied and OIS-implied discounting, using SPX and RUT European index options. I derive a drift-preserving extension of the GBM implementation-risk term that adds an (r\mu\tau) component to the standard (r\sigma\sqrt{\tau}) path-risk component. The drift input (\mu) is measured by a lagged rolling-OLS trend proxy and should not be interpreted as an observed expected return. Empirically, the drift term improves both in-sample and leave-one-year-out fit, especially for SPX, consistent with drift-sensitive margin burden in parity enforcement rather than a failure of no-arbitrage.
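The drift-preserving term and the lagged rolling-OLS drift proxy described above can be sketched as follows. This is a minimal illustration under assumed conventions (daily data, 252-day annualization); the window, lag, and function names are hypothetical choices, not the paper's specification.

```python
import numpy as np

def rolling_ols_drift(prices, window=252, lag=21):
    # Hypothetical trend proxy: OLS slope of log price on time over a
    # trailing window, lagged so it uses only past information.
    logp = np.log(prices)
    t = np.arange(window, dtype=float)
    t -= t.mean()
    drifts = np.full(len(prices), np.nan)
    for i in range(window + lag, len(prices) + 1):
        y = logp[i - window - lag:i - lag]
        slope = (t * (y - y.mean())).sum() / (t * t).sum()
        drifts[i - 1] = slope * 252.0   # annualize the daily slope
    return drifts

def implementation_risk(r, sigma, mu, tau):
    # Standard path-risk component plus the drift-preserving extension:
    # r*sigma*sqrt(tau) + r*mu*tau, matching the abstract's notation.
    return r * sigma * np.sqrt(tau) + r * mu * tau
```

On a pure exponential price trend the proxy recovers the annualized drift exactly, which is the sanity check one would want before feeding it into the carry-gap regression.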
We show that under mild assumptions, the total value of information to informed traders in the market can be measured by the covariance between price changes and order flow. This covariance captures noise trader losses, which equal informed trader gains when market making is competitive. Using high-frequency data on US equities, we estimate the value of information at about $3.5 million per year for the average stock. The aggregate value of information is about 0.04% of market cap, considerably lower than the 0.67% in fees investors pay each year searching for superior returns (French 2008). We discuss potential resolutions of these puzzling findings.
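The covariance measure above can be illustrated on simulated Kyle-style data; the impact coefficient, order-flow distribution, and scale below are toy assumptions, not the paper's estimates.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000                                   # trading intervals in the sample
q = rng.choice([-100.0, 100.0], size=n)       # signed order flow (shares), toy
lam = 0.001                                   # linear price impact per share
dp = lam * q + rng.normal(0.0, 0.05, size=n)  # price change = impact + noise

# Value of information per interval: covariance of price changes with
# order flow (noise-trader losses = informed gains under competitive
# market making). True value here is lam * Var(q) = 0.001 * 100^2 = 10.
per_interval = np.cov(dp, q)[0, 1]
```

Summing the per-interval covariance over a year of intervals, and then across stocks, gives the aggregate dollar figure the abstract compares against French's fee estimate.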
Dominant investor groups in each market shape how stocks move together beyond fundamentals.
This study investigates how cross-stock information diffusion, driven by both retail and institutional investors, influences excess comovement in the Chinese retail-dominated market and the U.S. institution-dominated market. Using data from 4,533 Chinese stocks and 4,517 U.S. stocks from 2010 to 2022, we identify three key findings. First, the dominant investor group in each market significantly drives excess comovement. Specifically, in China, compared with institution-driven diffusion, retail-driven information diffusion has a notably stronger effect on excess comovement. In contrast, in the U.S., institution-driven diffusion is the primary driver of excess comovement, surpassing the influence of retail-driven diffusion. Second, we identify investors' trading behavior as the underlying mechanism through which information diffusion affects excess comovement. Third, we observe a lead-lag relationship: stocks with faster retail-driven information diffusion exhibit comovement that precedes those with slower diffusion. Based on this finding, we further demonstrate that the predictive power of information diffusion varies across markets. In China, retail-driven diffusion shows strong and persistent predictability for excess comovement, whereas in the U.S., institution-driven diffusion exhibits similarly robust predictive capacity.
Firms with higher disclosure see less early selling of winners, outweighing any extra holding of losers and lowering the overall bias.
The disposition effect describes investors' irrational behavior of selling profitable assets too soon while holding onto losing assets for too long. This study examines the impact of transparency at the firm level on the disposition effect of individual investors who hold that company's stock. Our results show that an increase in corporate transparency significantly reduces the disposition effect. Further analysis reveals that for companies with greater transparency, when the held stock is profitable, investors' confidence in holding it increases, leading to a reduced bias toward selling profitable stocks. When the stock is held at a loss, investors' confidence in holding it weakens, but they often perceive the loss as temporary and maintain confidence in the company's long-term prospects, thus exacerbating the bias toward holding losing stocks. The effect of increased transparency on the selling behavior of profitable stocks is greater than its effect on the selling behavior of losing stocks. Overall, an increase in corporate transparency significantly reduces the disposition effect.
The disposition effect is the irrational tendency of investors to sell profitable assets too early while holding onto losing assets for too long. The development of the Internet has greatly improved the information environment for individual investors, so whether social media, as a key information source for them, can mitigate investors' behavioral biases and move them toward rational expectations is a question worth studying. Based on post data and actual trading data from the social investment platform Xueqiu.com, this paper studies the impact of social media information on the disposition effect of individual investors. The results show that social media information significantly reduces the disposition effect, and that it does so primarily through negative information: when presented with negative information, individual investors gradually become more rational in adjusting their positions. At the individual level, factors such as investment experience, the users one follows, region, and gender all influence how effectively the information individual investors acquire reduces the disposition effect.
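The disposition effect discussed in the two abstracts above is conventionally quantified with Odean's PGR minus PLR measure; the sketch below uses that classic estimator as an illustrative assumption (the papers' exact estimators are not stated), with a made-up data layout.

```python
def disposition_effect(sale_days):
    # Odean-style measure: on each day an account sells something, count
    # realized vs. paper gains and losses across all its positions.
    # PGR - PLR > 0 indicates the disposition effect.
    rg = pg = rl = pl = 0
    for day in sale_days:
        # day: list of (unrealized_pnl_sign, sold_flag) per position,
        # evaluated on a day when at least one sale occurred (toy format)
        for pnl, sold in day:
            if pnl > 0:
                if sold: rg += 1
                else:    pg += 1
            elif pnl < 0:
                if sold: rl += 1
                else:    pl += 1
    pgr = rg / (rg + pg) if rg + pg else 0.0   # proportion of gains realized
    plr = rl / (rl + pl) if rl + pl else 0.0   # proportion of losses realized
    return pgr - plr
```

A transparency or social-media treatment effect would then be estimated by comparing this statistic across investors exposed to more versus less information.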
Current post-trade clearing systems rely almost exclusively on cash or cash-like collateral, leaving vast reserves of short-term liquidity embedded in trade credit outside formal settlement infrastructures. A key barrier to integrating this liquidity is the near-universal dependence of clearing services on novation, which imposes institutional overhead that restricts accessibility and limits the range of obligations that can be brought into settlement.
This paper introduces the Cycles Protocol: a distributed, multilateral clearing mechanism based on double-entry accounting and atomic cycle execution that maximizes balance sheet compression. Unlike novation-based clearing, Cycles does not redistribute counterparty risk; it can thus be applied generally to existing financial networks, without any change in counterparty relations, allowing it to complement existing clearing systems and Central Counterparties (CCPs).
By representing commitments as edges on a unified directed graph, Cycles surfaces liquidity hiding within existing network structure. We focus here on two applications of Cycles to deepening secondary market liquidity: first, as a compression layer between existing clearing participants and CCPs; and second, as a means to incorporate the liquidity of the trade credit network into formal settlement, extending market clearing beyond financial obligations and into real-economy financing.
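The graph mechanics described above can be sketched with a toy multilateral netting routine: obligations are directed edges, and executing a cycle atomically reduces every edge on it by the cycle's minimum amount, shrinking gross exposure while leaving net positions untouched. The data layout and function names are illustrative assumptions, not the protocol's implementation.

```python
from collections import defaultdict

def find_cycle(edges):
    # edges: dict (debtor, creditor) -> amount. DFS for any directed
    # cycle among positive-amount obligations.
    graph = defaultdict(list)
    for (u, v), amt in edges.items():
        if amt > 0:
            graph[u].append(v)
    def dfs(node, path, seen):
        if node in path:
            return path[path.index(node):]
        if node in seen:
            return None
        seen.add(node)
        for nxt in graph[node]:
            cyc = dfs(nxt, path + [node], seen)
            if cyc:
                return cyc
        return None
    for start in list(graph):
        cyc = dfs(start, [], set())
        if cyc:
            return cyc
    return None

def compress(edges):
    # Atomic cycle execution: net each cycle by its minimum edge amount.
    # Every pass zeroes at least one obligation, so this terminates.
    while True:
        cyc = find_cycle(edges)
        if not cyc:
            return edges
        pairs = list(zip(cyc, cyc[1:] + cyc[:1]))
        m = min(edges[p] for p in pairs)
        for p in pairs:
            edges[p] -= m
```

Note that every party's net position (receivables minus payables) is invariant under `compress`, which is why counterparty relations need not change.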
Multi-year liquidation analysis shows modest impact, while long dormancy points to non-selling outcomes that support scarcity.
Renewed public attention on the identity of Bitcoin's pseudonymous creator has sharpened focus on the Satoshi overhang, commonly framed as a tail risk for bitcoin. This paper argues that the mechanical downside of a disposition is bounded well below the existential-loss framing, and that the terminal states most consistent with sixteen years of holder behavior are nonbearish for bitcoin's effective supply. The approximately 1.148 million BTC Patoshi position is analyzed on two tracks. For a purely wealth-maximizing holder, a three-scenario quantitative analysis (Appendix A) shows that bitcoin's current market depth is sufficient to absorb a patient multi-year liquidation at a cumulative price impact in the mid-single-digit to mid-double-digit percent range relative to counterfactual, with the central scenario clustering near 10 percent. The paper maps a decision space rather than identifying a unique modal outcome, assuming a holder whose profile is consistent with the sixteen-year record. Preference sets consistent with the record, including ideological non-intervention, privacy above all, satisficing, and myth preservation, favor continued dormancy terminating in a cryptographically enforced nonrecovery or destruction arrangement; preference sets favoring adversarial or wealth-maximizing action are possible but less supported. Across the plausible region of the decision space, the bear case is bounded and the terminal states most consistent with observed behavior are neutral to slightly positive for bitcoin's effective supply.
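For intuition on why a patient liquidation's impact can land in the range the abstract reports, here is a generic square-root impact heuristic. This is not the paper's three-scenario model, and the daily volume, volatility, and impact constant below are hypothetical placeholders.

```python
import math

def sqrt_law_impact(total_btc, adv_btc, sigma_daily, Y=0.7):
    # Generic empirical square-root law: metaorder impact is roughly
    # Y * sigma_daily * sqrt(Q / ADV). Inputs are illustrative
    # assumptions, not the paper's calibration.
    return Y * sigma_daily * math.sqrt(total_btc / adv_btc)

# Hypothetical: ~1.148M BTC position, 20k BTC effective daily depth,
# 3% daily volatility -> impact on the order of 15%.
impact = sqrt_law_impact(1_148_000, 20_000, 0.03)
```

Under these placeholder inputs the heuristic lands in the mid-double-digit-percent region, the same order of magnitude as the abstract's bounded-downside claim.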
This paper introduces a heterogeneous macroeconomic model of a Proof-of-Stake (PoS) network to analyze the long-term centralizing effects of external traditional finance (TradFi) yields. We model a continuum of rational actors divided into two distinct classes: investors, who optimize portfolios between staking and external variance-dominated investments, and consumers, who balance staking yields against the transactional utility of holding liquid assets. By employing a quasi-linear utility function to model consumer behavior, we derive a cubic polynomial that strictly defines the unique macroeconomic equilibrium of the coupled network. The model demonstrates that, at scale, external macroeconomic factors force the complete institutional capture of the PoS consensus layer. Because investors have access to external risk premiums, their wealth compounds exponentially, leading to massive capital inflows that crush the protocol's internal staking yield to effectively zero. We show that as the yield is crushed, consumer wealth becomes strictly upper-bounded. Ultimately, consumers are forced to cease staking entirely and hold all remaining wealth in liquid form to satisfy their transactional constraints.
Biodiversity loss is accelerating at an unprecedented pace, threatening ecosystem stability, economic resilience, and human well-being, with billions required to reverse current trends. Against this backdrop, biodiversity finance has emerged as a rapidly expanding but highly fragmented field spanning ecology, economics, finance, accounting, and policy. However, it remains emerging and complex, with the majority of relevant knowledge being produced in non-finance journals. This study employs quantitative bibliometric analysis to examine a corpus of 189,456 references underlying 3,998 articles related to biodiversity and finance. The analysis identifies eight primary research streams within the field that concern (1) strategic and financial approaches in global biodiversity conservation, (2) the impact and implementation of payments for environmental services (PES) in developing countries, (3) neoliberal influences and implications in environmental conservation, (4) biodiversity offsets and conservation, (5) ecosystem services and biodiversity, (6) integrating conservation and community interests in biodiversity management, (7) balancing agricultural intensification with biodiversity conservation, and (8) global and corporate biodiversity reporting. The characteristics of each research stream and its prevalent publications are outlined, alongside an analysis of their temporal evolution and the degree of information exchange among the research streams. The findings provide a structured map of the intellectual architecture of biodiversity finance, document pronounced silos between economically-oriented and critical/political-economy research streams, and translate these patterns into a focused research agenda and implications for policymakers, financial institutions, and corporate actors.
Frozen large language model (LLM) checkpoints extract information from pre-cutoff public text that is associated with future fundamentals and equity returns beyond standard contemporaneous valuation measures. Because each frozen checkpoint has a fixed knowledge cutoff, it can be interpreted as a compressed representation of publicly available textual information at a given point in time. We treat twelve OpenAI snapshots spanning 2021-2025 as time-stamped summaries of the public textual record and extract a sector-neutral LLM outlook score for roughly 7,000 U.S. equities per cross-section. The outlook score is positively associated with analyst revisions, target-price changes and one-month cross-sectional returns in both Fama-MacBeth regressions and pooled panels with model fixed effects (t = 6.02), after direct controls for market-implied valuation and standard factors. Predictability broadly increases with the return horizon, despite a non-monotonic intermediate dip, and, in the pooled panel, is stronger for firms with high analyst coverage, consistent with the view that the bottleneck is not investor inattention but the cost of aggregating dispersed qualitative information across many documents.
Put-call parity is a risk-neutral identity, but enforcing it is path-dependent and capital-using. I study the carry gap, the annualized wedge between option-implied and OIS discount factors, in SPX and RUT options. Because parity enforcement ties up scarce capital, its opportunity cost may reflect outside investment opportunities. Adding low-frequency global asset-return components to an OIS-based baseline improves in-sample and leave-one-year-out out-of-sample R^2, with gains robust to broad-dollar neutralization, alternative asset blocks, and nested horizon selection. The evidence indicates reduced-form P-Q alignment: the carry gap is not empirically separable from physical-measure outside-option proxies, rather than behaving as a purely OIS-contained wedge.
Put-call parity is a terminal-payoff identity; quoted residuals against traded futures are near zero. Yet enforcing parity is path-dependent, exposing arbitrageurs to daily settlement, margin, and finite capital. Using minute-level NBBO data on S&P 500 and Russell 2000 options, I extract option-implied discount factors, compare them with the OIS curve, and construct an annualized carry gap. A reduced-form specification centered on a (\sigma\sqrt{\tau}) path-risk term links the carry gap to implementation risk, trading frictions, and financial conditions, with coefficient signs stable across leave-one-year-out validation. The carry gap is an implementation wedge invisible in price space but systematic in carry space.
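The extraction step described above can be sketched from parity against the forward: C - P = D(tau) * (F - K), so the option-implied discount factor is D = (C - P) / (F - K), and the carry gap is the annualized implied rate minus OIS. Quote handling, strike selection, and the minute-level aggregation are omitted; the function names are illustrative.

```python
import math

def implied_discount_factor(call_mid, put_mid, forward, strike):
    # European parity against the forward: C - P = D(tau) * (F - K),
    # so D(tau) = (C - P) / (F - K). Requires F != K.
    return (call_mid - put_mid) / (forward - strike)

def carry_gap(call_mid, put_mid, forward, strike, tau, ois_rate):
    # Annualized wedge between the option-implied rate and OIS.
    d = implied_discount_factor(call_mid, put_mid, forward, strike)
    implied_rate = -math.log(d) / tau
    return implied_rate - ois_rate
```

In practice one would average D(tau) across a strike range to damp microstructure noise before annualizing.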
Sparsity or complexity? In modern high-dimensional asset pricing, these are often viewed as competing principles: richer feature spaces appear to favor complexity, while economic intuition has long favored parsimony. We show that this tension is misplaced. We distinguish capacity sparsity (the dimensionality of the candidate feature space) from factor sparsity (the parsimonious structure of priced risks), and argue that the two are complements: expanding capacity enables the discovery of factor sparsity. Revisiting the benchmark empirical design of Didisheim et al. (2025) and pushing it to higher complexity regimes, we show that nonlinear feature expansions combined with basis pursuit yield portfolios whose out-of-sample performance dominates ridgeless benchmarks beyond a critical complexity threshold. The evidence shows that the gains from complexity arise not from retaining more factors, but from enlarging the space from which a sparse structure of priced risks can be identified. The virtue of complexity in asset pricing operates through factor sparsity.
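The pipeline above, a nonlinear feature expansion followed by basis pursuit, can be sketched on simulated data. The random tanh expansion, the ISTA lasso solver, and all dimensions below are toy assumptions standing in for the paper's design, not its implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: returns driven by a sparse set of nonlinear features.
n, p_raw, p_feat = 400, 10, 200
X = rng.normal(size=(n, p_raw))                      # raw characteristics
W = rng.normal(size=(p_raw, p_feat)) / np.sqrt(p_raw)
F = np.tanh(X @ W)                                   # random nonlinear expansion
beta_true = np.zeros(p_feat)
beta_true[:3] = 1.0                                  # sparse priced structure
y = F @ beta_true + 0.1 * rng.normal(size=n)

def ista_lasso(A, b, lam, iters=2000):
    # Minimal basis-pursuit-style solver (ISTA for the lasso):
    # argmin_x 0.5*||A x - b||^2 + lam*||x||_1.
    L = np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = A.T @ (A @ x - b)
        z = x - g / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
    return x

beta_hat = ista_lasso(F, y, lam=2.0)
```

The point of the exercise: with a large candidate space (capacity), the l1 penalty recovers the few priced directions (factor sparsity) while zeroing most of the expansion.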
AI stocks trade at extraordinary valuations. We develop an asset pricing model in which investors use AI stocks to hedge against an AI singularity that displaces their consumption. Because markets are incomplete -- investors cannot trade private AI capital -- AI stocks command a premium. Market incompleteness distorts both valuations and the efficient development of AI, creating a rationale for government transfers that becomes compelling when singularity-driven growth overwhelms deadweight costs. This paper was generated by AI, using https://github.com/chenandrewy/ralph-wiggum-asset-pricing/.
LLM-classified 24-hour jump causes show that macroeconomic announcements command the strongest and most lasting premium, supporting a real-time, annually rebalanced factor-mimicking portfolio.
In this paper, I present the first comprehensive, around-the-clock analysis of systematic jump risk by combining high-frequency market data with contemporaneous news narratives identified as the underlying causes of market jumps. These narratives are retrieved and classified using a state-of-the-art open-source reasoning LLM. Decomposing market risk into interpretable jump categories reveals significant heterogeneity in risk premia, with macroeconomic news commanding the largest and most persistent premium. Leveraging this insight, I construct an annually rebalanced real-time Fama-MacBeth factor-mimicking portfolio that isolates the most strongly priced jump risk, achieving a high out-of-sample Sharpe ratio and delivering significant alphas relative to standard factor models. The results highlight the value of around-the-clock analysis and LLM-based narrative understanding for identifying and managing priced risks in real time.
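The Fama-MacBeth step mentioned above can be sketched in two passes: given asset exposures to jump-risk categories, regress the cross-section of returns on those exposures each period, then test the time series of slopes. The data shapes and simulation are illustrative assumptions.

```python
import numpy as np

def fama_macbeth(returns, exposures):
    # returns: (T, N) asset excess returns; exposures: (N, K) betas to
    # jump-risk categories. Each period, cross-sectionally regress
    # returns on exposures; the time-series mean of the slopes estimates
    # the risk premia, and its t-stat tests whether the risk is priced.
    T = returns.shape[0]
    X = np.column_stack([np.ones(exposures.shape[0]), exposures])
    lambdas = np.array([np.linalg.lstsq(X, returns[t], rcond=None)[0]
                        for t in range(T)])
    mean = lambdas.mean(axis=0)
    tstat = mean / (lambdas.std(axis=0, ddof=1) / np.sqrt(T))
    return mean, tstat
```

A factor-mimicking portfolio then weights assets by the (pseudo-inverse of the) exposure matrix so that the portfolio's return tracks the priced premium.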
In recent years Australia has observed a growing, unexplained resilience of increasing house price trends. Here, we seek to understand what is driving Australia's indestructible asset using insights from market experts. We construct a differential equation model of house price to develop intuition for its historical behaviour and responsiveness to changes in mortgage rates. Using this model, we identify a point of 'decoupling' between house price and mortgage rate in the system with supply limitations found to be the main driver for this change. From there, modern extreme value techniques are implemented on real-world data to investigate how the effectiveness of mortgage rate in moderating extreme house price has changed before and after this historical decoupling. We find that without an increase in the housing supply chain, through either deregulation or reduced competition with government building, an 11% increase in mortgage rate will be needed to slow extreme housing costs.
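As one concrete instance of "modern extreme value techniques", a Hill tail-index estimator can be sketched; whether this paper uses Hill, peaks-over-threshold GPD fitting, or another estimator is not stated, so treat this purely as an illustration of the toolkit.

```python
import numpy as np

def hill_estimator(data, k):
    # Hill estimator of the tail index alpha from the k largest
    # observations: 1 / mean(log(X_(i) / X_(k+1))) over the top k.
    # A heavier tail (smaller alpha) means extremes are less moderated.
    x = np.sort(np.asarray(data, dtype=float))[::-1]
    return 1.0 / np.mean(np.log(x[:k] / x[k]))
```

Comparing the estimated tail index of house-price extremes before and after a structural break is one standard way to ask whether a moderating variable (here, the mortgage rate) has lost its grip on the tail.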
Cross-sectional dispersion in firm-level realized skewness is significantly and negatively related to future stock market returns. The predictive power of skewness dispersion is robust to in-sample and out-of-sample estimation and is incremental over a broad set of existing predictors, with only a few alternatives retaining independent explanatory ability. Skewness dispersion also delivers substantial economic gains in portfolio allocation. Its forecasting power is concentrated in months with monetary policy announcements, reflecting an information-based mechanism. The empirical evidence suggests that skewness dispersion captures the gradual incorporation of macro news into prices, which is driven by variation in aggregate risk and valuation adjustments.
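The predictor above can be sketched directly: compute each firm's realized skewness from its within-month daily returns, then take the cross-sectional standard deviation. The demeaned sample-skewness formula and the (N firms x D days) layout are conventional assumptions, not necessarily the paper's exact estimator.

```python
import numpy as np

def realized_skewness(daily_returns):
    # Firm-level realized skewness from daily returns within a month
    # (demeaned sample-skewness form; an illustrative convention).
    r = daily_returns - daily_returns.mean()
    n = len(r)
    return np.sqrt(n) * (r ** 3).sum() / ((r ** 2).sum() ** 1.5)

def skewness_dispersion(panel):
    # panel: (N, D) daily returns for N firms over one month. The
    # predictor is the cross-sectional dispersion (std) of firm-level
    # realized skewness.
    skews = np.array([realized_skewness(row) for row in panel])
    return skews.std(ddof=1)
```

The monthly time series of this statistic is what enters the in-sample and out-of-sample return-forecasting regressions.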
The global financial architecture is undergoing a shift from intermediary-centric settlement to programmable infrastructure, aiming to transmute trillions in static, illiquid capital into active, high-velocity instruments. We argue that Real World Asset (RWA) tokenization represents a conceptual evolution beyond mere digitization, converting passive ledger entries into programmable economic agents capable of autonomous settlement and algorithmic collateralization. However, achieving such seamless capital efficiency necessitates resolving the fundamental friction between deterministic on-chain code and probabilistic off-chain reality, navigating the oracle problem and jurisdictional interoperability. This systematization of knowledge presents a taxonomy for the RWA lifecycle and deconstructs the multi-layered architecture, spanning legal custody, technical standards, and cryptoeconomic valuation, required to enforce off-chain rights within on-chain environments. We study systemic constraints such as latency and regulatory fragmentation through a comparative overview of sovereign debt, private credit, and real estate protocols, complemented by an empirical case study of on-chain U.S. Treasuries. We synthesize these findings to propose a prognostic outlook, positing that while asset tokenization provides a transitional bridge, it is not necessarily the inevitable shift compared to the emergence of unified, programmable ledgers.
Agentic AI rivals human capabilities across a wide range of domains. Looking ahead, it is foreseeable that AI agents will autonomously handle complex workflows and interactions. Early prototypes of this paradigm are emerging, e.g., OpenClaw and Moltbook, signaling a shift toward Agent-to-Agent (A2A) ecosystems. However, despite these promising blueprints, critical trust and security challenges remain, particularly in scenarios involving financial transactions. Ensuring secure and reliable payment mechanisms between unknown and untrusted agents is crucial for a fully functional and trustworthy A2A ecosystem. Although blockchain-based infrastructures provide a natural foundation for this setting via programmable settlement, transparent accounting, and open interoperability, trust and security challenges have not yet been fully addressed. Hence, for the first time, we systematize blockchain-based A2A payments, e.g., X402, into a four-stage lifecycle: discovery, authorization, execution, and accounting. We categorize representative designs at each stage and identify key challenges, including weak intent binding, misuse under valid authorization, payment-service decoupling, and limited accountability. We highlight future directions for strengthening cross-stage consistency, enabling behavior-aware control, and supporting compositional payment workflows across agents and systems.
Prior research shows that large language models (LLMs) exhibit systematic extrapolation bias when forming predictions from both experimental and real-world data, and that prompt-based approaches appear limited in alleviating this bias. We propose a supervised fine-tuning (SFT) approach that uses Low-Rank Adaptation (LoRA) to train off-the-shelf LLMs on instruction datasets constructed from rational benchmark forecasts. By intervening at the parameter level, SFT changes how LLMs map observed information into forecasts and thereby mitigates extrapolation bias. We evaluate the fine-tuned model in two settings: controlled forecasting experiments and cross-sectional stock return prediction. In both settings, fine-tuning corrects the extrapolative bias out-of-sample, establishing a low-cost and generalizable method for debiasing LLMs.
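The LoRA mechanism named above can be sketched in a few lines: the pretrained weight stays frozen and the fine-tuning update is factored into two small trainable matrices. This is a minimal numpy illustration of the parameterization, not the paper's training setup or the PEFT library's API.

```python
import numpy as np

class LoRALinear:
    # Minimal sketch of a Low-Rank Adaptation layer: the frozen weight
    # W0 is left untouched and the trainable update is factored as
    # B @ A with rank r << min(d_in, d_out), so SFT only learns A and B.
    def __init__(self, W0, rank=4, alpha=8.0, seed=0):
        rng = np.random.default_rng(seed)
        d_out, d_in = W0.shape
        self.W0 = W0                                     # frozen
        self.A = rng.normal(0, 0.01, size=(rank, d_in))  # trainable
        self.B = np.zeros((d_out, rank))                 # trainable, init 0
        self.scale = alpha / rank

    def forward(self, x):
        # y = W0 x + (alpha/r) * B A x; with B initialized to zero this
        # equals W0 x, so fine-tuning starts from pretrained behavior.
        return self.W0 @ x + self.scale * (self.B @ (self.A @ x))
```

Because only A and B receive gradients, the parameter-level intervention the abstract describes is cheap: the debiasing signal from rational benchmark forecasts is absorbed into the low-rank update while the base model stays intact.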
Decentralized finance introduces new business models and use cases as part of digital finance. Restaking has recently emerged as a transformative mechanism in DeFi, promising extra yields but introducing complex and interconnected risks. This paper surveys the current restaking landscape, empirically analyzes the revenue drivers of a liquid restaking protocol, and technically investigates the risk arising from the interconnection between liquid restaking and other protocols. The revenue dynamics of Renzo Protocol are analyzed using an OLS regression model, Granger-causality tests, and random forest feature importance. Our results show that revenue is primarily predicted by the value locked in the underlying EigenLayer ecosystem, the yield of Renzo Protocol's liquid restaking token, and the multi-blockchain expansion of that token. This multi-blockchain expansion is a double-edged sword: bridging to other networks is crucial for user adoption, but it adds bridge risks to the existing risks of restaking. We investigate the cross-contamination risk between different DeFi services and the liquid restaking protocol. By mapping asset flows across the decentralized finance ecosystem, we find that the bridge risk at the current size of Renzo's liquid-restaking assets does not pose a systemic risk to the current restaking and staking ecosystem. To address the potential consequences of these interconnection risks, we introduce two hypothetical scenarios and a stress test that assume a large number of compromised liquid restaking tokens and a smart contract logic failure in a DeFi protocol. Given the breadth of liquid-restaking protocols and their growing interconnection, further work is needed to explore these growing complexities.