pith. machine review for the scientific record.

arxiv: 2506.03800 · v2 · submitted 2025-06-04 · 🧬 q-bio.BM

Recognition: unknown

STELLA: A Multimodal LLM for Protein Functional Annotation via Unified Sequence-Structure Encoding

Authors on Pith: no claims yet
classification 🧬 q-bio.BM
keywords: protein, functional, STELLA, annotation, multimodal, bimodal, encoding, language
abstract

Understanding the intricate interplay among sequence, structure, and function remains a fundamental challenge in proteomics. The sequence-structure-function paradigm posits that biological roles are governed by the tertiary geometric conformations encoded within primary sequences; consequently, integrating these multi-modal descriptors is imperative for accurate functional annotation. While protein language models (pLMs) have achieved significant progress via representation learning on massive sequence data, they often lack the capacity to incorporate high-resolution structural information and the rich textual context that characterizes protein roles. In this work, we present STELLA, a multimodal LLM that synergistically aligns bimodal (sequence-structure) representations with the textual modality to advance protein functional annotation. By leveraging ESM3 for unified bimodal encoding and Llama-3.1-8B-Instruct for natural language modeling, STELLA achieves state-of-the-art performance in two critical tasks: Functional Description Prediction and Enzyme-catalyzed Reaction Prediction. This study demonstrates that multimodal LLMs represent a paradigm shift beyond pure pLMs, offering a new frontier for protein biology and biomedical discovery. The code is available at https://github.com/ocx-lab/STELLA.
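The abstract describes coupling a bimodal protein encoder (ESM3) with an instruction-tuned LLM (Llama-3.1-8B-Instruct). A common pattern for this kind of alignment is a learned projection that maps per-residue protein embeddings into the LLM's token-embedding space, so projected protein "tokens" can be prepended to the text prompt. The sketch below illustrates only that pattern; the paper's actual adapter design is not specified in the abstract, and the protein embedding width (1536) and the use of a single linear map are illustrative assumptions (4096 is the published Llama-3.1-8B hidden size).

```python
import numpy as np

# Assumed dimensions: an ESM3-style per-residue embedding width (illustrative)
# and the Llama-3.1-8B hidden size.
D_PROT, D_LLM = 1536, 4096

rng = np.random.default_rng(0)

# Hypothetical linear projection adapter: maps (L, D_PROT) sequence-structure
# embeddings into the LLM's (L, D_LLM) token-embedding space.
W = rng.standard_normal((D_PROT, D_LLM)) / np.sqrt(D_PROT)

def project_protein(prot_emb: np.ndarray) -> np.ndarray:
    """Project per-residue protein embeddings into LLM embedding space."""
    return prot_emb @ W

# Mock a 120-residue protein encoded by the bimodal encoder, then prepend
# the projected protein tokens to (mock) text-prompt embeddings.
protein_emb = rng.standard_normal((120, D_PROT))
prompt_emb = rng.standard_normal((16, D_LLM))
llm_input = np.concatenate([project_protein(protein_emb), prompt_emb], axis=0)
print(llm_input.shape)  # (136, 4096)
```

In practice such an adapter is trained end-to-end (or in stages) so that functional descriptions generated by the LLM become conditioned on the structural signal carried by the projected tokens.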

This paper has not been read by Pith yet.

discussion (0)


Forward citations

Cited by 1 Pith paper

Reviewed papers in the Pith corpus that reference this work. Sorted by Pith novelty score.

  1. Multimodal Protein Language Models for Enzyme Kinetic Parameters: From Substrate Recognition to Conformational Adaptation

    cs.CV · 2026-03 · unverdicted · novelty 7.0

    ERBA is a new staged multimodal adapter that improves protein language model predictions of enzyme kinetic parameters by separately modeling substrate recognition and induced-fit conformational changes.