Any-Resolution AI-Generated Image Detection by Spectral Learning
Recent works have established that AI models introduce spectral artifacts into generated images and propose approaches for learning to capture them using labeled data. However, the significant differences in such artifacts among different generative models hinder these approaches from generalizing to generators not seen during training. In this work, we build upon the key idea that the spectral distribution of real images constitutes both an invariant and highly discriminative pattern for AI-generated image detection. To model this under a self-supervised setup, we employ masked spectral learning using the pretext task of frequency reconstruction. Since generated images constitute out-of-distribution samples for this model, we propose spectral reconstruction similarity to capture this divergence. Moreover, we introduce spectral context attention, which enables our approach to efficiently capture subtle spectral inconsistencies in images of any resolution. Our spectral AI-generated image detection approach (SPAI) achieves a 5.5% absolute improvement in AUC over the previous state-of-the-art across 13 recent generative approaches, while exhibiting robustness against common online perturbations. Code is available at https://mever-team.github.io/spai.
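As a toy illustration of the underlying signal (hypothetical function names; SPAI's actual detector is a learned masked-spectral reconstructor, not this cosine similarity), one can compare an image's magnitude spectrum with that of a frequency reconstruction — a faithful reconstruction of in-distribution (real) statistics scores high, while a divergent spectrum flags an out-of-distribution (generated) image:

```python
import numpy as np

def spectral_reconstruction_similarity(image, reconstruction):
    """Cosine similarity between the magnitude spectra of an image and its
    frequency reconstruction. A toy stand-in for SPAI's learned similarity."""
    f1 = np.abs(np.fft.fft2(image)).ravel()
    f2 = np.abs(np.fft.fft2(reconstruction)).ravel()
    return float(f1 @ f2 / (np.linalg.norm(f1) * np.linalg.norm(f2) + 1e-12))

rng = np.random.default_rng(0)
real = rng.normal(size=(32, 32))
good_recon = real + 0.05 * rng.normal(size=(32, 32))  # faithful reconstruction
bad_recon = rng.normal(size=(32, 32))                 # unrelated spectrum

s_good = spectral_reconstruction_similarity(real, good_recon)
s_bad = spectral_reconstruction_similarity(real, bad_recon)
```

Because magnitude spectra are nonnegative, the score lies in [0, 1]; the gap between `s_good` and `s_bad` is the kind of divergence the detector learns to exploit.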
On Learning the Transformer Kernel
In this work we introduce KERNELIZED TRANSFORMER, a generic, scalable, data-driven framework for learning the kernel function in Transformers. Our framework approximates the Transformer kernel as a dot product between spectral feature maps and learns the kernel by learning the spectral distribution. This not only helps in learning a generic kernel end-to-end, but also reduces the time and space complexity of Transformers from quadratic to linear. We show that KERNELIZED TRANSFORMERS achieve performance comparable to existing efficient Transformer architectures, in terms of both accuracy and computational efficiency. Our study also demonstrates that the choice of kernel has a substantial impact on performance, and that kernel-learning variants are competitive alternatives to fixed-kernel Transformers, on both long- and short-sequence tasks.
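The spectral-feature-map factorization can be sketched with fixed trigonometric random features (a Gaussian spectral distribution yields an RBF-like kernel; the paper's contribution is to *learn* that distribution, and practical variants prefer positive feature maps over the cos/sin ones used here):

```python
import numpy as np

def spectral_feature_map(x, omega):
    """Random Fourier features: k(q, k') ~= phi(q) . phi(k'), where the kernel
    is determined by the spectral distribution omega is drawn from."""
    proj = x @ omega
    return np.concatenate([np.cos(proj), np.sin(proj)], axis=-1) / np.sqrt(omega.shape[1])

def kernelized_attention(Q, K, V, omega):
    """Linear-complexity attention: phi(Q) @ (phi(K)^T V), normalised row-wise.
    Cost is O(n d m) instead of the O(n^2 d) of explicit attention."""
    phi_q = spectral_feature_map(Q, omega)
    phi_k = spectral_feature_map(K, omega)
    num = phi_q @ (phi_k.T @ V)                        # never forms the n x n matrix
    den = phi_q @ phi_k.sum(axis=0, keepdims=True).T   # row normaliser
    return num / (den + 1e-9)

rng = np.random.default_rng(1)
n, d, m = 8, 4, 64
Q, K, V = (rng.normal(size=(n, d)) for _ in range(3))
omega = rng.normal(size=(d, m))   # learned in the paper; fixed here
out = kernelized_attention(Q, K, V, omega)
```

Making `omega` a trainable parameter (or the output of a sampler over a parameterized spectral density) is what turns this fixed-kernel sketch into the learned-kernel Transformer.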
IXPE Observation of the Low-Synchrotron Peaked Blazar S4 0954+65 During An Optical-X-ray Flare
The X-ray polarization observations made possible with the Imaging X-ray Polarimetry Explorer (IXPE) offer new ways of probing high-energy emission processes in astrophysical jets from blazars. Here we report on the first X-ray polarization observation of the blazar S4 0954+65 in a high optical and X-ray state. During our multi-wavelength campaign on the source, we detected an optical flare whose peak coincided with the peak of an X-ray flare. This optical-X-ray flare most likely took place in a feature moving along the parsec-scale jet, imaged at 43 GHz by the Very Long Baseline Array. The 43 GHz polarization angle of the moving component underwent a rotation near the time of the flare. In the optical band, prior to the IXPE observation, we measured the polarization angle to be aligned with the jet axis. In contrast, during the optical flare the optical polarization angle was perpendicular to the jet axis; after the flare, it reverted to being parallel to the jet axis. Due to the smooth behavior of the optical polarization angle during the flare, we favor shocks as the main acceleration mechanism. We also infer that the ambient magnetic field lines in the jet were parallel to the jet position angle. The average degree of optical polarization during the IXPE observation was (14.3 ± 4.1)%. Despite the flare, we only detected an upper limit of 14% (at the 3σ level) on the X-ray polarization degree, although a reasonable assumption on the X-ray polarization angle results in an upper limit of 8.8% (3σ). We model the spectral energy distribution (SED) and spectral polarization distribution (SPD) of S4 0954+65 with leptonic (synchrotron self-Compton) and hadronic (proton and pair synchrotron) models. The constraints we obtain with our combined multi-wavelength polarization observations and SED modeling tentatively disfavor hadronic models for the X-ray emission in S4 0954+65.
A slowly pulsating run-away B star at high Galactic latitude ejected from a spiral arm
We report the discovery of the young B6V run-away star LAMOST J083323.18+430825.4, 2.5 kpc above the Galactic plane. Its atmospheric parameters and chemical composition are determined from LAMOST spectra, indicating normal composition. The effective temperature (T_eff = 14,500 K) and gravity (log g = 3.79) suggest that the star is close to terminating hydrogen burning. An analysis of the spectral energy distribution allowed us to determine the angular diameter as well as the interstellar reddening. Using evolutionary models from the MIST database we derived the stellar mass (4.75 M☉) and age (104^{+11}_{-13} Myr). The spectroscopic distance (4.17 kpc), the radius (4.5 R☉), and the luminosity (log(L/L☉) = 2.89) then follow from the atmospheric parameters. Using Gaia proper motions, the trajectory is traced back to the Galactic disk to identify the place of birth in a spiral arm. The ejection velocity of 92 km s⁻¹ is typical for runaway stars in the halo. The age of the star is larger than its time of flight (78 ± 4 Myr), which favors a binary supernova event as the likely ejection mechanism. The TESS light curve shows variations with a period of 3.58 days, from which we conclude that it is a slowly pulsating B star, one of very few run-away B stars known to pulsate.
ALMA/SCUBA-2 COSMOS Survey: Properties of X-ray- and SED-selected AGNs in Bright Submillimeter Galaxies
We investigate the properties of active galactic nuclei (AGNs) in the brightest submillimeter galaxies (SMGs) in the COSMOS field. We utilize the bright sample of the ALMA/SCUBA-2 COSMOS Survey (AS2COSMOS), which consists of 260 SMGs with S_{870μm} = 0.7–19.2 mJy at z = 0–6. We perform optical-to-millimeter spectral energy distribution (SED) modeling for the whole sample. We identify 24 AGN-host galaxies from the SEDs. Supplemented by 23 X-ray detected AGNs (X-ray AGNs), we construct an overall sample of 40 AGN-host galaxies. The X-ray luminosity upper bounds indicate that the X-ray undetected, SED-identified AGNs are likely to be nearly Compton thick or have unusually suppressed X-ray emission. From visual classification, we identify 25^{+6}_{-5}% of the SMGs without AGNs as major-merger candidates. This fraction is broadly consistent with the general galaxy population at z ∼ 2, suggesting that major mergers are not necessarily required for the enhanced star formation in SMGs. We also identify 47^{+16}_{-15}% of the AGN hosts as major-merger candidates, about twice the fraction in the SMGs without AGNs. This suggests that major mergers play a key role in triggering AGN activity in bright SMGs.
Discovery of 118 New Ultracool Dwarf Candidates Using Machine Learning Techniques
We present the discovery of 118 new ultracool dwarf candidates, found using a new machine learning tool, named SMDET, applied to time-series images from the Wide-field Infrared Survey Explorer. We gathered photometric and astrometric data to estimate each candidate's spectral type, distance, and tangential velocity. The sample has a photometrically estimated spectral class distribution of 28 M dwarfs, 64 L dwarfs, and 18 T dwarfs. We also identify a T subdwarf candidate, two extreme T subdwarf candidates, and two candidate young ultracool dwarfs. Five objects did not have enough photometric data for any estimates to be made. To validate our estimated spectral types, spectra were collected for two objects, yielding confirmed spectral types of T5 (estimated T5) and T3 (estimated T4). This demonstrates the effectiveness of machine learning tools as a large-scale discovery technique.
The DESI PRObabilistic Value-Added Bright Galaxy Survey (PROVABGS) Mock Challenge
The PRObabilistic Value-Added Bright Galaxy Survey (PROVABGS) catalog will provide measurements of galaxy properties, such as stellar mass (M_*), star formation rate (SFR), stellar metallicity (Z_MW), and stellar age (t_age,MW), for >10 million galaxies of the DESI Bright Galaxy Survey. Full posterior distributions of the galaxy properties will be inferred using state-of-the-art Bayesian spectral energy distribution (SED) modeling of DESI spectroscopy and Legacy Surveys photometry. In this work, we present the SED model, Bayesian inference framework, and methodology of PROVABGS. Furthermore, we apply the PROVABGS SED modeling to realistic synthetic DESI spectra and photometry, constructed using the L-GALAXIES semi-analytic model. We compare the inferred galaxy properties to the true galaxy properties of the simulation using a hierarchical Bayesian framework to quantify accuracy and precision. Overall, we accurately infer the true M_*, SFR, Z_MW, and t_age,MW of the simulated galaxies. However, the priors on galaxy properties induced by the SED model have a significant impact on the posteriors. They impose a lower bound of SFR > 10⁻¹ M☉/yr on the SFR, a ∼0.3 dex bias on log Z_MW for galaxies with low spectral signal-to-noise, and an upper bound of t_age,MW < 8 Gyr on stellar age. This work also demonstrates that a joint analysis of spectra and photometry significantly improves the constraints on galaxy properties over photometry alone and is necessary to mitigate the impact of the priors. With the methodology presented and validated in this work, PROVABGS will maximize the information extracted from DESI observations and provide a probabilistic value-added galaxy catalog that will extend current galaxy studies to new regimes and unlock cutting-edge probabilistic analyses.
An X-ray Significantly Variable, Luminous, Type 2 Quasar at z = 2.99 with a Massive Host Galaxy
We present a comprehensive X-ray analysis and spectral energy distribution (SED) fitting of WISEA J171419.96+602724.6, an extremely luminous type 2 quasar at z = 2.99. The source was suggested as a candidate Compton-thick (column density N_H > 1.5 × 10²⁴ cm⁻²) quasar by a short XMM-Newton observation in 2011. We recently observed the source with deep NuSTAR and XMM-Newton exposures in 2021 and found that it has a lower obscuration of N_H ∼ 5 × 10²² cm⁻² with an about four times lower flux. The two epochs of observations suggest that the source was significantly variable (at the 2–3σ level) in X-ray obscuration, flux, and intrinsic luminosity over less than 2.5 years in the source rest frame. We performed SED fitting of this source using CIGALE, taking advantage of its rich multiwavelength data (from hard X-rays to radio). The source is very luminous, with a bolometric luminosity of L_BOL ∼ 2.5 × 10⁴⁷ erg s⁻¹. Its host galaxy has a huge star formation rate (SFR) of ∼1280 M☉ yr⁻¹ and a huge stellar mass of ∼1.1 × 10¹² M☉. The correlation between the SFR and stellar mass of this source is consistent with that measured in high-z quasars. It is also consistent with that measured in main-sequence star-forming galaxies, suggesting that the presence of the active nucleus in our target does not enhance or suppress the SFR of its host galaxy. The source is an infrared-hyperluminous, obscured galaxy with a significant amount of hot dust in its torus, and it shares many properties with hot, dust-obscured galaxies.
Optical Spectroscopy of Classical Be Stars in Old Open Clusters
We performed optical spectroscopy of 16 classical Be stars in 11 open clusters older than 100 Myr. Ours is the first spectroscopic study of classical Be stars in open clusters older than 100 Myr. We found that the Hα emission strength of most of the stars is less than 40 Å, in agreement with previous studies. Our analysis further suggests that one of the stars, KW97 35 12, might be an intrinsically weak Hα emitter, showing an Hα equivalent width of −0.5 Å. Interestingly, we also found that the newly detected classical Be star LS III 47 37b might be a component of the possible visual binary system LS III 47 37, where the other companion is also a classical Be star. Hence, the present study indicates the possible detection of a binary Be system. Moreover, all 16 stars exhibit fewer emission lines than classical Be stars younger than 100 Myr. Furthermore, the spectral type distribution analysis of B-type and classical Be stars for the selected clusters suggests that the occurrence of classical Be (CBe) stars may depend on the spectral type distribution of the B-type stars present in these clusters.
Unlocking the radio-gamma spectrum of the pulsar wind nebula around PSR J1124-5916 in SNR G292.0+1.8
We present the first detection of GeV gamma-ray emission potentially associated with the pulsar wind nebula (PWN) hosted by the young core-collapse supernova remnant G292.0+1.8, based on a detailed time-resolved analysis of Fermi-LAT data. By isolating the unpulsed component from the dominant magnetospheric radiation of PSR J1124−5916, we successfully disentangle a candidate nebular emission in the GeV range, characterise its morphology, and extract its spectrum. This identification places G292.0+1.8 among the few systems in which the pulsar and PWN contributions have been spectrally resolved at high energies, offering new insight into their respective emission mechanisms. We characterise the gamma-ray spectrum of the pulsar and model the broadband spectral energy distribution (SED) of the PWN using radio, X-ray, and GeV data. The emission is well described by a single electron population with two spectral breaks: one intrinsic to the injection spectrum and another produced by synchrotron cooling in a magnetic field of ∼15 μG. Notably, the inferred magnetic field and the low TeV flux of the nebula resemble those of 3C 58, suggesting that similar low-field environments can arise in young PWNe. The high-energy portion of the SED is now tightly constrained by our GeV detection and existing TeV upper limits. Compared to our model, earlier predictions tend to underpredict the gamma-ray flux, while others that succeed in reproducing the GeV component often overpredict the TeV emission. This mismatch underscores the challenges in modelling particle acceleration and radiation processes in young PWNe and establishes G292.0+1.8 as a valuable benchmark for testing and refining such models.
AppleCiDEr II: SpectraNet -- A Deep Learning Network for Spectroscopic Data
Time-domain surveys such as the Zwicky Transient Facility (ZTF) have opened a new frontier in the discovery and characterization of transients. While photometric light curves provide broad temporal coverage, spectroscopic observations remain crucial for physical interpretation and source classification. However, existing spectral analysis methods -- often reliant on template fitting or parametric models -- are limited in their ability to capture the complex and evolving spectra characteristic of such sources, which are sometimes only available at low resolution. In this work, we introduce SpectraNet, a deep convolutional neural network designed to learn robust representations of optical spectra from transients. Our model combines multi-scale convolution kernels and multi-scale pooling to extract features from preprocessed spectra in a hierarchical and interpretable manner. We train and validate SpectraNet on low-resolution time-series spectra obtained from the Spectral Energy Distribution Machine (SEDM) and other instruments, demonstrating state-of-the-art performance in classification. Furthermore, in redshift prediction tasks, SpectraNet achieves a root mean squared relative redshift error of 0.02, highlighting its effectiveness in precise regression tasks as well.
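The multi-scale convolution-and-pooling idea can be sketched in a few lines of numpy (fixed box filters and max pooling stand in for SpectraNet's learned kernels; the function name and sizes are our own illustration): narrow kernels preserve sharp spectral lines while wide kernels capture the continuum shape.

```python
import numpy as np

def multi_scale_features(spectrum, kernel_sizes=(3, 7, 15), pool=4):
    """Smooth the spectrum with kernels of several widths, max-pool each
    response, and concatenate into one feature vector. A toy stand-in for a
    learned multi-scale convolutional block."""
    feats = []
    for k in kernel_sizes:
        # box filter of width k (a learned kernel in the real network)
        response = np.convolve(spectrum, np.ones(k) / k, mode="same")
        trimmed = response[: len(response) - len(response) % pool]
        feats.append(trimmed.reshape(-1, pool).max(axis=1))  # max pooling
    return np.concatenate(feats)

rng = np.random.default_rng(2)
spec = rng.normal(size=100) + 5.0   # mock 100-pixel low-resolution spectrum
feat = multi_scale_features(spec)   # 25 pooled values per kernel size
```

In the real model the pooled multi-scale responses feed further convolutional layers and a classification or redshift-regression head.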
Cosmic Evolution Early Release Science (CEERS) survey: The colour evolution of galaxies in the distant Universe
The wavelength coverage and sensitivity of JWST now enable us to probe the rest-frame UV–optical spectral energy distributions (SEDs) of galaxies at high redshift (z > 4). From these SEDs it is possible, in principle, to infer key physical properties through SED fitting, including stellar masses, star formation rates, and dust attenuation. These in turn can be compared with the predictions of galaxy formation simulations, allowing us to validate and refine the incorporated physics. However, the inference of physical properties, particularly from photometry alone, can lead to large uncertainties and potential biases. Instead, it is now possible, and common, for simulations to be forward-modelled to yield synthetic observations that can be compared directly to real observations. In this work, we measure the JWST broadband fluxes and colours of a robust sample of 5 < z < 10 galaxies using the Cosmic Evolution Early Release Science (CEERS) Survey. We then analyse predictions from a variety of models using the same methodology and compare the NIRCam/F277W magnitude distribution and NIRCam colours with observations. We find that the predicted and observed magnitude distributions are similar, at least at 5 < z < 8. At z > 8 the distributions differ somewhat, though our observed sample is small and thus susceptible to statistical fluctuations. Likewise, the predicted and observed colour evolution show broad agreement, at least at 5 < z < 8. There is, however, some disagreement between the observed and modelled strength of the strong-line contribution. In particular, all the models fail to reproduce the F410M−F444W colour at z > 8, though, again, the sample size is small here.
First Light And Reionisation Epoch Simulations (FLARES) XVI: Size Evolution of Massive Dusty Galaxies at Cosmic Dawn from UV to IR
We use the First Light And Reionisation Epoch Simulations (FLARES) to study the evolution of the rest-frame ultraviolet (UV) and far-infrared (FIR) sizes of a statistical sample of massive (≳10⁹ M☉) high-redshift galaxies (z ∈ [5, 10]). Galaxies are post-processed using the SKIRT radiative transfer code to self-consistently obtain the full spectral energy distribution and surface brightness distribution. We create mock Near Infrared Camera (NIRCam) observations of the galaxies to study the rest-frame UV (1500 Å) morphology. We also generate mock rest-frame FIR (50 μm) photometry and mock ALMA (158 μm; 0.01″–0.03″ and ≈0.3″ angular resolution) observations to study the dust continuum. We find that the effect of dust on observed sizes diminishes with increasing wavelength from the UV to the optical (∼0.6 times the UV size at 0.4 μm), with no evolution in FIR sizes. Observed sizes vary within 0.4–1.2 times the intrinsic sizes at different signal-to-noise ratios (SNR = 5–20) across redshifts. The effect of PSF and noise makes bright structures prominent, whereas fainter regions blend with the noise, leading to an underestimation (factor of 0.4–0.8) of sizes at SNR = 5. At SNR = 15–20, the underestimation is reduced (factor of 0.6–0.9) at z = 5–8, but due to the PSF, bright cores dominate at z = 9–10, resulting in an overestimation (factor of 1.0–1.2). For ALMA, low-resolution sizes are affected by noise, which acts as extended emission. The size evolution in the UV broadly agrees with current observational samples and other simulations. This work is one of the first to analyse the panchromatic sizes of a statistically significant sample of simulated high-redshift galaxies, complementing a growing body of research highlighting the importance of conducting an equivalent comparison between observed galaxies and their simulated counterparts in the early Universe.
1FLAT: a Firmamento-based catalog of AGN in Fermi-LAT high Galactic latitude γ-ray sources
We present a systematic reassessment of 5,062 high-Galactic latitude gamma-ray sources from the Fermi-LAT 4FGL-DR4 catalog using Firmamento, a web-based platform for multi-frequency source discovery and analysis. Our goal is to provide an independent evaluation of LAT gamma-ray source associations through alternative spectral and spatial methods that combine recent and legacy survey data, supplemented by human supervision of spectral energy distributions (SEDs), source morphology, flux variability, and template-based comparisons. Firmamento confirms the 4FGL-DR4 and 4LAC-DR3 counterparts or unassociated sources in 4,493 cases (88.8%), demonstrating the robustness of both approaches. Beyond this general agreement, we identify 421 new blazar counterparts among previously unassociated sources, thereby reducing the fraction of unidentified extragalactic Fermi-LAT sources from 25% to 17%. In addition, in 64 cases we find alternative blazar associations, while in 49 instances we do not confirm the 4FGL-DR4 association. For all confirmed blazar counterparts we provide homogeneous estimates of synchrotron peak frequency and peak flux using machine-learning and template-based methods; these agree with 4LAC-DR3 values in most cases, though significant discrepancies appear for a few dozen sources, often due to improved X-ray coverage. The primary outcome of this work is the 1st Firmamento LAT AGN table (1FLAT), made publicly available through the Firmamento platform (https://firmamento.nyuad.nyu.edu), where all related multi-wavelength data and images are available. The project involved extensive manual validation and benefited from the active participation of graduate and undergraduate students, highlighting the platform's value for both research and education.
The Binary Fraction of Red Supergiants in the Magellanic Clouds
Red supergiants (RSGs), as the descendants of OB-type stars and the progenitors of supernovae, provide crucial insights into the evolution of massive stars, particularly in binary systems. Previous studies show that the binary fraction of RSGs (≈15%–40%) is significantly lower than that of their predecessors (≈50%–70%). In this work, we investigate the binary fraction of RSGs using the recently selected largest samples of 4695 and 2097 RSGs in the Large Magellanic Cloud (LMC) and Small Magellanic Cloud (SMC), respectively. Binary systems with a hot companion (an O-, B-, or A-type star) are identified by detecting ultraviolet (UV) excess in the observed spectral energy distribution (SED), which ranges from the ultraviolet to the mid-infrared, after subtracting the model SED of the RSG, since RSGs are very weak in the UV band. We find that the lower limit of the binary fraction is 30.2% ± 0.7% in the LMC and 32.2% ± 1% in the SMC. If the sample is limited to luminous RSGs with log L/L☉ > 4.0, the binary fraction becomes 26.6% ± 1.1% and 26.4% ± 1.7% in the LMC and SMC, respectively. The derived binary fraction is valid in the range ∼2.3 < log P/[d] < ∼8. Our study suggests that roughly one-third of massive stars host a companion within ∼30,000 AU. In addition, 15 RSGs are also identified as binaries via HST/STIS spectra, and a handful of the binaries identified by the SED fitting are confirmed by their light curves and radial velocity dispersion. The stellar parameters of the companions, i.e., T_eff, R, L, and log g, are calculated by model fitting.
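The selection step can be sketched as a simple significance cut on the UV excess after subtracting the RSG model flux (a toy stand-in with hypothetical names and made-up numbers; the actual analysis fits the full UV-to-mid-infrared SED):

```python
import numpy as np

def uv_excess_candidates(obs_uv_flux, model_uv_flux, obs_err, n_sigma=3.0):
    """Flag stars whose observed UV flux exceeds the single-star RSG model
    prediction by more than n_sigma: candidate hot (OBA) companions."""
    excess = obs_uv_flux - model_uv_flux
    return excess > n_sigma * obs_err

# Mock UV fluxes (arbitrary units) for four RSGs
obs = np.array([1.0, 5.0, 1.2, 9.0])
model = np.array([1.0, 1.0, 1.0, 1.0])   # single-star RSG SED prediction
err = np.array([0.3, 0.3, 0.3, 0.3])
flags = uv_excess_candidates(obs, model, err)
```

Stars 2 and 4 show a significant excess and would be passed on as binary candidates for follow-up (light curves, radial velocities, HST/STIS spectra).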
Snapshot hyperspectral imaging of intracellular lasers
Intracellular lasers are emerging as powerful biosensors for multiplexed tracking and precision sensing of cells and their microenvironment. This sensing capacity is enabled by quantifying their narrow-linewidth emission spectra, which is presently challenging to do at high speeds. In this work, we demonstrate rapid snapshot hyperspectral imaging of intracellular lasers. Using integral field mapping with a microlens array and a diffraction grating, we obtain images of the spatial and spectral intensity distribution from a single camera acquisition. We demonstrate widefield hyperspectral imaging over a 3×3 mm² field of view and volumetric imaging over 250×250×800 μm³ volumes with a spatial resolution of 5 μm and a spectral resolution of less than 0.8 nm. We evaluate the performance and outline the challenges and strengths of snapshot methods in the context of characterising the emission from intracellular lasers. This method offers new opportunities for a diverse range of applications, including high-throughput and long-term biosensing with intracellular lasers.
Euclid Quick Data Release (Q1) Exploring galaxy properties with a multi-modal foundation model
Modern astronomical surveys, such as the Euclid mission, produce high-dimensional, multi-modal data sets that include imaging and spectroscopic information for millions of galaxies. These data serve as an ideal benchmark for large, pre-trained multi-modal models, which can leverage vast amounts of unlabelled data. In this work, we present the first exploration of Euclid data with AstroPT, an autoregressive multi-modal foundation model trained on approximately 300 000 optical and infrared Euclid images and spectral energy distributions (SEDs) from the first Euclid Quick Data Release. We compare self-supervised pre-training with baseline fully supervised training across several tasks: galaxy morphology classification; redshift estimation; similarity searches; and outlier detection. Our results show that: (a) AstroPT embeddings are highly informative, correlating with morphology and effectively isolating outliers; (b) including infrared data helps to isolate stars, but degrades the identification of edge-on galaxies, which are better captured by optical images; (c) simple fine-tuning of these embeddings for photometric redshift and stellar mass estimation outperforms a fully supervised approach, even when using only 1% of the training labels; and (d) incorporating SED data into AstroPT via a straightforward multi-modal token-chaining method improves photo-z predictions and allows us to identify potentially more interesting anomalies (such as ringed or interacting galaxies) compared to a model pre-trained solely on imaging data.
EPOCHS Paper V. The dependence of galaxy formation on galaxy structure at z < 7 from JWST observations
We measure the broad impact of galaxy structure on galaxy formation by examining the ongoing star formation and integrated star formation history as revealed through the stellar masses of galaxies at z < 7 based on JWST CEERS data from the Extended Groth Strip (EGS). Using the morphological catalog of 3965 visually classified JWST galaxies from Ferreira et al. (2023), we investigate the evolution of stars, and when they form, as a function of morphological type as well as galaxies classified as passive and starburst through spectral energy distributions. Although disk galaxies dominate the structures of galaxies at z < 7, we find that these disks are in general either `passive', or on the main-sequence of star formation, and do not contain a large population of starburst galaxies. We also find no significant correlation between morphological type and the star formation rate or colours of galaxies at z < 7. In fact, we find that the morphologically classified `spheroids' tend to be blue and are not found to be predominately passive systems at z > 1.5. We also find that the stellar mass function for disk galaxies does not evolve significantly during this time, whereas other galaxy types, such as the peculiar population, evolve dramatically, declining at lower redshifts. This indicates that massive peculiars are more common at higher redshifts. We further find that up to z sim 7, the specific star formation rate (sSFR) does not vary with visual morphology, but strongly depends on stellar mass and internal galaxy mass density. This demonstrates that at early epochs galaxy assembly is a mass-driven, rather than a morphologically-driven, process. Quenching of star formation is therefore a mass-dominated process throughout the universe's history, likely due to the presence of supermassive black holes.
Intriguing properties of synthetic images: from generative adversarial networks to diffusion models
Detecting fake images is becoming a major goal of computer vision. This need is becoming more and more pressing with the continuous improvement of synthesis methods based on Generative Adversarial Networks (GAN), and even more with the appearance of powerful methods based on Diffusion Models (DM). Towards this end, it is important to gain insight into which image features better discriminate fake images from real ones. In this paper we report on our systematic study of a large number of image generators of different families, aimed at discovering the most forensically relevant characteristics of real and generated images. Our experiments provide a number of interesting observations and shed light on some intriguing properties of synthetic images: (1) not only the GAN models but also the DM and VQ-GAN (Vector Quantized Generative Adversarial Networks) models give rise to visible artifacts in the Fourier domain and exhibit anomalous regular patterns in the autocorrelation; (2) when the dataset used to train the model lacks sufficient variety, its biases can be transferred to the generated images; (3) synthetic and real images exhibit significant differences in the mid-high frequency signal content, observable in their radial and angular spectral power distributions.
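The radial spectral power distribution mentioned in observation (3) is the azimuthal average of the 2D power spectrum over rings of constant spatial frequency; a minimal sketch (our own implementation, not the paper's code):

```python
import numpy as np

def radial_power_spectrum(image, n_bins=16):
    """Average |FFT|^2 over rings of constant spatial frequency. Real vs.
    synthetic differences show up in the mid-high frequency bins of this curve."""
    h, w = image.shape
    power = np.abs(np.fft.fftshift(np.fft.fft2(image))) ** 2
    yy, xx = np.indices((h, w))
    r = np.hypot(yy - h // 2, xx - w // 2)          # distance from DC
    bins = np.linspace(0, r.max(), n_bins + 1)
    idx = np.clip(np.digitize(r.ravel(), bins) - 1, 0, n_bins - 1)
    total = np.bincount(idx, weights=power.ravel(), minlength=n_bins)
    count = np.maximum(np.bincount(idx, minlength=n_bins), 1)
    return total / count                            # mean power per ring

rng = np.random.default_rng(3)
smooth = rng.normal(size=(64, 64)).cumsum(axis=0).cumsum(axis=1)  # red spectrum
white = rng.normal(size=(64, 64))                                  # flat spectrum
p_smooth = radial_power_spectrum(smooth)
p_white = radial_power_spectrum(white)
```

Natural images behave like the low-frequency-dominated example; a generator that injects periodic upsampling artifacts adds bumps to the curve at high frequencies, which is exactly what the angular and radial spectra reveal.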
Spectral Alignment as Predictor of Loss Explosion in Neural Network Training
Loss explosions in training deep neural networks can nullify multi-million dollar training runs. Conventional monitoring metrics like weight and gradient norms are often lagging and ambiguous predictors, as their values vary dramatically across different models and even between layers of the same model, making it difficult to establish a unified standard for detecting impending failure. We introduce Spectral Alignment (SA), a novel, theoretically-grounded metric that monitors the distributional alignment between layer inputs and the principal singular vectors of weight matrices. We show that a collapse in the sign diversity of this alignment is a powerful early predictor of representational collapse and training divergence. Empirical results on language models demonstrate that monitoring the SA distribution provides a significantly earlier and clearer warning of loss explosions than traditional scalar metrics. SA's low computational overhead makes it a practical tool for safeguarding model training.
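A minimal sketch of the idea (our own simplification; the paper tracks this distribution over training batches): project each layer input onto the weight matrix's principal right singular vector and measure the sign diversity of the resulting alignments.

```python
import numpy as np

def spectral_alignment(X, W):
    """Per-example alignment between layer inputs X and the principal right
    singular vector of W. Sign diversity near 0.5 means balanced signs;
    diversity collapsing to 0 is the early-warning signal."""
    _, _, vt = np.linalg.svd(W, full_matrices=False)
    align = X @ vt[0]                       # projection onto top singular direction
    pos = np.mean(align > 0)
    sign_diversity = min(pos, 1.0 - pos)    # fraction carrying the minority sign
    return align, sign_diversity

rng = np.random.default_rng(4)
W = rng.normal(size=(16, 8))
v_top = np.linalg.svd(W, full_matrices=False)[2][0]

healthy = rng.normal(size=(256, 8))                    # isotropic inputs
collapsed = np.abs(rng.normal(size=(256, 1))) * v_top  # all on one side of v_top

_, d_healthy = spectral_alignment(healthy, W)
_, d_collapsed = spectral_alignment(collapsed, W)
```

The healthy batch keeps its diversity near 0.5, while the collapsed batch drops to 0; monitoring this scalar per layer is cheap because only the top singular vector is needed.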
Spectral Codecs: Spectrogram-Based Audio Codecs for High Quality Speech Synthesis
Historically, most speech models in machine learning have used the mel-spectrogram as a speech representation. Recently, discrete audio tokens produced by neural audio codecs have become a popular alternative speech representation for speech synthesis tasks such as text-to-speech (TTS). However, the data distribution produced by such codecs is too complex for some TTS models to predict, hence requiring large autoregressive models to achieve reasonable quality. Typical audio codecs compress and reconstruct the time-domain audio signal. We propose a spectral codec which compresses the mel-spectrogram and reconstructs the time-domain audio signal. A study of objective audio quality metrics suggests that our spectral codec has perceptual quality comparable to equivalent audio codecs. Furthermore, non-autoregressive TTS models trained with the proposed spectral codec generate audio with significantly higher quality than when trained with mel-spectrograms or audio codecs.
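For context, the mel-spectrogram that such a codec compresses is obtained by projecting each frame's power spectrum onto triangular mel-scale filters; a self-contained sketch (illustrative sizes, not the paper's configuration):

```python
import numpy as np

def mel_filterbank(n_mels=10, n_fft_bins=129, sr=16000):
    """Triangular mel filters; mel scale: m = 2595 * log10(1 + f/700)."""
    def hz_to_mel(f): return 2595.0 * np.log10(1.0 + f / 700.0)
    def mel_to_hz(m): return 700.0 * (10.0 ** (m / 2595.0) - 1.0)
    mels = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2), n_mels + 2)
    bin_of = np.floor((n_fft_bins - 1) * mel_to_hz(mels) / (sr / 2)).astype(int)
    fb = np.zeros((n_mels, n_fft_bins))
    for i in range(n_mels):
        l, c, r = bin_of[i], bin_of[i + 1], bin_of[i + 2]
        if c > l:
            fb[i, l:c] = (np.arange(l, c) - l) / (c - l)   # rising edge
        if r > c:
            fb[i, c:r] = (r - np.arange(c, r)) / (r - c)   # falling edge
    return fb

# Mel-spectrum of one 256-sample frame of a 440 Hz tone at 16 kHz
sr, n = 16000, 256
t = np.arange(n) / sr
frame = np.sin(2 * np.pi * 440.0 * t)
power = np.abs(np.fft.rfft(frame)) ** 2    # 129 frequency bins
mel_spec = mel_filterbank() @ power        # 10 mel bands
```

A spectral codec quantizes frames like `mel_spec` (rather than the raw waveform) into discrete tokens and pairs them with a vocoder-style decoder that reconstructs the time-domain signal.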
Understanding and Mitigating Distribution Shifts For Machine Learning Force Fields
Machine Learning Force Fields (MLFFs) are a promising alternative to expensive ab initio quantum mechanical molecular simulations. Given the diversity of chemical spaces that are of interest and the cost of generating new data, it is important to understand how MLFFs generalize beyond their training distributions. In order to characterize and better understand distribution shifts in MLFFs, we conduct diagnostic experiments on chemical datasets, revealing common shifts that pose significant challenges, even for large foundation models trained on extensive data. Based on these observations, we hypothesize that current supervised training methods inadequately regularize MLFFs, resulting in overfitting and learning poor representations of out-of-distribution systems. We then propose two new methods as initial steps for mitigating distribution shifts for MLFFs. Our methods focus on test-time refinement strategies that incur minimal computational cost and do not use expensive ab initio reference labels. The first strategy, based on spectral graph theory, modifies the edges of test graphs to align with graph structures seen during training. Our second strategy improves representations for out-of-distribution systems at test-time by taking gradient steps using an auxiliary objective, such as a cheap physical prior. Our test-time refinement strategies significantly reduce errors on out-of-distribution systems, suggesting that MLFFs are capable of and can move towards modeling diverse chemical spaces, but are not being effectively trained to do so. Our experiments establish clear benchmarks for evaluating the generalization capabilities of the next generation of MLFFs. Our code is available at https://tkreiman.github.io/projects/mlff_distribution_shifts/.
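The spectral-graph intuition behind the first strategy can be sketched by comparing Laplacian spectra of training and test graphs (a diagnostic proxy only, with our own function names; the paper's method goes further and edits test-graph edges to match structures seen in training):

```python
import numpy as np

def laplacian_spectrum(adj):
    """Sorted eigenvalues of the graph Laplacian L = D - A; the spectrum
    summarises connectivity independently of node ordering."""
    lap = np.diag(adj.sum(axis=1)) - adj
    return np.sort(np.linalg.eigvalsh(lap))

def spectral_shift(adj_test, adj_train):
    """Distance between spectra as a cheap proxy for structural shift."""
    return float(np.linalg.norm(laplacian_spectrum(adj_test) - laplacian_spectrum(adj_train)))

def ring_graph(n):
    a = np.zeros((n, n))
    idx = np.arange(n)
    a[idx, (idx + 1) % n] = a[(idx + 1) % n, idx] = 1.0
    return a

train = ring_graph(6)                                  # stand-in for a training graph
chain = train.copy(); chain[0, 5] = chain[5, 0] = 0.0  # ring minus one edge
star = np.zeros((6, 6)); star[0, 1:] = star[1:, 0] = 1.0

shift_close = spectral_shift(chain, train)   # structurally similar -> small shift
shift_far = spectral_shift(star, train)      # structurally different -> large shift
```

A test molecule whose graph spectrum sits far from the training distribution is exactly the kind of out-of-distribution input the proposed test-time refinement targets.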
Distributionally Robust Optimization with Bias and Variance Reduction
We consider the distributionally robust optimization (DRO) problem with spectral risk-based uncertainty set and f-divergence penalty. This formulation includes common risk-sensitive learning objectives such as regularized conditional value-at-risk (CVaR) and average top-k loss. We present Prospect, a stochastic gradient-based algorithm that only requires tuning a single learning rate hyperparameter, and prove that it enjoys linear convergence for smooth regularized losses. This contrasts with previous algorithms that either require tuning multiple hyperparameters or potentially fail to converge due to biased gradient estimates or inadequate regularization. Empirically, we show that Prospect can converge 2-3x faster than baselines such as stochastic gradient and stochastic saddle-point methods on distribution shift and fairness benchmarks spanning tabular, vision, and language domains.
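The spectral-risk objectives the abstract mentions reweight per-sample losses by their rank. A minimal sketch of one member of this family, the average top-k loss (illustrative names and values; this is not the Prospect algorithm itself):

```python
import numpy as np

def top_k_loss(losses, k):
    """Average top-k loss: mean of the k largest per-sample losses.

    A simple spectral risk measure: weight 1/k on the k worst samples,
    zero elsewhere. CVaR arises similarly from a quantile threshold.
    """
    worst = np.sort(losses)[-k:]  # k largest losses
    return worst.mean()

losses = np.array([0.1, 0.5, 0.2, 0.9, 0.3])
risk = top_k_loss(losses, k=2)  # mean of {0.9, 0.5}
```

Optimizing such rank-weighted objectives with plain SGD gives biased gradients, which is the failure mode Prospect is designed to avoid.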
Beta Sampling is All You Need: Efficient Image Generation Strategy for Diffusion Models using Stepwise Spectral Analysis
Generative diffusion models have emerged as a powerful tool for high-quality image synthesis, yet their iterative nature demands significant computational resources. This paper proposes an efficient time step sampling method based on an image spectral analysis of the diffusion process, aimed at optimizing the denoising process. Instead of the traditional uniform distribution-based time step sampling, we introduce a Beta distribution-like sampling technique that prioritizes critical steps in the early and late stages of the process. Our hypothesis is that certain steps exhibit significant changes in image content, while others contribute minimally. We validated our approach using Fourier transforms to measure frequency response changes at each step, revealing substantial low-frequency changes early on and high-frequency adjustments later. Experiments with ADM and Stable Diffusion demonstrated that our Beta Sampling method consistently outperforms uniform sampling, achieving better FID and IS scores, and offers competitive efficiency relative to state-of-the-art methods like AutoDiffusion. This work provides a practical framework for enhancing diffusion model efficiency by focusing computational resources on the most impactful steps, with potential for further optimization and broader application.
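The non-uniform time-step selection described above can be sketched with a U-shaped Beta distribution, which concentrates mass at both ends of the process. The parameter values below are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def beta_timesteps(num_samples, total_steps=1000, a=0.5, b=0.5, seed=0):
    """Sample diffusion timesteps from a U-shaped Beta(a, b) distribution.

    With a, b < 1 the density concentrates near 0 and 1, i.e. the early
    and late stages of denoising, mirroring the idea of spending compute
    on the most impactful steps instead of sampling uniformly.
    """
    rng = np.random.default_rng(seed)
    u = rng.beta(a, b, size=num_samples)
    steps = np.unique((u * (total_steps - 1)).astype(int))
    return np.sort(steps)[::-1]  # descending order, as samplers iterate

steps = beta_timesteps(20)
```

The resulting step schedule is then used in place of a uniform stride when running the reverse diffusion loop.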
Probing X-ray Timing and Spectral Variability in the Blazar PKS 2155-304 Over a Decade of XMM-Newton Observations
Blazars, a class of active galactic nuclei (AGN) powered by supermassive black holes, are known for their remarkable variability across multiple timescales and wavelengths. With advancements in both ground- and space-based telescopes, our understanding of AGN central engines has significantly improved. However, the mechanisms driving this variability remain elusive, and continue to fascinate both theorists and observers alike. The primary objective of this study is to constrain the X-ray variability properties of the TeV blazar PKS 2155-304. We conduct a comprehensive X-ray spectral and timing analysis, focusing on both long-term and intra-day variability. This analysis uses data from 22 epochs of XMM-Newton EPIC-pn observations, collected over 15 years (2000-2014). To investigate the variability of the source, we applied both timing and spectral analyses. For the timing analysis, we estimated fractional variability, variability amplitude, minimum variability timescales, flux distribution, and power spectral density (PSD). In the spectral analysis, we fitted the X-ray spectra using power-law, log-parabola, and broken power-law (BPL) models to determine the best-fitting parameters. Additionally, we studied the hardness ratio (HR). We observed moderate intra-day variability in most of the light curves. Seven out of the twenty-two observations showed a clear bimodal flux distribution, indicating the presence of two distinct flux states. Our analysis revealed a variable power-law PSD slope. Most HR plots did not show significant variation with flux, except for one observation (OBSID 0124930501), where HR increased with flux (Count/s). The fitted X-ray spectra favored the BPL model for the majority of observations. The findings of this work shed light on the intraday variability of blazars, providing insights into the non-thermal jet processes that drive the observed flux variations.
Near out-of-distribution detection for low-resolution radar micro-Doppler signatures
Near out-of-distribution detection (OODD) aims at discriminating semantically similar data points without the supervision required for classification. This paper puts forward an OODD use case for radar target detection extensible to other kinds of sensors and detection scenarios. We emphasize the relevance of OODD and its specific supervision requirements for the detection of a multimodal, diverse targets class among other similar radar targets and clutter in real-life critical systems. We propose a comparison of deep and non-deep OODD methods on simulated low-resolution pulse radar micro-Doppler signatures, considering both a spectral and a covariance matrix input representation. The covariance representation aims at estimating whether dedicated second-order processing is appropriate to discriminate signatures. The potential contributions of labeled anomalies in training, self-supervised learning, contrastive learning insights and innovative training losses are discussed, and the impact of training set contamination caused by mislabelling is investigated.
Learning Continually by Spectral Regularization
Loss of plasticity is a phenomenon where neural networks become more difficult to train during the course of learning. Continual learning algorithms seek to mitigate this effect by sustaining good predictive performance while maintaining network trainability. We develop new techniques for improving continual learning by first reconsidering how initialization can ensure trainability during early phases of learning. From this perspective, we derive new regularization strategies for continual learning that ensure beneficial initialization properties are better maintained throughout training. In particular, we investigate two new regularization techniques for continual learning: (i) Wasserstein regularization toward the initial weight distribution, which is less restrictive than regularizing toward initial weights; and (ii) regularizing weight matrix singular values, which directly ensures gradient diversity is maintained throughout training. We present an experimental analysis that shows these alternative regularizers can improve continual learning performance across a range of supervised learning tasks and model architectures. The alternative regularizers prove to be less sensitive to hyperparameters while demonstrating better training in individual tasks, sustaining trainability as new tasks arrive, and achieving better generalization performance.
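Regularizer (ii) above acts directly on the singular values of the weight matrices. A minimal sketch of such a penalty (the target value and exact form are assumptions; the paper's formulation may differ):

```python
import numpy as np

def spectral_regularizer(weight, target=1.0):
    """Penalty pushing a weight matrix's singular values toward a target,
    so the gradient diversity present at initialization is maintained
    as training proceeds.
    """
    s = np.linalg.svd(weight, compute_uv=False)
    return float(np.sum((s - target) ** 2))

w = np.eye(3)  # orthogonal matrix: all singular values equal 1
penalty = spectral_regularizer(w)  # zero penalty at this "ideal" spectrum
```

In practice this term would be added, suitably weighted, to the task loss at each training step.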
Hyperspectral Image Super-Resolution with Spectral Mixup and Heterogeneous Datasets
This work studies Hyperspectral image (HSI) super-resolution (SR). HSI SR is characterized by high-dimensional data and a limited amount of training examples. This exacerbates the undesirable behaviors of neural networks such as memorization and sensitivity to out-of-distribution samples. This work addresses these issues with three contributions. First, we observe that HSI SR and RGB image SR are correlated and develop a novel multi-tasking network to train them jointly so that the auxiliary task RGB image SR can provide additional supervision. Second, we propose a simple, yet effective data augmentation routine, termed Spectral Mixup, to construct effective virtual training samples to enlarge the training set. Finally, we extend the network to a semi-supervised setting so that it can learn from datasets containing only low-resolution HSIs. With these contributions, our method is able to learn from heterogeneous datasets and lift the requirement for having a large amount of HD HSI training samples. Extensive experiments on four standard datasets show that our method outperforms existing methods significantly and underscore the relevance of our contributions. Code has been made available at https://github.com/kli8996/HSISR.
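Mixup-style augmentations like the Spectral Mixup mentioned above build virtual samples as convex combinations of training examples. A generic sketch in that spirit (the paper's exact mixing scheme, e.g. per-band coefficients, may differ):

```python
import numpy as np

def spectral_mixup(hsi_a, hsi_b, alpha=0.2, seed=0):
    """Create a virtual training sample as a convex combination of two
    hyperspectral cubes; the corresponding HR targets would be mixed
    with the same coefficient.
    """
    rng = np.random.default_rng(seed)
    lam = rng.beta(alpha, alpha)  # mixing coefficient in [0, 1]
    return lam * hsi_a + (1 - lam) * hsi_b, lam

a = np.zeros((4, 4, 31))  # toy HSI cubes: H x W x spectral bands
b = np.ones((4, 4, 31))
virtual, lam = spectral_mixup(a, b)
```

Enlarging the training set this way directly targets the memorization problem the abstract attributes to scarce HSI data.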
UnMix-NeRF: Spectral Unmixing Meets Neural Radiance Fields
Neural Radiance Field (NeRF)-based segmentation methods focus on object semantics and rely solely on RGB data, lacking intrinsic material properties. This limitation restricts accurate material perception, which is crucial for robotics, augmented reality, simulation, and other applications. We introduce UnMix-NeRF, a framework that integrates spectral unmixing into NeRF, enabling joint hyperspectral novel view synthesis and unsupervised material segmentation. Our method models spectral reflectance via diffuse and specular components, where a learned dictionary of global endmembers represents pure material signatures, and per-point abundances capture their distribution. For material segmentation, we use spectral signature predictions along learned endmembers, allowing unsupervised material clustering. Additionally, UnMix-NeRF enables scene editing by modifying learned endmember dictionaries for flexible material-based appearance manipulation. Extensive experiments validate our approach, demonstrating superior spectral reconstruction and material segmentation to existing methods. Project page: https://www.factral.co/UnMix-NeRF.
Modeling Eye Gaze Velocity Trajectories using GANs with Spectral Loss for Enhanced Fidelity
Accurate modeling of eye gaze dynamics is essential for advancement in human-computer interaction, neurological diagnostics, and cognitive research. Traditional generative models like Markov models often fail to capture the complex temporal dependencies and distributional nuances inherent in eye gaze trajectory data. This study introduces a GAN framework employing LSTM and CNN generators and discriminators to generate high-fidelity synthetic eye gaze velocity trajectories. We conducted a comprehensive evaluation of four GAN architectures (CNN-CNN, LSTM-CNN, CNN-LSTM, and LSTM-LSTM) trained under two conditions: using only adversarial loss and using a weighted combination of adversarial and spectral losses. Our findings reveal that the LSTM-CNN architecture trained with this new loss function exhibits the closest alignment to the real data distribution, effectively capturing both the distribution tails and the intricate temporal dependencies. The inclusion of spectral regularization significantly enhances the GAN's ability to replicate the spectral characteristics of eye gaze movements, leading to a more stable learning process and improved data fidelity. Comparative analysis with an HMM optimized to four hidden states further highlights the advantages of the LSTM-CNN GAN. Statistical metrics show that the HMM-generated data significantly diverges from the real data in terms of mean, standard deviation, skewness, and kurtosis. In contrast, the LSTM-CNN model closely matches the real data across these statistics, affirming its capacity to model the complexity of eye gaze dynamics effectively. These results position the spectrally regularized LSTM-CNN GAN as a robust tool for generating synthetic eye gaze velocity data with high fidelity.
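A spectral loss of the kind combined with the adversarial loss above typically compares frequency-domain magnitudes of real and generated trajectories. One plausible form (the paper's exact formulation may differ):

```python
import numpy as np

def spectral_loss(real, fake):
    """L1 distance between the magnitude spectra of real and generated
    trajectories, averaged over a batch. Penalizes generators whose
    outputs have the wrong frequency content even when time-domain
    statistics look plausible.
    """
    real_mag = np.abs(np.fft.rfft(real, axis=-1))
    fake_mag = np.abs(np.fft.rfft(fake, axis=-1))
    return float(np.mean(np.abs(real_mag - fake_mag)))

t = np.linspace(0, 1, 128, endpoint=False)
real = np.sin(2 * np.pi * 4 * t)[None, :]  # batch of one 4 Hz trajectory
loss_same = spectral_loss(real, real)      # identical spectra -> 0
```

In training, this term would be added to the adversarial loss with a fixed weight, which is what "weighted combination" refers to in the abstract.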
DDS2M: Self-Supervised Denoising Diffusion Spatio-Spectral Model for Hyperspectral Image Restoration
Diffusion models have recently received a surge of interest due to their impressive performance for image restoration, especially in terms of noise robustness. However, existing diffusion-based methods are trained on a large amount of training data and perform very well in-distribution, but can be quite susceptible to distribution shift. This is especially inappropriate for data-starved hyperspectral image (HSI) restoration. To tackle this problem, this work puts forth a self-supervised diffusion model for HSI restoration, namely Denoising Diffusion Spatio-Spectral Model (DDS2M), which works by inferring the parameters of the proposed Variational Spatio-Spectral Module (VS2M) during the reverse diffusion process, solely using the degraded HSI without any extra training data. In VS2M, a variational inference-based loss function is customized to enable the untrained spatial and spectral networks to learn the posterior distribution, which serves as the transitions of the sampling chain to help reverse the diffusion process. Benefiting from its self-supervised nature and the diffusion process, DDS2M enjoys stronger generalization ability to various HSIs compared to existing diffusion-based methods and superior robustness to noise compared to existing HSI restoration methods. Extensive experiments on HSI denoising, noisy HSI completion and super-resolution on a variety of HSIs demonstrate DDS2M's superiority over the existing task-specific state-of-the-arts.
SpecGrad: Diffusion Probabilistic Model based Neural Vocoder with Adaptive Noise Spectral Shaping
Neural vocoder using denoising diffusion probabilistic model (DDPM) has been improved by adaptation of the diffusion noise distribution to given acoustic features. In this study, we propose SpecGrad that adapts the diffusion noise so that its time-varying spectral envelope becomes close to the conditioning log-mel spectrogram. This adaptation by time-varying filtering improves the sound quality especially in the high-frequency bands. It is processed in the time-frequency domain to keep the computational cost almost the same as the conventional DDPM-based neural vocoders. Experimental results showed that SpecGrad generates higher-fidelity speech waveform than conventional DDPM-based neural vocoders in both analysis-synthesis and speech enhancement scenarios. Audio demos are available at wavegrad.github.io/specgrad/.
DiffVox: A Differentiable Model for Capturing and Analysing Professional Effects Distributions
This study introduces a novel and interpretable model, DiffVox, for matching vocal effects in music production. DiffVox, short for "Differentiable Vocal Fx", integrates parametric equalisation, dynamic range control, delay, and reverb with efficient differentiable implementations to enable gradient-based optimisation for parameter estimation. Vocal presets are retrieved from two datasets, comprising 70 tracks from MedleyDB and 365 tracks from a private collection. Analysis of parameter correlations highlights strong relationships between effects and parameters, such as the high-pass and low-shelf filters often behaving together to shape the low end, and the delay time correlating with the intensity of the delayed signals. Principal component analysis reveals connections to McAdams' timbre dimensions, where the most crucial component modulates the perceived spaciousness while the secondary components influence spectral brightness. Statistical testing confirms the non-Gaussian nature of the parameter distribution, highlighting the complexity of the vocal effects space. These initial findings on the parameter distributions set the foundation for future research in vocal effects modelling and automatic mixing. Our source code and datasets are accessible at https://github.com/SonyResearch/diffvox.
Interpretable structural model error discovery from sparse assimilation increments using spectral bias-reduced neural networks: A quasi-geostrophic turbulence test case
Earth system models suffer from various structural and parametric errors in their representation of nonlinear, multi-scale processes, leading to uncertainties in their long-term projections. The effects of many of these errors (particularly those due to fast physics) can be quantified in short-term simulations, e.g., as differences between the predicted and observed states (analysis increments). With the increase in the availability of high-quality observations and simulations, learning from these increments to correct model errors has become an active research area. However, most studies focus on using neural networks, which while powerful, are hard to interpret, are data-hungry, and poorly generalize out-of-distribution. Here, we show the capabilities of Model Error Discovery with Interpretability and Data Assimilation (MEDIDA), a general, data-efficient framework that uses sparsity-promoting equation-discovery techniques to learn model errors from analysis increments. Using two-layer quasi-geostrophic turbulence as the test case, MEDIDA is shown to successfully discover various linear and nonlinear structural/parametric errors when full observations are available. Discovery from spatially sparse observations is found to require highly accurate interpolation schemes. While NNs have shown success as interpolators in recent studies, here, they are found inadequate due to their inability to accurately represent small scales, a phenomenon known as spectral bias. We show that a general remedy, adding a random Fourier feature layer to the NN, resolves this issue, enabling MEDIDA to successfully discover model errors from sparse observations. These promising results suggest that with further development, MEDIDA could be scaled up to models of the Earth system and real observations.
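The random Fourier feature layer mentioned as the remedy for spectral bias is a standard construction: inputs pass through random sinusoids before the network, making small scales representable. A minimal sketch (the bandwidth and feature count are illustrative choices, not the paper's values):

```python
import numpy as np

def random_fourier_features(x, num_features=64, sigma=10.0, seed=0):
    """Map coordinates through random sinusoids (Rahimi-Recht style).

    Large sigma injects high frequencies into the first layer, so the
    downstream NN can fit small-scale structure it would otherwise
    learn last (or never), reducing spectral bias.
    """
    rng = np.random.default_rng(seed)
    B = rng.normal(0.0, sigma, size=(x.shape[-1], num_features))
    proj = 2 * np.pi * x @ B
    return np.concatenate([np.cos(proj), np.sin(proj)], axis=-1)

x = np.linspace(0, 1, 16)[:, None]  # 1-D spatial coordinates
feats = random_fourier_features(x)  # features fed to the interpolating NN
```

The feature matrix replaces the raw coordinates as the NN's input; the rest of the interpolation pipeline is unchanged.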
The 100 pc White Dwarf Sample in the SDSS Footprint II. A New Look at the Spectral Evolution of White Dwarfs
We increase the spectroscopic completeness of the 100 pc white dwarf sample in the SDSS footprint with 840 additional spectra. Our spectroscopy is 86% complete for white dwarfs hotter than T_eff = 5000 K, where H-alpha remains visible and provides reliable constraints on the atmospheric composition. We identify 2108 DA white dwarfs with pure hydrogen atmospheres, and show that ultramassive DA white dwarfs with M ≥ 1.1 M_sun are an order of magnitude less common below 10,000 K. This is consistent with a fraction of them getting stuck on the crystallization sequence due to 22Ne distillation. In addition, there are no ultramassive DA white dwarfs with M ≥ 1.1 M_sun and T_eff ≤ 6000 K in our sample, likely because Debye cooling makes them rapidly fade away. We detect a significant trend in the fraction of He-atmosphere white dwarfs as a function of temperature; the fraction increases from 9% at 20,000 K to 32% at 6000 K. This provides direct evidence of convective mixing in cool DA white dwarfs. Finally, we detect a relatively tight sequence of low-mass DQ white dwarfs in color-magnitude diagrams for the first time. We discuss the implications of this tight DQ sequence, and conclude with a discussion of the future prospects from the upcoming ULTRASAT mission and the large-scale multi-fiber spectroscopic surveys.
Rank and Align: Towards Effective Source-free Graph Domain Adaptation
Graph neural networks (GNNs) have achieved impressive performance in graph domain adaptation. However, extensive source graphs could be unavailable in real-world scenarios due to privacy and storage concerns. To this end, we investigate an underexplored yet practical problem of source-free graph domain adaptation, which transfers knowledge from source models instead of source graphs to a target domain. To solve this problem, we introduce a novel GNN-based approach called Rank and Align (RNA), which ranks graph similarities with spectral seriation for robust semantics learning, and aligns inharmonic graphs with harmonic graphs which are close to the source domain for subgraph extraction. In particular, to overcome label scarcity, we employ the spectral seriation algorithm to infer the robust pairwise rankings, which can guide semantic learning using a similarity learning objective. To depict distribution shifts, we utilize spectral clustering and the silhouette coefficient to detect harmonic graphs, which the source model can easily classify. To reduce potential domain discrepancy, we extract domain-invariant subgraphs from inharmonic graphs by an adversarial edge sampling process, which guides the invariant learning of GNNs. Extensive experiments on several benchmark datasets demonstrate the effectiveness of our proposed RNA.
Spectrally Transformed Kernel Regression
Unlabeled data is a key component of modern machine learning. In general, the role of unlabeled data is to impose a form of smoothness, usually from the similarity information encoded in a base kernel, such as the epsilon-neighbor kernel or the adjacency matrix of a graph. This work revisits the classical idea of spectrally transformed kernel regression (STKR), and provides a new class of general and scalable STKR estimators able to leverage unlabeled data. Intuitively, via spectral transformation, STKR exploits the data distribution for which unlabeled data can provide additional information. First, we show that STKR is a principled and general approach, by characterizing a universal type of "target smoothness", and proving that any sufficiently smooth function can be learned by STKR. Second, we provide scalable STKR implementations for the inductive setting and a general transformation function, while prior work is mostly limited to the transductive setting. Third, we derive statistical guarantees for two scenarios: STKR with a known polynomial transformation, and STKR with kernel PCA when the transformation is unknown. Overall, we believe that this work helps deepen our understanding of how to work with unlabeled data, and its generality makes it easier to inspire new methods.
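The core STKR idea, transforming the spectrum of a base kernel built over labeled and unlabeled points, can be sketched with a dense eigendecomposition. This toy version is the transductive, non-scalable form; the paper's contribution includes scalable inductive estimators, and the polynomial transform below is an assumed example:

```python
import numpy as np

def stkr_predict(K, y_labeled, labeled_idx, transform=lambda lam: lam ** 2, reg=1e-3):
    """Kernel ridge regression with a spectrally transformed kernel.

    Eigen-decompose the base kernel over all points, apply a transform
    to the (clipped) eigenvalues, then solve ridge regression on the
    labeled block and predict everywhere.
    """
    lam, Q = np.linalg.eigh(K)
    K_t = (Q * transform(np.clip(lam, 0, None))) @ Q.T  # transformed kernel
    K_ll = K_t[np.ix_(labeled_idx, labeled_idx)]
    alpha = np.linalg.solve(K_ll + reg * np.eye(len(labeled_idx)), y_labeled)
    return K_t[:, labeled_idx] @ alpha  # predictions at all points

X = np.linspace(0, 1, 10)[:, None]
K = np.exp(-((X - X.T) ** 2) / 0.1)  # RBF base kernel over all 10 points
labeled = [0, 4, 9]
preds = stkr_predict(K, X[labeled, 0], labeled)
```

Because the transform is applied to a kernel built on labeled and unlabeled points together, the unlabeled data shapes the smoothness of the predictor, which is the sense in which STKR "exploits the data distribution".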
Repeating fast radio bursts from synchrotron maser radiation in localized plasma blobs: Application to FRB 20121102A
The radiation physics of repeating fast radio bursts (FRBs) remains enigmatic. Motivated by the observed narrow-banded emission spectrum and ambiguous fringe pattern of the spectral peak frequency (ν_pk) distribution of some repeating FRBs, such as FRB 20121102A, we propose that the bursts from repeating FRBs arise from synchrotron maser radiation in localized blobs within weakly magnetized plasma that relativistically moves toward observers. Assuming the plasma moves toward the observers with a bulk Lorentz factor of Γ = 100 and the electron distribution in an individual blob is monoenergetic (γ_e ∼ 300), our analysis shows that bright and narrow-banded radio bursts with peak flux density ∼1 Jy at peak frequency ν_pk ∼ 3.85 GHz can be produced by the synchrotron maser emission if the plasma blob has a magnetization factor of σ ∼ 10^-5 and a frequency of ν_P ∼ 4.5 MHz. The spectrum of bursts with lower ν_pk tends to be narrower. Applying our model to the bursts of FRB 20121102A, the distributions of both the observed ν_pk and isotropic energy E_iso detected by the Arecibo telescope at the L band and the Green Bank Telescope at the C band are successfully reproduced. We find that the ν_P distribution exhibits several peaks, similar to those observed in the ν_pk distribution of FRB 20121102A. This implies that the synchrotron maser emission in FRB 20121102A is triggered in different plasma blobs with varying ν_P, likely due to the inhomogeneity of relativistic electron number density.
Generating arbitrary polarization states by manipulating the thicknesses of a pair of uniaxial birefringent plates
We report an optical method of generating arbitrary polarization states by manipulating the thicknesses of a pair of uniaxial birefringent plates, the optical axes of which are set at a crossing angle of π/4. The method has the remarkable feature of being able to generate a distribution of arbitrary polarization states in a group of highly discrete spectra without spatially separating the individual spectral components. The target polarization-state distribution is obtained as an optimal solution through an exploration. Within a realistic exploration range, a sufficient number of near-optimal solutions are found. This property is also reproduced well by a concise model based on a distribution of exploration points on a Poincaré sphere, showing that the number of near-optimal solutions behaves according to a power law with respect to the number of spectral components of concern. As a typical example of an application, by applying this method to a set of phase-locked highly discrete spectra, we numerically demonstrate the continuous generation of a vector-like optical electric field waveform, the helicity of which is alternated within a single optical cycle in the time domain.
Kolmogorov-Arnold Attention: Is Learnable Attention Better For Vision Transformers?
Kolmogorov-Arnold networks (KANs) are a remarkable innovation consisting of learnable activation functions with the potential to capture more complex relationships from data. Although KANs are useful in finding symbolic representations and continual learning of one-dimensional functions, their effectiveness in diverse machine learning (ML) tasks, such as vision, remains questionable. Presently, KANs are deployed by replacing multilayer perceptrons (MLPs) in deep network architectures, including advanced architectures such as vision Transformers (ViTs). In this paper, we are the first to design a general learnable Kolmogorov-Arnold Attention (KArAt) for vanilla ViTs that can operate on any choice of basis. However, the computing and memory costs of training them motivated us to propose a more modular version, and we designed particular learnable attention, called Fourier-KArAt. Fourier-KArAt and its variants either outperform their ViT counterparts or show comparable performance on CIFAR-10, CIFAR-100, and ImageNet-1K datasets. We dissect these architectures' performance and generalization capacity by analyzing their loss landscapes, weight distributions, optimizer path, attention visualization, and spectral behavior, and contrast them with vanilla ViTs. The goal of this paper is not to produce parameter- and compute-efficient attention, but to encourage the community to explore KANs in conjunction with more advanced architectures that require a careful understanding of learnable activations. Our open-source code and implementation details are available at: https://subhajitmaity.me/KArAt
Metis: Training Large Language Models with Advanced Low-Bit Quantization
This work identifies anisotropic parameter distributions as a fundamental barrier to training large language models (LLMs) with low-bit quantization: a few dominant singular values create wide numerical ranges that conflict with the inherent bias of block-wise quantization. This bias disproportionately preserves high-magnitude values while discarding smaller ones, causing training instability and low model performance. This work introduces Metis, a training framework that combines (i) spectral decomposition with random embedding to efficiently disentangle dominant from long-tail components, compressing broad distributions into quantization-friendly narrow ranges; (ii) adaptive learning rates in the spectral domain to amplify underrepresented directions and better capture diverse features critical for performance; and (iii) a dual-range regularizer that jointly constrains numerical precision and parameter range distribution, ensuring stable, unbiased low-bit training. With Metis, FP8 training surpasses FP32 baselines, and FP4 training achieves accuracy comparable to FP32, paving the way for robust and scalable LLM training under advanced low-bit quantization. The code implementation for Metis is available at: https://github.com/typename-yyf/Metis-quantization.
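The "disentangle dominant from long-tail components" step can be illustrated with a plain SVD split: a few large singular directions are peeled off, leaving a residual with a much narrower numerical range that is friendlier to block-wise quantization. Metis additionally uses random embeddings to make this cheap; the rank cutoff below is an assumed toy value:

```python
import numpy as np

def split_spectrum(w, k=2):
    """Split a matrix into a rank-k dominant part and a narrow-range
    residual via SVD. The residual can be quantized at low bit-width
    with far less range-induced error than the original matrix.
    """
    u, s, vt = np.linalg.svd(w, full_matrices=False)
    dominant = (u[:, :k] * s[:k]) @ vt[:k]
    residual = w - dominant
    return dominant, residual

rng = np.random.default_rng(0)
# Anisotropic toy matrix: small noise plus one strong rank-1 direction
w = rng.normal(size=(16, 16)) + 10 * np.outer(rng.normal(size=16), rng.normal(size=16))
dom, res = split_spectrum(w)
```

The dominant part can then be kept in higher precision (it is only rank k), while the residual's compressed range fits a low-bit format.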
A New Circle Theorem for Two Dimensional Ising Spin Glasses
The Lee-Yang circle theorem revolutionized our understanding of phase transitions in ferromagnetic systems by showing that the complex zeros of partition functions lie on the unit circle, with criticality arising as these zeros approach the real axis in the thermodynamic limit. However, in frustrated systems such as antiferromagnets and spin glasses, the zeros deviate from this structure, making it challenging to extend the Lee-Yang theory to disordered systems. In this work, we establish a new circle theorem for two-dimensional Ising spin glasses, proving that the square of the partition function exhibits zeros densely packed along the unit circle. Numerical simulations on the square lattice confirm our theoretical predictions, demonstrating the validity of the circle law for quenched disorder. Furthermore, our results uncover a finite-temperature crossover in ±J spin glasses, characterized by the emergence of a spectral gap in the angular distribution of zeros. This result extends the Lee-Yang framework to disordered systems, offering new insights into spin-glass criticality.
More for Keys, Less for Values: Adaptive KV Cache Quantization
This paper introduces an information-aware quantization framework that adaptively compresses the key-value (KV) cache in large language models (LLMs). Although prior work has underscored the distinct roles of key and value cache during inference, our systematic analysis -- examining singular value distributions, spectral norms, and Frobenius norms -- reveals, for the first time, that key matrices consistently exhibit higher norm values and are more sensitive to quantization than value matrices. Furthermore, our theoretical analysis shows that matrices with higher spectral norms amplify quantization errors more significantly. Motivated by these insights, we propose a mixed-precision quantization strategy, KV-AdaQuant, which allocates more bit-width for keys and fewer for values since key matrices have higher norm values. With the same total KV bit budget, this approach effectively mitigates error propagation across transformer layers while achieving significant memory savings. Our extensive experiments on multiple LLMs (1B--70B) demonstrate that our mixed-precision quantization scheme maintains high model accuracy even under aggressive compression. For instance, using 4-bit for Key and 2-bit for Value achieves an accuracy of 75.2%, whereas reversing the assignment (2-bit for Key and 4-bit for Value) yields only 54.7% accuracy. The code is available at https://tinyurl.com/kv-adaquant
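The bit-allocation idea (more bits for the higher-norm, more quantization-sensitive keys) can be sketched with a simple symmetric uniform quantizer. KV-AdaQuant's actual quantizer (block-wise scales, etc.) is more involved; this is a toy illustration:

```python
import numpy as np

def quantize(x, bits):
    """Symmetric uniform quantization of a tensor to the given bit-width."""
    levels = 2 ** (bits - 1) - 1
    scale = np.abs(x).max() / levels
    return np.round(x / scale) * scale

def quantize_kv(keys, values, key_bits=4, value_bits=2):
    """Mixed-precision KV-cache compression: allocate more bits to key
    matrices (higher norms, higher error sensitivity) and fewer to values,
    under a fixed total bit budget."""
    return quantize(keys, key_bits), quantize(values, value_bits)

rng = np.random.default_rng(0)
k, v = rng.normal(size=(8, 16)), rng.normal(size=(8, 16))
kq, vq = quantize_kv(k, v)
key_err = np.abs(k - kq).mean()  # smaller: 4-bit grid is finer
val_err = np.abs(v - vq).mean()
```

The asymmetry in reconstruction error mirrors the abstract's accuracy gap between the 4-bit-key/2-bit-value and 2-bit-key/4-bit-value assignments.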
