∅
https://www.nature.com/articles/s41598-024-76900-1
https://www.youtube.com/live/a6gj7E4AblI?si=xw534iWSpp7ZSwBD
A critique of it has turned up here, but for now I don't have a spare 3.5 hours
No. Peer review time!
Got some issues with this paper I want to share.
Link: https://www.nature.com/articles/s41598-024-76900-1
Title: AI-generated poetry is indistinguishable from human-written poetry and is rated more favorably
Discord: https://discord.gg/3bTXRJ7SyD
Twitch:…
https://x.com/vincentweisser/status/1867719020444889118
@ilyasut
full talk at NeurIPS 2024: "pre-training as we know it will end", and what comes next is superintelligence: agentic, reasons, understands and is self-aware
Forwarded from Just links
Optimized einsum https://optimized-einsum.readthedocs.io/en/stable/
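For context, a minimal sketch of what the library does, with arbitrary example shapes (not taken from the docs): it evaluates an einsum expression by searching for a cheap pairwise contraction order instead of contracting naively left to right.

```python
# Minimal opt_einsum sketch; the arrays and shapes are arbitrary examples.
import numpy as np
import opt_einsum as oe

a = np.random.rand(32, 64)
b = np.random.rand(64, 128)
c = np.random.rand(128, 16)

# Same result as np.einsum, but with an optimized contraction order.
result = oe.contract('ij,jk,kl->il', a, b, c)

# Inspect the chosen contraction path and its estimated cost.
path, info = oe.contract_path('ij,jk,kl->il', a, b, c)
print(info)
```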
Forwarded from Кругообіг Йонансінів у природі
The Kopp-Etchells effect: a high-speed rotor blade passing through sand particles produces micro-sparks
The Geometry of Concepts: Sparse Autoencoder Feature Structure
Abstract:
Sparse autoencoders have recently produced dictionaries of high-dimensional vectors corresponding to the universe of concepts represented by large language models. We find that this concept universe has interesting structure at three levels: 1) The "atomic" small-scale structure contains "crystals" whose faces are parallelograms or trapezoids, generalizing well-known examples such as (man-woman-king-queen). We find that the quality of such parallelograms and associated function vectors improves greatly when projecting out global distractor directions such as word length, which is efficiently done with linear discriminant analysis. 2) The "brain" intermediate-scale structure has significant spatial modularity; for example, math and code features form a "lobe" akin to functional lobes seen in neural fMRI images. We quantify the spatial locality of these lobes with multiple metrics and find that clusters of co-occurring features, at coarse enough scale, also cluster together spatially far more than one would expect if feature geometry were random. 3) The "galaxy" scale large-scale structure of the feature point cloud is not isotropic, but instead has a power law of eigenvalues with steepest slope in middle layers. We also quantify how the clustering entropy depends on the layer.
https://arxiv.org/abs/2410.19750
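A toy sketch of the "parallelogram" test from point 1, with synthetic vectors standing in for SAE feature directions and a known distractor direction assumed up front (the paper finds such directions with linear discriminant analysis; that part is not reproduced here):

```python
# Synthetic illustration of parallelogram quality improving once a
# distractor direction is projected out. All vectors are made up.
import numpy as np

rng = np.random.default_rng(0)
dim = 64

gender = rng.normal(size=dim)        # shared "function vector" (man->woman, king->queen)
distractor = rng.normal(size=dim)    # stand-in for a global distractor (e.g. word length)
distractor /= np.linalg.norm(distractor)

base_man, base_king = rng.normal(size=dim), rng.normal(size=dim)
man   = base_man
woman = base_man + gender + 0.8 * distractor
king  = base_king
queen = base_king + gender - 0.5 * distractor

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def project_out(v, direction):
    """Remove the component of v along a unit-norm direction."""
    return v - (v @ direction) * direction

d1, d2 = woman - man, queen - king   # the two "parallel" sides of the parallelogram

print("raw cosine:      ", cos(d1, d2))
print("after projection:", cos(project_out(d1, distractor), project_out(d2, distractor)))
```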
https://www.securityweek.com/air-gapped-computers-can-communicate-through-heat-researchers/
#oldbutgold
SecurityWeek: Air-Gapped Computers Can Communicate Through Heat: Researchers
BitWhisper: Stealing Data From Isolated Computers Using Heat Emissions and Built-in Thermal Sensors. Researchers at Ben Gurion University in Israel have demonstrated that two computers in close proximity to each other can communicate using heat emissions…
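A heavily simplified toy of the receiving side of such a thermal covert channel, assuming bits are sent by heating or idling the CPU in fixed time windows and read back by thresholding a thermal sensor; the sample values, window size, and threshold are made up for illustration and have nothing to do with the actual BitWhisper implementation:

```python
# Toy decoder for a thermal covert channel: each fixed-length window of
# temperature samples is read as 1 if its mean exceeds baseline + delta.
def decode_bits(samples, window=10, baseline=40.0, delta=1.0):
    bits = []
    for i in range(0, len(samples) - window + 1, window):
        window_mean = sum(samples[i:i + window]) / window
        bits.append(1 if window_mean > baseline + delta else 0)
    return bits

# Simulated trace: hot, idle, idle, hot -> expected [1, 0, 0, 1]
trace = [42.0] * 10 + [40.1] * 10 + [39.9] * 10 + [41.8] * 10
print(decode_bits(trace))
```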