Abstract: Humans shift between different personas depending on social context. Large Language Models (LLMs) demonstrate a similar flexibility in adopting different personas and behaviors. Existing approaches, however, typically adapt such behavior through external knowledge such as prompting, retrieval-augmented generation (RAG), or fine-tuning. We ask: do LLMs really need external context or parameters to adapt to different behaviors, or do they already have such knowledge embedded in their parameters? In this work, we show that LLMs already contain persona-specialized subnetworks in their parameter space. Using small calibration datasets, we identify distinct activation signatures associated with different personas. Guided by these statistics, we develop a masking strategy that isolates lightweight persona subnetworks. Building on these findings, we further ask: how can we discover opposing subnetworks in the model that lead to binary-opposing personas, such as introvert versus extrovert? To enhance separation in such binary-opposition scenarios, we introduce a contrastive pruning strategy that identifies parameters responsible for the statistical divergence between opposing personas. Our method is entirely training-free and relies solely on the language model's existing parameter space. Across diverse evaluation settings, the resulting subnetworks exhibit significantly stronger persona alignment than baselines that require external knowledge, while being more efficient. Our findings suggest that diverse human-like behaviors are not merely induced in LLMs, but are already embedded in their parameter space, pointing toward a new perspective on controllable and interpretable personalization in large language models.
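The contrastive idea behind the pruning step can be illustrated with a minimal NumPy sketch. This is not the paper's actual algorithm: the divergence score (absolute difference of mean activations), the per-parameter granularity, and the top-k threshold below are all illustrative assumptions chosen to show the general mechanism of keeping only the parameters whose calibration statistics diverge most between two opposing personas.

```python
import numpy as np

def persona_mask(acts_a, acts_b, keep_ratio=0.1):
    """Toy contrastive pruning (illustrative, not the paper's method):
    score each parameter by how much its mean activation diverges
    between two opposing personas, then keep the top fraction as a
    binary subnetwork mask."""
    # Mean activation per parameter over each persona's calibration set.
    mu_a = acts_a.mean(axis=0)
    mu_b = acts_b.mean(axis=0)
    # Divergence score: absolute difference of the persona means.
    score = np.abs(mu_a - mu_b)
    k = max(1, int(keep_ratio * score.size))
    # Binary mask selecting the k most persona-discriminative parameters.
    mask = np.zeros_like(score, dtype=bool)
    mask[np.argsort(score)[-k:]] = True
    return mask

# Synthetic calibration activations: 32 samples x 100 "parameters".
rng = np.random.default_rng(0)
acts_a = rng.normal(0.0, 1.0, (32, 100))
acts_b = rng.normal(0.0, 1.0, (32, 100))
acts_b[:, :10] += 3.0  # only the first 10 parameters diverge between personas
mask = persona_mask(acts_a, acts_b, keep_ratio=0.1)
print(mask.sum())  # 10 parameters kept, concentrated on the divergent ones
```

In a real model the statistics would be gathered per-weight or per-neuron from forward passes over persona-specific calibration prompts, and the resulting mask would gate the corresponding parameters at inference time.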