NEWS

Learn about the latest openKylin news and follow community and product updates.

Kudos! Multiple large-model papers from the openKylin community's AI4OS SIG have been accepted by AAAI, a top conference in artificial intelligence, and TKDE, a top journal in the field!

2023-12-19 14:07:24

On December 9, 2023, the paper acceptance results of the 38th AAAI Conference on Artificial Intelligence (AAAI 2024) were officially announced. Under the guidance of the openKylin community Technical Committee, the three papers on large AI models authored by openKylin AI4OS SIG members Ji Bin (Ph.D., National University of Defense Technology; postdoctoral fellow, National University of Singapore), Song Shezheng (Ph.D., National University of Defense Technology), and Li Xiaopeng (Ph.D., National University of Defense Technology) were all accepted by AAAI 2024. According to the conference's official website, AAAI 2024 received more than 12,000 submissions and accepted 2,342 papers, an acceptance rate of just 23.75%; the accepted papers span cutting-edge work across the field of artificial intelligence.


AAAI is the annual conference hosted by the Association for the Advancement of Artificial Intelligence and is one of the oldest and most comprehensive top-tier international academic conferences in artificial intelligence. In the China Computer Federation's ranking of international academic conferences, AAAI is listed as a Class A (top-tier) conference in the field of artificial intelligence.



Meanwhile, a paper by Liu Huijun (Ph.D., National University of Defense Technology) of the openKylin AI4OS SIG has also recently been accepted by TKDE, a top journal in the field of artificial intelligence. TKDE is a Class A international journal in artificial intelligence and big data recommended by the China Computer Federation (CCF); CCF describes Class A as "an extremely small number of top international journals and conferences, in which Chinese scholars are encouraged to make breakthroughs."

The titles and abstracts of the four papers are as follows:

01
Paper title: A More Context-aware Approach for Textual Adversarial Attacks using Probability Difference-guided Beam Search

Abstract:
Textual adversarial attacks expose the vulnerabilities of text classifiers and can be used to improve their robustness. Previous context-aware attack models suffer from several limitations. They generally rely on out-of-date substitutes, solely consider the gold label probability, and use the greedy search when generating adversarial examples, often limiting the attack efficiency. To tackle these issues, we propose MC-PDBS, a More Context-aware textual adversarial attack model using Probability Difference-guided Beam Search. MC-PDBS generates substitutes using the newest perturbed text sequences in each attack iteration, enabling the generation of more context-aware adversarial examples. The probability difference is an overall consideration of the probabilities of all class labels, which is more effective than the gold label probability in guiding the selection of attack paths. In addition, the beam search enables MC-PDBS to search attack paths from multiple search channels, thereby avoiding the limited search space problem. Extensive experiments and human evaluation demonstrate that MC-PDBS outperforms previous best models in a series of evaluation metrics, particularly bringing up to a +19.5% attack success rate. Extensive analyses further confirm the effectiveness of MC-PDBS.
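As a rough illustration of the probability-difference-guided beam search idea, the sketch below attacks a toy sentiment classifier by swapping words from a fixed substitute table. Everything here — the classifier, the substitute lists, and the margin function — is a hypothetical simplification; MC-PDBS itself generates context-aware substitutes with a language model at each attack iteration.

```python
import math

POSITIVE = {"great", "good", "fine"}

def classify(seq):
    """Toy binary sentiment classifier: returns [p_negative, p_positive]."""
    score = sum(1 if t in POSITIVE else -1 for t in seq)
    p_pos = 1 / (1 + math.exp(-score))
    return [1 - p_pos, p_pos]

def gold_margin(probs, gold):
    """Probability difference: gold-label probability minus the best rival's."""
    rival = max(p for i, p in enumerate(probs) if i != gold)
    return probs[gold] - rival

def beam_search_attack(tokens, substitutes, gold, beam_width=2):
    """Perturb one position per iteration, keeping the beam_width
    sequences that most reduce the gold-label margin."""
    beam = [tuple(tokens)]
    for pos in range(len(tokens)):
        candidates = set(beam)
        for seq in beam:
            for sub in substitutes.get(seq[pos], []):
                candidates.add(seq[:pos] + (sub,) + seq[pos + 1:])
        beam = sorted(candidates,
                      key=lambda s: gold_margin(classify(s), gold))[:beam_width]
        if gold_margin(classify(beam[0]), gold) < 0:  # gold label flipped
            return list(beam[0])
    return list(beam[0])

adv = beam_search_attack(["great", "good", "movie"],
                         {"great": ["ok"], "good": ["bland"]}, gold=1)
print(adv)
```

With `beam_width` greater than 1, several perturbed sequences stay alive per step, which is what lets a beam search escape the limited search space of a purely greedy attack.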
02
Paper title: A Dual-way Enhanced Framework from Text Matching Point of View for Multimodal Entity Linking

Abstract: Multimodal Entity Linking (MEL) aims at linking ambiguous mentions with multimodal information to entities in a Knowledge Graph (KG) such as Wikipedia, which plays a key role in many applications. However, existing methods suffer from shortcomings, including modality impurity (such as noise in raw images) and ambiguous textual entity representations, which put obstacles in the way of MEL. We formulate multimodal entity linking as a neural text matching problem where each piece of multimodal information (text and image) is treated as a query, and the model learns the mapping from each query to the relevant entity among candidate entities. This paper introduces a dual-way enhanced (DWE) framework for MEL: (1) our model refines queries with multimodal data and addresses semantic gaps using cross-modal enhancers between text and image information. Besides, DWE innovatively leverages fine-grained image attributes, including facial characteristics and scene features, to enhance and refine visual features. (2) By using Wikipedia descriptions, DWE enriches entity semantics and obtains more comprehensive textual representations, which reduces the gap between textual representations and the entities in the KG. Extensive experiments on three public benchmarks demonstrate that our method achieves state-of-the-art (SOTA) performance, indicating the superiority of our model. The code will be made publicly available.
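The "linking as text matching" formulation can be pictured with a deliberately tiny sketch: each query is scored against every candidate entity's description and the best match wins. The bag-of-words cosine below is a hypothetical stand-in for DWE's learned multimodal matching model, and the entity table is invented for illustration.

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity between two bag-of-words Counters."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def link(query, candidates):
    """Return the candidate entity whose description best matches the query."""
    q = Counter(query.lower().split())
    return max(candidates,
               key=lambda name: cosine(q, Counter(candidates[name].lower().split())))

entities = {
    "Apple Inc.": "apple is a technology company that makes the iphone",
    "Apple (fruit)": "the apple is an edible fruit produced by an apple tree",
}
print(link("apple released a new iphone", entities))
```

In DWE the query side is additionally refined with image features before matching; here the matching step alone is shown.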
03
Paper title: PMET: Precise Model Editing in a Transformer

Abstract: Model editing techniques modify a minor proportion of knowledge in Large Language Models (LLMs) at a relatively low cost and have demonstrated notable success. Existing methods assume that Transformer Layer (TL) hidden states are the values of the key-value memories of the Feed-Forward Network (FFN). They usually optimize the TL hidden states to memorize target knowledge and use them to update the weights of the FFN in LLMs. However, the information flow of TL hidden states comes from three parts: Multi-Head Self-Attention (MHSA), the FFN, and residual connections. Existing methods neglect the fact that the TL hidden states contain information not specifically required by the FFN. Consequently, the performance of model editing decreases. To achieve more precise model editing, we analyze the hidden states of MHSA and FFN, finding that MHSA encodes certain general knowledge-extraction patterns. This implies that MHSA weights do not require updating when new knowledge is introduced. Based on the above findings, we introduce PMET, which simultaneously optimizes Transformer Component (TC, namely MHSA and FFN) hidden states, while only using the optimized TC hidden states of the FFN to precisely update FFN weights. Our experiments demonstrate that PMET exhibits state-of-the-art performance on both the COUNTERFACT and zsRE datasets. Our ablation experiments substantiate the effectiveness of our enhancements, further reinforcing the finding that MHSA encodes certain general knowledge-extraction patterns and indicating that it stores a small amount of factual knowledge. Our code is available at https://github.com/xpq-tech/PMET.
04
Paper title: Chain-of-Thought Improves Text Generation with Citations in Large Language Models

Abstract: Previous studies disclose that Large Language Models (LLMs) suffer from hallucination when generating texts, bringing a novel and challenging research topic to the public, which centers on enabling LLMs to generate texts with citations. Existing work exposes two limitations when using LLMs to generate answers to questions with provided documents, namely unsatisfactory answer quality and poor citation performance. To tackle the above issues, we investigate using Chain-of-Thought (CoT) to elicit LLMs' ability to synthesize correct answers from multiple provided documents, as well as cite them properly. Moreover, we propose a Citation Insurance Mechanism, which enables LLMs to detect missing citations. We conduct experiments on the ALCE benchmark with six open-source LLMs. Experimental results demonstrate that: (1) CoT prompting strategies significantly improve the quality of text generation with citations; (2) the Citation Insurance Mechanism delivers impressive citation performance gains at a low cost; (3) our best approach performs as well as previous best ChatGPT-based baselines. Extensive analyses further validate the effectiveness of the proposed approach.
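As a rough sketch of the prompting setup such work evaluates, the helper below assembles a CoT-style prompt over numbered documents. The wording and the sample documents are illustrative assumptions, not the paper's actual prompt or its Citation Insurance Mechanism.

```python
def build_cot_citation_prompt(question, documents):
    """Assemble a CoT-style prompt (hypothetical wording): number the
    documents, ask for step-by-step reasoning, and require [k] citations."""
    doc_block = "\n".join(f"[{i + 1}] {doc}" for i, doc in enumerate(documents))
    return (
        "Answer the question using only the documents below.\n"
        "First think step by step about which documents support the answer,\n"
        "then write the final answer, citing each claim as [1], [2], ...\n\n"
        f"Documents:\n{doc_block}\n\n"
        f"Question: {question}\nAnswer:"
    )

prompt = build_cot_citation_prompt(
    "What kind of project is openKylin?",
    ["openKylin is an open-source desktop operating system community.",
     "The AI4OS SIG works on combining AI with the operating system."])
print(prompt)
```

Citation quality can then be checked by verifying that every claim in the model's answer carries a bracketed index that points at a supporting document.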



About the AI4OS SIG

The AI4OS SIG is dedicated to combining artificial intelligence (AI) with the operating system (OS), embedding AI technologies represented by large models into the openKylin operating system and rooting AI deep in the underlying OS, so that large-model AI capabilities can be invoked directly to complete tasks without any application acting as an intermediary, making the operating system intelligent and optimizing its performance.
  • SIG homepage:

https://gitee.com/openkylin/community/tree/master/sig/AI4OS

Congratulations to the four authors on these outstanding results! Going forward, the openKylin community will keep pushing ahead on the road of technical innovation, bringing openKylin users and developers an ever more efficient, intelligent, and convenient experience of using and developing the operating system!