LLAM


The Lab of Large Audio Model (LLAM) is committed to exploring and advancing the frontier of audio and sound technology and to building large audio models.


Recent News


[16/05/2024] It feels amazing to receive an acceptance notification from a top-tier conference on a weekday afternoon! Our latest research paper, “Superfiltering: Weak-to-Strong Data Filtering for Fast Instruction-Tuning,” a collaboration between Dr. Jianzong Wang’s team at Ping An Technology and Professor Tianyi Zhou’s team at the University of Maryland, has been accepted as a long paper at ACL 2024 (a CCF Class A conference) with an acceptance rate of less than 20%. This represents a significant breakthrough in instruction tuning for large models: for the first time, we reveal that models of different scales perceive instruction difficulty consistently, and we achieve an over 20-fold speed-up of the large-model training process through our superfiltering method. This opens up new avenues for data filtering technology, and we welcome citations from our peers! Research highlights:

1. Weak-to-strong data consistency: small and large language models are highly consistent in how they perceive and evaluate the difficulty of instruction-tuning data, a finding that is crucial for optimizing data filtering pipelines.
2. Efficient superfiltering strategy: we propose the first superfiltering method that uses a small model (e.g., GPT-2) to select data, significantly accelerating the fine-tuning of large language models.
3. Effectiveness of the selected data: superfiltering precisely identifies high-quality, information-rich examples; models trained on only 5% of the filtered data match or even outperform models trained on the full dataset across multiple benchmarks.

The complete results and code are publicly available on GitHub: https://github.com/tianyi-lab/Superfiltering, and a minimal sketch of the scoring-and-selection idea appears below. This is our second paper at a top NLP conference; our collaboration with the University of Maryland previously produced a NAACL paper on automatically identifying high-quality instruction data during large-model training.
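To make the idea concrete, here is a minimal, hypothetical sketch of weak-to-strong data filtering in the spirit of Superfiltering: a small model (GPT-2) ranks instruction–response pairs by a perplexity-ratio style difficulty score, and only the top fraction is kept. The scoring function, field names, and selection ratio are illustrative stand-ins, not the paper’s exact method; see the repository above for the real implementation.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

device = "cuda" if torch.cuda.is_available() else "cpu"
tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2").to(device).eval()

@torch.no_grad()
def response_nll(prompt: str, response: str) -> float:
    """Mean negative log-likelihood of the response tokens under GPT-2,
    optionally conditioned on a prompt."""
    resp_ids = tokenizer(response, return_tensors="pt", truncation=True).input_ids
    prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids if prompt else resp_ids[:, :0]
    ids = torch.cat([prompt_ids, resp_ids], dim=1)[:, -1024:].to(device)  # GPT-2 context limit
    labels = ids.clone()
    labels[:, : max(0, ids.shape[1] - resp_ids.shape[1])] = -100  # score only the response tokens
    return model(input_ids=ids, labels=labels).loss.item()

def difficulty_score(instruction: str, response: str) -> float:
    """Perplexity-ratio style score: how hard the response is to predict
    given the instruction, relative to predicting it unconditionally."""
    return response_nll(instruction + "\n", response) / max(response_nll("", response), 1e-8)

def superfilter(dataset, keep_ratio=0.05):
    """Keep the top `keep_ratio` fraction of examples by difficulty score."""
    ranked = sorted(dataset,
                    key=lambda ex: difficulty_score(ex["instruction"], ex["response"]),
                    reverse=True)
    return ranked[: max(1, int(len(ranked) * keep_ratio))]
```

Here `dataset` is assumed to be a list of dicts with "instruction" and "response" keys. The point of the weak-to-strong setup is that scoring thousands of pairs with GPT-2 is far cheaper than scoring them with the large model being fine-tuned.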

[09/05/2024] The Twentieth International Conference on Intelligent Computing (ICIC 2024) will take place from August 5 to 8, 2024, in Tianjin, China. In the recently released acceptance notifications, two of our latest research efforts were selected for oral presentation: “RREH: Reconstruction Relations Embedded Hashing for Semi-Paired Cross-Modal Retrieval” and “Enhancing Emotion Prediction and Recognition in Conversation through Fine-Grained Emotional Cue Analysis and Cross-Modal Fusion”. We eagerly anticipate sharing these results with the intelligent computing community at ICIC 2024.

[02/05/2024] Groundbreaking research on an emotion transfer TTS model accepted at APWeb 2024. The Asia-Pacific Web (APWeb) and Web-Age Information Management (WAIM) Joint International Conference on Web and Big Data (APWeb-WAIM) aims to attract professionals from communities related to the Web and big data who share an interest in interdisciplinary research, spanning Web technologies, database systems, information management, software engineering, and big data. In the latest acceptance notification, our paper “RSET: Remapping-based Sorting Method for Emotion Transfer Speech Synthesis,” on an advanced text-to-speech (TTS) model, was officially accepted by APWeb 2024. The paper introduces a novel emotion transfer TTS model that overcomes traditional limitations in emotion-intensity-controllable speech synthesis.

[08/04/2024] We are thrilled to announce that our paper “Retrieval-Augmented Audio Deepfake Detection” has been accepted at ICMR 2024 (CCF-B). This research addresses rising concerns about the misuse of hyper-realistic audio deepfakes enabled by recent advances in speech synthesis. Our retrieval-augmented detection (RAD) framework, inspired by retrieval-augmented generation (RAG) in large language models (LLMs), significantly improves deepfake detection by augmenting each test sample with highly similar retrieved samples; a multi-fusion attentive classifier further boosts the performance of the whole framework (a schematic sketch of the retrieval step follows below). Extensive experiments demonstrate the superiority of RAD over baseline approaches, with state-of-the-art results on the ASVspoof 2021 DF dataset and competitive results on the 2019 and 2021 LA datasets. This acceptance underscores the importance of combating audio deepfakes, offering a promising way to safeguard the authenticity and credibility of digital content. We look forward to sharing our findings at ICMR 2024.
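As an illustration of the retrieval step only, here is a small, hypothetical NumPy sketch under assumed interfaces: `store` holds embeddings of reference audio produced by any feature extractor, and the mean-pooled fusion is a simple stand-in for the paper’s multi-fusion attentive classifier, not its actual design.

```python
import numpy as np

def retrieve(query: np.ndarray, store: np.ndarray, k: int = 5) -> np.ndarray:
    """Return the k store embeddings most cosine-similar to the query.
    `store` has shape (N, D); `query` has shape (D,)."""
    sims = store @ query / (np.linalg.norm(store, axis=1) * np.linalg.norm(query) + 1e-8)
    return store[np.argsort(-sims)[:k]]

def augmented_features(query_emb: np.ndarray, store: np.ndarray, k: int = 5) -> np.ndarray:
    """Augment a test embedding with its retrieved neighbours before
    handing it to a downstream classifier (mean-pooling as a stand-in fusion)."""
    neighbours = retrieve(query_emb, store, k)
    return np.concatenate([query_emb, neighbours.mean(axis=0)])

# Toy usage with random vectors standing in for audio embeddings.
rng = np.random.default_rng(0)
store = rng.normal(size=(1000, 128))          # embeddings of reference audio
query = rng.normal(size=128)                  # embedding of the test utterance
features = augmented_features(query, store)   # shape (256,), input to a classifier
```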

[16/03/2024] Ten groundbreaking papers accepted from our team at IJCNN 2024. We are thrilled to announce that our latest submissions to the International Joint Conference on Neural Networks (IJCNN) 2024 met with exceptional success, with a total of ten papers accepted for presentation. IJCNN is the foremost international conference dedicated to the theory, analysis, and applications of neural networks. The accepted works span a diverse array of cutting-edge topics, from speech recognition and voice conversion to singing-voice enhancement, 3D action recognition, extractive question answering, and federated learning, representing the forefront of innovation in artificial intelligence and its practical applications. The accepted papers are:

- Task-Agnostic Decision Transformer for Multi-Type Agent Control with Federated Split Training
- QLSC: A Query Latent Semantic Calibrator for Robust Extractive Question Answering
- PRENet: A Plane-Fit Redundancy Encoding Point Cloud Sequence Network for Real-Time 3D Action Recognition
- MAIN-VC: Lightweight Speech Representation Disentanglement for One-Shot Voice Conversion
- Learning Expressive Disentangled Speech Representations with Soft Speech Units and Adversarial Style Augmentation
- EfficientASR: Speech Recognition Network Compression via Attention Redundancy and Chunk-Level FFN Optimization
- Efficient Multi-Model Fusion with Adversarial Complementary Representation Learning
- EAD-VC: Enhancing Speech Auto-Disentanglement for Voice Conversion with IFUB Estimator and Joint Text-Guided Consistent Learning
- Enhancing Anomalous Sound Detection with Multi-Level Memory Bank
- CONTUNER: Singing Voice Beautifying with Pitch and Expressiveness Condition

We intend to release the final versions of these papers on arXiv soon, enabling further discussion, collaboration, and exploration of the ideas presented. We invite fellow researchers, practitioners, and enthusiasts to join us in exploring the frontier of neural networks and artificial intelligence; your insights and feedback are invaluable as we collectively push the boundaries of this rapidly evolving field.

Research Directions


Large Audio Model

Research on Large Audio Models aims to advance the field of audio processing, generation, understanding, and multimodal processing, with the goal of enabling new and innovative applications in areas such as speech recognition, virtual assistants, music composition, audio synthesis, and more.

Text-to-Speech

Research on high-quality audio synthesis, few-shot TTS, low-resource TTS, and expressive TTS, mainly applied to scenarios such as voice interaction, information broadcasting, and text-to-speech reading, as well as intelligent outbound voice calls and intelligent agents.

Voice Conversion

Research that aims to transform the vocal characteristics of a speaker while preserving the linguistic content of their speech, with applications across speech processing, including speaker adaptation, voice disguise, and emotion transfer; a schematic sketch of the typical recipe follows below.
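As a rough illustration of the common disentanglement recipe (not any specific system of ours), here is a schematic PyTorch sketch: a content encoder strips speaker identity from the source speech, a speaker encoder summarizes the target voice, and a decoder recombines them. All module choices and sizes are illustrative assumptions.

```python
import torch
import torch.nn as nn

class VoiceConverter(nn.Module):
    """Schematic content/speaker disentanglement model (illustrative sizes)."""
    def __init__(self, mel_dim=80, content_dim=64, speaker_dim=32):
        super().__init__()
        self.content_encoder = nn.GRU(mel_dim, content_dim, batch_first=True)
        self.speaker_encoder = nn.Sequential(nn.Linear(mel_dim, speaker_dim), nn.Tanh())
        self.decoder = nn.GRU(content_dim + speaker_dim, mel_dim, batch_first=True)

    def forward(self, source_mel, target_mel):
        # What was said: frame-level content features from the source utterance.
        content, _ = self.content_encoder(source_mel)            # (B, T, content_dim)
        # Who should say it: one vector summarizing the target speaker's voice.
        speaker = self.speaker_encoder(target_mel.mean(dim=1))   # (B, speaker_dim)
        speaker = speaker.unsqueeze(1).expand(-1, content.size(1), -1)
        # Recombine: same words, new voice.
        converted, _ = self.decoder(torch.cat([content, speaker], dim=-1))
        return converted                                          # (B, T, mel_dim)

# Toy usage: convert a 100-frame source utterance to a target speaker's voice.
model = VoiceConverter()
source = torch.randn(1, 100, 80)  # mel-spectrogram of the source speech
target = torch.randn(1, 120, 80)  # mel-spectrogram exemplifying the target voice
out = model(source, target)       # (1, 100, 80)
```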

Speech Security

Research aimed at addressing security threats and vulnerabilities associated with speech data, speech recognition systems, and voice communication.

Music AI

Research topics related to music information retrieval, including song detection, singer identification, main melody extraction, and voice beautification.

Latest Publications

Recent & Upcoming Events