Shenghua Liu
Professor, Trustworthy LLM and Big Graph Mining


No.6 Kexueyuan South Road, Haidian District
Beijing, China 100190
email: liushenghua at ict.ac.cn
I am a Professor at the Institute of Computing Technology, Chinese Academy of Sciences. My research interests include trustworthy foundation models and big graph mining, with applications in scientific deep research, anomaly detection, and various real-world networked systems, ranging from academic collaboration and supply-chain networks to financial transactions and biological networks. The rise of LLMs has shifted much of my interest toward trustworthy foundation models and graph LLMs, which extend LLMs with the ability to understand graphs and to think and model with them. I believe that with graph LLMs, real-world problems involving complex correlations and connections can be solved well and trustworthily.
My featured works are published in IEEE TKDE, ACM TKDD, and the proceedings of top-tier conferences such as AAAI, ICLR, ACL, CIKM, WSDM, and ECML-PKDD. Some of these publications have been recognized with honors, including an ASP-DAC 2010 best paper candidacy and the ECML-PKDD 2020 best student DM paper award.
My educational and visiting experience:
- Ph.D. degree from the Department of Computer Science & Technology, Tsinghua University, in 2010, supervised by Prof. Xianlong Hong, an honorable professor in electronic design automation (EDA).
- Visiting Ph.D. student at the Electrical Engineering Department, University of California, Los Angeles (UCLA), hosted and supervised by Prof. Lei He, 2006-2007; as a result, I am listed among the Alumni in Academia of Electrical & Computer Engineering at UCLA.
- Research Scholar at the Computer Science Department, Carnegie Mellon University (CMU), hosted and supervised by Prof. Christos Faloutsos, 2016-2017.
news
May 16, 2025: Two of our works were accepted to the ACL 2025 main conference.
selected publications
- Decoding by Contrasting Knowledge: Enhancing Large Language Model Confidence on Edited Facts. In Proc. of the Association for Computational Linguistics (ACL Main), 2025.
- Can Graph Descriptive Order Affect Solving Graph Problems with LLMs? In Proc. of the Association for Computational Linguistics (ACL Main), 2025.
- Is Factuality Enhancement a Free Lunch For LLMs? Better Factuality Can Lead to Worse Context-Faithfulness. In International Conference on Learning Representations (ICLR), 2025.
- "Not Aligned" is Not "Malicious": Being Careful about Hallucinations of Large Language Models' Jailbreak. In Proc. of the International Conference on Computational Linguistics (COLING), 2025.
- SLANG: New Concept Comprehension of Large Language Models. In Proc. of Empirical Methods in Natural Language Processing (EMNLP Main), 2024.
- Node Embedding Preserving Graph Summarization. ACM Transactions on Knowledge Discovery from Data (TKDD), 2024.
- Graph Summarization for Preserving Spectral Characteristics. In Proc. of the SIAM International Conference on Data Mining (SDM), 2024.
- Unified Dense Subgraph Detection: Fast Spectral Theory based Algorithms. IEEE Transactions on Knowledge and Data Engineering (TKDE), 2024. Published March 2024 (online: 17 July 2023).
- SpecGreedy: Unified Dense Subgraph Detection. In Proc. of the European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML-PKDD), 2020. Best student DM paper award; acceptance rate 19%. Verified on 40 real-world networks and a 1.47-billion-edge graph.
- A Contrast Metric for Fraud Detection in Rich Graphs. IEEE Transactions on Knowledge and Data Engineering (TKDE), 2019.