By 2025, our systems had automatically uncovered more than 60 real-world vulnerabilities, half of them high-risk. Looking back, we found that **our success came not from a single technical breakthrough, but from correctly tracking paradigm shifts in AI and adapting our methods at each transition**. At the same time, we watched many top-tier papers gradually lose real-world impact because they failed to adapt to those shifts. This article is our attempt to make that pattern explicit: we trace three paradigm transitions in automated vulnerability discovery from 2022 to 2025—moving from "LLMs as classifiers" to "LLMs augmenting fuzzers and static analyzers" to "agentic, tool-using auditors"—and discuss how understanding these shifts can help you make research and engineering bets that survive across paradigms.
In today's digital world, classical public-key cryptosystems such as RSA-2048 and ECC are the most widely used encryption standards, underpinning the trust behind network security, financial transactions, and privacy protection. However, this cornerstone faces a potential threat from quantum computing. In theory, quantum computers can factor large integers and solve discrete logarithms far faster than classical computers, thereby breaking RSA and ECC encryption in a short time. This prospect is both exciting and worrying. The question is: what stage has the development of quantum computers actually reached? Some optimists believe the "countdown" for classical public-key cryptography has already begun; others doubt this, arguing that truly usable quantum computers remain far away due to manufacturing difficulties. Opinions swing between optimism and pessimism, but the core question lingers: how far are quantum computers from breaking classical public-key cryptography?
Our AI-powered automated vulnerability discovery engine has uncovered more than 30 vulnerabilities across many important open-source projects, nearly half of which pose significant real-world risks (such as RCE). In this article, we'll share one particularly interesting case: a high-severity vulnerability (CVE-2025-57801, CVSS 8.6) discovered in the zero-knowledge proof library gnark. We'll be sharing more intriguing vulnerabilities in the future.
Recently, vibe coding has become a new trend in the development community. With tools like Cursor and Claude Code, developers only need to describe their requirements and the AI generates the code automatically. From batch-completing repetitive code to quickly building prototypes and refactoring legacy code, it greatly improves development efficiency. In our own attempts, we found it can fully handle medium-difficulty engineering work, and the productivity boost is impressive. However, many people come away disappointed from their first encounter: the AI-written code doesn't run, its changes break the project, and they end up returning to manual coding, or writing by hand while asking questions in a regular AI chat interface.
Why can some articles keep readers engaged from start to finish, while others leave readers thinking "I understand every sentence, but can't grasp the whole picture"? The problem often lies not with the readers, but with the writing approach. So how can we write articles that are both professional and readable? This article uses information-delivery writing (blogs, technical documentation, academic papers, and the like) as its running example and summarizes some practical experience. If you happen to create such content, these ideas should be helpful. It's worth noting that some of the techniques are not limited to information-delivery writing: methods such as making text more vivid apply equally to essays, novels, and other genres.
The essence of technology is a process, method, or apparatus formed to achieve a human purpose. In other words, technology has always served a purpose rather than being the purpose itself. Therefore, when we attempt to solve a problem with technology, the domain where the problem lies and the domain where the solution resides may be completely different. The problem domain is simply where the problem occurs, while the solution domain is where the answer lies - the two are not necessarily aligned. Thus, **the more comprehensive a person's technical mastery, the more likely they are to construct a good solution.**
Large Language Models (LLMs) are evolving from simple conversational tools into intelligent agents capable of writing code, operating browsers, and executing system commands. As LLM applications advance, the threat of prompt injection attacks continues to escalate. Imagine this scenario: you ask an AI assistant to help write code, but it suddenly starts executing malicious instructions and takes control of your computer. What sounds like science fiction is now becoming reality. This article introduces a novel prompt injection attack paradigm: attackers need only craft a set of "universal triggers" to make an LLM emit arbitrary attacker-specified content, thereby leveraging AI agents to achieve high-risk operations like remote code execution.
In today's digital era, wireless communication technologies such as 5G, 4G, and Wi-Fi have become essential infrastructure in our daily lives. These networks commonly employ advanced encryption protocols that, in theory, provide effective protection for user communications. However, recent research published at EuroS&P 2025 by our Tencent Xuanwu Lab, in collaboration with Professor Chen Jianjun's team from Tsinghua University, reveals a new security vulnerability called LenOracle. The research demonstrates that attackers can exploit radio frame length information as a side channel to hijack TCP/UDP connections in encrypted networks, without breaking the wireless encryption itself. We conducted tests in real commercial LTE networks and Wi-Fi environments: in the TCP scenario we successfully injected a forged short message into a victim device, and in the UDP scenario we polluted the victim's DNS cache, demonstrating the potential destructive power of this attack against critical network services.
Sharing how Western philosophy has rationally argued for God's existence: a logical dialogue spanning time and space, and a fascinating one.
As a science student, I believed for a long time that logic was the only thing worth trusting. I once found Chinese philosophy regrettable—concepts like "emptiness is form, form is emptiness" from the Heart Sutra seemed illogical. I even thought Eastern philosophy had gone astray, and that Western epistemology and ontology were the true path. Later, as my studies deepened, I discovered that logic has its limitations, and the methods of Chinese philosophy can do what logic cannot.
After the Spring Festival, I started reading Zhuangzi. Classical Chinese from the pre-Qin period is much harder to read than that from the Tang and Song dynasties onward. With the help of various commentaries and annotations, after reading for over a week, I've only finished "Free and Easy Wandering" and half of "The Adjustment of Controversies." Just this portion has already greatly benefited me, so today I'd like to share some insights.
2024 was the year I read the most, and the year I felt I grew the most. The books I read can be broadly categorized into three major topics: philosophy, finance, and some practical content. In this article, I will share some of my insights on these topics.
==> atum, Jun 16, 2024, [philosophy], On Freedom and the Meaning of Human Life <==
When we say "I made a free choice," what does that truly mean? In a universe governed by physical laws, does freedom really exist? And if the world itself has no purpose or meaning, what value do human choices have? Throughout life, we seem to move forward within these three fundamental questions. This article attempts to explore them from the perspectives of science, philosophy, and consciousness, seeking a "space for free and meaningful existence" between rationality and confusion.
This article is intended for security researchers who have developed scripts and personal projects and want to learn more about engineering development.
Last night, I came across an article in a WeChat group about how a family member's phone was stolen, and criminal gangs used the SIM card (primarily SMS verification codes) as their entry point to launch a series of attacks. Although the author took timely remedial measures, these attacks still caused significant losses to the victim, such as unauthorized micro-loans. I reflected on why this attack succeeded and how to defend against such attacks, and I'm sharing my thoughts here.
==> atum, Jun 2, 2019, [ctf], How to Create a High-Quality CTF Challenge <==
Our team r3kapig provided 10 of the 13 challenges for the recently concluded Baidu CTF, one of the DEF CON qualifier competitions, and served as chief referee, overseeing the competition format, some of the rules, and challenge quality. Baidu CTF is a new qualifier, and we also experimented with some new ideas, such as having AEG (automatic exploit generation) systems compete against top human teams in the same event. Although minor issues arose along the way, through everyone's joint efforts we still delivered a solid qualifier. Today, I want to use this competition as an opportunity to discuss my personal understanding of CTF challenges.