By the time Cluely went viral earlier this year, it was already too late to ignore what it represented. Billed by its creators as an undetectable AI assistant for coding interviews, Cluely was more than just a tool.
At its launch, Chungin “Roy” Lee posted on X: “Cluely is out. cheat on everything.” The post featured him using a hidden AI prompt on a date, pretending to know art history. It racked up over 13 million views. With that, the internet met the first version of a product designed to blur the line between real skill and algorithmic support in job interviews and potentially everything else.
But while the tech world debated Cluely’s ethics and future, two Columbia University freshmen were already quietly building its opposite.
Building Truely, the “anti-Cluely”
Meet Truely, an open-source tool designed to flag AI-assisted interviews in real time. Launched on July 14 by Antonio Li (Columbia College ’28) and Patrick Shen (The Fu Foundation School of Engineering and Applied Science ’28), Truely doesn’t help you cheat. It is built to spot when you are, and if Cluely gamed the system, Truely wants to level the field.
“We just try and make sure that the interview is fair on both sides,” Shen said in an interview with The Columbia Spectator. “That’s pretty much the goal of Truely. It’s pretty simple.”
But the backstory isn’t. Cluely, before it went viral, started as Interview Coder, a stealth AI script created by then-Columbia students Roy Lee and Neel Shanmugam to offer real-time support during LeetCode-style technical interviews. By April, it had evolved into Cluely, capable of proactively generating responses beyond just code, all without the interviewer knowing. Its creators dropped out after disciplinary action from Columbia and now run Cluely full time in San Francisco.
Tech with transparency at the core
Li and Shen, by contrast, are still at Columbia. They are also spending their summer at Off Season, a tech founders’ makerspace in the Bay Area. They say they stumbled into the project after their original start-up idea left them feeling burned out. Truely began as a side project, built over late-night horror stories from friends navigating technical interviews in the age of ChatGPT.
Their premise was simple: If AI is going to be part of the hiring process, let’s at least be honest about it.
Truely works by monitoring open browser windows, microphone, and screen access, and live network requests during video interviews on Zoom, Google Meet, and other platforms. It then produces a cumulative score indicating how likely it is that a candidate used AI assistance.
The tool doesn’t block anyone or shut meetings down. It just flags unusual activity, and according to Li, that’s the point.
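The article describes Truely’s output only at a high level: individual signals (open browser windows, microphone and screen access, live network requests) feed into one cumulative likelihood score. A minimal sketch of how such a scoring scheme could work is below; the signal names, weights, and combination rule are all illustrative assumptions, not Truely’s actual implementation.

```python
# Hypothetical signal weights -- illustrative values, not Truely's.
SIGNAL_WEIGHTS = {
    "extra_browser_window": 0.2,
    "hidden_overlay_process": 0.5,
    "llm_api_network_request": 0.6,
    "screen_capture_access": 0.3,
}

def ai_use_score(observed_signals):
    """Combine observed signals into a score in [0, 1].

    Uses 1 - product(1 - w), an independent-evidence combination:
    several weak signals compound, but no single signal is decisive.
    """
    remaining = 1.0
    for signal in observed_signals:
        remaining *= 1.0 - SIGNAL_WEIGHTS.get(signal, 0.0)
    return 1.0 - remaining

# A lone extra window scores low; adding suspected LLM traffic
# pushes the cumulative score much higher.
low = ai_use_score(["extra_browser_window"])
high = ai_use_score(["extra_browser_window", "llm_api_network_request"])
```

The multiplicative form matches the flag-don’t-block design the founders describe: a single ambiguous signal produces a modest score for a human reviewer to interpret, rather than a hard pass/fail.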
“We don’t want to make cheating a requirement to get a job,” he told The Columbia Spectator. “I just don’t think that’s a really good norm to set.”
Built by students, for a new generation of hiring
There’s something deeply student-coded about how Truely was built. Despite the serious implications of AI detection, the duo approached it with a light touch. Shen calls it “a fun side project that blew up.” Li describes most traditional anti-cheating software as “invasive” and “psychologically daunting,” something Truely is actively avoiding.
“The way we think about it right now is, it’s just a normal Zoom meeting. You send a link over, and we just have a bot there to monitor. That’s it,” Li said. “It has no other impact on your life whatsoever.”

Cluely, for all its controversy, is also shifting its focus. According to Lee, the product is already moving beyond interviews toward enterprise clients. The “undetectability” feature that once made Cluely famous is no longer its main draw.

“Cluely is already transitioning out of interviews pretty quickly,” Lee told The Columbia Spectator. “We’re building for bigger enterprise clients.”

He also weighed in on Truely’s success, adding that it was “super cool” to see other Columbia students building. “I’m no stranger to engagement bait and really like that they wrung out millions of views from the situation,” he said.
The bigger conversation on AI and fairness
For Li and Shen, the virality wasn’t exactly part of the plan. Within a week of their LinkedIn demo going live, the posts had racked up thousands of impressions. But the spotlight hasn’t changed their mindset. Their goal, as Li puts it, is to “make it as uninteresting as possible.”
That might be a stretch. In a world where AI is becoming both a core skill and a core ethical debate, tools like Truely will likely stir more than a few conversations, especially on campuses like Columbia’s, where AI literacy is growing fast and boundaries are still being drawn.

Li himself is not anti-AI. He uses it for coding and believes interviews should test for AI proficiency. What matters to him is clarity.
“If I’m using AI, the interviewer should also know that I am using AI,” he said. “I think it’s just like an honest position of your ability. If you are good, then you should get the job, right?”

It’s a sentiment that’s less flashy than “cheat on everything,” but maybe more sustainable.
For now, these two first-year students at Columbia are offering a rare twist in the AI narrative, one that is less about shortcuts and more about setting ethical standards.