Authors loathe AI. That’s evident in the dozens of lawsuits they’ve filed against AI companies for allegedly training their models on millions of copyrighted books without consent or compensation. But it turns out their publishing houses aren’t quite so against it. Some of the largest publishers in the country, including Penguin Random House, Macmillan, Sourcebooks and Wiley, are recruiting AI engineers, according to public job listings reviewed by Forbes.
None of them is planning to use AI for editing or writing, at least for now. Instead, the job listings reveal that publishers are looking to adopt AI to run their businesses more intelligently, such as building forecasting models that can help predict which books will perform well and how they should be priced.
“It does, I think, put publishers in a very awkward position because they have to pay attention to the fact that this technology isn’t going anywhere,” said Jane Friedman, an industry expert who writes a widely read newsletter on the business of publishing and advises several publishing groups. “They’ve got shareholders who are probably asking, ‘How are you going to use this to make money, create efficiencies, et cetera?’ No one wants the publishing industry to stick its head in the sand from a profit perspective.”
Publishing giant Pan Macmillan is hiring two AI “solutions managers” to identify potential use cases for AI and design products that can help solve “complex business challenges.” The role includes building prototypes, designing new workflows and providing coaching to “embed AI tools into the daily fabric of the organization,” according to the job posting. The company has publicly announced it is using AI for back-office tasks like tagging keywords to make books more discoverable, as well as document summarization, translation and content moderation, per a blog post outlining its approach. But the publishing house is clear in the post that it wants to put authors (and their legal rights) above machines. “We are a publisher of human stories, by human writers,” it declares. Macmillan did not respond to Forbes’ request for comment.
Penguin Random House, the world’s largest trade book publisher, is hiring a senior AI solutions engineer to develop AI systems for book marketing and discovery and ensure AI applications scale reliably, per the position’s listing. The company is using AI to help achieve “operational excellence,” spokesperson Claire von Schilling told Forbes via email. That includes using AI to better manage its inventory and more accurately decide how many copies of a book to print. In January 2025, Penguin Random House’s parent company Bertelsmann announced that it plans to roll out OpenAI’s ChatGPT Enterprise to employees.
That internal embrace of AI doesn’t mean the company isn’t fighting for its authors’ copyrights. In late 2024, the publisher started adding disclaimers to its books stating they can’t be used or reproduced to train AI models. “As publishers, our primary responsibility is to serve our authors. Above all, we are committed to protecting their intellectual property and copyrights,” von Schilling wrote.
Publishers have to be careful about how they publicly position themselves with regard to AI, given the widespread pushback from the writer community, which largely views the technology as a machine that slurps up their work to create a competing product that threatens their livelihood. Famed writer Margaret Atwood told Reuters in 2024 that “AI is a crap poet” and a “data scraper.” Novelist Zadie Smith has said anything AI writes will be “intrinsically hollow…like planets without gravity” because it lacks human insight. And George R. R. Martin, among the 17 plaintiffs in a 2023 Authors Guild lawsuit against OpenAI, has called AI “the world’s most expensive and energy-intensive plagiarism machine.”
No surprise, they don’t want AI anywhere near their work. In December 2025, Amazon came under fire for adding an AI feature to Kindle that allowed readers to “chat” with a book by asking questions about its content, such as plot details or character names, as well as for analysis and summaries. The Authors Guild and throngs of enraged writers criticized the feature because they feared an AI model had been trained on their copyrighted books without consent or compensation to the authors. Plus, there was no way to opt out, for writers or for readers. (Amazon responded that book content isn’t used to train an underlying model, and said it sees the tool as “a natural language expansion of the search functionality that already exists in Kindle.”)
But legally, AI companies have seen some big wins. In a landmark ruling on a class action copyright case authors brought against Anthropic in June 2025, U.S. District Judge William Alsup ruled that using copyrighted works to train an AI model falls under the fair use doctrine. In similar copyright infringement lawsuits against Meta and Stability AI, the courts have largely sided with AI companies. Some cases are still pending, such as the high-profile class action copyright case filed by authors and media companies like The New York Times, where the courts have ordered OpenAI to share 20 million chat logs with the plaintiffs as part of the discovery process. (OpenAI has said that the case is without merit and that using publicly available information constitutes fair use.)
One major caveat: downloading pirated versions of books from shadow libraries, illicit digital repositories of thousands of e-books, isn’t legal. In August 2025, Anthropic reached an out-of-court settlement, without admitting any wrongdoing, to pay $1.5 billion to the authors of 500,000 books found in one such database. In other cases, like that of OpenAI, the rulings have been mixed and major decisions are yet to be announced.
Some publishers have opted to strike multi-million dollar licensing deals with AI companies, rather than fight them in court. In fiscal year 2025, publishing outfit Wiley booked $40 million just from AI licensing deals, selling backlist titles and academic books to companies like Anthropic. But for other, often smaller publishers, the decision to license all their titles to AI companies is a difficult one to make, said Thad McIlroy, a publishing technology analyst and consultant.
“There’s endless justifiable concerns about whether you are giving away the baby and the bathwater by doing that licensing, and so there’s a lot of mixed emotion about whether this is a good thing or not,” McIlroy said.
Still, just like many businesses, publishers are eager to field test AI tools. AI could, for instance, help them sort through, analyze and provide editorial feedback on the avalanche of manuscripts they receive. “But as of today, if an author heard about that happening, they would be exceptionally upset,” Friedman said.
“Publishers are recognizing that this is a tool that’s going to make them more efficient, it’s going to allow them to do more work with less labor, and it’s going to make them sell more books,” McIlroy said. “And so they’re faced with the dilemma of how many tools do they incorporate before word gets out.”