Many companies struggle to self-host and scale AI models because of complex infrastructure requirements and high operational costs, while concerns about data privacy and vendor lock-in make relying on external providers unattractive. Doubleword, formerly known as TitanML, addresses these challenges with a unified, self-hosted inference platform that simplifies deploying and managing AI models on any hardware. This enables organisations to focus on building AI-driven applications rather than managing infrastructure.
Today, Doubleword, the leading self-hosted inference platform for businesses, announced a $12 million Series A funding round led by Dawn Capital. “With this funding, we’ll continue to grow our team globally and invest in our platform to solve even more of the inference problem for our customers,” said Meryem Arik, Co-Founder and CEO of Doubleword.
The AI inference market is experiencing rapid expansion, projected to grow from $106.15 billion in 2025 to $254.98 billion by 2030. While model training has received significant attention, the inference stage, where AI models deliver practical value, has emerged as the critical bottleneck for enterprise deployment.
Doubleword tackles one of the biggest barriers to large-scale enterprise AI adoption: the complexity of self-hosted inference. Inference is where AI delivers real-world value, transforming models into business outcomes through tasks like answering questions and generating images.
As AI adoption grows, inference has become a mission-critical capability that enterprises must own and control, driving a shift toward self-hosted solutions. However, self-hosting requires building and maintaining high-performance, scalable inference infrastructure. Consequently, many teams are caught in an endless cycle of assembling tools, recruiting specialised talent, and continuously updating their infrastructure. This is where Doubleword steps in.
Meryem Arik, Jamie Dborin, and Fergus Finn founded Doubleword. All three have physics backgrounds, with Arik studying theoretical physics and philosophy at the University of Oxford. The company was established in London, UK, initially under the name TitanML.
What makes Doubleword’s story compelling is its academic research roots. Dborin and Finn, both postdoctoral researchers at UCL, discovered that techniques from their quantum machine learning model compression work could significantly improve AI inference performance. Arik, with her entrepreneurial family background and finance expertise, brought strategic vision to the team. Together, they anticipated that enterprises would face greater challenges in deploying and running AI models efficiently at scale rather than in training them.
Their foresight was validated when ChatGPT launched in 2022, igniting global AI interest and highlighting the urgent need for robust, scalable inference solutions. “Everyone was focusing on training and how expensive it was to train these LLMs. At the time, we thought the bigger challenge wouldn’t be training the models but implementing them and running them,” Arik explained.
The founders created Doubleword to bridge the gap between cutting-edge AI research, particularly in efficient inference optimisation and compression, and the practical needs of enterprises. Despite AI advances, businesses struggled to deploy and manage LLMs efficiently at scale due to infrastructure complexity and operational barriers. Doubleword emerged to make self-hosted AI inference effortless, enabling organisations to deploy any AI model securely and efficiently while focusing on building applications rather than infrastructure.
“Enterprises creating specific business-critical AI would gladly self-host, if ‘expertise’ and ‘cost’ didn’t sound like double trouble,” said Florian Douetteau, CEO at Dataiku. “Doubleword flips the script, making self-hosting effortless and reshaping the market for enterprise customers.”
Doubleword’s flagship product, initially launched as Titan Takeoff, has shown impressive technical capabilities. The company has achieved up to 90% reductions in compute costs and 20x latency improvements within hours of deployment. A notable achievement was the real-time deployment of the state-of-the-art Falcon LLM on a commodity CPU — a feat that earned significant industry recognition.
Doubleword aims to help enterprises own and control their AI by solving the inference problem. The company has expanded into the US and secured partnerships with Snowflake and Dataiku. The Dataiku collaboration is particularly significant, integrating Doubleword’s self-hosted inference stack with Dataiku’s LLM Mesh to enable organisations to use generative AI while maintaining data privacy and security.
Doubleword’s backers include K5 Tokyo Black, as well as prominent AI entrepreneurs such as Hugging Face CEO Clément Delangue and Dataiku CEO Florian Douetteau, who joined as angel investors.
Doubleword is purpose-built for enterprises. Its end-to-end solution enables organisations to self-host AI models, whether open-source, proprietary, or fine-tuned, without building, maintaining, or optimising complex infrastructure. Working with Doubleword gives enterprises a production-ready, future-proofed self-hosted inference platform, freeing them to focus on building applications rather than managing infrastructure.
Doubleword’s success partly stems from its strategic use of accelerator programs, including Grow London, London and Partners, Accenture’s Fintech Innovation Lab, and Conception X Programme. Arik particularly values Intel Ignite: “Out of all of them, Intel Ignite has been a fantastic experience. Partly because of how hands-on they are, and because Intel backs them, you have this almost unlimited depth of resource that you can pull up on, and it’s also deeptech focused.”
Haakon Overli, General Partner at Dawn Capital, said: “Doubleword is the most exciting startup in this space, and we’re extremely excited to support Meryem, Jamie, Fergus and the team as they take the company to the next level. The team has a market-leading product and has proven they can flawlessly execute and deliver for global customers. They are scaling a product that businesses need at the right time, with the right expertise.”
Meryem Arik, Co-Founder and CEO of Doubleword, said: “Our customers want to build AI-powered applications, not AI infrastructure. We eliminate the heavy lifting of inference at scale so they can go from idea to production faster, without raising technical debt. We ensure that our customers can deploy any AI model with a single click, while always having the latest models and hardware supported – and without being wedded to a single model provider.”
Secretary of State for Science, Innovation, and Technology Peter Kyle said: “AI will help us to deliver growth for our economy and new opportunities for people up and down the country, so it’s vital businesses have the confidence to adopt and realise its potential. Doubleword’s work is helping set the standard for how companies can do exactly that – adopting AI quickly and efficiently so they can realise their ambitions and allow their workers and customers to thrive in the age of AI.”
“This is yet another illustration not just of how British-born tech expertise is tapping into AI to help give businesses the world over a unique point of difference, but in the steps we’ve taken to make our tech sector a true global magnet for innovation and investment.”
As enterprises worldwide grapple with the challenges of deploying AI at scale, Doubleword stands at the intersection of cutting-edge technology and practical business needs. The company addresses a fundamental pain point that has hindered AI adoption by focusing on the often-overlooked but critical inference layer.