
San Francisco startup MaintainX raises $50 million for industrial AI.

MaintainX, a San Francisco startup that makes industrial maintenance software, quietly announced on Wednesday that it has secured $50 million in a Series C funding round. The round was led by Bain Capital Ventures (BCV), a prominent investor in growth-stage enterprises, and values the company at $1 billion, making MaintainX the latest addition to this year's unicorn club. The capital will fuel MaintainX's research and development, bolster its artificial intelligence capabilities, and broaden its customer base.

Established in 2018, MaintainX offers a mobile platform that empowers frontline workers to engage with their supervisors, monitor tasks, access manuals and checklists, and promptly report issues. Additionally, the platform aggregates and assesses data from diverse sources, including sensors, equipment usage, and parts inventory, to provide valuable insights and recommendations aimed at enhancing operational efficiency and reducing downtime.

Currently, MaintainX’s software is embraced by over 6,500 clients across a wide spectrum of industries encompassing manufacturing, energy, hospitality, and food and beverage. Notable customers include Duracell, Marriott, Volvo, AB InBev, McDonald’s, and the U.S. Department of Agriculture.

In an exclusive conversation with VentureBeat, Chris Turlica, the CEO and co-founder of MaintainX, articulated the company’s vision of forging a “zero-downtime future” for industrial operations. This ambitious goal hinges on leveraging artificial intelligence and real-time data sets to preemptively identify and mitigate potential breakdowns and operational challenges.

Turlica also underscored the company’s commitment to catering to the preferences of a new generation of frontline professionals and procurement managers who prioritize user-friendly and intuitive software solutions. He noted that the industrial sector is undergoing a transformation where the decision-makers are no longer traditional IT professionals but instead, frontline workers, plant managers, and maintenance supervisors who seek software that is easy to deploy, integrate, and use.

Merritt Hummer, a partner at Bain Capital Ventures who spearheaded the investment and joined MaintainX’s board of directors, expressed admiration for the company’s growth, product quality, and high customer satisfaction. She characterized MaintainX as one of the standout emerging growth-stage firms, citing the exceptional founding team, expansive market potential, and impressive track record as driving factors behind her enthusiasm.

Hummer also highlighted the potential synergy between Bain Capital Ventures, with its portfolio of more than 400 companies, and MaintainX. She emphasized the opportunity to introduce MaintainX’s software to their existing industrial and manufacturing companies, potentially facilitating further growth.

The industrial maintenance sector, estimated at over $49 billion globally by Allied Market Research, is witnessing a rising demand for software solutions that optimize workflows, reduce costs, and ensure compliance. The disruptions caused by the COVID-19 pandemic accelerated the adoption of digital technologies in the sector, emphasizing the importance of such solutions.

MaintainX, which competes with players like UpKeep, Fiix, and eMaint, has experienced remarkable revenue growth, surging 13-fold since its previous funding round in 2021, when it secured $39 million in a Series B round led by Bessemer Venture Partners. This substantial funding infusion underscores investor confidence in MaintainX's mission and its ability to leverage artificial intelligence to transform industrial maintenance. The participation of prominent business figures such as former GE CEO Jeff Immelt is further evidence of MaintainX's traction.

In a landscape shaped by artificial intelligence and big data, MaintainX’s focus on these areas positions the company favorably in the marketplace. With its mission to reduce operational downtime through AI, MaintainX is poised to disrupt the industrial maintenance sector in a significant manner.

Meta AI introduces ‘Seamless’ translator enabling instant cross-language communication.

Meta AI researchers unveiled their latest achievement on Thursday, introducing a groundbreaking suite of artificial intelligence models known as “Seamless Communication.” These models are designed to facilitate more natural and authentic cross-language communication, effectively bringing the concept of a Universal Speech Translator into reality. This week, the research team made these models available to the public, alongside comprehensive research papers and associated data.

The flagship model, aptly named "Seamless," combines the capabilities of three other models (SeamlessExpressive, SeamlessStreaming, and SeamlessM4T v2) into a unified system. As outlined in the research paper, Seamless marks a significant milestone as "the first publicly accessible system that enables expressive cross-lingual communication in real-time."

Understanding How Seamless Operates as a Universal Real-time Translator

The Seamless translator signifies a pioneering advancement in the realm of AI-assisted communication across borders. It harnesses the power of three sophisticated neural network models to facilitate real-time translation across over 100 spoken and written languages, all while preserving the speaker's vocal style, emotions, and prosody.

SeamlessExpressive’s primary focus is on safeguarding the speaker’s vocal style and emotional subtleties during language translation. As articulated in the research paper, “Translations should capture the nuances of human expression. While existing translation tools excel at conveying the content of a conversation, they typically rely on monotonous, robotic text-to-speech systems for their output.”

SeamlessStreaming takes the lead in offering nearly instantaneous translations, with a mere two seconds of latency. According to the researchers, it stands as the “first massively multilingual model” to deliver such rapid translation speeds across almost 100 spoken and written languages.

The third model, SeamlessM4T v2, serves as the cornerstone for the other two. It is an upgraded iteration of the original SeamlessM4T model released earlier this year, with a new architecture that enhances the "consistency between text and speech output," as highlighted in the research paper.

“In summary, Seamless offers us a crucial glimpse into the technical underpinnings essential for transforming the Universal Speech Translator from a mere science fiction concept into a tangible real-world technology,” noted the researchers.

The Potential to Revolutionize Global Communication

These models’ capabilities have the potential to usher in new voice-based communication experiences, ranging from real-time multilingual conversations using smart glasses to automatically dubbed videos and podcasts. The researchers also envision these models breaking down language barriers for immigrants and others grappling with communication challenges.

The research paper states, “By openly sharing our work, we aspire to empower researchers and developers to extend the impact of our contributions by crafting technologies aimed at bridging multilingual connections in an increasingly interconnected and interdependent world.”

Nonetheless, the researchers acknowledge the potential misuse of this technology for voice phishing scams, deep fakes, and other harmful purposes. To ensure safety and responsible usage of the models, they have implemented various measures, including audio watermarking and novel techniques to minimize problematic outputs.

Models Now Publicly Available on Hugging Face

In alignment with Meta's commitment to open research and collaboration, the Seamless Communication models have been made publicly available on Hugging Face and GitHub. The collection encompasses the Seamless, SeamlessExpressive, SeamlessStreaming, and SeamlessM4T v2 models, accompanied by relevant metadata.

Meta’s objective in providing these state-of-the-art natural language processing models to the public is to foster collaboration among fellow researchers and developers, allowing them to build upon and expand this work in order to connect people across diverse languages and cultures. This release underscores Meta’s leadership in the domain of open source AI, offering a valuable new resource for the global research community.

“In conclusion,” the researchers affirm, “the multifaceted experiences that Seamless may enable have the potential to revolutionize the landscape of machine-assisted cross-lingual communication.”

Reportedly, Adobe has acquired the text-to-video AI platform known as Rephrase.

With the five-day power struggle at OpenAI concluding in Sam Altman's reinstatement, Adobe is gearing up to enhance its generative AI capabilities. According to a report from the Economic Times, the software giant has internally announced its acquisition of Rephrase, a California-based company specializing in text-to-video technology.

While the exact financial details of the deal remain undisclosed, this move is poised to strengthen Adobe’s suite of Creative Cloud products, which have steadily incorporated generative AI improvements over the past year. In particular, Rephrase will enable Adobe to empower its customers to effortlessly produce professional-quality videos from text inputs.

CEO Ashray Malhotra of Rephrase disclosed the acquisition through a LinkedIn post but refrained from explicitly naming Adobe, referring to the acquiring entity as a “leading tech giant.” When pressed for further details, he cited limitations on sharing information at this stage.

What Rephrase brings to the table: Established in 2019 by Ashray Malhotra, Nisheeth Lahoti, and Shivam Mangla, Rephrase offers enterprises access to Rephrase Studio, a platform enabling users to create polished videos featuring digital avatars in mere minutes. The process involves selecting a video template, choosing an avatar along with the desired voice, and adding the necessary content.

Upon initiating the rendering process within Rephrase, the platform automatically combines all elements, synchronizing the script with the chosen avatar. Users can enhance their content’s naturalness through various customization options, such as resizing avatars, altering backgrounds, adjusting pauses between words, or incorporating custom audio.

Over the past four years, Rephrase has amassed over 50,000 customers and secured nearly $14 million in funding from investors including Red Ventures and Lightspeed India. The company is best known for enabling enterprises and influencers to create custom avatars for personalized business videos; the acquisition will bring these capabilities, along with a significant portion of the Rephrase team, into Adobe's fold, bolstering its generative AI video offerings.

Ashley Still, Senior Vice President and General Manager for Adobe Creative Cloud, wrote in the internal memo, "The Rephrase.ai team's expertise in generative AI video and audio technology, and experience building text-to-video generator tools, will extend our generative video capabilities—and enable us to deliver more value to our customers faster—all within our industry-leading creative applications."

When VentureBeat reached out to Adobe for comment, a spokesperson declined to provide additional insights into the development or how Rephrase’s tools will complement Adobe’s product portfolio.

Adobe’s Strong Embrace of AI: In recent months, Adobe has been at the forefront of advancing generative AI with several product updates. It introduced Firefly, an AI engine for image generation, which was integrated across Creative Cloud products like Photoshop. This innovation allowed users to manipulate images by describing changes in plain text.

Furthermore, at its annual Max conference last month, Adobe showcased various experimental generative AI-powered video features, including upscaling videos, changing textures and objects through text prompts, and compositing subjects and scenes from separate videos. While the timeline for the incorporation of these features into future releases remains uncertain, Rephrase’s digital avatar-based capabilities appear to be a promising addition.

Ashray Malhotra expressed his excitement for the future of Generative AI, emphasizing that it’s still in its early stages. Adobe Creative Cloud, known for decades as the dominant platform for digital art and media, currently offers six main products for audio and video-related work: Premiere Pro, After Effects, Audition, Character Animator, Animate, and Media Encoder. These tools are used by both professionals and amateurs to create, edit, and share digital content, leaving a lasting impact on online communities and trends through countless memes, parodies, and viral art.

Nvidia introduces its AI foundry service on Microsoft Azure, featuring the latest Nemotron-3 8B models.

Nvidia is strengthening its collaboration with Microsoft. During the Ignite conference, hosted by the Satya Nadella-led tech giant, Nvidia unveiled an AI foundry service aimed at helping both enterprises and startups build custom AI applications on the Azure cloud. These applications can leverage enterprise data through retrieval-augmented generation (RAG).
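The RAG pattern itself is simple: fetch the enterprise documents most relevant to a query, then prepend them to the prompt the model sees. Below is a minimal toy sketch of that idea; the word-overlap scorer stands in for the vector search a production system would use, and nothing here is an Nvidia API.

```python
# Toy sketch of retrieval-augmented generation (RAG):
# rank documents against a query, keep the top-k,
# and prepend them to the prompt sent to an LLM.

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank docs by naive word overlap with the query (stand-in for vector search)."""
    q_words = set(query.lower().split())
    scored = sorted(docs, key=lambda d: len(q_words & set(d.lower().split())), reverse=True)
    return scored[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Assemble the augmented prompt: retrieved context first, question last."""
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Warranty claims must be filed within 30 days.",
    "The cafeteria opens at 8 a.m.",
    "Warranty repairs are free for registered products.",
]
prompt = build_prompt("How do I file a warranty claim?", docs)
print(prompt)
```

In a real deployment the prompt would then be sent to a model hosted on the service; the value of the pattern is that the model answers from current enterprise data rather than only its training set.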

Jensen Huang, Nvidia’s founder and CEO, highlighted, “Nvidia’s AI foundry service combines our generative AI model technologies, LLM training expertise, and a massive AI factory. We built this service on Microsoft Azure, enabling enterprises worldwide to seamlessly integrate their custom models with Microsoft’s top-tier cloud services.”

In addition, Nvidia introduced new 8-billion-parameter models as part of the foundry service and announced plans to bring its next-gen GPUs to Microsoft Azure in the coming months.

So, how will the AI foundry service benefit Azure users? With Nvidia’s AI foundry service on Azure, cloud-based enterprises will gain access to all the essential components needed to create custom, business-focused generative AI applications in one place. This comprehensive offering includes Nvidia’s AI foundation models, the NeMo framework, and the Nvidia DGX cloud supercomputing service.

Manuvir Das, the VP of enterprise computing at Nvidia, emphasized, “For the first time, this entire process, from hardware to software, is available end to end on Microsoft Azure. Any customer can come and execute the entire enterprise generative AI workflow with Nvidia on Azure. They can procure the necessary technology components right within Azure. Simply put, it’s a collaborative effort between Nvidia and Microsoft.”

To provide enterprises with a wide range of foundation models for use with the foundry service in Azure environments, Nvidia is introducing a new family of Nemotron-3 8B models. These models support the creation of advanced enterprise chat and Q&A applications for sectors like healthcare, telecommunications, and financial services. They come with multilingual capabilities and will be accessible through the Azure AI model catalog, Hugging Face, and the Nvidia NGC catalog.

Among the other foundation models available in the Nvidia catalog are Llama 2 (also coming to the Azure AI catalog), Stable Diffusion XL, and Mistral 7B.

Once users have chosen their preferred model, they can move on to the training and deployment stage for custom applications using Nvidia DGX Cloud and AI Enterprise software, both of which are available through the Azure marketplace. DGX Cloud provides customers with scalable instances and includes the AI Enterprise toolkit, featuring the NeMo framework and Nvidia Triton Inference Server, enhancing Azure’s enterprise-grade AI service for faster LLM customization.

Nvidia noted that this toolkit is also available as a separate product on the marketplace, allowing users to utilize their existing Microsoft Azure Consumption Commitment credits to expedite model development.

Notably, Nvidia recently announced a similar partnership with Oracle, offering eligible enterprises the option to purchase these tools directly from the Oracle Cloud marketplace for training models and deployment on the Oracle Cloud Infrastructure (OCI).

Currently, early users of the foundry service on Azure include major software companies like SAP, Amdocs, and Getty Images. They are testing and building custom AI applications targeting various use cases.

Beyond the generative AI service, Microsoft and Nvidia have expanded their partnership to include the chipmaker’s latest hardware offerings. Microsoft unveiled new NC H100 v5 virtual machines for Azure, the first cloud instances in the industry featuring a pair of PCIe-based H100 GPUs connected via Nvidia NVLink. These machines provide nearly four petaflops of AI compute power and 188GB of faster HBM3 memory.

The Nvidia H100 NVL GPU offers up to 12 times higher performance on GPT-3 175B compared to the previous generation, making it suitable for inference and mainstream training workloads.

Furthermore, Microsoft plans to add the new Nvidia H200 Tensor Core GPU to its Azure fleet in the upcoming year. This GPU offers 141GB of HBM3e memory (1.8 times more than its predecessor) and 4.8 TB/s of peak memory bandwidth (a 1.4 times increase). It is designed for handling large AI workloads, including generative AI training and inference, and provides Azure users with multiple options for AI workloads alongside Microsoft’s new Maia 100 AI accelerator.

To accelerate LLM work on Windows devices, Nvidia announced several updates, including one for TensorRT-LLM for Windows. The update introduces support for new large language models like Mistral 7B and Nemotron-3 8B and delivers five times faster inference performance, making these models run more smoothly on desktops and laptops equipped with GeForce RTX 30 and 40 Series GPUs with at least 8GB of VRAM.

Nvidia also mentioned that TensorRT-LLM for Windows will be compatible with OpenAI’s Chat API through a new wrapper, enabling hundreds of developer projects and applications to run locally on a Windows 11 PC with RTX, rather than relying on cloud-based infrastructure.
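Because the wrapper exposes OpenAI's Chat API shape, existing client code mostly just needs to target a local base URL instead of OpenAI's servers. A hedged sketch of the request such an application would build (the localhost port and model name are illustrative assumptions, not values from Nvidia's announcement):

```python
import json

# The request body uses the same JSON shape as OpenAI's Chat Completions API;
# only the base URL changes, pointing at a local server instead of the cloud.
LOCAL_BASE_URL = "http://localhost:8000/v1/chat/completions"  # illustrative port, not official

payload = {
    "model": "local-model",  # placeholder name; the wrapper serves whatever model is loaded
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize TensorRT-LLM in one sentence."},
    ],
}

body = json.dumps(payload)
print(body)
# To send it, POST `body` with Content-Type: application/json to LOCAL_BASE_URL,
# e.g. with urllib.request or the `requests` library.
```

Keeping the wire format identical is what lets "hundreds of developer projects" switch from cloud to local inference without code changes beyond the endpoint.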

Interplay Raises $45 Million for Its Third Fund, Concentrating on B2B Marketplaces and Specialized Software

Interplay, a venture capital firm headquartered in New York, has closed its third fund at $45 million. Like its two earlier funds, it focuses on early-stage investments, particularly in software sectors like B2B marketplaces and specialized vertical software. We previously covered Interplay in 2022, during a separate fundraise.

Mark Peter Davis, Interplay's founder and managing partner, told TechCrunch that the firm is interested in companies bringing software to markets that had remained un-digitized because the economics were previously unfavorable. According to Davis, the recent trend is a move toward specialized services: newer companies are challenging established broad-spectrum platforms by offering services more finely tuned to specific industries. This strategy has proven successful for Interplay and shapes the investment philosophy of the current fund.

Interplay’s initial fund operated on a small scale, akin to angel investing. However, the second fund marked a shift with external limited partners’ involvement. The third fund, distinct in its approach, attracts institutional investors, including funds of funds, family offices, and founders from Interplay’s own portfolio.

Davis points to three qualities that set Interplay apart: the consistency of its general partners (Davis himself, Kevin Tung, and Mike Rogers, who have invested together for over eight years); the significant value the firm offers relative to its investment size; and its studio model, which incubates and creates companies, enhancing its deal flow.

With the latest fund, Interplay's total assets under management reach $150 million. The firm plans to invest in around 20 companies, allocating $1 million to $2 million per initial investment and reserving capital for follow-ons. Already, 40% of the fund has been deployed, including recent investments in two construction tech firms, OnSiteIQ and Roofr.

2023 presented challenges in fundraising for both companies and venture capital firms. Davis acknowledges the difficult climate, yet praises his team’s achievement against market odds, attributing it to their decade-long dedication.

In recent times, advancing to a Series A funding round has been particularly challenging. Davis agrees that market fluctuations have impacted this stage, but notes that many promising companies are raising capital at what he deems “reasonable valuations.” Despite the lure of higher valuations during the investment surge, Interplay has remained disciplined in its capital allocation, often passing on opportunities reflective of market over-enthusiasm.

Davis finds the current market appealing for investments, as company valuations have realigned with what Interplay considers reasonable. He acknowledges the potential issues for entrepreneurs in cases of overcorrection in valuation but believes that fair valuations set the stage for sustained company success.
