The widespread demand for Nvidia GPUs, which dominated Silicon Valley conversations last summer, has evolved into a significant business opportunity in the AI sector.
This development has led to the emergence of new industry giants. For instance, Lambda, a company specializing in GPU cloud services powered by Nvidia GPUs, recently announced it has secured $320 million in funding, reaching a valuation of $1.5 billion. The company plans to use this investment to grow its AI cloud services.
This announcement followed a report from The Information that Salesforce had made a substantial investment in Together AI, valuing the company at over $1 billion. Furthermore, in December 2023, CoreWeave, another GPU cloud service provider, reached an impressive valuation of $7 billion after a $642 million investment from Fidelity Management and Research Co.
Nvidia’s stock has seen significant growth, and AI startups are eagerly seeking access to Nvidia’s high-performance H100 GPUs for large language model training. This scarcity led former GitHub CEO Nat Friedman to create a marketplace for GPU clusters, offering access to resources like “32 H100s available from 02/14/2024 to 03/31/2024.”
Moreover, Forbes reported that Friedman and his investment partner, Daniel Gross, have built a supercomputer known as the Andromeda Cluster, featuring over 4,000 GPUs. This resource is offered to portfolio companies at a rate below the market price.
Friedman shared with Forbes his role in assisting AI startups with acquiring GPUs, emphasizing the high demand for these resources.
The conversation about Nvidia GPU access continues against the backdrop of a report from The Wall Street Journal. OpenAI’s CEO, Sam Altman, has proposed reshaping the AI chip market, a venture with significant cost and geopolitical implications.
However, not everyone agrees with this approach. Databricks CEO Ali Ghodsi expressed skepticism about the ongoing “GPU hunger games,” predicting a decrease in AI chip prices and a rebalance of supply and demand within the next year. He compared the situation to the early 2000s concerns about internet bandwidth, suggesting a similar resolution could occur for GPUs, potentially alleviating the current scarcity affecting AI startups.
Today, LangChain, a pioneer in advancing large language model (LLM) application development through its open-source platform, announced a successful $25 million Series A funding round, spearheaded by Sequoia Capital. Alongside this financial milestone, the startup announced that LangSmith, its subscription-based LLMOps product, is now generally available.
LangSmith serves as a comprehensive platform, empowering developers to expedite the lifecycle of LLM projects, encompassing everything from initial development and testing phases to final deployment and ongoing monitoring. Initially launched in a limited beta in July of the previous year, LangSmith has rapidly become a critical tool for numerous enterprises, with thousands of companies now using it each month, the company reports.
This strategic launch addresses the growing demand among developers for robust solutions that enhance the development, performance, and reliability of LLM-driven applications in live environments.
What does LangChain’s LangSmith offer? LangChain has been instrumental in providing developers with an essential programming toolkit via its open-source framework. This toolkit facilitates the creation of LLM applications by integrating LLMs through APIs, linking them together, and connecting them to various data sources and tools to achieve diverse objectives. Originating as a hobby project, it swiftly evolved into a fundamental component for over 5,000 LLM applications, spanning internal tools, autonomous agents, games, chat automation, and beyond.
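The chaining pattern described above can be sketched without the framework itself. The following is a minimal, framework-free illustration of the idea (all function names here are invented for the example and are not LangChain’s actual API): each step transforms the output of the previous one, composing prompt construction, a model call, and output parsing into a single pipeline.

```python
def make_prompt(question: str) -> str:
    # Step 1: wrap user input in a prompt template.
    return f"Answer concisely: {question}"

def fake_llm(prompt: str) -> str:
    # Step 2: stand-in for a real LLM API call (normally an HTTP request).
    return f"[model response to: {prompt}]"

def parse_output(raw: str) -> str:
    # Step 3: post-process the raw model output.
    return raw.strip("[]")

def chain(*steps):
    """Compose steps left to right into a single callable."""
    def run(value):
        for step in steps:
            value = step(value)
        return value
    return run

# Chain the three steps into one LLM "application".
pipeline = chain(make_prompt, fake_llm, parse_output)
print(pipeline("What is LangChain?"))
```

Real LangChain applications follow this same shape, swapping the stand-in function for an actual model call and adding connections to data sources and tools.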
However, constructing applications is merely the beginning. Navigating the complexities of bringing an LLM application to market requires overcoming numerous obstacles, a challenge LangSmith addresses. This new paid offering aids developers in debugging, testing, and monitoring their LLM applications.
During the prototyping phase, LangSmith grants developers comprehensive insight into the LLM call sequence, enabling real-time identification and resolution of errors and performance issues. It also supports collaboration with experts to refine app functionality and incorporates both human and AI-assisted evaluations to ensure relevance, accuracy, and sensitivity.
Once a prototype is ready, LangSmith’s integrated platform facilitates deployment via hosted LangServe, offering detailed insights into production dynamics, from cost and latency to anomalies and errors, thereby ensuring the delivery of high-quality, cost-efficient LLM applications.
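The kind of per-call visibility described above (latency, errors, inputs and outputs) can be illustrated with a simple tracing decorator. This is a conceptual sketch of the telemetry an LLMOps tool might collect, not LangSmith’s actual implementation or API:

```python
import functools
import time

# In-memory stand-in for a tracing backend.
TRACE_LOG = []

def traced(fn):
    """Record inputs, outputs, latency, and errors for each call."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        record = {"name": fn.__name__, "input": args, "error": None}
        start = time.perf_counter()
        try:
            record["output"] = fn(*args, **kwargs)
            return record["output"]
        except Exception as exc:
            record["error"] = repr(exc)
            raise
        finally:
            record["latency_s"] = time.perf_counter() - start
            TRACE_LOG.append(record)
    return wrapper

@traced
def call_model(prompt: str) -> str:
    # Stand-in for a real LLM API call.
    return f"response to {prompt!r}"

call_model("hello")
print(TRACE_LOG[0]["name"], TRACE_LOG[0]["error"])
```

Aggregating records like these across a production deployment is what enables the cost, latency, and error dashboards such platforms provide.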
Early Adoption Insights

A recent blog post by Sonya Huang and Romie Boyd from Sequoia revealed that LangSmith has attracted over 70,000 signups since its beta release in July 2023, with more than 5,000 companies now leveraging the technology monthly. Esteemed firms like Rakuten, Elastic, Moody’s, and Retool are among its users.
These companies utilize LangSmith for various purposes, from enabling Elastic to swiftly deploy its AI Assistant for security, to assisting Rakuten in conducting thorough tests and making informed decisions for their Rakuten AI for Business platform. Moody’s benefits from LangSmith for automated evaluations, streamlined debugging, and rapid experimentation, fostering innovation and agility.
As LangSmith transitions to general availability, its influence in the dynamic AI sector is poised to grow significantly.
Looking ahead, LangChain plans to enrich the LangSmith platform with new features such as regression testing, online production data evaluators, improved filtering, conversation support, and simplified application deployment via hosted LangServe. It will also introduce enterprise-level capabilities to enhance administration and security measures.
Following this Series A funding led by Sequoia, LangChain’s total fundraising has reached $35 million, with a prior $10 million round led by Benchmark, as reported by Crunchbase. LangChain stands alongside other platforms like TruEra’s TruLens, W&B Prompts, and Arize’s Phoenix, which also contribute to the evaluation and monitoring of LLM applications.
The Wall Street Journal recently reported that Sam Altman, CEO of OpenAI, aims to secure up to $7 trillion for an ambitious technology initiative designed to significantly enhance global semiconductor capacity, with funding from investors including the United Arab Emirates. This project aims to supercharge AI model capabilities.
However, the environmental ramifications of such a colossal undertaking are undeniable, as noted by Sasha Luccioni, the climate lead and researcher at Hugging Face. Luccioni highlights the staggering demand for natural resources this project would entail. She emphasizes that even with renewable energy, the required volume of water and rare earth minerals would be overwhelming.
For context, Fortune magazine in September 2023 disclosed that AI technologies contributed to a 34% rise in Microsoft’s water usage. Additionally, it was reported that Meta’s Llama 2 model consumed twice the water of its predecessor, and a study found that the training of OpenAI’s GPT-3 used 700,000 liters of water. Meanwhile, supplies of critical minerals such as gallium and germanium have become entangled in the global semiconductor dispute with China.
Luccioni critiques Altman’s approach for prioritizing brute-force scale over more efficient AI development methods, even as some perceive the strategy as visionary.
The shortage of GPUs, crucial for AI development, is a well-discussed issue in Silicon Valley, particularly the scarcity of Nvidia’s H100 GPU, essential for training large language models (LLMs). Meta’s CEO, Mark Zuckerberg, recently outlined the company’s AI ambitions, emphasizing the need for top-tier computing infrastructure, including the acquisition of approximately 350,000 H100 GPUs by year-end, contributing to a total of around 600,000 H100-equivalent units.
Furthermore, Luccioni raises concerns about the lack of transparency regarding the environmental impact of AI, particularly the carbon footprint associated with Nvidia’s product lifecycle. Despite Nvidia’s 2023 Corporate Responsibility Report detailing efforts to monitor and report on the environmental impact of their supply chain, Luccioni argues that overall, companies are becoming less transparent about the environmental costs of AI.
In conclusion, while Altman’s project garners attention and possibly hype akin to Elon Musk’s ventures, Luccioni remains skeptical about its feasibility, questioning the long-term sustainability and transparency of such ambitious technological endeavors in the face of significant environmental concerns.
Bugcrowd, a frontrunner in the field of crowdsourced cybersecurity, today announced $102 million in fresh funding, underscoring the swift expansion and widespread acceptance of using ethical hackers to identify vulnerabilities.
Spearheaded by General Catalyst, the latest investment round brings the company’s valuation to over $1 billion, as reported by insiders familiar with the transaction. This significant infusion of capital is set to boost Bugcrowd’s global expansion efforts, drive innovation in its AI-driven platform, and facilitate strategic acquisitions.
“Bugcrowd attracts customers looking for alternatives to traditional crowdsourced security providers due to dissatisfaction with slow response times, inconsistent and perplexing pricing structures, limited engagement with the community, and inadequate client support options,” Bugcrowd’s CEO, Dave Gerry, told VentureBeat in an exclusive discussion.
The market for crowdsourced security is expected to climb from $90 million in 2019 to over $135 million by 2024, according to industry predictions, as companies aim to supplement their in-house security measures with external expertise. Capitalizing on this trend, Bugcrowd has welcomed over 200 new clients in the last year alone.
Bugcrowd’s offerings, such as Penetration Testing-as-a-Service (PTaaS) and bug bounty initiatives, allow for ongoing scrutiny of client applications, networks, and systems through a crowdsourced approach. This model’s primary benefit is its capacity to harness varied expert knowledge on demand, uncovering flaws and vulnerabilities that conventional testing might miss.
The company’s unique CrowdMatch technology, powered by AI, ensures efficient pairing of researchers with clients based on specific needs, significantly boosting productivity. Additionally, Bugcrowd has integrated smoothly with leading developer platforms like GitHub, facilitating continuous crowdsourced testing throughout the software development lifecycle (SDLC).
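Researcher-to-client pairing of the kind described above can be illustrated with a simple skill-overlap ranking. This is a purely hypothetical sketch; CrowdMatch’s actual matching method is proprietary and not described in the source:

```python
def jaccard(a: set, b: set) -> float:
    """Similarity between two skill sets (0 = disjoint, 1 = identical)."""
    return len(a & b) / len(a | b) if a | b else 0.0

def rank_researchers(required: set, researchers: dict) -> list:
    """Rank researchers by how well their skills cover a client's needs."""
    scores = {name: jaccard(required, skills)
              for name, skills in researchers.items()}
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical client engagement and researcher pool.
needs = {"web", "api", "auth"}
pool = {
    "alice": {"web", "auth", "mobile"},
    "bob": {"hardware", "firmware"},
    "carol": {"web", "api", "auth", "cloud"},
}
print(rank_researchers(needs, pool))
```

A production system would weigh far more signals (track record, availability, program scope), but the core idea of scoring candidates against an engagement’s requirements is the same.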
“Looking ahead to 2024, we aim to significantly outperform our achievements in 2023 and offer customers the leading AI-powered crowdsourced security platform for real-time insights,” Gerry shared with VentureBeat.
With this new capital and its ongoing success, Bugcrowd is well-positioned to challenge the penetration testing and vulnerability management sectors, competing with established security vendors like Cloudflare and CrowdStrike. By merging crowdsourced expertise with AI/ML technologies, Bugcrowd aims to expand its testing reach and offer continual monitoring across the entire attack surface.
The new $102 million round, which values the company at over $1 billion, was led by General Catalyst with participation from previous investors Rally Ventures and Costanoa Ventures. These resources will be used to further refine the platform’s capabilities and pursue rapid global growth.
Microsoft’s journey into integrating artificial intelligence across its product spectrum signifies a new era for the technology giant. Under the leadership of CEO Satya Nadella, Microsoft’s AI assistant, Copilot, has become a focal point of innovation. Originally unveiled nearly a year ago and powered by OpenAI’s GPT-4, Copilot’s ambition is to permeate every facet of the corporate workflow. Nadella’s vision, shared on the social network X (formerly Twitter), is for Copilot to enhance “every role and function” within the tech landscape.
This week marked a significant milestone for Microsoft as it announced the general availability of two tailored versions of Copilot: Copilot for Sales and Copilot for Service.
Previously available in preview to select customers, including Avanade—a joint venture by Accenture and Microsoft—these products have now been extended to a broader user base due to their successful reception.
Copilot for Sales has been instrumental in transforming the sales process. According to a blog post by Emily He, Microsoft’s corporate vice president of business applications marketing, Avanade has leveraged this technology for tasks such as updating CRM records from Outlook, summarizing email threads, and drafting emails. This suite of AI capabilities not only elevates productivity but also ensures clients remain a priority. Feedback from Avanade employees highlighted significant time savings and improved accessibility for neurodiverse team members, showcasing the broad impact of Copilot on the workforce.
Priced at $50 per user per month (when paid annually), Copilot for Sales offers a cost-effective solution for enhancing sales operations. Existing Copilot for Microsoft 365 users can access this sales-specific version at a reduced rate, further democratizing access to advanced AI tools.
The launch of Copilot for Service marks a leap forward in customer service technology. Rather than facing customers directly, as earlier and more error-prone AI chatbots have, Copilot for Service operates behind the scenes, letting customer service personnel access a wealth of organizational knowledge without navigating multiple applications or databases. This seamless integration of information markedly improves efficiency and the overall customer experience.
Microsoft’s He outlined the diverse systems that contain critical customer information, emphasizing the tool’s ability to streamline the access and management of this data. With integrations including Salesforce, ServiceNow, and Zendesk, Copilot for Service is poised to revolutionize how customer support is delivered by harnessing AI to provide relevant information for each unique customer interaction.
Like its counterpart for sales, Copilot for Service is priced at $50 per user per month (annually) or $20 per user per month for existing Copilot for Microsoft 365 users, making advanced AI capabilities accessible to a wide range of businesses.
Microsoft’s introduction of Copilot into its suite of services is just the beginning of a broader AI integration strategy. The company plans to continue expanding Copilot’s capabilities across its product lines, introducing additional features in Microsoft 365 apps that will further enhance productivity and efficiency in the workplace.
The feedback from early adopters like RSM indicates a positive trajectory for Copilot’s implementation in various business processes. These success stories underscore the potential of AI to transform industry standards and operational efficiencies.
Microsoft’s strategic deployment of Copilot across sales and service sectors represents a significant advancement in the application of AI technology. By offering tailored, sector-specific solutions, Microsoft is not only enhancing the productivity of individual users but also setting a new standard for AI integration in the business world. As Copilot continues to evolve and expand its reach, the potential for AI to revolutionize every role and function within the corporate environment becomes increasingly tangible. This initiative, championed by Satya Nadella, underscores Microsoft’s commitment to leading the charge in the AI revolution, promising an exciting future for technology-driven business solutions.