LLMs Archives - High-Performance Computing News Analysis | insideHPC
https://insidehpc.com/tag/llms/
At the Convergence of HPC, AI and Quantum

Oriole Networks Raises $22M for Photonics to Cut LLM Energy Use https://insidehpc.com/2024/10/oriole-networks-raises-22m-for-photonics-to-cut-llm-energy-use/ Mon, 21 Oct 2024

London, 21st October: Oriole Networks – a company using light to train Large Language Models with low energy consumption – has raised an additional $22 million from investors to scale its “super-brain” solution.  The round was led by Plural with all existing investors – UCL Technology Fund, XTX Ventures, Clean Growth Fund, and Dorilton Ventures – reinvesting. Oriole Networks addresses […]

HPC News Bytes 20241014: AMD Rollout, Foxconn’s Massive AI HPC, AI Drives Nobels, Are LLMs Intelligent? https://insidehpc.com/2024/10/hpc-news-bytes-20241014-amd-rollout-foxconns-massive-ai-hpc-ai-drives-nobels-are-llms-intelligent/ Mon, 14 Oct 2024

A good mid-October morn to you! Here’s a brief (6:30) run-through of developments from the world of HPC-AI, including AMD’s product rollout, Foxconn’s big Blackwell AI HPC in Taiwan, AI for science driving Nobel Prizes, and Meta AI guru’s AGI skepticism.

Cerebras Claims Fastest AI Inference https://insidehpc.com/2024/08/cerebras-claims-fastest-ai-inference/ Tue, 27 Aug 2024

AI compute company Cerebras Systems today announced what it said is the fastest AI inference solution. Cerebras Inference delivers 1,800 tokens per second for Llama 3.1 8B and 450 tokens per second for Llama 3.1 70B, according to the company, making it 20 times faster than GPU-based solutions in hyperscale clouds.

NVIDIA and Google DeepMind Collaborate on LLMs https://insidehpc.com/2024/05/nvidia-and-google-deepmind-collaborate-on-llms/ Wed, 15 May 2024

NVIDIA and Google today announced three new collaborations at Google I/O ’24, intended to make it easier for developers to create AI-powered applications with world-class performance. Using TensorRT-LLM, NVIDIA is working with Google to optimize two new models Google introduced at the event: Gemma 2 and PaliGemma. These models are built from the same research and […]

Amazon Adds $2.75B to Stake in GenAI Startup Anthropic https://insidehpc.com/2024/03/amazon-adds-2-75b-to-stake-in-genai-startup-anthropic/ Wed, 27 Mar 2024

Amazon announced it has made its biggest-ever investment, $2.75 billion, in OpenAI/ChatGPT competitor Anthropic, another indication that the generative AI phenomenon continues to heat up. Today’s news follows an earlier $1.25 billion investment announced last September and brings Amazon’s total investment in Anthropic to $4 billion. “We have a notable history with […]

Oriole Networks Raises £10m for Faster LLM Training https://insidehpc.com/2024/03/oriole-networks-raises-10m-for-faster-llm-training/ Wed, 27 Mar 2024

London, 27 March 2024: Oriole Networks – a startup using light to train LLMs faster with less power – has raised £10 million in seed funding to improve AI performance and adoption, and solve AI’s energy problem. The round, which the company said is one of the UK’s largest seed raises in recent years, was co-led […]

Accelerated HPC for Energy Efficiency with AWS and NVIDIA https://insidehpc.com/2024/02/accelerated-hpc-for-energy-efficiency-with-aws-and-nvidia/ Tue, 20 Feb 2024

Many industries are starting to run HPC in the cloud. Find out how GPU-accelerated compute from AWS and NVIDIA is helping organizations run HPC workloads and AI/ML jobs faster and more energy-efficiently.

Datasaur Launches LLM Lab for ChatGPT and Similar Models https://insidehpc.com/2023/11/datasaur-launches-llm-lab-for-chatgpt-and-similar-models/ Thu, 02 Nov 2023

Oct. 27, 2023: Datasaur, a natural language processing (NLP) data-labeling platform, today launched LLM Lab, an interface designed for data scientists and engineers to build and train custom LLMs like ChatGPT. The product will provide a wide range of features for users to test different foundation models, connect to their own internal documents, […]

Dell and Meta in GenAI Pact with Llama 2 LLMs https://insidehpc.com/2023/10/dell-and-meta-in-genai-pact-with-llama-2-llms/ Tue, 31 Oct 2023

Dell Technologies (NYSE: DELL) is partnering with Meta to make it easy to deploy Meta’s Llama 2 large language models on premises with Dell’s generative AI portfolio of IT infrastructure and client devices. Dell said the collaboration simplifies the on-prem AI environment by combining Dell’s infrastructure portfolio with Llama 2 models...

Federated GPU Infrastructure for AI Workflows https://insidehpc.com/2023/10/federated-gpu-infrastructure-for-ai-workflows/ Mon, 16 Oct 2023

[Sponsored Guest Article] With the explosion of use cases such as generative AI and MLOps driving tremendous demand for the most advanced GPUs and accelerated computing platforms, there’s never been a better time to explore the “as-a-service” model to get started quickly. What could take months of shipping delays and massive CapEx investment can be yours on demand...
