JAKARTA - At Google Cloud Next '23, Google Cloud announced a series of product innovations to empower businesses and public sector organizations across Southeast Asia to strengthen their AI infrastructure.

To help organizations in Southeast Asia run AI workloads cost-effectively and at scale, Google Cloud announced major additions to its AI-optimized infrastructure portfolio: Cloud TPU v5e, now available in public preview, and the general availability of A3 VMs powered by NVIDIA H100 GPUs.

Cloud TPU v5e is Google Cloud's most cost-effective, versatile, and scalable AI accelerator to date. Customers can now use a single Cloud Tensor Processing Unit (TPU) platform to run both large-scale AI training and inference.

Cloud TPU v5e delivers up to 2x higher training performance and up to 2.5x higher inference performance for LLMs and generative AI models, enabling more organizations to train and deploy larger, more complex AI models.

Cloud TPU v5e is currently available in public preview in Google Cloud's Las Vegas and Columbus cloud regions. Google plans to expand availability to other regions, including the Google Cloud Singapore region, later this year.

A3 VMs, supercomputers powered by NVIDIA H100 Graphics Processing Units (GPUs), will be generally available next month, allowing organizations to achieve 3x faster training performance compared to A2, the previous generation.

A3 VMs are purpose-built to train and serve LLM and demanding generative AI workloads.

At Google Cloud Next '23, Google Cloud and NVIDIA also announced new integrations to help organizations leverage the same NVIDIA technologies that Google DeepMind and Google Research teams have used over the past two years.
