Amazon Expands Its Generative AI Technology for Cloud Services, Introduces Bedrock

JAKARTA - Amazon Web Services (AWS) has expanded its generative artificial intelligence (AI) offerings to make the technology more advanced and more widely available to developers on its cloud platform.

AWS's latest offering is Bedrock, launched in preview as a service that hosts a set of foundation models from the startups AI21 Labs, Anthropic, and Stability AI.

Generative AI is a type of AI that can create new content and ideas, including conversations, stories, images, videos, and music. It is powered by machine learning (ML) models: very large models that are pre-trained on vast amounts of data, commonly referred to as foundation models (FMs).

Furthermore, Bedrock will offer the ability to access a variety of powerful FMs for text and images. Swami Sivasubramanian, AWS's vice president of database, analytics, and machine learning, said customers can easily find the right model for the task they want to accomplish.

"(They can also) get started quickly, privately customize FMs with their own data, and easily integrate and deploy them into their applications using the AWS tools and capabilities they are familiar with," said Sivasubramanian, as quoted from the AWS website on Friday, April 14.

Currently, Bedrock offers large language models (LLMs) capable of processing and producing text, including AI21 Labs' Jurassic-2, which follows natural-language instructions to generate text in Spanish, French, German, Portuguese, Italian, and Dutch, and Anthropic's Claude, which handles a wide range of conversational and text-processing tasks.
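Bedrock was still in preview at the time, so the public interface could differ, but as a rough illustration, here is a minimal sketch of how a developer might call one of these text models through the AWS SDK for Python (boto3) and the Bedrock runtime client. The model ID and request format follow Anthropic's Claude text-completion convention and should be treated as assumptions.

```python
import json
import boto3

# Bedrock runtime client for invoking hosted models; the region is an assumption.
runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

# Claude's text-completion format expects a Human/Assistant style prompt.
# The model ID "anthropic.claude-v2" is assumed here for illustration only.
body = json.dumps({
    "prompt": "\n\nHuman: Explain what a foundation model is in one sentence.\n\nAssistant:",
    "max_tokens_to_sample": 200,
})

response = runtime.invoke_model(
    modelId="anthropic.claude-v2",
    body=body,
    contentType="application/json",
    accept="application/json",
)

# The response body is a stream containing JSON with the generated completion.
result = json.loads(response["body"].read())
print(result["completion"])
```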

Bedrock also provides access to Stability AI's text-to-image models, including Stable Diffusion. One of Bedrock's most important capabilities is how easy it is to customize a model.
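For the image side, the call would target one of Stability AI's models instead. The sketch below assumes the Stable Diffusion XL model ID and a request schema with "text_prompts", "cfg_scale", and "steps" fields; both the ID and the schema are assumptions for illustration.

```python
import base64
import json
import boto3

runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

# Describe the desired image as a text prompt. The model ID and payload
# fields are assumptions; the live Bedrock schema may differ.
body = json.dumps({
    "text_prompts": [{"text": "a watercolor painting of a data center at sunrise"}],
    "cfg_scale": 10,
    "steps": 30,
})

response = runtime.invoke_model(
    modelId="stability.stable-diffusion-xl-v1",
    body=body,
    contentType="application/json",
    accept="application/json",
)

# The generated image comes back base64-encoded; decode it and write a PNG.
payload = json.loads(response["body"].read())
image_bytes = base64.b64decode(payload["artifacts"][0]["base64"])
with open("generated.png", "wb") as f:
    f.write(image_bytes)
```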

Developers simply point Bedrock at a few labeled examples in Amazon S3, and the service can fine-tune the model for a particular task without having to annotate large volumes of data.
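What that workflow might look like in code is sketched below, again as an assumption rather than the announced preview API: a customization job is created with boto3, pointing at labeled training examples in S3. The job name, role ARN, base model ID, and bucket paths are all hypothetical placeholders.

```python
import boto3

bedrock = boto3.client("bedrock", region_name="us-east-1")

# Start a fine-tuning job on a base foundation model using labeled examples
# stored in S3. Every identifier below is a hypothetical placeholder.
bedrock.create_model_customization_job(
    jobName="support-ticket-summarizer",
    customModelName="custom-support-summarizer",
    roleArn="arn:aws:iam::123456789012:role/BedrockCustomizationRole",
    baseModelIdentifier="amazon.titan-text-express-v1",
    trainingDataConfig={"s3Uri": "s3://example-bucket/labeled-examples/train.jsonl"},
    outputDataConfig={"s3Uri": "s3://example-bucket/customization-output/"},
    hyperParameters={"epochCount": "2"},
)
```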

In addition, AWS has released two foundation models of its own under the Titan brand. Developers can build their own generative AI-powered products and services on top of this API and fine-tune the models for specific tasks by providing their own labeled examples.
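Under the same runtime API, a Titan text model would be invoked much like the Claude example above; the model ID and the "inputText"/"textGenerationConfig" payload shape are assumptions for illustration.

```python
import json
import boto3

runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

# Assumed Titan text model ID and request shape; both are illustrative only.
body = json.dumps({
    "inputText": "Write a short product description for a reusable water bottle.",
    "textGenerationConfig": {"maxTokenCount": 150, "temperature": 0.5},
})

response = runtime.invoke_model(
    modelId="amazon.titan-text-express-v1",
    body=body,
    contentType="application/json",
    accept="application/json",
)

result = json.loads(response["body"].read())
print(result["results"][0]["outputText"])
```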

Because this customization happens privately, companies can better protect and secure their data without having to worry about it leaking or being used to train other large language models.

Titan is also designed to detect and remove harmful content in data, reject inappropriate content in user input, and filter model outputs that contain inappropriate material, such as hate speech, profanity, and violence.

The company is also promoting its homegrown AWS Trainium and Inferentia AI chips for training and running these models in its cloud.

Trn1 instances, powered by Trainium, can deliver up to 50 percent savings on training costs compared with other EC2 instances, and are optimized to distribute training across multiple servers connected by 800 Gbps of second-generation Elastic Fabric Adapter (EFA) networking.

Developers can deploy Trn1 instances in UltraClusters that scale up to 30,000 Trainium chips (more than 6 exaflops of compute) located in the same AWS Availability Zone, with petabit-scale networking.
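For context, a single Trainium-backed machine is launched like any other EC2 instance; the sketch below assumes an account with Trn1 quota in a supported region and uses a placeholder AMI ID.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Launch one trn1.32xlarge (Trainium) instance. The AMI ID is a placeholder;
# in practice a Deep Learning AMI with the Neuron SDK would typically be used.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # hypothetical AMI
    InstanceType="trn1.32xlarge",
    MinCount=1,
    MaxCount=1,
)
```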

Finally, the company is also making its CodeWhisperer AI pair-programming tool free to use for individual developers. It has been expanded to support ten new languages, including Go, Kotlin, Rust, PHP, and SQL, in addition to Python, Java, JavaScript, TypeScript, and C#.