Google Reveals Five Common Company Misconceptions About Generative AI
JAKARTA - Generative artificial intelligence (AI) is currently a hot topic. It is not only the big technology companies: many other businesses have adopted generative AI to boost their brands' performance.
"Since Google Cloud launched its newest generative AI capability, Google has attended a number of meetings with organizations to discuss how they can bring the technology into their business," said Megawaty Khie, Regional Director, Indonesia and Malaysia, Google Cloud on Google blog.
Khie explained that there are at least five misconceptions that Google often encounters when helping companies adopt this technology. Here is the explanation:
One model to handle everything
The idea that a single large language model (LLM) or a single type of generative AI model can cover every use case is a myth. Although the technology market today is largely controlled by a handful of companies, the generative AI landscape, especially for enterprises, will eventually feature thousands of models or more.
The reasons vary, but what is clear is that every industry and department has its own way of expressing its knowledge, so choosing the appropriate AI model is something that deserves attention.
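As a rough illustration of what model selection can look like in practice, here is a minimal routing sketch. The department names, model names, and the route_request() helper are hypothetical examples, not a Google Cloud product or API.

```python
# Illustrative model-routing sketch: map each department or task to a suitable model.
# Model names and the route_request() helper are hypothetical examples only.

MODEL_BY_DEPARTMENT = {
    "legal": "contract-tuned-llm",      # tuned on legal language
    "support": "lightweight-chat-llm",  # cheap, fast model for FAQs
    "research": "long-context-llm",     # handles long technical documents
}

def route_request(department: str) -> str:
    """Pick the model best matched to how this department expresses its knowledge."""
    return MODEL_BY_DEPARTMENT.get(department, "general-purpose-llm")

print(route_request("legal"))    # contract-tuned-llm
print(route_request("finance"))  # general-purpose-llm (fallback)
```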
Bigger is better
Generative AI models consume a lot of computational resources. The potentially high cost of computing is one of the reasons why using the right model for the job is so important: the bigger the model, the more each query costs.
Google recommends that companies be more selective in deciding how much "IQ" a model actually needs for a given use case, so they do not spend money unnecessarily.
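As a rough cost illustration, assuming hypothetical per-token prices rather than actual Google Cloud rates, the gap between routing every query to a large model and matching the model to the task can be estimated like this:

```python
# Cost sketch with hypothetical per-1K-token prices (not real Google Cloud rates).
PRICE_PER_1K_TOKENS = {
    "small_model": 0.0005,  # assumed price for a lightweight model
    "large_model": 0.03,    # assumed price for a frontier-scale model
}

def monthly_cost(model: str, queries_per_day: int, tokens_per_query: int, days: int = 30) -> float:
    """Estimate monthly spend if all queries go to one model."""
    total_tokens = queries_per_day * tokens_per_query * days
    return total_tokens / 1000 * PRICE_PER_1K_TOKENS[model]

# Example: 10,000 simple FAQ-style queries per day, about 500 tokens each.
print(monthly_cost("large_model", 10_000, 500))  # 4500.0
print(monthly_cost("small_model", 10_000, 500))  # 75.0
```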
Only me and my bot
Just as the earlier "bring your own device" and "bring your own app" movements raised concerns about "shadow IT", several financial institutions have blocked access to publicly available generative AI, fearing the models could leak proprietary information.
"Say a bank is exploring mergers for large industry clients, and someone in the mergers and acquisitions (M&A) department asks a public model: 'What are good takeover targets for Company XYZ?' If that information becomes part of the public model's training data, the service could be trained to answer that question for anyone. By default, Google Cloud AI services do not use private data in this way," explained Khie.
Most companies worry about the security of the questions they pose to AI systems, the content the AI is trained on, and the results it produces.
I can always trust my bot
Accuracy and reliability are among the biggest concerns with this new technology. The algorithms are designed to always produce some answer, and in some cases a generative AI model can produce incorrect answers.
According to Google, it is very important for companies to use models and architectures that are grounded in their own data. Most public generative AI models ignore a company's data requirements, which can be dangerous, especially in regulated industries.
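One common way to ground answers in a company's own data is retrieval: fetch relevant internal documents and pass them to the model as context. The sketch below uses hypothetical placeholder helpers (search_internal_docs, call_llm), not any specific Google Cloud API.

```python
# Minimal retrieval-grounding sketch; the helpers are placeholders, not Google Cloud APIs.

def search_internal_docs(query: str, top_k: int = 3) -> list[str]:
    """Placeholder: in practice, a search over the company's own indexed documents."""
    return ["<relevant internal snippet 1>", "<relevant internal snippet 2>"]

def call_llm(prompt: str) -> str:
    """Placeholder: send the prompt to whichever generative model the company has approved."""
    return "<model answer based on the supplied context>"

def grounded_answer(question: str) -> str:
    """Build a prompt that restricts the model to the retrieved company context."""
    snippets = search_internal_docs(question)
    prompt = (
        "Answer using ONLY the context below. If the context does not contain "
        "the answer, say you do not know.\n\nContext:\n"
        + "\n---\n".join(snippets)
        + "\n\nQuestion: " + question
    )
    return call_llm(prompt)

print(grounded_answer("What does our data retention policy say?"))
```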
Ask my bot about anything
Some business leaders are increasingly interested in feeding all of their information into LLMs so the models can answer any question, whether at the organizational level or the global level.
Once a company has thought through how to keep its information private and factual, it quickly realizes the next step: how to manage who can ask the model questions, and at what level?
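A minimal sketch of what that next step can look like: an access layer in front of the model checks who is asking and which data tiers they may query. The roles, data tiers, and ask_model() helper below are hypothetical examples, not a description of any Google Cloud feature.

```python
# Hypothetical access-control layer in front of an internal LLM endpoint.

ROLE_ALLOWED_TIERS = {
    "employee": {"public", "internal"},
    "finance": {"public", "internal", "financial"},
    "executive": {"public", "internal", "financial", "strategic"},
}

def ask_model(user_role: str, data_tier: str, question: str) -> str:
    """Forward the question to the model only if the role may query this data tier."""
    allowed = ROLE_ALLOWED_TIERS.get(user_role, set())
    if data_tier not in allowed:
        return "Access denied: your role cannot query this data tier."
    # Only now is the question sent on, scoped to the permitted tier.
    return f"[model answer using only '{data_tier}' data] {question}"

print(ask_model("employee", "financial", "What are our M&A targets?"))
# -> Access denied: your role cannot query this data tier.
```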