What is Generative AI?
Generative AI (Artificial Intelligence) is an AI technology capable of generating new content such as text, images, audio, and video based on the data it has learned from.
This technology uses large machine learning models to create outputs that resemble human-made content.
The Evolution of Artificial Intelligence:
- Artificial Intelligence: the overarching field of study concerned with creating intelligent machines
- Machine Learning: a branch of AI (built on optimization) that focuses on creating intelligent machines that learn from data
- Deep Learning: a subset of Machine Learning methods based on Artificial Neural Networks (ANNs)
- Generative AI: a type of ANN-based model that generates new data similar to the data it was trained on
Generative AI’s impact on productivity could add trillions of dollars in value to the global economy. Recent research estimates that generative AI could add the equivalent of $2.6 trillion to $4.4 trillion annually across the 63 use cases analyzed; by comparison, the United Kingdom’s entire GDP in 2021 was $3.1 trillion. This would increase the impact of all artificial intelligence by 15 to 40 percent. The estimate would roughly double if the impact of embedding generative AI into software currently used for other tasks beyond those use cases is included.
Generative AI Assistant (GAIA) Web App
The GAIA Web App offers advanced Gen AI capabilities to internal GSIT users.
All Gen AI capabilities are piloted in the GAIA Web App before being implemented bank-wide in the PENTA chatbot.
Personal Gen AI Assistant (PENTA)
PENTA (Personal GenAI Assistant) is a Generative AI-based chatbot on mybcaportal that can increase your productivity at work. All features implemented in PENTA are validated in the GAIA Web App first. PENTA was implemented in August 2024.
Functions
- Drafting: reports, email templates, translation, typo checking
- Code Assistant: code debugging, refactoring, optimization
- Query Assistant: SQL query optimization, Natural Language Query
- Testing Helper: automatic unit test generation, test case generation (see the sketch after this list)
- Data Analysis: generating insights from data, summarizing data
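These functions are all exercised through a chat-style prompt. As a rough, hedged illustration of the Testing Helper function, the sketch below sends a unit-test-generation request to a hypothetical OpenAI-style chat-completions endpoint; the URL, model name, environment variable, and response shape are assumptions for illustration only, not the actual GAIA or PENTA interface.

```python
# Minimal sketch: asking a Gen AI assistant to generate unit tests for a function.
# The endpoint URL, model name, and token are hypothetical placeholders,
# not the real GAIA/PENTA interface. Response shape assumes an OpenAI-style API.
import os
import requests

FUNCTION_UNDER_TEST = '''
def transfer_fee(amount: float) -> float:
    """Flat fee of 2,500 for transfers below 100,000, otherwise free."""
    return 2500.0 if amount < 100_000 else 0.0
'''

payload = {
    "model": "example-llm",  # placeholder model name
    "messages": [
        {"role": "system",
         "content": "You are a testing assistant. Reply with pytest code only."},
        {"role": "user",
         "content": "Generate unit tests covering the edge cases of this "
                    "function:\n" + FUNCTION_UNDER_TEST},
    ],
}

response = requests.post(
    "https://genai.example.internal/v1/chat/completions",  # hypothetical endpoint
    headers={"Authorization": f"Bearer {os.environ.get('GENAI_API_KEY', '')}"},
    json=payload,
    timeout=60,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```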
Risks and mitigations for Generative AI
Hallucination
Generative AI models are not infallible and may occasionally make errors.
However, these errors often appear authentic and human-like, a phenomenon commonly referred to as “hallucinations”.
- Provide context and leverage prompt engineering to generate context-aware answers (a minimal sketch follows below)
- Keep humans in the loop for AI oversight
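As a hedged illustration of the first mitigation, the sketch below builds a context-grounded prompt: retrieved reference text is placed in the prompt and the model is instructed to answer only from that text or admit it does not know. The helper name and the example documents are hypothetical.

```python
# Minimal sketch of prompt engineering to reduce hallucination:
# ground the model in supplied context and tell it to refuse otherwise.
# The helper name and context documents are hypothetical examples.

def build_grounded_prompt(question: str, context_docs: list[str]) -> str:
    context_block = "\n\n".join(f"[Doc {i + 1}] {doc}" for i, doc in enumerate(context_docs))
    return (
        "Answer the question using ONLY the context below. "
        "If the answer is not in the context, reply 'I don't know.'\n\n"
        f"Context:\n{context_block}\n\n"
        f"Question: {question}\nAnswer:"
    )

docs = [
    "PENTA is a Generative AI-based chatbot on mybcaportal.",
    "All PENTA features are validated in the GAIA Web App first.",
]
print(build_grounded_prompt("Where are PENTA features validated first?", docs))
```

Even with grounding, the second mitigation still applies: a human should review answers before they are acted on.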
Discrimination and bias
If the data used to train generative AI models contains biased information, stereotypes, or discriminatory patterns, the model will learn from and perpetuate these biases in its outputs.
- Mitigate bias through techniques such as Reinforcement Learning from Human Feedback (RLHF), where the model is taught to avoid biased responses
- Keep humans in the loop to mitigate any potentially biased outcomes of the model (a simplified sketch follows)
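RLHF itself is a training-time process and is too involved for a short example, so the sketch below only illustrates the second bullet: routing outputs that mention protected attributes to a human reviewer. The trigger terms, queue, and function names are hypothetical and deliberately simplistic.

```python
# Simplified sketch of human-in-the-loop review for potentially biased outputs.
# RLHF itself happens at training time; this only illustrates routing suspect
# generations to a reviewer. The trigger terms and queue are hypothetical.

REVIEW_TRIGGERS = {"gender", "religion", "ethnicity", "age", "nationality"}

def needs_human_review(generated_text: str) -> bool:
    """Flag outputs that mention protected attributes for manual review."""
    lowered = generated_text.lower()
    return any(term in lowered for term in REVIEW_TRIGGERS)

review_queue: list[str] = []

for output in ["Approve the loan based on income history.",
               "Reject applicants of a certain nationality."]:
    if needs_human_review(output):
        review_queue.append(output)   # a human reviewer decides before release
    else:
        print("Released:", output)

print("Queued for review:", review_queue)
```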
Copyright infringement
Large volumes of data, including data that may be copyrighted, are required for the development of generative AI models. Developers of generative AI models may violate intellectual property regulations if they are found to be using copyrighted data without permission.
- Use foundation models that comply with copyright laws
- Check the relevant license(s) when using datasets for training and testing of generative AI models and ensure that the relevant copyright laws are adhered to (a minimal example follows)
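As a minimal, hedged example of the license check, the sketch below screens dataset metadata against an allowlist of licenses before the data is used; the license identifiers and dataset records are hypothetical.

```python
# Minimal sketch: screening dataset metadata against an allowlist of licenses
# before using the data for training or testing. License names and dataset
# records are hypothetical examples.

ALLOWED_LICENSES = {"CC0-1.0", "CC-BY-4.0", "MIT", "Apache-2.0"}

datasets = [
    {"name": "internal-faq-corpus", "license": "proprietary-internal"},
    {"name": "public-docs-dump", "license": "CC-BY-4.0"},
    {"name": "scraped-news-articles", "license": "unknown"},
]

for ds in datasets:
    if ds["license"] in ALLOWED_LICENSES:
        print(f"OK to use: {ds['name']} ({ds['license']})")
    else:
        # Anything not on the allowlist goes to legal review instead of training.
        print(f"Needs legal review: {ds['name']} ({ds['license']})")
```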
Confidentiality and data privacy
Generative AI systems tend to have the ability to reproduce parts of their training data, and users may be able to reconstruct that training data through certain prompts, leading to infringement of data privacy and confidentiality clauses.
- Use role-based access controls or limit AI network access
- Omit potentially sensitive or confidential information from datasets, or use data anonymization techniques where applicable (see the sketch below)
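The sketch below is a minimal example of such anonymization, assuming simple regex patterns for e-mail addresses and account-number-like digit runs; a production redaction pipeline would need a much fuller ruleset.

```python
# Minimal sketch of data anonymization before text reaches a Gen AI model:
# mask e-mail addresses and long digit sequences (e.g. account numbers).
# The patterns are illustrative; production redaction needs a fuller ruleset.
import re

EMAIL_PATTERN = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")
ACCOUNT_PATTERN = re.compile(r"\b\d{8,16}\b")

def anonymize(text: str) -> str:
    text = EMAIL_PATTERN.sub("[EMAIL]", text)
    text = ACCOUNT_PATTERN.sub("[ACCOUNT_NO]", text)
    return text

sample = "Customer budi@example.com reported an issue with account 1234567890."
print(anonymize(sample))
# -> "Customer [EMAIL] reported an issue with account [ACCOUNT_NO]."
```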
Deepfakes and fraud
Generative AI systems' ability to generate realistic content can be used for convincing impersonations. Such fraudulent impersonations can be used to deceive others through scams or to trick them into revealing confidential information.
- Implement authentication procedures to differentiate synthetically generated content from authentic content (a simplified provenance check is sketched below)
- Conduct regular audits and checks to ensure that generative AI systems are not being misused for fraudulent purposes
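One way to authenticate official content is a provenance tag. The sketch below, assuming a shared HMAC signing key managed elsewhere, signs official messages and lets consumers verify the tag before trusting them; the key handling and tag format are deliberately minimal placeholders.

```python
# Simplified sketch of content provenance: official content is published with
# an HMAC tag, and consumers can verify the tag before trusting the content.
# The key handling and tag format are hypothetical and deliberately minimal.
import hashlib
import hmac

SIGNING_KEY = b"replace-with-a-managed-secret"  # placeholder key for illustration

def sign_content(content: bytes) -> str:
    return hmac.new(SIGNING_KEY, content, hashlib.sha256).hexdigest()

def verify_content(content: bytes, tag: str) -> bool:
    expected = hmac.new(SIGNING_KEY, content, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, tag)

announcement = b"Official notice: system maintenance on Saturday."
tag = sign_content(announcement)

print(verify_content(announcement, tag))                      # True: authentic
print(verify_content(b"Send your OTP to this number.", tag))  # False: unverified
```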
Disinformation
Generative AI can be used to generate false information at scale, leading to mass disinformation and a loss of trust in information sources.
- Implement fact-checking mechanisms, which could involve a combination of human moderation and AI techniques to detect potential disinformation in generated content (an illustrative sketch follows)
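The sketch below is a highly simplified illustration of such a gate: generated statements with little word overlap against a trusted reference set are queued for human moderation. Real fact-checking would use retrieval and dedicated verification models; the facts, threshold, and claims here are hypothetical.

```python
# Highly simplified sketch of a fact-checking gate: generated statements with
# little word overlap against a trusted reference set are queued for human
# moderation. Real systems would use retrieval and NLI models, not overlap.

TRUSTED_FACTS = [
    "PENTA was implemented in August 2024.",
    "GAIA Web App pilots Gen AI capabilities for internal GSIT users.",
]

def is_supported(statement: str, threshold: float = 0.5) -> bool:
    words = set(statement.lower().split())
    for fact in TRUSTED_FACTS:
        overlap = len(words & set(fact.lower().split())) / max(len(words), 1)
        if overlap >= threshold:
            return True
    return False

for claim in ["PENTA was implemented in August 2024.",
              "PENTA can approve loans automatically."]:
    status = "auto-approved" if is_supported(claim) else "sent to human moderator"
    print(f"{claim} -> {status}")
```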