Text Generation with AWS Bedrock
Generative AI has emerged as a powerful technique for creating original content, enhancing user experiences, and automating complex tasks. Amazon Web Services (AWS) has recently introduced Amazon Bedrock, a fully managed service that empowers developers to harness the potential of foundation models (FMs) from leading AI start-ups and Amazon itself.
Before going further, let's understand the benefit of using a foundation model over a traditional machine learning model. A machine learning model is trained on task-specific data to perform a narrow range of functions. A foundation model is a large-scale machine learning model trained on a broad data set that can be adapted and fine-tuned for a wide variety of applications and downstream tasks.
Some common use cases of generative AI across industries:
Communication: Chatbots & question answering.
Financial Services: Risk management & fraud detection.
Healthcare: Drug development & personalized medicine.
Retail: Optimizing pricing & inventory, flagging products and brand categories.
Technology Hardware: Chip design & robotics.
Energy & Utilities: Predictive maintenance and designing renewable energy sources.
Bedrock provides a user-friendly platform for customers to develop and expand generative AI-powered applications built on foundation models, making them accessible to all developers. Bedrock gives users access to a variety of robust foundation models for both text and images, including the Amazon Titan foundation models. This is made possible through a secure, reliable, and scalable AWS managed service. Bedrock eliminates the need to manage the underlying infrastructure and integrates seamlessly into the AWS service landscape.
AWS Bedrock currently supports four foundation models:
Jurassic-2 from AI21 Labs, a multilingual LLM for generating text, translating languages, and answering questions informatively.
Claude from Anthropic, an LLM for text-processing and conversational tools.
Stable Diffusion from Stability AI, a text-to-image tool for generating unique, realistic, high-quality images, art, logos, and designs.
Amazon Titan, a family of foundation models from Amazon. It includes Titan Text, focused on NLP tasks such as summarization and text generation, and Titan Embeddings, which enhances search accuracy and improves personalized recommendations.
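Before choosing a model, you can discover which foundation models are enabled in your account through Bedrock's `ListFoundationModels` control-plane API. The sketch below groups model IDs by provider; the helper name `by_provider` and the `us-east-1` region are illustrative assumptions, not part of the original post.

```python
def by_provider(model_summaries):
    """Group model IDs by provider name from a ListFoundationModels response."""
    grouped = {}
    for summary in model_summaries:
        provider = summary.get("providerName", "unknown")
        grouped.setdefault(provider, []).append(summary["modelId"])
    return grouped

if __name__ == "__main__":
    # boto3 is imported lazily so the pure helper above can be used offline.
    import boto3

    bedrock = boto3.client("bedrock", region_name="us-east-1")
    summaries = bedrock.list_foundation_models()["modelSummaries"]
    for provider, model_ids in by_provider(summaries).items():
        print(provider, model_ids)
```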
Now we will use AWS serverless capabilities to build a text-generation service and expose it as an API through Amazon API Gateway, which will call Amazon Bedrock using the Boto3 API. Text generation is the task of producing text that is fluent and appears indistinguishable from human-written text; it is also known as natural language generation.
A Lambda function will call the Bedrock API with an input text. The Amazon Titan model processes the request, and Bedrock responds with the generated text.
The Lambda function will be exposed through Amazon API Gateway so it can be consumed by a static page hosted in S3. Before developing the Lambda function, we need to provide a role with a policy that allows calls to Bedrock:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "Statement1",
      "Effect": "Allow",
      "Action": "bedrock:*",
      "Resource": "*"
    }
  ]
}
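The role itself can be created with a couple of boto3 IAM calls: `create_role` with a trust policy that lets Lambda assume the role, then `put_role_policy` to attach the Bedrock policy above as an inline policy. This is a minimal sketch; the role and policy names are illustrative assumptions.

```python
import json

# Trust policy allowing the Lambda service to assume this role.
TRUST_POLICY = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Service": "lambda.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }
    ],
}

# Inline policy from the article allowing Bedrock calls.
BEDROCK_POLICY = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Statement1",
            "Effect": "Allow",
            "Action": "bedrock:*",
            "Resource": "*",
        }
    ],
}

def create_bedrock_lambda_role(iam, role_name="bedrock-lambda-role"):
    """Create the execution role and attach the Bedrock policy; return its ARN."""
    role = iam.create_role(
        RoleName=role_name,
        AssumeRolePolicyDocument=json.dumps(TRUST_POLICY),
    )
    iam.put_role_policy(
        RoleName=role_name,
        PolicyName="bedrock-invoke",
        PolicyDocument=json.dumps(BEDROCK_POLICY),
    )
    return role["Role"]["Arn"]

if __name__ == "__main__":
    import boto3  # imported lazily so the module loads without AWS installed

    print(create_bedrock_lambda_role(boto3.client("iam")))
```

In production you would typically scope `Action` down to `bedrock:InvokeModel` rather than `bedrock:*`.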
Below is a sample Lambda function for generating text based on the input provided:
import json

# External dependencies:
import boto3
from botocore.config import Config

session_kwargs = {"region_name": "us-east-1"}
client_kwargs = {**session_kwargs}

retry_config = Config(
    region_name="us-east-1",
    retries={
        "max_attempts": 10,
        "mode": "standard",
    },
)

# Assume the role that carries the Bedrock policy created above.
session = boto3.Session(**session_kwargs)
sts = session.client("sts")
response = sts.assume_role(
    RoleArn="role_arn",  # ARN of the role with the Bedrock policy
    RoleSessionName="bedrockAssumeRoleSession",
)
client_kwargs["aws_access_key_id"] = response["Credentials"]["AccessKeyId"]
client_kwargs["aws_secret_access_key"] = response["Credentials"]["SecretAccessKey"]
client_kwargs["aws_session_token"] = response["Credentials"]["SessionToken"]

# Model invocations go through the "bedrock-runtime" client; boto3 resolves
# the regional endpoint automatically.
bedrock_client = session.client(
    service_name="bedrock-runtime", config=retry_config, **client_kwargs
)

prompt_data = "sample text"
body = json.dumps({"inputText": prompt_data})
modelId = "amazon.titan-tg1-large"
accept = "application/json"
contentType = "application/json"

# invoke_model returns the whole payload at once; the streaming variant
# (invoke_model_with_response_stream) returns an event stream instead.
response = bedrock_client.invoke_model(
    body=body, modelId=modelId, accept=accept, contentType=contentType
)
response_body = json.loads(response.get("body").read())
outputText = response_body.get("results")[0].get("outputText")
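Wired up for API Gateway, the same call can be wrapped in a Lambda handler. The sketch below assumes the API Gateway proxy event carries a JSON body with a `prompt` field, and adds a CORS header so the static S3 page can call it; both are assumptions, not details from the original post.

```python
import json

MODEL_ID = "amazon.titan-tg1-large"

def build_request(prompt):
    """Serialize the Titan text-generation request body."""
    return json.dumps({"inputText": prompt})

def extract_output(response_body):
    """Pull the generated text out of the parsed Titan response payload."""
    return response_body["results"][0]["outputText"]

def lambda_handler(event, context):
    import boto3  # imported lazily so the pure helpers stay testable offline

    bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")
    prompt = json.loads(event.get("body") or "{}").get("prompt", "")
    response = bedrock.invoke_model(
        body=build_request(prompt),
        modelId=MODEL_ID,
        accept="application/json",
        contentType="application/json",
    )
    output = extract_output(json.loads(response["body"].read()))
    return {
        "statusCode": 200,
        "headers": {"Access-Control-Allow-Origin": "*"},  # for the S3 static page
        "body": json.dumps({"generatedText": output}),
    }
```

Here the handler relies on the Lambda execution role for credentials instead of calling `sts.assume_role` explicitly, which is the usual pattern when the function runs under the role created earlier.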
Conclusion:
With AWS Bedrock, you can easily find and access the right model for your needs, privately customize it with your own data, and seamlessly integrate and deploy it into your applications alongside other AWS capabilities to build a full-fledged solution.