Frontend RAG Mastery: Deploying with AWS, GitHub, and Legacy Integration
Chapter 1: Introduction to Frontend Deployment
In this concluding part of our series, we will delve into the process of deploying a frontend application to a production-ready environment. The main hurdles we will tackle include publishing the frontend online and ensuring smooth integration with older websites.
Integrating with Legacy Systems
What exactly do we mean by integrating with legacy systems? In simple terms, after making our frontend available online, we strive to blend our new technology with existing outdated websites. For instance, if we've successfully launched our modern React application and a client requests its incorporation into their antiquated PHP site, do we need to start from the ground up? Absolutely not. We will investigate strategies to merge our existing code into any site that supports HTML.
The first step, as previously mentioned, is to host our local code online. To accomplish this, we must design the architecture for a production application. My preferred method, which I believe is the most effective, is to pair CloudFront with an S3 bucket on AWS. Why choose AWS over other providers? Because it is straightforward and economical: the AWS free tier grants access to every feature we need here.
My Starter Architecture
Before diving into this final segment, let's ensure you have reviewed the earlier articles in the series:
- Intro: [here]
- Choose the Model: [here]
- Flask Server Setup: [here]
- ChatEngine Construction: [here]
- Advanced RAG Performance: [here]
- Dynamic Sources: [here]
- Frontend Construction: [here]
- Backend Deployment: [here]
Now, let’s begin with the first step: creating an AWS account. Once that's accomplished, head over to the S3 section and click on the "Create Bucket" button. Enter the name for your bucket, and make sure to adjust the following settings:
- Enable Public Access: By default, an S3 bucket blocks all public access to its contents. To host a website or serve files publicly, you must lift that block by unchecking the "Block all public access" option in the permissions settings, allowing anyone on the internet to read the objects in your bucket. This is essential for hosting static websites or serving assets such as images, videos, or documents (a sample bucket policy follows this list).
- Enable Access Control Lists (ACLs): ACLs are critical for defining who can access the objects in your S3 bucket and what actions they can perform. When setting up your bucket, ensure ACLs are enabled and configured correctly to manage access permissions effectively. This control is vital for maintaining the security and integrity of any sensitive data stored.
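Beyond the console toggles, a bucket policy is the usual way to grant public read access to the objects themselves. Here is a minimal sketch of the standard static-hosting policy (YOUR_BUCKET_NAME is a placeholder), which you can paste under the bucket's "Permissions" tab:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "PublicReadGetObject",
      "Effect": "Allow",
      "Principal": "*",
      "Action": "s3:GetObject",
      "Resource": "arn:aws:s3:::YOUR_BUCKET_NAME/*"
    }
  ]
}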
Having established a functional bucket for your code, it’s now time to set up CloudFront. Navigate to the CloudFront section, click on "Create Distribution," choose a name, select the Origin (the bucket you created earlier), and for the default root object, enter: index.html.
Once you configure S3 and CloudFront, you are one step closer to going live with your code. However, just having the infrastructure is insufficient; you must ensure that your code is deployed and accessible.
Manually copying the built files into S3 every time a change is made is tedious and impractical. Instead, we will establish a remote pipeline on GitHub that automatically deploys our code whenever we push updates. For this, we will need one final item from AWS: a remote access key and secret for deployment from GitHub.
To obtain these credentials, go to the IAM (Identity and Access Management) section within AWS. This section is essential for creating keys or users for various purposes, including remote deployment. In the IAM section, click on the "Users" tab and select "Create New User." Assign a name to the user, then proceed to "Attach Policies." Search for and attach the following policies:
- CloudFrontFullAccess
- AmazonS3FullAccess
Why both? We need the capability to upload our compiled code to S3 and to clear the CloudFront cache (known as an "invalidation").
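For intuition, here is roughly what the pipeline will do on every push, expressed as equivalent AWS CLI commands (bucket name and distribution ID are placeholders):
aws s3 sync build/ s3://YOUR_BUCKET_NAME --delete
aws cloudfront create-invalidation --distribution-id YOUR_DISTRIBUTION_ID --paths "/*"
The first command uploads the compiled files and removes stale ones; the second tells CloudFront to drop its cached copies so visitors receive the new version.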
After attaching the policies, open the newly created user and navigate to the "Security Credentials" section. Here, generate an access key. If everything goes smoothly, AWS will confirm the key was created and display the access key ID and secret; copy both now, as the secret is shown only once.
With the AWS setup finalized, let's shift our attention to the code. Within the frontend code from the previous article, locate the "scripts" folder. Inside this folder, create a script named "build.js" with the following code:
'use strict'

// Set the build environment to production before anything else runs
process.env.BABEL_ENV = 'production'
process.env.NODE_ENV = 'production'

// Ensure unhandled promise rejections terminate the process
process.on('unhandledRejection', (err) => {
  throw err
})

// Load environment variables
require('../config/env')

const path = require('path')
const chalk = require('react-dev-utils/chalk')
const fs = require('fs-extra')
const bfj = require('bfj')
const webpack = require('webpack')
const configFactory = require('../config/webpack.config')
const paths = require('../config/paths')
const checkRequiredFiles = require('react-dev-utils/checkRequiredFiles')
const formatWebpackMessages = require('react-dev-utils/formatWebpackMessages')
const printHostingInstructions = require('react-dev-utils/printHostingInstructions')
const FileSizeReporter = require('react-dev-utils/FileSizeReporter')
const printBuildError = require('react-dev-utils/printBuildError')

const measureFileSizesBeforeBuild = FileSizeReporter.measureFileSizesBeforeBuild
const printFileSizesAfterBuild = FileSizeReporter.printFileSizesAfterBuild

// Thresholds for bundle size warnings
const WARN_AFTER_BUNDLE_GZIP_SIZE = 512 * 1024
const WARN_AFTER_CHUNK_GZIP_SIZE = 1024 * 1024

const isInteractive = process.stdout.isTTY

// Check for required files
if (!checkRequiredFiles([paths.appHtml, paths.appIndexJs])) {
  process.exit(1)
}

// Generate the production configuration
const config = configFactory('production')

// Validate the browserslist configuration, then run the build
const { checkBrowsers } = require('react-dev-utils/browsersHelper')
checkBrowsers(paths.appPath, isInteractive)
  .then(() => {
    // Record current file sizes so the size report can show deltas
    return measureFileSizesBeforeBuild(paths.appBuild)
  })
  .then((previousFileSizes) => {
    // Empty the build folder and copy over the static public assets
    fs.emptyDirSync(paths.appBuild)
    copyPublicFolder()
    return build(previousFileSizes)
  })
  .then(
    ({ stats, previousFileSizes, warnings }) => {
      if (warnings.length) {
        console.log(chalk.yellow('Compiled with warnings.\n'))
        console.log(warnings.join('\n\n'))
        console.log(
          '\nSearch for the ' +
            chalk.underline(chalk.yellow('keywords')) +
            ' to learn more about each warning.',
        )
      } else {
        console.log(chalk.green('Compiled successfully.\n'))
      }
      console.log('File sizes after gzip:\n')
      printFileSizesAfterBuild(
        stats,
        previousFileSizes,
        paths.appBuild,
        WARN_AFTER_BUNDLE_GZIP_SIZE,
        WARN_AFTER_CHUNK_GZIP_SIZE,
      )
    },
    (err) => {
      const tscCompileOnError = process.env.TSC_COMPILE_ON_ERROR === 'true'
      if (tscCompileOnError) {
        console.log(chalk.yellow('Compiled with type errors:\n'))
        printBuildError(err)
      } else {
        console.log(chalk.red('Failed to compile.\n'))
        printBuildError(err)
        process.exit(1)
      }
    },
  )
  .catch((err) => {
    if (err && err.message) {
      console.log(err.message)
    }
    process.exit(1)
  })

// Create the production build and report the result
function build(previousFileSizes) {
  console.log('Creating an optimized production build...')
  const compiler = webpack(config)
  return new Promise((resolve, reject) => {
    compiler.run((err, stats) => {
      let messages
      if (err) {
        if (!err.message) {
          return reject(err)
        }
        let errMessage = err.message
        // Point at the offending CSS selector for PostCSS errors
        if (Object.prototype.hasOwnProperty.call(err, 'postcssNode')) {
          errMessage += '\nCompileError: Begins at CSS selector ' + err.postcssNode.selector
        }
        messages = formatWebpackMessages({
          errors: [errMessage],
          warnings: [],
        })
      } else {
        messages = formatWebpackMessages(stats.toJson({ all: false, warnings: true, errors: true }))
      }
      if (messages.errors.length) {
        // Only keep the first error; the rest is usually noise
        if (messages.errors.length > 1) {
          messages.errors.length = 1
        }
        return reject(new Error(messages.errors.join('\n\n')))
      }
      const resolveArgs = {
        stats,
        previousFileSizes,
        warnings: messages.warnings,
      }
      return resolve(resolveArgs)
    })
  })
}

// Copy everything from public/ into the build folder except index.html,
// which webpack emits itself with the bundled assets injected
function copyPublicFolder() {
  fs.copySync(paths.appPublic, paths.appBuild, {
    dereference: true,
    filter: (file) => file !== paths.appHtml,
  })
}
This script is essential: it compiles our TypeScript code to JavaScript and bundles it with webpack, producing the static files we will place in the S3 bucket. After creating the script, make sure your package.json exposes it as the build command.
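For reference, the relevant entry in package.json might look like this minimal sketch (assuming the script lives at scripts/build.js, as above):
{
  "scripts": {
    "build": "node scripts/build.js"
  }
}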
Next, let's set up the GitHub Actions workflow (a YAML file under .github/workflows/ in the repository, e.g. deploy.yml):
name: Deploy to S3 and CloudFront
on: [push]
jobs:
  run:
    runs-on: ubuntu-latest
    env:
      AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
      AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
      AWS_DEFAULT_REGION: ${{ secrets.AWS_REGION }}
      API_URL: ${{ secrets.API_URL }}
    steps:
      - uses: actions/checkout@v1
      - name: Install dependencies
        run: yarn
      - name: Build
        run: yarn build
      - name: Deploy
        uses: lbertenasco/s3-deploy@v1
        with:
          folder: build
          bucket: ${{ secrets.AWS_S3_BUCKET }}
          dist-id: ${{ secrets.CLOUDFRONT_DISTRIBUTION_ID }}
          invalidation: /*
Let's review the pipeline we've set up. It reads all of its configuration from GitHub secrets, so make sure you have added every required secret to your repository (or environment) settings, as discussed in the previous article. The required secrets are listed here, with a CLI sketch for registering them after the list:
- AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY: The keys created in the last AWS step.
- AWS_REGION: The region of your S3 bucket.
- API_URL: The URL of your remote server (found in the previous article).
- AWS_S3_BUCKET: Your S3 bucket's name.
- CLOUDFRONT_DISTRIBUTION_ID: The ID of your CloudFront distribution, accessible in your newly created CloudFront distribution.
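If you prefer the command line to the repository settings page, the GitHub CLI can register these secrets; a quick sketch (all values are placeholders):
gh secret set AWS_ACCESS_KEY_ID --body "AKIA..."
gh secret set AWS_SECRET_ACCESS_KEY --body "..."
gh secret set AWS_REGION --body "us-east-1"
gh secret set API_URL --body "https://your-api.example.com"
gh secret set AWS_S3_BUCKET --body "your-bucket-name"
gh secret set CLOUDFRONT_DISTRIBUTION_ID --body "E123EXAMPLE"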
Integrating with Legacy Websites
Now, how do you use it with legacy websites? You can drop the deployed app into any existing page with an iframe, using the following JavaScript and CSS on the host page:
let open = false; // Tracks whether the chat widget is currently expanded

// Listen for toggle messages posted from inside the iframe
window.addEventListener('message', (event) => {
  if (event?.data?.type === 'toggleWidget') {
    open = event.data.payload;
    updateWidgetSize(open);
  }
});

// Resize the iframe: large when the chat is open, small when collapsed
function updateWidgetSize(open) {
  const iframe = document.querySelector('.widget-chat');
  if (open) {
    iframe.style.height = '700px';
    iframe.style.width = '400px';
  } else {
    iframe.style.height = '80px';
    iframe.style.width = '100px';
  }
}
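For this listener to fire, the React widget inside the iframe has to announce its open or closed state to the parent page; a minimal sketch of that side, assuming the widget keeps an isOpen boolean:
// Inside the React widget, whenever the chat opens or closes:
window.parent.postMessage({ type: 'toggleWidget', payload: isOpen }, '*');
(In production, replace the '*' target origin with the legacy site's origin so the message is only delivered to your own site.) The accompanying CSS pins the iframe above the page content: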
.widget-chat {
  position: absolute;
  right: 0;
  bottom: 0;
  border: none;
  z-index: 9999;
}
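Finally, the host page needs the iframe element itself; a minimal sketch, with the src standing in for your own CloudFront domain:
<iframe
  class="widget-chat"
  src="https://YOUR_DISTRIBUTION.cloudfront.net"
  title="Chat widget"
></iframe>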
Indeed, it's quite simple! With these few snippets, you can seamlessly embed your deployed app into any existing website via an iframe.
Conclusion and Final Thoughts
As we wrap up this series, if you found it beneficial, the best way to support it is by sharing this article and the series on social media. Don’t forget to leave a clap and comments; your feedback is invaluable.
I hope you've enjoyed the journey from creating the RAG model to deploying your production-ready application. Transitioning from a simple example to a practical implementation can be challenging, but I trust this series has made it easier for you.
The updated code used in this article is publicly available for further exploration and adaptation to your projects.
If you have any inquiries or feedback, feel free to reach out to me on Discord at sl33p1420. Special thanks to James Pham for bringing a bug to my attention. Your contributions and insights are greatly appreciated!
Chapter 2: Video Resources
To enhance your understanding, check out these helpful videos.
Deploy Next.JS Frontend with AWS Amplify and AWS CodeCommit - YouTube: This video provides a detailed guide on deploying a Next.js frontend using AWS services.
An Intro to AWS Deployments with OpenTofu, Scalr, & GitHub! - YouTube: This video introduces AWS deployment strategies using OpenTofu and GitHub.