
Unlock the Power of Generative AI in Your Applications with Gemini

Maker Suite is also a great way to adjust parameters like temperature, maximum output length, and top-k value, as well as get sample implementation code. You can request different output formats, such as JSON, markdown, or bullet lists. We won’t get into prompt engineering or tuning these parameters here, but both are worth exploring for full production applications.
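One lightweight way to experiment with output formats is to steer them through the prompt text itself. The helper below is a hypothetical sketch (the function name and phrasing are my own, not from this post) of building prompts that request different formats:

```javascript
// Hypothetical sketch: request a specific output format via the prompt text
// rather than any API setting. Names and wording here are illustrative.
function buildJokePrompt(format) {
  const base = 'Tell me three short jokes about programming.';
  if (format === 'json') {
    return `${base} Respond only with a JSON array of strings.`;
  }
  if (format === 'markdown') {
    return `${base} Respond as a markdown bullet list.`;
  }
  return base;
}
```

Pasting prompts like these into Maker Suite lets you compare how reliably the model honors each format before writing any application code.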

Your prompts don’t have to be perfect to start building. Once you’re comfortable, let’s set up the project.

To connect Firebase Functions to any Google Cloud APIs, including Vertex AI, which provides access to Gemini and other models, you’ll need to:

  1. If you don’t already have a project, create one at https://console.firebase.google.com
  2. Under Build, choose Functions.
  3. If you haven’t enabled billing yet, click “Upgrade project” and select a billing account to link to your Google Cloud project.

Using Firebase Functions requires the pay-as-you-go Blaze plan, but usage for this walkthrough should stay within the free quotas. The Gemini model’s pricing structure allows for 60 queries per minute for free. Only once usage exceeds these free limits will the billing account attached to the Cloud project incur costs.

While the Vertex AI API is publicly accessible and can be consumed by any application with the proper authentication, using Firebase Functions or Cloud Functions provides a big benefit over other cloud solutions: no need to manage API keys. If you looked at the sample code in Maker Suite, you may have noticed the placeholder for an API key. When you deploy a Firebase Functions or Cloud Functions application, the Cloud Platform SDKs use your project’s default service account, which is already authorized to access other Cloud resources.
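The contrast can be sketched like this. The constructor options below mirror the @google-cloud/vertexai constructor used later in this walkthrough; note that, unlike the Maker Suite sample, they carry no key at all:

```javascript
// On Firebase/Cloud Functions, the Vertex AI SDK resolves the project's
// default service account via Application Default Credentials, so the
// constructor options contain no secret.
const vertexOptions = {
  project: process.env.GCLOUD_PROJECT, // injected by the Functions runtime
  location: 'us-central1',
  // no apiKey field -- Application Default Credentials handle authentication
};
```

Locally, the Firebase emulators and `gcloud auth application-default login` provide the same credential resolution, so the code stays identical across environments.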

First, a few prerequisites to install for Functions to work.

  • Install Node (I recommend using a version manager like nvm) if you haven’t already.
  • Ensure you have the latest Firebase CLI installed and logged in.
npm install --global firebase-tools
firebase login

Now let’s set up your local project.

cd the/folder/where/you/want/your/code
firebase init

For this demo, you only need to select Functions (press Space to toggle it), then press Enter.

? Which Firebase features do you want to set up for this directory? Press Space 
to select features, then Enter to confirm your choices. (Press <space> to
select, <a> to toggle all, <i> to invert selection, and <enter> to proceed)
◯ Realtime Database: Configure a security rules file for Realtime Database and
(optionally) provision default instance
◯ Firestore: Configure security rules and indexes files for Firestore
❯◉ Functions: Configure a Cloud Functions directory and its files
◯ Hosting: Configure files for Firebase Hosting and (optionally) set up GitHub
Action deploys
◯ Hosting: Set up GitHub Action deploys

Choose “Use an existing project”, then select your Firebase project. The default options for the remaining questions are fine. We’ll use JavaScript as the language for this demo.

? Please select an option: (Use arrow keys)
❯ Use an existing project
Create a new project
Add Firebase to an existing Google Cloud Platform project
Don't set up a default project

Now you’re ready to code!

If you open the functions/index.js file, you’ll see the “hello world” sample. Running it now will deploy one endpoint, /helloworld. At any point, you can deploy or start the local emulators.

# Deploy changes
firebase deploy

# Or start emulators - this only works from the functions directory
cd functions
npm run serve

Google has two Node.js client SDKs available for connecting to its AI APIs.

Both work, and it’s not clear to me why there are two that seem to do the same thing. This walkthrough uses the newer one, @google-cloud/vertexai, only because its syntax is more straightforward. Personally, I’ve not found any functional differences, so it’s ultimately just preference.

Calling the APIs

First, add the npm module to the project by running the following from the functions directory. Be sure you’re not at the Firebase project root folder.

npm install @google-cloud/vertexai

Once installed, you can use the SDK to build a new function. Add the following to your index.js.

const { onRequest } = require('firebase-functions/v2/https'); // already present in the generated index.js
const { VertexAI } = require('@google-cloud/vertexai');

exports.prompt = onRequest(async (request, response) => {
  const vertex_ai = new VertexAI({ project: process.env.GCLOUD_PROJECT, location: 'us-central1' });

  // Available models: https://cloud.google.com/vertex-ai/docs/generative-ai/learn/models
  const model = 'gemini-pro';

  const generativeModel = vertex_ai.preview.getGenerativeModel({
    model: model,
    generation_config: { // Test impact of parameters: https://makersuite.google.com
      "max_output_tokens": 2048,
      "temperature": 0.9,
      "top_p": 1,
    },
  });

  const prompt = `Tell me a joke`;
  const req = {
    contents: [{ role: 'user', parts: [{ text: prompt }] }],
  };

  const content = await generativeModel.generateContent(req);
  const result = content.response.candidates.at(0).content.parts.at(0).text;

  response.send(result);
});
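The extraction line above chains .at(0) and assumes a candidate is always present, but responses can come back with no candidates (for example, when safety filters block the prompt). A defensive helper, sketched here with my own naming rather than anything from the original post, avoids a runtime error in that case:

```javascript
// Hypothetical helper: defensively pull the first candidate's text out of a
// Vertex AI response instead of indexing into possibly-empty arrays.
function extractText(response) {
  const part = response?.candidates?.[0]?.content?.parts?.[0];
  return part?.text ?? '';
}

// Exercising it against mocked objects shaped like the SDK's response:
const ok = { candidates: [{ content: { parts: [{ text: 'Why did...' }] } }] };
const blocked = { candidates: [] };
console.log(extractText(ok));      // 'Why did...'
console.log(extractText(blocked)); // ''
```

Returning an empty string (or a fallback message) keeps the endpoint from throwing a 500 when the model declines to answer.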

To test your changes, use the Functions emulator. Run the following from the functions directory.

npm run serve

If you haven’t changed the ports, you should be able to confirm the function response by opening the function URL that the emulator prints to the terminal in your browser.

Sample response from Vertex AI

Making responses more dynamic

At this point, you have a deployable endpoint that responds with a generated value, but it’s not very dynamic. Let’s modify the code to take an input to customize the joke. Change the prompt line to this:

const prompt = `Tell me a joke about ${request.query.theme}`;

Now open the endpoint in a browser with a theme query parameter appended (e.g. ?theme=planes). Feel free to change the GET parameter value.

Generated joke relevant to theme of “planes”

Any dynamic content within the prompt will need to be set up this way: take the input and build your prompt from that value, typically by interpolating it directly into the prompt string.
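Since request.query values are user-controlled, it’s worth normalizing them before interpolation. The helper below is a sketch under my own assumptions (the name, the 100-character cap, and the "anything" fallback are all illustrative, not from the original post):

```javascript
// Hypothetical sketch: trim and bound the user-supplied value, with a
// fallback, before interpolating it into the prompt string.
function buildPrompt(theme) {
  const safeTheme = String(theme ?? '').trim().slice(0, 100) || 'anything';
  return `Tell me a joke about ${safeTheme}`;
}

console.log(buildPrompt('planes'));  // 'Tell me a joke about planes'
console.log(buildPrompt(undefined)); // 'Tell me a joke about anything'
```

Bounding the input length also keeps a hostile caller from padding your prompt with thousands of tokens you’d be billed for.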

You can see a fully working Functions backend and companion Flutter app using these APIs. The Flutter frontend takes a theme as user input and displays Monopoly properties for the given input. The prompt requests a response in a specific JSON structure, which is then parsed by the frontend app.
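That pattern can be sketched in miniature. Everything below is illustrative (the prompt wording, helper name, and sample reply are mine, not taken from the linked app): ask for a strict JSON shape in the prompt, then parse the model’s reply on the client.

```javascript
// Ask for a strict JSON shape in the prompt so the reply is machine-readable.
const prompt = `List 3 Monopoly-style properties for the theme "space".
Respond only with JSON: {"properties": [{"name": string, "price": number}]}`;

// Models sometimes wrap JSON in markdown fences, so strip them before parsing.
function parseModelJson(text) {
  const cleaned = text.replace(/^```(?:json)?\s*|\s*```$/g, '').trim();
  return JSON.parse(cleaned);
}

// A plausible raw reply, fences included:
const reply = '```json\n{"properties":[{"name":"Mars Ave","price":400}]}\n```';
console.log(parseModelJson(reply).properties[0].name); // 'Mars Ave'
```

Wrapping the JSON.parse call in a try/catch (and retrying or falling back on failure) is worthwhile, since the model is not guaranteed to return valid JSON every time.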

This walkthrough and the samples linked above are just examples of using generative AI to build or enhance an application. This should give you the tools you need to integrate Vertex AI APIs into your applications to make them just that much better.

