AI Form Filler not working

Hello!

I want to test AI Form Filler version 1.1.1, but since the gpt-3.5-turbo-16k-0613 model has been deprecated, it no longer works.
Will there be an update to fix this issue?

Thank you for your reply :slight_smile:

Hey Steeve,

Wow, it is very nice to see the plugin being used! You can add your own service: copy-paste one of the existing ones and change the model to one that is not deprecated. I expect that should work.

This is some documentation about it:

So in short:

  • Copy-paste ChatGPTCompletionService and create a new service, e.g. ChatGPTCompletionService&lt;NewModelName&gt;
  • Change the LLM model to an existing one, as indicated by the NewModelName
  • Finally, use your newly implemented service in your code:
FormFiller formFiller = new FormFiller(formLayout, new MyGPTService());
FormFillerResult result = formFiller.fill(input);

Now it should work.
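To make the steps concrete, here is a minimal sketch. The interface and method names here are hypothetical stand-ins; the real service interface comes from the add-on's source, so copy the actual ChatGPTCompletionService class rather than this stub:

```java
// Sketch with hypothetical names -- the add-on's real service interface
// may differ; copy the actual ChatGPTCompletionService source instead.
interface LLMService {
    String getGeneratedResponse(String prompt);
}

// Steps 1+2: a copy of the ChatGPT service pointing at a non-deprecated model.
class ChatGPTCompletionServiceGpt4oMini implements LLMService {
    // The only intended change from the copied class: the model name.
    private static final String MODEL = "gpt-4o-mini";

    @Override
    public String getGeneratedResponse(String prompt) {
        // In the real copied class this calls the OpenAI completion API
        // with MODEL; stubbed here so the sketch is self-contained.
        return "[" + MODEL + "] response for: " + prompt;
    }
}
```

Step 3 is then passing `new ChatGPTCompletionServiceGpt4oMini()` to the FormFiller constructor, as in the snippet above.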

P.S.: The only caveat is that this service may not be as stable as our tested, prompt-fine-tuned ones. (Models usually behave slightly differently, and you can adjust the prompts to make them work exactly the way you want.)

Thank you for your reactivity :+1:

I copy-pasted your service and changed gpt-3.5-turbo-16k-0613 to gpt-4o-mini.
So now I have to deal with ChatGPT usage pricing.

Do you know of a model that can be tested for free?
I am making a PoC for my company; are you confident in this plugin's future?

Thank you for your reply :slight_smile:


Well, we have created a Proof of Concept (PoC) at one point using an AWS-hosted LLaMA model, and it worked. So, you can really add any model. Keeping track of all the free models is tricky! :smile: I’m not sure about the currently available ones, as they can change daily.

Here’s the WIP PR for the cloud-hosted model (if you have access):

Of course, you’ll also need to configure AWS for it to work.
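Shape-wise, the glue code could look something like the sketch below. All names here are hypothetical: `SageMakerInvoker` stands in for a real AWS SDK call (for example `SageMakerRuntimeClient.invokeEndpoint` from AWS SDK v2), and the TGI-style JSON body is an assumption about how the endpoint was deployed:

```java
// Sketch only: hypothetical glue between the form filler and an
// AWS-hosted model. "SageMakerInvoker" is a stand-in for a real
// AWS SDK client call; verify the payload against your deployment.
interface SageMakerInvoker {
    String invoke(String endpointName, String jsonBody);
}

class AwsLlamaService {
    private final SageMakerInvoker invoker;
    private final String endpointName;

    AwsLlamaService(SageMakerInvoker invoker, String endpointName) {
        this.invoker = invoker;
        this.endpointName = endpointName;
    }

    String getGeneratedResponse(String prompt) {
        // TGI-style request body; adjust to your endpoint's schema.
        String body = "{\"inputs\":\"" + prompt.replace("\"", "\\\"") + "\"}";
        return invoker.invoke(endpointName, body);
    }
}
```

The point of the extra interface is just testability: you can swap in a fake invoker locally and only wire up the real AWS client in the deployed configuration.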

What I am sure of is that you can integrate any local or cloud-based large language models (LLMs).

As for the product’s future within the company, I’m not the best person to speak on that, but I believe if a lot of people start using it, it has real potential! :smile: So, please give it a try!
(We’ve already made some fixes over the past few months when issues came up.)

By the way, do you like it? :) Any feedback is always welcome.

Also, if you haven’t already, could you please try our new product? It uses AI in a more sophisticated and complex way, offering much more functionality:

Thank you for your PoC :+1:

We are finding it hard to tell which models are free to use and which are not.
What about you? Did you have to subscribe to a paid plan for development?

As soon as I can make it work I will give feedback ;)
Is there somewhere specific you would like me to post feedback?

I think I used a Llama 2 7B Chat model or something similar; SageMaker supported deploying and using it (amazon-sagemaker-llama2-response-streaming-recipes/llama-2-hf-tgi/llama-2-7b-chat-hf/1-deploy-llama-2-7b-chat-hf-tgi-sagemaker.ipynb at main · aws-samples/amazon-sagemaker-llama2-response-streaming-recipes · GitHub)

But if you want to put your AI model into production, you need to read the model licenses, and you will have significant costs just to deploy and host the models on AWS servers. I think it was 50-100 USD per day for a simpler model.
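As a back-of-envelope check on that figure (the hourly rates below are assumptions, not current AWS prices; look up the pricing for your instance type), an always-on GPU instance in the 2-4 USD/hour range already lands in that band:

```java
// Back-of-envelope hosting cost for an always-on instance.
// Hourly rates are assumptions; check current AWS pricing.
public class HostingCost {
    static double perDay(double usdPerHour) {
        return usdPerHour * 24;
    }

    public static void main(String[] args) {
        System.out.println(perDay(2.0)); // 48.0 USD/day
        System.out.println(perDay(4.0)); // 96.0 USD/day
    }
}
```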

Feedback can come in any form, really :), especially about Copilot, which is the focus right now :)

And just to answer:

  • I am using OpenAI models, but I have tried out Perplexity-supported models and others as well; it is hard to keep track of them all :joy:

Thank you Steeve! Take care!