The code completion model is accessed through the same CodeGenerationModel class as the code generation model in the PaLM API, but instead of generating code from scratch, it completes the code it is given. It also accepts more parameters than the plain code generation model.

Parameters

  • prefix (required): The code given to the model to be completed.
  • suffix (optional): If provided, the model will try to fill in the code between the prefix and the suffix (demonstrated in the second example below).
  • temperature (optional): Has the same behavior as the Parameters of Text Generation in PaLM API.
  • maxOutputTokens (required): Has the same behavior as the Parameters of Text Generation in PaLM API.
  • stopSequences (optional): A list of case-sensitive strings that tells the model to stop generating text if one of the strings is generated.

Example code:

# Assumes vertexai.init(project=..., location=...) has already been called.
from vertexai.language_models import CodeGenerationModel

# Load the code completion model.
code_completion_model = CodeGenerationModel.from_pretrained("code-gecko")

# The unfinished code that the model should complete.
prefix = """def find_x_in_string(string_s, x):
"""

response = code_completion_model.predict(prefix=prefix, max_output_tokens=64)

print(response.text)
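
The suffix and stopSequences parameters can be combined with a prefix for fill-in-the-middle completion. The sketch below is illustrative, assuming the same code-gecko model; the add_two_numbers function and the stop string are made up for the example, and stop_sequences is assumed to be exposed on predict() in recent versions of the Vertex AI SDK:

from vertexai.language_models import CodeGenerationModel

code_completion_model = CodeGenerationModel.from_pretrained("code-gecko")

# The model fills in the code between the prefix and the suffix.
prefix = """def add_two_numbers(a, b):
"""
suffix = """

print(add_two_numbers(1, 2))
"""

response = code_completion_model.predict(
    prefix=prefix,
    suffix=suffix,
    temperature=0.2,          # lower temperature for more deterministic completions
    max_output_tokens=32,
    stop_sequences=["\n\n"],  # stop once the function body is complete
)

print(response.text)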

References