ml-vision

interface

The Firebase ML Kit service interface.

This module is available for the default app only.

Example

Get the ML Kit service for the default app:

const defaultAppMLKit = firebase.vision();

Properties

app

The current FirebaseApp instance for this Firebase service.

app: FirebaseApp;
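
For example, to read the name of the underlying app (the default app is typically named "[DEFAULT]"):

const appName = firebase.vision().app.name;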

Methods

barcodeDetectorProcessImage

Returns an array of barcodes (as VisionBarcode) detected in a local image file.

barcodeDetectorProcessImage(imageFilePath: string, barcodeDetectorOptions?: VisionBarcodeDetectorOptions): Promise<VisionBarcode[]>;
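
A minimal sketch, run inside an async function; the file path is illustrative and the options argument is omitted:

const barcodes = await firebase.vision().barcodeDetectorProcessImage('/local/path/to/barcode.png'); // illustrative path
console.log(`Detected ${barcodes.length} barcode(s)`);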

cloudDocumentTextRecognizerProcessImage

Detects text within a document in a local image file, using the cloud (Firebase) model.

cloudDocumentTextRecognizerProcessImage(imageFilePath: string, cloudDocumentTextRecognizerOptions?: VisionCloudDocumentTextRecognizerOptions): Promise<VisionDocumentText>;
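
A minimal sketch, run inside an async function and assuming the returned VisionDocumentText exposes the full recognized string via a text property; the file path is illustrative:

const documentText = await firebase.vision().cloudDocumentTextRecognizerProcessImage('/local/path/to/scanned-contract.png'); // illustrative path
console.log(documentText.text); // assumes a top-level text property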

cloudImageLabelerProcessImage

Returns an array of labels (as VisionImageLabel) for a given local image file. Label detection runs in the cloud (Firebase), which is slower but produces more descriptive labels.

cloudImageLabelerProcessImage(imageFilePath: string, cloudImageLabelerOptions?: VisionCloudImageLabelerOptions): Promise<VisionImageLabel[]>;
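
A minimal sketch, run inside an async function and assuming each VisionImageLabel carries text and confidence properties; the file path is illustrative:

const labels = await firebase.vision().cloudImageLabelerProcessImage('/local/path/to/photo.jpg'); // illustrative path
labels.forEach(label => console.log(label.text, label.confidence)); // assumed property names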

cloudLandmarkRecognizerProcessImage

Returns an array of landmarks (as VisionLandmark) for a given local image file. Landmark detection runs in the cloud (Firebase).

cloudLandmarkRecognizerProcessImage(imageFilePath: string, cloudLandmarkRecognizerOptions?: VisionCloudLandmarkRecognizerOptions): Promise<VisionLandmark[]>;
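
A minimal sketch, run inside an async function; the file path is illustrative and the options argument is omitted:

const landmarks = await firebase.vision().cloudLandmarkRecognizerProcessImage('/local/path/to/holiday-photo.jpg'); // illustrative path
console.log(`Recognized ${landmarks.length} landmark(s)`);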

cloudTextRecognizerProcessImage

Detects text from a local image file using the cloud (Firebase) model.

cloudTextRecognizerProcessImage(imageFilePath: string, cloudTextRecognizerOptions?: VisionCloudTextRecognizerOptions): Promise<VisionText>;
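
A minimal sketch, run inside an async function and assuming VisionText exposes the full recognized string via a text property; the file path is illustrative:

const recognized = await firebase.vision().cloudTextRecognizerProcessImage('/local/path/to/receipt.jpg'); // illustrative path
console.log(recognized.text); // assumes a top-level text property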

faceDetectorProcessImage

Detects faces from a local image file.

faceDetectorProcessImage(imageFilePath: string, faceDetectorOptions?: VisionFaceDetectorOptions): Promise<VisionFace[]>;
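
A minimal sketch, run inside an async function; the file path is illustrative and the options argument is omitted:

const faces = await firebase.vision().faceDetectorProcessImage('/local/path/to/group-photo.jpg'); // illustrative path
console.log(`Detected ${faces.length} face(s)`);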

imageLabelerProcessImage

Returns an array of labels (as VisionImageLabel) for a given local image file. Label detection runs on-device, which is faster but produces less descriptive labels.

imageLabelerProcessImage(imageFilePath: string, imageLabelerOptions?: VisionImageLabelerOptions): Promise<VisionImageLabel[]>;
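
A minimal sketch, run inside an async function and assuming each VisionImageLabel carries text and confidence properties; the file path is illustrative:

const labels = await firebase.vision().imageLabelerProcessImage('/local/path/to/photo.jpg'); // illustrative path
labels.forEach(label => console.log(label.text, label.confidence)); // assumed property names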

textRecognizerProcessImage

Detects text from a local image file using the on-device model.

textRecognizerProcessImage(imageFilePath: string): Promise<VisionText>;
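
A minimal sketch, run inside an async function and assuming VisionText exposes the full recognized string via a text property; the file path is illustrative:

const recognized = await firebase.vision().textRecognizerProcessImage('/local/path/to/street-sign.jpg'); // illustrative path
console.log(recognized.text); // assumes a top-level text property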

Statics

VisionBarcodeAddressType

ml-vision.VisionBarcodeAddressType: any;

VisionBarcodeEmailType

ml-vision.VisionBarcodeEmailType: any;

VisionBarcodeFormat

ml-vision.VisionBarcodeFormat: any;

VisionBarcodePhoneType

ml-vision.VisionBarcodePhoneType: any;

VisionBarcodeValueType

ml-vision.VisionBarcodeValueType: any;

VisionBarcodeWifiEncryptionType

ml-vision.VisionBarcodeWifiEncryptionType: any;

VisionCloudLandmarkRecognizerModelType

ml-vision.VisionCloudLandmarkRecognizerModelType: VisionCloudLandmarkRecognizerModelType;

VisionCloudTextRecognizerModelType

ml-vision.VisionCloudTextRecognizerModelType: VisionCloudTextRecognizerModelType;

VisionDocumentTextRecognizedBreakType

ml-vision.VisionDocumentTextRecognizedBreakType: VisionDocumentTextRecognizedBreakType;

VisionFaceContourType

ml-vision.VisionFaceContourType: VisionFaceContourType;

VisionFaceDetectorClassificationMode

ml-vision.VisionFaceDetectorClassificationMode: VisionFaceDetectorClassificationMode;

VisionFaceDetectorContourMode

ml-vision.VisionFaceDetectorContourMode: VisionFaceDetectorContourMode;

VisionFaceDetectorLandmarkMode

ml-vision.VisionFaceDetectorLandmarkMode: VisionFaceDetectorLandmarkMode;

VisionFaceDetectorPerformanceMode

ml-vision.VisionFaceDetectorPerformanceMode: VisionFaceDetectorPerformanceMode;

VisionFaceLandmarkType

ml-vision.VisionFaceLandmarkType: VisionFaceLandmarkType;
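
These statics are typically passed into the corresponding detector options, accessed on the firebase.vision namespace as listed above. A sketch of the pattern, run inside an async function; the classificationMode field name and the ALL_CLASSIFICATIONS member are assumptions here, shown only to illustrate the shape:

const faces = await firebase.vision().faceDetectorProcessImage('/local/path/to/group-photo.jpg', {
  // 'classificationMode' and 'ALL_CLASSIFICATIONS' are assumed names, for illustration only
  classificationMode: firebase.vision.VisionFaceDetectorClassificationMode.ALL_CLASSIFICATIONS,
});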