Handlers lambda development

This section describes handlers lambda development.

More information about lambda types and the differences between handlers lambdas and other lambda types is available in the lambda types description.

Handlers lambda requirements

The handlers lambda has several requirements in addition to the basic requirements:

  • Luna Faces must be available using credentials from Luna-Configurator

  • Luna Events must be available using credentials from Luna-Configurator; it can be disabled using the ADDITIONAL_SERVICES_USAGE setting, and in this case the lambda must be designed to work without Luna-Events

  • Luna Python Matcher must be available using credentials from Luna-Configurator

  • Luna Faces/Bodies/Images Samples Store must be available using credentials from Luna-Configurator; it can be disabled using the ADDITIONAL_SERVICES_USAGE setting, and in this case the lambda must be designed to work without Luna-Image-Store

  • Luna Remote SDK must be available using credentials from Luna-Configurator

The usage and purpose of these services are described here.
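
For orientation, the sketch below only maps each of these services to the request client attribute used in the examples later in this section (the method names in the comments are the ones those examples call):

example
 from luna_lambda_tools import HandlersLambdaRequest


 async def main(request: HandlersLambdaRequest) -> dict:
     clients = request.clients
     # Luna Faces .................. clients.faces (e.g. getFace, createFace)
     # Luna Events ................. clients.events (e.g. saveEvents, saveGeneralEvents)
     # Luna Python Matcher ......... clients.matcher (e.g. matchRaw)
     # Luna Image Store (samples) .. clients.faceSamplesStore (e.g. postImage)
     # Luna Remote SDK ............. clients.sdk (e.g. sdk)
     ...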

Handlers lambda configuration

The handlers lambda requires several settings from luna-configurator, which can be divided into several groups:

  • LUNA_LAMBDA_UNIT_LOGGER - lambda logger settings

  • luna-services addresses and timeouts settings (for example, LUNA_FACES_ADDRESS and LUNA_FACES_TIMEOUTS will be used by the lambda to make requests to the luna-faces service)

  • The ADDITIONAL_SERVICES_USAGE setting is used to determine which luna-services can be used by the lambda (the lambda will not check the connection to disabled services and will raise an error if the user tries to make a request to such a service); see the sketch after this list
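
As a minimal sketch of guarding an optional service call, based on the enabled-service flags used in the examples later in this section (request.eventsEnabled, request.faceSampleStoreEnabled, and similar):

example
 from luna_lambda_tools import HandlersLambdaRequest


 async def main(request: HandlersLambdaRequest) -> dict:
     result = {"events_enabled": request.eventsEnabled}
     if not request.eventsEnabled:
         # Luna Events is disabled via ADDITIONAL_SERVICES_USAGE: skip the call,
         # the lambda must be designed to work without the disabled service
         return result
     # Luna Events is enabled, requests via request.clients.events are allowed here
     ...
     return result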

Handlers lambda request usage

The request to a handlers lambda has some additions compared to the standalone lambda request.

The HandlersLambdaRequest has the data property (a detailed description is presented in handlers lambda incoming data).

A simple example:

lambda_main.py
 from luna_lambda_tools import HandlersLambdaRequest, logger

 async def main(request: HandlersLambdaRequest) -> dict:
     logger.info(len(request.data.sources))    # log count of images in request
     logger.info(request.handlerId)   # request initiator handler id
     logger.info(request.headers)     # all request headers
     logger.info(request.args)        # print dictionary with query arguments
     ...

Handlers lambda incoming data

The handlers lambda must be designed to process requests from the Luna-Handlers service.

The incoming request structure is presented here.

luna_lambda_tools.public.handlers.schemas

The module contains pydantic schemas for the handlers lambda.
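
The models in that module are authoritative; purely for orientation, the sketch below is a simplified, non-authoritative approximation limited to the fields that the examples in this section actually access (class and field names here are illustrative only; the actual models, such as EventSourceNonAggregatedRawImage, contain more fields and stricter validation):

example
 # simplified illustration only, not the real models from luna_lambda_tools.public.handlers.schemas
 from typing import List, Optional

 from pydantic import BaseModel


 class RawImageSource(BaseModel):
     body: bytes  # raw image bytes ("source" -> "body" in the request examples below)


 class ImageSourceSketch(BaseModel):
     source: RawImageSource
     filename: Optional[str] = None            # accessed as image.filename in the examples
     faceBoundingBoxes: Optional[list] = None  # accessed in the incoming data example below


 class HandlersLambdaDataSketch(BaseModel):
     aggregate_attributes: int = 0
     sources: List[ImageSourceSketch] = []     # accessed as request.data.sources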

Usage of the data property assumes that the request is validated against the presented schema and that the content type of the request is application/msgpack.

If the request body content type differs or the incoming data does not pass validation, an exception will be raised and a reply with a 400 status code will be returned.
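
On the client side this means the request body should be packed as msgpack; the request examples below do this with RequestPayload.buildMsgpack, for example:

example
 from luna3.common.requests import RequestPayload
 from luna3.luna_lambda.luna_lambda import LambdaApi

 SERVER_ORIGIN = "http://lambda_address:lambda_port"  # Replace by your values before start
 lambdaApi = LambdaApi(origin=SERVER_ORIGIN, api=1)
 lambdaId, accountId = "your_lambda_id", "your_account_id"  # Replace by your values before start

 # pack the body as msgpack so it passes the incoming data validation described above
 data = {"aggregate_attributes": 0, "sources": []}  # fill "sources" as in the request examples below
 payload = RequestPayload.buildMsgpack(body=data)
 reply = lambdaApi.proxyLambdaPost(lambdaId=lambdaId, path="main", accountId=accountId, body=payload)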

The data is available via the data property of the incoming request as follows:

lambda_main.py
 from luna_lambda_tools import HandlersLambdaRequest, logger


 async def main(request: HandlersLambdaRequest) -> dict:
     logger.info("request images count: ", len(request.data.sources))
     for i, image in enumerate(request.data.sources):
         logger.info(f"face bounding box for image number {i+1}: {image.faceBoundingBoxes[0] if image.faceBoundingBoxes else 'not presented'}")
     ...

Handlers lambda examples

The handlers lambda can use luna-services for different operations.

  • Exceptions caused by requests to luna services can be controlled using the raiseError flag (True by default). The following examples demonstrate raiseError flag usage:

    • if a request to a luna service causes an error, the exception will be raised and the example will return a response with a 500 status code and JSON like this:

      lambda_main.py
      from uuid import uuid4
      
      from luna_lambda_tools import HandlersLambdaRequest
      
      
      async def main(request: HandlersLambdaRequest) -> dict:
          nonExistFaceId = str(uuid4())
          reply = await request.clients.faces.getFace(faceId=nonExistFaceId, raiseError=True)
          return reply.json
      
      request example
      from luna3.common.requests import RequestPayload
      from luna3.luna_lambda.luna_lambda import LambdaApi
      
      SERVER_ORIGIN = "http://lambda_address:lambda_port"  # Replace by your values before start
      SERVER_API_VERSION = 1
      lambdaApi = LambdaApi(origin=SERVER_ORIGIN, api=SERVER_API_VERSION)
      lambdaId, accountId = "your_lambda_id", "your_account_id"  # Replace by your values before start
      
      
      def getImage(pathToImage):
          """
          Make sure pathToImage is valid path to specified image
          """
          with open(pathToImage, "rb") as file:
              return file.read()
      
      
      def makeRequest():
          data = {
              "aggregate_attributes": 0,
              "sources": [
                  {"source": {"body": getImage("empty.jpeg")}, "filename": "empty.jpeg", "source_type": "raw_image"},
              ],
          }
          payload = RequestPayload.buildMsgpack(body=data)
          reply = lambdaApi.proxyLambdaPost(lambdaId=lambdaId, path="main", accountId=accountId, body=payload)
          return reply
      
      
      if __name__ == "__main__":
          response = makeRequest()
          print(response.json)
      
      {
          "error_code": 42004,
          "desc": "Lambda exception",
          "detail": "Not expected code from service: 404 GET http://127.0.0.1:5030/3/faces/048e1428-c7de-426b-82f8-3971bf00284c, error: {\"error_code\":22002,\"desc\":\"Object not found\",\"detail\":\"Face with id '048e1428-c7de-426b-82f8-3971bf00284c' not found\",\"link\":\"https:\\/\\/docs.visionlabs.ai\\/info\\/luna\\/troubleshooting\\/errors-description\\/code-22002\"}",
          "link": "https://docs.visionlabs.ai/info/luna/troubleshooting/errors-description/code-42005"
      }
      
    • if the raiseError flag is set to False, no exception will be raised and the lambda will return the service reply as is, with a 201 status code and JSON like this:

      lambda_main.py
      from uuid import uuid4
      
      from luna_lambda_tools import HandlersLambdaRequest
      
      
      async def main(request: HandlersLambdaRequest) -> dict:
          nonExistFaceId = str(uuid4())
          reply = await request.clients.faces.getFace(faceId=nonExistFaceId, raiseError=False)
          return reply.json
      
      request example
      from luna3.common.requests import RequestPayload
      from luna3.luna_lambda.luna_lambda import LambdaApi
      
      SERVER_ORIGIN = "http://lambda_address:lambda_port"  # Replace by your values before start
      SERVER_API_VERSION = 1
      lambdaApi = LambdaApi(origin=SERVER_ORIGIN, api=SERVER_API_VERSION)
      lambdaId, accountId = "your_lambda_id", "your_account_id"  # Replace by your values before start
      
      
      def getImage(pathToImage):
          """
          Make sure pathToImage is valid path to specified image
          """
          with open(pathToImage, "rb") as file:
              return file.read()
      
      
      def makeRequest():
          data = {
              "aggregate_attributes": 0,
              "sources": [
                  {"source": {"body": getImage("empty.jpeg")}, "filename": "empty.jpeg", "source_type": "raw_image"},
              ],
          }
          payload = RequestPayload.buildMsgpack(body=data)
          reply = lambdaApi.proxyLambdaPost(lambdaId=lambdaId, path="main", accountId=accountId, body=payload)
          return reply
      
      
      if __name__ == "__main__":
          response = makeRequest()
          print(response.json)
      
      {
          "error_code": 22002,
          "desc": "Object not found",
          "detail": "Face with id 'fece7f31-ea14-40fb-8c57-40230aa17256' not found",
          "link": "https://docs.visionlabs.ai/info/luna/troubleshooting/errors-description/code-22002"
      }
      
  • An example lambda which uses a canonical platform handler, enriches the received result, and sends it to the user as a reply. The id of the canonical (non-lambda) handler is specified by a query parameter.

    (such a lambda cannot be used as a canonical luna-platform handler)

    lambda_main.py
    from luna3.public.common import BinaryImage
    from luna_lambda_tools import HandlersLambdaRequest, UserException
    from luna_lambda_tools.public.handlers.schemas import EventSourceAggregatedRawImage, EventSourceNonAggregatedRawImage
    from vlutils.helpers import isUUID
    
    
    class HandlerNotSpecifiedException(UserException):
        statusCode = 400
        errorText = "expected `handler_id` in query arguments"
    
    
    class BadHandlerFormatException(UserException):
        statusCode = 400
        errorText = "expected uuid `handler_id` in query arguments"
    
    
    class SourceCountExceptionException(UserException):
        statusCode = 400
        errorText = "expected one source image"
    
    
    class UnexpectedSourceException(UserException):
        statusCode = 400
        errorText = "expected only one source with only one image"
    
    
    class HandlersNotEnabled(UserException):
        statusCode = 403
        errorText = "luna-handlers service is disabled"
    
    
    async def main(request: HandlersLambdaRequest) -> dict:
        if not request.handlersEnabled:
            raise HandlersNotEnabled
        if (canonicalHandlerId := request.args.get("handler_id")) is None:
            raise HandlerNotSpecifiedException
        if not isUUID(canonicalHandlerId):
            raise BadHandlerFormatException
    
        request.logger.info(f"Lambda handler with id (handler_id) `{request.handlerId}` is working")
        request.logger.info(f"Use canonical handler with id `{canonicalHandlerId}` as preprocessor")
    
        if len(request.data.sources) != 1:
            raise SourceCountExceptionException
        isRawImageSource = isinstance(request.data.sources[0], EventSourceNonAggregatedRawImage)
        isAggregatedImageSource = isinstance(request.data.sources[0], EventSourceAggregatedRawImage)
        if isRawImageSource is isAggregatedImageSource is False:
            raise UnexpectedSourceException
    
        replyJson = (
            await request.clients.handlers.emitEvents(
                handlerId=canonicalHandlerId,
                raiseError=True,
                inputData=BinaryImage(body=request.data.sources[0].source.body, filename="image.jpg"),
                aggregateAttributes=1 if isAggregatedImageSource else 0,
            )
        ).json
        enrichedReply = replyJson
        enrichedReply["custom-meta"] = "custom text"
        return enrichedReply
    
    request example
    from luna3.common.http_objs import Policies
    from luna3.common.requests import RequestPayload
    from luna3.handlers.handlers import HandlersApi
    from luna3.luna_lambda.luna_lambda import LambdaApi
    
    SERVER_ORIGIN = "http://lambda_address:lambda_port"  # Replace by your values before start
    SERVER_API_VERSION = 1
    lambdaApi = LambdaApi(origin=SERVER_ORIGIN, api=SERVER_API_VERSION)
    HANDLERS_SERVER_ORIGIN = "http://handlers_address:handlers_port"
    HANDLERS_API_VERSION = 1
    handlersApi = HandlersApi(origin=HANDLERS_SERVER_ORIGIN, api=HANDLERS_API_VERSION)
    lambdaId, accountId = "your_lambda_id", "your_account_id"  # Replace by your values before start
    HANDLER_ID = "your_handler_id"  # Replace by your values before start
    
    
    def getImage(pathToImage):
        """
        Make sure pathToImage is valid path to specified image
        """
        with open(pathToImage, "rb") as file:
            return file.read()
    
    
    def makeRequest():
        data = {
            "aggregate_attributes": 0,
            "sources": [
                {"source": {"body": getImage("empty.jpeg")}, "filename": "empty.jpeg", "source_type": "raw_image"},
            ],
        }
        payload = RequestPayload.buildMsgpack(body=data)
        handlerId = handlersApi.createHandler(
            accountId=accountId, handlerType=0, policies=Policies(), raiseError=True
        ).json["handler_id"]
    
        reply = lambdaApi.proxyLambdaPost(
            lambdaId=lambdaId,
            path="main",
            accountId=accountId,
            body=payload,
            headers={"handler_id": HANDLER_ID},
            queries={"handler_id": handlerId},
        )
        return reply
    
    
    if __name__ == "__main__":
        response = makeRequest()
        print(response.json)
    
  • An example lambda which estimates emotions on each specified image and returns the received results as a reply

    (such a lambda cannot be used as a canonical luna-platform handler)

    lambda_main.py
    from luna3.public.common import BinaryImage
    from luna_lambda_tools import HandlersLambdaRequest
    
    
    async def main(request: HandlersLambdaRequest) -> dict:
        results = []
        for image in request.data.sources:
            # If you need to determine type of image use https://docs.python.org/3/library/imghdr.html or another library.
            mimetype = "image/jpeg"
            result = (
                await request.clients.sdk.sdk(
                    inputData=BinaryImage(
                        filename=image.filename or "raw_image", body=image.source.body, mimetype=mimetype
                    ),
                    detectFace=1,
                    estimateEmotions=1,
                )
            ).json
            results.append({"image_filename": image.filename, "estimations": result})
        return {"result": results}
    
    request example
    from luna3.common.requests import RequestPayload
    from luna3.luna_lambda.luna_lambda import LambdaApi
    
    SERVER_ORIGIN = "http://lambda_address:lambda_port"  # Replace by your values before start
    SERVER_API_VERSION = 1
    lambdaApi = LambdaApi(origin=SERVER_ORIGIN, api=SERVER_API_VERSION)
    lambdaId, accountId = "your_lambda_id", "your_account_id"  # Replace by your values before start
    
    
    def getImage(pathToImage):
        """
        Make sure pathToImage is valid path to specified image
        """
        with open(pathToImage, "rb") as file:
            return file.read()
    
    
    def makeRequest():
        data = {
            "aggregate_attributes": 0,
            "sources": [
                {"source": {"body": getImage("empty.jpeg")}, "filename": "empty.jpeg", "source_type": "raw_image"},
            ],
        }
        payload = RequestPayload.buildMsgpack(body=data)
        reply = lambdaApi.proxyLambdaPost(lambdaId=lambdaId, path="main", accountId=accountId, body=payload)
        return reply
    
    
    if __name__ == "__main__":
        response = makeRequest()
        print(response.json)
    
  • An example lambda which extracts face descriptors from two images, matches them, saves a general event if the similarity is higher than a threshold, and returns the result as a reply

    (such a lambda cannot be used as a canonical luna-platform handler)

    lambda_main.py
    from datetime import datetime
    from uuid import uuid4
    
    from dateutil.tz import tz
    from luna3.public.common import BinaryImage
    from luna3.public.events import GeneralEvent
    from luna3.public.matcher import SDKDescriptorReference
    from luna_lambda_tools import HandlersLambdaRequest, UserException
    
    THRESHOLD = 0.1
    
    
    class ImageCountException(UserException):
        statusCode = 400
        errorText = "expected two images in request"
    
    
    class FaceCountException(UserException):
        statusCode = 400
        errorText = "excepted one face on each image"
    
    
    async def main(request: HandlersLambdaRequest) -> dict:
        if len(request.data.sources) != 2:
            raise ImageCountException
    
        images = []
        for i in range(2):
            img = request.data.sources[i]
            # If you need to determine type of image use https://docs.python.org/3/library/imghdr.html or another library.
            mimetype = "image/jpeg"
            images.append(BinaryImage(filename=img.filename, body=img.source.body, mimetype=mimetype))
        imagesEstimations = (await request.clients.sdk.sdk(images, estimateFaceDescriptor=1)).json["images_estimations"]
    
        descriptors = []
        for imageEstimations in imagesEstimations:
            estimations = imageEstimations["estimations"]
            if len(estimations) != 1:
                raise FaceCountException
            descriptors.append(estimations[0]["face"]["detection"]["attributes"]["descriptor"]["sdk_descriptor"])
    
        similarity = (
            await request.clients.matcher.matchRaw(
                candidates=[SDKDescriptorReference(descriptor=descriptors[0], referenceId=str(uuid4()))],
                references=[SDKDescriptorReference(descriptor=descriptors[1], referenceId=str(uuid4()))],
            )
        ).json["matches"][0]["matches"][0]["similarity"]
        if similarity > THRESHOLD:
            sourceData = {}
            if request.data.sourceData:
                sourceData = dict(
                    source=request.data.sourceData.source,
                    streamId=request.data.sourceData.streamId,
                    trackId=request.data.sourceData.trackId,
                    location=request.data.sourceData.location,
                )
            await request.clients.events.saveGeneralEvents(
                events=[
                    GeneralEvent(
                        accountId=request.accountId,
                        eventType="lambda_matching",
                        eventId=str(uuid4()),
                        createTime=datetime.now(tz=tz.tzlocal()).isoformat(),
                        eventContent={"similarity": similarity},
                        **sourceData,
                    )
                ]
            )
    
        return {"similarity": similarity}
    
    request example
    from luna3.common.requests import RequestPayload
    from luna3.luna_lambda.luna_lambda import LambdaApi
    
    SERVER_ORIGIN = "http://lambda_address:lambda_port"  # Replace by your values before start
    SERVER_API_VERSION = 1
    lambdaApi = LambdaApi(origin=SERVER_ORIGIN, api=SERVER_API_VERSION)
    lambdaId, accountId = "your_lambda_id", "your_account_id"  # Replace by your values before start
    
    
    def getImage(pathToImage):
        """
        Make sure pathToImage is valid path to specified image
        """
        with open(pathToImage, "rb") as file:
            return file.read()
    
    
    def makeRequest():
        data = {
            "aggregate_attributes": 0,
            "sources": [
                {"source": {"body": getImage("empty.jpeg")}, "filename": "empty.jpeg", "source_type": "raw_image"},
                {"source": {"body": getImage("empty.jpeg")}, "filename": "empty2.jpeg", "source_type": "raw_image"},
            ],
        }
        payload = RequestPayload.buildMsgpack(body=data)
        reply = lambdaApi.proxyLambdaPost(lambdaId=lambdaId, path="main", accountId=accountId, body=payload)
        return reply
    
    
    if __name__ == "__main__":
        response = makeRequest()
        print(response.json)
    
  • An example lambda which extracts a face descriptor and basic attributes and saves the face descriptor, sample, and event with the estimated data

    (such a lambda can be used as a canonical luna-platform handler)

    lambda_main.py
    import asyncio
    from datetime import datetime
    from uuid import uuid4
    
    from cow.errors.errors import Error
    from dateutil.tz import tz
    from luna3.public.common import BinaryImage
    from luna3.public.events import (
        BasicAttributes,
        BasicEthnicities,
        Descriptor,
        DetectionSamples,
        Event,
        EventDetection,
        EventFace,
        EventFaceAttributes,
        FaceSample,
    )
    from luna_lambda_tools import HandlersLambdaRequest, UserException
    from vlutils.descriptors.containers import sdkDescriptorDecode
    
    
    class ImageCountException(UserException):
        statusCode = 400
        errorText = "expected at least one image in request"
    
    
    class FaceCountException(UserException):
        statusCode = 400
        errorText = "excepted at least one face on each image"
    
    
    class DetectionException(UserException):
        statusCode = 400
        errorText = "failed to detect at least one face"
    
    
    async def saveData(request: HandlersLambdaRequest, image, handlerId: str, nowTime: str) -> tuple[dict, dict]:
        # If you need to determine type of image use https://docs.python.org/3/library/imghdr.html or another library.
        mimetype = "image/jpeg"
        sourceData = image.source.sourceData
        binaryImage = BinaryImage(filename=image.filename or "raw_image", body=image.source.body, mimetype=mimetype)
    
        imagesEstimations = (
            await request.clients.sdk.sdk(
                binaryImage, estimateFaceDescriptor=1, estimateBasicAttributes=1, estimateFaceWarp=1
            )
        ).json["images_estimations"]
        if len(imagesEstimations) < 1:
            raise FaceCountException
    
        imageEstimation = imagesEstimations[0]
        if len(imageEstimation["estimations"]) < 1:
            raise DetectionException
    
        imageName = imageEstimation["filename"]
        filename = image.filename or imageName
        externalId = sourceData.externalId or imageName
        faceDetection = imageEstimation["estimations"][0]["face"]["detection"]
        sdkDescriptor = faceDetection["attributes"]["descriptor"]["sdk_descriptor"]
        basicAttributes = faceDetection["attributes"]["basic_attributes"]
        faceSample = faceDetection["warp"]
    
        descriptorVersion, descriptor = sdkDescriptorDecode(sdkDescriptor)
    
        if request.faceSampleStoreEnabled:
            sampleId = (await request.clients.faceSamplesStore.postImage(imageInBytes=faceSample, raiseError=True)).json[
                "image_id"
            ]
        else:
            sampleId = None
    
        faceId = (
            await request.clients.faces.createFace(
                descriptors=[sdkDescriptor],
                externalId=externalId,
                descriptorSamples=[sampleId],
                raiseError=True,
            )
        ).json["face_id"]
    
        eventId = str(uuid4())
        event = Event(
            eventId=eventId,
            createTime=sourceData.eventTime or nowTime,
            handlerId=handlerId,
            externalId=externalId,
            userData=sourceData.userData,
            face=EventFace(faceId=faceId, lists=[]),
            faceAttributes=EventFaceAttributes(
                descriptorData=Descriptor(descriptor=descriptor, descriptorVersion=descriptorVersion),
                basicAttributes=BasicAttributes(
                    age=basicAttributes["age"],
                    gender=basicAttributes["gender"],
                    ethnicities=BasicEthnicities(
                        predominantEthnicity=basicAttributes["ethnicities"]["predominant_ethnicity"]
                    ),
                ),
            ),
            detections=[
                EventDetection(
                    filename=filename,
                    detectTime=image.detectTime or nowTime,
                    samples=DetectionSamples(face=FaceSample(sampleId=sampleId)),
                )
            ],
            streamId=sourceData.streamId,
        )
        if request.eventsEnabled:
            await request.clients.events.saveEvents([event], waitEventsSaving=True, raiseError=True)
        if request.senderEnabled:
            await request.clients.sender.publish(
                events=[{"value": f"Hello, I'm new event with id {eventId}!"}],
                handlerId="00000000-0000-4000-a000-000003491877",
                eventCreateTime=sourceData.eventTime or nowTime,
                eventEndTime=sourceData.eventEndTime or nowTime,
            )
    
        imageResult = {"filename": binaryImage.filename, "status": 1, "error": Error.Success.asDict()}
        eventResult = {
            "face_attributes": {
                "attribute_id": None,
                "url": None,
                "basic_attributes": basicAttributes,
                "samples": [sampleId],
            },
            "body_attributes": None,
            "aggregate_estimations": {
                "face": None,
                "body": None,
            },
            "source": sourceData.source,
            "tags": sourceData.tags or [],
            "external_id": "",
            "user_data": "",
            "face": {
                "face_id": faceId,
                "url": f"{request.clients.faces.getAddress()}/faces/{faceId}",
            },
            "event_id": eventId,
            "url": f"{request.clients.events.getAddress()}/events/{eventId}" if request.eventsEnabled else None,
            "matches": None,
            "location": sourceData.location.asDict() if sourceData.location is not None else {},
            "detections": [
                {
                    "filename": binaryImage.filename,
                    "samples": {
                        "face": {
                            "sample_id": sampleId,
                            "url": (
                                f"{request.clients.faceSamplesStore.getAddress()}/images/{sampleId}"
                                if request.faceSampleStoreEnabled
                                else None
                            ),
                            "detection": {},
                        },
                        "body": None,
                    },
                    "detect_time": image.detectTime or nowTime,
                    "detect_ts": image.detectTs or 123.456,
                    "image_origin": image.imageOrigin,
                }
            ],
            "track_id": sourceData.trackId,
            "meta": sourceData.meta,
        }
        return imageResult, eventResult
    
    
    async def main(request: HandlersLambdaRequest) -> dict:
        if not len(request.data.sources):
            raise ImageCountException
    
        handlerId = str(uuid4())
        nowTime = datetime.now(tz=tz.tzlocal()).isoformat()
        kwargs = {"request": request, "handlerId": handlerId, "nowTime": nowTime}
    
        results = await asyncio.gather(*[saveData(image=image, **kwargs) for image in request.data.sources])
        resultEvents, resultImages = [], []
        for imageResult, eventResult in results:
            resultEvents.append(eventResult)
            resultImages.append(imageResult)
    
        return {"events": resultEvents, "images": resultImages, "filtered_detections": {"face_detections": []}}
    
    request example
    from luna3.common.requests import RequestPayload
    from luna3.luna_lambda.luna_lambda import LambdaApi
    
    SERVER_ORIGIN = "http://lambda_address:lambda_port"  # Replace by your values before start
    SERVER_API_VERSION = 1
    lambdaApi = LambdaApi(origin=SERVER_ORIGIN, api=SERVER_API_VERSION)
    lambdaId, accountId = "your_lambda_id", "your_account_id"  # Replace by your values before start
    
    
    def getImage(pathToImage):
        """
        Make sure pathToImage is valid path to specified image
        """
        with open(pathToImage, "rb") as file:
            return file.read()
    
    
    def makeRequest():
        data = {
            "aggregate_attributes": 0,
            "sources": [
                {"source": {"body": getImage("empty.jpeg")}, "filename": "empty.jpeg", "source_type": "raw_image"},
            ],
        }
        payload = RequestPayload.buildMsgpack(body=data)
        reply = lambdaApi.proxyLambdaPost(lambdaId=lambdaId, path="main", accountId=accountId, body=payload)
        return reply
    
    
    if __name__ == "__main__":
        response = makeRequest()
        print(response.json)
    
  • An example lambda which gets a face detection and estimates a mask in the specified detection area directly using the LUNA SDK, available at https://github.com/VisionLabs/lunasdk (such a user lambda cannot be used as a canonical luna-platform handler)

    The LUNA python SDK must be added to requirements.txt (the requirements description is available here)

    Note

    The LUNA python SDK is named lunavl and is imported as the lunavl library in lambda code.

    requirements.txt
    https://github.com/VisionLabs/lunasdk/archive/refs/tags/v.2.1.4.tar.gz
    

    Note

    To develop a lambda with lunasdk locally, the LUNA FSDK python bindings must be installed first.

    Warning

    Such a user lambda requires the LUNA SDK data to be available to the lambda (the fsdk/data folder next to the lambda_main.py main file). The LUNA SDK archive is available on the VL release portal and must be extracted to the fsdk folder (see the archive file structure below). The only thing needed is the data folder with the used fsdk plans and the config files (faceengine.conf and runtime.conf). It is recommended not to include unused plans in the archive to decrease the resulting lambda image size.

    Archive file structure with files required for the example
      ├──lambda_main.py
      ├──requirements.txt
      └──fsdk
         └──data
             ├──faceengine.conf
             ├──runtime.conf
             ├──FaceDet_v3_a5_cpu-avx2.plan
             ├──FaceDet_v3_redetect_v3_cpu-avx2.plan
             ├──LNet_precise_v2_cpu-avx2.plan
             ├──mask_clf_v3_cpu-avx2.plan
             └──slnet_v5_cpu-avx2.plan
    
    lambda_main.py
    from luna_lambda_tools import HandlersLambdaRequest, UserException
    from lunavl.sdk.detectors.base import ImageForDetection
    from lunavl.sdk.faceengine.engine import VLFaceEngine
    from lunavl.sdk.faceengine.setting_provider import DetectorType
    from lunavl.sdk.image_utils.geometry import Rect
    from lunavl.sdk.image_utils.image import VLImage
    
    
    class ImageCountException(UserException):
        statusCode = 400
        errorText = "expected one image in request"
    
    
    class FaceDetectionException(UserException):
        statusCode = 400
        errorText = "failed to get face detection from image"
    
    
    async def main(request: HandlersLambdaRequest) -> dict:
        if len(request.data.sources) != 1:
            raise ImageCountException
    
        image = VLImage(body=request.data.sources[0].source.body)
        faceEngine = VLFaceEngine()
        detector = faceEngine.createFaceDetector(DetectorType.FACE_DET_V3)
        faceDetectionData = request.data.sources[0].source.faceDetectionData[0].boundingBox
        bbox = Rect(
            x=faceDetectionData.x, y=faceDetectionData.y, height=faceDetectionData.height, width=faceDetectionData.width
        )
        detections = detector.detect([ImageForDetection(image, bbox)])
        # detect() returns a list of detection lists, one per passed image
        faceDetections = detections[0]
        if not faceDetections:
            raise FaceDetectionException
        warper = faceEngine.createFaceWarper()
        warps = [warper.warp(faceDetection) for faceDetection in faceDetections]
        maskEstimator = faceEngine.createMaskEstimator()
        mask = await maskEstimator.estimate(warps[0].warpedImage, asyncEstimate=True)
    
        return {"results": mask.asDict()}
    
    request example
    from luna3.common.requests import RequestPayload
    from luna3.luna_lambda.luna_lambda import LambdaApi
    
    SERVER_ORIGIN = "http://lambda_address:lambda_port"  # Replace by your values before start
    SERVER_API_VERSION = 1
    lambdaApi = LambdaApi(origin=SERVER_ORIGIN, api=SERVER_API_VERSION)
    lambdaId, accountId = "your_lambda_id", "your_account_id"  # Replace by your values before start
    
    
    def getImage(pathToImage):
        """
        Make sure pathToImage is valid path to specified image
        """
        with open(pathToImage, "rb") as file:
            return file.read()
    
    
    def makeRequest():
        data = {
            "aggregate_attributes": 0,
            "sources": [
                {
                    "source": {
                        "body": getImage("empty.jpeg"),
                        "face_detection_data": [{"bounding_box": {"width": 250, "height": 250, "x": 0, "y": 0}}],
                    },
                    "filename": "empty.jpeg",
                    "source_type": "raw_image",
                },
            ],
        }
        payload = RequestPayload.buildMsgpack(body=data)
        reply = lambdaApi.proxyLambdaPost(lambdaId=lambdaId, path="main", accountId=accountId, body=payload)
        return reply
    
    
    if __name__ == "__main__":
        response = makeRequest()
        print(response.json)
    

    The LUNA SDK can work with CPU or GPU; by default, all estimations, extractions, and so on are carried out using the CPU. GPU usage allows speeding up most of the above actions. For more information about GPU usage, see the LUNA SDK documentation.

    The required GPU plans also need to be added to the fsdk/data folder.

    Add the following code to the example to enable GPU usage for all estimators/extractors:

    example
         from lunavl.sdk.launch_options import DeviceClass
         from lunavl.sdk.faceengine.setting_provider import RuntimeSettingsProvider
    
         ...
    
                runtimeSettings = RuntimeSettingsProvider()
                runtimeSettings.runtimeSettings.deviceClass = DeviceClass.gpu
                faceEngine = VLFaceEngine(runtimeConf=runtimeSettings)
    
         ...
    

    Add the following code to the example to enable GPU usage for one specified estimator/extractor:

    example
         from lunavl.sdk.launch_options import DeviceClass, LaunchOptions
    
         ...
    
                faceEngine = VLFaceEngine()
                extractor = faceEngine.createFaceDescriptorEstimator(launchOptions=LaunchOptions(deviceClass=DeviceClass.gpu))
    
         ...