Financial Scenarios API
Please note
The Financial Scenario API is a set of common APIs (/develop/api/basic.html) that FinClip maintains in collaboration with FinClip mini program ecosystem partners for financial scenarios. Before using the APIs below, make sure the host App has integrated both the FinClip SDK and the relevant third-party SDKs; otherwise the mini program will not be able to provide the corresponding functionality. If you have any questions while using them, please contact us.
1. Background of use
The contexts in which the Financial Scenario API is used are as follows:
- If the App integrates the FinClip SDK plus an SDK that has already been interfaced with FinClip, the mini program can call the API directly and achieve the relevant functional goals.
- If the App integrates only the FinClip SDK, without an SDK for functions interfaced with FinClip, the mini program will not respond to calls to the Financial Scenario API.
- If the App integrates the FinClip SDK plus an SDK that has not been interfaced with FinClip, and the host App registers the corresponding APIs itself through the custom API registration mechanism, the mini program can call the relevant API to implement the specified function.
On balance, we recommend:
- Developers of financial-industry mini programs can use the APIs described on this page directly in their business code; standardising the API calls minimises the amount of custom development required.
- Host App vendors in the financial industry can integrate the FinClip SDK plus the SDKs for whichever functions they actually need; following this specification minimises the integration, development and interfacing workload.
Cooperation notes
Third-party SDK developers can follow FinClip's unified development specifications to design specific ways of integrating the relevant functions into mini program scenarios. Call 0755-86967467 or email wangzi@finogeeks.com to learn more.
2. Authentication-related APIs
2.1 OCR card recognition (finCardOcr)
Interface name: finCardOcr
Interface dependencies: none (can be called using any version of FinClip SDK)
Request Parameters

Name | Type | Default | Description |
---|---|---|---|
type | String | non-empty | "BankCard": bank card recognition<br />"IDCard": ID card recognition<br />"IDCardCheck": ID photo quality check<br />"BusiCert": business licence recognition |
imagePath | String | | Path to the photo file.<br />The mini program can obtain this via chooseImage.<br />iOS and Android both pass the absolute path of the file.<br />On iOS, writing the UIImage to a temporary file is a simple way to avoid file-access issues. |
Return results

Name | Type | Default | Description |
---|---|---|---|
type | String | non-empty | "BankCard": bank card recognition<br />"IDCard": ID card recognition<br />"IDCardCheck": ID photo quality check<br />"BusiCert": business licence recognition |
errorCode | Int | 0 | Recognition result; 0 means success |
description | String | | Description of the recognition result |
recogResult | Object | | Recognition result, e.g. {"cardNo": card number, ...} |

recogResult field descriptions

ID card
Name | Description |
---|---|
face | 0: front, 1: back |
nation | Ethnicity (front side only) |
gender | Gender (front side only) |
birthday | Date of birth (front side only) |
address | Address (front side only) |
name | Name (front side only) |
idNo | ID number (front side only) |
startDate | Start of the validity period (back side only) |
endDate | End of the validity period (back side only) |
signOrg | Issuing authority (back side only) |
ID photo quality check

Name | Description |
---|---|
risk | Image risk:<br />0 = none<br />1 = copy<br />2 = photo of a screen<br />3 = fake ID<br />4 = watermarked<br />5 = obscured<br />6 = cut edges<br />7 = card distortion<br />8 = light spots |
Bank card

Name | Description |
---|---|
cardNo | Card number |
bankName | Bank name |
bankId | Bank ID |
cardType | Card type: debit card, quasi-credit card |
Business licence

Name | Description |
---|---|
regOrg | Registration authority |
busiScrope | Business scope |
certNo | Unified social credit code / business licence number |
regDate | Registration date |
capital | Registered capital |
address | Registered address |
expDate | Business term |
represent | Legal representative |
certType | Licence type: original or copy |
corpName | Business name |
corpType | Business type |
foundDate | Date of establishment |
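The shape of a finCardOcr call might look like the sketch below. The success/fail callback style follows the usual mini program API convention; the stub implementation of `ft` is purely illustrative, since in a real mini program the host SDK injects `ft`, and the card number shown is fabricated.

```javascript
// Illustrative stub for the native bridge: in a real mini program the
// FinClip SDK injects `ft`, so this stub exists only to make the sketch runnable.
const ft = {
  finCardOcr({ type, imagePath, success, fail }) {
    // Pretend the native OCR recognised a bank card successfully.
    success({
      type,
      errorCode: 0,
      description: "ok",
      recogResult: { cardNo: "6222020200112233445", bankName: "Demo Bank" },
    });
  },
};

// Typical call shape: pass the photo path obtained from chooseImage.
ft.finCardOcr({
  type: "BankCard",
  imagePath: "/tmp/card.jpg", // absolute file path on iOS/Android
  success(res) {
    if (res.errorCode === 0) {
      console.log("card number:", res.recogResult.cardNo);
    }
  },
  fail(err) {
    console.error("OCR failed:", err);
  },
});
```

The same call shape applies to the other `type` values; only the fields inside recogResult change, as listed in the tables above.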
2.2 Facial identity verification (finFaceAuth)
Interface name: finFaceAuth
Interface dependencies: none (can be called using any version of FinClip SDK)
Request Parameters
Name | Type | Default | Description |
---|---|---|---|
idNo | String | | ID number |
name | String | | Name |
imagePath | String | | Path to the photo file |
Return results
Name | Type | Default | Description |
---|---|---|---|
errorCode | Int | 0 | Recognition result; 0 means success |
description | String | | Description of the recognition result |
score | Double | 0-1 | Similarity |
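A hedged sketch of how a page might act on the similarity score. The `ft` stub, the sample ID data, the 0.92 score and the 0.8 threshold are all illustrative assumptions; the real acceptance threshold is a business decision.

```javascript
// Illustrative stub for the native bridge (`ft` is injected by the FinClip
// SDK at runtime); the 0.92 score below is fabricated.
const ft = {
  finFaceAuth({ idNo, name, imagePath, success, fail }) {
    success({ errorCode: 0, description: "ok", score: 0.92 });
  },
};

// The score is a similarity in [0, 1]; the acceptance threshold is a
// business decision, and 0.8 here is only an example.
const THRESHOLD = 0.8;

ft.finFaceAuth({
  idNo: "110101199003070000", // fabricated ID number
  name: "Zhang San",
  imagePath: "/tmp/face.jpg",
  success(res) {
    const passed = res.errorCode === 0 && res.score >= THRESHOLD;
    console.log(passed ? "identity verified" : "verification failed");
  },
  fail(err) {
    console.error(err);
  },
});
```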
2.3 Liveness detection (finLivenessCheck)
Interface name: finLivenessCheck
Interface dependencies: none (can be called using any version of FinClip SDK)
Request Parameters
None
Return results
Name | Type | Default | Description |
---|---|---|---|
resultType | String | | "success": success; "back": the user tapped back to cancel the liveness check |
faceImgStr | String | | Image returned by liveness detection, as a Base64-encoded byte array; only returned on success |
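A sketch of consuming the liveness result, distinguishing the user-cancelled case from success and decoding the Base64 image. The `ft` stub and Node's Buffer are stand-ins for illustration only; inside a real mini program the SDK injects `ft`, and you would decode the Base64 string with whatever base64 utility the runtime provides.

```javascript
// Illustrative stub for the native bridge; the real `ft` object is injected
// by the FinClip SDK inside the mini program runtime.
const ft = {
  finLivenessCheck({ success, fail }) {
    // Pretend the user passed the liveness check.
    success({
      resultType: "success",
      faceImgStr: Buffer.from("fake-image-bytes").toString("base64"),
    });
  },
};

ft.finLivenessCheck({
  success(res) {
    if (res.resultType === "back") {
      console.log("user cancelled the liveness check");
      return;
    }
    // faceImgStr is a Base64-encoded byte array, present only on success.
    // Node's Buffer is used here purely for illustration.
    const imageBytes = Buffer.from(res.faceImgStr, "base64");
    console.log("liveness image bytes:", imageBytes.length);
  },
  fail(err) {
    console.error(err);
  },
});
```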
2.4 Opening a third-party app (finOpenOtherApp)
Interface name: finOpenOtherApp
Interface dependencies: none (can be called using any version of FinClip SDK)
Request Parameters
Name | Type | Default | Description |
---|---|---|---|
package | String | | App package name, used by Android to determine whether the app is installed (Android only) |
url | String | | The app's scheme plus URL; used by iOS to determine whether the app is installed and to jump to it, and used by Android to jump |
downloadUrl | String | | App download address |
alertMsg | String | | Message for the download alert box; if omitted, the page is opened without showing an alert box |
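The sketch below shows one plausible way to drive finOpenOtherApp, falling back to the download flow when the target app is missing. The `ft` stub, package name, scheme and URLs are all fabricated for illustration; the real installed/not-installed decision happens in the native layer.

```javascript
// Illustrative stub for the native bridge; whether the target app is
// installed is decided natively, so this stub simply pretends the app
// is missing and triggers the download fallback.
const ft = {
  finOpenOtherApp({ package: pkg, url, downloadUrl, alertMsg, success, fail }) {
    fail({ errMsg: `${pkg} not installed`, downloadUrl, alertMsg });
  },
};

ft.finOpenOtherApp({
  package: "com.example.bankapp",      // Android: used to check installation
  url: "bankapp://open?page=home",     // iOS: scheme + URL; Android: jump target
  downloadUrl: "https://example.com/bankapp.apk", // fabricated address
  alertMsg: "Install the Bank app to continue?",
  success() {
    console.log("jumped to the third-party app");
  },
  fail(err) {
    // Fall back to the download flow.
    console.log("fallback:", err.errMsg, err.downloadUrl);
  },
});
```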
2.5 Two-way video authentication (finOpenWitnessVideo)
Interface name: finOpenWitnessVideo
Interface dependencies: none (can be called using any version of FinClip SDK)
Request Parameters
Name | Type | Default | Description |
---|---|---|---|
videoType | String | | Video service type |
videoIp | String | | Video service IP |
videoPort | String | | Video service port |
loginName | String | | Login name |
loginPwd | String | | Login password (optional) |
roomId | String | | Room ID |
roomName | String | | Room name |
roomPwd | String | | Room password (optional) |
appId | String | | AnyChat cluster appId |
Return results
Name | Type | Default | Description |
---|---|---|---|
videoFlag | String | non-empty | Return flag: 0 = success, 1 = failure, 2 = rejected |
rejectReason | String | | Reason for rejection |
message | String | | Details |
2.6 One-way video recording mini program implementation
Interface dependencies: this component does not depend on any third-party SDK other than the FinClip SDK.

Using the one-way recording component of the mini program

1. Download the recording component sample code package (recording demo v1.0.0). The component code is in the components directory in the root of the package; you can also open the recording demo directly with the IDE tool to see the result.
2. Register the recording component in the json file of the page or component that will use it:
```json
{
  "usingComponents": {
    "video-recognition": "../../components/video-recognition/index"
  }
}
```
3. Within the fxml of the page or component, use the component:
```html
<view style="width: 100vw; height: 100vh;">
  <video-recognition recordTime="{{recordTime}}"
    top="{{top}}"
    stepList="{{stepList}}"
    buttonStyle="{{buttonStyle}}"
    mask="../../assets/img_mask_person@3x.png"
    resolution="low"
    bind:onRecordReady="onRecordReady"
    bind:onRecordStart="onRecordStart"
    bind:onRecordEnd="onRecordEnd"
    bind:onRecordError="onRecordError">
  </video-recognition>
</view>
```
Note
The width and height must be declared on the outer wrapper of the recording component; the component itself renders at width: 100%; height: 100% inside that wrapper.
4. Component parameters at a glance
Name | Type | Required | Default | Remarks |
---|---|---|---|---|
resolution | String | No | medium | Resolution; optional values: low, medium, high. Valid only at initialization, cannot be changed dynamically |
mask | String | No | - | Path to the mask resource for the framing area. We recommend a relative path to a resource inside the mini program, since an https address takes time to load. The mask is placed over the camera at 100% width and 100% height, so make sure it matches the size of the component |
recordTime | Number | No | 30000 | Recording time in milliseconds |
top | Number | No | 20 | Distance of the text tips from the top, in rpx; alternatively, modify the wxss inside the video-recognition component to customise the text position |
stepList | Array&lt;Object&gt; | No | - | Configures the voice and prompt file for each step; maximum supported length is 3. See the description after the table for the data element structure |
buttonStyle | Object | No | - | Controls the style of the record button, allowing you to fine-tune its position. Currently supported fields: width, height, left, top, bottom, right. Valid only at initialization, cannot be changed dynamically |
onRecordReady | EventHandler | No | - | Bind via bind:onRecordReady. Some asynchronous resources are downloaded before the component is ready; this event fires when the resources are ready, so the page using the component can tell whether the component is ready and control loading and display accordingly |
onRecordStart | EventHandler | No | - | Bind via bind:onRecordStart; triggered when recording starts |
onRecordEnd | EventHandler | No | - | Bind via bind:onRecordEnd; triggered when recording ends. The callback parameter res carries res.tempVideoPath, the local file path of the recorded video |
onRecordError | EventHandler | No | - | Bind via bind:onRecordError; triggered when a recording error occurs. The callback parameter res carries res.errMsg, the error message |

stepList parameter description

Configures the voice and prompt file for each step; the maximum supported length is 3. The data elements are structured as follows:
```json
{
  "audioSrc": "https://xxxxx.mp3",
  "showTime": 0,
  "textList": []
}
```
audioSrc - the audio link; an https link is recommended. The component downloads the audio resource; if the download fails, it executes the error callback and reports a resource error.
Note
The domain name of the mp3 must be added to the whitelist in the administration backend, otherwise the download will fail
showTime - the display time for the text cue and audio, in milliseconds; 0 shows it from the start, 2000 shows it 2 s into the recording.

textList - the text tips, array type. The textList parameter object is as follows:
```json
{
  "text": "Please read aloud in Mandarin"
}
```
text - the text content. The text style can be controlled simply by adding style attributes such as width, height, padding, margin, color, fontSize, fontWeight and textAlign:
```json
{
  "text": "text",
  "color": "red",
  "fontWeight": "bold",
  "margin": "0 20rpx"
}
```
Note
A textList child element is displayed as a single line without line breaks; split it into multiple child elements as needed.
Alternatively, if individual words need to be highlighted within a single line, the text attribute can be an array whose items have the same attributes as the object parameter above, as follows:
```json
{
  "text": [
    {
      "text": "text"
    },
    {
      "text": "Highlighted text",
      "color": "red",
      "fontWeight": "bold",
      "margin": "0 20rpx"
    },
    {
      "text": "text"
    }
  ]
}
```
stepList is only valid at initialisation and cannot be changed dynamically. This component is developed with the mini program's native syntax; in addition to the parameters described here, you can modify the logic inside the component to meet different business needs.
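To tie the parameters together, here is a sketch of the data and event handlers a page might pass to the video-recognition component. The Page stub, audio URLs and prompt texts are illustrative placeholders only; in a real page, Page() is provided by the framework.

```javascript
// `Page` is stubbed so the sketch runs outside the mini program runtime.
const Page = (options) => options;

const page = Page({
  data: {
    recordTime: 15000, // record for 15 s instead of the default 30 s
    top: 40,           // text tips 40 rpx from the top
    stepList: [
      {
        audioSrc: "https://example.com/step1.mp3", // placeholder URL
        showTime: 0, // shown from the start of recording
        textList: [{ text: "Please read aloud in Mandarin" }],
      },
      {
        audioSrc: "https://example.com/step2.mp3", // placeholder URL
        showTime: 5000, // shown 5 s into the recording
        textList: [
          {
            // Highlight individual words by passing an array as `text`.
            text: [
              { text: "I agree to the " },
              { text: "risk disclosure", color: "red", fontWeight: "bold" },
            ],
          },
        ],
      },
    ],
    buttonStyle: { width: "120rpx", height: "120rpx", bottom: "60rpx" },
  },
  onRecordEnd(res) {
    // res.tempVideoPath is the local file path of the recorded video.
    console.log("recorded video at:", res.tempVideoPath);
  },
  onRecordError(res) {
    console.error("recording failed:", res.errMsg);
  },
});
```

Remember that stepList and buttonStyle are only read at initialisation, so build them before the component renders.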
3. WebRTC-related APIs
To support WebRTC functionality in mini programs, the FinClip mini program SDK provides WebRTC-related APIs natively in the base library. To reduce the cost of migrating HTML5 code to mini programs, we have kept the WebRTC-related APIs as close to the standard as possible. The mini program APIs are as follows.
Please note
This feature is based on the Google WebRTC library. You will need to confirm that your base library and SDK versions support it.
3.1 MediaDevices
ft.webrtc.mediaDevices.enumerateDevices()

Gets information about the audio and video hardware available to WebRTC. Returns a Promise after execution.

Promise return value
Properties | Type | Description |
---|---|---|
devicesList | Array | Array containing device information objects |
Example code
```javascript
const devicesList = await ft.webrtc.mediaDevices.enumerateDevices();
// or
ft.webrtc.mediaDevices.enumerateDevices().then((devicesList) => {
  console.log(devicesList);
});
```
ft.webrtc.mediaDevices.getSupportedConstraints()

Gets the constraint properties supported by the current device (e.g. frame rate, window size). Returns a Promise after execution.

Promise return value

Properties | Type | Description |
---|---|---|
info | Object | An object containing the constraint properties of the current device |

Example code
```javascript
const info = await ft.webrtc.mediaDevices.getSupportedConstraints();
// or
ft.webrtc.mediaDevices.getSupportedConstraints().then((info) => {
  console.log(info);
});
```
mediaDevices.getUserMedia(Object object)

Asks the user for permission to use a media input. Returns a Promise that finally resolves to an Object.
Note
On mobile, getUserMedia has call limits: once getUserMedia has obtained a stream and the WebRTC connection is established, calling getUserMedia again may throw an error. Avoid repeated calls in your business logic.
Parameters (Object object)

Property | Type | Default | Required | Description |
---|---|---|---|---|
video | boolean | | yes | Whether to get the video stream |
audio | boolean | | yes | Whether to get the audio stream |

Promise return value

Properties | Type | Description |
---|---|---|
stream | Object | A regular media stream cannot be transferred to a mini program; the stream here is a simple wrapper object containing only a streamId and a getTracks method |

Example code
```javascript
const stream = await ft.webrtc.mediaDevices.getUserMedia({
  audio: true,
  video: true,
});
// or
ft.webrtc.mediaDevices.getUserMedia({ audio: true, video: true }).then((stream) => {
  console.log(stream);
  console.log(stream.streamId);
});
```
stream.streamId

String. Pass the streamId to the webrtcVideo component to play the stream; see the webrtcVideo component documentation for details.

stream.getTracks()

Gets the array of tracks for the stream. Returns a Promise that finally resolves to an array of tracks.

Promise return value

Properties | Type | Description |
---|---|---|
tracks | Array | The stream's tracks array, usually containing video tracks and audio tracks |

Example code
```javascript
const stream = await ft.webrtc.mediaDevices.getUserMedia({
  audio: true,
  video: true,
});
const tracks = await stream.getTracks();
tracks.forEach((track) => {
  console.log(track);
});
```
track.stop()

Closes the track; useful when the stream needs to be stopped.

Example code
```javascript
const stream = await ft.webrtc.mediaDevices.getUserMedia({
  audio: true,
  video: true,
});
const tracks = await stream.getTracks();
tracks.forEach((track) => {
  track.stop();
});
```
3.2 RTCPeerConnection
ft.webrtc.createRTCPeerConnection(Object options)

Creates an RTC connection instance. Returns a Promise that finally resolves to the instance object.

Parameters (Object options)

The parameters are passed through to createRTCPeerConnection; see the WebRTC standard documentation for the specific options.

Example code
```javascript
const options = { iceServers: [{ urls: "stun:stun.stunprotocol.org" }] };
const pc = await ft.webrtc.createRTCPeerConnection(options);
```
RTCPeerConnection properties

Currently supported RTCPeerConnection properties:
Properties | Description |
---|---|
canTrickleIceCandidates | |
connectionState | |
currentLocalDescription | |
currentRemoteDescription | |
iceConnectionState | |
iceGatheringState | |
localDescription | |
peerIdentity | |
remoteDescription | |
signalingState | |
Example code
```javascript
const options = { iceServers: [{ urls: "stun:stun.stunprotocol.org" }] };
const pc = await ft.webrtc.createRTCPeerConnection(options);
console.log(pc.canTrickleIceCandidates);
console.log(pc.currentLocalDescription);
console.log(pc.currentRemoteDescription);
console.log(pc.peerIdentity);
```
RTCPeerConnection event listening

Currently supported RTCPeerConnection events:

Property | Event object | Description |
---|---|---|
icecandidate | { candidate: { ... } } | Returns icecandidate information when triggered, containing only the candidate field data |
iceconnectionstatechange | { iceConnectionState, timeStamp } | |
negotiationneeded | | |
signalingstatechange | { signalingState } | |
track | { streams } | Each array item of streams is an object containing a streamId |

Example code
```javascript
const options = { iceServers: [{ urls: "stun:stun.stunprotocol.org" }] };
const pc = await ft.webrtc.createRTCPeerConnection(options);
pc.addEventListener("icecandidate", (event) => {
  console.log(event.candidate);
  console.log(event.candidate.address);
  console.log(event.candidate.type);
  console.log(event.candidate.sdpMLineIndex);
  console.log(event.candidate.sdpMid);
});
pc.addEventListener("iceconnectionstatechange", (event) => {
  console.log(event.iceConnectionState);
});
pc.addEventListener("negotiationneeded", (event) => {
  console.log(event);
});
pc.addEventListener("signalingstatechange", (event) => {
  console.log(event);
});
pc.addEventListener("track", (event) => {
  console.log(event.streams);
  // Pass the streamId to the webrtcVideo component to play the stream;
  // see the webrtcVideo component documentation for details.
  this.setData({
    remoteStreamId: event.streams[0].streamId,
  });
});
```
RTCPeerConnection.createOffer(Object object)
Creates an SDP offer. Returns a Promise that finally resolves to an offer Object.

Parameters (Object object)

Property | Type | Default | Required | Description |
---|---|---|---|---|
iceRestart | boolean | false | no | |
offerToReceiveAudio | boolean | false | no | |
offerToReceiveVideo | boolean | false | no | |
voiceActivityDetection | boolean | true | no | |

Promise return value

Properties | Type | Description |
---|---|---|
offer | Object | An Object containing two fields, sdp and type |

Example code
```javascript
const pc = await ft.webrtc.createRTCPeerConnection(options);
const offer = await pc.createOffer({
  offerToReceiveAudio: true,
  offerToReceiveVideo: true,
});
await pc.setLocalDescription(offer);
```
RTCPeerConnection.createAnswer(Object object)

Creates an SDP answer. Returns a Promise that finally resolves to an answer Object.

Parameters (Object object)

Property | Type | Default | Required | Description |
---|---|---|---|---|
iceRestart | boolean | false | no | |
offerToReceiveAudio | boolean | false | no | |
offerToReceiveVideo | boolean | false | no | |
voiceActivityDetection | boolean | true | no | |

Promise return value

Properties | Type | Description |
---|---|---|
answer | Object | An Object containing two fields, sdp and type |

Example code
```javascript
const pc = await ft.webrtc.createRTCPeerConnection(options);
const answer = await pc.createAnswer({
  offerToReceiveAudio: true,
  offerToReceiveVideo: true,
});
```
RTCPeerConnection.setLocalDescription(Object object)

Sets the local offer/answer. Returns a Promise after execution.

Parameters (Object object)

Pass in the offer or answer obtained earlier.

Example code
```javascript
const pc = await ft.webrtc.createRTCPeerConnection(options);
const offer = await pc.createOffer({
  offerToReceiveAudio: true,
  offerToReceiveVideo: true,
});
await pc.setLocalDescription(offer);
```
RTCPeerConnection.setRemoteDescription(Object object)

Sets the remote offer/answer. Returns a Promise after execution.

Parameters (Object object)

Pass in the offer or answer obtained from the remote end.

Example code
```javascript
const pc = await ft.webrtc.createRTCPeerConnection(options);
// Pseudo code: the offer is sent from the remote end
const offer = getFromRemote();
await pc.setRemoteDescription(offer);
const answer = await pc.createAnswer({
  offerToReceiveAudio: true,
  offerToReceiveVideo: true,
});
await pc.setLocalDescription(answer);
```
RTCPeerConnection.addIceCandidate(Object object)

Adds a candidate to the connection.

Parameters (Object object)

Pass the candidate object obtained from the icecandidate event. Some attributes may vary slightly from one end to the other, depending on the actual values obtained.

Example code
```javascript
// Pseudo-code
// A side
const pcA = await ft.webrtc.createRTCPeerConnection(options);
pcA.addEventListener("icecandidate", (event) => {
  // Send to the B side
  sendToB(event.candidate);
});
// B side
const pcB = await ft.webrtc.createRTCPeerConnection(options);
const candidate = await getFromA();
await pcB.addIceCandidate(candidate);
```
RTCPeerConnection.getConfiguration()

Gets the configuration of the connection. Returns a Promise after execution.

Example code
```javascript
const pc = await ft.webrtc.createRTCPeerConnection(servers, mediaConstraints);
const configuration = await pc.getConfiguration();
```
RTCPeerConnection.addTrack(Object object)

Adds a track to the current connection; note that this interface is asynchronous.

Parameters (Object object)

Pass a track object, i.e. a track obtained from the getUserMedia stream via getTracks.

Example code
```javascript
const stream = await ft.webrtc.mediaDevices.getUserMedia({
  audio: true,
  video: true,
});
const pc = await ft.webrtc.createRTCPeerConnection(servers, mediaConstraints);
const tracks = await stream.getTracks();
tracks.forEach((t) => {
  pc.addTrack(t);
});
```
RTCPeerConnection.close()
Closes the connection.

Example code
```javascript
const pc = await ft.webrtc.createRTCPeerConnection(servers, mediaConstraints);
pc.close();
```
3.3 WebRTC related components
WebRTC Video component

Component for playing WebRTC media streams.

Properties

Property | Type | Default | Required | Description |
---|---|---|---|---|
src | string | | no | Must be in the format webrtc:// followed by the streamId; the streamId can be retrieved via getUserMedia or the connection's track event |
muted | boolean | false | no | Whether the video is muted |

Example code
```html
<webrtc-video muted src="webrtc://{{localStreamId}}"></webrtc-video>
<webrtc-video src="webrtc://{{remoteStreamId}}"></webrtc-video>
```
```javascript
// getUserMedia gets the local stream
const stream = await ft.webrtc.mediaDevices.getUserMedia({
  audio: true,
  video: true,
});
const { streamId } = stream;
this.setData({
  localStreamId: streamId,
});
// webrtc connection track event to get the remote video stream
const pc = await ft.webrtc.createRTCPeerConnection();
pc.addEventListener("track", (event) => {
  const { streams } = event;
  this.setData({
    remoteStreamId: streams[0].streamId,
  });
});
```
WebRTC Audio component

Component for playing WebRTC media streams; it differs from WebRTC Video in that only the audio is played.

Properties

Property | Type | Default | Required | Description |
---|---|---|---|---|
src | string | | no | Must be in the format webrtc:// followed by the streamId; the streamId can be retrieved via getUserMedia or the connection's track event |

Example code
```html
<webrtc-audio src="webrtc://{{localStreamId}}"></webrtc-audio>
<webrtc-audio src="webrtc://{{remoteStreamId}}"></webrtc-audio>
```
```javascript
// getUserMedia gets the local stream
const stream = await ft.webrtc.mediaDevices.getUserMedia({ audio: true });
const { streamId } = stream;
this.setData({
  localStreamId: streamId,
});
// webrtc connection track event to get the remote stream
const pc = await ft.webrtc.createRTCPeerConnection();
pc.addEventListener("track", (event) => {
  const { streams } = event;
  this.setData({
    remoteStreamId: streams[0].streamId,
  });
});
```