TABLE OF CONTENTS
- AI Server: TensorFlow Integration for Image/Video Recognition
- Technology Employed by FTK
AI Server: TensorFlow Integration for Image/Video Recognition
The AccessData AI Server is built on top of Google’s TensorFlow technology. It provides the capability to run image/video recognition using pretrained models packaged within the application. These models can be utilised against multiple objects and cases, and are fully supported in FTK Plus.
Image recognition is a resource-intensive process; however, the AI Server supports NVIDIA CUDA GPU acceleration. Whether your GPU supports CUDA can be checked on NVIDIA’s list of CUDA-enabled GPUs.
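As a quick way to confirm the machine has a CUDA-capable GPU before enabling acceleration, the `nvidia-smi` tool (installed with the NVIDIA driver) can be queried and its CSV output parsed. This is a sketch, not part of the AI Server; the helper names are hypothetical.

```python
import subprocess

def parse_gpu_names(csv_text):
    """Parse the output of `nvidia-smi --query-gpu=name --format=csv,noheader`.

    Returns a list of GPU names, one per detected device.
    """
    return [line.strip() for line in csv_text.splitlines() if line.strip()]

def detect_cuda_gpus():
    """Return detected NVIDIA GPU names, or an empty list if nvidia-smi is unavailable."""
    try:
        out = subprocess.run(
            ["nvidia-smi", "--query-gpu=name", "--format=csv,noheader"],
            capture_output=True, text=True, check=True,
        ).stdout
    except (FileNotFoundError, subprocess.CalledProcessError):
        return []  # no driver installed, or the query failed: treat as no GPU
    return parse_gpu_names(out)

if __name__ == "__main__":
    gpus = detect_cuda_gpus()
    print(gpus or "No CUDA-capable GPU detected")
```

An empty result simply means AI jobs will run on the CPU, since GPU acceleration is optional.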
Technology Employed by FTK
AI Server – TensorFlow
Google’s TensorFlow uses various algorithms and models for image/video recognition and classification tasks. The AI Server is built on top of this technology, giving you the capability to train on custom objects/faces and then identify similar objects/faces with an AI model.
Supported TensorFlow AI Jobs
- Utilises pre-created AI models for object-based image recognition during evidence processing.
- Live Facial Recognition: utilises pre-created AI models for facial-based image recognition during evidence processing.
- Utilises pre-created AI models for object-based video recognition during evidence processing.
The following steps must be followed on the machine where the GPU intended for AI jobs is located.
- AI Server
- Python 3.7 x64 (Installed with AI Server)
- CUDA 10.* or CUDA 11.* (Optional)
- cuDNN compatible with your CUDA-enabled GPU. A developer account is needed; this can be created for free on the NVIDIA website.
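The prerequisites above can also be checked programmatically. The sketch below uses hypothetical helper names (they are not part of the AI Server installer): it verifies the Python version and looks for a CUDA toolkit via the CUDA_PATH environment variable, which the CUDA installer sets on Windows.

```python
import os
import sys

def python_ok(version_info=sys.version_info):
    """The AI Server bundles Python 3.7 x64; accept 3.7 or newer here."""
    return (version_info[0], version_info[1]) >= (3, 7)

def cuda_toolkit_path(env=os.environ):
    """Return the CUDA toolkit path from CUDA_PATH, or None if CUDA is absent.

    CUDA is optional, so None simply means CPU-only processing.
    """
    return env.get("CUDA_PATH")

if __name__ == "__main__":
    print("Python >= 3.7:", python_ok())
    print("CUDA toolkit:", cuda_toolkit_path() or "not found (optional)")
```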
CUDA 10/11 Installation (Optional)
- Run the CUDA exe as an administrator.
- Select an extraction path and click OK.
- Review the License Agreement and click AGREE AND CONTINUE.
- Select Custom (Advanced) and click NEXT.
- Expand CUDA, deselect Visual Studio Integration, and click NEXT.
- Ensure the installation locations are noted and click NEXT.
- Click CLOSE once the installation is complete.
cuDNN Installation (Optional)
- Locate and open the directory for NVIDIA GPU Computing Toolkit.
- Navigate to the cuDNN archive and open it.
- Copy and paste the following files:
- Open \cuda\bin\ and copy cudnn64_7.dll to \NVIDIA GPU Computing Toolkit\CUDA\v10.*\bin\.
- Open \cuda\include\ and copy cudnn.h to \NVIDIA GPU Computing Toolkit\CUDA\v10.*\include\.
- Open \cuda\lib\x64\ and copy cudnn.lib to \NVIDIA GPU Computing Toolkit\CUDA\v10.*\lib\x64\.
cuDNN installation for GPUs supporting CUDA 11 requires additional DLL files to be added to \NVIDIA GPU Computing Toolkit\CUDA\v11.*\bin\.
These additional DLL files will be provided upon request.
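The three copy steps above can be scripted. The sketch below is an assumption-laden convenience, not an official tool: the source and toolkit paths are placeholders for your actual extraction and install locations, and the v10.* wildcard is resolved with a glob.

```python
import glob
import os
import shutil

# Placeholder paths: adjust to your cuDNN extraction folder and CUDA install.
CUDNN_DIR = r"C:\Temp\cuda"
TOOLKIT_GLOB = r"C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.*"

# (relative source file, relative destination subdirectory) pairs from the steps above.
FILES = [
    (os.path.join("bin", "cudnn64_7.dll"), "bin"),
    (os.path.join("include", "cudnn.h"), "include"),
    (os.path.join("lib", "x64", "cudnn.lib"), os.path.join("lib", "x64")),
]

def copy_cudnn(cudnn_dir, toolkit_dir, files=FILES):
    """Copy the three cuDNN files into the matching CUDA toolkit subdirectories."""
    copied = []
    for rel_src, rel_dst in files:
        src = os.path.join(cudnn_dir, rel_src)
        dst = os.path.join(toolkit_dir, rel_dst, os.path.basename(rel_src))
        os.makedirs(os.path.dirname(dst), exist_ok=True)
        shutil.copyfile(src, dst)
        copied.append(dst)
    return copied

if __name__ == "__main__":
    toolkits = glob.glob(TOOLKIT_GLOB)  # resolve the v10.* wildcard
    if toolkits:
        for path in copy_cudnn(CUDNN_DIR, toolkits[0]):
            print("copied", path)
```

Run the script from an elevated prompt, since writing under \Program Files\ requires administrator rights.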
AI Server Installation
- Run AccessData_AI_Server_x64.exe as an administrator.
- Click Install if prompted to install Python 3.7.
- Click Next on the Welcome screen.
- Review and Accept the License Agreement, then click Next.
- Check Install for GPU use if utilising a CUDA-enabled graphics card (Optional).
- At the User Credentials screen, enter the credentials for an account to run the AI service and click Next.
- This account should be a member of the local administrators group, and be a domain-level account in a multi-box environment. The "Local System" account should only be used if all components, as well as case and evidence storage, will be on one single machine.
- Click Install.
- Click Finish.
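After installation, you can confirm the AI service is running by querying the Windows Service Control Manager. The snippet below is a sketch: the exact service name registered by the installer is an assumption, so substitute the name shown in services.msc on your machine.

```python
import subprocess

def parse_sc_state(sc_output):
    """Extract the STATE value (e.g. 'RUNNING') from `sc query` output.

    A typical line looks like: '        STATE              : 4  RUNNING'.
    """
    for line in sc_output.splitlines():
        line = line.strip()
        if line.startswith("STATE"):
            return line.split()[-1]
    return None

def service_state(name):
    """Query a Windows service; returns its state, or None if the query fails."""
    try:
        out = subprocess.run(["sc", "query", name],
                             capture_output=True, text=True).stdout
    except FileNotFoundError:
        return None
    return parse_sc_state(out)

if __name__ == "__main__":
    # "AccessData AI Server" is a placeholder; check services.msc for the real name.
    print(service_state("AccessData AI Server"))
```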
Configuring the AI Server for FTK & FTK Plus
Either configuration method can be followed.
Manually Editing the Configuration File
- Navigate to \Program Files\AccessData\Forensic Tools\<version>\bin\.
- Open ADG.WeblabSelfHost.exe.config in a text editor.
- Update the value for the TensorFlow URL and save the changes.
- Changes made in this file take effect only after restarting the Exterro self-host service.
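The manual edit above can be scripted with Python’s standard xml.etree module. This sketch assumes the TensorFlow URL is stored as an ordinary appSettings key; the key name used below is a placeholder, so check ADG.WeblabSelfHost.exe.config for the actual element before relying on it.

```python
import xml.etree.ElementTree as ET

def set_app_setting(config_path, key, value):
    """Update (or add) an <appSettings> key in a .NET .exe.config file."""
    tree = ET.parse(config_path)
    root = tree.getroot()
    settings = root.find("appSettings")
    if settings is None:
        settings = ET.SubElement(root, "appSettings")
    for add in settings.findall("add"):
        if add.get("key") == key:
            add.set("value", value)  # key exists: overwrite its value
            break
    else:
        ET.SubElement(settings, "add", key=key, value=value)  # key absent: create it
    tree.write(config_path, encoding="utf-8", xml_declaration=True)
```

Remember that the Exterro self-host service still needs to be restarted after the file is rewritten, exactly as with a manual edit.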
Adding the TensorFlow URL via the FTK UI
- Log in to the FTK application.
- Click Tools > Preferences.
- Click Configure AccessData Servers.
- Enter the AI Server URL.
- Click Save.