Skychain Prostate Cancer Solution: data preparation, framework and usage scenario

Hello, Skychain community!

As we draw closer to testing, we would like to share more details, mostly technical, about our prostate cancer solution. Some of these details were already featured in this post one year ago, so today we will show how far we have advanced since then.

Data preparation

Histopathological slides for training the neural network were prepared and annotated by specialists from the Moscow Central Hospital. Across the 560 histopathological slides, 8664 labels were marked, each belonging to one of 7 classes. The distribution of labels by class is given in the table below.

Classification of labels

To annotate the slides as accurately as possible, the specialists used dedicated tools that allow each label to be outlined pixel by pixel.
Although 560 images may not seem like enough for training a neural network, each histopathological slide is about 200,000x100,000 pixels. Each slide was divided into smaller tiles of 1000x1000 pixels; with tissue occupying roughly 10% of the total slide area, this yields more than a million images of annotated tissue.
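The tiling step might look roughly like the sketch below. The brightness-based tissue heuristic and the names here are our illustrative assumptions; the post does not describe how tissue-bearing tiles are actually selected.

```python
import numpy as np

TILE = 1000  # patch edge length in pixels, as described in the post

def tile_slide(slide: np.ndarray, tissue_threshold: float = 0.1):
    """Split a whole-slide image into TILE x TILE patches and keep only
    those whose tissue fraction exceeds `tissue_threshold`.

    `slide` is an H x W x 3 RGB array. The tissue mask here is a crude
    brightness heuristic (H&E-stained tissue is darker than the white
    background) -- a placeholder for whatever segmentation the real
    pipeline uses.
    """
    h, w = slide.shape[:2]
    patches = []
    for y in range(0, h - TILE + 1, TILE):
        for x in range(0, w - TILE + 1, TILE):
            patch = slide[y:y + TILE, x:x + TILE]
            # fraction of non-background (dark enough) pixels
            tissue = (patch.mean(axis=-1) < 220).mean()
            if tissue >= tissue_threshold:
                patches.append(((y, x), patch))
    return patches
```

At a 10% tissue fraction, a 200,000x100,000 slide yields about 2,000 kept tiles out of 20,000, which over 560 slides is consistent with the "more than a million" figure.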

Each image was annotated by a histologist from the Moscow Central Hospital with at least 3 years of experience. To prevent annotation errors, each image was then sent for additional validation to a histologist with more than 10 years of experience.

Solution framework

To detect and confirm the pathologies present on a slide, a full and comprehensive analysis of the slide is necessary. Some signs of cancer appear only at the highest magnification, essentially at the pixel level (nucleoli within nuclei), while others are visible only at the macro level (the shape of glands, the shape of groups of glands, and so on). Our solution is therefore a composite of several neural networks, each of which contributes to the final result. At the micro level, several networks with different architectures analyze the structures of gland cells and their nuclei.

We built these networks from classic residual blocks, but assembled them into our own architecture, since the existing ones did not perform well. They analyze patches (parts of the huge slide image) of 512x512 pixels. For macro-level features, we also have several networks with a large receptive field, so that they focus not on individual pixels but on the bigger picture.
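As a rough illustration, here is a minimal PyTorch sketch of the classic residual block these networks are built from. The channel counts and layer choices are illustrative assumptions; the actual custom architecture is not public.

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Classic residual block: two conv-BN stages plus an identity skip
    connection, the building unit the micro-level networks are based on.
    Channel counts are illustrative, not the proprietary architecture.
    """
    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)  # identity skip connection
```

The skip connection is what makes deep stacks of such blocks trainable, which is why they are the standard building unit for this kind of network.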

The SkipNet model was chosen as the working network, with a ResNet-34 pre-trained on ImageNet as its core. The last few layers of this model were then cut off and replaced with our own fully connected layers.

Images with and without neural network labeling

During training, various augmentations were applied to the training, validation and test data, such as:
• vertical and horizontal reflection of the image;
• image quality reduction via JPEG compression;
• elastic distortion of the image;
• brightness shifts;
• changes in hue and saturation;
• adding coarse noise;
• adding multiplicative noise.
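Toy NumPy versions of a few of these augmentations are sketched below; a production pipeline would typically use a dedicated library such as albumentations, which covers all seven transforms listed. The probabilities and magnitudes here are illustrative guesses.

```python
import numpy as np

rng = np.random.default_rng(0)

def augment(img: np.ndarray) -> np.ndarray:
    """Toy versions of some listed augmentations, in pure NumPy.
    Parameters are illustrative, not the values used in training.
    """
    out = img.astype(np.float32)
    if rng.random() < 0.5:                        # horizontal reflection
        out = out[:, ::-1]
    if rng.random() < 0.5:                        # vertical reflection
        out = out[::-1, :]
    out += rng.uniform(-20, 20)                   # brightness shift
    out *= rng.uniform(0.9, 1.1, out.shape)       # multiplicative noise
    # coarse noise: zero out one random 8x8 block
    h, w = out.shape[:2]
    y, x = rng.integers(0, h - 8), rng.integers(0, w - 8)
    out[y:y + 8, x:x + 8] = 0
    return np.clip(out, 0, 255).astype(np.uint8)
```

Each transform mimics a variation the network should be invariant to (slide orientation, scanner compression, staining and exposure differences), which is the point of augmentation on histology data.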

The OneCycleLR scheduler was used to adjust the learning rate, cross-entropy was used as the loss function, and Rectified Adam was used as the optimizer.
To obtain a more balanced sample during training, a sampling scheme was used in which examples were drawn with equal probability for each class. Moreover, to prevent overfitting, regularization was applied so that the network does not simply memorize every pixel; for this purpose, an average of 50,000 iterations was performed per image.
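A minimal PyTorch sketch of this training setup: class-balanced sampling via inverse class-frequency weights, cross-entropy loss, Rectified Adam, and a OneCycleLR schedule. The placeholder model, data and hyperparameters are illustrative assumptions.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset, WeightedRandomSampler

torch.manual_seed(0)

# Toy stand-ins for the real patch dataset: 100 samples, 7 classes.
NUM_CLASSES, N = 7, 100
images = torch.randn(N, 16)
labels = torch.randint(0, NUM_CLASSES, (N,))

# Balanced sampling: weight each sample by the inverse of its class
# count, so every class is drawn with (roughly) equal probability.
counts = torch.bincount(labels, minlength=NUM_CLASSES).float().clamp(min=1)
weights = 1.0 / counts[labels]
sampler = WeightedRandomSampler(weights, num_samples=N, replacement=True)
loader = DataLoader(TensorDataset(images, labels), batch_size=10, sampler=sampler)

model = nn.Linear(16, NUM_CLASSES)           # placeholder model
criterion = nn.CrossEntropyLoss()            # the stated loss
optimizer = torch.optim.RAdam(model.parameters(), lr=1e-3)  # Rectified Adam
scheduler = torch.optim.lr_scheduler.OneCycleLR(
    optimizer, max_lr=1e-2, total_steps=len(loader))  # one toy epoch

for x, y in loader:
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()
    scheduler.step()  # OneCycleLR steps once per batch
```

Inverse-frequency sampling matters here because the seven tissue classes are far from equally represented among the 8664 labels.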

Software for pathologists (usage scenario)

A medical institution connected to the system gets the opportunity to use technology that will help histologists carry out faster and more accurate diagnostics for each patient. Each clinic will be provided with the necessary number of accounts for all doctors involved in the diagnosis of histopathological slides.

Our software not only provides the ability to get a so-called "second opinion" from the AI, but is also a cloud solution in itself, allowing several doctors to analyze digitized histological preparations in real time, in the format of a consultation. Doctors in each medical organization will be assigned accounts, each with the appropriate capabilities, rights and access to medical images.

Doctors with the Specialist role will be able to upload a digitized histopathological slide to obtain a "second opinion". Any image of a tissue sample stained with hematoxylin and eosin (H&E) and scanned by a certified histological slide scanner is suitable for uploading.
When uploading an image, the doctor must choose the pathology to be analyzed. The image is transferred in encrypted form to the data center, where it is processed by the neural network and then sent back to the doctor. The processed slide includes labeling that indicates the affected areas, as well as a textual morphological description.

After receiving the processed image, the doctor should assess how correct the labeling and morphological description provided by the neural network are. Depending on the results of this analysis, the doctor must make a decision.
If the doctor agrees with the neural network's opinion, he selects the "Approve" button. In this case, the markup and morphological description provided by the neural network are finally assigned to the slide, and the doctor, relying on this opinion, prescribes the appropriate treatment.

If the doctor partially agrees with the opinion of the neural network, but would like to edit some parameters, he should select the “Edit” button. The doctor can manually correct the markup or change the morphological description. After that, the doctor should choose either “Accept changes” or “Discard changes”. Accepted changes will be assigned to this slide in the system.

In the event that the doctor is not sure the neural network's opinion is correct and needs the opinion of a more experienced specialist, he can send the slide to a colleague for additional consultation by clicking the "Receive additional opinion" button. The slide will then be sent to the selected colleague.
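The three decisions above amount to a small state machine, sketched below. The function and names are purely hypothetical illustrations of the described workflow, not the actual Skychain API.

```python
from enum import Enum
from typing import Optional

class Action(Enum):
    """The three decisions available to a Specialist after processing."""
    APPROVE = "approve"
    EDIT = "edit"
    REQUEST_OPINION = "request_opinion"

def review_slide(action: Action,
                 accept_changes: bool = False,
                 colleague: Optional[str] = None) -> str:
    """Outcome of each review decision described above.
    Names and return strings are illustrative, not the real system.
    """
    if action is Action.APPROVE:
        # AI markup and description become the final record for the slide
        return "AI markup and description assigned to slide"
    if action is Action.EDIT:
        if accept_changes:
            return "doctor's corrected markup assigned to slide"
        return "changes discarded"
    # REQUEST_OPINION: forward the slide to a more experienced colleague
    return f"slide sent to {colleague} for an additional opinion"
```

In every branch a human doctor, not the network, makes the final call, which is the core of the workflow described here.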
Also, a Specialist's access to images will be limited: he will only see the images of patients assigned to him personally.

A doctor with the Leading Specialist role will have all the same markup capabilities as a Specialist. In addition, a Leading Specialist will be able to view all medical images stored in the medical institution.
Upon receiving a request for an additional opinion from a Specialist, the Leading Specialist's opinion will be given special weight and will be decisive in making the diagnosis.
Moreover, a Leading Specialist will have the right to independently view and edit the results of ordinary Specialists' work, adjusting the established diagnosis and the prescribed treatment plan. If the Leading Specialist makes a change, the Specialist who made the initial diagnosis will receive a corresponding notification.

That is all for today. Thanks for the support! Stay tuned for future updates!

Best regards,

Alexander Oksanenko, Skychain Team

Join Skychain on social media: Twitter, Facebook, Telegram

If you have any questions about Skychain, don't hesitate to write to Alexander Oksanenko on Telegram or by email:

A blockchain infrastructure aimed at hosting, training and using artificial intelligence (AI) in healthcare. Our website: