Recently Published
Image Captioning with Visual Attention
Image captioning is the task of generating descriptive text for images. This post describes how an image captioning model can be trained with standard deep neural network architectures, and how visual attention can be incorporated so that the model itself decides which salient parts of the image to focus on while generating the caption. We use the MS COCO (Common Objects in Context) [1] dataset for model training and validation.
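As a rough illustration of the attention idea described above, the sketch below shows an additive (Bahdanau-style) attention module of the kind commonly used in attention-based captioning decoders. The layer names, dimensions, and the framework choice (TensorFlow/Keras) are illustrative assumptions, not the post's exact implementation.

```python
import tensorflow as tf

class BahdanauAttention(tf.keras.Model):
    """Additive attention over CNN image regions (illustrative sketch)."""

    def __init__(self, units):
        super().__init__()
        self.W1 = tf.keras.layers.Dense(units)  # projects image region features
        self.W2 = tf.keras.layers.Dense(units)  # projects decoder hidden state
        self.V = tf.keras.layers.Dense(1)       # scores each region

    def call(self, features, hidden):
        # features: (batch, num_regions, feature_dim) from a CNN encoder
        # hidden:   (batch, hidden_dim) from the caption decoder (e.g. a GRU)
        hidden_with_time = tf.expand_dims(hidden, 1)
        score = self.V(tf.nn.tanh(self.W1(features) + self.W2(hidden_with_time)))
        attention_weights = tf.nn.softmax(score, axis=1)  # "where to look" per region
        context_vector = tf.reduce_sum(attention_weights * features, axis=1)
        return context_vector, attention_weights
```

At each decoding step, the context vector is concatenated with the previous word embedding and fed to the decoder, so the attention weights indicate which image regions the model focused on for each generated word.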
Power Analysis for Experimental Designs
This post demonstrates how to perform a power analysis for a Completely Randomized Design (CRD) and a Randomized Complete Block Design (RCBD) using the GLIMMIX procedure in SAS. It also introduces the critical distinction between a merely non-zero difference and a clinically relevant difference, the latter being what researchers are typically most interested in detecting with their experiments.
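The post itself uses SAS PROC GLIMMIX; as a rough analogue for the simplest case (a one-way CRD analyzed as ANOVA), the sketch below solves for the sample size needed to detect a chosen difference at 80% power using statsmodels in Python. The treatment means and residual standard deviation are hypothetical placeholder values, not figures from the post.

```python
import numpy as np
from statsmodels.stats.power import FTestAnovaPower

# Hypothetical smallest differences worth detecting among 3 treatments in a CRD,
# plus a guess at the residual standard deviation (illustrative values only).
means = np.array([50.0, 55.0, 60.0])
sd = 8.0

# Cohen's f: spread of the treatment means around the grand mean, scaled by the residual SD.
effect_size = np.std(means) / sd

analysis = FTestAnovaPower()
n_total = analysis.solve_power(effect_size=effect_size,
                               alpha=0.05,
                               power=0.80,
                               k_groups=len(means))
print(f"Total N for 80% power: {np.ceil(n_total):.0f} "
      f"(about {np.ceil(n_total / len(means)):.0f} per treatment)")
```

Plugging in a clinically relevant difference rather than any non-zero difference is exactly the distinction the post emphasizes: the smaller the difference you insist on detecting, the larger the required sample size.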