

Cartwheel Technologies

Recently Published

May BioDelta report
BioDelta data
LIRA Feasibility report
Using Facebook's LIRA as a case study
BioDelta April Data science report
Data science report
Image recognition with MXNet
Cats vs Dogs
pyDocument
test
Leveraging IBM's Watson speech recognition service
Speech Recognition
Weather Forecasting Using Decision Trees
Applying a decision tree algorithm to Seattle weather data
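
A minimal scikit-learn sketch of this kind of classifier, using made-up daily weather readings rather than the report's actual Seattle dataset:

```python
# Illustrative only: a small decision tree predicting rain from daily weather features.
# The feature names and values below are hypothetical, not the Seattle data.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical features: [max_temp_F, humidity_pct, pressure_hPa]
X = np.array([
    [55, 88, 1005],
    [72, 40, 1022],
    [48, 92, 1001],
    [65, 55, 1018],
    [51, 85, 1008],
    [78, 35, 1025],
    [60, 70, 1012],
    [45, 95,  998],
])
y = np.array([1, 0, 1, 0, 1, 0, 0, 1])  # 1 = rain, 0 = no rain

tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# Inspect the learned splits and classify a new day.
print(export_text(tree, feature_names=["max_temp_F", "humidity_pct", "pressure_hPa"]))
print("rain?", tree.predict([[58, 80, 1007]])[0])
```
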
Applying Neural Nets on Wisconsin Breast Cancer Data (Deep Learning)
Deep Learning with Neural Nets
Wordcloud in R
Using the wordcloud package to gain insights from a large pool of text data
Simple Bank-Loan Model: K-Nearest Neighbors (Machine Learning)
In this paper, K-Nearest Neighbors is used to estimate the likelihood that a customer will default on a loan. We use historical bank data to construct a predictive loan model.
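
A minimal scikit-learn sketch of the idea, with hypothetical applicant features standing in for the historical bank data:

```python
# Illustrative only: KNN classifier estimating whether an applicant will default.
# Feature names and values are placeholders, not the report's dataset.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical features: [age, annual_income, loan_amount, years_at_job]
X = np.array([
    [25, 30000, 12000,  1],
    [47, 82000,  9000, 12],
    [35, 54000, 20000,  5],
    [52, 61000,  7000, 20],
    [29, 40000, 15000,  2],
    [41, 75000, 10000,  9],
    [23, 28000, 18000,  0],
    [56, 90000,  5000, 25],
])
y = np.array([1, 0, 1, 0, 1, 0, 1, 0])  # 1 = defaulted, 0 = repaid

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42)

# Feature scaling matters for KNN because it is distance-based.
model = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=3))
model.fit(X_train, y_train)

print("test accuracy:", model.score(X_test, y_test))
print("estimated default probability for a new applicant:",
      model.predict_proba([[30, 45000, 16000, 3]])[0][1])
```
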
Credit Scoring Using R
The goal of this research paper is to show basic credit scoring computations in the R programming language
Linear Discriminant Analysis
Using linear discriminant analysis to predict defaulters on a bank loan
Natural Language Processing - Text Normalization
As the bulk of the world's information is online, the task of making that data accessible becomes increasingly important, and the challenge of making it accessible to everyone, across language barriers, has simply outgrown the capacity for human translation. Natural language processing (NLP) is a branch of artificial intelligence concerned with the interactions between computers and human (natural) languages, and in particular with programming computers to fruitfully process large amounts of natural language data; it is an important technology for bridging the gap between human communication and digital data. Many NLP applications, including text-to-speech synthesis (TTS) and automatic speech recognition (ASR), require text to be converted from written expressions into appropriate "spoken" forms. This process is known as text normalization, and it converts, for example, 12:47 to "twelve forty-seven" and $3.16 into "three dollars, sixteen cents." The goal of this project is to automate the development of text normalization grammars via machine learning.
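
A minimal Python sketch of the rule-based version of this task. Hand-written rules like these are what the project aims to replace with grammars learned by machine learning; the rules and coverage below are illustrative only:

```python
# Illustrative rule-based text normalization for TTS-style "spoken" forms.
import re

ONES = ["zero", "one", "two", "three", "four", "five", "six", "seven", "eight",
        "nine", "ten", "eleven", "twelve", "thirteen", "fourteen", "fifteen",
        "sixteen", "seventeen", "eighteen", "nineteen"]
TENS = ["", "", "twenty", "thirty", "forty", "fifty", "sixty", "seventy",
        "eighty", "ninety"]

def number_to_words(n: int) -> str:
    """Spell out an integer from 0 to 99."""
    if n < 20:
        return ONES[n]
    tens, ones = divmod(n, 10)
    return TENS[tens] + ("-" + ONES[ones] if ones else "")

def normalize_time(match: re.Match) -> str:
    """'12:47' -> 'twelve forty-seven'."""
    hours, minutes = int(match.group(1)), int(match.group(2))
    if minutes == 0:
        return f"{number_to_words(hours)} o'clock"
    if minutes < 10:
        return f"{number_to_words(hours)} oh {number_to_words(minutes)}"
    return f"{number_to_words(hours)} {number_to_words(minutes)}"

def normalize_currency(match: re.Match) -> str:
    """'$3.16' -> 'three dollars, sixteen cents'."""
    dollars, cents = int(match.group(1)), int(match.group(2))
    return f"{number_to_words(dollars)} dollars, {number_to_words(cents)} cents"

def normalize(text: str) -> str:
    """Rewrite written forms (times, currency amounts) into their spoken forms."""
    text = re.sub(r"\b(\d{1,2}):(\d{2})\b", normalize_time, text)
    text = re.sub(r"\$(\d+)\.(\d{2})", normalize_currency, text)
    return text

print(normalize("Lunch at 12:47 cost $3.16."))
# -> "Lunch at twelve forty-seven cost three dollars, sixteen cents."
```
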