Thursday, September 28, 2017

How to : Your VOICE Based AI

#SpeechRecognition #VoiceBasedAI #DeepLearning #SpeechToText #Abzooba

I created a simple voice-based AI assistant using the SpeechRecognition library in Python. It does the following:

1) Understands voice.

2) Converts speech to text (my starting point, as I wanted an assistant who can help me write my stories).

3) Performs simple actions. Like what?

3A) You ask for the current time and it gets it instantly.

3B) It understands the intent or context in the speech and can perform a specific action. For example, when I ask Rudra (I have named my AI after Lord Shiva) to find the location of a place, it automatically opens Google Maps at that particular location; or when I ask it to find a similar kind of shirt by giving it the picture of a blue shirt, it scans the image and then finds similar-looking shirts on #Amazon or #Myntra.



Interesting, isn't it?

Pray to Lord Rudra and get started !


Let's look into how I did it.


Some Important Stuff FIRST -

For speech recognition, you need the SpeechRecognition library.

Install it with pip: pip install SpeechRecognition.

PyAudio will also be required for microphone input.

I am using Keras (with a TensorFlow backend) here for further processing.

Google has a great speech recognition API. It converts spoken audio (from the microphone) into written text (Python strings); in short, speech to text.

A text-to-speech (TTS) system does the reverse: it converts normal language text into speech.
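The post above doesn't show the setup code, so here is a minimal sketch of the two helpers the assistant is built on. It assumes SpeechRecognition, PyAudio and pyttsx3 are installed (pyttsx3 is my assumed choice of TTS engine here; any TTS library would do):

import speech_recognition as sr   # pip install SpeechRecognition (needs PyAudio for the mic)
import pyttsx3                    # assumed offline TTS engine; gTTS would also work

recognizer = sr.Recognizer()
tts_engine = pyttsx3.init()

def listen():
    # Record one phrase from the microphone and return it as lowercase text
    with sr.Microphone() as source:
        recognizer.adjust_for_ambient_noise(source)  # compensate for background noise
        audio = recognizer.listen(source)
    try:
        # Google's free speech-to-text API turns the audio into a Python string
        return recognizer.recognize_google(audio).lower()
    except sr.UnknownValueError:
        return ""   # speech was unintelligible

def speak(text):
    # Say the given text out loud (text to speech)
    print("Rudra:", text)
    tts_engine.say(text)
    tts_engine.runAndWait()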



Let's look into the workflow and see how it works:


A) I ask what TIME it is...
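The blog doesn't include the handler code, but with the listen()/speak() helpers sketched earlier, the time intent can be as simple as this (a sketch, not the exact code used in Rudra):

import datetime

def handle_time(command):
    # Very simple intent check: the word "time" appears in the transcribed command
    if "time" in command:
        now = datetime.datetime.now().strftime("%I:%M %p")
        speak("The time is " + now)
        return True
    return False

# Example: handle_time(listen()) after saying "what time is it"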

Good, let's move to more complex stuff !

B) I ask it to find a location. Let's say I ask: Where is Abzooba?


It automatically opens the Chrome browser with the Google Maps location of that particular address.
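A minimal sketch of how that step can be wired up. The URL pattern is a standard Google Maps search URL; the exact code in the assistant may differ:

import webbrowser
import urllib.parse

def open_location(place):
    # Open the default browser at a Google Maps search for the spoken place
    url = "https://www.google.com/maps/search/" + urllib.parse.quote(place)
    webbrowser.open(url)

# Example: if the command is "where is abzooba", strip the intent words and search
command = "where is abzooba"
if command.startswith("where is"):
    open_location(command.replace("where is", "").strip())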




Great, now let's move on to more complex things.

C) I show an image to Rudra. For now I have kept the image in a folder, but once you create a simple UI along with it, you can just upload the image.


I ask Rudra to search for a similar item on #Myntra.

Rudra scans the image and then automatically opens up Myntra for the possible choices.
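I haven't pasted the full code here, but the idea can be sketched as follows: classify the image with the pre-trained Keras network mentioned at the start, turn the top label into a search query, and open the shopping site. The Myntra/Amazon URL patterns below are assumptions for illustration; adjust them to whatever the sites currently accept.

import webbrowser
import urllib.parse
import numpy as np
from keras.preprocessing import image
from keras.applications import inception_v3

model = inception_v3.InceptionV3()

def describe_image(path):
    # Return the top ImageNet label for the image, e.g. "jersey" or "sweatshirt"
    img = image.load_img(path, target_size=(299, 299))
    x = inception_v3.preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))
    preds = model.predict(x)
    _, label, _ = inception_v3.decode_predictions(preds, top=1)[0][0]
    return label.replace("_", " ")

def shop_for(path, site="myntra"):
    query = describe_image(path)
    if site == "myntra":
        url = "https://www.myntra.com/" + urllib.parse.quote(query)   # assumed search pattern
    else:
        url = "https://www.amazon.in/s?k=" + urllib.parse.quote(query)
    webbrowser.open(url)

# Example: shop_for("images/blue_shirt.jpg", site="myntra")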







Tuesday, September 26, 2017

Image Classification - Deep Learning

#github project - how to loop through a folder containing multiple images and classify them using Keras and pre-trained networks. #tensorflow #keras #CNN #Neuralnet #INCEPTIONV3 #Machinelearning #DeepLearning


ImageClassificationDeepLearning

Here I will show how to loop through a folder containing multiple images and classify them using Keras and pre-trained networks.
#TENSORFLOW #KERAS #NN #NEURALNET #INCEPTIONV3 #MACHINELEARNING #DEEPLEARNING
Our brains make vision seem easy. It doesn't take any effort for humans to tell apart a lion and a jaguar, read a sign, or recognize a human's face. But these are actually hard problems to solve with a computer: they only seem easy because our brains are incredibly good at understanding images.


How does it help?

Let's say you have a folder into which multiple images get uploaded; the best example would be OLX or Quikr, which are free buy-and-sell websites. Now, you need to determine whether any harmful items (e.g. guns) are being bought and sold. A small sketch of such a folder scan is given at the end of this post.








SEVERAL PRE-TRAINED NETWORKS :
  • VGG16, VGG19, ResNet50, Inception V3, and Xception
State-of-the-art deep learning image classifiers in Keras
Keras ships out-of-the-box with five Convolutional Neural Networks that have been pre-trained on the ImageNet dataset: VGG16, VGG19, ResNet50, Inception V3, and Xception.
Inception V3
The goal of the inception module is to act as a "multi-level feature extractor" by computing 1×1, 3×3, and 5×5 convolutions within the same module of the network; the outputs of these filters are then stacked along the channel dimension before being fed into the next layer in the network. The original incarnation of this architecture was called GoogLeNet, but subsequent manifestations have simply been called Inception vN, where N refers to the version number put out by Google.
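The GitHub project isn't reproduced here, but a minimal sketch of the folder loop looks like this. It assumes the images live in a local folder called images/ and that we flag anything whose top label lands on a small watch list (the ImageNet classes "revolver", "rifle" and "assault_rifle" cover the gun example above):

import os
import numpy as np
from keras.preprocessing import image
from keras.applications import inception_v3

model = inception_v3.InceptionV3()        # pre-trained on ImageNet
WATCH_LIST = {"revolver", "rifle", "assault_rifle"}   # harmful items to flag

def classify(path):
    # Return (label, confidence) for a single image file
    img = image.load_img(path, target_size=(299, 299))
    x = inception_v3.preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))
    preds = model.predict(x)
    _, label, confidence = inception_v3.decode_predictions(preds, top=1)[0][0]
    return label, confidence

folder = "images"                          # assumed upload folder
for fname in os.listdir(folder):
    if not fname.lower().endswith((".jpg", ".jpeg", ".png")):
        continue
    label, confidence = classify(os.path.join(folder, fname))
    flag = "  <-- FLAGGED" if label in WATCH_LIST else ""
    print("{}: {} ({:.1%}){}".format(fname, label, confidence, flag))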



Saturday, September 23, 2017

Humbled!


Humbled... to be nominated for India-International Achievers’ Awards.


Friday, September 22, 2017

Let’s first write a simple Image Recognition Model using Inception V3 and Keras

Image Recognition

#TENSORFLOW #KERAS #NN #NEURALNET #INCEPTIONV3 #MACHINELEARNING #DEEPLEARNING

Our brains make vision seem easy. It doesn't take any effort for humans to tell apart a lion and a jaguar, read a sign, or recognize a human's face. But these are actually hard problems to solve with a computer: they only seem easy because our brains are incredibly good at understanding images.

SEVERAL PRE-TRAINED NETWORKS :

VGG16, VGG19, ResNet50, Inception V3, and Xception

State-of-the-art deep learning image classifiers in Keras

Keras ships out-of-the-box with five Convolutional Neural Networks that have been pre-trained on the ImageNet dataset:

  1. VGG16
  2. VGG19
  3. ResNet50
  4. Inception V3
  5. Xception



Inception V3


The goal of the inception module is to act as a "multi-level feature extractor" by computing 1×1, 3×3, and 5×5 convolutions within the same module of the network; the outputs of these filters are then stacked along the channel dimension before being fed into the next layer in the network.
The original incarnation of this architecture was called GoogLeNet, but subsequent manifestations have simply been called Inception vN where N refers to the version number put out by Google.

LET'S WRITE A NICE LITTLE PROGRAM TO CLASSIFY IMAGES 

What are we going to Detect?
What does this Image say to a Computer?



Let's check it out :

import numpy as np
from keras.preprocessing import image
from keras.applications import inception_v3

# Load pre-trained image recognition model
model = inception_v3.InceptionV3()

# Load the image file and convert it to a numpy array
img = image.load_img('../input/Huggies.jpg', target_size=(299, 299))
input_image = image.img_to_array(img)

# Scale the image so all pixel intensities are between [-1, 1] as the model expects
input_image /= 255.
input_image -= 0.5
input_image *= 2.

# Add a 4th dimension for batch size (as Keras expects)
input_image = np.expand_dims(input_image, axis=0)

# Run the image through the neural network

predictions = model.predict(input_image)

# Convert the predictions into text and print them
predicted_classes = inception_v3.decode_predictions(predictions, top=1)
imagenet_id, name, confidence = predicted_classes[0][0]
# Let's print what the DL program says
print("This is a {} with {:.4}% confidence!".format(name, confidence * 100))

Output: This is a diaper with 95.24% confidence!

Tuesday, September 19, 2017

HIT PROFIT BY MACHINE LEARNING BASED TRADING

I bought FRESHTROP Fruits at INR 94 on 8th August 2017.


WHY? A very detailed study of its price across the last 10 years or more, using logistic regression, showed that it was at its lowest trading price.
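The post doesn't show the study itself, so the following is only an illustrative sketch of the kind of logistic regression it describes, not the actual model: label each month 1 if the price rises over the following month, fit on simple price-derived features, and read off the predicted probability for the latest month. The CSV file and column names are assumptions.

import pandas as pd
from sklearn.linear_model import LogisticRegression

# Assumed CSV of ~10 years of monthly closes with columns: Date, Close
prices = pd.read_csv("freshtrop_monthly.csv", parse_dates=["Date"]).set_index("Date")

# Simple price-derived features: distance from 12-month low/high and 12-month return
feat = pd.DataFrame(index=prices.index)
feat["dist_from_low"] = prices["Close"] / prices["Close"].rolling(12).min() - 1
feat["dist_from_high"] = prices["Close"] / prices["Close"].rolling(12).max() - 1
feat["ret_12m"] = prices["Close"].pct_change(12)

# Target: did the price rise over the following month?
target = (prices["Close"].shift(-1) > prices["Close"]).astype(int)

data = feat.join(target.rename("up_next_month")).dropna()
model = LogisticRegression()
model.fit(data[feat.columns], data["up_next_month"])

# Probability that the most recent month is followed by a rise
latest = feat.dropna().iloc[[-1]]
print("P(up next month) = {:.2f}".format(model.predict_proba(latest)[0, 1]))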


Today, 19th September, it trades at 143, so that's in a little more than a month.


Profit per share = 49 Rupees


If you had bought 1,000 shares (an investment of INR 94,000/-, less than a lakh), you would have gained around 50,000/- in just 1 month.


Tip: buy now; it will move to 250/- in a period of 1 year or a little more.



It’s always make sense to invest with Machine Learning Statistics .


You can follow my blog on investment tips at http://saptak-firsttry.blogspot.in/

#StockTips #MachineLearning #LogisticRegression #AlgorithmicTrading

Saturday, September 16, 2017

Predictions and Suggestions from a machine learning based Algorithmic trading

#MachineLearning #AlgorithmicTrading #StockMarketAutomatedTrading #LogisticRegression #Boosting

Predictions and Suggestions from a machine learning based Algorithmic trading



An algorithm is a specific set of clearly defined instructions aimed to carry out a task or process.

Algorithmic trading (automated trading, black-box trading, or simply algo-trading) is the process of using computers programmed to follow a defined set of instructions for placing a trade in order to generate profits at a speed and frequency that is impossible for a human trader. The defined sets of rules are based on timing, price, quantity or any mathematical model. Apart from profit opportunities for the trader, algo-trading makes markets more liquid and makes trading more systematic by ruling out emotional human impacts on trading activities.
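To make "a defined set of instructions" concrete, here is a tiny illustrative rule (the classic moving-average crossover, not part of the strategy described below): buy when the 50-day moving average crosses above the 200-day moving average, sell on the opposite cross. The data file and column names are assumptions.

import pandas as pd

# Assumed daily price history with a "Close" column
prices = pd.read_csv("stock_daily.csv", parse_dates=["Date"]).set_index("Date")

fast = prices["Close"].rolling(50).mean()    # 50-day moving average
slow = prices["Close"].rolling(200).mean()   # 200-day moving average

# The "defined set of instructions": hold the stock whenever fast > slow
signal = (fast > slow).astype(int)

# A trade is placed on every day the signal changes (0 -> 1 buy, 1 -> 0 sell)
trades = signal.diff().fillna(0)
print(trades[trades != 0].map({1: "BUY", -1: "SELL"}))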

We can create a Regression formula like below :
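In an illustrative linear form (my notation; the exact specification may differ), the regression is:

Return_i = b0 + b1 * X1_i + b2 * X2_i + ... + bk * Xk_i + e_i

where Return_i is the return on capital invested in stock i, the X's are the screening variables listed further down (sales growth, profit growth, return on equity, and so on), and e_i is the error term.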




The dependent variable is the return on capital invested, and the regression can be run across all stocks.

The error term ei can be reduced using boosting algorithms, thus increasing the prediction accuracy.

Now, how do you choose your variables, and what can the ideal STOCK equation be? (A small pandas sketch of a few of these filters is given after the full equation below.)

YOY Quarterly sales growth  > 15 and
YOY Quarterly profit growth  > 20 and
Net Profit latest quarter  > 1 and
G Factor >= 7 and
Net Profit latest quarter > .33 AND
Other income latest quarter < Net Profit latest quarter * .5 AND
Net Profit preceding year quarter <= 0 AND
Expected quarterly net profit > 0 AND
Sales latest quarter > Sales preceding year quarter   AND
Return on invested capital > 25 and
Earnings yield > 15 and
Book value > 0 AND
Market Capitalization > 15

AND
Graham Number > Current price AND
PB X PE <=22.50 AND
PEG Ratio >0 AND
PEG Ratio < 1.5 AND
Altman Z Score >=2.5 AND
Sales growth 5Years >25 AND
Profit growth 5Years >15 AND
Current ratio >2 AND
Market Capitalization >250 AND
Sales >100  AND
Piotroski score > 7
AND
Dividend yield > 2 AND
Average 5years dividend > 0 AND
Dividend last year > Average 5years dividend AND
Profit after tax > Net Profit last year * .8 AND
Dividend last year > .35 AND
( Profit growth 3Years > 10 OR
Profit growth 5Years > 10 OR
Profit growth 7Years > 10 ) 
OR
(Market Capitalization > 3000) AND
(Average return on equity 10Years > 20) AND
(Debt to equity < 1.5) AND
(Interest Coverage Ratio > 2) AND
( PEG Ratio <= 1) AND
(Profit growth 5Years > 20) 

AND
YOY Quarterly sales growth  > 40 and
YOY Quarterly profit growth  > 40 and
Average return on capital employed 3Years >30 and
Price to Earning < 6
OR
Sales growth 10Years > 10 AND
Profit growth 10Years > 12 AND
OPM 10Year > 12 AND
Debt to equity < 0.5 AND
Current ratio > 1.5 AND
Altman Z Score > 3 AND
Average return on equity 10Years > 12 AND
Average return on capital employed 10Years >12 AND
Return on invested capital > 15 AND
Sales last year / Total Capital Employed > 2 AND
Average dividend payout 3years >15

AND
PEG Ratio < 1 AND
Sales > 500 AND
Price to Earning < 40 AND
Profit growth > 20 AND
Debt to equity < 0.2 AND
Price to Cash Flow > 5

OR

EPS last year >20 AND
Debt to equity <.1 AND
Average return on capital employed 5Years >35 AND
Market Capitalization >500 AND
OPM 5Year >15

AND
Net Profit latest quarter > Net Profit preceding quarter AND
Net Profit preceding quarter > Net profit 2quarters back AND
Net profit 2quarters back > Net profit 3quarters back

AND

EPS latest quarter > 1.2 * EPS preceding year quarter AND
EPS latest quarter > 0 AND
YOY Quarterly sales growth > 25 AND
EPS last year > EPS preceding year AND
EPS > EPS last year AND
Profit growth 3Years > 25 AND
Return on equity > 17 AND
Down from 52w high < 15 AND
Market Capitalization > 100

AND

Price to Earning >0 and Price to Earning <10 and Return on equity 5years growth >10 and Dividend yield >1 and Return on capital employed >10

AND
Profit growth 5Years > Sales growth 5Years AND
Sales growth 5Years > 3 AND
Return on equity > 15 AND
Working capital 5Years back < 0
AND
Price to Earning >0 and Return on equity 5years growth > 5 and Dividend yield >0  


Note: Debt reacts inversely in the equation. The term period will be spread over the last 15 to 20 years.
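As an illustration of how a few of these screening conditions translate into code, here is a small pandas sketch. The data file and column names are assumptions, and only a handful of the clauses above are shown:

import pandas as pd

# Assumed fundamentals table, one row per stock, columns named after the screen variables
df = pd.read_csv("fundamentals.csv")

screen = (
    (df["YOY Quarterly sales growth"] > 15)
    & (df["YOY Quarterly profit growth"] > 20)
    & (df["Return on invested capital"] > 25)
    & (df["Earnings yield"] > 15)
    & (df["Debt to equity"] < 1.5)
    & (df["Piotroski score"] > 7)
    & (df["Graham Number"] > df["Current price"])
)

shortlist = df[screen]
print(shortlist["Name"].tolist())   # candidate stocks passing this subset of the screen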

Now, by applying a boosting algorithm (like XGBoost), you can reduce the error coefficients.
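A minimal sketch of that step, assuming the same fundamentals table plus a realised-return column: fit the linear regression first, then let XGBoost model the residual (the error term), so the combined prediction carries a smaller error. The file and column names are assumptions.

import pandas as pd
from sklearn.linear_model import LinearRegression
from xgboost import XGBRegressor

data = pd.read_csv("fundamentals.csv").dropna()
features = ["YOY Quarterly sales growth", "YOY Quarterly profit growth",
            "Return on invested capital", "Debt to equity", "Earnings yield"]
X, y = data[features], data["Return on capital invested"]

# Step 1: plain linear regression (the formula above)
linear = LinearRegression().fit(X, y)
residuals = y - linear.predict(X)

# Step 2: boost the error term - XGBoost learns whatever the linear model missed
booster = XGBRegressor(n_estimators=200, max_depth=3, learning_rate=0.05)
booster.fit(X, residuals)

# Combined prediction = linear part + boosted correction of the error term
predicted_return = linear.predict(X) + booster.predict(X)
print("In-sample MAE:", (y - predicted_return).abs().mean())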

Based on the above equation, with a little variation and choosing a flattened NN (neural network), the stocks below can be looked at for the Indian stock market.

1)  RELIANCE INDUSTRIES
2)  DCB BANK
3)  KAJARIA CERAMICS
4)  INFOSYS

5)  INDO COUNT INDUSTRIES
