Real-life Industry Use-cases of Neural Networks and How They Work

Pradeep Kvssk
10 min read · Mar 24, 2021


(TASK-20) ==> Defence and Chip research domain

What are neural networks?

Neural networks grew out of the idea of mimicking the human brain: the goal of carrying the structure of the mind-mapping in our brain, its neurons and connections, into the machine.

The neuron is a structure that contains:

  1. Body (where the dendrites or terminals of other neurons join)
  2. Axon (the long chain that inter-links to the next neuron)
  3. Synapse (the gap between two neurons)

There are approximately 86 billion neurons in the human brain.

And implementing such a gigantic cluster of neurons, each independent in its own way of thinking and processing, is what makes up the thinking pattern in general.

What needs to be considered is that the human cognitive advantage over other animals may reside simply in the total number of brain neurons!!

Technology is growing these days and behind every technology is the chipsets that are being manufactured.

That is how Machine Learning came into existence (where the goal is to make the machine learn from experience).

Machine Learning is divided into Supervised and Unsupervised Learning.

In Supervised Learning, we have:

Linear Regression, Logistic Regression, etc.

In Unsupervised Learning, we have:

K-means clustering, hierarchical clustering, etc.
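For a concrete feel of the difference, here is a minimal sketch of both kinds, assuming scikit-learn is installed (all the data is made up for illustration):

import numpy as np
from sklearn.linear_model import LinearRegression  # supervised: learns from labels
from sklearn.cluster import KMeans                 # unsupervised: no labels needed

X = np.array([[1.0], [2.0], [3.0], [4.0]])

# Supervised: we hand over the answers (y) so the model can learn the mapping
reg = LinearRegression().fit(X, np.array([2.0, 4.0, 6.0, 8.0]))
print(reg.predict([[5.0]]))  # close to 10.0

# Unsupervised: no answers given; the algorithm finds the structure itself
km = KMeans(n_clusters=2, n_init=10).fit(X)
print(km.labels_)  # which cluster each sample was assigned to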

Every learning approach prepares a model, which needs dependent and independent variables to map inputs to outputs.

There are two ways of selecting features: manually and automatically.

Manual selection is done through statistical formulae; it can also be done using correlation (a manual-ish approach) or by neural networks (fully automatic).

In our example we go with neural networks, because their accuracy is not capped the way it is with statistical-formulae-based features, which tend to reach a threshold beyond which accuracy cannot move any higher.
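As a rough illustration of the correlation approach, here is a sketch with a hypothetical pandas DataFrame that has a 'target' column (names and numbers invented):

import pandas as pd

# hypothetical toy dataset: two features and a target column
df = pd.DataFrame({
    "feature_1": [1, 2, 3, 4, 5],
    "feature_2": [2, 5, 1, 4, 3],
    "target":    [1.1, 2.0, 3.2, 3.9, 5.1],
})

# correlation of each feature with the target
corr = df.corr()["target"].drop("target")
selected = corr[corr.abs() > 0.5].index.tolist()  # keep strongly correlated features
print(selected)  # only feature_1 correlates strongly here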

The model works on a basic principle:

loss = Y(predicted) - Y(actual)

So we use the kernel/initializers: y = b + w*x

b is the bias, w is the weight, and x is the input value of a feature.

But how does the code know whether our given weights and bias are right or not?

If the loss reaches zero, then whatever we are predicting is going in the right direction.

The goal of Machine Learning:

It is to reach the target with zero distance from the actual output (distance here means how much loss we have).
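A minimal sketch of this principle in plain Python, with made-up numbers:

# one neuron: y = b + w * x (all numbers invented)
w, b = 0.5, 1.0           # weight and bias
x, y_actual = 2.0, 3.0    # one input feature and its actual output

y_predicted = b + w * x          # prediction: 1.0 + 0.5 * 2.0 = 2.0
loss = y_predicted - y_actual    # loss = predicted - actual = -1.0
print(loss)                      # non-zero, so the weights still need adjusting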

Adjusting Weights:

Weight(new) = Weight(old) - learning-rate * gradient(loss)

Weight vs Loss/Cost/Distance:

When the loss is decreasing and reaching stability, training is going the right way.

The optimizer algorithm works on gradient descent, which drives the loss towards its minimum rather than jumping straight to the best value. The optimizer also adjusts the learning rate, which we pass to it as a parameter.

Reaching the best possible value is hard; the only easy way is through optimizers.
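As a rough sketch of what gradient descent does, here the simple difference loss above is replaced with a squared loss so the gradient is well defined (all values invented):

# gradient descent on a single weight and bias, loss = (w*x + b - y)^2
w, b = 0.0, 0.0
x, y = 2.0, 3.0
learning_rate = 0.01  # a hyper-parameter the developer must choose

for step in range(1000):
    y_pred = w * x + b
    grad_w = 2 * (y_pred - y) * x   # d(loss)/dw
    grad_b = 2 * (y_pred - y)       # d(loss)/db
    w -= learning_rate * grad_w     # step against the gradient
    b -= learning_rate * grad_b

print(w, b, (w * x + b - y) ** 2)   # the loss ends up very close to zero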

Neuron ==> we generally use a linear function for a neuron,

so 1 neuron == 1 output.

The learning rate works in small increments, so reaching the right weight takes a lot of time and resources; with a large dataset the time is surely high!! That is why we have the concept of hyper-parameters.

Let's first see the basic Python code to create a single layer:

from keras.models import Sequential  # to add layers we need this class
from keras.layers import Dense

model = Sequential()
model.add(Dense(units=1, input_shape=(1,)))  # single layer: input_shape=(1,) for one feature, units=1 neuron

model.get_layer("dense")  # fetch the dense layer; run and observe
model.get_config()  # the model's configuration
model.get_layer("dense").input  # the layer's input tensor
model.get_layer("dense").output  # the layer's output tensor
model.get_layer("dense").get_weights()  # the current weight and bias values
model.summary()  # the model is generated; we can see its details

We use these parameters to regulate the slope that determines the accuracy of the model in the graph:

  1. Bias
  2. Weights

Bias and weights are adjusted through the learning rate, which tells us whether we are on the right track or not!!

Optimizers guide our path for the changes in bias/weights, steering us in the +ve or -ve direction. When the learning rate is small, it is hard to reach the minimal loss on each weight change; it gives the best possible output, but at a very high time cost. So we have optimizers like Adam (an adaptive learning rate optimizer).

So deciding the learning rate is hard, and it is left to the data scientist and developer to choose.

Simple Example:

In Google Maps, finding the distance is done by a neural network, and the right path is suggested using optimizers.

Every dataset is row-based, e.g. CSV (comma-separated values). So how do we know which value belongs to which column? While writing the code, we either treat the first row as the header of the dataset or, as below, tell pandas there is no header.

import pandas as pd

data = pd.read_csv("file", header=None)  # header=None: the file has no header row

data.info()  # column types and non-null counts
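Since header=None leaves the columns numbered 0, 1, 2, and so on, we can attach our own names afterwards, continuing from the snippet above (the names here are hypothetical and must match the real column count):

data.columns = ["feature_1", "feature_2", "label"]  # hypothetical names, one per column
data.head()  # now every value is tied to a named column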

We use the Keras module, a pre-built module on top of TensorFlow. Keras is an abstraction over the internal implementation of neural networks, so we can use it directly based on our requirement.

Deep Learning means multiple layers; the optimizer means gradient descent; and a neural network is an algorithm that trains itself from the datasets.

The number of layers we give governs the best possible learning, but training takes more time as the number of layers increases.

Deep Learning is the generation of a model by understanding the input better.

The first layer takes the input features with their respective weights and bias, and all the layers up to the last-but-one are known as hidden layers. The last layer is where we put a function based on the type of output (continuous, or discrete like yes or no).

For discrete output (like yes or no) it is Sigmoid; for continuous output it is a linear function.

Example:

from keras.models import Sequential
from keras.layers import Dense
from keras.optimizers import Adam

model = Sequential()
model.add(Dense(units=8, activation='relu', input_dim=11))  # hidden layer: 11 input features feeding 8 neurons
model.add(Dense(units=1, activation='sigmoid'))  # output layer for a discrete yes/no target

model.summary()

model.layers[0].get_weights()
# you will see an 11x8 weight matrix (88 random values) and a bias vector of 8 zeros
# (auto-generated names like "dense_1" vary per session, so we index the layer by position)

model.get_config()
# to see the neuron structure configuration

model.compile(loss='binary_crossentropy', optimizer=Adam(learning_rate=0.0001))
# the optimizer steps the weights up or down towards the best values,
# scaling each step by the 0.0001 learning rate

model.fit(X_train, y_train)  # X_train and y_train are the prepared features and labels
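After fitting, a quick usage sketch; X_test and y_test here are hypothetical held-out arrays shaped like the training data:

model.evaluate(X_test, y_test)              # binary cross-entropy on unseen data
probabilities = model.predict(X_test)       # sigmoid outputs between 0 and 1
predictions = (probabilities > 0.5).astype(int)  # threshold into yes/no answers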

Use-cases:

In the chip manufacturing industry, neural networks are now used to detect whether a designed circuit meets all the requirements, and to flag any issues, based on previous records.

This has proved more accurate than error-prone human inspection, and it detects problems in no time.

Pattern-based design/technology co-optimization (DTCO) estimates lithographic difficulty during the early stages of a new process technology node.

These are lithographic hotspots. What do they actually mean?

A lithography hotspot is a place that is susceptible to fatal pinching (open circuit) or bridging (short circuit) errors due to poor printability of certain patterns in a design layout. To avoid undesirable patterns in a layout, it is mandatory to find hotspots at an early design stage.
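To make the idea concrete, here is a heavily simplified, hypothetical sketch of how such a neural hotspot detector could be framed: layout clips rendered as small binary images, labelled hotspot or non-hotspot from previous records. The shapes, sizes and data below are all invented for illustration; real flows (like the one in the paper cited in the conclusion) use far richer features.

import numpy as np
from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Flatten, Dense

# invented stand-in data: 100 layout clips as 64x64 binary images,
# labelled 1 (hotspot) or 0 (non-hotspot) from previous records
X = np.random.randint(0, 2, size=(100, 64, 64, 1)).astype("float32")
y = np.random.randint(0, 2, size=(100,))

model = Sequential([
    Conv2D(16, (3, 3), activation="relu", input_shape=(64, 64, 1)),
    MaxPooling2D((2, 2)),
    Conv2D(32, (3, 3), activation="relu"),
    MaxPooling2D((2, 2)),
    Flatten(),
    Dense(32, activation="relu"),
    Dense(1, activation="sigmoid"),   # probability that a clip is a hotspot
])
model.compile(loss="binary_crossentropy", optimizer="adam", metrics=["accuracy"])
model.fit(X, y, epochs=2, batch_size=16)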

Challenges being faced:

With the continuous shrinking of semiconductor process technology nodes, the minimum feature size of modern IC is much smaller than the lithographic wavelength.

1. Manual checking is time-consuming work

2. The cost of testing is high

So the industry came up with the following:

In order to bridge the wide gap between design demands and manufacturing limitations of the current mainstream 193nm lithography, various DFM techniques [2–5] have been proposed to improve product yield and avoid potentially problematic patterns (i.e., process hotspots).

However, for 45nm node and below, hotspot patterns still exist even after design rule checking (DRC) and various resolution enhancement techniques (RET) such as optical proximity correction (OPC), sub-resolution assist feature insertions/layout re-targeting, etc.

Such high-performance lithographic hotspot detection under real manufacturing conditions is especially suitable for guiding lithography-friendly physical design.

DTCO helps to rapidly determine patterning difficulty based on the fundamentals of optical image processing techniques. It can analyze the frequency content of design shapes to determine patterning difficulties using computational patterning transfer. With the help of a Monte-Carlo random pattern generator, the DTCO flow can identify a set of difficult patterns that can be used to evaluate the design manufacturability and help with the optimization phases of post-tape outflows.

The DTCO flow provides designers with early predictions of potential problems before the rigorous model-based DFM kits are developed, and establishes a bi-directional platform for interaction between the design and the manufacturing communities.

If you would like to read a more in-depth discussion of the DTCO process, download a copy of the white paper,

Estimating Lithographic Difficulty During Process Node Development with Calibre Design/Technology Co-Optimization.

CONCLUSION:😎🤗

To alleviate the huge run-time cost of current lithographic hotspot simulators, the researchers proposed an ultra-fast, high-fidelity hotspot detection flow providing full-layout, feature-centric assessment as an improvement over sliding-window or raster-scanning techniques.

Under real manufacturing conditions, they incorporated a novel set of hotspot signature measurements, a hierarchically refined classification methodology, and powerful machine learning kernel implementations into an integrative flow.

They implemented their algorithm with an industry-strength engine [16] under real manufacturing conditions and showed that it significantly outperforms previous state-of-the-art algorithms in hotspot detection false alarm rate (2.4X to 2300X reduction) and simulation run-time (5X to 237X reduction), while achieving similar or slightly better hotspot detection accuracies. The demonstrated high performance makes this approach very suitable for identifying lithographic hotspots and guiding lithography-friendly physical design.

Future Node Semiconductors
In order to support 5G, AI, data centers, edge computing and other industries, semiconductor manufacturers continue to develop ICs with increasingly complex architectures and smaller feature sizes. At the 5nm/3nm design nodes, a leading-edge logic chip may utilize an advanced finFET or GAA architecture (nanosheet or nanowire) and leverage EUV lithography (EUVL).

Actions Speak Louder than words

Neural networks have an impact in a lot of ways; just deep-dive into these technologies and there is a lot more to explore. 😎😊🤗

THE REASON I PICKED THIS TOPIC on semiconductors:

I wrote this blog for the upcoming major project I am working on: hardware logic verification using neural networks.

Chips used to be simple enough that we could reverse-engineer them to learn their actual logic. But with advances in technology, reverse engineering can no longer be done, which hides the creator's golden IC logic; this gives attackers the chance to mount many HARDWARE ATTACKS using trojans installed in the chips, which is dangerous. What is even more dangerous follows from this example:

Suppose a chipset made in China is exported to India and used in major projects such as the GSLV and other rocket launches; we even use such chipsets in the mobiles we carry. What I actually mean to say is that if a war between China and India arises, there could be mass destruction for us. If defence equipment is fitted with chipsets made by other countries (not just China), then during a war or high tension between those countries, a trojan in the chipset could activate when we feed in particular coordinates, say of Hong Kong or any other place, and redirect the missile to some other country, causing chaos in the world and mostly in INDIA. So my project works on detecting the hardware logic to verify whether any trojans exist, so that we can ensure a Safe & Peaceful India.

The FUTURE is about hardware security, as software security is getting stronger day by day.

Thank you! Hit like and share if you find it interesting 😊🤗
