week 9

This week’s topic is neural networks, a topic closely related to how the human brain works.

In the human brain, neurons send activity signals as electrical impulses through the synapses; those activities are processed automatically, and the resulting actions are returned to every inch of our body.

A neural network in a machine works similarly. The machine consists of one to many processes of calculation that are used to recognize things and return the result of their analysis and recognition.

The difference between how the human brain and the machine work is that the brain learns by adaptation and recognition, while the machine has to learn and recognize through calculation: its adaptation is captured by the weights in its calculations. A machine learns by epochs, which consist of multiple iterations of calculation, while the brain learns by gradual information and adaptation. That is what makes the brain so complex, and the machine’s calculations as well.
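The idea of weights adjusted over epochs can be sketched with a single artificial neuron. This is only a minimal illustration; the OR data, learning rate, and epoch count are my own assumptions, not from the lecture:

```python
import math

def neuron(inputs, weights, bias):
    # Weighted sum of the inputs followed by a sigmoid activation
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 / (1 + math.exp(-z))

# Train a single neuron on the OR function with gradient descent over epochs
data = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]
w, b, lr = [0.0, 0.0], 0.0, 0.5
for epoch in range(2000):
    for x, target in data:
        y = neuron(x, w, b)
        err = y - target
        grad = err * y * (1 - y)  # derivative of squared error through the sigmoid
        w = [wi - lr * grad * xi for wi, xi in zip(w, x)]
        b -= lr * grad

print([round(neuron(x, w, b)) for x, _ in data])
```

After enough epochs the weights have adapted so that the rounded outputs match the OR targets.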

week 8

This week’s topic is decision trees. A decision tree is a supervised learning method that learns from past data which conditions lead to which end results. For example, given past data on the conditions and cases of being sick, it will tell you from your current condition whether you are sick or not.

In supervised learning, overfitting is not a good thing for the machine. Overfitting is when the machine learns an object too specifically: if someone gives it a picture of that object with a spot on it, it cannot recognize the object, because the model fits the training data too closely and the machine cannot recognize anything else.

There are 3 types of data gathering and learning for a machine: classification, regression, and clustering.

We learned clustering a couple of weeks ago. Classification is when the data are about objects with distinct features that differentiate them, for example a fruit or an animal. Regression is about data used to predict the rise and fall of a market price or stock price.

As the name implies, a decision tree is a tree built from the given data. It starts from the root: the entropy value is calculated from the yes and no counts of each given condition, and the condition with the highest information gain becomes the root. This calculation iterates until the children and leaves are all determined.
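A rough sketch of the entropy and information gain calculation (the sick/not-sick counts and the "has fever" attribute are hypothetical numbers, just for illustration):

```python
import math

def entropy(pos, neg):
    # Shannon entropy of a yes/no split, in bits
    total = pos + neg
    h = 0.0
    for count in (pos, neg):
        if count:
            p = count / total
            h -= p * math.log2(p)
    return h

# Hypothetical "sick or not" data: 9 yes, 5 no overall
parent = entropy(9, 5)

# Splitting on a hypothetical attribute "has fever" gives two branches:
# fever (6 yes, 2 no) and no fever (3 yes, 3 no)
weighted = (8 / 14) * entropy(6, 2) + (6 / 14) * entropy(3, 3)
gain = parent - weighted
print(round(gain, 3))  # → 0.048
```

The attribute with the highest such gain would be chosen as the root, then the process repeats on each branch.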

week 7

This week’s topic is apriori. Apriori is an algorithm for finding the most popular pairs of items bought together in a market.

The apriori algorithm calculates, from the list of bought items, which pairs are bought the most. The purpose is to organize the order of the market’s items. For example, one will buy butter and bread at the same time, or maybe with milk for breakfast. The market will try to organize those things together so it will be easier for the customer to pick and buy them.

The way the algorithm works is quite similar to naive Bayes; however, instead of finding a single probability, it finds the pairs that people most probably buy together. The algorithm counts the support percentage of each item, eliminates the items with less than the minimum support percentage, and groups the rest with the other items. It loops until the last group of items has none above the minimum support percentage. After that, it calculates the confidence of the group of item(s).
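A minimal sketch of those steps as I understand them, up to pairs (the baskets and the 40% minimum support are made-up examples):

```python
from itertools import combinations

# Toy transactions (hypothetical basket data)
baskets = [
    {"bread", "butter", "milk"},
    {"bread", "butter"},
    {"milk", "bread"},
    {"butter", "milk"},
    {"bread", "butter", "jam"},
]
min_support = 0.4  # keep itemsets appearing in at least 40% of baskets

def support(itemset):
    # Fraction of baskets that contain every item in the itemset
    return sum(itemset <= b for b in baskets) / len(baskets)

# Level 1: frequent single items (jam is eliminated here)
items = {i for b in baskets for i in b}
frequent = [frozenset([i]) for i in items if support(frozenset([i])) >= min_support]

# Level 2: candidate pairs built only from the surviving single items
pairs = [frozenset(p) for p in combinations(sorted(i for f in frequent for i in f), 2)]
frequent_pairs = [p for p in pairs if support(p) >= min_support]

# Confidence of the rule bread → butter
conf = support(frozenset({"bread", "butter"})) / support(frozenset({"bread"}))
print(sorted(tuple(sorted(p)) for p in frequent_pairs), round(conf, 2))
```

A full apriori would keep growing the groups (triples, quadruples, …) until nothing is above the minimum support.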

week 6

This week’s lesson is about learning from observation, more specifically about clustering information. Clustering can depend on various information: gender, occupation, heredity (biologically), or how they look (species).

For the information, there can be multiple clusters, but which piece of information belongs to which cluster needs to be calculated. To do that, on a graph, the distance from each piece of information to each cluster center is calculated, and every piece of information will be closer to one cluster than to the others. The distance is calculated using the Euclidean or Manhattan method.

After calculating the distances from the information to the cluster(s), finding the mean coordinates of each cluster is necessary. Then the distances are recalculated again, until neither the coordinates of the clusters nor the information assigned to each cluster changes.
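That assign-then-recompute loop is essentially k-means. Here is a small sketch using Euclidean distance (the points and starting centers are made up):

```python
import math

def euclidean(a, b):
    # Straight-line distance between two coordinate tuples
    return math.dist(a, b)

def kmeans(points, centers):
    # Repeat: assign each point to its nearest center, then move each
    # center to the mean of its points, until the centers stop changing.
    while True:
        clusters = [[] for _ in centers]
        for p in points:
            nearest = min(range(len(centers)), key=lambda i: euclidean(p, centers[i]))
            clusters[nearest].append(p)
        new_centers = [
            tuple(sum(c) / len(c) for c in zip(*group)) if group else centers[i]
            for i, group in enumerate(clusters)
        ]
        if new_centers == centers:
            return centers, clusters
        centers = new_centers

pts = [(1, 1), (1, 2), (2, 1), (8, 8), (9, 8), (8, 9)]
centers, clusters = kmeans(pts, [(0, 0), (10, 10)])
print(centers)
```

Swapping `euclidean` for a Manhattan distance (sum of absolute differences) would give the other distance method mentioned above.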

As for the project, further discussion has not yet been held.

week 5

This week’s lecture is about uncertainty reasoning, which is part of learning about machine learning.

There are 3 types of machine learning:
– supervised learning
– unsupervised learning
– reinforcement learning

The difference between these types is: supervised means that the classification and learning are supervised by the maker, who helps identify the things shown to the machine.
Unsupervised means that the machine will be shown things and will move them into clusters by the similarity of their characteristics.
Reinforcement learning is learning by rewards: the machine learns whether it is right or wrong through the reward and will try to get the reward.

There is also a need to reason with probability, to make rational decisions depending on the situation. There are several rules in probability theory, such as Bayes and naïve Bayes.
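Bayes' rule in a small worked example (the sickness and test numbers are invented, purely to show the calculation):

```python
# Bayes' rule: P(sick | positive) = P(positive | sick) * P(sick) / P(positive)
# Hypothetical numbers: 1% of people are sick, the test catches 90% of
# sick people, and falsely flags 5% of healthy people.
p_sick = 0.01
p_pos_given_sick = 0.90
p_pos_given_healthy = 0.05

# Total probability of a positive test, over both sick and healthy people
p_pos = p_pos_given_sick * p_sick + p_pos_given_healthy * (1 - p_sick)
p_sick_given_pos = p_pos_given_sick * p_sick / p_pos
print(round(p_sick_given_pos, 3))  # → 0.154
```

Even with a positive test, the probability of actually being sick stays low here, because being sick is rare to begin with; that is the kind of rational conclusion probability theory supports.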

As for the project, due to our situation of being unable to meet with each other, further discussion for the project has not been held.

week 4

This week’s lecture is about adversarial search, and the main topic of the lecture is minimax, which is usually used by an automated computer playing games to search for which move it should do next.

The concept of this search is similar to Depth First Search (DFS): the tree has interchanging minimum and maximum levels, which represent the interchanging players, player 2 moving after player 1 has had their turn. The purpose of the min and max is to make sure that the two players choose differently depending on what the previous player moved: the maximizing player takes the highest value and the minimizing player takes the lowest.
This automated play is usually for 2D games such as chess, go, tic tac toe, et cetera.
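The interchanging min and max levels can be sketched on a tiny hand-made game tree (the leaf scores are arbitrary):

```python
def minimax(node, maximizing):
    # Leaves hold their evaluation score; inner nodes hold lists of children.
    if isinstance(node, int):
        return node
    # The player alternates at every level, just like turns in a game
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# A depth-2 game tree: the maximizer moves first, then the minimizer replies.
tree = [[3, 5], [2, 9], [0, 1]]
print(minimax(tree, True))  # → 3
```

The maximizer picks the first branch: the minimizer would answer its other branches with 2 and 0, but can only push the first branch down to 3.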

As for the project, we are sure to say that we are going to use tensorflow; however, what kind of project we are going to make is still undecided.

week 3

This week’s session is about informed search, the opposite of last week’s, and also local search. Examples are A* and greedy Best First Search (BFS), and the only part that I remember from local search is the genetic algorithm.

The difference between uninformed and informed search is the availability of a heuristic value. For instance, on the road, our travel distance and convenience are calculated; the heuristic value is the rate of traffic. In greedy BFS, the algorithm will only calculate based on the traffic rate and will take the path with the lower traffic rate. In the A* algorithm, however, it will calculate both the distance and the traffic rate to reach the destination. The A* algorithm will give the best solution as a result, since it calculates with every piece of information that it has.
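A small sketch of A* on a made-up road map (the edge costs and heuristic values are invented; the heuristic plays the role of the traffic estimate above):

```python
import heapq

# Toy road map: edges carry travel cost; h is an assumed heuristic
# estimate of the remaining cost to the goal "D".
graph = {"A": {"B": 1, "C": 4}, "B": {"C": 1, "D": 5}, "C": {"D": 1}, "D": {}}
h = {"A": 3, "B": 2, "C": 1, "D": 0}

def a_star(start, goal):
    # Priority queue ordered by f = g (cost so far) + h (estimate to goal);
    # greedy BFS would order by h alone instead.
    frontier = [(h[start], 0, start, [start])]
    best_g = {start: 0}
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        for nxt, cost in graph[node].items():
            ng = g + cost
            if ng < best_g.get(nxt, float("inf")):
                best_g[nxt] = ng
                heapq.heappush(frontier, (ng + h[nxt], ng, nxt, path + [nxt]))
    return None, float("inf")

print(a_star("A", "D"))  # → (['A', 'B', 'C', 'D'], 3)
```

Because A* combines the distance already traveled with the heuristic, it finds the cheapest route (cost 3) instead of the direct but more expensive edges.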

As for the genetic algorithm, it takes its example from genetic mutation and survival of the fittest. Nature keeps the best genes for survival, and the algorithm does the same: it will try to find the combination of genes that gives the best result.
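A toy sketch of that idea, maximizing the number of 1s in a bit string through selection, crossover, and mutation (the fitness function, population size, and mutation rate are all my own made-up choices):

```python
import random

random.seed(0)

def fitness(genes):
    # Toy objective: the more 1-bits, the fitter the individual
    return sum(genes)

def evolve(pop, generations=40, mutation=0.05):
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: len(pop) // 2]          # survival of the fittest
        children = []
        while len(survivors) + len(children) < len(pop):
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, len(a))     # single-point crossover
            child = a[:cut] + b[cut:]
            child = [g ^ (random.random() < mutation) for g in child]  # mutation flips bits
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

population = [[random.randint(0, 1) for _ in range(12)] for _ in range(20)]
best = evolve(population)
print(fitness(best))
```

Because the fittest half always survives, the best fitness can only go up from generation to generation.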

For the project, we are still deciding what kind of algorithm we should use, so as of now we are still finalizing our idea for this project and there has been no progress.

week 2

This week’s session is about uninformed search, such as breadth first search, depth first search, iterative deepening depth first search, uniform cost search, and depth limited search.

First of all, for problem solving, the agent needs to know about the situation: what its goal and environment are. It needs to know which situations it has to respond to and which it needs to ignore, so its knowledge about its surroundings has to be adequate. And, in its data, it should know about the state, the cost of the path, the actions along the path (right, left, up, down), and the number of steps (level of nodes) it should take.

Secondly, we learned about the differences between the search algorithms; each has its own advantages and weaknesses for problem solving. Breadth first search visits nodes level by level. Depth first search visits nodes by their children and backtracks if the bottom child is not the goal, then searches the other children. Uniform cost search uses a priority queue, so it searches nodes by their smallest possible cost.
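Breadth first search’s level-by-level visiting can be sketched like this (the graph is a made-up example):

```python
from collections import deque

# Toy graph as adjacency lists (hypothetical rooms in a maze)
graph = {"A": ["B", "C"], "B": ["D"], "C": ["D", "E"], "D": ["F"], "E": ["F"], "F": []}

def bfs(start, goal):
    # A queue of paths: nodes are expanded in the order discovered,
    # so shallower nodes are always visited before deeper ones.
    frontier = deque([[start]])
    visited = {start}
    while frontier:
        path = frontier.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph[path[-1]]:
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None

print(bfs("A", "F"))  # → ['A', 'B', 'D', 'F']
```

Replacing the queue with a stack would give depth first search, and replacing it with a priority queue keyed on path cost would give uniform cost search.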

For the project, my group and I decided to make something about recognizing emotions using tensorflow. We might elaborate it into something more than just emotion recognition; however, we are still deciding and researching about it.

week 1

This week’s session is about introducing us to the concept of intelligent systems and the upcoming project that needs to be done by the end of this course.

First of all, the concepts of machine learning and artificial intelligence are often confused. The concept is about a machine (artificial) that is intelligent enough to be a part of our daily life. The purpose of its existence is to make our everyday life and problem solving easier and to improve them. Some of us may deem an AI to be a terminator that could lead to the doom of humanity; however, the point of machine learning is to train the AI to have the same common sense as a human, to prevent that from happening.

Secondly, the process of training an AI needs a lot of variables and factors that we need to know, such as the environment. It is necessary that the machine in each environment knows what it is doing and its purpose for being there. And in case something happens or appears, it needs to have the rationality and human common sense to deal with the situation.

As for the upcoming project, my group and I have yet to discuss and decide on the topic that we should be making.