The story of algorithms: Islam's gift to digital civilization

Machine learning algorithms can perform many complex tasks today, from mastering challenging games and recognizing faces to automating everyday tasks and making predictive decisions that rival, and sometimes exceed, human judgment.

This decade has brought countless algorithmic breakthroughs, as well as many controversies, yet it is hard to believe that all this development began less than a century ago with the scientists Walter Pitts and Warren McCulloch.

How did the tale of algorithms begin?
An "algorithm" is a set of mathematical, logical and sequential steps necessary to solve a problem. The word derives from the name of the scholar Abu Ja'far Muhammad ibn Musa al-Khwarizmi, who pioneered such systematic step-by-step procedures in the ninth century AD.
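
To make the definition concrete, here is a minimal Python sketch of one of the oldest algorithms still in everyday use, Euclid's method for the greatest common divisor: a finite sequence of logical steps guaranteed to solve its problem.

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: a finite sequence of steps that
    always terminates with the greatest common divisor."""
    while b != 0:
        a, b = b, a % b  # replace the pair with (b, a mod b)
    return a

print(gcd(252, 105))  # 21
```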

The idea of machine learning was first developed in 1943 by logician Walter Pitts and neurophysiologist Warren McCulloch, who published a mathematical paper modeling decision-making in human cognition using neural networks.

The paper treated each neuron in the brain as a simple digital processor, and the brain as a complete computing machine. Later, mathematician and computer scientist Alan Turing introduced the "Turing test" in 1950: a three-person imitation game intended to determine whether a machine is intelligent, one that remains intractable for machines to this day. In its modern form, the Turing test requires a computer capable of deceiving a human into believing that it, too, is human, as writer Avi Gopani noted in a report on the history of algorithms published by Analytics India Magazine.

In the 1950s, pioneering machine learning research was carried out with simple algorithms. Arthur Samuel of IBM wrote the first computer program to play the famous game of checkers, built around "alpha-beta" pruning, a search technique that reduces the number of nodes the "minimax" algorithm must evaluate in a game tree; this method has been used in the design of two-player games ever since, as the writer mentions in her report.
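
The following is a minimal Python sketch of minimax search with alpha-beta pruning; the `node.children()` and `node.value()` hooks are assumed stand-ins for a real game such as checkers, not an actual library API.

```python
def alphabeta(node, depth, alpha, beta, maximizing):
    """Minimax search with alpha-beta pruning over a game tree.

    `node.children()` and `node.value()` are hypothetical hooks into
    a two-player game; supply your own game representation.
    """
    if depth == 0 or not node.children():
        return node.value()  # static evaluation of a leaf position
    if maximizing:
        best = float("-inf")
        for child in node.children():
            best = max(best, alphabeta(child, depth - 1, alpha, beta, False))
            alpha = max(alpha, best)
            if alpha >= beta:
                break  # beta cutoff: the opponent will never allow this branch
        return best
    else:
        best = float("inf")
        for child in node.children():
            best = min(best, alphabeta(child, depth - 1, alpha, beta, True))
            beta = min(beta, best)
            if alpha >= beta:
                break  # alpha cutoff: we already have a better option elsewhere
        return best
```

The cutoffs are what make the method practical: whole subtrees are skipped once it is clear neither player would steer the game into them.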

In 1957, American psychologist Frank Rosenblatt designed the "perceptron", the first neural network, which simulated thought processes in the human brain. In 1967, the "nearest neighbor" algorithm was introduced, one of the first algorithms applied to the "traveling salesman problem": a salesman starts in an arbitrary city and repeatedly travels to the nearest unvisited city until all cities have been visited.
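
Here is a minimal sketch of that nearest-neighbor heuristic, assuming cities are given as 2D coordinates:

```python
import math

def nearest_neighbor_tour(cities, start=0):
    """Greedy nearest-neighbor heuristic for the traveling salesman
    problem: from the current city, always move to the closest
    unvisited city until every city has been visited."""
    unvisited = set(range(len(cities))) - {start}
    tour = [start]
    while unvisited:
        current = cities[tour[-1]]
        nxt = min(unvisited, key=lambda i: math.dist(current, cities[i]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

print(nearest_neighbor_tour([(0, 0), (3, 0), (1, 1), (0, 2)]))
```

The heuristic is fast but only approximate: it can produce tours noticeably longer than the true optimum, which is why it "applies to" rather than fully solves the problem.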

The late 20th century
The concepts of "backpropagation" were introduced in the 1960s and reintroduced in the 1980s as a way to train the hidden layers between the input and output layers of neural networks, making them suitable for commercial use.
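
As a rough illustration of the idea, the following NumPy sketch trains a tiny network with one hidden layer on the XOR problem; the architecture, seed and learning rate are arbitrary choices made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy network: 2 inputs -> 4 hidden units -> 1 output, learning XOR.
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for _ in range(5000):
    # Forward pass: input -> hidden layer -> output.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: the output error is propagated back through
    # the hidden layer, giving every weight its gradient.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= lr * h.T @ d_out;  b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;    b1 -= lr * d_h.sum(axis=0)

print(out.round(2))  # converges toward [[0], [1], [1], [0]]
```

Without the backward pass there would be no principled way to assign credit to the hidden-layer weights, which is exactly the problem backpropagation solved.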

Avi Gopani continues her survey of the development of algorithms, noting Gerald Dejong's 1981 introduction of explanation-based learning, in which training data is analyzed and generalized into rules. This was followed in 1985 by Professor Terry Sejnowski's neural network "NetTalk", an algorithm that learned to pronounce written words the way a child does. In 1989, Christopher Watkins developed the Q-learning algorithm, which greatly improved the practical applications of reinforcement learning.
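
A minimal sketch of the Q-learning update rule follows; the two-state environment and action names are invented for the example.

```python
from collections import defaultdict

def q_learning_update(Q, state, action, reward, next_state,
                      actions, alpha=0.1, gamma=0.9):
    """One step of Watkins' Q-learning: nudge the value of the
    (state, action) pair toward reward + discounted best future value."""
    best_next = max(Q[(next_state, a)] for a in actions)
    target = reward + gamma * best_next
    Q[(state, action)] += alpha * (target - Q[(state, action)])

# Toy usage on a hypothetical 2-state environment.
Q = defaultdict(float)  # unseen (state, action) pairs default to 0
actions = ["left", "right"]
q_learning_update(Q, state=0, action="right", reward=1.0,
                  next_state=1, actions=actions)
print(Q[(0, "right")])  # 0.1
```

The appeal of the rule is that it needs no model of the environment: values improve purely from observed (state, action, reward, next state) experience.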

In the 1990s, advanced statistical methods became popular because neural networks seemed less interpretable and demanded more computational power. These methods included the random forest algorithms introduced in 1995. Then came the biggest victory yet for algorithms and artificial intelligence, when IBM's "Deep Blue" program defeated world chess champion Garry Kasparov in 1997.

Rapid progress in the new millennium
In the early 2000s, the author says, machine learning algorithms and unsupervised learning developed rapidly, and in 2009 Fei-Fei Li, a professor of computer science at Stanford University, created ImageNet, a large dataset reflecting the real world, which later became the foundation on which Alex Krizhevsky built AlexNet, a pioneering deep convolutional neural network (CNN).

Meanwhile, in 2011, IBM's Watson beat its human competitors on the famous quiz show Jeopardy!, and around the same time Google introduced its own system, "Google Brain", capable of classifying objects; the giant company followed up with an algorithm for browsing YouTube videos.

The word2vec algorithms, introduced in 2013, used neural networks to learn word embeddings, and later became a foundation for large language models. In 2014, Facebook unveiled "DeepFace", an algorithm that surpassed all previous benchmarks in its ability to recognize human faces.
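
As an illustration, the open-source gensim library (assumed installed via `pip install gensim`) provides a word2vec implementation; the toy corpus below is invented for the example.

```python
from gensim.models import Word2Vec

sentences = [
    ["the", "king", "rules", "the", "kingdom"],
    ["the", "queen", "rules", "the", "kingdom"],
    ["algorithms", "learn", "from", "data"],
]

# Train skip-gram embeddings (sg=1): each word is mapped to a dense
# vector so that words used in similar contexts end up close together.
model = Word2Vec(sentences, vector_size=50, window=2, min_count=1, sg=1)
print(model.wv.most_similar("king", topn=2))
```

It is this "similar context, nearby vector" property that later made embeddings such a useful building block for language models.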

Algorithms are the backbone of modern technology, now and in the future
Today, algorithms, as the writer stresses, have become the backbone of image, video and audio generation, and are commonly used for deepfakes. In 2016, AlphaGo scored one of machine learning's most celebrated victories by beating the world champion at the Chinese board game Go, and in 2017 AlphaGo's successors defeated many champions in several complex games. Also in 2017, Waymo began testing its self-driving minivans, and DeepMind achieved another victory in 2018 with AlphaFold, an algorithm able to predict protein structure.

In fact, the prospects for developing algorithms and applying them in daily life are almost innumerable. It is enough to know that the global algorithmic trading market was worth $11.1 billion in 2019 and is expected to reach $18.8 billion by 2024, a compound annual growth rate of 11.1%, as recently reported by the MarketsandMarkets research platform.

We have also recently witnessed the creation of remarkably intelligent algorithms, such as those behind the various social networks; the recommendation algorithm used by the TikTok platform, one of the most advanced and elusive of them, is a good example.

In short, the TikTok algorithm is a complex system designed to serve users content that it judges to be of high interest to them. According to TikTok, the system recommends content by ranking videos based on a set of factors, starting with the interests you express as a new user and adapting to signals that you are not interested in something. In other words, it is an algorithm that deeply studies, intelligently analyzes, and actively monitors everything about its users and how they behave on the platform, as writer Jessica Worb explained recently on the Later blog.
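
Purely as an illustration of ranking-by-factors, the sketch below scores videos with made-up signal names and weights; it is not TikTok's actual system.

```python
# Hypothetical signals and weights, invented for this sketch.
def score(video: dict) -> float:
    """Combine per-user engagement signals into one ranking score."""
    return (3.0 * video["interest_match"]    # overlap with the user's interests
            + 2.0 * video["rewatch_rate"]    # strong positive signal
            + 1.0 * video["like_rate"]
            - 4.0 * video["marked_not_interested"])  # negative feedback dominates

def recommend(candidates: list[dict], k: int = 10) -> list[dict]:
    """Serve the k highest-scoring videos for this user."""
    return sorted(candidates, key=score, reverse=True)[:k]

feed = recommend([
    {"id": 1, "interest_match": 0.9, "rewatch_rate": 0.2,
     "like_rate": 0.5, "marked_not_interested": 0.0},
    {"id": 2, "interest_match": 0.4, "rewatch_rate": 0.1,
     "like_rate": 0.9, "marked_not_interested": 1.0},
], k=1)
print([v["id"] for v in feed])  # [1]
```

Real recommender systems learn such weights from behavior rather than hand-tuning them, but the basic shape, many signals folded into one score used to sort a feed, is the same.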

Evil algorithms: weapons of math destruction
However, al-Khwarizmi's moon has a dark side too: a sinister and extremely dangerous one.

In her book Weapons of Math Destruction, mathematician Cathy O'Neil explores how blindly trusting algorithms with sensitive decisions can harm many people.

O'Neil documents many cases in which algorithms have damaged people's lives: credit scoring systems that wrongly penalize people, "recidivism algorithms" that hand down harsher judgments on defendants based on their color and ethnic background, teacher rating systems that fire well-performing teachers and reward cheaters, and trading algorithms that generate billions of dollars at the expense of low-income groups, as TechTalks recently reported.

The opacity of such systems makes them all the more dangerous: an algorithm can decide to keep a person in prison based on their race, and the defendant has no way of knowing why they were deemed ineligible for parole.

And there is something more dangerous still. As AJ NET reported recently, a European research team has shown that artificial intelligence, if deliberately misused, can very easily design a large number of entirely new biological and chemical weapons, a development that could redraw the map of warfare on this planet.

Drug-discovery algorithms of this kind are normally designed to minimize harm and maximize benefit: after candidate chemical compounds are generated in the algorithm's first cycle, they are refined in the next cycle with safety as a primary objective, so that they are not harmful to humans.

However, according to AJ NET, the researchers reversed this mechanism, so that the algorithm learned to prefer harm and avoid benefit; in less than six hours of running, their model proposed 40,000 molecules toxic to the human body.
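
In the abstract, the reversal amounts to flipping the sign of a safety term in the model's objective. The toy sketch below illustrates that mechanism with invented numeric scoring functions; the "compounds" are just numbers and no chemistry is involved.

```python
import random

def benefit(x):   return -(x - 3) ** 2 + 9   # toy "therapeutic value", peaks near x=3
def toxicity(x):  return abs(x - 8)          # toy "harm" score (lower is safer)

def best_candidate(candidates, flipped=False):
    if flipped:
        # Reversed objective: reward harm and penalize benefit.
        return max(candidates, key=lambda x: toxicity(x) - benefit(x))
    # Normal objective: maximize benefit, penalize toxicity.
    return max(candidates, key=lambda x: benefit(x) - toxicity(x))

pool = [random.uniform(0, 10) for _ in range(1000)]
print(best_candidate(pool))                # lands near x ≈ 3.5: useful and fairly safe
print(best_candidate(pool, flipped=True))  # lands near x ≈ 10: maximally "harmful"
```

The same search machinery serves both objectives; only the sign of one term changed, which is precisely what makes this class of systems dual-use.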

And yet algorithms were invented by humans, are supervised by humans, and are fed information by humans. Like any other innovation, they can play an important role in humanity's development and lift it to a new level of progress and prosperity, but they can also become an evil demon, if humans want them to.
