A Formula for Estimating PR from Backlinks

PageRank is much sought after by bloggers who want to start an internet business, or who are simply chasing popularity. Several of the articles I have written about PageRank come down to one thing: increasing the number of backlinks you have.

These tips and tricks may seem old-fashioned, and plenty of people already use them, but in the blogging world they are exactly what Google looks at when it evaluates the quality of a website.
Some newer bloggers may feel impatient to get PageRank quickly so their blog can start earning money, but it is worth understanding how a blog or website earns PageRank in the first place. One part of that is knowing how many backlinks the blog has.
Now let us try to calculate how many backlinks you would need.


Read it like this:

Say you want to know what it takes to reach PR 5.

To get a PR [5] you need [16803] backlinks from PR [1] blogs.
To get a PR [5] you need [3055] backlinks from PR [2] blogs.

To get a PR [5] you need [555] backlinks from PR [3] blogs.
To get a PR [5] you need [3] backlinks from PR [6] blogs.

And so on…

So if you want to raise your PageRank, work out for yourself how many backlinks you need.
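The figures quoted above follow a simple geometric pattern: each one-point drop in the linking blog's PR multiplies the number of backlinks needed by roughly 5.5 (16803 / 3055 ≈ 3055 / 555 ≈ 5.5). The sketch below is not an official Google formula; the ratio and base constant are assumptions fitted to the quoted figures:

```python
RATIO = 5.5   # assumed: inferred from the quoted table (16803/3055 ≈ 3055/555 ≈ 5.5)
BASE = 18.4   # assumed: calibrated so that PR 5 from PR 1 blogs gives roughly 16800

def backlinks_needed(target_pr, source_pr):
    """Rough estimate of how many backlinks from PR source_pr pages
    are needed to reach PR target_pr."""
    return round(BASE * RATIO ** (target_pr - source_pr))

for src in (1, 2, 3, 6):
    print(src, backlinks_needed(5, src))
```

For PR 5 this reproduces the quoted table to within about one percent, from roughly 16800 backlinks from PR 1 blogs down to 3 from a PR 6 blog.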

So before you comment on a do-follow blog, check the PR of the individual post first, because PR usually differs from page to page: the high PR is usually found only on the home page of a website or blog. You do not need to visit the website itself to check the PR of a post.

If you use Mozilla Firefox you can install an add-on called SEOquake.
This add-on can display the PageRank of each article, along with a range of other SEO-related information.

You can also visit my posts, where I have put together a list of articles with a range of PRs for you to pick from; the websites I list there are do-follow, so go and visit the article


Number line

In mathematics, a number line is a picture of a straight line on which every point is assumed to correspond to a real number and every real number to a point. Often the integers are shown as specially-marked points evenly spaced on the line. Although this image only shows the integers from −9 to 9, the line includes all real numbers, continuing "forever" in each direction, and also numbers not marked that are between the integers. It is often used as an aid in teaching simple addition and subtraction, especially involving negative numbers.

The number line

It is divided into two symmetric halves by the origin, i.e. the number zero.

Drawing the number line

The number line is most often represented as being horizontal. Customarily, positive numbers lie on the right side of zero, and negative numbers lie on the left side of zero. An arrowhead on either end of the drawing is meant to suggest that the line continues indefinitely in the positive and negative reals, denoted by ℝ. The real numbers consist of irrational and rational numbers, as well as integers, whole numbers, and the natural numbers (the counting numbers).

A line drawn through the origin at right angles to the real number line can be used to represent the imaginary numbers. This extends the number line to a number plane, with points on the complex plane representing complex numbers.
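Python's built-in complex numbers give a quick way to see this: the real part of a complex number is its coordinate on the real number line, and the imaginary part is its coordinate on the perpendicular axis.

```python
# The complex number 3 - 4i is the point (3, -4) in the number plane.
z = 3 - 4j
print(z.real)  # coordinate on the real number line: 3.0
print(z.imag)  # coordinate on the imaginary axis: -4.0
print(abs(z))  # distance from the origin: 5.0
```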


Mathematics education

Mathematics education is the practice of teaching and learning mathematics, as well as the field of scholarly research on this practice. Researchers in mathematics education are primarily concerned with the tools, methods and approaches that facilitate practice or the study of practice. However, mathematics education research, known on the continent of Europe as the didactics of mathematics, has developed into a fully fledged field of study, with its own characteristic concepts, theories, methods, national and international organisations, conferences and literature. This article describes some of the history, influences and recent controversies concerning mathematics education as a practice.

History

Illustration at the beginning of a 14th century translation of Euclid's Elements.

Elementary mathematics was part of the education system in most ancient civilisations, including Ancient Greece, the Roman empire, Vedic society and ancient Egypt. In most cases, a formal education was only available to male children with a sufficiently high status, wealth or caste.

In Plato's division of the liberal arts into the trivium and the quadrivium, the quadrivium included the mathematical fields of arithmetic and geometry. This structure was continued in the structure of classical education that was developed in medieval Europe. Teaching of geometry was almost universally based on Euclid's Elements. Apprentices to trades such as masons, merchants and money-lenders could expect to learn such practical mathematics as was relevant to their profession.

The first mathematics textbooks to be written in English and French were published by Robert Recorde, beginning with The Grounde of Artes in 1540.

In the Renaissance the academic status of mathematics declined, because it was strongly associated with trade and commerce. Although it continued to be taught in European universities, it was seen as subservient to the study of Natural, Metaphysical and Moral Philosophy.

This trend was somewhat reversed in the seventeenth century, with the University of Aberdeen creating a Mathematics Chair in 1613, followed by the Chair in Geometry being set up in University of Oxford in 1619 and the Lucasian Chair of Mathematics being established by the University of Cambridge in 1662. However, it was uncommon for mathematics to be taught outside of the universities. Isaac Newton, for example, received no formal mathematics teaching until he joined Trinity College, Cambridge in 1661.

In the eighteenth and nineteenth centuries the industrial revolution led to an enormous increase in urban populations. Basic numeracy skills, such as the ability to tell the time, count money and carry out simple arithmetic, became essential in this new urban lifestyle. Within the new public education systems, mathematics became a central part of the curriculum from an early age.

By the twentieth century mathematics was part of the core curriculum in all developed countries.

During the twentieth century mathematics education was established as an independent field of research. Here are some of the main events in this development:

  • In 1893 a Chair in mathematics education was created at the University of Göttingen, under the administration of Felix Klein
  • The International Commission on Mathematical Instruction (ICMI) was founded in 1908, and Felix Klein became the first president of the organisation
  • A new interest in mathematics education emerged in the 1960s, and the commission was revitalised
  • In 1968, the Shell Centre for Mathematical Education was established in Nottingham
  • The first International Congress on Mathematical Education (ICME) was held in Lyon in 1969. The second congress was in Exeter in 1972, and after that it has been held every four years

In the 20th century, the cultural impact of the "electric age" (McLuhan) was also taken up by educational theory and the teaching of mathematics. While the previous approach had focused on "working with specialized 'problems' in arithmetic", the emerging structural approach to knowledge had "small children meditating about number theory and 'sets'."[1]

Objectives

At different times and in different cultures and countries, mathematics education has attempted to achieve a variety of different objectives. These objectives have included:

  • The teaching of basic numeracy skills to all pupils
  • The teaching of practical mathematics (arithmetic, elementary algebra, plane and solid geometry, trigonometry) to most pupils, to equip them to follow a trade or craft
  • The teaching of abstract mathematical concepts (such as set and function) at an early age
  • The teaching of selected areas of mathematics (such as Euclidean geometry) as an example of an axiomatic system and a model of deductive reasoning
  • The teaching of selected areas of mathematics (such as calculus) as an example of the intellectual achievements of the modern world
  • The teaching of advanced mathematics to those pupils who wish to follow a career in Science, Technology, Engineering, and Mathematics (STEM) fields.
  • The teaching of heuristics and other problem-solving strategies to solve non-routine problems.

Methods of teaching mathematics have varied in line with changing objectives.

Research

An increasing amount of research has been done in the area of mathematics education in the last few decades. The National Council of Teachers of Mathematics has summarized the state of current research in mathematics education in nine areas of current interest, as follows.[2] (Though the NCTM has special interest in American education, the research summarized is international in scope.)

What can we learn from research?
Instead of just looking at whether a particular program works, we must also look at why and under what conditions it works. Teachers can adapt tasks used in studies for their own classrooms. Individual studies are often inconclusive, so it is important to look at a consensus of many studies to draw conclusions. Theory can put practice in a new perspective. For example, research shows that when students invent their own algorithms first, and then learn the standard algorithm, they understand better and make fewer errors. Such findings can have an impact on classroom practice.
Homework
Homework that leads students to practice past lessons or prepare for future ones is more effective than homework that goes over the day's lesson. Assignments should be a mix of easy and hard problems and ideally based on the student's learning style. Students must receive feedback. Students with learning disabilities or low motivation may profit from rewards. Shorter homework is better than long homework and group homework is sometimes effective, though these findings depend on grade level. Homework helps simple skills, but not broader measures of achievement.
Student learning
Most bilingual adults switch languages when calculating. Such code-switching has no impact on math ability and should not be discouraged.
When studying statistics, children need time to explore, study and share reasoning about centers, shape, spread and variability. The ability to calculate averages does not mean students understand the concept of averages, which students conceptualize in a variety of ways—from a simplistic "typical value" to a deeper idea of "representative value." Learning when to use mean, median and mode is difficult.
Algebra
It is important for elementary school children to spend a long time learning to express algebraic properties without symbols before learning algebraic notation. When learning symbols, many students believe letters always represent unknowns and struggle with the concept of variable. They prefer arithmetic reasoning to algebraic equations for solving word problems. It takes time to move from arithmetic to algebraic generalizations to describe patterns. Students often have trouble with the minus sign and understand the equals sign to mean "the answer is...."
American Curriculum
The US National Research Council has found it difficult to evaluate any given program, but two general patterns have become clear from large-scale studies: (1) Students achieve greater conceptual understanding from standards-based curricula compared to traditional curricula. (2) Students achieve the same procedural skill level in both types of curricula as measured by traditional standardized tests.
Effective instruction
The two most important criteria for helping students gain conceptual understanding are making connections and intentionally struggling with important ideas. Skill efficiency is best attained by rapid pacing, direct traditional teaching and a smooth transition from teacher modeling to error-free practice. Students who learn skills in conceptually-oriented instruction are better able to adapt their skills to new situations.
Students with difficulties
Students with genuine difficulties (unrelated to motivation or past instruction) struggle with basic facts, answer impulsively, struggle with mental representations, have poor number sense and have poor short-term memory. Techniques that have been found productive for helping such students include peer-assisted learning, explicit teaching with visual aids, instruction informed by formative assessment and encouraging students to think aloud.
Formative assessment
Formative assessment is both the best and cheapest way to boost student achievement, student engagement and teacher professional satisfaction. Results surpass those of reducing class size or increasing teachers' content knowledge. Only short-term (within and between lessons) and medium-term (within and between units) assessment is effective. Effective assessment is based on clarifying what students should know, creating appropriate activities to obtain the evidence needed, giving good feedback, encouraging students to take control of their learning and letting students be resources for one another.
Mathematics specialists and coaches
Little research has been done so far on mathematics coaches and the studies that have been done are hard to evaluate because coaching is usually part of larger programs. What research has been done seems to show that coaches can improve teaching, but the coaching program must be well designed.

Standards

Throughout most of history, standards for mathematics education were set locally, by individual schools or teachers, depending on the levels of achievement that were relevant to, realistic for, and considered socially appropriate for their pupils.

In modern times there has been a move towards regional or national standards, usually under the umbrella of a wider standard school curriculum. In England, for example, standards for mathematics education are set as part of the National Curriculum for England, while Scotland maintains its own educational system.

Ma (2000) summarised the research of others who found, based on nationwide data, that students with higher scores on standardised math tests had taken more mathematics courses in high school. This led some states to require three years of math instead of two. But because this requirement was often met by taking another lower-level math course, the additional courses had a “diluted” effect in raising achievement levels.[3]

In North America, the National Council of Teachers of Mathematics (NCTM) has published the Principles and Standards for School Mathematics. In 2006, they released the Curriculum Focal Points, which recommend the most important mathematical topics for each grade level through grade 8. However, these standards are not nationally enforced in US schools.

Content and age levels

Different levels of mathematics are taught at different ages and in somewhat different sequences in different countries. Sometimes a class may be taught at an earlier age than typical as a special or "honors" class.

Elementary mathematics in most countries is taught in a similar fashion, though there are differences. In the United States fractions are typically taught starting from 1st grade, whereas in other countries they are usually taught later, since the metric system does not require young children to be familiar with them. Most countries tend to cover fewer topics in greater depth than in the United States.[4]

In most of the US, algebra, geometry and analysis (pre-calculus and calculus) are taught as separate courses in different years of high school. Mathematics in most other countries (and in a few US states) is integrated, with topics from all branches of mathematics studied every year. Students in many countries choose an option or pre-defined course of study rather than choosing courses à la carte as in the United States. Students in science-oriented curricula typically study differential calculus and trigonometry at age 16-17 and integral calculus, complex numbers, analytic geometry, exponential and logarithmic functions, and infinite series their final year of secondary school.

Methods

The method or methods used in any particular context are largely determined by the objectives that the relevant educational system is trying to achieve. Methods of teaching mathematics include the following:

  • Conventional approach - the gradual and systematic guiding through the hierarchy of mathematical notions, ideas and techniques. Starts with arithmetic and is followed by Euclidean geometry and elementary algebra taught concurrently. Requires the instructor to be well informed about elementary mathematics, since didactic and curriculum decisions are often dictated by the logic of the subject rather than pedagogical considerations. Other methods emerge by emphasizing some aspects of this approach.
  • Classical education - the teaching of mathematics within the classical education syllabus of the Middle Ages, which was typically based on Euclid's Elements taught as a paradigm of deductive reasoning.
  • Rote learning - the teaching of mathematical results, definitions and concepts by repetition and memorisation typically without meaning or supported by mathematical reasoning. A derisory term is drill and kill. Parrot Maths was the title of a paper critical of rote learning. Within the conventional approach, rote learning is used to teach multiplication tables.
  • Exercises - the reinforcement of mathematical skills by completing large numbers of exercises of a similar type, such as adding vulgar fractions or solving quadratic equations.
  • Problem solving - the cultivation of mathematical ingenuity, creativity and heuristic thinking by setting students open-ended, unusual, and sometimes unsolved problems. The problems can range from simple word problems to problems from international mathematics competitions such as the International Mathematical Olympiad. Problem solving is used as a means to build new mathematical knowledge, typically by building on students' prior understandings.
  • New Math - a method of teaching mathematics which focuses on abstract concepts such as set theory, functions and bases other than ten. Adopted in the US as a response to the challenge of early Soviet technical superiority in space, it began to be challenged in the late 1960s. One of the most influential critiques of the New Math was Morris Kline's 1973 book Why Johnny Can't Add. The New Math method was the topic of one of Tom Lehrer's most popular parody songs, with his introductory remarks to the song: "...in the new approach, as you know, the important thing is to understand what you're doing, rather than to get the right answer."
  • Historical method - teaching the development of mathematics within an historical, social and cultural context. Provides more human interest than the conventional approach.
  • Standards-based mathematics - a vision for pre-college mathematics education in the US and Canada, focused on deepening student understanding of mathematical ideas and procedures, and formalized by the National Council of Teachers of Mathematics which created the Principles and Standards for School Mathematics.

Mathematics teachers

The following people all taught mathematics at some stage in their lives, although they are better known for other things:

  • Lewis Carroll, pen name of British author Charles Dodgson, lectured in mathematics at Christ Church, Oxford
  • John Dalton, British chemist and physicist, taught mathematics at schools and colleges in Manchester, Oxford and York
  • Tom Lehrer, American songwriter and satirist, taught mathematics at Harvard, MIT and currently at University of California, Santa Cruz
  • Brian May, rock guitarist and composer, worked briefly as a mathematics teacher before joining Queen[5]
  • Georg Joachim Rheticus, Austrian cartographer and disciple of Copernicus, taught mathematics at the University of Wittenberg
  • Edmund Rich, Archbishop of Canterbury in the 13th century, lectured on mathematics at the universities of Oxford and Paris
  • Éamon de Valera, a leader of Ireland's struggle for independence in the early 20th century and founder of the Fianna Fáil party, taught mathematics at schools and colleges in Dublin
  • Archie Williams, American athlete and Olympic gold medalist, taught mathematics at high schools in California.



Gaussian elimination

In linear algebra, Gaussian elimination is an algorithm for solving systems of linear equations, finding the rank of a matrix, and calculating the inverse of an invertible square matrix. Gaussian elimination is named after German mathematician and scientist Carl Friedrich Gauss.

Elementary row operations are used to reduce a matrix to row echelon form. Gauss–Jordan elimination, an extension of this algorithm, reduces the matrix further to reduced row echelon form. Gaussian elimination alone is sufficient for many applications.

History

The method of Gaussian elimination appears in Chapter Eight, Rectangular Arrays, of the important Chinese mathematical text Jiuzhang suanshu or The Nine Chapters on the Mathematical Art. Its use is illustrated in eighteen problems, with two to five equations. The first reference to the book by this title is dated to 179 CE, but parts of it were written as early as approximately 150 BCE.[1] It was commented on by Liu Hui in the 3rd century.

However, the method was invented in Europe independently by Carl Friedrich Gauss when developing the method of least squares in his 1809 publication Theory of Motion of Heavenly Bodies.[2]

Algorithm overview

The process of Gaussian elimination has two parts. The first part (forward elimination) reduces a given system to triangular or echelon form, or produces a degenerate equation, indicating that the system has no solution. This is accomplished through the use of elementary row operations. The second part uses back substitution to find the solution of the reduced system.

Stated equivalently for matrices, the first part reduces a matrix to row echelon form using elementary row operations while the second reduces it to reduced row echelon form, or row canonical form.

Another point of view, which turns out to be very useful to analyze the algorithm, is that Gaussian elimination computes a matrix decomposition. The three elementary row operations used in the Gaussian elimination (multiplying rows, switching rows, and adding multiples of rows to other rows) amount to multiplying the original matrix with invertible matrices from the left. The first part of the algorithm computes an LU decomposition, while the second part writes the original matrix as the product of a uniquely determined invertible matrix and a uniquely determined reduced row-echelon matrix.
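A small Python sketch of this view, using the coefficient matrix from the example in the next section: the row operation "add 3/2 of row 1 to row 2" is exactly left-multiplication by an elementary matrix E (names here are illustrative).

```python
def matmul(E, A):
    """Multiply two matrices given as lists of rows."""
    return [[sum(E[i][k] * A[k][j] for k in range(len(A)))
             for j in range(len(A[0]))] for i in range(len(E))]

E = [[1.0, 0.0, 0.0],   # identity, except that...
     [1.5, 1.0, 0.0],   # ...row 2 gains 3/2 of row 1
     [0.0, 0.0, 1.0]]
A = [[2.0, 1.0, -1.0],
     [-3.0, -1.0, 2.0],
     [-2.0, 1.0, 2.0]]
print(matmul(E, A)[1])  # [0.0, 0.5, 0.5]: x has been eliminated from row 2
```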

Example

Suppose the goal is to find and describe the solution(s), if any, of the following system of linear equations:

2x + y − z = 8      (L1)
−3x − y + 2z = −11   (L2)
−2x + y + 2z = −3    (L3)

The algorithm is as follows: eliminate x from all equations below L1, and then eliminate y from all equations below L2. This will put the system into triangular form. Then, using back-substitution, each unknown can be solved for.

In the example, x is eliminated from L2 by adding (3/2)L1 to L2. x is then eliminated from L3 by adding L1 to L3. Formally:

L2 + (3/2)L1 → L2
L3 + L1 → L3

The result is:

2x + y − z = 8
(1/2)y + (1/2)z = 1
2y + z = 5

Now y is eliminated from L3 by adding −4L2 to L3:

L3 − 4L2 → L3

The result is:

2x + y − z = 8
(1/2)y + (1/2)z = 1
−z = 1

This result is a system of linear equations in triangular form, and so the first part of the algorithm is complete.

The second part, back-substitution, consists of solving for the unknowns in reverse order. It can thus be seen that

z = −1    (L3)

Then, z can be substituted into L2, which can then be solved to obtain

y = 3    (L2)

Next, z and y can be substituted into L1, which can be solved to obtain

x = 2    (L1)

The system is solved.
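The two phases can be sketched in Python for this same system; this is a minimal version with no pivoting, which the example above does not require:

```python
A = [[2.0, 1.0, -1.0],
     [-3.0, -1.0, 2.0],
     [-2.0, 1.0, 2.0]]
b = [8.0, -11.0, -3.0]
n = len(A)

# Forward elimination: zero out the entries below the diagonal, column by column.
for k in range(n):
    for i in range(k + 1, n):
        factor = A[i][k] / A[k][k]
        for j in range(k, n):
            A[i][j] -= factor * A[k][j]
        b[i] -= factor * b[k]

# Back-substitution: solve for the unknowns in reverse order.
x = [0.0] * n
for i in reversed(range(n)):
    s = sum(A[i][j] * x[j] for j in range(i + 1, n))
    x[i] = (b[i] - s) / A[i][i]

print(x)  # [2.0, 3.0, -1.0]
```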

Some systems cannot be reduced to triangular form, yet still have at least one valid solution: for example, if y had not occurred in L2 and L3 after the first step above, the algorithm would have been unable to reduce the system to triangular form. However, it would still have reduced the system to echelon form. In this case, the system does not have a unique solution, as it contains at least one free variable. The solution set can then be expressed parametrically (that is, in terms of the free variables, so that if values for the free variables are chosen, a solution will be generated).

In practice, one does not usually deal with the systems in terms of equations but instead makes use of the augmented matrix (which is also suitable for computer manipulations). For example:

2x + y − z = 8      (L1)
−3x − y + 2z = −11   (L2)
−2x + y + 2z = −3    (L3)

Therefore, the Gaussian Elimination algorithm applied to the augmented matrix begins with:

[  2   1  −1 |   8 ]
[ −3  −1   2 | −11 ]
[ −2   1   2 |  −3 ]

which, at the end of the first part of the algorithm, looks like this:

[ 2    1    −1 | 8 ]
[ 0  1/2  1/2 | 1 ]
[ 0    0   −1 | 1 ]

That is, it is in row echelon form.

At the end of the algorithm, if the Gauss–Jordan elimination is applied:

[ 1  0  0 |  2 ]
[ 0  1  0 |  3 ]
[ 0  0  1 | −1 ]

That is, it is in reduced row echelon form, or row canonical form.

Other applications

Finding the inverse of a matrix

Suppose A is an n × n matrix and you need to calculate its inverse. The n × n identity matrix is augmented to the right of A, forming the n × 2n block matrix B = [A, I]. Through application of elementary row operations and the Gaussian elimination algorithm, the left block of B can be reduced to the identity matrix I, which leaves the inverse A⁻¹ in the right block of B.

If the algorithm is unable to reduce A to triangular form, then A is not invertible.
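A minimal Python sketch of this procedure: Gauss–Jordan elimination applied to the block matrix [A | I], with partial pivoting for stability (the function name is an illustrative choice).

```python
def invert(A):
    """Invert a square matrix by row-reducing the augmented block [A | I]."""
    n = len(A)
    M = [row[:] + [1.0 if i == j else 0.0 for j in range(n)]
         for i, row in enumerate(A)]
    for col in range(n):
        # Partial pivoting: bring the largest entry in this column to the pivot row.
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        if M[piv][col] == 0:
            raise ZeroDivisionError("matrix is not invertible")
        M[col], M[piv] = M[piv], M[col]
        p = M[col][col]
        M[col] = [v / p for v in M[col]]
        # Gauss-Jordan: clear the column in every other row, not just below.
        for r in range(n):
            if r != col:
                f = M[r][col]
                M[r] = [a - f * b for a, b in zip(M[r], M[col])]
    return [row[n:] for row in M]  # the right block now holds A's inverse

print(invert([[4.0, 7.0], [2.0, 6.0]]))  # approximately [[0.6, -0.7], [-0.2, 0.4]]
```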

General algorithm to compute ranks and bases

The Gaussian elimination algorithm can be applied to any m × n matrix A. If we get "stuck" in a given column, we move to the next column. In this way, for example, some 6 × 9 matrices can be transformed to a matrix that has a reduced row echelon form like

[ 1  *  0  0  *  *  0  *  0 ]
[ 0  0  1  0  *  *  0  *  0 ]
[ 0  0  0  1  *  *  0  *  0 ]
[ 0  0  0  0  0  0  1  *  0 ]
[ 0  0  0  0  0  0  0  0  1 ]
[ 0  0  0  0  0  0  0  0  0 ]

(the *'s are arbitrary entries). This echelon matrix T contains a wealth of information about A: the rank of A is 5 since there are 5 non-zero rows in T; the vector space spanned by the columns of A has a basis consisting of the first, third, fourth, seventh and ninth column of A (the columns of the ones in T), and the *'s tell you how the other columns of A can be written as linear combinations of the basis columns.
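A Python sketch of this generalized procedure, which skips a column when it gets "stuck" and reports the rank and the pivot (basis) columns; the function name and tolerance are illustrative choices.

```python
def rref_rank_and_pivots(M, tol=1e-12):
    """Reduce a copy of M (any shape) toward reduced row echelon form;
    return (rank, list of pivot column indices)."""
    M = [row[:] for row in M]
    rows, cols = len(M), len(M[0])
    pivots, r = [], 0
    for c in range(cols):
        if r == rows:
            break
        piv = max(range(r, rows), key=lambda k: abs(M[k][c]))
        if abs(M[piv][c]) <= tol:
            continue  # "stuck" in this column: move on to the next one
        M[r], M[piv] = M[piv], M[r]
        M[r] = [v / M[r][c] for v in M[r]]
        for k in range(rows):
            if k != r:
                f = M[k][c]
                M[k] = [a - f * b for a, b in zip(M[k], M[r])]
        pivots.append(c)
        r += 1
    return r, pivots

print(rref_rank_and_pivots([[1, 2, 3], [2, 4, 6], [1, 1, 1]]))  # (2, [0, 1])
```

The second row is a multiple of the first, so the rank is 2 and the first two columns form a basis of the column space.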

Analysis

Gaussian elimination to solve a system of n equations for n unknowns requires n(n+1)/2 divisions, (2n³ + 3n² − 5n)/6 multiplications, and (2n³ + 3n² − 5n)/6 subtractions,[3] for a total of approximately 2n³/3 operations. So it has a complexity of O(n³).

This algorithm can be used on a computer for systems with thousands of equations and unknowns. However, the cost becomes prohibitive for systems with millions of equations. These large systems are generally solved using iterative methods. Specific methods exist for systems whose coefficients follow a regular pattern (see system of linear equations).

Gaussian elimination can be performed over any field.

Gaussian elimination is numerically stable for diagonally dominant or positive-definite matrices. For general matrices, Gaussian elimination is usually considered to be stable in practice if you use partial pivoting as described below, even though there are examples for which it is unstable.[4]

Higher order tensors

Gaussian elimination does not generalize in any simple way to higher order tensors (matrices are order 2 tensors); even computing the rank of a tensor of order greater than 2 is a difficult problem.

Pseudocode

As explained above, Gaussian elimination writes a given m × n matrix A uniquely as a product of an invertible m × m matrix S and a row-echelon matrix T. Here, S is the product of the matrices corresponding to the row operations performed.

The formal algorithm to compute T from A follows. We write A[i,j] for the entry in row i, column j in matrix A. The transformation is performed "in place", meaning that the original matrix A is lost and successively replaced by T.

i := 1
j := 1
while (i ≤ m and j ≤ n) do
  (Find pivot in column j, starting in row i.)
  maxi := i
  for k := i+1 to m do
    if abs(A[k,j]) > abs(A[maxi,j]) then
      maxi := k
    end if
  end for
  if A[maxi,j] ≠ 0 then
    swap rows i and maxi, but do not change the value of i
    (Now A[i,j] contains the old value of A[maxi,j].)
    divide each entry in row i by A[i,j]
    (Now A[i,j] has the value 1.)
    for u := i+1 to m do
      subtract A[u,j] * row i from row u
      (Now A[u,j] is 0, since A[u,j] − A[i,j] * A[u,j] = A[u,j] − 1 * A[u,j] = 0.)
    end for
    i := i + 1
  end if
  j := j + 1
end while

This algorithm differs slightly from the one discussed earlier, because before eliminating a variable, it first exchanges rows to move the entry with the largest absolute value to the "pivot position". Such "partial pivoting" improves the numerical stability of the algorithm; some variants are also in use.

The column currently being transformed is called the pivot column. Proceed from left to right, letting the pivot column be the first column, then the second column, etc. and finally the last column before the vertical line. For each pivot column, do the following two steps before moving on to the next pivot column:

  1. Locate the diagonal element in the pivot column. This element is called the pivot. The row containing the pivot is called the pivot row. Divide every element in the pivot row by the pivot to get a new pivot row with a 1 in the pivot position.
  2. Get a 0 in each position below the pivot position by subtracting a suitable multiple of the pivot row from each of the rows below it.

Upon completion of this procedure the augmented matrix will be in row-echelon form and may be solved by back-substitution.
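A direct Python transcription of the pseudocode above, working in place on a list-of-rows matrix. Because of the row swaps, the echelon form it produces for the worked example differs from the hand computation, which used no pivoting:

```python
def to_row_echelon(A):
    """Reduce A (m rows, n columns, list of rows) to row echelon form
    in place, with partial pivoting and unit pivots, as in the pseudocode."""
    m, n = len(A), len(A[0])
    i = j = 0
    while i < m and j < n:
        # Find the pivot in column j, starting in row i.
        maxi = max(range(i, m), key=lambda k: abs(A[k][j]))
        if A[maxi][j] != 0:
            A[i], A[maxi] = A[maxi], A[i]     # swap rows i and maxi
            p = A[i][j]
            A[i] = [v / p for v in A[i]]      # A[i][j] now has the value 1
            for u in range(i + 1, m):
                f = A[u][j]
                A[u] = [a - f * b for a, b in zip(A[u], A[i])]  # A[u][j] becomes 0
            i += 1
        j += 1
    return A

R = to_row_echelon([[2.0, 1.0, -1.0, 8.0],
                    [-3.0, -1.0, 2.0, -11.0],
                    [-2.0, 1.0, 2.0, -3.0]])
# The last row is [0, 0, 1, -1] up to rounding, so z = -1 as in the example.
```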

With the increasing popularity of multi-core processors, programmers now exploit thread-level parallelism in Gaussian elimination to speed up computation. Pseudocode for the shared-memory programming model (as opposed to the message-passing model) is listed below.

void parallel(int num_threads, int matrix_dimension)
{
    int i;
    for (i = 0; i < num_threads; i++)
        pthread_create(&threads[i], NULL, gauss, (void *) i);
    for (i = 0; i < num_threads; i++)
        pthread_join(threads[i], NULL);
}

void *gauss(int thread_id)
{
    int i, k, j;
    for (k = 0; k < n; k++) {
        if (thread_id == (k % num_thread)) {   /* interleaved-row work distribution */
            for (j = k + 1; j < n; j++)
                M[k][j] = M[k][j] / M[k][k];
            M[k][k] = 1;
        }
        barrier(num_thread, &mybarrier);       /* wait for the other threads to finish this round */
        for (i = k + 1; i < n; i++) {
            if (i % num_thread == thread_id) {
                for (j = k + 1; j < n; j++)
                    M[i][j] = M[i][j] - M[i][k] * M[k][j];
                M[i][k] = 0;
            }
        }
        barrier(num_thread, &mybarrier);
    }
    return NULL;
}

void barrier(int num_thread, barrier_t *mybarrier)
{
    pthread_mutex_lock(&(mybarrier->barrier_mutex));
    mybarrier->cur_count++;
    if (mybarrier->cur_count != num_thread) {
        pthread_cond_wait(&(mybarrier->barrier_cond), &(mybarrier->barrier_mutex));
    } else {
        mybarrier->cur_count = 0;
        pthread_cond_broadcast(&(mybarrier->barrier_cond));
    }
    pthread_mutex_unlock(&(mybarrier->barrier_mutex));
}

(Here n, M, num_thread, threads and mybarrier are shared globals.)

READ MORE - Gaussian elimination

Binary operation

In mathematics, a binary operation is a calculation involving two operands, in other words, an operation whose arity is two. Examples include the familiar arithmetic operations of addition, subtraction, multiplication and division.

More precisely, a binary operation on a set S is a ternary relation that maps elements of the Cartesian product S × S to S:

f : S × S → S.

If f is not a function, but is instead a partial function, it is called a partial operation. For instance, division of real numbers is a partial function, because one can't divide by zero: a/0 is not defined for any real a. Note however that both in algebra and model theory the binary operations considered are defined on the whole of S × S.

Sometimes, especially in computer science, the term is used for any binary function. That f takes values in the same set S that provides its arguments is the property of closure.

Binary operations are the keystone of algebraic structures studied in abstract algebra: they form part of groups, monoids, semigroups, rings, and more. Most generally, a magma is a set together with any binary operation defined on it.

Many binary operations of interest in both algebra and formal logic are commutative or associative. Many also have identity elements and inverse elements. Typical examples of binary operations are the addition (+) and multiplication (×) of numbers and matrices as well as composition of functions on a single set.

An example of an operation that is not commutative is subtraction (−). Examples of partial operations that are not commutative include division (/), exponentiation(^), and super-exponentiation(↑↑).

Binary operations are often written using infix notation such as ab, a + b, a · b or (by juxtaposition with no symbol) ab rather than by functional notation of the form f(a, b). Powers are usually also written without operator, but with the second argument as superscript.

Binary operations sometimes use prefix or postfix notation; this dispenses with parentheses. Prefix notation is also called Polish notation; postfix notation, also called reverse Polish notation, is probably more often encountered.

Pair and tuple

A binary operation, ab, depends on the ordered pair (a, b) and so (ab)c (where the parentheses here mean first operate on the ordered pair (a, b) and then operate on the result of that using the ordered pair ((ab), c)) depends in general on the ordered pair ((a,b),c). Thus, for the general, non-associative case, binary operations can be represented with binary trees.

However:

  • If the operation is associative, (ab)c=a(bc), then the value depends only on the tuple (a,b,c).
  • If the operation is commutative, ab = ba, then the value depends only on the multiset {{a, b}, c}.
  • If the operation is both associative and commutative then the value depends only on the multiset {a,b,c}.
  • If the operation is both associative and commutative and idempotent, aa=a, then the value depends only on the set {a,b,c}.

External binary operations

An external binary operation is a binary function from K × S to S. This differs from a binary operation in the strict sense in that K need not be S; its elements come from outside.

An example of an external binary operation is scalar multiplication in linear algebra. Here K is a field and S is a vector space over that field.

An external binary operation may alternatively be viewed as an action; K is acting on S.

Note that the dot product of two vectors is not a binary operation, external or otherwise, as it maps from S × S to K, where K is a field and S is a vector space over K.

READ MORE - Binary operation

Commutative algebra

Commutative algebra is the branch of abstract algebra that studies commutative rings, their ideals, and modules over such rings. Both algebraic geometry and algebraic number theory build on commutative algebra. Prominent examples of commutative rings include polynomial rings, rings of algebraic integers (including the ordinary integers Z), and p-adic integers.

Commutative algebra is the main technical tool in the local study of schemes.

The study of rings which are not necessarily commutative is known as noncommutative algebra; it includes ring theory, representation theory, and the theory of Banach algebras.

History

The subject, first known as ideal theory, began with Richard Dedekind's work on ideals, itself based on the earlier work of Ernst Kummer and Leopold Kronecker. Later, David Hilbert introduced the term ring to generalize the earlier term number ring. Hilbert introduced a more abstract approach to replace the more concrete and computationally oriented methods grounded in such things as complex analysis and classical invariant theory. In turn, Hilbert strongly influenced Emmy Noether, to whom we owe much of the abstract and axiomatic approach to the subject. Another important milestone was the work of Hilbert's student Emanuel Lasker, who introduced primary ideals and proved the first version of the Lasker–Noether theorem.

Much of the modern development of commutative algebra emphasizes modules. Both ideals of a ring R and R-algebras are special cases of R-modules, so module theory encompasses both ideal theory and the theory of ring extensions. Though it was already incipient in Kronecker's work, the modern approach to commutative algebra using module theory is usually credited to Emmy Noether.

READ MORE - Commutative algebra

Field theory (mathematics)

Field theory is a branch of mathematics which studies the properties of fields. A field is a mathematical entity for which addition, subtraction, multiplication and division are well-defined.

Please refer to Glossary of field theory for some basic definitions in field theory.

History

The concept of field was used implicitly by Niels Henrik Abel and Évariste Galois in their work on the solvability of equations.

In 1871, Richard Dedekind called a set of real or complex numbers which is closed under the four arithmetic operations a "field".

In 1881, Leopold Kronecker defined what he called a "domain of rationality", which is indeed a field of rational functions in modern terms.

In 1893, Heinrich M. Weber gave the first clear definition of an abstract field.

In 1910 Ernst Steinitz published the very influential paper Algebraische Theorie der Körper (German: Algebraic Theory of Fields). In this paper he axiomatically studied the properties of fields and defined many important field-theoretic concepts like prime field, perfect field and the transcendence degree of a field extension.

Galois, who did not have the term "field" in mind, is honored as the first mathematician to link group theory and field theory; Galois theory is named after him. However, it was Emil Artin who first developed the relationship between groups and fields in great detail, during 1928–1942.

Introduction

Fields are important objects of study in algebra since they provide a useful generalization of many number systems, such as the rational numbers, real numbers, and complex numbers. In particular, the usual rules of associativity, commutativity and distributivity hold. Fields also appear in many other areas of mathematics; see the examples below.

When abstract algebra was first being developed, the definition of a field usually did not include commutativity of multiplication, and what we today call a field would have been called either a commutative field or a rational domain. In contemporary usage, a field is always commutative. A structure which satisfies all the properties of a field except possibly for commutativity is today called a division ring, a division algebra, or sometimes a skew field; the term non-commutative field is also still widely used. In French, fields are called corps (literally, body) and skew fields are called corps gauche, anneau à divisions, or algèbre à divisions. The German word for body is Körper and this word is used to denote fields; hence the use of the blackboard bold 𝕂 to denote a field.

The concept of fields was first (implicitly) used to prove that there is no general formula expressing in terms of radicals the roots of a polynomial with rational coefficients of degree 5 or higher.

Extensions of a field

An extension of a field k is just a field K containing k as a subfield. One distinguishes between extensions having various qualities. For example, an extension K of a field k is called algebraic, if every element of K is a root of some polynomial with coefficients in k. Otherwise, the extension is called transcendental.

The aim of Galois theory is the study of algebraic extensions of a field.

Closures of a field

Given a field k, various kinds of closures of k may be introduced. For example the algebraic closure, the separable closure, the cyclic closure et cetera. The idea is always the same: If P is a property of fields, then a P-closure of k is a field K containing k, having property P, and which is minimal in the sense that no proper subfield of K that contains k has property P. For example if we take P(K) to be the property "every nonconstant polynomial f in K[t] has a root in K", then a P-closure of k is just an algebraic closure of k. In general, if P-closures exist for some property P and field k, they are all isomorphic. However, there is in general no preferable isomorphism between two closures.

Applications of field theory

The concept of a field is of use, for example, in defining vectors and matrices, two structures in linear algebra whose components can be elements of an arbitrary field.

Finite fields are used in number theory, Galois theory and coding theory, and again algebraic extension is an important tool.

Binary fields, fields of characteristic 2, are useful in computer science.

READ MORE - Field theory (mathematics)

Ring theory

In mathematics, ring theory is the study of rings: algebraic structures in which addition and multiplication are defined and have properties similar to those familiar from the integers. Ring theory studies the structure of rings, their representations, or, in different language, modules, special classes of rings (group rings, division rings, universal enveloping algebras), as well as an array of properties that proved to be of interest both within the theory itself and for its applications, such as homological properties and polynomial identities.

Commutative rings are much better understood than noncommutative ones. Due to its intimate connections with algebraic geometry and algebraic number theory, which provide many natural examples of commutative rings, their theory, which is considered to be part of commutative algebra and field theory rather than of general ring theory, is quite different in flavour from the theory of their noncommutative counterparts. A fairly recent trend, started in the 1980s with the development of noncommutative geometry and with the discovery of quantum groups, attempts to turn the situation around and build the theory of certain classes of noncommutative rings in a geometric fashion as if they were rings of functions on (non-existent) 'noncommutative spaces'.

Please refer to the glossary of ring theory for the definitions of terms used throughout ring theory.

History

Commutative ring theory originated in algebraic number theory, algebraic geometry, and invariant theory. Central to the development of these subjects were the rings of integers in algebraic number fields and algebraic function fields, and the rings of polynomials in two or more variables. Noncommutative ring theory began with attempts to extend the complex numbers to various hypercomplex number systems. The genesis of the theories of commutative and noncommutative rings dates back to the early nineteenth century, while their maturity was achieved only in the third decade of the twentieth century.

More precisely, William Rowan Hamilton put forth the quaternions and biquaternions; James Cockle presented tessarines and coquaternions; and William Kingdon Clifford was an enthusiast of split-biquaternions, which he called algebraic motors. These non-commutative algebras, and the non-commutative Lie algebras were studied under the title of universal algebra before the subject was divided into particular mathematical structure types. One sign of re-organization was the use of direct sums to describe algebraic structure.

Elementary introduction

Definition

Formally, a ring is an Abelian group (R, +), together with a second binary operation * such that for all a, b and c in R,

a * (b * c) = (a * b) * c
a * (b + c) = (a * b) + (a * c)
(a + b) * c = (a * c) + (b * c)

also, if there exists a multiplicative identity in the ring, that is, an element e such that for all a in R,

a * e = e * a = a

then it is said to be a ring with unity. The number 1 is a common example of a unity.

A ring in which the unity e is equal to the additive identity must have only one element. This ring is called the trivial ring.

Rings that sit inside other rings are called subrings. Maps between rings which respect the ring operations are called ring homomorphisms. Rings, together with ring homomorphisms, form a category (the category of rings). Closely related is the notion of ideals, certain subsets of rings which arise as kernels of homomorphisms and can serve to define factor rings. Basic facts about ideals, homomorphisms and factor rings are recorded in the isomorphism theorems and in the Chinese remainder theorem.

A ring is called commutative if its multiplication is commutative. Commutative rings resemble familiar number systems, and various definitions for commutative rings are designed to recover properties known from the integers. Commutative rings are also important in algebraic geometry. In commutative ring theory, numbers are often replaced by ideals, and the definition of prime ideal tries to capture the essence of prime numbers. Integral domains, non-trivial commutative rings where no two non-zero elements multiply to give zero, generalize another property of the integers and serve as the proper realm to study divisibility. Principal ideal domains are integral domains in which every ideal can be generated by a single element, another property shared by the integers. Euclidean domains are integral domains in which the Euclidean algorithm can be carried out. Important examples of commutative rings can be constructed as rings of polynomials and their factor rings. Summary: Euclidean domain => principal ideal domain => unique factorization domain => integral domain => Commutative ring.

Non-commutative rings resemble rings of matrices in many respects. Following the model of algebraic geometry, attempts have been made recently at defining non-commutative geometry based on non-commutative rings. Non-commutative rings and associative algebras (rings that are also vector spaces) are often studied via their categories of modules. A module over a ring is an Abelian group that the ring acts on as a ring of endomorphisms, very much akin to the way fields (integral domains in which every non-zero element is invertible) act on vector spaces. Examples of non-commutative rings are given by rings of square matrices or more generally by rings of endomorphisms of Abelian groups or modules, and by monoid rings.

Some useful theorems

  • Artin–Wedderburn theorem

Generalizations

Any ring can be seen as a preadditive category with a single object. It is therefore natural to consider arbitrary preadditive categories to be generalizations of rings. And indeed, many definitions and theorems originally given for rings can be translated to this more general context. Additive functors between preadditive categories generalize the concept of ring homomorphism, and ideals in additive categories can be defined as sets of morphisms closed under addition and under composition with arbitrary morphisms.

READ MORE - Ring theory

Group theory

In mathematics and abstract algebra, group theory studies the algebraic structures known as groups. More poetically,

Group theory is the branch of mathematics that answers the question, "What is symmetry?"

—Nathan C. Carter (2009, p. 5)

The concept of a group is central to abstract algebra: other well-known algebraic structures, such as rings, fields, and vector spaces can all be seen as groups endowed with additional operations and axioms. Groups recur throughout mathematics, and the methods of group theory have strongly influenced many parts of algebra. Linear algebraic groups and Lie groups are two branches of group theory that have experienced tremendous advances and have become subject areas in their own right.

Various physical systems, such as crystals and the hydrogen atom, can be modelled by symmetry groups. Thus group theory and the closely related representation theory have many applications in physics and chemistry.

One of the most important mathematical achievements of the 20th century was the collaborative effort, taking up more than 10,000 journal pages and mostly published between 1960 and 1980, that culminated in a complete classification of finite simple groups.

History

Group theory has three main historical sources: number theory, the theory of algebraic equations, and geometry. The number-theoretic strand was begun by Leonhard Euler, and developed by Gauss's work on modular arithmetic and additive and multiplicative groups related to quadratic fields. Early results about permutation groups were obtained by Lagrange, Ruffini, and Abel in their quest for general solutions of polynomial equations of high degree. Évariste Galois coined the term “group” and established a connection, now known as Galois theory, between the nascent theory of groups and field theory. In geometry, groups first became important in projective geometry and, later, non-Euclidean geometry. Felix Klein's Erlangen program famously proclaimed group theory to be the organizing principle of geometry.

Galois, in the 1830s, was the first to employ groups to determine the solvability of polynomial equations. Arthur Cayley and Augustin Louis Cauchy pushed these investigations further by creating the theory of permutation groups. The second historical source for groups stems from geometrical situations. In an attempt to come to grips with possible geometries (such as Euclidean, hyperbolic or projective geometry) using group theory, Felix Klein initiated the Erlangen programme. Sophus Lie, in 1884, started using groups (now called Lie groups) attached to analytic problems. Thirdly, groups were (first implicitly and later explicitly) used in algebraic number theory.

The different scope of these early sources resulted in different notions of groups. The theory of groups was unified starting around 1880. Since then, the impact of group theory has been ever growing, giving rise to the birth of abstract algebra in the early 20th century, representation theory, and many more influential spin-off domains. The classification of finite simple groups is a vast body of work from the mid 20th century, classifying all the finite simple groups.

Main classes of groups

The range of groups being considered has gradually expanded from finite permutation groups and special examples of matrix groups to abstract groups that may be specified through a presentation by generators and relations.

Permutation groups

The first class of groups to undergo a systematic study was permutation groups. Given any set X and a collection G of bijections of X into itself (known as permutations) that is closed under compositions and inverses, G is a group acting on X. If X consists of n elements and G consists of all permutations, G is the symmetric group Sn; in general, G is a subgroup of the symmetric group of X. An early construction due to Cayley exhibited any group as a permutation group, acting on itself (X = G) by means of the left regular representation.

In many cases, the structure of a permutation group can be studied using the properties of its action on the corresponding set. For example, in this way one proves that for n ≥ 5, the alternating group An is simple, i.e. does not admit any proper normal subgroups. This fact plays a key role in the impossibility of solving a general algebraic equation of degree n ≥ 5 in radicals.

Matrix groups

The next important class of groups is given by matrix groups, or linear groups. Here G is a set consisting of invertible matrices of given order n over a field K that is closed under products and inverses. Such a group acts on the n-dimensional vector space Kn by linear transformations. This action makes matrix groups conceptually similar to permutation groups, and the geometry of the action may be usefully exploited to establish properties of the group G.

Transformation groups

Permutation groups and matrix groups are special cases of transformation groups: groups that act on a certain space X preserving its inherent structure. In the case of permutation groups, X is a set; for matrix groups, X is a vector space. The concept of a transformation group is closely related with the concept of a symmetry group: transformation groups frequently consist of all transformations that preserve a certain structure.

The theory of transformation groups forms a bridge connecting group theory with differential geometry. A long line of research, originating with Lie and Klein, considers group actions on manifolds by homeomorphisms or diffeomorphisms. The groups themselves may be discrete or continuous.

Abstract groups

Most groups considered in the first stage of the development of group theory were "concrete", having been realized through numbers, permutations, or matrices. It was not until the late nineteenth century that the idea of an abstract group as a set with operations satisfying a certain system of axioms began to take hold. A typical way of specifying an abstract group is through a presentation by generators and relations,

 G = 〈S | R〉.

A significant source of abstract groups is given by the construction of a factor group, or quotient group, G/H, of a group G by a normal subgroup H. Class groups of algebraic number fields were among the earliest examples of factor groups, of much interest in number theory. If a group G is a permutation group on a set X, the factor group G/H is no longer acting on X; but the idea of an abstract group permits one not to worry about this discrepancy.

The change of perspective from concrete to abstract groups makes it natural to consider properties of groups that are independent of a particular realization, or in modern language, invariant under isomorphism, as well as the classes of group with a given such property: finite groups, periodic groups, simple groups, solvable groups, and so on. Rather than exploring properties of an individual group, one seeks to establish results that apply to a whole class of groups. The new paradigm was of paramount importance for the development of mathematics: it foreshadowed the creation of abstract algebra in the works of Hilbert, Emil Artin, Emmy Noether, and mathematicians of their school.

Topological and algebraic groups

An important elaboration of the concept of a group occurs if G is endowed with additional structure, notably, of a topological space, differentiable manifold, or algebraic variety. If the group operations m (multiplication) and i (inversion),

 m : G × G → G, (g, h) ↦ gh,  and  i : G → G, g ↦ g⁻¹,

are compatible with this structure, i.e. are continuous, smooth or regular (in the sense of algebraic geometry) maps then G becomes a topological group, a Lie group, or an algebraic group.[1]

The presence of extra structure relates these types of groups with other mathematical disciplines and means that more tools are available in their study. Topological groups form a natural domain for abstract harmonic analysis, whereas Lie groups (frequently realized as transformation groups) are the mainstays of differential geometry and unitary representation theory. Certain classification questions that cannot be solved in general can be approached and resolved for special subclasses of groups. Thus, compact connected Lie groups have been completely classified. There is a fruitful relation between infinite abstract groups and topological groups: whenever a group Γ can be realized as a lattice in a topological group G, the geometry and analysis pertaining to G yield important results about Γ. A comparatively recent trend in the theory of finite groups exploits their connections with compact topological groups (profinite groups): for example, a single p-adic analytic group G has a family of quotients which are finite p-groups of various orders, and properties of G translate into the properties of its finite quotients.

Combinatorial and geometric group theory

Groups can be described in different ways. Finite groups can be described by writing down the group table consisting of all possible multiplications gh. A more important way of defining a group is by generators and relations, also called the presentation of a group. Given any set F of generators {gi : i ∈ I}, the free group generated by F surjects onto the group G. The kernel of this map is called the subgroup of relations, generated by some subset D. The presentation is usually denoted by 〈F | D〉. For example, the group Z = 〈a | 〉 can be generated by one element a (equal to +1 or −1) and no relations, because n·1 never equals 0 unless n is zero. A string consisting of generator symbols is called a word.

Combinatorial group theory studies groups from the perspective of generators and relations.[2] It is particularly useful where finiteness assumptions are satisfied, for example finitely generated groups, or finitely presented groups (i.e. in addition the relations are finite). The area makes use of the connection of graphs via their fundamental groups. For example, one can show that every subgroup of a free group is free.

There are several natural questions arising from giving a group by its presentation. The word problem asks whether two words are effectively the same group element. By relating the problem to Turing machines, one can show that there is in general no algorithm solving this task. An equally difficult problem is, whether two groups given by different presentations are actually isomorphic. For example Z can also be presented by

〈x, y | xyxyx = 1〉

and it is not obvious (but true) that this presentation is isomorphic to the standard one above.

The Cayley graph of 〈 x, y ∣ 〉, the free group of rank 2.

Geometric group theory attacks these problems from a geometric viewpoint, either by viewing groups as geometric objects, or by finding suitable geometric objects a group acts on.[3] The first idea is made precise by means of the Cayley graph, whose vertices correspond to group elements and edges correspond to right multiplication in the group. Given two elements, one constructs the word metric given by the length of the minimal path between the elements. A theorem of Milnor and Svarc then says that given a group G acting in a reasonable manner on a metric space X, for example a compact manifold, then G is quasi-isometric (i.e. looks similar from afar) to the space X.

Representation of groups

Saying that a group G acts on a set X means that every element defines a bijective map on a set in a way compatible with the group structure. When X has more structure, it is useful to restrict this notion further: a representation of G on a vector space V is a group homomorphism:

ρ : G → GL(V),

where GL(V) consists of the invertible linear transformations of V. In other words, to every group element g is assigned an automorphism ρ(g) such that ρ(g) ∘ ρ(h) = ρ(gh) for any h in G.

This definition can be understood in two directions, both of which give rise to whole new domains of mathematics.[4] On the one hand, it may yield new information about the group G: often, the group operation in G is abstractly given, but via ρ, it corresponds to the multiplication of matrices, which is very explicit.[5] On the other hand, given a well-understood group acting on a complicated object, this simplifies the study of the object in question. For example, if G is finite, it is known that V above decomposes into irreducible parts. These parts in turn are much more easily manageable than the whole V (via Schur's lemma).

Given a group G, representation theory then asks what representations of G exist. There are several settings, and the employed methods and obtained results are rather different in every case: representation theory of finite groups and representations of Lie groups are two main subdomains of the theory. The totality of representations is governed by the group's characters. For example, Fourier polynomials can be interpreted as the characters of U(1), the group of complex numbers of absolute value 1, acting on the L2-space of periodic functions.

Connection of groups and symmetry

Given a structured object X of any sort, a symmetry is a mapping of the object onto itself which preserves the structure. This occurs in many cases, for example

  1. If X is a set with no additional structure, a symmetry is a bijective map from the set to itself, giving rise to permutation groups.
  2. If the object X is a set of points in the plane with its metric structure or any other metric space, a symmetry is a bijection of the set to itself which preserves the distance between each pair of points (an isometry). The corresponding group is called isometry group of X.
  3. If instead angles are preserved, one speaks of conformal maps. Conformal maps give rise to Kleinian groups, for example.
  4. Symmetries are not restricted to geometrical objects, but include algebraic objects as well. For instance, the equation
x2 − 3 = 0
has the two solutions +√3 and −√3. In this case, the group that exchanges the two roots is the Galois group belonging to the equation. Every polynomial equation in one variable has a Galois group, that is a certain permutation group on its roots.

The axioms of a group formalize the essential aspects of symmetry. Symmetries form a group: they are closed because if you take a symmetry of an object and then apply another symmetry, the result will still be a symmetry. The identity keeping the object fixed is always a symmetry of an object. Existence of inverses is guaranteed by undoing the symmetry, and associativity comes from the fact that symmetries are functions on a space and composition of functions is associative.

Frucht's theorem says that every group is the symmetry group of some graph. So every abstract group is actually the symmetries of some explicit object.

The phrase "preserving the structure" of an object can be made precise by working in a category. Maps preserving the structure are then the morphisms, and the symmetry group is the automorphism group of the object in question.

Applications of group theory

Applications of group theory abound. Almost all structures in abstract algebra are special cases of groups. Rings, for example, can be viewed as abelian groups (corresponding to addition) together with a second operation (corresponding to multiplication). Therefore group theoretic arguments underlie large parts of the theory of those entities.

Galois theory uses groups to describe the symmetries of the roots of a polynomial (or more precisely the automorphisms of the algebras generated by these roots). The fundamental theorem of Galois theory provides a link between algebraic field extensions and group theory. It gives an effective criterion for the solvability of polynomial equations in terms of the solvability of the corresponding Galois group. For example, S5, the symmetric group on 5 elements, is not solvable, which implies that the general quintic equation cannot be solved by radicals in the way equations of lower degree can. The theory, being one of the historical roots of group theory, is still fruitfully applied to yield new results in areas such as class field theory.
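The solvability criterion can even be tested by brute force for small symmetric groups: a finite group is solvable exactly when its derived series (iterated commutator subgroups) reaches the trivial group. The Python sketch below (illustrative; the helper names are ad hoc) shows the series for S3 terminating, while the series for S5 stalls at A5:

```python
from itertools import permutations

def compose(p, q):
    """(p o q)(i) = p[q[i]]."""
    return tuple(p[i] for i in q)

def inverse(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

def generated(gens, n):
    """Subgroup of S_n generated by gens; the ambient group is finite,
    so closure under right multiplication by generators terminates."""
    e = tuple(range(n))
    H, frontier = {e}, {e}
    while frontier:
        new = {compose(h, g) for h in frontier for g in gens} - H
        H |= new
        frontier = new
    return H

def derived(G, n):
    """Commutator subgroup [G, G], generated by all a b a^-1 b^-1."""
    comms = {compose(compose(a, b), compose(inverse(a), inverse(b)))
             for a in G for b in G}
    return generated(comms, n)

S3 = set(permutations(range(3)))
A3 = derived(S3, 3)
assert len(A3) == 3 and len(derived(A3, 3)) == 1  # series reaches {e}: S3 is solvable

S5 = set(permutations(range(5)))
A5 = derived(S5, 5)
assert len(A5) == 60
assert derived(A5, 5) == A5  # A5 is perfect: the series never reaches {e}
```

That A5 equals its own commutator subgroup is precisely why no radical formula for the general quintic exists.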

Algebraic topology is another domain which prominently associates groups to the objects the theory is interested in. There, groups are used to describe certain invariants of topological spaces. They are called "invariants" because they are defined in such a way that they do not change if the space is subjected to some deformation. For example, the fundamental group "counts" how many paths in the space are essentially different. The Poincaré conjecture, proved in 2002/2003 by Grigori Perelman, is a prominent application of this idea. The influence is not unidirectional, though. For example, algebraic topology makes use of Eilenberg–MacLane spaces, which are spaces with prescribed homotopy groups. Similarly, algebraic K-theory relies in a crucial way on classifying spaces of groups. Finally, the name of the torsion subgroup of an infinite group shows the legacy of topology in group theory.

A torus. Its abelian group structure is induced from the map C → C/(Z + τZ), where τ is a parameter.
The cyclic group Z/26 underlies Caesar's cipher.

Algebraic geometry and cryptography likewise use group theory in many ways. Abelian varieties have been introduced above. The presence of the group operation yields additional information which makes these varieties particularly accessible. They also often serve as a test for new conjectures.[6] The one-dimensional case, namely elliptic curves, is studied in particular detail. They are both theoretically and practically intriguing.[7] Very large groups of prime order constructed in elliptic-curve cryptography serve for public-key cryptography. Cryptographic methods of this kind benefit from the flexibility of the geometric objects, hence their group structures, together with the complicated structure of these groups, which makes the discrete logarithm very hard to calculate. One of the earliest encryption protocols, Caesar's cipher, may also be interpreted as a (very simple) group operation. In another direction, toric varieties are algebraic varieties acted on by a torus. Toroidal embeddings have recently led to advances in algebraic geometry, in particular resolution of singularities.[8]
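The group-theoretic reading of Caesar's cipher can be made concrete: identify the letters A–Z with the elements of Z/26, let encryption be addition of a fixed key, and let decryption be addition of the key's inverse. A small Python sketch (illustrative only):

```python
import string

ALPHA = string.ascii_uppercase  # identify A..Z with 0..25, i.e. Z/26

def shift(text, k):
    """Caesar shift: add k modulo 26 to each letter; other characters pass through."""
    return "".join(ALPHA[(ALPHA.index(c) + k) % 26] if c in ALPHA else c
                   for c in text)

msg = "ATTACK AT DAWN"
ct = shift(msg, 3)            # encrypt with key 3
assert shift(ct, -3) == msg   # -3 is the group inverse of 3 in Z/26
assert shift(ct, 23) == msg   # equivalently, 23 is congruent to -3 (mod 26)
```

Decryption works for every key precisely because every element of Z/26 has an inverse — the group axiom doing the cryptographic work.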

Algebraic number theory makes use of groups for some important applications. For example, Euler's product formula

\sum_{n\geq 1}\frac{1}{n^s} = \prod_{p \text{ prime}} \frac{1}{1-p^{-s}}

captures the fact that any integer decomposes in a unique way into primes. The failure of this statement for more general rings gives rise to class groups and regular primes, which feature in Kummer's treatment of Fermat's Last Theorem.
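The identity can be checked numerically, which may help intuition: for s = 2 both sides converge to π²/6, and truncating the sum over n and the product over primes already gives close agreement. A rough Python sketch (the cutoffs are chosen arbitrarily for illustration):

```python
import math

def primes_up_to(n):
    """Simple sieve of Eratosthenes."""
    sieve = [True] * (n + 1)
    sieve[0:2] = [False, False]
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = [False] * len(sieve[p * p :: p])
    return [p for p in range(2, n + 1) if sieve[p]]

s = 2
zeta_sum = sum(1 / n ** s for n in range(1, 10 ** 6))   # truncated Dirichlet series
euler_prod = 1.0
for p in primes_up_to(1000):                            # truncated Euler product
    euler_prod *= 1 / (1 - p ** -s)

assert abs(zeta_sum - math.pi ** 2 / 6) < 1e-5  # zeta(2) = pi^2 / 6
assert abs(zeta_sum - euler_prod) < 1e-3        # the two truncations agree closely
```

Expanding each factor 1/(1 − p⁻ˢ) as a geometric series and multiplying out reproduces every term 1/nˢ exactly once — which is unique factorization restated analytically.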

  • The concept of the Lie group (named after mathematician Sophus Lie) is important in the study of differential equations and manifolds; Lie groups describe the symmetries of continuous geometric and analytical structures. Analysis on these and other groups is called harmonic analysis. Haar measures, that is, integrals invariant under translation in a Lie group, are used for pattern recognition and other image processing techniques.[9]
  • In combinatorics, the notion of permutation group and the concept of group action are often used to simplify the counting of a set of objects; see in particular Burnside's lemma.
The circle of fifths may be endowed with a cyclic group structure
  • The presence of the 12-periodicity in the circle of fifths yields applications of elementary group theory in musical set theory.
  • An understanding of group theory is also important in physics, chemistry, and materials science. In physics, groups are important because they describe the symmetries which the laws of physics seem to obey. Physicists are very interested in group representations, especially of Lie groups, since these representations often point the way to the "possible" physical theories. Examples of the use of groups in physics include the Standard Model, gauge theory, the Lorentz group, and the Poincaré group.
  • In chemistry, groups are used to classify crystal structures, regular polyhedra, and the symmetries of molecules. The assigned point groups can then be used to determine physical properties (such as polarity and chirality), spectroscopic properties (particularly useful for Raman spectroscopy and Infrared spectroscopy), and to construct molecular orbitals.
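As an example of the counting technique mentioned in the combinatorics item above, Burnside's lemma states that the number of orbits equals the average number of colourings fixed by each group element; for the cyclic group acting on necklaces by rotation, a rotation by r positions fixes exactly k^gcd(n, r) colourings. A sketch, cross-checked against brute-force enumeration:

```python
from math import gcd
from itertools import product

def necklaces_burnside(n, k):
    """k-colourings of an n-bead necklace up to rotation, by Burnside's lemma."""
    return sum(k ** gcd(n, r) for r in range(n)) // n

def necklaces_bruteforce(n, k):
    """Count orbits directly by picking a canonical rotation of each colouring."""
    seen = set()
    for c in product(range(k), repeat=n):
        seen.add(min(c[r:] + c[:r] for r in range(n)))
    return len(seen)

assert necklaces_burnside(4, 2) == necklaces_bruteforce(4, 2) == 6
assert necklaces_burnside(6, 3) == necklaces_bruteforce(6, 3) == 130
```

For n = 4 and k = 2 the lemma gives (2⁴ + 2 + 2² + 2)/4 = 6 distinct two-coloured necklaces, which the brute force confirms.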