Tetris source code (C# and C++ versions) and program ("exe");

4068277 bytes

MD5: 4e957e0ead66064183e9f7e04e618ec0

This article describes how a computer can play the classic video game Tetris by getting information about the board, determining good actions, and performing those actions.

This article includes software capable of playing Tetris in real time.

The software includes the best real-time Tetris-playing algorithm in the public domain.

This article defines rules for "Standard Tetris", a specification based on the original 1986 pre-commercial version of Tetris for the Personal Computer (PC).

In 2003, the software included in this article was used to enable a computer to play Tetris running on a separate computer.

An ordinary USB video camera was used to enable the computer to "see" the screen of the other computer.

A relay board was controlled via an RS-232 interface to enable the computer to "press keys" on the keyboard of the other computer.

Thus, the first computer has a relationship to the second computer that is similar to a typical human player's relationship to a computer when playing Tetris; the game state is only known by looking at the screen, and player actions can only be initiated through a keyboard.

This demonstration established that a computer can play Tetris better than a human under normal real-time playing conditions.

In 1985, Alexey Pajitnov and Dmitry Pavlovsky were computer engineers at the Computing Center of the Academy of Sciences of the USSR (now the Russian Academy of Sciences).

Alexey and Dmitry were interested in developing and selling addictive computer games.

They tested out several different games.

Alexey was inspired by the classic pentomino puzzle, which involves arranging puzzle pieces made of five squares.

Alexey thought of the idea of arranging pentomino pieces as they fell into a rectangular cup, but realized that the twelve different five-square shapes were too complex for a video game.

Alexey switched to using the seven "tetromino" pieces, each made of four squares.

In 1985.6, Alexey Pajitnov programmed the first version of Tetris on an Electronica 60.

In 1985-1986, Vadim Gerasimov, a 16-year-old high-school computer prodigy who worked at the academy, implemented Tetris for the IBM PC running the MS-DOS operating system.

( Vadim Gerasimov later did research at the MIT Media Laboratory, from 1994 through 2003, receiving a Ph.D. after completing many interesting projects: http://vadim.www.media.mit.edu )

After 1987, Tetris spread around the globe.

The Tetris Company, LLC, owns the Tetris trademark.

Not yet developed.

Ziga Hajdukovic has developed 1-dimensional Tetris software that can be played in an Internet browser.

Ziga Hajdukovic has also developed 1-dimensional Tetris software for mobile phones using the Java J2ME platform.

( Instructions: http://www.tetris1d.org/mobile.php ; WAP download: http://www.tetris1d.org/wap )

All conventional Tetris variants are in this category.

This section includes interesting variants.

Steve Wozniak (2000)

This Internet browser Tetris variant was created by Vadim Gerasimov.

This Internet browser Tetris features the "Green" building at the campus of MIT.

This Tetris variant only has nine columns instead of the standard ten columns.

This Tetris variant presents new pieces with random orientations.

Vadim Gerasimov is the person who wrote the computer code for the PC version of Tetris in 1986.

Vadim Gerasimov did Ph.D. research at the MIT Media Laboratory during 1994-2003, working on many interesting projects.

The image above shows the NTSC / PAL video output produced by a PIC16F84 12 MHz microcontroller running software written by Rickard Gunee in 1998.

The video signal is generated by software control of digital outputs.

Other PIC projects: http://etronics.free.fr/liens5.htm

Lars Pontoppidan wrote code for the AtMega32 microcontroller, and added simple analog circuitry, to create a Tetris version that could be played on an oscilloscope.

Certain registers of the AtMega32 microcontroller control 8-bit output signals, and, when passed through an "R-2R" electrical resistor circuit for digital-to-analog (D/A) conversion, the resulting analog signals can control the (x,y) coordinates of an oscilloscope beam (when the oscilloscope is set to "X-Y mode").

YouTube video:

FLV video:

The following was awarded "1989 IOCCC Best Game".

long h[4];t(){h[3]-=h[3]/3000;setitimer(0,h,0);}c,d,l,v[]={(int)t,0,2},w,s,I,K

=0,i=276,j,k,q[276],Q[276],*n=q,*m,x=17,f[]={7,-13,-12,1,8,-11,-12,-1,9,-1,1,

12,3,-13,-12,-1,12,-1,11,1,15,-1,13,1,18,-1,1,2,0,-12,-1,11,1,-12,1,13,10,-12,

1,12,11,-12,-1,1,2,-12,-1,12,13,-12,12,13,14,-11,-1,1,4,-13,-12,12,16,-11,-12,

12,17,-13,1,-1,5,-12,12,11,6,-12,12,24};u(){for(i=11;++i<264;)if((k=q[i])-Q[i]

){Q[i]=k;if(i-++I||i%12<1)printf("\033[%d;%dH",(I=i)/12,i%12*2+28);printf(

"\033[%dm "+(K-k?0:5),k);K=k;}Q[263]=c=getchar();}G(b){for(i=4;i--;)if(q[i?b+

n[i]:b])return 0;return 1;}g(b){for(i=4;i--;q[i?x+n[i]:x]=b);}main(C,V,a)char*

*V,*a;{h[3]=1000000/(l=C>1?atoi(V[1]):2);for(a=C>2?V[2]:"jkl pq";i;i--)*n++=i<

25||i%12<2?7:0;srand(getpid());system("stty cbreak -echo stop u");sigvec(14,v,

0);t();puts("\033[H\033[J");for(n=f+rand()%7*4;;g(7),u(),g(0)){if(c<0){if(G(x+

12))x+=12;else{g(7);++w;for(j=0;j<252;j=12*(j/12+1))for(;q[++j];)if(j%12==10){

for(;j%12;q[j--]=0);u();for(;--j;q[j+12]=q[j]);u();}n=f+rand()%7*4;G(x=17)||(c

=a[5]);}}if(c==*a)G(--x)||++x;if(c==a[1])n=f+4**(m=n),G(x)||(n=m);if(c==a[2])G

(++x)||--x;if(c==a[3])for(;G(x+12);++w)x+=12;if(c==a[4]||c==a[5]){s=sigblock(

8192);printf("\033[H\033[J\033[0m%d\n",w);if(c==a[5])break;for(j=264;j--;Q[j]=

0);while(getchar()-a[4]);puts("\033[H\033[J\033[7m");sigsetmask(s);}}d=popen(

"stty -cbreak echo stop \023;sort -mnr -o HI - HI;cat HI","w");fprintf(d,

"%4d from level %1d by %s\n",w,l,getlogin());pclose(d);}


Reference: http://homepages.cwi.nl/~tromp/tetris.html

The following is Tetris for the Perl interpreter: Perltris (version 20050717) by Sean Adams.

#!/usr/bin/perl

$_='A=15; B=30; select(stdin); $|=1; select(stdout);$|=1; system

"stty -echo -icanon eol \001"; for C(split(/\s/,"010.010.010.010

77.77 022.020.020 330.030.030 440.044.000 055.550.000 666.060.".

"000")){D=0;for E(split(/\./,C)){F=0;for G(split("",E)){C[P][F++

][D]=G} D++}J[P]=F; I[P++] =D}%L=split(/ /,"m _".chr(72)." c 2".

chr(74)." a _m");sub a{for K(split(/ /,shift)){(K,L)=split(/=/,K

);K=L{K};K=~s/_/L/; printf "%c[K",27}}sub u{a("a=40");for D(0..B

-1){for F(0..A-1){M=G[F][D];if(R[F][D]!=M) {R[F][D]=M;a("m"."=".

(5+D).";".(F*2+5)); a("a=".(40+M).";" .(30+M));print " "x2}}}a(

"m=0;0 a=37;40")}sub r{(N)=@_;while(N--) {Q=W;W=O=H;H=Q;for F( 0

..Q-1){for D(0..O-1) {Q[F][D]=K[F][D]}}for F(0..O-1){for D(0..Q-

1){K[F][D]= Q[Q-D-1][F]}}}}sub l{for F(0..W-1){for D(0..H-1){(K[

F][D]&& ((G[X+F][Y+D])|| (X+F<0)||(X+F>=A)|| (Y+D>=B)))&& return

0}}1}sub p{for F(0..W-1){for D(0..H-1){(K[F][D]>0)&&(G[X+F][Y+D]

=K[F][D]) }}1}sub o{for F(0..W-1){for D(0..H-1){(K[F][D]>0)&&(G[

X+F][ Y+D]=0)}}}sub n{C=int(rand(P)) ;W=J[C];H=I[C];X=int(A/2)-1

;Y=0;for F(0..W-1){for D(0..H-1){K[F][D]= C[C][F][D]}}r(int(rand

(4)));l&&p}sub c{d:for(D=B;D>=0;D--){for F(0..A-1){G[F][D]||next

d}for(D2=D;D2>=0; D2--){for F(0..A-1){G[F][D2]= (D2>1)?G[F][D2-1

]:0; }}u;}}a ("m=0;0 a=0;37;40 c");print "\n\n".4x" "." "x(A-4).

"perltris\n".(" "x4)."--"xA."\n".((" "x3)."|"." "x(A*2)."|\n")xB

.(" "x4). "--"xA."\n";n;for(;;) {u;R=chr(1); (S,T)=select(R,U,V,

0.01);if(S) {Z=getc;}else {if($e++>20){Z=" ";$e=0;}else{next;} }

if(Z eq "k"){o;r(1);l||r(3);p}; if(Z eq "j"){o;X--;l||X++;p}; if

(Z eq "l"){o;X++;l||X--;p};if(Z eq " "){o;Y++;(E=l)||Y--;p;E|| c

|c|c|c|c|n||goto g;};if(Z eq "q"){last;}}g: a("a=0 m=".(B+8).";0

" ); system "stty sane"; '; s/([A-Z])/\$$1/g; s/\%\$/\%/g; eval;


Reference: http://www.seanadams.com/perltris

Scalable Vector Graphics (SVG) is a standard for describing graphical objects using XML.

Other examples of SVG: http://www.croczilla.com/svg/samples

Google, Yahoo!, Microsoft, and other companies have promoted miniature Internet-based software named "widgets", usually characterized by some use of dynamic data available on the Internet.

One such widget available through Google is a Tetris game.

The following example is cute, but the shapes rotate in annoying ways:

Other Google widgets:

The following research document contains a proof that a certain kind of Tetris variant is "NP-complete".

Erik D. Demaine, Susan Hohenberger, and David Liben-Nowell, *"Tetris is Hard, Even to Approximate"*, Technical Report MIT-LCS-TR-865, Massachusetts Institute of Technology, 2002.10.21.

Locally-cached copy (PDF): tetris_theory_mit_lcs_tr_865_0210020.pdf

"NP-complete" is a classification of the time cost and space cost of an algorithm.

Other classifications include "P" and "NP".

A classification of "NP-complete" implies that, for problems larger than some small size, the algorithm is unlikely to find a desired solution in a practical amount of time or space.

The following paper, published 2005.5.30, by Donald Carr at the Computer Science department at Rhodes University, South Africa, presents the application of "reinforcement learning" to Tetris.

The Tetris skirt was created by "Lucy" ("hissyfitoly" on etsy.com) before 2007.11.

From the creator's description of the skirt (offered for sale on etsy.com):

Forum comments regarding this skirt:

"Man that skirt sucks at Tetris"

"Ahahahaha, I thought the same thing."

"There's a complete line down at the bottom...completed lines disappear. DO OVER."

"There should be a spot in the back or front where "long" piece would fit perfectly..."

"That's a really ugly skirt though. My boyfriend couldn't buy me enough chocolate and flowers to convince me to wear that thing."

"From those that brought you the Triforce in 2006... Comes the next generation of inanimate object skit... Tetris."

Locally-cached video in Flash video (FLV) format (use VLC to play):

This video segment from a Japanese television show includes hilarious variations of Tetris, including:

pieces that vanish upon landing, a piece that fills an entire row (thus completing a row upon landing), multiple pieces falling simultaneously, irregularly shaped pieces, a long piece that is slightly too wide to fit in a gap (preventing a 4-row completion!), Mario hitting a mushroom and becoming enormous and dying!, small piece debris remaining after rows are destroyed, upward gravity making pieces float to the top, etc.

Locally-cached video in Flash video (FLV) format (use VLC to play):

From the description on YouTube:

"Tetris played by real human-beings sitting in an auditorium:

This stop-motion video was shot and played for "LES URBAINES" festival http://www.urbaines.ch at the Palais de Rumine (Lausanne, Switzerland) on November 24th 2007.

You can find more information and also SPACE INVADERS, PONG and POLE POSITION on our website http://www.notsonoisy.com/gameover"

Locally-cached video in Flash video (FLV) format (use VLC to play):

The term "2.5-dimensional" is used here to mean a non-orthogonal view of a two-dimensional version of Tetris, with some thickness in the third dimension.

(Find the link "tetris3d" in "F7: GAMES".)

Three-dimensional Tetris in the form of a Java applet for Internet browsers:

Three-dimensional Tetris for the Windows operating system:

In [1996], [...], Greg Kaiser put together a four-dimensional variant on the classic game.

Using IrisGL (a.k.a. igl) he created a working, if hard to play, game using four sub-screens to depict different three-dimensional aspects of the entire game-space.

[Because] there is not an easily [comprehensible] way to draw four-D objects on a two-D screen, the four sub-views are a practical method to manipulate and visualize the rotation and translation of the pieces through the four dimensions (in the game called x,y,z,w).

Rather than completing lines of blocks as in the original, the goal in this case is to fill a complete cube in the x,y,z subview (usually 4 by 4 by 4).

The other subviews which contain the "w" dimension are arranged in a default 4 by 4 by 10 block arrangement with "w" being the long, "vertical" dimension in all three cases, with different bases of (x,y), (x,z), (y,z).

Gravity acts in the "-w" direction, so pieces fall "down" in the three long subviews that include "w", and do not move unless by player control in the last (x,y,z) subview.

It takes a while to get used to, to say the least.

If by some miracle of patience or changing the parameters of the game, one does complete a cube, it will disappear as the completed lines do in the original Tetris, though no score is kept in HyperTetris.

Benjamin Bernard (2000)

Polytope Tetris is n-dimensional Tetris.

Inspired by the HyperTetris program, Polytope Tetris allows you to play Tetris in any number of dimensions.

Play Tetris in 3D, 4D, 5D, or more.

"HyperTetris" is a much catchier name than "Polytope Tetris", but I can't steal the name.

The definition of "Standard Tetris" is an idealized model of the most important characteristics and behaviors of the first IBM-PC implementation of the Tetris game (circa 1986-1988).

The idealized model is based upon inferring the apparent intentions of the developers of the first IBM-PC implementation of the Tetris game.

For example, it seems reasonable to infer that the developers of the first IBM-PC implementation of the Tetris game intended to select the shape of each new falling piece "randomly", and that the use of the Borland C implementation of the **rand()** function was merely a practical approximation of the intention.

The definition of "Standard Tetris" specifies that the shape of each new falling piece is to be selected "randomly".

This ideal behavior cannot be achieved by any implementation, but implementations can approximate the ideal behavior.

Although no implementation can perfectly implement the definition of "Standard Tetris", the ideals of "Standard Tetris" involve objective characteristics, and implementations can be compared according to their relative closeness to the ideals of "Standard Tetris".

This section describes a set of elements, behaviors, and rules, which, collectively, define "Standard Tetris".

The board is a grid of cells, having 10 columns, and 20 rows, for a total of **10 * 20 = 200** cells.

Each cell can either be unoccupied (empty) or occupied (full).
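As a sketch, the board can be modeled as a 10-by-20 grid of occupancy flags. The type and function names below are my own, not taken from the accompanying software:

```c
#include <assert.h>

#define BOARD_COLUMNS 10
#define BOARD_ROWS    20
#define BOARD_CELLS   (BOARD_COLUMNS * BOARD_ROWS)  /* 200 */

/* 0 = unoccupied (empty), 1 = occupied (full).  Row 0 is the top row. */
typedef struct
{
    int cell[BOARD_ROWS][BOARD_COLUMNS];
} Board;

/* A row of the pile is "completed" when all 10 of its cells are occupied. */
int row_is_completed(const Board *board, int row)
{
    int column;
    for (column = 0; column < BOARD_COLUMNS; column++)
    {
        if (!board->cell[row][column])
        {
            return 0;
        }
    }
    return 1;
}
```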

There are seven (7) standard Tetris pieces, with the following letter names:

The letter names are inspired by the shapes of the pieces.

The dot at **(0,0)** coincides with board position **(6,20)** when the piece first appears.

The first column shows the initial "orientations".

In the following, the word "orientation" is used to describe any state of a piece, within a set of allowed states, that can result from a counterclockwise rotation event.

Changing "orientation" from a specified "orientation" of an "**I**", "**S**", or "**Z**" piece, requires the combination of a rotation and a translation.

Therefore, the word "orientation" is used here to mean something more than rotation alone.

However, "orientation" can change only in response to a counterclockwise rotation event, and the cycle of distinct "orientations" for each piece approximates, or matches, the cycle resulting from pure rotations.

The special use of the word "orientation" in this context is nearly equivalent to the meaning of the word "rotation" or "angle", but the word "orientation" is used instead of "rotation" to attempt to bring attention to the fact that some pieces require more than rotation to produce the set of allowed states resulting from counterclockwise "rotation" events.

Pieces can only switch orientations (or undergo a specific horizontal or vertical translation) if the resulting state of the piece would not have any occupied (full) cells beyond the area of the board and would not have any occupied cells which overlap any currently occupied cells of the board.

(In this rule, the occupied (full) cells of the piece itself are not considered part of the "currently occupied cells of the board".)
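The placement rule can be sketched as a validity test over the four cells of a piece. The cell-offset representation and the names below are illustrative assumptions, not the article's coordinate conventions:

```c
#include <assert.h>

#define BOARD_COLUMNS 10
#define BOARD_ROWS    20

/* A piece occupies 4 cells, given as (column,row) offsets from the
   piece origin.  Offsets here are illustrative. */
typedef struct { int dc[4]; int dr[4]; } PieceCells;

/* Returns 1 if every cell of the piece, placed with its origin at
   (column,row), lies inside the board AND does not overlap an
   occupied board cell; otherwise 0.  The board array excludes the
   falling piece itself, as the rule requires. */
int placement_is_valid(int board[BOARD_ROWS][BOARD_COLUMNS],
                       const PieceCells *p, int column, int row)
{
    int i;
    for (i = 0; i < 4; i++)
    {
        int c = column + p->dc[i];
        int r = row + p->dr[i];
        if (c < 0 || c >= BOARD_COLUMNS || r < 0 || r >= BOARD_ROWS)
            return 0;               /* beyond the area of the board */
        if (board[r][c])
            return 0;               /* overlaps an occupied cell */
    }
    return 1;
}
```

The same test serves rotation, translation, and free-fall requests: compute the candidate configuration, and apply it only if the test passes.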

In the following comments, any reference to a result of a counterclockwise rotation event is made with the assumption that such a rotation can actually be performed, given the existing conditions of the piece and the board.

The "**O**" (box) piece only has a single orientation, and does not change the locations of any of its occupied (full) cells in response to any counterclockwise rotation event.

The "**I**" (line) piece has two possible orientations, initially appearing in a horizontal orientation.

The "**I**" piece alternates between the two orientations in response to successive counterclockwise rotation events.

The "**S**" and "**Z**" pieces each have two possible orientations.

These pieces each alternate between two orientations in response to successive counterclockwise rotation events.

The "**L**", "**J**", and "**T**" pieces each have four possible orientations, and these orientations are the results of simple rotations about center points on the shapes.

When a piece first appears on the board, the piece has its "major axis" in a horizontal orientation, and the piece is at the top of the board.

Therefore, no pieces are initially capable of having their orientations changed. The piece must descend by one row to have the possibility of having its orientation changed.

Once a piece has fallen by one row on the board, all piece orientations can be attained (assuming the piece is not too close to the side walls or to the current pile of pieces).

The following is an approximate flowchart for a Standard Tetris game.

The following diagram shows the (4 cell * 2 cell) region on the board where all pieces appear when created.

When a new piece first appears on the board, its origin coincides with the dot on this diagram, and the piece will be completely contained by the shaded area on this diagram.

When a new game is started, a full free-fall delay elapses, and on the first free-fall iteration a piece is spawned at the top of the board.

During normal game play, when a specific free-fall iteration "lands" a piece, a full free-fall delay elapses and on the next free-fall iteration a piece is spawned at the top of the board.

When a piece is spawned, the type of piece is selected using the following algorithm:

pieceIndex = 1 + (randomInteger % 7); // 1..7

Because there is a constant **p (= 1/7)** chance of selecting a specific kind of piece, and all pieces of the same type are indistinguishable, the probability of having exactly **k** pieces of a specific type after **n** trials follows a Binomial distribution:

P(k) = (1-p)^(n-k) * p^k * ( n! / ( (n-k)! k! ) );

p = 0.0 ... 1.0;

k = 0, 1, 2, ..., n;

mean = ( n * p )

variance = ( n * p *(1-p) )

standard deviation = sqrt( variance )


When we choose from among the 7 (seven) pieces at random, the probability of getting a specific piece is **p=(1/7)**.

If we do this **n=70** times, for example, the probability of getting exactly **k** pieces (with **k** in the range **0** to **n**) is given by the binomial distribution, as shown in the following image.

Thus, one can predict the average total pieces of a single type given a total number of random pieces, and one can also compute the expected variance and standard deviation (square root of variance):

p = (1/7):

total random                        standard
pieces (n)     mean      variance   deviation
============   ======    ========   =========
        70         10           8           3
       700        100          85           9
      7000       1000         857          29
     70000      10000        8571          93


When we convert a random value to a piece index, we interpret it as follows:

value   piece
=====   =====
1       "O"
2       "I"
3       "S"
4       "Z"
5       "L"
6       "J"
7       "T"
=====   =====
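The selection algorithm and the index-to-piece mapping can be sketched together as follows. The enum names are my own; the article specifies only the letters and indices:

```c
#include <assert.h>

/* Piece indices 1..7, in the order given by the value-to-piece table. */
enum Piece
{
    PIECE_O = 1, PIECE_I, PIECE_S, PIECE_Z, PIECE_L, PIECE_J, PIECE_T
};

/* pieceIndex = 1 + (randomInteger % 7), for non-negative input. */
int piece_index_from_random(int randomInteger)
{
    return 1 + (randomInteger % 7);
}
```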

1 "O"

2 "I"

3 "S"

4 "Z"

5 "L"

6 "J"

7 "T"

[ The pre-commercial MS-DOS version of Tetris used the random number function offered by the Borland Pascal compiler.

That function used a 32-bit state variable.

Therefore, the sequence of random numbers was limited to **2^32** distinct values.

Therefore, in principle, a player could discover, after dropping perhaps 10 pieces, the exact place in the set of **2^32** numbers corresponding to the current state of the game.

If Tetris simulations are executed with the fixed sequence of **2^32** pieces, then optimal decisions can be found for every place in the sequence.

(There seem to be sufficient opportunities to bring the board to a totally empty state, allowing us to get synchronized with the precomputed optimal solution path.)

The risk of using a simple random number generator in a simulation intended to find an optimal solution to a problem is that the solution will be optimal only for the particular path through the problem space selected by the simple random number generator. ]
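To illustrate the reproducibility point, here is a generic 32-bit linear congruential generator. The multiplier and increment are illustrative, not those of the Borland runtime; the point is only that a 32-bit state can occupy at most 2^32 distinct positions, so the whole sequence is predictable once the position is known:

```c
#include <assert.h>
#include <stdint.h>

/* A generic 32-bit linear congruential generator (constants illustrative). */
static uint32_t lcg_state = 1u;

void lcg_seed(uint32_t seed)
{
    lcg_state = seed;
}

uint32_t lcg_next(void)
{
    lcg_state = lcg_state * 1103515245u + 12345u;
    return (lcg_state >> 16) & 0x7FFFu;  /* 15-bit output, like rand() */
}
```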

During game play, the following inputs are available:

left   : request to translate left by one column
right  : request to translate right by one column
rotate : request to do a counterclockwise rotation
drop   : request to instantly drop the piece


All inputs take effect on the rising-edge of the positive input (on button press, as opposed to button release).

When a button press occurs, this counts as a request.

Holding a button down beyond a certain time might result in the "auto-repeat" feature of a keyboard, generating new button presses -- but this feature is external to the game engine.

The inputs specified above conform to the original Tetris game.

Rotate requests can be executed if there is no overlap between the desired orientation and set cells on the current board (excluding the falling piece), and if the desired orientation has no set cells outside the board area.

Translate requests can be executed if there is no overlap between the desired translated configuration and set cells on the current board (excluding the falling piece), and if the desired translated configuration has no set cells outside the board area.

Input requests are processed with a latency that depends on the frame rate of the game (example: 75 Hz), and requests take effect (if valid) instantly.

A piece can be dropped without any line falling steps occurring.

A piece can be translated several times to the left or right, and subsequently dropped, all without experiencing an official line falling step.

Because a newly spawned piece cannot possibly be rotated (because it is stuck against the top edge of the board), the player must accept at least one piece falling step if rotations are desired or required.

The effect on the score is insignificant.

If a piece is simply falling, it falls by a single row during each piece falling iteration.

There will be an iteration that moves the piece from a position with no contact with horizontal surfaces to a position with such contact. Once this iteration occurs, the piece is in resting contact.

If an iteration begins with a piece in resting contact with a horizontal surface, the piece "lands", and becomes part of the static pile.

A completed row is a row of the pile in which all cells are occupied. When a completed row is eliminated from the pile, and the rows above the eliminated row are shifted down by one row to eliminate the gap, this counts as a completed "line".

When a piece lands it becomes part of the pile.

Immediately after the piece lands, the pile is checked for completed rows, and all completed rows are eliminated.

Up to four rows can be completed simultaneously.
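A sketch of the row-elimination step, assuming a 10-column, 20-row occupancy grid with row 0 at the top (names and layout are my own):

```c
#include <assert.h>
#include <string.h>

#define BOARD_COLUMNS 10
#define BOARD_ROWS    20

/* Eliminate every completed row (all 10 cells occupied) and shift the
   rows above it down by one row; returns the number of rows eliminated
   (0..4 for a single landed piece).  Row 0 is the top of the board. */
int eliminate_completed_rows(int board[BOARD_ROWS][BOARD_COLUMNS])
{
    int eliminated = 0;
    int row, column;
    for (row = 0; row < BOARD_ROWS; row++)
    {
        int full = 1;
        for (column = 0; column < BOARD_COLUMNS; column++)
        {
            if (!board[row][column]) { full = 0; break; }
        }
        if (full)
        {
            /* Shift rows 0..row-1 down by one, overwriting the full row. */
            memmove(&board[1][0], &board[0][0],
                    (size_t)row * BOARD_COLUMNS * sizeof(int));
            memset(&board[0][0], 0, BOARD_COLUMNS * sizeof(int));
            eliminated++;
        }
    }
    return eliminated;
}
```

Scanning top to bottom is safe here because shifting only moves already-checked (non-full) rows downward.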

The following table gives the upper limit on lines completed simultaneously by a single piece:

piece   max. simultaneous
        rows completed
=====   ==================
"O"     2
"I"     4
"S"     2
"Z"     2
"L"     3
"J"     3
"T"     2


Standard Tetris has 10 difficulty levels, numbered 1 (one) through 10 (ten), with level 1 being the "least difficult".

The level index is the maximum of two values:

actualLevel = max( initialLevel, earnedLevel );

The **initialLevel** value is the level that the player selects when starting a new game.

The pattern of level as a function of completed lines is easily observed in the pre-commercial MS-DOS version of Tetris:

{  0,  1, 2, ..., 10 } --> earned level 1
{ 11, 12, ..., 20 }    --> earned level 2
{ 21, 22, ..., 30 }    --> earned level 3
{ 31, 32, ..., 40 }    --> earned level 4
{ 41, 42, ..., 50 }    --> earned level 5
{ 51, 52, ..., 60 }    --> earned level 6
{ 61, 62, ..., 70 }    --> earned level 7
{ 71, 72, ..., 80 }    --> earned level 8
{ 81, 82, ..., 90 }    --> earned level 9
{ 91, ... }            --> earned level 10


Thus, the **earnedLevel** value is computed according to the following algorithm:

if (linesCompleted <= 0)
{
    earnedLevel = 1;
}
else if ((linesCompleted >= 1) && (linesCompleted <= 90))
{
    earnedLevel = 1 + ((linesCompleted - 1) / 10);
}
else if (linesCompleted >= 91)
{
    earnedLevel = 10;
}


Standard Tetris has a real-time delay between successive line free-fall iterations that is a function of the current difficulty level.

The following relationship between level index and falling iteration delay is based on repeated stopwatch measurements at all levels of the pre-commercial MS-DOS version of Tetris.

actualLevel    iterationDelay [seconds]
               (rounded to nearest 0.05)
============   =========================
     1                   0.50
     2                   0.45
     3                   0.40
     4                   0.35
     5                   0.30
     6                   0.25
     7                   0.20
     8                   0.15
     9                   0.10
    10                   0.05


Thus, we establish the following formula for the iteration delay value as a function of the actual level index:

iterationDelay = ((11 - actualLevel) * 0.05); // [seconds]

If the board is empty, and there is no user input, a spawned piece at actual level 1 lands in approximately 10 seconds, and a spawned piece at actual level 10 lands in approximately 1 second.
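The delay formula and the resulting landing times can be checked with a small sketch (the 20-iteration landing estimate assumes an empty board and no input):

```c
#include <assert.h>

/* iterationDelay = ((11 - actualLevel) * 0.05) seconds,
   for actualLevel in 1..10. */
double iteration_delay_seconds(int actualLevel)
{
    return (11 - actualLevel) * 0.05;
}

/* With an empty board and no input, a spawned piece free-falls about
   20 rows (the board height) before landing. */
double approximate_landing_time_seconds(int actualLevel)
{
    return 20.0 * iteration_delay_seconds(actualLevel);
}
```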

Standard Tetris only awards points for the act of landing a piece.

There are no points awarded for the act of completing a single line, or completing two, three, or four lines simultaneously.

[ Note: Some variants of Tetris award points for the act of completing lines, with an exponentially increasing bonus for an increasing number of simultaneously completed lines.

Thus, the strategy for maximizing score in such variants of Tetris involves setting up opportunities to "get a Tetris", slang for using the "**I**" shape to get four simultaneous lines and getting lots of points. ]

If you have an empty board, and you let a non-"**I**" piece do a free-fall and land, or you immediately drop a non-"**I**" piece, you can establish the following point chart using the pre-commercial MS-DOS version of Tetris:

actualLevel    free-fall    instant-drop
               points       points
===========    =========    ============
     1             6             24
     2             9             27
     3            12             30
     4            15             33
     5            18             36
     6            21             39
     7            24             42
     8            27             45
     9            30             48
    10            33             51


Unrotated, non-"**I**" pieces fall a total of 18 rows.

This accounts for the point difference between the free-fall and instant-drop cases.

By experimenting with intermediate cases it is easy to infer the following point formula:

pointAward = ( (21 + (3 * actualLevel)) - freeFallIterations );

Note that this formula has nothing to do with the distance a piece falls!

It is strictly a function of the actual level, and a penalty for the number of iterations a piece is allowed to fall freely.
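Expressed directly, the formula reproduces both columns of the point chart: an unrotated non-"I" piece free-falls 18 iterations, while an instant drop experiences none:

```c
#include <assert.h>

/* pointAward = ((21 + (3 * actualLevel)) - freeFallIterations).
   Note: a function of the level and of elapsed free-fall iterations
   only, NOT of the distance the piece falls. */
int point_award(int actualLevel, int freeFallIterations)
{
    return (21 + (3 * actualLevel)) - freeFallIterations;
}
```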

This punishes a user for needing time to think.

Also note that because a piece cannot initially be rotated when it first spawns, a player is punished by at least one free-fall iteration if rotations are required to place a piece in the pile.

This probably never affects human players, unless they somehow: recognize the piece, press translation keys ("left" or "right"), press the "rotate" key one or more times, and press the "drop" key, all within less than 0.5 seconds at level 1, or less than 0.05 seconds at level 10.

A strategy for playing a game depends on the rules of the game.

A strategy depends on which parameter is to be optimized.

In Standard Tetris, one survives by completing rows, gets points for landing pieces, and scores the most points possible for each piece by executing a drop before one or more free-fall iterations transpire.

An A.I. can optimize the points awarded for each piece simply by deciding on a move quickly and "pressing the keys" to execute the move.

More important to an A.I. is survival, because indefinite survival means an arbitrarily high score can be attained. Because the Tetris pieces fall at a specific rate, the A.I. must make decisions at least this fast -- and the A.I. must make moves that complete rows at a rate that averages at least 1 row per 2.5 pieces. (Each piece has 4 cells, and each row has 10 cells.)

Of course one can postpone completing rows by accumulating pieces and building a large pile, but there are only 200 cells on the entire board, which in principle can only hold 50 pieces, so any player (such as an A.I.) must have completing lines as a fundamental priority.

In Standard Tetris, the game state includes the current board occupation and the current falling piece (type, position, and orientation). The game state may optionally include the "Next Piece".

Heidi Burgiel, Ph.D., of the Department of Mathematics, Statistics and Computer Science at the University of Illinois at Chicago, has proved that an alternating sequence of "**S**" and "**Z**" pieces will force a standard (10-column, 20-row) Tetris game to end within a predictable number of moves.

Quote from the article: "You can't win a game in which only alternating 'S' and 'Z' pieces appear."

Associated printed article: Mathematical Gazette, July 1997, "How to Lose at Tetris"

Heidi Burgiel offers a Java applet that runs in an Internet browser and features an altered Tetris clone that spawns alternating "**S**" and "**Z**" pieces.

[ The "Standard Tetris" software associated with the document you are reading also has a mode which spawns alternating "**S**" and "**Z**" pieces. ]

Heidi Burgiel claimed that a game involving alternating "**S**" and "**Z**" pieces (for a standard Tetris board of 10 columns and 20 rows) must end before 70,000 pieces have fallen.

The Standard Tetris software included with this document enables a person to play games with alternating "**S**" and "**Z**" pieces, and change the board width.

It is easy to see that boards whose widths are integer multiples of four columns (examples: 4 columns, 8 columns, 12 columns, etc) can be played forever when pieces alternate between "**S**" and "**Z**", with the pile rising no higher than 4 rows. I mention this to make it clear that the limited survivability described in the research document mentioned above is specifically for the case of a standard Tetris board (with 10 columns and 20 rows).

There are whole categories of pathological sequences that cannot be survived.

It would be interesting to compute the total probability of encountering a game-terminating sequence, because this would put an upper bound on the performance of any strategy, even with complete knowledge of all future pieces at any given point in a game.

Given that the board has **10 * 20 = 200** cells, and given that each cell can only be in one of two states (empty or occupied), the total number of board configurations must be less than or equal to: **(2 ^ 200)**.

Given that each piece adds 4 cells to a board, and each row completion eliminates 10 cells from the board, the number of occupied cells on the board will always be even. For example, **(3*4 - 1*10) = 2**, **(1*4 - 0*10) = 4**, **(4*4 - 1*10) = 6**, **(2*4 - 0*10) = 8**, **(5*4 - 1*10) = 10**, etc. Thus, only half of the **(2 ^ 200)** board configurations can be reached by playing the game.

Thus, the total number of board configurations is approximately: **(2 ^ 199) = 8.03469... * 10^59**.
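The parity argument can be checked mechanically; the following C++ sketch (the names and the randomized row-clear model are my own illustration) verifies that a count starting at zero, gaining 4 cells per piece and losing 10 per cleared row, stays even:

```cpp
#include <cassert>
#include <random>

// Each landed piece adds 4 occupied cells; each completed row removes 10.
// Starting from an empty board (0 occupied cells), the count stays even.
// The number of rows cleared per move (0..4) is randomized here purely
// to exercise the arithmetic. (This toy count can go negative, unlike a
// real board; parity is unaffected.)
bool occupiedCountStaysEven( int moves, unsigned int seed )
{
    std::mt19937 rng( seed );
    std::uniform_int_distribution<int> rowsCleared( 0, 4 );
    int cells = 0;
    for ( int i = 0; i < moves; i++ )
    {
        cells += 4;                       // land one piece
        cells -= 10 * rowsCleared( rng ); // eliminate completed rows
        if ( (cells % 2) != 0 ) return ( false );
    }
    return ( true );
}
```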

However, we should exclude from our total any configuration that includes filled rows, because filled rows are eliminated before the end of each completed move. Any configuration with one or more filled rows will collapse to another configuration that does not contain any filled rows.

Also, we should exclude any configuration that includes a non-empty row above one or more empty rows, because a non-empty row above an empty row will always fall, and all falling stops before the end of each move.

Each row can be in **2^10 = 1024** states, one of which is "empty", one of which is "full", and **(1024 - 2) = 1022** of which are partially-occupied. We exclude the "full" case from consideration.

If the bottom row is empty, then all rows above the bottom row must also be empty.

If the bottom row is partially-occupied, then the second row can be empty or partially-occupied.

Continuing this analysis, we can compute a number of board configurations that takes into account the exclusion of full rows and the restrictions on empty rows: **1 + (1022 * (1 + 1022 * (1 + 1022 * (1 + 1022 * (... * (1023))))))**, which is approximately **((1022 ^ 19) * (1023))**.

Thus, we find a more precise estimate of the total number of stable board configurations: **(1/2) * ((1022 ^ 19) * (1023)) = 0.9625... * (2 ^ 199)**; i.e., roughly 3.74% less than the **(2 ^ 199)** estimate.
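Since **(1022 ^ 19) * 1023** far exceeds any native integer type, the ratio can be evaluated in log space; a small C++ sketch (assuming double precision suffices):

```cpp
#include <cassert>
#include <cmath>

// Ratio of the refined estimate, (1/2) * ((1022 ^ 19) * 1023), to the
// parity-only estimate, (2 ^ 199), computed with logarithms because the
// raw values overflow every native integer type.
double stableConfigurationRatio( void )
{
    double log2Count = (19.0 * std::log2( 1022.0 ))
                     + std::log2( 1023.0 )
                     - 1.0;                  // the factor of (1/2)
    return std::exp2( log2Count - 199.0 );   // divide by (2 ^ 199)
}
```

This evaluates to roughly 0.9626, in agreement with the 0.9625... figure above.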

However, the actual number of stable, accessible board states is likely to be notably lower due to the fact that the top-most rows can only be filled in a few ways. As the top-most rows fill, a newly generated piece cannot be moved or rotated very much. This limits the number of ways the top-most rows can be filled.

Because we can get any of seven possible pieces for any given board state, the total number of game states is approximately **7 * (2 ^ 199) = 5.624... * 10^60**.

Because we can, in principle, do a deep search of all possible futures for all possible moves for a given game state, we can have a single "best" move associated with each game state.

We assume that we do not have access to any information other than the current board and current piece, so "best" means "the move that offers the greatest chance of satisfying our long-term goal of survival".

Because a move is just a translation (up to 10 options) and a rotation (up to 4 options), we can easily encode the best move in a single byte.
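Since 10 translations times 4 rotations gives only 40 distinct moves, one byte is plenty; a possible packing (this particular scheme is my own illustration, not from the article's software):

```cpp
#include <cassert>

// One byte holds a move: 10 translations x 4 rotations = 40 <= 256.
// (This particular packing scheme is my own illustration.)
unsigned char encodeMove( int translation, int rotation ) // 0..9, 0..3
{
    return (unsigned char)( (rotation * 10) + translation );
}

int decodeTranslation( unsigned char move ) { return ( move % 10 ); }
int decodeRotation   ( unsigned char move ) { return ( move / 10 ); }
```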

So, in principle, we could form a table with **10^61** entries (bytes) that told us the best move given any board state and current piece.

Of course this is impractical, just as enumerating all "Go" boards or "Chess" boards is impractical. But the point is that there IS one true solution, and there is a best move for any given configuration. There might be equally good moves for a given configuration, but we can arbitrarily choose a single move in that case.

Many game-playing algorithms have tables that exhaustively enumerate all game state possibilities within limited contexts, such as "opening (initial) moves" or "end-game (final) moves" in Chess. Perhaps exhaustive enumeration of Tetris pile surfaces (approximately **(20 ^ 10)** states) is feasible. It's an interesting idea.

Exhaustive enumeration of all states of the bottom two rows, multiplied by the seven possible pieces, and storing the best move in a single byte, would be quite easy; requiring only 7 MB of memory. Perhaps optimizing performance for these seven million cases would provide raw data for both analysis and the development of simple models for the data; such models could be regarded as part of the overall ideal Tetris-playing strategy.

Note that executing best moves still does not protect us against possible pathological game-terminating piece sequences. It's just that we will always perform moves that offer us the maximum potential for future survival given that all future pieces are totally random (and unknown at the time we are to decide how to move a single current known piece).

One constraint facing some strategy algorithms is the need for real-time performance -- meaning that the algorithm must make a decision within a specific amount of time.

When a human plays Tetris, the pieces don't stop falling to give the player a chance to think. That's part of the challenge of Tetris. Thus, an A.I. system that is meant to simulate the role of a human player must also make decisions at a rate dictated by the Tetris game.

Note that in the long-term, the number of dropped pieces is very close to 2.5 times the number of completed rows -- because each row has 10 cells, and each piece is 4 cells, and we must complete a row, on average, once every **(10/4) = 2.5** pieces dropped.

So we can use "completed rows" and "dropped pieces" almost interchangeably with the appropriate proportionality constant. The largest error occurs when the board is completely filled except for a single gap in each row: **(((10*20)-20)/4) = 45** pieces have been dropped, but none of the predicted **(45/2.5) = 18** rows have been completed.

If we only allow the A.I. to have knowledge of the current board and the current piece, and we only consider the result of moving the current piece in particular ways, then this can be named a "one-piece" analysis.

Here is a rough sketch of how a one-piece analysis can decide upon a move in Tetris:

Basically we try all possible moves and pick the move that yields the best result.

The difficult part is rating each result.

We must rate a hypothetical game state according to how well such a state supports our short-term and long-term goals.

Our long-term goal is survival. Survival depends on preventing the pile from overflowing the board. We can reduce the pile height by forming complete rows which are then eliminated from the pile.

To form a complete row, we must fit parts of pieces into every column of that row. Thus, we require all parts of a row to be exposed to the falling pieces if we are to eventually fill in the entire row.

If for some reason we cover up empty parts of a row by pieces on any higher row, then we are now unable to fill in those empty parts of the row. The only way (assuming no sliding) to access those "buried holes" is to eliminate the rows above that have parts acting as obstacles.

The following factors are among those we can use to rate a given board state:

The higher the pile, the worse our situation seems to be, because we are closer to overflowing the board.

The rougher the pile, the more likely it is that it will be difficult to fill in all of the embedded complex contours as they become exposed to the surface.

The more holes we have buried, the worse our situation is, because we must uncover buried holes before we can complete the corresponding rows.

You can imagine other factors that generally rate a hypothetical board on how well its pile can accommodate all of the possible future pieces, and how good the situation looks for all of those possible pieces.

The next issue is how to determine the relative importance of these factors.

One general approach is the following. Assign a set of "weights" (relative importance) to these factors, and then simulate numerous games and record the outcome of these games (final score, etc). Then, assign a new set of weights and simulate a new set of games. Based on whether or not the new set of games had better outcomes than the previous set of games, we know if the new set of weights was better than the previous set of weights.
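The search loop described above can be sketched as follows; note that the objective function here is a stand-in for "simulate many games with these weights and record the outcomes" -- a real search would call the Tetris simulator instead:

```cpp
#include <array>
#include <random>

// Stand-in objective: in a real search this would simulate numerous
// games with the given weights and return the average outcome.
// (This hypothetical function has a single best weight combination.)
double averageOutcome( const std::array<double,3> & w )
{
    return ( -( ((w[0] - 1.0) * (w[0] - 1.0))
              + ((w[1] - 2.0) * (w[1] - 2.0))
              + ((w[2] - 0.5) * (w[2] - 0.5)) ) );
}

// Random search: try many weight combinations, keep the best outcome.
std::array<double,3> randomSearch( int trials, unsigned int seed )
{
    std::mt19937 rng( seed );
    std::uniform_real_distribution<double> weight( -5.0, 5.0 );
    std::array<double,3> best = { 0.0, 0.0, 0.0 };
    double bestScore = averageOutcome( best );
    for ( int i = 0; i < trials; i++ )
    {
        std::array<double,3> w = { weight(rng), weight(rng), weight(rng) };
        double score = averageOutcome( w );
        if ( score > bestScore ) { bestScore = score; best = w; }
    }
    return ( best );
}
```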

In my own experiments I tried systematic searching and random searching for good weight combinations, but I didn't notice any large-scale trends that I could pursue. However, I saw many surprisingly smooth bumps. I thought it was interesting that the average performance could form a smooth curve when a parameter was slowly varied with the other parameters held at some value combination.

The best real-time, one-piece Tetris algorithm in the world, created by Pierre Dellacherie (France) in 2003, owes much of its success to its set of measurements (or metrics). Finding weights is necessary when optimizing a strategy, but it is also critical to start with the kinds of measurements that reveal the relevant characteristics of the state.

Pierre Dellacherie's invention of new ways to characterize each board makes his algorithm really excellent; the board characterizations capture the important strategic dimensions of the board state.

One could develop a very different set of characterization dimensions that worked equally well; I am confident that it is possible to span the relevant board state space in many different ways that can be used to specify a strategy function. The key is to find characteristics that project the state space down to a small number of dimensions that can be used to develop a simple rating function (example: the linear weighted combinations of characteristics used by Pierre's algorithm).

The one-piece algorithm used by the "bot" in the "xtris" software (1996), written by Roger Espel Llima, uses weights determined by a large-scale exploration of possible weight combinations using "genetic algorithms". Simulated annealing is another possible method of exploring the multidimensional space of weight combinations.

It seems that, based on various observations, the multidimensional function of average Tetris performance as a function of the weights, example: **F(w1,w2,w3,...)**, is "rough" (lots of local minima and maxima), which means that a simple multidimensional "hill climbing" might not work.

If a strategy algorithm is given the current piece, next piece, and board, then it can make decisions that take advantage of the combination of pieces.

Knowledge of the next piece can improve the success of a Tetris playing algorithm by several orders of magnitude. It's easy to comprehend how knowing the next piece makes a big difference in strategy.

One can do "crazy" moves, such as covering huge holes, etc, because one already knows that the next piece can be used to "fix" the situation. If you do not have knowledge of the next piece, you are constantly trying to play the odds, trying to keep your options open in case the next piece is not ideal.

The following sketch shows how all of the possible moves of the current piece are considered, and for each such move we consider all possible moves involving the next piece.

The Standard Tetris software uses this strategy when the "Next Piece" is enabled by the user and is visible on the screen, and when a two-piece A.I. is enabled (such as the one written by me, Colin Fahey). If the "Next Piece" is not visible on the screen, my two-piece A.I. falls back to a one-piece A.I.

My one-piece A.I. is terrible when compared to the other AIs in the Standard Tetris software; so this shows how beneficial more information (example: the next piece) can be to an A.I. system; it is enough to improve the performance of my own mediocre A.I. by several orders of magnitude -- easily surpassing the performance of the best one-piece A.I. in the world.

(However, converting the best one-piece A.I. in the world to consider two pieces would easily improve it by several orders of magnitude, too! Knowing the next piece is huge!)

My first test game with my two-piece A.I. lasted roughly 182 hours (7.6 days) on an 800 MHz PC, completing 7216290 rows. I have not tested the algorithm on a faster computer.

When you save the state of a Tetris game (Shift-W) to a text file, you can then copy and paste the list of numbers, from the section "**heightHistogram**", to an Excel spreadsheet.

Each bin in the histogram indicates the number of completed moves that ended with a particular pile height (after complete rows are eliminated). As you can imagine, making a move that totally eliminates a pile is rare, so the total number of moves that ends with a pile height of zero is relatively low.

Meanwhile, you might expect that the pile height generally fluctuates around some average, so bins corresponding to these rows will dominate the histogram. Finally, bins for the top-most rows (where we are in danger of overflowing the board) have very low totals.

Over the course of many hours, with the A.I. playing a single game using the strategy involving knowledge of the "Next Piece", I took random samples of the game state, copying the pile height histograms to a spreadsheet as shown below:

You can scale each histogram by the total number of pieces (total number of completed moves) to get the following data:

The remarkable thing is that these scaled histograms look identical despite the different orders of magnitude of the number of pieces (completed moves) involved.

Just by looking at the numbers I made the hypothesis that the tail of the curve is an exponential decay. By trial and error I came up with the rough theory that the tail was described by:

relative_frequency = ((0.122) * ((0.375)^(row-5)))
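The fitted tail can be expressed as a function; a minimal C++ sketch:

```cpp
#include <cassert>
#include <cmath>

// Fitted exponential tail of the pile-height histogram (rows >= 5).
double tailRelativeFrequency( int row )
{
    return ( 0.122 * std::pow( 0.375, (double)(row - 5) ) );
}
```

Each additional row of pile height is thus about 0.375 times as frequent as the row below it.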

The following graph shows the scaled histogram over the entire range of rows.

Note that the curve for 50000 pieces, and the curve for 2000000 pieces, and the curve of the tail theory are almost indistinguishable at this scale.

The following is a closer look at the head of the curve.

The following is a greatly-magnified view of the extreme tail end of the measured and theoretical histogram curves.

As you might expect, it is very rare for the pile to reach these heights in even long experiments -- but even with our limited evidence in this extreme region, the theory still seems acceptable.

In the full chart the theory seems to overlap the tail of the distribution "exactly", whereas in the magnified chart of the tail of the tail, we see apparent error. However, I assert that this is due to insufficient data at these extreme pile heights. If I increased the game board to, say, a height of 25 rows instead of 20 rows, so that games didn't terminate abruptly, I think the theory I presented above would coincide perfectly with the trend.

My feeling is that this histogram is a direct combined result of the Tetris A.I. and the rules of Tetris. So, this same distribution will be observed for any random long-term game of Tetris using *my* particular A.I. strategy for playing with "Next Piece" knowledge.

Furthermore, I think this type of histogram can be used to compare AIs that employ the same information while playing. Thus, you do not have to play complete games (which could last for days or years) to compare the long-term performance of different strategy algorithms.

For example, despite the simplicity of my model, I think it can be used to predict the average game duration! We simply figure out how many pieces make the "row 21" pile-height count significant, such as a count of one.

The relative frequency for row 21, according to my simple theory, is:

((0.122) * ((0.375)^( 21 -5 ))) = 1.8 * 10^(-8)

You must multiply this number by **(5.3 * 10^(7))**, roughly 50 million, to get a value of one.

Thus, I roughly predict that my current "Next Piece" strategy is only good for on the order of roughly 10 million completed rows. One reason I make this conservative estimate is the fact that even 18 rows can be deadly for my A.I. because I don't allow my A.I. to consider pieces until they've fallen by at least one row! (This is so I don't have to worry about not being able to rotate pieces.)

I am impressed by the fact that playing only 50000 pieces (and possibly much fewer pieces) can give you an extremely good estimate of the long-term height histogram, and hence, a good estimate of the order of magnitude of completed rows before the game ends. This approach is extremely valuable for quickly evaluating subtle changes in an A.I. that is already doing extremely well.

If an algorithm, such as the one I presented with this project, simply tries all piece orientations and translations for dropping, and rates each resulting board configuration according to some measure of merit, the algorithm is essentially imitating "look-ahead".

When one employs metrics such as "pile height", "buried holes", "surface roughness" or "well depths", one is really using a simplified form of "look ahead". These general characterizations of the board give some indication of the long-term viability of the board.

The ideal approach, given an infinite amount of computing power, would be to evaluate a board configuration given all possible piece sequences that can follow.

Not only must you consider all 7 pieces as being equally probable at each level of look-ahead, you must also optimize the actual translations (up to 10) and orientations (up to 4) for each piece in every possible sequence! That's up to **(7*10*4) = 280** branches at every level of the board evaluation! So, that's up to **((280)^(LookAheadLevels))** board configurations to consider.

Because we must terminate the analysis after a finite number of levels, we need to have some non-look-ahead method of evaluating the board state -- a way of giving values to the leaf nodes of our search tree. So, for these leaf nodes, we are back to using a formula that embodies general predictors of future viability of the board!

This type of speculation can be tried with the One-Piece and Two-Piece algorithms in place of the simplistic board evaluation metrics. Assume all subsequent pieces are equally probable and sum up the abilities of a board to accommodate pieces of all kinds in the best possible ways. This can be carried out with one, two, or any number of speculative move depths -- with all pieces being equally probable (**p=1/7**).

The terminal nodes of this tree might still require the weighted metrics to evaluate, but the speculative layers more precisely capture the essence of what we want to do: Determine how well a given board can accept all possible pieces, including positive factors such as completing lines and negative factors such as increasing overall pile height, etc.

The following diagram shows my experimental set-up.

Here is a brief list of the equipment used in this demonstration:

[1] Ontrak Control Systems ADR2200 RS-232 8-Relay Board: $149.00 (USD 2003)

[2] Ontrak Control Systems ADRPWR Power Supply : $ 12.95 (USD 2003)

[3] Creative WebCam Pro (640x480 USB video camera) : $ 39.95 (USD 2003)

Clearly you can use similar equipment to accomplish the same results. More details regarding the equipment are described in the corresponding sections of this article.

Here are brief descriptions of the personal computers used in this demonstration:

[1] Personal Computer (PC), 350 MHz, Windows 98 [Runs video game]

[2] Personal Computer (PC), 800 MHz, Windows 2000 [Runs AI program]

This demonstration could easily be reproduced with other operating systems, such as Linux. It is more important to have a CPU speed on the order of 800 MHz or faster for the computer that is to run the A.I. software, because this computer will be doing real-time processing of video.

I used a common USB video camera as a video capture device for my A.I. system. Specifically, I used the Creative "WebCam Pro", a USB video camera with **640 * 480** resolution.

Microsoft has a very old API named "Video for Windows" (VFW) that I successfully used for this project. (You link to "vfw32.lib" in your C++ project, or do a DllImport of "vfw32.dll" in your C# code.) Examples of using the VFW API are widely available.

An alternative is to use Microsoft's "DirectShow" API to do video capture.

Because VFW took only a dozen lines of code to use, and performed acceptably on my 800 MHz machine, I didn't bother exploring the alternative APIs. But DirectShow is a more contemporary API for Windows video capture, and potentially yields a much higher frame rate for the same hardware.

Look at the "CPF.StandardTetris.STVideoCapture" source code files in the Standard Tetris software to see how easy it is to get video capture into your own projects.

To have one computer "press keys" on the keyboard of another computer, I used a "relay board" controlled by text commands sent from a serial communication port (example: "COM1") via an RS-232 cable. I used each relay to connect the two wires of a specific keyboard key to simulate a key press.

This required opening up the keyboard and making connections. There are many easier methods for simulating key pressing at a computer, but I wanted to do something that seemed as close as possible to a person really typing on a keyboard.

One very versatile and well-made relay board is the ADR2200 made by Ontrak Control Systems:

You can look at the "CPF.StandardTetris.STRS232" source code files to see how easy it is to send bytes through a serial port, which can then be used to control devices such as the ADR2200 board shown above.

Go to the beginning of this article to find a link to download the source code (C# and C++ versions) and built software (*.exe).

Software features:

Instruction screens and credits

Monochrome mode

Shadow mode

Hint mode

Junk rows

Rate control

Next piece

Board size

S/Z pieces

Calibration mode

Video capture and recognition

Debugging console

Save game

Load game

Appearance when the software is started:

By default the board appears in color:

The color mode can be changed to monochrome (Shift + K):

Shadow mode indicates where a piece will land. This is very helpful for very large boards, because it is difficult to judge where a piece will land.

Hint mode shows you where the currently-selected AI would move given the current situation. (Shift + H)

Insert "junk" rows at the bottom of the pile, one by one, manually. (Shift + J)

The '+' (plus) and '-' (minus) keys control the speed of the game.

By default, the game runs at a standard speed, according to the rules of Standard Tetris (speed based on level).

Here is a table of the meanings of speed bias:

-3,-4,...: slowness proportional to bias

-2 : slower than level 1

-1 : normal, but limited to level 6 (0.2 sec) speed;

0 : normal; Standard Tetris speed control;

+1 : slightly faster than level 9 (0.05 sec delays);

+2 : bounded by rendering rate (example: 75 Hz);

+3,+4,... : multiple iterations per rendered frame;

I like to run the A.I. at a setting of "+2" (hit '+' key twice if bias starts at zero).

Hit 'N' to toggle the "Next Piece" display. The A.I. will use the "Next Piece" information only if the "Next Piece" appears on the screen.

You can be assured that the AI is not using "Next Piece" information when you cannot see the "Next Piece" on the screen.

By pressing Ctrl + {left, right, down, up}, one can adjust the board size to arbitrary sizes from **4 * 4** up to **200 * 400**.

Study the interesting alternating S/Z pattern.

This pattern cannot be solved for a **10 * 20** board (width * height).

However, boards that have widths that are multiples of 4 are trivially shown to permit infinite survival.

The AIs survive indefinitely in these cases, even though they weren't specifically tuned to handle this pathological situation.

Hit 'C' to enter "Calibration Mode". When in calibration mode, you can hit the number keys: {1,2,3,4,5,6,7} to select a piece {O,I,S,Z,L,J,T} at the top of the playing board.

This is useful as a reference image for the video capture of a second computer running the Standard Tetris software.

If you hit the 0 (zero) key, it will make the board blank.

You can pretend to spawn pieces by selecting a piece (1..7), and then selecting a blank (0), while a second computer doing video capture watches for pieces.

You can run the A.I. on the second computer and see how it deals with your manually entered pathological Tetris scenarios!

Calibration Mode is the only time you can manipulate the video capture image processing template (4 * 2 grid). You can use the mouse to draw the rectangle, but then you can use the cursor keys ("up", "down", "left", "right") to have fine control of the borders -- using the Shift key to select opposite borders of the rectangle (example: "Shift-left" combo is different than "left").

Pressing 'V' toggles the video capture mode. If properly calibrated (see "video calibration" in a previous section), the pieces of a remote video screen will be captured by the video camera and classified -- and the pieces will be spawned in the local game for the A.I. to consider and react to.

The A.I. output must then be transmitted (via the RS-232 interface in the demonstration described in this article) to the remote game input (example: keyboard) for the A.I. to successfully play the remote game.

If at any time this closed loop is disturbed (example: video capture malfunction, or output signal malfunction), then the A.I. will develop a false impression of the status of the remote game, and the A.I. will make inappropriate decisions that quickly lose the game.

(Note: This problem can be overcome with a modest amount of effort: The A.I. system need only examine the entire remote Tetris screen for an ongoing "reality check" of the whole board, and the A.I. system should be prepared for some output commands to fail in some manner.)

The following shows the first working version of the Tetris A.I. system in 2003.

Right: Tetris piece recognition cases.

Pierre Dellacherie (2003; France), developer of the best one-piece Tetris-playing algorithm in the world

The best one-piece, real-time Tetris algorithm to my knowledge was created by Pierre Dellacherie of France.

His amazing algorithm sometimes fills more than 2 million rows!

The average performance is on the order of 650000 rows.

The algorithm takes very little memory, and runs at high speed (about 160 rows per second on my 800 MHz computer).

My current model for the performance of Tetris AIs is that for any given piece there is a constant probability that the game will terminate, **p**.

Thus, the probability that a piece will not terminate a game is **q=(1-p)**.

The probability of the game terminating after **k** moves is simply the product of the probability of surviving **(k-1)** moves, namely **q^(k-1)**, and the probability of terminating on the next move, namely **p**.

This corresponds to a "Geometric Distribution":

Geometric Distribution:

P(k) = p * [(1-p)^(k-1)] = p * [q^(k-1)] = p * exp[ln(q) * (k-1)]

MEAN: [1/p]

VARIANCE: [q/(p*p)]

STANDARD DEVIATION: sqrt( VARIANCE )
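The MEAN of **[1/p]** can be sanity-checked by simulation; in the sketch below (my own illustration), note that std::geometric_distribution counts the failed (survived) moves before the first "success" (game termination), so 1 is added to count the terminating move itself:

```cpp
#include <random>

// Monte-Carlo check that the Geometric Distribution has MEAN = (1/p).
// std::geometric_distribution returns the number of survived moves
// before the terminating move, so add 1 to get the total move count.
double simulatedMeanGameLength( double p, int games, unsigned int seed )
{
    std::mt19937 rng( seed );
    std::geometric_distribution<long> survivedMoves( p );
    double total = 0.0;
    for ( int i = 0; i < games; i++ )
    {
        total += (double)( survivedMoves( rng ) + 1 );
    }
    return ( total / (double)games );
}
```

With, say, **p = 0.01** and a few hundred thousand simulated games, the sample mean lands very close to **1/p = 100**.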

For small **p**, **ln(q)** is roughly **(-p)**, and we have the following:

Exponential Distribution:

P(k) = p * exp[-p * (k-1)]

= p * exp[-p * k ] approximately

MEAN: [1/p]

VARIANCE: [1/(p*p)]

STANDARD DEVIATION: sqrt( VARIANCE )

We expect the fraction of the total number of played games to terminate with a number of completed rows in the interval **[k1, k2]** to be:

Integral of the Exponential Distribution:

I(k1,k2) = exp[-p * k1] - exp[-p * k2]

After completing 36 games on my computer, over a period of two days, I found an average of 674827 completed rows.

According to the general theory above, I would expect the following relative fraction of games to finish in the following completed row ranges:

p = (1/674827) = 0.000001482 = 1.482*10^(-6)

Completed Row Range       Relative Fraction of Total Games
=======================   ================================
        0 ...   400 000   [exp(  0  ) - exp(-0.59)] = 0.447
  400 000 ...   800 000   [exp(-0.59) - exp(-1.19)] = 0.250
  800 000 ... 1 200 000   [exp(-1.19) - exp(-1.78)] = 0.135
1 200 000 ... 1 600 000   [exp(-1.78) - exp(-2.37)] = 0.075
1 600 000 ... 2 000 000   [exp(-2.37) - exp(-2.96)] = 0.042
2 000 000 ... 2 400 000   [exp(-2.96) - exp(-3.55)] = 0.023

Here is a graph that compares the Exponential Distribution theory with the observed distribution of games.

Although there are very few games in this data set, it is apparent that the model is fairly good at matching the observed distribution.

Pierre started work on this one-piece algorithm in January 2003.

Pierre sent me an e-mail about his algorithm on 2003 April 9, reporting the following performance over 20 consecutive games:

Game  1 : 1 213 220 rows
Game  2 :   876 618 rows
Game  3 :    37 327 rows
Game  4 :   260 337 rows
Game  5 :   165 349 rows
Game  6 : 2 288 991 rows
Game  7 : 1 112 094 rows
Game  8 :   138 648 rows
Game  9 :   107 089 rows
Game 10 : 1 284 387 rows
Game 11 :   935 011 rows
Game 12 :    80 766 rows
Game 13 :   253 158 rows
Game 14 : 1 877 331 rows
Game 15 :   145 034 rows
Game 16 :   888 081 rows
Game 17 :   433 694 rows
Game 18 :    15 446 rows
Game 19 :   494 498 rows
Game 20 :    16 273 rows

Average is 631167 completed rows.

"The algorithm is implemented in Turbo Pascal and completes 7000 rows / min. with an Athlon 1600+."

I converted Pierre's algorithm to C++ in June 2003, after several e-mail exchanges with Pierre. We verified that the A.I. in the C++ version made the same decisions as the Pascal version.

I observed similar performance to his original report; average around 650000 completed rows, and some games completing 2 million rows.

Incredible!

I have been working on the algorithm from late January 2003 until now.

I worked on it almost every week... but not all day every day, because I have other activities: unfortunately, I have to earn money like anyone else!

I played Tetris 10 or 15 years ago but I hadn't played again for a long time. I would say that I am an "average" player who knows the rules and some tricks.

However, when I began to work on the algorithm I didn't play so much, because I found it was rather more effective to watch the computer playing and analyze its weaknesses.

Or did you simply watch the results and decide to make modifications?

I'm from the "old school": I simply watched the results and decided to make modifications.

"Automatic learning" is a type of meta-algorithm so I am not assured that doing it this way would get easier as this meta-algorithm would have to be built too and that is not so easy !

What's more, I agree with Roger Penrose when he says (in his book "Shadows of the Mind") that human understanding and intuition cannot be algorithmic (i.e., computable).

I teach algorithmics and computer programming (Turbo Pascal and Scilab) to students preparing for the entrance examinations of graduate engineering schools.

At first, "Computer plays Tetris" was an idea I wanted to develop for my next year students.

I wasn't aware of your web page when I began to work on the algorithm.

Indeed, I was lucky to become aware of your web page only a few weeks ago, because I think I would have been discouraged by your results (as you might guess, the early versions of my algorithm didn't play so well!).

I'm almost 30 [before 2003 April 27]. I have several activities: I'm a cellist, I compose music, and I teach computer programming.

I have a master's degree in musicology (1994) and an "Artificial Intelligence and Algorithmic" diploma (1998).

In France this diploma corresponds to the 5th year of university study (the 4th year is the master's degree and the 6th year is the thesis).

I'm French and I live in Rouen (Normandy).

At this point, I don't have any new ideas to improve it.

I have tried so many useless (and silly) things that I doubt it could be improved.

On the other hand, I think there might exist completely different algorithms which could have better performance.

By the way, a faster test consists of launching the pieces into a half-box (10 rows x 10 columns): in a half-box, my algorithm averages 280 completed rows.

One more thing: the algorithm can easily be altered into a double-ply algorithm ([I will do] the tests soon).

I love film music: becoming a film composer is one of my main dreams (but it's just a dream!).

Arika presents the Japanese Tetris Finals master (2001 January 27). "TETRIS THE ABSOLUTE PLUS --The Grandmaster2-- DEATH-MODE"

In 2003, my Tetris A.I. project was discussed on the Slashdot Internet forum (http://slashdot.org).

Benefactor of mankind--thank you! (Score:4, Funny)
by EnlightenmentFan (617608) on Tuesday January 28, @02:29PM (#5176294)
(http://betsydevine.weblogger.com/ | Last Journal: Tuesday January 21, @01:55PM)

Now I can set up computer #1 to play an infinite, obsessive game of Tetris on computer #2, leaving me free at last to sit down at computer #3 and get some work done. The $200 for webcam and other hardware is cheap for an invention like this, with the revolutionary potential of the wheel, or fire, or even pizza delivery.

Thank you! Thank you! Thank you!

The New 4th law... (Score:5, Funny)
by gokubi (413425) on Tuesday January 28, @02:09PM (#5176135)
(http://www.gokubi.com/kinggeorge)

1. A robot may not injure a human being, or, through inaction, allow a human being to come to harm.
2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
4. A robot must never place the long skinny ones horizontally, unless it leads to a long skinny vertical hole so 4 rows can be cleared at once the next time a long skinny one comes around.

Re:The New 4th law... (Score:5, Funny)
by GuyMannDude (574364) on Tuesday January 28, @02:14PM (#5176179)

I thought Directive 4 was that any attempt to arrest a senior officer of OCP Corporation would result in immediate shutdown!

GMD

In the spring of 1989 I was busy skipping (and failing) second-semester freshman classes at the University of Pennsylvania.

One of my roommates, Bill Matthews, had the Mac Classic, and sometimes played video games.

People who get admitted to Ivy League schools are typically inclined to compete with other people at all times -- so when Bill got the game Tetris for his Mac, we immediately started a long-term battle for the high score.

As the scores climbed, the time investment required to make a small gain dramatically increased.

To make a long story short, Bill supposedly holds the high score between us, but I suspect him of using ResEdit and hacking the score file!

Bill had classes in the Wharton school of business, the alma mater of Michael Milken and Donald Trump, so it's not inconceivable that someone rigged an impossibly high score...

In the summer of 1990 I borrowed a 30 MHz Intel 80386 IBM PC from my roommate, Alex Haidas.

I bought a Mac keyboard at a flea market for $1.

I built parallel port circuitry to allow the PC to control the Mac keyboard.

(I used a CMOS 4040 chip to act as a type of solid-state relay to join keyboard contacts inside the Mac keyboard.)

I wrote a computer program that used a decision tree as its strategy for playing the game Tetris. In just a few weeks I had a PC playing the Tetris game running on a Mac.

However, I was required to use the PC keyboard to tell the A.I. about each falling piece on the screen.

I started working on a circuit using CdS (Cadmium Sulfide) light detectors that would lean against the Mac screen and "see" the falling pieces, but CdS sensors reacted too slowly to changes in brightness, and the contrast between Tetris pieces and the background on the Mac Classic screen was too low to reliably discriminate.

In those days I did not have much money, so it was too risky to buy a $2 Radio Shack phototransistor that might not do what I wanted.

Also, given the contrast problem, I was pessimistic about the whole approach.

When I bought my first personal computer in 1996, I could not get software under Windows 95 on a 100 MHz CPU to do video processing fast enough to make a simple vision system work.

I wrote the image processing code in assembly language, but there was so much overhead before my code actually received video data that it seemed impossible to do anything worthwhile.

In 2003, technology, particularly CPU speed, had reached a level that made finishing the project almost trivial.

I dug up my old personal project and finally finished it, getting some sense of closure.

It was very exciting to see one computer playing another computer through the USB video camera and the relays.

The sound of the relays clicking, and watching the pieces spin and drop at ridiculous, superhuman speeds, made the experience very satisfying in a multisensory way.

In 2003, my work was recognized by Slashdot (http://slashdot.org), and I received a lot of great feedback.

I was invited to appear on the "Screen Savers" television show on the TechTV digital television network.

I went to San Francisco and appeared on the show, and the experience was great.

Later in 2003, I received a message from Henk Rogers, inviting me to Hawaii to meet him and Alexey Pajitnov to talk about establishing some sort of standard for Tetris, for the purposes of having Tetris tournaments.

One particular interest was enabling players to use mobile phones to "compete with each other", indirectly, via A.I. that imitates players (by prior analysis of each player's actions), thus avoiding the effects of mobile phone Internet access latency, and enabling players to "compete against"* the very best human players in the world (*...or, rather, A.I. imitating the very best human players in the world).

I was thrilled by the prospect of meeting the creator of Tetris. Unfortunately, flying makes me anxious, and I declined the invitation.

In 2006, I converted my "Standard Tetris" software from C++ to C# to make the software more accessible and useful to contemporary programmers.

Notes about possible future modifications to this article:

2012 July : A research paper:

http://hal.archives-ouvertes.fr/docs/00/41/89/54/PDF/article.pdf